Continuous integration and delivery consolidate several activities, ranging from frequent code changes to compiling, building, testing, and deployment to customers. During these activities, software professionals seek additional information to perform the task at hand. Developers who spend considerable time and effort identifying such information are distracted from productive work. By identifying the types of information that software professionals seek, we can better understand the processes, practices, and tools that are required to develop a quality product on time. A better understanding of the information needs of software practitioners has several benefits, such as staying competitive, increasing awareness of the issues that can hinder a timely release, and building a visualization tool that can help practitioners address their information needs. We conducted a multiple-case holistic study with 5 different companies (34 unique participants) to identify information needs in continuous integration and delivery. This study attempts to capture the importance, frequency, required effort (e.g., the sequence of actions required to collect information), current approach to handling, and the stakeholders associated with the identified needs. We identified 27 information needs associated with different stakeholders (i.e., developers, testers, project managers, release team, and compliance authority). The identified needs were categorized as testing, code & commit, confidence, bug, and artifacts. We discussed whether the information needs were aligned with the tools used to address them.
Soft skills (such as communication, cooperation, and teamwork) are just as crucial as technical abilities (also known as hard skills) in attaining professional success. It is therefore important to pay close attention to soft skills when developing engineering curricula. Many elements can have a direct or indirect impact on students' soft skills, including course topic, course module (i.e., laboratories, seminars, etc.), the medium of instruction, and learning activities. Many academics have investigated the development of soft skills in a variety of disciplines, including engineering, science, and business. The purpose of this study is to assess the perceived impact of coaching on the development of soft skills in MS and BS engineering students. During four planned sessions over a six-month period, MS students acted as coaches, while BS students received coaching from them. After each coaching session, all students were asked to complete a survey to evaluate their perception of how their soft skills had developed. The results of the perceived effects of introducing coaching activities are presented in this article. This article is a first step in our series of investigations into students' perceptions of the development of soft skills. According to the survey, the MS engineering students who acted as coaches perceived that most of their soft skills improved. In the perception of the BS students, however, their soft skills did not improve to the same extent, prompting us to conduct additional research in the future to discover what hampered the growth of the BS students' soft skills as well as how the MS students' soft skills were enhanced.
Software products are increasingly used in critical infrastructures, and verifying the security of these products has become a necessary part of every software development project. Effective and practical methods and processes are needed by software vendors and infrastructure operators to meet the existing extensive demand for security. This article describes a lightweight security risk assessment method that flags security issues as early as possible in the software project, namely during requirements analysis. The method requires minimal training effort, adds low overhead, and makes it possible to show immediate results to affected stakeholders. We present a longitudinal case study of how a large enterprise developing complex telecom products adopted this method all the way from pilot studies to full-scale regular use. Lessons learned from the case study provide knowledge about the impact that upskilling and training of requirements engineers have on reducing the risk of malfunctions or security vulnerabilities in situations where it is not possible to have security experts go through all requirements. The case study highlights the challenges of process changes in large organizations as well as the pros and cons of having a centralized, distributed, or semi-distributed workforce for security assurance in requirements engineering.
Software processes, such as RUP and agile methods, focus their requirements engineering part on use cases and thus functional requirements. Complex products, such as radio network control software, need special handling of non-functional requirements as well. We describe how we used the Eclipse Process Framework to augment the open and minimal OpenUP/Basic process with improvements found in the management of capacity requirements in a case study at Ericsson. The result is compared with another project improving RUP to handle performance requirements. The major differences between the improvements are that (1) they suggest a special, dedicated performance manager role whereas we suggest that existing roles are augmented, and (2) they suggest a bottom-up approach to performance verification while we focus on system performance first, i.e., top-down. Further, we suggest augmenting UML 2 models with capacity attributes to improve the information flow from requirements to implementation.
There is evidence to suggest that the software industry has not yet matured as regards management of non-functional requirements (NFRs). Consequently, the cost of achieving required quality is unnecessarily high. To avoid this, the telecommunication systems provider Ericsson defined a research task to improve the management of requirements for capacity, which is one of the most critical NFRs. Linköping University joined the effort and conducted an interview series to investigate good practice within different parts of the company. Inspired by the interviews and an ongoing process improvement project, a model for improvement was created and activities were synthesized. This paper contributes the results from the interview series and details the sub-processes of specification that should be improved. Such improvements are about understanding the relationship between numerical entities at all system levels, augmenting UML specifications to make NFRs visible, working with time budgets, and testing subsystem-level components at the same level as they are specified.
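To make the time-budget practice concrete, the following minimal sketch (all subsystem names and numbers are hypothetical, not taken from the study) decomposes a system-level latency target into subsystem budgets and checks that the allocation stays within the target:

```python
# Illustrative time-budget check (all numbers hypothetical): an
# end-to-end latency target is broken down into subsystem budgets,
# and the sum is verified against the system-level requirement.
end_to_end_ms = 100                       # system-level capacity target
budgets = {"ingress": 15, "processing": 60, "egress": 20}

total = sum(budgets.values())
assert total <= end_to_end_ms, f"budgets exceed target: {total} ms"
print(f"allocated {total} ms, slack {end_to_end_ms - total} ms")
```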
Contemporary software processes and modeling languages have a strong focus on Functional Requirements (FRs), whereas information about Non-Functional Requirements (NFRs) is managed through text-based documentation and the individual skills of the personnel. In order to get a better understanding of how capacity requirements are handled, we carried out an interview series with various branches of Ericsson. The analysis of this material revealed 18 Capacity Sub-Processes (CSPs) that need to be attended to in order to create capacity-oriented development. In this paper we describe all these sub-processes and their mapping into an extension of the OpenUP/Basic software process. Such an extension will support a process engineer in realizing the sub-processes, and has at the same time shown that there are no internal inconsistencies among the CSPs. The extension provides a context for continued research in using UML to support negotiation between requirements and existing design.
Non-functional requirements crosscut functional models and are more difficult to enforce in system models. This paper describes a long-term research collaboration on capacity requirements between Linköping University and Ericsson AB. We describe an industrial case study on non-functional requirements as a background. Subsequent efforts dedicated to capacity include a detailed description of the term, a best-practice inventory within Ericsson, and a pragmatic approach to annotating UML models with capacity information. The results are also represented as a method plug-in to the OpenUP software process and an anatomy that facilitates assessing and improving an organization's abilities to develop for capacity. The results combine into a method for improving the treatment of capacity requirements in large-scale software systems. Both product and process views are included, with emphasis on the latter.
Even though non-functional requirements (NFRs) are critical for providing software of good quality, the literature on NFRs is relatively sparse. We describe how NFRs are treated in two development organizations: an Ericsson application center and the IT department of the Swedish Meteorological and Hydrological Institute. We have interviewed professionals about the problems they face and their ideas on how to improve the situation. Both organizations are aware of NFRs and the related problems, but their main focus is on functional requirements, primarily because existing methods focus on these. The most tangible problems experienced are that many NFRs remain undiscovered and that NFRs are stated in non-measurable terms. It became clear that the size and structure of the organization require a proper distribution of employees' interest, authority, and competence regarding NFRs. We argue that a feasible solution might be to strengthen the position of architectural requirements, which are more likely to emphasize NFRs.
When teaching software engineering courses it is highly important to have good textbooks that are well-founded, up-to-date, and easily accessible to students. However, the textbooks currently available on the market are either very broad or highly specialized, making it hard to select appropriate books for specific software engineering courses. Moreover, due to the rapidly changing subject of software engineering, books tend to become obsolete, which makes students hesitant to buy books even if they are part of the listed course literature. In this paper, we briefly explain and discuss an approach of using a web-based system for creating collaborative and peer-reviewed textbooks that can be customized individually for specific courses. We describe and discuss the proposed system from a use case perspective.
Teaching larger software engineering project courses at the end of a computing curriculum is a way for students to learn some aspects of real-world jobs in industry. Such courses, often referred to as capstone courses, are effective for learning how to apply the skills students have acquired in, for example, design, test, and configuration management. However, these courses are typically carried out in small teams, giving only a limited realistic perspective on the problems faced when working in real companies. This paper describes an alternative approach to classic capstone projects, with the aim of being more realistic from an organizational, process, and communication perspective. This methodology, called the company approach, is described in terms of intended learning outcomes, teaching/learning activities, and assessment tasks. The approach is implemented and evaluated in a larger Master's-level course.
The task of finding an optimal selection of requirements for the next release of a software system is difficult, as requirements may depend on each other in complex ways. This paper presents the results from an in-depth study of the interdependencies within 5 distinct sets of requirements, each including 20 high-priority requirements of 5 distinct products from 5 different companies. The results show that: (1) roughly 20% of the requirements are responsible for 75% of the interdependencies; (2) only a few requirements are singular; (3) customer-specific bespoke development tends to include more functionality-related dependencies, whereas market-driven product development has an emphasis on value-related dependencies. Several strategies for reducing the effort needed for identifying and managing interdependencies are outlined. A technique for visualization of interdependencies with the aim of supporting release planning is also discussed. The complexity of requirements interdependency analysis is studied in relation to metrics of requirements coupling. Finally, a number of issues for further research are identified.
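As a rough illustration of finding (1), the concentration of interdependencies can be quantified by ranking requirements by their dependency degree and measuring what share of all dependency endpoints the top roughly 20% account for. The sketch below uses invented requirement IDs and dependency pairs, not the paper's data:

```python
# Hypothetical requirement IDs and dependency pairs; the study's
# actual data is not reproduced here.
from collections import Counter

deps = [("R1", "R2"), ("R1", "R3"), ("R1", "R4"), ("R2", "R5"), ("R6", "R1")]

degree = Counter()
for a, b in deps:          # each dependency touches two requirements
    degree[a] += 1
    degree[b] += 1

ranked = degree.most_common()
top = ranked[: max(1, len(ranked) // 5)]  # top ~20% of requirements
share = sum(d for _, d in top) / sum(degree.values())
print(top, f"-> {share:.0%} of all dependency endpoints")
```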
Automated testing is an essential component of Continuous Integration (CI) and Continuous Delivery (CD), for example through scheduling automated test sessions on overnight builds. This allows stakeholders to execute entire test suites and achieve exhaustive test coverage, since running all tests is often infeasible during work hours, i.e., in parallel to development activities. On the other hand, developers also need test feedback from CI servers when pushing changes, even if not all test cases are executed. In this paper we evaluate similarity-based test case selection (SBTCS) on integration-level tests executed on the continuous integration pipelines of two companies. We select test cases that maximise the diversity of test coverage and reduce feedback time to developers. Our results confirm existing evidence that SBTCS is a strong candidate for test optimisation, reducing feedback time (up to 92% faster in our case studies) while achieving full test coverage using only information from the test artefacts themselves.
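A minimal sketch of the similarity-based idea (not the exact algorithm from the paper, and with invented test names and coverage sets): greedily pick the test that is least similar, by Jaccard similarity over covered items, to the tests already selected.

```python
# Hedged sketch of similarity-based test case selection (SBTCS):
# greedily select the test most dissimilar from those already chosen.
# Test names and covered-item sets below are hypothetical.

def jaccard(a, b):
    """Jaccard similarity of two coverage sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def select_diverse(tests, budget):
    """tests: dict name -> set of covered items; returns up to `budget` names."""
    remaining = dict(tests)
    # Seed with the test covering the most items.
    selected = [max(remaining, key=lambda t: len(remaining[t]))]
    del remaining[selected[0]]
    while remaining and len(selected) < budget:
        # Pick the candidate whose highest similarity to any
        # already-selected test is lowest, i.e. the most diverse one.
        best = min(
            remaining,
            key=lambda t: max(jaccard(tests[t], tests[s]) for s in selected),
        )
        selected.append(best)
        del remaining[best]
    return selected

tests = {
    "t1": {"a", "b", "c"},
    "t2": {"a", "b"},       # largely redundant with t1
    "t3": {"d", "e"},
    "t4": {"c", "d"},
}
print(select_diverse(tests, budget=3))  # -> ['t1', 't3', 't4']
```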
Requirements engineers in industry are faced with the complexity of handling large amounts of requirements as development moves from traditional bespoke projects towards market-driven development. There is a need for usable and useful models that recognize this reality and support the engineers in a continuous effort of choosing which requirements to accept and which to dismiss off hand, using the goals and product strategies put forward by management. This paper presents an evaluation of such a model, built based on needs identified in industry. The evaluation's primary goal is to test the model's usability and usefulness in a lab environment prior to large-scale industry piloting, and it is part of a large technology transfer effort. The evaluation uses 179 subjects from three different Swedish universities, which is a large portion of the university students educated in requirements engineering in Sweden during 2004 and 2005. The results provide a strong indication that the model is indeed both useful and usable and ready for industry trials.
Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automating bug assignment using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learning classification. In particular, we study the state-of-the-art ensemble learner Stacked Generalization (SG), which combines several classifiers. We collect more than 50,000 bug reports from five development projects from two companies in different domains. We implement automated bug assignment and evaluate the performance in a set of controlled experiments. We show that SG scales to large-scale industrial application and that it outperforms the use of individual classifiers for bug assignment, reaching prediction accuracies from 50% to 89% when large training sets are used. In addition, we show how old training data can decrease the prediction accuracy of bug assignment. We advise industry to use SG for bug assignment in proprietary contexts, using at least 2,000 bug reports for training. Finally, we highlight the importance of not relying solely on results from cross-validation when evaluating automated bug assignment.
Maintenance costs can be substantial for organizations with very large and complex software systems. This paper describes research on reducing anomaly report turnaround time which, if successful, would contribute to reducing maintenance costs while maintaining a good customer perception. Specifically, we are addressing the problem of the manual, laborious, and inaccurate process of assigning anomaly reports to the correct design teams. In large organizations with complex systems this is particularly problematic because the receiver of an anomaly report from a customer may not have detailed knowledge of the whole system. As a consequence, anomaly reports may be wrongly routed around the organization, causing delays and unnecessary work. We have developed and validated a machine learning approach, based on stacked generalization, to automatically route anomaly reports to the correct design teams in the organization. A research prototype has been implemented and evaluated on roughly one year of real anomaly reports for a large and complex system at Ericsson AB. The prediction accuracy of the automation is approaching that of humans, indicating that the anomaly report handling time could be significantly reduced by using our approach.
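The two abstracts above both build on stacked generalization: several level-0 classifiers are trained on the report text, and a level-1 learner combines their predictions. Below is a minimal sketch assuming scikit-learn; the bug report texts and team labels are invented placeholders, and real use requires far more data (the study above advises at least 2,000 reports for training).

```python
# Minimal sketch of stacked generalization (SG) for text-based bug
# assignment, assuming scikit-learn. All reports and team labels are
# hypothetical; a realistic model needs thousands of training reports.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

reports = [
    "crash in radio link handover",
    "dropped calls after radio software upgrade",
    "GUI freezes when saving settings",
    "settings dialog renders blank on startup",
    "memory leak in baseband scheduler",
    "scheduler stalls under high traffic load",
]
teams = ["radio", "radio", "ui", "ui", "baseband", "baseband"]

# Level-0 learners make predictions; a level-1 learner (logistic
# regression) learns how to combine them.
stack = StackingClassifier(
    estimators=[
        ("nb", MultinomialNB()),
        ("tree", DecisionTreeClassifier(max_depth=5)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=2,  # tiny toy data set; the default cv=5 needs more samples
)
model = make_pipeline(TfidfVectorizer(), stack)
model.fit(reports, teams)
print(model.predict(["leak in the scheduler"]))  # expected: 'baseband'
```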
Today it is very common for software systems to be built by teams located in more than one country. For example, a project team may be located in the US while the team lead resides in Sweden. How then should students be trained for this kind of work? Senior design or capstone projects offer students real-world, hands-on experience, but rarely while working internationally. One reason is that most instructors do not have international business contacts that allow them to find project sponsors in other countries. Another reason is the fear of having to invest a huge amount of time managing an international project. In this paper we present the general concepts related to "International Capstone Project Exchanges", the basic model behind the exchanges (student teams are led by an industry sponsor residing in a different country), and several alternative models that have been used in practice. We give examples from projects in the US, Germany, Sweden, Australia, and Colombia. We have extended the model beyond software projects to include engineering projects as well as marketing and journalism projects. We conclude with a description of an International Capstone Project Exchange website that we have developed to aid any university in establishing its own international project exchange.
BACKGROUND: Agile software development methods have a number of reported benefits on productivity, project visibility, software quality, and other areas. There are also negative effects reported. However, the base of empirical evidence for the claimed effects needs more empirical studies. AIM: The purpose of the research was to contribute empirical evidence on the impact of using agile principles and practices in large-scale, industrial software development. The research focused on impacts within six areas: internal software documentation, knowledge sharing, project visibility, pressure and stress, coordination effectiveness, and productivity. METHOD: The research was carried out as a multiple-case study on two contemporary, large-scale software development projects with different levels of agile adoption at Ericsson. Empirical data was collected through a survey of project members. RESULTS AND CONCLUSIONS: Intentional implementation of agile principles and practices was found to: correlate with a more balanced use of internal software documentation, contribute to knowledge sharing, correlate with increased project visibility and coordination effectiveness, reduce the need for other types of coordination mechanisms, and possibly increase productivity. No correlation with an increase in pressure and stress was found.
Understanding the dynamic behavior of a system is a key determinant of successful system maintenance. This paper contributes two studies at Ericsson Radio Systems of the perfective maintenance of large and distributed systems. Our approach is a holistic method based on tracing, and the technical solution for acquiring trace data is to use CORBA interceptors. Our method has proven useful in solving a wide variety of problems in design as well as in implementation and test, all at a small price. Examples of improvements are better performance, new test cases, and merging of objects.
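The interceptor idea is to hook into inter-object calls without modifying the objects themselves. The sketch below is a language-agnostic analogue in Python, not actual CORBA interceptor code; the class and method names are invented for illustration.

```python
# Analogue of interceptor-based call tracing (the paper uses CORBA
# interceptors): a wrapper records timing for every intercepted call
# without touching the business logic. All names are hypothetical.
import functools
import time

def trace(method):
    """Log the duration of each call to the wrapped method."""
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = method(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{method.__qualname__} took {elapsed_ms:.2f} ms")
        return result
    return wrapper

class RadioController:
    @trace
    def handover(self, cell_id):
        time.sleep(0.01)  # stand-in for real work
        return f"handover to {cell_id} done"

print(RadioController().handover("cell-42"))
```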
Testing of a large-scale and complex software system requires many types of knowledge, skills, and personality traits. Contrasting the idea of a perfect all-round tester, this paper presents the Testing Hopscotch model with six complementary profiles and the key characteristics considered most relevant for each profile. The model is based on 60 interviews with engineers from three large-scale companies in different industry segments. The Testing Hopscotch model was well received by three focus groups with a total of 22 participants, which further strengthens the validity of the model.
Contrasting the idea of a team of all-round testers, the Testing Hopscotch model includes six complementary profiles, tailored for different types of testing. The model is based on 60 interviews and three focus groups with 22 participants. The validation of the Testing Hopscotch model included ten validation workshops with 58 participants from six companies developing large-scale and complex software systems. The validation showed how the model provided valuable insights and promoted good discussions, helping each company identify what it needs to do to improve testing in its own context. The results from the validation workshops were confirmed at a cross-company workshop with 33 participants from seven companies and six universities. Given the diverse nature of the seven companies involved in the study, it is reasonable to expect that the Testing Hopscotch model is relevant to a large segment of the software industry. Overall, the validation showed that the model is novel, actionable, and useful in practice.
Shortened time-to-market cycles and increasingly complex systems are just some of the challenges faced by industry. The requirements engineering process needs to adapt to these challenges in order to guarantee that the end product fulfils customer expectations as well as the necessary safety norms. The goal of this paper is to investigate how engineers work in practice with requirements engineering processes at different stages of development, with a particular focus on the use of requirements in cross-domain development, and to compare this with the existing theory in the domain.
Capacity in telecommunication systems is highly related to operator revenue. As a vendor of such systems, Ericsson AB is continuously improving its processes for estimating, specifying, tuning, and testing the capacity of delivered systems. In order to systematize process improvements, Ericsson AB and Linköping University joined forces to create an anatomy of Capacity Sub-Processes (CSPs). The anatomy is the result of an interview series conducted to document good practices among organizations active in capacity improvement. In this paper we analyze four different development processes in terms of how far they have reached in their process maturity according to our anatomy and show possible improvement directions. Three of the processes are currently in use at Ericsson, and the fourth is the OpenUP/Basic process, which we have used as a reference process in earlier research. We also include an analysis of the observed good practices. The result mainly confirms the order of CSPs in the anatomy, but we need to use our information about the maturity of the products and the major life cycle in the organization in order to fully explain the role of the anatomy in the planning of improvements.
Using design models instead of executable code has shown itself to be an efficient way of raising the abstraction level of software development. However, applying established code-based software engineering methods to design models can be a challenge: due to the different abstraction levels, the same metrics as for code are not applicable to design models. One of the practical challenges in using metrics at the model level is applying complexity-prediction formulas developed with code-based metrics to design models. The existing formulas do not apply, as they do not take into consideration the behavioral part of the models, e.g., state charts. In this paper we address this challenge by conducting a case study on one of the large telecom products at Ericsson, with the goal of identifying which metrics can predict complex, hard-to-understand, and hard-to-maintain software modules based on their design models. We use both statistical methods, such as regression, to build prediction formulas and qualitative interviews to codify expert designers' perception of which software modules are complex. The results of this case study show that measures such as the number of non-self-transitions, transitions per state, or state depth can be combined to identify software units that are perceived as complex by expert designers. Our conclusion is that these metrics can be used in other companies to predict complex modules, but the coefficients should be recalculated per product to increase prediction accuracy.
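A minimal sketch of the statistical part, assuming scikit-learn: fit a linear regression relating the three state-machine metrics named above to expert-rated complexity. All metric values and ratings below are invented, and, as the abstract notes, real coefficients must be recalibrated per product.

```python
# Hedged sketch: regression from design-level state-machine metrics to
# perceived complexity. The data is invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: non-self-transitions, transitions per state, state depth.
X = np.array([
    [12, 1.5, 2],
    [40, 3.2, 4],
    [ 8, 1.1, 1],
    [55, 4.0, 5],
])
y = np.array([2.0, 6.5, 1.5, 8.0])  # hypothetical expert complexity ratings

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
# Per the study's conclusion, recalculate these coefficients per
# product before using the formula to flag complex modules.
```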
In recent years, expert systems technology has become commercially mature, but widespread delivery of systems in regular use is still slow. This thesis discusses three main difficulties in the development and delivery of expert systems, namely: (1) the knowledge acquisition bottleneck, i.e., the problem of formalizing the expert knowledge into a computer-based representation; (2) the migration problem, where we argue that the different requirements on a development environment and a delivery environment call for systematic methods to transfer knowledge bases between the environments; and (3) the user acceptance barrier, where we believe that user interface issues and concerns for a smooth integration into the end-user's working environment play a crucial role in the successful use of expert systems. In this thesis, each of these areas is surveyed and discussed in the light of experience gained from a number of expert system projects performed by us since 1983. Two of these projects, a spot-welding robot configuration system and an antibody analysis advisor, are presented in greater detail.
Contribution: This article identifies the participation of external stakeholders as a key contributing factor for positive outcomes in project-based software engineering courses. A model of overlapping virtuous circles of lasting positive impact on both stakeholders and students from such courses is proposed. Background: Project-based courses are widespread in software engineering education, and there are numerous designs for such courses presented in the literature. The needs and motivations of external stakeholders, from industry and government sectors, in these courses have received limited attention in related work. Intended Outcomes: A course design that prepares students for graduate-level studies and professional life, through close proximity to external stakeholders in a highly realistic setting, working on "live" projects. Application Design: Building on a long tradition of university-industry collaboration dating back to 1977, as well as findings in related work, students are assigned to live projects proposed by external stakeholders from industry and government, working in close proximity with their respective stakeholders throughout the project. The course places great emphasis on coaching over instruction, treating the many unforeseen challenges of such projects as a valuable part of the learning experience. Findings: Based on interviews with stakeholders and students, it is found that stakeholder and student outcomes are interdependent and build upon one another, and that positive outcomes for both groups are necessary for the sustainability of the course over multiple iterations.
Larger project courses, such as capstone projects, are essential in a modern computing curriculum. Assessing such projects is, however, extremely challenging. There are various aspects and trade-offs of assessments that can affect the quality of a project course. Individual assessments can give fair grading of individuals, but may lose focus on the project as a group activity. Extensive teacher involvement is necessary for objective assessment, but may affect the way students work. Continuous feedback to students can enhance learning, but may be hard to combine with fair assessment. Most previous work focuses on some specific assessment aspect, whereas in this paper we present an assessment model that consists of a collection of assessment activities, each covering different aspects. We have applied, developed, and improved these activities over a six-year period and evaluated their usefulness through a questionnaire-based survey.
In a modern computing curriculum, large project courses are essential to give students hands-on experience of working in a realistic software engineering project. Assessing such projects is, however, extremely challenging. There are various aspects and trade-offs of assessments that can affect the course quality. Individual assessments can give fair grading of individuals, but may lose focus on the project as a group activity. Extensive teacher involvement is necessary for objective assessment, but may affect the way students work. Continuous feedback to students can enhance learning, but may be hard to combine with fair assessment. Most previous work focuses on some specific assessment aspect, whereas in this paper we present an assessment model that consists of a collection of assessment activities, each covering different aspects. We have applied, developed, and improved these activities over a seven-year period. To evaluate the usefulness of the model, we performed questionnaire-based surveys over a two-year period. Furthermore, we designed and executed an experiment that studies to what extent students can perform fair peer assessment and to what degree the assessments of students and teachers agree. We analyze the results, discuss findings, and summarize lessons learned.
Establishing buyer awareness comprises two major business processes: (1) sellers advertise their products and (2) buyers become aware of them or first learn of them. On the Internet, e-mail is the major messaging means for realizing buyer awareness, but critical challenges lie ahead from the perspectives of legitimacy and efficiency. This paper provides an infrastructure for establishing buyer awareness, called SAMI (Specification, Agreement, and Matchmaking Infrastructure), enabling all sellers' advertisements to formally reach potential buyers instead of going unread. Under SAMI, sellers and buyers need to adjust to each other: (1) buyers should launch their Web services to receive sellers' advertisements and install their own agents to process all of them, and (2) sellers should specify their products simply but with rich semantics that the agents can process, instead of using natural language, which is relatively hard to process. Three contributions are included. First, the concept of a personal ontology is defined for sellers and buyers to add rich semantics to their keywords for semantic matchmaking. Second, an ontology as well as an XML Schema is designed for encoding messages, simplifying agreement on messaging. Third, an algorithm is provided to match buyers' keywords (requirements) with sellers' keywords (advertisements) with regard to synonymy, polysemy, and partial matching. In particular, all keywords can be any classes or instances from different ontologies. A prototype system has been implemented and tested.
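To illustrate the flavor of such matchmaking (this is not SAMI's actual algorithm; the synonym table below is an invented stand-in for personal ontologies), here is a sketch that scores buyer keywords against seller keywords with synonym and partial matching:

```python
# Illustrative keyword matchmaking with synonym and partial matching.
# The synonym table is a hypothetical stand-in for personal ontologies.
SYNONYMS = {
    "notebook": {"laptop", "notebook"},
    "cell phone": {"mobile phone", "cell phone"},
}

def expand(keyword):
    """Expand a keyword with its synonym set, if one is known."""
    for terms in SYNONYMS.values():
        if keyword in terms:
            return terms
    return {keyword}

def match(buyer_keywords, seller_keywords):
    """Score: fraction of buyer keywords matched fully or partially."""
    hits = 0.0
    for bk in buyer_keywords:
        variants = expand(bk)
        if any(sk in variants for sk in seller_keywords):
            hits += 1.0   # exact or synonym match
        elif any(bk in sk or sk in bk for sk in seller_keywords):
            hits += 0.5   # partial (substring) match
    return hits / len(buyer_keywords)

print(match(["laptop"], ["notebook"]))      # 1.0 via synonym match
print(match(["phone"], ["cell phone"]))     # 0.5 via partial match
```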
The past decade saw much hype in the area of information technology. The emergence of the Semantic Web makes us ask whether it is yet another hype. This paper focuses on its potential application in Internet commerce and intends to answer the question to some degree. The contributions are as follows: first, we find and examine twelve potential advantages of applying the Semantic Web to Internet commerce; second, we conduct a case study of e-procurement in order to show its advantages for each process of e-procurement; lastly, we identify critical research issues that may turn the potential advantages into tangible benefits.
XML-based frameworks and industry standards for Internet commerce are rapidly launched and changed. The contribution of this paper is to increase the understanding of, and facilitate comparison and evaluation of, the most commonly referenced frameworks. The paper provides a survey of the architecture and message definitions of BizTalk, cXML, the eCo Framework, ICE (Information and Content Exchange), IOTP (Internet Open Trading Protocol), OAG (Open Applications Group), RosettaNet, xCBL, ebXML, and ontology.org. The relationships between these frameworks are both cooperative and competitive, and thus mergers and changes are unavoidable. At present, the eCo Framework and xCBL cooperate tightly and are supported by others. The competing initiative is centered around Microsoft's BizTalk, supported by cXML and OAG. The future will probably see closer cooperation to make the formats compatible. Microsoft both promotes BizTalk and is a member of the eCo Framework.