Sandahl, Kristian
Publications (10 of 29)
Nilsson, S., Buffoni, L., Sandahl, K., Johansson, H. & Tahir Sheikh, B. (2018). Empirical Study of Requirements Engineering in Cross Domain Development. In: Dorian Marjanović, Mario Štorga, Stanko Škec, Nenad Bojčetić and Neven Pavković (Ed.), DS 92: Proceedings of the DESIGN 2018 15th International Design Conference: . Paper presented at DESIGN 2018 15th International Design Conference, May 21-24, 2018, Dubrovnik, Croatia (pp. 857-868). Glasgow: The Design Society, 92
Empirical Study of Requirements Engineering in Cross Domain Development
2018 (English). In: DS 92: Proceedings of the DESIGN 2018 15th International Design Conference / [ed] Dorian Marjanović, Mario Štorga, Stanko Škec, Nenad Bojčetić and Neven Pavković, Glasgow: The Design Society, 2018, Vol. 92, p. 857-868. Conference paper, Published paper (Other academic)
Abstract [en]

Shortened time-to-market cycles and increasingly complex systems are just some of the challenges faced by industry. The requirements engineering process needs to adapt to these challenges in order to guarantee that the end product fulfils customer expectations as well as the necessary safety norms. The goal of this paper is to investigate how engineers work in practice with requirements engineering processes at different stages of development, with a particular focus on the use of requirements in cross domain development, and to compare this with the existing theory in the domain.

Place, publisher, year, edition, pages
Glasgow: The Design Society, 2018
Series
Design, ISSN 1847-9073; 92
Keywords
Systems engineering (SE); complex systems; requirements management
National Category
Other Mechanical Engineering; Computer Systems; Embedded Systems
Identifiers
urn:nbn:se:liu:diva-150356 (URN); 10.21278/idc.2018.0466 (DOI); 9789537738594 (ISBN)
Conference
DESIGN 2018 15th International Design Conference, May 21-24, 2018, Dubrovnik, Croatia
Available from: 2018-08-18 Created: 2018-08-18 Last updated: 2018-08-23. Bibliographically approved
Knudson, D., Kalafatis, S., Kleiner, C., Zahos, S., Seegebarth, B., Detterfelt, J., . . . Roos, M. (2018). Global software engineering experience through international capstone project exchanges. In: Proceedings - International Conference on Software Engineering: . Paper presented at 13th IEEE/ACM International Conference on Global Software Engineering, ICGSE 2018 (pp. 54-58). New York: ACM Digital Library
Global software engineering experience through international capstone project exchanges
2018 (English). In: Proceedings - International Conference on Software Engineering, New York: ACM Digital Library, 2018, p. 54-58. Conference paper, Published paper (Refereed)
Abstract [en]

Today it is very common for software systems to be built by teams located in more than one country. For example, a project team may be located in the US while the team lead resides in Sweden. How then should students be trained for this kind of work? Senior design or capstone projects offer students real-world hands-on experience but rarely while working internationally. One reason is that most instructors do not have international business contacts that allow them to find project sponsors in other countries. Another reason is the fear of having to invest a huge amount of time managing an international project. In this paper we present the general concepts related to "International Capstone Project Exchanges", the basic model behind the exchanges (student teams are led by an industry sponsor residing in a different country) and several alternate models that have been used in practice. We give examples from projects in the US, Germany, Sweden, Australia, and Colombia. We have extended the model beyond software projects to include engineering projects as well as marketing and journalism. We conclude with a description of an International Capstone Project Exchange website that we have developed to aid any university in establishing its own international project exchange.

Place, publisher, year, edition, pages
New York: ACM Digital Library, 2018
Keywords
Capstone Project, Senior Design Project, Global Software Engineering, International Collaboration, Software Engineering Education, Industry-Sponsored Projects
National Category
Software Engineering
Identifiers
urn:nbn:se:liu:diva-152003 (URN); 10.1145/3196369.3196387 (DOI); 000455705600010 (ISI); 2-s2.0-85051525634 (Scopus ID); 978-1-4503-5717-3 (ISBN)
Conference
13th IEEE/ACM International Conference on Global Software Engineering, ICGSE 2018
Note

Funding agencies: Australian Endeavour Executive Fellowship

Available from: 2018-10-14 Created: 2018-10-14 Last updated: 2019-02-04
Gomes de Oliveira Neto, F., Ahmad, A., Leifler, O., Sandahl, K. & Enoiu, E. (2018). Improving continuous integration with similarity-based test case selection. In: Proceedings of the 13th International Workshop on Automation of Software Test: . Paper presented at AST'18 2018 ACM/IEEE 13th International Workshop on Automation of Software Test (pp. 39-45). New York: ACM Digital Library
Improving continuous integration with similarity-based test case selection
2018 (English). In: Proceedings of the 13th International Workshop on Automation of Software Test, New York: ACM Digital Library, 2018, p. 39-45. Conference paper, Published paper (Refereed)
Abstract [en]

Automated testing is an essential component of Continuous Integration (CI) and Continuous Delivery (CD), for example in the form of automated test sessions scheduled on overnight builds. This allows stakeholders to execute entire test suites and achieve exhaustive test coverage, since running all tests is often infeasible during work hours, i.e., in parallel to development activities. On the other hand, developers also need test feedback from CI servers when pushing changes, even if not all test cases are executed. In this paper we evaluate similarity-based test case selection (SBTCS) on integration-level tests executed in the continuous integration pipelines of two companies. We select test cases that maximise diversity of test coverage and reduce feedback time to developers. Our results confirm existing evidence that SBTCS is a strong candidate for test optimisation, reducing feedback time (up to 92% faster in our case studies) while achieving full test coverage using only information from the test artefacts themselves.
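
To make the underlying idea concrete, here is a minimal sketch of similarity-based selection in general, assuming each test case is represented by the set of artefacts it touches and that dissimilarity is measured with Jaccard distance; the test names, artefact sets, and selection budget are invented for illustration and do not reflect the paper's tooling or industrial data.

```python
# Minimal sketch of similarity-based test case selection (SBTCS): greedily
# pick the test case most dissimilar (by Jaccard distance) to the already
# selected ones until a budget is reached. All names, artefact sets, and the
# budget are hypothetical examples, not data from the study.
from __future__ import annotations


def jaccard_distance(a: set[str], b: set[str]) -> float:
    """1 - |intersection| / |union|; 0.0 means identical artefact sets."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)


def select_diverse_tests(tests: dict[str, set[str]], budget: int) -> list[str]:
    """Greedy max-diversity selection of up to `budget` test cases."""
    selected: list[str] = []
    remaining = dict(tests)
    while remaining and len(selected) < budget:
        # Pick the candidate whose minimum distance to the current selection
        # is largest; the first pick is arbitrary but deterministic.
        def min_dist(name: str) -> float:
            if not selected:
                return 1.0
            return min(jaccard_distance(remaining[name], tests[s]) for s in selected)

        best = max(sorted(remaining), key=min_dist)
        selected.append(best)
        del remaining[best]
    return selected


if __name__ == "__main__":
    suite = {
        "test_login": {"auth.c", "session.c"},
        "test_logout": {"auth.c", "session.c"},      # near-duplicate of test_login
        "test_payment": {"billing.c", "invoice.c"},
        "test_report": {"report.c", "invoice.c"},
    }
    print(select_diverse_tests(suite, budget=3))
```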

Place, publisher, year, edition, pages
New York: ACM Digital Library, 2018
Series
International Workshop on Automation of Software Test, ISSN 2377-8628
Keywords
Similarity based test case selection, Continuous integration, Automated testing
National Category
Software Engineering
Identifiers
urn:nbn:se:liu:diva-152002 (URN); 10.1145/3194733.3194744 (DOI); 000458922700009 (ISI); 978-1-4503-5743-2 (ISBN)
Conference
AST'18 2018 ACM/IEEE 13th International Workshop on Automation of Software Test
Note

Funding agencies: Chalmers Software Center

Available from: 2018-10-14 Created: 2018-10-14 Last updated: 2019-03-05
Jonsson, L., Borg, M., Broman, D., Sandahl, K., Eldh, S. & Runeson, P. (2016). Automated bug assignment: Ensemble-based machine learning in large scale industrial contexts. Journal of Empirical Software Engineering, 21(4), 1533-1578
Automated bug assignment: Ensemble-based machine learning in large scale industrial contexts
2016 (English). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 21, no 4, p. 1533-1578. Article in journal (Refereed). Published
Abstract [en]

Bug report assignment is an important part of software maintenance. In particular, incorrect assignments of bug reports to development teams can be very expensive in large software development projects. Several studies propose automated bug assignment techniques using machine learning in open source software contexts, but no study exists for large-scale proprietary projects in industry. The goal of this study is to evaluate automated bug assignment techniques that are based on machine learning classification. In particular, we study the state-of-the-art ensemble learner Stacked Generalization (SG) that combines several classifiers. We collect more than 50,000 bug reports from five development projects from two companies in different domains. We implement automated bug assignment and evaluate the performance in a set of controlled experiments. We show that SG scales to large-scale industrial application and that it outperforms the use of individual classifiers for bug assignment, reaching prediction accuracies from 50% to 89% when large training sets are used. In addition, we show how old training data can decrease the prediction accuracy of bug assignment. We advise industry to use SG for bug assignment in proprietary contexts, using at least 2,000 bug reports for training. Finally, we highlight the importance of not solely relying on results from cross-validation when evaluating automated bug assignment.
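
As an illustration of the technique evaluated in the paper, the sketch below assembles a Stacked Generalization classifier over bug-report text with scikit-learn's StackingClassifier. The toy bug reports, team labels, and the particular base classifiers are assumptions made for the example; the study's own feature sets and ensemble configuration are described in the paper.

```python
# Minimal sketch of ensemble-based bug assignment with Stacked Generalization
# (SG) in scikit-learn. The bug reports, team labels, base classifiers, and
# hyperparameters are illustrative assumptions, not the study's setup.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: free-text bug reports and the team that resolved them.
reports = [
    "crash in payment gateway when card number is empty",
    "login page times out after password reset",
    "invoice PDF renders with wrong currency symbol",
    "session token not invalidated on logout",
]
teams = ["billing", "auth", "billing", "auth"]

# SG combines several level-0 classifiers through a level-1 meta-classifier
# trained on their cross-validated predictions.
stack = StackingClassifier(
    estimators=[
        ("nb", MultinomialNB()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=2,  # tiny toy data; the study trains on thousands of reports
)

model = make_pipeline(TfidfVectorizer(), stack)
model.fit(reports, teams)
print(model.predict(["payment fails with empty card number"]))
```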

Place, publisher, year, edition, pages
Springer, 2016
Keywords
Machine learning; Ensemble learning; Classification; Bug reports; Bug assignment; Industrial scale; Large scale
National Category
Software Engineering
Identifiers
urn:nbn:se:liu:diva-130374 (URN); 10.1007/s10664-015-9401-9 (DOI); 000379060700004 (ISI)
Note

Funding agencies: Industrial Excellence Center EASE (Embedded Applications Software Engineering)

Available from: 2016-08-15 Created: 2016-08-05 Last updated: 2018-05-17
Vasilevskaya, M., Broman, D. & Sandahl, K. (2015). Assessing Large Project Courses: Model, Activities, and Lessons Learned. ACM Transactions on Computing Education, 15(4), 20:1-20:30
Assessing Large Project Courses: Model, Activities, and Lessons Learned
2015 (English). In: ACM Transactions on Computing Education, ISSN 1946-6226, E-ISSN 1946-6226, Vol. 15, no 4, p. 20:1-20:30. Article in journal (Refereed). Published
Abstract [en]

In a modern computing curriculum, large project courses are essential to give students hands-on experience of working in a realistic software engineering project. Assessing such projects is, however, extremely challenging. There are various aspects and tradeoffs of assessments that can affect the course quality. Individual assessments can give fair grading of individuals, but may lose focus of the project as a group activity. Extensive teacher involvement is necessary for objective assessment, but may affect the way students are working. Continuous feedback to students can enhance learning, but may be hard to combine with fair assessment. Most previous work focuses on some specific assessment aspect, whereas in this paper we present an assessment model that consists of a collection of assessment activities, each covering different aspects. We have applied, developed, and improved these activities during a seven-year period. To evaluate the usefulness of the model, we perform questionnaire-based surveys over a two-year period. Furthermore, we design and execute an experiment that studies to what extent students can perform fair peer assessment and to what degree the assessments of students and teachers agree. We analyze the results, discuss findings, and summarize lessons learned.

Place, publisher, year, edition, pages
ACM Special Interest Group on Computer Science Education, 2015
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-123544 (URN); 10.1145/2732156 (DOI); 000367991400005 (ISI)
Available from: 2015-12-21 Created: 2015-12-21 Last updated: 2018-01-10. Bibliographically approved
Vasilevskaya, M., Broman, D. & Sandahl, K. (2014). An Assessment Model for Large Project Courses. In: Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE): . Paper presented at 45th ACM Technical Symposium on Computer Science Education (SIGCSE 2014), Atlanta, GA, USA, March 5-8, 2014. Association for Computing Machinery (ACM)
An Assessment Model for Large Project Courses
2014 (English). In: Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE), Association for Computing Machinery (ACM), 2014. Conference paper, Published paper (Refereed)
Abstract [en]

Larger project courses, such as capstone projects, are essential in a modern computing curriculum. Assessing such projects is, however, extremely challenging. There are various aspects and trade-offs of assessments that can affect the quality of a project course. Individual assessments can give fair grading of individuals, but may lose focus of the project as a group activity. Extensive teacher involvement is necessary for objective assessment, but may affect the way students are working. Continuous feedback to students can enhance learning, but may be hard to combine with fair assessment. Most previous work focuses on some specific assessment aspect, whereas in this paper we present an assessment model that consists of a collection of assessment activities, each covering different aspects. We have applied, developed, and improved these activities during a six-year period and evaluated their usefulness by performing a questionnaire-based survey.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2014
Keywords
Project Courses; Assessment; Software Engineering
National Category
Educational Sciences
Identifiers
urn:nbn:se:liu:diva-103001 (URN); 10.1145/2538862.2538947 (DOI); 2-s2.0-84899725619 (Scopus ID); 978-1-4503-2605-6 (ISBN)
Conference
45th ACM Technical Symposium on Computer Science Education (SIGCSE 2014), Atlanta, GA, USA, March 5-8, 2014
Available from: 2014-02-22 Created: 2014-01-09 Last updated: 2015-04-02. Bibliographically approved
Rezaei, H., Ebersjö, F., Sandahl, K. & Staron, M. (2014). Identifying and Managing Complex Modules in Executable Software Design Models - Empirical Assessment of a Large Telecom Software Product. In: Frank Vogelezang & Maya Daneva (Ed.), 2014 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA), Rotterdam, The Netherlands, October 6-8, 2014: . Paper presented at 2014 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA), Rotterdam, The Netherlands, October 6-8, 2014 (pp. 243-251). Los Alamitos: IEEE Computer Society
Identifying and Managing Complex Modules in Executable Software Design Models - Empirical Assessment of a Large Telecom Software Product
2014 (English). In: 2014 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA), Rotterdam, The Netherlands, October 6-8, 2014 / [ed] Frank Vogelezang & Maya Daneva, Los Alamitos: IEEE Computer Society, 2014, p. 243-251. Conference paper, Published paper (Refereed)
Abstract [en]

Using design models instead of executable code has shown itself to be an efficient way of increasing the abstraction level of software development. However, applying established code-based software engineering methods to design models can be a challenge - due to the different abstraction levels, the same metrics as for code are not applicable to the design models. One of the practical challenges in using metrics at the model level is applying complexity-prediction formulas developed using code-based metrics to design models. The existing formulas do not apply, as they do not take into consideration the behavioural part of the models, e.g. state charts. In this paper we address this challenge by conducting a case study at one of the large telecom products at Ericsson with the goal of identifying which metrics can predict complex, hard-to-understand and hard-to-maintain software modules based on their design models. We use both statistical methods like regression to build prediction formulas and qualitative interviews to codify expert designers' perception of which software modules are complex. The results of this case study show that measures such as the number of non-self-transitions, transitions per state, or state depth can be combined in order to identify software units that are perceived as complex by expert designers. Our conclusion is that these metrics can be used in other companies to predict complex modules, but the coefficients should be recalculated per product to increase the prediction accuracy.
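
A minimal sketch of the kind of prediction formula described here: regress a perceived-complexity score on the state-machine metrics named in the abstract (non-self-transitions, transitions per state, state depth) and reuse the fitted coefficients for new modules. All module data and scores below are invented for illustration; as the paper concludes, real coefficients are product-specific.

```python
# Minimal sketch of building a complexity-prediction formula from design-model
# metrics: ordinary least-squares regression of expert-perceived complexity on
# non-self-transitions, transitions per state, and maximum state depth.
# The module metrics and scores are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: non-self-transitions, transitions per state, max state depth.
X = np.array([
    [12, 1.5, 2],
    [45, 3.2, 4],
    [30, 2.1, 3],
    [80, 4.0, 5],
    [10, 1.2, 1],
])
# Hypothetical expert-perceived complexity (1-10 scale) for each module.
y = np.array([2, 7, 5, 9, 1])

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)

# Predict the complexity of a new module; as the paper concludes, the
# coefficients should be re-estimated per product before reuse.
print("predicted complexity:", model.predict(np.array([[50, 2.8, 4]])))
```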

Place, publisher, year, edition, pages
Los Alamitos: IEEE Computer Society, 2014
Keywords
complexity, maintainability, modeling, reliability, software metrics
National Category
Software Engineering
Identifiers
urn:nbn:se:liu:diva-118997 (URN); 10.1109/IWSM.Mensura.2014.27 (DOI); 978-1-4799-4174-2 (ISBN)
Conference
2014 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA), Rotterdam, The Netherlands, October 6-8, 2014
Funder
Linköpings universitet
Available from: 2015-06-05 Created: 2015-06-05 Last updated: 2018-01-11
Lagerberg, L., Skude, T., Sandahl, K., Emanuelsson, P. & Ståhl, D. (2013). The impact of agile principles and practices on large-scale software development projects: A multiple-case study of two projects at Ericsson. In: Empirical Software Engineering and Measurement, 2013: . Paper presented at 2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 10-11 October 2013, Baltimore, MD, USA (pp. 348-356). Los Alamitos: IEEE
The impact of agile principles and practices on large-scale software development projects: A multiple-case study of two projects at Ericsson
2013 (English). In: Empirical Software Engineering and Measurement, 2013, Los Alamitos: IEEE, 2013, p. 348-356. Conference paper, Published paper (Refereed)
Abstract [en]

BACKGROUND: Agile software development methods have a number of reported benefits regarding productivity, project visibility, software quality and other areas. There are also negative effects reported. However, the base of empirical evidence for the claimed effects needs more empirical studies. AIM: The purpose of the research was to contribute empirical evidence on the impact of using agile principles and practices in large-scale, industrial software development. Research was focused on impacts within seven areas: Internal software documentation, Knowledge sharing, Project visibility, Pressure and stress, Coordination effectiveness, and Productivity. METHOD: Research was carried out as a multiple-case study of two contemporary, large-scale software development projects with different levels of agile adoption at Ericsson. Empirical data was collected through a survey of project members. RESULTS AND CONCLUSIONS: Intentional implementation of agile principles and practices was found to: correlate with a more balanced use of internal software documentation, contribute to knowledge sharing, correlate with increased project visibility and coordination effectiveness, reduce the need for other types of coordination mechanisms, and possibly increase productivity. No correlation with an increase in pressure and stress was found.

Place, publisher, year, edition, pages
Los Alamitos: IEEE, 2013
Series
IEEE International Symposium on Empirical Software Engineering and Measurement. Proceedings, ISSN 1938-6451
Keywords
Agile software development, large-scale software development, multiple-case study, survey, empirical software engineering
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-107735 (URN); 10.1109/ESEM.2013.53 (DOI); 978-0-7695-5056-5 (ISBN)
Conference
2013 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 10-11 October 2013, Baltimore, MD, USA
Available from: 2014-06-19 Created: 2014-06-19 Last updated: 2018-01-11. Bibliographically approved
Broman, D., Sandahl, K. & Abu Baker, M. (2012). The Company Approach to Software Engineering Project Courses. IEEE Transactions on Education, 55(4), 445-452
The Company Approach to Software Engineering Project Courses
2012 (English). In: IEEE Transactions on Education, ISSN 0018-9359, E-ISSN 1557-9638, Vol. 55, no 4, p. 445-452. Article in journal (Refereed). Published
Abstract [en]

Teaching larger software engineering project courses at the end of a computing curriculum is a way for students to learn some aspects of real-world jobs in industry. Such courses, often referred to as capstone courses, are effective for learning how to apply skills the students have already acquired in, for example, design, test, and configuration management. However, these courses are typically performed in small teams, giving only a limited realistic perspective of the problems faced when working in real companies. This paper describes an alternative approach to classic capstone projects, with the aim of being more realistic from an organizational, process, and communication perspective. This methodology, called the company approach, is described in terms of intended learning outcomes, teaching/learning activities, and assessment tasks. The approach is implemented and evaluated in a larger Master's-level course.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2012
Keywords
Capstone projects, company approach, constructive alignment, software engineering (SE)
National Category
Learning
Identifiers
urn:nbn:se:liu:diva-69483 (URN); 10.1109/TE.2012.2187208 (DOI); 000314465800001 (ISI)
Note

Funding agencies: Department of Computer and Information Science, Linköping University, Sweden

Available from: 2011-06-28 Created: 2011-06-28 Last updated: 2017-12-11
Jonsson, L., Broman, D., Sandahl, K. & Eldh, S. (2012). Towards Automated Anomaly Report Assignment in Large Complex Systems using Stacked Generalization. In: Software Testing, Verification and Validation (ICST), 2012: . Paper presented at Fifth IEEE International Conference on Software Testing, Verification and Validation (ICST 2012), 17-21 April 2012, Montreal, QC, Canada (pp. 437-446). IEEE
Towards Automated Anomaly Report Assignment in Large Complex Systems using Stacked Generalization
2012 (English). In: Software Testing, Verification and Validation (ICST), 2012, IEEE, 2012, p. 437-446. Conference paper, Published paper (Refereed)
Abstract [en]

Maintenance costs can be substantial for organizations with very large and complex software systems. This paper describes research on reducing anomaly report turnaround time which, if successful, would contribute to reducing maintenance costs while maintaining a good customer perception. Specifically, we are addressing the problem of the manual, laborious, and inaccurate process of assigning anomaly reports to the correct design teams. In large organizations with complex systems this is particularly problematic because the receiver of the anomaly report from the customer may not have detailed knowledge of the whole system. As a consequence, anomaly reports may be wrongly routed around the organization, causing delays and unnecessary work. We have developed and validated a machine learning approach, based on stacked generalization, to automatically route anomaly reports to the correct design teams in the organization. A research prototype has been implemented and evaluated on roughly one year of real anomaly reports on a large and complex system at Ericsson AB. The prediction accuracy of the automation is approaching that of humans, indicating that the anomaly report handling time could be significantly reduced by using our approach.
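
To show the stacked generalization mechanism itself, the sketch below builds the two levels by hand: out-of-fold predictions from the base classifiers become the training features of a meta-level classifier. The synthetic data and the particular base learners are assumptions for illustration, not the setup used on the Ericsson anomaly reports.

```python
# Minimal from-scratch sketch of the stacked generalization mechanism used for
# anomaly report routing: out-of-fold predictions from the base classifiers
# form the feature matrix of a meta-level classifier. Data is synthetic; the
# features, labels, and base learners are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # e.g. numeric features of a report
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # e.g. which design team owns it

base_learners = [GaussianNB(), DecisionTreeClassifier(max_depth=3, random_state=0)]

# Level 0: out-of-fold class probabilities, so the meta-learner never sees
# predictions made on a sample the base learner was trained on.
meta_features = np.column_stack([
    cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    for clf in base_learners
])

# Level 1: the meta-classifier learns how to weigh the base learners.
meta = LogisticRegression().fit(meta_features, y)

# At prediction time the base learners are refit on all training data.
fitted = [clf.fit(X, y) for clf in base_learners]
new_report = rng.normal(size=(1, 5))
new_meta = np.column_stack([clf.predict_proba(new_report)[:, 1] for clf in fitted])
print("predicted team:", meta.predict(new_meta))
```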

Place, publisher, year, edition, pages
IEEE, 2012
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-93231 (URN); 10.1109/ICST.2012.124 (DOI); 978-1-4577-1906-6 (ISBN)
Conference
Fifth IEEE International Conference on Software Testing, Verification and Validation (ICST 2012), 17-21 April 2012, Montreal, QC, Canada
Note

Funded by Ericsson AB

Available from: 2013-05-27 Created: 2013-05-27 Last updated: 2018-05-17. Bibliographically approved