Search for publications in DiVA (liu.se)
1 - 50 of 321
  • 1.
    Abu Baker, Mohamed
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Agile Prototyping: A combination of different approaches into one main process, 2009. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    Software prototyping is considered one of the most important tools used by software engineers nowadays to understand the customer’s requirements and to develop software products that are efficient, reliable, and economically acceptable. Software engineers can choose any of the available prototyping approaches, based on the software they intend to develop and how fast they would like to go during software development. Generally speaking, all prototyping approaches aim to help the engineers understand the customer’s true needs and to examine different software solutions, quality aspects, verification activities, etc., that might affect the quality of the software under development, as well as to avoid potential development risks. A combination of several prototyping approaches and brainstorming techniques, which fulfilled the aim of the knowledge extraction approach, resulted in a prototyping approach in which the engineers develop one, and only one, throwaway prototype to extract more knowledge than expected, in order to improve the quality of the software under development by spending more time studying it from different points of view. The knowledge extraction approach was then applied to the developed prototyping approach, treating the developed model as a software prototype, in order to gain more knowledge out of it. This activity resulted in several points of view and improvements that were implemented in the developed model, and as a result Agile Prototyping (AP) was developed.

    AP integrated more development approaches into the first developed prototyping model, such as agile methods, documentation, software configuration management, and fractional factorial design. The main aim of developing one, and only one, prototype to help the engineers gain more knowledge while reducing development effort, time, and cost was accomplished; developing software products of satisfying quality is still achieved by developing an evolutionary prototype and building throwaway prototypes on top of it.

    Download full text (pdf)
    FULLTEXT01
    Download (pdf)
    COVER01
  • 2.
    Andersson, Jesper
    et al.
    MSI, Växjö University, Sweden.
    Ericsson, Morgan
    MSI, Växjö University, Sweden.
    Kessler, Christoph
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Löwe, Welf
    MSI, Växjö University, Sweden.
    Profile-Guided Composition, 2008. In: 7th Int. Symposium on Software Composition (SC 2008), Berlin: Springer, 2008, p. 157-. Conference paper (Refereed)
    Abstract [en]

    We present an approach that generates context-aware, optimized libraries of algorithms and data structures. The search space contains all combinations of implementation variants of algorithms and data structures including dynamically switching and converting between them. Based on profiling, the best implementation for a certain context is precomputed at deployment time and selected at runtime. In our experiments, the profile-guided composition outperforms the individual variants in almost all cases.
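The precompute-then-dispatch scheme the abstract describes can be sketched as follows. This is a toy illustration rather than the authors' system: sorting stands in for the composed component, and the variants and profiled sizes are invented:

```python
import bisect
import random
import timeit

# Two hypothetical implementation variants of one "sort" component.
def insertion_sort(xs):
    out = list(xs)
    for i in range(1, len(out)):
        v, j = out[i], i
        while j > 0 and out[j - 1] > v:
            out[j] = out[j - 1]
            j -= 1
        out[j] = v
    return out

def builtin_sort(xs):
    return sorted(xs)

VARIANTS = [insertion_sort, builtin_sort]
SIZES = [4, 64, 1024]              # representative context sizes

def profile():
    """'Deployment time': time each variant per size, keep the fastest."""
    table = []
    for n in SIZES:
        data = [random.random() for _ in range(n)]
        best = min(VARIANTS,
                   key=lambda f: timeit.timeit(lambda: f(data), number=5))
        table.append(best)
    return table

TABLE = profile()                  # precomputed once, before execution

def dispatch(xs):
    """Runtime: look up the profiled best variant for this input size."""
    idx = min(bisect.bisect_left(SIZES, len(xs)), len(SIZES) - 1)
    return TABLE[idx](xs)
```

Whichever variant the profile selects, `dispatch` returns the same result; only the cost differs, which is what makes the profile-guided choice safe.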

  • 3.
    Andersson, Niclas
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Comparative Evaluation and Industrial Application of Code Generator Generators, 1992. Conference paper (Other academic)
    Abstract [en]

    The past ten to fifteen years have seen active research in the area of automatically generating the code generator part of compilers from formal specifications. However, less work has been done on evaluating and applying these systems in an industrial setting. This paper attempts to fill this gap. Three systems for automatic generation of code generators are evaluated in this paper: CGSS, BEG and TWIG. CGSS is an older Graham-Glanville style system based on pattern matching through parsing, whereas BEG and TWIG are more recent systems based on tree pattern matching combined with dynamic programming. An industrial-strength code generator previously implemented for a special-purpose language using the CGSS system is described and compared in some detail to our new implementation based on the BEG system. Several problems of integrating local and global register allocation within automatically generated code generators are described, and some solutions proposed. We finally conclude that current technology of automatically generating code generators is viable in an industrial setting. However, further research needs to be done on the problem of properly integrating register allocation with instruction selection, when both are generated from declarative specifications.
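The tree-pattern-matching-with-dynamic-programming approach of BEG and TWIG can be illustrated with a minimal sketch. The patterns, costs, and instruction names below are invented for illustration and are not taken from either system:

```python
from dataclasses import dataclass

@dataclass
class Node:
    op: str            # 'add', 'const', 'reg'
    kids: tuple = ()

# Each pattern: ((op, kid-ops), cost, instruction); a kid-op of None
# means "any subtree" and is tiled recursively; a named kid-op is
# consumed by the pattern itself (e.g. the immediate operand of ADDI).
PATTERNS = [
    (('add', ('reg', 'const')), 1, 'ADDI'),   # add register, immediate
    (('add', (None, None)),     2, 'ADD'),    # general add
    (('const', ()),             1, 'LI'),     # load immediate
    (('reg', ()),               0, ''),       # value already in a register
]

def select(node):
    """Return (total cost, instruction list) of the cheapest tiling."""
    best = None
    for (op, kid_ops), cost, insn in PATTERNS:
        if op != node.op or len(kid_ops) != len(node.kids):
            continue
        total, insns, ok = cost, [], True
        for kop, kid in zip(kid_ops, node.kids):
            if kop is None:                 # generic subtree: recurse
                c, i = select(kid)
                total += c
                insns += i
            elif kop != kid.op:             # specific operand must match
                ok = False
                break
        if ok and (best is None or total < best[0]):
            best = (total, insns + ([insn] if insn else []))
    return best

# ADDI (cost 1) beats LI followed by general ADD (cost 3)
tree = Node('add', (Node('reg'), Node('const')))
```

The dynamic programming happens in `select`: each subtree's cheapest tiling is computed once and compared across all patterns that cover the root.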

  • 4.
    Andersson, Niclas
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Generating Parallel Code from Object Oriented Mathematical Models, 1995. In: PPOPP 1995, p. 48-57. Conference paper (Refereed)
    Abstract [en]

    For a long time efficient use of parallel computers has been hindered by dependencies introduced in software through low-level implementation practice. In this paper we present a programming environment and language called ObjectMath (Object oriented Mathematical language for scientific computing), which aims at eliminating this problem by allowing the user to represent mathematical equation-based models directly in the system. The system performs analysis of mathematical models to extract parallelism and automatically generates parallel code for numerical solution. In the context of industrial applications in mechanical analysis, we have so far primarily explored generation of parallel code for solving systems of ordinary differential equations (ODEs), in addition to preliminary work on generating code for solving partial differential equations. Two approaches to extracting parallelism have been implemented and evaluated: extracting parallelism at the equation system level and at the single equation level, respectively. We found that for several applications the corresponding systems of equations do not partition well into subsystems. This means that the equation system level approach is of restricted general applicability. Thus, we focused on the equation-level approach, which yielded significant parallelism for the solution of ODE systems. For the bearing simulation applications we present here, the achieved speedup is however critically dependent on low communication latency of the parallel computer.

  • 5.
    Andersson, Niclas
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Object Oriented Mathematical Modelling and Compilation to Parallel Code, 1997. In: Parallel Computing in Optimization / [ed] Athanasios Migdalas, Panos M. Pardalos and Sverre Storøy, Kluwer Academic Publishers, 1997. Chapter in book (Other academic)
    Abstract [en]

    The current state of the art in programming for scientific computing is still rather low-level. The mathematical model behind a computing application usually is written using pen and paper, whereas the corresponding numerical software often is developed manually in Fortran or C. This is especially true in application areas such as mechanical analysis, where complex non-linear problems are the norm, and high performance is required. Ideally, a high-level programming environment would provide computer support for these development steps. This motivated the development of the ObjectMath system. Using ObjectMath, complex mathematical models may be structured in an object oriented way, symbolically simplified, and transformed to efficient numerical code in C++ or Fortran.

    However, many scientific computing problems are quite computationally demanding, which makes it desirable to use parallel computers. Unfortunately, generating parallel code from arbitrary mathematical models is an intractable problem. Therefore, we have focused most of our efforts on a specific problem domain where the main computation is to solve ordinary differential equation systems where most of the computing time is spent in application specific code, rather than in the serial solver kernel. We have investigated automatic parallelisation of the computation of ordinary differential equation systems at three different levels of granularity: the equation system level, the equation level, and the clustered task level. At the clustered task level we employ domain specific knowledge and existing scheduling and clustering algorithms to partition and distribute the computation.

  • 6.
    Aronsson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Automatic Parallelization of Equation-Based Simulation Programs, 2006. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Modern equation-based object-oriented modeling languages which have emerged during the past decades make it easier to build models of large and complex systems. The increasing size and complexity of modeled systems requires high performance execution of the simulation code derived from such models. More efficient compilation and code optimization techniques can help to some extent. However, a number of heavy-duty simulation applications require the use of high performance parallel computers in order to obtain acceptable execution times. Unfortunately, the possible additional performance offered by parallel computer architectures requires the simulation program to be expressed in a way that makes the potential parallelism accessible to the parallel computer. Manual parallelization of computer programs is generally a tedious and error prone process. Therefore, it would be very attractive to achieve automatic parallelization of simulation programs.

    This thesis presents solutions to the research problem of finding practically usable methods for automatic parallelization of simulation codes produced from models in typical equation-based object-oriented languages. The methods have been implemented in a tool to automatically translate models in the Modelica modeling language to parallel codes which can be efficiently executed on parallel computers. The tool has been evaluated on several application models. The research problem includes the problem of how to extract a sufficient amount of parallelism from equations represented in the form of a data dependency graph (task graph), requiring analysis of the code at a level as detailed as individual expressions. Moreover, efficient clustering algorithms for building clusters of tasks from the task graph are also required. One of the major contributions of this thesis work is a new approach for merging fine-grained tasks by using a graph rewrite system. Results from using this method show that it is efficient in merging task graphs, thereby decreasing their size, while still retaining a reasonable amount of parallelism. Moreover, the new task-merging approach is generally applicable to programs which can be represented as static (or almost static) task graphs, not only to code from equation-based models.

    An early prototype called DSBPart was developed to perform parallelization of codes produced by the Dymola tool. The final research prototype is the ModPar tool which is part of the OpenModelica framework. Results from using the DSBPart and ModPar tools show that the amount of parallelism of complex models varies substantially between different application models, and in some cases can produce reasonable speedups. Also, different optimization techniques used on the system of equations from a model affect the amount of parallelism of the model and thus influence how much is gained by parallelization.

    Download full text (pdf)
    FULLTEXT01
    Download (pdf)
    COVER01
  • 7.
    Aronsson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Automatic Parallelization of Simulation Code from Equation Based Simulation Languages, 2002. Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Modern state-of-the-art equation based object oriented modeling languages such as Modelica have enabled easy modeling of large and complex physical systems. When such complex models are to be simulated, simulation tools typically perform a number of optimizations on the underlying set of equations in the modeled system, with the goal of gaining better simulation performance by decreasing the equation system size and complexity. The tools then typically generate efficient code to obtain fast execution of the simulations. However, with increasing complexity of modeled systems the number of equations and variables is increasing. Therefore, to be able to simulate these large complex systems in an efficient way, parallel computing can be exploited.

    This thesis presents the work of building an automatic parallelization tool that produces an efficient parallel version of the simulation code by building a data dependency graph (task graph) from the simulation code and applying efficient scheduling and clustering algorithms on the task graph. Various scheduling and clustering algorithms, adapted for the requirements from this type of simulation code, have been implemented and evaluated. The scheduling and clustering algorithms presented and evaluated can also be used for functional dataflow languages in general, since the algorithms work on a task graph with dataflow edges between nodes.

    Results are given in form of speedup measurements and task graph statistics produced by the tool. The conclusion drawn is that some of the algorithms investigated and adapted in this work give reasonable measured speedup results for some specific Modelica models, e.g. a model of a thermofluid pipe gave a speedup of about 2.5 on 8 processors in a PC-cluster. However, future work lies in finding a good algorithm that works well in general.
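The task-graph construction the thesis describes can be sketched in miniature: extract a data dependency graph from straight-line assignment code, then group tasks into levels such that every task in one level is independent of the others (a simplified stand-in for the scheduling and clustering algorithms evaluated in the thesis; the tasks themselves are invented):

```python
from collections import defaultdict

# Each task: (name, input variables, output variable). This tiny
# straight-line "simulation code" is invented for illustration.
TASKS = [
    ('t1', [], 'a'),
    ('t2', [], 'b'),
    ('t3', ['a', 'b'], 'c'),
    ('t4', ['a'], 'd'),
    ('t5', ['c', 'd'], 'e'),
]

def schedule_levels(tasks):
    """Group tasks into levels; tasks within one level can run in parallel."""
    produced = {out: name for name, _, out in tasks}
    level = {}
    for name, ins, _ in tasks:                 # tasks are in definition order
        deps = [produced[v] for v in ins]      # dataflow edges into this task
        level[name] = 1 + max((level[d] for d in deps), default=-1)
    groups = defaultdict(list)
    for name, lv in level.items():
        groups[lv].append(name)
    return [sorted(groups[lv]) for lv in sorted(groups)]
```

Here t1 and t2 have no inputs and form level 0, t3 and t4 depend only on level-0 outputs, and t5 must wait for both.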

    Download full text (pdf)
    FULLTEXT01
  • 8.
    Aronsson, Peter
    et al.
    MathCore Engineering AB, Linköping, Sweden.
    Broman, David
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Extendable Physical Unit Checking with Understandable Error Reporting, 2009. In: Proceedings of the 7th International Modelica Conference, Como, Italy, 20-22 September 2009, Linköping: Linköping University Electronic Press, Linköpings universitet, 2009, p. 890-897. Conference paper (Refereed)
    Abstract [en]

    Dimensional analysis and physical unit checking are important tools for helping users to detect and correct mistakes in dynamic mathematical models. To make tools useful in a broad range of domains, it is important to also support other units than the SI standard. For instance, such units are common in biochemical or financial modeling. Furthermore, if two or more units turn out to be in conflict after checking, it is vital that the reported unit information is given in an understandable format for the user, e.g., “N.m” should preferably be shown instead of “m2.kg.s-2”, even if they represent the same unit. Presently, there is no standardized solution to handle these problems for Modelica models. The contribution presented in this paper is twofold. Firstly, we propose an extension to the Modelica language that makes it possible for a library designer to define both new base units and derived units within Modelica models and packages. Today this information is implicitly defined in the specification. Secondly, we describe and analyze a solution to the problem of presenting units to users in a more convenient way, based on an algorithm using Mixed Integer Programming (MIP). Both solutions are implemented, tested, and illustrated with several examples.
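The checking half of the problem can be sketched by representing each unit as a vector of exponents over base units, expanding derived units for comparison. The toy unit table below stands in for the paper's Modelica unit definitions, and the MIP-based step that prints “N.m” instead of “m2.kg.s-2” is not sketched:

```python
BASE = ('m', 'kg', 's')
DERIVED = {'N': (1, 1, -2)}        # N = m·kg·s⁻²

def unit(sym):
    """Exponent vector of a base or derived unit symbol."""
    if sym in DERIVED:
        return DERIVED[sym]
    return tuple(1 if b == sym else 0 for b in BASE)

def mul(u, v):
    """Multiplying quantities adds their unit exponents."""
    return tuple(a + b for a, b in zip(u, v))

def check_add(u, v):
    """Addition is only legal between identical units."""
    if u != v:
        raise TypeError(f"unit conflict: {u} vs {v}")
    return u

torque = mul(unit('N'), unit('m'))   # exponents of N·m, i.e. (2, 1, -2)
```

A library-defined unit is just another entry in `DERIVED`; conflicts surface as a `TypeError` carrying both exponent vectors.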

  • 9.
    Aronsson, Peter
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    A Task Merging Technique for Parallelization of Modelica Models, 2005. In: 4th International Modelica Conference, 2005, p. -128. Conference paper (Refereed)
    Abstract [en]

    This paper presents improvements on techniques of merging tasks in task graphs generated in the ModPar automatic parallelization module of the OpenModelica compiler. Automatic parallelization is performed on Modelica models by building data dependency graphs called task graphs from the model equations. To handle large task graphs with fine granularity, i.e. low ratio of execution and communication cost, the tasks are merged. This is done by using a graph rewrite system (GRS), which is a set of graph transformation rules applied on the task graph. In this paper we have solved the confluence problem of the task merging system by giving priorities to the merge rules. A GRS is confluent if the application order of the graph transformations does not matter, i.e. the same result is obtained regardless of application order. We also present a Modelica model suited for automatic parallelization and show results on this using the ModPar module in the OpenModelica compiler.
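A prioritized graph-rewrite merge can be sketched with a single rule applied to exhaustion; a real GRS like the one in the paper has several rules tried in priority order, but this hypothetical chain-merging rule shows the shape of one rewrite step:

```python
def parents_of(children):
    """Map each node to the list of its parent nodes."""
    parents = {}
    for n, ks in children.items():
        for k in ks:
            parents.setdefault(k, []).append(n)
    return parents

def merge_linear_chains(children, cost):
    """Rule: merge a node with its only child whenever that child has
    no other parent; reapply until the rule no longer matches."""
    changed = True
    while changed:
        changed = False
        parents = parents_of(children)
        for n in list(children):
            ks = children.get(n, [])
            if len(ks) == 1 and parents.get(ks[0]) == [n]:
                k = ks[0]
                cost[n] += cost.pop(k)         # merged task's total cost
                children[n] = children.pop(k)  # inherit the child's edges
                changed = True
                break
    return children, cost

# the chain a -> b -> c collapses into one task of cost 3
graph = {'a': ['b'], 'b': ['c'], 'c': []}
costs = {'a': 1, 'b': 1, 'c': 1}
merged, merged_cost = merge_linear_chains(graph, costs)
```

Merging a chain removes communication edges without losing any parallelism, since the chain could never run concurrently anyway; that is the trade-off the abstract's granularity discussion is about.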

    Download full text (pdf)
    fulltext
  • 10.
    Aronsson, Peter
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Automatic Parallelization in OpenModelica, 2004. Conference paper (Refereed)
  • 11.
    Aronsson, Peter
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Parallel Code Generation in MathModelica / An Object Oriented Component Based Simulation Environment, 2001. In: Proceedings of Workshop on Parallel/High Performance Object-Oriented Scientific Computing (POOSC’01), 2001. Conference paper (Refereed)
    Abstract [en]

    Modelica is an a-causal, equation based, object oriented modeling language for modeling and efficient simulation of large and complex multi domain systems. The Modelica language, with its strong software component model, makes it possible to use visual component programming, where large complex physical systems can be modeled and composed in a graphical way. One tool with support for both graphical modeling, textual programming and simulation is MathModelica. To deal with growing complexity of modeled systems in the Modelica language, the need for parallelization becomes increasingly important in order to keep simulation time within reasonable limits. The first step in Modelica compilation results in an Ordinary Differential Equation system or a Differential Algebraic Equation system, depending on the specific Modelica model. The Modelica compiler typically performs optimizations on this system of equations to reduce its size. The optimized code consists of simple arithmetic operations, assignments, and function calls. This paper presents an automatic parallelization tool that builds a task graph from the optimized sequential code produced by a commercial Modelica compiler. Various scheduling algorithms have been implemented, as well as specific enhancements to cluster nodes for better computation/communication tradeoff. Finally, the tool generates simulation code, in a master-slave fashion, using MPI.

    Download full text (pdf)
    fulltext
  • 12.
    Asghar, Adeel
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Pop, Adrian
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Sjölund, Martin
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Efficient Debugging of Large Algorithmic Modelica Applications, 2012. Conference paper (Refereed)
    Abstract [en]

    Modelica models often contain functions with algorithmic code. The fraction of algorithmic code is increasing in Modelica models since Modelica, in addition to equation-based modeling, is also used for embedded system control code and symbolic model transformations in compilers using the MetaModelica language extension. For these reasons, debugging of algorithmic Modelica code is becoming increasingly relevant.

    Our earlier work in debuggers for the algorithmic subset of Modelica used trace-based techniques. These have the advantages of being very portable, but turned out to have too much overhead for very large applications.

    The new debugger is the first Modelica debugger that can operate without trace information. Instead it communicates with a low-level C-language symbolic debugger, the GNU debugger GDB, to directly extract information from a running executable, set and remove breakpoints, etc. This is made possible by the new bootstrapped OpenModelica compiler which keeps track of a detailed mapping from the high-level Modelica code down to the generated C code compiled to machine code.

    The debugger is operational, supports browsing of both standard Modelica data structures and tree/list data structures, and operates efficiently on large applications such as the OpenModelica compiler with more than 100 000 lines of code.
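The core of the mapping idea can be sketched as a compiler-emitted table relating each generated C line to its originating Modelica position, so a breakpoint requested on Modelica source can be translated into GDB breakpoints on the generated C. All file names and line numbers here are invented:

```python
# Hypothetical table emitted by the compiler alongside the generated C.
C_TO_MODELICA = {
    # generated C line -> (Modelica file, Modelica line)
    101: ('Model.mo', 12),
    102: ('Model.mo', 12),   # one Modelica line may expand to several
    103: ('Model.mo', 13),
    150: ('Helper.mo', 4),
}

def modelica_breakpoint(mo_file, mo_line):
    """All generated C lines a GDB breakpoint must be set on for this
    Modelica source position."""
    return sorted(c for c, src in C_TO_MODELICA.items()
                  if src == (mo_file, mo_line))

def locate(c_line):
    """Reverse lookup: which Modelica line a stopped C frame maps to."""
    return C_TO_MODELICA.get(c_line)
```

The forward direction serves breakpoint setting; the reverse direction lets the debugger report stops and variable values in Modelica terms rather than in generated C.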

    Download full text (pdf)
    fulltext
  • 13.
    Asghar, Syed Adeel
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Tariq, Sonia
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Design and Implementation of a User Friendly OpenModelica Graphical Connection Editor, 2010. Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    OpenModelica (www.openmodelica.org) is an open-source Modelica-based modeling and simulation environment intended for industrial as well as academic usage. Its long-term development is supported by a non-profit organization – the Open Source Modelica Consortium OSMC, where Linköping University is a member. The main reason behind this thesis was the need for a user friendly, efficient and modular OpenModelica graphical connection editor. The already existing open source editors were either textual or not so user friendly. As a part of this thesis work a new open source Qt-based cross platform graphical user interface was designed and implemented, called OMEdit, partially based on an existing GUI for hydraulic systems, HOPSAN. The usage of Qt C++ libraries makes this tool more future safe and also allows it to be easily integrated into other parts of the OpenModelica platform. This thesis aims at developing an advanced open source user friendly graphical user interface that provides the users with easy-to-use model creation, connection editing, simulation of models, and plotting of results. The interface is extensible enough to support user-defined extensions/models. Models can be both textual and graphical. From the annotation information in the Modelica models (e.g. Modelica Standard Library components) a connection tree and diagrams can be created. The communication to the OpenModelica Compiler (OMC) subsystem is performed through a CORBA client-server interface. The OMC CORBA server provides an interactive API interface. The connection editor functions as the front-end and OMC as the back-end. OMEdit communicates with OMC through the interactive API interface, requests the model information, and creates models/connection diagrams based on the Modelica annotation standard version 3.2.

    Download full text (pdf)
    FULLTEXT01
  • 14.
    Asghar, Syed Adeel
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Tariq, Sonia
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Torabzadeh-Tari, Mohsen
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Pop, Adrian
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Sjölund, Martin
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Vasaiely, Parham
    EADS Innovation Works, Engineering & Architecture, Hamburg, Germany.
    Schamai, Wladimir
    EADS Innovation Works, Engineering & Architecture, Hamburg, Germany.
    An Open Source Modelica Graphic Editor Integrated with Electronic Notebooks and Interactive Simulation, 2011. In: Proceedings of the 8th International Modelica Conference, March 20th-22nd, Technical University, Dresden, Germany / [ed] Christoph Clauß, Linköping: Linköping University Electronic Press, 2011, Vol. 63, p. 739-747. Conference paper (Refereed)
    Abstract [en]

    This paper describes the first open source Modelica graphic editor which is integrated with interactive electronic notebooks and online interactive simulation. The work is motivated by the need for easy-to-use graphic editing of Modelica models using OpenModelica, as well as needs in teaching where the student should be able to interactively modify and simulate models in an electronic book. Models can be both textual and graphical. The interactive online simulation makes the simulation respond in real-time to model changes, which is useful in a number of contexts including immediate feedback to students.

    Download full text (pdf)
    fulltext
  • 15.
    Assmann, Uwe
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Architectural styles for active documents, 2005. In: Science of Computer Programming, ISSN 0167-6423, E-ISSN 1872-7964, Vol. 56, no 1-2, p. 79-98. Article in journal (Refereed)
    Abstract [en]

    This paper proposes several novel architectural styles for active documents. Active documents are documents that contain not only data, but also servlets, applets, expressions in spreadsheet languages, and other forms of software. To grasp the different forms of architectures, several novel concepts are defined. Invasive document composition is a type-safe form of template expansion and extension, transconsistency is a form of transclusion for architectures, and staged architectures provide a form of staged programming on the architectural level. With these concepts, it is possible to explain the architectures of many document processing applications for Web and office, and we define the architectural styles of wizard-parametrized, script-parametrized, transconsistent, stream-based, and staged active documents. Finally, we give a hypothesis of active document composition: it consists of four elements, namely, explicit architecture, invasiveness, transconsistency, and staging. On the basis of this hypothesis, many applications in Web engineering and document processing get a common background, and can be compared and simplified.

  • 16.
    Assmann, Uwe
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Automatic Roundtrip Engineering, 2003. In: Electronic Notes in Theoretical Computer Science, ISSN 1571-0661, E-ISSN 1571-0661, Vol. 82, no 5, p. 33-41. Article in journal (Other academic)
    Abstract [en]

    A systematic method for roundtrip engineering of systems, automatic roundtrip engineering (ARE), is presented. It relies on the automatic derivation of inverses for domain transformations. While roundtrip engineering is a well known system engineering method, systematic conditions for its deployment have not yet been formalized, and this is done in the paper. Secondly, ARE is a generic architectural style for different architectural scenarios. To show this, the paper gives a first classification, defining several subclasses of ARE systems: sequenced ARE systems, automatic Model-View-Controller engineering (MVARE), and bidirectional aspect systems (Beavers). Sequenced ARE systems extend the ARE principle to chains of transformations. MVARE systems project a domain into a set of simpler ones, simplifying system understanding. Beaving systems generalize aspect-oriented programming to roundtrip engineering. All ARE classes describe different generic application architectures and have a great potential to simplify the construction of roundtrip engineering tools and applications.
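The condition behind ARE, a forward domain transformation paired with a derivable inverse so that edits on the target side can be propagated back, can be illustrated with a toy pair. The two "domains" here (a record dict versus a flat field list) are invented, not taken from the paper:

```python
def forward(record):
    """Source domain -> target domain: dict to sorted (key, value) list."""
    return sorted(record.items())

def inverse(fields):
    """Derived inverse: flat field list back to the record dict."""
    return dict(fields)

# A roundtrip: transform, edit on the target side, propagate back.
model = {'name': 'pump', 'ports': 2}
view = forward(model)
view = [(k, 3 if k == 'ports' else v) for k, v in view]   # edit the view
```

The key property is that `inverse(forward(x)) == x` for every source `x`; when that holds, the edited view determines a unique updated source, which is what makes the roundtrip automatic rather than hand-maintained.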

  • 17.
    Assmann, Uwe
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Beyond generic component parameters2002In: Component deployment: IFIP/ACM Working Conference, CD 2002, Berlin, Germany, June 20-21, 2002 : proceedings / [ed] Judith Bishop, Springer Berlin/Heidelberg, 2002, Vol. 2370, p. 153-168Chapter in book (Refereed)
    Abstract [en]

    For flexible use in application contexts, software components should be parameterized, but also extended appropriately. Until now, there is no language mechanism to solve both problems uniformly. This paper presents a new concept, component hooks. Hooks are similar to generic component parameters but go some steps beyond. Firstly, they allow genericity on arbitrary program elements, leading to generic program elements. Secondly, they introduce an abstraction layer on generic parameters, allowing for structured generic parameters that bind several program elements together. Thirdly, if they are abstract set or sequence values, they can also be used to extend components. Lastly, since they only rely on a meta model they are a language independent concept which can be applied to all languages. Hooks form a basic parameterization concept for components written in languages with a meta model. For such languages, hooks generalize many well known generic language mechanisms, such as macros, semantic macros, generic type parameters, or nested generics. They also provide a basic concept to realize simple forms of aspect weavers and other advanced software engineering concepts.

  • 18.
    Assmann, Uwe
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Composing Frameworks and Components for Families of Semantic Web Applications2003In: International Workshop on Principles and Practice of Semantic Web Reasoning (PPSWR 03), 2003Conference paper (Refereed)
  • 19.
    Assmann, Uwe
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Composing frameworks and components for families of Semantic Web applications2003In: Principles and Practice of Semantic Web Reasoning International Workshop, PPSWR 2003, Mumbai, India, December 8, 2003. Proceedings / [ed] François Bry, Nicola Henze and Jan Maluszynski, Springer Berlin/Heidelberg, 2003, Vol. 2901, p. 1-15Chapter in book (Refereed)
    Abstract [en]

This paper outlines a first methodology for a framework and component technology for Semantic Web applications: layered constraint frameworks. Due to the heterogeneity of the Semantic Web, different ontology languages will coexist. Applications must be able to work with several of them and, for good reuse, should be parameterized by them. As a solution, we combine layered frameworks with architecture systems and explicit constraint specifications. Layered constraint frameworks can be partially instantiated on six levels, allowing for extensive reuse of components and variability of applications. Not only can applications be instantiated for a certain product or web service family; architectural styles, component models, and ontology languages can also be reused or varied in applications. Hence, for the first time, this proposes a reuse technology for ontology-based applications on the heterogeneous Semantic Web.

  • 20.
    Assmann, Uwe
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Invasive Software Composition2003Book (Other academic)
    Abstract [en]

    Over the past two decades, software engineering has come a long way from object-based to object-oriented to component-based design and development.

    Invasive software composition is a new technique that unifies and extends recent software engineering concepts like generic programming, aspect-oriented development, architecture systems, or subject-oriented development. To improve reuse, this new method regards software components as grayboxes and integrates them during composition. Building on a minimal set of program transformations, composition operator libraries can be developed that parameterize, extend, connect, mediate, and aspect-weave components.

    The book is centered around the JAVA language and the freely available demonstrator library COMPOST. It provides a wealth of materials for researchers, students, and professional software architects alike.

  • 21.
    Assmann, Uwe
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    MDA - Foundations and Applications (MDAFA) 20042004In: MDA - Foundations and Applications MDAFA 2004,2004, Linköping, Sweden: Linköpings universitet , 2004Conference paper (Refereed)
  • 22.
    Assmann, Uwe
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Henriksson, Jakob
    Technical University of Dresden, Germany.
    Maluszynski, Jan
    Linköping University, Department of Computer and Information Science, TCSLAB - Theoretical Computer Science Laboratory. Linköping University, The Institute of Technology.
    Combining safe rules and ontologies by interfacing of reasoners2006In: Principles and Practice of Semantic Web Reasoning 4th International Workshop, PPSWR 2006, Budva, Montenegro, June 10-11, 2006, Revised Selected Papers / [ed] Jóse Júlio Alferes, James Bailey, Wolfgang May and Uta Schwertel, Springer Berlin/Heidelberg, 2006, Vol. 4187, p. 33-47Chapter in book (Refereed)
    Abstract [en]

    The paper presents a framework for hybrid combination of rule languages with constraint languages including but not restricted to Description-Logic-based ontology languages. It shows how reasoning in a combined language can be done by interfacing reasoners of the component languages. A prototype system based on the presented principle integrates Datalog with OWL by interfacing XSB Prolog [2] with a DIG-compliant [1] DL reasoner (e.g. Racer [17]).
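The interfacing principle described in this abstract can be sketched as a toy hybrid evaluator (all names hypothetical; this is not the paper's actual XSB/DIG interface): ordinary Datalog atoms are matched against derived facts, while ontology atoms are delegated to an external DL-reasoner oracle. "Safe" here means every variable is bound by an ordinary atom before the oracle is consulted.

```python
class DLOracle:
    """Stands in for an external DL reasoner answering instance checks
    (the paper interfaces a DIG-compliant reasoner such as Racer)."""
    def __init__(self, instances):
        self.instances = instances          # concept -> set of individuals

    def holds(self, concept, individual):
        return individual in self.instances.get(concept, set())


def derive(facts, rules, oracle):
    """Naive bottom-up evaluation of unary 'safe' rules.

    A rule is (head_pred, body), where body is a list of (pred, kind)
    atoms sharing one variable X; kind is "datalog" or "dl". The first
    body atom must be a "datalog" atom, so X is bound (safety) before
    any DL oracle call is made.
    """
    derived = set(facts)
    while True:
        new = set()
        for head_pred, body in rules:
            first_pred, _ = body[0]
            candidates = {x for (p, x) in derived if p == first_pred}
            for x in candidates:
                if all((p, x) in derived if kind == "datalog"
                       else oracle.holds(p, x)
                       for p, kind in body):
                    new.add((head_pred, x))
        if new <= derived:
            return derived
        derived |= new
```

For example, with the rule serve(X) :- available(X), Wine(X), where Wine is answered by the oracle, only individuals that are both available and Wine instances are derived.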

  • 23.
    Assmann, Uwe
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Lövdahl, Johan
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    A demo of OptimixJ2003In: Applications of Graph Transformations with Industrial Relevance Second International Workshop, AGTIVE 2003, Charlottesville, VA, USA, September 27 - October 1, 2003, Revised Selected and Invited Papers / [ed] John L. Pfaltz, Manfred Nagl and Boris Böhlen, Springer Berlin/Heidelberg, 2003, Vol. 3062, p. 468-472Chapter in book (Refereed)
    Abstract [en]

    OptimixJ is a graph rewrite tool that generates Java code from rewrite specifications. Java classes are treated as graph schemas, enabling OptimixJ to extend legacy Java applications through code weaving in a simple way. The demo shows how OptimixJ has been used to implement graph rewriting for RDF/XML documents in the context of the Semantic Web.
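The idea of treating classes as graph schemas can be illustrated with a small Python analogue (hypothetical; not OptimixJ's actual Java API): object fields act as graph edges, and a rewrite rule adds derived edges until a fixpoint is reached.

```python
class Person:
    """A 'schema' class: its fields parents/ancestors are graph edges."""
    def __init__(self, name):
        self.name = name
        self.parents = []       # base edges
        self.ancestors = []     # edges derived by the rewrite rule


def rewrite_ancestors(people):
    """Rewrite rule, applied to fixpoint: X's ancestors include X's
    parents and the ancestors of X's parents (transitive closure)."""
    changed = True
    while changed:
        changed = False
        for p in people:
            reachable = list(p.parents)
            for parent in p.parents:
                reachable.extend(parent.ancestors)
            for q in reachable:
                if q not in p.ancestors:
                    p.ancestors.append(q)
                    changed = True
```

Running the rule on a chain a -> b -> c leaves both b and c among a's ancestors; in OptimixJ the analogous derived edges would be woven into the generated Java code.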

  • 24.
    Assmann, Uwe
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Lövdahl, Johan
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Integrating Graph Rewrite Tools with Standard Tools2003In: International Conference on Graph Transformations in Industrial Applications (AGTIVE 03), Springer-Verlag , 2003Conference paper (Refereed)
  • 25.
    Assmann, Uwe
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Lövdahl, Johan
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Integrating graph rewriting and standard software tools2004In: Applications of Graph Transformations with Industrial Relevance Second International Workshop, AGTIVE 2003, Charlottesville, VA, USA, September 27 - October 1, 2003, Revised Selected and Invited Papers / [ed] John L. Pfaltz, Manfred Nagl and Boris Böhlen, Springer Berlin/Heidelberg, 2004, Vol. 3062, p. 134-148Chapter in book (Refereed)
    Abstract [en]

    OptimixJ is a graph rewrite tool that can be embedded easily into the standard software process. Applications and models can be developed in Java or UML and extended by graph rewrite systems. We discuss how OptimixJ solves several problems that arise: the model-ownership problem, the embedded graphs problem, the library adaptation problem, and the target code encapsulation problem. We also show how the tool can be adapted to host language extensions or to new host languages in a very simple way, relying on the criterion of sublanguage projection. This reduces the effort for adapting OptimixJ to other host languages considerably.

  • 26.
    Assmann, Uwe
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Neumann, Rainer
Quo vadis Komponentensysteme? Von Modulen zu grauen Komponenten2003In: HMD - Praxis der Wirtschaftsinformatik, ISSN 1436-3011, Vol. 231Article in journal (Other academic)
Abstract [en]

The increasing complexity of applications forces research to develop ever new techniques for building flexible components that can easily be adapted to changing requirements and at the same time scale with the size of the systems to be developed. This article describes the transition from early modularization techniques via object-oriented systems and component systems to aspect systems. It shows how the views on interfaces and on the composition of components change, with the aspects of adaptation and coupling gaining ever more importance relative to the purely functional aspect.

  • 27.
    Assmann, Uwe
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Pulvermüller, E
    Cointe, P
    Bouraquadi, N
    Cointe, I
    Proceedings of Software Composition (SC) -- Workshop at ETAPS 20042004In: Workshop at ETAPS 2004,2004, Spain: Electronic Transactions of Theoretical Computer Science ENTCS , 2004Conference paper (Refereed)
  • 28.
    Auguston, Mikhail
    et al.
    University of Latvia, Riga.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    PARFORMAN - an Assertion Language for Specifying Behaviour when Debugging Parallel Applications1993In: Parallel and Distributed Processing, 1993, IEEE , 1993, p. 150-157Conference paper (Refereed)
    Abstract [en]

PARFORMAN (PARallel FORMal ANnotation language) is a specification language for expressing the intended behaviour or known types of error conditions when debugging or testing parallel programs. The high-level debugging approach supported by PARFORMAN is model-based. Models of intended or faulty behaviour can be succinctly specified in PARFORMAN. These models are then compared with the actual behaviour, in terms of execution traces of events, in order to localize possible bugs. PARFORMAN is based on an axiomatic model of target program behaviour. This model, called H-space (history-space), is formally defined through a set of general axioms about three basic relations between events: events may be sequentially ordered, they may be parallel, or one of them may be included in another composite event. The notion of an event grammar is introduced to describe allowed event patterns over a certain application domain or language. Auxiliary composite events, such as snapshots, are introduced in order to define the notion of "occurred at the same time" at suitable levels of abstraction. In addition to debugging and testing, PARFORMAN can also be used to specify profiles and performance measurements.

  • 29.
    Bachmann, Bernhard
    et al.
    Dept. Mathematics and Engineering, University of Applied Sciences, Bielefeld, Germany.
    Ochel, Lennart
    Dept. Mathematics and Engineering, University of Applied Sciences, Bielefeld, Germany.
    Ruge, Vitalij
    Dept. Mathematics and Engineering, University of Applied Sciences, Bielefeld, Germany.
    Gebremedhin, Mahder
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Fritzson, Peter
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Nezhadali, Vaheed
    Linköping University, Department of Electrical Engineering, Vehicular Systems. Linköping University, Faculty of Science & Engineering.
    Eriksson, Lars
    Linköping University, Department of Electrical Engineering, Vehicular Systems. Linköping University, Faculty of Science & Engineering.
    Sivertsson, Martin
    Linköping University, Department of Electrical Engineering, Vehicular Systems. Linköping University, Faculty of Science & Engineering.
    Parallel Multiple-Shooting and Collocation Optimization with OpenModelica2012In: Proceedings of the 9th International MODELICA Conference; September 3-5; 2012; Munich; Germany, Linköping University Electronic Press, 2012, p. 659-668, article id 067Conference paper (Refereed)
    Abstract [en]

Nonlinear model predictive control (NMPC) has become increasingly important for today's control engineers during the last decade. In order to apply NMPC, a nonlinear optimal control problem (NOCP) must be solved, which requires high computational effort.

State-of-the-art solution algorithms are based on multiple shooting or collocation algorithms, which are required to solve the underlying dynamic model formulation. This paper describes a general discretization scheme applied to the dynamic model description which can be further concretized to reproduce the multiple shooting or collocation approach. Furthermore, this approach can be refined to represent a total collocation method in order to solve the underlying NOCP much more efficiently. Further speedup of the optimization has been achieved by parallelizing the calculation of model-specific parts (e.g. constraints, Jacobians, etc.) and is presented in the coming sections.

The corresponding discretized optimization problem has been solved by the interior-point optimizer Ipopt. The proposed parallelized algorithms have been tested on different applications. As an industrially relevant application, the optimal control of a diesel-electric power train has been investigated. The modeling and problem description were done in Optimica and Modelica. The simulation was performed using OpenModelica. Speedup curves for parallel execution are presented.
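The multiple-shooting discretization summarized above can be illustrated on a toy problem (an assumed scalar ODE x' = -x + u with forward-Euler integration; a sketch, not OpenModelica's implementation): the horizon is split into segments, each segment is simulated from its own start value, and continuity "defect" constraints couple neighbouring segments.

```python
def simulate_segment(x0, u, dt, steps):
    """Forward-Euler simulation of the toy model x' = -x + u
    over one shooting segment."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + u)
    return x


def continuity_defects(x_starts, controls, dt, steps):
    """Defect d_i = (end of segment i) - (start of segment i+1).

    An NLP solver (Ipopt in the paper) treats the segment start values
    and controls as decision variables and drives all defects to zero.
    Each simulate_segment call is independent of the others, which is
    why the constraint (and Jacobian) evaluation parallelizes across
    segments."""
    return [
        simulate_segment(x_starts[i], controls[i], dt, steps) - x_starts[i + 1]
        for i in range(len(x_starts) - 1)
    ]
```

With start values chosen consistently (each segment starting where the previous one ended), all defects vanish; an inconsistent trajectory shows up as nonzero defects for the solver to eliminate.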

  • 30.
    Bednarski, Andrzej
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    A dynamic programming approach to optimal retargetable code generation for irregular architectures2002Licentiate thesis, monograph (Other academic)
    Abstract [en]

In this thesis we address the problem of optimal code generation for irregular architectures such as Digital Signal Processors (DSPs). Code generation consists mainly of three tasks: instruction selection, instruction scheduling and register allocation. These tasks have been shown to be NP-hard for most architectures and most situations.

A common approach to code generation consists in solving each task separately, i.e. in a decoupled manner, which is easier from an engineering point of view. Phase-decoupled compilers produce good code quality for regular architectures, but if applied to DSPs the resulting code is of significantly lower performance due to strong interdependencies between the different tasks.

We report on a novel method for fully integrated code generation based on dynamic programming. It handles the most important tasks of code generation in a single optimization step and produces an optimal code sequence. Our dynamic programming algorithm is applicable to small, yet not trivial, problem instances with up to 50 instructions per basic block if data locality is not an issue, and up to 20 instructions if we take data locality on irregular processor architectures into account.

    In order to obtain a retargetable framework we developed a first version of a structured hardware description language, ADML, which is based on XML. We implemented a prototype framework of such a retargetable system for optimal code generation.

    As far as we know from the literature, this is the first time that the main tasks of code generation are solved optimally in a single and fully integrated optimization step that additionally considers data placement in registers. 
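The flavour of such a dynamic-programming formulation can be sketched for just the scheduling slice of the integrated problem (a hypothetical 2-issue machine with unit latencies; the thesis additionally integrates instruction selection and register allocation): DP states are sets of already-emitted instructions, and each state stores the best schedule length reaching it.

```python
from functools import lru_cache
from itertools import combinations


def best_schedule_length(deps, n, issue_width=2):
    """Minimum number of cycles to schedule n instructions of a basic
    block on an issue_width-wide machine; deps[i] is the set of
    instructions that must be emitted before instruction i."""
    full = (1 << n) - 1

    @lru_cache(maxsize=None)
    def go(done):                  # done: bitmask of emitted instructions
        if done == full:
            return 0
        # instructions whose predecessors have all been emitted
        ready = [i for i in range(n)
                 if not done >> i & 1
                 and all(done >> d & 1 for d in deps[i])]
        best = float("inf")
        # try every bundle of up to issue_width ready instructions
        for k in range(1, min(issue_width, len(ready)) + 1):
            for bundle in combinations(ready, k):
                mask = done
                for i in bundle:
                    mask |= 1 << i
                best = min(best, 1 + go(mask))
        return best

    return go(0)
```

For a dependence chain 0 -> 1 -> 2 plus one independent instruction 3, the two-issue machine needs three cycles: {0, 3}, {1}, {2}. Memoizing on the emitted set rather than on full schedules is what keeps the state space a power set instead of a factorial space.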

  • 31.
    Bednarski, Andrzej
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Integrated Optimal Code Generation for Digital Signal Processors2006Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this thesis we address the problem of optimal code generation for irregular architectures such as Digital Signal Processors (DSPs).

    Code generation consists mainly of three interrelated optimization tasks: instruction selection (with resource allocation), instruction scheduling and register allocation. These tasks have been discovered to be NP-hard for most architectures and most situations. A common approach to code generation consists in solving each task separately, i.e. in a decoupled manner, which is easier from a software engineering point of view. Phase-decoupled compilers produce good code quality for regular architectures, but if applied to DSPs the resulting code is of significantly lower performance due to strong interdependences between the different tasks.

    We developed a novel method for fully integrated code generation at the basic block level, based on dynamic programming. It handles the most important tasks of code generation in a single optimization step and produces an optimal code sequence. Our dynamic programming algorithm is applicable to small, yet not trivial problem instances with up to 50 instructions per basic block if data locality is not an issue, and up to 20 instructions if we take data locality with optimal scheduling of data transfers on irregular processor architectures into account. For larger problem instances we have developed heuristic relaxations.

In order to obtain a retargetable framework, we developed a structured architecture specification language, xADML, which is based on XML. We implemented such a framework, called OPTIMIST, which is parameterized by an xADML architecture specification.

    The thesis further provides an Integer Linear Programming formulation of fully integrated optimal code generation for VLIW architectures with a homogeneous register file. Where it terminates successfully, the ILP-based optimizer mostly works faster than the dynamic programming approach; on the other hand, it fails for several larger examples where dynamic programming still provides a solution. Hence, the two approaches complement each other. In particular, we show how the dynamic programming approach can be used to precondition the ILP formulation.

As far as we know from the literature, this is the first time that the main tasks of code generation are solved optimally in a single and fully integrated optimization step that additionally considers data placement in register sets and optimal scheduling of data transfers between different register sets.

  • 32.
    Bednarski, Andrzej
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Kessler, Christoph
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Energy-Optimal Integrated VLIW Code Generation2004In: CPC04 11th Int. Workshop on Compilers for Parallel Computers,2004, 2004, p. 227-238Conference paper (Other academic)
  • 33.
    Bednarski, Andrzej
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Kessler, Christoph
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Exploiting Symmetries for Optimal Integrated Code Generation2004In: Int. Conf. on Embedded Systems and Applications ESA04,2004, 2004Conference paper (Refereed)
  • 34.
    Bednarski, Andrzej
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Kessler, Christoph
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Integer Linear Programming versus Dynamic Programming for Optimal Integrated VLIW Code Generation2006In: 12th Int. Workshop on Compilers for Parallel Computers,2006, 2006, p. 73-Conference paper (Refereed)
  • 35.
    Bednarski, Andrzej
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Optimal integrated code generation for VLIW architectures2003In: Proc. of CPC'03 10th Int. Workshop on Compilers for Parallel Computers, Amsterdam, The Netherlands, January 2003', Leiden, The Netherlands: Leiden Institute of Advanced Computer Science , 2003Conference paper (Refereed)
  • 36.
    Bednarski, Andrzej
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Optimal integrated VLIW code generation with Integer Linear Programming2006In: Euro-Par 2006 Parallel Processing 12th International Euro-Par Conference, Dresden, Germany, August 28 – September 1, 2006. Proceedings / [ed] Wolfgang E. Nagel, Wolfgang V. Walter and Wolfgang Lehner, Springer Berlin/Heidelberg, 2006, Vol. 4128, p. 461-472Chapter in book (Refereed)
    Abstract [en]

    We give an Integer Linear Programming (ILP) solution that fully integrates all steps of code generation, i.e. instruction selection, register allocation and instruction scheduling, on the basic block level for VLIW processors.

    In earlier work, we contributed a dynamic programming (DP) based method for optimal integrated code generation, implemented in our retargetable code generator OPTIMIST. In this paper we give first results to evaluate and compare our ILP formulation with our DP method on a VLIW processor. We also demonstrate how to precondition the ILP model by a heuristic relaxation of the DP method to improve ILP optimization time.

  • 37.
    Benkner, Siegfried
    et al.
    University of Vienna.
    Pllana, Sabri
    University of Vienna.
    Larsson Träff, Jesper
    University of Vienna.
    Tsigas, Philippas
    Chalmers.
    Dolinsky, Uwe
    Codeplay Software.
    Augonnet, Cèdric
    INRIA Bordeaux.
    Bachmayer, Beverly
    Intel GmbH.
    Kessler, Christoph
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Moloney, David
    Movidius.
    Osipov, Vitaly
    Karlsruhe Institute of Technology.
    PEPPHER: Efficient and Productive Usage of Hybrid Computing Systems2011In: IEEE Micro, ISSN 0272-1732, E-ISSN 1937-4143, Vol. 31, no 5, p. 28-41Article in journal (Refereed)
    Abstract [en]

    PEPPHER, a three-year European FP7 project, addresses efficient utilization of hybrid (heterogeneous) computer systems consisting of multicore CPUs with GPU-type accelerators. This article outlines the PEPPHER performance-aware component model, performance prediction means, runtime system, and other aspects of the project. A larger example demonstrates performance portability with the PEPPHER approach across hybrid systems with one to four GPUs.

  • 38. Berg, Karin
    et al.
    Nyström, Kaj
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Hydrological Modeling in Modelica2005In: 4th International Modelica Conference, March 2005,2005, 2005Conference paper (Other academic)
  • 39.
    Bilos, Rober
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Fritzson, Peter
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Experience from a Token Sequence Representation of Programs, Documents, and their Deltas1988Conference paper (Refereed)
    Abstract [en]

A primary goal of this work has been to investigate the consequences of a token-based program and document representation. This representation lies between plain text and trees in complexity. We have found that a program or a document represented as a token sequence saves on average 50% compared to a plain-text representation. Another advantage is that deltas between program versions stored in source code control systems become insensitive to changes in whitespace or formatting style. Statistics from version handling, computation of deltas, and storage are presented in the paper.
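The whitespace insensitivity of token-sequence deltas is easy to reproduce with a toy token diff (an illustrative sketch, not the paper's system): deltas are computed over tokens rather than characters, so pure reformatting yields an empty delta.

```python
import re
from difflib import SequenceMatcher


def tokens(source):
    """Crude tokenizer: identifiers, numbers, or single non-space chars."""
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", source)


def token_delta(old, new):
    """Non-'equal' difflib opcodes over the two token sequences.
    Whitespace never becomes a token, so reformatting produces no delta."""
    sm = SequenceMatcher(a=tokens(old), b=tokens(new), autojunk=False)
    return [op for op in sm.get_opcodes() if op[0] != "equal"]
```

For instance, token_delta("x = 1;", "x=1 ;") is empty, while changing the literal to 2 produces a replace opcode that a version store could record compactly.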

  • 40.
    Borg, Andreas
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Contributions to management and validation of non-functional requirements2004Licentiate thesis, monograph (Other academic)
    Abstract [en]

    Non-functional requirements (NFRs) are essential when considering software quality in that they shall represent the right quality of the intended software. It is generally hard to get hold of NFRs and to specify them in measurable terms, and most software development methods applied today focus on functional requirements (FRs). Moreover, NFRs are relatively unexplored in the literature and knowledge regarding real-world treatment of NFRs is particularly rare.

    A case study and a literature survey were performed to provide this kind of knowledge, which also served as a problem inventory to outline future research activities. An interview series with practitioners at two large software development organizations was carried out. As a major result, it was established that too few NFRs are considered in development and that they are stated in vague terms. Moreover, it was observed that organizational power structures strongly influence the quality of the forthcoming software, and that processes need to be well suited for dealing with NFRs.

    It was selected among several options to explore how processes can be better suited to handle NFRs by adding the information of actual feature use. A case study was performed in which the feature use of an interactive product management tool was measured indirectly from log files of an industrial user, and the approach was also applied to the problem of requirements selection. The results showed that the idea is feasible and that quality aspects can be effectively addressed by considering actual feature use.

    An agenda for continued research comprises: further studies in system usage data acquisition, modelling of NFRs, and comparing means for predicting feasibility of NFRs. One strong candidate is weaving high-level requirement models with models of available components.

  • 41.
    Borg, Andreas
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Processes and Models for Capacity Requirements in Telecommunication Systems2009Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Capacity is an essential quality factor in telecommunication systems. The ability to develop systems with the lowest cost per subscriber and transaction, that also meet the highest availability requirements and at the same time allow for scalability, is a true challenge for a telecommunication systems provider. This thesis describes a research collaboration between Linköping University and Ericsson AB aimed at improving the management, representation, and implementation of capacity requirements in large-scale software engineering.

    An industrial case study on non-functional requirements in general was conducted to provide the explorative research background, and a richer understanding of identified difficulties was gained by dedicating subsequent investigations to capacity. A best practice inventory within Ericsson regarding the management of capacity requirements and their refinement into design and implementation was carried out. It revealed that capacity requirements crosscut most of the development process and the system lifecycle, thus widening the research context considerably. The interview series resulted in the specification of 19 capacity sub-processes; these were represented as a method plug-in to the OpenUP software development process in order to construct a coherent package of knowledge as well as to communicate the results. They also provide the basis of an empirically grounded anatomy which has been validated in a focus group. The anatomy enables the assessment and stepwise improvement of an organization’s ability to develop for capacity, thus keeping the initial cost low. Moreover, the notion of capacity is discussed and a pragmatic approach for how to support model-based, function-oriented development with capacity information by its annotation in UML models is presented. The results combine into a method for how to improve the treatment of capacity requirements in large-scale software systems.

    List of papers
    1. The Bad Conscience of Requirements Engineering: An Investigation in Real-World Treatment of Non-Functional Requirements
    2003 (English). In: Third Conference on Software Engineering Research and Practice in Sweden (SERPS'03), Lund, 2003, p. 1-8. Conference paper, Published paper (Refereed)
    Abstract [en]

    Even though non-functional requirements (NFRs) are critical in order to provide software of good quality, the literature on NFRs is relatively sparse. We describe how NFRs are treated in two development organizations, an Ericsson application center and the IT department of the Swedish Meteorological and Hydrological Institute. We have interviewed professionals about problems they face and their ideas on how to improve the situation. Both organizations are aware of NFRs and related problems, but their main focus is on functional requirements, primarily because existing methods focus on these. The most tangible problems experienced are that many NFRs remain undiscovered and that NFRs are stated in non-measurable terms. It became clear that the size and structure of the organization require proper distribution of employees’ interest, authority, and competence regarding NFRs. We argue that a feasible solution might be to strengthen the position of architectural requirements, which are more likely to emphasize NFRs.

    Keywords
    Non-functional requirements, case study
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-16790 (URN)
    Available from: 2009-02-25 Created: 2009-02-19 Last updated: 2018-01-13. Bibliographically approved
    2. Good Practice and Improvement Model of Handling Capacity Requirements of Large Telecommunication Systems
    2006 (English). In: 14th IEEE International Requirements Engineering Conference (RE'06), Minneapolis/S:t Paul. Los Alamitos, CA: IEEE Computer Society, 2006, p. 245-250. Conference paper, Published paper (Refereed)
    Abstract [en]

    There is evidence to suggest that the software industry has not yet matured as regards the management of non-functional requirements (NFRs). Consequently, the cost of achieving required quality is unnecessarily high. To avoid this, the telecommunication systems provider Ericsson defined a research task to improve the management of requirements for capacity, which is one of the most critical NFRs. Linköping University joined the effort and conducted an interview series to investigate good practice within different parts of the company. Inspired by the interviews and an ongoing process improvement project, a model for improvement was created and activities were synthesized. This paper contributes the results from the interview series and details the sub-processes of specification that should be improved. Such improvements are about understanding the relationship between numerical entities at all system levels, augmenting UML specifications to make NFRs visible, working with time budgets, and testing the subsystem-level components on the same level as they are specified.

    Place, publisher, year, edition, pages
    Los Alamitos, CA: IEEE Computer Society, 2006
    Keywords
    Non-functional requirements, capacity, process improvement
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-16791 (URN), 10.1109/RE.2006.28 (DOI), 0-7695-2555-5 (ISBN), 978-0-7695-2555-6 (ISBN)
    Available from: 2009-02-19 Created: 2009-02-19 Last updated: 2018-01-13. Bibliographically approved
    3. Integrating an Improvement Model of Handling Capacity Requirements with OpenUP/Basic Process
    2007 (English). In: 13th International Working Conference on Requirements Engineering: Foundations for Software Quality (REFSQ'07), Trondheim, Norway. Berlin Heidelberg: Springer, 2007, p. 341-354. Conference paper, Published paper (Refereed)
    Abstract [en]

    Contemporary software processes and modeling languages have a strong focus on Functional Requirements (FRs), whereas information about Non-Functional Requirements (NFRs) is managed through text-based documentation and the individual skills of the personnel. In order to get a better understanding of how capacity requirements are handled, we carried out an interview series with various branches of Ericsson. The analysis of this material revealed 18 Capacity Sub-Processes (CSPs) that need to be attended to in order to create a capacity-oriented development. In this paper we describe all these sub-processes and their mapping into an extension of the OpenUP/Basic software process. Such an extension supports a process engineer in realizing the sub-processes, and has at the same time shown that there are no internal inconsistencies among the CSPs. The extension provides a context for continued research in using UML to support negotiation between requirements and existing design.

    Place, publisher, year, edition, pages
    Berlin Heidelberg: Springer, 2007
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743 ; 4542
    Keywords
    Capacity requirements, OpenUP/Basic, method plug-in, Eclipse Process Framework, process improvement
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-16792 (URN), 10.1007/978-3-540-73031-6_26 (DOI), 978-3-540-73030-9 (ISBN)
    Available from: 2009-02-19 Created: 2009-02-19 Last updated: 2018-01-13. Bibliographically approved
    4. Extending the OpenUP/Basic Requirements Discipline to Specify Capacity Requirements
    2007 (English). In: Requirements Engineering Conference, 2007. RE '07, IEEE Computer Society, 2007, p. 328-333. Conference paper, Published paper (Refereed)
    Abstract [en]

    Software processes, such as RUP and agile methods, focus their requirements engineering part on use cases and thus functional requirements. Complex products, such as radio network control software, need special handling of non-functional requirements as well. We describe how we used the Eclipse Process Framework to augment the open and minimal OpenUP/Basic process with improvements found in the management of capacity requirements in a case study at Ericsson. The result is compared with another project improving RUP to handle performance requirements. The major differences between the improvements are that 1) they suggest a special, dedicated performance manager role whereas we suggest that present roles are augmented, and 2) they suggest a bottom-up approach to performance verification while we focus on system performance first, i.e. top-down. Further, we suggest augmenting UML 2 models with capacity attributes to improve the information flow from requirements to implementation.

    Place, publisher, year, edition, pages
    IEEE Computer Society, 2007
    Series
    International Requirements Engineering Conference. Proceedings, ISSN 1090-705X
    Keywords
    Capacity requirements, process improvement, method plug-in, OpenUP/Basic, Eclipse Process Framework
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-16797 (URN), 10.1109/RE.2007.24 (DOI), 000251576800040 (), 978-0-7695-2935-6 (ISBN)
    Conference
    15th IEEE International Requirements Engineering Conference, 15-19 October 2007, Delhi, India
    Available from: 2009-02-19 Created: 2009-02-19 Last updated: 2018-01-13. Bibliographically approved
    5. A Case Study in Assessing and Improving Capacity Using an Anatomy of Good Practice
    2007 (English). In: The 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2007), Dubrovnik, Croatia. New York: ACM, 2007, p. 509-512. Conference paper, Published paper (Refereed)
    Abstract [en]

    Capacity in telecommunication systems is highly related to operator revenue. As a vendor of such systems, Ericsson AB is continuously improving its processes for estimating, specifying, tuning, and testing the capacity of delivered systems. In order to systematize process improvements Ericsson AB and Linköping University joined forces to create an anatomy of Capacity Sub Processes (CSPs). The anatomy is the result of an interview series conducted to document good practices amongst organizations active in capacity improvement. In this paper we analyze four different development processes in terms of how far they have reached in their process maturity according to our anatomy and show possible improvement directions. Three of the processes are currently in use at Ericsson, and the fourth is the OpenUP/Basic process which we have used as a reference process in earlier research. We also include an analysis of the observed good practices. The result mainly confirms the order of CSPs in the anatomy, but we need to use our information of the maturity of products and the major life cycle in the organization in order to fully explain the role of the anatomy in planning of improvements.

    Place, publisher, year, edition, pages
    New York: ACM, 2007
    Keywords
    Capacity, non-functional requirements, process improvement
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-16801 (URN), 10.1145/1287624.1287697 (DOI), 978-1-59593-811-4 (ISBN)
    Available from: 2009-02-19 Created: 2009-02-19 Last updated: 2018-01-13. Bibliographically approved
    6. A Method for Improving the Treatment of Capacity Requirements in Large Telecommunication Systems
    (English)Manuscript (Other academic)
    Abstract [en]

    Non-functional requirements crosscut functional models and are more difficult to enforce in system models. This paper describes a long-term research collaboration regarding capacity requirements between Linköping University and Ericsson AB. We describe an industrial case study on non-functional requirements as a background. Succeeding efforts dedicated to capacity include a detailed description of the term, a best practice inventory within Ericsson, and a pragmatic approach for how to annotate UML models with capacity information. The results are also represented as a method plug-in to the OpenUP software process and an anatomy facilitating the possibility to assess and improve an organization’s abilities to develop for capacity. The results combine into a method for how to improve the treatment of capacity requirements in large-scale software systems. Both product and process views are included, with emphasis on the latter.

    Keywords
    Non-functional requirements, capacity requirements, process improvement, anatomy, UML, OpenUP, Eclipse Process Framework
    National Category
    Software Engineering
    Identifiers
    urn:nbn:se:liu:diva-16805 (URN)
    Available from: 2009-02-19 Created: 2009-02-19 Last updated: 2018-01-13. Bibliographically approved
  • 42.
    Borg, Andreas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Karlsson, J
    Olsson, J
    Sandahl, Kristian
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Measuring the Use of Features in a Requirements Engineering Tool - An Industrial Case Study. 2004. In: Fourth Conference on Software Engineering Research and Practice in Sweden, 2004, 2004, p. 101-. Conference paper (Refereed)
  • 43.
    Borg, Andreas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Karlsson, J
    Olsson, S
    Sandahl, Kristian
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Supporting Requirements Selection by Measuring Feature Use. 2004. In: Tenth International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ'04), 2004, 2004. Conference paper (Refereed)
  • 44.
    Borg, Andreas
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Patel, Mikael
    Ericsson AB, Linköping, Sweden.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Extending the OpenUP/Basic Requirements Discipline to Specify Capacity Requirements. 2007. In: Requirements Engineering Conference, 2007. RE '07, IEEE Computer Society, 2007, p. 328-333. Conference paper (Refereed)
    Abstract [en]

    Software processes, such as RUP and agile methods, focus their requirements engineering part on use cases and thus functional requirements. Complex products, such as radio network control software, need special handling of non-functional requirements as well. We describe how we used the Eclipse Process Framework to augment the open and minimal OpenUP/Basic process with improvements found in the management of capacity requirements in a case study at Ericsson. The result is compared with another project improving RUP to handle performance requirements. The major differences between the improvements are that 1) they suggest a special, dedicated performance manager role whereas we suggest that present roles are augmented, and 2) they suggest a bottom-up approach to performance verification while we focus on system performance first, i.e. top-down. Further, we suggest augmenting UML 2 models with capacity attributes to improve the information flow from requirements to implementation.

  • 45.
    Borg, Andreas
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Patel, Mikael
    Ericsson AB, Linköping, Sweden.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Good Practice and Improvement Model of Handling Capacity Requirements of Large Telecommunication Systems. 2006. In: 14th IEEE International Requirements Engineering Conference (RE'06), Minneapolis/S:t Paul. Los Alamitos, CA: IEEE Computer Society, 2006, p. 245-250. Conference paper (Refereed)
    Abstract [en]

    There is evidence to suggest that the software industry has not yet matured as regards the management of non-functional requirements (NFRs). Consequently, the cost of achieving required quality is unnecessarily high. To avoid this, the telecommunication systems provider Ericsson defined a research task to improve the management of requirements for capacity, which is one of the most critical NFRs. Linköping University joined the effort and conducted an interview series to investigate good practice within different parts of the company. Inspired by the interviews and an ongoing process improvement project, a model for improvement was created and activities were synthesized. This paper contributes the results from the interview series and details the sub-processes of specification that should be improved. Such improvements are about understanding the relationship between numerical entities at all system levels, augmenting UML specifications to make NFRs visible, working with time budgets, and testing the subsystem-level components on the same level as they are specified.

  • 46.
    Borg, Andreas
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Patel, Mikael
    Ericsson AB, Linköping, Sweden.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Integrating an Improvement Model of Handling Capacity Requirements with OpenUP/Basic Process. 2007. In: 13th International Working Conference on Requirements Engineering: Foundations for Software Quality (REFSQ'07), Trondheim, Norway. Berlin Heidelberg: Springer, 2007, p. 341-354. Conference paper (Refereed)
    Abstract [en]

    Contemporary software processes and modeling languages have a strong focus on Functional Requirements (FRs), whereas information about Non-Functional Requirements (NFRs) is managed through text-based documentation and the individual skills of the personnel. In order to get a better understanding of how capacity requirements are handled, we carried out an interview series with various branches of Ericsson. The analysis of this material revealed 18 Capacity Sub-Processes (CSPs) that need to be attended to in order to create a capacity-oriented development. In this paper we describe all these sub-processes and their mapping into an extension of the OpenUP/Basic software process. Such an extension supports a process engineer in realizing the sub-processes, and has at the same time shown that there are no internal inconsistencies among the CSPs. The extension provides a context for continued research in using UML to support negotiation between requirements and existing design.

  • 47.
    Borg, Andreas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Patel, Mikael
    Ericsson AB.
    Sandahl, Kristian
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory.
    Modelling Capacity Requirements in Large-Scale Telecommunication Systems. 2008. In: Eighth Conference on Software Engineering Research and Practice in Sweden (SERPS'08), 2008, 2008. Conference paper (Refereed)

  • 48.
    Borg, Andreas
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Patel, Mikael
    Ericsson AB, Linköping, Sweden.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    A Method for Improving the Treatment of Capacity Requirements in Large Telecommunication Systems. Manuscript (Other academic)
    Abstract [en]

    Non-functional requirements crosscut functional models and are more difficult to enforce in system models. This paper describes a long-term research collaboration regarding capacity requirements between Linköping University and Ericsson AB. We describe an industrial case study on non-functional requirements as a background. Succeeding efforts dedicated to capacity include a detailed description of the term, a best practice inventory within Ericsson, and a pragmatic approach for how to annotate UML models with capacity information. The results are also represented as a method plug-in to the OpenUP software process and an anatomy facilitating the possibility to assess and improve an organization’s abilities to develop for capacity. The results combine into a method for how to improve the treatment of capacity requirements in large-scale software systems. Both product and process views are included, with emphasis on the latter.

  • 49.
    Borg, Andreas
    et al.
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Yong, Angela
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Carlshamre, Pär
    Linköping University, Department of Computer and Information Science, MDA - Human Computer Interfaces. Linköping University, The Institute of Technology.
    Sandahl, Kristian
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    The Bad Conscience of Requirements Engineering: An Investigation in Real-World Treatment of Non-Functional Requirements. 2003. In: Third Conference on Software Engineering Research and Practice in Sweden (SERPS'03), Lund, 2003, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    Even though non-functional requirements (NFRs) are critical in order to provide software of good quality, the literature on NFRs is relatively sparse. We describe how NFRs are treated in two development organizations, an Ericsson application center and the IT department of the Swedish Meteorological and Hydrological Institute. We have interviewed professionals about problems they face and their ideas on how to improve the situation. Both organizations are aware of NFRs and related problems, but their main focus is on functional requirements, primarily because existing methods focus on these. The most tangible problems experienced are that many NFRs remain undiscovered and that NFRs are stated in non-measurable terms. It became clear that the size and structure of the organization require proper distribution of employees’ interest, authority, and competence regarding NFRs. We argue that a feasible solution might be to strengthen the position of architectural requirements, which are more likely to emphasize NFRs.

  • 50.
    Broman, David
    Linköping University, Department of Computer and Information Science, PELAB - Programming Environment Laboratory. Linköping University, The Institute of Technology.
    Flow Lambda Calculus for Declarative Physical Connection Semantics. 2007. Report (Other academic)
    Abstract [en]

    One of the most fundamental language constructs of equation-based object-oriented languages is the possibility to state acausal connections, where both potential variables and flow variables exist. Several of the state-of-the-art languages in this category are informally specified using natural language. This can make the languages hard to interpret and reason about, and precludes guaranteeing the absence of certain errors. In this work, we construct a formal operational small-step semantics based on the lambda calculus. The calculus is then extended with more convenient modeling capabilities. Examples are given that demonstrate the expressiveness of the language, and some tests are made to verify the correctness of the semantics.
