liu.se — Search for publications in DiVA
51 - 100 of 9043
  • 51.
    Abugessaisa, Imad
    et al.
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    Sivertun, Åke
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    ICT in regional networks in the field of culture and development cooperation, 2009. In: AMCIS 2009 Proceedings, 15th Americas Conference on Information Systems (AMCIS 2009), 2009, Vol. 4, p. 2667-2674. Conference paper (Refereed)
    Abstract [en]

    This paper presents results from a mapping study commissioned by the Swedish International Development Cooperation Agency (Sida), which has supported regional and global networking within the field of culture. The mapping study was carried out in East Africa to find ways to collect and build on experiences already gained, and to draw conclusions on the status and use of ICT in the culture networks supported by Sida. Another goal was to gain an understanding of the extent to which ICT is used within the cultural sectors in East Africa. The study focused on two main cultural sectors: museums and drama/theatre. A variety of research methods were used. It was found that there were well-established culture networks in both the theatre and museum sectors. Through these networks, ICT has been used for sharing knowledge as well as serving as a tool for communication. It has also supported dialogue among many different culture groups in the region. © 2009 by the AIS/ICIS Administrative Office. All rights reserved.

  • 52.
    Abugessaisa, Imad
    et al.
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    Sivertun, Åke
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    Ontological Approach to Modeling Information Systems, 2004. In: Proceedings of the Fourth International Conference on Computer and Information Technology (CIT'04), 14–16 September, Wuhan, China: IEEE Computer Society, Washington, DC, 2004, p. 1122-1127. Conference paper (Other academic)
    Abstract [en]

    In recent years, the use of formal tools in information system modeling and development has become a notable area of research in computer science. The term ontology first appeared in the computer science literature in 1967, when S. H. Mealy introduced it as a basic foundation for data modeling. The main objective of this paper is to discuss the concept of ontology (from a philosophical perspective) as it has been used to bridge the gap between philosophy and information systems science, and to investigate the types of ontology that can be found during an ontological investigation as well as the methods used in the investigation process. A secondary objective is to survey different approaches to ontology design and engineering, along with the development environments used to create and edit ontologies.

  • 53.
    Abugessaisa, Imad
    et al.
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    Sivertun, Åke
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    Le Duc, Michael
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    A Systemic View on Swedish Traffic Accident Data Acquisition System, 2007. In: Proceedings of the 14th International Conference on Road Safety on Four Continents (RS4C), 14-16 November, Bangkok, Thailand. Sweden: VTI, 2007, p. 1-12. Conference paper (Refereed)
    Abstract [en]

    This paper presents work in progress on information sharing among road safety organizations, focusing on the accident data acquisition system. In 2002, the Swedish Road Transport Authority (SRT) adopted STRADA as the accident reporting system to be used by the police all over Sweden. Such a system is vital for coordinating, maintaining and auditing road safety in the country. Road accidents are normally reported by the police or by the emergency unit at a hospital. However, more than 50% of the hospitals in Sweden did not use the system, which decreases its utilization and reduces the quality of the information it provides. Using a systems thinking approach, we try to see why this situation occurred and how changes can be introduced and managed to overcome the problem. Interviews were conducted with a focus group and with different users of the system. To investigate issues related to the acceptance of the system, we use the Technology Acceptance Model (TAM). We recommend involving users throughout the STRADA life cycle, and suggest that the developers use enabling systems to overcome problems related to system usability and complexity, as well as iterative development to govern the life cycle.

  • 54.
    Abugessaisa, Imad
    et al.
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    Sivertun, Åke
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    Le Duc, Michael
    Linköping University, Department of Computer and Information Science, GIS - Geographical Information Science Group. Linköping University, The Institute of Technology.
    GLOBESAFE: A Platform for Information-Sharing Among Road Safety Organizations, 2007. In: IFIP W.G. 9th International Conference on Social Implications of Computers in Developing Countries, May 2007, São Paulo, Brazil, 2007, p. 1-10. Conference paper (Refereed)
  • 55.
    Abugessaisa, Imad
    et al.
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Sivertun, Åke
    Linköping University, Department of Computer and Information Science, GIS. Linköping University, The Institute of Technology.
    Le Duc, Michael
    Linköping University, Department of Computer and Information Science, GIS. Linköping University, The Institute of Technology.
    Map as Interface for Shared Information: A Study of Design Principles and User Interaction Satisfaction, 2006. In: IADIS International Conference WWW/Internet 2006, Murcia, Spain, 2006, p. 377-384. Conference paper (Refereed)
  • 56.
    Achichi, Manel
    et al.
    Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), France; University of Montpellier, France.
    Cheatham, Michelle
    Wright State University, USA.
    Dragisic, Zlatan
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Euzenat, Jerome
    INRIA, France; University Grenoble Alpes, Grenoble, France.
    Faria, Daniel
    Instituto Gulbenkian de Ciencia, Lisbon, Portugal.
    Ferrara, Alfio
    Universita degli studi di Milano, Italy.
    Flouris, Giorgos
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Fundulaki, Irini
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Harrow, Ian
    Pistoia Alliance Inc., USA.
    Ivanova, Valentina
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Jimenez-Ruiz, Ernesto
    University of Oslo, Norway.
    Kolthoff, Kristian
    University of Mannheim, Germany.
    Kuss, Elena
    University of Mannheim, Germany.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Leopold, Henrik
    Vrije Universiteit Amsterdam, Netherlands.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Meilicke, Christian
    University of Mannheim, Germany.
    Mohammadi, Majid
    Technical University of Delft, Netherlands.
    Montanelli, Stefano
    Universita degli studi di Milano, Italy.
    Pesquita, Catia
    Universidade de Lisboa, Portugal.
    Saveta, Tzanina
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Shvaiko, Pavel
    Informatica Trentina, Trento, Italy.
    Splendiani, Andrea
    Pistoia Alliance Inc., USA.
    Stuckenschmidt, Heiner
    University of Mannheim, Germany.
    Thieblin, Elodie
    Institut de Recherche en Informatique de Toulouse (IRIT), France; Universite Toulouse II, Toulouse, France.
    Todorov, Konstantin
    Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), France; University of Montpellier, France.
    Trojahn, Cassia
    Institut de Recherche en Informatique de Toulouse (IRIT); Universite Toulouse II, Toulouse, France.
    Zamazal, Ondrej
    University of Economics, Prague, Czech Republic.
    Results of the Ontology Alignment Evaluation Initiative 2017, 2017. In: Proceedings of the 12th International Workshop on Ontology Matching co-located with the 16th International Semantic Web Conference (ISWC 2017) / [ed] Pavel Shvaiko, Jerome Euzenat, Ernesto Jimenez-Ruiz, Michelle Cheatham, Oktie Hassanzadeh, Aachen, Germany: CEUR Workshop Proceedings, 2017, p. 61-113. Conference paper (Refereed)
  • 57.
    Achichi, Manel
    et al.
    LIRMM, University of Montpellier, France.
    Cheatham, Michelle
    Wright State University, USA.
    Dragisic, Zlatan
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Euzenat, Jerome
    INRIA, France; Univ. Grenoble Alpes, Grenoble, France.
    Faria, Daniel
    Instituto Gulbenkian de Ciencia, Lisbon, Portugal.
    Ferrara, Alfio
    Universita degli studi di Milano, Italy.
    Flouris, Giorgos
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Fundulaki, Irini
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Harrow, Ian
    Pistoia Alliance Inc., USA.
    Ivanova, Valentina
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Jiménez-Ruiz, Ernesto
    University of Oslo, Norway; University of Oxford, UK.
    Kuss, Elena
    University of Mannheim, Germany.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Leopold, Henrik
    Vrije Universiteit Amsterdam, The Netherlands.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Meilicke, Christian
    University of Mannheim, Germany.
    Montanelli, Stefano
    Universita degli studi di Milano, Italy.
    Pesquita, Catia
    Universidade de Lisboa, Portugal.
    Saveta, Tzanina
    Institute of Computer Science-FORTH, Heraklion, Greece.
    Shvaiko, Pavel
    TasLab, Informatica Trentina, Trento, Italy.
    Splendiani, Andrea
    Novartis Institutes for Biomedical Research, Basel, Switzerland.
    Stuckenschmidt, Heiner
    University of Mannheim, Germany.
    Todorov, Konstantin
    LIRMM, University of Montpellier, France.
    Trojahn, Cassia
    IRIT, Toulouse, France; Université Toulouse II, Toulouse, France.
    Zamazal, Ondřej
    University of Economics, Prague, Czech Republic.
    Results of the Ontology Alignment Evaluation Initiative 2016, 2016. In: Proceedings of the 11th International Workshop on Ontology Matching, Aachen, Germany: CEUR Workshop Proceedings, 2016, p. 73-129. Conference paper (Refereed)
  • 58.
    Achu, Denis
    Linköping University, Department of Computer and Information Science.
    Application of GIS in Temporal and Spatial Analyses of Dengue Fever Outbreak: Case of Rio de Janeiro, Brazil, 2009. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Since Dengue fever (DF) and its related forms, Dengue Hemorrhagic Fever (DHF) and Dengue Shock Syndrome (DSS), have become important health concerns worldwide, it is imperative to develop methods that help in the analysis of incidences. Dengue fever cases are growing in number as the disease spreads widely, affecting a larger number of countries and crossing climatic boundaries. Considering that the disease currently has neither an effective vaccine nor a cure, monitoring for prevention and control is the remaining alternative. GIS and its related technologies offer a wealth of capabilities towards achieving this goal.

    The intention of this study was to develop methods to describe dengue fever outbreaks, taking Rio de Janeiro, Brazil as a case study. Census data with appropriate attributes were carefully studied to find out their potential influence on dengue fever incidence in the various regions or census districts. Dengue incidence data from 2000 to 2008 reported by the municipal secretariat of Rio were used to extract the necessary census districts. Base map files in MapInfo format were converted to shapefiles. Using ArcGIS, it was possible to merge the dengue fever incidence data with the available base map of the City of Rio according to the corresponding census districts. Choropleth maps were then created using different attributes, from which patterns and trends could be used to describe the characteristics of the outbreak with respect to socio-economic conditions. Incidence data were also plotted in Excel to examine temporal variations. Cluster analyses were performed with the Moran's I technique on critical periods and years of dengue outbreak. Using the square root of dengue incidence from January to April of 2002 and 2008, inverse distance was selected as the conceptualised spatial relationship and Euclidean distance as the distance method. More detailed analyses were then performed on the selected critical years of dengue outbreak (2002 and 2008) to investigate the influence of socio-economic variables on dengue incidence per census district.

    Dengue incidence rates appeared to be higher during the rainy and warmer months between December and May. Within the study period 2000-2008, outbreaks of dengue occurred in 2002 and 2008. Some factors included in the census data were influential in dengue prevalence across districts. Satisfactory results can be achieved by using this strategy as a quick method for assessing a potential dengue attack, its spread and possible enabling conditions. The method is advantageous where access to field work is limited and financial means for the acquisition of data and other vital resources are scarce.

    A number of difficulties were encountered during the study, however, leaving areas where further work can bring improvements. More variables would be required to make a complete and comprehensive description of influential conditions and factors. There is still a gap in the analytical tools required for multi-dimensional investigations such as those encountered in this study. It is vital to integrate GPS and remote sensing in order to obtain a variety of up-to-date data with higher resolution.
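
    The global clustering statistic used above (Moran's I with inverse-distance weights) can be sketched in a few lines of Python; the toy grid coordinates and incidence values below are illustrative assumptions, not the thesis data:

    ```python
    import math

    def morans_i(values, coords):
        """Global Moran's I with inverse Euclidean distance weights.
        I near +1: clustered; near 0: random; near -1: dispersed."""
        n = len(values)
        mean = sum(values) / n
        dev = [v - mean for v in values]
        num = 0.0
        w_sum = 0.0
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                w = 1.0 / math.dist(coords[i], coords[j])  # inverse-distance weight
                num += w * dev[i] * dev[j]
                w_sum += w
        den = sum(d * d for d in dev)
        return (n / w_sum) * (num / den)

    # toy 3x3 grid: incidence decreases from left column to right column,
    # i.e. a spatially clustered pattern, so I should come out positive
    coords = [(x, y) for x in range(3) for y in range(3)]
    values = [10, 10, 10, 5, 5, 5, 0, 0, 0]
    print(round(morans_i(values, coords), 3))
    ```

    A positive value confirms spatial clustering of incidence, which is how the thesis uses the statistic on the critical outbreak years.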

    Download full text (pdf)
    FULLTEXT01
  • 59.
    Ackerholm, Nils
    Linköping University, Department of Computer and Information Science.
    Att stödja utan att styra eller störa: Användbarhetsstudie av personlig anpassning på webbsidor [Supporting without steering or disturbing: a usability study of personalization on web pages], 2005. Independent thesis Advanced level (degree of Magister). Student thesis
    Abstract [sv]

    An overabundance of information makes the information we actually want harder to find. To remedy this, attempts have been made to help users find their way through personalization of web pages. The idea is that personalization should make things easy for the user, so that the system has high usability. The purpose of this study is to examine whether this is actually the case.

    Within a project aiming to give families with teenage diabetics IT support, a heuristic evaluation and interviews were carried out to examine a number of personalization functions from a usability perspective.

    On the whole, personalization cannot be said to be either good or bad in usability terms; this is determined by the design of the functions and by the context. The important thing is to support users in their use without steering or disturbing it.

    Download full text (pdf)
    FULLTEXT01
  • 60.
    Acosta, Maribel
    et al.
    Karlsruhe Institute of Technology.
    Hartig, Olaf
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Sequeda, Juan
    Capsenta.
    Federated RDF query processing, 2019. In: Encyclopedia of Big Data Technologies / [ed] Sherif Sakr, Albert Zomaya, Cham: Springer, 2019. Chapter in book (Refereed)
    Abstract [en]

    Federated RDF query processing is concerned with querying a federation of RDF data sources where the queries are expressed using a declarative query language (typically, the RDF query language SPARQL), and the data sources are autonomous and heterogeneous. The current literature in this context assumes that the data and the data sources are semantically homogeneous, while heterogeneity occurs at the level of data formats and access protocols.
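
    The kind of processing described above can be illustrated with two mock endpoints and a local join; the triples, predicate names, and helper functions below are illustrative assumptions, not from the chapter (a real federation would ship SPARQL subqueries to remote SPARQL endpoints):

    ```python
    def endpoint_a(subject=None):
        """Mock autonomous RDF source 1: (person, worksAt, org) triples."""
        triples = [("alice", "worksAt", "liu"), ("bob", "worksAt", "kit")]
        return [t for t in triples if subject is None or t[0] == subject]

    def endpoint_b(subject=None):
        """Mock autonomous RDF source 2: (org, locatedIn, city) triples."""
        triples = [("liu", "locatedIn", "linkoping"), ("kit", "locatedIn", "karlsruhe")]
        return [t for t in triples if subject is None or t[0] == subject]

    def federated_city_of(person):
        """Evaluate the pattern '?person worksAt ?org . ?org locatedIn ?city'
        by sending one subpattern to each source and joining locally."""
        results = []
        for _, _, org in endpoint_a(person):          # subquery to source 1
            for _, _, city in endpoint_b(org):        # bound join against source 2
                results.append(city)
        return results

    print(federated_city_of("alice"))  # -> ['linkoping']
    ```

    The nested loop is a bound (nested-loop) join, the simplest of the join strategies studied in the federated query processing literature.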

  • 61.
    Adegboye, Oluwatayomi Rereloluwa
    et al.
    Univ Mediterranean Karpasia, Turkiye.
    Feda, Afi Kekeli
    European Univ Lefke, Turkiye.
    Ojekemi, Opeoluwa Seun
    Univ Mediterranean Karpasia, Turkiye.
    Agyekum, Ephraim Bonah
    Ural Fed Univ, Russia.
    Hussien, Abdelazim
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Fayoum Univ, Egypt; Appl Sci Private Univ, Jordan; Middle East Univ, Jordan.
    Kamel, Salah
    Aswan Univ, Egypt.
    Chaotic opposition learning with mirror reflection and worst individual disturbance grey wolf optimizer for continuous global numerical optimization, 2024. In: Scientific Reports, E-ISSN 2045-2322, Vol. 14, no. 1, article id 4660. Article in journal (Refereed)
    Abstract [en]

    The effective meta-heuristic technique known as the grey wolf optimizer (GWO) has proven its proficiency. However, due to its reliance on the alpha wolf for guiding the position updates of search agents, the risk of being trapped in a local optimum is notable. Furthermore, during stagnation, the convergence of the other search wolves towards this alpha wolf results in a lack of diversity within the population. Hence, this research introduces an enhanced version of the GWO algorithm designed to tackle numerical optimization challenges. The enhanced GWO, called CMWGWO, incorporates Chaotic Opposition Learning (COL), a Mirror Reflection Strategy (MRS), and Worst Individual Disturbance (WID). MRS, in particular, empowers certain wolves to extend their exploration range, thus enhancing the global search capability. By employing COL, diversification is intensified, leading to reduced solution stagnation, improved search precision, and an overall boost in accuracy. The integration of WID fosters more effective information exchange between the least and most successful wolves, facilitating a successful escape from local optima and significantly enhancing exploration potential. To validate the superiority of CMWGWO, a comprehensive evaluation is conducted on a wide array of 23 benchmark functions spanning dimensions from 30 to 500, ten CEC19 functions, and three engineering problems. The empirical findings vividly demonstrate that CMWGWO surpasses the original GWO in terms of convergence accuracy and robust optimization capabilities.
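
    The baseline GWO update that CMWGWO builds on, plus a simple opposition-based move standing in for the paper's COL/MRS/WID operators, can be sketched as follows; the objective function, bounds, population size, and iteration budget are illustrative assumptions, not the paper's experimental setup:

    ```python
    import random

    def gwo_step(wolves, fitness, a):
        """One grey wolf optimizer iteration: move every wolf toward the
        three best wolves (alpha, beta, delta), taken from the current
        population for simplicity."""
        ranked = sorted(wolves, key=fitness)
        leaders = ranked[:3]
        new_wolves = []
        for x in wolves:
            pos = []
            for d in range(len(x)):
                guided = []
                for leader in leaders:
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a            # |A|>1 explores, |A|<1 exploits
                    C = 2 * r2
                    D = abs(C * leader[d] - x[d])
                    guided.append(leader[d] - A * D)
                pos.append(sum(guided) / 3.0)     # average of the three guides
            new_wolves.append(pos)
        return new_wolves

    def opposition(x, lo, hi):
        """Opposition-based candidate: mirror a position inside the bounds."""
        return [lo + hi - xi for xi in x]

    # toy run on the 2-D sphere function
    random.seed(0)
    sphere = lambda x: sum(xi * xi for xi in x)
    lo, hi = -10.0, 10.0
    wolves = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(8)]
    for t in range(50):
        a = 2 - 2 * t / 50                        # a decays linearly from 2 to 0
        wolves = gwo_step(wolves, sphere, a)
        # keep an opposite candidate whenever it improves the worst wolf
        worst = max(range(len(wolves)), key=lambda i: sphere(wolves[i]))
        opp = opposition(wolves[worst], lo, hi)
        if sphere(opp) < sphere(wolves[worst]):
            wolves[worst] = opp
    best = min(wolves, key=sphere)
    print(sphere(best))
    ```

    The opposition step here is a plain mirror about the bounds; the paper's COL variant additionally drives the mirroring with a chaotic map.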

  • 62.
    Adhikarla, Sridhar
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Arts and Sciences.
    Automated Bug Classification: Bug Report Routing, 2020. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    With growing software technologies, companies tend to develop automated solutions to save time and money. Automated solutions have seen tremendous growth in the software industry and have benefited from extensive machine learning research. Although extensive research has been done in the area of automated bug classification, more precise methods are yet to be developed for the new data being collected. An automated bug classifier processes the content of a bug report and assigns it to the person or department that would fix the problem.

    A bug report typically contains an unstructured text field where the problem is described in detail. A lot of research has been done regarding information extraction from such text fields. This thesis uses a topic modeling technique, Latent Dirichlet Allocation (LDA), and the numerical statistic Term Frequency–Inverse Document Frequency (TF-IDF) to generate two different features from the unstructured text fields of the bug reports. A third set of features was created by concatenating the TF-IDF and LDA features. The class distribution of the data used in this thesis changes over time. To explore whether time has an impact on the prediction, the age of the bug report was introduced as a feature. The importance of this feature, when used along with the LDA and TF-IDF features, was also explored.

    These generated feature vectors were used as predictors to train three different classification models: multinomial logistic regression, dense neural networks, and DO-probit. The classifiers' predictions of the correct department to handle a bug were evaluated on the accuracy and F1-score of the prediction. For comparison, the predictions from a Support Vector Machine (SVM) using a linear kernel were treated as the baseline.

    The best results for the multinomial logistic regression and the dense neural networks classifiers were obtained when the TF-IDF features of the bug reports were used as predictors. Among the three classifiers trained the dense neural network had the best performance, though the classifier was not able to perform better than the SVM baseline. Using age as a feature did not give a significant improvement in the predictive performance of the classifiers, but was able to identify some interesting patterns in the data. Further research on other ways of using the age of the bug reports could be promising.
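
    A minimal pure-Python sketch of the TF-IDF weighting used on the report text; the toy bug-report corpus, tokenization, and the smoothing-free idf formula are illustrative assumptions (the thesis would rely on library implementations):

    ```python
    import math
    from collections import Counter

    def tfidf(docs):
        """TF-IDF vectors for a list of token lists.
        tf = term count / doc length, idf = log(N / number of docs containing term)."""
        n = len(docs)
        df = Counter()
        for doc in docs:
            df.update(set(doc))                   # document frequency per term
        vectors = []
        for doc in docs:
            tf = Counter(doc)
            length = len(doc)
            vectors.append({t: (c / length) * math.log(n / df[t]) for t, c in tf.items()})
        return vectors

    reports = [
        "crash when saving file".split(),
        "ui button misaligned on resize".split(),
        "crash on startup after update".split(),
    ]
    vecs = tfidf(reports)
    # "crash" occurs in two of the three reports, so its idf (and weight) is low;
    # "ui" occurs in only one, so it gets a higher weight in that report
    print(round(vecs[0]["crash"], 3), round(vecs[1]["ui"], 3))
    ```

    The resulting sparse dictionaries correspond to the TF-IDF feature vectors fed to the classifiers.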

    Download full text (pdf)
    Thesis_sriad858
  • 63.
    Adiththan, Arun
    et al.
    CUNY, NY 10019 USA.
    Ramesh, S.
    Gen Motors RandD, MI 48090 USA.
    Samii, Soheil
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Gen Motors RandD, MI 48090 USA.
    Cloud-assisted Control of Ground Vehicles using Adaptive Computation Offloading Techniques, 2018. In: Proceedings of the 2018 Design, Automation and Test in Europe Conference and Exhibition (DATE), IEEE, 2018, p. 589-592. Conference paper (Refereed)
    Abstract [en]

    Existing approaches to designing efficient safety-critical control applications are constrained by limited in-vehicle sensing and computational capabilities. In the context of automated driving, we argue that there is a need to leverage resources "out of the vehicle" to meet the sensing and processing requirements of sophisticated algorithms (e.g., deep neural networks). To realize this, a suitable computation offloading technique that meets the vehicle safety and stability requirements, even in the presence of an unreliable communication network, has to be identified. In this work, we propose an adaptive technique for offloading control computations into the cloud. The proposed approach considers both current network conditions and control application requirements to determine the feasibility of leveraging remote computation and storage resources. As a case study, we describe a cloud-based path-following controller application that leverages crowdsensed data for path planning.
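
    The adaptive feasibility check sketched in the abstract (offload only when current network conditions still satisfy the control requirements) might look like the following rule of thumb; the function name and all timing values are illustrative assumptions, not the paper's actual scheme:

    ```python
    def choose_execution(rtt_ms, cloud_compute_ms, local_compute_ms, deadline_ms):
        """Adaptive offloading rule: run the controller in the cloud only if
        the network round trip plus remote compute time still meets the
        control deadline and beats the (simpler) on-board controller."""
        cloud_total = rtt_ms + cloud_compute_ms
        if cloud_total <= deadline_ms and cloud_total < local_compute_ms:
            return "cloud"
        return "local"

    # good network: offloading the heavy computation pays off
    print(choose_execution(rtt_ms=10, cloud_compute_ms=5, local_compute_ms=40, deadline_ms=50))
    # degraded network: fall back to the on-board controller
    print(choose_execution(rtt_ms=80, cloud_compute_ms=5, local_compute_ms=40, deadline_ms=50))
    ```

    Re-evaluating this decision every control period is what makes the offloading "adaptive" to changing network conditions.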

  • 64.
    Adlercreutz, Anna
    et al.
    Linköping University, Department of Computer and Information Science.
    Ahlstedt, Oskar
    Linköping University, Department of Computer and Information Science.
    Bengtsson, Linnéa
    Linköping University, Department of Computer and Information Science.
    Månsson, Andreas
    Linköping University, Department of Computer and Information Science.
    Romell, Gustaf
    Linköping University, Department of Computer and Information Science.
    Stigson, Isak
    Linköping University, Department of Computer and Information Science.
    Sund, Tobias
    Linköping University, Department of Computer and Information Science.
    Wedlund, Lisa
    Linköping University, Department of Computer and Information Science.
    En praktisk studie kring utvecklingen av webbapplikationen Studentlunchen [A practical study of the development of the web application Studentlunchen], 2015. Independent thesis Basic level (degree of Bachelor), 12 credits / 18 HE credits. Student thesis
    Abstract [en]

    This report describes the experiences and results from the development process of the e-shop Studentlunchen, a web application to be used by students to order lunch on weekdays. In order to make Studentlunchen as user-friendly and intuitive as possible, the e-shop was developed with a focus on functionality and an attractive design. The report gives a technical description of the web application together with a discussion of the developed solutions. Furthermore, the report discusses and evaluates the Scrum working process and how it was used. As a result of complying with the Scrum methodology of delivering working functionality after every sprint, focus was directed towards achieving this instead of implementing many features that were never fully completed. Through thorough development of the initial prototype, the basic idea of the design and functionality of Studentlunchen could be kept throughout the development process. This was one of the great contributors to the project's overall success and helped the team achieve the goal of making a user-friendly web application.

    Download full text (pdf)
    fulltext
  • 65.
    Adnan, Muhammad
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, The Institute of Technology.
    Usability Evaluation of Smart Phone Application Store, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this study, the usability of smartphone application store apps is evaluated. The study was performed on different smartphone operating systems. Data about usability were gathered through surveys and a think-aloud-based experiment. An ANOVA analysis was also performed on the data to identify significant issues. Many smartphone users reported issues with installing, locating and searching for apps, and many had issues with uninstalling apps and navigating the search results when looking for apps. The smartphone operating systems and app stores do not provide seamless navigation, and a lot of content is not tailored for smartphone users.
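
    A one-way ANOVA of the kind mentioned above boils down to comparing between-group and within-group variance; the toy usability ratings and group names below are illustrative assumptions, not the study's data:

    ```python
    def one_way_anova_f(groups):
        """F statistic for a one-way ANOVA: mean square between groups
        divided by mean square within groups."""
        all_vals = [v for g in groups for v in g]
        n = len(all_vals)
        k = len(groups)
        grand = sum(all_vals) / n
        ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
        ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    # toy usability ratings (1-5) for the same task on three app stores
    android = [4, 5, 4, 4]
    ios     = [3, 4, 3, 4]
    windows = [2, 2, 3, 2]
    print(round(one_way_anova_f([android, ios, windows]), 2))
    ```

    A large F (compared against the F distribution with k-1 and n-k degrees of freedom) indicates a significant difference between the operating systems.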

    Download full text (pdf)
    Master Thesis_Muhammad Adnan
  • 66.
    Adok, Claudia
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Retrieval of Cloud Top Pressure, 2016. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    In this thesis, the predictive models multilayer perceptron and random forest are evaluated for predicting cloud top pressure. The dataset used in this thesis contains brightness temperatures, reflectances and other variables useful for determining the cloud top pressure, from the Advanced Very High Resolution Radiometer (AVHRR) instrument on the two satellites NOAA-17 and NOAA-18 during the period 2006-2009. The dataset also contains numerical weather prediction (NWP) variables calculated using mathematical models, as well as observed cloud top pressure and cloud top height estimates from the more accurate instrument on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite. The predicted cloud top pressure is converted into an interpolated cloud top height. The predicted pressure and interpolated height are then evaluated against the more accurate observed cloud top pressure and cloud top height from the instrument on the CALIPSO satellite.

    The predictive models were applied to the data using different sampling strategies to take into account the performance on the individual cloud classes prevalent in the data. The multilayer perceptron was trained using both the original cloud top pressure response and a log-transformed response, which avoids the negative output values that occur with the original response. Results show that overall the random forest model performs better than the multilayer perceptron in terms of root mean squared error and mean absolute error.
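
    The two evaluation metrics named above are straightforward to compute; the example pressure values below are illustrative assumptions, not thesis results:

    ```python
    import math

    def rmse(y_true, y_pred):
        """Root mean squared error: penalizes large errors quadratically."""
        return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

    def mae(y_true, y_pred):
        """Mean absolute error: average magnitude of the errors."""
        return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

    # toy cloud top pressures (hPa): CALIPSO reference vs. model prediction
    observed  = [300.0, 550.0, 850.0, 700.0]
    predicted = [320.0, 500.0, 870.0, 660.0]
    print(rmse(observed, predicted), mae(observed, predicted))
    ```

    RMSE exceeding MAE, as here, signals that a few large errors dominate the error distribution.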

    Download full text (pdf)
    Retrieval of Cloud Top Pressure
  • 67.
    Adolfsson, Dan
    et al.
    NXP Semiconductors Corp., Eindhoven, the Netherlands.
    Siew, Joanna
    Philips Applied Technologies, Eindhoven, the Netherlands.
    Larsson, Erik
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Marinissen, Erik Jan
    IMEC, Leuven, Belgium.
    Deterministic Scan-Chain Diagnosis for Intermittent Faults2009In: European Test Symposium (ETS 2009), Sevilla, Spain, May 25-29, 2009 (Poster)., 2009Conference paper (Other academic)
  • 68.
    Adolfsson, Dan
    et al.
    NXP Semiconductors corp., Eindhoven, the Netherlands.
    Siew, Joanna
    Philips Applied Technologies, Eindhoven, the Netherlands.
    Marinissen, Erik Jan
    IMEC, Leuven, Belgium.
    Larsson, Erik
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    On Scan Chain Diagnosis for Intermittent Faults2009In: IEEE Asian Test Symposium (ATS), Taichung, Taiwan, November 23-26, 2009., 2009, p. 47-54Conference paper (Refereed)
    Abstract [en]

    Diagnosis is increasingly important, not only for individual analysis of failing ICs, but also for high-volume test response analysis which enables yield and test improvement. Scan chain defects constitute a significant fraction of the overall digital defect universe, and hence it is well justified that scan chain diagnosis has received increasing research attention in recent years. In this paper, we address the problem of scan chain diagnosis for intermittent faults. We show that the conventional scan chain test pattern is likely to miss an intermittent fault, or inaccurately diagnose it. We propose an improved scan chain test pattern which we show to be effective. Subsequently, we demonstrate that the conventional bound calculation algorithm is likely to produce wrong results in the case of an intermittent fault. We propose a new lower-bound calculation method which does generate correct and tight bounds, even for an intermittence probability as low as 10%.

    Download full text (pdf)
    FULLTEXT01
  • 69.
    Adolfsson, Fredrik
    Linköping University, Department of Computer and Information Science, Software and Systems.
    A Model-Based Approach to Hands Overlay for Augmented Reality2021Independent thesis Basic level (degree of Bachelor), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    Augmented Reality is a technology where the user sees the environment mixed with a virtual reality containing things such as text, animations, pictures, and videos. Remote guidance is a sub-field of Augmented Reality where guidance is given remotely to identify and solve problems without being there in person. Using hands overlay, the guide can use his or her hand to point and show gestures in real-time. To do this one needs to track the hands and create a video stream that represents them. The video stream of the hands is then overlaid on top of the video from the individual getting help. A solution currently used in the industry is image segmentation, which is done by segmenting an image into foreground and background to decide what to include. This requires distinct differences between the pixels that should be included and the ones that should be discarded to work correctly. This thesis instead investigates a model-based approach to hand tracking, where one tracks points of interest on the hands to build a 3D model of them. A model-based solution is based on sensor data, meaning that it would not have the limitations that image segmentation has. A prototype is developed and integrated into the existing solution. The hand modeling is done in a Unity application and then transferred into the existing application. The results show that there is a clear but modest overhead, so the prototype can run on a normal computer. The prototype works as a proof of concept and shows the potential of a model-based approach.

    Download full text (pdf)
    fulltext
  • 70.
    Adolfsson, Henrik
    Linköping University, Department of Computer and Information Science, Database and information techniques.
    Comparison of Auto-Scaling Policies Using Docker Swarm2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    When deploying software engineering applications in the cloud there are two similar software components used. These are Virtual Machines and Containers. In recent years containers have seen an increase in popularity and usage, in part because of tools such as Docker and Kubernetes. Virtual Machines (VM) have also seen an increase in usage as more companies move to solutions in the cloud with services like Amazon Web Services, Google Compute Engine, Microsoft Azure and DigitalOcean. There are also some solutions using auto-scaling, a technique where VMs are commissioned and deployed as load increases in order to increase application performance. As the application load decreases, VMs are decommissioned to reduce costs.

    In this thesis we implement and evaluate auto-scaling policies that use both Virtual Machines and Containers. We compare four different policies, including two baseline policies. For the non-baseline policies we define a policy where we use a single Container for every Virtual Machine and a policy where we use several Containers per Virtual Machine. To compare the policies we deploy an image serving application and run workloads to test them. We find that the choice of deployment strategy and policy matters for response time and error rate. We also find that deploying applications as described in the method is estimated to take roughly 2 to 3 minutes.
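    The general shape of a threshold-based auto-scaling policy like the ones compared here can be sketched as follows. All thresholds, bounds, and the load trace are invented for illustration; the thesis's actual policies operate on Docker Swarm services and VMs, not this toy function.

```python
# Hypothetical threshold policy: scale out when average load is high,
# scale in when it is low. All numbers here are illustrative.
def scale(replicas, cpu_load, low=0.2, high=0.8, min_r=1, max_r=10):
    """Return the new replica count for one observed load sample."""
    if cpu_load > high and replicas < max_r:
        return replicas + 1      # commission one more container/VM
    if cpu_load < low and replicas > min_r:
        return replicas - 1      # decommission one to reduce costs
    return replicas              # within bounds: keep the current count

# A simulated load spike: the policy scales up, then back down.
replicas = 1
for load in [0.5, 0.9, 0.95, 0.9, 0.4, 0.1, 0.1]:
    replicas = scale(replicas, load)
```

    Real policies typically also add cooldown periods between scaling actions to avoid oscillation, which this sketch omits.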

    Download full text (pdf)
    fulltext
  • 71.
    Adolfsson, Rickard
    et al.
    Linköping University, Department of Computer and Information Science.
    Andersson, Eric
    Linköping University, Department of Computer and Information Science.
    Improving sales forecast accuracy for restaurants2019Independent thesis Basic level (university diploma), 10,5 credits / 16 HE creditsStudent thesis
    Abstract [en]

    Data mining and machine learning techniques are becoming more popular in helping companies with decision-making, due to these processes’ ability to automatically search through very large amounts of data and discover patterns that can be hard to see with human eyes.

    Onslip is one of the companies looking to achieve more value from its data. They provide a cloud-based cash register to small businesses, with a primary focus on restaurants. Restaurants are heavily affected by variations in sales. They sell products with short expiration dates, low profit margins and much of their expenses are tied to personnel. By predicting future demand, it is possible to plan inventory levels and make more effective employee schedules, thus reducing food waste and putting less stress on workers.

    The project described in this report examines how sales forecasts can be improved by incorporating factors known to affect sales in the training of machine learning models. Several different models are trained to predict the future sales of 130 different restaurants, using varying amounts of additional information. The accuracies of the predictions are then compared against each other. Factors known to impact sales have been chosen and categorized into restaurant information, sales history, calendar data and weather information.

    The results show that, by providing additional information, the vast majority of forecasts could be improved significantly. In 7 of 8 examined cases, the addition of more sales factors had an average positive effect on the predictions. The average improvement was 6.88% for product sales predictions, and 26.62% for total sales. The sales history information was most important to the models’ decisions, followed by the calendar category. It also became evident that not every factor that impacts sales had been captured, and further improvement is possible by examining each company individually.

    Download full text (pdf)
    fulltext
  • 72.
    Adolfsson, Sofie
    Linköping University, Department of Computer and Information Science.
    ’The Big Five of Teamwork’ i en flygtrafikledningsdomän: En observationsstudie på Arlanda ATCC2018Independent thesis Basic level (degree of Bachelor), 12 credits / 18 HE creditsStudent thesis
    Abstract [en]

    Today, many industries are dependent on solid teamwork. However, there is a need for objective assessment measures for teamwork, and this project therefore aims to create and test an observation protocol based on the theoretical model ’The Big Five of Teamwork’ compiled by Salas, Sims & Burke (2005). The observation protocol was used to observe teamwork between two air traffic controllers at Arlanda ATCC. After the observations the air traffic controllers answered a survey to capture subjective aspects of the model. A total of 15 structured observations were conducted. The results revealed that it is possible to assess teamwork among air traffic controllers using an observation protocol based on six of the eight components, where team orientation and shared mental models were not included. The components appeared to be more than just observable behavior; thus observations alone do not give a fair picture of a component. The results also showed that the air traffic controllers themselves perceive all components as part of their work. The observations showed that cooperation could look different and differ from team to team, and that air traffic controllers adapt to each other’s needs.

    Download full text (pdf)
    fulltext
  • 73.
    af Ugglas, Axel
    Linköping University, Department of Computer and Information Science.
    Digital communication, Virtual reality & Game development2022Independent thesis Basic level (degree of Bachelor), 12 credits / 18 HE creditsStudent thesis
    Abstract [en]

    This thesis has investigated whether digital communication in Virtual Reality stands the test compared to more traditional means of communication such as text messaging, phone calls, video meetings and face-to-face communication. The chosen measure and theory was Social Presence, which refers to the degree to which one perceives the presence of participants in digital communication. In order to investigate this, a Virtual Reality Escape Room game was developed with multiplayer functionality and speech output capabilities. Four tests were then carried out with groups of 3-4 people, whose aim was to jointly solve a set of puzzles. After a gameplay duration of 25 minutes, the participants were individually handed questionnaires to examine their experience via quantitative methods. This thesis also dives deep into the virtual experience via the networked minds measure and touches a great deal upon game development for scientific purposes. Due to complications, such as limited resources and game design deficiencies, the results were inconclusive beyond the participants’ own experiences. The contribution of this thesis lies not in the results themselves but in the methods and the lessons learned, which could be of value for further examination in this research area and which, it is argued, calls for legitimate interest.

  • 74. Aganovic, Dario
    et al.
    Pandikow, Asmus
    Linköping University, Department of Computer and Information Science, RTSLAB - Real-Time Systems Laboratory. Linköping University, The Institute of Technology.
    Towards Enabling Innovation Processes for Dynamic Extended Manufacturing Enterprises2002In: Proceedings of the Digital Enterprise Technology Conference, 2002Conference paper (Other academic)
  • 75.
    Aghaee Ghaleshahi, Nima
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Thermal Issues in Testing of Advanced Systems on Chip2015Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Many cutting-edge computer and electronic products are powered by advanced Systems-on-Chip (SoC). Advanced SoCs encompass superb performance together with large number of functions. This is achieved by efficient integration of huge number of transistors. Such very large scale integration is enabled by a core-based design paradigm as well as deep-submicron and 3D-stacked-IC technologies. These technologies are susceptible to reliability and testing complications caused by thermal issues. Three crucial thermal issues related to temperature variations, temperature gradients, and temperature cycling are addressed in this thesis.

    Existing test scheduling techniques rely on temperature simulations to generate schedules that meet thermal constraints such as overheating prevention. The difference between the simulated temperatures and the actual temperatures is called temperature error. This error, for past technologies, is negligible. However, advanced SoCs experience large errors due to large process variations. Such large errors have costly consequences, such as overheating, and must be taken care of. This thesis presents an adaptive approach to generate test schedules that handle such temperature errors.

    Advanced SoCs manufactured as 3D stacked ICs experience large temperature gradients. Temperature gradients accelerate certain early-life defect mechanisms. These mechanisms can be artificially accelerated using gradient-based, burn-in like, operations so that the defects are detected before shipping. Moreover, temperature gradients exacerbate some delay-related defects. In order to detect such defects, testing must be performed when appropriate temperature-gradients are enforced. A schedule-based technique that enforces the temperature-gradients for burn-in like operations is proposed in this thesis. This technique is further developed to support testing for delay-related defects while appropriate gradients are enforced.

    The last thermal issue addressed by this thesis is related to temperature cycling. Temperature cycling test procedures are usually applied to safety-critical applications to detect cycling-related early-life failures. Such failures affect advanced SoCs, particularly through-silicon-via structures in 3D-stacked-ICs. An efficient schedule-based cycling-test technique that combines cycling acceleration with testing is proposed in this thesis. The proposed technique fits into existing 3D testing procedures and does not require temperature chambers. Therefore, the overall cycling acceleration and testing cost can be drastically reduced.

    All the proposed techniques have been implemented and evaluated with extensive experiments based on ITC’02 benchmarks as well as a number of 3D stacked ICs. Experiments show that the proposed techniques work effectively and reduce the costs, in particular the costs related to addressing thermal issues and early-life failures. We have also developed a fast temperature simulation technique based on a closed-form solution for the temperature equations. Experiments demonstrate that the proposed simulation technique reduces the schedule generation time by more than half.

    Download full text (pdf)
    fulltext
    Download (pdf)
    omslag
    Download (jpg)
    presentationsbild
  • 76.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    He, Zhiyuan
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Eles, Petru Ion
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Temperature-Aware SoC Test Scheduling Considering Inter-Chip Process Variation2010In: 19th IEEE Asian Test Symposium (ATS10), Shanghai, China, December 1-4, 2010., 2010Conference paper (Refereed)
    Abstract [en]

    Systems on Chip implemented with deep-submicron technologies suffer from two undesirable effects, high power density (and thus high temperature) and high process variation, which must be addressed in the test process. This paper presents two temperature-aware scheduling approaches to maximize the test throughput in the presence of inter-chip process variation. The first approach, an off-line technique, improves the test throughput by extending the traditional scheduling method. The second approach, a hybrid one, improves the test throughput further with a chip classification scheme applied at test time based on the reading of a temperature sensor. Experimental results have demonstrated the efficiency of the proposed methods.

  • 77.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Software and Systems.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Adaptive Temperature-Aware SoC Test Scheduling Considering Process Variation2011In: Digital System Design (DSD), 2011 14th Euromicro Conference on, IEEE, 2011, p. 197-204Conference paper (Refereed)
    Abstract [en]

    High temperature and process variation are undesirable effects for modern systems-on-chip. The high temperature is a prominent issue during test and should be taken care of during the test process. Modern SoCs, affected by large process variation, experience rapid and large temperature deviations and, therefore, a traditional static test schedule which is unaware of these deviations will be suboptimal in terms of speed and/or thermal-safety. This paper presents an adaptive test scheduling method which addresses the temperature deviations and acts accordingly in order to improve the test speed and thermal-safety. The proposed method is divided into a computationally intense offline-phase, and a very simple online-phase. In the offline-phase a schedule tree is constructed, and in the online-phase the appropriate path in the schedule tree is traversed, step by step and based on temperature sensor readings. Experiments have demonstrated the efficiency of the proposed method.

  • 78.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    An Efficient Temperature-Gradient Based Burn-In Technique for 3D Stacked ICs2014In: Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, IEEE conference proceedings, 2014Conference paper (Refereed)
    Abstract [en]

    Burn-in is usually carried out with high temperature and elevated voltage. Since some of the early-life failures depend not only on high temperature but also on temperature gradients, simply raising the temperature of an IC is not sufficient to detect them. This is especially true for 3D stacked ICs, since they usually have very large temperature gradients. The efficient detection of these early-life failures requires that specific temperature gradients are enforced as a part of the burn-in process. This paper presents an efficient method to do so by applying high power stimuli to the cores of the IC under burn-in through the test access mechanism. Therefore, no external heating equipment is required. The scheduling of the heating and cooling intervals to achieve the required temperature gradients is based on thermal simulations and is guided by functions derived from a set of thermal equations. Experimental results demonstrate the efficiency of the proposed method.

  • 79.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    An Integrated Temperature-Cycling Acceleration and Test Technique for 3D Stacked ICs2015In: 20th Asia and South Pacific Design Automation Conference (ASP-DAC 2015), Chiba/Tokyo, Japan, Jan. 19-22, 2015., Institute of Electrical and Electronics Engineers (IEEE), 2015, p. 526-531Conference paper (Refereed)
    Abstract [en]

    In a modern 3D IC, electrical connections between vertically stacked dies are made using through silicon vias. Through silicon vias are subject to undesirable early-life effects such as protrusion as well as void formation and growth. These effects result in opens, resistive opens, and stress induced carrier mobility reduction, and consequently circuit failures. Operating the ICs under extreme temperature cycling can effectively accelerate such early-life failures and make them detectable at the manufacturing test process. An integrated temperature-cycling acceleration and test technique is introduced in this paper that integrates a temperature-cycling acceleration procedure with pre-, mid-, and post-bond tests for 3D ICs. Moreover, it reduces the need for costly temperature chamber based temperature-cycling acceleration procedures. All these result in a reduction in the overall test costs. The proposed method is a schedule-based solution that creates the required temperature cycling effect along with performing the tests. Experimental results demonstrate its efficiency.

  • 80.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Efficient Test Application for Rapid Multi-Temperature Testing2015In: Proceedings of the 25th edition on Great Lakes Symposium on VLSI, Association for Computing Machinery (ACM), 2015, p. 3-8Conference paper (Other academic)
    Abstract [en]

    Different defects may manifest themselves at different temperatures. Therefore, the tests that target such temperature-dependent defects must be applied at the different temperatures appropriate for detecting them. Such a multi-temperature testing scheme applies tests at different required temperatures. It is known that a test's power dissipation depends on the previously applied test. Therefore, the same set of tests, when organized differently, dissipates different amounts of power. The technique proposed in this paper organizes the tests efficiently so that the resulting power levels lead to the required temperatures. Consequently, rapid multi-temperature testing is achieved. Experimental studies demonstrate the efficiency of the proposed technique.

  • 81.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Heuristics for Adaptive Temperature-Aware SoC Test Scheduling Considering Process Variation2011In: The 11th Swedish System-on-Chip Conference, Varberg, Sweden, May 2-3, 2011, 2011Conference paper (Other academic)
    Abstract [en]

    High working temperature and process variation are undesirable effects for modern systems-on-chip. The high temperature should be taken care of during the test. On the other hand, large process variations induce rapid and large temperature deviations causing the traditional static test schedules to be suboptimal in terms of speed and/or thermal-safety. A remedy to this problem is an adaptive test schedule which addresses the temperature deviations by reacting to them. Our adaptive method is divided into a computationally intense offline-phase, and a very simple online-phase. In this paper, heuristics are proposed for the offline phase in which the optimized schedule tree is found. In the online-phase, based on the temperature sensor readings the appropriate path in the schedule tree is traversed. Experiments are made to tune the proposed heuristics and to demonstrate their efficiency.

  • 82.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Process-variation and Temperature Aware SoC Test Scheduling Technique2013In: Journal of electronic testing, ISSN 0923-8174, E-ISSN 1573-0727, Vol. 29, no 4, p. 499-520Article in journal (Refereed)
    Abstract [en]

    High temperature and process variation are undesirable phenomena affecting modern Systems-on-Chip (SoC). High temperature is a well-known issue, in particular during test, and should be taken care of in the test process. Modern SoCs are affected by large process variation and therefore experience large and time-variant temperature deviations. A traditional test schedule which ignores these deviations will be suboptimal in terms of speed or thermal-safety. This paper presents an adaptive test scheduling method which acts in response to the temperature deviations in order to improve the test speed and thermal safety. The method consists of an offline phase and an online phase. In the offline phase a schedule tree is constructed and in the online phase the appropriate path in the schedule tree is traversed based on temperature sensor readings. The proposed technique is designed to keep the online phase very simple by shifting the complexity into the offline phase. In order to efficiently produce high-quality schedules, an optimization heuristic which utilizes a dedicated thermal simulation is developed. Experiments are performed on a number of SoCs including the ITC'02 benchmarks and the experimental results demonstrate that the proposed technique significantly improves the cost of the test in comparison with the best existing test scheduling method.
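    The offline/online split described in this abstract can be sketched as follows. The tree structure, threshold value, and test names below are invented for illustration; in the paper the schedule tree is produced by the optimization heuristic and a dedicated thermal simulation, not written by hand.

```python
# Minimal sketch of the adaptive idea: the offline phase builds a tree of
# schedules, and the cheap online phase follows one branch per sensor reading.
class Node:
    def __init__(self, schedule, threshold=None, cool=None, hot=None):
        self.schedule = schedule      # tests to apply at this step
        self.threshold = threshold    # sensor reading that picks the branch
        self.cool, self.hot = cool, hot

def online_phase(root, read_sensor):
    """Traverse the schedule tree, choosing a branch after each reading."""
    applied = []
    node = root
    while node is not None:
        applied.extend(node.schedule)
        if node.threshold is None:    # leaf: no more branching decisions
            break
        node = node.hot if read_sensor() > node.threshold else node.cool
    return applied

# Toy tree: if the chip runs hot, insert a cooling interval before test B.
tree = Node(["A"], threshold=70.0,
            cool=Node(["B"]),
            hot=Node(["cool-down", "B"]))
applied = online_phase(tree, lambda: 82.5)
assert applied == ["A", "cool-down", "B"]
```

    The traversal does only a comparison per branching point, which matches the paper's goal of keeping the online phase very simple while the offline phase carries the computational cost.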

    Download full text (pdf)
    fulltext
  • 83.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Process-Variation and Temperature Aware SoC Test Scheduling Using Particle Swarm Optimization2011In: The 6th IEEE International Design and Test Workshop (IDT'11), Beirut, Lebanon, December 11–14, 2011., IEEE , 2011Conference paper (Refereed)
    Abstract [en]

    High working temperature and process variation are undesirable effects for modern systems-on-chip. It is well recognized that the high temperature should be taken care of during the test process. Since large process variations induce rapid and large temperature deviations, traditional static test schedules are suboptimal in terms of speed and/or thermal-safety. A solution to this problem is to use an adaptive test schedule which addresses the temperature deviations by reacting to them. We propose an adaptive method that consists of a computationally intense offline-phase and a very simple online-phase. In the offline-phase, a near optimal schedule tree is constructed and in the online-phase, based on the temperature sensor readings, an appropriate path in the schedule tree is traversed. In this paper, particle swarm optimization is introduced into the offline-phase and the implications are studied. Experimental results demonstrate the advantage of the proposed method.
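    For readers unfamiliar with the optimizer, here is a textbook particle swarm sketch minimizing a toy quadratic. This shows the generic algorithm only; the paper applies PSO to schedule-tree construction with its own cost function and encoding, which this sketch does not reproduce.

```python
# Generic particle swarm optimization on a 2-D sphere function.
import random

def pso(cost, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rnd = random.Random(seed)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]          # each particle's best position
    gbest = min(pbest, key=cost)[:]      # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Velocity blends inertia, pull toward the personal best,
                # and pull toward the global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda p: sum(x * x for x in p))
assert sum(x * x for x in best) < 1.0   # converges toward the optimum at (0, 0)
```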

  • 84.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Process-Variation Aware Multi-temperature Test Scheduling2014In: 27th International Conference on VLSI Design and 13th International Conference on Embedded Systems, IEEE conference proceedings, 2014, p. 32-37Conference paper (Refereed)
    Abstract [en]

    Chips manufactured with deep-submicron technologies are prone to large process variation and temperature-dependent defects. In order to provide high test efficiency, the tests for temperature-dependent defects should be applied at appropriate temperature ranges. Existing static scheduling techniques achieve these specified temperatures by scheduling the tests, specially developed heating sequences, and cooling intervals together. Because of the temperature uncertainty induced by process variation, a static test schedule is not capable of applying the tests at the intended temperatures in an efficient manner. As a result, the test cost will be very high. In this paper, an adaptive test scheduling method is introduced that utilizes on-chip temperature sensors in order to adapt the test schedule to the actual temperatures. The proposed method generates a low-cost schedule tree based on the variation statistics and thermal simulations in the design phase. During the test, a chip selects an appropriate schedule dynamically based on temperature sensor readings. A 23% decrease in the likelihood that tests are not applied at the intended temperatures is observed in the experimental studies, in addition to a 20% reduction in test application time.

  • 85.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Temperature-Gradient Based Burn-In for 3D Stacked ICs, 2013. In: The 12th Swedish System-on-Chip Conference (SSoCC 2013), Ystad, Sweden, May 6-7, 2013 (not reviewed, not printed). Conference paper (Other academic)
    Abstract [en]

    3D Stacked IC fabrication, using Through-Silicon Vias, is a promising technology for future integrated circuits. However, large temperature gradients may exacerbate early-life failures to the extent that the commercialization of 3D Stacked ICs is challenged. The effective detection of these early-life failures requires that burn-in is performed when the IC’s temperatures comply with the thermal maps that properly specify the temperature gradients. In this paper, two methods that efficiently generate and maintain the specified thermal maps are proposed. The thermal maps are achieved by applying heating and cooling intervals to the chips under test through test access mechanisms. Therefore, no external heating system is required. The scheduling of the heating and cooling intervals is based on thermal simulations. The schedule generation is guided by functions that are derived from the temperature equations. Experimental results demonstrate the efficiency of the proposed method.

  • 86.
    Aghaee Ghaleshahi, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Temperature-Gradient Based Test Scheduling for 3D Stacked ICs, 2013. In: 2013 IEEE International Conference on Electronics, Circuits, and Systems, IEEE conference proceedings, 2013, p. 405-408. Conference paper (Refereed)
    Abstract [en]

    Defects that are dependent on temperature gradients (e.g., delay faults) introduce a challenge for achieving an effective test process, in particular for 3D ICs. Testing for such defects must be performed when the proper temperature gradients are enforced on the IC; otherwise these defects may escape the test. In this paper, a technique that efficiently heats up the IC during test so that it complies with the specified temperature gradients is proposed. The specified temperature gradients are achieved by applying heating sequences to the cores of the IC under test through the test access mechanism; thus no external heating mechanism is required. The scheduling of the test and heating sequences is based on thermal simulations. The schedule generation is guided by functions derived from the IC's temperature equation. Experimental results demonstrate that the proposed technique offers considerable test time savings.

  • 87.
    Aghaee, Nima
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Faculty of Science & Engineering.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Faculty of Science & Engineering.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, Faculty of Science & Engineering.
    A Test-Ordering Based Temperature-Cycling Acceleration Technique for 3D Stacked ICs, 2015. In: Journal of Electronic Testing, ISSN 0923-8174, E-ISSN 1573-0727, Vol. 31, no 5, p. 503-523. Article in journal (Refereed)
    Abstract [en]

    In a modern three-dimensional integrated circuit (3D IC), vertically stacked dies are interconnected using through-silicon vias. 3D ICs are subject to undesirable temperature-cycling phenomena such as through-silicon-via protrusion as well as void formation and growth. These cycling effects, which occur during early life, result in opens, resistive opens, and stress-induced carrier mobility reduction. Consequently, these early-life failures lead to products that fail shortly after the start of their use. Artificially accelerated temperature cycling, before the manufacturing test, helps to detect such early-life failures that are otherwise undetectable. A test-ordering based temperature-cycling acceleration technique is introduced in this paper that integrates a temperature-cycling acceleration procedure with pre-, mid-, and post-bond tests for 3D ICs. Moreover, it reduces the need for costly temperature-chamber based temperature-cycling acceleration methods. All these result in a reduction in the overall test costs. The proposed method is a test-ordering and schedule based solution that enforces the required temperature-cycling effect and simultaneously performs the tests whenever appropriate. Experimental results demonstrate the efficiency of the proposed technique.

  • 88.
    Aghaee, Nima
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Eles, Petru
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Temperature-Gradient-Based Burn-In and Test Scheduling for 3-D Stacked ICs, 2015. In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, ISSN 1063-8210, E-ISSN 1557-9999, Vol. 23, no 12, p. 2992-3005. Article in journal (Refereed)
    Abstract [en]

    Large temperature gradients exacerbate various types of defects including early-life failures and delay faults. Efficient detection of these defects requires that burn-in and test for delay faults, respectively, are performed when temperature gradients with proper magnitudes are enforced on an Integrated Circuit (IC). This issue is much more important for 3-D stacked ICs (3-D SICs) compared with 2-D ICs because of the larger temperature gradients in 3-D SICs. In this paper, two methods to efficiently enforce the specified temperature gradients on the IC, for burn-in and delay-fault test, are proposed. The specified temperature gradients are enforced by applying high-power stimuli to the cores of the IC under test through the test access mechanism. Therefore, no external heating mechanism is required. The tests, high power stimuli, and cooling intervals are scheduled together based on temperature simulations so that the desired temperature gradients are rapidly enforced. The schedule generation is guided by functions derived from a set of thermal equations. The experimental results demonstrate the efficiency of the proposed methods.

  • 89.
    Agheli, Pouya
    et al.
    EURECOM, France.
    Pappas, Nikolaos
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Kountouris, Marios
    EURECOM, France.
    Semantic Source Coding for Two Users with Heterogeneous Goals, 2022. In: 2022 IEEE Global Communications Conference (GLOBECOM 2022), IEEE, 2022, p. 4983-4988. Conference paper (Refereed)
    Abstract [en]

    We study a multiuser system in which an information source provides status updates to two monitors with heterogeneous goals. Semantic filtering is first performed to select the most useful realizations for each monitor. Packets are then encoded and sent so that each monitor can timely fulfill its goal. In this regard, some realizations are important for both monitors, while every other realization is informative for only one monitor. We determine the optimal real codeword lengths assigned to the selected packet arrivals in the sense of maximizing a weighted sum of semantics-aware utility functions for the two monitors. Our analytical and numerical results provide the optimal design parameters for different arrival rates and highlight the improvement in timely status update delivery using semantic filtering and source coding.

  • 90.
    Aghighi, Meysam
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Computational Complexity of some Optimization Problems in Planning, 2017. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Automated planning is known to be computationally hard in the general case. Propositional planning is PSPACE-complete and first-order planning is undecidable. One method for analyzing the computational complexity of planning is to study restricted subsets of planning instances, with the aim of differentiating instances with varying complexity. We use this methodology for studying the computational complexity of planning. Finding new tractable (i.e. polynomial-time solvable) problems has been a particularly important goal for researchers in the area. The reason behind this is not only to differentiate between easy and hard planning instances, but also to use polynomial-time solvable instances in order to construct better heuristic functions and improve planners. We identify a new class of tractable cost-optimal planning instances by restricting the causal graph. We study the computational complexity of oversubscription planning (such as the net-benefit problem) under various restrictions and reveal strong connections with classical planning. Inspired by this, we present a method for compiling oversubscription planning problems into the ordinary plan existence problem. We further study the parameterized complexity of cost-optimal and net-benefit planning under the same restrictions and show that the choice of numeric domain for the action costs has a great impact on the parameterized complexity. We finally consider the parameterized complexity of certain problems related to partial-order planning. In some applications, less restricted plans than total-order plans are needed; therefore, partial-order plans are used instead. When dealing with partial-order plans, one important question is how to achieve optimal partial-order plans, i.e. plans with the highest degree of freedom according to some notion of flexibility. We study several optimization problems for partial-order plans, such as finding a minimum deordering or reordering, and finding the minimum parallel execution length.

    List of papers
    1. Oversubscription planning: Complexity and compilability
    2014 (English). In: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AI Access Foundation, 2014, Vol. 3, p. 2221-2227. Conference paper, Published paper (Refereed)
    Abstract [en]

    Many real-world planning problems are oversubscription problems, where not all goals are simultaneously achievable and the planner needs to find a feasible subset. We present complexity results for the so-called partial satisfaction and net benefit problems under various restrictions; this extends previous work by van den Briel et al. Our results reveal strong connections among these problems and with classical planning. We also present a method for efficiently compiling oversubscription problems into the ordinary plan existence problem; this can be viewed as a continuation of earlier work by Keyder and Geffner.

    Place, publisher, year, edition, pages
    AI Access Foundation, 2014
    National Category
    Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-116727 (URN); 000485439702031; 2-s2.0-84908192348 (Scopus ID); 9781577356790 (ISBN)
    Conference
    28th AAAI Conference on Artificial Intelligence, AAAI 2014, 26th Innovative Applications of Artificial Intelligence Conference, IAAI 2014 and the 5th Symposium on Educational Advances in Artificial Intelligence, EAAI 2014
    Available from: 2015-04-09 Created: 2015-04-02 Last updated: 2020-06-29
    2. Tractable Cost-Optimal Planning over Restricted Polytree Causal Graphs
    2015 (English). In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI Press, 2015. Conference paper, Published paper (Refereed)
    Abstract [en]

    Causal graphs are widely used to analyze the complexity of planning problems. Many tractable classes have been identified with their aid and state-of-the-art heuristics have been derived by exploiting such classes. In particular, Katz and Keyder have studied causal graphs that are hourglasses (a generalization of forks and inverted forks) and shown that the corresponding cost-optimal planning problem is tractable under certain restrictions. We continue this work by studying polytrees (a generalization of hourglasses) under similar restrictions. We prove tractability of cost-optimal planning by providing an algorithm based on a novel notion of variable isomorphism. Our algorithm also sheds light on the k-consistency procedure for identifying unsolvable planning instances. We speculate that this may, at least partially, explain why merge-and-shrink heuristics have been successful for recognizing unsolvable instances.

    Place, publisher, year, edition, pages
    AAAI Press, 2015
    Series
    Proceedings of the AAAI Conference on Artificial Intelligence, ISSN 2159-5399, E-ISSN 2374-3468
    Keywords
    automated planning, causal graph, polynomial-time algorithm, cost-optimal planning, polytree
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-118729 (URN); 000485625503038; 978-1-57735-703-2 (ISBN)
    Conference
    29th AAAI Conference on Artificial Intelligence (AAAI-15), January 25–30, Austin, TX, USA
    Funder
    CUGS (National Graduate School in Computer Science)
    Available from: 2015-06-03 Created: 2015-06-03 Last updated: 2022-02-18
    3. Cost-optimal and Net-benefit Planning: A Parameterised Complexity View
    2015 (English). In: 24th International Joint Conference on Artificial Intelligence (IJCAI-15), 2015, p. 1487-1493. Conference paper, Published paper (Refereed)
    Abstract [en]

    Cost-optimal planning (COP) uses action costs and asks for a minimum-cost plan. It is sometimes assumed that there is no harm in using actions with zero cost or rational cost. Classical complexity analysis does not contradict this assumption; planning is PSPACE-complete regardless of whether action costs are positive or non-negative, integer or rational. We thus apply parameterised complexity analysis to shed more light on this issue. Our main results are the following. COP is W[2]-complete for positive integer costs, i.e. it is no harder than finding a minimum-length plan, but it is para-NP-hard if the costs are non-negative integers or positive rationals. This is a very strong indication that the latter cases are substantially harder. Net-benefit planning (NBP) additionally assigns goal utilities and asks for a plan with maximum difference between its utility and its cost. NBP is para-NP-hard even when action costs and utilities are positive integers, suggesting that it is harder than COP. In addition, we also analyse a large number of subclasses, using both the PUBS restrictions and restricting the number of preconditions and effects.

    Place, publisher, year, edition, pages
    IJCAI-INT JOINT CONF ARTIF INTELL, ALBERT-LUDWIGS UNIV FREIBURG GEORGES-KOHLER-ALLEE, INST INFORMATIK, GEB 052, FREIBURG, D-79110, GERMANY, 2015
    National Category
    Transport Systems and Logistics
    Identifiers
    urn:nbn:se:liu:diva-128181 (URN); 000442637801080; 9781577357384 (ISBN)
    Conference
    24th International Joint Conference on Artificial Intelligence (IJCAI-15), Buenos Aires, Argentina, Jul 25-31, 2015
    Funder
    CUGS (National Graduate School in Computer Science), 1054; Swedish Research Council, 621-2014-4086
    Available from: 2016-05-20 Created: 2016-05-20 Last updated: 2019-07-03. Bibliographically approved
    4. A Multi-parameter Complexity Analysis of Cost-optimal and Net-benefit Planning
    2016 (English). In: Twenty-Sixth International Conference on Automated Planning and Scheduling, King's College, London, June 12-17, 2016 / [ed] Amanda Coles, Andrew Coles, Stefan Edelkamp, Daniele Magazzeni, Scott Sanner, AAAI Press, 2016, p. 2-10. Conference paper, Published paper (Refereed)
    Abstract [en]

    Aghighi and Bäckström have previously studied cost-optimal planning (COP) and net-benefit planning (NBP) for three action cost domains: the positive integers (Z_+), the non-negative integers (Z_0) and the positive rationals (Q_+). These were indistinguishable under standard complexity analysis for both problems, but separated for COP using parameterised complexity analysis. With the plan cost, k, as parameter, COP was W[2]-complete for Z_+, but para-NP-hard for both Z_0 and Q_+, i.e. presumably much harder. NBP was para-NP-hard for all three domains, thus remaining inseparable. We continue by considering combinations with several additional parameters and also the non-negative rationals (Q_0). Examples of new parameters are the plan length, l, and the largest denominator of the action costs, d. Our findings include: (1) COP remains W[2]-hard for all domains, even if combining all parameters; (2) COP for Z_0 is in W[2] for the combined parameter {k,l}; (3) COP for Q_+ is in W[2] for {k,d}; and (4) COP for Q_0 is in W[2] for {k,d,l}. For NBP we consider further additional parameters, where the most crucial one for reducing complexity is the sum of variable utilities. Our results help to understand the previous results, e.g. the separation between Z_+ and Q_+ for COP, and to refine the previous connections with empirical findings.

    Place, publisher, year, edition, pages
    AAAI Press, 2016
    Keywords
    cost-optimal planning, parameterised complexity, numeric domains
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-136278 (URN); 000492982200001; 9781577357575 (ISBN)
    Conference
    Twenty-Sixth International Conference on Automated Planning and Scheduling (ICAPS-16), London, UK, June 12–17, 2016
    Available from: 2017-04-05 Created: 2017-04-05 Last updated: 2020-06-29. Bibliographically approved
    5. Plan Reordering and Parallel Execution -- A Parameterized Complexity View
    2017 (English). Conference paper, Published paper (Refereed)
    Abstract [en]

    Bäckström has previously studied a number of optimization problems for partial-order plans, like finding a minimum deordering (MCD) or reordering (MCR), and finding the minimum parallel execution length (PPL), which are all NP-complete. We revisit these problems, but apply parameterized complexity analysis rather than standard complexity analysis. We consider various parameters, including both the original and desired size of the plan order, as well as its width and height. Our findings include that MCD and MCR are W[2]-hard and in W[P] when parameterized with the desired order size, and MCD is fixed-parameter tractable (fpt) when parameterized with the original order size. Problem PPL is fpt if parameterized with the size of the non-concurrency relation, but para-NP-hard in most other cases. We also consider this problem when the number (k) of agents, or processors, is restricted, finding that this number is a crucial parameter; this problem is fixed-parameter tractable with the order size, the parallel execution length and k as parameters, but para-NP-hard without k as a parameter.

    Place, publisher, year, edition, pages
    AAAI Press, 2017
    Keywords
    Partially ordered plan, Parameterized complexity, Complexity of planning, Plan reordering, Parallel plan execution
    National Category
    Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-136279 (URN); 000485630703082
    Conference
    Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)
    Available from: 2017-04-05 Created: 2017-04-05 Last updated: 2020-06-29. Bibliographically approved
  • 91.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    A Multi-parameter Complexity Analysis of Cost-optimal and Net-benefit Planning, 2016. In: Twenty-Sixth International Conference on Automated Planning and Scheduling, King's College, London, June 12-17, 2016 / [ed] Amanda Coles, Andrew Coles, Stefan Edelkamp, Daniele Magazzeni, Scott Sanner, AAAI Press, 2016, p. 2-10. Conference paper (Refereed)
    Abstract [en]

    Aghighi and Bäckström have previously studied cost-optimal planning (COP) and net-benefit planning (NBP) for three action cost domains: the positive integers (Z_+), the non-negative integers (Z_0) and the positive rationals (Q_+). These were indistinguishable under standard complexity analysis for both problems, but separated for COP using parameterised complexity analysis. With the plan cost, k, as parameter, COP was W[2]-complete for Z_+, but para-NP-hard for both Z_0 and Q_+, i.e. presumably much harder. NBP was para-NP-hard for all three domains, thus remaining inseparable. We continue by considering combinations with several additional parameters and also the non-negative rationals (Q_0). Examples of new parameters are the plan length, l, and the largest denominator of the action costs, d. Our findings include: (1) COP remains W[2]-hard for all domains, even if combining all parameters; (2) COP for Z_0 is in W[2] for the combined parameter {k,l}; (3) COP for Q_+ is in W[2] for {k,d}; and (4) COP for Q_0 is in W[2] for {k,d,l}. For NBP we consider further additional parameters, where the most crucial one for reducing complexity is the sum of variable utilities. Our results help to understand the previous results, e.g. the separation between Z_+ and Q_+ for COP, and to refine the previous connections with empirical findings.

  • 92.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Cost-optimal and Net-benefit Planning: A Parameterised Complexity View, 2015. In: 24th International Joint Conference on Artificial Intelligence (IJCAI-15), IJCAI, 2015, p. 1487-1493. Conference paper (Refereed)
    Abstract [en]

    Cost-optimal planning (COP) uses action costs and asks for a minimum-cost plan. It is sometimes assumed that there is no harm in using actions with zero cost or rational cost. Classical complexity analysis does not contradict this assumption; planning is PSPACE-complete regardless of whether action costs are positive or non-negative, integer or rational. We thus apply parameterised complexity analysis to shed more light on this issue. Our main results are the following. COP is W[2]-complete for positive integer costs, i.e. it is no harder than finding a minimum-length plan, but it is para-NP-hard if the costs are non-negative integers or positive rationals. This is a very strong indication that the latter cases are substantially harder. Net-benefit planning (NBP) additionally assigns goal utilities and asks for a plan with maximum difference between its utility and its cost. NBP is para-NP-hard even when action costs and utilities are positive integers, suggesting that it is harder than COP. In addition, we also analyse a large number of subclasses, using both the PUBS restrictions and restricting the number of preconditions and effects.

  • 93.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Plan Reordering and Parallel Execution -- A Parameterized Complexity View, 2017. Conference paper (Refereed)
    Abstract [en]

    Bäckström has previously studied a number of optimization problems for partial-order plans, like finding a minimum deordering (MCD) or reordering (MCR), and finding the minimum parallel execution length (PPL), which are all NP-complete. We revisit these problems, but apply parameterized complexity analysis rather than standard complexity analysis. We consider various parameters, including both the original and desired size of the plan order, as well as its width and height. Our findings include that MCD and MCR are W[2]-hard and in W[P] when parameterized with the desired order size, and MCD is fixed-parameter tractable (fpt) when parameterized with the original order size. Problem PPL is fpt if parameterized with the size of the non-concurrency relation, but para-NP-hard in most other cases. We also consider this problem when the number (k) of agents, or processors, is restricted, finding that this number is a crucial parameter; this problem is fixed-parameter tractable with the order size, the parallel execution length and k as parameters, but para-NP-hard without k as a parameter.

  • 94.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Ståhlberg, Simon
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Analysing Approximability and Heuristics in Planning Using the Exponential-Time Hypothesis, 2016. In: ECAI 2016: 22nd European Conference on Artificial Intelligence, IOS Press, 2016, Vol. 285, p. 184-192. Conference paper (Refereed)
    Abstract [en]

    Cost-optimal planning has become a very well-studied topic within planning. Needless to say, cost-optimal planning has proven to be computationally hard both theoretically and in practice. Since cost-optimal planning is an optimisation problem, it is natural to analyse it from an approximation point of view. Even though such studies may be valuable in themselves, additional motivation is provided by the fact that there is a very close link between approximability and the performance of heuristics used in heuristic search. The aim of this paper is to analyse approximability (and indirectly the performance of heuristics) with respect to lower time bounds. That is, we are not content with merely classifying problems into complexity classes - we also study their time complexity. This is achieved by replacing standard complexity-theoretic assumptions (such as P ≠ NP) with the exponential time hypothesis (ETH). This enables us to analyse, for instance, the performance of the h+ heuristic and obtain general trade-off results that correlate approximability bounds with bounds on time complexity.

  • 95.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Bäckström, Christer
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Ståhlberg, Simon
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Refining complexity analyses in planning by exploiting the exponential time hypothesis, 2016. In: Annals of Mathematics and Artificial Intelligence, ISSN 1012-2443, E-ISSN 1573-7470, Vol. 78, no 2, p. 157-175. Article in journal (Refereed)
    Abstract [en]

    The use of computational complexity in planning, and in AI in general, has always been a disputed topic. A major problem with ordinary worst-case analyses is that they do not provide any quantitative information: they do not tell us much about the running time of concrete algorithms, nor do they tell us much about the running time of optimal algorithms. We address problems like this by presenting results based on the exponential time hypothesis (ETH), which is a widely accepted hypothesis concerning the time complexity of 3-SAT. By using this approach, we provide, for instance, almost matching upper and lower bounds on the time complexity of propositional planning.

  • 96.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Oversubscription planning: Complexity and compilability, 2014. In: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AI Access Foundation, 2014, Vol. 3, p. 2221-2227. Conference paper (Refereed)
    Abstract [en]

    Many real-world planning problems are oversubscription problems, where not all goals are simultaneously achievable and the planner needs to find a feasible subset. We present complexity results for the so-called partial satisfaction and net benefit problems under various restrictions; this extends previous work by van den Briel et al. Our results reveal strong connections both among these problems and with classical planning. We also present a method for efficiently compiling oversubscription problems into the ordinary plan existence problem; this can be viewed as a continuation of earlier work by Keyder and Geffner.
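As a toy illustration of the net benefit problem this abstract refers to (not the authors' method): given a utility per goal and the cheapest plan cost for each achievable goal subset, pick the subset maximizing total utility minus plan cost. The brute-force sketch below is purely illustrative; all names and numbers are hypothetical.

```python
from itertools import combinations

def best_net_benefit(goals, utility, plan_cost):
    """Brute-force net benefit: find the goal subset S maximizing
    sum of utilities of S minus the cost of a cheapest plan achieving S.
    plan_cost maps a frozenset of goals to its cheapest plan cost;
    subsets missing from plan_cost are treated as unachievable."""
    best_subset, best_value = frozenset(), 0.0  # doing nothing yields 0
    for r in range(1, len(goals) + 1):
        for subset in combinations(goals, r):
            s = frozenset(subset)
            value = sum(utility[g] for g in s) - plan_cost.get(s, float('inf'))
            if value > best_value:
                best_subset, best_value = s, value
    return best_subset, best_value
```

This exhaustive search is exponential in the number of goals, which is consistent with the hardness results the paper establishes; the point of the paper's compilation method is precisely to avoid solving the problem this naively.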

  • 97.
    Aghighi, Meysam
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Jonsson, Peter
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Ståhlberg, Simon
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, The Institute of Technology.
    Tractable Cost-Optimal Planning over Restricted Polytree Causal Graphs, 2015. In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI Press, 2015. Conference paper (Refereed)
    Abstract [en]

    Causal graphs are widely used to analyze the complexity of planning problems. Many tractable classes have been identified with their aid, and state-of-the-art heuristics have been derived by exploiting such classes. In particular, Katz and Keyder have studied causal graphs that are hourglasses (which generalize forks and inverted forks) and shown that the corresponding cost-optimal planning problem is tractable under certain restrictions. We continue this work by studying polytrees (which generalize hourglasses) under similar restrictions. We prove tractability of cost-optimal planning by providing an algorithm based on a novel notion of variable isomorphism. Our algorithm also sheds light on the k-consistency procedure for identifying unsolvable planning instances. We speculate that this may, at least partially, explain why merge-and-shrink heuristics have been successful for recognizing unsolvable instances.
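For orientation (terminology only, not the paper's algorithm): a polytree is a directed graph whose underlying undirected graph is a tree, or a forest if disconnected. A minimal union-find check, with hypothetical edge lists:

```python
def is_polytree(n, edges):
    """True iff the directed graph on vertices 0..n-1 with the given
    directed edges has an acyclic underlying undirected graph.
    Uses union-find: an edge joining two already-connected vertices
    would close an undirected cycle."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # undirected cycle found
        parent[ru] = rv
    return True
```

A fork (one root with several children) and an inverted fork both pass this check, which matches the abstract's remark that polytrees generalize those shapes.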

  • 98.
    Aghili, Mohammed
    Linköping University, Department of Computer and Information Science.
    Jämförelse av aggregeringswebbdelar i MOSS 2007, 2010. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [sv]

    A typical feature on the start page of many web portals is the web part that presents, for example, the latest blog posts, news items, or events added to the site. These features are known as aggregation web parts. Since the start page is visited more often than any other page in the portal, this feature is in turn used very frequently. This work aims to identify a number of methods that can be used to implement this feature and to determine how well these web parts perform. This report presents both the methods that were found and the results of systematic testing of them. The test results are presented in an easily surveyable form. Finally, conclusions are drawn from the results. The results do not favour any single method; which method is best suited to a given context is largely determined by other factors, such as visitor frequency or the rate of change of the content the method searches through.

  • 99.
    Agyekum, Ephraim Bonah
    et al.
    Ural Fed Univ, Russia.
    Ampah, Jeffrey Dankwa
    Tianjin Univ, Peoples R China.
    Khan, Tahir
    Zhejiang Univ, Peoples R China.
    Giri, Nimay Chandra
    Centurion Univ Technol & Management, India.
    Hussien, Abdelazim
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering. Fayoum Univ, Egypt; Appl Sci Private Univ, Jordan; Middle East Univ, Jordan.
    Velkin, Vladimir Ivanovich
    Ural Fed Univ, Russia.
    Mehmood, Usman
    Bahcesehir Cyprus Univ, Turkiye; Univ Punjab, Pakistan.
    Kamel, Salah
    Aswan Univ, Egypt.
    Towards a reduction of emissions and cost-savings in homes: Techno-economic and environmental impact of two different solar water heaters, 2024. In: Energy Reports, E-ISSN 2352-4847, Vol. 11, p. 963-981. Article in journal (Refereed)
    Abstract [en]

    South Africa currently has the highest carbon emission intensity per kilowatt of electricity generation globally, and its government intends to reduce it. Among the measures taken by the government is a reduction of emissions in the building sector using solar water heating (SWH) systems. However, there is currently no study in the country that comprehensively assesses the technical, economic, and environmental impact of SWH systems across the country. This study therefore used the System Advisor Model (SAM) to model two different SWH technologies (i.e., flat plate (FPC) and evacuated tube (EPC) SWH) at five locations (i.e., Pretoria, Upington, Kimberley, Durban, and Cape Town) strategically selected across the country. According to the study, the optimum azimuth for both the evacuated tube and flat plate SWH systems in South Africa is 0 degrees. Installing FPC and EPC systems at the different locations would yield payback periods of 3.2 to 4.4 years and 3.5 to 4.3 years, respectively. By comparison, the levelized cost of energy for the FPC and EPC systems ranges from 7.47 to 9.62 cents/kWh and 7.66 to 9.24 cents/kWh, respectively, depending on where the SWH system is located. Depending on where the facility is located, the annual cost savings for the FPC system would be between $486 and $625, while the EPC system would save between $529 and $638. Using SWHs can reduce CO2 emissions annually by 75-77% for the evacuated tube system and 69-76% for the flat plate system, depending on the location.
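The payback periods quoted in this abstract follow from simple payback arithmetic. The sketch below pairs the abstract's EPC savings range with a hypothetical installed cost purely to illustrate the relationship; simple payback ignores discounting, unlike the levelized cost figures also reported.

```python
def simple_payback_years(capital_cost_usd, annual_savings_usd):
    """Simple (undiscounted) payback period: years until cumulative
    annual savings equal the upfront capital cost."""
    if annual_savings_usd <= 0:
        raise ValueError("annual savings must be positive")
    return capital_cost_usd / annual_savings_usd

# Hypothetical EPC system: $2000 installed cost (assumed), $550/year
# savings (within the abstract's $529-$638 range).
payback = simple_payback_years(2000.0, 550.0)  # about 3.6 years
```

With these assumed numbers the result falls inside the 3.5-4.3 year EPC range the study reports, which is all the sketch is meant to show.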

  • 100.
    Ahl, Linda
    Linköping University, Department of Computer and Information Science.
    Hur fungerar datorer?: En fallstudie av att utveckla pedagogisk multimedia för ett datorhistoriskt museum, 2004. Independent thesis Basic level (professional degree). Student thesis
    Abstract [sv]

    Few people know how computers work, what components they are built from, and how these components interact. In this thesis project, a prototype of a multimedia presentation was developed. The presentation will be placed in a computer history museum, where its purpose will be to help people understand how computers work. The prototype is based on images and simple animations that explain the interaction and function of the various computer components, for instance by showing scenarios that many people are likely to recognise from everyday life.

    The goal of the work has been to acquire knowledge about how multimedia can be used to illustrate technical processes, and about how multimedia presentations should be developed. A systems development method adapted to this type of system was therefore devised and used in the development of the prototype.

    The systems development method follows an iterative model, since an iterative way of working has proven preferable to a linear one in multimedia development. This is because, in this kind of work, where the requirements and wishes for the final product are usually unclear at the outset, it is difficult to move through the development process in a strictly linear fashion, i.e. to finish one step completely before the next begins.

    Regarding multimedia, one conclusion is that it can be used to advantage to show and explain technical processes, and that it appears to be a useful aid in education and museum activities.
