  • Public defence: 2018-12-14 09:00 Hasselquistsalen, Linköping
    Halvarsson, Camilla
    Linköping University, Department of Clinical and Experimental Medicine, Division of Microbiology and Molecular Medicine. Linköping University, Faculty of Medicine and Health Sciences.
    Hypoxia inducible factor 1 alpha: dependent and independent regulation of hematopoietic stem cells and leukemia, 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis studied the role of low oxygen levels, or hypoxia, in hematopoietic stem cells (HSCs) and how, at the molecular level, hypoxia regulates stem cell maintenance and protects against oxidative stress induced by reactive oxygen species (ROS). HSCs reside within the bone marrow in specific niches created by a unique vascularized environment, which is suggested to be hypoxic and crucial for HSCs by maintaining a quiescent cell-cycle state and by redirecting metabolism away from the mitochondria to glycolysis. The niches are also believed to limit the production of ROS, which could damage DNA and disrupt stem cell features. The hypoxia-responsive protein hypoxia-inducible factor 1 alpha (HIF-1α) is a major regulator of the hypoxic cell response in HSCs as well as in leukemic stem cells. Both these cell types are thought to reside in the bone marrow, where they are protected from stress and chemotherapy by niche cells and hypoxia.

    The thesis demonstrates that pyruvate dehydrogenase kinase 1 regulates a metabolic shift to glycolysis and maintains the engraftment potential of both HSCs and multipotent progenitors upon transplantation. Furthermore, we wanted to determine whether HIF-1α or other signaling pathways are involved in protecting HSCs from ROS-induced cell death. Neither overexpression, silencing, nor a knockout mouse model of Hif-1α identified HIF-1α as important for protecting HSCs from the oxidative stress-induced cell death that follows inhibition of synthesis of the antioxidant glutathione. Gene expression analysis instead identified the transcription factor nuclear factor kappa B (NF-κB) as induced by hypoxia. By studying NF-κB signaling, we found increased NF-κB activity in cells cultured in hypoxia compared to normoxia. Suppression of inhibitor of kappa B indicated a putative role of NF-κB signaling in hypoxia-induced protection against oxidative stress. The findings show that hypoxia-induced protection against elevated levels of ROS upon glutathione depletion seems to be attributable to activation of the NF-κB signaling pathway, independently of HIF-1α.

    To address the question of whether hypoxic in vitro cultures support maintenance and promote HSC expansion, we performed a limiting-dilution transplantation assay. Our data indicate that hypoxic cultures maintain more long-term-reconstituting HSCs than normoxia, but this could not be confirmed statistically. Finally, we wanted to study the mechanisms by which hypoxia protects against chemotherapy. We could demonstrate that hypoxic culture protects leukemic cell lines against apoptosis induced by chemotherapy or by inhibitors used for treatment of leukemia. This multidrug resistance seems to be mediated by ATP-binding cassette transporter genes, which are upregulated by hypoxia and whose inhibition has been shown to increase chemosensitivity. In addition, HIF-1α was upregulated in the leukemic cell lines in hypoxia, and its inhibition increased the sensitivity to chemotherapy, indicating a role in inducing chemotherapy resistance.

    In conclusion, the results presented in this thesis stress the importance of hypoxia in regulating metabolism, the oxidative-stress response, and the maintenance of both HSCs and leukemic cells, especially through the critical transcription factors HIF-1α and NF-κB and their target genes.

    List of papers
    1. Pyruvate dehydrogenase kinase 1 is essential for transplantable mouse bone marrow hematopoietic stem cell and progenitor function
    2017 (English). In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 12, no 2, article id e0171714. Article in journal (Refereed), Published
    Abstract [en]

    Background: Accumulating evidence suggests that hypoxic areas in the bone marrow are crucial for maintenance of hematopoietic stem cells (HSCs) by supporting a quiescent state of cell cycle and regulating the transplantation capacity of long-term (LT)-HSCs. In addition, HSCs seem to express a metabolic profile of energy production away from mitochondrial oxidative phosphorylation in favor of glycolysis. Under oxygen deprivation, hypoxia inducible factor 1 alpha (HIF-1 alpha) is known to induce glycolytic enzymes as well as to suppress mitochondrial energy production by inducing pyruvate dehydrogenase kinase 1 (Pdk1) in most cell types. It has not been established whether PDK1 is essential for HSC function and mediates hypoxia-adapting functions in HSCs. While the Pdk gene family contains four members (Pdk1-4), it was recently shown that Pdk2 and Pdk4 have an important role in regulating LT-HSCs.

    Principal findings: Here we demonstrate that PDK1 activity is crucial for transplantable HSC function. Whereas Pdk1, Pdk2, and Pdk3 transcripts were expressed at higher levels in different subtypes of HSCs compared to differentiated cells, we could not detect any major differences in expression between LT-HSCs and more short-term HSCs and multipotent progenitors. When studying HIF-1 alpha-mediated regulation of Pdk activity in vitro, Pdk1 was the most robust target regulated by hypoxia, whereas Pdk2, Pdk3, and Pdk4 were not affected. In contrast, genetic ablation in a cre-inducible Hif-1 alpha knockout mouse did not support a link between HIF-1 alpha and Pdk1. Silencing of Pdk1 by shRNA lentiviral gene transfer partially impaired progenitor colony formation in vitro and had a strong negative effect on both long-term and short-term engraftment in mice.

    Conclusions: Our study demonstrates that PDK1 has broad effects in hematopoiesis and is a critical factor for engraftment of both HSCs and multipotent progenitors upon transplantation to recipient mice. While Pdk1 was a robust hypoxia-inducible gene mediated by HIF-1 alpha in vitro, we could not find evidence of any in vivo links between Pdk1 and HIF-1 alpha.

    Place, publisher, year, edition, pages
    Public Library of Science, 2017
    National Category
    Cell and Molecular Biology
    Identifiers
    urn:nbn:se:liu:diva-136062 (URN), 10.1371/journal.pone.0171714 (DOI), 000394231800095 (), 28182733 (PubMedID)
    Note

    Funding Agencies|Swedish Research Council; Swedish Cancer Society; Swedish Childhood Cancer Foundation; County Council of Ostergotland; Faculty of Medicine at Linkoping University; Ollie and Elof Ericssons Foundation

    Available from: 2017-03-27 Created: 2017-03-27 Last updated: 2018-10-18
    2. Letter: Hypoxic and normoxic in vitro cultures maintain similar numbers of long-term reconstituting hematopoietic stem cells from mouse bone marrow
    2012 (English). In: Experimental Hematology, ISSN 0301-472X, E-ISSN 1873-2399, Vol. 40, no 11, p. 879-881. Article in journal, Letter (Other academic), Published
    Abstract [en]

    n/a

    Place, publisher, year, edition, pages
    Elsevier, 2012
    National Category
    Medical and Health Sciences
    Identifiers
    urn:nbn:se:liu:diva-85627 (URN), 10.1016/j.exphem.2012.07.005 (DOI), 000310182400001 ()
    Available from: 2012-11-26 Created: 2012-11-26 Last updated: 2018-10-18
  • Public defence: 2018-12-14 09:15 Domteatern, Visualiseringscenter C, Kungsgatan 54, Norrköping
    Miandji, Ehsan
    Linköping University, Department of Science and Technology, Media and Information Technology. Linköping University, Faculty of Science & Engineering.
    Sparse representation of visual data for compression and compressed sensing, 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The ongoing advances in computational photography have introduced a range of new imaging techniques for capturing multidimensional visual data such as light fields, BRDFs, BTFs, and more. A key challenge inherent to such imaging techniques is the large amount of high-dimensional visual data that is produced, often requiring GBs, or even TBs, of storage. Moreover, the utilization of these datasets in real-time applications poses many difficulties due to the large memory footprint. Furthermore, the acquisition of large-scale visual data is very challenging and expensive in most cases. This thesis makes several contributions with regard to acquisition, compression, and real-time rendering of high-dimensional visual data in computer graphics and imaging applications.

    The contributions of this thesis rest on the strong foundation of sparse representations. Numerous applications are presented that utilize sparse representations for compression and compressed sensing of visual data. Specifically, we present a single-sensor light field camera design, a compressive rendering method, a real-time precomputed photorealistic rendering technique, light field (video) compression and real-time rendering, compressive BRDF capture, and more. Another key contribution of this thesis is a general framework for compression and compressed sensing of visual data, regardless of dimensionality. As a result, any type of discrete visual data with arbitrary dimensionality can be captured, compressed, and rendered in real time.

    This thesis makes two theoretical contributions. In particular, uniqueness conditions for recovering a sparse signal under an ensemble of multidimensional dictionaries are presented. The theoretical results discussed here are useful for designing efficient capturing devices for multidimensional visual data. Moreover, we derive the probability of successful recovery of a noisy sparse signal using OMP, one of the most widely used algorithms for solving compressed sensing problems.
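
    For orientation, OMP itself is the simple greedy procedure sketched below. This is a generic textbook version in Python, added for illustration; it is not the implementation analyzed in the thesis, and the function and variable names are ours.

        import numpy as np

        def omp(A, y, k):
            """Recover a k-sparse x from y ~ A @ x (columns of A assumed unit-norm)."""
            residual = y.copy()
            support = []
            coef = np.zeros(0)
            for _ in range(k):
                # Greedy step: pick the column most correlated with the residual.
                j = int(np.argmax(np.abs(A.T @ residual)))
                support.append(j)
                # Least-squares fit on the current support, then update the residual.
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x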

    List of papers
    1. OMP-based DOA estimation performance analysis
    2018 (English). In: Digital Signal Processing (Print), ISSN 1051-2004, E-ISSN 1095-4333, Vol. 79, p. 57-65. Article in journal (Refereed), Published
    Abstract [en]

    In this paper, we present a new performance guarantee for Orthogonal Matching Pursuit (OMP) in the context of the Direction of Arrival (DOA) estimation problem. For the first time, the effect of parameters such as the sensor array configuration, as well as the signal-to-noise ratio and the dynamic range of the sources, is thoroughly analyzed. In particular, we formulate a lower bound for the probability of detection and an upper bound for the estimation error. The proposed performance guarantee is further developed to include the estimation error as a user-defined parameter for the probability of detection. Numerical results show acceptable correlation between theoretical and empirical simulations.

    Place, publisher, year, edition, pages
    Academic Press Inc. Elsevier Science, 2018
    Keywords
    Direction of arrival; Orthogonal Matching Pursuit (OMP); Mutual coherence; Array configuration
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-149841 (URN), 10.1016/j.dsp.2018.04.006 (DOI), 000437386200006 ()
    Available from: 2018-08-02 Created: 2018-08-02 Last updated: 2018-11-23
    2. On Probability of Support Recovery for Orthogonal Matching Pursuit Using Mutual Coherence
    2017 (English). In: IEEE Signal Processing Letters, ISSN 1070-9908, E-ISSN 1558-2361, Vol. 24, no 11, p. 1646-1650. Article in journal (Refereed), Published
    Abstract [en]

    In this paper, we present a new coherence-based performance guarantee for the Orthogonal Matching Pursuit (OMP) algorithm. A lower bound is derived for the probability of correctly identifying the support of a sparse signal under additive white Gaussian noise. Compared to previous work, the new bound takes into account signal parameters such as the dynamic range, the noise variance, and the sparsity. Numerical simulations show significant improvements over previous work and a closer match to empirically obtained results of the OMP algorithm.
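
    Both this guarantee and the DOA bound above are stated in terms of the mutual coherence of the measurement matrix; for reference, the standard definition (added here, not quoted from the paper) is

        \mu(A) = \max_{i \neq j} \frac{|\langle a_i, a_j \rangle|}{\lVert a_i \rVert_2 \, \lVert a_j \rVert_2},

    where a_i denotes the i-th column of A; smaller coherence yields stronger recovery guarantees.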

    Place, publisher, year, edition, pages
    IEEE Signal Processing Society, 2017
    Keywords
    Compressed Sensing (CS), Sparse Recovery, Orthogonal Matching Pursuit (OMP), Mutual Coherence
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-141613 (URN), 10.1109/LSP.2017.2753939 (DOI), 000412501600001 ()
    Available from: 2017-10-03 Created: 2017-10-03 Last updated: 2018-11-23. Bibliographically approved
    3. On Nonlocal Image Completion Using an Ensemble of Dictionaries
    2016 (English). In: 2016 IEEE International Conference on Image Processing (ICIP), IEEE, 2016, p. 2519-2523. Conference paper, Published paper (Refereed)
    Abstract [en]

    In this paper we consider the problem of nonlocal image completion from random measurements and using an ensemble of dictionaries. Utilizing recent advances in the field of compressed sensing, we derive conditions under which one can uniquely recover an incomplete image with overwhelming probability. The theoretical results are complemented by numerical simulations using various ensembles of analytical and training-based dictionaries.

    Place, publisher, year, edition, pages
    IEEE, 2016
    Series
    IEEE International Conference on Image Processing ICIP, ISSN 1522-4880
    Keywords
    compressed sensing; image completion; nonlocal; inverse problems; uniqueness conditions
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-134107 (URN), 10.1109/ICIP.2016.7532813 (DOI), 000390782002114 (), 978-1-4673-9961-6 (ISBN)
    Conference
    23rd IEEE International Conference on Image Processing (ICIP)
    Available from: 2017-01-22 Created: 2017-01-22 Last updated: 2018-11-23
    4. Compressive Image Reconstruction in Reduced Union of Subspaces
    2015 (English). In: Computer Graphics Forum, ISSN 1467-8659, Vol. 34, no 2, p. 33-44. Article in journal (Refereed), Published
    Abstract [en]

    We present a new compressed sensing framework for reconstruction of incomplete and possibly noisy images and their higher dimensional variants, e.g. animations and light-fields. The algorithm relies on a learning-based basis representation. We train an ensemble of intrinsically two-dimensional (2D) dictionaries that operate locally on a set of 2D patches extracted from the input data. We show that one can convert the problem of 2D sparse signal recovery to an equivalent 1D form, enabling us to utilize a large family of sparse solvers. The proposed framework represents the input signals in a reduced union of subspaces model, while allowing sparsity in each subspace. Such a model leads to a much more sparse representation than widely used methods such as K-SVD. To evaluate our method, we apply it to three different scenarios where the signal dimensionality varies from 2D (images) to 3D (animations) and 4D (light-fields). We show that our method outperforms state-of-the-art algorithms in computer graphics and image processing literature.
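
    The 2D-to-1D conversion referred to above is presumably the standard Kronecker vectorization identity, stated here for clarity (the paper's exact formulation may differ): if a patch X has sparse coefficients S in a pair of 2D dictionaries (A, B), then

        X = A S B^{\top} \iff \operatorname{vec}(X) = (B \otimes A)\,\operatorname{vec}(S),

    so recovering vec(S) is an ordinary 1D sparse recovery problem in the Kronecker dictionary B ⊗ A, solvable with any standard sparse solver.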

    Place, publisher, year, edition, pages
    John Wiley & Sons Ltd, 2015
    Keywords
    Image reconstruction, compressed sensing, light field imaging
    National Category
    Signal Processing
    Identifiers
    urn:nbn:se:liu:diva-119639 (URN), 10.1111/cgf.12539 (DOI), 000358326600008 ()
    Conference
    Eurographics 2015
    Projects
    VPS
    Funder
    Swedish Foundation for Strategic Research, IIS11-0081
    Available from: 2015-06-23 Created: 2015-06-23 Last updated: 2018-11-23. Bibliographically approved
    5. Learning Based Compression of Surface Light Fields for Real-time Rendering of Global Illumination Scenes
    2013 (English). In: Proceedings of ACM SIGGRAPH Asia 2013, ACM Press, 2013. Conference paper, Published paper (Refereed)
    Abstract [en]

    We present an algorithm for compression and real-time rendering of surface light fields (SLF) encoding the visual appearance of objects in static scenes with high frequency variations. We apply a non-local clustering in order to exploit spatial coherence in the SLF data. To efficiently encode the data in each cluster, we introduce a learning based approach, Clustered Exemplar Orthogonal Bases (CEOB), which trains a compact dictionary of orthogonal basis pairs, enabling efficient sparse projection of the SLF data. In addition, we discuss the application of the traditional Clustered Principal Component Analysis (CPCA) on SLF data, and show that in most cases, CEOB outperforms CPCA, K-SVD and spherical harmonics in terms of memory footprint, rendering performance and reconstruction quality. Our method enables efficient reconstruction and real-time rendering of scenes with complex materials and light sources, not possible to render in real-time using previous methods.

    Place, publisher, year, edition, pages
    ACM Press, 2013
    Keywords
    computer graphics, global illumination, real-time, machine learning
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-99433 (URN), 10.1145/2542355.2542385 (DOI), 978-1-4503-2629-2 (ISBN)
    Conference
    SIGGRAPH Asia, 19-22 November 2013, Hong Kong
    Projects
    VPS
    Funder
    Swedish Foundation for Strategic Research, IIS11-0081; Swedish Research Council
    Available from: 2013-10-17 Created: 2013-10-17 Last updated: 2018-11-23. Bibliographically approved
  • Public defence: 2018-12-14 10:00 I101, I-huset, Linköping
    Hagman, William
    Linköping University, Department of Behavioural Sciences and Learning, Psychology. Linköping University, Faculty of Arts and Sciences.
    When are nudges acceptable?: Influences of beneficiaries, techniques, alternatives and choice architects, 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Interventions aimed at changing behavior (so-called nudges) are becoming more and more popular among policymakers. However, to use nudges effectively, it is important to understand when and why people find them acceptable. The objective of this thesis is therefore to improve the understanding of when nudges are judged to be acceptable. The thesis focuses on a model for behavioral change. The model contains two parts: nudge technique and acceptance of nudges. Nudge technique refers to how the nudge is designed to function in terms of psychological mechanism and functionality.

    The first part of this thesis expands on and problematizes the nudge-technique part of the model from an ethical perspective, by exemplifying the psychological mechanisms behind different techniques and explaining why they might be intrusive to individuals' freedom of choice. The second part discusses why acceptance is an important component of making nudging legitimate and effective, followed by a discussion of how acceptance is empirically measured. The empirical part of the thesis is based on four papers, all of which use a quantitative online survey approach to study the general public's judgements of nudges.

    Paper 1 was a first attempt to measure whether nudges that are common in the nudge literature are acceptable interventions according to the general public. We found that nudges categorized as pro-self were more likely to be rated as acceptable, and less likely to be perceived as intrusive to freedom of choice, than pro-social nudges. Furthermore, the effect of decision styles and worldview on acceptance was explored. In paper 2, we explored whether the difference in acceptance between pro-social and pro-self nudges could be increased by framing nudges as beneficial for society or for individuals. The framing had no effect on acceptance but, as in paper 1, pro-social nudges were found to be more intrusive to freedom of choice than pro-self-framed nudges. Moreover, different nudge techniques had different rates of acceptance even with the same explicit goal for the nudges. In paper 3, we examined whether the alternative to nudges affects the perceived acceptability and intrusiveness of default-changing nudge techniques. The alternatives given to the nudges were either to enforce the intended behavioral change with legislation or to do nothing at all to change the behavior. We found no difference in aggregate acceptance; however, the judgements varied depending on individuals' worldview. Paper 4 explored whether the choice architect's (the creator/proposer of the nudge) political affiliation affects acceptance ratings for proposed nudge interventions and legislation. We found that acceptance of both nudges and legislation increases with the degree of match between people's political orientation and the choice architect's political affiliation.

    Taken together, the findings suggest that there is more to creating an acceptable nudge than merely taking a nudge technique that was acceptable in one context and applying it in another. Moreover, nudges that are rated as more beneficial to individuals than to society at large are in general more likely to be found acceptable and less intrusive to freedom of choice. It is important to have knowledge about the target population (e.g., their decision styles, worldviews, and political orientation) to avoid backfiring when implementing nudges.

    List of papers
    1. Public Views on Policies Involving Nudges
    2015 (English). In: Review of Philosophy and Psychology, ISSN 1878-5158, E-ISSN 1878-5166, Vol. 6, no 3, p. 439-453. Article in journal (Refereed), Published
    Abstract [en]

    When should nudging be deemed permissible, and when should it be deemed intrusive to individuals' freedom of choice? Should all types of nudges be judged the same? To date, the debate concerning these issues has largely proceeded without much input from the general public. The main objective of this study is to elicit public views on the use of nudges in policy. In particular, we investigate attitudes toward two broad categories of nudges that we label pro-self (i.e., focusing on private welfare) and pro-social (i.e., focusing on social welfare) nudges. In addition, we explore how individual differences in thinking and feeling influence attitudes toward nudges. General population samples in Sweden and the United States (n=952) were presented with vignettes describing nudge policies and rated their acceptability and intrusiveness to freedom of choice. To test for individual differences, measures of cultural cognition and analytical thinking were included. Results show that the level of acceptance toward nudge policies was generally high in both countries, but slightly higher among Swedes than Americans. Somewhat paradoxically, a majority of the respondents also perceived the presented nudge policies as intrusive to freedom of choice. Nudge policies classified as pro-social had a significantly lower acceptance rate than pro-self nudges (p<.0001). Individuals with a more individualistic worldview were less likely to perceive nudges as acceptable, while individuals more prone to analytical thinking were less likely to perceive nudges as intrusive to freedom of choice. To conclude, our findings suggest that the notion of "one-nudge-fits-all" is not tenable. Recognizing this is important both for successfully implementing nudges and for nuancing nudge theory.

    Keywords
    Nudge; Libertarian Paternalism; Acceptability; Autonomy
    National Category
    Economics
    Identifiers
    urn:nbn:se:liu:diva-119071 (URN), 10.1007/s13164-015-0263-2 (DOI)
    Projects
    Neuroekonomi
    Available from: 2015-06-08 Created: 2015-06-08 Last updated: 2018-11-22
  • Public defence: 2018-12-14 10:15 Ada Lovelace, Building B, Linköping
    Andersson Naesseth, Christian
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, Faculty of Science & Engineering.
    Machine learning using approximate inference: Variational and sequential Monte Carlo methods, 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Automatic decision making and pattern recognition under uncertainty are difficult tasks that are ubiquitous in our everyday life. The systems we design, and the technology we develop, require us to coherently represent and work with uncertainty in data. Probabilistic models and probabilistic inference give us a powerful framework for solving this problem. Using this framework, while enticing, results in difficult-to-compute integrals and probabilities when conditioning on observed data. This means we need approximate inference: methods that solve the problem approximately using a systematic approach. In this thesis we develop new methods for efficient approximate inference in probabilistic models.

    There are generally two approaches to approximate inference: variational methods and Monte Carlo methods. In Monte Carlo methods, we use a large number of random samples to approximate the integral of interest. With variational methods, on the other hand, we turn the integration problem into an optimization problem. We develop algorithms of both types and bridge the gap between them.

    First, we present a self-contained tutorial on the popular sequential Monte Carlo (SMC) class of methods. Next, we propose new algorithms and applications based on SMC for approximate inference in probabilistic graphical models. We derive nested sequential Monte Carlo, a new algorithm particularly well suited for inference in a large class of high-dimensional probabilistic models. Then, inspired by similar ideas, we derive interacting particle Markov chain Monte Carlo, which uses parallelization to speed up approximate inference for universal probabilistic programming languages. After that, we show how the rejection sampling process used when generating gamma-distributed random variables can be exploited to speed up variational inference. Finally, we bridge the gap between SMC and variational methods by developing variational sequential Monte Carlo, a new flexible family of variational approximations.
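
    As a concrete reference point for the SMC methods surveyed in the thesis, the following is a minimal bootstrap particle filter in Python. It is a generic textbook baseline, not code from the thesis; `init`, `transition`, and `log_likelihood` are hypothetical model-specific callables.

        import numpy as np

        def bootstrap_pf(y, n, init, transition, log_likelihood):
            """Return resampled particles and an estimate of log p(y)."""
            x = init(n)                              # sample particles from the prior
            log_z = 0.0                              # accumulates the log-marginal-likelihood estimate
            for t in range(len(y)):
                x = transition(x)                    # propagate through the dynamics
                logw = log_likelihood(y[t], x)       # log-weight each particle
                c = logw.max()                       # log-sum-exp stabilization
                w = np.exp(logw - c)
                log_z += c + np.log(w.mean())        # log of the average unnormalized weight
                w /= w.sum()
                x = x[np.random.choice(n, n, p=w)]   # multinomial resampling
            return x, log_z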

    List of papers
    1. Capacity estimation of two-dimensional channels using Sequential Monte Carlo
    2014 (English). In: 2014 IEEE Information Theory Workshop, 2014, p. 431-435. Conference paper, Published paper (Refereed)
    Abstract [en]

    We derive a new Sequential-Monte-Carlo-based algorithm to estimate the capacity of two-dimensional channel models. The focus is on computing the noiseless capacity of the 2-D (1, ∞) run-length limited constrained channel, but the underlying idea is generally applicable. The proposed algorithm is profiled against a state-of-the-art method, yielding more than an order of magnitude improvement in estimation accuracy for a given computation time.

    National Category
    Control Engineering; Computer Sciences; Probability Theory and Statistics
    Identifiers
    urn:nbn:se:liu:diva-112966 (URN), 10.1109/ITW.2014.6970868 (DOI)
    Conference
    Information Theory Workshop
    Available from: 2015-01-06 Created: 2015-01-06 Last updated: 2018-11-09
    2. Sequential Monte Carlo for Graphical Models
    2014 (English). In: Advances in Neural Information Processing Systems, 2014, p. 1862-1870. Conference paper, Published paper (Refereed)
    Abstract [en]

    We propose a new framework for how to use sequential Monte Carlo (SMC) algorithms for inference in probabilistic graphical models (PGM). Via a sequential decomposition of the PGM we find a sequence of auxiliary distributions defined on a monotonically increasing sequence of probability spaces. By targeting these auxiliary distributions using SMC we are able to approximate the full joint distribution defined by the PGM. One of the key merits of the SMC sampler is that it provides an unbiased estimate of the partition function of the model. We also show how it can be used within a particle Markov chain Monte Carlo framework in order to construct high-dimensional block-sampling algorithms for general PGMs.
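
    The unbiased partition-function estimate mentioned above is, in standard SMC notation (formula added for reference),

        \hat{Z} = \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^{N} \tilde{w}_t^{i},

    where \tilde{w}_t^{i} are the unnormalized importance weights of the N particles at step t of the sequential decomposition; unbiasedness, \mathbb{E}[\hat{Z}] = Z, holds for any number of particles.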

    National Category
    Computer Sciences; Probability Theory and Statistics; Control Engineering
    Identifiers
    urn:nbn:se:liu:diva-112967 (URN)
    Conference
    Neural Information Processing Systems (NIPS)
    Available from: 2015-01-06 Created: 2015-01-06 Last updated: 2018-11-09. Bibliographically approved
    3. Nested Sequential Monte Carlo Methods
    2015 (English). In: Proceedings of The 32nd International Conference on Machine Learning / [ed] Francis Bach, David Blei, Journal of Machine Learning Research (Online), 2015, Vol. 37, p. 1292-1301. Conference paper, Published paper (Refereed)
    Abstract [en]

    We propose nested sequential Monte Carlo (NSMC), a methodology to sample from sequences of probability distributions, even where the random variables are high-dimensional. NSMC generalises the SMC framework by requiring only approximate, properly weighted, samples from the SMC proposal distribution, while still resulting in a correct SMC algorithm. Furthermore, NSMC can in itself be used to produce such properly weighted samples. Consequently, one NSMC sampler can be used to construct an efficient high-dimensional proposal distribution for another NSMC sampler, and this nesting of the algorithm can be done to an arbitrary degree. This allows us to consider complex and high-dimensional models using SMC. We present results that demonstrate the efficacy of our approach on several filtering problems with dimensions on the order of 100 to 1,000.
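
    For clarity, "properly weighted" has the following standard meaning (definition added here, not quoted from the paper): a random pair (x, w) is properly weighted for an unnormalized density \bar{\pi} if, for all test functions \varphi,

        \mathbb{E}[w\,\varphi(x)] = C \int \varphi(x)\,\bar{\pi}(x)\,\mathrm{d}x

    for some constant C > 0 independent of \varphi. NSMC exploits the fact that exact proposal samples can be replaced by such pairs without invalidating the outer SMC sampler.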

    Place, publisher, year, edition, pages
    Journal of Machine Learning Research (Online), 2015
    Series
    JMLR Workshop and Conference Proceedings, ISSN 1938-7228 ; 37
    National Category
    Computer Sciences; Control Engineering; Probability Theory and Statistics
    Identifiers
    urn:nbn:se:liu:diva-122698 (URN)
    Conference
    32nd International Conference on Machine Learning, Lille, France, 6-11 July, 2015
    Available from: 2015-11-16 Created: 2015-11-16 Last updated: 2018-11-09. Bibliographically approved
    4. Interacting Particle Markov Chain Monte Carlo
    2016 (English). In: Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016. Conference paper, Published paper (Refereed)
    Abstract [en]

    We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results that show significant improvements in mixing rates relative to both non-interacting PMCMC samplers and a single PMCMC sampler with an equivalent memory and computational budget. An additional advantage of the iPMCMC method is that it is suitable for distributed and multi-core architectures.

    Keywords
    Sequential Monte Carlo, Probabilistic programming, parallelisation
    National Category
    Computer Sciences; Control Engineering; Probability Theory and Statistics
    Identifiers
    urn:nbn:se:liu:diva-130043 (URN)
    Conference
    International Conference on Machine Learning (ICML), New York, USA, June 19-24, 2016
    Projects
    CADICS
    Funder
    Cancer and Allergy Foundation
    Available from: 2016-07-05 Created: 2016-07-05 Last updated: 2018-11-09
    5. Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms
    2017 (English). In: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017. Conference paper, Published paper (Refereed)
    Abstract [en]

    Variational inference using the reparameterization trick has enabled large-scale approximate Bayesian inference in complex probabilistic models, leveraging stochastic optimization to sidestep intractable expectations. The reparameterization trick is applicable when we can simulate a random variable by applying a differentiable deterministic function to an auxiliary random variable whose distribution is fixed. For many distributions of interest (such as the gamma or Dirichlet), simulation of random variables relies on acceptance-rejection sampling. The discontinuity introduced by the accept-reject step means that standard reparameterization tricks are not applicable. We propose a new method that lets us leverage reparameterization gradients even when variables are outputs of an acceptance-rejection sampling algorithm. Our approach enables reparameterization on a larger class of variational distributions. In several studies of real and synthetic data, we show that the variance of the estimator of the gradient is significantly lower than that of other state-of-the-art methods. This leads to faster convergence of stochastic gradient variational inference.
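
    To make the trick concrete, a standard textbook example (not taken from the paper): for a Gaussian z \sim \mathcal{N}(\mu, \sigma^2) one can write

        z = g_{\theta}(\varepsilon) = \mu + \sigma \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, 1), \qquad \nabla_{\theta}\,\mathbb{E}_{q_{\theta}}[f(z)] = \mathbb{E}_{\varepsilon}[\nabla_{\theta} f(g_{\theta}(\varepsilon))],

    whereas gamma and Dirichlet variables are typically simulated by acceptance-rejection, whose accept/reject step breaks the differentiable map from \varepsilon to z; that discontinuity is the gap the proposed method closes.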

    Series
    Proceedings of Machine Learning Research, ISSN 1938-7228 ; 54
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-152645 (URN)
    Conference
    Artificial Intelligence and Statistics, 20-22 April 2017, Fort Lauderdale, FL, USA
    Available from: 2018-11-09 Created: 2018-11-09 Last updated: 2018-11-21
    6. Variational Sequential Monte Carlo
    2018 (English). In: Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, 2018. Conference paper, Published paper (Refereed)
    Abstract [en]

    Many recent advances in large scale probabilistic inference rely on variational methods. The success of variational approaches depends on (i) formulating a flexible parametric family of distributions, and (ii) optimizing the parameters to find the member of this family that most closely approximates the exact posterior. In this paper we present a new approximating family of distributions, the variational sequential Monte Carlo (VSMC) family, and show how to optimize it in variational inference. VSMC melds variational inference (VI) and sequential Monte Carlo (SMC), providing practitioners with flexible, accurate, and powerful Bayesian inference. The VSMC family is a variational family that can approximate the posterior arbitrarily well, while still allowing for efficient optimization of its parameters. We demonstrate its utility on state space models, stochastic volatility models for financial data, and deep Markov models of brain neural circuits.
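
    In this family, the variational objective takes the usual form for such bounds (added here for context): the expected log of the SMC normalizing-constant estimate,

        \mathcal{L}(\lambda) = \mathbb{E}\big[\log \hat{Z}_{\mathrm{SMC}}(\lambda)\big] \le \log p(y_{1:T}),

    a valid lower bound on the log marginal likelihood by Jensen's inequality, since \hat{Z}_{\mathrm{SMC}} is unbiased; optimizing the proposal parameters \lambda tightens the bound.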

    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-152646 (URN)
    Conference
    International Conference on Artificial Intelligence and Statistics, Playa Blanca, Lanzarote, Canary Islands, April 9 - 11, 2018
    Available from: 2018-11-09 Created: 2018-11-09 Last updated: 2018-11-16. Bibliographically approved
  • Public defence: 2018-12-14 13:00 Berzeliussalen, Linköping
    Vavruch, Ludvig
    Linköping University, Department of Clinical and Experimental Medicine, Division of Surgery, Orthopedics and Oncology. Linköping University, Faculty of Medicine and Health Sciences. Linköping University, Center for Medical Image Science and Visualization (CMIV). Region Östergötland, Center for Surgery, Orthopaedics and Cancer Treatment, Department of Orthopaedics in Linköping.
    Adolescent Idiopathic Scoliosis: A Deformity in Three Dimensions, 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Scoliosis is a complex three-dimensional deformity of the spine. Even though it has been known for centuries, treatment of the deformity has focused on correction only in the frontal plane. In recent decades, the need for three-dimensional assessment of scoliosis has been highlighted in order to better understand its cause and the principles of its treatment. The overall aim of this dissertation is to provide knowledge for assessing scoliosis as a three-dimensional problem.

    The severity of scoliosis is measured with the Cobb angle from standing radiographs. Computed tomography (CT) examinations are used throughout this thesis. The first paper investigates the difference in Cobb angle measured from standing radiographs and supine CT examinations in 128 consecutive patients with adolescent idiopathic scoliosis (AIS) planned for surgery. The standing radiographs had larger Cobb angles, with a mean difference of 11°, and a linear correlation was found between the two examinations.

    The second paper compares the axial shape of vertebrae in 20 patients with AIS with a reference group. Clear asymmetry was observed in all vertebrae – superior and inferior end vertebrae as well as the apical vertebra – compared with corresponding vertebrae among the reference group. The asymmetry was most pronounced in the apical vertebra. A novel parameter, frontal vertebral body rotation (FVBR), was introduced to describe the internal rotation of the vertebrae in the axial plane.

    Pelvic incidence (PI) is a measurement of the position of the sacrum in relation to the femoral heads. This is relevant in scoliosis because PI determines the pelvic configuration acting as a foundation for the spine. PI has traditionally been measured from standing radiographs. The third study investigates PI three-dimensionally, based on low-dose CT examinations, in 37 patients with Lenke type 1 or 5 curves compared with a reference group. A significantly higher PI was observed in patients with Lenke type 5 curves compared with the reference group and with patients with Lenke type 1 curves.

    Severe AIS is treated with corrective surgery. Two approaches are available: the predominant posterior approach and the anterior approach. In the fourth paper, these two approaches are evaluated with regard to three-dimensional correction, how well the correction is maintained over a 2-year follow-up and patient-reported outcome measures. Twenty-seven patients treated with the posterior approach and 26 patients treated with the anterior approach, all with Lenke type 1 curves, were included. Fewer vertebrae were fused in the anterior group, but the posterior group had a better correction of the deformity in the frontal plane. No difference was observed regarding three-dimensional correction and patient-reported outcome measures.

    AIS is truly a complex three-dimensional deformity. More research is needed to fully comprehend the complexity of the scoliotic spine.

    List of papers
    1. A Comparison of Cobb Angle: Standing Versus Supine Images of Late-Onset Idiopathic Scoliosis
    2016 (English). In: Polish Journal of Radiology, ISSN 1733-134X, Vol. 81, p. 270-276. Article in journal (Refereed), Published
    Abstract [en]

    Background: Scoliosis is traditionally evaluated by measuring the Cobb angle in radiograph images taken while the patient is standing. However, low-dose computed tomography (CT) images, which are taken while the patient is in a supine position, provide new opportunities to evaluate scoliosis. Few studies have investigated how the patient's position, standing or supine, affects measurements. The purpose of this study was to compare the Cobb angle in images from patients while standing versus supine.

    Material/methods: A total of 128 consecutive patients (97 females and 21 males; mean age 15.5 [11-26] years) with late-onset scoliosis requiring corrective surgery were enrolled. One observer evaluated the type of curve (Lenke classification) and measured the Cobb angle in whole-spine radiography (standing) and scout images from low-dose CT (supine) taken on the same day.

    Results: For all primary curves, the mean Cobb angle was 59° (SD 12) while standing and 48° (SD 12) while supine, with a mean difference of 11° (SD 5). The correlation between primary standing and supine images had an r value of 0.899 (95% CI 0.860-0.928) and an intra-class correlation coefficient of 0.969. The correlation between the difference in standing and supine images from primary and secondary curves had an r value of 0.340 (95% CI 0.177-0.484).

    Conclusions: We found a strong correlation between the Cobb angle in images obtained while the patient was standing versus supine for primary and secondary curves. This study is only applicable to patients with severe curves requiring surgical treatment. It enables additional studies based on low-dose CT.

    Place, publisher, year, edition, pages
    Medical Science International, 2016
    Keywords
    Scoliosis; Spine; Supine Position
    National Category
    Radiology, Nuclear Medicine and Medical Imaging
    Identifiers
    urn:nbn:se:liu:diva-145863 (URN), 10.12659/PJR.895949 (DOI), 27354881 (PubMedID)
    Available from: 2018-03-20 Created: 2018-03-20 Last updated: 2018-11-23. Bibliographically approved
    2. Vertebral Axial Asymmetry in Adolescent Idiopathic Scoliosis.
    2018 (English). In: Spine Deformity, ISSN 2212-1358, Vol. 6, no 2, p. 112-120.e1. Article in journal (Refereed), Published
    Abstract [en]

    Study Design

    Retrospective study.

    Objectives

    To investigate parameters of axial vertebral deformation in patients with scoliosis compared to a control group, and to determine whether these parameters correlated with the severity of spine curvature, measured as the Cobb angle.

    Summary of Background Data

    Adolescent idiopathic scoliosis (AIS) is the most common type of spinal deformity. Many studies have investigated vertebral deformation, in terms of wedging and pedicle deformations, but few studies have investigated actual structural changes within vertebrae.

    Methods

    This study included 20 patients with AIS (Lenke 1–3, mean age: 15.6 years, range: 11–20). We compared preoperative low-dose computed tomography (CT) examinations of patients with AIS to those of a control group matched for age and sex. The control individuals had no spinal deformity but had been admitted to the emergency department for trauma CT. We measured the Cobb angles and the axial vertebral rotation (AVR), axial vertebral body asymmetry (AVBA), and frontal vertebral body rotation (FVBR) for the superior end, inferior end, and apical vertebrae, with in-house-developed software. Correlations between entities were investigated with the Pearson correlation test.

    Results

    The average Cobb angles were 49.3° and 1.3° for the scoliotic and control groups, respectively. The patient and control groups showed significant differences in the AVRs of all three vertebra levels (p < .01), the AVBAs of the superior end and apical vertebrae (p < .008), and the FVBR of the apical vertebra (p = .011). Correlations were only found between the AVBA and FVBR in the superior end vertebra (r = 0.728, p < .001) and in the apical vertebra (r = 0.713, p < .001).

    Conclusions

    Compared with controls, patients with scoliosis showed clear morphologic differences in the midaxial plane vertebrae. Differences in AVR, AVBA, and FVBR were most pronounced at the apical vertebra. The FVBR provided valuable additional information about the internal rotation and deformation of vertebrae.

    Level of Evidence

    Level III.

    Place, publisher, year, edition, pages
    Elsevier, 2018
    Keywords
    Scoliosis; Morphology; Three-dimensional; Vertebral rotation; Low-dose CT
    National Category
    Orthopaedics
    Identifiers
    urn:nbn:se:liu:diva-145864 (URN), 10.1016/j.jspd.2017.09.001 (DOI), 29413732 (PubMedID), 2-s2.0-85032338953 (Scopus ID)
    Available from: 2018-03-20 Created: 2018-03-20 Last updated: 2018-11-23. Bibliographically approved
    3. Three-dimensional pelvic incidence is much higher in (thoraco)lumbar scoliosis than in controls
    2018 (English). In: European Spine Journal, ISSN 0940-6719, E-ISSN 1432-0932. Article in journal (Refereed), Epub ahead of print
    Abstract [en]

    Purpose

    The pelvic incidence (PI) is used to describe the sagittal spino-pelvic alignment. In previous studies, radiographs were used, leading to less accuracy in establishing the three-dimensional (3D) spino-pelvic parameters. The purpose of this study is to analyze the differences in the 3D sagittal spino-pelvic alignment in adolescent idiopathic scoliosis (AIS) subjects and non-scoliotic controls.

    Methods

    Thirty-seven female AIS patients who underwent preoperative supine low-dose computed tomography imaging of the spine, hips and pelvis as part of their general workup were included and compared to 44 non-scoliotic, age-matched female controls. A previously validated computerized method was used to measure the PI in 3D, as the angle between the line orthogonal to the inclination of the sacral endplate and the line connecting the center of the sacral endplate with the hip axis.
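
    Read literally, this definition reduces to a small piece of vector geometry. The Python sketch below illustrates it under stated assumptions (landmarks already extracted from the CT; the line to the hip axis taken to its midpoint between the two femoral-head centers); it is a hypothetical illustration, not the authors' validated software.

        import numpy as np

        def pelvic_incidence(endplate_center, endplate_normal, hip_left, hip_right):
            """3D pelvic incidence (degrees): angle between the sacral-endplate
            normal and the line from the endplate center to the hip axis."""
            hip_axis_mid = (np.asarray(hip_left) + np.asarray(hip_right)) / 2.0
            v = hip_axis_mid - np.asarray(endplate_center)   # endplate center -> hip axis
            n = np.asarray(endplate_normal)                  # line orthogonal to the endplate
            cos_pi = abs(n @ v) / (np.linalg.norm(n) * np.linalg.norm(v))
            return float(np.degrees(np.arccos(np.clip(cos_pi, -1.0, 1.0))))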

    Results

    The PI was on average 46.8° ± 12.4° in AIS patients and 41.3° ± 11.4° in controls (p = 0.025), with a higher PI in Lenke type 5 curves (50.6° ± 16.2°) as compared to controls (p = 0.042), whereas the Lenke type 1 curves (45.9° ± 12.2°) did not differ from controls (p = 0.141).

    Conclusion

    Lenke type 5 curves showed a significantly higher PI than controls, whereas Lenke type 1 curves did not differ from controls. This suggests a role of pelvic morphology and spino-pelvic alignment in the pathogenesis of idiopathic scoliosis. Further longitudinal studies should explore the exact role of the PI in the initiation and progression of different AIS types.

    Place, publisher, year, edition, pages
    Heidelberg: Springer, 2018
    Keywords
    Idiopathic scoliosis, Sagittal alignment, Pelvic incidence, Three-dimensional analysis, Computed tomography
    National Category
    Orthopaedics
    Identifiers
    urn:nbn:se:liu:diva-152573 (URN), 10.1007/s00586-018-5718-6 (DOI), 30128762 (PubMedID), 2-s2.0-85051834138 (Scopus ID)
    Available from: 2018-11-07 Created: 2018-11-07 Last updated: 2018-11-23. Bibliographically approved
  • Public defence: 2018-12-14 13:00 Belladonna, Linköping
    Ali, Zaheer
    Linköping University, Department of Medical and Health Sciences, Division of Cardiovascular Medicine. Linköping University, Faculty of Medicine and Health Sciences.
    Investigating mechanisms of angiogenesis in health and disease using zebrafish models, 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Angiogenesis, the growth of blood vessels from an existing vasculature, can occur by sprouting from preexisting vessels or by vessel splitting (intussusception). Pathological angiogenesis drives choroidal neovascularization (CNV) in age-related macular degeneration (AMD); CNV commonly remains restricted under the retinal pigment epithelium (RPE), termed occult CNV, but may also involve vessels penetrating through the RPE into the sub-retinal space. Pathological vessels are poorly developed, insufficiently perfused and highly leaky, phenotypes that are considered to drive disease progression and lead to poor prognosis. Currently, a number of anti-angiogenic drugs exist, the majority of which target vascular endothelial growth factor (VEGF). Although these drugs are often highly beneficial for treating eye diseases in the short term, they are generally of limited efficacy in other diseases such as cancer, and they also have poorer efficacy when used for long-term treatment of eye diseases. A better understanding of the mechanisms underlying pathological angiogenesis can generate new targets for treatment, leading to the development of better drugs for cancer and retinopathies, and perhaps also for other angiogenesis-dependent diseases, in the future. In this thesis, mechanisms involved in developmental or pathological angiogenesis in the choroid, cornea and melanoma were identified. These findings highlight the need to further expand our knowledge of angiogenesis in different tissues and conditions, toward more targeted, and potentially more effective, treatment of disease in the future.

    In paper I, we identified, for the first time, the choriocapillaries (CCs) in adult zebrafish and found that occult CNV could be induced by exposing the fish to severe hypoxia. Interestingly, we found that occult CNV relied on intussusception, involving not only de novo generation of intussusceptive pillars but also a previously poorly understood mechanism called pillar splitting. This involved HIF-VEGF-VEGFR2 signaling, and evidence that it also occurred in both rats and humans suffering from AMD suggested that the mechanism is conserved and clinically relevant.

    In contrast, we found in paper II that the development of CCs in the zebrafish relies on sprouting angiogenesis and involves continuous remodeling and delayed maturation of the vasculature in 2D. The initial development was found to occur by a unique process of tissue-wide synchronized vasculogenesis. As expected, VEGFA signaling via VEGFR2 was also critical for the development of these vessels in the zebrafish embryo, but surprisingly this was independent of hypoxia-inducible factor (HIF)-1.

    Inflammatory nuclear factor-kB (NF-kB) signaling is involved in the progression of angiogenesis, but this pathway has mainly been studied in inflammatory cells, and the role of NF-kB in endothelial cells during angiogenesis is poorly understood. In paper III, we found that blocking NF-kB signaling using the specific IKK2 inhibitor IMD0354 blocks pathological as well as developmental angiogenesis by targeting NF-kB signaling in endothelial cells. In a rat model of suture-induced corneal neovascularization, IMD0354 treatment led to reduced production of inflammatory C-C motif chemokine ligand 2 (CCL2), C-X-C motif chemokine ligand 5 (CXCL5) and VEGF, and thereby reduced pathological corneal angiogenesis.

    Using the zebrafish tumor xenograft model in paper IV, we found an association between microphthalmia-associated transcription factor (MITF) and pigment epithelium-derived factor (PEDF) that was involved in pathological tumor angiogenesis and metastasis. Similarly, in paper V we used zebrafish transplantation models to investigate the use of biocompatible polymers for the delivery of pro-angiogenic FGF-2 as a potential treatment strategy for ischemic diseases such as myocardial infarction (MI). In conclusion, this thesis provides new insights into a diverse set of zebrafish-based angiogenesis assays and reveals new mechanisms of angiogenesis in health and disease. This work will hopefully provide a foundation for further studies of occult CNV related to AMD, a process that has not previously been possible to study in pre-clinical models. In addition, the zebrafish xenograft and other transplantation models used in this work will likely be important for studying cancer biology and for developing more attractive pharmaceutical preparations based on biocompatible hydrogels formulated as microspheres in the future.

    List of papers
    1. Selective IKK2 inhibitor IMD0354 disrupts NF-kappa B signaling to suppress corneal inflammation and angiogenesis
    2018 (English). In: Angiogenesis, ISSN 0969-6970, E-ISSN 1573-7209, Vol. 21, no 2, p. 267-285. Article in journal (Refereed), Published
    Abstract [en]

    Corneal neovascularization is a sight-threatening condition caused by angiogenesis in the normally avascular cornea. Neovascularization of the cornea is often associated with an inflammatory response, so targeting VEGF-A alone yields only limited efficacy. The NF-kappa B signaling pathway plays important roles in inflammation and angiogenesis. Here, we study the consequences of inhibiting NF-kappa B activation through selective blockade of the IKK-complex subunit I kappa B kinase beta (IKK2) using the compound IMD0354, focusing on the effects on inflammation and pathological angiogenesis in the cornea. In vitro, IMD0354 treatment diminished HUVEC migration and tube formation without an increase in cell death, and arrested rat aortic ring sprouting. In HUVEC, IMD0354 treatment caused a dose-dependent reduction in VEGF-A expression, suppressed TNF alpha-stimulated expression of the chemokines CCL2 and CXCL5, and diminished actin filament fibers and cell filopodia formation. In developing zebrafish embryos, IMD0354 treatment reduced expression of Vegf-a and disrupted retinal angiogenesis. In inflammation-induced angiogenesis in the rat cornea, systemic selective IKK2 inhibition decreased inflammatory cell invasion; suppressed CCL2, CXCL5, Cxcr2, and TNF-alpha expression; and exhibited anti-angiogenic effects such as reduced limbal vessel dilation, reduced VEGF-A expression and reduced angiogenic sprouting, without noticeable toxic effects. In summary, targeting NF-kappa B by selective IKK2 inhibition dampened the inflammatory and angiogenic responses in vivo by modulating the endothelial cell expression profile and motility, indicating an important role of NF-kappa B signaling in the development of pathologic corneal neovascularization.

    Place, publisher, year, edition, pages
    Springer Netherlands, 2018
    Keywords
    Cornea; Neovascularization; NF-kappa B; IMD0354; IKK2; VEGF
    National Category
    Cell and Molecular Biology
    Identifiers
    urn:nbn:se:liu:diva-147373 (URN), 10.1007/s10456-018-9594-9 (DOI), 000428924500007 (), 29332242 (PubMedID), 2-s2.0-85041334437 (Scopus ID)
    Note

    Funding Agencies|Swedish Research Council [2012-2472]; Swedish Foundation Stiftelsen Synframjandets Forskningsfond/Ogonfonden; Svenska Sallskapet for Medicinsk Forskning; Linkoping Universitet; Jeanssons Stiftelser

    Available from: 2018-05-18 Created: 2018-05-18 Last updated: 2018-12-07. Bibliographically approved
    2. Regulatory and Functional Connection of Microphthalmia-Associated Transcription Factor and Anti-Metastatic Pigment Epithelium Derived Factor in Melanoma
    2014 (English). In: Neoplasia, ISSN 1522-8002, E-ISSN 1476-5586, Vol. 16, no 6, p. 529-542. Article in journal (Refereed), Published
    Abstract [en]

    Pigment epithelium-derived factor (PEDF), a member of the serine protease inhibitor superfamily, has potent anti-metastatic effects in cutaneous melanoma through its direct actions on endothelial and melanoma cells. Here we show that PEDF expression positively correlates with microphthalmia-associated transcription factor (MITF) in melanoma cell lines and human samples. High PEDF and MITF expression is characteristic of low-aggressive melanomas classified according to molecular and pathological criteria, whereas both factors are decreased in senescent melanocytes and naevi. Importantly, MITF silencing down-regulates PEDF expression in melanoma cell lines and primary melanocytes, suggesting that the correlation in expression reflects a causal relationship. In agreement, analysis of chromatin immunoprecipitation coupled to high-throughput sequencing (ChIP-seq) data sets revealed three MITF binding regions within the first intron of SERPINF1, and reporter assays demonstrated that the binding of MITF to these regions is sufficient to drive transcription. Finally, we demonstrate that exogenous PEDF expression efficiently halts the in vitro migration and invasion, as well as the in vivo dissemination, of melanoma cells induced by MITF silencing. In summary, these results identify PEDF as a novel transcriptional target of MITF and support a relevant functional role for the MITF-PEDF axis in the biology of melanoma.

    Place, publisher, year, edition, pages
    Neoplasia, 2014
    National Category
    Clinical Medicine
    Identifiers
    urn:nbn:se:liu:diva-110497 (URN); 10.1016/j.neo.2014.06.001 (DOI); 000340553600007 (); 25030625 (PubMedID)
    Note

    Funding Agencies|Ministerio de Ciencia y Competitividad of Spain [SAF-2010-19256, SAF-2011-24225, SAF-2012-32117, FIS 11/02568, RD09/0076/0101, PT13/0010/0012, PI12/01552]; LiU-Cancer; Svenska Sällskapet för Medicinsk Forskning; Åke Wibergs Stiftelse; Gösta Fraenkels Stiftelse; Fundación Científica de la Asociación Española Contra el Cáncer

    Available from: 2014-09-15 Created: 2014-09-12 Last updated: 2018-12-07
    3. Adjustable delivery of pro-angiogenic FGF-2 by alginate: collagen microspheres
    2018 (English). In: Biology Open, ISSN 2046-6390, Vol. 7, no. 3, article id UNSP bio027060. Article in journal (Refereed). Published.
    Abstract [en]

    Therapeutic induction of blood vessel growth (angiogenesis) in ischemic tissues holds great potential for treatment of myocardial infarction and stroke. Achieving sustained angiogenesis and vascular maturation has, however, been highly challenging. Here, we demonstrate that alginate:collagen hydrogels containing therapeutic, pro-angiogenic FGF-2, formulated as microspheres, are a promising and clinically relevant vehicle for therapeutic angiogenesis. By titrating the amount of readily dissolvable and degradable collagen against the more slowly degradable alginate in the hydrogel mixture, the degradation rate of the biomaterial, which controls the release kinetics of the embedded pro-angiogenic FGF-2, can be adjusted. Furthermore, we elaborate a microsphere synthesis protocol allowing accurate control over sphere size, which is also a critical determinant of the degradation/release rate. As expected, alginate:collagen microspheres were completely biocompatible and did not cause any adverse reactions when injected in mice. Importantly, the amount of pro-angiogenic FGF-2 released from such microspheres led to robust induction of angiogenesis in zebrafish embryos, similar to that achieved by injecting FGF-2-releasing cells. These findings highlight the use of microspheres constructed from alginate:collagen hydrogels as a promising and clinically relevant delivery system for pro-angiogenic therapy.

    Place, publisher, year, edition, pages
    The Company of Biologists Ltd, 2018
    Keywords
    Hydrogels; Microspheres; Angiogenesis; Vasculature; Zebrafish
    National Category
    Cell and Molecular Biology
    Identifiers
    urn:nbn:se:liu:diva-147419 (URN); 10.1242/bio.027060 (DOI); 000429100500002 (); 29449216 (PubMedID)
    Note

    Funding Agencies|Svenska Sällskapet för Medicinsk Forskning; Åke Wiberg Foundation; Gösta Fraenkel Foundation; Ahrens Stiftelse; Ollie och Elof Ericssons Stiftelse; Carmen och Bertil Ragners Stiftelse; KI Stiftelser och fonder; Loo och Hans Östermans Stiftelse för Medicinsk Forskning; Vetenskapsrådet; Linköping University

    Available from: 2018-05-17 Created: 2018-05-17 Last updated: 2018-12-07
  • Public defence: 2018-12-18 10:15 Planck, Fysikhuset, Linköping
    Lilja, Louise
    Linköping University, Department of Physics, Chemistry and Biology, Semiconductor Materials. Linköping University, Faculty of Science & Engineering.
    4H-SiC epitaxy investigating carrier lifetime and substrate off-axis dependence (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Silicon carbide (SiC) is a wide-bandgap semiconductor with unique material properties that make it useful for high-power, high-frequency and high-temperature device applications. Compared to Si-based electronics, SiC-based electronics offer improved energy efficiency. One of the most critical problems is to reduce the planet's power consumption, and large improvements can be made by enhancing energy efficiency. Independently of how the electrical power is generated, power conversion is needed, and about 10% of the electrical power is lost in every power conversion step using Si-based electronics. Since the efficiency is related to the performance of the semiconductor device, SiC can contribute to the efficiency. Compared to Si, SiC has a three times larger bandgap, about ten times higher breakdown electric field strength and about three times higher thermal conductivity. The wide bandgap, together with the chemical stability of SiC, makes it possible for SiC electronic devices to operate at much higher temperatures (>250°C) than Si-based devices, without the large cooling units required by Si power converters.

    Unipolar 4H-SiC devices (≤ 1700 V), such as metal-oxide-semiconductor field-effect transistors (MOSFETs) and Schottky barrier diodes (SBDs), are now on the market in mass production. The research focus is currently on high-voltage (>10 kV) bipolar devices, such as bipolar junction transistors (BJTs), p‑i‑n diodes and insulated-gate bipolar transistors (IGBTs).

    The focus of this thesis is material improvements relevant for the development of 4H-SiC high-voltage bipolar devices. A key parameter for such devices is the minority carrier lifetime: long carrier lifetimes reduce the on-resistance through conductivity modulation, but too long carrier lifetimes give long reverse recovery times, leading to large switching losses. Thus, a carrier lifetime tailored to the specific application is needed. The carrier lifetime of the epilayers can be controlled both by the CVD growth conditions and by post-growth processing, such as thermal oxidation or carbon implantation followed by thermal annealing. The emphasis in this thesis (Papers 1‑2) is on finding optimal CVD growth conditions (growth temperature, C/Si ratio, growth rate, doping) that improve the carrier lifetime. Since the main lifetime-limiting defect has been shown to be the Z1/2 center, identified as the isolated carbon vacancy, growth conditions minimizing the Z1/2 concentration are sought.

    To achieve high-voltage bipolar devices, thick epilayers of high quality are needed. An important factor is then the growth rate, which needs to be relatively high in order to reduce the fabrication time, and thus the cost, of the final device. In this thesis the growth process has been optimized for high growth rates (30 µm/h) using standard silane and propane chemistry (Paper 3), in contrast to chlorine-containing chemistries, which result in corroded reactor parts and new defects in the epitaxial layers.

    Another important parameter for 4H-SiC bipolar devices is the basal plane dislocation (BPD) density in the substrate and epilayers, since BPDs can act as sources for the nucleation and expansion of Shockley stacking faults (SSFs). The expanded SSFs lower the carrier lifetime and form a potential barrier for carrier transport, leading to an increased forward voltage drop, which in turn leads to bipolar degradation. Bipolar degradation is detrimental to 4H-SiC bipolar devices. Several strategies have been developed to reduce the density of BPDs, including buffer layers, growth interrupts and decreasing the substrate off-cut angle. Papers 4‑6 focus on developing a CVD growth process for low substrate off-cut angles (1° and 2°) compared to today's standard off-cut angle of 4°. By reducing the substrate off-cut angle, the number of BPDs intersecting the substrate surface is reduced. In addition, the conversion of BPDs to threading edge dislocations (TEDs) during epitaxial growth increases with lower off-cut angles.
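    As a rough, back-of-the-envelope illustration of why the carrier lifetime must be tailored (a standard device-physics relation, not a result from this thesis), the degree of conductivity modulation in a drift layer of thickness W is set by the ambipolar diffusion length, which grows with the carrier lifetime tau:

        % Ambipolar diffusion length versus carrier lifetime (illustrative)
        L_a = \sqrt{D_a \, \tau}, \qquad \text{with effective modulation roughly requiring } L_a \gtrsim W/2 .

    Thicker drift layers (needed for higher blocking voltages) therefore call for longer lifetimes, while the stored charge that must be swept out at turn-off also grows with tau; this is the trade-off between on-resistance and switching losses described above. The factor W/2 is a common rule of thumb and should be read as indicative only.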

    List of papers
    1. Influence of Growth Temperature on Carrier Lifetime in 4H-SiC Epilayers
    2013 (English). Conference paper, Published paper (Refereed).
    Abstract [en]

    Carrier lifetime and the formation of defects have been investigated as a function of growth temperature in n-type 4H-SiC epitaxial layers grown by horizontal hot-wall CVD. Emphasis was put on keeping all conditions except the growth temperature fixed; hence growth rate, doping and epilayer thickness were constant in all epilayers regardless of growth temperature. Increasing growth temperature gave higher Z1/2 concentrations along with decreasing carrier lifetimes. A correlation between growth temperature and the D1 defect was also observed.

    Place, publisher, year, edition, pages
    Trans Tech Publications Inc., 2013
    Keywords
    Atomic Force Microscopy, Carrier Lifetime, DLTS, Epitaxial Growth, Horizontal Hot-Wall CVD, Intrinsic Defect, Photoluminescence (PL)
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-88341 (URN); 10.4028/www.scientific.net/MSF.740-742.637 (DOI); 000319785500151 ()
    Conference
    9th European Conference on Silicon Carbide and Related Materials (ECSCRM 2012), 2-6 September 2012, St Petersburg, Russia
    Available from: 2013-02-04 Created: 2013-02-04 Last updated: 2018-12-10
    2. Smooth 4H-SiC epilayers grown with high growth rates with silane/propane chemistry using 4° off-cut substrates
    2016 (English). In: Silicon Carbide and Related Materials 2015 / [ed] Fabrizio Roccaforte, Francesco La Via, Roberta Nipoti, Danilo Crippa, Filippo Giannazzo and Mario Saggio, Trans Tech Publications, 2016, Vol. 858, p. 209-212. Conference paper, Published paper (Refereed).
    Abstract [en]

    4H-SiC epilayers with very smooth surfaces were grown at high growth rates on 4° off-cut substrates using standard silane/propane chemistry. Specular surfaces with RMS values below 0.2 nm are presented for epilayers up to 100 μm thick, grown at growth rates of up to 30 μm/h using horizontal hot-wall chemical vapor deposition. Optimization of the in-situ etching conditions and the C/Si ratio is presented.

    Place, publisher, year, edition, pages
    Trans Tech Publications, 2016
    Series
    Materials Science Forum, ISSN 1662-9752 ; 858
    Keywords
    Atomic force microscopy, Chemical vapor deposition, Epitaxial growth, Silicon carbide
    National Category
    Materials Engineering
    Identifiers
    urn:nbn:se:liu:diva-153288 (URN); 10.4028/www.scientific.net/MSF.858.209 (DOI)
    Conference
    The 16th International Conference on Silicon Carbide and Related Materials (ICSCRM2015), Giardini Naxos, Sicily, Italy, October 4-9, 2015.
    Available from: 2018-12-10 Created: 2018-12-10 Last updated: 2018-12-10
    3. Improved Epilayer Surface Morphology on 2 degrees off-cut 4H-SiC Substrates
    2014 (English). In: Silicon Carbide and Related Materials 2013, Pts 1 and 2, Trans Tech Publications, 2014, Vol. 778-780, p. 206-209. Conference paper, Published paper (Refereed).
    Abstract [en]

    Homoepitaxial layers of 4H-SiC were grown by horizontal hot-wall CVD on 2 degree off-cut substrates, with the purpose of improving the surface morphology of the epilayers and reducing the density of surface morphological defects. In-situ etching conditions, in either pure hydrogen or a mixture of silane and hydrogen prior to growth, were compared, as well as C/Si ratios in the range 0.8 to 1.0 during growth. The smoothest epilayer surface, together with the lowest defect density, was achieved with growth at a C/Si ratio of 0.9 after in-situ etching in a pure hydrogen atmosphere.

    Place, publisher, year, edition, pages
    Trans Tech Publications, 2014
    Series
    Materials Science Forum, ISSN 1662-9752 ; 778-780
    Keywords
    epitaxial growth; horizontal hot-wall CVD; atomic force microscopy; vicinal off angle
    National Category
    Engineering and Technology
    Identifiers
    urn:nbn:se:liu:diva-108194 (URN); 10.4028/www.scientific.net/MSF.778-780.206 (DOI); 000336634100048 ()
    Conference
    Silicon Carbide and Related Materials 2013
    Available from: 2014-06-26 Created: 2014-06-26 Last updated: 2018-12-10
    4. In-grown stacking-faults in 4H-SiC epilayers grown on 2 degrees off-cut substrates
    2015 (English). In: Physica Status Solidi B: Basic Research, ISSN 0370-1972, E-ISSN 1521-3951, Vol. 252, no. 6, p. 1319-1324. Article in journal (Refereed). Published.
    Abstract [en]

    4H-SiC epilayers were grown on 2 degree off-cut substrates using standard silane/propane chemistry, with the aim of characterizing in-grown stacking faults. The stacking faults were analyzed with low-temperature photoluminescence spectroscopy, room-temperature photoluminescence mapping, room-temperature cathodoluminescence and synchrotron white-beam X-ray topography. At least three different types of in-grown stacking faults were observed, including double Shockley stacking faults, triple Shockley stacking faults and bar-shaped stacking faults. These stacking faults have all previously been found in 4 degree and 8 degree off-cut epilayers; however, their geometrical size is larger in epilayers grown on 2 degree off-cut substrates due to the lower off-cut angle. The stacking faults were formed close to the epilayer/substrate interface during the epitaxial growth.

    Place, publisher, year, edition, pages
    Wiley-VCH Verlag GmbH, 2015
    Keywords
    chemical vapor deposition; epitaxy; photoluminescence; SiC; stacking faults
    National Category
    Chemical Sciences
    Identifiers
    urn:nbn:se:liu:diva-120065 (URN); 10.1002/pssb.201451710 (DOI); 000355756200018 ()
    Note

    Funding Agencies|Swedish Research Council (VR); Advanced Functional Materials (AFM); Swedish Foundation for Strategic Research (SSF)

    Available from: 2015-07-06 Created: 2015-07-06 Last updated: 2018-12-10
  • Public defence: 2018-12-19 09:15 Planck, Fysikhuset, Linköping
    Lundström, Maria
    Linköping University, Department of Physics, Chemistry and Biology, Biology. Linköping University, Faculty of Science & Engineering.
    Exploring Fennoscandian agricultural history through genetic analysis of aged crop materials (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Crop plants have undergone a multitude of genetic changes during and following their domestication. The spread of agriculture brought the crops to new geographic regions exposing them to new environments and selection pressures along the way. This gave rise to many local variants with traits favoured both by agricultural practices and the environment.

    Agriculture was introduced in Fennoscandia (Norway, Sweden, Finland and Denmark) around 4000 BC. The composition of the archaeobotanical record gives some clues as to which species were cultivated, but macroscale analyses rarely reach beyond that. Therefore, methods like genetic analysis are necessary to expand our knowledge about the history of crop cultivation. Under optimal conditions, DNA can survive in biological samples for several hundred thousand years. The preservation of plant specimens in the Fennoscandian climate has, however, rarely been explored. This thesis therefore attempts to dive deeper into the Fennoscandian cultivation history through genetic analyses of aged plant materials from both museum collections and archaeological sources. Cereal grains from a range of preservation conditions were evaluated to find which ones might be of interest for genetic investigations. Desiccated materials gave the highest success rates, in agreement with previous studies. Waterlogged materials appeared to contain small amounts of endogenous DNA, whereas genetic analysis of charred cereals failed completely in all samples.

    Population structure was investigated in 17th to 19th century materials of both barley and rye from Sweden and Finland. Northern and southern populations of Finnish six-row barley were distinct from one another. In southern Sweden, genetic analysis suggested a conserved population structure extending over 200 years. The genetic composition of rye also seemed mostly conserved, but rye did not show geographic population structure across the investigated region in Sweden and Finland.

    A long-standing question in Fennoscandian crop history has been the interpretation of historical written records mentioning Brassica (cole crops, turnips and mustards), as well as the species identity of archaeobotanical finds of Brassica seeds. Thus, Next Generation Sequencing (NGS) was applied to identify which Brassica types were cultivated in 17th century Kalmar, Sweden. The analysis corroborated the morphological species classification in two of the investigated subfossil seeds, whereas no conclusions could be drawn from the remaining samples. The genome coverages were too low to allow subspecies identification.

    Wheat has been cultivated in Fennoscandia since the introduction of agriculture but has increased dramatically in importance over the last century. The functional allele of the wheat nutrition gene NAM-B1 was found to be particularly prominent in Fennoscandian wheats, likely associated with its effect on grain maturation time. Here the evolutionary history of NAM-B1 was investigated to see if it could truly be considered a domestication gene as suggested in a previous study. By studying extant landrace materials of Mediterranean tetraploid wheat, it was found that the non-functional allele showed signs indicative of a selective sweep. This selection did not, however, appear to have occurred during domestication.

    In conclusion, aged plant specimens from both museum and archaeological contexts could contribute greatly to our knowledge about historical cultivation, extending the investigated period back to the mid-17th century. Subfossil and waterlogged archaeobotanical materials do contain endogenous DNA, suggesting that they are better suited for genetic analysis than charred ones, at least as far as cereals are concerned. There is potential for classifying archaeological Brassica remains using NGS, even though further optimisation of sample and library preparation may be necessary. Finally, despite NAM-B1 showing signs of selection, it should not be considered a domestication gene in tetraploid wheat.

    List of papers
    1. Genetic analyses of Scandinavian desiccated, charred and waterlogged remains of barley (Hordeum vulgare L.)
    2018 (English). In: Journal of Archaeological Science: Reports, ISSN 2352-409X, Vol. 22, p. 11-20. Article in journal (Refereed). Published.
    Abstract [en]

    Barley, Hordeum vulgare L., has been cultivated in Fennoscandia (Denmark, Norway, Sweden, Finland) since the start of the Neolithic, around 4000 BCE. Genetic studies of extant and 19th century barley landraces from the area have previously shown that distinct genetic groups exist, with geographic structure according to latitude, suggesting strong local adaptation of cultivated crops. It is, however, not known what time depth these patterns reflect. Here we evaluate different archaeobotanical specimens of barley, extending several centuries back in time, for their potential to answer this question through analysis of aDNA. Forty-six charred grains, nineteen waterlogged specimens and nine desiccated grains were evaluated by PCR and KASP genotyping. The charred samples did not contain any detectable endogenous DNA. Some waterlogged samples permitted amplification of endogenous DNA, though not sufficiently for subsequent analysis. Desiccated plant materials provided the highest genotyping success rates of the materials analysed here, in agreement with previous studies. Five desiccated grains from a grave from 1679 in southern Sweden were genotyped with 100 SNP markers, and the data were compared to genotypes of 19th century landraces from Fennoscandia. The results showed that the genetic composition of barley grown in southern Sweden changed very little from the late 17th to the late 19th century, and that farmers stayed true to locally adapted crops in spite of societal and agricultural development.

    Place, publisher, year, edition, pages
    Elsevier, 2018
    Keywords
    Ancient DNA, Barley, Population structure, 17th century, Landraces
    National Category
    Genetics
    Identifiers
    urn:nbn:se:liu:diva-151282 (URN); 10.1016/j.jasrep.2018.09.006 (DOI)
    Available from: 2018-09-14 Created: 2018-09-14 Last updated: 2018-12-11. Bibliographically approved
    2. Archaeological and Historical Materials as a Means to Explore Finnish Crop History
    2018 (English). In: Environmental Archaeology, ISSN 1461-4103, E-ISSN 1749-6314. Article in journal (Refereed). Epub ahead of print.
    Abstract [en]

    In Northern Europe, barley (Hordeum vulgare L.) has been cultivated for almost 6000 years. Thus far, 150-year-old grains from historical collections have been used to investigate the distribution of barley diversity and how the species has spread across the region. Genetic studies of archaeobotanical material from agrarian sites could potentially clarify earlier migration patterns and cast further light on the origin of barley landraces. In this study, we aimed to evaluate different archaeological and historical materials with respect to DNA content, and to explore connections between Late Iron Age and medieval barley populations and historical samples of barley landraces in north-west Europe. The material analysed consisted of archaeological samples of charred barley grains from four sites in southern Finland, and historical material comprising 33 samples obtained from two herbaria and the seed collections of the Swedish museum of cultural history.

    The DNA concentrations obtained from charred archaeological barley remains were too low for successful KASP genotyping, confirming previously reported difficulties in obtaining aDNA from charred remains. Historical samples from the herbaria and seed collection confirmed the previously shown strong genetic differentiation between two-row and six-row barley. Six-row barley accessions from northern and southern Finland tended to cluster apart, while no geographical structuring was observed among two-row barley. Genotyping of functional markers revealed that the majority of barley cultivated in Finland in the late nineteenth and early twentieth century was late-flowering under increasing day length, supporting previous findings from northern European barley.

    Place, publisher, year, edition, pages
    Routledge, 2018
    Keywords
    aDNA, archaeobotany, barley, genetic diversity, Hordeum vulgare, KASP, landraces
    National Category
    Genetics
    Identifiers
    urn:nbn:se:liu:diva-151277 (URN); 10.1080/14614103.2018.1482598 (DOI); 2-s2.0-85048366875 (Scopus ID)
    Available from: 2018-09-14 Created: 2018-09-14 Last updated: 2018-12-11. Bibliographically approved
    3. Evolutionary history of the NAM-B1 gene in wild and domesticated tetraploid wheat
    2017 (English). In: BMC Genetics, ISSN 1471-2156, E-ISSN 1471-2156, Vol. 18, article id 118. Article in journal (Refereed). Published.
    Abstract [en]

    Background

    The NAM-B1 gene in wheat has for almost three decades been extensively studied and utilized in breeding programs because of its significant impact on grain protein and mineral content, and its pleiotropic effects on senescence rate and grain size. First detected in wild emmer wheat, the wild-type allele of the gene has been introgressed into durum and bread wheat. Later studies have, however, also found the wild-type allele present in some domesticated subspecies. In this study we trace the evolutionary history of NAM-B1 in tetraploid wheat species and evaluate it as a putative domestication gene.

    Results

    Genotyping of wild and landrace tetraploid accessions showed the presence of only null alleles in durum. Domesticated emmer wheats contained both null alleles and the wild-type allele, while wild emmers, with one exception, carried only the wild-type allele. One of the null alleles consists of a deletion covering several hundred kb. The other null allele, a one-basepair frame-shift insertion, likely arose among wild emmer. This allele was the target of a selective sweep extending over several hundred kb.

    Conclusions

    The NAM-B1 gene fulfils some criteria for being a domestication gene by encoding a trait of domestication relevance (seed size), and it is here shown to have been under positive selection. The presence of both wild-type and null alleles in domesticated emmer does, however, suggest that the gene is a diversification gene in this species. Further studies of genotype-environment interactions are needed to find out under what conditions selection on different NAM-B1 alleles has been beneficial.

    Place, publisher, year, edition, pages
    BioMed Central, 2017
    Keywords
    Selective sweep, Grain protein content (GPC), Emmer, Durum, Domestication gene
    National Category
    Genetics Evolutionary Biology
    Identifiers
    urn:nbn:se:liu:diva-144103 (URN); 10.1186/s12863-017-0566-7 (DOI); 000418687000001 ()
    Available from: 2018-01-05 Created: 2018-01-05 Last updated: 2018-12-11. Bibliographically approved
  • Public defence: 2018-12-19 13:15 Nobel BL32, B-Huset, Linköping
    Maghazeh, Arian
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    System-Level Design of GPU-Based Embedded Systems (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Modern embedded systems deploy several hardware accelerators, in a heterogeneous manner, to deliver high-performance computing. Among such devices, graphics processing units (GPUs) have earned a prominent position by virtue of their immense computing power. However, a system design that relies on the sheer throughput of GPUs is often incapable of satisfying the strict power- and time-related constraints faced by embedded systems.

    This thesis presents several system-level software techniques to optimize the design of GPU-based embedded systems under various graphics and non-graphics applications. As compared to the conventional application-level optimizations, the system-wide view of our proposed techniques brings about several advantages: First, it allows for fully incorporating the limitations and requirements of the various system parts in the design process. Second, it can unveil optimization opportunities through exposing the information flow between the processing components. Third, the techniques are generally applicable to a wide range of applications with similar characteristics. In addition, multiple system-level techniques can be combined together or with application-level techniques to further improve the performance.

    We begin by studying some of the unique attributes of GPU-based embedded systems and discussing several factors that distinguish the design of these systems from that of the conventional high-end GPU-based systems. We then proceed to develop two techniques that address an important challenge in the design of GPU-based embedded systems from different perspectives. The challenge arises from the fact that GPUs require a large amount of workload to be present at runtime in order to deliver a high throughput. However, for some embedded applications, collecting large batches of input data requires an unacceptable waiting time, prompting a trade-off between throughput and latency. We also develop an optimization technique for GPU-based applications to address the memory bottleneck issue by utilizing the GPU L2 cache to shorten data access time. Moreover, in the area of graphics applications, and in particular with a focus on mobile games, we propose a power management scheme to reduce the GPU power consumption by dynamically adjusting the display resolution, while considering the user's visual perception at various resolutions. We also discuss the collective impact of the proposed techniques in tackling the design challenges of emerging complex systems.

    The proposed techniques are assessed by real-life experimentations on GPU-based hardware platforms, which demonstrate the superior performance of our approaches as compared to the state-of-the-art techniques.
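    As an illustration of the throughput-latency trade-off described above, the following minimal Python sketch models a GPU kernel whose execution time is a fixed launch overhead plus a per-item cost, and finds the batch sizes that can sustain a given input rate together with the average latency each batch size implies. This is not taken from the thesis; the arrival rate, the kernel time model and all numbers are assumptions made for the example.

        # Illustrative model of GPU batching: larger batches amortize the kernel
        # launch overhead (throughput goes up), but items must wait for the
        # batch to fill (latency goes up). All numbers are assumed.
        ARRIVAL_RATE = 50_000.0  # input items per second
        T_OVERHEAD = 100e-6      # fixed kernel launch/teardown cost (seconds)
        T_PER_ITEM = 2e-6        # GPU processing cost per item (seconds)

        def throughput(batch: int) -> float:
            """Sustained items per second when processing batches of this size."""
            return batch / (T_OVERHEAD + batch * T_PER_ITEM)

        def avg_latency(batch: int) -> float:
            """Average batch-fill wait (about half a batch period) plus kernel time."""
            fill_wait = (batch - 1) / (2.0 * ARRIVAL_RATE)
            return fill_wait + T_OVERHEAD + batch * T_PER_ITEM

        # Batch sizes that keep up with the input rate, and the latency-optimal one.
        feasible = [b for b in range(1, 4096) if throughput(b) >= ARRIVAL_RATE]
        best = min(feasible, key=avg_latency)
        print(f"smallest feasible batch: {feasible[0]}, latency-optimal batch: {best}, "
              f"average latency: {avg_latency(best) * 1e3:.2f} ms")

    In this toy model the latency-optimal batch coincides with the smallest feasible one, which captures the intuition above: any batching beyond what the input rate requires only adds waiting time.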

    List of papers
    1. General Purpose Computing on Low-Power Embedded GPUs: Has It Come of Age?
    2013 (English). In: 13th International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS 2013), Samos, Greece, July 15-18, 2013. IEEE Press, 2013. Conference paper, Published paper (Refereed).
    Abstract [en]

    In this paper we evaluate the promise held by low-power GPUs for the non-graphics workloads that arise in embedded systems. Towards this, we map and implement five benchmarks, which find utility in very different application domains, on an embedded GPU. Our results show that apart from accelerated performance, embedded GPUs are promising also because of their energy efficiency, which is an important design goal for battery-driven mobile devices. We show that adopting the same optimization strategies as those used for programming high-end GPUs might lead to worse performance on embedded GPUs. This is due to restricted features of embedded GPUs, such as limited or no user-defined memory, a small instruction set and a limited number of registers, among others. We propose techniques to overcome such challenges, e.g., by distributing the workload between the GPU and multi-core CPUs, in the spirit of heterogeneous computation.

    Place, publisher, year, edition, pages
    IEEE Press, 2013
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-92626 (URN); 10.1109/SAMOS.2013.6621099 (DOI); 000332458100004 ()
    Conference
    SAMOS'13
    Available from: 2013-05-14 Created: 2013-05-14 Last updated: 2018-12-07
    2. Saving Energy without Defying Deadlines on Mobile GPU-based Heterogeneous Systems
    2014 (English). In: 2014 International Conference on Hardware/Software Codesign and System Synthesis, Association for Computing Machinery (ACM), 2014. Conference paper, Published paper (Refereed).
    Abstract [en]

    With the advent of low-power programmable compute cores based on GPUs, GPU-equipped heterogeneous platforms are becoming common in a wide spectrum of industries including safety-critical domains like the automotive industry. While the suitability of GPUs for throughput oriented applications is well-accepted, their applicability for real-time applications remains an open issue. Moreover, in mobile/embedded systems, energy-efficient computing is a major concern and yet, there has been no systematic study on the energy savings that GPUs may potentially provide. In this paper, we propose an approach to utilize both the GPU and the CPU in a heterogeneous fashion to meet the deadlines of a real-time application while ensuring that we maximize the energy savings. We note that GPUs are inherently built to maximize the throughput and this poses a major challenge when deadlines must be satisfied. The problem becomes more acute when we consider the fact that GPUs are more energy efficient than CPUs and thus, a naive approach that is based on maximizing GPU utilization might easily lead to infeasible solutions from a deadline perspective.

    Place, publisher, year, edition, pages
    Association for Computing Machinery (ACM), 2014
    National Category
    Computer and Information Sciences
    Identifiers
    urn:nbn:se:liu:diva-112689 (URN); 10.1145/2656075.2656097 (DOI); 978-1-4503-3051-0 (ISBN)
    Conference
    International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS 2014), New Delhi, India, October 12-17, 2014
    Available from: 2014-12-08 Created: 2014-12-08 Last updated: 2018-12-07. Bibliographically approved
    3. Perception-aware power management for mobile games via dynamic resolution scaling
    2015 (English). In: 2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), IEEE, 2015, p. 613-620. Conference paper, Published paper (Refereed).
    Abstract [en]

    Modern mobile devices provide ultra-high resolutions in their display panels. This imposes an ever-increasing workload on the GPU, leading to high power consumption and shortened battery life. In this paper, we first show that resolution scaling leads to significant power savings. Second, we propose a perception-aware adaptive scheme that sets the resolution during game play. We exploit the fact that game players are often willing to trade quality for longer battery life. Our scheme uses decision theory, where the predicted user perception is combined with a novel asymmetric loss function that encodes users' varying willingness to save power.
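    To give the flavour of such a scheme, the following minimal Python sketch picks a display resolution by minimizing an expected loss that combines an assumed probability that the user perceives a quality drop with the power drawn at that resolution. The candidate resolutions, power numbers, perception probabilities and loss weights are all invented for the illustration and are not the models used in the paper; the asymmetry is reduced here to a larger weight on perceived quality loss than on power.

        # Illustrative perception-aware resolution choice via expected loss.
        CANDIDATES = {
            # resolution: (relative GPU power draw, assumed P(user notices degradation))
            (1920, 1080): (1.00, 0.00),
            (1600, 900):  (0.75, 0.10),
            (1280, 720):  (0.55, 0.35),
            (960, 540):   (0.40, 0.80),
        }

        W_QUALITY = 2.0  # weight of a perceived quality drop (assumed)
        W_POWER = 1.0    # weight of power consumption (assumed)

        def expected_loss(power: float, p_notice: float) -> float:
            """Asymmetric loss: annoying the user costs more than spending power."""
            return W_QUALITY * p_notice + W_POWER * power

        best = min(CANDIDATES, key=lambda r: expected_loss(*CANDIDATES[r]))
        print("chosen resolution:", best)  # picks 1600x900 with these numbers

    With these made-up numbers the scheme settles on an intermediate resolution, trading a small probability of perceived degradation for a 25% power reduction.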

    Place, publisher, year, edition, pages
    IEEE, 2015
    Series
    ICCAD-IEEE ACM International Conference on Computer-Aided Design, ISSN 1933-7760
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-124543 (URN); 10.1109/ICCAD.2015.7372626 (DOI); 000368929600084 (); 978-1-4673-8388-2 (ISBN)
    Conference
    2015 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2-6 Nov. 2015, Austin, TX
    Available from: 2016-02-02 Created: 2016-02-02 Last updated: 2018-12-07
    4. Latency-Aware Packet Processing on CPU-GPU Heterogeneous Systems
    2017 (English). In: DAC '17: Proceedings of the 54th Annual Design Automation Conference 2017, New York, NY, USA: Association for Computing Machinery (ACM), 2017. Conference paper, Published paper (Refereed).
    Abstract [en]

    In response to the tremendous growth of the Internet, towards what we call the Internet of Things (IoT), there is a need to move from costly, high-time-to-market specific-purpose hardware to flexible, low-time-to-market general-purpose devices for packet processing. Among several such devices, GPUs have attracted attention in the past, mainly because the high computing demand of packet processing applications can, potentially, be satisfied by these throughput-oriented machines. However, another important aspect of such applications is the packet latency which, if not handled carefully, will overshadow the throughput benefits. Unfortunately, until now, this aspect has been mostly ignored. To address this issue, we propose a method that considers the variable bit rate of the traffic and, depending on the current rate, minimizes the latency, while meeting the rate demand. We propose a persistent kernel based software architecture to overcome the challenges inherent in GPU implementation like kernel invocation overhead, CPU-GPU communication and memory access overhead. We have chosen packet classification as the packet processing application to demonstrate our technique. Using the proposed approach, we are able to reduce the packet latency on average by a factor of 3.5, compared to the state-of-the-art solutions, without any packet drop.

    Place, publisher, year, edition, pages
    New York, NY, USA: Association for Computing Machinery (ACM), 2017
    Series
    Design Automation Conference DAC, ISSN 0738-100X
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-141212 (URN); 10.1145/3061639.3062269 (DOI); 000424895400129 (); 2-s2.0-85023612665 (Scopus ID); 978-1-4503-4927-7 (ISBN)
    Conference
    54th ACM/EDAC/IEEE Design Automation Conference (DAC), Austin, TX, USA, June 18-22, 2017
    Available from: 2017-09-27 Created: 2017-09-27 Last updated: 2018-12-07. Bibliographically approved
  • Public defence: 2019-01-11 13:00 ACAS, A Building, Linköping
    Odar, Susanne
    Linköping University, Department of Management and Engineering, Industrial Economics. Linköping University, Faculty of Science & Engineering.
    Managementinitiativ, mening och verksamhetsresultat: En retrospektiv studie av en teknikintensiv verksamhet (2019). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In this study, the development of an organisation is viewed from a sensemaking perspective, which in short means that actions create meaning and meaning creates actions. Weick's (1995) influential conceptual model of sensemaking has been developed into a model and method that can be applied to empirical material. The development of the ASEA/ABB relay business over a thirty-year period, from the early 1980s to 2010, is described and analysed.

    The study is about understanding how organisations develop, and how managers and employees can influence an organisation's development and contribute to operational results. The aim has been delimited through the choice of theoretical perspective, method and research questions. The research questions concern the interplay between so-called management initiatives, meaning and the development of the business. Management initiatives are a type of action that managers in an organisation can decide on. How these influence, and are influenced by, the perceptions held by managers and employees in the organisation is described and analysed. The selected case comprises 85 initiatives. The study has shown that the development of a business is best understood through an analysis of the business in question and its environment, and that patterns that repeat over time can be found within the business.

    The method and model are generalisable and can be used for empirical studies of sensemaking in groups, organisations and societies. Perhaps the most important contribution of the model and method is that the interplay between different levels (the individual, interaction, structure and culture levels) can be studied over time, and that focus can be directed at the substance, the content, of sensemaking as well as at the process. The data comprise actions, arguments, expectations and commitments. In this study these have been regarded as expressions of sensemaking, but they can also be seen as expressions of other perspectives. The method and model can likewise be used in other process studies where the development of these categories over time is of interest, and where the interplay between different levels is of significance.

    The ambition is primarily to contribute to the ongoing conversation within sensemaking research, but a further hope is to make the area accessible to researchers in other fields and to practitioners who have not encountered sensemaking before.

  • Public defence: 2019-01-18 13:15 K3, Kåkenhus, Norrköping
    Dahlberg, Joen
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Cost allocation methods in cooperative transportation planning (2015). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Transportation, together with transportation planning for goods, provides good conditions for economic growth and is a natural part of modern society. However, transportation has negative side effects, including emissions and traffic congestion. A freight forwarder may consolidate shippers’ goods in order to reduce some of the negative side effects, thus reducing emissions and/or congestion as well as operational costs. The negative side effects as well as operational costs can be further reduced if a number of freight forwarders cooperate and consolidate their collective goods flows. Consolidation refers to the process of merging a number of the freight forwarders’ shipments of goods into a single shipment. In this case, the freight forwarders are cooperating with competitors (the other freight forwarders).

    Fair cost allocations are important for establishing and maintaining cost-efficient cooperation among competing stakeholders. Cooperative game theory defines a number of criteria for fair cost allocations and the problem associated with the decision process for allocating costs is referred to as the cost allocation problem. In this thesis, cooperative game theory is used as an academic tool to study cooperation among stakeholders in two transportation planning applications, namely 1) the distribution of goods bound for urban areas and 2) the transportation of wood between harvest areas and industries.

    In transportation planning application 1, there is a cooperation among a number of freight forwarders and a municipality. Freight forwarders’ goods bound for an urban area are consolidated at a facility located just outside the urban area. In this thesis, operational costs for distributing the goods are assessed by solving vehicle routing problems. Common methods from cooperative game theory are used for allocating the operational costs among the freight forwarders and the municipality. In transportation planning application 2, forest companies cooperate in terms of the supply and transportation of common resources, or more specifically, different types of wood. Each forest company has harvest areas and industries to which the wood is transported. The resources may be bartered, that is, the forest companies may transport wood from each other’s harvest areas.

    In the cooperative game theory literature, the stakeholders are often treated equally in the context of transportation planning. However, there seems to be a lack of studies on cooperations where at least one stakeholder differs from the other stakeholders in some fundamental way, for instance, as an initiator or an enabler of the cooperation. Such cooperations are considered in this thesis. The municipality and one of the forest companies are considered to be the initiators in their respective applications.

    Five papers are appended to this thesis, and the overall aim is to contribute to research into cooperative transportation planning by using concepts from cooperative game theory to develop methods for allocating costs among cooperating stakeholders. The purpose of this thesis is to provide decision support for planners in the decision-making process of transportation planning, in order to establish cost-efficient and stable cooperations.

    Some of the main outcomes of this thesis are viable and practical methods that could be used in real-life situations to allocate costs among cooperating stakeholders, as well as support for decision-makers concerned with transportation planning. This is done by demonstrating the potential of cooperation, such as cost reduction, and by suggesting how costs can be allocated fairly in the transportation planning applications considered. Lastly, a contribution to cooperative game theory is provided: a development of the equal profit method for allocating costs. The proposed version is the equal profit method with lexicography, which, in contrast to the former, is guaranteed to yield at most one solution to any cost allocation problem. Lexicography is used to rank potential cost allocations, and the unambiguously best cost allocation is chosen.
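    As a concrete example of the kind of cost allocation the thesis studies, the following Python sketch computes the Shapley value, one of the common solution concepts mentioned above, for a hypothetical three-player cost game (the coalition costs are invented for the illustration and do not come from the thesis). Each player's share is its average marginal cost over all orders in which the grand coalition can be assembled.

        from itertools import permutations

        # Hypothetical coalition costs for three freight forwarders A, B and C;
        # consolidation makes every coalition cheaper than the sum of its parts.
        cost = {
            frozenset(): 0.0,
            frozenset("A"): 60.0, frozenset("B"): 50.0, frozenset("C"): 40.0,
            frozenset("AB"): 90.0, frozenset("AC"): 80.0, frozenset("BC"): 70.0,
            frozenset("ABC"): 105.0,
        }
        players = ["A", "B", "C"]

        def shapley(players, cost):
            """Average each player's marginal cost over all join orders."""
            phi = {p: 0.0 for p in players}
            orders = list(permutations(players))
            for order in orders:
                coalition = frozenset()
                for p in order:
                    phi[p] += cost[coalition | {p}] - cost[coalition]
                    coalition = coalition | {p}
            return {p: total / len(orders) for p, total in phi.items()}

        print(shapley(players, cost))  # {'A': 45.0, 'B': 35.0, 'C': 25.0}

    The shares sum to the grand-coalition cost of 105, and each player pays less than its stand-alone cost, which is precisely what makes the cooperation stable and attractive in this toy instance.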

    List of papers
    1. Consolidation in Urban Freight Transportation - Cost Allocation Models
    2018 (English). In: Asia-Pacific Journal of Operational Research, ISSN 0217-5959, E-ISSN 1793-7019, Vol. 35, no. 4, article id 1850023. Article in journal (Refereed). Published.
    Abstract [en]

    In this paper, the focus is on the role of the municipality as an enabler of a collaboration between freight forwarders and the municipality, in which the consolidation of goods is considered a means of improving goods flows in urban freight transportation. We present a cost allocation model, based on solution concepts from cooperative game theory, for allocating the operational costs associated with the collaboration. It is assumed that the municipality is willing to carry some of the cost to ensure a stable collaboration, in return for the potential benefits received, e.g., reduced traffic congestion in the city. The model is applied to some illustrative examples, and the cost allocation results are discussed. It is shown that the role of the municipality may be decisive in achieving a stable collaboration between the freight forwarders, and further that the municipality does not necessarily need to contribute to covering the costs.

    Place, publisher, year, edition, pages
    World Scientific Publishing Co Pte Ltd, 2018
    Keywords
    Collaboration; cost allocation; city distribution center; municipality
    National Category
    Transport Systems and Logistics
    Identifiers
    urn:nbn:se:liu:diva-150486 (URN); 10.1142/S0217595918500239 (DOI); 000441395200005 ()
    Note

    Funding Agencies|Swedish Energy Agency; VINNOVA

    Available from: 2018-08-24 Created: 2018-08-24 Last updated: 2018-12-13
    2. A note on the nonuniqueness of the Equal Profit Method
    2017 (English). In: Applied Mathematics and Computation, ISSN 0096-3003, E-ISSN 1873-5649, Vol. 308, p. 84-89. Article in journal (Refereed). Published.
    Abstract [en]

    When a set of players cooperate, they need to decide how the collective cost should be allocated amongst them. Cooperative game theory provides several methods, or solution concepts, that can be used as tools for cost allocation. In this note, we consider a specific solution concept called the Equal Profit Method (EPM). In some cases, a solution to the EPM is any one of infinitely many solutions; that is, it is not always unique. This leads to a lack of clarity in the characterization of the solutions obtained by the EPM. We present a modified version of the EPM which, unlike its precursor, ensures a unique solution. To illustrate the differences, we present some numerical examples and comparisons between the two concepts.
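    To make the EPM concrete, the sketch below implements one common linear-programming formulation of it (the formulation details and the toy game, the same one as in the Shapley example above, are assumptions for illustration, not reproduced from this paper): minimize the largest pairwise difference f between the players' relative savings y_i / c({i}), subject to core stability and to the shares summing to the grand-coalition cost.

        import numpy as np
        from itertools import combinations
        from scipy.optimize import linprog

        players = ["A", "B", "C"]
        cost = {frozenset(): 0.0,
                frozenset("A"): 60.0, frozenset("B"): 50.0, frozenset("C"): 40.0,
                frozenset("AB"): 90.0, frozenset("AC"): 80.0, frozenset("BC"): 70.0,
                frozenset("ABC"): 105.0}
        n = len(players)
        stand_alone = np.array([cost[frozenset(p)] for p in players])

        # Variables: y_1..y_n (cost shares) and f (max pairwise difference in
        # relative savings). Objective: minimize f.
        c_obj = np.zeros(n + 1)
        c_obj[-1] = 1.0

        A_ub, b_ub = [], []
        # f >= y_i / c({i}) - y_j / c({j}) for every ordered pair (i, j).
        for i in range(n):
            for j in range(n):
                if i != j:
                    row = np.zeros(n + 1)
                    row[i] = 1.0 / stand_alone[i]
                    row[j] = -1.0 / stand_alone[j]
                    row[-1] = -1.0
                    A_ub.append(row)
                    b_ub.append(0.0)
        # Core (stability): no coalition pays more than its own cost.
        for size in range(1, n):
            for S in combinations(range(n), size):
                row = np.zeros(n + 1)
                row[list(S)] = 1.0
                A_ub.append(row)
                b_ub.append(cost[frozenset(players[i] for i in S)])
        # Efficiency: the shares sum to the grand-coalition cost.
        A_eq = [np.concatenate([np.ones(n), [0.0]])]
        b_eq = [cost[frozenset(players)]]

        res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(None, None)] * n + [(0, None)])
        print(dict(zip(players, res.x[:n])), "f =", res.x[-1])

    In this instance the relative savings can be equalized exactly (f = 0, giving shares proportional to the stand-alone costs); when the core constraints bind, several allocations may attain the same optimal f, which is exactly the nonuniqueness this note addresses and which its lexicographic refinement removes.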

    Keywords
    Game Theory, Unique Solution, Solution Concept, EPM, Linear Programming, Lexicography
    National Category
    Transport Systems and Logistics
    Identifiers
    urn:nbn:se:liu:diva-121557 (URN); 10.1016/j.amc.2017.03.018 (DOI); 000399591500007 ()
    Note

    Funding agencies: Swedish Energy Agency

    Available from: 2015-09-24 Created: 2015-09-24 Last updated: 2018-12-13
    3. Incitements for transportation collaboration by cost allocation
    2018 (English). In: Central European Journal of Operations Research, ISSN 1435-246X, E-ISSN 1613-9178. Article in journal (Refereed). Epub ahead of print.
    Abstract [en]

    In this paper, we focus on how cost allocation can be used as a means to create incentives for collaboration among companies, with the aim of reducing the total transportation cost. The collaboration is assumed to be preceded by a simultaneous invitation of the companies to collaborate. We make use of concepts from cooperative game theory, including the Shapley value, the Nucleolus and the EPM, and develop specific cost allocation mechanisms aiming to achieve large collaborations among many companies. The cost allocation mechanisms are tested on a case study that involves transportation planning activities. Although the case study is from a specific transportation sector, the findings in this paper can be adapted to collaborations in other types of transportation planning activities. Two of the cost allocation mechanisms ensure that any sequence of companies joining the collaboration represents a complete monotonic path, that is, any sequence of collaborating companies is such that the sequences of allocated costs are non-increasing for all companies.

    Place, publisher, year, edition, pages
    Springer, 2018
    Keywords
    Collaboration, Transportation planning, Monotonic Path, Cost Allocation, Cooperative game theory
    National Category
    Transport Systems and Logistics
    Identifiers
    urn:nbn:se:liu:diva-121558 (URN); 10.1007/s10100-018-0530-2 (DOI)
    Funder
    Swedish Energy Agency; VINNOVA
    Available from: 2015-09-24 Created: 2015-09-24 Last updated: 2018-12-13. Bibliographically approved
  • Public defence: 2019-01-23 10:15 BL32, B-huset, Linköping
    Shuaib, Budor
    Linköping University, Department of Mathematics, Mathematics and Applied Mathematics. Linköping University, Faculty of Science & Engineering.
    Ghostpeakons (2018). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis we study peakons (peaked solitons), a class of solutions which occur in certain wave equations, such as the Camassa–Holm shallow water equation and its mathematical relatives, the Degasperis–Procesi, Novikov and Geng–Xue equations. These four non-linear partial differential equations are all integrable systems in the sense of having Lax pairs, infinitely many conservation laws, and multipeakon solutions given by explicitly known formulas.

    In the first paper, we develop a method which uses so-called ghostpeakons (peakons with amplitude zero) to find explicit formulas for the characteristic curves associated with the multipeakon solutions of the Camassa–Holm, Degasperis–Procesi and Novikov equations.

    In the second paper, we use the ghostpeakon method to derive explicit formulas for arbitrary multipeakon solutions of the two-component Geng–Xue equation. The general case involves many inequivalent peakon configurations, depending on the order in which the peakons occur in the two components of the solution; previously, the solution was known only in the so-called interlacing case, where the peakons lie alternately in one component and in the other. To obtain the solution formulas for an arbitrary configuration, we introduce auxiliary peakons to make the configuration interlacing. By taking suitable limits, we then drive the amplitudes of the auxiliary peakons to zero, leaving the solution formulas for the remaining ordinary peakons.
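    For orientation, the best-known case is the Camassa–Holm equation, whose multipeakon ansatz and governing ODE system are standard and are quoted here for illustration (the thesis treats the other equations analogously):

        % Camassa-Holm equation and its multipeakon solutions
        u_t - u_{xxt} + 3 u u_x = 2 u_x u_{xx} + u u_{xxx} ,
        \qquad
        u(x,t) = \sum_{k=1}^{n} m_k(t) \, e^{-|x - x_k(t)|} ,

        % where the positions and amplitudes satisfy
        \dot{x}_k = \sum_{j=1}^{n} m_j \, e^{-|x_k - x_j|} ,
        \qquad
        \dot{m}_k = m_k \sum_{j=1}^{n} m_j \, \operatorname{sgn}(x_k - x_j) \, e^{-|x_k - x_j|} .

    Setting an amplitude m_k = 0 gives a ghostpeakon: it leaves the other peakons unaffected, yet its position still satisfies \dot{x}_k = u(x_k, t), so its trajectory traces a characteristic curve of the solution. This is the observation that the ghostpeakon method builds on.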

  • Public defence: 2019-01-25 13:00 ACAS, A-Building, Linköping
    Radits, Markus
    Linköping University, Department of Management and Engineering, Industrial Economics. Linköping University, Faculty of Science & Engineering.
    A Business Ecology Perspective on Community-Driven Open Source: The Case of the Free and Open Source Content Management System Joomla (2019). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis approaches the phenomenon of open source software (OSS) from a managerial and organisational point of view. In a slightly narrower sense, this thesis studies commercialisation aspects around community-driven open source. The term ‘community-driven’ signifies open source projects that are managed, steered, and controlled by communities of volunteers, as opposed to those that are managed, steered, and controlled by single corporate sponsors.

    By adopting a business ecology perspective, this thesis places emphasis on the larger context within which the commercialisation of OSS is embedded (e.g., global and collaborative production regimes, ideological foundations, market characteristics, and diffuse boundary conditions). Because many business benefits arise as a consequence of the activities taking place in the communities and ecosystems around open source projects, a business ecology perspective may be a useful analytical guide for understanding the opportunities, challenges, and risks that firms face in commercializing OSS.

    There are two overarching themes guiding this thesis. The first theme concerns the challenges that firms face in commercialising community-driven open source. There is a tendency in the literature on business ecosystems and open source to emphasise the benefits, opportunities, and positive aspects of behaviour at the expense of the challenges that firms face. However, business ecosystems are not only spaces of opportunity; they may also pose a variety of challenges that firms need to overcome in order to be successful. To help rectify this imbalance in the literature, the first theme focuses particularly on the challenges that firms face in commercialising community-driven open source. The underlying ambition is to facilitate a more balanced and holistic understanding of the collaborative and competitive dynamics in ecosystems around open source projects.

    The other theme concerns the complex intertwining of community engagement and profit-oriented venturing. As is acknowledged in the literature, the subject of firm–community interaction has become increasingly important because the survival, success, and sustainability of peer production communities have become of strategic relevance to many organisations. However, while many strategic benefits may arise as a consequence of firm–community interaction, there is a lack of research studying how the value-creating logics of firm–community interaction are embedded within the bigger picture in which they occur. Bearing this bigger picture in mind, this thesis explores the intertwining of volunteer community engagement and profit-oriented venturing by focusing on four aspects that are theorised in the literature: reinforcement, complementarity, synergy, and reciprocity.

    This thesis is designed as a qualitative exploratory single-case study. The empirical case is Joomla, a popular open source content management system. In a nutshell, the Joomla case in this thesis comprises the interactions in the Joomla community and the commercial activities around the Joomla platform (e.g., web development, consulting, marketing, customisation, extensions). In order to achieve greater analytical depth, the business ecology perspective is complemented with ideas and propositions from other theoretical areas, such as stakeholder theory, community governance, organizational identity, motivation theory, pricing, and bundling.

    The findings show that the common challenges in commercialising community-driven open source revolve around nine distinct factors that roughly cluster into three domains: the ecosystem, the community, and the firm. In short, the domain of the ecosystem comprises the global operating environment, the pace of change, and the cannibalisation of ideas. The domain of the community comprises the platform policy, platform image, and the voluntary nature of the open source project. And finally, the domain of the firm comprises the blurring boundaries between private and professional lives, the difficulty of estimating costs, and firm dependencies. Based on these insights, a framework for analysing community-based value creation in business ecosystems is proposed. This framework integrates collective innovation, community engagement, and value capture into a unified model of value creation in contexts of firm–community interaction.

    Furthermore, the findings reveal demonstrable effects of reinforcement, complementarity, synergy, and reciprocity in the intertwining of volunteer community engagement and profit-oriented venturing. By showing that this intertwining can be strong in empirical cases where commercial activities are often implicitly assumed to be absent, this thesis provides a more nuanced understanding of firm involvement in the realm of open source.

    Based on the empirical and analytical insights, a number of further theoretical implications are discussed, such as the role of intersubjective trust in relation to the uncertainties that commercial actors face, an alternative way of classifying community types, the metaphor of superorganisms in the context of open source, issues pertaining to the well-being of community participants, and issues in relation to the transitioning of open source developers from a community-based to an entrepreneurial self-identity when commercialising an open source solution. Furthermore, this thesis builds on six sub-studies that make individual contributions of their own.

    In a broad sense, this thesis contributes to the literature streams on the commercialisation of OSS, the business value and strategic aspects of open source, the interrelationships between community forms of organising and entrepreneurial activities, and the nascent research on ecology perspectives on peer-production communities. A variety of opportunities for future research are highlighted.