liu.se: Search for publications in DiVA
1 - 50 of 187
  • 1.
    Abrahamsson, S.
    et al.
    SLU, Umeå, Sweden.
    Ahlinder, J.
    FOI, Umeå, Sweden.
    Waldmann, Patrik
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, The Institute of Technology.
    García-Gil, M. R.
    SLU, Umeå, Sweden.
    Maternal heterozygosity and progeny fitness association in an inbred Scots pine population. 2013. In: Genetica, ISSN 0016-6707, E-ISSN 1573-6857, Vol. 141, no 1-3, p. 41-50. Article in journal (Refereed).
    Abstract [en]

    Associations between heterozygosity and fitness traits have typically been investigated in populations characterized by low levels of inbreeding. We investigated the associations between standardized multilocus heterozygosity (stMLH) in mother trees (obtained from 12 nuclear microsatellite markers) and five fitness traits measured in progenies from an inbred Scots pine population. The traits studied were proportion of sound seed, mean seed weight, germination rate, mean family height of one-year-old seedlings under greenhouse conditions (GH), and mean family height of three-year-old seedlings under field conditions (FH). The relatively high average inbreeding coefficient (F) in the population under study corresponds to a mixture of trees with different levels of co-ancestry, potentially resulting from a recent bottleneck. We used both frequentist and Bayesian methods of polynomial regression to investigate the presence of linear and non-linear relations between stMLH and each of the fitness traits. No significant associations were found for any of the traits except GH, which displayed a negative linear effect of stMLH. The negative heterozygosity-fitness correlation (HFC) for GH could potentially be explained by heterosis caused by the mating of two inbred mother trees (Lippman and Zamir 2006), or by outbreeding depression in the most heterozygous trees and its negative impact on the fitness of the progeny, while their simultaneous action is also possible (Lynch 1991). However, since this effect was not detected for FH, we cannot rule out that the greenhouse conditions introduce artificial effects that disappear under more realistic field conditions.
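    For orientation: a common definition of standardized multilocus heterozygosity in the HFC literature (the abstract itself does not spell out the formula used, so this is stated as an assumption) is $\mathrm{stMLH}_i = (h_i/L_i) \,/\, (\frac{1}{N}\sum_{j=1}^{N} h_j/L_j)$, where $h_i$ is the number of heterozygous loci of tree $i$, $L_i$ the number of loci successfully typed for that tree, and $N$ the number of mother trees; values above 1 thus mark trees that are more heterozygous than the population average.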

  • 2.
    Aksomaitis, Algimantas
    et al.
    Kaunas Univ of Technology.
    Burauskaite-Harju, Agne
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    The Moments of the Maximum of Normally Distributed Dependent Values. 2009. In: Information Technology and Control, ISSN 1392-124X, E-ISSN 2335-884X, Vol. 38, no 4, p. 301-302. Article in journal (Refereed).
    Abstract [en]

    In this paper the moments of the maximum of a finite number of random values are analyzed. The largest part of the analysis focuses on extremes of dependent normal values. For the case of the normal distribution, the moments of the maximum of dependent values are expressed through the moments of independent values.
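    For reference, the two-variable case has a classical closed form (Clark, 1961), given here as background rather than as a result of the paper: for a bivariate normal pair $(X_1,X_2)$ with means $\mu_1,\mu_2$, variances $\sigma_1^2,\sigma_2^2$ and correlation $\rho$, and $\theta=\sqrt{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2}$, one has $E[\max(X_1,X_2)] = \mu_1\Phi((\mu_1-\mu_2)/\theta) + \mu_2\Phi((\mu_2-\mu_1)/\theta) + \theta\varphi((\mu_1-\mu_2)/\theta)$, where $\Phi$ and $\varphi$ are the standard normal CDF and density; the dependence enters only through $\theta$.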

  • 3.
    Alnervik, Jonna
    et al.
    Linköping University, Department of Computer and Information Science, Statistics.
    Nord Andersson, Peter
    Linköping University, Department of Computer and Information Science, Statistics.
    En retrospektiv studie av vilka patientgrupper som erhåller insulinpump [A retrospective study of which patient groups receive an insulin pump]. 2010. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [sv]

    Objective

    To investigate differences between patient groups in access to insulin pumps, and what precedes a switch to an insulin pump.

    Method

    Data from 7,224 individuals with type 1 diabetes at ten care units were analysed to investigate the effects of renal function, sex, long-term blood glucose, insulin dose, diabetes duration and age. Patient groups were compared using logistic regression in a cross-sectional design, and Cox regression was used to investigate what preceded a switch to a pump.

    Results

    The logistic regression gives a picture of the current differences between patients who use an insulin pump and those who do not. The Cox regression adds a time perspective and thereby answers what preceded a switch to an insulin pump. The two analyses gave similar results for variables that are constant over time. Women use pumps to a greater extent than men, and the proportion of pump users differs between care units. At present, high age lowers the probability of using an insulin pump; this was confirmed by the time-dependent analysis, which showed that the probability of switching to a pump is considerably lower at high age. Long-term blood glucose also has a clear effect on the probability of switching to a pump: a high long-term blood glucose implies a high probability of switching to an insulin pump.

    Conclusions

    There are currently differences between patient groups in the proportion of insulin pump users, and the groups also differ in their propensity to switch from other insulin treatments to an insulin pump. Depending on renal function, sex, long-term blood glucose, insulin dose, diabetes duration and age, patients thus have different probabilities of switching to an insulin pump.
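    A minimal sketch of the two analyses described above, in Python; the file and column names are hypothetical, since the thesis's data are not available here (statsmodels supplies the logistic model, lifelines the Cox model):

        import pandas as pd
        import statsmodels.api as sm
        from lifelines import CoxPHFitter

        df = pd.read_csv("diabetes_units.csv")  # hypothetical data set
        covariates = ["renal_function", "sex", "long_term_glucose",
                      "insulin_dose", "diabetes_duration", "age"]

        # Cross-sectional view: who uses a pump today? (logistic regression)
        logit = sm.Logit(df["uses_pump"], sm.add_constant(df[covariates])).fit()
        print(logit.summary())

        # Time-to-event view: what preceded a switch? (Cox regression)
        cph = CoxPHFitter()
        cph.fit(df[covariates + ["followup_years", "switched_to_pump"]],
                duration_col="followup_years", event_col="switched_to_pump")
        cph.print_summary()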

  • 4.
    Andersson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Statistics.
    Hansson, Josef
    Linköping University, Department of Computer and Information Science, Statistics.
    Metodik för detektering av vägåtgärder via tillståndsdata [Methodology for detecting road treatments from condition data]. 2010. Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

    The Swedish Transport Administration manages a database containing information on the condition of all paved, state-operated Swedish roads. The purpose of the database is to support the Pavement Management System (PMS). The PMS is used to identify sections of road in need of treatment, to allocate resources, and to get a general picture of the state of the road network. All major treatments should be reported, but this has not always been done.

    Road condition is measured using a number of indicators, e.g. of the road's unevenness. Rut depth is an indicator of the road's transverse unevenness. When a treatment has been carried out, the condition changes drastically, and this is reflected in the indicators.

    The purpose of this master's thesis is to use existing indicators to find the points in time when a road has been treated.

    We have created a SAS program based on simple linear regression to analyze rut depth changes over time. The program looks for level changes in the rut depth trend; a drastic negative change means that a treatment has been made.

    The proportion of roads whose alleged date of latest treatment was earlier than the program's latest detected date was 37 percent. The proportion of possible treatments found by the program but not reported differs between regions: the North and Central regions show the highest proportions of discrepancies. There are also differences between road groups with different amounts of traffic. The differences between the regions are not entirely explained by some regions having a greater proportion of heavily trafficked roads.
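    The detection step lends itself to a compact sketch. The thesis implemented it in SAS; the Python version below, including the window length and the drop threshold, is purely illustrative:

        import numpy as np

        def detect_treatments(t, rut, min_drop=3.0, window=6):
            """Return times where the fitted rut-depth level drops sharply.

            For each candidate break i, straight lines are fitted to the
            `window` observations before and after i (np.polyfit), and a
            treatment is flagged when the level predicted at t[i] drops
            by at least `min_drop` mm. All thresholds are illustrative.
            """
            hits = []
            for i in range(window, len(t) - window):
                before = np.polyfit(t[i - window:i], rut[i - window:i], 1)
                after = np.polyfit(t[i:i + window], rut[i:i + window], 1)
                gap = np.polyval(before, t[i]) - np.polyval(after, t[i])
                if gap >= min_drop:
                    hits.append(t[i])
            return hits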

  • 5.
    Ansell, Ricky
    et al.
    Linköping University, Department of Physics, Chemistry and Biology, Biology. Linköping University, Faculty of Science & Engineering. Polismyndigheten - Nationellt Forensiskt Centrum.
    Nordgaard, Anders
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences. Polismyndigheten - Nationellt Forensiskt Centrum.
    Hedell, Ronny
    Polismyndigheten - Nationellt Forensiskt Centrum.
    Interpretation of DNA Evidence: Implications of Thresholds Used in the Forensic Laboratory. 2014. Conference paper (Other academic).
    Abstract [en]

    Evaluation of forensic evidence is a process lined with decisions and balancing, often with a substantial degree of subjectivity. Already at the crime scene, many decisions have to be made about search strategies, the amount of evidence and traces recovered, and what is later prioritised and sent on to the forensic laboratory. Within the laboratory there must be several criteria (often in terms of numbers) for how much and which parts of the material should be analysed. In addition, there is often a restricted timeframe for delivering a statement to the commissioner, which in practice might influence the work done. The path of DNA evidence from the recovery of a trace at the crime scene to the interpretation and evaluation made in court involves several decisions based on cut-offs of different kinds. These include quality-assurance thresholds like limits of detection and quantitation, but also less strictly defined thresholds like upper limits on the prevalence of alleles not observed in DNA databases. In a verbal scale of conclusions there are lower limits on likelihood ratios for DNA evidence above which the evidence can be said to strongly support, very strongly support, etc., a proposition about the source of the evidence. Such thresholds may be arbitrarily chosen or based on logical reasoning with probabilities. However, likelihood ratios for DNA evidence depend strongly on the population of potential donors, and this may not be understood among the end-users of such a verbal scale. Even apparently strong DNA evidence against a suspect may be reported on either side of a threshold in the scale depending on whether a close relative is part of the donor population or not. In this presentation we review the use of thresholds and cut-offs in DNA analysis and interpretation and investigate the sensitivity of the final evaluation to how such rules are defined. In particular, we show the effects of cut-offs when multiple propositions about alternative sources of a trace cannot be avoided, e.g. when there are close relatives of the suspect with high propensities to have left the trace. Moreover, we discuss the possibility of including costs (in terms of time or money) for a decision-theoretic approach in which expected values of information could be analysed.

  • 6.
    Arvid, Odencrants
    et al.
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, The Institute of Technology.
    Dennis, Dahl
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Utvärdering av Transportstyrelsens flygtrafiksmodeller [Evaluation of the Swedish Transport Agency's air traffic models]. 2014. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The Swedish Transport Agency has for a long time collected monthly data on variables used to make both short and long projections. It has used SAS to build statistical models of air traffic, and the model with the largest coefficient of determination has traditionally been the one used. The Swedish Transport Agency felt it was time for an evaluation of its models and of how the projections are estimated, and also wanted to explore completely new models for forecasting air travel. This Bachelor's thesis examines how the Holt-Winters method compares with SARIMA; error measures such as RMSE, MAPE, R2, AIC and BIC are compared between the methods.

    The results show that the Holt-Winters models may adapt a bit too closely to a few of the variables to which they have been fitted, but overall the Holt-Winters method generates the better forecasts.
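    A minimal sketch of such a comparison with statsmodels; the file name, the model orders and the 12-month holdout are illustrative assumptions, not the thesis's setup:

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # Hypothetical monthly air-traffic series
        y = pd.read_csv("air_traffic.csv", index_col=0, parse_dates=True).squeeze()
        train, test = y[:-12], y[-12:]  # hold out the final year

        hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                                  seasonal_periods=12).fit()
        sar = SARIMAX(train, order=(1, 1, 1),
                      seasonal_order=(1, 1, 1, 12)).fit(disp=False)

        for name, fc in [("Holt-Winters", hw.forecast(12)),
                         ("SARIMA", sar.forecast(12))]:
            err = test.values - np.asarray(fc)
            rmse = np.sqrt(np.mean(err ** 2))
            mape = np.mean(np.abs(err / test.values)) * 100
            print(f"{name}: RMSE = {rmse:.1f}, MAPE = {mape:.1f}%")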

  • 7.
    Asker, Christian
    et al.
    Linköping University, Department of Physics, Chemistry and Biology, Theoretical Physics. Linköping University, The Institute of Technology.
    Belonoshko, A. B.
    Applied Materials Physics, Department of Material Science and Engineering, The Royal Institute of Technology, 10044 Stockholm, Sweden.
    Grimvall, Anders
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, The Institute of Technology.
    Abrikosov, Igor
    Linköping University, Department of Physics, Chemistry and Biology, Theoretical Physics. Linköping University, The Institute of Technology.
    Electronic and atomic structure of Mo from high-temperature molecular dynamics simulations. Manuscript (preprint) (Other (popular science, discussion, etc.)).
    Abstract [en]

    By means of ab initio molecular dynamics (AIMD) simulations we carry out a detailed study of the electronic and atomic structure of Mo upon the thermal stabilization of its dynamically unstable face-centered cubic (fcc) phase. We calculate how the atomic positions, radial distribution function, and the electronic density of states of fcc Mo evolve with temperature. The results are compared with those for the dynamically stable body-centered cubic (bcc) phase of Mo, as well as with bcc Zr, which is dynamically unstable at T = 0 K but (in contrast to fcc Mo) becomes thermodynamically stable at high temperature. In particular, we emphasize the difference between the local positions of atoms in the simulation boxes at a particular step of the AIMD simulation and the average positions around which the atoms vibrate, and show that the former are solely responsible for the electronic properties of the material. We observe that while the average atomic positions in fcc Mo correspond perfectly to the ideal structure at high temperature, the electronic structure of the metal calculated from AIMD differs substantially from the canonical shape of the density of states for the ideal fcc crystal. From a comparison of our results obtained for fcc Mo and bcc Zr, we advocate the use of electronic structure calculations, complemented with studies of radial distribution functions, as a sensitive test of the degree of temperature-induced stabilization of phases that are dynamically unstable at T = 0 K.

  • 8.
    Bak, Zoltan
    et al.
    Linköping University, Department of Biomedical Engineering. Linköping University, The Institute of Technology.
    Sjöberg, Folke
    Linköping University, Department of Clinical and Experimental Medicine, Burn Unit. Linköping University, Faculty of Health Sciences.
    Eriksson, Olle
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Steinvall, Ingrid
    Linköping University, Department of Clinical and Experimental Medicine. Linköping University, Faculty of Health Sciences.
    Janerot Sjöberg, Birgitta
    Linköping University, Department of Medicine and Health Sciences, Clinical Physiology. Linköping University, Faculty of Health Sciences.
    Cardiac dysfunction after burns. 2008. In: Burns, ISSN 0305-4179, E-ISSN 1879-1409, Vol. 34, no 5, p. 603-609. Article in journal (Refereed).
    Abstract [en]

    Objectives

    Using transoesophageal echocardiography (TEE), we investigated the occurrence of regional wall motion abnormalities (WMA) and diastolic dysfunction, and their association with raised troponin concentrations, during fluid resuscitation in patients with severe burns.

    Patients and methods

    Ten consecutive adults (aged 36–89 years, two women) with burns exceeding 20% total burned body surface area who needed mechanical ventilation were studied. Their mean Baux index was 92.7, and they were resuscitated according to the Parkland formula. Thirty series of TEE examinations and simultaneous laboratory tests for myocyte damage were done 12, 24, and 36 h after the burn.

    Results

    Half (n = 5) the patients had varying grades of leakage of the marker that correlated with changeable WMA at 12, 24 and 36 h after the burn (p ≤ 0.001, 0.044 and 0.02, respectively). No patient had WMA and normal concentrations of biomarkers or vice versa. The mitral deceleration time was short, but left ventricular filling velocity increased together with stroke volume.

    Conclusion

    Acute myocardial damage recorded by both echocardiography and leakage of troponin was common, and there was a close correlation between them. This was true even when global systolic function was not impaired. The mitral flow Doppler pattern suggested restrictive left ventricular diastolic function.

  • 9.
    Bak, Zoltan
    et al.
    Linköping University, Department of Medical and Health Sciences, Anesthesiology. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Anaesthesiology and Surgical Centre, Department of Surgery UHL.
    Sjöberg, Folke
    Linköping University, Department of Clinical and Experimental Medicine, Burn Center. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Reconstruction Centre, Department of Plastic Surgery, Hand surgery UHL.
    Eriksson, Olle
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Steinvall, Ingrid
    Linköping University, Department of Clinical and Experimental Medicine. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Reconstruction Centre, Department of Plastic Surgery, Hand surgery UHL.
    Janerot Sjöberg, Birgitta
    Linköping University, Department of Medical and Health Sciences, Clinical Physiology. Linköping University, Faculty of Health Sciences. Östergötlands Läns Landsting, Heart Centre, Department of Clinical Physiology.
    Hemodynamic Changes During Resuscitation After Burns Using the Parkland Formula. 2009. In: Journal of Trauma, ISSN 0022-5282, E-ISSN 1529-8809, Vol. 66, no 2, p. 329-336. Article in journal (Refereed).
    Abstract [en]

    Background: The Parkland formula (2-4 mL/kg/% of total body surface area burned), with urine output and mean arterial pressure (MAP) as endpoints, is recommended worldwide for fluid resuscitation in burns. There has recently been a discussion on whether central circulatory endpoints should be used instead, and also whether the volumes of fluid should be larger. Despite this, few central hemodynamic data are available in the literature on the results when the formula is used correctly.

    Methods: Ten burned patients, admitted to our unit early and with a burned area of >20% of total body surface area, were investigated at 12, 24, and 36 hours after injury. Using transesophageal echocardiography, pulmonary artery catheterization, and transpulmonary thermodilution to monitor them, we evaluated the cardiovascular coupling when urinary output and MAP were used as endpoints.

    Results: Oxygen transport variables, heart rate, MAP, and left ventricular fractional area did not change significantly during fluid resuscitation. Left ventricular end-systolic and end-diastolic area and the global end-diastolic volume index increased from subnormal values at 12 hours to normal ranges at 24 hours after the burn. The ratio of extravascular lung water to intrathoracic blood volume was increased 12 hours after the burn.

    Conclusions: Preload variables, global systolic function, and oxygen transport recorded simultaneously by three separate methods showed no need to increase the total fluid volume within 36 hours of a major burn. Early (12 hours) signs of central circulatory hypovolemia, however, support more rapid infusion of fluid at the beginning of treatment.

  • 10.
    Bendtsen, Marcus
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Peña, Jose M.
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Science & Engineering.
    Modelling regimes with Bayesian network mixtures. 2017. In: Proceedings of the 30th Annual Workshop of the Swedish Artificial Intelligence Society SAIS 2017, May 15–16, 2017, Karlskrona, Sweden / [ed] Niklas Lavesson, Linköping: Linköping University Electronic Press, 2017, Vol. 137, p. 20-29, article id 002. Conference paper (Refereed).
    Abstract [en]

    Bayesian networks (BNs) are advantageous when representing single independence models; however, they do not allow us to model changes among the relationships of the random variables over time. Due to such regime changes, it may be necessary to use different BNs at different times in order to have an appropriate model over the random variables. In this paper we propose two extensions to the traditional hidden Markov model, allowing us to represent both the different regimes, using different BNs, and potential driving forces behind the regime changes, by modelling potential dependence between state transitions and some observable variables. We show how expectation maximisation can be used to learn the parameters of the proposed model, and run both synthetic and real-world experiments to show the model's potential.

  • 11.
    Berglund, Frida
    et al.
    Linköping University, Department of Computer and Information Science, Statistics.
    Oskarsson, Mayumi Setsu
    Linköping University, Department of Computer and Information Science, Statistics.
    Modellering av spårvidd över bandel 119 inom Stambanan genom Övre Norrland: Kandidatuppsats i Statistik och dataanalys [Modelling of track gauge on line section 119 of the Main Line through Upper Norrland: Bachelor's thesis in Statistics and data analysis]. 2015. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
    Abstract [en]

    The Swedish Transport Administration (Trafikverket) has been in charge of the maintenance of the railway system since 2010. The railway requires regular maintenance to keep the tracks in good condition for the safety of passengers and other transports. To ensure this safety it is important to measure the geometrical condition of the tracks. The gauge is one of the most important geometric measures: it can be neither too wide nor too narrow.

    The aim of this report is to create a model that can simulate the deviation from normal gauge based on track geometry and track properties.

    The deviation from normal gauge is a random quantity that we modeled with a generalized linear model and a generalized additive model (GAM). The models can be used to simulate possible values of the deviation. The study demonstrated that the GAM was able to capture most of the variation in the deviation from normal gauge using information from a number of track geometry variables and properties.

  • 12.
    Burauskaite-Harju, Agne
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Characterizing Temporal Changes and Inter-Site Correlations in Daily and Sub-Daily Precipitation Extremes. 2011. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    Information on weather extremes is essential for risk awareness in the planning of infrastructure and agriculture, and it may also play a key role in our ability to adapt to recurrent or more or less unique extreme events. This thesis reports new statistical methodologies that can aid climate risk assessment under conditions of climate change. The increasing access to data of high temporal resolution is a central factor in the development of novel techniques for this purpose. In particular, a procedure is introduced for analysis of long-term changes in daily and sub-daily records of observed or modelled weather extremes. Extreme value theory is employed to enhance the power of the proposed statistical procedure, and inter-site dependence is taken into account to enable regional analyses. Furthermore, new methods are presented to summarize and visualize spatial patterns in the temporal synchrony and dependence of weather events such as heavy precipitation at a network of meteorological stations. The work also demonstrates the significance of accounting for temporal synchrony in the diagnostics of inter-site asymptotic dependence.

    List of papers
    1. A test for network-wide trends in rainfall extremes
    2012 (English). In: International Journal of Climatology, ISSN 0899-8418, E-ISSN 1097-0088, Vol. 32, no 1, p. 86-94. Article in journal (Refereed). Published.
    Abstract [en]

    Temporal trends in meteorological extremes are often examined by first reducing daily data to annual index values, such as the 95th or 99th percentiles. Here, we report how this idea can be elaborated to provide an efficient test for trends at a network of stations. The initial step is to make separate estimates of tail probabilities of precipitation amounts for each combination of station and year by fitting a generalised Pareto distribution (GPD) to data above a user-defined threshold. The resulting time series of annual percentile estimates are subsequently fed into a multivariate Mann-Kendall (MK) test for monotonic trends. We performed extensive simulations using artificially generated precipitation data and noted that the power of tests for temporal trends was substantially enhanced when ordinary percentiles were substituted for GPD percentiles. Furthermore, we found that the trend detection was robust to misspecification of the extreme value distribution. An advantage of the MK test is that it can accommodate non-linear trends, and it can also take into account the dependencies between stations in a network. To illustrate our approach, we used long time series of precipitation data from a network of stations in The Netherlands.

    Place, publisher, year, edition, pages
    Wiley, 2012
    Keywords
    climate extremes; precipitation; temporal trend; generalised Pareto distribution; climate indices; global warming
    National Category
    Climate Research Probability Theory and Statistics
    Identifiers
    urn:nbn:se:liu:diva-63099 (URN); 10.1002/joc.2263 (DOI); 000298733800007 (ISI)
    Note
    Funding agency: Swedish Environmental Protection Agency. Available from: 2010-12-13. Created: 2010-12-10. Last updated: 2017-12-11.
    2. Statistical framework for assessing trends in sub-daily and daily precipitation extremes
    (English). Manuscript (preprint) (Other academic).
    Abstract [en]

    Extreme precipitation events vary with regard to duration, and hence sub-daily data do not necessarily exhibit the same trends as daily data. Here, we present a framework for a comprehensive yet easily undertaken statistical analysis of long-term trends in daily and sub-daily extremes. A parametric peaks-over-threshold model is employed to estimate annual percentiles for data of different temporal resolution. Moreover, a trend-duration-frequency table is used to summarize how the statistical significance of trends in annual percentiles varies with the temporal resolution of the underlying data and the severity of the extremes. The proposed framework also includes nonparametric tests that can integrate information about nonlinear monotonic trends at a network of stations. To illustrate our methodology, we use climate model output data from Kalmar, Sweden, and observational data from Vancouver, Canada. In both these cases, the results show different trends for moderate and high extremes, and also a clear difference in the statistical evidence of trends for daily and sub-daily data.

    Keywords
    Rainfall extremes; precipitation; sub-daily, temporal trend; generalized Pareto distribution; climate indices; global warming
    National Category
    Probability Theory and Statistics
    Identifiers
    urn:nbn:se:liu:diva-71296 (URN)
    Available from: 2011-10-10. Created: 2011-10-10. Last updated: 2011-10-10. Bibliographically approved.
    3. Characterizing and visualizing spatio-temporal patterns in hourly precipitation records
    2012 (English). In: Journal of Theoretical and Applied Climatology, ISSN 0177-798X, E-ISSN 1434-4483, Vol. 109, no 3-4, p. 333-343. Article in journal (Refereed). Published.
    Abstract [en]

    We develop new techniques to summarize and visualize spatial patterns of coincidence in weather events such as more or less heavy precipitation at a network of meteorological stations. The cosine similarity measure, which has a simple probabilistic interpretation for vectors of binary data, is generalized to characterize spatial dependencies of events that may reach different stations with a variable time lag. More specifically, we reduce such patterns to three parameters (dominant time lag, maximum cross-similarity, and window-maximum similarity) that can easily be computed for each pair of stations in a network. Furthermore, we visualize such three-parameter summaries by using colour-coded maps of dependencies to a given reference station and distance-decay plots for the entire network. Applications to hourly precipitation data from a network of 93 stations in Sweden illustrate how this method can be used to explore spatial patterns in the temporal synchrony of precipitation events.

    Place, publisher, year, edition, pages
    Springer, 2012
    Keywords
    precipitation; hourly rainfall records; spatial dependence; time lag; cosine similarity
    National Category
    Probability Theory and Statistics
    Identifiers
    urn:nbn:se:liu:diva-71297 (URN); 10.1007/s00704-011-0574-x (DOI); 000307243900002 (ISI)
    Note

    Funding agencies: Swedish Research Council (VR); Gothenburg Atmospheric Science Centre (GAC); FORMAS 2007-1048-8700*51.

    Available from: 2011-10-10. Created: 2011-10-10. Last updated: 2017-12-08. Bibliographically approved.
    4. Diagnostics for tail dependence in time-lagged random fields of precipitation
    2013 (English). In: Journal of Theoretical and Applied Climatology, ISSN 0177-798X, E-ISSN 1434-4483, Vol. 112, no 3-4, p. 629-636. Article in journal (Refereed). Published.
    Abstract [en]

    Weather extremes often occur along fronts passing different sites with some time lag. Here, we show how such temporal patterns can be taken into account when exploring inter-site dependence of extremes. We incorporate time lags into existing models and into measures of extremal associations and their relation to the distance between the investigated sites. Furthermore, we define summarizing parameters that can be used to explore tail dependence for a whole network of stations in the presence of fixed or stochastic time lags. Analysis of hourly precipitation data from Sweden showed that our methods can prevent underestimation of the strength and spatial extent of tail dependencies.

    Keywords
    Precipitation; Sub-daily; Tail dependence; Spatial dependence; Time lag
    National Category
    Probability Theory and Statistics
    Identifiers
    urn:nbn:se:liu:diva-71298 (URN); 10.1007/s00704-012-0748-1 (DOI); 000318246300022 (ISI)
    Available from: 2011-10-10. Created: 2011-10-10. Last updated: 2017-12-08. Bibliographically approved.
  • 13.
    Burauskaite-Harju, Agne
    et al.
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Grimvall, Anders
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Diagnostics for tail dependence in time-lagged random fields of precipitation. 2013. In: Journal of Theoretical and Applied Climatology, ISSN 0177-798X, E-ISSN 1434-4483, Vol. 112, no 3-4, p. 629-636. Article in journal (Refereed).
    Abstract [en]

    Weather extremes often occur along fronts passing different sites with some time lag. Here, we show how such temporal patterns can be taken into account when exploring inter-site dependence of extremes. We incorporate time lags into existing models and into measures of extremal associations and their relation to the distance between the investigated sites. Furthermore, we define summarizing parameters that can be used to explore tail dependence for a whole network of stations in the presence of fixed or stochastic time lags. Analysis of hourly precipitation data from Sweden showed that our methods can prevent underestimation of the strength and spatial extent of tail dependencies.

  • 14.
    Burauskaite-Harju, Agne
    et al.
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Grimvall, Anders
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Brömssen, Claudia von
    Swedish University of Agricultural Sciences, Uppsala, Sweden.
    Statistical framework for assessing trends in sub-daily and daily precipitation extremes. Manuscript (preprint) (Other academic).
    Abstract [en]

    Extreme precipitation events vary with regard to duration, and hence sub-daily data do not necessarily exhibit the same trends as daily data. Here, we present a framework for a comprehensive yet easily undertaken statistical analysis of long-term trends in daily and sub-daily extremes. A parametric peaks-over-threshold model is employed to estimate annual percentiles for data of different temporal resolution. Moreover, a trend-duration-frequency table is used to summarize how the statistical significance of trends in annual percentiles varies with the temporal resolution of the underlying data and the severity of the extremes. The proposed framework also includes nonparametric tests that can integrate information about nonlinear monotonic trends at a network of stations. To illustrate our methodology, we use climate model output data from Kalmar, Sweden, and observational data from Vancouver, Canada. In both these cases, the results show different trends for moderate and high extremes, and also a clear difference in the statistical evidence of trends for daily and sub-daily data.

  • 15.
    Burauskaite-Harju, Agne
    et al.
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Grimvall, Anders
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    von Brömssen, Claudia
    Swedish University of Agricultural Sciences, Uppsala.
    A test for network-wide trends in rainfall extremes. 2012. In: International Journal of Climatology, ISSN 0899-8418, E-ISSN 1097-0088, Vol. 32, no 1, p. 86-94. Article in journal (Refereed).
    Abstract [en]

    Temporal trends in meteorological extremes are often examined by first reducing daily data to annual index values, such as the 95th or 99th percentiles. Here, we report how this idea can be elaborated to provide an efficient test for trends at a network of stations. The initial step is to make separate estimates of tail probabilities of precipitation amounts for each combination of station and year by fitting a generalised Pareto distribution (GPD) to data above a user-defined threshold. The resulting time series of annual percentile estimates are subsequently fed into a multivariate Mann-Kendall (MK) test for monotonic trends. We performed extensive simulations using artificially generated precipitation data and noted that the power of tests for temporal trends was substantially enhanced when ordinary percentiles were substituted for GPD percentiles. Furthermore, we found that the trend detection was robust to misspecification of the extreme value distribution. An advantage of the MK test is that it can accommodate non-linear trends, and it can also take into account the dependencies between stations in a network. To illustrate our approach, we used long time series of precipitation data from a network of stations in The Netherlands.
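    A minimal single-station sketch of the two-step procedure: fit a generalised Pareto distribution to each year's threshold excesses, then test the annual percentile series for trend. The threshold choice and the univariate Mann-Kendall variant shown here are illustrative simplifications (the paper uses a multivariate MK test across stations):

        import numpy as np
        from scipy import stats

        def annual_gpd_percentile(daily, years, threshold, p=0.99):
            """GPD-based p-th percentile of daily precipitation per year
            (assumes each year has enough threshold exceedances)."""
            out = []
            for yr in np.unique(years):
                x = daily[years == yr]
                exc = x[x > threshold] - threshold
                c, _, scale = stats.genpareto.fit(exc, floc=0.0)
                pu = len(exc) / len(x)          # exceedance probability
                out.append(threshold +
                           stats.genpareto.ppf(1 - (1 - p) / pu, c, 0.0, scale))
            return np.array(out)

        def mann_kendall(series):
            """Two-sided MK test; normal approximation, no tie correction."""
            n = len(series)
            s = sum(np.sign(series[j] - series[i])
                    for i in range(n - 1) for j in range(i + 1, n))
            var = n * (n - 1) * (2 * n + 5) / 18.0
            z = (s - np.sign(s)) / np.sqrt(var)
            return z, 2 * stats.norm.sf(abs(z))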

  • 16.
    Burauskaite-Harju, Agne
    et al.
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Grimvall, Anders
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Walther, Alexander
    Department of Earth Sciences, University of Gothenburg, Sweden.
    Achberger, Christine
    Department of Earth Sciences, University of Gothenburg, Sweden.
    Chen, Deliang
    Department of Earth Sciences, University of Gothenburg, Sweden.
    Characterizing and visualizing spatio-temporal patterns in hourly precipitation records. 2012. In: Journal of Theoretical and Applied Climatology, ISSN 0177-798X, E-ISSN 1434-4483, Vol. 109, no 3-4, p. 333-343. Article in journal (Refereed).
    Abstract [en]

    We develop new techniques to summarize and visualize spatial patterns of coincidence in weather events such as more or less heavy precipitation at a network of meteorological stations. The cosine similarity measure, which has a simple probabilistic interpretation for vectors of binary data, is generalized to characterize spatial dependencies of events that may reach different stations with a variable time lag. More specifically, we reduce such patterns to three parameters (dominant time lag, maximum cross-similarity, and window-maximum similarity) that can easily be computed for each pair of stations in a network. Furthermore, we visualize such three-parameter summaries by using colour-coded maps of dependencies to a given reference station and distance-decay plots for the entire network. Applications to hourly precipitation data from a network of 93 stations in Sweden illustrate how this method can be used to explore spatial patterns in the temporal synchrony of precipitation events.
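    For 0/1 event indicators (say, hourly precipitation above a threshold at two stations), the lagged cosine similarity underlying two of the three summary parameters can be computed as below; the lag convention and range are illustrative assumptions, and the third parameter (window-maximum similarity) would additionally require window-aggregated indicators:

        import numpy as np

        def lagged_cosine(a, b, max_lag=6):
            """Cosine similarity of two equally long 0/1 series at each lag;
            for binary vectors this equals the co-occurrence count divided
            by the square root of the product of the event counts."""
            sims = {}
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:  # events at station b occur `lag` steps after a
                    x, y = a[:len(a) - lag], b[lag:]
                else:
                    x, y = a[-lag:], b[:len(b) + lag]
                denom = np.sqrt(x.sum() * y.sum())
                sims[lag] = float((x * y).sum() / denom) if denom > 0 else 0.0
            best = max(sims, key=sims.get)
            return best, sims[best]  # dominant time lag, maximum cross-similarity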

  • 17.
    Burdakov, Oleg
    et al.
    Linköping University, Department of Mathematics, Optimization. Linköping University, The Institute of Technology.
    Grimvall, Anders
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Hussian, Mohamed
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Sysoev, Oleg
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Hasse diagrams and the generalized PAV-algorithm for monotonic regression in several explanatory variables. 2005. In: Computational Statistics and Data Analysis, ISSN 0167-9473. Article in journal (Refereed).
    Abstract [en]

    Monotonic regression is a nonparametric method for estimation of models in which the expected value of a response variable y increases or decreases in all coordinates of a vector of explanatory variables x = (x1, ..., xp). Here, we examine statistical and computational aspects of our recently proposed generalization of the pool-adjacent-violators (PAV) algorithm from one to several explanatory variables. In particular, we show how the goodness-of-fit and accuracy of obtained solutions can be enhanced by presorting observed data with respect to their level in a Hasse diagram of the partial order of the observed x-vectors, and we also demonstrate how these calculations can be carried out to save computer memory and computational time. Monte Carlo simulations illustrate how rapidly the mean square difference between fitted and expected response values tends to zero, and how quickly the mean square residual approaches the true variance of the random error, as the number of observations increases up to 10^4.
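    For orientation, the classical one-dimensional PAV algorithm that the paper generalizes to partial orders can be written in a few lines; this is the textbook version, not the authors' GPAV implementation:

        import numpy as np

        def pav(y, w=None):
            """Weighted least-squares isotonic (non-decreasing) fit in 1-D
            by pooling adjacent violators; returns the fitted values.
            Example: pav([3, 1, 2, 4]) -> [2., 2., 2., 4.]"""
            y = np.asarray(y, float)
            w = np.ones(len(y)) if w is None else np.asarray(w, float)
            vals, wts, cnts = [], [], []
            for yi, wi in zip(y, w):
                vals.append(yi); wts.append(wi); cnts.append(1)
                # merge the last two blocks while they violate monotonicity
                while len(vals) > 1 and vals[-2] > vals[-1]:
                    tot = wts[-2] + wts[-1]
                    vals[-2] = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / tot
                    wts[-2] = tot
                    cnts[-2] += cnts[-1]
                    vals.pop(); wts.pop(); cnts.pop()
            return np.repeat(vals, cnts)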

  • 18.
    Burdakov, Oleg
    et al.
    Linköping University, Department of Mathematics, Optimization. Linköping University, The Institute of Technology.
    Grimvall, Anders
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Sysoev, Oleg
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Generalized PAV algorithm with block refinement for partially ordered monotonic regression. 2009. In: Proceedings of the Workshop on Learning Monotone Models from Data / [ed] A. Feelders and R. Potharst, 2009, p. 23-37. Conference paper (Other academic).
    Abstract [en]

    In this paper, the monotonic regression problem (MR) is considered. We have recently generalized for MR the well-known pool-adjacent-violators algorithm (PAV) from the case of completely to partially ordered data sets. The new algorithm, called GPAV, combines both high accuracy and low computational complexity, which grows quadratically with the problem size. The actual growth observed in practice is typically far lower than quadratic. The fitted values of the exact MR solution compose blocks of equal values. Its GPAV approximation also has a block structure. We present here a technique for refining blocks produced by the GPAV algorithm to make the new blocks closer to those in the exact solution. This substantially improves the accuracy of the GPAV solution and does not deteriorate its computational complexity. The computational time for the new technique is approximately triple the time of running the GPAV algorithm. Its efficiency is demonstrated by the results of our numerical experiments.

  • 19.
    Burdakov, Oleg
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Optimization.
    Grimvall, Anders
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Computer and Information Science, Statistics.
    Sysoev, Oleg
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Computer and Information Science, Statistics.
    New optimization algorithms for large-scale isotonic regression in L2-norm. 2007. In: EUROPT-OMS Conference on Optimization, 2007, University of Hradec Kralove, Czech Republic: Gaudeamus, 2007, p. 44-44. Conference paper (Other academic).
    Abstract [en]

    The isotonic regression (IR) problem has numerous important applications in statistics, operations research, biology, image and signal processing, and other areas. IR in the L2-norm is a minimization problem in which the objective function is the squared Euclidean distance from a given point to a convex set defined by monotonicity constraints of the form: the i-th component of the decision vector is less than or equal to its j-th component. Unfortunately, conventional optimization methods are unable to solve IR problems originating from large data sets. The existing IR algorithms, such as the minimum lower sets algorithm by Brunk, the min-max algorithm by Lee, the network flow algorithm by Maxwell & Muckstadt and the IBCR algorithm by Block et al., are able to find an exact solution to the IR problem for at most a few thousand variables. The IBCR algorithm, which proved to be the most efficient of them, is not robust enough. An alternative approach is to solve the IR problem approximately. Following this approach, Burdakov et al. developed an algorithm, called GPAV, whose block refinement extension, GPAVR, is able to solve IR problems with very high accuracy in a far shorter time than the exact algorithms. Apart from this, GPAVR is a very robust algorithm, and it allows us to solve IR problems with over a hundred thousand variables. In this talk, we introduce new exact IR algorithms, which can be viewed as active set methods. They use the approximate solution produced by the GPAVR algorithm as a starting point. We present results of our numerical experiments demonstrating the high efficiency of the new algorithms, especially for very large-scale problems, and their robustness. They are able to solve problems which all existing exact IR algorithms fail to solve.

  • 20.
    Burdakov, Oleg
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Optimization.
    Grimvall, Anders
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Computer and Information Science, Statistics.
    Sysoev, Oleg
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Computer and Information Science, Statistics.
    Kapyrin, Ivan
    Institute of Numerical Mathematics Russian Academy of Sciences, Moscow, Russia.
    Vassilevski, Yuri
    Institute of Numerical Mathematics Russian Academy of Sciences, Moscow, Russia.
    Monotonic data fitting and interpolation with application to postprocessing of FE solutions. 2007. In: CERFACS 20th Anniversary Conference on High-performance Computing, 2007, p. 11-12. Conference paper (Other academic).
    Abstract [en]

    In this talk we consider the isotonic regression (IR) problem, which can be formulated as follows. Given a vector $\bar{x} \in R^n$, find $x_* \in R^n$ which solves the problem: \begin{equation}\label{ir2} \begin{array}{cl} \mbox{min} & \|x-\bar{x}\|^2 \\ \mbox{s.t.} & Mx \ge 0. \end{array} \end{equation} The set of constraints $Mx \ge 0$ represents here the monotonicity relations of the form $x_i \le x_j$ for a given set of pairs of the components of $x$. The corresponding row of the matrix $M$ is composed mainly of zeros; its $i$th and $j$th elements are equal to $-1$ and $+1$, respectively. The most challenging applications of (\ref{ir2}) are characterized by very large values of $n$. We introduce new IR algorithms. Our numerical experiments demonstrate the high efficiency of our algorithms, especially for very large-scale problems, and their robustness. They are able to solve some problems which all existing IR algorithms fail to solve. We also outline our new algorithms for monotonicity-preserving interpolation of scattered multivariate data. In this talk we focus on the application of our IR algorithms in postprocessing of FE solutions. Non-monotonicity of the numerical solution is a typical drawback of the conventional methods of approximation, such as finite elements (FE), finite volumes, and mixed finite elements. The problem of monotonicity is particularly important in cases of highly anisotropic diffusion tensors or distorted unstructured meshes. For instance, in nuclear waste transport simulation, the non-monotonicity results in the presence of negative concentrations, which may lead to unacceptable concentrations and failure of the chemistry calculations. Another drawback of the conventional methods is a possible violation of the discrete maximum principle, which establishes lower and upper bounds for the solution. We suggest here a least-change correction to the available FE solution $\bar{x} \in R^n$. This postprocessing procedure is aimed at recovering the monotonicity and some other important properties that may not be exhibited by $\bar{x}$. The mathematical formulation of the postprocessing problem is reduced to the following convex quadratic programming problem \begin{equation}\label{ls2} \begin{array}{cl} \mbox{min} & \|x-\bar{x}\|^2 \\ \mbox{s.t.} & Mx \ge 0, \quad l \le x \le u, \quad e^Tx = m, \end{array} \end{equation} where $e=(1,1,\ldots,1)^T \in R^n$. The set of constraints $Mx \ge 0$ represents here the monotonicity relations between some of the adjacent mesh cells. The constraints $l \le x \le u$ originate from the discrete maximum principle. The last constraint formulates the conservativity requirement. The postprocessing based on (\ref{ls2}) is typically a large-scale problem. We introduce here algorithms for solving this problem. They are based on the observation that, in the presence of the monotonicity constraints only, problem (\ref{ls2}) is the classical monotonic regression problem, which can be solved efficiently by some of the available monotonic regression algorithms. This solution is then used to produce the optimal solution to problem (\ref{ls2}) in the presence of all the constraints. We present results of numerical experiments to illustrate the efficiency of our algorithms.

  • 21.
    Burdakov, Oleg
    et al.
    Linköping University, Department of Mathematics, Optimization. Linköping University, Faculty of Science & Engineering.
    Sysoev, Oleg
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    A Dual Active-Set Algorithm for Regularized Monotonic Regression. 2017. In: Journal of Optimization Theory and Applications, ISSN 0022-3239, E-ISSN 1573-2878, Vol. 172, no 3, p. 929-949. Article in journal (Refereed).
    Abstract [en]

    Monotonic (isotonic) regression is a powerful tool used for solving a wide range of important applied problems. One of its features, which poses a limitation on its use in some areas, is that it produces a piecewise constant fitted response. For smoothing the fitted response, we introduce a regularization term in the monotonic regression, formulated as a least distance problem with monotonicity constraints. The resulting smoothed monotonic regression is a convex quadratic optimization problem. We focus on the case where the set of observations is completely (linearly) ordered. Our smoothed pool-adjacent-violators algorithm is designed for solving the regularized problem. It belongs to the class of dual active-set algorithms. We prove that it converges to the optimal solution in a finite number of iterations that does not exceed the problem size. One of its advantages is that the active set is progressively enlarged by including one or, typically, more constraints per iteration. This allowed large-scale test problems to be solved in a few iterations, whereas their size was prohibitively large for conventional quadratic optimization solvers. Although the complexity of our algorithm grows quadratically with the problem size, we found its running time to grow almost linearly in our computational experiments.

  • 22.
    Burdakov, Oleg
    et al.
    Linköping University, Department of Mathematics, Optimization. Linköping University, Faculty of Science & Engineering.
    Sysoev, Oleg
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Regularized monotonic regression. 2016. Report (Other academic).
    Abstract [en]

    Monotonic (isotonic) regression (MR) is a powerful tool used for solving a wide range of important applied problems. One of its features, which poses a limitation on its use in some areas, is that it produces a piecewise constant fitted response. For smoothing the fitted response, we introduce a regularization term in the MR, formulated as a least distance problem with monotonicity constraints. The resulting smoothed monotonic regression (SMR) is a convex quadratic optimization problem. We focus on the SMR where the set of observations is completely (linearly) ordered. Our smoothed pool-adjacent-violators (SPAV) algorithm is designed for solving the SMR. It belongs to the class of dual active-set algorithms. We proved its finite convergence to the optimal solution in, at most, n iterations, where n is the problem size. One of its advantages is that the active set is progressively enlarged by including one or, typically, more constraints per iteration. This allowed large-scale SMR test problems to be solved in a few iterations, whereas their size was prohibitively large for conventional quadratic optimization solvers. Although the complexity of the SPAV algorithm is O(n^2), its running time in our computational experiments grew in proportion to n^1.16.

  • 23.
    Burdakov, Oleg
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Optimization.
    Sysoev, Oleg
    Linköping University, Department of Computer and Information Science, Statistics.
    Grimvall, Anders
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Mathematics, Statistics.
    Hussian, Mohamed
    Linköping University, Faculty of Arts and Sciences. Linköping University, Department of Mathematics, Statistics.
    An algorithm for isotonic regression problems. 2004. In: European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS / [ed] P. Neittaanmäki, T. Rossi, K. Majava and O. Pironneau, Jyväskylä: University of Jyväskylä, 2004, p. 1-9. Conference paper (Refereed).
    Abstract [en]

    We consider the problem of minimizing the distance from a given n-dimensional vector to a set defined by constraints of the form $x_i \le x_j$. Such constraints induce a partial order of the components $x_i$, which can be illustrated by an acyclic directed graph. This problem is known as the isotonic regression (IR) problem. It has important applications in statistics, operations research and signal processing. Most of the applied IR problems are characterized by a very large value of n. For such large-scale problems, it is of great practical importance to develop algorithms whose complexity does not rise with n too rapidly. The existing optimization-based algorithms and statistical IR algorithms have either too high computational complexity or too low accuracy of the approximation to the optimal solution they generate. We introduce a new IR algorithm, which can be viewed as a generalization of the pool-adjacent-violators (PAV) algorithm from completely to partially ordered data. Our algorithm combines both low computational complexity O(n^2) and high accuracy. This allows us to obtain sufficiently accurate solutions to IR problems with thousands of observations.

  • 24.
    Burdakov, Oleg
    et al.
    Linköping University, Department of Mathematics, Optimization. Linköping University, The Institute of Technology.
    Sysoev, Oleg
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Grimvall, Anders
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Hussian, Mohammed
    Linköping University, Department of Mathematics, Statistics. Linköping University, Faculty of Arts and Sciences.
    An O(n^2) algorithm for isotonic regression problems. 2006. In: Large-Scale Nonlinear Optimization / [ed] G. Di Pillo and M. Roma, Springer-Verlag, 2006, p. 25-33. Chapter in book (Refereed).
    Abstract [en]

    Large-Scale Nonlinear Optimization reviews and discusses recent advances in the development of methods and algorithms for nonlinear optimization and its applications, focusing on the large-dimensional case, the current forefront of much research.

    The chapters of the book, authored by some of the most active and well-known researchers in nonlinear optimization, give an updated overview of the field from different and complementary standpoints, including theoretical analysis, algorithmic development, implementation issues and applications

  • 25.
    Dahlin, Johan
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Schön, Thomas Bo
    Department of Information Technology, Uppsala University.
    Villani, Mattias
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    Approximate inference in state space models with intractable likelihoods using Gaussian process optimisation (2014) Report (Other academic)
    Abstract [en]

    We propose a novel method for MAP parameter inference in nonlinear state space models with intractable likelihoods. The method is based on a combination of Gaussian process optimisation (GPO), sequential Monte Carlo (SMC) and approximate Bayesian computations (ABC). SMC and ABC are used to approximate the intractable likelihood by using the similarity between simulated realisations from the model and the data obtained from the system. The GPO algorithm is used for the MAP parameter estimation given noisy estimates of the log-likelihood. The proposed parameter inference method is evaluated in three problems using both synthetic and real-world data. The results are promising, indicating that the proposed algorithm converges fast and with reasonable accuracy compared with existing methods.
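    The likelihood approximation at the heart of the method can be illustrated with a toy sketch: simulate particles from the model and weight them by their similarity to the data through an ABC kernel (the AR(1) model, bandwidth eps, and all names below are hypothetical illustrations, not the paper's implementation):

    ```python
    import numpy as np

    def abc_smc_loglik(y, theta, n_particles=500, eps=0.5, rng=None):
        """Noisy log-likelihood estimate combining SMC and an ABC kernel,
        in the spirit of the abstract. The state space model here is a toy
        AR(1) chosen purely for illustration."""
        rng = rng or np.random.default_rng(0)
        x = rng.normal(size=n_particles)
        loglik = 0.0
        for obs in y:
            x = theta * x + rng.normal(size=n_particles)        # propagate particles
            w = np.exp(-0.5 * ((obs - x) / eps) ** 2) + 1e-300  # ABC kernel weights
            loglik += np.log(w.mean())                          # accumulate evidence
            x = rng.choice(x, size=n_particles, p=w / w.sum())  # resample
        return loglik
    ```

    In the method described above, a noisy estimate like this would then serve as the objective handed to the GPO loop for MAP estimation.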

  • 26.
    Danner, Torrin
    Linköping University, Department of Computer and Information Science, Statistics.
    A Bayesian Multilevel Model for Time Series Applied to Learning in Experimental Auctions (2016) Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Establishing what variables affect learning rates in experimental auctions can be valuable in determining how competitive bidders in auctions learn. This study aims to be a foray into this field. The differences, both absolute and actual, between participant bids and optimal bids are evaluated in terms of the effects from a variety of variables such as age, sex, etc. An optimal bid in the context of an auction is the best bid a participant can place to win the auction without paying more than the value of the item, thus maximizing their revenues. This study focuses on how two opponent types, humans and computers, affect the rate at which participants learn to optimize their winnings.

    A Bayesian multilevel model for time series is used to model the learning rate of actual bids from participants in an experimental auction study. The variables examined at the first level were auction type, signal, round, interaction effects between auction type and signal and interaction effects between auction type and round. At a 90% credibility interval, the true value of the mean for the intercept and all slopes falls within an interval that also includes 0. Therefore, none of the variables are deemed to be likely to influence the model.

    The variables on the second level were age, IQ, sex and answers from a short quiz about how participants felt when they won or lost auctions. The posterior distributions of the second-level variables were also found to be unlikely to influence the model at a 90% credibility interval.

    This study shows that more research is required to determine which variables affect the learning rate in competitive bidding auction studies.
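    The interval criterion used throughout the thesis is easy to state in code; a minimal sketch with made-up posterior draws (the function name and numbers are illustrative, not from the thesis):

    ```python
    import numpy as np

    def excludes_zero(draws, cred=0.90):
        """Check whether a central credible interval computed from posterior
        draws excludes zero -- the criterion used to judge whether a
        coefficient is likely to influence the model."""
        lo, hi = np.quantile(draws, [(1 - cred) / 2, 1 - (1 - cred) / 2])
        return not (lo <= 0.0 <= hi)

    # Made-up posterior draws for one slope coefficient:
    draws = np.random.default_rng(1).normal(0.05, 0.2, size=4000)
    print(excludes_zero(draws))  # False: the 90% interval straddles zero
    ```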

  • 27.
    Daréus, Emma
    et al.
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, The Institute of Technology.
    Suhr, Hektor
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, The Institute of Technology.
    En undersökning av sambandet mellan kronisk inflammation och lungcancer [A study of the association between chronic inflammation and lung cancer] (2013) Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Cancer is a major health problem in Sweden and one of the leading causes of death. Recently there has been increased interest in the cancer research community in how chronic inflammation influences the emergence of cancer tumors. In this paper we investigate the relationship between markers of chronic inflammation and lung cancer. We also examine the possibility that sialic acid may be a marker of lung cancer.

    Lung cancer is one of the most common types of cancer in Sweden and has a high mortality rate compared to other cancers, both because it is usually discovered at a late stage and because it is located in a sensitive organ, which makes it harder to treat.

    We use sialic acid as a marker of inflammation from a cohort study conducted during the early sixties, cross-referenced with the Swedish Cancer Registry from Socialstyrelsen. BMI and age are also used as variables of interest. The method uses the Cox proportional hazards model to estimate the risk of sialic acid relating to the emergence of lung cancer. Sensitivity analysis is used to consider the effects of smoking as a confounding variable.

    The results show that there is reason to believe that chronic inflammation may play a role in the emergence of lung cancer tumors. There is no evidence in this study to suggest that sialic acid is a good marker for an individual having lung cancer. Furthermore, BMI seems to have a protective effect against lung cancer, but further study would be needed to draw firm conclusions.

  • 28.
    Du, Yang
    Linköping University, Department of Computer and Information Science, Statistics.
    Comparison of change-point detection algorithms for vector time series (2010) Independent thesis Advanced level (degree of Master (Two Years)), 30 credits / 45 HE credits. Student thesis
    Abstract [en]

    Change-point detection aims to reveal sudden changes in sequences of data. Special attention has been paid to the detection of abrupt level shifts, and applications of such techniques can be found in a great variety of fields, such as monitoring of climate change, examination of gene expressions and quality control in the manufacturing industry. In this work, we compared the performance of two methods representing frequentist and Bayesian approaches, respectively. The frequentist approach involved a preliminary search for level shifts using a tree algorithm, followed by a dynamic programming algorithm for optimizing the locations and sizes of the level shifts. The Bayesian approach involved an MCMC (Markov chain Monte Carlo) implementation of a method originally proposed by Barry and Hartigan. The two approaches were implemented in R, and extensive simulations were carried out to assess both their computational efficiency and their ability to detect abrupt level shifts. Our study showed that the overall performance regarding the estimated location and size of change-points was comparable for the Bayesian and frequentist approaches. However, the Bayesian approach performed better when the number of change-points was small, whereas the frequentist approach became stronger when the change-point proportion increased. The latter method was also better at detecting simultaneous change-points in vector time series. Theoretically, the Bayesian approach has a lower computational complexity than the frequentist approach, but suitable settings for the combined tree and dynamic programming algorithms can greatly reduce the processing time.
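    A stripped-down sketch of the dynamic-programming step for a single series may help clarify the frequentist approach (the tree-based preliminary search, the vector-valued extension and the backtracking of shift locations are omitted; names are illustrative):

    ```python
    import numpy as np

    def dp_changepoints(y, k):
        """Place k level shifts in y by exact dynamic programming,
        minimising within-segment squared error; returns the optimal cost."""
        n = len(y)
        cs = np.concatenate(([0.0], np.cumsum(y)))
        cs2 = np.concatenate(([0.0], np.cumsum(np.square(y))))

        def sse(i, j):
            # squared error of fitting one constant level to y[i:j]
            s, s2, m = cs[j] - cs[i], cs2[j] - cs2[i], j - i
            return s2 - s * s / m

        # best[c, j] = minimal cost of splitting y[:j] into c+1 segments
        best = np.full((k + 1, n + 1), np.inf)
        best[0, 1:] = [sse(0, j) for j in range(1, n + 1)]
        for c in range(1, k + 1):
            for j in range(c + 1, n + 1):
                best[c, j] = min(best[c - 1, i] + sse(i, j) for i in range(c, j))
        return best[k, n]
    ```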

  • 29.
    Edston, Erik
    et al.
    Linköping University, Department of Clinical and Experimental Medicine. Linköping University, Faculty of Health Sciences.
    Eriksson, Olle
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    van Hage, M
    Mast cell tryptase in postmortem serum - Reference values and confounders (2007) In: International journal of legal medicine (Print), ISSN 0937-9827, E-ISSN 1437-1596, Vol. 121, no 4, p. 275-280. Article in journal (Refereed)
    Abstract [en]

    We have investigated the effects of some factors suspected of inducing spuriously increased tryptase concentrations, specifically sampling site, conjunctival petechial bleeding and prone position at the time of death as indicators of premortem asphyxia, and resuscitation efforts by external cardiac massage. Tryptase was measured in blood from the femoral vein in 60 deaths: 39 control cases who died rapidly (within minutes) from natural causes (sudden cardiac death and acute aortic dissection), 16 with death caused by prolonged asphyxia (traumatic compression of the chest and suffocation due to body position or smothering), and five anaphylactic deaths. In 44 of these cases, tryptase was measured in both heart and femoral blood. Mast cell tryptase was analyzed with a commercial FEIA method (Pharmacia Diagnostics AB, Uppsala, Sweden) measuring both α- and β-tryptase. Assuming that tryptase values in the control group were gamma distributed, we calculated the upper normal limits for tryptase concentrations in femoral blood. It was found that 95% of the controls had values below 44.3 µg/l (femoral blood), SD 5.27 µg/l. All but one of the anaphylactic deaths had tryptase concentrations exceeding that limit. Tryptase was significantly elevated in femoral blood from anaphylactic deaths (p<0.007) compared with the controls. Also, in the cases where death had occurred due to asphyxia, tryptase was elevated in femoral blood (p<0.04). A significant difference in tryptase concentrations was seen between blood from the heart and the femoral vessels (p<0.02) in the whole material (n=44). Tryptase concentrations in femoral blood were not influenced by prone position at death or by resuscitation efforts. It is concluded that premortem asphyxia seems to affect tryptase concentrations, that postmortem tryptase measurements should be done in serum from femoral blood, and that the normal upper limit, covering 95%, is 44.3 µg/l.
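    The gamma-based upper limit can be reproduced in outline as follows (the control values below are made up; only the procedure mirrors the abstract):

    ```python
    from scipy import stats

    # Fit a gamma distribution to control-group tryptase values and take
    # the 95th percentile as the upper normal limit. The data here are
    # fabricated for illustration, not the study's measurements.
    controls = [12.1, 8.4, 15.0, 22.3, 9.7, 18.5, 11.2, 25.9, 14.4, 19.8]
    shape, loc, scale = stats.gamma.fit(controls, floc=0)  # fix location at 0
    upper_limit = stats.gamma.ppf(0.95, shape, loc=loc, scale=scale)
    print(f"95% upper normal limit: {upper_limit:.1f} µg/l")
    ```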

  • 30.
    Eklund, Anders
    et al.
    Virginia Tech Carilion Research Institute, Virginia Tech, Roanoke, VA, USA.
    Dufort, Paul
    Department of Medical Imaging, University of Toronto, Toronto, ON, Canada.
    Villani, Mattias
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    LaConte, Stephen
    Virginia Tech Carilion Research Institute, Virginia Tech, Roanoke, VA, USA/School of Biomedical Engineering and Sciences, Virginia Tech-Wake Forest University, Blacksburg, VA, USA.
    BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs (2014) In: Frontiers in Neuroinformatics, ISSN 1662-5196, E-ISSN 1662-5196, Vol. 8, no 24. Article in journal (Refereed)
    Abstract [en]

    Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm³ brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/).
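    As an illustration of what a second-level permutation test computes, here is a minimal single-statistic sketch in plain NumPy (BROCCOLI itself is OpenCL and operates on whole statistical maps; this is not its API):

    ```python
    import numpy as np

    def permutation_test(a, b, n_perm=10_000, rng=None):
        """Two-sample permutation test on the difference in means: a
        minimal sketch of a non-parametric second-level test (one
        statistic, no spatial maps, no multiple-comparison handling)."""
        rng = rng or np.random.default_rng(0)
        a, b = np.asarray(a, float), np.asarray(b, float)
        pooled = np.concatenate([a, b])
        observed = abs(a.mean() - b.mean())
        hits = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)                  # random relabelling of subjects
            stat = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
            hits += stat >= observed
        return hits / n_perm                     # two-sided p-value
    ```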

  • 31.
    Eklund, Anders
    et al.
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Department of Biomedical Engineering, Division of Biomedical Engineering. Linköping University, Center for Medical Image Science and Visualization (CMIV). Linköping University, Faculty of Science & Engineering. Linköping University, Faculty of Arts and Sciences.
    Lindqvist, Martin A
    Department of Biostatistics, Johns Hopkins University, Baltimore, USA.
    Villani, Mattias
    Linköping University, Department of Computer and Information Science, Statistics. Linköping University, Faculty of Arts and Sciences.
    A Bayesian Heteroscedastic GLM with Application to fMRI Data with Motion Spikes (2017) In: NeuroImage, ISSN 1053-8119, E-ISSN 1095-9572, Vol. 155, p. 354-369. Article in journal (Refereed)
    Abstract [en]

    We propose a voxel-wise general linear model with autoregressive noise and heteroscedastic noise innovations (GLMH) for analyzing functional magnetic resonance imaging (fMRI) data. The model is analyzed from a Bayesian perspective and has the benefit of automatically down-weighting time points close to motion spikes in a data-driven manner. We develop a highly efficient Markov Chain Monte Carlo (MCMC) algorithm that allows for Bayesian variable selection among the regressors to model both the mean (i.e., the design matrix) and variance. This makes it possible to include a broad range of explanatory variables in both the mean and variance (e.g., time trends, activation stimuli, head motion parameters and their temporal derivatives), and to compute the posterior probability of inclusion from the MCMC output. Variable selection is also applied to the lags in the autoregressive noise process, making it possible to infer the lag order from the data simultaneously with all other model parameters. We use both simulated data and real fMRI data from OpenfMRI to illustrate the importance of proper modeling of heteroscedasticity in fMRI data analysis. Our results show that the GLMH tends to detect more brain activity, compared to its homoscedastic counterpart, by allowing the variance to change over time depending on the degree of head motion.
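    A rough frequentist analogue of the down-weighting idea can be sketched as iteratively reweighted least squares with a log-linear variance model (illustrative only; the paper's method is Bayesian MCMC with variable selection in both mean and variance, which this sketch does not implement):

    ```python
    import numpy as np

    def heteroscedastic_wls(X, y, Z, n_iter=20):
        """Fit y = X @ beta with noise variance depending on covariates Z
        (e.g., head-motion regressors), so high-variance time points are
        automatically down-weighted. Z should include a constant column."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        gamma = np.zeros(Z.shape[1])
        for _ in range(n_iter):
            w = np.exp(-(Z @ gamma))                    # precision weights
            Xw = X * w[:, None]
            beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted LS for the mean
            resid2 = (y - X @ beta) ** 2
            # crude update: regress log squared residuals on Z
            gamma = np.linalg.lstsq(Z, np.log(resid2 + 1e-12), rcond=None)[0]
        return beta, gamma
    ```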