liu.se: Search for publications in DiVA
101 - 115 of 115
  • 101.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Tangent distance algorithm on HOSVD reduced data for handwritten digit recognition, 2005. In: Workshop on Tensor Decompositions and Applications, 2005. Conference paper (Other academic)
  • 102.
    Savas, Berkant
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Eldén, Lars
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Handwritten digit classification using higher order singular value decomposition, 2007. In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 40, no 3, p. 993-1003. Article in journal (Refereed)
    Abstract [en]

    In this paper we present two algorithms for handwritten digit classification based on the higher order singular value decomposition (HOSVD). The first algorithm uses HOSVD for construction of the class models and achieves classification results with an error rate lower than 6%. The second algorithm uses the HOSVD for tensor approximation simultaneously in two modes. Classification error rates for the second algorithm are close to 5%, even though the approximation reduces the original training data by more than 98% before the construction of the class models. The actual classification in the test phase for both algorithms is conducted by solving a series of least squares problems. In terms of the computational cost of the test phase, the second algorithm is twice as efficient as the first.
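    The class-model idea above can be illustrated with a small sketch (not the paper's exact HOSVD algorithm): each class is represented by an orthonormal basis computed from its training images, and a test image is assigned to the class whose subspace leaves the smallest least-squares residual. All data and dimensions below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_basis(images, k):
    # images: (n_samples, n_pixels); the columns of U span the
    # dominant k-dimensional subspace of the class in pixel space
    U, _, _ = np.linalg.svd(images.T, full_matrices=False)
    return U[:, :k]

def classify(x, bases):
    # assign x to the class whose subspace leaves the smallest
    # least-squares residual after orthogonal projection
    residuals = [np.linalg.norm(x - B @ (B.T @ x)) for B in bases]
    return int(np.argmin(residuals))

# toy "digit" data: class 0 has a strong shared direction, class 1 is noise
shared = rng.normal(size=64)
class0 = rng.normal(size=(50, 64)) + 5.0 * shared
class1 = rng.normal(size=(50, 64))
bases = [class_basis(class0, 5), class_basis(class1, 5)]
print(classify(class0[0], bases))
```

    The residual computation is the "series of least squares problems" mentioned in the abstract: projecting onto an orthonormal basis solves each such problem in closed form.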

  • 103.
    Savas, Berkant
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Eldén, Lars
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Krylov-type methods for tensor computations I, 2013. In: Linear Algebra and its Applications, ISSN 0024-3795, E-ISSN 1873-1856, Vol. 438, no 2, p. 891-918. Article in journal (Refereed)
    Abstract [en]

    Several Krylov-type procedures are introduced that generalize matrix Krylov methods to tensor computations. They are denoted the minimal Krylov recursion, maximal Krylov recursion, and contracted tensor product Krylov recursion. It is proved that, for a given tensor A with multilinear rank-(p, q, r), the minimal Krylov recursion extracts the correct subspaces associated with the tensor in p + q + r tensor-vector-vector multiplications. An optimized minimal Krylov procedure is described that, for a given multilinear rank of an approximation, produces a better approximation than the standard minimal recursion. We further generalize the matrix Krylov decomposition to a tensor Krylov decomposition. The tensor Krylov methods are intended for the computation of low multilinear rank approximations of large and sparse tensors, but they are also useful for certain dense and structured tensors, for computing their higher order singular value decompositions or for obtaining starting points for best low-rank computations of tensors. A set of numerical experiments, using real-world and synthetic data sets, illustrates some of the properties of the tensor Krylov methods.
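    The tensor-vector-vector multiplication counted in the abstract contracts a third-order tensor with vectors in two of its three modes, leaving a vector in the remaining mode. A minimal sketch with `numpy.einsum` (arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 5, 6))   # third-order tensor
v = rng.normal(size=5)
w = rng.normal(size=6)

# contract A with v in mode 2 and w in mode 3: the tensor-vector-vector
# product that the minimal Krylov recursion performs p + q + r times
u = np.einsum('ijk,j,k->i', A, v, w)
print(u.shape)  # (4,)
```

    In a minimal Krylov recursion, vectors produced this way are orthonormalized (e.g. by Gram-Schmidt or QR) and collected into the mode subspaces.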

  • 104.
    Savas, Berkant
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Lim, Lek-Heng
    ICME Stanford University.
    Quasi-Newton algorithm for best multi-linear rank approximation of tensors, 2007. In: 6th International Congress on Industrial and Applied Mathematics, 2007. Conference paper (Other academic)
    Abstract [en]

    In this talk we introduce a novel method for solving the best multilinear rank approximation problem. Our algorithm differs from existing methods in two respects: (1) it exploits the fact that the problem may be viewed as an optimization problem over a product of Grassmann manifolds; (2) it uses quasi-Newton-like Hessian approximations specially adapted to Grassmannians, and thus avoids the problem of large Hessians that is otherwise inevitable in such problems. Tensor approximation problems occur in various applications involving multidimensional data. The performance of the quasi-Newton algorithm is compared with the Newton-Grassmann and higher order orthogonal iteration algorithms for general and symmetric 3-tensors.

  • 105.
    Savas, Berkant
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Lim, Lek-Heng
    University of California Berkeley.
    Quasi-Newton Methods on Grassmannians and Multilinear Approximations of Tensors, 2010. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, Vol. 32, no 6, p. 3352-3393. Article in journal (Refereed)
    Abstract [en]

    In this paper we proposed quasi-Newton and limited memory quasi-Newton methods for objective functions defined on Grassmannians or a product of Grassmannians. Specifically we defined BFGS and limited memory BFGS updates in local and global coordinates on Grassmannians or a product of these. We proved that, when local coordinates are used, our BFGS updates on Grassmannians share the same optimality property as the usual BFGS updates on Euclidean spaces. When applied to the best multilinear rank approximation problem for general and symmetric tensors, our approach yields fast, robust, and accurate algorithms that exploit the special Grassmannian structure of the respective problems and which work on tensors of large dimensions and arbitrarily high order. Extensive numerical experiments are included to substantiate our claims.
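    Optimization on a Grassmannian can be illustrated with a much simpler first-order sketch than the paper's BFGS updates: projected gradient ascent with a QR retraction. Maximizing trace(X^T A X) over matrices with orthonormal columns recovers the dominant invariant subspace of a symmetric matrix (all data below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 50, 3
M = rng.normal(size=(n, n))
A = M + M.T                      # symmetric test matrix

def cost(X):
    # objective: trace(X^T A X); its maximum over orthonormal X
    # equals the sum of the p largest eigenvalues of A (Ky Fan)
    return np.trace(X.T @ A @ X)

X = np.linalg.qr(rng.normal(size=(n, p)))[0]
c0 = cost(X)
for _ in range(200):
    G = 2 * A @ X
    G = G - X @ (X.T @ G)               # project gradient onto the tangent space
    X = np.linalg.qr(X + 0.01 * G)[0]   # ascent step followed by QR retraction

w = np.linalg.eigvalsh(A)
print(cost(X) - w[-p:].sum())    # gap to the optimum shrinks toward zero
```

    The tangent-space projection and retraction are the Grassmannian ingredients; the quasi-Newton methods of the paper replace the raw gradient step with a BFGS-type search direction transported along the manifold.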

  • 106.
    Simoncini, V.
    et al.
    Elden, Lars
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Inexact Rayleigh quotient-type methods for eigenvalue computations, 2002. In: BIT Numerical Mathematics, ISSN 0006-3835, E-ISSN 1572-9125, Vol. 42, no 1, p. 159-182. Article in journal (Refereed)
    Abstract [en]

    We consider the computation of an eigenvalue and corresponding eigenvector of a Hermitian positive definite matrix A ∈ C^{n×n}, assuming that good approximations of the wanted eigenpair are already available, as may be the case in applications such as structural mechanics. We analyze efficient implementations of inexact Rayleigh quotient-type methods, which involve the approximate solution of a linear system at each iteration by means of the Conjugate Residuals method. We show that the inexact version of the classical Rayleigh quotient iteration is mathematically equivalent to a Newton approach. New insightful bounds relating the inner and outer recurrences are derived. In particular, we show that even if the norm of the residual for the linear system decreases very slowly in the inner iterations, the eigenvalue residual is reduced substantially. Based on the theoretical results, we examine stopping criteria for the inner iteration. We also discuss and motivate a preconditioning strategy for the inner iteration in order to further accelerate the convergence. Numerical experiments illustrate the analysis.
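    A minimal sketch of the outer iteration, Rayleigh quotient iteration for a Hermitian positive definite matrix. The inexact variant analyzed in the paper would replace the inner linear solve with a few steps of an iterative method such as Conjugate Residuals; here the solve is exact for brevity, and the matrix is random illustrative data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)      # Hermitian positive definite test matrix

x = rng.normal(size=n)
x /= np.linalg.norm(x)
for _ in range(5):
    theta = x @ A @ x                               # Rayleigh quotient
    z = np.linalg.solve(A - theta * np.eye(n), x)   # inner linear solve
    x = z / np.linalg.norm(z)

theta = x @ A @ x
print(np.linalg.norm(A @ x - theta * x))  # small eigenvalue residual
```

    The paper's point is that the eigenvalue residual above can shrink substantially even when the inner solve is terminated with a large linear-system residual.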

  • 107.
    Simonsson, Lennart
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Computing a Partial SVD of a Matrix with Missing Data, 2003. In: Numerical Linear Algebra and its Applications: XXI International School and Workshop, 2003. Conference paper (Other academic)
  • 108.
    Simonsson, Lennart
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Lågrang-approximation av en matris med saknade element [Low-rank approximation of a matrix with missing elements], 2004. In: Workshop i tillämpad matematik [Workshop in Applied Mathematics], 2004. Conference paper (Other academic)
  • 109.
    Simonsson, Lennart
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Subspace Computations via Matrix Decompositions and Geometric Optimization, 2006. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis is concerned with the computation of certain subspaces connected to a given matrix, where the closely related problem of approximating the matrix with one of lower rank is given special attention.

    To determine the rank and obtain bases for fundamental subspaces such as the range and null space of a matrix, computing the singular value decomposition (SVD) is the standard method. When new data are added, like in adaptive signal processing, a more economic alternative to the SVD is to use a rank-revealing UTV (ULV or URV) decomposition since it can be updated more easily.

    The scenario in part I of the thesis is that the matrix to be updated is either a product or a quotient of two other matrices. There exist implicit algorithms for computing the SVD of a product or quotient that operate on the two matrices separately. For the corresponding problem of an URV decomposition of a product or quotient, originally sketched by S. Qiao, we give the details of the updating algorithms. Sample numerical experiments confirm that, in some cases, the quality of the approximate subspaces is degraded if the product is formed explicitly, compared to the subspaces obtained from the implicitly computed URV. We argue that the same pros and cons that affect the choice between the URV and ULV decomposition of one matrix carry over to the choice between the implicit URV decomposition and the more established ULLV decomposition in the quotient case. As a signal processing application, we track the range of an estimated cross-covariance matrix.

    We also describe the updating issues of a decomposition that reveals the ranks of the individual matrices in a product. That decomposition suffers from a difficult decision about the rank of the product and will not be tested as a competitor to the implicit URV decomposition referred to above.

    A common situation in scientific computing is that the matrix is too large to admit a full factorization within reasonable time. In that case iterative methods must be employed, of which Lanczos-type algorithms are the most widely used. In part II we discuss the formulation of standard numerical optimization methods on the Grassmann manifold, whose objects are subspaces, and focus on the application to numerical linear algebra problems. This approach allows us to (re-)derive algorithms for the partial symmetric eigenvalue problem; the inexact Newton method is given special attention.

    A recent method is the Jacobi-Davidson (JD) algorithm that can be seen both as a variation of an inexact Newton method for solving a set of nonlinear equations/minimizing a function and as an expanding subspace algorithm that is equivalent to Lanczos if the equation in each step is solved exactly. Our contribution is an algorithm that is fairly robust with a subspace that is only twice as large as the desired one. A large part treats the implementation issues associated with the solution of a correction equation including stopping rules and the use of preconditioners.

    Other numerical linear algebra problems call for a pair of subspaces. We give Grassmann-type algorithms for the partial SVD problem and for low rank approximation of matrices with missing entries, but restrict ourselves to demonstrating their efficiency for exact solution of the Newton equation.

  • 110.
    Simonsson, Lennart
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Elden, Lars
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Grassmann algorithms for low rank approximation of matrices with missing values, 2010. In: BIT Numerical Mathematics, ISSN 0006-3835, Vol. 50, no 1, p. 173-191. Article in journal (Refereed)
    Abstract [en]

    The problem of approximating a matrix by another matrix of lower rank, when a modest portion of its elements are missing, is considered. The solution is obtained using Newton's algorithm to find a zero of a vector field on a product manifold. As a preliminary, the algorithm is formulated for the well-known case with no missing elements, which also includes a re-derivation of the correction equation in a block Jacobi-Davidson method. Numerical examples show that the Newton algorithm grows more efficient than an alternating least squares procedure as the amount of missing values increases.
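    The alternating least squares baseline mentioned above can be sketched as follows (a simplified illustration, not the authors' implementation): with one factor fixed, each row of the other factor solves a small least-squares problem over the observed entries only. Sizes, rank, and the missing-data pattern below are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 30, 20, 2
X = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # exact rank-2 matrix
mask = rng.random((m, n)) > 0.2                          # True = observed (~80%)

U = rng.normal(size=(m, r))
V = rng.normal(size=(n, r))
for _ in range(50):
    # fix V and solve a small least-squares problem for each row of U,
    # using only the observed entries of that row; then the reverse
    for i in range(m):
        obs = mask[i]
        U[i] = np.linalg.lstsq(V[obs], X[i, obs], rcond=None)[0]
    for j in range(n):
        obs = mask[:, j]
        V[j] = np.linalg.lstsq(U[obs], X[obs, j], rcond=None)[0]

err = np.linalg.norm((U @ V.T - X)[mask]) / np.linalg.norm(X[mask])
print(err)  # small relative error on the observed entries
```

    The Newton approach of the paper treats the pair of factor subspaces as a point on a product manifold instead of alternating between them, which pays off as the fraction of missing entries grows.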

  • 111.
    Skoglund, Ingegerd
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Algorithms for a Partially Regularized Least Squares Problem, 2007. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Statistical analysis of data from rivers deals with time series which are dependent, e.g., on climatic and seasonal factors. For example, it is a well-known fact that the load of substances in rivers can be strongly dependent on the runoff. It is of interest to find out whether observed changes in riverine loads are due only to natural variation or caused by other factors. Semi-parametric models have been proposed for estimation of time-varying linear relationships between runoff and riverine loads of substances. The aim of this work is to study some numerical methods for solving the linear least squares problem which arises.

    The model gives a linear system of the form A1x1 + A2x2 + n = b1. The vector n consists of identically distributed random variables, all with mean zero. The unknowns, x, are split into two groups, x1 and x2. In this model there are usually more unknowns than observations, and the resulting linear system is most often consistent, having an infinite number of solutions. Hence some constraint on the parameter vector x is needed. One possibility is to avoid rapid variation in, e.g., the parameters x2. This can be accomplished by regularizing using a matrix A3, which is a discretization of some norm. The problem is formulated as a partially regularized least squares problem with one or two regularization parameters. The parameter x2 here has a two-dimensional structure. By using two different regularization parameters it is possible to regularize separately in each dimension.

    We first study (for the case of one parameter only) the conjugate gradient method for solution of the problem. To improve the rate of convergence, block-preconditioners of Schur complement type are suggested, analyzed and tested. A direct solution method based on QR decomposition is also studied. The idea is to first perform the operations that are independent of the values of the regularization parameters; here we utilize the special block structure of the problem. We further discuss the choice of regularization parameters and in particular generalize Reinsch's method to the case with two parameters. Finally the cross-validation technique is treated. Here a Monte Carlo method is also used, by which an approximation to the generalized cross-validation function can be computed efficiently.
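    For a single regularization parameter, the partially regularized problem can be written as one ordinary least-squares problem by stacking the (scaled) regularization term under the data term. The sketch below uses a second-difference matrix as A3 to penalize rapid variation in x2; all sizes, data, and the value of the parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n1, n2 = 20, 3, 40                       # more unknowns than observations
A1 = rng.normal(size=(m, n1))
A2 = rng.normal(size=(m, n2))
b = rng.normal(size=m)

# second-difference operator as the regularization matrix A3
A3 = np.diff(np.eye(n2), n=2, axis=0)       # shape (n2 - 2, n2)
lam = 0.5                                   # regularization parameter

# stack into one ordinary least-squares problem:
#   min || A1 x1 + A2 x2 - b ||^2 + lam^2 || A3 x2 ||^2
top = np.hstack([A1, A2])
bottom = np.hstack([np.zeros((A3.shape[0], n1)), lam * A3])
K = np.vstack([top, bottom])
rhs = np.concatenate([b, np.zeros(A3.shape[0])])
x = np.linalg.lstsq(K, rhs, rcond=None)[0]
x1, x2 = x[:n1], x[n1:]
print(x2.shape)  # (40,)
```

    The thesis exploits the block structure of this stacked system, both in Schur-complement preconditioners for conjugate gradients and in a QR-based direct method whose parameter-independent work is done once.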

    List of papers
    1. A block-preconditioner for a special regularized least-squares problem
    2007 (English). In: Numerical Linear Algebra with Applications, ISSN 1070-5325, Vol. 14, no 6, p. 469-484. Article in journal (Refereed). Published
    Abstract [en]

    We consider a linear system of the form A1x1 + A2x2 + n = b1. The vector n consists of independent and identically distributed random variables, all with mean zero. The unknowns are split into two groups, x1 and x2. It is assumed that A1^T A1 has full rank and is easy to invert. In this model, usually there are more unknowns than observations and the resulting linear system is most often consistent, having an infinite number of solutions. Hence, some constraint on the parameter vector x is needed. One possibility is to avoid rapid variation in, e.g., the parameters x2. This can be accomplished by regularizing using a matrix A3, which is a discretization of some norm (e.g. a Sobolev space norm). We formulate the problem as a partially regularized least-squares problem and use the conjugate gradient method for its solution. Using the special structure of the problem, we suggest and analyse block-preconditioners of Schur complement type. We demonstrate their effectiveness in some numerical tests. The test examples are taken from an application in modelling of substance transport in rivers.

    Keywords
    conjugate gradient, least squares, regularization
    National Category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-14423 (URN)10.1002/nla.533 (DOI)
    Available from: 2007-05-02 Created: 2007-05-02 Last updated: 2009-04-26
    2. A direct method for a special regularized least squares problem
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:liu:diva-14424 (URN)
    Available from: 2007-05-02 Created: 2007-05-02 Last updated: 2010-01-13
  • 112.
    Song, Han Hee
    et al.
    Department of Computer Science, The University of Texas at Austin.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Cho, Tae Won
    Department of Computer Science, The University of Texas at Austin.
    Dave, Vacha
    Department of Computer Science, The University of Texas at Austin.
    Lu, Zhengdong
    Institute for Computational Engineering and Sciences, The University of Texas at Austin.
    Dhillon, Inderjit S.
    Department of Computer Science, The University of Texas at Austin.
    Zhang, Yin
    Department of Computer Science, The University of Texas at Austin.
    Qiu, Lili
    Department of Computer Science, The University of Texas at Austin.
    Clustered Embedding of Massive Social Networks, 2012. In: Proceedings of the 12th ACM SIGMETRICS/PERFORMANCE Joint International Conference on Measurement and Modeling of Computer Systems, Association for Computing Machinery (ACM), 2012, p. 331-342. Conference paper (Other academic)
    Abstract [en]

    The explosive growth of social networks has created numerous exciting research opportunities. A central concept in the analysis of social networks is a proximity measure, which captures the closeness or similarity between nodes in a social network. Despite much research on proximity measures, there is a lack of techniques to efficiently and accurately compute proximity measures for large-scale social networks. In this paper, we develop a novel dimensionality reduction technique, called clustered spectral graph embedding, to embed the graph's adjacency matrix into a much smaller matrix. The embedded matrix, together with the embedding subspaces, captures the essential clustering and spectral structure of the original graph and allows a wide range of analysis tasks to be performed in an efficient and accurate fashion. To evaluate our technique, we use three large real-world social network datasets: Flickr, LiveJournal and MySpace, with up to 2 million nodes and 90 million links. Our results clearly demonstrate the accuracy, scalability and flexibility of our approach in the context of three important social network analysis tasks: proximity estimation, missing link inference, and link prediction.
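    The cluster-then-embed idea can be sketched as follows: partition the nodes, then compute a low-rank eigendecomposition of each diagonal block of the adjacency matrix. This is a simplified dense illustration with a made-up two-cluster graph; the paper's method targets much larger sparse graphs and also accounts for cross-cluster structure.

```python
import numpy as np

rng = np.random.default_rng(5)

def clustered_embedding(A, clusters, k):
    """Top-k eigen-embedding of each diagonal block of a symmetric
    adjacency matrix A, given a cluster label per node."""
    bases = {}
    for c in np.unique(clusters):
        idx = np.where(clusters == c)[0]
        block = A[np.ix_(idx, idx)]
        w, V = np.linalg.eigh(block)
        bases[c] = (idx, V[:, -k:])        # top-k eigenvectors of the block
    return bases

# toy graph: dense within two clusters, sparse between them
n = 40
A = (rng.random((n, n)) < 0.05).astype(float)
A[:20, :20] = (rng.random((20, 20)) < 0.5)
A[20:, 20:] = (rng.random((20, 20)) < 0.5)
A = np.triu(A, 1)
A = A + A.T                                # symmetric, zero diagonal
clusters = np.array([0] * 20 + [1] * 20)
bases = clustered_embedding(A, clusters, k=3)
print(bases[0][1].shape)  # (20, 3)
```

    Because each block is far smaller than the whole graph, the per-cluster decompositions are cheap and embarrassingly parallel, which is what makes the approach scale.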

  • 113.
    Sui, Xin
    et al.
    University of Texas at Austin, USA.
    Lee, Tsung-Hsien
    University of Texas at Austin, USA.
    Whang, Joyce Jiyoung
    University of Texas at Austin, USA.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Jain, Saral
    University of Texas at Austin, USA.
    Pingali, Keshav
    University of Texas at Austin, USA.
    Dhillon, Inderjit S.
    University of Texas at Austin, USA.
    Parallel clustered low-rank approximation of graphs and its application to link prediction, 2012. In: Proceedings of the International Workshop on Languages and Compilers for Parallel Computing, Springer Berlin Heidelberg, 2012, p. 76-95. Conference paper (Other academic)
    Abstract [en]

    Social network analysis has become a major research area that has impact in diverse applications ranging from search engines to product recommendation systems. A major problem in implementing social network analysis algorithms is the sheer size of many social networks, for example, the Facebook graph has more than 900 million vertices and even small networks may have tens of millions of vertices. One solution to dealing with these large graphs is dimensionality reduction using spectral or SVD analysis of the adjacency matrix of the network, but these global techniques do not necessarily take into account local structures or clusters of the network that are critical in network analysis. A more promising approach is clustered low-rank approximation: instead of computing a global low-rank approximation, the adjacency matrix is first clustered, and then a low-rank approximation of each cluster (i.e., diagonal block) is computed. The resulting algorithm is challenging to parallelize not only because of the large size of the data sets in social network analysis, but also because it requires computing with very diverse data structures ranging from extremely sparse matrices to dense matrices. In this paper, we describe the first parallel implementation of a clustered low-rank approximation algorithm for large social network graphs, and use it to perform link prediction in parallel. Experimental results show that this implementation scales well on large distributed-memory machines; for example, on a Twitter graph with roughly 11 million vertices and 63 million edges, our implementation scales by a factor of 86 on 128 processes and takes less than 2300 seconds, while on a much larger Twitter graph with 41 million vertices and 1.2 billion edges, our implementation scales by a factor of 203 on 256 processes with a running time of about 4800 seconds.

  • 114.
    Wikstrom, P.
    et al.
    Department of Materials Science and Engineering, Division of Energy and Furnace Technology, Royal Institute of Technology (KTH), Brinellvägen 23, S-100 44 Stockholm, Sweden.
    Blasiak, W.
    Department of Materials Science and Engineering, Division of Energy and Furnace Technology, Royal Institute of Technology (KTH), Brinellvägen 23, S-100 44 Stockholm, Sweden.
    Berntsson, Fredrik
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Estimation of the transient surface temperature and heat flux of a steel slab using an inverse method, 2007. In: Applied Thermal Engineering, ISSN 1359-4311, E-ISSN 1873-5606, Vol. 27, no 14-15, p. 2463-2472. Article in journal (Refereed)
    Abstract [en]

    In the steel industry it is of great importance to be able to control the surface temperature and heating or cooling rates during heat treatment processes. An experiment was performed in which a steel slab was heated up to 1250 °C in a fuel-fired test furnace. The transient surface temperature and heat flux of the steel slab are calculated using a model for inverse heat conduction: the time-dependent local surface temperature and heat flux of the slab are computed on the basis of temperature measurements at selected points in its interior. Time and temperature histories were measured at three points inside the steel slab. The temperature histories measured at the two lower locations of the slab were used as input to calculate the temperature at the third location. A comparison of the experimentally measured and the calculated temperature histories was made to verify the model. The results showed very good agreement and suggest that this model can be applied to similar applications in the steel industry, or in other areas where the target of investigation is for some reason inaccessible to direct measurements.

  • 115.
    Wikstrom, Patrik
    et al.
    Blasiak, Wlodzimierz
    Berntsson, Fredrik
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Estimation of the transient surface temperature, heat flux and effective heat transfer coefficient of a slab in an industrial reheating furnace by using an inverse method, 2007. In: Steel Research International, ISSN 1611-3683, Vol. 78, no 1, p. 63-70. Article in journal (Refereed)
    Abstract [en]

    In the steel industry it is of great importance to be able to control the surface temperature and heating or cooling rates during heat treatment processes. In this paper, a steel slab is heated up to 1300 °C in an industrial reheating furnace and the temperature data are recorded during the reheating process. The transient local surface temperature, heat flux and effective heat transfer coefficient of the steel slab are calculated using a model for inverse heat conduction. The calculated surface temperatures are compared with the temperatures obtained from a model of the heating process with the help of the software STEELTEMP® 2D. The results obtained show very good agreement and suggest that the inverse method can be applied to similar high temperature applications with very good accuracy.
