liu.se - Search publications in DiVA
1 - 25 of 25
  • 1.
    Savas, Berkant
    et al.
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Dhillon, Inderjit S.
    University of Texas at Austin, USA.
    Clustered Matrix Approximation, 2016. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 37, no. 4, pp. 1531-1555. Journal article (Refereed)
    Abstract [en]

    In this paper we develop a novel clustered matrix approximation framework, first showing the motivation behind our research. The proposed methods are particularly well suited for problems with large scale sparse matrices that represent graphs and/or bipartite graphs from information science applications. Our framework and resulting approximations have a number of benefits: (1) the approximations preserve important structure that is present in the original matrix; (2) the approximations contain both global-scale and local-scale information; (3) the procedure is efficient both in computational speed and memory usage; and (4) the resulting approximations are considerably more accurate with less memory usage than truncated SVD approximations, which are optimal with respect to rank. The framework is also quite flexible as it may be modified in various ways to fit the needs of a particular application. In the paper we also derive a probabilistic approach that uses randomness to compute a clustered matrix approximation within the developed framework. We further prove deterministic and probabilistic bounds of the resulting approximation error. Finally, in a series of experiments we evaluate, analyze, and discuss various aspects of the proposed framework. In particular, all the benefits we claim for the clustered matrix approximation are clearly illustrated using real-world and large scale data.
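
    A minimal, self-contained sketch of the clustered approximation idea described above: cluster the matrix, approximate each diagonal block separately, and assemble the blocks. This is not the paper's full framework (which targets large sparse matrices, also captures global-scale information, and includes a randomized variant); the cluster count and ranks are illustrative parameters.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def clustered_low_rank(A, n_clusters=4, rank=5):
        """Block-diagonal clustered low-rank approximation of a dense,
        nonnegative, symmetric adjacency matrix A (toy scale only)."""
        labels = SpectralClustering(n_clusters=n_clusters,
                                    affinity="precomputed").fit_predict(A)
        A_hat = np.zeros_like(A, dtype=float)
        for c in range(n_clusters):
            idx = np.where(labels == c)[0]
            # Truncated SVD of the diagonal cluster block only; storing the
            # small per-cluster factors is what saves memory relative to a
            # single global truncated SVD of comparable accuracy.
            U, s, Vt = np.linalg.svd(A[np.ix_(idx, idx)])
            k = min(rank, len(s))
            A_hat[np.ix_(idx, idx)] = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
        return A_hat, labels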

  • 2.
    Savas, Berkant
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Eldén, Lars
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Krylov-type methods for tensor computations I, 2013. In: Linear Algebra and its Applications, ISSN 0024-3795, E-ISSN 1873-1856, Vol. 438, no. 2, pp. 891-918. Journal article (Refereed)
    Abstract [en]

    Several Krylov-type procedures are introduced that generalize matrix Krylov methods for tensor computations. They are denoted minimal Krylov recursion, maximal Krylov recursion, and contracted tensor product Krylov recursion. It is proved that, for a given tensor A with multilinear rank-(p, q, r), the minimal Krylov recursion extracts the correct subspaces associated with the tensor in p+q+r tensor-vector-vector multiplications. An optimized minimal Krylov procedure is described that, for a given multilinear rank of an approximation, produces a better approximation than the standard minimal recursion. We further generalize the matrix Krylov decomposition to a tensor Krylov decomposition. The tensor Krylov methods are intended for the computation of low multilinear rank approximations of large and sparse tensors, but they are also useful for certain dense and structured tensors for computing their higher order singular value decompositions or obtaining starting points for the best low-rank computations of tensors. A set of numerical experiments, using real-world and synthetic data sets, illustrates some of the properties of the tensor Krylov methods.

    Download full text (pdf)
    fulltext
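
    To make the tensor-vector-vector multiplication count concrete, here is a schematic minimal-Krylov-type recursion for a third order tensor. This is our simplified reading (random starting vectors, plain Gram-Schmidt, no breakdown or termination handling), not the paper's exact recursions.

    import numpy as np

    def minimal_krylov(A, steps, seed=0):
        """Grow orthonormal bases U, V, W for the three modes of A
        (shape l x m x n) by repeated tensor-vector-vector products."""
        rng = np.random.default_rng(seed)
        l, m, n = A.shape
        U = np.linalg.qr(rng.standard_normal((l, 1)))[0]
        V = np.linalg.qr(rng.standard_normal((m, 1)))[0]
        W = np.empty((n, 0))

        def expand(B, x):
            # Orthogonalize x against the columns of B, normalize, append.
            x = x - B @ (B.T @ x)
            return np.column_stack([B, x / np.linalg.norm(x)])

        for _ in range(steps):
            u, v = U[:, -1], V[:, -1]
            W = expand(W, np.einsum("ijk,i,j->k", A, u, v))
            U = expand(U, np.einsum("ijk,j,k->i", A, v, W[:, -1]))
            V = expand(V, np.einsum("ijk,i,k->j", A, U[:, -1], W[:, -1]))
        return U, V, W
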
  • 3.
    Song, Han Hee
    et al.
    Department of Computer Science, The University of Texas at Austin.
    Savas, Berkant
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Cho, Tae Won
    Department of Computer Science, The University of Texas at Austin.
    Dave, Vacha
    Department of Computer Science, The University of Texas at Austin.
    Lu, Zhengdong
    Institute for Computational Engineering and Sciences, The University of Texas at Austin.
    Dhillon, Inderjit S.
    Department of Computer Science, The University of Texas at Austin.
    Zhang, Yin
    Department of Computer Science, The University of Texas at Austin.
    Qiu, Lili
    Department of Computer Science, The University of Texas at Austin.
    Clustered Embedding of Massive Social Networks, 2012. In: Proceedings of the 12th ACM SIGMETRICS/PERFORMANCE Joint International Conference on Measurement and Modeling of Computer Systems, Association for Computing Machinery (ACM), 2012, pp. 331-342. Conference paper (Other academic)
    Abstract [en]

    The explosive growth of social networks has created numerous exciting research opportunities. A central concept in the analysis of social networks is a proximity measure, which captures the closeness or similarity between nodes in a social network. Despite much research on proximity measures, there is a lack of techniques to efficiently and accurately compute proximity measures for large-scale social networks. In this paper, we develop a novel dimensionality reduction technique, called clustered spectral graph embedding, to embed the graph's adjacency matrix into a much smaller matrix. The embedded matrix together with the embedding subspaces capture the essential clustering and spectral structure of the original graph and allows a wide range of analysis tasks to be performed in an efficient and accurate fashion. To evaluate our technique, we use three large real-world social network datasets: Flickr, LiveJournal and MySpace, with up to 2 million nodes and 90 million links. Our results clearly demonstrate the accuracy, scalability and flexibility of our approach in the context of three important social network analysis tasks: proximity estimation, missing link inference, and link prediction.
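
    A small dense sketch of the embedding step: build a block-diagonal orthonormal basis V from each cluster's leading eigenvectors and embed the adjacency matrix as the much smaller S = V^T A V. The second helper shows why this pays off downstream: powers of the approximation V S V^T only ever require powers of the small S. Names and parameters here are ours, not the paper's.

    import numpy as np

    def clustered_spectral_embedding(A, labels, k_per_cluster=5):
        """Embed a symmetric adjacency matrix A using per-cluster
        spectral bases; returns (V, S) with S = V.T @ A @ V."""
        n = A.shape[0]
        cols = []
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            w, Q = np.linalg.eigh(A[np.ix_(idx, idx)])
            Vc = Q[:, np.argsort(-np.abs(w))[:k_per_cluster]]
            P = np.zeros((n, Vc.shape[1]))
            P[idx, :] = Vc          # block-diagonal up to a permutation
            cols.append(P)
        V = np.hstack(cols)
        return V, V.T @ A @ V

    def katz_from_embedding(V, S, beta=0.05, terms=6):
        """Katz-style proximity of the approximation V S V.T: since
        V.T V = I, (V S V.T)^k = V S^k V.T, so only powers of S are formed."""
        Sk, acc = np.eye(S.shape[0]), np.zeros_like(S)
        for k in range(1, terms + 1):
            Sk = Sk @ S
            acc += beta**k * Sk
        return V @ acc @ V.T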

  • 4.
    Sui, Xin
    et al.
    University of Texas at Austin, USA.
    Lee, Tsung-Hsien
    University of Texas at Austin, USA.
    Whang, Joyce Jiyoung
    University of Texas at Austin, USA.
    Savas, Berkant
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Jain, Saral
    University of Texas at Austin, USA.
    Pingali, Keshav
    University of Texas at Austin, USA.
    Dhillon, Inderjit S.
    University of Texas at Austin, USA.
    Parallel clustered low-rank approximation of graphs and its application to link prediction, 2012. In: Proceedings of the International Workshop on Languages and Compilers for Parallel Computing, Springer Berlin Heidelberg, 2012, pp. 76-95. Conference paper (Other academic)
    Abstract [en]

    Social network analysis has become a major research area that has impact in diverse applications ranging from search engines to product recommendation systems. A major problem in implementing social network analysis algorithms is the sheer size of many social networks, for example, the Facebook graph has more than 900 million vertices and even small networks may have tens of millions of vertices. One solution to dealing with these large graphs is dimensionality reduction using spectral or SVD analysis of the adjacency matrix of the network, but these global techniques do not necessarily take into account local structures or clusters of the network that are critical in network analysis. A more promising approach is clustered low-rank approximation: instead of computing a global low-rank approximation, the adjacency matrix is first clustered, and then a low-rank approximation of each cluster (i.e., diagonal block) is computed. The resulting algorithm is challenging to parallelize not only because of the large size of the data sets in social network analysis, but also because it requires computing with very diverse data structures ranging from extremely sparse matrices to dense matrices. In this paper, we describe the first parallel implementation of a clustered low-rank approximation algorithm for large social network graphs, and use it to perform link prediction in parallel. Experimental results show that this implementation scales well on large distributed-memory machines; for example, on a Twitter graph with roughly 11 million vertices and 63 million edges, our implementation scales by a factor of 86 on 128 processes and takes less than 2300 seconds, while on a much larger Twitter graph with 41 million vertices and 1.2 billion edges, our implementation scales by a factor of 203 on 256 processes with a running time of about 4800 seconds.
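
    The per-cluster approximations are independent, which is the property the parallel implementation exploits. Below is a toy shared-memory analogue using a process pool; the paper's implementation is distributed-memory, sparse-aware, and far more sophisticated, so treat this only as an illustration of the task decomposition.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def _block_svd(args):
        """Truncated SVD of one diagonal cluster block (worker task)."""
        idx, block, rank = args
        U, s, Vt = np.linalg.svd(block, full_matrices=False)
        k = min(rank, len(s))
        return idx, U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    def parallel_clustered_low_rank(A, labels, rank=5, workers=4):
        # Each cluster block is farmed out to its own worker process.
        # (On platforms that spawn workers, call this from under an
        # `if __name__ == "__main__":` guard.)
        jobs = []
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            jobs.append((idx, A[np.ix_(idx, idx)], rank))
        A_hat = np.zeros_like(A, dtype=float)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for idx, block_hat in pool.map(_block_svd, jobs):
                A_hat[np.ix_(idx, idx)] = block_hat
        return A_hat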

  • 5.
    Savas, Berkant
    et al.
    University of Texas at Austin.
    Dhillon, Inderjit S.
    University of Texas at Austin.
    Clustered low rank approximation of graphs in information science applications, 2011. In: Proceedings of the SIAM International Conference on Data Mining (SDM), 2011, pp. 164-175. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a fast and accurate procedure called clustered low rank matrix approximation for massive graphs. The procedure involves a fast clustering of the graph and then approximates each cluster separately using existing methods, e.g. the singular value decomposition, or stochastic algorithms. The cluster-wise approximations are then extended to approximate the entire graph. This approach has several benefits: (1) important community structure of the graph is preserved due to the clustering; (2) highly accurate low rank approximations are achieved; (3) the procedure is efficient both in terms of computational speed and memory usage; (4) better performance in problems from various applications compared to standard low rank approximation. Further, we generalize stochastic algorithms to the clustered low rank approximation framework and present theoretical bounds for the approximation error. Finally, a set of experiments, using large scale and real-world graphs, shows that our methods outperform standard low rank matrix approximation algorithms.
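
    As an example of the "stochastic algorithms" the abstract mentions as per-cluster building blocks, a standard randomized range finder (in the style of Halko, Martinsson, and Tropp) is sketched below; the paper derives bounds for its own clustered variant, which this does not reproduce. Oversampling and seed are illustrative.

    import numpy as np

    def randomized_block_approx(block, rank, oversample=5, seed=0):
        """Randomized low-rank approximation of one cluster block:
        sample the range with a Gaussian test matrix, orthonormalize,
        then project."""
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((block.shape[1], rank + oversample))
        Q, _ = np.linalg.qr(block @ Omega)   # approximate range basis
        Q = Q[:, :rank]
        return Q @ (Q.T @ block)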

  • 6.
    Eldén, Lars
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Perturbation Theory and Optimality Conditions for the Best Multilinear Rank Approximation of a Tensor, 2011. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 32, no. 4, pp. 1422-1450. Journal article (Refereed)
    Abstract [en]

    The problem of computing the best rank-(p,q,r) approximation of a third order tensor is considered. First the problem is reformulated as a maximization problem on a product of three Grassmann manifolds. Then expressions for the gradient and the Hessian are derived in a local coordinate system at a stationary point, and conditions for a local maximum are given. A first order perturbation analysis is performed using the Grassmann manifold framework. The analysis is illustrated in a few examples, and it is shown that the perturbation theory for the singular value decomposition is a special case of the tensor theory.

    Download full text (pdf)
    fulltext
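
    For orientation, the equivalent maximization problem on which the Grassmann analysis is carried out can be stated compactly (a restatement in standard multilinear notation, not a quotation from the paper): with $\mathcal{A}\cdot(X,Y,Z)$ denoting the multilinear product,

    \[
    \max_{X,\,Y,\,Z} \Phi(X,Y,Z) = \bigl\| \mathcal{A}\cdot(X,Y,Z) \bigr\|_F^2, \qquad X^T X = I_p, \quad Y^T Y = I_q, \quad Z^T Z = I_r,
    \]

    where $\Phi$ depends only on the column spaces of $X$, $Y$, and $Z$, so the problem is posed on a product of three Grassmann manifolds; a stationary point is characterized by vanishing projected gradients, e.g. $(I - XX^T)\,\nabla_X \Phi = 0$, and similarly for $Y$ and $Z$.
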
  • 7.
    Vasuki, Vishvas
    et al.
    University of Texas at Austin.
    Natarajan, Nagarajan
    University of Texas at Austin.
    Lu, Zhengdong
    University of Texas at Austin.
    Savas, Berkant
    University of Texas at Austin.
    Dhillon, Inderjit S.
    University of Texas at Austin.
    Scalable Affiliation Recommendation using Auxiliary Networks, 2011. In: ACM Transactions on Intelligent Systems and Technology, ISSN 2157-6904, E-ISSN 2157-6912, Vol. 3, no. 1, pp. 3:1-3:20. Journal article (Refereed)
    Abstract [en]

    Social network analysis has attracted increasing attention in recent years. In many social networks, besides friendship links among users, the phenomenon of users associating themselves with groups or communities is common. Thus, two networks exist simultaneously: the friendship network among users, and the affiliation network between users and groups. In this article, we tackle the affiliation recommendation problem, where the task is to predict or suggest new affiliations between users and communities, given the current state of the friendship and affiliation networks. More generally, affiliations need not be community affiliations—they can be a user's taste, so affiliation recommendation algorithms have applications beyond community recommendation. In this article, we show that information from the friendship network can indeed be fruitfully exploited in making affiliation recommendations. Using a simple way of combining these networks, we suggest two models of user-community affinity for the purpose of making affiliation recommendations: one based on graph proximity, and another using latent factors to model users and communities. We explore the affiliation recommendation algorithms suggested by these models and evaluate these algorithms on two real-world networks, Orkut and YouTube. In doing so, we motivate and propose a way of evaluating recommenders, by measuring how good the top 50 recommendations are for the average user, and demonstrate the importance of choosing the right evaluation strategy. The algorithms suggested by the graph proximity model turn out to be the most effective. We also introduce scalable versions of these algorithms, and demonstrate their effectiveness. This use of link prediction techniques for the purpose of affiliation recommendation is, to our knowledge, novel.
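
    A compact sketch of the graph-proximity family of models that the article finds most effective: stack the friendship and affiliation networks into one combined adjacency matrix and score user-group pairs with a truncated Katz sum. The decay parameter, number of terms, and function name are illustrative choices of ours.

    import numpy as np

    def affiliation_scores(F, G, beta=0.05, terms=4):
        """F: user-user friendship adjacency (u x u);
        G: user-group affiliation adjacency (u x g).
        Returns a (u x g) score matrix; rank groups per user by score."""
        u, g = F.shape[0], G.shape[1]
        C = np.zeros((u + g, u + g))       # combined network
        C[:u, :u], C[:u, u:], C[u:, :u] = F, G, G.T
        Ck, katz = np.eye(u + g), np.zeros_like(C)
        for k in range(1, terms + 1):
            Ck = Ck @ C
            katz += beta**k * Ck
        scores = katz[:u, u:].copy()
        scores[G > 0] = -np.inf            # skip existing affiliations
        return scores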

  • 8.
    Savas, Berkant
    et al.
    University of Texas at Austin.
    Dhillon, Inderjit S.
    University of Texas at Austin.
    Fast and accurate low rank approximation of massive graphs, 2010. Report (Other academic)
  • 9.
    Lu, Zhengdong
    et al.
    Microsoft Research Asia.
    Savas, Berkant
    University of Texas at Austin.
    Tang, Wei
    University of Texas at Austin.
    Dhillon, Inderjit S.
    University of Texas at Austin.
    Link prediction using multiple sources of information, 2010. Report (Other academic)
  • 10.
    Savas, Berkant
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Lim, Lek-Heng
    University of California Berkeley.
    Quasi-Newton Methods on Grassmannians and Multilinear Approximations of Tensors, 2010. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, Vol. 32, no. 6, pp. 3352-3393. Journal article (Refereed)
    Abstract [en]

    In this paper we proposed quasi-Newton and limited memory quasi-Newton methods for objective functions defined on Grassmannians or a product of Grassmannians. Specifically we defined BFGS and limited memory BFGS updates in local and global coordinates on Grassmannians or a product of these. We proved that, when local coordinates are used, our BFGS updates on Grassmannians share the same optimality property as the usual BFGS updates on Euclidean spaces. When applied to the best multilinear rank approximation problem for general and symmetric tensors, our approach yields fast, robust, and accurate algorithms that exploit the special Grassmannian structure of the respective problems and which work on tensors of large dimensions and arbitrarily high order. Extensive numerical experiments are included to substantiate our claims.
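
    For reference, the Hessian approximation referred to is, in local coordinates, updated by the standard BFGS formula (the textbook update; the paper's contribution is defining and transporting it on Grassmannians, which is not reproduced here):

    \[
    H_{k+1} = H_k - \frac{H_k s_k s_k^T H_k}{s_k^T H_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}, \qquad s_k = x_{k+1} - x_k, \quad y_k = \nabla f(x_{k+1}) - \nabla f(x_k),
    \]

    with iterates, differences, and gradients interpreted in the chosen local coordinates on the product of Grassmannians.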

  • 11.
    Lu, Zhengdong
    et al.
    Microsoft Research Asia.
    Savas, Berkant
    The University of Texas at Austin.
    Tang, Wei
    The University of Texas at Austin.
    Dhillon, Inderjit S.
    The University of Texas at Austin.
    Supervised Link Prediction Using Multiple Sources, 2010. In: Proceedings of the IEEE International Conference on Data Mining (ICDM), 2010, pp. 923-928. Conference paper (Refereed)
    Abstract [en]

    Link prediction is a fundamental problem in social network analysis and modern-day commercial applications such as Facebook and Myspace. Most existing research approaches this problem by exploring the topological structure of a social network using only one source of information. However, in many application domains, in addition to the social network of interest, there are a number of auxiliary social networks and/or derived proximity networks available. The contribution of the paper is twofold: (1) a supervised learning framework that can effectively and efficiently learn the dynamics of social networks in the presence of auxiliary networks; (2) a feature design scheme for constructing a rich variety of path-based features using multiple sources, and an effective feature selection strategy based on structured sparsity. Extensive experiments on three real-world collaboration networks show that our model can effectively learn to predict new links using multiple sources, yielding higher prediction accuracy than unsupervised and single-source supervised models.
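
    A toy version of the path-based feature idea: for every source network and path length, the feature of a node pair (i, j) is the (i, j) entry of the corresponding matrix power, and a supervised classifier is trained on pairs labeled by whether a link later appeared. Dense matrix powers are only feasible at toy scale, and the paper's structured-sparsity feature selection is not reproduced; names and parameters are ours.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def path_features(sources, pairs, max_len=3):
        """sources: list of (n x n) adjacency matrices; pairs: list of
        (i, j) node pairs. Returns an (n_pairs, n_features) array."""
        feats = []
        for A in sources:
            Ak = np.eye(A.shape[0])
            for _ in range(max_len):
                Ak = Ak @ A                # path counts of the next length
                feats.append([Ak[i, j] for i, j in pairs])
        return np.array(feats).T

    def train_link_predictor(sources, pairs, labels):
        """labels[i] = 1 if pair i gained a link in the training window."""
        return LogisticRegression(max_iter=1000).fit(
            path_features(sources, pairs), labels)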

  • 12.
    Eldén, Lars
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    A Newton-Grassmann method for computing the best multilinear rank-(r1,r2,r3) approximation of a tensor, 2009. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, Vol. 32, no. 2, pp. 248-271. Journal article (Refereed)
    Abstract [en]

    We derive a Newton method for computing the best rank-$(r_1,r_2,r_3)$ approximation of a given $J\times K\times L$ tensor $\mathcal{A}$. The problem is formulated as an approximation problem on a product of Grassmann manifolds. Incorporating the manifold structure into Newton's method ensures that all iterates generated by the algorithm are points on the Grassmann manifolds. We also introduce a consistent notation for matricizing a tensor, for contracted tensor products and some tensor-algebraic manipulations, which simplify the derivation of the Newton equations and enable straightforward algorithmic implementation. Experiments show a quadratic convergence rate for the Newton–Grassmann algorithm.
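
    Since the Frobenius norm is invariant under the orthonormal factors, the minimization is equivalent to a maximization over the subspaces alone, which is the form the Newton iteration operates on (a standard identity, restated here):

    \[
    \min_{\operatorname{rank}(\mathcal{B}) = (r_1,r_2,r_3)} \| \mathcal{A} - \mathcal{B} \|_F \quad\Longleftrightarrow\quad \max_{X^T X = I,\; Y^T Y = I,\; Z^T Z = I} \bigl\| \mathcal{A}\cdot(X,Y,Z) \bigr\|_F,
    \]

    since at the optimal core tensor $\|\mathcal{A} - \mathcal{B}\|_F^2 = \|\mathcal{A}\|_F^2 - \|\mathcal{A}\cdot(X,Y,Z)\|_F^2$.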

  • 13.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Algorithms in data mining using matrix and tensor methods, 2008. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In many fields of science, engineering, and economics large amounts of data are stored and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis we discuss algorithms for the reduced rank regression problem and algorithms for the computation of the best multilinear rank approximation of tensors.

    The first two papers deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically the problem is

    \[
    \min_{\operatorname{rank}(X) = k} \det (B - XA)(B - XA)^T,
    \]

    where $A$ and $B$ are given matrices and we want to find $X$ under a certain rank condition that minimizes the determinant. This problem is not properly stated since it involves implicit assumptions on $A$ and $B$ so that $(B - XA)(B - XA)^T$ is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix. The volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals causing a singular objective matrix.

    Classification problems occur in many applications. The task is to determine the label or class of an unknown object. The third paper concerns the classification of handwritten digits in the context of tensors or multidimensional data arrays. Tensor and multilinear algebra is an area that attracts more and more attention because of the multidimensional structure of the collected data in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm makes a data reduction of 98-99% using HOSVD prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces for the different classes. An unknown digit is expressed as a linear combination of the basis vectors. The resulting algorithm achieves a 5% classification error with a fairly low amount of computation.

    The remaining two papers discuss computational methods for the best multilinear rank approximation problem

    \[
    \min_{\mathcal{B}} \| \mathcal{A} - \mathcal{B} \|,
    \]

    where $\mathcal{A}$ is a given tensor and we seek the best low multilinear rank approximation tensor $\mathcal{B}$. This is a generalization of the best low rank matrix approximation problem. It is well known that for matrices the solution is given by truncating the singular values in the singular value decomposition (SVD) of the matrix. But for tensors in general the truncated HOSVD does not give an optimal approximation. For example, a third order tensor $\mathcal{B} \in \mathbb{R}^{I \times J \times K}$ with $\operatorname{rank}(\mathcal{B}) = (r_1,r_2,r_3)$ can be written as the product

    \[
    \mathcal{B} = (X, Y, Z) \cdot \mathcal{C}, \qquad b_{ijk} = \sum_{\lambda,\mu,\nu} x_{i\lambda}\, y_{j\mu}\, z_{k\nu}\, c_{\lambda\mu\nu},
    \]

    where $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ and $X \in \mathbb{R}^{I \times r_1}$, $Y \in \mathbb{R}^{J \times r_2}$, and $Z \in \mathbb{R}^{K \times r_3}$ are matrices of full column rank. Since it is no restriction to assume that $X$, $Y$, and $Z$ have orthonormal columns, and due to these constraints, the approximation problem can be considered as a nonlinear optimization problem defined on a product of Grassmann manifolds. We introduce novel techniques for multilinear algebraic manipulations enabling means for theoretical analysis and algorithmic implementation. These techniques are used to solve the approximation problem using Newton and quasi-Newton methods specifically adapted to operate on products of Grassmann manifolds. The presented algorithms are suited for small, large and sparse problems and, when applied to difficult problems, they clearly outperform alternating least squares methods, which are standard in the field.
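
    The multilinear product above translates directly into einsum calls. The sketch below also includes the truncated HOSVD which, as noted, is generally suboptimal for tensors but is a natural starting point for the Grassmann-based methods; function names are ours.

    import numpy as np

    def multilinear_product(C, X, Y, Z):
        """B = (X, Y, Z) . C, i.e. b_ijk = sum_lmn x_il y_jm z_kn c_lmn."""
        return np.einsum("il,jm,kn,lmn->ijk", X, Y, Z, C)

    def truncated_hosvd(A, ranks):
        """Mode-wise leading singular vectors plus the induced core."""
        factors = []
        for mode, r in enumerate(ranks):
            Am = np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)  # unfolding
            U = np.linalg.svd(Am, full_matrices=False)[0]
            factors.append(U[:, :r])
        X, Y, Z = factors
        C = np.einsum("il,jm,kn,ijk->lmn", X, Y, Z, A)  # core tensor
        return C, X, Y, Z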

    List of papers
    1. Dimensionality reduction and volume minimization - generalization of the determinant minimization criterion for reduced rank regression problems
    2006 (English). In: Linear Algebra and its Applications, ISSN 0024-3795, Vol. 418, no. 1, pp. 201-214. Journal article (Refereed). Published
    Abstract [en]

    In this article we propose a generalization of the determinant minimization criterion. The problem of minimizing the determinant of a matrix expression rests on the implicit assumption that the objective matrix is always nonsingular; if the objective matrix is singular, the determinant is zero and the minimization problem is meaningless. To be able to handle all possible cases we generalize the determinant criterion to rank reduction and volume minimization of the objective matrix. The generalized minimization criterion is used to solve the following ordinary reduced rank regression problem:

    \[
    \min_{\operatorname{rank}(X) = k} \det (B - XA)(B - XA)^T,
    \]

    where A and B are known and X is to be determined. This problem is often encountered in the system identification context.

    Keywords
    Volume; Minimization criterion; Determinant; Rank deficient matrix
    National subject category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13190 (URN) 10.1016/j.laa.2006.01.032 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2013-11-06
    2. Rank Reduction and Volume Minimization Approach to State-Space Subspace System Identification
    2006 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 86, no. 11, pp. 3275-3285. Journal article (Refereed). Published
    Abstract [en]

    In this paper we consider the reduced rank regression problem solved by maximum-likelihood-inspired state-space subspace system identification algorithms. We conclude that the determinant criterion is, due to potential rank-deficiencies, not general enough to handle all problem instances. The main part of the paper analyzes the structure of the reduced rank minimization problem and identifies signal properties in terms of geometrical concepts. A more general minimization criterion is considered: rank reduction followed by volume minimization. A numerically sound algorithm for minimizing this criterion is presented and validated on both simulated and experimental data.

    Place, publisher, year, edition, pages
    Elsevier, 2006
    Keywords
    Reduced rank regression, System identification, General algorithm, Determinant minimization criterion, Rank reduction, Volume minimization
    National subject category
    Mathematics; Control Engineering
    Identifiers
    urn:nbn:se:liu:diva-13191 (URN) 10.1016/j.sigpro.2006.01.008 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2017-12-13
    3. Handwritten digit classification using higher order singular value decomposition
    2007 (English). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 40, no. 3, pp. 993-1003. Journal article (Refereed). Published
    Abstract [en]

    In this paper we present two algorithms for handwritten digit classification based on the higher order singular value decomposition (HOSVD). The first algorithm uses HOSVD for construction of the class models and achieves classification results with an error rate lower than 6%. The second algorithm uses HOSVD for tensor approximation simultaneously in two modes. Classification results for the second algorithm are almost down at 5% even though the approximation reduces the original training data by more than 98% before the construction of the class models. The actual classification in the test phase for both algorithms is conducted by solving a series of least squares problems. In terms of the computational effort in the test phase, the second algorithm is twice as efficient as the first.

    Keywords
    Handwritten digit classification, Tensors, Higher order singular value decomposition, Tensor approximation, Least squares
    National subject category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13192 (URN) 10.1016/j.patcog.2006.08.004 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2017-12-13
    4. A Newton-Grassmann method for computing the best multilinear rank-(r1,r2,r3) approximation of a tensor
    2009 (English). In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, Vol. 32, no. 2, pp. 248-271. Journal article (Refereed). Published
    Abstract [en]

    We derive a Newton method for computing the best rank-$(r_1,r_2,r_3)$ approximation of a given $J\times K\times L$ tensor $\mathcal{A}$. The problem is formulated as an approximation problem on a product of Grassmann manifolds. Incorporating the manifold structure into Newton's method ensures that all iterates generated by the algorithm are points on the Grassmann manifolds. We also introduce a consistent notation for matricizing a tensor, for contracted tensor products and some tensor-algebraic manipulations, which simplify the derivation of the Newton equations and enable straightforward algorithmic implementation. Experiments show a quadratic convergence rate for the Newton–Grassmann algorithm.

    Keywords
    tensor, multilinear, rank, approximation, Grassmann manifold, Newton
    National subject category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13193 (URN) 10.1137/070688316 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2013-10-11
    5. Best multilinear rank approximation of tensors with quasi-Newton methods on Grassmannians
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:liu:diva-13194 (URN)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2010-01-13
    Download full text (pdf)
    FULLTEXT01
    Download (pdf)
    COVER01
  • 14.
    Savas, Berkant
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Eldén, Lars
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Handwritten digit classification using higher order singular value decomposition, 2007. In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 40, no. 3, pp. 993-1003. Journal article (Refereed)
    Abstract [en]

    In this paper we present two algorithms for handwritten digit classification based on the higher order singular value decomposition (HOSVD). The first algorithm uses HOSVD for construction of the class models and achieves classification results with an error rate lower than 6%. The second algorithm uses HOSVD for tensor approximation simultaneously in two modes. Classification results for the second algorithm are almost down at 5% even though the approximation reduces the original training data by more than 98% before the construction of the class models. The actual classification in the test phase for both algorithms is conducted by solving a series of least squares problems. In terms of the computational effort in the test phase, the second algorithm is twice as efficient as the first.
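
    A stripped-down version of the first algorithm's classification step, assuming each class's training images are vectorized as columns of a matrix: one orthonormal basis per class from the leading left singular vectors, and a least-squares residual test at prediction time. The second algorithm's two-mode compression is omitted.

    import numpy as np

    def class_bases(train_by_class, k=10):
        """train_by_class: {label: (pixels x n_samples) matrix}.
        Keep the k leading left singular vectors per class."""
        return {label: np.linalg.svd(M, full_matrices=False)[0][:, :k]
                for label, M in train_by_class.items()}

    def classify(x, bases):
        """Assign x to the class whose subspace reconstructs it best,
        i.e. the smallest residual ||x - U (U^T x)||."""
        return min(bases, key=lambda label: np.linalg.norm(
            x - bases[label] @ (bases[label].T @ x)))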

  • 15.
    Savas, Berkant
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Lim, Lek-Heng
    ICME, Stanford University.
    Quasi-Newton algorithm for best multi-linear rank approximation of tensors, 2007. In: 6th International Congress on Industrial and Applied Mathematics, 2007. Conference paper (Other academic)
    Abstract [en]

    In this talk we introduce a novel method for solving the best multilinear rank approximation problem. Our algorithm differs from existing methods in two respects: (1) it exploits the fact that the problem may be viewed as an optimization problem over a product of Grassmann manifolds; (2) it uses quasi-Newton-like Hessian approximations specially adapted to Grassmannians and thus avoids the inevitable problem of large Hessians in such problems. Tensor approximation problems occur in various applications involving multidimensional data. The performance of the quasi-Newton algorithm is compared with the Newton-Grassmann and Higher Order Orthogonal Iteration algorithms for general and symmetric 3-tensors.

  • 16.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Dimensionality reduction and volume minimization - generalization of the determinant minimization criterion for reduced rank regression problems, 2006. In: Linear Algebra and its Applications, ISSN 0024-3795, Vol. 418, no. 1, pp. 201-214. Journal article (Refereed)
    Abstract [en]

    In this article we propose a generalization of the determinant minimization criterion. The problem of minimizing the determinant of a matrix expression rests on the implicit assumption that the objective matrix is always nonsingular; if the objective matrix is singular, the determinant is zero and the minimization problem is meaningless. To be able to handle all possible cases we generalize the determinant criterion to rank reduction and volume minimization of the objective matrix. The generalized minimization criterion is used to solve the following ordinary reduced rank regression problem:

    \[
    \min_{\operatorname{rank}(X) = k} \det (B - XA)(B - XA)^T,
    \]

    where A and B are known and X is to be determined. This problem is often encountered in the system identification context.

  • 17.
    Lindgren, David
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Rank Reduction and Volume Minimization Approach to State-Space Subspace System Identification, 2006. In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 86, no. 11, pp. 3275-3285. Journal article (Refereed)
    Abstract [en]

    In this paper we consider the reduced rank regression problem solved by maximum-likelihood-inspired state-space subspace system identification algorithms. We conclude that the determinant criterion is, due to potential rank-deficiencies, not general enough to handle all problem instances. The main part of the paper analyzes the structure of the reduced rank minimization problem and identifies signal properties in terms of geometrical concepts. A more general minimization criterion is considered: rank reduction followed by volume minimization. A numerically sound algorithm for minimizing this criterion is presented and validated on both simulated and experimental data.
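
    The volume being minimized is the product of the matrix's nonzero singular values (the definition used in entry 13 above). A direct computation, with a tolerance of our choosing to decide what counts as nonzero:

    import numpy as np

    def matrix_volume(M, tol=1e-12):
        """Product of the nonzero singular values of M."""
        s = np.linalg.svd(M, compute_uv=False)
        nonzero = s[s > tol * s[0]] if s.size and s[0] > 0 else s[:0]
        return float(np.prod(nonzero)) if nonzero.size else 0.0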

  • 18.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Algorithms in data mining: reduced rank regression and classification by tensor methods, 2005. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In many fields of science, engineering, and economics large amounts of data are stored and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis, which consists of three appended manuscripts, we discuss algorithms for reduced rank regression and for classification in the context of tensor theory.

    The first two manuscripts deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically the problem is

    \[
    \min_{\operatorname{rank}(X) = k} \det (B - XA)(B - XA)^T,
    \]

    where A and B are given matrices and we want to find X under a certain rank condition that minimizes the determinant. This problem is not properly stated since it involves implicit assumptions on A and B so that (B - XA)(B - XA)^T is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix. The volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals causing a singular objective matrix.

    Classification problems occur in many applications. The task is to determine the label or class of an unknown object. The third appended manuscript concerns the classification of handwritten digits in the context of tensors or multidimensional data arrays. Tensor theory is an area that attracts more and more attention because of the multidimensional structure of the collected data in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm makes a data reduction of 98-99% using HOSVD prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces for the different classes. An unknown digit is expressed as a linear combination of the basis vectors. The amount of computation is fairly low and the performance reasonably good, with a 5% error rate.

    List of papers
    1. Dimensionality reduction and volume minimization - generalization of the determinant minimization criterion for reduced rank regression problems
    2006 (English). In: Linear Algebra and its Applications, ISSN 0024-3795, Vol. 418, no. 1, pp. 201-214. Journal article (Refereed). Published
    Abstract [en]

    In this article we propose a generalization of the determinant minimization criterion. The problem of minimizing the determinant of a matrix expression rests on the implicit assumption that the objective matrix is always nonsingular; if the objective matrix is singular, the determinant is zero and the minimization problem is meaningless. To be able to handle all possible cases we generalize the determinant criterion to rank reduction and volume minimization of the objective matrix. The generalized minimization criterion is used to solve the following ordinary reduced rank regression problem:

    \[
    \min_{\operatorname{rank}(X) = k} \det (B - XA)(B - XA)^T,
    \]

    where A and B are known and X is to be determined. This problem is often encountered in the system identification context.

    Keywords
    Volume; Minimization criterion; Determinant; Rank deficient matrix
    National subject category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13190 (URN) 10.1016/j.laa.2006.01.032 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2013-11-06
    2. Rank Reduction and Volume Minimization Approach to State-Space Subspace System Identification
    2006 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 86, no. 11, pp. 3275-3285. Journal article (Refereed). Published
    Abstract [en]

    In this paper we consider the reduced rank regression problem solved by maximum-likelihood-inspired state-space subspace system identification algorithms. We conclude that the determinant criterion is, due to potential rank-deficiencies, not general enough to handle all problem instances. The main part of the paper analyzes the structure of the reduced rank minimization problem and identifies signal properties in terms of geometrical concepts. A more general minimization criterion is considered: rank reduction followed by volume minimization. A numerically sound algorithm for minimizing this criterion is presented and validated on both simulated and experimental data.

    Place, publisher, year, edition, pages
    Elsevier, 2006
    Keywords
    Reduced rank regression, System identification, General algorithm, Determinant minimization criterion, Rank reduction, Volume minimization
    National subject category
    Mathematics; Control Engineering
    Identifiers
    urn:nbn:se:liu:diva-13191 (URN) 10.1016/j.sigpro.2006.01.008 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2017-12-13
    3. Handwritten digit classification using higher order singular value decomposition
    2007 (English). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 40, no. 3, pp. 993-1003. Journal article (Refereed). Published
    Abstract [en]

    In this paper we present two algorithms for handwritten digit classification based on the higher order singular value decomposition (HOSVD). The first algorithm uses HOSVD for construction of the class models and achieves classification results with an error rate lower than 6%. The second algorithm uses HOSVD for tensor approximation simultaneously in two modes. Classification results for the second algorithm are almost down at 5% even though the approximation reduces the original training data by more than 98% before the construction of the class models. The actual classification in the test phase for both algorithms is conducted by solving a series of least squares problems. In terms of the computational effort in the test phase, the second algorithm is twice as efficient as the first.

    Keywords
    Handwritten digit classification, Tensors, Higher order singular value decomposition, Tensor approximation, Least squares
    National subject category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13192 (URN) 10.1016/j.patcog.2006.08.004 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2017-12-13
  • 19.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Dimensionality reduction and volume minimization - generalization of the determinant minimization criterion for reduced rank regression problems, 2005. In: Working Group on Matrix Computations and Statistics, 2005. Conference paper (Other academic)
  • 20.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Dimensionality Reduction and Volume Minimization: Generalization of the Determinant Minimization Criterion for Reduced Rank Regression Problems, 2005. Report (Other academic)
    Abstract [en]

    In this article we propose a generalization of the determinant minimization criterion. The problem of minimizing the determinant of a matrix expression rests on the implicit assumption that the objective matrix is always nonsingular; if the objective matrix is singular, the determinant is zero and the minimization problem is meaningless. To be able to handle all possible cases we generalize the determinant criterion to rank reduction and volume minimization of the objective matrix. The generalized minimization criterion is used to solve the following ordinary reduced rank regression problem:

    \[
    \min_{\operatorname{rank}(X) = k} \det (B - XA)(B - XA)^T,
    \]

    where A and B are known and X is to be determined. This problem is often encountered in the system identification context.

  • 21.
    Eldén, Lars
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Pattern Recognition using Higher Order SVD, 2005. In: 3rd IASC World Conference on Computational Statistics and Data Analysis, 2005. Conference paper (Other academic)
  • 22.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Tangent distance algorithm on HOSVD reduced data for handwritten digit recognition, 2005. In: Workshop on Tensor Decompositions and Applications, 2005. Conference paper (Other academic)
  • 23.
    Eldén, Lars
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    The maximum likelihood estimate in reduced-rank regression, 2005. In: Numerical Linear Algebra with Applications, ISSN 1070-5325, E-ISSN 1099-1506, Vol. 12, no. 8, pp. 731-741. Journal article (Refereed)
    Abstract [en]

    In previous work by Stoica and Viberg the reduced-rank regression problem is solved in a maximum likelihood sense. The present paper proposes an alternative numerical procedure. The solution is written in terms of the principal angles between subspaces spanned by the data matrices. It is demonstrated that the solution is meaningful also in the case when the maximum likelihood criterion is not valid. A numerical example is given. Copyright (c) 2005 John Wiley & Sons, Ltd.
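
    The principal angles in which the solution is expressed are computed in the standard Björck-Golub way: orthonormalize each data matrix and take arccosines of the singular values of the cross product. This helper computes only the angles, not the paper's full estimate.

    import numpy as np

    def principal_angles(A, B):
        """Principal angles (radians, ascending) between the column
        spaces of A and B."""
        Qa, _ = np.linalg.qr(A)
        Qb, _ = np.linalg.qr(B)
        s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
        return np.arccos(np.clip(s, -1.0, 1.0))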

  • 24.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Analyses and Tests of Handwritten Digit Recognition Algorithms, 2004. In: Workshop in Applied Mathematics, 2004. Conference paper (Other academic)
  • 25.
    Eldén, Lars
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    The maximum likelihood estimate in reduced-rank regression, 2003. Report (Other academic)