liu.se: Search for publications in DiVA
1 - 25 of 25
  • 1.
    Elden, Lars
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Pattern Recognition using Higher Order SVD. 2005. In: 3rd IASC World Conference on Computational Statistics and Data Analysis, 2005. Conference paper (Other academic)
  • 2.
    Elden, Lars
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    The Maximum likelihood estimate in reduced-rank regression. 2003. Report (Other academic)
  • 3.
    Elden, Lars
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    The maximum likelihood estimate in reduced-rank regression. 2005. In: Numerical Linear Algebra with Applications, ISSN 1070-5325, E-ISSN 1099-1506, Vol. 12, no 8, p. 731-741. Article in journal (Refereed)
    Abstract [en]

    In previous work by Stoica and Viberg the reduced-rank regression problem is solved in a maximum likelihood sense. The present paper proposes an alternative numerical procedure. The solution is written in terms of the principal angles between subspaces spanned by the data matrices. It is demonstrated that the solution is meaningful also in the case when the maximum likelihood criterion is not valid. A numerical example is given. Copyright (c) 2005 John Wiley & Sons, Ltd.
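The key quantity in this abstract, the principal angles between subspaces spanned by data matrices, can be computed from the SVD of the product of orthonormal bases (the standard Björck–Golub approach). The sketch below is a generic illustration of that computation, not the paper's algorithm; the example matrices are invented.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B,
    from the SVD of Q_A^T Q_B, where Q_A and Q_B are orthonormal
    bases obtained via QR factorization."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    # Clip to [0, 1] to guard against rounding before arccos.
    return np.arccos(np.clip(s, 0.0, 1.0))

# Example: two planes in R^3 sharing exactly one direction.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
angles = principal_angles(A, B)  # one angle ~0, one ~pi/2
```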

  • 4.
    Eldén, Lars
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    A Newton-Grassmann method for computing the best multilinear rank-(r1,r2,r3) approximation of a tensor. 2009. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, Vol. 32, no 2, p. 248-271. Article in journal (Refereed)
    Abstract [en]

    We derive a Newton method for computing the best rank-$(r_1,r_2,r_3)$ approximation of a given $J\times K\times L$ tensor $\mathcal{A}$. The problem is formulated as an approximation problem on a product of Grassmann manifolds. Incorporating the manifold structure into Newton's method ensures that all iterates generated by the algorithm are points on the Grassmann manifolds. We also introduce a consistent notation for matricizing a tensor, for contracted tensor products and some tensor-algebraic manipulations, which simplify the derivation of the Newton equations and enable straightforward algorithmic implementation. Experiments show a quadratic convergence rate for the Newton–Grassmann algorithm.

  • 5.
    Eldén, Lars
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Perturbation Theory and Optimality Conditions for the Best Multilinear Rank Approximation of a Tensor. 2011. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 32, no 4, p. 1422-1450. Article in journal (Refereed)
    Abstract [en]

    The problem of computing the best rank-(p,q,r) approximation of a third order tensor is considered. First the problem is reformulated as a maximization problem on a product of three Grassmann manifolds. Then expressions for the gradient and the Hessian are derived in a local coordinate system at a stationary point, and conditions for a local maximum are given. A first order perturbation analysis is performed using the Grassmann manifold framework. The analysis is illustrated in a few examples, and it is shown that the perturbation theory for the singular value decomposition is a special case of the tensor theory.

  • 6.
    Lindgren, David
    et al.
    Linköping University, Department of Electrical Engineering, Automatic Control. Linköping University, The Institute of Technology.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Rank Reduction and Volume Minimization Approach to State-Space Subspace System Identification. 2006. In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 86, no 11, p. 3275-3285. Article in journal (Refereed)
    Abstract [en]

    In this paper we consider the reduced rank regression problem

    \[
    \min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^T,
    \]

    solved by maximum-likelihood-inspired state-space subspace system identification algorithms. We conclude that the determinant criterion is, due to potential rank-deficiencies, not general enough to handle all problem instances. The main part of the paper analyzes the structure of the reduced rank minimization problem and identifies signal properties in terms of geometrical concepts. A more general minimization criterion is considered, rank reduction followed by volume minimization. A numerically sound algorithm for minimizing this criterion is presented and validated on both simulated and experimental data.

  • 7.
    Lu, Zhengdong
    et al.
    Microsoft Research Asia.
    Savas, Berkant
    University of Texas at Austin.
    Tang, Wei
    University of Texas at Austin.
    Dhillon, Inderjit S.
    University of Texas at Austin.
    Link prediction using multiple sources of information. 2010. Report (Other academic)
  • 8.
    Lu, Zhengdong
    et al.
    Microsoft Research Asia.
    Savas, Berkant
    The University of Texas at Austin.
    Tang, Wei
    The University of Texas at Austin.
    Dhillon, Inderjit S.
    The University of Texas at Austin.
    Supervised Link Prediction Using Multiple Sources. 2010. In: Proceedings of the IEEE International Conference on Data Mining (ICDM), 2010, p. 923-928. Conference paper (Refereed)
    Abstract [en]

    Link prediction is a fundamental problem in social network analysis and modern-day commercial applications such as Facebook and Myspace. Most existing research approaches this problem by exploring the topological structure of a social network using only one source of information. However, in many application domains, in addition to the social network of interest, there are a number of auxiliary social networks and/or derived proximity networks available. The contribution of the paper is twofold: (1) a supervised learning framework that can effectively and efficiently learn the dynamics of social networks in the presence of auxiliary networks; (2) a feature design scheme for constructing a rich variety of path-based features using multiple sources, and an effective feature selection strategy based on structured sparsity. Extensive experiments on three real-world collaboration networks show that our model can effectively learn to predict new links using multiple sources, yielding higher prediction accuracy than unsupervised and single-source supervised models.
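The path-based, multi-source feature idea can be illustrated with a toy sketch: counting length-2 paths between a candidate node pair for every ordered pair of source networks. The networks, node count, and specific feature choice below are invented for illustration and are not the paper's exact feature design.

```python
import numpy as np

# Two toy "sources" over the same 4 nodes: a collaboration network
# and an auxiliary proximity network (both invented for illustration).
A_collab = np.array([[0, 1, 0, 0],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [0, 0, 1, 0]], dtype=float)
A_aux = np.array([[0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)

def path_features(sources, i, j):
    """Length-2 path counts from i to j, one feature per ordered
    pair of sources; such counts are one simple instance of
    path-based multi-source features."""
    return np.array([(S1 @ S2)[i, j] for S1 in sources for S2 in sources])

# Feature vector for the candidate link (0, 2); a supervised model
# would be trained on such vectors for known present/absent links.
feats = path_features([A_collab, A_aux], 0, 2)
```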

  • 9.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Algorithms in data mining: reduced rank regression and classification by tensor methods. 2005. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    In many fields of science, engineering, and economics large amounts of data are stored and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis, which consists of three appended manuscripts, we discuss algorithms for reduced rank regression and for classification in the context of tensor theory.

    The first two manuscripts deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically the problem is

    \[
    \min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^T,
    \]

    where A and B are given matrices and we want to find X under a certain rank condition that minimizes the determinant. This problem is not properly stated since it involves implicit assumptions on A and B so that (B - XA)(B - XA)^T is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix. The volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals causing singularity of the objective matrix.
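The volume criterion above is straightforward to state in code. A minimal sketch, assuming singular values below a small tolerance are treated as zero:

```python
import numpy as np

def volume(M, tol=1e-10):
    """Volume of a matrix: the product of its nonzero singular
    values (values below tol are treated as zero)."""
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.prod(s[s > tol]))

# For a nonsingular square matrix the volume equals |det(M)| ...
M = np.array([[2.0, 0.0], [0.0, 3.0]])
v_full = volume(M)      # 6.0, same as |det(M)|
# ... but a rank-deficient matrix still has a nonzero volume,
# whereas its determinant collapses to zero.
R = np.array([[1.0, 0.0], [0.0, 0.0]])
v_rankdef = volume(R)   # 1.0, while det(R) == 0.0
```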

    Classification problems occur in many applications. The task is to determine the label or class of an unknown object. The third appended manuscript concerns classification of handwritten digits in the context of tensors or multidimensional data arrays. Tensor theory is also an area that attracts more and more attention because of the multidimensional structure of the data collected in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm makes a data reduction of 98-99% using HOSVD prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces for the different classes. An unknown digit is expressed as a linear combination of the basis vectors. The amount of computation is fairly low and the performance reasonably good, with an error rate of 5%.
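The subspace-based classification step described above can be sketched as follows. This is the matrix (SVD) analogue of the class-model construction, with invented toy data, not the thesis implementation: each class is modeled by an orthonormal basis for its dominant subspace, and an unknown sample is assigned to the class with the smallest least-squares residual.

```python
import numpy as np

def class_basis(samples, k):
    """Orthonormal basis for the dominant k-dimensional subspace of a
    class, from the SVD of the matrix whose columns are training samples."""
    U, _, _ = np.linalg.svd(samples, full_matrices=False)
    return U[:, :k]

def classify(x, bases):
    """Assign x to the class whose basis U leaves the smallest
    least-squares residual ||x - U U^T x||."""
    residuals = [np.linalg.norm(x - U @ (U.T @ x)) for U in bases]
    return int(np.argmin(residuals))

# Toy classes living in disjoint coordinate subspaces of R^6
# (invented data; real digit images would be vectorized here).
class0 = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0],
                   [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
class1 = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 0],
                   [1, 0, 1], [0, 1, 1], [1, 1, 0]], dtype=float)
bases = [class_basis(class0, 2), class_basis(class1, 2)]
x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # resembles class 0
label = classify(x, bases)
```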

    List of papers
    1. Dimensionality reduction and volume minimization - generalization of the determinant minimization criterion for reduced rank regression problems
    2006 (English). In: Linear Algebra and its Applications, ISSN 0024-3795, Vol. 418, no 1, p. 201-214. Article in journal (Refereed). Published
    Abstract [en]

    In this article we propose a generalization of the determinant minimization criterion. The problem of minimizing the determinant of a matrix expression has implicit assumptions that the objective matrix is always nonsingular. In case of singular objective matrix the determinant would be zero and the minimization problem would be meaningless. To be able to handle all possible cases we generalize the determinant criterion to rank reduction and volume minimization of the objective matrix. The generalized minimization criterion is used to solve the following ordinary reduced rank regression problem:

    \[
    \min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^T,
    \]

    where A and B are known and X is to be determined. This problem is often encountered in the system identification context.

    Keywords
    Volume; Minimization criterion; Determinant; Rank deficient matrix
    National Category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13190 (URN)10.1016/j.laa.2006.01.032 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2013-11-06
    2. Rank Reduction and Volume Minimization Approach to State-Space Subspace System Identification
    2006 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 86, no 11, p. 3275-3285. Article in journal (Refereed). Published
    Abstract [en]

    In this paper we consider the reduced rank regression problem

    \[
    \min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^T,
    \]

    solved by maximum-likelihood-inspired state-space subspace system identification algorithms. We conclude that the determinant criterion is, due to potential rank-deficiencies, not general enough to handle all problem instances. The main part of the paper analyzes the structure of the reduced rank minimization problem and identifies signal properties in terms of geometrical concepts. A more general minimization criterion is considered, rank reduction followed by volume minimization. A numerically sound algorithm for minimizing this criterion is presented and validated on both simulated and experimental data.

    Place, publisher, year, edition, pages
    Elsevier, 2006
    Keywords
    Reduced rank regression, System identification, General algorithm, Determinant minimization criterion, Rank reduction, Volume minimization
    National Category
    Mathematics; Control Engineering
    Identifiers
    urn:nbn:se:liu:diva-13191 (URN)10.1016/j.sigpro.2006.01.008 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2017-12-13
    3. Handwritten digit classification using higher order singular value decomposition
    2007 (English). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 40, no 3, p. 993-1003. Article in journal (Refereed). Published
    Abstract [en]

    In this paper we present two algorithms for handwritten digit classification based on the higher order singular value decomposition (HOSVD). The first algorithm uses HOSVD for construction of the class models and achieves classification results with an error rate lower than 6%. The second algorithm uses the HOSVD for tensor approximation simultaneously in two modes. Classification results for the second algorithm are almost down at 5% even though the approximation reduces the original training data by more than 98% before the construction of the class models. The actual classification in the test phase for both algorithms is conducted by solving a series of least squares problems. Considering the computational amount in the test phase, the second algorithm is twice as efficient as the first one.

    Keywords
    Handwritten digit classification, Tensors, Higher order singular value decomposition, Tensor approximation, Least squares
    National Category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13192 (URN)10.1016/j.patcog.2006.08.004 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2017-12-13
  • 10.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Algorithms in data mining using matrix and tensor methods. 2008. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In many fields of science, engineering, and economics large amounts of data are stored and there is a need to analyze these data in order to extract information for various purposes. Data mining is a general concept involving different tools for performing this kind of analysis. The development of mathematical models and efficient algorithms is of key importance. In this thesis we discuss algorithms for the reduced rank regression problem and algorithms for the computation of the best multilinear rank approximation of tensors.

    The first two papers deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically the problem is

    \[
    \min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^T,
    \]

    where $A$ and $B$ are given matrices and we want to find $X$ under a certain rank condition that minimizes the determinant. This problem is not properly stated since it involves implicit assumptions on $A$ and $B$ so that $(B - XA)(B - XA)^T$ is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix. The volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals causing a singular objective matrix.

    Classification problems occur in many applications. The task is to determine the label or class of an unknown object. The third paper concerns classification of handwritten digits in the context of tensors or multidimensional data arrays. Tensor and multilinear algebra is an area that attracts more and more attention because of the multidimensional structure of the data collected in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm makes a data reduction of 98--99 \% using HOSVD prior to the construction of the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces for the different classes. An unknown digit is expressed as a linear combination of the basis vectors. The resulting algorithm achieves a 5 \% classification error with a fairly low amount of computation.

    The remaining two papers discuss computational methods for the best multilinear rank approximation problem

    \[
    \min_{\mathcal{B}} \| \mathcal{A} - \mathcal{B} \|,
    \]

    where $\mathcal{A}$ is a given tensor and we seek the best low multilinear rank approximation tensor $\mathcal{B}$. This is a generalization of the best low rank matrix approximation problem. It is well known that for matrices the solution is given by truncating the singular values in the singular value decomposition (SVD) of the matrix. But for tensors in general the truncated HOSVD does not give an optimal approximation. For example, a third order tensor $\mathcal{B} \in \mathbb{R}^{I \times J \times K}$ with $\operatorname{rank}(\mathcal{B}) = (r_1,r_2,r_3)$ can be written as the product

    \[
    \mathcal{B} = (X, Y, Z) \cdot \mathcal{C}, \qquad b_{ijk} = \sum_{\lambda,\mu,\nu} x_{i\lambda} y_{j\mu} z_{k\nu} c_{\lambda\mu\nu},
    \]

    where $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ and $X \in \mathbb{R}^{I \times r_1}$, $Y \in \mathbb{R}^{J \times r_2}$, and $Z \in \mathbb{R}^{K \times r_3}$ are matrices of full column rank. Since it is no restriction to assume that $X$, $Y$, and $Z$ have orthonormal columns, and due to these constraints, the approximation problem can be considered as a nonlinear optimization problem defined on a product of Grassmann manifolds. We introduce novel techniques for multilinear algebraic manipulations enabling means for theoretical analysis and algorithmic implementation. These techniques are used to solve the approximation problem using Newton and Quasi-Newton methods specifically adapted to operate on products of Grassmann manifolds. The presented algorithms are suited for small, large and sparse problems and, when applied on difficult problems, they clearly outperform alternating least squares methods, which are standard in the field.
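The multilinear product defining $\mathcal{B}$ maps directly onto an einsum call, and the truncated HOSVD mentioned above, while generally suboptimal, is a common starting point for the Grassmann-based iterations. A sketch with an invented toy tensor (plain NumPy, not the thesis code):

```python
import numpy as np

def multilinear_product(X, Y, Z, C):
    """b_ijk = sum_{l,m,n} x_il y_jm z_kn c_lmn, i.e. the product
    (X, Y, Z) . C from the abstract, written with einsum."""
    return np.einsum('il,jm,kn,lmn->ijk', X, Y, Z, C)

def truncated_hosvd(A, r1, r2, r3):
    """Truncated HOSVD: dominant left singular vectors of the three
    matricizations plus the induced core. In general NOT the best
    rank-(r1, r2, r3) approximation, only a common starting point."""
    I, J, K = A.shape
    X = np.linalg.svd(A.reshape(I, J * K), full_matrices=False)[0][:, :r1]
    Y = np.linalg.svd(np.moveaxis(A, 1, 0).reshape(J, I * K),
                      full_matrices=False)[0][:, :r2]
    Z = np.linalg.svd(np.moveaxis(A, 2, 0).reshape(K, I * J),
                      full_matrices=False)[0][:, :r3]
    C = np.einsum('il,jm,kn,ijk->lmn', X, Y, Z, A)  # core tensor
    return X, Y, Z, C

# A tensor of exact multilinear rank (1, 1, 1) is recovered exactly.
A = np.einsum('i,j,k->ijk', np.arange(1.0, 4.0), np.ones(2),
              np.array([1.0, 2.0]))
X, Y, Z, C = truncated_hosvd(A, 1, 1, 1)
err = np.linalg.norm(A - multilinear_product(X, Y, Z, C))
```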

    List of papers
    1. Dimensionality reduction and volume minimization - generalization of the determinant minimization criterion for reduced rank regression problems
    2006 (English). In: Linear Algebra and its Applications, ISSN 0024-3795, Vol. 418, no 1, p. 201-214. Article in journal (Refereed). Published
    Abstract [en]

    In this article we propose a generalization of the determinant minimization criterion. The problem of minimizing the determinant of a matrix expression has implicit assumptions that the objective matrix is always nonsingular. In case of singular objective matrix the determinant would be zero and the minimization problem would be meaningless. To be able to handle all possible cases we generalize the determinant criterion to rank reduction and volume minimization of the objective matrix. The generalized minimization criterion is used to solve the following ordinary reduced rank regression problem:

    \[
    \min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^T,
    \]

    where A and B are known and X is to be determined. This problem is often encountered in the system identification context.

    Keywords
    Volume; Minimization criterion; Determinant; Rank deficient matrix
    National Category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13190 (URN)10.1016/j.laa.2006.01.032 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2013-11-06
    2. Rank Reduction and Volume Minimization Approach to State-Space Subspace System Identification
    2006 (English). In: Signal Processing, ISSN 0165-1684, E-ISSN 1872-7557, Vol. 86, no 11, p. 3275-3285. Article in journal (Refereed). Published
    Abstract [en]

    In this paper we consider the reduced rank regression problem

    \[
    \min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^T,
    \]

    solved by maximum-likelihood-inspired state-space subspace system identification algorithms. We conclude that the determinant criterion is, due to potential rank-deficiencies, not general enough to handle all problem instances. The main part of the paper analyzes the structure of the reduced rank minimization problem and identifies signal properties in terms of geometrical concepts. A more general minimization criterion is considered, rank reduction followed by volume minimization. A numerically sound algorithm for minimizing this criterion is presented and validated on both simulated and experimental data.

    Place, publisher, year, edition, pages
    Elsevier, 2006
    Keywords
    Reduced rank regression, System identification, General algorithm, Determinant minimization criterion, Rank reduction, Volume minimization
    National Category
    Mathematics; Control Engineering
    Identifiers
    urn:nbn:se:liu:diva-13191 (URN)10.1016/j.sigpro.2006.01.008 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2017-12-13
    3. Handwritten digit classification using higher order singular value decomposition
    2007 (English). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 40, no 3, p. 993-1003. Article in journal (Refereed). Published
    Abstract [en]

    In this paper we present two algorithms for handwritten digit classification based on the higher order singular value decomposition (HOSVD). The first algorithm uses HOSVD for construction of the class models and achieves classification results with an error rate lower than 6%. The second algorithm uses the HOSVD for tensor approximation simultaneously in two modes. Classification results for the second algorithm are almost down at 5% even though the approximation reduces the original training data by more than 98% before the construction of the class models. The actual classification in the test phase for both algorithms is conducted by solving a series of least squares problems. Considering the computational amount in the test phase, the second algorithm is twice as efficient as the first one.

    Keywords
    Handwritten digit classification, Tensors, Higher order singular value decomposition, Tensor approximation, Least squares
    National Category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13192 (URN)10.1016/j.patcog.2006.08.004 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2017-12-13
    4. A Newton-Grassmann method for computing the best multilinear rank-(r1,r2,r3) approximation of a tensor
    2009 (English). In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, Vol. 32, no 2, p. 248-271. Article in journal (Refereed). Published
    Abstract [en]

    We derive a Newton method for computing the best rank-$(r_1,r_2,r_3)$ approximation of a given $J\times K\times L$ tensor $\mathcal{A}$. The problem is formulated as an approximation problem on a product of Grassmann manifolds. Incorporating the manifold structure into Newton's method ensures that all iterates generated by the algorithm are points on the Grassmann manifolds. We also introduce a consistent notation for matricizing a tensor, for contracted tensor products and some tensor-algebraic manipulations, which simplify the derivation of the Newton equations and enable straightforward algorithmic implementation. Experiments show a quadratic convergence rate for the Newton–Grassmann algorithm.

    Keywords
    tensor, multilinear, rank, approximation, Grassmann manifold, Newton
    National Category
    Mathematics
    Identifiers
    urn:nbn:se:liu:diva-13193 (URN)10.1137/070688316 (DOI)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2013-10-11
    5. Best multilinear rank approximation of tensors with quasi-Newton methods on Grassmannians
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:liu:diva-13194 (URN)
    Available from: 2008-04-29 Created: 2008-04-29 Last updated: 2010-01-13
  • 11.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Analyses and Tests of Handwritten Digit Recognition Algorithms. 2004. In: Workshop i tillämpad matematik (Workshop in Applied Mathematics), 2004. Conference paper (Other academic)
  • 12.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Dimensionality reduction and volume minimization - generalization of the determinant minimization criterion for reduced rank regression problems. 2005. In: Working Group on Matrix Computations and Statistics, 2005. Conference paper (Other academic)
  • 13.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Dimensionality reduction and volume minimization - generalization of the determinant minimization criterion for reduced rank regression problems. 2006. In: Linear Algebra and its Applications, ISSN 0024-3795, Vol. 418, no 1, p. 201-214. Article in journal (Refereed)
    Abstract [en]

    In this article we propose a generalization of the determinant minimization criterion. The problem of minimizing the determinant of a matrix expression has implicit assumptions that the objective matrix is always nonsingular. In case of singular objective matrix the determinant would be zero and the minimization problem would be meaningless. To be able to handle all possible cases we generalize the determinant criterion to rank reduction and volume minimization of the objective matrix. The generalized minimization criterion is used to solve the following ordinary reduced rank regression problem:

    \[
    \min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^T,
    \]

    where A and B are known and X is to be determined. This problem is often encountered in the system identification context.

  • 14.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Dimensionality Reduction and Volume Minimization: Generalization of the Determinant Minimization Criterion for Reduced Rank Regression Problems. 2005. Report (Other academic)
    Abstract [en]

    In this article we propose a generalization of the determinant minimization criterion. The problem of minimizing the determinant of a matrix expression has implicit assumptions that the objective matrix is always nonsingular. In case of singular objective matrix the determinant would be zero and the minimization problem would be meaningless. To be able to handle all possible cases we generalize the determinant criterion to rank reduction and volume minimization of the objective matrix. The generalized minimization criterion is used to solve the following ordinary reduced rank regression problem:

    \[
    \min_{\operatorname{rank}(X) = k} \det\,(B - XA)(B - XA)^T,
    \]

    where A and B are known and X is to be determined. This problem is often encountered in the system identification context.

  • 15.
    Savas, Berkant
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Tangent distance algorithm on HOSVD reduced data for handwritten digit recognition. 2005. In: Workshop on Tensor Decompositions and Applications, 2005. Conference paper (Other academic)
  • 16.
    Savas, Berkant
    et al.
    University of Texas at Austin.
    Dhillon, Inderjit S.
    University of Texas at Austin.
    Clustered low rank approximation of graphs in information science applications. 2011. In: Proceedings of the SIAM International Conference on Data Mining (SDM), 2011, p. 164-175. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a fast and accurate procedure called clustered low rank matrix approximation for massive graphs. The procedure involves a fast clustering of the graph and then approximates each cluster separately using existing methods, e.g. the singular value decomposition, or stochastic algorithms. The cluster-wise approximations are then extended to approximate the entire graph. This approach has several benefits: (1) important community structure of the graph is preserved due to the clustering; (2) highly accurate low rank approximations are achieved; (3) the procedure is efficient both in terms of computational speed and memory usage; (4) better performance in problems from various applications compared to standard low rank approximation. Further, we generalize stochastic algorithms to the clustered low rank approximation framework and present theoretical bounds for the approximation error. Finally, a set of experiments, using large scale and real-world graphs, show that our methods outperform standard low rank matrix approximation algorithms.
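The cluster-then-approximate idea can be sketched minimally: given a cluster assignment, compute a rank-k SVD approximation of each within-cluster (diagonal) block. The extension step that handles off-cluster structure in the paper is omitted here, and the graph and clustering below are invented for illustration.

```python
import numpy as np

def clustered_low_rank(A, clusters, k):
    """Rank-k SVD approximation of each within-cluster (diagonal)
    block of A. The paper additionally extends the cluster-wise
    bases to cover the off-diagonal blocks; that extension step
    is omitted in this sketch."""
    A_hat = np.zeros_like(A)
    for c in np.unique(clusters):
        idx = np.where(clusters == c)[0]
        block = A[np.ix_(idx, idx)]
        U, s, Vt = np.linalg.svd(block)
        A_hat[np.ix_(idx, idx)] = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return A_hat

# Two disjoint 2-node cliques: each diagonal block is exactly rank 1,
# so the clustered rank-1 approximation reproduces A exactly.
A = np.kron(np.eye(2), np.ones((2, 2)))
clusters = np.array([0, 0, 1, 1])
A_hat = clustered_low_rank(A, clusters, 1)
err = np.linalg.norm(A - A_hat)
```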

  • 17.
    Savas, Berkant
    et al.
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Dhillon, Inderjit S.
    University of Texas at Austin, USA.
    Clustered Matrix Approximation, 2016. In: SIAM Journal on Matrix Analysis and Applications, ISSN 0895-4798, E-ISSN 1095-7162, Vol. 37, no 4, p. 1531-1555. Article in journal (Refereed)
    Abstract [en]

    In this paper we develop a novel clustered matrix approximation framework, first showing the motivation behind our research. The proposed methods are particularly well suited for problems with large scale sparse matrices that represent graphs and/or bipartite graphs from information science applications. Our framework and resulting approximations have a number of benefits: (1) the approximations preserve important structure that is present in the original matrix; (2) the approximations contain both global-scale and local-scale information; (3) the procedure is efficient both in computational speed and memory usage; and (4) the resulting approximations are considerably more accurate with less memory usage than truncated SVD approximations, which are optimal with respect to rank. The framework is also quite flexible as it may be modified in various ways to fit the needs of a particular application. In the paper we also derive a probabilistic approach that uses randomness to compute a clustered matrix approximation within the developed framework. We further prove deterministic and probabilistic bounds of the resulting approximation error. Finally, in a series of experiments we evaluate, analyze, and discuss various aspects of the proposed framework. In particular, all the benefits we claim for the clustered matrix approximation are clearly illustrated using real-world and large scale data.

  • 18.
    Savas, Berkant
    et al.
    University of Texas at Austin.
    Dhillon, Inderjit S.
    University of Texas at Austin.
    Fast and accurate low rank approximation of massive graphs, 2010. Report (Other academic)
  • 19.
    Savas, Berkant
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Eldén, Lars
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Handwritten digit classification using higher order singular value decomposition, 2007. In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 40, no 3, p. 993-1003. Article in journal (Refereed)
    Abstract [en]

    In this paper we present two algorithms for handwritten digit classification based on the higher order singular value decomposition (HOSVD). The first algorithm uses the HOSVD to construct the class models and achieves classification results with an error rate lower than 6%. The second algorithm uses the HOSVD for tensor approximation simultaneously in two modes. Classification error rates for the second algorithm are close to 5%, even though the approximation reduces the original training data by more than 98% before the construction of the class models. The actual classification in the test phase for both algorithms is conducted by solving a series of least squares problems. In terms of the computational cost of the test phase, the second algorithm is twice as efficient as the first one.

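The test-phase step the abstract describes, assigning a digit to the class whose subspace gives the smallest least squares residual, can be sketched as below. This is a schematic of ours, not the paper's code; in the paper the orthonormal class bases would come from the HOSVD of the training tensor rather than being supplied directly.

```python
import numpy as np

def classify(x, class_bases):
    """Assign x to the class whose orthonormal basis U_c gives the
    smallest least squares residual ||x - U_c (U_c^T x)||."""
    residuals = [np.linalg.norm(x - U @ (U.T @ x)) for U in class_bases]
    return int(np.argmin(residuals))
```

Because each U_c has orthonormal columns, U_c (U_c^T x) is the orthogonal projection of x onto the class subspace, so the residual is exactly the least squares misfit the abstract mentions.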
  • 20.
    Savas, Berkant
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Eldén, Lars
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Krylov-type methods for tensor computations I, 2013. In: Linear Algebra and its Applications, ISSN 0024-3795, E-ISSN 1873-1856, Vol. 438, no 2, p. 891-918. Article in journal (Refereed)
    Abstract [en]

    Several Krylov-type procedures are introduced that generalize matrix Krylov methods for tensor computations. They are denoted minimal Krylov recursion, maximal Krylov recursion, and contracted tensor product Krylov recursion. It is proved that, for a given tensor A with multilinear rank (p, q, r), the minimal Krylov recursion extracts the correct subspaces associated with the tensor in p + q + r tensor-vector-vector multiplications. An optimized minimal Krylov procedure is described that, for a given multilinear rank of an approximation, produces a better approximation than the standard minimal recursion. We further generalize the matrix Krylov decomposition to a tensor Krylov decomposition. The tensor Krylov methods are intended for the computation of low multilinear rank approximations of large and sparse tensors, but they are also useful for certain dense and structured tensors for computing their higher order singular value decompositions or obtaining starting points for the best low-rank computations of tensors. A set of numerical experiments, using real-world and synthetic data sets, illustrates some of the properties of the tensor Krylov methods.

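The basic operation the recursion counts, a tensor-vector-vector multiplication, contracts a third-order tensor with vectors in two of its modes and leaves the third mode free. A sketch using `numpy.einsum` is below; this is our illustrative dense version, while the tensors targeted by the paper would typically be large and sparse.

```python
import numpy as np

def tvv(A, v, w, mode):
    """Tensor-vector-vector multiplication for a 3-tensor A: contract
    with v and w, leaving `mode` free. E.g. mode=0 returns the vector
    with entries sum_{j,k} A[i, j, k] * v[j] * w[k]."""
    subscripts = {0: 'ijk,j,k->i', 1: 'ijk,i,k->j', 2: 'ijk,i,j->k'}
    return np.einsum(subscripts[mode], A, v, w)
```

In a minimal Krylov recursion, vectors produced this way are orthogonalized against the current bases of the three mode subspaces, and the abstract's p + q + r count refers to how many such products suffice to capture a rank-(p, q, r) tensor.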
  • 21.
    Savas, Berkant
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Mathematics, Scientific Computing.
    Lim, Lek-Heng
    ICME Stanford University.
    Quasi-Newton algorithm for best multi-linear rank approximation of tensors, 2007. In: 6th International Congress on Industrial and Applied Mathematics, 2007. Conference paper (Other academic)
    Abstract [en]

    In this talk we introduce a novel method for solving the best multilinear rank approximation problem. Our algorithm differs from existing methods in two respects: (1) it exploits the fact that the problem may be viewed as an optimization problem over a product of Grassmann manifolds; (2) it uses quasi-Newton-like Hessian approximations specially adapted to Grassmannians and thus avoids the otherwise inevitable problem of large Hessians in such problems. Tensor approximation problems occur in various applications involving multidimensional data. The performance of the quasi-Newton algorithm is compared with the Newton-Grassmann and Higher Order Orthogonal Iteration algorithms for general and symmetric 3-tensors.

  • 22.
    Savas, Berkant
    et al.
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Lim, Lek-Heng
    University of California Berkeley.
    Quasi-Newton Methods on Grassmannians and Multilinear Approximations of Tensors, 2010. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, Vol. 32, no 6, p. 3352-3393. Article in journal (Refereed)
    Abstract [en]

    In this paper we propose quasi-Newton and limited memory quasi-Newton methods for objective functions defined on Grassmannians or a product of Grassmannians. Specifically, we define BFGS and limited memory BFGS updates in local and global coordinates on Grassmannians or a product of these. We prove that, when local coordinates are used, our BFGS updates on Grassmannians share the same optimality property as the usual BFGS updates on Euclidean spaces. When applied to the best multilinear rank approximation problem for general and symmetric tensors, our approach yields fast, robust, and accurate algorithms that exploit the special Grassmannian structure of the respective problems and which work on tensors of large dimensions and arbitrarily high order. Extensive numerical experiments are included to substantiate our claims.

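A basic ingredient in any such Grassmannian optimization is projecting a Euclidean gradient onto the tangent space at the current point. For a point represented by a matrix Y with orthonormal columns this projection is a standard identity from the Riemannian optimization literature, sketched here in our own notation rather than taken from the paper:

```python
import numpy as np

def grassmann_tangent_project(Y, G):
    """Project a Euclidean gradient G onto the tangent space of the
    Grassmannian at span(Y), where Y has orthonormal columns:
    Delta = (I - Y Y^T) G."""
    return G - Y @ (Y.T @ G)
```

Components of G inside span(Y) do not change the subspace, so removing them leaves exactly the directions a BFGS-type update on the Grassmannian should act on.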
  • 23.
    Song, Han Hee
    et al.
    Department of Computer Science, The University of Texas at Austin.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Cho, Tae Won
    Department of Computer Science, The University of Texas at Austin.
    Dave, Vacha
    Department of Computer Science, The University of Texas at Austin.
    Lu, Zhengdong
    Institute for Computational Engineering and Sciences, The University of Texas at Austin.
    Dhillon, Inderjit S.
    Department of Computer Science, The University of Texas at Austin.
    Zhang, Yin
    Department of Computer Science, The University of Texas at Austin.
    Qiu, Lili
    Department of Computer Science, The University of Texas at Austin.
    Clustered Embedding of Massive Social Networks, 2012. In: Proceedings of the 12th ACM SIGMETRICS/PERFORMANCE joint international conference on Measurement and Modeling of Computer Systems, Association for Computing Machinery (ACM), 2012, p. 331-342. Conference paper (Other academic)
    Abstract [en]

    The explosive growth of social networks has created numerous exciting research opportunities. A central concept in the analysis of social networks is a proximity measure, which captures the closeness or similarity between nodes in a social network. Despite much research on proximity measures, there is a lack of techniques to efficiently and accurately compute proximity measures for large-scale social networks. In this paper, we develop a novel dimensionality reduction technique, called clustered spectral graph embedding, to embed the graph's adjacency matrix into a much smaller matrix. The embedded matrix, together with the embedding subspaces, captures the essential clustering and spectral structure of the original graph and allows a wide range of analysis tasks to be performed in an efficient and accurate fashion. To evaluate our technique, we use three large real-world social network datasets: Flickr, LiveJournal and MySpace, with up to 2 million nodes and 90 million links. Our results clearly demonstrate the accuracy, scalability and flexibility of our approach in the context of three important social network analysis tasks: proximity estimation, missing link inference, and link prediction.

  • 24.
    Sui, Xin
    et al.
    University of Texas at Austin, USA.
    Lee, Tsung-Hsien
    University of Texas at Austin, USA.
    Whang, Joyce Jiyoung
    University of Texas at Austin, USA.
    Savas, Berkant
    Linköping University, Department of Mathematics, Scientific Computing. Linköping University, The Institute of Technology.
    Jain, Saral
    University of Texas at Austin, USA.
    Pingali, Keshav
    University of Texas at Austin, USA.
    Dhillon, Inderjit S.
    University of Texas at Austin, USA.
    Parallel clustered low-rank approximation of graphs and its application to link prediction, 2012. In: Proceedings of the International Workshop on Languages and Compilers for Parallel Computing, Springer Berlin Heidelberg, 2012, p. 76-95. Conference paper (Other academic)
    Abstract [en]

    Social network analysis has become a major research area that has impact in diverse applications ranging from search engines to product recommendation systems. A major problem in implementing social network analysis algorithms is the sheer size of many social networks, for example, the Facebook graph has more than 900 million vertices and even small networks may have tens of millions of vertices. One solution to dealing with these large graphs is dimensionality reduction using spectral or SVD analysis of the adjacency matrix of the network, but these global techniques do not necessarily take into account local structures or clusters of the network that are critical in network analysis. A more promising approach is clustered low-rank approximation: instead of computing a global low-rank approximation, the adjacency matrix is first clustered, and then a low-rank approximation of each cluster (i.e., diagonal block) is computed. The resulting algorithm is challenging to parallelize not only because of the large size of the data sets in social network analysis, but also because it requires computing with very diverse data structures ranging from extremely sparse matrices to dense matrices. In this paper, we describe the first parallel implementation of a clustered low-rank approximation algorithm for large social network graphs, and use it to perform link prediction in parallel. Experimental results show that this implementation scales well on large distributed-memory machines; for example, on a Twitter graph with roughly 11 million vertices and 63 million edges, our implementation scales by a factor of 86 on 128 processes and takes less than 2300 seconds, while on a much larger Twitter graph with 41 million vertices and 1.2 billion edges, our implementation scales by a factor of 203 on 256 processes with a running time about 4800 seconds.

  • 25.
    Vasuki, Vishvas
    et al.
    University of Texas at Austin.
    Natarajan, Nagarajan
    University of Texas at Austin.
    Lu, Zhengdong
    University of Texas at Austin.
    Savas, Berkant
    University of Texas at Austin.
    Dhillon, Inderjit S.
    University of Texas at Austin.
    Scalable Affiliation Recommendation using Auxiliary Networks, 2011. In: ACM Transactions on Intelligent Systems and Technology, ISSN 2157-6904, Vol. 3, no 1, p. 3:1-3:20. Article in journal (Refereed)
    Abstract [en]

    Social network analysis has attracted increasing attention in recent years. In many social networks, besides friendship links among users, the phenomenon of users associating themselves with groups or communities is common. Thus, two networks exist simultaneously: the friendship network among users, and the affiliation network between users and groups. In this article, we tackle the affiliation recommendation problem, where the task is to predict or suggest new affiliations between users and communities, given the current state of the friendship and affiliation networks. More generally, affiliations need not be community affiliations—they can be a user's taste, so affiliation recommendation algorithms have applications beyond community recommendation. In this article, we show that information from the friendship network can indeed be fruitfully exploited in making affiliation recommendations. Using a simple way of combining these networks, we suggest two models of user-community affinity for the purpose of making affiliation recommendations: one based on graph proximity, and another using latent factors to model users and communities. We explore the affiliation recommendation algorithms suggested by these models and evaluate these algorithms on two real-world networks, Orkut and Youtube. In doing so, we motivate and propose a way of evaluating recommenders, by measuring how good the top 50 recommendations are for the average user, and demonstrate the importance of choosing the right evaluation strategy. The algorithms suggested by the graph proximity model turn out to be the most effective. We also introduce scalable versions of these algorithms, and demonstrate their effectiveness. This use of link prediction techniques for the purpose of affiliation recommendation is, to our knowledge, novel.

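As one concrete instance of the graph proximity family, a truncated Katz measure scores a node pair by a sum of beta^k times the number of walks of length k between them. This particular choice and the code are ours for illustration; the article evaluates its own proximity variants on the combined friendship/affiliation network.

```python
import numpy as np

def katz_scores(A, beta=0.05, n_terms=4):
    """Truncated Katz proximity: S = sum_{k=1..n_terms} beta^k A^k,
    so node pairs joined by many short walks get high scores."""
    S = np.zeros_like(A, dtype=float)
    P = np.eye(A.shape[0])
    for _ in range(n_terms):
        P = beta * (P @ A)  # P now equals beta^k A^k
        S += P
    return S
```

Recommending the highest-scoring unlinked user-community pairs then turns the proximity matrix directly into an affiliation recommender.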