liu.se - Search for publications in DiVA
101 - 115 of 115
  • 101.
    Savas, Berkant
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap.
    Tangent distance algorithm on HOSVD reduced data for handwritten digit recognition (2005). In: Workshop on Tensor Decompositions and Applications, 2005. Conference paper (Other academic)
  • 102.
    Savas, Berkant
    et al.
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Eldén, Lars
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Handwritten digit classification using higher order singular value decomposition (2007). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 40, no. 3, pp. 993-1003. Journal article (Refereed)
    Abstract [en]

    In this paper we present two algorithms for handwritten digit classification based on the higher order singular value decomposition (HOSVD). The first algorithm uses the HOSVD to construct the class models and achieves classification results with an error rate lower than 6%. The second algorithm uses the HOSVD for tensor approximation simultaneously in two modes. Classification error rates for the second algorithm are close to 5%, even though the approximation reduces the original training data by more than 98% before the class models are constructed. In both algorithms, the actual classification in the test phase is carried out by solving a series of least squares problems. In terms of the computational work required in the test phase, the second algorithm is twice as efficient as the first.
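
    As an illustration of the least squares classification step described in the abstract above, the following sketch builds one orthonormal basis per digit class and assigns a test image to the class with the smallest subspace residual. It is a hedged toy in NumPy: an ordinary SVD per class stands in for the paper's HOSVD class models, and the function names and random data are made up.

```python
import numpy as np

def train_class_bases(class_matrices, k=10):
    # For each digit class, compute an orthonormal basis of a k-dimensional
    # subspace from the pixels-by-examples matrix of that class. A plain SVD
    # per class stands in here for the HOSVD-based class models of the paper.
    bases = []
    for X in class_matrices:              # X has shape (n_pixels, n_examples)
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        bases.append(U[:, :k])            # orthonormal columns spanning the class subspace
    return bases

def classify(d, bases):
    # Assign the test digit d (a flattened image) to the class whose subspace
    # gives the smallest least squares residual ||d - U U^T d||.
    residuals = [np.linalg.norm(d - U @ (U.T @ d)) for U in bases]
    return int(np.argmin(residuals))

# Hypothetical usage with random data standing in for 16x16 digit images:
rng = np.random.default_rng(0)
training = [rng.standard_normal((256, 100)) for _ in range(10)]
bases = train_class_bases(training, k=10)
print(classify(rng.standard_normal(256), bases))
```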

  • 103.
    Savas, Berkant
    et al.
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Eldén, Lars
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Krylov-type methods for tensor computations I (2013). In: Linear Algebra and its Applications, ISSN 0024-3795, E-ISSN 1873-1856, Vol. 438, no. 2, pp. 891-918. Journal article (Refereed)
    Abstract [en]

    Several Krylov-type procedures are introduced that generalize matrix Krylov methods to tensor computations. They are denoted the minimal Krylov recursion, the maximal Krylov recursion, and the contracted tensor product Krylov recursion. It is proved that, for a given tensor A with multilinear rank (p, q, r), the minimal Krylov recursion extracts the correct subspaces associated with the tensor in p+q+r tensor-vector-vector multiplications. An optimized minimal Krylov procedure is described that, for a given multilinear rank of an approximation, produces a better approximation than the standard minimal recursion. We further generalize the matrix Krylov decomposition to a tensor Krylov decomposition. The tensor Krylov methods are intended for the computation of low multilinear rank approximations of large and sparse tensors, but they are also useful for certain dense and structured tensors, for computing their higher order singular value decompositions or for obtaining starting points for best low-rank computations of tensors. A set of numerical experiments, using real-world and synthetic data sets, illustrates some of the properties of the tensor Krylov methods.
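
    To make the tensor-vector-vector multiplications that drive the recursions concrete, here is a hedged NumPy sketch of a minimal Krylov-type recursion for a third-order tensor. It is a simplified reading of the idea (single-pass orthogonalization, no starting-vector strategy, no breakdown handling), not the algorithm of the paper, and the function names are invented.

```python
import numpy as np

def orth_append(B, x):
    # Orthogonalize x against the columns of B and append it if it is nonzero.
    x = x - B @ (B.T @ x)
    nrm = np.linalg.norm(x)
    return B if nrm < 1e-12 else np.hstack([B, (x / nrm)[:, None]])

def minimal_krylov_recursion(A, k, seed=0):
    # Sketch: in every step the third-order tensor A is contracted with the
    # latest vectors in the other two modes, and the new vector is
    # orthogonalized against the basis of its own mode.
    rng = np.random.default_rng(seed)
    l, m, n = A.shape
    U = np.linalg.qr(rng.standard_normal((l, 1)))[0]
    V = np.linalg.qr(rng.standard_normal((m, 1)))[0]
    W = np.linalg.qr(rng.standard_normal((n, 1)))[0]
    for _ in range(k - 1):
        u = np.einsum('ijk,j,k->i', A, V[:, -1], W[:, -1])  # tensor-vector-vector product, mode 1
        v = np.einsum('ijk,i,k->j', A, U[:, -1], W[:, -1])  # mode 2
        w = np.einsum('ijk,i,j->k', A, U[:, -1], V[:, -1])  # mode 3
        U, V, W = orth_append(U, u), orth_append(V, v), orth_append(W, w)
    return U, V, W   # orthonormal bases approximating the three mode subspaces
```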

  • 104.
    Savas, Berkant
    et al.
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap.
    Lim, Lek-Heng
    ICME Stanford University.
    Quasi-Newton algorithm for best multi-linear rank approximation of tensors (2007). In: 6th International Congress on Industrial and Applied Mathematics, 2007. Conference paper (Other academic)
    Abstract [en]

    In this talk we introduce a novel method for solving the best multilinear rank approximation problem. Our algorithm differs from existing methods in two respects: (1) it exploits the fact that the problem may be viewed as an optimization problem over a product of Grassmann manifolds; (2) it uses quasi-Newton-like Hessian approximations specially adapted to Grassmannians and thus avoids the problem of prohibitively large Hessians that arises in such problems. Tensor approximation problems occur in various applications involving multidimensional data. The performance of the quasi-Newton algorithm is compared with the Newton-Grassmann and Higher Order Orthogonal Iteration algorithms for general and symmetric 3-tensors.
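
    The Higher Order Orthogonal Iteration used as a comparison point in the abstract is simple enough to sketch. The snippet below is a generic textbook-style HOOI in NumPy, not the quasi-Newton method of the talk and not the authors' code; the HOSVD-based initialization and the fixed iteration count are assumptions.

```python
import numpy as np

def hooi(A, ranks=(2, 2, 2), iters=20):
    # Higher Order Orthogonal Iteration: alternately update each factor as the
    # dominant left singular vectors of the tensor contracted with the other
    # two factors. Initialization from the truncated HOSVD of each unfolding.
    l, m, n = A.shape
    p, q, r = ranks
    U = np.linalg.svd(A.reshape(l, -1))[0][:, :p]
    V = np.linalg.svd(A.transpose(1, 0, 2).reshape(m, -1))[0][:, :q]
    W = np.linalg.svd(A.transpose(2, 0, 1).reshape(n, -1))[0][:, :r]
    for _ in range(iters):
        B = np.einsum('ijk,jb,kc->ibc', A, V, W).reshape(l, -1)
        U = np.linalg.svd(B)[0][:, :p]
        B = np.einsum('ijk,ia,kc->jac', A, U, W).reshape(m, -1)
        V = np.linalg.svd(B)[0][:, :q]
        B = np.einsum('ijk,ia,jb->kab', A, U, V).reshape(n, -1)
        W = np.linalg.svd(B)[0][:, :r]
    core = np.einsum('ijk,ia,jb,kc->abc', A, U, V, W)   # small core tensor
    return core, U, V, W
```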

  • 105.
    Savas, Berkant
    et al.
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Lim, Lek-Heng
    University of California Berkeley.
    Quasi-Newton Methods on Grassmannians and Multilinear Approximations of Tensors (2010). In: SIAM Journal on Scientific Computing, ISSN 1064-8275, Vol. 32, no. 6, pp. 3352-3393. Journal article (Refereed)
    Abstract [en]

    In this paper we proposed quasi-Newton and limited memory quasi-Newton methods for objective functions defined on Grassmannians or a product of Grassmannians. Specifically we defined BFGS and limited memory BFGS updates in local and global coordinates on Grassmannians or a product of these. We proved that, when local coordinates are used, our BFGS updates on Grassmannians share the same optimality property as the usual BFGS updates on Euclidean spaces. When applied to the best multilinear rank approximation problem for general and symmetric tensors, our approach yields fast, robust, and accurate algorithms that exploit the special Grassmannian structure of the respective problems and which work on tensors of large dimensions and arbitrarily high order. Extensive numerical experiments are included to substantiate our claims.
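
    To make the Grassmannian viewpoint of the abstract tangible, the sketch below runs plain Riemannian gradient ascent with a QR retraction on the best multilinear rank approximation objective. The paper builds BFGS and limited memory BFGS updates on top of exactly this geometry, so the snippet only illustrates the setting, not the proposed method; names, step size and iteration count are assumptions.

```python
import numpy as np

def riemannian_gradient_ascent(A, ranks=(2, 2, 2), step=0.5, iters=50, seed=0):
    # Maximize F(U, V, W) = ||A x1 U^T x2 V^T x3 W^T||_F^2 over a product of
    # Grassmannians by projected (Riemannian) gradient ascent with a QR
    # retraction. Illustrative only; no line search or convergence test.
    rng = np.random.default_rng(seed)
    dims = A.shape
    factors = [np.linalg.qr(rng.standard_normal((d, k)))[0]
               for d, k in zip(dims, ranks)]

    def euclidean_gradient(mode):
        # Contract A with the other two factors, unfold along `mode`, and
        # form the Euclidean gradient 2 * B B^T U_mode.
        idx = 'ijk'
        other = [m for m in range(3) if m != mode]
        spec = f"ijk,{idx[other[0]]}a,{idx[other[1]]}b->{idx[mode]}ab"
        B = np.einsum(spec, A, factors[other[0]], factors[other[1]])
        B = B.reshape(dims[mode], -1)
        return 2 * B @ (B.T @ factors[mode])

    for _ in range(iters):
        for m in range(3):
            U = factors[m]
            G = euclidean_gradient(m)
            G = G - U @ (U.T @ G)                 # project onto the tangent space
            nrm = np.linalg.norm(G)
            if nrm < 1e-10:
                continue
            factors[m] = np.linalg.qr(U + step * G / nrm)[0]   # QR retraction
    return factors
```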

  • 106.
    Simoncini, V.
    et al.
    Elden, Lars
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap.
    Inexact Rayleigh quotient-type methods for eigenvalue computations (2002). In: BIT Numerical Mathematics, ISSN 0006-3835, E-ISSN 1572-9125, Vol. 42, no. 1, pp. 159-182. Journal article (Refereed)
    Abstract [en]

    We consider the computation of an eigenvalue and corresponding eigenvector of a Hermitian positive definite matrix A ∈ C^{n×n}, assuming that good approximations of the wanted eigenpair are already available, as may be the case in applications such as structural mechanics. We analyze efficient implementations of inexact Rayleigh quotient-type methods, which involve the approximate solution of a linear system at each iteration by means of the Conjugate Residuals method. We show that the inexact version of the classical Rayleigh quotient iteration is mathematically equivalent to a Newton approach. New insightful bounds relating the inner and outer recurrences are derived. In particular, we show that even if in the inner iterations the norm of the residual for the linear system decreases very slowly, the eigenvalue residual is reduced substantially. Based on the theoretical results, we examine stopping criteria for the inner iteration. We also discuss and motivate a preconditioning strategy for the inner iteration in order to further accelerate the convergence. Numerical experiments illustrate the analysis.
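
    The inner-outer structure analyzed in the abstract can be sketched in a few lines: an outer Rayleigh quotient iteration whose shifted system is solved only approximately by a deliberately truncated Conjugate Residuals loop. This is an unpreconditioned toy meant to show the setting, not the authors' implementation or their stopping criteria.

```python
import numpy as np

def conjugate_residuals(M, b, tol=1e-1, maxiter=30):
    # Conjugate Residuals on the symmetric system M y = b, stopped at a loose
    # tolerance on purpose so that the inner solve is inexact.
    y = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    Mr = M @ r
    Mp = Mr.copy()
    for _ in range(maxiter):
        alpha = (r @ Mr) / (Mp @ Mp)
        y = y + alpha * p
        r_new = r - alpha * Mp
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(b):
            return y
        Mr_new = M @ r_new
        beta = (r_new @ Mr_new) / (r @ Mr)
        p = r_new + beta * p
        Mp = Mr_new + beta * Mp
        r, Mr = r_new, Mr_new
    return y

def inexact_rqi(A, x0, outer=8, inner_tol=1e-1):
    # Outer Rayleigh quotient iteration; each shifted system (A - rho I) y = x
    # is solved only approximately by the truncated inner iteration above.
    x = x0 / np.linalg.norm(x0)
    for _ in range(outer):
        rho = x @ (A @ x)                         # Rayleigh quotient
        y = conjugate_residuals(A - rho * np.eye(len(x)), x, tol=inner_tol)
        x = y / np.linalg.norm(y)
    return rho, x
```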

  • 107.
    Simonsson, Lennart
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap.
    Computing a Partial SVD of a Matrix with Missing Data (2003). In: Numerical Linear Algebra and its Applications: XXI International School and Workshop, 2003. Conference paper (Other academic)
  • 108.
    Simonsson, Lennart
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap.
    Lågrang-approximation av en matris med saknade element [Low-rank approximation of a matrix with missing elements] (2004). In: Workshop i tillämpad matematik, 2004. Conference paper (Other academic)
  • 109.
    Simonsson, Lennart
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Subspace Computations via Matrix Decompositions and Geometric Optimization (2006). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis is concerned with the computation of certain subspaces connected to a given matrix, where the closely related problem of approximating the matrix with one of lower rank is given special attention.

    To determine the rank and obtain bases for fundamental subspaces such as the range and null space of a matrix, computing the singular value decomposition (SVD) is the standard method. When new data are added, like in adaptive signal processing, a more economic alternative to the SVD is to use a rank-revealing UTV (ULV or URV) decomposition since it can be updated more easily.

    The scenario in part I of the thesis is that the matrix to be updated is either a product or a quotient of two other matrices. There exist implicit algorithms for computing the SVD of a product or quotient that operate on the two matrices separately. For the corresponding problem of a URV decomposition of a product or quotient, originally sketched by S. Qiao, we give the details of the updating algorithms. Sample numerical experiments confirm that, in some cases, the quality of the approximate subspaces is degraded if the product is formed explicitly, compared with the subspaces obtained from the implicitly computed URV. We argue that the same pros and cons that affect the choice between the URV and ULV decomposition of one matrix carry over to the choice between the implicit URV decomposition and the more established ULLV decomposition in the quotient case. As a signal processing application, we track the range of an estimated cross-covariance matrix.

    We also describe the updating issues of a decomposition that reveals the ranks of the individual matrices in a product. That decomposition suffers from a difficult decision about the rank of the product and will not be tested as a competitor to the implicit URV decomposition referred to above.

    A common situation in scientific computing is that the matrix is too large to admit a full factorization within reasonable time. In that case iterative methods must be employed, where Lanczos-type algorithms are the most widely used. In part II we discuss the formulation of standard numerical optimization methods on the Grassmann manifold, whose objects are subspaces, and focus on the application to numerical linear algebra problems. This approach allows us to (re-)derive algorithms for the partial symmetric eigenvalue problem; the inexact Newton method is given special attention.

    A recent method is the Jacobi-Davidson (JD) algorithm, which can be seen both as a variation of an inexact Newton method for solving a set of nonlinear equations or minimizing a function, and as an expanding subspace algorithm that is equivalent to Lanczos if the equation in each step is solved exactly. Our contribution is an algorithm that is fairly robust with a subspace that is only twice as large as the desired one. A large part treats the implementation issues associated with the solution of a correction equation, including stopping rules and the use of preconditioners.

    Other numerical linear algebra problems call for a pair of subspaces. We give Grassmann-type algorithms for the partial SVD problem and for low rank approximation of matrices with missing entries, but restrict ourselves to demonstrating their efficiency when the Newton equation is solved exactly.
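
    Since a large part of the thesis concerns the Jacobi-Davidson correction equation, a bare-bones sketch of the expanding subspace iteration may help fix ideas. Everything below is a simplified illustration with invented names: the correction equation is solved exactly and the basis is never restarted, whereas the thesis is precisely about inexact solves, stopping rules, preconditioners and keeping the subspace small.

```python
import numpy as np

def jacobi_davidson_smallest(A, steps=25, tol=1e-10, seed=0):
    # Minimal Jacobi-Davidson style iteration for the smallest eigenpair of a
    # symmetric A. The projected correction equation
    #   (I - x x^T)(A - theta I)(I - x x^T) t = -r,   t orthogonal to x,
    # is solved exactly here via two shifted solves.
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((n, 1)))[0]
    theta, x = 0.0, V[:, 0]
    for _ in range(steps):
        H = V.T @ A @ V                        # Rayleigh-Ritz on the search space
        vals, S = np.linalg.eigh(H)
        theta, x = vals[0], V @ S[:, 0]
        r = A @ x - theta * x
        if np.linalg.norm(r) < tol:
            break
        z1 = np.linalg.solve(A - theta * np.eye(n), r)
        z2 = np.linalg.solve(A - theta * np.eye(n), x)
        t = -z1 + ((x @ z1) / (x @ z2)) * z2   # exact correction, orthogonal to x
        t = t - V @ (V.T @ t)                  # keep the search basis orthonormal
        nrm = np.linalg.norm(t)
        if nrm < 1e-14:
            break
        V = np.hstack([V, (t / nrm)[:, None]])
    return theta, x
```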

  • 110.
    Simonsson, Lennart
    et al.
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Elden, Lars
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Grassmann algorithms for low rank approximation of matrices with missing values (2010). In: BIT Numerical Mathematics, ISSN 0006-3835, Vol. 50, no. 1, pp. 173-191. Journal article (Refereed)
    Abstract [en]

    The problem of approximating a matrix by another matrix of lower rank, when a modest portion of its elements are missing, is considered. The solution is obtained using Newton's algorithm to find a zero of a vector field on a product manifold. As a preliminary, the algorithm is formulated for the well-known case with no missing elements, where a rederivation of the correction equation in a block Jacobi-Davidson method is also included. Numerical examples show that the Newton algorithm grows more efficient than an alternating least squares procedure as the amount of missing values increases.
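
    For contrast with the Newton approach of the paper, here is the alternating least squares baseline it is compared against, in a deliberately simple NumPy form. The variable names, the fixed iteration count, and the absence of safeguards for empty rows or columns are simplifications of this sketch, not features of the paper.

```python
import numpy as np

def als_missing(Xobs, mask, rank=5, iters=50, seed=0):
    # Alternating least squares for low-rank approximation X ~ U V^T when only
    # the entries with mask == True are observed: fix one factor and solve a
    # small least squares problem per row or column of the other.
    rng = np.random.default_rng(seed)
    m, n = Xobs.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    for _ in range(iters):
        for j in range(n):                 # fix U, update row j of V
            rows = mask[:, j]
            V[j], *_ = np.linalg.lstsq(U[rows], Xobs[rows, j], rcond=None)
        for i in range(m):                 # fix V, update row i of U
            cols = mask[i, :]
            U[i], *_ = np.linalg.lstsq(V[cols], Xobs[i, cols], rcond=None)
    return U, V                            # approximation on the observed pattern
```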

  • 111.
    Skoglund, Ingegerd
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Algorithms for a Partially Regularized Least Squares Problem (2007). Licentiate thesis, comprising papers (Other academic)
    Abstract [sv, translated]

    When analysing water samples taken from, for example, a watercourse, the concentrations of various substances are determined. These concentrations often depend on the water flow. It is of interest to find out whether observed changes in the concentrations are due to natural variations or are caused by other factors. To investigate this, a statistical time series model containing unknown parameters has been proposed. The model is fitted to measured data, which leads to an underdetermined system of equations. The thesis studies, among other things, different ways of ensuring a unique and reasonable solution. The basic idea is to impose certain additional conditions on the sought parameters. In the studied model one can, for example, require that certain parameters do not vary strongly with time while still allowing seasonal variations. This is done by regularizing these parameters in the model.

    This gives rise to a least squares problem with one or two regularization parameters. Since not all of the parameters are regularized, we moreover obtain a partially regularized least squares problem. In general, the values of the regularization parameters are not known, and the problem may have to be solved for several different values in order to obtain a reasonable solution. The thesis studies how this problem can be solved numerically using essentially two different methods, one iterative and one direct. In addition, some ways of determining suitable values of the regularization parameters are studied.

    In an iterative solution method, a given initial approximation is improved step by step until a suitably chosen stopping criterion is satisfied. Here we use the conjugate gradient method with specially constructed preconditioners. The number of iterations required to solve the problem without and with preconditioning is compared both theoretically and in practice. The method is investigated here only with the same value for the two regularization parameters.

    In the direct method, QR factorization is used to solve the least squares problem. The idea is to first carry out the computations that can be done independently of the regularization parameters, while taking the special structure of the problem into account.

    To determine values of the regularization parameters, Reinsch's method is generalized to the case with two parameters. Generalized cross-validation and a computationally cheaper Monte Carlo method are also investigated.

    List of papers
    1. A block-preconditioner for a special regularized least-squares problem
    2007 (English). In: Numerical Linear Algebra with Applications, ISSN 1070-5325, Vol. 14, no. 6, pp. 469-484. Journal article (Refereed). Published
    Abstract [en]

    We consider a linear system of the form A1x1 + A2x2 + e = b1. The vector e consists of independent and identically distributed random variables, all with mean zero. The unknowns are split into two groups, x1 and x2. It is assumed that A1ᵀA1 has full rank and is easy to invert. In this model there are usually more unknowns than observations, and the resulting linear system is most often consistent, having an infinite number of solutions. Hence, some constraint on the parameter vector x is needed. One possibility is to avoid rapid variation in, e.g., the parameters x2. This can be accomplished by regularizing using a matrix A3, which is a discretization of some norm (e.g. a Sobolev space norm). We formulate the problem as a partially regularized least-squares problem and use the conjugate gradient method for its solution. Using the special structure of the problem, we suggest and analyse block-preconditioners of Schur complement type. We demonstrate their effectiveness in some numerical tests. The test examples are taken from an application in modelling of substance transport in rivers.

    Keywords
    conjugate gradient, least squares, regularization
    HSV category
    Identifiers
    urn:nbn:se:liu:diva-14423 (URN), 10.1002/nla.533 (DOI)
    Available from: 2007-05-02 Created: 2007-05-02 Last updated: 2009-04-26
    2. A direct method for a special regularized least squares problem
    Manuscript (Other academic)
    Identifiers
    urn:nbn:se:liu:diva-14424 (URN)
    Available from: 2007-05-02 Created: 2007-05-02 Last updated: 2010-01-13
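
    To make the partially regularized least squares problem described in this entry concrete, the sketch below writes it as min ||A1 x1 + A2 x2 - b||^2 + lam ||A3 x2||^2 and solves it densely by stacking. The thesis instead develops a structured QR-based direct method and preconditioned conjugate gradients, and allows two regularization parameters, so this is only a definition-level illustration with hypothetical names.

```python
import numpy as np

def partially_regularized_ls(A1, A2, A3, b, lam):
    # Only the block x2 is regularized. Stack the residual and the scaled
    # regularization term into one ordinary (dense) least squares problem.
    n1 = A1.shape[1]
    top = np.hstack([A1, A2])
    bottom = np.hstack([np.zeros((A3.shape[0], n1)), np.sqrt(lam) * A3])
    M = np.vstack([top, bottom])
    rhs = np.concatenate([b, np.zeros(A3.shape[0])])
    x, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return x[:n1], x[n1:]     # unregularized block x1, regularized block x2
```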
  • 112.
    Song, Han Hee
    et al.
    Department of Computer Science, The University of Texas at Austin.
    Savas, Berkant
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Cho, Tae Won
    Department of Computer Science, The University of Texas at Austin.
    Dave, Vacha
    Department of Computer Science, The University of Texas at Austin.
    Lu, Zhengdong
    Institute for Computational Engineering and Sciences, The University of Texas at Austin.
    Dhillon, Inderjit S.
    Department of Computer Science, The University of Texas at Austin.
    Zhang, Yin
    Department of Computer Science, The University of Texas at Austin.
    Qiu, Lili
    Department of Computer Science, The University of Texas at Austin.
    Clustered Embedding of Massive Social Networks (2012). In: Proceedings of the 12th ACM SIGMETRICS/PERFORMANCE Joint International Conference on Measurement and Modeling of Computer Systems, Association for Computing Machinery (ACM), 2012, pp. 331-342. Conference paper (Other academic)
    Abstract [en]

    The explosive growth of social networks has created numerous exciting research opportunities. A central concept in the analysis of social networks is a proximity measure, which captures the closeness or similarity between nodes in a social network. Despite much research on proximity measures, there is a lack of techniques to efficiently and accurately compute proximity measures for large-scale social networks. In this paper, we develop a novel dimensionality reduction technique, called clustered spectral graph embedding, to embed the graph's adjacency matrix into a much smaller matrix. The embedded matrix, together with the embedding subspaces, captures the essential clustering and spectral structure of the original graph and allows a wide range of analysis tasks to be performed in an efficient and accurate fashion. To evaluate our technique, we use three large real-world social network datasets: Flickr, LiveJournal and MySpace, with up to 2 million nodes and 90 million links. Our results clearly demonstrate the accuracy, scalability and flexibility of our approach in the context of three important social network analysis tasks: proximity estimation, missing link inference, and link prediction.
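
    A rough sense of the block structure behind clustered spectral graph embedding can be given in a few lines: compute a small eigenbasis for every diagonal cluster block and stack the bases block-diagonally. The dense NumPy sketch below is a simplified reading of the idea (the clustering is assumed given, and the real method targets massive sparse graphs with iterative eigensolvers), not the authors' implementation.

```python
import numpy as np

def clustered_spectral_embedding(A, clusters, k=10):
    # A: symmetric dense adjacency matrix; clusters: array mapping each node
    # to a cluster id. Take up to k eigenvectors (largest in magnitude) of
    # every diagonal cluster block and stack them block-diagonally into V.
    # The small matrix V^T A V together with V is the embedding.
    n = A.shape[0]
    blocks, index_sets = [], []
    for c in np.unique(clusters):
        idx = np.where(clusters == c)[0]
        vals, vecs = np.linalg.eigh(A[np.ix_(idx, idx)])
        order = np.argsort(-np.abs(vals))[:min(k, len(idx))]
        blocks.append(vecs[:, order])
        index_sets.append(idx)
    V = np.zeros((n, sum(b.shape[1] for b in blocks)))
    col = 0
    for idx, b in zip(index_sets, blocks):
        V[idx, col:col + b.shape[1]] = b
        col += b.shape[1]
    return V, V.T @ A @ V      # embedding subspace and embedded matrix
```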

  • 113.
    Sui, Xin
    et al.
    University of Texas at Austin, USA.
    Lee, Tsung-Hsien
    University of Texas at Austin, USA.
    Whang, Joyce Jiyoung
    University of Texas at Austin, USA.
    Savas, Berkant
    Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap. Linköpings universitet, Tekniska högskolan.
    Jain, Saral
    University of Texas at Austin, USA.
    Pingali, Keshav
    University of Texas at Austin, USA.
    Dhillon, Inderjit S.
    University of Texas at Austin, USA.
    Parallel clustered low-rank approximation of graphs and its application to link prediction (2012). In: Proceedings of the International Workshop on Languages and Compilers for Parallel Computing, Springer Berlin Heidelberg, 2012, pp. 76-95. Conference paper (Other academic)
    Abstract [en]

    Social network analysis has become a major research area that has impact in diverse applications ranging from search engines to product recommendation systems. A major problem in implementing social network analysis algorithms is the sheer size of many social networks; for example, the Facebook graph has more than 900 million vertices, and even small networks may have tens of millions of vertices. One way of dealing with these large graphs is dimensionality reduction using spectral or SVD analysis of the adjacency matrix of the network, but these global techniques do not necessarily take into account local structures or clusters of the network that are critical in network analysis. A more promising approach is clustered low-rank approximation: instead of computing a global low-rank approximation, the adjacency matrix is first clustered, and then a low-rank approximation of each cluster (i.e., diagonal block) is computed. The resulting algorithm is challenging to parallelize not only because of the large size of the data sets in social network analysis, but also because it requires computing with very diverse data structures ranging from extremely sparse matrices to dense matrices. In this paper, we describe the first parallel implementation of a clustered low-rank approximation algorithm for large social network graphs, and use it to perform link prediction in parallel. Experimental results show that this implementation scales well on large distributed-memory machines; for example, on a Twitter graph with roughly 11 million vertices and 63 million edges, our implementation scales by a factor of 86 on 128 processes and takes less than 2300 seconds, while on a much larger Twitter graph with 41 million vertices and 1.2 billion edges, our implementation scales by a factor of 203 on 256 processes with a running time of about 4800 seconds.
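
    One way a clustered low-rank approximation can be used for link prediction, as in the paper, is through a truncated Katz proximity measure evaluated entirely in the reduced space. The sketch below assumes an approximation A ≈ V S Vᵀ with orthonormal V (as produced, for instance, by the clustered embedding sketch under entry 112); it is an illustrative scoring routine, not the paper's parallel pipeline.

```python
import numpy as np

def katz_link_scores(V, S, pairs, beta=0.005, terms=4):
    # Truncated Katz proximity sum_{t>=1} beta^t A^t, evaluated through the
    # low-rank approximation A ~ V S V^T with V^T V = I, so that A^t ~ V S^t V^T
    # and only the small matrix S is powered. `pairs` is a list of candidate
    # (i, j) edges; the highest-scoring unlinked pairs are predicted as links.
    K = np.zeros_like(S)
    Spow = np.eye(S.shape[0])
    for t in range(1, terms + 1):
        Spow = Spow @ S
        K += beta ** t * Spow
    return [float(V[i] @ K @ V[j]) for i, j in pairs]
```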

  • 114.
    Wikstrom, P.
    et al.
    Wikström, P., Department of Materials Science and Engineering, Division of Energy and Furnace Technology, Royal Institute of Technology (KTH), Brinellvägen 23, S-100 44 Stockholm, Sweden.
    Blasiak, W.
    Department of Materials Science and Engineering, Division of Energy and Furnace Technology, Royal Institute of Technology (KTH), Brinellvägen 23, S-100 44 Stockholm, Sweden.
    Berntsson, Fredrik
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap.
    Estimation of the transient surface temperature and heat flux of a steel slab using an inverse method (2007). In: Applied Thermal Engineering, ISSN 1359-4311, E-ISSN 1873-5606, Vol. 27, no. 14-15, pp. 2463-2472. Journal article (Refereed)
    Abstract [en]

    In the steel industry it is of great importance to be able to control the surface temperature and heating or cooling rates during heat treatment processes. An experiment was performed in which a steel slab was heated up to 1250 °C in a fuel-fired test furnace. The transient surface temperature and heat flux of the steel slab are calculated using a model for inverse heat conduction; that is, the time-dependent local surface temperature and heat flux of the slab are calculated on the basis of temperature measurements at selected points in its interior. Time and temperature histories were measured at three points inside the steel slab. The measured temperature histories at the two lower locations of the slab were used as input to calculate the temperature at the position of the third location. A comparison of the experimentally measured and the calculated temperature histories was made to verify the model. The results showed very good agreement and suggest that this model can be applied to similar applications in the steel industry or in other areas where the target of investigation for some reason is inaccessible to direct measurements. © 2007 Elsevier Ltd. All rights reserved.
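
    As a generic illustration of the kind of inverse problem the abstract describes, the sketch below builds a toy 1D forward model from surface heat flux to an interior temperature sensor and recovers the flux history by Tikhonov-regularized least squares. The discretization, material parameters and regularization are all invented here, and the paper's dedicated inverse heat conduction method is not reproduced.

```python
import numpy as np

def sensor_response_matrix(n_steps, nx=20, length=0.1, alpha=1e-5, dt=1.0):
    # Linear map G from a discretized surface heat flux history (at x = 0) to
    # the temperature history at an interior sensor node, built column by
    # column from unit flux pulses with an explicit 1D finite difference
    # scheme (far side insulated, thermal conductivity taken as 1).
    dx = length / (nx - 1)
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for these parameters"
    sensor = 3                                  # a few nodes below the surface
    G = np.zeros((n_steps, n_steps))
    for j in range(n_steps):
        T = np.zeros(nx)
        for k in range(n_steps):
            q = 1.0 if k == j else 0.0          # unit flux pulse at time step j
            Tn = T.copy()
            Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
            Tn[0] = T[0] + 2 * r * (T[1] - T[0]) + 2 * r * dx * q   # flux boundary
            Tn[-1] = T[-1] + 2 * r * (T[-2] - T[-1])                # insulated side
            T = Tn
            G[k, j] = T[sensor]
    return G

def estimate_surface_flux(y_sensor, G, lam=1e-2):
    # Tikhonov-regularized least squares estimate of the flux history from
    # (noisy) interior temperatures: min ||G q - y||^2 + lam ||q||^2.
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ y_sensor)
```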

  • 115.
    Wikstrom, Patrik
    et al.
    Blasiak, Wlodzimierz
    Berntsson, Fredrik
    Linköpings universitet, Tekniska högskolan. Linköpings universitet, Matematiska institutionen, Beräkningsvetenskap.
    Estimation of the transient surface temperature, heat flux and effective heat transfer coefficient of a slab in an industrial reheating furnace by using an inverse method (2007). In: Steel Research International, ISSN 1611-3683, Vol. 78, no. 1, pp. 63-70. Journal article (Refereed)
    Abstract [en]

    In the steel industry it is of great importance to be able to control the surface temperature and heating or cooling rates during heat treatment processes. In this paper, a steel slab is heated up to 1300 degrees C in an industrial reheating furnace and the temperature data are recorded during the reheating process. The transient local surface temperature, heat flux and effective heat transfer coefficient of the steel slab are calculated using a model for inverse heat conduction. The calculated surface temperatures are compared with the temperatures obtained using a model of the heating process in the software STEELTEMP (R) 2D. The results obtained show very good agreement and suggest that the inverse method can be applied to similar high temperature applications with very good accuracy.
