Learning Convolution Operators for Visual Tracking
Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science &amp; Engineering. ORCID iD: 0000-0001-6144-9520
2018 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Visual tracking is one of the fundamental problems in computer vision. Its numerous applications include robotics, autonomous driving, augmented reality and 3D reconstruction. In essence, visual tracking can be described as the problem of estimating the trajectory of a target in a sequence of images. The target can be any image region or object of interest. While humans excel at this task, requiring little effort to perform accurate and robust visual tracking, it has proven difficult to automate. It has therefore remained one of the most active research topics in computer vision.

In its most general form, no prior knowledge about the object of interest or environment is given, except for the initial target location. This general form of tracking is known as generic visual tracking. The unconstrained nature of this problem makes it particularly difficult, yet applicable to a wider range of scenarios. As no prior knowledge is given, the tracker must learn an appearance model of the target on-the-fly. Cast as a machine learning problem, it imposes several major challenges which are addressed in this thesis.

The main purpose of this thesis is the study and advancement of the so-called Discriminative Correlation Filter (DCF) framework, as it has proven to be particularly suitable for the tracking application. By utilizing properties of the Fourier transform, a correlation filter is discriminatively learned by efficiently minimizing a least-squares objective. The resulting filter is then applied to a new image in order to estimate the target location.
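
As a rough sketch of this learning step (a minimal single-channel ridge regression in NumPy, not the thesis's implementation; the patch size, label shape and regularization value below are illustrative):

```python
import numpy as np

def learn_filter(x, y, reg=0.01):
    """Learn a filter f minimizing ||f * x - y||^2 + reg * ||f||^2.

    Circular correlation diagonalizes under the DFT, so the
    least-squares solution is element-wise in the Fourier domain.
    """
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (np.conj(X) * X + reg)

def apply_filter(F, z):
    """Correlate the learned filter with a new image patch z."""
    return np.real(np.fft.ifft2(F * np.fft.fft2(z)))

# Toy example: a random patch with a Gaussian label centred on the target.
rng = np.random.default_rng(0)
patch = rng.standard_normal((32, 32))
rr, cc = np.mgrid[0:32, 0:32]
label = np.exp(-((rr - 10) ** 2 + (cc - 20) ** 2) / (2 * 2.0 ** 2))

F = learn_filter(patch, label)
response = apply_filter(F, patch)   # peak indicates the target location
```

Because both learning and detection reduce to a few FFTs, this closed form is the source of the DCF framework's efficiency.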

This thesis contributes to the advancement of the DCF methodology in several aspects. The main contribution regards the learning of the appearance model: First, the problem of updating the appearance model with new training samples is covered. Efficient update rules and numerical solvers are investigated for this task. Second, the periodic assumption induced by the circular convolution in DCF is countered by proposing a spatial regularization component. Third, an adaptive model of the training set is proposed to alleviate the impact of corrupted or mislabeled training samples. Fourth, a continuous-space formulation of the DCF is introduced, enabling the fusion of multiresolution features and sub-pixel accurate predictions. Finally, the problems of computational complexity and overfitting are addressed by investigating dimensionality reduction techniques.

As a second contribution, different feature representations for tracking are investigated. A particular focus is put on the analysis of color features, which had been largely overlooked in prior tracking research. This thesis also studies the use of deep features in DCF-based tracking. While many vision problems have greatly benefited from the advent of deep learning, it has proven difficult to harness the power of such representations for tracking. In this thesis it is shown that both shallow and deep layers contribute positively. Furthermore, the problem of fusing their complementary properties is investigated.

The final major contribution of this thesis regards the prediction of the target scale. In many applications, it is essential to track the scale, or size, of the target since it is strongly related to the relative distance. A thorough analysis of how to integrate scale estimation into the DCF framework is performed. A one-dimensional scale filter is proposed, enabling efficient and accurate scale estimation.

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2018, p. 71
Series
Linköping Studies in Science and Technology. Dissertations, ISSN 0345-7524 ; 1926
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-147543, DOI: 10.3384/diss.diva-147543, ISBN: 9789176853320 (print), OAI: oai:DiVA.org:liu-147543, DiVA id: diva2:1201230
Public defence
2018-06-11, Ada Lovelace, B-huset, Campus Valla, Linköping, 13:00 (English)
Opponent
Supervisor
Available from: 2018-05-03 Created: 2018-04-25 Last updated: 2025-02-07 Bibliographically approved
List of papers
1. Adaptive Color Attributes for Real-Time Visual Tracking
2014 (English) In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2014, IEEE Computer Society, 2014, pp. 1090-1097. Conference paper, Published paper (Refereed)
Abstract [en]

Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features, when combined with luminance, have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient, and possess a certain amount of photometric invariance while maintaining high discriminative power.

This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.

Place, publisher, year, edition, pages
IEEE Computer Society, 2014
Series
IEEE Conference on Computer Vision and Pattern Recognition. Proceedings, ISSN 1063-6919
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-105857, DOI: 10.1109/CVPR.2014.143, Scopus ID: 2-s2.0-84911362613, ISBN: 978-147995117-8
Conference
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, Ohio, USA, June 24-27, 2014
Note

Publication status: Accepted

Available from: 2014-04-10 Created: 2014-04-10 Last updated: 2023-04-03 Bibliographically approved
2. Coloring Channel Representations for Visual Tracking
2015 (English) In: 19th Scandinavian Conference, SCIA 2015, Copenhagen, Denmark, June 15-17, 2015. Proceedings / [ed] Rasmus R. Paulsen, Kim S. Pedersen, Springer, 2015, Vol. 9127, pp. 117-129. Conference paper, Published paper (Refereed)
Abstract [en]

Visual object tracking is a classical, but still open research problem in computer vision, with many real world applications. The problem is challenging due to several factors, such as illumination variation, occlusions, camera motion and appearance changes. Such problems can be alleviated by constructing robust, discriminative and computationally efficient visual features. Recently, biologically-inspired channel representations (Felsberg, 2006) have been shown to provide promising results in many applications ranging from autonomous driving to visual tracking.

This paper investigates the problem of coloring channel representations for visual tracking. We evaluate two strategies, channel concatenation and channel product, to construct channel coded color representations. The proposed channel coded color representations are generic and can be used beyond tracking.

Experiments are performed on 41 challenging benchmark videos. Our experiments clearly suggest that a careful selection of color features, together with an optimal fusion strategy, significantly outperforms the standard luminance-based channel representation. Finally, we show promising results compared to state-of-the-art tracking methods in the literature.
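
To make the notion of channel coding concrete (a sketch of the standard overlapping cos² channel basis; the centers, width, and the two encoded cues below are illustrative, not taken from the paper):

```python
import numpy as np

def channel_encode(value, centers, width=3.0):
    """Encode a scalar into overlapping cos^2 channel activations.

    Each channel k responds with cos^2(pi * (value - c_k) / width)
    inside a window of width `width` around its center c_k.
    """
    d = value - centers
    active = np.abs(d) < width / 2
    return np.where(active, np.cos(np.pi * d / width) ** 2, 0.0)

centers = np.arange(0, 11)            # channel centers at 0, 1, ..., 10
code = channel_encode(4.3, centers)   # smooth, sparse encoding of 4.3

# Two fusion strategies for a luminance cue and a color cue:
# concatenation keeps the cues separate, the outer (channel) product
# encodes them jointly.
lum = channel_encode(4.3, centers)
col = channel_encode(7.8, centers)
concat = np.concatenate([lum, col])
product = np.outer(lum, col).ravel()
```

For interior values the three active cos² channels always sum to 3/2, a constant-sum property that makes the encoding well behaved under averaging.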

Place, publisher, year, edition, pages
Springer, 2015
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9127
Keywords
Visual tracking, channel coding, color names
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-121003, DOI: 10.1007/978-3-319-19665-7_10, ISBN: 978-3-319-19664-0, ISBN: 978-3-319-19665-7
Conference
Scandinavian Conference on Image Analysis
Available from: 2015-09-02 Created: 2015-09-02 Last updated: 2025-02-07 Bibliographically approved
3. Discriminative Scale Space Tracking
2017 (English) In: IEEE Transactions on Pattern Analysis and Machine Intelligence, ISSN 0162-8828, E-ISSN 1939-3539, Vol. 39, no. 8, pp. 1561-1575. Article in journal (Refereed) Published
Abstract [en]

Accurate scale estimation of a target is a challenging research problem in visual object tracking. Most state-of-the-art methods employ an exhaustive scale search to estimate the target size. The exhaustive search strategy is computationally expensive and struggles in the presence of large scale variations. This paper investigates the problem of accurate and robust scale estimation in a tracking-by-detection framework. We propose a novel scale adaptive tracking approach by learning separate discriminative correlation filters for translation and scale estimation. The explicit scale filter is learned online using the target appearance sampled at a set of different scales. Contrary to standard approaches, our method directly learns the appearance change induced by variations in the target scale. Additionally, we investigate strategies to reduce the computational cost of our approach. Extensive experiments are performed on the OTB and the VOT2014 datasets. Compared to the standard exhaustive scale search, our approach achieves a gain of 2.5 percent in average overlap precision on the OTB dataset. Additionally, our method is computationally efficient, operating at a 50 percent higher frame rate compared to the exhaustive scale search. Our method obtains the top rank in performance by outperforming 19 state-of-the-art trackers on OTB and 37 state-of-the-art trackers on VOT2014.
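
The one-dimensional scale filter can be sketched along these lines (an illustrative MOSSE-style filter over the scale dimension; in the actual tracker each column would hold features extracted from the target resized to one scale level, and the dimensions below are made up):

```python
import numpy as np

def learn_scale_filter(samples, sigma=1.0, reg=0.01):
    """Learn a 1-D correlation filter over the scale dimension.

    samples: (d, S) array, one feature column per scale level.
    Returns the per-dimension numerator and shared denominator of the
    Fourier-domain least-squares solution.
    """
    d, S = samples.shape
    s = np.arange(S)
    y = np.exp(-0.5 * ((s - S // 2) / sigma) ** 2)  # peak at current scale
    Y = np.fft.fft(y)
    Fs = np.fft.fft(samples, axis=1)
    num = np.conj(Fs) * Y
    den = np.sum(np.conj(Fs) * Fs, axis=0).real + reg
    return num, den

def estimate_scale(num, den, samples):
    """Return the index of the best-matching scale level."""
    Zs = np.fft.fft(samples, axis=1)
    response = np.real(np.fft.ifft(np.sum(num * Zs, axis=0) / den))
    return int(response.argmax())

rng = np.random.default_rng(1)
feats = rng.standard_normal((24, 17))       # 24-dim features at 17 scales
num, den = learn_scale_filter(feats)
best = estimate_scale(num, den, feats)      # re-detect on the training sample
```

Since the search is one-dimensional, its cost is a handful of length-S FFTs, which is why the separate scale filter is so cheap compared to an exhaustive translation-plus-scale search.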

Place, publisher, year, edition, pages
IEEE Computer Society, 2017
Keywords
Visual tracking; scale estimation; correlation filters
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-139382, DOI: 10.1109/TPAMI.2016.2609928, ISI: 000404606300006, PubMed ID: 27654137
Note

Funding agencies: Swedish Foundation for Strategic Research; Swedish Research Council; Strategic Vehicle Research and Innovation (FFI); Wallenberg Autonomous Systems Program; National Supercomputer Centre; Nvidia

Available from: 2017-08-07 Created: 2017-08-07 Last updated: 2025-02-07 Bibliographically approved
4. Learning Spatially Regularized Correlation Filters for Visual Tracking
2015 (English) In: Proceedings of the International Conference on Computer Vision (ICCV), 2015, IEEE Computer Society, 2015, pp. 4310-4318. Conference paper, Published paper (Refereed)
Abstract [en]

Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model.

We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.
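
The effect of the spatial regularization can be sketched as follows (a simplified single-channel version solved by plain gradient descent rather than the paper's Gauss-Seidel solver; the penalty shape, step size and sizes are illustrative):

```python
import numpy as np

def srdcf_objective(f, X, Y, w):
    """||x * f - y||^2 (circular correlation, via Parseval) + ||w . f||^2."""
    R = np.fft.fft2(f) * X - Y
    return np.sum(np.abs(R) ** 2) / R.size + np.sum((w * f) ** 2)

def srdcf_learn(x, y, w, lr=0.02, iters=300):
    """Gradient descent on the spatially regularized least squares.

    The weight map w penalizes filter coefficients by location: small
    over the target, large in the background, which suppresses the
    boundary effects of the periodic assumption.
    """
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    f = np.zeros_like(x)
    for _ in range(iters):
        R = np.fft.fft2(f) * X - Y
        # Gradient of half the objective above.
        f -= lr * (np.real(np.fft.ifft2(np.conj(X) * R)) + (w ** 2) * f)
    return f

rng = np.random.default_rng(2)
n = 32
x = rng.standard_normal((n, n)) / n                 # normalized patch
rr, cc = np.mgrid[0:n, 0:n]
y = np.exp(-((rr - n // 2) ** 2 + (cc - n // 2) ** 2) / 8.0)
# Quadratically increasing penalty away from the target centre.
w = 0.1 + 3.0 * ((rr - n // 2) ** 2 + (cc - n // 2) ** 2) / (n // 2) ** 2

f = srdcf_learn(x, y, w)
X, Y = np.fft.fft2(x), np.fft.fft2(y)
obj = srdcf_objective(f, X, Y, w)
obj0 = srdcf_objective(np.zeros_like(f), X, Y, w)   # loss at f = 0
```

The large penalty far from the centre drives background coefficients toward zero, which is what allows the filter to be trained on a much larger (implicitly negative) image region without the periodic boundary corrupting it.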

Place, publisher, year, edition, pages
IEEE Computer Society, 2015
Series
IEEE International Conference on Computer Vision. Proceedings, ISSN 1550-5499
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-121609, DOI: 10.1109/ICCV.2015.490, ISI: 000380414100482, ISBN: 978-1-4673-8390-5
Conference
International Conference on Computer Vision (ICCV), Santiago, Chile, December 13-16, 2015
Available from: 2015-09-28 Created: 2015-09-28 Last updated: 2025-02-07
5. Convolutional Features for Correlation Filter Based Visual Tracking
2015 (English) In: 2015 IEEE International Conference on Computer Vision Workshop (ICCVW), IEEE conference proceedings, 2015, pp. 621-629. Conference paper, Published paper (Refereed)
Abstract [en]

Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they mitigate the need for task-specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, in contrast to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard handcrafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2015
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-128869, DOI: 10.1109/ICCVW.2015.84, ISI: 000380434700075, ISBN: 9781467397117, ISBN: 9781467397100
Conference
15th IEEE International Conference on Computer Vision Workshops, ICCVW 2015, 7-13 December 2015, Santiago, Chile
Available from: 2016-06-02 Created: 2016-06-02 Last updated: 2025-02-07 Bibliographically approved
6. Adaptive Decontamination of the Training Set: A Unified Formulation for Discriminative Visual Tracking
2016 (English) In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Institute of Electrical and Electronics Engineers (IEEE), 2016, pp. 1430-1438. Conference paper, Published paper (Refereed)
Abstract [en]

Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be down-weighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3.8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets.
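
The joint estimation of model and sample quality can be illustrated with a toy alternating scheme (a simplified sketch, not the paper's unified solver: here a weighted ridge regressor alternates with a heuristic residual-based weight update, and all data is synthetic):

```python
import numpy as np

def fit_weighted(Xs, ys, alpha, reg=1e-3):
    """Weighted ridge regression: the appearance-model step."""
    A = Xs.T @ (alpha[:, None] * Xs) + reg * np.eye(Xs.shape[1])
    return np.linalg.solve(A, Xs.T @ (alpha * ys))

def update_weights(Xs, ys, w, prior, eps=1e-3):
    """Quality step: down-weight samples with large residuals."""
    err = (Xs @ w - ys) ** 2
    alpha = prior / (err + eps)
    return alpha / alpha.sum()

rng = np.random.default_rng(3)
n, d = 60, 5
Xs = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
ys = Xs @ w_true
ys[:10] += 5.0 * rng.standard_normal(10)   # corrupt the first 10 labels

prior = np.ones(n) / n                     # illustrative sample prior
alpha = prior.copy()
for _ in range(5):
    w = fit_weighted(Xs, ys, alpha)
    alpha = update_weights(Xs, ys, w, prior)
# Corrupted samples end up with much smaller weights than clean ones.
```

The key property mirrored here is the coupling: a better model exposes the corrupted samples through their residuals, and down-weighting them in turn improves the model on the next pass.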

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2016
Series
IEEE Conference on Computer Vision and Pattern Recognition, E-ISSN 1063-6919 ; 2016
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-137882, DOI: 10.1109/CVPR.2016.159, ISI: 000400012301051, ISBN: 9781467388511, ISBN: 9781467388528
Conference
29th IEEE Conference on Computer Vision and Pattern Recognition, 27-30 June 2016, Las Vegas, NV, USA
Note

Funding agencies: SSF (CUAS); VR (EMC2); VR (ELLIIT); Wallenberg Autonomous Systems Program; NSC; Nvidia

Available from: 2017-06-01 Created: 2017-06-01 Last updated: 2025-02-07 Bibliographically approved
7. Deep motion and appearance cues for visual tracking
2019 (English) In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 124, pp. 74-81. Article in journal (Refereed) Published
Abstract [en]

Generic visual tracking is a challenging computer vision problem, with numerous applications. Most existing approaches rely on appearance information by employing either hand-crafted features or deep RGB features extracted from convolutional neural networks. Despite their success, these approaches struggle in case of ambiguous appearance information, leading to tracking failure. In such cases, we argue that the motion cue provides discriminative and complementary information that can improve tracking performance. Contrary to visual tracking, deep motion features have been successfully applied for action recognition and video classification tasks. Typically, the motion features are learned by training a CNN on optical flow images extracted from large amounts of labeled videos. In this paper, we investigate the impact of deep motion features in a tracking-by-detection framework. We also evaluate the fusion of hand-crafted, deep RGB, and deep motion features and show that they contain complementary information. To the best of our knowledge, we are the first to propose fusing appearance information with deep motion features for visual tracking. Comprehensive experiments clearly demonstrate that our fusion approach with deep motion features outperforms standard methods relying on appearance information alone.

Place, publisher, year, edition, pages
Elsevier, 2019
Keywords
Visual tracking, Deep learning, Optical flow, Discriminative correlation filters
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-148015, DOI: 10.1016/j.patrec.2018.03.009, ISI: 000469427700008, Scopus ID: 2-s2.0-85044328745
Note

Funding agencies: Swedish Foundation for Strategic Research; Swedish Research Council [2016-05543]; Wallenberg Autonomous Systems Program; Swedish National Infrastructure for Computing (SNIC); Nvidia

Available from: 2018-05-24 Created: 2018-05-24 Last updated: 2023-04-03 Bibliographically approved
8. Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking
2016 (English) In: Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V / [ed] Bastian Leibe, Jiri Matas, Nicu Sebe and Max Welling, Cham: Springer, 2016, pp. 472-488. Conference paper, Published paper (Refereed)
Abstract [en]

Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data by including all shifted versions of a training sample. However, the underlying DCF formulation is restricted to single-resolution feature maps, significantly limiting its potential. In this paper, we go beyond the conventional DCF framework and introduce a novel formulation for training continuous convolution filters. We employ an implicit interpolation model to pose the learning problem in the continuous spatial domain. Our proposed formulation enables efficient integration of multi-resolution deep feature maps, leading to superior results on three object tracking benchmarks: OTB-2015 (+5.1% in mean OP), Temple-Color (+4.6% in mean OP), and VOT2015 (20% relative reduction in failure rate). Additionally, our approach is capable of sub-pixel localization, crucial for the task of accurate feature point tracking. We also demonstrate the effectiveness of our learning formulation in extensive feature point tracking experiments.
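
The move to a common continuous domain can be loosely illustrated with Fourier-domain zero-padding, which implicitly performs periodic (sinc) interpolation of a coarse feature map; this is a simplification of the paper's interpolation model, and the map sizes below are illustrative:

```python
import numpy as np

def upsample_fourier(x, size):
    """Interpolate x onto a finer grid by zero-padding its centred
    Fourier spectrum (an implicit periodic sinc interpolation)."""
    H, W = x.shape
    X = np.fft.fftshift(np.fft.fft2(x))
    out = np.zeros(size, dtype=complex)
    r0, c0 = (size[0] - H) // 2, (size[1] - W) // 2
    out[r0:r0 + H, c0:c0 + W] = X
    scale = (size[0] * size[1]) / (H * W)   # preserve amplitudes
    return np.real(np.fft.ifft2(np.fft.ifftshift(out))) * scale

# A band-limited coarse 16x16 feature map and a fine 32x32 map from a
# different resolution level of a feature pyramid.
n = np.arange(16)
coarse = (np.cos(2 * np.pi * 2 * n[:, None] / 16)
          * np.cos(2 * np.pi * 3 * n[None, :] / 16))
rng = np.random.default_rng(4)
fine = rng.standard_normal((32, 32))

# Resample the coarse map into the fine map's domain and fuse them.
up = upsample_fourier(coarse, fine.shape)
fused = up + fine
```

Because the coarse map here is band-limited, the interpolation is exact at its original sample points; representing every feature channel in one common (continuous) domain is what lets multi-resolution maps be combined in a single learning problem.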

Place, publisher, year, edition, pages
Cham: Springer, 2016
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 9909
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-133550, DOI: 10.1007/978-3-319-46454-1_29, ISI: 000389385400029, ISBN: 9783319464534, ISBN: 9783319464541
Conference
14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, October 11-14, 2016
Available from: 2016-12-30 Created: 2016-12-29 Last updated: 2025-02-07 Bibliographically approved
9. ECO: Efficient Convolution Operators for Tracking
2017 (English) In: Proceedings 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Institute of Electrical and Electronics Engineers (IEEE), 2017, pp. 6931-6939. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with a massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model; (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples; (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and Temple-Color. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0% relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015.
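
The factorized convolution operator (i) can be sketched as follows (an illustrative variant where the projection matrix is obtained by PCA over the feature channels; in the actual method it is learned jointly with the filter, and all sizes below are made up):

```python
import numpy as np

def project_features(feats, C):
    """Replace D feature channels by C << D linear combinations.

    feats: (D, H, W). Returns the projected (C, H, W) features and the
    (D, C) projection matrix P, so that filters need only C channels.
    """
    D, H, W = feats.shape
    M = feats.reshape(D, -1)
    M0 = M - M.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(M0, full_matrices=False)
    P = U[:, :C]                       # top-C channel combinations
    return (P.T @ M).reshape(C, H, W), P

def learn_mc_filter(feats, y, reg=0.01):
    """Multi-channel DCF on the compressed feature channels."""
    Y = np.fft.fft2(y)
    Fs = np.fft.fft2(feats, axes=(1, 2))
    num = np.conj(Fs) * Y
    den = np.sum(np.conj(Fs) * Fs, axis=0).real + reg
    return num, den

def detect(num, den, feats):
    Zs = np.fft.fft2(feats, axes=(1, 2))
    return np.real(np.fft.ifft2(np.sum(num * Zs, axis=0) / den))

rng = np.random.default_rng(5)
deep = rng.standard_normal((64, 32, 32))     # 64 "deep" feature channels
rr, cc = np.mgrid[0:32, 0:32]
y = np.exp(-((rr - 12) ** 2 + (cc - 18) ** 2) / 8.0)

compressed, P = project_features(deep, C=8)  # 8x fewer filter parameters
num, den = learn_mc_filter(compressed, y)
response = detect(num, den, compressed)
```

Shrinking the filter from D to C channels is what cuts both the parameter count (over-fitting) and the per-frame cost (speed), while the projection preserves most of the feature energy.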

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2017
Series
IEEE Conference on Computer Vision and Pattern Recognition, ISSN 1063-6919 ; 2017
HSV category
Identifiers
URN: urn:nbn:se:liu:diva-144284, DOI: 10.1109/CVPR.2017.733, ISI: 000418371407004, ISBN: 9781538604571, ISBN: 9781538604588
Conference
30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 21-26 July 2017, Honolulu, HI, USA
Note

Funding agencies: SSF (SymbiCloud); VR (EMC2) [2016-05543]; SNIC; WASP; Visual Sweden; Nvidia

Available from: 2018-01-12 Created: 2018-01-12 Last updated: 2025-02-07 Bibliographically approved

Open Access in DiVA

Learning Convolution Operators for Visual Tracking (5539 kB) 1877 downloads
File information
File: FULLTEXT02.pdf, File size: 5539 kB, Checksum: SHA-512
c03c26c47d3f7133ff279178c859385527b2aacf5cb56c18bf63823cb27861b44ea614116d928e23374940676a2f6b56d2d2b1bb89bf382251b4e959ddb31246
Type: fulltext, Mimetype: application/pdf
Cover (2818 kB) 271 downloads
File information
File: COVER01.pdf, File size: 2818 kB, Checksum: SHA-512
b88d335a49cb143290a9dc1fe7c4853d2154742a540b818d0837cdf2c6a13b3f57c7751f0207a9427391fced94b234d3435bdbb60d9e98a1727e885534751766
Type: cover, Mimetype: application/pdf

Other links

Publisher's full text

Person

Danelljan, Martin

Total: 2138 downloads
The number of downloads is the sum of all downloads of all full texts. It may, for example, include earlier versions that are no longer available.

Total: 10023 hits