liu.se: Search for publications in DiVA
1 - 9 of 9
  • 1.
    Ambuluri, Sreehari
    et al.
    Linköping University, Department of Electrical Engineering. Linköping University, Faculty of Science & Engineering.
    Garrido, Mario
    Linköping University, Department of Electrical Engineering, Electronics System. Linköping University, The Institute of Technology.
    Caffarena, Gabriel
    Boadilla del Monte, Madrid, Spain.
    Ogniewski, Jens
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    Ragnemalm, Ingemar
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    New Radix-2 and Radix-2² Constant Geometry Fast Fourier Transform Algorithms For GPUs, 2013, Conference paper (Refereed)
    Abstract [en]

    This paper presents new radix-2 and radix-2² constant geometry fast Fourier transform (FFT) algorithms for graphics processing units (GPUs). The algorithms combine the use of constant geometry with special scheduling of operations and distribution among the cores. Performance tests on current GPUs show significant improvements compared to the most recent version of NVIDIA’s well-known CUFFT, achieving speedups of up to 5.6x.

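The radix-2 butterfly at the heart of such algorithms can be sketched as follows. This is a plain recursive CPU version for illustration only; it does not reproduce the constant-geometry scheduling or the distribution among GPU cores that the paper contributes:

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])   # FFT of even-indexed samples
    odd = fft_radix2(x[1::2])    # FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw             # butterfly: upper half
        out[k + n // 2] = even[k] - tw    # butterfly: lower half
    return out
```

A constant-geometry formulation performs the same butterflies but keeps an identical memory-access pattern in every stage, which is what makes the scheduling GPU-friendly.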
  • 2.
    Andrei, Alexandru
    et al.
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Eles, Petru Ion
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Jovanovic, Olivera
    University of Dortmund.
    Schmitz, Marcus
    Robert Bosch GmbH, Stuttgart.
    Ogniewski, Jens
    Linköping University, Department of Electrical Engineering. Linköping University, The Institute of Technology.
    Peng, Zebo
    Linköping University, Department of Computer and Information Science, ESLAB - Embedded Systems Laboratory. Linköping University, The Institute of Technology.
    Quasi-Static Voltage Scaling for Energy Minimization with Time Constraints, 2011, In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, ISSN 1063-8210, E-ISSN 1557-9999, Vol. 19, no 1, p. 10-23, Article in journal (Refereed)
    Abstract [en]

    Supply voltage scaling and adaptive body-biasing are important techniques that help to reduce the energy dissipation of embedded systems. This is achieved by dynamically adjusting the voltage and performance settings according to the application's needs. In order to take full advantage of slack that arises from variations in the execution time, it is important to recalculate the voltage (performance) settings during runtime, i.e., online. However, optimal voltage scaling algorithms are computationally expensive, and thus, if used online, significantly reduce the possible energy savings. To overcome the online complexity, we propose a quasi-static voltage scaling scheme with a constant online time complexity of O(1). This allows us to increase the exploitable slack as well as to avoid the energy dissipated due to online recalculation of the voltage settings.

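The quasi-static idea (run the expensive optimization offline for a grid of possible slack values, then reduce the online step to a constant-time table lookup) can be sketched as follows. The toy solver and all names here are illustrative assumptions, not the paper's actual formulation:

```python
def build_table(solver, max_slack_ms, step_ms):
    """Offline: call the expensive optimal voltage-scaling solver once per
    quantized slack value and store the results."""
    return [solver(s * step_ms) for s in range(int(max_slack_ms / step_ms) + 1)]

def lookup_voltage(table, slack_ms, step_ms):
    """Online: a single clamped array index, i.e. O(1) regardless of table size."""
    idx = min(int(slack_ms / step_ms), len(table) - 1)
    return table[idx]

# Toy "solver": more slack allows a lower voltage (and since dynamic energy
# grows roughly with V^2, even small voltage reductions pay off).
toy = build_table(lambda s: max(0.8, 1.2 - 0.01 * s), max_slack_ms=40, step_ms=1)
```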
  • 3.
    Ogniewski, Jens
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    Cubic Spline Interpolation in Real-Time Applications using Three Control Points, 2019, In: Proceedings of the International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) 2019 / [ed] Vaclav Skala, World Society for Computer Graphics, 2019, Vol. 2901, p. 1-10, Conference paper (Refereed)
    Abstract [en]

    Spline interpolation is widely used in many different applications like computer graphics, animations and robotics. Many of these applications are run in real-time with constraints on computational complexity, thus fueling the need for computationally inexpensive, real-time, continuous and loop-free data interpolation techniques. Often Catmull-Rom splines are used, which use four control points: the two points between which to interpolate as well as the point directly before and the one directly after. If interpolating over time, this last point will lie in the future. However, in real-time applications future values may not be known in advance, meaning that Catmull-Rom splines are not applicable. In this paper we introduce another family of interpolation splines (dubbed Three-Point-Splines) which show the same characteristics as Catmull-Rom splines, but which use only three control points, omitting the one “in the future”. Therefore they can generate smooth interpolation curves even in applications which do not have knowledge of future points, without the need for more computationally complex methods. The generated curves are more rigid than Catmull-Rom curves, and because of that the Three-Point-Splines will not generate self-intersections within an interpolated curve segment, a property that has to be introduced to Catmull-Rom splines by careful parameterization. Thus, the Three-Point-Splines allow for greater freedom in parameterization, and can therefore be adapted to the application at hand, e.g. to a requested curvature or limitations on acceleration/deceleration. We also show a method that allows the control points to be changed during an ongoing interpolation, both with Three-Point-Splines and with Catmull-Rom splines.

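For reference, the four-point uniform Catmull-Rom formulation that the abstract contrasts against can be sketched as follows; the Three-Point-Spline formulation itself is defined in the paper and is not reproduced here:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom spline: interpolates between p1 and p2 for t in [0, 1];
    p0 and p3 shape the tangents. Note that p3 is the 'future' control point
    that the paper's Three-Point-Splines manage without."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)
```

The curve passes exactly through p1 at t = 0 and p2 at t = 1, which is the interpolation property both spline families share.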
  • 4.
    Ogniewski, Jens
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    High-Quality Real-Time Depth-Image-Based-Rendering, 2017, In: Proceedings of SIGRAD 2017, August 17-18, 2017, Norrköping, Sweden, Linköping University Electronic Press, 2017, no 143, p. 1-8, article id 001, Conference paper (Refereed)
    Abstract [en]

    With depth sensors becoming more and more common, and applications with varying viewpoints (e.g. virtual reality) becoming more and more popular, there is a growing demand for real-time depth-image-based-rendering algorithms that reach a high quality. Starting from a quality-wise top-performing depth-image-based renderer, we develop a real-time version. While also reaching a high quality, the new OpenGL-based renderer decreases runtime by (at least) two orders of magnitude. This was made possible by discovering similarities between forward-based and mesh-based rendering, which enabled us to remove the common parallelization bottleneck of competing memory accesses, and facilitated by the implementation of accurate yet fast algorithms for the different parts of the rendering pipeline. We evaluated the proposed renderer using a publicly available dataset with ground-truth depth and camera data that contains both rapid camera movements and rotations as well as complex scenes, and is therefore challenging to project accurately.

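The core warp of any depth-image-based renderer (back-project a pixel with the source intrinsics, transform it by the relative pose, re-project) can be sketched as follows. This minimal pinhole version omits the splatting, hole filling and pipeline optimizations that the paper is actually about:

```python
import numpy as np

def project_pixel(u, v, depth, K, R, t):
    """Forward-project one source pixel (u, v) with known depth into a target
    view given intrinsics K and the relative pose (R, t)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project to a 3D ray
    X = depth * ray                                  # 3D point in source frame
    Xt = R @ X + t                                   # into the target camera frame
    x = K @ Xt                                       # re-project with intrinsics
    return x[0] / x[2], x[1] / x[2]                  # target pixel coordinates
```

With an identity pose the warp must map every pixel to itself, which makes a convenient sanity check.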
  • 5.
    Ogniewski, Jens
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    Interpolation Techniques with Applications in Video Coding, 2019, Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Recent years have seen the advent of RGB+D video (color+depth video), which enables new applications like free-viewpoint video, 3D and virtual reality. This is, however, achieved by adding additional data, thus increasing the bitrate. On the other hand, the added geometrical data can be used for more accurate frame prediction, thus decreasing the bitrate. Modern encoders use previously decoded frames to predict other ones, meaning they only need to encode the difference. When geometrical data is available, previous frames can instead be projected to the frame that is currently being predicted, thus reaching a higher accuracy and a higher compression.

    In this thesis, different techniques are described and evaluated that enable such a prediction scheme based on projecting from depth images, so-called depth-image-based rendering (DIBR). A DIBR method is found that maximizes image quality, in terms of minimizing the differences between the projected frame and the ground truth of the frame it was projected to, i.e. the frame that is to be predicted. This was achieved by evaluating combinations of both state-of-the-art methods for DIBR and our own extensions, meant to solve artifacts that were discovered during this work. Furthermore, a real-time version of this DIBR method is derived and, since the depth-maps will be compressed as well, the impact of depth-map compression on the achieved projection quality is evaluated for different compression methods, including novel extensions of existing methods. Finally, spline methods are derived for both geometrical and color interpolation.

    Although all this was done with a focus on video compression, many of the presented methods are useful for other applications as well, like free-viewpoint video or animation.

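The prediction principle described above (the encoder transmits only the difference between the current frame and its prediction, so a better prediction leaves a smaller residual to code) can be illustrated with a toy residual computation; all data here is synthetic:

```python
import numpy as np

def residual_energy(frame, pred):
    """Sum of squared prediction errors; a crude proxy for the bits a residual
    coder would have to spend."""
    diff = frame.astype(np.int64) - pred
    return int(np.sum(diff * diff))

rng = np.random.default_rng(0)
current = rng.integers(0, 256, size=(8, 8)).astype(np.int16)     # frame to code
weak_pred = np.zeros_like(current)                               # "no prediction"
# A good prediction (e.g. a DIBR-projected previous frame) is close to the
# current frame; modeled here as the frame plus small noise.
good_pred = current + rng.integers(-2, 3, size=current.shape, dtype=np.int16)
```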
    List of papers
    1. Cubic Spline Interpolation in Real-Time Applications using Three Control Points
    Open this publication in new window or tab >>Cubic Spline Interpolation in Real-Time Applications using Three Control Points
    2019 (English), In: Proceedings of the International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG) 2019 / [ed] Vaclav Skala, World Society for Computer Graphics, 2019, Vol. 2901, p. 1-10, Conference paper, Published paper (Refereed)
    Abstract [en]

    Spline interpolation is widely used in many different applications like computer graphics, animations and robotics. Many of these applications are run in real-time with constraints on computational complexity, thus fueling the need for computationally inexpensive, real-time, continuous and loop-free data interpolation techniques. Often Catmull-Rom splines are used, which use four control points: the two points between which to interpolate as well as the point directly before and the one directly after. If interpolating over time, this last point will lie in the future. However, in real-time applications future values may not be known in advance, meaning that Catmull-Rom splines are not applicable. In this paper we introduce another family of interpolation splines (dubbed Three-Point-Splines) which show the same characteristics as Catmull-Rom splines, but which use only three control points, omitting the one “in the future”. Therefore they can generate smooth interpolation curves even in applications which do not have knowledge of future points, without the need for more computationally complex methods. The generated curves are more rigid than Catmull-Rom curves, and because of that the Three-Point-Splines will not generate self-intersections within an interpolated curve segment, a property that has to be introduced to Catmull-Rom splines by careful parameterization. Thus, the Three-Point-Splines allow for greater freedom in parameterization, and can therefore be adapted to the application at hand, e.g. to a requested curvature or limitations on acceleration/deceleration. We also show a method that allows the control points to be changed during an ongoing interpolation, both with Three-Point-Splines and with Catmull-Rom splines.

    Place, publisher, year, edition, pages
    World Society for Computer Graphics, 2019
    Series
    Computer Science Research Notes, ISSN 2464-4617, E-ISSN 2464-4625 ; 2901
    National Category
    Computational Mathematics
    Identifiers
    urn:nbn:se:liu:diva-162119 (URN)978-80-86943-37-4 (ISBN)
    Conference
    27th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2019, Plzen, Czech Republic, May 27-30, 2019
    Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2019-11-19
    2. Artifact-Free Color Interpolation
    Open this publication in new window or tab >>Artifact-Free Color Interpolation
    2015 (English), In: Proceedings SCCG 2015: 31st Spring Conference on Computer Graphics, Association for Computing Machinery, 2015, p. 116-119, Conference paper, Published paper (Refereed)
    Abstract [en]

    Color interpolation is still the most used method for image upsampling, since it offers the simplest and therefore fastest algorithms. However, in recent years research has concentrated on other techniques to counter the shortcomings of interpolation techniques (like color artifacts or the fact that interpolation does not take statistics into account), while interpolation itself has ceased to be an active research topic. Still, current interpolation techniques can be improved. In particular, it should be possible to avoid color artifacts by carefully choosing the correct interpolation scheme. In this paper we derive mathematical constraints which need to be fulfilled to reach an artifact-free interpolation, and use these to develop an interpolation method which is essentially a self-configuring cubic spline.

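One classical way to obtain an overshoot-free (and hence artifact-free) cubic interpolant is Fritsch-Carlson-style tangent limiting, sketched below as a stand-in; the paper derives its own constraints, which are not reproduced here:

```python
def monotone_tangents(y):
    """Tangents for a cubic Hermite spline on uniformly spaced data (h = 1),
    limited so the interpolant cannot overshoot the data values."""
    n = len(y)
    d = [y[i + 1] - y[i] for i in range(n - 1)]   # secant slopes
    m = [0.0] * n
    m[0], m[-1] = float(d[0]), float(d[-1])
    for i in range(1, n - 1):
        # zero tangent at local extrema, averaged slope otherwise
        m[i] = 0.0 if d[i - 1] * d[i] <= 0 else (d[i - 1] + d[i]) / 2.0
    for i in range(n - 1):                         # limit tangents to 3x secant
        if d[i] == 0:
            m[i] = m[i + 1] = 0.0
        else:
            if m[i] / d[i] > 3.0:
                m[i] = 3.0 * d[i]
            if m[i + 1] / d[i] > 3.0:
                m[i + 1] = 3.0 * d[i]
    return m

def hermite(y0, y1, m0, m1, t):
    """Cubic Hermite basis evaluation on one unit-length segment."""
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * y0 + h10 * m0 + h01 * y1 + h11 * m1
```

Keeping both tangent/secant ratios inside [0, 3] is a well-known sufficient condition for monotonicity, so each segment stays within the range of its endpoint values.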
    Place, publisher, year, edition, pages
    Association for Computing Machinery, 2015
    Keywords
    interpolation; image upsampling; color interpolation
    National Category
    Discrete Mathematics
    Identifiers
    urn:nbn:se:liu:diva-131252 (URN)10.1145/2788539.2788556 (DOI)000380609300017 ()978-1-4503-3693-2 (ISBN)
    Conference
    31st Spring Conference on Computer Graphics
    Available from: 2016-09-16 Created: 2016-09-12 Last updated: 2019-11-19
    3. Pushing the Limits for View Prediction in Video Coding
    Open this publication in new window or tab >>Pushing the Limits for View Prediction in Video Coding
    2017 (English), In: Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017), Vol. 4, SCITEPRESS, 2017, p. 68-76, Conference paper, Published paper (Refereed)
    Abstract [en]

    More and more devices have depth sensors, making RGB+D video (colour+depth video) increasingly common. RGB+D video allows the use of depth-image-based rendering (DIBR) to render a given scene from different viewpoints, thus making it a useful asset in view prediction for 3D and free-viewpoint video coding. In this paper we evaluate a multitude of algorithms for scattered data interpolation, in order to optimize the performance of DIBR for video coding. This includes novel contributions like a Kriging refinement step, an edge-suppression step to reduce artifacts, and a scale-adaptive kernel. Our evaluation uses the depth extension of the Sintel datasets. Using ground-truth sequences is crucial for such an optimization, as it ensures that all errors and artifacts are caused by the prediction itself rather than by noisy or erroneous data. We also present a comparison with the commonly used mesh-based projection.

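As a minimal example of the scattered-data interpolation family evaluated above, inverse-distance weighting can be sketched as follows; the Kriging refinement and scale-adaptive kernel mentioned in the abstract are not reproduced here:

```python
def idw(samples, qx, qy, power=2.0, eps=1e-12):
    """Inverse-distance weighting: estimate the value at query point (qx, qy)
    as a weighted average of scattered samples, with weights falling off with
    distance. One of the simplest classical scattered-data interpolators."""
    num = den = 0.0
    for (x, y, value) in samples:
        # eps keeps the weight finite when the query hits a sample exactly
        w = 1.0 / (((x - qx) ** 2 + (y - qy) ** 2 + eps) ** (power / 2))
        num += w * value
        den += w
    return num / den
```

Near a sample point the estimate converges to that sample's value, which is the interpolation property shared by the more sophisticated kernels the paper compares.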
    Place, publisher, year, edition, pages
    SCITEPRESS, 2017
    Keywords
    Projection Algorithms; Video Coding; Motion Estimation
    National Category
    Computer Vision and Robotics (Autonomous Systems)
    Identifiers
    urn:nbn:se:liu:diva-151812 (URN)10.5220/0006131500680076 (DOI)000444907000007 ()978-989-758-225-7 (ISBN)
    Conference
    12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP)
    Note

    Funding Agencies|Ericsson Research; Swedish Research Council [2014-5928]

    Available from: 2018-10-04 Created: 2018-10-04 Last updated: 2019-11-19
    4. High-Quality Real-Time Depth-Image-Based-Rendering
    Open this publication in new window or tab >>High-Quality Real-Time Depth-Image-Based-Rendering
    2017 (English), In: Proceedings of SIGRAD 2017, August 17-18, 2017, Norrköping, Sweden, Linköping University Electronic Press, 2017, no 143, p. 1-8, article id 001, Conference paper, Published paper (Refereed)
    Abstract [en]

    With depth sensors becoming more and more common, and applications with varying viewpoints (e.g. virtual reality) becoming more and more popular, there is a growing demand for real-time depth-image-based-rendering algorithms that reach a high quality. Starting from a quality-wise top-performing depth-image-based renderer, we develop a real-time version. While also reaching a high quality, the new OpenGL-based renderer decreases runtime by (at least) two orders of magnitude. This was made possible by discovering similarities between forward-based and mesh-based rendering, which enabled us to remove the common parallelization bottleneck of competing memory accesses, and facilitated by the implementation of accurate yet fast algorithms for the different parts of the rendering pipeline. We evaluated the proposed renderer using a publicly available dataset with ground-truth depth and camera data that contains both rapid camera movements and rotations as well as complex scenes, and is therefore challenging to project accurately.

    Place, publisher, year, edition, pages
    Linköping University Electronic Press, 2017
    Series
    Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 143
    Keywords
    Real-Time Rendering, Depth Image, Splatting
    National Category
    Computer Sciences
    Identifiers
    urn:nbn:se:liu:diva-162126 (URN)978-91-7685-384-9 (ISBN)
    Conference
    SIGRAD 2017, August 17-18, 2017 Norrköping, Sweden
    Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2019-11-19
    5. What is the best depth-map compression for Depth Image Based Rendering?
    Open this publication in new window or tab >>What is the best depth-map compression for Depth Image Based Rendering?
    2017 (English), In: Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part II / [ed] Michael Felsberg, Anders Heyden and Norbert Krüger, Springer, 2017, Vol. 10425, p. 403-415, Conference paper, Published paper (Refereed)
    Abstract [en]

    Many of the latest smartphones and tablets come with integrated depth sensors that make depth-maps freely available, thus enabling new forms of applications like rendering from different viewpoints. However, efficient compression exploiting the characteristics of depth-maps as well as the requirements of these new applications is still an open issue. In this paper, we evaluate different depth-map compression algorithms, with a focus on tree-based methods and view projection as the application.

    The contributions of this paper are the following: 1. extensions of existing geometric compression trees, 2. a comparison of a number of different trees, 3. a comparison of them to a state-of-the-art video coder, 4. an evaluation using ground-truth data that considers both depth-maps and predicted frames with arbitrary camera translation and rotation.

    Despite our best efforts, and contrary to earlier results, current video depth-map compression outperforms tree-based methods in most cases. The reason for this is likely that previous evaluations focused on low-quality, low-resolution depth-maps, while high-resolution depth (as needed in the DIBR setting) had been ignored until now. We also demonstrate that PSNR on depth-maps is not always a good measure of their utility.

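The tree-based compression idea (recursively split a depth-map block until it is nearly flat, then store one value per leaf) can be sketched with a minimal quadtree; real coders, including the extensions evaluated in the paper, additionally entropy-code the tree structure and the leaf values:

```python
import numpy as np

def quadtree(depth, x, y, size, tol):
    """Recursively split a square depth block until its value range is within
    tol; flat regions collapse to a single leaf, so smooth depth-maps with a
    few sharp edges compress well."""
    block = depth[y:y + size, x:x + size]
    if size == 1 or float(block.max() - block.min()) <= tol:
        return ("leaf", float(block.mean()))          # one value per leaf
    h = size // 2
    return ("split", [quadtree(depth, x + dx, y + dy, h, tol)
                      for dy in (0, h) for dx in (0, h)])

def count_leaves(node):
    """Leaf count is a rough proxy for the coded size of the tree."""
    return 1 if node[0] == "leaf" else sum(count_leaves(c) for c in node[1])
```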
    Place, publisher, year, edition, pages
    Springer, 2017
    Series
    Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 10425
    Keywords
    Depth map compression; Quadtree; Triangle tree; 3DVC; View projection
    National Category
    Computer Vision and Robotics (Autonomous Systems) Computer Systems
    Identifiers
    urn:nbn:se:liu:diva-142064 (URN)10.1007/978-3-319-64698-5_34 (DOI)000432084600034 ()2-s2.0-85028463006 (Scopus ID)9783319646978 (ISBN)9783319646985 (ISBN)
    Conference
    17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24
    Funder
    Swedish Research Council, 2014-5928
    Note

    VR Project: Learnable Camera Motion Models, 2014-5928

    Available from: 2017-10-20 Created: 2017-10-20 Last updated: 2019-11-19. Bibliographically approved
  • 6.
    Ogniewski, Jens
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, Faculty of Science & Engineering.
    What is the best depth-map compression for Depth Image Based Rendering?, 2017, In: Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part II / [ed] Michael Felsberg, Anders Heyden and Norbert Krüger, Springer, 2017, Vol. 10425, p. 403-415, Conference paper (Refereed)
    Abstract [en]

    Many of the latest smartphones and tablets come with integrated depth sensors that make depth-maps freely available, thus enabling new forms of applications like rendering from different viewpoints. However, efficient compression exploiting the characteristics of depth-maps as well as the requirements of these new applications is still an open issue. In this paper, we evaluate different depth-map compression algorithms, with a focus on tree-based methods and view projection as the application.

    The contributions of this paper are the following: 1. extensions of existing geometric compression trees, 2. a comparison of a number of different trees, 3. a comparison of them to a state-of-the-art video coder, 4. an evaluation using ground-truth data that considers both depth-maps and predicted frames with arbitrary camera translation and rotation.

    Despite our best efforts, and contrary to earlier results, current video depth-map compression outperforms tree-based methods in most cases. The reason for this is likely that previous evaluations focused on low-quality, low-resolution depth-maps, while high-resolution depth (as needed in the DIBR setting) had been ignored until now. We also demonstrate that PSNR on depth-maps is not always a good measure of their utility.

  • 7.
    Ogniewski, Jens
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Karlsson, Andréas
    Linköping University, Department of Electrical Engineering, Computer Engineering. Linköping University, The Institute of Technology.
    Ragnemalm, Ingemar
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Texture Compression in Memory and Performance-Constrained Embedded Systems, 2011, In: Computer Graphics, Visualization, Computer Vision and Image Processing 2011 / [ed] Yingcai Xiao, 2011, p. 19-26, Conference paper (Refereed)
    Abstract [en]

    More and more embedded systems are gaining multimedia capabilities, including computer graphics. Although this is mainly due to their increasing computational capability, optimization of algorithms and data structures is important as well, since these systems have to fulfill a variety of constraints and cannot be geared solely towards performance. In this paper, the two most popular texture compression methods (DXT1 and PVRTC) are compared in terms of both image quality and decoding performance. For this, both have been ported to the ePUMA platform, which serves as an example of an energy-consumption-optimized embedded system. Furthermore, a new DXT1 encoder has been developed which reaches higher image quality than existing encoders.

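The block structure underlying DXT1 (two endpoint colors plus a 2-bit palette index per pixel in each 4x4 block) can be sketched in a simplified single-channel form. Real DXT1 uses packed RGB565 endpoints and specific rounding rules; encoder quality, which is what a new encoder like the paper's improves, hinges mainly on how the endpoints are chosen:

```python
def encode_block_dxt1_style(block16):
    """Encode 16 scalar pixel values as two endpoints plus 2-bit indices into
    a 4-entry palette: the endpoints and their 1/3 and 2/3 blends.
    Endpoint choice here is the naive min/max; better encoders search harder."""
    c0, c1 = max(block16), min(block16)
    palette = [c0, c1, (2 * c0 + c1) / 3, (c0 + 2 * c1) / 3]
    indices = [min(range(4), key=lambda i: abs(palette[i] - p)) for p in block16]
    return (c0, c1), indices        # 2 endpoints + 16 x 2 bits per block

def decode_block(endpoints, indices):
    """Rebuild the palette from the endpoints and look each pixel up."""
    c0, c1 = endpoints
    palette = [c0, c1, (2 * c0 + c1) / 3, (c0 + 2 * c1) / 3]
    return [palette[i] for i in indices]
```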
  • 8.
    Ogniewski, Jens
    et al.
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Ragnemalm, Ingemar
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Autostereoscopy and Motion Parallax for Mobile Computer Games Using Commercially Available Hardware, 2011, In: International Journal of Computer Information Systems and Industrial Management Applications, ISSN 2150-7988, Vol. 3, p. 480-488, Article in journal (Refereed)
    Abstract [en]

    In this paper we present a solution for the three dimensional representation of mobile computer games which includes both motion parallax and an autostereoscopic display. The system was built on hardware which is available on the consumer market: an iPhone 3G with a Wazabee 3Dee Shell, which is an autostereoscopic extension for the iPhone. The motion sensor of the phone was used for the implementation of the motion parallax effect as well as for a tilt compensation for the autostereoscopic display. This system was evaluated in a limited user study on mobile 3D displays. Despite some obstacles that needed to be overcome and a few remaining shortcomings of the final system, an overall acceptable 3D experience could be reached. That leads to the conclusion that portable systems for the consumer market which include 3D displays are within reach.

  • 9.
    Sandberg, David
    et al.
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Forssén, Per-Erik
    Linköping University, Department of Electrical Engineering, Computer Vision. Linköping University, The Institute of Technology.
    Ogniewski, Jens
    Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, The Institute of Technology.
    Model-Based Video Coding using Colour and Depth Cameras, 2011, In: Digital Image Computing: Techniques and Applications (DICTA 2011), IEEE, 2011, p. 158-163, Conference paper (Other academic)
    Abstract [en]

    In this paper, we present a model-based video coding method that uses input from colour and depth cameras, such as the Microsoft Kinect. The model-based approach uses a 3D representation of the scene, enabling several other applications besides video playback, such as stereoscopic viewing, object insertion for augmented reality and free-viewpoint viewing. The video encoding step uses computer vision to estimate the camera motion. The scene geometry is represented by keyframes, which are encoded as 3D quads using a quadtree, allowing good compression rates. Camera motion in-between keyframes is approximated as linear. The relative camera positions at keyframes and the scene geometry are then compressed and transmitted to the decoder. Our experiments demonstrate that the model-based approach delivers a high level of detail at competitively low bitrates.

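The linear in-between-keyframe camera motion mentioned in the last abstract amounts to simple position interpolation between transmitted keyframe poses; rotations would additionally need spherical interpolation, which is omitted here:

```python
def lerp_pose(p0, p1, t):
    """Linearly interpolate a camera position between two keyframe positions
    for t in [0, 1], matching the assumption that motion between keyframes
    is linear (so only keyframe poses need to be transmitted)."""
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))
```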