Interpolation Techniques with Applications in Video Coding
Linköping University, Department of Electrical Engineering, Information Coding. Linköping University, Faculty of Science & Engineering.
2019 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Recent years have seen the advent of RGB+D video (color+depth video), which enables new applications like free-viewpoint video, 3D and virtual reality. This, however, comes at the cost of additional data, thus increasing the bitrate. On the other hand, the added geometrical data can be used for more accurate frame prediction, thus decreasing the bitrate. Modern encoders use previously decoded frames to predict other ones, meaning that only the difference needs to be encoded. When geometrical data is available, previous frames can instead be projected to the viewpoint of the frame that is currently being predicted, reaching a higher accuracy and thus a higher compression.

In this thesis, different techniques enabling such a prediction scheme based on projection from depth images, so-called depth-image based rendering (DIBR), are described and evaluated. A DIBR method is found that maximizes image quality, in the sense of minimizing the difference between the projected frame and the ground truth of the frame it was projected to, i.e. the frame that is to be predicted. This was achieved by evaluating combinations of state-of-the-art methods for DIBR as well as our own extensions, designed to remove artifacts discovered during this work. Furthermore, a real-time version of this DIBR method is derived and, since the depth-maps will be compressed as well, the impact of depth-map compression on the achieved projection quality is evaluated for different compression methods, including novel extensions of existing methods. Finally, spline methods are derived for both geometrical and color interpolation.

Although all this was done with a focus on video compression, many of the presented methods are useful for other applications as well, like free-viewpoint video or animation.
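
To make the prediction scheme concrete, the following is a minimal NumPy sketch (not code from the thesis) of depth-image based projection: a decoded reference frame is back-projected to 3D using its depth-map and re-projected into the view of the frame to be predicted, with a simple z-buffer resolving occlusions. The function name project_frame, the shared pinhole intrinsics K and the pose matrix T_ref_to_tgt are illustrative assumptions; hole filling and the other refinements evaluated in the thesis are omitted.

    import numpy as np

    def project_frame(ref_rgb, ref_depth, K, T_ref_to_tgt):
        # Forward-project a decoded reference frame into the target view.
        # ref_rgb: (H, W, 3) decoded frame, ref_depth: (H, W) strictly positive depth,
        # K: (3, 3) pinhole intrinsics, T_ref_to_tgt: (4, 4) relative camera pose.
        H, W = ref_depth.shape
        v, u = np.mgrid[0:H, 0:W]
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T     # 3 x N
        pts = (np.linalg.inv(K) @ pix) * ref_depth.reshape(1, -1)             # back-project to 3D
        pts = T_ref_to_tgt @ np.vstack([pts, np.ones((1, pts.shape[1]))])     # move to target camera
        proj = K @ pts[:3]
        z = proj[2]
        uu = np.rint(proj[0] / z).astype(int)
        vv = np.rint(proj[1] / z).astype(int)
        colors = ref_rgb.reshape(-1, ref_rgb.shape[-1])
        pred = np.zeros_like(ref_rgb)
        zbuf = np.full((H, W), np.inf)
        ok = (z > 0) & (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)
        for i in np.flatnonzero(ok):               # z-buffered splatting: nearest sample wins
            if z[i] < zbuf[vv[i], uu[i]]:
                zbuf[vv[i], uu[i]] = z[i]
                pred[vv[i], uu[i]] = colors[i]
        return pred    # an encoder would code only the residual between this prediction and the real frame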

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2019, p. 38
Series
Linköping Studies in Science and Technology. Licentiate Thesis, ISSN 0280-7971 ; 1858
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:liu:diva-162116; ISBN: 9789179299514 (print); OAI: oai:DiVA.org:liu-162116; DiVA, id: diva2:1371394
Presentation
2019-12-09, Ada Lovelace, Campus Valla, Linköping, 13:15 (English)
Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2025-02-07. Bibliographically approved
List of papers
1. Cubic Spline Interpolation in Real-Time Applications using Three Control Points
2019 (English) In: Proceedings of International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision’2019 / [ed] Vaclav Skala, World Society for Computer Graphics, 2019, Vol. 2901, p. 1-10. Conference paper, Published paper (Refereed)
Abstract [en]

Spline interpolation is widely used in many different applications like computer graphics, animation and robotics. Many of these applications run in real time with constraints on computational complexity, fueling the need for computationally inexpensive, real-time, continuous and loop-free data interpolation techniques. Often Catmull-Rom splines are used, which use four control points: the two points between which to interpolate as well as the point directly before and the one directly after. If interpolating over time, this last point will lie in the future. However, in real-time applications future values may not be known in advance, meaning that Catmull-Rom splines are not applicable. In this paper we introduce another family of interpolation splines (dubbed Three-Point-Splines) which show the same characteristics as Catmull-Rom splines, but which use only three control points, omitting the one “in the future”. They can therefore generate smooth interpolation curves even in applications that have no knowledge of future points, without the need for more computationally complex methods. The generated curves are more rigid than Catmull-Rom curves, and because of this the Three-Point-Splines do not generate self-intersections within an interpolated curve segment, a property that has to be introduced to Catmull-Rom splines by careful parameterization. Thus, the Three-Point-Splines allow greater freedom in parameterization and can therefore be adapted to the application at hand, e.g. to a requested curvature or to limitations on acceleration/deceleration. We also show a method that allows the control points to be changed during an ongoing interpolation, both with Three-Point-Splines and with Catmull-Rom splines.
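
For reference, the sketch below (a minimal NumPy illustration, assuming uniform parameterization) shows a standard Catmull-Rom evaluation with its four control points; the Three-Point-Splines introduced in the paper replace this with a three-point formulation whose coefficients are not given in the abstract, so no attempt is made to reproduce them here.

    import numpy as np

    def catmull_rom(p0, p1, p2, p3, t):
        # Uniform Catmull-Rom interpolation between p1 and p2 for t in [0, 1].
        # Requires p3, the control point *after* the segment -- exactly the
        # "future" point that the paper's Three-Point-Splines avoid.
        p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
        return 0.5 * (2 * p1
                      + (-p0 + p2) * t
                      + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                      + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

    # Example: interpolate a 2D point halfway between p1 and p2.
    print(catmull_rom([0, 0], [1, 0], [2, 1], [3, 3], 0.5))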

Place, publisher, year, edition, pages
World Society for Computer Graphics, 2019
Series
Computer Science Research Notes, ISSN 2464-4617, E-ISSN 2464-4625 ; 2901
National Category
Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-162119 (URN); 978-80-86943-37-4 (ISBN)
Conference
27th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2019, Plzen, Czech Republic, May 27-30, 2019
Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2019-11-19
2. Artifact-Free Color Interpolation
2015 (English) In: Proceedings of SCCG 2015: 31st Spring Conference on Computer Graphics, Association for Computing Machinery, 2015, p. 116-119. Conference paper, Published paper (Refereed)
Abstract [en]

Color interpolation is still the most used method for image upsampling, since it offers the simplest and therefore fastest algorithms. In recent years, however, research has concentrated on other techniques to counter the shortcomings of interpolation (like color artifacts, or the fact that interpolation does not take statistics into account), while interpolation itself has ceased to be an active research topic. Still, current interpolation techniques can be improved. In particular, it should be possible to avoid color artifacts by carefully choosing the correct interpolation scheme. In this paper we derive mathematical constraints that need to be fulfilled to reach an artifact-free interpolation, and use them to develop an interpolation method which is essentially a self-configuring cubic spline.
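
The paper's self-configuring spline and its constraints are not reproduced in the abstract. The sketch below instead illustrates the general idea of constraining a cubic interpolant so that it cannot over- or undershoot: a standard monotone cubic Hermite with Fritsch-Butland limited tangents applied to one colour channel. All names are illustrative, and this is not the paper's method.

    import numpy as np

    def monotone_cubic(x, y, xq):
        # Cubic Hermite interpolation with Fritsch-Butland limited tangents.
        # The limiting keeps the curve within the range of neighbouring samples,
        # so an upsampled colour channel never over- or undershoots (no ringing).
        x, y = np.asarray(x, float), np.asarray(y, float)
        h = np.diff(x)
        d = np.diff(y) / h                                    # secant slopes
        m = np.zeros_like(y)
        with np.errstate(divide='ignore', invalid='ignore'):
            hm = 2 * d[:-1] * d[1:] / (d[:-1] + d[1:])        # harmonic mean of slopes
        m[1:-1] = np.where(np.sign(d[:-1]) * np.sign(d[1:]) > 0, hm, 0.0)
        m[0], m[-1] = d[0], d[-1]
        k = np.clip(np.searchsorted(x, xq) - 1, 0, len(h) - 1)
        t = (xq - x[k]) / h[k]
        h00 = 2 * t ** 3 - 3 * t ** 2 + 1                     # Hermite basis functions
        h10 = t ** 3 - 2 * t ** 2 + t
        h01 = -2 * t ** 3 + 3 * t ** 2
        h11 = t ** 3 - t ** 2
        return h00 * y[k] + h10 * h[k] * m[k] + h01 * y[k + 1] + h11 * h[k] * m[k + 1]

    # Upsample one colour channel along a scanline without overshooting the samples.
    xs = np.arange(5.0)
    ys = np.array([10.0, 200.0, 210.0, 40.0, 35.0])
    print(monotone_cubic(xs, ys, np.linspace(0.0, 4.0, 9)))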

Place, publisher, year, edition, pages
Association for Computing Machinery, 2015
Keywords
interpolation; image upsampling; color interpolation
National Category
Discrete Mathematics
Identifiers
urn:nbn:se:liu:diva-131252 (URN); 10.1145/2788539.2788556 (DOI); 000380609300017; 978-1-4503-3693-2 (ISBN)
Conference
31st Spring Conference on Computer Graphics
Available from: 2016-09-16 Created: 2016-09-12 Last updated: 2019-11-19
3. Pushing the Limits for View Prediction in Video Coding
2017 (English) In: Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017), Vol. 4, SciTePress, 2017, p. 68-76. Conference paper, Published paper (Refereed)
Abstract [en]

More and more devices have depth sensors, making RGB+D video (colour+depth video) increasingly common. RGB+D video allows the use of depth image based rendering (DIBR) to render a given scene from different viewpoints, thus making it a useful asset in view prediction for 3D and free-viewpoint video coding. In this paper we evaluate a multitude of algorithms for scattered data interpolation, in order to optimize the performance of DIBR for video coding. This also includes novel contributions like a Kriging refinement step, an edge suppression step to suppress artifacts, and a scale-adaptive kernel. Our evaluation uses the depth extension of the Sintel datasets. Using ground-truth sequences is crucial for such an optimization, as it ensures that all errors and artifacts are caused by the prediction itself rather than noisy or erroneous data. We also present a comparison with the commonly used mesh-based projection.
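
As a rough illustration of the scattered-data interpolation step (not the paper's algorithm; its Kriging refinement, edge suppression and scale-adaptive kernel are not detailed in the abstract), the sketch below resamples irregularly projected colour samples onto the pixel grid with a fixed Gaussian kernel. The names splat_scattered, samples_uv and samples_rgb are hypothetical, and depth-based occlusion handling is omitted.

    import numpy as np

    def splat_scattered(samples_uv, samples_rgb, shape, sigma=0.7, radius=2):
        # Resample scattered, projected colour samples onto a regular pixel grid
        # with a fixed Gaussian kernel (basic scattered-data interpolation).
        H, W = shape
        acc = np.zeros((H, W, 3))
        wsum = np.zeros((H, W))
        for (u, v), c in zip(samples_uv, samples_rgb):
            u0, v0 = int(round(u)), int(round(v))
            for vv in range(max(0, v0 - radius), min(H, v0 + radius + 1)):
                for uu in range(max(0, u0 - radius), min(W, u0 + radius + 1)):
                    w = np.exp(-((uu - u) ** 2 + (vv - v) ** 2) / (2 * sigma ** 2))
                    acc[vv, uu] += w * np.asarray(c, float)
                    wsum[vv, uu] += w
        out = np.zeros_like(acc)
        filled = wsum > 1e-6
        out[filled] = acc[filled] / wsum[filled][:, None]
        return out, filled     # 'filled' marks the pixels the prediction actually covers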

Place, publisher, year, edition, pages
SciTePress, 2017
Keywords
Projection Algorithms; Video Coding; Motion Estimation
National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:liu:diva-151812 (URN); 10.5220/0006131500680076 (DOI); 000444907000007; 978-989-758-225-7 (ISBN)
Conference
12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP)
Note

Funding Agencies: Ericsson Research; Swedish Research Council [2014-5928]

Available from: 2018-10-04 Created: 2018-10-04 Last updated: 2025-02-07
4. High-Quality Real-Time Depth-Image-Based-Rendering
2017 (English) In: Proceedings of SIGRAD 2017, August 17-18, 2017, Norrköping, Sweden, Linköping University Electronic Press, 2017, no. 143, p. 1-8, article id 001. Conference paper, Published paper (Refereed)
Abstract [en]

With depth sensors becoming more and more common, and applications with varying viewpoints (like e.g. virtual reality) becoming more and more popular, there is a growing demand for real-time depth-image-based rendering algorithms that reach a high quality. Starting from a quality-wise top-performing depth-image-based renderer, we develop a real-time version. While reaching a similarly high quality, the new OpenGL-based renderer decreases runtime by at least two orders of magnitude. This was made possible by discovering similarities between forward-based and mesh-based rendering, which enable us to remove the common parallelization bottleneck of competing memory accesses, and was facilitated by the implementation of accurate yet fast algorithms for the different parts of the rendering pipeline. We evaluated the proposed renderer using a publicly available dataset with ground-truth depth and camera data that contains both rapid camera movements and rotations as well as complex scenes, and is therefore challenging to project accurately.
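
One way to exploit the similarity between forward- and mesh-based rendering is to triangulate the depth-map and let a standard rasterizer's depth test resolve visibility. The CPU-side sketch below (hypothetical names, simplified discontinuity handling, and not the paper's OpenGL implementation) shows such a mesh construction under the assumption of pinhole intrinsics K and strictly positive depth.

    import numpy as np

    def depth_map_to_mesh(depth, K, max_edge_jump=0.1):
        # Triangulate a depth map: one vertex per pixel, two triangles per 2x2 block.
        # Rasterizing this mesh lets the GPU depth test resolve visibility, avoiding
        # the contested per-pixel z-buffer writes of parallel forward splatting.
        H, W = depth.shape
        v, u = np.mgrid[0:H, 0:W]
        rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
        verts = (rays * depth.reshape(1, -1)).T          # one 3D vertex per pixel
        idx = lambda r, c: r * W + c
        faces = []
        flat = depth.reshape(-1)
        for r in range(H - 1):
            for c in range(W - 1):
                quad = [idx(r, c), idx(r, c + 1), idx(r + 1, c), idx(r + 1, c + 1)]
                d = flat[quad]
                # Drop faces across depth discontinuities (assumes depth > 0), so that
                # foreground and background are not connected by "rubber sheet" triangles.
                if d.max() - d.min() < max_edge_jump * d.min():
                    faces.append([quad[0], quad[1], quad[2]])
                    faces.append([quad[1], quad[3], quad[2]])
        return verts, np.array(faces)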

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2017
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 143
Keywords
Real-Time Rendering, Depth Image, Splatting
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-162126 (URN); 978-91-7685-384-9 (ISBN)
Conference
SIGRAD 2017, August 17-18, 2017 Norrköping, Sweden
Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2019-11-19
5. What is the best depth-map compression for Depth Image Based Rendering?
2017 (English) In: Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part II / [ed] Michael Felsberg, Anders Heyden and Norbert Krüger, Springer, 2017, Vol. 10425, p. 403-415. Conference paper, Published paper (Refereed)
Abstract [en]

Many of the latest smartphones and tablets come with integrated depth sensors that make depth-maps freely available, thus enabling new forms of applications like rendering from different viewpoints. However, efficient compression exploiting the characteristics of depth-maps as well as the requirements of these new applications is still an open issue. In this paper, we evaluate different depth-map compression algorithms, with a focus on tree-based methods and view projection as the application.

The contributions of this paper are the following: 1. extensions of existing geometric compression trees, 2. a comparison of a number of different trees, 3. a comparison of them to a state-of-the-art video coder, 4. an evaluation using ground-truth data that considers both depth-maps and predicted frames with arbitrary camera translation and rotation.

Despite our best efforts, and contrary to earlier results, compressing depth-maps with a current video coder outperforms tree-based methods in most cases. The likely reason is that previous evaluations focused on low-quality, low-resolution depth-maps, whereas high-resolution depth (as needed in the DIBR setting) had been ignored until now. We also demonstrate that PSNR on depth-maps is not always a good measure of their utility.
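
As a toy illustration of the tree-based methods compared in the paper (not one of its extended trees), the sketch below encodes a depth block as a quadtree whose leaves each store a single mean depth, splitting wherever a block is not approximately flat. Names and the flatness criterion are assumptions for illustration only.

    import numpy as np

    def quadtree_encode(block, tol, x=0, y=0):
        # Recursively split a square, power-of-two depth block until every leaf is
        # nearly flat; each leaf is stored as (x, y, size, mean_depth).
        size = block.shape[0]
        if size == 1 or block.max() - block.min() <= tol:
            return [(x, y, size, float(block.mean()))]
        h = size // 2
        return (quadtree_encode(block[:h, :h], tol, x, y)
                + quadtree_encode(block[:h, h:], tol, x + h, y)
                + quadtree_encode(block[h:, :h], tol, x, y + h)
                + quadtree_encode(block[h:, h:], tol, x + h, y + h))

    def quadtree_decode(leaves, size):
        depth = np.empty((size, size))
        for x, y, s, mean in leaves:
            depth[y:y + s, x:x + s] = mean
        return depth

    # Toy example: an 8x8 depth block with a sharp diagonal edge.
    d = np.where(np.arange(8)[:, None] + np.arange(8)[None, :] < 8, 1.0, 3.0)
    leaves = quadtree_encode(d, tol=0.01)
    print(len(leaves), "leaves, max error:", np.abs(quadtree_decode(leaves, 8) - d).max())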

Place, publisher, year, edition, pages
Springer, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 10425
Keywords
Depth map compression; Quadtree; Triangle tree; 3DVC; View projection
National Category
Computer graphics and computer vision; Computer Systems
Identifiers
urn:nbn:se:liu:diva-142064 (URN); 10.1007/978-3-319-64698-5_34 (DOI); 000432084600034; 2-s2.0-85028463006 (Scopus ID); 9783319646978 (ISBN); 9783319646985 (ISBN)
Conference
17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24
Funder
Swedish Research Council, 2014-5928
Note

VR Project: Learnable Camera Motion Models, 2014-5928

Available from: 2017-10-20 Created: 2017-10-20 Last updated: 2025-02-01. Bibliographically approved

Open Access in DiVA

No full text in DiVA

Authority records

Ogniewski, Jens
