Ogniewski, Jens
Publications (9 of 9)
Ogniewski, J. (2019). Cubic Spline Interpolation in Real-Time Applications using Three Control Points. In: Vaclav Skala (Ed.), Proceedings of International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision’2019. Paper presented at the 27th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision WSCG 2019, Plzen, Czech Republic, May 27-30, 2019 (pp. 1-10). World Society for Computer Graphics, 2901
Cubic Spline Interpolation in Real-Time Applications using Three Control Points
2019 (English). In: Proceedings of International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision’2019 / [ed] Vaclav Skala, World Society for Computer Graphics, 2019, Vol. 2901, p. 1-10. Conference paper, Published paper (Refereed)
Abstract [en]

Spline interpolation is widely used in many different applications like computer graphics, animation and robotics. Many of these applications run in real-time with constraints on computational complexity, thus fueling the need for computationally inexpensive, real-time, continuous and loop-free data interpolation techniques. Often Catmull-Rom splines are used, which use four control points: the two points between which to interpolate as well as the point directly before and the one directly after. When interpolating over time, this last point will lie in the future. However, in real-time applications future values may not be known in advance, meaning that Catmull-Rom splines are not applicable. In this paper we introduce another family of interpolation splines (dubbed Three-Point-Splines) which show the same characteristics as Catmull-Rom, but which use only three control points, omitting the one “in the future”. Therefore they can generate smooth interpolation curves even in applications which have no knowledge of future points, without the need for more computationally complex methods. The generated curves are more rigid than Catmull-Rom, and because of that the Three-Point-Splines will not generate self-intersections within an interpolated curve segment, a property that has to be introduced to Catmull-Rom by careful parameterization. Thus, the Three-Point-Splines allow for greater freedom in parameterization, and can therefore be adapted to the application at hand, e.g. to a requested curvature or to limitations on acceleration/deceleration. We also show a method that allows the control points to be changed during an ongoing interpolation, both with Three-Point-Splines as well as with Catmull-Rom splines.
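
The Three-Point-Spline basis itself is given in the paper rather than in this abstract; for orientation, here is a minimal sketch of the uniform Catmull-Rom segment the abstract compares against (function and point names are illustrative, and numpy is assumed):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom segment between p1 and p2, t in [0, 1].

    Note the dependence on p3, the control point 'in the future' --
    exactly the input the Three-Point-Splines are designed to omit.
    """
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t)

# Example: interpolate halfway between p1 and p2.
pts = [np.array(p, dtype=float) for p in [(0, 0), (1, 1), (2, 0), (3, 1)]]
print(catmull_rom(*pts, 0.5))
```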

Place, publisher, year, edition, pages
World Society for Computer Graphics, 2019
Series
Computer Science Research Notes, ISSN 2464-4617, E-ISSN 2464-4625 ; 2901
National Category
Computational Mathematics
Identifiers
urn:nbn:se:liu:diva-162119 (URN) 978-80-86943-37-4 (ISBN)
Conference
27th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision WSCG 2019, Plzen, Czech Republic, May 27-30, 2019
Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2019-11-19
Ogniewski, J. (2019). Interpolation Techniques with Applications in Video Coding. (Licentiate dissertation). Linköping: Linköping University Electronic Press
Interpolation Techniques with Applications in Video Coding
2019 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Recent years have seen the advent of RGB+D video (color+depth video), which enables new applications like free-viewpoint video, 3D and virtual reality. This is however achieved by adding additional data, thus increasing the bitrate. On the other hand, the added geometrical data can be used for more accurate frame prediction, thus decreasing bitrate. Modern encoders use previously decoded frames to predict other ones, meaning they only need to encode the difference. When geometrical data is available, previous frames can instead be projected to the frame that is currently predicted, thus reaching a higher accuracy and a higher compression.

In this thesis, different techniques are described and evaluated that enable such a prediction scheme based on projection from depth-images, so-called depth-image based rendering (DIBR). A DIBR method is found that maximizes image quality, in the sense of minimizing the difference between the projected frame and the ground truth of the frame it was projected to, i.e. the frame that is to be predicted. This was achieved by evaluating combinations of both state-of-the-art methods for DIBR and our own extensions, designed to remove artifacts that were discovered during this work. Furthermore, a real-time version of this DIBR method is derived and, since the depth-maps will be compressed as well, the impact of depth-map compression on the achieved projection quality is evaluated for different compression methods, including novel extensions of existing methods. Finally, spline methods are derived for both geometrical and color interpolation.

Although all this was done with a focus on video compression, many of the presented methods are useful for other applications as well, like free-viewpoint video or animation.
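
As a rough illustration of the projection-based prediction the abstract describes, the following is a minimal DIBR forward-warp sketch, assuming a pinhole camera with intrinsics K and a relative pose (R, t); it is not the thesis' optimized method, and hole filling and splatting are omitted:

```python
import numpy as np

def forward_warp(src_rgb, src_depth, K, R, t):
    """Project a source frame into a target view using its depth-map.

    K: 3x3 pinhole intrinsics; (R, t): pose of the target camera
    relative to the source. A z-buffer resolves occlusions;
    disocclusion holes are left unfilled in this sketch.
    """
    h, w = src_depth.shape
    out = np.zeros_like(src_rgb)
    zbuf = np.full((h, w), np.inf)
    Kinv = np.linalg.inv(K)
    for v in range(h):
        for u in range(w):
            z = src_depth[v, u]
            if z <= 0:
                continue
            # back-project the pixel to a 3D point in the source camera frame
            X = z * (Kinv @ np.array([u, v, 1.0]))
            # transform into the target camera frame and reproject
            Xc = R @ X + t
            if Xc[2] <= 0:
                continue
            p = K @ Xc
            ut, vt = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
            if 0 <= ut < w and 0 <= vt < h and Xc[2] < zbuf[vt, ut]:
                zbuf[vt, ut] = Xc[2]
                out[vt, ut] = src_rgb[v, u]
    return out
```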

Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2019. p. 38
Series
Linköping Studies in Science and Technology. Licentiate Thesis, ISSN 0280-7971 ; 1858
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:liu:diva-162116 (URN) 9789179299514 (ISBN)
Presentation
2019-12-09, Ada Lovelace, Campus Valla, Linköping, 13:15 (English)
Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2019-12-10. Bibliographically approved
Ogniewski, J. (2017). High-Quality Real-Time Depth-Image-Based-Rendering. In: Proceedings of SIGRAD 2017, August 17-18, 2017, Norrköping, Sweden. Paper presented at SIGRAD 2017, August 17-18, 2017, Norrköping, Sweden (pp. 1-8). Linköping University Electronic Press (143), Article ID 001.
High-Quality Real-Time Depth-Image-Based-Rendering
2017 (English). In: Proceedings of SIGRAD 2017, August 17-18, 2017, Norrköping, Sweden, Linköping University Electronic Press, 2017, no 143, p. 1-8, article id 001. Conference paper, Published paper (Refereed)
Abstract [en]

With depth sensors becoming more and more common, and applications with varying viewpoints (e.g. virtual reality) becoming more and more popular, there is a growing demand for real-time depth-image-based-rendering algorithms that reach a high quality. Starting from a depth-image-based renderer that is among the best performing in terms of quality, we develop a real-time version. While also reaching a high quality, the new OpenGL-based renderer decreases runtime by at least two orders of magnitude. This was made possible by discovering similarities between forward-based and mesh-based rendering, which enabled us to remove the common parallelization bottleneck of competing memory accesses, and was facilitated by the implementation of accurate yet fast algorithms for the different parts of the rendering pipeline. We evaluated the proposed renderer using a publicly available dataset with ground-truth depth and camera data that contains both rapid camera movements and rotations as well as complex scenes, and is therefore challenging to project accurately.
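
The similarity to mesh-based rendering mentioned above can be illustrated by triangulating the depth-map into a regular grid that a rasterizer (e.g. OpenGL) can draw. The sketch below is a plain illustration of that idea, not the paper's renderer; intrinsics K and the vertex-per-pixel layout are assumptions:

```python
import numpy as np

def depth_to_mesh(depth, K):
    """Build a regular triangle mesh from a depth-map.

    One vertex per pixel (back-projected through the inverse
    intrinsics K), two triangles per 2x2 pixel cell; a GPU pipeline
    can then rasterize the warp instead of splatting single points.
    """
    h, w = depth.shape
    Kinv = np.linalg.inv(K)
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).reshape(-1, 3)
    verts = (pix @ Kinv.T) * depth.reshape(-1, 1)  # one 3D vertex per pixel
    idx = []
    for v in range(h - 1):
        for u in range(w - 1):
            i = v * w + u
            idx.append((i, i + 1, i + w))          # upper-left triangle
            idx.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return verts, np.array(idx, dtype=np.uint32)
```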

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2017
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 143
Keywords
Real-Time Rendering, Depth Image, Splatting
National Category
Computer Sciences
Identifiers
urn:nbn:se:liu:diva-162126 (URN) 978-91-7685-384-9 (ISBN)
Conference
SIGRAD 2017, August 17-18, 2017 Norrköping, Sweden
Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2019-11-19
Ogniewski, J. & Forssén, P.-E. (2017). What is the best depth-map compression for Depth Image Based Rendering? In: Michael Felsberg, Anders Heyden and Norbert Krüger (Eds.), Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part II. Paper presented at 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24 (pp. 403-415). Springer, 10425
What is the best depth-map compression for Depth Image Based Rendering?
2017 (English). In: Computer Analysis of Images and Patterns: 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part II / [ed] Michael Felsberg, Anders Heyden and Norbert Krüger, Springer, 2017, Vol. 10425, p. 403-415. Conference paper, Published paper (Refereed)
Abstract [en]

Many of the latest smart phones and tablets come with integrated depth sensors that make depth-maps freely available, thus enabling new forms of applications like rendering from different viewpoints. However, efficient compression exploiting the characteristics of depth-maps as well as the requirements of these new applications is still an open issue. In this paper, we evaluate different depth-map compression algorithms, with a focus on tree-based methods and view projection as the application.

The contributions of this paper are the following: (1) extensions of existing geometric compression trees; (2) a comparison of a number of different trees; (3) a comparison of these to a state-of-the-art video coder; (4) an evaluation using ground-truth data that considers both depth-maps and predicted frames with arbitrary camera translation and rotation.

Despite our best efforts, and contrary to earlier results, current video depth-map compression outperforms tree-based methods in most cases. The reason for this is likely that previous evaluations focused on low-quality, low-resolution depth maps, while high-resolution depth (as needed in the DIBR setting) has been ignored up until now. We also demonstrate that PSNR on depth-maps is not always a good measure of their utility.
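
As a rough illustration of the tree-based methods evaluated, here is a minimal quadtree codec for square, power-of-two depth blocks; it shows only the generic split-until-flat idea, not the paper's extensions, and omits entropy coding entirely:

```python
import numpy as np

def quadtree_encode(block, tol):
    """Recursively split a square depth block until it is nearly flat.

    Returns either a leaf value (the block mean) or a 4-tuple of
    children (NW, NE, SW, SE). 'tol' trades rate for distortion;
    real coders add entropy coding and smarter leaf models.
    """
    if block.max() - block.min() <= tol or block.shape[0] == 1:
        return float(block.mean())
    h = block.shape[0] // 2
    return (quadtree_encode(block[:h, :h], tol),
            quadtree_encode(block[:h, h:], tol),
            quadtree_encode(block[h:, :h], tol),
            quadtree_encode(block[h:, h:], tol))

def quadtree_decode(node, size):
    """Expand the nested structure back into a size x size block."""
    if not isinstance(node, tuple):
        return np.full((size, size), node)
    h = size // 2
    top = np.hstack([quadtree_decode(node[0], h), quadtree_decode(node[1], h)])
    bot = np.hstack([quadtree_decode(node[2], h), quadtree_decode(node[3], h)])
    return np.vstack([top, bot])
```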

Place, publisher, year, edition, pages
Springer, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 10425
Keywords
Depth map compression; Quadtree; Triangle tree; 3DVC; View projection
National Category
Computer Vision and Robotics (Autonomous Systems) Computer Systems
Identifiers
urn:nbn:se:liu:diva-142064 (URN) 10.1007/978-3-319-64698-5_34 (DOI) 000432084600034 () 2-s2.0-85028463006 (Scopus ID) 9783319646978 (ISBN) 9783319646985 (ISBN)
Conference
17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24
Funder
Swedish Research Council, 2014-5928
Note

VR Project: Learnable Camera Motion Models, 2014-5928

Available from: 2017-10-20 Created: 2017-10-20 Last updated: 2019-11-19. Bibliographically approved
Ambuluri, S., Garrido, M., Caffarena, G., Ogniewski, J. & Ragnemalm, I. (2013). New Radix-2 and Radix-2² Constant Geometry Fast Fourier Transform Algorithms For GPUs. Paper presented at IADIS Computer Graphics, Visualization, Computer Vision and Image Processing (pp. 59-66).
New Radix-2 and Radix-2² Constant Geometry Fast Fourier Transform Algorithms For GPUs
2013 (English). Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents new radix-2 and radix-2² constant geometry fast Fourier transform (FFT) algorithms for graphics processing units (GPUs). The algorithms combine the use of constant geometry with special scheduling of operations and distribution among the cores. Performance tests on current GPUs show significant improvements compared to the most recent version of NVIDIA’s well-known CUFFT, achieving speedups of up to 5.6x.
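
For orientation, a scalar reference sketch of a radix-2 constant-geometry (Pease-style) DIF FFT is shown below; the defining property is that every stage uses the identical butterfly wiring (read j and j+N/2, write 2j and 2j+1), which is what keeps the memory access pattern fixed across stages. This is a plain CPU sketch, not the paper's GPU implementation:

```python
import numpy as np

def cg_fft(x):
    """Radix-2 constant-geometry DIF FFT, scalar reference version.

    Each stage reads positions j and j + N/2 and writes positions
    2j and 2j + 1 -- the same wiring in every stage.
    """
    x = np.asarray(x, dtype=complex).copy()
    n = x.size
    m = n.bit_length() - 1
    assert 1 << m == n, "length must be a power of two"
    y = np.empty_like(x)
    for s in range(m):
        for j in range(n // 2):
            w = np.exp(-2j * np.pi * ((j >> s) << s) / n)
            y[2 * j] = x[j] + x[j + n // 2]
            y[2 * j + 1] = (x[j] - x[j + n // 2]) * w
        x, y = y, x                      # ping-pong buffers
    rev = [int(format(i, f"0{m}b")[::-1], 2) for i in range(n)]
    return x[rev]                        # undo bit-reversed output order

# Sanity check against numpy's FFT:
sig = np.random.rand(16) + 1j * np.random.rand(16)
assert np.allclose(cg_fft(sig), np.fft.fft(sig))
```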

Keywords
Fast Fourier transform (FFT), graphics processing unit (GPU), constant geometry, radix, CUDA, real-time.
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:liu:diva-127975 (URN)
Conference
IADIS Computer Graphics, Visualization, Computer Vision and Image Processing
Available from: 2016-05-13 Created: 2016-05-13 Last updated: 2019-06-28. Bibliographically approved
Ogniewski, J. & Ragnemalm, I. (2011). Autostereoscopy and Motion Parallax for Mobile Computer Games Using Commercially Available Hardware. International Journal of Computer Information Systems and Industrial Management Applications, 3, 480-488
Autostereoscopy and Motion Parallax for Mobile Computer Games Using Commercially Available Hardware
2011 (English). In: International Journal of Computer Information Systems and Industrial Management Applications, ISSN 2150-7988, Vol. 3, p. 480-488. Article in journal (Refereed) Published
Abstract [en]

In this paper we present a solution for the three-dimensional representation of mobile computer games which includes both motion parallax and an autostereoscopic display. The system was built on hardware available on the consumer market: an iPhone 3G with a Wazabee 3Dee Shell, an autostereoscopic extension for the iPhone. The motion sensor of the phone was used to implement the motion parallax effect as well as a tilt compensation for the autostereoscopic display. The system was evaluated in a limited user study on mobile 3D displays. Despite some obstacles that needed to be overcome and a few remaining shortcomings of the final system, an overall acceptable 3D experience could be achieved. This leads to the conclusion that portable consumer-market systems which include 3D displays are within reach.
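
A minimal sketch of the motion-parallax idea described above: device tilt reported by the motion sensor is mapped to a small lateral offset of the virtual camera before rendering. The mapping, the gain parameter and the function names are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def parallax_eye_offset(tilt_x, tilt_y, gain=0.05):
    """Map device tilt (radians, from the motion sensor) to a small
    lateral offset of the virtual camera; re-rendering from the offset
    eye position produces the motion-parallax effect."""
    return np.array([gain * np.sin(tilt_y),   # left/right shift
                     gain * np.sin(tilt_x),   # up/down shift
                     0.0])

def eye_with_parallax(eye, tilt_x, tilt_y):
    """Shifted eye position for a standard look-at camera setup."""
    return np.asarray(eye, dtype=float) + parallax_eye_offset(tilt_x, tilt_y)
```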

Place, publisher, year, edition, pages
MIR Labs, 2011
Keywords
mobile games, autostereoscopy, motion parallax
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:liu:diva-70297 (URN)
Available from: 2011-09-01 Created: 2011-09-01 Last updated: 2018-01-12
Sandberg, D., Forssén, P.-E. & Ogniewski, J. (2011). Model-Based Video Coding using Colour and Depth Cameras. In: Digital Image Computing: Techniques and Applications (DICTA11). Paper presented at 2011 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2011; Noosa, QLD; Australia (pp. 158-163). IEEE
Model-Based Video Coding using Colour and Depth Cameras
2011 (English). In: Digital Image Computing: Techniques and Applications (DICTA11), IEEE, 2011, p. 158-163. Conference paper, Published paper (Other academic)
Abstract [en]

In this paper, we present a model-based video coding method that uses input from colour and depth cameras, such as the Microsoft Kinect. The model-based approach uses a 3D representation of the scene, enabling several other applications besides video playback, such as stereoscopic viewing, object insertion for augmented reality and free-viewpoint viewing. The video encoding step uses computer vision to estimate the camera motion. The scene geometry is represented by keyframes, which are encoded as 3D quads using a quadtree, allowing good compression rates. Camera motion in-between keyframes is approximated as linear. The relative camera positions at keyframes and the scene geometry are then compressed and transmitted to the decoder. Our experiments demonstrate that the model-based approach delivers a high level of detail at competitively low bitrates.
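
The linear camera motion between keyframes can be illustrated by interpolating the decoded keyframe poses, e.g. linearly for position and spherically (slerp) for a quaternion orientation. This is a generic sketch of that standard technique, not the paper's exact decoder:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    d = np.dot(q0, q1)
    if d < 0.0:            # take the shorter arc
        q1, d = -q1, -d
    if d > 0.9995:         # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(d)
    return (np.sin((1 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_pose(pos0, quat0, pos1, quat1, t):
    """Linear camera motion between two keyframe poses, t in [0, 1]."""
    pos = (1 - t) * np.asarray(pos0, float) + t * np.asarray(pos1, float)
    return pos, slerp(np.asarray(quat0, float), np.asarray(quat1, float), t)
```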

Place, publisher, year, edition, pages
IEEE, 2011
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-77062 (URN) 10.1109/DICTA.2011.33 (DOI) 978-1-4577-2006-2 (ISBN)
Conference
2011 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2011; Noosa, QLD; Australia
Available from: 2012-05-07 Created: 2012-05-03 Last updated: 2015-12-10
Andrei, A., Eles, P. I., Jovanovic, O., Schmitz, M., Ogniewski, J. & Peng, Z. (2011). Quasi-Static Voltage Scaling for Energy Minimization with Time Constraints. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 19(1), 10-23
Quasi-Static Voltage Scaling for Energy Minimization with Time Constraints
2011 (English). In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, ISSN 1063-8210, E-ISSN 1557-9999, Vol. 19, no 1, p. 10-23. Article in journal (Refereed) Published
Abstract [en]

Supply voltage scaling and adaptive body-biasing are important techniques that help to reduce the energy dissipation of embedded systems. This is achieved by dynamically adjusting the voltage and performance settings according to the application's needs. In order to take full advantage of slack that arises from variations in the execution time, it is important to recalculate the voltage (performance) settings during runtime, i.e., online. However, optimal voltage scaling algorithms are computationally expensive, and thus, if used online, significantly diminish the possible energy savings. To overcome the online complexity, we propose a quasi-static voltage scaling scheme with a constant online time complexity of O(1). This makes it possible to increase the exploitable slack as well as to avoid the energy dissipated by online recalculation of the voltage settings.
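
The core idea, as the abstract states it, is to move the expensive optimization offline and reduce the online step to a constant-time table lookup. A minimal sketch under illustrative assumptions (uniform quantization of the observed task start time, made-up voltage/frequency pairs; the class and its layout are hypothetical, not the paper's data structure):

```python
class QuasiStaticScaler:
    """O(1)-online voltage scaling via a precomputed lookup table.

    An expensive offline phase -- not shown here -- solves the voltage
    selection problem once per quantized task start time and stores the
    resulting (voltage, frequency) settings. Online, selecting a
    setting is a single constant-time table access.
    """

    def __init__(self, t0, step, settings):
        self.t0 = t0                # start of the table's time grid
        self.step = step            # grid resolution (seconds)
        self.settings = settings    # (voltage V, frequency MHz) entries

    def lookup(self, actual_start):
        """O(1) selection of the precomputed setting for the observed
        task start time (earlier start = more slack = lower setting)."""
        i = int((actual_start - self.t0) / self.step)
        return self.settings[min(max(i, 0), len(self.settings) - 1)]

# Illustrative, made-up table: more slack allows a lower voltage/frequency.
scaler = QuasiStaticScaler(0.0, 1.0,
                           [(0.9, 200), (1.0, 300), (1.1, 400), (1.2, 500)])
print(scaler.lookup(1.4))   # -> (1.0, 300)
```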

Place, publisher, year, edition, pages
IEEE, 2011
Keywords
Clocks, Complexity theory, Energy minimization, Optimization, Program processors, Runtime, Table lookup, Time frequency analysis, online voltage scaling, quasi-static voltage scaling (QSVS), real-time systems, voltage scaling
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-59628 (URN) 10.1109/TVLSI.2009.2030199 (DOI) 000285844200002 ()
Note
©2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. Alexandru Andrei, Petru Ion Eles, Olivera Jovanovic, Marcus Schmitz, Jens Ogniewski and Zebo Peng, Quasi-Static Voltage Scaling for Energy Minimization with Time Constraints, 2010, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, (19), 1, 10-23. http://dx.doi.org/10.1109/TVLSI.2009.2030199

Available from: 2010-09-22 Created: 2010-09-22 Last updated: 2017-12-12
Ogniewski, J., Karlsson, A. & Ragnemalm, I. (2011). Texture Compression in Memory and Performance-Constrained Embedded Systems. In: Yingcai Xiao (Ed.), Computer Graphics, Visualization, Computer Vision and Image Processing 2011. Paper presented at Computer Graphics, Visualization, Computer Vision and Image Processing 2011 (pp. 19-26).
Texture Compression in Memory and Performance-Constrained Embedded Systems
2011 (English). In: Computer Graphics, Visualization, Computer Vision and Image Processing 2011 / [ed] Yingcai Xiao, 2011, p. 19-26. Conference paper, Published paper (Refereed)
Abstract [en]

More and more embedded systems are gaining multimedia capabilities, including computer graphics. Although this is mainly due to their increasing computational capability, optimizations of algorithms and data structures are important as well, since these systems have to fulfill a variety of constraints and cannot be geared solely towards performance. In this paper, the two most popular texture compression methods (DXT1 and PVRTC) are compared with respect to both image quality and decoding performance. For this, both have been ported to the ePUMA platform, which is used as an example of an embedded system optimized for energy consumption. Furthermore, a new DXT1 encoder has been developed which reaches higher image quality than existing encoders.
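
For context, a minimal DXT1-style block encoder is sketched below; the format stores two RGB565 endpoint colors plus sixteen 2-bit palette indices per 4x4 block (64 bits in total). The naive min/max-by-luminance endpoint choice here is exactly the kind of step that better encoders, such as the one proposed in the paper, improve on:

```python
import numpy as np

def to565(c):
    """Quantize an 8-bit RGB triple to the packed RGB565 used by DXT1."""
    return (int(c[0]) >> 3 << 11) | (int(c[1]) >> 2 << 5) | (int(c[2]) >> 3)

def from565(p):
    """Expand packed RGB565 back to an approximate 8-bit RGB triple."""
    return np.array([(p >> 11) << 3, ((p >> 5) & 0x3F) << 2, (p & 0x1F) << 3],
                    dtype=float)

def encode_dxt1_block(block):
    """Encode one 4x4 RGB block (shape (4, 4, 3), uint8) as a DXT1 block.

    Endpoint choice (extremes by luminance) is deliberately naive; a
    production encoder searches endpoints to minimize error.
    """
    px = block.reshape(-1, 3).astype(float)
    lum = px @ np.array([0.299, 0.587, 0.114])
    c0, c1 = to565(px[lum.argmax()]), to565(px[lum.argmin()])
    if c0 < c1:                     # c0 > c1 selects the 4-colour mode
        c0, c1 = c1, c0
    e0, e1 = from565(c0), from565(c1)
    palette = [e0, e1, (2 * e0 + e1) / 3, (e0 + 2 * e1) / 3]
    indices = 0
    for i, p in enumerate(px):      # nearest palette entry, 2 bits per pixel
        best = min(range(4), key=lambda k: np.sum((p - palette[k]) ** 2))
        indices |= best << (2 * i)
    return c0, c1, indices          # 16 + 16 + 32 = 64 bits per block
```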

Keywords
Embedded systems, texture compression
National Category
Engineering and Technology
Identifiers
urn:nbn:se:liu:diva-70245 (URN) 978-972-8939-48-9 (ISBN)
Conference
Computer Graphics, Visualization, Computer Vision and Image Processing 2011
Projects
ePUMA
Available from: 2011-08-29 Created: 2011-08-29 Last updated: 2011-09-05