Master Thesis (2017)
Cross-Compiling Shading Languages
Building on my open source project XShaderCompiler, I wrote my master thesis about shader cross-compilation.
Shading languages are the major class of programming languages for modern mainstream Graphics Processing Units (GPUs). Programs in these languages are called “shaders”, as they were originally used to describe shading characteristics in computer graphics applications. Making use of GPU-accelerated shaders requires a sophisticated rendering Application Programming Interface (API); the rendering APIs available at present are OpenGL, Direct3D, Vulkan, and Metal. While Direct3D and Metal are supported only on a limited set of platforms, OpenGL and Vulkan are for the most part platform independent. On the one hand, Direct3D is the leading rendering API for many real-time graphics applications, especially in the video game industry. On the other hand, OpenGL and Vulkan are the prevalent rendering APIs on mobile devices, especially on Android, which holds the largest market share.
Each rendering API has its own shading language. These languages are very similar to one another but vary enough to make it difficult for developers to write a single shader that can be used across multiple APIs. However, since the enormous rise of mobile devices, many graphics systems rely on being platform independent, so several rendering technologies must be provided as back ends. The naive approach is to write every shader multiple times, i.e. once for each shading language, which is error-prone, highly redundant, and difficult to maintain.
This thesis investigates different approaches to automatically transform shaders from one high-level language into another, so-called “cross-compilation” (sometimes also referred to as “trans-compilation”). High-level to high-level translation is reviewed, as are algorithms based on an Intermediate Representation (IR) such as the Standard Portable Intermediate Representation (SPIR-V). We focus on the two most prevalent shading languages, the OpenGL Shading Language (GLSL) and the DirectX High Level Shading Language (HLSL), while the Metal Shading Language (MSL) is only briefly examined. The benefits and failings of state-of-the-art approaches are clearly separated, and a novel algorithm for generic shader cross-compilation is presented.
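To illustrate how similar yet incompatible the dialects are, consider a minimal GLSL vertex shader (the uniform name `wvpMatrix` is just an illustrative example, not from the thesis); the comments note the HLSL counterparts that a cross-compiler has to map:

```glsl
#version 450

// GLSL vertex shader: transforms a vertex position and passes a color through.
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;

layout(location = 0) out vec3 vColor;

uniform mat4 wvpMatrix; // world-view-projection transform (example name)

void main()
{
    // HLSL expresses the same stage quite differently:
    //   built-in gl_Position  -> an output tagged with the SV_Position semantic
    //   vec3/vec4/mat4        -> float3/float4/float4x4
    //   uniform               -> a member of a cbuffer block
    //   free function main()  -> a named entry point returning a struct
    gl_Position = wvpMatrix * vec4(position, 1.0);
    vColor = color;
}
```

Each of these mappings is mechanical in isolation, but together they require a full parse of the source shader rather than simple text substitution.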
Bachelor Thesis (2015)
Screen Space Cone Tracing for Glossy Reflections
Based on the eponymous poster, I wrote my bachelor thesis on the subject of local reflections for global illumination.
Indirect lighting (also Global Illumination (GI)) is an important part of photo-realistic imagery and has become a widely used technique in real-time graphics applications such as Computer Aided Design (CAD), Augmented Reality (AR), and video games. Path tracing can already achieve photorealism by shooting thousands or millions of rays into a 3D scene for every pixel, but this results in computational overhead that exceeds real-time budgets. However, with modern programmable shader pipelines, a fusion of ray-casting algorithms and rasterization is possible: methods similar to testing rays against geometry can be performed on the GPU within a fragment (or pixel) shader. Nevertheless, many implementations of real-time GI still trace perfect specular reflections only.
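The fragment-shader ray test mentioned above is typically realized as a march through the depth buffer. A minimal sketch, with illustrative names and step sizes (the sampler `depthTex`, the step count, and the step length are assumptions, not values from the thesis):

```glsl
#version 450

// Screen-space ray marching sketch: advance a ray through (uv, depth) space
// and report a hit once the ray falls behind the rasterized depth buffer.
uniform sampler2D depthTex; // scene depth from a previous render pass

bool rayMarch(vec3 origin, vec3 dir, out vec2 hitUV)
{
    vec3 p = origin;                 // current sample: xy = uv, z = depth
    for (int i = 0; i < 64; ++i)     // fixed step budget bounds the cost
    {
        p += dir * 0.01;             // advance one step along the ray
        float sceneDepth = texture(depthTex, p.xy).r;
        if (p.z >= sceneDepth)       // ray went behind the surface: hit
        {
            hitUV = p.xy;
            return true;
        }
    }
    return false;                    // no intersection within the budget
}
```

A production implementation would add refinement near the hit and handle rays leaving the screen, but the loop above captures the rasterization-friendly core of the idea.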
In this bachelor thesis the advantages and disadvantages of different reflection methods are examined and a combination of several of them is presented, which circumvents rendering artifacts and provides a stable, temporally coherent image enhancement. The benefits and failings of this new method are clearly separated as well. Moreover, the developed algorithm can be implemented as a pure post-process, which can easily be integrated into an existing rendering pipeline. The core idea of this thesis was presented as a poster at SIGGRAPH 2014 [Hermanns and Franke, 2014].
ACM SIGGRAPH Poster (2014)
Screen Space Cone Tracing for Glossy Reflection
Together with my former colleague Tobias Alexander Franke, I published my first poster at ACM SIGGRAPH ’14.
A typical modern engine has a post-processing pipeline which can be used to augment the final image of a previous render pass with several effects. These usually include depth-of-field, crepuscular rays, tone mapping, or morphological anti-aliasing. Such effects can easily be added to any existing renderer, since they usually rely only on information readily available in screen space. Recently, global illumination algorithms have been mapped to post-processing effects, such as the wide selection of Screen Space Ambient Occlusion methods. An insight of [Ritschel et al. 2009] is that screen-space algorithms can sample more information than just occlusion: in addition to visibility, Screen Space Directional Occlusion samples neighboring pixels to gather indirect bounces. Soler et al. [Soler et al. 2010] use mipmapped buffers to sample diffuse far-field indirect illumination and importance-sample specular cones for glossy reflections, but do not consider using mipmaps to access prefiltered bounces for specular cones. We present Screen Space Cone Tracing (SSCT), a method to simulate glossy and specular reflections. Instead of regular screen-space ray tracing, we adopt the concept of cone tracing on hierarchical, prefiltered buffers to reduce integration costs.
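The core idea of tracing a cone over a prefiltered buffer can be sketched as follows; this is an illustrative fragment, not the poster's actual implementation, and the sampler name, step count, and step length are assumptions. As the cone widens with distance, the sketch reads correspondingly coarser mip levels of the prefiltered light buffer, so one texture fetch stands in for many thin rays:

```glsl
#version 450

// Cone-tracing sketch over a mipmapped (prefiltered) color buffer.
uniform sampler2D colorMips; // light buffer with a full mip chain

vec3 traceCone(vec2 startUV, vec2 dirUV, float coneAngle)
{
    vec3 accum = vec3(0.0);
    float dist = 0.0;
    for (int i = 0; i < 8; ++i)
    {
        dist += 0.05;                              // march along the reflection
        float radius = dist * tan(coneAngle);      // cone radius at this distance
        // Pick the mip level whose footprint matches the cone radius in texels.
        float mip = max(0.0, log2(radius * textureSize(colorMips, 0).x));
        accum += textureLod(colorMips, startUV + dirUV * dist, mip).rgb;
    }
    return accum / 8.0;                            // averaged prefiltered bounces
}
```

Wider cone angles (rougher surfaces) resolve to higher mip levels, which is exactly why the prefiltered hierarchy reduces integration costs compared to shooting many individual screen-space rays.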