I've been reading about reconstructing a fragment's world-space position from a depth buffer, but I was considering instead storing the position directly in a high-precision, three-channel position buffer. Would that be faster than unpacking the position from a depth buffer? What is the cost of reconstructing position from depth?
This question is essentially unanswerable, for two reasons:
- There are several ways of "reconstructing position from depth", each with different performance characteristics.
- It is very hardware-dependent.
The last point is important. You're essentially comparing the performance of a texture fetch from a GL_RGBA16F buffer (at a minimum) against the performance of a GL_DEPTH24_STENCIL8 fetch followed by some ALU computations. Basically, you're asking whether the cost of fetching an additional 32 bits per fragment (the difference between the 24x8 fetch and the RGBA16F fetch) is equivalent to the cost of those ALU computations.

That's going to change with various things. The speed of memory fetches, texture cache sizes, and so forth will all have an effect on texture fetch performance. And the speed of the ALU computations depends on how many can be in flight at once (i.e., the number of shading units), as well as clock speeds and so forth.
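To make the comparison concrete, here is a minimal GLSL sketch of the two paths being weighed against each other. The uniform names (gPosition, gDepth, invProjection) and the particular reconstruction method are assumptions for illustration, not the only way to do it:

```glsl
#version 330 core
in vec2 uv;                   // full-screen pass texture coordinate

uniform sampler2D gPosition;  // assumed GL_RGBA16F attachment storing view-space position
uniform sampler2D gDepth;     // assumed GL_DEPTH24_STENCIL8 attachment
uniform mat4 invProjection;   // inverse of the camera's projection matrix

// Path A: position stored explicitly. One 64-bit-per-texel fetch, no ALU work.
vec3 positionFromBuffer() {
    return texture(gPosition, uv).xyz;
}

// Path B: depth only. One 32-bit-per-texel fetch, then a matrix multiply
// and a divide to undo the projection (assumes the default glDepthRange).
vec3 positionFromDepth() {
    float depth = texture(gDepth, uv).r;                  // [0,1] window depth
    vec4 ndc    = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0); // back to NDC
    vec4 view   = invProjection * ndc;                    // unproject
    return view.xyz / view.w;                             // view-space position
}
```

Both functions return view-space position here; taking Path B all the way to world space would add one more matrix multiply to the ALU side of the ledger.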
In short, there are far too many variables here to know an answer a priori.
That being said, consider history.
In the earliest days of shaders, back in the GeForce 3 days, people needed to re-normalize the interpolated normal passed from the vertex shader. They did this by looking it up in a normalization cubemap, not by doing the math directly on the normal. Why? Because the texture fetch was faster.
Today, there's pretty much no common programmable GPU hardware, in the desktop or mobile spaces, where a cubemap texture fetch is faster than a dot product, a reciprocal square root, and a vector multiply. In the long run, computational performance has outstripped memory access performance.
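For reference, that arithmetic is just what GLSL's built-in normalize() amounts to; a sketch of the expansion:

```glsl
// The three ALU operations that replaced the old normalization cubemap:
vec3 renormalize(vec3 n) {
    float lenSq = dot(n, n);          // dot product
    float rlen  = inversesqrt(lenSq); // reciprocal square root
    return n * rlen;                  // vector multiply
}
```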
So I'd suggest siding with history: find a quick means of reconstructing the position from depth in your shader.
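As a starting point, here is one common "quick means": view-ray reconstruction, sketched as a complete fragment shader. It assumes you draw a full-screen quad whose vertex shader emits a view-space ray pre-scaled so that viewRay.z == -1.0, plus near/far uniforms; all of these names are illustrative, not prescribed:

```glsl
#version 330 core
in vec2 uv;        // full-screen quad texture coordinate
in vec3 viewRay;   // view-space ray, pre-scaled so viewRay.z == -1.0
out vec4 fragColor;

uniform sampler2D gDepth;  // depth attachment bound as a texture (assumed)
uniform float near;        // camera near-plane distance (assumed uniform)
uniform float far;         // camera far-plane distance (assumed uniform)

// Undo the hyperbolic depth mapping, assuming the default glDepthRange
// and a standard perspective projection; returns positive view distance.
float linearViewDepth(float d) {
    float ndcZ = d * 2.0 - 1.0;
    return (2.0 * near * far) / (far + near - ndcZ * (far - near));
}

void main() {
    float depth   = texture(gDepth, uv).r;
    vec3  viewPos = viewRay * linearViewDepth(depth); // viewPos.z < 0 in GL
    fragColor     = vec4(viewPos, 1.0); // placeholder: feed viewPos to lighting
}
```

The attraction of this variant is that the per-fragment cost is one fetch, a handful of multiply-adds, and one division, with the matrix work hoisted out to the vertex shader.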