Compute Shader Corrupting Vertex Buffer


I'm writing a tutorial on computing tangents and bitangents in a WGPU (Vulkan-style GLSL) compute shader. I'm creating the vertex buffer on the CPU from a .obj I made in Blender.

Here's the code for the compute shader.

#version 450

#define VERTICES_PER_TRIANGLE 3
layout(local_size_x = VERTICES_PER_TRIANGLE) in;

// Should match the struct in model.rs
struct ModelVertex {
    vec3 position;
    vec2 tex_coords;
    vec3 normal;
    vec3 tangent;
    vec3 bitangent;
};

layout(std140, set=0, binding=0) buffer SrcVertexBuffer {
    ModelVertex srcVertices[];
};
layout(std140, set=0, binding=1) buffer DstVertexBuffer {
    ModelVertex dstVertices[];
};
layout(std140, set=0, binding=2) buffer IndexBuffer {
    uint Indices[];
};

void main() {
    uint index = gl_GlobalInvocationID.x;

    // Grab the indices for the triangle
    uint i0 = Indices[index];
    uint i1 = Indices[index + 1];
    uint i2 = Indices[index + 2];
    
    // Grab the vertices for the triangle
    ModelVertex v0 = srcVertices[i0];
    ModelVertex v1 = srcVertices[i1];
    ModelVertex v2 = srcVertices[i2];

    // Grab the position and uv components of the vertices
    vec3 pos0 = v0.position;
    vec3 pos1 = v1.position;
    vec3 pos2 = v2.position;
    vec2 uv0 = v0.tex_coords;
    vec2 uv1 = v1.tex_coords;
    vec2 uv2 = v2.tex_coords;

    // Calculate the edges of the triangle
    vec3 delta_pos1 = pos1 - pos0;
    vec3 delta_pos2 = pos2 - pos0;

    // This will give us a direction to calculate the
    // tangent and bitangent
    vec2 delta_uv1 = uv1 - uv0;
    vec2 delta_uv2 = uv2 - uv0;

    // Solving the following system of equations will
    // give us the tangent and bitangent.
    //     delta_pos1 = delta_uv1.x * T + delta_uv1.y * B
    //     delta_pos2 = delta_uv2.x * T + delta_uv2.y * B
    // Luckily, the place I found this equation provided
    // the solution!
    float r = 1.0 / (delta_uv1.x * delta_uv2.y - delta_uv1.y * delta_uv2.x);
    vec3 tangent = (delta_pos1 * delta_uv2.y - delta_pos2 * delta_uv1.y) * r;
    vec3 bitangent = (delta_pos2 * delta_uv1.x - delta_pos1 * delta_uv2.x) * r;

    // We'll use the same tangent/bitangent for each vertex in the triangle
    dstVertices[i0].tangent = tangent;
    dstVertices[i1].tangent = tangent;
    dstVertices[i2].tangent = tangent;
    dstVertices[i0].bitangent = bitangent;
    dstVertices[i1].bitangent = bitangent;
    dstVertices[i2].bitangent = bitangent;
}
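For reference, here's the same per-triangle math on the CPU, as a minimal Rust sketch (hand-rolled helpers on `[f32; 3]`, not part of the tutorial code). With an axis-aligned triangle whose UVs match its positions, it yields the expected unit tangent and bitangent:

```rust
// Per-triangle tangent/bitangent, mirroring the shader math above.
fn sub3(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [a[0] - b[0], a[1] - b[1], a[2] - b[2]]
}
fn scale3(v: [f32; 3], s: f32) -> [f32; 3] {
    [v[0] * s, v[1] * s, v[2] * s]
}

/// Returns (tangent, bitangent) for one triangle.
fn tangent_bitangent(pos: [[f32; 3]; 3], uv: [[f32; 2]; 3]) -> ([f32; 3], [f32; 3]) {
    let delta_pos1 = sub3(pos[1], pos[0]);
    let delta_pos2 = sub3(pos[2], pos[0]);
    let delta_uv1 = [uv[1][0] - uv[0][0], uv[1][1] - uv[0][1]];
    let delta_uv2 = [uv[2][0] - uv[0][0], uv[2][1] - uv[0][1]];

    // Same solution as the shader's system of equations.
    let r = 1.0 / (delta_uv1[0] * delta_uv2[1] - delta_uv1[1] * delta_uv2[0]);
    let tangent = scale3(
        sub3(scale3(delta_pos1, delta_uv2[1]), scale3(delta_pos2, delta_uv1[1])),
        r,
    );
    let bitangent = scale3(
        sub3(scale3(delta_pos2, delta_uv1[0]), scale3(delta_pos1, delta_uv2[0])),
        r,
    );
    (tangent, bitangent)
}

fn main() {
    // A triangle in the XY plane whose UVs match its XY positions:
    // the tangent should come out as +X and the bitangent as +Y.
    let pos = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]];
    let uv = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]];
    let (t, b) = tangent_bitangent(pos, uv);
    assert_eq!(t, [1.0, 0.0, 0.0]);
    assert_eq!(b, [0.0, 1.0, 0.0]);
}
```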

This leads to an image like the following.

(screenshot: corrupted cubes)

The problem occurs in the last six lines.

dstVertices[i0].tangent = tangent;
dstVertices[i1].tangent = tangent;
dstVertices[i2].tangent = tangent;
dstVertices[i0].bitangent = bitangent;
dstVertices[i1].bitangent = bitangent;
dstVertices[i2].bitangent = bitangent;

If I delete these lines, the output is fine (albeit the lighting is all wrong, since the tangent and bitangent are zero vectors).

(screenshot)

Why is modifying the tangent and bitangent messing with the position of the vertices?

Here's the rest of the code for context. https://github.com/sotrh/learn-wgpu/tree/compute/code/intermediate/tutorial14-compute

EDIT:

Here's the code where I'm calling the compute shader.

let src_vertex_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
    label: Some(&format!("{:?} Vertex Buffer", m.name)),
    contents: bytemuck::cast_slice(&vertices),
    // UPDATED!
    usage: wgpu::BufferUsage::STORAGE,
});
let dst_vertex_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
    label: Some(&format!("{:?} Vertex Buffer", m.name)),
    contents: bytemuck::cast_slice(&vertices),
    // UPDATED!
    usage: wgpu::BufferUsage::VERTEX | wgpu::BufferUsage::STORAGE,
});
let index_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
    label: Some(&format!("{:?} Index Buffer", m.name)),
    contents: bytemuck::cast_slice(&m.mesh.indices),
    // UPDATED!
    usage: wgpu::BufferUsage::INDEX | wgpu::BufferUsage::STORAGE,
});

let binding = BitangentComputeBinding {
    dst_vertex_buffer,
    src_vertex_buffer,
    index_buffer,
    num_elements: m.mesh.indices.len() as u32,
};

// Calculate the tangents and bitangents
let calc_bind_group = self.binder.create_bind_group(
    &binding, 
    device, 
    Some("Mesh BindGroup")
);
let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
    label: Some("Tangent and Bitangent Calc"),
});
{
    let mut pass = encoder.begin_compute_pass();
    pass.set_pipeline(&self.pipeline);
    pass.set_bind_group(0, &calc_bind_group, &[]);
    pass.dispatch(binding.num_elements as u32 / 3, 1, 1);
}
queue.submit(std::iter::once(encoder.finish()));
device.poll(wgpu::Maintain::Wait);

The shader is supposed to loop through all the triangles in the mesh and compute the tangent and bitangent using the position and UV coordinates of that triangle's vertices. I'm guessing that vertices shared between multiple triangles are getting written to at the same time, causing this memory corruption.

I don't think it's a problem with shaders elsewhere, as I'm using the same model for the light, and the vertex shader responsible for that doesn't use the tangent and bitangent at all.

#version 450

layout(location=0) in vec3 a_position;

layout(location=0) out vec3 v_color;

layout(set=0, binding=0)
uniform Uniforms {
    vec3 u_view_position;
    mat4 u_view_proj;
};

layout(set=1, binding=0)
uniform Light {
    vec3 u_position;
    vec3 u_color;
};

// Let's keep our light smaller than our other objects
float scale = 0.25;

void main() {
    vec3 v_position = a_position * scale + u_position;
    gl_Position = u_view_proj * vec4(v_position, 1);

    v_color = u_color;
}

Looking at the vertex data in RenderDoc shows that the position data is getting messed up.

(RenderDoc screenshot)

Also, here's what the cubes look like if I set the tangent and bitangent to vec3(0, 1, 0).

(screenshot)

My only guess is that storage buffers have a byte-alignment rule that I'm unaware of. I know that's the case for uniform buffers, but I'm using storage buffers for my instancing code, and that doesn't seem to have any issues.

Answer (accepted):

Turns out Vulkan-style GLSL aligns a struct to the alignment of its largest member, and under both std140 and std430 a vec3 gets the same 16-byte alignment as a vec4.

https://github.com/KhronosGroup/glslang/issues/264

In my case the largest member is a vec3. The vec2 tex_coords throws the offsets off further, causing the shader to pull data from the wrong parts of the vertex buffer.

The fix was to change the struct in model_load.comp to specify the individual components instead.

struct ModelVertex {
    float x; float y; float z;
    float uv; float uw;
    float nx; float ny; float nz;
    float tx; float ty; float tz;
    float bx; float by; float bz;
};

Now the base alignment is a float (4 bytes), and the shader reads the vertex buffer data properly.
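To make the mismatch concrete, here's a small Rust sketch (not from the tutorial; the GPU-side offsets are worked out by hand from the std140/std430 rules, where a vec2 aligns to 8 bytes and a vec3 to 16) comparing the layout the GPU expects for the original struct with the tightly packed CPU data:

```rust
// CPU side: five tightly packed fields, 14 f32s = 56 bytes per vertex.
#[repr(C)]
struct ModelVertex {
    position: [f32; 3],
    tex_coords: [f32; 2],
    normal: [f32; 3],
    tangent: [f32; 3],
    bitangent: [f32; 3],
}

fn main() {
    assert_eq!(std::mem::size_of::<ModelVertex>(), 56);

    // Offsets the GPU uses for the original GLSL struct under
    // std140/std430: vec2 aligns to 8 bytes, vec3 to 16.
    let gpu_offsets = [
        0,  // position   (vec3)
        16, // tex_coords (vec2; 12 rounded up to a multiple of 8)
        32, // normal     (vec3; 24 rounded up to a multiple of 16)
        48, // tangent    (vec3)
        64, // bitangent  (vec3)
    ];

    // Offsets of the tightly packed CPU data.
    let cpu_offsets = [0, 12, 20, 32, 44];

    // Everything after `position` lands somewhere else entirely,
    // which is exactly the garbage that shows up in RenderDoc.
    assert_ne!(&gpu_offsets[1..], &cpu_offsets[1..]);
}
```

Splitting the struct into individual floats works because a lone float has a 4-byte alignment under both layouts, so the GPU's offsets collapse back to the tightly packed ones.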

I'm aware there's a packed layout, but shaderc won't let me use it, for reasons beyond me. Honestly, I find this quite annoying and cumbersome, but it works.

There's still a flaw in the result: there's some banding on the edge faces of the cube. My guess is that it's due to a single vertex being shared by multiple triangles, but that's another problem I'll have to look into later.

(screenshot: corruption is gone)