I am cross-compiling a binary for Windows and only recently started using an SSBO (Shader Storage Buffer Object) to accumulate a value on the GPU side, between frames, in order to avoid a bottleneck that is out of the scope of this question. My Windows binary was working as intended on native Windows until I started using an SSBO. When I started using an SSBO, I made sure to update the OpenGL window hints and the #version directive properly, as shown below.

GLSL:

#version 430 core
#ifdef GL_ES
precision mediump float;
#endif

C++:

  glfwWindowHint(GLFW_SAMPLES, 4);
  // Set the GLFW window hints for the OpenGL context
  glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
  glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
  glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
  glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);

Just for context, I am rendering a raymarched scene in which the user can control the lateral movement of the camera with the keyboard. Previously, I accumulated this camera movement on the CPU side; I have tested that code on native Windows and it works. However, as I added more features, I hit one that would either require passing information from the shader back to the CPU to compute the camera movement there, causing a small bottleneck, or require using an SSBO and computing all camera movement on the GPU with that information. I opted for the second choice.

I was able to make this work on Linux with a Linux binary, then compiled it for Windows, tested it under Wine, and it still worked. Then I booted into native Windows on the same hardware and tested it there. The SSBO was not working as intended. Most of the time, the camera would not respond to keyboard input at all. Some of the time, the camera would snap to a far-off point after holding a button down for a while. I updated my graphics drivers on native Windows, to no avail.

I have not changed any of my input code except for removing the part that accumulates the movement on the CPU and sends it as a vec3 uniform, replacing it with an ivec3 uniform carrying a (non-normalized; I'll get to that later) directional movement vector. I also added code that initializes an SSBO to a vec3 of zeroes and binds it to binding point 0. Below is the code for that, even though I am sure it is correct.

Initialization: (note that glGetError() returns 0 after all calls on native Windows, Linux, and Wine on Linux)

 // Create the SSBO and allocate memory for it.
  glGenBuffers(1, &ssboID);
  glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssboID);
  std::cout << glGetError() << std::endl;
  // Allocate space for a vec3.
  glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(float) * 3, NULL, GL_DYNAMIC_COPY);
  std::cout << glGetError() << std::endl;
  //Initialize the buffer with some data.
  glm::vec3 initialMovement = glm::vec3(0.0f, 0.0f, 0.0f);  // Starting with zero movement.
  glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(float) * 3, glm::value_ptr(initialMovement));
  std::cout << glGetError() << std::endl;
  // Unbind the buffer.
  glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
  std::cout << glGetError() << std::endl;

At the top of my while loop for rendering:

  movedir = ivec3(-glfwGetKey(window, GLFW_KEY_A) + glfwGetKey(window, GLFW_KEY_D), -glfwGetKey(window, GLFW_KEY_C) + glfwGetKey(window, GLFW_KEY_SPACE), -glfwGetKey(window, GLFW_KEY_W) + glfwGetKey(window, GLFW_KEY_S));
  // Bind the SSBO to the shader storage block binding point.
  glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssboID);

  // Clear the screen
  glClear(GL_COLOR_BUFFER_BIT);
  // Use our shader
  glUseProgram(programID);
  glUniform3i(glGetUniformLocation(programID, "u_move_dir"), movedir.x, movedir.y, movedir.z);
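For clarity, the per-axis key arithmetic above boils down to the following (a standalone sketch with an illustrative helper name, not part of my actual code; GLFW reports GLFW_PRESS as 1 and GLFW_RELEASE as 0):

```cpp
// Each axis is pressed(positive key) - pressed(negative key), so every
// component of the movement direction lands in {-1, 0, 1}.
int axisFromKeys(int negKeyState, int posKeyState) {
    return posKeyState - negKeyState;
}
```

So, for example, holding D alone yields +1 on x, holding A alone yields -1, and holding both cancels to 0.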

In my GLSL code:


layout(std430, binding = 0) buffer movement_buffer {
    vec3 camera_sum;
};

Frankly, I am unsure what other information I could provide. To reiterate: everything works as expected on Linux with a Linux binary, and under Wine on Linux with a Windows binary. I am using GLEW 2.1.0, and I check for SSBO support in the code below. No error message is returned on Windows or Linux.

  // Initialize GLEW
  glewExperimental = GL_TRUE;  // Needed for core profile
  if (glewInit() != GLEW_OK) {
    fprintf(stderr, "Failed to initialize GLEW\n");
    getchar();
    glfwTerminate();
    return -1;
  }
  if (GLEW_ARB_shader_storage_buffer_object || GLEW_VERSION_4_3) {
    // The extension or OpenGL 4.3 is supported
  } else {
    // The extension or OpenGL 4.3 is not supported
    // Handle the lack of support, typically by logging and exiting
    fprintf(stderr, "SSBOs or OpenGL 4.3 not supported.\n");
    // Exit or return from your initialization function
  }
  std::cout << glGetError() << std::endl;

I have tested the binary on multiple Windows computers, including Windows on the same hardware that works under Linux, all with the same result. I have tried adjusting the window hints and the #version directive. I tested the binary on Windows before switching to the SSBO and it worked. My Linux graphics drivers are the defaults for Ubuntu 22.04.3 LTS. If there is any other information I can provide, please ask and I will do my best to respond in a timely manner. Thank you in advance.

Okay, well, one of the higher-ups decided I need an MRE, so here it is. The color changes slowly and persists across frames based on user input (WASD, C, SPACE). On Wine on Linux with the Windows executable, the transition from dark to bright (say, while SPACE is held) takes about 12 seconds for me and is completely linear and smooth. On Windows on the same hardware with the same binary, the transition takes about the same amount of time, but happens in something like 4 distinct jumps between black and bright. I have poked around with RenderDoc on both operating systems, though I am not really sure what I am doing with it; I can see that on Windows the color data is correct for a given frame, but it is the transition over time that I care about. This MRE maps directly onto my original movement issue, where the camera would not move at all for a while and then snap to a far-off location. I am hoping someone here will know how to fix this.

(Hopefully) final edit. Since the transition over time was the issue, I did some digging and found this post, which states:

The output on Linux differs from that on Windows. std::chrono::system_clock is far more accurate on Linux; std::chrono::high_resolution_clock is steady on Windows.

However, switching to high_resolution_clock did not do much to make the transition smooth. I am still looking for answers, and I am not totally sold that the clock is at fault, so I'm modifying the title to make that clear.
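For what it's worth, here is the steady-clock variant of the per-frame timing I tried, in isolation (illustrative helper name, not the exact code from the MRE below). std::chrono::steady_clock is guaranteed monotonic on every platform, whereas high_resolution_clock is only an alias whose properties differ between standard library implementations:

```cpp
#include <chrono>

// Returns the elapsed time in seconds since `previous` and advances
// `previous` to now, using the monotonic steady_clock.
float frameDeltaSeconds(std::chrono::steady_clock::time_point& previous) {
    auto now = std::chrono::steady_clock::now();
    std::chrono::duration<float> delta = now - previous;  // seconds, as float
    previous = now;
    return delta.count();
}
```

In the render loop this would replace the system_clock arithmetic, feeding the result straight into the u_delta_time uniform.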

// C++ Code
#define GLEW_STATIC
#include <GL/glew.h>
#include <GLFW/glfw3.h>

#include <chrono>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <iostream>
GLuint createShaderProgram() {
  // Minimal shader program creation with a vertex and fragment shader
  // This example omits error checking for brevity
  const char* vertexShaderSource = R"glsl(#version 430 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;

void main(){

    gl_Position.xyz = vertexPosition_modelspace;
    gl_Position.w = 1.0;

})glsl";
  const char* fragmentShaderSource = R"glsl(
        #version 430 core
        uniform float u_delta_time;
        uniform ivec3 u_move_dir;
        layout(std430, binding = 0) buffer movement_buffer {
            vec4 camera_sum;
        };
        out vec4 fragColor;
        void main() {
            camera_sum.xyz+=vec3(float(u_move_dir.x),float(u_move_dir.y),float(u_move_dir.z))*u_delta_time*.00000001;
            camera_sum.w=1.;
            fragColor = vec4(camera_sum);
        }
    )glsl";

  GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
  glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
  glCompileShader(vertexShader);

  GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
  glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
  glCompileShader(fragmentShader);

  GLuint shaderProgram = glCreateProgram();
  glAttachShader(shaderProgram, vertexShader);
  glAttachShader(shaderProgram, fragmentShader);
  glLinkProgram(shaderProgram);

  glDeleteShader(vertexShader);
  glDeleteShader(fragmentShader);

  return shaderProgram;
}

int main() {
  if (!glfwInit()) {
    std::cerr << "Failed to initialize GLFW" << std::endl;
    return -1;
  }

  glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
  glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
  glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

  GLFWwindow* window = glfwCreateWindow(640, 480, "Minimal SSBO Example", NULL, NULL);
  if (!window) {
    std::cerr << "Failed to create GLFW window" << std::endl;
    glfwTerminate();
    return -1;
  }

  glfwMakeContextCurrent(window);
  glewExperimental = GL_TRUE;
  if (glewInit() != GLEW_OK) {
    std::cerr << "Failed to initialize GLEW" << std::endl;
    glfwTerminate();
    return -1;
  }

  GLuint programID = createShaderProgram();
  // Define the vertex array for two triangles to fill the screen
  GLfloat g_vertex_buffer_data[] = {
      -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, -1.0f, 1.0f, 0.0f, -1.0f, 1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 1.0f, 1.0f, 0.0f,
  };

  GLuint vertexArrayID;
  glGenVertexArrays(1, &vertexArrayID);
  glBindVertexArray(vertexArrayID);
  // Initialize graphics
  glClearColor(0.0f, 0.0f, 0.4f, 0.0f);
  glUseProgram(programID);
  // Create and bind the buffer for vertex data
  GLuint vertexbuffer;
  glGenBuffers(1, &vertexbuffer);
  glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
  glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);

  GLuint ssboID;
  glGenBuffers(1, &ssboID);
  glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssboID);
  glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(glm::vec4), NULL, GL_DYNAMIC_COPY);
  glm::vec4 initialMovement(0.0f);
  glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(glm::vec4), glm::value_ptr(initialMovement));
  glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssboID);
  glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);

  glUseProgram(programID);
  auto start = std::chrono::system_clock::now();
  while (!glfwWindowShouldClose(window)) {
    auto deltatime = std::chrono::system_clock::now();
    // Clear the color and depth buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Enable vertex attribute array
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    // Set move direction uniform
    glm::ivec3 movedir(-glfwGetKey(window, GLFW_KEY_A) + glfwGetKey(window, GLFW_KEY_D), -glfwGetKey(window, GLFW_KEY_C) + glfwGetKey(window, GLFW_KEY_SPACE), -glfwGetKey(window, GLFW_KEY_W) + glfwGetKey(window, GLFW_KEY_S));
    glUniform3i(glGetUniformLocation(programID, "u_move_dir"), movedir.x, movedir.y, movedir.z);
    glUniform1f(glGetUniformLocation(programID, "u_delta_time"), (deltatime - start).count());

    // Bind the SSBO
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssboID);

    // Draw the triangles
    glDrawArrays(GL_TRIANGLES, 0, 6);

    // Disable vertex attribute array
    glDisableVertexAttribArray(0);

    // Swap buffers and poll for events
    glfwSwapBuffers(window);
    glfwPollEvents();
    start = std::chrono::system_clock::now();
  }

  glDeleteProgram(programID);
  glDeleteBuffers(1, &ssboID);
  glfwDestroyWindow(window);
  glfwTerminate();
  return 0;
}

