I am new to SDL/SDL2, with little experience in C/C++ and some with pygame, pyglet and PySide, and now I am trying PySDL2 in the hope of finding a faster alternative to Processing. Processing is made for drawing, but it also has a straightforward API for direct pixel work.
I started with PySDL2 just yesterday and spent the whole day writing boilerplate code to implement a simple fire algorithm. Some may laugh at it - just a day - but I think it is important for a graphics library that you can draw something with it in a day. My problem is my poor C/C++ experience: the relevant PySDL2 API is mostly a direct mapping to the C API, so it is inherently hard to dig through all the gory details of C-based terminology and specifics. I gave up trying to find a simple overview of the graphics pipeline of SDL2.
The graphics pipeline is the path by which bytes and/or integers end up as pixels on the screen. Just for example, a high-level picture for OpenGL is here - http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-1:-The-Graphics-Pipeline.html - and I am looking for a similarly detailed picture of the pipeline in SDL2.
The main difficulty with understanding how pixels are drawn in SDL2 is that the SDL2 API provides useful helpers that hide these low-level details. You load a bitmap and get a surface. Then you move the surface to a texture. Then SDL2 somehow moves the texture to video memory. Then it tells the GPU to show it on screen. This is just how I understood it - there is no link I can give you where it is actually described. So it is:
[data array] --> [pixel array] --> [surface] --> [texture] --> [video memory] --> [screen]

(..made with uglihunds ascii editor, which haz no graphics pipeline)
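As far as I can tell, the bitmap branch of this chain corresponds to the following PySDL2 calls (this is my guess, not something I found documented; window and renderer creation are omitted, and the helper name is my own):

```python
def bitmap_to_texture(renderer, path):
    # Hypothetical helper: load a BMP from disk into a surface
    # (pixels in main memory), then copy it into a texture
    # (pixels managed by the renderer, possibly in video memory).
    import sdl2  # deferred so this file parses without PySDL2 installed

    surface = sdl2.SDL_LoadBMP(path.encode("utf-8"))          # [bitmap] -> [surface]
    texture = sdl2.SDL_CreateTextureFromSurface(renderer, surface)  # [surface] -> [texture]
    sdl2.SDL_FreeSurface(surface)  # the surface copy is no longer needed
    return texture
```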
And for my application this should run 25 times each second. As I am writing it in Python, I really need to know the pipeline to decide where to optimize. But I am not sure the pipeline above is correct, and I am a little lost inside the API - there are many helpers and I don't even have a starting point for how my calculated pixel data may and should look.
So I need to know:
- what kinds of input data SDL2 supports (bmp, memory pointer, etc.)
- whether there is an intermediate data format to which the input is transformed
- what the data transformation chain is
- how the transformation chain is controlled
- what kinds of output SDL2 supports
- how all of this looks in PySDL2
There is also this question, downvoted by someone, which tells me that I am not alone in my confusion.
UPDATE:
As noted by @Ancurio, a surface is already an array of pixels plus information about that data, so I am changing the diagram and adopting the definition that a surface is a pixel array.
[bitmap] ----┐
[int array] -┴-> [surface] --> [texture] --> [video memory] --> [screen]
So, basically a surface is (1) a contiguous memory chunk with pixel data and (2) information about those pixels. A bitmap is also a pixel array, which you load from disk or from an image. But SDL2 cannot use it directly. Why?
The question for the first step of the pipeline is how int and bitmap pixels are transformed into surface pixels, and how a surface is different from a bitmap. I am using Python, where there is no obvious direct memory access, and I need to figure out what my options are.
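For the int-array branch, one option I am considering (an assumption on my side, not verified) is to build a contiguous buffer with ctypes - one 32-bit integer per pixel - and hand it to SDL via SDL_CreateRGBSurfaceFrom. The buffer layout for the ARGB8888 format would look like this:

```python
import ctypes

W, H = 4, 3
# One 32-bit integer per pixel, rows stored back to back;
# "pitch" is the byte length of one row.
pixels = (ctypes.c_uint32 * (W * H))()
pitch = W * ctypes.sizeof(ctypes.c_uint32)

def put(x, y, a, r, g, b):
    # Pack four 8-bit channels into one ARGB8888 pixel.
    pixels[y * W + x] = (a << 24) | (r << 16) | (g << 8) | b

put(1, 2, 0xFF, 0x80, 0x40, 0x20)
assert pixels[2 * W + 1] == 0xFF804020

# This buffer could then be wrapped by SDL without copying,
# e.g. (untested assumption on my side):
# surface = sdl2.SDL_CreateRGBSurfaceFrom(pixels, W, H, 32, pitch,
#     0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000)
```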
If you're doing this for the first time, maybe you should focus less on the big picture (i.e. the graphics pipeline) and concentrate on doing what you want to do (pixel access).
I had my fair share of confusion on this topic when learning SDL2 after being used to the old SDL (my own question). It turns out that the solution is in the Migration Guide. You need to create your texture with SDL_TEXTUREACCESS_STREAMING, and then you can manipulate its pixels directly. At that point you don't even need to bother with surfaces. You just update the pixels in the texture and upload them to video memory (using SDL_UpdateTexture() and the accompanying calls, then SDL_RenderPresent() to show the result).
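A minimal sketch of that streaming setup in PySDL2 (raw C-style calls; the window size, pixel format, and placeholder frame computation are my assumptions, not part of your fire algorithm):

```python
import ctypes

W, H = 320, 200

def pack_argb(a, r, g, b):
    # Pack four 8-bit channels into one ARGB8888 pixel value.
    return (a << 24) | (r << 16) | (g << 8) | b

def main():
    import sdl2  # deferred so pack_argb is usable without PySDL2 installed

    sdl2.SDL_Init(sdl2.SDL_INIT_VIDEO)
    window = sdl2.SDL_CreateWindow(b"fire", sdl2.SDL_WINDOWPOS_CENTERED,
                                   sdl2.SDL_WINDOWPOS_CENTERED, W, H, 0)
    renderer = sdl2.SDL_CreateRenderer(window, -1, 0)
    # STREAMING access is the key: it makes the texture's pixels writable.
    texture = sdl2.SDL_CreateTexture(renderer, sdl2.SDL_PIXELFORMAT_ARGB8888,
                                     sdl2.SDL_TEXTUREACCESS_STREAMING, W, H)

    pixels = (ctypes.c_uint32 * (W * H))()  # the computed frame lives here
    pitch = W * 4  # bytes per row for a 32-bit format

    for frame in range(100):  # ~25 fps loop body, real timing omitted
        for i in range(W * H):
            pixels[i] = pack_argb(255, frame % 256, 0, 0)  # placeholder "fire"
        sdl2.SDL_UpdateTexture(texture, None, pixels, pitch)  # -> video memory
        sdl2.SDL_RenderCopy(renderer, texture, None, None)
        sdl2.SDL_RenderPresent(renderer)                      # -> screen
        sdl2.SDL_Delay(40)

    sdl2.SDL_Quit()

if __name__ == "__main__":
    main()
```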