I've been working for two months on prebaking NVIDIA's PCSS with cube textures in WebGL. I have succeeded in implementing the beast in real time. Prebaking it finally works too, but with some obvious artifacts.
To keep it brief: below I highlight the main artifact I get and explain why it appears, in a general case that anyone trying to prebake soft shadows should run into. To sum up this thread, any advice around this question could help me:
How can we deal with prebaked soft shadow artifacts?
Let's simplify the situation by talking about soft shadows in general instead of focusing on PCSS. That means we're working with a shadowMap containing a visibility value ranging from 0 to 1 for each texel (whereas hard shadows produce shadowMaps with only two possible values: 0 or 1).
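To make that difference concrete, here is a minimal GLSL sketch (the names are mine, not from my actual code): a hard shadowMap stores a depth to compare against, while the prebaked soft shadowMap already stores the visibility itself.

```glsl
// Minimal sketch, GLSL ES for WebGL1. All names here are illustrative.
precision mediump float;

uniform sampler2D hardShadowMap;  // stores a depth from the light's point of view
uniform sampler2D softShadowMap;  // stores a prebaked visibility in [0, 1]
varying vec4 vShadowCoord;        // fragment position projected into light space

// Hard shadows: the comparison can only give 0.0 or 1.0.
float hardVisibility(vec2 uv, float fragDepthInLightSpace) {
    float storedDepth = texture2D(hardShadowMap, uv).r;
    return step(fragDepthInLightSpace, storedDepth);
}

// Soft shadows: the texel already contains the visibility, penumbra included.
float softVisibility(vec2 uv) {
    return texture2D(softShadowMap, uv).r;
}

void main() {
    vec3 coord = vShadowCoord.xyz / vShadowCoord.w;
    float visibility = softVisibility(coord.xy);              // any value in [0, 1]
    // float visibility = hardVisibility(coord.xy, coord.z);  // 0.0 or 1.0 only
    gl_FragColor = vec4(vec3(visibility), 1.0);
}
```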
Since we're not in real time, we have to fix a point of view instead of using the camera's point of view at each frame. My soft shadows are computed from the light's point of view. To build them, I:
- Compute a basic shadowMap with all the blockers (occluders)
- In a second pass, rendering only the receivers, compute the soft shadows using that shadowMap and store the visibility for each pixel
This gives me a precomputed shadowMap I can sample in real time.
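To make the second pass explicit, here is a very simplified fragment-shader sketch of it. It is not my actual code: the real PCSS blocker search and penumbra estimation are collapsed into a fixed-radius PCF average, and all names (blockerDepthMap, filterRadius, vShadowCoord) are assumptions, just to show where the stored [0, 1] visibility comes from.

```glsl
// Pass 2 sketch (GLSL ES, WebGL1): rendered with the receivers only, from the
// light's point of view, writing the soft visibility into the prebaked shadowMap.
precision mediump float;

uniform sampler2D blockerDepthMap;  // depth map from pass 1 (blockers only)
uniform float filterRadius;         // in real PCSS this would depend on the penumbra size
varying vec4 vShadowCoord;          // receiver fragment projected into light space

void main() {
    vec3 coord = vShadowCoord.xyz / vShadowCoord.w;

    // Stand-in for the PCSS filtering: average a 5x5 neighborhood of depth tests.
    float visibility = 0.0;
    for (int x = -2; x <= 2; x++) {
        for (int y = -2; y <= 2; y++) {
            vec2 offset = vec2(float(x), float(y)) * filterRadius;
            float blockerDepth = texture2D(blockerDepthMap, coord.xy + offset).r;
            visibility += step(coord.z, blockerDepth);  // 1.0 if lit, 0.0 if occluded
        }
    }
    visibility /= 25.0;

    // This [0, 1] value is what ends up in the precomputed shadowMap,
    // and it is sampled as-is at run time.
    gl_FragColor = vec4(vec3(visibility), 1.0);
}
```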
The first limitation of prebaked soft shadows is that blockers cannot also be receivers. Imagine a room full of occluders: the only receiver would then have to be the room itself, otherwise you'll lose some shadows on the room mesh. The reason is that we are stuck in the light's point of view, so we cannot see what's behind a blocker if we make it a receiver too.
This limitation yields a shadowMap with the correct visibility value for the room, but obviously not the correct one for the shadows on the blockers. Getting the exact visibility value on the blockers doesn't matter to me; I hoped to use the room's shadow as an approximation for the shadow on the blockers. But in practice you get hard shadows on the blockers, because behind a blocker the visibility value of the room's shadow is totally black.
Here is a graphic to illustrate why this is happening.
In the top case, only the room is a receiver. In the bottom case, the blocker is a receiver too. You can easily see that a problem appears in both cases, because the same texel of the shadowMap needs two different visibility values: the point on the room is totally black, whereas the point on the blocker is in penumbra.
I have several ideas to deal with this artifact:
1. Send the shaders a meshId for each mesh and test this id to know whether the fragment we are shading belongs to a blocker or not (roughly sketched below).
2. Make a PCSS pass for each blocker separately and mix all the shadowMaps at the end.
3. Make a PCSS pass for each receiver separately (taking the blockers into account).
4. Precompute half of the calculation and do the other half in real time.
5. Precompute the shadowMap from several points of view other than the light's.
I failed for 1 & 2. Idea 3 seems to be the same as idea 2. 4 is stupid, it's not precomputation anymore. And I fear I won't be able to make idea 5 generic.
There is very little documentation on the subject, and most of what I found works with ideal scenes where no blocker shadows another blocker, as if that weren't a common case. So maybe someone here has already faced this issue or is interested in the subject? I hope it will help other people after me too.
In any case, thank you for considering the issue.