I'm currently developing a project in Unity using the Default (Built-in) Render Pipeline, targeting the Pico 3 and Pico 4 headsets and using WebRTC.
Our aim is to display real-time video inside our separate Android app: we copy the camera output into a WebRTC-supported RenderTexture and use that as the video stream, so that coaches can watch the gameplay from the app and support clients during their journey.
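For context, the WebRTC side is set up roughly like this (simplified sketch assuming the 2.4.x com.unity.webrtc package; GameplayStreamer is just a placeholder name, signalling is left out, and the exact VideoStreamTrack constructor differs between package versions):

using Unity.WebRTC;
using UnityEngine;

// Simplified sketch: wrap the captured RenderTexture in a VideoStreamTrack and
// add it to the peer connection that talks to the coach app. Signalling omitted.
public class GameplayStreamer : MonoBehaviour
{
    public RenderTexture sourceTexture; // filled by the camera capture below

    private RTCPeerConnection peerConnection;
    private VideoStreamTrack videoTrack;

    void Start()
    {
        // Needed on the 2.4.x package line; newer versions initialize automatically.
        WebRTC.Initialize();

        peerConnection = new RTCPeerConnection();

        videoTrack = new VideoStreamTrack(sourceTexture);
        peerConnection.AddTrack(videoTrack);

        // The package copies the texture to the video encoder inside this coroutine.
        StartCoroutine(WebRTC.Update());
    }

    void OnDestroy()
    {
        videoTrack?.Dispose();
        peerConnection?.Dispose();
        WebRTC.Dispose();
    }
}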
I've successfully implemented a CommandBuffer for camera capture in Multi Pass mode, but I'm struggling to adapt it to Single Pass Instanced rendering. I was wondering if anyone has experience with this and could point me in the right direction. There is a lot of documentation for URP, but not as much for the default RP.
Key points:
Unity 2021.3 LTS on Android, OpenGL ES 3.
The main challenge now is making the CommandBuffer work with Single Pass Instanced in the Default Render Pipeline; the extra performance is important for us. Any insights or solutions to get the CommandBuffer working with Single Pass Instanced rendering would really help me out!
Thank you for your help!
Here is my current code:
using Unity.WebRTC;
using UnityEngine;
using UnityEngine.Rendering;

public class VRCommandBuffer : MonoBehaviour
{
    public Material targetMaterial;

    private CommandBuffer commandBuffer;
    private Camera vrCamera;
    private RenderTexture vrRenderTexture;

    void Start()
    {
        Setup();
    }

    private void Setup()
    {
        vrCamera = GetComponent<Camera>();

        // Create the capture target directly in a format the WebRTC package supports.
        vrRenderTexture = new RenderTexture(
            vrCamera.pixelWidth,
            vrCamera.pixelHeight,
            0,
            WebRTC.GetSupportedRenderTextureFormat(SystemInfo.graphicsDeviceType))
        {
            // RenderTexture.antiAliasing must be at least 1.
            antiAliasing = vrCamera.allowMSAA ? Mathf.Max(1, QualitySettings.antiAliasing) : 1,
        };

        commandBuffer = new CommandBuffer { name = "VRCommandBuffer" };

        // Copy the currently active target after the skybox has been drawn.
        // Works in Multi Pass, but the result stays black in Single Pass Instanced / multiview.
        commandBuffer.Blit(BuiltinRenderTextureType.CurrentActive, vrRenderTexture);
        vrCamera.AddCommandBuffer(CameraEvent.AfterSkybox, commandBuffer);

        if (targetMaterial != null)
            targetMaterial.mainTexture = vrRenderTexture;
    }

    void OnDestroy()
    {
        if (vrCamera != null && commandBuffer != null)
            vrCamera.RemoveCommandBuffer(CameraEvent.AfterSkybox, commandBuffer);

        // The buffer was created with new CommandBuffer(), not CommandBufferPool.Get(),
        // so release it directly instead of returning it to a pool.
        commandBuffer?.Release();

        if (vrRenderTexture != null)
            vrRenderTexture.Release();
    }
}
I also tried ScreenCapture.CaptureScreenshotIntoRenderTexture and an OnRenderImage approach on the camera. With CaptureScreenshotIntoRenderTexture I never managed to capture the screen; the RenderTexture always stayed black with multiview. The OnRenderImage approach did work, but it roughly halved performance.
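For reference, the OnRenderImage variant that did produce a picture (at roughly half the frame rate) looked more or less like this:

// OnRenderImage fallback: produces a valid image, but roughly halves the frame
// rate because it forces an extra resolve and full-screen blits every frame.
private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    // Copy the camera image into the texture that feeds the WebRTC video track...
    Graphics.Blit(source, vrRenderTexture);

    // ...and still forward the image to the headset display.
    Graphics.Blit(source, destination);
}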
Now I have the CommandBuffer working, but the RenderTexture still stays black with multiview.
I also tried to add:
EnableShaderKeyword("STEREO_MULTIVIEW_ON");
to the CommandBuffer, but that doesn't seem to help.
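My current suspicion is that under Single Pass Instanced / multiview the camera renders into a 2D texture array with one slice per eye, so a plain Blit from CurrentActive into a 2D RenderTexture has nothing valid to sample. If that is right, would an explicit per-slice copy like the sketch below be the way to go? This is untested on device; the Blit overload with depth slices is from the CommandBuffer documentation, and slice 0 = left eye is my assumption.

private void Setup()
{
    vrCamera = GetComponent<Camera>();

    vrRenderTexture = new RenderTexture(
        vrCamera.pixelWidth,
        vrCamera.pixelHeight,
        0,
        WebRTC.GetSupportedRenderTextureFormat(SystemInfo.graphicsDeviceType));

    const int eyeIndex = 0; // assumption: slice 0 = left eye, slice 1 = right eye

    commandBuffer = new CommandBuffer { name = "VRCommandBuffer" };

    // Blit overload with explicit source/destination depth slices, so a single
    // slice of the eye texture array is read instead of the whole array.
    commandBuffer.Blit(
        BuiltinRenderTextureType.CurrentActive, // texture array in Single Pass Instanced
        vrRenderTexture,                        // plain 2D texture fed to WebRTC
        eyeIndex,                               // source depth slice (eye index)
        0);                                     // destination depth slice

    vrCamera.AddCommandBuffer(CameraEvent.AfterSkybox, commandBuffer);

    if (targetMaterial != null)
        targetMaterial.mainTexture = vrRenderTexture;
}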