I am trying to stream a web camera into my application. While I have a workaround for handling different resolutions, I would like to use the camera's own capabilities to provide the image at a different resolution.
My workaround at the moment is as follows (in pseudo code):
ReadSample(); //always in highest resolution
var bmp = UnpackTheFrameIntoBitmap();
if(some_different_resolution_set)
{
bmp = ResizeImageToNewResolution(bmp);
}
DrawBitmapOnCanvas(bmp);
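The resize step in this workaround can be sketched with GDI+ (a hypothetical implementation of the `ResizeImageToNewResolution` helper from the pseudocode, assuming a System.Drawing/WinForms context):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;

// Hypothetical implementation of the ResizeImageToNewResolution step:
// draws the full-resolution frame into a bitmap of the target size.
static Bitmap ResizeImageToNewResolution(Bitmap source, int targetWidth, int targetHeight)
{
    var resized = new Bitmap(targetWidth, targetHeight);
    using (var g = Graphics.FromImage(resized))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(source, 0, 0, targetWidth, targetHeight);
    }
    return resized;
}
```

This works, but it scales every frame on the CPU, which is exactly the cost that setting the native media type is supposed to avoid.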
I found out that I can iterate over the camera's native media types. I use this to find the native type that supports the selected resolution, and then set it as the current media type. This is my code (half pseudo):
var nativeType = FindNativeTypeSupportingResolution(resolution);
sourceReader.SetCurrentMediaType(0, nativeType);
sourceReader.SetStreamSelection(0, true);
while(true)
{
var sample = sourceReader.ReadSample(0, SourceReaderControlFlags.None, out int readStreamIndex, out SourceReaderFlags readFlags, out long timestamp);
if(sample != null)
{
var mediaBuffer = sample.GetBufferByIndex(0);
var sourcePointer = mediaBuffer.Lock(out int maxLength, out int currentLength);
var data = new byte[currentLength];
Marshal.Copy(sourcePointer, data, 0, currentLength);
mediaBuffer.Unlock();
// Get this data into a bitmap.
}
}
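For completeness, `FindNativeTypeSupportingResolution` could look like the following (a hypothetical sketch assuming SharpDX.MediaFoundation; `MF_MT_FRAME_SIZE` packs the width in the high 32 bits and the height in the low 32 bits of a UINT64):

```csharp
// Hypothetical helper: enumerate the stream's native media types and
// return the first one whose frame size matches the requested resolution.
MediaType FindNativeTypeSupportingResolution(SourceReader reader, int width, int height)
{
    for (int i = 0; ; i++)
    {
        MediaType nativeType;
        try
        {
            nativeType = reader.GetNativeMediaType(0, i);
        }
        catch (SharpDXException) // MF_E_NO_MORE_TYPES: enumeration finished
        {
            return null;
        }
        long frameSize = nativeType.Get(MediaTypeAttributeKeys.FrameSize);
        int typeWidth = (int)(frameSize >> 32);
        int typeHeight = (int)(frameSize & 0xFFFFFFFF);
        if (typeWidth == width && typeHeight == height)
            return nativeType;
        nativeType.Dispose();
    }
}
```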
What happens is that setting the native type has no effect on the frame size. For instance, my camera's highest frame size is 1920x1080. If I set the current media type to a native type whose frame size is 800x600, I still get samples that are actually 1920x1080.
What am I doing wrong? Why does setting the media type have no effect on the sample?
The underlying Windows Media Foundation API definitely supports changing the video resolution during active video capture.
It happens in the straightforward way you assumed: by changing the media type on the go.
Some C++ code to demonstrate this: https://github.com/roman380/MediaFoundationVideoCapture/...
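Translated to the SharpDX-style wrapper used in the question, the mid-capture switch amounts to the following sketch. Rather than assuming the size you asked for, re-query the current media type after setting it, since the driver may adjust other attributes (such as the stride) along with the frame size:

```csharp
// Sketch: switch resolution while capturing, then verify the change took
// effect by re-reading the negotiated current media type.
var nativeType = FindNativeTypeSupportingResolution(resolution); // e.g. 800x600
sourceReader.SetCurrentMediaType(0, nativeType);

// The reader may report MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED in the
// ReadSample flags; re-query the current type instead of assuming the size.
using (var currentType = sourceReader.GetCurrentMediaType(0))
{
    long frameSize = currentType.Get(MediaTypeAttributeKeys.FrameSize);
    int width = (int)(frameSize >> 32);
    int height = (int)(frameSize & 0xFFFFFFFF);
    // Use width/height (and the stride, if present) when unpacking samples.
}
```

Also check the `SourceReaderFlags` returned by `ReadSample`: a sample delivered around the switch may still carry the old format until the reader signals the media type change.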