I am trying to understand how to upscale live video using my own AI models on AWS


I want to upscale a live video on AWS. The input will be an RTMP stream, which I want to upscale using my own AI upscaling model, and the output will then be distributed through a CDN. I searched for upscaling on AWS but couldn't find a way to do it with my own models. I already have a streaming pipeline set up: I stream my screen from my phone, the stream goes to AWS Elemental MediaLive, then to AWS Elemental MediaPackage, and then to a CDN for distribution across the globe. I don't understand how to include upscaling in this pipeline, and where in the pipeline the upscaling should be done to save transmission cost.

I already have a pipeline set up for streaming using AWS MediaLive and AWS MediaPackage.


There is 1 best solution below.

Thanks for your message.

The scaling operation will need a compute resource, most likely an EC2 instance.

Your scaler could in theory be configured to accept either a continuous bitstream or a series of flat files (TS segments). The bitstream option requires that you implement a streaming receiver/decoder, potentially based on the NGINX streaming proxy. The flat-file option might be simpler, as you could configure the scaler to read the segments from an S3 bucket. The resulting output can be delivered to MediaLive either as a continuous bitstream or as a series of flat files.
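As a rough illustration of the bitstream option, here is a minimal sketch assuming the scaler runs on an EC2 instance with ffmpeg installed. The RTMP URLs, resolutions, and the upscale_frame() stand-in (a naive nearest-neighbour 2x blow-up in place of your real model) are placeholders, not part of any AWS API:

```python
# Sketch: decode an incoming RTMP stream to raw frames, upscale each frame,
# re-encode, and push the result onward (e.g. to a MediaLive RTMP push input).
# Audio is dropped in this sketch; pass it through or remux it separately.
import subprocess
import numpy as np

IN_URL = "rtmp://scaler.example.com/live/input"                # placeholder source
OUT_URL = "rtmp://medialive-input.example.com/live/upscaled"   # placeholder destination
IN_W, IN_H = 1280, 720     # assumed resolution of the incoming stream
OUT_W, OUT_H = 2560, 1440  # assumed resolution after 2x upscaling

def upscale_frame(frame: np.ndarray) -> np.ndarray:
    """Stand-in for your own AI model: naive 2x nearest-neighbour upscale."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

# ffmpeg #1: pull the RTMP stream and emit raw RGB frames on stdout.
decoder = subprocess.Popen(
    ["ffmpeg", "-loglevel", "error", "-i", IN_URL,
     "-an", "-f", "rawvideo", "-pix_fmt", "rgb24", "-"],
    stdout=subprocess.PIPE,
)
# ffmpeg #2: read raw upscaled frames on stdin, encode, and push via RTMP.
encoder = subprocess.Popen(
    ["ffmpeg", "-loglevel", "error",
     "-f", "rawvideo", "-pix_fmt", "rgb24", "-s", f"{OUT_W}x{OUT_H}", "-r", "30", "-i", "-",
     "-c:v", "libx264", "-preset", "veryfast", "-pix_fmt", "yuv420p",
     "-f", "flv", OUT_URL],
    stdin=subprocess.PIPE,
)

frame_bytes = IN_W * IN_H * 3
while True:
    raw = decoder.stdout.read(frame_bytes)
    if len(raw) < frame_bytes:
        break  # input stream ended
    frame = np.frombuffer(raw, dtype=np.uint8).reshape(IN_H, IN_W, 3)
    encoder.stdin.write(upscale_frame(frame).tobytes())

encoder.stdin.close()
encoder.wait()
decoder.wait()
```

The flat-file variant would look similar, except the loop reads TS segments from the S3 bucket (for example with boto3) and writes upscaled segments back, instead of piping RTMP end to end.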

Regarding order of operations, placing the scaler before MediaLive makes the most sense: you want to deliver the upscaled content to MediaLive for further encoding into the ABR stack renditions, and to leverage other features such as logo branding, input switching, output stabilization in the event of input loss, et cetera. Note that, at present, UHD ("4K") is the largest input resolution MediaLive supports.
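For completeness, here is a minimal sketch (using boto3; the region, CIDR, names, and stream key are assumptions for illustration) of registering an RTMP push input on MediaLive that the upscaler would publish into. The MediaLive channel attached to this input then feeds MediaPackage and the CDN, as in your existing pipeline:

```python
# Sketch: create a MediaLive RTMP_PUSH input for the upscaler's output.
import boto3

medialive = boto3.client("medialive", region_name="us-east-1")  # region is an assumption

# Restrict who may push to the input; the CIDR should cover the upscaler's address range.
sg = medialive.create_input_security_group(
    WhitelistRules=[{"Cidr": "10.0.0.0/16"}]  # placeholder CIDR
)

inp = medialive.create_input(
    Name="upscaled-live-input",                    # placeholder name
    Type="RTMP_PUSH",
    Destinations=[{"StreamName": "live/upscaled"}],  # placeholder app/stream key
    InputSecurityGroups=[sg["SecurityGroup"]["Id"]],
)

# These URLs are what the scaler's encoder (OUT_URL in the earlier sketch) should push to.
for dest in inp["Input"]["Destinations"]:
    print(dest["Url"])
```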