Detecting PII and redacting using Video Indexer and Media Services

I am trying to create a custom transform to detect and redact PII in videos using Video Indexer and Media Services, but I cannot find the correct workflow for chaining the services. Is it: Video Indexer detects insights (OCR) -> Text Analytics detects PII -> Media Services encodes and blurs (or overlays) the detected regions in the video? There is no Media Services sample for blurring arbitrary regions, only face redaction.
1 Answer
The Media Services API only supports detecting faces and blurring them, in either a single-pass or a two-pass process.
The two-pass process first returns a JSON file with bounding boxes that you can edit to adjust the positioning and choose which areas are or are not blurred; the updated file is then used in the second pass (a sketch of both passes follows the links below).
https://learn.microsoft.com/en-us/azure/media-services/latest/analyze-face-redaction-concept
Also see the JSON schema here: https://learn.microsoft.com/en-us/azure/media-services/latest/analyze-face-redaction-concept#elements-of-the-output-json-file
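For reference, here is a minimal sketch of the first (analyze) pass. It uses the Python SDK (azure-mgmt-media) rather than the .NET sample mentioned below; the subscription, resource group, account, and asset names are all placeholders you would replace with your own:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import (
    Transform, TransformOutput, FaceDetectorPreset, FaceRedactorMode,
    Job, JobInputAsset, JobOutputAsset,
)

# Placeholder names -- substitute your own resources.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
ACCOUNT = "<media-services-account>"

client = AzureMediaServices(DefaultAzureCredential(), SUBSCRIPTION_ID)

# First pass: "Analyze" mode only emits annotations.json (bounding boxes
# per detected face, per frame) plus face thumbnails -- no blurring yet.
client.transforms.create_or_update(
    RESOURCE_GROUP, ACCOUNT, "FaceAnalyzeTransform",
    Transform(outputs=[TransformOutput(
        preset=FaceDetectorPreset(mode=FaceRedactorMode.ANALYZE)
    )]),
)

# Run the analyze job against an input asset that already holds the video.
client.jobs.create(
    RESOURCE_GROUP, ACCOUNT, "FaceAnalyzeTransform", "analyze-job-1",
    Job(
        input=JobInputAsset(asset_name="input-video-asset"),
        outputs=[JobOutputAsset(asset_name="annotations-asset")],
    ),
)
```

After this job completes, download annotations.json from the output asset, edit the bounding boxes per the schema linked above, and upload the edited file for the second pass.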
The current .NET sample only shows the single-pass mode being used, and I don't yet have a detailed sample showing how to edit and resubmit the job for the second pass, but I can help with that if you are interested in the details. The current sample uses the "Redact" mode; you would want to start with the "Analyze" mode if you merely want the JSON file with the bounding boxes so you can adjust what gets blurred.
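Continuing the sketch above (same client and placeholder names), the second (redact) pass would look roughly like this. The shape of the job input, passing the video and the edited annotations.json together as two inputs, follows the REST example in the linked article, so treat the details as an assumption to verify there:

```python
from azure.mgmt.media.models import (
    Transform, TransformOutput, FaceDetectorPreset, FaceRedactorMode,
    BlurType, Job, JobInputs, JobInputAsset, JobOutputAsset,
)

# Second pass: "Redact" mode consumes the (edited) annotations.json and
# burns the chosen blur effect into a new encode of the video.
client.transforms.create_or_update(
    RESOURCE_GROUP, ACCOUNT, "FaceRedactTransform",
    Transform(outputs=[TransformOutput(
        preset=FaceDetectorPreset(
            mode=FaceRedactorMode.REDACT,
            blur_type=BlurType.MED,  # Box, Low, Med, High, or Black
        )
    )]),
)

# The video and the edited annotations file travel together as two inputs;
# "annotations-asset" is assumed to hold the edited annotations.json.
client.jobs.create(
    RESOURCE_GROUP, ACCOUNT, "FaceRedactTransform", "redact-job-1",
    Job(
        input=JobInputs(inputs=[
            JobInputAsset(asset_name="input-video-asset",
                          files=["video.mp4"]),
            JobInputAsset(asset_name="annotations-asset",
                          files=["annotations.json"]),
        ]),
        outputs=[JobOutputAsset(asset_name="redacted-output-asset")],
    ),
)
```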
There is no support for blurring text or OCR-related regions directly in Media Services or in Video Indexer; only faces can be redacted.