The broadcaster, which runs a niche news and sports channel, was battling inefficiencies in its MAM (media asset management) and search workflows. Despite a relatively small content library, the company was expending significant resources on manual data entry and tagging and on the search and retrieval of media, especially from production environments.
The system had no automation; it was entirely metadata-driven and very resource-intensive. The result was longer operational turnarounds, heavy overtime for editors, and rising operational costs.
The Challenge:
- High media processing costs due to complete reliance on manual tagging and outdated search mechanisms.
- Delayed turnarounds: finding content by moment, keyword, or event meant manually sifting through footage.
- A limited infrastructure budget ruled out high-end servers and cloud-native solutions.
The broadcaster wanted a media management solution that was:
- Fast and accurate in terms of content discovery.
- Very low on infrastructure requirements.
- Economically viable for their scale.
- Easy to integrate into their on-premise environment.
Our Solution: Gyrus On-Premise Intelligent Media Search
We deployed Gyrus' on-premise media search engine, tailored to the client's requirements. Gyrus' AI-powered platform is a lightweight, plug-and-play media search and digital asset management solution that delivers fast, cost-effective results without tagging or metadata.
For example, queried for the match result, Gyrus immediately located the exact clip showing the final score: Arsenal 1 – Chelsea 0.
Key Features Deployed:
- On-Premise Deployment: Deployed on the customer's premises for guaranteed data privacy and zero dependence on the cloud.
- Lightweight & Efficient: Runs smoothly on an affordable NVIDIA RTX 4070 GPU, indexing one hour of video within 15 minutes with minimal compute resources.
- Zero Tagging Required: Editors can search for events, people, or scenes without adding tags or metadata, drastically reducing human effort.
- Custom Multi-Modal Embedding Model: Translates video and audio content into searchable vectors by analyzing scene context, actions, sentiment, and spoken words (a minimal sketch of this idea follows this list).
- Contextual Search: Uses vision-language search techniques, allowing editors to query in natural language, visuals, or audio and get precise results.
- Domain-Specific AI: Tuned for broadcasting, fluent in the language of sports events, scores, and on-screen live interaction.
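
To make the embedding idea concrete, here is a minimal sketch of a multi-modal indexing pipeline. Gyrus' embedding model is proprietary, so the snippet substitutes the open-source CLIP model (via Hugging Face transformers) purely as an illustration; the frame-sampling interval, model name, and helper names are all assumptions. Audio would be handled analogously, for instance by transcribing speech with an ASR model and embedding the transcript into the same vector space.

```python
# Illustrative sketch only: Gyrus' embedding model is proprietary, so this
# uses open-source CLIP as a stand-in to show the shape of a multi-modal
# indexing pipeline (video frames -> timestamped vectors).
import cv2  # pip install opencv-python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def index_video(path: str, every_n_seconds: float = 2.0):
    """Sample frames at a fixed interval and embed each one.

    Returns a list of (timestamp_seconds, embedding) pairs that a vector
    store can index for later semantic search.
    """
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(fps * every_n_seconds))
    index, frame_no = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            inputs = processor(images=rgb, return_tensors="pt")
            with torch.no_grad():
                emb = model.get_image_features(**inputs)
            # Unit-normalize so cosine similarity becomes a dot product.
            emb = emb / emb.norm(dim=-1, keepdim=True)
            index.append((frame_no / fps, emb[0].numpy()))
        frame_no += 1
    cap.release()
    return index
```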
How It Works – Semantic Search Workflow:
Built on a custom embedding framework, the semantic search workflow of Gyrus' solution proceeds as follows:
This multi-modal mapping ensures the system understands everything from visual cues (scoreboards, player reactions) to commentary speech or textual overlays. It's powered by vision-language search, enabling the AI to semantically interpret both video and audio together.
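
Continuing the sketch from the feature list above, the query side is the other half of the workflow: the editor's natural-language query is embedded into the same vector space, and the closest frame embeddings are returned as timestamps. Again, the CLIP model and the `index_video` helper are illustrative stand-ins, not Gyrus' actual API, and the file name in the usage example is hypothetical.

```python
# Query side of the illustrative pipeline; `model`, `processor`, and
# `index_video` are the stand-ins defined in the previous snippet.
import numpy as np
import torch

def search(query: str, index: list, top_k: int = 5):
    """Embed a natural-language query and rank indexed frames by similarity."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        q = model.get_text_features(**inputs)
    q = (q / q.norm(dim=-1, keepdim=True))[0].numpy()
    # All vectors are unit-normalized, so cosine similarity is a dot product.
    scored = [(float(np.dot(q, emb)), ts) for ts, emb in index]
    return sorted(scored, reverse=True)[:top_k]

# Hypothetical usage, echoing the Arsenal vs. Chelsea example above:
index = index_video("match_broadcast.mp4")
for similarity, ts in search("scoreboard showing Arsenal 1 - Chelsea 0", index):
    print(f"{ts:7.1f}s  similarity={similarity:.3f}")
```

At production scale, the (timestamp, vector) pairs would live in a proper vector store rather than a Python list; that is what keeps a query over hours of footage fast on modest hardware.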
Conclusion:
The use case above shows how small broadcasters can exploit AI-powered contextual search to leapfrog traditional workflows. Gyrus delivered a scalable, affordable, and explainable implementation that let the broadcaster trim costs, speed up work, and take active control of its media assets on its own infrastructure.