AI companies want licensed video training data because their models are evolving beyond text and images. Media companies have millions of hours of video. However, what has been missing is the infrastructure to turn raw footage into datasets that can be used at scale. Versos AI says it has built that missing layer.
The company announced a new platform, the Video Library Intelligence Platform, designed to transform vast video data into structured, licensable datasets built specifically for AI training. This signals that video content may begin to carry new value as training material in the age of multimodal AI.
Why is demand for high-quality training data intensifying? For one, unlike static images, video unfolds over time, offering greater depth of information. It is also inherently multimodal, capturing visual data, speech, environmental cues, human behavior, and more.
There’s also a big shift happening toward what researchers call “world models.” These are AI systems that try to understand how the physical world behaves. Video gives models richer cause-and-effect relationships and human interaction patterns that other media types do not offer.
Versos AI wants to tap into this potential with the Video Library Intelligence Platform. “AI training has outgrown scraping data,” said Chris Keevill, CEO and Co-founder of Versos AI.
“Video introduces significant complexity around structure and delivery at scale. Versos AI was built to manage that complexity end-to-end—so content owners can unlock new revenue streams and hyperscalers can train models with confidence in licensed datasets.”
“Until now, there has been no purpose-built solution for converting unstructured video libraries into structured datasets suitable for hyperscale AI training. Versos AI closes that gap by making video data searchable, licensable, and ready for model development.”
It’s not just the volume of data that is challenging to handle; it’s also the structure. Raw footage is not automatically useful for machine learning. Before it can be fed to AI systems, the footage must be segmented, rights verified, delivery formats matched, and metadata attached. Without this preparation, even millions of hours of video remain largely inaccessible for training purposes.
Versos AI works by first ingesting large video libraries from content owners. After structured ingestion, the focus moves to scene detection and temporal segmentation. This segmentation converts one long file into thousands of indexed micro-units. This is followed by metadata enrichment and multimodal tagging to prepare files for analysis.
Rights metadata is then attached to each indexed segment to ensure licensing clarity and compliance. The structured clips are ultimately assembled into training-ready datasets and securely delivered to AI developers for use at scale. By turning raw footage into indexed and rights-cleared datasets, Versos AI is positioning itself at the intersection of media archives and AI demand.
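Versos AI has not published its implementation, but the pipeline described above can be illustrated with a minimal sketch: scene boundaries found by simple frame differencing, the resulting segments indexed as micro-units, and licensing metadata stamped onto each one. All names here (`Segment`, `segment_video`, `attach_rights`) and the thresholding approach are hypothetical, chosen only to make the steps concrete.

```python
# Hypothetical sketch of a video-to-dataset pipeline: scene cuts via
# frame differencing, then rights metadata attached to each segment.
# Frames are modeled as flat lists of pixel intensities for simplicity.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Segment:
    """One indexed micro-unit cut from a longer video file."""
    start_frame: int
    end_frame: int
    tags: List[str] = field(default_factory=list)
    rights: Dict[str, str] = field(default_factory=dict)


def frame_diff(a: List[float], b: List[float]) -> float:
    # Mean absolute pixel difference between two consecutive frames.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)


def segment_video(frames: List[List[float]],
                  threshold: float = 30.0) -> List[Segment]:
    """Cut a frame sequence into segments at hard scene changes."""
    segments: List[Segment] = []
    start = 0
    for i in range(1, len(frames)):
        if frame_diff(frames[i - 1], frames[i]) > threshold:
            segments.append(Segment(start, i - 1))
            start = i
    segments.append(Segment(start, len(frames) - 1))
    return segments


def attach_rights(segments: List[Segment],
                  license_terms: Dict[str, str]) -> List[Segment]:
    """Stamp every segment with the library's licensing metadata."""
    for seg in segments:
        seg.rights = dict(license_terms)
    return segments


if __name__ == "__main__":
    # Two synthetic "scenes": five dark frames, then five bright ones.
    frames = [[0.0] * 4 for _ in range(5)] + [[100.0] * 4 for _ in range(5)]
    segs = attach_rights(segment_video(frames),
                         {"license": "training-only", "owner": "demo"})
    for s in segs:
        print(s.start_frame, s.end_frame, s.rights["license"])
```

A production system would of course use real scene-detection models, multimodal taggers, and a rights database rather than a flat dictionary, but the shape of the flow (cut, index, enrich, license, deliver) is the same one the article describes.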
“We’re seeing extraordinary demand from major hyperscalers, AI innovators, and tech leaders who recognize that high-quality, structured video and metadata are essential to training more capable, context-rich models,” said Clint Stinchcomb, President & CEO of CuriosityStream.
“Our extensive library of over 2.5 million hours of video and audio, combined with Versos AI’s best-in-class delivery and indexing capabilities, helps position CuriosityStream as the leading provider for next-generation AI models. We’re excited to have a partner like Versos AI that enables us to accelerate growth as AI video data training reaches the next inflection point of the AI market boom.”
The urgency described by CuriosityStream reflects the problem Versos AI set out to solve from the beginning. Versos AI was founded in 2023 with a focus on making large volumes of video useful for AI companies. The Video Library Intelligence Platform is a major leap in that mission. However, Versos AI is operating in a rapidly emerging space, with competitors such as Scale AI and SuperAnnotate also working on video annotation and training data workflows.
Versos is positioning itself differently. It claims to be an end-to-end solution for video training data in the AI supply chain. The bet is that controlling this full pipeline, from ingestion to compliant delivery, will matter more than any single annotation feature as demand for structured video data accelerates.
The post Versos AI Wants to Turn Video Archives Into Structured Data for AI Models appeared first on BigDATAwire.
Author: Ali Azhar
