Claravine AI

Alyssa Riley

How does Claravine AI understand and classify content?
Claravine AI is unique in its ability to scan image and video assets through our Computer Vision (CV) technology. Using our patented CV, we deliver complete content comprehension through frame-by-frame Artificial Intelligence analysis, rather than the metadata scraping used by many other providers.
  • Unlike other Computer Vision, we provide individual atomic comprehension scores for each content type as well as a composite ‘Total Score’ that delivers full comprehension of the entire environment.
  • Unlike metadata scraping, our approach doesn’t rely on self-declared, and often inconsistent, labels that typically don’t allow for deep contextual analysis.
  • When determining similarity, we leverage an “Approximate Nearest Neighbor” (ANN) algorithm to associate analyzed results with the content in question. The results provide a priority-stacked list of correlated content based on the embeddings (vectorizations) of the content we have processed, ordered by descending similarity.
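The similarity ranking described above can be sketched in a few lines. This is an illustrative example only: the function names, the cosine-similarity measure, and the brute-force scan are assumptions for clarity, not Claravine’s actual implementation, which uses an Approximate Nearest Neighbor index over its stored embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query, catalog):
    """Return (content_id, score) pairs sorted by descending similarity.

    `catalog` maps content IDs to previously computed embeddings.
    A production system would query an ANN index rather than scanning
    every stored embedding exactly, trading a little accuracy for speed.
    """
    scored = [(cid, cosine_similarity(query, emb))
              for cid, emb in catalog.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical catalog of three processed assets and a query embedding.
catalog = {
    "asset_a": [0.9, 0.1, 0.0],
    "asset_b": [0.1, 0.9, 0.2],
    "asset_c": [0.8, 0.2, 0.1],
}
ranking = rank_by_similarity([1.0, 0.0, 0.0], catalog)
print([cid for cid, _ in ranking])  # most similar content first
```

The “priority-stacked list” is simply this descending-similarity ordering: the first entry is the stored content whose embedding sits nearest the query in vector space.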
What is used to train the Claravine AI model?
  • For Computer Vision (CV), we use open-source libraries and datasets to train our models.
  • Customer data is used to train Claravine AI only when the customer opts in. Underlying customer data is never shared with or surfaced to any other clients.
What data is being stored?
We don’t store the assets that clients request us to process, beyond what is necessary to process that content.
  • We maintain log level data for 7 days.
  • Resulting outputs (JSON files) are stored for 90 days.
Embeddings / Vectorizations
  • Embeddings supporting our content ID and active learning capabilities (solutions not yet available to customers) are saved and stored for recall and comparison purposes. This data is siloed by client.
Where is data being stored?
Data is stored in AWS S3. Data in the legacy Vidscover UI is currently stored in Google Cloud Platform (GCP) but will be migrated to S3 in the near future.
