User-generated content presents a tremendous opportunity for advertisers, yet many recognise they cannot fully capitalise on its potential without assurances about where their ads will appear.
Recent figures suggest UK audiences spend 144 minutes per day watching live TV and just over 100 minutes per day watching digital video. Digital video consumption is growing, however, with 90% of 18-24 year olds heading straight to digital channels.
Much of the content in the digital video category is user-generated and found on video-sharing platforms, where everyone from amateurs to professionals aims for virality.
As the volume of user-generated output continues to grow, advertisers’ attempts to neatly categorise and capitalise on the associated advertising opportunities have become more challenging. This is largely because the sheer amount of content being produced has completely outpaced the technology available to advertisers to protect their brands while maximising the impact of their digital media investments.
Legacy contextual analysis and brand safety capabilities rely largely on text – video titles, tags, keywords and so on – supplied by creators, which may not be wholly relevant or accurate. This disconnect makes it harder for creators to be compensated for their work and for advertisers to reach the audiences they want to target in a brand safe environment.
As restrictions on the use of personal data loom large, savvy advertisers are recognising that making strong connections directly with consumers is going to be the best and least privacy-invasive way forward. One study found that contextually relevant ads connect more strongly with consumers, driving purchase intent 14% higher among consumers who viewed an in-context ad.
What’s needed now is a way to examine available video quickly and accurately, giving advertisers confidence that their message will land at the right moment for their brand, resonate with who’s watching and appear in a brand safe environment. Machine learning is the most practical way to rapidly review user-generated content and determine whether an advertiser will want to appear alongside it. We have found that placing an ad effectively requires a three-step approach.
First, the instantaneous content evaluation process must begin with a frame-by-frame analysis of video, images, audio and text, as well as any metadata, in order to produce a brand safety and suitability score. Second, there must be industry alignment on what constitutes a good score, anchored to a reputable third-party framework such as the Global Alliance for Responsible Media (GARM) to keep the assessment unbiased. Lastly, processes must be created to conveniently package and deliver actionable data to advertisers on a daily basis, enabling immediate action on any issues with where their ads are appearing.
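The three steps above can be sketched in code. This is a minimal, hypothetical illustration: the class names, modality signals, risk thresholds and GARM-style tier labels are all assumptions for the example, not a real scoring API.

```python
# Hypothetical sketch of the three-step flow: (1) combine per-frame signals
# across modalities, (2) map the score to GARM-style suitability tiers,
# (3) package an actionable report. All names and thresholds are illustrative.
from dataclasses import dataclass
from statistics import mean

@dataclass
class FrameSignals:
    """Per-frame classifier outputs, each in [0, 1] where 1 = highest risk."""
    visual: float
    audio: float
    text: float      # on-screen text / captions
    metadata: float

def frame_risk(sig: FrameSignals) -> float:
    # Step 1: take the worst (max) modality signal per frame, so a single
    # risky modality cannot be averaged away by benign ones.
    return max(sig.visual, sig.audio, sig.text, sig.metadata)

# Step 2: map a numeric score onto GARM-style suitability tiers
# (threshold values are invented for this sketch).
TIERS = [(0.25, "low risk"), (0.5, "medium risk"), (0.75, "high risk")]

def suitability_tier(score: float) -> str:
    for threshold, tier in TIERS:
        if score < threshold:
            return tier
    return "floor"  # content no advertiser should appear against

def score_video(frames: list[FrameSignals]) -> dict:
    # Step 3: package an actionable record suitable for a daily feed.
    risks = [frame_risk(f) for f in frames]
    peak = max(risks)
    return {
        "mean_risk": round(mean(risks), 3),
        "peak_risk": round(peak, 3),
        "tier": suitability_tier(peak),
    }

frames = [FrameSignals(0.1, 0.05, 0.2, 0.6), FrameSignals(0.3, 0.1, 0.1, 0.6)]
report = score_video(frames)
```

Scoring on the peak rather than the average is a deliberate choice here: a video that is benign for 99% of its runtime but contains one unsafe scene should still be flagged.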
For example, if only metadata keywords were used, video clips of Tom Cruise discussing his movie ‘Top Gun: Maverick’ might be passed over by some advertisers because of the word “gun” or descriptions of fighter jets. Frame-by-frame analysis, in contrast, would recognise the clips as highly desirable content where most brands could safely run their ads.
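The contrast can be made concrete with a small sketch. The blocklist, clip title and frame-level score below are invented for illustration; the frame risk value stands in for the output of the frame-by-frame classifier described above.

```python
# Illustrative contrast: a naive metadata keyword blocklist vs. a
# frame-level risk score. All data here is invented for the example.
BLOCKLIST = {"gun", "weapon", "fighter"}

def keyword_blocked(metadata: str) -> bool:
    # Naive legacy approach: block if any blocklisted token appears
    # anywhere in the metadata text, regardless of context.
    tokens = {t.strip(".,:!?'\"").lower() for t in metadata.split()}
    return not BLOCKLIST.isdisjoint(tokens)

clip = {
    "title": "Tom Cruise on making Top Gun: Maverick",
    # Assumed output of a frame-by-frame classifier (0 = safe, 1 = unsafe):
    "frame_risk": 0.08,
}

blocked_by_keywords = keyword_blocked(clip["title"])  # "gun" triggers a block
safe_by_frames = clip["frame_risk"] < 0.25            # frame analysis passes it
```

The keyword filter rejects the clip on the word “gun” alone, while the frame-level score reflects what is actually on screen, which is the disconnect the article describes.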
There are more places than ever before for content consumption – and digital video’s popularity isn’t waning anytime soon. This is where the frame-by-frame approach stands out – as an ML-enabled solution that can rapidly evaluate troves of visual content. With user-generated content playing such a big role in consumers’ routines, a contextual analysis approach can better evaluate what’s on display and more effectively match messages to brand safe content without encroaching on viewers’ privacy.