Beta Launch of BrandGuard - Our Brand Safety Technology for Generative AI
Generation at machine scale requires validation at machine scale
Walt Whitman once wrote one of my favorite lines ever: "Do I contradict myself? Very well then I contradict myself. I am large. I contain multitudes." I think this describes generative AI very well. It's both incredibly useful and incredibly painful to use. It can save you a lot of time, but it's also very unreliable.
When we first started pitching our ad automation platform to brands, CMOs all asked us the same question: "If you create 2,000 image ads automatically, how do I know they are all on brand?" These CMOs know that you can't always trust what comes out of generative models, and sometimes it takes 1,000 images to get one that looks good enough for a real-world marketing campaign. It made us realize that if you really want to reap the benefits of running marketing campaigns at machine speed and scale, you need a machine-powered way to verify the results.
Here is an early example of Nova's platform generating new ad headlines for a website called Modern Mutt, straight from an LLM without any Nova models to filter or improve the output.
As you can see, there are lots of partial sentences, way too many pound signs, and other problems. After running it through a series of models to catch and discard the ads with these issues, we get much better output below.
And here is an example of the Nova product generating image ads; it took thousands of AI-generated original images to arrive at these.
Marketing teams thus face a dilemma: if you use machines to generate thousands of pieces of highly targeted content, but your team has to spend all day combing through it to make sure it's high quality, you lose the benefits of generative AI.
Today we are announcing the beta launch of BrandGuard. BrandGuard is an ensemble of models that runs on generative AI outputs to provide brand safety checks and test for "on-brandness." BrandGuard was trained on thousands of brand guideline documents and tens of thousands of brand images and pieces of content. Some models in the ensemble take care of the basics: no profanity, no humans with six fingers, no gibberish, and so on. The harder models to build are those trained on brand-specific content, which capture the general feel of a brand's style. We can also ingest brand guidelines and automatically create models that ensure your guidelines are being met for things like font, spacing, and color. Each model outputs a score from 1 to 100 for how on-brand a piece of content is. As a user, you can set thresholds for what to accept, what to reject, and what to pass along for human review.
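To make the accept/reject/review flow concrete, here is a minimal sketch of how threshold routing over an ensemble of scorers might look. All names, thresholds, and the toy stand-in checks are assumptions for illustration; they are not the actual BrandGuard models or API.

```python
# Hypothetical sketch of BrandGuard-style threshold routing (names and
# thresholds are assumptions, not the real Nova API). Each check scores
# content from 1-100; the lowest score across the ensemble decides its fate.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BrandCheck:
    name: str
    score_fn: Callable[[str], int]  # returns an on-brand score, 1-100

def route(content: str, checks: List[BrandCheck],
          accept_at: int = 85, reject_below: int = 50) -> str:
    """Return 'accept', 'reject', or 'review' based on the worst score."""
    worst = min(check.score_fn(content) for check in checks)
    if worst >= accept_at:
        return "accept"
    if worst < reject_below:
        return "reject"
    return "review"  # borderline content goes to human review

# Toy stand-in checks; a real system would call trained models here.
checks = [
    BrandCheck("no_gibberish", lambda c: 10 if "###" in c else 95),
    BrandCheck("length_ok", lambda c: 90 if len(c) < 90 else 60),
]

print(route("Adopt smarter walks with Modern Mutt", checks))  # accept
print(route("### Best dog ### deals ###", checks))            # reject
```

Routing on the minimum score means a single failing check (say, gibberish) is enough to reject an ad, which matches the filter-and-discard behavior described above.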
Along with this, we are testing a digital brand guidelines format, which you can see below. This gives you a way to share brand guidelines easily with third parties, keep them updated in a single place, and make them machine readable.
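As a rough illustration of what "machine readable" buys you, here is a hypothetical guidelines document expressed as structured data, with a trivial check run against it. The fields, values, and the `violates_voice` helper are invented for this sketch; they are not the actual Nova format.

```python
# Hypothetical machine-readable brand guidelines (structure and field
# names are assumptions, not the actual Nova format). Once guidelines
# are structured data, checks like the one below can run automatically.
guidelines = {
    "brand": "Modern Mutt",
    "fonts": {"headline": "Montserrat", "body": "Open Sans"},
    "colors": {"primary": "#1A3C5E", "accent": "#F2A33C"},
    "spacing": {"min_margin_px": 24},
    "voice": {"banned_words": ["cheap", "guaranteed"]},
}

def violates_voice(headline: str, rules: dict) -> list:
    """Return any banned words found in a generated headline."""
    lowered = headline.lower()
    return [w for w in rules["voice"]["banned_words"] if w in lowered]

print(violates_voice("Guaranteed cheap grooming!", guidelines))
# ['cheap', 'guaranteed']
```

The point of the format is exactly this: a guideline stored as data, not as a PDF, can be checked by a machine at the same scale the content is generated.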
We are currently working on an API for BrandGuard with two design partners who want to add BrandGuard to their products. If you are a generative AI company that wants to add this automated functionality to your product, or a brand that would like to try the beta on your generative content, please reach out. You can read more at the BrandGuard website.