Why do we make the decisions we do? How do we explain those decisions? How do we teach others to make decisions the way we would? As people begin to offload parts of their workloads to machine learning and AI models, they need to both train the models and codify why certain decisions are made. At the same time, users of these models need to understand why the models decide the way they do, so they can begin to trust the models and know what to change when necessary.
Brands and creatives work through iterative processes when creating content. There is a lot of discussion, back and forth, review of proofs, reference to style guides, and general tacit knowledge about what goes into a high-quality asset. Why does this matter? If I gave you a grade on a test without telling you which questions you got wrong and why, you would have to work hard to figure out how the grade was decided. It is far more useful, for learning and improving, to know what to do differently next time. Similarly, as brands and creatives use AI to evaluate content, they need methods and models that explain why decisions are being made, so they can agree, disagree, or know what to change.
Lack of Explainability in AI/ML for Brands
There is a well-known tradeoff in machine learning between explainability and performance. Higher-performing models typically do not readily explain why they made a particular decision, while models whose decisions are easy to interpret tend to perform worse because they cannot capture non-linearities. To get the best of both worlds, many methods have been created to obtain explainability after inference. However, the current state of explainability in AI and machine learning still falls short of explaining why a decision was made. Let's walk through what currently exists; a short code sketch of these approaches follows the list.
Interpreting coefficients - In linear models, the coefficients tell you which variables matter, in what direction, and by how much. Unfortunately, linear models do not handle unstructured data like text and images well.
Feature importance - In AI and machine learning, features are essentially variables: the inputs to the models. There are many feature importance techniques, depending on the model algorithm. However, these methods only indicate which variables mattered most in making the decision; they say nothing about how an input could change, or about the direction and magnitude of each variable's effect.
SHAP - SHAP (SHapley Additive exPlanations) is an explainability method that quantifies the influence of each variable, both at the overall model level and for an individual observation. It tells you how much each variable contributed to a given decision, which is powerful for understanding what drove it.
LIME - LIME (Local Interpretable Model-agnostic Explanations) fits a simple surrogate model around a complex model's behavior in the neighborhood of a single prediction. The surrogate highlights which features were most influential for that prediction, which can reveal which areas of a text or an image the model is focusing on.
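To make the differences concrete, here is a minimal sketch of the four approaches above on a toy tabular example. It assumes the scikit-learn, shap, and lime packages are installed; the dataset and the feature names (color_distance, logo_scale, and so on) are illustrative placeholders, not real brand data.

```python
# A minimal sketch of the four explainability approaches described above.
# Assumes scikit-learn, shap, and lime are installed; the feature names
# are illustrative placeholders only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Toy dataset standing in for per-asset content features.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["color_distance", "logo_scale", "contrast", "saturation"]

# 1. Interpreting coefficients: sign and magnitude per feature in a linear model.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(feature_names, linear.coef_[0])))

# 2. Feature importance: which features a tree ensemble relied on overall.
forest = RandomForestClassifier(random_state=0).fit(X, y)
print(dict(zip(feature_names, forest.feature_importances_)))

# 3. SHAP: additive per-feature contributions to one specific prediction
#    (output format for classifiers varies with the shap version).
print(shap.TreeExplainer(forest).shap_values(X[:1]))

# 4. LIME: a local linear surrogate fit around one prediction.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      mode="classification")
explanation = lime_explainer.explain_instance(X[0], forest.predict_proba,
                                              num_features=4)
print(explanation.as_list())  # feature/weight pairs for this single prediction
```

Each of these answers "which features mattered, and by how much" for a prediction that has already been made; none of them says what the input should have been instead.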
Unfortunately, all of these methods are descriptive and diagnostic. They tell you how a decision was reached from the information provided, but not why, and none of them prescribes what action a user should take or how to correct the decision. A brand manager would have a hard time with a bare yes/no verdict that an image is off-brand without sufficient explanation: the explanation needs to point to the parts of the style guide that the image or text violates, along with the location of the errors. Alternatively, if you ask an LLM to explain why it made a decision, it tends to hallucinate and make things up, because it has no direct understanding of your brand.
What Brands Need
As brands adopt AI technologies, they need tools that fit into their workflows. One of the biggest parts of the content creation process is iterative design for brand consistency. Brand consistency matters because more consistent brands are more recognizable, stick better in the minds of customers, and generate more revenue. To support that, AI tools for verifying content need to identify errors and issues, note points of agreement, and explain why each check passes or fails. Brands and creatives need information that tells them what action to take. If a logo is incorrect, it should be called out. If a color is not on the palette, that area should be highlighted and compared to the nearest approved color. If a picture is too dark and stormy for the brand, the AI should say so when explaining why the image is off-brand. A sketch of what such an actionable check could look like follows below.
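As one hypothetical illustration of an actionable check, the sketch below flags an off-palette color and reports the nearest approved color along with a plain-language reason. The palette, the tolerance, and the simple Euclidean distance in RGB are assumptions made for illustration only; this is not BrandGuard's actual method.

```python
# Hypothetical palette check: flag an off-palette color and explain the failure
# by naming the nearest approved color. Euclidean RGB distance is an assumption
# for illustration; real brand checks would be more sophisticated.
from math import dist

BRAND_PALETTE = {            # hypothetical approved colors
    "primary_blue": (0, 82, 155),
    "accent_orange": (255, 121, 0),
    "neutral_gray": (88, 89, 91),
}

def check_color(rgb, tolerance=30.0):
    """Return a pass/fail decision plus the reason for it."""
    name, nearest = min(BRAND_PALETTE.items(), key=lambda kv: dist(rgb, kv[1]))
    distance = dist(rgb, nearest)
    passed = distance <= tolerance
    return {
        "passed": passed,
        "detected_color": rgb,
        "nearest_palette_color": {"name": name, "rgb": nearest},
        "distance": round(distance, 1),
        "explanation": (
            f"Color {rgb} is within tolerance of {name}." if passed
            else f"Color {rgb} is {round(distance, 1)} units from the nearest "
                 f"approved color ({name} {nearest}); consider replacing it."
        ),
    }

print(check_color((10, 90, 160)))   # close to primary_blue -> passes
print(check_color((200, 30, 30)))   # far from every palette color -> fails
```

The point of the sketch is the shape of the output: a decision, the evidence behind it, and a suggested correction, rather than a bare yes or no.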
Luckily, that's what we're building at BrandGuard. We place a high priority on creating a process that brands can interact with and understand, making their lives easier. We're developing methods that give brands the explanations they need about why content does not adhere to their style guide and brand guidelines. BrandGuard is the best in the world at teaching machines to understand brands. We do that by breaking down the components of each brand, informing over 20 different AI models about how brands work and the attributes of brands, and then tailoring that understanding to how each individual brand thinks about itself. When it comes to explainability, we use the models' deep understanding to explain decisions in language, and through a decision-making process, that aligns with how each brand would make its own decisions.
If you're interested in a demo, click here or contact us at sales@brandguard.ai.