Later Influence provides two complementary brand safety toolsets: On-Demand Brand Safety and the Brand Suitability Report. Together, they help you spot potential risks, detect inauthentic activity patterns, and judge a creator’s overall fit before you decide to partner with them.
Use On-Demand Brand Safety for a fast signal check. Use the Brand Suitability Report when you need a deeper, source-linked assessment.
On-Demand Brand Safety
On-Demand Brand Safety checks two things: audience authenticity patterns and content warnings.
Audience Analysis alerts highlight patterns consistent with inauthentic audience activity, such as suspicious follower or engagement behavior. Alerts are not automatic disqualifiers, but they should prompt you to investigate further.
Note: Audience Insights are only available for creators who have authenticated their social profiles in Later Social. Not all creators will have audience data.
Content Warnings use image analysis to estimate the likelihood that a post contains adult content, hate speech or divisive rhetoric, violence, suggestive material, and more. You can use this insight to review the specific post against your brand standards.
Generating On-Demand Brand Safety Data
- In Later Influence, find a creator that you’d like to review
- Select the creator's profile image to open the Influencer Lightbox
- Select the Insights tab
- Under Brand Safety, select Generate Brand Safety Insights (if not already generated)
Processing can take a few minutes depending on the time of day and the number of API calls in progress.
About Brand Safety Scores
Brand Safety scores are based on a machine learning algorithm designed to help brands make informed decisions when evaluating creators.
An Audience Analysis alert means the algorithm detected patterns within that creator's data that are consistent with inauthentic audience activity — like bot followers or engagements.
It's a signal to dig deeper into available data (profile quality, metrics, audience location and demographics) before deciding whether the creator is the right fit.
About Content Warnings
Content Warnings use Google's Vision API to detect possible explicit content in a creator's posts.
The five categories evaluated are: Adult, Spoof, Medical, Violence, and Suggestive Material. If any posts are flagged, review them manually to determine whether they fit your brand standards.
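The five categories map onto the likelihood levels that Google's Vision SafeSearch feature returns (VERY_UNLIKELY through VERY_LIKELY). As a rough illustration of how a likelihood estimate becomes a review flag, here is a minimal sketch; the threshold of LIKELY is an assumption for the example, not a documented cutoff used by Later Influence.

```python
# Illustrative only: maps Vision-style SafeSearch likelihood levels to
# review flags. The LIKELY threshold is an assumption, not Later's cutoff.
LIKELIHOOD_ORDER = [
    "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY",
]

def flagged_categories(safe_search: dict[str, str],
                       threshold: str = "LIKELY") -> list[str]:
    """Return categories whose likelihood meets or exceeds the threshold."""
    cutoff = LIKELIHOOD_ORDER.index(threshold)
    return [cat for cat, level in safe_search.items()
            if LIKELIHOOD_ORDER.index(level) >= cutoff]

# Hypothetical per-post annotation ("racy" is SafeSearch's term for
# suggestive material).
post = {"adult": "VERY_UNLIKELY", "spoof": "POSSIBLE",
        "medical": "UNLIKELY", "violence": "LIKELY", "racy": "VERY_LIKELY"}
print(flagged_categories(post))  # ['violence', 'racy']
```

Whatever threshold you pick, the flag is only a prompt to look at the post yourself, not a verdict.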
Brand Suitability Report
The Brand Suitability Report uses AI-assisted research to evaluate a creator across 12 risk categories and produce a scored assessment of their overall fit with your brand.
What the Report Includes
- Overall Risk Score: a brand-agnostic measure of the risk a creator might pose across multiple categories (profanity, political content, misinformation, competitor conflicts, and others)
- Brand Sensitivity Index: how strict your brand is across those same categories, providing context for analyzing risk
- Suitability Rating: combines the creator's intrinsic risk with your sensitivity score to assess overall alignment (higher ratings indicate better fit and lower risk)
- Detailed Category Assessments: category-by-category risk levels with supporting sources linked for your review
- Key Findings: major red flags or positive indicators, plus a data confidence indicator
How It Works
- Research: AI reviews publicly available sources (news, blogs, interviews, industry reports, forums)
- Analysis: findings are mapped to 12 risk categories and weighted alongside your sensitivity guidelines
- Scoring: produces Intrinsic Risk, Brand Sensitivity, and an overall Suitability Rating.
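Later's actual scoring framework is proprietary, but the general idea of weighting a creator's per-category risk by a brand's sensitivity can be sketched with a purely illustrative formula (the numbers, categories, and the weighted-average itself are assumptions for the example):

```python
# Purely illustrative: Later's real scoring framework is proprietary.
# Each category gets a creator risk (0-100, higher = riskier) and a brand
# sensitivity weight (0-1, higher = stricter). The suitability rating here
# is 100 minus the sensitivity-weighted average risk, so higher = better fit.
def suitability_rating(risk: dict[str, float],
                       sensitivity: dict[str, float]) -> float:
    weighted = sum(risk[cat] * sensitivity[cat] for cat in risk)
    total_weight = sum(sensitivity[cat] for cat in risk)
    return round(100 - weighted / total_weight, 1)

# Hypothetical inputs: a brand that is very strict about political content.
risk = {"profanity": 20, "political": 60, "misinformation": 10}
sensitivity = {"profanity": 0.2, "political": 1.0, "misinformation": 0.8}
print(suitability_rating(risk, sensitivity))  # 64.0
```

The takeaway is the interaction: the same creator scores differently for different brands, because the sensitivity weights change which risks dominate.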
Generating a Brand Suitability Report
- In Later Influence, find a creator that you’d like to review
- Select the creator's profile image to open the Influencer Lightbox
- Select the Brand Suitability tab
- Select Generate Report and allow a few minutes for the first load
You can also select the sources cited in the report to navigate to them directly, so you can review the underlying content yourself.
Brand Suitability Guidelines
You can enter campaign-specific guidelines to give the AI custom instructions on which areas to weigh when evaluating a creator's fit. When you provide guidelines, the report summary is tailored to those standards and highlights findings in the context of your expectations.
Where to add guidelines:
- In Later Influence, navigate to Campaigns from the left-hand sidebar
- Open the relevant campaign and select the Setup tab
- Select Marketing Plan
- Enter your guidelines in the Brand Suitability Guidelines field
Note: There is a 200-character limit for guideline inputs.
Tips for writing effective guidelines:
- Be direct and explicit — state what should be avoided without background or rationale
- Reference specific behaviors, content types, or themes to reduce ambiguity
- Group related concerns into a single guideline rather than listing many separate rules
- Include a timeframe when recency matters, such as "past 24 months"
- Write in instruction form: "Avoid creators who…"
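Putting the tips together, here are two hypothetical guidelines written in instruction form, with a check against the 200-character limit from the article (the wording of the guidelines is our own, not a Later template):

```python
# Hypothetical example guidelines following the tips above. The
# 200-character limit comes from the article; the wording is illustrative.
guidelines = [
    "Avoid creators who have promoted gambling, vaping, or alcohol "
    "in the past 24 months.",
    "Avoid creators who post profanity-heavy or politically divisive content.",
]

for g in guidelines:
    assert len(g) <= 200, f"Guideline exceeds the 200-character limit: {g!r}"
    print(len(g), g)
```

Note how each line groups related concerns (gambling, vaping, alcohol) into one rule and scopes it with a timeframe.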
How to Use the Report
Scan the Executive Summary for a quick call. Use the Detailed Category Assessments when you need context on specific risks or alignment issues. Combine the Suitability Rating with your team's judgment and legal guidance to decide whether to proceed, monitor, or decline a partnership.
Recommended Workflow
- Run On-Demand Brand Safety first to surface any Audience Analysis alerts or Content Warnings
- If you’d like to further assess a creator’s fit, generate the Brand Suitability Report for a fuller, source-linked view
- Document your decision with internal notes and ensure it aligns with your legal or compliance policy before inviting or approving
FAQs
Does an Audience Analysis alert mean a creator is fake?
No. It flags patterns consistent with inauthentic activity. Use it to guide deeper vetting — review profile quality, comment relevance, audience geography and demographics, and historical performance.
What categories trigger Content Warnings?
Adult, Spoof, Medical, Violence, and Suggestive Material. Each is a likelihood estimate; you decide whether the flagged content is acceptable for your brand.
Why don't I see any content flags?
The creator may not have any flagged images. The absence of flags doesn't guarantee suitability — still review content in context.
Where do Suitability scores come from?
From AI-assisted research of public sources and a proprietary scoring framework (Intrinsic Risk + Brand Sensitivity). The report shows supporting sources and a data confidence indicator where available.
How should I act on a low Suitability Rating?
Review the category breakdowns and cited sources to understand what's driving the score. Align with your internal tolerance and legal guidance before making a decision. The rating is a decision aid, not an automatic rejection.