Ads in ChatGPT: the next media surface, or the fastest trust collapse in digital?
OpenAI has confirmed it will start testing advertising in ChatGPT in the US, initially for logged-in adults on the Free tier and the new lower-cost Go tier. The format is deliberately constrained: ads will appear at the bottom of an answer when there is a relevant sponsored product or service, clearly labelled and separated from the organic response, with user controls to understand why an ad is shown and to dismiss it.
That sounds neat on paper. Behaviourally, it is a big deal.
This is not another feed; it is not even search as we know it. The early industry response is already converging on the same idea: “This will reward utility, authority, and trust, not cheap performance tricks.” It is advertising placed next to a moment of problem-solving, often with personal context and real intent.
If we get this right, it is the most human-adjacent performance environment we have seen in years. If we get it wrong, it will burn trust faster than any ad format before it.
The behavioural shift: from “attention” to “assistance”
Most digital advertising is built around interrupting attention.
Conversational AI is different. People arrive with a task and a question: “Help me decide”, “Help me understand”, “Help me do”. The user is already in a reflective, decision-making mode. That means the ad is no longer competing for attention. It is competing for legitimacy.
In a behavioural planning frame, the winning ad here is not the loudest; it is the most useful.
When it will be a good thing
This channel is genuinely promising when it behaves like a recommendation layer that respects the user’s goal.
It is good when:
- The ad is additive utility. It helps you complete the task, reduce effort, or compare options, without feeling like a sales ambush.
- The separation is obvious. Users can instantly tell what is sponsored and what is the assistant’s answer. OpenAI is explicitly saying ads will not influence answers and will be shown separately with clear labelling.
- The targeting logic feels contextual, not creepy. OpenAI says conversations stay private from advertisers and users can control personalisation. Whether the experience actually feels private will be the real test.
- Brands show restraint. Not every category belongs in an intimate assistant environment. The categories that win will be the ones that can genuinely help without exploiting anxiety or confusion.
Put simply, it will work when it feels like help.
When it will be a bad thing
The failure modes are structural.
It gets bad when:
- Users suspect answer steering. Even if the ad unit is “separate”, users will intuitively connect the ad with the answer. If the assistant starts feeling “sponsored”, the product breaks. This is why OpenAI is over-indexing on trust language in its announcement.
- Brand safety becomes “conversation safety”. OpenAI says ads are not eligible near sensitive or regulated topics (health, mental health, politics) and that under-18 users will not see ads during testing. Good intentions, hard execution.
- The assistant is wrong. If an answer is flawed and a sponsored placement sits underneath it, the brand gets dragged into the mistake.
- Ad tech habits creep in. Over-frequency, aggressive retargeting logic, clickbait creative, thin landing pages. The channel will reject those behaviours. Users will reject them faster.
This is a space where “maximise CTR” is not a strategy; it is a reputational risk.
A mature approach for brands and agencies
At Mediaplus, our edge is not that we can buy the new shiny thing first. It is that we plan around people, not placements.
So the approach should be:
- Suitability first, targeting second. Before we talk audiences, decide where a brand is welcome. Not just “safe”, welcome. In an assistant interface, appropriateness is the real currency.
- Creative as a service, not a slogan. If the ad cannot genuinely help, it should not run. The best creative here looks more like a tool, a comparison, a clear next step.
- Trust is the KPI that makes the other KPIs possible. Demand transparency, user controls, and clear separation in reporting and platform setup. If a platform cannot prove it, we do not scale it.
- Test incrementality, not hype. Because this environment sits close to decision moments, early performance will look great. Prove what is genuinely additive using holdouts and robust experimentation, as in the sketch below.
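To make the incrementality point concrete, here is a minimal sketch of what a holdout readout looks like, written in Python. The `incremental_lift` helper and every number in it are hypothetical, not drawn from OpenAI’s announcement or any live campaign; in practice the exposed/holdout split is randomised before the campaign runs.

```python
# Minimal sketch of a holdout incrementality readout.
# All figures are hypothetical illustrations, not real campaign data.
from math import sqrt

def incremental_lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Return absolute lift, relative lift, and a two-proportion
    z-score (normal approximation) as a rough significance check."""
    p_e = exposed_conv / exposed_n      # conversion rate, exposed group
    p_h = holdout_conv / holdout_n      # conversion rate, holdout group
    abs_lift = p_e - p_h                # incremental conversions per user
    rel_lift = abs_lift / p_h if p_h else float("inf")
    # Pooled standard error for the difference in proportions
    p_pool = (exposed_conv + holdout_conv) / (exposed_n + holdout_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / holdout_n))
    z = abs_lift / se if se else float("inf")
    return abs_lift, rel_lift, z

# Hypothetical numbers: 4.0% vs 3.5% looks like a win in a dashboard...
abs_lift, rel_lift, z = incremental_lift(400, 10_000, 350, 10_000)
print(f"absolute lift: {abs_lift:.2%}, relative lift: {rel_lift:.1%}, z = {z:.2f}")
# -> absolute lift: 0.50%, relative lift: 14.3%, z = 1.86
```

Note what the toy numbers show: a 14% relative lift reads as a clear win, yet at this sample size the z-score of 1.86 does not clear the conventional 1.96 threshold. That is exactly the trap with a channel that sits close to decision moments: the dashboard flatters, and only a proper holdout tells you what was genuinely additive.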
Why now?
OpenAI is under real economic pressure to fund compute and expand access. Its CFO has said revenue growth tracks available compute capacity and that 2026 is about “practical adoption”. Advertising is one route to make the free product sustainable.
That business reality does not need to be bad for users or brands; it just raises the bar on execution.
So what for clients?
This is a new surface that sits closer to decisions than most media. The upside is high-intent relevance, but the risks are trust and context. Our recommendation would be to treat it like a premium, high-scrutiny environment. Start with tight suitability rules, use “helpful” creative that earns attention, test incrementality before scaling, and prioritise brand experience and transparency over short-term performance spikes. If the channel cannot demonstrate clear separation between answers and ads, or reliable context controls, we stay in test mode.