Generative AI has changed how we work, but not always for the better. The productivity gains are clear, yet the quality of content has, in many cases, plateaued. We’re faster, but not always sharper. At a time when an estimated 54 percent of long-form LinkedIn content appears to be entirely AI-generated, readers are tuning out.
At Mediaplus UK, we’ve developed a simple principle: AI is not a shortcut to good work but an amplifier of it. The value comes not from the tools themselves but from how we structure the work around them, especially in research and analysis. This article shares our internal approach to using AI in content and strategy workflows.
Polished ≠ Good
Much of the AI-generated content we see today suffers from the same issue: it reads smoothly, avoids obvious errors, and lands with no impact. Call it “polite slop”: grammatically fine, structurally sound, and intellectually forgettable.
The issue is behavioural. Faced with a fluent first draft, people stop too early. The prompt is shallow, the output sounds plausible, and the critical-thinking switch stays off.
It is like a programmatic sales presentation delivered with total confidence: so convincing that you’re ready to sell in a test campaign, until you realise, twenty minutes in, that you still don’t know what the company actually does.
To address this, we use a structured four-step workflow: prompt, assist, review, and present. It supports clear thinking, improves output quality, and ensures that what we share is something we understand, believe in, and can explain with confidence.
The Research Stack: From Discovery to Structured Thinking
Before we prompt a large language model (LLM), we build context. The quality of any AI-assisted output depends on the thinking behind it. That thinking begins with structured research, using a combination of tools, each chosen for its specific strengths.
Elicit is our starting point for academic insight. It allows us to use natural language queries to explore peer-reviewed studies, extracting variables, evidence, and underlying hypotheses. This is particularly valuable when looking for behavioural mechanisms or frameworks to shape strategic thinking.
WARC AI gives us fast access to industry context. It helps us scan for marketing trends, campaign case studies, and sector-specific commentary. Its AI layer is useful when we want to identify patterns in effectiveness, media mix, or creative strategy across verticals.
Statista provides structured, visual data to ground our thinking in market-scale evidence. When we need consumer trendlines, platform usage data, or macro indicators, Statista offers clear, citable sources to support our arguments.
Each tool offers a distinct lens. Elicit gives us academic grounding. WARC AI brings real-world examples from marketing and media. Statista offers the quant layer that often helps validate qualitative insight. We do not treat these sources as interchangeable. They are complementary inputs that strengthen the foundation for more informed prompting.
Using this stack, alongside many other research tools, allows us to work faster without working shallowly. It gives us the clarity to brief AI tools with precision and direction, rather than relying on guesswork or vague starting points.
From Notes to Narrative: Synthesis with NotebookLM
Once we’ve gathered material, we shift into NotebookLM to analyse and synthesise our findings. NotebookLM acts like a research assistant that understands structure. We group articles, PDFs, and charts by theme, then use its AI to summarise, contrast, or extract implications, for example, asking it to contrast two studies’ findings on attention, or to pull out what a trend report implies for a specific client vertical.
This stage is critical. Without it, teams jump straight from raw links to prompting their LLM of choice, often without forming a point of view. Pre-synthesising the research not only improves the quality of our prompts but also reduces the risk of hallucination, because NotebookLM references only the source material you provide.
Prompting as Design, Not Input
The best prompts are not simply longer, but sharper. When our team writes prompts, we think like product designers. What’s the tone? What structure are we asking for? Is it a strategic narrative or a list of recommendations? Is it meant to challenge, explain, or persuade?
And critically, do we want it written as us, or for us?
These decisions change the outcome dramatically. In most cases we prompt iteratively, asking the model to rewrite with evidence, to refute its own argument, or to shift the tone to one of strategic scepticism, not because it sounds clever, but because each pass gets us closer to clarity.
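For illustration, a simplified version of that iterative loop might read as follows. The wording is hypothetical, not a house template:

First pass: “Using the attached synthesis, write a 400-word strategic narrative for a CMO audience, in our agency voice, with evidence behind each claim.”
Second pass: “Now refute the strongest claim in that draft, using only the source material.”
Third pass: “Rewrite the narrative with a tone of strategic scepticism, keeping only the claims that survived the challenge.”

The exact wording matters less than the pattern: draft, challenge, refine, with the source material anchoring every pass.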
Avoiding Cognitive Debt
Overuse of generative tools can lead to what researchers are beginning to describe as cognitive debt. EEG studies from 2024 suggest that writers who rely heavily on AI exhibit lower neural activity in key regions of the brain, and that even after returning to manual writing, their recall and mental engagement remain diminished.
One way to counter this is by building in moments to slow down and reset. Stepping away from the AI, even briefly, and writing a section manually can help you reconnect with the material. It keeps your thinking sharp and reinforces a sense of ownership over the work. These small pauses are not about doing things the hard way. They are about staying actively involved in the process.
From Tools to Discipline
McKinsey and Salesforce data from 2025 suggests that teams trained in prompt-based workflows are over 60 percent more productive and 50 percent more likely to innovate faster than competitors. But tools are not enough. The differentiator is discipline.
At Mediaplus UK, we actively encourage the use of AI across our teams. From early-stage research to creative refinement, it plays a role in accelerating high-quality work. But speed is only useful when paired with rigour.
We train our teams to think critically at every stage. Prompts are crafted with care. Research is cross-checked and combined across trusted sources. AI outputs are reviewed, edited, and reshaped through multiple iterations. The goal is not just to generate content quickly, but to produce thinking that stands up to challenge.
Everyone is expected to take responsibility for what they produce. That means understanding the logic behind it, being able to explain the reasoning, and feeling confident presenting the work to clients and stakeholders. AI can support that process, but it cannot replace the judgement that makes the work credible.
Takeaway: Research First, Prompt Second
If there’s one shift we recommend to any team using AI for content or strategy, it’s this: research first, prompt second. Too often, the workflow begins with an empty chat window. We start with our research tools. We synthesise in NotebookLM. Only once we understand the landscape do we engage an LLM to help shape the response.
That’s how we avoid AI-generated noise. That’s how we build something worth reading.