I’ll begin with a tale of late-night woe.
As the devoted yet perennially sleep-deprived servant of a five-month-old, fur-covered tyrant (more commonly referred to as a “puppy”), I slumped glassy-eyed and end-of-tether-adjacent onto my sofa, staring down a dilemma:
Leave the Big Bad Wolf wannabe to “howl it out”, thus teaching him that his bellows won’t always garner my immediate attention? Or go to comfort him, alleviating the situation before it escalates from attention craving to irreversible meltdown?
Naturally, I did what any rational person would do in such a situation. I Googled it. “Should I leave my puppy to howl at night?” I typed, squinting at my phone’s screen.
Notice a problem?
In my confused and irritable state, I was looking for the answer. What I got (in response to my single search query) was a series of summaries that in many cases directly contradicted one another.
The AI Overview confidently informed me that no, I should not leave the little foghorn to wail. But I sought second, third and fourth opinions to cement this viewpoint. And that’s where things got sticky. While many of the subsequent snippets offered corresponding advice, others on the same results page advised me to ignore the howling completely.
This outcome was neither helpful nor particularly comforting.
Despite seeking the “right” answer via a search engine, I was left to handle the situation in the way I felt was right. Because I had no definitive guidance to turn to.
In Google’s own words (or the words of someone who works at Google, at least), “Search aims to connect human curiosity to knowledge as accurately as possible.” But how does this work when the “knowledge” being connected to my curiosity is pointing in completely different directions?
We have access to more information than ever before, but somehow this hasn’t made finding the “right” answers any easier. One well-meaning “tip” is cancelled out by another expert-backed “truth”, and vice versa.
But why do search engines so often serve up contradiction over consensus? And when does trusting your gut start to sound like the smartest strategy of all?
Why does so much conflicting advice exist online?
Firstly, there’s the small issue of context. Most advice isn’t universally true; it’s situational. We often seek definitive, black-or-white guidance, but the “right” answers tend to exist somewhere within the grey-hued purgatory in between.
In the case of my puppy-related quandary, there’s no definitive solution because it’s largely dependent on the context of my situation. Is the howling clearly driven by anxiety or more likely a plea for attention? Could I realistically afford to let him bawl for more than a few minutes without being scratched off the Christmas card list of the nice couple next door?
In reality, very few questions have clear-cut answers. This isn’t just true of pet care; topics like health, business, tech and marketing are all rife with variables. Two people can ask the same question but need (and receive) very different answers.
We crave certainty, not nuance
When faced with a dilemma, we’re rarely seeking an answer drenched in subtlety or shrouded in shade. We want decisiveness. Nuance is messy and bewildering. Caveats are inconvenient. But certainty is reassuringly unequivocal and uncomplicated.
In the pursuit of certainty, however, we’re faced with binaries: the answer must be either A or B. Yes or no. Do this or do that.
“It depends” might be the SEO industry’s unofficial catchphrase, but within this enduring in-joke lies an unavoidable truth — “it depends” is often the most accurate, authentic answer to any meaningful request for information or advice. The problem is: it’s just not that clickable.
Most content exists for clicks, not counsel
Let’s not forget that content is an economy. Most of what we absorb online isn’t shared out of pure-hearted goodwill — it’s created to rank, convert, or go viral. That means advice is often packaged in the most confident, click-worthy way possible.
- “Always do this.”
- “Never do that.”
- “Forget that; do this instead.”
Content creators and “thought leaders” rarely drive engagement through fence-perching. They build a following through conviction; through confident assertions and all-or-nothing conclusions. And in their quest to reach the top of the SERPs or become “LinkedInfluencers”, they’ll often flirt with hyperbole, purporting to have the “ultimate solution” or a “game-changing hack”.
In other words, everyone’s shouting in absolutes. There’s simply no room for ambiguity or grey areas.
Search engines are looking for relevance, not “truth”
Search engines don’t evaluate “truth” in the philosophical sense. The goal is to show pages that are most relevant to the user’s query, and for this they rely on proxies like authority and trustworthiness. Sure, guidelines like E-E-A-T help promote content that’s more likely to be accurate and reliable — but it’s more about showcasing relevant, credible information than establishing what’s “right” and what’s “wrong”.
This also explains why misinformation can surface. If misleading content is well optimised and matches with popular search intent, there’s theoretically nothing stopping it from ranking. That’s not a failure of morality but simply a limitation of the system. Search engines are sophisticated, but they’re still fundamentally driven by signals, not judgement.
Experts don’t always agree
Google loves experience. It’s what puts the “E” (or one of them, at least) in E-E-A-T. Content that demonstrates first-hand experience is seen as inherently more valuable and reliable because, as per the example in Google’s own guidelines, “If you’re looking for information on how to correctly fill out your tax returns, you probably want to see content produced by an expert in the field of accounting.”
That might be true, but even experts don’t agree on everything. Take the recent debate over whether big tech is becoming too big, for example: the opinions of industry experts are utterly polarised on this issue. There’s no universally held “truth”.
In short, authority doesn’t mean unanimity. Even when content is written by someone with solid, been-there-done-that credentials, they won’t necessarily fall in line with consensus. Expert opinion is shaped by the individual’s own experiences and principles, and it’s entirely possible (and very common) for two credible sources to reach opposing conclusions.
We often get the answers we want
When we’re looking for solutions, we tend to seek out advice that confirms what we already want to believe. So, we subtly (and perhaps subconsciously) frame our questions in a way that elicits our preferred response.
When I rephrased my question and searched again, the wording directly influenced the guidance I was offered — the AI Overview effectively gave me the answer each version of my query was unmistakably leaning towards, even though the two answers contradicted each other.
This creates a kind of digital confirmation bias, where we can easily cherry-pick sources that reinforce our existing views. Search engines, which are increasingly morphing into AI-driven “answer engines”, simply reflect back what we feed into them: ask a leading question, and you’ll inevitably get a leading answer.
This doesn’t necessarily mean the information is wrong. It’s just shaped by intent.
How can you tell what to trust and what to ignore?
It’s estimated that as many as 8 in 10 online searches are informational in nature. Most Googlers, then, are not looking for products; they’re not in the mood to transact — they’re searching for answers.
As luck would have it, the web is positively littered with informational content. But while much of it is genuinely valuable, an increasing amount is mass-produced guff or AI-generated slop — which largely summarises widely-known information rather than offering anything distinct or interesting.
This is why critical thinking and a healthy dose of skepticism are recommended. Sometimes, you need to look beyond chintzy headlines and pithy yet too-convenient answers.
Check the credentials behind the content
There’s a reason the pupils of Horace Green couldn’t trust Jack Black’s Dewey Finn to teach them about mass-energy equivalence in School of Rock: he wasn’t qualified for anything other than “blowing people’s minds with high voltage rock”.
Flimsy movie analogies aside (though there are more to come), credibility matters — especially when it comes to complex or high-stakes topics. Anyone can publish content, of course, but not everyone has the background or knowledge to do so responsibly.
So, look who’s behind the content. A casual blogger who covers everything from quantum mechanics to mysterious rashes isn’t likely to be “qualified” to offer counsel on either. But a respected physicist or a certified medical professional? Their insights are far more likely to be grounded in first-hand experience and peer-reviewed knowledge.
Don’t run away from nuance
While we’re often looking for easy answers — and would rather not have to draw our own conclusions from inconclusive information — genuine expertise often comes with caveats. It doesn’t always offer simplistic, straight-shooting answers or unqualified certitude.
Just because an expert responds to a question or request for advice with our favourite noncommittal phrase (“it depends”), that doesn’t mean they don’t know the answer. On the contrary, it’s likely they understand that the complex, nuanced nature of the topic means it can’t be reduced to headline-baiting one-liners or succinct summaries.
Sometimes, you need to be prepared to dig a little deeper. AI Overviews and snippets (the SERP features that strive to offer quick, click-free answers) are engineered to provide straightforward yet very broad responses. Often, the more complex the question, the more skeptical you should be of “tidy” answers.
Beware agendas and biases
As mentioned earlier, few brands or creators publish content simply to make the world a better place. Which means online content is rarely entirely objective — there’s typically going to be some kind of motive or spin involved, even if the goal is just driving visibility and clicks.
A Facebook ad agency fishing for new clients is hardly going to produce a blog telling you why Facebook marketing is dead, for example — even if many would argue this is true. But they’re not being deliberately deceptive; their case is probably compellingly argued, and it simply wouldn’t make much sense for them to undermine the service they’re selling.
But occasionally content can veer into slightly more partisan territory. Is the author pushing a particular political agenda, for example? Are they being overly provocative or sensationalist in an effort to incite heated debate?
Before you take it as gospel, think: why is the content slanted this way? What do they have to gain by offering this particular perspective? It’s not that no online content can be trusted — just that a little sprinkling of sodium chloride is often advised.
Don’t assume AI has all the answers
For all its generative powers, AI is still prone to the odd hallucination. Look, we’ve all stubbornly clung to beliefs that aren’t quite true (a young, cine-illiterate me once insisted 1998’s Godzilla remake was a bona fide masterpiece), but some AI programs have been shown to fabricate facts almost 40% of the time.
Reported examples include AI Overviews advising users to add glue to pizza sauce to stop the cheese sliding off, or its past tendency to hallucinate entire idioms. Users discovered that Google would invent (and explain) sayings instead of, y’know, telling the user they’re talking gibberish.
Fortunately, this glitch was recently patched, but it’s worth highlighting that it was the community who discovered it, not Google’s engineers…
Then there’s my earlier example, where I was able to manipulate AI Overviews simply through the phrasing and rephrasing of my query. This is called prompt bias, and refers to the way AI models can favour certain perspectives if the prompt entered by the user is intentionally or unintentionally one-sided.
The lesson? AI might be generating its responses by referencing sources it deems “reliable”, but its amalgamations of information aren’t necessarily to be taken at face value. And you shouldn’t shake hands with bears.
Choose your tribe and stick with it
Remember, you have the power to decide who to listen to and, more importantly, who to ignore. It can be brain-aching to sift through every “expert” voice clamouring for attention and clicks, and often the best way of blocking out the noise is aligning yourself with a single source of truth.
You might discover a source of information whose values and perspectives really resonate with you, for instance. Perhaps there’s a publication you’ve received sound advice from in the past, or one which simply communicates in a reassuringly empathetic and transparent way. There’s no harm in returning to the same source for guidance if it’s one you feel comfortable with.
You don’t have to accept everything they say without question or hesitation, of course (even those we trust the most don’t always come through for us), but you at least have a reliable “baseline” to return to amid uncertainty.
How can you offer credible advice as a brand or expert?
You’re likely familiar with the criminally overused yet admittedly valid claim that “content is king”, but as a brand, if all you’re doing is adding more confusing noise to an already crowded space, isn’t your content more court jester than crown-adorned ruler — distracting but not particularly useful?
Google’s helpful content guidelines remind us that their algorithms are “designed to present helpful, reliable information that’s primarily created to benefit people.” And while most content created for online audiences exists to serve a commercial purpose, for many brands it’s also about driving topical authority, in turn building genuine trust and credibility.
But when the online realm is becoming increasingly content-saturated, it’s no wonder audiences are dubious about who really knows what they’re on about. Why should they trust your blog post claiming link building is still effective in 2025 when the next search result preaches the polar opposite?
Lead with transparency
Trust is far more likely to blossom when your audience feels you’re not hiding anything.
But that doesn’t necessarily mean oversharing or flooding your content with disclaimers. It’s simply about being upfront: don’t be afraid to be open about your motivations — and your limitations.
Are you selling something? It’s not a crime, so say so. Is your advice based on personal experience rather than hard data? This is okay too, as long as you’re not trying to conceal the fact. If a strategy worked for you in a specific context, frame it as such rather than a universal truth.
Acknowledging what you don’t know is often just as powerful as proclaiming what you know. This kind of transparency signals conviction, not weakness. And it’s often refreshingly human, especially when the web is so stacked with half-truths and polished façades.
Acknowledge caveats and complexities
Complicated questions usually have complicated answers, and solid advice rarely boils down to a straightforward “yes” or a confident “no”: the right response may be “yes, but…” or “no, however…”. It might even be “yes and no”.
To borrow my above example, say you’re writing a blog titled “Is Link Building Still Effective in 2025?” The answer isn’t, “Yes, it’s totally effective and everyone should be doing it.” It comes with caveats: “Yes, so long as you’re focusing on relevance and quality.” “On its own, no. But it can form a powerful part of a holistic SEO strategy.”
If there isn’t a single right answer, there isn’t a single right answer. You’re not doing your audience any favours by trying to wrap a complex topic in a neat bow.
It’s okay to say the “quiet” part out loud: “It depends.” “It’s a complicated question.” “There isn’t a catch-all solution.”
It might not get you trending overnight on TikTok or X, but it shows you’re thoughtful rather than dogmatic. You don’t oversimplify — you consider all the angles, giving a response that’s rooted in fact rather than rigged for clickbait.
Back it up, back it up
If an argument doesn’t have hard evidence behind it, then it might not stand up to anything more than gentle scrutiny.
That’s not to say you can’t have opinions, of course — thought-leadership content is often about promoting the not-always-entirely-fact-based views of industry experts — but you can expect them to be treated with at least a wee bit of healthy suspicion.
Stats are powerful. “Most brands aren’t using AI to its full potential” is a fairly compelling statement. “Research shows that only one percent of company leaders feel their AI strategy is ‘mature’” surely holds greater heft, however.
These “proof points” can turn a vague assertion into a credible one, anchoring your ideas in tangible data or meticulously placed supporting quotes.
Yet when you’re dealing with YMYL topics like health or finance, evidence alone isn’t always enough. Having your content fact-checked or co-signed by a qualified professional (and showcasing this fact, e.g. “this content has been medically reviewed by X”) provides an extra trust signal in a niche where trust is effectively currency.
Offer guidance, not gospel
Remember that you’re creating content, not delivering a sermon. Rather than making bold, one-sided proclamations declaring the “one true way”, content is about answering questions, sparking thought, inviting conversation — it acknowledges that your audience might have different perspectives or knowledge levels, and that’s okay.
Most readers don’t want to be converted or scolded. They’re usually looking for the kind of measured insight and tactful guidance they can apply to their own situation, not a lecture.
Instead of proclaiming absolutes like “This is the only way” or “Everyone should do this now”, aim for language that invites thought and acknowledges diverse circumstances.
- “Here’s what I’d recommend, and why it works for me.”
- “This is considered best practice, but it might not work in all situations.”
- “If you’re experiencing this specific problem, you should do this.”
By all means share strong opinions. Take a bold stance. Adopt an authoritative voice. But your brand isn’t here to deliver diatribes from the nearest mountaintop.
Lean into your own experiences
People often take “advice” with a pinch of salt. What qualifies you to tell me what to do, anyway?
But it’s far harder to argue with lived experience.
A reader is more likely to trust someone who says, “I recommend doing X because it worked for me — here’s how,” for example, than an “expert” who simply claims, “Trust me, I know what I’m talking about.”
Even failures can provide useful insights. “We tried X but it went catastrophically wrong, so we’d advise against doing this.”
Wins, losses, lessons learned — your experiences make you credible and relatable, so lean on them. Your perspective is more valuable if you’ve “got the T-shirt”.
It’s a hard but unavoidable truth: the internet isn’t a neatly curated encyclopedia but a noisy “marketplace” of ideas. And when everyone shouts with equal volume, the most important skill is knowing what to listen to and what to put on mute.
As a brand, you can help your audience make sense of the noise rather than simply adding to it. Respect their intelligence. Embrace nuance. Deliver value that feeds off your experience. When there’s so much content trying to make waves — some useful, the majority expendable — trust is the real differentiator.