How Bonzer runs agentic content ops with Morrison
Half a year into running our own SEO and content work with Morrison alongside us: what the engine has changed across keyword research, audit, briefing, drafting, publishing, refresh and governance - and what is still very much the consultant's job. Includes the prompts our team actually uses.


We built Morrison because we couldn’t find a tool that did what we needed to do the work we do at Bonzer. Half a year in, with our SEO consultants and content team using it every day, this is a candid note on what running content ops with an agentic engine alongside us has actually changed - and, more interestingly, what it hasn’t.
Morrison is new. We’re still figuring out where it earns its keep and where it doesn’t. What’s already clear is that every step of the pipeline - keyword research, audit, briefing, drafting, publishing, refresh, governance - gets meaningfully better when the engine is doing the legwork next to the consultant.
An engine, not an oracle
The framing we landed on internally, and the one we’d encourage any agency leader experimenting with this to start from: the engine is not the driver. Our consultants and specialists are. Morrison sits next to them - reads the entire client site, holds the brand voice doc, joins Search Console, GA4, Google Ads and the live SERP - and answers the question they would otherwise have spent forty minutes assembling from six tabs.
The work that used to eat the day was the lookup, cross-reference and synthesis layer. Roughly 30% of any task. The part where the answer means three CSV exports and a manual stitch. That part is now more or less a sentence and an answer - and the consultant gets to spend more of the day on the work that actually needs them, with more context than before, because the lookup part is suddenly cheap.
What’s left is everything we were already good at, with more room to do it well. Strategy is still set by the consultant who understands the client’s business. Editorial calls are still made by the content lead who has read the last twelve months of the site. Brand positioning, market judgment, client relationship - all the things that make Bonzer a good agency to hire - remain human work.
That’s the engine. Deeper, faster, higher quality. Same consultants. Same content people. The job they were already great at, with a new instrument on the bench.
1. Keyword research - what’s actually worth chasing?
Most keyword research lives in a patchy stack of exports - a query report pulled from Search Console, a Keyword Planner CSV, a SERP tool scrape, all reconciled by hand in a spreadsheet. The data sources don’t talk to each other, so the reconciliation layer is where the time goes and where most of the signal quietly leaks out. A keyword-research session against the engine can pull seed terms and join them in one place: every query already earning impressions for the client’s domain in Search Console (including the long tail of near-miss queries that never get manual attention), volume, competition and CPC-implied commercial intent from the Google Ads Keyword Planner API, and - where it helps - the live SERP for each candidate.
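To make the shape of that join concrete - a minimal sketch in Python with pandas, where the two input frames stand in for Search Console and Keyword Planner exports (the column names are assumptions for illustration, not Morrison’s actual schema):

```python
import pandas as pd

def build_keyword_universe(gsc: pd.DataFrame, planner: pd.DataFrame) -> pd.DataFrame:
    """Join Search Console performance with Keyword Planner metrics per query.

    gsc:     query, impressions, clicks, avg_position  (Search Console export)
    planner: query, volume, competition, cpc           (Keyword Planner export)
    """
    universe = gsc.merge(planner, on="query", how="outer")
    # CPC as a rough proxy for commercial intent: advertisers pay for what converts.
    universe["commercial_intent"] = universe["cpc"].fillna(0).rank(pct=True)
    return universe
```

The point isn’t the merge itself; it’s that once the sources live in one frame, every downstream question becomes a filter instead of a reconciliation project.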
A run of the Skill returns a clustered, intent-tagged keyword universe scored against the client’s actual authority - not a generic difficulty score lifted from a third-party tool. That difference matters more than it sounds. A keyword that reads as “hard” for a generic .com is often very reachable for a site that already owns the surrounding cluster; a keyword that reads as “easy” can be a dead end if the client has no supporting authority. Where the pattern fits, the candidate set walks into kickoff already prioritised on what’s defensible for this specific client - not on what a generic tool thinks the average site can win.
For engagements where it earns its keep, the Skill consistently surfaces three categories that consultants otherwise miss. First, queries the client already has impressions for but ranks outside the top 10 - the cheapest wins available, and the ones that rarely surface unless you join Search Console to the live SERP. Second, commercial-intent candidates that competitors monetise but the client doesn’t address yet - the gap that compounds into pipeline if left alone. Third, long-tail clusters around the client’s strongest topical authority - the queries that compound rather than cannibalise, because the engine has actually read the site before scoring them.
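The first two categories are simple filters against a joined frame like the one above - sketched here with thresholds and column names as illustrative assumptions, not the Skill’s internals:

```python
import pandas as pd

def near_misses(universe: pd.DataFrame) -> pd.DataFrame:
    """Queries already earning impressions but ranking outside the top 10."""
    mask = (universe["impressions"] > 0) & (universe["avg_position"] > 10)
    return universe[mask].sort_values("impressions", ascending=False)

def commercial_gaps(universe: pd.DataFrame, covered_queries: set) -> pd.DataFrame:
    """High-intent candidates the client has no page addressing yet."""
    mask = (universe["commercial_intent"] > 0.8) & ~universe["query"].isin(covered_queries)
    return universe[mask].sort_values("cpc", ascending=False)
```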
The point of joining the data stack isn’t a fancier export. It’s that the consultant can ask a question that crosses sources - “which of these 800 candidates have impressions but no clicks, what’s the dominant blocker per cluster, and what’s the realistic effort to get to top 3?” - and get a single answer that takes the client’s actual authority into account, not an industry average. When it’s part of the flow, the kickoff conversation starts from a defensible shortlist rather than a brainstorm.
2. Audit - where does the portfolio actually sit?
Every engagement starts with the same question: what should this client’s next move be? An audit against the client’s indexed site, joined with Search Console, GA4 and the live SERP, can land a defensible cluster plan in an afternoon - work that, before the engine, lived in spreadsheets across two days.
The consultant still makes the call. The floor of the input lifts - from “what I had time to look at” to “the whole portfolio, scored.” That difference shows up in the recommendation. An audit can pull a coverage map across every cluster a client owns, a list of clusters the competition owns that the client doesn’t touch yet, and a freshness scan against the live SERP for every page that ranks - the kind of input that, where the pattern fits, walks into kickoff as a starting point. Two days of work, surfaced as a conversation.
Two things matter in the audit stage. First, the engine leads with the crawl - it knows what the site actually says, not just what the rankings show. Second, the model and the data sources are chosen per question, and we can swap them mid-conversation when a different angle helps. A reasoning-heavy model for the “why is this cluster underperforming” question; a fast model for the “list the URLs targeting this entity” lookup. The consultant stays in the driver’s seat; the engine handles the parallel reads.
The output that tends to pay back hardest is not the cluster map itself but the intent-drift report - a pass that compares what each ranking page intends to do against what the current SERP rewards. Intent shifts quietly; rankings drop noisily. Catching the shift before the rank drop is the difference between a planned refresh and a panic refresh.
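Schematically, the drift check compares two classifications - one for the page, one for the SERP it competes on. In the sketch below, classify_intent is a deliberately crude stand-in for whatever intent model the pass actually uses:

```python
from collections import Counter

def classify_intent(text: str) -> str:
    """Toy heuristic; a real pass reads full pages, not titles."""
    commercial = ("buy", "price", "pricing", "best", "vs", "review")
    return "commercial" if any(w in text.lower() for w in commercial) else "informational"

def intent_drift(page_title: str, serp_titles: list) -> dict:
    """Compare what the page intends against what the current SERP rewards."""
    page_intent = classify_intent(page_title)
    counts = Counter(classify_intent(t) for t in serp_titles)
    serp_intent = counts.most_common(1)[0][0] if counts else "unknown"
    return {"page": page_intent, "serp": serp_intent,
            "drifted": page_intent != serp_intent}  # flag before the rank drop, not after
```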
3. Brief - what does this page need to be?
Briefing is one of the more fragile steps of the pipeline. A brief is a translation between strategy and writing, and the translation breaks in two directions - a brief too thin on structure, or one so prescriptive it leaves no room for craft. Both failure modes cost time in revision.
The brief Skill lifts the floor of the brief without flattening it. A run reads the live SERP, the client’s existing pages, and the internal-link graph, and returns a brief that’s structurally complete: title direction, headings, entities, internal links, voice section, source candidates. The writer takes that and adds the part that no engine should be writing - the angle, the editorial voice, the argument the page is actually making.
The most useful single feature in this stage isn’t the brief itself. It’s the cannibalisation check the Skill runs before the brief is drafted - which, where the pattern fits, can quietly turn a meaningful share of “new page” requests into “refresh this one instead.” It scans the corpus for any page already targeting the same intent and either redirects the request to a refresh of the existing URL or proposes a sharper angle that doesn’t cannibalise the page already ranking.
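One plausible shape for that check - token overlap standing in for the embedding-based similarity a production pass would more likely use, and the corpus reduced to a url-to-intent map:

```python
def intent_tokens(text: str) -> set:
    return {w for w in text.lower().split() if len(w) > 3}

def cannibalisation_candidates(proposed_intent: str, corpus: dict,
                               threshold: float = 0.5) -> list:
    """Existing URLs whose target intent overlaps the proposed page's.

    corpus maps url -> the intent that page already targets.
    Any hit above threshold means: refresh that URL, don't write a new page.
    """
    target = intent_tokens(proposed_intent)
    hits = []
    for url, existing_intent in corpus.items():
        tokens = intent_tokens(existing_intent)
        jaccard = len(target & tokens) / max(len(target | tokens), 1)
        if jaccard >= threshold:
            hits.append((url, round(jaccard, 2)))
    return sorted(hits, key=lambda h: -h[1])
```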
A working principle the engine makes easy to enforce: no new page enters the pipeline without a corpus check. The discipline used to require a senior consultant’s time; the brief Skill folds it into a step. Where it’s part of the flow, a decent share of the wins are pages that don’t get written. The shipped artifact is a sharper plan - fewer pages, each doing more work, and an internal-link graph that the engine maintains as new briefs are scoped.
The other quiet win is AI Overview eligibility. The Skill produces an entities and FAQ list scoped to what the AI Overview for the target query actually surfaces, so a draft can ship with the structure needed to be considered. It doesn’t guarantee inclusion - but it stops the predictable mistake of shipping a page that was never eligible in the first place.
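A naive version of that eligibility check - plain substring matching standing in for proper entity linking, with the required lists assumed to come from the Skill’s read of the AI Overview itself:

```python
def eligibility_gaps(draft: str, required_entities: list, required_questions: list) -> dict:
    """Which entities and FAQ questions the AI Overview surfaces that the draft skips."""
    text = draft.lower()
    return {
        "missing_entities": [e for e in required_entities if e.lower() not in text],
        "missing_questions": [q for q in required_questions if q.lower() not in text],
    }
```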
4. Produce - drafting and refining
Morrison isn’t the drafting surface. The writer drafts in their document of choice; Morrison sits next to the draft - a chat the writer can ask anything of, against the same crawl, brand voice doc and SERP the brief was built on. It is, in our experience, the cleanest configuration: the document is the writer’s, the engine is on call.
The questions writers actually ask, day to day: Does this paragraph match the voice doc? What’s an internal link target for the methodology section? Is this claim supported, or do we need a source? Are we eligible for the AI Overview as written - and if not, what changes? Each one used to mean five minutes of context-switching and three open tabs. With the engine on call, each one becomes a sentence and an answer. For writers who lean into the workflow, twenty asks an article isn’t unusual - and the day reshapes itself.
The most overlooked feature on this surface is the model picker. Some tasks need depth - voice fit, originality scoring, contradiction detection across the corpus - and some need speed - link-target lookups, definition rewrites, quick fact checks. The writer picks the right model per message, and the chat keeps the context across switches. It takes about a week to internalise which model fits which question, and pays back from there.
The other thing we’ve learnt: Morrison earns its place by being the second pair of eyes that’s available at 11pm on the Thursday before a Friday ship. Writers don’t want a chatbot; they want a senior editor on call who has read the brief, the corpus and the voice doc, and who can give a useful answer in under a minute. That’s the bar.
5. Publish - the QA gate before it ships
Quality enforcement traditionally rests on senior review. Senior review is irreplaceable for judgment - the angle, the argument, the brand sensibility. But a lot of what senior reviewers actually check is mechanical: voice fit, metadata, schema, internal links, compliance language, originality. A publish-gate Skill collapses the mechanical layer into one call.
The gate isn’t there to replace the editor. It’s there so that when the editor reads the draft, they’re reading the thing they’re actually best at reviewing - the substance - and not catching a 67-character title tag. The five things it catches most often, in order: voice drift on the closing section, internal link orphans on the new page, a meta description over 160 characters, unsupported claims in the body, and schema fields that the template forgot to populate. None of those are interesting problems. All of them used to slow down a release.
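The mechanical layer is simple enough to sketch. The limits and field names below are illustrative, not the gate’s actual rule set:

```python
from dataclasses import dataclass, field

@dataclass
class GateReport:
    failures: list = field(default_factory=list)

    def ok(self) -> bool:
        return not self.failures

def publish_gate(title: str, meta: str, inbound_links: list, schema: dict) -> GateReport:
    """The mechanical layer of the gate; judgment stays with the editor."""
    report = GateReport()
    if len(title) > 60:  # the 67-character title tag, caught here
        report.failures.append(f"title tag is {len(title)} chars (limit 60)")
    if len(meta) > 160:
        report.failures.append(f"meta description is {len(meta)} chars (limit 160)")
    if not inbound_links:
        report.failures.append("page is an internal-link orphan")
    for key, value in schema.items():
        if not str(value).strip():
            report.failures.append(f"schema field left unpopulated: {key}")
    return report
```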
For regulated-industry clients - financial advisors, healthcare, a couple of legal-adjacent brands - the compliance pass has become the most valuable single thing on the platform. Where it’s part of the publish flow, nothing on a regulated engagement leaves without it. The pass reads the draft against the client’s specific compliance brief (disclaimers, restricted language, mandatory citations) and either clears the page or returns the offending lines with the required edit. It’s the kind of check that used to live in a senior’s head and now lives in a Skill the team can run unsupervised.
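In sketch form, with the client’s compliance brief reduced to two lists - a real pass also proposes the required edit, which this doesn’t attempt:

```python
def compliance_pass(draft_lines: list, restricted: list, mandatory: list) -> dict:
    """Flag restricted language line by line; list mandatory blocks that are absent."""
    full_text = " ".join(draft_lines).lower()
    violations = [
        (n, line, term)
        for n, line in enumerate(draft_lines, start=1)
        for term in restricted
        if term.lower() in line.lower()
    ]
    missing = [block for block in mandatory if block.lower() not in full_text]
    return {"violations": violations, "missing_mandatory": missing}
```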
6. Refresh - keeping the corpus alive
Half of the value of a content operation isn’t what you write next; it’s what you do with what you’ve already shipped. Most agencies don’t refresh well because the prioritisation is thankless work - reading 200 posts, scoring them, defending the ranking. So refresh sprints either run on instinct or don’t run.
Refresh prioritisation is a Skill that runs across the whole blog in one shot, producing a ranked queue with a one-line rationale per page. The score is a composite of ranking decay velocity, internal-link weight, freshness against the SERP, conversion contribution from GA4, and the live intent comparison. When a refresh sprint comes around, the prioritisation can be briefed in one sitting - the consultant reads the top 20, accepts most, argues with a couple.
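The composition is roughly a weighted sum. The weights below are invented for illustration; the real tuning is per client:

```python
from dataclasses import dataclass

@dataclass
class PageSignals:
    decay_velocity: float    # positions lost per month, normalised to 0..1
    link_weight: float       # internal-link authority, 0..1
    staleness: float         # freshness gap against the live SERP, 0..1
    conversion_share: float  # GA4 conversion contribution, 0..1
    intent_drift: float      # 1.0 if the SERP intent has moved, else 0.0

WEIGHTS = {"decay_velocity": 0.30, "staleness": 0.20, "conversion_share": 0.20,
           "link_weight": 0.15, "intent_drift": 0.15}

def refresh_score(signals: PageSignals) -> float:
    """Higher score = earlier in the refresh queue."""
    return sum(w * getattr(signals, name) for name, w in WEIGHTS.items())
```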
The second Skill in this stage - decline diagnosis - is the one clients ask about most. “Why is this page slipping?” is a question that used to have a vague answer. With the live SERP joined to ranking history and the page content, the answer typically lands as a paragraph and a fix. Most of the time it’s one of four things: the SERP added an AI Overview the page isn’t eligible for; the intent has shifted from informational to commercial; a competitor shipped a sharper page; or the page itself stopped being updated and the SERP started preferring fresher entries.
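Schematically, the diagnosis is a priority-ordered walk over those four causes - the boolean inputs here stand in for the joined signals:

```python
def diagnose_decline(ai_overview_added: bool, page_eligible: bool, intent_shifted: bool,
                     competitor_outranks: bool, months_since_update: int) -> str:
    """Map the joined signals to the four most common causes, in rough priority order."""
    if ai_overview_added and not page_eligible:
        return "SERP added an AI Overview the page isn't eligible for"
    if intent_shifted:
        return "intent shifted from informational to commercial; the page answers the old query"
    if competitor_outranks:
        return "a competitor shipped a sharper page"
    if months_since_update > 12:
        return "page went stale; the SERP now prefers fresher entries"
    return "no dominant cause - escalate to a consultant"
```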
Refresh is also where Morrison most often catches the merge case. Two pages quietly competing for the same query, neither winning. The prioritiser flags them as a cannibalisation pair, the team picks the canonical, redirects the other, and the consolidated page does what neither was managing alone.
7. Govern - keeping the site honest
The least glamorous part of running content at scale: cannibalisation, orphan pages, stale claims, internal contradictions. Every long-running site accumulates these the way a codebase accumulates dead code, and unless someone’s job is to clean them, they compound until the next migration.
Governance is the stage where Morrison most clearly does work that wouldn’t otherwise get done. Cannibalisation and contradiction checks can run as a baseline pass on retainer clients - weekly cannibalisation, monthly contradiction sweep, quarterly orphan and dead-end-link audit are typical cadences - and what surfaces is almost always real, and almost always includes at least a couple of items the team would not have found on their own. Stale claims are the most quietly damaging: a product spec stated one way on a launch page and a different way on the FAQ a year later, both still indexed, both contradicting the client’s newest source of truth.
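Expressed as a Workflow cadence, a baseline like that might look as follows - a sketch only; cadences vary per retainer:

```python
# Illustrative retainer baseline; job names are assumptions, not product terms.
GOVERNANCE_CADENCE = {
    "cannibalisation_check": "weekly",
    "contradiction_sweep": "monthly",
    "orphan_and_dead_link_audit": "quarterly",
}

def passes_due(week_number: int) -> list:
    """Which governance passes a Workflow would fire in a given week of the engagement."""
    every_n_weeks = {"weekly": 1, "monthly": 4, "quarterly": 13}
    return [job for job, cadence in GOVERNANCE_CADENCE.items()
            if week_number % every_n_weeks[cadence] == 0]
```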
The governance work isn’t about catching mistakes. It’s about a site that gets quietly more correct over time, instead of slowly drifting away from accuracy. The team review stays the same - a consultant reads the surfaced items, decides what needs an edit and what’s acceptable variation - but the surfacing can be ambient rather than a project. That’s a difference clients feel even if they can’t name what changed.
What hasn’t changed
Almost more important than the list above is the list of things we still do exactly the way we did before:
- Strategy. The decision of what a client should be doing this quarter, and why, is still made by a consultant in a room with a client. The engine informs the decision; it doesn’t make it.
- Editorial voice. Voice is set by editors. The engine checks compliance with voice - it doesn’t write voice.
- The angle of a piece. A brief can be structurally complete and still need a writer to figure out what the piece is actually arguing. The engine doesn’t argue.
- The client relationship. No surprise here. People hire people.
The engine informs every one of those; it makes none of them.
What this means for the people on both sides
For our existing clients, this is the experience we care most about getting right: same team, same judgment, deeper and faster and more ambitious work. More from the same. Without the quality compromise that usually rides along when an agency adopts AI loudly.
For our consultants and writers, the experience is one we care about just as much: the job got more interesting. The extractive part shrank. The judgment part grew. People are doing more of the work they’re best at.
What we’re still figuring out
We’re half a year in. Plenty is still moving:
- Where Skills end and the consultant begins. Some Skills got too ambitious early on and tried to do an entire step in one call. The good ones are sharper now - one job per Skill, chained together rather than overloaded.
- How much chat vs. how much Workflow. Conversational work is best for exploration and judgment; recurring work is best as a Workflow that runs on a cadence. Drawing the line in the right place per client is still an ongoing calibration.
- Picking the right model. The fastest path to better output is often the model picker, and we’re still building the internal intuition for which model fits which task. Our writers are leading on that one.
- What we shouldn’t automate. The honest answer is “more than we initially thought.” Some of the highest-leverage decisions in content ops are deliberately slow.
Where we’re taking this
Our vision for Morrison is straightforward: the everyday work companion for people whose job touches a website. The same goal for our consultants and content team as for everyone else - do more, in less time, at higher quality, on the work that actually matters.
For Bonzer, that’s already a different agency than it was six months ago. The same people, doing deeper, faster, higher-quality work, with an engine on the bench. The clients notice.
If you run content for a website - whether you’re an in-house team, an agency, or one of those founders who somehow ends up owning the blog - Morrison is opening up.

CEO, Morrison
Ulrich is CEO of Morrison and founded Bonzer in 2017, growing it into one of Scandinavia's leading SEO agencies with 900+ clients across Copenhagen, Oslo, and Stockholm. At Morrison he leads strategy, operations and go-to-market, bringing years of hands-on SEO and content work to the platform side of the business.