Now don’t get us wrong. Keyword research is the lifeblood of SEO and organic content strategy, and for the past couple of decades pretty much any way of doing it would have got you somewhere. Recent years have seen SEOs come up with masses of inventive ways to process keyword sets, draw more and better insights from them, and put them to use in ever more sophisticated ways.
But the great limitation of traditional keyword research is scale. For much of the SEO world, the holy grail has long been scaling keyword research projects from a few thousand keywords up to hundreds of thousands, or even a million. There are any number of benefits to conducting keyword research at that scale – in this post we’ll introduce the concept and look at what has held it back until now.
What do we mean by keyword research at scale?
First of all, we’ll assume that if you’re reading this, you know what keyword research is. If you don’t, here’s an introduction to keyword research. When we talk about ‘keyword research at scale’ we’re talking about an output keyword set of hundreds of thousands – a million, even.
Why would you want that? There are lots of very good reasons to do keyword research at scale, but there are also reasons why it hasn’t really been done up to now. We’ll be covering the former next week; in this post we’re going to look at the latter.
Traditional keyword research has its limitations
Traditional keyword research is a very manual task. It’s labour-intensive, and that reliance on human input rules out working at scale; manually reviewing and categorising keywords beyond a few hundred is monotonous and error-prone – just ask any SEO. That’s one of the main reasons for the few-thousand limit on most keyword research projects.
But if you have a large website, its full ranking potential doesn’t sit within those few thousand keywords; it will have the potential to rank for hundreds of thousands. That manual limitation makes it really hard (or, at best, really expensive) to assess that potential fully and reliably. Now, it’s important to stress that for lots of sites this really isn’t an issue – traditional keyword research is the ideal process for the job. But if your site works at enterprise scale, with thousands of pages, the process is limiting. Here’s why.
Limited data sources
Access to data is the next difficulty with traditional keyword research. Over the last few years, keyword data has become less and less freely accessible for SEOs from the sources they traditionally used to get it – namely Keyword Planner. There are lots of keyword tools out there, but their reliability varies enormously, and you simply have to know what you’re doing to distinguish between them.
Limited tools
Then there’s the reliance on Excel for keyword projects. In traditional keyword research, SEOs will generally use Excel or a similar program for most of the work. But anything over a few tens of thousands of rows becomes clunky to work with, and Excel caps out at just over a million rows per sheet (1,048,576) in any case. Going beyond that efficiently means building your own database.
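To illustrate that database route, here’s a minimal sketch assuming a hypothetical CSV export with `keyword` and `search_volume` columns – the file and column names are placeholders for whatever your keyword tool actually produces. Even plain SQLite comfortably handles millions of rows where a spreadsheet struggles:

```python
import csv
import sqlite3

# Load a large keyword export into SQLite instead of a spreadsheet.
# The file and column names ("keyword", "search_volume") are illustrative;
# adjust them to whatever your keyword tool actually exports.
conn = sqlite3.connect("keywords.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS keywords (
           keyword TEXT PRIMARY KEY,
           search_volume INTEGER
       )"""
)

with open("keyword_export.csv", newline="", encoding="utf-8") as f:
    rows = ((r["keyword"], int(r["search_volume"])) for r in csv.DictReader(f))
    conn.executemany(
        "INSERT OR IGNORE INTO keywords (keyword, search_volume) VALUES (?, ?)",
        rows,
    )
conn.commit()

# Queries that grind a spreadsheet to a halt stay fast here,
# e.g. pulling the top opportunities by search volume:
top = conn.execute(
    "SELECT keyword, search_volume FROM keywords "
    "ORDER BY search_volume DESC LIMIT 10"
).fetchall()
```

Sorting, deduplicating and aggregating – the operations that make a million-row spreadsheet unusable – all become routine queries once the data sits in a proper database.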
To err is human
And finally, let’s come back to those manual errors themselves. Ask any seasoned SEO about the hours they’ve spent manually categorising keywords, and you’re bound to induce a thousand-yard stare. Sit and categorise keywords for hours on end and you will make mistakes. SEOs have known about this for a long time and plenty have come up with clever keyword categorising scripts to help. But the fact remains that accuracy of categorisation is still an issue.
The categorisation problem
If keywords aren’t categorised properly it’s difficult to identify strengths and weaknesses for different areas of a website. That makes this exercise a hugely important part of keyword research. And the more granular you can get, the stronger the insights become.
However, this is still typically done manually by agencies and in-house teams. Doing it accurately takes a huge amount of time, and that naturally makes it prohibitively expensive. Generally, the best any SEO team can hope for with a manual approach is a fairly all-purpose categorisation (we sketch what a simple scripted version looks like after the list). This takes in, for example:
- Brand vs generic
- Awareness vs consideration vs decision
- Mobile vs desktop
- Informational vs transactional vs navigational
- Top-level categorisation based on:
  - Website folder structure
  - Existing product categories
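As promised above, here’s a minimal sketch of the kind of rule-based categorising script SEOs write, covering two of those dimensions (brand vs generic, and search intent). The brand terms and intent markers are hypothetical placeholders; a real project needs much longer, client-specific lists – which is exactly where the accuracy problems creep in:

```python
import re

# Hypothetical brand terms and intent markers; real projects need
# far longer, client-specific lists, which is where errors creep in.
BRAND_TERMS = {"acme", "acmeshop"}
TRANSACTIONAL = {"buy", "price", "cheap", "deal", "discount"}
INFORMATIONAL = {"how", "what", "why", "guide", "tips"}

def categorise(keyword: str) -> dict:
    """Assign a brand/generic label and a rough intent label to one keyword."""
    tokens = set(re.findall(r"[a-z0-9]+", keyword.lower()))
    if tokens & TRANSACTIONAL:
        intent = "transactional"
    elif tokens & INFORMATIONAL:
        intent = "informational"
    elif tokens & BRAND_TERMS:
        intent = "navigational"
    else:
        intent = "unclassified"
    return {
        "keyword": keyword,
        "brand": "brand" if tokens & BRAND_TERMS else "generic",
        "intent": intent,
    }

print(categorise("buy acme running shoes"))
# -> {'keyword': 'buy acme running shoes', 'brand': 'brand', 'intent': 'transactional'}
```

Even this toy version shows the fragility: a keyword like ‘what price acme shoes’ matches both informational and transactional markers, and whichever rule fires first wins.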
Keyword filtering – the process of weeding out the irrelevant keywords that your keyword tool returns – is also often done manually. That means someone going through a (usually huge) list of words and deleting the ones they deem irrelevant to the brand and/or the project. Again, this wastes a lot of person-hours, or even ends up relying on the client to assist. The results are hit-and-miss, and ultimately come down to the competence and knowledge of the person doing it.
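For illustration, here’s a minimal sketch of how such filtering is typically scripted, using a hypothetical blocklist – the terms are placeholders, and the approach still inherits the judgement (and blind spots) of whoever compiles the list:

```python
# Filter an exported keyword list against a blocklist of irrelevant
# terms. The blocklist here is a hypothetical example; in practice it
# simply encodes the judgement of whoever compiles it.
BLOCKLIST = {"free", "jobs", "login", "wiki"}

def is_relevant(keyword: str) -> bool:
    """Keep a keyword only if none of its words appear in the blocklist."""
    return not any(term in keyword.lower().split() for term in BLOCKLIST)

keywords = ["acme shoes review", "acme careers jobs", "running shoes free"]
relevant = [kw for kw in keywords if is_relevant(kw)]
# -> ['acme shoes review']
```

The script saves the person-hours, but notice that it only automates the deletion step – deciding what belongs on the blocklist is still the same manual, knowledge-dependent judgement call.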
Overcoming the limitations
Until now, doing keyword research at a much bigger scale than this simply hasn’t been feasible. But we’ve been working on each of these limitations over the years with different technologies, automation processes and human inputs. The result is our process, the Keyword Universe, which gives you just that: your website’s entire universe of keywords.
In the next part we’ll look at the payoff, as we take you through some of the main benefits of doing keyword research at scale, including why your currently ‘good enough’ strategy might not be good enough after all, and how you can expand the ranking potential you address by an order of magnitude. We’ll also give you an idea of what the Keyword Universe is all about and how it works.
Find out more with Melt’s head of SEO, Phill, and head of strategy, Bart, as they discuss keyword research at scale in our May webcast.
For more on the benefits of keyword research at scale and how it works, read part 2 here or get in touch with us now.