The Basics of SEO – Technical SEO & Crawlability

Thursday, July 9th, 2020
Written by Content Team

When people start talking about Search Engine Optimisation, they often throw around terms that can be confusing to the uninitiated. In part one of our series looking at the basics of SEO, we’re diving into one of the most misunderstood areas of SEO: the dreaded realm of technical SEO.

What is Technical SEO?

According to Moz, technical SEO is defined as “optimizing your site for crawling and indexing, but can also include any technical process meant to improve search visibility”. This matters because the term technical SEO is often thrown around without a proper understanding of what it means. It’s not unusual to find people referring to on-page site optimisation as technical SEO. Whilst elements of on-page SEO can have an impact on technical SEO (more on that later), the main focus of technical SEO is crawl optimisation, and that’s where we need to start if we want to master it.

So what is crawlability?

Crawlability refers to how easily a robot can move through your website. If you have a small website with only a handful of pages, optimising for crawlability is relatively simple. If your website is large, though (ecommerce websites are often the most affected by this), you will quite often find that robots struggle to move through the site efficiently. This matters because search engines such as Google don’t crawl every single page of your website as a matter of procedure; they set limits on how many pages they will crawl at any one point.

As an example of the impact that poor crawlability can have on your SEO, let’s imagine we have an ecommerce site with 250,000 pages, and let’s say Google only crawls 50,000 URLs each time it visits the site. With that limitation in place it will take Google five visits to crawl every page on the site, and in reality certain pages will be crawled multiple times, so it will take even longer than this initial estimate.

The impact of this is that if you make changes to your site, you have no certainty over when those pages are going to be seen by Google, which means you have no indication of when your rankings might start to improve.

What can be done to make your site more crawlable?

One of the most common ways to optimise your site’s crawlability is to focus on crawl paths. Crawl paths are effectively an outline of the journeys you want crawlers to take as they travel through your website. To create a crawl path you need to choose the destinations that you want to direct the crawlers toward. This might be a blog post, a service page, or it could simply be the home page of the site.

Once you’ve decided where you want the crawlers to end up, you need to signpost them to those pages. This is thankfully very easy: you just need to create links to those pages within your site. The more links you have pointing to a page, the easier it is for a crawler to reach it, and the more important the crawler will consider that page to be. Examples of how you might link more to certain pages include the following (a rough sketch of what these links can look like in HTML follows the list):

Putting it in your navigation

Putting it in your footer

Linking to it from within blog posts

Implementing breadcrumbs that act as links back to previous pages in the trail
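
As a rough sketch (the page names and URLs below are made up for illustration), a breadcrumb trail and a footer link pointing at the same priority page might look like this in HTML:

    <!-- Breadcrumb trail: each crumb is an ordinary link crawlers can follow -->
    <nav class="breadcrumbs">
      <a href="/">Home</a> &gt;
      <a href="/services/">Services</a> &gt;
      <a href="/services/technical-seo/">Technical SEO</a>
    </nav>

    <!-- The same priority page linked again from the footer -->
    <footer>
      <a href="/services/technical-seo/">Our technical SEO services</a>
    </footer>

Every extra internal link like this gives crawlers another route to the page you care about.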

Doesn’t directing crawlers to the same page multiple times just waste your crawl budget?

This is a good question. Yes, it can do, but when it comes to wasted crawl budget, you’re not likely to make a massive dent in it through targeted internal link building. You’re much more likely to be wasting your crawl budget accidentally by linking off to pages that you don’t want the crawler to access.

One of the most common ways this can happen is through filters on websites. If you go onto an ecommerce store, you can often find filters at the side or top of the main body of content which let you select which properties of the items you want to see, making your shopping experience easier. This is great for user experience, but content management systems will often turn user selections into dynamic URLs (and occasionally static URLs). This means that your one page of products can suddenly become hundreds of pages. Even with as few as five on/off options in a filter there are 31 possible combinations people can select (2^5 - 1), and far more if the CMS also treats each ordering of the selections as its own address. Each combination will have its own distinct URL, which will result in your crawl budget being wasted as the crawlers try to access every possible combination.
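
As an illustration (these URLs are invented), a single category page can end up reachable at all of the following addresses once a couple of filters are applied:

    /shoes/
    /shoes/?colour=red
    /shoes/?size=9
    /shoes/?colour=red&size=9
    /shoes/?size=9&colour=red

To a crawler, each of those is a separate page to fetch, even though the last two show exactly the same products.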

How can you fix this?

Thankfully the solution to this sort of problem is simple. All you need to do is add a nofollow attribute to the filter links so that crawlers know to ignore them. Nofollow is the surgical response to this issue, and one we’ve seen work quite nicely, but you can also use robots.txt to achieve the same result.
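
As a rough sketch (the link text and URL are invented), a filter link marked up this way looks like:

    <a href="/shoes/?colour=red" rel="nofollow">Red</a>

The rel="nofollow" attribute asks crawlers not to follow that link, so the filtered URL doesn’t eat into your crawl budget.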

What is robots.txt?

Every website should have a file accessible at /robots.txt. By updating the robots.txt file you can limit how crawlers access your website. You can learn more about robots.txt files and how to use them to optimise your website in one of our other blog posts.
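
As a simple illustration (the parameter names are made up and would need to match your own site’s filter URLs), a robots.txt file that keeps crawlers away from filtered URLs might look like this:

    User-agent: *
    # Block crawling of any URL containing these filter parameters
    Disallow: /*?colour=
    Disallow: /*?size=

Major crawlers such as Googlebot support the * wildcard in Disallow rules, so lines like these keep filtered combinations out of the crawl without touching the main category pages.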

If you would like to find out how we can help you with your website’s crawlability, please don’t hesitate to get in touch with us.