Small SEO mistakes that could kill your Drupal website

“I have this brand-new website, but it’s not ranking.” We’ve heard that more than a couple of times. If a website cannot be found by search engines, then that website might as well be used as a data storage facility, or something similar.

Just imagine that you own an art gallery with Rembrandts and Picassos, but the paintings are kept in a hard-to-reach basement or barn. It doesn’t matter what masterpieces you have in there: if people cannot find them, then the site is as good as that basement.


Over the years, we have found some rather small mistakes that people make when building websites. In the following, we will be referring strictly to Drupal, yet these rules and mistakes apply to all content management systems out there. Before proceeding, remember to get your page speed right. That one is not a small mistake: Google loves a webpage that loads in under 3 seconds, so play it safe and make all the necessary effort to keep the loading time as low as possible.

Page headings - not just a change in font size

This is maybe the most common mistake we see. We have even seen websites with no headings at all...not even the site title was an H1. So, how do you expect Google to recognize your website and your content as valuable, and to rank it?

Let’s get things straight...nobody knows for sure how Google does its indexing, except maybe a few key people at Google headquarters. Yet we know that meta information is really important, that page speed matters enormously, that keywords help, and that everything together makes your website rank.

OK, you have good meta, but as you may know already, good meta and targeted keywords are not enough. Google needs more than some fancy metatags, so remember to include some headings as well.

Also, don’t forget that your readers need some “benchmarks”: flags that say “Hey! The content you’re looking for is HERE!” Nothing draws attention better than an H1 or an image.

The use of header tags is important because heading tags give the crawler the structure of the website and of each page. If you analyze two similar websites, one with headings and one without, you will notice that the one with headings tends to be indexed faster.
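As a rough sketch, a well-structured page keeps a single H1 for the main topic and nests the remaining headings logically under it; the titles below are, of course, just placeholders:

```html
<!-- One H1 per page: the main topic of the page -->
<h1>Drupal SEO checklist</h1>

<!-- H2s mark the major sections -->
<h2>Page headings</h2>
<p>Why heading structure matters...</p>

<!-- H3s nest under their H2, never skipping levels -->
<h3>Common heading mistakes</h3>
<p>Details...</p>

<h2>Robots.txt</h2>
<p>...</p>
```

A crawler reading this markup can reconstruct the page outline without rendering it, which is exactly the structure mentioned above.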

Robots.txt protocol missing – your website needs some robots

“The robots.txt file must be in the top-level directory of the host, accessible through the appropriate protocol and port number. Generally accepted protocols for robots.txt (and crawling of websites) are "HTTP" and "https". On HTTP and https, the robots.txt file is fetched using an HTTP non-conditional GET request.”

Robots.txt, or the infamous robots exclusion protocol, is a standard used by most websites to communicate with web crawlers, or robots. It tells crawlers whether they are welcome to visit a website, which parts of your website should be visited, and which shouldn’t be crawled.

Let’s imagine Google is a tourist visiting your website. The robots.txt is a good friend that tells the tourist which neighborhoods NOT to visit, either because those places have no touristic value, or because they are dangerous.

All websites have pages that should not be crawled, for different reasons. While for a smaller website, like a blog, a robots.txt file may not be so important, for a bigger website this file makes all the difference.

One of the sweetest features of this protocol is that, if needed, you can tell specific robots to stay away from your website. The robots.txt file schools the crawlers, telling them what to do and what not to do. If, for example, you have a junk directory on your website that should never be crawled, just write:

User-agent: *
Disallow: /junk/

If, instead, you only want to block a single file from being crawled, just use:

User-agent: *
Disallow: /junk/file.html

I know you may think it’s not that important, but a mistake like a missing robots.txt could make the difference between the 1st and the 10th page of the SERP (Search Engine Results Page).

Sitemap.xml not available – give Google a map

First of all, the sitemap is an XML file that lists all the URLs existing on the website.

Again, you can see the whole situation as having a map with the major landmarks of a city, with Google as the tourist. The sitemap is that map. If the tourist has the map, it will be easy to visit all the locations indicated on it. If the map is missing, the tourist may not be able to find all the important locations, or may simply get lost.

The sitemap.xml tells the tourists (crawlers) which regions can be visited, the robots.txt tells them which places shouldn’t be visited, and the heading tags say which bits of information are the most important.

Besides being a list of the links existing on a website, the sitemap offers valuable information about the website, both to the crawler and to you or your developer.

If implemented correctly, with the sitemap:

  • Pages are prioritized for indexing via the priority and change-frequency values you assign to them
  • An early indexing of the website is assured
  • All pages are submitted to the search engine, even pages the crawler would not have discovered on its own
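To make this concrete, a minimal sitemap.xml looks something like the sketch below; the URLs and dates are placeholders, and changefreq and priority are hints to the crawler, not commands:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- The canonical address of the page -->
    <loc>https://www.example.com/</loc>
    <!-- When the page last changed -->
    <lastmod>2018-06-01</lastmod>
    <!-- Hints: how often the page changes and how important it is -->
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/blog/drupal-seo</loc>
    <lastmod>2018-05-20</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

The Drupal sitemap modules mentioned below generate exactly this kind of file for you, so you rarely need to write it by hand.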

Drupal offers some great modules for both Drupal 7 and Drupal 8. These modules generate a list of entities (nodes, taxonomy terms, menu links, etc.) and custom links. You can go with the XML sitemap module or the Simple XML sitemap module. Be careful: sometimes, in order to upgrade these modules, you will need to uninstall the previous version, so read the module’s documentation carefully.

Content not well verified and missing metatags

This is a huge SEO problem in general, no matter the CMS; the Gargantua of website problems. I know, we said that we would be talking about small mistakes, but this one is so big that it is worth mentioning in every SEO mistakes article.

Sometimes people blame the implementation when, a lot of the time, the content is what is to blame for their poor ranking. Good content makes people read, share and bookmark your website, which means traffic and ranking.

This is maybe the most common problem when dealing with SEO. If the technical part is nearly perfect but your website is still not getting results, then maybe you should check your content’s quality once in a while.

Don’t forget to check whether your metatags are working properly. Then also check whether the text in your metatags is valuable and contains all the needed keywords.

Drupal has some great metatag modules for both Drupal 7 and Drupal 8 that you should install before taking your Drupal website live.
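For illustration, this is roughly what the output of a metatag module should look like in the page’s head; the title, description and URLs below are placeholders:

```html
<head>
  <!-- The page title: shown as the clickable headline in the SERP -->
  <title>Small SEO mistakes that could kill your Drupal website | Example.com</title>

  <!-- The description often becomes the snippet shown under the headline -->
  <meta name="description"
        content="Headings, robots.txt, sitemap.xml and metatags: small Drupal SEO mistakes with a big impact.">

  <!-- Open Graph tags control how the page looks when shared on social media -->
  <meta property="og:title" content="Small SEO mistakes that could kill your Drupal website">
  <meta property="og:description" content="Small Drupal SEO mistakes with a big impact.">
</head>
```

If tags like these are missing or empty in your page source, that is the first thing to fix.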

Duplicate content – lowers your ranking twice as fast

By some estimates, almost 29% of all web pages are duplicated content. Duplicate content seems to confuse the crawler and thus to lower the ranking.

The main problem is that, when there is duplicated content, the crawler cannot know which page to rank and to show for a specific query. In order not to get confused, the crawler seems to either exclude both pages or use specific algorithms to identify the right one. That process takes a lot of time and increases the indexing time.

This can be fixed quite simply with a 301 redirect and a rel=”canonical” attribute that tells search engines that a page is a copy and should be treated as a copy of another URL.
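As a sketch, the two fixes look like this; the URLs are placeholders, and the redirect example assumes an Apache server (which Drupal ships an .htaccess file for):

```html
<!-- In the <head> of the duplicate page: point search engines at the original -->
<link rel="canonical" href="https://www.example.com/blog/drupal-seo">
```

And the permanent redirect, placed in .htaccess:

```apache
# Permanently (301) redirect the duplicate URL to the canonical one
Redirect 301 /blog/drupal-seo-copy https://www.example.com/blog/drupal-seo
```

The canonical tag tells the crawler which page to rank; the 301 makes sure visitors and link equity end up on that page too.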

These are just a few of the small mistakes that have a huge impact on your SEO and on your website. We will come back with even more SEO mistakes that people make on their websites, but until then, tell us about the mistakes you’ve encountered in the comments section below.
