SEO Guide for Beginners (2020)

Written by Guillermo Figueredo

This article is a guide where we break down the most relevant aspects of improving the SEO of a website. If any part of this post is not clear or you want to share your opinion, please leave a comment at the bottom of this page.

SEO (Search Engine Optimization) is the set of techniques and actions focused on improving the positioning of a website in search engines.

SEO rests on three pillars:

  • Website Optimization
  • Content Quality
  • Links from External Sites

The first two points could be categorized under On-Page SEO and the third one under Off-Page SEO.

Website Optimization

In this part, we will focus on aspects that affect the entire website, not just a single piece of content. Changes and improvements in this section may affect the overall ranking of the site.

Crawling and Indexing

Crawling and indexing are not the same thing. Some pieces of content can be crawled, but not indexed.

We say that a URL has been crawled when search engine robots have visited it and “analyzed” the content in it (they see its source code). When that URL is incorporated into the Google index (or any search engine) and appears in the search results, we say that the URL is indexed.

In the Google Search Console Coverage report, we can see the exact number of indexed URLs and whether there are crawling or indexing problems.

Search Console: Coverage Index

In order for any page to receive visits from search engines, it must have been previously crawled and indexed. Without indexing, there is no organic traffic.

This part is very important, especially on very large websites or online stores: Google dedicates a limited amount of resources to crawling a website. This is what we call crawl budget.

Related reading: What is Crawl Budget for SEO and How to Optimize it?

Among other factors, your site’s crawl budget depends on the authority of the site, the frequency of publication, the loading times, the web architecture, etc. If a website publishes 100 articles or pages daily but Google only crawls and indexes 20, we have a crawl budget problem, and a very serious one, because the website is generating content that Google does not see, as if it did not exist. We wrote an article on how to optimize the crawl budget of your site; check it out!

The main job of an SEO expert is to get search engines to crawl and index the content that really interests a website.

Redirects and Broken Links

We have talked about search engines assigning a daily crawl budget to our website. One way to waste your budget is to have broken links and many redirects.

If the search engine bot checks one URL that redirects to another, we are forcing it to crawl two URLs in order to view the content.

However, broken links are an even worse scenario. If the bot follows a broken link, we are wasting resources that could be used to crawl other potentially new content. And not only that: authority is wasted on a URL that does not exist and therefore has no content.

We must always try not to have broken links or unnecessary redirects. The easier we make things for the bots, the better they will treat us in the SERPs.
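When a redirect is unavoidable (for example, after moving a page), a single permanent redirect is the cleanest option. A minimal sketch, assuming an Apache server; the URLs are placeholders:

    # .htaccess: permanently redirect the old URL to the new one
    Redirect 301 /old-page/ https://www.example.com/new-page/

Chaining several redirects (A to B to C) multiplies the requests needed to reach the final content, so point old URLs directly at their final destination.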

We can analyze broken links with tools such as Google Search Console (the Coverage report flags 404 errors) or a site crawler like Screaming Frog.

Web Architecture and Internal Linking

Good web architecture and internal linking make all our content accessible to search engines.

A URL that does not receive any link cannot be crawled.

The more internal links a given URL receives, the more “weight” it has within that web page.

Our goal will be to prioritize internal linking to the most relevant URLs of the project, to ensure that they rank better in search engines.
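At the HTML level, an internal link is just a regular anchor tag. A minimal sketch (the URL and anchor text are placeholders):

    <!-- descriptive anchor text tells bots and users what the linked page is about -->
    <a href="/seo-guide-for-beginners/">Read our SEO guide for beginners</a>

The more relevant a URL is to the project, the more internal links like this it should receive.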

Sitemap.xml and Robots.txt

The sitemap.xml file is basically a list of all the URLs of a website. Bots come to this file to learn about the new content we publish. We must submit it to Google through Search Console.

The sitemap of a website must contain all those URLs that are likely to capture traffic from search engines.
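As a reference, a minimal sitemap.xml could look like this (example.com and the date are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- one <url> entry per indexable URL -->
      <url>
        <loc>https://www.example.com/seo-guide-for-beginners/</loc>
        <lastmod>2020-05-01</lastmod>
      </url>
    </urlset>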

The robots.txt file indicates which parts of our website robots may or may not crawl. Google recommends linking the sitemap.xml at the end of robots.txt.
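As a sketch, a typical robots.txt for a WordPress site might look like this (the domain is a placeholder):

    # Allow all bots, but keep them out of the admin area
    User-agent: *
    Disallow: /wp-admin/
    Allow: /wp-admin/admin-ajax.php

    # Google recommends referencing the sitemap here
    Sitemap: https://www.example.com/sitemap.xml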

If a URL has a noindex tag, we must remove it from the sitemap. Doing the opposite would be inconsistent, since we would be telling the bots to crawl that URL (via the sitemap) only for them to discover, once they crawl it, that we don’t want it indexed (noindex tag). Referring back to the crawl budget section: make things easy for search engines and only deliver content that can generate organic traffic.

Noindex Pages

Any content that does not target a particular search intent or set of keywords is better marked as noindex.

Some examples are pages such as legal notices, terms and conditions, password-protected content, etc.
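Marking a page as noindex is done with a robots meta tag in the page’s <head>. A minimal example:

    <!-- tells search engines not to index this page, but still follow its links -->
    <meta name="robots" content="noindex, follow">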

After some time, Google will stop crawling those noindex pages, which will improve our crawl budget.

Responsiveness and Loading Speed

In 2018, Google announced its Mobile-First Index. This meant that, from that date, Google would assess the quality of a website by looking at its mobile version first instead of the desktop version.

Since then, the loading speed of a website and its adaptability to devices have become far more important. Your website must display correctly on mobile devices and load quickly.

Related reading: How to Make your Website Load in less than 2 Seconds? Swift Performance step by step guide

We have two official Google resources to check these parameters: PageSpeed Insights to measure loading speed and the Mobile-Friendly Test to check responsiveness.

Content Quality

This part has to do with each post, article, or individual piece of “content” that we publish on our website. Changes included in this section usually affect each piece of content individually.

Related reading: WordPress Blog Post Checklist: Post with Confidence with the Pre-Publish Checklist Plugin

Semantic Density and Search Intent

Through its artificial intelligence, Google detects the search intent behind a keyword.

If someone searches for “Download program to watch TV on the computer” Google interprets that in order to satisfy the user’s search intent, there should be links in the results offered to download a program.

This happens with every search we perform. Therefore, it makes more sense to talk about search intents than keywords. Under the same search intent, we can group hundreds or thousands of keywords.

Types of user intent

The following phrases or keywords have the same search intent behind them, which is to learn how to optimize the organic traffic of a website:

  • SEO guide
  • How to position a web page
  • Search engine positioning guide

We don’t have to create a different piece of content for each of those keywords; instead, we can rank for all of them with the same article.

This is also where semantic density comes in. If I talk about a “bat”, you don’t know what kind of bat I mean: it could be an animal or a piece of equipment used to play baseball.

Search engines give context to our content through the keywords we mention in it. If, in a piece of content where I use the word “bat”, I also mention baseball, runs, games, and stadiums, Google identifies the context of that piece of content. This is what we call semantic density.

Friendly URLs

Gone are the days when URLs looked like this: domain.com/index1.php?id=3

A URL must be descriptive so that both robots and users know what it is even before accessing its content.
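For example (example.com is a placeholder), compare two versions of the same URL:

    Not friendly: https://www.example.com/index.php?id=3&cat=7
    Friendly:     https://www.example.com/seo-guide-for-beginners/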

Headings Hierarchy

Headings are elements of great importance within a web page. They range from H1 to H6, are similar to the titles and subtitles used in a Word document, and serve to structure the information within a piece of content.

H1 is the main title, and there should only be one per piece of content. Of the rest, we can use as many as we want.

Each heading depends hierarchically on the previous one. This means there should not be an H3 without an H2 before it.

Let’s see it with an example:

Suppose we create content about types of vehicles. The H1 would be “Most used vehicles”. Within it we talk about “Cars”, “Motorcycles”, “Buses”, … Each of them would act as an H2. If within “Cars” we want to differentiate between “Diesel cars” and “Gas cars” they should be marked as H3.
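In HTML, that structure would look like this (the indentation is only visual; what matters is the tag hierarchy):

    <h1>Most used vehicles</h1>  <!-- only one H1 per page -->
      <h2>Cars</h2>
        <h3>Diesel cars</h3>
        <h3>Gas cars</h3>
      <h2>Motorcycles</h2>
      <h2>Buses</h2>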

Title and Description

The title and description appear in the search results and they are the key elements for a potential visitor to decide to click on our link or on that of the competition.

If the title and description include the words the visitor typed in the search box, those words will be shown in bold in the search results. We must try to include the main keyword here, along with a small call to action that prompts the click.

We can also play with icons and special characters to highlight our link.
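Both elements live in the page’s <head>. A minimal sketch (the wording is just an illustration):

    <head>
      <!-- the title appears as the clickable headline in search results -->
      <title>SEO Guide for Beginners (2020) | Awesome Web Designs</title>
      <!-- the description appears under the title; include the main keyword and a call to action -->
      <meta name="description" content="Learn the three pillars of SEO and start improving your rankings today. Read the full beginner's guide!">
    </head>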

Structured Data

Structured data helps robots better understand the elements that make up our website. This data serves to highlight a product’s price, brand, stock, offers, etc. in search results.

In addition, this data may appear together with the title and description as a rich snippet if Google considers it relevant.

The way I recommend marking up structured data is with JSON-LD, using the Schema.org vocabulary.
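As an illustration, a minimal JSON-LD block for a product (all values are placeholders) would be placed in the page’s HTML like this:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Product",
      "brand": { "@type": "Brand", "name": "Example Brand" },
      "offers": {
        "@type": "Offer",
        "price": "29.99",
        "priceCurrency": "CAD",
        "availability": "https://schema.org/InStock"
      }
    }
    </script>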

Related reading: What are Schema Markups and Why are They Important for your SEO?

User Response

The way users interact with our website once they visit it is key to keeping a result among the first positions.

If a user enters your website but quickly leaves and returns to the results page to access another result (a behavior known as pogo-sticking), Google will understand that your content may not be good enough to be in the first positions.

If a visitor spends time browsing your website visiting different content, Google understands the user liked the content they found.

Links from External Sites

Off-Page SEO has to do with the way other websites link to us. These links work like recommendations in the traditional offline world. A good link building strategy will have an effect on all the content of our website.

Related reading: How to Get Free Quality Backlinks for your Toronto Business

Authority of Reference Sites

It is not the same if a friend links to us from a blog he or she has just created as it is to get a link from a famous blog that fits the theme of our sector.

The authority of the sites that link to us is measured by the number and quality of links they receive, as well as the quantity and quality of their traffic, the size of their content, their theme, etc.

Typically, a project receives links from sites with different authorities. It would not be normal if all the websites linking to our site were very powerful. If large websites add a link to our site, it is logical to think that small websites will do so too.

Related reading: A Beginner’s Guide to Link Building: How to Take Advantage of Google and Advertise for Free

Link Pattern

When getting new external links, there are some variables we need to consider:

  • Follow / Nofollow
  • Anchor text used
  • Progression over time
  • Link location
  • Destination URLs

When we create a link building strategy, we must try to simulate the generation of natural links: links that we have not created ourselves, nor bought or ordered. Remember that Google prohibits the purchase or creation of artificial links (however, the reality is very different).

Our link pattern must be natural. The best approach is to analyze the best websites in our sector (our competitors). See what type of links they are receiving and try to replicate their proportions and progressions. If they are well-positioned, they must be doing something right.

Referring Domains with the Same IP

A single server can host several websites, and we may think it is a good idea to link all these websites to each other. However, this is a mistake, since Google takes into account not only the referring domain but also the IP of that website.

Google assumes that if two websites under the same IP are linked, it is very likely that the person or company that manages them is the same one and has decided to link them to try to manipulate their search engine results.

Try to avoid getting links to your website from many different domains that share the same IP.

Do you need help with your website?

Get a free marketing consultation: book a call.

I hope you liked this SEO Guide for Beginners. Please, leave your comments below, we would love to hear your feedback and opinions.

Article originally published in Spanish by MadreSEOperiora.

This blog is a place for everyone who is interested in marketing, whether you are a professional web designer, marketer or business entrepreneur. If you want to be part of our digital marketing journey, sign up for more awesome web design tips and tricks.
