
How to define SERP intent and ‘source type’ for better analysis

SERP analysis coupled with your keyword research is a cornerstone of any modern SEO campaign.

Analyzing search intent is already an established process. But when it comes to SERP analysis, all too often I see reports that stop at classifying a result based on its intent – and that’s it.

We know that for searches with multiple common interpretations, Google works to provide a diversified results page, often making the following distinctions:

  • Result intent (commercial, informational).
  • Business type (national or local).
  • Aggregators and comparison sites.
  • Page type (static or blog).
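These classification dimensions can be captured in a simple record per result. The sketch below is illustrative only – the field names and example values are my own, not from any tool or from Google:

```python
from dataclasses import dataclass

@dataclass
class SerpResult:
    """One organic result, classified along the dimensions above."""
    position: int
    website: str
    intent: str          # "commercial" or "informational"
    business_type: str   # "national" or "local"
    source_type: str     # e.g. "lead generation", "aggregator", "blog"

# Hand-classified examples (hypothetical labels for illustration).
results = [
    SerpResult(1, "oxylabs.io", "commercial", "national", "lead generation"),
    SerpResult(3, "geekflare.com", "informational", "national", "blog"),
]

# Group results by source type to see how Google diversifies the page.
by_source = {}
for r in results:
    by_source.setdefault(r.source_type, []).append(r.website)

print(by_source)
```

Grouping by source type, rather than stopping at intent, is what makes the later benchmarking comparisons possible.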

And then when we plan content, we might base our strategy on the observation that Google ranks some informational pieces on page 1, so we decide to create informational pieces too.

We can also use a tool to “aggregate” metrics on the first page and create artificial keyword difficulty scores.

This is where the strategy fails and, in my opinion, will continue to show diminishing returns in the future.

This is because the majority of these pieces of analysis do not acknowledge or consider the source type. Personally, I believe this is because the Search Quality Rater Guidelines – which have made EAT, YMYL and page quality an important part of our day-to-day work – don’t actually use the term "source type," though they do talk about rating and analyzing sources for things like misinformation or bias.

As we begin to consider source types, we also need to engage with and understand the concepts of quality thresholds and thematic authority.

I’ve talked about quality thresholds and how they relate to indexing in previous articles I’ve written for Search Engine Land.

But by connecting this to SERP analysis, we can understand how and why Google chooses the websites and elements that make up the results page, and also get an idea of how viable it is for our site to rank effectively for certain search queries.


A better understanding of ranking feasibility helps in forecasting potential traffic opportunities and then estimating leads/revenue based on your website’s conversion rate.




Define source types

Defining source types means going deeper than just classifying the ranking website as informative or commercial, because Google goes deeper too.

This is because Google compares websites based on their type and not just based on the content produced. This is particularly prevalent on search results pages for queries that can have mixed intent, returning results with both commercial and informational intent.

If we look at the query [rotating proxy manager], we can see this in practice in the top five results:

# | Ranking website | Intent classification | Source type classification
1 | Oxylabs | Commercial | Commercial, lead generation
2 | Zyte | Commercial | Commercial, lead generation
3 | Geekflare | Informational | Informational, commercially neutral
4 | I like knots | Informational | Open source code, non-commercial
5 | ScraperAPI | Informational | Informational, commercial bias

Quality thresholds are determined by site identity, general domain type (not just blog subdomain or subfolder), and then context.

When Google retrieves information to compile a search results page, it first compares the retrieved sites within their source type group. In the example SERP, Oxylabs and Zyte are compared against each other before the other source types are selected for inclusion and ranked based on weighting and annotation.

The SERP is then formed on the basis of these retrieved rankings and then overlaid with user data, SERP features, etc.


By understanding the source types that Google shows for specific searches (and where they rank), we can tell at a glance whether they are viable search terms to target given our own source type.

This is also common in SERPs for [x alternative] queries, where a company wants to rank for [competitor] + “alternative” searches.

For example, if we look at the top 10 blue-link results for [pardot alternatives]:

# | Ranking website | Intent classification | Source type classification
1 | G2 | Informational | Informational, non-commercial bias
2 | TrustRadius | Informational | Informational, non-commercial bias
3 | The Ascent | Informational | Informational, non-commercial bias
4 | Capterra (blog) | Informational | Informational, non-commercial bias
5 | Jotform | Informational | Informational, non-commercial bias
6 | FinancesOnline | Informational | Informational, non-commercial bias
7 | Gartner | Informational | Informational, non-commercial bias
8 | GetApp | Informational | Informational, non-commercial bias
9 | Demodia | Informational | Informational, non-commercial bias
10 | SoftwareSuggest | Informational | Informational, non-commercial bias

So, if you’re Freshmarketer or ActiveCampaign, while you may see that as a relevant search term that fits your product positioning, you probably won’t gain traction as a commercial source type.

That’s not to say that messaging and comparison pages on your site aren’t important content for user education and conversion.

Different source types have different quality thresholds

Another important distinction is that different source types have different quality thresholds.

Because of this, third-party tools that create keyword difficulty scores based on a metric like backlinks for all results on page 1 struggle, as not all source types are judged the same way on most SERPs.

This means that to determine the “benchmark” your website and content must meet to rank in a traffic-driving position, you need to compare against other websites of the same source type, and then against the type of content they rank with.
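The difference between a whole-page difficulty score and a same-source-type benchmark can be shown with a small sketch. The site names and referring-domain counts below are made up for illustration:

```python
from statistics import median

# Hypothetical page-1 results: (website, source_type, referring_domains).
page_one = [
    ("g2.com",             "aggregator", 90000),
    ("trustradius.com",    "aggregator", 25000),
    ("capterra.com",       "aggregator", 60000),
    ("vendor-blog.com",    "vendor",     800),
    ("another-vendor.com", "vendor",     650),
]

def benchmark(results, source_type):
    """Median referring domains among results of one source type only."""
    same = [rd for _, st, rd in results if st == source_type]
    return median(same) if same else None

# Averaging the whole page mixes thresholds and overstates the bar
# a vendor site actually needs to clear.
print(benchmark(page_one, "vendor"))      # → 725.0
print(benchmark(page_one, "aggregator"))  # → 60000
```

A blended score across all five results would sit far above what the two vendor sites needed, which is exactly the failure mode of single-number keyword difficulty metrics.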


Topic clusters and frequency

Establishing good topic clusters and easy-to-understand information trees allows search engines to more easily understand your site’s source type and “usefulness depth.”

That’s also why I think that, for a wide range of searches in the same field (e.g., tech), you’ll often see sites similar to G2 and Capterra ranking.

A search engine can have more confidence in returning these websites in the SERPs, regardless of software/technology type, because these websites exhibit:

  • High publication frequency.
  • A logical information tree.
  • A developed reputation for helpful, accurate information.

When developing pages within topic clusters, in addition to semantics and good keyword research, it is also important to understand the basics of natural language inference, in particular the Stanford Natural Language Inference (SNLI) corpus.

The basic idea is that a hypothesis is tested against a text, and the conclusion is that the text either entails, contradicts, or is neutral toward the hypothesis.

If a web page contradicts the hypothesis, it has little value to a search engine and should not be retrieved or ranked. If, on the other hand, the page entails or is neutral toward the query, it can be considered for ranking, providing either the answer or a potentially unbiased perspective (depending on the query).
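That retrieval rule can be sketched as a simple filter. The page texts and labels below are hand-written stand-ins; in practice an NLI model trained on a corpus like SNLI would predict the label:

```python
# Hypothesis: "There are alternatives to Pardot."
# Each page is paired with a hand-assigned NLI label for illustration.
pages = [
    ("Pardot alternatives include HubSpot and ActiveCampaign.", "entailment"),
    ("There are no alternatives to Pardot.",                    "contradiction"),
    ("Pardot is a marketing automation platform.",              "neutral"),
]

def retrievable(pages):
    """Keep pages that entail or are neutral toward the hypothesis;
    drop those that contradict it."""
    return [text for text, label in pages if label != "contradiction"]

for text in retrievable(pages):
    print(text)
```

Only the entailing and neutral pages survive the filter, mirroring the rule described above.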

We do this to some degree through content hubs/content clusters, which have grown in popularity over the last five years to demonstrate EAT and create high authority linkable assets for non-branded search terms.

This is achieved through good on-site information architecture and concise topic clusters and internal linking, making the site easier for search engines to digest at scale.

Understand source types to inform your SEO strategy

By better understanding the source types that rank most prominently for target search queries, we can create better strategies and forecasts that yield more immediate results.

This is a better option than targeting search terms our source type simply isn’t suited to, where we’re unlikely to see a return in traffic against the resource investment.


The opinions expressed in this article are those of the guest author and not necessarily those of Search Engine Land. Staff authors are listed here.



About the author

Dan Taylor is Head of Technical SEO at SALT.agency, a UK-based technical SEO specialist and 2022 Queens Award winner. Dan works with and leads a team working with companies ranging from technology and SaaS companies to enterprise e-commerce.
