Do We Only Use BrightLocal to Find Competing Pages?


When multiple pages from the same site show up for the same keyword, rankings usually suffer. Many people assume BrightLocal is the go-to tool for finding these competing pages, but that’s only part of the picture. BrightLocal can confirm a problem, not diagnose it. In this post, we explain why we start with a site crawl, how we identify true competing pages versus cannibalization, and where BrightLocal fits into a clean, reliable workflow.


Why BrightLocal is useful but limited

BrightLocal’s organic rank tracker will show multiple URLs from a domain for the same search query. That makes it useful as a confirmation tool: if we see two or three URLs from the same site ranking for one query, we know something is wrong.

But BrightLocal only shows what’s happening in the search results. It does not tell us why the site has those pages, how they are built, or whether they are producing crawl problems. For that we need to look at the site itself.

Our crawl-first workflow

We follow a simple sequence to find competing pages and the root causes:

  1. Crawl the site using a site crawler set to the Googlebot user agent.
  2. Export a small CSV with key fields.
  3. Sort and inspect the data to spot duplication, over-optimization, and crawl resistance.
  4. Use BrightLocal to confirm which queries return multiple URLs from the same domain.
  5. Plan fixes: merge, canonicalize, redirect, or control indexing.

Which crawler we use and what to export

We use Website Auditor (part of SEO PowerSuite) to crawl sites as Googlebot. The crawler gives us everything we need quickly. We export the page data as a CSV with just these columns:

  • URL
  • SEO title
  • H1 count
  • H1 text
  • H2–H6 count

Keeping the export small makes it easy to scan and spot patterns in a spreadsheet.
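
If the export gets long, the same review can be scripted. Here is a minimal pandas sketch for loading it; the file name and the column headers are assumptions, so match them to whatever Website Auditor actually writes into your CSV.

    import pandas as pd

    # Column headers are assumptions -- rename them to match your export.
    COLUMNS = ["URL", "SEO title", "H1 count", "H1 text", "H2-H6 count"]

    # The export marks missing titles and H1s as N/A, so read those as blanks.
    df = pd.read_csv("site_crawl_export.csv", usecols=COLUMNS, na_values=["N/A"])
    print(f"{len(df)} pages exported")
    print(df.head())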


What we look for in the CSV

From those five fields we can spot three things fast:

  • Duplicate or redundant pages — multiple pages with the same or very similar SEO title and H1 text.
  • Over-optimization — too many pages targeting the same intent with slightly different wording.
  • Crawl resistance — internal pages that resolve to redirects or 404s, which show N/A for title and H1.

Those N/A rows are huge clues. They often mark internal redirects, broken links, or pages that return errors. Each one is a point of crawl resistance. Every point of crawl resistance wastes crawl budget and can hide important pages from search engines.
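
Here is a hedged sketch of how those checks look in pandas against the same export (the column names are the same assumptions as above, and an exact title match will not catch near-duplicates, so the spreadsheet scan is still worth doing):

    import pandas as pd

    # N/A values in the export are read as blanks so we can filter on them.
    df = pd.read_csv("site_crawl_export.csv", na_values=["N/A"])

    # Duplicate or redundant pages: more than one URL sharing the same SEO title.
    dupes = df[df.duplicated(subset=["SEO title"], keep=False) & df["SEO title"].notna()]
    print("Pages sharing an SEO title:")
    print(dupes.sort_values("SEO title")[["URL", "SEO title", "H1 text"]])

    # Crawl resistance: rows where both the title and the H1 came back as N/A
    # (internal redirects, broken links, or pages returning errors).
    na_rows = df[df["SEO title"].isna() & df["H1 text"].isna()]
    print(f"\n{len(na_rows)} points of crawl resistance:")
    print(na_rows["URL"].tolist())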

How we sort and why

We sort the CSV two ways:

  • Sort by SEO title to group pages that target the same phrase. This reveals obvious duplicates and groups the N/A rows so we can fix redirects and broken pages quickly.
  • Sort by URL to group tag pages, category pages, and parameterized URLs. These often create crawl noise and should be handled with robots.txt rules or meta robots tags.

Tag and category pages frequently become crawl resistance. In most cases we set them to noindex, or to noindex, follow when their internal links still need to be crawled, so they stay out of the index without blocking crawl paths.
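
The URL sort can be scripted the same way. This sketch flags the usual noise; the /tag/, /category/, and ?parameter patterns are assumptions about a typical WordPress-style structure, so adjust them to the site you are auditing.

    import pandas as pd

    df = pd.read_csv("site_crawl_export.csv", na_values=["N/A"])

    # URL patterns that usually create crawl noise -- adjust to your site.
    noise_pattern = r"/tag/|/category/|\?"
    noise = df[df["URL"].str.contains(noise_pattern, regex=True, na=False)]

    print(f"{len(noise)} tag, category, or parameterized URLs to review:")
    for url in sorted(noise["URL"]):
        print(url)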

Cannibalization versus competing pages

It helps to separate the two terms:

  • Cannibalization means Google serves only one URL from your domain for a query at a time, but which URL changes across repeated searches. It looks like your pages are trading places.
  • Competing pages means Google serves more than one URL from your domain for the same query at the same time. You can see two or three URLs for the same query in the results.
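
A concrete way to see the difference is to look at repeated snapshots of the results for one query: if only one of your URLs shows per snapshot but the URL keeps changing, that is cannibalization; if two or more show in the same snapshot, they are competing pages. A tiny illustration on made-up data (the snapshots below are hypothetical, not pulled from any tool):

    # Each snapshot is the set of our domain's URLs seen in one search for the query.
    snapshots = [
        {"https://example.com/service-a/"},
        {"https://example.com/service-a-city/"},
        {"https://example.com/service-a/"},
    ]

    urls_per_snapshot = [len(s) for s in snapshots]
    distinct_urls = set().union(*snapshots)

    if max(urls_per_snapshot) > 1:
        print("Competing pages: multiple URLs served at the same time.")
    elif len(distinct_urls) > 1:
        print("Cannibalization: one URL at a time, but it keeps trading places.")
    else:
        print("One stable URL for this query.")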

We used to see a lot more true cannibalization. Tools like Koray’s SERP volatility tester were helpful for visualizing that behavior by running many back-to-back searches and showing which URL appeared each time.

More recently we see fewer cases where one page swaps with another. Instead, Google often returns multiple pages from the same site for the same query. Those competing pages usually push each other lower in the results. If there had been only one clear page for the intent, that page would likely rank higher.

How BrightLocal fits in

BrightLocal’s organic rank tracker shows the top 50 results and will list multiple URLs from your domain for a query inside that range. When that happens we use the crawler data to find the cause and plan the fix.

So BrightLocal is not the starting point. It is the confirmation layer. It tells us which keywords are affected so we can prioritize fixes.
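
If you pull the rank-tracking data out as a spreadsheet, prioritizing is a simple group-and-count. The Keyword and URL column names and the file name below are assumptions for illustration, not BrightLocal’s documented export format, so map them to whatever your report actually contains.

    import pandas as pd

    ranks = pd.read_csv("rank_tracker_export.csv")  # hypothetical export

    # Keywords where more than one URL from our domain shows in the tracked results.
    ours = ranks[ranks["URL"].str.contains("example.com", na=False)]
    affected = ours.groupby("Keyword")["URL"].nunique()
    affected = affected[affected > 1].sort_values(ascending=False)

    print("Keywords with competing pages, worst first:")
    print(affected)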

Common fixes once we find competing pages

Once we identify competing or duplicate pages we decide on one of these actions:

  • Merge content — combine similar pages into one stronger page.
  • 301 redirect — redirect low-value duplicates to the main page.
  • Canonical tag — tell search engines which page is preferred when the content must stay separate.
  • Noindex — keep a page on the site but prevent it from being indexed.
  • Fix internal links — remove links to pages that should not be crawled or fix redirects so crawl paths are clean.
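
When the call is to merge or redirect, it helps to write the plan down as a duplicate-to-primary map before touching the site. A minimal sketch follows; the URLs are placeholders, and the output is just a CSV you can feed into whatever redirect tool or plugin the site already uses.

    import csv

    # Duplicate URL -> the primary page it should 301 to (placeholder URLs).
    redirect_map = {
        "/services/widget-repair-near-me/": "/services/widget-repair/",
        "/widget-repair-city/": "/services/widget-repair/",
    }

    with open("redirect_plan.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["source", "target", "type"])
        for source, target in redirect_map.items():
            writer.writerow([source, target, "301"])

    print(f"Planned {len(redirect_map)} redirects to the primary page.")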

Quick checklist to run right now

  1. Crawl the site as Googlebot and export the CSV with URL, SEO title, H1 count, H1 text, H2–H6 count.
  2. Sort by SEO title and by URL to find duplicates and noisy sections like tags and categories.
  3. Isolate rows with N/A values and fix redirects or broken pages.
  4. Use BrightLocal to find queries where multiple URLs from your domain appear in the results.
  5. Decide whether to merge, redirect, canonicalize, or noindex the extras.

FAQ

Can BrightLocal find every competing page on my site?

No. BrightLocal only shows competing pages when multiple URLs from your domain appear in the top 50 results for a query. It will not show internal duplicates, tag pages, or pages that never get impressions. A site crawl will find those.

What does N/A in the crawl export mean?

N/A for SEO title or H1 usually means the page resolved to an internal redirect or returned a 404. Those are points of crawl resistance and should be fixed.

When should we use a tool like Koray’s SERP volatility tester?

Use it when you suspect real cannibalization—when one URL from your domain is replacing another across repeated searches. It shows which URL is served in each instance. For most current competing page issues, a crawl and BrightLocal confirmation are enough.

Should tag and category pages be indexed?

Most of the time we set tag and category pages to noindex because they create crawl noise and rarely add SEO value. If they have unique, helpful content, consider keeping them indexed but monitor their performance closely.

What is the fastest way to fix competing pages?

Identify the main page for the intent, then merge content or 301 redirect duplicates to that page. Use canonical tags only when duplication is unavoidable. Fix internal links so they point to the preferred page.