Usability FAQ
Why Isn't My Internal Link Analysis Data Showing After a Site Audit Crawl?
If you've completed a Site Audit crawl and your Internal Link Analysis data isn't appearing, don't worry! Here's what you need to know: Internal Links data can take up to 24 hours to process after a crawl is completed; the analysis is based on pages ...
How To Rename A Site Audit Crawl Project
To rename a Site Audit crawl project, navigate to the Site Audit Projects page and select the pencil icon for the crawl project to edit the naming convention.
How does seoClarity handle multiple external links from the same URL in Site Audits?
Our crawler will visit the URL and find all links to your site on that page. Each link is then automatically listed separately in the right-side window when the External Links count is clicked in the data table. Site Audit Details ...
How Do I Manage Which Domains Are Crawled In A Site Audit?
To allow or deny specific domains from being crawled when running a Site Audit, navigate to the Crawling Rules when creating or updating a Site Audit Project.
Why isn't metadata showing in the Site Audit crawl?
If URLs from the crawl don’t have a 2xx status (meaning they're not fully accessible), metadata won’t show up for those pages. Since our crawler works similarly to Google’s, if we can’t access these URLs, it’s likely Google also can’t. This could ...
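The 2xx rule above can be sketched in a few lines of standard-library Python. This is an illustration, not the platform's actual crawler logic; the URL-fetching helper and its name are hypothetical.

```python
import urllib.error
import urllib.request

def metadata_expected(status_code):
    """Metadata is only collected for fully accessible (2xx) responses."""
    return 200 <= status_code < 300

def check_url(url):
    """Fetch a URL and report whether metadata would be recorded for it."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return metadata_expected(resp.status)
    except urllib.error.HTTPError as e:
        # 4xx/5xx responses raise HTTPError; still classify by status code.
        return metadata_expected(e.code)
```

So a page returning 200 or 204 would have its metadata recorded, while a 404 or 500 would not.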
Why does my crawl say preparing?
If you see "Preparing" next to your Site Audit crawl, here are a few reasons why this might be happening and what you can do: Just Started Crawls: When you initiate a new crawl, it will display as “Preparing…” for a few minutes before starting. No ...
What is Page Rank?
Page Rank, as seen in Internal Links Analysis, is a score for a URL from 1-100. Page Rank is an algorithmic metric based on the number and quality of links to a page and is a way to estimate how important a page is. Page Rank can be generated via the ...
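The idea of a link-based importance score can be illustrated with a minimal PageRank-style power iteration. This is a generic sketch over a hypothetical three-page site, not seoClarity's internal algorithm, and it produces raw probabilities rather than the 1-100 scale shown in the platform.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
        rank = new
    return rank

# Hypothetical site: the homepage links to two subpages,
# and each subpage links back to the homepage.
site = {"/": ["/a", "/b"], "/a": ["/"], "/b": ["/"]}
scores = pagerank(site)
```

Because every page links to the homepage, it ends up with the highest score, matching the intuition that heavily linked pages are estimated to be more important.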
Why did my crawl return an error?
Malformed or Incorrect URL: Check that the starting URL is correct and accessible (200 response) to the user agent set up for the crawl. User Agent: We recommend adding our crawler as an allowed crawler based on the user agent name, ...
What type of crawl should I set up, Standard or Javascript?
Javascript vs Standard Crawls. How to check which crawler option to use: open the URL in the browser, disable Javascript (see below for steps to disable Javascript), then load/re-load the page. If you cannot see the links populate, it means that the ...
How To Find Orphan Pages On Your Site
One of the best ways to find orphan pages on your site is to compare URL lists from full Site Crawls to other data sources. If you run a full site crawl and download the crawled URLs, then you can compare the URLs from the crawl to other ...
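The comparison described above amounts to a set difference between two URL lists. A minimal sketch, assuming hypothetical URLs and a sitemap export as the second data source:

```python
# URLs discovered by following internal links during a full site crawl.
crawled_urls = {
    "https://example.com/",
    "https://example.com/products",
}

# URLs known from another source, e.g. an XML sitemap or analytics export.
sitemap_urls = {
    "https://example.com/",
    "https://example.com/products",
    "https://example.com/old-landing-page",  # no internal links point here
}

# Orphan candidates: known URLs the crawler never reached via internal links.
orphan_candidates = sitemap_urls - crawled_urls
```

Any URL left in `orphan_candidates` exists on the site but received no internal links, which is the working definition of an orphan page.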
Why did my crawl get paused?
Sometimes crawl frequencies and schedules conflict and your crawl will be paused. When setting up a new crawl, there are two different sections in the “Start Audit” tab that can conflict: the Frequency options and the Schedule setup for days ...
How do recurring crawls work?
The Recurring Crawl mechanism is designed to create a trend of crawl data so you can run and compare crawls within a project. Site Audits allows you to set up crawls that can be scheduled to run Weekly, Bi-Weekly, or Monthly, from 1 to 6 months. ...
Why are the Page Speed score results different when I look directly in the Page Speed tool?
Per Google's PageSpeed Insights documentation: Why does the performance score change from run to run? I didn’t change anything on my page! Variability in performance measurement is introduced via a number of channels with different levels of impact. ...
Why does a completed crawl show a different number of pages crawled versus when it was running?
While a crawl is running, the Site Health report will display every single URL that has been encountered during the crawl so far. Once the crawl is completed, the Audit reports are processed and duplicates are removed which can lead to a difference ...
Can we use Site Audit on our development environment?
Your development environment can be crawled by adding a specific user agent to the allowed list of bots. Once access is given you can set up a crawl in Site Audits using that user agent. Contact support@seoclarity.net with any further questions on ...
How do I request and schedule crawls?
A crawl can be requested within the platform via the Crawl Request button located on the Site Audits page. This allows for customizations such as crawl speed, depth, exclusions, and inclusions for specific content blocks. The platform allows for ...
What is AMP?
AMP stands for Accelerated Mobile Pages and is a way to have page content render quickly. AMP URLs can be seen in Google's mobile search results as identified in SERP Features. Oftentimes they appear in the Top Stories or News section but will also ...
What are meta robots or robots.txt?
You can use an HTML <META> tag to tell robots not to index the content of a page, or not to follow its links. Robots (aka crawlers or spiders) can ignore your Meta tag as it is only a guideline and not all Robots will respect it. The Meta Name attribute ...
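As a concrete illustration, robots.txt rules can be checked with Python's standard `urllib.robotparser` module, and the page-level meta directive takes the standard HTML form shown at the end. The rules and URLs below are hypothetical examples.

```python
import urllib.robotparser

# Parse a small example robots.txt (supplied inline rather than fetched).
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# can_fetch() answers: may this user agent crawl this URL?
blocked = not rp.can_fetch("*", "https://example.com/private/page")

# The equivalent page-level directive is a meta tag in the page <head>:
meta_tag = '<meta name="robots" content="noindex, nofollow">'
```

Note the difference in scope: robots.txt asks crawlers not to fetch matching paths at all, while the meta robots tag is read after the page is fetched and asks that it not be indexed or its links followed. Both are advisory, as noted above.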