About

SpiderFoot is a reconnaissance tool that automatically queries public data sources (OSINT) to gather intelligence on IP addresses, domain names, e-mail addresses, names and more. You simply specify the target you want to investigate, pick which modules to enable and then SpiderFoot will collect data to build up an understanding of all the entities and how they relate to each other.

What is OSINT?

OSINT (Open Source Intelligence) is data available in the public domain which might reveal interesting information about your target. This includes DNS, Whois, Web pages, passive DNS, spam blacklists, file meta data, threat intelligence lists as well as services like SHODAN, HaveIBeenPwned? and more.

The data returned from a SpiderFoot scan will reveal a lot of information about your target, providing insight into possible data leaks, vulnerabilities or other sensitive information that can be leveraged during a penetration test, red team exercise or for threat intelligence. Try it out against your own network to see what you might have exposed!

Up to table of contents

SpiderFoot HX builds upon the open source version’s module base to offer enhanced functionality across all aspects of SpiderFoot, including performance, usability, data visualisation, security and more.

Additional Capabilities

In addition to the data collection capabilities of the open source version, SpiderFoot HX takes things a step further with the following features:

  • No installation or setup needed at all. Once you register, everything is ready to go. No Python dependencies to install, no virtual machines to spin up or ensuring you have enough compute/memory/disk to run a large scan.
  • Investigations. Sometimes, you don’t want full automation of your scan and want to step through the data collection process step-by-step, module-by-module. Investigations provide you with a visual way to take full control of the scanning process.
  • Multi-target scanning. In cases where you have multiple entities (domains, e-mail addresses, etc.) related to the same target, you can supply them all as targets of the one scan. This enables SpiderFoot to better identify relationships and find relevant information.
  • Scans are faster. Thanks to the completely overhauled backend architecture of SpiderFoot HX, scans run up to 10x faster than the open source version. This means you get the data you need, faster.
  • OSINT monitoring. Run scans automatically on a daily, weekly or monthly basis at a time of your choice and have all changes between scans automatically tracked and alerted on.
  • Email notifications. Receive email notifications when SpiderFoot scans finish, or when scheduled scans identify changes between scan runs.
  • Slack integration. Prefer your notifications over Slack? No problem; input your Slack hook URL and you’ll see notifications in Slack for scan completions and/or change notifications from scheduled scans.
  • Import scan targets. When scanning many targets, it might be easier to load them in via CSV, or as exported from Hunchly.
  • More modules. SpiderFoot HX adds additional modules for UDP port scanning, identification of languages used in content and screenshotting of certain content like social media profiles, dark web sites and security-sensitive webpages such as those that accept credentials.
  • Reporting & Visualisations. Slice and dice your scan results by data type, data family, module, module category and data source. Look at each data element in-depth to see how it was discovered, its relationships and more.
  • Team collaboration. Got a team working on OSINT and threat intelligence? With SpiderFoot HX, you can have multiple users with role-based access control, collaborating on scans and investigations.
  • Annotations. Add notes to scan results and pull them out with the API for rich integrations with internal SIEM tools, investigative platforms and ticketing systems.
  • Security. Two-factor authentication (2FA), role-based access control and a fully locked down cloud infrastructure mean you don’t need to deal with the security of your OSINT platform and investigations.
  • Anonymous. SpiderFoot HX has TOR integration out of the box and provides no way for a scanned entity to know that it’s you doing the scanning.
  • Custom Scan Profiles. Got a particular combination of modules you like to use for your scans but don’t like having to define them each time? With SpiderFoot HX, you can define scan profiles and re-use them for future scans.
  • SpiderFoot HX API. The SpiderFoot HX API is a fully documented RESTful API that supports virtually all UI functions so you can orchestrate the platform and extract data programmatically.

Up to table of contents

Seeking Help

Aside from this document, you’ll be able to get help with SpiderFoot from a number of places:

Up to table of contents

Pre-Requisites

Using Docker

If you would like to side-step having to install anything to get SpiderFoot running on Linux, follow the instructions here to run SpiderFoot in a Docker container.

Linux/BSD/Solaris

SpiderFoot is written in Python 3, so to run on Linux/Solaris/FreeBSD/etc. you need Python 3 installed, in addition to the various module dependencies (shown below).

Windows

If you’re using the legacy SpiderFoot for Windows, you’ll be using a compiled executable (.EXE) file and so all dependencies are packaged with it. No third party tools/libraries need to be installed, not even Python.

In more recent versions, however, SpiderFoot no longer ships with a .EXE file for running on Windows, due to the stale nature of py2exe and the inability to build some dependencies properly on Windows anymore.

Fortunately, with Python for Windows you can follow the below instructions to get SpiderFoot dependencies installed on Windows easily:

  1. Install Python for Windows
  2. Install PIP by downloading this file and running it with Python (see the example after this list)
  3. (Optional if you want to run from the repository and not a packaged release) Install git
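
A typical sequence, assuming the downloaded file is the standard get-pip.py bootstrap script and that SpiderFoot’s Python dependencies are listed in a requirements.txt (both assumptions; adjust to your release), looks like:

python.exe get-pip.py
pip install -r requirements.txt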

MacOS

Installing on MacOS X is facilitated by using the Homebrew package manager to install Python 3 and pip, and then installing SpiderFoot dependencies as you would on Linux:

  1. First, make sure you have Homebrew installed. Try running brew and if that doesn’t work, install it.
  2. Install Python 3 with Homebrew; this will also install pip (see the example after this list).
  3. (Optional if you want to run from the repository and not a packaged release) Install git with Homebrew.
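
For illustration, the Homebrew steps roughly amount to the following (package names and the requirements.txt dependency file are assumptions; adjust to your setup):

brew install python
brew install git
pip3 install -r requirements.txt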

Up to table of contents

Installing

SpiderFoot can be installed using git (this is the recommended approach, as you’ll always have the latest version by simply doing a git pull), or by downloading a tarball of a release. The approach is the same regardless of platform:

From git
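
For example, assuming the official GitHub repository and a requirements.txt listing the Python dependencies:

git clone https://github.com/smicallef/spiderfoot.git
cd spiderfoot
pip3 install -r requirements.txt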

As a package
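
For example, with a downloaded release tarball (the filename here is purely illustrative):

tar zxvf spiderfoot-x.y.z.tar.gz
cd spiderfoot-x.y.z
pip3 install -r requirements.txt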

Up to table of contents

Running SpiderFoot

To run SpiderFoot, simply execute the main script from the directory you extracted/pulled SpiderFoot into. Ensure you’re using Python 3; on some Linux distributions the default python binary is still Python 2, so it is best to be explicit and invoke it with python3:
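
Assuming the standard sf.py entry point, running it with no arguments prints the usage information:

python3 ./sf.py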

This is telling you that you’re missing command-line arguments, because SpiderFoot doesn’t know whether you want to run it in scan mode, or in Web UI mode.

Use Web UI mode when you want to have SpiderFoot run with its built-in web server, enabling you to run scans, browse data and manage configuration via a web browser. Web UI mode also enables you to drive SpiderFoot using the CLI in a client/server model.

Use scan mode when you just want to have SpiderFoot fire off a scan and parse the results as output. No web server is started in this mode.

Web UI mode

To start SpiderFoot in Web UI mode, you need to tell it what IP and port to listen on. The below example binds SpiderFoot to localhost:
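
For example (the -l flag specifies the listening IP:port in recent releases, and 5001 is just an illustrative port; check python3 ./sf.py --help if yours differs):

python3 ./sf.py -l 127.0.0.1:5001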

Once executed, a web server will be started, which will listen on the IP and port you specified. You can then use the web browser of your choice to browse to that address. Or, in more recent versions, you can use the CLI, which by default will connect to the server locally, or you can provide a URL of your server explicitly (an example is shown further below).

If you wish to make SpiderFoot accessible from another system, for example running it on a server and controlling it remotely using the CLI, then you can specify an external IP for SpiderFoot to bind to, or use 0.0.0.0 so that it binds to all addresses, including the externally facing one:
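
For example, to bind to all interfaces (same assumptions about the entry point and flags as above):

python3 ./sf.py -l 0.0.0.0:5001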

Then to use the CLI from a remote system to which the CLI script has been copied, you would run:
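
A sketch of what that might look like, assuming the CLI script is sfcli.py and that -s takes the server URL (verify the exact flag with the CLI’s --help):

python3 ./sfcli.py -s http://<server-ip>:5001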

Run the CLI with its help flag to better understand how to use the client CLI.

If the default port is used by another application on your system, you can change the port:
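
For example, to listen on a different port instead (port number illustrative):

python3 ./sf.py -l 127.0.0.1:9999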

Once started, you will see something similar to this, which means you are ready to go. If you instead see an error message about missing modules, please go back and ensure you’ve installed all the pre-requisites.

Caution!

By default, SpiderFoot does not authenticate users connecting to its user-interface or serve over HTTPS, so avoid running it on a server/workstation that can be accessed from untrusted devices, as they will be able to control SpiderFoot remotely and initiate scans from your devices. To use authentication and HTTPS, see the Security section below.

Scan mode

Newer versions of SpiderFoot add the ability to run SpiderFoot entirely via the command-line (without starting a web server) to run a scan. You can see all the available command-line arguments by using the help flag:
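
For example, assuming the sf.py entry point:

python3 ./sf.py --help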

The command-line arguments are fairly self-explanatory, however a few require some explaining. First, some simple examples:

The below example runs a scan against a single domain as the target, enabling one very simple module, which performs DNS resolutions of any identified IP addresses and hostnames.
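
A sketch of such a command, using example.com as a stand-in target and sfp_dnsresolve as the DNS resolution module (flag names per recent releases; verify with --help):

python3 ./sf.py -s example.com -m sfp_dnsresolve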

We can see that it is a little noisy, so we can add the quiet flag to reduce output to just the data from the scan:
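
For example, assuming -q is the quiet switch in your version:

python3 ./sf.py -s example.com -m sfp_dnsresolve -q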

It’s also not necessary to specify any module; you can simply run all modules in your scan:
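
For example (same assumptions as above):

python3 ./sf.py -s example.com -q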

So far this has all been fairly simple. But if we want to do something a little more advanced, such as getting every possible e-mail address on a domain name, using only modules that take our target directly as input (referred to as ‘strict mode’), and only getting e-mail addresses but not any other data, we can do the following:
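
A hedged sketch of such a command: here -t is assumed to filter output by event type (EMAILADDR) and -x to enable strict mode, but double-check both flags against --help for your version:

python3 ./sf.py -s example.com -t EMAILADDR -x -q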

Up to table of contents

Security

A later version of SpiderFoot introduced authentication as well as TLS/SSL support. These are automatic based on the presence of specific files.

Authentication

SpiderFoot will require basic digest authentication if a password file exists in the SpiderFoot directory. The format of the file is simple: just create one entry per account, per line.

For example:
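
An illustrative entry, assuming the common username:password format with one account per line (the credentials below are made up):

admin:Sup3rSecretPassw0rd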

Once the file is created, restart SpiderFoot.

TLS/SSL

SpiderFoot will serve HTTPS (and only that) if it detects the existence of a public certificate and key file in SpiderFoot’s root directory. This means whatever port you set SpiderFoot to listen on is the port TLS/SSL will be used on. It is not possible for SpiderFoot to serve both HTTP and HTTPS simultaneously on different ports. If you need to do that, an nginx proxy in front of SpiderFoot would be a better solution.

Simply place two files in the SpiderFoot directory: a certificate file (RSA public key in PEM format) and a key file (RSA private key in PEM format). Restart SpiderFoot and you will now be serving HTTPS only.

A helper script has been provided for Linux users to generate a self-signed certificate using OpenSSL, or you can follow the instructions in this StackOverflow article.
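
If you would rather generate the pair by hand, an OpenSSL command along these lines produces a self-signed certificate and key (the output filenames are placeholders; use whatever names your SpiderFoot version expects):

openssl req -x509 -newkey rsa:4096 -nodes -days 365 -subj "/CN=localhost" -keyout spiderfoot.key -out spiderfoot.crt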

Up to table of contents

API Keys

Many SpiderFoot modules require API keys to function to their fullest extent (or at all), so you will need to go to each service and obtain an API key where you feel that having such a key would add value to your scans. Instructions for how to obtain each API key can be found within the Settings for the respective module:

Up to table of contents

Configuring SpiderFoot

One of the main principles behind SpiderFoot is to be highly configurable. Every setting is available in the user interface within the Settings section and should be adequately explained there. Just a few key points to note:

  • API keys can be imported and exported between SpiderFoot and SpiderFoot HX using the “Import API Keys” and “Export API Keys” functions. The format is also a simple CSV so can also be manipulated outside of SpiderFoot to be loaded in, if you prefer.
  • When Debugging is enabled, a lot of logs are generated and can sometimes result in error messages about database locking. This appears to be harmless towards the scan but can mean that logs get dropped.
  • It is worth going through the modules you intend to rely upon heavily to ensure they are configured appropriately for your needs, most importantly the DNS-related modules as they tend to have a knock-on effect on many other modules.

Up to table of contents

Using SpiderFoot

Running a Scan

When you run SpiderFoot in Web UI mode for the first time, there is no historical data, so you should be presented with a screen like the following:

To initiate a scan, click on the ‘New Scan’ button in the top menu bar. You will then need to define a name for your scan (these are non-unique) and a target (also non-unique):

You can then define how you would like to run the scan – either by use case (the tab selected by default), by data required or by module.

Module-based scanning is for more advanced users who are familiar with the behavior and data provided by different modules, and want more control over the scan:

Beware though, there is no dependency checking when scanning by module, only for scanning by required data. This means that if you select a module that depends on event types only provided by other modules, but those modules are not selected, you will get no results.

Scan Results

From the moment you click ‘Run Scan’, you will be taken to a screen for monitoring your scan in near real time:

That screen is made up of a graph showing a break down of the data obtained so far plus log messages generated by SpiderFoot and its modules.

The bars in the graph are clickable, taking you to the result table for that particular data type.

Browsing Results

By clicking on the ‘Browse’ button for a scan, you can browse the data by type:

This data is exportable and searchable. Click the Search box to get a pop-up explaining how to perform searches.

By clicking on one of the data types, you will be presented with the actual data:

The fields displayed are explained as follows:

  • Checkbox field: Use this to set/unset fields as false positive. Once at least one is checked, click the orange False Positive button above to set/unset the record.
  • Data Element: The data the module was able to obtain about your target.
  • Source Data Element: The data the module received as the basis for its data collection. In the example above, the sfp_portscan_tcp module received an event about an open port, and used that to obtain the banner on that port.
  • Source Module: The module that identified this data.
  • Identified: When the data was identified by the module.

You can click the black icons to modify how this data is represented. For instance you can get a unique data representation by clicking the Unique Data View icon:

Setting False Positives

A later version introduced the ability to set data records as false positives. As indicated in the previous section, use the checkbox and the orange button to set/unset records as false positives.

Once you have set records as false positive, you will see an indicator next to those records, and have the ability to filter them from view, as shown below:

NOTE: Records can only be set to false positive once a scan has finished running. This is because setting a record to false positive also results in all child data elements being set to false positive. This obviously cannot be done if the scan is still running and can thus lead to an inconsistent state in the back-end. The UI will prevent you from doing so.

The result of a record being set to false positive, aside from the indicator in the data table view and exports, is that such data will not be shown in the node graphs.

Searching Results

Results can be searched either at the whole scan level, or within individual data types. The scope of the search is determined by the screen you are on at the time.

As indicated by the pop-up box when selecting the search field, you can search as follows:

  • Exact value: Non-wildcard searching for a specific value. For example, search for 404 within the HTTP Status Code section to see all pages that were not found.
  • Pattern matching: Search with simple wildcards to find patterns. For example, search for a pattern such as *22 within the Open TCP Port section to see all instances of port 22 open.
  • Regular expressions: Encapsulate your string in ‘/’ to search by regular expression. For example, search for ‘/\d+\.\d+\.\d+\.\d+/’ to find anything looking like an IP address in your scan results.

Managing Scans

When you have some historical scan data accumulated, you can use the list available on the ‘Scans’ section to manage them:

You can filter the scans shown by altering the Filter drop-down selection. Except for the green refresh icon, all icons on the right apply to whichever scans you have checked the checkboxes for.

Tor Integration

Refer to this post for more information.

Up to table of contents

Modules

Overview

SpiderFoot has all data collection modularised. When a module discovers a piece of data, that data is transmitted to all other modules that are ‘interested’ in that data type for processing. Those modules will then act on that piece of data to identify new data, and in turn generate new events for other modules which may be interested, and so on.

For example, one module may identify an IP address associated with your target, notifying all interested modules. One of those interested modules would be a module that takes that IP address and identifies the netblock it is a part of, the BGP ASN and so on.

This might be best illustrated by looking at module code. For example, the module for identifying human names looks for TARGET_WEB_CONTENT and EMAILADDR events:

Meanwhile, as each event is delivered to a module, it is also recorded in the SpiderFoot database for reporting and viewing in the UI.

Module List

To see a list of all SpiderFoot modules, run the following:
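
Assuming the sf.py entry point, the module listing switch is -M in recent releases:

python3 ./sf.py -M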

Data Elements

As mentioned above, SpiderFoot works on an “event-driven” model, whereby each module generates events about data elements which other modules listen to and consume.

The data elements are one of the following types:

  • entities like IP addresses, Internet names (hostnames, sub-domains, domains),
  • sub-entities like port numbers, URLs and software installed,
  • descriptors of those entities (malicious, physical location information, …),
  • data which is mostly unstructured (web page content, port banners, raw DNS records, …)

To see a full list of all the types available, run the following:
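
Assuming the same entry point, event types can be listed with -T in recent releases (check --help if yours differs):

python3 ./sf.py -T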

Writing a Module

To write a SpiderFoot module, start by looking at the template module file, which is a skeleton module that does nothing. Use the following steps as your guide:

  1. Create a copy of the template module, named for whatever your module will be. Try and make the name something descriptive that reflects what the module does, for example a name indicating image analysis if you were creating a module to analyse image content.
  2. Replace XXX in the copied module with the name of your module and update the descriptive information in the header and comment within the module.
  3. The descriptive information for the class is used by SpiderFoot in the UI to correctly categorise modules, so make it something meaningful. Look at other modules for examples.
  4. Set the events your module watches for and produces accordingly, based on the data element table in the previous section. If you are producing a new data element not pre-existing in SpiderFoot, you must create this in the database:
    • Put the logic for the module in the event handler. Each call to the handler is provided an event object. The most important values within this object are:
      • The event type: the data element ID (e.g. TARGET_WEB_CONTENT, EMAILADDR).
      • The data: the actual data, e.g. the IP address or web server banner, etc.
      • The source module: the name of the module that produced the event.
    • When it is time to generate your event, create an event instance:
      • Note: the source event passed as the last variable is the event that your module received. This is what builds a relationship between data elements in the SpiderFoot database.
    • Notify all modules that may be interested in the event (a full illustrative skeleton follows below):
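
    To make the steps above concrete, here is a minimal, illustrative module skeleton. It follows the structure described above (watched events, produced events, an event handler and listener notification), but import, class and attribute names can differ between SpiderFoot versions, so treat it as a sketch rather than a drop-in module:

    from spiderfoot import SpiderFootEvent, SpiderFootPlugin

    class sfp_example(SpiderFootPlugin):
        """Illustrative skeleton: watches e-mail addresses and emits raw data."""

        meta = {
            'name': "Example Module",
            'summary': "Skeleton for documentation purposes only.",
            'flags': [],
            'useCases': ["Investigate"],
            'categories': ["Content Analysis"]
        }

        opts = {}
        optdescs = {}

        def setup(self, sfc, userOpts=dict()):
            # Keep a handle to the SpiderFoot core object and merge any user options.
            self.sf = sfc
            self.opts.update(userOpts)

        def watchedEvents(self):
            # Event types this module wants to receive.
            return ["EMAILADDR"]

        def producedEvents(self):
            # Event types this module may generate.
            return ["RAW_RIR_DATA"]

        def handleEvent(self, event):
            # event.eventType, event.data and event.module describe the incoming event.
            data = "Observed e-mail address: " + event.data

            # The source event is passed last, which builds the parent/child
            # relationship between data elements in the SpiderFoot database.
            evt = SpiderFootEvent("RAW_RIR_DATA", data, self.__name__, event)
            self.notifyListeners(evt)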

    Up to table of contents

    Database

    All SpiderFoot data is stored in a SQLite database (in your SpiderFoot installation folder) which can be used outside of SpiderFoot for analysis of your data.

    The schema is quite simple and can be viewed in the GitHub repo.

    The below queries might provide some further clues:
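
    As a starting point, a short Python snippet like the one below will at least show you which tables exist; the database filename is an assumption (it sits in your SpiderFoot installation folder), and the full schema can be checked in the GitHub repo:

    import sqlite3

    # Path assumed: the SQLite file in your SpiderFoot installation folder.
    con = sqlite3.connect("spiderfoot.db")

    # List the tables so you can explore the schema interactively.
    for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
        print(name)

    con.close()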

    Spider Use Cases

    This article will cover simple working examples for some standard use cases for Spider. The example will be illustrated using a sales opportunities table to be consistent throughout. In some cases the actual examples will be contrived but are used to illustrate the varying syntax options.

    Have 3 or more servers available and install MariaDB on each of these servers:

    • spider server, which will act as the front end server hosting the Spider storage engine.
    • backend1, which will act as a backend server storing data.
    • backend2, which will act as a second backend server storing data.

    Follow the instructions here to enable the Spider storage engine on the spider server:

    mysql -u root -p < /usr/share/mysql/install_spider.sql

    Enable use of non root connections

    When using the General Query Log, non-root users may encounter issues when querying Spider tables. Explicitly setting the relevant system variable causes the Spider node to execute statements on the data nodes to enable or disable the General Query Log. When this is done, queries issued by users without the required privilege raise an error.

    To avoid this, don't explicitly set the system variable.

    Create accounts for Spider to connect with on the backend servers

    Spider needs a remote connection to the backend servers to actually perform the remote queries, so this should be set up on each backend server. In this case the IP address used is that of the Spider node, limiting access to just that server.

    backend1> grant all on test.* to 'spider'@'<spider node IP>' identified by 'spider';
    backend2> grant all on test.* to 'spider'@'<spider node IP>' identified by 'spider';

    Now verify that these connections can be used from the spider node (substituting the backend1 and backend2 host addresses):

    spider> mysql -u spider -p -h <backend1 IP> test
    spider> mysql -u spider -p -h <backend2 IP> test

    Create table on backend servers

    The table definition should be created in the test database on both the backend1 and backend2 servers:

    create table opportunities ( id int, accountName varchar(20), name varchar(), owner varchar(7), amount decimal(10,2), closeDate date, stageName varchar(11), primary key (id), key (accountName) ) engine=InnoDB;

    Create server entries on spider server

    While the connection information can also be specified inline in the comment, it is cleaner to define a server object representing each remote backend server connection:

    create server backend1 foreign data wrapper mysql options (host '<backend1 IP>', database 'test', user 'spider', password 'spider', port 3306);
    create server backend2 foreign data wrapper mysql options (host '<backend2 IP>', database 'test', user 'spider', password 'spider', port 3306);

    Unable to Connect Errors

    Bear in mind, if you ever need to remove, recreate or otherwise modify the server definition for any reason, you need to also execute a FLUSH TABLES statement. Otherwise, Spider continues to use the old server definition, which can result in queries raising the error:

    Error: Unable to connect to foreign data source

    If you encounter this error when querying Spider tables, issue a FLUSH TABLES statement to update the server definitions:

    FLUSH TABLES;

    In this case, a Spider table is created to allow remote access to the opportunities table hosted on backend1. This then allows for queries and remote DML into the backend1 server from the spider server:

    create table opportunities ( id int, accountName varchar(20), name varchar(), owner varchar(7), amount decimal(10,2), closeDate date, stageName varchar(11), primary key (id), key (accountName) ) engine=spider comment='wrapper "mysql", srv "backend1", table "opportunities"';

    In this case a spider table is created to distribute data across backend1 and backend2 by hashing the id column. Since the id column is an incrementing numeric value the hashing will ensure even distribution across the 2 nodes.

    create table opportunities ( id int, accountName varchar(20), name varchar(), owner varchar(7), amount decimal(10,2), closeDate date, stageName varchar(11), primary key (id), key (accountName) ) engine=spider COMMENT='wrapper "mysql", table "opportunities"' PARTITION BY HASH (id) ( PARTITION pt1 COMMENT = 'srv "backend1"', PARTITION pt2 COMMENT = 'srv "backend2"' ) ;

    In this case a spider table is created to distribute data across backend1 and backend2 based on the first letter of the accountName field. All accountNames that start with the letter L and prior will be stored in backend1 and all other values stored in backend2. Note that the accountName column must be added to the primary key which is a requirement of MariaDB partitioning:

    create table opportunities ( id int, accountName varchar(20), name varchar(), owner varchar(7), amount decimal(10,2), closeDate date, stageName varchar(11), primary key (id, accountName), key(accountName) ) engine=spider COMMENT='wrapper "mysql", table "opportunities"' PARTITION BY range columns (accountName) ( PARTITION pt1 values less than ('M') COMMENT = 'srv "backend1"', PARTITION pt2 values less than (maxvalue) COMMENT = 'srv "backend2"' ) ;

    In this case a spider table is created to distribute data across backend1 and backend2 based on specific values in the owner field. Bill, Bob, and Chris will be stored in backend1 and Maria and Olivier stored in backend2. Note that the owner column must be added to the primary key which is a requirement of MariaDB partitioning:

    create table opportunities ( id int, accountName varchar(20), name varchar(), owner varchar(7), amount decimal(10,2), closeDate date, stageName varchar(11), primary key (id, owner), key(accountName) ) engine=spider COMMENT='wrapper "mysql", table "opportunities"' PARTITION BY list columns (owner) ( PARTITION pt1 values in ('Bill', 'Bob', 'Chris') COMMENT = 'srv "backend1"', PARTITION pt2 values in ('Maria', 'Olivier') COMMENT = 'srv "backend2"' ) ;

    With MariaDB the following partition clause can be used to specify a default partition for all other values, however this must be a distinct partition / shard:

    PARTITION partition_name DEFAULT

    Comments

    It would seem you’re extracting cookies, which removes the auto exclude for Google Analytics tracking tags; you could stop them from firing by including an exclude rule for them.

    Check out our video guide on the exclude feature.


    Speed

    Configuration > Speed

    The speed configuration allows you to control the speed of the SEO Spider, either by number of concurrent threads, or by URLs requested per second.

    When reducing speed, it’s always easier to control by the ‘Max URI/s’ option, which is the maximum number of URL requests per second. For example, the screenshot below would mean crawling at 1 URL per second –

    SEO Spider Configuration

    The ‘Max Threads’ option can simply be left alone when you throttle speed via URLs per second.

    Increasing the number of threads allows you to significantly increase the speed of the SEO Spider. By default the SEO Spider crawls at 5 threads, to not overload servers.

    Please use the threads configuration responsibly, as setting the number of threads high to increase the speed of the crawl will increase the number of HTTP requests made to the server and can impact a site’s response times. In very extreme cases, you could overload a server and crash it.

    We recommend approving a crawl rate and time with the webmaster first, monitoring response times and adjusting the default speed if there are any issues.


    User agent

    Configuration > User-Agent

    The user-agent configuration allows you to switch the user-agent of the HTTP requests made by the SEO Spider. By default the SEO Spider makes requests using its own ‘Screaming Frog SEO Spider’ user-agent string.

    However, it has inbuilt preset user agents for Googlebot, Bingbot, various browsers and more. This allows you to switch between them quickly when required. This feature also has a custom user-agent setting which allows you to specify your own user agent.

    Details on how the SEO Spider handles robots.txt can be found here.


    HTTP header

    Configuration > HTTP Header

    The HTTP Header configuration allows you to supply completely custom header requests during a crawl.

    Custom HTTP Headers

    This means you’re able to set anything from accept-language, cookie and referer, to just supplying any unique header name. For example, there are scenarios where you may wish to supply an Accept-Language HTTP header in the SEO Spider’s request to crawl locale-adaptive content.

    You can choose to supply any language and region pair that you require within the header value field.

    User-agent is configured separately from other headers via ‘Configuration > User-Agent’.


    Custom search

    Configuration > Custom > Search

    The SEO Spider allows you to find anything you want in the source code of a website. The custom search feature will check the HTML (page text, or specific element you choose to search in) of every page you crawl.

    By default custom search checks the raw HTML source code of a website, which might not be the text that is rendered in your browser. You can switch to JavaScript rendering mode to search the rendered HTML.

    You’re able to configure multiple search filters in the custom search configuration, which allow you to input your text or regex and find pages that either ‘contain’ or ‘do not contain’ your chosen input.

    This can be found under ‘Config > Custom > Search’.

    Custom Search

    Simply click ‘Add’ (in the bottom right) to include a filter in the configuration.

    From left to right, you can name the search filter, select ‘contains’ or ‘does not contain’, choose ‘text’ or ‘regex’, input your search query – and choose where the search is performed (HTML, page text, an element, or XPath and more).

    Custom Search Filters

    For example, you may wish to choose ‘contains’ for pages like ‘Out of stock’ as you wish to find any pages which have this on them. When searching for something like Google Analytics code, it would make more sense to choose the ‘does not contain’ filter to find pages that do not include the code (rather than just list all those that do!).

    The pages that either ‘contain’ or ‘do not contain’ the entered data can be viewed within the ‘Custom Search’ tab.

    Custom Search Results Data

    The ‘contains’ filter will show the number of occurrences of the search, while a ‘does not contain’ search will either return ‘Contains’ or ‘Does Not Contain’.

    In this search, there are 2 pages with ‘Out of stock’ text, each containing the word just once – while the GTM code was not found on any of the 10 pages.

    The SEO Spider uses the Java regex library, as described here. To ‘scrape’ or extract data, please use the custom extraction feature.

    You are able to use regular expressions in custom search to find exact words. For example:

    \bexample\b

    Would match a particular word (‘example’ in this case), as \b matches word boundaries.

    Please see our tutorial on ‘How to Use Custom Search’ for more advanced scenarios, such as case sensitivity, finding exact & multiple words, combining searches, searching in specific elements and for multi-line snippets of code.


    Custom extraction

    Configuration > Custom > Extraction

    Custom extraction allows you to collect any data from the HTML of a URL. Extraction is performed on the static HTML returned by internal HTML pages with a 2xx response code. You can switch to JavaScript rendering mode to extract data from the rendered HTML (for any data that’s client-side only).

    The SEO Spider supports the following modes to perform data extraction:

    • XPath: XPath selectors, sespider db connect error, including attributes.
    • CSS Path: CSS Path and optional attribute.
    • Regex: For more advanced uses, such as scraping HTML comments or inline JavaScript.

    When using XPath or CSS Path to collect HTML, you can choose what to extract:

    • Extract HTML Element: The selected element and its inner HTML content.
    • Extract Inner HTML: The inner HTML content of the selected element. If the selected element contains other HTML elements, they will be included.
    • Extract Text: The text content of the selected element and the text content of any sub elements.
    • Function Value: The result of the supplied function, eg count(//h1) to find the number of h1 tags on a page.

    To set up custom extraction, click ‘Config > Custom > Extraction’.

    Custom Extraction Menu

    Just click ‘Add’ to use an extractor, and insert the relevant syntax. Multiple separate extractors can be configured to scrape data from a website.

    Custom Extraction config

    The data extracted can be viewed in the ‘Custom Extraction’ tab. Extracted data is also included as columns within the ‘Internal’ tab as well.

    Custom Extraction

    Please read our SEO Spider web scraping guide for a full tutorial on how to use custom extraction. For examples of custom extraction expressions, please see our XPath Examples and Regex Examples.

    Regex Troubleshooting

    • The SEO Spider does not pre-process HTML before running regexes. Please bear in mind, however, that the HTML you see in a browser when viewing source may be different to what the SEO Spider sees. This can be caused by the web site returning different content based on User-Agent or Cookies, or if the page’s content is generated using JavaScript and you are not using JavaScript rendering.
    • More details on the regex engine used by the SEO Spider can be found here.
    • The regex engine is configured such that the dot character matches newlines.

    Custom link positions

    Configuration > Custom > Link Positions

    The SEO Spider classifies every link’s position on a page, such as whether it’s in the navigation, content of the page, sidebar or footer for example.

    The classification is performed by checking each link’s ‘link path’ (as an XPath) for known semantic substrings and can be seen in the ‘inlinks’ and ‘outlinks’ tabs.

    This can help identify ‘inlinks’ to a page that are only from in-body content, for example, ignoring any links in the main navigation or footer, for better internal link analysis.

    If your website uses semantic HTML5 elements (or well-named non-semantic elements, such as div id="nav"), the SEO Spider will be able to automatically determine different parts of a web page and the links within them.

    HTML5 Semantic Elements

    The default link positions set-up uses the following search terms to classify links.

    Link Positions

    However, not every website is built in this way, so you’re able to configure the link position classification based upon each site’s unique set-up. This allows you to use a substring of the link path of any links, to classify them.

    For example, the Screaming Frog website has mobile menu links outside the nav element that are determined to be in ‘content’ links. This is incorrect, as they are just an additional site-wide navigation on mobile. This is because they are not within a nav element, and are not well named, such as having ‘nav’ in their class name. Doh!

    Link Position Classification

    The ‘mobile-menu__dropdown’ class name (which is in the link path as shown above) can be used to define its correct link position using the Link Positions feature.

    Custom Link Positions

    These links will then be correctly attributed as a sitewide navigation link.

    Mobile Menu Links classified

    The search terms or substrings used for link position classification are based upon order of precedence. As ‘Content’ is set as ‘/’ and will match any Link Path, it should always be at the bottom of the configuration.

    So in the above example, the ‘mobile-menu__dropdown’ class name was added and moved above ‘Content’, using the ‘Move Up’ button to take precedence.

    You’re able to disable ‘Link Positions’ classification, which means the XPath of each link is not stored and the link position is not determined. This can help save memory and speed up the crawl.


    User Interface

    Configuration > User Interface

    There are a few configuration options under the user interface menu. These are as follows:

    • Reset Columns For All Tables – If columns have been deleted or moved in any table, this option allows you to reset them back to default.
    • Reset Tabs – If tabs have been deleted or moved, this option allows you to reset them back to default.
    • Theme > Light / Dark – By default the SEO Spider uses a light grey theme. However, you can switch to a dark theme (aka ‘Dark Mode’, ‘Batman Mode’ etc). This theme can help reduce eye strain, particularly for those that work in low light.
    Dark Mode

    Google Analytics integration

    Configuration > API Access > Google Analytics

    You can connect to the Google Analytics API and sespider db connect error in data directly during a crawl. The SEO Spider can fetch user and session metrics, as well as goal conversions and ecommerce (transactions and revenue) data for landing pages, so you can view your top performing pages when performing a technical or content audit.

    If you’re running an Adwords campaign, you can also pull in impressions, clicks, cost and conversion data and the SEO Spider will match your destination URLs against the site crawl, too. You can also collect other metrics of interest, such as Adsense data (Ad impressions, clicks revenue etc), site speed or social activity and interactions.

    To set this up, start the SEO Spider and go to ‘Configuration > API Access > Google Analytics’.

    Google Analytics config

    Then you just need to connect to a Google account (which has access to the Analytics account you wish to query) by granting the ‘Screaming Frog SEO Spider’ app permission to access your account to retrieve the data. Google APIs use the OAuth protocol for authentication and authorisation. The SEO Spider will remember any Google accounts you authorise within the list, so you can ‘connect’ quickly upon starting the application each time.

    Google Analytics set-up

    Once you have connected, you can choose the relevant Google Analytics account, property, view, segment and date range!

    Google Analytics user account

    Then simply select the metrics that you wish to fetch! The SEO Spider currently allows you to select up to 30, which we might extend further. If you keep the number of metrics to 10 or below with a single dimension (as a rough guide), then it will generally be a single API query per 10k URLs, which makes it super quick –

    Google Analytics metrics

    By default the SEO Spider collects the following 10 metrics –

    1. Sessions
    2. % New Sessions
    3. New Users
    4. Bounce Rate
    5. Page Views Per Session
    6. Avg Session Duration
    7. Page Value
    8. Goal Conversion Rate
    9. Goal Completions All
    10. Goal Value All

    You can read more about the definition of each metric from Google.

    You can also set the dimension of each individual metric against either page path and, or landing page which are quite different (and both useful depending on your scenario & objectives).

    Google analytics dimension

    There are scenarios where URLs in Google Analytics might not match URLs in a crawl, so we cover these by matching trailing and non-trailing slash URLs and case sensitivity (upper and lowercase characters in URLs). Google doesn’t pass the protocol (HTTP or HTTPS) via their API, so we also match this data automatically.

    Google Analytics General Config

    When selecting either of the above options, please note that data from Google Analytics is sorted by sessions, so matching is performed against the URL with the highest number of sessions. Data is not aggregated for those URLs.

    • Match Trailing and Non-Trailing Slash URLs – Allows both the trailing slash and non-trailing slash version of a URL to match whichever version from GA has the highest number of sessions.
    • Match Uppercase & Lowercase URLs – Allows uppercase and lowercase variations of a URL to match the version of that URL from GA with the highest number of sessions.

    If you have hundreds of thousands of URLs in GA, you can choose to limit the number of URLs to query, which is by default ordered by sessions to return the top performing page data.

    The ‘Crawl New URLs Discovered in Google Analytics’ option means that any new URLs discovered in Google Analytics (that are not found via hyperlinks) will be crawled. If this option isn’t enabled, then new URLs discovered via Google Analytics will only be available to view in the ‘Orphan Pages’ report. They won’t be added to the crawl queue, be viewable within the user interface or appear under the respective tabs and filters. Please see our guide on finding orphan pages.

    When you hit ‘start’ to crawl, the Google Analytics data will then be fetched and displayed in respective columns within the ‘Internal’ and ‘Analytics’ tabs. There’s a separate ‘Analytics’ progress bar in the top right, and when this has reached 100%, crawl data will start appearing against URLs. The more URLs you query, the longer this process can take, but generally it’s extremely quick.

    Google Analytics integration

    There are 5 filters currently under the ‘Analytics’ tab, which allow you to filter the Google Analytics data –

    • Sessions Above 0 – This simply means the URL in question has 1 or more sessions.
    • Bounce Rate Above 70% – This means the URL has a bounce rate over 70%, which you may wish to investigate. In some scenarios this is normal though!
    • No GA Data – This means that for the metrics and dimensions queried, the Google API didn’t return any data for the URLs in the crawl. So the URLs either didn’t receive any sessions, or perhaps the URLs in the crawl are just different to those in GA for some reason.
    • Non-Indexable with GA Data – This means the URL is non-indexable, but still has data from GA.
    • Orphan URLs – This means the URL was only discovered via GA, sespider db connect error, and was not found via an internal link during the crawl.

    As an example for our own website, we can see there is ‘no GA data’ for blog category pages and a few old blog posts, as you might expect (the query was landing page, rather than page). You may also see pages appear here which are ‘noindex’ or ‘canonicalised’, unless you have ‘respect noindex‘ and ‘respect canonicals‘ ticked in the advanced configuration tab.

    Google Analytics No GA Data

    If GA data does not get pulled into the SEO Spider as you expected, then analyse the URLs in GA under ‘Behaviour > Site Content > All Pages’ and ‘Behaviour > Site Content > Landing Pages’, depending on which dimension you choose in your query. The URLs here need to match those in the crawl for the data to be matched accurately. If they don’t match, then the SEO Spider won’t be able to match up the data accurately.

    We recommend checking your default Google Analytics view settings (such as ‘default page’) and filters which all impact how URLs are displayed and hence matched against a crawl. If you want URLs to match up, you can often make the required amends within Google Analytics.

    Please note, Google APIs use the OAuth protocol for authentication and authorisation, and the data provided via Google Analytics and other APIs is only accessible locally on your machine. We cannot view and do not store that data ourselves. Please see more in our FAQ.


    Google Search Console integration

    Configuration > API Access > Google Search Console

    You can connect to the Google Search Analytics and URL Inspection APIs and pull in data directly during a crawl.

    By default the SEO Spider will fetch impressions, clicks, CTR and position metrics from the Search Analytics API, so you can view your top performing pages when performing a technical or content audit.

    Optionally, you can also choose to ‘Enable URL Inspection’ alongside Search Analytics data, which provides Google index status data for up to 2,000 URLs per property a day. This includes whether the ‘URL is on Google’, or ‘URL is not on Google’ and coverage.

    Search Console data

    To set this up, go to ‘Configuration > API Access > Google Search Console’. Connecting to Google Search Console works in the same way as already detailed in our step-by-step Google Analytics integration guide.

    Connect to a Google account (which has access to the Search Console account you wish to query) by granting the ‘Screaming Frog SEO Spider’ app permission to access your account to retrieve the data. Google APIs use the OAuth protocol for authentication and authorisation. The SEO Spider will remember any Google accounts you authorise within the list, so you can ‘connect’ quickly upon starting the application each time.

    Once you have connected, you can choose the relevant website property.

    Google Search Console integration

    By default the SEO Spider collects the following metrics for the last 30 days –

    • Clicks
    • Impressions
    • CTR
    • Position

    Read more about the definition of each metric from Google.

    If you click the ‘Search Analytics’ tab in the configuration, you can adjust the date range, dimensions and various other settings.

    Google Search Console Search Analytics configuration

    If you wish to crawl new URLs discovered from Google Search Console to find any potential orphan pages, remember to enable the configuration shown below.

    Orphan urls in search console

    Optionally, you can navigate to the ‘URL Inspection’ tab and ‘Enable URL Inspection’ to collect data about the indexed status of up to 2,000 URLs in the crawl.

    Google Search Console URL Inspection API integration

    The SEO Spider crawls breadth-first by default, meaning via crawl depth from the start page of the crawl. The first 2,000 HTML URLs discovered will be queried, so focus the crawl on specific sections, use the configuration for include and exclude, or use list mode to get the data on the key URLs and templates you need.

    The following configuration options are available:

    • Ignore Non-Indexable URLs for URL Inspection – This means any URLs in the crawl that are classed as ‘Non-Indexable’ won’t be queried via the API. Only Indexable URLs will be queried, which can help save on your inspection quota if you’re confident on your site’s set-up.
    • Use Multiple Properties &#; If multiple properties are verified for the same domain the SEO Spider will automatically detect all relevant properties in the account, and use the most specific property to request data for the URL. This means it’s now possible to get far more than 2k URLs with URL Inspection API data in a single crawl, if there are multiple properties set up – without having to perform multiple crawls.

    The URL Inspection API includes the following data.

    • Summary – A top level verdict on whether the URL is indexed and eligible to display in the Google search results. ‘URL is on Google’ means the URL has been indexed, can appear in Google Search results, and no problems were found with any enhancements found in the page (rich results, mobile, AMP). ‘URL is on Google, but has Issues’ means it has been indexed and can appear in Google Search results, but there are some problems with mobile usability, AMP or Rich results that might mean it doesn’t appear in an optimal way. ‘URL is not on Google’ means it is not indexed by Google and won’t appear in the search results. This filter can include non-indexable URLs (such as those that are ‘noindex’) as well as Indexable URLs that are able to be indexed.
    • Coverage – A short, descriptive reason for the status of the URL, explaining why the URL is or isn’t on Google.
    • Last Crawl – The last time this page was crawled by Google, in your local time. All information shown in this tool is derived from this last crawled version.
    • Crawled As – The user agent type used for the crawl (desktop or mobile).
    • Crawl Allowed – Indicates whether your site allowed Google to crawl (visit) the page or blocked it with a robots.txt rule.
    • Page Fetch – Whether or not Google could actually get the page from your server. If crawling is not allowed, this field will show a failure.
    • Indexing Allowed – Whether or not your page explicitly disallowed indexing. If indexing is disallowed, the reason is explained, and the page won’t appear in Google Search results.
    • User-Declared Canonical – If your page explicitly declares a canonical URL, it will be shown here.
    • Google-Selected Canonical – The page that Google selected as the canonical (authoritative) URL, when it found similar or duplicate pages on your site.
    • Mobile Usability – Whether the page is mobile friendly or not.
    • Mobile Usability Issues – If the ‘page is not mobile friendly’, this column will display a list of mobile usability errors.
    • AMP Results – A verdict on whether the AMP URL is valid, invalid or has warnings. ‘Valid’ means the AMP URL is valid and indexed. ‘Invalid’ means the AMP URL has an error that will prevent it from being indexed. ‘Valid with warnings’ means the AMP URL can be indexed, but there are some issues that might prevent it from getting full features, or it uses tags or attributes that are deprecated, and might become invalid in the future.
    • AMP Issues – If the URL has AMP issues, this column will display a list of AMP errors.
    • Rich Results – A verdict on whether Rich results found on the page are valid, invalid or has warnings. ‘Valid’ means rich results have been found and are eligible for search. ‘Invalid’ means one or more rich results on the page has an error that will prevent it from being eligible for search. ‘Valid with warnings’ means the rich results on the page are eligible for search, but there are some issues that might prevent it from getting full features.
    • Rich Results Types – A comma separated list of all rich result enhancements discovered on the page.
    • Rich Results Types Errors – A comma separated list of all rich result enhancements discovered with an error on the page. To export specific errors discovered, use the ‘Bulk Export > URL Inspection > Rich Results’ export.
    • Rich Results Warnings – A comma separated list of all rich result enhancements discovered with a warning on the page. To export specific warnings discovered, use the ‘Bulk Export > URL Inspection > Rich Results’ export.

    You can read more about the indexed URL results from Google.

    There are 11 filters under the ‘Search Console’ tab, which allow you to filter Google Search Console data from both APIs.

    • Clicks Above 0 – This simply means the URL in question has 1 or more clicks.
    • No Search Analytics Data – This means that the Search Analytics API didn’t return any data for the URLs in the crawl. So the URLs either didn’t receive any impressions, or perhaps the URLs in the crawl are just different to those in GSC for some reason.
    • Non-Indexable with Search Analytics Data – URLs that are classed as non-indexable, but have Google Search Analytics data.
    • Orphan URLs – URLs that have been discovered via Google Search Analytics, rather than internal links during a crawl. This filter requires ‘Crawl New URLs Discovered In Google Search Console’ to be enabled under the ‘General’ tab of the Google Search Console configuration window (Configuration > API Access > Google Search Console) and post ‘crawl analysis‘ to be populated. Please see our guide on how to find orphan pages.
    • URL Is Not on Google – The URL is not indexed by Google and won’t appear in the search results. This filter can include non-indexable URLs (such as those that are ‘noindex’) as well as Indexable URLs that are able to be indexed. It’s a catch-all filter for anything not on Google according to the API.
    • Indexable URL Not Indexed – Indexable URLs found in the crawl that are not indexed by Google and won’t appear in the search results. This can include URLs that are unknown to Google, or those that have been discovered but not indexed, and more.
    • URL is on Google, But Has Issues – The URL has been indexed and can appear in Google Search results, but there are some problems with mobile usability, AMP or Rich results that might mean it doesn’t appear in an optimal way.
    • User-Declared Canonical Not Selected – Google has chosen to index a different URL to the one declared by the user in the HTML. Canonicals are hints, and sometimes Google does a great job of this, other times it’s less than ideal.
    • Page Is Not Mobile Friendly – The page has issues on mobile devices.
    • AMP URL Is Invalid – The AMP has an error that will prevent it from being indexed.
    • Rich Result Invalid – The URL has an error with one or more rich result enhancements that will prevent the rich result from showing in the Google search results. To export specific errors discovered, use the ‘Bulk Export > URL Inspection > Rich Results’ export.

    Please see our tutorial on ‘How To Automate The URL Inspection API’.


    PageSpeed Insights integration

    Configuration sespider db connect error API Access > PageSpeed Insights

    You can connect to the Google PageSpeed Insights API and pull in data directly during a crawl.

    PageSpeed Insights uses Lighthouse, sespider db connect error, so the SEO Spider is able to display Lighthouse speed metrics, analyse speed opportunities and diagnostics at scale and gather real-world data from the Chrome User Experience Report (CrUX) which contains Core Web Vitals from real-user monitoring (RUM).

    To set this up, start the SEO Spider and go to ‘Configuration > API Access > PageSpeed Insights’, sespider db connect error, enter a free PageSpeed Insights API key, choose your metrics, connect and crawl.

    Setting Up A PageSpeed Insights API Key

    To set up a free PageSpeed Insights API key, log in to your Google account and then visit the PageSpeed Insights getting started page.

    Once you’re on the page, scroll down a paragraph and click on the ‘Get a Key’ button.

    PSI API Key

    Then follow the process of creating a key – by submitting a project name, agreeing to the terms and conditions and clicking ‘next’.

    PSI API Key Step 1

    It will then enable the key for PSI and provide an API key which can be copied.

    PSI API Key Step 2

    Copy the key, and click ‘Done’.

    Then simply paste this in the SEO Spider ‘Secret Key:’ field under ‘Configuration > API Access > PageSpeed Insights’ and press ‘connect’. This key is used when making calls to the PageSpeed Insights API.
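
    If you want to sanity-check your key outside the SEO Spider, a minimal request against Google's public PageSpeed Insights v5 endpoint looks roughly like this (endpoint and response fields per Google's current API documentation; adjust if they change):

    import requests

    # Illustrative only: query the public PSI v5 endpoint with your own key.
    resp = requests.get(
        "https://www.googleapis.com/pagespeedonline/v5/runPagespeed",
        params={"url": "https://example.com/", "key": "YOUR_API_KEY", "strategy": "MOBILE"},
        timeout=60,
    )
    resp.raise_for_status()

    # Lighthouse reports the performance score as 0-1; multiply by 100 for the familiar value.
    score = resp.json()["lighthouseResult"]["categories"]["performance"]["score"]
    print(round(score * 100))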

    PageSpeed Insights API Key Connection

    That’s it, you’re now connected! The SEO Spider will remember your secret key, so you can ‘connect’ quickly upon starting the application each time.

    If you find that your API key is saying it’s ‘failed to connect’, it can take a couple of minutes to activate. You can also check that the PSI API has been enabled in the API library as per our FAQ. If it isn’t enabled, enable it – and it should then allow you to connect.

    Once you have connected, you can choose metrics and device to query under the ‘metrics’ tab.

    PageSpeed metrics configuration

    The following speed metrics, opportunities and diagnostics data can be configured sespider db connect error be collected via the PageSpeed Insights API integration.

    Overview Metrics

    • Total Size Savings
    • Total Time Savings
    • Total Requests
    • Total Page Size
    • HTML Size
    • HTML Count
    • Image Size
    • Image Count
    • CSS Size
    • CSS Count
    • JavaScript Size
    • JavaScript Count
    • Font Size
    • Font Count
    • Media Size
    • Media Count
    • Other Size
    • Other Count
    • Third Party Size
    • Third Party Count

    CrUX Metrics (‘Field Data’ in PageSpeed Insights)

    • CrUX Performance
    • CrUX First Contentful Paint Time (sec)
    • CrUX First Contentful Paint Category
    • CrUX First Input Delay Time (sec)
    • CrUX First Input Delay Category
    • CrUX Largest Contentful Paint Time (sec)
    • CrUX Largest Contentful Paint Category
    • CrUX Cumulative Layout Shift
    • CrUX Cumulative Layout Shift Category
    • CrUX Interaction to Next Paint (ms)
    • CrUX Interaction to Next Paint Category
    • CrUX Time to First Byte (ms)
    • CrUX Time to First Byte Category
    • CrUX Origin Performance
    • CrUX Origin First Contentful Paint Time (sec)
    • CrUX Origin First Contentful Paint Category
    • CrUX Origin First Input Delay Time (sec)
    • CrUX Origin First Input Delay Category
    • CrUX Origin Largest Contentful Paint Time (sec)
    • CrUX Origin Largest Contentful Paint Category
    • CrUX Origin Cumulative Layout Shift
    • CrUX Origin Cumulative Layout Shift Category
    • CrUX Origin Interaction to Next Paint (ms)
    • CrUX Origin Interaction to Next Paint Category
    • CrUX Origin Time to First Byte (ms)
    • CrUX Origin Time to First Byte Category

    Lighthouse Metrics (‘Lab Data’ in PageSpeed Insights)

    • Performance Score
    • Time to First Byte (ms)
    • First Contentful Paint Time (sec)
    • First Contentful Paint Score
    • Speed Index Time (sec)
    • Speed Index Score
    • Time to Interactive (sec)
    • Time to Interactive Score
    • First Meaningful Paint Time (sec)
    • First Meaningful Paint Score
    • Estimated Input Latency (ms)
    • Estimated Input Latency Score
    • First CPU Idle (sec)
    • First CPU Idle Score
    • Max Potential First Input Delay (ms)
    • Max Potential First Input Delay Score
    • Total Blocking Time (ms)
    • Total Blocking Time Score
    • Cumulative Layout Shift
    • Cumulative Layout Shift Score

    Opportunities

    • Eliminate Render-Blocking Resources Savings (ms)
    • Defer Offscreen Images Savings (ms)
    • Defer Offscreen Images Savings
    • Efficiently Encode Images Savings (ms)
    • Efficiently Encode Images Savings
    • Properly Size Images Savings (ms)
    • Properly Size Images Savings
    • Minify CSS Savings (ms)
    • Minify CSS Savings
    • Minify JavaScript Savings (ms)
    • Minify JavaScript Savings
    • Reduce Unused CSS Savings (ms)
    • Reduce Unused CSS Savings
    • Reduce Unused JavaScript Savings (ms)
    • Reduce Unused JavaScript Savings
    • Serve Images in Next-Gen Formats Savings (ms)
    • Serve Images in Next-Gen Formats Savings
    • Enable Text Compression Savings (ms)
    • Enable Text Compression Savings
    • Preconnect to Required Origin Savings
    • Server Response Times (TTFB) (ms)
    • Server Response Times (TTFB) Category (ms)
    • Multiple Redirects Savings (ms)
    • Preload Key Requests Savings (ms)
    • Use Video Format for Animated Images Savings (ms)
    • Use Video Format for Animated Images Savings
    • Total Image Optimization Savings (ms)
    • Avoid Serving Legacy JavaScript to Modern Browser Savings

    Diagnostics

    • DOM Element Count
    • JavaScript Execution Time (sec)
    • JavaScript Execution Time Category
    • Efficient Cache Policy Savings
    • Minimize Main-Thread Work (sec)
    • Minimize Main-Thread Work Category
    • Text Remains Visible During Webfont Load
    • Image Elements Do Not Have Explicit Width & Height
    • Avoid Large Layout Shifts

    You can read more about the definition of each metric, opportunity or diagnostic according to Lighthouse.

    Filter by –

    • Eliminate Render-Blocking Resources – This highlights all pages with resources that are blocking the first paint of the page, along with the potential savings.