Engagement Engine code and web crawlers indexing
Answer ID 9543   |   Last Review Date 12/19/2018

Will the Engagement Engine code impact the way in which my pages are indexed by web crawlers?


All releases with Engagement Engine Integration


By default, the Engagement Engine domain serves a robots.txt file that disallows indexing by web crawlers:
User-agent: *
Disallow: /
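A policy of `User-agent: *` plus `Disallow: /` blocks every path for every crawler. As a minimal sketch of how that policy behaves (the example.com URLs are placeholders, not the real Engagement Engine endpoint), Python's standard-library robots.txt parser can evaluate it directly:

```python
# Sketch: evaluating the two-line robots.txt policy shown above.
# The URLs are hypothetical placeholders for illustration only.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse the policy lines in place instead of fetching a live robots.txt.
rp.parse(["User-agent: *", "Disallow: /"])

# Every user agent is denied every path under this policy.
print(rp.can_fetch("Googlebot", "https://example.com/rules/request"))  # False
print(rp.can_fetch("*", "https://example.com/"))                       # False
```

Note that this policy only governs the domain serving it; it says nothing about the pages that embed the Engagement Engine code.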

Crawling is blocked for two reasons:

1. There is no crawlable content on the domain; it is used only for rules requests, which are dynamic.

2. Blocking crawlers reduces traffic for content that would not be meaningful to index.

Nevertheless, the Engagement Engine code does not prevent the pages where it is inserted from being indexed by web crawlers.

From Google's Blocked Resources report:

"If a site's robots.txt file disallows crawling these resources, it can affect how well Google renders and indexes the page, which can affect the page's ranking in Google search."

This means the ranking of pages where Engagement Engine code is inserted will be affected only **if** the blocked resource is important to rendering and ranking the page.

The Engagement Engine rules are not: they are dynamic content and do not alter the site's content (beyond, for example, adding a chat invitation to the page).

If you have questions about what generates a session and how to prevent inaccurate session billing on your site, please review Demystifying Session Usage (PDF). Some simple missteps in customization and configuration can increase billable sessions. For more information, see Session usage information.