Web crawler

About PhyniteBot

If you're seeing PhyniteBot/1.0 in your server logs, that's us. Here's what it does, why, and how to control it.

What is PhyniteBot?

PhyniteBot is the web crawler used by Phynite Analytics to read and understand the content on your blog. When you connect your site to Phynite, the bot visits your published pages to extract metadata, recipe information, and content structure so we can provide accurate analytics and recommendations.

It is not a search engine crawler. PhyniteBot does not index your content for public search, does not cache or republish your pages, and does not share your content with third parties.

How it behaves

User agent

PhyniteBot/1.0 (+https://phyniteanalytics.com/bot)

Respects robots.txt

PhyniteBot reads and obeys your robots.txt file, including Disallow, Allow, and Crawl-delay directives in the PhyniteBot group. If there is no group for PhyniteBot, it falls back to the rules in your User-agent: * group.
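The group-selection behavior described above can be illustrated with Python's standard-library robots.txt parser. This is a sketch of how any compliant crawler resolves rules, not PhyniteBot's actual implementation; the example rules are hypothetical:

```python
# Illustrative sketch of robots.txt group selection using Python's
# standard-library parser; not PhyniteBot's actual code.
import urllib.robotparser

rules = """
User-agent: *
Disallow: /tmp/

User-agent: PhyniteBot
Disallow: /private/
Crawl-delay: 2
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# A crawler with its own group uses only that group's rules.
print(parser.can_fetch("PhyniteBot", "/private/page"))  # False
print(parser.can_fetch("PhyniteBot", "/tmp/file"))      # True: its own group wins
print(parser.crawl_delay("PhyniteBot"))                 # 2

# A crawler without its own group falls back to the * rules.
print(parser.can_fetch("OtherBot", "/tmp/file"))        # False
```

Note that a specific group replaces, rather than extends, the * rules: PhyniteBot may fetch /tmp/file above because its own group says nothing about it.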

Rate limiting

By default the bot limits itself to roughly 5 requests per second and honors any Crawl-delay you set. It also caps itself at 10 concurrent requests in total across all the sites it crawls, keeping the load on any single server low.
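The interaction between a default request rate and a site-supplied Crawl-delay can be sketched as a minimum-interval throttle. This is a hypothetical illustration, not PhyniteBot's actual code; the class name and defaults are assumptions:

```python
import time

class RateLimiter:
    """Hypothetical sketch of per-site throttling: a default rate
    (e.g. 5 requests/second), stretched to honor any Crawl-delay."""

    def __init__(self, default_rps=5.0, crawl_delay=None):
        # Crawl-delay (seconds per request) wins whenever it is the
        # slower of the two limits.
        self.interval = max(1.0 / default_rps, crawl_delay or 0.0)
        self._last = 0.0

    def wait(self):
        """Sleep just long enough to respect the interval; return the pause."""
        now = time.monotonic()
        pause = max(0.0, self._last + self.interval - now)
        if pause:
            time.sleep(pause)
        self._last = time.monotonic()
        return pause
```

With a Crawl-delay of 2, the effective interval becomes 2 seconds per request, overriding the faster default.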

Sitemap-aware

PhyniteBot reads your sitemap.xml and uses <lastmod> dates to avoid re-crawling pages that haven't changed. Only modified content is re-fetched.
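The lastmod-based skipping described above amounts to comparing each <lastmod> date against the time of the previous crawl. A minimal sketch using Python's standard XML parser; the sitemap content, function name, and cutoff are illustrative assumptions, not PhyniteBot's implementation:

```python
# Sketch: pick out sitemap URLs modified on or after a cutoff date,
# the way a lastmod-aware crawler decides what to re-fetch.
from datetime import date, datetime
import xml.etree.ElementTree as ET

SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/a</loc><lastmod>2026-01-10</lastmod></url>
  <url><loc>https://example.com/b</loc><lastmod>2025-06-01</lastmod></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def urls_modified_since(sitemap_xml, cutoff):
    """Return URLs whose <lastmod> falls on or after the cutoff date."""
    root = ET.fromstring(sitemap_xml)
    fresh = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        if lastmod and datetime.fromisoformat(lastmod).date() >= cutoff:
            fresh.append(loc)
    return fresh

print(urls_modified_since(SITEMAP, date(2026, 1, 1)))
# ['https://example.com/a']
```

Pages whose lastmod predates the last crawl are skipped entirely, which is why keeping sitemap dates accurate reduces crawl traffic.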

Timeout

Each request has a 10-second timeout. If your page takes longer to respond, PhyniteBot will move on and retry later.

No JavaScript rendering

PhyniteBot fetches raw HTML only. It does not execute JavaScript, load images, or download stylesheets. This means minimal impact on your bandwidth and server resources.

Why it visits

PhyniteBot crawls your site for three reasons:

  1. Content enrichment — extracting titles, descriptions, recipe structured data, and other metadata to power your content overview and analytics dashboards.
  2. Freshness detection — comparing current content to previous crawls to identify updates, so your analytics reflect what's actually on the page today.
  3. Recommendations — understanding your content so Phynite can suggest which posts to refresh, which topics to cover next, and which pages need attention.

How to control it

To block PhyniteBot from crawling specific pages or your entire site, add rules to your robots.txt:

User-agent: PhyniteBot
Disallow: /private/
Disallow: /draft/

To block PhyniteBot entirely:

User-agent: PhyniteBot
Disallow: /

To set a crawl delay (in seconds):

User-agent: PhyniteBot
Crawl-delay: 2

Note that blocking PhyniteBot will prevent Phynite Analytics from reading your content, which means content enrichment, recommendations, and some analytics features won't work for the blocked pages.

Questions?

If PhyniteBot is causing issues with your site, or you have questions about how it works, reach out to us at support@phynitesolutions.com or visit our contact page. We'll respond within a day.

© 2026 Phynite Solutions LLC