Public AI discoverability audit

Run an AI agent-readiness scan

Enter a public domain and select a site profile to check whether AI agents can discover, understand, and act on the important parts of your website.

Enter a domain and select a site profile to generate an agent-readiness score

The scan checks public entry points like the homepage, llms.txt, robots.txt, sitemap.xml, and action-path signals.

Made for the kinds of sites AI agents visit every day

Next.js · React · Tailwind CSS · Vercel · Documentation sites

What Agent Readiness Score does

This project is a focused audit tool for checking whether public website signals are usable for AI agents. It turns a domain into a score, a prioritized issue list, and clear next-step recommendations.

Scan public AI signals

Review homepage structure, llms.txt, robots.txt, sitemap.xml, and other public entry points that agents rely on.

Score what matters

Summarize readiness across discoverability, understandability, actionability, and trust so teams know what to fix first.
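
As a rough sketch of that rollup, the four dimensions named above could each carry a 0-100 score and a weight, with the total computed as a weighted sum; the type names, weights, and computeTotal helper below are illustrative assumptions, not a defined schema.

type Dimension = 'discoverability' | 'understandability' | 'actionability' | 'trust';

interface DimensionScore {
  dimension: Dimension;
  score: number;  // 0-100 for this dimension
  weight: number; // relative importance; weights sum to 1 across the four dimensions
}

// Roll the four dimension scores up into a single 0-100 readiness score.
function computeTotal(scores: DimensionScore[]): number {
  return Math.round(scores.reduce((sum, s) => sum + s.score * s.weight, 0));
}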

Explain every issue

Each finding should answer what is wrong, why it matters for AI agents, and how to improve the page or configuration.

Generate a starter llms.txt

Help teams move faster with a practical template that fits docs sites, blogs, SaaS products, and marketing sites.
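
A minimal sketch of such a generator, assuming the common community llms.txt shape (an H1 title, a one-line summary, and a section of markdown-style links); the function name and input fields are hypothetical, not the product's actual API.

interface LlmsTxtInput {
  siteName: string;
  summary: string;
  links: { title: string; url: string; note?: string }[];
}

// Draft a starter llms.txt: site name, one-line summary, and the key pages agents should read.
function draftLlmsTxt(input: LlmsTxtInput): string {
  return [
    `# ${input.siteName}`,
    '',
    `> ${input.summary}`,
    '',
    '## Key pages',
    ...input.links.map((l) => `- [${l.title}](${l.url})${l.note ? `: ${l.note}` : ''}`),
  ].join('\n');
}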

Why this approach works for MVP

The first release focuses on public, explainable checks instead of heavy enterprise monitoring. That keeps the product easier to build, understand, and iterate on.

Low-friction first scan

Users only need a domain to get a first-pass readiness report and a shareable summary.

Explainable output

Scores are backed by concrete findings instead of opaque AI-only summaries, which makes the product easier to trust.

Built for quick iteration

The product can start with a rules-based engine, then layer in exports, history, recurring scans, and AI-enhanced guidance later.

How a scan should feel

Keep the user journey short, clear, and practical.

Enter a domain

Start from a clean domain input with fast validation and normalization.
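
A minimal sketch of that step, assuming raw input such as "Example.com/docs " should collapse to a bare https origin; the helper name and the exact acceptance rules are assumptions.

// Normalize raw user input into a canonical origin, or return null if it is unusable.
function normalizeDomain(raw: string): string | null {
  const trimmed = raw.trim().toLowerCase();
  const withScheme = /^https?:\/\//.test(trimmed) ? trimmed : `https://${trimmed}`;
  try {
    const url = new URL(withScheme);
    // Require a dotted hostname so bare words are rejected early.
    if (!url.hostname.includes('.')) return null;
    return `${url.protocol}//${url.hostname}`;
  } catch {
    return null;
  }
}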

Run the audit

Fetch the homepage, llms.txt, robots.txt, and sitemap.xml, then evaluate the public signals.
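
Sketched with the standard fetch API, those four lookups can run in parallel, and a missing file becomes a finding rather than a failed scan; everything beyond the paths named above is an assumption.

// Fetch the public entry points in parallel and record which ones respond.
async function fetchEntryPoints(origin: string) {
  const paths = ['/', '/llms.txt', '/robots.txt', '/sitemap.xml'];
  const results = await Promise.allSettled(
    paths.map((p) => fetch(new URL(p, origin).toString(), { redirect: 'follow' })),
  );
  return paths.map((path, i) => {
    const r = results[i];
    return {
      path,
      ok: r.status === 'fulfilled' && r.value.ok,
      status: r.status === 'fulfilled' ? r.value.status : null,
    };
  });
}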

Read the report

Show the total score, four dimensions, high-priority findings, and practical fixes.
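
One way to keep that readable is to give every finding a severity, a reason, and a fix, and surface high-severity items first; the Finding shape below is an assumption rather than a defined report format.

interface Finding {
  check: string;                        // e.g. "llms.txt missing"
  severity: 'high' | 'medium' | 'low';
  why: string;                          // why it matters for AI agents
  fix: string;                          // the practical next step
}

// Sort findings so high-priority issues lead the report.
function prioritize(findings: Finding[]): Finding[] {
  const order = { high: 0, medium: 1, low: 2 } as const;
  return [...findings].sort((a, b) => order[a.severity] - order[b.severity]);
}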

Share and improve

Use a public report link or llms.txt draft to push the site toward better agent visibility.

Core checks for the first release

The first version should stay focused on signals that are easy to explain and useful to fix.

Domain normalization and safety checks

Accept valid public domains, reject unsupported inputs, and guard against private network targets.
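
For the private-network guard, one approach is to resolve the hostname before any fetch and refuse loopback, link-local, and RFC 1918 addresses; this sketch uses Node's dns module and a simplified IPv4-only check, which is an assumption about how the guard might work.

import { promises as dns } from 'node:dns';

const PRIVATE_V4 = [/^10\./, /^127\./, /^169\.254\./, /^192\.168\./, /^172\.(1[6-9]|2\d|3[01])\./];

// Reject targets that resolve into private or loopback IPv4 space (IPv6 omitted for brevity).
async function isPublicTarget(hostname: string): Promise<boolean> {
  try {
    const { address } = await dns.lookup(hostname, { family: 4 });
    return !PRIVATE_V4.some((range) => range.test(address));
  } catch {
    return false; // unresolvable hosts cannot be scanned
  }
}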

Homepage structure review

Inspect title, description, headings, canonical signals, and navigation clarity.
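
A rules-based pass over the fetched HTML can cover most of these signals. The regex extraction below is a deliberately small sketch (a real check would use an HTML parser), and the returned fields simply mirror the signals listed above.

// Tiny HTML signal extraction; production code would use a proper parser.
function reviewHomepage(html: string) {
  const pick = (re: RegExp) => html.match(re)?.[1]?.trim() ?? null;
  return {
    title: pick(/<title[^>]*>([^<]*)<\/title>/i),
    description: pick(/<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i),
    canonical: pick(/<link[^>]+rel=["']canonical["'][^>]+href=["']([^"']*)["']/i),
    h1Count: (html.match(/<h1[\s>]/gi) ?? []).length,
  };
}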

llms.txt and crawlability checks

Detect whether agents have a direct guidance file and whether crawl signals are discoverable.
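
Presence is only the first signal; a second pass can check that the file gives agents something to follow. The heuristic below assumes the common llms.txt shape (an H1 plus markdown-style links) and nothing more.

// Heuristic llms.txt check: does it exist, name the site, and link to anything useful?
function checkLlmsTxt(body: string | null) {
  if (!body) return { present: false, hasTitle: false, linkCount: 0 };
  const hasTitle = /^#\s+\S/m.test(body);
  const linkCount = (body.match(/\]\(https?:\/\/[^)]+\)/g) ?? []).length;
  return { present: true, hasTitle, linkCount };
}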

robots.txt and sitemap coverage

Verify that machine-readable routes exist and help agents find the important parts of the site.
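
One inexpensive way to verify that is to confirm robots.txt declares a sitemap and that the sitemap actually lists URLs; the Sitemap: directive parsing below is standard robots.txt syntax, while the return shape is an assumption.

// Pull Sitemap: directives out of robots.txt and count <loc> entries in the sitemap.
function checkCrawlRoutes(robotsTxt: string | null, sitemapXml: string | null) {
  const sitemaps = (robotsTxt ?? '')
    .split('\n')
    .filter((line) => /^sitemap:/i.test(line.trim()))
    .map((line) => line.slice(line.indexOf(':') + 1).trim());
  const urlCount = (sitemapXml?.match(/<loc>/g) ?? []).length;
  return { sitemapDeclared: sitemaps.length > 0, sitemaps, urlCount };
}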

Action and trust signals

Check whether AI agents can find docs, pricing, sign-up, contact, privacy, and terms pages without guessing.
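
A simple approximation is to scan homepage link hrefs and anchor text for the expected destinations; the keyword list mirrors the pages named above, and the substring match is a simplifying assumption.

const EXPECTED_PAGES = ['docs', 'pricing', 'signup', 'sign-up', 'contact', 'privacy', 'terms'];

// Report which expected action and trust pages are reachable from homepage links.
function checkActionSignals(links: { href: string; text: string }[]) {
  const haystack = links.map((l) => `${l.href} ${l.text}`.toLowerCase());
  return EXPECTED_PAGES.map((page) => ({
    page,
    found: haystack.some((entry) => entry.includes(page)),
  }));
}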

Shareable report output

Turn findings into a public report URL that users can reopen and share with teammates.
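
The shareable link mostly needs a stable, unguessable identifier for a stored report; this sketch uses Node's crypto.randomUUID, and the /report/ path is an assumed URL structure.

import { randomUUID } from 'node:crypto';

// Mint an unguessable report id and the public URL it will live at.
function shareableReportUrl(baseUrl: string): { id: string; url: string } {
  const id = randomUUID();
  return { id, url: `${baseUrl}/report/${id}` };
}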

What the MVP is optimized for

Clear output beats complexity in the first release.

4 Scoring dimensions

5 Core public resources

1 Shareable report link

Who this project is for

A practical agent-readiness report is most useful when teams need quick clarity before they build a full monitoring stack.

Founders · Launch checklist

Use a quick public audit to catch missing machine-readable signals before a product launch.

Docs teams · Knowledge visibility

Find out whether documentation is easy for AI agents to discover, navigate, and cite correctly.

Agencies · Client reporting

Turn technical checks into a clear scorecard and improvement plan that non-technical clients can understand.

Frequently asked questions

The first release should stay simple and explicit about what it checks.

Need a private implementation brief? Start with the audit result, then expand into a technical plan for crawling, scoring, and recurring scans.

Start with a clear agent-readiness baseline

Use this project bootstrap to shape the brand, the copy, and the first-pass demo before building the full crawler and scoring engine.