Expand web sitemap and robots coverage #623
Conversation
```ts
const OPENCLAW_USE_CASE_SLUGS = [
  'multi-agent-workflows-claude-codex',
  'how-to-let-ai-agents-message-each-other',
  'agent-orchestration-for-coding-teams',
  'slack-style-messaging-for-ai-agents',
  'human-in-the-loop-agent-workflows',
] as const;
```
🔴 Sitemap lists 5 use-case URLs with no corresponding routes (404s)
The OPENCLAW_USE_CASE_SLUGS array generates sitemap entries for /openclaw/use-cases/<slug>, but no pages or routes exist for these paths. There is no /openclaw/use-cases/ directory under web/app/, no dynamic [slug] route, no catch-all route, and no rewrites/redirects in next.config.mjs. All 5 URLs will return 404 to search engine crawlers, which Google Search Console reports as sitemap errors and can negatively impact crawl budget and SEO standing.
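One way to resolve the 404s would be a dynamic `[slug]` route that statically generates one page per known slug and rejects everything else. The sketch below shows only the core parameter/guard logic in plain TypeScript; in the real fix this would live in a hypothetical `web/app/openclaw/use-cases/[slug]/page.tsx` and call `notFound()` from `next/navigation` for unknown slugs (names and placement are assumptions, not code from this PR).

```typescript
// Slug list copied from the sitemap diff in this PR.
const OPENCLAW_USE_CASE_SLUGS = [
  'multi-agent-workflows-claude-codex',
  'how-to-let-ai-agents-message-each-other',
  'agent-orchestration-for-coding-teams',
  'slack-style-messaging-for-ai-agents',
  'human-in-the-loop-agent-workflows',
] as const;

type UseCaseSlug = (typeof OPENCLAW_USE_CASE_SLUGS)[number];

// Mirrors what a generateStaticParams() export would return:
// one { slug } object per known use case, so each page is prebuilt.
function generateUseCaseParams(): { slug: UseCaseSlug }[] {
  return OPENCLAW_USE_CASE_SLUGS.map((slug) => ({ slug }));
}

// Guard for the page component: only known slugs should render;
// anything else should fall through to a 404 (notFound() in Next.js).
function isKnownUseCase(slug: string): slug is UseCaseSlug {
  return (OPENCLAW_USE_CASE_SLUGS as readonly string[]).includes(slug);
}
```

With this in place, every URL the sitemap emits corresponds to a statically generated page, and crawlers hitting a mistyped slug still get a proper 404 instead of a soft error.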
The same concern applies to web/app/robots.ts:8, which allows /openclaw/use-cases/ for crawling, but that is less impactful since it merely permits crawling of a path that does not yet exist.
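For reference, the rules object a Next.js `robots.ts` returns has roughly the shape below. This is a hedged sketch: the `/invite/` prefix and the sitemap URL are placeholders inferred from the PR summary ("keep private invite routes blocked from indexing"), not values read from the actual file.

```typescript
// Approximate shape of a Next.js MetadataRoute.Robots return value.
// Paths and domain below are illustrative assumptions.
const robotsRules = {
  rules: [
    {
      userAgent: '*',
      allow: ['/openclaw/use-cases/'],
      disallow: ['/invite/'], // hypothetical private invite route prefix
    },
  ],
  sitemap: 'https://example.com/sitemap.xml', // placeholder domain
};

// Simplified crawlability check against the single wildcard rule above
// (real robots.txt matching is more involved; this is just for the sketch).
function isCrawlable(path: string): boolean {
  const rule = robotsRules.rules[0];
  if (rule.disallow.some((prefix) => path.startsWith(prefix))) return false;
  return rule.allow.some((prefix) => path.startsWith(prefix));
}
```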
## Summary
- expand web sitemap generation to enumerate docs, blog posts, and planned OpenClaw use-case URLs
- update robots.txt to explicitly allow crawling the new /openclaw/use-cases surface
- keep private invite routes blocked from indexing

## Testing
- `npm run build --workspace web`
  - Next.js production build succeeded; the final output also reported a pre-existing missing ESLint plugin from the repo-root config

## Notes
- This pairs with the separate use-case landing pages PR so sitemap coverage lands cleanly with the new URLs.
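The use-case portion of the sitemap expansion described above can be sketched as a simple map over the slug array. `BASE_URL`, `changeFrequency`, and `priority` here are illustrative placeholders, not values taken from this PR; in the App Router this logic would live in a `sitemap.ts` route file.

```typescript
// Placeholder site origin; the real value would come from config or env.
const BASE_URL = 'https://example.com';

const OPENCLAW_USE_CASE_SLUGS = [
  'multi-agent-workflows-claude-codex',
  'how-to-let-ai-agents-message-each-other',
  'agent-orchestration-for-coding-teams',
  'slack-style-messaging-for-ai-agents',
  'human-in-the-loop-agent-workflows',
] as const;

// Build one sitemap entry per planned use-case page, in the shape
// Next.js expects from a sitemap() export (MetadataRoute.Sitemap).
function useCaseSitemapEntries() {
  return OPENCLAW_USE_CASE_SLUGS.map((slug) => ({
    url: `${BASE_URL}/openclaw/use-cases/${slug}`,
    changeFrequency: 'monthly' as const, // assumed cadence
    priority: 0.7, // assumed weighting
  }));
}
```

Because the same `OPENCLAW_USE_CASE_SLUGS` array would drive both the sitemap entries and the pages' static params, the two cannot drift apart, which directly addresses the 404 concern raised in review.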