Learn how to use the Crawling slash command to retrieve website content
- `urls` - Array of URLs to crawl
- `limit` - Maximum number of pages to crawl (default: 10)
- `maxDepth` - Maximum depth of links to follow
- `excludePaths` - Paths to exclude from crawling (glob patterns)
- `includePaths` - Only include these paths when crawling (glob patterns)
- `allowBackwardLinks` - Allow crawling links that aren't direct children of the provided URL
- `scrapeOptions` - Configure output formats (markdown, html)
- `webhook` - URL to receive webhook events for crawl updates
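
The parameters above might be combined into a single request body along these lines. This is an illustrative sketch only: the payload shape and all values (URLs, limits, glob patterns) are assumptions for the example, not the command's confirmed interface.

```python
# Hypothetical crawl request built from the documented parameters.
crawl_request = {
    "urls": ["https://example.com/docs"],          # array of URLs to crawl
    "limit": 50,                                   # max pages (default is 10)
    "maxDepth": 2,                                 # follow links at most 2 levels deep
    "excludePaths": ["blog/*"],                    # glob patterns to skip
    "includePaths": ["docs/*"],                    # only crawl matching paths
    "allowBackwardLinks": False,                   # stay within children of the start URL
    "scrapeOptions": {"formats": ["markdown", "html"]},  # output formats
    "webhook": "https://example.com/hooks/crawl",  # receives crawl status events
}
```

Omitting an optional key (for example `maxDepth` or `webhook`) would leave that behavior at its default.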