The /firecrawl command enables you to scrape web content and create site maps from URLs. Perfect for:

  • Scraping web pages
  • Extracting content
  • Taking screenshots
  • Mapping site structure
  • Converting to markdown

Basic Usage

Use the command to scrape websites:

/firecrawl scrape https://example.com as markdown
/firecrawl map https://example.com and https://another-site.com
/firecrawl extract content from https://example.com with screenshot

Key Features

Content Extraction

  • Scrape multiple URLs
  • Extract main content
  • Remove unwanted elements
  • Clean HTML output
  • Convert to markdown

Output Formats

  • Markdown (default)
  • HTML (processed)
  • Raw HTML
  • JSON structure
  • Screenshots
  • Link extraction
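Several formats can be requested in one scrape. As a rough sketch, the request payload sent to the underlying scrape API might carry a `formats` array; the field name follows Firecrawl's v1 scrape options, but treat it as an assumption if your deployment differs:

```python
# Sketch: building a scrape request payload that asks for several output
# formats at once. "formats" is assumed to match Firecrawl's v1 scrape
# API; markdown is the default when nothing is specified.

def build_scrape_payload(url, formats=None):
    """Return a JSON-serializable payload for a scrape request."""
    return {
        "url": url,
        "formats": formats or ["markdown"],  # markdown is the default
    }

payload = build_scrape_payload(
    "https://example.com",
    formats=["markdown", "html", "links", "screenshot"],
)
```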

Browser Actions

  • Take screenshots
  • Scroll pages
  • Wait for content
  • Click elements
  • Full page capture

Content Filtering

  • Include specific tags
  • Exclude unwanted tags
  • Main content only
  • Remove base64 images

Example Commands

Basic Scrape

/firecrawl scrape https://example.com

Multiple URLs

/firecrawl scrape https://site1.com and https://site2.com as markdown

With Screenshot

/firecrawl capture https://example.com with full page screenshot

Link Extraction

/firecrawl get all links from https://example.com


Custom Tags

/firecrawl scrape https://example.com including only div, p, h1, h2 tags

Configuration Options

Output Formats

  • markdown: Clean markdown content
  • html: Processed HTML
  • rawHtml: Original HTML
  • links: Array of links
  • screenshot: Base64 image
  • json: Structured data

Browser Actions

[
  { "type": "screenshot", "fullPage": true },
  { "type": "scroll", "direction": "down" }
]
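Actions run in order, so a common pattern is scroll, wait, then capture. A minimal sketch of composing such a sequence, with small helpers whose field names mirror the JSON examples above (the "wait" action and its "milliseconds" field are assumptions):

```python
# Sketch: composing an ordered list of browser actions. The action
# objects mirror the JSON examples in this document; the "wait" action
# shape is an assumption.

def screenshot(full_page=True):
    return {"type": "screenshot", "fullPage": full_page}

def scroll(direction="down"):
    return {"type": "scroll", "direction": direction}

def wait(milliseconds=1000):
    return {"type": "wait", "milliseconds": milliseconds}

# Scroll, give dynamic content time to load, then capture the full page.
actions = [scroll("down"), wait(1500), screenshot(full_page=True)]
```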

Wait Options

  • Default: 1000ms
  • Custom: specify milliseconds
  • Ensures content loads
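Attaching a custom wait could look like the following sketch. The parameter name `waitFor` (in milliseconds) is assumed from Firecrawl's scrape options; the default matches the 1000 ms noted above:

```python
# Sketch: setting a custom wait before scraping. "waitFor" is an assumed
# option name; the default mirrors the 1000 ms mentioned above.

DEFAULT_WAIT_MS = 1000

def with_wait(payload, milliseconds=DEFAULT_WAIT_MS):
    """Return a copy of the payload with a wait time attached."""
    return {**payload, "waitFor": milliseconds}

# Give a slow, JavaScript-heavy page extra time to render.
slow_page = with_wait({"url": "https://example.com"}, milliseconds=5000)
```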

Tag Filtering

Include Tags

  • div, p, h1, h2
  • Article content tags
  • Custom selections

Exclude Tags

  • script, style, noscript
  • Ads and tracking
  • Unwanted elements
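Include and exclude lists can be combined in one request. A sketch, assuming the option names `includeTags` and `excludeTags` from Firecrawl's scrape API:

```python
# Sketch: attaching include/exclude tag filters to a scrape payload.
# "includeTags"/"excludeTags" are assumed option names; the tag lists
# mirror the examples in this document.

ARTICLE_TAGS = ["div", "p", "h1", "h2"]
NOISE_TAGS = ["script", "style", "noscript"]

def with_tag_filters(payload, include=None, exclude=None):
    """Return a copy of the payload with tag filters attached."""
    filtered = dict(payload)
    if include:
        filtered["includeTags"] = include
    if exclude:
        filtered["excludeTags"] = exclude
    return filtered

clean = with_tag_filters(
    {"url": "https://example.com", "formats": ["markdown"]},
    include=ARTICLE_TAGS,
    exclude=NOISE_TAGS,
)
```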

Response Data

Success Response

  • Scraped content
  • Metadata (title, language)
  • Source URL
  • Status code
  • File URLs

Metadata Includes

  • Page title
  • Language
  • Referrer
  • Scrape ID
  • Status code
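Pulling those fields out of a response could look like this sketch. The response shape (`data.markdown`, `data.metadata.*`) mirrors the fields this document lists; the exact key names are assumptions:

```python
# Sketch: extracting content and key metadata from a success response.
# The key names (data, markdown, metadata, sourceURL, statusCode) are
# assumptions based on the fields listed in this document.

def summarize_response(response):
    """Return the content and key metadata from a scrape response dict."""
    data = response.get("data", {})
    meta = data.get("metadata", {})
    return {
        "content": data.get("markdown", ""),
        "title": meta.get("title"),
        "language": meta.get("language"),
        "source_url": meta.get("sourceURL"),
        "status_code": meta.get("statusCode"),
    }

example = {
    "data": {
        "markdown": "# Example",
        "metadata": {
            "title": "Example Domain",
            "language": "en",
            "sourceURL": "https://example.com",
            "statusCode": 200,
        },
    }
}
summary = summarize_response(example)
```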

Tips

  • Use markdown format for clean text
  • Enable screenshots for visual content
  • Filter tags for cleaner output
  • Set appropriate wait times for dynamic content
