firecrawl-aisdk
Start a crawl job to extract content from multiple related pages on a website.
Best for: comprehensive content extraction from multiple pages with depth control.
Note: this is an asynchronous operation that returns a job ID. Use pollTool to get results.
Example use cases:
- Crawl an entire blog section
- Extract all documentation pages
- Scrape a product catalog with pagination
- Comprehensive site analysis
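The start-then-poll flow described above can be sketched in TypeScript. The `startCrawl` and `getCrawlStatus` functions below are hypothetical, mocked stand-ins for the job API that `crawlTool` and `pollTool` wrap; only the control flow (submit job, receive ID, poll until completion) is the point.

```typescript
// Sketch of the asynchronous crawl pattern: start a job, get back a job ID,
// then poll by ID until the job reports completion. The job store is mocked.
type CrawlStatus = { status: 'scraping' | 'completed'; pages: string[] };

const jobs = new Map<string, { polls: number; pages: string[] }>();

// Hypothetical stand-in for starting a crawl: returns a job ID immediately.
function startCrawl(url: string): string {
  const id = `job-${jobs.size + 1}`;
  jobs.set(id, { polls: 0, pages: [`${url}/blog/1`, `${url}/blog/2`] });
  return id;
}

// Hypothetical stand-in for pollTool: check a job's status by ID.
// Here the mock "finishes" the crawl after three polls.
function getCrawlStatus(id: string): CrawlStatus {
  const job = jobs.get(id);
  if (!job) throw new Error(`unknown job ${id}`);
  job.polls += 1;
  return job.polls >= 3
    ? { status: 'completed', pages: job.pages }
    : { status: 'scraping', pages: [] };
}

// Poll until the job completes, then return the final status.
function waitForCrawl(id: string): CrawlStatus {
  let s = getCrawlStatus(id);
  while (s.status !== 'completed') s = getCrawlStatus(id);
  return s;
}

const jobId = startCrawl('https://example.com');
const result = waitForCrawl(jobId);
console.log(result.pages.length); // 2
```

In a real agent run, the polling loop would also sleep between checks and bail out after a timeout rather than spinning forever.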
by firecrawl
Install this tool and use it with the AI SDK
npm install firecrawl-aisdk
pnpm add firecrawl-aisdk
yarn add firecrawl-aisdk
bun add firecrawl-aisdk
deno add npm:firecrawl-aisdk

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { crawlTool } from 'firecrawl-aisdk';

const result = await generateText({
  model: openai('gpt-4o'),
  tools: { crawlTool },
  prompt: 'Your prompt here...',
});

console.log(result.text);

How AI agents can use this tool
Use when you need to crawl and extract data from entire websites
(url: string, delay?: number, limit?: number, prompt?: string, sitemap?: string, webhook?: { url: string; events: string[]; headers: Record<string, unknown>; metadata: Record<string, unknown> }, excludePaths?: string[], includePaths?: string[], scrapeOptions?: { proxy: string; maxAge: number; mobile: boolean; actions: { key: string; text: string; type: string; script: string; fullPage: boolean }[]; formats: { }[] }, maxConcurrency?: number, allowSubdomains?: boolean, crawlEntireDomain?: bool...

Available configuration options
url (string): The starting URL to crawl from
limit (integer): Maximum number of pages to crawl
maxDiscoveryDepth (integer): Maximum depth to crawl based on discovery order. The root site and sitemapped pages have depth 0.
allowExternalLinks (boolean): Allow crawling external links
allowSubdomains (boolean): Allow crawling subdomains
crawlEntireDomain (boolean): Crawl the entire domain, not just child pages of the starting URL
includePaths (array): Only crawl URLs matching these path patterns
excludePaths (array): Exclude URLs matching these path patterns
ignoreQueryParameters (boolean): Do not re-scrape the same path with different (or no) query parameters
sitemap (string): Sitemap handling: "include" (default) or "skip"
prompt (string): Natural language prompt to guide the crawler (e.g., "Only crawl blog posts and docs")
delay (number): Delay in seconds between requests
maxConcurrency (integer): Maximum number of concurrent requests
scrapeOptions (object): Options for scraping crawled pages
webhook (object): Webhook config to receive crawl results
zeroDataRetention (boolean): Enable zero data retention. Contact help@firecrawl.dev to enable this feature.
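Putting several of these options together, a crawl request might look like the sketch below. The option names come from the table above; the specific values, URLs, and path-pattern syntax are illustrative, not prescriptive.

```typescript
// Illustrative crawl configuration using options from the table above.
// All values are made up for the example; only the option names are real.
const crawlConfig = {
  url: 'https://example.com/blog',   // starting URL
  limit: 100,                        // stop after 100 pages
  maxDiscoveryDepth: 2,              // root and sitemapped pages are depth 0
  includePaths: ['/blog/.*'],        // only crawl blog posts (pattern syntax assumed)
  excludePaths: ['/blog/tag/.*'],    // skip tag index pages
  ignoreQueryParameters: true,       // treat ?utm_source variants as one page
  sitemap: 'include',                // use the sitemap (the default)
  delay: 1,                          // one second between requests
  maxConcurrency: 5,                 // at most five requests in flight
  crawlEntireDomain: false,          // stay under the starting path
  allowSubdomains: false,
};

console.log(crawlConfig.url);
```

Tightening `includePaths`/`excludePaths` and `limit` is usually the easiest way to keep a crawl focused and cheap before reaching for `prompt`-based steering.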
Schema extracted: 3/1/2026, 1:19:34 AM
Downloads/month: 8,908
GitHub Stars: 0
Quality Score