Documentation · Workflows
How to Route Firecrawl Scraper Output into Salesforce with Claude Code
Connect Firecrawl and Salesforce in Deepline, describe the scrape + field schema, and Claude handles everything.
What you need
- Firecrawl account (any plan — Deepline works on whatever tier you have)
- Salesforce account (any plan with API access — see provider docs)
- Claude Code installed locally
- ~2 minutes
Step-by-step
1. Install Deepline
One command installs the Deepline CLI and registers a workspace. Takes about 30 seconds. You only do this once per machine.
curl -s "https://code.deepline.com/api/v2/cli/install" | bash
deepline auth register

2. Connect Firecrawl in the Deepline dashboard
In the Deepline dashboard, click Integrations → Firecrawl → paste your API key. Deepline uses Firecrawl for JS-rendered scrape tasks inside any chat.
3. Connect Salesforce in the Deepline dashboard
In the Deepline dashboard, click Integrations → Salesforce → Authorize. Salesforce opens its OAuth consent screen — pick sandbox or production, approve scopes. Deepline manages the refresh-token flow and JWT rotation every 55 minutes.
4. Chat with Claude
Open Claude Code and describe the workflow in plain English. Deepline handles the tool calls, waterfall routing, rate limits, auth refresh, and dedup. An example prompt for this pair:
> Scrape career pages for 200 target accounts via Firecrawl. Extract Open_Roles_Count__c and Hiring_Velocity__c custom fields. Upsert Salesforce Accounts matching on Website.

5. Deploy as a workflow
Once Claude's one-off run looks right, type "Deploy this as a workflow" and tell it the schedule. Deepline wraps the exact prompt + tool chain as a recurring workflow with run history, billing, and alerting in the dashboard.
> Deploy this as a workflow that runs every weekday morning at 8am.
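For reference, the plain-English schedule above corresponds to a standard five-field cron expression. You only ever type the English — whether Deepline stores schedules as cron internally is an assumption:

```python
# "every weekday morning at 8am" in standard five-field cron syntax:
#  minute  hour  day-of-month  month  day-of-week
CRON = "0 8 * * 1-5"   # 08:00, Monday through Friday
```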
Cost math
For 1,000 leads: $1.00
About 0.1 credits per row; Firecrawl's per-scrape fee is billed separately. Deepline's credit pricing is pay-as-you-go — see code.deepline.com/docs/pricing for current rates.
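The dollar figure decomposes as rows × credits-per-row × dollars-per-credit. The $0.01-per-credit rate below is back-solved from the $1.00 / 1,000-lead example above, not a published price — check the pricing docs for current rates:

```python
CREDITS_PER_ROW = 0.1     # Deepline orchestration cost per row (figure quoted above)
USD_PER_CREDIT = 0.01     # implied by $1.00 per 1,000 rows; verify against pricing docs

def run_cost_usd(rows: int) -> float:
    """Deepline credit cost for a run; Firecrawl's per-scrape fee is billed separately."""
    return rows * CREDITS_PER_ROW * USD_PER_CREDIT

print(f"${run_cost_usd(1000):.2f}")   # the $1.00 reference figure
print(f"${run_cost_usd(3):.4f}")      # a 3-row run costs 3/1000 of that
```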
Why do it in Claude Code
Waterfall routing by default
Deepline tries the cheapest provider first and only falls back if it misses. For Firecrawl workflows, this means email enrichment stops at the first valid hit — you don't pay for the second and third provider unless you need to.
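The first-valid-hit behavior can be sketched as a simple loop. The provider names and validity check here are illustrative stand-ins — Deepline's actual routing logic isn't public:

```python
from typing import Callable, Optional

def enrich_waterfall(
    row: dict,
    providers: list[tuple[str, Callable[[dict], Optional[str]]]],
) -> tuple[Optional[str], list[str]]:
    """Try providers cheapest-first; stop at the first valid hit."""
    tried = []                        # providers actually called
    for name, lookup in providers:
        tried.append(name)
        email = lookup(row)
        if email:                     # first valid hit wins; later providers never run
            return email, tried
    return None, tried                # waterfall exhausted, no hit

# Hypothetical providers: the second one hits, so the third is never called.
providers = [
    ("hunter",      lambda row: None),
    ("dropcontact", lambda row: "jane@acme.com"),
    ("findymail",   lambda row: None),
]
email, tried = enrich_waterfall({"domain": "acme.com"}, providers)
print(email, tried)   # jane@acme.com ['hunter', 'dropcontact']
```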
Deploy the exact prompt as a schedule
Once the one-off run looks right, "Deploy this as a workflow" wraps the same prompt + tool chain as a cron-scheduled workflow. Run history, per-run billing, and retry logic all live in the dashboard. No DevOps.
One place for every provider credential
Firecrawl keys, Salesforce OAuth, waterfall fallbacks (Hunter, Dropcontact, Findymail, Prospeo) — all live in the Deepline dashboard. Rotate once; every workflow picks up the new credential automatically.
What people are saying
“Firecrawl benchmarks ~50x faster than Apify on JS-rendered pages at $0.0008 per scrape flat. Apify still wins on actor marketplace breadth.”
Citations sourced from community posts, vendor case studies, and engineering blogs. See src/data/workflow-social-proof.md in the repo for the full sentiment bank.
Troubleshooting
Firecrawl integration shows red in the Deepline dashboard
Cause: Firecrawl credentials are either expired, revoked at the provider side, or the account tier doesn't expose API access.
Fix: Click the integration row in the dashboard → Test Connection. If it fails, re-paste the API key (or re-run OAuth for OAuth-based providers). Confirm the provider account tier includes API access — most providers gate this to paid tiers.
Workflow run succeeds but 0 rows landed in the destination
Cause: The filter returned 0 matches, OR rows failed a downstream gate (email not verified, already in destination, deliverability check failed).
Fix: Open the run in the Deepline dashboard → expand the step-by-step trace. Every row's path is logged: matched/unmatched at each stage. Most frequent culprit is the email-verification gate. Loosen the filter or remove the gate in the prompt if that's the cause.
Scheduled workflow stopped firing
Cause: Either a provider rate limit was hit, Deepline credits ran out, or the destination API returned errors that Deepline classified as permanent.
Fix: Dashboard → Workflows → Recent Runs. Failed runs show the exact error and a "Replay" button. Credits are visible top-right. Rate-limit issues auto-resume once the window resets; permanent errors need intervention.
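The auto-resume behavior described above is the standard transient-vs-permanent retry pattern: back off and retry errors classified as transient, surface the rest for intervention. A generic sketch of that pattern, not Deepline's internal code — the exact status classification is policy-dependent:

```python
import time

RETRYABLE = {429, 503}   # transient by default; real classifiers vary per destination

def call_with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry transient failures with exponential backoff; raise permanent ones."""
    for attempt in range(max_attempts):
        status, body = call()
        if status < 400:
            return body                              # success
        if status not in RETRYABLE:
            raise RuntimeError(f"permanent error {status}: needs intervention")
        time.sleep(base_delay * 2 ** attempt)        # wait out the rate-limit window
    raise RuntimeError("retries exhausted")
```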
FAQ
Do I need an Anthropic API key for this?
No. You need Claude Code (the CLI/IDE). Deepline provides the tool-execution layer and credit system. Your Claude Code subscription or Anthropic API key handles the LLM calls — Deepline handles the GTM actions underneath.
How does Deepline handle dedup across runs?
Built-in. For Salesforce as a destination, Deepline uses the provider-native idempotency key (email for HubSpot/Attio, External_Id for Salesforce, campaign-level dedup for Instantly/Smartlead/Lemlist). Reruns update existing records instead of creating duplicates.
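Upsert-on-an-idempotency-key semantics can be sketched in a few lines — a generic model of the behavior, not Deepline's implementation; the key field per destination is the one listed above:

```python
def upsert(store: dict, rows: list[dict], key: str) -> dict:
    """Insert new rows, update existing ones, never duplicate — keyed on `key`."""
    stats = {"created": 0, "updated": 0}
    for row in rows:
        k = row[key]
        if k in store:
            store[k].update(row)       # rerun: update the existing record
            stats["updated"] += 1
        else:
            store[k] = dict(row)       # first run: create
            stats["created"] += 1
    return stats

accounts = {}
batch = [{"External_Id__c": "acme", "Open_Roles_Count__c": 12}]
print(upsert(accounts, batch, "External_Id__c"))   # {'created': 1, 'updated': 0}
print(upsert(accounts, batch, "External_Id__c"))   # rerun: {'created': 0, 'updated': 1}
```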
Can I still use my own API keys?
Yes. Paste them in the Deepline dashboard under Integrations. Deepline uses your keys for the actual provider calls — you keep your provider contracts, billing, and rate limit allocation. Deepline's credit billing only covers the orchestration + waterfall layer.
What if my filter only matches 3 rows but the workflow says 1000?
Deepline runs exactly what you describe. The cost math above is a reference figure per 1,000 rows — a 3-row run costs 3/1,000 of that total. You never pay for rows that don't exist.
Can I combine this with a waterfall?
Yes. Mention it in the prompt — e.g. "If Firecrawl doesn't return an email, fall back to Hunter → Dropcontact → Findymail." Deepline assembles the waterfall automatically and charges you only for the provider that successfully finds each row.
Related workflows
Firecrawl → HubSpot
Connect Firecrawl and HubSpot in Deepline, give Claude URLs and a field schema, and it scrapes + structures + upserts into HubSpot.
Firecrawl → Instantly
Connect Firecrawl and Instantly in Deepline, describe the scrape target + persona, and Claude runs scrape + waterfall + Instantly import.
Apify → Salesforce
Connect Apify and Salesforce in Deepline, pick the actor, and Claude handles the scrape + Salesforce upsert. External_Id__c keeps it idempotent.
Want this workflow pre-configured?
Run it on Deepline or fork the full skill pack on GitHub. Either way, the code is yours to read and change.