Drop a single HTTP Request node into any n8n workflow to scrape LinkedIn, Instagram, TikTok, Twitter/X, YouTube, and more. Per-result credits, structured JSON output, zero scraper maintenance.
Sound familiar? You're not alone.
Setting up Bright Data, Apify, or PhantomBuster in n8n means configuring OAuth, pagination, webhook callbacks, and custom headers. Each tool needs a different HTTP Request node pattern.
You wrote a Puppeteer script inside n8n's Code node to scrape profiles. It worked for a week. Then the site updated its markup and now you're debugging JavaScript at midnight.
n8n has nodes for Slack, Google Sheets, and Airtable — but nothing for extracting structured data from LinkedIn, Instagram, or TikTok. You're stuck with raw HTTP or Code nodes.
Scraping APIs return results asynchronously. In n8n, that means building Wait → HTTP → IF → Loop patterns to poll for job completion. Every workflow has the same boilerplate.
One API, all platforms, pay per result
POST to create a job, GET to fetch results. Same node config for LinkedIn, Instagram, TikTok, Twitter/X, YouTube, Facebook, Indeed, Glassdoor, Yelp, GitHub, and Crunchbase. Copy-paste between workflows.
Weld returns flat, structured JSON — names, titles, follower counts, post content. Use n8n's Set node to extract fields, IF node to filter, and merge directly into your downstream tools.
When platforms change their markup, Weld updates the scraper. Your n8n workflow keeps running. No more midnight debugging sessions for broken Code nodes.
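The create-then-fetch flow can be sketched in a few lines of JavaScript, usable from a script or an n8n Code node. The base URL and the field names (`scraper`, `urls`, `jobId`) are assumptions for illustration; check the Weld API reference for the exact schema.

```javascript
// Sketch of the Weld job lifecycle. Field names and the base URL
// are assumptions, not the documented schema.
const WELD_BASE = "https://api.weld.example"; // hypothetical base URL

function buildCreatePayload(scraperSlug, urls) {
  return { scraper: scraperSlug, urls };
}

async function createJob(apiKey, scraperSlug, urls) {
  // POST /api/jobs/create with scraper slug, URLs, and API key
  const res = await fetch(`${WELD_BASE}/api/jobs/create`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildCreatePayload(scraperSlug, urls)),
  });
  return res.json(); // assumed to contain a job id
}

async function fetchResults(apiKey, jobId) {
  // GET /api/jobs/results returns a flat, structured JSON array
  const res = await fetch(`${WELD_BASE}/api/jobs/results?jobId=${jobId}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return res.json();
}
```

Because both calls are plain HTTP with a bearer token, the same two-node pattern copies between workflows unchanged; only the scraper slug and URL list differ.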
Copy these patterns to get started in minutes
The simplest Weld + n8n pattern. Trigger, scrape, store. Works for any platform — just change the scraper slug.
1. Trigger: kick off manually or run on a schedule (daily, hourly, etc.)
2. HTTP Request: POST /api/jobs/create with the scraper slug, URLs, and your API key
3. Wait: pause 30-60 seconds while Weld processes the job
4. HTTP Request: GET /api/jobs/results returns a structured JSON array
5. Set: pull out the fields you need (name, title, bio, followers, etc.)
6. Google Sheets or Airtable: append rows to your tracking sheet or base
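The wait-then-fetch steps above can be sketched as a simple poll loop. The completion check (an array means results are ready) and the `status` field are assumptions about the response shape, not the documented contract.

```javascript
// Poll until the job is done, then return the result rows.
// The "array means done" check and the `status` field are assumptions.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitForResults(fetchResultsFn, jobId, { intervalMs = 30000, maxTries = 10 } = {}) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const body = await fetchResultsFn(jobId);
    if (Array.isArray(body)) return body;            // results ready
    if (body.status === "failed") throw new Error("job failed: " + jobId);
    await sleep(intervalMs);                          // pause 30-60 s between checks
  }
  throw new Error("timed out waiting for job " + jobId);
}
```

In n8n the same loop is usually expressed as Wait → HTTP Request → IF nodes; the function form is handy inside a single Code node.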
Enrich the same list of people across LinkedIn, Twitter, and Instagram, then merge results per person.
1. A Webhook node receives a list of people with social URLs
2. A Code node groups the URLs by platform (LinkedIn, Twitter, Instagram)
3. Three HTTP Request nodes fire in parallel, one per platform
4. Wait for all three jobs to complete, then fetch results from each
5. Combine results per person, using name or profile URL as the join key
6. Write the enriched profiles to Airtable or your CRM, or return them in the webhook response
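The grouping and merge steps above can be sketched as two small Code-node functions. The input shape (`{ name, urls: [...] }`) and the `url` field on result rows are assumptions for illustration.

```javascript
// Group each person's social URLs by platform so one Weld job
// can be created per platform. Input shape is an assumption.
function groupByPlatform(people) {
  const groups = { linkedin: [], twitter: [], instagram: [] };
  for (const person of people) {
    for (const url of person.urls) {
      if (url.includes("linkedin.com")) groups.linkedin.push(url);
      else if (url.includes("twitter.com") || url.includes("x.com")) groups.twitter.push(url);
      else if (url.includes("instagram.com")) groups.instagram.push(url);
    }
  }
  return groups;
}

// Merge the per-platform result rows back into one record per person,
// using the scraped profile URL as the join key.
function mergeByUrl(people, platformRows) {
  return people.map((person) => {
    const merged = { name: person.name };
    for (const rows of Object.values(platformRows)) {
      const match = rows.find((row) => person.urls.includes(row.url));
      if (match) Object.assign(merged, match);
    }
    return merged;
  });
}
```

Joining on URL rather than name avoids mismatches when two people share a name; the URL you submitted is the one stable key both sides agree on.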
Scrape LinkedIn profiles, then conditionally scrape additional platforms based on the results.
1. Initial enrichment with the linkedin-profiles scraper
2. Check whether each LinkedIn profile includes a Twitter/X handle
3. For leads with Twitter handles, run the twitter-profiles scraper
4. Filter for leads with a significant social presence
5. Mark qualifying leads as high-priority in your CRM, with the social context attached
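The two filter steps above can be sketched as simple predicates. The `twitter_handle` and `followers` field names, and the follower threshold, are assumptions about the scraper output, not documented fields.

```javascript
// Branch 1: keep only leads whose LinkedIn row exposes a Twitter/X
// handle. The `twitter_handle` field name is an assumption.
function leadsWithTwitter(linkedinRows) {
  return linkedinRows.filter(
    (row) => typeof row.twitter_handle === "string" && row.twitter_handle.length > 0
  );
}

// Branch 2: after the twitter-profiles run, flag leads with a
// significant social presence. The threshold is a placeholder.
function highPriority(rows, minFollowers = 1000) {
  return rows.filter((row) => (row.followers ?? 0) >= minFollowers);
}
```

In workflow terms these are IF-node conditions; putting them in one Code node keeps the branch logic in a single, testable place.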
The scrapers most relevant to your use case
TikTok: 1 credit / row
Twitter/X: 1 credit / row
YouTube: 1 credit / row
Crunchbase: 2 credits / row
Indeed: 1 credit / row
Connect your scraped data to your favorite tools
Auto-sync results to spreadsheets
Real-time delivery to any endpoint
Programmatic access for developers
Connect to 1000+ apps
Download in standard formats
Common questions about n8n