

What is the ZenRows MCP?

The ZenRows MCP (Model Context Protocol) server connects ZenRows’ web scraping and browser automation capabilities directly to MCP-compatible AI tools like Claude, Cursor, or VS Code Copilot. Once configured, your AI assistant can fetch real-time content from any webpage and control a full browser session on your behalf. It connects to two ZenRows products:
  • Universal Scraper API — for fast, single-request scraping with JavaScript rendering, residential proxies, and anti-bot bypass.
  • Scraping Browser — for multi-step browser automation through a cloud-hosted browser with built-in anti-bot protection and residential proxies.

What you can do

  • Scrape any webpage by describing the task in plain English
  • Extract structured data from e-commerce sites, news pages, or search results
  • Take screenshots of any page for visual inspection
  • Access geo-restricted or bot-protected pages using premium proxies
  • Automate multi-step browser workflows: navigate, click, fill forms, scroll, and extract data across pages
  • Run JavaScript in a live browser context
  • Manage cookies, local storage, and multiple tabs

Before you start

You need:

  • A ZenRows account
  • Your ZenRows API key

Supported integrations

| Tool | Type |
| --- | --- |
| Claude Desktop | AI assistant |
| Claude Code | AI coding assistant |
| Cursor | AI code editor |
| Windsurf | AI code editor |
| VS Code | AI code editor |
| Zed | AI code editor |
| JetBrains IDEs | AI code editor |

How it works

All integrations use the same MCP server package (@zenrows/mcp) and the same configuration pattern. Each tool has a configuration file where you register MCP servers. Adding ZenRows gives your AI assistant access to the scrape tool for single-request scraping and a full set of browser_* tools for multi-step browser automation. Once configured, ask your assistant to scrape a page or automate a browser workflow and it handles the request automatically:
Scrape https://scrapingcourse.com/ecommerce/ and list the names and prices of the products.
Open https://www.scrapingcourse.com/ecommerce/ in a browser, type "hoodie" into the search field, press Enter, wait for the results to load, then extract all matching product names and prices.
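As a rough sketch, a ZenRows entry in an MCP configuration file typically looks like the JSON below. The file name and location vary by tool, and the `command`/`env` details (running the package via `npx`, the `ZENROWS_API_KEY` variable name) are assumptions for illustration; only the `@zenrows/mcp` package name and the `YOUR_ZENROWS_API_KEY` placeholder come from this page:

```json
{
  "mcpServers": {
    "zenrows": {
      "command": "npx",
      "args": ["-y", "@zenrows/mcp"],
      "env": {
        "ZENROWS_API_KEY": "YOUR_ZENROWS_API_KEY"
      }
    }
  }
}
```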

Tools

The ZenRows MCP exposes two types of tools to your AI assistant.

The scrape tool

The scrape tool fetches a webpage in a single request and returns its content in the format you specify: clean Markdown (default), plain text, raw HTML, a screenshot, or structured JSON. It uses the Universal Scraper API under the hood and is best for quick data extraction where you don't need to interact with the page.
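For intuition, the scrape tool's stateless model maps to one Universal Scraper API call per page. The sketch below only assembles such a request URL without sending it; the endpoint and parameter names are assumptions based on the options mentioned on this page, not the MCP server's internals:

```python
from urllib.parse import urlencode


def build_scrape_request(api_key: str, url: str, **params: str) -> str:
    """Assemble a single-request scrape URL (endpoint and params assumed)."""
    base = "https://api.zenrows.com/v1/"
    query = {"apikey": api_key, "url": url, **params}
    return f"{base}?{urlencode(query)}"


# One stateless call per page: render JavaScript and use premium proxies.
request_url = build_scrape_request(
    "YOUR_ZENROWS_API_KEY",
    "https://www.scrapingcourse.com/ecommerce/",
    js_render="true",
    premium_proxy="true",
)
```

Each call is independent: there is no session to manage, which is what makes the scrape tool simpler than the browser tools for one-shot extraction.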

Browser tools

The browser_* tools give your AI assistant control of a cloud-hosted browser session powered by the ZenRows Scraping Browser. The browser comes with built-in residential proxies and anti-bot protection. Use browser tools when you need to:
  • Navigate across multiple pages in a single session
  • Fill out forms, click buttons, or interact with page elements
  • Wait for dynamically loaded content
  • Execute JavaScript in the browser context
  • Manage cookies, local storage, or multiple tabs
Every browser workflow starts with browser_navigate, which opens a session and returns a session_id. All subsequent browser calls use this session_id. Always call browser_close when done.
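The session lifecycle above can be sketched as a sequence of tool calls. This is illustrative only: the tool names come from the table on this page, but the argument names (other than `url` and `session_id`) and the CSS selectors are hypothetical, not the exact schema:

```json
[
  { "tool": "browser_navigate",          "arguments": { "url": "https://www.scrapingcourse.com/ecommerce/" } },
  { "tool": "browser_fill",              "arguments": { "session_id": "<from navigate>", "selector": "input.search", "value": "hoodie" } },
  { "tool": "browser_press_key",         "arguments": { "session_id": "<from navigate>", "key": "Enter" } },
  { "tool": "browser_wait_for_selector", "arguments": { "session_id": "<from navigate>", "selector": ".product", "visible": true } },
  { "tool": "browser_get_text",          "arguments": { "session_id": "<from navigate>" } },
  { "tool": "browser_close",             "arguments": { "session_id": "<from navigate>" } }
]
```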

Available browser tools

| Tool | Description |
| --- | --- |
| browser_navigate | Open a browser session and navigate to a URL. Returns a session_id for subsequent calls. |
| browser_close | Close a browser session and free all associated resources. |

| Tool | Description |
| --- | --- |
| browser_click | Click an element using a CSS selector. |
| browser_hover | Hover over an element to trigger hover effects. |
| browser_type | Type text into an input field. Appends to existing content unless clear_first is set. |
| browser_fill | Clear an input field and set its value. |
| browser_select_option | Select an option from a `<select>` dropdown. |
| browser_check | Check a checkbox or radio button. |
| browser_uncheck | Uncheck a checkbox. |
| browser_focus | Move focus to an element. |
| browser_press_key | Press a keyboard key (e.g. Enter, Tab, Escape, Control+a). |
| browser_scroll | Scroll the page in a given direction. |
| browser_drag | Drag an element to a target element. |

| Tool | Description |
| --- | --- |
| browser_get_accessibility_tree | Get the page's accessibility tree as readable text. Call this after browser_navigate to understand page structure. |
| browser_get_url | Get the current URL. |
| browser_get_title | Get the page title. |
| browser_get_text | Get the visible text of an element or the entire page. |
| browser_get_attribute | Get the value of a specific attribute on an element. |
| browser_get_html | Get the HTML source of an element or the full page. |
| browser_query_selector_all | Find all elements matching a CSS selector. Returns text, HTML, and attributes. |

| Tool | Description |
| --- | --- |
| browser_screenshot | Take a screenshot of the page or a specific element. |
| browser_generate_pdf | Render the current page as a PDF. |

| Tool | Description |
| --- | --- |
| browser_wait_for_selector | Wait until an element is stable in the DOM. Set visible=true to require visibility. |
| browser_wait_for_navigation | Wait for a page navigation. Call this before the action that triggers navigation. |
| browser_wait | Wait for a fixed duration in milliseconds. |

| Tool | Description |
| --- | --- |
| browser_evaluate | Execute JavaScript in the browser context and return the result. |

| Tool | Description |
| --- | --- |
| browser_get_cookies | Get all cookies for the current page. |
| browser_set_cookies | Set one or more cookies. |
| browser_clear_cookies | Clear all cookies for the session. |

| Tool | Description |
| --- | --- |
| browser_local_storage | Read, write, or clear localStorage. |

| Tool | Description |
| --- | --- |
| browser_new_tab | Open a new tab. Returns a tab_id. |
| browser_switch_tab | Switch focus to a different tab. |

| Tool | Description |
| --- | --- |
| browser_batch | Execute a sequence of browser actions in a single call for better performance. |

Scrape vs. browser tools

| | scrape | browser_* tools |
| --- | --- | --- |
| Best for | Single-page data extraction | Multi-step workflows and interactions |
| Backed by | Universal Scraper API | Scraping Browser |
| Session | Stateless, one request per call | Persistent session across multiple calls |
| Interaction | Limited (via js_instructions) | Full browser control (click, type, scroll, etc.) |
| Output | Markdown, plain text, HTML, JSON, screenshot, PDF | Text, HTML, screenshots, PDF, JSON, accessibility tree |

Troubleshooting

Common issues and solutions

| Issue | Cause | Solution |
| --- | --- | --- |
| Page content is missing or blank | Page uses JavaScript to load content dynamically | Enable js_render: true in the scrape tool, or use browser_navigate for full browser rendering |
| 403 error or bot detection page | Target site blocks datacenter IPs | Enable premium_proxy: true in the scrape tool. Browser tools use residential proxies automatically |
| Content is geo-restricted | Site serves different content by region | Enable premium_proxy: true and set proxy_country to the target country code (e.g. US) |
| Content loads after a delay | Page renders elements asynchronously | Use wait_for with a CSS selector, or wait with a millisecond value. Both require js_render: true. With browser tools, use browser_wait_for_selector |
| Content only appears after a click or form input | Page requires user interaction before loading data | Use js_instructions with the scrape tool, or use browser tools (browser_click, browser_fill, etc.) for full control |
| Response contains full page instead of specific fields | No extraction parameters set | Use css_extractor with a JSON selector map, autoparse for structured pages, or outputs for specific data types |
| Screenshot appears blank | Page loads content dynamically | Enable js_render: true alongside the screenshot parameter, or use browser_screenshot after waiting for content |
| The scrape tool is unavailable in your AI assistant | Configuration not loaded yet | Restart your AI tool after saving the configuration file. For Claude Desktop, fully quit and reopen the application |
| API key is not recognized | Key not replaced in configuration | Confirm YOUR_ZENROWS_API_KEY has been replaced with your actual key from the ZenRows dashboard. The value must have no extra spaces or surrounding quotation marks |
| Browser session timed out | Session exceeded its TTL | Start a new session with browser_navigate. Keep sessions short and close them when done |
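For the extraction case, css_extractor takes a JSON map of field names to CSS selectors. A hypothetical map for a product listing might look like the following (the field names and selectors are illustrative, not tied to any real page):

```json
{
  "name": ".product-name",
  "price": ".product-price"
}
```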

Example prompts

Once you configure the ZenRows MCP, describe the task to your AI assistant and it handles the scraping automatically.

Scrape tool examples

Fetch https://www.scrapingcourse.com/javascript-rendering/. It uses React, so enable JavaScript rendering.
Get the top 5 products from https://www.scrapingcourse.com/ecommerce/ and extract just the names and prices.
Take a full-page screenshot of https://news.ycombinator.com.
Scrape https://www.scrapingcourse.com/antibot-bypass — it's blocking requests. Use Premium Proxies.

Browser tool examples

Open https://www.scrapingcourse.com/ecommerce/ in a browser, type "hoodie" into the search field, press Enter, wait for the results to load, then extract all matching product names and prices.
Navigate to https://scrapingcourse.com/ecommerce/, click on the first product, extract its full description and price, then go back and do the same for the second product.
Open https://news.ycombinator.com in a browser, scroll down to load more stories, then extract all the headlines and links.

Contributing

The ZenRows MCP server is open source. If you want to modify the server, fix a bug, or suggest an improvement, clone the repository and run it locally:
```shell
git clone https://github.com/ZenRows/zenrows-mcp
cd zenrows-mcp
npm install
cp .env.example .env   # Add your API key
npm run dev            # Run with .env loaded (requires Node.js 20.6+)
npm run build          # Compile to dist/
npm run inspect        # Open the MCP inspector UI
```
Pull requests and issues are welcome on the GitHub repository.