There are roughly four eras of Python web scraping. urllib and a regex. Then requests plus BeautifulSoup. Then Scrapy for anything serious. Then Playwright, once half the web went JavaScript-only and pushed the previous three tools off a cliff.

Scrapling is one of the newer libraries trying to be the next layer on that stack — a single toolkit that covers the easy case, the heavy-JS case, and the anti-bot-protected case, without making you stitch together three different libraries.

I’ve been reading through the project, the benchmarks, and the API. Here’s what’s actually interesting about it, what to be careful with, and when I’d reach for it instead of the obvious alternatives.

What it is in one sentence

Scrapling is a Python 3.10+ scraping framework that wraps three different fetching backends — plain HTTP with TLS fingerprint impersonation, a stealth-mode browser, and a full Playwright-driven browser — behind one consistent selector API. BSD-3-Clause licensed.

The tagline on the repo is “Effortless Web Scraping for the Modern Web,” which is the kind of thing every scraping library says. The more useful framing is: it’s trying to be Scrapy’s spider model + curl_cffi’s TLS fingerprinting + an undetected Playwright in one import.

The three-fetcher model

This is the part of the design I find genuinely well thought out. Most scraping projects accumulate a hairball of requests for the fast pages, Selenium or Playwright for the JS-heavy ones, and some custom anti-bot bypass for the protected ones. Scrapling separates those into three tiers with the same response shape:

Fetcher         | Backend                                         | When to use
Fetcher         | Plain HTTP, with TLS fingerprint impersonation  | Static HTML; you don't need a real browser; you want it fast
StealthyFetcher | Headless browser with anti-detection patches    | Cloudflare/JS-protected pages where a real browser is required
DynamicFetcher  | Playwright/Chromium, full automation            | SPAs with complex auth, click flows, or JS-rendered data

In one Spider class you can mark different requests for different tiers. The README’s example:

async def parse(self, response: Response):
    for link in response.css('a::attr(href)').getall():
        if "protected" in link:
            # Route protected pages through the stealth browser tier
            yield Request(link, sid="stealth")
        else:
            # Everything else stays on the cheap plain-HTTP tier
            yield Request(link, sid="fast", callback=self.parse)

The reason this matters: in a real crawl, only a fraction of pages need the heavy backend, but you usually end up paying browser overhead for everything because rewriting halfway through is painful. Letting a single spider mix tiers keeps the average page cheap.

The benchmarks, with a grain of salt

The README publishes numbers for parsing 5,000 nested elements:

Library               | Time       | Relative
Scrapling             | 2.02 ms    | 1.0×
Parsel / Scrapy       | 2.04 ms    | 1.01×
Raw lxml              | 2.54 ms    | 1.26×
BeautifulSoup4 + lxml | 1584.31 ms | ~784×

Two honest reads of this:

Yes, BeautifulSoup is that much slower. That number is not a typo. BS4 is famously a usability-first library; for tight loops over many documents, lxml-based parsers (which Scrapling, Parsel, and raw lxml all are) are orders of magnitude faster. This is a known result, not a Scrapling-specific finding.

No, Scrapling isn’t really faster than Scrapy’s parser. Look at the table — Parsel (the engine Scrapy uses) is 2.04 ms vs Scrapling’s 2.02 ms. Within margin of error for a microbenchmark. The headline “orders of magnitude faster than BS4” is true; “faster than Scrapy” isn’t, at the parser layer.

Where Scrapling probably does win in real workloads is the network layer — TLS fingerprint impersonation in Fetcher lets you skip the “add Playwright just to get past basic JA3 fingerprinting” tax. That saves seconds per request, not microseconds.

Adaptive selection — the interesting idea

Most scraping pain isn’t writing the first selector, it’s the selector breaking three weeks later when the target site renames a class. Scrapling’s pitch on this is “smart element relocation after website changes using similarity algorithms” — i.e. the library can re-find an element when the original selector fails, by comparing structural similarity to what it captured before.

I’m cautiously optimistic about this. It’s the right thing to want. In practice, similarity-based relocation will work great for cosmetic DOM rearrangements (class renames, wrapper divs added) and badly for semantic restructures (the data moved to a different page, or the section was re-templated). Treat it as a “save you from waking up at 3am for a one-class change” feature, not a “your scraper now maintains itself” feature.
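To make the workflow concrete, here's roughly what it looks like in code. I haven't verified the exact flag names against the current release, so treat the auto_save/auto_match parameters below as illustrative of the save-then-rematch idea rather than the authoritative API:

from scrapling.fetchers import Fetcher

# First run: select normally and ask Scrapling to remember the element's
# structure (parameter name assumed from the docs' description; verify it).
page = Fetcher.get('https://example.com/products')   # placeholder URL
prices = page.css('.product-price::text', auto_save=True)

# Weeks later, after the site renames the class: the same selector can fall
# back to similarity matching against the saved structure instead of
# silently returning nothing (again, flag name assumed).
page = Fetcher.get('https://example.com/products')
prices = page.css('.product-price::text', auto_match=True)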

What’s the simplest thing that works?

Lifted directly from the docs, the absolute minimum is:

from scrapling.fetchers import Fetcher, FetcherSession

with FetcherSession(impersonate='chrome') as session:
    page = session.get('https://quotes.toscrape.com/', stealthy_headers=True)
    quotes = page.css('.quote .text::text').getall()

That’s it. Three lines for “fetch and parse with Chrome’s TLS fingerprint.” The impersonate='chrome' parameter is the curl_cffi-style fingerprint spoofing — useful when a target uses basic fingerprint-based bot detection.
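If you don't need a session at all (no cookie reuse between requests), the one-shot form is shorter still. This mirrors the session example above; I haven't double-checked that Fetcher.get accepts the same impersonate argument as FetcherSession, so read it as a sketch:

from scrapling.fetchers import Fetcher

# One-off request with a Chrome TLS fingerprint; no session object to manage.
page = Fetcher.get('https://quotes.toscrape.com/', impersonate='chrome')
authors = page.css('.quote .author::text').getall()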

For Cloudflare-protected pages:

from scrapling.fetchers import StealthyFetcher

page = StealthyFetcher.fetch('https://nopecha.com/demo/cloudflare')
data = page.css('#padded_content a').getall()

Note that StealthyFetcher requires a separate browser install:

pip install "scrapling[fetchers]"
scrapling install

The scrapling install step pulls down patched Chromium binaries. On a fresh server that’s a couple hundred MB of dependencies, which is worth knowing before you pip install it on a small VM.
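The third tier, DynamicFetcher, follows the same pattern for pages whose content only exists after JavaScript runs. A minimal sketch; the fetch call mirrors StealthyFetcher above, but the extra option names here (headless, network_idle) are my assumptions, not something I've verified against the docs:

from scrapling.fetchers import DynamicFetcher

# Full Playwright-driven Chromium; returns once the page's JS has rendered.
page = DynamicFetcher.fetch(
    'https://quotes.toscrape.com/js/',  # JS-rendered variant of the demo site
    headless=True,                      # assumed option name
    network_idle=True,                  # assumed option: wait for the network to go quiet
)
quotes = page.css('.quote .text::text').getall()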

Compared to the obvious alternatives

Need                                                      | Reach for
One-off script, simple HTML, learning                     | requests + BeautifulSoup
Big crawl, well-defined pipeline, mature ecosystem        | Scrapy
Heavy JS app, complex auth flow, you control the browser  | Playwright directly
TLS fingerprinting, no browser overhead                   | curl_cffi
Cloudflare/Turnstile-protected static-ish pages           | cloudscraper or Scrapling.StealthyFetcher
You want all of the above in one library                  | Scrapling

The most honest thing I can say: if your project has a clear shape (“we’re crawling 10M product pages on a known site”), use the tool that’s specialized for that shape — Scrapy is still the right answer for big disciplined crawls, Playwright is still the right answer for serious browser automation. Scrapling’s value is in projects that don’t have one clear shape, where you’d otherwise end up maintaining three scrapers.

Where I’d be careful

Anti-bot bypass is a moving target. The README is upfront about this: enterprise systems (Akamai, DataDome, Kasada, Incapsula) need third-party solutions and aren’t promised. Even for the systems Scrapling does target — Cloudflare Turnstile in particular — expect that what works today won’t necessarily work next quarter. If your business depends on it, plan for ongoing maintenance, not “set it and forget it.”

robots_txt_obey is opt-in, not default. This is a deliberate design choice (some users have legitimate reasons to bypass robots files — e.g. you own the site you’re crawling) but it means you have to consciously turn it on. Forgetting to do so on a third-party site is a thing you’ll regret in court before you regret it technically.

Stealth ≠ permission. The library has a LICENSE and a disclaimer that’s worth quoting:

“This library is provided for educational and research purposes only. By using this library, you agree to comply with local and international data scraping and privacy laws.”

Anti-detection capabilities are useful for legitimate cases — research, archival, accessibility scraping, monitoring sites where you have permission, scraping your own data out of services that won’t give you an export. They’re also exactly the capabilities used by ad fraud, content theft, and ToS violations. The library doesn’t care which you’re doing; courts and regulators do. Treat the “stealthy” features as power tools — useful, but you need to know when not to use them.

When I’d actually reach for it

Three concrete cases I think Scrapling is well-suited for:

  1. Personal data export. A service holds your data and won’t provide a real export API. Scraping your own account with a real browser, slowly and with respect for their rate limits, is exactly what Scrapling’s DynamicSession is for (there’s a sketch of that shape after this list).

  2. A small commercial crawl that hits two or three different protection levels. You don’t want to architect a Scrapy + Playwright + curl_cffi pipeline for a 1-week project. Scrapling gets you to a working prototype faster.

  3. Research and archival. Scraping public-record sites, datasets, government portals, news archives. The kind of work where you want fast HTTP for the bulk and a real browser for the handful of awkward pages.
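For the personal-export case, the shape of the code matters more than the specifics: log in once, keep the session, crawl slowly. A rough sketch of that shape; DynamicSession is the class I mentioned above, but the method names, options, URLs, and selectors below are placeholders I'm assuming for illustration:

import time
from scrapling.fetchers import DynamicSession

# One long-lived browser session so login cookies persist across pages.
with DynamicSession(headless=True) as session:                  # assumed constructor option
    session.fetch('https://example-service.invalid/login')      # placeholder: do your login flow here
    for page_no in range(1, 51):
        page = session.fetch(f'https://example-service.invalid/orders?page={page_no}')
        for row in page.css('.order-row'):                      # placeholder selector
            print(row.css('::text').getall())
        time.sleep(5)                                           # deliberately slow; respect their rate limits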

I would not reach for it as a first choice when the target is a single, well-understood site you’re going to scrape for years (Scrapy’s discipline pays off there) or when the site’s owner explicitly asks you not to (no library solves that problem).

Bottom line

Scrapling is a real, well-designed library, not vapor and not hype. The benchmark win over BeautifulSoup is real; the edge over Scrapy’s parser is within noise. The genuine win is the unified surface across three fetcher tiers and the network-level fingerprinting that lets you avoid spinning up a browser for cases that don’t need one.

It doesn’t solve the actual hard problems of scraping — those are all on the legal/ethical side, where no library can help — but it solves a real engineering problem cleanly. If you’re starting a new scraping project today and you don’t know yet whether you’ll need a browser, this is a reasonable place to start.

The full source and docs are at github.com/D4Vinci/Scrapling.