Amazon is one of the biggest e-commerce retailers in the world and a popular web scraping target.
Amazon uses its own proprietary, constantly evolving anti-scraping protection mechanisms. This makes it difficult to scrape Amazon data reliably, and this is where web scraping APIs come in handy.
Overall, most of the web scraping APIs we've tested in our benchmarks perform well for Amazon, at an average cost of $2.62 per 1,000 scrape requests.
Amazon.com scraping API benchmarks
Scrapeway runs weekly benchmarks of the most popular web scraping APIs against Amazon product pages. Here's the table for this week (week-over-week changes shown in parentheses):
| Service | Success % | Speed | Cost $/1000 |
|---|---|---|---|
| 1 | 100% (+2) | 3.5s (-2.1) | $0.18 (-0.08) |
| 2 | 100% (+16) | 6.2s (-0.6) | $2.71 (=) |
| 3 | 99% (+1) | 4.8s (-2.5) | $2.45 (=) |
| 4 | 99% (=) | 5.5s (-0.6) | $2.76 (=) |
| 5 | 92% (+5) | 6.5s (-0.1) | $3.27 (=) |
| 6 | 85% (-3) | 4.1s (-0.3) | $2.20 (=) |
| 7 | 8% (-5) | 11.5s (+1.3) | $4.75 (=) |
How to scrape Amazon.com?
Amazon is relatively easy to scrape: its pages are mostly static HTML with only a few dynamic elements, so a headless browser is not required.
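To illustrate, a bare-bones scraper without any API service is just an HTTP GET plus an HTML parse. This is a minimal sketch: the browser-like headers are an assumption meant to mimic a real browser, and Amazon may still block plain requests, as discussed next.

```python
# a minimal sketch without a scraping API: plain HTTP GET + HTML parsing.
# the browser-like headers below are an assumption; Amazon may still
# block plain requests from datacenter IPs or unmarked clients.
import requests
from parsel import Selector

response = requests.get(
    "https://www.amazon.com/dp/B0B92489PD/",  # example product URL
    headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36",
        "Accept-Language": "en-US,en;q=0.9",
    },
)
response.raise_for_status()
selector = Selector(response.text)
print(selector.css("#productTitle::text").get("").strip())
```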
That being said, Amazon has a lot of anti-scraping mechanisms in place, so it's recommended to use a reliable web scraping service that can bypass these constantly changing measures. See the benchmarks above for the most up-to-date results.
Here's an example Amazon web scraper in Python; the ScrapingAnt client implementation is shown below, and the same pattern applies to each of the other web scraping API services:
```python
from pprint import pprint

from parsel import Selector
# install using `pip install scrapingant-client`
from scrapingant_client import ScrapingAntClient

# create an API client instance
client = ScrapingAntClient(token="YOUR API KEY")

# create a scrape function that returns an HTML parser for a given URL
def scrape(url: str, country: str = "", render_js: bool = False, headers: dict = None) -> Selector:
    api_result = client.general_request(
        url,
        headers=headers,
        proxy_type="residential",
        proxy_country=country or None,
        browser=render_js,  # Amazon doesn't require javascript rendering
        return_page_source=False,
    )
    assert api_result.ok, api_result.text
    return Selector(api_result.text)

url = "https://www.amazon.com/kindle-the-lightest-and-most-compact-kindle/dp/B0B92489PD/"
selector = scrape(url)
data = {
    "url": url,
    "name": selector.css("#productTitle::text").get("").strip(),
    "asin": selector.css("input[name=ASIN]::attr(value)").get("").strip(),
    "price": selector.css("span.a-price ::text").get(),
    # ...
}
pprint(data)
```
Example output:

```
{'asin': 'B0B92489PD',
 'name': 'Amazon Kindle – The lightest and most compact Kindle, with extended '
         'battery life, adjustable front light, and 16 GB storage – Without '
         'Lockscreen Ads – Black',
 'price': '$99.99',
 'url': 'https://www.amazon.com/kindle-the-lightest-and-most-compact-kindle/dp/B0B92489PD/'}
```
As seen above, Amazon scraping in Python is relatively straightforward using common Python tools for HTML parsing.
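The same pattern extends to other product fields. The selectors in the sketch below (e.g. `#acrCustomerReviewText` for the review count) are assumptions based on Amazon's current markup and may need adjusting as the page layout changes:

```python
# extend the data dict with more fields; these selectors are assumptions
# based on Amazon's current markup and may break when the layout changes
data.update({
    # star rating, e.g. "4.8 out of 5 stars"
    "rating": selector.css("span.a-icon-alt::text").get("").strip(),
    # review count, e.g. "12,345 ratings"
    "review_count": selector.css("#acrCustomerReviewText::text").get("").strip(),
    # feature bullet points from the "About this item" section
    "features": [
        text.strip()
        for text in selector.css("#feature-bullets li span::text").getall()
        if text.strip()
    ],
    # main product image URL
    "image": selector.css("#landingImage::attr(src)").get(),
})
```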
Why scrape Amazon Products?
Amazon is a popular target for web scraping because it has a large amount of e-commerce data that can be used for various purposes, such as price monitoring, market research, and competitive analysis.
With price monitoring scraping we can track products' historic pricing data and take advantage of market fluctuations to make better purchasing decisions.
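As a minimal sketch of such a price tracker, the `scrape()` function from the example above can be run on a schedule (e.g. daily via cron) and its results appended to a CSV file; the output file name and product URL here are placeholder assumptions:

```python
# minimal price-history tracker sketch reusing scrape() from the example above;
# appends one timestamped row per run to a CSV file
import csv
from datetime import datetime, timezone

def track_price(url: str, output_file: str = "price_history.csv") -> None:
    selector = scrape(url)
    price = selector.css("span.a-price ::text").get()
    with open(output_file, "a", newline="") as file:
        csv.writer(file).writerow(
            [datetime.now(timezone.utc).isoformat(), url, price]
        )

track_price("https://www.amazon.com/dp/B0B92489PD/")
```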
Market research scraping, and especially review scraping, can help us understand customer preferences through sentiment analysis, identify trends through statistics, and make informed decisions about product development and marketing strategies.
Amazon is also often scraped by Amazon sellers themselves to monitor competition and adjust their pricing strategy.
Finally, Amazon contains so much data that it can be used in AI and machine learning models to predict trends and make better business decisions.