The Internet evolves fast, and modern websites often use dynamic content loading to provide the best user experience. On the other hand, this makes it harder to extract data from such web pages, as it requires executing the page's internal JavaScript while scraping. Let's review several conventional techniques that allow data extraction from dynamic websites using Python.
What is a dynamic website?
Web scraping, also called web data extraction, refers to the technique of harvesting data from a web page through leveraging the patterns in the page’s underlying code. It can be used to collect unstructured information from websites for processing and storage in a structured format.
A dynamic website is a type of website that can update or load content after the initial HTML load. The browser receives basic HTML with JavaScript and then loads the content using the received JavaScript code. Such an approach increases page load speed and avoids reloading the same layout each time you open a new page.
Usually, dynamic websites use AJAX to load content dynamically, or even the whole site is based on a Single-Page Application (SPA) technology.
In contrast to dynamic websites, we can observe static websites containing all the requested content on the page load.
A great example of a static website is example.com: the whole content of this website is loaded as plain HTML during the initial page load.
To demonstrate the basic idea of a dynamic website, we can create a web page that contains dynamically rendered text. It will not include any request to get information, just a render of a different HTML after the page load:
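The original test page isn't reproduced in this copy, but based on the description that follows, it would look roughly like this (the exact markup is an assumption):

```html
<!DOCTYPE html>
<html>
  <head><meta charset="utf-8"></head>
  <body>
    <div id="test">Web Scraping is hard</div>
    <script>
      // Replace the initial text after the page loads
      document.getElementById('test').innerHTML = 'I ❤️ ScrapingAnt';
    </script>
  </body>
</html>
```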
All we have here is an HTML file with a single <div> in the body that contains the text "Web Scraping is hard", but after the page load, that text is replaced with the text generated by the JavaScript:
To prove this, let's open this page in the browser and observe the dynamically replaced text:
Alright, so the browser displays a text, and HTML tags wrap this text.
Can't we use BeautifulSoup or LXML to parse it? Let's find out.
Extract data from a dynamic web page
BeautifulSoup is one of the most popular Python libraries across the Internet for HTML parsing. Almost 80% of web scraping Python tutorials use this library to extract required content from the HTML.
Let's use BeautifulSoup to extract the text inside the <div> from our sample above.
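The original snippet isn't shown in this copy, so here is a sketch of it; the file-writing step only exists to make the sketch self-contained, and the exact markup of test.html is an assumption:

```python
import os

from bs4 import BeautifulSoup

# Recreate the test file so the snippet is self-contained
# (the exact markup of the original test.html is an assumption)
html = ('<div id="test">Web Scraping is hard</div>'
        '<script>document.getElementById("test").innerHTML = "I ❤️ ScrapingAnt";</script>')
path = os.path.join(os.getcwd(), 'test.html')
with open(path, 'w', encoding='utf-8') as f:
    f.write(html)

# Parse the raw HTML -- note that no JavaScript is executed here
with open(path, encoding='utf-8') as f:
    soup = BeautifulSoup(f, 'html.parser')

# Find the tag with id="test" and extract its text
result = soup.find(id='test').get_text()
print(result)  # Web Scraping is hard
```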
This code snippet uses the os library to open our test HTML file (test.html) from the local directory and creates an instance of BeautifulSoup stored in the soup variable. Using soup, we find the tag with the id test and extract the text from it.
In the screenshot from the first part of the article, we've seen that the content of the test page is I ❤️ ScrapingAnt, but the code snippet output is the following:
And the result is different from what we expected (unless you've already figured out what is going on there). Everything is correct from the BeautifulSoup perspective: it parsed the data from the provided HTML file, but we want to get the same result as the browser renders. The reason is the dynamic JavaScript, which was not executed during HTML parsing.
We need the HTML to be run in a browser to see the correct values and then be able to capture those values programmatically.
Below you can find four different ways to execute dynamic website's Javascript and provide valid data for an HTML parser: Selenium, Pyppeteer, Playwright, and Web Scraping API.
Selenium: web scraping with a webdriver
Selenium is one of the most popular web browser automation tools for Python. It allows communication with different web browsers by using a special connector - a webdriver.
To use Selenium with Chrome/Chromium, we'll need to download webdriver from the repository and place it into the project folder. Don't forget to install Selenium itself by executing:
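The package is available on PyPI:

```shell
pip install selenium
```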
Selenium instantiating and scraping flow is the following:
- define and setup Chrome path variable
- define and setup Chrome webdriver path variable
- define browser launch arguments (to use headless mode, proxy, etc.)
- instantiate a webdriver with the options defined above
- load a webpage via instantiated webdriver
From the code perspective, it looks like the following:
And finally, we'll receive the required result:
Using Selenium for dynamic website scraping with Python is not complicated, and it lets you choose a specific browser and version, but it involves several moving components that have to be maintained. The code itself also contains some boilerplate parts, like the setup of the browser, the webdriver, etc.
I like to use Selenium for my web scraping project, but you can find easier ways to extract data from dynamic web pages below.
Pyppeteer: Python headless Chrome
Pyppeteer is an unofficial Python port of Puppeteer, the JavaScript (headless) Chrome/Chromium browser automation library. It can do mostly the same things Puppeteer can, but using Python instead of NodeJS.
Puppeteer is a high-level API to control headless Chrome, so it allows you to automate actions you're doing manually with the browser: copy page's text, download images, save page as HTML, PDF, etc.
To install Pyppeteer you can execute the following command:
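The package is available on PyPI:

```shell
pip install pyppeteer
```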
The usage of Pyppeteer for our needs is much simpler than Selenium:
I've tried to comment on every atomic part of the code for a better understanding. However, generally, we've just opened a browser page, loaded a local HTML file into it, and extracted the final rendered HTML for further BeautifulSoup processing.
As expected, the result is the following:
We did it again, this time without worrying about finding, downloading, and connecting a webdriver to a browser. However, Pyppeteer looks abandoned and not properly maintained. This situation may change in the near future, but I'd suggest looking at a more powerful library.
Playwright: Chromium, Firefox and Webkit browser automation
Playwright can be considered as an extended Puppeteer, as it allows using more browser types (Chromium, Firefox, and Webkit) to automate modern web app testing and scraping. You can use Playwright API in JavaScript & TypeScript, Python, C# and, Java. And it's excellent, as the original Playwright maintainers support Python.
The API is almost the same as Pyppeteer's, but it has both sync and async versions.
Installation is as simple as always:
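The second command downloads the browser binaries Playwright drives:

```shell
pip install playwright
playwright install
```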
Let's rewrite the previous example using Playwright.
As a good tradition, we can observe our beloved output:
We've gone through several different data extraction methods with Python, but is there any more straightforward way to implement this job? How can we scale our solution and scrape data with several threads?
Meet the web scraping API!
Web Scraping API
The ScrapingAnt web scraping API provides the ability to scrape dynamic websites with only a single API call. It already handles headless Chrome and rotating proxies, so the response will already contain the JavaScript-rendered content. ScrapingAnt's proxy pool prevents blocking and provides a constant and high data extraction success rate.
Usage of web scraping API is the simplest option and requires only basic programming skills.
You do not need to maintain the browser, the library, proxies, webdrivers, or any other aspect of a web scraper, and you can focus on the most exciting part of the work: data analysis.
As the web scraping API runs on the cloud servers, we have to serve our file somewhere to test it. I've created a repository with a single file: https://github.com/kami4ka/dynamic-website-example/blob/main/index.html
To check it out as HTML, we can use another great tool: HTMLPreview
The final test URL to scrape dynamic web data looks like the following: http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html
The scraping code itself is the simplest one across all four described libraries. We'll use the ScrapingAnt client library to access the web scraping API.
Let's install it first:
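The client library is published on PyPI as scrapingant-client:

```shell
pip install scrapingant-client
```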
And use the installed library:
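A sketch of the call (the API token placeholder must be replaced with your own, and the element lookup assumes the sample page keeps the same id="test" markup as the local example):

```python
from bs4 import BeautifulSoup
from scrapingant_client import ScrapingAntClient

# Create a client with your API token from the ScrapingAnt user panel
client = ScrapingAntClient(token='<YOUR_SCRAPINGANT_API_TOKEN>')

# The API renders the page in a cloud headless browser and returns final HTML
result = client.general_request(
    'http://htmlpreview.github.io/?https://github.com/kami4ka/dynamic-website-example/blob/main/index.html'
)

# Parse the already-rendered HTML as usual
soup = BeautifulSoup(result.content, 'html.parser')
print(soup.find(id='test').get_text())
```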
To get your API token, please visit the Login page to sign in to the ScrapingAnt user panel. It's free.
And the result is still the required one.
All the headless browser magic happens in the cloud, so you only need to make an API call to get the result.
Check out the documentation for more info about ScrapingAnt API.
Summary
Today we've checked four free tools that allow scraping dynamic websites with Python. All these libraries use a headless browser (or an API with a headless browser) under the hood to correctly render the internal JavaScript inside an HTML page. Below you can find links with more information about those tools to help you choose the handiest one:
Happy web scraping, and don't forget to use proxies to avoid blocking 🚀
In the previous chapter, we have seen how to scrape dynamic websites. In this chapter, let us understand the scraping of websites that work on user-based inputs, that is, form-based websites.
Introduction
These days the WWW (World Wide Web) is moving towards social media as well as user-generated content. So the question arises: how can we access the kind of information that lies beyond the login screen? For this, we need to deal with forms and logins.
In previous chapters, we worked with the HTTP GET method to request information, but in this chapter we will work with the HTTP POST method, which pushes information to a web server for storage and analysis.
Interacting with Login forms
While working on the Internet, you must have interacted with login forms many times. They may be very simple, including only a few HTML fields, a submit button, and an action page, or they may be complicated and have additional fields like email and a message, along with a captcha for security reasons.
In this section, we are going to deal with a simple submit form with the help of Python requests library.
First, we need to import the requests library. Next, we need to provide the information for the fields of the login form, and the URL on which the action of the form would happen. After running the script, it will return the content of the page where the action has happened.
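The steps above can be sketched as follows; the field names and the action URL are assumptions (httpbin.org stands in for a real form processor and simply echoes the submitted data back):

```python
import requests

# Field names and the action URL are assumptions for illustration
login_data = {'email': 'user@example.com', 'password': 'secret'}
action_url = 'https://httpbin.org/post'  # replace with the form's real action URL

# POST the form fields to the action URL
response = requests.post(action_url, data=login_data)
print(response.text)  # content of the page where the action has happened
```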
If you want to submit an image with the form, it is very easy with requests.post(). You can understand it with the help of the following Python script:
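A minimal sketch, again using httpbin.org as a stand-in endpoint and a tiny generated file in place of a real image:

```python
import requests

# Create a small stand-in file to upload
with open('picture.png', 'wb') as f:
    f.write(b'\x89PNG\r\n\x1a\n')  # minimal PNG signature bytes

# The "files" argument turns the request into a multipart/form-data upload
with open('picture.png', 'rb') as f:
    response = requests.post('https://httpbin.org/post',
                             data={'caption': 'my picture'},
                             files={'image': f})
print(response.status_code)
```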
Loading Cookies from the Web Server
A cookie, sometimes called a web cookie or internet cookie, is a small piece of data sent from a website that our computer stores in a file located inside our web browser.
In the context of dealing with login forms, cookies can be of two types. The first, which we dealt with in the previous section, allows us to submit information to a website; the second lets us remain in a permanent "logged-in" state throughout our visit to the website. For the second kind of form, websites use cookies to keep track of who is logged in and who is not.
What do cookies do?
These days most websites use cookies for tracking. We can understand the working of cookies with the help of the following steps −
Step 1 − First, the site authenticates our login credentials and stores them in our browser’s cookie. This cookie generally contains a server-generated token, a time-out, and tracking information.
Step 2 − Next, the website will use the cookie as proof of authentication, which is checked whenever we visit the website.
Cookies are very problematic for web scrapers because if a scraper does not keep track of the cookies, the submitted form is sent back, and on the next page it seems that it never logged in. It is very easy to track cookies with the help of the Python requests library, as shown below −
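A sketch of inspecting the cookies set by a response; in a real case this would target the login form's processor URL, but here httpbin's cookie endpoint stands in and simply sets a session cookie:

```python
import requests

# httpbin's endpoint sets a cookie named session_id on the response;
# a real login processor would set its own session cookie the same way
r = requests.get('https://httpbin.org/cookies/set/session_id/abc123',
                 allow_redirects=False)

# Cookies returned by the server in response to the last request
print(r.cookies.get_dict())
```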
In the above line of code, the URL would be the page which will act as the processor for the login form.
After running the above script, we will retrieve the cookies from the result of the last request.
Another issue with cookies is that websites sometimes modify them without warning. This kind of situation can be dealt with using requests.Session(), as follows −
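A sketch of the session-based approach, again with httpbin URLs standing in for the real form processor:

```python
import requests

# A Session object keeps cookies between requests, so server-side cookie
# changes are picked up automatically
session = requests.Session()
session.get('https://httpbin.org/cookies/set/session_id/abc123')

# The session resends its stored cookies with every later request
r = session.get('https://httpbin.org/cookies')
print(r.json())
```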
In the above line of code, the URL would be the page which will act as the processor for the login form.
Observe that you can easily see the difference between the script with a session and the one without.
Automating forms with Python
In this section, we are going to deal with a Python module named Mechanize that will reduce our work and automate the process of filling out forms.
Mechanize module
The Mechanize module provides us a high-level interface to interact with forms. Before we start using it, we need to install it with the following command −
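The package is available on PyPI:

```shell
pip install mechanize
```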
Note that it would work only in Python 2.x.
Example
In this example, we are going to automate the process of filling out a login form having two fields, namely email and password −
The above code is very easy to understand. First, we imported the mechanize module and created a Mechanize browser object. Then, we navigated to the login URL and selected the form. After that, the field names and values are passed directly to the browser object.