- Python Web Scraping Tutorial
In this chapter, let us learn how to perform web scraping on dynamic websites and understand the concepts involved in detail.
Introduction
Web scraping is a complex task, and the complexity multiplies if the website is dynamic. According to the United Nations Global Audit of Web Accessibility, more than 70% of websites are dynamic in nature and rely on JavaScript for their functionality.
Dynamic Website Example
Let us look at an example of a dynamic website and understand why it is difficult to scrape. Here we are going to take the example of searching on a website named http://example.webscraping.com/places/default/search. But how can we tell that this website is dynamic in nature? It can be judged from the output of the following Python script, which tries to scrape data from the above-mentioned webpage −
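A minimal sketch of such a naive scraper, using only the standard library (the results div id is an assumption based on the example site's markup):

```python
import re
import urllib.request

# Fetch the raw HTML of the search page without executing any JavaScript
url = 'http://example.webscraping.com/places/default/search'
html = urllib.request.urlopen(url).read().decode('utf-8')

# Look inside the results <div> for the country data we are after
matches = re.findall(r'<div[^>]*id="results"[^>]*>(.*?)</div>', html, re.DOTALL)
print(matches)
```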
The script above fails to extract any information because the <div> element we are trying to find is empty — its contents are filled in by JavaScript only after the page has loaded in a browser.
Approaches for Scraping Data from Dynamic Websites
We have seen that the scraper cannot scrape the information from a dynamic website because the data is loaded dynamically with JavaScript. In such cases, we can use the following two techniques for scraping data from dynamic JavaScript dependent websites −
- Reverse Engineering JavaScript
- Rendering JavaScript
Reverse Engineering JavaScript
Reverse engineering is useful here because it lets us understand how data is loaded dynamically by a web page, so that we can request that data directly.
For doing this, we need to open the browser's developer tools (inspect element) for the URL in question. Next, we click the NETWORK tab to find all the requests made for that web page, including a request for search.json with a path of /ajax. Instead of accessing this AJAX data from the browser or via the NETWORK tab, we can do it with the help of the following Python script too −
Example
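A minimal sketch using the requests library; the exact query parameters (search_term, page_size, page) are assumptions based on what the NETWORK tab shows for this example site:

```python
import requests

# The AJAX endpoint that the search page calls behind the scenes
url = 'http://example.webscraping.com/places/ajax/search.json'
params = {'search_term': 'a', 'page_size': 10, 'page': 0}

response = requests.get(url, params=params)
# requests can decode the JSON body for us
print(response.json())
```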
The above script allows us to access the JSON response by using Python's json support. Similarly, we can download the raw string response and load it with Python's json.loads method. We are doing this with the help of the following Python script. It will basically scrape all of the countries by searching for the letter 'a' and then iterating over the resulting pages of the JSON responses.
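A sketch of that loop, assuming the same endpoint and parameter names as above, and assuming the JSON response keeps its results under a records key with a country field — adjust these names to whatever the NETWORK tab actually shows:

```python
import json
import requests

url = 'http://example.webscraping.com/places/ajax/search.json'
countries = set()
page = 0

while True:
    params = {'search_term': 'a', 'page_size': 10, 'page': page}
    # Download the raw string response and load it with json.loads
    raw = requests.get(url, params=params).text
    data = json.loads(raw)
    records = data.get('records', [])
    if not records:
        break  # no more pages of results
    for record in records:
        countries.add(record['country'])
    page += 1

# Save the scraped country names to a file
with open('countries.txt', 'w') as f:
    f.write('\n'.join(sorted(countries)))
```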
After running the above script, the scraped country names will be saved in the file named countries.txt.
Rendering JavaScript
In the previous section, we reverse engineered the web page to see how its API works and how we can use it to retrieve the results in a single request. However, we can face the following difficulties while doing reverse engineering −
Sometimes websites can be very difficult to reverse engineer. For example, if the website is built with an advanced browser tool such as Google Web Toolkit (GWT), the resulting JS code is machine-generated and difficult to understand and reverse engineer.
Some higher level frameworks like React.js can make reverse engineering difficult by abstracting already complex JavaScript logic.
The solution to the above difficulties is to use a browser rendering engine that parses HTML, applies the CSS formatting and executes JavaScript to display a web page.
Example
In this example, for rendering JavaScript we are going to use the familiar Python module Selenium. The following Python code will render a web page with the help of Selenium −
First, we need to import webdriver from selenium as follows −
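```python
from selenium import webdriver
from selenium.webdriver.common.by import By  # used further down to locate elements
```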
Now, provide the path of web driver which we have downloaded as per our requirement −
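The path below is a placeholder — point it at the chromedriver binary you downloaded. With recent versions of Selenium the path is wrapped in a Service object; older versions accept executable_path directly on webdriver.Chrome.

```python
from selenium.webdriver.chrome.service import Service

path = r'C:\Users\you\Downloads\chromedriver.exe'  # placeholder path
driver = webdriver.Chrome(service=Service(executable_path=path))
```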
Now, provide the url which we want to open in that web browser now controlled by our Python script.
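```python
driver.get('http://example.webscraping.com/places/default/search')
```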
Now, we can use the ID of the search textbox to set the term we want to search for.
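Here search_term is an assumed element id — confirm it with inspect element on the actual page:

```python
driver.find_element(By.ID, 'search_term').send_keys('a')
```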
Next, we can use JavaScript to set the select box content as follows −
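A sketch that picks the largest page size so all results fit on a single page (page_size is an assumed element id):

```python
js = ("var box = document.getElementById('page_size');"
      "box.selectedIndex = box.options.length - 1;")
driver.execute_script(js)
```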
The following line of code clicks the search button on the web page −
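```python
driver.find_element(By.ID, 'search').click()  # 'search' is the assumed id of the search button
```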
The next line of code tells the driver to wait up to 45 seconds for the AJAX request to complete.
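```python
driver.implicitly_wait(45)
```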
Now, for selecting country links, we can use the CSS selector as follows −
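The selector #results a assumes the country links sit inside a div with id="results" — verify this with inspect element:

```python
links = driver.find_elements(By.CSS_SELECTOR, '#results a')
```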
Now the text of each link can be extracted for creating the list of countries −
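```python
countries = [link.text for link in links]
print(countries)
driver.quit()
```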
Web scraping is a way to grab data from websites without needing access to APIs or the website’s database. You only need access to the site’s data — as long as your browser can access the data, you will be able to scrape it.
Realistically, most of the time you could just go through a website manually and grab the data ‘by hand’ using copy and paste, but in a lot of cases that would take you many hours of manual work, which could end up costing you a lot more than the data is worth, especially if you’ve hired someone to do the task for you. Why hire someone to work at 1–2 minutes per query when you can get a program to perform a query automatically every few seconds?
For example, let’s say that you wish to compile a list of the Oscar winners for best picture, along with their director, starring actors, release date, and run time. Using Google, you can see there are several sites that will list these movies by name, and maybe some additional information, but generally you’ll have to follow through with links to capture all the information you want.
Obviously, it would be impractical and time-consuming to go through every link from 1927 through to today and manually try to find the information through each page. With web scraping, we just need to find a website with pages that have all this information, and then point our program in the right direction with the right instructions.
In this tutorial, we will use Wikipedia as our website, as it contains all the information we need, and then use Scrapy, a Python scraping framework, as our tool.
A few caveats before we begin:
Data scraping involves increasing the server load for the site that you’re scraping, which means a higher cost for the companies hosting the site and a lower quality experience for other users of that site. The quality of the server that is running the website, the amount of data you’re trying to obtain, and the rate at which you’re sending requests to the server will moderate the effect you have on the server. Keeping this in mind, we need to make sure that we stick to a few rules.
Most sites also have a file called robots.txt in their main directory. This file sets out rules for which parts of the site the owners do not want scrapers to access. A website’s Terms & Conditions page will usually let you know what their policy on data scraping is. For example, IMDB’s conditions page has the following clause:
Robots and Screen Scraping: You may not use data mining, robots, screen scraping, or similar data gathering and extraction tools on this site, except with our express-written consent as noted below.
Before we try to obtain a website’s data, we should always check out the website’s terms and robots.txt to make sure we are obtaining the data legally. When building our scrapers, we also need to make sure that we do not overwhelm a server with requests that it can’t handle.
Luckily, many websites recognize the need for users to obtain data, and they make the data available through APIs. If these are available, it’s usually a much easier experience to obtain data through the API than through scraping.
Wikipedia allows data scraping, as long as the bots aren’t going ‘way too fast’, as specified in their robots.txt. They also provide downloadable datasets so people can process the data on their own machines. If we go too fast, the servers will automatically block our IP, so we’ll implement timers in order to keep within their rules.
Getting Started, Installing Relevant Libraries Using Pip
First of all, let’s install Scrapy.
Windows
Install the latest version of Python from https://www.python.org/downloads/windows/
Note: Windows users will also need Microsoft Visual C++ 14.0, which you can grab as part of the “Microsoft Visual C++ Build Tools”.
You’ll also want to make sure you have the latest version of pip.
In cmd.exe, type in:
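```
python -m pip install --upgrade pip
pip install scrapy
```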
This will install Scrapy and all the dependencies automatically.
Linux
First you’ll want to install all the dependencies:
In Terminal, enter:
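On Debian/Ubuntu-based systems, something like the following should pull in the build dependencies Scrapy needs (package names may vary slightly by distribution):

```
sudo apt-get install python3 python3-dev python3-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev
```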
Once that’s all installed, just type in:
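```
pip3 install --upgrade pip
```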
To make sure pip is updated, and then:
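```
pip3 install scrapy
```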
And it’s all done.
Mac
First you’ll need to make sure you have a c-compiler on your system. In Terminal, enter:
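```
xcode-select --install
```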
After that, install homebrew from https://brew.sh/.
Update your PATH variable so that homebrew packages are used before system packages:
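Assuming a bash shell (use the matching profile file if you are on zsh):

```
echo "export PATH=/usr/local/bin:/usr/local/sbin:$PATH" >> ~/.bashrc
source ~/.bashrc
```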
Install Python:
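```
brew install python
```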
And then make sure everything is updated:
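```
brew update
brew upgrade python
```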
After that’s done, just install Scrapy using pip:
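```
pip install scrapy
```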
Overview Of Scrapy, How The Pieces Fit Together, Parsers, Spiders, Etc
You will be writing a script called a ‘Spider’ for Scrapy to run, but don’t worry, Scrapy spiders aren’t scary at all despite their name. The only similarity Scrapy spiders and real spiders have is that they like to crawl on the web.
Inside the spider is a class that you define, which tells Scrapy what to do: for example, where to start crawling, the types of requests it makes, how to follow links on pages, and how it parses data. You can even add custom functions to process data before outputting it back into a file.
Writing Your First Spider, Write A Simple Spider To Allow For Hands-on Learning
To start our first spider, we need to first create a Scrapy project. To do this, enter this into your command line:
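Here the project is named oscars, which matches the /oscars/spiders folder used below:

```
scrapy startproject oscars
```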
This will create a folder with your project.
We’ll start with a basic spider. The following code is to be entered into a Python script. Open a new Python script in /oscars/spiders and name it oscars_spider.py.
We’ll import Scrapy.
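```python
import scrapy
```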
We then start defining our Spider class. First, we set the name and then the domains that the spider is allowed to scrape. Finally, we tell the spider where to start scraping from.
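A minimal sketch of that class; the start URL is Wikipedia’s list of Best Picture winners:

```python
class OscarsSpider(scrapy.Spider):
    # The name Scrapy uses to refer to this spider on the command line
    name = "oscars"
    # Only follow links within this domain
    allowed_domains = ["en.wikipedia.org"]
    # The page the spider starts crawling from
    start_urls = ["https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"]
```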
Next, we need a function which will capture the information that we want. For now, we’ll just grab the page title. We use CSS to find the tag which carries the title text, and then we extract it. Finally, we return the information back to Scrapy to be logged or written to a file.
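A sketch of that function — it goes inside the OscarsSpider class, pulls the text out of the page’s <title> tag, and yields it back to Scrapy:

```python
    def parse(self, response):
        data = {}
        # Use a CSS selector to grab the text inside the <title> tag
        data["title"] = response.css("title::text").extract()
        # Hand the result back to Scrapy to be logged or written to a file
        yield data
```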
Now save the code in /oscars/spiders/oscars_spider.py
To run this spider, simply go to your command line and type:
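From inside the project folder:

```
scrapy crawl oscars
```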
You should see Scrapy’s crawl log scroll by, ending with the page title you just scraped.
Congratulations, you’ve built your first basic Scrapy scraper!
Full code:
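Putting the pieces together, a minimal sketch of the whole spider:

```python
import scrapy


class OscarsSpider(scrapy.Spider):
    name = "oscars"
    allowed_domains = ["en.wikipedia.org"]
    start_urls = ["https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"]

    def parse(self, response):
        data = {}
        data["title"] = response.css("title::text").extract()
        yield data
```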
Obviously, we want it to do a little bit more, so let’s look into how to use Scrapy to parse data.
First, let’s get familiar with the Scrapy shell. The Scrapy shell can help you test your code to make sure that Scrapy is grabbing the data you want.
To access the shell, enter this into your command line:
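```
scrapy shell "https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"
```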
This will basically open the page that you’ve directed it to and it will let you run single lines of code. For example, you can view the raw HTML of the page by typing in:
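```python
print(response.text)
```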
Or open the page in your default browser by typing in:
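```python
view(response)
```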
Our goal here is to find the code that contains the information that we want. For now, let’s try to grab the movie title names only.
The easiest way to find the code we need is by opening the page in our browser and inspecting the code. In this example, I am using Chrome DevTools. Just right-click on any movie title and select ‘inspect’:
As you can see, the Oscar winners have a yellow background while the nominees have a plain background. There’s also a link to the article about the movie title, and the links for movies end in film). Now that we know this, we can use a CSS selector to grab the data. In the Scrapy shell, type in:
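(The inline background colour #FAEB86 below is an assumption taken from inspecting the winners’ table rows — verify it in DevTools, as the page markup changes over time.)

```python
response.css(r"tr[style='background:#FAEB86'] a[href*='film)']").extract()
```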
As you can see, you now have a list of all the Oscar Best Picture Winners!
Going back to our main goal, we want a list of the Oscar winners for best picture, along with their director, starring actors, release date, and run time. To do this, we need Scrapy to grab data from each of those movie pages.
We’ll have to rewrite a few things and add a new function, but don’t worry, it’s pretty straightforward.
We’ll start by initiating the scraper the same way as before.
But this time, two things will change. First, we’ll import time along with scrapy, because we want to create a timer to restrict how fast the bot scrapes. Also, when we parse the pages the first time, we want to only get a list of the links to each title, so we can grab information off those pages instead.
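A sketch of the reworked spider, reusing the same name, allowed domains and start URL as before (the winner-row selector is the same assumption as in the Scrapy shell example above):

```python
import scrapy
import time


class OscarsSpider(scrapy.Spider):
    name = "oscars"
    allowed_domains = ["en.wikipedia.org"]
    start_urls = ["https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"]

    def parse(self, response):
        # Collect the link to each winning film (the yellow-background rows)
        for href in response.css(
                r"tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
            url = response.urljoin(href)
            # Wait 5 seconds between requests to stay well within Wikipedia's limits
            time.sleep(5)
            yield scrapy.Request(url, callback=self.parse_titles)
```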
Here we make a loop to look for every link on the page that ends in film) and has the yellow background, then we join those links into full URLs and send them on to the function parse_titles to process further. We also slip in a timer so that it only requests pages every 5 seconds. Remember, we can use the Scrapy shell to test our response.css fields to make sure we’re getting the correct data!
The real work gets done in our parse_titles function, where we create a dictionary called data and then fill each key with the information we want. Again, all these selectors were found using Chrome DevTools as demonstrated before and then tested with the Scrapy shell.
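A sketch of that function; every selector here is an assumption based on the current Wikipedia film-article markup, so re-check each one in the Scrapy shell before relying on it:

```python
    def parse_titles(self, response):
        data = {}
        # The film's title sits in the italicised first-level heading
        data["title"] = response.css("h1#firstHeading i::text").extract()
        # The remaining fields live in the film's infobox table; XPath's contains()
        # finds the right rows by their labels
        data["director"] = response.xpath(
            "//tr[contains(., 'Directed by')]//a/text()").extract()
        data["starring"] = response.xpath(
            "//tr[contains(., 'Starring')]//a/text()").extract()
        data["releasedate"] = response.xpath(
            "//tr[contains(., 'Release date')]//li/text()").extract()
        data["runtime"] = response.xpath(
            "//tr[contains(., 'Running time')]/td/text()").extract()
        # Return the data dictionary back to Scrapy to store
        yield data
```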
The final line returns the data dictionary back to Scrapy to store.
Complete code:
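A minimal sketch of the whole spider, with the same caveats about the selectors:

```python
import scrapy
import time


class OscarsSpider(scrapy.Spider):
    name = "oscars"
    allowed_domains = ["en.wikipedia.org"]
    start_urls = ["https://en.wikipedia.org/wiki/Academy_Award_for_Best_Picture"]

    def parse(self, response):
        for href in response.css(
                r"tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
            url = response.urljoin(href)
            time.sleep(5)
            yield scrapy.Request(url, callback=self.parse_titles)

    def parse_titles(self, response):
        data = {}
        data["title"] = response.css("h1#firstHeading i::text").extract()
        data["director"] = response.xpath("//tr[contains(., 'Directed by')]//a/text()").extract()
        data["starring"] = response.xpath("//tr[contains(., 'Starring')]//a/text()").extract()
        data["releasedate"] = response.xpath("//tr[contains(., 'Release date')]//li/text()").extract()
        data["runtime"] = response.xpath("//tr[contains(., 'Running time')]/td/text()").extract()
        yield data
```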
Sometimes we will want to use proxies as websites will try to block our attempts at scraping.
To do this, we only need to change a few things. Using our example, in our def parse(), we need to change it to the following:
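A sketch that leans on Scrapy’s built-in HttpProxyMiddleware by setting the proxy key in each request’s meta dict; the proxy address is a placeholder:

```python
    def parse(self, response):
        for href in response.css(
                r"tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
            url = response.urljoin(href)
            time.sleep(5)
            yield scrapy.Request(
                url,
                callback=self.parse_titles,
                # Route this request through a proxy server (placeholder address)
                meta={"proxy": "http://yourproxy.example.com:8080"},
            )
```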
This will route the requests through your proxy server.
Deployment And Logging, Show How To Actually Manage A Spider In Production
Now it is time to run our spider. To make Scrapy start scraping and then output to a CSV file, enter the following into your command prompt:
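```
scrapy crawl oscars -o oscars.csv
```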
You will see a large output, and after a couple of minutes, it will complete and you will have a CSV file sitting in your project folder.
Compiling Results, Show How To Use The Results Compiled In The Previous Steps
When you open the CSV file, you will see all the information we wanted (sorted out by columns with headings). It’s really that simple.
With data scraping, we can obtain almost any custom dataset that we want, as long as the information is publicly available. What you want to do with this data is up to you. This skill is extremely useful for doing market research, keeping information on a website updated, and many other things.
It’s fairly easy to set up your own web scraper to obtain custom datasets on your own, however, always remember that there might be other ways to obtain the data that you need. Businesses invest a lot into providing the data that you want, so it’s only fair that we respect their terms and conditions.
Additional Resources For Learning More About Scrapy And Web Scraping In General
- “The 10 Best Data Scraping Tools and Web Scraping Tools,” Scraper API
- “5 Tips For Web Scraping Without Getting Blocked or Blacklisted,” Scraper API
- Parsel, a Python library for extracting data from HTML using XPath and CSS selectors (with support for regular expressions).