A Beginner’s Guide to Learn Web Scraping with Python!

Last updated on Apr 29,2024 1.3M Views
Tech Enthusiast in Blockchain, Hadoop, Python, Cyber-Security, Ethical Hacking. Interested in anything and everything about Computers.


Web Scraping with Python

Let’s say you need to scrape a lot of information from the web, and time is of the essence. Rather than manually visiting each website and copying out the information, you can use “Web Scraping” to automate the task. Web scraping simply makes this process faster and easier.


In this article on Web Scraping with Python, you will learn about web scraping in brief and see how to extract data from a website with a demonstration. I will be covering the following topics:

Why is Web Scraping Used?

Web scraping is used to collect large amounts of information from websites. But why does someone need to collect such large amounts of data from websites? To answer this, let’s look at some common applications of web scraping:

  1. Price Comparison: collecting product data from shopping sites to compare prices.
  2. Research and Development: gathering datasets (statistics, general information) for analysis, surveys, or R&D.
  3. Social Media Scraping: collecting data from social media sites to find out what’s trending.
  4. Job Listings: aggregating job openings from multiple websites into one place.


What is Web Scraping?

Web scraping is an automated process for gathering large amounts of information from the World Wide Web. The information found on websites is unstructured, and web scraping is a useful tool for collecting it and storing it in a more organized fashion. There are different ways to scrape websites, such as online services, application programming interfaces (APIs), or writing your own code. This article will show how to perform web scraping using Python.

Is Web Scraping Legal?

Talking about whether web scraping is legal or not, some websites allow web scraping and some don’t. To know whether a website allows web scraping or not, you can look at the website’s “robots.txt” file. You can find this file by appending “/robots.txt” to the URL that you want to scrape. For this example, I am scraping the Flipkart website, so to see the “robots.txt” file, the URL is www.flipkart.com/robots.txt.
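The rules in robots.txt can also be checked programmatically with Python’s built-in `urllib.robotparser` module. Here is a small sketch; the robots.txt body and URLs below are made up for illustration (in practice you would point the parser at the real file with `set_url()` and `read()`):

```python
from urllib.robotparser import RobotFileParser

# A made-up robots.txt body for illustration
robots_lines = [
    "User-agent: *",
    "Disallow: /checkout/",
    "Allow: /",
]

rp = RobotFileParser()
rp.parse(robots_lines)

# Check whether a given path may be fetched by any crawler ("*")
print(rp.can_fetch("*", "https://www.example.com/laptops"))        # True
print(rp.can_fetch("*", "https://www.example.com/checkout/cart"))  # False
```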


Why is Python Good for Web Scraping?

Here is a list of features of Python that make it well suited for web scraping:

  1. Ease of Use: Python’s simple syntax means scrapers take less code and less time to write.
  2. Large Collection of Libraries: libraries such as Requests, BeautifulSoup, Selenium, and Pandas cover fetching, parsing, and processing scraped data.
  3. Dynamically Typed: you don’t have to declare variable types, which speeds up experimentation.
  4. Community: Python has one of the largest and most active communities, so help is easy to find when your scraper breaks.



How Do You Scrape Data From A Website?

When you run the code for web scraping, a request is sent to the URL that you have mentioned. In response, the server sends the data and allows you to read the HTML or XML page. The code then parses the HTML or XML page, finds the data, and extracts it.

To extract data using web scraping with python, you need to follow these basic steps:

  1. Find the URL that you want to scrape
  2. Inspect the page
  3. Find the data you want to extract
  4. Write the code
  5. Run the code and extract the data
  6. Store the data in the required format
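The steps above can be sketched end to end. The snippet below parses a hard-coded HTML string in place of a live page (the markup and class names are invented for illustration); in a real scraper, steps 1–2 would fetch and inspect the site’s actual HTML:

```python
from bs4 import BeautifulSoup

# Stand-in for the HTML a server would return
html = """
<html><body>
  <div class="product"><span class="name">Laptop A</span><span class="price">Rs. 49,999</span></div>
  <div class="product"><span class="name">Laptop B</span><span class="price">Rs. 61,490</span></div>
</body></html>
"""

# Steps 3-5: find the tags that hold the data and extract it
soup = BeautifulSoup(html, 'html.parser')
rows = []
for product in soup.find_all('div', class_='product'):
    rows.append({
        'name': product.find('span', class_='name').get_text(),
        'price': product.find('span', class_='price').get_text(),
    })
print(rows)
```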

Now let us see how to extract data from the Flipkart website using Python.


Libraries used for Web Scraping 

As we know, Python has various applications, and there are different libraries for different purposes. In our demonstration, we will be using the following libraries:

  * Selenium: a web testing library, used here to automate browser activities.
  * BeautifulSoup: a Python package for parsing HTML and XML documents; it creates parse trees that make it easy to extract data.
  * Pandas: a library for data manipulation and analysis, used here to store the extracted data in the required format.


Web Scraping Example : Scraping Flipkart Website

Pre-requisites:

  * Python installed (the demo below uses Python on Ubuntu)
  * The Selenium, BeautifulSoup, and Pandas libraries installed (e.g., via pip)
  * The Google Chrome browser and a matching chromedriver binary

Let’s get started!

Step 1: Find the URL that you want to scrape

For this example, we are going to scrape the Flipkart website to extract the Price, Name, and Rating of Laptops. The URL for this page is https://www.flipkart.com/laptops/~buyback-guarantee-on-laptops-/pr?sid=6bo%2Cb5g&uniqBStoreParam1=val1&wid=11.productCard.PMU_V2.

Step 2: Inspecting the Page

The data is usually nested in tags. So, we inspect the page to see under which tag the data we want to scrape is nested. To inspect the page, just right-click on the element and click “Inspect”.

When you click “Inspect”, you will see a “Browser Inspector Box” open.

Step 3: Find the data you want to extract

Let’s extract the Price, Name, and Rating, each of which is nested in its own “div” tag.


Step 4: Write the code

First, let’s create a Python file. To do this, open the terminal in Ubuntu and type gedit <your file name> with .py extension.

I am going to name my file “web-s”. Here’s the command:

gedit web-s.py

Now, let’s write our code in this file. 

First, let us import all the necessary libraries:

from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd

To configure the webdriver to use the Chrome browser, we have to set the path to chromedriver (recent versions of Selenium take the path via a Service object):

from selenium.webdriver.chrome.service import Service

# Path to your chromedriver binary; adjust to your installation
driver = webdriver.Chrome(service=Service("/usr/lib/chromium-browser/chromedriver"))

Refer to the code below to open the URL:

products = []  # List to store the name of the product
prices = []    # List to store the price of the product
ratings = []   # List to store the rating of the product
driver.get("https://www.flipkart.com/laptops/~buyback-guarantee-on-laptops-/pr?sid=6bo%2Cb5g&uniqBStoreParam1=val1&wid=11.productCard.PMU_V2")

Now that we have written the code to open the URL, it’s time to extract the data from the website. As mentioned earlier, the data we want to extract is nested in <div> tags. So, I will find the div tags with those respective class names, extract the data, and store it in a variable. Refer to the code below:

content = driver.page_source
soup = BeautifulSoup(content, 'html.parser')
# Note: these class names are specific to Flipkart's markup at the time of
# writing and may change.
for a in soup.find_all('a', href=True, attrs={'class': '_31qSD5'}):
    name = a.find('div', attrs={'class': '_3wU53n'})
    price = a.find('div', attrs={'class': '_1vC4OE _2rQ-NK'})
    rating = a.find('div', attrs={'class': 'hGSR34 _2beYZw'})
    products.append(name.text)
    prices.append(price.text)
    ratings.append(rating.text)

Step 5: Run the code and extract the data

To run the code, use the below command:

python web-s.py

Step 6: Store the data in a required format

After extracting the data, you might want to store it in a particular format, which varies depending on your requirements. For this example, we will store the extracted data in CSV (Comma-Separated Values) format. To do this, I will add the following lines to my code:

df = pd.DataFrame({'Product Name':products,'Price':prices,'Rating':ratings}) 
df.to_csv('products.csv', index=False, encoding='utf-8')

Now, I’ll run the whole code again.

A file named “products.csv” is created, and this file contains the extracted data.
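To double-check the result, you can read the CSV back with pandas. The product values below are invented stand-ins for whatever your scraper actually collected:

```python
import pandas as pd

# Stand-in data in place of the scraped lists
products = ['Laptop A', 'Laptop B']
prices = ['Rs. 49,999', 'Rs. 61,490']
ratings = ['4.4', '4.1']

df = pd.DataFrame({'Product Name': products, 'Price': prices, 'Rating': ratings})
df.to_csv('products.csv', index=False, encoding='utf-8')

# Read the file back to verify what was written
df2 = pd.read_csv('products.csv')
print(df2.shape)  # (2, 3)
```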

Scrape and Parse Text From Websites

To scrape and parse text from websites in Python, you can use the requests library to fetch the HTML content of the website and then use a parsing library like BeautifulSoup or lxml to extract the relevant text from the HTML. Here’s a step-by-step guide:

Step 1: Import necessary modules

import requests 
from bs4 import BeautifulSoup
import re

Step 2: Fetch the HTML content of the website using `requests`

 

url = 'https://example.com'  # Replace this with the URL of the website you want to scrape
response = requests.get(url)
# Check if the request was successful
if response.status_code == 200:
    html_content = response.content
else:
    print("Failed to fetch the website.")
    exit()

 

Step 3: Parse the HTML content using `BeautifulSoup`

# Parse the HTML content with BeautifulSoup
soup = BeautifulSoup(html_content, 'html.parser')

 
Step 4: Extract the text from the parsed HTML using string methods

# Find all the text elements (e.g., paragraphs, headings, etc.) you want to scrape
text_elements = soup.find_all(['p', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6', 'span'])
# Extract the text from each element and concatenate them into a single string
scraped_text = ' '.join(element.get_text() for element in text_elements)
print(scraped_text)

 

Step 5: Extract text from HTML using regular expressions

# A simple pattern that matches any HTML tag and removes it from the content
clean_text = re.sub(r'<[^>]+>', ' ', html_content.decode('utf-8'))
print(clean_text)

Note: The regular expression above is a simple pattern that matches any HTML tag and removes it from the HTML content. In real-world scenarios, you may need more complex regular expressions, depending on the structure of the HTML.

Check Your Understanding

Now that you have built your web scraper, you can use either the string-method approach or the regular-expression approach to extract text from websites. Remember to use web scraping responsibly and adhere to website policies and legal restrictions. Always review the website’s terms of service and robots.txt file before scraping any website. Additionally, excessive or unauthorized scraping may put a strain on the website’s server and is generally considered unethical.

Use an HTML Parser for Web Scraping in Python

Here are the steps to use an HTML parser like Beautiful Soup for web scraping in Python:

Step 1: Install Beautiful Soup

Make sure you have the Beautiful Soup library installed. If not, you can install it using `pip`:

pip install beautifulsoup4

Step 2: Create a BeautifulSoup Object

Import the necessary modules and create a BeautifulSoup object to parse the HTML content of the website:

from bs4 import BeautifulSoup
import requests

url = 'https://example.com'  # Replace with the URL of the website you want to scrape
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')

Step 3: Check Your Understanding

Now that you have a BeautifulSoup object (`soup`), you can use its various methods to extract specific data from the HTML. For example, you can use `soup.find()` to find the first occurrence of a specific HTML element, `soup.find_all()` to find all occurrences of an element, and `soup.select()` to use CSS selectors to extract elements.

Here’s an example of how to use `soup.find()` to extract the text of the first paragraph (`<p>`) tag:


# Find the first paragraph tag and extract its text
first_paragraph = soup.find('p').get_text()
print(first_paragraph)

You can explore more methods available in the BeautifulSoup library to extract data from the HTML content as needed for your web scraping task.
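As a quick sketch of those three methods on a toy snippet (the markup here is made up for illustration):

```python
from bs4 import BeautifulSoup

# A tiny HTML snippet to demonstrate find(), find_all(), and select()
html = "<div id='main'><p class='intro'>Hello</p><p>World</p></div>"
soup = BeautifulSoup(html, 'html.parser')

first = soup.find('p').get_text()                       # first <p>
all_ps = [p.get_text() for p in soup.find_all('p')]     # every <p>
by_css = soup.select('div#main p.intro')[0].get_text()  # CSS selector

print(first, all_ps, by_css)
```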


Interact With HTML Forms

Here are the steps to interact with HTML forms using MechanicalSoup in Python:

 

Step 1: Install MechanicalSoup

Ensure you have the MechanicalSoup library installed. If not, you can install it using `pip`:

 

pip install MechanicalSoup

 

Step 2: Create a Browser Object

Import the necessary modules and create a MechanicalSoup browser object to interact with the website.

 

 

import mechanicalsoup

Step 3: Submit a Form with MechanicalSoup

Create a browser object and use it to submit a form on a specific webpage.

 

 

# Create a MechanicalSoup browser object
browser = mechanicalsoup.StatefulBrowser()

# Navigate to the webpage with the form
url = 'https://example.com/form-page'  # Replace this with the URL of the webpage with the form
browser.open(url)

# Fill in the form fields
form = browser.select_form()  # Select the form on the webpage
form['username'] = 'your_username'  # Replace 'username' with the name attribute of the username input field
form['password'] = 'your_password'  # Replace 'password' with the name attribute of the password input field

# Submit the form
browser.submit_selected()

 

Step 4: Check Your Understanding

In this example, we used MechanicalSoup to create a browser object (`browser`) and navigate to a webpage with a form. We then selected the form using `browser.select_form()`, filled in the username and password fields using `form['username']` and `form['password']`, and finally submitted the form using `browser.submit_selected()`.

 

With these steps, you can interact with HTML forms programmatically. MechanicalSoup is a powerful tool for automating form submissions, web scraping, and interacting with websites that have forms.

Remember to use web scraping and form submission responsibly, adhere to website policies and legal restrictions, and review the website’s terms of service before interacting with its forms. Additionally, make sure that the website allows automated interactions and that you are not violating any usage policies. Unauthorized and excessive form submissions can cause strain on the website’s server and may be considered unethical.

Interact With Websites in Real Time

Interacting with websites in real-time typically involves performing actions on a webpage and receiving immediate feedback or responses without requiring a full page reload. There are several methods to achieve real-time interactions with websites, depending on the use case and technologies involved. Here are some common approaches:

  1. JavaScript and AJAX: JavaScript is a powerful client-side scripting language that allows you to manipulate the DOM (Document Object Model) of a webpage. AJAX (Asynchronous JavaScript and XML) enables you to make asynchronous HTTP requests to the server without reloading the entire page. With JavaScript and AJAX, you can perform actions like submitting forms, updating content, and fetching data from the server in real-time.
  2. WebSockets: WebSockets provide full-duplex communication channels over a single TCP connection, enabling real-time, bidirectional communication between a client and a server. WebSockets are ideal for applications that require continuous data streams or real-time updates, such as chat applications, live notifications, and collaborative platforms.
  3. Server-Sent Events (SSE): SSE is a standard that enables a server to send real-time updates to a client over an HTTP connection. Unlike WebSockets, SSE is unidirectional (server to client), making it suitable for scenarios where the client only needs to receive updates from the server without sending data back.
  4. WebRTC: Web Real-Time Communication (WebRTC) is a technology that allows peer-to-peer communication between browsers. It is commonly used for video conferencing, audio calls, and other real-time media interactions directly between users.
  5. Push Notifications: Push notifications are messages sent from a server to a client’s device, notifying them of new events or updates. They are commonly used on mobile devices and web browsers to deliver real-time alerts or updates to users, even when the application is not open.
  6. Single Page Applications (SPAs): SPAs are web applications that load a single HTML page and dynamically update the content as the user interacts with the page. SPAs use JavaScript frameworks like React, Angular, or Vue.js to manage state and handle real-time updates efficiently.

Overall, the choice of the approach for real-time interactions with websites depends on the specific requirements and technologies involved. JavaScript, AJAX, WebSockets, SSE, WebRTC, and push notifications are some of the common technologies used to enable real-time communication and interactivity on modern web applications.
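As a flavour of what one of these wire formats looks like, the sketch below parses the Server-Sent Events text format (`data: ...` lines separated by blank lines) from a hard-coded string; a real client would read this from an open HTTP connection instead:

```python
def parse_sse(stream_text):
    """Extract the data payloads from a Server-Sent Events stream."""
    events = []
    for line in stream_text.splitlines():
        if line.startswith('data:'):
            events.append(line[len('data:'):].strip())
    return events

# A hard-coded sample of the SSE wire format
sample = "data: price updated\n\ndata: new message\n\n"
print(parse_sse(sample))  # ['price updated', 'new message']
```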

I hope you guys enjoyed this article on “Web Scraping with Python” and that it has added value to your knowledge. Now go ahead and try web scraping for yourself, and experiment with different modules and applications of Python.

