
How to Use Requests (Python Library) for Web Scraping


Lucas Mitchell

Automation Engineer

04-Sep-2024


Web scraping allows you to extract data from websites, but websites may implement anti-scraping measures such as captchas or rate-limiting. In this guide, we’ll introduce the Requests library and provide an example of how to scrape data from a live website: Quotes to Scrape. Additionally, we'll explore how to handle reCAPTCHA v2 challenges using Requests and Capsolver.

What is Requests?

Requests is a simple and powerful Python library used to make HTTP requests. It's widely used for tasks like interacting with APIs, downloading web pages, and scraping data. With its user-friendly API, it's easy to send requests, handle sessions, and deal with HTTP headers and cookies.

Key Features:

  • Simple API for sending requests
  • Support for sessions and cookies
  • Automatic redirect handling and built-in proxy support
  • Custom headers for simulating browser requests
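
To see these features in action, here is a minimal sketch of a Session that persists cookies and custom headers across requests (httpbin.org is used purely as a demonstration endpoint):

python
import requests

# A Session reuses the underlying connection and persists cookies and
# default headers across every request made through it.
session = requests.Session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
})

# This endpoint sets a cookie and redirects; Requests follows the redirect automatically
response = session.get('https://httpbin.org/cookies/set/session_id/12345')
print(response.status_code)        # 200 if the request succeeded
print(session.cookies.get_dict())  # the cookie persists on the session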

Prerequisites

Install the Requests library, along with BeautifulSoup for parsing HTML in the example below, using pip:

bash
pip install requests beautifulsoup4

Example: Scraping Quotes to Scrape

Let’s start with a basic web scraping example where we’ll extract quotes from the Quotes to Scrape website using Requests.

python
import requests
from bs4 import BeautifulSoup

# URL of the page to scrape
url = 'http://quotes.toscrape.com/'

# Send a GET request to the page
response = requests.get(url)

# Check if the request was successful
if response.status_code == 200:
    # Parse the page content using BeautifulSoup
    soup = BeautifulSoup(response.text, 'html.parser')

    # Find all the quotes on the page
    quotes = soup.find_all('span', class_='text')

    # Print each quote
    for quote in quotes:
        print(quote.text)
else:
    print(f"Failed to retrieve the page. Status Code: {response.status_code}")

Explanation:

  • We send a GET request to the Quotes to Scrape website.
  • We use BeautifulSoup to parse the HTML content.
  • We extract and print all the quotes found on the page.
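
The same approach extends to every page of the site. Quotes to Scrape links each page to the next through a "Next" button, so the sketch below (assuming the site keeps its current li.next markup) follows that link until it disappears, collecting the author alongside each quote:

python
import requests
from bs4 import BeautifulSoup

base_url = 'http://quotes.toscrape.com'
next_page = '/'

# Keep following the "Next" link until the last page is reached
while next_page:
    response = requests.get(base_url + next_page)
    soup = BeautifulSoup(response.text, 'html.parser')

    # Each quote block contains the quote text and its author
    for quote in soup.find_all('div', class_='quote'):
        text = quote.find('span', class_='text').text
        author = quote.find('small', class_='author').text
        print(f"{text} - {author}")

    # The "Next" button lives inside an <li class="next"> element
    next_link = soup.find('li', class_='next')
    next_page = next_link.find('a')['href'] if next_link else None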

How to Solve reCAPTCHA v2 with Requests

Some websites, however, employ reCAPTCHA to block automated access, so the challenge has to be solved before the content can be retrieved. Using Capsolver alongside Requests, we can automate the captcha-solving step.

Prerequisites

Install the Capsolver library (and Requests, if it is not installed already):

bash
pip install capsolver requests

Example: Solving reCAPTCHA v2

Below is a sample script that solves reCAPTCHA v2 challenges using Capsolver and sends a request with the solved captcha token:

python
import capsolver
import requests

# Consider using environment variables for sensitive information
PROXY = "http://username:password@host:port"
capsolver.api_key = "Your Capsolver API Key"
PAGE_URL = "https://example.com"
PAGE_KEY = "Your-Site-Key"

def solve_recaptcha_v2(url, key):
    solution = capsolver.solve({
        "type": "ReCaptchaV2Task",
        "websiteURL": url,
        "websiteKey": key,
        "proxy": PROXY
    })
    return solution['solution']['gRecaptchaResponse']

def main():
    print("Solving reCaptcha v2")
    solution = solve_recaptcha_v2(PAGE_URL, PAGE_KEY)
    print("Solution: ", solution)

    # Headers to simulate browser
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
    }

    # Data payload with the captcha solution
    data = {
        'g-recaptcha-response': solution
    }

    # POST the solved token to the target page (forms expect it in the request body)
    response = requests.post(PAGE_URL, headers=headers, data=data, proxies={"http": PROXY, "https": PROXY})

    # Check the response status and print the content if successful
    if response.status_code == 200:
        print("Successfully bypassed captcha and fetched the page!")
        print(response.text)
    else:
        print(f"Failed to fetch the page. Status Code: {response.status_code}")

if __name__ == "__main__":
    main()

Explanation:

  1. Capsolver API: The solve_recaptcha_v2 function sends the site’s key and URL to Capsolver, along with proxy information, to obtain a solved captcha token.
  2. Sending the request: Once the captcha is solved, the g-recaptcha-response token is placed in the request body and POSTed with custom headers to the target URL.
  3. Simulating browser requests: We use a custom User-Agent header to avoid detection as a bot.
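
One practical detail: the page that serves the captcha often sets session cookies, and the server may expect those same cookies when the token comes back. A requests.Session keeps them for you. Below is a minimal sketch reusing the placeholder values from the script above; the /submit path is hypothetical and stands in for whatever endpoint the target form actually posts to:

python
import requests

# Placeholder values, matching the script above
PROXY = "http://username:password@host:port"
PAGE_URL = "https://example.com"
solution = "gRecaptchaResponse token returned by Capsolver"
proxies = {"http": PROXY, "https": PROXY}

session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'})

# Loading the page first stores any session cookies tied to the captcha
session.get(PAGE_URL, proxies=proxies)

# Submit the solved token; "/submit" is only a placeholder endpoint
response = session.post(PAGE_URL + "/submit",
                        data={'g-recaptcha-response': solution},
                        proxies=proxies)
print(response.status_code)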

Web Scraping Best Practices

When web scraping, it is essential to be ethical and follow best practices:

  • Respect robots.txt: Always check the website's robots.txt to ensure scraping is permitted.
  • Rate Limiting: Introduce delays between requests to avoid overwhelming the website and reduce the risk of getting blocked.
  • Use Proxies: Rotate proxies to prevent IP blocks, especially when scraping at scale.
  • Spoof Headers: Simulate browser behavior by using custom headers like User-Agent.
  • Match the TLS fingerprint: anti-bot systems can fingerprint the TLS handshake, so use a client or configuration whose TLS signature resembles a real browser.
  • Keep headers consistent: send the header names and values a real Chrome of the version declared in your User-Agent would send, and keep them in the same order, so your requests do not stand out. A short sketch combining several of these practices follows this list.
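
As a rough illustration of rate limiting, proxy rotation, and header spoofing working together, here is a minimal sketch (the proxy addresses are placeholders, and the target URLs simply reuse Quotes to Scrape):

python
import random
import time
import requests

# Placeholder pools; swap in your own proxies and target URLs
PROXIES = [
    "http://user:pass@proxy1:8080",
    "http://user:pass@proxy2:8080",
]
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36",
]
urls = [f"http://quotes.toscrape.com/page/{i}/" for i in range(1, 4)]

for url in urls:
    proxy = random.choice(PROXIES)                         # rotate proxies
    headers = {"User-Agent": random.choice(USER_AGENTS)}   # spoof headers

    try:
        response = requests.get(url, headers=headers,
                                proxies={"http": proxy, "https": proxy},
                                timeout=10)
        print(url, response.status_code)
    except requests.RequestException as exc:
        print(f"Request to {url} failed: {exc}")

    # Pause between requests so the site is not overwhelmed
    time.sleep(random.uniform(1, 3))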

Conclusion

The Requests library offers an easy and efficient way to scrape websites, while handling advanced scenarios such as reCAPTCHA can be achieved with Capsolver. Always ensure your scraping activities comply with the website’s terms of service and legal guidelines.

Happy scraping!

Compliance Disclaimer: The information provided on this blog is for informational purposes only. CapSolver is committed to compliance with all applicable laws and regulations. The use of the CapSolver network for illegal, fraudulent, or abusive activities is strictly prohibited and will be investigated. Our captcha-solving solutions enhance user experience while ensuring 100% compliance in helping solve captcha difficulties during public data crawling. We encourage responsible use of our services. For more information, please visit our Terms of Service and Privacy Policy.
