Web Scraping in Golang with Colly

Sora Fujimoto
AI Solutions Architect
04-Jul-2024

Web scraping is a method used to extract data from websites. In Golang, the Colly library is a popular tool for web scraping due to its simplicity and powerful features. This guide will take you through setting up a Golang project with Colly, building a basic scraper, handling complex data extraction scenarios, and optimizing your scrapers with concurrent requests.
Setting Up Your Golang Project
Before you begin, ensure you have Go installed on your system. Initialize your project and fetch the Colly package with these commands:
```bash
go mod init my_scraper
go get -u github.com/gocolly/colly
```
This sets up your project directory and installs the Colly package. (Newer Colly releases are also published under the module path github.com/gocolly/colly/v2; the examples below stick with the v1 import path shown here.)
Building a Basic Scraper
Let's create a basic scraper to extract all links from a specific Wikipedia page.
- Create a new file main.go and add the following code:
```go
package main

import (
	"fmt"
	"log"

	"github.com/gocolly/colly"
)

func main() {
	// Restrict the collector to the English Wikipedia domain.
	c := colly.NewCollector(
		colly.AllowedDomains("en.wikipedia.org"),
	)

	// Collect the href attribute of every link inside the article body.
	c.OnHTML(".mw-parser-output", func(e *colly.HTMLElement) {
		links := e.ChildAttrs("a", "href")
		fmt.Println(links)
	})

	if err := c.Visit("https://en.wikipedia.org/wiki/Web_scraping"); err != nil {
		log.Fatal(err)
	}
}
```
This code initializes a new Colly collector restricted to en.wikipedia.org, then sets up a callback to find and print all links within the .mw-parser-output div of the page.
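The href values collected this way are mostly relative paths such as /wiki/Data_mining. If you need absolute URLs, Colly's HTMLElement exposes e.Request.AbsoluteURL for resolving a link against the current page. As a minimal sketch, the callback below is a drop-in replacement for the OnHTML block in the program above:

```go
// Fire once per link and resolve each relative href against the page URL.
c.OnHTML(".mw-parser-output a[href]", func(e *colly.HTMLElement) {
	absolute := e.Request.AbsoluteURL(e.Attr("href"))
	if absolute != "" {
		fmt.Println(absolute)
	}
})
```

Matching a[href] directly also skips anchor tags that have no href attribute at all.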
Scraping Table Data
For more complex tasks like scraping table data and writing it to a CSV file, you can use the encoding/csv package in Go:
- Extend main.go with the following code to scrape table data:
```go
package main

import (
	"encoding/csv"
	"log"
	"os"

	"github.com/gocolly/colly"
)

func main() {
	fName := "data.csv"
	file, err := os.Create(fName)
	if err != nil {
		log.Fatalf("could not create file %q: %v", fName, err)
	}
	defer file.Close()

	writer := csv.NewWriter(file)
	defer writer.Flush()

	c := colly.NewCollector()

	// Walk every row of each wikitable and write its cells as a CSV record.
	c.OnHTML("table.wikitable", func(e *colly.HTMLElement) {
		e.ForEach("tr", func(_ int, row *colly.HTMLElement) {
			rowData := []string{}
			row.ForEach("td", func(_ int, cell *colly.HTMLElement) {
				rowData = append(rowData, cell.Text)
			})
			// Header rows contain only <th> cells, so skip empty records.
			if len(rowData) > 0 {
				writer.Write(rowData)
			}
		})
	})

	if err := c.Visit("https://en.wikipedia.org/wiki/List_of_programming_languages"); err != nil {
		log.Fatal(err)
	}
}
```
This script scrapes table data from a Wikipedia page and writes it to data.csv.
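Note that the callback above collects only td cells, so header rows (which use th) come through empty and are skipped. If you want the column headers in the CSV as well, one small variant (a sketch, not the only approach) is to match both cell types in document order:

```go
// Capture header (<th>) and data (<td>) cells in a single pass.
c.OnHTML("table.wikitable", func(e *colly.HTMLElement) {
	e.ForEach("tr", func(_ int, row *colly.HTMLElement) {
		rowData := []string{}
		// The "th, td" selector matches both cell types in document order.
		row.ForEach("th, td", func(_ int, cell *colly.HTMLElement) {
			rowData = append(rowData, cell.Text)
		})
		if len(rowData) > 0 {
			writer.Write(rowData)
		}
	})
})
```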
Making Concurrent Requests
To speed up scraping, you can make concurrent requests using Go's goroutines. Here's how you can scrape multiple pages concurrently:
```go
package main

import (
	"fmt"
	"log"
	"sync"

	"github.com/gocolly/colly"
)

// scrape visits a single URL with its own collector and prints the page title.
func scrape(url string, wg *sync.WaitGroup) {
	defer wg.Done()

	c := colly.NewCollector()
	c.OnHTML("title", func(e *colly.HTMLElement) {
		fmt.Println("Title found:", e.Text)
	})

	if err := c.Visit(url); err != nil {
		log.Println("visit failed:", err)
	}
}

func main() {
	var wg sync.WaitGroup
	urls := []string{
		"https://en.wikipedia.org/wiki/Web_scraping",
		"https://en.wikipedia.org/wiki/Data_mining",
		"https://en.wikipedia.org/wiki/Screen_scraping",
	}

	// Launch one goroutine per URL and wait for all of them to finish.
	for _, url := range urls {
		wg.Add(1)
		go scrape(url, &wg)
	}
	wg.Wait()
}
```
In this example, we define a scrape function that takes a URL and a wait group as arguments. The function initializes a Colly collector, sets up a callback to print the title of the page, and visits the URL. The main function creates a wait group, iterates over a list of URLs, and starts a goroutine for each URL to scrape concurrently.
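Creating one collector per goroutine works, but Colly also ships built-in concurrency that reuses a single collector. Here is a minimal sketch using colly.Async and a LimitRule (the Parallelism value of 2 is an arbitrary choice for illustration):

```go
package main

import (
	"fmt"
	"log"

	"github.com/gocolly/colly"
)

func main() {
	// Async(true) makes Visit enqueue requests instead of blocking on them.
	c := colly.NewCollector(colly.Async(true))

	// Cap concurrent requests to matching domains at 2.
	if err := c.Limit(&colly.LimitRule{DomainGlob: "*wikipedia.org", Parallelism: 2}); err != nil {
		log.Fatal(err)
	}

	c.OnHTML("title", func(e *colly.HTMLElement) {
		fmt.Println("Title found:", e.Text)
	})

	urls := []string{
		"https://en.wikipedia.org/wiki/Web_scraping",
		"https://en.wikipedia.org/wiki/Data_mining",
		"https://en.wikipedia.org/wiki/Screen_scraping",
	}
	for _, url := range urls {
		c.Visit(url)
	}

	// Wait blocks until every queued request has been processed.
	c.Wait()
}
```

The LimitRule caps in-flight requests per matching domain, which is gentler on the target site than launching unbounded goroutines.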
By following these steps, you can build robust web scrapers in Golang using Colly, handle various scraping scenarios, and optimize performance with concurrent requests. For more detailed tutorials and advanced usage, check out resources on web scraping with Go and Colly.
Other Web Scraping Libraries for Go
In addition to Colly, there are several other excellent libraries for web scraping in Golang:
- GoQuery: Offers a jQuery-like syntax and feature set, so DOM traversal and data extraction feel immediately familiar if you have used jQuery (see the sketch after this list).
- Ferret: A portable, extensible, and fast web scraping system designed to simplify data extraction from the web. Ferret focuses on data extraction using a unique declarative language.
- Selenium: A browser automation framework well suited to scraping dynamic, JavaScript-rendered content. It has no official Go bindings, but a community port makes it usable in Golang projects.
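To give a feel for the GoQuery style mentioned above, here is a minimal sketch that fetches the same Wikipedia page with net/http and queries it (assuming goquery has been fetched with go get github.com/PuerkitoBio/goquery):

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	resp, err := http.Get("https://en.wikipedia.org/wiki/Web_scraping")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Parse the response body into a queryable document.
	doc, err := goquery.NewDocumentFromReader(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// jQuery-style selection: print the href of each link in the article body.
	doc.Find(".mw-parser-output a[href]").Each(func(_ int, s *goquery.Selection) {
		if href, ok := s.Attr("href"); ok {
			fmt.Println(href)
		}
	})
}
```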
Conclusion
Web scraping is a powerful and essential skill for efficiently extracting data from websites. Using Golang and the Colly library, you can build robust scrapers that handle various data extraction scenarios, from collecting simple links to extracting complex table data and optimizing performance with concurrent requests.
In this guide, you learned how to:
- Set up a Golang project with the Colly library.
- Build a basic scraper to extract links from a webpage.
- Handle more complex data extraction, such as scraping table data and writing it to a CSV file.
- Optimize your scrapers by making concurrent requests.
By following these steps, you can create effective and efficient web scrapers in Golang, leveraging the simplicity and powerful features of Colly. For more advanced usage and detailed tutorials, explore additional resources on web scraping with Go and Colly.
FAQ
1. Is Colly suitable for beginners learning web scraping in Golang?
Yes. Colly is designed to be simple and beginner-friendly while still offering powerful features like DOM parsing, request handling, callbacks, and concurrency. Even new Go developers can quickly build a functional scraper with just a few lines of code.
2. Can Colly scrape structured content such as tables or lists?
Absolutely. Colly allows selection of specific HTML nodes and attributes, which makes it easy to extract tables, lists, links, and other structured elements. You can also store results directly into files like CSV or JSON using Go's standard libraries.
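For the JSON case, a minimal sketch using only encoding/json (the rows value is hypothetical stand-in data; in practice it would be filled by your Colly callbacks):

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// Hypothetical scraped rows; in practice these come from Colly callbacks.
	rows := [][]string{
		{"Language", "First appeared"},
		{"Go", "2009"},
	}

	out, err := os.Create("data.json")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// Encode the rows as JSON using only the standard library.
	if err := json.NewEncoder(out).Encode(rows); err != nil {
		log.Fatal(err)
	}
}
```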
3. How can I speed up my Colly web scraper?
You can use Go's goroutines to process multiple pages in parallel. By launching scrapers concurrently and synchronizing them with a wait group, scraping performance increases significantly, especially for large datasets or multi-URL crawling tasks.