Project author: EugenCazacu

Project description: Cheap modular C++ crawler library
Language: C++
Repository: git://github.com/EugenCazacu/CheapCrawler.git
Created: 2019-05-01T11:54:18Z
Project community: https://github.com/EugenCazacu/CheapCrawler

License: MIT License

CheapCrawler

Cheap modular c++ crawler library

This library, although STABLE, is a WORK IN PROGRESS. Feel free to use it, ask questions about it, and contribute.

This C++ library is designed to run as many simultaneous downloads as is physically
possible on a single machine. It is implemented with Boost.Asio and libcurl's multi_socket interface.

A sample driver executable is available that uses the library to download a list of URLs.
You can use it as a guide for integrating CheapCrawler into your own project.

Spec

The library has two main components: the crawler and the downloader.

Some utilities are also available:

  • A logging library
  • A task system
  • An exception handler
  • Other goodies (check src/utils).

crawler

Flow:

  1. While keepCrawling():
  2. Pull a URL list from the crawling dispatcher
  3. Prepare a robots.txt download for each host and protocol
  4. Try to download each robots.txt
  5. Log each robots.txt download result
  6. If there is no robots.txt => crawl all the URLs for that host
  7. If the download failed => set the result to download error, don't crawl
  8. Else (robots.txt downloaded successfully):
  9. (NOT IMPLEMENTED YET) For all URLs forbidden by robots.txt:
  10. => remove them from the download queue
  11. Download the allowed URLs, with a default delay of 2 seconds between download requests to the same host
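
To make the flow concrete, here is a condensed sketch of that loop in C++. The names (Url, RobotsResult, the callback parameters) are illustrative stand-ins and not CheapCrawler's actual interface; for brevity the sketch also checks robots.txt per URL instead of once per host and protocol.

```cpp
#include <functional>
#include <string>
#include <vector>

// Illustrative types only; the real crawler uses its own URL and result types.
struct Url { std::string host, path; };

enum class RobotsResult { Missing, DownloadError, Fetched };

// Steps 1-11 from the flow above, with the collaborators passed in as callbacks.
void crawl(const std::function<bool()>& keepCrawling,
           const std::function<std::vector<Url>()>& pullUrlList,
           const std::function<RobotsResult(const std::string&)>& fetchRobotsTxt,
           const std::function<bool(const Url&)>& robotsAllows,
           const std::function<void(const Url&)>& reportDownloadError,
           const std::function<void(const std::vector<Url>&)>& downloadWithPerHostDelay) {
  while(keepCrawling()) {                                   // step 1
    std::vector<Url> allowed;
    for(const Url& url : pullUrlList()) {                   // step 2
      switch(fetchRobotsTxt(url.host)) {                    // steps 3-5
        case RobotsResult::Missing:                         // step 6: no robots.txt => crawl
          allowed.push_back(url);
          break;
        case RobotsResult::DownloadError:                   // step 7: report error, don't crawl
          reportDownloadError(url);
          break;
        case RobotsResult::Fetched:                         // step 8
          if(robotsAllows(url)) {                           // steps 9-10: drop forbidden URLs
            allowed.push_back(url);
          }
          break;
      }
    }
    downloadWithPerHostDelay(allowed);                      // step 11: polite per-host downloads
  }
}
```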

Some additional specifications:

  • Stop and exit when keepCrawling() returns false
  • Pull URL lists from the crawling dispatcher
  • Calculate the overall robots.txt URL list for each dispatched bunch of URLs
  • Read robots.txt before each call to the dispatcher (robots.txt results are not persisted)
  • One download queue per host, with a 2-second delay between downloads to the same host (see the sketch after this list)
  • NOT YET IMPLEMENTED: filter URLs by the robots.txt result
  • Call the download-result callback for each finished download
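
The per-host delay rule can be illustrated with a small, blocking helper. This is only a sketch and not CheapCrawler's API; the real downloader is asynchronous and would schedule a timer instead of sleeping.

```cpp
#include <chrono>
#include <map>
#include <string>
#include <thread>

// Enforces a minimum delay between consecutive requests to the same host.
class PerHostDelay {
public:
  explicit PerHostDelay(std::chrono::seconds delay = std::chrono::seconds{2}) : delay_(delay) {}

  // Blocks until at least `delay_` has passed since the previous request to `host`.
  void waitBeforeRequest(const std::string& host) {
    const auto it = lastRequest_.find(host);
    if(it != lastRequest_.end()) {
      std::this_thread::sleep_until(it->second + delay_);
    }
    lastRequest_[host] = std::chrono::steady_clock::now();
  }

private:
  std::chrono::seconds delay_;
  std::map<std::string, std::chrono::steady_clock::time_point> lastRequest_;
};
```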

downloader

Implements the crawler's Downloader interface in a performance-oriented way using libcurl and Boost.Asio.
Multiple simultaneous downloads are possible while using a fixed number of threads.
It should, in theory, support several hundred simultaneous downloads.
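
For orientation, here is a small standalone example that drives several simultaneous transfers from one thread with libcurl's multi interface. It is not the library's code: the downloader wires the multi_socket callbacks into Boost.Asio instead of using the simple curl_multi_poll loop shown here (curl_multi_poll requires libcurl 7.66 or newer).

```cpp
#include <curl/curl.h>
#include <cstdio>
#include <vector>

// Discard the response body; a real client would buffer or parse it.
static size_t discardBody(char*, size_t size, size_t nmemb, void*) {
  return size * nmemb;
}

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURLM* multi = curl_multi_init();

  const char* urls[] = {"http://example.com/", "http://example.org/"};
  std::vector<CURL*> handles;
  for(const char* url : urls) {
    CURL* easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, url);
    curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, discardBody);
    curl_multi_add_handle(multi, easy);
    handles.push_back(easy);
  }

  // Drive all transfers from one thread until none are still running.
  int stillRunning = 0;
  do {
    curl_multi_perform(multi, &stillRunning);
    curl_multi_poll(multi, nullptr, 0, 1000, nullptr);
  } while(stillRunning > 0);

  // Report a result per transfer, mirroring "call the download-result callback
  // for each finished download" from the crawler spec.
  int msgsLeft = 0;
  while(CURLMsg* msg = curl_multi_info_read(multi, &msgsLeft)) {
    if(msg->msg == CURLMSG_DONE) {
      std::printf("transfer finished: %s\n", curl_easy_strerror(msg->data.result));
    }
  }

  for(CURL* easy : handles) {
    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
  }
  curl_multi_cleanup(multi);
  curl_global_cleanup();
  return 0;
}
```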

Compiling

I developed this library on macOS and haven't tested it on anything else, but it should be
straightforward to build on Linux as well.

Prerequisites

To be able to use CheapCrawler you will need the following:

  • A C++ compiler and make
  • CMake
  • Conan (it pulls in the dependencies, including Boost and libcurl)
  • Git (to clone the repository or add it as a subtree)

Build separately

Here is how you can build the library and the driver executable separately.

  1. git clone https://github.com/EugenCazacu/CheapCrawler.git
  2. mkdir build && cd build
  3. conan install ../CheapCrawler
  4. cmake -DCHEAP_CRAWLER_RUN_TESTS=ON -DCMAKE_BUILD_TYPE=Release ../CheapCrawler
  5. make
  6. ./bin/crawlerDriver -u "http://example.com"

Integrate into a project

Here is how I integrate CheapCrawler into my project. There is definitely room for improvement here.

  1. git remote add CheapCrawler https://github.com/EugenCazacu/CheapCrawler.git
  2. git subtree add --squash --prefix=external/CheapCrawler CheapCrawler master

To update to a new CheapCrawler version:
git subtree pull --squash --prefix=external/CheapCrawler CheapCrawler master

After adding the CheapCrawler directory to your cmake build, you can use the crawlerLibrary target.
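
For example, a minimal sketch (myApp and main.cpp are placeholders, and depending on how the Conan dependencies are handled in the parent project, additional setup may be needed):

```cmake
# Assumes CheapCrawler was added under external/CheapCrawler as shown above.
add_subdirectory(external/CheapCrawler)

add_executable(myApp main.cpp)
target_link_libraries(myApp PRIVATE crawlerLibrary)
```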

Future development

I am developing this library along with another private project. I will be adding features and fixing issues
as I need them. If you would like to use this library and need support, feel free to contact me
and I will try to help you. This library is a work in progress and if you find any problems with it,
please submit an issue. If you want to contribute, please get in contact with me beforehand to make sure
I will be able to accept your contribution.

Here is a list of additional features I plan and hope to implement:

  • Filter downloads based on robots.txt
  • Implement a download bottleneck detector that automatically increases/decreases the number of simultaneous downloads
  • Ask for more downloads when the downloader starts to starve
  • Go through each HTTP response code and decide whether it applies only to the downloaded page or to the whole queue for that host,
    e.g. a 4xx might mean too many requests, in which case the crawler should slow down.
  • Handle the 429 (Too Many Requests) HTTP response
  • Handle other error responses
  • Save the response headers when a download fails