web-crawler-plus

A micro-framework for crawling web pages using crawler configuration files. It can use MongoDB, Elasticsearch, or Solr to cache and store the extracted data.

Installation

In a virtualenv (create one with python3 -m venv if you need to):

pip3 install web-crawler-plus
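For completeness, a minimal sketch of the full setup: creating a virtual environment with the standard venv module, activating it, and installing the package inside it. The environment name crawler-env is arbitrary.

```shell
# Create an isolated virtual environment (assumes python3 is on PATH).
python3 -m venv crawler-env

# Activate it, so pip installs into the environment rather than system-wide.
. crawler-env/bin/activate

# Install the package inside the environment.
pip3 install web-crawler-plus
```

Deactivate the environment later with the `deactivate` shell function.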

Releases

Version        Released     Notes
0.9.14         2018-04-01
0.9.13         2018-03-31
0.9.12         2018-03-31
0.9.11         2018-03-18
0.9.10         2018-03-18
0.9.9          2018-03-18
0.9.8          2018-03-18
0.9.7a0        2018-03-18   pre-release
0.9.6.beta     2018-03-18   pre-release
0.9.5.beta     2018-03-18   pre-release
0.9.4.beta     2018-03-18   pre-release
0.9.3.beta     2018-03-18   pre-release
0.9.2          2018-03-18
0.9.1.beta1    2018-03-18   pre-release
0.9.1.beta     2018-03-18   pre-release
0.9.1a0        2018-03-18   pre-release
0.9.0.beta     2018-03-18   pre-release

No built files are listed for these releases for Bullseye (Python 3.9) or Bookworm (Python 3.11).


Page last updated 2025-07-17 23:21:30 UTC