web-crawler-plus
A micro-framework for crawling web pages driven by crawler config files. It can use MongoDB, Elasticsearch, and Solr to cache and store the extracted data.
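The package's own API is not documented on this page, so the sketch below is only a rough illustration of the config-driven idea: fetch the URLs listed in a crawler config and cache each response in MongoDB. The `config` structure, the `crawl` function, and the database/collection names are assumptions made for illustration, not web-crawler-plus's actual interface.

```python
# Hypothetical illustration of a config-driven crawl with a MongoDB cache.
# None of these names come from web-crawler-plus itself.
from urllib.request import urlopen

from pymongo import MongoClient

# A minimal "crawler config": which pages to fetch and where to store them.
config = {
    "start_urls": ["https://example.com/"],
    "mongo_uri": "mongodb://localhost:27017",
    "database": "crawl_cache",
    "collection": "pages",
}

def crawl(cfg):
    pages = MongoClient(cfg["mongo_uri"])[cfg["database"]][cfg["collection"]]
    for url in cfg["start_urls"]:
        cached = pages.find_one({"_id": url})
        if cached:  # serve from the cache if this URL was already fetched
            print(f"cache hit: {url}")
            continue
        html = urlopen(url).read().decode("utf-8", errors="replace")
        pages.update_one(  # upsert the raw page so later runs can reuse it
            {"_id": url},
            {"$set": {"html": html}},
            upsert=True,
        )
        print(f"fetched and cached: {url}")

if __name__ == "__main__":
    crawl(config)
```

Elasticsearch or Solr could stand in for MongoDB here; the cache-then-store pattern stays the same.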
Installation
In a virtualenv (create one with `python3 -m venv env` and activate it first if needed):
pip3 install web-crawler-plus
Issues with this package?
- Search issues for this package
- Package or version missing? Open a new issue
- Something else? Open a new issue