Saturday, January 28, 2012

Web Crawler Tools


What are the best Java-based web crawler tools?

Crawler4j

Crawler4j is an open source Java crawler which provides a simple interface for crawling the Web.
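
The usual pattern is to subclass WebCrawler and override shouldVisit and visit. A minimal sketch follows; the shouldVisit signature shown matches recent crawler4j releases (older versions omit the referringPage parameter), and the example.com host is a placeholder.

    import edu.uci.ics.crawler4j.crawler.Page;
    import edu.uci.ics.crawler4j.crawler.WebCrawler;
    import edu.uci.ics.crawler4j.parser.HtmlParseData;
    import edu.uci.ics.crawler4j.url.WebURL;
    import java.util.regex.Pattern;

    // Minimal crawler: stays on one (placeholder) site and skips binary resources.
    public class MyCrawler extends WebCrawler {

        private static final Pattern BINARY =
                Pattern.compile(".*\\.(css|js|gif|jpe?g|png|zip|pdf)$");

        @Override
        public boolean shouldVisit(Page referringPage, WebURL url) {
            String href = url.getURL().toLowerCase();
            return !BINARY.matcher(href).matches()
                    && href.startsWith("https://www.example.com/");
        }

        @Override
        public void visit(Page page) {
            System.out.println("Visited: " + page.getWebURL().getURL());
            if (page.getParseData() instanceof HtmlParseData) {
                HtmlParseData html = (HtmlParseData) page.getParseData();
                System.out.println("Text length: " + html.getText().length());
                System.out.println("Outgoing links: " + html.getOutgoingUrls().size());
            }
        }
    }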

Heritrix

Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project. Heritrix is designed to respect robots.txt exclusion directives and META robots tags.

WebSPHINX

WebSPHINX (Website-Specific Processors for HTML INformation eXtraction) is a Java class library and interactive development environment for web crawlers. A web crawler (also called a robot or spider) is a program that browses and processes Web pages automatically. WebSPHINX consists of two parts: the Crawler Workbench and the WebSPHINX class library.
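
As a rough sketch of the library's subclassing pattern (written from WebSPHINX's documented examples, so class and method names should be checked against its javadoc; the seed URL is a placeholder):

    import websphinx.Crawler;
    import websphinx.Link;
    import websphinx.Page;

    // Rough sketch of the WebSPHINX pattern: subclass Crawler and override
    // shouldVisit/visit. Names are taken from the library's documentation.
    public class TitleCrawler extends Crawler {

        @Override
        public boolean shouldVisit(Link link) {
            // Follow only links on the seed host (placeholder host).
            return link.getURL().getHost().endsWith("example.com");
        }

        @Override
        public void visit(Page page) {
            System.out.println(page.getURL() + " -> " + page.getTitle());
        }

        public static void main(String[] args) throws Exception {
            TitleCrawler crawler = new TitleCrawler();
            crawler.setRoot(new Link("https://www.example.com/")); // seed URL
            crawler.run();                                         // crawl synchronously
        }
    }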

Nutch

Apache Nutch is an open source web-search software project. Stemming from Apache Lucene, it now builds on Apache Solr, adding web-specific features such as a crawler, a link-graph database, and parsing support handled by Apache Tika for HTML and an array of other document formats.
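
As a standalone illustration of the parsing step that Tika handles for Nutch (this uses the Tika facade API directly, with a placeholder URL):

    import org.apache.tika.Tika;
    import java.net.URL;

    // Detect a document's format and extract its plain text with the Tika facade,
    // independent of Nutch. The URL is a placeholder.
    public class TikaParseExample {
        public static void main(String[] args) throws Exception {
            Tika tika = new Tika();
            URL url = new URL("https://www.example.com/report.pdf");
            System.out.println("Detected type: " + tika.detect(url));
            System.out.println("Extracted text: " + tika.parseToString(url));
        }
    }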

WebLech

WebLech is a fully featured web site download/mirror tool written in Java. It supports many of the features required to download websites and emulates standard web-browser behaviour as much as possible. WebLech is multithreaded, and a GUI console is planned.

Arale

While many bots are focused on page indexing, Arale is primarily designed for personal use. It fits the needs of advanced web surfers and web developers.

HyperSpider

HyperSpider (a Java app) collects the link structure of a website. It imports and exports data from/to databases and CSV files, exports to Graphviz DOT, Resource Description Framework (RDF/DC), XML Topic Maps (XTM), Prolog and HTML, and visualizes the link structure as a hierarchy and as a map.

Arachnid

Arachnid is a Java-based web spider framework. It includes a simple HTML parser object that parses an input stream containing HTML content. Simple web spiders can be created by subclassing Arachnid and adding a few lines of code that are called after each page of a web site is parsed.

Spindle

Spindle is a web indexing/search tool built on top of the Lucene toolkit. It includes an HTTP spider that is used to build the index and a search class that is used to search the index. In addition, support is provided for the Bitmechanic listlib JSP TagLib, so that a search can be added to a JSP-based site without writing any Java classes.
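
Spindle's own classes predate current Lucene releases, but the index/search split it describes looks roughly like this in plain Lucene (modern Lucene API, not Spindle's; the field names and content are illustrative):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;

    // Plain Lucene sketch of the two halves Spindle describes:
    // an indexing step (fed by the spider) and a search step.
    public class IndexAndSearch {
        public static void main(String[] args) throws Exception {
            Directory dir = new ByteBuffersDirectory();      // in-memory index for the demo
            StandardAnalyzer analyzer = new StandardAnalyzer();

            // Indexing: the spider would add one document per fetched page.
            try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
                Document doc = new Document();
                doc.add(new StringField("url", "https://www.example.com/", Field.Store.YES));
                doc.add(new TextField("body", "Example page text about web crawlers", Field.Store.YES));
                writer.addDocument(doc);
            }

            // Searching: the search side queries the same index.
            try (DirectoryReader reader = DirectoryReader.open(dir)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                ScoreDoc[] hits = searcher.search(
                        new QueryParser("body", analyzer).parse("crawlers"), 10).scoreDocs;
                for (ScoreDoc hit : hits) {
                    System.out.println(searcher.doc(hit.doc).get("url"));
                }
            }
        }
    }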

Spider

Spider is a complete standalone Java application designed to easily integrate varied data sources. It is an XML-driven framework for data retrieval from network-accessible sources, supports scheduled pulling, is highly extensible, provides hooks for custom post-processing and configuration, and is implemented as an Avalon/Keel framework data feed service.

LARM

LARM aims to be a 100% Java search solution for end users of the Jakarta Lucene search engine framework, with methods for indexing files and database tables and a crawler for indexing web sites. At the moment, however, only specifications exist, and it is up to contributors to turn them into a working program. Its predecessor was an experimental crawler called larm-webcrawler, available from the Jakarta project.

Metis

Metis is a tool to collect information from the content of web sites. It was written for the Ideahamster Group to find the competitive intelligence weight of a web server, and it assists in satisfying the CI Scouting portion of the Open Source Security Testing Methodology Manual (OSSTMM).

Aperture

Aperture crawls information systems such as file systems, websites, mailboxes and mail servers. It can extract full text and metadata from many common file formats. Aperture has a flexible architecture that can be extended with custom file formats, data sources, etc., and supports deployment on OSGi platforms.

Smart and Simple Web Crawler

A framework that crawls a web site with integrated Lucene support. It supports two crawling modes, Max Iterations and Max Depth, and provides a filter interface to limit the links to be crawled. Filters can be combined with AND, OR and NOT.

Web Harvest

Web-Harvest collects Web pages and extracts useful data from them. It leverages text/XML manipulation technologies such as XSLT, XQuery and regular expressions. Web-Harvest mainly focuses on HTML/XML-based web sites; however, it can be extended with custom Java libraries to augment its extraction capabilities.

Criteria for Selecting a Tool

  1. Multi-threaded structure.
  2. Control over crawl depth.
  3. Control over redundant (already-visited) links.
  4. Parameters such as maximum page size to crawl, maximum number of pages to crawl, and maximum time to run, to manage the crawler.
  5. Documentation.
I use crawler4j for crawling the whole web.
You can set up a multi-threaded web crawler in 5 minutes!
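
As a rough sketch of such a setup (names follow recent crawler4j releases; the storage folder, seed URL, limits and thread count are placeholders, and MyCrawler is the subclass sketched under Crawler4j above):

    import edu.uci.ics.crawler4j.crawler.CrawlConfig;
    import edu.uci.ics.crawler4j.crawler.CrawlController;
    import edu.uci.ics.crawler4j.fetcher.PageFetcher;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
    import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

    // Multi-threaded crawler4j setup that maps onto the criteria above.
    // crawler4j deduplicates already-seen URLs itself, covering criterion 3.
    public class CrawlerMain {
        public static void main(String[] args) throws Exception {
            CrawlConfig config = new CrawlConfig();
            config.setCrawlStorageFolder("/tmp/crawl");  // intermediate crawl data (placeholder path)
            config.setMaxDepthOfCrawling(3);             // criterion 2: control over crawl depth
            config.setMaxPagesToFetch(1000);             // criterion 4: max pages to be crawled
            config.setPolitenessDelay(200);              // delay between requests, in milliseconds

            PageFetcher pageFetcher = new PageFetcher(config);
            RobotstxtServer robotstxtServer = new RobotstxtServer(new RobotstxtConfig(), pageFetcher);
            CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

            controller.addSeed("https://www.example.com/");
            int numberOfCrawlers = 8;                    // criterion 1: multi-threaded structure
            controller.start(MyCrawler.class, numberOfCrawlers);
        }
    }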