Search - SubjectSpider_ByKelvenJU

Search list

[JSP/Java] SubjectSpider_ByKelvenJU

Description: a subject-focused web spider/crawler. Requirements:
1. Lock crawling onto a particular subject.
2. Produce a text log file in the format: timestamp, URL.
3. Establish at most 2 connections when fetching any one URL (note: the number of local page-parsing threads is unlimited).
4. Obey "civilized spider" rules: the robots.txt file and meta tags must be checked for restrictions, and a thread must sleep 2 seconds after finishing a page.
5. Parse HTML pages and extract link URLs; detect whether an extracted URL has already been processed, and never re-parse an already-crawled page.
6. Make the spider/crawler's basic parameters configurable, including crawl depth and seed URLs.
7. Use a User-agent header to identify the crawler to the server.
8. Produce crawl statistics, including crawl speed, total time to completion, and total number of pages crawled; comment important variables and all classes and methods.
9. Follow coding conventions, e.g., naming conventions for classes, methods, and files.
10. Optional: a GUI or web interface for managing the spider/crawler, including start/stop and adding/removing URLs.
(A minimal illustrative sketch of the core crawl loop follows this entry.)
Platform: | Size: 1912263 | Author: 祝庆荣 | Hits:
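The requirements above describe a standard polite, breadth-first crawler. The archive itself is not shown on this page, so the sketch below is not KelvenJU's code; it is a minimal plain-Java illustration (all names, such as PoliteCrawlerSketch, are invented) of the core loop: a seed URL and depth limit, a visited set for deduplication, a User-agent header, a timestamp-and-URL log line, link extraction, the 2-second per-page sleep, and final statistics. The robots.txt/meta-tag checks and the 2-connection cap are omitted to keep it short.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal polite-crawler sketch (hypothetical, not the original archive):
 *  seed URL + depth limit, visited-set dedup, User-agent header,
 *  timestamped logging, and a 2-second sleep after each page. */
public class PoliteCrawlerSketch {
    private static final String USER_AGENT = "SubjectSpiderDemo/0.1"; // identify ourselves (req. 7)
    private static final Pattern HREF = Pattern.compile("href=[\"'](http[^\"'#]+)[\"']");

    public static void main(String[] args) throws Exception {
        String seed = args.length > 0 ? args[0] : "https://example.com/"; // seed URL (req. 6)
        int maxDepth = 2;                                                 // crawl depth (req. 6)
        Set<String> visited = new HashSet<>();                            // dedup of processed URLs (req. 5)
        ArrayDeque<String[]> queue = new ArrayDeque<>();                  // entries are {url, depth}
        queue.add(new String[]{seed, "0"});
        long start = System.currentTimeMillis();
        int fetched = 0;

        while (!queue.isEmpty()) {
            String[] item = queue.poll();
            String url = item[0];
            int depth = Integer.parseInt(item[1]);
            if (depth > maxDepth || !visited.add(url)) continue; // skip too-deep or already-seen URLs

            String html = fetch(url);
            if (html == null) continue;
            fetched++;
            System.out.println(Instant.now() + "\t" + url);      // log line: timestamp, URL (req. 2)

            Matcher m = HREF.matcher(html);                      // extract out-links (req. 5)
            while (m.find()) queue.add(new String[]{m.group(1), String.valueOf(depth + 1)});

            Thread.sleep(2000);                                  // civility rule: sleep 2 s per page (req. 4)
        }
        long elapsed = System.currentTimeMillis() - start;       // crawl statistics (req. 8)
        System.out.printf("pages=%d time=%dms rate=%.2f pages/s%n",
                fetched, elapsed, fetched * 1000.0 / Math.max(elapsed, 1));
    }

    /** Fetches a URL with our User-agent; returns the page body, or null on any error. */
    private static String fetch(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestProperty("User-Agent", USER_AGENT);
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            StringBuilder sb = new StringBuilder();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) sb.append(line).append('\n');
            }
            return sb.toString();
        } catch (Exception e) {
            return null; // skip unreachable or malformed URLs
        }
    }
}
```

A full implementation would additionally fetch and honor /robots.txt, respect meta robots directives, and bound concurrent connections per URL, as requirements 3 and 4 demand.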

[JSP/Java] SubjectSpider_ByKelvenJU

Description: identical to the entry above (same requirements list).
Platform: | Size: 1911808 | Author: | Hits:
