Search - Search Engine spider

Search list

[Search Engine] Spider

Description: search engine spider
Platform: | Size: 4001 | Author: qingshuli | Hits:

[Search Engine] Spideroo

Description: A search engine written in C# that can search, build indexes, and so on. It builds a simple search engine that crawls the file system from a specified folder and indexes all HTML (or other types of) documents. A basic design and object model were developed, as well as a query/results page. (A rough Java sketch of the crawl-and-index idea follows this entry.)
Platform: | Size: 24576 | Author: 站长 | Hits:
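The entry above describes crawling the file system from a folder and indexing HTML documents. The original package is C#; what follows is only a minimal, hypothetical Java sketch of the same crawl-and-index idea, not code from the package (the class name SimpleFileIndexer and every detail below are invented for illustration):

// Hypothetical sketch: walk a folder, read every .html file, and build a tiny
// in-memory inverted index (word -> set of file paths) that can be queried.
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.Stream;

public class SimpleFileIndexer {
    private final Map<String, Set<String>> index = new HashMap<>();

    // Walk the directory tree and index every HTML file found.
    public void crawl(Path root) throws IOException {
        try (Stream<Path> files = Files.walk(root)) {
            files.filter(p -> p.toString().toLowerCase().endsWith(".html"))
                 .forEach(this::indexFile);
        }
    }

    private void indexFile(Path file) {
        try {
            String text = Files.readString(file)
                               .replaceAll("<[^>]*>", " ")   // crude tag stripping
                               .toLowerCase();
            for (String word : text.split("\\W+")) {
                if (!word.isEmpty()) {
                    index.computeIfAbsent(word, k -> new HashSet<>())
                         .add(file.toString());
                }
            }
        } catch (IOException e) {
            System.err.println("Skipping " + file + ": " + e.getMessage());
        }
    }

    // Query step: return the files containing the given word.
    public Set<String> search(String word) {
        return index.getOrDefault(word.toLowerCase(), Collections.emptySet());
    }

    public static void main(String[] args) throws IOException {
        SimpleFileIndexer indexer = new SimpleFileIndexer();
        indexer.crawl(Paths.get(args.length > 0 ? args[0] : "."));
        System.out.println(indexer.search("spider"));
    }
}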

[Other resource] openwebspider-0.5.1

Description: OpenWebSpider is an open-source multi-threaded Web spider (robot, crawler) and search engine with a lot of interesting features!
Platform: | Size: 231424 | Author: 龙龙 | Hits:

[Search Engine] spider(java)

Description: A web page crawler is also called a web robot (Robot), web wanderer, or web spider. A web robot (Web Robot), also known as a spider (Spider), wanderer (Wanderer), or crawler (Crawler), is an automated program that repeatedly performs a task at a speed no human could match. Such robots can automatically roam Web sites, retrieve and fetch remote data on the Web according to some strategy, build a local index and local database, and provide a query interface for a search engine to call.
Platform: | Size: 20480 | Author: shengping | Hits:

[Search Engine] SearchEngineOptimization

Description: This is a good book on search engines; anyone who wants to build their own search engine may find it worth reading.
Platform: | Size: 104448 | Author: curly | Hits:

[Search Engine] sphider

Description: Search engine spider program. http://www.vogood.com
Platform: | Size: 62464 | Author: vogood | Hits:

[WEB Code] phpspidercount

Description: Welcome to the search engine spider tracker. I wrote this small program because my server logs were no longer available, which is why it exists. My skills are limited, so please make do with it :) Function: it tracks search engine spiders (bots), records their visits, and provides online viewing plus download of the records as a CSV file. (A rough Java sketch of the same User-Agent-based tracking idea follows this entry.)
Platform: | Size: 4096 | Author: webghost | Hits:
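The tracker above is a PHP class; as a language-neutral illustration only, here is a rough Java sketch of the same idea: match the request's User-Agent against known bots and append a CSV line to a plain text file instead of a database. The class name SpiderTracker, the bot list, and the spider/visits.csv path are all assumptions, not part of the original package:

// Illustrative sketch of User-Agent-based spider tracking with a CSV log file.
import java.io.IOException;
import java.nio.file.*;
import java.time.Instant;
import java.util.Map;

public class SpiderTracker {
    // A few well-known crawler User-Agent substrings (illustrative, not exhaustive).
    private static final Map<String, String> BOTS = Map.of(
            "Googlebot", "Google",
            "Baiduspider", "Baidu",
            "bingbot", "Bing",
            "YandexBot", "Yandex");

    private final Path logFile;

    public SpiderTracker(Path logFile) {
        this.logFile = logFile;
    }

    // Call this for every request; returns true when a bot visit was recorded.
    public boolean logVisit(String userAgent, String requestedUrl) throws IOException {
        if (userAgent == null) return false;
        for (Map.Entry<String, String> bot : BOTS.entrySet()) {
            if (userAgent.contains(bot.getKey())) {
                if (logFile.getParent() != null) Files.createDirectories(logFile.getParent());
                String line = String.join(",",
                        Instant.now().toString(), bot.getValue(), requestedUrl) + "\n";
                Files.writeString(logFile, line,
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws IOException {
        SpiderTracker tracker = new SpiderTracker(Paths.get("spider/visits.csv"));
        tracker.logVisit("Mozilla/5.0 (compatible; Googlebot/2.1)", "/index.html");
    }
}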

[Search Engine] Spider

Description:
Platform: | Size: 4096 | Author: qingshuli | Hits:

[Internet-Network] Spider

Description: Web spider and search engine.
Platform: | Size: 3099648 | Author: dlyzh | Hits:

[Internet-Network] spider

Description: The system implements simple search engine functionality; it crawls group data from the Tencent website.
Platform: | Size: 3072 | Author: 魏祥峰 | Hits:

[JSP/Java] spider

Description: This program uses the Baidu search engine to download web pages related to the keywords you enter.
Platform: | Size: 180224 | Author: yaozengli | Hits:

[Search Engine] Czhizhu

Description: A "spider" (Spider) is a very useful kind of program on the Internet. Search engines use spider programs to collect Web pages into a database, companies use them to monitor competitors' websites and track changes, individual users use them to download Web pages for offline use, and developers use them to scan their own sites for broken links... Spider programs serve different purposes for different users. So how exactly does a spider program work?
Platform: | Size: 4137984 | Author: 李鹏 | Hits:

[MultiLanguage] spider

Description: This system is a simple web crawler: given an initial URL, it automatically searches the Web for page information and records it as data for a search engine.
Platform: | Size: 49152 | Author: 杨广兴 | Hits:

[Internet-Network] spider+23

Description: Design and implementation of a search engine index database.
Platform: | Size: 5806080 | Author: sf | Hits:

[Other] Instant_Spider

Description: Search engine: instant spider.
Platform: | Size: 92160 | Author: shijp74 | Hits:

[Search Engine] Crawler_src_code

Description: A web crawler (also called a web ant or web spider) is a program that automatically and methodically fetches web page data from the World Wide Web. Crawlers are mainly used to collect large numbers of pages for later processing by a search engine, which indexes the downloaded pages (e.g., with Lucene or DotLucene) to provide fast searches. Crawlers can also automate maintenance tasks on a website, such as checking links or validating HTML code, and they can gather specific types of information from pages, such as e-mail addresses (a newer use is checking addresses to prevent Trackback spam). (A minimal Java link-checker sketch follows this entry.)
Platform: | Size: 55296 | Author: lisi | Hits:
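One of the crawler uses mentioned above is link checking. Below is a minimal, hypothetical Java link-checker sketch (not code from the package; the class name LinkChecker and the regex-based extraction are invented): it fetches one page, pulls out absolute href targets, and reports each link's HTTP status.

// Illustrative link checker: dead links show up as 4xx/5xx status codes.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkChecker {
    private static final Pattern HREF =
            Pattern.compile("href\\s*=\\s*\"(http[^\"]+)\"", Pattern.CASE_INSENSITIVE);
    private final HttpClient client = HttpClient.newHttpClient();

    public void check(String pageUrl) throws Exception {
        HttpResponse<String> page = client.send(
                HttpRequest.newBuilder(URI.create(pageUrl)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        Matcher m = HREF.matcher(page.body());
        while (m.find()) {
            String link = m.group(1);
            // A HEAD request is enough to learn whether the link is dead.
            HttpResponse<Void> resp = client.send(
                    HttpRequest.newBuilder(URI.create(link))
                               .method("HEAD", HttpRequest.BodyPublishers.noBody())
                               .build(),
                    HttpResponse.BodyHandlers.discarding());
            System.out.println(resp.statusCode() + "  " + link);
        }
    }

    public static void main(String[] args) throws Exception {
        new LinkChecker().check(args.length > 0 ? args[0] : "https://example.com/");
    }
}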

[JSP/Java] spider

Description: A search engine implemented in Java; it uses Java programming techniques to search for web page links.
Platform: | Size: 162816 | Author: fu | Hits:

[CSharp] Spider

Description: A "spider" (Spider) is a very useful kind of program on the Internet. Search engines use spiders to collect Web pages into a database, companies use them to monitor competitors' websites and track changes, individual users use them to download Web pages for offline use, and developers use them to scan their own sites for broken links... Spider programs serve different purposes for different users. So how does a spider program actually work? A spider is a semi-automatic program: just as a real spider travels over its web, a spider program travels over the net woven from Web links. It is only semi-automatic because it always needs an initial link (a starting point); after that it runs on its own. The spider scans the links contained in the start page, visits the pages those links point to, and then analyzes and follows the links contained in those pages. In theory, the spider will eventually visit every page on the Internet, because almost every page is referenced by at least some other pages. (A Java sketch of this breadth-first traversal follows this entry.)
Platform: | Size: 24576 | Author: webuser_cn | Hits:
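The description above outlines the spider's traversal: start from one link, scan the links on that page, visit them, and keep following new links. The package itself is C#; the following is only a small illustrative Java sketch of that breadth-first traversal (the class name BreadthFirstSpider, the maxPages cap, and the regex-based link extraction are invented here):

// Illustrative breadth-first spider: a frontier queue plus a visited set.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BreadthFirstSpider {
    // Crude extraction of absolute links; a real spider would parse the HTML.
    private static final Pattern HREF =
            Pattern.compile("href\\s*=\\s*\"(http[^\"]+)\"", Pattern.CASE_INSENSITIVE);

    public static void crawl(String startUrl, int maxPages) {
        HttpClient client = HttpClient.newHttpClient();
        Deque<String> frontier = new ArrayDeque<>(List.of(startUrl)); // links still to visit
        Set<String> visited = new HashSet<>();                        // links already fetched

        while (!frontier.isEmpty() && visited.size() < maxPages) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;              // skip pages we have already seen
            String body;
            try {
                HttpResponse<String> resp = client.send(
                        HttpRequest.newBuilder(URI.create(url)).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
                System.out.println("Fetched " + url + " (" + resp.statusCode() + ")");
                body = resp.body();
            } catch (Exception e) {                       // bad URL or network error: move on
                continue;
            }
            Matcher m = HREF.matcher(body);
            while (m.find()) {                            // queue every new outgoing link
                String link = m.group(1);
                if (!visited.contains(link)) frontier.add(link);
            }
        }
    }

    public static void main(String[] args) {
        crawl(args.length > 0 ? args[0] : "https://example.com/", 20);
    }
}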

[Search Engine] CSharpSpider

Description: A "spider" (Spider) is a very useful kind of program on the Internet. Search engines use spiders to collect Web pages into a database, companies use them to monitor competitors' websites and track changes, individual users use them to download Web pages for offline use, and developers use them to scan their own sites for broken links... Spider programs serve different purposes for different users. So how does a spider program actually work? This article describes how to build a spider program in C# that can download an entire site's content into a specified directory; the program's running interface is shown in Figure 1. You can easily use the few core classes provided in the article to build your own spider program. (A rough Java sketch of the save-to-directory step follows this entry.)
Platform: | Size: 105472 | Author: 王明 | Hits:
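The article above builds a C# spider that downloads a whole site into a target directory. As a rough illustration only, the Java sketch below shows just the storage step: turning a URL into a safe local file name under an output folder and writing the fetched body there. The class name PageSaver, the "mirror" folder, and the file-naming scheme are assumptions, not the article's code:

// Illustrative page-saving step for an offline mirror.
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.*;

public class PageSaver {
    private final Path outputDir;
    private final HttpClient client = HttpClient.newHttpClient();

    public PageSaver(Path outputDir) {
        this.outputDir = outputDir;
    }

    // Fetch one page and mirror it under outputDir.
    public Path save(String url) throws IOException, InterruptedException {
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // Replace characters that are not filesystem-friendly.
        String fileName = url.replaceAll("[^A-Za-z0-9._-]", "_") + ".html";
        Path target = outputDir.resolve(fileName);
        Files.createDirectories(outputDir);
        Files.writeString(target, resp.body());
        return target;
    }

    public static void main(String[] args) throws Exception {
        Path saved = new PageSaver(Paths.get("mirror")).save("https://example.com/");
        System.out.println("Saved to " + saved);
    }
}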

[WEB Code] spider

Description: Tool notes: 1. The class file monitors how search engine spiders interact with the website. 2. The class is PHP code and only works on PHP-based sites. 3. The code does not use a database; it writes records straight to a text file, so please create a spider folder in the site root. 4. The records it produces are for reference only and are not guaranteed to be complete, because pages that never run this code are not recorded. 5. The code is free; copy and modify it as you like, but please keep my copyright notice. Usage: add the following code to every page you want to track and call it, usually in a globally included file: require(ROOT_PATH . 'directory-of-this-file/cls_spider.php'); $spider = new spider(); Besides this file there is also a front-end statistics file; its filename can be changed freely, but mind the path it uses to include this file. To set the front-end access password, change the $viewpass value below.
Platform: | Size: 7168 | Author: 陆飞 | Hits:

CodeBus www.codebus.net