README.md

# web-crawler

A simple web crawler implemented with Scala + Akka. You can specify filter rules and storage handling (file storage, database storage, ...). Example:

```scala
new Crawler()
  .source(ArrayBuffer("http://money.163.com/stock/"))
  .processor(FileStore("d:/crawl-sites").process)
  .filter((url: String) => url.contains("163.com"))
  .start
```
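Other storage handlers can be plugged into `processor` in the same way as `FileStore`. The sketch below is a hypothetical illustration, not part of this repository: it assumes the processor argument is a function taking the page URL and its fetched content, which the README does not spell out, and `ConsoleStore` is an invented name.

```scala
import scala.collection.mutable.ArrayBuffer

// Hypothetical storage handler: the (url, content) => Unit signature is an
// assumption; the README only shows FileStore("...").process being passed.
case class ConsoleStore(tag: String) {
  def process(url: String, content: String): Unit =
    println(s"[$tag] $url -> ${content.length} chars")
}

object ConsoleCrawlExample extends App {
  new Crawler()
    .source(ArrayBuffer("http://money.163.com/stock/"))
    .processor(ConsoleStore("crawl").process)        // swap in the custom handler
    .filter((url: String) => url.contains("163.com"))
    .start
}
```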
