
Crawler

菜鸟上路中
Published 2016/04/18 19:00
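This post walks through a small multi-threaded web crawler written in Java on top of Apache HttpClient. It mirrors a site to local disk (here, the Java 8 API docs): a singleton configuration class, a URL value object, a synchronized URL queue, worker threads that fetch pages and hand them to writer threads, and a utility class that extracts links with regular expressions and maps URLs to local file paths. Sections 1-6 give the code; sections 7-8 compare the official site with the local copy.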

                       

1. [Code] Main program

           

public class Demo {
    @SuppressWarnings("static-access")
    public static void main(String[] args) {
        MyCrawler crawler = MyCrawler.getInstance();
        crawler.setUrl("http://docs.oracle.com/javase/8/docs/api/"); // entry page; also becomes the host prefix
        crawler.setDir("/api2");  // output subdirectory under the working directory
        crawler.setDeep(3);       // maximum link depth to follow
        crawler.setThread(1);     // number of fetcher threads
        crawler.start();
    }
}
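With these settings the crawler starts at the Java 8 API landing page, follows same-host links up to depth 3 with a single fetcher thread, and writes every page beneath an api2 directory inside the current working directory (see setDir in section 2 and the path mapping in section 6).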

                   

                       

                       

2. [Code] Configuration and parameter handling

           

import java.io.File;

public class MyCrawler {
    private static String url;    // entry URL
    private static int deep = 4;  // maximum crawl depth
    private static int topN = 10; // reserved; not used elsewhere in this post
    private static int thread = 3; // number of fetcher threads
    private static String host;   // prefix used to recognize and resolve links
    private static String dir = System.getProperty("user.dir"); // output root

    private static MyCrawler crawler = new MyCrawler();

    private MyCrawler() {}

    public static MyCrawler getInstance() {
        return crawler;
    }

    public static int getDeep() {
        return deep;
    }

    public static void setDeep(int deep) {
        MyCrawler.deep = deep;
    }

    public static int getTopN() {
        return topN;
    }

    public static void setTopN(int topN) {
        MyCrawler.topN = topN;
    }

    public static String getUrl() {
        return url;
    }

    public static void setUrl(String url) {
        MyCrawler.url = url;
        // If the entry URL is a page, its parent directory becomes the host;
        // otherwise the URL itself serves as the prefix for relative links.
        if (url.endsWith(".html")) {
            host = url.substring(0, url.lastIndexOf("/"));
        } else {
            MyCrawler.host = url;
        }
    }

    public static String getHost() {
        return host;
    }

    public static String getDir() {
        return dir;
    }

    public static void setDir(String dir) {
        // Append the chosen subdirectory to the working directory
        // (File.separator instead of the original hard-coded "\\").
        MyCrawler.dir += dir + File.separator;
    }

    public static int getThread() {
        return MyCrawler.thread;
    }

    public static void setThread(int thread) {
        MyCrawler.thread = thread;
    }

    public void start() {
        // Seed the queue with the entry URL at depth 1, then start the workers.
        UrlObject obj = new UrlObject(url);
        obj.setIdeep(1);
        QueryCrawler.push(obj);
        CrawlerWriterFiles writer = new CrawlerWriterFiles();
        writer.open();
    }
}

                   

                       

                       

3. [Code] URL object

           

public class UrlObject {
    private String url;
    private int ideep; // depth at which this URL was discovered

    public UrlObject(String url) {
        this.url = url;
    }

    public UrlObject(String url, int ideep) {
        this.url = url;
        this.ideep = ideep;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public int getIdeep() {
        return ideep;
    }

    public void setIdeep(int ideep) {
        this.ideep = ideep;
    }
}

                   

                       

                       

4. [Code] URL task queue

           

import java.util.ArrayList;
import java.util.List;

public class QueryCrawler {
    private static QueryCrawler query = new QueryCrawler();
    // Shared work list; all access goes through the synchronized static methods below.
    private static ArrayList<UrlObject> list = new ArrayList<UrlObject>();

    private QueryCrawler() {}

    public static QueryCrawler getInstance() {
        return query;
    }

    public synchronized static void push(UrlObject obj) {
        list.add(obj);
    }

    public synchronized static void push(List<UrlObject> objs) {
        list.addAll(objs);
    }

    // Returns the next URL in FIFO order, or null when the queue is empty.
    public synchronized static UrlObject pop() {
        if (list.size() < 1)
            return null;
        return list.remove(0);
    }
}
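Because pop() returns null when the list is empty, the workers in section 5 have to poll and sleep. A minimal sketch of an alternative (my addition, not part of the original post): java.util.concurrent.LinkedBlockingQueue lets a worker block until work arrives.

import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical drop-in replacement for QueryCrawler: take() blocks until a
// URL is available, so the poll-and-sleep loop in section 5 goes away.
public class BlockingQueryCrawler {
    private static final LinkedBlockingQueue<UrlObject> queue =
            new LinkedBlockingQueue<UrlObject>();

    public static void push(UrlObject obj) {
        queue.offer(obj);
    }

    public static void push(List<UrlObject> objs) {
        queue.addAll(objs);
    }

    public static UrlObject pop() throws InterruptedException {
        return queue.take(); // waits instead of returning null
    }
}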

                   

                       

                       

5. [Code] Worker threads: crawl and store

           

import java.io.IOException;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.client.SystemDefaultHttpClient;
import org.apache.http.util.EntityUtils;

public class CrawlerWriterFiles {
    public void open() {
        for (int i = 0; i < MyCrawler.getThread(); i++) {
            new Thread(new Runnable() {
                public void run() {
                    // One client per worker thread, reused across requests
                    // (the original created a new client on every iteration).
                    DefaultHttpClient client = new SystemDefaultHttpClient();
                    while (true) {
                        try {
                            final UrlObject obj = QueryCrawler.pop();
                            if (obj != null) {
                                // Plain GET is appropriate for fetching static pages
                                // (the original used HttpPost).
                                HttpGet httpGet = new HttpGet(obj.getUrl());
                                HttpResponse response = client.execute(httpGet);
                                final String result = EntityUtils.toString(response.getEntity(), "UTF-8");
                                // Below the depth limit, extract links from HTML (not CSS) and queue them.
                                if (obj.getIdeep() < MyCrawler.getDeep() && !obj.getUrl().endsWith(".css")) {
                                    CrawlerUtil.addUrlObject(obj, result);
                                }
                                // Hand the page to a short-lived writer thread so disk I/O
                                // does not block the fetch loop.
                                new Thread(new Runnable() {
                                    public void run() {
                                        try {
                                            CrawlerUtil.writer(obj.getUrl(), result);
                                        } catch (IOException e) {
                                            System.err.println("Failed to write url: " + obj.getUrl());
                                        }
                                    }
                                }).start();
                            } else {
                                // Queue empty: back off and poll again.
                                System.out.println("-------- no tasks at the moment");
                                Thread.sleep(5000);
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }
    }
}
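Two levels of threading are at work: getThread() fetcher threads share the queue, and each fetched page is handed off to its own writer thread so disk I/O never stalls fetching. Note that nothing deduplicates URLs; a page can be queued and downloaded repeatedly (only the file write is skipped once the file exists), so a synchronized set of visited URLs checked before push would be a natural addition.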

                   

                       

                       

6. [Code] Extracting URLs and storing page data

           

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CrawlerUtil {
    // Characters that disqualify a candidate link (inline scripts, fragments, etc.).
    private static List<String> arrays = new ArrayList<String>();
    // Characters replaced with "_" when a URL is turned into a file name.
    private static List<String> filearrays = new ArrayList<String>();

    static {
        String a = ",[]'\"+:;{}";
        String[] as = a.split("");
        for (int i = 0; i < as.length; i++) {
            if (as[i].equals("")) { // split("") yields a leading empty string on older JDKs
                continue;
            }
            arrays.add(as[i]);
        }
        filearrays.add("?");
        filearrays.add("=");
        //filearrays.add(".");
    }

    public static void writer(String url, String data) throws IOException {
        File file = null;
        if (url.toLowerCase().endsWith(".css")) {
            file = new File(getPathCSS(url));
        } else {
            file = new File(getPathHTML(url));
        }
        System.out.println(file.getPath());
        if (!file.getParentFile().exists()) {
            file.getParentFile().mkdirs();
        }
        if (!file.exists()) {
            // The page was decoded as UTF-8, so write it back out as UTF-8 too
            // (the original used the platform default encoding).
            byte[] datab = data.getBytes("UTF-8");
            FileOutputStream f = new FileOutputStream(file);
            f.write(datab, 0, datab.length);
            f.close();
        }
    }

    private static String getPathHTML(String url) {
        if (url.equals(MyCrawler.getHost())) {
            url += "index";
        }
        // Normalize the name so every saved page ends in ".html".
        if (!url.endsWith("html")) {
            if (url.endsWith("/")) {
                url += "index.html";
            } else if (url.lastIndexOf("/") < url.lastIndexOf(".")) {
                url = url.substring(0, url.lastIndexOf(".")) + ".html";
            } else {
                url += ".html";
            }
        }
        // Map the URL into the output directory by stripping the host prefix.
        if (url.startsWith("http://")) {
            url = MyCrawler.getDir() + url.replace(MyCrawler.getHost(), "");
        }
        for (int i = 0; i < filearrays.size(); i++) {
            url = url.replaceAll("\\" + filearrays.get(i), "_");
        }
        return url;
    }

    private static String getPathCSS(String url) {
        if (url.startsWith("http://")) {
            url = MyCrawler.getDir() + url.replace(MyCrawler.getHost(), "");
        }
        return url;
    }

    public static void addUrlObject(UrlObject obj, String result) {
        // Pull candidate links out of <link href>, <a href> and <frame src> tags.
        // (The original character class [\"|>] contained a stray literal '|'.)
        Pattern pcss = Pattern.compile("<link.*href\\s*=\\s*\"?(.*?)[\">]", Pattern.CASE_INSENSITIVE);
        addUrlObjToPattern(pcss, obj, result);
        Pattern pa = Pattern.compile("<a\\s+href\\s*=\\s*\"?(.*?)[\">]", Pattern.CASE_INSENSITIVE);
        addUrlObjToPattern(pa, obj, result);
        Pattern pframe = Pattern.compile("<frame\\s+src\\s*=\\s*\"?(.*?)[\">]", Pattern.CASE_INSENSITIVE);
        addUrlObjToPattern(pframe, obj, result);
    }

    private static void addUrlObjToPattern(Pattern p, UrlObject obj,
            String result) {
        Matcher m = p.matcher(result);
        ArrayList<UrlObject> urlobjs = new ArrayList<UrlObject>();
        while (m.find()) {
            String link = m.group(1).trim();
            if (!isLink(link)) {
                continue;
            }
            // Keep only links on the same host; resolve relative links against it.
            if (link.startsWith(MyCrawler.getHost())) {
                urlobjs.add(new UrlObject(link, 1 + obj.getIdeep()));
            } else if (!link.contains("://")) {
                urlobjs.add(new UrlObject(MyCrawler.getHost() + link, 1 + obj.getIdeep()));
            }
        }
        QueryCrawler.push(urlobjs);
        show(urlobjs);
    }

    // Debug helper: uncomment to print every queued URL.
    private static void show(ArrayList<UrlObject> urlobjs) {
        /*for (int i = 0; i < urlobjs.size(); i++) {
            System.out.println(urlobjs.get(i).getUrl());
        }*/
    }

    private static boolean isLink(String link) {
        if (null == link) return false;
        link = link.replace(MyCrawler.getHost(), "");
        for (int i = 0; i < arrays.size(); i++) {
            if (link.contains(arrays.get(i))) {
                return false;
            }
        }
        return true;
    }
}
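To make the path mapping concrete: with the Demo settings, host is http://docs.oracle.com/javase/8/docs/api/ and dir is the working directory plus /api2. The entry URL equals the host, so getPathHTML saves it as api2/index.html; a relative link such as java/util/List.html is resolved against the host in addUrlObjToPattern and saved as api2/java/util/List.html; and any '?' or '=' remaining in a name is replaced with '_'.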

                   

                       

                       

7. [Image] 官网.png (screenshot of the official site)

           

                       

                       

                       

8. [Image] 自己抓取得.png (screenshot of the locally crawled copy)

           

                       

