HttpClient 4.2.1 connection pool

Original post · 2012/10/16 13:10

I recently built a connection pool around HttpClient 4.2.1.


@Override
    public void afterPropertiesSet() throws Exception {

        SchemeRegistry schemeRegistry = new SchemeRegistry();
        schemeRegistry.register(new Scheme("http", 80, PlainSocketFactory.getSocketFactory()));
        schemeRegistry.register(new Scheme("https", 443, SSLSocketFactory.getSocketFactory()));

        connectionManager = new PoolingClientConnectionManager(schemeRegistry);

        // Set the maximum total number of connections
        connectionManager.setMaxTotal(maxTotalConnections);

        // Set the maximum number of connections per route
        connectionManager.setDefaultMaxPerRoute(maxRouteConnections);

        client = new DefaultHttpClient(connectionManager);
        // Set the connect timeout (ms)
        client.getParams().setParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, connectTimeout);
        // Set the socket read timeout (ms)
        client.getParams().setParameter(CoreConnectionPNames.SO_TIMEOUT, readTimeout);

        // Optional: keep-alive strategy that honors the server's Keep-Alive header
//        client.setKeepAliveStrategy(new ConnectionKeepAliveStrategy() {
//            @Override
//            public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
//                // Honor 'keep-alive' header
//                HeaderElementIterator it = new BasicHeaderElementIterator(response.headerIterator(HTTP.CONN_KEEP_ALIVE));
//                while (it.hasNext()) {
//                    HeaderElement he = it.nextElement();
//                    String param = he.getName();
//                    String value = he.getValue();
//                    if (value != null && param.equalsIgnoreCase("timeout")) {
//                        try {
//                            return Long.parseLong(value) * 1000;
//                        } catch (NumberFormatException ignore) {
//                        }
//                    }
//                }
//                return readTimeout * 2;
//            }
//        });
        // Periodically close expired and idle connections
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1, new DaemonThreadFactory(
                "httpClient-con-monitor"));
        scheduler
                .scheduleAtFixedRate(new IdleConnectionMonitor(connectionManager), initialDelay, heartbeatPeriod, unit);
    }

    @Override
    public HttpClient getHttpClient() {
        return client;
    }

    private final class IdleConnectionMonitor implements Runnable {
        PoolingClientConnectionManager connectionManager;

        public IdleConnectionMonitor(PoolingClientConnectionManager connectionManager) {
            this.connectionManager = connectionManager;
        }

        @Override
        public void run() {
            if (log.isInfoEnabled()) {
                log.info("release start connect count:=" + connectionManager.getTotalStats().getAvailable());
            }
            // Close expired connections
            connectionManager.closeExpiredConnections();
            // Optionally, close connections
            // that have been idle longer than readTimeout*2 MILLISECONDS
            connectionManager.closeIdleConnections(readTimeout * 2, TimeUnit.MILLISECONDS);

            if (log.isInfoEnabled()) {
                log.info("release end connect count:=" + connectionManager.getTotalStats().getAvailable());
            }

        }
    }
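The listing above hands the scheduler a `DaemonThreadFactory`, which is not shown. A minimal sketch of such a factory (the class name and constructor signature match the call site; the rest is an assumption about what the helper does — name the threads and mark them daemon so the monitor never blocks JVM shutdown):

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the DaemonThreadFactory used by the idle-connection monitor.
public class DaemonThreadFactory implements ThreadFactory {
    private final String namePrefix;
    private final AtomicInteger counter = new AtomicInteger(1);

    public DaemonThreadFactory(String namePrefix) {
        this.namePrefix = namePrefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, namePrefix + "-" + counter.getAndIncrement());
        t.setDaemon(true); // daemon: the monitor thread won't keep the JVM alive
        return t;
    }

    public static void main(String[] args) {
        Thread t = new DaemonThreadFactory("httpClient-con-monitor").newThread(() -> {});
        System.out.println(t.getName() + " daemon=" + t.isDaemon());
    }
}
```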


Problems found in testing:

    1. After switching to the connection pool, a large number of connections piled up in CLOSE_WAIT and were never released (the site could no longer serve search requests; only restarting the application freed them).


Cause: the response stream was not consumed/closed after the call, so once the server closed the connection, the pool never released it.

Problem code:



           HttpResponse res = client.execute(hget);
            statusCode = res.getStatusLine().getStatusCode();
            if(statusCode==HttpStatus.SC_OK){
                // The bug: because of this status check, the entity is never
                // consumed when the status is not 200, so the connection is
                // never returned to the pool
                String jsonResult = EntityUtils.toString(res.getEntity(), charset);
            }
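In HttpClient 4.2 the fix is to consume the entity in a `finally` block (e.g. `EntityUtils.consume(res.getEntity())`) so the connection goes back to the pool for every status code, not just 200. The pattern can be sketched with plain streams (`TrackedBody` and `handle` are made-up names standing in for the response entity and the calling code):

```java
import java.io.ByteArrayInputStream;

// Demonstrates the release pattern: read the body only on 200, but ALWAYS
// close/consume it in finally so the pooled connection can be reused.
public class ReleaseDemo {
    // Stand-in for a pooled response body; records whether it was closed.
    static class TrackedBody extends ByteArrayInputStream {
        boolean closed = false;
        TrackedBody(byte[] buf) { super(buf); }
        @Override public void close() { closed = true; }
    }

    /** Returns the body text for status 200, null otherwise; closes in all cases. */
    static String handle(int statusCode, TrackedBody body) {
        try {
            if (statusCode == 200) {
                StringBuilder sb = new StringBuilder();
                int c;
                while ((c = body.read()) != -1) sb.append((char) c);
                return sb.toString();
            }
            return null;
        } finally {
            body.close(); // the step the problem code above skipped for non-200
        }
    }

    public static void main(String[] args) {
        TrackedBody err = new TrackedBody("error page".getBytes());
        handle(500, err);
        System.out.println("closed after 500: " + err.closed);
    }
}
```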



    2. The connection pool could not keep long-lived connections to the search backend: after every request, search dropped the connection, making the pool essentially useless.

Cause:

  

  http://wiki.nginx.org/HttpCoreModule is the most complete reference; it documents three keepalive-related directives (keepalive_disable, keepalive_timeout, keepalive_requests).

keepalive_requests: the number of requests that can be made over one keep-alive connection. Default: 100.

Unfortunately, keepalive_requests cannot be set to unlimited; setting it to 65535 tested fine.
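Under that diagnosis, the server-side change is a config sketch like this (directive names are from the nginx docs linked above; 65535 is the value reported to test fine, and the timeout is an illustrative placeholder):

```nginx
http {
    keepalive_timeout  65;       # keep idle client connections open (illustrative value)
    keepalive_requests 65535;    # default 100 drops the pooled connection after
                                 # 100 requests; raise the cap instead
}
```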



    3. Search responses are large and saturated the bandwidth.

Solution: enable gzip compression on both the search side and in HttpClient.

1.     gzip compressed the payload by more than 50%, halving bandwidth usage.

2.     At the same 100M bandwidth, TPS doubled (from 450 to 950).

3.     gzip decompression on the client adds CPU load; CPU usage roughly tripled (from 14% to 44%).
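The ">50% saved" claim is easy to sanity-check with the JDK's own gzip classes (the sample payload below is made up; real ratios depend on the data):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Compresses a repetitive JSON-style payload and reports the size reduction.
public class GzipRatio {
    static byte[] gzip(byte[] data) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(data);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e); // in-memory streams should not fail
        }
    }

    public static void main(String[] args) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200; i++) {
            sb.append("{\"id\":").append(i).append(",\"title\":\"result item\"},");
        }
        byte[] raw = sb.toString().getBytes("UTF-8");
        byte[] packed = gzip(raw);
        System.out.println("raw=" + raw.length + " bytes, gzipped=" + packed.length + " bytes");
    }
}
```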

Enabling gzip in nginx:


gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
Enabling gzip on the client side:
client = new DefaultHttpClient(connectionManager);
        // Interceptor: advertise gzip support on outgoing requests
        client.addRequestInterceptor(new HttpRequestInterceptor() {

              public void process(
              final HttpRequest request,
              final HttpContext context) throws HttpException, IOException {
              if (!request.containsHeader("Accept-Encoding")) {
                     request.addHeader("Accept-Encoding", "gzip");
              }
            }
        });
              
        // Interceptor: transparently decompress gzip-encoded responses
        client.addResponseInterceptor(new HttpResponseInterceptor() {

            public void process(final HttpResponse response, 
            final HttpContext context) throws HttpException, IOException {
                  HttpEntity entity = response.getEntity();
                    if (entity != null) {
                        Header ceheader = entity.getContentEncoding();
                        if (ceheader != null) {
                            HeaderElement[] codecs = ceheader.getElements();
                            for (int i = 0; i < codecs.length; i++) {
                                if (codecs[i].getName().equalsIgnoreCase("gzip")) {
                                    response.setEntity(new GzipDecompressingEntity(response.getEntity()));
                                    return;
                                }
                            }
                        }
                    }
                }
        });



