1mln or more online users - Tigase cluster

Reposted by 今幕明 on 2014/11/21 12:06
By admin on May 29, 2011    

I have been working on clustering code improvements in the Tigase server for the last few months to make it more reliable and scale better. In the article about XMPP Service sharding - Tigase on Intel ATOMs, I presented some preliminary results on a small scale.

In recent weeks I had a great opportunity to run several tests on a Tigase cluster of 10 nodes on much better hardware. The goal was to reach 1mln online users connected to the cluster, generating sensible traffic. More tests were run to see how the cluster behaves with different numbers of connections and under different loads.

Below are charts taken from two tests: one with a peak of 1mln 128k online users and moderate traffic, and a second with a peak of 1mln 685k online users and very reduced traffic.

All tests were carried out until the number of connections reached its maximum, and for some time after that, to make sure the service remained stable once connections started dropping.

The test for 1mln online users ran with moderate traffic, that is, a message from each online user every 400 seconds and a status change every 2800 seconds.

The other test, for 1mln 500k online users, ran with no additional traffic except user login, roster retrieval, initial presence broadcast, and offline presence broadcast on connection close.

The roster size for all online users was 150 elements, of which 20% (30) were online during the test, and the new-connection rate was 100/sec.
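The parameters above imply some aggregate numbers worth spelling out. A back-of-the-envelope sketch in Python (all figures are derived from the intervals quoted in the post, not measured values):

```python
# Back-of-the-envelope numbers for the 1mln-user moderate-traffic test,
# derived from the parameters quoted above (not measured values).

USERS = 1_000_000          # target online users
CONN_RATE = 100            # new connections per second
MSG_INTERVAL = 400         # seconds between messages per user
PRESENCE_INTERVAL = 2800   # seconds between status changes per user
ROSTER_ONLINE = 30         # online roster contacts (20% of 150)

# Time just to ramp all connections up at 100/sec:
ramp_hours = USERS / CONN_RATE / 3600
print(f"ramp-up time: {ramp_hours:.1f} h")             # ~2.8 h

# Aggregate message rate once everyone is connected:
msg_per_sec = USERS / MSG_INTERVAL
print(f"messages/sec: {msg_per_sec:.0f}")              # 2500

# Each status change fans out to ~30 online contacts:
presence_per_sec = USERS / PRESENCE_INTERVAL * ROSTER_ONLINE
print(f"presence deliveries/sec: {presence_per_sec:.0f}")  # ~10714
```

Even this "moderate" scenario therefore means thousands of stanzas per second cluster-wide, which puts the later remarks about CPU headroom in context.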

If you are interested in more details, please continue reading...

I guess the first question that comes to your mind is: why such low traffic? Especially looking at the presented charts, there is certainly room for more.

The CPU would most likely handle more, probably at least twice as much traffic, and memory usage shouldn't grow much either, as traffic generates only temporary objects.

Indeed, the "average" traffic was estimated at a message every 200 seconds and a presence broadcast every 20 minutes on each user connection.

The "high" traffic was estimated at a message every 100 seconds and a presence broadcast every 10 minutes.

Unfortunately, as always with load tests, the problem was generating enough traffic. I used Tsung 1.3 for testing, which did a really good job simulating user connections from 10 other machines; however, it just couldn't do more than that.

Test environment used for tests

I had 21 identical machines at my disposal for the duration of the tests: 2 x Xeon Quad 2.0GHz, 16GB RAM, 750GB SATA HDD, 1Gb Ethernet.

One machine ran Ubuntu Server 9.04 and was used as the database, with MySQL 5.1 installed and tuned for the test.

10 machines ran Ubuntu Server 9.04 with the Tigase server installed in cluster mode, with Linux kernel and GC settings tuned for the test. The Tigase server was a version from SVN with some not-yet-committed changes.

10 machines ran Proxmox 1.3 with Debian 5 in virtual machines. Tsung 1.3 on Erlang R13B01 was used as the traffic and load generator.
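The login path also implies a steady load on that single database machine during ramp-up. A rough sketch, assuming one authentication lookup and one full roster fetch per login (a simplification of mine; caching and the actual query count are not described in the post):

```python
# DB load implied by the ramp-up figures above (derived, not measured).
CONN_RATE = 100      # new logins per second
ROSTER_SIZE = 150    # roster items fetched per login

auth_lookups_per_sec = CONN_RATE                 # at least 1 auth query per login
roster_rows_per_sec = CONN_RATE * ROSTER_SIZE    # rows read for roster retrieval
print(auth_lookups_per_sec, roster_rows_per_sec)  # 100 15000
```

Sustaining roughly 15k roster rows per second for hours is presumably why the MySQL instance needed dedicated tuning.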

Results

As we can see on the attached charts, both tests were quite successful.

Of course, nobody wants to run a service for 1mln 600k online users with idle connections; the second test was executed only to check the installation's limits. As the memory chart shows, the server completely used up its memory, so with 16GB of RAM not much more is possible. Traffic stayed at quite a stable level, as it was generated only by new user connections in the first phase, then by both new and closing connections in the second phase (hence the CPU load jump), and by closing connections only in the third phase.
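The memory exhaustion in this idle-connection test allows a crude per-connection estimate. A sketch (an upper bound only, since each node's 16GB also holds the OS, the JVM itself, and per-user session state; these are my derived figures, not numbers from the post):

```python
# Crude per-connection memory bound from the idle-connection test above.
NODES = 10
RAM_PER_NODE_GB = 16
PEAK_CONNS = 1_685_000   # peak cluster-wide connections

conns_per_node = PEAK_CONNS / NODES
kb_per_conn = RAM_PER_NODE_GB * 1024 * 1024 / conns_per_node
print(f"{conns_per_node:.0f} connections/node, "
      f"~{kb_per_conn:.0f} KB per connection (upper bound)")
```

So each node carried roughly 168k connections in about 16GB, i.e. on the order of 100 KB per connection at most, which matches the observation that 16GB of RAM was the limiting factor.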

Much more interesting are the charts for the 1mln online users test with traffic on each connection. We can clearly see "steps" on the cluster traffic chart and less clear steps on the session manager traffic chart. They are related to the presence-update "wave" which started every 2800 seconds. CPU usage stayed at about 60% at peak time, with plenty of room for more traffic. Memory consumption was quite high, at about 70% at the peak number of connections.

Other tests

As I mentioned before, I have run several tests to see how the server works under different conditions. There is certainly no room here to present all the charts; however, I could post them if there is interest. Please send me a message or add a comment to the article if you want to see more charts.

The server was tested under different loads:

  1. A message every 100 seconds and presence broadcast every 700 seconds on each connection.

  2. A message every 200 seconds and presence broadcast every 1400 seconds on each connection.

  3. A message every 400 seconds and presence broadcast every 2800 seconds on each connection.

  4. A message every 800 seconds and presence broadcast every 5600 seconds on each connection.

  5. No traffic except packets related to user login, roster retrieval, initial presence broadcast and offline presence broadcast.
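The five load levels can be tabulated as aggregate stanza rates for a given number of online users. A small sketch (the intervals are the ones listed above; the tabulation itself is mine and assumes users are spread evenly over each interval):

```python
# Aggregate stanza rates per load level, for a given number of online users.
# Intervals are the ones listed above; the tabulation itself is mine.

LOADS = {
    1: (100, 700),    # (message interval s, presence interval s)
    2: (200, 1400),
    3: (400, 2800),
    4: (800, 5600),
    5: (None, None),  # login/roster/presence only, no steady traffic
}

def stanza_rates(users, level):
    """Return (messages/sec, presence broadcasts/sec) across all users."""
    msg_iv, pres_iv = LOADS[level]
    if msg_iv is None:
        return (0.0, 0.0)
    return (users / msg_iv, users / pres_iv)

for level in LOADS:
    m, p = stanza_rates(1_000_000, level)
    print(f"load {level}: {m:7.0f} msg/s, {p:6.0f} presence/s")
```

Each step down the list halves the steady-state rate, which is why load 1 was only attempted at the smaller connection counts below.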

Other tests I have run are listed below:

  1. 250k connections over plain TCP with load 1

  2. 250k connections over SSL with load 1

  3. 500k connections over plain TCP with load 1 and 2

  4. 500k connections over SSL with load 1, 2 and 3

  5. 750k connections over plain TCP with load 2 and 3

  6. 1mln connections over plain TCP with load 2 and 3

  7. 1mln 500k connections over plain TCP with load 5

Please note that the given max number of connections is a target number; actual tests usually exceeded it.

Charts

All charts display plots for all 10 cluster nodes, with a different colour for each node. In most cases only one plot (blue) is visible, as user distribution was very even and hence the load was the same on each node. This is especially confusing on the connections chart, where all 10 plots look like a single blue line.

While the chart plots display values for a particular node, the chart title displays the sum for all nodes; the max is the maximum total registered by the monitor.


Reposted from: http://www.tigase.net/blog-entry/1mln-or-more-online-users-tigase-cluster
