1mln or more online users - Tigase Cluster
By admin on May 29, 2011    

I have been working on clustering code improvements in the Tigase server for the last few months to make it more reliable and scale better. In the article XMPP Service sharding - Tigase on Intel ATOMs I presented some preliminary results on a small scale.

In the last few weeks I had a great opportunity to run several tests on a Tigase cluster of 10 nodes on much better hardware. The goal was to have 1mln online users connected to the cluster generating sensible traffic. More tests were run to see how the cluster behaves with different numbers of connections and under different loads.

Below are charts taken from two tests: one with a peak of 1mln 128k online users and moderate traffic, and a second with a peak of 1mln 685k online users and very reduced traffic.

All tests were carried out until the number of connections reached its maximum, and for some time after that, to make sure the service stayed stable once connections started dropping.

The test for 1mln online users ran with moderate traffic, that is, a message from each online user every 400 seconds and a status change every 2800 seconds.

The other test, for 1mln 500k online users, ran with no additional traffic except user login, roster retrieval, initial presence broadcast and the offline presence broadcast on connection close.

The roster size for all online users was 150 elements, of which 20% (30) were online during the test, and the new-connection rate was 100 per second.
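As a back-of-the-envelope check on what this traffic model means at the cluster level, the sketch below converts the per-connection intervals and the roster fan-out into aggregate stanza rates. It assumes each chat message has a single recipient and each presence change fans out to the ~30 online roster contacts mentioned above; these are estimates derived from the numbers in this article, not values measured during the tests.

```python
# Rough estimate of the aggregate stanza rates implied by the 1mln-user test:
# a message every 400 s, a status change every 2800 s, 30 online roster
# contacts per user, and 100 new connections per second.

ONLINE_USERS = 1_000_000
MESSAGE_INTERVAL_S = 400       # one message sent per user every 400 s
PRESENCE_INTERVAL_S = 2800     # one status change per user every 2800 s
ONLINE_ROSTER_CONTACTS = 30    # 20% of a 150-element roster
NEW_CONNECTIONS_PER_S = 100

# Messages: each sent message is delivered to one recipient (assumption),
# so the cluster routes roughly twice the send rate in message stanzas.
messages_sent_per_s = ONLINE_USERS / MESSAGE_INTERVAL_S                    # ~2,500/s
message_stanzas_per_s = 2 * messages_sent_per_s                            # ~5,000/s

# Presence: each status change is broadcast to the online roster contacts.
presence_changes_per_s = ONLINE_USERS / PRESENCE_INTERVAL_S                # ~357/s
presence_stanzas_per_s = presence_changes_per_s * ONLINE_ROSTER_CONTACTS   # ~10,700/s

# Logins: every new connection triggers auth, roster retrieval and an
# initial presence broadcast to the online roster contacts.
login_presence_stanzas_per_s = NEW_CONNECTIONS_PER_S * ONLINE_ROSTER_CONTACTS  # ~3,000/s

total = message_stanzas_per_s + presence_stanzas_per_s + login_presence_stanzas_per_s
print(f"~{total:,.0f} stanzas/s across the cluster, "
      f"~{total / 10:,.0f} per node on a 10-node cluster")
```

On a 10-node cluster this works out to under 2,000 stanzas per second per node, which is consistent with the CPU headroom discussed below.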

If you are interested in more details, please continue reading...

I guess the first question that comes to your mind is: why such low traffic? Especially looking at the presented charts, there is certainly room for more.

The CPU would most likely handle more, probably at least twice as much traffic, and memory usage shouldn't grow much either, as traffic generates only temporary objects.

Indeed, the average traffic was estimated at a message every 200 seconds and a presence broadcast every 20 minutes on each user connection.

The "high" traffic was estimated to a message every 100 seconds and presence broadcast every 10minutes.

Unfortunately, as always with load tests, the problem was generating enough traffic. I used Tsung 1.3 for testing, which did a really good job simulating user connections from 10 other machines; however, it just couldn't do more than that.
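To put the load-generation side in perspective, a similar rough calculation shows what each Tsung machine had to sustain, assuming an even split of connections across the 10 generator machines; again an estimate based on the numbers above, not a measurement.

```python
# What each of the 10 Tsung load generators had to carry in the
# 1,128,000-connection test, assuming connections were split evenly.

PEAK_CONNECTIONS = 1_128_000
TSUNG_MACHINES = 10
NEW_CONNECTIONS_PER_S = 100    # cluster-wide arrival rate used in the tests

conns_per_generator = PEAK_CONNECTIONS / TSUNG_MACHINES        # ~112,800
ramp_up_seconds = PEAK_CONNECTIONS / NEW_CONNECTIONS_PER_S     # ~11,280 s
print(f"~{conns_per_generator:,.0f} open connections per Tsung machine, "
      f"~{ramp_up_seconds / 3600:.1f} h just to ramp up to the peak")
```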

Test environment

I had 21 identical machines at my disposal for the duration of the tests: 2 x quad-core Xeon 2.0GHz, 16GB RAM, 750GB SATA HDD, 1Gbit Ethernet.

One machine ran Ubuntu Server 9.04 and was used as the database server, with MySQL 5.1 installed and tuned for the test.

10 machines ran Ubuntu Server 9.04 with the Tigase server installed in cluster mode, with Linux kernel and GC settings tuned for the test. The Tigase server was a version from SVN with some not-yet-committed changes.

10 machines ran Proxmox 1.3 with Debian 5 in virtual machines. Tsung 1.3 on Erlang R13B01 was used as the traffic and load generator.

Results

As we can see in the attached charts, both tests were quite successful.

Of course, nobody wants to run a service for 1mln 600k online users with idle connections. The second test was executed only to check the limits of the installation. As we can see on the memory chart, the server completely used up its memory, so with 16GB of RAM not much more is possible. Traffic stayed at quite a stable level, as it was generated only by new user connections in the first phase, then by both new and closing connections in the second phase (hence the CPU load jump), and by closing connections only in the third phase.
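The memory ceiling can also be translated into a rough per-session figure. The sketch below assumes roughly 14 GB of each node's 16 GB was usable as JVM heap; the article does not state the actual heap size, so treat that figure, and therefore the result, as an order-of-magnitude guess rather than a measurement.

```python
# Order-of-magnitude estimate of memory per online session in the
# 1,685,000-connection test.  The ~14 GB usable heap per node is an
# assumption; the article only says the 16 GB machines ran out of memory.

PEAK_CONNECTIONS = 1_685_000
CLUSTER_NODES = 10
ASSUMED_USABLE_HEAP_BYTES = 14 * 1024**3   # assumption, not from the article

sessions_per_node = PEAK_CONNECTIONS / CLUSTER_NODES            # ~168,500
bytes_per_session = ASSUMED_USABLE_HEAP_BYTES / sessions_per_node
print(f"~{bytes_per_session / 1024:.0f} KB per online session "
      f"(connection + session data + 150-element roster)")
```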

Much more interesting are the charts for the 1mln online users test with traffic on each connection. We can clearly see "steps" on the cluster traffic chart, and less clear steps on the session manager traffic chart. They are related to the presence update "wave", which started every 2800 seconds. CPU usage stayed at about 60% at peak time, with plenty of room for more traffic. Memory consumption was quite high, at about 70% at the peak number of connections.

Other tests

As I mentioned before, I ran several tests to see how the server works under different conditions. There is certainly no room here to present all the charts; however, I can post them if there is interest. Please send me a message or add a comment to the article if you want to see more charts.

The server was tested under different loads:

  1. A message every 100 seconds and presence broadcast every 700 seconds on each connection.

  2. A message every 200 seconds and presence broadcast every 1400 seconds on each connection.

  3. A message every 400 seconds and presence broadcast every 2800 seconds on each connection.

  4. A message every 800 seconds and presence broadcast every 5600 seconds on each connection.

  5. No traffic except packets related to user login, roster retrieval, initial presence broadcast and offline presence broadcast.

Other tests I have run are listed below:

  1. 250k connections over plain TCP with load 1

  2. 250k connections over SSL with load 1

  3. 500k connections over plain TCP with load 1 and 2

  4. 500k connections over SSL with load 1, 2 and 3

  5. 750k connections over plain TCP with load 2 and 3

  6. 1mln connections over plain TCP with load 2 and 3

  7. 1mln 500k connections over plain TCP with load 5

Please note that the given maximum number of connections is a target number; the actual tests usually reached more.
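The load levels and connection counts above can be combined into rough per-scenario stanza rates. The small helper below does this under the same single-recipient and 30-contact fan-out assumptions as the earlier sketch; it is an illustration of the test matrix, not measured data.

```python
# Load levels 1-4 from the list above: (message interval, presence interval)
# in seconds per connection.  Load 5 has no steady-state traffic.
LOAD_LEVELS = {
    1: (100, 700),
    2: (200, 1400),
    3: (400, 2800),
    4: (800, 5600),
}
ONLINE_ROSTER_CONTACTS = 30    # same assumption as in the earlier sketch


def stanza_rate(connections: int, load: int) -> float:
    """Rough cluster-wide stanza rate: sent plus delivered messages,
    plus presence fan-out.  Load 5 (login-only traffic) returns 0."""
    if load not in LOAD_LEVELS:
        return 0.0
    msg_interval, presence_interval = LOAD_LEVELS[load]
    messages = 2 * connections / msg_interval
    presences = connections / presence_interval * ONLINE_ROSTER_CONTACTS
    return messages + presences


# Example: test 6 above, 1mln plain-TCP connections under loads 2 and 3.
for load in (2, 3):
    print(f"1,000,000 connections, load {load}: "
          f"~{stanza_rate(1_000_000, load):,.0f} stanzas/s")
```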

Charts

All charts display plots for all 10 cluster nodes, with a different colour for each node. In most cases only one plot (blue) is visible, as user distribution was very even and hence the load was the same on every node. This is especially confusing on the connections chart, where all 10 plots look like a single blue line.

While the chart plots display values for a particular node, the chart title displays the sum for all nodes; the max is the maximum total registered by the monitor.


Reposted from: http://www.tigase.net/blog-entry/1mln-or-more-online-users-tigase-cluster
