Vert.x Core - the heart of Vert.x is a set of Java APIs

2016/04/19 17:59

At the heart of Vert.x is a set of Java APIs that we call Vert.x Core



Vert.x core provides functionality for things like:


  • Writing TCP clients and servers

  • Writing HTTP clients and servers including support for WebSockets

  • The Event bus

  • Shared data - local maps and clustered distributed maps

  • Periodic and delayed actions

  • Deploying and undeploying Verticles

  • Datagram Sockets

  • DNS client

  • File system access

  • High availability

  • Clustering

The functionality in core is fairly low level - you won’t find stuff like database access, authorisation or high level web functionality here - that kind of stuff you’ll find in Vert.x ext (extensions).


Vert.x core is small and lightweight. You just use the parts you want. It’s also entirely embeddable in your existing applications - we don’t force you to structure your applications in a special way just so you can use Vert.x.


You can use core from any of the other languages that Vert.x supports. But here’s a cool bit - we don’t force you to use the Java API directly from, say, JavaScript or Ruby - after all, different languages have different conventions and idioms, and it would be odd to force Java idioms on Ruby developers (for example). Instead, we automatically generate an idiomatic equivalent of the core Java APIs for each language.


From now on we’ll just use the word core to refer to Vert.x core.


If you are using Maven or Gradle, add the following dependency to the dependencies section of your project descriptor to access the Vert.x Core API:


  • Maven (in your pom.xml):

<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-core</artifactId>
  <version>3.2.1</version>
</dependency>

  • Gradle (in your build.gradle file):

compile 'io.vertx:vertx-core:3.2.1'

Let’s discuss the different concepts and features in core.


In the beginning there was Vert.x



You can’t do much in Vert.x-land unless you can commune with a Vertx object!


It’s the control centre of Vert.x and is how you do pretty much everything, including creating clients and servers, getting a reference to the event bus, setting timers, as well as many other things.


So how do you get an instance?


If you’re embedding Vert.x then you simply create an instance as follows:


Vertx vertx = Vertx.vertx();

If you’re using Verticles, you don’t need to create a Vertx instance yourself - the verticle is already provided with a reference to one (for example, via the vertx field of AbstractVerticle).


Most applications will only need a single Vert.x instance, but it’s possible to create multiple Vert.x instances if you require, for example, isolation between the event bus or different groups of servers and clients.

Specifying options when creating a Vertx object


When creating a Vertx object you can also specify options if the defaults aren’t right for you:


Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(40));

The VertxOptions object has many settings and allows you to configure things like clustering, high availability, pool sizes and various other settings. The Javadoc describes all the settings in detail.


Creating a clustered Vert.x object


If you’re creating a clustered Vert.x (See the section on the event bus for more information on clustering the event bus), then you will normally use the asynchronous variant to create the Vertx object.


This is because it usually takes some time (maybe a few seconds) for the different Vert.x instances in a cluster to group together. During that time, we don’t want to block the calling thread, so we give the result to you asynchronously.
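As a sketch, the asynchronous variant uses the Vertx.clusteredVertx factory method; the failure handling shown here is illustrative:

```java
Vertx.clusteredVertx(new VertxOptions(), res -> {
  if (res.succeeded()) {
    Vertx vertx = res.result(); // the clustered Vertx instance, delivered asynchronously
  } else {
    // clustering failed, e.g. no cluster manager found on the classpath
  }
});
```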


Are you fluent?


You may have noticed that in the previous examples a fluent API was used.


A fluent API is where multiple methods calls can be chained together. For example:


request.response().putHeader("Content-Type", "text/plain").write("some text").end();

This is a common pattern throughout Vert.x APIs, so get used to it.


Chaining calls like this allows you to write code that’s a little bit less verbose. Of course, if you don’t like the fluent approach we don’t force you to do it that way, you can happily ignore it if you prefer and write your code like this:


HttpServerResponse response = request.response();
response.putHeader("Content-Type", "text/plain");
response.write("some text");
response.end();

Don’t call us, we’ll call you.


The Vert.x APIs are largely event driven. This means that when things happen in Vert.x that you are interested in, Vert.x will call you by sending you events.


Some example events are:

  • a timer has fired

  • some data has arrived on a socket

  • some data has been read from disk

  • an exception has occurred

  • an HTTP server has received a request

You handle events by providing handlers to the Vert.x APIs. For example to receive a timer event every second you would do:


vertx.setPeriodic(1000, id -> {
  // This handler will get called every second
  System.out.println("timer fired!");
});

Or to receive an HTTP request:


server.requestHandler(request -> {
  // This handler will be called every time an HTTP request is received at the server
  request.response().end("hello world!");
});

Some time later when Vert.x has an event to pass to your handler Vert.x will call it asynchronously.


This leads us to some important concepts in Vert.x:


Don’t block me!


With very few exceptions (i.e. some file system operations ending in 'Sync'), none of the APIs in Vert.x block the calling thread.


If a result can be provided immediately, it will be returned immediately, otherwise you will usually provide a handler to receive events some time later.


Because none of the Vert.x APIs block threads that means you can use Vert.x to handle a lot of concurrency using just a small number of threads.


With a conventional blocking API the calling thread might block when:


  • Reading data from a socket

  • Writing data to disk

  • Sending a message to a recipient and waiting for a reply

  • … Many other situations

In all the above cases, when your thread is waiting for a result it can’t do anything else - it’s effectively useless.


This means that if you want a lot of concurrency using blocking APIs then you need a lot of threads to prevent your application grinding to a halt.


Threads have overhead in terms of the memory they require (e.g. for their stack) and in context switching.


For the levels of concurrency required in many modern applications, a blocking approach just doesn’t scale.


Reactor and Multi-Reactor


We mentioned before that Vert.x APIs are event driven - Vert.x passes events to handlers when they are available.


In most cases Vert.x calls your handlers using a thread called an event loop.


As nothing in Vert.x or your application blocks, the event loop can merrily run around delivering events to different handlers in succession as they arrive.


Because nothing blocks, an event loop can potentially deliver huge amounts of events in a short amount of time. For example a single event loop can handle many thousands of HTTP requests very quickly.


We call this the Reactor Pattern.


You may have heard of this before - for example Node.js implements this pattern.


In a standard reactor implementation there is a single event loop thread which runs around in a loop delivering all events to all handlers as they arrive.


The trouble with a single thread is it can only run on a single core at any one time, so if you want your single threaded reactor application (e.g. your Node.js application) to scale over your multi-core server you have to start up and manage many different processes.


Vert.x works differently here. Instead of a single event loop, each Vertx instance maintains several event loops. By default we choose the number based on the number of available cores on the machine, but this can be overridden.
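The default can be overridden through VertxOptions when creating the instance; a minimal sketch (the pool size here is just an illustrative value):

```java
// Use 8 event loops rather than the default derived from the number of cores
Vertx vertx = Vertx.vertx(new VertxOptions().setEventLoopPoolSize(8));
```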


This means a single Vertx process can scale across your server, unlike Node.js.


We call this pattern the Multi-Reactor Pattern to distinguish it from the single threaded reactor pattern.



Even though a Vertx instance maintains multiple event loops, any particular handler will never be executed concurrently, and in most cases (with the exception of worker verticles) will always be called using the exact same event loop.

The Golden Rule - Don’t Block the Event Loop


We already know that the Vert.x APIs are non blocking and won’t block the event loop, but that’s not much help if you block the event loop yourself in a handler.


If you do that, then that event loop will not be able to do anything else while it’s blocked. If you block all of the event loops in a Vertx instance then your application will grind to a complete halt!


So don’t do it! You have been warned.


Examples of blocking include:


  • Thread.sleep()

  • Waiting on a lock

  • Waiting on a mutex or monitor (e.g. synchronized section)

  • Doing a long lived database operation and waiting for a result

  • Doing a complex calculation that takes some significant time.

  • Spinning in a loop

If any of the above stop the event loop from doing anything else for a significant amount of time then you should go immediately to the naughty step, and await further instructions.


So… what is a significant amount of time?


How long is a piece of string? It really depends on your application and the amount of concurrency you require.


If you have a single event loop, and you want to handle 10000 http requests per second, then it’s clear that each request can’t take more than 0.1 ms to process, so you can’t block for any more time than that.


The maths is not hard and shall be left as an exercise for the reader.


If your application is not responsive it might be a sign that you are blocking an event loop somewhere. To help you diagnose such issues, Vert.x will automatically log warnings if it detects an event loop hasn’t returned for some time. If you see warnings like these in your logs, then you should investigate.


Thread vertx-eventloop-thread-3 has been blocked for 20458 ms

Vert.x will also provide stack traces to pinpoint exactly where the blocking is occurring.


If you want to turn off these warnings or change the settings, you can do that in the VertxOptions object before creating the Vertx object.
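As a sketch, the relevant VertxOptions setters look like this; the thresholds are illustrative, and in Vert.x 3.x the event loop limit is given in nanoseconds - check the VertxOptions Javadoc for the exact defaults and units:

```java
Vertx vertx = Vertx.vertx(new VertxOptions()
    .setBlockedThreadCheckInterval(5000)                    // check threads every 5 s (ms)
    .setMaxEventLoopExecuteTime(10L * 1000 * 1000 * 1000)); // warn after 10 s (ns)
```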


Running blocking code


In a perfect world, there will be no war or hunger, all APIs will be written asynchronously and bunny rabbits will skip hand-in-hand with baby lambs across sunny green meadows.


But… the real world is not like that. (Have you watched the news lately?)


Fact is, many, if not most libraries, especially in the JVM ecosystem have synchronous APIs and many of the methods are likely to block. A good example is the JDBC API - it’s inherently synchronous, and no matter how hard it tries, Vert.x cannot sprinkle magic pixie dust on it to make it asynchronous.


We’re not going to rewrite everything to be asynchronous overnight so we need to provide you a way to use "traditional" blocking APIs safely within a Vert.x application.


As discussed before, you can’t call blocking operations directly from an event loop, as that would prevent it from doing any other useful work. So how can you do this?


It’s done by calling executeBlocking specifying both the blocking code to execute and a result handler to be called back asynchronously when the blocking code has been executed.


vertx.executeBlocking(future -> {
  // Call some blocking API that takes a significant amount of time to return
  String result = someAPI.blockingMethod("hello");
  future.complete(result);
}, res -> {
  System.out.println("The result is: " + res.result());
});
By default, if executeBlocking is called several times from the same context (e.g. the same verticle instance) then the different executeBlocking are executed serially (i.e. one after another).


If you don’t care about ordering you can call executeBlocking specifying false as the ordered argument. In this case any executeBlocking may be executed in parallel on the worker pool.
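Sketched against the earlier example (someAPI.blockingMethod is the same placeholder as above), passing false for ordered looks like this:

```java
vertx.executeBlocking(future -> {
  String result = someAPI.blockingMethod("hello"); // the blocking call
  future.complete(result);
}, false, res -> { // ordered = false: calls may run in parallel on the worker pool
  System.out.println("The result is: " + res.result());
});
```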


An alternative way to run blocking code is to use a worker verticle.


A worker verticle is always executed with a thread from the worker pool.
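Deploying a worker verticle is a matter of setting the worker flag in DeploymentOptions; a minimal sketch (the verticle class name is hypothetical):

```java
DeploymentOptions options = new DeploymentOptions().setWorker(true);
vertx.deployVerticle("com.example.OrderProcessorVerticle", options);
```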

