Notes on "Migrating to Cloud-Native Application Architectures", Chapter 3: Migration Cookbook

Original
2019/03/02 15:02

New Features as Microservices

The first step in moving from a monolith toward microservices is to stop adding code to the monolith: build all new features as microservices.

 

 

The Anti-Corruption Layer

 

An anti-corruption layer's purpose is to allow the integration of two systems without allowing the domain model of one system to corrupt the domain model of the other. The anti-corruption layer is a way of creating API contracts that make the monolith look like other microservices.

 

Evans divides the implementation of anti-corruption layers into three submodules, the first two representing classic design patterns:

  • Facade

The purpose of the facade module here is to simplify the process of integrating with the monolith's interface. Importantly, it does not change the monolith's model, being careful not to couple translation and integration concerns.

  • Adapter

The adapter is where we define “services” that provide things our new features need. It knows how to take a request from our system, using a protocol that it understands, and make that request to the monolith’s facade(s).

  • Translator

The translator’s responsibility is to convert requests and responses between the domain model of the monolith and the domain model of the new microservice.

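As a minimal sketch of how these three submodules fit together — every class, method, and field name below is hypothetical, invented purely to illustrate the roles, not taken from the book:

```python
class LegacyMonolith:
    """Stand-in for the monolith, speaking its own legacy domain model."""
    def fetch_customer_record(self, cust_id):
        return {"CUST_ID": cust_id, "CUST_NM": "Ada Lovelace", "STS": "A"}

class MonolithFacade:
    """Facade: simplifies integration with the monolith's interface
    without changing the monolith's model."""
    def __init__(self, monolith):
        self.monolith = monolith
    def get_customer(self, cust_id):
        return self.monolith.fetch_customer_record(cust_id)

class Translator:
    """Translator: converts between the monolith's domain model and
    the new microservice's domain model."""
    STATUS = {"A": "active", "I": "inactive"}
    def to_microservice_model(self, record):
        return {"id": record["CUST_ID"],
                "name": record["CUST_NM"],
                "status": self.STATUS[record["STS"]]}

class Adapter:
    """Adapter: the 'service' our new feature calls; it speaks to the
    facade and hands the response to the translator."""
    def __init__(self, facade, translator):
        self.facade = facade
        self.translator = translator
    def customer(self, cust_id):
        return self.translator.to_microservice_model(
            self.facade.get_customer(cust_id))

acl = Adapter(MonolithFacade(LegacyMonolith()), Translator())
```

The new feature only ever sees the microservice-side model returned by `acl.customer(...)`; the legacy field names never leak past the translator.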

 

These three loosely coupled components solve three problems:

1. System integration

2. Protocol translation

3. Model translation


 

What remains is the location of the communication link. In DDD, Evans discusses two alternatives. The first, facade to system, is primarily useful when you can't access or alter the legacy system. Our focus here is on monoliths we do control, so we'll lean toward Evans' second suggestion, adapter to facade. Using this alternative, we build the facade into the monolith, allowing communications to occur between the adapter and the facade, as presumably it's easier to create this link between two things written explicitly for this purpose.


 

An anti-corruption layer can facilitate two-way communication: it can proxy requests from a microservice to the monolith just as well as requests from the monolith to a microservice.

 

Strangling the Monolith

Through a combination of extracted microservices and additional anti-corruption layers, we’ll build a new cloud-native system around the edges of the existing monolith.


 

Two criteria help us choose which components to extract:

  1. Identify bounded contexts within the monolith.
  2. Prioritize: decide which bounded contexts to extract first.
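As a toy illustration of the strangulation itself — the paths and handler names here are invented for this sketch — a routing layer can direct an extracted bounded context to its new microservice while every other request still reaches the monolith:

```python
def monolith_handler(path):
    """Everything not yet extracted is still served by the monolith."""
    return f"monolith handled {path}"

def orders_microservice_handler(path):
    """A bounded context that has been extracted into its own service."""
    return f"orders service handled {path}"

# Route prefixes claimed so far by extracted services; this table grows
# as more bounded contexts are strangled out of the monolith.
EXTRACTED = {"/orders": orders_microservice_handler}

def route(path):
    for prefix, handler in EXTRACTED.items():
        if path.startswith(prefix):
            return handler(path)
    return monolith_handler(path)
```

Each newly extracted context only needs one more entry in the routing table; the monolith shrinks without its callers noticing.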

 


 

Potential End States

This section is mainly about how far the decomposition should go before we can consider it finished.

 

There are basically two end states:

1. The monolith has been completely strangled to death. All bounded contexts have been extracted into microservices. The final step is to identify opportunities to eliminate anti-corruption layers that are no longer necessary.

2. The monolith has been strangled to a point where the cost of additional service extraction exceeds the return on the necessary development efforts. Some portions of the monolith may be fairly stable; we haven't changed them in years and they're doing their jobs. There may not be much value in moving these portions around, and the cost of maintaining the necessary anti-corruption layers to integrate with them may be low enough that we can take it on long-term.

 

Distributed Systems Recipes

 

Versioned and Distributed Configuration

As we scale up to larger systems, we sometimes want additional configuration capabilities:

• Changing logging levels of a running application in order to debug a production issue

• Changing the number of threads receiving messages from a message broker

• Reporting all configuration changes made to a production system to support regulatory audits

• Toggling features on/off in a running application

• Protecting secrets (such as passwords) embedded in configuration

 

 

In order to support these capabilities, we need a configuration management approach with the following features:

• Versioning

• Auditability

• Encryption

• Refresh without restart
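A minimal in-process sketch of three of these features — versioning, auditability, and refresh without restart (encryption is omitted). This is not the API of any real configuration server; all names are invented:

```python
import threading

class ConfigStore:
    """Toy versioned configuration store: every change bumps a version
    and is recorded for audit, and readers always see the latest value
    without restarting the application."""
    def __init__(self, initial):
        self._lock = threading.Lock()
        self._values = dict(initial)
        self._version = 1
        self.audit = []  # (version, key, old_value, new_value)

    def set(self, key, value):
        with self._lock:
            old = self._values.get(key)
            self._version += 1
            self._values[key] = value
            self.audit.append((self._version, key, old, value))

    def get(self, key):
        with self._lock:
            return self._values[key]

cfg = ConfigStore({"log_level": "INFO", "consumer_threads": 4})
cfg.set("log_level", "DEBUG")  # e.g. to debug a production issue
```

A running component that reads `cfg.get("log_level")` on each use picks up the change immediately, which is the essence of restart-free refresh.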


 

Service Registration/Discovery

A common architecture pattern in the cloud (Figure 3-3) is to have frontend (application) and backend (business) services. Backend services are often not accessible directly from the Internet but are rather accessed via the frontend services. The service registry provides a listing of all services and makes them available to frontend services through a client library (“Routing and Load Balancing” on page 39) which performs load balancing and routing to backend services.
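A minimal in-process sketch of the registry's two halves, registration and lookup. Real systems use a dedicated registry (e.g. Eureka or Consul) with heartbeats and TTLs; this illustrative version, with invented names, only shows the contract:

```python
class ServiceRegistry:
    """Toy service registry: backend instances register their addresses,
    and frontend services look up the current instance list by name."""
    def __init__(self):
        self._services = {}

    def register(self, name, address):
        self._services.setdefault(name, []).append(address)

    def deregister(self, name, address):
        self._services[name].remove(address)

    def lookup(self, name):
        # Return a copy so callers can't mutate the registry's state.
        return list(self._services.get(name, []))

registry = ServiceRegistry()
registry.register("accounts", "10.0.0.5:8080")
registry.register("accounts", "10.0.0.6:8080")
```

The frontend's client library would call `lookup("accounts")` and pick an instance, which is where routing and load balancing (next section) come in.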


 

Routing and Load Balancing

Basic round-robin load balancing is effective for many scenarios, but distributed systems in cloud environments often demand a more advanced set of routing and load balancing behaviors. These are commonly provided by various external, centralized load balancing solutions. However, it's often true that such solutions do not possess enough information or context to make the best choices for a given application as it attempts to communicate with its dependencies. Also, should such external solutions fail, these failures can cascade across the entire architecture.
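Keeping the balancing decision in the client sidesteps that single point of failure and lets the client use what it has observed. A hypothetical sketch of a client-side balancer that round-robins over instances but skips ones it has seen fail:

```python
import itertools

class ClientSideBalancer:
    """Toy client-side load balancer: round-robin over the known
    instances, skipping any the client has marked unhealthy."""
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)
        self._count = len(instances)
        self._unhealthy = set()

    def mark_unhealthy(self, instance):
        self._unhealthy.add(instance)

    def choose(self):
        # One full pass over the instance list is enough to find a
        # healthy candidate if any exists.
        for _ in range(self._count):
            candidate = next(self._cycle)
            if candidate not in self._unhealthy:
                return candidate
        raise RuntimeError("no healthy instances")

lb = ClientSideBalancer(["a:80", "b:80", "c:80"])
```

A real client library would also re-check unhealthy instances periodically and refresh the instance list from the service registry.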


 

 

Fault-Tolerance

Circuit Breakers

Circuit breakers insulate a service from its dependencies by preventing remote calls when a dependency is determined to be unhealthy, just as electrical circuit breakers protect homes from burning down due to excessive use of power. Circuit breakers are implemented as state machines. When in their closed state, calls are simply passed through to the dependency. If any of these calls fails, the failure is counted. When the failure count reaches a specified threshold within a specified time period, the circuit trips into the open state. In the open state, calls always fail immediately. After a predetermined period of time, the circuit transitions into a "half-open" state. In this state, calls are again attempted to the remote dependency. Successful calls transition the circuit breaker back into the closed state, while failed calls return the circuit breaker to the open state.
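The state machine described above can be sketched as follows. This is a simplified toy, not the implementation of Hystrix or any real library, and the thresholds are arbitrary:

```python
import time

class CircuitBreaker:
    """closed -> open after `threshold` failures within `window` seconds;
    open -> half-open after `cooldown` seconds; a half-open probe call
    closes the circuit on success or reopens it on failure."""
    def __init__(self, threshold=3, window=10.0, cooldown=5.0,
                 clock=time.monotonic):
        self.threshold, self.window, self.cooldown = threshold, window, cooldown
        self.clock = clock
        self.state = "closed"
        self.failures = []       # timestamps of recent failures
        self.opened_at = None

    def call(self, fn):
        if self.state == "open":
            if self.clock() - self.opened_at >= self.cooldown:
                self.state = "half-open"   # allow a single probe call
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self._record_failure()
            raise
        if self.state == "half-open":      # probe succeeded
            self.state = "closed"
            self.failures.clear()
        return result

    def _record_failure(self):
        now = self.clock()
        if self.state == "half-open":      # probe failed: reopen
            self.state, self.opened_at = "open", now
            return
        self.failures = [t for t in self.failures if now - t <= self.window]
        self.failures.append(now)
        if len(self.failures) >= self.threshold:
            self.state, self.opened_at = "open", now
```

Passing the clock in as a parameter makes the time-based transitions easy to exercise deterministically in tests.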


 

Bulkheads

Bulkheads partition a service in order to confine errors and prevent the entire service from failing due to failure in one area. They are named for the partitions that can be sealed to segment a ship into multiple watertight compartments, which can prevent damage in one compartment from sinking the entire ship. Software systems can utilize bulkheads in many ways. Simply partitioning into microservices is our first line of defense. The partitioning of application processes into Linux containers, so that one process cannot take over an entire machine, is another. Yet another example is the division of parallelized work into different thread pools.
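The thread-pool form of bulkheading can be sketched like this (the pool names, sizes, and call targets are invented for illustration): each dependency gets its own bounded pool, so a slow or failing dependency can exhaust its own pool but cannot starve the other one.

```python
from concurrent.futures import ThreadPoolExecutor

# Separate, bounded pools per dependency: the bulkheads.
payments_pool = ThreadPoolExecutor(max_workers=4, thread_name_prefix="payments")
reporting_pool = ThreadPoolExecutor(max_workers=2, thread_name_prefix="reporting")

def call_payments(order_id):
    """Work against the payments dependency runs only in its own pool."""
    return payments_pool.submit(lambda: f"charged order {order_id}")

def call_reporting(query):
    """Reporting work is confined to its own, smaller pool."""
    return reporting_pool.submit(lambda: f"report for {query}")
```

If the reporting dependency hangs, at most two threads block; payment processing keeps its four workers and continues unaffected.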


 

API Gateways/Edge Services

Why do we need API gateways and edge services?

  • Latency

Because network latency exists, applications need to access these services concurrently. Capturing and implementing these concurrency patterns once on the server side is cheaper and less error-prone than doing the same thing on every device platform.

Another source of latency is the size of the response payload.

  • Round trips

Even when network speed is not a problem, communicating with a large number of microservices still troubles mobile developers. Battery drain on mobile devices is largely caused by network overhead, so mobile developers try to deliver the expected user experience with as few server-side calls as possible.

  • Device diversity

The diversity of devices in the mobile ecosystem is enormous. Enterprises must cope with an ever-growing set of differences across their customer base, including:

  1. Manufacturers
  2. Device types
  3. Form factors
  4. Device sizes
  5. Programming languages
  6. Operating systems
  7. Runtime environments
  8. Concurrency models
  9. Supported network protocols

 

The API Gateway pattern is targeted at shifting the burden of these requirements from the device developer to the server side. API gateways are simply a special class of microservices that meet the needs of a single client application, and provide it with a single entry point to the backend. They access tens (or hundreds) of microservices concurrently with each request, aggregating the responses and transforming them to meet the client application's needs. They also perform protocol translation (e.g., HTTP to AMQP) when necessary.
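A toy fan-out/aggregate sketch of that behavior. The backend functions stand in for real network calls to separate microservices, and all names and response shapes are invented:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for backend microservices; in reality these would be
# HTTP or AMQP requests to separate processes.
def fetch_profile(user_id):
    return {"name": "Ada"}

def fetch_orders(user_id):
    return [{"id": 1}, {"id": 2}]

def fetch_recommendations(user_id):
    return ["book", "kettle"]

def gateway_home_screen(user_id):
    """Single entry point for one client app: fan out to the backends
    concurrently, then aggregate and reshape for that client."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        profile = pool.submit(fetch_profile, user_id)
        orders = pool.submit(fetch_orders, user_id)
        recs = pool.submit(fetch_recommendations, user_id)
        return {"greeting": f"Hello, {profile.result()['name']}",
                "order_count": len(orders.result()),
                "recommended": recs.result()}
```

The client makes one round trip and receives exactly the shape its home screen needs, rather than three raw backend responses.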


 

API gateways can be implemented using any language, runtime, or framework that supports web programming, concurrency patterns, and the protocols necessary to communicate with the target microservices. Popular choices include Node.js (due to its reactive programming model) and the Go programming language (due to its simple concurrency model).


 

Summary

In this chapter we walked through two sets of recipes that can help us move toward a cloud-native application architecture:

Decomposition

We break down monolithic applications by:

1. Building all new features as microservices.

2. Integrating new microservices with the monolith via anti-corruption layers.

3. Strangling the monolith by identifying bounded contexts and extracting services.

 

Distributed systems

We compose distributed systems by:

1. Versioning, distributing, and refreshing configuration via a configuration server and management bus.

2. Dynamically discovering remote dependencies.

3. Decentralizing load balancing decisions.

4. Preventing cascading failures through circuit breakers and bulkheads.

5. Integrating on behalf of specific clients via API gateways.

 

