Study notes on *Migrating to Cloud-Native Application Architectures*, Chapter 3: Migration Cookbook

2019/03/02 15:02

New Features as Microservices




The Anti-Corruption Layer


An anti-corruption layer's purpose is to allow the integration of two systems without allowing the domain model of one system to corrupt the domain model of the other. The anti-corruption layer is a way of creating API contracts that make the monolith look like other microservices.



Evans divides the implementation of anti-corruption layers into three submodules, the first two representing classic design patterns:

  • Facade

The purpose of the facade module here is to simplify the process of integrating with the monolith's interface. Importantly, it does not change the monolith's model, being careful not to couple translation and integration concerns.

  • Adapter

The adapter is where we define “services” that provide things our new features need. It knows how to take a request from our system, using a protocol that it understands, and make that request to the monolith’s facade(s).

  • Translator

The translator’s responsibility is to convert requests and responses between the domain model of the monolith and the domain model of the new microservice.
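The three submodules can be sketched in code. This is a minimal illustrative sketch, not the book's implementation; all names (`MonolithFacade`, `Translator`, `CustomerAdapter`, the `CUST_ID`/`CUST_NM` legacy fields) are hypothetical.

```python
# Hypothetical sketch of an anti-corruption layer's three submodules.

class MonolithFacade:
    """Facade: simplifies the monolith's interface without changing its model."""
    def __init__(self, monolith):
        self.monolith = monolith

    def get_customer_record(self, customer_id):
        # Delegates to the monolith's legacy API; returns its raw model untouched.
        return self.monolith.fetch_customer(customer_id)

class Translator:
    """Translator: converts between the monolith's model and the microservice's."""
    @staticmethod
    def to_microservice_model(legacy_record):
        return {
            "id": legacy_record["CUST_ID"],
            "name": legacy_record["CUST_NM"].strip(),
        }

class CustomerAdapter:
    """Adapter: the 'service' our new feature calls, speaking our own protocol."""
    def __init__(self, facade, translator):
        self.facade = facade
        self.translator = translator

    def get_customer(self, customer_id):
        legacy = self.facade.get_customer_record(customer_id)
        return self.translator.to_microservice_model(legacy)
```

The new microservice only ever talks to the adapter, so the monolith's model never leaks into its own domain model.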



These three loosely coupled components solve three problems:

1. System integration

2. Protocol translation

3. Model translation





What remains is the location of the communication link. In DDD, Evans discusses two alternatives. The first, facade to system, is primarily useful when you can't access or alter the legacy system. Our focus here is on monoliths we do control, so we'll lean toward Evans' second suggestion, adapter to facade. Using this alternative, we build the facade into the monolith, allowing communications to occur between the adapter and the facade, as presumably it's easier to create this link between two things written explicitly for this purpose.





Strangling the Monolith

Through a combination of extracted microservices and additional anti-corruption layers, we’ll build a new cloud-native system around the edges of the existing monolith.



Two criteria help us choose which components to extract:

  1. Identify bounded contexts within the monolith.
  2. Prioritize the extraction of those contexts, starting with the ones whose migration offers the most benefit.






Potential End States



There are basically two end states:

1. The monolith has been completely strangled to death. All bounded contexts have been extracted into microservices. The final step is to identify opportunities to eliminate anti-corruption layers that are no longer necessary.

2. The monolith has been strangled to a point where the cost of additional service extraction exceeds the return on the necessary development efforts. Some portions of the monolith may be fairly stable; we haven't changed them in years and they're doing their jobs. There may not be much value in moving these portions around, and the cost of maintaining the necessary anti-corruption layers to integrate with them may be low enough that we can take it on long term.





Distributed Systems Recipes


Versioned and Distributed Configuration

As we scale up to larger systems, we sometimes want additional configuration capabilities:

• Changing logging levels of a running application in order to debug a production issue

• Changing the number of threads receiving messages from a message broker

• Reporting all configuration changes made to a production system to support regulatory audits

• Toggling features on/off in a running application

• Protecting secrets (such as passwords) embedded in configuration





In order to support these capabilities, we need a configuration management approach with the following features:

• Versioning

• Auditability

• Encryption

• Refresh without restart
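The four features above can be sketched together. This is an illustrative toy, not a real configuration server: production systems (for example, Spring Cloud Config with a Git backend and a management bus) add persistence, push-based refresh, and real encryption. All names here are hypothetical.

```python
# Sketch of versioned, auditable configuration with refresh-without-restart.

class ConfigServer:
    """Holds versioned configuration; every change bumps the version."""
    def __init__(self, initial):
        self.version = 1
        self._config = dict(initial)
        self.audit_log = []  # (version, who, key, value) — supports audits

    def update(self, key, value, who):
        self._config[key] = value
        self.version += 1
        self.audit_log.append((self.version, who, key, value))

    def snapshot(self):
        return self.version, dict(self._config)

class AppConfig:
    """Client-side view; refresh() applies changes while the app keeps running."""
    def __init__(self, server):
        self.server = server
        self.version, self.values = server.snapshot()

    def refresh(self):
        version, values = self.server.snapshot()
        if version != self.version:  # only apply when something changed
            self.version, self.values = version, values
```

A running application would call `refresh()` on a timer or in response to a bus event, picking up (say) a new logging level without a restart.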




Service Registration/Discovery

A common architecture pattern in the cloud (Figure 3-3) is to have frontend (application) and backend (business) services. Backend services are often not accessible directly from the Internet but are rather accessed via the frontend services. The service registry provides a listing of all services and makes them available to frontend services through a client library (“Routing and Load Balancing” on page 39) which performs load balancing and routing to backend services.
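A service registry can be reduced to a very small core: backends register their instances, frontends look them up by name. This is an illustrative in-memory sketch; real registries such as Eureka or Consul add heartbeats, leases, health checks, and replication.

```python
# Minimal illustrative service registry (names and addresses are made up).

class ServiceRegistry:
    def __init__(self):
        self._services = {}  # service name -> set of "host:port" instances

    def register(self, name, instance):
        self._services.setdefault(name, set()).add(instance)

    def deregister(self, name, instance):
        # Instances that shut down (or fail health checks) are removed.
        self._services.get(name, set()).discard(instance)

    def lookup(self, name):
        # Frontend services fetch this list and balance across it client-side.
        return sorted(self._services.get(name, set()))
```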



Routing and Load Balancing

Basic round-robin load balancing is effective for many scenarios, but distributed systems in cloud environments often demand a more advanced set of routing and load balancing behaviors. These are commonly provided by various external, centralized load balancing solutions. However, it's often true that such solutions do not possess enough information or context to make the best choices for a given application as it attempts to communicate with its dependencies. Also, should such external solutions fail, these failures can cascade across the entire architecture.
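Moving the decision into the client removes the central load balancer as a single point of failure. A minimal sketch of client-side round-robin balancing (real client-side balancers such as Ribbon layer health and zone awareness on top of this):

```python
import itertools

# Each client instance cycles through the backend instances it obtained
# from the service registry; no central balancer sits on the request path.

class RoundRobinBalancer:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def choose(self):
        return next(self._cycle)
```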




Fault-Tolerance

Circuit breakers

Circuit breakers insulate a service from its dependencies by preventing remote calls when a dependency is determined to be unhealthy, just as electrical circuit breakers protect homes from burning down due to excessive use of power. Circuit breakers are implemented as state machines. When in their closed state, calls are simply passed through to the dependency. If any of these calls fails, the failure is counted. When the failure count reaches a specified threshold within a specified time period, the circuit trips into the open state. In the open state, calls always fail immediately. After a predetermined period of time, the circuit transitions into a "half-open" state. In this state, calls are again attempted to the remote dependency. Successful calls transition the circuit breaker back into the closed state, while failed calls return the circuit breaker to the open state.
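The state machine described above can be sketched directly. This is a simplified illustration (production libraries such as Hystrix or resilience4j use rolling time windows and expose metrics); the thresholds and names here are made up.

```python
import time

# Closed -> (failures reach threshold) -> Open -> (timeout elapses)
#   -> Half-open -> success: Closed / failure: Open

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0,
                 clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock  # injectable for testing
        self.state = "closed"
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.state == "open":
            if self.clock() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"  # probe the dependency again
            else:
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = self.clock()
            raise
        else:
            self.state = "closed"
            self.failures = 0
            return result
```

Note that the open state fails immediately without touching the dependency, which is what gives the unhealthy dependency room to recover.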



Bulkheads

Bulkheads partition a service in order to confine errors and prevent the entire service from failing due to failure in one area. They are named for partitions that can be sealed to segment a ship into multiple watertight compartments. This can prevent damage from causing the entire ship to sink. Software systems can utilize bulkheads in many ways. Simply partitioning into microservices is our first line of defense. The partitioning of application processes into Linux containers so that one process cannot take over an entire machine is another. Yet another example is the division of parallelized work into different thread pools.
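The thread-pool variant can be sketched briefly: give each downstream dependency its own bounded pool, so a slow dependency exhausts only its own compartment instead of every thread in the service. The dependency names and pool sizes below are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# One bounded thread pool per dependency = one watertight compartment each.

class Bulkheads:
    def __init__(self, sizes):
        self.pools = {name: ThreadPoolExecutor(max_workers=n)
                      for name, n in sizes.items()}

    def submit(self, dependency, fn, *args):
        # Work for each dependency is confined to that dependency's pool.
        return self.pools[dependency].submit(fn, *args)

bulkheads = Bulkheads({"inventory": 4, "recommendations": 2})
future = bulkheads.submit("inventory",
                          lambda sku: {"sku": sku, "stock": 7}, "A-1")
```

If the (hypothetical) recommendations service hangs, at most two threads block on it; inventory calls keep flowing through their own pool.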



API Gateways/Edge Services


  • Latency

  • Round trips

  • Device diversity

Devices can differ across many dimensions:

  1. Manufacturer
  2. Device type
  3. Form factor
  4. Device size
  5. Programming language
  6. Operating system
  7. Runtime environment
  8. Concurrency model
  9. Supported network protocols


The API Gateway pattern is targeted at shifting the burden of these requirements from the device developer to the server side. API gateways are simply a special class of microservices that meet the needs of a single client application, and provide it with a single entry point to the backend. They access tens (or hundreds) of microservices concurrently with each request, aggregating the responses and transforming them to meet the client application's needs. They also perform protocol translation (e.g., HTTP to AMQP) when necessary.



API gateways can be implemented using any language, runtime, or framework that supports web programming, concurrency patterns, and the protocols necessary to communicate with the target microservices. Popular choices include Node.js (due to its reactive programming model) and the Go programming language (due to its simple concurrency model).
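The fan-out/aggregate core of a gateway can be sketched independently of the framework. This is a toy: the backends are faked as callables, and all names (`catalog`, `pricing`, `inventory`, the response shapes) are hypothetical; a real gateway would issue concurrent HTTP or AMQP calls.

```python
from concurrent.futures import ThreadPoolExecutor

def product_page_gateway(product_id, backends):
    """One entry point for one client's product page; fans out to many services."""
    with ThreadPoolExecutor() as pool:
        # Hit all backends concurrently — one request in, many requests out.
        futures = {name: pool.submit(fn, product_id)
                   for name, fn in backends.items()}
        results = {name: f.result() for name, f in futures.items()}
    # Aggregate and transform into the shape this one client application needs.
    return {
        "id": product_id,
        "title": results["catalog"]["title"],
        "price": results["pricing"]["price"],
        "in_stock": results["inventory"]["count"] > 0,
    }
```

Because the gateway is shaped for a single client, a mobile app and a web app would each get their own gateway with their own aggregation logic.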



Summary

In this chapter we walked through two sets of recipes that can help us move toward a cloud-native application architecture:


Decomposition

We break down monolithic applications by:

1. Building all new features as microservices.

2. Integrating new microservices with the monolith via anti-corruption layers.

3. Strangling the monolith by identifying bounded contexts and extracting services.


Distributed systems

We compose distributed systems by:

1. Versioning, distributing, and refreshing configuration via a configuration server and management bus.

2. Dynamically discovering remote dependencies.

3. Decentralizing load balancing decisions.

4. Preventing cascading failures through circuit breakers and bulkheads.

5. Integrating on behalf of specific clients via API Gateways.





