A Service Mesh
for Kubernetes
Using Linkerd to add Reliability,
Security and Performance to your
Kubernetes Application
Table of Contents

I. Introduction
    What is a service mesh?
    Is the service mesh a networking model?
    What does a service mesh actually do?
    Why is the service mesh necessary?

II. A Service Mesh for Kubernetes
    1. Top-line service metrics
    2. Encrypting all the things
    3. Continuous deployment via traffic shifting
    4. Dogfood, ingress & edge routing
    5. Staging microservices without the tears
    6. Distributed tracing made easy
    7. Linkerd as an ingress controller
    8. gRPC for fun and profit

III. Conclusion
    The future of the service mesh
Introduction
Over the past year, the service mesh has emerged as a critical component of the
cloud native stack. High-traffic companies like PayPal, Lyft, Ticketmaster and Credit
Karma have all added a service mesh to their production applications, typically
alongside components like Kubernetes and Docker. This past January, Linkerd, the
open source service mesh for cloud native applications, became an official project of
the Cloud Native Computing Foundation.
But what is a service mesh, exactly? And why is it suddenly relevant?
This ebook defines the service mesh and traces its lineage through shifts in
application architecture over the past decade. We provide a series of hands-on
tutorials on how to use Linkerd as a service mesh with Kubernetes. Finally, we
describe where the service mesh is heading, and what to expect as this concept
evolves alongside cloud native adoption.
After reading this book, you should not only know what a service mesh is but also
be armed with concrete ways to use the Linkerd service mesh to make your
Kubernetes application safer, faster, and more resilient.
WHAT IS A SERVICE MESH?
A service mesh is a dedicated infrastructure layer for handling service-to-service
communication. It’s responsible for the delivery of requests through the complex
topology of services that comprise a modern, cloud native application. In practice,
the service mesh is typically implemented as an array of lightweight network proxies
that are deployed alongside application code, without the application needing to be
aware. (But there are variations to this idea, as we’ll see.)
The concept of the service mesh as a separate layer is tied to the rise of the cloud
native application. In the cloud native model, a single application might consist of
hundreds of services; each service might have thousands of instances; and each of
those instances might be in a constantly changing state as they are dynamically
scheduled by an orchestrator like Kubernetes. Not only is service communication in
this world incredibly complex, it’s a pervasive and fundamental part of runtime
behavior. Managing it is vital to ensuring end-to-end performance and reliability.
IS THE SERVICE MESH A NETWORKING MODEL?
The service mesh is a networking model that sits at a layer of abstraction above
TCP/IP. It assumes that the underlying L3/L4 network is present and capable of
delivering bytes from point to point. (It also assumes that this network, as with
every other aspect of the environment, is unreliable; the service mesh must
therefore also be capable of handling network failures.)
In some ways, the service mesh is analogous to TCP/IP. Just as the TCP stack
abstracts the mechanics of reliably delivering bytes between network endpoints, the
service mesh abstracts the mechanics of reliably delivering requests between
services. Like TCP, the service mesh doesn’t care about the actual payload or how
it’s encoded. The application has a high-level goal (“send something from A to B”),
and the job of the service mesh, like that of TCP, is to accomplish this goal while
handling any failures along the way.
Unlike TCP, the service mesh has a significant goal beyond “just make it work”: it
provides a uniform, application-wide point for introducing visibility and control into
the application runtime. The explicit goal of the service mesh is to move service
communication out of the realm of the invisible, implied infrastructure, and into the
role of a first-class member of the ecosystem—where it can be monitored, managed
and controlled.
WHAT DOES A SERVICE MESH ACTUALLY DO?
Reliably delivering requests in a cloud native application can be incredibly complex. A
service mesh like Linkerd manages this complexity with a wide array of powerful
techniques: circuit-breaking, latency-aware load balancing, eventually consistent
(“advisory”) service discovery, retries, and deadlines. These features must all work in
conjunction, and the interactions between these features and the complex environment
in which they operate can be quite subtle.
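To make one of these techniques concrete, latency-aware load balancing can be sketched as a “power of two choices” picker over exponentially weighted moving-average (EWMA) latencies. This is a minimal illustrative sketch, not Linkerd’s actual implementation; the `Instance` class, `pick` function, and `alpha` parameter are hypothetical names chosen for this example.

```python
import random

class Instance:
    """One backend instance with an EWMA of its observed latencies."""
    def __init__(self, name, alpha=0.3):
        self.name = name
        self.alpha = alpha      # smoothing factor: weight given to new samples
        self.ewma_ms = 0.0      # exponentially weighted moving-average latency

    def record(self, latency_ms):
        # Blend the newest observation into the moving average.
        self.ewma_ms = self.alpha * latency_ms + (1 - self.alpha) * self.ewma_ms

def pick(instances):
    """Power of two choices: sample two instances, keep the faster one."""
    a, b = random.sample(instances, 2)
    return a if a.ewma_ms <= b.ewma_ms else b

pool = [Instance(f"web-{i}") for i in range(3)]
pool[0].record(5.0)     # fast instance
pool[1].record(50.0)    # slow instance
pool[2].record(8.0)
print(pick(pool).name)  # never "web-1": the slow instance loses every pairing
```

Sampling two candidates rather than scanning the whole pool keeps the choice cheap while still steering traffic away from slow instances, which is why this family of strategies suits large, dynamic instance pools.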
For example, when a request is made to a service through Linkerd, a very simplified
timeline of events is as follows:
1. Linkerd applies dynamic routing rules to determine which service the requester
intended. Should the request be routed to a service in production or in staging?
To a service in a local datacenter or one in the cloud? To the most recent version
of a service that’s being tested or to an older one that’s been vetted in
production? All of these routing rules are dynamically configurable, and can be
applied both globally and for arbitrary slices of traffic.
2. Having found the correct destination, Linkerd retrieves the corresponding pool of
instances from the relevant service discovery endpoint, of which there may be
several. If this information diverges from what Linkerd has observed in practice,
Linkerd makes a decision about which source of information to trust.
3. Linkerd chooses the instance most likely to return a fast response based on a
variety of factors, including its observed latency for recent requests.!
4. Linkerd attempts to send the request to the instance, recording the latency and
response type of the result.!
5. If the instance is down, unresponsive, or fails to process the request, Linkerd
retries the request on another instance (but only if it knows the request is
idempotent).
6. If an instance is consistently returning errors, Linkerd evicts it from the load
balancing pool, to be periodically retried later (for example, an instance may be
undergoing a transient failure).
7. If the deadline for the request has elapsed, Linkerd proactively fails the request
rather than adding load with further retries.
8. Linkerd captures every aspect of the above behavior in the form of metrics and
distributed tracing, which are emitted to a centralized metrics system.
And that’s just the simplified version: Linkerd can also initiate and terminate
TLS, perform protocol upgrades, dynamically shift traffic, and fail over between
datacenters.
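The timeline above can be condensed into a small control loop. The sketch below is a deliberately simplified model, not Linkerd code: `send` is an assumed callable standing in for the network hop, the `pool` list stands in for service discovery, and the fixed `pool[0]` choice replaces latency-aware selection (step 3). It shows the interplay of deadlines (step 7), idempotency-gated retries (step 5), eviction (step 6), and metrics capture (step 8).

```python
import time

class DeadlineExceeded(Exception):
    """Raised when a request's deadline elapses before a response arrives."""

def handle_request(request, pool, send, deadline_s, max_retries=3):
    """Route a request through a pool of instances with retries and a deadline."""
    start = time.monotonic()
    metrics = {"attempts": 0, "evicted": []}
    while True:
        # Step 7: fail proactively once the deadline has elapsed,
        # rather than adding load with further retries.
        if time.monotonic() - start > deadline_s:
            raise DeadlineExceeded(request["path"])
        instance = pool[0]  # stand-in for latency-aware choice (step 3)
        metrics["attempts"] += 1
        t0 = time.monotonic()
        try:
            # Step 4: send the request, recording the result's latency.
            resp = send(instance, request)
            metrics["latency_s"] = time.monotonic() - t0
            return resp, metrics
        except ConnectionError:
            # Step 6: evict the failing instance from the load balancing pool.
            pool.remove(instance)
            metrics["evicted"].append(instance)
            # Step 5: retry only if the request is idempotent, and only
            # while retries and healthy instances remain.
            if (not request.get("idempotent")
                    or metrics["attempts"] > max_retries or not pool):
                raise

def flaky_send(instance, request):
    # Hypothetical transport: one instance is down, the rest respond.
    if instance == "web-0":
        raise ConnectionError(instance)
    return "200 OK"

resp, m = handle_request({"path": "/users", "idempotent": True},
                         ["web-0", "web-1"], flaky_send, deadline_s=1.0)
print(resp, m["attempts"])  # → 200 OK 2
```

In this toy run the first attempt hits the dead `web-0`, which is evicted; because the request is marked idempotent, the second attempt succeeds against `web-1`. A non-idempotent request would surface the failure immediately instead of being retried.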