[Component Governance] Backend Standards Research Notes

Microsoft and Alibaba Cloud have proposed the Open Application Model (OAM), an open standard for developing and running applications on Kubernetes and other platforms.



https://oam.dev/

  1. Purpose and Goals

https://github.com/oam-dev/spec/blob/master/1.purpose_and_goals.md

The Open Application Model defines a standard, platform-agnostic way to describe cloud and edge applications. The goal of the specification is to provide a common way to describe applications agnostic to any specific container runtime, orchestration software, cloud provider, or hardware configuration, with clearly defined roles for developers and operators, while still allowing implementations to make use of the native APIs, tools, and features that are unique to the implementation and its underlying platform.

Goal: provide a way to describe applications, a description with these three characteristics:

  • Agnostic to any specific platform, container runtime, orchestration software, cloud provider, or hardware configuration

  • Clearly defined roles (developers and operators)

  • Still able to use the native APIs, tools, and features of the underlying platform

Rudr: A Kubernetes Implementation of the Open Application Model

https://github.com/oam-dev/rudr

Rudr is the Kubernetes implementation of OAM.

Starting minikube

# Input: on the corporate intranet, NO_PROXY needs to be set first
export NO_PROXY=http://dev-proxy.oa.com:8080,localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24

minikube start

# Output
😄  minikube v1.6.2 on Darwin 10.13.6
✨  Selecting 'virtualbox' driver from existing profile (alternates: [])
💡  Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
🏃  Using the running virtualbox "minikube" VM ...
⌛  Waiting for the host to be provisioned ...
🌐  Found network options:
    ▪ NO_PROXY=http://dev-proxy.oa.com:8080,localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
    ▪ http_proxy=http://dev-proxy.oa.com:8080
    ▪ https_proxy=http://dev-proxy.oa.com:8080
⚠️  VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository
🐳  Preparing Kubernetes v1.17.0 on Docker '19.03.5' …
    ▪ env NO_PROXY=http://dev-proxy.oa.com:8080,localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
    ▪ env HTTP_PROXY=http://dev-proxy.oa.com:8080
    ▪ env HTTPS_PROXY=http://dev-proxy.oa.com:8080
🚀  Launching Kubernetes ...
🏄  Done! kubectl is now configured to use "minikube"

Installing Helm

Just follow the steps below.

Install Helm 3. The below is copied directly from the Helm installation guide.

  • Download your desired version of Helm (e.g. helm-v3.0.0-linux-amd64.tgz).

  • Unpack it (tar -zxvf helm-v3.0.0-linux-amd64.tgz). Note that the command might change depending on the Helm 3 version you installed.

  • Find the helm binary in the unpacked directory, and move it to its desired destination (mv linux-amd64/helm /usr/local/bin/helm).

  • From there, you should be able to run the client: helm help.

On macOS it is even easier with brew:

# Input
brew install helm
# Output
Updating Homebrew...
==> Downloading https://homebrew.bintray.com/bottles/helm-3.0.2.high_sierra.bottle.tar.gz
==> Downloading from https://akamai.bintray.com/18/18c358c890202edd6cd15ee8f59c015177b932fd536c6421cc7e68be35270a9b?__gda__=exp=1577346562~hmac=5b3c118b04e848c0ab8b340b36828d18f5df6044d954
######################################################################## 100.0%
==> Pouring helm-3.0.2.high_sierra.bottle.tar.gz
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d

zsh completions have been installed to:
  /usr/local/share/zsh/site-functions
==> Summary
🍺  /usr/local/Cellar/helm/3.0.2: 7 files, 40.6MB

1. Three basic Helm concepts

Chart: a Helm application package, containing all the Kubernetes manifest templates for the application; analogous to a YUM RPM or an APT deb package.

Repository: a storage repository for Helm charts.

Release: a deployed instance of a chart; each chart can be deployed as one or more releases.
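To make the Chart concept concrete, here is a minimal sketch of a chart's Chart.yaml in the Helm 3 format (the chart name, versions, and description are illustrative, not from any real chart):

```yaml
# Chart.yaml — the package metadata at the root of a chart directory
apiVersion: v2                  # chart API version used by Helm 3
name: my-app                    # illustrative chart name
version: 0.1.0                  # version of the chart (the package), not the app
appVersion: "1.0.0"             # version of the application being packaged
description: An example application chart

# The Kubernetes manifest templates live next to this file, e.g.:
#   templates/deployment.yaml
#   templates/service.yaml
#   templates/ingress.yaml
# "helm install my-release ./my-app" would render these templates and
# create a release named my-release from this chart.
```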

2. How Helm works

Helm packages Kubernetes resources (e.g. deployments, services, ingresses) into a chart, and charts are stored in a chart repository, which is used to store and share charts. Helm makes releases configurable, supports versioning of released application configurations, and simplifies version control, packaging, releasing, deleting, and upgrading of applications on Kubernetes.

Helm 2 consists of two parts: the helm client and the tiller server. (Note: Tiller was removed in Helm 3, which we installed above; in Helm 3 the client talks to the Kubernetes API server directly. The client/server description below applies to Helm 2.)

3. The helm client

The helm client is a command-line tool responsible for managing charts, repositories, and releases. It sends requests to tiller over a gRPC API (using kubectl port-forward to map tiller's port to the local machine, then talking to tiller through the forwarded port), and tiller manages the corresponding Kubernetes resources.

4. The tiller server

tiller receives requests from the helm client, sends the corresponding resource operations to Kubernetes, and is responsible for managing (installing, querying, upgrading, deleting, etc.) and tracking Kubernetes resources. For ease of management, tiller stores release information in Kubernetes ConfigMaps. tiller exposes a gRPC API for the helm client to call.


[root@master ~]# helm search mysql      # search for MySQL charts
# What this finds are chart packages; the versions shown are chart versions
[root@master ~]# helm inspect stable/mysql      # show the chart's details
[root@master ~]# helm fetch stable/mysql        # download the chart locally
[root@master templates]# helm install stable/mysql      # install MySQL from the chart

Installing Rudr with Helm

# Input
helm install rudr ./charts/rudr --wait
# Output
NAME: rudr
LAST DEPLOYED: Thu Dec 26 15:45:02 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rudr is a Kubernetes controller to manage Configuration CRDs.

It has been successfully installed.

To verify that Rudr is installed, run the following command.

# Input
kubectl get crds -l app.kubernetes.io/part-of=core.oam.dev
# Output
NAME                                      CREATED AT
applicationconfigurations.core.oam.dev   2019-10-02T19:57:32Z
componentinstances.core.oam.dev          2019-10-02T19:57:32Z
componentschematics.core.oam.dev         2019-10-02T19:57:32Z
healthscopes.core.oam.dev                2019-10-02T19:57:32Z
scopes.core.oam.dev                      2019-10-02T19:57:32Z
traits.core.oam.dev                      2019-10-02T19:57:32Z

You should see at least those six CRDs. You can also verify the Rudr deployment:

# Input
kubectl get deployment rudr
# Output
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
rudr   1/1     1            1           2m47s

Running the Rudr sample

Next, try the following sample.

https://github.com/oam-dev/samples

This is an example microservices application with a Javascript Web UI, a MongoDB database, and a series of API microservices.

The sample has three parts:

  • a Javascript Web UI

  • a MongoDB database

  • a series of API microservices


Also note the roles distinguished in this OAM application:

  • UI Developer

  • API Microservices Developer

  • MongoDB Admin

  • App Operator / SRE (handles application deployment in Kubernetes)

Let's start the experiment:

STEP 0. Install an NGINX Ingress Controller

Ingress

To successfully use an ingress trait, you will need to install one of the Kubernetes Ingress controllers. We recommend nginx-ingress.

  • First, add the stable repo to your Helm installation.

  • Install the NGINX ingress using Helm 3.

STEP 1. Register the ComponentSchematics

The OAM ComponentSchematics that are applied require the following information about the app from the developers.

# Input
kubectl apply -f tracker-db-component.yaml
kubectl apply -f tracker-data-component.yaml
kubectl apply -f tracker-flights-component.yaml
kubectl apply -f tracker-quakes-component.yaml
kubectl apply -f tracker-weather-component.yaml
kubectl apply -f tracker-ui-component.yaml
# Output
componentschematic.core.oam.dev/tracker-mongo-db created
componentschematic.core.oam.dev/data-api created
componentschematic.core.oam.dev/flights-api created
componentschematic.core.oam.dev/quakes-api created
componentschematic.core.oam.dev/weather-api created

These schematics specify the following:

  • The workloadType which dictates how the microservice is supposed to run. In this example, all are of type Server indicating multiple replicas can exist.

  • The container image and credentials. Developers are responsible at the very least for authoring the Dockerfiles containing the dependencies in order to build their runnable container. This example also expects an image to be pushed to a registry although this can be handled by a continuous integration system.

  • Container ports that expose any ports that servers are listening to.

  • Parameters that can be overridden by an operator at the time of instantiation
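A minimal sketch of what one of these component schematics looks like (using the core.oam.dev/v1alpha1 API group shown in the CRDs above; the image name, port, and parameter are illustrative, not copied from the sample repo):

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: flights-api
spec:
  workloadType: core.oam.dev/v1alpha1.Server   # Server: multiple replicas can exist
  containers:
    - name: flights-api
      image: example/flights-api:0.1           # illustrative image reference
      ports:
        - name: http
          containerPort: 3003                  # port the server listens on
  parameters:
    - name: log-level                          # can be overridden by the operator
      type: string
      default: info
```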

STEP 2. Instantiate the application

The OAM ApplicationConfiguration instantiates each of the components.

Install the ApplicationConfiguration.

# Input
kubectl create -f tracker-app-config.yaml

# Output
applicationconfiguration.core.oam.dev/service-tracker created

Check the services:

# Input
kubectl get svc

# Output
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
data-api                        ClusterIP      10.96.55.117    <none>        3009/TCP                     9m30s
flights-api                     ClusterIP      10.96.30.7      <none>        3003/TCP                     9m30s
kubernetes                      ClusterIP      10.96.0.1       <none>        443/TCP                      3h1m
mongodb                         ClusterIP      10.96.126.48    <none>        27017/TCP                    9m31s
nginx-ingress-controller        LoadBalancer   10.96.237.142   <pending>     80:32618/TCP,443:31145/TCP   89m
nginx-ingress-default-backend   ClusterIP      10.96.143.165   <none>        80/TCP                       89m
quakes-api                      ClusterIP      10.96.39.30     <none>        3012/TCP                     9m25s
service-tracker-ui              ClusterIP      10.96.5.31      <none>        8080/TCP                     9m15s
weather-api                     ClusterIP      10.96.200.32    <none>        3015/TCP                     9m20s

These services should map exactly onto the sample's microservice architecture diagram.

After that, you only need to access the nginx-ingress-controller service.

You can inspect the service's details with kubectl describe service.


The ApplicationConfiguration mainly does the following:

  • Starting pods

  • Instantiating Services with appropriate configurations

  • Creating the Ingress resource with the routing rules
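A sketch of the shape of such an ApplicationConfiguration (component names match the schematics registered in STEP 1; the parameter value, hostname, and ingress trait property names are illustrative):

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: service-tracker
spec:
  components:
    - componentName: flights-api
      instanceName: flights-api
      parameterValues:                 # operator overrides of developer defaults
        - name: log-level
          value: debug
    - componentName: service-tracker-ui
      instanceName: service-tracker-ui
      traits:
        - name: ingress                # routes external traffic via the NGINX controller
          properties:                  # property names are illustrative
            hostname: tracker.example.com
            servicePort: 8080
            path: /
```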

YouTube video

https://www.youtube.com/watch?v=LAUDVk8PaCY

1:38


2:45

The goal, again, for OAM is to support an application model that can target different infrastructures, and Rudr happens to be that implementation of OAM for Kubernetes.


