
Use Aliyun ASM For Phased Release In Container Service

1. Experiment

1.1 Knowledge points

This experiment mainly uses Alibaba Cloud Container Service and Alibaba Cloud Service Mesh (ASM). It describes how to use ASM to perform a canary (phased) release in a container cluster.

ASM provides a fully managed service mesh platform compatible with the open-source Istio service mesh. It simplifies service governance, including traffic routing and splitting between service calls, authentication and security of inter-service communication, and mesh observability, greatly reducing the workload of development and operations.

1.2 Experiment process

  • Create Cluster
  • Create Service mesh
  • Connect Cluster
  • Perform phased release
  • Simulation demonstration

1.3 Cloud resources required

  • ECS
  • Resource objects in Kubernetes
  • ASM

1.4 Prerequisites

  • If you are using your own Alibaba Cloud account instead of the account provided by this lab, note that you need to choose the same Ubuntu 16.04 operating system for your ECS instances in order to run the experiment smoothly.
  • Before starting the experiment, confirm that any previous experiment has been properly closed and exited.

2. Start the experiment environment

Click Start Lab in the upper-right corner of the page to start the experiment.

image desc.

After the experiment environment is successfully started, the system has deployed the resources required by this experiment in the background, including the ECS instance, RDS instance, Server Load Balancer instance, and OSS bucket. An account consisting of the username and password for logging on to the Alibaba Cloud console is also provided.

image desc

After the experiment environment is started and related resources are deployed, the experiment countdown starts. You have two hours to perform experimental operations. After the countdown ends, the experiment stops and related resources are released. During the experiment, pay attention to the remaining time and arrange your time wisely. Then, use the username and password provided by the system to log on to the Web console of Alibaba Cloud and view related resources.


Access the logon page of Alibaba Cloud console.

image desc

Enter the sub-user account and click Next.

image desc

Enter the sub-user password and click Log on.

image desc

After you log on to the console, the following page is displayed.

image desc

3. Create Cluster

Refer to the following figure and select Container Service to enter the container service console.

image desc

Refer to the figure below to create a cluster first.

image desc

Select Standard Managed Kubernetes.

image desc

Refer to the figure below: set the cluster name, select the US (Silicon Valley) region, and select the VPC and VSwitch the cluster belongs to. Click Next.

image desc

Start the worker node configuration.

Refer to the figure below and select the instance type of the Worker node.

image desc

Set the number of worker nodes to 2.

image desc

Set the logon password, and click Next.

image desc

The default component configuration can be retained. Click Next.

image desc

Click Create Cluster.

image desc

It takes about 10 minutes to create a cluster. Please wait patiently.

image desc

image desc

4. Create Service mesh

Refer to the figure below and click Service Mesh to go to the service mesh console.

image desc

Click Create ASM Instance.

image desc

Refer to the settings in the figure below to create a service mesh instance.

image desc

image desc

image desc

Initialization takes about 2-3 minutes. Please wait.

image desc

After the creation is complete, click Manage.

image desc

Click Add to add the container cluster created earlier to the service mesh.

image desc

image desc

Click OK.

image desc

The addition is complete.

image desc

Refer to the figure below to deploy the default ingress gateway.

image desc

image desc

image desc

The deployment is complete.

image desc

5. Connect Cluster

5.1 Connect Container Cluster

Click Elastic Compute Service, as shown in the following picture.

image desc

Copy this ECS instance’s Internet IP address and remotely log in to this ECS (Ubuntu system) instance. For details of remote login, refer to login

At this time, there will be multiple ECS instances in the console. Log in to the one with a public IP address; the remaining nodes were created automatically along with the cluster.

image desc

The default account name and password of the ECS instance:

Account name: root

Password: nkYHG890..

Enter the following command to download the latest version of kubectl client tool.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

image desc
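Optionally, before granting execute permission, the download can be verified against its published SHA-256 checksum. These commands follow the upstream kubectl installation guide; network access to dl.k8s.io is assumed.

```shell
# Download the checksum file matching the kubectl binary just fetched.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

# Compare the binary against the checksum; prints "kubectl: OK" if the file is intact.
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
```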

Enter the following command to grant execution permission to the downloaded client and move it to the /usr/bin directory.

chmod +x kubectl
mv kubectl /usr/bin/

image desc

Enter the following command to create a “.kube” directory in the home directory.

mkdir -p .kube

image desc

Go back to the Alibaba Cloud container console and click on the cluster name to go to the cluster details page.

image desc

Click COPY to copy the cluster credentials (the kubeconfig content).

image desc

Go back to the command line interface and enter the command vim .kube/config to create a new config file. Paste in the credentials of the Kubernetes cluster copied earlier, then save and exit.

image desc

Enter the following command to view node information.

kubectl get node

image desc

If the nodes are listed, you have successfully connected to the Kubernetes cluster.

5.2 Connect Service Mesh

Go back to the service mesh console and click Connection.

image desc

Refer to the figure below to copy the intranet connection address.

image desc

Back at the command line, enter vim ~/.kube/asm.config, paste in the content just copied, then save and exit.

image desc

Enter the following command to check whether you can connect to the ASM instance.

kubectl get ns --kubeconfig=/root/.kube/asm.config

image desc

The ASM instance and the container cluster are actually two independent cluster services, and their resources are managed separately.

By default, kubectl connects to the container cluster. When --kubeconfig=/root/.kube/asm.config is added to the command, it connects to the ASM instance instead.
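As a convenience (not required by the lab), a small shell function can shorten the ASM commands. This is just a sketch, assuming the kubeconfig paths used above:

```shell
# "asmctl" talks to the ASM instance, while plain "kubectl"
# keeps talking to the container cluster.
asmctl() { kubectl --kubeconfig=/root/.kube/asm.config "$@"; }

kubectl get ns    # namespaces in the container cluster
asmctl get ns     # namespaces in the ASM instance
```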

6. Perform phased release

6.1 Add tag

For the service mesh to monitor and proxy traffic, a sidecar proxy container must be injected into each pod it manages.

For convenience, we first label the default namespace with istio-injection=enabled; this way, every pod created in the “default” namespace automatically gets the service mesh's sidecar proxy added.

Go back to the Service Mesh console.

As shown in the following figure, on the control plane's namespace page, find the default namespace and click Enable Automatic Sidecar Injection.

image desc

Click OK.

image desc

Automatic sidecar injection is now enabled.

image desc
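The console button above is equivalent to labeling the namespace on the ASM control plane. As a sketch of the CLI alternative (assuming the ASM kubeconfig saved earlier at /root/.kube/asm.config):

```shell
# Label the default namespace so that newly created pods
# get the sidecar proxy injected automatically.
kubectl label namespace default istio-injection=enabled \
    --kubeconfig=/root/.kube/asm.config

# Confirm the label is present.
kubectl get namespace default --show-labels --kubeconfig=/root/.kube/asm.config
```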

6.2 Create Nginx sample application

Create two versions of the nginx Deployment and one Service.

Back at the command line interface, enter the command vim resource.yaml to create a configuration file, and copy the following content into the file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-v1
  labels:
    app: nginx
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
      - name: nginx
        image: registry-intl.us-west-1.aliyuncs.com/labex/nginx:v1
        ports:
        - containerPort: 80

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-v2
  labels:
    app: nginx
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: v2
  template:
    metadata:
      labels:
        app: nginx
        version: v2
    spec:
      containers:
      - name: nginx
        image: registry-intl.us-west-1.aliyuncs.com/labex/nginx:v2
        ports:
        - containerPort: 80

---   

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80

image desc

Run the following command to create resources based on the configuration file.

kubectl apply -f resource.yaml

image desc

Run the following command to confirm that the resources have been created.

kubectl get all

image desc
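A quick way to confirm that sidecar injection actually happened is to look at the READY column: with injection enabled, each nginx pod should show 2/2 containers (nginx plus the istio-proxy sidecar). A sketch:

```shell
# Each pod should report 2/2 in the READY column.
kubectl get pods -l app=nginx

# List the container names inside each nginx pod.
kubectl get pods -l app=nginx \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'
```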

Enter the command vim init.yaml to create a configuration file, and copy the following content into the file.

This file creates a Gateway object and a VirtualService object. The Gateway object binds inbound traffic to the istio-ingressgateway, and the VirtualService object is responsible for routing that traffic.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

---

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-vs
spec:
  hosts:
  - "*"
  gateways:
  - nginx-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: nginx-svc
        port:
          number: 80

image desc

Run the following command to create resources based on the configuration file.

kubectl apply -f init.yaml --kubeconfig=/root/.kube/asm.config

image desc

Run the following command to obtain the public IP address of the “istio-ingressgateway” service.

kubectl get svc -n istio-system istio-ingressgateway

image desc
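If you prefer to extract just the IP from the command line rather than reading it out of the table, a jsonpath query works. This is a sketch; the field path assumes a LoadBalancer-type service, which is what ASM provisions for the ingress gateway.

```shell
# Print only the public IP of the ingress gateway service.
kubectl get svc -n istio-system istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```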

Enter the IP address in the browser and refresh it several times. The two Nginx versions appear alternately, indicating that the deployment is successful.

image desc

image desc

<font color='red'>The user can take a screenshot of the result above while doing the experiment and send it to the teacher, indicating that this chapter has been completed.</font>

7. Simulation demonstration

7.1 Create Client

We use a pre-written script to verify the VirtualService traffic control.

Go back to the command line interface, enter the command vim client.sh to create a script file, copy the following content into it, save, and exit. Be sure to replace YOUR-INGRESS-GATEWAY-IP with your own ingress gateway's public IP address, i.e. the IP address just entered in the browser.

#!/bin/bash
# Send 10000 requests to the ingress gateway and count which nginx
# version answered each one.
IngressGatewayIP='YOUR-INGRESS-GATEWAY-IP'
v1Count=0
v2Count=0
for i in $(seq 10000);
do
    result=$(curl -s ${IngressGatewayIP} | grep labex)
    version=${result:10:2}
    if [ "$version" == 'v1' ];
    then
        v1Count=$((v1Count+1))
    fi
    if [ "$version" == 'v2' ];
    then
        v2Count=$((v2Count+1))
    fi
done
echo "v1:${v1Count}"
echo "v2:${v2Count}"

image desc

Run the script with the following command. It takes about 3 minutes.

bash client.sh

image desc

You can see that the ratio of the two versions of the access result is close to 1:1.

7.2 Create DestinationRule

Here we create a DestinationRule object, which defines the named subsets v1 and v2; its configuration takes effect when a VirtualService object references those subsets.

Go back to the Alibaba Cloud service mesh console.

Click Create, as shown in the following figure.

image desc

Select the default namespace, copy the following content to the content box, and click Create.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginx-rule
spec:
  host: nginx-svc
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

image desc

Create success.

image desc

7.3 Modify VirtualService

As shown in the following figure, we first delete the “nginx-vs” resource.

image desc

Click OK to confirm.

image desc

Then click Create, select the “default” namespace, copy the following content to the content box, click OK.

image desc

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-vs
spec:
  hosts:
  - "*"
  gateways:
  - nginx-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: nginx-svc
        port:
          number: 80
        subset: v1
      weight: 90
    - destination:
        host: nginx-svc
        port:
          number: 80
        subset: v2
      weight: 10

image desc

Back to the ECS command line interface, enter the following command, and execute the “client.sh” script again.

bash client.sh

image desc

You can see that the ratio of requests reaching v1 and v2 is approximately 9:1, indicating that the new routing rule has taken effect.
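The observed split is probabilistic, so the counts only approximate 9000 and 1000. A stand-alone sketch (pure bash, no cluster needed; the 90/10 weights mirror the VirtualService above) illustrates the kind of spread to expect over 10000 requests:

```shell
#!/bin/bash
# Simulate 10000 requests routed with 90/10 weights, as in the VirtualService.
v1=0; v2=0
for i in $(seq 10000); do
    # RANDOM is 0..32767; send roughly 90% of requests to v1.
    if [ $((RANDOM % 100)) -lt 90 ]; then
        v1=$((v1+1))
    else
        v2=$((v2+1))
    fi
done
echo "v1:${v1}"
echo "v2:${v2}"
```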

As shown in the figure below, we change the weight ratio of v1 and v2 to 1:4.

image desc

image desc

Execute the “client.sh” script again.

bash client.sh

image desc

<font color='red'>Users can take a screenshot of the result above while doing the experiment and send it to the teacher, indicating that the current experiment has been completed.</font>

You can see that the ratio of requests to access V1 and V2 versions is about 1:4.

During a phased (canary) release, you can gradually increase the traffic weight of the new version in the VirtualService object, and once testing shows it is working well, fully switch over to the new version.
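For example, an intermediate step of the rollout might move the split to 50/50 before sending all traffic to v2. The fragment below is illustrative; only the weight fields differ from the http section of the VirtualService used earlier.

```yaml
# Illustrative intermediate step: equal traffic to both subsets.
http:
- match:
  - uri:
      exact: /
  route:
  - destination:
      host: nginx-svc
      port:
        number: 80
      subset: v1
    weight: 50
  - destination:
      host: nginx-svc
      port:
        number: 80
      subset: v2
    weight: 50
```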

Reminder:
Before you leave this lab, remember to log off from your Alibaba RAM account before you click the stop button of your lab. Otherwise, you will encounter some issues when opening a new lab session in the same browser.

image desc

image desc

8. Experiment summary

This experiment described how to use Alibaba Cloud ASM to perform a canary release in a container cluster. Service Mesh (ASM) is suitable for application scenarios that require traffic management, security management, fault recovery, observability and monitoring, and migration of microservice architectures.