Managing Talend microservices with Istio service mesh on Azure Kubernetes Service

Overview

The Talend Community Knowledge Base (KB) article, Managing Talend microservices with Istio service mesh on Kubernetes, shows you how to connect, secure, control, and observe Talend microservices using Istio.

 

This article describes the installation and platform-specific steps for managing Talend microservices using Istio service mesh on Azure Kubernetes Service (AKS).

 

For more information on installing Talend microservices using Istio on other cloud providers, see the related Talend Community KB articles.

 

Environment Setup

 

Launching a standard Azure Kubernetes Service (AKS) cluster

Create an AKS cluster by using the Azure CLI or the Azure portal.

 

Using the Azure CLI or Cloud Shell

  1. Set the Azure subscription ID.

    az account set -s REPLACE_WITH_SUBSCRIPTION_ID
    # for example, az account set -s d93xxxxxxxxxxxxxx
  2. Create a resource group.

    az group create --name AKS_RESOURCE_GROUP --location REPLACE_WITH_AZURE_LOCATION
    # for example, az group create --name rchinta_csa_resource_group --location "West Europe"
  3. Create an AKS Cluster.

    az aks create --resource-group REPLACE_WITH_AKS_RESOURCE_GROUP --name AKS_CLUSTER_NAME --node-count 2 --kubernetes-version 1.14.8  --node-vm-size DS2_v2
    #for example, az aks create --resource-group rchinta_csa_resource_group --name talend-bonn-Az-aks-cluster --node-count 2 --kubernetes-version 1.14.8  --node-vm-size DS2_v2
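The three CLI steps above can be combined into one parameterized script. The sketch below only prints each command so you can review it before running; the values are illustrative, so replace them with your own and remove the leading echo to execute.

```shell
#!/bin/sh
# Illustrative values -- replace with your own before running.
SUBSCRIPTION_ID="d93xxxxxxxxxxxxxx"
RESOURCE_GROUP="rchinta_csa_resource_group"
LOCATION="West Europe"           # quote this value when running, it contains a space
CLUSTER_NAME="talend-bonn-Az-aks-cluster"

# Each command is echoed for review; remove the leading 'echo' to execute.
echo az account set -s "$SUBSCRIPTION_ID"
echo az group create --name "$RESOURCE_GROUP" --location "$LOCATION"
echo az aks create --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME" \
  --node-count 2 --kubernetes-version 1.14.8 --node-vm-size DS2_v2
```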

 

Using the Azure portal

  1. From the Azure portal, search for Kubernetes services, then click the Add button.

    Dashboard.jpg

     

  2. In the Create Kubernetes cluster > Basic settings, enter the details for Resource group, Kubernetes cluster name, Region, Kubernetes version, and Node size. Click Next.

    Cluster_Create_2.jpg

     

  3. In the Scale settings, enable the VM scale sets option to allow autoscaling of the nodes, then click Next.

    Vm_Scale_Sets.jpg

     

  4. In the Authentication settings, set Service principal to use the default service principal, then select Yes for Enable RBAC.

    Note: If you want to create/assign a service principal, click the Configure service principal link.

    Authentication.jpg

     

  5. Click Next until you reach Review + Create, validate the settings, then click Create.

    validation.jpg

    Note: The cluster creation might take 5 to 10 minutes.

     

  6. In the Azure portal, click the Cloud Shell icon, initialize storage if required, then launch the bash shell.

    Launch_shell.jpg

     

  7. In the cloud shell, create a kubeconfig file in the ~/.kube folder by executing the following command:

    az aks get-credentials --resource-group REPLACE_WITH_AKS_RESOURCE_GROUP  --name  REPLACE_WITH_AKS_CLUSTER_NAME
    # for example, az aks get-credentials --resource-group rchinta_aks_resource_group --name talend-bonn-Az-aks-cluster

    download_kubeconfig_file.jpg

     

  8. Verify that the worker nodes joined the cluster by executing the following command:

    kubectl get nodes

    get_nodes.jpg

     

Installing Istio with Helm on AKS

Helm is a package manager for Kubernetes. Helm charts help you define, install, and upgrade even the most complex Kubernetes applications.

 

Istio provides customizable Helm templates for installation on a Kubernetes cluster. For more information on other installation options, see the Istio Installation Guides.

 

Install Helm client

Install the Helm client using one of the following approaches, depending on your environment: Azure Cloud Shell, Linux, or Windows.

 

Using Cloud Shell (AKS)

The Helm client is preinstalled on Azure Cloud Shell. Verify the installation by executing the following command:

helm version

helm_installed.jpg

 

Using shell script (Linux)

Skip this step if you use Azure Cloud Shell. The Helm client can be installed on Linux using shell commands.

Connect to a Linux shell, then install the Helm client:

curl -LO https://git.io/get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

 

From Chocolatey (Windows)

Skip this step if you use Azure Cloud Shell. The Helm package can be installed on Windows using Chocolatey.

  1. Launch a command prompt and execute the following command:

    choco

    Note: If you get a choco: command not found error, follow the instructions in the Install Chocolatey on Windows guide.

  2. Install Helm using the choco install command:

    #Install helm client
    choco install kubernetes-helm

     

Customizing the Istio installation with Helm

From the cloud shell or a command prompt, perform the following steps.

  1. Create an istio_install directory.

    # create directory istio_install
    mkdir istio_install (Windows / Linux)
    
    # move into the istio_install
    cd istio_install
  2. Download an Istio release and then execute the steps in the guide Customizable Install with Helm.

    # Download the installation file using the curl command
    curl -Ls -O https://github.com/istio/istio/releases/download/1.3.4/istio-1.3.4-linux.tar.gz  ( Linux)
    curl -Ls -O https://github.com/istio/istio/releases/download/1.3.4/istio-1.3.4-win.zip (Windows)
    
    # gunzip and untar the file
    gunzip istio-1.3.4-linux.tar.gz 
    tar -xvf istio-1.3.4-linux.tar
    
    # move into the directory istio-1.3.4
    cd istio-1.3.4
    
    # Configure the PATH environment variable
    export PATH=$PWD/bin:$PATH (Linux)
    set PATH=%CD%\bin;%PATH% (Windows)
    
    # Initialize helm
    helm init
    
    #Add Istio to the helm repository
    helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.3.4/charts/
    
    #Create namespace istio-system
    kubectl create namespace istio-system
    
    #Install all the istio CRDs
    helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
  3. Install Istio choosing one of the configuration profiles.

    Note: This article uses the demo-auth profile.

    # Execute the helm template with the predefined settings in the demo_auth profile
    
    helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
    --values install/kubernetes/helm/istio/values-istio-demo-auth.yaml | kubectl apply -f -
  4. Verify that all of the Istio components in the demo-auth configuration profile have a status of Running or Completed by executing the following command:

    kubectl get pods -n istio-system

    Pods_status.jpg

     

  5. Verify that an external IP is assigned to the istio-ingressgateway service by executing the following command:

    kubectl get svc -n istio-system

    services_running.jpg

    Note:

    1. Ingress traffic to application pods or microservices is possible only through the external (public) IP assigned to the istio-ingressgateway service.

    2. An external load balancer is launched on AKS automatically, and its IP is assigned to the istio-ingressgateway service.
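Ingress routing through that external IP is configured with Istio Gateway and VirtualService resources. The sketch below is illustrative only: the gateway name, hosts, URI prefix, and the order-service destination host and port are assumptions based on the demo services used later in this article.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: talend-gateway          # hypothetical name
spec:
  selector:
    istio: ingressgateway       # binds to the istio-ingressgateway service
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: order-service           # hypothetical name
spec:
  hosts:
  - "*"
  gateways:
  - talend-gateway
  http:
  - match:
    - uri:
        prefix: /orders         # hypothetical route
    route:
    - destination:
        host: order-service
        port:
          number: 8080
```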

 

Launching Istio monitoring dashboards for AKS

With the Istio demo-auth profile, dashboard add-ons such as Prometheus, Grafana, Kiali, and Jaeger are enabled and installed along with Istio.

  • Prometheus collects Istio metrics and provides a web-based graphical user interface for querying them.

  • Grafana is a web-based graphical user interface that provides a global view of the mesh along with services and their workloads.

  • Kiali is a web-based graphical user interface to view service graphs of the mesh and Istio configuration objects. Different graph types, such as App, Versioned App, Workload, and Service, are available to view the services in the mesh.

  • Jaeger is a web-based graphical user interface for viewing distributed traces of requests flowing through the mesh.

  1. Open a Windows command prompt in Administrator mode, then execute the following commands to create the kubeconfig file in the Windows USER_HOME/.kube folder.

    # log in to Azure from the command prompt
    az login
    
    az aks get-credentials --resource-group REPLACE_WITH_AKS_RESOURCE_GROUP --name REPLACE_WITH_AKS_CLUSTER_NAME
    # for example, az aks get-credentials --resource-group rchinta_aks_resource_group --name talend-bonn-Az-aks-cluster
  2. Launch the dashboards using local port forwarding, then open http://localhost:<port> in a browser.

    # Execute the below port-forward command and access Prometheus using local port forwarding
    kubectl port-forward svc/prometheus 9090:9090 -n istio-system
    
    # Execute the below port-forward command and access Grafana using local port forwarding
    kubectl port-forward svc/grafana 3000:3000 -n istio-system
    
    # Execute the below port-forward command and access Kiali using local port forwarding
    kubectl port-forward svc/kiali 20001:20001 -n istio-system
    
    # Execute the below port-forward command and access Jaeger using local port forwarding
    kubectl port-forward svc/jaeger-query 16686:16686 -n istio-system
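The four commands above follow one pattern, so they can be generated in a loop. This sketch only prints each command; run each one in its own shell (or remove the echo and background the calls) since port-forward blocks the terminal.

```shell
#!/bin/sh
# Print a port-forward command for each dashboard service in istio-system.
for svc_port in prometheus:9090 grafana:3000 kiali:20001 jaeger-query:16686; do
  svc="${svc_port%%:*}"    # part before the colon, e.g. prometheus
  port="${svc_port##*:}"   # part after the colon, e.g. 9090
  echo "kubectl port-forward svc/${svc} ${port}:${port} -n istio-system"
done
```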

 

Publish Talend microservices to Azure Container Registry

 

Creating an Azure Container Registry (ACR)

  1. From the Azure portal, search for container registries, then click Add.

    search_rgistry.jpg

     

  2. Enter the registry details such as Registry name and Resource group as shown below, then click Create.

    create-registry_1.jpg

     

  3. After the registry is created, open the registry, then click Access keys.

    access_key_password_1.jpg

     

  4. Obtain the ACR credentials from the Username and password fields.

 

Publishing microservices from Talend Studio to ACR

  1. From Talend Studio, publish the Customers microservice to ACR, as shown below. Click Finish.

    push._1jpg.jpg

    Note: In the Publish window, fill in the username and password fields with the ACR credentials. For more information, see the Creating an Azure Container Registry (ACR) section.

     

  2. Repeat Step 1 and publish the Orders microservice. Make a note of the URLs of the microservice images published to ACR.

    Container_registry_urls.jpg
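The image URLs noted above follow ACR's standard naming scheme, <registry>.azurecr.io/<repository>:<tag>. A minimal sketch of how such a URL is composed, using illustrative values from this article:

```shell
#!/bin/sh
# ACR image URLs have the form <registry>.azurecr.io/<repository>:<tag>.
# Illustrative values -- substitute your own registry, repository, and tag.
ACR_NAME="rchinta"
REPOSITORY="k8s_microservice/customer/microservices/orders"
TAG="0.5.0"
IMAGE_URL="${ACR_NAME}.azurecr.io/${REPOSITORY}:${TAG}"
echo "$IMAGE_URL"
```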

     

Authentication with Azure Container Registry from AKS using a service principal

Skip this section if you are not using Azure Container Registry as a container registry.

 

An AKS cluster, by default, doesn't have read access to the images in ACR. The application images can be pulled from the registry only after AKS successfully authenticates with ACR, as described in the following steps.

 

Obtaining a service principal

When you create an AKS cluster, a service principal is created in the Authentication settings by default and assigned to the cluster.

 

Obtain the Client ID of the default service principal by executing the following command:

az aks show --name  REPLACE_WITH_AKS_CLUSTER_NAME  --resource-group  REPLACE_WITH_AKS_CLUSTER_RESOURCE_GROUP --query servicePrincipalProfile.clientId -o tsv 
# for example,  az aks show --name talend-bonn-Az-aks-cluster --resource-group rchinta_aks_resource_group --query servicePrincipalProfile.clientId -o tsv
# command outputs clientId : 197bdxxx-xxxx............

 

Obtaining the ACR resource ID

Obtain the ACR resource ID by executing the following command:

az acr show --name  REPLACE_WITH_AZURE_CONTAINER_REGISTRY_NAME --query id --output tsv
# for example, az acr show --name rchinta --query id --output tsv
# command outputs ACR resource ID : /subscriptions/xxxxxx/resourceGroups/rchinta_aks_resource_group/providers/Microsoft.ContainerRegistry/registries/rchinta

 

Granting role-based access to ACR from AKS

Assign the acrpull role to the AKS service principal by executing the following command:

az role assignment create --assignee  REPLACE_WITH_AKS_SERVICE_PRINCIPAL_ID  --scope  REPLACE_WITH_ACR_RESOURCE_ID  --role acrpull
# for example, az role assignment create --assignee 197bdxxx-xxxx --scope /subscriptions/xxxxxx/resourceGroups/rchinta_aks_resource_group/providers/Microsoft.ContainerRegistry/registries/rchinta --role acrpull

Note: If you get an Insufficient privileges to complete the operation error, contact your Azure administrator.

 

Create a Kubernetes secret with service principal

Create a Kubernetes secret that contains the AKS service principal's Client ID and Client Secret.

 

Using the Azure CLI or Cloud Shell, execute the following command:

# Fill your registry details in the below command before executing.
kubectl create secret docker-registry acr-auth --docker-server REPLACE_WITH_YOUR_AZURE_REGISTRY_URL --docker-username REPLACE_WITH_YOUR_SERVICE_PRINCIPAL_CLIENT_ID --docker-password REPLACE_WITH_YOUR_SERVICE_PRINCIPAL_CLIENT_SECRET --docker-email REPLACE_WITH_YOUR_EMAIL_ID

# for example
kubectl create secret docker-registry  acr-auth  --docker-server rchinta.azurecr.io --docker-username ff420.... --docker-password 5CQHrss.... --docker-email rchinta@talend.com

 

Using Secret in the Kubernetes resource files

Azure Kubernetes Service authenticates with Azure Container Registry using the Secret configured in the resource files, then downloads the application images.

 

In the demo Kubernetes resource files for AKS attached to this article, observe the usage of the imagePullSecrets property with the name acr-auth, for example:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: order-service-v2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: order-service
        version: v2
    spec:
      containers:
      - name: order-service
        image: rchinta.azurecr.io/k8s_microservice/customer/microservices/orders:0.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        command: ["/bin/sh", "/maven/Orders/Orders_run.sh"]
        args: ["--spring.config.location=classpath:config/contexts/PROD.properties"]
      imagePullSecrets:
      - name: acr-auth

 

Conclusion

This article showed you how to launch an Azure Kubernetes Service cluster, install Istio manually using Helm, and publish Talend microservices to Azure Container Registry.

Version history
Revision #: 23 of 23
Last update: 12-23-2019 08:18 AM