Dom Steil

Serverless Containerization

Kubernetes and Knative are driving sequential, scalable, containerized build systems, with build steps scoped within pods for state sharing. Knative is a set of open-source components and platform-level custom APIs installed on Kubernetes.

In the next few minutes I will walk through how to containerize a Node.js or Spring Boot application, deploy it to Google Kubernetes Engine, and use the Knative APIs to manage the containers from zero to infinite scale.

Create an account on Google Cloud Platform (GCP), open Kubernetes Engine from the left-hand menu, create a new cluster with 3 nodes, and connect to the cluster. The remaining steps (writing a Dockerfile, building and pushing the image to gcr.io, and deploying with kubectl) are covered in detail below.

Intro: Kubernetes Engine Commands & General K8s Deployment Steps

Containerize the application (a Spring Boot app or an Express server, for example) with a Dockerfile
Tag the image and push it to gcr.io on GCP with docker tag and docker push (a concrete sketch follows below)
kubectl run name --image=gcr.io/project/name:tag
kubectl get pods
kubectl expose deployment name --type=LoadBalancer --port 80 --target-port 8080
Point a domain at the external IP in your DNS settings
kubectl scale deployment name --replicas=3
kubectl get pods
kubectl delete service name
gcloud container clusters delete name-cluster
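As a concrete sketch of the build, tag, and push step (the project ID my-project and app name demo are placeholders, not from the original post):

docker build -t demo:v1 .
docker tag demo:v1 gcr.io/my-project/demo:v1
docker push gcr.io/my-project/demo:v1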

Knative adds an abstraction layer to the orchestration process using a few key concepts: revisions, rollouts, and templates.

Knative makes it possible to:

Deploy and serve applications with a higher-level, easier-to-understand API. These applications automatically scale from zero to N, and back to zero, based on requests.

Build and package your application code inside the cluster.

Deliver events to your application. You can define custom event sources and declare subscriptions between event buses and your applications.

This is why Knative provides a developer experience similar to serverless platforms. Knative builds images inside Kubernetes pods. Here are the core Knative API resources:

Service: describes an application on Knative

Configuration: creates a new Revision when the revisionTemplate field changes

Route: configures how traffic should be split between Revisions

Revision: a read-only snapshot of an application's image and settings

Rollout Percent: the percentage of traffic the candidate Revision receives (see the example after this list)

BuildTemplate: a reusable, parameterized build definition that can be shared across Builds

Build: declares an ordered set of build steps

ClusterBuildTemplate: a BuildTemplate that is available cluster-wide rather than in a single namespace

istio-ingressgateway – the Istio service mesh ingress gateway

kubectl – the Kubernetes command-line tool

kubectl get ksvc – lists Knative Services (ksvc)
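To make Revisions and the Rollout Percent concrete, here is a sketch of a Knative Service running in "release" mode under the v1alpha1 API used by these releases; the service name, revision names, and image are placeholders, not from the original post:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: demo
spec:
  release:
    # first entry is the current revision, second is the candidate
    revisions: ["demo-00001", "demo-00002"]
    # the candidate revision receives 20% of the traffic
    rolloutPercent: 20
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/my-project/demo:v2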

Here are the steps to deploy a containerized application on GKE using Kubernetes and Knative:

A production deployment comprises two parts: your Docker container, and a front-end load balancer (which also provides a public IP address).

THIS.

Once this clicked for me, Kubernetes made a lot more sense. There is a public IP address linked to the set of containers, similar to the external IP on a VM, except that it fronts the containerized app; once I understood that, everything became much easier to reason about.

Helm is a package manager for Kubernetes applications, similar to npm for Node.js.

We can also use Helm to bring the application into the cluster.

Helm now has an installer script that will automatically grab the latest version of the Helm client and install it locally.

You can fetch that script, and then execute it locally. It’s well documented so that you can read through it and understand what it is doing before you run it.

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh

helm init

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
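With Tiller running under that service account, installing a chart looks roughly like this (Helm 2 syntax; the release name and chart path are placeholders, not from the original post):

helm install --name demo ./demo-chart
helm ls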

Create a Dockerfile for your application, either with Node.js:
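A minimal Node.js Dockerfile might look like the following (the base image, file names, and port are assumptions, not from the original post):

FROM node:10-alpine
# install dependencies first so they are cached between builds
WORKDIR /app
COPY package*.json ./
RUN npm install --production
# copy the application source and expose the port the server listens on
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]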

Or a Spring Boot application:
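And a minimal Spring Boot Dockerfile might look like this (the jar name is a placeholder for whatever your build produces):

FROM openjdk:8-jdk-alpine
# copy the fat jar built by Maven/Gradle into the image
COPY target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]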

docker build -t gcr.io/${PROJECT_ID}/name:latest .

docker push gcr.io/${PROJECT_ID}/name:latest

We'll assume that you built the image as gcr.io/${PROJECT_ID}/name:latest and you've created the Kubernetes cluster as described above.

Once connected to the cluster, you can run kubectl to interact with Kubernetes.

Create a deployment:

kubectl run name --image=gcr.io/${PROJECT_ID}/name:v1 --port 8080

This runs your image on a Kubernetes pod, which is the deployable unit in Kubernetes.

The pod opens port 8080, which is the port your Spring Boot application is listening on.

You can view the running pods using:

kubectl get pods

Expose the application by creating a load balancer pointing at your pod:

kubectl expose deployment name --type=LoadBalancer --port 80 --target-port 8080

This creates a service resource pointing at your running pod. It listens on the standard HTTP port 80, and proxies back to your pod on port 8080.
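For reference, the Service object that kubectl expose creates is roughly equivalent to a manifest like this (a sketch; the run: name selector assumes the deployment was created with kubectl run as above):

apiVersion: v1
kind: Service
metadata:
  name: name
spec:
  type: LoadBalancer
  selector:
    run: name          # label applied by kubectl run
  ports:
  - port: 80           # external HTTP port on the load balancer
    targetPort: 8080   # port the container listens on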

Obtain the IP address of the service by running:

kubectl get service

Initially, the external IP field will show "pending" while Kubernetes Engine provisions an IP address for you.

If you rerun the kubectl get service command repeatedly, the IP address will eventually appear.

You can then point your browser at that URL to view the running application.

Congratulations! Your application is now up and running!

Now we are going to add Knative to the cluster. First, grant yourself cluster-admin permissions:

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)

You can read the documentation at https://github.com/knative/docs.

Knative is still Kubernetes

If you deployed applications with Kubernetes before, Knative will feel familiar to you. You will still write YAML manifest files and deploy container images on a Kubernetes cluster.

Knative APIs

Kubernetes offers a feature called Custom Resource Definitions (CRDs). With CRDs, third party Kubernetes controllers like Istio or Knative can install more APIs into Kubernetes.

Knative installs three families of custom resource APIs:

Knative Serving: a set of APIs that help you host applications that serve traffic. Provides features like custom routing and autoscaling.

Knative Build: a set of APIs that allow you to execute builds (arbitrary transformations on source code) inside the cluster. For example, you can use Knative Build to compile an app into a container image, then push the image to a registry.

Knative Eventing: a set of APIs that let you declare event sources and event delivery to your applications. (Not covered here.)

Together, the Knative Serving, Build, and Eventing APIs provide a common set of middleware for Kubernetes applications. We will use these APIs to build and run applications.

Install Istio. Knative uses Istio for configuring networking and request-based routing:

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/istio-crds.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/istio.yaml

Label the default namespace with istio-injection=enabled; this automatically injects an Istio proxy sidecar container into every pod deployed to the "default" namespace:

kubectl label namespace default istio-injection=enabled

Monitor the Istio components until all of them show a STATUS of Running or Completed:

kubectl get pods --namespace istio-system

It will take a few minutes for all the components to be up and running; you can rerun the command to see the current state.

kubectl delete svc istio-ingressgateway -n istio-system

kubectl delete deploy istio-ingressgateway -n istio-system


Install Knative Build & Serving:

kubectl apply --filename https://github.com/knative/serving/releases/download/v0.4.0/serving.yaml

kubectl apply --filename https://github.com/knative/build/releases/download/v0.4.0/build.yaml

(Older releases shipped a single combined manifest instead, e.g. https://github.com/knative/serving/releases/download/v0.2.1/release.yaml.)

Wait until the Knative Serving & Build installation is complete (all pods become "Running" or "Completed"); run these commands a few times:

kubectl get pods --namespace=knative-serving

kubectl get pods --namespace=knative-build

Knative is now installed on your cluster!
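As a quick sanity check, you can also list the custom APIs that the Knative CRDs added to the cluster (the second command requires a reasonably recent kubectl):

kubectl get crd | grep knative.dev
kubectl api-resources --api-group=serving.knative.dev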

Create your service.yaml file, for example:
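A minimal run-latest Service under the v1alpha1 API might look like this (the service name is a placeholder; point the image at the one you pushed earlier):

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: demo
  namespace: default
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/${PROJECT_ID}/name:latest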

Deploy it:

kubectl apply -f service.yaml

Verify it's deployed by querying "ksvc" (Knative Service) objects:

kubectl get ksvc

For more information check out the following articles:

https://cloud.google.com/community/tutorials/kotlin-springboot-container-engine

https://github.com/knative/

https://github.com/knative/build-templates