Container Clustering with ECS/K8 Lab
For more information on Kubernetes, visit kubernetes.io
Description:
This lab is meant to serve as a docker/container clustering lab course. The objective of this lab is to walk through a step-by-step exercise that helps a user new to Kubernetes deploy a containerized app on the Kubernetes platform.
Pre-Requisites:
As an alternative to installing the prerequisites below, you can build an EC2 Linux instance with all of the requirements bundled in. See the Resources section at the bottom of this page for a Packer file that will create a lab builder AMI.
Kubernetes relies heavily on DNS to register its different components. In order to get the cluster running properly, we will need to ensure that we have a Public DNS Zone or Route 53 Hosted Zone that Kubernetes can register with and use to properly resolve namespaces for various components and deployed services. Instructions on how to set up DNS in Route 53 can be found in the Routing and DNS section of the lab.
1. AWS Account:
You will need to have an active AWS account, as this lab will cover setting up a Kubernetes cluster using the AWS EC2 service.
2. IAM User:
You will need an IAM user created with the appropriate permissions (admin access for this demo). The user should have programmatic access, with a generated Access Key and associated Secret Access Key. You will also need the user's ARN for later in the lab (found in the IAM console under the Users section).
3. Python and PIP:
You will need Python and pip (Pip Installs Packages) installed on your workstation so that we can download and install the AWS CLI tools. This is required for various provisioning/deployment steps later in the tutorial; a quick version check is shown below the install command.
- Python
- yum/apt-get install -y python-pip
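A quick version check (exact output varies by distribution) confirms both are available before continuing:
python --version
pip --version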
4. Install AWS CLI:
To make interacting with AWS easier, you should have the awscli tool installed and configured with the proper user access key and secret. You can configure the access key and secret access key using the aws configure command once the CLI tools have been installed via pip:
pip3 install awscli
aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
Example:
Desktop rnason$ aws configure
AWS Access Key ID [None]: ABCDEFGHIJKLMNOPQRST
AWS Secret Access Key [None]: ****************************************
Default region name [us-east-2]:
Default output format [None]:
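With the CLI configured, you can optionally confirm that the credentials work by asking AWS to identify the calling user (this requires no extra permissions):
aws sts get-caller-identity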
5. Install Kubectl:
On your local workstation, follow the steps found on the kubernetes.io page to install Kubectl. Kubectl can be installed on Windows, macOS, or Linux.
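For example, on a Linux workstation the same approach used by this lab's Packer file (see the Resources section) fetches the latest stable release:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/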
6. Putty (Windows Only):
If you're using Windows, you will need to have Putty and PuttyGen installed on your local Windows instance. We will need Putty in order to SSH to the docker instances, and PuttyGen in order to convert the AWS key PEM file to the PPK file format that Putty requires.
- Download the AWS Key Pem file used to launch your instance to your local drive
- Download Putty
- Download PuttyGen
Convert the PEM file into Putty PPK files
Open PuttyGen and click the Load button. Browse to the PEM file, click Open, and press OK on the import dialog box.
Save the Private Key
Once loaded, click the **Save Private Key** button to save the private key as a PPK-formatted file. Press Yes on the dialog asking if you want to save the file without a passphrase to protect it.
Save the Public Key
Lastly, click the Save Public Key button to save the public key.
Keys are now ready to use with Putty!
Once saved, the private key can be used with Putty to connect to your EC2 instances.
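As an illustration, assuming the key was saved as kube-lab.ppk and the instance is reachable as ec2-user@<instance-public-dns> (both placeholder values), a session can be started from the Windows command line with:
putty -ssh -i kube-lab.ppk ec2-user@<instance-public-dns>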
Routing and DNS:
In order to configure DNS, we will need to create a zone in Route53. We can do this by logging into the AWS console and choosing Route53 from the list of services.
1. Create Zone:
From the menu on the left side of the Route53 console, click on Hosted Zones, then click the Create Hosted Zone button. A dialog will open on the right side of the screen. Fill in the zone's domain name, and choose whether it is a Public Hosted Zone or a Private Hosted Zone.
2. Get Zone ID:
Once the zone has been created, click on the zone in the main Route53 console window, and grab the Hosted Zone ID from the zone details on the right-hand side.
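If you prefer the command line, the same zone can be created and its ID looked up with the AWS CLI. The domain below is the example value used later in this lab; the Hosted Zone ID appears in the returned Id field after the /hostedzone/ prefix:
aws route53 create-hosted-zone --name k8clusterdemo.com --caller-reference "k8-lab-$(date +%s)"
aws route53 list-hosted-zones-by-name --dns-name k8clusterdemo.com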
Provisioning a K8 Cluster:
For the purpose of this lab, we are going to set up the Kubernetes cluster by running a simple docker image that will automatically connect to your AWS account using the supplied IAM credentials. The setup container will connect to your account and perform all of the steps required to get a Kubernetes cluster up and running. After the container is started, the setup process takes around 10-15 minutes to fully complete.
1. Prepare Docker Run Statement:
Collect the following information and fill in the docker run statement template below.
The following docker container will provision the Kubernetes cluster for you, and requires variables such as an S3 bucket in which to store the kubeconfig and credential files.
S3 bucket names must be globally unique, so choose a unique bucket name to ensure that the cluster provisioning container doesn't fail on error. For the purpose of this lab, you must also ensure that the CLUSTER_NAME is unique if launching multiple clusters against the same DNS zone.
docker run -it --rm --name {{CONTAINER_NAME}} -h {{CONTAINER_HOST_NAME}} \
-e AWS_KEY="{{AWS_ACCESS_KEY_ID}}" \
-e AWS_SECRET="{{AWS_SECRET_ACCESS_KEY}}" \
-e AWS_REGION="{{REGION}}" \
-e AWS_AZ="{{AWS_AZ}}" \
-e KMS_DESC="{{AWS_KMS_DESCRIPTION}}" \
-e KMS_ALIAS="{{AWS_KMS_ALIAS}}" \
-e IAM_USER_ARN="{{IAM USER ARN}}" \
-e HOSTED_ZONE_ID="{{R53 ZONE ID}}" \
-e CLUSTER_NAME="{{CLUSTER_NAME}}" \
-e DOMAIN_NAME="{{DNS ZONE FQDN}}" \
-e BUCKET_NAME="{{BUCKET_NAME}}" \
-e KEY_NAME="{{AWS_SSH_KEY_PAIR_NAME}}" \
appcontainers/kubecluster-builder:latest
Example:
docker run -it --rm --name kubecluster-builder -h kubecluster-builder \
-e AWS_KEY="ABCDEFGHIJKLMNOPQRS" \
-e AWS_SECRET="abcdefghijklmnopqrstuvwxyz12345678909876" \
-e AWS_REGION="us-east-1" \
-e AWS_AZ="us-east-1b" \
-e KMS_DESC="AWS Kubernetes Cluster Demo" \
-e KMS_ALIAS="alias/kubernetes-demo" \
-e IAM_USER_ARN="arn:aws:iam::012345678900:user/container_lab" \
-e HOSTED_ZONE_ID="A12BCDEFGH34I5" \
-e CLUSTER_NAME="K8-Demo-Cluster" \
-e DOMAIN_NAME="k8clusterdemo.com" \
-e BUCKET_NAME="K8-Demo" \
-e KEY_NAME="My_SSH_Key" \
appcontainers/kubecluster-builder:latest
2. Run the setup container:
Once the variables have been substituted properly into the docker run statement above, run the container on your local workstation. As the container sets up your Kubernetes cluster, you will see the output in your console window. Once the container has completed the build, it will automatically terminate and remove itself.
Success! Created cluster.yaml
Next steps:
1. (Optional) Edit cluster.yaml to parameterize the cluster.
2. Use the "kube-aws render" command to render the CloudFormation stack template and coreos-cloudinit userdata.
Generating credentials...
-> Generating new TLS CA
-> Generating new assets
Success! Stack rendered to ./stack-templates.
Next steps:
1. (Optional) Validate your changes to cluster.yaml with "kube-aws validate"
2. (Optional) Further customize the cluster by modifying templates in ./stack-templates or cloud-configs in ./userdata.
3. Start the cluster with "kube-aws up".
{
"Location": "/K8-Demo"
}
WARN: the worker node pool "nodepool1" is associated to a k8s API endpoint behind the DNS name "K8-Demo-Cluster" managed by YOU!
Please never point the DNS record for it to a different k8s cluster, especially when the name is a "stable" one which is shared among multiple k8s clusters for achieving blue-green deployments of k8s clusters!
kube-aws can't save users from mistakes like that
INFO: generated "credentials/tokens.csv.enc" by encrypting "credentials/tokens.csv"
INFO: generated "credentials/kubelet-tls-bootstrap-token.enc" by encrypting "credentials/kubelet-tls-bootstrap-token"
INFO: generated "credentials/ca.pem.enc" by encrypting "credentials/ca.pem"
INFO: generated "credentials/ca-key.pem.enc" by encrypting "credentials/ca-key.pem"
INFO: generated "credentials/apiserver.pem.enc" by encrypting "credentials/apiserver.pem"
INFO: generated "credentials/apiserver-key.pem.enc" by encrypting "credentials/apiserver-key.pem"
INFO: generated "credentials/worker.pem.enc" by encrypting "credentials/worker.pem"
INFO: generated "credentials/worker-key.pem.enc" by encrypting "credentials/worker-key.pem"
INFO: generated "credentials/admin.pem.enc" by encrypting "credentials/admin.pem"
INFO: generated "credentials/admin-key.pem.enc" by encrypting "credentials/admin-key.pem"
INFO: generated "credentials/etcd.pem.enc" by encrypting "credentials/etcd.pem"
INFO: generated "credentials/etcd-key.pem.enc" by encrypting "credentials/etcd-key.pem"
INFO: generated "credentials/etcd-client.pem.enc" by encrypting "credentials/etcd-client.pem"
INFO: generated "credentials/etcd-client-key.pem.enc" by encrypting "credentials/etcd-client-key.pem"
Validating UserData and stack template...
Validation Report: {
Capabilities: ["CAPABILITY_NAMED_IAM"],
CapabilitiesReason: "The following resource(s) require capabilities: [AWS::CloudFormation::Stack]",
Description: "kube-aws Kubernetes cluster K8-Demo-Cluster"
}
{
Capabilities: ["CAPABILITY_IAM"],
CapabilitiesReason: "The following resource(s) require capabilities: [AWS::IAM::ManagedPolicy]",
Description: "kube-aws Kubernetes cluster K8-Demo-Cluster"
}
{
Capabilities: ["CAPABILITY_IAM"],
CapabilitiesReason: "The following resource(s) require capabilities: [AWS::IAM::ManagedPolicy]",
Description: "kube-aws Kubernetes node pool K8-Demo-Cluster nodepool1",
Parameters: [{
Description: "The name of a control-plane stack used to import values into this stack",
NoEcho: false,
ParameterKey: "ControlPlaneStackName"
}]
}
stack template is valid.
Validation OK!
WARN: the worker node pool "nodepool1" is associated to a k8s API endpoint behind the DNS name "K8-Demo-Cluster" managed by YOU!
Please never point the DNS record for it to a different k8s cluster, especially when the name is a "stable" one which is shared among multiple k8s clusters for achieving blue-green deployments of k8s clusters!
kube-aws can't save users from mistakes like that
Creating AWS resources. Please wait. It may take a few minutes.
Streaming CloudFormation events for the cluster 'K8-Demo-Cluster'...
+00:00:00 CREATE_IN_PROGRESS K8-Demo-Cluster "User Initiated"
+00:00:15 CREATE_IN_PROGRESS Controlplane
+00:00:16 CREATE_IN_PROGRESS Controlplane "Resource creation Initiated"
+00:00:16 CREATE_IN_PROGRESS K8-Demo-Cluster-Controlplane-12YWL0YFTWEK0 "User Initiated"
+00:00:20 CREATE_IN_PROGRESS VPC
+00:00:20 CREATE_IN_PROGRESS IAMManagedPolicyEtcd
+00:00:20 CREATE_IN_PROGRESS InternetGateway
+00:00:20 CREATE_IN_PROGRESS Etcd0EIP
+00:00:20 CREATE_IN_PROGRESS IAMManagedPolicyController
+00:00:20 CREATE_IN_PROGRESS VPC "Resource creation Initiated"
+00:00:20 CREATE_IN_PROGRESS IAMManagedPolicyEtcd "Resource creation Initiated"
+00:00:21 CREATE_IN_PROGRESS InternetGateway "Resource creation Initiated"
+00:00:21 CREATE_IN_PROGRESS Etcd0EIP "Resource creation Initiated"
+00:00:21 CREATE_IN_PROGRESS IAMManagedPolicyController "Resource creation Initiated"
+00:00:25 CREATE_COMPLETE IAMManagedPolicyEtcd
+00:00:25 CREATE_COMPLETE IAMManagedPolicyController
+00:00:28 CREATE_IN_PROGRESS IAMRoleEtcd
+00:00:28 CREATE_IN_PROGRESS IAMRoleController
+00:00:28 CREATE_IN_PROGRESS IAMRoleEtcd "Resource creation Initiated"
+00:00:28 CREATE_IN_PROGRESS IAMRoleController "Resource creation Initiated"
+00:00:37 CREATE_COMPLETE InternetGateway
+00:00:37 CREATE_COMPLETE VPC
+00:00:37 CREATE_COMPLETE IAMRoleEtcd
+00:00:38 CREATE_COMPLETE IAMRoleController
+00:00:39 CREATE_IN_PROGRESS VPCGatewayAttachment
+00:00:39 CREATE_IN_PROGRESS Subnet0
+00:00:39 CREATE_IN_PROGRESS SecurityGroupWorker
+00:00:39 CREATE_IN_PROGRESS SecurityGroupElbAPIServer
+00:00:39 CREATE_IN_PROGRESS Subnet0RouteTable
+00:00:40 CREATE_IN_PROGRESS APIEndpointDefaultSG
+00:00:40 CREATE_IN_PROGRESS SecurityGroupEtcd
+00:00:40 CREATE_IN_PROGRESS Subnet0RouteTable "Resource creation Initiated"
+00:00:40 CREATE_IN_PROGRESS IAMInstanceProfileController
+00:00:40 CREATE_IN_PROGRESS SecurityGroupElbAPIServer "Resource creation Initiated"
+00:00:40 CREATE_IN_PROGRESS SecurityGroupWorker "Resource creation Initiated"
+00:00:40 CREATE_IN_PROGRESS IAMInstanceProfileEtcd
+00:00:40 CREATE_IN_PROGRESS Subnet0 "Resource creation Initiated"
+00:00:41 CREATE_IN_PROGRESS IAMInstanceProfileEtcd "Resource creation Initiated"
+00:00:41 CREATE_IN_PROGRESS SecurityGroupEtcd "Resource creation Initiated"
+00:00:41 CREATE_COMPLETE Subnet0RouteTable
+00:00:41 CREATE_IN_PROGRESS IAMInstanceProfileController "Resource creation Initiated"
+00:00:41 CREATE_COMPLETE SecurityGroupElbAPIServer
+00:00:42 CREATE_IN_PROGRESS APIEndpointDefaultSG "Resource creation Initiated"
+00:00:43 CREATE_COMPLETE APIEndpointDefaultSG
+00:00:43 CREATE_COMPLETE SecurityGroupEtcd
+00:00:43 CREATE_IN_PROGRESS Subnet0RouteToInternet
+00:00:43 CREATE_IN_PROGRESS Etcd0EBS
+00:00:44 CREATE_IN_PROGRESS Subnet0RouteToInternet "Resource creation Initiated"
+00:00:44 CREATE_IN_PROGRESS Etcd0EBS "Resource creation Initiated"
+00:00:45 CREATE_IN_PROGRESS SecurityGroupEtcdPeerIngress
+00:00:46 CREATE_IN_PROGRESS SecurityGroupEtcdPeerIngress "Resource creation Initiated"
+00:00:46 CREATE_IN_PROGRESS SecurityGroupController
+00:00:46 CREATE_IN_PROGRESS SecurityGroupEtcdPeerHealthCheckIngress
+00:00:46 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromWorkerToWorkerKubeletReadOnly
+00:00:46 CREATE_IN_PROGRESS SecurityGroupEtcdPeerHealthCheckIngress "Resource creation Initiated"
+00:00:46 CREATE_IN_PROGRESS SecurityGroupEtcdIngressFromWorkerToEtcd
+00:00:46 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromWorkerToWorkerKubeletReadOnly "Resource creation Initiated"
+00:00:46 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromWorkerToFlannel
+00:00:46 CREATE_IN_PROGRESS SecurityGroupEtcdIngressFromWorkerToEtcd "Resource creation Initiated"
+00:00:46 CREATE_IN_PROGRESS SecurityGroupController "Resource creation Initiated"
+00:00:46 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromWorkerToFlannel "Resource creation Initiated"
+00:00:47 CREATE_COMPLETE SecurityGroupEtcdPeerIngress
+00:00:47 CREATE_COMPLETE SecurityGroupEtcdPeerHealthCheckIngress
+00:00:47 CREATE_COMPLETE SecurityGroupWorkerIngressFromWorkerToWorkerKubeletReadOnly
+00:00:47 CREATE_COMPLETE SecurityGroupEtcdIngressFromWorkerToEtcd
+00:00:48 CREATE_COMPLETE SecurityGroupWorkerIngressFromWorkerToFlannel
+00:00:49 CREATE_COMPLETE SecurityGroupController
+00:00:51 CREATE_IN_PROGRESS SecurityGroupEtcdIngressFromControllerToEtcd
+00:00:51 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromControllerTocAdvisor
+00:00:52 CREATE_IN_PROGRESS SecurityGroupControllerIngressFromControllerToController
+00:00:52 CREATE_IN_PROGRESS SecurityGroupControllerIngressFromControllerToKubelet
+00:00:52 CREATE_IN_PROGRESS SecurityGroupEtcdIngressFromControllerToEtcd "Resource creation Initiated"
+00:00:52 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromControllerTocAdvisor "Resource creation Initiated"
+00:00:52 CREATE_IN_PROGRESS SecurityGroupControllerIngressFromControllerToController "Resource creation Initiated"
+00:00:52 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromWorkerToControllerKubeletReadOnly
+00:00:52 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromControllerToKubelet
+00:00:52 CREATE_IN_PROGRESS SecurityGroupControllerIngressFromControllerToKubelet "Resource creation Initiated"
+00:00:52 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromWorkerToControllerKubeletReadOnly "Resource creation Initiated"
+00:00:52 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromControllerToFlannel
+00:00:52 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromControllerToKubelet "Resource creation Initiated"
+00:00:52 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromFlannelToController
+00:00:52 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromControllerToFlannel "Resource creation Initiated"
+00:00:52 CREATE_COMPLETE SecurityGroupControllerIngressFromControllerToController
+00:00:52 CREATE_COMPLETE SecurityGroupWorkerIngressFromControllerTocAdvisor
+00:00:53 CREATE_IN_PROGRESS SecurityGroupWorkerIngressFromFlannelToController "Resource creation Initiated"
+00:00:53 CREATE_IN_PROGRESS SecurityGroupControllerIngressFromWorkerToEtcd
+00:00:53 CREATE_COMPLETE SecurityGroupEtcdIngressFromControllerToEtcd
+00:00:53 CREATE_COMPLETE SecurityGroupControllerIngressFromControllerToKubelet
+00:00:53 CREATE_IN_PROGRESS SecurityGroupControllerIngressFromWorkerToEtcd "Resource creation Initiated"
+00:00:53 CREATE_COMPLETE SecurityGroupWorkerIngressFromWorkerToControllerKubeletReadOnly
+00:00:53 CREATE_COMPLETE SecurityGroupWorkerIngressFromControllerToKubelet
+00:00:53 CREATE_COMPLETE SecurityGroupWorkerIngressFromControllerToFlannel
+00:00:53 CREATE_COMPLETE SecurityGroupWorkerIngressFromFlannelToController
+00:00:54 CREATE_COMPLETE SecurityGroupControllerIngressFromWorkerToEtcd
+00:00:56 CREATE_COMPLETE VPCGatewayAttachment
+00:00:57 CREATE_COMPLETE Subnet0
+00:00:59 CREATE_COMPLETE Subnet0RouteToInternet
+00:00:59 CREATE_IN_PROGRESS APIEndpointDefaultELB
+00:01:00 CREATE_IN_PROGRESS Subnet0RouteTableAssociation
+00:01:00 CREATE_COMPLETE Etcd0EBS
+00:01:01 CREATE_IN_PROGRESS APIEndpointDefaultELB "Resource creation Initiated"
+00:01:01 CREATE_IN_PROGRESS Subnet0RouteTableAssociation "Resource creation Initiated"
+00:01:01 CREATE_COMPLETE APIEndpointDefaultELB
+00:01:16 CREATE_COMPLETE Subnet0RouteTableAssociation
+00:02:41 CREATE_COMPLETE IAMInstanceProfileEtcd
+00:02:42 CREATE_COMPLETE IAMInstanceProfileController
+00:02:44 CREATE_IN_PROGRESS Etcd0LC
+00:02:44 CREATE_IN_PROGRESS Etcd0LC "Resource creation Initiated"
+00:02:45 CREATE_COMPLETE Etcd0LC
+00:02:48 CREATE_IN_PROGRESS ControllersLC
+00:02:49 CREATE_IN_PROGRESS ControllersLC "Resource creation Initiated"
+00:02:49 CREATE_COMPLETE ControllersLC
+00:02:50 CREATE_IN_PROGRESS Etcd0
+00:02:51 CREATE_IN_PROGRESS Etcd0 "Resource creation Initiated"
+00:05:37 CREATE_IN_PROGRESS Etcd0 "Received SUCCESS signal with UniqueId i-0b6bcccaa384fcca9"
+00:05:42 CREATE_COMPLETE Etcd0
+00:05:45 CREATE_IN_PROGRESS Controllers
+00:05:46 CREATE_IN_PROGRESS Controllers "Resource creation Initiated"
+00:08:31 CREATE_IN_PROGRESS Controllers "Received SUCCESS signal with UniqueId i-09347bb0bd9415d7b"
+00:08:39 CREATE_COMPLETE Controllers
+00:08:42 CREATE_COMPLETE K8-Demo-Cluster-Controlplane-12YWL0YFTWEK0
+00:08:47 CREATE_COMPLETE Controlplane
+00:08:50 CREATE_IN_PROGRESS Nodepool1
+00:08:51 CREATE_IN_PROGRESS Nodepool1 "Resource creation Initiated"
+00:08:51 CREATE_IN_PROGRESS K8-Demo-Cluster-Nodepool1-N8BI3XVS9LNN "User Initiated"
+00:08:55 CREATE_IN_PROGRESS IAMManagedPolicyWorker
+00:08:56 CREATE_IN_PROGRESS IAMManagedPolicyWorker "Resource creation Initiated"
+00:09:00 CREATE_COMPLETE IAMManagedPolicyWorker
+00:09:03 CREATE_IN_PROGRESS IAMRoleWorker
+00:09:03 CREATE_IN_PROGRESS IAMRoleWorker "Resource creation Initiated"
+00:09:13 CREATE_COMPLETE IAMRoleWorker
+00:09:26 CREATE_IN_PROGRESS IAMInstanceProfileWorker
+00:09:26 CREATE_IN_PROGRESS IAMInstanceProfileWorker "Resource creation Initiated"
+00:11:27 CREATE_COMPLETE IAMInstanceProfileWorker
+00:11:30 CREATE_IN_PROGRESS WorkersLC
+00:11:31 CREATE_IN_PROGRESS WorkersLC "Resource creation Initiated"
+00:11:31 CREATE_COMPLETE WorkersLC
+00:11:34 CREATE_IN_PROGRESS Workers
+00:11:35 CREATE_IN_PROGRESS Workers "Resource creation Initiated"
+00:14:09 CREATE_IN_PROGRESS Workers "Received SUCCESS signal with UniqueId i-0c0e17c2579bf1e16"
+00:14:10 CREATE_COMPLETE Workers
+00:14:18 CREATE_COMPLETE K8-Demo-Cluster-Nodepool1-N8BI3XVS9LNN
+00:14:50 CREATE_COMPLETE Nodepool1
+00:14:52 CREATE_COMPLETE K8-Demo-Cluster
Success! Your AWS resources have been created:
Cluster Name: K8-Demo-Cluster
Controller DNS Names: K8-Demo-C-APIEndpo-1EIYYCTR51YML-786635646.us-east-1.elb.amazonaws.com
The containers that power your cluster are now being downloaded.
You should be able to access the Kubernetes API once the containers finish downloading.
3. Verify the cluster in the AWS Console:
Once the cluster setup container has completed, you can verify that the stack is complete and running via your AWS CloudFormation console.
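The same check can be performed with the AWS CLI; the stack name below matches the example cluster name used in this lab:
aws cloudformation describe-stacks --stack-name K8-Demo-Cluster --query 'Stacks[0].StackStatus'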
Configure Kubectl:
Kubectl is the tool used to manage and obtain information about a Kubernetes cluster. In order to proceed with the lab, we must ensure that Kubectl is installed, configured, and operating correctly. To use the cluster created earlier (with kube-aws), we'll need the kubeconfig file from the cluster creation steps. The Kubernetes cluster setup container should have uploaded the kubeconfig file and credentials directory to the configured S3 bucket. You can re-download those files with the following commands:
cd /tmp
aws s3 cp s3://{{BUCKET_NAME}}/kubeconfig .
aws s3 sync s3://{{BUCKET_NAME}}/credentials credentials/ --region us-east-2
In order to use Kubectl to reach and administer your cluster, you will need to make sure that the FQDN of the cluster name set during the setup process is DNS resolvable. If the cluster name was set to a generic value, add an entry to the hosts file that resolves the externally available IP of the Kube Services generated load balancer to the cluster name.
Host File Entry:
The hosts file can be found at /etc/hosts on Linux and OSX based systems, and at C:\Windows\System32\Drivers\etc\hosts on Windows based systems.
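For example, assuming the load balancer currently resolves to 203.0.113.25 (a placeholder address) and the cluster FQDN is K8-Demo-Cluster.k8clusterdemo.com, the hosts file entry would be:
203.0.113.25    K8-Demo-Cluster.k8clusterdemo.com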
1. Verify kubectl:
Verify kubectl is installed and working.
kubectl --kubeconfig=kubeconfig version
Response:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-14T06:55:55Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2. Validate Cluster Access:
Validate your access to the cluster:
kubectl --kubeconfig=kubeconfig get nodes
Response:
NAME STATUS AGE VERSION
ip-10-0-0-32.us-east-2.compute.internal Ready 1h v1.7.4+coreos.0
3. Set Environment:
Set your environment to use the kubeconfig
file by default.
export KUBECONFIG=$(pwd)/kubeconfig
Deploying to K8 (CLI):
Creating a Deployment:
We need to create a deployment in Kubernetes. Deployments provide declarative updates for Pods by defining a desired state in your template, and the Deployment Controller makes the changes to the Pods. Deployments define things like environment variables, the container image you wish to use, and the resources you want to allocate to the service (port, memory, CPU).
1. Create Deployment Template:
To create a Deployment, copy the YAML-formatted template below and save it locally on your drive as deployment.yml. If you are using a remote host to run this lab, copy the template to your clipboard, and on the build host paste it into a file using vim /media/deployment.yml: press i, paste, then save the file by typing esc followed by :wq!.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{DEPLOYMENT_NAME}}
  labels:
    app: {{DEPLOYMENT_LABEL}}
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{DEPLOYMENT_LABEL}}
    spec:
      containers:
      - image: {{CONTAINER_IMAGE}}
        name: {{CONTAINER_NAME}}
        ports:
        - containerPort: {{CONTAINER_PORT}}
Example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: k8s-lab-nginx
  labels:
    app: k8s-lab-nginx
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: k8s-lab-nginx
    spec:
      containers:
      - image: nginx:latest
        name: k8s-lab-web-port
        ports:
        - containerPort: 80
* We've specified a specific container image, including the :latest tag. Although it's not important for this lab, in a production environment where deployment definitions are created programmatically from a CI/CD pipeline, they could reference a specific SHA hash or a more precise tag.
* In this example, you will also notice that the image we are using is simply nginx:latest, which appears to be missing the registry portion of the image naming convention. If an image only shows the repository:tag designation, it implies that the image will be pulled from Docker Hub and is an official image which doesn't require a namespace prefix. When the docker daemon sees a repository:tag designation, it automatically pulls the library/nginx:latest image directly from the [Docker Hub](https://hub.docker.com).
2. Launch Deployment:
Once the deployment file has been successfully created and saved, we can register the deployment with the following command.
kubectl create -f deployment.yml
Response:
deployment "k8s-lab-nginx" created
Creating the Service:
Next, we need to create the service in Kubernetes. Services include a logical set of Pods (usually determined by a Label Selector) and a policy by which to access/expose the Pods.
1. Create Service Template:
To create a Service, copy the YAML-formatted template below and save it locally on your drive as service.yml. If you are using a remote host to run this lab, copy the template to your clipboard, and on the build host paste it into a file using vim /media/service.yml: press i, paste, then save the file by typing esc followed by :wq!.
apiVersion: v1
kind: Service
metadata:
  name: {{SERVICE_NAME}}
  labels:
    app: {{SERVICE_LABEL}}
spec:
  selector:
    app: {{DEPLOYMENT_LABEL}}
  ports:
  - protocol: {{PROTOCOL}}
    port: {{HOST_PORT}}
    targetPort: {{CONTAINER_PORT}}
  type: LoadBalancer
Example:
apiVersion: v1
kind: Service
metadata:
  name: k8s-lab-nginx
  labels:
    app: K8sLabNginx
spec:
  selector:
    app: k8s-lab-nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Note that the selector (spec.selector) in the service.yml file matches the label provided in the Pod template (spec.template.metadata.labels) in the deployment.yml file. Also, the target port (spec.ports.targetPort) in the service.yml file matches the container port (spec.template.spec.containers[].ports[].containerPort) in the deployment.yml file.
The type: LoadBalancer setting tells Kubernetes to automatically create a new classic load balancer in AWS.
2. Launch the Service:
Once the service file has been successfully created and saved, we can register the service with the following command.
kubectl create -f service.yml
Response:
service "k8s-lab-nginx" created
3. Verify the Service:
Wait a few seconds (up to a minute) for the load balancer to be created. Monitor the progress using the kubectl get services --output=wide -l app=K8sLabNginx command. Once your EXTERNAL-IP has changed from <pending> to the DNS name of the ELB, your service has been created and exposed.
Note the change in the EXTERNAL-IP from <pending> to aa2cdeddc9bd911e78a591656dfbed74-1854465911.us-east-1.elb.amazonaws.com.
The full EXTERNAL-IP value has been truncated in the output below for formatting purposes.
FROM
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
k8s-lab-nginx 10.3.0.77 <pending> 80:30521/TCP 1h app=k8s-lab-nginx
TO
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
k8s-lab-nginx 10.3.0.77 aa2cd.amazonaws.com 80:30521/TCP 1h app=k8s-lab-nginx
Validate your application is up and running. Run this command to get the DNS name of the ELB, and browse to that site.
echo $(kubectl get services k8s-lab-nginx -o go-template='{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}')
Response:
a7374e1f7a17d11e79a5006b7b63ef69-1033327378.us-east-2.elb.amazonaws.com
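As a quick sanity check, the same endpoint can be tested with curl (the ELB may take another minute or two to pass health checks before it responds):
curl -I http://a7374e1f7a17d11e79a5006b7b63ef69-1033327378.us-east-2.elb.amazonaws.com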
Deploying to K8 (GUI/Console):
Kubectl Proxy:
In order to launch the Kubernetes GUI, we need to use Kubectl to launch a proxy that will allow us to use our local workstation to connect to and administer the Kubernetes cluster.
In order to use the kubectl proxy, you must launch the proxy on a workstation with a GUI. Launching the proxy on a remote workstation will not work, as the kubectl proxy only accepts requests from 127.0.0.1; the proxy must be running on the same workstation as the browser being used to connect to it. Configuration options do exist that allow you to change the proxy listener from 127.0.0.1 to 0.0.0.0, however authentication will fail, as the proxy requires authentication certificates in order to access the Kubernetes GUI.
1. Launch Kube proxy:
Launch a Kubernetes proxy to the cluster. Note the IP:port (default: 127.0.0.1:8001) and leave the command window open, or append the & designator to the end of the command to run it as a background process.
kubectl --kubeconfig=kubeconfig proxy
2. Open UI in Browser:
Browse to http://127.0.0.1:8001/ui/
Creating a Deployment:
We need to create a deployment in Kubernetes. Deployments provide declarative updates for Pods by defining a desired state in your template, and the Deployment Controller makes the changes to the Pods. Deployments define things like environment variables, the container image you wish to use, and the resources you want to allocate to the service (port, memory, CPU).
1. Create Deployment Template:
To create a Deployment, copy the YAML-formatted template below and save it locally on your drive as deployment.yml.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: {{DEPLOYMENT_NAME}}
  labels:
    app: {{DEPLOYMENT_LABEL}}
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{DEPLOYMENT_LABEL}}
    spec:
      containers:
      - image: {{CONTAINER_IMAGE}}
        name: {{CONTAINER_NAME}}
        ports:
        - containerPort: {{CONTAINER_PORT}}
Example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: k8s-lab-nginx
  labels:
    app: k8s-lab-nginx
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: k8s-lab-nginx
    spec:
      containers:
      - image: nginx:latest
        name: k8s-lab-web-port
        ports:
        - containerPort: 80
* We've specified a specific container image, including the :latest tag. Although it's not important for this lab, in a production environment where deployment definitions are created programmatically from a CI/CD pipeline, they could reference a specific SHA hash or a more precise tag.
* In this example, you will also notice that the image we are using is simply nginx:latest, which appears to be missing the registry portion of the image naming convention. If an image only shows the repository:tag designation, it implies that the image will be pulled from Docker Hub and is an official image which doesn't require a namespace prefix. When the docker daemon sees a repository:tag designation, it automatically pulls the library/nginx:latest image directly from the [Docker Hub](https://hub.docker.com).
2. Workloads UI:
Navigate to Workloads > Deployments and click CREATE in the top-right corner.
3. Upload Template:
Select Upload a YAML or JSON file, upload your modified template deployment.yml
, and click UPLOAD.
4. Validate the Deployment:
Navigate back to Workloads > Deployments to validate your deployment exists.
Creating the Service:
Next, we need to create the service in Kubernetes. Services include a logical set of Pods (usually determined by a Label Selector) and a policy by which to access/expose the Pods.
1. Create Service Template:
To create a Service, copy the YAML-formatted template below and save it locally on your drive as service.yml.
apiVersion: v1
kind: Service
metadata:
  name: {{SERVICE_NAME}}
  labels:
    app: {{SERVICE_LABEL}}
spec:
  selector:
    app: {{DEPLOYMENT_LABEL}}
  ports:
  - protocol: {{PROTOCOL}}
    port: {{HOST_PORT}}
    targetPort: {{CONTAINER_PORT}}
  type: LoadBalancer
Example:
apiVersion: v1
kind: Service
metadata:
  name: k8s-lab-nginx
  labels:
    app: K8sLabNginx
spec:
  selector:
    app: k8s-lab-nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Note that the selector (spec.selector) in the service.yml file matches the label provided in the Pod template (spec.template.metadata.labels) in the deployment.yml file. Also, the target port (spec.ports.targetPort) in the service.yml file matches the container port (spec.template.spec.containers[].ports[].containerPort) in the deployment.yml file.
2. Discovery and LBing:
Navigate to Discovery and Load Balancing > Services and click CREATE in the top-right corner.
3. Upload the Template:
Select Upload a YAML or JSON file, upload your modified template service.yml
, and click UPLOAD.
4. Validate the Service:
Navigate back to Discovery and Load Balancing > Services to validate your service exists.
5. Validate the Load Balancer:
Wait a few seconds (up to a minute) for the load balancer to be created. Refresh your page to check the progress. Once your External endpoint has changed from - to the DNS name of the ELB, your service has been created and exposed.
6. Validate Test Application:
Validate your application is up and running. Click the name of the External endpoint to open your application in a new tab/window.
Resources:
Dockerfile for Kubernetes Builder:
############################################################
# Dockerfile to build Kubernetes Cluster Builder
# Based on: alpine:latest
# DATE: 09/18/2017
# COPYRIGHT: Appcontainers.com
############################################################
# Set the base image
FROM alpine:latest
# File Author / Maintainer
MAINTAINER "Rich Nason" <rnason@clusterfrak.com>
###################################################################
#*************** OVERRIDE ENABLED ENV VARIABLES *****************
###################################################################
ENV KUBE_AWS_VERSION 0.9.8
ENV AWS_KEY ""
ENV AWS_SECRET ""
ENV AWS_REGION us-east-1
ENV AWS_AZ us-east-1b
ENV KMS_DESC AWS Container Demo
ENV KMS_ALIAS alias/container-demo
ENV IAM_USER_ARN arn:aws:iam::015811329325:user/container_lab
ENV HOSTED_ZONE_ID A12BCDEFGH34I5
ENV CLUSTER_NAME K8-Demo
ENV DOMAIN_NAME "mydomain.local"
ENV BUCKET_NAME K8-Demo
ENV KEY_NAME ""
ENV NUM_WORKERS 1
###################################################################
#******************* UPDATES & PRE-REQS *************************
###################################################################
# Install dependencies
RUN apk update && apk add curl python py-pip jq && pip install awscli && mkdir -p /root/.aws
###################################################################
#******************* APPLICATION INSTALL ************************
###################################################################
# Grab latest version of kube-aws
RUN curl -L https://github.com/kubernetes-incubator/kube-aws/releases/download/v${KUBE_AWS_VERSION}/kube-aws-linux-amd64.tar.gz -o /tmp/kube-aws-linux-amd64.tar.gz && \
tar -zxvf /tmp/kube-aws-linux-amd64.tar.gz -C /tmp && mv /tmp/linux-amd64/kube-aws /usr/local/bin/kube-aws && rm -fr /tmp/kube-aws-linux-amd64.tar.gz /tmp/linux-amd64 && kube-aws version
###################################################################
#***************** CONFIGURE START ITEMS ************************
###################################################################
# Export the AWS Access and secret keys
RUN echo "export AWS_ACCESS_KEY_ID=$AWS_KEY" >> /root/.bashrc && \
echo "export AWS_SECRET_ACCESS_KEY=$AWS_SECRET" >> /root/.bashrc && \
echo "export AWS_DEFAULT_REGION=$AWS_REGION" >> /root/.bashrc
###################################################################
#****************** ADD REQUIRED APP FILES **********************
###################################################################
ADD kubecluster_builder.sh /root/
RUN chmod +x /root/kubecluster_builder.sh
###################################################################
#************* CMD & EXPOSE APPLICATION PORTS *******************
###################################################################
WORKDIR /root/
CMD ["/root/kubecluster_builder.sh"]
Kubernetes Builder Script:
The builder script is already bundled into the appcontainers/kubecluster-builder:latest container image and will execute automatically when the container runs.
#!/bin/sh
# Generate KMS Key
#RUN aws kms create-key --description "${KMS_DESC}" --policy "{\"Version\":\"2012-10-17\",\"Id\":\"key-default-1\",\"Statement\":[{\"Sid\":\"Enable IAM User Permissions\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"${IAM_USER_ARN}\"},\"Action\": \"kms:*\",\"Resource\":\"*\"}]}" > key.json && \
# Ensure we are running from the /root directory.
cd /root
# Export the AWS Access and secret keys
export AWS_ACCESS_KEY_ID=$AWS_KEY
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET
export AWS_DEFAULT_REGION=$AWS_REGION
# Create KMS keys
aws kms create-key --description "$KMS_DESC" > key.json
aws kms create-alias --alias-name "$KMS_ALIAS" --target-key-id $(jq -r '.KeyMetadata.KeyId' key.json)
# Init the Cluster (only creates the cluster.yaml file)
kube-aws init --cluster-name=$CLUSTER_NAME \
--hosted-zone-id=$HOSTED_ZONE_ID \
--external-dns-name=$CLUSTER_NAME.$DOMAIN_NAME \
--region=$AWS_REGION \
--availability-zone=$AWS_AZ \
--key-name=$KEY_NAME \
--kms-key-arn=`jq -r '.KeyMetadata.Arn' key.json`
# Generate all of the required assets
kube-aws render credentials --generate-ca
kube-aws render stack
# Set the worker count to the requested number of worker nodes
if [[ "$NUM_WORKERS" == 1 ]]; then
echo "Workers Launched = 1"
else
sed -i "s/#workerCount:\ 1/worker.count:\ $NUM_WORKERS/g" cluster.yaml
echo "Workers Launched = $NUM_WORKERS"
fi
# Create S3 Bucket
if [[ "$AWS_REGION" == "us-east-1" ]]; then
aws s3api create-bucket --bucket $BUCKET_NAME
else
aws s3api create-bucket --bucket $BUCKET_NAME --create-bucket-configuration LocationConstraint=$AWS_REGION
fi
kube-aws validate --s3-uri s3://$BUCKET_NAME
# Bring the cluster online by running the generated cloudformation template.
kube-aws up --s3-uri s3://$BUCKET_NAME
# Copy the kubeconfig and credentials files to the S3 bucket
aws s3 cp kubeconfig s3://$BUCKET_NAME
aws s3 cp cluster.yaml s3://$BUCKET_NAME
aws s3 sync credentials/ s3://$BUCKET_NAME/credentials
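When you are finished with the lab, the cluster can be torn down with the same tooling; run this from a directory containing the generated cluster.yaml (a cleanup sketch, not part of the builder script):
kube-aws destroy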
Kubernetes Builder Docker Run Statement:
docker run -it --rm --name k8-builder -h k8-builder \
-e AWS_KEY="ABCDEFGHIJKLMNOPQRST" \
-e AWS_SECRET="abcdefghijklmnopqrstuvwxyz1234567890abcd" \
-e AWS_REGION="us-east-2" \
-e AWS_AZ="us-east-2b" \
-e KMS_DESC="AWS Kubernetes Demo" \
-e KMS_ALIAS="alias/k8s-demo" \
-e IAM_USER_ARN="arn:aws:iam::012345678910:user/k8-builder" \
-e HOSTED_ZONE_ID="r53zone.tld" \
-e CLUSTER_NAME="K8-Demo-Cluster" \
-e DOMAIN_NAME="mydomain.com" \
-e BUCKET_NAME="kubernetes-cluster-demo" \
-e KEY_NAME="My_Key_Name" \
appcontainers/kubecluster-builder:latest
Lab Pre-Req Packer File:
Packer File:
{
"builders": [{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "{{user `region`}}",
"source_ami": "{{user `ami`}}",
"instance_type": "{{user `instance_type`}}",
"ami_name": "{{user `ami_name`}}-{{timestamp}}",
"ami_description": "{{user `ami_description`}}",
"availability_zone": "{{user `availability_zone`}}",
"vpc_id": "{{user `vpc_id`}}",
"subnet_id": "{{user `subnet_id`}}",
"security_group_id": "{{user `security_group_id`}}",
"ssh_keypair_name": "{{user `ssh_keypair_name`}}",
"ssh_agent_auth": true,
"ssh_username": "{{user `ssh_username`}}",
"associate_public_ip_address": true,
"ssh_private_ip": false,
"tags": {
"Name": "{{user `tag_name`}}",
"OS_Version": "{{user `tag_osver`}}"
}
}],
"provisioners": [
{
"type": "shell",
"inline": [
"sudo yum clean all",
"sudo yum -y update",
"sudo yum install -y docker jq",
"sudo chkconfig docker on",
"sudo /etc/init.d/docker start",
"sudo pip install awscli",
"sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl",
"sudo mv kubectl /usr/bin/",
"sudo chmod +x /usr/bin/kubectl",
"sudo curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose",
"sudo chmod +x /usr/local/bin/docker-compose"
]
}
]}
Packer Build-Vars File:
{
"aws_access_key": "ABCDEFGHIJKLMNOPQRST",
"aws_secret_key": "abcdefghijklmnopqrstuvwxyz1234567890abcd",
"instance_type": "t2.small",
"region": "us-east-2",
"availability_zone": "us-east-2a",
"ami": "ami-ea87a78f",
"vpc_id": "vpc-y12345ba",
"subnet_id": "subnet-12a3456b",
"security_group_id": "sg-a6ca00cd",
"ssh_keypair_name": "MyKey",
"ssh_username": "ec2-user",
"ami_name": "Container-Lab",
"ami_description": "Image with all of the tools required to run ECS/K8 Labs",
"tag_name": "Container-Lab",
"tag_osver": "Amazon Linux"
}
Packer Build Command:
packer build -var-file=buildvars.json container_lab.json
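Before building, the template and variable file can be checked with Packer's built-in validator:
packer validate -var-file=buildvars.json container_lab.json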
Controller DNS Names: K8-Demo-C-APIEndpo-1EIYYCTR51YML-786635646.us-east-1.elb.amazonaws.com
The containers that power your cluster are now being downloaded.
You should be able to access the Kubernetes API once the containers finish downloading.
3. Verify the cluster in the AWS Console:
Once the cluster setup container has completed, you can verify that the stack is complete and running via your AWS CloudFormation console.
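Alternatively, you can check the stack status from the command line with the AWS CLI (assuming the K8-Demo-Cluster stack name used in this lab); it should return CREATE_COMPLETE once the stack has finished:
aws cloudformation describe-stacks --stack-name K8-Demo-Cluster --query 'Stacks[0].StackStatus' --output text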
Configure Kubectl:
Kubectl is the tool used to manage and obtain information about a Kubernetes cluster. In order to proceed with the lab, we must ensure that kubectl is installed, configured, and operating correctly. To use the cluster created earlier (with kube-aws), we'll need the kubeconfig file from the cluster creation steps. The Kubernetes cluster setup container should have uploaded the kubeconfig file and credentials directory to the configured S3 bucket. You can re-download those files with the following commands:
cd /tmp
aws s3 cp s3://{{BUCKET_NAME}}/kubeconfig . --region us-east-2
aws s3 sync s3://{{BUCKET_NAME}}/credentials credentials/ --region us-east-2
In order to use kubectl to reach and administer your cluster, the FQDN formed from the cluster name set during the setup process must be DNS-resolvable. If the cluster name was set to a generic value, add an entry to your hosts file that resolves the externally available IP of the Kubernetes API load balancer to that cluster name.
Host File Entry:
The hosts file can be found at /etc/hosts on Linux and macOS systems, and at C:\Windows\System32\Drivers\etc\hosts on Windows systems.
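For example, assuming the cluster FQDN K8-Demo-Cluster.mydomain.com used later in this lab, the entry would look like the following (the IP shown here is hypothetical; resolve your API endpoint ELB DNS name with nslookup or dig to obtain a real one):
52.14.0.10    K8-Demo-Cluster.mydomain.com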
1. Verify kubectl:
Verify kubectl is installed and working.
kubectl --kubeconfig=kubeconfig version
Response:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-14T06:55:55Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2. Validate Cluster Access:
Validate your access to the cluster:
kubectl --kubeconfig=kubeconfig get nodes
Response:
NAME STATUS AGE VERSION
ip-10-0-0-32.us-east-2.compute.internal Ready 1h v1.7.4+coreos.0
3. Set Environment:
Set your environment to use the kubeconfig file by default.
export KUBECONFIG=$(pwd)/kubeconfig
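With the KUBECONFIG variable exported, kubectl commands no longer require the --kubeconfig flag. For example:
kubectl get nodes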
Deploying to K8 (CLI):
Creating a Deployment:
We need to create a deployment in kubernetes. Deployments provide declarative updates for Pods by defining a desired state in your template, and the Deployment Controller makes the changes to the Pods. Deployments define things like environment variables, the container image you wish to use, and the resources you want to allocate to the service (port, memory, CPU).
1. Create Deployment Template:
To create a Deployment, copy the YAML-formatted template below and save it locally on your drive as deployment.yml. If you are using a remote host to run this lab, copy the template to your clipboard, then on the build host open a new file with vim /media/deployment.yml, press i to enter insert mode, paste the template, and save the file by pressing esc and typing :wq!.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: {{DEPLOYMENT_NAME}}
labels:
app: {{DEPLOYMENT_LABEL}}
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: {{DEPLOYMENT_LABEL}}
spec:
containers:
- image: {{CONTAINER_IMAGE}}
name: {{CONTAINER_NAME}}
ports:
- containerPort: {{CONTAINER_PORT}}
Example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: k8s-lab-nginx
labels:
app: k8s-lab-nginx
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: k8s-lab-nginx
spec:
containers:
- image: nginx:latest
name: k8s-lab-web-port
ports:
- containerPort: 80
* We've specified a specific container image, including the :latest tag. Although it's not important for this lab, in a production environment where deployment definitions are generated programmatically from a CI/CD pipeline, the image reference could include a specific SHA digest or a more precise tag.
* In this example, you will also notice that the image we are using is simply nginx:latest, which appears to be missing the registry portion of the image naming convention. If an image only shows the repository:tag designation, the image is pulled from Docker Hub, and it is an official image that doesn't require a namespace prefix. When the docker daemon sees a bare repository:tag designation such as nginx:latest, it implicitly pulls library/nginx:latest directly from [Docker Hub](https://hub.docker.com).
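To illustrate, the following two pull commands are equivalent; the second simply spells out the implied library namespace:
docker pull nginx:latest
docker pull library/nginx:latest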
2. Launch Deployment:
Once the deployment file has been successfully created and saved, we can register the deployment with the following steps.
kubectl create -f deployment.yml
Response:
deployment "k8s-lab-nginx" created
Creating the Service:
Next, we need to create the service in kubernetes. Services include a logical set of Pods (usually determined by a Label Selector) and a policy by which to access/expose the Pods.
1. Create Service Template:
To create a Service, copy the YAML-formatted template below and save it locally on your drive as service.yml. If you are using a remote host to run this lab, copy the template to your clipboard, then on the build host open a new file with vim /media/service.yml, press i to enter insert mode, paste the template, and save the file by pressing esc and typing :wq!.
apiVersion: v1
kind: Service
metadata:
name: {{SERVICE_NAME}}
labels:
app: {{SERVICE_LABEL}}
spec:
selector:
app: {{DEPLOYMENT_LABEL}}
ports:
- protocol: {{PROTOCOL}}
port: {{HOST_PORT}}
targetPort: {{CONTAINER_PORT}}
type: LoadBalancer
Example:
apiVersion: v1
kind: Service
metadata:
name: k8s-lab-nginx
labels:
app: K8sLabNginx
spec:
selector:
app: k8s-lab-nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
Note that the selector (spec.selector) in the service.yml file matches the label provided in the Pod template (spec.template.metadata.labels) in the deployment.yml file. Also, the target port (spec.ports[].targetPort) in the service.yml file matches the container port (spec.template.spec.containers[].ports[].containerPort) in the deployment.yml file.
The type: LoadBalancer setting tells Kubernetes to automatically create a new classic load balancer in AWS.
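After the service is launched in the next step, you can also inspect the provisioned load balancer with kubectl describe; the LoadBalancer Ingress field will show the ELB DNS name:
kubectl describe service k8s-lab-nginx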
2. Launch the Service:
Once the service file has been successfully created and saved, we can register the service with the following steps.
kubectl create -f service.yml
Response:
service "k8s-lab-nginx" created
3. Verify the Service:
Wait a few seconds (up to a minute) for the load balancer to be created. Monitor the progress using the kubectl get services --output=wide -l app=K8sLabNginx command. Once your EXTERNAL-IP has changed from <pending> to the DNS name of the ELB, your service has been created and exposed.
Note the change in the EXTERNAL-IP from <pending> to aa2cdeddc9bd911e78a591656dfbed74-1854465911.us-east-1.elb.amazonaws.com.
The full EXTERNAL-IP value has been truncated in the output below for formatting purposes.
FROM
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
k8s-lab-nginx 10.3.0.77 <pending> 80:30521/TCP 1h app=k8s-lab-nginx
TO
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
k8s-lab-nginx 10.3.0.77 aa2cd...amazonaws.com 80:30521/TCP 1h app=k8s-lab-nginx
Validate your application is up and running. Run this command to get the DNS name of the ELB, and browse to that site.
echo $(kubectl get services k8s-lab-nginx -o go-template='{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}')
Response:
a7374e1f7a17d11e79a5006b7b63ef69-1033327378.us-east-2.elb.amazonaws.com
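You can also verify from the command line that nginx is answering on the ELB, substituting your own ELB DNS name; once the instances have registered, this should return an HTTP 200 response:
curl -I http://a7374e1f7a17d11e79a5006b7b63ef69-1033327378.us-east-2.elb.amazonaws.com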
Deploying to K8 (GUI/Console):
Kubectl Proxy:
In order to launch the Kubernetes GUI, we need to use Kubectl to launch a proxy that will allow us to use our local workstation to connect to and administer the Kubernetes cluster.
In order to use the kubectl proxy, you must launch the proxy on a workstation with a GUI. Launching the proxy on a remote workstation will not work, as the kubectl proxy only accepts requests from 127.0.0.1; the proxy must be running on the same workstation as the browser used to connect to it. Configuration options do exist to change the proxy listener from 127.0.0.1 to 0.0.0.0, but authentication will then fail, as the proxy requires authentication certificates in order to access the Kubernetes GUI.
1. Launch Kube proxy:
Launch a Kubernetes proxy to the cluster. Note the IP:port (default: 127.0.0.1:8001) and leave the command window open, or append & to the command to run it as a background process.
kubectl --kubeconfig=kubeconfig proxy
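To keep the terminal free, the same command can be run as a background job and stopped when you're done:
kubectl --kubeconfig=kubeconfig proxy &
# When finished, stop the background proxy:
kill %1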
2. Open UI in Browser:
Browse to http://127.0.0.1:8001/ui/
Creating a Deployment:
We need to create a deployment in kubernetes. Deployments provide declarative updates for Pods by defining a desired state in your template, and the Deployment Controller makes the changes to the Pods. Deployments define things like environment variables, the container image you wish to use, and the resources you want to allocate to the service (port, memory, CPU).
1. Create Deployment Template:
To create a Deployment, copy the YAML-formatted template below and save it locally on your drive as deployment.yml.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: {{DEPLOYMENT_NAME}}
labels:
app: {{DEPLOYMENT_LABEL}}
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: {{DEPLOYMENT_LABEL}}
spec:
containers:
- image: {{CONTAINER_IMAGE}}
name: {{CONTAINER_NAME}}
ports:
- containerPort: {{CONTAINER_PORT}}
Example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: k8s-lab-nginx
labels:
app: k8s-lab-nginx
spec:
strategy:
type: Recreate
template:
metadata:
labels:
app: k8s-lab-nginx
spec:
containers:
- image: nginx:latest
name: k8s-lab-web-port
ports:
- containerPort: 80
* We've specified a specific container image, including the :latest tag. Although it's not important for this lab, in a production environment where deployment definitions are generated programmatically from a CI/CD pipeline, the image reference could include a specific SHA digest or a more precise tag.
* In this example, you will also notice that the image we are using is simply nginx:latest, which appears to be missing the registry portion of the image naming convention. If an image only shows the repository:tag designation, the image is pulled from Docker Hub, and it is an official image that doesn't require a namespace prefix. When the docker daemon sees a bare repository:tag designation such as nginx:latest, it implicitly pulls library/nginx:latest directly from [Docker Hub](https://hub.docker.com).
2. Workloads UI:
Navigate to Workloads > Deployments and click CREATE in the top-right corner.
3. Upload Template:
Select Upload a YAML or JSON file, upload your modified template deployment.yml
, and click UPLOAD.
4. Validate the Deployment:
Navigate back to Workloads > Deployments to validate your deployment exists.
Creating the Service:
Next, we need to create the service in kubernetes. Services include a logical set of Pods (usually determined by a Label Selector) and a policy by which to access/expose the Pods.
1. Create Service Template:
To create a Service, copy the YAML-formatted template below and save it locally on your drive as service.yml.
apiVersion: v1
kind: Service
metadata:
name: {{SERVICE_NAME}}
labels:
app: {{SERVICE_LABEL}}
spec:
selector:
app: {{DEPLOYMENT_LABEL}}
ports:
- protocol: {{PROTOCOL}}
port: {{HOST_PORT}}
targetPort: {{CONTAINER_PORT}}
type: LoadBalancer
Example:
apiVersion: v1
kind: Service
metadata:
name: k8s-lab-nginx
labels:
app: K8sLabNginx
spec:
selector:
app: k8s-lab-nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
Note that the selector (spec.selector) in the service.yml file matches the label provided in the Pod template (spec.template.metadata.labels) in the deployment.yml file. Also, the target port (spec.ports[].targetPort) in the service.yml file matches the container port (spec.template.spec.containers[].ports[].containerPort) in the deployment.yml file.
2. Discovery and Load Balancing:
Navigate to Discovery and Load Balancing > Services and click CREATE in the top-right corner.
3. Upload the Template:
Select Upload a YAML or JSON file, upload your modified template service.yml
, and click UPLOAD.
4. Validate the Service:
Navigate back to Discovery and Load Balancing > Services to validate your service exists.
5. Validate the Load Balancer:
Wait a few seconds (up to a minute) for the load balancer to be created. Refresh your page to check the progress. Once your External endpoint has changed from - to the DNS name of the ELB, your service has been created and exposed.
6. Validate the Test Application:
Validate your application is up and running. Click the name of the External endpoint to open your application in a new tab/window.
Resources:
Dockerfile for Kubernetes Builder:
############################################################
# Dockerfile to build Kubernetes Cluster Builder
# Based on: alpine:latest
# DATE: 09/18/2017
# COPYRIGHT: Appcontainers.com
############################################################
# Set the base image
FROM alpine:latest
# File Author / Maintainer
MAINTAINER "Rich Nason" <rnason@clusterfrak.com>
###################################################################
#*************** OVERRIDE ENABLED ENV VARIABLES *****************
###################################################################
ENV KUBE_AWS_VERSION 0.9.8
ENV AWS_KEY ""
ENV AWS_SECRET ""
ENV AWS_REGION us-east-1
ENV AWS_AZ us-east-1b
ENV KMS_DESC AWS Container Demo
ENV KMS_ALIAS alias/container-demo
ENV IAM_USER_ARN arn:aws:iam::015811329325:user/container_lab
ENV HOSTED_ZONE_ID A12BCDEFGH34I5
ENV CLUSTER_NAME K8-Demo
ENV DOMAIN_NAME "mydomain.local"
ENV BUCKET_NAME K8-Demo
ENV KEY_NAME ""
ENV NUM_WORKERS 1
###################################################################
#******************* UPDATES & PRE-REQS *************************
###################################################################
# Install dependencies
RUN apk update && apk add curl python py-pip jq && pip install awscli && mkdir -p /root/.aws
###################################################################
#******************* APPLICATION INSTALL ************************
###################################################################
# Grab latest version of kube-aws
RUN curl -L https://github.com/kubernetes-incubator/kube-aws/releases/download/v${KUBE_AWS_VERSION}/kube-aws-linux-amd64.tar.gz -o /tmp/kube-aws-linux-amd64.tar.gz && \
tar -zxvf /tmp/kube-aws-linux-amd64.tar.gz -C /tmp && mv /tmp/linux-amd64/kube-aws /usr/local/bin/kube-aws && rm -fr /tmp/kube-aws-linux-amd64.tar.gz /tmp/linux-amd64 && kube-aws version
###################################################################
#***************** CONFIGURE START ITEMS ************************
###################################################################
# Export the AWS Access and secret keys
RUN echo "export AWS_ACCESS_KEY_ID=$AWS_KEY" >> /root/.bashrc && \
echo "export AWS_SECRET_ACCESS_KEY=$AWS_SECRET" >> /root/.bashrc && \
echo "export AWS_DEFAULT_REGION=$AWS_REGION" >> /root/.bashrc
###################################################################
#****************** ADD REQUIRED APP FILES **********************
###################################################################
ADD kubecluster_builder.sh /root/
RUN chmod +x /root/kubecluster_builder.sh
###################################################################
#************* CMD & EXPOSE APPLICATION PORTS *******************
###################################################################
WORKDIR /root/
CMD ["/root/kubecluster_builder.sh"]
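If you would rather build this image yourself than pull appcontainers/kubecluster-builder, place the Dockerfile and kubecluster_builder.sh in the same directory and run a standard docker build (the tag shown is just an example):
docker build -t kubecluster-builder:latest .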
Kubernetes Builder Script:
The builder script is already bundled into the appcontainers/kubecluster-builder:latest container image and will automatically execute on run.
#!/bin/sh
# Generate KMS Key
#RUN aws kms create-key --description "${KMS_DESC}" --policy "{\"Version\":\"2012-10-17\",\"Id\":\"key-default-1\",\"Statement\":[{\"Sid\":\"Enable IAM User Permissions\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"${IAM_USER_ARN}\"},\"Action\": \"kms:*\",\"Resource\":\"*\"}]}" > key.json && \
# Ensure we are running from the /root directory.
cd /root
# Export the AWS Access and secret keys
export AWS_ACCESS_KEY_ID=$AWS_KEY
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET
export AWS_DEFAULT_REGION=$AWS_REGION
# Create KMS keys
aws kms create-key --description "$KMS_DESC" > key.json
aws kms create-alias --alias-name "$KMS_ALIAS" --target-key-id $(jq -r '.KeyMetadata.KeyId' key.json)
# Init the Cluster (only creates the cluster.yaml file)
kube-aws init --cluster-name=$CLUSTER_NAME \
--hosted-zone-id=$HOSTED_ZONE_ID \
--external-dns-name=$CLUSTER_NAME.$DOMAIN_NAME \
--region=$AWS_REGION \
--availability-zone=$AWS_AZ \
--key-name=$KEY_NAME \
--kms-key-arn=`jq -r '.KeyMetadata.Arn' key.json`
# Generate all of the required assets
kube-aws render credentials --generate-ca
kube-aws render stack
# Set the worker count to the requested number of worker nodes
if [ "$NUM_WORKERS" -eq 1 ]; then
  echo "Workers Launched = 1"
else
  sed -i "s/#workerCount:\ 1/workerCount:\ $NUM_WORKERS/g" cluster.yaml
  echo "Workers Launched = $NUM_WORKERS"
fi
# Create S3 Bucket
if [ "$AWS_REGION" = "us-east-1" ]; then
aws s3api create-bucket --bucket $BUCKET_NAME
else
aws s3api create-bucket --bucket $BUCKET_NAME --create-bucket-configuration LocationConstraint=$AWS_REGION
fi
kube-aws validate --s3-uri s3://$BUCKET_NAME
# Bring the cluster online by running the generated cloudformation template.
kube-aws up --s3-uri s3://$BUCKET_NAME
# Copy the kubeconfig and credentials files to the S3 bucket
aws s3 cp kubeconfig s3://$BUCKET_NAME
aws s3 cp cluster.yaml s3://$BUCKET_NAME
aws s3 sync credentials/ s3://$BUCKET_NAME/credentials
Kubernetes Builder Docker Run Statement:
docker run -it --rm --name k8-builder -h k8-builder \
-e AWS_KEY="ABCDEFGHIJKLMNOPQRST" \
-e AWS_SECRET="abcdefghijklmnopqrstuvwxyz1234567890abcd" \
-e AWS_REGION="us-east-2" \
-e AWS_AZ="us-east-2b" \
-e KMS_DESC="AWS Kubernetes Demo" \
-e KMS_ALIAS="alias/k8s-demo" \
-e IAM_USER_ARN="arn:aws:iam::012345678910:user/k8-builder" \
-e HOSTED_ZONE_ID="A12BCDEFGH34I5" \
-e CLUSTER_NAME="K8-Demo-Cluster" \
-e DOMAIN_NAME="mydomain.com" \
-e BUCKET_NAME="kubernetes-cluster-demo" \
-e KEY_NAME="My_Key_Name" \
appcontainers/kubecluster-builder:latest
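The container runs interactively and streams the kube-aws and CloudFormation output to your terminal. If you want to follow the progress from another terminal while it is running, you can tail the logs by the container name set above:
docker logs -f k8-builder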
Lab Pre-Req Packer File:
Packer File:
{
"builders": [{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "{{user `region`}}",
"source_ami": "{{user `ami`}}",
"instance_type": "{{user `instance_type`}}",
"ami_name": "{{user `ami_name`}}-{{timestamp}}",
"ami_description": "{{user `ami_description`}}",
"availability_zone": "{{user `availability_zone`}}",
"vpc_id": "{{user `vpc_id`}}",
"subnet_id": "{{user `subnet_id`}}",
"security_group_id": "{{user `security_group_id`}}",
"ssh_keypair_name": "{{user `ssh_keypair_name`}}",
"ssh_agent_auth": true,
"ssh_username": "{{user `ssh_username`}}",
"associate_public_ip_address": true,
"ssh_private_ip": false,
"tags": {
"Name": "{{user `tag_name`}}",
"OS_Version": "{{user `tag_osver`}}"
}
}],
"provisioners": [
{
"type": "shell",
"inline": [
"sudo yum clean all",
"sudo yum -y update",
"sudo yum install -y docker jq",
"sudo chkconfig docker on",
"sudo /etc/init.d/docker start",
"sudo pip install awscli",
"sudo curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl",
"sudo mv kubectl /usr/bin/",
"sudo chmod +x /usr/bin/kubectl",
"sudo curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose",
"sudo chmod +x /usr/local/bin/docker-compose"
]
}
]}
Packer Build-Vars File:
{
"aws_access_key": "ABCDEFGHIJKLMNOPQRST",
"aws_secret_key": "abcdefghijklmnopqrstuvwxyz1234567890abcd",
"instance_type": "t2.small",
"region": "us-east-2",
"availability_zone": "us-east-2a",
"ami": "ami-ea87a78f",
"vpc_id": "vpc-y12345ba",
"subnet_id": "subnet-12a3456b",
"security_group_id": "sg-a6ca00cd",
"ssh_keypair_name": "MyKey",
"ssh_username": "ec2-user",
"ami_name": "Container-Lab",
"ami_description": "Image with all of the tools required to run ECS/K8 Labs",
"tag_name": "Container-Lab",
"tag_osver": "Amazon Linux"
}
Packer Build Command:
packer build -var-file=buildvars.json container_lab.json
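Before running the build, you can optionally validate that the template and variables file parse cleanly:
packer validate -var-file=buildvars.json container_lab.json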