Tutorial: How to Expose Kubernetes Services on EKS with DNS and TLS

Getting Started

// ./main.tf
provider "aws" {}

data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "default" {
  vpc_id = data.aws_vpc.default.id
}

module "eks" {
  source           = "terraform-aws-modules/eks/aws"
  cluster_name     = "appvia-dns-tls-demo"
  cluster_version  = "1.19"
  subnets          = data.aws_subnet_ids.default.ids
  write_kubeconfig = true
  vpc_id           = data.aws_vpc.default.id
  enable_irsa      = true

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                 = "worker-group"
      instance_type        = "t3a.small"
      asg_desired_capacity = 3
    }
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

$ terraform init
Initializing modules...
[...]
Terraform has been successfully initialized!

$ terraform apply
[...]
Apply complete! Resources: 27 added, 0 changed, 0 destroyed.
$ kubectl get pods -A -o wide
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE   IP              NODE                                          NOMINATED NODE   READINESS GATES
kube-system   aws-node-qscpx             1/1     Running   0          18m   172.31.12.14    ip-172-31-12-14.eu-west-2.compute.internal    <none>           <none>
kube-system   aws-node-t5qp5             1/1     Running   0          17m   172.31.40.85    ip-172-31-40-85.eu-west-2.compute.internal    <none>           <none>
kube-system   aws-node-zk2gj             1/1     Running   0          18m   172.31.31.122   ip-172-31-31-122.eu-west-2.compute.internal   <none>           <none>
kube-system   coredns-6fd5c88bb9-5f72v   1/1     Running   0          21m   172.31.26.209   ip-172-31-31-122.eu-west-2.compute.internal   <none>           <none>
kube-system   coredns-6fd5c88bb9-zc48s   1/1     Running   0          21m   172.31.8.192    ip-172-31-12-14.eu-west-2.compute.internal    <none>           <none>
kube-system   kube-proxy-647rk           1/1     Running   0          18m   172.31.12.14    ip-172-31-12-14.eu-west-2.compute.internal    <none>           <none>
kube-system   kube-proxy-6gjvt           1/1     Running   0          18m   172.31.31.122   ip-172-31-31-122.eu-west-2.compute.internal   <none>           <none>
kube-system   kube-proxy-6lvnn           1/1     Running   0          17m   172.31.40.85    ip-172-31-40-85.eu-west-2.compute.internal    <none>           <none>

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-172-31-12-14.eu-west-2.compute.internal    Ready    <none>   17m   v1.19.6-eks-49a6c0
ip-172-31-31-122.eu-west-2.compute.internal   Ready    <none>   17m   v1.19.6-eks-49a6c0
ip-172-31-40-85.eu-west-2.compute.internal    Ready    <none>   17m   v1.19.6-eks-49a6c0

external-dns

$ terraform apply
[...]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
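The single resource added here is the IAM role that external-dns will assume via IRSA. The article doesn't reproduce the Terraform for it, but given the role name `externaldns_route53` used below and the `enable_irsa = true` flag on the EKS module, a plausible sketch looks like this (the trust-policy wiring and the use of the managed `AmazonRoute53FullAccess` policy are assumptions; a production setup would scope the policy to the hosted zone):

```hcl
// Hypothetical sketch -- the original role definition is not shown.
// Trusts the cluster's OIDC provider so only the external-dns
// service account in the external-dns namespace can assume it.
data "aws_iam_policy_document" "externaldns_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn]
    }
    condition {
      test     = "StringEquals"
      variable = "${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}:sub"
      values   = ["system:serviceaccount:external-dns:external-dns"]
    }
  }
}

resource "aws_iam_role" "externaldns_route53" {
  name               = "externaldns_route53"
  assume_role_policy = data.aws_iam_policy_document.externaldns_assume.json
}

resource "aws_iam_role_policy_attachment" "externaldns_route53" {
  role       = aws_iam_role.externaldns_route53.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonRoute53FullAccess"
}
```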
$ kubectl run -i --restart=Never --image amazon/aws-cli $(uuid) -- sts get-caller-identity
{
    "UserId": "AROARZYWN37USPQWOL5XC:i-0633eb78d38a31643",
    "Account": "123412341234",
    "Arn": "arn:aws:sts::123412341234:assumed-role/appvia-dns-tls-demo20210323123032764000000009/i-0633eb78d38a31643"
}
$ terraform refresh
[...]
Outputs:

aws_account_id = "123412341234"

$ kubectl create namespace external-dns
namespace/external-dns created

$ kubectl create -n external-dns serviceaccount external-dns
serviceaccount/external-dns created

$ kubectl annotate serviceaccount -n external-dns external-dns eks.amazonaws.com/role-arn=arn:aws:iam::$(terraform output -raw aws_account_id):role/externaldns_route53
serviceaccount/external-dns annotated

$ kubectl run -i -n external-dns --restart=Never --image amazon/aws-cli $(uuid) -- sts get-caller-identity
{
    "UserId": "AROARZYWN37USAHEEKT35:botocore-session-1123456767",
    "Account": "123412341234",
    "Arn": "arn:aws:sts::123412341234:assumed-role/externaldns_route53/botocore-session-1123456767"
}
$ kubectl -n external-dns apply -k "github.com/kubernetes-sigs/external-dns/kustomize?ref=v0.7.6"
serviceaccount/external-dns configured
clusterrole.rbac.authorization.k8s.io/external-dns created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
deployment.apps/external-dns created
$ kubectl -n external-dns patch deployments.apps external-dns --patch-file k8s/external-dns/deployment.yaml
deployment.apps/external-dns patched
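The patch file itself isn't reproduced in the article. A plausible sketch of `k8s/external-dns/deployment.yaml`, assuming external-dns should watch Ingress resources and manage the zone containing the hostname used later (the domain filter and owner-id values are assumptions):

```yaml
# Hypothetical sketch -- the original patch file is not shown.
# Overrides the upstream deployment so the pod runs as the IRSA-annotated
# service account and external-dns targets Route 53.
spec:
  template:
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          args:
            - --source=ingress
            - --provider=aws
            - --domain-filter=sa-team.teams.kore.appvia.io  # assumed from the hostname used later
            - --policy=upsert-only
            - --registry=txt
            - --txt-owner-id=appvia-dns-tls-demo  # assumed owner id
```

The `--registry=txt` and `--txt-owner-id` pair lets external-dns mark the records it creates, so it never fights over records owned by another cluster.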

ingress-nginx

$ kubectl apply -k "github.com/kubernetes/ingress-nginx.git/deploy/static/provider/aws?ref=controller-v0.44.0"
namespace/ingress-nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
serviceaccount/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

cert-manager

$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
$ kubectl apply -f ./k8s/cert-manager/issuers.yaml
clusterissuer.cert-manager.io/letsencrypt-prod created
clusterissuer.cert-manager.io/letsencrypt-staging created
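The issuers file isn't reproduced in the article either. A plausible sketch of `k8s/cert-manager/issuers.yaml` for the two ClusterIssuers created above, assuming HTTP-01 solving through the nginx Ingress class (the contact email is a placeholder):

```yaml
# Hypothetical sketch -- the original file is not shown.
# Staging first: use it while testing to avoid Let's Encrypt
# production rate limits, then switch the Ingress to letsencrypt-prod.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com  # assumed placeholder
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com  # assumed placeholder
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
```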

Bringing it all together
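With all three controllers running, a single Ingress drives the whole flow: external-dns sees the host and creates the Route 53 record, ingress-nginx routes the traffic, and cert-manager spots the `cluster-issuer` annotation and provisions the certificate into the named Secret. The article doesn't show the manifest, but given the `helloworld` Ingress deleted during teardown and the hostname verified below, a plausible sketch (the backend Service name and port are assumptions):

```yaml
# Hypothetical sketch of the helloworld Ingress -- the original
# manifest is not shown in the article.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - dns-tls-demo.sa-team.teams.kore.appvia.io
      secretName: helloworld-tls  # cert-manager stores the issued cert here
  rules:
    - host: dns-tls-demo.sa-team.teams.kore.appvia.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: helloworld  # assumed Service name
                port:
                  number: 80     # assumed Service port
```

Once applied, DNS resolution and a TLS-terminated response can be verified from outside the cluster: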

$ nslookup dns-tls-demo.sa-team.teams.kore.appvia.io
Server:    1.1.1.1
Address:   1.1.1.1#53

Name:      dns-tls-demo.sa-team.teams.kore.appvia.io
Address:   18.135.204.171

$ curl https://dns-tls-demo.sa-team.teams.kore.appvia.io
<!DOCTYPE html>
<html>
<head>
<title>Hello World</title>
[...]
</body>
</html>

Tearing it all down

$ kubectl delete ingress --all -A
ingress.extensions "helloworld" deleted

$ kubectl delete namespaces ingress-nginx
namespace "ingress-nginx" deleted

$ terraform state rm module.eks.kubernetes_config_map.aws_auth # workaround https://github.com/terraform-aws-modules/terraform-aws-eks/issues/1162
Removed module.eks.kubernetes_config_map.aws_auth[0]
Successfully removed 1 resource instance(s).

$ terraform destroy -force
[...]
Destroy complete! Resources: 27 destroyed.

$ unset KUBECONFIG

Minimising the effort

About the author

Chris Nesbitt-Smith
