Since my first tentative steps into Kubernetes it’s been an interesting journey. I suspect the most common way to interact with Kubernetes is through a managed platform from one of the main public cloud players (that’s certainly been my experience), but that doesn’t do a lot to help you understand the nuts and bolts of the platform. I’d been meaning to try and get stuck in . . .
If, like me, you’ve come from a traditional sysadmin background then Kubernetes can be daunting to say the least, and it doesn’t get much easier when it comes to getting to grips with how to debug networking issues. Kubernetes networking is VAST and supports a number of complex implementations that vary between the major Kubernetes-as-a-Service platforms (GKE, EKS, AKS) as well as many other options. The broad strokes are . . .
Earlier in the year I wrote about automating Elastic Kubernetes Service role configuration (direct modification of the aws-auth ConfigMap) using Terraform, keeping the ARN data secret by looking it up from a secret management service (in this case HashiCorp Vault). Whilst the solution works well, it comes with some built-in issues when we want to provision a new deployment from scratch that aren’t obvious when we’re working with an . . .
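For a flavour of the Vault lookup involved, here’s a minimal sketch (the secret path and key name are placeholders of my own, not the exact values from the post):

```hcl
# Look up the sensitive role ARN from Vault at plan time rather than
# hard-coding it in the configuration. Path and key are hypothetical.
data "vault_generic_secret" "eks_roles" {
  path = "secret/eks"
}

locals {
  # The ARN can now be passed into the aws-auth ConfigMap definition
  admin_role_arn = data.vault_generic_secret.eks_roles.data["admin_role_arn"]
}
```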
FluentD is a data collection platform and a popular choice for aggregating logs in Kubernetes. Aggregating logs is all well and good, but to properly manage them you really want to output them to a log management platform, ideally one which provides some degree of visualisation and insights; unless you really love working with raw logs, it’s nice to be able to view them and see patterns in a manner that’s . . .
In a previous post we looked at the basics of working with multiple instances of Terraform providers, however, as usual, Kubernetes presents some slight variations on this theme due to its varied options for authentication. In this post we’re looking at how to handle authentication for multiple Kubernetes clusters in Terraform. Provider Aliases Underpinning all concepts of working with multiple instances of a provider is the concept of working with . . .
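As a taster, a hedged sketch of two aliased kubernetes providers using different authentication methods (the endpoints, credentials and context names here are illustrative, not lifted from the post):

```hcl
# First cluster: authenticated directly with endpoint, CA and token
provider "kubernetes" {
  alias                  = "cluster_a"
  host                   = var.cluster_a_endpoint
  cluster_ca_certificate = base64decode(var.cluster_a_ca_cert)
  token                  = var.cluster_a_token
}

# Second cluster: authenticated via a kubeconfig context
provider "kubernetes" {
  alias          = "cluster_b"
  config_path    = "~/.kube/config"
  config_context = "cluster-b"
}

# Each resource then selects the instance it needs
resource "kubernetes_namespace" "example" {
  provider = kubernetes.cluster_a
  metadata {
    name = "example"
  }
}
```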
One of the lesser-known functions of Terraform is the ability to operate multiple instances of the same provider within the same configuration. The uses for this are varied, though as it’s not always needed it’s one of those things that doesn’t always leap out. It’s pretty easy to get to grips with, so this is a short post to take a look at how to get started. Providers – . . .
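The short version, as a sketch using the AWS provider and the canonical multi-region case (the bucket name is just an example):

```hcl
# Default instance of the AWS provider
provider "aws" {
  region = "eu-west-1"
}

# Second instance of the same provider, distinguished by alias
provider "aws" {
  alias  = "us"
  region = "us-east-1"
}

# Resources select a non-default instance with the provider argument
resource "aws_s3_bucket" "replica" {
  provider = aws.us
  bucket   = "tinfoil-replica-bucket" # hypothetical name
}
```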
Recently I’ve spent a good amount of time looking at options for managing Kubernetes Secrets with Vault. HashiCorp being a great supporter of the Cloud Native philosophy, it’s little surprise to find that they provide a multitude of options to integrate with Kubernetes, along with extensive documentation here. For my needs I found that the suggested configurations were either unsuitable or required a degree of over-engineering, so I’m going to . . .
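One of the simpler patterns (and not necessarily the one the post settles on) is to read a secret out of Vault with Terraform and project it into a native Kubernetes Secret; the paths and key names below are hypothetical:

```hcl
# Read a secret from Vault at plan time; path and key are placeholders
data "vault_generic_secret" "app" {
  path = "secret/app"
}

# Project the value into a Kubernetes Secret (the provider handles
# base64 encoding of the data map)
resource "kubernetes_secret" "app" {
  metadata {
    name      = "app-credentials"
    namespace = "default"
  }

  data = {
    api_key = data.vault_generic_secret.app.data["api_key"]
  }
}
```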
UPDATED 11/2020: Have a look at a different method for this configuration better suited to CI/CD. In a previous post we looked at how to use Terraform to provision and authenticate with clusters on AWS’ Elastic Kubernetes Service (EKS), using its somewhat unique webhook token authentication method leveraging aws-iam-authenticator. Once we get past that point, however, we still have another permission hurdle to overcome, specifically how we handle . . .
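For context, the permission mapping in question lives in the aws-auth ConfigMap in kube-system; a minimal hedged sketch of managing it from Terraform (the account ID and role name are placeholders):

```hcl
# Map an IAM role to a Kubernetes group by managing the aws-auth
# ConfigMap directly; the ARN below is a placeholder
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = <<-YAML
      - rolearn: arn:aws:iam::123456789012:role/eks-admins
        username: admin
        groups:
          - system:masters
    YAML
  }
}
```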
NOTE: The sample code used here is hosted in my GitHub at https://github.com/tinfoilcipher/eks-terraform-example. Recently I’ve been getting my hands dirtier and dirtier with Kubernetes, but there are some interesting oddities that only occur in Elastic Kubernetes Service (EKS), the AWS PaaS Kubernetes platform, especially when it comes to how you can authenticate. As Kubernetes is strongly driven by a declarative (and by extension Infrastructure as Code) philosophy, it makes perfect sense . . .
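The oddity in question is EKS’ token webhook; a sketch of wiring the kubernetes provider up to aws-iam-authenticator (the cluster name is illustrative):

```hcl
# Fetch connection details for an existing EKS cluster, then authenticate
# the kubernetes provider with a short-lived token from aws-iam-authenticator.
# The cluster name is hypothetical.
data "aws_eks_cluster" "main" {
  name = "tinfoil-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws-iam-authenticator"
    args        = ["token", "-i", "tinfoil-cluster"]
  }
}
```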
In the last post we looked at how to automate the creation of GKE Kubernetes clusters in GCP, however the deployment of workloads to these clusters was still something of a manual process. Enter Helm: a package manager for Kubernetes which allows us to use declarative configuration to push our cluster and container definitions from external repositories. If you’ve never heard of it, I recommend the IBM Cloud video here . . .
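By way of a hedged example, the same idea expressed through Terraform’s helm provider (the chart and repository here are illustrative, and the post itself may well use the Helm CLI directly):

```hcl
# Point the helm provider at an existing cluster via kubeconfig
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

# Declaratively install a chart from an external repository
resource "helm_release" "nginx_ingress" {
  name       = "nginx-ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "ingress"

  set {
    name  = "controller.replicaCount"
    value = "2"
  }
}
```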