Recently I’ve been looking at AWS’ Elastic File System (EFS) platform, which allows for the provisioning of highly available PaaS storage which can be accessed via NFS by multiple services at very low cost. Whilst this is good, what’s even better is templating and automating the provisioning. In this post we’ll look at how to provision HA EFS storage using Terraform. What Do We Want? We have the option to create EFS . . .
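To make the goal concrete, a minimal Terraform sketch of an HA EFS setup might look like the following, assuming an existing VPC with one private subnet per availability zone (the variable names and tags are illustrative, not taken from the post):

```hcl
# A single encrypted EFS file system shared by multiple services.
resource "aws_efs_file_system" "shared" {
  creation_token = "shared-efs"
  encrypted      = true

  tags = {
    Name = "shared-efs"
  }
}

# One mount target per subnet; NFS endpoints in multiple AZs are what
# give the file system its high availability.
resource "aws_efs_mount_target" "shared" {
  count           = length(var.private_subnet_ids)
  file_system_id  = aws_efs_file_system.shared.id
  subnet_id       = var.private_subnet_ids[count.index]
  security_groups = [var.efs_sg_id]
}
```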
Terraform is a powerful Infrastructure as Code tool ideal for creating cloud environments, and its flexible HCL syntax allows for the provisioning of complex environments from simple templates, saving countless hours. Often overlooked is the ability to template resources and use them in conjunction with Terraform’s workspaces feature to maintain concurrent versions of the same environment. When coupled with even a basic Continuous Deployment pipeline this combination of systems allows . . .
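As a rough illustration of the idea (the workspace names and instance sizes below are assumptions), the built-in terraform.workspace value lets one template serve several concurrent environments:

```hcl
locals {
  # Illustrative per-workspace settings; keys must match workspace names.
  instance_type = {
    dev  = "t3.micro"
    prod = "t3.large"
  }
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = local.instance_type[terraform.workspace]

  tags = {
    Environment = terraform.workspace
  }
}
```

Running terraform workspace new dev followed by terraform apply then keeps each environment’s state isolated from the others under the same configuration.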
If, like me, you’ve come from a traditional sysadmin background then Kubernetes can be daunting to say the least, and it doesn’t get much easier when it comes to trying to get to grips with how to debug networking issues. Kubernetes networking is VAST and supports a number of complex implementations that vary between the major Kubernetes-as-a-Service platforms (GKE, EKS, AKS) as well as many other options. The broad strokes are . . .
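As a hedged starting point for that kind of debugging (the nicolaka/netshoot image is a popular community troubleshooting container, not something the post prescribes), a throwaway pod with network tooling goes a long way:

```shell
# Launch a temporary pod with common network tools, removed on exit.
kubectl run netshoot --rm -it --image=nicolaka/netshoot -- /bin/bash

# Inside the pod: check cluster DNS, then Service reachability
# (the service and namespace names below are placeholders).
nslookup kubernetes.default.svc.cluster.local
curl -sv http://my-service.my-namespace.svc.cluster.local
```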
Earlier in the year I wrote about automating Elastic Kubernetes Service role configuration (direct modification of the aws-auth ConfigMap) using Terraform, keeping the ARN data secret by looking it up from a secret management service (in this case HashiCorp Vault). Whilst the solution works well, it comes with some built-in issues when we want to provision a new deployment from scratch that aren’t obvious when we’re working with an . . .
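For context, the pattern from that earlier post looks roughly like this sketch (the Vault path, secret key and group mapping are all assumptions for illustration):

```hcl
# Look the role ARN up from Vault rather than committing it to source.
data "vault_generic_secret" "eks_roles" {
  path = "secret/eks/roles"
}

# Render the ARN into the aws-auth ConfigMap that EKS uses to map IAM
# roles to Kubernetes users and groups.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([{
      rolearn  = data.vault_generic_secret.eks_roles.data["admin_role_arn"]
      username = "admin"
      groups   = ["system:masters"]
    }])
  }
}
```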
FluentD is a data collection platform and a popular choice for aggregating logs on Kubernetes. Aggregating logs is all well and good, but to properly manage them you really want to output them to a log management platform, ideally one which provides some degree of visualisation and insight; unless you really love working with raw logs it’s nice to be able to view them and see patterns in a manner that’s . . .
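The shape of such an output is a short match block in the FluentD configuration; a minimal sketch, assuming an Elasticsearch endpoint (the host and tag pattern are placeholders):

```
# Send everything tagged kubernetes.* to Elasticsearch.
<match kubernetes.**>
  @type elasticsearch
  host logging.example.com
  port 9200
  logstash_format true
</match>
```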
In a previous post we looked at the basics of working with multiple instances of Terraform providers, however, as usual, Kubernetes presents some slight variations on this theme due to its varied options for authentication. In this post we’re looking at how to handle authentication for multiple Kubernetes clusters in Terraform. Provider Aliases Underpinning all concepts of working with multiple instances of a provider is the concept of working with . . .
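The basic shape, before the authentication specifics, is a pair of aliased providers; a minimal sketch (the kubeconfig contexts are placeholders):

```hcl
provider "kubernetes" {
  alias          = "primary"
  config_path    = "~/.kube/config"
  config_context = "primary-cluster"
}

provider "kubernetes" {
  alias          = "secondary"
  config_path    = "~/.kube/config"
  config_context = "secondary-cluster"
}

# Resources select their target cluster explicitly via the provider argument.
resource "kubernetes_namespace" "app" {
  provider = kubernetes.primary

  metadata {
    name = "app"
  }
}
```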
Recently I had a requirement that I couldn’t find documented outside of the abstract: migrating a single private DNS zone to AWS’ hosted DNS service, Route 53, and conditionally forwarding queries for that zone from an existing Windows DNS infrastructure. This isn’t something I expected to be broken down blow by blow in the AWS documentation, but there are plenty of Windows DNS infrastructures out there in the wild and . . .
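On the Windows side, the forwarding half of the problem reduces to a conditional forwarder pointing at resolver addresses inside the VPC (typically Route 53 Resolver inbound endpoint IPs); a sketch with placeholder zone name and addresses:

```powershell
# Forward queries for the migrated zone to the in-VPC resolver
# endpoints (the zone name and IPs here are placeholders).
Add-DnsServerConditionalForwarderZone `
    -Name "corp.example.com" `
    -MasterServers 10.0.0.10, 10.0.1.10 `
    -ReplicationScope "Forest"
```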
In a previous post we looked at setting up centralised Terraform state management using S3 for AWS provisioning (as well as using Azure Object Storage for the same solution in Azure before that). What our S3 solution lacked, however, is a means to achieve State Locking, i.e. any method to prevent two operators or systems from writing to a state at the same time and thus running the risk of . . .
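For reference, the S3 backend gains locking through a single extra argument naming a DynamoDB table; a sketch with placeholder bucket, key and table names (the table needs a partition key called LockID of type String):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "eu-west-2"
    dynamodb_table = "terraform-state-lock" # enables state locking
    encrypt        = true
  }
}
```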
Previously I’ve looked at how to look up secrets from HashiCorp Vault using Ansible Tower, however whilst that functionality is incredibly valuable it doesn’t really tackle the issue of how to write Playbooks which can interact with Vault. In this post we’ll look at how we can use some excellent lookup functionality provided as part of Ansible to do just that. Some Assumptions For this article, I’m going to be . . .
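The lookup functionality in question is likely the hashi_vault lookup plugin (now shipped in the community.hashi_vault collection); a minimal sketch, with a placeholder Vault address and secret path, assuming a token is supplied via the VAULT_TOKEN environment variable:

```yaml
- name: Read a secret from Vault
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Show a value fetched from the KV store
      ansible.builtin.debug:
        msg: "{{ lookup('hashi_vault', 'secret=secret/data/app:password url=https://vault.example.com:8200') }}"
```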
In a previous post we looked at a method to use Terraform’s output function to export return data and load it into an external YAML file for consumption by Ansible. While this is a useful function it’s a little top-heavy, and if we just want to pass data into another Terraform configuration in order to run an apply operation, we have a means to work a lot more . . .
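A likely candidate for that lighter-weight approach is the terraform_remote_state data source, which reads another configuration’s outputs straight from its backend; a sketch with placeholder bucket and key names:

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "eu-west-2"
  }
}

# Consume an output exported by the other configuration
# (private_subnet_id is a hypothetical output name).
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.private_subnet_id
}
```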