Last year I wrote about automating Elastic Kubernetes Service role configuration (direct modification of the aws-auth ConfigMap) using Terraform, and a somewhat clunky method of injecting ARN data by looking it up from a secret management service (in this case Hashicorp Vault). Whilst the solution works well, it comes with a serious built-in issue when we want to provision a new deployment from scratch, namely the need to import . . .
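For context, the pattern in question looks roughly like the sketch below. This is a minimal illustration rather than the original post's code: the Vault path, secret key and role mapping are all hypothetical.

```hcl
# Look up the role ARN held in Vault (hypothetical path and key)
data "vault_generic_secret" "eks_roles" {
  path = "secret/eks/roles"
}

# Manage the aws-auth ConfigMap directly from Terraform.
# Note: EKS creates this ConfigMap itself, which is exactly why a
# from-scratch deployment forces us to import it before managing it.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = data.vault_generic_secret.eks_roles.data["admin_role_arn"]
        username = "admin"
        groups   = ["system:masters"]
      }
    ])
  }
}
```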
Ansible is a big favourite of mine, as anyone who knows me will tell you, and has become one of the biggest players in the DevOps world. Inevitably, if you're going to use it at any real scale you'll need to start thinking about tags. Tags are an essential part of life in the cloud; given the scale and complexity we can encounter, they really become the only way to . . .
EDIT: A few days after publishing this article, Hashicorp’s official AWS provider was updated to support default tags directly from the provider (which is very simple and saves all of the work detailed in this article). This only works with AWS, so if you’re working in another cloud keep reading; if you’re only working in AWS, take a look at the Hashicorp blog post here, which provides some very . . .
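For the AWS-only case, the provider-level feature amounts to the sketch below (the region and tag values are examples of mine, not from the post): every taggable resource created through this provider instance inherits the tags automatically.

```hcl
provider "aws" {
  region = "eu-west-2"

  # Applied automatically to every taggable resource this provider creates
  default_tags {
    tags = {
      Environment = "production"
      ManagedBy   = "terraform"
    }
  }
}
```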
Helm is an incredibly popular package manager for Kubernetes, however despite its widespread use there isn’t a huge amount of information, or many options, out there for creating private repositories using Open Source platforms. Chartmuseum seeks to solve this problem by offering us just that. In this post I’m looking at how to deploy and bootstrap Chartmuseum on Ubuntu Linux 18.04, using a secure AWS S3 backend. Getting Started Chartmuseum . . .
Recently I’ve been looking at AWS’ Elastic File System (EFS) platform, which allows for the provisioning of highly available PaaS storage which can be accessed via NFS by multiple services at very low cost. Whilst this is good, what’s even better is templating and automating the provisioning. In this post we’ll look at how to provision HA EFS storage using Terraform. What Do We Want? We have the option to create EFS . . .
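The core of the pattern is small; a minimal sketch, assuming hypothetical variable names (the high availability comes from placing one mount target in each availability zone's subnet):

```hcl
variable "private_subnet_ids" {
  type        = list(string)
  description = "One private subnet per availability zone"
}

variable "efs_security_group_id" {
  type = string
}

resource "aws_efs_file_system" "this" {
  creation_token = "example-efs"
  encrypted      = true
}

# One mount target per subnet (and therefore per AZ) gives us the HA
resource "aws_efs_mount_target" "this" {
  for_each        = toset(var.private_subnet_ids)
  file_system_id  = aws_efs_file_system.this.id
  subnet_id       = each.value
  security_groups = [var.efs_security_group_id]
}
```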
Terraform is a powerful Infrastructure as Code tool ideal for creating cloud environments, and its flexible HCL syntax allows for the provisioning of complex environments from simple templates, saving countless hours. Often missed is the ability to template resources and use them in conjunction with Terraform’s workspaces feature to maintain concurrent versions of the same environment. When coupled with even a basic Continuous Deployment pipeline, this combination of systems allows . . .
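To make the workspace idea concrete, here's a minimal sketch (the workspace names and instance sizes are hypothetical): the built-in terraform.workspace value keys a lookup table, so one template produces differently sized copies of the same environment.

```hcl
variable "ami_id" {
  type = string
}

locals {
  # terraform.workspace resolves to the currently selected workspace name
  instance_type = {
    default    = "t3.micro"
    staging    = "t3.small"
    production = "t3.large"
  }
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = local.instance_type[terraform.workspace]

  tags = {
    Name = "app-${terraform.workspace}"
  }
}
```

Running terraform workspace select staging followed by terraform apply then provisions the staging copy, with each workspace keeping its own state.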
If, like me, you’ve come from a traditional sysadmin background then Kubernetes can be daunting to say the least, and it doesn’t get much easier when it comes to getting to grips with how to debug networking issues. Kubernetes networking is VAST and supports a number of complex implementations that vary between the major Kubernetes-as-a-Service platforms (GKE, EKS, AKS) as well as many other options. The broad strokes are . . .
In a previous post we looked at the basics of working with multiple instances of Terraform providers; however, as usual, Kubernetes presents some slight variations on this theme due to its varied options for authentication. In this post we’re looking at how to handle authentication for multiple Kubernetes clusters in Terraform. Provider Aliases Underpinning everything when working with multiple instances of a provider is the concept of working with . . .
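The shape of the solution, sketched with hypothetical variable names and token-based authentication (just one of several auth options the provider supports):

```hcl
variable "blue_endpoint"  { type = string }
variable "blue_ca_cert"   { type = string }
variable "blue_token"     { type = string }
variable "green_endpoint" { type = string }
variable "green_ca_cert"  { type = string }
variable "green_token"    { type = string }

provider "kubernetes" {
  alias                  = "blue"
  host                   = var.blue_endpoint
  cluster_ca_certificate = base64decode(var.blue_ca_cert)
  token                  = var.blue_token
}

provider "kubernetes" {
  alias                  = "green"
  host                   = var.green_endpoint
  cluster_ca_certificate = base64decode(var.green_ca_cert)
  token                  = var.green_token
}

# Each resource picks its cluster explicitly via the provider meta-argument
resource "kubernetes_namespace" "example" {
  provider = kubernetes.blue

  metadata {
    name = "example"
  }
}
```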
Recently I had a requirement that I couldn’t find documented outside of the abstract: migrating a single private DNS zone to AWS’ hosted DNS service, Route 53, and conditionally forwarding queries for that zone from an existing Windows DNS infrastructure. This isn’t something I expected to be broken down blow-by-blow in the AWS documentation, but there are plenty of Windows DNS infrastructures out there in the wild and . . .
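On the AWS side, the destination is a private hosted zone. If you were provisioning that piece with Terraform (the post itself may well use the console or CLI), it would look something like this sketch with hypothetical names; the Windows side, the conditional forwarder pointing at the VPC's resolver, is configured separately in the Windows DNS tooling.

```hcl
variable "vpc_id" {
  type = string
}

# Associating a VPC at creation makes the zone private; queries only
# resolve from inside the associated VPC
resource "aws_route53_zone" "private" {
  name = "corp.example.internal"

  vpc {
    vpc_id = var.vpc_id
  }
}
```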
In a previous post we looked at setting up centralised Terraform state management using S3 for AWS provisioning (as well as using Azure Object Storage for the same solution in Azure before that). What our S3 solution lacked, however, was a means to achieve State Locking, i.e. any method to prevent two operators or systems from writing to a state at the same time and thus running the risk of . . .
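The documented route for locking the S3 backend is to pair it with a DynamoDB table; a minimal sketch, with hypothetical bucket and table names, looks like this:

```hcl
terraform {
  backend "s3" {
    bucket  = "example-terraform-state"
    key     = "environments/example/terraform.tfstate"
    region  = "eu-west-2"
    encrypt = true

    # The DynamoDB table provides the lock; Terraform acquires it on
    # every operation that writes state and releases it afterwards
    dynamodb_table = "terraform-state-lock"
  }
}
```

The table itself needs a string partition key named LockID, which is what the backend writes its lock entries against.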