Recently I’ve had the experience of reconfiguring the popular Kubernetes Service Mesh Istio (using its Gateway ingress model) to work with an AWS Application Load Balancer with a degree of automation and scalability. This is a challenging deployment to say the least, and whilst documentation exists to varying degrees for the separate components, it’s scant. I’m less than impressed with the official Istio documentation (though it has gotten way better) . . .
Last year I wrote about automating Elastic Kubernetes Service role configuration (direct modification of the aws-auth ConfigMap) using Terraform, and a somewhat clunky method of injecting ARN data by looking it up from a secret management service (in this case HashiCorp Vault). Whilst the solution works well, it comes with a serious built-in issue when we want to provision a new deployment from scratch, namely the need to import . . .
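For context, a minimal sketch of that sort of approach is below; the Vault path (secret/eks/roles), the key (node_role_arn) and the role mapping are placeholders of my own, not the configuration from the original post:

```hcl
# Hypothetical sketch only: the Vault path, secret key and role mapping are placeholders.

# Look up the node role ARN from Vault rather than hard-coding it.
data "vault_generic_secret" "eks_roles" {
  path = "secret/eks/roles"
}

# Manage the aws-auth ConfigMap directly with Terraform; on a cluster with
# managed node groups EKS creates this object itself, which is typically
# where the need to import comes from.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = data.vault_generic_secret.eks_roles.data["node_role_arn"]
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }
}
```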
In the previous post we looked at how to build Chartmuseum on Ubuntu Linux with an S3 backend, however out of the box this system presents a number of problems; specifically it isn’t TLS encrypted and the service runs on an unprivileged TCP port. I could see no guides suggesting how to do this, so let’s take a look at how to solve this problem by proxying our . . .
Helm is an incredibly popular package manager for Kubernetes, however despite its widespread use there isn’t a huge amount of information or many options out there for creating private repositories using Open Source platforms. Chartmuseum seeks to solve this problem by offering us just that. In this post I’m looking at how to deploy and bootstrap Chartmuseum on Ubuntu Linux 18.04, using a secure AWS S3 backend. Getting Started Chartmuseum . . .
In the days of cloud we’re often called on to integrate a lot of technologies together (as the somewhat messy title of this post suggests). One of the more recent systems I’ve encountered is Istio, a popular Kubernetes Service Mesh, which in EKS tends to rely on an Elastic Load Balancer of one flavour or another as the point of access to its Gateway. In this post we’ll look at how . . .
If, like me, you’ve come from a traditional sysadmin background then Kubernetes can be daunting to say the least, and it doesn’t get much easier when it comes to trying to get to grips with how to debug networking issues. Kubernetes networking is VAST and supports a number of complex implementations that vary between the major Kubernetes-as-a-Service platforms (GKE, EKS, AKS) as well as many other options. The broad strokes are . . .
In a previous post we looked at the basics of working with multiple instances of Terraform providers, however as usual, Kubernetes presents some slight variations on this theme due to its varied options for authentication. In this post we’re looking at how to handle authentication for multiple Kubernetes clusters in Terraform. Provider Aliases Underpinning all work with multiple instances of a provider is the concept of working with . . .
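To give a flavour of where this heads, a minimal sketch below shows two aliased kubernetes provider instances authenticating in different ways; the cluster names, kubeconfig path and namespace are illustrative assumptions rather than anything from a real deployment:

```hcl
# Sketch only: cluster names, kubeconfig path and namespace are placeholders.

# First cluster: authenticate with an EKS token generated at plan time.
data "aws_eks_cluster" "primary" {
  name = "primary-cluster"
}

data "aws_eks_cluster_auth" "primary" {
  name = "primary-cluster"
}

provider "kubernetes" {
  alias                  = "primary"
  host                   = data.aws_eks_cluster.primary.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.primary.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.primary.token
}

# Second cluster: authenticate from a local kubeconfig instead.
provider "kubernetes" {
  alias       = "secondary"
  config_path = "~/.kube/secondary-config"
}

# Resources pick their cluster with the provider meta-argument.
resource "kubernetes_namespace" "demo" {
  provider = kubernetes.primary

  metadata {
    name = "demo"
  }
}
```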
One of the lesser known functions of Terraform is the ability to operate multiple instances of the same provider within the same configuration. The uses for this are varied, though as it’s not always needed it’s one of those features that doesn’t always leap out. It’s pretty easy to get to grips with, so this is a short post to take a look at how to get started. Providers – . . .
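By way of illustration, a minimal sketch using the aws provider; the regions and bucket names are arbitrary examples:

```hcl
# Sketch only: regions and bucket names are arbitrary examples.

# Default (un-aliased) provider instance.
provider "aws" {
  region = "eu-west-1"
}

# Second instance of the same provider, distinguished by an alias.
provider "aws" {
  alias  = "us"
  region = "us-east-1"
}

# Created with the default instance, so lands in eu-west-1.
resource "aws_s3_bucket" "eu_bucket" {
  bucket = "example-bucket-eu"
}

# Created with the aliased instance, so lands in us-east-1.
resource "aws_s3_bucket" "us_bucket" {
  provider = aws.us
  bucket   = "example-bucket-us"
}
```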
Recently I’ve been looking at how to configure EC2 autoscaling schedules for EKS implementations, specifically delivering these schedule configurations via Terraform. This sounds like it should be rather simple on the surface, but after getting the initial configuration to work an issue of idempotency presents itself. In this post I want to look at the issues presented and how to overcome them. Autoscaling Groups and Schedules When a managed EKS . . .
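As a rough sketch of the starting point, something like the configuration below attaches a schedule to the Autoscaling Group behind an EKS managed node group; the cluster name, node group name and cron expression are assumptions for illustration, not the configuration from the post itself:

```hcl
# Sketch only: cluster name, node group name and schedule times are assumptions.

# Find the Autoscaling Group that EKS created for the managed node group.
data "aws_eks_node_group" "workers" {
  cluster_name    = "example-cluster"
  node_group_name = "example-workers"
}

# Scale the node group down to zero every weekday evening.
resource "aws_autoscaling_schedule" "scale_down_evenings" {
  scheduled_action_name  = "scale-down-evenings"
  autoscaling_group_name = data.aws_eks_node_group.workers.resources[0].autoscaling_groups[0].name
  recurrence             = "0 19 * * 1-5" # 19:00 UTC, Monday to Friday
  min_size               = 0
  max_size               = 0
  desired_capacity       = 0
}
```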
Recently I’ve spent a good amount of time looking at options for managing Kubernetes Secrets with Vault. HashiCorp being a great supporter of the Cloud Native philosophy, it’s little surprise to find that they provide a multitude of options to integrate with Kubernetes and provide extensive documentation here. For my needs I found that the suggested configurations were either unsuitable or required a degree of over-engineering, so I’m going to . . .