<h1><a href="https://damn.engineer">damn.engineer</a>, by <a href="https://twitter.com/theEugeneRomero">Eugene Romero</a></h1>
<h1><a href="https://damn.engineer/2023/10/04/cloud-lnl-securing-apps-kubernetes-azure-keyvault-tls">Cloud Lunch and Learn: Securing your app's communications with Kubernetes, Azure Key Vault, and TLS certificates</a></h1>
<p><em>2023-10-04</em></p>
<p>Cloud Lunch and Learn is a series of weekly online sessions that aims to make knowledge sharing easy. In October, I was invited to present the talk <strong>“Securing your app’s communications with Kubernetes, Azure Key Vault, and TLS certificates”</strong>, in which I discussed how to use the Kubernetes <a href="https://secrets-store-csi-driver.sigs.k8s.io">Secrets Store CSI Driver</a> to automatically inject TLS certificates from an Azure Key Vault into Kubernetes pods.</p>
<p>This talk presents the information I wrote about in an <a href="https://damn.engineer/2022/02/07/tls-cert-azure-keyvault-kubernetes">earlier post</a> on this blog. Additionally, it includes a live demo where I show the tool in action.</p>
<p>The presentation is about 60 minutes long. The slides for the talk can be found <a href="https://damn.engineer/talk-slides/cloud-lunch-and-learn-kubernetes-secrets-store-csi-driver/">here</a>.</p>
<style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
<div class="embed-container"><iframe src="https://www.youtube.com/embed/mOHQj7qDq2M" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></div>
<h1><a href="https://damn.engineer/2023/09/04/ndc-securing-apps-kubernetes-azure-keyvault-tls">NDC Oslo 2023: Securing your app's communications with Kubernetes, Azure Key Vault, and TLS certificates</a></h1>
<p><em>2023-09-04</em></p>
<p>NDC Oslo 2023 took place in May 2023 in Oslo (Norway). At that event, I presented the talk <strong>“Securing your app’s communications with Kubernetes, Azure Key Vault, and TLS certificates”</strong>, in which I discussed how to use the Kubernetes <a href="https://secrets-store-csi-driver.sigs.k8s.io">Secrets Store CSI Driver</a> to automatically inject TLS certificates from an Azure Key Vault into Kubernetes pods.</p>
<p>This talk presents the information I wrote about in an <a href="https://damn.engineer/2022/02/07/tls-cert-azure-keyvault-kubernetes">earlier post</a> on this blog.</p>
<p>This talk is about 15 minutes long. The slides for the talk can be found <a href="https://damn.engineer/talk-slides/ndc-kubernetes-secrets-store-csi-driver/">here</a>.</p>
<style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
<div class="embed-container"><iframe src="https://www.youtube.com/embed/Av2OrTgEh-g" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></div>
<h1><a href="https://damn.engineer/2023/02/06/azure-move-subscriptions">Moving resources between subscriptions in Azure</a></h1>
<p><em>2023-02-06</em></p>
<h2 id="migrating-single-or-multiple-resources-between-azure-subscriptions">Migrating (single or multiple) resources between Azure subscriptions</h2>
<p><img src="https://damn.engineer/assets/images/azure-move-subscriptions/moving.jpg" alt="Moving day" />
Photo by <a href="https://unsplash.com/@jiaweizhao?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Jiawei Zhao</a> on <a href="https://unsplash.com/photos/W-ypTC6R7_k?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
<p>Occasionally, you might need to move resources from one Azure subscription to another. This could be for cost reasons, to keep things organized, to divide resources by type or environment, and so on. In my case, I needed to migrate all of my Azure resources when Azure unexpectedly <a href="https://damn.engineer/2022/10/20/postmortem">shut my old subscription down</a>.</p>
<p>When I was notified that my subscription had been deactivated, I worried that I would need to recreate all resources from scratch in a new subscription. Thankfully, this was not the case. I discussed the issue with my friend <a href="https://twitter.com/nmerrigan">Niall Merrigan</a>, and he showed me that it was indeed possible to move things between subscriptions instead of destroying and recreating them.</p>
<h2 id="before-you-begin">Before you begin</h2>
<p>I recommend reading through all of the following before attempting a move, since there are a few caveats to be aware of.</p>
<h2 id="source-and-destination">Source and destination</h2>
<p>First off, you can only perform this process with <strong>active</strong> subscriptions. If you want to move things out of a deactivated subscription, you will need to get it reactivated first. In my case, after a few emails with Azure Support, they re-enabled my subscription for 48 hours so that I could perform the move. Until a subscription is reactivated, you cannot do anything with the resources in it.</p>
<p>Additionally, you can only move between subscriptions that are in the same Active Directory tenant. Moving resources to a sub in a different tenant is unfortunately not possible.</p>
<h3 id="check-if-the-resource-can-be-moved">Check if the resource can be moved</h3>
<p>Not every resource in Azure can be moved across subscriptions. Check <a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/move-support-resources">this list from Microsoft</a> to see if your desired resources are supported.</p>
<p>If you don’t want to look through the list, or aren’t sure what type a specific resource is, don’t worry: Azure validates the operation before attempting the move and will tell you whether the resource can be moved.</p>
<h3 id="make-your-subscriptions-easily-identifiable">Make your subscriptions easily identifiable</h3>
<p>This is not a requirement, but it’s a good tip that will probably make your life a little easier. Before you start shuffling resources around, I advise you to rename your subscriptions so that you can easily tell which is which. There will be places where you can only pick a subscription by name, so if several of them share the same name, it will be harder to pick the correct one.</p>
<p>Renaming a subscription can be done in the Azure Portal by going to the Subscriptions pane, choosing the desired sub, and using the Rename button:</p>
<p><img src="https://damn.engineer/assets/images/azure-move-subscriptions/1.jpg" alt="Renaming subscriptions" /></p>
<p>Once this is done, your subscriptions will be easily identifiable, which will simplify the process:</p>
<p><img src="https://damn.engineer/assets/images/azure-move-subscriptions/2.jpg" alt="Subscription list" /></p>
<h3 id="resource-groups">Resource groups</h3>
<p>To move resources, the target subscription needs to already contain a resource group for them to go into. This is a good opportunity to establish a system for organizing your resources, if you don’t have one in place. Alternatively, you can simply recreate your resource groups from the old subscription (group names only need to be unique inside each individual sub).</p>
<h3 id="dependencies">Dependencies</h3>
<p>You will not be able to move resources that have child dependencies independently of each other. All resources that have dependencies need to be moved at the same time as said dependencies. This means that you will need to put all of these resources together in the same resource group, if they aren’t already. Once they are moved to the new sub, they can be reorganized there as needed.</p>
<p>For example, imagine I have a virtual machine attached to a virtual network in my old sub, and they are in separate resource groups. Before migrating them, I would have to move these resources into the same resource group in the old subscription. After migrating them to a resource group in the destination sub, I can then put them in different groups in the new subscription.</p>
<h2 id="the-move">The move</h2>
<p>Once you are ready to perform the move, the process itself is quite simple. Go into the resource group containing the desired resources. Select the ones you wish to move, and choose <code class="language-plaintext highlighter-rouge">...</code> -> <code class="language-plaintext highlighter-rouge">Move</code> -> <code class="language-plaintext highlighter-rouge">Move to another subscription</code>.</p>
<p><img src="https://damn.engineer/assets/images/azure-move-subscriptions/3.jpg" alt="Moving process" /></p>
<p>After selecting the desired destination sub and resource group, Azure will perform a check to see if the resources can be moved. If they can’t, Azure will show you a message explaining why. After pressing “Move” and waiting a few minutes, the resources will appear in the new subscription.</p>
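<p>The Portal is not the only option: the Azure CLI exposes the same operation as <code class="language-plaintext highlighter-rouge">az resource move</code>. A minimal sketch (the subscription IDs, resource group names, and resource ID below are placeholders):</p>

```shell
# Sketch: move resources between subscriptions from the command line.
# All IDs and names below are placeholders.
az resource move \
  --destination-subscription-id TARGET_SUBSCRIPTION_ID \
  --destination-group target-resource-group \
  --ids "/subscriptions/SOURCE_SUBSCRIPTION_ID/resourceGroups/source-rg/providers/Microsoft.Compute/virtualMachines/my-vm"
```

<p>As in the Portal flow, Azure validates the request first, so the command will fail with an explanation if the selected resources cannot be moved.</p>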
<h2 id="additional-info">Additional info</h2>
<p>More information and details about specific scenarios can be found in the <a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/move-resource-group-and-subscription">official Microsoft docs</a>.</p>
<h1><a href="https://damn.engineer/2022/11/24/terraform-deploy-configure-maintain-kubernetes">KCD Berlin 2022: Automate Every Layer: Using Terraform to deploy, configure and maintain Azure Kubernetes clusters</a></h1>
<p><em>2022-11-24</em></p>
<p>Kubernetes Community Days took place in June of 2022 in Berlin (Germany). At that conference, I presented the talk <strong>“Automate Every Layer: Using Terraform to Deploy, Configure and Maintain Azure Kubernetes Clusters”</strong>, in which I discussed how we use Terraform to control the entire lifecycle of Azure Kubernetes clusters.</p>
<p>This talk is about 35 minutes long. The slides for the talk can be found <a href="https://damn.engineer/talk-slides/terraform-managed-clusters/">here</a>.</p>
<style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
<div class="embed-container"><iframe src="https://www.youtube.com/embed/7LZB-IrOANs" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></div>
<h1><a href="https://damn.engineer/2022/10/20/postmortem">damn.engineer blackout postmortem</a></h1>
<p><em>2022-10-20</em></p>
<p><a href="#tldr">TL;DR</a> at the end of the post!</p>
<p>On October 1st, I received an email from <a href="https://uptimerobot.com/">Uptime Robot</a> warning me that <a href="https://damn.engineer">damn.engineer</a> was down. When I tried to access the site myself, I kept getting redirected to the site’s 404 page.</p>
<p>This blog is hosted as a <a href="https://azure.microsoft.com/en-us/products/app-service/static/#overview">Static Web App</a> on Azure. This service is pretty great, as it means I don’t have to worry about the infrastructure running the site; Microsoft takes care of that. All you need to do is provide the static pages that make up your site, and Microsoft handles serving them worldwide.</p>
<p>However, a slight inconvenience with this service (and all other Azure services for that matter) is that it requires an <em>active</em> Azure subscription 😅</p>
<p>When I logged into the Azure Portal to investigate why my Web App was not running, I discovered that my Azure account had been disabled! Here was the root of the problem. Upon further investigation, I saw that the subscription had been disabled because the <strong>free monthly credit that I have as part of Visual Studio Enterprise had been removed from my account</strong>.</p>
<p>The strangest part of this is that my VS Enterprise subscription was still valid and active. It seems that there was some technical issue which made Azure think the VS sub had been terminated, even though this was not the case. A friend who works at Microsoft told me that Microsoft had been cracking down on abuse of free Azure credits lately, so it is possible that some change around that might have affected my account by mistake (I promise I was not abusing these credits 😇). Regardless of the cause, I now had a disabled Azure subscription. The Portal kept telling me that if I wanted to resume using the sub, I would have to convert it to Pay-As-You-Go. At the same time, going into the Visual Studio Benefits page and trying to activate the Azure credits would throw an error message indicating that the credits had already been used.</p>
<p>Since this seemed like it was not something I could fix myself, I created a support ticket on the Azure Portal. I explained the situation as best I could, and was contacted a day or two later by a Microsoft support technician. He helped me to find a <strong>new, hidden subscription</strong> that was also connected to my account. According to the information he provided, this new subscription had been created when I clicked on “Activate Azure credits” on the VS Benefits page. This was the reason why that page kept saying the credits were already in use.</p>
<p>The technician also told me that this new subscription would be the one receiving the credits from now on, so I would have to migrate all the resources on my old subscription to this one, since apparently the subs could not be swapped. To be able to move resources out of the old sub however, it would have to be reactivated. The tech told me he would work with the operations team to re-enable the sub for 24-48 hours, at no cost to me. Once this was done, I was able to move my resources over without much issue, and delete the old sub afterwards. I will write a future article about how to move resources between subscriptions. <em>Update: Article can be read <a href="https://damn.engineer/2023/02/06/azure-move-subscriptions">here</a>.</em></p>
<p>And now, as promised:</p>
<h2 id="tldr">TL;DR:</h2>
<h3 id="what-happened">What happened?</h3>
<p>Azure disabled my subscription because of some glitch on their part, which removed my Visual Studio Enterprise monthly Azure credits.</p>
<h3 id="could-i-have-prevented-this-in-any-way">Could I have prevented this in any way?</h3>
<p>Not really, as this was a glitch on Azure’s billing system. It’s possible that this would not have happened on a Pay-As-You-Go subscription.</p>
<h3 id="what-did-i-learn">What did I learn?</h3>
<p>That at the end of the day, your Cloud provider has full control over your Cloud resources. As long as you are paying, everything will be running smoothly. But if for some reason your payment stops, they are quick to disable your stuff.</p>
<h3 id="any-other-observations">Any other observations?</h3>
<p>I noticed that even though the resources were disabled, the Static Web App redirected traffic to my <a href="https://damn.engineer/404">custom 404</a> page, instead of a generic “this Web App is off” page. That was unexpected, but welcome.</p>
<p>Also, several of you reached out to me to let me know the site was down. This was really nice of you, and actually told me that this blog does have readers 😄 So, thank you all!</p>
<h1><a href="https://damn.engineer/2022/09/28/zsh-case-insensitive">Enabling case insensitive completion in ZSH</a></h1>
<p><em>2022-09-28</em></p>
<p>ZSH has been the default shell in macOS since 2019. I recently started using a Mac again, after being away from the ecosystem for a couple of years. While getting familiar with this new shell, I found out that there is a way to enable case-insensitive completion. This means that, when using Tab to autocomplete directory or file names, ZSH will offer all matching options, regardless of casing.</p>
<p>For example, say we have the following content inside a directory:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">ls
</span>Code/ Documents/ Music/ CAD_info.doc call.mp3
</code></pre></div></div>
<p>Without case-insensitive completion (the default), typing <code class="language-plaintext highlighter-rouge">ls c</code> and pressing Tab would give us the following options:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">ls </span>c
call.mp3
</code></pre></div></div>
<p>However, with case-insensitive completion enabled, ZSH will return all files and directories starting with C:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">ls </span>c
Code/ CAD_info.doc call.mp3
</code></pre></div></div>
<p>To enable this functionality, add the following snippet to the end of your <code class="language-plaintext highlighter-rouge">~/.zshrc</code> file (or create one if it does not exist):</p>
<div class="language-zsh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>autoload <span class="nt">-Uz</span> +X compinit <span class="o">&&</span> compinit
<span class="c">## case insensitive path-completion</span>
zstyle <span class="s1">':completion:*'</span> matcher-list <span class="s1">'m:{a-zA-Z}={A-Za-z}'</span>
zstyle <span class="s1">':completion:*'</span> menu <span class="k">select</span>
</code></pre></div></div>
<p>The next time you open ZSH, the functionality will be enabled. To enable it in an already-running terminal, you can also run <code class="language-plaintext highlighter-rouge">source ~/.zshrc</code>.</p>
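<p>As a side note: if you also want Tab completion to treat hyphens and underscores as interchangeable, a commonly used variant of the matcher (an optional tweak, not something the setup above requires) looks like this:</p>

```zsh
## case-insensitive plus hyphen/underscore-insensitive path completion
zstyle ':completion:*' matcher-list 'm:{a-zA-Z-_}={A-Za-z_-}'
```

<p>Use it in place of the <code class="language-plaintext highlighter-rouge">matcher-list</code> line from the snippet above.</p>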
<h1><a href="https://damn.engineer/2022/07/07/building-testing-deploying-buzzword-heavy-application">Hashiconf Europe 2022: Real Life End-to-End Building, Testing, and Deploying of a Buzzword-Heavy Application</a></h1>
<p><em>2022-07-07</em></p>
<p>Hashiconf Europe took place in June of 2022 in Amsterdam (The Netherlands). At that conference, I presented the hallway track <strong>“Real Life End-to-End Building, Testing, and Deploying of a Buzzword-Heavy Application”</strong>, in which I showcased a real life application built around all the buzzwords: cloud-native, Kubernetes, microservices, Docker, and many more. I discussed how we automated the building, testing, and deploying of the app with Terraform, what some of the challenges were, and what wins we have experienced.</p>
<p>The talk is about 15 minutes long. The slides for the talk can be found <a href="https://damn.engineer/talk-slides/terraform-end-to-end/">here</a>.</p>
<p>Hope you enjoy it!</p>
<style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
<div class="embed-container"><iframe src="https://www.youtube.com/embed/cbx4qlZzvEM" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></div>
<h1><a href="https://damn.engineer/2022/05/23/az-files-raspberry-pi">Uploading files to an Azure File Share from a Raspberry Pi</a></h1>
<p><em>2022-05-23</em></p>
<h2 id="how-to-use-the-azure-cli-to-upload-files-from-a-raspberry-pi-to-an-azure-storage-file-share">How to use the Azure CLI to upload files from a Raspberry Pi to an Azure Storage File Share</h2>
<p>Recently, I used an <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction">Azure Storage Account</a> to set up a Cloud <a href="https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction">file share</a>. My goal was to have a file share that could be mounted across all my computers/phones/tablets (by using SMB) and accessed from anywhere.</p>
<p>Once the file share was set up, I decided I wanted to upload a copy of all the files I have in my home server, a Raspberry Pi running Raspbian.</p>
<p>To do this, I decided to use the Azure CLI, which conveniently has an <code class="language-plaintext highlighter-rouge">az storage file upload-batch</code> command. These were the steps I followed:</p>
<h2 id="installing-the-azure-cli-on-the-raspberry-pi">Installing the Azure CLI on the Raspberry Pi</h2>
<p>I attempted to install <code class="language-plaintext highlighter-rouge">azure-cli</code> using the all-in-one script found in the <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt">Azure documentation</a>. However, even though the packages installed, attempting to use them threw an error. There currently aren’t packages for ARM64 architectures, and the packages that the all-in-one script installs are mislabeled as ALL, which makes them install but not run. This is tracked in <a href="https://github.com/Azure/azure-cli/issues/7368">this GitHub issue</a>.</p>
<p>Instead, I used the package found in the pip3 repositories, which does work on ARM64 systems (note that although there is also an <code class="language-plaintext highlighter-rouge">azure-cli</code> package in Python 2’s <code class="language-plaintext highlighter-rouge">pip</code> repos, that package failed to install on my system).</p>
<p>First, I installed <code class="language-plaintext highlighter-rouge">python3-pip</code>, which also installed <code class="language-plaintext highlighter-rouge">python3</code> and all necessary dependencies:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt install python3-pip
</code></pre></div></div>
<p>With <code class="language-plaintext highlighter-rouge">pip3</code> installed, I then installed the <code class="language-plaintext highlighter-rouge">azure-cli</code> package:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip3 install azure-cli
</code></pre></div></div>
<p><code class="language-plaintext highlighter-rouge">pip3</code> installs packages in the <code class="language-plaintext highlighter-rouge">~/.local/bin</code> directory, which is not in the $PATH by default. So I also added this line at the end of my <code class="language-plaintext highlighter-rouge">.bashrc</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export PATH=$PATH:/home/$USER/.local/bin
</code></pre></div></div>
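<p>After reloading the shell configuration, a quick sanity check confirms that the CLI now resolves from the new PATH entry:</p>

```shell
source ~/.bashrc    # pick up the new PATH entry in the current shell
which az            # should print /home/$USER/.local/bin/az
az --version        # confirms the CLI actually runs
```
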
<h2 id="performing-the-batch-upload">Performing the batch upload</h2>
<p>With the Azure CLI installed and ready to go, we can perform the file upload. First, we need to decide which method to use for authenticating to the Storage Account. For my scenario, the easiest way was to use a connection string. To find it, go to the Azure Portal, navigate to your Storage Account, then <code class="language-plaintext highlighter-rouge">Access keys</code>, and finally press <code class="language-plaintext highlighter-rouge">Show keys</code>:</p>
<p><img src="https://damn.engineer/assets/images/az-files-raspberry-pi/storage-access-keys.png" alt="Finding the connection string" /></p>
<p>The rest of the command is pretty self-explanatory. We need to define which File Share to upload to (the <code class="language-plaintext highlighter-rouge">-d</code> flag), the local path to upload (the <code class="language-plaintext highlighter-rouge">-s</code> flag), and which path to upload to in the File Share (the <code class="language-plaintext highlighter-rouge">--destination-path</code> flag; without it, the files will be uploaded to the root of the share). The final command will look like this:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az storage file upload-batch <span class="nt">-d</span> FILE_SHARE <span class="nt">-s</span> LOCAL/PATH/TO/FOLDER/ <span class="se">\</span>
<span class="nt">--destination-path</span> PATH/IN/FILE/SHARE/ <span class="nt">--connection-string</span> MY_STRING
</code></pre></div></div>
<p>Check the <a href="https://docs.microsoft.com/en-us/cli/azure/storage/file?view=azure-cli-latest#az-storage-file-upload-batch">documentation</a> if you want to see all of the available flags.</p>
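<p>If you want to preview exactly which files a run would transfer before committing to it, the CLI also offers a dry-run mode (assuming your <code class="language-plaintext highlighter-rouge">azure-cli</code> version supports the <code class="language-plaintext highlighter-rouge">--dryrun</code> flag; placeholders as in the command above):</p>

```shell
# List the files that WOULD be uploaded, without transferring any data.
az storage file upload-batch -d FILE_SHARE -s LOCAL/PATH/TO/FOLDER/ \
    --destination-path PATH/IN/FILE/SHARE/ --connection-string MY_STRING \
    --dryrun
```
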
<h2 id="conclusion">Conclusion</h2>
<p>Remember that the upload is recursive, so all files and subdirectories found in the source path will be uploaded to the share. Also, the <code class="language-plaintext highlighter-rouge">upload-batch</code> subcommand does not skip files that are already present: it simply overwrites them if they exist. So this command might not be ideal for regularly syncing directories, the way one might use <code class="language-plaintext highlighter-rouge">rsync</code>, for example.</p>
<p>Uploading files from a Raspberry Pi to an Azure File Share is not difficult once all pieces are in place. This process should become even easier once Microsoft releases the ARM64 version of <code class="language-plaintext highlighter-rouge">azure-cli</code>.</p>
<p>Did this tip help you out? Let me know in the comments below!</p>
<h1><a href="https://damn.engineer/2022/04/25/elasticsearch-api-terraform-pt2">Performing Elasticsearch API calls with Terraform, part 2</a></h1>
<p><em>2022-04-25</em></p>
<h2 id="configuring-terraform-to-call-the-elasticsearch-api">Configuring Terraform to call the Elasticsearch API</h2>
<p><img src="https://damn.engineer/assets/images/elasticsearch-api-terraform-pt2/robots.jpg" alt="Can't wait for robots to actually start doing all this grueling work for us" />
Photo by <a href="https://unsplash.com/@ekrull?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Eric Krull</a> on <a href="https://unsplash.com/s/photos/robot?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
<p>In <a href="https://damn.engineer/2022/04/19/elasticsearch-api-terraform-pt1">the previous post</a>, we created a script to automate calls to our Elasticsearch endpoint. Let’s now configure Terraform to use this script whenever we need to reach the API.</p>
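<p>As a refresher, the script’s interface is roughly the following (a hypothetical sketch only; the real <code class="language-plaintext highlighter-rouge">elastic_api_call.sh</code> from part 1 also sets up access to the cluster, and its exact contents may differ):</p>

```shell
#!/usr/bin/env bash
# Hypothetical sketch of elastic_api_call.sh: it reads its inputs from
# environment variables, which Terraform will pass in for us.
set -euo pipefail

# Fail early if any required variable is missing.
: "${AUTHORIZATION:?}" "${HTTP_VERB:?}" "${ENDPOINT:?}" "${BODY_JSON:?}"

# Call the Elasticsearch API (the endpoint URL here is a placeholder).
curl --fail --silent --show-error \
  --request "$HTTP_VERB" \
  --header "Authorization: Basic $AUTHORIZATION" \
  --header "Content-Type: application/json" \
  --data "$BODY_JSON" \
  "https://localhost:9200/$ENDPOINT"
```
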
<h2 id="creating-the-terraform-null_resource">Creating the Terraform <code class="language-plaintext highlighter-rouge">null_resource</code></h2>
<p>As an example, say we wanted to add a <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/set-up-lifecycle-policy.html#:~:text=To%20create%20a%20lifecycle%20policy,policy%20to%20the%20Elasticsearch%20cluster.">lifecycle policy</a> to Elasticsearch. First off, let’s create a variable to hold the policy definition. We use <a href="https://linuxize.com/post/bash-heredoc/">Heredoc</a> syntax to make sure Terraform does not trip up on the multiline JSON definition:</p>
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">variable</span> <span class="s2">"lifecyclePolicy"</span> <span class="p">{</span>
<span class="nx">type</span> <span class="p">=</span> <span class="nx">map</span><span class="p">(</span><span class="nx">string</span><span class="p">)</span>
<span class="nx">default</span> <span class="p">=</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"log-lifecycle"</span>
<span class="nx">policy</span> <span class="p">=</span> <span class="o"><<-</span><span class="no">JSON</span><span class="sh">
{
"policy": {
"phases": {
"hot": {
"actions": {
"rollover": {
"max_age": "7d"
}
}
}
}
}
}
</span><span class="no"> JSON
</span> <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Now, we create a Terraform <code class="language-plaintext highlighter-rouge">null_resource</code> to apply the above policy with the script we wrote earlier. Notice that the script is called <code class="language-plaintext highlighter-rouge">elastic_api_call.sh</code> and is located in the <code class="language-plaintext highlighter-rouge">./scripts/</code> subfolder, alongside the Terraform files.</p>
<p>Notice also that we pass the variables for the script as ENV variables. Of special note is the <code class="language-plaintext highlighter-rouge">AUTHORIZATION</code> value: it should be the <code class="language-plaintext highlighter-rouge">Base64</code> encoding of the <code class="language-plaintext highlighter-rouge">es_username:es_password</code> string. In the real world, we would not want to have the password in the codebase, so we plug it in from somewhere else (replacing the “pass” string below), and we use Terraform’s <a href="https://www.terraform.io/language/functions/join">join</a> and <a href="https://www.terraform.io/language/functions/base64encode">base64encode</a> functions to create the string for us (remember, however, that the <code class="language-plaintext highlighter-rouge">Base64</code> string will be stored in your Terraform state. <strong>Always</strong> make sure to properly secure your state files).</p>
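<p>If you want to sanity-check the value that <code class="language-plaintext highlighter-rouge">base64encode(join(":", ["user", "pass"]))</code> will produce, the same string can be computed from any shell (using the placeholder <code class="language-plaintext highlighter-rouge">user</code>/<code class="language-plaintext highlighter-rouge">pass</code> credentials, not real ones):</p>

```shell
# Base64-encode the user:pass pair exactly as Terraform does
# (printf avoids the trailing newline that echo would add).
auth=$(printf '%s' 'user:pass' | base64)
echo "$auth"  # prints dXNlcjpwYXNz
```
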
<p>Finally, the <code class="language-plaintext highlighter-rouge">ENDPOINT</code> variable should be set to the specific API endpoint we are trying to reach, while <code class="language-plaintext highlighter-rouge">CLUSTER_NAME</code> and <code class="language-plaintext highlighter-rouge">CLUSTER_RESOURCE_GROUP</code> should be set to the correct values for our Kubernetes cluster:</p>
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"null_resource"</span> <span class="s2">"setPolicy"</span> <span class="p">{</span>
<span class="k">provisioner</span> <span class="s2">"local-exec"</span> <span class="p">{</span>
<span class="nx">interpreter</span> <span class="p">=</span> <span class="p">[</span><span class="s2">"/bin/bash"</span><span class="p">,</span> <span class="s2">"-c"</span><span class="p">]</span>
<span class="nx">command</span> <span class="p">=</span> <span class="s2">"chmod +x ./scripts/elastic_api_call.sh && ./scripts/elastic_api_call.sh"</span>
<span class="nx">environment</span> <span class="p">=</span> <span class="p">{</span>
<span class="nx">CLUSTER_NAME</span> <span class="p">=</span> <span class="s2">"es-cluster"</span>
<span class="nx">CLUSTER_RESOURCE_GROUP</span> <span class="p">=</span> <span class="s2">"es-resource-group"</span>
<span class="nx">AUTHORIZATION</span> <span class="p">=</span> <span class="nx">base64encode</span><span class="p">(</span><span class="nx">join</span><span class="p">(</span><span class="s2">":"</span><span class="p">,</span> <span class="p">[</span><span class="s2">"user"</span><span class="p">,</span> <span class="s2">"pass"</span><span class="p">]))</span>
<span class="nx">HTTP_VERB</span> <span class="p">=</span> <span class="s2">"PUT"</span>
<span class="nx">ENDPOINT</span> <span class="p">=</span> <span class="s2">"_ilm/policy/</span><span class="k">${</span><span class="kd">var</span><span class="p">.</span><span class="nx">lifecyclePolicy</span><span class="p">.</span><span class="nx">name</span><span class="k">}</span><span class="s2">"</span>
<span class="nx">BODY_JSON</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">lifecyclePolicy</span><span class="p">.</span><span class="nx">policy</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="nx">triggers</span> <span class="p">=</span> <span class="p">{</span>
<span class="nx">policy</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">lifecyclePolicy</span><span class="p">.</span><span class="nx">policy</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Using the policy definition as a <a href="https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource#optional">trigger</a> ensures that the resource is re-run every time the policy changes, keeping our <code class="language-plaintext highlighter-rouge">null_resource</code> idempotent. Triggers can be set to any string, so make sure to use a value that makes sense for your specific scenario (such as the policy definition, in this example).</p>
<h2 id="conclusion">Conclusion</h2>
<p>The script we created can be used with different combinations of endpoint/HTTP verb/body. This means that we can easily create more <code class="language-plaintext highlighter-rouge">null_resource</code>s as needed, to call different API endpoints in Elasticsearch. In this way, we can keep all our configuration in code, and keep manual work to a minimum.</p>
Performing Elasticsearch API calls with Terraform, part 12022-04-19T00:00:00+02:00https://damn.engineer/2022/04/19/elasticsearch-api-terraform-pt1<h2 id="configuring-elasticsearch-with-terraform-by-means-of-direct-api-calls">Configuring Elasticsearch with Terraform, by means of direct API calls</h2>
<p><img src="https://damn.engineer/assets/images/elasticsearch-api-terraform-pt1/coffee.jpg" alt="A completely unrealistic work scene" />
Photo by <a href="https://burst.shopify.com/@kthukral?utm_campaign=photo_credit&utm_content=Free+Stock+Photo+of+Laptop+Coffee+%E2%80%94+HD+Images&utm_medium=referral&utm_source=credit">Karan Thukral</a> on <a href="https://burst.shopify.com/laptop?utm_campaign=photo_credit&utm_content=Free+Stock+Photo+of+Laptop+Coffee+%E2%80%94+HD+Images&utm_medium=referral&utm_source=credit">Burst</a></p>
<p>If you use <a href="https://www.terraform.io/">Terraform</a> for automating your Elasticsearch deployments, you might find that you need to call the Elasticsearch API during your Terraform runs. Although the Elasticsearch config can cover a lot of ES settings, there are certain changes that can only be made by means of a direct API call to ES. Thankfully, with some clever scripting and a <a href="https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource">null_resource</a> in Terraform, we can easily solve this issue.</p>
<p>NOTE: Even though the information below can be used for a regular instance of Elasticsearch, I am adapting it specifically to Elastic Cloud in Kubernetes (ECK) running on an Azure Kubernetes Service (AKS), as that is the setup we currently have in my project. This only applies to setting up a network tunnel for reaching the endpoints, as otherwise this solution should be identical for any type of ES deployment.</p>
<h2 id="building-blocks">Building blocks</h2>
<p>To build the solution, we will leverage the following components:</p>
<h3 id="the-elasticsearch-api">The Elasticsearch API</h3>
<p>The ES API is well documented in the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html">official docs</a>. We will create a script that uses <code class="language-plaintext highlighter-rouge">curl</code> to perform the calls.</p>
<h3 id="terraform-null_resource">Terraform null_resource</h3>
<p>A <a href="https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource">null_resource</a> allows one to run an arbitrary command or script with Terraform. These resources should be configured with triggers, to ensure that they only run when needed.</p>
<p>To perform the calls themselves, we will create a Bash script. Let’s get started!</p>
<h3 id="connecting-to-elasticsearch">Connecting to Elasticsearch</h3>
<p>Before trying to perform API calls on ES, we need to be able to <em>reach</em> the API. If your ES API is available on your network (or over the Internet), you can probably skip these steps. In my case, our ES is self-contained inside of AKS, and the endpoints are not exposed outside of the cluster. So before we can do anything else, we need to open a tunnel to Elasticsearch.</p>
<p>The first step, since we are running in AKS, is to acquire credentials to connect to Kubernetes. Assuming <code class="language-plaintext highlighter-rouge">az cli</code> is already authenticated with Azure, the only information needed will be the names of the cluster and resource group.</p>
<p>We also create a “lock file” so the credentials are not pulled multiple times, which can help speed up subsequent runs:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> /tmp/kubectl_config_present <span class="o">]</span><span class="p">;</span> <span class="k">then
</span>az aks get-credentials <span class="se">\</span>
<span class="nt">--name</span> <span class="s2">"</span><span class="nv">$CLUSTER_NAME</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">--resource-group</span> <span class="s2">"</span><span class="nv">$CLUSTER_RESOURCE_GROUP</span><span class="s2">"</span> <span class="o">&&</span>
<span class="nb">touch</span> /tmp/kubectl_config_present<span class="p">;</span>
<span class="k">fi</span>
</code></pre></div></div>
<p>Then, we use the <code class="language-plaintext highlighter-rouge">port-forward</code> functionality in <code class="language-plaintext highlighter-rouge">kubectl</code> to temporarily expose the Elasticsearch API. Make sure to use the correct Kubernetes namespace for the <code class="language-plaintext highlighter-rouge">elasticsearch-es-http</code> service. We will use a random high-numbered port, and add a five-second pause to ensure we don’t try to hit the endpoint before it’s fully available. We also move the process to the background (by means of <code class="language-plaintext highlighter-rouge">&</code>), since otherwise it will not release the terminal and the rest of the script won’t run:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">local_port</span><span class="o">=</span><span class="si">$(</span><span class="nb">shuf</span> <span class="nt">-i</span> 10000-65000 <span class="nt">-n</span> 1<span class="si">)</span><span class="p">;</span>
kubectl port-forward service/elasticsearch-es-http <span class="se">\</span>
<span class="nt">--namespace</span> default <span class="s2">"</span><span class="nv">$local_port</span><span class="s2">"</span>:9200 &
<span class="nb">sleep </span>5<span class="p">;</span>
</code></pre></div></div>
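<p>As a quick sanity check of the port selection (this assumes GNU <code class="language-plaintext highlighter-rouge">shuf</code> from coreutils, which the script above already relies on):</p>

```shell
# Pick a random high-numbered local port, exactly as the script does
local_port=$(shuf -i 10000-65000 -n 1)
# shuf -i only emits integers inside the requested range
if [ "$local_port" -ge 10000 ] && [ "$local_port" -le 65000 ]; then
  echo "OK: $local_port"
fi
```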
<h3 id="performing-the-call">Performing the call</h3>
<p>With the Kubernetes service now exposed, we are able to reach the API. Let’s create a flexible <code class="language-plaintext highlighter-rouge">curl</code> call that can be used for different endpoints. When we run the script, we will pass it a few variables, including the HTTP verb we want to use, the JSON body (if any), the authorization string (more on that in the next post), and the specific endpoint we want to call.</p>
<p>If you are not using Kubernetes, make sure to add the correct Elasticsearch URL to the <code class="language-plaintext highlighter-rouge">curl</code> call.</p>
<p>We also use a little “safeguard” at the end to ensure the call was actually successful.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">"</span><span class="nv">$HTTP_VERB</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
<span class="nv">verb</span><span class="o">=</span><span class="s2">"</span><span class="nv">$HTTP_VERB</span><span class="s2">"</span>
<span class="k">else</span>
<span class="nv">verb</span><span class="o">=</span><span class="s2">"GET"</span>
<span class="k">fi</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">"</span><span class="nv">$BODY_JSON</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
<span class="nv">body_args</span><span class="o">=(</span><span class="nt">-d</span> <span class="s2">"</span><span class="nv">$BODY_JSON</span><span class="s2">"</span><span class="o">)</span>
<span class="k">else</span>
<span class="nv">body_args</span><span class="o">=()</span>
<span class="k">fi</span>
curl <span class="nt">--silent</span> <span class="nt">--show-error</span> <span class="nt">--fail</span> <span class="nt">-X</span> <span class="s2">"</span><span class="nv">$verb</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">-H</span> <span class="s2">"Authorization: Basic </span><span class="nv">$AUTHORIZATION</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">-H</span> <span class="s2">"Content-Type: application/json"</span> <span class="s2">"</span><span class="k">${</span><span class="nv">body_args</span><span class="p">[@]</span><span class="k">}</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">-k</span> <span class="s2">"https://localhost:</span><span class="nv">$local_port</span><span class="s2">/</span><span class="nv">$ENDPOINT</span><span class="s2">"</span> <span class="se">\</span>
<span class="o">||</span> <span class="o">{</span> <span class="nv">dirty_exit</span><span class="o">=</span><span class="s2">"true"</span><span class="p">;</span> <span class="o">}</span><span class="p">;</span>
</code></pre></div></div>
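<p>The first <code class="language-plaintext highlighter-rouge">if</code>/<code class="language-plaintext highlighter-rouge">else</code> simply falls back to <code class="language-plaintext highlighter-rouge">GET</code> when no verb is supplied. Bash’s default-value expansion expresses the same idea, and the behavior is easy to verify in isolation (no cluster needed):</p>

```shell
HTTP_VERB=""
echo "${HTTP_VERB:-GET}"   # GET: unset or empty falls back to the default
HTTP_VERB="PUT"
echo "${HTTP_VERB:-GET}"   # PUT: an explicit verb wins
```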
<h3 id="cleaning-up-after-ourselves">Cleaning up after ourselves</h3>
<p>After performing our call, we should shut down the port forward in <code class="language-plaintext highlighter-rouge">kubectl</code>:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">kill</span> <span class="nt">-2</span> <span class="s2">"</span><span class="si">$(</span>pgrep <span class="nt">-f</span> <span class="s2">"kubectl port-forward.*</span><span class="nv">$local_port</span><span class="s2">"</span><span class="si">)</span><span class="s2">"</span><span class="p">;</span>
</code></pre></div></div>
<p>Finally, throw an error in case the call was not successful:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="o">[</span> <span class="s2">"</span><span class="nv">$dirty_exit</span><span class="s2">"</span> <span class="o">==</span> <span class="nb">true</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
</span><span class="nb">echo</span> <span class="nt">-e</span> <span class="s2">"Something went wrong! Check output above."</span><span class="p">;</span>
<span class="nb">exit </span>1<span class="p">;</span>
<span class="k">fi</span>
</code></pre></div></div>
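<p>The <code class="language-plaintext highlighter-rouge">|| { ...; }</code> safeguard is what lets the cleanup run even when the call fails; the error is only surfaced afterwards. The pattern in miniature, with <code class="language-plaintext highlighter-rouge">false</code> standing in for a failing <code class="language-plaintext highlighter-rouge">curl</code>:</p>

```shell
false || { dirty_exit="true"; }   # the "API call" fails, but the script continues
echo "cleanup runs here"          # the port-forward teardown still happens
if [ "$dirty_exit" = "true" ]; then
  echo "Something went wrong!"    # only now is the failure surfaced
fi
```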
<h3 id="the-tldr">The TL;DR</h3>
<p>Putting it all together, our script will look like this (added some comments and terminal colors on the error):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="c"># Pull down k8s config, only first time</span>
<span class="k">if</span> <span class="o">[</span> <span class="o">!</span> <span class="nt">-f</span> /tmp/kubectl_config_present <span class="o">]</span><span class="p">;</span> <span class="k">then
</span>az aks get-credentials <span class="nt">--name</span> <span class="s2">"</span><span class="nv">$CLUSTER_NAME</span><span class="s2">"</span> <span class="nt">--resource-group</span> <span class="s2">"</span><span class="nv">$CLUSTER_RESOURCE_GROUP</span><span class="s2">"</span> <span class="o">&&</span>
<span class="nb">touch</span> /tmp/kubectl_config_present<span class="p">;</span>
<span class="k">fi</span>
<span class="c"># Start kubectl port-forward and detach it from current console</span>
<span class="nv">local_port</span><span class="o">=</span><span class="si">$(</span><span class="nb">shuf</span> <span class="nt">-i</span> 10000-65000 <span class="nt">-n</span> 1<span class="si">)</span><span class="p">;</span>
kubectl port-forward service/elasticsearch-es-http <span class="nt">--namespace</span> default <span class="s2">"</span><span class="nv">$local_port</span><span class="s2">"</span>:9200 &
<span class="nb">sleep </span>5<span class="p">;</span>
<span class="c"># Hit that API!</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">"</span><span class="nv">$HTTP_VERB</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
<span class="nv">verb</span><span class="o">=</span><span class="s2">"</span><span class="nv">$HTTP_VERB</span><span class="s2">"</span>
<span class="k">else</span>
<span class="nv">verb</span><span class="o">=</span><span class="s2">"GET"</span>
<span class="k">fi</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-n</span> <span class="s2">"</span><span class="nv">$BODY_JSON</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then</span>
<span class="nv">body_args</span><span class="o">=(</span><span class="nt">-d</span> <span class="s2">"</span><span class="nv">$BODY_JSON</span><span class="s2">"</span><span class="o">)</span>
<span class="k">else</span>
<span class="nv">body_args</span><span class="o">=()</span>
<span class="k">fi</span>
curl <span class="nt">--silent</span> <span class="nt">--show-error</span> <span class="nt">--fail</span> <span class="nt">-X</span> <span class="s2">"</span><span class="nv">$verb</span><span class="s2">"</span> <span class="nt">-H</span> <span class="s2">"Authorization: Basic </span><span class="nv">$AUTHORIZATION</span><span class="s2">"</span> <span class="nt">-H</span> <span class="s2">"Content-Type: application/json"</span> <span class="s2">"</span><span class="k">${</span><span class="nv">body_args</span><span class="p">[@]</span><span class="k">}</span><span class="s2">"</span> <span class="nt">-k</span> <span class="s2">"https://localhost:</span><span class="nv">$local_port</span><span class="s2">/</span><span class="nv">$ENDPOINT</span><span class="s2">"</span> <span class="o">||</span> <span class="o">{</span> <span class="nv">dirty_exit</span><span class="o">=</span><span class="s2">"true"</span><span class="p">;</span> <span class="o">}</span><span class="p">;</span>
<span class="c"># Ctrl+C the port forward process</span>
<span class="nb">kill</span> <span class="nt">-2</span> <span class="s2">"</span><span class="si">$(</span>pgrep <span class="nt">-f</span> <span class="s2">"kubectl port-forward.*</span><span class="nv">$local_port</span><span class="s2">"</span><span class="si">)</span><span class="s2">"</span><span class="p">;</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">"</span><span class="nv">$dirty_exit</span><span class="s2">"</span> <span class="o">==</span> <span class="nb">true</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
</span><span class="nb">echo</span> <span class="nt">-e</span> <span class="s2">"</span><span class="se">\n\0</span><span class="s2">33[0;31mSomething went wrong! Check output above.</span><span class="se">\0</span><span class="s2">33[0m</span><span class="se">\n\n</span><span class="s2">"</span><span class="p">;</span>
<span class="nb">exit </span>1<span class="p">;</span>
<span class="k">fi</span>
</code></pre></div></div>
<p>This script can be saved anywhere Terraform can see it. For convenience, I will save it to a subfolder called <code class="language-plaintext highlighter-rouge">./scripts/</code>, and will name the script <code class="language-plaintext highlighter-rouge">elastic_api_call.sh</code>.</p>
<p>In my next post, I will show how to run this script with a Terraform <code class="language-plaintext highlighter-rouge">null_resource</code>, as well as making sure that the resource only runs whenever we need it to.</p>
Bash function for navigating any filesystem location2022-03-14T00:00:00+01:00https://damn.engineer/2022/03/14/cdc-alias<h2 id="creating-a-bash-function-for-navigating-and-autocompleting-any-filesystem-directory">Creating a Bash function for navigating and autocompleting any filesystem directory</h2>
<p><img src="https://damn.engineer/assets/images/cdc-alias/terminal.jpg" alt="Bash terminal" />
Photo by <a href="https://unsplash.com/@6heinz3r?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Gabriel Heinzer</a> on <a href="https://unsplash.com/s/photos/computer-terminal?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
<p>If you are like me, and spend all day typing on a keyboard, you start looking for ways to save yourself a keystroke here and there. This quest for ultimate performance led me to try and find a way to speed up navigating into a directory in my Bash terminal.</p>
<h3 id="why">Why?</h3>
<p>There are many reasons why a method for quickly navigating into a directory might make sense. For example, long (or capitalized) directory names. Say that you find yourself having to navigate to <code class="language-plaintext highlighter-rouge">/media/external/Code</code> on a regular basis, because that is where you keep your repositories. Wouldn’t it be nice to have a quick alias to get yourself there instead of having to type out the path every time?</p>
<p><em>“But Damn Dot Engineer”</em> I hear you say, <em>“wouldn’t it be enough to create an <code class="language-plaintext highlighter-rouge">alias</code> for that specific <code class="language-plaintext highlighter-rouge">cd</code> command? Why write an article about this?”</em></p>
<p>Because I have a self-imposed article quota to fill. Also, because <strong>autocomplete</strong>. If I create an alias such as</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">alias </span><span class="nv">cdc</span><span class="o">=</span><span class="s2">"cd /media/external/Code"</span>
</code></pre></div></div>
<p>this <em>will</em> work for navigating to that specific directory. However, what if I want to continue navigating <em>inside</em> of that directory? Say for example my desired destination is <code class="language-plaintext highlighter-rouge">/media/external/Code/www/site</code>. To reach that directory with the above alias, I would have to do it in two commands:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cdc
<span class="nb">cd </span>www/site
</code></pre></div></div>
<p>That is hardly useful, is it? Instead, I wanted to have a command that would work the same way as the normal <code class="language-plaintext highlighter-rouge">cd</code> command does, allowing me to use [tab] for autocompleting paths, and for showing what is inside of a directory with a double [tab] press.</p>
<h3 id="the-solution">The Solution</h3>
<p>Let’s build a simple solution, using nothing more than a couple of Bash functions.</p>
<h4 id="prerequisite">Prerequisite</h4>
<p>This little helper uses the <code class="language-plaintext highlighter-rouge">_cd</code> completion function provided by the bash-completion package. Most “regular” Linux distributions I have tried (such as my daily driver Ubuntu) already have this function included. However, I noticed that the official Ubuntu Docker image did not have it. You can check if your Bash has it by running these two commands:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bash <span class="nt">--debugger</span>
<span class="nb">declare</span> <span class="nt">-F</span> _cd
<span class="c"># _cd 1726 /usr/share/bash-completion/bash_completion</span>
</code></pre></div></div>
<p>You should get output telling you where the function is declared (in my case above, it’s declared in <code class="language-plaintext highlighter-rouge">/usr/share/bash-completion/bash_completion</code>). If the output is empty, your Bash does not have the <code class="language-plaintext highlighter-rouge">_cd</code> function yet. To add it, install the <a href="https://repology.org/project/bash-completion/versions">bash-completion</a> package, and then make sure these lines appear in your <code class="language-plaintext highlighter-rouge">.bashrc</code>:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> /etc/bash_completion <span class="o">]</span> <span class="o">&&</span> <span class="o">!</span> <span class="nb">shopt</span> <span class="nt">-oq</span> posix<span class="p">;</span> <span class="k">then</span>
<span class="nb">.</span> /etc/bash_completion
<span class="k">fi</span>
</code></pre></div></div>
<h4 id="the-bash-function">The Bash function</h4>
<p>Add the following block at the end of your <code class="language-plaintext highlighter-rouge">.bashrc</code>, adjusting the <code class="language-plaintext highlighter-rouge">cdc_path</code> variable to the base path you want to navigate to:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">### cdc command</span>
<span class="nv">cdc_path</span><span class="o">=</span><span class="s1">'/media/external/Code'</span>
_cdc <span class="o">()</span> <span class="o">{</span>
<span class="nb">declare </span><span class="nv">CDPATH</span><span class="o">=</span>
<span class="nb">cd</span> <span class="s2">"</span><span class="nv">$cdc_path</span><span class="s2">"</span>
_cd <span class="s2">"</span><span class="nv">$@</span><span class="s2">"</span>
<span class="o">}</span>
<span class="nb">complete</span> <span class="nt">-F</span> _cdc cdc
cdc <span class="o">()</span> <span class="o">{</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
</span><span class="nb">cd</span> <span class="s2">"</span><span class="nv">$cdc_path</span><span class="s2">"</span><span class="p">;</span>
<span class="k">else
</span><span class="nb">cd</span> <span class="s2">"</span><span class="nv">$cdc_path</span><span class="s2">"</span>/<span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span><span class="p">;</span>
<span class="k">fi</span>
<span class="o">}</span>
<span class="c">###</span>
</code></pre></div></div>
<p>What does this do?</p>
<p>First off, the <code class="language-plaintext highlighter-rouge">_cdc</code> function uses the <code class="language-plaintext highlighter-rouge">_cd</code> function to search for directories inside of <code class="language-plaintext highlighter-rouge">cdc_path</code>. The functionality is the same as that of the regular <code class="language-plaintext highlighter-rouge">cd</code> autocomplete. Any results are added to the arguments of the <code class="language-plaintext highlighter-rouge">cdc</code> command so that, upon pressing [enter], Bash navigates to that directory.</p>
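<p>Stripped of the completion wiring, the navigation half of the helper can be exercised anywhere. A minimal sketch using a throwaway directory (the <code class="language-plaintext highlighter-rouge">/tmp/cdc-demo</code> path is just for illustration):</p>

```shell
cdc_path='/tmp/cdc-demo'
mkdir -p "$cdc_path/www/site"

# Same navigation logic as the cdc function above
cdc () {
  if [ -z "$1" ]; then
    cd "$cdc_path"
  else
    cd "$cdc_path/$1"
  fi
}

cdc www/site
pwd   # /tmp/cdc-demo/www/site
```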
<h4 id="limitations">Limitations</h4>
<p>The <code class="language-plaintext highlighter-rouge">_cdc</code> function needs to navigate to <code class="language-plaintext highlighter-rouge">cdc_path</code> to be able to correctly autocomplete folders. If the operation is cancelled (for example, by pressing [ctrl]+[C] <em>after</em> using [tab] to see results), the terminal still ends up navigating to <code class="language-plaintext highlighter-rouge">cdc_path</code>. I have not found a simple way to avoid navigating away without breaking the autocomplete functionality. This is a minor bug for me, since it does not happen too often and a <code class="language-plaintext highlighter-rouge">cd -</code> gets me back to wherever I was originally.</p>
<p>Did you find this tip useful? Have any suggestions to improve it? Let me know in the comments below!</p>
Using SSL/TLS certificates from Azure Key Vault in Kubernetes pods2022-02-07T00:00:00+01:00https://damn.engineer/2022/02/07/tls-cert-azure-keyvault-kubernetes<h2 id="how-to-make-kubernetes-pods-trust-internal-https-services">How to make Kubernetes pods trust internal HTTPS services</h2>
<p><img src="https://damn.engineer/assets/images/tls-cert-azure-keyvault-kubernetes/padlocks.jpg" alt="Padlocks, padlocks everywhere" />
Photo by <a href="https://unsplash.com/@parsoakhorsand?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Parsoa Khorsand</a> on <a href="https://unsplash.com/s/photos/lock?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
<p><strong>NOTE: This post builds upon my previous post <a href="https://damn.engineer/2022/01/31/azure-keyvault-to-kubernetes">Accessing Azure Key Vault secrets from Kubernetes</a>, and assumes understanding of the subject discussed there.</strong></p>
<p>A common task I face as a DevOps engineer has to do with injecting TLS (formerly SSL) certificates into an application or service. Why would this be needed? There can be many reasons, although by far the one I’ve encountered most, has to do with internal TLS certificates.</p>
<p>Most medium-to-large enterprises use internal TLS certificates to authenticate internal connections. By <em>internal</em>, I mean certificates which have not been issued by a publicly trusted Certificate Authority, but instead have been generated in-house for use by internal applications. This model requires that any client or user attempting to connect have a <a href="https://stackoverflow.com/a/61422058/4441002">“Certificate Authority Certificate”</a> installed, which makes the system trust certificates generated by that particular (internally created, and unique to the organization) Certificate Authority.</p>
<p>For example, an organization might have an internal log sink/aggregator which accepts connections over HTTPS. If this sink is only available within the internal network, its TLS certificate will probably have been generated in-house. Now, imagine a microservice running in Kubernetes needs to send logs to this sink. How can we make the Kubernetes pod trust the internal Certificate Authority, so that connections to the log sink are properly secured?</p>
<p>Although there are probably a few different ways of achieving this result, this is one that I have used which has worked well for me, and does not require any additional helper tools/sidecar containers/etc.</p>
<h2 id="requirements">Requirements</h2>
<p>To start off, the CA certificate to be installed in the microservice should be stored in an Azure Key Vault. For simplicity, I will assume that this certificate has been saved as a <code class="language-plaintext highlighter-rouge">secret</code>. This method should also work if it has been saved as a <code class="language-plaintext highlighter-rouge">certificate</code>, although the syntax might be different. Refer to the <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/configurations/getting-certs-and-keys/">documentation</a> for more information on how to reference the saved cert.</p>
<p>Next, our Kubernetes cluster should already have the Kubernetes Secrets Store CSI Driver set up. For instructions on how to do that, check my <a href="https://damn.engineer/2022/01/31/azure-keyvault-to-kubernetes">previous post</a> on the subject.</p>
<p>The certificate to be used should be in a format that our microservice understands. Since I am using Linux-based microservices, I need to make sure my cert is available as a PEM/CRT file.</p>
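<p>If you are unsure whether a certificate is PEM-encoded, the header line gives it away. A hedged sketch that generates a throwaway self-signed certificate (standing in for an internal CA; the <code class="language-plaintext highlighter-rouge">/tmp/demo-ca.*</code> filenames are placeholders) and inspects it:</p>

```shell
# Throwaway self-signed cert; in reality the internal CA cert
# would come from your organization
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-internal-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# PEM-encoded certificates always begin with this marker
head -1 /tmp/demo-ca.crt   # -----BEGIN CERTIFICATE-----
```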
<p>Finally, I am going to assume our microservice is based on some flavor of Debian. If it isn’t, the location to mount the certificate or the command to be run might be slightly different. Refer to your distribution’s docs for specific instructions on how to update the local certificate store.</p>
<h2 id="querying-the-certificate">Querying the certificate</h2>
<p>The certificate can be queried in the same way as any other key vault object. One thing to notice is that we do not create a Kubernetes secret from the Azure secret (notice the missing <code class="language-plaintext highlighter-rouge">spec.secretObjects</code> section):</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">secrets-store.csi.x-k8s.io/v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">SecretProviderClass</span>
<span class="na">metadata</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">azure-secrets</span>
<span class="na">spec</span><span class="pi">:</span>
<span class="na">provider</span><span class="pi">:</span> <span class="s">azure</span>
<span class="na">parameters</span><span class="pi">:</span>
<span class="na">keyvaultName</span><span class="pi">:</span> <span class="s">$KEYVAULT_NAME</span>
<span class="na">tenantId</span><span class="pi">:</span> <span class="s">$SERVICE_PRINCIPAL_TENANT_ID</span>
<span class="c1"># Name of the secret containing the certificate</span>
<span class="na">objects</span><span class="pi">:</span> <span class="pi">|</span>
<span class="s">array:</span>
<span class="s">- |</span>
<span class="s">objectName: internal-ca-certificate</span>
<span class="s">objectType: secret</span>
</code></pre></div></div>
<p>The secret should now be available for use in our cluster.</p>
<h2 id="mounting-the-certificate-in-our-microservice">Mounting the certificate in our microservice</h2>
<p>With the cert now available, we can use the <code class="language-plaintext highlighter-rouge">volume</code> functionality in Kubernetes to mount it in our pod. First, we need to declare our secrets provider as an eligible volume:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">volumes</span><span class="pi">:</span>
<span class="c1"># Can be anything</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">secrets-provider</span>
<span class="na">csi</span><span class="pi">:</span>
<span class="na">driver</span><span class="pi">:</span> <span class="s">secrets-store.csi.k8s.io</span>
<span class="na">volumeAttributes</span><span class="pi">:</span>
<span class="c1"># Should match the name of the SecretProviderClass</span>
<span class="na">secretProviderClass</span><span class="pi">:</span> <span class="s">azure-secrets</span>
<span class="c1"># Credentials for authenticating to the key vault,</span>
<span class="c1"># see previous post</span>
<span class="na">nodePublishSecretRef</span><span class="pi">:</span>
<span class="na">name</span><span class="pi">:</span> <span class="s">$SERVICE_PRINCIPAL_CREDENTIALS</span>
</code></pre></div></div>
<p>With that out of the way, we should then mount the secret as a file in our pod. This uses a little trick found in the <code class="language-plaintext highlighter-rouge">volumeMounts</code> functionality of Kubernetes, where a single file can be mounted <em>into</em> a directory, instead of mounting <em>on top</em> of a directory and overriding its contents. To achieve this, we use the full path of the mounted file, and use the <code class="language-plaintext highlighter-rouge">subPath</code> field to indicate the specific file in the volume we wish to mount. In this case, the <code class="language-plaintext highlighter-rouge">subPath</code> should match the name of the secret we are querying with our <code class="language-plaintext highlighter-rouge">SecretProviderClass</code>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">volumeMounts</span><span class="pi">:</span>
<span class="c1"># Should match the name of the volume</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">secrets-provider</span>
  <span class="c1"># Full path of the mounted file.</span>
  <span class="c1"># For Debian-based images, it should be</span>
  <span class="c1"># inside the /usr/local/share/ca-certificates/</span>
  <span class="c1"># folder, since that is where the system's</span>
  <span class="c1"># CA certificates are stored</span>
  <span class="na">mountPath</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/usr/local/share/ca-certificates/internal-ca.crt"</span>
  <span class="c1"># Name of the secret containing the certificate</span>
  <span class="na">subPath</span><span class="pi">:</span> <span class="s2">"</span><span class="s">internal-ca-certificate"</span>
  <span class="na">readOnly</span><span class="pi">:</span> <span class="no">true</span>
</code></pre></div></div>
<p>With this, the certificate will be available as a file in our pod. However, most Linux-based systems do not automatically pick up new files in that directory. Instead, the system needs to be told to update the local certificate store, which is rebuilt from the files in that directory. We will do that in the next step.</p>
<h2 id="updating-the-certificate-store">Updating the certificate store</h2>
<p>To update the microservice’s certificate store, we use the <code class="language-plaintext highlighter-rouge">update-ca-certificates</code> command. To make sure that our new cert is available for our service from the moment it starts up, we can run this command as part of a <code class="language-plaintext highlighter-rouge">spec.containers.lifecycle.postStart</code> instruction. PostStart events are sent immediately after a container is started, which means that our command will be run as soon as possible. Additionally, since volume mounts are performed <em>before</em> startup, we can be sure that our cert will be ready to be included in the local certificate store:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">lifecycle</span><span class="pi">:</span>
  <span class="na">postStart</span><span class="pi">:</span>
    <span class="na">exec</span><span class="pi">:</span>
      <span class="na">command</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">/bin/sh"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">-c"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">update-ca-certificates"</span><span class="pi">]</span>
</code></pre></div></div>
<p>This is the last piece of the puzzle. Putting it all together, our pod deployment should look like this:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">Pod</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">lifecycle-pod</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">volumes</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">secrets-provider</span>
    <span class="na">csi</span><span class="pi">:</span>
      <span class="na">driver</span><span class="pi">:</span> <span class="s">secrets-store.csi.k8s.io</span>
      <span class="na">readOnly</span><span class="pi">:</span> <span class="no">true</span>
      <span class="na">volumeAttributes</span><span class="pi">:</span>
        <span class="na">secretProviderClass</span><span class="pi">:</span> <span class="s">azure-secrets</span>
      <span class="na">nodePublishSecretRef</span><span class="pi">:</span>
        <span class="na">name</span><span class="pi">:</span> <span class="s">$SERVICE_PRINCIPAL_CREDENTIALS</span>
  <span class="na">containers</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">app-with-certificate</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">ubuntu</span>
    <span class="na">command</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">sleep"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">300"</span><span class="pi">]</span>
    <span class="na">volumeMounts</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">secrets-provider</span>
      <span class="na">mountPath</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/usr/local/share/ca-certificates/internal-ca.crt"</span>
      <span class="na">subPath</span><span class="pi">:</span> <span class="s2">"</span><span class="s">internal-ca-certificate"</span>
      <span class="na">readOnly</span><span class="pi">:</span> <span class="no">true</span>
    <span class="na">lifecycle</span><span class="pi">:</span>
      <span class="na">postStart</span><span class="pi">:</span>
        <span class="na">exec</span><span class="pi">:</span>
          <span class="na">command</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">/bin/sh"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">-c"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">update-ca-certificates"</span><span class="pi">]</span>
</code></pre></div></div>
<p>At this point, our service should be able to perform HTTPS calls to any other internal services using the same CA provider.</p>
<h2 id="verifying">Verifying</h2>
<p>To verify if our certificate is indeed working, we can exec into our pod:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl <span class="nb">exec</span> <span class="nt">--stdin</span> <span class="nt">--tty</span> lifecycle-pod <span class="nt">--</span> /bin/bash
</code></pre></div></div>
<p>Once inside, we can try <code class="language-plaintext highlighter-rouge">curl</code>ing into a known internal service:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl https://internalsite.mycompany.local
</code></pre></div></div>
<p>If the CA certificate has been set up correctly, <code class="language-plaintext highlighter-rouge">curl</code> should be able to successfully connect to the HTTPS service without complaining about insecure certificates.</p>
Accessing Azure Key Vault secrets from Kubernetes2022-01-31T00:00:00+01:00https://damn.engineer/2022/01/31/azure-keyvault-to-kubernetes<h2 id="querying-and-injecting-azure-key-vault-secrets-into-kubernetes-microservices">Querying and injecting Azure Key Vault secrets into Kubernetes microservices</h2>
<p><img src="https://damn.engineer/assets/images/azure-keyvault-to-kubernetes/secrets.jpg" alt="Application secrets should be properly protected" />
Photo by <a href="https://unsplash.com/@flyd2069?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">FLY:D</a> on <a href="https://unsplash.com/s/photos/padlock?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
<p>If you use Kubernetes to run your applications, sooner or later your cluster pods will need access to <strong>secrets</strong>.</p>
<p>Of course, putting secrets in code is a <a href="https://littlemaninmyhead.wordpress.com/2021/04/05/why-we-shouldnt-commit-secrets-into-source-code-repositories/">very bad idea</a>. Many cyber-attacks and vulnerabilities could have been avoided if a password/API key/connection string/etc had not been committed to a code base.</p>
<p>On the other hand, modern applications, especially in a DevOps environment, should have all of their configuration available programmatically. Having to manually add configuration to running services would defeat the purpose of “Automating All The Things”.</p>
<p>Nowadays, a common approach to secrets management is the use of Vaults. Vaults keep secrets safe, controlling access to them, and providing APIs for querying them. If you have any secrets in your cloud-based application, you should be storing them in a vault. All major cloud providers have vaults readily available for customers.</p>
<p>So, how can Kubernetes-based applications access vault secrets in a secure and automated way?</p>
<p>Recently, I was tasked with finding the answer to that question in my project. After some research and testing, I found what is currently the best (only?) solution for this:</p>
<p>Enter <a href="https://secrets-store-csi-driver.sigs.k8s.io/introduction.html"><strong>Kubernetes Secrets Store CSI Driver</strong></a>. Now that’s a mouthful.</p>
<h2 id="secrets-store-driver">Secrets Store Driver</h2>
<p>At the most basic level, the Kubernetes Secrets Store CSI Driver (from now on, <strong>KSSCD</strong>) is a tool which connects to a vault, pulls one or multiple secrets from it, and makes them available inside the Kubernetes cluster. Pods can then use those secrets natively, without any additional work. If the secrets update on the Vault side, KSSCD will make those updates available in the cluster.</p>
<p>KSSCD is able to query secrets from many different types of vaults. For this example, I am going to use Azure Key Vault.</p>
<p>To accomplish this task, we need to perform the following steps:</p>
<ol>
<li><a href="#granting-ksscd-access-to-azure-key-vault">An identity needs to be created for granting KSSCD access to the vault</a></li>
<li><a href="#installing-ksscd">KSSCD must be installed in the cluster</a></li>
<li><a href="#configuring-ksscd">KSSCD must be configured to know which vault secrets to query, and which Kubernetes secret(s) to create from them</a></li>
<li><a href="#using-the-secrets">Pods have to be configured to use the new Kubernetes secret(s)</a></li>
</ol>
<h3 id="granting-ksscd-access-to-azure-key-vault">Granting KSSCD access to Azure Key Vault</h3>
<p>UPDATE (November 2023): Although I discuss using a Service Principal here for simplicity, there are other, arguably better, ways of providing access to the Key Vault. You can find an overview of those <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/configurations/identity-access-modes/">here</a>.</p>
<p>For KSSCD to have access to the key vault, we create a new Service Principal, or identity, give it permissions on key vault objects, and store the SP credentials as a Kubernetes secret. There are many ways of creating Service Principals, but my preferred way is by using the Azure CLI:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az ad sp create-for-rbac <span class="nt">--name</span> KSSCD-ServicePrincipal
</code></pre></div></div>
<p>Note down the returned <code class="language-plaintext highlighter-rouge">appID</code>, <code class="language-plaintext highlighter-rouge">password</code>, and <code class="language-plaintext highlighter-rouge">tenant</code>.</p>
<p>With our new SP in hand, we can now grant it <code class="language-plaintext highlighter-rouge">get</code> permissions on our key vault objects:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>az keyvault set-policy <span class="nt">-n</span> <span class="s2">"</span><span class="nv">$KEYVAULT_NAME</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">--secret-permissions</span> get <span class="se">\</span>
<span class="nt">--key-permissions</span> get <span class="se">\</span>
<span class="nt">--certificate-permissions</span> get <span class="se">\</span>
<span class="nt">--spn</span> <span class="s2">"</span><span class="nv">$SERVICE_PRINCIPAL_APP_ID</span><span class="s2">"</span>
</code></pre></div></div>
<p>Finally, we create a Kubernetes secret to hold our Service Principal information. Note that the secret containing the credentials needs to be created in the <strong>same namespace</strong> as the application pod. If pods in multiple namespaces need to use the same credentials to access the key vault, this secret needs to be created in <em>each</em> namespace:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl create secret generic keyvault-credentials <span class="se">\</span>
<span class="nt">--from-literal</span> <span class="nv">clientid</span><span class="o">=</span><span class="s2">"</span><span class="nv">$SERVICE_PRINCIPAL_APP_ID</span><span class="s2">"</span> <span class="se">\</span>
<span class="nt">--from-literal</span> <span class="nv">clientsecret</span><span class="o">=</span><span class="s2">"</span><span class="nv">$SERVICE_PRINCIPAL_APP_PASSWORD</span><span class="s2">"</span>
<span class="c"># KSSCD requires that the secret have this specific label</span>
kubectl label secret keyvault-credentials <span class="se">\</span>
secrets-store.csi.k8s.io/used<span class="o">=</span><span class="nb">true</span>
</code></pre></div></div>
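<p>If several namespaces need access, the two commands above can simply be repeated per namespace. A minimal sketch, assuming hypothetical namespaces <code class="language-plaintext highlighter-rouge">foo</code>, <code class="language-plaintext highlighter-rouge">bar</code>, and <code class="language-plaintext highlighter-rouge">baz</code>:</p>

```shell
# Hypothetical namespace list; adjust to match your cluster
for ns in foo bar baz; do
  kubectl --namespace "$ns" create secret generic keyvault-credentials \
    --from-literal clientid="$SERVICE_PRINCIPAL_APP_ID" \
    --from-literal clientsecret="$SERVICE_PRINCIPAL_APP_PASSWORD"
  # KSSCD requires this label on the credentials secret
  kubectl --namespace "$ns" label secret keyvault-credentials \
    secrets-store.csi.k8s.io/used=true
done
```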
<p>The credentials are now ready to be used by KSSCD.</p>
<h3 id="installing-ksscd">Installing KSSCD</h3>
<p>It’s now time to actually install KSSCD in our Kubernetes cluster. In my case, I chose to do it by means of the official Helm chart. Note that this installs the CSI Secrets Provider, and the required bits for an Azure-specific deployment:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>helm repo add csi-secrets-store-provider-azure https://azure.github.io/secrets-store-csi-driver-provider-azure/charts
helm <span class="nb">install </span>csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure <span class="nt">--namespace</span> kube-system
</code></pre></div></div>
<p>There are several reasons for installing KSSCD in the <code class="language-plaintext highlighter-rouge">kube-system</code> namespace, which are outlined in the <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/getting-started/installation/#deployment-using-helm">official documentation</a>.</p>
<p>With KSSCD now installed, we are ready to tell it where to go looking for secrets.</p>
<h3 id="configuring-ksscd">Configuring KSSCD</h3>
<p>To configure KSSCD to make key vault secrets available locally, we must create a Kubernetes <code class="language-plaintext highlighter-rouge">SecretProviderClass</code> resource. In my experience, it is best to create a <code class="language-plaintext highlighter-rouge">SecretProviderClass</code> for each microservice. There are a few reasons for this:</p>
<ol>
<li>It is not easy to have pods in one namespace read secrets from a different namespace. Since each one of my application’s microservices lives in a different namespace, it also makes sense to have a local <code class="language-plaintext highlighter-rouge">SecretProviderClass</code> create local secrets exclusive to that microservice’s needs.</li>
<li>It limits the impact of a leaked secret, since each microservice gets a small, dedicated Kubernetes secret instead of one huge secret containing everything.</li>
</ol>
<p>To create a <code class="language-plaintext highlighter-rouge">SecretProviderClass</code>, the following YAML can be customized and deployed to the same namespace as the pods that will use the secrets. In this case, we will query the Azure Key Vault objects <code class="language-plaintext highlighter-rouge">key-vault-secret-1</code> and <code class="language-plaintext highlighter-rouge">key-vault-secret-2</code>, and make their values available inside the namespace in a new Kubernetes secret called <code class="language-plaintext highlighter-rouge">foo-secrets</code>:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">apiVersion</span><span class="pi">:</span> <span class="s">secrets-store.csi.x-k8s.io/v1</span>
<span class="na">kind</span><span class="pi">:</span> <span class="s">SecretProviderClass</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">azure-secrets-microservice-foo</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">provider</span><span class="pi">:</span> <span class="s">azure</span>
  <span class="na">parameters</span><span class="pi">:</span>
    <span class="na">keyvaultName</span><span class="pi">:</span> <span class="s">$KEYVAULT_NAME</span>
    <span class="c1"># Azure AD tenant ID. Received when we created the SP.</span>
    <span class="na">tenantId</span><span class="pi">:</span> <span class="s">$SERVICE_PRINCIPAL_TENANT_ID</span>
    <span class="c1"># List of secrets to pull from the key vault</span>
    <span class="na">objects</span><span class="pi">:</span> <span class="pi">|</span>
      <span class="s">array:</span>
        <span class="s">- |</span>
          <span class="s">objectName: key-vault-secret-1</span>
          <span class="s">objectType: secret</span>
        <span class="s">- |</span>
          <span class="s">objectName: key-vault-secret-2</span>
          <span class="s">objectType: secret</span>
  <span class="c1"># The new Kubernetes secret to create</span>
  <span class="na">secretObjects</span><span class="pi">:</span>
  <span class="c1"># Name of the new Kubernetes secret</span>
  <span class="pi">-</span> <span class="na">secretName</span><span class="pi">:</span> <span class="s">foo-secrets</span>
    <span class="na">type</span><span class="pi">:</span> <span class="s">Opaque</span>
    <span class="na">data</span><span class="pi">:</span>
    <span class="c1"># A key name inside the new secret</span>
    <span class="pi">-</span> <span class="na">key</span><span class="pi">:</span> <span class="s">databasepassword</span>
      <span class="c1"># Secret value to use</span>
      <span class="na">objectName</span><span class="pi">:</span> <span class="s">key-vault-secret-1</span>
    <span class="pi">-</span> <span class="na">key</span><span class="pi">:</span> <span class="s">clientsecret</span>
      <span class="na">objectName</span><span class="pi">:</span> <span class="s">key-vault-secret-2</span>
</code></pre></div></div>
<p>Note that it is possible to create multiple Kubernetes secrets under <code class="language-plaintext highlighter-rouge">secretObjects</code>, and populate them with different keys and values.</p>
<p>Save this file as <code class="language-plaintext highlighter-rouge">secret-provider-class.yaml</code> and deploy it to the cluster with <code class="language-plaintext highlighter-rouge">kubectl</code>:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl apply <span class="nt">-f</span> ./secret-provider-class.yaml
</code></pre></div></div>
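<p>As noted above, <code class="language-plaintext highlighter-rouge">secretObjects</code> can define more than one Kubernetes secret. Here is a sketch of what that could look like — the secret names below are hypothetical, and each <code class="language-plaintext highlighter-rouge">objectName</code> must still match an entry in the <code class="language-plaintext highlighter-rouge">objects</code> array:</p>

```yaml
# Hypothetical: split the key vault values into two
# separate Kubernetes secrets instead of one
secretObjects:
- secretName: foo-db-secrets
  type: Opaque
  data:
  - key: databasepassword
    objectName: key-vault-secret-1
- secretName: foo-client-secrets
  type: Opaque
  data:
  - key: clientsecret
    objectName: key-vault-secret-2
```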
<p>At this point, KSSCD is configured to access Azure, but has not actually made a connection to the key vault yet. For that, we have to configure a pod to use the secrets provided by KSSCD.</p>
<h3 id="using-the-secrets">Using the secrets</h3>
<p>Because of how KSSCD works, secrets are only queried from the key vault <em>when a pod attempts to mount them as a volume</em>. My understanding is that this is a consequence of KSSCD being implemented as a CSI driver. Therefore, to use our secrets in a pod, we also need to make them available as volumes.</p>
<p>Here is a sample pod deployment, which mounts our secrets on <code class="language-plaintext highlighter-rouge">/mnt/secrets</code>, and then creates environment variables from those secrets:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">kind</span><span class="pi">:</span> <span class="s">Pod</span>
<span class="na">apiVersion</span><span class="pi">:</span> <span class="s">v1</span>
<span class="na">metadata</span><span class="pi">:</span>
  <span class="c1"># The pod we are creating</span>
  <span class="na">name</span><span class="pi">:</span> <span class="s">foo-microservice</span>
<span class="na">spec</span><span class="pi">:</span>
  <span class="na">volumes</span><span class="pi">:</span>
  <span class="c1"># The volume created by KSSCD. This block makes it available</span>
  <span class="c1"># to the pod for mounting. It can be named anything we want.</span>
  <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">secrets-volume</span>
    <span class="na">csi</span><span class="pi">:</span>
      <span class="na">driver</span><span class="pi">:</span> <span class="s">secrets-store.csi.k8s.io</span>
      <span class="na">readOnly</span><span class="pi">:</span> <span class="no">true</span>
      <span class="na">volumeAttributes</span><span class="pi">:</span>
        <span class="c1"># Which SecretProviderClass is providing this volume?</span>
        <span class="na">secretProviderClass</span><span class="pi">:</span> <span class="s">azure-secrets-microservice-foo</span>
      <span class="c1"># This is the secret with the SP credentials, which</span>
      <span class="c1"># KSSCD will use to connect to the key vault</span>
      <span class="na">nodePublishSecretRef</span><span class="pi">:</span>
        <span class="na">name</span><span class="pi">:</span> <span class="s">keyvault-credentials</span>
  <span class="na">containers</span><span class="pi">:</span>
  <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">busybox</span>
    <span class="na">image</span><span class="pi">:</span> <span class="s">busybox:latest</span>
    <span class="na">command</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="s2">"</span><span class="s">/bin/sleep"</span>
    <span class="pi">-</span> <span class="s2">"</span><span class="s">10000"</span>
    <span class="na">volumeMounts</span><span class="pi">:</span>
    <span class="c1"># Mount the above volume. This also makes our secrets</span>
    <span class="c1"># available as files in the pod's filesystem. Crucially,</span>
    <span class="c1"># this step also creates the Kubernetes secret we</span>
    <span class="c1"># defined in the SecretProviderClass</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">secrets-volume</span>
      <span class="na">mountPath</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/mnt/secrets"</span>
      <span class="na">readOnly</span><span class="pi">:</span> <span class="no">true</span>
    <span class="na">env</span><span class="pi">:</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">NON_SECRET_ENV</span>
      <span class="na">value</span><span class="pi">:</span> <span class="s2">"</span><span class="s">some</span><span class="nv"> </span><span class="s">value"</span>
    <span class="c1"># Create ENV variables from the Kubernetes secret</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">DB_PASSWORD</span>
      <span class="na">valueFrom</span><span class="pi">:</span>
        <span class="na">secretKeyRef</span><span class="pi">:</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">foo-secrets</span>
          <span class="na">key</span><span class="pi">:</span> <span class="s">databasepassword</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">CLIENT_SECRET</span>
      <span class="na">valueFrom</span><span class="pi">:</span>
        <span class="na">secretKeyRef</span><span class="pi">:</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">foo-secrets</span>
          <span class="na">key</span><span class="pi">:</span> <span class="s">clientsecret</span>
    <span class="c1"># ENVs can be used inside of other ENVs if needed</span>
    <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">CONNECTION_STRING_WITH_SECRET_INJECTED</span>
      <span class="na">value</span><span class="pi">:</span>
        <span class="s2">"</span><span class="s">Server=db;User</span><span class="nv"> </span><span class="s">ID=user;Password=$(DB_PASSWORD);"</span>
</code></pre></div></div>
<p>Let’s save this file as <code class="language-plaintext highlighter-rouge">pod.yaml</code> and deploy it to the cluster:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl apply <span class="nt">-f</span> ./pod.yaml
</code></pre></div></div>
<p>Once the pod successfully deploys, our secrets will be available to the pod in two ways:</p>
<ol>
<li>As files in the path we specified in <code class="language-plaintext highlighter-rouge">spec.containers.volumeMounts.mountPath</code>. The files will be named <strong>the same way as the Azure Key Vault secret</strong> (in our example, these files will be <code class="language-plaintext highlighter-rouge">/mnt/secrets/key-vault-secret-1</code> and <code class="language-plaintext highlighter-rouge">/mnt/secrets/key-vault-secret-2</code>). <strong>All</strong> queried secrets will be available here.</li>
<li>As environment variables. <strong>Only secrets explicitly set up under <code class="language-plaintext highlighter-rouge">spec.containers.env</code> will be available as ENVs</strong>.</li>
</ol>
<p>In this way, we can choose which secrets we only need as files (certificate files for example), and which ones we want to use as environment variables.</p>
<h3 id="what-happens-if-a-secret-is-updated-in-the-key-vault">What happens if a secret is updated in the key vault?</h3>
<p>KSSCD will query key vaults to check for changes from time to time. However, if a secret being used as an ENV variable is updated at the source, the pod will need to be restarted for the new ENV variables to become available to it. More information on secret rotation can be found in the <a href="https://secrets-store-csi-driver.sigs.k8s.io/topics/secret-auto-rotation.html">official docs</a>.</p>
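<p>Note that secret rotation must be enabled when installing the driver. A hedged sketch of the Helm values involved — the flag names below are taken from the Azure provider chart’s documentation and should be verified against your chart version:</p>

```yaml
# Assumed values.yaml fragment for the
# csi-secrets-store-provider-azure Helm chart
secrets-store-csi-driver:
  # Enable periodic re-fetching of secrets from the vault
  enableSecretRotation: true
  # How often to poll the vault for changes
  rotationPollInterval: 2m
```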
<h3 id="conclusion">Conclusion</h3>
<p>The Kubernetes Secrets Store CSI Driver is a very powerful tool which can be leveraged to keep secrets and code separate. By using it in our Kubernetes clusters, our entire workflow can be automated, while maintaining security all around.</p>
<p>Questions? Suggestions? Reach out on <a href="https://twitter.com/theEugeneRomero">Twitter</a> and let me know!</p>
Accessing multiple Azure subscriptions in a single Terraform run2022-01-24T00:00:00+01:00https://damn.engineer/2022/01/24/terraform-multiple-azure-subscriptions<h2 id="a-tale-of-two-cities-azure-subscriptions">A tale of two <del>cities</del> Azure subscriptions</h2>
<p><img src="https://damn.engineer/assets/images/terraform-multiple-azure-subscriptions/highway.jpg" alt="One Terraform, two subscriptions" />
Photo by <a href="https://unsplash.com/@aeschwarz?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Adrian Schwarz</a> on <a href="https://unsplash.com/s/photos/cities?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
<p>When infrastructure is declared as Terraform code, resources are usually created in a single Azure subscription. It is normally best practice to keep multiple subscriptions separate in code, to avoid ending up with a large codebase that is difficult to maintain and understand. However, there are times when it is necessary, or simply most logical, to create or query resources across different subscriptions. Luckily, Terraform allows us to work with two (or more) subscriptions in a single run if needed, by means of <a href="https://www.terraform.io/language/providers/configuration#alias-multiple-provider-configurations">configuration aliases</a>.</p>
<p>As an example, say we have two subscriptions, one called <code class="language-plaintext highlighter-rouge">main</code> and one <code class="language-plaintext highlighter-rouge">secondary</code>. Each of those subscriptions has its own Terraform repository and resources, including their own Azure Key Vaults. Now, imagine we need to put a newly created password into both of them, since this secret will be used by applications on both subscriptions.</p>
<h3 id="credentials-aka-service-principals">Credentials, a.k.a. Service Principals</h3>
<p>The first thing we need is credentials for each subscription. We could create a single Service Principal and give it access to both subscriptions, but I recommend splitting access so that the attack surface is reduced if a Service Principal is ever compromised.</p>
<p>If you are already using Terraform, chances are you already have Service Principals which you use for your Terraform runs. If not, we can create two new ones, one for each subscription. I will not get into the specifics of how to do that here, and instead recommend following the <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret">official Terraform documentation</a>.</p>
<h3 id="configuring-terraform-to-use-multiple-azure-providers">Configuring Terraform to use multiple Azure providers</h3>
<p>With our newly minted Service Principals (SPs) on hand, we can now configure Terraform to use them both.</p>
<p>First, we add some variables to hold the data for both subscriptions and SPs. That way, this sensitive information can be injected at runtime, for example by means of <a href="https://www.terraform.io/language/values/variables#environment-variables">environment variables</a>.</p>
<h5 id="variablestf">variables.tf</h5>
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># The Active Directory tenant ID.</span>
<span class="c1"># This one should be the same for both SPs</span>
<span class="c1"># and subscriptions</span>
<span class="k">variable</span> <span class="nx">TENANT_ID</span> <span class="p">{</span> <span class="nx">type</span><span class="p">=</span><span class="nx">string</span> <span class="p">}</span>
<span class="c1"># Data for the "main" subscription and SP</span>
<span class="k">variable</span> <span class="nx">SUBSCRIPTION_ID_MAIN</span> <span class="p">{</span> <span class="nx">type</span><span class="p">=</span><span class="nx">string</span> <span class="p">}</span>
<span class="k">variable</span> <span class="nx">SERVICE_PRINCIPAL_ID_MAIN</span> <span class="p">{</span> <span class="nx">type</span><span class="p">=</span><span class="nx">string</span> <span class="p">}</span>
<span class="k">variable</span> <span class="nx">SERVICE_PRINCIPAL_SECRET_MAIN</span> <span class="p">{</span> <span class="nx">type</span><span class="p">=</span><span class="nx">string</span> <span class="p">}</span>
<span class="c1"># Data for the "secondary" subscription and SP</span>
<span class="k">variable</span> <span class="nx">SUBSCRIPTION_ID_SECONDARY</span> <span class="p">{</span> <span class="nx">type</span><span class="p">=</span><span class="nx">string</span> <span class="p">}</span>
<span class="k">variable</span> <span class="nx">SERVICE_PRINCIPAL_ID_SECONDARY</span> <span class="p">{</span> <span class="nx">type</span><span class="p">=</span><span class="nx">string</span> <span class="p">}</span>
<span class="k">variable</span> <span class="nx">SERVICE_PRINCIPAL_SECRET_SECONDARY</span> <span class="p">{</span> <span class="nx">type</span><span class="p">=</span><span class="nx">string</span> <span class="p">}</span>
</code></pre></div></div>
<p>With the variables in place, we can tell Terraform to use the <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs">azurerm</a> provider with two separate configurations. Notice the <code class="language-plaintext highlighter-rouge">alias</code> field in the config for the secondary subscription, which will allow us to specify when we want to use this one. <strong>Otherwise, Terraform will default to using the block with no <code class="language-plaintext highlighter-rouge">alias</code> declared.</strong></p>
<p>We also add the <a href="https://registry.terraform.io/providers/hashicorp/random/latest">random</a> provider, which we will use for creating the password we are saving to both Key Vaults:</p>
<h5 id="configtf">config.tf</h5>
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">terraform</span> <span class="p">{</span>
  <span class="nx">required_providers</span> <span class="p">{</span>
    <span class="nx">azurerm</span> <span class="p">=</span> <span class="p">{</span>
      <span class="nx">version</span> <span class="p">=</span> <span class="s2">"~>2.90"</span>
    <span class="p">}</span>
    <span class="nx">random</span> <span class="p">=</span> <span class="p">{</span>
      <span class="nx">version</span> <span class="p">=</span> <span class="s2">"~>3.1"</span>
    <span class="p">}</span>
  <span class="p">}</span>
<span class="p">}</span>
<span class="c1"># Configuration for our "main" subscription</span>
<span class="k">provider</span> <span class="s2">"azurerm"</span> <span class="p">{</span>
<span class="nx">tenant_id</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">TENANT_ID</span>
<span class="nx">subscription_id</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">SUBSCRIPTION_ID_MAIN</span>
<span class="nx">client_id</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">SERVICE_PRINCIPAL_ID_MAIN</span>
<span class="nx">client_secret</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">SERVICE_PRINCIPAL_SECRET_MAIN</span>
<span class="nx">features</span> <span class="p">{}</span>
<span class="p">}</span>
<span class="c1"># Configuration for the "secondary" subscription</span>
<span class="k">provider</span> <span class="s2">"azurerm"</span> <span class="p">{</span>
<span class="nx">alias</span> <span class="p">=</span> <span class="s2">"secondary"</span>
<span class="nx">tenant_id</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">TENANT_ID</span>
<span class="nx">subscription_id</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">SUBSCRIPTION_ID_SECONDARY</span>
<span class="nx">client_id</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">SERVICE_PRINCIPAL_ID_SECONDARY</span>
<span class="nx">client_secret</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">SERVICE_PRINCIPAL_SECRET_SECONDARY</span>
<span class="nx">features</span> <span class="p">{}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Terraform is now ready to work with both subscriptions.</p>
<h3 id="accessing-and-modifying-resources">Accessing and modifying resources</h3>
<p>With that out of the way, we can create the actual resources we need. We start by finding out data about both Key Vaults. Once again, notice the use of <code class="language-plaintext highlighter-rouge">provider</code> to query the non-default subscription:</p>
<h5 id="datatf">data.tf</h5>
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Query data from the default subscription</span>
<span class="k">data</span> <span class="s2">"azurerm_key_vault"</span> <span class="s2">"main_key_vault"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"kv-main"</span>
<span class="nx">resource_group_name</span> <span class="p">=</span> <span class="s2">"rg-main"</span>
<span class="p">}</span>
<span class="c1"># Query the secondary subscription</span>
<span class="k">data</span> <span class="s2">"azurerm_key_vault"</span> <span class="s2">"secondary_key_vault"</span> <span class="p">{</span>
<span class="k">provider</span> <span class="p">=</span> <span class="nx">azurerm</span><span class="p">.</span><span class="nx">secondary</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"kv-secondary"</span>
<span class="nx">resource_group_name</span> <span class="p">=</span> <span class="s2">"rg-secondary"</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Now that we have what we need, we can finally create the secret, and store it into both Key Vaults, using the <code class="language-plaintext highlighter-rouge">provider</code> block when appropriate:</p>
<h5 id="shared_secrettf">shared_secret.tf</h5>
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"random_password"</span> <span class="s2">"shared_password"</span> <span class="p">{</span>
<span class="nx">length</span> <span class="p">=</span> <span class="mi">64</span>
<span class="p">}</span>
<span class="c1"># Saving the password in the main key vault</span>
<span class="k">resource</span> <span class="s2">"azurerm_key_vault_secret"</span> <span class="s2">"shared_password_main"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"super-secret-password"</span>
<span class="nx">value</span> <span class="p">=</span> <span class="nx">random_password</span><span class="p">.</span><span class="nx">shared_password</span><span class="p">.</span><span class="nx">result</span>
<span class="nx">key_vault_id</span> <span class="p">=</span> <span class="k">data</span><span class="p">.</span><span class="nx">azurerm_key_vault</span><span class="p">.</span><span class="nx">main_key_vault</span><span class="p">.</span><span class="nx">id</span>
<span class="p">}</span>
<span class="c1"># And in the secondary, using the "provider" field again</span>
<span class="k">resource</span> <span class="s2">"azurerm_key_vault_secret"</span> <span class="s2">"shared_password_secondary"</span> <span class="p">{</span>
<span class="k">provider</span> <span class="p">=</span> <span class="nx">azurerm</span><span class="p">.</span><span class="nx">secondary</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"super-secret-password"</span>
<span class="nx">value</span> <span class="p">=</span> <span class="nx">random_password</span><span class="p">.</span><span class="nx">shared_password</span><span class="p">.</span><span class="nx">result</span>
<span class="nx">key_vault_id</span> <span class="p">=</span> <span class="k">data</span><span class="p">.</span><span class="nx">azurerm_key_vault</span><span class="p">.</span><span class="nx">secondary_key_vault</span><span class="p">.</span><span class="nx">id</span>
<span class="p">}</span>
</code></pre></div></div>
<p>And just like that, we have created our secret and saved it to both Key Vaults. If we ever need to rotate the secret, we can just <code class="language-plaintext highlighter-rouge">taint</code> the single <code class="language-plaintext highlighter-rouge">random_password</code> resource, and Terraform will update all the Key Vaults automatically. Neat!</p>
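<p>For reference, the rotation mentioned above looks like this on the command line. This is a sketch, assuming an already-initialized working directory; note that <code class="language-plaintext highlighter-rouge">terraform taint</code> has been deprecated since Terraform v0.15.2 in favor of the <code class="language-plaintext highlighter-rouge">-replace</code> option:</p>

```shell
# Flag the password for recreation on the next apply
terraform taint random_password.shared_password
terraform apply

# On Terraform v0.15.2 and newer, the preferred equivalent is:
terraform apply -replace="random_password.shared_password"
```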
<h3 id="final-notes">Final notes</h3>
<p>Something to keep in mind is that all of these resources will be tracked in the same Terraform state file. Also, make sure you don’t end up creating circular dependencies, where each repository needs data from resources created in the other repository. These are a few of the reasons why it is good to be mindful about only using this little trick when appropriate.</p>
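<p>To make the circular-dependency risk concrete: cross-repository coupling usually starts when one repository reads the other’s outputs through a <code class="language-plaintext highlighter-rouge">terraform_remote_state</code> data source. A hypothetical sketch (the backend type, names, and output name here are assumptions, not part of this setup):</p>

```terraform
# In the *other* repository: read outputs from this repository's state.
# If this repository later starts reading the other one's state as well,
# the two can no longer be planned independently of each other.
data "terraform_remote_state" "secrets" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate"       # assumed backend settings
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "secrets.terraform.tfstate"
  }
}

locals {
  # assumes this repository declares an output named "main_key_vault_id"
  main_key_vault_id = data.terraform_remote_state.secrets.outputs.main_key_vault_id
}
```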
Don't shave that yak!2022-01-17T00:00:00+01:00https://damn.engineer/2022/01/17/dont-shave-that-yak<h2 id="when-a-10-minute-change-becomes-a-multi-hour-project">When a 10-minute change becomes a multi-hour project</h2>
<p><img src="https://damn.engineer/assets/images/dont-shave-that-yak/yak-shave.jpg" alt="Someone not following the advice of this article" />
Photo by <a href="https://burst.shopify.com/@lightleaksin?utm_campaign=photo_credit&utm_content=Browse+Free+HD+Images+of+Bet+You+Didn%27t+Expect+To+See+A+Yak+Being+Shaved+Today&utm_medium=referral&utm_source=credit">Samantha Hurley</a> on <a href="https://burst.shopify.com/animal?utm_campaign=photo_credit&utm_content=Browse+Free+HD+Images+of+Bet+You+Didn%27t+Expect+To+See+A+Yak+Being+Shaved+Today&utm_medium=referral&utm_source=credit">Burst</a></p>
<p><strong>Note:</strong> This article was originally published in 2019 on the <a href="https://medium.com/capgemini-norway/dont-shave-that-yak-872e994da32b">Capgemini Medium</a> account, and on <a href="https://www.kode24.no/guider/stick-to-your-planned-work/71207835">kode24.no</a>.</p>
<p>Have you ever had an experience like this?</p>
<p>Yesterday, I had to do a two-line code change to a single source file in our project. Literally all that had to be done was to replace a hard-coded value with a variable.</p>
<p>“Easy”, I thought. Lo and behold, after making the change, the CI pipeline decided it no longer wanted to run successfully.</p>
<p>“What gives?” I asked myself.</p>
<p>Looking into the issue, it seemed the pipeline failure might not be related to my change; instead, it appeared to be a mix of not having pushed any changes to the repository over the past week (so no pipeline runs had occurred in that period), and a dependency which had been updated in that time, after not having been touched by its developer in several months.</p>
<p><em>“Ok, no problem, let’s try to make this work”.</em></p>
<p>The error message I was getting was a bit cryptic, so I went into the dependency’s GitHub page to try and figure out what was breaking. After some digging around and looking through open and closed issues on the project’s site, I realized that the developer had added support for the new version of its parent tool (Terraform), but without also accounting for backwards compatibility.</p>
<p>But, oh no! Our pipelines run on Azure DevOps’ provided agents, which still didn’t have that new version of Terraform on them! So this was why the pipeline was breaking. The old version of Terraform didn’t understand the new version’s way of doing things.</p>
<p><em>“Ok, what to do now? I suppose I could get around the old version on the agents by modifying the pipeline to download and set up the latest executable, at least until Microsoft updates their images.”</em></p>
<p>But what other things will break with this change? This was a major version change (0.11 to 0.12), and apparently a lot of things had changed between versions.</p>
<p>Oh wait, Terraform has a helper tool to tell you what things will no longer work in your code, cool. My workstation was on the same version of Terraform as the agents, so after downloading and setting up the new version on my machine, I ran the helper. This tool not only tells you what is deprecated, but will also fix things for you if possible, so let’s hope there aren’t a lot of changes and… wait, all of a sudden I was sitting on a lot of modified files in my branch.</p>
<p>I also realized that, since Terraform does not look in subfolders, I would need to re-run this process within every subfolder as well, for example the ones where we keep our modules.</p>
<p>Ok, maybe I can write a script that does this, instead of manually going into every folder and running these commands. Should I use Bash or Ruby, hmm? Afterwards, I’ll have to modify the pipeline YAML so it downloads and upgrades Terraform every time it runs, and afterwards I should probably find out how often MS updates their images…</p>
<p>You can see where this is going.</p>
<p>A 10-minute change of two lines had become a multi-hour project with a modified pipeline and dozens of changed files in a pull request. And who knows what else I would have needed to do afterwards!</p>
<p>At this point, my colleague and I realized that this was an exercise in futility. “Why don’t we pin the library to the last-known working version, wait until MS updates their agent images, then create a task for tackling the upgrade properly?” Three modified lines later (the original two plus a new one for pinning the dependency version), and the pipeline was happy again.</p>
<p>I think this classic scene from Malcolm in the Middle describes the situation perfectly:</p>
<style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style>
<div class="embed-container"><iframe src="https://www.youtube.com/embed/8fnfeuoh4s8" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe></div>
<p>So what is the lesson here? <a href="https://seths.blog/2005/03/dont_shave_that/">Don’t Shave that Yak</a>. Stick to the planned work.</p>
<p>If new, unexpected work appears while you’re doing planned work, don’t jump on it, however tempting it might be. If the work can be planned for later, do that. If it absolutely can’t, inform your lead first so that the work can be accounted for and planned, before performing it.</p>
<p>Otherwise, you might find yourself shaving a Yak, when all you wanted to do was wax your car.</p>
Bash alias for cleaning git branches2022-01-10T00:00:00+01:00https://damn.engineer/2022/01/10/git-clean-branches<h2 id="tired-of-hunting-for-stale-git-branches">Tired of hunting for stale git branches?</h2>
<p><img src="https://damn.engineer/assets/images/git-clean-branches/git-branches.jpg" alt="Git branches" />
Photo by <a href="https://unsplash.com/@yancymin?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Yancy Min</a> on <a href="https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a></p>
<p>As a <code class="language-plaintext highlighter-rouge">git</code> user, I got tired of constantly seeing my list of branches grow out of control. Git is great for keeping the history of a repository, but at least for me, there isn’t a lot of value in keeping deleted branches in my local history. Besides, if you are using <a href="https://trunkbaseddevelopment.com/short-lived-feature-branches/">short-lived feature branches</a> (and you <em>really</em> should be), you will be creating and deleting several new branches per week.</p>
<p>Because of this, I decided to create an alias to remove old branches from my local machine. My criteria were:</p>
<ol>
<li>Only branches which have been merged to master should be deleted</li>
<li>My local list of remote branches should be cleaned up, to remove non-existing branches</li>
</ol>
<p>With that criteria in mind, I came up with the following steps:</p>
<h2 id="the-solution">The solution</h2>
<p>First, return a list of all local branches which have already been merged to master. Crucially, this does not include any branches with commits <strong>not yet in</strong> master:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git branch <span class="nt">--merged</span> master
</code></pre></div></div>
<p>This list also includes the local <code class="language-plaintext highlighter-rouge">master</code> branch, as well as the currently checked-out branch (denoted by a <code class="language-plaintext highlighter-rouge">*</code>). The following grep removes both:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">grep</span> <span class="nt">-v</span> <span class="nt">-e</span> <span class="s1">'master'</span> <span class="nt">-e</span> <span class="s1">'\*'</span>
</code></pre></div></div>
<p>Finally, delete all remaining results. The lowercase <code class="language-plaintext highlighter-rouge">-d</code> flag also ensures only fully merged branches get deleted (as opposed to the force-delete <code class="language-plaintext highlighter-rouge">-D</code>, which removes branches regardless of merge status):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>xargs <span class="nt">-n</span> 1 git branch <span class="nt">-d</span>
</code></pre></div></div>
<p>Afterwards, prune the local copy of the remote, to remove any references to remote branches that no longer exist:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git remote prune origin
</code></pre></div></div>
<p>Putting it all together, I ended up with this alias, which I then added to my <code class="language-plaintext highlighter-rouge">.bashrc</code> file:</p>
<h2 id="the-tldr">The TL;DR</h2>
<p><strong>git-clean-branches:</strong></p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">alias </span>git-clean-branches<span class="o">=</span><span class="s2">"git branch --merged master | grep -v -e 'master' -e '</span><span class="se">\*</span><span class="s2">' | xargs -n 1 git branch -d && git remote prune origin || echo 'No local branches to remove, so nothing done.'"</span>
</code></pre></div></div>
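<p>If you want to convince yourself that the deletion pipeline only ever removes merged branches, you can run it in a throwaway repository. A quick sketch (the prune step is left out, since there is no remote to prune against):</p>

```shell
set -e
cd "$(mktemp -d)"
git init -q
git symbolic-ref HEAD refs/heads/master   # make sure the default branch is "master"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'initial commit'
git branch merged-feature                 # same tip as master, so already "merged"
git checkout -q -b unmerged-feature
git commit -q --allow-empty -m 'work in progress'
git checkout -q master
# the deletion part of the alias (no remote here, so no prune)
git branch --merged master | grep -v -e 'master' -e '\*' | xargs -n 1 git branch -d
git branch                                # merged-feature is gone; unmerged-feature survives
```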
<p>Now, my clean-up ritual after completing a pull request is always the following:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gcm <span class="c"># this is an alias for "git checkout master"</span>
git pull <span class="c"># grab the latest master</span>
git-clean-branches <span class="c"># clean up all merged local branches and prune my local list of remote branches</span>
</code></pre></div></div>
<p>Automate all the things!</p>