diff --git a/assets/css/custom.css b/assets/css/custom.css index 57a45278..5c8be984 100644 --- a/assets/css/custom.css +++ b/assets/css/custom.css @@ -7,7 +7,9 @@ font-weight: 600; } -h2:hover a, h3:hover a, h4:hover a { +h2:hover a, +h3:hover a, +h4:hover a { visibility: visible; text-decoration: none; } diff --git a/config.yaml b/config.yaml index 967fb4de..565f21e0 100644 --- a/config.yaml +++ b/config.yaml @@ -4,8 +4,8 @@ languageCode: en-us googleAnalytics: G-V8WJDERTX9 copyright: Dave Gallant preserveTaxonomyNames: true -pygmentsstyle: "monokai" -pygmentscodefences: false +pygmentsstyle: nord +pygmentscodefences: true pygmentscodefencesguesssyntax: true theme: - archie diff --git a/content/post/running-k3s-in-lxc-on-proxmox/index.md b/content/post/running-k3s-in-lxc-on-proxmox/index.md index 0ae8fc59..298ad04b 100644 --- a/content/post/running-k3s-in-lxc-on-proxmox/index.md +++ b/content/post/running-k3s-in-lxc-on-proxmox/index.md @@ -29,7 +29,6 @@ flowchartDiagrams: sequenceDiagrams: enable: false options: "" - --- @@ -48,9 +47,9 @@ This [gist](https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca226418 There is an issue on Kubernetes regarding swap [here](https://github.com/kubernetes/kubernetes/issues/53533). Swap support is claimed to be coming in 1.22, but for now let's disable it: -``` -sysctl vm.swappiness=0 -swapoff -a +```shell +sudo sysctl vm.swappiness=0 +sudo swapoff -a ``` It might be worth experimenting with swap enabled in the future to see how that might affect performance. @@ -59,7 +58,7 @@ It might be worth experimenting with swap enabled in the future to see how that To avoid IP Forwarding issues with Traefik, run the following on the host: -```sh +```shell sudo sysctl net.ipv4.ip_forward=1 sudo sysctl net.ipv6.conf.all.forwarding=1 sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf @@ -81,7 +80,7 @@ Now back on the host run `pct list` to determine what VMID it was given. Open `/etc/pve/lxc/$VMID.conf` and append: -```sh +``` lxc.apparmor.profile: unconfined lxc.cap.drop: lxc.mount.auto: "proc:rw sys:rw" @@ -92,6 +91,7 @@ All of the above configurations are described in the [manpages](https://linuxcon Notice that `cgroup2` is used since Proxmox VE 7.0 has switched to a [pure cgroupv2 environment](https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup). Thankfully, cgroup v2 has been supported in k3s since these contributions: + - https://github.com/k3s-io/k3s/pull/2584 - https://github.com/k3s-io/k3s/pull/2844 @@ -99,7 +99,7 @@ Thankfully, cgroup v2 has been supported in k3s since these contributions: From within the container, run: -```sh +```shell echo '#!/bin/sh -e ln -s /dev/console /dev/kmsg mount --make-rshared /' > /etc/rc.local @@ -113,7 +113,7 @@ One of the simplest ways to install K3s on a remote host is to use [k3sup](https Ensure that you supply a valid `CONTAINER_IP` and choose the `k3s-version` you prefer. As of 2021/11, it is still defaulting to the 1.19 channel, so I overrode it to 1.22 for cgroup v2 support. See the published releases [here](https://github.com/k3s-io/k3s/releases). -```sh +```shell ssh-copy-id root@$CONTAINER_IP k3sup install --ip $CONTAINER_IP --user root --k3s-version v1.22.3+k3s1 ``` @@ -124,7 +124,6 @@ If all goes well, you should see a path to the `kubeconfig` generated. I moved t Installing K3s in LXC on Proxmox works with a few tweaks to the default configuration.
I later followed the Tekton [Getting Started](https://tekton.dev/docs/getting-started/) guide and was able to deploy it in a few commands. - ```console $ kubectl get all --namespace tekton-pipelines NAME READY STATUS RESTARTS AGE diff --git a/content/post/watching-youtube-in-private/index.md b/content/post/watching-youtube-in-private/index.md index d8a250d2..95434e46 100644 --- a/content/post/watching-youtube-in-private/index.md +++ b/content/post/watching-youtube-in-private/index.md @@ -56,13 +56,13 @@ A few days ago, yewtu.be went down briefly, and that motivated me enough to self The quickest way to get invidious up is with docker-compose as mentioned in the [docs](https://docs.invidious.io/installation/). -I made a few modifications (such as pinning the container's tag), and ended up with: +I made a few modifications, and ended up with: ```yaml version: "3" services: invidious: - image: quay.io/invidious/invidious:5160d8bae39dc5cc5d51abee90571a03c08d0f2b + image: quay.io/invidious/invidious restart: unless-stopped ports: - "0.0.0.0:3000:3000" diff --git a/public/404.html b/public/404.html index 2b996041..adc203ee 100644 --- a/public/404.html +++ b/public/404.html @@ -19,8 +19,8 @@ - - + + @@ -67,7 +67,7 @@ - + @@ -92,6 +92,8 @@ + + @@ -121,6 +123,7 @@ + diff --git a/public/about/index.html b/public/about/index.html index 8489bae1..2de33d1c 100644 --- a/public/about/index.html +++ b/public/about/index.html @@ -23,8 +23,8 @@ Feel free to reach out at me@davegallant.ca."/> - - + + @@ -71,7 +71,7 @@ Feel free to reach out at me@davegallant.ca."/> - + @@ -94,6 +94,8 @@ Feel free to reach out at me@davegallant.ca."/> + +
@@ -145,6 +147,7 @@ Feel free to reach out at me@davegallant.ca."/> + diff --git a/public/blog/2020/03/16/appgate-sdp-on-arch-linux/index.html b/public/blog/2020/03/16/appgate-sdp-on-arch-linux/index.html index f20e4b0a..85e89d08 100644 --- a/public/blog/2020/03/16/appgate-sdp-on-arch-linux/index.html +++ b/public/blog/2020/03/16/appgate-sdp-on-arch-linux/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,12 @@ + + + + + +
@@ -109,93 +115,81 @@ As of right now, the latest AUR is 4.2.2-1.

These steps highlight how to get it working with Python3.8 by making a one-line modification to the AppGate source code.

Packaging

We already know the community package is currently out of date, so let’s clone it:

-
git clone https://aur.archlinux.org/appgate-sdp.git
-cd appgate-sdp
-
-

You’ll likely notice that the version is not what we want, so let’s modify the PKGBUILD to the following:

-
# Maintainer: Pawel Mosakowski <pawel at mosakowski dot net>
-pkgname=appgate-sdp
-conflicts=('appgate-sdp-headless')
-pkgver=4.3.2
-_download_pkgver=4.3
-pkgrel=1
-epoch=
-pkgdesc="Software Defined Perimeter - GUI client"
-arch=('x86_64')
-url="https://www.cyxtera.com/essential-defense/appgate-sdp/support"
-license=('custom')
-# dependecies calculated by namcap
-depends=('gconf' 'libsecret' 'gtk3' 'python' 'nss' 'libxss' 'nodejs' 'dnsmasq')
-source=("https://sdpdownloads.cyxtera.com/AppGate-SDP-${_download_pkgver}/clients/${pkgname}_${pkgver}_amd64.deb"
-        "appgatedriver.service")
-options=(staticlibs)
-prepare() {
-    tar -xf data.tar.xz
-}
-package() {
-    cp -dpr "${srcdir}"/{etc,lib,opt,usr} "${pkgdir}"
-    mv -v "$pkgdir/lib/systemd/system" "$pkgdir/usr/lib/systemd/"
-    rm -vrf "$pkgdir/lib"
-    cp -v "$srcdir/appgatedriver.service" "$pkgdir/usr/lib/systemd/system/appgatedriver.service"
-    mkdir -vp "$pkgdir/usr/share/licenses/appgate-sdp"
-    cp -v "$pkgdir/usr/share/doc/appgate/copyright" "$pkgdir/usr/share/licenses/appgate-sdp"
-    cp -v "$pkgdir/usr/share/doc/appgate/LICENSE.github" "$pkgdir/usr/share/licenses/appgate-sdp"
-    cp -v "$pkgdir/usr/share/doc/appgate/LICENSES.chromium.html.bz2" "$pkgdir/usr/share/licenses/appgate-sdp"
-}
-md5sums=('17101aac7623c06d5fbb95f50cf3dbdc'
-         '002644116e20b2d79fdb36b7677ab4cf')
-
-
-

Let’s first make sure we have some dependencies. If you do not have yay, check it out.

-
yay -S dnsmasq gconf
-
-

Now, let’s install it:

-
makepkg -si
-
-

Running the client

+
git clone https://aur.archlinux.org/appgate-sdp.git
+cd appgate-sdp
+

You’ll likely notice that the version is not what we want, so let’s modify the PKGBUILD to the following:

+
# Maintainer: Pawel Mosakowski <pawel at mosakowski dot net>
+pkgname=appgate-sdp
+conflicts=('appgate-sdp-headless')
+pkgver=4.3.2
+_download_pkgver=4.3
+pkgrel=1
+epoch=
+pkgdesc="Software Defined Perimeter - GUI client"
+arch=('x86_64')
+url="https://www.cyxtera.com/essential-defense/appgate-sdp/support"
+license=('custom')
+# dependencies calculated by namcap
+depends=('gconf' 'libsecret' 'gtk3' 'python' 'nss' 'libxss' 'nodejs' 'dnsmasq')
+source=("https://sdpdownloads.cyxtera.com/AppGate-SDP-${_download_pkgver}/clients/${pkgname}_${pkgver}_amd64.deb"
+        "appgatedriver.service")
+options=(staticlibs)
+prepare() {
+    tar -xf data.tar.xz
+}
+package() {
+    cp -dpr "${srcdir}"/{etc,lib,opt,usr} "${pkgdir}"
+    mv -v "$pkgdir/lib/systemd/system" "$pkgdir/usr/lib/systemd/"
+    rm -vrf "$pkgdir/lib"
+    cp -v "$srcdir/appgatedriver.service" "$pkgdir/usr/lib/systemd/system/appgatedriver.service"
+    mkdir -vp "$pkgdir/usr/share/licenses/appgate-sdp"
+    cp -v "$pkgdir/usr/share/doc/appgate/copyright" "$pkgdir/usr/share/licenses/appgate-sdp"
+    cp -v "$pkgdir/usr/share/doc/appgate/LICENSE.github" "$pkgdir/usr/share/licenses/appgate-sdp"
+    cp -v "$pkgdir/usr/share/doc/appgate/LICENSES.chromium.html.bz2" "$pkgdir/usr/share/licenses/appgate-sdp"
+}
+md5sums=('17101aac7623c06d5fbb95f50cf3dbdc'
+         '002644116e20b2d79fdb36b7677ab4cf')
+

Let’s first make sure we have some dependencies. If you do not have yay, check it out.

+
yay -S dnsmasq gconf
+

Now, let’s install it:

+
makepkg -si
+

Running the client

Ok, let’s run the client by executing appgate.

It complains about not being able to connect.

Easy fix:

-
sudo systemctl start appgatedriver.service
-
-

Now we should be connected… but DNS is not working?

+
sudo systemctl start appgatedriver.service
+

Now we should be connected… but DNS is not working?

Fixing the DNS

Running resolvectl should display that something is not right.

Why is the DNS not being set by appgate?

-
$ head -3 /opt/appgate/linux/set_dns
-#!/usr/bin/env python3
-'''
-This is used to set and unset the DNS.
-
-

It seems like python3 is required for the DNS setting to happen.

$ head -3 /opt/appgate/linux/set_dns
+#!/usr/bin/env python3
+'''
+This is used to set and unset the DNS.
+

It seems like python3 is required for the DNS setting to happen. Let’s try to run it.

-
$ sudo /opt/appgate/linux/set_dns
-/opt/appgate/linux/set_dns:88: SyntaxWarning: "is" with a literal. Did you mean "=="?
-  servers = [( socket.AF_INET if x.version is 4 else socket.AF_INET6, map(int, x.packed)) for x in servers]
-Traceback (most recent call last):
-  File "/opt/appgate/linux/set_dns", line 30, in <module>
-    import dbus
-ModuleNotFoundError: No module named 'dbus'
-
-

Ok, let’s install it:

-
$ sudo python3.8 -m pip install dbus-python
-
-

Will it work now? Not yet. There’s another issue:

-
$ sudo /opt/appgate/linux/set_dns
-/opt/appgate/linux/set_dns:88: SyntaxWarning: "is" with a literal. Did you mean "=="?
-  servers = [( socket.AF_INET if x.version is 4 else socket.AF_INET6, map(int, x.packed)) for x in servers]
-module 'platform' has no attribute 'linux_distribution'
-
-

This is a breaking change in Python3.8.

+
$ sudo /opt/appgate/linux/set_dns
+/opt/appgate/linux/set_dns:88: SyntaxWarning: "is" with a literal. Did you mean "=="?
+  servers = [( socket.AF_INET if x.version is 4 else socket.AF_INET6, map(int, x.packed)) for x in servers]
+Traceback (most recent call last):
+  File "/opt/appgate/linux/set_dns", line 30, in <module>
+    import dbus
+ModuleNotFoundError: No module named 'dbus'
+

Ok, let’s install it:

+
$ sudo python3.8 -m pip install dbus-python
+

Will it work now? Not yet. There’s another issue:

+
$ sudo /opt/appgate/linux/set_dns
+/opt/appgate/linux/set_dns:88: SyntaxWarning: "is" with a literal. Did you mean "=="?
+  servers = [( socket.AF_INET if x.version is 4 else socket.AF_INET6, map(int, x.packed)) for x in servers]
+module 'platform' has no attribute 'linux_distribution'
+

This is a breaking change in Python3.8.

So what is calling platform.linux_distribution?

Let’s search for it:

-
$ sudo grep -r 'linux_distribution' /opt/appgate/linux/
-/opt/appgate/linux/nm.py:    if platform.linux_distribution()[0] != 'Fedora':
-
-

Aha! So this is in the local AppGate source code. This should be an easy fix. Let’s just replace this line with:

-
if True: # Since we are not using Fedora :)
-
-

Wrapping up

+
$ sudo grep -r 'linux_distribution' /opt/appgate/linux/
+/opt/appgate/linux/nm.py:    if platform.linux_distribution()[0] != 'Fedora':
+

Aha! So this is in the local AppGate source code. This should be an easy fix. Let’s just replace this line with:

+
if True: # Since we are not using Fedora :)
+

Wrapping up

It turns out there are breaking changes in Python3.8.

The docs say: "Deprecated since version 3.5, will be removed in version 3.8: See alternative like the distro package."

I suppose this highlights one of the caveats of relying upon the system’s python, rather than having an isolated, dedicated environment for all dependencies.
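For what it's worth, the suggested replacement is easy to verify from a shell (a sketch, assuming pip is available for the system python):

```shell
# The distro package provides what platform.linux_distribution() used to.
sudo python3.8 -m pip install distro
python3.8 -c 'import distro; print(distro.id(), distro.version())'
```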

@@ -271,6 +265,9 @@ module 'platform' has no attribute 'linux_distribution' + + + diff --git a/public/blog/2021/09/06/what-to-do-with-a-homelab/index.html b/public/blog/2021/09/06/what-to-do-with-a-homelab/index.html index 18195138..36731900 100644 --- a/public/blog/2021/09/06/what-to-do-with-a-homelab/index.html +++ b/public/blog/2021/09/06/what-to-do-with-a-homelab/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,8 @@ + +
@@ -221,6 +223,7 @@ Containers have much less overhead in terms of boot time and storage allocation. + diff --git a/public/blog/2021/09/08/why-i-threw-out-my-dotfiles/index.html b/public/blog/2021/09/08/why-i-threw-out-my-dotfiles/index.html index 939d6721..694a8ca9 100644 --- a/public/blog/2021/09/08/why-i-threw-out-my-dotfiles/index.html +++ b/public/blog/2021/09/08/why-i-threw-out-my-dotfiles/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,12 @@ + + + + + +
@@ -107,10 +113,9 @@

What is home-manager?#

Before understanding home-manager, it is worth briefly discussing what nix is. nix is a package manager that originally spawned from a PhD thesis. Unlike other package managers, it uses symbolic links to keep track of the currently installed packages, keeping around the old ones in case you want to roll back.

For example, I have used nix to install the package bind which includes dig. You can see that it is available on multiple platforms. The absolute path of dig can be found by running:

-
$ ls -lh $(which dig)
-lrwxr-xr-x 73 root 31 Dec  1969 /run/current-system/sw/bin/dig -> /nix/store/0r4qdyprljd3dki57jn6c6a8dh2rbg9g-bind-9.16.16-dnsutils/bin/dig
-
-

Notice that there is a hash included in the file path? This is a nix store path and is computed by the nix package manager. This nix pill does a good job explaining how this hash is computed. All of the nix pills are worth a read, if you are interested in learning more about nix itself. However, using home-manager does not require extensive knowledge of nix.

+
$ ls -lh $(which dig)
+lrwxr-xr-x 73 root 31 Dec  1969 /run/current-system/sw/bin/dig -> /nix/store/0r4qdyprljd3dki57jn6c6a8dh2rbg9g-bind-9.16.16-dnsutils/bin/dig
+

Notice that there is a hash included in the file path? This is a nix store path and is computed by the nix package manager. This nix pill does a good job explaining how this hash is computed. All of the nix pills are worth a read, if you are interested in learning more about nix itself. However, using home-manager does not require extensive knowledge of nix.
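Rolling back, mentioned above, is mostly a matter of flipping those symbolic links back to a previous generation; a minimal sketch:

```shell
# List this profile's generations, then return to the previous one.
nix-env --list-generations
nix-env --rollback
```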

Part of the nix ecosystem includes nixpkgs. Many popular tools can be found already packaged in this repository. As you can see with these stats, there is a large number of existing packages that are being maintained by the community. Contributing a new package is easy, and anyone can do it!

home-manager leverages the nix package manager (and nixpkgs), as well as the nix language, so that you can declaratively define your system configuration. I store my nix-config in git so that I can keep track of my packages and configurations, and retain a clean and informative git commit history so that I can understand what changed and why.

Setting up home-manager#

@@ -118,123 +123,119 @@ lrwxr-xr-x 73 root 31 Dec 1969 /run/current-system/sw/bin/dig -> /nix/store/

⚠️ If you run this on your main machine, make sure you backup your configuration files first. home-manager is pretty good about not overwriting existing configuration, but it is better to have a backup! Alternatively, you could test this out on a VM or cloud instance.

The first thing you should do is install nix:

-
curl -L https://nixos.org/nix/install | sh
-
-

It’s generally not a good idea to curl and execute files from the internet (without verifying integrity), so you might want to download the install script first and take a look before executing it!

+
curl -L https://nixos.org/nix/install | sh
+

It’s generally not a good idea to curl and execute files from the internet (without verifying integrity), so you might want to download the install script first and take a look before executing it!
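For instance, a more cautious version of the same install might look like this (a sketch):

```shell
# Download the installer, review it, then run it explicitly.
curl -L -o nix-install.sh https://nixos.org/nix/install
less nix-install.sh
sh nix-install.sh
```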

Open up a new shell in your terminal, and running nix should work. If not, run . ~/.nix-profile/etc/profile.d/nix.sh

Now, install home-manager:

-
nix-channel --add https://github.com/nix-community/home-manager/archive/master.tar.gz home-manager
-nix-channel --update
-nix-shell '<home-manager>' -A install
-
-

You should see a wave of /nix/store/* paths being displayed on your screen.

+
nix-channel --add https://github.com/nix-community/home-manager/archive/master.tar.gz home-manager
+nix-channel --update
+nix-shell '<home-manager>' -A install
+

You should see a wave of /nix/store/* paths being displayed on your screen.

Now, to start off with a basic configuration, open up ~/.config/nixpkgs/home.nix in the editor of your choice and paste this in (you will want to change userName and homeDirectory):

-
{ config, pkgs, ... }:
-
-{
-  programs.home-manager.enable = true;
-
-  home = {
-    username = "dave";
-    homeDirectory = "/home/dave";
-    stateVersion = "21.11";
-    packages = with pkgs; [
-      bind
-      exa
-      fd
-      ripgrep
-    ];
-  };
-
-  programs = {
-
-    git = {
-      enable = true;
-      aliases = {
-        aa = "add -A .";
-        br = "branch";
-        c = "commit -S";
-        ca = "commit -S --amend";
-        cb = "checkout -b";
-        co = "checkout";
-        d = "diff";
-        l =
-          "log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit";
-      };
-
-      delta = {
-        enable = true;
-
-        options = {
-          features = "line-numbers decorations";
-          whitespace-error-style = "22 reverse";
-          plus-style = "green bold ul '#198214'";
-          decorations = {
-            commit-decoration-style = "bold yellow box ul";
-            file-style = "bold yellow ul";
-            file-decoration-style = "none";
-          };
-        };
-      };
-
-      extraConfig = {
-        push = { default = "current"; };
-        pull = { rebase = true; };
-      };
-
-    };
-
-    starship = {
-      enable = true;
-      enableZshIntegration = true;
-
-      settings = {
-        add_newline = false;
-        scan_timeout = 10;
-      };
-    };
-
-    zsh = {
-      enable = true;
-      enableAutosuggestions = true;
-      enableSyntaxHighlighting = true;
-      history.size = 1000000;
-
-      localVariables = {
-        CASE_SENSITIVE = "true";
-        DISABLE_UNTRACKED_FILES_DIRTY = "true";
-        RPROMPT = ""; # override because macOS defaults to filepath
-        ZSH_AUTOSUGGEST_HIGHLIGHT_STYLE = "fg=#838383,underline";
-        ZSH_DISABLE_COMPFIX = "true";
-      };
-
-      initExtra = ''
-        export PAGER=less
-      '';
-
-      shellAliases = {
-        ".." = "cd ..";
-        grep = "rg --smart-case";
-        ls = "exa -la --git";
-      };
-
-      "oh-my-zsh" = {
-        enable = true;
-        plugins = [
-          "gitfast"
-          "last-working-dir"
-        ];
-      };
-
-    };
-
-  };
-}
-
-

Save the file and run:

-
home-manager switch
-
-

You should see another wave of /nix/store/* paths. The new configuration should now be active.

+
{ config, pkgs, ... }:
+
+{
+  programs.home-manager.enable = true;
+
+  home = {
+    username = "dave";
+    homeDirectory = "/home/dave";
+    stateVersion = "21.11";
+    packages = with pkgs; [
+      bind
+      exa
+      fd
+      ripgrep
+    ];
+  };
+
+  programs = {
+
+    git = {
+      enable = true;
+      aliases = {
+        aa = "add -A .";
+        br = "branch";
+        c = "commit -S";
+        ca = "commit -S --amend";
+        cb = "checkout -b";
+        co = "checkout";
+        d = "diff";
+        l =
+          "log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit";
+      };
+
+      delta = {
+        enable = true;
+
+        options = {
+          features = "line-numbers decorations";
+          whitespace-error-style = "22 reverse";
+          plus-style = "green bold ul '#198214'";
+          decorations = {
+            commit-decoration-style = "bold yellow box ul";
+            file-style = "bold yellow ul";
+            file-decoration-style = "none";
+          };
+        };
+      };
+
+      extraConfig = {
+        push = { default = "current"; };
+        pull = { rebase = true; };
+      };
+
+    };
+
+    starship = {
+      enable = true;
+      enableZshIntegration = true;
+
+      settings = {
+        add_newline = false;
+        scan_timeout = 10;
+      };
+    };
+
+    zsh = {
+      enable = true;
+      enableAutosuggestions = true;
+      enableSyntaxHighlighting = true;
+      history.size = 1000000;
+
+      localVariables = {
+        CASE_SENSITIVE = "true";
+        DISABLE_UNTRACKED_FILES_DIRTY = "true";
+        RPROMPT = ""; # override because macOS defaults to filepath
+        ZSH_AUTOSUGGEST_HIGHLIGHT_STYLE = "fg=#838383,underline";
+        ZSH_DISABLE_COMPFIX = "true";
+      };
+
+      initExtra = ''
+        export PAGER=less
+      '';
+
+      shellAliases = {
+        ".." = "cd ..";
+        grep = "rg --smart-case";
+        ls = "exa -la --git";
+      };
+
+      "oh-my-zsh" = {
+        enable = true;
+        plugins = [
+          "gitfast"
+          "last-working-dir"
+        ];
+      };
+
+    };
+
+  };
+}
+

Save the file and run:

+
home-manager switch
+

You should see another wave of /nix/store/* paths. The new configuration should now be active.

If you run zsh, you should see that you have starship and access to several other utils such as rg, fd, and exa.

The basic configuration above also defines your ~/.config/git/config and .zshrc. If you already have either of these files, home-manager will complain about them already existing.

If you run cat ~/.zshrc, you will see the way these configuration files are generated.
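And if a switch ever goes wrong, previous generations are kept around (a sketch):

```shell
# Each generation lists an activation script that can be re-run to roll back.
home-manager generations
```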

@@ -315,6 +316,9 @@ nix-shell '<home-manager>' -A install + + + diff --git a/public/blog/2021/09/17/automatically-rotating-aws-access-keys/index.html b/public/blog/2021/09/17/automatically-rotating-aws-access-keys/index.html index bbd8b70b..c99b4c5f 100644 --- a/public/blog/2021/09/17/automatically-rotating-aws-access-keys/index.html +++ b/public/blog/2021/09/17/automatically-rotating-aws-access-keys/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,8 @@ + +
@@ -179,6 +181,7 @@ + diff --git a/public/blog/2021/10/11/replacing-docker-with-podman-on-macos-and-linux/index.html b/public/blog/2021/10/11/replacing-docker-with-podman-on-macos-and-linux/index.html index aadfe4ad..aad2daf6 100644 --- a/public/blog/2021/10/11/replacing-docker-with-podman-on-macos-and-linux/index.html +++ b/public/blog/2021/10/11/replacing-docker-with-podman-on-macos-and-linux/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,12 @@ + + + + + +
@@ -119,74 +125,65 @@

I’ve also observed that so far my 2019 16" Macbook Pro hasn’t sounded like a jet engine, although I haven’t performed any disk-intensive operations yet.

Installing Podman#

Running Podman on macOS is more involved than on Linux, because the podman-machine must run Linux inside a virtual machine. Fortunately, the installation is made simple with brew (read this if you’re installing Podman on Linux):

-
brew install podman
-
-

The podman-machine must be started:

-
# This is not necessary on Linux
-podman machine init
-podman machine start
-
-

Running a container#

+
brew install podman
+

The podman-machine must be started:

+
# This is not necessary on Linux
+podman machine init
+podman machine start
+

Running a container#

Let’s try to pull an image:

-
$ podman pull alpine
-Trying to pull docker.io/library/alpine:latest...
-Getting image source signatures
-Copying blob sha256:a0d0a0d46f8b52473982a3c466318f479767577551a53ffc9074c9fa7035982e
-Copying config sha256:14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab
-Writing manifest to image destination
-Storing signatures
-14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab
-
-
+
$ podman pull alpine
+Trying to pull docker.io/library/alpine:latest...
+Getting image source signatures
+Copying blob sha256:a0d0a0d46f8b52473982a3c466318f479767577551a53ffc9074c9fa7035982e
+Copying config sha256:14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab
+Writing manifest to image destination
+Storing signatures
+14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab
+

If you’re having an issue pulling images, you may need to remove ~/.docker/config.json or remove the set of auths in the configuration as mentioned here.
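For example, backing the file up rather than deleting it outright (a sketch):

```shell
# Keep a copy in case other tooling still reads the Docker CLI config.
mv ~/.docker/config.json ~/.docker/config.json.bak
```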

and then run and exec into the container:

-
$ podman run --rm -ti alpine
-Error: error preparing container 99ace1ef8a78118e178372d91fd182e8166c399fbebe0f676af59fbf32ce205b for attach: error configuring network namespace for container 99ace1ef8a78118e178372d91fd182e8166c399fbebe0f676af59fbf32ce205b: error adding pod unruffled_bohr_unruffled_bohr to CNI network "podman": unexpected end of JSON input
-
-

What does this error mean? A bit of searching led to this GitHub issue.

+
$ podman run --rm -ti alpine
+Error: error preparing container 99ace1ef8a78118e178372d91fd182e8166c399fbebe0f676af59fbf32ce205b for attach: error configuring network namespace for container 99ace1ef8a78118e178372d91fd182e8166c399fbebe0f676af59fbf32ce205b: error adding pod unruffled_bohr_unruffled_bohr to CNI network "podman": unexpected end of JSON input
+

What does this error mean? A bit of searching led to this GitHub issue.

Until the fix is released, a workaround is to just specify a port (even when it’s not needed):

-
podman run -p 4242 --rm -ti alpine
-
-

If you’re reading this from the future, there is a good chance specifying a port won’t be needed.

+
podman run -p 4242 --rm -ti alpine
+

If you’re reading this from the future, there is a good chance specifying a port won’t be needed.

Another example of running a container with Podman can be found in the Jellyfin Documentation.

Aliasing docker with podman#

Force of habit (or other scripts) may have you calling docker. To work around this:

-
alias docker=podman
-
-

podman-compose#

+
alias docker=podman
+

podman-compose#

You may be wondering: what about docker-compose? Well, podman-compose claims to be a drop-in replacement for it.

-
pip3 install --user podman-compose
-
-

Now let’s create a docker-compose.yml file to test:

-
cat << EOF >> docker-compose.yml
-version: '2'
-services:
-  hello_world:
-    image: ubuntu
-    command: [/bin/echo, 'Hello world']
-EOF
-
-

Now run:

-
$ podman-compose up
-podman pod create --name=davegallant.github.io --share net
-40d61dc6e95216c07d2b21cea6dcb30205bfcaf1260501fe652f05bddf7e595e
-0
-podman create --name=davegallant.github.io_hello_world_1 --pod=davegallant.github.io -l io.podman.compose.config-hash=123 -l io.podman.compose.project=davegallant.github.io -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=hello_world --add-host hello_world:127.0.0.1 --add-host davegallant.github.io_hello_world_1:127.0.0.1 ubuntu /bin/echo Hello world
-Resolved "ubuntu" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
-Trying to pull docker.io/library/ubuntu:latest...
-Getting image source signatures
-Copying blob sha256:f3ef4ff62e0da0ef761ec1c8a578f3035bef51043e53ae1b13a20b3e03726d17
-Copying blob sha256:f3ef4ff62e0da0ef761ec1c8a578f3035bef51043e53ae1b13a20b3e03726d17
-Copying config sha256:597ce1600cf4ac5f449b66e75e840657bb53864434d6bd82f00b172544c32ee2
-Writing manifest to image destination
-Storing signatures
-1a68b2fed3fdf2037b7aef16d770f22929eec1d799219ce30541df7876918576
-0
-podman start -a davegallant.github.io_hello_world_1
-Hello world
-
-

This should more or less provide the same results you would come to expect with docker. The README does clearly state that podman-compose is under development.

+
pip3 install --user podman-compose
+

Now let’s create a docker-compose.yml file to test:

+
cat << EOF >> docker-compose.yml
+version: '2'
+services:
+  hello_world:
+    image: ubuntu
+    command: [/bin/echo, 'Hello world']
+EOF
+

Now run:

+
$ podman-compose up
+podman pod create --name=davegallant.github.io --share net
+40d61dc6e95216c07d2b21cea6dcb30205bfcaf1260501fe652f05bddf7e595e
+0
+podman create --name=davegallant.github.io_hello_world_1 --pod=davegallant.github.io -l io.podman.compose.config-hash=123 -l io.podman.compose.project=davegallant.github.io -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=hello_world --add-host hello_world:127.0.0.1 --add-host davegallant.github.io_hello_world_1:127.0.0.1 ubuntu /bin/echo Hello world
+Resolved "ubuntu" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
+Trying to pull docker.io/library/ubuntu:latest...
+Getting image source signatures
+Copying blob sha256:f3ef4ff62e0da0ef761ec1c8a578f3035bef51043e53ae1b13a20b3e03726d17
+Copying blob sha256:f3ef4ff62e0da0ef761ec1c8a578f3035bef51043e53ae1b13a20b3e03726d17
+Copying config sha256:597ce1600cf4ac5f449b66e75e840657bb53864434d6bd82f00b172544c32ee2
+Writing manifest to image destination
+Storing signatures
+1a68b2fed3fdf2037b7aef16d770f22929eec1d799219ce30541df7876918576
+0
+podman start -a davegallant.github.io_hello_world_1
+Hello world
+

This should more or less provide the same results you would come to expect with docker. The README does clearly state that podman-compose is under development.

Summary#

Installing Podman on macOS was not seamless, but it was manageable well within 30 minutes. I would recommend giving Podman a try to anyone who is unhappy with forced docker updates, or who is interested in using a more modern technology for running containers.

One caveat to mention is that there isn’t an official graphical user interface for Podman, but there is an open issue considering one. If you rely heavily on Docker Desktop’s UI, you may not be as interested in using podman yet.

@@ -266,6 +263,9 @@ Hello world + + + diff --git a/public/blog/2021/11/14/running-k3s-in-lxc-on-proxmox/index.html b/public/blog/2021/11/14/running-k3s-in-lxc-on-proxmox/index.html index 836d743d..2c1752ff 100644 --- a/public/blog/2021/11/14/running-k3s-in-lxc-on-proxmox/index.html +++ b/public/blog/2021/11/14/running-k3s-in-lxc-on-proxmox/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,12 @@ + + + + + +
@@ -110,18 +116,16 @@

This gist contains snippets and discussion on how to deploy K3s in LXC on Proxmox. It mentions that bridge-nf-call-iptables should be loaded, but I did not understand the benefit of doing this.

Disable swap#

There is an issue on Kubernetes regarding swap here. Swap support is claimed to be coming in 1.22, but for now let’s disable it:

-
sysctl vm.swappiness=0
-swapoff -a
-
-

It might be worth experimenting with swap enabled in the future to see how that might affect performance.

+
sudo sysctl vm.swappiness=0
+sudo swapoff -a
+

It might be worth experimenting with swap enabled in the future to see how that might affect performance.
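A quick sanity check that swap is actually off (a sketch):

```shell
# The Swap row should read 0B across the board after swapoff.
free -h
```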

Enable IP Forwarding#

To avoid IP Forwarding issues with Traefik, run the following on the host:

-
sudo sysctl net.ipv4.ip_forward=1
-sudo sysctl net.ipv6.conf.all.forwarding=1
-sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
-sudo sed -i 's/#net.ipv6.conf.all.forwarding=1/net.ipv6.conf.all.forwarding=1/g' /etc/sysctl.conf
-
-

Create LXC container#

+
sudo sysctl net.ipv4.ip_forward=1
+sudo sysctl net.ipv6.conf.all.forwarding=1
+sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
+sudo sed -i 's/#net.ipv6.conf.all.forwarding=1/net.ipv6.conf.all.forwarding=1/g' /etc/sysctl.conf
+

Create LXC container#

Create an LXC container in the Proxmox interface as you normally would. Remember to:

  • Uncheck unprivileged container
  • @@ -132,12 +136,11 @@ sudo sed -i 's/#net.ipv6.conf.all.forwarding=1/net.ipv6.conf.all.forwarding=1/g'

    Modify container config#

    Now back on the host run pct list to determine what VMID it was given.

    Open /etc/pve/lxc/$VMID.conf and append:

    -
    lxc.apparmor.profile: unconfined
    -lxc.cap.drop:
    -lxc.mount.auto: "proc:rw sys:rw"
    -lxc.cgroup2.devices.allow: c 10:200 rwm
    -
    -

All of the above configurations are described in the manpages.

    lxc.apparmor.profile: unconfined
    +lxc.cap.drop:
    +lxc.mount.auto: "proc:rw sys:rw"
    +lxc.cgroup2.devices.allow: c 10:200 rwm
    +

    All of the above configurations are described in the manpages. Notice that cgroup2 is used since Proxmox VE 7.0 has switched to a pure cgroupv2 environment.

Thankfully, cgroup v2 has been supported in k3s since these contributions:

      @@ -146,47 +149,44 @@ Notice that cgroup2 is used since Proxmox VE 7.0 has switched to a

    Enable shared host mounts#

    From within the container, run:

    -
    echo '#!/bin/sh -e
    -ln -s /dev/console /dev/kmsg
    -mount --make-rshared /' > /etc/rc.local
    -chmod +x /etc/rc.local
    -reboot
    -
    -

    Install K3s#

    +
    echo '#!/bin/sh -e
    +ln -s /dev/console /dev/kmsg
    +mount --make-rshared /' > /etc/rc.local
    +chmod +x /etc/rc.local
    +reboot
    +

    Install K3s#

    One of the simplest ways to install K3s on a remote host is to use k3sup. Ensure that you supply a valid CONTAINER_IP and choose the k3s-version you prefer. As of 2021/11, it is still defaulting to the 1.19 channel, so I overrode it to 1.22 for cgroup v2 support. See the published releases here.

    -
    ssh-copy-id root@$CONTAINER_IP
    -k3sup install --ip $CONTAINER_IP --user root --k3s-version v1.22.3+k3s1
    -
    -

    If all goes well, you should see a path to the kubeconfig generated. I moved this into ~/.kube/config so that kubectl would read this by default.

    +
    ssh-copy-id root@$CONTAINER_IP
    +k3sup install --ip $CONTAINER_IP --user root --k3s-version v1.22.3+k3s1
    +

    If all goes well, you should see a path to the kubeconfig generated. I moved this into ~/.kube/config so that kubectl would read this by default.
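Assuming the kubeconfig landed in the current directory, something like the following verifies the cluster is reachable (a sketch):

```shell
# Move the generated kubeconfig to the default location and test connectivity.
mkdir -p ~/.kube
mv kubeconfig ~/.kube/config
kubectl get nodes -o wide
```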

    Wrapping up#

Installing K3s in LXC on Proxmox works with a few tweaks to the default configuration. I later followed the Tekton Getting Started guide and was able to deploy it in a few commands.

    -
    $ kubectl get all --namespace tekton-pipelines
    -NAME                                               READY   STATUS    RESTARTS      AGE
    -pod/tekton-pipelines-webhook-8566ff9b6b-6rnh8      1/1     Running   1 (50m ago)   12h
    -pod/tekton-dashboard-6bf858f977-qt4hr              1/1     Running   1 (50m ago)   11h
    -pod/tekton-pipelines-controller-69fd7498d8-f57m4   1/1     Running   1 (50m ago)   12h
    -
    -NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                              AGE
    -service/tekton-pipelines-controller   ClusterIP   10.43.44.245    <none>        9090/TCP,8080/TCP                    12h
    -service/tekton-pipelines-webhook      ClusterIP   10.43.183.242   <none>        9090/TCP,8008/TCP,443/TCP,8080/TCP   12h
    -service/tekton-dashboard              ClusterIP   10.43.87.97     <none>        9097/TCP                             11h
    -
    -NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
    -deployment.apps/tekton-pipelines-webhook      1/1     1            1           12h
    -deployment.apps/tekton-dashboard              1/1     1            1           11h
    -deployment.apps/tekton-pipelines-controller   1/1     1            1           12h
    -
    -NAME                                                     DESIRED   CURRENT   READY   AGE
    -replicaset.apps/tekton-pipelines-webhook-8566ff9b6b      1         1         1       12h
    -replicaset.apps/tekton-dashboard-6bf858f977              1         1         1       11h
    -replicaset.apps/tekton-pipelines-controller-69fd7498d8   1         1         1       12h
    -
    -NAME                                                           REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    -horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook   Deployment/tekton-pipelines-webhook   9%/100%   1         5         1          12h
    -
    -

    I made sure to install Tailscale in the container so that I can easily access K3s from anywhere.

    +
    $ kubectl get all --namespace tekton-pipelines
    +NAME                                               READY   STATUS    RESTARTS      AGE
    +pod/tekton-pipelines-webhook-8566ff9b6b-6rnh8      1/1     Running   1 (50m ago)   12h
    +pod/tekton-dashboard-6bf858f977-qt4hr              1/1     Running   1 (50m ago)   11h
    +pod/tekton-pipelines-controller-69fd7498d8-f57m4   1/1     Running   1 (50m ago)   12h
    +
    +NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                              AGE
    +service/tekton-pipelines-controller   ClusterIP   10.43.44.245    <none>        9090/TCP,8080/TCP                    12h
    +service/tekton-pipelines-webhook      ClusterIP   10.43.183.242   <none>        9090/TCP,8008/TCP,443/TCP,8080/TCP   12h
    +service/tekton-dashboard              ClusterIP   10.43.87.97     <none>        9097/TCP                             11h
    +
    +NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
    +deployment.apps/tekton-pipelines-webhook      1/1     1            1           12h
    +deployment.apps/tekton-dashboard              1/1     1            1           11h
    +deployment.apps/tekton-pipelines-controller   1/1     1            1           12h
    +
    +NAME                                                     DESIRED   CURRENT   READY   AGE
    +replicaset.apps/tekton-pipelines-webhook-8566ff9b6b      1         1         1       12h
    +replicaset.apps/tekton-dashboard-6bf858f977              1         1         1       11h
    +replicaset.apps/tekton-pipelines-controller-69fd7498d8   1         1         1       12h
    +
    +NAME                                                           REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    +horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook   Deployment/tekton-pipelines-webhook   9%/100%   1         5         1          12h
    +

    I made sure to install Tailscale in the container so that I can easily access K3s from anywhere.

    If I’m feeling adventurous, I might experiment with K3s rootless.

    + diff --git a/public/blog/2022/03/13/backing-up-gmail-with-synology/index.html b/public/blog/2022/03/13/backing-up-gmail-with-synology/index.html index 94e68742..cc2379c4 100644 --- a/public/blog/2022/03/13/backing-up-gmail-with-synology/index.html +++ b/public/blog/2022/03/13/backing-up-gmail-with-synology/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,8 @@ + +
    @@ -199,6 +201,7 @@ Encrypting your shared volumes should also be done, since unfortunately 2023 Dave Gallant + diff --git a/public/blog/2022/04/02/virtualizing-my-router-with-pfsense/index.html b/public/blog/2022/04/02/virtualizing-my-router-with-pfsense/index.html index b99e7799..6178fc9c 100644 --- a/public/blog/2022/04/02/virtualizing-my-router-with-pfsense/index.html +++ b/public/blog/2022/04/02/virtualizing-my-router-with-pfsense/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,8 @@ + +
    @@ -221,6 +223,7 @@ When setting up the machine, I mostly went with all of the defaults. Configurati + diff --git a/public/blog/2022/12/10/watching-youtube-in-private/index.html b/public/blog/2022/12/10/watching-youtube-in-private/index.html index ebd8b232..473e0fd1 100644 --- a/public/blog/2022/12/10/watching-youtube-in-private/index.html +++ b/public/blog/2022/12/10/watching-youtube-in-private/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,12 @@ + + + + + +
    @@ -110,49 +116,48 @@

A few days ago, yewtu.be went down briefly, and that motivated me enough to self-host invidious. There are several other hosted instances listed here, but being able to easily back up my own instance (including subscriptions and watch history) is more compelling in my case.

    Hosting invidious#

    The quickest way to get invidious up is with docker-compose as mentioned in the docs.

    -

    I made a few modifications (such as pinning the container’s tag), and ended up with:

    -
    version: "3"
    -services:
    -  invidious:
    -    image: quay.io/invidious/invidious:5160d8bae39dc5cc5d51abee90571a03c08d0f2b
    -    restart: unless-stopped
    -    ports:
    -      - "0.0.0.0:3000:3000"
    -    environment:
    -      INVIDIOUS_CONFIG: |
    -        db:
    -          dbname: invidious
    -          user: kemal
    -          password: kemal
    -          host: invidious-db
    -          port: 5432
    -        check_tables: true
    -    healthcheck:
    -      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
    -      interval: 30s
    -      timeout: 5s
    -      retries: 2
    -    depends_on:
    -      - invidious-db
    -
    -  invidious-db:
    -    image: docker.io/library/postgres:14
    -    restart: unless-stopped
    -    volumes:
    -      - postgresdata:/var/lib/postgresql/data
    -      - ./config/sql:/config/sql
    -      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    -    environment:
    -      POSTGRES_DB: invidious
    -      POSTGRES_USER: kemal
    -      POSTGRES_PASSWORD: kemal
    -    healthcheck:
    -      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
    -
    -volumes:
    -  postgresdata:
    -
    -

    After invidious was up and running, I installed Tailscale on it to leverage its MagicDNS, and I’m now able to access this instance from anywhere at http://invidious:3000/feed/subscriptions.

    +

    I made a few modifications, and ended up with:

    +
    version: "3"
    +services:
    +  invidious:
    +    image: quay.io/invidious/invidious
    +    restart: unless-stopped
    +    ports:
    +      - "0.0.0.0:3000:3000"
    +    environment:
    +      INVIDIOUS_CONFIG: |
    +        db:
    +          dbname: invidious
    +          user: kemal
    +          password: kemal
    +          host: invidious-db
    +          port: 5432
    +        check_tables: true        
    +    healthcheck:
    +      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
    +      interval: 30s
    +      timeout: 5s
    +      retries: 2
    +    depends_on:
    +      - invidious-db
    +
    +  invidious-db:
    +    image: docker.io/library/postgres:14
    +    restart: unless-stopped
    +    volumes:
    +      - postgresdata:/var/lib/postgresql/data
    +      - ./config/sql:/config/sql
    +      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    +    environment:
    +      POSTGRES_DB: invidious
    +      POSTGRES_USER: kemal
    +      POSTGRES_PASSWORD: kemal
    +    healthcheck:
    +      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
    +
    +volumes:
    +  postgresdata:
    +

    After invidious was up and running, I installed Tailscale on it to leverage its MagicDNS, and I’m now able to access this instance from anywhere at http://invidious:3000/feed/subscriptions.
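Since easy backups were part of the motivation, here is a minimal sketch of dumping the database, reusing the service name and credentials from the compose file above:

```shell
# Dump the invidious Postgres database from the running stack (-T avoids a TTY).
docker compose exec -T invidious-db pg_dump -U kemal invidious > invidious-backup.sql
```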

    I figured it would be nice to redirect existing YouTube links that others send me, so that I could seamlessly watch the videos using invidious.

    I went looking for a way to redirect paths at the browser level. I found the lightweight proxy requestly, which can be used to modify http requests in my browser. I created the following rules:

    @@ -239,6 +244,9 @@ volumes: + + + diff --git a/public/blog/2023/05/22/using-aks-and-socks-to-connect-to-a-private-azure-db/index.html b/public/blog/2023/05/22/using-aks-and-socks-to-connect-to-a-private-azure-db/index.html index 08661c12..2ac832a4 100644 --- a/public/blog/2023/05/22/using-aks-and-socks-to-connect-to-a-private-azure-db/index.html +++ b/public/blog/2023/05/22/using-aks-and-socks-to-connect-to-a-private-azure-db/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,12 @@ + + + + + +
    @@ -124,15 +130,14 @@ If this sounds more appealing, install kubectl-plugin-socks5-proxy that I was convinced that using SOCKS could be made simple.

So how does it work? By installing the kubectl plugin and then running kubectl socks5-proxy, a SOCKS proxy server is spun up in a pod, and a port-forwarding session is opened using kubectl.

    As you can see below, this k8s plugin is wrapped up nicely:

    -
    $ kubectl socks5-proxy
    -using: namespace=default
    -using: port=1080
    -using: name=davegallant-proxy
    -using: image=serjs/go-socks5-proxy
    -Creating SOCKS5 Proxy (Pod)...
    -pod/davegallant-proxy created
    -
    -

    With the above proxy connection open, it is possible to access both the DNS and private IPs accessible within the k8s cluster. In this case, I am able to access the private database, since there is network connectivity between the k8s cluster and the database.

    +
    $ kubectl socks5-proxy
    +using: namespace=default
    +using: port=1080
    +using: name=davegallant-proxy
    +using: image=serjs/go-socks5-proxy
    +Creating SOCKS5 Proxy (Pod)...
    +pod/davegallant-proxy created
    +

    With the above proxy connection open, it is possible to access both the DNS and private IPs accessible within the k8s cluster. In this case, I am able to access the private database, since there is network connectivity between the k8s cluster and the database.
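For example, any SOCKS-aware client can now resolve and reach cluster-internal names through the tunnel (the service name below is hypothetical):

```shell
# --socks5-hostname resolves DNS on the far side of the proxy.
curl --socks5-hostname localhost:1080 http://my-private-service.default.svc.cluster.local
```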

    Caveats and Conclusion#

    The above outlined solution makes some assumptions:

      @@ -232,6 +237,9 @@ pod/davegallant-proxy created + + + diff --git a/public/blog/2023/12/10/setting-up-gitea-actions-with-tailscale/index.html b/public/blog/2023/12/10/setting-up-gitea-actions-with-tailscale/index.html index 43f45fb2..76afdf97 100644 --- a/public/blog/2023/12/10/setting-up-gitea-actions-with-tailscale/index.html +++ b/public/blog/2023/12/10/setting-up-gitea-actions-with-tailscale/index.html @@ -20,8 +20,8 @@ - - + + @@ -68,7 +68,7 @@ - + @@ -91,6 +91,12 @@ + + + + + +
      @@ -122,68 +128,65 @@

My preferred approach to deploying code in a homelab environment is with docker compose. I have deployed this in a proxmox lxc container based on debian with a hostname gitea. This could be deployed in any environment and with any hostname (as long as you update the tailscale machine name to your preferred subdomain for magic dns).

    The docker-compose.yaml file looks like:

    -
    version: "3.7"
    -services:
    -  gitea:
    -    image: gitea/gitea:1.21.1
    -    container_name: gitea
    -    environment:
    -      - USER_UID=1000
    -      - USER_GID=1000
    -
    -      - GITEA__server__DOMAIN=gitea.my-tailnet-name.ts.net
    -      - GITEA__server__ROOT_URL=https://gitea.my-tailnet-name.ts.net
    -      - GITEA__server__HTTP_ADDR=0.0.0.0
    -      - GITEA__server__LFS_JWT_SECRET=my-secret-jwt
    -    restart: always
    -    volumes:
    -      - ./data:/data
    -      - /etc/timezone:/etc/timezone:ro
    -      - /etc/localtime:/etc/localtime:ro
    -  traefik:
    -    image: traefik:v3.0.0-beta4
    -    container_name: traefik
    -    security_opt:
    -      - no-new-privileges:true
    -    restart: unless-stopped
    -    ports:
    -      - 80:80
    -      - 443:443
    -    volumes:
    -      - ./traefik/data/traefik.yaml:/traefik.yaml:ro
    -      - ./traefik/data/dynamic.yaml:/dynamic.yaml:ro
    -      - /var/run/tailscale/tailscaled.sock:/var/run/tailscale/tailscaled.sock
    -
    -

    traefik/data/traefik.yaml:

    -
    entryPoints:
    -  https:
    -    address: ":443"
    -providers:
    -  file:
    -    filename: dynamic.yaml
    -certificatesResolvers:
    -  myresolver:
    -    tailscale: {}
    -log:
    -  level: INFO
    -
    -

    and finally traefik/data/dynamic/dynamic.yaml:

    -
    http:
    -  routers:
    -    gitea:
    -      rule: Host(`gitea.my-tailnet-name.ts.net`)
    -      entrypoints:
    -        - "https"
    -      service: gitea
    -      tls:
    -        certResolver: myresolver
    -  services:
    -    gitea:
    -      loadBalancer:
    -        servers:
    -          - url: "http://gitea:3000"
    -
    -

    Something to consider is whether or not you want to use ssh with git. One method to get this to work with containers is to use ssh container passthrough. I decided to keep it simple and not use ssh, since communicating over https is perfectly fine for my use case.

    +
    version: "3.7"
    +services:
    +  gitea:
    +    image: gitea/gitea:1.21.1
    +    container_name: gitea
    +    environment:
    +      - USER_UID=1000
    +      - USER_GID=1000
    +
    +      - GITEA__server__DOMAIN=gitea.my-tailnet-name.ts.net
    +      - GITEA__server__ROOT_URL=https://gitea.my-tailnet-name.ts.net
    +      - GITEA__server__HTTP_ADDR=0.0.0.0
    +      - GITEA__server__LFS_JWT_SECRET=my-secret-jwt
    +    restart: always
    +    volumes:
    +      - ./data:/data
    +      - /etc/timezone:/etc/timezone:ro
    +      - /etc/localtime:/etc/localtime:ro
    +  traefik:
    +    image: traefik:v3.0.0-beta4
    +    container_name: traefik
    +    security_opt:
    +      - no-new-privileges:true
    +    restart: unless-stopped
    +    ports:
    +      - 80:80
    +      - 443:443
    +    volumes:
    +      - ./traefik/data/traefik.yaml:/traefik.yaml:ro
    +      - ./traefik/data/dynamic.yaml:/dynamic.yaml:ro
    +      - /var/run/tailscale/tailscaled.sock:/var/run/tailscale/tailscaled.sock
    +

    traefik/data/traefik.yaml:

    +
    entryPoints:
    +  https:
    +    address: ":443"
    +providers:
    +  file:
    +    filename: dynamic.yaml
    +certificatesResolvers:
    +  myresolver:
    +    tailscale: {}
    +log:
    +  level: INFO
    +

    and finally traefik/data/dynamic/dynamic.yaml:

    +
    http:
    +  routers:
    +    gitea:
    +      rule: Host(`gitea.my-tailnet-name.ts.net`)
    +      entrypoints:
    +        - "https"
    +      service: gitea
    +      tls:
    +        certResolver: myresolver
    +  services:
    +    gitea:
    +      loadBalancer:
    +        servers:
    +          - url: "http://gitea:3000"
    +

    Something to consider is whether or not you want to use ssh with git. One method to get this to work with containers is to use ssh container passthrough. I decided to keep it simple and not use ssh, since communicating over https is perfectly fine for my use case.

    After adding the above configuration, running docker compose up -d should be enough to get an instance up and running. It will be accessible at https://gitea.my-tailnet-name.ts.net from within the tailnet.
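From any machine on the tailnet, cloning over https then looks like this (the repository path is a placeholder):

```shell
git clone https://gitea.my-tailnet-name.ts.net/myuser/myrepo.git
```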

    Connecting a Runner#

I installed the runner by following the docs, opting to install it on a separate host (another lxc container) as recommended. I used the systemd unit file to ensure that the runner comes back online after system reboots. I installed tailscale on this act runner as well, so that it can have the same “networking privileges” as the main instance.
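Registration against the instance is roughly the following (the token comes from Gitea’s admin UI; values are placeholders):

```shell
# Register the runner with the Gitea instance over the tailnet.
./act_runner register \
  --instance https://gitea.my-tailnet-name.ts.net \
  --token <registration-token> \
  --no-interactive
```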

    @@ -193,64 +196,63 @@ log:

Now it’s time to start running some automation. I used the demo workflow as a starting point to verify that the runner is executing workflows.

    After this, I wanted to make sure that some of my existing workflows could be migrated over.

The following workflow uses a matrix to run a job for several of my hosts, using ansible playbooks that do various tasks such as applying os updates and updating container images.

    -
    name: Run ansible
    -on:
    -  push:
    -  schedule:
    -    - cron: "0 */12 * * *"
    -
    -jobs:
    -  run-ansible-playbook:
    -    runs-on: ubuntu-latest
    -    strategy:
    -      matrix:
    -        host:
    -          - changedetection
    -          - homelab
    -          - invidious
    -          - jackett
    -          - ladder
    -          - miniflux
    -          - plex
    -          - qbittorrent
    -          - tailscale-exit-node
    -          - uptime-kuma
    -    steps:
    -      - name: Check out repository code
    -        uses: actions/checkout@v4
    -      - name: Install ansible
    -        run: |
    -          apt update && apt install ansible -y
    -      - name: Run playbook
    -        uses: dawidd6/action-ansible-playbook@v2
    -        with:
    -          playbook: playbooks/main.yml
    -          requirements: requirements.yml
    -          key: ${{ secrets.SSH_PRIVATE_KEY}}
    -          options: |
    -            --inventory inventory
    -            --ssh-extra-args "-o StrictHostKeyChecking=no"
    -            --limit ${{ matrix.host }}
    -  send-failure-notification:
    -    needs: run-ansible-playbook
    -    runs-on: ubuntu-latest
    -    if: always() && failure()
    -    steps:
    -      - name: Send failure notification
    -        uses: dawidd6/action-send-mail@v3
    -        with:
    -          server_address: smtp.gmail.com
    -          server_port: 465
    -          secure: true
    -          username: myuser
    -          password: ${{ secrets.MAIL_PASSWORD }}
    -          subject: gitea job ${{github.repository}} failed!
    -          to: me@davegallant.ca
    -          from: Gitea
    -          body: |
    -            ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_number }}
    -
    -

    And voilà:

    +
    name: Run ansible
    +on:
    +  push:
    +  schedule:
    +    - cron: "0 */12 * * *"
    +
    +jobs:
    +  run-ansible-playbook:
    +    runs-on: ubuntu-latest
    +    strategy:
    +      matrix:
    +        host:
    +          - changedetection
    +          - homelab
    +          - invidious
    +          - jackett
    +          - ladder
    +          - miniflux
    +          - plex
    +          - qbittorrent
    +          - tailscale-exit-node
    +          - uptime-kuma
    +    steps:
    +      - name: Check out repository code
    +        uses: actions/checkout@v4
    +      - name: Install ansible
    +        run: |
    +          apt update && apt install ansible -y          
    +      - name: Run playbook
    +        uses: dawidd6/action-ansible-playbook@v2
    +        with:
    +          playbook: playbooks/main.yml
    +          requirements: requirements.yml
    +          key: ${{ secrets.SSH_PRIVATE_KEY}}
    +          options: |
    +            --inventory inventory
    +            --ssh-extra-args "-o StrictHostKeyChecking=no"
    +            --limit ${{ matrix.host }}            
    +  send-failure-notification:
    +    needs: run-ansible-playbook
    +    runs-on: ubuntu-latest
    +    if: always() && failure()
    +    steps:
    +      - name: Send failure notification
    +        uses: dawidd6/action-send-mail@v3
    +        with:
    +          server_address: smtp.gmail.com
    +          server_port: 465
    +          secure: true
    +          username: myuser
    +          password: ${{ secrets.MAIL_PASSWORD }}
    +          subject: gitea job ${{github.repository}} failed!
    +          to: me@davegallant.ca
    +          from: Gitea
    +          body: |
    +            ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_number }}            
    +

    And voilà: