Redesign home page

This commit is contained in:
Dave Gallant
2024-01-07 22:42:41 -05:00
parent 6a405662e9
commit fad71f3265
31 changed files with 28 additions and 41 deletions

content/blog/_index.md

@@ -0,0 +1 @@
[RSS Feed](https://davegallant.ca/index.xml)


@@ -0,0 +1,155 @@
---
title: "AppGate SDP on Arch Linux"
date: 2020-03-16T22:00:15-04:00
draft: false
comments: true
tags: ['linux', 'vpn', 'python']
author: "Dave Gallant"
---
AppGate SDP provides a Zero Trust network. This post describes how to get AppGate SDP `4.3.2` working on Arch Linux.
<!--more-->
Depending on the AppGate SDP Server that is running, you may require a client that is more recent than the latest package on [AUR](https://aur.archlinux.org/packages/appgate-sdp/).
As of right now, the latest AUR package is `4.2.2-1`.
These steps highlight how to get it working with Python 3.8 by making a one-line modification to the AppGate source code.
# Packaging
We already know the community package is currently out of date, so let's clone it:
```shell
git clone https://aur.archlinux.org/appgate-sdp.git
cd appgate-sdp
```
You'll likely notice that the version is not what we want, so let's modify the `PKGBUILD` to the following:
```shell
# Maintainer: Pawel Mosakowski <pawel at mosakowski dot net>
pkgname=appgate-sdp
conflicts=('appgate-sdp-headless')
pkgver=4.3.2
_download_pkgver=4.3
pkgrel=1
epoch=
pkgdesc="Software Defined Perimeter - GUI client"
arch=('x86_64')
url="https://www.cyxtera.com/essential-defense/appgate-sdp/support"
license=('custom')
# dependencies calculated by namcap
depends=('gconf' 'libsecret' 'gtk3' 'python' 'nss' 'libxss' 'nodejs' 'dnsmasq')
source=("https://sdpdownloads.cyxtera.com/AppGate-SDP-${_download_pkgver}/clients/${pkgname}_${pkgver}_amd64.deb"
        "appgatedriver.service")
options=(staticlibs)

prepare() {
    tar -xf data.tar.xz
}

package() {
    cp -dpr "${srcdir}"/{etc,lib,opt,usr} "${pkgdir}"

    mv -v "$pkgdir/lib/systemd/system" "$pkgdir/usr/lib/systemd/"
    rm -vrf "$pkgdir/lib"

    cp -v "$srcdir/appgatedriver.service" "$pkgdir/usr/lib/systemd/system/appgatedriver.service"

    mkdir -vp "$pkgdir/usr/share/licenses/appgate-sdp"
    cp -v "$pkgdir/usr/share/doc/appgate/copyright" "$pkgdir/usr/share/licenses/appgate-sdp"
    cp -v "$pkgdir/usr/share/doc/appgate/LICENSE.github" "$pkgdir/usr/share/licenses/appgate-sdp"
    cp -v "$pkgdir/usr/share/doc/appgate/LICENSES.chromium.html.bz2" "$pkgdir/usr/share/licenses/appgate-sdp"
}

md5sums=('17101aac7623c06d5fbb95f50cf3dbdc'
         '002644116e20b2d79fdb36b7677ab4cf')
```
Let's first make sure we have the required dependencies. If you do not have [yay](https://github.com/Jguer/yay), check it out.
```shell
yay -S dnsmasq gconf
```
Now, let's install it:
```shell
makepkg -si
```
# Running the client
Ok, let's run the client by executing `appgate`.
It complains about not being able to connect.
Easy fix:
```shell
sudo systemctl start appgatedriver.service
```
Now we should be connected... but DNS is not working?
# Fixing the DNS
Running `resolvectl` should display that something is not right.
Why is the DNS not being set by appgate?
```shell
$ head -3 /opt/appgate/linux/set_dns
#!/usr/bin/env python3
'''
This is used to set and unset the DNS.
```
It seems that Python 3 is required for the DNS setting to happen.
Let's try to run it.
```shell
$ sudo /opt/appgate/linux/set_dns
/opt/appgate/linux/set_dns:88: SyntaxWarning: "is" with a literal. Did you mean "=="?
servers = [( socket.AF_INET if x.version is 4 else socket.AF_INET6, map(int, x.packed)) for x in servers]
Traceback (most recent call last):
File "/opt/appgate/linux/set_dns", line 30, in <module>
import dbus
ModuleNotFoundError: No module named 'dbus'
```
Ok, let's install it:
```shell
$ sudo python3.8 -m pip install dbus-python
```
Will it work now? Not yet. There's another issue:
```shell
$ sudo /opt/appgate/linux/set_dns
/opt/appgate/linux/set_dns:88: SyntaxWarning: "is" with a literal. Did you mean "=="?
servers = [( socket.AF_INET if x.version is 4 else socket.AF_INET6, map(int, x.packed)) for x in servers]
module 'platform' has no attribute 'linux_distribution'
```
This is due to a breaking change in Python 3.8.
So what is calling `platform.linux_distribution`?
Let's search for it:
```shell
$ sudo grep -r 'linux_distribution' /opt/appgate/linux/
/opt/appgate/linux/nm.py: if platform.linux_distribution()[0] != 'Fedora':
```
Aha! So this is in the local AppGate source code. This should be an easy fix. Let's just replace this line with:
```python
if True: # Since we are not using Fedora :)
```
# Wrapping up
It turns out there are [breaking changes](https://docs.python.org/3.7/library/platform.html#platform.linux_distribution) in Python 3.8.
The docs say: `Deprecated since version 3.5, will be removed in version 3.8: See alternatives like the distro package.`
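If you'd rather patch `nm.py` faithfully than hardcode `True`, the `distro` package the docs point to exposes the same information. A minimal sketch (assuming pip is available for the system interpreter):
```shell
sudo python3.8 -m pip install distro
python3.8 -c 'import distro; print(distro.id())'  # prints e.g. "arch"
```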
I suppose this highlights one of the caveats of relying upon the system's python, rather than having an isolated, dedicated environment for all dependencies.


@@ -0,0 +1,14 @@
---
title: "Automatically rotating AWS access keys"
date: 2021-09-17T12:48:33-04:00
lastmod: 2021-09-17T12:48:33-04:00
draft: false
comments: true
tags: ['aws', 'python', 'security', 'aws-vault']
author: "Dave Gallant"
---
Rotating credentials is a security best practice. This morning, I read a question about automatically rotating AWS Access Keys without having to go through the hassle of navigating the AWS console. There are some existing solutions already, but I decided to write a [script](https://gist.github.com/davegallant/2c042686a78684a657fe99e20fa7a924#file-aws_access_key_rotator-py) since the task is incredibly simple. The script could be packaged as a systemd/launchd service to continually rotate access keys in the background.
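The linked script is Python, but the idea is small enough to sketch with the AWS CLI (illustrative only, not the script itself; it assumes a single existing key on the calling user):
```shell
# Grab the current key id, mint a replacement, then retire the old one
OLD_KEY_ID=$(aws iam list-access-keys --query 'AccessKeyMetadata[0].AccessKeyId' --output text)
aws iam create-access-key --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text
# ...write the new key pair to ~/.aws/credentials and verify it works, then:
aws iam delete-access-key --access-key-id "$OLD_KEY_ID"
```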
In the longer term, migrating my local workflows to [aws-vault](https://github.com/99designs/aws-vault) seems like a more secure solution. This would mean that credentials (even temporary session credentials) never have to be written to disk in plaintext (i.e. the location where [AWS suggests](https://docs.aws.amazon.com/sdkref/latest/guide/file-location.html) storing them). Any existing applications, such as terraform, could have their credentials passed to them from aws-vault, which retrieves them from the OS's secure keystore. There is even a [rotate command](https://github.com/99designs/aws-vault/blob/master/USAGE.md#rotating-credentials) included.
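For example, a typical aws-vault session (profile name hypothetical):
```shell
aws-vault add my-profile                     # store the key in the OS keystore
aws-vault exec my-profile -- terraform plan  # run a command with temporary credentials
aws-vault rotate my-profile                  # rotate the underlying access key
```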


@@ -0,0 +1,50 @@
---
title: "Backing up gmail with Synology"
date: 2022-03-13T18:49:10-04:00
lastmod: 2022-03-13T18:49:10-04:00
comments: true
draft: false
tags: ["synology", "gmail", "backup", "ransomware"]
author: "Dave Gallant"
---
I've used gmail since the beta launched touting a whopping 1GB of storage. I thought this was a massive leap in email technology at the time. I was lucky enough to get an invite fairly quickly. Not surprisingly, I have many years of emails, attachments, and photos. I certainly do not want to lose the content of many of these emails. Despite the redundancy of the data that Google secures, I still feel better retaining a copy of this data on my own physical machines.
<!--more-->
The thought of completely de-googling has crossed my mind on occasion. Convenience, coupled with my admiration for Google engineering, has prevented me from doing so thus far. Though, I may end up doing so at some point in the future.
## Synology MailPlus Server
Synology products are reasonably priced for what you get (essentially a cloud-in-a-box) and there is very little maintenance required. I've recently been interested in syncing and snapshotting my personal data, so I've set up [Synology's Cloud Sync](https://www.synology.com/en-ca/dsm/feature/cloud_sync) and keep copies of most of my cloud data.
I've used tools such as [gmvault](http://www.gmvault.org) with success in the past. Setting this up on a cron seems like a viable option. However, I don't really need a lot of the features it offers and do not plan to restore this data to another account.
Synology's MailPlus seems to be a good candidate for backing up this data. By enabling POP3 fetching, it's possible to fetch all existing emails, as well as periodically fetch all new emails. If a disaster ever did occur, having these emails would be beneficial, as they are an extension of my memory bank.
Installing MailPlus can be done from the Package Center:
![image](install-mailplus-server.png)
Next, I went into **Synology MailPlus Server** and on the left, clicked on **Account** and ensured my user was marked as active.
Afterwards, I followed [these instructions](https://kb.synology.com/en-in/DSM/tutorial/How_should_I_receive_external_email_messages_via_MailPlus) in order to start backing up emails.
When entering the POP3 credentials, I created an [app password](https://myaccount.google.com/apppasswords) solely for authenticating to POP3 from the Synology device. This is required because I have 2-Step Verification enabled on my account. There doesn't seem to be a more secure way to access POP3 at the moment, though app password access does seem limited in scope (when MFA is enabled): these app passwords can't be used to log in to the main Google account.
I made sure to set the `Fetch Range` to `All` in order to get all emails from the beginning of time.
After this, mail started coming in.
![image](mail-plus-incoming-mail.png)
After fetching 19 years worth of emails, I tried searching for some emails. It only took a few seconds to search through ~50K emails, which is a relief if I ever did have to search for something important.
## Securing Synology
Since Synology devices are not hermetically sealed, it's best to secure them by [enabling MFA](https://kb.synology.com/en-us/DSM/tutorial/How_to_add_extra_security_to_your_Synology_NAS#x_anchor_id8) to help prevent being the [victim of ransomware](https://www.bleepingcomputer.com/news/security/qlocker-ransomware-returns-to-target-qnap-nas-devices-worldwide/). It is also wise to backup your system settings and volumes to the cloud using a tool such as [Hyper Backup](https://www.synology.com/en-ca/dsm/feature/hyper_backup).
Encrypting your shared volumes should also be done, since unfortunately [DSM does not support full disk encryption](https://community.synology.com/enu/forum/12/post/144665).
## Summary
Having backups of various forms of cloud data is a good investment, especially in [times of war](https://en.wikipedia.org/wiki/2022_Ukraine_cyberattacks). I certainly feel more at ease for having backed up my emails.


@@ -0,0 +1,147 @@
---
title: "Replacing docker with podman on macOS (and Linux)"
date: 2021-10-11T10:43:35-04:00
lastmod: 2021-10-11T10:43:35-04:00
draft: false
comments: true
tags: ["docker", "podman", "containers"]
author: "Dave Gallant"
---
There are a number of reasons why you might want to replace docker, especially on macOS. The following feature bundled in Docker Desktop might have motivated you enough to consider replacing docker:
<!--more-->
{{< tweet user="moyix" id="1388586550682861568" >}}
Docker has been one of the larger influencers in the container world, helping to standardize the [OCI Image Format Specification](https://github.com/opencontainers/image-spec/blob/main/spec.md). For many developers, containers have become synonymous with terms like `docker` and `Dockerfile` (a file containing build instructions for a container image). Docker has certainly made it very convenient to build and run containers, but it is not the only solution for doing so.
This post briefly describes my experience swapping out docker for podman on macOS.
### What is a container?
A container is a standard unit of software that packages up all application dependencies within it. Multiple containers can be run on a host machine all sharing the same kernel as the host. Linux namespaces help provide an isolated view of the system, including mnt, pid, net, ipc, uid, cgroup, and time. There is an [in-depth video](https://www.youtube.com/watch?v=sK5i-N34im8) that discusses what containers are made from, and [near the end](https://youtu.be/sK5i-N34im8?t=2468) there is a demonstration on how to build your own containers from the command line.
By easily allowing the necessary dependencies to live alongside the application code, containers make the "works on my machine" problem less of a problem.
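As a tiny illustration of the namespace point (separate from the video), `unshare` on Linux can start a process in fresh PID and mount namespaces:
```sh
# ps only sees processes inside the new PID namespace
sudo unshare --pid --fork --mount-proc ps aux
```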
### Benefits of Podman
One of the most interesting features of Podman is that it is daemonless. There isn't a process running on your system managing your containers. In contrast, the docker client is reliant upon the docker daemon (often running as root) to be able to build and run containers.
Podman is rootless by default. It is now possible to [run the docker daemon rootless](https://docs.docker.com/engine/security/rootless/) as well, but it's still not the default behaviour.
I've also observed that so far my 2019 16" MacBook Pro hasn't sounded like a jet engine, although I haven't performed any disk-intensive operations yet.
### Installing Podman
Running Podman on macOS is more involved than on Linux, because the podman-machine must run Linux inside of a virtual machine. Fortunately, the installation is made simple with [brew](https://formulae.brew.sh/formula/podman) (read [this](https://podman.io/getting-started/installation#linux-distributions) if you're installing Podman on Linux):
```sh
brew install podman
```
The podman-machine must be started:
```sh
# This is not necessary on Linux
podman machine init
podman machine start
```
### Running a container
Let's try to pull an image:
```console
$ podman pull alpine
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob sha256:a0d0a0d46f8b52473982a3c466318f479767577551a53ffc9074c9fa7035982e
Copying config sha256:14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab
Writing manifest to image destination
Storing signatures
14119a10abf4669e8cdbdff324a9f9605d99697215a0d21c360fe8dfa8471bab
```
> If you're having an issue pulling images, you may need to remove `~/.docker/config.json` or remove the set of auths in the configuration as mentioned [here](https://stackoverflow.com/a/69121873/1191286).
and then run and exec into the container:
```console
$ podman run --rm -ti alpine
Error: error preparing container 99ace1ef8a78118e178372d91fd182e8166c399fbebe0f676af59fbf32ce205b for attach: error configuring network namespace for container 99ace1ef8a78118e178372d91fd182e8166c399fbebe0f676af59fbf32ce205b: error adding pod unruffled_bohr_unruffled_bohr to CNI network "podman": unexpected end of JSON input
```
What does this error mean? A bit of searching led to [this github issue](https://github.com/containers/podman/issues/11837).
Until the fix is released, a workaround is to just specify a port (even when it's not needed):
```sh
podman run -p 4242 --rm -ti alpine
```
If you're reading this from the future, there is a good chance specifying a port won't be needed.
Another example of running a container with Podman can be found in the [Jellyfin Documentation](https://jellyfin.org/docs/general/administration/installing.html#podman).
### Aliasing docker with podman
Force of habit (or other scripts) may have you calling `docker`. To work around this:
```sh
alias docker=podman
```
### podman-compose
You may be wondering: what about docker-compose? Well, there _claims_ to be a drop-in replacement for it: [podman-compose](https://github.com/containers/podman-compose).
```sh
pip3 install --user podman-compose
```
Now let's create a `docker-compose.yml` file to test:
```sh
cat << EOF >> docker-compose.yml
version: '2'
services:
  hello_world:
    image: ubuntu
    command: [/bin/echo, 'Hello world']
EOF
```
Now run:
```console
$ podman-compose up
podman pod create --name=davegallant.github.io --share net
40d61dc6e95216c07d2b21cea6dcb30205bfcaf1260501fe652f05bddf7e595e
0
podman create --name=davegallant.github.io_hello_world_1 --pod=davegallant.github.io -l io.podman.compose.config-hash=123 -l io.podman.compose.project=davegallant.github.io -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=hello_world --add-host hello_world:127.0.0.1 --add-host davegallant.github.io_hello_world_1:127.0.0.1 ubuntu /bin/echo Hello world
Resolved "ubuntu" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/ubuntu:latest...
Getting image source signatures
Copying blob sha256:f3ef4ff62e0da0ef761ec1c8a578f3035bef51043e53ae1b13a20b3e03726d17
Copying blob sha256:f3ef4ff62e0da0ef761ec1c8a578f3035bef51043e53ae1b13a20b3e03726d17
Copying config sha256:597ce1600cf4ac5f449b66e75e840657bb53864434d6bd82f00b172544c32ee2
Writing manifest to image destination
Storing signatures
1a68b2fed3fdf2037b7aef16d770f22929eec1d799219ce30541df7876918576
0
podman start -a davegallant.github.io_hello_world_1
Hello world
```
This should more or less provide the same results you would come to expect with docker. The README does clearly state that podman-compose is under development.
### Summary
Installing Podman on macOS was not seamless, but it was manageable well within 30 minutes. I would recommend giving Podman a try to anyone who is unhappy with experiencing forced docker updates, or who is interested in using a more modern technology for running containers.
One caveat to mention is that there isn't an official graphical user interface for Podman, but there is an [open issue](https://github.com/containers/podman/issues/11494) considering one. If you rely heavily on Docker Desktop's UI, you may not be as interested in using podman yet.
> Update: After further usage, bind mounts do not seem to work out of the box when the client and host are on different machines. A rather involved solution using [sshfs](https://en.wikipedia.org/wiki/SSHFS) was shared [here](https://github.com/containers/podman/issues/8016#issuecomment-920015800).
I had been experimenting with Podman on Linux before writing this, but after listening to this [podcast episode](https://kubernetespodcast.com/episode/164-podman/), I was inspired to give Podman a try on macOS.


@@ -0,0 +1,130 @@
---
title: "Running K3s in LXC on Proxmox"
date: 2021-11-14T10:07:03-05:00
lastmod: 2021-11-14T10:07:03-05:00
draft: false
comments: true
tags: ["k3s", "proxmox", "lxc", "self-hosted"]
author: "Dave Gallant"
---
It has been a while since I've actively used Kubernetes, and I wanted to explore the evolution of tools such as [Helm](https://helm.sh) and [Tekton](https://tekton.dev). I decided to deploy [K3s](https://k3s.io), since I've had success with deploying it on resource-constrained Raspberry Pis in the past. I thought that this time it'd be convenient to have K3s running in an LXC container on Proxmox. This would allow for easy snapshotting of the entire Kubernetes deployment. LXC containers also provide an efficient way to use a machine's resources.
## What is K3s?
K3s is a Kubernetes distro that advertises itself as a lightweight binary with a much smaller memory footprint than traditional k8s. K3s is not a fork of k8s; it seeks to remain as close to upstream as it possibly can.
## Configure Proxmox
This [gist](https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185) contains snippets and discussion on how to deploy K3s in LXC on Proxmox. It mentions that `bridge-nf-call-iptables` should be enabled, but I did not understand the benefit of doing this.
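For completeness, enabling it on the host amounts to loading `br_netfilter` and flipping the sysctl (a sketch, in case you want to follow the gist on this point; I did not):
```shell
sudo modprobe br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables=1
```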
## Disable swap
There is an issue on Kubernetes regarding swap [here](https://github.com/kubernetes/kubernetes/issues/53533). Swap support is claimed in 1.22, but for now let's disable it:
```shell
sudo sysctl vm.swappiness=0
sudo swapoff -a
```
It might be worth experimenting with swap enabled in the future to see how that might affect performance.
### Enable IP Forwarding
To avoid IP Forwarding issues with Traefik, run the following on the host:
```shell
sudo sysctl net.ipv4.ip_forward=1
sudo sysctl net.ipv6.conf.all.forwarding=1
sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
sudo sed -i 's/#net.ipv6.conf.all.forwarding=1/net.ipv6.conf.all.forwarding=1/g' /etc/sysctl.conf
```
## Create LXC container
Create an LXC container in the Proxmox interface as you normally would. Remember to:
- Uncheck `unprivileged container`
- Use a LXC template (I chose a debian 11 template downloaded with [pveam](https://pve.proxmox.com/wiki/Linux_Container#Create_container))
- In memory, set swap to 0
- Create and start the container
### Modify container config
Now, back on the host, run `pct list` to determine which VMID it was given.
Open `/etc/pve/lxc/$VMID.conf` and append:
```
lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
lxc.cgroup2.devices.allow: c 10:200 rwm
```
All of the above configurations are described in the [manpages](https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html).
Notice that `cgroup2` is used since Proxmox VE 7.0 has switched to a [pure cgroupv2 environment](https://pve.proxmox.com/pve-docs/chapter-pct.html#pct_cgroup).
Thankfully, cgroup v2 has been supported in k3s since these contributions:
- https://github.com/k3s-io/k3s/pull/2584
- https://github.com/k3s-io/k3s/pull/2844
## Enable shared host mounts
From within the container, run:
```shell
echo '#!/bin/sh -e
ln -s /dev/console /dev/kmsg
mount --make-rshared /' > /etc/rc.local
chmod +x /etc/rc.local
reboot
```
## Install K3s
One of the simplest ways to install K3s on a remote host is to use [k3sup](https://github.com/alexellis/k3sup).
Ensure that you supply a valid `CONTAINER_IP` and choose the `k3s-version` you prefer.
As of 2021/11, it is still defaulting to the 1.19 channel, so I overrode it to 1.22 for cgroup v2 support. See the published releases [here](https://github.com/k3s-io/k3s/releases).
```shell
ssh-copy-id root@$CONTAINER_IP
k3sup install --ip $CONTAINER_IP --user root --k3s-version v1.22.3+k3s1
```
If all goes well, you should see a path to the generated `kubeconfig`. I moved this into `~/.kube/config` so that kubectl would read it by default.
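Assuming k3sup dropped `kubeconfig` in the current directory, that move is just:
```shell
mkdir -p ~/.kube
mv ./kubeconfig ~/.kube/config
kubectl get nodes  # sanity check
```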
## Wrapping up
Installing K3s in LXC on Proxmox works with a few tweaks to the default configuration. I later followed the Tekton's [Getting Started](https://tekton.dev/docs/getting-started/) guide and was able to deploy it in a few commands.
```console
$ kubectl get all --namespace tekton-pipelines
NAME                                               READY   STATUS    RESTARTS      AGE
pod/tekton-pipelines-webhook-8566ff9b6b-6rnh8      1/1     Running   1 (50m ago)   12h
pod/tekton-dashboard-6bf858f977-qt4hr              1/1     Running   1 (50m ago)   11h
pod/tekton-pipelines-controller-69fd7498d8-f57m4   1/1     Running   1 (50m ago)   12h

NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                              AGE
service/tekton-pipelines-controller   ClusterIP   10.43.44.245    <none>        9090/TCP,8080/TCP                    12h
service/tekton-pipelines-webhook      ClusterIP   10.43.183.242   <none>        9090/TCP,8008/TCP,443/TCP,8080/TCP   12h
service/tekton-dashboard              ClusterIP   10.43.87.97     <none>        9097/TCP                             11h

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tekton-pipelines-webhook      1/1     1            1           12h
deployment.apps/tekton-dashboard              1/1     1            1           11h
deployment.apps/tekton-pipelines-controller   1/1     1            1           12h

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/tekton-pipelines-webhook-8566ff9b6b      1         1         1       12h
replicaset.apps/tekton-dashboard-6bf858f977              1         1         1       11h
replicaset.apps/tekton-pipelines-controller-69fd7498d8   1         1         1       12h

NAME                                                           REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook   Deployment/tekton-pipelines-webhook   9%/100%   1         5         1          12h
```
I made sure to install Tailscale in the container so that I can easily access K3s from anywhere.
If I'm feeling adventurous, I might experiment with [K3s rootless](https://rancher.com/docs/k3s/latest/en/advanced/#running-k3s-with-rootless-mode-experimental).


@@ -0,0 +1,217 @@
---
title: "Setting up Gitea Actions with Tailscale"
date: 2023-12-10T17:22:11-05:00
comments: true
lastmod: 2023-12-10T17:22:11-05:00
draft: false
description: ""
tags: ["gitea", "gitea actions", "github actions", "tailscale", "self-hosted"]
author: "Dave Gallant"
---
In this post I'll go through the process of setting up Gitea Actions and [Tailscale](https://tailscale.com/), unlocking a simple and secure way to automate workflows.
<!--more-->
## What is Gitea?
[Gitea](https://about.gitea.com/) is a lightweight and fast git server that has much of the same look and feel as github. I have been using it in my homelab to mirror repositories hosted on other platforms such as github and gitlab. These mirrors take advantage of the decentralized nature of git by serving as "backups". One of the main reasons I hadn't been using it more often was due to the lack of integrated CI/CD. This is no longer the case.
## Gitea Actions
[Gitea Actions](https://docs.gitea.com/usage/actions/overview) have made it into the [1.19.0 release](https://blog.gitea.com/release-of-1.19.0/). This feature had been in an experimental state up until [1.21.0](https://blog.gitea.com/release-of-1.21.0/) and is now enabled by default 🎉.
So what are they? If you've ever used GitHub Actions (and if you're reading this, I imagine you have), these will look familiar. Gitea Actions essentially make it possible to run github workflows on gitea. Workflows between gitea and github are not completely interoperable, but a lot of the same workflow syntax is already compatible on gitea. You can find a documented list of [unsupported workflows syntax](https://docs.gitea.com/usage/actions/comparison#unsupported-workflows-syntax).
Actions work by using a [custom fork](https://gitea.com/gitea/act) of [nektos/act](https://github.com/nektos/act). Workflows run in a new container for every job. If you specify an action such as `actions/checkout@v4`, it defaults to downloading the scripts from github.com. To avoid internet egress, you could always clone the required actions to your local gitea instance.
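For example, mirroring `actions/checkout` into a local gitea organization (the target URL and org here are hypothetical) might look like:
```shell
git clone --mirror https://github.com/actions/checkout
cd checkout.git
git push --mirror https://gitea.my-tailnet-name.ts.net/actions/checkout.git
```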
Actions (gitea's implementation) has me excited because it makes spinning up a network-isolated environment for workflow automation incredibly simple.
## Integration with Tailscale
So how does Tailscale help here? Well, more recently I've been exposing my self-hosted services through a combination of traefik and tailscale (through the tailscale-traefik proxy integration described [here](https://traefik.io/blog/exploring-the-tailscale-traefik-proxy-integration/)). This allows for a nice-looking dns name (i.e. gitea.my-tailnet-name.ts.net) and automatic tls certificate management. I can also share this tailscale node securely with other tailscale users without configuring any firewall rules on my router.
## Deploying Gitea, Traefik, and Tailscale
In my case, the following is already set up:
- [docker-compose is installed](https://docs.docker.com/compose/install/linux/)
- [tailscale is installed on the gitea host](https://tailscale.com/kb/1017/install/)
- [tailscale magic dns is enabled](https://tailscale.com/kb/1081/magicdns/)
My preferred approach to deploying code in a homelab environment is with docker compose. I have deployed this in a [proxmox lxc container](https://pve.proxmox.com/wiki/Linux_Container) based on debian with a hostname `gitea`. This could be deployed in any environment and with any hostname (as long as you update the tailscale machine name to your preferred subdomain for magic dns).
The `docker-compose.yaml` file looks like:
```yaml
version: "3.7"
services:
  gitea:
    image: gitea/gitea:1.21.1
    container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__server__DOMAIN=gitea.my-tailnet-name.ts.net
      - GITEA__server__ROOT_URL=https://gitea.my-tailnet-name.ts.net
      - GITEA__server__HTTP_ADDR=0.0.0.0
      - GITEA__server__LFS_JWT_SECRET=my-secret-jwt
    restart: always
    volumes:
      - ./data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro

  traefik:
    image: traefik:v3.0.0-beta4
    container_name: traefik
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./traefik/data/traefik.yaml:/traefik.yaml:ro
      - ./traefik/data/dynamic.yaml:/dynamic.yaml:ro
      - /var/run/tailscale/tailscaled.sock:/var/run/tailscale/tailscaled.sock
```
`traefik/data/traefik.yaml`:
```yaml
entryPoints:
  https:
    address: ":443"

providers:
  file:
    filename: dynamic.yaml

certificatesResolvers:
  myresolver:
    tailscale: {}

log:
  level: INFO
```
and finally `traefik/data/dynamic.yaml`:
```yaml
http:
  routers:
    gitea:
      rule: Host(`gitea.my-tailnet-name.ts.net`)
      entrypoints:
        - "https"
      service: gitea
      tls:
        certResolver: myresolver
  services:
    gitea:
      loadBalancer:
        servers:
          - url: "http://gitea:3000"
```
Something to consider is whether or not you want to use ssh with git. One method to get this to work with containers is to use [ssh container passthrough](https://docs.gitea.com/installation/install-with-docker#ssh-container-passthrough). I decided to keep it simple and not use ssh, since communicating over https is perfectly fine for my use case.
After adding the above configuration, running `docker compose up -d` should be enough to get an instance up and running. It will be accessible at [https://gitea.my-tailnet-name.ts.net](https://gitea.my-tailnet-name.ts.net) from within the tailnet.
## Theming
I discovered some themes for gitea [here](https://git.sainnhe.dev/sainnhe/gitea-themes) and decided to try out gruvbox.
I added the theme by cloning [theme-gruvbox-auto.css](https://git.sainnhe.dev/sainnhe/gitea-themes/raw/branch/master/dist/theme-gruvbox-auto.css) into `./data/gitea/public/assets/css`. I then added the following to `environment` in `docker-compose.yml`:
```yaml
- GITEA__ui__DEFAULT_THEME=gruvbox-auto
- GITEA__ui__THEMES=gruvbox-auto
```
After restarting the gitea instance, the default theme was applied.
## Connecting runners
I installed the runner by [following the docs](https://docs.gitea.com/usage/actions/quickstart#set-up-runner). I opted for installing it on a separate host (another lxc container) as recommended in the docs. I used the systemd unit file to ensure that the runner comes back online after system reboots. I installed tailscale on this gitea runner as well, so that it can have the same "networking privileges" as the main instance.
After registering this runner and starting the daemon, the runner appeared in `/admin/actions/runners`. I added two other runners to help with parallelization.
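For reference, registration itself is a one-liner once you have a registration token from the admin UI (the values below are placeholders):
```shell
act_runner register --no-interactive \
  --instance https://gitea.my-tailnet-name.ts.net \
  --token <registration-token>
```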
![image](gitea-runners.png)
## Running a workflow
Now it's time to start running some automation. I used the [demo workflow](https://docs.gitea.com/usage/actions/quickstart#use-actions) as a starting point to verify that the runner is executing workflows.
After this, I wanted to make sure that some of my existing workflows could be migrated over.
The following workflow uses a matrix to run a job for several of my hosts, using ansible playbooks that do various tasks such as applying os updates and updating container images.
```yaml
name: Run ansible
on:
  push:
  schedule:
    - cron: "0 */12 * * *"

jobs:
  run-ansible-playbook:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        host:
          - changedetection
          - homer
          - invidious
          - jackett
          - jellyfin
          - ladder
          - miniflux
          - plex
          - qbittorrent
          - tailscale-exit-node
          - tailscale-subnet-router
          - uptime-kuma
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4

      - name: Install ansible
        run: |
          apt update && apt install ansible -y

      - name: Run playbook
        uses: dawidd6/action-ansible-playbook@v2
        with:
          playbook: playbooks/main.yml
          requirements: requirements.yml
          options: |
            --inventory inventory
            --limit ${{ matrix.host }}

      - name: Send failure notification
        uses: dawidd6/action-send-mail@v3
        if: always() && failure()
        with:
          server_address: smtp.gmail.com
          server_port: 465
          secure: true
          username: myuser
          password: ${{ secrets.MAIL_PASSWORD }}
          subject: ansible runbook '${{ matrix.host }}' failed
          to: me@davegallant.ca
          from: RFD Notify
          body: |
            ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_number }}
```
And voilà:
![image](gitea-workflow.png)
You may be wondering how the gitea runner is allowed to connect to the other hosts using ansible. Well, the nodes are in the same tailnet and have [tailscale ssh](https://tailscale.com/tailscale-ssh) enabled.
## Areas for improvement
One enhancement that I would like to see is the ability to send notifications on workflow failures. Currently, this [doesn't seem possible](https://github.com/go-gitea/gitea/issues/23725) without adding logic to each workflow.
## Conclusion
Gitea Actions are fast and the resource footprint is minimal. My gitea instance is currently using around 250MB of memory and a small fraction of a single cpu core (and the runner is using a similar amount of resources). This is impressive, since many alternatives tend to require substantially more resources. It likely helps that the codebase is largely written in go.
By combining gitea with the networking marvel that is tailscale, running workflows becomes simple and fun. Whether you are working on a team or working alone, this setup ensures that your workflows are securely accessible from anywhere with an internet connection.


@@ -0,0 +1,85 @@
---
title: "Using AKS and SOCKS to connect to a private Azure DB"
date: 2023-05-22T16:31:29-04:00
lastmod: 2023-05-22T16:31:29-04:00
draft: false
comments: true
tags:
[
"aks",
"aws",
"azure",
"bastion",
"cloud-sql-proxy",
"database",
"eks",
"k8s",
"kubectl-plugin-socks5-proxy",
"proxy",
"socat",
"socks",
]
author: "Dave Gallant"
---
I ran into a roadblock recently where I wanted to be able to conveniently connect to a managed postgres database within Azure that was not running on public subnets. And by conveniently, I mean that I'd rather not have to spin up an ephemeral virtual machine running in the same network and proxy the connection, and I'd like to use a local client (preferably with a GUI). After several web searches, it became evident that Azure does not readily provide much tooling to support this.
<!--more-->
## Go Public?
Should the database be migrated to public subnets? Ideally not, since it is good practice to host internal infrastructure in restricted subnets.
## How do others handle this?
With GCP, connecting to a private db instance from any machine can be achieved with [cloud-sql-proxy](https://github.com/GoogleCloudPlatform/cloud-sql-proxy). This works by proxying requests from your machine to the SQL database instance in the cloud, while the authentication is handled by GCP's IAM.
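For reference, the GCP flow looks roughly like this (instance connection name and database values are hypothetical; flags from the v2 proxy):
```shell
# The proxy listens locally and tunnels to the instance, authenticating via IAM
cloud-sql-proxy --port 5432 my-project:us-central1:my-instance
# In another shell, connect as if the database were local
psql "host=127.0.0.1 port=5432 user=postgres dbname=mydb"
```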
So what about Azure? Is there any solution that is as elegant as cloud-sql-proxy?
## A Bastion
Similar to what [AWS has recommended](https://aws.amazon.com/blogs/database/securely-connect-to-an-amazon-rds-or-amazon-ec2-database-instance-remotely-with-your-preferred-gui/), perhaps a bastion is the way forward?
Azure has a fully-managed service called [Azure Bastion](https://azure.microsoft.com/en-ca/products/azure-bastion) that provides secure access to virtual machines that do not have public IPs. This looks interesting, but unfortunately it [costs money](https://azure.microsoft.com/en-ca/pricing/details/azure-bastion/#pricing) and requires an additional virtual machine.
Because this adds cost (and complexity), it does not seem like a desirable option in its current state. If it provided a more seamless connection to the database, it would be more appealing.
## SOCKS
> **2023-12-13:**
> An alternative to using a socks proxy is [socat](http://www.dest-unreach.org/socat/). This would allow you to relay tcp connections to a pod running in k8s, and then port-forward them to your localhost.
> If this sounds more appealing, install [krew-net-forward](https://github.com/antitree/krew-net-forward/tree/master) and then run `kubectl net-forward -i mydb.postgres.database.azure.com -p 5432 -l 5432` to access the database through `localhost:5432`
[SOCKS](https://en.wikipedia.org/wiki/SOCKS) is a protocol that enables a way to proxy connections by exchanging network packets between the client and the server. There are many implementations and many readily available container images that can run a SOCKS server.
It's possible to use this sort of proxy to connect to a private DB, but is it any simpler than using a virtual machine as a jumphost? It wasn't until I stumbled upon [kubectl-plugin-socks5-proxy](https://github.com/yokawasa/kubectl-plugin-socks5-proxy) that I was convinced that using SOCKS could be made simple.
So how does it work? By installing the kubectl plugin and then running `kubectl socks5-proxy`, a SOCKS proxy server is spun up in a pod, and a port-forwarding session to it is opened using kubectl.
As you can see below, this k8s plugin is wrapped up nicely:
```console
$ kubectl socks5-proxy
using: namespace=default
using: port=1080
using: name=davegallant-proxy
using: image=serjs/go-socks5-proxy
Creating SOCKS5 Proxy (Pod)...
pod/davegallant-proxy created
```
With the above proxy connection open, it is possible to access both the DNS and private IPs accessible within the k8s cluster. In this case, I am able to access the private database, since there is network connectivity between the k8s cluster and the database.
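With the port-forward open, any SOCKS5-aware client works; a quick connectivity check might look like this (the service hostname is hypothetical):
```shell
# --socks5-hostname resolves DNS through the proxy, i.e. inside the cluster
curl --socks5-hostname localhost:1080 http://my-internal-service.default.svc.cluster.local
```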
## Caveats and Conclusion
The above outlined solution makes some assumptions:
- there is a k8s cluster
- the k8s cluster has network connectivity to the desired private database
If these stars align, then this solution might work as a stopgap for accessing a private Azure DB (and I'm assuming this could work similarly on AWS).
It would be nice if Azure provided tooling similar to cloud-sql-proxy, so that using private databases would be more of a convenient experience.
One other thing to note is that some clients (such as [dbeaver](https://dbeaver.io/)) [do not provide DNS resolution over SOCKS](https://github.com/dbeaver/dbeaver/issues/872). So in this case, you won't be able to use DNS as if you were inside the cluster, but instead have to rely on knowing private ip addresses.


@@ -0,0 +1,83 @@
---
title: "Virtualizing my router with pfSense"
date: 2022-04-02T18:50:09-04:00
lastmod: 2022-04-02T18:50:09-04:00
draft: false
comments: true
tags:
[
"pfsense",
"router",
"openwrt",
"router-on-a-stick",
"proxmox",
"vlan",
"self-hosted",
]
author: "Dave Gallant"
---
My aging router has been running [OpenWrt](https://en.wikipedia.org/wiki/OpenWrt) for years and for the most part has been quite reliable. OpenWrt is an open-source project used on embedded devices to route network traffic. It supports many different configurations and there exists a [large index of packages](https://openwrt.org/packages/index/start). Ever since I've connected some standalone wireless access points, I've had less of a need for an off-the-shelf all-in-one wireless router combo. I've also recently been experiencing instability with my router (likely the result of a combination of configuration tweaking and firmware updating). OpenWrt has served me well, but it is time to move on!
<!--more-->
## pfSense
I figured this would be a good opportunity to try [pfSense](https://en.wikipedia.org/wiki/PfSense). I've heard nothing but positive things about pfSense, and the fact that it's been around since 2004, is based on FreeBSD, and is written in PHP gave me the impression that it would be relatively stable (and I'd expect nothing less, because it has an important job to do!). pfSense can be run on many different machines, and there are even some [officially supported appliances](https://www.netgate.com/appliances). Since I already have a machine running Proxmox, why not just run it in a VM? It'd allow for automatic snapshotting of the machine. Techno Tim has a good [video](https://www.youtube.com/watch?v=hdoBQNI_Ab8) on virtualizing pfSense.
## Router on a stick
I had initially made the assumption that in order to build a router, you would need more than a single NIC (or a dual-port NIC) in order to support both WAN and LAN. This is simply [not the case](https://en.wikipedia.org/wiki/Router_on_a_stick), because VLANs are awesome! In order to create a router, all you need is a single-port NIC and a network switch that supports VLANs (also marketed as a managed switch). I picked up the Netgear GS308E because it has a sufficient number of ports for my needs and it supports VLANs. It also has a nice sturdy metal frame, which was a pleasant surprise.
After setting up this Netgear switch, it should be possible to access the web interface at [http://192.168.0.239](http://192.168.0.239). It may be at a different address; to find it, try checking the DHCP leases in your router interface (if you plugged it into an existing router). I realized I was unable to access this interface because I was on a different subnet, so I set my machine's address to `192.168.0.22` in order to temporarily set up this switch. I assigned a static ip address to the switch (in `System > Switch Information`) so that it was in the same subnet as the rest of my network.
The web interface is nothing spectacular, but it allows for managing VLANs.
The following configuration will:
- assign port 1 to be the LAN (connected to the Proxmox machine)
- assign port 8 to be the WAN (connected to my ISP's modem)
In the switch's web interface, I went to `VLAN` and then `802.1Q`, and then clicked on `VLAN Configuration`. I configured the ports to look like this:
![vlan-config](netgear-vlan-configuration.png)
Note that the `VLAN Identifier Setting` has been setup already with two VLANs (1 and 10). More VLANs can be created (i.e. to isolate IoT devices), but 2 VLANs is all we need for the initial setup of a router.
To replicate the above configuration, add a new VLAN ID 10 (1 should exist by default).
Next, go into `VLAN Membership` and configure VLAN 1's port membership to be the following:
![vlan-membership-1](netgear-vlan-membership-1.png)
and then configure VLAN 10's port membership to be the following:
![vlan-membership-10](netgear-vlan-membership-10.png)
Now, go into `Port PVID` and ensure that port 8 is set to PVID 10.
![vlan-port-pvid](netgear-port-pvid.png)
This above configuration will dedicate two of the eight ports to WAN and LAN. This will allow the internet to flow into the pfSense from the modem.
## Setting up pfSense
pfSense is fairly easy to set up. Just [download the latest ISO](https://www.pfsense.org/download/) and boot up the virtual machine.
When setting up the machine, I mostly went with all of the defaults. Configuration can be changed later in the web interface, which is quite a bit simpler.
Since VLANs are going to be leveraged, when you go to `Assign Interfaces`, the VLANs should be set up like the following:
- `WAN` should be `vtnet0.10`
- `LAN` should be `vtnet0`
After going through the rest of the installation, if everything is connected correctly it should display both WAN and LAN addresses.
If all goes well, the web interface should be running at [https://192.168.1.1](https://192.168.1.1).
![pfsense-dashboard](pfsense-dashboard.png)
And this is where the fun begins. There are many tutorials and blogs about how to setup pfSense and various services and packages that can be installed. I've already installed [pfBlocker-NG](https://docs.netgate.com/pfsense/en/latest/packages/pfblocker.html).
## Summary
It is fairly simple to set up a router with pfSense from within a virtual machine. A physical dedicated routing machine is not necessary and often does not perform as well as software running on faster and more reliable hardware. So far, pfSense has been running for over a week without a single hiccup. pfSense is a mature piece of software that is incredibly powerful and flexible. To avoid some of the instability I had experienced with OpenWrt, I enabled [AutoConfigBackup](https://docs.netgate.com/pfsense/en/latest/backup/autoconfigbackup.html), which is capable of automatically backing up configuration upon every change. I plan to explore and experiment with more services and configuration in the future, so the ability to track all of these changes gives me the peace of mind that experimentation is safe.


@@ -0,0 +1,92 @@
---
title: "Watching YouTube in private"
date: 2022-12-10T21:46:55-05:00
lastmod: 2022-12-10T21:46:55-05:00
draft: false
comments: true
tags:
[
"invidious",
"youtube",
"yewtu.be",
"tailscale",
"privacy",
"self-hosted",
]
author: "Dave Gallant"
---
I recently stumbled upon [yewtu.be](https://yewtu.be) and found it intriguing. It not only allows you to watch YouTube without _being on YouTube_, but it also allows you to create an account and subscribe to channels without a Google account. What sort of wizardry is going on under the hood? It turns out that it's a hosted instance of [invidious](https://invidious.io/).
<!--more-->
![image](computerphile.png)
The layout is simple, and **JavaScript is not required**.
I started using [yewtu.be](https://yewtu.be) as my primary client for watching videos. I subscribe to several YouTube channels and I prefer the interface invidious provides due to its simplicity. It's also nice to be in control of my search and watch history.
A few days ago, yewtu.be went down briefly, and that motivated me enough to self-host invidious. There are several other hosted instances listed [here](https://docs.invidious.io/instances/), but being able to easily backup my own instance (including subscriptions and watch history) is more compelling in my case.
### Hosting invidious
The quickest way to get invidious up is with docker-compose as mentioned in the [docs](https://docs.invidious.io/installation/).
I made a few modifications, and ended up with:
```yaml
version: "3"
services:
  invidious:
    image: quay.io/invidious/invidious
    restart: unless-stopped
    ports:
      - "0.0.0.0:3000:3000"
    environment:
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: kemal
          host: invidious-db
          port: 5432
        check_tables: true
    healthcheck:
      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
      interval: 30s
      timeout: 5s
      retries: 2
    depends_on:
      - invidious-db

  invidious-db:
    image: docker.io/library/postgres:14
    restart: unless-stopped
    volumes:
      - postgresdata:/var/lib/postgresql/data
      - ./config/sql:/config/sql
      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    environment:
      POSTGRES_DB: invidious
      POSTGRES_USER: kemal
      POSTGRES_PASSWORD: kemal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]

volumes:
  postgresdata:
```
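Bringing the stack up and hitting the same endpoint the container healthcheck uses is a quick smoke test:
```shell
docker compose up -d
curl -s 'http://localhost:3000/api/v1/comments/jNQXAC9IVRw' >/dev/null && echo invidious is up
```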
After invidious was up and running, I installed [Tailscale](https://tailscale.com/) on it to leverage its MagicDNS, and I'm now able to access this instance from anywhere at [http://invidious:3000/feed/subscriptions](http://invidious:3000/feed/subscriptions).
### Redirecting YouTube links
I figured it would be nice to redirect existing YouTube links that others send me, so that I could seamlessly watch the videos using invidious.
I went looking for a way to redirect paths at the browser level. I found the lightweight proxy [requestly](https://requestly.io/), which can be used to modify http requests in my browser. I created the following rules:
![requestly](requestly-rules.png)
Now the link https://www.youtube.com/watch?v=-lz30by8-sU will redirect to [http://invidious:3000/watch?v=-lz30by8-sU](http://invidious:3000/watch?v=-lz30by8-sU)
I'm still looking for ways to improve this invidious setup. There doesn't appear to be a way to stream in 4K yet.


@@ -0,0 +1,62 @@
---
title: "What to do with a homelab"
date: 2021-09-06T01:12:54-04:00
lastmod: 2021-09-06T01:12:54-04:00
draft: false
comments: true
author: "Dave Gallant"
tags: ["self-hosted", "proxmox", "tailscale"]
---
A homelab can be an inexpensive way to host a multitude of internal/external services and learn _a lot_ in the process.
<!--more-->
Do you want to host your own media server? Ad blocker? Web server?
Are you interested in learning more about Linux? Virtualization? Networking? Security?
Building a homelab can be an entertaining playground to enhance your computer skills.
One of the best parts about building a homelab is that it doesn't have to be a large investment in terms of hardware. One of the simplest ways to build a homelab is out of a [refurbished computer](https://ca.refurb.io/products/hp-800-g1-usff-intel-core-i5-4570s-16gb-ram-512gb-ssd-wifi-windows-10-pro?variant=33049503825943).
Having multiple machines/nodes provides the advantage of increased redundancy, but starting out with a single node is enough to reap many of the benefits of having a homelab.
## Virtualization
Virtualizing your hardware is an organized way of dividing up your machine's resources. This can be done with something such as a _Virtual Machine_ or something lighter like a container using _LXC_ or _runC_.
Containers have much less overhead in terms of boot time and storage allocation. This [Stack Overflow answer](https://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-virtual-machine) sums it up nicely.
![image](proxmox.png)
A hypervisor such as [Proxmox](https://www.proxmox.com/en/proxmox-ve/get-started) can be installed in minutes on a new machine. It provides a web interface and a straightforward way to spin up new VMs and containers. Even if your plan is to run mostly docker containers, Proxmox can be a useful abstraction for managing VMs, disks and running scheduled backups. You can even run docker within an LXC container by enabling nested virtualization. You'll want to ensure that VT-d and VT-x are enabled in the BIOS if you decide to install a hypervisor to manage your virtualization.
## Services
So what are some useful services to deploy?
- [Jellyfin](https://jellyfin.org/) or [Plex](https://www.plex.tv/) - basically a self-hosted Netflix that can be used to stream from multiple devices, and the best part is that you manage the content! Unlike Plex, Jellyfin is open source and can be found [here](https://github.com/jellyfin/jellyfin).
- [changedetection](https://github.com/dgtlmoon/changedetection.io) - is a self-hosted equivalent to something like [visualping.io](https://visualping.io/) that will notify you when a webpage changes and keep track of the diffs
- [Adguard](https://github.com/AdguardTeam/AdGuardHome) or [Pihole](https://pi-hole.net/) - can block a list of known trackers for all clients on your local network. I've used pihole for a long time, but have recently switched to Adguard since the UI is more modern and it has the ability to toggle on/off a pre-defined list of services, including Netflix (this is useful if you have stealthy young kids). Either of these will speed up your internet experience, simply because you won't need to download all of the extra tracking bloat.
- [Gitea](https://gitea.io/) - A lightweight git server. I use this to mirror git repos from GitHub, GitLab, etc.
- [Homer](https://github.com/bastienwirtz/homer) - A customizable landing page for services you need to access (including the ability to quickly search).
- [Uptime Kuma](https://github.com/louislam/uptime-kuma) - A fancy tool for monitoring the uptime of services.
There is a large number of services you can self-host, including your own applications that you might be developing. [awesome-self-hosted](https://github.com/awesome-selfhosted/awesome-selfhosted) provides a curated list of services that might be of interest to you.
## VPN
You could certainly set up and manage your own VPN by using something like [OpenVPN](https://openvpn.net/community-downloads/), but there is also something else you can try: [tailscale](https://tailscale.com/). It is a very quick way to create fully-encrypted connections between clients. With its [MagicDNS](https://tailscale.com/kb/1081/magicdns/), you can reference the names of machines like `homer` rather than using an IP address. By using this mesh-like VPN, you can easily create a secure tunnel to your homelab from anywhere.
## Monitoring
![dashboard](netdata.png)
Monitoring can become an important aspect of your homelab after it starts to become something that is relied upon. One of the simplest ways to setup some monitoring is using [netdata](https://www.netdata.cloud/). It can be installed on individual containers, VMs, and also a hypervisor (such as Proxmox). All of the monitoring works out of the box by detecting disks, memory, network interfaces, etc.
Additionally, agents installed on different machines can all be centrally viewed in netdata, and it can alert you when some of your infrastructure is down or in a degraded state. Adding additional nodes to netdata is as simple as a 1-line shell command.
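At the time of writing, that one-liner was netdata's kickstart script (as always, read scripts before piping them to a shell):
```shell
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
```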
As mentioned above, [Uptime Kuma](https://github.com/louislam/uptime-kuma) is a convenient way to track uptime and monitor the availability of your services.
![uptime-kuma](uptime-kuma.png)
## In Summary
Building out a homelab can be a rewarding experience and it doesn't require buying a rack full of expensive servers to get a significant amount of utility. There are many services that you can run that require very minimal setup, making it possible to get a server up and running in a short period of time, with monitoring, and that can be securely connected to remotely.


@@ -0,0 +1,184 @@
---
title: "Why I threw out my dotfiles"
date: 2021-09-08T00:42:33-04:00
lastmod: 2021-09-08T00:42:33-04:00
draft: false
comments: true
tags: ['nix', 'dotfiles', 'home-manager']
author: "Dave Gallant"
---
Over the years I have collected a number of dotfiles that I have shared across both Linux and macOS machines (`~/.zshrc`, `~/.config/git/config`, `~/.config/tmux/tmux.conf`, etc). I have tried several different ways to manage them, including [bare git repos](https://www.atlassian.com/git/tutorials/dotfiles) and utilities such as [GNU Stow](https://www.gnu.org/software/stow/). These solutions work well enough, but I have since found what I would consider a much better solution for organizing user configuration: [home-manager](https://github.com/nix-community/home-manager).
<!--more-->
## What is home-manager?
Before understanding home-manager, it is worth briefly discussing what nix is. [nix](https://nixos.org/) is a package manager that originally spawned from a [PhD thesis](https://edolstra.github.io/pubs/phd-thesis.pdf). Unlike other package managers, it uses symbolic links to keep track of the currently installed packages, keeping around the old ones in case you may want to rollback.
For example, I have used nix to install the package [bind](https://search.nixos.org/packages?channel=unstable&show=bind&from=0&size=50&sort=relevance&type=packages&query=bind) which includes `dig`. You can see that it is available on multiple platforms. The absolute path of `dig` can be found by running:
```console
$ ls -lh $(which dig)
lrwxr-xr-x 73 root 31 Dec 1969 /run/current-system/sw/bin/dig -> /nix/store/0r4qdyprljd3dki57jn6c6a8dh2rbg9g-bind-9.16.16-dnsutils/bin/dig
```
Notice that there is a hash included in the file path? This is a nix store path and is computed by the nix package manager. This [nix pill](https://nixos.org/guides/nix-pills/nix-store-paths.html) does a good job explaining how this hash is computed. All of the nix pills are worth a read, if you are interested in learning more about nix itself. However, using home-manager does not require extensive knowledge of nix.
Part of the nix ecosystem includes [nixpkgs](https://github.com/NixOS/nixpkgs). Many popular tools can be found already packaged in this repository. As you can see with these [stats](https://repology.org/repositories/statistics/total), there is a large number of existing packages that are being maintained by the community. Contributing a new package is easy, and anyone can do it!
home-manager leverages the nix package manager (and nixpkgs), as well the nix language so that you can declaratively define your system configuration. I store my [nix-config](https://github.com/davegallant/nix-config) in git so that I can keep track of my packages and configurations, and retain a clean and informative git commit history so that I can understand what changed and why.
## Setting up home-manager
> ⚠️ If you run this on your main machine, make sure you backup your configuration files first. home-manager is pretty good about not overwriting existing configuration, but it is better to have a backup! Alternatively, you could test this out on a VM or cloud instance.
The first thing you should do is [install nix](https://nixos.org/guides/install-nix.html):
```shell
curl -L https://nixos.org/nix/install | sh
```
It's generally not a good idea to curl and execute files from the internet (without verifying integrity), so you might want to download the install script first and take a look before executing it!
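For example:
```shell
curl -L -o nix-install.sh https://nixos.org/nix/install
less nix-install.sh  # skim it before running
sh nix-install.sh
```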
Open up a new shell in your terminal, and running `nix` *should* work. If not, run `. ~/.nix-profile/etc/profile.d/nix.sh`.
Now, [install home-manager](https://github.com/nix-community/home-manager#installation):
```shell
nix-channel --add https://github.com/nix-community/home-manager/archive/master.tar.gz home-manager
nix-channel --update
nix-shell '<home-manager>' -A install
```
You should see a wave of `/nix/store/*` paths being displayed on your screen.
Now, to start off with a basic configuration, open up `~/.config/nixpkgs/home.nix` in the editor of your choice and paste this in (you will want to change `userName` and `homeDirectory`):
```nix
{ config, pkgs, ... }:
{
  home = {
    username = "dave";
    homeDirectory = "/home/dave";
    stateVersion = "21.11";
    packages = with pkgs; [
      bind
      exa
      fd
      ripgrep
    ];
  };

  programs = {
    # note: this must live inside the `programs` set; defining
    # `programs.home-manager.enable` alongside `programs = { ... }`
    # would make nix complain that `programs` is already defined
    home-manager.enable = true;

    git = {
      enable = true;
      aliases = {
        aa = "add -A .";
        br = "branch";
        c = "commit -S";
        ca = "commit -S --amend";
        cb = "checkout -b";
        co = "checkout";
        d = "diff";
        l =
          "log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit";
      };
      delta = {
        enable = true;
        options = {
          features = "line-numbers decorations";
          whitespace-error-style = "22 reverse";
          plus-style = "green bold ul '#198214'";
          decorations = {
            commit-decoration-style = "bold yellow box ul";
            file-style = "bold yellow ul";
            file-decoration-style = "none";
          };
        };
      };
      extraConfig = {
        push = { default = "current"; };
        pull = { rebase = true; };
      };
    };

    starship = {
      enable = true;
      enableZshIntegration = true;
      settings = {
        add_newline = false;
        scan_timeout = 10;
      };
    };

    zsh = {
      enable = true;
      enableAutosuggestions = true;
      enableSyntaxHighlighting = true;
      history.size = 1000000;
      localVariables = {
        CASE_SENSITIVE = "true";
        DISABLE_UNTRACKED_FILES_DIRTY = "true";
        RPROMPT = ""; # override because macOS defaults to filepath
        ZSH_AUTOSUGGEST_HIGHLIGHT_STYLE = "fg=#838383,underline";
        ZSH_DISABLE_COMPFIX = "true";
      };
      initExtra = ''
        export PAGER=less
      '';
      shellAliases = {
        ".." = "cd ..";
        grep = "rg --smart-case";
        ls = "exa -la --git";
      };
      "oh-my-zsh" = {
        enable = true;
        plugins = [
          "gitfast"
          "last-working-dir"
        ];
      };
    };
  };
}
```
Save the file and run:
```
home-manager switch
```
You should see another wave of `/nix/store/*` paths. The new configuration should now be active.
If you run `zsh`, you should see that you have [starship](https://starship.rs/) and access to several other utils such as `rg`, `fd`, and `exa`.
This basic configuration above is also defining your `~/.config/git/config` and `.zshrc`. If you already have either of these files, home-manager will complain about them already existing.
If you run `cat ~/.zshrc`, you will see the way these configuration files are generated.
You can extend this configuration for programs such as (neo)vim, emacs, alacritty, ssh, etc. To see other programs, take a look at [home-manager/modules/programs](https://github.com/nix-community/home-manager/tree/master/modules/programs).
## Gateway To Nix
In ways, home-manager can be seen as a gateway to the nix ecosystem. If you have enjoyed the way you can declare user configuration with home-manager, you may be interested in expanding your configuration to include other system dependencies and configuration. For example, on Linux you can define your entire system's configuration (including the kernel, kernel modules, networking, filesystems, etc) in nix. For macOS, there is [nix-darwin](https://github.com/LnL7/nix-darwin), which includes nix modules for configuring launchd, dock, and other preferences and services. You may also want to check out [Nix Flakes](https://nixos.wiki/wiki/Flakes): a more recent feature that allows you to declare dependencies and have them automatically pinned and hashed in `flake.lock`, similar to many modern package managers.
## Wrapping up
The title of this post is slightly misleading, since it's possible to retain some of your dotfiles and have them intermingle with home-manager by including them alongside nix. The idea of defining user configuration using nix can provide a clean way to maintain your configuration, and allow it to be portable across platforms. Is it worth the effort to migrate away from shell scripts and dotfiles? I'd say so.