Systems and Data

2025-10-20

Context

Any NixOS package can be built using a Nix function that takes its source code (src) and outputs the Nix-store path of the built package, and the package can then be used from that location. Nix is a pure language, so this output is reproducible and the data is immutable, which means these outputs can be cached. Caching data in a database typically means storing frequently accessed data to improve retrieval speed and reduce load on the backend; caching in Cachix for Nix means storing pre-built binaries to avoid rebuilding packages, which speeds up package management and deployment across the machines of a homelab, in the cloud, in a datacenter (with Terraform), etc.

Nix writers and bash Strings

A regular bash shell does not take a string literally; it interprets special characters (try echoing [:6-*,’s81). If that seems like a cherry-picked example, try to echo any output you get from the following script:

import random
import string

def generate_string(length):
    if length < 8 or length > 64:
        raise ValueError("Length must be between 8 and 64 characters.")

    allowed_chars = string.ascii_letters + string.digits + "!?-_.:,;()=[]\"'?'+%*#~@{}<>$^?/\\"
    return ''.join(random.choice(allowed_chars) for _ in range(length))

# Example usage
result = generate_string(10)
print(result)

Okay? If you still feel tricked, my friend, RTFM! I don’t.
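The difference between a literal and an interpreted string can be sketched by driving bash from Python (a hedged illustration using only subprocess; the example string is arbitrary):

```python
import subprocess

def bash_echo(snippet: str) -> str:
    """Run `echo <snippet>` through bash and return its stdout."""
    return subprocess.run(
        ["bash", "-c", f"echo {snippet}"],
        capture_output=True, text=True,
    ).stdout.strip()

# Single quotes: bash passes the text through untouched.
print(bash_echo("'$HOME'"))   # $HOME
# Double quotes: bash expands the variable before echo runs.
print(bash_echo('"$HOME"'))   # e.g. /home/xameer
```

Unquoted, bash additionally performs globbing on characters like * and ?, which is why a random string such as the one generated above will regularly break an unquoted echo.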

So any bash script in a Nix writer like pkgs.writeShellScript etc. needs the script to be quoted as ''#script ..'', so that you can put special characters that Nix uses, like $, in it and Nix can parse it without side effects. An example where this is harmless, in the absence of any special characters, can be:

# just imperative terminal commands, no env setup with export and accessing them later with special chars like $
services.redis.servers.nextcloud = {
    enable = true;
    bind = "::1";
    port = 6379;
  };

  systemd.services.nextcloud-setup.serviceConfig.ExecStartPost = pkgs.writeScript "nextcloud-redis.sh" ''
    #!${pkgs.runtimeShell}
    nextcloud-occ config:system:set redis 'host' --value '::1' --type string
    nextcloud-occ config:system:set redis 'port' --value 6379 --type integer
    nextcloud-occ config:system:set memcache.local --value '\OC\Memcache\Redis' --type string
    nextcloud-occ config:system:set memcache.locking --value '\OC\Memcache\Redis' --type string
  '';

OTOH, different Nix builders, like any other anonymous lambda you write as a Nix module, require a different number of arguments, and without a sufficient number of arguments provided, they won’t build.

To know the arguments of the modules you’ve written, you can use

nix eval .#nixosConfigurations.thinkpadTest-vm.config.disko

or

nix eval .#nixosConfigurations.thinkpadTest-vm.options.disko

and applying | jq . 1 or --apply builtins.attrNames to these outputs

which gives [ “checkScripts” “devices” “enableConfig” “imageBuilder” “memSize” “rootDisk” “rootMountPoint” “testMode” “tests” ]

Nix is dynamically typed, so there are no type checks as you write; you have to eval in nix repl as you go, which is too much work. That said, you don’t need a browser to see the available options: you can just start with the skeleton of what you need in a flake and get all the options by pressing TAB in the nix eval lines above, or get them one by one, e.g. nix develop -j 50 .#nixosConfigurations.thinkpad.config.services.gitlab-runner.services, which expands to a runner for the docker executor gitlab-pm-docker and for the shell executor gitlab-pm-shell. FWIW, they don’t come out of the box, but to illustrate this on topic: I need minio (S3) listening on 9001 to act as a distributed web store for Nextcloud listening on 8080, which uses pgsql listening on e.g. /run/postgresql.sock for storage, and Redis listening on /run/redis-nextcloud/redis.sock for distributed caching and locking of data.

Note on secrets provisioning and runners

Reverse proxy and ingress.

Why do we need it in the first place? It is not the same as NAT port forwarding. Try tailscale serve 3000 in a terminal and it will tell you to use the flag to turn off HTTPS on 443, as that’s where it is already serving, and hence where it gets the SSL certs for the host. To self-host on tailscale and listen on another port, you need a reverse proxy with nginx, or Caddy with a virtualhost. Caddy saves you from having to configure SSL certs as it auto-configures them. Not to mention its plugins for tailscale and DNS.

Make your HTTP (or HTTPS) network service available using a protocol-aware configuration mechanism that understands web concepts like URIs, hostnames, paths, and more. The Ingress concept lets you map traffic to different backends based on rules you define; you do this manually with a reverse proxy, and declaratively with Ingress for clustered servers, managed by the Kubernetes API or otherwise. In general it is helpful as a traffic controller when the infra is at max load, to hold back any further traffic, just like a real traffic signal. A reverse proxy for localhost typically listens on a TCP port, allowing external clients to connect through that port. In contrast, a reverse proxy using a Unix socket communicates through a file system path, which can be more efficient for local inter-process communication and avoids port conflicts.

Sought outcome

With Caddy, TLS is auto-configured. When no port is mentioned in a virtualhost, like just localhost instead of localhost:8080, Caddy listens on 80 and 443 by default and redirects requests from port 80 (unsecured) to 443 (secured). Adding the reverse proxy to a web service is done via virtualHosts.<service-name>.extraConfig as ''reverse_proxy http://localhost:8096'', and it can be checked with curl --connect-to <virtualhost>:443:<realhost>:443 https://<virtualhost> -k

  1. Ports vs sockets again

Many NixOS service modules create Unix sockets, and the service owner (root or a specific user) determines the permissions and access control for that socket file in the filesystem. The services here, like minio, nextcloud, etc., are not owned by the NixOS home/system user, despite their files being in /var/lib. The good news is that these services default to TCP ports and we need not reconfigure them to create Unix sockets. Now, for a NixOS service bound to listen on one port, to use it on the domain of a service which listens on another port, say portx, I’ll need another reverse proxy from the port to portx:

    virtualHosts =
         let
           base = locations: {
             inherit locations;
    
             forceSSL = true;
             enableACME = true;
           };
           proxy = port:
             base { "/".proxyPass = "http://127.0.0.1:" + toString (port) + "/"; };
         in
         {
           # this service doesn't have a hostname option hence manual
           # minio doesn't need a vhost to a web ui
           "${config.services.nextcloud.config.objectstore.s3.hostname}" = proxy 9000
             // {
             listen = [{
               addr = "127.0.0.1";
               port = 8080;
             }];
             # VERY IMPORTANT for S3v4 signatures
             extraConfig = ''
               proxy_set_header Host $host;
               proxy_set_header X-Forwarded-Host $host;
               proxy_set_header X-Forwarded-Proto $scheme;
               proxy_set_header Connection "";
               proxy_http_version 1.1;
             '';
    
             root = "/var/lib/minio/data/nextcloud";
             locations."/var/lib/nextcloud/data/" = {
               # done above by overriding
               #proxyPass = "http://127.0.0.1:9005";
             };
           };
         };

Firewalls

My node and peers use a WireGuard tunnel with tailscale and a general iptables firewall. Allowing accepted ports like 5432 for pgsql, or forwarding my online port / virtual NIC as internal interfaces for my NixOS VM, is just a few commands in the firewall, besides allowing the tailscale0 and wg0 TCP ports.

networking.firewall.extraCommands =  ''
        iptables -A INPUT -p tcp --dport 5432 -j ACCEPT
        iptables -A FORWARD -i vnet0 -o wlp0s20f3 -j ACCEPT
        iptables -A FORWARD -i wlp0s20f3 -o vnet0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o wlp0s20f3 -j MASQUERADE
'';

Pros 2

Native, reproducible. Integrates with NixOS modules directly and shares the host’s Nix store automatically.

Lazy evaluation, thanks to partial application, allows reusable code which can be referenced in other modules. This is made possible by partial attributes {attr1, attr2 ? {}} (where the default is a certain type of value, e.g. bool, set, or string), lib.mkIf, optional attrs, and recursive attrsets: defining a key/value and then declaring key1, key2, key3, etc., with nested attrs. A Nix function for Terraform can be written like this:

inherit (inputs.terranix.lib) terranixConfiguration;
terraform = pkgs.terraform;
      infra = import ./terraform/default.nix { inherit system pkgs inputs; };
      tfConfig = terranix.lib.terranixConfiguration {
        inherit system;
        modules = [ infra ];
      };
      cachix-deploy-lib = cachix-deploy-flake.lib pkgs;
            terraformConfigurations.default = terranixConfiguration {
        extraArgs = { inherit terranix system pkgs; };
        #   {
        #     inherit system
        #       #inherit (inputs.)
        #       pkgs; # <-- Here we inherit terranix, system and pkgs so it's available in our configuration.nix file as a function parameter
        #   };
        modules = [ ./terraform/default.nix ];
      };

# default.nix
{ pkgs, system, ... }: {
  # Example Terraform config via Terranix
  provider.aws.region = "us-east-1";

  resource.aws_s3_bucket.my_bucket = {
    bucket = "my-nix-generated-bucket";
    acl = "private";
    # acl is deprecated; use a separate aws_s3_bucket_acl resource
    # set API check skip or credentials
    tags = {
      Name = "my-nix-bucket";
      Environment = "production";
    };
  };
}

An example of which I use in my flake 3

Secret provisioning.

# A helper function for agenix secrets
 agenixSecrets = { userHome ? "/home/xameer", files, }: {
        age = {
          identityPaths = [
            "/etc/ssh/ssh_host_ed25519_key"
            "/etc/ssh/ssh_host_rsa_key"
            "${userHome}/.config/sops/age/keys.txt"
            "${userHome}/.ssh/age_id_rsa"
            "${userHome}/.ssh/age-id_ed25519"
          ];

          secrets = files;
        };
      };
 # reused in a module
 # only for systemd services
        (agenixSecrets {
          files = {
            # nextcloud secrets
            "nextcloud-admin-password" = {
              file = ./secrets/nextcloud-admin-password.age;
              owner = "nextcloud";
              group = "nextcloud";
            };
            };
          # more file paths in other modules
          })
          #which can be referenced in any other module
              secretKey = config.age.secrets.minio-secret-key.path;
              # or as a literal string, not a Nix ${} interpolation
              export MINIO_ACCESS_KEY=$(cat /run/agenix/minio-access-key)
          # or it can be declared in a let block of a module and later accessed in the in block.

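As a hedged analogy (Python, not Nix; all names here mirror the snippet above but the code itself is hypothetical): a helper like agenixSecrets is just a function with a defaulted argument that returns a nested attribute set, reusable wherever it is called.

```python
# Hypothetical Python analogue of the agenixSecrets helper above:
# a function with a defaulted argument returning a nested mapping.
def agenix_secrets(files, user_home="/home/xameer"):
    return {
        "age": {
            "identityPaths": [
                "/etc/ssh/ssh_host_ed25519_key",
                f"{user_home}/.ssh/age_id_rsa",
            ],
            "secrets": files,
        }
    }

# "Reused in a module": call it with just the per-module secrets.
cfg = agenix_secrets({
    "nextcloud-admin-password": {"owner": "nextcloud", "group": "nextcloud"},
})
print(cfg["age"]["secrets"]["nextcloud-admin-password"]["owner"])  # nextcloud
```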
Options need not be browsed online; just start with a skeleton and use TAB completion as above.

Cons

Not ideal for persistent state (unless you configure shared mounts), and less control over snapshotting or systemd integration. I didn’t do it with Docker, as that isolates GPU passthrough and I needed an accelerated VM. Although the config includes a gitlab-runner to deploy it over GitLab, with the executors docker, shell, or kubernetes.

Storage: Nextcloud allows you to configure object storages like OpenStack Swift or Amazon Simple Storage Service (S3), or any compatible S3 implementation (e.g. Minio or Ceph Object Gateway), as primary storage, replacing the default storage of files. 4

Abstract

Share Nix-store with nixos options

Integrate the necessary NixOS and home-manager modules of the host

Write a router to forward isolated traffic from host to QEMU VM, with NIC bridges for Wi-Fi, i.e. a point-to-point virtual interface, dynamically created by QEMU/KVM for each VM’s network card with NAT, user-mode networking. 5

With a wired LAN connection, it’d be a Linux software bridge, which aggregates multiple physical or virtual network interfaces into a single Layer 2 broadcast domain.

I prioritize VM-to-host communication the most, then VM-to-VM; I may want LAN access with network isolation and high performance, else I could have gone with just a bridged subnet, i.e. a fully visible VM.

Difference

The difference between br0 in my original config and vnet1 as used in the recommended approach comes down to their roles and what they represent in my network setup:

br0 (Linux Bridge Interface)

In my original config, br0 is bridged directly to my wireless interface wlp0s20f3. This attempts to make the VM appear as if it’s directly on the host’s local network. Bridging Wi-Fi interfaces is often problematic because many wireless drivers and consumer-grade routers do not support being enslaved to Linux bridges or do not handle promiscuous mode and broadcast traffic well. br0 represents a shared Layer 2 segment where connected devices see each other directly.

vnet1 (Virtual NIC Interface)

Concurrent NAT and Bridge Networking

I may have multiple virtual network configurations (NAT and bridged) coexisting, with different VMs or NICs using different network interfaces, but I cannot use NAT hostfwd-style port forwarding on bridged-mode interfaces. A VM’s network configuration (IP addresses, gateways, DNS) must correspond to the network mode: NATed network IPs (like 10.0.2.x) for user-mode or NAT networks, LAN IPs for bridged-mode networks.

Firewall and Port Forwarding: firewall (iptables or nftables) rules on the host affect NAT and forwarding for virtual networks. Port forwarding works with the NAT network but is not applicable in bridged mode. I need extra firewall and NAT rules on the host bridging interface (br0) if I want NAT, isolation, or selective access.

Segmentation Policy: I tag the incoming connections by their purpose, e.g. shopping, banking, work, communication, etc., to prioritize which of them loads quicker. When I emulate this with this VM, I also emulate my relevant hardware accelerators, GPU drivers, kernel modules, and display service.

As a general limitation, these VMs are not visible to external devices even on the same LAN, be it Ethernet or Wi-Fi. As a workaround I use an overlay network with tailscale 6.

Share build-critical state data and secrets from the host, using virtiofs with bind mounts, which are chrooted jails.

Use Case

Tailscale access to multiple VMs or containers behind a single gateway VM, or when I need to segment a network within my VM environment.

In QEMU, network devices can be emulated, allowing guest operating systems to interact with virtual network interfaces. The register_netdev() function plays a role in ensuring that these virtual devices are properly registered within the Linux kernel running in the guest environment.

This function is used in the QEMU rocker switch driver to allocate and register a network device for each physical switch port, allowing for control traffic and higher-level constructs like bridges and VLANs.

The xchg instruction in Linux boot processes is used to exchange values between registers or between a register and memory, which can be important for managing data during system initialization. It helps in tasks like setting up the stack or managing memory addresses efficiently.

File sharing ACLs

To edit anything in files shared via the shared-directories option, using virtioFS, the UID/GID from the host should be visible directly inside the VM.

ssh/scp between different machine instances, local + remote

No filesystem transparency compared to file sharing protocols; depends on the network and its security

Selective isolation with controlled sharing, using a local or remote VM over ssh

Instantiation can be done with nbd-connect (raw block device) or LVM when performance is critical, and with QEMU command flags and a qcow2 file image for file-op-heavy tasks or snapshots (tricky).

Tailscale

Is more a network traffic mesh, and isolates that traffic rather than providing isolation or transparency for file sharing; that is still done the ssh + scp way.

(image: ./images/vm.png)

For wired eth interface

I use the virtual NAT(ed) subnet interface vnet# of my host NixOS to emulate its network in the guest NixOS 7.

My wireless driver does not support being enslaved to Linux bridges due to Wi-Fi protocol limitations.

MacVTap is generally more suitable for vnet#, connecting directly to the physical network interface and improving performance and resource consumption. QEMU’s rocker switch is typically used with virbr# tap bridges.

Host Specs

CPU - grep -E 'vmx|sse4|aes' /proc/cpuinfo checks yes to all.

GPU: VGA compatible controller: NVIDIA Corporation GP107GLM [Quadro P1000 Mobile] (rev a1). It’s not supported by open drivers, but isn’t legacy either. Optional Wayland.

Network specs

wifi interface subnet or ethernet interface

Local isolated private networks

Docker bridges virtual network switches for process and filesystem namespace isolation, and VMs use virtual subnets from the host. NixOS uses cgroup control with namespace isolation, and hence gives a more consistent outcome.

Tailscale subnet bypass

Emulation specs

VM specs

Disk: a VM disk created by nixos-rebuild build-vm typically contains just a single virtual disk, /dev/vda, which acts as the entire VM’s storage. This single disk is formatted as ext4 with the label "nixos", which usually corresponds to the root filesystem of the VM. Unlike a physical or fully partitioned disk, this virtual disk does not usually have separate partitions for /boot, swap, or other mount points inside the image created by build-vm, to keep the VM simple and self-contained. The VM uses a fixed closure and does not have a nix-channel inside.

QEMU write locks, host filesystems and build methods

The command ./result/bin/run-nixos-vm typically uses the specified .qcow2 image as a writable disk for the virtual machine, allowing the VM to operate on that image while keeping the original file intact. This involves creating a temporary overlay or snapshot of the image, which enables changes to be made without altering the base image directly.

Unix socket communication channel for kernel-verified, FS-based authentication logic

Socket-based authentication in Linux, particularly with Unix domain sockets like those used by ssh-unix-local, postgresql, or Redis, relies primarily on file system permissions and peer credentials. Unlike network sockets that depend on IP addresses and ports, Unix sockets operate within the local file system, which implies lower latency, since no network stack is needed.

Port 0 in Linux is a reserved port that is not used for actual network communication. Instead, it is a special programming technique that allows applications to request the operating system to allocate a dynamic port number automatically when binding a socket.
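Requesting a dynamic port via port 0 can be demonstrated in a few lines of Python (standard sockets, nothing NixOS-specific here):

```python
import socket

# Bind to port 0 and let the kernel allocate a free ephemeral port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]  # the port the OS actually picked
print(port)                # some nonzero ephemeral port
s.close()
```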

Choosing either does not inherently prevent race conditions or deadlocks. These issues are generally handled by Redis’s single-threaded nature and Nextcloud’s application logic, which uses Redis for atomic operations and locking mechanisms.
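The kind of atomic "set if not exists" locking that Redis provides can be sketched in Python. This is a toy in-process stand-in, not the real Redis API; the threading.Lock plays the role of Redis’s single-threaded command loop:

```python
import threading

store: dict = {}
_atomic = threading.Lock()  # stands in for Redis's single-threaded execution

def setnx(key, value):
    """Set key only if absent; returns True when the 'lock' was acquired."""
    with _atomic:
        if key in store:
            return False
        store[key] = value
        return True

print(setnx("lock:file42", "worker-1"))  # True: first worker takes the lock
print(setnx("lock:file42", "worker-2"))  # False: lock already held
```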

Unix domain sockets allow the server process to retrieve the credentials of the connecting client process. This includes the client’s User ID (UID) and Group ID (GID). If they match the expected values, the client is allowed to connect.
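On Linux this credential retrieval is the SO_PEERCRED socket option; a minimal self-connecting sketch (the socket path here is arbitrary):

```python
import os
import socket
import struct
import threading

path = "/tmp/peercred-demo.sock"
if os.path.exists(path):
    os.unlink(path)

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)

def client():
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(path)
    c.close()

t = threading.Thread(target=client)
t.start()
conn, _ = srv.accept()
# SO_PEERCRED yields the connecting process's pid, uid, gid.
pid, uid, gid = struct.unpack("3i", conn.getsockopt(
    socket.SOL_SOCKET, socket.SO_PEERCRED, struct.calcsize("3i")))
t.join()
conn.close()
srv.close()
os.unlink(path)
print(uid == os.getuid())  # True: the client is this same process/user
```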

Another type of auth besides UDS is from agenix 8, which focuses on runtime secret-provisioning safety, such that decryption only happens at runtime and isn’t permanent.

On an untrusted network, peer authentication is encrypted by SSH, which can be used to tunnel UDS connections over a secure network channel. This allows a remote client to connect to a UDS on a server as if it were a local connection, leveraging SSH’s authentication and encryption for the transport. Alternatively with PAM: when a connection is made to a systemd-managed UDS, systemd can activate a corresponding service unit, which can then perform its own authentication, potentially utilizing PAM or other mechanisms for user verification.

QEMU disk caching and locking

Disks and caching are managed with Disko, no ZFS, and it isolates the host’s hard disk partitions from the VM.

The Disko module can be understood by expanding it in nix repl.

Here rootDisk is a Disko attr which was not defined in my flake, so to hardcode it to /dev/vda instead of anything possible, I wrote a tiny module:

{ lib, config, inputs, ... }:
 {
  options.disko.rootDisk = lib.mkOption {
    type = lib.types.str;
    default = "/dev/vda";
    description = "Primary disk device used by Disko (for VM or host).";
  };
  # I might use diskoLib for nodev
  # optional: let your runtime override this
  config = {
    # example default for virtual machine runtime
    disko.rootDisk = lib.mkDefault "/dev/vda";

  };
}

Building and running the partitioned image

Expanding nix eval .#nixosConfigurations.thinkpadTest-vm.config.system.build --apply builtins.attrNames gives the following ways:

warning: Git tree ‘/home/xameer/clone/dot’ is dirty [ “binsh” “bootStage1” “bootStage2” “destroy” “destroyFormatMount” “destroyFormatMountNoDeps” “destroyNoDeps” “destroyScript” “destroyScriptNoDeps” “disko” “diskoImages” “diskoImagesScript” “diskoNoDeps” “diskoScript” “diskoScriptNoDeps” “earlyMountScript” “etc” “etcActivationCommands” “etcBasedir” “etcMetadataImage” “extraUtils” “fileSystems” “format” “formatMount” “formatMountNoDeps” “formatNoDeps” “formatScript” “formatScriptNoDeps” “images” “initialRamdisk” “initialRamdiskSecretAppender” “installBootLoader” “installTest” “kernel” “mount” “mountNoDeps” “mountScript” “mountScriptNoDeps” “nixos-enter” “nixos-generate-config” “nixos-install” “nixos-option” “nixos-rebuild” “separateActivationScript” “setEnvironment” “sops-nix-manifest” “sops-nix-users-manifest” “toplevel” “uki” “units” “unmount” “unmountNoDeps” “vm” “vmWithBootLoader” “vmWithDisko” ]

including those with disko, i.e. “disko” “diskoImages” “diskoImagesScript” “diskoNoDeps” “diskoScript” “diskoScriptNoDeps” “vmWithDisko”; vmWithDisko is a self-explanatory attr here.

Runtime

Disko repartitioning wiped or corrupted my existing /etc/shadow in the VM, so I can’t log in from the tty now, and the only way to restore it is to mount these partitions.

Doing a nix run for vmWithDisko gives me the typical result/bin run script,

while the one in /tmp is ephemeral (deleted when the VM exits); but while it is running and holds a write lock, it is symlinked to the former.

To inspect the current filesystem of the running VM, I connect to the overlay, while the immutable base partitions I can connect to any time, so I run cp --reflink=auto /tmp/nix-shell.fDCrLR/tmp.uJdNKkvOdw/vda.qcow2

The VM

While the entire flake for NixOS is quite long, this is the reusable snippet:

# modify and build with nixos-rebuild build-vm --flake .#thinkpadTest-vm
{}: {
 thinkpadTest-vm = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          # flake special Args for shared Modules
          modules = sharedModules ++ [{
          virtualisation.vmVariant = {
          # sharing system and home critical data from host to vm , virt nic bridge controls how vm communicates externally
          # shared directories are mounted as filesystems inside the VM, but the VM only sees what the host explicitly
          # shares.
          # through the virtualization layer.
              virtualisation.sharedDirectories = {
              authorizedKeys = {
                  source = "/home/xameer/.ssh";
                  target = "/home/xameer/.ssh";
                  securityModel =
                    "passthrough"; # preserves ownership and permissions
                };
                homeConf = {
                  source = "/home/xameer/.config";
                  target = "/home/xameer/.config";
                  securityModel =
                    "passthrough"; # preserves ownership and permissions
                };
                homeLocal = {
                  source = "/home/xameer/.local";
                  target = "/home/xameer/.local";
                  securityModel =
                    "passthrough"; # preserves ownership and permissions
                };
                };
          };
          }] ++ [
            #./sys/my-module-to-test--in-a-vm.nix
          ];
        };

}


The issue

NixOS uses symbolic links extensively. For example, many of the configuration files in /etc are symbolic links to generated files in /nix/store. When using services, these are neither owned by uid 1000 (xameer) nor by any of the groups it is in. That is fine on the serial port 0 tty, to test that the build worked, but it won’t let xameer launch any programs or services in home or do any file ops, simply because xameer isn’t authorized to do that here; I can’t even log in on the GUI. Now there are activation scripts which let you run any bash ops, but they’re unusable for this case, given the immutable design of NixOS.

Utilizing tunnels to serve host’s Nix store to VM over ssh

The “don’t repeat your builds” thing in NixOS

The one-way tunnel, qemu hostfwd=tcp::2222-:22: from the VM I can connect to the host on port 22, but the host’s access request to the VM was refused over port 2222.

It just does basic key forwarding and sets up a master socket for a single TCP connection for the session(s), given the sought key has been advertised on handshake. Tailscale does the same, but with netcat; beware of its MagicDNS, it can rewrite your resolv.conf until you actively track its bugs (now fixed, I guess). My VM guest is already forwarding ports; if you use the wrong direction (-L instead of -R), it won’t find the control socket.

  1. Network emulation

In the VM, NBD consists of three parts: the server exporting the block device, the client using a kernel driver to present a virtual block device (e.g. /dev/nbd0), and the network connecting them. When applications on the client access the virtual device, the requests are sent to the server, which executes them on the actual storage.

    programs.ssh = {
      enable = true;
      matchBlocks = {
        "machine-Hostname" = {
          hostname = "127.0.0.1";
          port = 2222;
          user = "machineUsername";
          identityFile = "~/.ssh/id_rsa_host_to_vm";
          identitiesOnly = true;
          serverAliveInterval = 30;
          serverAliveCountMax = 3;
        };
      };
    };
    
    

The possible workarounds

Failed: I can rsync these dirs, excluding these symlinks, into some backup folder(s) and share them instead. Todo: I can use the impermanence way to just organize my persist dirs like this, using the same immutability. For now I just rebuild my system within the VM, and that hopefully fixes the home perms, letting me use the GUI and home-manager programs.

The Network

VM side

# nat
{}: {
              networking = {
                interfaces.eth0.useDHCP = true; # For net0 (default)
                interfaces.eth1.useDHCP = true; # For net1
                # Default gateway will come via DHCP, router is host NAT IP on each subnet
                nameservers = [ "8.8.8.8" "1.1.1.1" ];
              };
              # rest of the virtualisation.vmVariant
              qemu.networkingOptions = [
              #hostfwd=tcp::2222-:22 forwards traffic from the host machine to the VM, allowing connections to the VM's
              #SSH service on port 22 through port 2222 on the host. This means you can access the VM by connecting to
              #the host's IP address on port 2222
                  "-nic user,model=virtio-net-pci,hostfwd=tcp::2222-:22"
                  "-nic user,model=virtio-net-pci,hostfwd=tcp::8088-:8088"
                ];

}


The reusable part of my snippet for the host machine has to be consistent with my existing firewall below.

Host side

{}:{
networking = {
   nat = {
     enable = true;
     internalInterfaces = [ "vnet0" "vnet1" ]; # QEMU virtual NIC interfaces
     externalInterface = "wlp0s20f3"; # Host wireless interface
   };

   firewall = {
     enable = true;
     allowPing = true;
     trustedInterfaces = [ "tailscale0" ];
     allowedUDPPorts = [ config.services.tailscale.port 41641 ];
     allowedTCPPorts = [ config.services.tailscale.port ];

     interfaces."tailscale0".allowedTCPPorts = config.services.openssh.ports;

     extraCommands = ''
       iptables -A FORWARD -i vnet0 -o wlp0s20f3 -j ACCEPT
       iptables -A FORWARD -i wlp0s20f3 -o vnet0 -m state --state RELATED,ESTABLISHED -j ACCEPT
       iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o wlp0s20f3 -j MASQUERADE

       iptables -A FORWARD -i vnet1 -o wlp0s20f3 -j ACCEPT
       iptables -A FORWARD -i wlp0s20f3 -o vnet1 -m state --state RELATED,ESTABLISHED -j ACCEPT
       iptables -t nat -A POSTROUTING -s 192.168.101.0/24 -o wlp0s20f3 -j MASQUERADE
     '';

     extraStopCommands = ''
       iptables -D FORWARD -i vnet0 -o wlp0s20f3 -j ACCEPT || true
       iptables -D FORWARD -i wlp0s20f3 -o vnet0 -m state --state RELATED,ESTABLISHED -j ACCEPT || true
       iptables -t nat -D POSTROUTING -s 192.168.100.0/24 -o wlp0s20f3 -j MASQUERADE || true

       iptables -D FORWARD -i vnet1 -o wlp0s20f3 -j ACCEPT || true
       iptables -D FORWARD -i wlp0s20f3 -o vnet1 -m state --state RELATED,ESTABLISHED -j ACCEPT || true
       iptables -t nat -D POSTROUTING -s 192.168.101.0/24 -o wlp0s20f3 -j MASQUERADE || true
     '';
   };
 };
 }


The snippet combines IP addr classification (via IP sets), nftables rules, and advanced routing (policy routing based on marks or IPs) to ensure traffic goes through the intended network bridge, and that traffic intended for one bridge doesn’t leak to the other. The host network topology is primarily tap, but I also need tun for Rocker ports on QEMU.

Raw code insertion

Footnotes


  1. you can also do them with github actions, but i won’t put my secrets in plaintext even in yaml https://stackoverflow.com/questions/64031598/creating-a-minios3-container-inside-a-github-actions-yml-file/71855338#71855338↩︎

  2. It is a very good opportunity to get to know nixpkgs’ lib (a Haskell-flavoured DSL’s standard library),

    e.g. from the eval warning: lib.crossLists is deprecated, use lib.mapCartesianProduct instead.

    For example, the following function call:

    nix-repl> lib.crossLists (x: y: x+y) [[1 2] [3 4]]
    [ 4 5 5 6 ]

    Can now be replaced by the following one:

    nix-repl> lib.mapCartesianProduct ({x,y}: x+y) { x = [1 2]; y = [3 4]; }
    [ 4 5 5 6 ]↩︎

  3. https://discourse.nixos.org/t/minio-in-distributed-mode/29876/12↩︎

  4. https://docs.nextcloud.com/server/stable/admin_manual/configuration_files/primary_storage.html↩︎

  5. My wireless drivers do not support being enslaved to Linux bridges due to Wi-Fi protocol limitations.↩︎

  6. Ensures all trusted interfaces such as tailscale0 are accepted fully (not just specific ports) Allow OpenSSH ports specifically on tailscale0 (which I already do by allowing tcp port 22). Confirm that the Tailscale UDP port configured in config.services.tailscale.port (often 41641) is allowed on input and output.↩︎

  7. Ensures all trusted interfaces such as tailscale0 are accepted fully (not just specific ports) Allow OpenSSH ports specifically on tailscale0 (which I already do by allowing tcp port 22). Confirm that the Tailscale UDP port configured in config.services.tailscale.port (often 41641) is allowed on input and output.↩︎

  8. agenix natively supports age and SSH keys, but it does not directly support cloud KMS providers (GCP, AWS, Azure) or PGP/GPG . So that we need sops-nix↩︎
