Emulation specs
VM specs
Disk: the VM disk created by nixos-rebuild build-vm typically contains just a single virtual disk, /dev/vda, which acts as the VM’s entire storage. This single disk is formatted as ext4 with the label “nixos”, which usually corresponds to the VM’s root filesystem. Unlike a physical or fully partitioned disk, the image created by build-vm usually has no separate partitions for /boot, swap, or other mount points, which keeps the VM simple and self-contained. The VM uses a fixed closure and has no nix-channel inside.
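This is easy to confirm from inside the VM (assuming the default single-disk layout described above):
lsblk -f /dev/vda   # one ext4 filesystem labelled "nixos", no partition table
findmnt /           # the root mount, as described above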
QEMU write-locks, host filesystems and build methods
The command ./result/bin/run-nixos-vm typically uses the specified .qcow2 image as a writable disk for the virtual machine, allowing the VM to operate on that image while keeping the original file intact. This works by creating a temporary overlay or snapshot of the image, which enables changes to be made without altering the base image directly.
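A quick way to see this relationship with qemu-img (nixos.qcow2 is the default image name the run script creates in the working directory; adjust if yours differs):
qemu-img info ./nixos.qcow2                                             # the persistent base image
qemu-img create -f qcow2 -b ./nixos.qcow2 -F qcow2 /tmp/overlay.qcow2   # the overlay idea in isolation
qemu-img info /tmp/overlay.qcow2                                        # reports ./nixos.qcow2 as its backing file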
Unix socket communication channel for kernel-verified, filesystem-based authentication logic
Socket-based authentication in Linux, particularly with Unix domain sockets like those used by ssh-unix-local, PostgreSQL, or Redis, relies primarily on filesystem permissions and peer credentials. Unlike network sockets, which depend on IP addresses and ports, Unix sockets operate within the local filesystem, which also gives them lower latency since no network stack is involved.
Port 0 in Linux is a reserved port that is never used for actual network communication. Instead, it is a special value that lets applications ask the operating system to allocate a dynamic port number automatically when binding a socket.
Choosing either transport (TCP or a Unix socket) does not inherently prevent race conditions or deadlocks. These issues are generally handled by Redis’s single-threaded nature and Nextcloud’s application logic, which uses Redis for atomic operations and locking mechanisms.
Unix domain sockets allow the server process to retrieve the credentials of the connecting client process, including the client’s User ID (UID) and Group ID (GID). If these match what the server expects, the client is allowed to connect.
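Both layers can be inspected from a shell; the PostgreSQL socket path below is only an example, substitute whichever service is being checked:
ls -l /run/postgresql/.s.PGSQL.5432   # filesystem layer: ordinary permissions decide who may connect at all
ss -xlp | grep -i postgres            # listening Unix sockets together with the processes that own them
# the UID/GID comparison itself happens server-side, via the SO_PEERCRED socket option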
Another type of auth around UDS comes from agenix [fn:7], which focuses on the safety of runtime secret provisioning [fn:8]: decryption happens only at runtime and is never made permanent.
On an untrusted network, peer authentication can be wrapped in SSH, which can tunnel UDS connections over a secure channel. This allows a remote client to connect to a UDS on a server as if it were a local connection, leveraging SSH’s authentication and encryption for the transport. Alternatively, PAM can be involved: when a connection is made to a systemd-managed UDS, systemd can activate a corresponding service unit, and that service can then perform its own authentication, potentially using PAM or other mechanisms for user verification.
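OpenSSH can forward Unix sockets directly with -L local_socket:remote_socket; a minimal sketch, with the host name and socket paths as placeholders:
ssh -N -L /tmp/pg.sock:/run/postgresql/.s.PGSQL.5432 xameer@remote-server
# local clients now talk to /tmp/pg.sock, while the remote end still sees an ordinary local UDS connection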
QEMU disk caching and locking
Disk and caching are managed with Disko, no ZFS, and it isolates the host’s hard disk partitions from the VM.
The Disko module can be understood by expanding its options in nix repl.
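For instance, a rough session (the attribute path assumes the standard disko.devices option tree; adjust to taste):
nix repl
nix-repl> :lf .
nix-repl> nixosConfigurations.thinkpadTest-vm.config.disko.devices.disk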
Here rootDisk is a Disko attribute which was not defined in my flake, so to hard-code it to /dev/vda instead of leaving it open to anything, I wrote a tiny module:
{ lib, config, inputs, ... }:
{
  options.disko.rootDisk = lib.mkOption {
    type = lib.types.str;
    default = "/dev/vda";
    description = "Primary disk device used by Disko (for VM or host).";
  };
  # I might use diskoLib for nodev
  # optional: let your runtime override this
  config = {
    # example default for virtual machine runtime
    disko.rootDisk = lib.mkDefault "/dev/vda";
  };
}
Building and running the partitioned image
Expanding nix eval .#nixosConfigurations.thinkpadTest-vm.config.system.build --apply builtins.attrNames gives the following attributes:
warning: Git tree '/home/xameer/clone/dot' is dirty [ "binsh" "bootStage1" "bootStage2" "destroy" "destroyFormatMount" "destroyFormatMountNoDeps" "destroyNoDeps" "destroyScript" "destroyScriptNoDeps" "disko" "diskoImages" "diskoImagesScript" "diskoNoDeps" "diskoScript" "diskoScriptNoDeps" "earlyMountScript" "etc" "etcActivationCommands" "etcBasedir" "etcMetadataImage" "extraUtils" "fileSystems" "format" "formatMount" "formatMountNoDeps" "formatNoDeps" "formatScript" "formatScriptNoDeps" "images" "initialRamdisk" "initialRamdiskSecretAppender" "installBootLoader" "installTest" "kernel" "mount" "mountNoDeps" "mountScript" "mountScriptNoDeps" "nixos-enter" "nixos-generate-config" "nixos-install" "nixos-option" "nixos-rebuild" "separateActivationScript" "setEnvironment" "sops-nix-manifest" "sops-nix-users-manifest" "toplevel" "uki" "units" "unmount" "unmountNoDeps" "vm" "vmWithBootLoader" "vmWithDisko" ]
including the Disko-related ones, i.e. "disko", "diskoImages", "diskoImagesScript", "diskoNoDeps", "diskoScript", "diskoScriptNoDeps" and "vmWithDisko"; vmWithDisko is the self-explanatory attribute here, so that is what gets built:
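So the build boils down to (the run script’s exact name depends on the configuration’s hostname, hence the glob):
nix build .#nixosConfigurations.thinkpadTest-vm.config.system.build.vmWithDisko
./result/bin/run-*-vm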
Runtime
Disko repartitioning wiped or corrupted my existing /etc/shadow in the VM, so I can’t log in from a tty now, and the only way to restore it is to mount these partitions.
Doing a nix run on vmWithDisko gives me the typical result/bin run script, while the image in /tmp is ephemeral (deleted when the VM exits); while the VM is running and holds a write lock, it is symlinked to the former.
To inspect the current filesystem of the running VM I connect to the overlay, while the immutable base partitions can be attached at any time, so I run cp --reflink=auto /tmp/nix-shell.fDCrLR/tmp.uJdNKkvOdw/vda.qcow2
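From such a copy, or from the base image once the VM is shut down and its write lock released, the filesystem can be attached on the host with qemu-nbd; the device node and mount point here are illustrative:
sudo modprobe nbd max_part=8
sudo qemu-nbd --connect=/dev/nbd0 ./vda.qcow2
sudo mount /dev/nbd0 /mnt            # or /dev/nbd0p1 etc. if the image carries a partition table
# ...restore a known-good /etc/shadow into /mnt/etc/shadow, then detach:
sudo umount /mnt
sudo qemu-nbd --disconnect /dev/nbd0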
The VM
While the entire NixOS flake is quite long, this is the reusable snippet:
# modify and build with nixos-rebuild build-vm --flake .#thinkpadTest-vm
{ nixpkgs, sharedModules, ... }: {
  thinkpadTest-vm = nixpkgs.lib.nixosSystem {
    system = "x86_64-linux";
    # flake specialArgs for shared modules
    modules = sharedModules ++ [{
      virtualisation.vmVariant = {
        # sharing system- and home-critical data from host to VM; the virtual NIC bridge controls how the VM
        # communicates externally. Shared directories are mounted as filesystems inside the VM, but the VM only
        # sees what the host explicitly shares through the virtualization layer.
        virtualisation.sharedDirectories = {
          authorizedKeys = {
            source = "/home/xameer/.ssh";
            target = "/home/xameer/.ssh";
            securityModel = "passthrough"; # preserves ownership and permissions
          };
          homeConf = {
            source = "/home/xameer/.config";
            target = "/home/xameer/.config";
            securityModel = "passthrough"; # preserves ownership and permissions
          };
          homeLocal = {
            source = "/home/xameer/.local";
            target = "/home/xameer/.local";
            securityModel = "passthrough"; # preserves ownership and permissions
          };
        };
      };
    }] ++ [
      #./sys/my-module-to-test--in-a-vm.nix
    ];
  };
}
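Building and launching then looks like this (the run script’s name depends on networking.hostName, hence the glob):
nixos-rebuild build-vm --flake .#thinkpadTest-vm
./result/bin/run-*-vm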
The issue
NixOS uses symbolic links extensively. For example, many of the configuration files in /etc are symbolic links to generated files in /nix/store. When using the shared directories, these files are neither owned by uid 1000 (xameer) nor by any of the groups it is in. That is fine on the serial port 0 tty, to test that the build worked, but it won’t let xameer launch any programs or services in home or do any file ops, simply because xameer isn’t authorized to do that here; I can’t even log in on the GUI. Now there are activation scripts that let you run arbitrary bash, but they are unusable for this case, given the immutable design of NixOS.
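The mismatch is easy to confirm from the serial console (the mounts show up as 9p or virtiofs, depending on how sharedDirectories is backed):
findmnt -t 9p,virtiofs
ls -ldn /home/xameer/.ssh /home/xameer/.config /home/xameer/.local   # numeric owners, compare against uid 1000
id xameer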
Utilizing tunnels to serve host’s Nix store to VM over ssh
The don’t-repeat-your-builds thing in NixOS
The one-way tunnel qemu hostfwd=tcp::2222-:22: from the VM I can connect to the host on port 22, but the host’s access requests to the VM are refused over port 2222.
It just does basic key forwarding and sets up a master socket for a single TCP connection shared by the session(s), given that the sought key has been advertised during the handshake. Tailscale does the same, but with netcat; beware of its MagicDNS, which can rewrite your resolv.conf until you actively track its bugs (now fixed, I guess). My VM guest is already forwarding ports; if you use the wrong direction (-L instead of -R), it won’t find the control socket.
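That channel is already enough to stop repeating builds: from inside the VM, the host’s store can act as a remote substituter over SSH (10.0.2.2 is QEMU’s default user-mode gateway back to the host; the user and store path are placeholders):
nix copy --from ssh://xameer@10.0.2.2 /nix/store/<hash>-some-package
nix build nixpkgs#hello --substituters "ssh-ng://xameer@10.0.2.2"
# both forms may additionally need signed paths or trusted-user/trusted-substituter settings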
Network emulation
In the VM, NBD consists of three parts: the server exporting the block device, the client using a kernel driver to present a virtual block device (e.g. /dev/nbd0), and the network connecting them. When applications on the client access the virtual device, the requests are sent to the server, which executes them on the actual storage.
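A minimal sketch of those three parts with qemu-nbd and nbd-client (the addresses, export name and image path are illustrative, not from this setup):
qemu-nbd --export-name=vmdisk --bind=0.0.0.0 --port=10809 ./nixos.qcow2   # server: export the image
sudo modprobe nbd                                                         # client: kernel driver
sudo nbd-client 192.168.100.1 /dev/nbd0 -N vmdisk                         # network: attach the export as /dev/nbd0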
programs.ssh = {
  enable = true;
  matchBlocks = {
    "machine-Hostname" = {
      hostname = "127.0.0.1";
      port = 2222;
      user = "machineUsername";
      identityFile = "~/.ssh/id_rsa_host_to_vm";
      identitiesOnly = true;
      serverAliveInterval = 30;
      serverAliveCountMax = 3;
    };
  };
};
The possible workarounds
Failed: I can rsync these dirs, excluding the symlinks, into some backup folder(s) and share those instead. Todo: I can use the impermanence way to organize my persist dirs like this, using the same immutability. For now I just rebuild my system within the VM, and that hopefully fixes the home perms enough to let me use the GUI and home-manager programs.

With Caddy, TLS is auto-configured. Even when no port is mentioned on a virtual host (just localhost instead of localhost:8080), Caddy listens on 80 and 443 by default and redirects requests from port 80 (unsecured) to 443 (secured). Adding a reverse proxy to a web service is done via virtualHosts."service-name".extraConfig as "reverse_proxy http://localhost:8096", and it can be checked with src_bash[:exports code]{curl --connect-to <virtualhost>:443:<realhost>:443 https://<virtualhost> -k}
File sharing ACLs
To edit anything in the files shared via the sharedDirectories option using virtiofs, the UID/GID from the host should be visible directly inside the VM.
ssh/scp between different machine instances, local + remote
No filesystem transparency compared to file-sharing protocols, plus a dependency on the network and its security.
Selective isolation with controlled sharing, using ssh to a local or remote VM.
Instantiation can be done with nbd-connect (raw block device) or LVM when performance is critical, or with QEMU command flags and a qcow2 file image for file-op-heavy tasks or snapshots (tricky).
Tailscale
It is more of a network-traffic container mesh, and it isolates that traffic rather than providing isolation or transparency for file sharing; that still goes the ssh + scp way.

For wired eth interface
I use the virtual NAT(ed) subnet interface vnet# of my host NixOS to emulate its network in the guest NixOS [fn:6].
My wireless driver does not support being enslaved to Linux bridges due to Wi-Fi protocol limitations.
MacVTap is generally more suitable for connecting vnet# directly to the physical network interface, improving performance and resource consumption. QEMU’s Rocker switch is typically used with virbr# tap bridges.
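A macvtap endpoint for that can be created directly with iproute2 (the NIC name enp0s31f6 is an example, not necessarily my interface):
sudo ip link add link enp0s31f6 name macvtap0 type macvtap mode bridge
sudo ip link set macvtap0 up
ip link show macvtap0   # the interface index N maps to the /dev/tapN handed to QEMU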
Host Specs
CPU: grep -E 'vmx|sse4|aes' /proc/cpuinfo matches all three (VT-x, SSE4, AES-NI).
GPU: VGA compatible controller: NVIDIA Corporation GP107GLM [Quadro P1000 Mobile] (rev a1). It is not supported by the open drivers, but it isn’t legacy either. Wayland is optional.
Network specs
Wi-Fi interface subnet or Ethernet interface
Local isolated private networks
Docker bridges virtual network switches for process and filesystem namespace isolation and keeps the VM’s virtual subnets away from the host. NixOS uses cgroup control with namespace isolation and hence gives a more consistent outcome.
Tailscale subnet bypass
The Network
VM side
# nat
{ ... }: {
  networking = {
    interfaces.eth0.useDHCP = true; # For net0 (default)
    interfaces.eth1.useDHCP = true; # For net1
    # Default gateway will come via DHCP; the router is the host NAT IP on each subnet
    nameservers = [ "8.8.8.8" "1.1.1.1" ];
  };
  # rest of the virtualisation.vmVariant
  qemu.networkingOptions = [
    # hostfwd=tcp::2222-:22 forwards traffic from the host machine to the VM, allowing connections to the VM's
    # SSH service on port 22 through port 2222 on the host. This means you can access the VM by connecting to
    # the host's IP address on port 2222.
    "-nic user,model=virtio-net-pci,hostfwd=tcp::2222-:22"
    "-nic user,model=virtio-net-pci,hostfwd=tcp::8088-:8088"
  ];
}
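Inside the guest this should leave both user-mode NICs with leases from QEMU’s slirp stack (10.0.2.0/24 per NIC by default), which can be checked with:
ip -br addr show
ip route show default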
The reusable part of my snippet for the host machine has to be consistent with my existing firewall, shown below.
Host side
{ config, ... }: {
networking = {
nat = {
enable = true;
internalInterfaces = [ "vnet0" "vnet1" ]; # QEMU virtual NIC interfaces
externalInterface = "wlp0s20f3"; # Host wireless interface
};
firewall = {
enable = true;
allowPing = true;
trustedInterfaces = [ "tailscale0" ];
allowedUDPPorts = [ config.services.tailscale.port 41641 ];
allowedTCPPorts = [ config.services.tailscale.port ];
interfaces."tailscale0".allowedTCPPorts = config.services.openssh.ports;
extraCommands = ''
iptables -A FORWARD -i vnet0 -o wlp0s20f3 -j ACCEPT
iptables -A FORWARD -i wlp0s20f3 -o vnet0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o wlp0s20f3 -j MASQUERADE
iptables -A FORWARD -i vnet1 -o wlp0s20f3 -j ACCEPT
iptables -A FORWARD -i wlp0s20f3 -o vnet1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.101.0/24 -o wlp0s20f3 -j MASQUERADE
'';
extraStopCommands = ''
iptables -D FORWARD -i vnet0 -o wlp0s20f3 -j ACCEPT || true
iptables -D FORWARD -i wlp0s20f3 -o vnet0 -m state --state RELATED,ESTABLISHED -j ACCEPT || true
iptables -t nat -D POSTROUTING -s 192.168.100.0/24 -o wlp0s20f3 -j MASQUERADE || true
iptables -D FORWARD -i vnet1 -o wlp0s20f3 -j ACCEPT || true
iptables -D FORWARD -i wlp0s20f3 -o vnet1 -m state --state RELATED,ESTABLISHED -j ACCEPT || true
iptables -t nat -D POSTROUTING -s 192.168.101.0/24 -o wlp0s20f3 -j MASQUERADE || true
'';
};
};
}
The remaining snippet combines IP address classification (via IP sets), nftables rules, and advanced routing (policy routing based on marks or IPs) to ensure traffic goes through the intended network bridge and that traffic intended for one bridge doesn’t leak to the other. The host network topology is primarily tap, but I also need tun for the Rocker ports on QEMU.
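A rough sketch of the mark-plus-policy-routing part (table numbers, marks and subnets are illustrative, not taken from my configuration):
sudo nft add table inet vmsteer
sudo nft add chain inet vmsteer pre '{ type filter hook prerouting priority mangle ; }'
sudo nft add rule inet vmsteer pre iifname "vnet0" ip saddr 192.168.100.0/24 meta mark set 0x100
sudo ip rule add fwmark 0x100 table 100
sudo ip route add default dev wlp0s20f3 table 100   # or "via <gateway>" if the uplink needs one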