2026-01-19
kubernetes, QEMU, containers, nix, wasm, Linux, microservices, cloud, devops, compliance, orchestration, policy engines, filesystems
This post builds on policy engines that provide automated governance, admission control, security and compliance, network segmentation, and resource control for the underlying infrastructure.1 It is written for the rising awareness of individuals' ownership of their data, the mandates of policies such as Article 25 of the GDPR (EU) and the DPDP Rules2 (India), and the need for deterministic AI architecture.3
Multi-tenant systems change continuously, and their availability and scaling needs differ by use case. This in turn changes the required models of isolation, trust (verifiable, or established through participation-based social consensus), and storage semantics; any microservice or app that feeds on this data and infrastructure must be packaged and deployed in a way that complies with the above.
This is typically done by building the runtime and environment needed to compile and run the app, separated from the host machine(s) and served over a network. The mechanism can range from native sandboxing to any variant of a container: an artifact composed of the minimum necessary OS components, using namespaces, syscall filtering, and cgroups to control compute resources, packaged by software such as Podman or even a handy Python script.4 These variants can be built-in binaries like chroot and systemd-nspawn, software packages like Docker, or unikernels, differing primarily in what they isolate and how.
Sandboxing
chroot alters a process's view of the filesystem only, offering minimal security on its own. Namespaces are a set of Linux kernel features that isolate system resources (filesystem, processes, network, users, etc.) to create a genuinely isolated environment. They isolate PIDs, network, UTS (hostname), user and group IDs, and mount points far more thoroughly than chroot.
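A minimal sketch of combining the two on Linux (assuming root or the relevant capabilities; the root filesystem at /srv/rootfs and the /bin/sh inside it are hypothetical):

```python
# Sketch: namespaces do the isolation, chroot only narrows the filesystem view.
import ctypes, os

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWNS, CLONE_NEWUTS = 0x00020000, 0x04000000
CLONE_NEWPID, CLONE_NEWNET = 0x20000000, 0x40000000

def sandbox(rootfs: str) -> None:
    # unshare() gives this process new mount/UTS/net namespaces and a new
    # PID namespace for its children.
    if libc.unshare(CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNET) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed")
    os.chroot(rootfs)          # filesystem view only; not a security boundary by itself
    os.chdir("/")
    pid = os.fork()            # first child becomes PID 1 of the new PID namespace
    if pid == 0:
        os.execv("/bin/sh", ["/bin/sh"])
    os.waitpid(pid, 0)

sandbox("/srv/rootfs")         # hypothetical pre-populated root filesystem
```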
Other languages
WASM
Cloudflare Workers all run in V8 sandboxes; there are no containers. You can write your worker in JavaScript/TypeScript or compile it to WASM. A container cannot call another container in the same process, but V8 isolates can. In other words: by deploying WASM in V8 sandboxes, you get the developer benefits of microservices with the runtime benefits of monoliths. Cloudflare is not the only provider doing this; Wasmer is building a solution in this space as well.5
Nix
Nix is a dynamically typed, domain-specific programming language. It reads a configuration expressed in the Nix language,6 evaluates it to produce derivations (reproducible build plans), and arranges dependencies into a graph. The most common dependencies are shared at the root of this graph, and less common ones become leaves. Each derivation is a pure function of its inputs, so when Nix builds it (either from source or by fetching from a cache), the output is placed in /nix/store under a path that includes a cryptographic hash of all its inputs. As long as the build definition and its inputs don't change, the hash, and therefore the store path, stays the same. Nixpkgs provides standard utilities and definitions used by most derivations, and the results are cached in both the Nix store and local caches (and binary caches) for efficiency. These hashes and the store can be examined with tools like nix-store --query and the store's own database (e.g. db.sqlite). It is a verifiable trust model with non-persistent storage: store paths can be garbage-collected, so unlike blockchains, it is compatible with the GDPR's right to be forgotten.
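A small sketch of inspecting the store described above, assuming a Nix installation with nix-store on PATH (the store path argument is a placeholder):

```python
# Walk the runtime closure of a store path via nix-store --query.
import subprocess

def closure_of(store_path: str) -> list[str]:
    # --requisites prints the path itself plus everything it references,
    # transitively, i.e. its full runtime closure.
    out = subprocess.run(
        ["nix-store", "--query", "--requisites", store_path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

for p in closure_of("/nix/store/<hash>-hello-2.12.1"):  # placeholder path
    print(p)
```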
The Nix store can be shared from the host to guest VMs or containers using virtiofs with POSIX ACLs (Linux to Linux).
Nix modules can be ported to other Linux distros.7
Container orchestration
GitOps
When using declarative configurations stored in Git as the source of truth, i.e. GitOps, the desired state of a cluster is defined in version-controlled files, and a controller continuously reconciles the actual cluster state to match that desired state. This provides strong traceability, versioning, and rollback semantics, properties directly aligned with compliance requirements.
Argo CD runs as a Kubernetes controller and continuously monitors Git repositories to ensure that the deployed state on the cluster matches the declared state. Its sibling projects extend this model:
- Argo Workflows – a container-native workflow engine implemented as Kubernetes CRDs. Workflows are defined as DAGs where each step runs in a container, enabling reproducible, auditable multi-step jobs such as CI/CD pipelines, data processing, and ML workflows.
- Argo Events – an event-driven automation framework that triggers actions based on event sources such as webhooks, message queues, and cloud events.
- Argo Rollouts – a progressive delivery engine for Kubernetes that provides canary releases, blue/green deployments, and traffic shaping, enabling controlled change management with observable metrics.
Taken together, these tools help implement declarative, observable, and policy-driven orchestration across CI/CD and runtime workflows.
Policy Enforcement and Gatekeepers
Modern orchestration platforms also integrate policy engines (e.g., Open Policy Agent, Gatekeeper) to enforce organizational and regulatory rules at deploy time. These engines can validate Kubernetes manifests, enforce image signing policies, restrict the use of privileged containers, or require labels and annotations that map back to governance models (a minimal sketch of such a rule follows the list below). Policy engines ensure that:
- all deployments are traceable to version-controlled intent
- policy violations are caught before changes take effect
- audit trails reflect both configuration and runtime decisions
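As an illustration only (real engines express these rules declaratively in Rego or Kyverno policies, not in Python), here is a sketch of the kind of deploy-time rule such an engine enforces: rejecting privileged containers and requiring an owner label.

```python
# Minimal sketch of a deploy-time policy check over a parsed manifest.
def violations(manifest: dict) -> list[str]:
    problems = []
    labels = manifest.get("metadata", {}).get("labels", {})
    if "owner" not in labels:                        # traceability back to a team
        problems.append("missing metadata.labels.owner")
    pod_spec = manifest.get("spec", {}).get("template", {}).get("spec", {})
    for c in pod_spec.get("containers", []):
        if c.get("securityContext", {}).get("privileged"):
            problems.append(f"container {c.get('name')} runs privileged")
    return problems

deployment = {   # toy Deployment manifest, already parsed from YAML
    "metadata": {"labels": {}},
    "spec": {"template": {"spec": {"containers": [
        {"name": "app", "securityContext": {"privileged": True}},
    ]}}},
}
print(violations(deployment))   # both rules flagged; an admission gate would reject this
```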
Storage semantics
s3fs vs Longhorn for DB storage
s3fs exposes object storage through a POSIX-like filesystem interface. However, this interface is an approximation layered on top of an object store that is designed for immutable or append-oriented objects. As a result, s3fs inherits the underlying system’s eventual consistency model and limited support for file locking. Operations such as rename, concurrent writes, or fine-grained locking may not behave atomically.
Longhorn, by contrast, provides distributed block storage with strong, immediate consistency guarantees. Volumes are synchronously replicated, and write ordering is preserved.
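A quick way to see the difference on your own mounts (a sketch; the mount paths are hypothetical and the observed behaviour depends entirely on the backend) is to probe advisory locking and rename on a file inside each mount:

```python
# Probe advisory locking and rename semantics on a mounted path.
import fcntl, os, sys

def probe(mount: str) -> None:
    path = os.path.join(mount, "probe.txt")
    with open(path, "w") as f:
        f.write("x")
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)   # advisory lock
            print(mount, "flock: supported")
        except OSError as e:
            print(mount, "flock:", e)
    os.rename(path, path + ".renamed")   # POSIX rename; object-store FUSE layers emulate this
    os.remove(path + ".renamed")

for m in sys.argv[1:]:   # e.g. python probe.py /mnt/s3fs /mnt/longhorn
    probe(m)
```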
Gatekeepers
A runtime policy engine such as OPA (here deployed on RKE nodes8) acts as an admission controller within the cluster to block non-compliant resources. Deployment gatekeeping refers to checks performed before resources reach the cluster, typically in a CI/CD pipeline, often using tools like kube-linter or the Kyverno CLI to prevent bad YAML from being applied.
Similarly, the nix flake check9 command acts as an effective, hermetic CI/CD gatekeeper by ensuring that only code which builds and passes its defined checks can be deployed. Incorporating this command into CI pipelines (e.g. GitHub Actions) prevents flawed code from passing through the pipeline and enforces a consistent, reproducible environment.
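A sketch of such a pipeline gate, assuming kube-linter and nix are installed on the CI runner and the Kubernetes YAML lives in a hypothetical manifests/ directory:

```python
# CI gate sketch: fail the pipeline if linting or flake checks fail.
import subprocess, sys

checks = [
    ["kube-linter", "lint", "manifests/"],   # static checks on Kubernetes YAML
    ["nix", "flake", "check"],               # hermetic build + tests of the flake
]
for cmd in checks:
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"gate failed: {' '.join(cmd)}", file=sys.stderr)
        sys.exit(result.returncode)
print("all gates passed")
```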
Routing the host network to a guest VM with QEMU
On Wi-Fi, I go with the less performant virtual NIC, i.e. vnet.
libvirt creates a virtual network (virbr0 by default) backed by TAP devices; the guest's internal IP (e.g. 192.168.122.x) is NATed behind the host's IP for external traffic.
SSH: Using the default QEMU “user” mode (slirp), the host acts as a gateway (typically 10.0.2.2). You can configure QEMU to forward specific host ports to guest ports.
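For example, forwarding host port 2222 to the guest's SSH port under user-mode networking (a sketch; the disk image path and memory size are placeholders):

```python
# Launch a guest with slirp/user networking and forward host:2222 -> guest:22.
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-m", "2048",
    "-drive", "file=guest.qcow2,format=qcow2",
    "-netdev", "user,id=n0,hostfwd=tcp::2222-:22",   # host 2222 -> guest 22
    "-device", "virtio-net-pci,netdev=n0",
])
# Then, from the host: ssh -p 2222 user@localhost
```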
On a wired LAN, a Linux bridge connects the virtual NIC directly to the physical network card (e.g. eth0) via a virtual switch (br0). The guest behaves as a separate machine, obtaining its own IP address from your network's DHCP server, and is visible on the local network.
Unix socket communication channels for kernel-verified, filesystem-based authentication logic
Socket-based authentication in Linux, particularly with Unix domain sockets like those used by ssh-unix-local, PostgreSQL, or Redis, relies primarily on filesystem permissions and peer credentials. Unlike network sockets, which depend on IP addresses and ports, Unix sockets operate within the local filesystem, which also gives them lower latency since no network stack is involved.
Port 0 in Linux is a reserved port that is not used for actual network communication. Instead, binding to port 0 is a programming technique that asks the operating system to allocate a dynamic (ephemeral) port number automatically when binding a socket.
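For example, binding to port 0 and asking the kernel which port it actually picked:

```python
# Bind to port 0: the kernel assigns a free ephemeral port automatically.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))          # 0 = "pick any free port for me"
s.listen()
print("kernel chose port", s.getsockname()[1])
s.close()
```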
Choosing either transport (TCP or a Unix socket) does not inherently prevent race conditions or deadlocks. Those are generally handled by Redis's single-threaded execution and by application logic (e.g. Nextcloud's), which uses Redis for atomic operations and locking mechanisms.
Unix domain sockets allow the server process to retrieve the credentials of the connecting client process, including the client's User ID (UID) and Group ID (GID). If these match the expected identity, the client is allowed to connect.
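A minimal sketch of reading those peer credentials on Linux with SO_PEERCRED (the socket path is a placeholder):

```python
# Read the connecting peer's PID/UID/GID from a Unix domain socket (Linux).
import os, socket, struct

SOCK = "/tmp/demo.sock"           # placeholder path
if os.path.exists(SOCK):
    os.unlink(SOCK)

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(SOCK)
os.chmod(SOCK, 0o660)             # filesystem permissions are the first gate
srv.listen(1)

conn, _ = srv.accept()
creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED, struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)   # struct ucred: pid, uid, gid
print(f"peer pid={pid} uid={uid} gid={gid}")
# The server can now compare uid/gid against an allow-list before serving.
conn.close(); srv.close()
```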
Another approach that pairs with UDS-based auth is agenix, which focuses on the safety of runtime secret provisioning, such that decryption happens only at runtime and the plaintext is never stored permanently.
On an untrusted network, SSH can be used to tunnel UDS connections over a secure, encrypted channel. This allows a remote client to connect to a UDS on a server as if it were a local connection, leveraging SSH's authentication and encryption for the transport. Alternatively, when a connection is made to a systemd-managed UDS, systemd can activate a corresponding service unit, which can then perform its own authentication, potentially using PAM or other mechanisms for user verification.
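A sketch of the SSH tunnelling variant (host name and socket paths are placeholders); OpenSSH 6.7+ can forward Unix domain sockets directly with -L:

```python
# Forward a remote Unix domain socket to a local one over SSH, then talk to
# the local socket as if the service were running on this machine.
import subprocess

subprocess.run([
    "ssh", "-N",                                   # no remote command, just forwarding
    "-L", "/tmp/app.sock:/run/app/app.sock",       # local socket : remote socket
    "user@server.example",                         # placeholder host
])
# While this runs, local clients connect to /tmp/app.sock; SSH provides the
# authentication and encryption for the transport.
```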
Footnotes
which can be provisioned with IaC tools like Terraform↩︎
https://static.pib.gov.in/WriteReadData/specificdocs/documents/2025/nov/doc20251117695301.pdf↩︎
https://www.pwc.com.au/digitalpulse/ai-predictions-2020-report.html↩︎
https://medium.com/@ursulaonyi/building-an-isolated-application-environment-on-linux-understanding-dockers-internal-mechanisms-068cd6c46090↩︎
most of this post is being tested in nix and wasm. https://carnotweat.srht.site/post/2025-10-20-os-and-networking-for-bare-metal-and-their-emulation.html↩︎
https://www.technowizardry.net/2024/09/adopting-nixos-for-my-rke1-kubernetes-nodes/↩︎
https://git.sr.ht/~carnotweat/hermitmesh/tree/main/item/flake.nix#L302↩︎