Hillingar: MirageOS Unikernels on NixOS

Published Wed 14 Dec 2022. Last update Mon 3 Feb 2025.
A post in the series.

A version of this blog post can be found on the Tarides website: https://tarides.com/blog/2022-12-14-hillingar-mirageos-unikernels-on-nixos.


An arctic mirage [1]

Introduction

The Domain Name System (DNS) is a critical component of the modern Internet, allowing domain names to be mapped to IP addresses, mailservers, and more2. This allows users to access services independently of their location on the Internet using human-readable names. We can host a DNS server ourselves to have authoritative control over our domain, protect the privacy of those using our server, increase reliability by not relying on a third-party DNS provider, and allow greater customization of the records served (or of the behaviour of the server itself). However, it can be quite challenging to deploy one’s own server reliably and reproducibly, as I discovered during my master’s thesis [2]. The Nix deployment system aims to address this. With a NixOS machine, deploying a DNS server is as simple as:

{
  services.bind = {
    enable = true;
    zones."freumh.org" = {
      master = true;
      file = "freumh.org.zone";
    };
  };
}

Which we can then query with

$ dig ryan.freumh.org @ns1.ryan.freumh.org +short
135.181.100.27

To enable the user to query our domain without specifying the nameserver, we have to create a glue record with our registrar pointing ns1.freumh.org to the IP address of our DNS-hosting machine.

You might notice this configuration is running the venerable bind3, which is written in C. As an alternative, using functional, high-level, type-safe programming languages to create network applications can greatly benefit safety and usability whilst maintaining performant execution [3]. One such language is OCaml.

MirageOS4 is a deployment method for these OCaml programs [4]. Instead of running them as traditional Unix processes, we create a specialised ‘unikernel’ operating system to run each application, which allows dead-code elimination, improving security through a smaller attack surface and improving efficiency.

However, to deploy a Mirage unikernel with NixOS, one must use the imperative deployment methodologies native to the OCaml ecosystem, eliminating the benefit of reproducible systems that Nix offers. This blog post will explore how we enabled reproducible deployments of Mirage unikernels by building them with Nix.

At this point, the curious reader might be wondering, what is ‘Nix’? Please see the separate webpage on Nix for more.

MirageOS

MirageOS is a library operating system that allows users to create unikernels, which are specialized operating systems that include both low-level operating system code and high-level application code in a single kernel and a single address space [4].

It was the first such ‘unikernel creation framework’, but comes from a long lineage of OS research, such as the exokernel library OS architecture [5]. Embedding application code in the kernel allows for dead-code elimination, removing OS interfaces that are unused, which reduces the unikernel’s attack surface and offers improved efficiency.

Contrasting software layers in existing VM appliances vs. unikernel’s standalone kernel compilation approach [4]

Mirage unikernels are written in OCaml6. OCaml is more practical for systems programming than other functional programming languages, such as Haskell, as it supports falling back on impure imperative code or mutable variables when warranted.

Deploying Unikernels

Now that we understand what Nix and Mirage are, and we’ve motivated the desire to deploy Mirage unikernels on a NixOS machine, what’s stopping us from doing just that? Well, to support deploying a Mirage unikernel, like for a DNS server, we would need to write a NixOS module for it.

A pared-down7 version of the bind NixOS module – the module used in our Nix expression for deploying a DNS server on NixOS (§) – is:

{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.services.bind;
in
{
  options = {
    services.bind = {
      enable = mkEnableOption "BIND domain name server";

      zones = mkOption {
        ...
      };
    };
  };

  config = mkIf cfg.enable {
    systemd.services.bind = {
      description = "BIND Domain Name Server";
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];

      serviceConfig = {
        ExecStart = "${pkgs.bind.out}/sbin/named";
      };
    };
  };
}

Notice the reference to pkgs.bind. This is the Nix derivation for the bind package from the Nixpkgs repository. Recall that every input to a Nix derivation is itself a Nix derivation (§); in order to use a package in a Nix expression – such as a NixOS module – we need to build said package with Nix. Once we build a Mirage unikernel with Nix, we can write a NixOS module to deploy it.
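
To make that goal concrete, such a module might look much like the bind module above, swapping the named binary for a Solo5 tender booting the unikernel image. The following is a purely hypothetical sketch, not an interface Hillingar provides: the pkgs.dns-unikernel package, the image path, and the tender flags are all assumptions.

{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.services.dns-unikernel;
in
{
  options.services.dns-unikernel.enable =
    mkEnableOption "DNS server unikernel";

  config = mkIf cfg.enable {
    systemd.services.dns-unikernel = {
      description = "MirageOS DNS server unikernel";
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];
      # The Solo5 hvt tender boots the unikernel image; the package name,
      # image path, and network flag here are assumptions for illustration.
      serviceConfig.ExecStart =
        "${pkgs.solo5}/bin/solo5-hvt --net:service=tap0 ${pkgs.dns-unikernel}/bin/dns.hvt";
    };
  };
}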

Building Unikernels

Mirage uses opam8, the OCaml package manager. As is common in programming language package managers, each opam package has a file which – among other metadata, such as build and install scripts – specifies its dependencies and their version constraints. For example9:

...
depends: [
  "arp" { ?monorepo & >= "3.0.0" & < "4.0.0" }
  "ethernet" { ?monorepo & >= "3.0.0" & < "4.0.0" }
  "lwt" { ?monorepo }
  "mirage" { build & >= "4.2.0" & < "4.3.0" }
  "mirage-bootvar-solo5" { ?monorepo & >= "0.6.0" & < "0.7.0" }
  "mirage-clock-solo5" { ?monorepo & >= "4.2.0" & < "5.0.0" }
  "mirage-crypto-rng-mirage" { ?monorepo & >= "0.8.0" & < "0.11.0" }
  "mirage-logs" { ?monorepo & >= "1.2.0" & < "2.0.0" }
  "mirage-net-solo5" { ?monorepo & >= "0.8.0" & < "0.9.0" }
  "mirage-random" { ?monorepo & >= "3.0.0" & < "4.0.0" }
  "mirage-runtime" { ?monorepo & >= "4.2.0" & < "4.3.0" }
  "mirage-solo5" { ?monorepo & >= "0.9.0" & < "0.10.0" }
  "mirage-time" { ?monorepo }
  "mirageio" { ?monorepo }
  "ocaml" { build & >= "4.08.0" }
  "ocaml-solo5" { build & >= "0.8.1" & < "0.9.0" }
  "opam-monorepo" { build & >= "0.3.2" }
  "tcpip" { ?monorepo & >= "7.0.0" & < "8.0.0" }
  "yaml" { ?monorepo & build }
]
...

Each of these dependencies will have its own dependencies with their own version constraints. As we can only link one version of each dependency into the resulting program, we need to find a set of dependency versions that satisfies all these constraints. This is not an easy problem. In fact, it’s NP-complete [6]. Opam uses the Zero Install10 SAT solver for dependency resolution.

Nixpkgs has many OCaml packages11 which we could provide as build inputs to a Nix derivation12. However, Nixpkgs has one global, coherent set of package versions13, 14. Support for installing multiple versions of a package concurrently comes from the fact that each is stored at a unique path and can be referenced separately, or symlinked, where required. So different projects or users that use different versions of Nixpkgs won’t conflict, but Nix does not do any dependency version resolution – everything is pinned15. This is a problem for opam projects with version constraints that can’t be satisfied with a single static instance of Nixpkgs.
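
For illustration, here is a minimal sketch of that property: two pinned Nixpkgs snapshots imported side by side (the channel tarballs are just examples), each providing its own bind at its own store path, with no version solving involved.

let
  # two pinned snapshots of Nixpkgs; each provides exactly one version of each package
  pkgsOld = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/nixos-22.05.tar.gz") { };
  pkgsNew = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/nixos-22.11.tar.gz") { };
in {
  # both can be built and referenced concurrently, as they live at
  # different store paths -- but neither set solves version constraints
  bindOld = pkgsOld.bind;
  bindNew = pkgsNew.bind;
}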

Luckily, a project from Tweag, opam-nix, already exists to deal with this16, 17. It runs the opam dependency version solver inside a Nix derivation, and then creates derivations for the resulting dependency versions18.
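
As a rough sketch of how this looks for a plain (non-Mirage) opam project – based on the opam-nix README, so the exact function names and arguments may have drifted, and "my-package" is a placeholder – a flake hands the project directory to opam-nix, which runs the solver and returns a package scope:

{
  inputs.opam-nix.url = "github:tweag/opam-nix";

  outputs = { self, opam-nix, ... }:
    let
      system = "x86_64-linux";
      on = opam-nix.lib.${system};
      # Runs the opam solver over ./my-package.opam (via IFD) and creates
      # a Nix derivation for each resolved dependency version.
      scope = on.buildOpamProject { } "my-package" ./. { };
    in {
      packages.${system}.default = scope."my-package";
    };
}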

This still doesn’t support building our Mirage unikernels, though. Unikernels quite often need to be cross-compiled: compiled to run on a platform other than the one they’re being built on. A common target, Solo519, is a sandboxed execution environment for unikernels. It acts as a minimal shim layer to interface between unikernels and different hypervisor backends. Solo5 uses a different C library, which requires cross-compilation. Mirage 420 supports cross-compilation with toolchains in the Dune build system21. This uses a host compiler installed in an opam switch (a virtual environment) as normal, as well as a target compiler22. But the cross-compilation context of packages is only known at build time, as some metaprogramming modules may require preprocessing with the host compiler. To ensure that the right compilation context is used, we have to provide Dune with the sources of all our dependencies. A tool called opam-monorepo was created to do just that23.

We extended the opam-nix project to support the opam-monorepo workflow with this pull request: github.com/tweag/opam-nix/pull/18.

This is very low-level support for building Mirage unikernels with Nix, however. In order to provide a better user experience, we also created the Hillingar Nix flake: github.com/RyanGibb/hillingar. This wraps the Mirage tooling and opam-nix function calls so that a simple high-level flake can be dropped into a Mirage project to support building it with Nix. To add Nix build support to a unikernel, simply:

# create a flake from hillingar's default template
$ nix flake new . -t github:RyanGibb/hillingar
# substitute the name of the unikernel you're building
$ sed -i 's/throw "Put the unikernel name here"/"<unikernel-name>"/g' flake.nix
# build the unikernel with Nix for a particular target
$ nix build .#<target>

For example, see the flake for building the Mirage website as a unikernel with Nix: github.com/RyanGibb/mirage-www/blob/master/flake.nix.

Dependency Management

To step back for a moment and look at the big picture, we can consider a number of different types of dependencies at play here:

  1. System dependencies: dependencies installed through the system package manager – depexts in opam parlance. For Hillingar this is Nix, but other platforms’ package managers include apt, pacman, and brew. For unikernels, these are often C libraries like gmp.
  2. Library dependencies: dependencies installed through the programming language package manager, for example opam, pip, or npm. These are the dependencies that often have version constraints and require resolution, possibly using a SAT solver.
  3. File dependencies: dependencies at the file system level of granularity, for example C files, Java (non-inner) classes, or OCaml modules. Most likely these will be within a single project, but in a monorepo they could span many projects which all interoperate (e.g., Nixpkgs). This is the level of granularity that build systems like Make, Dune, and Bazel often deal with.
  4. Function dependencies: dependencies between functions, or another unit of code native to a language. For example, if function a calls function b, then a ‘depends’ on b. This is the level of granularity that compilers and interpreters are normally concerned with. In the realm of higher-order functions this dependence may not be known in advance, but this is essentially the same problem that build systems face with dynamic dependencies [7].

Nix deals well with system dependencies, but it doesn’t have a native way of resolving library dependency versions. Opam deals well with library dependencies, but it doesn’t have a consistent way of installing system packages in a reproducible way. And Dune deals with file dependencies, but not the others. The OCaml compiler keeps track of function dependencies when compiling and linking a program.
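
For instance, at the system-dependency level Nix simply pins a C library like gmp as a derivation input – there is no version constraint to resolve, only a path fixed by the Nixpkgs revision. A minimal sketch, assuming a Dune project in the current directory:

{ pkgs ? import <nixpkgs> { } }:

pkgs.stdenv.mkDerivation {
  pname = "example-ocaml-project";
  version = "0.1.0";
  src = ./.;
  # system dependency (the C library that opam's conf-gmp virtual package
  # expects); its version is fixed by the pinned Nixpkgs revision, not solved
  buildInputs = [ pkgs.gmp ];
  # build-time tools; file-level dependencies are left to Dune
  nativeBuildInputs = [ pkgs.ocaml pkgs.dune_3 ];
  buildPhase = "dune build";
  installPhase = "dune install --prefix $out";
}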

Cross-Compilation

Dune is used to support cross-compilation for Mirage unikernels (§). We encode the cross-compilation context in Dune using the preprocess stanza from Dune’s DSL, for example from mirage-tcpip:

(library
 (name tcp)
 (public_name tcpip.tcp)
 (instrumentation
  (backend bisect_ppx))
 (libraries logs ipaddr cstruct lwt-dllist mirage-profile tcpip.checksum
   tcpip duration randomconv fmt mirage-time mirage-clock mirage-random
   mirage-flow metrics)
 (preprocess
  (pps ppx_cstruct)))

This tells Dune to preprocess the library with ppx_cstruct, a preprocessor that must be built with, and run by, the host compiler. As this information is only available to the build system, supporting cross-compilation requires fetching the sources of all dependencies, which is what the opam-monorepo tool does:

Cross-compilation - the details of how to build some native code can come late in the pipeline, which isn’t a problem if the sources are available24.

This means we’re essentially encoding the compilation context in the build system rules. To remove the requirement to clone dependency sources locally with opam-monorepo, we could try to encode the compilation context in the package manager instead. However, preprocessing can happen at the OCaml module level of granularity; Dune deals with this level of granularity through file dependencies, but opam doesn’t. Tighter integration between the build system and the package manager, as with Rust’s Cargo, could improve this situation. There are some plans towards modularising opam and creating tighter integration with Dune.

There is also the possibility of using Nix to avoid cross-compilation. Nixpkgs’ cross-compilation support25 will not innately help us here, as it simply specifies how to package software in a cross-compilation-friendly way. However, Nix remote builders enable reproducible builds on a remote machine26 with Nix installed, which may sidestep the need for cross-compilation in certain contexts.
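
As a concrete sketch, a NixOS host can declare a remote builder declaratively, and Nix will ship derivations to it transparently; the host name, key path, and architecture below are placeholders, not part of Hillingar.

{
  nix.distributedBuilds = true;
  nix.buildMachines = [{
    # hypothetical builder; substitute a real host and SSH key
    hostName = "builder.example.org";
    system = "aarch64-linux";
    sshUser = "nix";
    sshKey = "/etc/nix/builder_ed25519";
    maxJobs = 4;
    supportedFeatures = [ "kvm" ];
  }];
}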

Version Resolution

Hillingar uses the Zero Install SAT solver for version resolution through opam. While this works, it isn’t the most principled approach for getting Nix to work with library dependencies. Some tools just use Nix for system dependencies and keep the existing language tooling for library dependencies27. But generally, X2nix projects are numerous and created in an ad hoc way. Part of this is dealing with every language ecosystem’s package repository system – there are existing approaches28, 29 aimed at reducing code duplication – but there is still the fundamental problem of version resolution. Nix uses pointers (paths) to refer to different versions of a dependency, which works well when solving the diamond dependency problem for system dependencies, but we don’t have this luxury when linking a binary with library dependencies.

The diamond dependency problem [6].

This is exactly why opam uses a constraint solver to find a coherent package set. But what if we could split the version-solving functionality into something that can tie into any language ecosystem? This could be a more principled, elegant approach to the current fragmented state of library dependency management (programming language package managers). It would require some ecosystem-specific logic to obtain, for example, the version constraints, and to create derivations for the resulting sources, but the core functionality could be ecosystem agnostic. As with opam-nix, materialization30 could be used to commit a lock file and avoid IFD. Although perhaps this is too lofty a goal to be practical, and perhaps the real issues are organisational rather than technical.

Nix allows multiple versions of a package to be installed simultaneously by having different derivations refer to different paths in the Nix store. What if we could use a similar approach for linking binaries, to sidestep version constraint solving altogether at the cost of larger binaries? Nix makes a similar tradeoff with disk space. A very simple approach might be to programmatically prepend/append the dependency version to the names of the functions in D – vers1 and vers2 for the calls from packages B and C respectively, in the diagram above.

Another way to avoid NP-completeness is to attack assumption 4: what if two different versions of a package could be installed simultaneously? Then almost any search algorithm will find a combination of packages to build the program; it just might not be the smallest possible combination (that’s still NP-complete). If B needs D 1.5 and C needs D 2.2, the build can include both packages in the final binary, treating them as distinct packages. I mentioned above that there can’t be two definitions of printf built into a C program, but languages with explicit module systems should have no problem including separate copies of D (under different fully-qualified names) into a program. [6]

Another, wackier, idea is, instead of having programmers manually specify constraints with version numbers, to resolve dependencies purely based on typing31. The issue here is that solving dependencies would now involve type checking, which could prove computationally expensive.

Build Systems

The build script in a Nix derivation (if it doesn’t invoke a compiler directly) often invokes a build system like Make, or in this case Dune. Nix can itself be considered a build system with a suspending scheduler and deep constructive trace rebuilding [7], but it operates at a coarse-grained package level, invoking these finer-grained build systems to deal with file dependencies.

In Chapter 10 of the original Nix thesis [8], low-level build management using Nix is discussed, proposing extending Nix to support file dependencies. For example, to build the ATerm library:

{sharedLib ? true}:

with (import ../../../lib);

rec {
  sources = [
    ./afun.c ./aterm.c ./bafio.c ./byteio.c ./gc.c ./hash.c
    ./list.c ./make.c ./md5c.c ./memory.c ./tafio.c ./version.c
  ];

  compile = main: compileC {inherit main sharedLib;};

  libATerm = makeLibrary {
    libraryName = "ATerm";
    objects = map compile sources;
    inherit sharedLib;
  };
}

This has the advantage over traditional build systems like Make that if a dependency isn’t specified, the build will fail. And if the build succeeds, it will succeed reproducibly. So it’s not possible to write an incomplete dependency specification that leads to inconsistent builds.

A downside, however, is that Nix doesn’t support dynamic dependencies: we need to know a derivation’s inputs before invoking its build script. This is why, in Hillingar, we need to use IFD to import from a derivation that invokes opam to solve dependency versions.
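
To make the IFD pattern concrete, here is a toy sketch: a derivation’s output (standing in for the opam solver’s result) is imported back into the evaluation, forcing a build at evaluation time. The package versions shown are invented for illustration.

let
  pkgs = import <nixpkgs> { };
  # stand-in for running the opam solver inside a derivation;
  # it writes a Nix expression describing the solved versions
  solved = pkgs.runCommand "solve-versions" { } ''
    echo '{ lwt = "5.6.1"; tcpip = "7.1.2"; }' > $out
  '';
in
# importing the derivation's output forces it to be built during
# evaluation -- this is import-from-derivation (IFD)
(import solved).lwt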

There is prior art, called tumbleweed, that aims to support building Dune projects with Nix in the low-level manner described. While this project is now abandoned, it shows the difficulties of trying to work with existing ecosystems. The Dune build system files need to be parsed and interpreted in Nix, which requires either convoluted and error-prone Nix code or painfully slow IFD. The former approach is taken by tumbleweed, which means it could potentially benefit from improvements to the Nix language, but fundamentally it still requires the complex task of reimplementing part of Dune in another language.

I would be very interested if anyone reading this knows whether this idea went anywhere! A potential issue I see is that the computational and storage overheads of storing derivations in the Nix store, while manageable for coarse-grained package dependencies, might prove too costly for fine-grained file dependencies.

While on the topic of build systems: to enable more minimal builds, tighter integration with the compiler would enable analysing function dependencies32. For example, Dune could recompile only the functions that have changed since the last invocation. Taking granularity to such a fine degree will greatly increase the size of the build graph, however, and past a certain point recomputing this graph on every invocation may prove more costly than doing the actual rebuilding. Perhaps persisting the build graph and calculating differentials of it could mitigate this. A meta-build-graph, if you will.

Evaluation

Hillingar’s primary limitations are (1) the complex integration required with the OCaml ecosystem to solve dependency version constraints using opam-nix, and (2) that cross-compilation requires cloning all sources locally with opam-monorepo (§). Another issue that proved an annoyance during this project is the Nix DSL’s dynamic typing. When writing simple derivations this often isn’t a problem, but when writing complicated logic it quickly gets in the way of productivity, and the runtime errors produced can be very hard to parse. Thankfully there is work towards creating a typed language for the Nix deployment system, such as Nickel33. However, gradual typing is hard, and Nickel still isn’t ready for real-world use despite having been open-sourced for almost two years (two years next week, as of writing).

A glaring omission is that, despite it being the primary motivation, we haven’t actually written a NixOS module for deploying a DNS server as a unikernel. There are still questions about how to provide zonefile data declaratively to the unikernel and how to manage the runtime of deployed unikernels. One option for the latter is Albatross34, which has recently had support for building with Nix added35. Albatross aims to provision resources for unikernels, such as network access, share resources between users, and monitor unikernels with a Unix daemon. Using Albatross to manage some of the inherently imperative processes behind unikernels, as well as to share access to unikernel resources between users on a NixOS system, could simplify the creation and improve the functionality of a NixOS module for a unikernel.

There also exists related work on the reproducible building of Mirage unikernels, specifically on improving the reproducibility of opam packages (as Mirage unikernels are themselves opam packages)36. Hillingar differs in that it only uses opam for version resolution, and instead uses Nix to provide dependencies, which gives reproducibility through pinned Nix derivation inputs and builds that are isolated by default.

Conclusion

To summarise, this project was motivated (§) by deploying unikernels on NixOS (§). Towards this end, we added support for building MirageOS unikernels with Nix; we extended opam-nix to support the opam-monorepo workflow and created the Hillingar project to provide a usable Nix interface (§). This required scrutinising the OCaml and Nix ecosystems along the way in order to marry them; some thoughts on dependency management were developed in this context (§). Many strange issues and edge cases were uncovered during this project, but now that we’ve encoded their solutions in Nix, hopefully others won’t have to repeat the experience!

While only the first was the primary motivation, the benefits of building unikernels with Nix are:

  • Reproducible and low-config unikernel deployment using NixOS modules is enabled.
  • Nix allows reproducible builds, pinning system dependencies and composing multiple language environments. For example, the OCaml package conf-gmp is a ‘virtual package’ that relies on a system installation of the C/Assembly library gmp (The GNU Multiple Precision Arithmetic Library). Nix easily allows us to depend on this package in a reproducible way.
  • We can use Nix to support building on different systems (§).

While NixOS and MirageOS take fundamentally very different approaches, they’re both trying to bring some kind of functional programming paradigm to operating systems. NixOS does this in a top-down manner, trying to tame Unix with functional principles like laziness and immutability37; whereas MirageOS does this by throwing Unix out the window and rebuilding the world from scratch in a very much bottom-up approach. Despite these two projects having different motivations and goals, Hillingar aims to get the best from both worlds by marrying the two.


I want to thank some people for their help with this project:

  • Lucas Pluvinage for invaluable help with the OCaml ecosystem.
  • Alexander Bantyev for getting me up to speed with the opam-nix project and working with me on the opam-monorepo workflow integration.
  • David Allsopp for his opam expertise.
  • Jules Aguillon and Olivier Nicole for their fellow Nix-enthusiasm.
  • Sonja Heinze for her PPX insights.
  • Anil Madhavapeddy for having a discussion that led to the idea for this project.
  • Björg Bjarnadóttir for her Icelandic language consultation (‘Hillingar’).
  • And finally, everyone at Tarides for being so welcoming and helpful!

This work was completed with the support of Tarides.

If you spot any errors, have any questions, notice something I’ve mentioned that someone has already thought of, or notice any incorrect assumptions or assertions made, please get in touch at ryan@freumh.org.

If you have a unikernel, consider trying to build it with Hillingar, and please report any problems at github.com/RyanGibb/hillingar/issues!


References

[1]
W. H. Lehn, “The Novaya Zemlya effect: An arctic mirage,” J. Opt. Soc. Am., JOSA, vol. 69, no. 5, pp. 776–781, May 1979, doi: 10.1364/JOSA.69.000776. [Online]. Available: https://opg.optica.org/josa/abstract.cfm?uri=josa-69-5-776. [Accessed: Oct. 05, 2022]
[2]
R. T. Gibb, “Spatial Name System,” Nov. 30, 2022. [Online]. Available: http://arxiv.org/abs/2210.05036. [Accessed: Jun. 30, 2023]
[3]
A. Madhavapeddy, A. Ho, T. Deegan, D. Scott, and R. Sohan, “Melange: Creating a "functional" internet,” SIGOPS Oper. Syst. Rev., vol. 41, no. 3, pp. 101–114, Mar. 2007, doi: 10.1145/1272998.1273009. [Online]. Available: https://doi.org/10.1145/1272998.1273009. [Accessed: Feb. 10, 2022]
[4]
A. Madhavapeddy et al., “Unikernels: Library operating systems for the cloud,” SIGARCH Comput. Archit. News, vol. 41, no. 1, pp. 461–472, Mar. 2013, doi: 10.1145/2490301.2451167. [Online]. Available: https://doi.org/10.1145/2490301.2451167. [Accessed: Jan. 25, 2022]
[5]
D. R. Engler, M. F. Kaashoek, and J. O’Toole, “Exokernel: An operating system architecture for application-level resource management,” SIGOPS Oper. Syst. Rev., vol. 29, no. 5, pp. 251–266, Dec. 1995, doi: 10.1145/224057.224076. [Online]. Available: https://doi.org/10.1145/224057.224076. [Accessed: Jan. 25, 2022]
[6]
R. Cox, “Version SAT,” Dec. 13, 2016. [Online]. Available: https://research.swtch.com/version-sat. [Accessed: Oct. 16, 2022]
[7]
A. Mokhov, N. Mitchell, and S. Peyton Jones, “Build systems à la carte,” Proc. ACM Program. Lang., vol. 2, pp. 1–29, Jul. 2018, doi: 10.1145/3236774. [Online]. Available: https://dl.acm.org/doi/10.1145/3236774. [Accessed: Oct. 11, 2022]
[8]
E. Dolstra, “The purely functional software deployment model,” [s.n.], S.l., 2006 [Online]. Available: https://edolstra.github.io/pubs/phd-thesis.pdf

  1. Generated with Stable Diffusion and GIMP↩︎

  2. DNS LOC↩︎

  3. ISC bind has many CVEs↩︎

  4. mirage.io↩︎

  5. Credits to Takayuki Imada↩︎

  6. Barring the use of foreign function interfaces (FFIs).↩︎

  7. The full module can be found here↩︎

  8. opam.ocaml.org↩︎

  9. For mirage-www targeting hvt.↩︎

  10. 0install.net↩︎

  11. github.com/NixOS/nixpkgs pkgs/development/ocaml-modules↩︎

  12. NB they are not as complete nor up-to-date as those in opam-repository github.com/ocaml/opam-repository.↩︎

  13. Bar some exceptional packages that have multiple major versions packaged, like Postgres.↩︎

  14. In fact Arch has the same approach, which is why it doesn’t support partial upgrades (§).↩︎

  15. This has led to much confusion with how to install a specific version of a package github.com/NixOS/nixpkgs/issues/9682.↩︎

  16. github.com/tweag/opam-nix↩︎

  17. Another project, timbertson/opam2nix, also exists but depends on a binary of itself at build time as it’s written in OCaml as opposed to Nix, is not as minimal (higher LOC count), and it isn’t under active development (with development focused on github.com/timbertson/fetlock)↩︎

  18. Using something called Import From Derivation (IFD) nixos.wiki/wiki/ImportFromDerivation. Materialisation can be used to create a kind of lock file for this resolution, which can be committed to the project to avoid having to do IFD on every new build. An alternative may be to use opam’s built-in version pinning.↩︎

  19. github.com/Solo5/solo5↩︎

  20. mirage.io/blog/announcing-mirage-40↩︎

  21. dune.build↩︎

  22. github.com/mirage/ocaml-solo5↩︎

  23. github.com/tarides/opam-monorepo↩︎

  24. github.com/tarides/opam-monorepo↩︎

  25. nixos.org/manual/nixpkgs/stable/#chap-cross↩︎

  26. nixos.org/manual/nix/stable/advanced-topics/distributed-builds.html↩︎

  27. docs.haskellstack.org/en/stable/nixintegration↩︎

  28. github.com/nix-community/dream2nix↩︎

  29. github.com/timbertson/fetlock↩︎

  30. https://github.com/tweag/opam-nix#materialization↩︎

  31. twitter.com/TheLortex/status/1571884882363830273↩︎

  32. signalsandthreads.com/build-systems/#4305↩︎

  33. www.tweag.io/blog/2020-10-22-nickel-open-sourcing↩︎

  34. hannes.robur.coop/Posts/VMM↩︎

  35. https://github.com/roburio/albatross/pull/120↩︎

  36. hannes.nqsb.io/Posts/ReproducibleOPAM↩︎

  37. tweag.io/blog/2022-07-14-taming-unix-with-nix↩︎