The guide on using the application catalog provides user-friendly information on using the Unikraft application registry and the catalog
repository.
It gives some hints about what's happening behind the scenes, but it aims to shield the user from those details.
This guide takes a deep dive into the internals of configuring, building, and running unikernel applications from the catalog.
It is aimed at those more technically inclined who are interested in understanding what's happening behind the scenes and who may want to contribute to the application catalog.
Similar to the guide on using the application catalog, we will use two applications: `nginx/1.25` and `http-go1.21`.
For the `nginx/1.25` bincompat application, there is a build phase and a run phase.
The build phase creates the output kernel, and the run phase launches a Unikraft virtual machine instance from the kernel.
The kernel image combines the actual Unikraft kernel and the application filesystem, packed as an initial ramdisk. We call this packed initial ramdisk the embedded initial ramdisk, or embedded initrd.
The build and run configuration is part of the `Kraftfile`.
The `Kraftfile` defines:
- the name of the application: `nginx`
- the command used to start the application: `/usr/bin/nginx`
- the template used: `app-elfloader`
- the library components (`unikraft`, `lwip`, `libelf`)
- the `CONFIG_...` option that enables the embedded initrd build

The root filesystem is generated from a `Dockerfile` specification, as configured in the `Kraftfile`.
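For orientation, a minimal `Kraftfile` along these lines might look roughly as follows. This is a sketch, not the exact file from the catalog: the template source, component versions, and in particular the `CONFIG_` option name are assumptions.

```yaml
spec: v0.6

name: nginx

rootfs: ./Dockerfile

cmd: ["/usr/bin/nginx"]

# Binary-compatibility layer: the app-elfloader template runs Linux ELFs.
template:
  source: https://github.com/unikraft/app-elfloader.git
  version: staging

unikraft:
  version: staging
  kconfig:
    # Assumed option name; it embeds the initrd into the kernel image.
    CONFIG_LIBVFSCORE_AUTOMOUNT_EINITRD: 'y'

libraries:
  lwip: staging
  libelf: staging

targets:
- plat: qemu
  arch: x86_64
```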
The `Dockerfile` specification collects the required files (the binary executable, its dependent libraries, configuration files, and data files):
```Dockerfile
FROM nginx:1.25.3-bookworm AS build

# These are normally symlinks to /dev/stdout and /dev/stderr, which don't
# (currently) work with Unikraft. We remove them, such that NGINX will create
# them by hand.
RUN rm /var/log/nginx/error.log
RUN rm /var/log/nginx/access.log

FROM scratch

# NGINX binaries, modules, configuration, log and runtime files
COPY --from=build /usr/sbin/nginx /usr/bin/nginx
COPY --from=build /usr/lib/nginx /usr/lib/nginx
COPY --from=build /etc/nginx /etc/nginx
COPY --from=build /etc/passwd /etc/passwd
COPY --from=build /etc/group /etc/group
COPY --from=build /var/log/nginx /var/log/nginx
COPY --from=build /var/cache/nginx /var/cache/nginx
COPY --from=build /var/run /var/run

# Libraries
COPY --from=build /lib/x86_64-linux-gnu/libcrypt.so.1 /lib/x86_64-linux-gnu/libcrypt.so.1
COPY --from=build /lib/x86_64-linux-gnu/libpcre2-8.so.0 /lib/x86_64-linux-gnu/libpcre2-8.so.0
COPY --from=build /lib/x86_64-linux-gnu/libssl.so.3 /lib/x86_64-linux-gnu/libssl.so.3
COPY --from=build /lib/x86_64-linux-gnu/libcrypto.so.3 /lib/x86_64-linux-gnu/libcrypto.so.3
COPY --from=build /lib/x86_64-linux-gnu/libz.so.1 /lib/x86_64-linux-gnu/libz.so.1
COPY --from=build /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/libc.so.6
COPY --from=build /lib64/ld-linux-x86-64.so.2 /lib64/ld-linux-x86-64.so.2
COPY --from=build /etc/ld.so.cache /etc/ld.so.cache

# Custom configuration files, including using a single process for NGINX
COPY ./conf/nginx.conf /etc/nginx/nginx.conf
COPY ./conf/unikraft.local.crt /etc/nginx/unikraft.local.crt
COPY ./conf/unikraft.local.key /etc/nginx/unikraft.local.key

# Web root
COPY ./wwwroot /wwwroot
```
The `Dockerfile` is interpreted via BuildKit, hence the need to set up a BuildKit container.
The build command requires the BuildKit container to be configured beforehand:
```bash
docker run -d --name buildkitd --privileged moby/buildkit:latest
export KRAFTKIT_BUILDKIT_HOST=docker-container://buildkitd
```
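To sanity-check the setup, you can confirm that the `buildkitd` container is up before building; this uses standard Docker commands, nothing Unikraft-specific:

```bash
# Expect a running container named "buildkitd".
docker ps --filter "name=buildkitd" --format "{{.Names}}: {{.Status}}"
```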
The build command is:

```bash
kraft build --plat qemu --arch x86_64
```
`kraft build` goes through the following steps:

1. Generate the root filesystem from the `Dockerfile` specification.
2. Configure and build the kernel, with the root filesystem packed as an embedded initrd, as defined in the `Kraftfile`.

The resulting embedded kernel image is `.unikraft/build/nginx_qemu-x86_64`:
```console
$ ls -lh .unikraft/build/nginx_qemu-x86_64
-rwxr-xr-x 2 razvand docker 15M Jan  2 21:23 .unikraft/build/nginx_qemu-x86_64
```
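If you want to confirm what was produced, `file` should identify the image as an ELF binary; the expected output sketched in the comment is illustrative, not copied from a real run:

```bash
# Expect something along the lines of:
# .unikraft/build/nginx_qemu-x86_64: ELF 64-bit LSB executable, x86-64, ...
file .unikraft/build/nginx_qemu-x86_64
```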
This image is run with a command such as:

```bash
kraft run -W -p 8080:80 .
```
It can also be run manually with `qemu-system-x86_64`:

```bash
qemu-system-x86_64 \
    -kernel .unikraft/build/nginx_qemu-x86_64 \
    -nographic \
    -m 128M \
    -device virtio-net-pci,mac=02:b0:b0:d3:d2:01,netdev=hostnet0 \
    -netdev user,id=hostnet0,hostfwd=tcp::8080-:80 \
    -append "/usr/bin/nginx" \
    -cpu max
```
This starts a QEMU virtual machine instance; the `hostfwd=tcp::8080-:80` option forwards host port 8080 to guest port 80. Query it using:
```bash
curl http://localhost:8080
```
If you want to use a bridge interface, first create the bridge interface as `root` (prefix with `sudo` if required):

```bash
kraft net create -n 172.44.0.1/24 virbr0
```
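You can verify that the bridge was created and got the gateway address using standard `iproute2` tooling:

```bash
# Expect virbr0 to be UP with the address 172.44.0.1/24.
ip address show dev virbr0
```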
And then run manually with `qemu-system-x86_64` as `root` (prefix with `sudo` if required):
```bash
qemu-system-x86_64 \
    -kernel .unikraft/build/nginx_qemu-x86_64 \
    -nographic \
    -m 128M \
    -netdev bridge,id=en0,br=virbr0 -device virtio-net-pci,netdev=en0 \
    -append "netdev.ip=172.44.0.2/24:172.44.0.1 -- /usr/bin/nginx" \
    -cpu max
```
This starts a QEMU virtual machine instance; the `netdev.ip=172.44.0.2/24:172.44.0.1` kernel argument assigns the guest the static address `172.44.0.2/24` with gateway `172.44.0.1`. Query it using:
```bash
curl http://172.44.0.2
```
To close the running QEMU instance, use `Ctrl+a x` in the QEMU console.
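If the console is no longer responsive, you can also stop the instance from another terminal. Be aware that this kills every running `qemu-system-x86_64` process, so only use it if nothing else is running:

```bash
pkill -f qemu-system-x86_64
```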
For the `http-go1.21` bincompat example, there is no build phase, only a run phase.
The example uses a prebuilt kernel image.
The prebuilt `base` kernel image is pulled from the registry, from `unikraft.org/base`.
This happens during the run phase.
The run configuration is part of the `Kraftfile`:
```yaml
spec: v0.6

runtime: base:latest

rootfs: ./Dockerfile

cmd: ["/server"]
```
The `Kraftfile` defines:

- the runtime to use: `unikraft.org/base:latest` (it can be shortened to just `base:latest`)
- the root filesystem, generated from the `Dockerfile`
- the command used to start the application: `/server`
The root filesystem is generated from a `Dockerfile` specification, as configured in the `Kraftfile`.
The `Dockerfile` specification collects the required files (the binary executable, its dependent libraries, configuration files, and data files):
```Dockerfile
FROM golang:1.21.3-bookworm AS build

WORKDIR /src

COPY ./server.go /src/server.go

RUN set -xe; \
    CGO_ENABLED=1 \
    go build \
        -buildmode=pie \
        -ldflags "-linkmode external -extldflags '-static-pie'" \
        -tags netgo \
        -o /server server.go \
    ;

FROM scratch

COPY --from=build /server /server
COPY --from=build /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/
COPY --from=build /lib64/ld-linux-x86-64.so.2 /lib64/
```
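The `server.go` source lives in the catalog entry and is not reproduced in this guide. As a rough point of reference only, a minimal Go HTTP server of the kind this setup expects might look like the hypothetical sketch below, listening on port 8080 to match the run commands:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Respond to every request with a short message.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from Unikraft!")
	})

	// Listen on 8080, matching the -p 8080:8080 mapping used below.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```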
The `Dockerfile` is interpreted via BuildKit, hence the need to set up a BuildKit container.
The run command requires the BuildKit container to be configured beforehand:
```bash
docker run -d --name buildkitd --privileged moby/buildkit:latest
export KRAFTKIT_BUILDKIT_HOST=docker-container://buildkitd
```
The run command is:

```bash
kraft run -W -p 8080:8080 .
```
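Optionally, you can size the instance and give it a stable name; `-M`/`--memory` and `--name` are standard `kraft run` flags, but confirm them against `kraft run --help` for your KraftKit version:

```bash
# Run with 256M of guest memory and a fixed instance name.
kraft run -W -p 8080:8080 -M 256M --name http-go .
```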
`kraft run` goes through the following steps:

1. Pull the prebuilt kernel image from `unikraft.org/base:latest`.
2. Generate the root filesystem (initrd) from the `Dockerfile` specification. The generation of the root filesystem implies building the Go source code into a binary executable (an ELF). The executable, together with its dependent libraries, is then extracted into the root filesystem.
3. Run the Unikraft virtual machine instance, starting the `/server` command.

The resulting initrd image is `.unikraft/build/initramfs-x86_64.cpio`:
```console
$ ls -lh .unikraft/build/initramfs-x86_64.cpio
-rw-r--r-- 1 root root 8.9M Jan  4 18:16 .unikraft/build/initramfs-x86_64.cpio
```
To view the contents of the root filesystem, you can use `cpio`:

```console
$ cpio -itv < .unikraft/build/initramfs-x86_64.cpio
d---------   0 root     root            0 Jan  1  1970 /lib
d---------   0 root     root            0 Jan  1  1970 /lib/x86_64-linux-gnu
-rwxr-xr-x   1 root     root      1922136 Sep 30 11:31 /lib/x86_64-linux-gnu/libc.so.6
d---------   0 root     root            0 Jan  1  1970 /lib64
-rwxr-xr-x   1 root     root       210968 Sep 30 11:31 /lib64/ld-linux-x86-64.so.2
-rwxr-xr-x   1 root     root      7151306 Jan  4 18:16 /server
18136 blocks
```
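If you want to extract the contents instead of just listing them, a possible approach with GNU `cpio` is sketched below; the `extracted/` directory name is arbitrary, and `--no-absolute-filenames` keeps the absolute paths in the archive from escaping the current directory:

```bash
mkdir extracted && cd extracted
cpio -id --no-absolute-filenames < ../.unikraft/build/initramfs-x86_64.cpio

# The server binary should be a statically linked PIE; `file` should report
# something like "ELF 64-bit LSB pie executable ... static-pie linked".
file server
```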
The kernel image is pulled into a temporary directory.
To run the application manually, first pull the kernel image from `unikraft.org/base:latest`:
```bash
kraft pkg pull -w base unikraft.org/base:latest
```
The kernel image is `base/unikraft/bin/kernel`:
```console
$ tree base/
base/
`-- unikraft/
    `-- bin/
        `-- kernel

3 directories, 1 file

$ ls -lh base/unikraft/bin/kernel
-rw-rw-r-- 1 razvand razvand 1.6M Jan 25 14:48 base/unikraft/bin/kernel
```
You can run the application manually with `qemu-system-x86_64`, passing the `-kernel`, `-initrd` and `-append` arguments.
The `vfs.fstab=[ "initrd0:/:extract:::" ]` entry in the `-append` string instructs Unikraft to extract the contents of the initial ramdisk (`initrd0`) into the root filesystem (`/`):
```bash
qemu-system-x86_64 \
    -kernel base/unikraft/bin/kernel \
    -nographic \
    -m 128M \
    -device virtio-net-pci,mac=02:b0:b0:d3:d2:01,netdev=hostnet0 \
    -netdev user,id=hostnet0,hostfwd=tcp::8080-:8080 \
    -append "vfs.fstab=[ \"initrd0:/:extract:::\" ] -- /server" \
    -initrd .unikraft/build/initramfs-x86_64.cpio \
    -cpu max
```
This starts a QEMU virtual machine instance. Query it using:
```bash
curl http://localhost:8080
```
If you want to use a bridge interface, first create the bridge interface as `root` (prefix with `sudo` if required):

```bash
kraft net create -n 172.44.0.1/24 virbr0
```
And then run manually with `qemu-system-x86_64` as `root` (prefix with `sudo` if required):
```bash
qemu-system-x86_64 \
    -kernel base/unikraft/bin/kernel \
    -nographic \
    -m 128M \
    -netdev bridge,id=en0,br=virbr0 -device virtio-net-pci,netdev=en0 \
    -append "netdev.ip=172.44.0.2/24:172.44.0.1 vfs.fstab=[ \"initrd0:/:extract:::\" ] -- /server" \
    -initrd .unikraft/build/initramfs-x86_64.cpio \
    -cpu max
```
This starts a QEMU virtual machine instance. Query it using:
```bash
curl http://172.44.0.2:8080
```
To close the running QEMU instance, use `Ctrl+a x` in the QEMU console.
Feel free to ask questions, report issues, and meet new people.