Unikraft Summer Workshop 2024

A free and virtual workshop held by members of the Unikraft community from July 1 to July 20, 2024.

It focuses on cloud-native applications and on the unikernel technology that powers efficient, high-performance cloud deployments. This is the fourth edition of the event, after USoC'21 to USoC'23, and unlike the previous editions, we will focus on cloud deployments of Unikraft applications.

The three-week event features starter tutorials and workshops on how to configure, build, run, deploy and debug cloud applications using Unikraft.

There will be 6 sessions taking place in the first two weeks (between July 1 and July 12, 2024). Each session runs for 3 hours, from 4pm to 7pm CEST, and is held in English. Sessions will consist of talks and demos delivered by members of the Unikraft community, followed by practical tutorials that you will work on with support and supervision. Sessions take place on Unikraft's Discord server.

Topics include building unikernels, benchmarking, debugging, porting applications, virtualization and platform specifics. The first 3 sessions (first week) will focus on using KraftKit, Unikraft's companion tool, to manage cloud applications. The next 3 sessions (second week) will focus on the internals of Unikraft: the build system, native configuration options, application porting.

The two weeks with sessions will be followed by a week of working on the final project. You will work on the project in teams of 2-3 people. We will have support sessions online to help with the project.

On Saturday, 20 July 2024, 9am-5pm CEST, we will have the final hackathon, which consists of adding the final touches to the project. The hackathon will take place in hybrid format: in person at the National University of Science and Technology POLITEHNICA of Bucharest, and online on Unikraft's Discord server. Participants will receive a participation diploma. The first three teams will get special prizes. All in-person hackathon participants will get a Unikraft T-shirt.

Registration#

If you're eager to learn more about efficient cloud computing and unikernel technology, to work on practical open source tasks and to expand your knowledge of cloud-native and low-level topics, you'll want to be part of USW'24. To register, you need to complete a set of challenges that will get you accustomed to the environment you will be using during the sessions. Submit the challenge solutions on the registration form by Saturday, June 29, 2024, 10pm CEST.

It's recommended you check these prerequisites before taking part in USW'24:

  • fair knowledge of Linux command-line interface
  • good knowledge of programming concepts; knowledge of the C programming language is a plus
  • basic understanding of operating system concepts: processes, threads, virtual memory, filesystems, file descriptors
  • some exposure to assembly language and computer architecture
  • fondness for software engineering, hacking, tinkering with software components

People#

USW'24 will be held by members of the Unikraft community, including professors and students from the National University of Science and Technology POLITEHNICA of Bucharest and the commercial side of Unikraft, Unikraft.io. Other members of the Unikraft community will provide online support on Discord.

Schedule#

USW'24 consists of 6 sessions, 3 support sessions and a final hackathon. Each session is 3 hours long and consists of practical tutorials and challenges for participants. The support sessions are 2 hours long and consist of providing support for the final project the teams are working on. The hackathon is a full-day event (8 hours) where you'll add the final touches to the project, followed by the evaluation of the projects and the awards ceremony.

The complete schedule for USW'24 is (all times in CEST - Central European Summer Time):

Date             Interval     Activity
Tue, 02.07.2024  3:30pm-4pm   Opening Ceremony
Tue, 02.07.2024  4pm-7pm      Session 01: Overview of Unikraft
Thu, 04.07.2024  4pm-7pm      Session 02: Baby Steps
Fri, 05.07.2024  4pm-7pm      Session 03: Behind the Scenes
Tue, 09.07.2024  4pm-7pm      Session 04: Binary Compatibility
Thu, 11.07.2024  4pm-7pm      Session 05: Debugging in Unikraft
Fri, 12.07.2024  4pm-7pm      Session 06: Application Porting
Tue, 16.07.2024  4pm-7pm      Support Session 01
Thu, 18.07.2024  4pm-7pm      Support Session 02
Fri, 19.07.2024  4pm-7pm      Support Session 03
Sat, 20.07.2024  9am-5pm      Final Hackathon

Registration Challenges#

First Challenge - Cloud Master#

Create a docker-compose.yml file that sets up a Compose application consisting of a database, a monitoring service, and a simple service that queries the database. The goal is to create an environment where these services can interact seamlessly within Docker containers.

Services Overview:

  • Database Service: You can use whatever database suits you.
  • Stats Service: A service that computes and provides statistical data (e.g. Grafana).
  • Query Service: A simple API that interacts with the database (can be done in any programming language you like).

Be creative: use networks, volumes, anything you find useful. You can extend the stack as much as you like.

Submit the docker-compose.yml in the registration form.
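
As a starting point, a minimal sketch of such a Compose file might look like this (the service images, credentials and ports below are illustrative assumptions, not requirements):

```yaml
services:
  db:
    image: postgres:16              # any database works; PostgreSQL is just an example
    environment:
      POSTGRES_PASSWORD: example    # illustrative credential, not for production use
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend

  stats:
    image: grafana/grafana:latest   # a monitoring/stats service, e.g. Grafana
    ports:
      - "3000:3000"
    networks:
      - backend

  query:
    build: ./query                  # your own small API that queries the database
    depends_on:
      - db
    ports:
      - "8080:8080"
    networks:
      - backend

volumes:
  db-data:

networks:
  backend:
```

Bring the stack up with docker compose up and check that the query service can reach the database over the shared network.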

Second challenge - My First Unikernel#

Download the unikernel image from here. Run it using qemu-system-x86_64 and find the flag. Upload the flag in the registration form.

Read the basic unikernel concepts here.
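
A minimal QEMU invocation for booting the image might look like this (the image file name is an assumption; use the name of the file you downloaded):

```
$ qemu-system-x86_64 -cpu max -nographic -kernel ./unikernel_image
```

The -nographic flag keeps the guest console in your terminal; to exit QEMU, press Ctrl+a, then x.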

Session 01: Overview of Unikraft#

For the next 3 sessions, we will use KraftCloud to deploy our applications. If you did not create an account already, sign up here and get a token. You will be using that in the following sessions.

Once you have a token, follow the steps here to deploy your first unikernel. If everything went well, deploy more applications and use extra features following the tasks here.

There are two types of tutorials in this session: application tutorials and feature tutorials. This means you will both learn how to use some already existing applications and make use of different KraftCloud features, like load balancing, scale to 0, etc. You will likely use those features for the final project too, so make sure to focus on them.
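
As a rough sketch, a first deployment with the kraft CLI looks something like this (the metro name, port mapping and application directory are illustrative; check kraft cloud --help for the exact flags in your version):

```
# export the token you got at signup, and pick a metro (region)
$ export KRAFTCLOUD_TOKEN=<your token>
$ export KRAFTCLOUD_METRO=fra0

# deploy the app in the current directory, mapping port 443 to the app's 8080
$ kraft cloud deploy -p 443:8080 .

# list the running instances
$ kraft cloud instance list
```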

Session 02: Baby Steps#

In Session 01, we deployed some applications using KraftCloud. In this session, we will use the same applications, but we will configure, build and run them locally, on our own systems. This will give you a better look at what kraft cloud does behind the scenes. Make sure you have Docker installed.

Applications for this session are stored in the catalog repository. Similar to the kraftcloud/examples repository we used in Session 01, the catalog repository contains some minimal applications we can run locally. Make sure you clone it before starting the session.

Follow the steps here and bring the cloud to your machine. While you work on them, mark the progress here, in the Session 02 spreadsheet. After you are done with all of them, take a look at some more applications, following the tasks here and the same steps for building and running you used before. Go through them in order and aim to complete all items up to the Extra section. If you have extra time on your hands, go through the Extra section as well.
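
The local workflow generally boils down to a handful of kraft commands, run from an application directory that contains a Kraftfile (the catalog path and port mapping below are illustrative assumptions):

```
# clone the catalog and pick an application
$ git clone https://github.com/unikraft/catalog
$ cd catalog/library/nginx/1.25

# build and run locally under QEMU, mapping host port 8080 to guest port 80
$ kraft build
$ kraft run -p 8080:80
```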

Session 03: Behind the Scenes#

In order to run applications using kraft, both locally and on KraftCloud, we need to build a minimal filesystem for the application we want to run. We do that using Docker. This is useful both for understanding what happens behind the scenes and for having a test environment for your application. If you run into issues with KraftCloud / KraftKit, you can use Docker to check that everything is in the right place and to assist in debugging.

Follow the steps here to see how you can port a new application on top of Unikraft. Mark the items as completed here.

Session 04: Binary Compatibility#

In the previous sessions, we managed to run some applications on top of Unikraft, with only a minimal filesystem required. We extracted the filesystem making use of Docker and KraftKit. We made use of the already existing kernel images from the registry, but sometimes we want to configure our kernel in a particular way, so we want to have manual control over the build process.

In this session, we will take a look at what kraft does behind the scenes in order to build the Unikraft kernel image.

To run the application that we have inside the minimal filesystem, we will use an application called elfloader, together with the Unikraft core and some external libraries. All of them will be cloned by kraft, so we don't have to worry about that.

helloworld-c#

Let's start with the helloworld-c application. We need to update the Kraftfile so it builds a kernel image locally, without pulling it directly from the registry. You can copy the Nginx Kraftfile, change the name: to helloworld and the cmd: to ["/helloworld"].

Let's run kraft build and notice what happens. First, we will see some messages that look like this:

[+] pulling app/elfloader:staging ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••• 100% [0.0s]
[+] finding core/unikraft:staging... done! [0.5s]
[+] finding lib/lwip:staging... done! [0.3s]
[+] finding lib/libelf:staging... done! [0.3s]
[+] pulling lib/libelf:staging ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••• 100% [0.0s]
[+] pulling lib/lwip:staging ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••• 100% [0.0s]
[+] pulling core/unikraft:staging ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••• 100% [0.1s]

This tells us that kraft successfully cloned all the required dependencies to build the kernel. They are placed under .unikraft/:

$ tree -L 1 .unikraft/
.unikraft/
|-- apps/
|-- build/
|-- libs/
`-- unikraft/

The default configuration will be placed by kraft in .config.helloworld_qemu-x86_64. The final image will be placed under the build/ directory, as shown by the output of kraft build:

[*] Build completed successfully!
|
|---- kernel: .unikraft/build/helloworld_qemu-x86_64 (2.8 MB)
`- initramfs: .unikraft/build/initramfs-x86_64.cpio (2.1 MB)

To tweak the configuration of the kernel, we need to add a Makefile and choose what we want to include in the final image. The Makefile will look like this:

UK_APP ?= $(PWD)/.unikraft/apps/elfloader
UK_ROOT ?= $(PWD)/.unikraft/unikraft
UK_LIBS ?= $(PWD)/.unikraft/libs
UK_BUILD ?= $(PWD)/.unikraft/build
LIBS ?= $(UK_LIBS)/lwip:$(UK_LIBS)/libelf

all:
	@$(MAKE) -C $(UK_ROOT) A=$(UK_APP) L=$(LIBS) O=$(UK_BUILD)

$(MAKECMDGOALS):
	@$(MAKE) -C $(UK_ROOT) A=$(UK_APP) L=$(LIBS) O=$(UK_BUILD) $(MAKECMDGOALS)

All it does is call the Makefile from .unikraft/unikraft/ with the right parameters, so we can simply copy-paste it any time we want to configure the kernel. To enter the configuration menu, we run make C=$(pwd)/.config.helloworld_qemu-x86_64 menuconfig. This brings up a text interface that allows us to select certain features. Let's select Library Configuration -> ukdebug: Debugging and tracing -> Enable debug messages globally. We then exit by repeatedly pressing ESC on our keyboard.

To build the image, we run make C=$(pwd)/.config.helloworld_qemu-x86_64 -j$(nproc). The final image will be placed under .unikraft/build/elfloader_qemu-x86_64. We can run it manually, using qemu-system-x86_64, which is what kraft does behind the scenes.

$ qemu-system-x86_64 -cpu max -nographic -kernel .unikraft/build/elfloader_qemu-x86_64 --append "/helloworld"
[...]
[ 0.431128] dbg: [appelfloader] brk @ 0x407821000 (brk heap region: 0x407800000-0x407a00000)
[ 0.432070] dbg: [libposix_fdio] (ssize_t) uk_syscall_r_write((int) 0x1, (const void *) 0x4078002a0, (size_t) 0xc)
Bye, World!
[ 0.433875] dbg: [libposix_process] (int) uk_syscall_r_exit_group((int) 0x0)
[ 0.434095] dbg: [libposix_process] Terminating PID 1: Self-killing TID 1...
[...]

To close the application, press Ctrl+a, then x on the keyboard.

You can toy around with the configuration, enable different features and see how the application changes.
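
One handy trick while experimenting: keep a copy of the configuration file before entering menuconfig, then diff it afterwards to see exactly which options a menu choice toggled (a generic sketch, using the config file generated above):

```
$ cp .config.helloworld_qemu-x86_64 .config.helloworld_qemu-x86_64.old
$ make C=$(pwd)/.config.helloworld_qemu-x86_64 menuconfig
$ diff .config.helloworld_qemu-x86_64.old .config.helloworld_qemu-x86_64
```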

nginx#

Let's move to nginx, a more complex application. To configure and build it, we follow the same steps. This time we don't have to modify the Kraftfile; just add a Makefile with the same content as the one above and run make C=$(pwd)/.config.nginx_qemu-x86_64 -j$(nproc).

To run nginx, we also need to set up the networking. Let's create a script called run.sh.

# Remove previously created network interfaces.
sudo ip link set dev tap0 down
sudo ip link del dev tap0
sudo ip link set dev virbr0 down
sudo ip link del dev virbr0
# Create bridge interface for QEMU networking.
sudo ip link add dev virbr0 type bridge
sudo ip address add 172.44.0.1/24 dev virbr0
sudo ip link set dev virbr0 up
sudo qemu-system-x86_64 \
    -kernel .unikraft/build/elfloader_qemu-x86_64 \
    -nographic \
    -m 1024M \
    -netdev bridge,id=en0,br=virbr0 -device virtio-net-pci,netdev=en0 \
    -append "netdev.ip=172.44.0.2/24:172.44.0.1::: -- /usr/bin/nginx" \
    -cpu max

You can see that we first remove the network interfaces, then recreate them and run the application. We give the application more memory and assign it an IP address. To test that this works, open another terminal and run curl 172.44.0.2. We close the application by pressing Ctrl+a, then x.

redis#

Follow the same steps with redis. Create a Makefile, build the application and then run it.

hugo#

Follow the same steps with hugo. Create a Makefile, build the application and then run it.

node#

Follow the same steps with node/21. Create a Makefile, build the application and then run it.

php#

Follow the same steps with PHP. Create a Makefile, build the application and then run it.

Custom Application#

Create an application of your choice in a compiled programming language (i.e. obtain an executable) and run it with Unikraft in binary-compatibility mode. Use a compiled language such as C, C++, Go or Rust.

Add System Call Tracing#

Uncomment the syscall tracing feature in the Kraftfile of one of the applications above:

CONFIG_LIBSYSCALL_SHIM_STRACE: 'y'

Build and run the application with Unikraft with the syscall tracing feature enabled. See the system calls.

Compare the system calls from the Unikraft-based run with those from a native Linux run. They should be identical, since the application runs unmodified both on Linux and on Unikraft.
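
To capture the Linux side of the comparison, strace works on the unmodified binary (the binary path is an illustrative assumption):

```
$ strace -o native.log ./helloworld
```

You can then compare native.log with the strace-like output printed by Unikraft.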

Session 05: Debugging#

Often, when we port an application to run on top of Unikraft, we will run into issues, as we do on any other platform. Unikernels can seem harder to debug, since they run as virtual machines, but having the kernel code in the same address space as the application makes it easy to jump from application code to kernel code. In this session, we will look at different ways to debug Unikraft applications, from simple debug messages to attaching gdb to the guest.

Enable Debug Messages#

To enable debug messages, we need to change the unikernel configuration, as we did in Session 04. We will use the nginx application from the last session.

First, let's enter the configuration menu using make C=$(pwd)/.config.nginx_qemu-x86_64 menuconfig. There are multiple types of debug messages we can enable. For now, let's go for the strace-like output. We do that by enabling Library Configuration -> Syscall Shim -> Debugging -> strace-like messages. After that, we can rebuild the application and run it, using the script from the last session. We should get output similar to what strace shows on a usual Linux setup:

close(fd:7) = OK
socketpair(0x1, 0x1, ...) = 0x0
epoll_ctl(0x3, 0x1, ...) = 0x0
close(fd:8) = OK
epoll_wait(0x3, 0x1000158844, ...) = 0x1
close(fd:7) = OK
epoll_ctl(0x3, 0x1, ...) = 0x0
gettimeofday(0x1000158980, 0x0, ...) = 0x0

This is very useful when the application requires certain files to be present in the filesystem and we have no way of determining that at build time. If that is the case, we will likely see a message like:

openat(AT_FDCWD, "/etc/localtime", O_RDONLY|O_CLOEXEC) = No such file or directory (-2)

If the application crashes after that, we can assume the file is a requirement and add it to the filesystem using the Dockerfile. If the application continues to run properly without the file, it is most likely not needed; it might be part of extra functionality, and we can choose whether to add it.

Another option is to enable all available debug messages. To do this, enable Library Configuration -> ukdebug -> uk_printd. This will produce a lot of output, since it enables debug messages globally.

You can toy around with the configurations under ukdebug and see what they do and how they affect the printed messages.

Using GDB#

Since we are running the applications using QEMU, we can attach gdb and debug them like any other application. To do that, we need to update the run script accordingly.

Let's start with the helloworld application that we used in the last session. The new run command will be:

qemu-system-x86_64 -cpu max -nographic -kernel .unikraft/build/elfloader_qemu-x86_64 --append "/helloworld" -S -s

Notice the extra -S -s flags. The -S option will start the application in a paused state, while the -s will open a gdbserver on TCP port 1234. After that, we can open another terminal and run gdb:

gdb --eval-command="target remote :1234" .unikraft/build/elfloader_qemu-x86_64.dbg

This connects to the gdb server, and we can go ahead and debug the application as usual. Notice that we passed .unikraft/build/elfloader_qemu-x86_64.dbg, with the extra .dbg suffix, when we started gdb. That is the non-stripped kernel image; we cannot run it directly, but we always use it when debugging via gdb. When debugging, use hardware breakpoints (hb) instead of the usual breakpoints.
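
A typical session then looks something like this (the symbol name is an illustrative assumption; use whatever symbols exist in your kernel image):

```
(gdb) hb main
(gdb) continue
(gdb) bt
(gdb) info registers
```

hb sets a hardware-assisted breakpoint, and continue resumes the paused guest (remember we started QEMU with -S, so it waits for gdb before booting).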

nginx with gdb#

Follow the same steps on the nginx application. Attach gdb, toy around, place some breakpoints and see how the application flows.

redis#

Follow the steps for debugging messages and gdb for redis. Use the redis setup from the last session.

hugo#

Follow the steps for debugging messages and gdb for hugo. Use the hugo setup from the last session.

node#

Follow the steps for debugging messages and gdb for node. Use the node setup from the last session.

Session 06: Porting an Application#

Now that we have learned how to debug Unikraft applications, we can move on to porting more complex applications, which might require more than what we saw in Session 03. The workflow for porting an application is the same as in Session 03: create a Dockerfile, add a Kraftfile, create a minimal filesystem and run the application on top of Unikraft. Sometimes the minimal filesystem cannot be created correctly without running the application, so we will make use of the debug messages from the last session.

Let's take nginx as an example. We start from the already existing nginx port and we remove the Dockerfile, since we will write that ourselves.

Next, we start a docker container from the nginx official image:

docker run --rm -it nginx:1.25.3-bookworm /bin/bash

We use ldd to get the dependencies:

$ ldd /usr/sbin/nginx
linux-vdso.so.1 (0x00007ffdf39e8000)
libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x000073162deb9000)
libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x000073162de1f000)
libssl.so.3 => /lib/x86_64-linux-gnu/libssl.so.3 (0x000073162dd75000)
libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x000073162d8f3000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x000073162d8d4000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x000073162d6f1000)
/lib64/ld-linux-x86-64.so.2 (0x000073162e133000)

Next, we create a Dockerfile where we copy the dependencies to a scratch image:

FROM nginx:1.25.3-bookworm AS build
FROM scratch
COPY --from=build /usr/sbin/nginx /usr/bin/nginx
COPY --from=build /usr/lib/nginx /usr/lib/nginx
# Libraries
COPY --from=build /lib/x86_64-linux-gnu/libcrypt.so.1 /lib/x86_64-linux-gnu/libcrypt.so.1
COPY --from=build /lib/x86_64-linux-gnu/libpcre2-8.so.0 /lib/x86_64-linux-gnu/libpcre2-8.so.0
COPY --from=build /lib/x86_64-linux-gnu/libssl.so.3 /lib/x86_64-linux-gnu/libssl.so.3
COPY --from=build /lib/x86_64-linux-gnu/libcrypto.so.3 /lib/x86_64-linux-gnu/libcrypto.so.3
COPY --from=build /lib/x86_64-linux-gnu/libz.so.1 /lib/x86_64-linux-gnu/libz.so.1
COPY --from=build /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/libc.so.6
COPY --from=build /lib64/ld-linux-x86-64.so.2 /lib64/ld-linux-x86-64.so.2
COPY --from=build /etc/ld.so.cache /etc/ld.so.cache
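
Copying each library line by hand gets tedious. As an illustrative shortcut (not part of the official workflow), a small awk filter can turn the ldd output into COPY lines:

```shell
# Read `ldd` output on stdin and emit Dockerfile COPY lines.
# Handles both "name => /path (addr)" lines and the bare dynamic-loader line;
# linux-vdso is skipped automatically, since it has no file on disk.
awk '
  $2 == "=>" && $3 ~ /^\// { print "COPY --from=build " $3 " " $3 }
  $1 ~ /^\// && $2 != "=>" { print "COPY --from=build " $1 " " $1 }
'
```

Pipe the output of ldd through it (for example, ldd /usr/sbin/nginx | awk '...') and paste the result into the Dockerfile.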

Then, to enable debug messages, we uncomment the CONFIG_LIBSYSCALL_SHIM_STRACE line in the Kraftfile.

Now, we can run kraft build and kraft run, to see if the application works. We will get an error message:

openat(AT_FDCWD, "/etc/nginx/nginx.conf", O_RDONLY) = No such file or directory (-2)
gettid() = pid:1
2024/07/12 08:11:50 [emerg] 1#1: open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)

This tells us that nginx will look for a config file. We have a minimal config in the conf/ directory, so we copy that too, adding this line in the Dockerfile:

COPY ./conf/nginx.conf /etc/nginx/nginx.conf

We do kraft build and kraft run again, and we get another message:

openat(AT_FDCWD, "/etc/passwd", O_RDONLY|O_CLOEXEC) = No such file or directory (-2)
gettid() = pid:1
2024/07/12 08:19:32 [emerg] 1#1: getpwnam("root") failed (2: No such file or directory) in /etc/nginx/nginx.conf:4

This tells us that getpwnam("root") failed, because no /etc/passwd and /etc/group files are provided, so we also add those in the Dockerfile:

COPY --from=build /etc/passwd /etc/passwd
COPY --from=build /etc/group /etc/group

We rebuild and run again, and get:

2024/07/12 08:25:31 [emerg] 1#1: open() "/etc/nginx/mime.types" failed (2: No such file or directory) in /etc/nginx/nginx.conf:11

So we add /etc/nginx/ to the filesystem:

COPY --from=build /etc/nginx /etc/nginx

We repeat the same process and find out more requirements for our application:

COPY --from=build /var/cache/nginx /var/cache/nginx
COPY --from=build /var/run /var/run
COPY --from=build /usr/lib/nginx /usr/lib/nginx
COPY --from=build /var/log/nginx /var/log/nginx

After that, the application seems to run properly. We use kraft run -p 8080:80 and then, from another terminal, we run curl localhost:8080 and we get a 404 page response. This is because we also need to add an initial page for nginx to serve. We have that already under wwwroot/, and we add that to the Dockerfile:

COPY ./wwwroot /wwwroot

After that, everything should work properly.

node#

Now that you have seen how porting an application works, you can try it yourself with the node application. Remove the Dockerfile, start from the node:21-alpine image and follow the same steps as above.

memcached#

Do the same for memcached. Remove the Dockerfile and start from memcached:1.6.23-bookworm.

Session Recordings#

You can check the recordings of the initial presentations for each session on YouTube.

Connect with the community

Feel free to ask questions, report issues, and meet new people.

Join us on Discord!

© 2024  The Unikraft Authors. All rights reserved. Documentation distributed under CC BY-NC 4.0.