Posted by binsquare 8 hours ago
Also libkrun is not secure by default. From their README.md:
> The libkrun security model is primarily defined by the consideration that both the guest and the VMM pertain to the same security context. For many operations, the VMM acts as a proxy for the guest within the host. Host resources that are accessible to the VMM can potentially be accessed by the guest through it.
> While defining the security implementation of your environment, you should think about the guest and the VMM as a single entity. To prevent the guest from accessing host's resources, you need to use the host's OS security features to run the VMM inside an isolated context. On Linux, the primary mechanism to be used for this purpose is namespaces. Single-user systems may have a more relaxed security policy and just ensure the VMM runs with a particular UID/GID.
> While most virtio devices allow the guest to access resources from the host, two of them require special consideration when used: virtio-fs and virtio-vsock+TSI.
> When exposing a directory in a filesystem from the host to the guest through virtio-fs devices configured with krun_set_root and/or krun_add_virtiofs, libkrun does not provide any protection against the guest attempting to access other directories in the same filesystem, or even other filesystems in the host.
For virtio-fs, yes, the risk of exposing the host filesystem structure exists, and we plan to:
1. Create a staging directory for each VM and bind-mount the host dir onto it.
2. Give each VM a private mount namespace.
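A rough sketch of what 1 and 2 could look like with standard Linux tooling. This is illustrative only, not smolvm's actual implementation; the paths, the VM name `m1`, and the use of an unprivileged user namespace are all assumptions made for the demo:

```shell
# Illustrative only: per-VM staging dir + private mount namespace.
# In the real design the VMM would be exec'd inside the namespace; here we
# just show that the bind mount never leaks back to the host.
STAGING="${TMPDIR:-/tmp}/smolvm-staging-m1"   # staging dir for VM "m1"
mkdir -p "$STAGING"

# --mount gives the process its own mount namespace; --propagation private
# keeps the bind mount from propagating to the host. --map-root-user lets
# this run unprivileged for demo purposes.
unshare --user --map-root-user --mount --propagation private sh -c "
  mount --bind /etc '$STAGING' && ls '$STAGING' | head -3
"

# Back on the host, the staging dir is empty again: the mount was private.
ls "$STAGING"
```

A real implementation would `exec` the VMM at the point of the inner `ls`, so the guest only ever sees `$STAGING` and nothing it does to that mount is visible outside its namespace.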
Both are tracked in our GitHub issues:
https://github.com/smol-machines/smolvm/issues/152 https://github.com/smol-machines/smolvm/issues/151
Item 2 may take much more effort than we imagine, but we will be sure to call this out in our docs.
For the concern around TSI: we are developing virtio-net in parallel; it is also tracked on our GitHub and will be released soon: https://github.com/smol-machines/smolvm/issues/91
Would like to collect more suggestions on how to make this safer. Thanks!
Here's my perspective:
smolvm operates on the same shared responsibility model as other virtual machines.
A VM provides VM-level isolation.
If the user mounts a directory that allows symlinks, or runs guest software designed to escape through a host path, that is the responsibility of the user rather than the VM.
Security is not guaranteed by using a specific piece of software, it's a process that requires different pieces for different situations. smolvm can be a part of that process.
Would you be ok with a trampoline that launched the VM as a sibling to the Vagrant VM?
I'm building a different virtual machine.
Can you pipe into one? It would be cute if I could wget in machine 1 and send that result to offline machine 2 for processing.
Yes! GPU passthrough is being actively worked on and will land in the next major release: https://github.com/smol-machines/smolvm/pull/96
Yeah, just tried piping; it works:
```
smolvm machine exec --name m1 -- wget -qO- https://example.com/data.csv \
  | smolvm machine exec --name m2 -i -- python3 process.py
```
*Yes, FreeBSD is specifically developed against Firecracker, which is specifically avoided with "Smol machines", but interesting nonetheless.
[0] https://github.com/NetBSDfr/smolBSD
[1] https://www.usenix.org/publications/loginonline/freebsd-fire...
The microvm space is still underserved.
Colin's FreeBSD work or Emile's NetBSD work?
You'll see that philosophy in this project as well (I hope).
FreeBSD focuses on features, which is great too.
Cheers!
Edit: I see this appears to be a contributor to the project as well. It was not obvious to me.
@binsquare is this one: https://github.com/BinSquare
Question: why do you report that QEMU takes 15s < x < 30s? For instance, with Kata Containers you can run fast microVMs, and even faster with unikernels. What was your setup?
Thanks a lot!
Got a lot of questions on how I spin up Linux VMs so quickly.
The explanation is pretty straightforward.
Linux was built in the '90s. Hardware has improved more than 1000x. Linux virtual machine startup times have stayed relatively the same.
Turns out we kept adding junk to the Linux kernel and bootup operations.
So all I did was keep cutting unnecessary parts for as long as it still worked. That also ended up getting boot times to under 1s.
A big part of it was systemd, btw.
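As an illustration of this kind of trimming (a generic build-recipe sketch, not smolvm's actual kernel config): the kernel tree ships a `tinyconfig` target that starts from the smallest possible configuration, after which you re-enable only what a microVM guest actually needs. The specific options enabled below are my assumptions about a minimal virtio guest:

```shell
# Generic kernel-trimming sketch -- run inside a Linux kernel source tree.
make tinyconfig                      # smallest possible starting config

# Re-enable only what a microVM needs (virtio devices + a serial console);
# USB, sound, and most drivers stay out, which is where the boot time goes.
./scripts/config -e 64BIT -e VIRTIO_PCI -e VIRTIO_BLK -e VIRTIO_NET \
                 -e SERIAL_8250 -e SERIAL_8250_CONSOLE
make olddefconfig                    # fill in dependencies with defaults
make -j"$(nproc)" bzImage
```

Pair a kernel like this with a minimal init (no systemd) and direct kernel boot, and sub-second boots are plausible on modern hardware.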