It is true that MirageOS focuses on OCaml as a type-safe language, and its libraries are implemented in OCaml.
However, even though most parts of Unikraft are written in C, we are still able to apply dead-code elimination and link-time optimization with the compiler. In principle you should even be able to combine libraries written in different languages.
Regarding inspection: the design does not exclude adding libraries that enable inspection and monitoring. We actually think this is quite important for today's deployments: imagine an integration with Prometheus or a little embedded SSH shell.
The difference from Unix, and what we want to achieve, is specialization. Unix/Linux/etc. are general-purpose OSes and a very good fit for cases where you do not know beforehand which applications you are going to run (e.g., end devices like desktops and smartphones). Unikraft targets the cases where you know beforehand what you are going to run and where you are going to run it, optimizing the kernel layers and optionally also the application for that use case.
Thanks for the clarification. That does make sense but I guess I have a hard time coming up with use cases for it. That doesn't mean they don't exist of course!
In my mind unikernels in the cloud specifically don't seem that different from Unix, because you're running on a hypervisor anyway. If you zoom out, there could be 64 OCaml unikernels running on a machine, or maybe you have 10 OCaml unikernels, 20 Java unikernels, and 30 C++ unikernels.
That looks like a Unix machine to me, except the interface is the VMM rather than the kernel interface. (I know there have been some papers arguing about this; I only remember them vaguely, but this is how I see it.)
It was very logical to use the VMM interface for a while, because AWS EC2 was dominant, and it is a stronger security boundary than the Linux kernel. But I do think the kernel interface is actually better for most developers and most languages.
Apparently they use Firecracker VMs, which are derived from what AWS Lambda uses internally. So I can see the container/kernel interface becoming more popular than the VMM interface. (And I hope it does.)
To me, it makes sense for the functionality of the kernel to live on the side of the service provider, rather than being something that the application developer deploys every time. Though Nginx, Redis, and SQLite are interesting use cases ... I'd guess they're the minority of cloud use cases, as opposed to apps in high-level languages. But that doesn't mean they're not useful as part of an ensemble, most of which are NOT unikernels.
I agree; an argument for 4 is that the hypervisor attack surface can be scaled up and down by adding/removing virtual devices. Only a small set stays permanent, like the 30+ hypercalls on Xen. Compared to a standard OS interface (Linux has in the range of 350+ syscalls) this is still very small. The Solo5 VMM project even tried out another extreme, reducing the hypercalls to fewer than 10 if I remember correctly.
This is true; there are a number of embedded frameworks you could use as well, and you could even run them as virtual machines too. In contrast, we want to make it as seamless as possible by still providing the Linux-OS-like layers if you need them. The goal is that an app previously developed for Linux should be seamless to port: the OS interfaces in the higher-level language should be the same as on Linux, so no code changes.
Yes, we have first experiments running on AWS [1]; we are currently upstreaming the remaining pieces so that everyone can try it themselves. In my view, a main difference from rump is the finer-grained modularity of our libraries. In theory, every library that implements OS primitives (like thread schedulers, heap management, and APIs/ABIs, e.g. the Linux syscall ABI) can be individually selected and replaced. This follows our specialization vision: take only the components you need and choose the best-fitting ones for your use case. For a virtual network appliance, this could mean writing code as close as possible to the virtual NIC drivers: you won't use a standard network stack or a VFS, and you may even want to get rid of any noise caused by a guest-OS scheduler.
How maintainable do you think this is in the long run, compared to, e.g., a Linux-based unikernel? Do you think you can keep pace with the speed at which features are added to Linux?
Since Unikraft is a librarized unikernel system, you can actually choose whether the OS layer should provide a network stack (likely written in C/C++) for your runtime, or whether you prefer doing it in a higher-level language. Similarly, the MirageOS folks developed a network stack in OCaml.