Hacker News

It turns out that the kernel is smart enough to deduplicate (in memory) the same version of a shared library across VM boundaries, so I don't really see why we need to make packaging handle this, especially if the same can be applied to containers (if it isn't already). Duplicates of the same file on disk are not a big deal, and can be solved by a good file system that handles deduplication. I don't see why we necessarily want to do all of this in a package manager.
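On Linux, the in-memory merging described above is done by KSM (kernel same-page merging), which exposes counters under /sys/kernel/mm/ksm/. A minimal sketch for checking whether it is active; the sysfs path is standard, but the file is absent on kernels built without KSM, so the function hedges for that case:

```python
from pathlib import Path

def ksm_pages_sharing():
    """Return the number of pages KSM is currently deduplicating,
    or None if KSM is unavailable on this kernel."""
    counter = Path("/sys/kernel/mm/ksm/pages_sharing")
    if not counter.exists():
        return None
    return int(counter.read_text())

shared = ksm_pages_sharing()
if shared is None:
    print("KSM not available on this kernel")
else:
    print(f"KSM is currently sharing {shared} pages")
```

Note that for VMs, KSM only helps if the hypervisor opts in (e.g. QEMU/KVM with madvise(MERGEABLE)); for containers, identical files from the same image layer are already shared through the page cache without KSM.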

It's actually a good thing, from a packaging perspective, that we have these duplicates: you remove the host requirements altogether and can focus on what an app needs, and if you want to take advantage of deduplication (on disk or in memory), you can let another subsystem resolve that for you.
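As a sketch of what such a subsystem does, here is a minimal (hypothetical) duplicate finder that groups files by content hash — the same grouping a deduplicating file system or a hardlinking tool would collapse into a single copy on disk:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root):
    """Group regular files under `root` by SHA-256 of their contents.
    Returns only the groups with more than one member, i.e. the
    files a dedup layer could merge."""
    by_hash = defaultdict(list)
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

Real dedup file systems work at the block/extent level rather than whole files, but the principle — identify identical content, store it once — is the same.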

To patch something system-wide, you simply use a common base image; that part is orthogonal to containers. Just because you can have different versions of a dependency doesn't mean you need to. The main advantage of a container having its own version is that you can independently upgrade components without worrying that a change will affect another application.
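A minimal sketch of the common-base-image approach (image and package names here are hypothetical): every service builds FROM one shared base, so patching a library means fixing the base once and rebuilding the children against it.

```dockerfile
# base/Dockerfile -- the one place shared libraries get patched
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssl ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```

```dockerfile
# app/Dockerfile -- each service inherits the patched base,
# so a system-wide fix is: rebuild base, then rebuild children
FROM mycompany/base:latest
COPY app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
```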

You might argue that this is a security concern, but I'd argue that it's more secure to have an easily updatable application than an easy way to update a particular library across all applications. In the latter case, we already know what happens: people don't update the library nearly as often as they should, because it could break other applications that rely on a particular version. At least in the first case we can upgrade with confidence, which means we actually do the upgrades.



This means your security depends on the app maintainer, which is a terrible place to be in. I don't want to have to wait for the latest image of 100 apps and hope they didn't break anything else just to deal with an openssl vulnerability.


If your system consists of 100 apps, you have a bigger problem, and you're likely a shop big enough to deal with it.

I'm working on a production deployment of a CoreOS+Docker system for a client now, and the entire system consists of about a dozen container-images, most of which have small, largely non-overlapping dependencies.

Only two have a substantial number of dependencies.

This is a large part of what excites people about Docker and the like: It gives us dependency isolation that often results in drastically reducing the actual dependencies.

None of this requires statically linked binaries, for example, so no, you don't have to wait for the latest image of 100 apps. You need to wait for the latest package of whatever library is the issue, at which point you rebuild your images, overriding the package in question for them if necessary.
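A sketch of that override step (the base image name is hypothetical, and the version pin is a placeholder): while waiting for the shared base to pick up the fix, an individual image can pull the patched package itself at build time.

```dockerfile
FROM mycompany/base:latest
# Temporary override until the shared base image ships the fix:
# install the patched library directly, pinned to the fixed version.
# <patched-version> is a placeholder for the actual fixed release.
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssl=<patched-version> \
    && rm -rf /var/lib/apt/lists/*
```

Once the base image is rebuilt with the fix, the override can be dropped and the image rebuilt normally.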


One of the touted benefits of containers is shipping images to people with your software. That means that, as a customer, you can't rebuild the image yourself.

It's exactly like statically linked binaries.



