Swarmlet: A self-hosted, open-source Platform as a Service (github.com/swarmlet)
88 points by mono-bob on Feb 15, 2023 | 46 comments


Sadly, like many Docker Swarm based projects, this has become abandonware. I use Docker Swarm with Portainer for managing services at ecoeats[1], a decision made years ago, just before Swarm was revealed to be absolutely dead in the water. I initially used Swarmlet before switching to Portainer: it had far too many bugs and was missing the tooling needed to effectively manage a Swarm running anything beyond stateless Node containers.

With Portainer and Swarm I've been forced to manually intervene with rollouts more times than I would have liked due to Swarm-specific errors and other quirky networking behaviour. At least it's simpler than Kubernetes!

[1] https://ecoeats.uk


I really liked the option to deploy through a git push without additional setup, but it does indeed look like abandonware. Thanks for sharing your experience! If you were starting over, would you choose Portainer and Swarm again? Do you know of any alternatives to Swarmlet?


Portainer can poll your git repo and supports webhooks. I went with Portainer early on and haven't looked back.


Portainer now shows a banner asking you to subscribe on the free version as well.


Portainer also has opt-out analytics via Matomo, enabled by default. According to their GitHub the analytics are hosted in Germany, but the DNS currently points to a server in France. Either way, I find it questionable to have analytics running in self-hosted open source software.


Oh, I wasn't aware of this.


Swarm was great. Docker/Moby should never have abandoned it.

Kubernetes solves a similar class of problems to Swarm, but in a much more complex way. Sometimes that complexity helps solve problems. For many organizations, though, Swarm would have been the better option.

Both Swarm and Kubernetes have their purposes. I'm sad that no option has popped up to replace Swarm in the "simple and easy container orchestration" space. Now it's either Kubernetes or... ECS, I guess.


I've been trying out Nomad on a home server setup. It's still not as simple as Swarm and has some hard edges to learn around, but overall it hasn't been a bad experience, and definitely easier than my attempts at k8s. Nomad also got basic built-in service discovery in the last couple of versions, which addresses the main thing that was turning me away (it used to require running Consul).


Can you describe your setup?

I tried setting up nomad but couldn't figure out service discovery, external ingress / letsencrypt in the time I allotted myself...


Nomad runs as a server (manager) and client (worker) on the main server, and as a client on an rpi3, via systemd on both; the install [0] and deploy [1] instructions worked well enough. It has constraints to control which client takes what workload (e.g. I don't care which node runs ddclient, but photoprism can't move to the rpi, and the sensor readers can't leave the rpi).
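For reference, a constraint roughly like mine looks something like this in the job spec (job and class names here are placeholders, and it assumes the rpi client is configured with node_class = "rpi"):

    job "photoprism" {
      datacenters = ["dc1"]

      group "app" {
        # keep this group off the rpi client
        constraint {
          attribute = "${node.class}"
          operator  = "!="
          value     = "rpi"
        }

        # task stanzas omitted
      }
    }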

Caddy runs as a service for reverse proxying and TLS handling, so all ingress goes through that; there wasn't much exciting there. I was porting over a docker-compose workload, so most services have a static port and I just route to the port in the caddyfile (I pointed the reverse proxy at my router for DNS, so it's by hostname). I'll get Caddy to use Nomad for service addresses at some point.
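A route like that is just a couple of lines in the Caddyfile (hostname and port here are made up):

    photoprism.home.example {
        reverse_proxy main-server.home.example:2342
    }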

For photoprism, though, I've got service discovery set up for its database. It's kind of awkward, since Nomad's native discovery is only exposed through env vars or a file via the `template` block [2], but it does work.
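Roughly what that looks like, assuming Nomad 1.3+ native discovery and a service registered as "mariadb" (names and the env var are placeholders):

    task "photoprism" {
      driver = "docker"

      # native discovery only surfaces through template interpolation,
      # here rendered into an env file for the task
      template {
        data        = <<-EOH
          {{ range nomadService "mariadb" }}
          PHOTOPRISM_DATABASE_SERVER={{ .Address }}:{{ .Port }}
          {{ end }}
        EOH
        destination = "local/db.env"
        env         = true
      }
    }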

Nomad is pickier about setting resource limits, so I had to actually set those to something reasonable.
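E.g. per task (numbers are just examples):

    resources {
      cpu    = 200 # MHz
      memory = 512 # MB
    }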

I've mostly got Docker based services, so this reference [3] has been useful. There's a couple ways to mount the volumes, which is annoying, and there are some gotchas around docker image handling (short story: don't use the `latest` tag).
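A sketch of the Docker driver side (image name/tag and paths are placeholders):

    task "ddclient" {
      driver = "docker"

      config {
        # pin a version (or digest); a cached "latest" won't be re-pulled
        # on redeploy unless force_pull is set
        image = "example.org/ddclient:1.2.3"

        # one of the ways to mount host paths; needs volumes.enabled in the
        # docker plugin config (host_volume + volume_mount is the other way)
        volumes = [
          "/opt/ddclient/config:/config",
        ]
      }
    }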

I probably haven't organized my jobs/groups/tasks well (the analog to the k8s pod hierarchy), but that's a problem for later.

[0] https://developer.hashicorp.com/nomad/tutorials/get-started/... [1] https://developer.hashicorp.com/nomad/tutorials/enterprise/p... [2] https://www.hashicorp.com/blog/nomad-service-discovery [3] https://developer.hashicorp.com/nomad/docs/drivers/docker


> services have a static port and I just route to the port in the caddyfile

Just so I understand: this means deploying a new service involves building a docker image, deploying it, then manually updating the caddyfile (i.e. manual ingress)?

> There's a couple ways to mount the volumes, which is annoying,

Fwiw this is annoying on multi-node docker swarm too - I even consider proper volume support to be one of the strongest arguments for considering k8s, even for somewhat simple setups.


Yeah, that's what I have now. I see the path to letting Nomad deal with the addressing (rather than specifying static ports), but setting up a route in Caddy for a new service would still be manual. I expect you could do some fancy scripting/Go templating with the caddyfile and template block to make it spin up new routes more "automatically", but at that point I think you'd be better off seeing what Consul could do for you. Or Traefik discovery with tags - I expect that could be convinced to work with Nomad, not that I've tried. (edit: seems like yes [0])

And I'd agree that the volume story isn't great compared to what I know of k8s. There's info out there for setting up NFS volumes, or something like portworx or ceph, but that's going beyond what I want to do for 1-3 pet nodes on a home server, I can deal with volumes staying bound to a node.

[0] https://traefik.io/blog/traefik-proxy-fully-integrates-with-...
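If anyone wants to try it, the Nomad side of the tag-based wiring would look roughly like this (untested on my end, per above; service name, port label, and hostname are placeholders):

    service {
      name     = "whoami"
      port     = "http"
      provider = "nomad"

      tags = [
        "traefik.enable=true",
        "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)",
      ]
    }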


Been using traefik with docker swarm for ingress - and it works. But doesn't exactly feel slick imnho :/


You won't find a resource that encompasses all of that. I have a feeling it's intentional, so that you pay for a course or certificate.


Are you saying course authors are removing other people's content from the Web?


Docker should have built a "better" K8s distro. Right now what they have is junk. Even Rancher isn't filling the niche. I don't think K8s is such a huge hill for new folks. Anyone who spends 5 minutes looking at basic deployment YAML can figure out what to do next. Getting a cluster up and running is the hardest part these days. Minikube, K8s in Docker, and even K3s all suck. There is still ample opportunity for them to own this market.


Have you used Rancher Desktop recently?


What makes you think Docker Swarm is abandoned? The latest Docker Engine release (23.0) continues to support it and adds new features.


A few years ago (2018?), I read somewhere - I think it was on Hacker News, even - that Docker/Moby was quietly dropping paid support for Swarm.

I've had a hard time finding much via hn.algolia.com to back this up, other than some discussions in mid-2018, though.


This doesn't look that abandoned? https://github.com/moby/swarmkit

Or are you talking about Swarm the product (versus docker swarm mode)?


It doesn't look well-supported given the desolation in the Issues list. Which is a shame, because I would love to be able to apply docker-compose files to something like that to create groups of services that can talk to each other. Much simpler than a Kubernetes deployment.


I really like the option to deploy through a git push without additional setup, and I am looking for something similar to host a bunch of containers. Does anyone here have experience with such a tool, and what is your experience regarding reliability?


You might want to check out dokku


You might want to be careful with this though: if your application is built on the same server where your production apps are running, the build can eat into memory and affect the performance of your deployment, or even take it down.


Dokku allows for deploying apps from images built in CI - which is quite effective if you also test your image artifacts in CI and don't want to build twice.

If you are using our Nomad or Kubernetes plugins, you can also run the apps on servers other than the one you are building on.
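For example, with an image already built and pushed to a registry from CI (app and image names here are just examples):

    dokku git:from-image myapp registry.example.com/myapp:1.2.3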


Yup, I am aware :) I am just warning them against jumping in without reading much of the documentation.


I wasn't satisfied with any of the solutions, so I wrote Harbormaster:

https://pypi.org/project/docker-harbormaster

It's great if you want to run generic utilities at home (though I've used it at work in internal production and it was good), but it doesn't do ingress, so you have to bring your own.

It's basically a fancy/opinionated wrapper over "git pull && docker-compose up" that lets you specify all configuration in one file/repo.
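Conceptually, for each app listed in the config it automates something like this (paths are placeholders):

    git -C /path/to/app pull
    docker-compose -f /path/to/app/docker-compose.yml up -d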


A 2nd for Dokku. It's dead simple and works on any host. Yes, it is limited to a single-server architecture, but for most people this shouldn't be a problem. Vertical scaling can go a long way.


Maintainer of Dokku here.

Dokku does support both Kubernetes and Nomad as deployment targets, so it's not strictly single-server (though app builds currently are).


CapRover is a nice alternative to dokku/ledokku


Have you looked into Google Cloud Run?


Yarn is a PaaS that deploys through a git push, without additional setup deployment is done from a hosted site with additional setup . . .


I have built all our infrastructure on Docker Swarm before learning about its state. I'm currently weighing a migration to Nomad out of fear of K8s complexity - I worked with K8s at a previous job with more employees than we have now, and it was still a big hassle - but I'm afraid I'll repeat the same mistake and should just bite the bullet.

Does anyone have suggestions?


I'd say go for Nomad! As for getting started...

- Single-server and non-secure (no mTLS; no ACLs) clusters are super easy to set up and a great way to try things out before committing (minimal config sketch at the end of this list).

- ...However, enabling ACLs especially, but also TLS, on a running cluster is going to be more hassle than simply setting up a fresh, properly bootstrapped cluster.

- Their minimum and recommended resource requirements are hugely inflated; you can generally get good mileage out of way less.

- ...However: Do follow their advice on keeping nodes single-responsibility (ie don't run a consul server and nomad server on the same node; generally keep your servers dedicated and not running jobs)

- Consul and Vault integrations are generally rock solid.

- ...However: Nomad native service discovery is not yet rock solid (wrt consistency and template rewrites). Consul Connect may or may not have rough edges (it's been some time since we last seriously tried it).

- learn.hashicorp.com has material for most things you want to do.

- Put all your ACL configuration in Terraform (or whatever else you have for the same purpose)
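To the first point, a minimal single-node, non-secure agent config is roughly this (datacenter and paths are placeholders; don't use this shape for anything that matters):

    # /etc/nomad.d/nomad.hcl -- single node, no TLS, no ACLs
    datacenter = "dc1"
    data_dir   = "/opt/nomad/data"

    server {
      enabled          = true
      bootstrap_expect = 1
    }

    client {
      enabled = true
    }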


Thank you for this list, very much appreciated! I'll keep those things in mind.


In similar position - currently leaning towards self-hosted k3s (it's kubernetes - but somewhat simple).


Rancher + k3s


I have a use for something like this and was intending to build something similar. Unfortunately the whole thing is written in Bash.


What about dokku?


Not worth the effort. Best to learn k8s. Inertia is a thing; and k8s has the ecosystem behind it. It’s also a vibrant project and is evolving.


Best to stay the hell away from k8s if you care about being productive. You're not Google and we'll all be better for it when everyone admits that fact.


It's not that complicated.

It's just that people need to start planning more before they dive into k8s; there's a limit to where and when k8s makes sense for your app. Most people could get away with the more traditional setup of frontend, app, and DB backend for a service-oriented architecture, and still be able to scale up into k8s when necessary.


You don't really need to be Google to have a few runtimes with healthchecks across a few nodes with a few metrics and an ssl-terminating reverse proxy.

This is a pretty basic setup and with k3s one can be very productive at it. In fact, you will very likely hit quite a lot of k8s bottlenecks at Google scale if you throw everything at a single cluster.
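For instance, one of those health-checked runtimes is just a Deployment with a liveness probe, roughly like this (image, port, and path are made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels: { app: web }
      template:
        metadata:
          labels: { app: web }
        spec:
          containers:
            - name: web
              image: registry.example.com/web:1.0.0
              ports:
                - containerPort: 8080
              livenessProbe:
                httpGet: { path: /healthz, port: 8080 }
                periodSeconds: 10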

I feel like we would actually be better off if people admitted they'd rather manage all the complexity by hand all the time than spend a few hours reading the docs on k8s object properties and lifecycles - that's really all there is to it for basic setups like k3s.


Of course, a startup with 1-3 engineers and nothing off the ground yet should probably just run their service on a VPS and keep things super simple.

But for mature companies K8s can make a ton of sense.


Meh. I find k8s + istio + cert manager pretty productive.


Replace Istio with Linkerd for even faster time to productive.



