
What makes containers in general, and Kubernetes specifically, compelling to me are the following:

- the deployment artifact is a "fresh" instance that "boots" in a fraction of the time a new EC2 instance would: a couple of seconds versus a few minutes

- since the deployment artifact is a fresh instance, there is no need to manipulate an existing instance already running your application as part of your deploy process (e.g. scp, ssh, unpack, run commands, restart/reload application)

- if you design your build pipeline appropriately, the actual data that needs to be shipped is effectively just a diff from the previous image. In my org, that could mean the difference between shipping a 300MB artifact to ~300 instances and shipping a 300KB artifact to almost certainly fewer than 300 Kubernetes nodes

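To make the layer-diff point concrete, here's a sketch of the Dockerfile ordering that enables it (a hypothetical Python app; the base image, packages, and paths are all illustrative, not my actual setup). Only the layers after the first changed instruction get rebuilt and re-pushed:

```dockerfile
# Illustrative sketch: order instructions from least- to most-frequently changing.
FROM python:3.11-slim

# System packages: change rarely, so this layer stays cached across deploys
RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*

# Dependencies: rebuilt only when requirements.txt changes
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# Application code: changes every deploy, but is only a small final layer
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]
```

With that ordering, a routine code-only deploy pushes just the final COPY layer; the registry and the nodes already have everything above it.
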
There are other things as well, but I wanted to stay roughly within the parameters of your examples.

If your organization doesn't already have a continuous integration story, I'd say it probably isn't ready for containers in production. If your cloud footprint isn't costing you let's say >= $50k/mo or so, I might argue that the engineering effort to Kubernetize your things is probably better spent in more direct ways on your product.

If you do have a CI/CD story, and you do have a very wide, expansive, and expensive cloud footprint, containers and Kubernetes are extremely compelling possible solutions.



What if you're in the ground-zero scenario: there is an application but practically no infrastructure?

I'm in that situation now (with a SaaS product), and my approach so far has been:

* set up tests

* set up Docker images

* set up docker-compose for easy local deployments

* do lots of work to make the app behave inside Docker

* take control of AWS and IAM using CloudFormation, making it easy to set up and roll back permissions

* bootstrap KOPS/K8s with CloudFormation

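For reference, a minimal docker-compose.yml for that kind of easy local deployment might look roughly like this (service names, images, ports, and credentials are illustrative, not the actual stack):

```yaml
version: "3"
services:
  app:
    build: .                # built from the Dockerfile in the repo root
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - dbdata:/var/lib/postgresql/data   # named volume so data survives restarts
volumes:
  dbdata:
```
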
Future work:

* set up Jenkins CI on top of K8s

* K8s deployments of the app for CI

* staging K8s deployments

* K8s in production

This has been my first venture in operations and it's been very interesting and rewarding, but at times it gets very intimidating to "be" operations with no senior guidance. I enjoy the freedom and responsibility and I feel like I at least have a significantly better grasp on this than anyone else in the company, but I wish I knew of a way to feel like my work was good in an absolute sense, like "definitely good enough".

Has anyone here gone through a similar scenario, bootstrapping ops infrastructure in a small business? Would you have any remarks regarding architecture, or the politics required to make the importance of solid ops and the improvement of software reliability and security clear?


Generally it's a fun resume builder but not actually time well spent on your business. One super server can go far enough, and hopefully you can design it to be able to scale as needed later.


> you can design it to be able to scale as needed later

What's different about what you're referring to above and what the commenter mentioned? It seems to me like they were designing the app to be able to scale.


Except they are also doing the yucky dev-ops work needed for the scaling. It is a lot of extra work with no value until later.

Say they are making a crud app. Keep no state in session. Deploy as easily as possible. Boom, done. Later you can add a load balancer to scale to 100 servers. Right now you can run it on 1. It is 'designed to scale'. But actually running your single app in a 1 node k8s cluster? That is days of extra work for nothing right now.
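
To sketch the "add a load balancer later" step: because the app keeps no state in session, going from 1 server to N can be as small as an nginx upstream block (hostnames and ports illustrative):

```nginx
# Minimal sketch: nginx round-robins requests across stateless app servers.
upstream app_servers {
    server app1.internal:8000;
    server app2.internal:8000;
    # scaling out later = adding lines here
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
    }
}
```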


This is a risky assertion to make, at least in some of our cases.


All engineering advice comes with the implicit caveat "subject to your local concerns".

Yes, of course you may be working in a startup that requires 25 GPUs to serve even one customer. No sarcasm; I can imagine some startups that might meet that requirement.

But there are an awful lot of startups that massively overengineer their footprint early, when all they need is a web server with a cold or hot spare (use a load balancer with automatic failover if your cloud offers one; if not, I wouldn't stress, since automatic failover on prototype-level application code can cause its own issues) and a database server with a good backup story and some mechanism for quickly bringing up a new one if necessary. (This generally leads you to some sort of clustering thing or a hot replicated spare, because it doesn't take long before your database requires hours to rebuild from a backup.)

You're often better served just giving occasional thought to how you might split things up in the future and using that to at least hint your design than actually splitting things up immediately.


It's tough to give a definitive answer because every company is different, but I work at a very small strange shop, 5 devs + 1 manager, maintaining 10-15 custom websites (5 of which are on a single VM), and I have been deploying our new apps in Docker (no K8s). I use each container as a "miniature VM" which runs the entire web app (except the database), blasphemous I know. Compared to putting multiple apps on one box, the Docker method adds some minor complexity, but keeps apps isolated. That was my biggest requirement: to prevent devs from cross-pollinating applications, which happened constantly when everything was on a single server. It was much simpler than setting up Puppet on a bunch of legacy machines. I also considered putting each new app on its own VM, but went with Docker because a lot of our apps hardly get any traffic, and it would have wasted quite a bit of resources to spin up a VM for each (all our servers are in house).

The pros to Docker so far:

* Dependencies: Dockerfile gives a list of explicit system dependencies for each app. This can be done in other ways with package files or config management, but this was not being done before, and this is an easy catch-all to force it for any type of environment.

* Logical grouping: the app environment (Dockerfile + docker-compose.yml) lives alongside the codebase in a single git repo.

* Deployment: deploy to any box with `git clone myapp && docker-compose up` for testing/dev instances or migrations.

* Development: we mount the codebase from a host directory into each container, with git hooks to update the codebase, which works well for us (we have no CI).

Plus it's fun!

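The mount-the-codebase-for-development pattern they describe can be sketched as a compose fragment (the paths and service name are illustrative, not their actual setup):

```yaml
services:
  web:
    build: .
    volumes:
      - ./src:/var/www/app   # host checkout bind-mounted into the container;
                             # a git pull (or hook) on the host updates the running app
    ports:
      - "8080:80"
```
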
Cons:

* Operational complexity: devs/ops teams probably won't want to learn a new tool. I set up a Rancher instance to provide a GUI, which makes things a bit easier to swallow. It has things like a drop-in shell, log viewer, performance metrics, etc.

* Network complexity: we never needed reverse proxies before, now we do.

* Clustering/orchestration: we don't cluster our containers, but the more we add, the more I think we might want to, which would add a whole new layer of complexity to the mix and seems unnecessary for such a small shop.

* Security?: lots of unknowns; lack of persistence can be bad for forensics, etc.

* Newness: documentation isn't great, versions change fast, online resources may be outdated.

Like you, I'm sometimes unsure if this is the right choice. Maybe a monolithic server or traditional VMs + Puppet would be easier, simpler, better? In the end, I think Docker just fit with the way I conceptualized my problem so I went for it. You may never get that "definitely good enough" feeling, but if it fits your workflow and keeps your pipeline organized and manageable, then I say go for it.


Very interesting! I am a solo guy but I sort of followed the same path you did. And when I had to go down the Kubernetes road because managing multiple Docker containers over multiple boxes became too complicated, I just went back to one website = one VM... giving me time to learn all the k8s stuff, which will probably be useful soon, just not right now.


That's interesting to me, the Rancher bit: I went the route of writing down all my routine docker-compose invocations in a Makefile, and I gave that to the devs with built-in documentation (a list of targets plus examples of workflows), but I can see how Rancher could standardize that.


Docker-compose is horribly broken; it manages deps in much too coarse a manner, effectively becoming a broken package manager on top of whatever the container has.


I agree the code quality and "feature parity"/consistency with Docker is absolutely crap. I'm now hoping K8s will provide a better experience while also being more amenable to quick and scalable AWS deployments.


> What if you're in the ground-zero scenario: there is an application but practically no infrastructure?
> I'm in that situation now (with a SaaS product), and my approach so far has been:
> * set up tests
> * set up Docker images
> * set up docker-compose for easy local deployments

In my personal opinion, this is all good work to do. One minor nit: You probably want to get out of the habit of thinking about `docker-compose up` as a deployment, and just think of it as a much cleaner, much more "prod-like" local development environment.

> * do lots of work to make the app behave inside Docker

I'm curious about this. What was the overall nature of the work? Things you would have had to figure out in a distributed platform anyway, or container-specific issues?

> * take control of AWS and IAM using CloudFormation, making it easy to set up and roll back permissions

This is amazing, I applaud you for your foresight. I wish we had known about this pattern much earlier.

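For anyone unfamiliar with the pattern: a minimal CloudFormation sketch of IAM-as-a-stack, so permission changes go through stack updates and can be rolled back (the policy name and the specific actions here are purely illustrative):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  DeployerPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: app-deployer   # illustrative name
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - ecr:GetDownloadUrlForLayer   # pull image layers
              - ecr:BatchGetImage
            Resource: "*"
```

A bad permission change then becomes `aws cloudformation` stack rollback territory instead of hand-editing policies in the console.
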
> * bootstrap KOPS/K8s with CloudFormation

This is where there's going to be another giant time sink, unless you're already pretty familiar with Kubernetes' resources (Deployments, Services, PersistentVolumeClaims, Secrets, etc.), how they work together, what they enable, what their limitations are, and so on. Not to mention picking a network overlay plugin, deciding on a service discovery model, and understanding how Kubernetes works. BTW I am currently climbing this learning curve myself, and at least for me it's been pretty heady stuff at times.

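For a flavor of what those resources look like, here's a bare-bones Deployment plus Service (the name, image, and ports are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # illustrative
spec:
  replicas: 2                # Kubernetes keeps 2 pods running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0   # illustrative
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp               # routes to pods with this label
  ports:
    - port: 80
      targetPort: 8000
```
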
> Future work:
> * set up Jenkins CI on top of K8s
> * K8s deployments of the app for CI
> * staging K8s deployments
> * K8s in production

Everything here seems relatively sane, in broad terms. I mainly caution against moving too quickly, and against spending engineering effort on this if you are also an IC for the applications your teams are writing.

>This has been my first venture in operations and it's been very interesting and rewarding, but at times it gets very intimidating to "be" operations with no senior guidance.

Honestly, you seem to have the right mindset for this and so far you haven't mentioned anything that I see as a huge red flag.

>I enjoy the freedom and responsibility and I feel like I at least have a significantly better grasp on this than anyone else in the company, but I wish I knew of a way to feel like my work was good in an absolute sense, like "definitely good enough".

Welcome to ops! There will never not be an operational issue that either needs to be fixed or is silently waiting to strike at 3AM on Monday morning.



