Caddy – HTTP/2 Web Server with Automatic HTTPS (caddyserver.com)
308 points by amjd on Oct 16, 2016 | hide | past | favorite | 79 comments


For a fun time, run it with the -quic flag and browse your site in Chrome.

Thanks to work by Lucas Clemente and Marten Seemann, Caddy ships with a functional (but still experimental) QUIC server implementation[1] you can try right now. Your site will load better over slow connections or while you switch from WiFi to cellular, for instance.

There was a lightning talk by Lucas Clemente just last week at dotGo about QUIC; looking forward to the video being posted!

[1]: https://github.com/lucas-clemente/quic-go


Where does QUIC sit between TCP and HTTP/2?


QUIC is a UDP-based network protocol. It is an alternative to HTTP/2 or HTTP, which are TCP-based protocols. If you visit google.com with a recent version of Chrome, you're using QUIC. Open chrome://net-internals/#quic in another tab and click a connection UID to see the conversation.


This is mostly correct but not entirely.

HTTPS is layered such that it's HTTP > TLS > TCP.

QUIC replaces the 'TLS > TCP' portion with 'QUIC > UDP', which you can then run HTTP or HTTP2 on top of.

QUIC is not an alternative to HTTP2 (yet), although there's work underway to (re-)define HTTP2 in terms of QUIC [1], thereby replacing the awkward transport protocol aspects of HTTP2 with the very similar mechanisms provided by QUIC.

[1] https://tools.ietf.org/html/draft-shade-quic-http2-mapping-0...


Thanks for the clarification, I guess I was confused about how it worked.

For those seeking more detail, I found these two links helpful:

https://docs.google.com/presentation/d/15e1bLKYeN56GL1oTJSF9...

https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-udp...


The not-so-nice part about the newer HTTP versions is that they are not really layered in the classical sense. HTTP/1, HTTP/2 and HTTP over QUIC all carry the same semantics (you get methods, addresses, headers, and body streams), but the implementation of HTTP itself differs; it's not just a substitution of the underlying transport layer. E.g. HTTP/2 doesn't transfer classical HTTP data over a multiplexed stream (like what HTTP over SCTP could have brought us). Instead it carries the HTTP aspects (headers), multiplexing and flow control all in a single layer, which makes layering and generic reuse hard.

I have not looked into that QUIC mapping yet, but I have read that generic use is a goal for it, which is good. If HTTP over QUIC succeeds (it probably will if pushed by Google), I wonder how much use HTTP/2 will still be. Classical HTTP will most likely always exist, as it's easy to implement and is supported by lots of systems (even embedded ones) and libraries.


Is QUIC specific to HTTP, or can it be used as a transport mechanism for arbitrary protocols?


QUIC stands on its own; it just so happens that QUIC and SPDY (the predecessor of HTTP/2) show some convergent evolution.

You could use QUIC to transport other L7 protocols, but tracking down generic-enough implementations may be difficult, or at least that was the case in the past [1][2]. Maybe things are better now [3].

[1] https://daniel.haxx.se/blog/2016/07/20/curl-wants-to-quic/

[2] http://stackoverflow.com/questions/17896432/

[3] https://github.com/google/proto-quic


Do any other browsers support QUIC besides Google Chrome?


See https://lwn.net/Articles/558826/ for a good introduction.


Matt, please, add proxy policy path_hash (path sans host).

That would enable content optimized caching for things like IPFS gateways with multiple backend nodes.
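The idea can be sketched in a few lines of Go (the function and backend addresses are hypothetical, not Caddy's actual proxy code): hash only the path, ignore the host, and the same content always lands on the same cache-warm backend.

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// pickBackend maps a request path to a backend deterministically, so
// repeated requests for the same content hit the same (cache-warm)
// node. Illustrative sketch of a path_hash policy, not Caddy's API.
func pickBackend(path string, backends []string) string {
	h := crc32.ChecksumIEEE([]byte(path))
	return backends[h%uint32(len(backends))]
}

func main() {
	backends := []string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"}
	// The same path always resolves to the same backend.
	fmt.Println(pickBackend("/ipfs/QmExample/file.txt", backends))
	fmt.Println(pickBackend("/ipfs/QmExample/file.txt", backends))
}
```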


Nice idea! Would you open an issue to suggest it so I don't forget? I'm sure a contributor would love to build it in, doesn't sound too hard. (EDIT: I see you posted it in the forum, I'll respond there.)


That sounds great! What ports do I need to open over UDP on the server side? 443?


Yep.


Automatic HTTPS may be Caddy's most recognized feature, but there are more that make it worth a spin.

Configuration simplicity is important. For example, try to properly set up a reverse proxy to Jenkins from nginx: it's almost impossible unless you google around and find the specific nginx snippet in Jenkins' documentation. Deviate and Jenkins will complain about an improperly configured proxy. With Caddy the same can be achieved in one line with just two words (and a slash).
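For reference, the one-liner in question looks something like this in a Caddyfile (hostname and backend port are placeholders):

```
jenkins.example.com {
    proxy / localhost:8080
}
```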

The last feature I used for a project was Caddy's browse directive, which lets you browse the files in a directory (unless there is an index.html). Not only does it look great from the start, not only can it be templated, but Caddy will also happily serve you the contents of the directory as JSON. So now not only humans but also software can browse your files easily.
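A minimal sketch of that setup (domain and path are placeholders); with this in place, a request carrying an Accept: application/json header gets the directory listing back as JSON instead of the HTML template:

```
example.com {
    root /srv/files
    browse
}
```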

Have a look at its modules to get some ideas: https://caddyserver.com/docs

Also, Go makes it easy to write a module of your own or alter one of the existing ones to fit your needs.


So Caddy has been discussed a few times before. My question is: who is using this in production? (In other words, not for a personal or hobby site, but an actual product.) Is anyone using it standalone (no nginx or Apache involved; CloudFlare or similar is OK)? Sorry if this was asked in a previous discussion; I want to know if anything has changed since.


I know of several companies/organizations using Caddy in production; one instance in particular is serving tens of thousands of domains and managing their certificates without difficulty. Netflix uses it for Vector (https://github.com/Netflix/vector/blob/6d013993fc1a5b2ae296f...), ChannelMeter uses it (https://channelmeter.com/), and there are others.

It seems that the biggest trouble that people have using Caddy in production is that there aren't official packages for installing/updating Caddy. Fortunately, it's a single, statically-compiled binary no matter how many plugins you choose, and downloading Caddy is as simple as a GET request. But I anticipate we will have some packages by version 1.0. (There's also this Docker image by abiosoft that has 100k+ pulls: https://hub.docker.com/r/abiosoft/caddy/)

Caddy is generally _easiest_ to use standalone and does fine for most people. Granted, it's not the right tool for everyone, but it's pretty good most of the time. Most of the troubles I've seen reported are in combination with Docker, faulty init system configurations, behind other proxies and load balancers, or trying to serve sites that don't have proper DNS resolution yet (which is required for auto HTTPS, which is on by default).


> It seems that the biggest trouble that people have using Caddy in production is that there aren't official packages for installing/updating Caddy.

Indeed. The closest thing is GetCaddy[1] which is far from ideal.

I think if nginx were to get similarly simple Let's Encrypt integration, the case for using Caddy would become very niche.

nginx is already easy to use and configure, and the only reason I recently switched to Caddy for my personal site, and a couple of smaller clients, was the time saved during server setup.

[1] https://getcaddy.com


> I think if nginx were to get similarly simple Let's Encrypt integration, the case for using Caddy would become very niche.

I hope to see this. If Caddy can inspire/motivate the really mainstream servers to add auto HTTPS, then a significant aspect of Caddy's mission would be a success.

Caddy was fairly innovative on the auto HTTPS front. As interest in the project continues, I expect that we (the 100+ contributors) will keep making Caddy relevant. I'm really looking forward to the improved plugin system / build server and Caddy API to come together.


Another reason why someone might pick Caddy over nginx or apache is to get rid of OpenSSL (and forks). Go's TLS stack is generally considered to be one of the better TLS implementations out there. Despite not yet having received the same amount of attention as OpenSSL (which the authors acknowledge), the fact that it's (mostly) written in a safe(r) language and isn't as bloated as OpenSSL (which has many features that are completely irrelevant to the vast majority of TLS servers, but still cause vulnerabilities - see Heartbleed) would make me feel more comfortable with picking crypto/tls over OpenSSL.


Let's Encrypt is pretty easy to set up using lego on just about any HTTP server. I think Caddy uses it as well; at least go get pulls it from GitHub when you attempt to build Caddy from source. Plus, it's super easy to cross-compile and deploy anywhere since, like Caddy, it's a single statically linked binary.


Fun fact: lego was originally commissioned for use in Caddy. Sebastian has done a great job with it!


We use it on https://hsreplay.net, which serves about 100-200k requests / day behind Cloudflare.

Pros: It's super useful not to have to worry about HTTPS renewal. The caddyfiles are super simple. The Caddy code is beautiful to read and simple to understand. The plugin system is great.

Cons: At load, we got bitten hard by the default proxy timeout behaviour. That particular issue has been fixed in 0.9.3 (fail_timeout).

Haven't had any other issues, but it does feel pretty stressful to run unproven software in prod. Thankfully, we can easily switch back to nginx if we need to.


Thank you for helping to prove it in production!


And thank you for writing it! It's a great initiative.


I switched https://iwl.me to it from nginx a few months ago and I'm very happy so far. I use it as a proxy for a Go app server and also to serve static files.


Can you explain the problems you had with nginx that prompted you to migrate? HTTP/2 and HTTPS?


No particular problems, just wanted something simpler and written in a memory-safe language.


Oh, and of course, automatic certificate renewal without configuring certbot.


It was briefly used by MELPA [0] but they moved back to nginx; performance under load appeared to be the driving reason. (See their issue tracker for a bit of history.)

[0] https://melpa.org/


Gopher Academy and golang news use Caddy (if they count) according to this:

https://golangnews.com/stories/43


We're using Caddy in production. It's working quite well so far. It's a really smart helpful project.

I'd recommend reading the documentation. We had a nasty surprise when Caddy decided all our backend servers were broken and it removed them. After that we stopped naively running the proxy with the default settings. https://caddyserver.com/docs/proxy


We changed the default proxy failover behavior in 0.9.3. You now have to opt-in to failure handling by specifying the behavior you want. See https://github.com/mholt/caddy/pull/1135 for more details.
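For illustration, opting back into failover in a 0.9.3-era Caddyfile looks roughly like this (the backend address and thresholds are placeholders; check the proxy docs for the exact subdirective names):

```
proxy / localhost:8080 {
    max_fails 3
    fail_timeout 10s
}
```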


I've been using it on several sites for over a year as a simple reverse proxy with lets encrypt built in. Resource usage is low and performance is good, certainly good enough for ~95% of websites which are low to med traffic. Here is one example in production:

https://golangnews.com


It's a patchwork of different Go libraries, Caddy doesn't implement anything itself like Apache or Nginx would. It's useful but certainly not "proved" in a production environment. Heck, you don't need Caddy if you are already using Go to develop servers, just use the libraries it uses directly.


It is simply not true that Caddy is just a collection of Go libraries, since parts of it, like the Caddyfile concept, were developed by Matt Holt, the (main) dev behind Caddy. Apart from that, I would definitely say that Caddy is far easier to use than Apache or Nginx, which is a big bonus if you are just getting started in web development.


> It's a patchwork of different Go libraries, Caddy doesn't implement anything itself

If that were true (I'm no expert, but I don't believe it to be the case), why would it be a problem? If it implements what people want efficiently and securely, it shouldn't matter that it isn't written from scratch. Yes, anyone can tie libraries together, but if you are trying to work on something else you might not want to spend time doing that and resolving all the edge cases that you'll run into along the way.

> like Apache or Nginx would.

Apache started life as a huge collection of patches onto something else rather than implementing everything itself, and many would argue that this history still shows in negative ways at times.

Apache is a great project, but it really isn't a good example to use when trying to make the case for implementing everything in-project as efficiently as possible instead of using external dependencies.

> It's useful but certainly not "proved" in a production environment.

Some have already indicated otherwise on this thread.

Though, as it hasn't been around all that long and has relatively recently seen significant internal changes, I'll grant that if you are being very careful you might not consider it mature/refined/proven enough for some production environments.

> if you are already using Go to develop servers, just use the libraries it uses directly.

IF you are already using Go. Many people aren't. I for one don't.

IF it were true (which I don't think it is) that it just strings libraries together

IF it didn't detract from your other ("not writing a http* proxy" based) project goals; because it took zero time & effort to put those libraries together and test & support the arrangement going forward, dealing with changes to said libraries over time, edge cases in their interactions, new and interesting problems in the wild due to odd client applications and proxies connecting to it, ...

I don't use Caddy yet (I have experimented with it and when time permits it will probably become part of my infrastructure at some point soon) so I have no particular axe to grind in support of it (other than it looks useful for my use case due to the relatively hassle free config and automatic LE certificate processing), but your argument against using it is at best flimsy.

Of course you may have registered a throwaway account in order to just troll a bit and get reactions; in which case good show sir, you appear to have achieved your goal!


> Apache started life as a huge collection of patches onto something else rather than implementing everything itself, and many would argue that this history still shows in negative ways at times. Apache is a great project, but it really isn't a good example to use when trying to make the case for implementing everything in-project as efficiently as possible instead of using external dependencies.

That's not quite true. Maybe you are confused about the "module" aspect of Apache httpd and what it really means. Sure, there are a bunch of external, third-party modules, but the bulk of Apache's capability is handled by bundled, official, "in-project" modules, not external dependencies.


> Maybe you are confused about the "module" aspect of Apache httpd and what it really means

No, I'm referring to Apache originally starting life as a series of patches to the code for the NCSA HTTPd server and related libraries: stringing together existing code rather than being a fresh new implementation, as implied by the comment I replied to.


Do you really need a throwaway account to make a comment?


I think there's a certain subset of people that like to read HN but not submit/comment, either for reasons of anonymity or just because they don't want to. They may be inclined to comment from time to time, and create a temp account to do so, since HN does not allow anonymity. Also, it's possible they had an account that was banned/shadow banned, and refuse to make another full account out of protest.

That is, this may not be someone using a throwaway account in lieu of their real account, but because they don't have a real account to use.


It's funny how when they ask you if you really have to be anonymous it tips the scales towards "yes".


I used it in prod for a couple of months, then all of a sudden it stopped working, and I could not figure out how to fix it since there is not much information on the web.

I removed it and installed the latest nginx. In my opinion it's not prod-ready.


All of this does look lovely.

I guess a big question for me would be:

What are the downsides? When should I not use this?

Also, is there something similar to the Baader-Meinhof phenomenon, but for finding software that neatly solves a problem right after you've spent a while doing it yourself?


HTTP/2 out of the box with a free download is a great reason to use it over nginx. Are any performance comparisons available though? I suspect nginx's maturity may make up some of the difference.


Does nginx not have HTTP/2 or a free download? I've got a couple of webservers that, as best I can tell, are responding to HTTP/2 requests, running nginx (that I didn't pay for).


> Does nginx not have HTTP/2 or a free download?

It does, as a module, not as core code[0].

However:

> The module is experimental, caveat emptor applies.

[0] https://nginx.org/en/docs/http/ngx_http_v2_module.html
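For reference, enabling that module (on an nginx built with it) is a one-word addition to the listen directive; the certificate paths here are placeholders:

```
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;
}
```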


I think the parent is making reference to Caddy generating the proper certificate files for you automatically using Let's Encrypt.


Also worth mentioning if you want automatic HTTPS is Russ Cox's implementation that works with Go's built in HTTP server:

https://github.com/rsc/letsencrypt

I've used it. It's incredibly easy. It seems he now recommends this more official-looking package:

https://godoc.org/golang.org/x/crypto/acme/autocert


I cannot recommend it enough. Compared to nginx, you can set up a reverse proxy with a configuration many times shorter, and HTTPS without hassle. Also, what it does is much more readable.

It is not without quirks and there is no package in the Debian repos, but I recommend giving it a try.



Matt Holt, the creator of Caddy, was recently a guest on the Go Time podcast, where he talks quite a bit about Caddy as well as TLS and the ACME protocol[0]. I really enjoyed this episode and recommend others check it out.

[0]: https://changelog.com/gotime/14


I already have nginx set up to use HTTP/2. Would I get any benefits from switching?


Reasons to switch? Most of the possible ones are listed under Features and in the FAQ on their website; see if any of them are compelling enough. It also lists easy deployment, but that is arguable at this point: running it properly on a server will most likely involve a supervisor such as runit or daemontools, with its own user. It does have some init scripts included in the git repo, however. Also, there are no packages for your favourite Linux distro yet.

I would pick Caddy over Nginx when in need of a capable webserver for playing around, while being too busy/lazy to set up a proper (https, http/1.1, http/2) Nginx instance.


Only if you wanted to use the automated Let's Encrypt certificate feature in Caddy.


That sounds like a very useful feature for new installations. However, I already have automatic renewal set up with nginx.


For static files:

1. Mimefy plug: check

2. Gzip feature: check

3. HTTP/2 new protocol: check

4. Auto SSL: check

5. Cache static files in memory to avoid reading them from disk, then Mimefy-ing, then gzipping on eveeeery request: MISSING <- this plugin would be nice to have, to reduce disk I/O and CPU.


I am not a Caddy developer but I noticed the same plugin missing, so I started my own. https://github.com/nicolasazrak/caddy-cache still needs a lot of work though.


Cool! Thanks for working on this. It may address one of the oldest issues: https://github.com/mholt/caddy/issues/10


Wow, now you've forced me to switch from what I have to Caddy. Cool, nazrak, thanks!!


Isn't 5 handled by the OS?


The OS can cache the read, but it can't help with gzipping, which still costs a lot of CPU cycles.


With HTTP2, you're no longer limited to a request-response model. What interfaces do people use to expose this new paradigm to the application? (e.g. application-based push).


They usually don't, because server push is a micro-optimization, proper use of which requires a lot of effort from the server (and perhaps the client), as evident from Google's research on the matter [1].

Google has coded their App Engine to read a 'push manifest' generated by a tool they publish [2]. Akamai gives you a GUI [3]. Cloudflare wants you to manually set headers [4] defined by the brand-new W3C draft 'preload' [5]. Last year, the Caddy devs blogged that HTTP/2 Push is essentially a big exercise for the reader/implementer [6].

I'm currently unaware of any web application framework which exposes idiomatic hooks for using HTTP/2 to push additional resources to the client. There are some generic server push addons or plugins, but they use older techniques from the websocket or pre-websocket days.

[1] https://news.ycombinator.com/item?id=12224258

[2] https://github.com/GoogleChrome/http2-push-manifest

[3] https://blogs.akamai.com/2016/04/are-you-ready-for-http2-ser...

[4] https://blog.cloudflare.com/announcing-support-for-http-2-se...

[5] https://www.w3.org/TR/preload/#server-push-http-2

[6] https://caddyserver.com/blog/implementing-http2-isnt-trivial


Also, server push ignores the client's cache, so it's not actually a good optimization unless you're pushing things you know aren't in the cache, or they're small enough that you can afford to re-send them every time.


Push was just an example. HTTP2 is fundamentally different from HTTP; it's more akin to TCP in L7. Can applications take advantage of this fundamental paradigm shift? What about for non-browser clients (api clients)?


Server push is pretty much the only real difference from an interface perspective. Otherwise HTTP/2 is semantically the same as HTTP/1.1, just with certain optimizations like header compression and multiplexing. I don't think we'll see many applications built on top of HTTP/2 libraries wanting to directly control that kind of plumbing.

Even server push can be abstracted away via a cache. Pushed resources fill the cache, and when the application tries to fetch those resources the underlying HTTP/2 library could return the cached resource. This should be quite interesting to API clients, so that services don't have to make aggregate resource endpoints just to avoid round trips.

I do think future applications will want to have code triggered when a pushed resource arrives, though. I'm not aware of anyone doing this but it could be an interesting alternative to long-polling or streaming. That said, long-polling and streaming become very attractive within HTTP/2 as well, so it'll be interesting to see what developers end up doing.


Nothing that HTTP/1.1 and websockets can't do, really.


Sure, but then you have a websockets interface; is there something similar for HTTP/2 that breaks the resource-request/resource-response model?


grpc uses it for streaming. This means that if you use grpc for RPC, you are not limited to sending one request object and getting one reply object; there can be a stream of request and reply objects in each direction. You can e.g. make things like server-sent events or some websocket use cases superfluous with that: you send an empty request body (headers contain the method you want to call) and get a stream of status updates (events, chat messages, ...) in the response stream.

However, this is not something that is limited to HTTP/2. In principle you could do the same with HTTP/1, as request and response bodies were already streams. However, there are some limitations: first, most HTTP/1 implementations only allow a certain number of parallel HTTP connections, which means long-running streams are not useful because they block other requests. The second issue is that many HTTP library implementations (including the XHR browser API and the current fetch API) do not allow reading bodies in streaming form. This is also the blocker for having full grpc support in the browser. WIP browser APIs (the fetch API with readable stream support) will allow making use of these capabilities, for both HTTP/2 and HTTP/1.


I run high-load nginx servers that typically serve static 1-2 GB files at 1 Gbit throughput.

Does anyone with Caddy experience know how well it can be used to serve large static files under heavy load, or is Caddy best used to serve smaller files like a website?

Thanks!


Caddy should serve you well. I would be interested in knowing how it goes for you if you try it! If it has problems, let's fix them.


It allows wildcard domains/subdomains with automatic certificates... how does it know which certificate to serve without an IP-to-domain mapping? Is it relying on SNI (i.e. no support for old devices)?


Yes, Caddy uses SNI. Although it can't get wildcard certificates from Let's Encrypt, it can obtain certificates during the TLS handshake for a specific server name.


Looks like HTTP/2 and Auto HTTPS servers written in Go are on the rise, https://armor.labstack.com/

Good to have options!


Apache license, for anyone wondering.


How battle-ready (that is, security-wise) is this beautiful piece of software?


In what regards? Caddy has never been vulnerable to a number of widespread CVEs including Heartbleed, DROWN, POODLE, and BEAST. Caddy uses TLS_FALLBACK_SCSV to prevent protocol downgrade attacks. Like any other web-facing service, it's exposed to DDoS attacks. I've never heard of a machine being compromised by exploiting Caddy...

If anyone has a vulnerability to report, please email me directly[1] (or if it's not serious, a PR would be faster).

[1]: https://github.com/mholt/caddy/blob/master/CONTRIBUTING.md#v...


Any benchmarks?



