
Non-blocking disk I/O is one thing where NT is really ahead of all *nix OSes. Unlike, say, network I/O, where we have all sorts of platforms (Go, Node) that let you scale by doing async I/O, there aren't many options for disk I/O, primarily because of the lack of *nix primitives.


Author of the post (and the engineer who did the work) here.

There are ways to do non-blocking disk I/O in *nix (aio/io_submit on Linux), but all of them require you to have an open file descriptor first. Does NT allow you to open a file in an async fashion?
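For context, the usual workaround on *nix is to push the blocking open(2) itself onto a thread pool and hand the caller a future, which is conceptually what nginx's thread pools do. A minimal Python sketch of that pattern (the `open_async` name is illustrative, not any real API):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# open(2) has no async variant on Linux, so a common workaround is to run
# it on a worker thread and hand the caller a future instead of blocking.
pool = ThreadPoolExecutor(max_workers=4)

def open_async(path, flags=os.O_RDONLY):
    return pool.submit(os.open, path, flags)

# demo against a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as tf:
    path = tf.name

fut = open_async(path)   # returns immediately; the open happens off-thread
fd = fut.result()        # collect the descriptor when it's ready
os.close(fd)
os.unlink(path)
print(fd >= 0)  # True
```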


Netflix kernel engineer here. We use FreeBSD's async sendfile() and not aio, so it would be a bit harder for us to fix open() latency, since we're not using aio.

I had not thought about open latency being an issue; that's fascinating. Looking at one of our busy 100G servers with NVMe storage, I see an openat() syscall latency of no more than 8ms sampling over a few-minute period, with almost everything below 65us. However, the workload is probably different from yours (longer-lived connections, fewer but larger files, fewer opens in general). E.g., we probably don't have nearly the "long tail" issue you do.
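As a rough illustration of that kind of measurement (a sketch, not the actual tooling; on FreeBSD this would typically be done with DTrace on the openat syscall probes), one can sample open+close latency of a cache-warm file and report percentiles:

```python
import os
import tempfile
import time

# Time repeated open(2)/close(2) of a cache-warm file to get a latency
# distribution, roughly analogous to sampling openat() latency with DTrace.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    path = tf.name

samples = []
for _ in range(1000):
    t0 = time.perf_counter_ns()
    fd = os.open(path, os.O_RDONLY)
    samples.append(time.perf_counter_ns() - t0)
    os.close(fd)
os.unlink(path)

samples.sort()
p50 = samples[len(samples) // 2]
p99 = samples[int(len(samples) * 0.99)]
print(f"p50={p50 / 1000:.1f}us p99={p99 / 1000:.1f}us")
```

On a warm dentry cache the percentiles come out in the low microseconds; the long tail only shows up under cache misses and real disk contention.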


Right, I suspect you have way fewer files than we do and everything is in the dentry cache. Pretty sure that most of your files are bigger than 60KB too :-) (which is our p90).


At my job we have to open many small files from NFS. The latency of open() absolutely murders sequential performance (>80 seconds just to open a scene description). Prewarming the fairly short-lived NFS access caches in parallel evaporates most of the performance penalty.
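That prewarming trick can be sketched like this (illustrative only; `prewarm` is a hypothetical helper, and the real win is on high-latency NFS mounts rather than local tempfiles):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def prewarm(paths, workers=32):
    """Open+close every file in parallel so the lookup caches are hot
    before the sequential pass does its real, latency-sensitive opens."""
    def touch(p):
        try:
            os.close(os.open(p, os.O_RDONLY))
            return True
        except OSError:
            return False
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(touch, paths))

# demo against throwaway local files
d = tempfile.mkdtemp()
paths = [os.path.join(d, f"f{i}") for i in range(8)]
for p in paths:
    open(p, "w").close()
warmed = prewarm(paths)
print(warmed)  # 8
```

Because the opens overlap, total warm-up time approaches the single slowest round-trip instead of the sum of all of them.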


Wow, it is a different world :)


Our machines also run many different services (the CDN product is just one of them), and isolating I/O from different products is difficult. I'd also love to have NVMe.


Any chance you could use dtrace to isolate process specific I/O?


"isolate" here I meant prevent one process's IO from affecting another one


Have you looked into using something like SQLite instead of the filesystem? [1]

[1] https://www.sqlite.org/fasterthanfs.html


SQLite makes a ton of sense for systems that don't need to worry about concurrent writes. It's possible that a CDN's cache system might need to concern itself with concurrent writes.
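For what it's worth, SQLite's WAL mode does allow readers to proceed concurrently with a single writer (writers still serialize against each other, which is the caveat for a busy CDN cache). A minimal blob-store sketch, using the ~60KB p90 object size mentioned upthread:

```python
import os
import sqlite3
import tempfile

# WAL mode lets readers run alongside one writer; multiple writers still
# serialize, which is the concurrency limit a CDN cache would hit.
path = os.path.join(tempfile.mkdtemp(), "cache.db")
db = sqlite3.connect(path)
db.execute("PRAGMA journal_mode=WAL")
db.execute("CREATE TABLE blobs (key TEXT PRIMARY KEY, body BLOB)")
db.execute("INSERT INTO blobs VALUES (?, ?)", ("obj1", b"x" * 60_000))
db.commit()

(body,) = db.execute(
    "SELECT body FROM blobs WHERE key = ?", ("obj1",)
).fetchone()
print(len(body))  # 60000
```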


Other than the concurrent-write issue that another comment mentioned, it looks like this test was done with a data set that fits entirely in RAM. I wish we had enough RAM for the entire internet :-(


TIL what a 'dentry' is!


A few more aio_* syscalls would really simplify things a lot. I suspect the most important missing ones are aio_open()/aio_close() and aio_stat(). The semantics for async open/close would be tricky, though.


8ms actually seems quite slow at that level.


Yes, that's the absolute worst one out of a large sample size (a tiny fraction of a percent). I suspect that openat() was particularly unlucky and got interrupted multiple times.


IOCP doesn't do it [1]. Well, if it does, it's not documented. You can post custom completion packets, so at first glance it looks easy to make open/close async... I think there is probably a good reason why NT won't do that for you.

That's pretty awesome though, that you have to worry about latency of open().

[1] https://docs.microsoft.com/en-us/windows/desktop/fileio/i-o-...


Nope, CreateFile and NtCreateFile are synchronous, only reading and writing are asynchronous.


Why open(2) and close(2) all the time? If I hit this problem—and hacking on Nginx itself were an option—then I'd make the following Nginx changes:

1. at startup, before threads are spawned, find all static files dirs referenced in the config, and walk them, finding all the files in them, and open handles to all of those files, putting them into a hash-map keyed by path that will then be visible to all spawned threads;

2. in the code for reading a static file, replace the call to open(2) with a lookup against the shared file descriptor from the pool, then re-open that descriptor (there is no literal reopen(2); on Linux, opening /proc/self/fd/N achieves it) to get a separately seekable handle to the same file, without having to hit the disk or re-walk the original path through the VFS;

3. (optionally) add fs-notify logic to discover new files added to the static dirs, and—thread-safely!—open them, and add them to the shared pool.

This assumes there aren't that many static files (say, less than a million.) If there were magnitudes more than that, in-kernel latency of modifying a huge kernel-side FD table might become a problem. At that point, I'd maybe consider simply partitioning the static file set across several Nginx processes on the same machine (similar to partitioned tables living in the same DBMS instance); and then, if even further scaling is needed, distributing those shards on a hash-ring and having a dumb+fast HTTP load-balancer [e.g. HAProxy] hash the requested path and route to those ring-nodes. (But at that point you're somewhat reinventing what a clustered filesystem like GlusterFS does, so it might make more sense to just make the "TCP load-balancing" part be a regular Nginx LB layer, and then just mount a clustered filesystem to each machine in read-only-indefinite-cache mode. Then you've got a cheap, stateless Nginx layer, and a separate SAN layer for hosting the clustered filesystem, where your SSDs now live.)
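Step 2 above relies on getting an independently seekable handle from a shared descriptor. Linux has no reopen(2) syscall, but reopening through /proc/self/fd gives a fresh open file description with its own offset, without a lookup of the original path (a Linux-specific sketch; note that dup(2) would not work here, since duped FDs share one file offset):

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"hello world")
    path = tf.name

shared_fd = os.open(path, os.O_RDONLY)  # the pooled, process-wide descriptor

# Re-open through procfs: a new open file description with its own offset.
private_fd = os.open(f"/proc/self/fd/{shared_fd}", os.O_RDONLY)

os.lseek(private_fd, 6, os.SEEK_SET)
word = os.read(private_fd, 5)     # reads from offset 6 of the private handle
greeting = os.read(shared_fd, 5)  # shared handle's offset is still 0

os.close(private_fd)
os.close(shared_fd)
os.unlink(path)
print(word, greeting)  # b'world' b'hello'
```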


I think you are underestimating Cloudflare's scale here. Obviously we do shard across many machines, but each one still has many more files than it's reasonable to keep open all the time.


This will not scale at CF and is not compatible with their current architecture.


The use case here isn't static files, it's an HTTP cache.


>Does NT allow you to open a file in an async fashion?

No.

As a side note, there was an interesting old HN subthread about the differing async disk I/O philosophies of Windows NT vs. Linux:

https://news.ycombinator.com/item?id=11865760


If you mean the IOCP API that requires you to hold memory hostage for it, then no, NT sucks at it. The idea behind that API only works well if there is no switch between kernel space and user space. Otherwise you can emulate that awful API with threads on Unix-like systems, and it will perform the same.

Don't get me wrong, though: caching/file serving in nginx was always very hacky. And blowing up latency by blocking on the filesystem while relying on the VFS cache underneath is just one side effect of that.


> If you mean IOCP API that requires you to hold memory hostage for it, then no, NT sucks at it.

“hold memory hostage” really doesn't help make your point. It's inflammatory wording one step above spelling it Micro$oft and it makes it sound like you don't understand the engineering trade-offs which the Microsoft engineers made. It'd be much better if you explained _why_ you disagree with their choice and especially whether there are differences in the type of work which you've used it for which might explain that opinion.


[flagged]


I remember Slashdot in 1997, too, and would like to avoid repeating it. HN is poorer for substance-free jokes.


I've found reddit to be substantive, sometimes on par with HN, but filled with jokes. Depending on the subreddit, you may be able to scratch your itch there.


I've found most jokes on reddit seem to be attempts to get upvotes.

Then again I'm not a fan of the upvote system generally. One thing I dislike is that downvoted comments are hidden. It equates disagreement with spam and discourages people from expressing genuine beliefs and reactions.

I was going to make sure I haven't downvoted anything here but I can't find if there's a way to check.


I get the feeling you've had a bad experience with the API and that colours your opinion of the overall design of IOCP. It's really quite well thought out.

Also in my experience it handles slightly more concurrent connections with a fair bit less CPU usage. There are some weird and not very nice parts + limitations to the API that makes it pretty hard to write cross-platform things with it, granted.


The memory is either held hostage in your buffer or the kernel’s. For that matter, if you’re making a blocking read/write call the memory is just as much a hostage.


Thought you were emphasizing a lot of your text, until I realized you just have two \*nix in your comment but didn't escape the asterisk!


Well, there's opportunistic non-blocking I/O, which you can do on your event loop with readahead (assuming you know your access patterns) and then RWF_NOWAIT, falling back to a thread pool when that fails. Of course, that only helps if you're reading from a file into a userspace buffer; if you want to sendfile(), it doesn't help.
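A sketch of that pattern (RWF_NOWAIT needs Linux 4.14+; on platforms without the flag, the fallback below degrades to a plain positional read):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)
NOWAIT = getattr(os, "RWF_NOWAIT", 0)  # Linux-only flag; 0 means "just block"

def read_nowait(fd, nbytes, offset):
    """Try a non-blocking page-cache read; on EAGAIN fall back to a thread."""
    buf = bytearray(nbytes)
    try:
        n = os.preadv(fd, [buf], offset, NOWAIT)
        return bytes(buf[:n])
    except BlockingIOError:
        # data not resident in the page cache: do the blocking read off-thread
        return pool.submit(os.pread, fd, nbytes, offset).result()

# demo: a freshly written file is almost certainly page-cache resident
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"hello")
    path = tf.name
fd = os.open(path, os.O_RDONLY)
data = read_nowait(fd, 5, 0)
os.close(fd)
os.unlink(path)
print(data)  # b'hello'
```

The fast path never blocks the event loop; only cache misses pay the cost of a thread handoff.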


How can you use sendfile and TLS at the same time? Aren’t they incompatible?


There are patches for FreeBSD and Linux sendfile(), respectively, that perform the TLS symmetric encryption inside the kernel and use out-of-band signaling for the key exchange.


FreeBSD has had kqueue for the longest time. Unfortunately, Linux didn't adopt it for political reasons, and OS X did a half port and dropped disk I/O support.


kqueue doesn't work for disk files. It always reports them as ready to read.


If you try to use the readiness-based kqueue events, that is the case, but the completion-based aio events do work with disk files on kqueue.


Ah, that slipped my mind.


On FreeBSD kqueue does work for normal files if you use aio_* and SIGEV_KEVENT.


https://blogs.msdn.microsoft.com/oldnewthing/20180725-00/?p=...

This is an Old New Thing post which demonstrates that it is easy to fall off the happy path on NT. I'm not sure the Windows world is much better off here.



