Hacker News | nocut12's comments

High Flying Bird was a Netflix movie shot on iPhones. Granted, that was Soderbergh, so he probably gets a lot more pull than most other filmmakers. It was a movie where they bought the distribution rights and didn't produce, so maybe that stuff is different. I'm pretty unclear on what really constitutes a "Netflix Original" — lots of stuff that gets the label is really just distributed by them.

Pretty sure some of their other original stuff was shot on film too, which obviously wouldn't meet these requirements. At least, I know the non de-aged scenes for The Irishman were done on film. But again, maybe the big guys get more leeway here.


I don't know enough about this to give a real answer, but there is some neanderthal ancestry in modern day African populations.

https://www.sciencemag.org/news/2020/01/africans-carry-surpr...


One thing that points to a big difference is art. There's some claimed Neanderthal cave art, but we're talking about things like pigment circles and lines -- nothing on the level of the modern human paintings at Chauvet or Lascaux.

There's enough evidence to convince me that Neanderthals decorated things and probably could think symbolically, but to me, based on the evidence we have at least, it's pretty clear that modern humans went about this stuff differently.

I don't know if I think this really means "less intelligent," but I think this does point to some fundamental differences in how they thought. And who knows, maybe we'll find some beautiful Neanderthal cave art that disproves this. Our ideas about Neanderthals have changed a lot over the last few years after all.


The earliest Homo Sapiens remains are from ~200kya while the earliest (crude) Homo Sapiens paintings are from ~75kya. Lascaux, Chauvet, Altamira et al are ~20kya.


For a to do list, I think that makes a lot of sense. The items at the top of your list are higher priority, so having higher contrast text and a more saturated background up there draws your attention to the stuff that's most important.


If it's perspective corrected for the camera, it would probably look very distorted for anyone else on set -- whether there's depth or not

And that's certainly not the goal with this. Something along those lines has been around for a while (https://en.wikipedia.org/wiki/Cave_automatic_virtual_environ...). This system seems specifically targeted for solving challenges for film production, as it probably should be.

I am pretty impressed that real time rendering has gotten good enough to use for these purposes. I certainly wouldn't have expected that those backgrounds in the show were coming out of a video game engine.


They mention they cannot push enough GPU juice to the screens, so they only render the camera focus area in full resolution. Also, there is a 12-frame lag, which prevents moving the camera too fast.


One solution to this would be to put the camera on a fixture that replays the same motions every time, so they can do a 'dry run' and correct the rendering. (Putting a camera on a fixture is not new, IIRC they did it in Back to the Future - https://www.youtube.com/watch?v=AtPA6nIBs5g is a really good yet succinct documentary on it)


IIRC they did it in Star Wars for the space battles (but it is forty years since I read about it, so my memory may be playing tricks on me).


They did. They only had so many models of the spaceships, so multiple takes with a programmed camera path and compositing were used to increase the number of vessels in the scene.


> they cannot push enough GPU juice to the screens

...yet. That's just a matter of waiting a few more years.

I wonder if most of the next Star Wars movies will be shot with this tech.


I don’t fully get this. They could just employ different computers to render different parts of their cave. It seems more like a cost savings thing than a technical limitation.

And I’m not sure why you’d skimp on a few PCs if you’re already building a humongous LED wall, so maybe there’s something I’m missing.


I worked on a somewhat similar project in 2015, though not as complex as this, to build background videos for DotParty, using UE4 for panoramas and then stitching them. One of the biggest issues we found was that, because this is a game engine, a lot of things are not deterministic, so if we used multiple cards or computers, particles and other environmental effects would not be in sync, and the stitches were glaring.
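A toy illustration of that problem (hypothetical code, not the actual DotParty/UE4 setup): if each render node drives its particle simulation from its own RNG state, the nodes drift apart frame by frame, which is exactly what shows up as glaring seams. Sharing one seed keeps them in lockstep:

```python
import random

def simulate_particles(seed, steps=100, n=5):
    # Stand-in for one render node's particle sim: each particle
    # takes a random walk driven by the node's RNG state.
    rng = random.Random(seed)
    positions = [0.0] * n
    for _ in range(steps):
        positions = [p + rng.uniform(-1.0, 1.0) for p in positions]
    return positions

# Two nodes with a shared seed produce identical particles -> clean stitch.
assert simulate_particles(42) == simulate_particles(42)

# Two nodes with independent seeds diverge -> visible seams at the stitch.
assert simulate_particles(1) != simulate_particles(2)
```

Real engines are harder because the non-determinism also comes from timing, threading, and GPU-side effects, not just seeds, but the failure mode is the same.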


Yeah, that’s a good point. And taking out the particles and doing those separately is probably near impossible.

For the non-visible screens it wouldn’t matter that much, but they’d still end up with the moving frustum for the main engine.


I believe it's been improved in later versions, as they've focused on these use cases, and there might even be deterministic particles now, but I'm not sure, because I've been out of the VFX market for quite a while.


The part you are missing is the insane complexity involved in keeping perfect frame sync with low latency across GPUs and machines, especially when some final compositing of partial outputs is involved. The stuff sounds simple on paper, and it looks like you can just go buy the tech, unpack it and switch it on. The reality is nothing like that. The off-the-shelf tech is fiddly and barely stable, because it is always a low-priority feature added with the least possible effort.


If you have a 12-frame delay regardless, you have an awful lot of time to get your clocks in sync.

Obviously that tech is not simply unpackable, because they’re on the cutting edge. But that’s also why you could expect some customization.


The overall latency says nothing about the sync precision required. The displays need to be synced and the graphics cards need to have their vsync synchronized between them (usually via dedicated hardware). If the displays are out of sync, you immediately get visible tearing at the seams.

If your parallel renderer divides the image along a grid that does not correspond to display boundaries, you need to gather and composite the partial framebuffers after rendering them. This means that you're now sending frames across the network, and you need to take care that you aren't compositing frames from different timesteps — for example, because the part of the rendered framebuffer that goes to compositor/display node A arrived in time, but the part going to compositor/display node B somehow didn't.
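The hazard in that last sentence can be sketched in a few lines (hypothetical names; real systems do this with hardware vsync/genlock and frame counters in the transport protocol): tag each partial framebuffer with its timestep, and have the compositor refuse to stitch tiles from different frames.

```python
from dataclasses import dataclass

@dataclass
class Tile:
    node: str     # which render node produced this tile
    frame: int    # timestep the tile belongs to
    pixels: bytes # partial framebuffer contents

def composite(tiles):
    # All tiles must come from the same timestep, or we'd stitch
    # frame N next to frame N-1 and get a visible seam.
    frames = {t.frame for t in tiles}
    if len(frames) != 1:
        raise ValueError(f"tiles from different timesteps: {sorted(frames)}")
    return b"".join(t.pixels for t in sorted(tiles, key=lambda t: t.node))

# Both tiles from frame 120: composite succeeds.
assert composite([Tile("A", 120, b"aa"), Tile("B", 120, b"bb")]) == b"aabb"

# Node B's tile is a frame behind: the compositor must drop/retry,
# not silently mix timesteps.
try:
    composite([Tile("A", 121, b"aa"), Tile("B", 120, b"bb")])
    assert False, "should have rejected mixed timesteps"
except ValueError:
    pass
```

Detecting the mismatch is the easy part; the hard part the parent describes is doing this at display rates with bounded latency, which is why dedicated sync hardware exists.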


“They could just...”

Pretty much every time I’ve thought this, it’s turned out I was underestimating the difficulty of doing “just” that.

If it were just that easy, wouldn’t they have done that already?


Who knows? Sometimes people make things a lot more difficult than they have to be.

That isn’t always the case, but asking the question is better than the alternative.


From the video posted upthread, around 3:40 it looks like they're doing just that: https://youtu.be/gUnxzVOs3rk?t=220


The most interesting aspect to me is that the system is pulling double duty by displaying both a dynamic, perspective-correct backdrop for the camera's POV and a static view for environment lighting and reflections outside of the camera's view frustum.

I wonder if they had to take care to mitigate artifacts caused by the dynamic view bouncing off of surfaces facing away from the camera.


What artifacts?


I guess he is thinking about situations where you would use a negative fill


What does that mean?


Look at the ceiling, you can see it moving.


If you want to learn more about this stuff, the guy in the video wrote a great book called Origami Design Secrets (https://www.amazon.com/dp/1568814364/). It's certainly still *the* book about origami design.

For something that still touches on some of this stuff but is more beginner/child friendly, I'd recommend Jun Maekawa's book Genuine Origami (https://www.amazon.com/dp/4889962514)

There's also the OSME books (http://osme.info/) which collect origami related papers.


https://arstechnica.com/science/2019/03/climate-change-may-h...

Certainly not unheard of for modern humans though...


As far as technology, modern humans weren't really too different at the time (in Europe at least). And of course, modern humans and neanderthals were similar enough that we were willing/able to interbreed, which certainly says something.

I do think people overstate how similar we were though -- we don't have any super clear examples of neanderthal artwork (people make arguments, but this stuff isn't exactly Chauvet https://en.wikipedia.org/wiki/Neanderthal#Art). To me, this seems to indicate that their brains might have worked pretty differently.


"we were willing/able to interbreed, which certainly says something"

Does it?

Men like to spread their seed, and I'm not sure what the rules were regarding giving women a choice in the matter. I'm guessing rape wasn't in their vocabulary though.


The ability to produce viable offspring is generally considered the primary differentiating characteristic between species and subspecies. That's one of a number of arguments in the debate over whether Neanderthals should be named H. neanderthalensis or H. sapiens neanderthalensis.


That criterion has had to be abandoned, replaced with "forms reproductive groups with practical boundaries".

Sometimes a river is that boundary, or a preferred prey species, a mating strategy, or odor preference.

Otherwise, we cannot distinguish bear, dog, or great-cat species.


Isn't there already a separate word—subspecies—for "isolated reproductive groups, with different phenotypes, which could still interbreed if the opportunity arose"? My understanding was that every phenotypically-distinct isolated reproductive group was considered a subspecies until its genetics diverged enough to have speciated, at which point it was now a species.

It seems to me that if e.g. American black bears and Asian black bears can interbreed, then we could call them all one species—black bears—and put all their subspecies together into that taxonomic category. Maybe with some optional taxonomic level between "species" and "subspecies" for describing their phenotypic groupings.

But I see, looking at various sources, that those two types of bears are indeed considered separate species. Why do we do that? What's better/more useful about drawing the species boundary there?


"Species" is an organizational convenience for biologists. Nature doesn't have such a boundary. It just has varying degrees of reproductive compatibility, inclination, and opportunity.

"Subspecies" is a concession to what lumpers call splitters.


There is certainly a conservative definition for speciation, though: the point where something has zero reproductive compatibility—where there is no known example of viable offspring. At that point, inclination and opportunity cease to matter.

Why not just define “species” by that clear formal boundary, and then call everything that doesn’t manage to reach that line “subspecies”?


Because the line is very hard to discern, where it exists as a line at all, and it is nowhere sharp. Lions can be bred with tigers, in captivity. Are their offspring fertile? Well, sorta. Does it make sense to call lions and tigers subspecies? Hell, no. Say lions and tigers are one species and biologists will call you a lumper. You don't want that.

Sometimes the product of mating between species becomes, instantly, another species, if they prefer mating with one another over either progenitor. That just happened, with some birds, in the Galapagos.

Legally, there are no endangered subspecies, only endangered species. So, claiming some variety is "just a subspecies" may mean they get no legal protection against extermination. To me that's more than enough reason for a species.


I was addressing the willing rather than the able.


Looks like the real ones are actually 1-2-2-1. The file names of the samples end in either "gt" or "gen", which kinda gives it away.

I thought the same thing initially -- I guess it fooled a few of us with that first one!


I thought the first one was the clearest once you've read that the synthesised voice attempts to guess which words should be stressed from syntax: sentences beginning with the word "that" often should stress "that" because they're distinguishing that choice from some other, but probably not for this particular instance where it's an off hand reference to some girl from some video...


This is a bit misleading...

He was appointed by Obama, but not as chairman -- and was a recommendation from McConnell.

Obama was pretty clearly pro net neutrality.

