Depth of field for a normal camera is a function of the relative aperture size (f-number), focal length and distance to the subject. This camera is able to create a huge depth of field with a far smaller aperture than would be expected with a conventional camera.
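For concreteness, the depth-of-field relationship referred to here can be sketched with the standard thin-lens approximations (the formulas are textbook ones and the example numbers are illustrative, not taken from the paper):

    import math

    # Standard thin-lens depth-of-field approximations.
    def hyperfocal(f_mm, n, coc_mm=0.03):
        # Hyperfocal distance for focal length f_mm, f-number n, circle of confusion coc_mm.
        return f_mm * f_mm / (n * coc_mm) + f_mm

    def dof_limits(f_mm, n, subject_mm, coc_mm=0.03):
        # Near/far limits of acceptable sharpness; the far limit is infinite at or
        # beyond the hyperfocal distance.
        h = hyperfocal(f_mm, n, coc_mm)
        near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
        far = math.inf if subject_mm >= h else subject_mm * (h - f_mm) / (h - subject_mm)
        return near, far

    # A 50 mm lens focused at 2 m keeps only ~1.88-2.14 m sharp at f/2.8,
    # but ~1.46-3.20 m at f/16: smaller aperture, deeper field.
    print(dof_limits(50, 2.8, 2000))
    print(dof_limits(50, 16, 2000))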
So how does it break the laws of physics? Well, it doesn't; it cheats by not preserving the phase of the light. In a normal lens the phase of the light is preserved: light that must travel a longer geometric path to reach the point of focus passes through less lens material (where it travels slower than it does in air), so every path takes the same time and the waves arrive at the focus in phase.
The lens is also made on a flat disc, a little bit like a Fresnel lens. The difference is that this lens doesn't have a single point of focus; instead, light is focused onto many planes simultaneously, with equal power delivered to each plane. In this paper the planes were chosen to lie between 5 and 1200 mm. A clever computer algorithm calculates the exact shape of the lens that produces this distribution for a specific wavelength of light.
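This is not the paper's design algorithm (which numerically optimizes the surface); as a much cruder illustration of how one flat element can focus onto many planes at a single wavelength, here is a zone-interleaved diffractive phase profile, a classic multifocal trick. The 850 nm wavelength and 1.8 mm aperture come up elsewhere in the thread; everything else below is made up for illustration:

    import numpy as np

    wavelength = 850e-9                        # design wavelength (m)
    focal_planes = [5e-3, 50e-3, 300e-3, 1.2]  # a few target planes between 5 mm and 1200 mm
    radius = 0.9e-3                            # 1.8 mm diameter aperture

    r = np.linspace(0.0, radius, 2048)         # radial coordinate across the aperture

    # Assign thin annular zones to the target focal lengths round-robin, so each
    # plane gets a roughly equal share of the aperture (and hence of the light).
    zone_width = 5e-6
    zone_index = (r // zone_width).astype(int) % len(focal_planes)
    f_per_zone = np.array(focal_planes)[zone_index]

    # Ideal hyperbolic lens phase for each zone's focal length, wrapped to 2*pi.
    phase = (-2.0 * np.pi / wavelength) * (np.sqrt(r**2 + f_per_zone**2) - f_per_zone)
    phase_wrapped = np.mod(phase, 2.0 * np.pi)

    # phase_wrapped would then be turned into a surface-relief height map,
    # height = phase * wavelength / (2*pi*(n_material - 1)), for fabrication.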
The end result is a lens which has amazing depth of field, but with the trade-off of only operating well at a single frequency and having poor efficiency. Most of the incoming light is being focused on a plane that the sensor isn't on. Indeed, at 1200 mm it has an equivalent f-number of 555, requiring some insanely bright lighting and a long exposure time to get a good image.
I can't see many real-world applications of this camera given the downsides, but in a world of more computational photography this could be one part of a better imaging system.
> The end result is a lens which has amazing depth of field, but with the trade-off of only operating well at a single frequency and having poor efficiency. Most of the incoming light is being focused on a plane that the sensor isn't on. Indeed, at 1200 mm it has an equivalent f-number of 555, requiring some insanely bright lighting and a long exposure time to get a good image.
How does this compare to a pinhole camera, which (ideally) has perfect depth of field and focuses all incoming light onto the plane of the sensor, but which achieves that at the cost of admitting very little incoming light? (Thus requiring insanely bright lighting and long exposures.)
This is very interesting indeed, but at an equivalent f/555 I don't see many practical applications for this either. For context, usual photography uses apertures from f/1.2 to f/11, with lower numbers meaning a larger aperture and thus more light let in. It'd require a massive amount of light to get a usable image.
Maybe the images could be sharper with this lens, or have very good macro? As you stop down (increase the f-number) images become sharper up to a point where the diffraction limit kicks in and quality degrades; usually this happens at f/9 to f/11 (which is why I mentioned f/11 as the small-aperture end of commonly used apertures; although almost all lenses can go smaller, for pixel-peeping quality you'd want to use an ND filter and an f/11 aperture).
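To put rough numbers on both points, here is a minimal sketch using the standard Airy-disk and exposure-scaling relations (the wavelength and f-numbers below are illustrative, not from the paper):

    # Exposure time scales with the square of the f-number, and the Airy disk
    # (diffraction-limited spot) diameter is roughly 2.44 * wavelength * N.
    wavelength_um = 0.55                       # green light, in micrometres

    for n in (1.2, 11, 555):
        airy_um = 2.44 * wavelength_um * n     # Airy disk diameter
        exposure_vs_f11 = (n / 11) ** 2        # exposure time relative to f/11
        print(f"f/{n}: Airy disk ~{airy_um:.1f} um, exposure ~{exposure_vs_f11:,.2f}x f/11")

    # f/11 already gives a ~15 um Airy disk (several pixels on most sensors),
    # and f/555 would need roughly 2,500x the exposure of f/11.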
> Most of the incoming light is being focused on a plane that the sensor isn't on. Indeed, at 1200 mm it has an equivalent f-number of 555, requiring some insanely bright lighting and a long exposure time to get a good image.
I don't think that's why the lens has such a high f-number at large focal lengths. Rather, it's simply because they only manufactured a very small (1.8 mm diameter) lens. Given the exotic shape and manufacturing of the lens I'm not surprised it's so small, and I suspect scaling this to larger lenses would be a very serious barrier to practicality. It's also not clear why they chose that particular focal range, so there may be some other issues I didn't notice. But on the whole this is very interesting work!
The fundamental observation (not caring about the phase of light at the focal plane) that makes this work is more interesting to me tbh. The authors' previous work uses a similar technique to produce a thin, single element apochromatic lens -- something that normally takes 3 thick lenses and expensive glass formulations.
Wondering how diffraction applies to this one. Certainly not to the equivalent aperture, because any f/555 lens would resolve essentially nothing (probably measured in mm/lp, not lp/mm; edit: not quite, about 3.5 lp/mm).
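A rough sanity check of that figure, using the standard incoherent diffraction cutoff of 1/(wavelength x N) (an approximation, not anything taken from the paper):

    # Diffraction-limited resolution cutoff ~ 1 / (wavelength * f-number), in lp/mm.
    for wavelength_nm in (550, 850):           # green light vs the 850 nm design wavelength
        cutoff_lp_per_mm = 1.0 / (wavelength_nm * 1e-6 * 555)
        print(f"{wavelength_nm} nm at f/555: ~{cutoff_lp_per_mm:.1f} lp/mm")
    # -> roughly 3.3 lp/mm at 550 nm and 2.1 lp/mm at 850 nm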
Clearly there are limitations for standard color photography, but lasers provide abundant and cheap monochromatic light. If the scene doesn't depend on natural lighting the exposure problems could be overcome.
I've only skimmed it, but it looks super interesting.
I was expecting some kind of computational holography wizardry, but no, it's just a multilayer diffractive lens with circular symmetry which produces a beam with a really stretched-out waist.
Usually diffractive lenses are limited to a single wavelength — they're using 850 nm. Is there a way something like this can work over the whole visible spectrum? If not, this might be more like a lightsaber than a camera.
> Conventional cameras also use multiple lenses to keep different colors of light in focus simultaneously. Since our design is very general, we can also use it to create a single flat lens that focuses all colors of light, drastically simplifying cameras even further.
Not sure if they already have a full-spectrum version working though.
Even if it doesn't work over the whole visible spectrum, is there any way that this could be used in some way that is sort of like an inverse ink jet printer where half-toning is used with a couple basis wavelengths to simulate the whole color spectrum? Am I basically just describing something similar to an LED screen but in lens form?
> I was expecting some kind of computational holography wizardry
Is this the type of tech the Lytro camera was based on? I thought that was an interesting concept. Imagine being able to do something similar with this type of lens.
No, that was a microlens array. This one is just a lens, it just focuses the light (within the near-field zone!) in a way that I wouldn't have imagined was possible.
The problem is that 3 narrowband wavelengths don't add up to very much light. If we call the visible range 400-700 nanometers, and the bandwidth is 1 nm, then you are throwing away 99% of visible light.
I thought 'Nah, that'd never work' because I was thinking of the lenses stacked. But you're talking iPhone 11 Pro style. Depending on the properties of the glass that could actually work quite well -- though it may require a standard 4th lens for other optical properties these aren't suited to. White balance, for instance, may be hard in this case, I'm not sure.
Up until a few years ago I would have said 'No way. The software isn't there yet, and it'd be prohibitive from a cost/processing power standpoint in mobile form factor.' But... see previous comment on iPhone 11 Pro.
Might be good enough for machine vision applications with a narrow-bandwidth LED or something like that.
I'm guessing light that's not exactly at that wavelength will cause additional blurring.
They are probably experimenting with sandwiches of high refractive index material.
There are no statements regarding transmissivity, which leads me to believe that it's actually lower than a typical lens wide open, i.e. the incident irradiance (light on the sensor) is equivalent to a regular lens set to a narrow aperture. There is usually no free lunch in optics or physics.
It sounds like kind of a cross between a light-field lens and a Fresnel lens. It uses the shape of the surface to make patterns on the chip, which then have to be decoded with a lot of processing. Instead of microlenses on the chip, you put microlens replacements in the flat surface.
Maybe it can, but it doesn't do a thing. At least for me. Deconvolution of astigmatic eye lens image? Nope. Spatial superresolution by temporal integration? Nope.
Low-level visual processing in our retinas and brains seems to be too specialized to learn new tricks.
Saccades are just compensation for low peripheral resolution and the shortcomings of rods and cones. I had in mind getting more resolved images than the number of photoreceptors allows.
There are no "resolved images" that can be measured by pixels anywhere in the humans' visual system. Only talking about identifying things makes sense here, and that too, it is capable:
"Because the disalignments are often much smaller than the diameter and spacing of retinal receptors, vernier acuity requires neural processing and "pooling" to detect it."
or use EVFs or hybrid EVF/OVFs in place of your glasses lenses. Could conceivably allow for AR glasses that would facilitate telescopic capabilities, once a million other roadblocks are overcome.
The various processing and encoding pathways in our brains have developed over millions of years of natural selection. You could think of human behavior as a "program" where each generation is an iteration on that program. Plasticity is part of that program: ancestors with neural plasticity that benefited them passed on those genes and thus "conserved" learning as part of the program.
This sounds handwavey. I read The Brain that Changes Itself and it essentially gave the opposite impression of what you are saying: the whole point of the brain is plasticity. People can see, balance or hear through nerves wired to their tongues, for example.
It's handwavey because many of the details aren't known, but what is known is complex and can't really be fit into an HN comment. Also I'm condensing what I know from everything I've read about biology. It's not quite accurate to say that the "point" of the brain is plasticity, and we can tell this by looking at organisms with simple brains and examining what their brains are used for.
Worms use their brain to identify and manipulate objects, to taste objects if those objects are identified as food, and to sense aspects of their environment including temperature and humidity. Bees use their brains to see, to fly, to communicate directions, and to coordinate various behaviors with other bees. There is very little plasticity in these simple organisms: a bee that can't communicate will not find a different way to do so, it will simply die.
As animals get more complex we can see that plasticity becomes more important. A bird needs more complex processing to be able to adapt its hunting strategies for different environments. If environmental changes force a bird to move to a new habitat, we can hypothesize that they are smart enough to adapt their hunting strategies to the new habitat.
The evolution of humans has involved many factors that would favor this kind of plasticity: roaming over diverse landscapes, predator spotting, group hunting, tool use, language, and social communities. The resulting large cerebral cortex that we have from these evolutionary trends also gives us the ability to re-wire sensory pathways in the way you pointed out. But when I say "brain" I think of much more than just the cerebral cortex.
Scientists agree that the Cambrian explosion caused an inexorable trend toward complicated nervous systems and, I suppose, therefore the initial flexible brain.
What I can't see evidence for is any notion that the brain was less flexible then became more flexible over time. Would it be incorrect to assert that it essentially began flexible, as a result of the appearance of nervous systems, and that's its primary property? Of course complexity increased with time. But did the flexibility also increase?
This is a fascinating question, and I don't think I'm qualified to give an authoritative answer to it, but I think the _primary_ property of brains and brain-like neural complexes is that they encode behavioral responses to stimuli.
Similar to how a signal transduction process mediates between molecular receptors on cell membrane and cell behaviors, the neural complex mediates between stimuli and response at the level of the organism.
But at the same time I can see why you are thinking about flexibility as some sort of "primary property" because they seem to be inherently flexible in some way. I'm locating _plasticity_ more narrowly to the cortex part of the animal brain, but perhaps the cortex is just an amplification of some simpler lower-level flexibility. At this point my knowledge of the subject falls short.
Edit: thank you for the interesting discussion, you have sparked me to dig more deeply into these topics!
A lens that keeps everything in focus (i.e., a large or infinite depth of field) is not desirable for most photography applications; this mainly has important implications for industrial use. It is always interesting to see innovations in this space.
I'd be interested in whether this lens would transmit more light than a traditional lens stopped down to f/13 and focussed to the hyperfocal distance. If that's the case, it could find application in low-light photography.
Not sure I understand this statement.
The depth of field spans a distance both in front of and behind the focussing distance where all objects are in equivalent focus. To get all objects of interest spanning a set of ranges into focus, the appropriate approach is to focus on an element roughly in the middle of the set, then increase the depth of field until all elements are in roughly equivalent focus. One does not focus at the hyperfocal distance and stop everything down.
If all objects are at or beyond the hyperfocal distance, one can focus the lens to infinity and be done with it. All objects will remain in focus regardless of the size of the aperture as all rays incident on the lens from a given object are quasi-parallel.
EDIT: Quasi-parallel from the object, not quasi-parallel to the optic axis.
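As a concrete (and purely illustrative) example of the hyperfocal arithmetic being discussed here:

    # Hyperfocal distance H = f^2 / (N * c) + f. Focusing at H keeps roughly
    # H/2 to infinity acceptably sharp; focusing at infinity only keeps
    # things beyond H sharp. Example numbers are illustrative.
    f_mm, n, coc_mm = 35.0, 8.0, 0.03
    h_mm = f_mm ** 2 / (n * coc_mm) + f_mm
    print(f"hyperfocal distance: ~{h_mm / 1000:.1f} m")                 # ~5.1 m
    print(f"focused at H: sharp from ~{h_mm / 2000:.1f} m to infinity")
    print(f"focused at infinity: sharp from ~{h_mm / 1000:.1f} m to infinity")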
It depends on the focal length, the chosen aperture, and to some extent on the resolution of the camera.
What I want to say is, if these new lenses transmit light at a high rate (let's say 1:2.8) while being hyperfocal from 0 mm to infinity all the time, you could take pictures at night more easily without having to focus or stop down to get larger scenes fully in focus.
Thus being actually useful in traditional photography.
Landscape photography could be another application.
> is not desirable for most photography applications
Hardly. It's mainly portraiture that benefits from thin DoF, or similar applications where the figure/ground relationship can be improved by keeping out-of-focus elements blurry.
There are a great variety of situations where you do want as much in focus as possible. Landscape and macro photography are the most notable examples, where most high-level images utilize focus-stacking composites to simulate extreme DoF.
Certainly focus depth is a creative tool, but don't mistake expensive wide-aperture lenses for what is necessarily good.
A common opinion among software engineers, but less common among serious photographers. "Portrait mode" looks like a cheap instagram filter. To be fair, it does let a much larger population experiment with effects that otherwise take a bit of specialized gear and knowledge.
Modern phones get you 90% of the way there. The bokeh quality apparently wasn't a priority until very recently. For indoor shots, a Huawei P20 Pro with artificial bokeh and good onboard image processing beats my APS-C-format Sony camera with an f/1.4 lens most of the time, not even counting cases when the depth of field of said f/1.4 is too shallow! No way can that mobile thing compete with a good lens in good lighting, though. But soon that won't be because the camera hardware isn't competitive; rather, it will be because the phone's computational photography pipeline and lack of dedicated controls don't allow enough flexibility in shooting technique.
Again, the mobile phone manufacturers are just starting to advance into higher-end photography; the science and computing power are basically there, and you can do a lot with deep learning combined with a real understanding of image formation and photography.
I wouldn't call myself anything serious as a photographer; OTOH some serious photographers whom I follow are turning more into 3D artists or videographers.
Photography as a profession seems to be in a crisis: not that it's not needed, it's that many adjacent things (3D, video, design, marketing) are also needed, not unlike in programming: there's almost no such thing as a senior (e.g.) Python developer; there are senior Web, Data, etc. developers who primarily use Python along with other tools (JS, SQL, etc.).
Yes, you get where I'm coming from, and I think both of us agree that nothing will ever replace a photographer who knows how to compose a photo. Taking a picture is different!
They’re not wrong. Software does fill in many gaps for photographers. Photoshop is a great example of this.
Amateur astrophotography is a good example of where software is utilized to fill in gaps in the hardware all of the time. Color correction and focus stacking are often done in software, star-tracking mounts use software, and so on.
We can just make a lens that's unfocused at every depth and merge the two as needed.
Joking aside, I wouldn't doubt this creeping into the film industry or other professional photography. Everything already goes through so much post processing, color correction, etc etc.
As someone who owns an SLR and an f/1.4 lens, I can sympathize, but anytime I hear some popular new technique being derided as not ideal among "serious" practitioners, I know it's going to take over the world.
Remember when "personal" computers weren't fashionable among "serious" programmers? Or GUIs, or scripting languages, or the web?
Whenever the choice is "software that's only 80% as good" or "more expensive hardware", the former will win every time.
No, I totally get where you're coming from. But I would also argue that even for photographers, there is a lot more freedom and creativity in having everything in focus and working with it later.
But yes, the best photographers are generally trained to spot exactly the right framing and composition from the moment the photo is snapped.
I think you're referring to a so-called electrically tunable lens. Almost every camera these days uses an electromagnetic diaphragm to control the aperture.
And yes, electrically tunable lenses are a thing. Optotune is one company I've heard of making hardware for the industrial space; there are probably others.
Which market will bring costs down and feed into the next cycle of development, a market that potentially includes everyone - or a market restricted to desperate legally blind people? I'm not a fan of consumerism, but it goes a lot further in improving the lives of everyone, versus conspicuous virtue projects.
There is nothing phone-camera specific about this idea, the (lay) article just uses that as an example of "conventional camera".
Visual impairment is a pretty broad category, and this approach uses collimated light so it is not general purpose - but things in this direction could plausibly have applications in the area.
Bokeh is the effect of blurring out the background.
This is slightly incorrect in an important way. Depth of field has the effect of blurring out the background and/or foreground; it is a property of optical systems. However, the design of lenses changes how this looks. The latter part is what "bokeh" means, and it is why people talk about "the bokeh" of a particular lens.
In other words, bokeh is fundamentally an aesthetic characteristic.
But isn't the problem with isolating parts of the image at the optical level that you can't later undo it? It seems like at least one of the advantages of everything being in focus is that you can isolate whatever you want in post-processing. Cameras are already doing this, they just require multiple lenses and thus more space and material to do it.
Interesting thought: if cameras (and optics generally) didn't have bokeh, our eyes wouldn't have to focus either and wouldn't show bokeh in unfocussed areas.
Well, like the others said, bokeh != depth of field. That said, even if you removed all depth of field (everything somehow in focus), it would still be missed.
Depth of field is incredibly important for managing a viewer's focus. Imagine a picture of a coffee in a coffeeshop with the background out of focus - you can clearly see a coffee cup. That same photo but with everything in focus will be much more confusing - what should I be looking at, it's too busy, etc.
Totally agree. I'm specifically looking at this for phone use. And even in this narrow use case, there are many lens types with different uses. And for photography, even more so. People will keep old lenses with optical flaws to achieve specific moods or looks.