
Stanford engineers enable simple cameras to see in 3D
EurekAlert! ^ | 03/28/2022 | STANFORD UNIVERSITY

Posted on 03/28/2022 7:27:19 PM PDT by aimhigh

Standard image sensors, like the billion or so already installed in practically every smartphone in use today, capture light intensity and color. Relying on common, off-the-shelf sensor technology – known as CMOS – these cameras have grown smaller and more powerful by the year and now offer tens-of-megapixels resolution. But they have always seen in only two dimensions, capturing images that are flat, like a drawing – until now.

Researchers at Stanford University have created a new approach that allows standard image sensors to see light in three dimensions. That is, these common cameras could soon be used to measure the distance to objects.

The engineering possibilities are dramatic. Measuring distance between objects with light is currently possible only with specialized and expensive lidar – short for “light detection and ranging” – systems. If you’ve seen a self-driving car tooling around, you can spot it right off by the hunchback of technology mounted to the roof. Most of that gear is the car’s lidar crash-avoidance system, which uses lasers to determine distances between objects.

Lidar is like radar, but with light instead of radio waves. By beaming a laser at objects and measuring the light that bounces back, it can tell how far away an object is, how fast it’s traveling, whether it’s moving closer or farther away and, most critically, it can calculate whether the paths of two moving objects will intersect at some point in the future. “Existing lidar systems are big and bulky, but someday, if you want lidar capabilities in millions of autonomous drones or in lightweight robotic vehicles, you’re going to want them to be very small, very energy efficient, and offering high performance,” explains Okan Atalar, a doctoral candidate in electrical engineering at Stanford and the first author on the new paper in the journal Nature Communications that introduces this compact, energy-efficient device that can be used for lidar.
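As a rough illustration of the ranging arithmetic (this is not code from the paper): light covers about 0.3 meters per nanosecond, so distance is half the round-trip time multiplied by the speed of light.

```python
# Direct time-of-flight ranging: distance from the laser round trip.
C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s: float) -> float:
    # Halve the round trip, since the light travels out and back.
    return C * round_trip_s / 2.0

print(distance_m(100e-9))  # a 100 ns round trip -> ~15 m away
```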

For engineers, the advance offers two intriguing opportunities. First, it could enable megapixel-resolution lidar – a threshold not possible today. Higher resolution would allow lidar to identify targets at greater range. An autonomous car, for example, might be able to distinguish a cyclist from a pedestrian from farther away – sooner, that is – and allow the car to more easily avoid an accident. Second, any image sensor available today, including the billions in smartphones now, could capture rich 3D images with minimal hardware additions.

Changing how machines see
One way to add 3D imaging to standard sensors is to pair them with a light source (easily done) and a modulator (not so easily done) that turns the light on and off very quickly, millions of times every second. By measuring the variations in the light that returns, engineers can calculate distance. Existing modulators can do the job, but they require relatively large amounts of power – so large, in fact, that they are entirely impractical for everyday use.
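To make that concrete, one common recovery scheme for modulated-light ranging is indirect time of flight, where the round-trip delay shows up as a phase shift of the modulation. Here is a minimal sketch of that idea, not necessarily the method used in the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance_m(phase_shift_rad: float, f_mod_hz: float) -> float:
    # The round trip delays the returning modulation by
    # phase / (2 * pi * f_mod) seconds; halve the resulting
    # path length to get the one-way distance.
    return C * phase_shift_rad / (4.0 * math.pi * f_mod_hz)

# A quarter-cycle phase shift at 10 MHz modulation -> ~3.75 m
print(itof_distance_m(math.pi / 2, 10e6))
```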

The solution that the Stanford team, a collaboration between the Laboratory for Integrated Nano-Quantum Systems (LINQS) and ArbabianLab, came up with relies on a phenomenon known as acoustic resonance. The team built a simple acoustic modulator using a thin wafer of lithium niobate – a transparent crystal that is highly desirable for its electrical, acoustic and optical properties – coated with two transparent electrodes.

Critically, lithium niobate is piezoelectric. That is, when electricity is introduced through the electrodes, the crystal lattice at the heart of its atomic structure changes shape. It vibrates at very high, very predictable and very controllable frequencies. And, when it vibrates, lithium niobate strongly modulates light – with the addition of a couple polarizers, this new modulator effectively turns light on and off several million times a second. “What’s more, the geometry of the wafers and the electrodes defines the frequency of light modulation, so we can fine-tune the frequency,” Atalar says. “Change the geometry and you change the frequency of modulation.”
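For a sense of scale, a thickness-mode resonator's frequency goes roughly as the acoustic velocity divided by twice the wafer thickness; the numbers below are assumed ballpark values, not figures from the paper:

```python
# f ~ v_acoustic / (2 * thickness) for a thickness-mode acoustic resonance.
V_ACOUSTIC = 7000.0  # m/s, rough acoustic velocity in lithium niobate (assumed)
THICKNESS = 0.5e-3   # m, an assumed 0.5 mm wafer

f_resonance = V_ACOUSTIC / (2.0 * THICKNESS)
print(f"{f_resonance / 1e6:.1f} MHz")  # ~7 MHz: millions of cycles per second
```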

In technical terms, the piezoelectric effect creates an acoustic wave through the crystal that rotates the polarization of light in desirable, tunable and usable ways. This key technical departure is what enabled the team’s success. A polarizing filter carefully placed after the modulator then converts this rotation into intensity modulation – making the light brighter and darker – effectively turning the light on and off millions of times a second. “While there are other ways to turn the light on and off,” Atalar says, “this acoustic approach is preferable because it is extremely energy efficient.”
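The rotation-to-intensity step is just Malus's law: a polarizer passes cos² of the rotation angle. A minimal sketch, assuming a sinusoidal rotation at the acoustic frequency:

```python
import math

F_MOD = 7e6  # Hz, assumed acoustic modulation frequency

def transmitted_fraction(t_s: float) -> float:
    # The acoustic wave rotates the polarization sinusoidally; the
    # output polarizer converts rotation into brightness via
    # Malus's law: I / I0 = cos^2(theta).
    theta = (math.pi / 2) * math.sin(2 * math.pi * F_MOD * t_s)
    return math.cos(theta) ** 2
```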

Practical outcomes
Best of all, the modulator’s design is simple and integrates into a proposed system that uses off-the-shelf cameras, like those found in everyday cellphones and digital SLRs. Atalar and advisor Amin Arbabian, associate professor of electrical engineering and the project’s senior author, think it could become the basis for a new type of compact, low-cost, energy-efficient lidar – “standard CMOS lidar,” as they call it – that could find its way into drones, extraterrestrial rovers and other applications.

The impact of the proposed modulator is enormous; it has the potential to add the missing third dimension to any image sensor, they say. To prove it, the team built a prototype lidar system on a lab bench that used a commercially available digital camera as a receptor. The authors report that their prototype captured megapixel-resolution depth maps while requiring only small amounts of power to operate the optical modulator.

Better yet, with additional refinements, Atalar says the team has since cut energy consumption to at least one-tenth of the already-low figure reported in the paper, and they believe a several-hundred-fold further reduction is within reach. If that happens, a future of small-scale lidar with standard image sensors – and 3D smartphone cameras – could become a reality.


TOPICS: Miscellaneous; News/Current Events
KEYWORDS: 3d; cameras; stanford

1 posted on 03/28/2022 7:27:19 PM PDT by aimhigh

To: aimhigh

If these guys can expand even more on the work of Tohichi Hikita and Dr. Emilio Lizardo, original inventors of the OSCILLATION OVERTHRUSTER, who knows where this could lead us.

Honestly, I am not sure we are ready for 8th-dimension imaging.


2 posted on 03/28/2022 7:37:25 PM PDT by algore

To: aimhigh

I’m not an engineer, so I’m probably being stupid when I say this, but it seems to me that one would only have to set two lenses side by side, each taking a slightly different angle when photographing the same object.

Like how our eyes work.


3 posted on 03/28/2022 7:40:57 PM PDT by Jonty30 ( I am an extremely responsible person. When something goes wrong, my boss asks if I was responsible.)

To: aimhigh

This is significant, especially since it is relatively cheap and simple.


4 posted on 03/28/2022 7:41:07 PM PDT by Kevmo (Give back Ukes their Nukes https://freerepublic.com/focus/news/4044080/posts)

To: Jonty30

I think you might only have to have one sensor and one emitter, although I would have designed it with two emitters, one on each side of the sensor, using slightly different wavelengths.

Like the article said, it is all about the modulation and reflection.


5 posted on 03/28/2022 7:48:28 PM PDT by algore

To: algore

I am still waiting for my X-ray goggles I ordered from that Superman comic in 1977.


6 posted on 03/28/2022 7:50:47 PM PDT by dsrtsage ( Complexity is just simple lacking imagination)

To: aimhigh

Hope it can be done in infrared.


7 posted on 03/28/2022 7:56:28 PM PDT by dila813

To: dsrtsage

https://fossbytes.com/sony-accidentally-launched-camcorders-see-peoples-clothes/

Some of the still cameras did too, IIRC.


8 posted on 03/28/2022 8:16:45 PM PDT by algore

To: aimhigh

Fascinating stuff.


9 posted on 03/28/2022 8:28:37 PM PDT by TChad ("Joe, we should evacuate the civilians before the military. You understand that, right? Joe?")

To: algore

Unfortunately, due to chicom virus restrictions, Buckaroo was able to sneak in and steal most of the relevant technologies and sell them on the black market...


10 posted on 03/28/2022 8:28:49 PM PDT by SuperLuminal (Where is another Sam Adams now that we desperately need him?)

To: algore
Before we proceed, I will need to know if you are allied with the Red Lectroids or the Black Lectroids.

SPEAK!
Schnell! Schnell!!

/s


11 posted on 03/28/2022 8:36:15 PM PDT by GaltAdonis ( )

To: dsrtsage

I wanted the sub but didn’t have the patience to save up for it, since I could open my eyes underwater for free.


12 posted on 03/28/2022 9:10:03 PM PDT by skr (May God confound the enemy)

To: Jonty30
I’m not an engineer, so I’m probably being stupid when I say this, but it seems to me that one would only have to set two lenses side by side, each taking a slightly different angle when photographing the same object. Like how our eyes work.

Parallax as perceived by our brains is good for a quick and dirty estimate of distance, but machines in motion such as self-driving cars need precise measurements of distance to each object being tracked.
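A quick sketch of that triangulation, with assumed camera numbers, shows why: depth goes as 1/disparity, so a one-pixel matching error is negligible up close but large at range.

```python
# Rectified stereo: depth = focal_length(px) * baseline(m) / disparity(px)
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 6.5 cm baseline (about eye spacing):
print(stereo_depth_m(1000.0, 0.065, 10.0))  # 10 px disparity -> 6.5 m
print(stereo_depth_m(1000.0, 0.065, 1.0))   # 1 px disparity -> 65 m
# At 65 m, a half-pixel matching error moves the estimate by tens of meters.
```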

13 posted on 03/28/2022 10:14:58 PM PDT by JustaTech (A mind is a terrible thing)

To: algore
No worries. We just beta-tested v.2.0 and got it calibrated to within .9999835 of the interdimensional threshold.


14 posted on 03/29/2022 1:17:10 AM PDT by Viking2002 (Whatever.)

To: Jonty30

You may want to look up “photogrammetry.” This is the process of taking multiple 2D pictures and building a 3D model from them. You can reconstruct very detailed models from photos. The problem is that, with current algorithms, you need many photos and the processing takes a while, making it unusable for anything real-time, like a car sensor.

That said, we’ve experimented with just two cameras side by side, as you suggest, to see how we’d do. We stripped the photogrammetry pipeline down to only what was needed to create a depth map, not a full-color 3D model. We managed to reach ~20 frames/sec on lowish-power CPUs. The process works but is far (FAR) from where it’d need to be for anything in production. My company didn’t want to take the prototype further.
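For anyone curious, a bare-bones version of that kind of two-camera depth map can be put together with OpenCV’s block matcher; a sketch with assumed calibration values and placeholder file names, nothing like a production pipeline:

```python
import cv2

# Load rectified grayscale left/right frames (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, per pixel, how far a patch shifted between views.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # int16, disparity scaled by 16

# Convert disparity to depth: focal_px * baseline_m / disparity_px.
# The calibration numbers here are assumed, not measured.
depth_m = (1000.0 * 0.065) / (disparity.astype(float) / 16.0).clip(min=0.1)
```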

My opinion is that, eventually, a 3D vision system will be built from two 2D cameras and see just as we do. Any animal that sees in 3D with two eyes is proof – and you don’t need to emit anything; it can be a very low-power, passive sensor. I believe Tesla is taking lidar/camera data and training neural networks on what “depth should look like” when it only has a camera. We’ll see – but since very small animals have the capability as just part of their brain function, it must be doable, if you can crack how. I’ve read about some of these solutions and the crazy, yet incredibly innovative, techniques they use – surely it can be simpler.

There are other technologies, like this one from Intel, that do something similar (in real time) but are only good for a few meters.

https://www.intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html


15 posted on 03/29/2022 5:33:33 AM PDT by fuzzylogic (welfare state = sharing of poor moral choices among everybody)

To: aimhigh

I’m waiting for someone to invent an adequate 3D viewing system.


16 posted on 03/29/2022 2:00:00 PM PDT by aimhigh (THIS is His commandment . . . . 1 John 3:23)
