To: aimhigh

I’m not an engineer, so I’m probably being stupid when I say this, but it seems to me that one would only have to set two lenses side by side, each taking a slightly different angle on the same object.

Like how our eyes work.
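
That, in a nutshell, is stereo triangulation. A back-of-the-envelope sketch of the geometry (every number here is made up for illustration, not from a real rig):

# Stereo triangulation: two lenses a known distance apart see the same
# object shifted sideways; the size of the shift (disparity) gives depth.
focal_px = 800.0      # focal length in pixels (assumed)
baseline_m = 0.12     # spacing between the two lenses, meters (assumed)
disparity_px = 16.0   # horizontal pixel shift of the object between images

depth_m = focal_px * baseline_m / disparity_px   # Z = f * B / d
print(f"Estimated distance: {depth_m:.2f} m")    # -> 6.00 m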


3 posted on 03/28/2022 7:40:57 PM PDT by Jonty30 ( I am an extremely responsible person. When something goes wrong, my boss asks if I was responsible.)


To: Jonty30

I think you might only need one sensor and one emitter, although I would have designed it with two emitters, one on each side of the sensor, using slightly different wavelengths.

Like the article said, it is all about the modulation and reflection.
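
The modulation is where the distance actually comes from: the sensor measures the phase shift between the emitted wave and its reflection. A rough sketch of the arithmetic (20 MHz is an assumed modulation frequency, not anything from the article):

import math

C = 299_792_458.0   # speed of light, m/s
f_mod = 20e6        # modulation frequency, Hz (assumed)

def tof_distance(phase_shift_rad: float) -> float:
    # The reflection lags the emission by a fraction of one modulation
    # period; halve the round-trip distance for the one-way range.
    return (C * phase_shift_rad) / (4 * math.pi * f_mod)

print(tof_distance(math.pi / 2))   # ~1.87 m for a quarter-cycle phase lag

A single frequency wraps around every C / (2 * f_mod) meters (about 7.5 m at 20 MHz), which is one argument for the two-emitter design I mentioned: two different modulation frequencies give two wrapped phases, and the pair resolves the ambiguity over a much longer range.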


5 posted on 03/28/2022 7:48:28 PM PDT by algore

To: Jonty30
I’m not an engineer, so I’m probably being stupid when I say this, but it seems to me that one would only have to set two lenses side by side, each taking a slightly different angle on the same object. Like how our eyes work.

Parallax as perceived by our brains is good for a quick and dirty estimate of distance, but machines in motion such as self-driving cars need precise measurements of distance to each object being tracked.
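
To put numbers on it: a fixed one-pixel disparity error translates into a depth error that grows with the square of distance, roughly dZ ≈ Z² / (f · B). With an assumed 800 px focal length and 0.12 m baseline:

focal_px, baseline_m = 800.0, 0.12

# Depth uncertainty for +/- 1 pixel of disparity error: dZ ~ Z^2 / (f * B)
for depth_m in (2.0, 10.0, 50.0):
    err_m = depth_m**2 / (focal_px * baseline_m)
    print(f"at {depth_m:>4.0f} m: +/- {err_m:.2f} m")

# at    2 m: +/- 0.04 m   (fine for near objects)
# at   10 m: +/- 1.04 m
# at   50 m: +/- 26.04 m  (useless for tracking distant traffic)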

13 posted on 03/28/2022 10:14:58 PM PDT by JustaTech (A mind is a terrible thing)

To: Jonty30

You may want to look up “photogrammetry”. This is the process of taking multiple 2D pictures and building a 3D model from them. You can reconstruct very detailed models from photos. The problem is that, with current algorithms, you need many photos and the processing takes a while, making it unusable for anything real-time, like a car sensor.
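
The two-view core of that pipeline is actually small; it is scaling to many photos that gets expensive. A sketch with OpenCV (the intrinsic matrix K is a placeholder you would normally get from calibration, and the file names are made up):

import cv2
import numpy as np

# Two-view reconstruction: match features, recover the relative camera
# pose, then triangulate the matches into a 3D point cloud.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
P2 = K @ np.hstack([R, t])                         # second camera posed relative to it
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T                   # N x 3 points, up to scale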

That said, we’ve experimented with just 2 cameras side by side, as you suggest, to see how we’d do. We optimized the photogrammetry processing pipeline to compute only what was needed for a depth map, not a full-color 3D model. We managed to reach ~20 frames/sec on low-ish power CPUs. The process works but is far (FAR) from where it’d need to be for anything in production. My company didn’t want to take the prototype further.
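
For anyone who wants to try the depth-map-only version, OpenCV ships a semi-global block matcher that does that one stage in real time on a CPU. The parameters below are generic starting points, not our tuned pipeline:

import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # search range; must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,         # penalties on small/large disparity jumps
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)

# compute() returns fixed-point disparities scaled by 16
disparity = sgbm.compute(left, right).astype("float32") / 16.0
depth_m = (800.0 * 0.12) / disparity.clip(min=0.1)   # assumed f = 800 px, B = 0.12 m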

My opinion is that, eventually, a 3D vision system will be done with two 2D cameras and see just as we do. Any animal that sees in 3D with two eyes is proof, and you don’t need to be emitting anything; it can be a very low-power, passive sensor. I believe Tesla is taking Lidar/camera data and training neural networks on what “depth should look like” when only a camera is available. We’ll see, but since very small animals have the capability as just part of their brain function, it must be doable, if you can crack how. I read about some of these solutions and the crazy, yet incredibly innovative, techniques they use; surely it can be simpler.
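
If that reading of Tesla’s approach is right, the training loop is the generic lidar-supervised monocular-depth recipe. A toy PyTorch sketch of the shape of it (my guess at the general idea, not their system; the architecture and dataloader are stand-ins):

import torch
import torch.nn as nn

# Toy per-pixel depth network: lidar supplies ground truth during training;
# at inference time only the camera image is needed.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),   # one depth value per pixel
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for images, lidar_depth in dataloader:   # assumed (image, lidar depth) pairs
    pred = model(images)
    valid = lidar_depth > 0              # lidar is sparse; supervise only its hits
    loss = (pred[valid] - lidar_depth[valid]).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()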

There are other technologies, like this one from Intel, that do something similar (in real time) but are only good for a few meters.

https://www.intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html
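
Reading per-pixel distance off one of these is only a few lines with Intel’s pyrealsense2 wrapper (640x480 at 30 fps is an assumed, common depth mode):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters at the center pixel
    print(f"Center distance: {depth.get_distance(320, 240):.2f} m")
finally:
    pipeline.stop()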


15 posted on 03/29/2022 5:33:33 AM PDT by fuzzylogic (welfare state = sharing of poor moral choices among everybody)
