Science AMA Series: I develop LiDAR to give vision to self-driving cars, I'm Jason Eichenholz, CTO of Luminar Technologies, AMA!

Abstract

Creating LiDAR your life can depend on, Luminar Technologies uses advanced LiDAR sensors to measure millions of points per second and put that resolution where it matters most. This allows Luminar sensors to see not just where objects are, but what they are — even at distance. Co-founder and CTO Jason Eichenholz is a serial entrepreneur and a pioneer in laser, optics, and photonics product development and commercialization. Over the past twenty-five years, he has led the development of hundreds of millions of dollars' worth of new photonics products.

Before joining Luminar as CTO and Co-Founder, Eichenholz was the CEO and founder of Open Photonics, an open innovation company dedicated to the commercialization of optics and photonics technologies. Prior to that, he served as the Divisional Technology Director at Halma PLC. In that role he was responsible for supporting innovation, technology and strategic development for the Photonics and Health Optics Divisions. Before joining Halma, he was the CTO and Board Member of Ocean Optics Inc. as well as the Director of Strategic Marketing at Newport/Spectra-Physics.

Eichenholz is a Fellow of The Optical Society (OSA) and SPIE. He has served as the principal investigator for Air Force and DARPA funded research and development programs and holds ten U.S. patents on new types of solid-state lasers, displays and photonic devices. Eichenholz has an M.S. and a Ph.D. in Optical Science and Engineering from CREOL – The College of Optics and Photonics at the University of Central Florida and a B.S. in Physics from Rensselaer Polytechnic Institute.

Hello Jason, thanks for doing the AMA! I have two questions for you:

  • I know there are many companies out there making self-driving cars; does Luminar Technologies use similar technologies? How does this self-driving car differ from previous/existing ones from other manufacturers?

  • (A more technical question) How do you optimise sensors to know the road? As in, how does the car take into account the geometry of the road, spacing between cars and such things? Also, what sensors do you use to optimise for such criteria?

kaushik_93

Hey Reddit! Happy to be here, excited to answer your LiDAR and self-driving vehicle questions. Here we go!

1) Luminar is producing LiDAR sensors for self-driving cars to power the entire industry. Most companies building LiDAR today are using the same off-the-shelf components, and the technology hasn’t seen any major performance improvements in decades. Luminar’s LiDAR is designed from the chip level up, so we’re building all of our own components: our own lasers, receivers, scanning mechanisms and processing electronics. The architecture is entirely new and allows us to achieve much higher resolution and much longer range than today’s systems. Rather than buy silicon chips off the shelf, we have developed our own highly sensitive InGaAs chip by using a fraction of an InGaAs wafer — keeping costs down and performance up. These breakthroughs create the first dynamically configurable system operating at 1550 nm, compared to most LiDARs at 905 nm.

2) Driving conditions change dynamically, and therefore resolution needs change constantly. Where you need resolution differs depending on whether you’re driving on a highway or on a city street. That is why we have designed the first LiDAR sensor capable of adaptively covering the vertical field of view to configure for these sorts of changes. Our sensor is unique in that we have the ability to dynamically adjust the vertical resolution as needed. We always maintain a 120 degree horizontal field of view; we never want to miss important edge cases like children running out between parked cars.
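To make the idea concrete, here is a minimal configuration sketch in Python. It is purely illustrative; the class, field names, and numbers are hypothetical and not Luminar's actual API or specs:

    from dataclasses import dataclass

    @dataclass
    class ScanConfig:
        h_fov_deg: float = 120.0    # horizontal coverage stays fixed
        v_fov_deg: float = 30.0     # height of the vertical scan window
        v_center_deg: float = 0.0   # where that window is aimed
        scan_lines: int = 64        # vertical resolution budget per frame

    # Highway: squeeze the same scan lines into a narrow band near the
    # horizon, trading vertical coverage for long-range point density.
    highway = ScanConfig(v_fov_deg=10.0, v_center_deg=-1.0)

    # City street: spread the lines over a wider window, angled down to
    # catch near-field pedestrians and curbs.
    city = ScanConfig(v_fov_deg=30.0, v_center_deg=-5.0)

The key point is that the horizontal field of view never changes; only the vertical line distribution is reallocated from frame to frame.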


How long until snow and sleet are irrelevant to image resolution and identification?

adenovato

As with human drivers, poor weather conditions will never be irrelevant. Our goal at Luminar is to deliver sensor systems capable of sensing the world as deeply through weather as we can, so that a driving system can best deal with the driving task, even if that means reacting with slower speeds. Here is a rundown of some of the relevant details...

Over the round-trip time of light at automotive distances (hundreds of meters), the environment doesn’t move, which limits the interaction with an active sensor like LiDAR. Each of our measurements is taken through a narrow instantaneous field of view rather than looking at the whole scene, so rain and snow simply add a little noise in the foreground - we maintain our ability to see the scene behind the rain or snow. Further, humidity (water vapor) can contribute absorption problems for very long range LiDAR (many kilometers of range), but at automotive distances we see no impact. Because we’re able to output dramatically more power at the eye-safe 1550 nm wavelength, we’re successfully punching through inclement weather conditions.
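To put a number on that round-trip time, here is a back-of-envelope check (illustrative Python, not Luminar code):

    C = 299_792_458.0  # speed of light in m/s

    def round_trip_time_s(distance_m: float) -> float:
        """Time for a laser pulse to reach a target and return."""
        return 2.0 * distance_m / C

    # A target at 200 m returns the pulse in about 1.33 microseconds --
    # nothing in a driving scene moves meaningfully in that time.
    print(round_trip_time_s(200.0) * 1e6)  # ~1.33 (microseconds)

At highway speeds (~30 m/s), a car travels roughly 40 micrometers during that round trip, which is why the scene is effectively frozen for each measurement.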

Small, dense obscurants like fog, dust, exhaust, or smog result in light scatter, just as they do for human vision, and scattering rewards longer-wavelength operation (see Mie scattering, or its small-particle approximation, Rayleigh scattering). As a result, our system experiences scatterers with an effective 8.6x smaller cross-section than our competitors at 905 nm.
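That 8.6x figure falls out of the λ⁻⁴ wavelength scaling in the Rayleigh small-particle limit. A quick sanity check (illustrative, and only approximate since real fog droplets sit in the Mie regime):

    # Rayleigh scattering cross-section scales as wavelength**-4, so the
    # effective cross-section ratio between 905 nm and 1550 nm is:
    ratio = (1550 / 905) ** 4
    print(round(ratio, 1))  # -> 8.6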

Our system is capable of detecting multiple returns per laser pulse. This enables the above ability to see through scatterers, as well as providing an easy way to filter out low-packing-density objects like raindrops.
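As a rough illustration of the filtering idea (a hypothetical data layout and threshold, not our production pipeline):

    from typing import NamedTuple, Optional

    class Return(NamedTuple):
        range_m: float     # distance to the reflecting surface
        intensity: float   # normalized return strength, 0..1

    def reject_obscurants(returns: list[Return],
                          min_intensity: float = 0.2) -> Optional[Return]:
        """Drop weak early returns (likely raindrops or snowflakes) and
        keep the strong return from the solid surface behind them."""
        solid = [r for r in returns if r.intensity >= min_intensity]
        return max(solid, key=lambda r: r.range_m) if solid else None

    # A raindrop at 3 m gives a faint first return; the vehicle behind
    # it at 80 m gives the strong return we actually care about.
    pulse = [Return(3.0, 0.05), Return(80.0, 0.9)]
    print(reject_obscurants(pulse))  # -> Return(range_m=80.0, intensity=0.9)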

Finally, a related comment: a bigger practical problem in adverse weather conditions is what will happen to the external window of the sensor or sensor package - ice, salt, packed snow, bird droppings…


Hi Jason. Two quick and hopefully easy questions. What are the advantages of LiDAR compared to stereo CCD for object detection, identification and distance calculation? In what situations does LiDAR excel where CCD fails?

PhyterNL

Stereo cameras work by matching corresponding features between the two cameras and then calculating distance from the perspective difference between them. They therefore depend on features being present in the scene, so stereo cameras have trouble measuring featureless flat surfaces, which can include trucks, roads, or buildings. Because of high processing power requirements, it can be very difficult to produce a real-time depth map from stereo cameras, even at very low depth precision. Another challenge is that they have a finite range over which they can work. Objects too close may fall into the field of view of only a single camera, while objects at larger distances appear in the same location in both cameras. This can be tuned by appropriately spacing the cameras, but the max/min ratio is somewhat limited. An additional area where cameras face challenges is at night, where lighting may not be present. This applies less to forward-looking uses and more to side- or rear-facing applications. Blind spot monitoring is an example where early camera-based systems have been displaced by active systems like radar.
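For reference, the underlying geometry is the standard rectified-stereo relation Z = f·B/d (a textbook formula; the numbers below are illustrative):

    def stereo_depth_m(focal_px: float, baseline_m: float,
                       disparity_px: float) -> float:
        """Depth of a matched feature from a rectified stereo pair."""
        return focal_px * baseline_m / disparity_px

    # With a 1000 px focal length and a 0.3 m baseline, an object at
    # 100 m produces only 3 px of disparity, so a single pixel of
    # matching error shifts the depth estimate by roughly 25-50 m.
    print(stereo_depth_m(1000.0, 0.3, 3.0))  # -> 100.0

Disparity shrinking toward zero with range is exactly why stereo depth precision collapses for distant objects.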

By comparison, LiDAR systems do not depend on specific scene characteristics, ambient or other illumination, and can concurrently range to long and short distances. So while cameras are great for assisted driving and have their use cases - traffic lights as an example - if you want to see a tire on the road ahead of you at more than 100 meters, LiDAR is your only choice.


Hi, Tesla and Daimler are developing autonomous driving based on cameras and visual cues alone. I have heard people claim that lidar is not useful at high speeds because of the long range at which detection is necessary. Do you think lidar can compete with image recognition in this situation, and if so, what advantages does it bring to the table?

MrMasterplan

To date, the only auto company attempting to develop fully autonomous driving systems without LiDAR is Tesla. While camera technology has matured to a large degree and all cameras operate on the same basic principles, LiDARs come in several flavors. All commercially available LiDAR sensors for the automotive market have severe limitations in maximum range, resolution, inclement weather performance, interference, scalability, and cost. This is why we have created the first LiDAR architecture that meets those requirements and can accurately see even dark objects (<10% reflectivity) past 200 meters, which is the same range typically targeted for detecting headlights and tail lights with camera systems. While cameras are often useful for seeing objects at distance, they cannot do so with the accuracy and reliability needed for safe, truly autonomous driving. Nevertheless, cameras alone are still quite useful for basic ADAS (assisted driving) development.

Every major player we are working with has stated that they need a LiDAR system that can detect low reflectivity targets (<10%) at distances of 200 m and beyond in order to autonomously drive safely.
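For a rough sense of why that range matters, here is some illustrative stopping-distance physics (my back-of-envelope numbers, not a Luminar requirements derivation):

    def stopping_distance_m(speed_mps: float, react_s: float = 1.5,
                            decel_mps2: float = 6.0) -> float:
        """Reaction distance plus braking distance at a given speed."""
        return speed_mps * react_s + speed_mps**2 / (2.0 * decel_mps2)

    # At ~75 mph (33.5 m/s), comfortable braking already consumes ~144 m,
    # leaving little margin below 200 m once perception latency is added.
    print(round(stopping_distance_m(33.5)))  # -> 144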


Hi,

Does LiDAR measure both the relative position and relative velocity, or is velocity calculated from changes in position of objects? What is the effective range of LiDAR? How do rain, fog and spray affect LiDAR?

How will LiDAR systems be integrated into production vehicle designs? Prototypes have rotating roof-mounted components which will be at risk of damage or theft, and will add aerodynamic drag that will increase fuel consumption.

mean_fiddler

Most LiDAR systems, including ours, transmit a pulse of light and then measure one or more returns, timing their arrival to infer distance based on the speed of light. Using this approach, multiple measurements over time are needed to determine the closing velocity of an object. There are other LiDAR architectures capable of detecting closing velocity along a single dimension on each return, at the expense of measurement time, accuracy, or even laser power/eye safety. Our customers are accurately and reliably tracking objects in three dimensions using frame-to-frame correlations on high-res LiDAR data.
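A minimal sketch of both steps (idealized timing and values, for illustration only):

    C = 299_792_458.0  # speed of light in m/s

    def range_from_return_s(return_time_s: float) -> float:
        """Time-of-flight ranging: distance = c * t / 2."""
        return C * return_time_s / 2.0

    def closing_velocity_mps(r1_m: float, r2_m: float, dt_s: float) -> float:
        """Closing speed of one tracked object across two frames
        (positive means the object is approaching)."""
        return (r1_m - r2_m) / dt_s

    r1 = range_from_return_s(1.00e-6)  # ~150 m
    r2 = range_from_return_s(0.98e-6)  # ~147 m, one 0.1 s frame later
    print(round(closing_velocity_mps(r1, r2, 0.1)))  # -> 30 m/s

Real tracking correlates whole clusters of points from frame to frame, but the principle is the same.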

The range of a LiDAR system depends greatly on the design of the system, the trade-offs made and, quite importantly, the reflectivity of the target. You can go down to Best Buy today and pick up a hand-held LiDAR system that can measure half a kilometer, but it can only do so at a single point, and it likely takes tens of thousands of averaged measurements to do it. Many in the market boast a 200 meter range, but ours is unique in that we deliver millions of ranges per second, each capable of measuring a <10% reflective target past 200 meters. This is important for making out not just whether there's an object there, but what it is, including low reflectivity objects such as tires, darkly clothed pedestrians and dark colored vehicles, which must be visible at long range.

Refer to my previous response on rain, snow, and fog.

Early LiDAR systems were indeed bulky, requiring roof-rack mounting. We intentionally developed a 120 degree system rather than a 360 degree one, have shrunk the size of the LiDAR, and have improved our control of the scan pattern such that the sensor can be modularly integrated into vehicles rather than mounted on a mast on the roof.


I bought a used car last month and it has a radar sensor in front. A couple of days ago there was some sleet, and even though the sensor remained reasonably clean, it was enough for the system to say "system off, sensor has no view". It's one of the dumber systems without autonomous driving; it just detects obstacles.

Why is this technology so unreliable - even after decades of research and hundreds of millions of dollars being invested, according to your introduction? I would understand it having trouble judging the depth of fallen snow, or determining what a particular object it "sees" is, but why does it have trouble with basically just moisture in the air?

DigiMagic

Hey there - I can’t speak to the capabilities of your radar since I don’t have the specs, but check out my response to adenovato’s question about weather interference.


Jason, what do you see as the regulatory issues that will impact LIDAR technology and the advent of driverless cars? For example, will the Department of Transportation or another federal agency need to set guidelines and standards that LIDAR systems must meet for vehicles?

milrey76

Currently, ADAS systems are regulated at the system level rather than the component level. Take Lane Departure Warning as an example: the camera's resolution and dynamic range are not specified; instead, the behavior of the vehicle-level system is specified and tested. That being said, we are actively developing standardized metrics for both regulators and auto companies, to ensure that minimum performance requirements for LiDAR become common knowledge before sensors are commercially deployed, rather than when it's too late.


What are your opinions on flash lidar? Do you think they'll work well in the automotive space? Why or why not?

darkconfidantislife

A “Flash” LiDAR system literally sends out a flash of laser light that illuminates the sensor's full field of view. These systems use a large-format focal plane array (FPA) to capture all the returns across the field of view simultaneously. Thus, flash LiDARs require no moving parts, leading some companies to suggest using them in the automotive space, where increased mechanical reliability and longer mean time to failure are attractive features. However, since flash LiDAR exposes the entire FPA, this technique is sensitive to interference from the sun, other sensors, and nefarious actors seeking to “spoof” the vehicle. As a result, flash LiDAR has shortcomings for safety-critical applications in vehicles, but it may have niche applications that support sensing for low-level autonomy.

Because of the quench time of current Geiger-mode APD arrays, performance degrades quickly in the presence of solar background radiation. Care must be taken to balance the laser and solar power to make sure that the detectors aren’t all depleted by incoming solar photons before the laser energy returns from the surface. Overcoming solar noise usually requires more laser power and the use of neutral density filters in the receive channel. As a result, range and the ability to see dark objects are often issues with flash-based architectures.

Also, while flash-lidar-based systems sound great in theory from a mechanical standpoint, they have other, less attractive features. The primary barrier is the cost to produce detector arrays that work in the non-visible spectrum. The FPAs sell in the tens-of-thousands-of-dollars range due to expensive materials and poor yields. Other factors, such as limited array size, will also pose challenges in the automotive space, where 360 degree ranging around a vehicle is desired. For example, 360-degree coverage can only be accomplished by either: 1) installing 12-20 FPA-based systems around the vehicle, assuming reasonable angular resolution per detector, or 2) mechanically scanning the array. The first option is cost prohibitive, while the second eliminates flash LiDAR’s theoretical mechanical-stability advantage over existing traditional spinning LiDARs.
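The 12-20 figure is simple angular-coverage arithmetic (the per-unit fields of view below are illustrative assumptions):

    import math

    def units_for_full_ring(per_unit_fov_deg: float) -> int:
        """Minimum count of fixed sensors needed to tile 360 degrees."""
        return math.ceil(360.0 / per_unit_fov_deg)

    # Flash units with roughly 18-30 degrees of usable horizontal FOV
    # (a typical trade against angular resolution on affordable arrays)
    # land squarely in the quoted range:
    print(units_for_full_ring(30.0))  # -> 12
    print(units_for_full_ring(18.0))  # -> 20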


Jason, what kind of impact has being involved in professional associations had on your career? What are some of the benefits?

milrey76

I was involved with OSA as a student chapter president while an undergraduate at Rensselaer Polytechnic Institute; in fact, I helped to start that chapter! From there I continued my involvement with the society in graduate school at CREOL at the University of Central Florida. This ultimately led to my becoming the student representative on the OSA Membership and Education Services (MES) Council, providing me with the opportunity to be the only student attending the society's international leadership meetings.

All of those connections and relationships directly led to opportunities to participate in OSA technically. I have been fortunate to serve as a technical group chair, a technical division chair, and even on OSA’s meeting committee. In 2015 I was honored to be elected a Fellow of OSA.

I am a firm believer in the value of social capital - paying it forward professionally. Because of this involvement, I have had the opportunity to hire former graduate students, and to mentor and interact professionally with people I’ve known for 20+ years.


License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.