I'm a fourth-year Ph.D. student at Stanford University. I'm interested in problems in 3D imaging, optimization, computer vision, and remote sensing, and my current work in Stanford's Computational Imaging Lab involves computational LIDAR systems, imaging around corners, and sensor fusion.

News

May 2019
Two papers accepted! Acoustic Non-Line-of-Sight Imaging was accepted as an oral to CVPR, and Wave-Based Non-Line-of-Sight Imaging Using Fast f–k Migration was accepted to SIGGRAPH.
June 2018
I'm interning at the Intelligent Systems Lab at Intel this summer with Vladlen Koltun.
March 2018
Our paper on seeing around corners was published in Nature!

Education

2016-Present
Stanford University
Ph.D. Electrical Engineering
2009-2016
Brigham Young University
B.S. Electrical Engineering, Summa Cum Laude
M.S. Electrical Engineering

Research

2016-Present
Stanford University, Ph.D. Student
Advisor: Gordon Wetzstein
Area: computational imaging, single-photon detectors
Projects: 3D imaging, non-line-of-sight imaging, machine learning
2014-2016
Brigham Young University, M.S. Student
Advisor: David Long
Area: radar image processing, resolution enhancement, geoscience
Project: Arctic ice classification, soil moisture estimation from satellite microwave sensors
2013-2014
Brigham Young University, B.S. Student
Advisor: Aaron Hawkins
Area: semiconductor devices, cleanroom fabrication, circuit design
Project: semiconductor fabrication, developing solid-state single-ion detectors

Publications

Wave-Based Non-Line-of-Sight Imaging Using Fast f–k Migration

ACM Trans. Graph. (SIGGRAPH) 2019

Citation

D. B. Lindell, G. Wetzstein, M. O'Toole, “Wave-based non-line-of-sight imaging using fast f–k migration”, ACM Trans. Graph. (SIGGRAPH), 38 (4), 116, 2019.

Imaging objects outside a camera’s direct line of sight has important applications in robotic vision, remote sensing, and many other domains. Time-of-flight-based non-line-of-sight (NLOS) imaging systems have recently demonstrated impressive results, but several challenges remain. Image formation and inversion models have been slow or limited by the types of hidden surfaces that can be imaged. Moreover, non-planar sampling surfaces and non-confocal scanning methods have not been supported by efficient NLOS algorithms. With this work, we introduce a wave-based image formation model for the problem of NLOS imaging. Inspired by inverse methods used in seismology, we adapt a frequency-domain method, f−k migration, for solving the inverse NLOS problem. Unlike existing NLOS algorithms, f−k migration is both fast and memory efficient, it is robust to specular and other complex reflectance properties, and we show how it can be used with non-confocally scanned measurements as well as for non-planar sampling surfaces. f−k migration is more robust to measurement noise than alternative methods, generally produces better quality reconstructions, and is easy to implement. We experimentally validate our algorithms with a new NLOS imaging system that records room-sized scenes outdoors under indirect sunlight, and scans persons wearing retroreflective clothing at interactive rates.
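The central computational step can be illustrated with a short sketch. Below is a minimal, illustrative Stolt-style f–k resampling in NumPy, assuming a confocally scanned measurement volume meas[t, y, x] on a regular grid; the paper's full pipeline includes additional resampling and amplitude-scaling steps that are omitted here, and all names are placeholders.

```python
import numpy as np
from scipy.interpolate import interp1d

def fk_migrate(meas, dx, dt, c=3e8):
    """Toy Stolt-style f-k migration of a confocal measurement volume meas[t, y, x].

    Transforms the data to the frequency-wavenumber domain, resamples the
    temporal-frequency axis onto the depth wavenumber implied by the wave
    equation, and transforms back. Illustrative only, not the paper's code."""
    nt, ny, nx = meas.shape
    data = np.fft.fftn(meas)                       # 3D FFT to (f, ky, kx)
    f = np.fft.fftfreq(nt, dt) * 2 * np.pi / c     # temporal frequency as a wavenumber
    ky = np.fft.fftfreq(ny, dx) * 2 * np.pi
    kx = np.fft.fftfreq(nx, dx) * 2 * np.pi
    KY, KX = np.meshgrid(ky, kx, indexing="ij")

    out = np.zeros_like(data)
    for iy in range(ny):
        for ix in range(nx):
            # Stolt mapping: for each output kz, sample the data at
            # frequency sqrt(kz^2 + ky^2 + kx^2) (Jacobian scaling omitted).
            kz = f
            k_new = np.sign(kz) * np.sqrt(kz**2 + KY[iy, ix]**2 + KX[iy, ix]**2)
            col = interp1d(f, data[:, iy, ix], bounds_error=False, fill_value=0.0)
            out[:, iy, ix] = col(k_new)

    return np.abs(np.fft.ifftn(out))               # reconstructed volume (z, y, x)
```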

Acoustic Non-Line-of-Sight Imaging

CVPR 2019 (oral)

Citation

D. B. Lindell, G. Wetzstein, V. Koltun, “Acoustic non-line-of-sight imaging”, In Proc. CVPR, 2019.

Non-line-of-sight (NLOS) imaging enables unprecedented capabilities in a wide range of applications, including robotic and machine vision, remote sensing, autonomous vehicle navigation, and medical imaging. Recent approaches to solving this challenging problem employ optical time-of-flight imaging systems with highly sensitive time-resolved photodetectors and ultra-fast pulsed lasers. However, despite recent successes in NLOS imaging using these systems, widespread implementation and adoption of the technology remains a challenge because of the requirement for specialized, expensive hardware. We introduce acoustic NLOS imaging, which is orders of magnitude less expensive than most optical systems and captures hidden 3D geometry at longer ranges with shorter acquisition times compared to state-of-the-art optical methods. Inspired by hardware setups used in radar and algorithmic approaches to model and invert wave-based image formation models developed in the seismic imaging community, we demonstrate a new approach to seeing around corners.

Non-Line-of-Sight Imaging with Partial Occluders and Surface Normals

ACM Trans. Graph. 2019

Citation

F. Heide, M. O'Toole, K. Zang, D. B. Lindell, S. Diamond, G. Wetzstein, “Non-line-of-sight imaging with partial occluders and surface normals”, ACM Trans. Graph., 2019.

Imaging objects obscured by occluders is a significant challenge for many applications. A camera that could “see around corners” could help improve navigation and mapping capabilities of autonomous vehicles or make search and rescue missions more effective. Time-resolved single-photon imaging systems have recently been demonstrated to record optical information of a scene that can lead to an estimation of the shape and reflectance of objects hidden from the line of sight of a camera. However, existing non-line-of-sight (NLOS) reconstruction algorithms have been constrained in the types of light transport effects they model for the hidden scene parts. We introduce a factored NLOS light transport representation that accounts for partial occlusions and surface normals. Based on this model, we develop a factorization approach for inverse time-resolved light transport and demonstrate high-fidelity NLOS reconstructions for challenging scenes both in simulation and with an experimental NLOS imaging system.

Sub-Picosecond Photon-Efficient 3D Imaging Using Single-Photon Sensors

Scientific Reports 2018

Citation

F. Heide, S. Diamond, D. B. Lindell, G. Wetzstein, “Sub-picosecond photon-efficient 3D imaging using single-photon sensors”, Scientific Reports, 17726, 2018.

Active 3D imaging systems have broad applications across disciplines, including biological imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D imaging in practical scenarios where widely-varying photon counts are observed.
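To make the pileup effect concrete, here is a minimal sketch of the standard first-photon pileup model for a synchronously gated SPAD, assuming a per-pulse incident flux flux[i] for each time bin; the paper's full probabilistic model and the associated inverse methods go well beyond this, and all names and numbers here are placeholders.

```python
import numpy as np

def pileup_histogram(flux, n_pulses):
    """Expected SPAD histogram under first-photon pileup.

    flux[i] is the mean number of incident photons in time bin i per laser
    pulse. Because the detector records at most one photon per pulse, early
    bins are over-represented relative to the true flux (pileup).
    Illustrative model only, not the paper's estimator."""
    flux = np.asarray(flux, dtype=float)
    # P(no photon detected before bin i) = exp(-sum_{j<i} flux[j])
    survival = np.exp(-np.concatenate(([0.0], np.cumsum(flux)[:-1])))
    # P(first detection lands in bin i)
    p_first = survival * (1.0 - np.exp(-flux))
    return n_pulses * p_first

# Example: a Gaussian return pulse on a constant ambient background.
bins = np.arange(512)
flux = 0.02 + 0.5 * np.exp(-0.5 * ((bins - 300) / 4.0) ** 2)
hist = pileup_histogram(flux, n_pulses=100_000)
# Pileup pulls the apparent peak toward earlier bins.
print("true peak bin:", np.argmax(flux), "histogram peak bin:", np.argmax(hist))
```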

Single-Photon 3D Imaging with Deep Sensor Fusion

ACM Trans. Graph. (SIGGRAPH) 2018

Citation

D. B. Lindell, M. O'Toole, G. Wetzstein, “Single-photon 3D imaging with deep sensor fusion”, ACM Trans. Graph. (SIGGRAPH), 2018.

Sensors which capture 3D scene information provide useful data for tasks in vehicle navigation, gesture recognition, human pose estimation, and geometric reconstruction. Active illumination time-of-flight sensors in particular have become widely used to estimate a 3D representation of a scene. However, the maximum range, density of acquired spatial samples, or overall acquisition time of these sensors is fundamentally limited by the minimum signal required to estimate depth reliably. In this paper, we propose a data-driven method for photon-efficient 3D imaging which leverages sensor fusion and computational reconstruction to rapidly and robustly estimate a dense depth map from low photon counts. Our sensor fusion approach uses measurements of single photon arrival times from a low-resolution single-photon detector array and an intensity image from a conventional high-resolution camera. Using a multi-scale deep convolutional network, we jointly process the raw measurements from both sensors and output a high-resolution depth map. To demonstrate the efficacy of our approach, we implement a hardware prototype and show results using captured data. At low signal-to-background levels our depth reconstruction algorithm with sensor fusion outperforms other methods for depth estimation from noisy measurements of photon arrival times.
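As a rough illustration of the sensor-fusion idea (not the paper's architecture), the PyTorch sketch below fuses features from a low-resolution SPAD histogram volume with a high-resolution intensity image to predict a dense depth map; the layer sizes, names, and shapes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFusionNet(nn.Module):
    """Toy sensor-fusion CNN: combines low-resolution photon-count features with
    a high-resolution intensity image to predict a dense depth map. Illustrative
    stand-in for the multi-scale network described in the paper."""

    def __init__(self, n_time_bins=64):
        super().__init__()
        self.spad_encoder = nn.Sequential(          # collapse the time axis into features
            nn.Conv2d(n_time_bins, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.fusion = nn.Sequential(                # joint processing at camera resolution
            nn.Conv2d(32 + 1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),         # one depth value per pixel
        )

    def forward(self, spad_hist, intensity):
        # spad_hist: (B, T, h, w) low-res histograms; intensity: (B, 1, H, W)
        feat = self.spad_encoder(spad_hist)
        feat = F.interpolate(feat, size=intensity.shape[-2:], mode="bilinear",
                             align_corners=False)   # upsample to camera resolution
        return self.fusion(torch.cat([feat, intensity], dim=1))

# Shapes only: a 32x32 SPAD array with 64 time bins and a 256x256 intensity image.
net = ToyFusionNet()
depth = net(torch.rand(1, 64, 32, 32), torch.rand(1, 1, 256, 256))
print(depth.shape)  # torch.Size([1, 1, 256, 256])
```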

Towards Transient Imaging at Interactive Rates with Single-Photon Detectors

ICCP 2018

Citation

D. B. Lindell, M. O'Toole, G. Wetzstein, “Towards transient imaging at interactive rates with single-photon detectors”, In Proc. ICCP, 2018.

Active imaging at the picosecond timescale reveals transient light transport effects otherwise not accessible by computer vision and image processing algorithms. For example, analyzing the time of flight of short laser pulses emitted into a scene and scattered back to a detector allows for depth imaging, which is crucial for autonomous driving and many other applications. Moreover, analyzing or removing global light transport effects from photographs becomes feasible. While several transient imaging systems have recently been proposed using various imaging technologies, none is capable of acquiring transient images at interactive frame rates. In this paper, we present an imaging system that leverages single-photon avalanche diodes together with a pulsed picosecond laser to record transient images at up to 25 Hz at a low spatial resolution of 64 x 80 pixels or 1 Hz at a moderate resolution of 256 x 250 pixels. We show several transient video clips recorded with this system and demonstrate transient imaging applications, including direct-global light transport separation and enhanced depth imaging.
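As a toy illustration of how depth imaging follows from transient measurements, the sketch below matched-filters each pixel's time histogram with the laser pulse shape and converts the time of the strongest return to distance; the array names and shapes are hypothetical, not the system's processing code.

```python
import numpy as np

def depth_from_transient(transient, pulse, dt, c=3e8):
    """Estimate per-pixel depth from a transient volume transient[t, y, x] by
    matched-filtering each time histogram with the laser pulse shape and taking
    the time of the strongest (direct) return. Toy illustration only."""
    kernel = pulse[::-1]                             # correlation via flipped convolution
    filtered = np.apply_along_axis(
        lambda h: np.convolve(h, kernel, mode="same"), 0, transient)
    t_peak = np.argmax(filtered, axis=0) * dt        # time of flight per pixel
    return c * t_peak / 2.0                          # round trip -> distance
```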

Confocal Non-Line-of-Sight Imaging based on the Light-Cone Transform

Nature 2018

Citation

M. O'Toole, D. B. Lindell, G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform”, Nature, 555 (7696), 338, 2018.

Imaging objects hidden from a camera’s view is a problem of fundamental importance to many fields of research with applications in robotic vision, defense, remote sensing, medical imaging, and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging aims at reconstructing the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical due to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure provides a means to address these key challenges. Confocal scanning facilitates the derivation of a novel closed-form solution to the NLOS reconstruction problem, which requires computational and memory resources that are orders of magnitude lower than those of previous reconstruction methods and recovers hidden objects at unprecedented image resolutions. Confocal scanning also uniquely benefits from a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate real-time tracking capabilities, and derive efficient algorithms that incorporate image priors and a physically-accurate noise model. Most notably, we demonstrate successful outdoor experiments for NLOS imaging under indirect sunlight.
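After the light-cone change of variables, the heavy lifting in the closed-form solution reduces to a single 3D deconvolution. The sketch below shows that step as a Wiener filter, assuming the resampled measurement volume and the shift-invariant light-cone kernel have already been constructed as described in the paper; it is a minimal stand-in, not the published implementation.

```python
import numpy as np

def wiener_deconvolve_3d(volume, psf, snr=1e2):
    """3D Wiener deconvolution: the workhorse step of the closed-form confocal
    NLOS solution after the light-cone change of variables. `volume` is the
    resampled measurement and `psf` the centered, shift-invariant light-cone
    kernel (same shape); both are assumed precomputed elsewhere."""
    V = np.fft.fftn(volume)
    H = np.fft.fftn(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifftn(W * V))
```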

Reconstructing Transient Images from Single-Photon Sensors

CVPR 2017 (spotlight)

Citation

M. O'Toole, F. Heide, D. B. Lindell, K. Zang, S. Diamond, G. Wetzstein, “Reconstructing transient images from single-photon sensors”, In Proc. CVPR, 2017.

Computer vision algorithms build on 2D images or 3D videos that capture dynamic events at the millisecond time scale. However, capturing and analyzing “transient images” at the picosecond scale—i.e., at one trillion frames per second—reveals unprecedented information about a scene and light transport within. This is not only crucial for time-of-flight range imaging, but it also helps further our understanding of light transport phenomena at a more fundamental level and potentially allows us to revisit many assumptions made in different computer vision algorithms.

In this work, we design and evaluate an imaging system that builds on single photon avalanche diode (SPAD) sensors to capture multi-path responses with picosecond-scale active illumination. We develop inverse methods that use modern approaches to deconvolve and denoise measurements in the presence of Poisson noise, and compute transient images at a higher quality than previously reported. The small form factor, fast acquisition rates, and relatively low cost of our system potentially makes transient imaging more practical for a range of applications.
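As a simpler stand-in for the optimization-based reconstruction, the sketch below applies Richardson-Lucy deconvolution, a classical multiplicative scheme that increases the Poisson likelihood, to a single photon-count histogram with a known instrument response; the actual inverse methods in the paper use more modern priors and solvers, and the names here are placeholders.

```python
import numpy as np

def richardson_lucy(hist, irf, n_iters=50):
    """Richardson-Lucy deconvolution of a photon-count histogram `hist` with a
    known instrument response `irf`. Each update is a multiplicative step that
    increases the Poisson likelihood. Simple stand-in for the reconstruction
    described in the paper."""
    irf = irf / irf.sum()
    irf_flip = irf[::-1]
    x = np.full_like(hist, hist.mean(), dtype=float)
    for _ in range(n_iters):
        blurred = np.convolve(x, irf, mode="same")
        ratio = hist / np.maximum(blurred, 1e-12)   # guard against divide-by-zero
        x *= np.convolve(ratio, irf_flip, mode="same")
    return x
```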

High-Resolution Soil Moisture Retrieval with ASCAT

IEEE Geoscience and Remote Sensing Letters 2016

Citation

D. B. Lindell, D. Long, “High-resolution soil moisture retrieval with ASCAT”, IEEE Geoscience and Remote Sensing Letters, 13 (7), 972-976, 2016.

Satellite-borne C-band scatterometer measurements of the radar backscatter coefficient (σ0) of the Earth can be used to estimate soil moisture levels over land. Such estimates are currently produced at 25- and 50-km resolution using the Advanced Scatterometer (ASCAT) sensor and a change detection algorithm originally developed at the Vienna University of Technology (TU-Wien). Using the ASCAT spatial response function (SRF), high-resolution (approximately 15-20 km per pixel) images of σ0 can be produced, enabling the creation of a high-resolution soil moisture product using a modified version of the TU-Wien algorithm. The high-resolution soil moisture images are compared to images produced with the Water Retrieval Package 5.5 (WARP 5.5) algorithm, which is also based on the TU-Wien algorithm, and to in situ measurements from the National Oceanic and Atmospheric Administration U.S. Climate Reference Network (NOAA CRN). The WARP 5.5 and high-resolution image products generally show good agreement with each other; the high-resolution estimates appear to resolve soil moisture features at a finer scale and demonstrate a tendency toward greater moisture values in some areas. When compared to volumetric soil moisture measurements from NOAA CRN stations for 2010 and 2011, the WARP 5.5 and high-resolution soil moisture estimates perform similarly, with both having a root-mean-square difference from the in situ data of approximately 0.06 m³/m³ in one study area and 0.09 m³/m³ in another.
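The core of the change-detection idea can be summarized in a few lines: backscatter is scaled between per-pixel dry and wet reference levels to yield a relative soil moisture value. The sketch below is illustrative only and omits the vegetation and incidence-angle corrections of the operational TU-Wien algorithm; all names are placeholders.

```python
import numpy as np

def relative_soil_moisture(sigma0_db, sigma0_dry_db, sigma0_wet_db):
    """Change-detection soil moisture estimate in the spirit of the TU-Wien
    algorithm: backscatter (in dB, normalized to a reference incidence angle)
    is scaled between historical dry and wet reference levels for each pixel."""
    ms = (sigma0_db - sigma0_dry_db) / (sigma0_wet_db - sigma0_dry_db)
    return np.clip(ms, 0.0, 1.0)   # relative saturation in [0, 1]
```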

Multiyear Arctic Ice Classification Using ASCAT and SSMIS

Remote Sensing 2016

Citation

D. B. Lindell, D. Long, “Multiyear Arctic Ice Classification Using ASCAT and SSMIS”, Remote Sensing, 8 (4), 294, 2016.

The concentration, type, and extent of sea ice in the Arctic can be estimated based on measurements from satellite active microwave sensors, passive microwave sensors, or both. Here, data from the Advanced Scatterometer (ASCAT) and the Special Sensor Microwave Imager/Sounder (SSMIS) are employed to broadly classify Arctic sea ice type as first-year (FY) or multiyear (MY). Combining data from both active and passive sensors can improve the performance of MY and FY ice classification. The classification method uses C-band σ0 measurements from ASCAT and 37 GHz brightness temperature measurements from SSMIS to derive a probabilistic model based on a multivariate Gaussian distribution. Using this Gaussian model, a Bayesian estimator selects between FY and MY ice to classify pixels in images of Arctic sea ice. The ASCAT/SSMIS classification results are compared with classifications using the Oceansat-2 scatterometer (OSCAT), the Equal-Area Scalable Earth Grid (EASE-Grid) Sea Ice Age dataset available from the National Snow and Ice Data Center (NSIDC), and the Canadian Ice Service (CIS) charts, also available from the NSIDC. The MY ice extent of the ASCAT/SSMIS classifications demonstrates an average difference of 282 thousand km² from that of the OSCAT classifications from 2009 to 2014. The difference is an average of 13.6% of the OSCAT MY ice extent, which averaged 2.19 million km² over the same period. Compared to the ice classified as two years or older in the EASE-Grid Sea Ice Age dataset (EASE-2+) from 2009 to 2012, the average difference is 617 thousand km². The difference is an average of 22.8% of the EASE-2+ MY ice extent, which averaged 2.79 million km² from 2009 to 2012. Comparison with the CIS charts shows that most ASCAT/SSMIS classifications of MY ice correspond to a MY ice concentration of approximately 50% or greater in the CIS charts. The addition of the passive SSMIS data appears to improve classifications by mitigating misclassifications caused by ASCAT's sensitivity to rough patches of ice, which can appear similar to, but are not, MY ice.
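A minimal sketch of the classification step is shown below, assuming per-class Gaussian statistics have already been estimated from training regions; the numeric values are placeholders for illustration, not results from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def classify_ice(features, fy_mean, fy_cov, my_mean, my_cov, p_my=0.5):
    """Per-pixel Bayesian selection between first-year (FY) and multiyear (MY)
    ice from a feature vector (e.g., ASCAT sigma-0 and SSMIS 37 GHz brightness
    temperature), with each class modeled as a multivariate Gaussian.
    Illustrative sketch of the approach, not the paper's code."""
    log_fy = multivariate_normal.logpdf(features, fy_mean, fy_cov) + np.log(1 - p_my)
    log_my = multivariate_normal.logpdf(features, my_mean, my_cov) + np.log(p_my)
    return log_my > log_fy   # True where MY ice is more probable

# Toy example with made-up class statistics (sigma-0 in dB, Tb in K).
pixels = np.array([[-14.0, 235.0], [-19.0, 255.0]])
is_my = classify_ice(pixels,
                     fy_mean=[-18.0, 252.0], fy_cov=np.diag([4.0, 36.0]),
                     my_mean=[-13.0, 232.0], my_cov=np.diag([4.0, 36.0]))
print(is_my)   # [ True False]
```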

Multiyear Arctic Ice Classification Using OSCAT and QuikSCAT

IEEE Transactions on Geoscience and Remote Sensing 2016

Citation

D. B. Lindell, D. Long, “Multiyear Arctic Ice Classification Using OSCAT and QuikSCAT”, IEEE Transactions on Geoscience and Remote Sensing, 54 (1), 167-175, 2016.

Arctic sea ice can be classified as first-year (FY) or multiyear (MY) based on data collected by satellite microwave scatterometers. The Oceansat-2 Ku-band Scatterometer (OSCAT) was operational from 2009 to 2014 and is here used to classify ice as FY or MY during these years. Due to similarities in backscatter measurements from sea ice and open water, a NASA Team ice concentration product derived from passive microwave brightness temperatures is used to restrict the classification area to within the sea ice extent. Classification of FY and MY ice is completed with OSCAT by applying a temporally adjusted threshold on backscatter values. The classification method is also applied to the Quick Scatterometer (QuikSCAT) data set, and ice age classifications are processed using QuikSCAT for 1999-2009. The combined QuikSCAT and OSCAT classifications represent a 15-year record, which extends from 1999 to 2014. The classifications show a decrease in MY ice, while the total area of the ice cover remains consistent throughout winter seasons over the time series.

Selected Projects

Virtual Reality Motion Parallax with the Facebook Surround-360

D. B. Lindell, J. Thatte, EE 367, 2017.

Current virtual reality displays for viewing captured 360-degree stereo videos typically provide the wearer with a view from a single vantage point. We demonstrate that images from a commercial 360-degree camera rig can be processed to enable head-motion parallax while viewing with a head-mounted display. Such a viewing experience more closely mimics how we experience the real world and can help to alleviate virtual-reality sickness and viewing discomfort.

Experience

June 2018-October 2018
Santa Clara, CA
Intel Intelligent Systems Lab
Position: Intern
Project: Developed hardware system and algorithms for acoustic non-line-of-sight imaging.
September 2017-January 2019
Palo Alto, CA
Dahlia Lighting (startup)
Position: Computer Vision Specialist
Project: Worked on computer vision for smart lighting systems; the company was later acquired by Lexi Devices.
March 2016-June 2019
Mankato, MN (remote)
Software For Hire
Position: Computer Vision Consultant
Project: Developed a fast, multithreaded vision algorithm for a pharmaceutical tablet counter using Boost, OpenCV, and Point Cloud Library. Built and shipped an improved neural-net-based algorithm for recognition and counting using PyTorch and MXNet for real-time inference on Intel CPUs.
June 2016-July 2016
Tucson, AZ
Rincon Research Corporation
Position: Electrical Engineering Intern
Project: Developed a cloud-based digital video recording system to stream and record live video. Integrated live broadcast television demodulation capability using GNU Radio and proprietary signal processing hardware.

Invited Talks

Stanford Center for Image Systems Engineering
Computational Imaging with Single-Photon Detectors
5/8/2019
Carnegie Mellon University Graphics Lab
Computational Single-Photon Imaging
1/23/2019
Silicon Valley ACM SIGGRAPH Chapter
Computational Single-Photon Imaging
5/30/2019

Last updated on 9 July 2019