
RESEARCH

We use machine learning to design bio-photonics and algorithms together as a whole. This approach streamlines the design process, resulting in seamless photonics and algorithms. To achieve this, we: (1) custom design individual computational optics; (2) use them to build optical systems; (3) build low-level algorithms to reconstruct images from these systems; and finally, (4) build high-level algorithms to extract application-specific data from the images.


COMPUTATIONAL OPTICS


HIGH-LEVEL IMAGE PROCESSING

DNA-DAMAGE TOXICOLOGY
COMPUTATIONAL PATHOLOGY
BACTERIA DIAGNOSTICS
IN-VIVO NEURO-IMAGING


META-VOLUMES

--

The types of optics we use are historically constrained by the way they are manufactured. For instance, we use circular lenses because they can be made very precisely by fine grinding. The optical systems we can build are likewise constrained by the individual properties of these elements. After centuries of this practice, we optical scientists are somewhat conditioned to design systems using these classical elements. Today, however, we can use computational imaging approaches (like the differentiable microscopy developed in our lab) to build unconventional optical systems that challenge classical designs. For such designs, the design space is heavily constrained by the available classical optical elements; moreover, it is “non-differentiable” or “sparse”, making it hard for optimization algorithms to converge. The computer scientist’s solution to this problem is to build computational tools and optimizers that converge better despite the sparse design space. While working on such approaches, we also undertook a complementary approach: smoothing the design space itself by making it possible to build arbitrary new 3D optics. To this end, we teamed up with the Boyden lab at MIT and Irradiant Technologies to build a new optics-manufacturing approach capable of fabricating 3D diffractive optical elements we call “Meta Volumes”. We think this unique capability will reshape the way optics are designed and manufactured in the future. As a consequence, it will also enable an entirely new class of optical systems designed to address novel problems in bio-photonics.


EVOLUTIONARY GRADIENT DESCENT TO TRAIN PHYSICS-CONSTRAINED MODELS

--

Conventional gradient-based optimizers often work well with over-parameterized models, yet fail in smaller or parameter-constrained models. This is especially problematic for physics-constrained neural networks: since these models are not, or cannot be, over-parameterized, gradient-based optimizers often fail to converge well, and there is no standard way to train them. In this work, we investigate how simple organisms optimize their relatively small neural networks without over-parameterization. Combining evolution, gradient-based optimization, and genomics, we propose a novel biology-inspired optimization method that can optimize non-over-parameterized simple neural networks, including physics-constrained ones. Specifically, we encode neural networks (“brains”) into latent vectors (“genomes”) and perform evolution in the latent (genome) space. We test our method on optimizing physics-constrained models.
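A minimal sketch of the idea, with a toy linear decoder standing in for a trained genome decoder (all names, the toy fitness, and hyperparameters here are illustrative, not from the paper): genomes live in a low-dimensional latent space, are decoded into network weights for evaluation, and evolve through selection, mutation, and a gradient-based refinement step.

```python
import numpy as np

# Hypothetical decoder mapping a low-dimensional "genome" to a flat
# weight vector ("brain"). In practice this would be a trained
# autoencoder; here it is a toy linear stand-in.
def decode_genome(z):
    return z @ PROJECTION

def fitness(weights):          # toy task loss; lower is better
    return np.sum((weights - TARGET) ** 2)

rng = np.random.default_rng(0)
GENOME_DIM, POP = 8, 32
PROJECTION = rng.normal(size=(GENOME_DIM, 100))  # toy linear decoder
TARGET = rng.normal(size=100)                    # toy optimum

population = rng.normal(size=(POP, GENOME_DIM))
for generation in range(200):
    # 1) Evaluate every genome through the decoder.
    losses = np.array([fitness(decode_genome(z)) for z in population])
    # 2) Selection: keep the best half.
    parents = population[np.argsort(losses)[: POP // 2]]
    # 3) Each parent spawns one gradient-refined child and one mutant.
    children = []
    for z in parents:
        grad = np.array([  # finite-difference gradient in genome space
            (fitness(decode_genome(z + 1e-3 * e)) - fitness(decode_genome(z))) / 1e-3
            for e in np.eye(GENOME_DIM)
        ])
        children.append(z - 0.01 * grad)                        # refinement
        children.append(z + 0.1 * rng.normal(size=GENOME_DIM))  # mutation
    population = np.array(children)

print("best loss:", min(fitness(decode_genome(z)) for z in population))
```

Note that the gradient step is taken in genome space, so the decoder smooths the otherwise sparse weight-space landscape.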


DE-SCATTERING WITH EXCITATION PATTERNING (DEEP, 2-PHOTON)

--

Nonlinear optical microscopy has enabled in-vivo deep-tissue imaging on the millimeter scale. A key unmet challenge is its limited throughput, especially compared to the rapid wide-field modalities used ubiquitously in thin specimens. Wide-field imaging of tissue specimens has found success in optically cleared tissues and at shallower depths, but the scattering of emission photons in thick turbid samples severely degrades image quality at the camera. To address this challenge, we introduce De-scattering with Excitation Patterning, or “DEEP,” which uses patterned nonlinear excitation followed by computational-imaging-assisted wide-field detection. Multiphoton temporal focusing allows high-resolution excitation patterns to be projected deep inside a specimen, at depths of multiple scattering lengths, thanks to the use of long-wavelength light. Computational reconstruction then recovers high-resolution structural features from tens to hundreds of DEEP images instead of millions of point-scanning measurements.
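In simplified notation (ours, not verbatim from the paper), each DEEP measurement is the scattered emission produced by one excitation pattern, and de-scattering amounts to a regularized linear inverse problem:

$$ y_k = S\,(p_k \odot x) + n_k, \qquad \hat{x} = \arg\min_x \sum_{k} \big\lVert y_k - S\,(p_k \odot x) \big\rVert_2^2 + \lambda\, R(x), $$

where $x$ is the fluorophore image at the focal plane, $p_k$ the $k$-th excitation pattern, $\odot$ element-wise multiplication, $S$ the blur introduced by emission scattering, $n_k$ noise, and $R$ a sparsity-promoting regularizer.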


DECONVOLUTION WITH EXCITATION PATTERNING (DEEP, 1-PHOTON)

--

In our original DEEP, we used patterned 2-photon excitation to encode focal-plane information. In this work, we translate the same idea to the more widely used 1-photon systems. Here, too, we encode focal-plane information using patterned excitation; however, the out-of-focus planes are also excited, albeit without the encoding. We can then use the same decoding algorithms to reject the out-of-focus information. In this form, DEEP can be thought of as a structured illumination microscopy (SIM) approach with arbitrary illumination structures. But in contrast to other SIM methods, DEEP algorithms can decode even when the illumination structures are not resolved on the detector. This enables compressive sensing, and hence 1-photon DEEP can perform compressive imaging in wide-field mode, with potential 10-100x speed-ups.
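In the same simplified notation as above, the 1-photon measurement gains an unencoded background term:

$$ y_k = S\,(p_k \odot x_{\text{focal}}) + b + n_k, $$

where $b$ is the out-of-focus background. Because $b$ does not depend on the pattern index $k$, a decoder that correlates the measurements with the known patterns suppresses it while recovering the encoded focal-plane image.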


FEW-SHOT VOLUMETRIC IMAGING (THROUGH 3D-DEEP)

--

In DEEP, we encode information from the focal plane using a patterned excitation. To image a volume, we must therefore image one focal plane at a time and scan in the z-direction. In the 2-photon implementation, out-of-focus planes are not illuminated, thanks to the temporally focused single-plane excitation; in the 1-photon case, the out-of-focus light is rejected as background. In this work, we instead use volumetric patterned excitation to encode an entire volume rather than a single z-plane. This approach can compressively image 3D data, better exploiting the sparsity of image data.
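Extending the same simplified notation to a volume, each camera frame now mixes encoded contributions from every z-plane:

$$ y_k = \sum_{z} S_z\,(p_{k,z} \odot x_z) + n_k, $$

where $x_z$ is the image at depth $z$, $p_{k,z}$ the 3D excitation pattern evaluated at that depth, and $S_z$ the depth-dependent scattering blur. Sparsity in the volume then lets it be recovered from far fewer frames than voxels.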


ALL-OPTICAL PHASE TO INTENSITY

--

Quantitative phase microscopy (QPM) is a powerful label-free imaging modality that measures refractive-index maps of thin, transparent biological specimens such as cells or thin tissue sections. QPM first uses interferometric techniques to encode the phase of the optical field (which is a function of the refractive-index map) into “interferograms”. The interferograms are then measured on optical detectors such as cameras and computationally processed to extract the phase information. In this work, we designed optical processors, using learnable Fourier filters and meta-volumes, that convert the phase information of the optical field directly into intensity information at the detector. This way, we propose to perform QPM measurements all-optically, without computational reconstruction. Our approach may also be useful in other applications, such as optogenetics and multiphoton lithography, where phase maps need to be converted into intensity maps.
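A minimal numerical sketch of the principle, using the classic Zernike phase-contrast filter as a stand-in for a learned Fourier filter or meta-volume (our actual designs are learned, not hand-crafted):

```python
import numpy as np

# Toy weak-phase object: a transparent "cell" on a flat background.
N = 256
yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
phase = 0.5 * np.exp(-(xx**2 + yy**2) / 0.05)   # phase map (radians)
field = np.exp(1j * phase)                       # unit-amplitude field

# 4f optical processor: apply a filter in the Fourier plane.
spectrum = np.fft.fftshift(np.fft.fft2(field))
r = np.hypot(*np.mgrid[-N // 2:N // 2, -N // 2:N // 2])
H = np.ones((N, N), dtype=complex)
H[r < 3] = np.exp(1j * np.pi / 2)   # quarter-wave shift on the DC order

out = np.fft.ifft2(np.fft.ifftshift(spectrum * H))
intensity = np.abs(out) ** 2        # what the camera measures

# Without the filter the camera sees a nearly uniform image; with it,
# the phase map appears directly as intensity contrast.
plain = np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum))) ** 2
print("contrast without filter:", np.ptp(plain))     # ~0
print("contrast with filter:   ", np.ptp(intensity)) # clearly > 0
```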


GENERALIZED DIFFERENTIABLE MICROSCOPY (G-𝜕𝜇)

--

Ever since the first microscope in the late 16th century, scientists have been inventing new types of microscopes for various tasks, including phase contrast (1930s), confocal (1950s), and super-resolution (2000s). Inventing a novel architecture demands years, if not decades, of scientific experience and fortuitous creativity. In this work, we reinvented this creative process by framing the design of optical systems as a machine-learning problem. Similar to how a single neural-network architecture (e.g., a convolutional neural network) can be trained to perform multiple image-processing tasks (e.g., image classification, denoising, and segmentation), a single “optical architecture” may also be configured to perform multiple imaging tasks. Based on this idea, we propose a new paradigm for optical system design called differentiable microscopy (𝜕𝜇). Differentiable microscopy first models a widely used, physics-informed optical system with learnable optical elements at key locations along the optical path. Next, we train the model end-to-end for a task of interest using carefully curated training data. Finally, we implement the learned design in the lab.
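A minimal PyTorch sketch of the paradigm, with a single learnable Fourier-plane element standing in for a full microscope model (the module, the toy task, and all hyperparameters are illustrative):

```python
import torch

class Learnable4f(torch.nn.Module):
    """Toy differentiable microscope: one learnable complex filter
    in the Fourier plane of a 4f system."""
    def __init__(self, n=64):
        super().__init__()
        # Learnable amplitude and phase of the Fourier-plane element.
        self.log_amp = torch.nn.Parameter(torch.zeros(n, n))
        self.phase = torch.nn.Parameter(torch.zeros(n, n))

    def forward(self, field):              # field: complex (B, n, n)
        H = torch.exp(self.log_amp) * torch.exp(1j * self.phase)
        out = torch.fft.ifft2(torch.fft.fft2(field) * H)
        return out.abs() ** 2              # the camera sees intensity

model = Learnable4f()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Toy task: make the measured intensity track the input's phase map,
# i.e., learn a phase-to-intensity conversion end to end.
phases = 0.3 * torch.randn(16, 64, 64)
fields = torch.exp(1j * phases)
target = phases - phases.mean(dim=(1, 2), keepdim=True)

for step in range(500):
    opt.zero_grad()
    pred = model(fields)
    pred = pred - pred.mean(dim=(1, 2), keepdim=True)
    loss = torch.nn.functional.mse_loss(pred, target)
    loss.backward()   # gradients flow through the optics model
    opt.step()
```

In the lab, the learned element would then be realized physically, for example as a fabricated Fourier filter or a meta-volume.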

 

We recently showed that 𝜕𝜇 can learn core microscopy concepts such as phase contrast and the confocal effect. We have also shown that, in some cases, learned designs can outperform the original designs. Our contributions will fundamentally shape not only how microscopes are designed in the future, but also how measurement systems can be improved in general. As a consequence, 𝜕𝜇 will enable measurements that simply aren’t possible today.


DEEP-LEARNING-POWERED DE-SCATTERING WITH EXCITATION PATTERNING (DEEP^2)

--

Limited throughput is a key challenge for in-vivo deep-tissue imaging with nonlinear optical microscopy. Point-scanning multiphoton microscopy, the current gold standard, is slow, especially compared to the wide-field imaging modalities used for optically cleared or thin specimens. We recently introduced De-scattering with Excitation Patterning (DEEP) as a wide-field alternative to point-scanning geometries. Using patterned multiphoton excitation, DEEP encodes spatial information inside tissue before scattering. However, to de-scatter at typical depths, hundreds of such patterned excitations are needed. In this work, we present DEEP^2, a deep-learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds. Consequently, we improve DEEP’s throughput by almost an order of magnitude. We demonstrate our method in multiple numerical and physical experiments, including in-vivo cortical vasculature imaging up to four scattering lengths deep in live mice.
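As a rough sketch of the kind of model involved (the actual DEEP^2 architecture is more elaborate, and these sizes are illustrative), the learned inverse maps a stack of K patterned measurements to one de-scattered image:

```python
import torch

K = 32  # patterned-excitation measurements per field of view

# Minimal stand-in for the DEEP^2 inverse model: a plain conv net
# that fuses K patterned camera frames into one de-scattered image.
# It would be trained on paired scattered/ground-truth data.
descatter_net = torch.nn.Sequential(
    torch.nn.Conv2d(K, 64, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 64, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 1, kernel_size=3, padding=1),
)

measurements = torch.randn(1, K, 128, 128)  # K camera frames (toy data)
estimate = descatter_net(measurements)      # (1, 1, 128, 128) image
```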


DIFFERENTIABLE DECONVOLUTION WITH EXCITATION PATTERNING (𝜕-DEEP)

--

The trade-off between throughput and image quality is an inherent challenge in microscopy. To improve throughput, compressive imaging under-samples image signals; the images are then computationally reconstructed by solving a regularized inverse problem. Compared to traditional regularizers, deep-learning-based methods have achieved greater success in compression and image quality. However, the information lost in the acquisition process sets the compression bounds, so further improving compression without compromising reconstruction quality is a challenge. In this work, we propose differentiable deconvolution with excitation patterning, or “𝜕-DEEP”, which combines a realistic, generalizable forward model with learnable physical parameters (e.g., illumination patterns) and a physics-inspired inverse model. The cascaded model is end-to-end differentiable and can learn optimal compressive sampling schemes from training data. With our model, we performed thousands of numerical experiments on various compressive microscope configurations. Our results suggest that, at higher compressions, learned sampling outperforms widely used traditional compressive sampling schemes in terms of reconstruction quality. We further utilize our framework for task-aware compression.
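A minimal sketch of the end-to-end idea (all names, the toy forward model, and sizes are illustrative): the illumination patterns become continuous, learnable parameters, trained jointly with a reconstruction network.

```python
import torch

K, N = 16, 64  # number of measurements and image size (toy values)

# Learnable illumination patterns, relaxed through a sigmoid so that
# gradients can flow; they can be binarized after training.
pattern_logits = torch.nn.Parameter(torch.randn(K, N, N))
recon_net = torch.nn.Sequential(   # stand-in for the inverse model
    torch.nn.Conv2d(K, 64, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(64, 1, 3, padding=1),
)
opt = torch.optim.Adam([pattern_logits, *recon_net.parameters()], lr=1e-3)

images = torch.rand(8, 1, N, N)    # toy training images
for step in range(100):
    patterns = torch.sigmoid(pattern_logits)    # (K, N, N) in [0, 1]
    # Differentiable forward model: patterned excitation, then ideal
    # detection (a real model would add scattering, sampling, noise).
    meas = images * patterns.unsqueeze(0)       # (8, K, N, N)
    recon = recon_net(meas)
    loss = torch.nn.functional.mse_loss(recon, images)
    opt.zero_grad()
    loss.backward()   # gradients update patterns AND the inverse model
    opt.step()
```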


PHYSICS-AUGMENTED INVERSE MODELS (PHY-AUGs)

--

Deep-learning-based inverse models are now ubiquitous in computational imaging. In these models, an inverse mapping from the measurement space to the image space is learned from paired data. But a learned inverse model inherently encodes information about the forward imaging model; thus, when the forward imaging model changes, the inverse model needs to be re-trained. This can be impractical for day-to-day microscopy applications where the imaging conditions may change frequently. In this work, we propose a strategy to augment state-of-the-art data-driven inverse models so that they act as generative priors. We call these “Physics-Augmented Inverse Models (Phy-augs)”. Phy-augs are independent of the forward imaging process, and thus need to be trained only once. To use them, we propose an inference recipe that optimizes a part of the latent representation of a Phy-aug at inference time, against any physics-based forward model of the microscope.
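A minimal sketch of that inference recipe (all names are illustrative; it assumes a pre-trained, frozen generative prior G with a latent input and a differentiable physics-based forward model F of the current microscope):

```python
import torch

def infer(G, F, y, latent_dim=128, steps=300, lr=1e-2):
    """Recover an image from measurement y by optimizing the free part
    of the latent representation of a frozen prior G, through a
    differentiable forward model F (both hypothetical stand-ins)."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = G(z)                                       # candidate image
        loss = torch.nn.functional.mse_loss(F(x), y)   # data consistency
        loss.backward()          # gradients flow through F and G into z
        opt.step()
    return G(z).detach()
```

Because only z is optimized, the same trained prior can be reused whenever F changes, without any re-training.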


DIFFERENTIABLE ADAPTIVE SCANNING

--

With applications ranging from cancer diagnostics and drug discovery to neuroscience, high-throughput high-content microscopy has become an essential tool in biology and medicine. Today, gigabyte-scale microscopy image datasets are routinely collected, stored, and analyzed. To collect these large datasets, much progress has been made on fast imaging systems, with newer optical designs, better mechanical movements, and faster, more multiplexed imaging sensors. However, in many applications, like whole-slide imaging for computational pathology, much of the collected data is redundant. For instance, pathologists never examine an entire slide at maximum resolution to make decisions; rather, they selectively scan the slide, deciding where to look next based on the partial information already seen. In this work, we mimic this “smart” observation strategy with a neural network. Specifically, we propose a transformer-based model whose attention mechanism progressively images only the regions of interest (ROIs), at the resolutions required to make a downstream decision.

 

We tested our method in-silico on two tasks: (1) imaging a collage of handwritten MNIST digits and reading off their values; (2) imaging ROIs from cancer slides to classify them. In both cases, the transformer learned to guide the imaging process using its attention maps. Our preliminary results suggest that it may be possible to “train” a microscope, in an end-to-end differentiable manner, to image only the relevant ROIs at appropriate resolutions. Consequently, this work lays the foundation for integrating image-based decision-making with the imaging process in real time, minimizing the collection of redundant information in large-scale imaging studies.
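A minimal sketch of the acquisition loop (the scoring network stands in for the transformer's attention, and `acquire` for the microscope stage; all names and sizes are illustrative):

```python
import torch

def adaptive_scan(score_net, acquire, n_steps=8, grid=16, patch=32):
    """Greedy sketch of attention-guided scanning: repeatedly ask a
    network where to look next, then acquire only that patch.
    `score_net` maps the partial observation to a (grid, grid) map of
    attention scores; `acquire(i, j)` images one (patch, patch) region
    at high resolution."""
    canvas = torch.zeros(1, 1, grid * patch, grid * patch)  # partial view
    seen = torch.zeros(grid, grid, dtype=torch.bool)
    for _ in range(n_steps):
        scores = score_net(canvas)           # (grid, grid) attention map
        scores[seen] = float("-inf")         # never revisit a patch
        idx = scores.flatten().argmax()
        i, j = divmod(idx.item(), grid)
        canvas[..., i*patch:(i+1)*patch, j*patch:(j+1)*patch] = acquire(i, j)
        seen[i, j] = True
    return canvas   # partial observation fed to the downstream decision
```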


DIFFERENTIABLE COMPRESSIVE QUANTITATIVE PHASE MICROSCOPY (𝜕-QPM) 

--

With applications ranging from metabolomics to histopathology, quantitative phase microscopy (QPM) is a powerful label-free imaging modality. Despite significant advances in fast multiplexed imaging sensors and deep-learning-based inverse solvers, the throughput of QPM is currently limited by the speed of the electronic hardware. To push throughput further, we propose to acquire images in a compressed form, so that more information can be transferred past the existing electronic-hardware bottleneck. To this end, we present a learnable optical compression-decompression framework that learns content-specific features. The proposed differentiable optical-electronic quantitative phase microscope first uses learnable optical feature extractors as image compressors. The intensity representation produced by these optical networks is then captured by the imaging sensor. Finally, a reconstruction network running on electronic hardware decompresses the QPM images. The proposed system achieves 64x compression while maintaining an SSIM of 0.9 and a PSNR of ~30 dB. The promising results demonstrated by our experiments open a new pathway toward end-to-end optimized QPM systems that provide unprecedented throughput improvements.
