leave some space here and don't edit this line
Our method, Deep Prior Diffraction Tomography (DP-DT), reparameterizes the 3D sample reconstruction as the output of an untrained convolutional neural network (CNN), which serves as the [deep image prior (DIP)](https://arxiv.org/abs/1711.10925). Thus, instead of optimizing the 3D reconstruction directly, we optimize the parameters of the CNN. At first glance, this may not seem to make a difference; after all, a CNN may have more parameters than the reconstruction itself. However, Ulyanov and colleagues have recently found that CNNs have an inherent preference for "natural" images, even without training, a bias which was demonstrated [on a variety of computer vision tasks](https://dmitryulyanov.github.io/deep_image_prior) such as digital super-resolution (not to be confused with optical super-resolution!), inpainting, and denoising. We were intrigued by this result and hypothesized that the DIP could be applied to fill in the missing cone in 3D diffraction tomography, which can be thought of as missing information in the Fourier domain that produces "unnatural" features in the real-space domain, especially in the z direction. Another way to think about this application is Fourier-domain inpainting. We confirmed this hypothesis empirically on a variety of samples, both simulated and experimental, which you can read about in our paper!
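To make the reparameterization concrete, here is a minimal toy sketch of the DIP idea in PyTorch. Everything here is an assumption for illustration: the tiny 3D CNN, the fixed random input code `z`, and the identity "forward model" standing in for the actual diffraction-tomography physics. It is not the paper's implementation; it only shows that the volume is never optimized directly, the CNN weights are.

```python
# Toy deep-image-prior reparameterization (illustrative only; hypothetical
# architecture and an identity forward model instead of real physics).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Untrained CNN that maps a fixed random code z to a 3D volume.
net = nn.Sequential(
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
z = torch.randn(1, 8, 16, 16, 16)  # fixed input; only the CNN weights are trained

# Stand-in measurements y; in DP-DT the loss would compare a physical forward
# model applied to net(z) against the measured fields.
y = torch.randn(1, 1, 16, 16, 16)

initial_loss = ((net(z) - y) ** 2).mean().item()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(100):  # gradient descent on the CNN parameters
    opt.zero_grad()
    loss = ((net(z) - y) ** 2).mean()
    loss.backward()
    opt.step()

reconstruction = net(z).detach()  # the 3D volume is read out from the CNN
final_loss = loss.item()
```

The CNN's architectural bias toward "natural" structure is what implicitly regularizes `reconstruction`, which is why early stopping (here, a fixed iteration count) matters in DIP-style methods.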
Here are sample results from our paper (Fig. 10). We can see that using the DIP (DP-DT) produces superior results in this 2-layer polystyrene bead sample, compared to alternative regularization techniques like positivity (+) and total variation (TV). In particular, we can see better axial separation between the two layers with DP-DT, as well as refractive index values closer to the ground truth (1.59).
We have open-sourced the code and datasets necessary to reproduce the experimental results figures in our paper (Figs. 7-10):

Code: Link to code