
Image Inpainting for Defective Microscopy Images


DOI: 10.23977/jipta.2025.080106

Author(s)

Tao Wen 1,2, Yang Yang 1,2

Affiliation(s)

1 School of Information Science and Technology, Yunnan Normal University, Kunming, China
2 Laboratory of Pattern Recognition and Artificial Intelligence, Yunnan Normal University, Kunming, China

Corresponding Author

Tao Wen

ABSTRACT

Recent advancements in tissue clearing and light-sheet microscopy have transformed whole-brain imaging, enabling cellular-resolution visualization of intact murine brains. However, aggressive clearing protocols and mechanical handling during sample preparation frequently introduce structural defects—such as tissue cracks and regional loss—into microscopy images. These artifacts pose significant challenges for downstream computational analyses, particularly image registration, which depends on structural continuity for precise alignment. Despite the well-documented prevalence of such defects, their impact on registration fidelity remains underexplored, and effective computational solutions for mitigating these challenges are scarce. To bridge this gap, we introduce a mask-free generative framework for digital restoration of damaged neuroimaging data. Unlike existing methods that require labor-intensive manual annotation of defect masks, our approach eliminates mask dependency entirely during both training and inference. By leveraging a diffusion-based architecture with defect-invariant learning, the model autonomously adapts to diverse defect geometries—from fine cracks to large-scale tissue loss—without prior knowledge of corruption patterns. We validate our approach using whole-brain murine microscopy datasets containing real-world artifacts induced by tissue clearing. Quantitative evaluations show that our method not only generates photorealistic restorations of missing structures but also significantly enhances registration accuracy in defective samples.
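
For readers unfamiliar with the underlying machinery, the sketch below illustrates the general idea behind mask-free, diffusion-based restoration in the spirit of DDPM [3] and SDEdit [2]: the defective image is partially re-noised and then denoised by a model trained on intact data, with no defect mask required at any point. This is a minimal, self-contained Python/PyTorch sketch, not the authors' implementation; the toy denoiser, the linear noise schedule, and the re-noising step t0 are assumptions made purely for illustration.

# Illustrative sketch only: an SDEdit-style, mask-free restoration loop built on
# standard DDPM updates (Ho et al. [3]; Meng et al. [2]). The tiny denoiser below
# is a placeholder; the paper's actual network, training data, and hyperparameters
# are not described on this page, and every name in this file is an assumption.
import torch
import torch.nn as nn

T = 1000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule from DDPM
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative product of alphas

class TinyDenoiser(nn.Module):
    """Stand-in noise predictor; a real model would be a UNet trained on intact brain slices."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, t):
        # Broadcast the normalized timestep as an extra input channel.
        t_map = torch.full_like(x[:, :1], float(t) / T)
        return self.net(torch.cat([x, t_map], dim=1))

@torch.no_grad()
def restore(defective: torch.Tensor, denoiser: nn.Module, t0: int = 400) -> torch.Tensor:
    """Mask-free restoration: noise the defective slice to step t0, then denoise back to t=0."""
    # Forward-noise the whole image; no defect mask is required.
    x = torch.sqrt(alpha_bars[t0]) * defective \
        + torch.sqrt(1 - alpha_bars[t0]) * torch.randn_like(defective)
    for t in range(t0, -1, -1):
        eps = denoiser(x, t)                        # predicted noise at step t
        coef = betas[t] / torch.sqrt(1 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise     # standard DDPM reverse step
    return x

# Usage with a random stand-in "defective slice" (1 x 1 x 64 x 64 grayscale tensor):
restored = restore(torch.rand(1, 1, 64, 64), TinyDenoiser())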

KEYWORDS

Image Inpainting, Deep Learning, Diffusion Model

CITE THIS PAPER

Tao Wen, Yang Yang. Image Inpainting for Defective Microscopy Images. Journal of Image Processing Theory and Applications (2025) Vol. 8: 45-50. DOI: http://dx.doi.org/10.23977/jipta.2025.080106.

REFERENCES

[1] Renier N, Wu Z, Simon D J, et al. iDISCO: a simple, rapid method to immunolabel large tissue samples for volume imaging[J]. Cell, 2014, 159(4): 896-910.
[2] Meng C, He Y, Song Y, et al. SDEdit: guided image synthesis and editing with stochastic differential equations[J]. arXiv preprint arXiv:2108.01073, 2021.
[3] Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models[J]. Advances in Neural Information Processing Systems, 2020, 33: 6840-6851.
[4] Zeng Y, Fu J, Chao H, et al. Aggregated contextual transformations for high-resolution image inpainting[J]. IEEE Transactions on Visualization and Computer Graphics, 2022, 29(7): 3266-3280.
[5] Wan Z, Zhang J, Chen D, et al. High-fidelity pluralistic image completion with transformers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 4692-4701.
[6] Lugmayr A, Danelljan M, Romero A, et al. RePaint: inpainting using denoising diffusion probabilistic models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11461-11471.
[7] Wang Y, Chen Y C, Tao X, et al. VCNet: a robust approach to blind image inpainting[C]//Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXV. Springer International Publishing, 2020: 752-768.
