Abstract:
Radiation dose reduction in Computed Tomography (CT) can be achieved by decreasing the number of projections. However, CT images reconstructed from sparse-view projections via the filtered back-projection (FBP) algorithm often contain severe streak artifacts, which can compromise clinical diagnosis. To address this issue, this paper proposes TransitNet, an iterative unrolling deep neural network that combines model-driven data consistency, a physical prior constraint, with the feature extraction capabilities of deep learning. TransitNet employs a novel iterative architecture: it implements flexible physical constraints through learnable data consistency operations, uses the Transformer's self-attention mechanism to model long-range dependencies in image features, and introduces linear attention mechanisms to reduce the computational complexity of self-attention from quadratic to linear. Extensive experiments demonstrate that this method offers significant advantages in both reconstruction quality and computational efficiency, effectively suppressing streak artifacts while preserving image structures and details.
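The quadratic-to-linear complexity reduction mentioned above typically relies on replacing the softmax in attention with a decomposable kernel feature map, so that the key-value product can be computed before multiplying by the queries. The sketch below is illustrative only (it is not TransitNet's actual implementation, which the abstract does not specify); the `elu(x)+1` feature map follows one common choice from the linear-attention literature, and all array shapes are assumed for demonstration.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized linear attention: O(n * d^2) instead of O(n^2 * d).

    Q, K, V: arrays of shape (n, d). The feature map phi(x) = elu(x) + 1
    keeps entries positive, so the normalizer stays well-defined.
    Hypothetical sketch, not the paper's implementation.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    # Associativity trick: form (K^T V) first, a small (d, d) matrix,
    # so no (n, n) attention map is ever materialized.
    KV = Kp.T @ V                       # (d, d)
    Z = Qp @ Kp.sum(axis=0)             # (n,) row-wise normalizer
    return (Qp @ KV) / (Z[:, None] + eps)

# Example: sequence of n=1024 tokens with d=64 channels.
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 1024, 64))
out = linear_attention(Q, K, V)        # shape (1024, 64)
```

Because `(phi(Q) phi(K)^T) V = phi(Q) (phi(K)^T V)`, this produces the same result as the quadratic form while the cost grows linearly in the number of tokens, which is what makes such mechanisms attractive for high-resolution CT feature maps.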