Semantic segmentation in medical images through transfused convolution and transformer networks

Tashvik Dhamija, Anunay Gupta, Shreyansh Gupta, Anjum, Rahul Katarya, Ghanshyam Singh

Research output: Contribution to journal › Article › peer-review

31 Citations (Scopus)


Recent decades have witnessed rapid development in the field of medical image segmentation. Deep learning-based fully convolutional neural networks have played a significant role in the development of automated medical image segmentation models. Though immensely effective, such networks take into account only localized features and are unable to capitalize on the global context of medical images. In this paper, two deep learning-based models are proposed, namely USegTransformer-P and USegTransformer-S. The proposed models capitalize on both local and global features by amalgamating transformer-based encoders and convolution-based encoders to segment medical images with high precision. Both proposed models deliver promising results, outperforming previous state-of-the-art models on various segmentation tasks such as brain tumor, lung nodule, skin lesion, and nuclei segmentation. The authors believe that the ability of USegTransformer-P and USegTransformer-S to perform segmentation with high precision could remarkably benefit medical practitioners and radiologists around the world.
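The abstract describes fusing a convolution-based encoder (local features) with a transformer-based encoder (global context). As a minimal, hypothetical sketch of that idea, the toy functions below stand in for the two encoder types on a 1-D signal: a moving average for local convolution-style features, and a similarity-weighted sum over all positions for attention-style global features. The parallel vs. sequential fusion functions are an assumption about what the "-P" and "-S" suffixes might denote; the actual USegTransformer architectures are defined in the paper itself.

```python
import numpy as np

def local_features(x, kernel_size=3):
    # Toy stand-in for a convolutional encoder: a moving average
    # captures only a small neighbourhood around each position.
    pad = kernel_size // 2
    padded = np.pad(x, pad, mode="edge")
    return np.array([padded[i:i + kernel_size].mean() for i in range(len(x))])

def global_features(x):
    # Toy stand-in for a self-attention encoder: each output position
    # is a softmax-weighted sum over ALL positions (global context).
    scores = np.outer(x, x)  # pairwise similarity
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ x

def fuse_parallel(x):
    # Parallel fusion (hypothetical "-P"): run both encoders on the
    # same input, then stack their feature maps for a decoder.
    return np.stack([local_features(x), global_features(x)])

def fuse_sequential(x):
    # Sequential fusion (hypothetical "-S"): feed one encoder's
    # output into the other.
    return global_features(local_features(x))

x = np.linspace(0.0, 1.0, 8)
print(fuse_parallel(x).shape)    # (2, 8)
print(fuse_sequential(x).shape)  # (8,)
```

The sketch only illustrates the contrast the abstract draws: convolution mixes a bounded neighbourhood, whereas attention mixes every position with every other, which is why combining the two can capture both local detail and global context.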

Original language: English
Pages (from-to): 1132-1148
Number of pages: 17
Journal: Applied Intelligence
Issue number: 1
Publication status: Published - Jan 2023

Keywords
  • COVID-19
  • Deep learning
  • Image-segmentation
  • Transformer
  • U-net

ASJC Scopus subject areas

  • Artificial Intelligence


