Probe localization from ultrasound image sequences using deep learning for volume reconstruction

Kanta Miura, Koichi Ito, Takafumi Aoki, Jun Ohmiya, Satoshi Kondo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose a deep-learning-based method for localizing the probe using only ultrasound (US) image sequences, for three-dimensional (3D) US image reconstruction. The proposed method employs a convolutional neural network (CNN) to estimate the motion of the probe from two US images. Our CNN architecture consists of two parts: in-plane and out-of-plane probe motion estimation. Two loss functions are introduced to guarantee the consistency of the estimated probe motion across multiple frames. Through experiments, we demonstrate that the proposed method outperforms the conventional method in probe localization.
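The abstract's multi-frame consistency idea can be illustrated with a small sketch: the probe motion estimated directly between frames 0 and 2 should agree with the composition of the motions estimated for frames 0→1 and 1→2. This is not the authors' implementation; the 4×4 rigid-transform representation, the single rotation axis, and the Frobenius-norm penalty are illustrative assumptions.

```python
import numpy as np

def motion_matrix(tx, ty, tz, theta_z):
    """Hypothetical probe motion as a 4x4 rigid transform.
    Real probe motion has 6 DoF; one rotation axis keeps the sketch short."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0],
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

def consistency_loss(T_01, T_12, T_02):
    """Penalty on the mismatch between the two-frame motion estimate
    and the composition of the two single-frame estimates."""
    return np.linalg.norm(T_02 - T_01 @ T_12)

# Consistent estimates: the composed motion matches the direct one.
T01 = motion_matrix(0.1, 0.0, 0.5, 0.02)
T12 = motion_matrix(0.0, 0.1, 0.5, 0.01)
T02 = T01 @ T12
```

With perfectly consistent estimates the loss is zero; during training, minimizing such a term pushes the network's per-pair predictions toward mutual agreement.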

Original language: English
Title of host publication: International Forum on Medical Imaging in Asia 2021
Editors: Ruey-Feng Chang
Publisher: SPIE
ISBN (Electronic): 9781510644205
DOIs
Publication status: Published - 2021
Event: International Forum on Medical Imaging in Asia 2021, IFMIA 2021 - Taipei, Taiwan, Province of China
Duration: 2021 Jan 24 - 2021 Jan 26

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 11792
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: International Forum on Medical Imaging in Asia 2021, IFMIA 2021
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 21/1/24 - 21/1/26

Keywords

  • CNN
  • probe localization
  • ultrasound
  • volume reconstruction

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering
