
Introduction

A smartphone application (app) may facilitate identification of cardiac implantable electronic devices (CIEDs) (e.g., pacemakers and defibrillators) in urgent or emergent settings. Programming and interrogation of CIEDs require proprietary software and manufacturer-specific equipment. In an urgent setting, the device identification card or other relevant details may not be readily available (1). Device identification currently relies on manual inspection of chest radiographs; however, this is time consuming and difficult, requiring subjective provider interpretation (2). Recently, a group in the United Kingdom described the use of a convolutional neural network model to classify implanted cardiac devices on chest radiography (3). We used similar techniques to develop an easy-to-use, smartphone-enabled, point-of-care application that gives front-line clinicians access to such artificial intelligence algorithms (4,5). Development of medical artificial intelligence is vital, and deploying these advances to the bedside is the pragmatic next step.

In this study, anteroposterior and posteroanterior chest radiographs from patients with pacemakers or defibrillators implanted between 2016 and 2018 were obtained at a single institution. Images were obtained from our electronic medical record, deidentified, and coded as 1 of the 4 major device manufacturers: Medtronic (Dublin, Ireland), Abbott/St. Jude Medical (Chicago, Illinois), Boston Scientific (Marlborough, Massachusetts), and Biotronik (Berlin, Germany). Raw radiographs were cropped to 400 × 400-pixel single-channel red, green, blue (RGB) files. Data augmentation was performed with horizontal and vertical flipping, random cropping, and varying contrast/brightness levels. Next, images were captured using a mobile phone to incorporate screenshots and artifact variations; ambient lighting conditions were altered during the mobile phone image capture process. Batched learning was completed through a commercially available platform. TensorFlow and Keras were used as the training frameworks, with code written in Python. The model was generated de novo and used k-fold cross-validation to select the number of features and hidden layers producing the highest accuracy with minimal loss. Images were randomly assigned in a 7:2:1 ratio to training:validation:testing sets; none of the images used for model development were used in the validation or test set. Accuracy of mobile phone image classification was determined using TensorFlow's pre-programmed analysis. Weighted averages of multiclass outcomes were used to calculate sensitivity and specificity and to create receiver-operating characteristic curves. Pearson's chi-squared test was used to assess for statistical significance. The study protocol was approved by the Institutional Review Board.
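The augmentation and non-overlapping 7:2:1 split described above can be sketched in pure Python. This is an illustrative sketch only; the study's actual pipeline used TensorFlow/Keras, and the function names here are ours:

```python
import random

def augment(image):
    """Return simple augmented variants of a 2-D pixel grid:
    the original, a horizontal flip, and a vertical flip.
    (The study also used random cropping and contrast/brightness
    variation, omitted here for brevity.)"""
    h_flip = [row[::-1] for row in image]
    v_flip = image[::-1]
    return [image, h_flip, v_flip]

def split_7_2_1(items, seed=0):
    """Randomly partition items into training/validation/testing
    sets at the study's 7:2:1 ratio, with no overlap between sets."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = round(0.7 * n)
    n_val = round(0.2 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Toy usage: a 2x2 "image" and a 10-item split
img = [[1, 2], [3, 4]]
variants = augment(img)  # 3 variants per source image
train, val, test = split_7_2_1(list(range(10)))
```

Because the sets are disjoint slices of one shuffled list, no training image can leak into validation or testing, matching the study's stated constraint.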

A total of 1,509 unique radiographs were included in the final analysis. Approximately 47% of the radiographs were from patients with pacemaker implants and 53% were from patients with implantable cardioverter-defibrillator implants. After data augmentation and mobile phone camera capture, 3,008 images were loaded for analysis. According to the pre-specified 7:2:1 ratio, 2,106 images were assigned to the training dataset, 602 to the validation dataset used to fine-tune the model, and 300 to the final testing dataset. In the testing group, the mobile phone model correctly classified 95% (82 of 86) of Boston Scientific images, 91% (53 of 58) of Biotronik images, 94% (100 of 106) of Medtronic images, and 100% (50 of 50) of Abbott/St. Jude Medical images (Table 1). These results yielded receiver-operating characteristic curves with excellent areas under the curve (>0.95). The k-fold cross-validation yielded an optimal model with 7 hidden layers of 8, 16, 32, 64, 64, 64, and 128 neurons, respectively. The optimal number of training epochs was 23. When the validation dataset was examined, the accuracy was 97%, and the loss was minimal at 0.11. When the overall results of the 300 testing images were examined, there was 95% sensitivity and 98% specificity. Using Pearson's chi-squared test, we determined that these results were not likely due to chance alone (p < 0.001).
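The k-fold model-selection step can be sketched generically in Python. This is a sketch under stated assumptions: the candidate configurations and the `score` function are illustrative stand-ins for actual training and evaluation of each fold, not the study's code:

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds."""
    fold_size = n // k
    folds = []
    for i in range(k):
        start = i * fold_size
        end = start + fold_size if i < k - 1 else n
        folds.append(list(range(start, end)))
    return folds

def select_architecture(candidates, n_samples, k, score):
    """Return the candidate layer configuration with the best mean
    cross-validated score; score(config, train_idx, val_idx) stands
    in for training and evaluating one fold."""
    best, best_mean = None, float("-inf")
    folds = kfold_indices(n_samples, k)
    for config in candidates:
        scores = []
        for i, val_idx in enumerate(folds):
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(score(config, train_idx, val_idx))
        mean = sum(scores) / len(scores)
        if mean > best_mean:
            best, best_mean = config, mean
    return best

# Toy usage with a dummy scorer that simply prefers deeper configurations
candidates = [(8, 16, 32), (8, 16, 32, 64, 64, 64, 128)]
best = select_architecture(candidates, n_samples=100, k=5,
                           score=lambda cfg, tr, va: len(cfg))
```

In the real pipeline, `score` would train the candidate network on the training folds and return its validation-fold accuracy (or negative loss), yielding the 7-hidden-layer configuration reported here.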

Table 1. Classification Samples and Sensitivity by Manufacturer

                  Boston Scientific   Biotronik   Medtronic   St. Jude Medical   Total
Trained                         518         428         672                488   2,106
Validated                       162         112         182                146     602
Tested                           86          58         106                 50     300
Total                           766         598         960                684   3,008
Sensitivity (%)                95.3        91.4        94.3              100.0    95.2

Values are n, except for sensitivity, which is %.
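As a minimal arithmetic check, the per-manufacturer and overall test-set sensitivities can be recomputed from the counts reported in the text and in the Tested row of Table 1 (variable names here are illustrative):

```python
# Correctly classified test images per manufacturer, from the text:
# Boston Scientific, Biotronik, Medtronic, Abbott/St. Jude Medical
correct = [82, 53, 100, 50]
tested  = [86, 58, 106, 50]   # Tested row of Table 1

# Per-class sensitivity, as a percentage rounded to one decimal
per_class = [round(100 * c / t, 1) for c, t in zip(correct, tested)]

# Micro-averaged sensitivity over all 300 test images
overall = sum(correct) / sum(tested)

print(per_class)          # [95.3, 91.4, 94.3, 100.0]
print(round(overall, 2))  # 0.95
```

The micro-average over the 300 test images is 95.0%, consistent with the 95% sensitivity reported in the text.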

Our results are limited by a relatively small training sample from a single institution; external validation at an outside institution would be a valuable future endeavor. Additionally, mobile phone images were captured on a monitor with a Retina display, implying a high pixel density. This may have created display-specific artifacts, and the results may not generalize to other hardware. However, images captured by our phone application can be used to retrain the model as novel devices become available.

Here we report the development and validation of a neural network-driven classification model that accurately identified cardiac devices implanted in the United States on chest radiographs. Furthermore, this model has been developed into a mobile phone application that can be used as a point-of-care tool (4). Rather than the conventional "bench-to-bedside" approach of translational research, we demonstrated the feasibility of "big data-to-bedside" endeavors. This research has the potential to facilitate device identification in urgent scenarios in medical settings with limited resources.

  • 1. Stevenson W.G., Chaitman B.R., Ellenbogen K.A., et al.: "Clinical assessment and management of patients with implanted cardioverter-defibrillators presenting to nonelectrophysiologists". Circulation 2004; 110: 3866.

  • 2. Jacob S., Shahzad M.A., Maheshwari R., Panaich S.S. and Aravindhakshan R.: "Cardiac rhythm device identification algorithm using X-Rays: CaRDIA-X". Heart Rhythm 2011; 8: 915.

  • 3. Howard J.P., Fisher L., Shun-Shin M.J., Keene D., et al.: "Cardiac rhythm device identification using neural networks". J Am Coll Cardiol EP 2019; 5: 576.

  • 4. Weinreich B. : "PaceMaker-ID: Identify any pacemaker in seconds". Available at: www.pacemakerid.com.

  • 5. Weinreich M., Weinreich B., Chudow J.J., et al.: "Computer-aided detection and identification of implanted cardiac devices on chest radiography utilizing deep convolutional neural networks, a form of machine learning". J Am Coll Cardiol 2019; 73: Suppl 1.


Footnotes

Please note: Dr. Fisher is a consultant for Medtronic. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose.

The authors attest they are in compliance with human studies committees and animal welfare regulations of the authors’ institutions and Food and Drug Administration guidelines, including patient consent where appropriate. For more information, visit the JACC: Clinical Electrophysiology author instructions page.