Optimized dual threshold entity resolution for electronic health record databases--training set size and active learning.

Erel Joffe, Michael J. Byrne, Phillip Reeder, Jorge R. Herskovic, Craig W. Johnson, Allison B. McCoy, Elmer V. Bernstam

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

Clinical databases may contain several records for a single patient. Multiple general entity-resolution algorithms have been developed to identify such duplicate records. To achieve optimal accuracy, algorithm parameters must be tuned to a particular dataset. The purpose of this study was to determine the required training set size for probabilistic, deterministic and Fuzzy Inference Engine (FIE) algorithms with parameters optimized using the particle swarm approach. Each algorithm classified potential duplicates into three classes: definite match, non-match and indeterminate (i.e., requires manual review). Training set sizes ranged from 2,000 to 10,000 randomly selected record-pairs. We also evaluated marginal uncertainty sampling for active learning. Optimization reduced the proportion of record-pairs requiring manual review (Deterministic 11.6% vs. 2.5%; FIE 49.6% vs. 1.9%; Probabilistic 10.5% vs. 3.5%). FIE classified 98.1% of the records correctly (precision = 1.0). Best performance required training on all 10,000 randomly selected record-pairs. Active learning achieved comparable results with 3,000 records. Automated optimization is effective, and targeted sampling can reduce the required training set size.
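The dual-threshold scheme and marginal uncertainty sampling described in the abstract can be illustrated with a minimal sketch. The code below assumes a generic similarity score in [0, 1] for each candidate record-pair; the function names, threshold values and example scores (classify_pair, select_for_labeling, lower=0.3, upper=0.8) are hypothetical illustrations, not taken from the paper.

```python
def classify_pair(score, lower, upper):
    """Assign a candidate record-pair to one of three classes using
    two tuned thresholds (the 'dual threshold' idea)."""
    if score >= upper:
        return "definite match"
    if score <= lower:
        return "non-match"
    return "indeterminate"  # routed to manual review


def select_for_labeling(scored_pairs, lower, upper, k):
    """Marginal uncertainty sampling: pick the k pairs whose scores lie
    closest to either decision threshold, where a label is most informative."""
    def margin(pair):
        _, score = pair
        return min(abs(score - lower), abs(score - upper))
    return sorted(scored_pairs, key=margin)[:k]


# Example usage with made-up scores:
pairs = [("A-B", 0.93), ("C-D", 0.41), ("E-F", 0.12), ("G-H", 0.79)]
labels = {pid: classify_pair(s, lower=0.3, upper=0.8) for pid, s in pairs}
to_review = select_for_labeling(pairs, lower=0.3, upper=0.8, k=2)
```

In the study the thresholds (and the other algorithm parameters) are not hand-set as above but tuned on the labeled training pairs, e.g. with particle swarm optimization; the sketch only shows how the resulting thresholds partition record-pairs and how uncertainty sampling would prioritize pairs for manual labeling.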

Original language: English (US)
Pages (from-to): 721-730
Number of pages: 10
Journal: AMIA ... Annual Symposium proceedings / AMIA Symposium
Volume: 2013
State: Published - 2013

ASJC Scopus subject areas

  • General Medicine

MD Anderson CCSG core facilities

  • Bioinformatics Shared Resource
