Research Spotlight · Awaiting Publication

AI-Driven Classification of Nursing Diagnoses in
Administrative Claims Data

Applying transformer-based deep learning models — including BEHRT — to automatically classify nursing diagnoses from ICD-coded administrative claims, enabling scalable population-level nursing quality research.

Nursing Informatics · Deep Learning · NLP · Administrative Data · UNLV · 2025

AI-Driven Classification of Nursing Diagnoses Using Administrative Claims Data: A Transformer-Based Approach with BEHRT

Yash Devkota, Sobhan Ebrahimi Azar, Sepideh Farivar, Jesse Ortega, Jennifer Vanderlaan, Jorge Fonseca*, Kazem Taghva

* Contact author  ·  1UNLV Department of Computer Science  ·  2UNLV School of Nursing

University of Nevada, Las Vegas · 2025 · Awaiting Peer Review

Research Overview

Background: Nursing diagnoses represent standardized classifications of patient conditions that guide clinical care planning. While nursing-sensitive quality indicators are increasingly important for healthcare policy, manually coding nursing diagnoses from large administrative datasets is prohibitively resource-intensive. All-payer claims databases (APCDs) and hospital discharge records encode clinical conditions through ICD (International Classification of Diseases) codes — but the mapping between these codes and formal nursing diagnoses remains largely unautomated.

Objective: This study evaluates the use of transformer-based language models — specifically BEHRT (BERT for Electronic Health Records) — for automated classification of nursing diagnoses from ICD-coded administrative claims sequences. We assess model performance across nursing diagnosis categories, compare transformer approaches against traditional machine learning baselines, and examine racial and socioeconomic fairness in model predictions.

Methods: We construct a labeled dataset from a de-identified administrative claims corpus, mapping ICD-10 diagnostic and procedure codes to standardized nursing diagnoses. BEHRT is fine-tuned on sequential ICD code inputs, with performance benchmarked against logistic regression, gradient boosting, and BioBERT baselines. Fairness analysis follows established guidelines for algorithmic equity in healthcare AI.

Results: BEHRT achieves a macro-averaged F1 score of 0.847 across nursing diagnosis categories, outperforming all baselines. Performance is strongest for high-frequency diagnoses with clear ICD-code correlates (F1 > 0.91) and lowest for complex, multi-factorial diagnoses requiring contextual clinical judgment. Racial disparities in prediction accuracy are identified and discussed.

Conclusions: Transformer-based models can reliably automate nursing diagnosis classification from administrative claims data — opening the door to scalable, reproducible nursing quality measurement at population scale. Identified fairness gaps highlight the need for careful validation across demographic subgroups before clinical deployment.

Keywords

BEHRT · Transformer Models · Nursing Diagnoses · Administrative Claims · ICD-10 Codes · Algorithmic Fairness · NLP for EHR · APCD Data

How We Did It

1

Data Construction

A labeled dataset was constructed from de-identified administrative claims using established ICD-to-nursing-diagnosis crosswalk tables and clinical expert review. The corpus includes hospital discharge claims and All-Payer Claims Database records spanning multiple states.
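The crosswalk step can be sketched as a lookup from ICD-10 codes to nursing-diagnosis labels. The codes and labels below are illustrative placeholders, not the study's actual crosswalk tables:

```python
# Hypothetical ICD-10 → nursing-diagnosis crosswalk (illustration only;
# the real tables come from established crosswalks plus expert review).
CROSSWALK = {
    "M62.81": "Impaired Mobility",     # muscle weakness (generalized)
    "E86.0":  "Fluid Imbalance",       # dehydration
    "R53.1":  "Activity Intolerance",  # weakness
}

def label_claim(icd_codes):
    """Map one claim's ICD codes to its set of nursing-diagnosis labels."""
    return sorted({CROSSWALK[c] for c in icd_codes if c in CROSSWALK})

labels = label_claim(["M62.81", "E86.0", "Z99.89"])  # unmapped codes are skipped
```

In practice each claim can carry many codes, so a single record may map to several nursing diagnoses at once, which is why the classification task downstream is multi-label.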

2

BEHRT Architecture

BEHRT (Li et al., 2020) adapts the BERT transformer architecture for sequential ICD code inputs, treating each patient's claim history as a "language" sequence. The model encodes temporal patterns in diagnosis and procedure codes that traditional ML models miss.
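A minimal sketch of how a claim history becomes a BEHRT-style input, assuming the common setup from Li et al. (2020): ICD tokens grouped per visit with separator tokens, alongside a parallel age sequence that carries the temporal signal. Token names here are illustrative, not the model's actual vocabulary:

```python
# Turn a patient's visit history into a BEHRT-style token sequence plus a
# parallel age sequence (a simplified sketch of the input construction).
def build_behrt_input(visits):
    """visits: list of (age_at_visit, [icd_codes]) in chronological order."""
    tokens, ages = ["CLS"], [visits[0][0]]
    for age, codes in visits:
        for code in codes:
            tokens.append(code)   # each ICD code is one "word"
            ages.append(age)      # age embedding encodes when it occurred
        tokens.append("SEP")      # visit boundary, like a sentence break
        ages.append(age)
    return tokens, ages

tokens, ages = build_behrt_input([(64, ["I10", "E11.9"]), (65, ["N17.9"])])
```

It is this visit-ordered, age-annotated sequence that lets the transformer attend to temporal patterns that bag-of-codes baselines cannot represent.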

3

Fine-Tuning & Evaluation

BEHRT was fine-tuned for multi-label nursing diagnosis classification using UNLV's GPU cluster. Performance was evaluated using macro-averaged F1, precision, and recall — with stratified analysis by nursing diagnosis category, race/ethnicity, and insurance type.
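The headline metric can be written out by hand. For multi-label outputs, macro-averaged F1 computes a per-label F1 and then averages with equal weight per label, so rare diagnoses count as much as common ones. The data below is illustrative:

```python
# Macro-averaged F1 for multi-label predictions, spelled out to show the
# metric behind the reported scores (toy data, not study results).
def macro_f1(y_true, y_pred, labels):
    scores = []
    for lbl in labels:
        tp = sum(lbl in t and lbl in p for t, p in zip(y_true, y_pred))
        fp = sum(lbl not in t and lbl in p for t, p in zip(y_true, y_pred))
        fn = sum(lbl in t and lbl not in p for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)  # F1 = 2TP/(2TP+FP+FN)
    return sum(scores) / len(scores)

y_true = [{"Pain"}, {"Mobility"}, {"Pain", "Mobility"}]
y_pred = [{"Pain"}, set(), {"Pain", "Mobility"}]
score = macro_f1(y_true, y_pred, ["Pain", "Mobility"])
```

The same function applied per diagnosis category (rather than averaged) yields the stratified per-category scores shown in Fig. 1.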

4

Fairness Analysis

Prediction performance was disaggregated across demographic subgroups (race, ethnicity, insurance status) following established algorithmic fairness frameworks. Disparities were quantified using equalized odds and demographic parity metrics.
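The two fairness metrics reduce to simple rate comparisons across subgroups. A sketch on toy binary predictions, where the demographic-parity gap is the spread in positive-prediction rates and the true-positive-rate gap is one component of equalized odds (group names and data are hypothetical):

```python
# Demographic-parity gap and TPR gap (one half of equalized odds) across
# subgroups, on illustrative (y_true, y_pred) boolean pairs.
def rate(pairs, cond=None):
    """Positive-prediction rate, optionally conditioned on the true label."""
    sel = [(y, p) for y, p in pairs if cond is None or y == cond]
    return sum(p for _, p in sel) / len(sel)

def fairness_gaps(by_group):
    pos = {g: rate(v) for g, v in by_group.items()}             # P(pred=1)
    tpr = {g: rate(v, cond=True) for g, v in by_group.items()}  # P(pred=1 | y=1)
    return (max(pos.values()) - min(pos.values()),
            max(tpr.values()) - min(tpr.values()))

by_group = {
    "group_a": [(True, True), (False, False), (True, True), (False, True)],
    "group_b": [(True, False), (True, True), (False, False), (False, False)],
}
dp_gap, tpr_gap = fairness_gaps(by_group)
```

Full equalized odds would also compare false-positive rates (condition on `y == False`); the structure of the computation is identical.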

Key Innovations

Innovation #1

APCD as a Training Corpus

Most NLP-for-healthcare models are trained on EHR notes. We demonstrate that ICD-coded administrative claims — far more abundant and standardized — are sufficient to train high-quality nursing diagnosis classifiers when using transformer architectures.

Innovation #2

Nursing-Specific Validation Framework

Rather than generic NLP benchmarks, we evaluate model outputs against clinically meaningful nursing quality indicators — with validation by a certified nurse-midwife (Dr. Vanderlaan) to ensure clinical face validity alongside statistical performance.

Results at a Glance

Model Performance Comparison (Macro F1)

🏆 BEHRT (Ours): 0.847
BioBERT: 0.793
Gradient Boosting: 0.712
Logistic Regression: 0.638

Diagnosis categories in Fig. 1: Activity Intolerance, Impaired Mobility, Fluid Imbalance, Pain Management, Maternal Risk, Psychosocial Distress.

Fig. 1 — BEHRT F1 scores by nursing diagnosis category. Bars above 0.85 (blue) represent high-frequency diagnoses with clear ICD correlates. Complex multi-factorial diagnoses (red) show lower but clinically meaningful performance.

Key Findings Summary

BEHRT Outperforms All Baselines

The transformer-based BEHRT model achieves a macro F1 of 0.847 — about 5 points above BioBERT and 21 points above logistic regression — demonstrating the value of sequential ICD code modeling.

High-Frequency Diagnoses: F1 > 0.91

For well-defined nursing diagnoses with strong ICD correlates (impaired mobility, fluid imbalance, activity intolerance), BEHRT achieves F1 scores exceeding 0.91 — sufficiently high for production quality measurement systems.

Fairness Gap Identified

Prediction performance varies by race/ethnicity and insurance status — consistent with known biases in administrative data. We identify specific nursing diagnosis categories where gaps are largest and recommend targeted re-sampling strategies.

Scalability Confirmed

End-to-end inference on a 500,000-record dataset completes in under 4 hours on UNLV's GPU cluster — demonstrating viability for statewide APCD-scale deployment.

Clinical & Policy Significance

Cost Reduction

Eliminate Manual Coding Costs

Manual nursing diagnosis coding for population studies costs tens of thousands of dollars in expert labor per dataset. Automated classification reduces this to GPU compute costs — typically less than $100 per million records.

Quality Measurement

Scalable Nursing Quality Indicators

Automated diagnosis classification unlocks the ability to compute nursing-sensitive quality indicators at state or national scale — enabling the kind of comparative effectiveness research that has previously been logistically impossible.

Health Equity

Surfacing Disparities at Scale

Population-scale nursing diagnosis classification makes it possible — for the first time — to systematically identify disparities in nursing care delivery across race, insurance status, and geography using administrative data.

This Work is Ongoing

Awaiting Peer Review

This research is currently progressing through the peer review process. We'll update this page with publication details as they become available. Check back for the full findings when the work is formally published.

Get in Touch

Interested in this work or looking to collaborate? Reach out to the team at appdev@unlv.edu — we welcome conversations with researchers, clinicians, and healthcare organizations working in this space.

Related Research

Companion Project

APCD Maternal Health Research

The nursing diagnosis classification methodology explored in this research is being applied in the APCD Maternal Health project to classify and analyze nursing diagnoses in Virginia's All-Payer Claims Database.

Learn about APCD project
Ongoing Initiative

AI in Nursing: The Bigger Picture

This study is one output of the broader NCSBN research initiative led by Vanderlaan and Fonseca — developing AI tools for nursing quality measurement and practice improvement at national scale.

Explore the initiative