Man vs. machine: Predicting hospital bed demand from an emergency department

PLoS One. 2020 Aug 27;15(8):e0237937. doi: 10.1371/journal.pone.0237937. eCollection 2020.

ABSTRACT

BACKGROUND: The recent literature reports promising results from using intelligent systems to support decision making in healthcare operations. Such systems may improve diagnostic and treatment protocols and help predict hospital bed demand. Predicting hospital bed demand from emergency department (ED) attendances could aid resource allocation and reduce pressure on busy hospitals. However, there is still limited knowledge on whether intelligent systems can operate as fully autonomous, user-independent systems.

OBJECTIVE: To compare the performance of a computer-based algorithm and humans in predicting hospital bed demand (admissions and discharges) based on the initial SOAP (Subjective, Objective, Assessment, Plan) records of the ED.

METHODS: This was a retrospective cohort study that compared the performance of humans and machines in predicting hospital bed demand from an ED. It considered the electronic medical records (EMR) of 9030 patients (230 used as a testing set, and hence evaluated both by humans and by the algorithm, and 8800 used as a training set exclusively by the algorithm) who visited the ED of a tertiary care and teaching public hospital located in Porto Alegre, Brazil, between January and December 2014. The machine role was played by a Support Vector Machine (SVM) classifier, and the human predictions were made by four ED physicians. Predictions were compared in terms of sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC).
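
For illustration only, the sketch below shows one way a text-based admission/discharge classifier of this kind could be assembled with scikit-learn. It is not the authors' published pipeline: the TF-IDF features, the SVC settings, the file name ed_soap_records.csv, and the column names soap_note and admitted are all assumptions.

# Illustrative sketch only -- not the authors' pipeline. Assumes a CSV with a
# free-text "soap_note" column and a binary "admitted" label (1 = admission).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score

records = pd.read_csv("ed_soap_records.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    records["soap_note"], records["admitted"],
    test_size=230, stratify=records["admitted"], random_state=0)

# Bag-of-words features feeding a support vector machine classifier;
# probability=True allows scoring the model with AUROC.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    SVC(kernel="linear", probability=True, random_state=0))
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
print("AUROC:   ", roc_auc_score(y_test, scores))
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))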

RESULTS: All graders achieved similar accuracies. Accuracy measured by AUROC for the testing set was 0.82 [95% confidence interval (CI): 0.77-0.87] for novice physicians, 0.80 (95% CI: 0.75-0.85) for the machine, and 0.76 (95% CI: 0.71-0.81) for experienced physicians. The algorithm's processing time per test EMR was 0.00812 ± 0.0009 seconds. In contrast, novice physicians took on average 156.80 seconds per test EMR, while experienced physicians took on average 56.40 seconds.
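
Confidence intervals like those reported above are commonly estimated by bootstrapping the test set; whether the authors used this method is not stated in the abstract. A minimal sketch, assuming arrays y_true (binary labels) and y_score (predicted probabilities) from the classifier above:

# Bootstrap 95% CI for AUROC -- an illustrative method, not necessarily the
# one used in the paper.
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:              # need both classes present
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)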

CONCLUSIONS: Our data indicated that the system could predict patient admission or discharge with 80% accuracy, which was similar to the performance of novice and experienced physicians. These results suggested that the algorithm could operate as an autonomous, independent system to complete this task.

PMID:32853217 | DOI:10.1371/journal.pone.0237937