Background
Higher cryptococcal antigen (CrAg) concentrations are associated with meningitis and death. A novel CrAg semi-quantitative lateral flow assay (CrAgSQ-LFA, IMMY, USA) has demonstrated excellent diagnostic performance. However, its visual interpretation is complex and requires operator expertise, which may limit its performance in clinical practice. We describe an AI-based digital system that automatically reads the CrAgSQ-LFA, removing the subjectivity and variability associated with interpretation of the test by different readers.
Methods
Fifty-five CrAg concentrations were tested in duplicate on three different days with the CrAgSQ-LFA. The concentrations ranged from 0 to 5000 ng/mL and were prepared from the manufacturer's positive control diluted with reference human sera (Merck, Sigma-Aldrich, Madrid, Spain). Each test was read visually by three different observers and photographed twice with the TiraSpot mobile app (Spotlab, Madrid, Spain) on three different smartphone models. An AI algorithm to read the CrAgSQ-LFA was developed: each image was processed by identifying the T1 and T2 lines and quantifying their intensities, and the ratio between the two signal intensities was used to assign the corresponding semi-quantitative score.
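To illustrate the ratio-based reading step, the following is a minimal sketch of line detection, intensity quantification, and score assignment. The peak-finding rule, background subtraction, and cut-off values are assumptions chosen for illustration, not the published algorithm.

```python
# Illustrative sketch of ratio-based semi-quantitative scoring of a
# lateral flow strip. Detection rules and cut-offs are hypothetical.
import numpy as np
from scipy.signal import find_peaks


def score_strip(strip_gray: np.ndarray) -> int:
    """Assign a semi-quantitative score (0 to 5) to a cropped grayscale
    image of the read window (rows along the flow direction)."""
    # Collapse the strip to a 1D intensity profile along the flow axis;
    # invert so that darker (stronger) lines become higher peaks.
    profile = 255.0 - strip_gray.mean(axis=1)
    profile -= np.median(profile)  # crude background subtraction

    # Locate candidate line peaks; T1 and T2 are assumed to be the two
    # strongest peaks in the read window (hypothetical detection rule).
    peaks, props = find_peaks(profile, height=5.0, distance=20)
    if len(peaks) == 0:
        return 0  # no visible test line: negative

    heights = np.sort(props["peak_heights"])[::-1]
    t1 = heights[0]
    t2 = heights[1] if len(heights) > 1 else 0.0

    # The ratio of the two line intensities drives the score.
    ratio = t2 / t1 if t1 > 0 else 0.0

    # Map the ratio to a 1+..5+ score with illustrative cut-offs.
    cutoffs = [0.1, 0.3, 0.5, 0.7]  # hypothetical values
    return 1 + sum(ratio > c for c in cutoffs)
```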
Results
A total of 2163 images were used to compare human visual reading with automatic analysis by the AI algorithm. Each LFA strip was assigned a semi-quantitative score ranging from 0 (negative) to 5+. Discrepancies in the semi-quantitative score were observed among observers in 28% of visual readings. For each score (0 to 5+), the range of concentrations associated with visual interpretation was wider than the range obtained by the AI (Figure 1), implying that, for a given concentration, a human reader is more likely than the algorithm to assign two different scores.
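As a sketch of how the inter-observer discrepancy rate can be computed, the snippet below counts strips on which the three observers do not all agree. The data layout and toy scores are assumptions; the 28% figure above comes from the study dataset.

```python
# Illustrative computation of the inter-observer discrepancy rate.
import numpy as np

# scores[i, j] = semi-quantitative score given to strip i by observer j
# (toy data; the real study used three observers per strip).
scores = np.array([
    [2, 2, 2],
    [3, 4, 3],
    [0, 0, 0],
    [5, 5, 4],
])

# A reading is discrepant when the observers do not all agree.
discrepant = (scores != scores[:, :1]).any(axis=1)
print(f"Discrepant readings: {discrepant.mean():.0%}")  # 50% here
```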
Conclusions
AI reading of a smartphone picture reduces the variability of human visual reading of the CrAgSQ-LFA. Health workers need no prior training, as using the app is as intuitive as taking an everyday photograph. Results and other metadata entered in the app can be automatically uploaded to a cloud database, enabling better estimates of disease burden and a lasting record of test results for further use.