Title:

Difficulty Translation in Histopathology Images

Link:

https://link.springer.com/chapter/10.1007/978-3-030-59137-3_22

Abstract:

The unique nature of histopathology images opens the door to domain-specific formulations of image translation models. We propose a difficulty translation model that modifies colorectal histopathology images to be more challenging to classify. Our model comprises a scorer, which provides an output confidence to measure the difficulty of images, and an image translator, which learns to translate images from easy-to-classify to hard-to-classify using a training set defined by the scorer. We present three findings. First, generated images were indeed harder to classify for both human pathologists and machine learning classifiers than their corresponding source images. Second, image classifiers trained with generated images as augmented data performed better on both easy and hard images from an independent test set. Finally, human annotator agreement and our model’s measure of difficulty correlated strongly, implying that for future work requiring human annotator agreement, the confidence score of a machine learning classifier could be used as a proxy.
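The abstract describes a two-part pipeline: a scorer whose output confidence measures image difficulty, and a translator trained on a set defined by that scorer. A minimal sketch of the scorer-based split might look like the following; the function name, the `confidence` callable, and the 0.9 threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: partition images into easy/hard pools by scorer confidence.
# The threshold value and all names here are assumptions for illustration only.

def split_by_difficulty(images, confidence, threshold=0.9):
    """Split images into easy (high-confidence) and hard (low-confidence) sets."""
    easy, hard = [], []
    for img in images:
        # The scorer's output confidence serves as the difficulty measure.
        (easy if confidence(img) >= threshold else hard).append(img)
    return easy, hard

# Toy usage: dict lookups stand in for a trained classifier's confidence scores.
scores = {"a": 0.97, "b": 0.55, "c": 0.92, "d": 0.40}
easy, hard = split_by_difficulty(list(scores), scores.get)
print(easy, hard)  # high-confidence (easy) vs. low-confidence (hard) images
```

The easy pool would then supply source images for the translator, which learns to map them toward the hard-to-classify distribution.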

Citation:

Jerry Wei, Arief Suriawinata, Xiaoying Liu, Bing Ren, Mustafa Nasir-Moin, Naofumi Tomita, Jason Wei, Saeed Hassanpour, “Difficulty Translation in Histopathology Images”, International Conference on Artificial Intelligence in Medicine (AIME), 12299:238-248, 2020.
