AI in Action: New Machine Learning Technique May Enhance Computer-aided Diagnosis of Melanoma

Tuesday, November 28, 2017 | Skin Cancer, Research and Publications, Melanoma


Researchers at Florida Atlantic University’s College of Engineering and Computer Science have developed a technique using machine learning – a sub-field of artificial intelligence (AI) – that will enhance computer-aided diagnosis (CADx) of melanoma.

Thanks to the algorithm they created, which can be used in mobile apps, they were able to determine the “sweet spot” in classifying images of skin lesions.

This new finding, published in the Journal of Digital Imaging, will ultimately help clinicians more reliably identify and diagnose melanoma skin lesions, distinguishing them from other types of skin lesions. The research was conducted in the NSF Industry/University Cooperative Research Center for Advanced Knowledge Enablement (CAKE) at FAU and was funded by the Center's industry members.

Over the years, dermatologists have developed different heuristic classification methods to diagnose melanoma, but with limited success (65 to 80 percent accuracy). As a result, computer scientists and doctors have teamed up to develop CADx tools capable of aiding physicians in classifying skin lesions, which could potentially save numerous lives each year.

“Contemporary CADx systems are powered by deep learning architectures, which basically consist of a method used to train computers to perform an intelligent task. We feed them massive amounts of data so that they can learn to extract meaning from the data and, eventually, demonstrate performance comparable to human experts – dermatologists in this case,” says Oge Marques, Ph.D., lead author of the study and a professor in FAU’s Department of Computer and Electrical Engineering and Computer Science, in a news release. “We are not trying to replace physicians or other medical professionals with artificial intelligence. We are trying to help them solve a problem faster and with greater accuracy – in other words, enabling augmented intelligence.”

Images of skin lesions often contain more than just the lesion itself: background noise, hair, scars, and other artifacts can potentially confuse a CADx system. To prevent the classifier from incorrectly associating these irrelevant artifacts with melanoma, the images are segmented into two parts, separating the lesion from the surrounding skin, in the hope that the segmented lesion can be more easily analyzed and classified.
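As a rough illustration of this masking step – a minimal NumPy sketch, assuming a binary segmentation mask is already available (the function name and toy data below are hypothetical, not the authors' code):

```python
import numpy as np

def apply_lesion_mask(image, mask):
    """Zero out background pixels so hair, scars, and other
    artifacts outside the lesion cannot influence the classifier.

    image: (H, W, 3) uint8 RGB array
    mask:  (H, W) boolean array, True inside the lesion
    """
    segmented = image.copy()
    segmented[~mask] = 0  # black out everything outside the lesion
    return segmented

# Toy example: a 4x4 image with a 2x2 "lesion" in the center
image = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = apply_lesion_mask(image, mask)
print(out[0, 0], out[1, 1])  # background zeroed, lesion preserved
```

In a full pipeline the mask itself would come from a separate segmentation model; here it is hard-coded only to show the masking operation.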

“Previous studies have produced conflicting results: some research suggests that segmentation improves classification while other research suggests that segmentation is detrimental, due to a loss of contextual information around the lesion area,” says Marques. “How much we segment an image can either help or impede skin lesion classification.”

With that in mind, Marques and his collaborators Borko Furht, Ph.D., a professor in FAU’s Department of Computer and Electrical Engineering and Computer Science and director of the NSF-sponsored CAKE at FAU; Jack Burdick, a second-year master’s student at FAU; and Janet Weinthal, an undergraduate student at FAU, set out to answer the question: “How much segmentation is too much?”

To do this, they compared the effects of no segmentation, full segmentation, and partial segmentation on classification and demonstrated that partial segmentation led to the best results. They then proceeded to determine how much segmentation would be “just right.” To do that, they used three degrees of partial segmentation, investigating how a variable-sized non-lesion border around the segmented skin lesion affects classification results. They performed comparisons in a systematic and reproducible manner to demonstrate empirically that a certain amount of segmentation border around the lesion could improve classification performance.
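The idea of a variable-sized non-lesion border can be mimicked by growing the lesion mask outward by a chosen number of pixels before masking. A minimal NumPy sketch – function names, the 4-connected dilation, and the border widths swept below are illustrative assumptions, not the authors' published code:

```python
import numpy as np

def dilate(mask, iterations):
    """Grow a boolean mask outward by `iterations` pixels
    (4-connected), pulling in a ring of surrounding skin."""
    m = mask.copy()
    for _ in range(iterations):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]   # neighbor above
        grown[:-1, :] |= m[1:, :]   # neighbor below
        grown[:, 1:] |= m[:, :-1]   # neighbor left
        grown[:, :-1] |= m[:, 1:]   # neighbor right
        m = grown
    return m

def partial_segmentation(image, lesion_mask, border_px):
    """Keep the lesion plus a border_px-wide ring of background
    skin; zero out everything beyond it. border_px = 0 gives
    full segmentation (lesion only)."""
    keep = dilate(lesion_mask, border_px)
    out = image.copy()
    out[~keep] = 0
    return out

# Sweep candidate border widths, mimicking the paper's comparison
image = np.full((8, 8, 3), 150, dtype=np.uint8)
lesion = np.zeros((8, 8), dtype=bool)
lesion[3:5, 3:5] = True
for border in (0, 1, 2):
    kept = dilate(lesion, border).sum()
    print(f"border={border}px keeps {kept} pixels")
```

In the actual study each candidate border size would feed a separate training run of the classifier, with the best-performing width identified empirically.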

Their findings suggest that extending the border beyond the lesion to include a limited amount of background pixels improves their classifier’s ability to distinguish melanoma from a benign skin lesion.

“Our experimental results suggest that there appears to be a ‘sweet spot’ in which the amount of surrounding skin included is neither too great nor too small and provides a ‘just right’ amount of context,” says Marques.

Their method showed an improvement across all relevant measures of performance for a skin lesion classifier.

This research is supported by the NSF CAKE at FAU working in collaboration with industry partners as well as international collaborators from Universitat Politècnica de Catalunya (UPC), Barcelona.

Photo Credit: Florida Atlantic University

Photo Caption: Confusion matrix, with true (and false) positives and negatives (as well as usual artifacts, e.g., hair).
