Perspective - Imaging in Medicine (2022) Volume 14, Issue 11

Outdoor Activities and Sunburn for Skin Cancer Classification

Chao Azim*

Department of Pharmaceutics and Pharmaceutical Technology, University of Utah Health Sciences Center, USA

*Corresponding Author:
Chao Azim
Department of Pharmaceutics and Pharmaceutical Technology, University of Utah Health Sciences Center, USA

Received: 01-Nov-2022, Manuscript No. FMIM-22-82461; Editor assigned: 05-Nov-2022, Pre-QC No. FMIM-22-82461 (PQ); Reviewed: 19-Nov-2022, QC No. FMIM-22-82461; Revised: 24-Nov-2022, Manuscript No. FMIM-22-82461 (R); Published: 30-Nov-2022, DOI: 10.37532/1755-5191.2022.14(11).01-03


Background: Early detection and treatment of skin cancer, whose prevalence is rising annually around the world and which constitutes a serious danger to human health, have made significant strides in recent years thanks to the use of artificial intelligence to recognise dermoscopic pictures [1]. Deep convolutional neural network backbones have been used to make advances in the categorization of skin cancer images [2]. However, this method recovers only the attributes of the small objects in an image and is unable to identify its key components [3]. Objectives: The researchers in the publication therefore turn to vision transformers, which have proven to perform well in conventional classification tasks [4]. The goal of self-attention is to enhance the value of crucial qualities and to suppress distracting ones [5].
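The self-attention mechanism described above can be sketched in a few lines. The snippet below is a minimal illustration of scaled dot-product self-attention, not the paper's actual model: identity query/key/value projections and toy 2-D tokens are simplifying assumptions made here for brevity.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(tokens, d_k):
    """Scaled dot-product self-attention over a list of feature vectors.

    Each token attends to every token (including itself); the weights
    softmax(q . k / sqrt(d_k)) up-weight informative features and suppress
    distracting ones. Identity projections are used here for simplicity.
    """
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in tokens]
        weights = softmax(scores)
        # Output is the attention-weighted sum of the value vectors
        # (values == tokens in this simplified sketch).
        out.append([sum(w * v[i] for w, v in zip(weights, tokens))
                    for i in range(d_k)])
    return out

# Three toy 2-D tokens: each output stays in the convex hull of the inputs.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = self_attention(tokens, d_k=2)
```

Because the attention weights are positive and sum to one, each output vector is a convex combination of the input tokens, which is what lets the network re-weight features without discarding information.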



The suggested model's 94.3% accuracy on HAM10000 and 94.1% accuracy on the authors' own dataset attest to SkinTrans' effectiveness [6]. The transformer network has produced excellent results not only in natural language processing but also in vision, providing a strong basis for classifying skin cancer from multimodal data [7]. The author of this report is confident that it will benefit dermatologists, clinical researchers, computer scientists, and academics working in related fields, as well as offer patients greater convenience [8]. Skin cancer is a prevalent cancer whose incidence is rising globally every year and which poses a serious danger to human health [9]. The most prevalent skin cancers are malignant melanoma (MM), squamous cell carcinoma (SCC), and basal cell carcinoma (BCC). Despite being comparatively uncommon, MM has a greater level of malignancy [10]. Malignant melanoma, liver cancer, and pancreatic cancer are known as the "three kings of cancer". The World Health Organization has estimated that there are 2 to 3 million new cases of skin cancer each year [4]. An estimated 4.2% of the worldwide population will be affected by MM; although its cause is unknown, metastasis frequently happens early on, and the patients' 5-year survival rate is poor. On average, 65%-75% of all fatalities brought on by skin malignant tumours each year are attributable to MM. Once BCC, SCC, and other skin cancers metastasise, their prognosis is often poor.


In recent years, dermatoscopy has emerged as a new non-invasive diagnostic technique for skin diseases. In clinical applications, however, it has shown significant subjectivity and low repeatability, depending heavily on the clinical expertise of the treating physician. The development of artificial intelligence (AI) offers a fresh approach to resolving these issues: diagnostic outcomes are more precise and impartial when AI is used to examine skin imaging data. Both domestically and abroad, artificial intelligence has been used to recognise dermoscopic pictures. Early skin cancer classification extracted features from skin cancer images using hand-crafted methods based on shape, texture, geometry, and other factors. Deep learning has since advanced artificial intelligence technology significantly. While a convolutional neural network's deep backbone can extract the attributes of many tiny objects in an image, it cannot identify the picture's truly crucial components. The top model in natural language processing (NLP) is the transformer architecture proposed by Vaswani. Following the success of the self-attention-based transformer model in NLP, Dosovitskiy created the vision transformer architecture for image classification applications. The general training procedure of these models treats each embedded patch of the input picture as a word in natural language processing, and the models learn the links between these embedded patches using self-attention modules. Vision transformers have recently demonstrated potent performance in conventional categorization, yet such networks are seldom used to classify skin cancer.
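The patch-as-word idea above can be sketched as follows. The image size, patch size, and random embedding weights below are illustrative assumptions for a toy single-channel image, not the configuration used in the paper.

```python
import random

def image_to_patches(image, patch):
    """Split an H x W single-channel image (nested lists) into
    non-overlapping patch x patch blocks, each flattened to a vector.
    The resulting sequence of vectors plays the role of a 'sentence'
    of visual words fed to the transformer."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            block = [image[r + i][c + j]
                     for i in range(patch) for j in range(patch)]
            patches.append(block)
    return patches

def linear_embed(patches, weights):
    """Project each flattened patch with one shared linear map, the
    'linear embedding' step of a vision transformer."""
    return [[sum(p[i] * weights[i][j] for i in range(len(p)))
             for j in range(len(weights[0]))]
            for p in patches]

random.seed(0)
img = [[random.random() for _ in range(8)] for _ in range(8)]  # toy 8x8 image
patches = image_to_patches(img, patch=4)   # 4 patches of 16 pixels each
W = [[random.random() for _ in range(3)] for _ in range(16)]   # 16 -> 3 embed
embedded = linear_embed(patches, W)        # sequence of 4 'word' embeddings
```

In a real vision transformer the embedded sequence would also receive a class token and positional encodings before entering the self-attention layers.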


The primary dataset type for categorization is dermoscopic; additionally, patient metadata has lately been used as an input for identifying skin cancer. Numerous deep learning models have been developed and are effective in the categorization of skin cancer. On the basis of a deep learning methodology, Mehak Arshad et al. presented a deep convolutional neural network model whose testing accuracy was demonstrated on the HAM10000 dataset. Another proposed technique involved data augmentation, feature extraction using deep learning models, feature fusion, feature selection, and classification, and obtained 91.7% testing accuracy when applied to the HAM10000 dataset. Other work suggested using a convolutional neural network in conjunction with soft attention. Transformers have had a lot of success recently, both in computer vision and in natural language processing. In order to test the performance of the transformer network for image classification, Alexey Dosovitskiy divided a picture into patches and fed the sequence of linear embeddings of these patches as input. The transformer's self-attention increases the value of crucial characteristics and reduces the impact of disruptive ones. The transformer has shown remarkable success in classifying medical images as well: Behnaz Gheflati used ViT to categorise breast ultrasound pictures, with results as effective as or more effective than CNNs. The study in this paper may therefore be used to examine the potential of transformer networks in the categorization of skin cancer.
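As a minimal illustration of how the testing-accuracy figures quoted above are computed, the sketch below scores top-1 accuracy for a 7-class problem (HAM10000 has seven diagnostic categories); the prediction scores and labels here are made-up toy data, not results from any model.

```python
def top1_accuracy(pred_scores, labels):
    """Fraction of samples whose highest-scoring class matches the label."""
    correct = sum(1 for scores, y in zip(pred_scores, labels)
                  if max(range(len(scores)), key=scores.__getitem__) == y)
    return correct / len(labels)

# Toy class scores for 4 samples over 7 classes (HAM10000-style label set).
scores = [
    [0.1, 0.7, 0.05, 0.05, 0.05, 0.03, 0.02],  # predicted class 1
    [0.6, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05],   # predicted class 0
    [0.1, 0.1, 0.1, 0.5, 0.1, 0.05, 0.05],     # predicted class 3
    [0.2, 0.2, 0.2, 0.1, 0.1, 0.1, 0.1],       # predicted class 0
]
labels = [1, 0, 3, 6]
acc = top1_accuracy(scores, labels)  # 3 of 4 correct -> 0.75
```

Reported accuracies such as 94.3% on HAM10000 are this same ratio computed over the full held-out test split.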


  1. Ferlay J, Colombet M, Soerjomataram I et al. Cancer statistics for the year 2020: An overview. Int J Cancer. (2021).
  2. Apalla Z, Nashan D, Weller RB et al. Skin cancer: Epidemiology, disease burden, pathophysiology, diagnosis and therapeutic approaches. Dermatol Ther. 7, 5-19 (2017).
  3. Davis LE, Shalin SC, Tackett AJ et al. Current state of melanoma diagnosis and treatment. Cancer Biol Ther. 20, 1366-1379 (2019).
  4. Malvehy J, Pellacani G. Dermoscopy, confocal microscopy and other non-invasive tools for the diagnosis of non-melanoma skin cancers and other skin conditions. Acta Derm Venereol. 97, 22-30 (2017).
  5. Jutzi TB, Krieghoff-Henning EI, Holland-Letz T et al. Artificial Intelligence in Skin Cancer Diagnostics: The Patients' Perspective. Front Med. 7, 233 (2020).
  6. Sengupta S, Mittal N, Modi M et al. Improved skin lesions detection using color space and artificial intelligence techniques. J Dermatolog Treat. 31, 511-518 (2020).
  7. Haenssle HA, Fink C, Schneiderbauer R et al. Man against machine: Diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol. 29, 1836-1842 (2018).
  8. Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. Artif Intell Healthc. 25-60 (2020).
  9. Hernández-Orallo J, Martínez-Plumed F, Schmid U et al. Computer models solving intelligence test problems: Progress and implications. Artif Intell. 230, 74-107 (2016).
  10. Chan S, Reddy V, Myers B et al. Machine learning in dermatology: Current applications, opportunities, and limitations. Dermatol Ther. 10, 365-386 (2020).
