The novel coronavirus (COVID-19), which emerged in Wuhan, China, in 2019, is an infectious virus that causes respiratory tract infection. The virus spread worldwide in a short time and turned into a pandemic. Early diagnosis of such infectious diseases and early initiation of the necessary treatment are very important. The use of X-ray and Computed Tomography (CT) medical radiological imaging together with deep learning and machine learning techniques helps in the accurate and rapid detection of this disease. In this study, two different datasets were used: X-ray images labeled normal, COVID-19, and pneumonia, and CT images labeled normal and COVID-19. The InceptionResNetV2, VGG-16, and DenseNet121 deep learning architectures and the kNN and SVM classifiers were employed. In this context, three different experiments were carried out. First, the classification performance of each network was examined. Then, the feature vectors produced by the networks were processed separately with the classifiers. Finally, the feature vectors produced by the networks were combined and classification was carried out on the fused features. As a result, the highest result for COVID-19 and normal images in the chest CT dataset was obtained with the combined features and the kNN classifier, with 98.9% accuracy.
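The fusion experiment described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: random vectors stand in for the deep features that would actually come from the penultimate layers of InceptionResNetV2, VGG-16, and DenseNet121, the dimensions and class shift are assumed for illustration, and a plain NumPy kNN replaces a library classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 100, 20
n = n_train + n_test

def fake_features(dim, shift):
    # Gaussian noise as stand-in deep features; the first half of the
    # "images" (one class) is shifted away from the second half.
    X = rng.normal(0.0, 1.0, (n, dim))
    X[: n // 2] += shift
    return X

# Hypothetical dimensions roughly matching the penultimate layers
# of the three networks (assumption, for illustration only).
f_incep = fake_features(1536, 1.5)
f_vgg   = fake_features(512, 1.5)
f_dense = fake_features(1024, 1.5)
y = np.array([0] * (n // 2) + [1] * (n - n // 2))

# Feature fusion: concatenate the three per-image vectors.
X = np.hstack([f_incep, f_vgg, f_dense])

# Shuffle, then split into train/test sets.
idx = rng.permutation(n)
X, y = X[idx], y[idx]
X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train:], y[n_train:]

def knn_predict(X_tr, y_tr, X_te, k=5):
    # Euclidean distance from each test image to every training image.
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]                  # k nearest neighbours
    return (y_tr[nn].mean(axis=1) > 0.5).astype(int)   # majority vote

pred = knn_predict(X_tr, y_tr, X_te)
acc = (pred == y_te).mean()
print(f"fused-feature kNN accuracy: {acc:.3f}")
```

On real data, the per-network experiments would reuse `knn_predict` on each feature matrix alone; fusion simply widens the input before classification.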
Primary Language | Turkish
---|---
Subjects | Engineering
Journal Section | Articles (Research)
Early Pub Date | October 22, 2023
Publication Date | November 20, 2023
Published in Issue | Year 2023, Volume: 16, Issue: 2
Article Acceptance
Articles are submitted online after user registration/login.
The acceptance process for articles submitted to the journal consists of the following stages:
1. Each submitted article is first sent to at least two referees.
2. Referee assignments are made by the journal editors. The journal's referee pool contains approximately 200 referees, classified according to their areas of interest. Each referee is sent articles on subjects within their expertise, and referee selection is made in a way that avoids any conflict of interest.
3. The authors' names are hidden in the articles sent to the referees.
4. Referees are given instructions on how to evaluate an article and are asked to fill in the evaluation form shown below.
5. Articles that receive positive opinions from both referees undergo a similarity check by the editors. The similarity rate is expected to be below 25%.
6. A paper that has passed all stages is reviewed by the editor for language and presentation, and necessary corrections and improvements are made. If necessary, the authors are notified.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.