



REPUBLIC OF UZBEKISTAN MINISTRY OF DIGITAL TECHNOLOGIES
TASHKENT UNIVERSITY OF INFORMATION TECHNOLOGIES NAMED AFTER MUHAMMAD AL-KHWARIZMI

Faculty of Computer Engineering

Department of Artificial Intelligence

ASSIGNMENT 4
for the course
"Artificial Intelligence Technologies and Tools in Robotics"

Topic: Classical classification algorithms in machine learning and their programming

Completed by: Abdusattorov Anvar, student of group 221-21
Checked by: Umidjon Xasanov
Grade: _____

TASHKENT 2023


Tasks:

  1. Form a training sample (dataset) for the problem in your variant.

  2. Take any two features of the created dataset and plot it using the matplotlib library.

  3. Split the dataset into 85% for training the model and 15% for testing.

  4. Build the classification models:

     a) KNN (k-Nearest Neighbors)

     b) SVM (Support Vector Machine)

     c) DT (Decision Tree)

     d) RF (Random Forest)

  5. Compute each model's accuracy on the training set.

  6. Test each model on the test set and compute its accuracy on the test set.

  7. For each model, compute the confusion matrix on the test set and describe it.

  8. Carry out a comparative analysis of the models considered.



In this practical assignment I used a dataset of farm livestock being raised.

Information about the livestock (the height and weight measurements appear in the program code below):

In writing the program I used the height (boyi) and weight (vazn) features.
Program code and its output:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Features: height (m) and weight (kg) of each animal
boyi = [1.45, 1.47, 1.5, 1.51, 1.53, 1.57, 1.58, 1.59, 1.6, 1.62, 1.63, 1.64, 1.65, 1.66, 1.7]
vazn = [400, 420, 430, 435, 440, 442, 450, 453, 456, 459, 461, 463, 466, 467, 470]

# Classification needs a discrete target, so the raw weights cannot be used
# as class labels directly. Here each animal is assigned to one of two
# classes by a weight threshold (the 450 kg cut-off is an assumption):
# 0 = light (< 450 kg), 1 = heavy (>= 450 kg).
sinf = np.array([0 if v < 450 else 1 for v in vazn])

# Feature matrix built from the two chosen features
X = np.column_stack((boyi, vazn))

# Visualize the dataset
plt.figure(figsize=(8, 6))
plt.scatter(boyi, vazn, c=sinf, cmap=plt.cm.coolwarm, s=50, edgecolors='k')
plt.xlabel('Boyi (m)')
plt.ylabel('Vazn (kg)')
plt.title('Dataset')
plt.show()

# Split the dataset: 85% training, 15% testing
X_train, X_test, y_train, y_test = train_test_split(
    X, sinf, test_size=0.15, random_state=42)

# Build the classification models

# a) KNN (k-Nearest Neighbors)
knn_model = KNeighborsClassifier(n_neighbors=5)
knn_model.fit(X_train, y_train)

# b) SVM (Support Vector Machine)
svm_model = SVC(kernel='linear')
svm_model.fit(X_train, y_train)

# c) Decision Tree
dt_model = DecisionTreeClassifier(random_state=42)
dt_model.fit(X_train, y_train)

# d) Random Forest
rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
rf_model.fit(X_train, y_train)

# Accuracy of each model on the training set
accuracy_knn_train = accuracy_score(y_train, knn_model.predict(X_train))
accuracy_svm_train = accuracy_score(y_train, svm_model.predict(X_train))
accuracy_dt_train = accuracy_score(y_train, dt_model.predict(X_train))
accuracy_rf_train = accuracy_score(y_train, rf_model.predict(X_train))

print(f"KNN Training Accuracy: {accuracy_knn_train}")
print(f"SVM Training Accuracy: {accuracy_svm_train}")
print(f"Decision Tree Training Accuracy: {accuracy_dt_train}")
print(f"Random Forest Training Accuracy: {accuracy_rf_train}")

# Test each model on the test set and compute its test accuracy
accuracy_knn_test = accuracy_score(y_test, knn_model.predict(X_test))
accuracy_svm_test = accuracy_score(y_test, svm_model.predict(X_test))
accuracy_dt_test = accuracy_score(y_test, dt_model.predict(X_test))
accuracy_rf_test = accuracy_score(y_test, rf_model.predict(X_test))

print(f"KNN Testing Accuracy: {accuracy_knn_test}")
print(f"SVM Testing Accuracy: {accuracy_svm_test}")
print(f"Decision Tree Testing Accuracy: {accuracy_dt_test}")
print(f"Random Forest Testing Accuracy: {accuracy_rf_test}")

# Confusion matrix of each model on the test set
cm_knn = confusion_matrix(y_test, knn_model.predict(X_test))
cm_svm = confusion_matrix(y_test, svm_model.predict(X_test))
cm_dt = confusion_matrix(y_test, dt_model.predict(X_test))
cm_rf = confusion_matrix(y_test, rf_model.predict(X_test))

print("Confusion Matrix for KNN:")
print(cm_knn)
print("\nConfusion Matrix for SVM:")
print(cm_svm)
print("\nConfusion Matrix for Decision Tree:")
print(cm_dt)
print("\nConfusion Matrix for Random Forest:")
print(cm_rf)
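The comparative analysis required by task 8 can be automated by looping over the four models and printing their train and test accuracies side by side. A minimal self-contained sketch, assuming the same data and a threshold-based class label (0 = light, 1 = heavy; the 450 kg cut-off is an assumption, not part of the original dataset):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

boyi = [1.45, 1.47, 1.5, 1.51, 1.53, 1.57, 1.58, 1.59, 1.6, 1.62, 1.63, 1.64, 1.65, 1.66, 1.7]
vazn = [400, 420, 430, 435, 440, 442, 450, 453, 456, 459, 461, 463, 466, 467, 470]
sinf = np.array([0 if v < 450 else 1 for v in vazn])  # assumed 450 kg threshold
X = np.column_stack((boyi, vazn))

X_train, X_test, y_train, y_test = train_test_split(
    X, sinf, test_size=0.15, random_state=42)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel='linear'),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Fit every model and print its train/test accuracy in one table
for name, model in models.items():
    model.fit(X_train, y_train)
    tr = accuracy_score(y_train, model.predict(X_train))
    te = accuracy_score(y_test, model.predict(X_test))
    print(f"{name:15s} train={tr:.2f}  test={te:.2f}")
```

Comparing train against test accuracy in one table also makes overfitting visible: a model whose training accuracy is much higher than its test accuracy has memorized the training sample.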



Conclusion:
Classical classification algorithms are among the longest-studied and most widely used methods in machine learning. They learn, from the features of labelled examples, a mapping from an object to a discrete class, and they form the backbone of practical data analysis. A brief overview of the classical classification algorithms:

Logistic regression:

This algorithm performs classification for two-class problems (for example, "correct" vs. "incorrect").

It learns the relationship between the features and the class by modelling the probability of class membership, and assigns each object to one of the two classes.
Logistic regression is typically trained with gradient-based optimization and related methods.
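Although logistic regression is not among the four models built in this assignment, it follows the same scikit-learn fit/predict pattern. A minimal sketch with made-up one-feature data (the values below are illustrative, not from the cattle dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative binary-classification data: one feature, two classes
X = np.array([[1.0], [1.5], [2.0], [2.5], [3.0], [3.5], [4.0], [4.5]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression()
model.fit(X, y)

# predict gives the class; predict_proba gives P(class 0) and P(class 1)
print(model.predict([[1.2], [4.2]]))
print(model.predict_proba([[2.75]]))
```

Unlike the other classifiers here, logistic regression outputs calibrated probabilities, which is useful when the cost of a wrong decision matters.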
k-Nearest Neighbors (KNN):

This algorithm classifies a new object by the majority class among the k training objects nearest to it.

Its main parameter is k, the number of neighbours considered.
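The idea that a new object takes the majority class of its k nearest neighbours can be shown without scikit-learn. A minimal from-scratch sketch with illustrative 2-D points:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training points."""
    distances = np.linalg.norm(X_train - x_new, axis=1)  # Euclidean distances
    nearest = np.argsort(distances)[:k]                  # indices of k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two illustrative clusters: class 0 near (1, 1), class 1 near (5, 5)
X_train = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.0], [5.2, 4.8], [4.9, 5.1]])
y_train = np.array([0, 0, 1, 1, 1])

print(knn_predict(X_train, y_train, np.array([1.1, 1.0]), k=3))  # → 0
```

Note that KNN has no training phase at all: the "model" is simply the stored training set, and all the work happens at prediction time.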
Support Vector Machines (SVM):

This algorithm separates the classes with a decision boundary chosen to maximize the margin between them; the training points lying closest to the boundary are called support vectors.

SVM is popular for producing simple, well-defined decision boundaries between classes.
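For a linear kernel, the boundary an SVM learns can be inspected directly through its coefficients (w·x + b = 0), and the support vectors can be listed. A minimal sketch with illustrative 2-D points:

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable groups of illustrative points
X = np.array([[1, 1], [2, 1], [1, 2], [6, 6], [7, 6], [6, 7]])
y = np.array([0, 0, 0, 1, 1, 1])

model = SVC(kernel='linear')
model.fit(X, y)

# The boundary is w·x + b = 0; support vectors lie closest to it
print("w =", model.coef_[0], "b =", model.intercept_[0])
print("support vectors:\n", model.support_vectors_)
print(model.predict([[2, 2], [6.5, 6.5]]))
```

Only the support vectors determine the boundary; removing any other training point would leave the fitted model unchanged.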
Decision Trees (DT):

This algorithm makes decisions by repeatedly splitting the data on individual features (the "questions" at each node).

A decision tree partitions the data into successively smaller groups until each group belongs (mostly) to a single class.
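The if/else rules a tree learns can be printed as text with scikit-learn's export_text, which makes the "splitting on features" idea concrete. A minimal sketch with illustrative one-feature data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative data: class 0 for small x, class 1 for large x
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])

tree = DecisionTreeClassifier(random_state=42)
tree.fit(X, y)

# Human-readable if/else rules of the fitted tree
print(export_text(tree, feature_names=["x"]))
```

This readability is the main practical advantage of decision trees: the fitted model can be explained to a non-specialist as a sequence of threshold questions.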
Random Forest (RF):

This algorithm trains many decision trees on random subsets of the data and features, and combines their predictions by voting.

Random Forest is popular because the ensemble is usually more accurate and more robust to overfitting than any single tree.
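That a random forest is a committee of trees can be seen by comparing the individual trees' predictions with the ensemble's vote. A minimal sketch with illustrative 2-D points:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative data: class 0 near the origin, class 1 near (8, 8)
X = np.array([[1, 1], [2, 1], [1, 2], [8, 8], [9, 8], [8, 9]])
y = np.array([0, 0, 0, 1, 1, 1])

forest = RandomForestClassifier(n_estimators=10, random_state=42)
forest.fit(X, y)

# Each tree in the ensemble votes; the forest returns the majority class
votes = [int(tree.predict([[8.5, 8.5]])[0]) for tree in forest.estimators_]
print("individual votes:", votes)
print("forest prediction:", forest.predict([[8.5, 8.5]])[0])
```

Because each tree sees a different bootstrap sample of the data, the trees disagree on hard points, and averaging their votes smooths out individual trees' mistakes.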
Each of these algorithms has its own strengths at every stage of a project. In terms of feature selection, decision-boundary tuning, and model tuning, each algorithm has situations where it is the right choice. Understanding the strengths and limitations of each algorithm is a key part of understanding the practical side of data analysis and classification.