{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"run_control": {
"marked": true
}
},
"source": [
"# 逻辑回归实验\n",
"\n",
"> 此次练习中,我们使用[Human Activity Recognition Using Smartphones](https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones)数据集。它通过对参加测试者的智能手机上安装一个传感器而采集了参加测试者每天的日常活动ADL。目标是将日常活动分成六类walking, walking upstairs, walking downstairs, sitting, standing, and laying。\n",
">\n",
"> 该数据集也可以在Kaggle网站上获得https://www.kaggle.com/uciml/human-activity-recognition-with-smartphones/downloads/human-activity-recognition-with-smartphones.zip \n",
"\n",
"把训练文件重新命名为`e2.3_Human_Activity_Recognition_Using_Smartphones_Data.csv`"
]
},
{
"cell_type": "markdown",
"metadata": {
"run_control": {
"marked": true
}
},
"source": [
"## 第一步:导入数据\n",
"\n",
"* 查看数据类型---因为有太多的列所以最好使用value_counts\n",
"* 判断其中的小数数值是否需要尺度缩放\n",
"* 检查数据中各活动类型的划分\n",
"* 把活动类型标签编码成一个整数"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"run_control": {
"marked": true
}
},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"\n",
"filepath = 'e2.3_Human_Activity_Recognition_Using_Smartphones_Data.csv'\n",
"data = pd.read_csv(filepath)"
]
},
{
"cell_type": "markdown",
"metadata": {
"run_control": {
"marked": true
}
},
"source": [
"所有列的数据类型都是浮点数,除了活动标签列。"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {
"run_control": {
"marked": true
}
},
"outputs": [
{
"data": {
"text/plain": [
"float64 561\n",
"int64 1\n",
"object 1\n",
"dtype: int64"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data.dtypes.value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"run_control": {
"marked": true
}
},
"outputs": [
{
"data": {
"text/plain": [
"angle(X,gravityMean) float64\n",
"angle(Y,gravityMean) float64\n",
"angle(Z,gravityMean) float64\n",
"subject int64\n",
"Activity object\n",
"dtype: object"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data.dtypes.tail()"
]
},
{
"cell_type": "markdown",
"metadata": {
"run_control": {
"marked": true
}
},
"source": [
"数据都已经全部被缩放到-1到1之间了。"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"run_control": {
"marked": true
}
},
"outputs": [
{
"data": {
"text/plain": [
"-1.000000 466\n",
"-0.995377 2\n",
"-0.999996 2\n",
"-0.999893 2\n",
"-1.000000 2\n",
" ... \n",
"-0.999983 1\n",
"-0.943439 1\n",
"-0.998014 1\n",
"-0.999915 1\n",
" 1.000000 1\n",
"Length: 93, dtype: int64"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data.iloc[:, :-1].min().value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {
"run_control": {
"marked": true
}
},
"outputs": [
{
"data": {
"text/plain": [
"1.000000 452\n",
"0.994731 2\n",
"0.805064 1\n",
"0.908361 1\n",
"0.891736 1\n",
" ... \n",
"0.990935 1\n",
"0.979031 1\n",
"0.928416 1\n",
"0.848031 1\n",
"30.000000 1\n",
"Length: 110, dtype: int64"
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data.iloc[:, :-1].max().value_counts()"
]
},
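{
"cell_type": "markdown",
"metadata": {},
"source": [
"The single maximum of 30 in the output above comes from the `subject` id column rather than from a sensor feature; a quick check is sketched below (optional, not part of the original assignment)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Which column has the largest maximum? It should be the subject id column,\n",
"# since all sensor features are scaled to [-1, 1].\n",
"col_max = data.iloc[:, :-1].max()\n",
"print(col_max.idxmax(), col_max.max())"
]
},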
{
"cell_type": "markdown",
"metadata": {},
"source": [
"检查数据中各活动类型的划分---已经比较平衡了。"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LAYING 1407\n",
"STANDING 1374\n",
"SITTING 1286\n",
"WALKING 1226\n",
"WALKING_UPSTAIRS 1073\n",
"WALKING_DOWNSTAIRS 986\n",
"Name: Activity, dtype: int64"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data.Activity.value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Scikit learn的分类器不接受一个稀疏矩阵作为预测列。所以可以使用`LabelEncoder`将活动标签编码为整数。"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"5593 3\n",
"4163 4\n",
"4519 5\n",
"640 4\n",
"6463 2\n",
"Name: Activity, dtype: int64"
]
},
"execution_count": 43,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from sklearn.preprocessing import LabelEncoder\n",
"\n",
"le = LabelEncoder()\n",
"data['Activity'] = le.fit_transform(data.Activity)\n",
"data['Activity'].sample(5)"
]
},
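{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check (an optional sketch, not part of the original assignment): `LabelEncoder` stores the original class names in `classes_`, and `inverse_transform` maps the integer codes back to the activity strings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Mapping between the integer codes and the original activity names.\n",
"print(dict(enumerate(le.classes_)))\n",
"# inverse_transform recovers the original strings from the encoded column.\n",
"print(le.inverse_transform(data['Activity'].iloc[:3]))"
]
},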
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 第二步:划分训练数据和测试数据\n",
"\n",
"* 可以考虑使用Scikit-learn中的`StratifiedShuffleSplit`,以保证划分后的数据集中每个类别个案的比例与整个数据集相同。\n"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [],
"source": [
"feature_cols = data.columns[:-1]\n",
"\n",
"from sklearn.model_selection import StratifiedShuffleSplit\n",
"\n",
"# Get the split indexes\n",
"strat_shuf_split = StratifiedShuffleSplit(n_splits=1,test_size=0.3, random_state=42)\n",
"\n",
"train_idx, test_idx = next(strat_shuf_split.split(data[feature_cols], data.Activity))\n",
"\n",
"# Create the dataframes\n",
"X_train = data.loc[train_idx, feature_cols]\n",
"y_train = data.loc[train_idx, 'Activity']\n",
"\n",
"X_test = data.loc[test_idx, feature_cols]\n",
"y_test = data.loc[test_idx, 'Activity']"
]
},
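{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `StratifiedShuffleSplit.split` yields *positional* indices. The `.loc`-based selection above works because this DataFrame has a default `RangeIndex`; a position-based alternative that does not depend on the index labels is sketched below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Position-based selection (hypothetical alternative names with an _alt suffix).\n",
"X_train_alt = data[feature_cols].iloc[train_idx]\n",
"y_train_alt = data['Activity'].iloc[train_idx]\n",
"\n",
"# With a default RangeIndex this matches the .loc-based split above.\n",
"print(X_train_alt.equals(X_train), y_train_alt.equals(y_train))"
]
},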
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0 0.191411\n",
"2 0.186941\n",
"1 0.174893\n",
"3 0.166731\n",
"5 0.145939\n",
"4 0.134085\n",
"Name: Activity, dtype: float64"
]
},
"execution_count": 45,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"y_train.value_counts(normalize=True)"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0 0.191296\n",
"2 0.186763\n",
"1 0.174977\n",
"3 0.166818\n",
"5 0.145966\n",
"4 0.134180\n",
"Name: Activity, dtype: float64"
]
},
"execution_count": 46,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"y_test.value_counts(normalize=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 第三步:训练模型\n",
"\n",
"* 用所有特征训练一个基本的使用缺省参数的逻辑回归模型。\n",
"* 分别用L1和L2正则化来训练一个模型使用交叉验证确定超参数的值。注意正则化模型尤其是L1模型可能需要一定训练时间。"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/ir/dev/justhomework/AIandML/.venv/lib/python3.10/site-packages/sklearn/linear_model/_logistic.py:444: ConvergenceWarning: lbfgs failed to converge (status=1):\n",
"STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n",
"\n",
"Increase the number of iterations (max_iter) or scale the data as shown in:\n",
" https://scikit-learn.org/stable/modules/preprocessing.html\n",
"Please also refer to the documentation for alternative solver options:\n",
" https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n",
" n_iter_i = _check_optimize_result(\n",
"/home/ir/dev/justhomework/AIandML/.venv/lib/python3.10/site-packages/sklearn/linear_model/_logistic.py:444: ConvergenceWarning: lbfgs failed to converge (status=1):\n",
"STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n",
"\n",
"Increase the number of iterations (max_iter) or scale the data as shown in:\n",
" https://scikit-learn.org/stable/modules/preprocessing.html\n",
"Please also refer to the documentation for alternative solver options:\n",
" https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n",
" n_iter_i = _check_optimize_result(\n",
"/home/ir/dev/justhomework/AIandML/.venv/lib/python3.10/site-packages/sklearn/linear_model/_logistic.py:444: ConvergenceWarning: lbfgs failed to converge (status=1):\n",
"STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n",
"\n",
"Increase the number of iterations (max_iter) or scale the data as shown in:\n",
" https://scikit-learn.org/stable/modules/preprocessing.html\n",
"Please also refer to the documentation for alternative solver options:\n",
" https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n",
" n_iter_i = _check_optimize_result(\n",
"/home/ir/dev/justhomework/AIandML/.venv/lib/python3.10/site-packages/sklearn/linear_model/_logistic.py:444: ConvergenceWarning: lbfgs failed to converge (status=1):\n",
"STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n",
"\n",
"Increase the number of iterations (max_iter) or scale the data as shown in:\n",
" https://scikit-learn.org/stable/modules/preprocessing.html\n",
"Please also refer to the documentation for alternative solver options:\n",
" https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n",
" n_iter_i = _check_optimize_result(\n",
"/home/ir/dev/justhomework/AIandML/.venv/lib/python3.10/site-packages/sklearn/linear_model/_logistic.py:444: ConvergenceWarning: lbfgs failed to converge (status=1):\n",
"STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n",
"\n",
"Increase the number of iterations (max_iter) or scale the data as shown in:\n",
" https://scikit-learn.org/stable/modules/preprocessing.html\n",
"Please also refer to the documentation for alternative solver options:\n",
" https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n",
" n_iter_i = _check_optimize_result(\n",
"/home/ir/dev/justhomework/AIandML/.venv/lib/python3.10/site-packages/sklearn/linear_model/_logistic.py:444: ConvergenceWarning: lbfgs failed to converge (status=1):\n",
"STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n",
"\n",
"Increase the number of iterations (max_iter) or scale the data as shown in:\n",
" https://scikit-learn.org/stable/modules/preprocessing.html\n",
"Please also refer to the documentation for alternative solver options:\n",
" https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n",
" n_iter_i = _check_optimize_result(\n"
]
}
],
"source": [
"# 请在此处填写你的代码(训练一个基本的使用缺省参数的逻辑回归模型)\n",
"from sklearn.linear_model import LogisticRegressionCV\n",
"lr=LogisticRegressionCV().fit(X_train, y_train)"
]
},
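{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `ConvergenceWarning` above means the default `lbfgs` solver reached its iteration limit (`max_iter=100`) before converging. A minimal way to address it, assuming the extra training time is acceptable, is to raise `max_iter`; the sketch below (with a hypothetical variable name) is illustrative and was not run for the results in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.linear_model import LogisticRegressionCV\n",
"\n",
"# Give lbfgs more iterations so it can converge instead of stopping early.\n",
"lr_more_iter = LogisticRegressionCV(max_iter=1000)\n",
"# lr_more_iter.fit(X_train, y_train)  # uncomment to refit; training takes longer"
]
},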
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [],
"source": [
"# L1 正则化的逻辑回归\n",
"lr_l1 = LogisticRegressionCV(Cs=10, cv=4, penalty='l1', solver='liblinear').fit(X_train, y_train)"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [],
"source": [
"# 请在此处填写你的代码L2 正则化的逻辑回归)\n",
"lr_l2 = LogisticRegressionCV(Cs=10, cv=4, penalty='l2', solver='liblinear').fit(X_train, y_train)"
]
},
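{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see which inverse regularization strength the cross-validation selected, `LogisticRegressionCV` exposes the chosen values in its `C_` attribute (one value per class). A small sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Values of C chosen by cross-validation for each class.\n",
"print('L1 C_:', lr_l1.C_)\n",
"print('L2 C_:', lr_l2.C_)"
]
},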
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 第四步:\n",
"\n",
"* 输出上面训练出的三个模型中每个特征的系数;\n",
"* 并绘制成图来比较它们的差异 (每个类别一张图)"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[-0.11183502 0.01204105 0.06963152 ... -0.55016141 -0.20539043\n",
" -0.08843306]\n",
" [-0.08548885 -0.11598664 -0.13913328 ... -1.26772337 -0.25360636\n",
" -0.03148728]\n",
" [ 0.0751782 0.12985791 0.11187319 ... 1.68572526 0.3474987\n",
" -0.08177631]\n",
" [-0.01769051 -0.03426794 0.09696276 ... -0.09957735 0.09537238\n",
" 0.05246363]\n",
" [ 0.29121888 0.10542645 0.10344476 ... -0.29944873 -0.12072361\n",
" 0.08701147]\n",
" [-0.15138271 -0.09707084 -0.24277896 ... 0.5311856 0.13684932\n",
" 0.06222155]]\n",
"[[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 ... -1.36405254e+00\n",
" 0.00000000e+00 -3.64077773e-02]\n",
" [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 ... 0.00000000e+00\n",
" 0.00000000e+00 4.37527792e-02]\n",
" [ 0.00000000e+00 0.00000000e+00 0.00000000e+00 ... 0.00000000e+00\n",
" 0.00000000e+00 -5.12308232e-02]\n",
" [-7.97455970e-01 0.00000000e+00 0.00000000e+00 ... 0.00000000e+00\n",
" 0.00000000e+00 -7.32824635e-03]\n",
" [ 2.20777060e+00 0.00000000e+00 0.00000000e+00 ... 0.00000000e+00\n",
" 0.00000000e+00 1.59171722e-03]\n",
" [-1.79412930e+00 -8.70353323e+00 -3.63950699e+00 ... 3.68925247e+00\n",
" 2.54478256e+00 9.44320760e-03]]\n",
"[[-1.03725930e-01 5.50632499e-03 6.33482840e-02 ... -3.88543368e-01\n",
" -1.92209172e-01 -5.84898375e-02]\n",
" [ 5.25641157e-01 -5.49387275e-01 -1.43249433e+00 ... -3.28946016e+00\n",
" -3.20149183e-01 6.20475591e-02]\n",
" [ 8.37677925e-03 2.41687648e-01 1.26037480e-01 ... 2.54258164e+00\n",
" 3.31285345e-01 -4.26362275e-02]\n",
" [-3.09566944e-01 -1.46430241e-01 2.88616344e-01 ... -4.63003337e-01\n",
" 1.34385341e-01 1.33288296e-03]\n",
" [ 6.71720716e-01 1.43032994e-01 2.24426262e-01 ... -4.58979070e-01\n",
" -1.02600783e-01 8.06090080e-03]\n",
" [-4.62839540e-01 -7.29210412e-01 -8.47552456e-01 ... 1.85363537e+00\n",
" 7.97945531e-01 1.60030165e-02]]\n"
]
}
],
"source": [
"# 请在此处填写你的代码(输出各模型训练到的特征系数值)\n",
"print(lr.coef_)\n",
"print(lr_l1.coef_)\n",
"print(lr_l2.coef_)"
]
},
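{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to quantify the difference visible in the raw coefficients (a small sketch, not required by the assignment): count the fraction of coefficients that are exactly zero in each model. L1 regularization should zero out many coefficients, while the other two models should keep them dense."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fraction of exactly-zero coefficients in each fitted model.\n",
"for name, model in [('LR', lr), ('L1', lr_l1), ('L2', lr_l2)]:\n",
"    sparsity = (model.coef_ == 0).mean()\n",
"    print(f'{name}: {sparsity:.1%} of coefficients are zero')"
]
},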
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAABKgAAASXCAYAAADF8UddAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy89olMNAAAACXBIWXMAAAexAAAHsQEGxWGGAAEAAElEQVR4nOzdd3hcZ5n38e8p09QtF7nbcXrvkAoJLQmhhkAWCCS7tIRQdrNZAoR96bCB3dCTUBMglfSOU51e3XuXZdnqdeppz/P+cWZGklXcZEmW7891+bI9Gs2c6XN+577vx9Baa4QQQgghhBBCCCGEGCXmaG+AEEIIIYQQQgghhDiwSUAlhBBCCCGEEEIIIUaVBFRCCCGEEEIIIYQQYlTZI3ElF110EXPnzh2JqxJCCCHEOFFbW8v9998/2pshdkK+5wkhhBBidw30PW9EAqq5c+dyww03jMRVCSGEEGKcuPrqq0d7E8QukO95QgghhNhdA33PkxY/IYQQQgghhBBCCDGqJKASQgghhBBCCCGEEKNqRFr8estkMrS3t6O1HumrLjIMg+rqakpKSkZtG4QQQgghhBBCCCH2tf0lhxnxgKq1tZUZM2ZgWdZIX3VREARs27aN2bNnj9o2CCGEEEIIIYQQQuxr+0sOM+ItfoZhjOqdAmBZFoZhjOo2CCGEEEIIIYQQQuxr+0sOM+IVVPvSrbfeyqRJk/jABz7A9773PdatW4fWmvPPP5/LLrtstDdPCCGEEEIIIYQQYtwYzhxm1Iaka63xArXHf3ald/Lb3/42f//733nqqadG4BYJIYQQQgghhBBCjE1jPYcZtQoqX2kOve6JPf799T++gIg1dHnY9ddfz7Jly/jVr361x9cjhBBCCCGEEEIIsb8b6znMqAVUtmmw/scX7NXv78y1117LxIkT+dGPfsQ555yzx9clhBBCCCGEEEIIsT8b6znMqLX4GYZBxDL3+M9gw7VuvPFGrrjiCn72s58BMG3aNEpKSli6dOlI3jwhhBBCCCGEEEKIMWOs5zDjakj65ZdfzuWXXw7AzTffXDz95z//+ShtkRBCCCGEEEIIIcT4NJw5zKhVUAkhhBBCCCGEEEIIARJQCSGEEEIIIYQQQohRJgGVEEIIIXbdXy4ANzPaWyH2kU2bNvG5z32Oiy++GIA77riDL3zhC3z2s58lnU6P8tYJMYLW/hMW/M9ob4UQQhxQJKASQgghxK6rfxM8CajGq3nz5vHnP/+5+P8HHniAP/7xj3ziE5/g/vvvH8Ut2w84ydHeAjGcurZC24bR3gohhDigjF5ApTUE3p7/0brfRd566608+uijALzxxhtccsklXHPNNSN9y4QQQojxSweg1WhvhRghhdV65syZQ319fZ+fzZ8/n6uvvpra2tpR2LIx6E/vhbaNo70VYrhoDSoY7a0QQojhNcZzmNFbxU/58MNJe/77/90KVmTQH7/tbW/j+uuv57e//e2eX4cQQggh+tJqwC8nYnyrq6tj5syZfU4777zzOO+887j66qtHaavGGDcFrrRBjhtahYG8EEKMJ2M8hxm9gMq0wxu3N78vhBBCiJGj8pVTUkE1brW1tXHdddexePFifvrTn/KRj3yEK6+8kmw2y+9+97vR3ryxTfny2hhPtJIKKiHE+DPGc5jRS3kMY8jkTQghhBBjTHHnWyqoxquJEydy88039zntU5/61ChtzX5G+chrYxyRgEoIMR6N8Rxm3JUh3XjjjTz66KPMmzePpUuXsnLlSv7whz/wxS9+cbQ3TQghhNi/FdpdpEpEiP6kgmqc0dLiJ4QQu2i4cphxFVBdfvnlXH755aO9GUIIIcT4pKXFT4hBKZnPNq5IBZUQQuyS4cxhRm8VPyGEEELsXwo7a7ITLkR/UkE1vsiQdCGEGHESUAkhhBBi10gFlRCDk4BqfJEKKiGEGHESUAkhhBBi1xSrCaSCSoh+JKAaX7SSx1MIIUbYXgdUDz74IF/4whe45JJLePLJJ4djm/bYrbfeyqOPPgrAn/70J774xS9y8cUXs2TJklHdLiGEEGJcKLT2SYufEH3p/EBtCTTGD63zKzMKIYQYynDmMHs9JP0jH/kIH/nIR+jo6OCaa67hfe973y79ntYaX+/5m75t2BiGMejPP//5z/P5z3+exYsX88gjj3DCCSfs8XUJIYQQAmnxE2IwSla4HHe0lhY/IcS4M9ZzmGFbxe9HP/oRV111VZ/T5s+fz/z586mtre13fl/7nPT3k/b4+hZ9ZhERIzLkeXzf59e//jU/+MEP9vh6hBBCCJEnQ9KFGFih0kYCqvFDhqQLIcahsZ7D7HVApbXmm9/8JhdccAEnndT3hp533nmcd955XH311f2v2LBZ9JlFe3y9tjH0pnuex1VXXcW///u/M2vWrD2+HiGEEELkSQWVEAPTUkE17siQdCHEODTWc5i9nkH1m9/8hqeffpp7772Xm2++eZd/zzAMImZkj/8MVlZ24403csUVVxCLxVi3bh033XQT99xzz97eTCGEEELIkHQhBlasoJLXxrghQ9KFEOPQWM9h9rqC6mtf+xpf+9rX9vZihsXll1/O5ZdfDrBbYZkQQgghdoFUUAkxMGl/HX+kgkoIIXbJcOYwe11BJYQQQogDhOyECzEwmUE1/mglq/gJIcQIG/GASmtNEIzu0YggCNDy5VoIIYTYPVJBJcTAJKAaf2RIuhBiHNlfcphhW8VvV02aNImGhoZRDYgMw2DSpEmjdv1CCCHEfkkrLps2hZv8DCWjvS1CjCUSUI1DWlr8hBDjxv6Sw4x4QFVSUkJJiXytFUIIIfY7WrEsFiMbOBJQCdGbBFTjj9byeAohxo39JYeRGVRCCCGE2DUqIACUzGURoi8l7a/jjVIBgVRQCSHEiJKASgghhBC7RCkfbRgESnbChehDKqjGncey9fy0dODl1IUQQuwbElAJIYQQYpcEygNAy+BgIfqSgGrc6dYuXYYsqiSEECNJAiohhBBC7JJCa18gAZUQfRXbXiXQGC+UVnjyeAohxIiSgEoIIYQQu6Qwj0UqqITYQWFW0SiujiSGV6AV7mhvhBBCHGAkoBJCCCHELilWUMngYCH6kha/cceXCiohhBhxElAJIYQQYpcUZlApCaiE6EsCqnEn0ApvtDdCCCEOMBJQCSGEEGKXBPmdcCUtfkL0JQHVuBNohSeL+AkhxIiSgEoIIYQQu0TlK6gC2QkXoq9CaCuvjXEjQEsFlRBCjDAJqIQQQgixSwqzp4LiimVCCKDXkHQJqMaLYgWVDL4XQogRIwGVEEIIIXZJoYIq5cjaVkL0IS1+406AwjOMnvBRCCHEPicB1Rhw7j/OJekmR3szhBBCiCH5OtwJ92WHTYi+JKAad3yt8TB62jeFEELscxJQjQGt2Vbqk/WjvRlCCCHEkHw/3AkPJKASB5icn6PL6Rr8DMWAStrBxgtFvsVP3u+EEGLESEA1RuSC3GhvghBCCDEkX0kFlTgwffulb3PWXWcNfgaZQTXuhDOopIJKCCFGkgRUY0TWz
472JgghhBBD8gOpoBIHpqZMU/iPZOPAZ5AKqnHH1zo/g0oWhRBCiJEiAdUY4fjOaG+CEEIIMaRCBZWSKhFxgKmIVoT/WHL7wGeQCqpxJ0DhGgYoeUyFEGKkSEA1RkiLnxBCiLHO9cNV/KTFTxxoigHVYM99GZI+7qh8BZWWCiohhBgxElCNETlfAiohhBBjW6G1L5CZLOIAUwyoBgmg7mpdyNpoRAKqccQnbNf0A3eUt0QIIQ4cElCNMp2fVZDxZAaVEEKIsa3Y4icVVOIAUx4tB8APvAF//uPG57hu0kQJqMaRIB9QeYGM4RBCiJEiAdUo0/kPv7ZMepS3RAghhBhaYUi6H0hAJQ4sMSsGQPcQIxnWxqISUI0jQf6x9GROrBBCjBgJqEZZoYIq5WZGeUuEEEKIoQU6v4qf7ISLA0zhgGJnMHDF+7xIFQCOtIONGz0VVDKGQwghRooEVKNMEX7JT3kSUAkhhBjbvKCwip9UUIkDS+GAYtcgYUWlGQVgi9c1Ytsk9q1AS4ufEEKMNAm
"text/plain": [
"<Figure size 1500x1500 with 6 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# 请在此处填写你的代码绘制6张图\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"%matplotlib inline\n",
"plt.figure(figsize=(30, 30), dpi=50)\n",
"for i in range(6):\n",
" plt.subplot(3, 2, i+1)\n",
" data = np.vstack((lr.coef_[i, :], lr_l1.coef_[i, :], lr_l2.coef_[i, :]))\n",
" plt.plot(np.arange(562), data.transpose(), '', label=['LR', 'L1', 'L2'])\n",
" plt.legend()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 第五步:预测数据\n",
"\n",
"* 将每个模型预测的类别和概率值都保存下来。"
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"# 将每个模型预测的类别和概率值都保存下来。\n",
"y_pred = lr.predict(X_test)\n",
"y_pred_l1 = lr_l1.predict(X_test)\n",
"y_pred_l2 = lr_l2.predict(X_test)\n",
"\n",
"y_pred_proba = lr.predict_proba(X_test)\n",
"y_pred_proba_l1 = lr_l1.predict_proba(X_test)\n",
"y_pred_proba_l2 = lr_l2.predict_proba(X_test)"
]
},
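{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick consistency check on the saved outputs (an optional sketch): each row of `predict_proba` should sum to 1, and `predict` should agree with the column of highest probability."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Each probability row is a distribution over the six classes.\n",
"print(np.allclose(y_pred_proba.sum(axis=1), 1.0))\n",
"# predict() corresponds to the argmax of the predicted probabilities\n",
"# (class labels are 0..5, so the column index equals the label).\n",
"print((y_pred_proba.argmax(axis=1) == y_pred).all())"
]
},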
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 第六步:评价模型\n",
"\n",
"对每个模型,分别计算下面的各评测指标值: \n",
"\n",
"* accuracy\n",
"* precision\n",
"* recall\n",
"* fscore\n",
"* confusion matrix"
]
},
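{
"cell_type": "markdown",
"metadata": {},
"source": [
"A compact way to compute these metrics is sketched below using `precision_recall_fscore_support`, which returns precision, recall, and F-score in a single call (macro averaging is assumed, matching the cell that follows); the next cell spells out each metric call explicitly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.metrics import accuracy_score, precision_recall_fscore_support, confusion_matrix\n",
"\n",
"# Sketch: the same metrics as the next cell, computed in a loop per model.\n",
"for name, pred in [('LR', y_pred), ('L1', y_pred_l1), ('L2', y_pred_l2)]:\n",
"    precision, recall, fscore, _ = precision_recall_fscore_support(y_test, pred, average='macro')\n",
"    print(name, 'accuracy:', accuracy_score(y_test, pred))\n",
"    print(name, 'precision/recall/fscore:', precision, recall, fscore)\n",
"    print(name, 'confusion matrix:\\n', confusion_matrix(y_test, pred))"
]
},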
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"LR accuracy: 0.9873073436083409\n",
"LR precision: 0.9883023003161968\n",
"LR recall: 0.9881513487864519\n",
"LR fscore: 0.9882215465046406\n",
"LR confusion matrix: [[422 0 0 0 0 0]\n",
" [ 0 371 15 0 0 0]\n",
" [ 0 12 400 0 0 0]\n",
" [ 0 0 0 368 0 0]\n",
" [ 0 0 0 0 296 0]\n",
" [ 0 0 0 1 0 321]]\n",
"L1 accuracy: 0.9877606527651859\n",
"L1 precision: 0.9887539714995753\n",
"L1 recall: 0.9886689471301372\n",
"L1 fscore: 0.9887068901807181\n",
"L1 confusion matrix: [[422 0 0 0 0 0]\n",
" [ 0 371 15 0 0 0]\n",
" [ 0 12 400 0 0 0]\n",
" [ 0 0 0 368 0 0]\n",
" [ 0 0 0 0 296 0]\n",
" [ 0 0 0 0 0 322]]\n",
"L2 accuracy: 0.9868540344514959\n",
"L2 precision: 0.9878399933872739\n",
"L2 recall: 0.9877740662269671\n",
"L2 fscore: 0.987806320894575\n",
"L2 confusion matrix: [[422 0 0 0 0 0]\n",
" [ 0 372 14 0 0 0]\n",
" [ 0 14 398 0 0 0]\n",
" [ 0 0 0 368 0 0]\n",
" [ 0 0 0 0 296 0]\n",
" [ 0 0 0 1 0 321]]\n"
]
}
],
"source": [
"from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, confusion_matrix\n",
"\n",
"print('LR accuracy: ', accuracy_score(y_test, y_pred))\n",
"print('LR precision: ', precision_score(y_test, y_pred, average='macro'))\n",
"print('LR recall: ', recall_score(y_test, y_pred, average='macro'))\n",
"print('LR fscore: ', f1_score(y_test, y_pred, average='macro'))\n",
"print('LR confusion matrix: ', confusion_matrix(y_test, y_pred))\n",
"\n",
"print('L1 accuracy: ', accuracy_score(y_test, y_pred_l1))\n",
"print('L1 precision: ', precision_score(y_test, y_pred_l1, average='macro'))\n",
"print('L1 recall: ', recall_score(y_test, y_pred_l1, average='macro'))\n",
"print('L1 fscore: ', f1_score(y_test, y_pred_l1, average='macro'))\n",
"print('L1 confusion matrix: ', confusion_matrix(y_test, y_pred_l1))\n",
"\n",
"print('L2 accuracy: ', accuracy_score(y_test, y_pred_l2))\n",
"print('L2 precision: ', precision_score(y_test, y_pred_l2, average='macro'))\n",
"print('L2 recall: ', recall_score(y_test, y_pred_l2, average='macro'))\n",
"print('L2 fscore: ', f1_score(y_test, y_pred_l2, average='macro'))\n",
"print('L2 confusion matrix: ', confusion_matrix(y_test, y_pred_l2))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
},
"vscode": {
"interpreter": {
"hash": "1f0d395e06aa83586067b19165efc9b683889967164248deef4bbf1fa27cfb00"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}