diff --git a/AIandML/e3_deep_learning/PyTorch.md b/AIandML/e3_deep_learning/PyTorch.md
new file mode 100644
index 0000000..429510e
--- /dev/null
+++ b/AIandML/e3_deep_learning/PyTorch.md
@@ -0,0 +1,30 @@
+# Installing PyTorch
+
+The deep learning part of the **Artificial Intelligence and Machine Learning** course uses the PyTorch framework. Installation instructions follow:
+
+### 1. Official website
+
+The official website below provides the installation options.
+
+```
+https://pytorch.org/
+```
+
+### 2. Choosing an installation method
+
+When installing, take the following into account:
+
+* the operating system (Windows, Linux, or macOS)
+* CPU or GPU; the latter requires CUDA (of a matching version)
+
+Once you select the appropriate options on the website, it displays the corresponding install command. For example, to install PyTorch with CUDA 10.2 using conda:
+
+```shell
+conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
+```
+
+Copy the command into a terminal and run it.
+
+### 3. Tip
+
+Because the PyTorch packages are quite large, it can be more reliable to download the whl files first and install them locally (for example, with `pip install <downloaded-file>.whl`).
\ No newline at end of file
diff --git a/AIandML/e3_deep_learning/e3.0_tensor.ipynb b/AIandML/e3_deep_learning/e3.0_tensor.ipynb
new file mode 100644
index 0000000..5b78980
--- /dev/null
+++ b/AIandML/e3_deep_learning/e3.0_tensor.ipynb
@@ -0,0 +1,789 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Lab 3-1: Tensors\n",
+    "\n",
+    "Objectives:\n",
+    "\n",
+    "* Gain a working knowledge of tensors in PyTorch\n",
+    "\n",
+    "A tensor is a specialized data structure (it can be understood simply as a multi-dimensional array) that is used much like an array or matrix. In PyTorch, we use tensors to describe a model's inputs and outputs, as well as its parameters.\n",
+    "\n",
+    "Tensors are similar to NumPy's ndarrays, and in addition they can run on GPUs (or other specialized hardware) to accelerate computation. If you are familiar with numpy.ndarray, the PyTorch Tensor will be easy to pick up.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 62,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "%matplotlib inline"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 63,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import torch\n",
+    "import numpy as np"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 1. Tensor initialization\n",
+    "\n",
+    "Tensors can be initialized in several ways. Consider the following examples:\n",
+    "\n",
+    "#### 1.1 Directly from data\n",
+    "\n",
+    "Tensors can be created directly from data. The data type is inferred automatically."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 64,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[1, 2],\n",
+      "        [3, 4]])\n",
+      "torch.int64\n"
+     ]
+    }
+   ],
+   "source": [
+    "data = [[1, 2],[3, 4]]\n",
+    "x_data = torch.tensor(data)\n",
+    "print(x_data)\n",
+    "print(x_data.dtype)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### 1.2 From a NumPy array\n",
+    "\n",
+    "Tensors can be created from NumPy arrays (and vice versa)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 65,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "torch.int64\n"
+     ]
+    }
+   ],
+   "source": [
+    "np_array = np.array(data)\n",
+    "x_np = torch.from_numpy(np_array)\n",
+    "print(x_np.dtype)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### 1.3 From another tensor\n",
+    "\n",
+    "The new tensor retains the properties (shape, data type) of the argument tensor, unless explicitly overridden.\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 66,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Ones Tensor: \n",
+      " tensor([[1, 1],\n",
+      "        [1, 1]]) \n",
+      "\n",
+      "torch.int64\n",
+      "Random Tensor: \n",
+      " tensor([[0.4114, 0.9433],\n",
+      "        [0.6890, 0.5708]]) \n",
+      "\n",
+      "torch.float32\n"
+     ]
+    }
+   ],
+   "source": [
+    "x_ones = torch.ones_like(x_data) # retains the properties of x_data\n",
+    "print(f\"Ones Tensor: \\n {x_ones} \\n\")\n",
+    "print(x_ones.dtype)\n",
+    "x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data\n",
+    "print(f\"Random Tensor: \\n {x_rand} \\n\")\n",
+    "print(x_rand.dtype)"
+   ]
+  },
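+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The data type can also be requested explicitly at creation time. The cell below is a minimal illustrative sketch (reusing `data` from above) of the `dtype` argument accepted by `torch.tensor` and by the `*_like` constructors:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Explicitly request a floating-point tensor from integer data\n",
+    "x_float = torch.tensor(data, dtype=torch.float32)\n",
+    "print(x_float.dtype)   # torch.float32\n",
+    "\n",
+    "# ones_like can likewise override the dtype of its source tensor\n",
+    "x_ones64 = torch.ones_like(x_float, dtype=torch.float64)\n",
+    "print(x_ones64.dtype)  # torch.float64"
+   ]
+  },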
"cell_type": "code", + "execution_count": 67, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "torch.float32\n", + "torch.float32\n", + "torch.float32\n", + "Random Tensor: \n", + " tensor([[0.3535, 0.5414, 0.5149],\n", + " [0.7026, 0.4758, 0.2522]]) \n", + "\n", + "Ones Tensor: \n", + " tensor([[1., 1., 1.],\n", + " [1., 1., 1.]]) \n", + "\n", + "Zeros Tensor: \n", + " tensor([[0., 0., 0.],\n", + " [0., 0., 0.]])\n" + ] + } + ], + "source": [ + "shape = (2,3,)\n", + "rand_tensor = torch.rand(shape)\n", + "ones_tensor = torch.ones(shape)\n", + "zeros_tensor = torch.zeros(shape)\n", + "print(rand_tensor.dtype)\n", + "print(ones_tensor.dtype)\n", + "print(zeros_tensor.dtype)\n", + "\n", + "print(f\"Random Tensor: \\n {rand_tensor} \\n\")\n", + "print(f\"Ones Tensor: \\n {ones_tensor} \\n\")\n", + "print(f\"Zeros Tensor: \\n {zeros_tensor}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> 请回答:\n", + "> 1. 采用以上各种方法创建Tensor时,其数据类型(即`.dtype`属性)是怎样的?" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "数据类型可在上面的输出中发现。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 2. Tensor 属性\n", + "\n", + "\n", + "张量属性描述它们的形状、数据类型以及存储它们的设备。\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": 68, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Number of dimension: 2\n", + "Shape of tensor: torch.Size([5, 4])\n", + "Datatype of tensor: torch.float32\n", + "Device tensor is stored on: cpu\n" + ] + } + ], + "source": [ + "tensor = torch.rand(5,4)\n", + "\n", + "print(f\"Number of dimension: {tensor.ndim}\")\n", + "print(f\"Shape of tensor: {tensor.shape}\")\n", + "print(f\"Datatype of tensor: {tensor.dtype}\")\n", + "print(f\"Device tensor is stored on: {tensor.device}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> 请回答:\n", + "> 1. 修改变量 tensor 的尺寸,重新执行,给出结果。\n", + "> 2. 并根据结果分析和解释`tensor.ndim`、`tensor.shape`、`tensor.dtype`和`tensor.device`各是什么含义?\n", + "\n" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "metadata": {}, + "source": [ + "分别为:维度、形状、数据类型、存储位置" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 3. Tensor 操作\n", + "\n", + "\n", + "* 在\"官网文档\"中,详尽的介绍了约百余个张量的运算函数,包括转置、索引、切片、数学运算,线性代数,随机抽样等。\n", + "\n", + "* 需知晓的是,这些函数都可以在GPU上运行。在批量化运行时,GPU运算速度通常比在CPU上运行的速度更高。\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": 69, + "metadata": {}, + "outputs": [], + "source": [ + "# We move our tensor to the GPU if available\n", + "if torch.cuda.is_available():\n", + " tensor = tensor.to('cuda')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> 请回答:\n", + "> 1. 
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> Questions:\n",
+    "> 1. Apply any tensor operation of your choice, and show the code and its effect.\n",
+    "\n",
+    "* If you are familiar with the NumPy API, you will find the Tensor API a breeze to use.\n",
+    "\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 70,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[0.8814, 0.4445, 0.7783, 0.5680],\n",
+      "        [0.8376, 0.7283, 0.5712, 0.6707],\n",
+      "        [0.7175, 0.3082, 0.1837, 0.4545],\n",
+      "        [0.2929, 0.8227, 0.3114, 0.1816],\n",
+      "        [0.9362, 0.5050, 0.8165, 0.6510]], device='cuda:0')\n",
+      "tensor([[0.4919, 1.1102, 0.6788, 0.9667],\n",
+      "        [0.5779, 0.7550, 0.9628, 0.8356],\n",
+      "        [0.7705, 1.2575, 1.3861, 1.0990],\n",
+      "        [1.2735, 0.6047, 1.2541, 1.3882],\n",
+      "        [0.3591, 1.0414, 0.6154, 0.8619]], device='cuda:0')\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(tensor)\n",
+    "print(torch.acos(tensor))"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 4. NumPy-style indexing and slicing "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 71,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.]])\n"
+     ]
+    }
+   ],
+   "source": [
+    "tensor = torch.ones(4, 4)\n",
+    "tensor[:,1] = 0\n",
+    "print(tensor)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> Questions:\n",
+    "> 1. Using indexing or slicing, print the first row, the first column, and the central 2x2 submatrix of `tensor`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 72,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([1., 0., 1., 1.])\n",
+      "tensor([1., 1., 1., 1.])\n",
+      "tensor([[0., 1.],\n",
+      "        [0., 1.]])\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(tensor[0])\n",
+    "print(tensor[:,0])\n",
+    "print(tensor[1:3,1:3])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 5. Joining tensors\n",
+    "\n",
+    "There are several ways to join tensors, for example `torch.cat` and `torch.stack`.\n",
+    "\n",
+    "> Questions:\n",
+    "> 1. Consult the documentation, and use examples to describe how `torch.cat` and `torch.stack` each join tensors and how they differ."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 73,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.]])\n"
+     ]
+    }
+   ],
+   "source": [
+    "t1 = torch.cat([tensor, tensor, tensor], dim=1)\n",
+    "print(t1)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 74,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[1., 2., 3.],\n",
+      "        [1., 2., 3.]])\n",
+      "tensor([1., 2., 3., 1., 2., 3.])\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Note: the difference between .cat and .stack is that cat concatenates along an\n",
+    "# existing dimension, whereas stack adds a new dimension\n",
+    "a = torch.tensor([1., 2., 3.])\n",
+    "print(torch.stack((a, a)))\n",
+    "print(torch.cat((a, a)))"
+   ]
+  },
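+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The difference is easiest to see in the result shapes. As a minimal sketch on 2-D inputs: `torch.cat` keeps the number of dimensions, while `torch.stack` adds one:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# cat joins along an existing dimension; stack creates a new leading dimension\n",
+    "m = torch.ones(2, 3)\n",
+    "print(torch.cat((m, m), dim=0).shape)    # torch.Size([4, 3])\n",
+    "print(torch.stack((m, m), dim=0).shape)  # torch.Size([2, 2, 3])"
+   ]
+  },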
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 6. Multiplying tensors\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 75,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor.mul(tensor) \n",
+      " tensor([[1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.]]) \n",
+      "\n",
+      "tensor * tensor \n",
+      " tensor([[1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.]])\n"
+     ]
+    }
+   ],
+   "source": [
+    "# This computes the element-wise product\n",
+    "print(f\"tensor.mul(tensor) \\n {tensor.mul(tensor)} \\n\")\n",
+    "# Alternative syntax:\n",
+    "print(f\"tensor * tensor \\n {tensor * tensor}\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This computes the matrix product of two tensors:\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 76,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor.matmul(tensor.T) \n",
+      " tensor([[3., 3., 3., 3.],\n",
+      "        [3., 3., 3., 3.],\n",
+      "        [3., 3., 3., 3.],\n",
+      "        [3., 3., 3., 3.]]) \n",
+      "\n",
+      "tensor @ tensor.T \n",
+      " tensor([[3., 3., 3., 3.],\n",
+      "        [3., 3., 3., 3.],\n",
+      "        [3., 3., 3., 3.],\n",
+      "        [3., 3., 3., 3.]])\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(f\"tensor.matmul(tensor.T) \\n {tensor.matmul(tensor.T)} \\n\")\n",
+    "# Alternative syntax:\n",
+    "print(f\"tensor @ tensor.T \\n {tensor @ tensor.T}\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> Questions:\n",
+    "> 1. Consult the documentation and, with reference to the examples, explain the difference between the two kinds of multiplication above."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`mul` (or `*`) multiplies element-wise; `matmul` (or `@`) performs matrix multiplication."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 7. In-place operations\n",
+    "\n",
+    "> An in-place operation is one whose input and output are the same variable; `x++` and `y *= 5` in C are both in-place operations. Because in-place operations avoid copying memory, they can speed up computation.\n",
+    "\n",
+    "\n",
+    "Member functions with a `_` suffix operate in place. For example, `x.copy_(y)` and `x.t_()` both modify the variable `x`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 77,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.],\n",
+      "        [1., 0., 1., 1.]]) \n",
+      "\n",
+      "tensor([[6., 5., 6., 6.],\n",
+      "        [6., 5., 6., 6.],\n",
+      "        [6., 5., 6., 6.],\n",
+      "        [6., 5., 6., 6.]])\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(tensor, \"\\n\")\n",
+    "tensor.add_(5)\n",
+    "print(tensor)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note:\n",
+    "\n",
+    "Although in-place operations save some memory, they can be problematic when computing gradients, because they discard the computation history. Their use is therefore discouraged. (A short illustration follows the `copy_` example below.)\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> Questions:\n",
+    "> 1. Try the `x.copy_(y)` function and explain what it does.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 78,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([[1, 2],\n",
+      "        [3, 4]])\n",
+      "tensor([[4, 3],\n",
+      "        [2, 1]])\n",
+      "tensor([[4, 3],\n",
+      "        [2, 1]])\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Copy the contents of y into x\n",
+    "x = torch.tensor([[1, 2],[3, 4]])\n",
+    "print(x)\n",
+    "y = torch.tensor([[4, 3],[2, 1]])\n",
+    "print(y)\n",
+    "x.copy_(y)\n",
+    "print(x)"
+   ]
+  },
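+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a brief illustration of the note above, here is a minimal sketch: autograd rejects an in-place modification of a leaf tensor that requires gradients, raising a `RuntimeError` rather than silently losing the history."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# In-place ops can conflict with autograd: modifying a leaf tensor that\n",
+    "# requires grad is an error, since it would invalidate the recorded history\n",
+    "x = torch.ones(3, requires_grad=True)\n",
+    "try:\n",
+    "    x.add_(1)\n",
+    "except RuntimeError as err:\n",
+    "    print(err)"
+   ]
+  },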
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### 8. Converting to and from NumPy\n",
+    "\n",
+    "A Torch Tensor and a NumPy ndarray can be converted into one another.\n",
+    "\n",
+    "If the tensor is on the CPU, the NumPy array obtained from it shares its underlying memory: changing one also changes the other."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### 8.1 Tensor to NumPy array\n",
+    "\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 79,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "t: tensor([1., 1., 1., 1., 1.])\n",
+      "n: [1. 1. 1. 1. 1.]\n"
+     ]
+    }
+   ],
+   "source": [
+    "t = torch.ones(5)\n",
+    "print(f\"t: {t}\")\n",
+    "n = t.numpy()\n",
+    "print(f\"n: {n}\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A change to the tensor is reflected in the NumPy array.\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 80,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "t: tensor([2., 2., 2., 2., 2.])\n",
+      "n: [2. 2. 2. 2. 2.]\n"
+     ]
+    }
+   ],
+   "source": [
+    "t.add_(1)\n",
+    "print(f\"t: {t}\")\n",
+    "print(f\"n: {n}\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### 8.2 NumPy array to Tensor"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 81,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "n = np.ones(5)\n",
+    "t = torch.from_numpy(n)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Changes to the NumPy array also affect the tensor.\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 82,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "t: tensor([2., 2., 2., 2., 2.], dtype=torch.float64)\n",
+      "n: [2. 2. 2. 2. 2.]\n"
+     ]
+    }
+   ],
+   "source": [
+    "np.add(n, 1, out=n)\n",
+    "print(f\"t: {t}\")\n",
+    "print(f\"n: {n}\")"
+   ]
+  },
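+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note that this memory sharing applies to CPU tensors only, and a tensor on the GPU cannot be converted to NumPy directly. Below is a minimal sketch of the required round trip through host memory:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# A CUDA tensor must be copied back to the CPU before conversion;\n",
+    "# calling .numpy() on it directly raises a TypeError\n",
+    "t = torch.ones(3)\n",
+    "if torch.cuda.is_available():\n",
+    "    t = t.to('cuda')\n",
+    "    n = t.cpu().numpy()  # copy to host memory, then convert\n",
+    "else:\n",
+    "    n = t.numpy()        # CPU tensor: shares memory with n\n",
+    "print(n)"
+   ]
+  },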
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "> Questions:\n",
+    "> 1. Consult the documentation to find out how to copy a Tensor or ndarray so as to avoid the interference caused by memory sharing. Give example code."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 83,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "tensor([2., 2., 2.])\n",
+      "[2. 2. 2.]\n",
+      "tensor([2., 2., 2.])\n",
+      "[1. 1. 1.]\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Use the clone method\n",
+    "a = torch.ones(3)\n",
+    "b = a.numpy()\n",
+    "a.add_(1)\n",
+    "print(a)\n",
+    "print(b)  # a and b share memory\n",
+    "\n",
+    "a = torch.ones(3)\n",
+    "b = a.clone().numpy()\n",
+    "a.add_(1)\n",
+    "print(a)\n",
+    "print(b)  # a and b do not share memory"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": ".venv",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.8.16"
+  },
+  "vscode": {
+   "interpreter": {
+    "hash": "0733c54d9044ea299f7b7f48049f3576c8ad4e6ff5a97e2c60d8a9e3bff0bc54"
+   }
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}
diff --git a/AIandML/requirements.txt b/AIandML/requirements.txt
index 3143bad..b3cad3b 100644
--- a/AIandML/requirements.txt
+++ b/AIandML/requirements.txt
@@ -1,62 +1,78 @@
+anyio==3.6.2
 appdirs==1.4.4
 argon2-cffi==21.3.0
 argon2-cffi-bindings==21.2.0
-asttokens==2.0.8
+arrow==1.2.3
+asttokens==2.2.1
 attrs==22.1.0
 backcall==0.2.0
 beautifulsoup4==4.11.1
 bleach==5.0.1
-certifi==2022.9.14
+certifi==2022.12.7
 cffi==1.15.1
-contourpy==1.0.5
+charset-normalizer==2.1.1
+comm==0.1.2
+contourpy==1.0.6
 cycler==0.11.0
-debugpy==1.6.3
+debugpy==1.6.4
 decorator==5.1.1
 defusedxml==0.7.1
 entrypoints==0.4
-executing==1.0.0
+executing==1.2.0
 fastjsonschema==2.16.2
-fonttools==4.37.3
-importlib-metadata==4.12.0
-ipykernel==6.15.3
-ipython==8.5.0
+fonttools==4.38.0
+fqdn==1.5.1
+idna==3.4
+importlib-metadata==5.1.0
+importlib-resources==5.10.1
+ipykernel==6.19.2
+ipython==8.7.0
 ipython-genutils==0.2.0
-ipywidgets==8.0.2
-jedi==0.18.1
+ipywidgets==8.0.3
+isoduration==20.11.0
+jedi==0.18.2
 Jinja2==3.1.2
 joblib==1.2.0
-jsonschema==4.16.0
+jsonpointer==2.3
+jsonschema==4.17.3
 jupyter==1.0.0
 jupyter-console==6.4.4
-jupyter-core==4.11.1
-jupyter_client==7.3.5
+jupyter-events==0.5.0
+jupyter_client==7.4.8
+jupyter_core==5.1.0
+jupyter_server==2.0.1
+jupyter_server_terminals==0.4.2
 jupyterlab-pygments==0.2.2
-jupyterlab-widgets==3.0.3
+jupyterlab-widgets==3.0.4
 kiwisolver==1.4.4
-lxml==4.9.1
+lxml==4.9.2
 MarkupSafe==2.1.1
-matplotlib==3.6.0
+matplotlib==3.6.2
 matplotlib-inline==0.1.6
 mistune==2.0.4
-nbclient==0.6.8
-nbconvert==7.0.0
-nbformat==5.5.0
-nest-asyncio==1.5.5
-notebook==6.4.12
-numpy==1.23.3
-packaging==21.3
-pandas==1.5.0
-pandoc==2.2
+nbclassic==0.4.8
+nbclient==0.7.2
+nbconvert==7.2.6
+nbformat==5.7.0
+nest-asyncio==1.5.6
+notebook==6.5.2
+notebook_shim==0.2.2
+numpy==1.23.5
+packaging==22.0
+pandas==1.5.2
+pandoc==2.3
 pandocfilters==1.5.0
 parso==0.8.3
 pexpect==4.8.0
 pickleshare==0.7.5
-Pillow==9.2.0
-plumbum==1.7.2
+Pillow==9.3.0
+pkgutil_resolve_name==1.3.10
+platformdirs==2.6.0
+plumbum==1.8.0
 ply==3.11
-prometheus-client==0.14.1
-prompt-toolkit==3.0.31
-psutil==5.9.2
+prometheus-client==0.15.0
+prompt-toolkit==3.0.36
+psutil==5.9.4
 ptyprocess==0.7.0
 pure-eval==0.2.2
 pycparser==2.21
@@ -64,28 +80,41 @@ pyee==8.2.2
 Pygments==2.13.0
 pyparsing==3.0.9
 pyppeteer==1.0.2
-pyrsistent==0.18.1
+pyrsistent==0.19.2
 python-dateutil==2.8.2
-pytz==2022.2.1
-pyzmq==24.0.0
-qtconsole==5.3.2
-QtPy==2.2.0
-scikit-learn==1.1.3
+python-json-logger==2.0.4
+pytz==2022.6
+PyYAML==6.0
+pyzmq==24.0.1
+qtconsole==5.4.0
+QtPy==2.3.0
+requests==2.28.1
+rfc3339-validator==0.1.4
+rfc3986-validator==0.1.1
+scikit-learn==1.2.0
 scipy==1.9.3
-seaborn==0.12.0
+seaborn==0.12.1
 Send2Trash==1.8.0
 six==1.16.0
+sniffio==1.3.0
 soupsieve==2.3.2.post1
-stack-data==0.5.0
-terminado==0.15.0
+stack-data==0.6.2
+terminado==0.17.1
 threadpoolctl==3.1.0
-tinycss2==1.1.1
+tinycss2==1.2.1
+torch==1.13.0+cu116
+torchaudio==0.13.0+cu116
+torchvision==0.14.0+cu116
 tornado==6.2
 tqdm==4.64.1
-traitlets==5.4.0
-urllib3==1.26.12
+traitlets==5.7.1
+typing_extensions==4.4.0
+uri-template==1.2.0
+urllib3==1.26.13
 wcwidth==0.2.5
+webcolors==1.12
 webencodings==0.5.1
-websockets==10.3
-widgetsnbextension==4.0.3
-zipp==3.8.1
+websocket-client==1.4.2
+websockets==10.4
+widgetsnbextension==4.0.4
+zipp==3.11.0