{"id":2468,"date":"2025-05-12T13:35:49","date_gmt":"2025-05-12T05:35:49","guid":{"rendered":"https:\/\/thereisno.top\/?p=2468"},"modified":"2025-05-12T13:36:59","modified_gmt":"2025-05-12T05:36:59","slug":"torch-%e5%bf%ab%e9%80%9f%e5%85%a5%e9%97%a8","status":"publish","type":"post","link":"https:\/\/thereisno.top\/?p=2468","title":{"rendered":"torch \u5feb\u901f\u5165\u95e8"},"content":{"rendered":"\n<h1 class=\"wp-block-heading\" id=\"quickstart\">Quickstart<\/h1>\n\n\n\n<p>This section runs through the API for common tasks in machine learning. Refer to the links in each section to dive deeper.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"working-with-data\">Working with data<\/h2>\n\n\n\n<p>PyTorch has two&nbsp;<a href=\"https:\/\/pytorch.org\/docs\/stable\/data.html\">primitives to work with data<\/a>:&nbsp;<code>torch.utils.data.DataLoader<\/code>and&nbsp;<code>torch.utils.data.Dataset<\/code>.&nbsp;<code>Dataset<\/code>&nbsp;stores the samples and their corresponding labels, and&nbsp;<code>DataLoader<\/code>&nbsp;wraps an iterable around the&nbsp;<code>Dataset<\/code>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb2-1\"><\/a><strong>import<\/strong> torch\n<a href=\"#cb2-2\"><\/a><strong>from<\/strong> torch <strong>import<\/strong> nn\n<a href=\"#cb2-3\"><\/a><strong>from<\/strong> torch.utils.data <strong>import<\/strong> DataLoader\n<a href=\"#cb2-4\"><\/a><strong>from<\/strong> torchvision <strong>import<\/strong> datasets\n<a href=\"#cb2-5\"><\/a><strong>from<\/strong> torchvision.transforms <strong>import<\/strong> ToTensor<\/code><\/pre>\n\n\n\n<p>PyTorch offers domain-specific libraries such as&nbsp;<a href=\"https:\/\/pytorch.org\/text\/stable\/index.html\">TorchText<\/a>,&nbsp;<a href=\"https:\/\/pytorch.org\/vision\/stable\/index.html\">TorchVision<\/a>, and<a href=\"https:\/\/pytorch.org\/audio\/stable\/index.html\">TorchAudio<\/a>, all of which include datasets. For this tutorial, we will be using a TorchVision dataset.<\/p>\n\n\n\n<p>The&nbsp;<code>torchvision.datasets<\/code>&nbsp;module contains&nbsp;<code>Dataset<\/code>&nbsp;objects for many real-world vision data like CIFAR, COCO (<a href=\"https:\/\/pytorch.org\/vision\/stable\/datasets.html\">full list here<\/a>). In this tutorial, we use the FashionMNIST dataset. Every TorchVision&nbsp;<code>Dataset<\/code>&nbsp;includes two arguments:<code>transform<\/code>&nbsp;and&nbsp;<code>target_transform<\/code>&nbsp;to modify the samples and labels respectively.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb3-1\"><\/a><em># Download training data from open datasets.<\/em>\n<a href=\"#cb3-2\"><\/a>training_data = datasets.FashionMNIST(\n<a href=\"#cb3-3\"><\/a>    root=\"data\",\n<a href=\"#cb3-4\"><\/a>    train=True,\n<a href=\"#cb3-5\"><\/a>    download=True,\n<a href=\"#cb3-6\"><\/a>    transform=ToTensor(),\n<a href=\"#cb3-7\"><\/a>)\n<a href=\"#cb3-8\"><\/a>\n<a href=\"#cb3-9\"><\/a><em># Download test data from open datasets.<\/em>\n<a href=\"#cb3-10\"><\/a>test_data = datasets.FashionMNIST(\n<a href=\"#cb3-11\"><\/a>    root=\"data\",\n<a href=\"#cb3-12\"><\/a>    train=False,\n<a href=\"#cb3-13\"><\/a>    download=True,\n<a href=\"#cb3-14\"><\/a>    transform=ToTensor(),\n<a href=\"#cb3-15\"><\/a>)\n<a href=\"#cb3-16\"><\/a><mark style=\"background-color:#fcb900\" class=\"has-inline-color\">print(len(training_data))<\/mark><\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code><mark style=\"background-color:#fcb900\" class=\"has-inline-color\">60000<\/mark>\n<\/code><\/pre>\n\n\n\n<!--more-->\n\n\n\n<p>We pass the&nbsp;<code>Dataset<\/code>&nbsp;as an argument to&nbsp;<code>DataLoader<\/code>. This wraps an iterable over our dataset, and supports automatic batching, sampling, shuffling and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 features and labels.<\/p>\n\n\n\n<p><mark style=\"background-color:#fcb900\" class=\"has-inline-color\"><strong><code>DataLoader<\/code><\/strong>&nbsp;\u8fd4\u56de\u7b2c\u4e00\u9879\u662f\u6570\u636e\uff0c\u7b2c\u4e8c\u9879\u76ee\u662f\u6807\u7b7e<\/mark><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb5-1\"><\/a>batch_size = 64\n<a href=\"#cb5-2\"><\/a>\n<a href=\"#cb5-3\"><\/a><em># Create data loaders.<\/em>\n<a href=\"#cb5-4\"><\/a>train_dataloader = DataLoader(training_data, batch_size=batch_size)\n<a href=\"#cb5-5\"><\/a>test_dataloader = DataLoader(test_data, batch_size=batch_size)\n<a href=\"#cb5-6\"><\/a>\n<a href=\"#cb5-7\"><\/a><strong>for<\/strong> X, y <strong>in<\/strong> test_dataloader:\n<a href=\"#cb5-8\"><\/a>    print(f\"Shape of X &#91;N, C, H, W]: {X.shape}\")\n<a href=\"#cb5-9\"><\/a>    print(f\"Shape of y: {y.shape} {y.dtype}\")\n<a href=\"#cb5-10\"><\/a>    <strong>break<\/strong>\n<a href=\"#cb5-11\"><\/a>\n<a href=\"#cb5-12\"><\/a><mark style=\"background-color:#fcb900\" class=\"has-inline-color\">print(len(train_dataloader))<\/mark><\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>Shape of X &#91;N, C, H, W]: torch.Size(&#91;64, 1, 28, 28])\nShape of y: torch.Size(&#91;64]) torch.int64\n<mark style=\"background-color:#fcb900\" class=\"has-inline-color\">938<\/mark> #<mark style=\"background-color:#fcb900\" class=\"has-inline-color\">\u517160000\u6761\u6570\u636e\uff0c\u6bcf\u4e2a\u6279\u6b2164\uff0c\u4e00\u5171938\u6279\n<\/mark><\/code><\/pre>\n\n\n\n<p>Read more about&nbsp;<a href=\"data_tutorial.html\">loading data in PyTorch<\/a>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Creating Models<\/h1>\n\n\n\n<p>To define a neural network in PyTorch, we create a class that inherits from&nbsp;<a href=\"https:\/\/pytorch.org\/docs\/stable\/generated\/torch.nn.Module.html\">nn.Module<\/a>. We define the layers of the network in the&nbsp;<code>__init__<\/code>&nbsp;function and specify how data will pass through the network in the&nbsp;<code>forward<\/code>&nbsp;function. To accelerate operations in the neural network, we move it to the&nbsp;<a href=\"https:\/\/pytorch.org\/docs\/stable\/torch.html#accelerators\">accelerator<\/a>&nbsp;such as CUDA, MPS, MTIA, or XPU. If the current accelerator is available, we will use it. Otherwise, we use the CPU.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb7-1\"><\/a>device = torch.accelerator.current_accelerator().type <strong>if<\/strong> torch.accelerator.is_available() <strong>else<\/strong> \"cpu\"\n<a href=\"#cb7-2\"><\/a>\n<a href=\"#cb7-3\"><\/a>print(f\"Using {device} device\")\n<a href=\"#cb7-4\"><\/a>\n<a href=\"#cb7-5\"><\/a><em># Define model<\/em>\n<a href=\"#cb7-6\"><\/a><strong>class<\/strong> NeuralNetwork(nn.Module):\n<a href=\"#cb7-7\"><\/a>    <strong>def<\/strong> __init__(self):\n<a href=\"#cb7-8\"><\/a>        super().__init__()\n<a href=\"#cb7-9\"><\/a>        self.flatten = nn.Flatten() <em>#\u7ef4\u5ea6\u5c55\u5e73<\/em>\n<a href=\"#cb7-10\"><\/a>        self.linear_relu_stack = nn.Sequential(\n<a href=\"#cb7-11\"><\/a>            nn.Linear(28*28, 512),\n<a href=\"#cb7-12\"><\/a>            nn.ReLU(),\n<a href=\"#cb7-13\"><\/a>            nn.Linear(512, 512),\n<a href=\"#cb7-14\"><\/a>            nn.ReLU(),\n<a href=\"#cb7-15\"><\/a>            nn.Linear(512, 10)\n<a href=\"#cb7-16\"><\/a>        )\n<a href=\"#cb7-17\"><\/a>\n<a href=\"#cb7-18\"><\/a>    <strong>def<\/strong> forward(self, x):\n<a href=\"#cb7-19\"><\/a>        x = self.flatten(x)\n<a href=\"#cb7-20\"><\/a>        logits = self.linear_relu_stack(x)\n<a href=\"#cb7-21\"><\/a>        <strong>return<\/strong> logits\n<a href=\"#cb7-22\"><\/a>\n<a href=\"#cb7-23\"><\/a>model = NeuralNetwork().to(device)\n<a href=\"#cb7-24\"><\/a>print(model)<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>Using mps device\nNeuralNetwork(\n  (flatten): Flatten(start_dim=1, end_dim=-1)\n  (linear_relu_stack): Sequential(\n    (0): Linear(in_features=784, out_features=512, bias=True)\n    (1): ReLU()\n    (2): Linear(in_features=512, out_features=512, bias=True)\n    (3): ReLU()\n    (4): Linear(in_features=512, out_features=10, bias=True)\n  )\n)\n<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code><\/code><\/pre>\n\n\n\n<p>Read more about&nbsp;<a href=\"buildmodel_tutorial.html\">building neural networks in PyTorch<\/a>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Optimizing the Model Parameters<\/h1>\n\n\n\n<p>To train a model, we need a&nbsp;<a href=\"https:\/\/pytorch.org\/docs\/stable\/nn.html#loss-functions\">loss function<\/a>&nbsp;and an&nbsp;<a href=\"https:\/\/pytorch.org\/docs\/stable\/optim.html\">optimizer<\/a>.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb10-1\"><\/a>loss_fn = nn.CrossEntropyLoss()\n<a href=\"#cb10-2\"><\/a>optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)<\/code><\/pre>\n\n\n\n<p>In a single training loop, the model makes predictions on the training dataset (fed to it in batches), and backpropagates the prediction error to adjust the model&#8217;s parameters.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb11-1\"><\/a><strong>def<\/strong> train(dataloader, model, loss_fn, optimizer):\n<a href=\"#cb11-2\"><\/a>    size = len(dataloader.dataset)\n<a href=\"#cb11-3\"><\/a>    print(\"size=\"+str(size))\n<a href=\"#cb11-4\"><\/a>    model.train() <em><mark style=\"background-color:#fcb900\" class=\"has-inline-color\"># \u542f\u7528 Batch Normalization \u548c Dropout\uff0c\u5f52\u4e00\u5316\uff0c\u968f\u673a\u4e22\u5f03\u795e\u7ecf\u5143\u9632\u6b62\u8fc7\u62df\u5408\uff0c\u6d4b\u8bd5\u65f6\u4e0d\u4e22\u5f03<\/mark><\/em>\n<a href=\"#cb11-5\"><\/a>    <strong>for<\/strong> batch, (X, y) <strong>in<\/strong> enumerate(dataloader):\n<a href=\"#cb11-6\"><\/a>        X, y = X.to(device), y.to(device)\n<a href=\"#cb11-7\"><\/a>\n<a href=\"#cb11-8\"><\/a>        <em># Compute prediction error<\/em>\n<a href=\"#cb11-9\"><\/a>        pred = model(X)\n<a href=\"#cb11-10\"><\/a>        <em>#print(\"----------&gt;pred=\")<\/em>\n<a href=\"#cb11-11\"><\/a>        <em>#print(pred)<\/em>\n<a href=\"#cb11-12\"><\/a>        <em>#print(\"-----------------------&lt;\")<\/em>\n<a href=\"#cb11-13\"><\/a>        loss = loss_fn(pred, y)\n<a href=\"#cb11-14\"><\/a>\n<a href=\"#cb11-15\"><\/a>        <em># Backpropagation<\/em>\n<a href=\"#cb11-16\"><\/a>        loss.backward() <em><mark style=\"background-color:#fcb900\" class=\"has-inline-color\"># \u8ba1\u7b97\u68af\u5ea6<\/mark><\/em>\n<a href=\"#cb11-17\"><\/a>        optimizer.step() <em><mark style=\"background-color:#fcb900\" class=\"has-inline-color\"># \u6839\u636e\u68af\u5ea6\u4f18\u5316\u53c2\u6570<\/mark><\/em>\n<a href=\"#cb11-18\"><\/a>        optimizer.zero_grad() <em><mark style=\"background-color:#fcb900\" class=\"has-inline-color\"># \u68af\u5ea6\u5f52\u96f6<\/mark><\/em>\n<a href=\"#cb11-19\"><\/a>\n<a href=\"#cb11-20\"><\/a>        <strong>if<\/strong> batch % 100 == 0: <em><mark style=\"background-color:#fcb900\" class=\"has-inline-color\"># \u6bcf100\u4e2abatch\u6253\u5370\u4e00\u6b21<\/mark><\/em>\n<a href=\"#cb11-21\"><\/a>            loss, current = loss.item(), (batch + 1) * len(X)\n<a href=\"#cb11-22\"><\/a>            print(f\"loss: {loss:&gt;7f}  &#91;{current:&gt;5d}\/{size:&gt;5d}]\")<\/code><\/pre>\n\n\n\n<p>We also check the model&#8217;s performance against the test dataset to ensure it is learning.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb12-1\"><\/a><strong>def<\/strong> test(dataloader, model, loss_fn):\n<a href=\"#cb12-2\"><\/a>    size = len(dataloader.dataset)\n<a href=\"#cb12-3\"><\/a>    num_batches = len(dataloader)\n<a href=\"#cb12-4\"><\/a>    model.eval()\n<a href=\"#cb12-5\"><\/a>    test_loss, correct = 0, 0\n<a href=\"#cb12-6\"><\/a>    <strong>with<\/strong> torch.no_grad():\n<a href=\"#cb12-7\"><\/a>        <strong>for<\/strong> X, y <strong>in<\/strong> dataloader:\n<a href=\"#cb12-8\"><\/a>            X, y = X.to(device), y.to(device)\n<a href=\"#cb12-9\"><\/a>            pred = model(X)\n<a href=\"#cb12-10\"><\/a>            test_loss += loss_fn(pred, y).item()\n<a href=\"#cb12-11\"><\/a>            correct += (pred.argmax(1) == y).type(torch.float).sum().item()\n<a href=\"#cb12-12\"><\/a>    test_loss \/= num_batches\n<a href=\"#cb12-13\"><\/a>    correct \/= size\n<a href=\"#cb12-14\"><\/a>    print(f\"Test Error: \\n Accuracy: {(100*correct):&gt;0.1f}%, Avg loss: {test_loss:&gt;8f} \\n\")<\/code><\/pre>\n\n\n\n<p>The training process is conducted over several iterations (<em>epochs<\/em>). During each epoch, the model learns parameters to make better predictions. We print the model&#8217;s accuracy and loss at each epoch; we&#8217;d like to see the accuracy increase and the loss decrease with every epoch.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb13-1\"><\/a>epochs = 5\n<a href=\"#cb13-2\"><\/a><strong>for<\/strong> t <strong>in<\/strong> range(epochs):\n<a href=\"#cb13-3\"><\/a>    print(f\"Epoch {t+1}\\n-------------------------------\")\n<a href=\"#cb13-4\"><\/a>    train(train_dataloader, model, loss_fn, optimizer)\n<a href=\"#cb13-5\"><\/a>    test(test_dataloader, model, loss_fn)\n<a href=\"#cb13-6\"><\/a>print(\"Done!\")<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>Epoch 1\n-------------------------------\nsize=60000 # <mark style=\"background-color:#fcb900\" class=\"has-inline-color\">938\u4e2a\u6279\u6b21\uff0c\u6bcf\u4e00\u767e\u6253\u5370\u4e00\u6b21<\/mark>\uff0c\u6253\u537010\u6b21\nloss: 2.315621  &#91;   64\/60000]\nloss: 2.297486  &#91; 6464\/60000]\nloss: 2.284707  &#91;12864\/60000]\nloss: 2.276697  &#91;19264\/60000]\nloss: 2.251865  &#91;25664\/60000]\nloss: 2.229615  &#91;32064\/60000]\nloss: 2.235096  &#91;38464\/60000]\nloss: 2.205865  &#91;44864\/60000]\nloss: 2.203413  &#91;51264\/60000]\nloss: 2.159754  &#91;57664\/60000]\nTest Error: \n Accuracy: 39.3%, Avg loss: 2.162051 \n\nEpoch 2\n-------------------------------\nsize=60000\nloss: 2.175932  &#91;   64\/60000]\nloss: 2.160811  &#91; 6464\/60000]\nloss: 2.113213  &#91;12864\/60000]\nloss: 2.128377  &#91;19264\/60000]\nloss: 2.065253  &#91;25664\/60000]\nloss: 2.014427  &#91;32064\/60000]\nloss: 2.038499  &#91;38464\/60000]\nloss: 1.966430  &#91;44864\/60000]\nloss: 1.969670  &#91;51264\/60000]\nloss: 1.885340  &#91;57664\/60000]\nTest Error: \n Accuracy: 51.0%, Avg loss: 1.894609 \n\nEpoch 3\n-------------------------------\nsize=60000\nloss: 1.927377  &#91;   64\/60000]\nloss: 1.893924  &#91; 6464\/60000]\nloss: 1.791221  &#91;12864\/60000]\nloss: 1.828687  &#91;19264\/60000]\nloss: 1.707735  &#91;25664\/60000]\nloss: 1.663736  &#91;32064\/60000]\nloss: 1.677831  &#91;38464\/60000]\nloss: 1.585051  &#91;44864\/60000]\nloss: 1.603040  &#91;51264\/60000]\nloss: 1.495138  &#91;57664\/60000]\nTest Error: \n Accuracy: 59.2%, Avg loss: 1.523530 \n\nEpoch 4\n-------------------------------\nsize=60000\nloss: 1.584166  &#91;   64\/60000]\nloss: 1.549330  &#91; 6464\/60000]\nloss: 1.415807  &#91;12864\/60000]\nloss: 1.485298  &#91;19264\/60000]\nloss: 1.358027  &#91;25664\/60000]\nloss: 1.356192  &#91;32064\/60000]\nloss: 1.363756  &#91;38464\/60000]\nloss: 1.292613  &#91;44864\/60000]\nloss: 1.323514  &#91;51264\/60000]\nloss: 1.225876  &#91;57664\/60000]\nTest Error: \n Accuracy: 62.5%, Avg loss: 1.258216 \n\nEpoch 5\n-------------------------------\nsize=60000\nloss: 1.329733  &#91;   64\/60000]\nloss: 1.309326  &#91; 6464\/60000]\nloss: 1.161434  &#91;12864\/60000]\nloss: 1.264519  &#91;19264\/60000]\nloss: 1.133686  &#91;25664\/60000]\nloss: 1.158902  &#91;32064\/60000]\nloss: 1.173635  &#91;38464\/60000]\nloss: 1.112870  &#91;44864\/60000]\nloss: 1.150839  &#91;51264\/60000]\nloss: 1.069254  &#91;57664\/60000]\nTest Error: \n Accuracy: 64.2%, Avg loss: 1.094074 \n\nDone!\n<\/code><\/pre>\n\n\n\n<p>Read more about&nbsp;<a href=\"optimization_tutorial.html\">Training your model<\/a>.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h1 class=\"wp-block-heading\">Saving Models<\/h1>\n\n\n\n<p>A common way to save a model is to serialize the internal state dictionary (containing the model parameters).<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb15-1\"><\/a><strong>for<\/strong> var_name <strong>in<\/strong> model.state_dict():\n<a href=\"#cb15-2\"><\/a>    print(var_name, \"\\t\", model.state_dict()&#91;var_name])\n<a href=\"#cb15-3\"><\/a>\n<a href=\"#cb15-4\"><\/a><strong>for<\/strong> var_name <strong>in<\/strong> optimizer.state_dict():\n<a href=\"#cb15-5\"><\/a>    print(var_name, \"\\t\", optimizer.state_dict()&#91;var_name])<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>state \t {}\nparam_groups \t &#91;{'lr': 0.001, 'momentum': 0, 'dampening': 0, 'weight_decay': 0, 'nesterov': False, 'maximize': False, 'foreach': None, 'differentiable': False, 'fused': None, 'params': &#91;0, 1, 2, 3, 4, 5]}]\n<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb17-1\"><\/a>torch.save(model.state_dict(), \"model.pth\")\n<a href=\"#cb17-2\"><\/a>print(\"Saved PyTorch Model State to model.pth\")<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>Saved PyTorch Model State to model.pth\n<\/code><\/pre>\n\n\n\n<h1 class=\"wp-block-heading\">Loading Models<\/h1>\n\n\n\n<p>The process for loading a model includes re-creating the model structure and loading the state dictionary into it.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb19-1\"><\/a>model = NeuralNetwork().to(device)\n<a href=\"#cb19-2\"><\/a>model.load_state_dict(torch.load(\"model.pth\", weights_only=True))<\/code><\/pre>\n\n\n\n<p>This model can now be used to make predictions.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><a href=\"#cb20-1\"><\/a>classes = &#91;\n<a href=\"#cb20-2\"><\/a>    \"T-shirt\/top\",\n<a href=\"#cb20-3\"><\/a>    \"Trouser\",\n<a href=\"#cb20-4\"><\/a>    \"Pullover\",\n<a href=\"#cb20-5\"><\/a>    \"Dress\",\n<a href=\"#cb20-6\"><\/a>    \"Coat\",\n<a href=\"#cb20-7\"><\/a>    \"Sandal\",\n<a href=\"#cb20-8\"><\/a>    \"Shirt\",\n<a href=\"#cb20-9\"><\/a>    \"Sneaker\",\n<a href=\"#cb20-10\"><\/a>    \"Bag\",\n<a href=\"#cb20-11\"><\/a>    \"Ankle boot\",\n<a href=\"#cb20-12\"><\/a>]\n<a href=\"#cb20-13\"><\/a>\n<a href=\"#cb20-14\"><\/a>model.eval()\n<a href=\"#cb20-15\"><\/a>x, y = test_data&#91;0]&#91;0], test_data&#91;0]&#91;1]\n<a href=\"#cb20-16\"><\/a><strong>with<\/strong> torch.no_grad():\n<a href=\"#cb20-17\"><\/a>    x = x.to(device)\n<a href=\"#cb20-18\"><\/a>    pred = model(x)\n<a href=\"#cb20-19\"><\/a>    predicted, actual = classes&#91;pred&#91;0].argmax(0)], classes&#91;y]\n<a href=\"#cb20-20\"><\/a>    print(f'Predicted: \"{predicted}\", Actual: \"{actual}\"')<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>Predicted: \"Ankle boot\", Actual: \"Ankle boot\"\n<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Quickstart This section runs through the API for common &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/thereisno.top\/?p=2468\" class=\"more-link\">\u7ee7\u7eed\u9605\u8bfb<span class=\"screen-reader-text\">\u201ctorch \u5feb\u901f\u5165\u95e8\u201d<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[246,14],"tags":[253],"class_list":["post-2468","post","type-post","status-publish","format-standard","hentry","category-ai","category-python","tag-torch"],"_links":{"self":[{"href":"https:\/\/thereisno.top\/index.php?rest_route=\/wp\/v2\/posts\/2468","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thereisno.top\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thereisno.top\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thereisno.top\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/thereisno.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2468"}],"version-history":[{"count":2,"href":"https:\/\/thereisno.top\/index.php?rest_route=\/wp\/v2\/posts\/2468\/revisions"}],"predecessor-version":[{"id":2470,"href":"https:\/\/thereisno.top\/index.php?rest_route=\/wp\/v2\/posts\/2468\/revisions\/2470"}],"wp:attachment":[{"href":"https:\/\/thereisno.top\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2468"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thereisno.top\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2468"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thereisno.top\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2468"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}