The code in this post is from the official PyTorch website.
Working With Data
PyTorch provides two basic components for working with data: torch.utils.data.DataLoader and torch.utils.data.Dataset. A Dataset stores the samples and their corresponding labels, while a DataLoader wraps an iterable around the Dataset (a toy example of this division of labour is sketched after the imports below).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
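To make the division of labour between Dataset and DataLoader concrete, here is a minimal toy sketch (the ToyDataset class and its random data are made up for illustration and are not part of the tutorial):

# A custom Dataset only needs __len__ and __getitem__.
# The data here is random and purely illustrative.
class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, n=100):
        self.x = torch.randn(n, 3)          # n samples, 3 features each
        self.y = torch.randint(0, 2, (n,))  # n binary labels

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]     # one (sample, label) pair

toy_loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
xb, yb = next(iter(toy_loader))
print(xb.shape, yb.shape)  # torch.Size([16, 3]) torch.Size([16])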
The torchvision.datasets module contains Dataset objects for many real-world vision datasets, such as CIFAR and COCO. Here we use the FashionMNIST dataset. Every TorchVision Dataset includes two arguments, transform and target_transform, which modify the samples and the labels respectively.
Explanation of some of the arguments:
root: the directory where the dataset is stored. If the dataset has already been downloaded, it will not be downloaded again.
transform: the transformation applied to the images when the dataset is loaded. Here ToTensor() is used, which converts the image data from an (H, W, C)-shaped numpy array (or PIL Image) into a (C, H, W)-shaped PyTorch tensor and rescales the values from the 0-255 range to the 0-1 range. A quick check of this conversion follows the dataset-creation code below.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
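As a quick sanity check of the ToTensor() behaviour described above (an optional snippet, not part of the original code), we can inspect a single sample:

# Inspect one sample to confirm the ToTensor() conversion.
img, label = training_data[0]
print(img.shape)                           # torch.Size([1, 28, 28]) -> (C, H, W)
print(img.min().item(), img.max().item())  # pixel values scaled into [0.0, 1.0]
print(label)                               # an integer class index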
We pass the Dataset as an argument to DataLoader. This wraps an iterable over our dataset and supports automatic batching, sampling, shuffling and multiprocess data loading. More importantly, the DataLoader lets us load one small batch at a time, process it, and then move on to the next batch. Here we define a batch size of 64, i.e. each element of the dataloader iterable returns a batch of 64 features and labels.
batch_size = 64

train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break
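The shuffling and multiprocess loading mentioned above are opt-in per DataLoader. A rough sketch of how they could be enabled (the name shuffled_train_dataloader and the value num_workers=2 are illustrative choices, not part of the tutorial's code):

# Alternative loader with per-epoch shuffling and background workers;
# the train_dataloader defined above is left untouched.
shuffled_train_dataloader = DataLoader(
    training_data,
    batch_size=batch_size,
    shuffle=True,      # reshuffle the training data every epoch
    num_workers=2,     # load batches in two background worker processes
)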
Creating Models
device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)
print(f"Using {device} device")


class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits


model = NeuralNetwork().to(device)
print(model)
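To see what the network produces, here is a small hedged example (the random input below is just a stand-in for one grayscale 28x28 image and is not part of the original code):

# Pass one random "image" through the model; the output is a single
# row of 10 raw logits, one per FashionMNIST class.
X = torch.rand(1, 28, 28, device=device)
logits = model(X)
pred_probs = nn.Softmax(dim=1)(logits)
y_pred = pred_probs.argmax(1)
print(logits.shape)  # torch.Size([1, 10])
print(y_pred)        # predicted class index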
Optimizing the Model Parameters
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    # No gradients are needed during evaluation
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
epochs = 20
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")