From 1a41f31f31dbbc1aeea171e2f1c743c9ffdcf8f2 Mon Sep 17 00:00:00 2001 From: Brian Johnson Date: Mon, 6 Mar 2023 16:36:35 -0500 Subject: [PATCH] updated files --- .gitignore | 1 + docs/01-Introduction.md | 43 ++++ docs/02-Quickstart.md | 385 ++++++++++++++++++++++++++++++++ docs/03-Tensors.md | 355 +++++++++++++++++++++++++++++ docs/04-Data.md | 286 ++++++++++++++++++++++++ docs/05-Transforms.md | 77 +++++++ docs/06-BuildModel.md | 312 ++++++++++++++++++++++++++ docs/07-Autograd.md | 294 ++++++++++++++++++++++++ docs/08-Optimization.md | 369 ++++++++++++++++++++++++++++++ docs/09-SaveLoad.md | 141 ++++++++++++ docs/docs/04-Data_21_1.png | Bin 0 -> 7368 bytes docs/docs/04-Data_6_0.png | Bin 0 -> 26209 bytes tutorials/01-Introduction.ipynb | 9 +- 13 files changed, 2265 insertions(+), 7 deletions(-) create mode 100644 docs/01-Introduction.md create mode 100644 docs/02-Quickstart.md create mode 100644 docs/03-Tensors.md create mode 100644 docs/04-Data.md create mode 100644 docs/05-Transforms.md create mode 100644 docs/06-BuildModel.md create mode 100644 docs/07-Autograd.md create mode 100644 docs/08-Optimization.md create mode 100644 docs/09-SaveLoad.md create mode 100644 docs/docs/04-Data_21_1.png create mode 100644 docs/docs/04-Data_6_0.png diff --git a/.gitignore b/.gitignore index 619d98e..156b785 100644 --- a/.gitignore +++ b/.gitignore @@ -21,4 +21,5 @@ yarn-error.log* /tutorials/*.md /tutorials/data +/tutorials/*.pth diff --git a/docs/01-Introduction.md b/docs/01-Introduction.md new file mode 100644 index 0000000..6e643bb --- /dev/null +++ b/docs/01-Introduction.md @@ -0,0 +1,43 @@ +**Learn the Basics** || +[Quickstart](Quickstart.html) || +[Tensors](Tensors.html) || +[Datasets & DataLoaders](Data.html) || +[Transforms](transforms_tutorial.html) || +[Build Model](buildmodel_tutorial.html) || +[Autograd](autogradqs_tutorial.html) || +[Optimization](optimization_tutorial.html) || +[Save & Load Model](saveloadrun_tutorial.html) + +# Learn the Basics + +Authors: +[Suraj Subramanian](https://github.com/suraj813), +[Seth Juarez](https://github.com/sethjuarez/), +[Cassie Breviu](https://github.com/cassieview/), +[Dmitry Soshnikov](https://soshnikov.com/), +[Ari Bornstein](https://github.com/aribornstein/) + +Most machine learning workflows involve working with data, creating models, optimizing model +parameters, and saving the trained models. This tutorial introduces you to a complete ML workflow +implemented in PyTorch, with links to learn more about each of these concepts. + +We'll use the FashionMNIST dataset to train a neural network that predicts if an input image belongs +to one of the following classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, +Bag, or Ankle boot. + +`This tutorial assumes a basic familiarity with Python and Deep Learning concepts.` + + +## Running the Tutorial Code +You can run this tutorial in a couple of ways: + +- **In the cloud**: This is the easiest way to get started! Each section has a "Run in Microsoft Learn" link at the top, which opens an integrated notebook in Microsoft Learn with the code in a fully-hosted environment. +- **Locally**: This option requires you to setup PyTorch and TorchVision first on your local machine ([installation instructions](https://pytorch.org/get-started/locally/)). Download the notebook or copy the code into your favorite IDE. + + +## How to Use this Guide +If you're familiar with other deep learning frameworks, check out the [0. 
Quickstart](quickstart_tutorial.html) first +to quickly familiarize yourself with PyTorch's API. + +If you're new to deep learning frameworks, head right into the first section of our step-by-step guide: [1. Tensors](tensor_tutorial.html). + diff --git a/docs/02-Quickstart.md b/docs/02-Quickstart.md new file mode 100644 index 0000000..c529ed9 --- /dev/null +++ b/docs/02-Quickstart.md @@ -0,0 +1,385 @@ +[Learn the Basics](Introduction.html) || +**Quickstart** || +[Tensors](Tensors.html) || +[Datasets & DataLoaders](Data.html) || +[Transforms](transforms_tutorial.html) || +[Build Model](buildmodel_tutorial.html) || +[Autograd](autogradqs_tutorial.html) || +[Optimization](optimization_tutorial.html) || +[Save & Load Model](saveloadrun_tutorial.html) + +# Quickstart +This section runs through the API for common tasks in machine learning. Refer to the links in each section to dive deeper. + +## Working with data +PyTorch has two [primitives to work with data](https://pytorch.org/docs/stable/data.html): +``torch.utils.data.DataLoader`` and ``torch.utils.data.Dataset``. +``Dataset`` stores the samples and their corresponding labels, and ``DataLoader`` wraps an iterable around +the ``Dataset``. + + + +```python +import torch +from torch import nn +from torch.utils.data import DataLoader +from torchvision import datasets +from torchvision.transforms import ToTensor +``` + +PyTorch offers domain-specific libraries such as [TorchText](https://pytorch.org/text/stable/index.html), +[TorchVision](https://pytorch.org/vision/stable/index.html), and [TorchAudio](https://pytorch.org/audio/stable/index.html), +all of which include datasets. For this tutorial, we will be using a TorchVision dataset. + +The ``torchvision.datasets`` module contains ``Dataset`` objects for many real-world vision data like +CIFAR, COCO ([full list here](https://pytorch.org/vision/stable/datasets.html)). In this tutorial, we +use the FashionMNIST dataset. Every TorchVision ``Dataset`` includes two arguments: ``transform`` and +``target_transform`` to modify the samples and labels respectively. + + + + +```python +# Download training data from open datasets. +training_data = datasets.FashionMNIST( + root="data", + train=True, + download=True, + transform=ToTensor(), +) + +# Download test data from open datasets. +test_data = datasets.FashionMNIST( + root="data", + train=False, + download=True, + transform=ToTensor(), +) +``` + +We pass the ``Dataset`` as an argument to ``DataLoader``. This wraps an iterable over our dataset, and supports +automatic batching, sampling, shuffling and multiprocess data loading. Here we define a batch size of 64, i.e. each element +in the dataloader iterable will return a batch of 64 features and labels. + + + + +```python +batch_size = 64 + +# Create data loaders. +train_dataloader = DataLoader(training_data, batch_size=batch_size) +test_dataloader = DataLoader(test_data, batch_size=batch_size) + +for X, y in test_dataloader: + print(f"Shape of X [N, C, H, W]: {X.shape}") + print(f"Shape of y: {y.shape} {y.dtype}") + break +``` + + Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28]) + Shape of y: torch.Size([64]) torch.int64 + + +Read more about [loading data in PyTorch](data_tutorial.html). + + + + +-------------- + + + + +## Creating Models +To define a neural network in PyTorch, we create a class that inherits +from [nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). 
We define the layers of the network +in the ``__init__`` function and specify how data will pass through the network in the ``forward`` function. To accelerate +operations in the neural network, we move it to the GPU if available. + + + + +```python +# Get cpu or gpu device for training. +device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu" +print(f"Using {device} device") + +# Define model +class NeuralNetwork(nn.Module): + def __init__(self): + super().__init__() + self.flatten = nn.Flatten() + self.linear_relu_stack = nn.Sequential( + nn.Linear(28*28, 512), + nn.ReLU(), + nn.Linear(512, 512), + nn.ReLU(), + nn.Linear(512, 10) + ) + + def forward(self, x): + x = self.flatten(x) + logits = self.linear_relu_stack(x) + return logits + +model = NeuralNetwork().to(device) +print(model) +``` + + Using mps device + NeuralNetwork( + (flatten): Flatten(start_dim=1, end_dim=-1) + (linear_relu_stack): Sequential( + (0): Linear(in_features=784, out_features=512, bias=True) + (1): ReLU() + (2): Linear(in_features=512, out_features=512, bias=True) + (3): ReLU() + (4): Linear(in_features=512, out_features=10, bias=True) + ) + ) + + +Read more about [building neural networks in PyTorch](buildmodel_tutorial.html). + + + + +-------------- + + + + +## Optimizing the Model Parameters +To train a model, we need a [loss function](https://pytorch.org/docs/stable/nn.html#loss-functions) +and an [optimizer](https://pytorch.org/docs/stable/optim.html). + + + + +```python +loss_fn = nn.CrossEntropyLoss() +optimizer = torch.optim.SGD(model.parameters(), lr=1e-3) +``` + +In a single training loop, the model makes predictions on the training dataset (fed to it in batches), and +backpropagates the prediction error to adjust the model's parameters. + + + + +```python +def train(dataloader, model, loss_fn, optimizer): + size = len(dataloader.dataset) + model.train() + for batch, (X, y) in enumerate(dataloader): + X, y = X.to(device), y.to(device) + + # Compute prediction error + pred = model(X) + loss = loss_fn(pred, y) + + # Backpropagation + optimizer.zero_grad() + loss.backward() + optimizer.step() + + if batch % 100 == 0: + loss, current = loss.item(), batch * len(X) + print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]") +``` + +We also check the model's performance against the test dataset to ensure it is learning. + + + + +```python +def test(dataloader, model, loss_fn): + size = len(dataloader.dataset) + num_batches = len(dataloader) + model.eval() + test_loss, correct = 0, 0 + with torch.no_grad(): + for X, y in dataloader: + X, y = X.to(device), y.to(device) + pred = model(X) + test_loss += loss_fn(pred, y).item() + correct += (pred.argmax(1) == y).type(torch.float).sum().item() + test_loss /= num_batches + correct /= size + print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n") +``` + +The training process is conducted over several iterations (*epochs*). During each epoch, the model learns +parameters to make better predictions. We print the model's accuracy and loss at each epoch; we'd like to see the +accuracy increase and the loss decrease with every epoch. 
+ + + + +```python +epochs = 5 +for t in range(epochs): + print(f"Epoch {t+1}\n-------------------------------") + train(train_dataloader, model, loss_fn, optimizer) + test(test_dataloader, model, loss_fn) +print("Done!") +``` + + Epoch 1 + ------------------------------- + loss: 2.300704 [ 0/60000] + loss: 2.294491 [ 6400/60000] + loss: 2.270792 [12800/60000] + loss: 2.270757 [19200/60000] + loss: 2.246651 [25600/60000] + loss: 2.223734 [32000/60000] + loss: 2.230299 [38400/60000] + loss: 2.197789 [44800/60000] + loss: 2.186385 [51200/60000] + loss: 2.171854 [57600/60000] + Test Error: + Accuracy: 40.4%, Avg loss: 2.158354 + + Epoch 2 + ------------------------------- + loss: 2.157282 [ 0/60000] + loss: 2.157837 [ 6400/60000] + loss: 2.098653 [12800/60000] + loss: 2.123712 [19200/60000] + loss: 2.070209 [25600/60000] + loss: 2.017735 [32000/60000] + loss: 2.044564 [38400/60000] + loss: 1.971302 [44800/60000] + loss: 1.963748 [51200/60000] + loss: 1.920766 [57600/60000] + Test Error: + Accuracy: 55.5%, Avg loss: 1.902382 + + Epoch 3 + ------------------------------- + loss: 1.919148 [ 0/60000] + loss: 1.903148 [ 6400/60000] + loss: 1.782882 [12800/60000] + loss: 1.834309 [19200/60000] + loss: 1.722989 [25600/60000] + loss: 1.676954 [32000/60000] + loss: 1.698752 [38400/60000] + loss: 1.602475 [44800/60000] + loss: 1.614792 [51200/60000] + loss: 1.532669 [57600/60000] + Test Error: + Accuracy: 61.7%, Avg loss: 1.533873 + + Epoch 4 + ------------------------------- + loss: 1.585873 [ 0/60000] + loss: 1.560321 [ 6400/60000] + loss: 1.407954 [12800/60000] + loss: 1.488211 [19200/60000] + loss: 1.364034 [25600/60000] + loss: 1.362447 [32000/60000] + loss: 1.370802 [38400/60000] + loss: 1.302972 [44800/60000] + loss: 1.327800 [51200/60000] + loss: 1.235748 [57600/60000] + Test Error: + Accuracy: 63.4%, Avg loss: 1.260575 + + Epoch 5 + ------------------------------- + loss: 1.331637 [ 0/60000] + loss: 1.313866 [ 6400/60000] + loss: 1.153163 [12800/60000] + loss: 1.257744 [19200/60000] + loss: 1.137783 [25600/60000] + loss: 1.162715 [32000/60000] + loss: 1.172138 [38400/60000] + loss: 1.120971 [44800/60000] + loss: 1.149632 [51200/60000] + loss: 1.069323 [57600/60000] + Test Error: + Accuracy: 64.6%, Avg loss: 1.093657 + + Done! + + +Read more about [Training your model](optimization_tutorial.html). + + + + +-------------- + + + + +## Saving Models +A common way to save a model is to serialize the internal state dictionary (containing the model parameters). + + + + +```python +torch.save(model.state_dict(), "model.pth") +print("Saved PyTorch Model State to model.pth") +``` + + Saved PyTorch Model State to model.pth + + +## Loading Models + +The process for loading a model includes re-creating the model structure and loading +the state dictionary into it. + + + + +```python +model = NeuralNetwork() +model.load_state_dict(torch.load("model.pth")) +``` + + + + + + + + +This model can now be used to make predictions. + + + + +```python +classes = [ + "T-shirt/top", + "Trouser", + "Pullover", + "Dress", + "Coat", + "Sandal", + "Shirt", + "Sneaker", + "Bag", + "Ankle boot", +] + +model.eval() +x, y = test_data[0][0], test_data[0][1] +with torch.no_grad(): + pred = model(x) + predicted, actual = classes[pred[0].argmax(0)], classes[y] + print(f'Predicted: "{predicted}", Actual: "{actual}"') +``` + + Predicted: "Ankle boot", Actual: "Ankle boot" + + +Read more about [Saving & Loading your model](saveloadrun_tutorial.html). 
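One wrinkle worth noting: ``torch.load`` restores tensors to the device they were saved from, and the model above may have been trained on ``cuda`` or ``mps``. Below is a minimal sketch, not part of the original tutorial, of loading the checkpoint onto whatever device the current machine actually has (``map_location`` is the standard ``torch.load`` argument for this; the file name follows the example above):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = NeuralNetwork()
# map_location remaps storages saved on another device (e.g. "mps")
# onto the chosen one, so loading works on machines without that accelerator.
model.load_state_dict(torch.load("model.pth", map_location=device))
model = model.to(device)
model.eval()
```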
+ + + diff --git a/docs/03-Tensors.md b/docs/03-Tensors.md new file mode 100644 index 0000000..d87b048 --- /dev/null +++ b/docs/03-Tensors.md @@ -0,0 +1,355 @@ +[Learn the Basics](intro.html) || +[Quickstart](quickstart_tutorial.html) || +**Tensors** || +[Datasets & DataLoaders](data_tutorial.html) || +[Transforms](transforms_tutorial.html) || +[Build Model](buildmodel_tutorial.html) || +[Autograd](autogradqs_tutorial.html) || +[Optimization](optimization_tutorial.html) || +[Save & Load Model](saveloadrun_tutorial.html) + +# Tensors + +Tensors are a specialized data structure that are very similar to arrays and matrices. +In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters. + +Tensors are similar to [NumPy’s](https://numpy.org/) ndarrays, except that tensors can run on GPUs or other hardware accelerators. In fact, tensors and +NumPy arrays can often share the same underlying memory, eliminating the need to copy data (see `bridge-to-np-label`). Tensors +are also optimized for automatic differentiation (we'll see more about that later in the [Autograd](autogradqs_tutorial.html)_ +section). If you’re familiar with ndarrays, you’ll be right at home with the Tensor API. If not, follow along! + + + +```python +import torch +import numpy as np +``` + +## Initializing a Tensor + +Tensors can be initialized in various ways. Take a look at the following examples: + +**Directly from data** + +Tensors can be created directly from data. The data type is automatically inferred. + + + + +```python +data = [[1, 2],[3, 4]] +x_data = torch.tensor(data) +``` + +**From a NumPy array** + +Tensors can be created from NumPy arrays (and vice versa - see `bridge-to-np-label`). + + + + +```python +np_array = np.array(data) +x_np = torch.from_numpy(np_array) +``` + +**From another tensor:** + +The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden. + + + + +```python +x_ones = torch.ones_like(x_data) # retains the properties of x_data +print(f"Ones Tensor: \n {x_ones} \n") + +x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data +print(f"Random Tensor: \n {x_rand} \n") +``` + + Ones Tensor: + tensor([[1, 1], + [1, 1]]) + + Random Tensor: + tensor([[0.0504, 0.9505], + [0.6485, 0.6105]]) + + + +**With random or constant values:** + +``shape`` is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor. + + + + +```python +shape = (2,3,) +rand_tensor = torch.rand(shape) +ones_tensor = torch.ones(shape) +zeros_tensor = torch.zeros(shape) + +print(f"Random Tensor: \n {rand_tensor} \n") +print(f"Ones Tensor: \n {ones_tensor} \n") +print(f"Zeros Tensor: \n {zeros_tensor}") +``` + + Random Tensor: + tensor([[0.6582, 0.2838, 0.1244], + [0.1692, 0.0394, 0.2638]]) + + Ones Tensor: + tensor([[1., 1., 1.], + [1., 1., 1.]]) + + Zeros Tensor: + tensor([[0., 0., 0.], + [0., 0., 0.]]) + + +-------------- + + + + +## Attributes of a Tensor + +Tensor attributes describe their shape, datatype, and the device on which they are stored. 
+ + + + +```python +tensor = torch.rand(3,4) + +print(f"Shape of tensor: {tensor.shape}") +print(f"Datatype of tensor: {tensor.dtype}") +print(f"Device tensor is stored on: {tensor.device}") +``` + + Shape of tensor: torch.Size([3, 4]) + Datatype of tensor: torch.float32 + Device tensor is stored on: cpu + + +-------------- + + + + +## Operations on Tensors + +Over 100 tensor operations, including arithmetic, linear algebra, matrix manipulation (transposing, +indexing, slicing), sampling and more are +comprehensively described [here](https://pytorch.org/docs/stable/torch.html)_. + +Each of these operations can be run on the GPU (at typically higher speeds than on a +CPU). If you’re using Colab, allocate a GPU by going to Runtime > Change runtime type > GPU. + +By default, tensors are created on the CPU. We need to explicitly move tensors to the GPU using +``.to`` method (after checking for GPU availability). Keep in mind that copying large tensors +across devices can be expensive in terms of time and memory! + + + + +```python +# We move our tensor to the GPU if available +if torch.cuda.is_available(): + tensor = tensor.to("cuda") +``` + +Try out some of the operations from the list. +If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use. + + + + +**Standard numpy-like indexing and slicing:** + + + + +```python +tensor = torch.ones(4, 4) +print(f"First row: {tensor[0]}") +print(f"First column: {tensor[:, 0]}") +print(f"Last column: {tensor[..., -1]}") +tensor[:,1] = 0 +print(tensor) +``` + + First row: tensor([1., 1., 1., 1.]) + First column: tensor([1., 1., 1., 1.]) + Last column: tensor([1., 1., 1., 1.]) + tensor([[1., 0., 1., 1.], + [1., 0., 1., 1.], + [1., 0., 1., 1.], + [1., 0., 1., 1.]]) + + +**Joining tensors** You can use ``torch.cat`` to concatenate a sequence of tensors along a given dimension. +See also [torch.stack](https://pytorch.org/docs/stable/generated/torch.stack.html)_, +another tensor joining op that is subtly different from ``torch.cat``. + + + + +```python +t1 = torch.cat([tensor, tensor, tensor], dim=1) +print(t1) +``` + + tensor([[1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.], + [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.], + [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.], + [1., 0., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1.]]) + + +**Arithmetic operations** + + + + +```python +# This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value +# ``tensor.T`` returns the transpose of a tensor +y1 = tensor @ tensor.T +y2 = tensor.matmul(tensor.T) + +y3 = torch.rand_like(y1) +torch.matmul(tensor, tensor.T, out=y3) + + +# This computes the element-wise product. z1, z2, z3 will have the same value +z1 = tensor * tensor +z2 = tensor.mul(tensor) + +z3 = torch.rand_like(tensor) +torch.mul(tensor, tensor, out=z3) +``` + + + + + tensor([[1., 0., 1., 1.], + [1., 0., 1., 1.], + [1., 0., 1., 1.], + [1., 0., 1., 1.]]) + + + +**Single-element tensors** If you have a one-element tensor, for example by aggregating all +values of a tensor into one value, you can convert it to a Python +numerical value using ``item()``: + + + + +```python +agg = tensor.sum() +agg_item = agg.item() +print(agg_item, type(agg_item)) +``` + + 12.0 + + +**In-place operations** +Operations that store the result into the operand are called in-place. They are denoted by a ``_`` suffix. +For example: ``x.copy_(y)``, ``x.t_()``, will change ``x``. 
+ + + + +```python +print(f"{tensor} \n") +tensor.add_(5) +print(tensor) +``` + + tensor([[1., 0., 1., 1.], + [1., 0., 1., 1.], + [1., 0., 1., 1.], + [1., 0., 1., 1.]]) + + tensor([[6., 5., 6., 6.], + [6., 5., 6., 6.], + [6., 5., 6., 6.], + [6., 5., 6., 6.]]) + + +

> **Note:** In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss of history. Hence, their use is discouraged.
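To make the caveat concrete, here is a tiny sketch (not part of the tutorial): an in-place update of a leaf tensor that requires gradients is rejected outright.

```python
x = torch.ones(3, requires_grad=True)
try:
    x.add_(1)   # in-place update of a leaf tensor that requires grad
except RuntimeError as err:
    print(err)  # autograd refuses the operation to protect the recorded history
```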

+ + + +-------------- + + + + + +## Bridge with NumPy +Tensors on the CPU and NumPy arrays can share their underlying memory +locations, and changing one will change the other. + + + +### Tensor to NumPy array + + + + +```python +t = torch.ones(5) +print(f"t: {t}") +n = t.numpy() +print(f"n: {n}") +``` + + t: tensor([1., 1., 1., 1., 1.]) + n: [1. 1. 1. 1. 1.] + + +A change in the tensor reflects in the NumPy array. + + + + +```python +t.add_(1) +print(f"t: {t}") +print(f"n: {n}") +``` + + t: tensor([2., 2., 2., 2., 2.]) + n: [2. 2. 2. 2. 2.] + + +### NumPy array to Tensor + + + + +```python +n = np.ones(5) +t = torch.from_numpy(n) +``` + +Changes in the NumPy array reflects in the tensor. + + + + +```python +np.add(n, 1, out=n) +print(f"t: {t}") +print(f"n: {n}") +``` + + t: tensor([2., 2., 2., 2., 2.], dtype=torch.float64) + n: [2. 2. 2. 2. 2.] + diff --git a/docs/04-Data.md b/docs/04-Data.md new file mode 100644 index 0000000..8cd12e9 --- /dev/null +++ b/docs/04-Data.md @@ -0,0 +1,286 @@ +```python +%matplotlib inline +``` + + +[Learn the Basics](intro.html) || +[Quickstart](quickstart_tutorial.html) || +[Tensors](tensorqs_tutorial.html) || +**Datasets & DataLoaders** || +[Transforms](transforms_tutorial.html) || +[Build Model](buildmodel_tutorial.html) || +[Autograd](autogradqs_tutorial.html) || +[Optimization](optimization_tutorial.html) || +[Save & Load Model](saveloadrun_tutorial.html) + +# Datasets & DataLoaders + + +Code for processing data samples can get messy and hard to maintain; we ideally want our dataset code +to be decoupled from our model training code for better readability and modularity. +PyTorch provides two data primitives: ``torch.utils.data.DataLoader`` and ``torch.utils.data.Dataset`` +that allow you to use pre-loaded datasets as well as your own data. +``Dataset`` stores the samples and their corresponding labels, and ``DataLoader`` wraps an iterable around +the ``Dataset`` to enable easy access to the samples. + +PyTorch domain libraries provide a number of pre-loaded datasets (such as FashionMNIST) that +subclass ``torch.utils.data.Dataset`` and implement functions specific to the particular data. +They can be used to prototype and benchmark your model. You can find them +here: [Image Datasets](https://pytorch.org/vision/stable/datasets.html), +[Text Datasets](https://pytorch.org/text/stable/datasets.html), and +[Audio Datasets](https://pytorch.org/audio/stable/datasets.html) + + + + +## Loading a Dataset + +Here is an example of how to load the [Fashion-MNIST](https://research.zalando.com/project/fashion_mnist/fashion_mnist/) dataset from TorchVision. +Fashion-MNIST is a dataset of Zalando’s article images consisting of 60,000 training examples and 10,000 test examples. +Each example comprises a 28×28 grayscale image and an associated label from one of 10 classes. + +We load the [FashionMNIST Dataset](https://pytorch.org/vision/stable/datasets.html#fashion-mnist) with the following parameters: + - ``root`` is the path where the train/test data is stored, + - ``train`` specifies training or test dataset, + - ``download=True`` downloads the data from the internet if it's not available at ``root``. 
+ - ``transform`` and ``target_transform`` specify the feature and label transformations + + + + +```python +import torch +from torch.utils.data import Dataset +from torchvision import datasets +from torchvision.transforms import ToTensor +import matplotlib.pyplot as plt + + +training_data = datasets.FashionMNIST( + root="data", + train=True, + download=True, + transform=ToTensor() +) + +test_data = datasets.FashionMNIST( + root="data", + train=False, + download=True, + transform=ToTensor() +) +``` + +## Iterating and Visualizing the Dataset + +We can index ``Datasets`` manually like a list: ``training_data[index]``. +We use ``matplotlib`` to visualize some samples in our training data. + + + + +```python +labels_map = { + 0: "T-Shirt", + 1: "Trouser", + 2: "Pullover", + 3: "Dress", + 4: "Coat", + 5: "Sandal", + 6: "Shirt", + 7: "Sneaker", + 8: "Bag", + 9: "Ankle Boot", +} +figure = plt.figure(figsize=(8, 8)) +cols, rows = 3, 3 +for i in range(1, cols * rows + 1): + sample_idx = torch.randint(len(training_data), size=(1,)).item() + img, label = training_data[sample_idx] + figure.add_subplot(rows, cols, i) + plt.title(labels_map[label]) + plt.axis("off") + plt.imshow(img.squeeze(), cmap="gray") +plt.show() +``` + + + +![png](../docs/04-Data_files/../docs/04-Data_6_0.png) + + + +.. + .. figure:: /_static/img/basics/fashion_mnist.png + :alt: fashion_mnist + + + +-------------- + + + + +## Creating a Custom Dataset for your files + +A custom Dataset class must implement three functions: `__init__`, `__len__`, and `__getitem__`. +Take a look at this implementation; the FashionMNIST images are stored +in a directory ``img_dir``, and their labels are stored separately in a CSV file ``annotations_file``. + +In the next sections, we'll break down what's happening in each of these functions. + + + + +```python +import os +import pandas as pd +from torchvision.io import read_image + +class CustomImageDataset(Dataset): + def __init__(self, annotations_file, img_dir, transform=None, target_transform=None): + self.img_labels = pd.read_csv(annotations_file) + self.img_dir = img_dir + self.transform = transform + self.target_transform = target_transform + + def __len__(self): + return len(self.img_labels) + + def __getitem__(self, idx): + img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0]) + image = read_image(img_path) + label = self.img_labels.iloc[idx, 1] + if self.transform: + image = self.transform(image) + if self.target_transform: + label = self.target_transform(label) + return image, label +``` + +### __init__ + +The __init__ function is run once when instantiating the Dataset object. We initialize +the directory containing the images, the annotations file, and both transforms (covered +in more detail in the next section). + +The labels.csv file looks like: :: + + tshirt1.jpg, 0 + tshirt2.jpg, 0 + ...... + ankleboot999.jpg, 9 + + + + +```python +def __init__(self, annotations_file, img_dir, transform=None, target_transform=None): + self.img_labels = pd.read_csv(annotations_file) + self.img_dir = img_dir + self.transform = transform + self.target_transform = target_transform +``` + +### __len__ + +The __len__ function returns the number of samples in our dataset. + +Example: + + + + +```python +def __len__(self): + return len(self.img_labels) +``` + +### __getitem__ + +The __getitem__ function loads and returns a sample from the dataset at the given index ``idx``. 
+Based on the index, it identifies the image's location on disk, converts that to a tensor using ``read_image``, retrieves the +corresponding label from the csv data in ``self.img_labels``, calls the transform functions on them (if applicable), and returns the +tensor image and corresponding label in a tuple. + + + + +```python +def __getitem__(self, idx): + img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0]) + image = read_image(img_path) + label = self.img_labels.iloc[idx, 1] + if self.transform: + image = self.transform(image) + if self.target_transform: + label = self.target_transform(label) + return image, label +``` + +-------------- + + + + +## Preparing your data for training with DataLoaders +The ``Dataset`` retrieves our dataset's features and labels one sample at a time. While training a model, we typically want to +pass samples in "minibatches", reshuffle the data at every epoch to reduce model overfitting, and use Python's ``multiprocessing`` to +speed up data retrieval. + +``DataLoader`` is an iterable that abstracts this complexity for us in an easy API. + + + + +```python +from torch.utils.data import DataLoader + +train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True) +test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True) +``` + +## Iterate through the DataLoader + +We have loaded that dataset into the ``DataLoader`` and can iterate through the dataset as needed. +Each iteration below returns a batch of ``train_features`` and ``train_labels`` (containing ``batch_size=64`` features and labels respectively). +Because we specified ``shuffle=True``, after we iterate over all batches the data is shuffled (for finer-grained control over +the data loading order, take a look at [Samplers](https://pytorch.org/docs/stable/data.html#data-loading-order-and-sampler)). + + + + +```python +# Display image and label. +train_features, train_labels = next(iter(train_dataloader)) +print(f"Feature batch shape: {train_features.size()}") +print(f"Labels batch shape: {train_labels.size()}") +img = train_features[0].squeeze() +label = train_labels[0] +plt.imshow(img, cmap="gray") +plt.show() +print(f"Label: {label}") +``` + + Feature batch shape: torch.Size([64, 1, 28, 28]) + Labels batch shape: torch.Size([64]) + + + + +![png](../docs/04-Data_files/../docs/04-Data_21_1.png) + + + + Label: 1 + + +-------------- + + + + +## Further Reading +- [torch.utils.data API](https://pytorch.org/docs/stable/data.html) + + diff --git a/docs/05-Transforms.md b/docs/05-Transforms.md new file mode 100644 index 0000000..20d50be --- /dev/null +++ b/docs/05-Transforms.md @@ -0,0 +1,77 @@ +[Learn the Basics](intro.html) || +[Quickstart](quickstart_tutorial.html) || +[Tensors](tensorqs_tutorial.html) || +[Datasets & DataLoaders](data_tutorial.html) || +**Transforms** || +[Build Model](buildmodel_tutorial.html) || +[Autograd](autogradqs_tutorial.html) || +[Optimization](optimization_tutorial.html) || +[Save & Load Model](saveloadrun_tutorial.html) + +# Transforms + +Data does not always come in its final processed form that is required for +training machine learning algorithms. We use **transforms** to perform some +manipulation of the data and make it suitable for training. + +All TorchVision datasets have two parameters -``transform`` to modify the features and +``target_transform`` to modify the labels - that accept callables containing the transformation logic. 
+The [torchvision.transforms](https://pytorch.org/vision/stable/transforms.html) module offers +several commonly-used transforms out of the box. + +The FashionMNIST features are in PIL Image format, and the labels are integers. +For training, we need the features as normalized tensors, and the labels as one-hot encoded tensors. +To make these transformations, we use ``ToTensor`` and ``Lambda``. + + + +```python +%matplotlib inline + +import torch +from torchvision import datasets +from torchvision.transforms import ToTensor, Lambda + +ds = datasets.FashionMNIST( + root="data", + train=True, + download=True, + transform=ToTensor(), + target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)) +) +``` + +## ToTensor() + +[ToTensor](https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.ToTensor) +converts a PIL image or NumPy ``ndarray`` into a ``FloatTensor``. and scales +the image's pixel intensity values in the range [0., 1.] + + + + +## Lambda Transforms + +Lambda transforms apply any user-defined lambda function. Here, we define a function +to turn the integer into a one-hot encoded tensor. +It first creates a zero tensor of size 10 (the number of labels in our dataset) and calls +[scatter_](https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html) which assigns a +``value=1`` on the index as given by the label ``y``. + + + + +```python +target_transform = Lambda(lambda y: torch.zeros( + 10, dtype=torch.float).scatter_(dim=0, index=torch.tensor(y), value=1)) +``` + +-------------- + + + + +### Further Reading +- [torchvision.transforms API](https://pytorch.org/vision/stable/transforms.html) + + diff --git a/docs/06-BuildModel.md b/docs/06-BuildModel.md new file mode 100644 index 0000000..92a7fca --- /dev/null +++ b/docs/06-BuildModel.md @@ -0,0 +1,312 @@ +[Learn the Basics](intro.html) || +[Quickstart](quickstart_tutorial.html) || +[Tensors](tensorqs_tutorial.html) || +[Datasets & DataLoaders](data_tutorial.html) || +[Transforms](transforms_tutorial.html) || +**Build Model** || +[Autograd](autogradqs_tutorial.html) || +[Optimization](optimization_tutorial.html) || +[Save & Load Model](saveloadrun_tutorial.html) + +# Build the Neural Network + +Neural networks comprise of layers/modules that perform operations on data. +The [torch.nn](https://pytorch.org/docs/stable/nn.html) namespace provides all the building blocks you need to +build your own neural network. Every module in PyTorch subclasses the [nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html). +A neural network is a module itself that consists of other modules (layers). This nested structure allows for +building and managing complex architectures easily. + +In the following sections, we'll build a neural network to classify images in the FashionMNIST dataset. + + + +```python +%matplotlib inline + +import os +import torch +from torch import nn +from torch.utils.data import DataLoader +from torchvision import datasets, transforms +``` + +## Get Device for Training +We want to be able to train our model on a hardware accelerator like the GPU, +if it is available. Let's check to see if +[torch.cuda](https://pytorch.org/docs/stable/notes/cuda.html) is available, else we +continue to use the CPU. 
+ + + + +```python +device = "cuda" if torch.cuda.is_available() else "cpu" +print(f"Using {device} device") +``` + + Using cpu device + + +## Define the Class +We define our neural network by subclassing ``nn.Module``, and +initialize the neural network layers in ``__init__``. Every ``nn.Module`` subclass implements +the operations on input data in the ``forward`` method. + + + + +```python +class NeuralNetwork(nn.Module): + def __init__(self): + super().__init__() + self.flatten = nn.Flatten() + self.linear_relu_stack = nn.Sequential( + nn.Linear(28*28, 512), + nn.ReLU(), + nn.Linear(512, 512), + nn.ReLU(), + nn.Linear(512, 10), + ) + + def forward(self, x): + x = self.flatten(x) + logits = self.linear_relu_stack(x) + return logits +``` + +We create an instance of ``NeuralNetwork``, and move it to the ``device``, and print +its structure. + + + + +```python +model = NeuralNetwork().to(device) +print(model) +``` + + NeuralNetwork( + (flatten): Flatten(start_dim=1, end_dim=-1) + (linear_relu_stack): Sequential( + (0): Linear(in_features=784, out_features=512, bias=True) + (1): ReLU() + (2): Linear(in_features=512, out_features=512, bias=True) + (3): ReLU() + (4): Linear(in_features=512, out_features=10, bias=True) + ) + ) + + +To use the model, we pass it the input data. This executes the model's ``forward``, +along with some [background operations](https://github.com/pytorch/pytorch/blob/270111b7b611d174967ed204776985cefca9c144/torch/nn/modules/module.py#L866). +Do not call ``model.forward()`` directly! + +Calling the model on the input returns a 2-dimensional tensor with dim=0 corresponding to each output of 10 raw predicted values for each class, and dim=1 corresponding to the individual values of each output. +We get the prediction probabilities by passing it through an instance of the ``nn.Softmax`` module. + + + + +```python +X = torch.rand(1, 28, 28, device=device) +logits = model(X) +pred_probab = nn.Softmax(dim=1)(logits) +y_pred = pred_probab.argmax(1) +print(f"Predicted class: {y_pred}") +``` + + Predicted class: tensor([9]) + + +-------------- + + + + +## Model Layers + +Let's break down the layers in the FashionMNIST model. To illustrate it, we +will take a sample minibatch of 3 images of size 28x28 and see what happens to it as +we pass it through the network. + + + + +```python +input_image = torch.rand(3,28,28) +print(input_image.size()) +``` + + torch.Size([3, 28, 28]) + + +### nn.Flatten +We initialize the [nn.Flatten](https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html) +layer to convert each 2D 28x28 image into a contiguous array of 784 pixel values ( +the minibatch dimension (at dim=0) is maintained). + + + + +```python +flatten = nn.Flatten() +flat_image = flatten(input_image) +print(flat_image.size()) +``` + + torch.Size([3, 784]) + + +### nn.Linear +The [linear layer](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) +is a module that applies a linear transformation on the input using its stored weights and biases. + + + + + +```python +layer1 = nn.Linear(in_features=28*28, out_features=20) +hidden1 = layer1(flat_image) +print(hidden1.size()) +``` + + torch.Size([3, 20]) + + +### nn.ReLU +Non-linear activations are what create the complex mappings between the model's inputs and outputs. +They are applied after linear transformations to introduce *nonlinearity*, helping neural networks +learn a wide variety of phenomena. 
+ +In this model, we use [nn.ReLU](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html) between our +linear layers, but there's other activations to introduce non-linearity in your model. + + + + +```python +print(f"Before ReLU: {hidden1}\n\n") +hidden1 = nn.ReLU()(hidden1) +print(f"After ReLU: {hidden1}") +``` + + Before ReLU: tensor([[-5.5712e-01, 4.1135e-01, -7.4510e-03, -5.4891e-02, 7.3538e-02, + 4.6617e-01, 5.3287e-01, 7.2283e-02, -3.7471e-01, -3.9285e-01, + -6.7889e-01, 2.1088e-01, 1.8742e-01, 4.0150e-01, -5.6422e-02, + -4.8977e-02, -1.6230e-01, 3.0556e-01, -7.1455e-01, -6.6180e-02], + [-4.2601e-01, 6.2487e-01, -5.9415e-02, 2.3934e-02, 3.9810e-01, + 3.2441e-01, 7.0026e-01, -1.2423e-01, -5.2260e-01, -1.7234e-01, + -5.5835e-01, 2.2128e-01, 2.7830e-01, 2.4191e-01, -7.7681e-02, + -2.4954e-01, 1.5836e-01, 1.9990e-01, -1.1715e-01, -3.2138e-01], + [-4.9225e-01, 4.1050e-01, -1.5492e-01, 8.9106e-03, 3.5985e-01, + 3.1355e-01, 6.2615e-01, -1.9053e-04, -5.7080e-01, -1.7064e-01, + -6.5802e-01, 3.3700e-01, 4.5726e-01, 3.1022e-01, -4.0316e-01, + -3.8029e-01, -1.2243e-01, 3.6732e-01, -5.6789e-01, -9.4490e-02]], + grad_fn=) + + + After ReLU: tensor([[0.0000, 0.4113, 0.0000, 0.0000, 0.0735, 0.4662, 0.5329, 0.0723, 0.0000, + 0.0000, 0.0000, 0.2109, 0.1874, 0.4015, 0.0000, 0.0000, 0.0000, 0.3056, + 0.0000, 0.0000], + [0.0000, 0.6249, 0.0000, 0.0239, 0.3981, 0.3244, 0.7003, 0.0000, 0.0000, + 0.0000, 0.0000, 0.2213, 0.2783, 0.2419, 0.0000, 0.0000, 0.1584, 0.1999, + 0.0000, 0.0000], + [0.0000, 0.4105, 0.0000, 0.0089, 0.3599, 0.3136, 0.6262, 0.0000, 0.0000, + 0.0000, 0.0000, 0.3370, 0.4573, 0.3102, 0.0000, 0.0000, 0.0000, 0.3673, + 0.0000, 0.0000]], grad_fn=) + + +### nn.Sequential +[nn.Sequential](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html) is an ordered +container of modules. The data is passed through all the modules in the same order as defined. You can use +sequential containers to put together a quick network like ``seq_modules``. + + + + +```python +seq_modules = nn.Sequential( + flatten, + layer1, + nn.ReLU(), + nn.Linear(20, 10) +) +input_image = torch.rand(3,28,28) +logits = seq_modules(input_image) +``` + +### nn.Softmax +The last linear layer of the neural network returns `logits` - raw values in [-\infty, \infty] - which are passed to the +[nn.Softmax](https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html) module. The logits are scaled to values +[0, 1] representing the model's predicted probabilities for each class. ``dim`` parameter indicates the dimension along +which the values must sum to 1. + + + + +```python +softmax = nn.Softmax(dim=1) +pred_probab = softmax(logits) +``` + +## Model Parameters +Many layers inside a neural network are *parameterized*, i.e. have associated weights +and biases that are optimized during training. Subclassing ``nn.Module`` automatically +tracks all fields defined inside your model object, and makes all parameters +accessible using your model's ``parameters()`` or ``named_parameters()`` methods. + +In this example, we iterate over each parameter, and print its size and a preview of its values. 
+ + + + + +```python +print(f"Model structure: {model}\n\n") + +for name, param in model.named_parameters(): + print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n") +``` + + Model structure: NeuralNetwork( + (flatten): Flatten(start_dim=1, end_dim=-1) + (linear_relu_stack): Sequential( + (0): Linear(in_features=784, out_features=512, bias=True) + (1): ReLU() + (2): Linear(in_features=512, out_features=512, bias=True) + (3): ReLU() + (4): Linear(in_features=512, out_features=10, bias=True) + ) + ) + + + Layer: linear_relu_stack.0.weight | Size: torch.Size([512, 784]) | Values : tensor([[ 0.0211, 0.0168, 0.0334, ..., -0.0151, -0.0033, 0.0032], + [-0.0022, 0.0293, -0.0090, ..., -0.0044, -0.0147, -0.0251]], + grad_fn=) + + Layer: linear_relu_stack.0.bias | Size: torch.Size([512]) | Values : tensor([0.0128, 0.0086], grad_fn=) + + Layer: linear_relu_stack.2.weight | Size: torch.Size([512, 512]) | Values : tensor([[-0.0165, -0.0068, -0.0016, ..., -0.0098, 0.0119, 0.0326], + [ 0.0330, -0.0306, -0.0129, ..., -0.0371, -0.0291, -0.0273]], + grad_fn=) + + Layer: linear_relu_stack.2.bias | Size: torch.Size([512]) | Values : tensor([ 0.0024, -0.0164], grad_fn=) + + Layer: linear_relu_stack.4.weight | Size: torch.Size([10, 512]) | Values : tensor([[ 0.0046, 0.0249, 0.0123, ..., 0.0352, -0.0170, 0.0232], + [ 0.0038, 0.0283, 0.0235, ..., -0.0416, 0.0304, 0.0217]], + grad_fn=) + + Layer: linear_relu_stack.4.bias | Size: torch.Size([10]) | Values : tensor([0.0118, 0.0417], grad_fn=) + + + +-------------- + + + + +## Further Reading +- [torch.nn API](https://pytorch.org/docs/stable/nn.html) + + diff --git a/docs/07-Autograd.md b/docs/07-Autograd.md new file mode 100644 index 0000000..f8e1eee --- /dev/null +++ b/docs/07-Autograd.md @@ -0,0 +1,294 @@ +```python +%matplotlib inline +``` + + +[Learn the Basics](intro.html) || +[Quickstart](quickstart_tutorial.html) || +[Tensors](tensorqs_tutorial.html) || +[Datasets & DataLoaders](data_tutorial.html) || +[Transforms](transforms_tutorial.html) || +[Build Model](buildmodel_tutorial.html) || +**Autograd** || +[Optimization](optimization_tutorial.html) || +[Save & Load Model](saveloadrun_tutorial.html) + +# Automatic Differentiation with ``torch.autograd`` + +When training neural networks, the most frequently used algorithm is +**back propagation**. In this algorithm, parameters (model weights) are +adjusted according to the **gradient** of the loss function with respect +to the given parameter. + +To compute those gradients, PyTorch has a built-in differentiation engine +called ``torch.autograd``. It supports automatic computation of gradient for any +computational graph. + +Consider the simplest one-layer neural network, with input ``x``, +parameters ``w`` and ``b``, and some loss function. It can be defined in +PyTorch in the following manner: + + + +```python +import torch + +x = torch.ones(5) # input tensor +y = torch.zeros(3) # expected output +w = torch.randn(5, 3, requires_grad=True) +b = torch.randn(3, requires_grad=True) +z = torch.matmul(x, w)+b +loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y) +``` + +## Tensors, Functions and Computational graph + +This code defines the following **computational graph**: + +.. figure:: /_static/img/basics/comp-graph.png + :alt: + +In this network, ``w`` and ``b`` are **parameters**, which we need to +optimize. Thus, we need to be able to compute the gradients of loss +function with respect to those variables. In order to do that, we set +the ``requires_grad`` property of those tensors. 
+ + + +

> **Note:** You can set the value of ``requires_grad`` when creating a tensor, or later by using the ``x.requires_grad_(True)`` method.
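For instance (throwaway tensors, purely illustrative):

```python
a = torch.randn(2, 2)                      # requires_grad is False by default
a.requires_grad_(True)                     # enable tracking after creation
b = torch.randn(2, 2, requires_grad=True)  # or request it at creation time
print(a.requires_grad, b.requires_grad)    # True True
```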

+ + + +A function that we apply to tensors to construct computational graph is +in fact an object of class ``Function``. This object knows how to +compute the function in the *forward* direction, and also how to compute +its derivative during the *backward propagation* step. A reference to +the backward propagation function is stored in ``grad_fn`` property of a +tensor. You can find more information of ``Function`` [in the +documentation](https://pytorch.org/docs/stable/autograd.html#function)_. + + + + + +```python +print(f"Gradient function for z = {z.grad_fn}") +print(f"Gradient function for loss = {loss.grad_fn}") +``` + + Gradient function for z = + Gradient function for loss = + + +## Computing Gradients + +To optimize weights of parameters in the neural network, we need to +compute the derivatives of our loss function with respect to parameters, +namely, we need $\frac{\partial loss}{\partial w}$ and +$\frac{\partial loss}{\partial b}$ under some fixed values of +``x`` and ``y``. To compute those derivatives, we call +``loss.backward()``, and then retrieve the values from ``w.grad`` and +``b.grad``: + + + + + +```python +loss.backward() +print(w.grad) +print(b.grad) +``` + + tensor([[0.3244, 0.2353, 0.0700], + [0.3244, 0.2353, 0.0700], + [0.3244, 0.2353, 0.0700], + [0.3244, 0.2353, 0.0700], + [0.3244, 0.2353, 0.0700]]) + tensor([0.3244, 0.2353, 0.0700]) + + +

> **Note:**
> - We can only obtain the ``grad`` properties for the leaf nodes of the computational graph, which have the ``requires_grad`` property set to ``True``. For all other nodes in our graph, gradients will not be available.
> - We can only perform gradient calculations using ``backward`` once on a given graph, for performance reasons. If we need to do several ``backward`` calls on the same graph, we need to pass ``retain_graph=True`` to the ``backward`` call.
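A small illustration of the second point, with a hypothetical tensor rather than the network above:

```python
x = torch.tensor([3.0], requires_grad=True)
y = x ** 2
y.backward(retain_graph=True)  # keep the graph alive for another pass
y.backward()                   # allowed only because of retain_graph=True above
print(x.grad)                  # tensor([12.]) -- 6.0 from each call, accumulated
```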

+ + + + +## Disabling Gradient Tracking + +By default, all tensors with ``requires_grad=True`` are tracking their +computational history and support gradient computation. However, there +are some cases when we do not need to do that, for example, when we have +trained the model and just want to apply it to some input data, i.e. we +only want to do *forward* computations through the network. We can stop +tracking computations by surrounding our computation code with +``torch.no_grad()`` block: + + + + + +```python +z = torch.matmul(x, w)+b +print(z.requires_grad) + +with torch.no_grad(): + z = torch.matmul(x, w)+b +print(z.requires_grad) +``` + + True + False + + +Another way to achieve the same result is to use the ``detach()`` method +on the tensor: + + + + + +```python +z = torch.matmul(x, w)+b +z_det = z.detach() +print(z_det.requires_grad) +``` + + False + + +There are reasons you might want to disable gradient tracking: + - To mark some parameters in your neural network as **frozen parameters**. + - To **speed up computations** when you are only doing forward pass, because computations on tensors that do + not track gradients would be more efficient. + + + +## More on Computational Graphs +Conceptually, autograd keeps a record of data (tensors) and all executed +operations (along with the resulting new tensors) in a directed acyclic +graph (DAG) consisting of +[Function](https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)_ +objects. In this DAG, leaves are the input tensors, roots are the output +tensors. By tracing this graph from roots to leaves, you can +automatically compute the gradients using the chain rule. + +In a forward pass, autograd does two things simultaneously: + +- run the requested operation to compute a resulting tensor +- maintain the operation’s *gradient function* in the DAG. + +The backward pass kicks off when ``.backward()`` is called on the DAG +root. ``autograd`` then: + +- computes the gradients from each ``.grad_fn``, +- accumulates them in the respective tensor’s ``.grad`` attribute +- using the chain rule, propagates all the way to the leaf tensors. + +

> **Note:** **DAGs are dynamic in PyTorch.** An important thing to note is that the graph is recreated from scratch; after each ``.backward()`` call, autograd starts populating a new graph. This is exactly what allows you to use control flow statements in your model; you can change the shape, size and operations at every iteration if needed.
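A short sketch of what that dynamism permits (an illustrative function, not from the tutorial): Python control flow decides which operations get recorded on each forward pass, so the graph can differ from call to call.

```python
def forward(x):
    if x.sum() > 0:           # ordinary Python branching
        return (x * 2).sum()  # this pass records a multiply
    return (x ** 2).sum()     # ...or this pass records a square

x = torch.randn(3, requires_grad=True)
out = forward(x)              # the graph reflects the branch actually taken
out.backward()
print(x.grad)
```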

+ + + +## Optional Reading: Tensor Gradients and Jacobian Products + +In many cases, we have a scalar loss function, and we need to compute +the gradient with respect to some parameters. However, there are cases +when the output function is an arbitrary tensor. In this case, PyTorch +allows you to compute so-called **Jacobian product**, and not the actual +gradient. + +For a vector function $\vec{y}=f(\vec{x})$, where +$\vec{x}=\langle x_1,\dots,x_n\rangle$ and +$\vec{y}=\langle y_1,\dots,y_m\rangle$, a gradient of +$\vec{y}$ with respect to $\vec{x}$ is given by **Jacobian +matrix**: + +\begin{align}J=\left(\begin{array}{ccc} + \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\ + \vdots & \ddots & \vdots\\ + \frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} + \end{array}\right)\end{align} + +Instead of computing the Jacobian matrix itself, PyTorch allows you to +compute **Jacobian Product** $v^T\cdot J$ for a given input vector +$v=(v_1 \dots v_m)$. This is achieved by calling ``backward`` with +$v$ as an argument. The size of $v$ should be the same as +the size of the original tensor, with respect to which we want to +compute the product: + + + + + +```python +inp = torch.eye(4, 5, requires_grad=True) +out = (inp+1).pow(2).t() +out.backward(torch.ones_like(out), retain_graph=True) +print(f"First call\n{inp.grad}") +out.backward(torch.ones_like(out), retain_graph=True) +print(f"\nSecond call\n{inp.grad}") +inp.grad.zero_() +out.backward(torch.ones_like(out), retain_graph=True) +print(f"\nCall after zeroing gradients\n{inp.grad}") +``` + + First call + tensor([[4., 2., 2., 2., 2.], + [2., 4., 2., 2., 2.], + [2., 2., 4., 2., 2.], + [2., 2., 2., 4., 2.]]) + + Second call + tensor([[8., 4., 4., 4., 4.], + [4., 8., 4., 4., 4.], + [4., 4., 8., 4., 4.], + [4., 4., 4., 8., 4.]]) + + Call after zeroing gradients + tensor([[4., 2., 2., 2., 2.], + [2., 4., 2., 2., 2.], + [2., 2., 4., 2., 2.], + [2., 2., 2., 4., 2.]]) + + +Notice that when we call ``backward`` for the second time with the same +argument, the value of the gradient is different. This happens because +when doing ``backward`` propagation, PyTorch **accumulates the +gradients**, i.e. the value of computed gradients is added to the +``grad`` property of all leaf nodes of computational graph. If you want +to compute the proper gradients, you need to zero out the ``grad`` +property before. In real-life training an *optimizer* helps us to do +this. + + + +

> **Note:** Previously we were calling the ``backward()`` function without parameters. This is essentially equivalent to calling ``backward(torch.tensor(1.0))``, which is a useful way to compute the gradients in the case of a scalar-valued function, such as the loss during neural network training.
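A tiny check of that equivalence, using throwaway tensors:

```python
x = torch.tensor([1.0, 2.0], requires_grad=True)
loss = (x ** 2).sum()             # scalar-valued output
loss.backward(torch.tensor(1.0))  # identical to calling loss.backward()
print(x.grad)                     # tensor([2., 4.])
```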

+ + + + +-------------- + + + + +### Further Reading +- [Autograd Mechanics](https://pytorch.org/docs/stable/notes/autograd.html) + + diff --git a/docs/08-Optimization.md b/docs/08-Optimization.md new file mode 100644 index 0000000..aeb9aa0 --- /dev/null +++ b/docs/08-Optimization.md @@ -0,0 +1,369 @@ +```python +%matplotlib inline +``` + + +[Learn the Basics](intro.html) || +[Quickstart](quickstart_tutorial.html) || +[Tensors](tensorqs_tutorial.html) || +[Datasets & DataLoaders](data_tutorial.html) || +[Transforms](transforms_tutorial.html) || +[Build Model](buildmodel_tutorial.html) || +[Autograd](autogradqs_tutorial.html) || +**Optimization** || +[Save & Load Model](saveloadrun_tutorial.html) + +# Optimizing Model Parameters + +Now that we have a model and data it's time to train, validate and test our model by optimizing its parameters on +our data. Training a model is an iterative process; in each iteration the model makes a guess about the output, calculates +the error in its guess (*loss*), collects the derivatives of the error with respect to its parameters (as we saw in +the [previous section](autograd_tutorial.html)), and **optimizes** these parameters using gradient descent. For a more +detailed walkthrough of this process, check out this video on [backpropagation from 3Blue1Brown](https://www.youtube.com/watch?v=tIeHLnjs5U8)_. + +## Prerequisite Code +We load the code from the previous sections on [Datasets & DataLoaders](data_tutorial.html) +and [Build Model](buildmodel_tutorial.html). + + + +```python +import torch +from torch import nn +from torch.utils.data import DataLoader +from torchvision import datasets +from torchvision.transforms import ToTensor + +training_data = datasets.FashionMNIST( + root="data", + train=True, + download=True, + transform=ToTensor() +) + +test_data = datasets.FashionMNIST( + root="data", + train=False, + download=True, + transform=ToTensor() +) + +train_dataloader = DataLoader(training_data, batch_size=64) +test_dataloader = DataLoader(test_data, batch_size=64) + +class NeuralNetwork(nn.Module): + def __init__(self): + super(NeuralNetwork, self).__init__() + self.flatten = nn.Flatten() + self.linear_relu_stack = nn.Sequential( + nn.Linear(28*28, 512), + nn.ReLU(), + nn.Linear(512, 512), + nn.ReLU(), + nn.Linear(512, 10), + ) + + def forward(self, x): + x = self.flatten(x) + logits = self.linear_relu_stack(x) + return logits + +model = NeuralNetwork() +``` + +## Hyperparameters + +Hyperparameters are adjustable parameters that let you control the model optimization process. +Different hyperparameter values can impact model training and convergence rates +([read more](https://pytorch.org/tutorials/beginner/hyperparameter_tuning_tutorial.html)_ about hyperparameter tuning) + +We define the following hyperparameters for training: + - **Number of Epochs** - the number times to iterate over the dataset + - **Batch Size** - the number of data samples propagated through the network before the parameters are updated + - **Learning Rate** - how much to update models parameters at each batch/epoch. Smaller values yield slow learning speed, while large values may result in unpredictable behavior during training. + + + + + +```python +learning_rate = 1e-3 +batch_size = 64 +epochs = 5 +``` + +## Optimization Loop + +Once we set our hyperparameters, we can then train and optimize our model with an optimization loop. Each +iteration of the optimization loop is called an **epoch**. 
+ +Each epoch consists of two main parts: + - **The Train Loop** - iterate over the training dataset and try to converge to optimal parameters. + - **The Validation/Test Loop** - iterate over the test dataset to check if model performance is improving. + +Let's briefly familiarize ourselves with some of the concepts used in the training loop. Jump ahead to +see the `full-impl-label` of the optimization loop. + +### Loss Function + +When presented with some training data, our untrained network is likely not to give the correct +answer. **Loss function** measures the degree of dissimilarity of obtained result to the target value, +and it is the loss function that we want to minimize during training. To calculate the loss we make a +prediction using the inputs of our given data sample and compare it against the true data label value. + +Common loss functions include [nn.MSELoss](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html#torch.nn.MSELoss) (Mean Square Error) for regression tasks, and +[nn.NLLLoss](https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html#torch.nn.NLLLoss) (Negative Log Likelihood) for classification. +[nn.CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss) combines ``nn.LogSoftmax`` and ``nn.NLLLoss``. + +We pass our model's output logits to ``nn.CrossEntropyLoss``, which will normalize the logits and compute the prediction error. + + + + +```python +# Initialize the loss function +loss_fn = nn.CrossEntropyLoss() +``` + +### Optimizer + +Optimization is the process of adjusting model parameters to reduce model error in each training step. **Optimization algorithms** define how this process is performed (in this example we use Stochastic Gradient Descent). +All optimization logic is encapsulated in the ``optimizer`` object. Here, we use the SGD optimizer; additionally, there are many [different optimizers](https://pytorch.org/docs/stable/optim.html) +available in PyTorch such as ADAM and RMSProp, that work better for different kinds of models and data. + +We initialize the optimizer by registering the model's parameters that need to be trained, and passing in the learning rate hyperparameter. + + + + +```python +optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) +``` + +Inside the training loop, optimization happens in three steps: + * Call ``optimizer.zero_grad()`` to reset the gradients of model parameters. Gradients by default add up; to prevent double-counting, we explicitly zero them at each iteration. + * Backpropagate the prediction loss with a call to ``loss.backward()``. PyTorch deposits the gradients of the loss w.r.t. each parameter. + * Once we have our gradients, we call ``optimizer.step()`` to adjust the parameters by the gradients collected in the backward pass. + + + + +## Full Implementation +We define ``train_loop`` that loops over our optimization code, and ``test_loop`` that +evaluates the model's performance against our test data. 
+ + + + +```python +def train_loop(dataloader, model, loss_fn, optimizer): + size = len(dataloader.dataset) + for batch, (X, y) in enumerate(dataloader): + # Compute prediction and loss + pred = model(X) + loss = loss_fn(pred, y) + + # Backpropagation + optimizer.zero_grad() + loss.backward() + optimizer.step() + + if batch % 100 == 0: + loss, current = loss.item(), (batch + 1) * len(X) + print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]") + + +def test_loop(dataloader, model, loss_fn): + size = len(dataloader.dataset) + num_batches = len(dataloader) + test_loss, correct = 0, 0 + + with torch.no_grad(): + for X, y in dataloader: + pred = model(X) + test_loss += loss_fn(pred, y).item() + correct += (pred.argmax(1) == y).type(torch.float).sum().item() + + test_loss /= num_batches + correct /= size + print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n") +``` + +We initialize the loss function and optimizer, and pass it to ``train_loop`` and ``test_loop``. +Feel free to increase the number of epochs to track the model's improving performance. + + + + +```python +loss_fn = nn.CrossEntropyLoss() +optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) + +epochs = 10 +for t in range(epochs): + print(f"Epoch {t+1}\n-------------------------------") + train_loop(train_dataloader, model, loss_fn, optimizer) + test_loop(test_dataloader, model, loss_fn) +print("Done!") +``` + + Epoch 1 + ------------------------------- + loss: 2.310308 [ 64/60000] + loss: 2.291682 [ 6464/60000] + loss: 2.282847 [12864/60000] + loss: 2.278148 [19264/60000] + loss: 2.259573 [25664/60000] + loss: 2.246842 [32064/60000] + loss: 2.237948 [38464/60000] + loss: 2.221490 [44864/60000] + loss: 2.215676 [51264/60000] + loss: 2.186174 [57664/60000] + Test Error: + Accuracy: 50.1%, Avg loss: 2.185173 + + Epoch 2 + ------------------------------- + loss: 2.192464 [ 64/60000] + loss: 2.176265 [ 6464/60000] + loss: 2.138019 [12864/60000] + loss: 2.155484 [19264/60000] + loss: 2.096774 [25664/60000] + loss: 2.064352 [32064/60000] + loss: 2.073422 [38464/60000] + loss: 2.019561 [44864/60000] + loss: 2.018754 [51264/60000] + loss: 1.944076 [57664/60000] + Test Error: + Accuracy: 56.9%, Avg loss: 1.951974 + + Epoch 3 + ------------------------------- + loss: 1.979550 [ 64/60000] + loss: 1.944613 [ 6464/60000] + loss: 1.850896 [12864/60000] + loss: 1.885921 [19264/60000] + loss: 1.766024 [25664/60000] + loss: 1.721881 [32064/60000] + loss: 1.732149 [38464/60000] + loss: 1.646069 [44864/60000] + loss: 1.663508 [51264/60000] + loss: 1.542335 [57664/60000] + Test Error: + Accuracy: 60.8%, Avg loss: 1.575167 + + Epoch 4 + ------------------------------- + loss: 1.641383 [ 64/60000] + loss: 1.597785 [ 6464/60000] + loss: 1.460881 [12864/60000] + loss: 1.522893 [19264/60000] + loss: 1.394849 [25664/60000] + loss: 1.381750 [32064/60000] + loss: 1.389999 [38464/60000] + loss: 1.324359 [44864/60000] + loss: 1.359623 [51264/60000] + loss: 1.242349 [57664/60000] + Test Error: + Accuracy: 63.2%, Avg loss: 1.281596 + + Epoch 5 + ------------------------------- + loss: 1.364956 [ 64/60000] + loss: 1.337699 [ 6464/60000] + loss: 1.179997 [12864/60000] + loss: 1.276043 [19264/60000] + loss: 1.145318 [25664/60000] + loss: 1.163051 [32064/60000] + loss: 1.179221 [38464/60000] + loss: 1.127842 [44864/60000] + loss: 1.170320 [51264/60000] + loss: 1.072596 [57664/60000] + Test Error: + Accuracy: 64.8%, Avg loss: 1.102368 + + Epoch 6 + ------------------------------- + loss: 1.181124 [ 64/60000] + loss: 1.175671 [ 
6464/60000] + loss: 0.999543 [12864/60000] + loss: 1.125861 [19264/60000] + loss: 0.994338 [25664/60000] + loss: 1.020635 [32064/60000] + loss: 1.052101 [38464/60000] + loss: 1.005876 [44864/60000] + loss: 1.050259 [51264/60000] + loss: 0.969423 [57664/60000] + Test Error: + Accuracy: 65.8%, Avg loss: 0.989962 + + Epoch 7 + ------------------------------- + loss: 1.055653 [ 64/60000] + loss: 1.073796 [ 6464/60000] + loss: 0.878792 [12864/60000] + loss: 1.027988 [19264/60000] + loss: 0.902191 [25664/60000] + loss: 0.923560 [32064/60000] + loss: 0.970771 [38464/60000] + loss: 0.927402 [44864/60000] + loss: 0.969056 [51264/60000] + loss: 0.901827 [57664/60000] + Test Error: + Accuracy: 66.8%, Avg loss: 0.914991 + + Epoch 8 + ------------------------------- + loss: 0.964512 [ 64/60000] + loss: 1.004631 [ 6464/60000] + loss: 0.793878 [12864/60000] + loss: 0.959500 [19264/60000] + loss: 0.842306 [25664/60000] + loss: 0.854395 [32064/60000] + loss: 0.914801 [38464/60000] + loss: 0.875149 [44864/60000] + loss: 0.910963 [51264/60000] + loss: 0.853945 [57664/60000] + Test Error: + Accuracy: 67.8%, Avg loss: 0.861828 + + Epoch 9 + ------------------------------- + loss: 0.895530 [ 64/60000] + loss: 0.953656 [ 6464/60000] + loss: 0.731293 [12864/60000] + loss: 0.908750 [19264/60000] + loss: 0.800252 [25664/60000] + loss: 0.803487 [32064/60000] + loss: 0.873069 [38464/60000] + loss: 0.838708 [44864/60000] + loss: 0.867891 [51264/60000] + loss: 0.817475 [57664/60000] + Test Error: + Accuracy: 68.9%, Avg loss: 0.821918 + + Epoch 10 + ------------------------------- + loss: 0.841097 [ 64/60000] + loss: 0.913210 [ 6464/60000] + loss: 0.683007 [12864/60000] + loss: 0.869649 [19264/60000] + loss: 0.768555 [25664/60000] + loss: 0.764901 [32064/60000] + loss: 0.839639 [38464/60000] + loss: 0.811697 [44864/60000] + loss: 0.834432 [51264/60000] + loss: 0.788075 [57664/60000] + Test Error: + Accuracy: 70.1%, Avg loss: 0.790321 + + Done! + + +## Further Reading +- [Loss Functions](https://pytorch.org/docs/stable/nn.html#loss-functions) +- [torch.optim](https://pytorch.org/docs/stable/optim.html) +- [Warmstart Training a Model](https://pytorch.org/tutorials/recipes/recipes/warmstarting_model_using_parameters_from_a_different_model.html) + + + diff --git a/docs/09-SaveLoad.md b/docs/09-SaveLoad.md new file mode 100644 index 0000000..d07b56e --- /dev/null +++ b/docs/09-SaveLoad.md @@ -0,0 +1,141 @@ +```python +%matplotlib inline +``` + + +[Learn the Basics](intro.html) || +[Quickstart](quickstart_tutorial.html) || +[Tensors](tensorqs_tutorial.html) || +[Datasets & DataLoaders](data_tutorial.html) || +[Transforms](transforms_tutorial.html) || +[Build Model](buildmodel_tutorial.html) || +[Autograd](autogradqs_tutorial.html) || +[Optimization](optimization_tutorial.html) || +**Save & Load Model** + +# Save and Load the Model + +In this section we will look at how to persist model state with saving, loading and running model predictions. + + + +```python +import torch +import torchvision.models as models +``` + +## Saving and Loading Model Weights +PyTorch models store the learned parameters in an internal +state dictionary, called ``state_dict``. 
These can be persisted via the ``torch.save`` +method: + + + + +```python +model = models.vgg16(pretrained=True) +torch.save(model.state_dict(), 'model_weights.pth') +``` + + /Users/brianjo/anaconda3/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead. + warnings.warn( + /Users/brianjo/anaconda3/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights. + warnings.warn(msg) + + +To load model weights, you need to create an instance of the same model first, and then load the parameters +using ``load_state_dict()`` method. + + + + +```python +model = models.vgg16() # we do not specify pretrained=True, i.e. do not load default weights +model.load_state_dict(torch.load('model_weights.pth')) +model.eval() +``` + + + + + VGG( + (features): Sequential( + (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (1): ReLU(inplace=True) + (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (3): ReLU(inplace=True) + (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) + (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (6): ReLU(inplace=True) + (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (8): ReLU(inplace=True) + (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) + (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (11): ReLU(inplace=True) + (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (13): ReLU(inplace=True) + (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (15): ReLU(inplace=True) + (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) + (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (18): ReLU(inplace=True) + (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (20): ReLU(inplace=True) + (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (22): ReLU(inplace=True) + (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) + (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (25): ReLU(inplace=True) + (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (27): ReLU(inplace=True) + (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) + (29): ReLU(inplace=True) + (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) + ) + (avgpool): AdaptiveAvgPool2d(output_size=(7, 7)) + (classifier): Sequential( + (0): Linear(in_features=25088, out_features=4096, bias=True) + (1): ReLU(inplace=True) + (2): Dropout(p=0.5, inplace=False) + (3): Linear(in_features=4096, out_features=4096, bias=True) + (4): ReLU(inplace=True) + (5): Dropout(p=0.5, inplace=False) + (6): Linear(in_features=4096, out_features=1000, bias=True) + ) + ) + + + +
+
+> **Note:** Be sure to call the ``model.eval()`` method before running inference, to set the dropout and batch normalization layers to evaluation mode. Failing to do this will yield inconsistent inference results.
+
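+
+For example, a minimal inference sketch (the random input below is illustrative only; VGG16 expects ``224x224`` RGB inputs, which in practice would be preprocessed real images):
+
+
+```python
+model.eval()                        # dropout/batch-norm layers in eval mode
+with torch.no_grad():               # no gradients needed for inference
+    x = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed image
+    logits = model(x)               # shape: [1, 1000] ImageNet class scores
+    predicted_class = logits.argmax(1)
+```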
+ + + +## Saving and Loading Models with Shapes +When loading model weights, we needed to instantiate the model class first, because the class +defines the structure of a network. We might want to save the structure of this class together with +the model, in which case we can pass ``model`` (and not ``model.state_dict()``) to the saving function: + + + + +```python +torch.save(model, 'model.pth') +``` + +We can then load the model like this: + + + + +```python +model = torch.load('model.pth') +``` + +
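+
+``torch.load`` also accepts a ``map_location`` argument, which is useful when the model was saved on a different device (e.g. a GPU machine) than the one loading it. A sketch, assuming the ``model.pth`` file saved above:
+
+
+```python
+# Force all tensors in the checkpoint onto the CPU while loading
+model = torch.load('model.pth', map_location=torch.device('cpu'))
+```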
+
+> **Note:** This approach uses the Python [pickle](https://docs.python.org/3/library/pickle.html) module when serializing the model, so it relies on the actual class definition being available when loading the model.
+
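+
+Beyond a bare model, it is common to bundle additional training state into a single checkpoint dictionary, as the recipe linked below covers in detail. A hedged sketch of the pattern (the key names and the ``epoch`` value are illustrative, and an ``optimizer`` object is assumed to exist):
+
+
+```python
+checkpoint = {
+    'epoch': 5,                                      # illustrative bookkeeping
+    'model_state_dict': model.state_dict(),
+    'optimizer_state_dict': optimizer.state_dict(),  # assumes an optimizer
+}
+torch.save(checkpoint, 'checkpoint.pth')
+```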
+ + + +## Related Tutorials +[Saving and Loading a General Checkpoint in PyTorch](https://pytorch.org/tutorials/recipes/recipes/saving_and_loading_a_general_checkpoint.html) + + diff --git a/docs/docs/04-Data_21_1.png b/docs/docs/04-Data_21_1.png new file mode 100644 index 0000000000000000000000000000000000000000..01d7edb5943a5acd4066e3de74249eef631a04db GIT binary patch literal 7368 zcmbtZ2Ut_tx;=`bSU?3)kfy_cNEhiK(i9{#X(EPVp~cW+Xfe(JGN^PBL5lPa(j}p& zNE3pTK#(d$T7b|3Loo1moEhKTd*|MFpTl?Zk?-W3v(LZxzy7t>ze9`+E*;#*y$^z* zgSt9eS0HFd3iu@MWd>*PhuS#6KSeL?t6nCq_FiZ^kK2&GotN7kSFbxxHw5n9_V9Fa zb&-*fl{hOV;OOP$=BXqpiTrkfgsX>xb&MC$BH`{ctFuvVBR7aDLUB~Z&*?gFzh@Hkgd} z?=O?(AV@wZElso3o`74B9+wCNccJz-rye{Z=S>oW()EGPi+mkEm-gL}4$S9+6RiAp{Ai~H`)g5E~BC{2WKai zIdxYfe!kAu%s=vZ#KheC*-HdTqQI)s;Dp?rj^dk5UX+^k+}c`ehYv}F#u*6mIU{-! z+#^IEJ66{riQUkn|JtP1J4twMW@(uN?z12Fg6^xtEFI$J;URXWT{5$a#5KO1LyDpF?vOtWUA=Ja5dCpRCTxsQ)e|JL%d2MUD>RjSR&(d~3RSHARK0eYrT zvS#0tmmJ~I9=ajeL!**=1_qw@D=8_J^c;a~6Iy=F8@F$JmXey9YK&GL8X8Jl2on_H zVHM?Jb-*2jF7P$~nrNh}8?sQnVjwFopEd95;?g_T@JI%u1b%FNzZuBQ&AqykynTH? zr&z}jdZa*&K4Yllx0aveGZ}X$ffb6#DmkJdYMyUoXc&HPb6_O5wA4gaR<^&?Av`xX zx5OPhCuyJJR04{9TQXATG%d>cEo+>ziIucjE3E3gUR*3?_9J$6ZEd#K%C%-e+W!gDza-sXEOM8SLy%YyZ_=2TJ_ zaufc)(*7@o`*y0H1G&rG(sBlFVrn`8mpONC0zRIwQW%*Y6?KFX9~pU=5--q=3xHLy zf}u10*;rY9NxR~f>5KJjek3Vvzb|73dm-Bghjv2GP6k4d{oa$%eJ+jv>J;;{Q!h(O z&Vm7$U|_Z-y9JmO0(y7nu#~W>Kfo}C2&sGS-yJIx>~LSy zk9ySz2{`Vz5LwRNm&r~^Nl8d>@b>P*3pE=Vi}c0s$zd5I7uB`5_(Yr-Wcj0(aIvK{ zHeL+UNGTKNd~}<34!XZHB4#RZvOO^myS0*>Qxvduof|Uh13A~w&@f1cAeYmkA6UxE z%Nb>*r>DpF%Tg1HwmO60dkzzd(Gl#DgI>RjSUPm=m0QZ)xxSPdzxCO2lC=o{9jByq zeo|7BbhxXt^ObYl`pB4=m^c>}NXp>0O5iM+OzpcF?lfHOLz^+g<__V}dQlawBj+cV z$fP0*S65eO^7M4GmmYNA{S4XFJW&oA4=|opQBfhy&lX(2FS!1r3lk*8dYg{XXt{NT zAN!?vY{|RAa5vN}1EQQt##$E?6ga!Ol0abm^f?DLlT(o$?_)$|FlPW{T7L~6ul=&D zr@HixT?xIAVBemQqy30;rXxiY!0P>K4k$Dd<0n@ePq^NqRlZZ8*7I+5;O!ak5o(s| zV&oII2)bS;AVD~m2^NqbY9SjfB#q3ESgZsQ{>S3|EoJ}lDp;=?ku^v(4hHg)>nhi+ z<4zGNSyx`ZNsZJ2MYvgjB*4FQ+$#ccNWJ~&gzqO!sr~!W|Arp^%?STQBCyVjw(3G= zAW_aHR=rZ_%iv9~6Nr9Mw_Zc>A+4^j`;m6d=tm`ZD(0|s*%%u~q7ODoF8n{C@eM$B z_eFL-KYRNjTK{ZZn}%@OZZ0iuqIl)t7@YkU@7vLrXu_vYFVLY+2B<4Xy%fY7g};BK zBLX{596!ETC?X=VRH%sEszwvib>Gz1#e)<=6RJF?su6scRbxs#yRd;b3Di7alFk)I ze1yftmuP(_ryR7tpCA4o!=)Tja{F}uskl|fhDmLvbpTXjyc%>mJzfo4TXE;(*~xLqx-TVi$o_P3tBUEx(fWrCH+%t~_K-;E zZo*Dz-=p>Y>;MsFHa2RLH`^7F!_}4y_FMkk`lIeZrK^RHPu1Op!JMq*d$dX1ZiwT_ z z6BzG+R1LfRxyL=l==3~dYA_o!a;@%Ojjx}diPfb`m*lSPgdS^hVM?9FIHSEjhn?mN z&ssyH(cw5{nz;fGINEQW09+f{shl!9>)dcdnn7`E2U%Hj%gfCk9g}RI05ID9dedMU zP#$fr^uraK%F0S|J)0VFb*39NfUI;H!lC+Kxu@BGNGkYP>VOz;42@Sp*|( zg_^O$C%q<`4~tut2vSj+ngN1NpWnyOT5Yyck6C-G$4bDvNTbcTR7=0V$3z&o> zMC&X`7J4Dz9c+X|79@C*{8c#it7`R7Z(?c zmxK+!|Dw7iqVXl?zkLvQko?04cX>2;bZvT(7A3 z0pxRh^ooG|-wJcCIr4{I@#`k1Io^M)+S*#Lo0?FW`jEU{xBR;Sa#?aSPu3tKC! 
zDL-5ab8LzFF{h{wbKU`9wI7GdGe}TK;Z?pn{WfU-!=~|TCAUKE&i7~N2u4*E?tpf` zQ4VxQAc~X+-6+E~@;!%^FJZBDzt>{-;%$@IAAa~@SXMnz33WzzN)baVL+e{wTE6hy z&1K6yK@n+|4g9<4%*U3TS=?4qR<1!@zI?eF!4PWtM#dBd zk>F^NBOwfIgsUw7a(-hXy6k3xr}LdVrrO#;m-Y0_tgP~)g$?b?oS+MKi~CY~d(8{1 zYpw;e@?P=uEPe3cfvJT>*4EY*QTpE!Z0yqH4rqfw4q~CyZo2f71}cyymX>j0e5$#d z3wU6e{{z`7c_uPJ%ftGr!o-0AQQi3S7@S~EO${9U<>uw-@v^f9qqgbmhX}OsP&| z+QaJDn{SyzFTM)vicwDWR!Qm~85&ZZU6{54MQGGbL5IQ6Qz+qygzk7*#Oma8SLcyh zOfK-le$_bKHE(H1Y*hmD?;)gcAS4w<;{tEs&8!2<_&<%&wQ6R-!>nKM#fe!u-@f<>CBGv5q*{CEO{ z@$lipojsuz>ZFsu2x#8a)6*;chbs2iVlQB=nL_-+6~QCM0Ev7)6NfR&Ey7Ms`Pa%X z10{;IDswcx_?#KyU@khuNXprsJk)f@Ngyr}j*dD2O0yJ-p|Z=LD7eM38@m7Y6IE;p zPpN&rMy)^&n3|hsrd;bCjtG_{hdj12ADw{OSG zID{qn&WfPWXq%OF9iiyUf=8TAK(;sb@@TXx%APwno#eM*tgo-{jKyGTtg&>|{6M8) zQT6geR{rz(d-%LBwA}J&8_D{;Z)9d}&WvP%A{=(s89EZ;WtHxIc{L2wc^MtFadhao z%*>0R*<)m87IT*}T=2OyHk^j(EwU*B<%~!Q(fV0n=Y5t_Mtck3cymhiN*F7T!VAk% zd-!^vw@P7d?it|4LpQv?%5T;#W=P%4}ewb%>S|jf;L*o29D(3f~fTV{{AvLsH5GvW_hNjrs=>(IW~py;@n1m9$R|# zRLn9Jj=u})`mOx0WqZqgkox55Q|I649kLy>eJJPQJlPgEEL%C~@d!-_+JCGXA?Vbe zp!m~lW8Mps+yK}JYmmmUth}!s`#ydq*pB~KJLa4I2Aa|uopNmn?QzZ-i*VseGiK52MJ-G?Ut4I#XzfZwXh1vCbWLXM|UPBCPpBD zmoA_%7|gJ2L&Mf~;;t&b2({YQ?WvRU?OUYnHNFH%PfAk}vw^3rOHaB@1Y%;CadN(t+Rhw6m>-ueD=J&>8*4-TI2n(LDu0-VAS z)PS7|tXgw`;|EPsnes#h_qVu=d@_YX`m(W7e!KOlG(UFZb8I5?w*Lq(+O}1T%sp8ZXrRI`+tA>&r$j@>$dbV`+#aplc;K zA63O#`-+rPfYq1a4#fek4QJ+^Z0sR_O7Il5DD+}e`|?s(%^ZivCkle`XA_>kBGrRm zuWwyv^KlnGr=XAxsvMwm2_zN{Ph{m&aU{lsR#eEDnVMz;eh*jMdS#G&9yt`1v$l2@ z)Xog}2_OvCN3h$0csaqGf2n|0l~2ro+=xn_>CV!N|2o)Wf&VMM)!u@*jWr53)#gN$sf|q$ z$p#DGT$}F}Evinf4`w6(w1>~=Y+RX(xG0c(j*cCw>w{_W@!Y!XE564Oyy;)6_!@^e z=E+sGG?3(JYe!3HG#asJ3N!&hk+-TGxrYb+!a|_3>f7h^R}q>~=J{76X()aDP)@O{ zF95dZhij}_Vw zs?{HgD~W{(QJ0$dMft+5DsNUU)6h>jH%$~qqvoZO+ zztS}ssLXD@z9)ERPlxt|Vi$1- zKvs&RuC6h#?DMR=N)vp((uEK{7(R%x8r42W%v`VE`O(prY^Qw4vlvh^bLHu~I5Cim2 zd(=*6qkiovy7xJ?GaKRL&?Q%txNL9H4Z5;CfY8qN-mnZvHVh`GaH& za-c%|Pw%s6%nG*Fe3lXpK`*2Ze@7p6MeTKA#oBT212te*5z^H*&??lh{rP_Y5j!cU literal 0 HcmV?d00001 diff --git a/docs/docs/04-Data_6_0.png b/docs/docs/04-Data_6_0.png new file mode 100644 index 0000000000000000000000000000000000000000..5f07fe27cd12fb2835888611542191675dcd8835 GIT binary patch literal 26209 zcmbSz2{_gJ_ipDjIMpdflQN!^Xh5l*B%DMOp;X3Ag$&!g&0~|3jETsc%wa3@tWy~x zWG-W6j1Xm>?)%aC-+TYhy}w(}JnO?QDxon}ke8@_(ldAM3OPDW{U%$jl#u+SGqFZ%J z`sjJv(7`r4%L@%-&qg@Qq>l^lt-kU4CjY~0(X7VtMQ`_w3mc>j zGj1gRR{OY^MenK^f7)+liY!4};dr*rAwB4S?0>UWXUx?{@D$eFJG3KHvRD2HKri(nVs&Z+L+obQ>_GQUZ%8+ z%&GgkE<`)cj0-zXep%NMn5>?4T1Dl7T-aesU#shC)=|#u!k7E&6I>bUA3l6Imv;3` z&fIv5ZWDugv`7;DMJ0ZtOHZ|<6;k1*69f)1##ZhoA%tD`r6CtE@IrA zzTIZ^p%ka_cX}3cV@!#rkH!NR)k z2TWR3Qd3jA5>pEA?b9wP3)FJn+2OKK@Uo;NV0gq)HP_O2cGl6ZKOs)1#B40t+AeFAUX zxbat^wL|OpKul$bNX4~ZmPXr;^91lw@%34ktFH1{hvrO9D$I^b0@(m9cOh`y@buaWQzdtkF z)bQ<7U3Y(CN`PU*<5RAyHb$TF+aY}AUr}SYxI<>Ec`4_)gmb)i?%WX_be!1Y+Og#{A_tHA)wAnMyT4XT209EED31?! z2yq^^y!$RquY$xNx6ABP%i(q^mwn&XoQ3(BW6ko_;o{vbrbYLkYL}LWiqdsHzPwYL zYef^q^SBhI7jAdlB&U0mL34US{?{t8(TSo3s<5zdm-}wU-LZitR{Mg`P>!7W>46wp z6JxaI1ox}>vr$!D+DqJJX2u7}(Hb4lT2KoRr0&Ye$!Q#Oo*g;y>0Mf!T=)^yt5u;p zi{=JRyTfeSeFqqML+Ms@I=y>(ASKYGE%((4Zl~=wt>(Ao9_&3D?l6?|?69h;YHfjo zO%U$ibIbnY?M@Ta1htIEjj(}Hwd=-($Zb#F5t41qDI^5h&4!0M? zSag>Q>z4c1jmF7`?PAq(e41|b?wqvrradZua*A2|zqrj@fxzmD@{zdbA((s8q;Tr$ zK*#%byq$F{EL&X`W;(hHCffbn+}y6X`1J0%$+P8%?Gwbcz=nbP1V0v*{if~tQX?$e zj&Ek!CmSdzuz~xa(E~)me#zM{H}{o1SiX7_{tr?Ie|md>fV!vKgPlsTo3%Yf&E8*Q zVfjk#wXJBdB`f617qhR=mNIW}*VEB?@c2SvaEM86!W##C4W1|Kr-G%tg(VFK@>!~1>9p8zz7l^NqCXZ- z_PUEg=%E@nx2e7;pH;~=O;nqWhgpqBM@L^HcI1qVlI%u`S(lm*&aOD!?7ve9*UBp! 
z@SwF-!_U|Evc%jsB_#yxdTwsh@z3wH4_o!?;FqP^(EERGo(Vf_wW?V9r6-4|@RwmF zpE2Yfr2>1ax<@jr{URc`RfM)}dvTtj@y@dE&DK9=+J>F=cAbyEw3b_f!!22>sNK!Y z?o&+QuTM+7*h3HoA+4>gQZu!AHl|9+Y8lV0eQ*)3|f=a zQ)FY=y4W)@)I4j~kH`N0okjb#!5SV(K2cHC{-zY6&Jr&_R!!TZg)>8WQbod>Sb`|D zfUfTDSH@XwJM8T2t}i@_ii%N+lW#Ohbe@}Dp27X>UBJD2JX$Vu8GEz}HNu_7RX^3m z`53ikpXGA+yw9NKk<5GV=AOzB-S^L~pUW`3uv4orA8+C$f=gOj=f&S^Ts8Oa+4DJF z?7Dz@=ALhf@9_E;(}m02m}?{ZA+z@eGq#`K?z8(=b0P+3eH|B8B>Wx4E6~w&mGWBxkj&^(h=6xlgOe zIaTM?I#K7@Y`59oDvgU51LPtk#Gka|6>i_PYnRzP%GIg)xnUQk`L!c1Ddyr1os;xWu<-@(u z&^<-7-9p`;;vxcYaqseMQrz6qjawS0*%ll|@3@i7C?zGOIP;6Y|3TyCW%Iv@i-;(5 zIej}DwD(-;spFfXAfG<8g zWQQGRf)DuOFyVo@k4<0Tkk{-qO3^|r?&Rmcd+hewQ*v^i2WX9Q&6&o#m15<5^M(u6 zN56jU=3bb*P0|Sxwv6`#&`J8)28Zofy`SbG)I24}spMqYnJh=GAYTrsIL zJUl2_JUrJuY7$&54QIq6aU{yRUg0Bj5PhRnV#8&A3KBa-MMZ@TYNMag3Mbkf&LLXv zj9zxM`d?Ab!(;R}|Q=^UaKUFo*`E?L32 zR-Q+iOcnQc^WoH7e~UyZh)YN@di&()LDTj8vh^ma4lFD`ij((`pIyKA3ZJ~o9I z6B1BM|IH}sgbx>qZAm1J_a?6r7JktU8hbL2hv#%xKy!DJj`CxHKji1trzIvqVaS2SKWfgLt&)V$^Z3 z=;K~&qmAmOclPc0qx$u0e*g=~H=i781+rRsYHDhPt@_Ug9kzO0AAj~30{}-V#pZ)6 zYk7Y|V#9#CqobqSaH}LT%4#mhFGqt88V7&tZ;(Ci!zEzVU+-HjF(apG*S&EI68Zt7 zw~Es9Uun@pZFv>Qx!ZT{JOvE7?Q2h^AG5^d!#4xTs%giEiWZ!6MoQSbpRX43FDpA4 zBt`JhD7Dt`EAIO zXLiUv_%o}`I^@XQw>k@Qx4jWAf@xPPccp7eN!`$}>%K2+(t7CBnKSoutp>^o3iabt z2yoxHJN!c))9GqtmR%`@(~o~=J8*I+cR=uzf*KdFF?0E^VLw<_et&m&n@xv!)Wq=pb0&hKdivCY%3w+wqX=khzuEi0{O;Ymf;*MN z8-JZ`-nq)^&=zeuHMQVR9YsZ_a?M}n_s5@+y zq3wh--BX`Z6eea9jIt601WbzN^w@wNzR$9m7Oo+*bLT6-(-0aXrzDA**GR75cdjFc zo^VkM_;<@&=L%{Sw&t|6o915z>b`<9bp~~=68Y&^f4{f^5PeUrtPlkezT@D*M$3W5 z%2}YaR1*hO=IbW&UhE=jO{xX<89AR~L*($<&PYq&B1r2~b4Fs&0fTdIh8y%Qm&{f> zsi3_I&6;HY$AF;hlwdyzRR~uqd1RlZ9IsxpHZT% zt=$m1YU3`IPCjl60>BxM#UOt=mNmZ&AITujo-)j3Lh@y6! z54>;HI`ZEdX7bwpTaW#w!NFZV>~KOw`VduF`J9OfgTk;Ha(9zHetgB48jroQsiEQH zV5O+zYiO-we+{F(Nx*}PTOH#-TDwMDobQIvn@r3rHnP@;nP-@2vG}g%lUM0vyjmam zX1p>ZOuN*3IX;NzOvu5Ek|N+2O7Gw7CchnyOM379Zx{c6T-1MS{4dA4IrT6ahrYhT zlc?9lDQvX8dNnrKFECKg>%@r@iK1uEo^8yg8F^-geg14j;ggVPXVWTl%n7bb(UP=! 
zm&5PBZ?#^CP%qo;^Bn73z?+1M#uP16FIwkmY3cGnAycp0w+H-M(rfq`5z73MvV&=B zb(eY44tSJga2lUiN2&})(qyB}qEq_Kq_pZr#LSdg>3=3A$n8XCP1~Lxa9_uQHaoS1 z%6VLEf{~~mk2lq5i)-XGcp)@P3JwEGiXc&VW~I7{%@4X|M4*QR&{S+jPNxd3;#Mg zTF;T}C4rJvX*WOBn><3GP35}`S|G*w%lo606D`B3lY6a}9O@vrh#E^3(`e#14x)v6 z#AcT!ZQyH~F?W(+GxiJ_Xx5CBa($SU((nhb*N-kE*IvOlBqUK#_T_Nwr2lrV4(ATbVAre}A%h(~AhoZD#I^fd*}t863tP zg;PBtm#kOwTSrJ={+ynjRQzZvcC&nKLdp$3G6(7f`hycA<)ZjJ7*JFK&M)v6oL z$%@(IqW%mtWxwHGC!)@b^2`2w*4syG++!8gC^cmNAm`u}o^9KBmWNXtjTG^|2@b`{r^fyzm99Bj@olm{9} zHXjs|k#>vzKL3%Ey&;McHx<40j1S1EKIi;F zS)&#leVap%qj0gaC-&$wpKs!NHkfKqx5G;H=882*BPd_-qG}U0Nd7lk_C0)HP$^RU9;pd|F(dP-yz^&{Djh>YF`S~p- zV71QXV=dJ2p*D6)@5I6RFCShUE#j);j`VbXb9`g=aXI~MDjLjxG3y#t+*-LFL0s?a zR%X>iWyNJjh7DhMeEuwbv4s0@OG@+&=5cYPc&))9U1=}IFVSpc!@up)X+!^0{4M6x zsPrbflZ}3a&j5Y(c!mZb$<3;r20yy+Hx2B!KoY8VHaxzdM0fvsy{6QgQz=kaPfxv5 zEilmSD}WCyJ@q`E|WKd)2F|1H$=N5AcEAbB39-0iqNdc+69$s??MxREFRnSvho}XsgE};1KKRzms$~;jlli9d zeJYVs%V`=j&VR5NwB=?{hJr6XG(R)Nzx$`7`o`U$AOo*Fu4{~po802|2k@ve39nH< zE_0ee<^HF}S+IESYi&tpbaZsQXtRp@7qhuUX?98TPA?IZakuEWaQoAh1d-*+MGb~n zOSnEp%i5SdFnh%O<1I=dxVMK{)j68Ef*@VA3TWf(T_-=-8*h=ok`KmGNDp_8X9=gS3XA@a-#~U|7poFyS?N0cxCH# z!5|z{Y+*H*OlOPX^RL+x`G)JSdBCa1qAB^K?v(KGa7K@YWsOXuEg*bDC>rTkY&PNt z3J#U}+!|F|ze7$j(D?0>f7le{Do3YhsrKvVr;>23XcuJ;*ZBOjXddH&Me zgTcK1PxnQ3Ww-kr{JUhVEz-$^VJ-+liQo}A)qj2>?YuesMT1iv=MOjV=?_idmfSt) zMA(|R)(U+bdMW+s)alcyC7mBXqI6dnFU-#wR#BQXw7oiPI-CsU0Zu4GhpPx z2td|vNVKZGAI%qT`<$(5TvgiT;mk;9X+tP@)a4mYEWV5Nk}>oW&Dn~uI`rx^NA$#o z-qnwlQ#6fIy8CZbR#s+|WXLyt+;sZ3nYneRM_-34gKmsosxjMI;*MMBA@k>KLPHbu zgFiw+|02zjoMlJZby>K%Q+o8b9ZF|m6;w+sbeq4_2f~@c=g)Mz^YcJMA~49}89}AX zEUBxlt>-ZIEOlkojPG$=n44&vL_7Mp=XY+2RyG%eV0|dW0M*XsnGGzTz)QWEiS?po zzrV4%o@4O(5cp~(`YvULk(ZEqrd%aq?kM5G!H;{odVAG78Mm8h8lB79*ZIl8*Ks+}=& zcAJ*}$HwGe$;xtU#nFLO3#-)zRBL?TD*Ks8OUWGR6DOF$Zl((32tG7FBl$)@tT}QvNtW$L_xW9#yA&$> z2jLdgJ0m^mHC$4|>&kB?P`6zIk;VB<;^8-a=5?$rQ(q$%Gh;Y9+}rW@0<0t%|J%_m(Ux2vf9>vJ)7b)9*}_{^qNI0=~+gTO&gKCy%2` zdKHzW;6>@!8_P;^a&#k)=+7X;jt(!Kwf^{WCCw4s0&hNFZxP7m`{RNSAAqAkr+a}8 z!Z3LRz5ZiS6uPXEwA56bS%@#hcG zWpw_eY1DLxFF3gAMR5javTN(uL#w8E)+K+9xOsS_m3)V7NCXKLE10>AyiQ0<$BP2g zuGBpsdn!M`?PQL{_>3w_v;=?Hx+XK~2pwH<|Zsf5kqtibbZTvG%rM=hCcs{N$Y+@LDDTC!znK(+j zkOIv#*e^U>GiheYl0#H*ycCAlG8UFsRaKcK8T*8Ic)*%7o0LjNOS!K4P829HLg&KI zPK|coYG?3LlkH}d`J7zEs-Yh!(S7{_D;Ps6$uCM9_HA61oTxld zxQ+!(uJO~MrAyvh`uh5!e$Ut~rqIMF$_5nlqs30CrX3Pis=!0ZBcY+8X&GY0fB&eL z`YwzwwscH;Xv{9w;$^DrEwx~2-&=pygC1PX)>QJFTPQoT-->H=Dk<3E_wGIFF#_xL zHgkBIA%xEOji|1wre-*WK`o+cMA6r+U!Oce|3L5jQ9sUTzw*UpVZx<9^~Q}~7n_XY zTxO7?NtE_vbrFhQRj4TbVac^RFnCV?QUj3RH@1vZIg`~`_ejsr_DUvJsHN*)_9{XT zqeruGU-NUx<+uHjM{+i!W$NYl(YyHnG7|i6&Dij&A}IFN>o#rLfnzM$*~Io;s-F97 zkFyV? 
z%K-3opas|F(DhVtb*gQGuH5A}h(2raiZ`$fv{pA1pMaqKdgV}&_(j@nqvV1QoEm~G zoR-=qAig=XKCFm#GS^u16k%_jZ-tC+lwu$ga$eJ zPZ5W{@9x^JIcwrIGZ{0f3FQLCPOew@g-_u<9u3f!BlIrDY8GWhZs@JQEPCI*@m z2veUkJ6h3=2L3M8@UxH4CtRr%hxi5k$-kb#vrMyXB;<%)s+-%9na^ppk;j;(dLV4$gi)I6s!+athqnx1lz5bm0wS78Vv= zJ}$FS#J%z8(W7X{a6~Yv0C925*#!uchX+FOS^h$&8S587GhyQbycJw5%2n?PE6)v8rGaGG4FCxiU7Xfe)uh0t8xjfZst%-l9mBL2w4 z25%l-C+s}Q_A0$3%J9lnq@^KzupVOCB&cCK-o8h11xRi65VwqjwfW8r>lh-AlR1rCsv+jr~;&NOae)AqE3o&!xeK`CyNnrTIVz;S6gFcwfk zG_hOh+~9b=V?m5e-dTubRsbga;Ih%f!V%|5_;$5+BbG zmM{2OF2Lh{iedd;n1_V@cAOF3uUB;dVp0(lerRRS?2)@;QD6z(eq>?x>s1i#J)}Kd z$;>PQa^v@;EKn^<=rik(5z_GI#b1zpc;+`;+aCzl5?aA)ayFvJh6`s*w(<1L&kT!@ zA47f|`LP*>4Z`4^nUh3dB!NV+`Rhn2cZg8YY%Ph6Sd?!f;_u+&E8VR9+M84C^-|^| zm!w}he))&#I{kf_?>4>ef?s(J%Kdj9mwt`oo?L-CdzDa~5PGh9PE}L(5+Ng5y$NTn z1Tt#%>tIS^SyMK&>IVoKI*wrwa?S&|q=H2{9P5bQ(eozqSjlGXcIO#mT=I1$uDy7U z2N4Sta@j>VFS1?xMY-*iCUHb7-Q1uq+=J}_mW1`Jta^;>kM2?fVS8rrV$a6gf(8$Q zp<%qmlGrsIEd9drk;|I{2H*3$+*KGK1)KH6yZ^}lJt9El!%`m*5 zTY%%#aXv|9UwziH)uzr3Vs@iC35t<7e&4+JFKKD$_gy9Irv52K^J<96(z?cBv_$?w zS`uG|p$odZmEjszCF^`*@)8 z82Y*xA3%qpimq}GIw9iR_|#sIM}DNl?3WkM4u9>f4un*GnzSd~YazyY?h|&IpM7Qz z9kRB_WudNdJxh>?siyDvN`6^lmElZZ;|fS~)CI!IqCz06 z3|^u0eF{QZ!&P%x;8}RM-;x{!Qv&e}#GvCH9m*R%jTWH` z?CI(x;6_2wM%-!BA z5v34nqcXbHJut>a)Td}^5yy&9;pCO~872i|7wupx)Wy}-Ld{6f$Vt_({}dw(taZ<> zu6L_Yu01>|R{KiI;1S~+hC#`oqQkax%~U7%f=b_7t+O&RGOundUmcAGWzldYN{`*n z%PUGpJ?7=*l^Hzj>Epv}F9{(8#sF}uet>Sumu{k0Qv__%7X`PSuv_S3S- z!TAwKP*+ybTr#L?-Y@LAV6mwLQNjP`4}tphac_>Dc-sLGJ68;qNA*?Bh&kh1M{%A` zg5lBq=f?85IT5(vh$9mZmZ!@&-K0&eaCTJo9Y|kd@YB1k0qRs8jsjjuNi7I{^kx?? zHp(gkN#Y10g)lxo{@KD(G;SeDT1E5e@0`($`8|2E7uiSlQUO%1u!9k%_(ntxp1C>G zvY7s-rCjT^JzH@q(HYX4N8z7>rBopr93KzQD8UkMkemZ6P|bcV9ed@`l)(9%FOPL0 z(0YRVPz331iR}cMpFht0;ifg+Y|4d>b~)c_WxFA~d$_q-aT&v_MdF=+2w8b~?<5!+ zZQDcj$RhwV;QG0}Kpl=d(Sf(VFe_D52o>!i8h?9%!|nVqB&@O~Ef)d2-N4-``R977 z!V=r>Y=)0CALx7$&{mo{Zv*WP*V}L8QSDKA@)rRYK=pI)jGY00uCoX_Obp$}$52W0 zUvZOJf*)cu?hh%t9Pc<9hm05=Q0%=O;gTIK-h-gj-m;9MN}Gt_WT^j$q%`%eVTr*2; z2kPqrE(?IC?S^&;}(NTaxl>4f>S}1)Uh37Sx;xngDyTgvQ7so;v&q>U<;M+G# z)Ygdy$dN&~9vp4Xuh5=@aDP$*$g|~^pWoI1jBjQU9W3~TR~K*Y>X-Nb#99wF?+_NQ z1)>Wk7C%sxFh=+G_J*L4RKZ~LhFBWw>(B4kfAeP=PL@3IOD>>zB3!qo^WgQl}B zlrPbbG&(m4X&OzB57G#04V*5A+xr9~=M+xT(v+{r9S;Plxr>Dz51v>U>2+sBHnOSz z{r=V#ZG#$E)1TS(<}WN7CT5xbqpJ7cv}_q?i`#>Cj?Z!!d%T}^Y3$ZyR4b`pR8%y} zv@TT-rRn;OC$%hE6`FfDl~=zV+y6$h`aXeAgacewz^ps+K)^ks#q>u#uzPF6>Lt4G zVCgaN5<$Q6U1oEc{J=zs!vk%c1hK{Gz~mpkDE-?{3>kaDrTPL5`KpyMi6ZdQRg&_Y zG@>>8k@cu)uVe9#$f57|zq$9<_er~|m>$9lt4bvXFWM~5rV`zHsMWx0VLS`+WBOSk z)Z26Teg9vl$|@@_1XA?hJo`?z1L*5}1@2~@E!f_4i1iGzZG-L!Z;qAIV~#I=bOqFc zB~_0Vb8QwDIY_l$hTBOPASs%O7X>JM0PAd$mVI}F{epuHy z>Z}sv$7E=V-ZtXb#1ZyvMM&Tc9F-6*>U9?#G8q#)c}VB0+DY$ueU>h@<>u12*(FIy z8aeo4AjpPL)1tXXw(3`}&eGK~jkZ8*k_KCE;w55GD_uD?)3Lx1eBtsV9+Bm$``g2t zHf^%boG9MSsNU~yQV96?l1<_C>2q`-73i#o*{D>v-bbl#ML$*K^w^utE@#!aMBEWK ze*=U99a}y5^3Hlf`#la$v;1@pG@p5rm6#?}PQ$9Hzv+XYcPm#p3zarB%v8I~*Rc)5 zJ|7GRvLQ`l;Q2|9nMsWcFeZs{E&eB9%J9${dOoZ zy=Fbnbl+_tjRes}A&`rc6AxLSV`OB+unNWOir2!Rsf!YwxOHjy=#~ttaF0nNZ{Mm= z;?TM`+LGYn2z5e2&McvBQ%X%sOCHJ3KSDV%Gj-zlacFg-@7eZ6il3K&hq#io(G*Si zd?`q};*Hq{Q;)qA&s^GDqmJj@GC{IE#+}Ag< zM_;>KW6^AaM8`Cz*8A&^&TwD9xa&Y14COJnl5Cz#J1ZRxdt~F}bJx`kq0m6%D-`V+x&6EFvVMuTG97Ntkgcq# zQc&(SBH~dYe07u)16&N%v{-nM9?tiKm|molN=M5-RCL?SxrU*|(B2^~-p1zR;h{uF z!q)Rr#>hk^DpWGX85U+rDdn;h9f|!R-BB#==akf2S)Cf`nG=d7IoTH9L+el=I1!c3 zK&o{Rt|R?0oWYefT;t*X{?R?~ivPoAT8rGLnOx z(Rq>HeP3Cv6Wo|Kz4tL*6Ddbt zdEULGQ)E++l~sZRr82MT6P%GMoydb{zjL+3geAJD7}6RZ$Mh`Wy$s+Ey@*$c=UkDq z6YYqQkU}MDcUFnb=eLnPuotR#q8uaTu4J7Q`cd3{=TAQjBLC)C*b8u!|Mo!f$d_kJ 
z8z+Hr>N%R;Zt`l}RYYqN#A}E<`}B$oey@C#G2ziCFvVcm3*+J0z{qdt>GXbdfQD;lV>z=J!_A8-W%FAe z7GD7S6$N2vmJ{8nOT+o?drO*#Tt$8G1yXE zLvPq@tTJi6%|QJ1q0sMDIx$TY6!GHsv<{rBjU>f3aCH1Ao&Qg5HS{?} zunXUFw5a{=)?-N{=)e+1nbvYVDWP`ir0X(94!|n_;x$2(Rm1E$W${4iYpZK8H#cgg zKm#$9x3|yD{avrAL0OB_vI=OgURUABi1eG3I8Hrc)AF{Kr)hggOT(F+Gy+T%FX}S? zH6nQg-Fbp2a8+u_mfjm5p*;xEG{mZ8q|co@mo&m9ZcpqFyiFNVc$bjF>*a2*<;1%? z#H#I?``Pj34_?`=ENa1r593+kVPTx#Xa>wmuor#-ib&|u=ge)E@aUtSVsH!|{l;K& z;@iGmrBm(p6StF#hJwJrxE{1UEo>=hmV@i!&u+8YQpLb+J+9CQ>n4a3k@WSr7?Vb6 zS^%L1iGPIcfIe(#APu7g>kflD;(?B=akFFB#iV$_xBgENIFABI&h zBPNZYr%DuU9>{?cjaD<|_H*h@XoI<6v&uRF!ACyIX5ISCM%G@wzr{FVG-WumM9XOm zH3l3c9@)!V2O0llRKwz7{~#li8I+8n_ra=eOxfV~U>SD4Z9HUZ5ys3M3P2tLw1RR~ z4-M(jqHuiY*jFD037q!&F6pn87(Z4C`Q4gaZu^(aGD4UUXT9>qyfR6G_sf=iK)fNeSa^y4nuk9 zB@}1mafFk32AEq4ABEDu=Hu>uj;?q_gY#PC8*i`0P-J{PMr@VdLEjCi-koUj8rc>; zdI;5rt`^G2*>hTP?F<@)23@tRMB2R)Q1dWCjk>y>1nHVI0uvoV!StX>YkUvspxBtZ zXFmKa@~yHBmF15*@2(D4Q}PI5%`1u1{5$;Adv%ZZU#St@9_`4sZO1BCGG{HQKof1? zID;v|Tsem&nd3`LSPFP0K+9Ctc(V*%p5U2&u%@Z(cYzb^9@lFxz#3!q4s37Ih@$tI zrp-))wiaGKwN=$*Kk+ymEP7|xh0SEJMU;{bjIH(F59%M zf)Z;4RitqeFB^Aw7;Z5uN-stb(o0|plSNIPiYW+ibN2d8x3zA{{Rdg+4o_B&2ECHf zXbU2zfg`zP{GP1fS$g_fx#SjQX&1A+@h4ZCgq2If8(&3Raoy=dmJ?JNoXb9}hBoQ4 z7xy8vzbY|Bi++b|q*+*muHgeRANgOZt8n;Z(zat;6#VbreJHqkQ>VrsNf4i4rXPr-g;t;vCyyYf z9{bUt~#h@hZg-=IF#nRxa2Io%lYI|d7Sy1I}qmkuv>{3m99Mh1=gO%CZ6SAJrOo8cjk5+v(dT{9)6PClwU3N~oo=){L4Qk4pv} zUOdwiW~B9t3d_SRJq|kUEJ)A>j=|Z_a99N^q0^>0;$Iexfcbru#eQlo?)lN zA&?p;!JQ@bq#HJrvniZ9g-8t;{%&>Lyu4}Yn+#nB%eH2>J1%nrT8331Uz0~L5}hcz zcx+)eDx9CD(SXo|l8%q{c_8^62-n?G%X z(i5pIUq6=D6q=$3iSEH1&cQyXB}N)e836GpuLa3epOoy6t0;O1P<)KhYmkmNs#LSh zA}RBoK9WeOdS-ZwGfdS2M|A_K-{1dWc5eFzp(aZv;~O^;>n;qOTkQTZ{!a6vwH5O4 zKzStN3Z>0DccF+Uh%#vHKdkcqZfhX0rWj43@lm0#$ueRpsKR>&sn?j&_pOB|5+$fO za}G?JRnWmq!m*PA=wfWROWX$e9f~Q_lCIrDe`9y^ndg=6TBl>4We9Tnr?xUPJ?P2(t|4sgNnqFcH<1 zO^Ecgih|_|RM#hY(vYnUq7r(_+|r0#tf%R6#qa8o+Go zbEw7_2$c{gD8@SX)65SN#d2Z+I{0&?;J?i|{Wlu+|GG=$f3pCsDoWEq!Y9ICUlH1u zQy3E?A}jnZOUpf#s@D$|L5&C)XZ+e59U~h^h8$x+YQwC_i?ZPh`K$r~E9=lCEgcQv zALHx8we4$6Zv~F84IPe^3*v!NwMp9(&+-JJ9C9#S*@ZL`fXxYvbQ@-VD-Tv7!EeLF zDo!+4h_pf*h_-ZUXp@=dF+b-k37k80_ zY*GuH@vJBX-Uv8}?`3s3hdgKg!?!{(FX>GYh)yDhhzU49M)b+b$-REpnvwQVqs=rl zq{N$ZH^8zVdfrpElP*G2OIQ4U7q~CrWB7fJs+{MO7fe@rczXxnOhXpKp_oDVIOW47-i3+p-RV{6nzsuI z$`>xoJLF9FC-O>2Xo6J;X?ZNMx3 zxbuzW4&O~>au*pIJLhI50s)a=m71?Mspi9Mas{MFUQj!p`%-O&HMfYEgkn&!96i#~ z0ivA!y_RzqPW&boUl^Fm;0XB>vidzDt~_Zc!`|fnLHJ5zJY^U3_&~5Z<(TT1opt(! 
zS3p4E%!)N^+lg>Zl(%yX4bgX?89}@qCBCPGxbSE0j_&9y|-zQvl?_K;jf*%^mqtL^Ndh$2$hB5|(h) zr;6VvJ%Ck?#kX!Ay9$KKltFMH3`xj~mFOXeat8Ysd?{%rZHM7A!EzOW^m)kG#?mh_ z>^o^x$oY``ftZ{&xntz~73?!`T|`b|#0ZiWa?B=oX7+=x;SdYrcAoqPwjYT4-^H#C z|5~VClSLi-N;nRgJ;P#;x`_Gcp$^3R8Q8+@-d|^0RlHwVSPl|g&X+zV zqG5J+w$#8D?bp~*V=Da;N3ANCyE$X9E7M`DUl$vTOvt1&BD}IShrYA|_UALG10>tY zJ5)k+4MNUbeg)X>0M8Y?t2=*6?sWN**q&yDRF~>FWkGB!a7wZY+kUx}1FKY5cc9kV zr2Q`Qj>N=*1aTTlD=!a^6nW8PIvvdI9k<1GN<2K+`=b=uG>n#YY?FjpA<2Dc7&p0B zLWm7MXuJ!}(C!V~uy$_NmZ$wOMVbUco|0pg|K<5=A|G#@zW>y%4F1jSICUm@g9kA) zzIv9GlT(1OUe?@jAqvIMEpeB{`*p;_B-d;Qi8}_PoLEtCoCuTAI1Cn;Ab*z2O0_)u zekioeW&S_{KujuhzP~;b=ME^flgJmVITxUXB)iQ{sPUdlr?s5=^yw3pn1}{?k^?|!?W#FH>u_@!M z3)M08u12PjsxQNs&lrnWu;gYtl%_zesLFqn>es7*VVH&q8w{>u3(uO_mT{-+3lG2S zUM`@X>5s$u3et9W#KLSGIm9G!0a{(R!kwSx@9kQdgA|&84E<+#U?A(a;x&g@Fr(MWG(F81biL~<* z%Gk|KF7(j8FD{Sd!|6f=-|jUJy7Zr6sS@LK)!KRJ9j6NHFJ!G0-qKFkxG&j3T$&}D zHn0SXnHHxIA&9j`=6}w^#x_#=@ykWBt^)^oaa~7mG1N9~Y_r|CwY%W-WI>C?Tq$->g^*9SRo!gX3iW3r*}A-=RJf}QRwYx?2IglN;|R4k$*>O#{MapE+qJF z*fRrR7H7IjM~n7fdpv}(*kawno+TmR!cRh>>CPQs_#r4|v7jkQE#oxt+hUr1G^As8 zM|>0MJ&1Sruw~z$#7s#RVBHG~699g>Po6RYY&(AJp3#iK^8_1!jv_lRaDrc7#{J*gFf`vFzS(v2Q6 zmg#Dh)De)j!1{Ondm1KnoF%psWEJUIIbztLI^VODd zR9iU^(Inr#hIhhiUx_0MW-KW59ipP|h+hTF!2{&TAo%A*?%Jd#+8{gWjXn))dhWUN z`F0t9RS1vwz{8Y>m=>z)AN&*ckp%wPs;!XxeX&`v0?R8F9o2xXWPQqRJkwNwNhz1V zFg9+sYrbTd)CU&RfkwspgkGsqH@q&|1eF8w{&6a&EcL=d5X4&_!XK|lM z3D&vYY@g3@r?27PB5A|;&y%LMI_%eY>gG2WYKYD)95YIBrYy4QPf9# zTm{3KEa(}aP@Q4L5LHIW%-BF&4?KCzY!O(^5*+2!sQdKPug`LB{(#&3mwMP+wfrw% zgD7H&$0QRrT3Nk=F&+ACnmwioO}wyn!UJ7(CRwy)C=Y5ic?2Q+SQMm0zp&T1YF@Zd z1!kxuZ~x%O-Px=3^59-&zAa{qT^`iCy&rQfw$98#w?Vr-^cGRC+KDt}ScNov*$a40 zt`aubn-mF*dQf`r@P$7qfp0UR2d>_BwraBBpP`szl%iWeyoqk9x zQAQd&7?MX&U=u{KNs_<7h}rw>8EL~Ll{6H zr}b4?*^?4j#EuaKCwb3qj~S=JFxs`U>ImA8F7rF{!jm7rThscTE{qSq^o1NWPAvg3 zewCs}oC~%Pz~K3KeZ-$-8!xYa&5C02@YqA#zjF6W=u5r3jVWPx2a2wLjPg3auEuA- zajT!3L@?iu9XrTA38)2DLDoiwGkd-d%jOlXrbhT2pP9D^Q_wZnS#>Sd^+%@~S#gNa zS_%pZ%5>7Qcwt*v9dxacf&PnhtKonP4911Qf?SEE#zBco71b|p%C$c>74-M$mOl?Q zKmE{uqk(v`kM?qiS<6u`;;>;^24t;1!_wG9ha zi$NuW^3l!AO@#&#R{YF*HgmQa`&f&`t74)teXX(WHV~jirZt~VZ+~w}O_h!D%R6q) zMneD!mB7&%-?>OUb1c=+*yaWdm;7BBo%tPHRrh=ABv1a2F|pENroC$B7MvmAFdA)% z3=j$eLFM~)hX5fh6E7E(P(ykC&X6jR|7XGY?l-w?dV9@t)XunnNmxj3``e7R@-_s; zSd}ATY9g;yU1^%BKfLPLJ7)uza6yK5a%yX_Q(+=3&$&sc=8co;rUfxQ^^Y%9eD^Am z`5v#^wK*YVCK`=7xXe;ebu&rk%7GTh8%gHL4piywu#r$#xKObW;Ak&4s*3l8BAYy} zApTG<8s{(%54!)~b_w!~LV;81WEU|xPd|Mcjdbz|F%^gwO*f<@jg)dZ?CF~O$(2Sj zT)h_WbOTj8q1vvrk7AfF3syWHlwIQLKi=o_kt=9j`HZ<&}CM#{>{9%j+k{m3K*5v!on8{##XvSkG|7%i|; zTIFAd95*@qx8-piLV> zcQdk|v)L3x09~O3(a$KM2eBgSWRpe!9bo-g)|hK1BMl&a$*Ti{PYB`;TC;PW;eGHG zs-58cF4CRX{^)+cbIkmshXT>fsCF(+p+^BU_ov=tqn~SzI;%0*q9*IeA9Ynj`|N2$ z%QO4}^O5-%&7DntlgDsL?q^I`0WVYr$Mk@KC}H#Q@xdX#9h6scNWavRcqw9$Qd-%F z%`vgZ5$k*!Cy~1vIF@}M&0ai`klbnc=x_|HKo$emVb`cJsKUyc}$qjMI5aElg??s_0GXL%}h>Vv}^7d=gA{0m4fr=AP!zD${Zs;}zIo zl||UV%Ej$2M9FdLRovqY-!nvshCFZ$53}mB&9$xGp-oL50kxkXs>W&r3LG&ec46SJ zjV$N@E(R{IXjqh z^N72Hv`;k|Z1u`>7DVbvFUcUz*Z8gQe;9we=`k+;{_s_pyuK=l9aK_Mf(_xwNo=|f zzWH81N;;i|50dR1*{sDe_<-<8+QXlb6j?022U^|8$`q^>x#29{WCumJd0gfjy`1tt z8awxxrp_>o*Xih(mkK!1k}eK3>zKMsJZV8GUbyIV4a6wQin=jEX$5UcTSY~uZd|gG z=$LU>2bG(bQYTi#L&Q04QNcn5T4|*!jeLSq(R|<( z<&xOgGv+Tlx`fRkY4I#2=-#E54vROiea7i}r+FO&i6O=T8*$6C#BnESk;0`jAQTCK zGOR*G;-C&p0G7I6WQ5LvuZj{=et1`xYVyOOle<0er`hP1!ISY5fl+>tmOHQ`r*qgz zPOlEh*#9uS+>Icj1NPjpb8IEf@#_x3?c29CefLIO3K4&$Qfe}=s|=qC19*Z2#v9r- zQYEx`mN@Oi&1*zg30JzTb!*BZeP}BEio3+}@%9Gsk@T4P_0*duCV+%t-Y5*^^70ZY zxwX@?I~||t3ckzA&;%LvF33pl2fB_c+oqcK97eY&;$wSg`k<>~>#jdkU%#4&DpLA* 
zn;Jd=bmyEg>0y5N(4epvUWy?R9}D^hQ7uxEUZyaPWrd$}(M`g9gLo2kz+Exf{}Z-VU$yVlN`1$tTJ5kgIBLmV8e688Yu9Sl$4~^Y%bIJD z@x`^}-BOr2D4va(lPOnaicmW0htg}LvUh-FS5J6e{Otd`(zinavi%#@x<~gsh=GNi zAz0>5)FF=_TvcRgRpcDunGP$T9c&d&v%K(e1~s;WG^Znj?fi&bX)_r%0nHZ{%@!oA zTQ>NM2={HhM~_bO5e1&-%I}f}kK}+=0!A)q{j&zKSp#5^k)~iZSZlstus|l2=IrXR9>ZM3OVhw17 z!cBE0#fB!K<6}XvRjxD5@*k4?>CSSZ@(fxMxyR&N@Si*HoV|)lg$7?H?hwZ=ia`Zy zTF`s>m7Td2q&lD#4o2dzCLvVoX#iegaBJM&g3(MR*JdjE$q$Y2ER+_u&!^@(WDn=^ zppvlT5TPL9G+AqMd={GjXfpl$lqIpg8Y}^w%zTo8z!~`Q2*}H2Ama+%&JG@^GTF?W z%z>wtkW&%QQf)Yps7C7np5eQja>KkvRB~j3X_HwE{^^{G)5|$|PlNLKyJ;B@Mqx|d zBn~AlHtLB$HMbmYOLvUm&)0dhn!5UWoq@p$fu{w%YEj{jh#nOeo1kNr_qoZMe55p= zs-tq)mDPreLWCpfyjEJ-fi(80$tOa__~>upeFc90IPkclKavXMgsx)VP8~y3T)gtN zLyY>?*8ntjrFf^eQ6P;%DGVQe?Wjx3WTI+ zsoT9l5%;ykzB%jBzWqc}@evgUM25Ghpp}XjQ%i1&?5GkJNok}1)*q|A3c6Tsx6He2 zNn)RR0BX-+P!01j?SVsRoo2=}xga<$cB@2=ZKM-hjIRwkI>%g!Zow5jciPGxZGnq3 zew~$T*?Xu@+A-E}nB literal 0 HcmV?d00001 diff --git a/tutorials/01-Introduction.ipynb b/tutorials/01-Introduction.ipynb index 751f83d..8fcbf9f 100644 --- a/tutorials/01-Introduction.ipynb +++ b/tutorials/01-Introduction.ipynb @@ -1,6 +1,7 @@ { "cells": [ { + "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -46,13 +47,7 @@ "If you're familiar with other deep learning frameworks, check out the [0. Quickstart](quickstart_tutorial.html) first\n", "to quickly familiarize yourself with PyTorch's API.\n", "\n", - "If you're new to deep learning frameworks, head right into the first section of our step-by-step guide: [1. Tensors](tensor_tutorial.html).\n", - "\n", - "\n", - ".. include:: /beginner_source/basics/qs_toc.txt\n", - "\n", - ".. toctree::\n", - " :hidden:\n" + "If you're new to deep learning frameworks, head right into the first section of our step-by-step guide: [1. Tensors](tensor_tutorial.html).\n" ] } ],