updates files
brianjo committed Mar 7, 2023
1 parent 4331035 commit 98c3bb1
Showing 13 changed files with 844 additions and 348 deletions.
110 changes: 55 additions & 55 deletions docs/02-Quickstart.md
@@ -229,78 +229,78 @@ print("Done!")

Epoch 1
-------------------------------
loss: 2.300704 [ 0/60000]
loss: 2.294491 [ 6400/60000]
loss: 2.270792 [12800/60000]
loss: 2.270757 [19200/60000]
loss: 2.246651 [25600/60000]
loss: 2.223734 [32000/60000]
loss: 2.230299 [38400/60000]
loss: 2.197789 [44800/60000]
loss: 2.186385 [51200/60000]
loss: 2.171854 [57600/60000]
loss: 2.300994 [ 0/60000]
loss: 2.289627 [ 6400/60000]
loss: 2.278757 [12800/60000]
loss: 2.273481 [19200/60000]
loss: 2.260533 [25600/60000]
loss: 2.230715 [32000/60000]
loss: 2.240870 [38400/60000]
loss: 2.210235 [44800/60000]
loss: 2.205794 [51200/60000]
loss: 2.179301 [57600/60000]
Test Error:
Accuracy: 40.4%, Avg loss: 2.158354
Accuracy: 42.7%, Avg loss: 2.175595

Epoch 2
-------------------------------
loss: 2.157282 [ 0/60000]
loss: 2.157837 [ 6400/60000]
loss: 2.098653 [12800/60000]
loss: 2.123712 [19200/60000]
loss: 2.070209 [25600/60000]
loss: 2.017735 [32000/60000]
loss: 2.044564 [38400/60000]
loss: 1.971302 [44800/60000]
loss: 1.963748 [51200/60000]
loss: 1.920766 [57600/60000]
loss: 2.179688 [ 0/60000]
loss: 2.170581 [ 6400/60000]
loss: 2.125383 [12800/60000]
loss: 2.134987 [19200/60000]
loss: 2.104071 [25600/60000]
loss: 2.039638 [32000/60000]
loss: 2.065766 [38400/60000]
loss: 1.994649 [44800/60000]
loss: 1.991123 [51200/60000]
loss: 1.927214 [57600/60000]
Test Error:
Accuracy: 55.5%, Avg loss: 1.902382
Accuracy: 56.1%, Avg loss: 1.929943

Epoch 3
-------------------------------
loss: 1.919148 [ 0/60000]
loss: 1.903148 [ 6400/60000]
loss: 1.782882 [12800/60000]
loss: 1.834309 [19200/60000]
loss: 1.722989 [25600/60000]
loss: 1.676954 [32000/60000]
loss: 1.698752 [38400/60000]
loss: 1.602475 [44800/60000]
loss: 1.614792 [51200/60000]
loss: 1.532669 [57600/60000]
loss: 1.957387 [ 0/60000]
loss: 1.929036 [ 6400/60000]
loss: 1.825893 [12800/60000]
loss: 1.850506 [19200/60000]
loss: 1.775094 [25600/60000]
loss: 1.708617 [32000/60000]
loss: 1.727947 [38400/60000]
loss: 1.628896 [44800/60000]
loss: 1.653404 [51200/60000]
loss: 1.548985 [57600/60000]
Test Error:
Accuracy: 61.7%, Avg loss: 1.533873
Accuracy: 60.7%, Avg loss: 1.570322

Epoch 4
-------------------------------
loss: 1.585873 [ 0/60000]
loss: 1.560321 [ 6400/60000]
loss: 1.407954 [12800/60000]
loss: 1.488211 [19200/60000]
loss: 1.364034 [25600/60000]
loss: 1.362447 [32000/60000]
loss: 1.370802 [38400/60000]
loss: 1.302972 [44800/60000]
loss: 1.327800 [51200/60000]
loss: 1.235748 [57600/60000]
loss: 1.634544 [ 0/60000]
loss: 1.598077 [ 6400/60000]
loss: 1.457816 [12800/60000]
loss: 1.511364 [19200/60000]
loss: 1.425202 [25600/60000]
loss: 1.398494 [32000/60000]
loss: 1.412483 [38400/60000]
loss: 1.328141 [44800/60000]
loss: 1.371268 [51200/60000]
loss: 1.270080 [57600/60000]
Test Error:
Accuracy: 63.4%, Avg loss: 1.260575
Accuracy: 63.2%, Avg loss: 1.298073

Epoch 5
-------------------------------
loss: 1.331637 [ 0/60000]
loss: 1.313866 [ 6400/60000]
loss: 1.153163 [12800/60000]
loss: 1.257744 [19200/60000]
loss: 1.137783 [25600/60000]
loss: 1.162715 [32000/60000]
loss: 1.172138 [38400/60000]
loss: 1.120971 [44800/60000]
loss: 1.149632 [51200/60000]
loss: 1.069323 [57600/60000]
loss: 1.375485 [ 0/60000]
loss: 1.353134 [ 6400/60000]
loss: 1.197045 [12800/60000]
loss: 1.282228 [19200/60000]
loss: 1.185837 [25600/60000]
loss: 1.195442 [32000/60000]
loss: 1.213788 [38400/60000]
loss: 1.140980 [44800/60000]
loss: 1.188507 [51200/60000]
loss: 1.102179 [57600/60000]
Test Error:
Accuracy: 64.6%, Avg loss: 1.093657
Accuracy: 64.6%, Avg loss: 1.124997

Done!
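
The loss values and test accuracies on both sides of this hunk come from re-running the Quickstart notebook: weight initialization (and any other unseeded randomness) differs between runs, so the numbers change while the downward trend stays the same. Below is a minimal sketch of the kind of training/evaluation loop that prints this log; the model, dataset wiring, and hyperparameters are illustrative assumptions, not a verbatim copy of docs/02-Quickstart.md.

```python
# Hedged sketch: a FashionMNIST training loop that logs "loss: ... [ N/60000]"
# every 100 batches and a "Test Error" summary per epoch, as in the output above.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor

train_data = datasets.FashionMNIST("data", train=True, download=True, transform=ToTensor())
test_data = datasets.FashionMNIST("data", train=False, download=True, transform=ToTensor())
train_loader = DataLoader(train_data, batch_size=64)
test_loader = DataLoader(test_data, batch_size=64)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 512), nn.ReLU(),
                      nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for epoch in range(5):
    print(f"Epoch {epoch + 1}\n-------------------------------")
    model.train()
    for batch, (X, y) in enumerate(train_loader):
        loss = loss_fn(model(X), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch % 100 == 0:
            current = batch * len(X)
            print(f"loss: {loss.item():>7f}  [{current:>5d}/{len(train_loader.dataset):>5d}]")
    model.eval()
    test_loss, correct = 0.0, 0.0
    with torch.no_grad():
        for X, y in test_loader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= len(test_loader)
    correct /= len(test_loader.dataset)
    print(f"Test Error: \n Accuracy: {(100 * correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
print("Done!")
```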

8 changes: 4 additions & 4 deletions docs/03-Tensors.md
@@ -73,8 +73,8 @@ print(f"Random Tensor: \n {x_rand} \n")
[1, 1]])

Random Tensor:
tensor([[0.0504, 0.9505],
[0.6485, 0.6105]])
tensor([[0.7786, 0.0142],
[0.3120, 0.9157]])



@@ -97,8 +97,8 @@ print(f"Zeros Tensor: \n {zeros_tensor}")
```

Random Tensor:
tensor([[0.6582, 0.2838, 0.1244],
[0.1692, 0.0394, 0.2638]])
tensor([[0.7263, 0.5640, 0.3222],
[0.9226, 0.3125, 0.3739]])

Ones Tensor:
tensor([[1., 1., 1.],
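The tensor values that changed in docs/03-Tensors.md are random draws, so they differ whenever the notebook is re-executed. A minimal sketch of the factory calls behind these outputs, using the variable names from the hunk headers (`x_rand`, `zeros_tensor`) and assuming the 2×2 and 2×3 shapes implied by the printed tensors:

```python
# Hedged sketch of the tensor-creation calls whose random printed values changed above.
import torch

x_data = torch.tensor([[1, 2], [3, 4]])               # assumed source tensor (2x2)
x_ones = torch.ones_like(x_data)                       # matches the "[1, 1]])" context line above
x_rand = torch.rand_like(x_data, dtype=torch.float)    # random -> differs on every run
print(f"Random Tensor: \n {x_rand} \n")

shape = (2, 3)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")
```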
13 changes: 5 additions & 8 deletions docs/04-Data.md
@@ -1,8 +1,3 @@
```python
%matplotlib inline
```


[Learn the Basics](intro.html) ||
[Quickstart](quickstart_tutorial.html) ||
[Tensors](tensorqs_tutorial.html) ||
@@ -49,6 +44,8 @@ We load the [FashionMNIST Dataset](https://pytorch.org/vision/stable/datasets.ht


```python
%matplotlib inline

import torch
from torch.utils.data import Dataset
from torchvision import datasets
@@ -106,7 +103,7 @@ plt.show()



![png](../docs/04-Data_files/../docs/04-Data_6_0.png)
![png](../docs/04-Data_files/../docs/04-Data_5_0.png)



@@ -268,11 +265,11 @@ print(f"Label: {label}")



![png](../docs/04-Data_files/../docs/04-Data_21_1.png)
![png](../docs/04-Data_files/../docs/04-Data_20_1.png)



Label: 1
Label: 3


--------------
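The docs/04-Data.md changes move the `%matplotlib inline` magic from its own cell into the first code cell, which is why the generated image filenames' cell indices drop by one (04-Data_6_0.png → 04-Data_5_0.png), and update the randomly drawn sample label from 1 to 3. A minimal sketch of the loading-and-sampling code these outputs come from, with details assumed where the diff context is truncated:

```python
# Hedged sketch: load FashionMNIST and display one random training sample.
# In the notebook, "%matplotlib inline" now sits at the top of this same cell.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt

training_data = datasets.FashionMNIST("data", train=True, download=True, transform=ToTensor())
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)

# shuffle=True draws a different batch on each run, hence the changed "Label: N" output.
train_features, train_labels = next(iter(train_dataloader))
img, label = train_features[0].squeeze(), train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
```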
60 changes: 28 additions & 32 deletions docs/06-BuildModel.md
@@ -115,7 +115,7 @@ y_pred = pred_probab.argmax(1)
print(f"Predicted class: {y_pred}")
```

Predicted class: tensor([9])
Predicted class: tensor([1])


--------------
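The predicted class here is the argmax over softmax probabilities of an untrained network, so any of the ten class indices is a plausible output, and the value simply changed between runs. A minimal, self-contained sketch of that prediction step (the stand-in model below is illustrative, not necessarily the model defined in the tutorial):

```python
# Hedged sketch of the "Predicted class" computation shown above.
import torch
from torch import nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # illustrative stand-in
X = torch.rand(1, 28, 28)
logits = model(X)
pred_probab = nn.Softmax(dim=1)(logits)
y_pred = pred_probab.argmax(1)
print(f"Predicted class: {y_pred}")
```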
@@ -191,30 +191,26 @@ hidden1 = nn.ReLU()(hidden1)
print(f"After ReLU: {hidden1}")
```

Before ReLU: tensor([[-5.5712e-01, 4.1135e-01, -7.4510e-03, -5.4891e-02, 7.3538e-02,
4.6617e-01, 5.3287e-01, 7.2283e-02, -3.7471e-01, -3.9285e-01,
-6.7889e-01, 2.1088e-01, 1.8742e-01, 4.0150e-01, -5.6422e-02,
-4.8977e-02, -1.6230e-01, 3.0556e-01, -7.1455e-01, -6.6180e-02],
[-4.2601e-01, 6.2487e-01, -5.9415e-02, 2.3934e-02, 3.9810e-01,
3.2441e-01, 7.0026e-01, -1.2423e-01, -5.2260e-01, -1.7234e-01,
-5.5835e-01, 2.2128e-01, 2.7830e-01, 2.4191e-01, -7.7681e-02,
-2.4954e-01, 1.5836e-01, 1.9990e-01, -1.1715e-01, -3.2138e-01],
[-4.9225e-01, 4.1050e-01, -1.5492e-01, 8.9106e-03, 3.5985e-01,
3.1355e-01, 6.2615e-01, -1.9053e-04, -5.7080e-01, -1.7064e-01,
-6.5802e-01, 3.3700e-01, 4.5726e-01, 3.1022e-01, -4.0316e-01,
-3.8029e-01, -1.2243e-01, 3.6732e-01, -5.6789e-01, -9.4490e-02]],
grad_fn=<AddmmBackward0>)
Before ReLU: tensor([[-0.6535, 0.0475, 0.2762, 0.2739, 0.3857, 0.1837, -0.1904, -0.3036,
-0.0609, -0.2871, 0.0446, 0.2365, -0.2100, 0.3802, 0.1994, -0.4515,
0.1591, 0.1378, 0.1966, -0.0231],
[-0.7906, 0.0717, 0.3879, 0.0195, 0.2133, 0.4331, 0.1080, -0.3002,
-0.0044, -0.3400, 0.2174, 0.4808, -0.1150, 0.2409, 0.3484, -0.0483,
0.3890, 0.1460, 0.1570, 0.1086],
[-0.8346, 0.3771, 0.3634, -0.3699, 0.5272, -0.2396, -0.4630, -0.0269,
-0.0439, -0.4653, 0.1175, 0.4506, -0.1127, 0.1764, 0.1627, 0.0395,
0.4420, 0.1518, 0.0156, 0.0423]], grad_fn=<AddmmBackward0>)


After ReLU: tensor([[0.0000, 0.4113, 0.0000, 0.0000, 0.0735, 0.4662, 0.5329, 0.0723, 0.0000,
0.0000, 0.0000, 0.2109, 0.1874, 0.4015, 0.0000, 0.0000, 0.0000, 0.3056,
0.0000, 0.0000],
[0.0000, 0.6249, 0.0000, 0.0239, 0.3981, 0.3244, 0.7003, 0.0000, 0.0000,
0.0000, 0.0000, 0.2213, 0.2783, 0.2419, 0.0000, 0.0000, 0.1584, 0.1999,
0.0000, 0.0000],
[0.0000, 0.4105, 0.0000, 0.0089, 0.3599, 0.3136, 0.6262, 0.0000, 0.0000,
0.0000, 0.0000, 0.3370, 0.4573, 0.3102, 0.0000, 0.0000, 0.0000, 0.3673,
0.0000, 0.0000]], grad_fn=<ReluBackward0>)
After ReLU: tensor([[0.0000, 0.0475, 0.2762, 0.2739, 0.3857, 0.1837, 0.0000, 0.0000, 0.0000,
0.0000, 0.0446, 0.2365, 0.0000, 0.3802, 0.1994, 0.0000, 0.1591, 0.1378,
0.1966, 0.0000],
[0.0000, 0.0717, 0.3879, 0.0195, 0.2133, 0.4331, 0.1080, 0.0000, 0.0000,
0.0000, 0.2174, 0.4808, 0.0000, 0.2409, 0.3484, 0.0000, 0.3890, 0.1460,
0.1570, 0.1086],
[0.0000, 0.3771, 0.3634, 0.0000, 0.5272, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.1175, 0.4506, 0.0000, 0.1764, 0.1627, 0.0395, 0.4420, 0.1518,
0.0156, 0.0423]], grad_fn=<ReluBackward0>)
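
The Before/After ReLU tensors are activations of a freshly initialized linear layer applied to random input, so the old and new values are equally valid runs; ReLU simply zeroes the negative entries. A minimal sketch of the walkthrough that prints them, assuming a batch of three flattened 28×28 inputs and a 20-unit layer, as the printed shapes suggest:

```python
# Hedged sketch of the layer-by-layer activations printed above.
import torch
from torch import nn

input_image = torch.rand(3, 28, 28)          # batch of 3 random "images"
flat_image = nn.Flatten()(input_image)       # -> shape (3, 784)

layer1 = nn.Linear(in_features=28 * 28, out_features=20)
hidden1 = layer1(flat_image)                 # random init -> different values each run
print(f"Before ReLU: {hidden1}\n\n")
hidden1 = nn.ReLU()(hidden1)                 # negatives clamped to 0.0000, as shown above
print(f"After ReLU: {hidden1}")
```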


### nn.Sequential
@@ -281,23 +277,23 @@ for name, param in model.named_parameters():
)


Layer: linear_relu_stack.0.weight | Size: torch.Size([512, 784]) | Values : tensor([[ 0.0211, 0.0168, 0.0334, ..., -0.0151, -0.0033, 0.0032],
[-0.0022, 0.0293, -0.0090, ..., -0.0044, -0.0147, -0.0251]],
Layer: linear_relu_stack.0.weight | Size: torch.Size([512, 784]) | Values : tensor([[ 0.0007, 0.0351, 0.0290, ..., 0.0157, -0.0041, -0.0052],
[ 0.0163, -0.0053, 0.0237, ..., -0.0294, 0.0200, 0.0072]],
grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.0.bias | Size: torch.Size([512]) | Values : tensor([0.0128, 0.0086], grad_fn=<SliceBackward0>)
Layer: linear_relu_stack.0.bias | Size: torch.Size([512]) | Values : tensor([-0.0143, -0.0101], grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.2.weight | Size: torch.Size([512, 512]) | Values : tensor([[-0.0165, -0.0068, -0.0016, ..., -0.0098, 0.0119, 0.0326],
[ 0.0330, -0.0306, -0.0129, ..., -0.0371, -0.0291, -0.0273]],
Layer: linear_relu_stack.2.weight | Size: torch.Size([512, 512]) | Values : tensor([[-0.0091, 0.0016, 0.0303, ..., 0.0147, 0.0108, 0.0114],
[-0.0018, 0.0363, -0.0248, ..., -0.0332, 0.0185, 0.0011]],
grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.2.bias | Size: torch.Size([512]) | Values : tensor([ 0.0024, -0.0164], grad_fn=<SliceBackward0>)
Layer: linear_relu_stack.2.bias | Size: torch.Size([512]) | Values : tensor([0.0409, 0.0064], grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.4.weight | Size: torch.Size([10, 512]) | Values : tensor([[ 0.0046, 0.0249, 0.0123, ..., 0.0352, -0.0170, 0.0232],
[ 0.0038, 0.0283, 0.0235, ..., -0.0416, 0.0304, 0.0217]],
Layer: linear_relu_stack.4.weight | Size: torch.Size([10, 512]) | Values : tensor([[ 0.0349, -0.0004, 0.0420, ..., -0.0023, 0.0277, 0.0173],
[ 0.0015, -0.0185, 0.0072, ..., -0.0159, -0.0068, 0.0271]],
grad_fn=<SliceBackward0>)

Layer: linear_relu_stack.4.bias | Size: torch.Size([10]) | Values : tensor([0.0118, 0.0417], grad_fn=<SliceBackward0>)
Layer: linear_relu_stack.4.bias | Size: torch.Size([10]) | Values : tensor([-0.0129, 0.0260], grad_fn=<SliceBackward0>)



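The weight and bias slices printed by the `named_parameters()` loop are freshly initialized values, so they change with every run. A minimal sketch of that loop; the module structure is inferred from the printed parameter names (`linear_relu_stack.0.weight`, …) and sizes, so treat it as an assumption rather than a verbatim copy of docs/06-BuildModel.md:

```python
# Hedged sketch: iterate over a model's parameters as in the hunk above.
import torch
from torch import nn

class NeuralNetwork(nn.Module):
    # Layer sizes (784 -> 512 -> 512 -> 10) inferred from the printed torch.Size values.
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        return self.linear_relu_stack(self.flatten(x))

model = NeuralNetwork()
for name, param in model.named_parameters():
    print(f"Layer: {name} | Size: {param.size()} | Values : {param[:2]} \n")
```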
23 changes: 10 additions & 13 deletions docs/07-Autograd.md
@@ -1,8 +1,3 @@
```python
%matplotlib inline
```


[Learn the Basics](intro.html) ||
[Quickstart](quickstart_tutorial.html) ||
[Tensors](tensorqs_tutorial.html) ||
@@ -31,6 +26,8 @@ PyTorch in the following manner:


```python
%matplotlib inline

import torch

x = torch.ones(5) # input tensor
@@ -77,8 +74,8 @@ print(f"Gradient function for z = {z.grad_fn}")
print(f"Gradient function for loss = {loss.grad_fn}")
```

Gradient function for z = <AddBackward0 object at 0x10fa1ee80>
Gradient function for loss = <BinaryCrossEntropyWithLogitsBackward0 object at 0x10fa1e430>
Gradient function for z = <AddBackward0 object at 0x10427e550>
Gradient function for loss = <BinaryCrossEntropyWithLogitsBackward0 object at 0x10427e670>


## Computing Gradients
@@ -101,12 +98,12 @@ print(w.grad)
print(b.grad)
```

tensor([[0.3244, 0.2353, 0.0700],
[0.3244, 0.2353, 0.0700],
[0.3244, 0.2353, 0.0700],
[0.3244, 0.2353, 0.0700],
[0.3244, 0.2353, 0.0700]])
tensor([0.3244, 0.2353, 0.0700])
tensor([[0.1692, 0.2790, 0.2088],
[0.1692, 0.2790, 0.2088],
[0.1692, 0.2790, 0.2088],
[0.1692, 0.2790, 0.2088],
[0.1692, 0.2790, 0.2088]])
tensor([0.1692, 0.2790, 0.2088])


<div class="alert alert-info"><h4>Note</h4><p>- We can only obtain the ``grad`` properties for the leaf
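In docs/07-Autograd.md, the `grad_fn` addresses are just object ids from a particular run, and the printed gradients depend on the randomly initialized `w` and `b`, so both sides of these hunks are valid outputs. A minimal sketch of the computation, following the variable names in the hunk context:

```python
# Hedged sketch of the autograd example whose grad_fn ids and gradients changed above.
import torch

x = torch.ones(5)                              # input tensor
y = torch.zeros(3)                             # expected output
w = torch.randn(5, 3, requires_grad=True)      # random init -> gradients differ per run
b = torch.randn(3, requires_grad=True)
z = torch.matmul(x, w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)

print(f"Gradient function for z = {z.grad_fn}")        # AddBackward0
print(f"Gradient function for loss = {loss.grad_fn}")  # BinaryCrossEntropyWithLogitsBackward0

loss.backward()
print(w.grad)   # shape (5, 3); rows are identical because every element of x is 1
print(b.grad)
```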
(Diffs for the remaining 8 changed files are not shown.)
