Write Readme with implementation details #30

Merged
merged 22 commits into from
Jan 6, 2024
168 changes: 164 additions & 4 deletions README.md
@@ -1,21 +1,181 @@

<p align="center">
<img src="img/edugrad-header.png" alt="drawing" width="300"/>
</p>

**edugrad** is the simplest and most accessible implementation of a deep learning framework. Its purpose is to reveal
the core components of such libraries.

## Key Features

- **Autograd Mechanism**: Features an automatic differentiation system for computing gradients, vital for training
various types of neural networks within a single framework.
- **Tensor Operations**: Implements a tensor class enabling fundamental matrix operations crucial for neural network
computations, with numpy as the backend.
- **Simple Interface**: Provides an API that mirrors PyTorch, making edugrad easy to use.
- **Educational Code**: The code style and module structure are designed for ease of understanding, both
programmatically and conceptually.

Please note that while edugrad theoretically supports the implementation of any neural network model, it lacks the
memory and computational optimizations found in more advanced frameworks. This design choice maximizes code readability
but limits the framework, in practice, to smaller models.

![test workflow badge](https://github.com/tostenzel/edugrad/actions/workflows/Tests.yaml/badge.svg)

# edugrad
## Example

## The Code

In this section, we look at how the code implements (i) the tensor operations and (ii) the autograd mechanism.

- [I. Low-level (`data.py`), Mid-level (`function.py`) and High-level (`tensor.py`) Operations](#i-low-level-datapy-mid-level-functionpy-and-high-level-tensorpy-operations)
- [II. Computational Graphs in edugrad: Forward and Backward Passes](#ii-computational-graphs-in-edugrad-forward-and-backward-passes)

<p align="center">
<img src="img/edugrad-code-i.png" alt="drawing" width="400"/>
</p>

### I. Low-level (`data.py`), Mid-level (`function.py`) and High-level (`tensor.py`) Operations

The computation process is structured across three levels of operations: low-level, mid-level, and high-level. A condensed sketch tying the three levels together follows their descriptions below.

#### 1. Low-Level Operations
- **Module**: `data.py` (`TensorData` class)
- **Purpose**: Execution of the most basic tensor operations.
- **Characteristics**:
- Implements elemental tensor operations like addition, multiplication, reshaping, etc.
- Immediate execution of operations on the CPU, leveraging `numpy.array`'s capabilities. Using a different backend like PyTorch or JAX would only require reimplementing the 17 operations in this module that are enumerated in `ops.py`; a minimal sketch of this dispatch pattern follows this list.
- Operations at this level do not involve gradient computations or the autograd mechanism.
- Acts as the foundational building block for higher-level operations.
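
To illustrate the dispatch pattern, here is a minimal, hypothetical sketch of an immediate-execution backend. The names `elementwise` and `BinaryOps.ADD` also appear in edugrad's own `Add` example further below; everything else (the enum members shown, the numpy lookup table) is illustrative and not the library's actual code.

```python
from enum import Enum, auto

import numpy as np


class BinaryOps(Enum):
    # Illustrative subset; the real enumeration lives in ops.py.
    ADD = auto()
    MUL = auto()


class TensorData:
    """Minimal stand-in for data.py's TensorData: a thin wrapper around a numpy array."""

    def __init__(self, array):
        self.array = np.asarray(array, dtype=np.float32)

    def elementwise(self, op: BinaryOps, other: "TensorData") -> "TensorData":
        # Operations execute immediately on the CPU via numpy. Swapping the backend
        # (e.g. to PyTorch or JAX) would only mean re-mapping this small table of
        # primitive operations.
        table = {BinaryOps.ADD: np.add, BinaryOps.MUL: np.multiply}
        return TensorData(table[op](self.array, other.array))


x, y = TensorData([1.0, 2.0]), TensorData([3.0, 4.0])
print(x.elementwise(BinaryOps.ADD, y).array)  # [4. 6.]
```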

#### 2. Mid-Level Operations
- **Module**: `function.py` (`Function` class and its subclasses)
- **Purpose**: Define differentiable functions that include both forward and backward computation logic.
- **Characteristics**:
- Compose low-level ops from `data.py` to define more complex operations.
- Each operation (e.g., `Add`, `Mul`, `Sin`) encapsulates a forward pass and a corresponding backward pass for gradient computation.
- Serves as the backbone of edugrad's autograd system, allowing for automatic differentiation of different models defined with `edugrad.Tensor`.
- Mid-level operations are used as nodes to build complex computational graphs during the forward pass, storing necessary information for the backward pass.

#### 3. High-Level Operations
- **Module**: `tensor.py` (`Tensor` class)
- **Purpose**: Provide a user-friendly interface for tensor operations and enable building and training neural network models.
- **Characteristics**:
- High-level abstraction for tensor operations.
- Utilizes mid-level ops from `function.py` to implement tensor methods and matrix algebra, enabling automatic differentiation without requiring the user to define backward functions.
- Includes a broad range of operations commonly used in neural networks, like matrix multiplication, activation functions, and loss functions.
- Facilitates the construction and manipulation of larger computational graphs through tensor operations.
- This level is where most users interact with the edugrad library, building and training models using a familiar, PyTorch-like API.
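
The following condensed sketch (building on the dispatch idea above) shows how a single addition could travel through the three levels. It is a sketch only: the real classes in `data.py`, `function.py`, and `tensor.py` carry considerably more machinery, such as dtype handling and the gradient bookkeeping described in the next section.

```python
import numpy as np


class TensorData:  # low level (data.py): immediate numpy execution
    def __init__(self, array):
        self.array = np.asarray(array, dtype=np.float32)

    def add(self, other: "TensorData") -> "TensorData":
        return TensorData(self.array + other.array)


class Add:  # mid level (function.py): forward plus the matching backward
    def forward(self, x: TensorData, y: TensorData) -> TensorData:
        return x.add(y)  # composes low-level ops

    def backward(self, grad_output: TensorData):
        # d(x + y)/dx = 1 and d(x + y)/dy = 1, so the gradient passes through.
        return grad_output, grad_output


class Tensor:  # high level (tensor.py): the user-facing API
    def __init__(self, data):
        self.data = data if isinstance(data, TensorData) else TensorData(data)

    def __add__(self, other: "Tensor") -> "Tensor":
        return Tensor(Add().forward(self.data, other.data))


z = Tensor([1.0, 2.0]) + Tensor([3.0, 4.0])  # users only touch the high level
print(z.data.array)  # [4. 6.]
```

Because only the high level is exposed to users, the lower levels can be swapped or extended without changing user-facing code.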

<p align="center">
<img src="img/edugrad-code-ii.png" alt="drawing" width="400"/>
</p>

### II. Computational Graphs in edugrad: Forward and Backward Passes

In edugrad, the handling of the computational graph, particularly the relationships between nodes (tensors) during the forward and backward passes, is crucial for understanding how automatic differentiation works. Let's delve into the details of how the parents of each node are stored in `Tensor._ctx` and how they are utilized during the backward pass by functions in `autograd.py`.

#### Forward Pass: Storing Parent Nodes

During the forward pass, when operations are performed on tensors, new tensors are created as a result of these operations. Each new tensor maintains a reference to its "parent" tensors – the tensors that were used to compute it. This reference is stored in a special attribute called `_ctx`.

##### The `_ctx` Attribute

- When an operation is performed on one or more tensors, an instance of the corresponding `Function` class (from `function.py`) is created. This instance represents the operation itself.
- The `_ctx` attribute of the resultant tensor is set to this instance. It effectively becomes a context that encapsulates the operation and its input tensors.
- The `Function` instance (context) stores the input tensors as parents. These are the tensors whose attributes were used to calculate the resultant tensor.

##### Example of Forward Pass

Consider an operation `z = x + y`, where `x` and `y` are tensors. The `Add` function from `function.py` is used:

```python
class Add(Function):
    def forward(self, x: TensorData, y: TensorData) -> TensorData:
        return x.elementwise(ops.BinaryOps.ADD, y)

    def backward(self, grad_output: TensorData) -> Tuple[Optional[TensorData], Optional[TensorData]]:
        return grad_output if self.needs_input_grad[0] else None, grad_output if self.needs_input_grad[1] else None
```

When `z` is computed:

```python
z = x + y # Internally calls Add.apply(x, y)
```

`z._ctx` is set to an instance of `Add`, and this instance stores `x` and `y` as its parents.
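
A stripped-down, hypothetical sketch of an `apply`-style mechanism that produces this wiring is shown below. It is heavily simplified: edugrad's actual `Function.apply` in `function.py` operates on `TensorData` and tracks further details, such as which inputs require gradients.

```python
import numpy as np


class Tensor:
    def __init__(self, data, requires_grad: bool = True):
        self.data = np.asarray(data, dtype=np.float32)
        self.grad = None
        self.requires_grad = requires_grad
        self._ctx = None  # set by Function.apply for tensors produced by an operation

    def __add__(self, other: "Tensor") -> "Tensor":
        return Add.apply(self, other)


class Function:
    def __init__(self, *tensors: "Tensor"):
        self.parents = tensors  # the input tensors, i.e. the parent nodes
        self.needs_input_grad = [t.requires_grad for t in tensors]

    @classmethod
    def apply(cls, *tensors: "Tensor") -> "Tensor":
        ctx = cls(*tensors)  # 1. instantiate the op, remembering its parents
        out = Tensor(ctx.forward(*[t.data for t in tensors]))  # 2. run the forward pass on raw data
        out._ctx = ctx  # 3. link the result back to the op that produced it
        return out


class Add(Function):
    def forward(self, x, y):
        return x + y

    def backward(self, grad_output):
        return (grad_output if self.needs_input_grad[0] else None,
                grad_output if self.needs_input_grad[1] else None)


x, y = Tensor([1.0]), Tensor([2.0])
z = x + y  # internally Add.apply(x, y)
print(type(z._ctx).__name__)  # Add
print(z._ctx.parents[0] is x, z._ctx.parents[1] is y)  # True True
```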

#### Backward Pass: Utilizing Parent Nodes

During the backward pass, gradients are computed in reverse order, starting from the final output tensor and propagating through its ancestors.

##### Backward Function in `autograd.py`
- When `backward()` is called on the final output tensor, usually the cost/loss, `autograd.collect_backward_graph()` starts traversing the computational graph in reverse.
- It begins with the tensor on which `backward()` was called and recursively visits the parent tensors stored in `_ctx`.

##### Gradient Computation
- At each tensor, the `backward` method of the `Function` instance stored in `_ctx` maps the tensor's incoming gradient to gradients for each of its parents (an application of the chain rule).
- The gradients are then propagated to each parent tensor, where the process repeats.
- If a parent tensor contributes to multiple children, its gradient is accumulated from each child.
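
The traversal itself can be sketched independently of the `Tensor` class. In the hypothetical snippet below, each node records its parents together with the local derivative toward each parent (the role played by `_ctx` and the `Function.backward` methods in edugrad); a reverse topological walk then applies the chain rule and accumulates gradients, loosely mirroring `autograd.collect_backward_graph()` and the gradient propagation described above.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """Toy graph node: 'parents' pairs each parent with the local derivative toward it."""

    name: str
    parents: list = field(default_factory=list)
    grad: float = 0.0


def backward(root: Node) -> None:
    # 1. Collect the nodes in topological order (root last), as a
    #    collect_backward_graph-style helper would.
    order, visited = [], set()

    def visit(node: Node) -> None:
        if id(node) in visited:
            return
        visited.add(id(node))
        for parent, _ in node.parents:
            visit(parent)
        order.append(node)

    visit(root)

    # 2. Seed the root gradient (d root / d root = 1) and walk the graph in reverse,
    #    applying the chain rule and accumulating into each parent.
    root.grad = 1.0
    for node in reversed(order):
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad


# w = 3 * (x + y): dw/dx = 3 and dw/dy = 3
x, y = Node("x"), Node("y")
z = Node("z", parents=[(x, 1.0), (y, 1.0)])  # z = x + y: local derivatives are 1
w = Node("w", parents=[(z, 3.0)])            # w = 3 * z: local derivative is 3
backward(w)
print(x.grad, y.grad)  # 3.0 3.0
```

Because the accumulation uses `+=`, a node that feeds several children receives the sum of all contributions, exactly as described above.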

##### Example of Backward Pass

Continuing with the `z = x + y` example:

```python
z.backward()
```

This call initiates the backward pass:

- It computes the gradient of `z` with respect to itself (`dz`, which is simply a tensor of ones) and sets `z.grad` to `dz`.
- Then, it uses `z._ctx` (which is an instance of `Add`) to compute the gradients with respect to `x` and `y` (`dx` and `dy`).
- These gradients are then assigned to `x.grad` and `y.grad`.

#### Summary

The essence of edugrad's approach lies in how it builds and navigates the computational graph:

- **Forward Pass**: Stores parent tensors in `_ctx` of each resultant tensor, encapsulating the operation and its inputs.
- **Backward Pass**: Traverses the graph in reverse, using `_ctx` to access the parent tensors (parents in the forward-pass direction) and compute gradients recursively. This elegantly ties together the chain of computations and their gradients, enabling efficient automatic differentiation.

## Installation

Clone the repository:

```
git clone https://github.com/tostenzel/edugrad
cd edugrad
```

Set up environment in `edugrad/.env` and install requirements with conda from `environment.yaml`:

```
conda create --prefix .env
conda activate .env/
conda env update --file environment.yaml --prefix .env
```

Install edugrad from source in editable mode to enable absolute imports:

```
pip install -e .
```

Verify installation:

```
python applications/learn_mnist.py
```

## Credits

The starting point of this project is George Hotz's [tinygrad](https://github.com/tinygrad/tinygrad/tree/master); see the
[license](https://github.com/tostenzel/edugrad/blob/24-write-readmemd-with-implementation-details/LICENSE). I removed
features that did not align with edugrad's purpose, eliminated all optimizations, and adjusted the module structures and
coding style, adding extensive explanations in docstrings and comments. My changes and additions to the shortened and refactored code are
relatively minor. The autograd mechanism is inspired by Andrej Karpathy's
[micrograd](https://github.com/karpathy/micrograd).

## Deep Learning Blog

edugrad is complemented by my Deep Learning Blog Series at [tobiasstenzel.com/blog](https://www.tobiasstenzel.com/blog/tag/dl-fundamentals/), which explains the fundamental concepts of deep learning, including backpropagation and reverse-mode automatic differentiation.
2 changes: 1 addition & 1 deletion edugrad/__init__.py
@@ -1 +1 @@
from edugrad.tensor import Tensor
from edugrad.tensor import Tensor # noqa: F401 # pylint:disable=unused-import
2 changes: 0 additions & 2 deletions edugrad/_tensor/tensor_combine_segment.py
@@ -61,8 +61,6 @@ def stack(tensors: list[Tensor], dim: int) -> Tensor:
Tensor: A new tensor resulting from stacking the given tensors.

"""
from edugrad.tensor import Tensor

# Unsqueeze the first tensor and prepare the rest.
first = tensors[0].unsqueeze(dim)
unsqueezed_tensors = [tensor.unsqueeze(dim) for tensor in tensors[1:]]
2 changes: 1 addition & 1 deletion edugrad/_tensor/tensor_create.py
@@ -9,7 +9,7 @@
import time
import math

from typing import Optional, Any
from typing import Any

from edugrad.dtypes import DType, dtypes
from edugrad.helpers import argfix, prod, shape_int
6 changes: 2 additions & 4 deletions edugrad/_tensor/tensor_index_slice.py
@@ -1,9 +1,9 @@
from typing import Sequence, Optional, Tuple, Union
from typing import Sequence, Optional, Tuple
from collections import defaultdict

from edugrad.dtypes import dtypes
from edugrad.helpers import shape_int
from edugrad._tensor.tensor_reshape import pad, _flatten
from edugrad._tensor.tensor_reshape import _flatten


# ***** movement high level ops *****
@@ -169,8 +169,6 @@ def tslice(tensor: "Tensor", arg: Sequence[Optional[Tuple[int, shape_int]]], val
- value (float): The padding value to be used if necessary.

"""
from edugrad.tensor import Tensor

arg_ = tuple([a if a is not None else (0, s) for s, a in zip(tensor.shape, arg)])
padding = tuple([(max(0, -p[0]), max(0, p[1] - tensor.shape[i])) for i, p in enumerate(arg_)])
return tensor.pad(padding, value=value).shrink(
1 change: 0 additions & 1 deletion edugrad/_tensor/tensor_reshape.py
@@ -1,7 +1,6 @@
"""Contains various tensor manipulation operations that can change the shape of a tensor."""

from __future__ import annotations
from typing import List, Tuple, Union

from edugrad.helpers import argfix, prod, shape_int
import edugrad.function as function
1 change: 0 additions & 1 deletion edugrad/helpers.py
@@ -4,7 +4,6 @@
import os
import functools
from math import prod # noqa: F401 # pylint:disable=unused-import
from dataclasses import dataclass

shape_int = int

2 changes: 1 addition & 1 deletion edugrad/optim/__init__.py
@@ -1 +1 @@
from edugrad.optim.optimizer import SGD, Adam, AdamW
from edugrad.optim.optimizer import SGD, Adam, AdamW # noqa: F401 # pylint:disable=unused-import
2 changes: 1 addition & 1 deletion edugrad/tensor.py
@@ -11,7 +11,7 @@
from __future__ import annotations
import time
import math
from typing import ClassVar, Sequence, Any, Type
from typing import ClassVar, Sequence, Any

import numpy as np

Binary file added img/edugrad-code-i.png
Binary file added img/edugrad-code-ii.png
Binary file added img/edugrad-dependencies.png
Binary file added img/edugrad-header.png
3 changes: 1 addition & 2 deletions tests/test_tensor_reduce.py
@@ -1,7 +1,6 @@
import numpy as np
import unittest, copy
import unittest
from edugrad import Tensor
from edugrad.dtypes import dtypes


class TestZeroShapeTensor(unittest.TestCase):