
[Concept Entry] PyTorch: Built-in Loss Functions #6085

Open · PragatiVerma18 wants to merge 1 commit into main

Conversation

PragatiVerma18 (Collaborator)

Description

Add a new entry on the Built-in Loss Functions concept under PyTorch.

Issue Solved

Closes #5861

Type of Change

  • Adding a new entry
  • Editing an existing entry (fixing a typo, bug, issue, etc.)
  • Updating the documentation

Checklist

  • All writings are my own.
  • My entry follows the Codecademy Docs style guide.
  • My changes generate no new warnings.
  • I have performed a self-review of my own writing and code.
  • I have checked my entry and corrected any misspellings.
  • I have made corresponding changes to the documentation if needed.
  • I have confirmed my changes are not being pushed from my forked main branch.
  • I have confirmed that I'm pushing from a new branch named after the changes I'm making.
  • I have linked any issues that are relevant to this PR in the Issues Solved section.

@avdhoottt self-assigned this Feb 3, 2025
@avdhoottt added the new entry, status: under review, and pytorch labels Feb 3, 2025
@avdhoottt (Collaborator) left a comment:

Hey @PragatiVerma18, I've reviewed the entry and left some comments. Please make the changes, thank you!!

Comment on lines +61 to +63
- `size_average` (bool, optional): Deprecated (use `reduction` instead). If `True`, the loss is averaged over observations for each mini-batch. Default is `True`.
- `ignore_index` (int, optional): Specifies a target value that is ignored and does not contribute to the loss calculation.
- `reduce` (bool, optional): Deprecated (use `reduction` instead). If `True`, it sums the losses across the batch. Default is `True`.
@avdhoottt:

If these are deprecated, then I think we don't need to include them.
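
For reference, a minimal sketch of the non-deprecated interface (assuming the parameter list above describes `nn.CrossEntropyLoss`):

```python
import torch.nn as nn

# `reduction` replaces the deprecated `size_average` and `reduce` flags:
# 'mean' (the default) averages over the batch, 'sum' sums, 'none' skips reduction.
loss_fn = nn.CrossEntropyLoss(reduction='mean', ignore_index=-100)
```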

predictions = torch.tensor([2.0, 3.0, 4.0])
targets = torch.tensor([2.5, 3.5, 4.5])
loss = loss_fn(predictions, targets)
print(loss) # Output: tensor(0.0833)
@avdhoottt:

Recheck the output of this code. I'm getting `tensor(0.2500)`.
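
A quick runnable check confirms this, assuming the snippet uses `nn.MSELoss` (the `loss_fn` definition falls outside the quoted diff):

```python
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()  # assumed; not shown in the quoted diff
predictions = torch.tensor([2.0, 3.0, 4.0])
targets = torch.tensor([2.5, 3.5, 4.5])
# Every error is 0.5, so the mean squared error is 0.5 ** 2 = 0.25
print(loss_fn(predictions, targets))  # tensor(0.2500)
```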

logits = torch.tensor([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
labels = torch.tensor([2, 0])
loss = loss_fn(logits, labels)
print(loss) # Output: tensor(0.4076)
@avdhoottt:

I'm getting `tensor(1.4076)` for the above code.
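
A manual recomputation agrees, assuming `loss_fn` is `nn.CrossEntropyLoss` (again defined outside the quoted diff):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

loss_fn = nn.CrossEntropyLoss()  # assumed; not shown in the quoted diff
logits = torch.tensor([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]])
labels = torch.tensor([2, 0])
# Per-sample loss is -log_softmax(logits)[label]: roughly 0.4076 for the
# first sample (label 2) and 2.4076 for the second (label 0); mean = 1.4076.
manual = -F.log_softmax(logits, dim=1)[torch.arange(2), labels].mean()
print(loss_fn(logits, labels), manual)  # tensor(1.4076) tensor(1.4076)
```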

logits = torch.tensor([0.5, -1.5, 2.0])
labels = torch.tensor([1.0, 0.0, 1.0])
loss = loss_fn(logits, labels)
print(loss) # Output: tensor(0.4891)
@avdhoottt:

Recheck the output.
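
If `loss_fn` here is `nn.BCEWithLogitsLoss` (a guess; the definition is outside the quoted diff), the value should be about `tensor(0.2675)` rather than `tensor(0.4891)`:

```python
import torch
import torch.nn as nn

loss_fn = nn.BCEWithLogitsLoss()  # assumed; not shown in the quoted diff
logits = torch.tensor([0.5, -1.5, 2.0])
labels = torch.tensor([1.0, 0.0, 1.0])
# Element-wise loss is log(1 + exp(-x)) for label 1 and log(1 + exp(x)) for
# label 0: about 0.4741, 0.2014, and 0.1269, which average to about 0.2675.
print(loss_fn(logits, labels))  # tensor(0.2675)
```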

Comment on lines +166 to +175
import torch
import torch.nn as nn

# Example of CosineEmbeddingLoss
loss_fn = nn.CosineEmbeddingLoss()
input1 = torch.tensor([1.0, 0.0])
input2 = torch.tensor([0.0, 1.0])
target = torch.tensor([1]) # 1 means the inputs should be similar
loss = loss_fn(input1, input2, target)
print(loss) # Output: tensor(2.0)
@avdhoottt:

It's giving a runtime error. Please correct the code.
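
The likely cause: `nn.CosineEmbeddingLoss` expects inputs of shape `(batch, dim)`, while the snippet passes 1-D vectors. A sketch of one possible fix:

```python
import torch
import torch.nn as nn

loss_fn = nn.CosineEmbeddingLoss()
input1 = torch.tensor([[1.0, 0.0]])  # shape (1, 2): a batch of one vector
input2 = torch.tensor([[0.0, 1.0]])
target = torch.tensor([1])  # 1 means the inputs should be similar
loss = loss_fn(input1, input2, target)
# The vectors are orthogonal (cosine similarity 0), so loss = 1 - 0 = 1,
# not 2.0 as the original comment claimed.
print(loss)  # tensor(1.)
```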

Comment on lines +198 to +207
import torch
import torch.nn as nn

# Example of KLDivLoss
loss_fn = nn.KLDivLoss(reduction='batchmean')
input = torch.tensor([[0.0, 0.1, 0.2], [0.1, 0.0, 0.3]])
target = torch.tensor([[0.0, 0.1, 0.3], [0.1, 0.2, 0.2]])
loss = loss_fn(input.log(), target)
print(loss) # Output: tensor(0.0237)
@avdhoottt:

Something is wrong with the code snippet. It's giving `tensor(nan)` as output.
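
The `nan` comes from calling `input.log()` on a tensor containing `0.0` (`log(0)` is `-inf`). One possible fix is to treat the tensors as raw scores and normalize them with softmax/log-softmax, so every probability is strictly positive:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

loss_fn = nn.KLDivLoss(reduction='batchmean')
# log_softmax/softmax guarantee strictly positive probabilities, so no
# -inf terms enter the computation.
input_scores = torch.tensor([[0.0, 0.1, 0.2], [0.1, 0.0, 0.3]])
target_scores = torch.tensor([[0.0, 0.1, 0.3], [0.1, 0.2, 0.2]])
loss = loss_fn(F.log_softmax(input_scores, dim=1), F.softmax(target_scores, dim=1))
print(loss)  # a small finite value instead of tensor(nan)
```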


## Choosing the Right Loss Function

When selecting a loss function for your model, it is essential to consider the task you're working on.
@avdhoottt:

Please don't use first and second-person pronouns.
