Migrates nll_loss_backward from TH to Aten (CUDA) #60299
Conversation
💊 CI failures summary and remediations
As of commit 3eccb41 (more details on the Dr. CI page and at hud.pytorch.org/pr/60299):
🕵️ 2 new failures recognized by patterns
The following CI failures do not appear to be due to upstream breakages:
This looks good, thank you!
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Addresses a part of #59765. This PR adds byte (uint8) target support for nll_loss on the CPU for `input.dim() == 2`. CUDA support will be implemented when the `nll_loss` migration to CUDA is completed in #60299 and #60097.
Pull Request resolved: #60308
Reviewed By: VitalyFedyunin
Differential Revision: D29329458
Pulled By: jbschlosser
fbshipit-source-id: d3585c4966030bc61e451f8aa817406a8a3acf47
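For context, the loss being migrated reduces to a simple formula: given 2-D input of log-probabilities with shape `[N, C]` and integer class targets (which the summary above extends to byte/uint8 on CPU), `nll_loss` picks out the log-probability of the true class for each sample and negates the mean. The sketch below illustrates that math in pure Python; it is not PyTorch's implementation, and the example values are made up.

```python
# Minimal pure-Python sketch of what nll_loss computes for a 2-D input
# (log-probabilities of shape [N, C]) with mean reduction.
# Illustration only -- not the ATen/CUDA kernel this PR migrates.

def nll_loss(log_probs, targets):
    """Mean negative log-likelihood over a batch.

    log_probs: N rows, each a list of C log-probabilities.
    targets:   N class indices (any integer type, e.g. uint8 values).
    """
    total = 0.0
    for row, t in zip(log_probs, targets):
        total -= row[t]  # negative log-probability of the true class
    return total / len(targets)

log_probs = [
    [-0.1, -2.5],   # sample 0: class 0 is likely
    [-3.0, -0.05],  # sample 1: class 1 is likely
]
targets = [0, 1]
print(round(nll_loss(log_probs, targets), 6))  # 0.075
```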
Fixes #24609
Aten Umbrella issue #24507
Related to #59765
There are no performance differences when running the following benchmark:
- Benchmark script
- master
- this PR
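The actual benchmark script and its results for master and this PR are collapsed above. A generic harness of the shape typically used for such comparisons might look like the following sketch; the workload function here is a hypothetical stand-in, not the script from this PR.

```python
# Hedged sketch of a before/after micro-benchmark harness.
# run_workload is a placeholder -- in the real benchmark it would invoke
# the nll_loss forward + backward under test on each build (master vs. PR).
import timeit

def run_workload():
    # Stand-in computation; replace with the op being benchmarked.
    return sum(i * i for i in range(1000))

n_iters = 100
elapsed = timeit.timeit(run_workload, number=n_iters)
print(f"mean per-iteration time: {elapsed / n_iters:.6f}s")
```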