
fix GroupNorm backward grad #10045

Merged
merged 8 commits into from
Mar 29, 2023
Conversation

fpzh2011
Contributor

@fpzh2011 fpzh2011 commented Mar 27, 2023

Fix the GroupNorm gradient bug: call GroupNormParamGrad only when gamma and beta require gradients.
Reproduction code:

import oneflow as flow
from oneflow import nn

class demoModule(nn.Module):
    def __init__(
        self,
    ):
        super().__init__()
        self.linear1 = nn.Conv2d(8, 4, 1)
        self.linear2 = nn.Conv2d(4, 8, 1)
        self.group_norm = nn.GroupNorm(2, 8)
        self.linear3 = nn.Conv2d(8, 16, 1)

        self.linear2.requires_grad_(False)
        self.group_norm.requires_grad_(False)
        self.linear3.requires_grad_(False)


    def forward(self, x):
        x = self.linear1(x)
        x = self.linear2(x)
        x = self.group_norm(x)
        x = self.linear3(x)
        return x


dmodel = demoModule().cuda()
of_sgd = flow.optim.SGD(dmodel.parameters(), lr=1.0, momentum=0.9)
a = flow.zeros(32, 8, 126, 64, device="cuda", requires_grad=True)
loss = dmodel(a).sum()
loss.backward()
of_sgd.step()

Error message:

Traceback (most recent call last):
  File "test3.py", line 33, in <module>
    loss.backward()
  File "/home/chengpeng/chengpeng/miniconda3/envs/sd/lib/python3.8/site-packages/oneflow/framework/tensor.py", line 44, in _backward
    flow.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/chengpeng/chengpeng/miniconda3/envs/sd/lib/python3.8/site-packages/oneflow/autograd/autograd.py", line 110, in backward
    backward_api(
oneflow._oneflow_internal.exception.Exception: group_norm_backward calculate grad for tensor which requires_grad is False. Please submit an issue in `https://github.com/Oneflow-Inc/oneflow/issues` and we will fix it as soon as possible
  File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/api/python/autograd/autograd.cpp", line 97, in Backward
    one::GetThreadLocalAutogradEngine()->RunBackwardAndSaveGrads4LeafTensorIf( outputs, *gradients, retain_graph, create_graph)
  File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/autograd/autograd_engine.cpp", line 464, in RunBackwardAndSaveGrads4LeafTensor
    graph_task.Apply( true)
  File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/autograd/autograd_engine.cpp", line 427, in Apply
    node->Apply(create_graph_)
  File "/home/ci-user/runners/release/_work/oneflow/oneflow/oneflow/core/autograd/autograd_engine.cpp", line 215, in Apply
    CHECK_OR_RETURN(input_meta_data_[i] != nullptr)
Error Type: oneflow.ErrorProto.check_failed_error
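
The gating described above (computing parameter gradients only when gamma and beta actually require them) can be sketched in plain NumPy. This is an illustrative model of the fix's logic, not OneFlow's actual C++ kernel; all function names here are hypothetical.

```python
import numpy as np

def group_norm_forward(x, gamma, beta, num_groups, eps=1e-5):
    """Normalize (N, C, H, W) input over channel groups, then scale/shift."""
    N, C, H, W = x.shape
    xg = x.reshape(N, num_groups, -1)
    mean = xg.mean(axis=2, keepdims=True)
    var = xg.var(axis=2, keepdims=True)
    xhat = ((xg - mean) / np.sqrt(var + eps)).reshape(N, C, H, W)
    y = gamma.reshape(1, C, 1, 1) * xhat + beta.reshape(1, C, 1, 1)
    return y, xhat

def group_norm_param_grad(dy, xhat):
    """Per-channel gradients for gamma and beta (stand-in for GroupNormParamGrad)."""
    dgamma = (dy * xhat).sum(axis=(0, 2, 3))
    dbeta = dy.sum(axis=(0, 2, 3))
    return dgamma, dbeta

def group_norm_backward(dy, xhat, params_require_grad):
    # The fix: skip the parameter-gradient kernel entirely when
    # gamma/beta have requires_grad=False, instead of unconditionally
    # producing grads for tensors that cannot accept them.
    if params_require_grad:
        return group_norm_param_grad(dy, xhat)
    return None, None
```

In the repro above, `self.group_norm.requires_grad_(False)` corresponds to the `params_require_grad=False` branch: the backward pass still propagates `dx` to earlier layers but must not emit `dgamma`/`dbeta`.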

@CLAassistant

CLAassistant commented Mar 27, 2023

CLA assistant check
All committers have signed the CLA.

@fpzh2011 fpzh2011 requested a review from hjchen2 March 27, 2023 13:57
@fpzh2011 fpzh2011 enabled auto-merge (squash) March 28, 2023 02:53
@fpzh2011 fpzh2011 disabled auto-merge March 28, 2023 02:58
@fpzh2011 fpzh2011 enabled auto-merge (squash) March 28, 2023 02:58
@hjchen2 hjchen2 requested a review from oneflow-ci-bot March 28, 2023 03:12
@github-actions
Contributor

Code got formatted by CI. Please request CI again if you still want to have this PR merged. If the PR is from a forked repo, please download the patch files from the GitHub Actions web page and apply them locally.

@github-actions
Contributor

CI failed when running job: Build cpu-asan-ubsan. PR label automerge has been removed

@fpzh2011 fpzh2011 changed the title from 修复 GroupNorm 梯度问题 (fix GroupNorm gradient issue) to fix GroupNorm backward grad Mar 29, 2023
@github-actions
Contributor

Speed stats:
GPU Name: GeForce GTX 1080 

❌ OneFlow resnet50 time: 141.2ms (= 14121.2ms / 100, input_shape=[16, 3, 224, 224])
PyTorch resnet50 time: 143.9ms (= 14385.5ms / 100, input_shape=[16, 3, 224, 224])
❌ Relative speed: 1.02 (= 143.9ms / 141.2ms)

OneFlow resnet50 time: 82.7ms (= 8275.0ms / 100, input_shape=[8, 3, 224, 224])
PyTorch resnet50 time: 87.5ms (= 8749.1ms / 100, input_shape=[8, 3, 224, 224])
✔️ Relative speed: 1.06 (= 87.5ms / 82.7ms)

OneFlow resnet50 time: 51.4ms (= 10275.5ms / 200, input_shape=[4, 3, 224, 224])
PyTorch resnet50 time: 60.2ms (= 12035.1ms / 200, input_shape=[4, 3, 224, 224])
✔️ Relative speed: 1.17 (= 60.2ms / 51.4ms)

OneFlow resnet50 time: 34.1ms (= 6823.6ms / 200, input_shape=[2, 3, 224, 224])
PyTorch resnet50 time: 41.4ms (= 8278.9ms / 200, input_shape=[2, 3, 224, 224])
✔️ Relative speed: 1.21 (= 41.4ms / 34.1ms)

OneFlow resnet50 time: 26.4ms (= 5284.9ms / 200, input_shape=[1, 3, 224, 224])
PyTorch resnet50 time: 37.6ms (= 7521.2ms / 200, input_shape=[1, 3, 224, 224])
✔️ Relative speed: 1.42 (= 37.6ms / 26.4ms)

OneFlow swin dataloader time: 0.235s (= 47.009s / 200, num_workers=1)
PyTorch swin dataloader time: 0.148s (= 29.586s / 200, num_workers=1)
Relative speed: 0.629 (= 0.148s / 0.235s)

OneFlow swin dataloader time: 0.068s (= 13.514s / 200, num_workers=4)
PyTorch swin dataloader time: 0.042s (= 8.446s / 200, num_workers=4)
Relative speed: 0.625 (= 0.042s / 0.068s)

OneFlow swin dataloader time: 0.042s (= 8.481s / 200, num_workers=8)
PyTorch swin dataloader time: 0.022s (= 4.494s / 200, num_workers=8)
Relative speed: 0.530 (= 0.022s / 0.042s)

❌ OneFlow resnet50 time: 152.9ms (= 15291.3ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 165.3ms (= 16532.1ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
❌ Relative speed: 1.08 (= 165.3ms / 152.9ms)

OneFlow resnet50 time: 93.7ms (= 9374.1ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 104.2ms (= 10416.1ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.11 (= 104.2ms / 93.7ms)

OneFlow resnet50 time: 61.3ms (= 12257.1ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 80.4ms (= 16085.7ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.31 (= 80.4ms / 61.3ms)

OneFlow resnet50 time: 43.3ms (= 8653.3ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 71.7ms (= 14340.7ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.66 (= 71.7ms / 43.3ms)

OneFlow resnet50 time: 36.8ms (= 7354.8ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 68.3ms (= 13660.0ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.86 (= 68.3ms / 36.8ms)

@github-actions
Contributor

View latest API docs preview at: https://staging.oneflow.info/docs/Oneflow-Inc/oneflow/pr/10045/

@github-actions
Contributor

Speed stats:
GPU Name: GeForce GTX 1080 

❌ OneFlow resnet50 time: 141.3ms (= 14128.4ms / 100, input_shape=[16, 3, 224, 224])
PyTorch resnet50 time: 144.2ms (= 14422.8ms / 100, input_shape=[16, 3, 224, 224])
❌ Relative speed: 1.02 (= 144.2ms / 141.3ms)

OneFlow resnet50 time: 82.9ms (= 8285.7ms / 100, input_shape=[8, 3, 224, 224])
PyTorch resnet50 time: 87.7ms (= 8771.6ms / 100, input_shape=[8, 3, 224, 224])
✔️ Relative speed: 1.06 (= 87.7ms / 82.9ms)

OneFlow resnet50 time: 50.9ms (= 10184.4ms / 200, input_shape=[4, 3, 224, 224])
PyTorch resnet50 time: 58.5ms (= 11709.6ms / 200, input_shape=[4, 3, 224, 224])
✔️ Relative speed: 1.15 (= 58.5ms / 50.9ms)

OneFlow resnet50 time: 34.0ms (= 6791.3ms / 200, input_shape=[2, 3, 224, 224])
PyTorch resnet50 time: 44.5ms (= 8893.2ms / 200, input_shape=[2, 3, 224, 224])
✔️ Relative speed: 1.31 (= 44.5ms / 34.0ms)

OneFlow resnet50 time: 26.2ms (= 5242.0ms / 200, input_shape=[1, 3, 224, 224])
PyTorch resnet50 time: 39.8ms (= 7955.1ms / 200, input_shape=[1, 3, 224, 224])
✔️ Relative speed: 1.52 (= 39.8ms / 26.2ms)

OneFlow swin dataloader time: 0.237s (= 47.404s / 200, num_workers=1)
PyTorch swin dataloader time: 0.149s (= 29.705s / 200, num_workers=1)
Relative speed: 0.627 (= 0.149s / 0.237s)

OneFlow swin dataloader time: 0.069s (= 13.848s / 200, num_workers=4)
PyTorch swin dataloader time: 0.041s (= 8.153s / 200, num_workers=4)
Relative speed: 0.589 (= 0.041s / 0.069s)

OneFlow swin dataloader time: 0.044s (= 8.702s / 200, num_workers=8)
PyTorch swin dataloader time: 0.022s (= 4.460s / 200, num_workers=8)
Relative speed: 0.512 (= 0.022s / 0.044s)

❌ OneFlow resnet50 time: 153.7ms (= 15370.8ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 167.4ms (= 16740.5ms / 100, input_shape=[16, 3, 224, 224], ddp, world size=2)
❌ Relative speed: 1.09 (= 167.4ms / 153.7ms)

OneFlow resnet50 time: 93.5ms (= 9351.2ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 104.1ms (= 10410.4ms / 100, input_shape=[8, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.11 (= 104.1ms / 93.5ms)

OneFlow resnet50 time: 61.0ms (= 12199.5ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 89.6ms (= 17929.2ms / 200, input_shape=[4, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.47 (= 89.6ms / 61.0ms)

OneFlow resnet50 time: 43.3ms (= 8655.1ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 69.7ms (= 13934.2ms / 200, input_shape=[2, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.61 (= 69.7ms / 43.3ms)

OneFlow resnet50 time: 35.7ms (= 7135.2ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
PyTorch resnet50 time: 68.6ms (= 13714.6ms / 200, input_shape=[1, 3, 224, 224], ddp, world size=2)
✔️ Relative speed: 1.92 (= 68.6ms / 35.7ms)


@fpzh2011 fpzh2011 merged commit 924e7f7 into master Mar 29, 2023
@fpzh2011 fpzh2011 deleted the fix_group_norm_grad branch March 29, 2023 10:18