
Reproducing failed #26

Open
WannaBSteve opened this issue Nov 28, 2023 · 5 comments

@WannaBSteve

Hi, I really appreciate your fantastic work and releasing the source code, but following the training instructions in the README (using the default config, baseline.yaml), I found it hard to reproduce the results in the paper.
For example, the ADE of my result is nearly 0.9 (on both an RTX A4000 and an RTX 3090), while the result reported in the paper is only 0.22.

Could you please shed some light on this? I would also really appreciate it if you could provide some pretrained models.
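
For reference, I am assuming the standard best-of-K ADE/FDE definition on ETH/UCY (errors in meters, minimum over the K sampled trajectories). A minimal sketch of that metric as I understand it, not code from this repo:

```python
import numpy as np

def min_ade_fde(pred, gt):
    """pred: (K, T, 2) sampled trajectories; gt: (T, 2) ground truth, in meters."""
    dists = np.linalg.norm(pred - gt[None], axis=-1)   # (K, T) per-step L2 errors
    ade_per_sample = dists.mean(axis=1)                # average error over the horizon
    fde_per_sample = dists[:, -1]                      # error at the final timestep
    return ade_per_sample.min(), fde_per_sample.min()  # best of K samples

# Example: 20 samples over a 12-step (4.8 s) prediction horizon
pred = np.random.randn(20, 12, 2)
gt = np.zeros((12, 2))
print(min_ade_fde(pred, gt))
```

Please let me know if your evaluation differs from this convention.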

@Gutianpei
Owner

Hello,

0.9 does not seem right; there might be a bug or an incorrect config in your setup. The original pretrained models were lost due to server changes, but I can re-train a new one when I have time. Thanks for the comments.

@WannaBSteve
Author

Thank you for replying.

I didn't modify the config file at all. I'm now looking forward to the retrained model; that would really help.
Thanks again.

@JunningSu

I also failed when I tried to reproduce the results on the univ dataset; the values were similar to what you reported (0.9/1.1). Have you solved the problem?

@VanHelen

Hello, I also obtained similar results with baseline.yaml. Have you solved this problem?

@mh-kav-institute

Another push from myself:

I also obtain results similar to those described above for all five ETH sub-tests when trying to train the models. Could you please recheck the submitted code and configs, or upload your original pretrained models? It's great work, but it is impossible to use for further comparison or investigation unless this is fixed.

Thank you.
