
About multi-GPU training #5

Open
hammerwy opened this issue Aug 27, 2018 · 3 comments

Comments

@hammerwy

How can I train on multiple GPUs? Also, how does the code load the dataset? Should I put the damaged images and the ground-truth images into separate folders, with each pair of corresponding images sharing the same file name?
Thank you for your reply.

@Zhaoyi-Yan
Owner

For the first question, #6 may help you.
For the 2nd question, all ground-truth images should be placed in one folder. That's all! You should set mask_type=random during training. When testing, you need to figure out the mask corresponding to each damaged image, because for general inpainting it is impossible to recover the masked region directly from the damaged image. Blind inpainting only makes sense for specific scenarios, such as scratches in old photos, fence removal, etc.
Finally, I also recommend trying our PyTorch version, which already supports multi-GPU training.
I am training the PyTorch model these days and hope to have a good model within a few months.
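If it helps, here is a minimal sketch (not our actual training script) of how multi-GPU training is typically enabled in PyTorch with `nn.DataParallel`; the `Generator` below is only a placeholder module, not the real Shift-Net architecture:

```python
# Illustrative only: wrap a model in DataParallel so each batch is split across GPUs.
import torch
import torch.nn as nn

class Generator(nn.Module):
    # Placeholder encoder-decoder; the real Shift-Net generator is more involved.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Generator().to(device)
if torch.cuda.device_count() > 1:
    # DataParallel replicates the model and splits each input batch along dim 0.
    model = nn.DataParallel(model)

masked = torch.randn(8, 3, 256, 256, device=device)  # dummy masked input batch
output = model(masked)
```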

@hammerwy
Author

Thank you for your reply.
Regarding your answer to the 2nd question: in other words, training the model only requires the intact images, not the damaged ones. Is that right?
Also, I want to restore some images with random lines similar to fences, but it is difficult to get the mask of those lines. What should I do if I want to use Shift-Net to solve this problem?
Thanks again for your reply.

@Zhaoyi-Yan
Owner

Since line masks are quite thin, they are not strictly necessary for the GAN training itself. However, because the shift operation copies pixels from the known region into the missing region, it is hardly possible to apply the shift when the mask is unavailable. Alternatively, you could use an additional network to estimate the mask and then perform the shift. What's your opinion?
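For example, if the damaged lines happen to be close to a known fill colour (this is only an assumption for illustration; real scratches or fences usually need a learned detector), a rough mask could be estimated by simple thresholding before running the shift:

```python
# Hypothetical heuristic: recover a thin-line mask by colour thresholding,
# assuming the damaged pixels are near a known fill colour (e.g. pure white).
import numpy as np
from PIL import Image

def estimate_line_mask(image_path, fill_color=(255, 255, 255), tol=10):
    """Return a binary mask (1 = damaged pixel) by per-channel colour thresholding."""
    img = np.asarray(Image.open(image_path).convert("RGB")).astype(np.int16)
    diff = np.abs(img - np.array(fill_color, dtype=np.int16))
    return np.all(diff <= tol, axis=-1).astype(np.uint8)

# mask = estimate_line_mask("damaged.png")  # hypothetical file name
```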
