training_method seems not to work #30

Open
Con6924 opened this issue Aug 28, 2023 · 0 comments

Con6924 commented Aug 28, 2023

Thanks for your impressive work and clear code!

I find that setting training_method to 'selfattn' or 'xattn' leads to the following failure:

create LoRA for U-Net: 0 modules.
Traceback (most recent call last):
  File "/home/notebook/code/personal/S9049723/LECO/./train_lora.py", line 343, in <module>
    main(args)
  File "/home/notebook/code/personal/S9049723/LECO/./train_lora.py", line 330, in main
    train(config, prompts)
  File "/home/notebook/code/personal/S9049723/LECO/./train_lora.py", line 89, in train
    optimizer = optimizer_module(network.prepare_optimizer_params(), lr=config.train.lr, **optimizer_kwargs)
  File "/home/notebook/code/personal/S9049723/Anaconda3/envs/leco/lib/python3.10/site-packages/torch/optim/adamw.py", line 50, in __init__
    super().__init__(params, defaults)
  File "/home/notebook/code/personal/S9049723/Anaconda3/envs/leco/lib/python3.10/site-packages/torch/optim/optimizer.py", line 187, in __init__
    raise ValueError("optimizer got an empty parameter list")
ValueError: optimizer got an empty parameter list

From the LoRA implementation, the extra LoRA modules are only attached to Conv and Linear modules, so no LoRA ends up associated with the attn blocks that these training methods target. Maybe the related code can be removed or refined in a later update.
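For reference, here is a minimal, hypothetical sketch of the failure mode (the collect_lora_targets helper and its filtering rules are my own illustration, not the repository's code): if the name filter for 'selfattn'/'xattn' matches no Linear/Conv module, the parameter list handed to the optimizer is empty and AdamW raises exactly this error.

```python
# Hypothetical illustration (not the repository's code): a name-based filter
# that never matches yields zero LoRA target modules, and an optimizer built
# from an empty parameter list fails exactly as in the traceback above.
import torch
import torch.nn as nn


def collect_lora_targets(root: nn.Module, training_method: str):
    """Collect Linear/Conv2d modules whose qualified name matches the method."""
    keyword = {"selfattn": "attn1", "xattn": "attn2"}.get(training_method, "")
    return [
        (name, module)
        for name, module in root.named_modules()
        if isinstance(module, (nn.Linear, nn.Conv2d)) and keyword in name
    ]


# A toy stand-in for a U-Net block whose submodule names never contain
# "attn1"/"attn2", so the filter above matches nothing.
block = nn.Sequential(nn.Linear(8, 8), nn.Conv2d(3, 3, 1))

targets = collect_lora_targets(block, "selfattn")
print(f"create LoRA for U-Net: {len(targets)} modules.")  # 0 modules

params = [p for _, m in targets for p in m.parameters()]
try:
    torch.optim.AdamW(params, lr=1e-4)
except ValueError as err:
    print(err)  # "optimizer got an empty parameter list"
```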
