LoRA Training 6.1 - Network rank and network alpha play an important role

tl;dr: Do not set your network alpha to 1; the results are garbage unless your character is super generic. Do not set your network alpha to 128 either, as it risks generating artifacts.

💡 This has been overturned: set your Network Alpha to double your Network Rank for LoRA training. (Experiment 8 findings)

Consider this a follow-up to the previous post (LoRA Training 6), as I won't be diving too deep this time. I decided to run the training again without regularization images, aiming for 42 epochs. To my dismay, the results were shockingly worse. I also ran the same settings with fewer epochs and more repeats, and each run was worse than the last.

So, what could have caused this deterioration from version 1 to 1.2, despite increasing the epochs and reducing the repeats to 10? Let's try to figure this out.

LoRA of Kiwi at 42 epochs with a network rank of 64 and network alpha of 1, 0.8 strength

In my previous post, I failed to mention something crucial: I had set the network rank to 64, just like before, but this time I left the network alpha at 1, assuming it wouldn't make a difference. Turns out, I was wrong – it definitely has an impact, although the extent remains unclear.
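For context on why the alpha matters at all: from what I understand, most LoRA trainers (kohya's sd-scripts included) scale the low-rank update by alpha divided by rank before adding it to the frozen base weights, so rank 64 with alpha 1 shrinks every learned update to 1/64 of its size. The snippet below is a minimal illustrative sketch of that convention, not the trainer's actual code; all names and dimensions are made up.

```python
import torch

# Minimal sketch of the common alpha-over-rank scaling in LoRA (illustrative only).
rank, alpha = 64, 1
in_features, out_features = 320, 320

lora_down = torch.randn(rank, in_features) * 0.01   # "down" projection
lora_up = torch.randn(out_features, rank) * 0.01    # "up" projection (pretend it's been trained; real init is zeros)
scale = alpha / rank                                 # 1 / 64 here -> very weak updates

def lora_delta(x: torch.Tensor) -> torch.Tensor:
    """The extra output added on top of the frozen base layer's output."""
    return (x @ lora_down.T) @ lora_up.T * scale

x = torch.randn(1, in_features)
print(f"effective scale = {scale:.4f}")              # 0.0156 with rank 64 / alpha 1
print(f"delta norm = {lora_delta(x).norm():.4f}")
```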

With that in mind, my next training attempt will use a new data set with 15 repeats, 32 epochs, a network rank of 64, and a network alpha of 32. Depending on the results, I may delve deeper into these parameters. However, with each training session taking around 3 hours for roughly 32 epochs (depending on the repeats), it's not feasible to run numerous tests quickly – especially since I need my computer for other tasks.
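For anyone budgeting their own runs, here's roughly how repeats and epochs translate into training steps in a kohya-style setup. The image count and batch size below are placeholders, since I haven't listed mine here; only the repeats and epochs match the plan above.

```python
# Back-of-the-envelope step count for the planned run (placeholder dataset numbers).
num_images = 30      # hypothetical dataset size
repeats = 15         # repeats per image, as planned
epochs = 32          # planned epochs
batch_size = 2       # hypothetical batch size

steps_per_epoch = (num_images * repeats) // batch_size
total_steps = steps_per_epoch * epochs
print(f"{steps_per_epoch} steps per epoch, {total_steps} steps total")
```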

LoRA of Kiwi at 42 epochs with a network rank of 64 and network alpha of 1, 0.9 strength

In the future, I might try a network rank of 32 with an alpha of 32, or a rank of 64 and an alpha of 64. I'm hesitant to attempt 128 x 128 again, as I suspect that's what caused the images to artifact so badly. Maybe the network alpha should be 64 instead of 128. If time permits, I'll run tests for those as well.
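As a quick sanity check before committing roughly 3 hours per run, here's the effective scale each of these combinations would give, under the same alpha-over-rank assumption as the sketch above:

```python
# Effective scale (alpha / rank) for each combination I'm weighing, assuming the
# usual alpha-over-rank convention (illustrative only).
combos = [(64, 1), (64, 32), (32, 32), (64, 64), (128, 128), (128, 64)]

for rank, alpha in combos:
    print(f"rank {rank:>3}, alpha {alpha:>3} -> scale {alpha / rank:.4f}")
```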

For now, my next blog post will focus on these adjustments. At least I have a clearer direction in mind. Sometimes taking a break helps, as answers often come when you're preoccupied with something else entirely. In this case, I'm trusting my intuition to guide me.

It's quite disappointing that about 90% of the generated images turned out to be useless, and the system struggled to follow the prompt accurately. The best result I obtained was at a 0.9 strength setting, but even that was far from perfect. The rest of the images were just a jumbled mess.

LoRA of Kiwi at 42 epochs with a network rank of 64 and network alpha of 1, 0.9 strength