My LoRA experiment 6 - Trying regularization images
What a catastrophe this training session turned out to be.
For the regularization folder I generated a batch of images and placed them in it. Settling on 20 images seemed reasonable at the time: 3 of each image type, covering generic tags like "1girl" and "close up", various poses, and different characters.
- Use 2 or 3 different images per tag. For example, for "1girl" have 2 or 3 different images of what "1girl" could be.
- Apply this rule to every tag you are using in your dataset to your regularization folder.
- It's about variety, but keep it simple and focused.
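The per-tag rule above can be turned into a quick sanity check. This is just a sketch, assuming a hypothetical naming scheme where regularization images are named `tag_index.png` (e.g. `1girl_1.png`, `close_up_2.png`); the original post doesn't describe how the files are named:

```python
import re
from collections import Counter

def offending_tags(filenames, min_per_tag=2, max_per_tag=3):
    """Given regularization image filenames like '1girl_1.png',
    return the tags whose image count falls outside min..max."""
    counts = Counter()
    for name in filenames:
        stem = name.rsplit(".", 1)[0]          # drop the extension
        match = re.match(r"(.+)_(\d+)$", stem)  # split 'tag_index'
        if match:
            counts[match.group(1)] += 1
    return {tag: n for tag, n in counts.items()
            if not min_per_tag <= n <= max_per_tag}
```

For example, `offending_tags(["1girl_1.png", "1girl_2.png", "close_up_1.png"])` would flag `close_up` for only having one image.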
The plan was to run the training overnight with 20 repeats and 42 epochs, which seemed efficient. Training ran at around 2.2 s/it, which was really not bad, and I was happy overall. Expecting about 9 hours of progress, I was quite disappointed to wake up and discover that only 21 epochs had run. Maybe there was an error in the settings, but who knows?
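The 9-hour figure comes from simple step arithmetic: steps per epoch are roughly images × repeats, and total time is total steps × seconds per step. A minimal sketch, assuming a hypothetical dataset of 18 training images and batch size 1 (the post states neither number):

```python
def estimate_hours(num_images, repeats, epochs, secs_per_it, batch_size=1):
    """Rough wall-clock estimate: total optimizer steps x seconds per step."""
    steps_per_epoch = num_images * repeats // batch_size
    total_steps = steps_per_epoch * epochs
    return total_steps * secs_per_it / 3600

# Hypothetical 18-image dataset, 20 repeats, 42 epochs at 2.2 s/it
print(round(estimate_hours(18, 20, 42, 2.2), 1))  # -> 9.2
```

With those assumed numbers the estimate lands at roughly 9 hours; a bigger dataset or more repeats scales the time linearly.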
Testing epoch 20 or 21 later that evening, I was baffled to find that none of the generated images resembled the intended character. Everything looked generic, as if the regularization images had erased any progress made during training. But hey, might as well try again with the original 42 epochs and see what happens.
Once again, just before getting some sleep seemed like the perfect time to set up the training. I triple-checked everything and planned to train with 42 epochs. But then it somehow switched to 20 epochs, even though the UI showed 42. After some investigation, it turned out that having regularization images caused the switch to 20 epochs.
I thought it could be fixed by adding 20 more images to the regularization folder (40 in total), hoping the epoch count would scale up, but that sadly didn't work; the training still stubbornly insisted on 20 epochs. At least the cause was identified, though. I noticed an epoch override setting in one of the last sections, the one about resuming training, and setting it to 42 allowed the intended 42 epochs to run.
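One guess at why the epoch count kept halving: in kohya-style trainers, each optimizer step consumes a regularization image alongside a training image, so regularization roughly doubles the work per epoch. If the step budget was sized for 42 epochs without regularization, that same budget only covers about half as many epochs with it, which would match the observed 20-21. This is a hypothesis sketched in code, not a confirmed explanation of the tool's behavior:

```python
def effective_epochs(planned_epochs, steps_per_epoch, uses_reg_images):
    """Hypothesis: reg images double the steps consumed per epoch, so a
    step budget sized without them covers ~half the planned epochs."""
    step_budget = planned_epochs * steps_per_epoch
    per_epoch = steps_per_epoch * (2 if uses_reg_images else 1)
    return step_budget // per_epoch

print(effective_epochs(42, 360, uses_reg_images=True))   # -> 21
print(effective_epochs(42, 360, uses_reg_images=False))  # -> 42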
Feeling victorious, I left the training to run overnight. It took well over 9 hours, even bleeding into the next day's work hours. When I finally checked the results at lunch, they were nothing short of a disaster. Not a single generated image resembled Kiwi. It was a complete failure.
So, the results were less than impressive. Sure, the clothes were somewhat acceptable, but they were marred by artifacts and imperfections. And the character? Just a generic anime girl sporting blonde hair, only because it was specifically prompted.
Feeling a bit desperate, I next tried prompting "face_mask", which had been a tag in the training set. It's also a distinctive trait of Kiwi, so it shouldn't even require a prompt when generating her. But what's the harm in trying, right? Time to see what this experiment would yield.
The level of dissatisfaction and disappointment I was feeling was incredible.
This is not what I wanted at all. I'm unsure how part of her default outfit got created, but here it's been mixed with a skirt. Yes, it's stylish, but it's so badly rendered it's hard to look at. Let's be real: the training with regularization images was a total bust. It's as if regularization erased any progress made, leaving behind only fragments of the outfit and a mishmash of unwanted art styles.
The good
I suppose this LoRA could basically be "kiwi clothes" or something. I also learned that regularization images are probably not meant for these anime-type pictures, unless perhaps you want to isolate clothing or a style.
The bad
Both training runs with regularization images were complete failures. It's as if the regularization undid any training or knowledge of the character: part of the outfit persisted, but it pulled in a completely different art style and mixed it with something I did not want.
Things to do next time
The logical step would be to re-run the training without regularization images. However, before diving back in, it might be wise to do some online research to see if something was overlooked. After all, the first iteration wasn't perfect either, and simply re-training might not be the solution.
At this point, having someone to discuss ideas and strategies with would be ideal. But, since that's not an option, it's time to forge ahead on this solo journey.