
Attempting to Fine-tune Stable Diffusion Again

My second attempt at fine-tuning Stable Diffusion, this time using images from Cyberpunk 2077, was more successful.

After my first attempt at fine-tuning Stable Diffusion didn’t yield the desired results, I realized that I may have collapsed the embedding space by using too few images with very similar descriptions. For my second attempt, I decided to collect more images using a simple approach: extracting frames from a playthrough video of the game. This increased the diversity of the images and would hopefully improve the fine-tuning results.

This seven-hour-long video clocked in at about 1,200,000 frames. I used YOLOv7 to identify frames containing recognizable objects and extracted about 10,000 of them, which I cropped and scaled to 512x512.
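As a rough illustration, a minimal sketch of this extract-and-filter step might look like the following. It assumes OpenCV for decoding and substitutes a YOLOv5 model from torch.hub as a stand-in for the YOLOv7 detector I used; the file names, sampling stride, and confidence threshold are made up.

```python
# Minimal sketch: sample frames from the video, keep only frames where the
# detector finds something, then center-crop and scale to 512x512.
import cv2
import torch
from pathlib import Path

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # stand-in detector
model.conf = 0.4                                         # confidence threshold (assumed)

video = cv2.VideoCapture("playthrough.mp4")              # assumed filename
out_dir = Path("frames_512")
out_dir.mkdir(exist_ok=True)

stride = 120   # keep roughly one candidate frame every 120 frames (assumed)
idx = kept = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if idx % stride == 0:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        detections = model(rgb).xyxy[0]                  # (x1, y1, x2, y2, conf, cls)
        if len(detections) > 0:
            # Center-crop to a square, then scale to the 512x512 size SD expects.
            h, w = frame.shape[:2]
            side = min(h, w)
            y0, x0 = (h - side) // 2, (w - side) // 2
            crop = cv2.resize(frame[y0:y0 + side, x0:x0 + side], (512, 512),
                              interpolation=cv2.INTER_AREA)
            cv2.imwrite(str(out_dir / f"frame_{kept:05d}.png"), crop)
            kept += 1
    idx += 1
video.release()
```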
The next challenge was to generate accurate captions for these images. To do this, I used BLIP (Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation) to produce four captions per image: two generated deterministically and two by sampling, using two different BLIP models. Finally, I used a transformer-based summarizer to condense the captions.
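For reference, a sketch of this captioning step could look like the code below. It uses the Hugging Face BLIP checkpoints and a BART summarizer; the specific model IDs and generation settings are my assumptions, not necessarily the exact ones used for the dataset.

```python
# Sketch: four BLIP captions per image (deterministic + sampled, two models),
# then a summarizer to condense them into a single training caption.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration, pipeline

models = []
for name in ("Salesforce/blip-image-captioning-base",
             "Salesforce/blip-image-captioning-large"):
    models.append((BlipProcessor.from_pretrained(name),
                   BlipForConditionalGeneration.from_pretrained(name)))

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def caption_image(path):
    image = Image.open(path).convert("RGB")
    captions = []
    for processor, model in models:
        inputs = processor(images=image, return_tensors="pt")
        # One deterministic caption and one sampled caption per model.
        for sample in (False, True):
            out = model.generate(**inputs, do_sample=sample, max_new_tokens=30)
            captions.append(processor.decode(out[0], skip_special_tokens=True))
    # Condense the four captions into one caption for training.
    joined = ". ".join(captions)
    return summarizer(joined, max_length=40, min_length=5,
                      do_sample=False)[0]["summary_text"]

print(caption_image("frames_512/frame_00000.png"))
```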

Here are two example images with corresponding captions:

Image 1: “A police car driving down a street next to a bus. a police vehicle driving past a truck on street with lights. two trucks parked in the rain. a city at night with a police car and a police”

Image 2: “An image of a man with tattoo on his hands. a man sitting at a table with a green box in his hand. A man in a restaurant eating something”

These are not the best captions, but they are more diverse than what I used before and hopefully still reasonably accurate. To make this work with the Stable Diffusion training code, I had to generate a metadata.jsonl file that maps each image to its caption. A simple Jupyter notebook did the trick for that.
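The notebook essentially boils down to a cell like the one below, which writes one JSON object per line in the ImageFolder convention (a "file_name" plus a caption column). The "text" column name and the captions.json input file are assumptions for illustration.

```python
# Sketch: build metadata.jsonl mapping each image file to its generated caption.
import json
from pathlib import Path

image_dir = Path("frames_512")
# Assumed intermediate file, e.g. {"frame_00000.png": "a police car ...", ...}
captions = json.loads(Path("captions.json").read_text())

with open(image_dir / "metadata.jsonl", "w") as f:
    for name, caption in captions.items():
        f.write(json.dumps({"file_name": name, "text": caption}) + "\n")
```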

Ten thousand extracted images with generated captions

Thanks to Lambda Labs’ cloud GPUs, I rented an NVIDIA A100 for a couple of days. This allowed me to run the fine-tuning for about 25 epochs, saving a checkpoint after each epoch. To evaluate the quality of the checkpoints, I used txt2img to generate four images for each of four prompts with every checkpoint. The video below shows the results. I’m still not sure how to interpret them: while they are definitely better than my previous attempt, they’re still not quite where I would like them to be.
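The evaluation loop is a simple sweep over checkpoints and prompts, along the lines of the sketch below. The script path and flag names follow the CompVis scripts/txt2img.py interface as I recall it, and the prompts are placeholders, so treat both as assumptions.

```python
# Sketch: run txt2img for a fixed set of prompts against every epoch checkpoint.
import subprocess
from pathlib import Path

prompts = [
    "a police car on a rain-soaked street at night",
    "a crowded neon-lit market",
    "a man eating in a small restaurant",
    "a city skyline at dusk",
]  # placeholder prompts, not the ones used in the video

for ckpt in sorted(Path("checkpoints").glob("epoch_*.ckpt")):
    for i, prompt in enumerate(prompts):
        subprocess.run([
            "python", "scripts/txt2img.py",
            "--ckpt", str(ckpt),
            "--prompt", prompt,
            "--n_samples", "4",
            "--outdir", f"eval/{ckpt.stem}/prompt_{i}",
        ], check=True)
```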

Let me know what you think.
