Yep, I used Looking Glass for that.
I chose ruDALL-E because, at the time at least, it was the only version of DALL-E that was open source and that I could train myself. Not sure if there are other versions of DALL-E right now that allow custom fine-tuning. If I remember correctly I did 20,000 iterations; not sure how many epochs that was. I think the total training time was around 8-9 hours.
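For anyone wondering how iterations relate to epochs: it just depends on dataset size and batch size. A rough conversion sketch (the dataset size and batch size below are made-up example numbers, not my actual setup, which I don't remember):

```python
# Rough iterations-to-epochs conversion (example numbers, not my actual setup).
iterations = 20_000    # total optimizer steps
dataset_size = 1_000   # hypothetical number of training images
batch_size = 4         # hypothetical batch size

steps_per_epoch = dataset_size / batch_size
epochs = iterations / steps_per_epoch
print(f"~{epochs:.0f} epochs")  # ~80 epochs with these example numbers
```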
I had basically no experience with AI before this; it's indeed pretty difficult to start out, and I think I could already do a lot better now that DALL-E 2 and ChatGPT exist. Unfortunately I can't really point you to good resources to learn from, I just looked for different info all over the place.
Hope this answers some questions though!
I know you mentioned you can't point to good resources, but it's been 3 months. Is there anything you can share to get started doing something like this? SDXL is out, and I believe there are a lot of LoRAs on Civitai that can be used to train stuff like yours. I also wanted to know what hardware (GPU) you used to train it.
I don't really have any experience with the more modern tools like Stable Diffusion, so I still can't really give good advice on that. My personal GPU is pretty old, so I used SageMaker Studio Lab, which gave me a pretty good free GPU to use (I think it was a Tesla V100). Google Colab (Pro) should be a very good option right now as well.
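If you want to check what GPU a Colab or Studio Lab session actually gave you, something like this works (assuming PyTorch is installed in the runtime, which it usually is by default):

```python
import torch

# Print which GPU (if any) the notebook runtime has attached.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. something like "Tesla V100-SXM2-16GB"
    props = torch.cuda.get_device_properties(0)
    print(f"{props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA GPU available in this session.")
```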