2 GB, so I see I need 4 GB. Hm. Someone recently mentioned that I might be able to create a virtual machine and up the VRAM using it, but I'm not sure if that would work given the NVIDIA CUDA requirements. Hmm... Maybe I need to upgrade my hardware after all.
vbwyrde · A member registered Sep 08, 2022
Replied to Xyper in Stable Diffusion GRisk GUI 0.1 comments
I tried it on an OptiPlex 990 with 16 GB RAM and a GeForce GT 710, but I get this error:
File "torch\nn\modules\module.py", line 925, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 44.00 MiB (GPU 0; 2.00 GiB total capacity; 1.57 GiB already allocated; 20.93 MiB free; 1.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Anyone know if there's anything I can do to get it to run on this machine?
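The traceback itself points at one knob worth trying before the GUI starts: the CUDA caching allocator's `max_split_size_mb` setting via the `PYTORCH_CUDA_ALLOC_CONF` environment variable. A minimal sketch of that suggestion (the 64 MiB value is an illustrative guess, not something from the GUI's docs, and on a 2 GB card it may only reduce fragmentation rather than free up memory):

```python
import os

# Must be set BEFORE PyTorch initializes CUDA, i.e. before `import torch` runs.
# max_split_size_mb caps the size of allocator blocks that can be split,
# which can reduce fragmentation when reserved memory >> allocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

# import torch  # only import torch after the variable is set
```

If the GUI is a packaged executable you can't edit, the same variable can be set in the shell or in System Properties before launching it; whether that gets a 2 GB GT 710 under the model's memory footprint is another question.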