This seems very exotic so I can't look into it until I come back home, but it should work if you select Vulkan instead of cuBLAS.
Using --usevulkan gives this response:
------
Traceback (most recent call last):
  File "koboldcpp.py", line 4451, in <module>
    main(parser.parse_args(),start_server=True)
  File "koboldcpp.py", line 4099, in main
    loadok = load_model(modelname)
  File "koboldcpp.py", line 871, in load_model
    ret = handle.load_model(inputs)
OSError: [WinError -1073741795] Windows Error 0xc000001d
[11968] Failed to execute script 'koboldcpp' due to unhandled exception!
------
Let me know what you figure out when you're able. Thanks.
I double-checked everything on my end, so my only guess at this point is that you're using an ancient CPU without AVX2 support (Windows error 0xc000001d is STATUS_ILLEGAL_INSTRUCTION, which typically means the binary uses instructions your CPU doesn't have), or that something is preventing KoboldCPP from creating the _MEI folder in temp.
If you're using a very old CPU (pre-2013 for Intel, pre-2015 for AMD) you could try replacing your koboldcpp.exe with https://github.com/LostRuins/koboldcpp/releases/download/v1.76/koboldcpp_oldcpu.... and/or selecting "Use Vulkan (Old CPU)".
If you're not using an ancient CPU you could try disabling your antivirus, but I doubt that's the reason.
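If you want to confirm whether your CPU has AVX2 before trying the alternate build, here's a quick check you could run (just a sketch, not part of KoboldCPP; on Windows it asks the OS directly, on Linux it parses /proc/cpuinfo):

```python
# Sketch: detect AVX2 support. Not part of KoboldCPP -- a standalone helper.
import platform

def flags_have_avx2(cpuinfo_text):
    """Return True if a /proc/cpuinfo-style dump lists the avx2 flag."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            return "avx2" in line.split()
    return False

def has_avx2():
    if platform.system() == "Windows":
        import ctypes
        # 40 = PF_AVX2_INSTRUCTIONS_AVAILABLE (recognized on recent Windows 10+)
        return bool(ctypes.windll.kernel32.IsProcessorFeaturePresent(40))
    try:
        with open("/proc/cpuinfo") as f:
            return flags_have_avx2(f.read())
    except OSError:
        return False  # can't tell on this platform

if __name__ == "__main__":
    print("AVX2 supported:", has_avx2())
```

A tool like CPU-Z will tell you the same thing under "Instructions" if you'd rather not run a script.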
It appears I'm running an ancient CPU. Using --usevulkan alone didn't help, but I did get it going by downloading the alternate release of KoboldCPP. I renamed the file to koboldcpp.exe and replaced the original version; however, it won't start via silverpine.exe. Instead, I have to start KoboldCPP from the command line with "koboldcpp.exe --model "model.gguf" --usevulkan --gpulayers 43 --multiuser --skiplauncher --highpriority" (I found that I can use --usevulkan OR --usecublas with the alternate koboldcpp), wait for that to load, and then start silverpine.exe. Figured I'd mention it in case it's something you care to remedy.

The only other hiccup I've come across so far: when conversing with Gareth, he'll give a response, then I'm immediately asked if I want to buy the shed, and after clicking 'no', I get another response from him as though I had sent a blank input after his initial response.

Beyond that, I'm thoroughly impressed with how well the LLM works with whatever prompting and settings you've coded into the game. The responses are not only coherent but intelligent and on point, and the memory throughout a conversation is excellent. Keep it up. Can't wait to see some male characters, or maybe even a customizable one for the player to use.
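For anyone else stuck with the same two-step launch, a small wrapper could automate it: start KoboldCPP, poll until its server port accepts connections, then start the game. This is just a sketch under a couple of assumptions -- that KoboldCPP is on its default port 5001 and that both executables sit in the working directory; adjust to your setup:

```python
# Sketch of a launcher: run koboldcpp.exe, wait for its server port to open,
# then start silverpine.exe. Port 5001 (KoboldCPP's default) and the file
# names/flags are assumptions from my setup -- change them as needed.
import socket
import subprocess
import sys
import time

def wait_for_server(host, port, timeout=300.0):
    """Poll until something is listening on (host, port). True on success."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(1.0)  # not up yet; retry
    return False

def main():
    kobold = subprocess.Popen([
        "koboldcpp.exe", "--model", "model.gguf", "--usevulkan",
        "--gpulayers", "43", "--multiuser", "--skiplauncher", "--highpriority",
    ])
    if not wait_for_server("127.0.0.1", 5001):
        kobold.terminate()
        sys.exit("KoboldCPP never came up on port 5001")
    subprocess.Popen(["silverpine.exe"])

# Guarded behind a flag so importing/inspecting this file doesn't launch anything.
if __name__ == "__main__" and "--launch" in sys.argv:
    main()
```

Run it as `python launcher.py --launch`; it only starts the game once the model has actually finished loading.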