Thank you for all this powerful software! I have a problem with
LocalGPT Llama2-7b (without GUI) [CUDA (tokenizing only, chat on CPU)/CPU].
I launch the first .bat to ingest a document (a PDF in the SOURCE_DOCUMENTS folder). A message shows "loading document..." and that's all! The console is not frozen, but nothing seems to happen; I waited several hours.
So I launch the last .bat to start localGPT, and I get these errors:
"
CUDA extension not installed.
CUDA extension not installed.
2024-03-27 10:54:55,908 - INFO - run_localGPT.py:244 - Running on: cuda
2024-03-27 10:54:55,908 - INFO - run_localGPT.py:245 - Display Source Documents set to: False
2024-03-27 10:54:55,909 - INFO - run_localGPT.py:246 - Use history set to: False
2024-03-27 10:54:57,360 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
D:\LocalGPT\miniconda3\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
max_seq_length 512
2024-03-27 10:55:00,011 - INFO - run_localGPT.py:132 - Loaded embeddings from hkunlp/instructor-large
2024-03-27 10:55:00,260 - INFO - run_localGPT.py:60 - Loading Model: TheBloke/Llama-2-7b-Chat-GGUF, on: cuda
2024-03-27 10:55:00,260 - INFO - run_localGPT.py:61 - This action can take a few minutes!
2024-03-27 10:55:00,262 - INFO - load_models.py:38 - Using Llamacpp for GGUF/GGML quantized models
Traceback (most recent call last):
File "D:\LocalGPT\localGPT\run_localGPT.py", line 285, in <module>
main()
File "D:\LocalGPT\miniconda3\lib\site-packages\click\core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "D:\LocalGPT\miniconda3\lib\site-packages\click\core.py", line 1078, in main
rv = self.invoke(ctx)
File "D:\LocalGPT\miniconda3\lib\site-packages\click\core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "D:\LocalGPT\miniconda3\lib\site-packages\click\core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "D:\LocalGPT\localGPT\run_localGPT.py", line 252, in main
qa = retrieval_qa_pipline(device_type, use_history, promptTemplate_type=model_type)
File "D:\LocalGPT\localGPT\run_localGPT.py", line 142, in retrieval_qa_pipline
llm = load_model(device_type, model_id=MODEL_ID, model_basename=MODEL_BASENAME, LOGGING=logging)
File "D:\LocalGPT\localGPT\run_localGPT.py", line 65, in load_model
llm = load_quantized_model_gguf_ggml(model_id, model_basename, device_type, LOGGING)
File "D:\LocalGPT\localGPT\load_models.py", line 56, in load_quantized_model_gguf_ggml
return LlamaCpp(**kwargs)
File "D:\LocalGPT\miniconda3\lib\site-packages\langchain\load\serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: ./models\models--TheBloke--Llama-2-7b-Chat-GGUF\snapshots\191239b3e26b2882fb562ffccdd1cf0f65402adb\llama-2-7b-chat.Q4_K_M.gguf. Received error [WinError -1073741795] Windows Error 0xc000001d (type=value_error)
"
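For anyone triaging this: the signed `[WinError -1073741795]` in the last line of the traceback corresponds to NTSTATUS 0xC000001D (STATUS_ILLEGAL_INSTRUCTION). This often indicates that a prebuilt llama-cpp binary was compiled with CPU instructions (for example AVX2) that an older Xeon E5 may not support; that diagnosis is an assumption on my part, not confirmed by the log. A quick sketch of the error-code conversion:

```python
# Reinterpret the signed 32-bit WinError from the traceback as an
# unsigned NTSTATUS code to look it up in Microsoft's documentation.
win_error = -1073741795          # value shown in the traceback
ntstatus = win_error & 0xFFFFFFFF  # two's-complement -> unsigned 32-bit
print(hex(ntstatus))             # 0xc000001d = STATUS_ILLEGAL_INSTRUCTION
```

If that is the cause, rebuilding llama-cpp-python locally (so it targets the host CPU) rather than using a prebuilt wheel may help, but I have not verified this on this machine.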
Can you help me? I have an RTX 3060 and a Xeon E5 with 64 GB of RAM.
Thank you for everything.