localGPT is buggy and quite unstable at the moment, so:
1) Create a small .txt file with some text and try to ingest that first; turning a big document into tokens can take a REALLY long time, and I haven't tested it on large files myself (see the first sketch below).
2) Open the .bat file and replace the paths to the llama model with absolute paths (right-click the ggml file, click "Copy as path", and paste that path without the quote marks); see the second sketch below.
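
For step 1, a rough Windows batch sketch of the smoke test. It assumes the stock localGPT layout, where documents live in a SOURCE_DOCUMENTS folder next to ingest.py; if your copy uses different folder or script names, adjust them.

```bat
@echo off
REM Quick smoke test: ingest a tiny .txt before trying a large document.
REM Assumes the stock localGPT layout (SOURCE_DOCUMENTS folder, ingest.py);
REM change the names below if your copy is laid out differently.
echo Hello, this is a small test document. > SOURCE_DOCUMENTS\test.txt
python ingest.py
pause
```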
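
For step 2, a minimal before/after sketch of the path edit. MODEL_PATH and the ggml file name here are placeholders, not localGPT's actual variable names; the line to change is whichever one in your .bat points at the ggml model file.

```bat
@echo off
REM Before: a relative path that breaks if the .bat is launched from another folder.
REM set MODEL_PATH=models\ggml-model-q4_0.bin

REM After: the absolute path copied with right-click -> "Copy as path",
REM pasted WITHOUT the surrounding quote marks.
set MODEL_PATH=C:\localGPT\models\ggml-model-q4_0.bin
```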
This may help, I think.