
I ran into a random error with a story where the generator refuses to continue with one character, but some others still work. I'm unsure whether this is just due to the length of the story or whether other factors contribute to the issue. I have tried rebooting the PC, adjusting folder permissions, and generating other stories to try to replicate the issue. The error mentions compiling with a different option for CUDA, but I am uncertain of the specifics, and I am not familiar enough with Python to venture much more of a guess. I appreciate your time and what you have created so far!

On Windows 11, log below:

-----

21:30:48,448 AIdventure_Server INFO Server started ws://0.0.0.0:9999

21:30:51,652 AIdventure_Server INFO Client 127.0.0.1 connected

21:30:56,602 AIdventure_Server INFO Initialising the model: LyaaaaaGames/GPT-Neo-2.7B-Horni-LN at C:/Users/Surumon/AppData/Roaming/aidventure/models/generators/LyaaaaaGames/GPT-Neo-2.7B-Horni-LN

21:30:56,602 AIdventure_Server INFO Is CUDA available: True

21:30:56,603 AIdventure_Server INFO Setting up the Generator.

21:30:56,833 AIdventure_Server INFO Tokens successfully loaded from local files

21:31:02,235 AIdventure_Server INFO Model successfully loaded from local files

21:31:14,728 AIdventure_Server INFO Loading inputs to GPU

21:31:18,795 AIdventure_Server ERROR CUDA error: device-side assert triggered

CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.

For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

21:31:18,795 AIdventure_Server INFO Shutting down the server

21:31:18,883 asyncio ERROR Task exception was never retrieved

future: <Task finished name='Task-5' coro=<WebSocketServerProtocol.handler() done, defined at C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\websockets\legacy\server.py:153> exception=SystemExit(0)>

Traceback (most recent call last):

  File "C:\Program Files\AIdventure\server\server.py", line 153, in handler

    data_to_send = handle_request(p_websocket, json_message)

  File "C:\Program Files\AIdventure\server\server.py", line 183, in handle_request

    generated_text = generator.generate_text(prompt, parameters)

  File "C:\Program Files\AIdventure\server\generator.py", line 67, in generate_text

    model_output = self._Model.generate(**model_input)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context

    return func(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\transformers\generation\utils.py", line 1452, in generate

    return self.sample(

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\transformers\generation\utils.py", line 2468, in sample

    outputs = self(

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl

    return self._call_impl(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl

    return forward_call(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 741, in forward

    transformer_outputs = self.transformer(

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl

    return self._call_impl(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl

    return forward_call(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 621, in forward

    outputs = block(

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl

    return self._call_impl(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl

    return forward_call(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 326, in forward

    attn_outputs = self.attn(

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl

    return self._call_impl(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl

    return forward_call(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 278, in forward

    return self.attention(

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl

    return self._call_impl(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl

    return forward_call(*args, **kwargs)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 241, in forward

    attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\transformers\models\gpt_neo\modeling_gpt_neo.py", line 194, in _attn

    mask_value = torch.tensor(mask_value, dtype=attn_weights.dtype).to(attn_weights.device)

RuntimeError: CUDA error: device-side assert triggered

CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.

For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):

  File "C:\Program Files\AIdventure\server\server.py", line 290, in <module>

    asyncio.run(main())

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\asyncio\runners.py", line 44, in run

    return loop.run_until_complete(main)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\asyncio\base_events.py", line 629, in run_until_complete

    self.run_forever()

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\asyncio\windows_events.py", line 316, in run_forever

    super().run_forever()

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\asyncio\base_events.py", line 596, in run_forever

    self._run_once()

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\asyncio\base_events.py", line 1890, in _run_once

    handle._run()

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\asyncio\events.py", line 80, in _run

    self._context.run(self._callback, *self._args)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\websockets\legacy\server.py", line 236, in handler

    await self.ws_handler(self)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\site-packages\websockets\legacy\server.py", line 1175, in _ws_handler

    return await cast(

  File "C:\Program Files\AIdventure\server\server.py", line 166, in handler

    shutdown_server(exit_code)

  File "C:\Program Files\AIdventure\server\server.py", line 276, in shutdown_server

    exit(p_exit_code)

  File "C:\Program Files\AIdventure\mamba\envs\aidventure\lib\_sitebuiltins.py", line 26, in __call__

    raise SystemExit(code)

SystemExit: 0


Hello, I've never seen this error before. Which character did you send to the AI? You said it happened with one specific character.
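In the meantime, in case it happens again: the two hints at the bottom of your log are less scary than they sound. TORCH_USE_CUDA_DSA would require rebuilding PyTorch, so you can ignore that one, but CUDA_LAUNCH_BLOCKING is just an environment variable and needs no recompiling. I'm not sure how practical this is with the way the game launches the server, but as a rough sketch, a small helper script like the one below (this is only my assumption of how you could launch it, it is not shipped with AIdventure; the server path is taken from your log) would make the next stack trace point at the real failing line:

import os
import runpy
# Make CUDA report errors at the exact call that failed instead of asynchronously,
# so the stack trace is accurate. Must be set before torch touches CUDA.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
# Path from the log above; adjust it if your install lives somewhere else.
runpy.run_path(r"C:\Program Files\AIdventure\server\server.py", run_name="__main__")

If you run that with the Python from the aidventure environment and send the same prompt, the next log should show a much more precise traceback.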

I tried different settings and I think I figured it out. I had been stress testing my system and the program, and had set Context's Max Length to 10,000 characters. I lowered the setting by 1,000 at a time, reloading the server after each change, and once I hit 7,000 for Context's Max Length the server stopped crashing.

For context on the save: the story is about 14,500 characters and contains standard English letters, punctuation, and quotation marks, with no numbers. There are no made-up words except two character names, and one lore book entry gives a description of one character. The story was started with the "Very Bad Awakening" scenario, and the memory has not been altered since beginning the story.

I am not sure whether the error was just an out-of-memory issue or something else, but changing the Context's Max Length option seems to have fixed it. The server also does not seem to pick up changes made to the AI Settings mid-run, or when I shut the server down with the save still loaded. To get new settings to stick, I have to return to the main menu, shut the server down from there, and only then adjust the AI Settings.

I hope this information is useful to you. Thank you for the help!

It's very curious. Lowering the context length could fix an out-of-memory error, but an out-of-memory crash normally reports a different message than a device-side assert, so I think it's something else. I will write your feedback down and check it when I can. Thanks!
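If I had to guess (not verified yet): 10,000 characters is probably more than the 2,048 tokens GPT-Neo can actually attend to, so the position-embedding lookup would index past the end of its table on the GPU, and that kind of out-of-range index is exactly the sort of thing that surfaces as a device-side assert. A rough way to check the theory, assuming the model folder from your log and the story text saved to a plain text file (story_dump.txt is just a placeholder name):

from transformers import AutoConfig, AutoTokenizer
model_dir = r"C:/Users/Surumon/AppData/Roaming/aidventure/models/generators/LyaaaaaGames/GPT-Neo-2.7B-Horni-LN"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
config = AutoConfig.from_pretrained(model_dir)
# story_dump.txt is a placeholder: paste the story/context that was sent to the AI into it.
prompt = open("story_dump.txt", encoding="utf-8").read()
tokens = tokenizer(prompt)["input_ids"]
print("prompt tokens:", len(tokens))
print("model limit  :", config.max_position_embeddings)  # 2048 for GPT-Neo 2.7B
# If the prompt exceeds the limit, the position-embedding lookup goes out of range
# on the GPU, which shows up as a device-side assert rather than a clear Python error.
if len(tokens) > config.max_position_embeddings:
    print("Prompt is longer than the model's context window; it must be truncated first.")

If that is what is happening, it would also explain why 7,000 characters works: it should tokenise to something just under the 2,048-token limit.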