harryyoung commented Dec 6, 2011
When loading a map on the current build I get the following
------- Game Initialization -------
gamedate: Dec 6 2011
Float to int casting behaviour: ISO compliant
ERROR: Com_sprintf: overflowed bigbuffer
----- Server Shutdown (Server crashed: Com_sprintf: overflowed bigbuffer) -----
==== ShutdownGame ====
^1Error: BotLibShutdown: bot library used before being setup
----- CL_Shutdown -----
OpenAL capture device closed.
I recently purchased this video game to finally finish it after all these years, but I was having trouble running it. Whenever I tried to launch it, my monitor would constantly change resolutions, then the game would switch to "windowed" mode, and eventually it would crash. I have a fairly old computer, and while I know it is quite underpowered, I was certain the issue was not my graphics card but rather a flaw in the game itself.

I decided to search through the game's files for a potential solution, and I managed to track down the root of the problem: it is in the "gl" folder. The .DLL libraries inside it are apparently faulty. I thought that replacing them with the libraries from a game that shares the same engine ("Quake III Arena", in my case) would fix it, and while that worked when launching the game directly, starting it from "Steam" produced the same crash. Ultimately, I decided to delete the folder's contents entirely, and the problem was solved. My best guess is that the game attempts to load the OpenGL library directly from the "gl" folder instead of the operating system's default location ("C:\WINDOWS\system32\opengl32.dll"), causing the resolution to flip back and forth.
TL;DR: Please open your game's installation folder ("C:\Program Files\Steam\steamapps\common\Return to Castle Wolfenstein", on my end), find a folder named "gl", and remove its contents.
Hopefully, that did the trick. If you have further issues, please comment here or message me.
When converting an integer to text, I typically create a big buffer to use with sprintf() to hold any potential result.
I'd like to be more space efficient and certainly portable, so instead of a magic number like 50, I found this alternative:
(sizeof(integer_type)*CHAR_BIT*0.302) + 3
So the questions are:
1 Is there a problem with the alternative equation?
2 What is a better solution? This alternative is a tad wasteful and looks overly complicated.
Answers provide 3 thoughtful approaches:
1 Use buffer[max size for type] (Answer selected)
1 The compile-time max buffer size using the equation (sizeof(integer_type)*CHAR_BIT*0.302) + 3 was neither broken nor improved. The impact of locale was researched as suggested by @paddy, and no locale settings affected the integer conversions %d %x %u %i . It was found that a slight improvement may be made to the equation if the type is known to be signed or unsigned (below). @paddy's caution about being "more conservative" is good advice.
2 asprintf() is really a good all-purpose solution, but not portable. Maybe in post-C11?
3 snprintf() , although standard, has known inconsistencies across implementations when the supplied buffer is undersized. This implies calling it with an over-sized buffer and then generating a right-sized buffer. @jxh suggested a thread-safe global scratch buffer used to form the answer into a local right-sized buffer. This novel approach deserves consideration and I may use it, but the original question focused more on determining a conservative buffer size before the s(n)printf() call.
signed:   ((sizeof(integer_type)*CHAR_BIT-1)*0.302) + 3
unsigned: (sizeof(integer_type)*CHAR_BIT*0.302) + 2
*28/93 may be used in lieu of *0.302 .