Backporting my Shuusou Gyoku build to Windows 98 was one of my favorite commissions in recent history. If you remember 📝 last year's backport of the overhauled ReC98 build system to Windows 9x, it left me rather demoralized at the end of it all. Sure, it may be the technically fastest way of fully rebuilding the entire codebase, but it just doesn't matter to me personally – incremental rebuilds on modern systems are still faster and much better integrated with the editors I actually use. People might have appreciated the research that went into it, as usual, but it just feels so pointless if nobody actually uses the result. So why are we treating Windows 9x compatibility as this noble goal and ideal expectation again? Just because retro-computing communities exist and prefer to paint it that way? The length of this post should hopefully make it clear that this is nothing that should be demanded or taken for granted.
That's why seeing this goal in particular getting funded was such a refreshing change of perspective. Finally, retro-computing people have put their money where their mouth is, and invested in something other than hardware! 🙌
So, how do you backport a modern C++ project to Windows 98 in 2025? Visual Studio removed official support for such old systems a long time ago, and increasingly uses newfangled Win32 API functions in its C++ standard library implementations where they can't be trivially removed.
If your codebase of choice restricts itself to old C and C++ standards, compiling it with an old version of Visual Studio can get you most of the way there. But this is becoming increasingly unlikely as we only ever move further away from the mid-90s. After all, this restriction would not only have to apply to a project's own code, but to all of its dependencies as well, since a backport can't just fall back on precompiled libraries. And then, all bets are off – some projects like miniaudio might be committed to supporting Visual C++ 6, but others might just freely use whatever language features are available on the GCC version that is part of the oldest Linux image offered by their CI provider. Which is totally understandable: There is a reason behind new language versions, and at some point, developers just want to move on and stop taking productivity hits all the time. Or just prefer to try something new, because C89 in particular sure gets old after writing a 5-digit number of lines in it, at least as far as I'm concerned. I'm still hoping that I get to statically recompile Turbo C++ 4.0J one day and add at least a few more language features and code optimizations to it…
Also, having simple and accessible build processes has always been a guiding principle of mine. If people can't compile with widely available tools and have to acquire old proprietary compilers from legally dubious sources, I don't fully deliver on a key promise of free software, which is kind of important to me.
But as long as the Windows 98 users are willing to install KernelEx, we can get very far with even current Visual Studio versions. KernelEx covers most of those newfangled Win32 API functions, and even helpfully makes Windows ignore the *OperatingSystemVersion fields in the PE header. The only thing we should manually add to the build process is the /arch:IA32 flag: It removes any modern x86 instructions in newly-compiled code and thus ensures that the game still runs on period-correct CPUs. Of course, the modern build should use all modern instructions it possibly can, but it makes sense to limit Windows 98 support to the alternate build with pbg's original DirectDraw and Direct3D graphics and add the flag there.
And sure enough, 📝 this worked out beautifully for the first few releases of my Shuusou Gyoku fork. But once I added more features, running on Windows 98 became increasingly harder:
P0256 required an extra /Zc:threadSafeInit- to not use certain Win32 lock functions that KernelEx doesn't cover.
P0275 then started using the filesystem and thread features from modern C++, whose Microsoft STL implementations used enough unimplemented Win32 functions that I was forced to drop Windows 98 support for the time being. It sure doesn't help that KernelEx development has never escaped the increasingly locked-down forum it started in, which has made new builds increasingly inaccessible.
Meanwhile, Microsoft's C runtime had started to steadily remove more and more workarounds that were required to run on Windows 9x, after they've probably annoyed the developers for long enough.
So let's finally give this backport the dedicated attention it needs, and start the usual backporting loop:
Encounter one of the classic DLL function errors at startup
Look at the disassembly to figure out where that call came from
Either rewrite the offending code to not use the function, or find some way of polyfilling it if the call originated from code that is not under your direct control
Repeat until the game works
Follow the same steps for any crashes or weird behavior introduced by the older Windows version
There is some room for creativity in this process, as well as non-zero hack value and enjoyment from seeing it all work out in the end. Heck, MattKC even made a blockbuster feature film out of it. But ultimately, it's dumb drudge work that wouldn't be worth doing if no actual person cares.
And I haven't even mentioned the worst part: Setting up a full-featured, bug-free, and performant VM that connects to your development system in a sort of comfortable way – and then repeating this process for different language versions of Windows 98, and even for Windows XP and maybe 7 when it comes to debugging DirectDraw issues. This only gets harder as the required dedicated VM code for these old systems starts to bit-rot, which left apparently every VM software out there with at least one deprecated or already removed feature…
For this game, the list of Win32 API functions that actually needed polyfilling turned out to be pleasantly short:
The 4 fiber-local storage API functions (Fls*()), which we redirect to the corresponding Tls*() functions just like Microsoft did in earlier Windows SDK versions. (See the sketch after this list.)
GetLocaleInfoEx(), which is used by std::filesystem's error message implementation. We currently don't use these messages, but the retrieval code still gets linked into the binary because the respective function is part of a vtable.
InitializeCriticalSectionEx(), which bumps the minimum OS requirement to Vista for no reason because the CRT only ever passes 0 to the Flags parameter. Easily redirected to the older InitializeCriticalSectionAndSpinCount().
GetFileInformationByHandleEx(), used by std::filesystem's directory iterator. This was the only specific function needed to get BGM modding working.
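In code, the Fls*() redirection boils down to a few trivial stubs. Here's a minimal sketch; the _stub names are mine, and the mechanism that actually routes the CRT's calls to these functions is left out:

#include <windows.h>

// Fiber-local storage degrades to plain thread-local storage once you drop
// the destructor callback – which mirrors what the pre-Vista fallbacks did.
static DWORD WINAPI FlsAlloc_stub(PFLS_CALLBACK_FUNCTION callback) {
    (void)callback;
    return TlsAlloc();
}
static PVOID WINAPI FlsGetValue_stub(DWORD index) { return TlsGetValue(index); }
static BOOL WINAPI FlsSetValue_stub(DWORD index, PVOID data) { return TlsSetValue(index, data); }
static BOOL WINAPI FlsFree_stub(DWORD index) { return TlsFree(index); }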
Not only were these functions enough to cover Windows 98 with the one version of KernelEx I managed to snag from the MSFN forum before they disabled downloads, but they also made the game run on unmodified Windows XP again! To completely remove the need for KernelEx and unicows.dll, we'd still have to cover a slightly bigger number of Win32 API functions, though. But now that this push has put all the foundations into place, the chances are good that the next push might already get this done. And at that point, even Windows 95 support wouldn't be far away.
If it takes longer, it'll probably be due to these two other remaining issues:
The game currently crashes every time it's closed on Windows 9x. We can thank the Microsoft Store for that one: Store apps have to be terminated differently than regular Windows programs, but Visual Studio only uses a single (or, as they call it, universal) C runtime for both kinds of programs. As a result, the CRT has to reach deep into the NT Process Environment Block to find out what kind of program it's dealing with, and this structure simply doesn't exist in Windows 9x.
The game also still requires a CPU with SSE, because parts of the precompiled CRT execute SSE instructions unconditionally, regardless of the /arch:IA32 flag. This second issue in particular shows the limits of this approach. It's only a matter of time until Microsoft activates unconditional SSE on every single part of the precompiled CRT, forcing us to reimplement pretty much all of it for continued 9x compatibility.
This is exactly why I prefer the Zig approach of compiling the C standard library on demand against the chosen CPU model. Looking at Zig's recent progress, I'm very impressed to see that the community has addressed almost 📝 all of my pain points in the 1½ years since I last looked at it. The Zig compiler now has PDB basenames, compilation progress output, improved UBSan error messages, and compilation speed is actively being worked on.
Unfortunately though, they still break the build system all the time. For a system-level dependency that people can and will use different versions of, that kind of instability is a non-starter. So I'm very likely not going to migrate anything to Zig before the compiler hits 1.0, unless they do a feature freeze or make some other kind of compatibility promise before that point. Oh well, I've put too much effort into my Tup building blocks to not continue using them for at least a few more years.
It might seem like compiling with MinGW would be the more reliable alternative here. Even its GCC 14 version still sets the *OperatingSystemVersion and *SubsystemVersion fields to 4.0, indicating Windows 95, when compiling a 32-bit binary. And if MinGW ever decides on a higher default, I'm sure that the --(major|minor)-(os|subsystem)-version linker flags will continue to allow that default to be freely overridden. Unlike Visual Studio 2022's LINK and EDITBIN tools, which refuse OS version 4.0 for no particular reason. 🙄
However, MinGW is hardcoded to link against the DLL version of Microsoft's C runtime and offers no option of statically linking the CRT, presumably due to legal reasons. This used to be no problem as its GCC ≤13 versions linked against the generic msvcrt.dll, which is available on Windows 98 as well. But this was bad for multiple reasons. And so, even they ultimately decided that Windows 7 was a reasonable minimum requirement these days and made MinGW's GCC 14 version link against the Universal CRT, with all its api-ms-win-crt-*-l1-1-0 DLLs. We can only avoid these DLL dependencies by going -nostdlib and rewriting all our code accordingly – but guess what, we could do the exact same thing on MSVC with /NODEFAULTLIB, without switching compilers.
Unfortunately, that's the same reason why Zig would make no difference, regardless of whether you use it as a compiler for C/C++ code or write pure Zig. If you build for Windows, you can merely choose between the GNU and MSVC ABI. Then, Zig behaves exactly like the respective C compiler: Select GNU and you get the UCRT dependencies, select MSVC and you get the statically linked Microsoft CRT with all of its aforementioned drawbacks. Supposedly, it's possible to bypass MSVC, but the GNU ABI was the answer to the question of compiling without Visual Studio back then. Establishing an easy-to-use third ABI without any dependencies sounds like much more of a research project than just staying with C++.
Not to mention that Zig's Windows version support policy follows Microsoft's extended support lifecycle. Zig 0.6.0 dropped Windows 7 support, and Zig 0.11.0 dropped Windows 8.1 support. While Andrew Kelley is open to non-invasive patches for greater OS support, these could break at any moment and would therefore need consistent maintenance as well.
Surprisingly, SDL 2 has been causing by far the least amount of problems in all of this. A small adjustment to its threading functions removed its only mandatory reliance on Microsoft CRT code, and KernelEx and unicows.dll then cover any remaining unconditional usage of newer Win32 API functions. Since we already needed a __WIN9X__ macro to opt into this change and retain SDL's default behavior on modern systems, I also took the opportunity to disable most of the subsystem backends that are unsupported on Windows 98, shaving a few hundred KB off the DLL's file size.
This made SDL look even better than 📝 the already good impression I got last time. Not only is SDL not a problem, but it's actually the biggest asset we have in a Windows 9x port. And with all the improved subsystems in SDL 3, it becomes so much of an asset that we should ideally just go all in on SDL 3 and make it a hard dependency of even the cross-platform logic code.
This would be quite a big deal, and it might not immediately be obvious why. Doesn't every one of our supported platforms already depend on SDL anyway? Internally though, my current architecture predates the plan of using SDL and is still designed for the hypothetical case of not using it. After all, retaining and expanding pbg's old backend code for a slim Windows 98 port without any big dependencies was a viable option that could have been funded. But now that the backers have voted against it, directly architecting all code against SDL 3 would have so many upsides:
Since we maintain our own SDL fork for Windows 9x support, we're in full control of its portability. In contrast, recompiling Microsoft's C runtime from the SDK sources shipped with Visual Studio isn't even supported anymore. It might still be possible, but SDL is a much more easily handled and forked dependency.
/arch:IA32 also applies to SDL code. If we managed to completely purge all precompiled CRT code from the game binary as a result of using SDL functions wherever possible, we would have fully escaped the looming proliferation of SSE code within the CRT and ensured long-term Windows 9x compatibility.
And given 📝 SDL's very nature as this incompressible brick of a DLL, it only makes sense to do so. Because I've restricted the cross-platform logic layer to the C/C++ standard library, the game binaries effectively ended up with their own implementations of features that SDL already offers. Most of those come from Microsoft's CRT, but this category also includes some makeshift code of my own that I only had to write to uphold the initial design goal. Breaking this self-imposed restriction would not only simplify the architecture, but also remove a significant amount of bloat from the Windows build and even fix the occasional bug! 🤩 I've already mentioned file handling in the previous blog post, but we'd also gain a more sophisticated and bug-free BMP writer, a standardized and configurable error logging channel, case-insensitive string comparison that doesn't bloat the binary with locale braindeath, a consistently implemented sprintf() that has a defined way of printing 64-bit numbers without the ugly PRId64 hack from <inttypes.h>, and probably a bunch of other things I'm missing right now. Heck, we could even replace miniaudio's backends with the now sane low level of SDL 3's audio subsystem, limiting miniaudio's role to just mixing.
In fact, this idea is so convincing that it makes me want to freeze all new feature or backport development for Shuusou Gyoku until it's done. However, we absolutely want to do this with SDL 3 rather than 2 to reap the full set of benefits. This would imply removing the SDL 2 code path for good, but our Flatpak still uses this code path because the Freedesktop SDK will only start shipping SDL 3 with the next update in August. We could compile SDL 3 from source in the meantime, but maybe we shouldn't?
Given the funding situation and general hype, it'll probably be best if I just focus on TH03 until then.
But once that's done, it would only leave TrueType fonts, MIDI, and graphics rendering as the subsystems that our architecture supports system-specific APIs for – and even MIDI will only be on there until someone funds MIDI support for just a single non-Windows platform.
You might wonder why graphics rendering is on there, but we can unfortunately never get rid of pbg's original DirectDraw code. The 8-bit mode is just too crucial for getting the game to run decently on the old systems without 3D acceleration that a Windows 9x port is supposed to target. We could try going full GDI in the hope of maybe even being faster or more portable, but that would just be another custom backend.
We could, however, go the opposite route. Turning pbg's old code into an SDL_Renderer backend would facilitate all kinds of backports of pure SDL_Renderer games to that late-90s period of hardware. Those games will probably not run all that well 📝 if our benchmark results for software rendering are any indication, but the idea definitely has hack value.
And why stop there? Let's add a PC-98 backend! …yeah, I'm getting off-track.
Speaking of pbg's old rendering path though: Having to run it while debugging the Windows 98 backport comes with the practical problem that we still have no proper windowed mode for it. Multi-monitor support in VMs is sketchy at best, and even if it works, running OllyDbg on a separate virtual monitor next to exclusive fullscreen 8-bit DirectDraw still doesn't prevent these highly disruptive mode switches between the game and debugger windows.
Fortunately, D3DWindower is old enough to still work on Windows 98 and works well with pbg's original build of Shuusou Gyoku. Unfortunately, 📝 it stopped working as soon as I migrated window creation to SDL 2. But if we put two and two together, we immediately get a theory as to why: Because it works on Windows 98, D3DWindower might only hook the ANSI versions of all the Windows API functions that games can use to enter exclusive fullscreen mode, but SDL uses the Unicode variants. You wouldn't think that a mode-switching API uses potentially localizable strings as part of its parameters, but hey, maybe monitors are treated like files and addressed with names?
And indeed, SDL uses ChangeDisplaySettingsExW(), but D3DWindower only hooks ChangeDisplaySettingsExA(). Switching from W to A was all it took to get it working again… on modern Windows at least. It wasn't enough for Windows 98, but what could we possibly be missing?
Turns out that KernelEx is the one and only issue there. D3DWindower (or rather, its internally used madCodeHook library) uses the Win32 GetVersion() function, but interchangeably calls it both via its import and via a proc address pointer retrieved directly from kernel32.dll. KernelEx only wraps one of the two, which causes the hooking algorithm to fail as it gets confused by contradictory Windows version numbers.
The problem with version numbers is that the number-returning function itself has no way of knowing the caller's intent. I can't think of a situation where it wouldn't make more sense to query the presence of a certain OS feature rather than the version number of the entire thing. And so, KernelEx's wrapper makes the understandable choice of returning exactly the version you've configured for the executable:
This will cause the hooked GetVersion() to return 0x17710006 rather than 0xC0000A04.
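To decode these values: GetVersion() packs everything into a single DWORD, so a quick illustrative snippet (mine, not code from either project) would read it like this:

#include <windows.h>

// bits 0–7 = major, bits 8–15 = minor, bit 31 = set on 9x kernels,
// bits 16–30 = build number (only meaningful on NT).
const DWORD v = GetVersion();
const DWORD major = ((v >> 0) & 0xFF);    // 0xC0000A04 → 4,  0x17710006 → 6
const DWORD minor = ((v >> 8) & 0xFF);    // 0xC0000A04 → 10, 0x17710006 → 0
const DWORD build = ((v >> 16) & 0x7FFF); // 0x17710006 → 6001 = Vista/2008 SP1
const bool is_9x = ((v & 0x80000000) != 0); // set in 0xC0000A04, i.e. Windows 4.10 = 98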
madCodeHook, however, uses the version number to pick between the completely different hooking strategies for 9x and NT kernels, and therefore always needs the actual version of the underlying system. Presumably, these different strategies are needed because 9x kernels didn't have the copy-on-write mechanism that allows a process to freely rewrite system DLL code without affecting other processes. Instead, 9x kernels only have a single global shared instance of all system DLLs, which gets mapped to the same address for every process. This is also why setting code breakpoints within system DLLs on 9x can break the entire system: Since 9x doesn't support hardware breakpoints, debuggers only have the option of writing the INT 3 instruction byte (0xCC) to the breakpoint address and then reverting it before resuming execution. But this instruction can only break back into the debugger for the one process that the debugger is attached to. In the meantime, every other process is left with a corrupted instruction stream, and OllyDbg's cryptic Unable to flush cache only describes a single symptom of the ensuing general instability.
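As a minimal sketch of that breakpoint mechanism (heavily simplified – a real debugger also has to rewind EIP and single-step to re-arm the breakpoint; process and address are assumed to come from the debugging session):

// Software breakpoint: replace the target byte with INT 3 (0xCC)…
BYTE original;
ReadProcessMemory(process, address, &original, 1, NULL);
const BYTE int3 = 0xCC;
WriteProcessMemory(process, address, &int3, 1, NULL);
FlushInstructionCache(process, address, 1);
// …and restore it once the EXCEPTION_BREAKPOINT exception arrives.
// On 9x, an `address` inside a system DLL lies in globally shared memory,
// so every other process executes the 0xCC byte in the meantime.
WriteProcessMemory(process, address, &original, 1, NULL);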
Thankfully, KernelEx lets us disable its GetVersion() wrapper for any specific compatibility mode by editing the respective section of %WINDIR%\KernelEx\Core.ini – in our case, the one for Windows 2008 SP1.
After a reboot, D3DWindower then succeeds in hooking ChangeDisplaySettingsExA() through KernelEx even if the KernelEx-injected Microsoft Layer for Unicode previously redirected ChangeDisplaySettingsExW() to that function.
Since the ideal definition of a "Windows 98 backport" does not include KernelEx though, it made sense to already go ANSI right now and restore general compatibility with D3DWindower on NT kernels as well. And with one more crucial manual setting that prevents SDL from crashing itself in confusion…
Yes, I would have preferred a nice GIAN07 (Windows 98).exe file name, but D3DWindower unfortunately glitches if binary names contain spaces. If the directory of a hooked executable contains another executable whose name matches the hooked one up to the first space, D3DWindower will run that other executable instead of the intended one. We sure don't want to run the regular GIAN07.exe by accident.
… we've got the game running in a provisional windowed mode on Windows 9x!
I can't stress enough that debugging was the main intention behind this fix. Without scaling options, D3DWindower is not a replacement for a proper windowed mode, and it adds its own bugs on top. 📝 The P0251 blog post has more detail about how precise an 8-bit DirectDraw emulation has to be to avoid the infamous golf course in Stage 3.
But turning off Unicode in the Windows 98 build of SDL 2 also had one unfortunate drawback: The window title is now ??? rather than 秋霜玉, even when running on NT kernels or a Japanese version of Windows 98. That brings us right to the other big complication of this backport:
😩 Handling Shift-JIS and Unicode 😩
With SDL now using the *A() functions, you might have expected the same mojibake that you'd see in the windowed title bar of pbg's original build. But since we pass UTF-8 to SDL rather than Shift-JIS, the result would always be slightly different. The number of question marks does match the number of codepoints in the string though, which means that SDL does convert from UTF-8 into something before passing the string to Windows. Unfortunately, this target encoding is always pure 7-bit ASCII because SDL's hand-rolled iconv() function only supports that, Latin-1, and the Unicode Transformation Formats.
This looks like a very bad choice on the surface. Sure, this implementation is meant to be a minimal fallback for systems that don't have iconv(3), but if it uses that library when available, why doesn't it also use WideCharToMultiByte() on Windows? One reason might be right there in the name of that Win32 function: Windows treats UTF-16 as the base encoding from where all other encodings are converted, but SDL (and everyone else) prefers UTF-8 in that role. This allows SDL to directly convert to UTF-32 or Latin-1 without stopping at UTF-16 first.
But even if SDL offered Win32-powered conversion from UTF-8 into any Win32 codepage, there's still the issue that WideCharToMultiByte(932) will most likely just not work on non-Japanese editions of Windows 9x. Since there is no algorithmic mapping between JIS and Unicode, the conversion between these two encodings requires a lookup table. Windows stores this table in C_932.NLS, and there is no guarantee that this file will be installed on anything before Vista.
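In code, this hypothetical Win32-powered path would look like the sketch below, with the second call failing on any system that lacks C_932.NLS (utf8 is assumed to be a NUL-terminated string from the caller; error handling omitted):

wchar_t utf16[256];
char sjis[256];
MultiByteToWideChar(CP_UTF8, 0, utf8, -1, utf16, 256);
const int written = WideCharToMultiByte(932, 0, utf16, -1, sjis, 256, NULL, NULL);
// `written` is 0 if codepage 932 is not installed.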
On the other hand, the second screenshot above clearly shows that…
Text rendering
…just works for Japanese text? On my Western Windows 98?! Things quickly take a turn once we enter the Music Room though, where we get working Japanese text next to mojibake:
It currently also looks like this on Japanese Windows 98 due to, well, me not having tested this case before.
This disparity is quickly explained: Any text that is either hardcoded or pulled from the Vorbis comment tag of a BGM pack file is in UTF-8 and can be trivially converted to UTF-16. Every piece of mojibake, on the other hand, comes from the original .DAT files, is therefore encoded in Shift-JIS, and fails the conversion to UTF-16 for the aforementioned reasons.
But seriously, how can UTF-16 text rendering suddenly just work on Windows 9x? Well, contrary to popular (or certainly my) belief, Windows 9x did have functional Unicode variants for a small group of 15 API functions, which just happens to include GDI's TextOutW(), ExtTextOutW(), and GetTextExtentPoint32W(). Yup – these empty text areas we were getting for Japanese games on Windows 9x back in the day? All of them were at least partly preventable. The missing C_932.NLS on non-Japanese systems would have still meant empty text boxes if developers preferred storing text in Shift-JIS rather than UTF-8, which they might have wanted to do if their favorite editors were similarly limited. But that's about the only valid argument for using Shift-JIS on Windows 9x:
Compatibility with standard char* string handling doesn't count (this is an argument against UTF-16, not against UTF-8)
Rendered width = (strlen() × (full-width block size / 2)) doesn't count (breaks with the proportional fonts your localizers want to use; just use GetTextExtentPoint32W(), it works on 9x too)
B-but Han unification!1!! doesn't count (you control the font, and Unicode still supports lossless conversion from and to Shift-JIS)
So even if devs absolutely wanted to use Shift-JIS as the on-disk format, converting to UTF-16 at runtime and calling the Unicode versions of the GDI text rendering functions would have been better than using their *A() versions. Then, Windows 9x users could have fixed empty text boxes by properly installing codepage 932, XP users would have only needed to check that one Install files for East Asian languages box, and it all would have just worked without requiring the unbearable cringe of locale emulation. The *A() versions had no reason to exist other than programmer convenience.
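That better approach is as simple as this sketch (hdc, x, y, and utf8_text are assumed to come from the rendering context; error handling omitted):

wchar_t utf16[256];
const int len = MultiByteToWideChar(CP_UTF8, 0, utf8_text, -1, utf16, 256);
if (len > 1) {
    TextOutW(hdc, x, y, utf16, (len - 1)); // `len` includes the terminating NUL
}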
Alright, so there's some theoretical way to get all rendered text to show up correctly on Windows 9x, regardless of locale. But what about…
The original Japanese filenames
Without a proper CreateFileW(), this is where we hit all the problems we were expecting. How should these behave on systems with non-Japanese codepages? Are we OK with turning old replay file names like 秋霜りぷEx.DAT into ????Ex.DAT, and consequently ____Ex.DAT due to question marks not being allowed in file names? This looks like the best choice we have: It's also what unicows.dll does right now, and it has the advantage of being easy to manually type.
It also is better than returning to the game's original behavior of blindly reinterpreting the bytes in the system codepage, which would turn the string into H‘š‚è‚ÕEx.DAT on codepage 1252. If you run pbg's original build on Western Windows 98, you'll see that it actually won't save any file whose name starts with the 秋 kanji. Apparently, 9x kernels are much stricter than NT kernels when it comes to filenames in the system codepage and will outright refuse to create a file if it contains unassigned codepoints? The Shift-JIS lead byte of 秋 is 0x8F, which is unused in CP1252.
Then again, if we had better replay-related error reporting, the specific file names probably wouldn't matter because we'd just display them on screen. Given that 📝 our forward-compatible configuration format only uses ASCII characters on purpose and the new replay format will do the same, this would only ever matter for the initial upgrade. There will be the possibility of converting future replays back into the original format for validation purposes, but that feature would ideally use exactly the names that the original game uses on the current system: Japanese names in Japanese locale, nothing on Western Windows 98, and mojibake everywhere else. Maybe we could even add a menu option to let players pick among all possible broken file names?
But shouldn't we at least retain support for loading from original names when running the Windows 98 build on an NT kernel? Sure, the goal is a backport to Windows 98, but functional Unicode makes Windows XP a much more reasonable retro target. It sure makes a lot more sense than supporting XP with the regular, non-suffixed modern build: XP is almost identical to 98 in terms of SDL backends, being just as limited to Direct3D 9, WinMM, and DirectSound in the graphics and sound department. The only additional backend for XP would be Raw Input, and we could just enable that one conditionally.
So how about just…
Leaving it all to unicows.dll?
Yeah, why don't we just directly link both SDL and the game against the Microsoft Layer for Unicode, without relying on KernelEx injecting it for us? Then, SDL could just continue using Unicode APIs without us having to rewrite anything. And since MSLU disables itself on NT kernels, you'd still get the 秋霜玉 window title and support for the original Japanese filenames regardless of the non-Unicode codepage. Heck, unicows.lib is still shipped with current Visual Studio. I'd only have to add a single linker flag and be done with it!
But when I tried this, the game broke in every environment:
DxWnd failed to hook about half of the functions. It applied the configured window size, but failed to bypass the display mode change. There might be some permutation of tweaking options that fixes this issue, but good luck finding it. I tried all the hook-related options that sounded like they could help, but no dice.
On Windows 98, the game crashed even when run without an external windowing tool due to an unfortunate combination of SDL and MSLU features:
SDL's renderers can be initialized into an existing Win32 window.
This means that SDL will handle all events that the system sends to this window.
However, the window might have set up a custom window proc to handle certain events outside of SDL. It might be a good idea to still run this code and not have SDL take exclusive control.
Due to the whole ANSI-vs.-Unicode mess, doing this requires a dedicated subclassing mechanism. By using CallWindowProc() on magic cookie values, Windows allows ANSI window procs to subclass Unicode window procs and vice versa.
However, MSLU uses the same subclassing mechanism to provide Unicode↔codepage wrappers for all events that carry string data. Since it hooks window creation, it goes first, subclassing the window proc that SDL specified in the window class structure.
Then, it's SDL's turn to check whether it needs to subclass the window. It would only need to do so if the window proc differs from its own, which should only ever be the case if the window wasn't created through SDL, but, um…
// Remember the previous window proc in case we have to subclass it
WNDPROC superclass_wndproc = (WNDPROC)GetWindowLong(hwnd, GWL_WNDPROC);
if (superclass_wndproc == SDL_WIN_WindowProc) {
    // Window uses our window class and wasn't subclassed. Nothing to do.
    superclass_wndproc = NULL;
} else {
    // Window already has a foreign window proc. Move us back to the top
    // of the hierarchy to ensure that we handle the messages we care
    // about. Surely no one subclassed us between CreateWindow() and now?
    SetWindowLong(hwnd, GWL_WNDPROC, (LONG)SDL_WIN_WindowProc);
}
Adapted from SDL_windowswindow.c.
Now, both MSLU and SDL think that their own window proc is a subclass of the other one. Thus, both of them call the other one using CallWindowProc(), expecting it to terminate the chain…
…but since neither of them does, the resulting infinite recursion ends up crashing the game with a stack overflow. 💥
Maybe this could be considered a fixable bug that traces back to SDL 1, but the whole situation is just very silly. If this had worked, we would be running the Windows 9x build through up to three separate layers of dynamically patched code – KernelEx, MSLU, and D3DWindower – when we'd like to have at most one and ideally zero. Besides, we know which code we want to run, we know that we don't need to reach for subclassing to make it all work, and we know that SDL is the ideal place for it. Now we just have to write it all.
Next up: The long-awaited return to TH03! With dedicated monthly funding now going explicitly toward that game, we'll definitely stay there for a while. Ember2528 is generously funding short-term and long-term netplay options, so let's finish OP.EXE in preparation for nice and user-friendly menus. This is the last main menu to be decompiled across all of PC-98 Touhou, and it's mostly text-based, so how hard can it be?
P0307
Seihou / Shuusou Gyoku (SDL 3 platform layer)
P0308
Seihou / Shuusou Gyoku (Render API unbricking / ReC98 build label on the title screen / Revamped pixel format handling)
P0309
Seihou / Shuusou Gyoku (WebP screenshot compression / Compression benchmark in the main menu)
💰 Funded by:
Ember2528
Well, that fell apart surprisingly quickly. The release of Shuusou Gyoku's Linux port just happened to be surrounded by the unluckiest sequence of events in Arch Linux land.
After I fixed a silly mistake on my part, Shuusou Gyoku was still playable on sdl2-compat as it was only affected by rather minor bugs, but these bugs still undermined the effort I put into the port. That left us with three options:
Let the more involved SDL community fix sdl2-compat on their own. After all, why should we bother if rogue distros randomly mess with our dependencies?
Become part of that community and help fix the issues in either sdl2-compat or SDL 3.
Properly update Shuusou Gyoku to SDL 3 right now, while keeping SDL 2 support for the Flatpak, more conservative Linux distributions, and the upcoming Windows 98 backport.
I really would have preferred to delay this migration for a few years until the dust has settled. For this project, I already picked C++ as the dependency I want to be on the bleeding edge of, and SDL 2 was supposed to balance this out by being the conservative and stable choice. Oh well, if we've got to update at some point, we might as well do it now. The ReC98 development schedule at least gave me another month of waiting for the community to sort out SDL 3's growing pains…
So, why does something like sdl2-compat even exist if it only causes problems? And why are distros rolling it out so soon after SDL 3 if SDL 2 has been working fine all the time? In a nutshell, sdl2-compat is the second pillar in SDL's forward compatibility strategy. While the 📝 dynamic API mechanism ensures compatibility with future minor versions by integrating dynamic linking so deeply that static linking is made entirely useless, sdlN-compat ensures compatibility with one future major version by implementing version N's API in terms of SDL version N+1. This allows the SDL team to very quickly stop updating version N while still allowing programs linked against that version to run well on modern systems by using all the actively maintained backends of version N+1. This worked out well with sdl12-compat, which nowadays seems to do a great job at preserving abandoned SDL 1 games – especially if we consider that you'd be running sdl12-compat on top of sdl2-compat on top of SDL 3 from now on.
If you absolutely must have the real SDL2 ("SDL 2 Classic"), please use the SDL2 branch at https://github.com/libsdl-org/SDL, which occasionally gets bug fixes (and eventually, no new formal releases). But we strongly encourage you not to do that.
Followed by zero arguments to back up this audacious suggestion. So they not only imply that sdl2-compat is already perfectly compatible and works without bugs for every SDL 2 program ever, but also that the underlying SDL 3 implementation doesn't introduce any bugs on top – and it only takes a single look into either project's issue tracker to disprove that notion. There is no technical reason why a distro couldn't ship SDL 3 and 2 in parallel. The continued existence of the SDL 2 AUR package is proof of that, and as of mid-March, that package was still receiving upset comments that justified its existence.
There was absolutely no reason to push sdl2-compat on everyone by default other than forcefully turning users into beta testers. SDL 2 was still stable, maintained, and working well. People who needed SDL 3 before its release for whatever feature already used SDL 3. People who want to use the SDL 3 backends to solve some obscure backend-related issue in an SDL 2 program can use sdl2-compat without needing it to be the only option available. And with a package size of 1.2 MiB, you can't convince me that SDL 2 is somehow a burden on the packaging front either – especially if your distro has separate packages for every commonly used fiddly Python and Haskell library.
I can't help but imagine the reaction if Microsoft pushed an enforced update of this magnitude. They're already getting regularly lambasted by the press for much smaller and ultimately inconsequential offenses…
For all the 📝 criticism I had about Flatpak and Flathub last time, they made the right choice of not treating their base package as a rolling and bleeding-edge distribution. The Freedesktop platform will only ship SDL 3 in its next version releasing in August, which will probably leave enough time for the SDL developers to address all but the rarest remaining issues in sdl2-compat. Although I'm not sure how I should interpret this commit being made at that specific time: This is either very considerate (because they've chosen to take up the job of early-adopting SDL 3 as part of developing the new SDK version, and thus will be helping out with reporting bugs), or very inconsiderate because they bought the whole sdl2-compat story just like Arch did. If Freedesktop SDK updates shipped in February rather than August and the release tag was on this branch, they would have screwed over their users just as much. Also, there's still not much point in force-updating everyone onto a compatibility layer in freaking 2025…
Then again, I can empathize with the SDL developers to a degree. Lots of developers have been asking the "when is SDL 3 ready and stable enough for regular use?" question while picturing SDL as this highly important and central library that surely has a big team of testers who could ensure its stability at one point. But if there just isn't enough Valve money to form such a team, what else should you do as a developer other than turn your personal hype into a "it's ready now, go use it and please leave feedback" reply? Maybe, turning your users into beta testers is the only realistic way to ever approach stability in this economy. And sure, they call it 3.2.0 for… reasons, but they're not fooling anyone.
The big irony, however, is this: At one point in the future, sdl2-compat will be that perfect solution for running abandoned SDL 2 (and SDL 1) programs on top of SDL 3. But it's the exact opposite of what you'd want during active development: You want to update to SDL 3 and use the new APIs and function names to be ready for the future, but also retain the option to run on the stable SDL 2 foundation for at least a little longer until every distribution has caught up. Or, in other words, you want to run SDL 3 on top of SDL 2.
You could totally have a library that implements this alternate kind of compatibility layer. It would still be prone to bugs just like sdl2-compat, but unlike that one, the chance for new bugs is halved since you'd be running on top of the proven and stable SDL 2. But of course, such a library would restrict your codebase to SDL 2's feature set, which is probably why something like this doesn't exist. So instead, our SDL platform layer now contains 64 conditional branches and a bunch of function renaming macros and generic helper code to support compiling against both SDL 3 and SDL 2. At least I wrote it all in a way that allows us to quickly rip out SDL 2 support once we no longer need it…
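To give an idea of the shape of this compatibility code, here's a minimal sketch of the renaming part. The SHUUSOU_SDL3 configuration macro is hypothetical, and only functions whose signatures stayed identical can be mapped this cheaply:

#ifdef SHUUSOU_SDL3
    #include <SDL3/SDL.h>
#else
    #include <SDL.h>

    // SDL 3 names → SDL 2 names, for functions with identical signatures:
    #define SDL_DestroySurface SDL_FreeSurface
    #define SDL_DestroyCursor SDL_FreeCursor
    #define SDL_GetTicks SDL_GetTicks64 // both return milliseconds as a Uint64

    // Anything whose parameters changed (rects, bool returns, …) needs a
    // real wrapper function or one of those 64 conditional branches instead.
#endif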
Oh well, enough ranting. Because once it works, there are plenty of things to like about SDL 3. Limited to, of course, everything notable that applies to Shuusou Gyoku:
Requesting fullscreen from SDL 3's basic window creation API will now always give you a borderless window as they went with the times and removed the option to directly create a window in exclusive fullscreen mode. In isolation, this might look bad enough to not even consider updating to SDL 3. However, this doesn't mean that boomer fullscreen is gone – it only has been relegated to a separate and, in fact, much more comprehensive mode-changing API that also covers refresh rates. Using it does require significantly more and different code compared to SDL 2, but being explicit about the refresh rate is crucial for games whose speed depends on the frame rate, like this one. If your display supports a 62.5 Hz mode by any chance, we select it now.
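The new code path roughly boils down to this sketch, assuming our fixed 640×480 resolution and with error handling omitted:

// SDL 3: exclusive fullscreen now requires an explicitly selected mode.
SDL_DisplayID display = SDL_GetDisplayForWindow(window);
SDL_DisplayMode mode;
if (SDL_GetClosestFullscreenDisplayMode(display, 640, 480, 62.5f, false, &mode)) {
    SDL_SetWindowFullscreenMode(window, &mode); // NULL here = borderless instead
    SDL_SetWindowFullscreen(window, true);
}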
SDL 3's software blitters come with optimized SSE2, SSE4.1, and AVX implementations, replacing SDL 2's aging and nowadays actually suboptimal MMX code paths. On the surface, this only seems to speed up the software renderer as far as we're concerned, but it will also be very welcome once we have to do pixel format conversions. (Which, spoiler, I managed to just barely avoid on the SDL level for this new code.)
The new SDL_SetRenderLogicalPresentation() function now implements all of the three borderless fullscreen layouts as part of SDL. Together with the now cleaned-up handling of render target state, this removes almost all of the complexity and state juggling that SDL 2 previously required for the combination of fullscreen and clipping. Too bad that I still have to retain all of that SDL 2 code for the time being…
The filesystem API that originated in SDL 2 is finally joined by a matching set of file access functions that Do The Right Thing, explicitly take UTF-8 filenames, and use the Unicode APIs on Windows. If this had existed 📝 at the end of 2022, I wouldn't have felt the need to write my own abstractions. Sure, the lack of UTF-16 overloads means that this API is not strictly, perfectly optimal on Windows, but in turn, we get this API for free with the rest of SDL. It'll even be very welcome for the Windows 9x port, which could simply translate UTF-8 to the system codepage without requiring any other kind of Unicode layer. Besides, I've found myself using these strictly optimal UTF-16 strings less and less: These have always been an implementation detail of the Windows version, and any path we save in a .CFG file should better be in UTF-8 to allow configuration sharing between Linux and Windows.
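A small taste of that API – the function names are real SDL 3, the path is just an example:

// All paths are UTF-8; SDL translates to the Unicode APIs on Windows.
SDL_PathInfo info;
if (SDL_GetPathInfo("replay/秋霜りぷEx.DAT", &info)) {
    SDL_Log("%" SDL_PRIu64 " bytes", info.size);
}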
SDL_RenderReadPixels(), the "screenshot" function that transfers pixel data from the GPU to system memory, now allocates a new pixel surface instead of writing pixel data in a specific format to pre-allocated memory. This is another change that looks bad on the surface because we sure love them freedoms to self-allocate our memory in C/C++ land. However:
This single allocation is far from being the bottleneck in the screenshotting process. It doesn't even clearly stick out in execution timings because it gets completely masked by the variance of the actual GPU→CPU pixel transfer.
In SDL 2's version of the function, you decided the pixel format that SDL would write into your buffer, which might have incurred a conversion if your chosen format didn't match the pixels returned by the GPU. In Shuusou Gyoku, this could have easily happened with geometry scaling. By newly allocating the returned surface, SDL 3 can keep the original pixel format and thus needs to involve at most a single memcpy() – which is always measurably faster than converting pixels, even if that conversion is SIMD-optimized.
Not even having the option to overthink memory pre-allocation sure simplifies your code a lot.
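In SDL 3 terms, the entire GPU→CPU part of the screenshot flow thus reduces to this sketch:

// The returned surface keeps whatever pixel format the GPU delivered.
SDL_Surface *shot = SDL_RenderReadPixels(renderer, NULL); // NULL = full target
if (shot) {
    // …hand shot->pixels (described by shot->format) to the encoder, then:
    SDL_DestroySurface(shot);
}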
Graphics APIs are now addressed by their identifier string rather than their index within the platform-specific list of APIs. SDL 2 has always provided ways to map between both indices and strings, but the fact that every function now takes a string is a nice way of nudging developers to use strings in their configuration as well. They would allow a user's API selection to be retained independently of the SDL developers later changing the order of that list – once I adapt our config format from numbers to strings in a future release, that is.
SDL apps can now define metadata strings. Most of these currently don't do anything, but the identifier now gets used as the Wayland and X11 window class name and thus represents a much cleaner way of having class-derived icons than 📝 the previous undocumented SDL_VIDEO_X11_WMCLASS environment variable. But if you read that post again, my main issue wasn't SDL's implementation, but the fact that support for class-derived icons is so rare among window managers to begin with. Not only does this change not help the situation, but it arguably makes it even worse due to a slightly different mapping decision: The app identifier is assigned to the WM_CLASS class name, but the additional instance name receives the binary's file name, which unfortunately breaks class-derived icons in IceWM where the instance name takes precedence.
Draw calls are now batched on all renderers, and batching can no longer be deactivated. 📝 During my previous experiments, SDL's Direct3D 11 backend turned out to be by far the fastest batching renderer on Windows, and SDL 3 coincidentally also made it the new default. So it makes sense to follow suit and remove our previous OpenGL override, restoring 📝 pixel-perfect line rendering in framebuffer-scaled mode by default.
The massive downside, however, is that the combination of framebuffer rendering and OpenGL ES 2 is now completely broken on integrated Intel graphics, in the worst way: The game initializes fine and responds to input, but only shows a black screen. If we let players select the render API from a menu, we'd better also have a feature to unbrick the game in a non-graphical way if it only renders a black screen. That's why you now can
press F7 to cycle through the list of APIs at any point, or
use the environment variable SDL_RENDER_DRIVER to override any previous manual API selection, which didn't work before.
Draw call batching even extends to the software renderer now, for some reason. Doesn't software rendering boil down to nothing more than writing pixels into a system-memory buffer on a single thread? There's no penalty for just doing the thing, but there certainly is a small penalty for gathering all the things into a queue. I'd rather not pepper that procedural mess of a graphics backend with even more imperative function calls, but you can make just as much of an argument for the consistency of requiring a flush regardless of whether a renderer represents software or hardware.
The new Vulkan and GPU render backends are perhaps the most exciting change for a certain group of people. The GPU API in particular provides an abstraction for the common modern paradigm of command buffers and shaders, which is shared among Vulkan, Direct3D 12, and Metal. Given the amount of attention it received, this feature is undoubtedly great for everyone developing modern games. However, not only couldn't we care less for a game of this vintage, but it's also just more of the same dilemma: While more backends can offer a higher chance of the game working well on some potato out there, they primarily mean more code surface, which means more bugs.
Thankfully, the list of entirely bad changes is quite short:
All API functions now return true/nonzero on success and false/zero on failure, rather than 0 on success and <0 on failure as in SDL 2. Sure, true = success makes intuitive sense when you just start out programming, but then you realize that the overwhelming majority of functions can fail in multiple ways and success is just the absence of failure. SDL 2 got the right idea about this, but SDL 3 chose to regress to said beginner levels because Sam Lantinga got increasingly convinced of this idea that he, and everyone else, initially considered horrible.
#include directives must now be prefixed with an explicit SDL3/ path, unlike SDL 2 which didn't use a prefix. This was apparently necessary to fulfill some macOS requirement, but they've also removed the path from their pkg-config --cflags, turning the prefixed syntax into the only sanctioned cross-platform way of including SDL 3's headers. Being able to compile SDL3-using code without any additional CFLAGS might look pretty, but no sane build system is going to make an exception and not call pkg-config --cflags as it does for any other external library. And now I have to duplicate the #include section in every translation unit for the SDL 2 code path…
All SDL threads must now be manually awaited before calling SDL_Quit(). If they aren't, SDL reports a "leaked thread" even if the underlying OS thread might have cleanly finished. I get it, structured concurrency is probably a good idea, but it only works naturally if the rest of your program is structured accordingly, which doesn't apply to this 25-year-old codebase. Enforcing this leak check just forces me to write cleanup code for the sole purpose of satisfying SDL's bookkeeping to avoid that error.
Still, the constant stumbling over bugs and deliberate instabilities made this take way longer than it had any right to. For three of these bugs, I was the first one to report them, and I could have even reported a fourth one if I actually cared about Vulkan and didn't happen to find a workaround right before I pushed out the release.
With the additional API unbricking feature, we've ended up well into a second push. Replays were too big of a feature for now, but screenshot compression sounded like a nice task for the rest of that push. Really, how hard can it be? Add reference C library of our encoder of choice, call API with pixel buffer we get from SDL, write compressed pixel buffer to file. Easy, right? Well…
For starters, which format do we choose? Ember2528 had a clear preference, but it makes sense to compare it against other contenders first. There will be a complete benchmark further below, but let's get the seemingly most obvious candidate out of the way first:
QOI
Because who doesn't want a fast encoder for a simple format with steadily growing adoption? Sure, part of the adoption might be hype-driven, but as far as hype goes, there are definitely worse targets than a codec that fits in less than 300 lines of C. The low-color images we want to compress are rather simple from a modern point of view as well, so you'd expect QOI to be a perfect match…
…until you actually try encoding a few representative images and are greeted with file sizes that are way further removed from PNG than you'd expect after seeing the official benchmarks. Since the specification is short enough, we can easily explain these results:
All of Shuusou Gyoku's sprites are intended to be rendered within a palettized 256-color framebuffer. 3D-rendered gradients and transparency will drive up the number of unique colors in screenshots into the low 4-digit range at times, but it still makes sense to assume uncompressed 8-bit BMPs as the baseline. At our native resolution of 640×480, these are 308,278 bytes large: a 54-byte header, 1,024 bytes for the 256-entry palette, and 640×480 = 307,200 pixel bytes. This is what we expect our chosen codec to beat, by hopefully a quite significant margin.
The 32-bit QOI_OP_RGB chunk would already blow up each affected pixel to 4× the size it would have had in a palettized image. Let's hope that the QOI encoder largely uses this chunk to define palette colors, and that we don't get to see it that often otherwise.
The 16-bit QOI_OP_LUMA chunk can maybe help compress unknown pixels that haven't yet been put into the running palette, but would still not contribute any compression compared to our baseline size. Fortunately, we shouldn't see too many of those as the encoder is specified to prefer 8-bit chunks where possible…
…except that QOI_OP_INDEX spends 8 bits on encoding a 6-bit palette index. With only 64 colors in the palette rather than the 256 we want, we're bound to see a lot more of those bulky 32-bit QOI_OP_RGB chunks after all. Not to mention the fact that colors are mapped onto these 64 palette slots using a simple multiplicative hash (shown after this list) that will cause collisions at regular color intervals.
Any compression gains over uncompressed 8-bit BMP would therefore come from QOI_OP_RUN. If run-length encoding is the best an image codec can do, that's rather basic instead of OK, I'd say.
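For reference, this is the palette hash in question, straight from the reference implementation's qoi.h:

#define QOI_COLOR_HASH(C) (C.rgba.r*3 + C.rgba.g*5 + C.rgba.b*7 + C.rgba.a*11)
// …used as QOI_COLOR_HASH(px) % 64 to pick one of the 64 palette slots.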
Actually… wait a moment, doesn't BMP also have a run-length-encoded mode that was mostly forgotten after the 90s? And indeed, the compression rates between vintage BMP/RLE and QOI are very similar, with any differences stemming from the way these two formats encode their run lengths. QOI typically does slightly better, but BMP/RLE still beats it in the 西方Project logo and the main menu.
So while reduced complexity and blazingly fast encoding speed are good arguments, they don't cut it if decent compression of our source images relies on all the complexity found in PNG. But shouldn't this deficiency have stuck out in the official benchmark in some way? After all, 43% of the images in QOI's test suite have ≤256 colors, with most of them coming from Philip K's Ancient Collection in the textures_pk directory, where they make up 80%. For this directory, the official numbers claim average compressed sizes of 80 KiB for PNG and 75 KiB for QOI, and running the benchmark myself confirms these numbers…
…but wait, the input PNG files in the test suite package are actually half that size?! Yup – this benchmark merely tests the fixed, untunable QOI format against two specific PNG encoders, libpng and stb_image, at their default compression level and filter settings. It does not claim anything about QOI's relation to the known limits of PNG as a format, despite what the hype drivers would lead you to conclude all too easily. In any case, it paints a much different picture of QOI's 256-color capabilities:
Encoder	Average file size (bytes)
stb_image	110,337
libpng	82,136
QOI	77,404
PNG source files	43,437
oxipng -o max -Z	41,032
We will later see why comparing the slowest PNG encoders against the constantly fast QOI is, in fact, not unfair.
The final nail in QOI's coffin is this concession at the end of its release announcement:
SIMD acceleration for QOI would also be cool but (from my very limited knowledge about some SIMD instructions on ARM), the format doesn't seem to be well suited for it. Maybe someone with a bit more experience can shed some light?
I'd rather take a new image format that's designed around modern SIMD instructions from the start. Then, it can invest these performance gains into more complex filters to end up with better compression at a roughly similar encoding performance. Heck, it can even be slightly slower for all I care. SIMD-first design worked great for non-cryptographic hashes, and we'll see in a minute that it works just as well for image formats.
But Ember2528 had a different codec in mind anyway. Let's jump right to the polar opposite of the complexity spectrum:
Lossless JPEG XL
Because why wouldn't you use the currently best and most popular image format according to actual professionals who know a couple of things about image compression? It's winning benchmarks left and right, and blog posts like these make it appear as if even version 0.10 of its reference encoder already beats out every other widely used codec. And after it unfairly got removed from Chromium in 2022, you can't help but root for it. Time to do my small part in bringing its adoption to a level that Google can no longer deny!
Too bad that the enthusiasm immediately drops after cloning the libjxl repo and running a CMake test build. What are all these library dependencies, and why can't I just reduce the build to the lossless encoder? The resulting binaries are way larger than what I'd consider appropriate in relation to game code. 😩
Looking through the repo more thoroughly, however, reveals a very welcome little surprise: If a few basic requirements are met, the fastest lossless speed tier actually uses an entirely separate encoder that's implemented in a single source file and can be used independently from the rest of libjxl. Nice to see that someone thought about simple integration after all! That's exactly what I've hoped to find. Sadly, Linux distributions don't have a separate standalone package for this encoder, but it wouldn't be the only library we'd statically link on Linux.
Having a single function as an easy entry point is always a good sign, too. Those parameters, though…
Only accepting pixels in RGBA memory order sure is awkward in a 3D-accelerated world where everything else prefers BGRX, including BMP files. Sure, it doesn't matter for us because we live in SDL land where we have SIMD-optimized pixel format converters, but I don't think you should assume that everyone has these kinds of batteries included. "Just roll your own" isn't a good argument either because you'd want pixel format conversions to be SIMD-optimized. We'd all love it if compilers perfectly auto-vectorized such code, but we're not there yet; Visual Studio in particular is pretty bad at optimizing naive byte-flipping code. But writing SIMD code always comes with the same CPU feature detection and alignment boilerplate, and JPEG XL already has all of that in its codebase. Thus, it makes a lot more sense for it to include pixel format converters than forcing that onto every caller. It's API designs like this one that almost necessitate turning SDL into a hard dependency of the cross-platform frontend in the long run.
The otherwise undocumented big_endian parameter is the first indication that a lot of development effort went into aspects we don't care about. You'd think that passing true would cause the rgba buffer to be interpreted as ABGR, but it's only used to select the per-channel endianness of images with 16 bits per color channel. For 8-bit-per-channel images like the ones we're exclusively dealing with, it silently does nothing.
As the FJXL abbreviation implies, this encoder actually started as an independent project that, coincidentally, was a direct response to the hype surrounding QOI. By using AVX2 instructions within the confines of an existing format, it managed to beat QOI in both encoded file sizes and compression speed for every type of image its developer tested. But it's this competitive focus that brings us to its most questionable implementation decision.
The good news is that FJXL acknowledges that low-color images exist, are a prime use case for lossless compression, and are best dealt with using JPEG XL's palette features. However, detecting and optimizing that palette takes up a lot of time relative to QOI. If the input image uses more colors than a palette would make sense for, you'd want to fail as early as possible. Slide 11 explains the solution FJXL came up with:
Hash table with 65k possible entries
Any collision -> no palette
[…]
On non-palette-friendly images, this fails quickly (birthday paradox says after ~256 distinct pixels).
On palette images, encoding 1 channel rather than 4 more than compensates the cost of detection.
With 10 additional bits and a widely renowned multiplier, the hash function looks leaps and bounds ahead of the one in QOI:
// has to map 0 to 0
uint16_t pixel_hash(uint32_t p) {
    return ((p * 2654435761) >> 16);
}
But since we're still hashing 32-bit RGBA pixels to 16 bits, we're bound to run into a collision sooner or later. You can certainly think of this hash function as mapping color values to uniformly distributed random numbers and then reason about its efficacy using probability theory, as we saw in the slide above. However, the conclusion drawn in that slide is rather abbreviated and ultimately misleading: The birthday paradox does not return a binary success/failure result, but a probability. In this case of 256 distinct colors:
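p(collision) = 1 − (1 − 1/65536) × (1 − 2/65536) × … × (1 − 255/65536) ≈ 39.3%

(That's the standard birthday-paradox product; the same calculation for the main menu's 191 distinct colors yields the ≈24% – roughly 1/4 – quoted in the image captions below.)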
That's a smaller probability, but a 1/4 failure rate would still be way too high for our use case. And sure enough, it actually happens in the main menu, where a single #583732FF pixel (or 0xFF323758 in its little-endian representation) collides with #FFFFFFFF:
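(You can verify the collision by hand: 0xFFFFFFFF × 2654435761 ≡ 0x61C8864F and 0xFF323758 × 2654435761 ≡ 0x61C8DBD8 mod 2³², and both values yield 0x61C8 after the 16-bit shift.)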
The resulting 143 KiB file immediately tells us how not palettizing such images completely ruins the compression ratio. If this one pixel had any other non-colliding color, FJXL would have compressed it into a still decent 52 KiB. Therefore, the slides would have done better to include a graph of the failure probability, and say something like:
Not perfect, and likely to misdetect even low-color images with <256 distinct colors as not palette-friendly according to the birthday paradox.
For our use case of screenshots without an alpha channel, we could work around this whole issue by having a separate non-alpha code path. Detecting the potential palette of an RGBA image within a worst-case time complexity of 𝑂(𝑛) without using hashes requires a (2³² / 8) = 512 MiB bit array to cover the entire RGBA color space, which is probably too steep of a memory requirement. Removing the alpha channel, however, would shrink this array to a definitely appropriate 2 MiB.
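As a sketch of what such an exact, hash-free detector could look like – the function name and the early-out are mine:

#include <cstddef>
#include <cstdint>
#include <vector>

// Counts distinct 24-bit RGB colors in O(n) via a 2^24-bit (= 2 MiB)
// occupancy bitmap, bailing once a palette stops making sense.
// [pixels] are 32-bit values whose low 24 bits hold the RGB channels.
size_t count_distinct_rgb(const uint32_t *pixels, size_t n, size_t max_colors)
{
    std::vector<uint64_t> seen((1 << 24) / 64); // zero-initialized
    size_t distinct = 0;
    for(size_t i = 0; i < n; i++) {
        const uint32_t rgb = (pixels[i] & 0xFFFFFF);
        uint64_t& word = seen[rgb / 64];
        const uint64_t bit = (uint64_t{ 1 } << (rgb % 64));
        if(!(word & bit)) {
            word |= bit;
            distinct++;
            if(distinct > max_colors) {
                break; // not palette-friendly, fail early
            }
        }
    }
    return distinct;
}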
Ultimately though, we decided against doing any of that because FJXL by itself is as untunable from the outside as the codec it was inspired by. Ember2528 preferred the opposite: an encoder with multiple effort levels that offer different trade-offs between encoding speed and file size, which would allow faster CPUs to produce the smallest files at still reasonable speeds. So let's look past the bloat, link in the complete libjxl reference encoder, and see how it performs on higher effort levels…
…um, what is this API? Adapting the example code gave me encoding times that are at least 1.5× slower than the cjxl command-line encoder, and already hit the 100 ms mark at -e 2. Even -e 1 is suddenly much slower than using FJXL in isolation while yielding the same compressed sizes. Also, pushing speculative allocation onto the caller? 🤨 📝 stb_vorbis is a bad joke, not a model to be emulated.
The compressed file sizes are pretty underwhelming as well. Most of the test cases don't even get close to oxipng at -e ≤6 while still taking absurdly long to encode within the game. Even at peak effort, it's a mixed bag at best, with oxipng and JPEG XL -e 10 each massively beating the other in 3 out of 7 cases. And if that's the best we can say about this format…
All this is echoed by this recent issue that points out JPEG XL's inadequacy with an even more retro 16-color example. In the end, the documentation said it all along:
They are about 60-75% of size of PNG, and smaller than WebP lossless for photos.
But there is one widely-used image codec that both perfectly fits Ember2528's priorities and compresses well on lower effort levels. Let's finally look at the complete benchmark numbers:
(All file sizes in bytes.)

main_menu / Effort | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
JPEG XL | 146,352 | 51,851 | 59,453 | 45,329 | 37,864 | 37,276 | 36,130 | 35,222 | 33,793 | 31,724
WebP | 54,116 | 32,194 | 28,112 | 27,860 | 27,712 | 28,272 | 28,178 | 28,120 | 28,684 | 27,816
AVIF | 272,604 | 272,604 | 136,220 | 131,235 | 119,398 | 117,525 | 111,380 | 110,684 | 110,543 | 109,601
BMP (8 bpp) | 308,278
BMP/RLE | 92,034
QOI | 93,884
oxipng -o max -Z | 30,702
ingame / Effort | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
JPEG XL | 123,606 | 102,949 | 130,689 | 102,944 | 84,916 | 72,590 | 68,302 | 49,618 | 45,865 | 46,997
WebP | 50,678 | 49,030 | 43,620 | 41,760 | 40,724 | 40,854 | 38,608 | 37,940 | 37,842 | 37,138
AVIF | 462,703 | 462,703 | 197,818 | 156,007 | 141,043 | 139,689 | 133,399 | 132,573 | 126,270 | 125,379
BMP (8 bpp) | 308,278
BMP/RLE | 185,842
QOI | 175,949
oxipng -o max -Z | 38,409
BMP, cropped | 185,398
BMP/RLE, cropped | 177,456
QOI, cropped | 165,620
stage6 / Effort | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
JPEG XL | 32,204 | 24,146 | 35,053 | 24,599 | 19,936 | 19,560 | 19,336 | 18,444 | 17,423 | 16,183
WebP | 20,856 | 19,916 | 17,070 | 16,524 | 16,380 | 16,562 | 15,488 | 15,386 | 15,404 | 15,124
AVIF | 185,676 | 185,676 | 84,437 | 62,354 | 57,791 | 56,524 | 52,956 | 52,611 | 51,969 | 51,795
BMP (8 bpp) | 308,278
BMP/RLE | 55,838
QOI | 52,302
oxipng -o max -Z | 18,741
BMP, cropped | 185,398
BMP/RLE, cropped | 48,954
QOI, cropped | 45,874
laser / Effort | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
JPEG XL | 345,199 | 287,279 | 301,608 | 248,852 | 92,463 | 85,529 | 81,206 | 66,811 | 61,445 | 47,173
WebP | 85,318 | 56,724 | 51,558 | 53,964 | 53,492 | 53,492 | 51,860 | 51,460 | 51,460 | 41,726
AVIF | 218,858 | 218,858 | 122,100 | 88,490 | 82,675 | 81,245 | 75,866 | 75,395 | 75,462 | 75,138
BMP (24 bpp) | 921,654
BMP/RLE | (n/a at 24 bpp)
QOI | 290,088
oxipng -o max -Z | 61,595
BMP, cropped | 553,014
BMP/RLE, cropped | (n/a at 24 bpp)
QOI, cropped | 280,462
laserbomb / Effort | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
JPEG XL | 332,706 | 125,197 | 150,436 | 128,755 | 110,357 | 102,891 | 99,718 | 68,968 | 66,975 | 64,484
WebP | 129,472 | 94,564 | 86,538 | 64,990 | 64,062 | 64,062 | 60,776 | 60,318 | 60,318 | 59,198
AVIF | 313,731 | 313,731 | 168,388 | 114,111 | 109,239 | 107,121 | 104,109 | 102,054 | 99,106 | 99,103
BMP (24 bpp) | 921,654
BMP/RLE | (n/a at 24 bpp)
QOI | 210,496
oxipng -o max -Z | 87,286
BMP, cropped | 553,014
BMP/RLE, cropped | (n/a at 24 bpp)
QOI, cropped | 200,002
gates / Effort | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
JPEG XL | 208,293 | 185,662 | 212,615 | 172,008 | 124,466 | 117,509 | 113,563 | 110,992 | 97,454 | 91,146
WebP | 124,308 | 125,070 | 113,896 | 102,656 | 102,482 | 102,482 | 95,536 | 94,768 | 94,768 | 57,850
AVIF | 306,742 | 306,742 | 293,874 | 293,276 | 254,073 | 243,953 | 243,947 | 242,188 | 241,943 | 241,359
BMP (24 bpp) | 921,654
BMP/RLE | (n/a at 24 bpp)
QOI | 157,705
oxipng -o max -Z | 90,545
BMP, cropped | 553,014
BMP/RLE, cropped | (n/a at 24 bpp)
QOI, cropped | 147,670
seihou / Effort | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
JPEG XL | 6,124 | 5,088 | 4,732 | 4,468 | 4,427 | 4,416 | 4,377 | 4,112 | 4,016 | 4,040
WebP | 39,518 | 5,904 | 5,642 | 5,574 | 5,500 | 5,518 | 5,518 | 5,504 | 5,486 | 5,490
AVIF | 26,984 | 26,984 | 25,085 | 24,927 | 22,582 | 21,698 | 21,697 | 21,627 | 21,631 | 21,505
BMP (8 bpp) | 308,278
BMP/RLE | 17,654
QOI | 18,047
oxipng -o max -Z | 5,383
BMP, cropped | 23,798
BMP/RLE, cropped | 14,144
QOI, cropped | 13,371
The effort value directly corresponds to cwebp's -z parameter. Add 1 to get cjxl's -e parameter, and subtract from 10 for avifenc's -s parameter.
I definitely could have surveyed the landscape of PNG encoders more thoroughly, but since Ember2528 prioritized compression ratio over compression speed, there was no need to. oxipng is as good as it gets, but even its strongest and most sluggish setting is still outperformed by regular WebP at some level, and often as early as -z 2.
main_menu: 191 colors. The large areas in black and #DDE4FA are a great test case for an encoder's RLE capabilities. The menu's half-transparent background is slightly nasty, but should still keep this image well within the range of potential palette-based compression. (Unless you're QOI, of course.)
FJXL palette detection collision chance: 24.21%.
ingame: 92 colors. Lots of repeated bullet sprites to appropriately represent gameplay, plus a small transparency effect in the Evade gauge that shouldn't complicate compression all too much.
FJXL palette detection collision chance: 6.20%.
stage6: 96 colors. The wavy clock animation makes Stage 6 look complex, but we expect encoders to actually have a much easier time on the last three stages due to their backgrounds being mostly black.
FJXL palette detection collision chance: 6.72%.
laser: 1219 colors. A simple repeated tile in the background, with a big gradient that is likely to push the color count beyond palette-based algorithms.
laserbomb: 831 colors. Similar to enemy-fired lasers, but with multiple smaller gradients rather than a single big one.
gates: 2326 colors. With a comparatively complex background, bullets, and a big laser, this is probably the most intense test case for lossless compression that this game has to offer.
seihou: 40 colors. A small consolation prize for JPEG XL, as the smoothly feathered and blurred colors match the photo-like characteristics this codec was meant to target. Even oxipng gets to barely outperform WebP on this one. Then again, the difference between JPEG XL and WebP never exceeds 1.5 KiB, for an image that doesn't represent the rest of the game.
FJXL palette detection collision chance: 1.18%.
Lossless WebP
Yup, it's 📝 ZMBV beating AV1 all over again. For these kinds of retro game screenshots, JPEG XL is vastly outperformed by its counterpart from the previous generation of widely-used image formats. And not just in terms of compressed file sizes, but also in every single other aspect that matters to us:
Faster compression times across every effort level? ✅ You bet. Imagine adapting its example code and actually getting encoding speeds that match the cwebp command-line encoder! Which brings us to…
Better C API? ✅ Check – well-documented and significantly easier to use, and I'm not even using the easiest entry point because it comes with a fixed effort level. libwebp does use a single 32-bit pixel format internally, just like JPEG XL, but what's that? Importers for other 32-bit pixel formats and even palettized 8-bit images? Sure, the latter are part of the extra code that typically isn't shipped in Linux distribution packages, and they amount to a simple unoptimized loop, but that's how a library communicates that it's the right tool for the job. (See the sketch after this list for how the pieces fit together.)
Less bloat? ✅ Obviously. The unmodified reference library with all of its SSE and AVX optimizations adds an acceptable 274.5 KiB to the statically linked and optimized release binary.
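For illustration, lossless encoding through libwebp's advanced API boils down to something like the following trimmed sketch. The helper function and its exact shape are mine, not the game's actual code, which adds error reporting on top:

#include <cstdint>
#include <webp/encode.h>

// Losslessly encodes a [w]×[h] RGBA buffer at the given effort level
// (0-9, corresponding to cwebp's -z) into [writer]'s malloc'd buffer.
// Call WebPMemoryWriterClear(&writer) once you're done with the result.
bool EncodeLosslessWebP(
    WebPMemoryWriter& writer, const uint8_t *rgba, int w, int h, int effort
)
{
    WebPConfig config;
    WebPPicture pic;
    if(!WebPConfigInit(&config) || !WebPPictureInit(&pic)) {
        return false; // header/library version mismatch
    }
    WebPConfigLosslessPreset(&config, effort);
    config.thread_level = 1; // multi-threaded, but only used at efforts 8-9
    pic.use_argb = 1; // lossless operates on the ARGB plane
    pic.width = w;
    pic.height = h;
    WebPMemoryWriterInit(&writer);
    pic.writer = WebPMemoryWrite;
    pic.custom_ptr = &writer;
    const bool ok = (
        WebPPictureImportRGBA(&pic, rgba, (w * 4)) && WebPEncode(&config, &pic)
    );
    WebPPictureFree(&pic);
    return ok; // on success: [writer.mem] / [writer.size]
}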
That's not to say that libwebp is perfect. Its code makes it very obvious that lossless WebP was designed for 2010-era hardware as the encoder never got optimized for modern CPUs. There was an attempt at optimizing at least the lossy encoder for AVX2, but it was ultimately abandoned because it never got fast enough. Surprisingly, the codebase did receive new AVX2 code one week before I released this build, but it only covers the lossless decoder so far.
As for concurrency, libwebp does come with support for multi-threaded encoding, and I did activate it for the Shuusou Gyoku integration, but it's only used at effort levels 8 and 9. Also, why is argb in this structure interpreted as native-endian and therefore BGRA memory order, but these are interpreted as big-endian?
But the main criticism is the same that also applies to JPEG XL: The lossless and lossy modes are lumped into the same repository despite having virtually no code in common, and are selected via a structure field rather than having unrelated API entry points. This once again makes it very difficult for static linkers to remove all the code on the lossy branches that I never asked for in the first place.
And I sure never want to run the lossy encoder under any circumstance. Lossy WebP deserves all of its bad reputation for basically being VP8's intra-frame coding applied to still images. VP8, 📝 if you remember, is that bad video codec from two generations ago that I'm only serving on this website due to sheer inertia. Applying its enforced YCbCr 4:2:0 chroma subsampling to images not only makes it utterly unsuitable for pixel art, but also even worse than well-compressed JPEG, which isn't limited to a single subsampling scheme. If anything in the GIAN07 process accidentally flips the "I want lossless" flag, I'd rather have the WebP encoder error out and let the screenshot frontend fall back on BMP than save an image with mutilated colors.
But while JPEG XL is a lost cause as far as I'm concerned, I've grown to like lossless WebP too much to leave it trapped within the unfortunate organization of its codebase. Also, there seems to be a lot of untapped potential in the format – really, why does PNG get all the attention of people writing alternative encoders when lossless WebP is the demonstrably much more capable format?
So I've decided to fork libwebp and surgically remove all code related to the lossy encoder. The statically linked result now only takes up ~100 KiB in the Windows build while still being API- and ABI-compatible. Of course, Linux users will still use their distribution's libwebp package with the lossy encoder included, but let's hope that the aforementioned possibility of accidents stays purely theoretical.
Really though, why have people started to bundle lossless and lossy image codecs under the same format in the first place if their algorithms have nothing in common? It might make sense for Opus where SILK and CELT are different kinds of lossy, but lossless and lossy are two completely different paradigms. The bloat and usability confusion far outweigh any situational tricks this might offer.
Alright, we found a good format with configurable effort levels, and we're only missing a way for players to pick an effort level. Depending on how they want to use this rapid-fire screenshot feature, almost all of the options make sense in some context:
You'd like to screenshot a whole section of a stage as fast as possible with the help of the disabled frame rate limiter, and you got plenty of free disk space? You probably want to stick with BMP and compress the screenshots outside of the game, just like how you would have done it without this feature.
A slight slowdown is OK or maybe even welcome for providing additional feedback that you're actually taking screenshots? Pick one of WebP's higher effort values that certainly take longer than 16 ms to encode, but are still reasonably fast and won't turn the game into a <2-FPS slideshow.
Want the lowest file size that your system can encode while staying at 62.5 FPS? Well, how fast is your system? And not just the CPU – maybe your system is actually bottlenecked by I/O and writing a large uncompressed BMP file takes much longer than encoding it into WebP and writing the resulting smaller file.
The latter two use cases would be covered by automatic detection of the maximum effort value that encodes within a given number of frames. The problem, however, is that encoding times are always relative to the complexity of the image. Once we're in-game and have lots of bullets and lasers, any choice that might have been appropriate for the main menu might suddenly start dropping frames after all. Thus, we can't solve this with an upfront benchmark, but have to dynamically adapt to the complexity of the current game scene. But then the whole idea falls apart as we can't possibly treat the configurable allowed screenshot time as a hard limit. To figure out whether it's safe to raise the effort level again, there's no way around periodically exceeding that limit and thus dropping more frames after all.
The ideal solution would involve deep hooks into the WebP encoder that could dynamically adjust the compression algorithms depending on the remaining time in the current frame. An image compressor with real-time guarantees… sure sounds like an interesting research project.
In the end, letting players choose a fixed format and effort level remains the best option. However, they can only make an informed choice if they know the performance of all options relative to each other. And that's how we arrive at this new submenu:
These measurements start before retrieving the framebuffer's pixels, and end after the file writing syscalls. If you save to a reasonably fast and write-cached storage medium, these syscalls are unlikely to have a big impact. Thus, the BMP times almost purely represent the fixed cost of the SDL_RenderReadPixels() call.
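In SDL terms, the measured span corresponds to roughly this sketch, where renderer, pixels, and pitch stand in for the game's actual state:

#include <SDL.h>

// Measures a single screenshot save, from framebuffer readback to after
// the file writes, in milliseconds.
double MeasureScreenshotMs(SDL_Renderer *renderer, void *pixels, int pitch)
{
    const Uint64 start = SDL_GetPerformanceCounter();
    SDL_RenderReadPixels(renderer, nullptr, SDL_PIXELFORMAT_RGBA32, pixels, pitch);
    // …encode into the selected format and write out the file…
    const Uint64 end = SDL_GetPerformanceCounter();
    return (((end - start) * 1000.0) / SDL_GetPerformanceFrequency());
}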
The specific numbers shown above, which I got on my now almost 7-year-old Intel Core i5-8400T, are very peculiar. -z 0 gets quite close to the 16 ms we have per frame, but would still be too slow to reliably compress every gameplay situation without dropping frames. A 64-bit build would speed up -z 0 by 10%, -z 2 through -z 7 by 25%, -z 8 by 210% (!), and -z 9 by 60%. Linux users already enjoy these higher speeds, and the Windows build is just a few compiler settings away from matching them. 📝 Last time, the bitness argument was a lot more balanced, but WebP encoding performance presents the first compelling reason for going 64-bit.
Or we could always go multi-threaded, which already is a much more popular idea within the Seihou development Discord group.
Or I could investigate PNG after all to find out how exactly its encoding speed compares to WebP…
But then, Ember2528 posted the encoding times he got on his new Ryzen 9 9950X3D:
…yeah, I probably won't get funding for performance tuning.
Finally, you probably already noticed another small change in this build: The ReC98 push ID is now shown in the bottom-right corner of the title screen image, just below the original game version number. This was the one part of replay preparations that I wanted to get in sooner rather than later. Since the game binary and the data files can be updated or modded independently from each other, I'm going to tag future replays with both of their respective versions to guarantee reproducibility. Of course, newer builds should never introduce bugs that affect gameplay and desynchronize existing replays. But if they ever do, the included push ID allows hosting sites to remove any replays recorded on such a broken build from the official competition tier associated with a specific data file version.
As for rendering the push ID, it should obviously look similar to the VERSION 1.005 text above. We can find these glyphs in GRAPH.DAT file #0, but this particular text is actually baked into the main menu's background image, which explains why the decimal point glyph isn't part of that data file. The glyphs for 0-9 are also used in-game for the score popups, but the A-Z glyphs remain unused – so unused, in fact, that pbg didn't even leave any reference to them in the source code:
This means that the game provides us with all the glyphs we would need to display the ReC98 push ID. However:
The 0-9 glyphs have a size of 5×7 and would stick out a bit too much against a capital P rendered as a smaller 5×5 glyph.
In WIP builds, the build ID should also include the Git commit, which traditionally uses small letters. Surrounding the commit info with (brackets) would also be nice.
So, all the glyphs next to the BUILD label actually come from the TrueType text renderer. The non-slashed zeroes immediately give this away, but exactly emulating the color gradient of the 0-9 glyphs makes MS Gothic blend in very well regardless:
And that's all I've got for these very packed three pushes! In exchange, I'll reserve the next Shuusou Gyoku push for another round of maintenance and forward compatibility.
The new builds:
Next up: The long-awaited Windows 98 backport of our Shuusou Gyoku build! It has been in development for quite a while, so it should now be a matter of days rather than weeks.
P0304 | TH02 RE (Stage / (mid)boss variables) + Decompilation (Bullets, part 1/2)
P0305 | TH02 decompilation (Bullets, part 2/2 + Sparks, part 1/2)
P0306 | TH02 decompilation (Player, part 1/2: Update/render functions + Miss animation) + Random TH04/TH05 finalization
💰 Funded by:
Yanga, iruleatgames, nrook, [Anonymous]
Sometimes, the gameplay community will come up with the most outlandish theories before they even begin to consider the idea that certain safespots might not be intentional and only work by accident to begin with. Want more details? Read on…
So, TH02's bullet system! At a high level, it marks an interesting transitional point: It's still very much based on TH01's design with its predefined static or aimed spreads, but also introduces a few features that would later return in TH04 and TH05. By transplanting the TH01 system into a double-buffered environment, ZUN eliminated the 📝 worst 📝 unblitting-related parts that plagued TH01, ending up with the simplest and cleanest implementation of bullets I've seen so far. That's not to say it's good-code – far from it – but it also hasn't reached the messy levels that TH04 and especially TH05 would bring later. Of course, there's still TH03's system left to look at before I can say that for sure, but TH02's is a pretty strong contender.
The more detailed overview of the system:
TH02 introduces the distinction between the white 8×8 pellets and the 16×16 sprite bullets that TH04 and TH05 would later expand upon.
The game has a single cap of 150 that is shared among both 8×8 and 16×16 bullets, unlike TH04 and TH05 where the cap is split for optimization reasons.
In 封魔録.TXT, ZUN claims that TH02 could even compete with DoDonPachi in terms of bullet amounts:
怒首領蜂もびっくりな判定の小ささ、弾の量。("A hitbox size and bullet count that would surprise even DoDonPachi.")
Can it really, though? DoDonPachi spawns decidedly more bullets than TH02 throughout all of the game, and this pattern definitely exceeds 150 bullets. Hence, we can immediately debunk this claim as marketing hyperbole rather than a factual statement about the game. It would be nice to have a specific bullet cap number for DoDonPachi as well, but I can't find a decompilation project or annotated disassembly. Nor for any other CAVE game either, for that matter… 👀
TH01's decay and delay cloud effects were removed for TH02. Slightly unfortunate as it leaves bullets completely without any sprite effect, but hey, less code surface to mess up!
All bullets lose 0.625 pixels of per-frame speed on Easy and gain an extra 0.75 pixels of per-frame speed on Lunatic. Each bullet is clamped to a minimum speed of at least 1 pixel per frame; on Easy, the game also filters every second bullet that would have been slower. This mechanism mainly kicks in with the blob enemies at minimum rank during Stage 4. (The speed tuning is sketched in code at the end of this list.)
TH02 sticks with the fixed 2-, 3-, 4-, and 5-way spreads that TH01 introduced, but adds a third delta angle variant on top of TH01's two "narrow" and "wide" ones. 2-spreads even get a fourth "ultrawide" angle, which Evil Eye Σ uses in the pellet corridor pattern during its last phase.
TH02 also adds predefined 4-, 8-, 16-, and 32-ring groups, all of which are used by bosses.
The game does not yet offer predefined stack groups, but has an auto-stacking system that automatically turns every spawned group into a potential 2-stack on Hard and Lunatic. This system forms the main way in which these difficulties differ from the easier ones, and is exactly why going from Normal to Hard roughly doubles the number of bullets fired. On Hard, the second bullet in each stack moves at half the speed of the primary bullet, while Lunatic adds another 0.5 pixels per frame onto that halved speed.
The game also has a function to apply a further multiplier on top of the difficulty-specific stack count, but only uses it to temporarily disable stacking during three patterns, one of them used by the Five Magic Stones and two of them used by Mima.
Just like all other games, TH02 offers a variety of special bullet motion types. For some reason, ZUN limited these to single 16×16 bullets in TH02; they are not supported for either 8×8 pellets or any of the multi-pellet groups. There is no technical reason for this, so ZUN likely did this as a deliberate game design choice. The upside is that you as a player can be certain that every 8×8 pellet moves in a straight line, which may or may not help reading patterns.
Chase bullets adjust their X/Y velocity by a configurable amount on every frame relative to the player's location. These are exclusively used by the 呪 bullets fired by the Stage 2 midboss.
Homing bullets work in a very similar way, re-aiming at the player more properly for a customizable number of frames after a bullet was spawned. These are completely unused.
Decelerating bullets reduce their speed to 0 by halving their velocity every 8 frames, and then turn and repeat this process a fixed number of times. In TH02, this movement type is only used in a symmetric green-ball pattern used by the eastern and western Magic Stones, but it would become really popular later on, showing up in 6 of TH04's midboss and/or boss patterns and 9 of TH05's.
Gravity bullets add a customizable acceleration factor to their Y position on every frame. Another movement type exclusive to a single green-ball pattern by the northern Magic Stone, and interestingly special-cased to bypass any difficulty- or rank-based speed tuning.
Drift bullets either add a remote-controlled angle and speed delta value to a bullet's angle and speed on every frame, or use that remote-controlled angle to chase toward the player using the same algorithm as the 呪 bullets. These two types are criminally underutilized and could have created some wildly inventive patterns that you wouldn't have expected out of the first PC-98 Touhou shmup. Instead, they're only used for two of Marisa's rotating star patterns.
And finally, of course, we have bullets that bounce and flip their direction near the edge of the playfield. In this game, the bounce edges actually lie 8 pixels inside the playfield: The velocity flip only happens on the frame in which a bullet enters the red bounce margin zone. So, faster bullets might still travel a good deal toward the actual edge of the playfield before getting flipped.
This type is not only used by Meira's and Evil Eye Σ's red and purple billiard ball bullets, but also by some star bullet patterns during the Mima fight.
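Here's the aforementioned sketch of the Easy/Lunatic speed tuning, expressed in Q12.4 subpixels (identifiers are mine, not ZUN's):

#include <algorithm>
#include <cstdint>

// Q12.4 subpixels: 16 units = 1 pixel.
constexpr int16_t to_sp(float px)
{
    return static_cast<int16_t>(px * 16.0f);
}

enum class Difficulty { Easy, Normal, Hard, Lunatic };

int16_t tuned_bullet_speed(int16_t speed, Difficulty diff)
{
    if(diff == Difficulty::Easy) {
        speed -= to_sp(0.625f); // 10 subpixel units slower
    } else if(diff == Difficulty::Lunatic) {
        speed += to_sp(0.75f); // 12 subpixel units faster
    }
    // Clamp to the minimum speed of 1 pixel per frame. (Easy's additional
    // filtering of every second bullet that would have been slower is not
    // shown here.)
    return std::max(speed, to_sp(1.0f));
}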
Pellet rendering is batched! For the first time, ZUN preserves the GRCG state for successively blitted pellets, avoiding the extra >168 cycles per pellet that master.lib's grcg_setcolor() and grcg_off() would cost on a 486. The caveat, however, lies in the words successively blitted. Without an architectural split between pellets and sprite bullets, the rendering loop ends up switching the GRCG state whenever the bullet type changes – roughly like this sketch with hypothetical helper names, not ZUN's actual code:
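bool grcg_active = false;
for(const auto& bullet : bullets) {
    if(bullet.is_pellet) {
        // Only pay for the GRCG setup if the previous bullet wasn't a pellet
        if(!grcg_active) {
            grcg_setcolor(GC_RMW, PELLET_COLOR);
            grcg_active = true;
        }
        pellet_render(bullet.pos);
    } else {
        // 16×16 sprite bullets are blitted without the GRCG…
        if(grcg_active) {
            grcg_off();
            grcg_active = false;
        }
        sprite16_render(bullet.pos, bullet.sprite_id);
    }
}
if(grcg_active) {
    grcg_off();
}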
While this definitely is suboptimal once you start mixing the two size types, it's not too bad in context. The actual bullet scripts in TH02 mostly stick to one of the two sprite types, and once the script switches from one to the other, the old and new bullets will occupy mostly contiguous areas of the bullet array anyway. The game doesn't actually mix 8×8 and 16×16 bullets within the same pattern until literally the last pattern of Mima's second form.
The four other ZUN quirks in the system are all related to clipping and aim point calculations. ZUN tries very hard to use constants that are supposed to work for both 8×8 and 16×16 bullets, but they never perfectly fit either of the two.
To find out where all these bullet types are used, I of course had to label all the individual pattern functions and assign them to their (mid)boss owners. As a side effect, we now also know the preferred boss decompilation order for this game!
Marisa
Mima
Evil Eye Σ
Meira
Rika
5 Magic Stones
Quite a satisfying order, if I may say so myself – burning off the big fireworks right at the beginning, getting slightly less exciting later on, but then ending on arguably the best Touhou character ever conceived.
Each of these decompilations will be preceded by the stage's respective midboss. This includes the Extra Stage – you might not think that this stage has a midboss, but it technically does, in the form of this combination of patterns:
Lasting exactly these 420 frames.
There's nothing in TH02's code that mandates midbosses to have sprite-like entities or even something like an HP bar. Instead, the code-level definition of a midboss is all about these properties:
It assigns control functions to the same function pointers that the other stages use for their midbosses.
These functions are activated at a fixed, specific point throughout the stage.
Regular stage enemy spawns are deactivated until these control functions signal completion.
If a pattern manipulates stage tiles, it can only be part of a boss or midboss with custom C code, as this is not supported for regular stage enemy scripts.
Stage 5, on the other hand, indeed doesn't have anything that can be interpreted as a midboss.
Finally, and probably most importantly, hitboxes! The raw decompilation of TH02's bullet collision detection code looks like this:
However, if you aren't deeply familiar with the sizes of all involved sprites, these top-left positions slightly obscure the actual position of the hitbox. That top-left point might also not be where you think it is:
It's the red point.
So let's transform these checks to a more useful comparison of the respective center points against each other, and also fix that inconsistency of the right coordinates being compared with < instead of <= like the other values:
Now also revealing the horizontal asymmetry that ZUN's code was sneakily hiding.
TH02 has only 5 different bullet shapes and no directional or vector bullets, so we can exactly visualize all of them:
📝 As 📝 usual, a bullet sprite has to be fully surrounded by the blue box for a hit to be registered.
Yup. Quite asymmetric indeed, and probably surprising no one.
While experimenting with the various hardcoded group types, I stumbled over a quite surprising quirk that you might have already noticed in the spread showcase video further above. For some reason, none of these spreads are perfectly symmetric, what the…?
By the time the bullets have reached the bottom of the playfield, the inaccuracy has compounded so much that the right lane ends up 6 pixels closer to the player's center position than the left lane. Depending on which of the two lanes actually gets the correct angle, this either means that the left lane is moving too far (2️⃣) or that the right lane is not moving far enough (3️⃣).
This is very weird because the angles that go into the velocity calculations are demonstrably correct. You'd therefore get this asymmetry for not only the hardcoded spreads, but also for code that does its own angle calculations and spawns each bullet manually. It's not something that can arise from the other known issue of 📝 Q12.4 quantization either, because that would affect all parts of a pattern equally.
Instead, the inaccuracy originates in the conversion from the polar coordinates of angles and speeds into the per-frame X/Y pixel velocities that the game uses for actual movement. The integer math algorithm that ZUN uses here is pretty much the single most fundamental piece of code shared by all 5 games:
// Using 📝 typical 8-bit angles.
int16_t polar_x(int16_t center, int16_t radius, uint8_t angle)
{
    // Ensure that the multiplication below doesn't overflow
    int32_t radius32 = radius;

    // Get the cosine value from master.lib's lookup table, which scales the
    // real-number range of [-1; +1] to the integer range of [-256; +256].
    int16_t cosine = CosTable8[angle];

    // The multiplication will include master.lib's 256× scaling factor, so
    // divide the result to bring it within the intended radius.
    return (((radius32 * cosine) >> 8) + center);
}
This exact algorithm is even recommended in the master.lib manual.
The pattern above uses TH02's medium delta angle for 2-spreads and moves at a Q12.4 subpixel speed of 2.5, which corresponds to a radius of 40 in the context of polar coordinate calculation. Let's step through it:
Angle | Cosine | Multiplied | In hex | Shift result | In decimal | In Q12.4
(0x40 - 6) | 38 | 1520 | 000005F0 | 00000005 | 5 | 0.3125
(0x40 + 6) | -38 | -1520 | FFFFFA10 | FFFFFFFA | -6 | -0.3750
Whoa, talk about getting a basic lesson about how computers work! PC-98 Touhou has just taught us that signedness-preserving arithmetic bitshifts are not equivalent to the apparently corresponding division by a power of two, because the typical two's complement representation of negative numbers causes the result to effectively get rounded away from zero rather than toward zero like the corresponding positive value. In our example, this means that the right lane is correct and moves at the angle we passed in, while the left lane moves 1/16 pixels per frame further to the left than intended. Since we're talking about the most basic piece of trigonometry code here, this inaccuracy also applies to every other entity in PC-98 Touhou that moves left relative to its origin point – and/or up, because Y coordinates are calculated analogously. Imagine that… it's been 10 years since I decompiled the first variant of this function, and I'm only now noticing how fundamentally broken it is.
It's understandable why master.lib's manual recommends bitshifts instead of the more correct division here. On a 486, a single 32-bit IDIV takes a whopping >33 cycles, and it would have been even slower on the 286 systems that master.lib is geared toward. But there's no need to go that far: By simply rounding up negative numbers, we can emulate the rounding behavior of regular division while still using a bitshift:
int16_t polar_x(int16_t center, int16_t radius, uint8_t angle)
{
    int32_t ret = (static_cast<int32_t>(radius) * CosTable8[angle]);
+   if(ret < 0) {
+       // Round the multiplication result so that the shift below will yield a
+       // number that's 1 closer to 0, thus rounding toward zero rather than
+       // away from zero as bitshifts with negative numbers would usually do.
+       // This ensures that we return the same absolute value after the
+       // bitshift that we would return if [ret] were positive, thus repairing
+       // certain broken symmetries in PC-98 Touhou.
+       ret += 255;
+   }
    return ((ret >> 8) + center);
}
You could also do this in a branchless way, which is coincidentally very close to what current Clang would generate if you just wrote a regular division by 256. This branchless way does seem slightly slower on a 486 though, as it adds a constant >8 cycles worth of instructions. The branching implementation only adds >4 cycles for positive numbers and >3 for negative ones.
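For reference, that branchless variant replaces the branch with a sign-mask add:

int16_t polar_x(int16_t center, int16_t radius, uint8_t angle)
{
    int32_t ret = (static_cast<int32_t>(radius) * CosTable8[angle]);
    // (ret >> 31) is -1 (all bits set) for negative values and 0 otherwise,
    // adding the rounding constant of 255 only to negative results.
    ret += ((ret >> 31) & 255);
    return ((ret >> 8) + center);
}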
But that would be deep quirk-fixing territory. uth05win just uses floating-point math for this transformation, exchanging master.lib's 8-bit lookup tables for the C library's regular sin() and cos() functions, but bypassing the issue like this also forms the single biggest source of porting inaccuracy. Can't really win here… 🤷
Now it will be interesting to see whether ZUN worked around this inaccuracy in certain places by using slightly lower left- or up-pointing angles…
Alright, but aren't we still missing the single biggest quirk about bullets in TH02? What's with Reimu's hitbox misaligning when dying? I can't release a blog post about TH02's bullet system without solving the single most infamous bullet-related mystery that this game has to offer. So, time to start a third push for looking at all the player movement, rendering, and death sequence code…
If you remember the code above, there is no way that a hitbox defined using hardcoded numbers can ever shift in response to anything. Any so-called hitbox misalignment would therefore be a player position misalignment, which sounds even harder to believe. And sure enough, after decompiling all of it, there's nothing of that sort to be found in the player code either.
If we take player position misalignment literally, we're only left with one other place where it could possibly somehow come from: the strange vertical shaking you can observe right in the first few frames of most stages. So let's visualize the hitbox and… nope, the shaking is purely a scrolling bug, nothing about it changes the internal player position used for collision detection.
So, uh, what are people even talking about? It doesn't help that no one cites any source for this claim and just presents it as a natural and seemingly self-evident fact, as if it was the most obvious and most easily verified property about the game.
Thankfully though, there have been two relatively recent videos about the issue, but both of them only showcase the supposed hitbox shifting in relation to a specific safespot at the end of the Extra Stage midboss. So is that what's been going on here? The community taking the game's behavior in just a single instance of collision detection within a single stage, and extending it to a general claim about the game as a whole?
But indeed, the described behavior cleanly reproduces every time. Enter the spot with 2 remaining lives and you survive, but enter with 1 remaining life and you die:
Whatever this is about, it's not due to a difference in hitboxes because Reimu's position demonstrably stays identical. But if we switch between these two videos, we can easily spot that it's the patterns that are different! With 1 life left, the pattern moves at an ever so slightly slower speed, which apparently adds up to a life-or-death difference at that specific spot.
And that's what the supposed hitbox shifting ultimately boils down to: the natural impact of rank on patterns, which scales bullet speed by a factor of ((playperf + 48) / 48), quantized to the game's 1/16-pixel units. And nothing else.
Let's visualize the hitbox and also track one of the bullets:
If we look at the respective frames in the playperf = +2 case, we see that the bullet misses the hitbox by either one or two pixels on three successive frames:
That's not a safespot, that's Reimu barely surviving only thanks to rounding.
So, for once, this is not a quirk, and doesn't even qualify as a "funny ZUN code moment" if you ask me. This is the game working exactly as designed, and it's the players who are instead making wild assumptions about safespots that only hold when the rank system plugs very specific numbers into the game's fixed-point math.
If anything, you could make the stronger case that this safespot should not work under any circumstance. If the game tested the whole parallelogram covered by a bullet's trajectory between two successive frames instead of just looking at a bullet's current position, it would consistently detect this collision regardless of rank. But even the later games don't go to these lengths.
By testing with parallelograms, the game would not only look at the distinct bullet positions in green, but also detect that the bullet traveled through the position highlighted in cyan, which does lie fully within the hitbox.
Amusingly, if you die twice before this pattern and reach a rank of -2, bullet speed drops enough for the safespot to work again:
It's even the same bullet that fails to hit Reimu, although coming in 5 frames later.
If you're now sad because you liked the idea of ZUN deliberately putting hitbox-shifting code into the game, you don't have to be! You might have already noticed it in the 1-life videos above, but TH02 does have one funny but inconsequential instance of death-induced player position shifting. In the 19 frames between the end of the animation and Reimu respawning at the bottom of the playfield, ZUN just adds 4 pixels to Reimu's Y position. You don't really notice it because the game doesn't render Reimu's sprite during these frames, but this modified position still partakes in collision detection, causing bullets to be removed accordingly.
Hilariously, ZUN was well aware that this shift could move the player's Y position beyond the bottom of the playfield, and thus cause sparks to be spawned at Y coordinates larger than 400. So he just… wrapped these spark spawn coordinates back into the visible range of VRAM, thus moving them to the top of the playfield…
The off-center spawn point of these sparks was the only actual bug in this delivery, by the way.
To round out the third push, I took some of the Anything budget towards finalizing random bits of previously RE'd TH04 and TH05 code that wouldn't add anything more to this blog post. These posts aren't really meant to be a reference – that's the job of the code, the actual primary source of the facts discussed here – but people have still started to use them as such. So it makes sense to try focusing them a bit more in the future, and not bundle all too many topics into a single one.
This finalization work was mostly centered on some tile rendering and .STD file loading boilerplate, but it also covered some of TH05's unfortunately undecompilable HUD number display code. The irony is that it's actually quite good ASM code that makes smart register choices and uses secondary side effects of certain instructions in a way that's clever but not overly incomprehensible. Too bad that these optimizations have no right to exist in logic code that is called way less than once per frame…
Next up: An unexpected quick return to the Shuusou Gyoku Linux port, as Arch Linux is bullying us onto SDL 3 faster than I would have liked.
Here we go, the finale of the Shuusou Gyoku Linux port, culminating in packages for the Arch Linux AUR and Flathub! No intro, this is huge enough as it is.
Before we could compile anything for Linux, I still needed to add GCC/Clang support to my Tup building blocks, in what's hopefully the last piece of build system-related work for a while. Of course, the decision to use one compiler over the other for the Linux build hinges entirely on their respective support for C++ standard library modules. I 📝 rolled out import std; for the Windows build last time and absolutely do not want to code without it anymore. According to the cppreference compiler support table at the time I started development, we had the choice between
experimental support in the not-yet-released GCC 15, and
partial support as of Clang 17, two versions ago.
GCC's current implementation does compile in current snapshot builds, but still throws lots of errors when used within the Shuusou Gyoku codebase. Clang's allegedly partial support, on the other hand, turned out just fine for our purposes. So for now, Clang it is, despite not being the preferred C/C++ compiler on most Linux distributions. In the meantime, please forgive the additional run-time dependency on libc++, its C++ standard library implementation. 🙇 Let's hope that it all will actually work in GCC 15 once that version comes out sometime in 2025.
At a high level, my Tup building blocks only have to do a single thing to support standard library modules with a given compiler: Finding the std and std.compat module interface units at the compiler's standard locations, and compiling them with the same compiler flags used for the rest of the project. Visual Studio got the right idea about this: If you compile on its command prompts, you're already using a custom shell with environment variables that define the necessary paths and parameters for your target platform. Therefore, it makes sense to store these module units at such an easily reachable path – and sure enough, you can reliably find the std module unit at %VCToolsInstallDir%\modules\std.ixx. While this is hands down the optimal way of locating this file, I can understand why GCC and Clang would want module lookup to work in generic shells without polluting environment variables. In this case, asking some compiler binary for that path is a decent second-best option.
Unfortunately, that would have been way too simple. Instead, these two compilers approached the problem from the angle of general module usage within the common build systems out there:
Using modules within a project introduces a new kind of dependency relation between C++ source files, forcing all such code to be compiled in an implicitly defined order. For Tup, this isn't much of a problem because it has always required 📝 order-relevant dependencies to be explicitly specified. So it's been quite amusing for me to hear all these CMake-entrenched CppCon speakers in recent years comment on how this aspect of modules places such a burden on build systems… 🤭
Then again, their goal is a world where devs just write import name_of_module; and the build system figures out a project's dependency graph on its own by scanning all source files prior to compilation. Or rather, asking the compiler to parse the source files and dump out this information, using the -fdeps-* options on GCC, the separate clang-scan-deps tool for Clang, or the cl /scanDependencies option for MSVC.
Because each of the three major compilers has its own implementation of modules, it's understandable why the options and tools are different. Obviously though, CMake is interested in at least getting all three to output the dependency information in the same format. So they got onto the C++ committee's SG15 working group and proposed a JSON format, which GCC and Clang subsequently implemented.
But wait! The source files for the std and std.compat modules don't lie inside the source tree and couldn't be found by such a scan over the declared project files. So SG15 later simply proposed using the same JSON format for this purpose and installing such a JSON file together with the standard library implementation.
But wait! That only shifted the problem, because now we need to find that JSON file. What does the paper have to say on that issue?
For the Standard Library:
The build system should be able to query the toolchain (either the compiler or relevant packaging tools) for the location of that metadata file.
Wonderful. Just what we wanted to do all along, only with an additional layer of indirection that now forces every build system to include a JSON parser somewhere in its architecture. 🤦
In CMake's defense, they did try to get other build systems, including Tup, involved in these proposals. Can't really complain now if that was the consensus of everybody who wanted to engage in this discussion at the time. Still, what a sad irony that they reached out to Tup users on the exact day in 2019 at which I retired from thcrap and shelved all my plans of using Tup for modern C++ code…
So, to locate the interface units of standard library modules on Clang and GCC, a build system must do the following:
Ask the compiler for the path to the modules.json file, using the 30-year-old -print-file-name option.
GCC and Clang implement this option in the worst possible way by basically conditionally prepending a path to the argument and then printing it back out again. If the compiler can't find the given file within its inscrutable list of paths or you made a typo, you can only detect this by string-comparing its output with your parameter. I can't imagine any use case that wouldn't prefer an error instead.
Clang was supposed to offer the conceptually saner -print-library-module-manifest-path option, but of course, this is modern C++, and every single good idea must be accompanied by at least one other half-baked design or implementation decision.
Load the JSON file with the returned file name.
Parse the JSON file.
Scan the "modules" array for an entry whose "logical-name" matches the name of the standard module you're looking for.
Discover that the "source-path" is actually relative and will need to be turned into an absolute one for your compilation command line. Thankfully, it's just relative to the path of the JSON file we just parsed.
Sure, you can turn everything into a one-liner on Linux shells, but at what cost?
You might argue that Tup rules are a rather contrived case. Tup by itself can't store the output of processes in variables because rule generation and rule execution are two separate phases, so we need to call clang -print-file-name at both of the places in the command line where we need the file name. But, uh, CMake's implementation is 170 lines long…
At least it's pretty straightforward to then use these compiled modules. As far as our Tup building blocks are concerned, it's just another explicit input and a set of command-line flags, indistinguishable from a library. For Clang, the -fmodule-file=module_name=path option is all that's required for mapping the logical module names to the respective compiled debug or release version.
GCC, however, decided to tragically over-engineer this mapping by devising a plaintext protocol for a microservice like it's 2014. Reading the usage documentation is truly soul-crushing as GCC tries everything in its power to not be like Clang and just have simple parameters. Fortunately, this mapper does support files as the closest alternative to parameters, which we can just echo from Tup for some 📝 90's response file nostalgia. At least I won't have to entertain this folly for a moment longer after the Lua code is written and working…
So modules are justifiably hard and we should cut compiler writers some slack for having to come up with an entirely new way of serializing C++ code that still works with headers. But surely, there won't be any problems with the smaller new C++ features I've started using. If they've been working in MSVC, they surely do in Clang as well, right? Right…?
Once again, C++ standard versions are proven to be utterly meaningless to anyone outside the committee and the CppCon presenters who try to convince you they matter. Here's the list of features that still don't work in Clang in early 2025:
C++20's std::jthread, which fixes an important design flaw of C++'s regular thread class. This would have been very unfortunate if I hadn't coincidentally already rewritten my threading code to use SDL's more portable thread API as part of the Windows 98 backport. Thus, I could adopt that work into this delivery, gifting a much-needed extra 0.3 pushes of content to the Windows 98 backport. 🙌
C++17's std::from_chars() for floating-point values, which we use to parse 📝 gain factors for waveform BGM out of Vorbis comment tags. This one is a medium-sized tragedy: Since it's not worth it to polyfill this function with a third-party library for just a single call, the best thing we can do is to fall back on strtof() from the C standard library. Why wasn't I using this function all along, you may ask? Well, as we all know by now, the C standard library is complete and utter trash, and strtof() is no exception, suffering as it does from locale braindeath.
A good chunk() (ha) of the C++23 range adaptors. As a rather new addition to the language, I've only made sporadic use of them so far to get a feel for their optimal usage. But as it turns out, sporadic use of range adaptors makes very little sense because the code is much simpler and easier to read without them. And this is what the C++ committee has been demanding our respect for all this time? They have played us for absolute fools.
The -2 might look slightly cryptic at first, but since this code is part of a constinit block, we'd get a compiler error if we either wrote too few elements (and left parts of the array uninitialized) or wrote too many (and thus out of the array's bounds). Therefore, the number can't be anything else.
It almost looked like it'd finally be time for my long-drafted rant about the state of modern C++, but the language just barely redeemed itself with the last two sentences there. Some other time, then…
On the bright side, all my portability work on game logic code had exactly the effect I was hoping for: Everything just worked after the first successful compilation, with zero weird run-time bugs resulting from the move from a 32-bit MSVC build to 64-bit Clang. 🎉
Before we can tackle text rendering as the last subsystem that still needs to be ported away from Windows, we need to take a quick look at the font situation. Even if we don't care about pixel-perfectly matching the game's text rendering on Windows, MS Gothic seems to be the only font that fits the game's design at all:
All text areas are dimensioned around the exact metrics of MS Gothic's embedded bitmaps. In menus, each half-width character is expected to be exactly 7×14 pixels large because most of the submenu items are aligned with spaces. In text boxes and the Music Room, glyphs can be smaller than the intended 8×16 pixels per half-width character, but they can't be larger without cutting off something somewhere.
Only bitmap fonts can deliver the sharp and pixelated look the game goes for. Subpixel rendering techniques are crucial for making vector fonts look good, but quickly get ugly when applied to drop-shadowed text rendered at these small sizes:
That's MS Gothic in both pictures. The smoothed rendering on the help text might arguably look nicer, but it clashes very badly with the drop shadow in the menus.
However, MS Gothic is non-free, and any use of the font outside of a Windows system violates Microsoft's EULA. In spite of that, the AUR offers three ways of installing it:
The ttf-ms-win*-auto packages download a Windows 10 or 11 ISO from a somewhat official download link on Microsoft's CDN and extract the font files from there. Probably good enough if downloading 5 GB only to scrape a single 9 MB font file out of that image doesn't somehow feel wrong to you.
The regular, non-auto or -cdn ttf-ms-win* packages leave it up to you where exactly you get the files from. While these are the clearest options in how they let you manually perform the EULA infringement, this manual nature breaks automated AUR helpers. And honestly, requiring you to copy over all 141 font files shipped with modern Windows is massively overkill when we only need a single one of them. At that point, you might as well just copy msgothic.ttc to ~/.local/share/fonts and not bother with any package. Which, by the way, works on every distro as well as with Flatpaks, which can freely access fonts on the host system.
You might want to go the extra mile and use any of these methods for perfectly accurate text rendering on Linux, and supporting MS Gothic should definitely be part of the intended scope of this port. But we can't expect this from everyone, and we need to find something that we can bundle as part of the Flatpak.
So, we need an alternative free Japanese font that fits the metric constraints of MS Gothic, has embedded bitmaps at the exact sizes we need, and ideally looks somewhat close. Checking all these boxes is not too easy; Japanese fonts with a full set of all Kanji in Shift-JIS are a niche to begin with, and nobody within this niche advertises embedded bitmaps. As the DPI resolutions of all our screens only get higher, well-designed modern fonts are increasingly unlikely to have them, thus further limiting the pool to old fonts that have long been abandoned and probably only survived on websites that barely function anymore.
Ultimately, the ideal alternative turned out to be a font named IPAMonaGothic, which I found while digging through the Winetricks source code. While its embedded bitmaps only cover the first half of MS Gothic's range – font heights between 10 and 16 pixels rather than going all the way up to 22 – that happens to be exactly the range we need for this game.
If you're a PC-98 hardware fan, the difference between these two fonts is probably already reminding you of the stylistic difference between NEC's and Epson's versions of the ROM font.
Both of these screenshots were made on Windows. Obviously, the Linux port shouldn't settle for anything less than pixel-perfectly matching these reference renderings with both fonts.
Alright then, how are we going to get these fonts onto the screen with something that isn't GDI? With all the emphasis on embedded bitmaps, you might come to the conclusion that all we want to do is to place these bitmap glyphs next to each other on a monospaced grid. Thus, all we'd need is a TTF/OTF library that gives us the bitmap for a given Unicode code point. Why should we use any potentially system-specific API then?
But if we instead approach this from the point of view of GDI's feature set, it does seem better to match a standard Windows text rendering API with the equivalent stack of text rendering libraries that are typically used by Linux desktop environments. And indeed, there are also solid reasons why this is a better idea for now:
There actually is a single instance where this game uses MS Gothic at a height of 24 pixels, which is too large to be covered by its embedded bitmaps and thus requires rasterization of vector outlines. Whenever the SCL parser encounters an unknown opcode, it shows this error message:
Modders may very well end up seeing this one as a result of bugs in SCL compilers.
You might see debug text as not worth bothering with, but then there's Kioh Gyoku. Not only does that game display its text at much bigger sizes throughout, but it also renders every string at 3× the size it is ultimately downscaled to, similar to the 2× scale factor used by the 640×480 Windows Touhou games. Going for a full-featured solution that works with both embedded bitmaps and outlines saves us time later.
We'd be ready for translations into even the most complex-to-render non-ASCII scripts.
Since our fonts might not support these scripts, having the API fall back on other fonts installed in the system as necessary would allow us to add these translations independently of figuring out the font situation for them.
In fact, text rendering must technically already support glyph fallback because 📝 the BGM pack selection just displays path names, which count as user input. If people use code points in their BGM pack folder names that aren't covered by either of our two fonts, they probably have some font installed on their system that can display them. Also, the missing .DAT file screen further below in that post shows that GDI already does glyph fallback with emoji, so wouldn't it be lame if the Linux version didn't have at least feature parity in this regard? Instead, the Linux stack would actually outperform GDI here, thanks to its natural support for color emoji. 🎨
Since we're explicitly porting to desktop Linux here, using the standard Linux text rendering stack is the least bloated option because Linux users will have it installed anyway. We can still reach for more minimalistic alternatives later once we do port this game to something other than Linux.
Let's look at what this stack consists of and how the libraries interact with each other:
FreeType provides access to everything related to the rendering of TTF and OTF fonts, including their embedded bitmaps, as well as a rasterizer for regular vector glyphs. It's completely obvious why we need this library.
GLib2 is a collection of various general utility functions that modern non-C languages would have in their standard libraries. Most notably, it provides the tables and APIs for Unicode character data, but its iconv wrapper also comes in quite handy for converting the Shift-JIS text from the original .DAT files to UTF-8 without additional dependencies.
FriBidi implements the Unicode Bidirectional Algorithm, just in case you've thrown some Arabic or Hebrew into your string.
HarfBuzz implements shaping, i.e., the translation of raw Unicode into a sequence of glyph indices and positions depending on what's supported by the font. We might not strictly need this library right now, but it's completely obvious why we will eventually need it for translations.
Fontconfig manages all fonts installed on the system, maps user-friendly font names to file names, tracks their Unicode coverage, and offers a central place for storing various font tweaking options.
Normally, games wouldn't need this library because they just bundle all the fonts they need and hardcode any required tweaking settings to make them look as intended. Looking back at our font situation though, installing MS Gothic system-wide through a package that puts the font into a standard location will be the simplest method of meeting that optional dependency. This assumption holds both in a neatly packaged Linux system where the font is just another item on the game's dependency list, and within a Flatpak, where "system-wide" includes any fonts shipped with the image. If we now assume that IPAMonaGothic is installed in the same way, we can let Fontconfig handle the actual selection. All we need to do is to specify a preference for MS Gothic over IPAMonaGothic, and Fontconfig will take care of the rest, without us having to write a single line of TTF-loading code.
Pango combines the three libraries above into an API that somewhat matches GDI's simplicity, laying out text in one or multiple lines based on the shaped output of HarfBuzz and substituting glyphs as necessary based on Fontconfig information. The actual rendering, however, is delegated to…
Cairo, a… "2D graphics library"? Now why would we need one of those if all we want is a buffer filled with pixels? Wikipedia's description emphasizes its vector graphics capabilities, which seems to describe the library better than the nondescript blurb on its official website, but doesn't FreeType already do this for text? After looking at it for way too long, the best summary I can come up with is "a collection of font rasterization code that should have maybe been part of FreeType, plus the aforementioned general 2D vector graphics code we don't need". Just like Pango wraps HarfBuzz and Fontconfig to lay out the individual glyphs, Cairo wraps FreeType and raw pixel buffers to actually place these glyphs on its surface abstraction. (And also Fontconfig because of all its configuration settings that can influence the rendering.) Ultimately, this means that each font is represented by a HarfBuzz+FreeType handle, a Pango+Cairo handle, and a Cairo+FreeType handle, which I'm sure won't be relevant later on. 👀
Pango does have a raw FreeType backend that could render text entirely without Cairo, but it's not really maintained and supports neither embedded bitmaps nor color emoji. So we don't have much of a choice in the matter.
Created using pango-view -t 'effective. Power لُلُصّبُلُلصّبُررً ॣ ॣh ॣ ॣ🌈冗' --font='MS Gothic 16px' --backend=cairo.
Created using pango-view -t 'effective. Power لُلُصّبُلُلصّبُررً ॣ ॣh ॣ ॣ🌈冗' --font='MS Gothic 16px' --backend=ft2.
Fun fact: Since Cairo also manages the temporary CPU image buffer we draw on and then hand to SDL, our backend for Shuusou Gyoku ends up with 3× as many Cairo function calls as Pango function calls.
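To make the interplay of these libraries a bit more concrete, here's a minimal sketch of how a string would travel through this stack. This is not the game's actual backend code; the buffer size, the text, and the comma-separated family list (one way of expressing the MS Gothic → IPAMonaGothic preference that Pango forwards to Fontconfig) are made up for illustration:

```c++
#include <pango/pangocairo.h>

void render_text_sketch(void)
{
	// Cairo owns the temporary CPU-side pixel buffer…
	cairo_surface_t *surface = cairo_image_surface_create(
		CAIRO_FORMAT_ARGB32, 256, 32
	);
	cairo_t *cr = cairo_create(surface);

	// …while Pango sits on top, driving Fontconfig, HarfBuzz, and FreeType.
	PangoLayout *layout = pango_cairo_create_layout(cr);
	PangoFontDescription *desc = pango_font_description_from_string(
		"MS Gothic,IPAMonaGothic 16px"
	);
	pango_layout_set_font_description(layout, desc);
	pango_font_description_free(desc);
	pango_layout_set_text(layout, "秋霜玉", -1);

	cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);
	pango_cairo_show_layout(cr, layout); // shape → substitute → rasterize

	// The raw pixels we would then hand to SDL:
	cairo_surface_flush(surface);
	const unsigned char *pixels = cairo_image_surface_get_data(surface);
	(void)pixels;

	g_object_unref(layout);
	cairo_destroy(cr);
	cairo_surface_destroy(surface);
}
```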
In the end, a typical desktop Linux program requires every single one of these 8 libraries to end up with a combined API that resembles Ye Olde Win32 GDI in terms of functionality and abstraction level. Sure, the combination of these eight is more powerful than GDI, offering e.g. affine transformations and text rendering along a curved path. But you can't remove any of these libraries without falling behind GDI.
Even then, my Linux implementation of text rendering for Shuusou Gyoku still ended up slightly longer than the GDI one due to all the Pango and Cairo contexts we have to manually manage. But I did come up with a nice trick to reduce at least our usage of Cairo: Since GDI needs to be used together with DirectDraw, the GDI implementation must keep a system-memory copy of the entire 📝 text surface due to 📝 DirectDraw's possibility of surface loss. But since we only use Cairo with SDL, the Cairo surface in system memory does not actually need to match the SDL-managed GPU texture. Thus, we can reduce the Cairo surface to the role of a merely temporary system-memory buffer that is only as large as the single largest text rectangle, and then copy this single rectangle to its intended packed place within the texture. I probably wouldn't have realized this if the seemingly simplest way of limiting rendering to a fixed rectangle within a Cairo surface didn't involve creating another Cairo surface, which turned out to be quite cumbersome.
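Expressed in SDL 2 terms, the final copy could look like this sketch. The function and its parameters are stand-ins, but SDL_UpdateTexture() and the two Cairo getters are real APIs; the sketch assumes that the Cairo surface's ARGB32 format matches the texture's pixel format:

```c++
#include <SDL.h>
#include <cairo.h>

// Copies the freshly rendered text rectangle from the small Cairo surface
// into its packed place within the SDL-managed GPU texture.
void blit_text_rect(
	SDL_Texture *texture, cairo_surface_t *surface, const SDL_Rect& packed
)
{
	cairo_surface_flush(surface); // make sure all drawing has landed
	SDL_UpdateTexture(
		texture,
		&packed,
		cairo_image_surface_get_data(surface),
		cairo_image_surface_get_stride(surface)
	);
}
```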
But can this stack deliver the pixel-perfect rendering we'd like to have? Well, almost:
Cue hours of debugging to find the cause behind these vertical shifts. The overview above already suggested it, but this bug hunt really drove home how this entire stack of libraries is a huge pile of redundantly implemented functionality that interacts with and overrides each other in undocumented and mostly unconfigurable ways. Normally, I don't have much of a problem with that as long as I can step through the code, but stepping through Cairo and especially Pango is a special kind of awful. Both libraries implement dynamic typing and object-oriented paradigms in C, thus hiding their actually interesting algorithms under layers and layers of "clean" management functions. But the worst part is a particularly unexpected piece of recursion: To lay out a paragraph of text, Pango requires a few font metrics, which it calculates by laying out a language-specific paragraph of example text. No, I do not like stepping through functions that much, please don't put a call to the text layout function into the text layout function to make me debug while I debug, dawg…
It'll probably take many more years until most of this stack has been displaced with the planned Rust rewrites. But honestly, I don't have great hopes as long as they stay with this pile-of-libraries approach. This pile doesn't even deserve to be called a stack given the circular dependency between FreeType and HarfBuzz…
Ultimately, these are the bugs we're seeing here:
When rendering strings that contain both Japanese and Latin characters with MS Gothic, the Japanese characters are pushed down by about 1/8th of the font height. This one was already reported in June 2023 and is a bug in either HarfBuzz, Pango, or MS Gothic. With the main HarfBuzz developer confused and without an idea for a clean solution, the bug has remained unfixed for 1½ years.
For now, the best workaround would be to revert the commit that introduced the baseline shift. Since the Flatpak release can bundle whatever special version of whatever library it needs, I can patch this bug away there, but distro-specific packages or self-compiled builds would have to patch Pango themselves. LD_LIBRARY_PATH is a clean way of opting into the patched library without interfering with the regular updates of your distro, but there's still a definite hurdle to setting it up.
The remaining 1-pixel vertical shift is, weirdly enough, caused by hinting. Now why would a technique intended for improving the sharpness of outline fonts even apply to bitmap fonts to begin with? As you might have guessed, the pile-of-libraries approach strikes once more:
We can override Cairo's metric hinting defaults with the API documented in the page I linked above. But we must only do so conditionally because 16-pixel MS Gothic does require metric hinting for its glyph placement to match GDI. The resulting hack is very much not pretty.
Cairo's font options can really only be changed at the level of a Cairo context. Any Pango font handle created from a Pango layout mapped to a Cairo context will get a copy of that context's font options at creation time. And of course, the Pango level treats these options as an implementation detail that cannot be modified from the outside. So, we need to figure out the font using raw Fontconfig calls instead of Pango's abstraction. Oh, and this copy also forces us to recreate the Pango layout whenever we change between 14- and 16-pixel MS Gothic, which is not necessary with IPAMonaGothic.
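A sketch of what the override boils down to, assuming that the is-this-16-pixel-MS-Gothic question has already been answered via those raw Fontconfig calls:

```c++
#include <pango/pangocairo.h>

void apply_hint_metrics_workaround(cairo_t *cr, bool is_16px_ms_gothic)
{
	cairo_font_options_t *options = cairo_font_options_create();
	cairo_get_font_options(cr, options);
	cairo_font_options_set_hint_metrics(options, (is_16px_ms_gothic
		? CAIRO_HINT_METRICS_ON  // required to match GDI's glyph placement
		: CAIRO_HINT_METRICS_OFF // removes the 1-pixel vertical shift
	));
	cairo_set_font_options(cr, options);
	cairo_font_options_destroy(options);

	// Since Pango copied the previous options at creation time, any
	// PangoLayout on this context must now be recreated as well.
}
```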
Don't you love it when the concerns are so separated that they end up overlapping again? I'm so looking forward to writing my own bitmap font renderer for the multilingual PC-98 translations, where the memory constraints of conventional DOS RAM make it infeasible to use any libraries of this pile to begin with 😛
Before we can package this port for Flathub, there's one more obstacle we have to deal with. Flathub mandates that any published and publicly listed app must come with an icon that's at least 128×128 pixels in size. pbg did not include the game's original 32×32 icon in the MIT-licensed source code release, but even if he did, just taking that icon and upscaling it by 4× would simultaneously look lame and more official than it perhaps should.
So, the backers decided to commission a new one, depicting VIVIT in her title screen pose but drawn in a different style so as not to look too official. Mr. Tremolo Measure quickly responded to our search and Ember2528 liked his PC-98-esque pixel art style, so that's what we went for:
However, the problem with pixel art icons is that they're strongly tied to specific resolutions. This clashes with modern operating system UIs that want to almost arbitrarily scale icons depending on the context they appear in. You can still go for pixel art, and it sure looks gorgeous if the icon's resolution exactly matches the size a GUI wants to display it at. But that's a big if – if the size doesn't match and the icon gets scaled, the resulting blurry mess lacks all the definition you typically expect from pixel art. Even nearest-neighbor integer upscaling looks cheap rather than stylized, as the coarse pixel grid of the icon clashes with the finer pixel grid of everything surrounding it.
So you'd want multiple versions of your icon that cover all the exact sizes it will appear at, which is definitely more expensive than a single smooth piece of scalable vector artwork. On a cursory look through Windows 11, I found no fewer than 7 different sizes that icons are displayed at:
16×16 in the title bar and all of Explorer's list views
24×24 in the taskbar
28×28 in the small icon next to the file name in Explorer's detail pane (which is never sharp for some reason, even if you provide a 28×28 variant?!)
32×32 in the old-style Properties window
48×48 in Explorer's Medium icons view
96×96 in Explorer's Large icons view, and the large icon in its detail pane
256×256 in Explorer's Extra large icons view
And that's just at 1× display scaling and the default zooming factors in Explorer.
But it gets worse. Adding our commissioned multi-resolution icon to an .exe seems simple enough:
Bundle the individual images into a single .ico file using magick in1.png in2.png … out.ico
Write a small resource script, call rc, and add the resulting .res file to the link command line
Be amazed as that icon appears in the title and task bars without you writing a single line of code, thanks to SDL's window creation code automatically setting the first icon it finds inside the executable
But what's going on in Explorer?
Same Extra large icons setting for both.
That's the 48×48 variant sitting all tiny in the center of a 256×256 box, in a context where we expect exactly what we get for the .ico file. Did I just stumble right into the next underdocumented detail? What was the point of having a different set of rules for icons in .exe files? Make that 📝 another Raymond Chen explanation I'm dying to hear…
Until then, here's what the rules appear to be:
256×256 is the one and only mandatory size for high-res program icons on Windows.
48×48 is the next smallest supported size, as unbelievable as that sounds. Windows will never use any other icon variant in between. Some sites claim that 64×64 is supported as well, but I sure couldn't confirm that in my tests.
Those 96×96 use cases from the list above? Yup, Windows will never actually display an embedded 96×96 icon at its native resolution, and either scale up the 48×48 variant (in the Large icons view) or scale down the 256×256 variant (in the detail pane).
You only ever see an embedded icon with a size between 48×48 and 256×256 if it's the only icon available – and then it still gets scaled to 48×48. Or to 96×96, depending on how Explorer feels.
Getting different results in your tests? Try rebuilding the icon cache, because of course Windows still struggles with cache invalidation. This must have caused unspeakable amounts of miscommunication with artists over the decades.
Oh well, let's nearest-neighbor-scale our 128×128 icon by 2× and move on to Linux, where we won't have such archaic restrictions…
…which is not to say that pixel art icons don't come with their own issues there. 🥲
On Linux, this kind of metadata is not part of the ELF format, but is typically stored in separate Desktop Entry files, which are analogous to .lnk shortcuts on Windows. Their plaintext nature already suggests that icon assignment is refreshingly sane compared to the craziness we've seen above, and indeed, you simply refer to PNG or even SVG files in a separate directory tree that supports arbitrary size variants and even different themes. For non-SVG icons, menus and panels can then pick the best size variant depending on how many pixels they allot to an icon. The overwhelming majority of the ones I've seen do a good job at picking exactly the icon you'd expect, and bugs are rare.
But how would this work for title and task bars once you started the app? If you launched it through a Desktop Entry, a smart window manager might remember that you did and automatically use the entry's icon for every window spawned by the app's process. Apparently though, this feature is rather rare, maybe because it only covers this single use case. What about just directly starting an app's binary from a shell-like environment without going through a Desktop Entry? You wouldn't expect window managers to maintain a reverse mapping from binaries to Desktop Entries just to also support icons in this other case.
So, there must be some way for a program to tell the window manager which icon it's supposed to use. Let's see what SDL has to offer… and the documentation only lists a single function that takes a single image buffer and transfers its pixels to the X11 or Wayland server, overriding any previous icon. 😶
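That function is SDL_SetWindowIcon(). A sketch of its typical use, with the 128×128 RGBA pixel buffer assumed to have been decoded elsewhere:

```c++
#include <SDL.h>

// SDL neither knows nor cares which size variant would fit the window
// manager best; it just takes this one surface.
void set_icon(SDL_Window *window, void *rgba_pixels)
{
	SDL_Surface *icon = SDL_CreateRGBSurfaceWithFormatFrom(
		rgba_pixels, 128, 128, 32, (128 * 4), SDL_PIXELFORMAT_RGBA32
	);
	SDL_SetWindowIcon(window, icon);
	SDL_FreeSurface(icon); // SDL copied the pixels, so this is safe
}
```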
Well great, another piece of modern technology that works against pixel art icons. How can we know which size variant we should pick if icon sizing is the job of the window manager? For the same reason, this function used to be unimplemented in the Wayland backend until the committee of Wayland stakeholders agreed on the xdg-toplevel-icon protocol last year.
Now, we could query the size of the window decorations at all four edges to at least get an approximation, but that approach creates even more problems:
Which edge do we pick? The top one? The largest one? How can we possibly be sure that the one we pick is the one that will show the icon?
Even if we picked the correct edge, the icon will likely be smaller and not cover the full area. Again, anything less than an exact match isn't good enough for pixel art.
This function is not implemented on Wayland because client windows aren't supposed to care about how the server is decorating them.
But even among X11 window managers, there's at least one that doesn't report back the border sizes immediately after window creation. 🙄
Most importantly though: What if that icon is also used in a taskbar whose icons have a different size than the ones in title bars? Both X11's _NET_WM_ICON property and Wayland's xdg-toplevel-icon-v1 protocol support multiple size variants, but SDL's function does not expose this possibility. It might look as if SDL 3 supports this use case via its new support for alternate images in surfaces, but this feature is currently only used for mouse cursors. That sounds like a pull request waiting to happen though, I can't think of a reason not to do the same for icons. contribution-ideas?
But if SDL 2's single window icon function used to be unsupported on Wayland, did SDL 2 apps just not have icons on Wayland before October 2024?
Digging deeper reveals the tragically undocumented SDL_VIDEO_X11_WMCLASS environment variable, which does what we were hoping to find all along. If you set it to the name of your program's Desktop Entry file, the window manager is supposed to locate the file, parse it, read out the Icon value, and perform the usual icon and size lookup. Window class names are a standard property in both X11 and Wayland, and since SDL helpfully falls back on this variable even on Wayland, it will work on both of them.
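In code, the whole thing then boils down to something like this sketch; the Desktop Entry name is hypothetical, and the variable must be set before SDL initializes its video subsystem:

```c++
#include <SDL.h>
#include <cstdlib>

int main(int argc, char **argv)
{
	// Must match the name of the installed Desktop Entry file,
	// minus the .desktop extension.
	setenv("SDL_VIDEO_X11_WMCLASS", "shuusou-gyoku", 1);
	if(SDL_Init(SDL_INIT_VIDEO) != 0) {
		return 1;
	}
	// … window creation, game loop, etc. …
	SDL_Quit();
	return 0;
}
```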
Or at least it should. Ultimately, it's up to the window manager to actually implement class-derived icons, and sadly, correct support is not as widespread as you would expect.
How would I know this? Because I've tested them all. 🥲 That is, all non-AUR options listed on the Arch Wiki's Desktop environment and Window manager pages that provide something vaguely resembling a desktop you can launch arbitrary programs from:
| WM / DE | Manually transferred pixels | Class-derived icons | Notes |
| --- | --- | --- | --- |
| awesome | ✔️ | | Does not report border sizes back to SDL immediately after window creation |
| Blackbox | | | |
| bspwm | | | No title bars |
| Budgie | ✔️ | ✔️ | Title bars have no icons. Taskbar falls back on the icon from the Desktop Entry file the app was launched with. |
| Cinnamon | ✔️ | ✔️ | Title bars have no icons, but they work fine in the taskbar. Points out the difference between native and Flatpak apps! |
| COSMIC | ✔️ | ✔️ | Title bars have no icons, but they work fine in the taskbar. Points out the difference between native and Flatpak apps! |
| Cutefish | ➖ | | Title bars have no icons. The status bar only seems to support the X11 _NET_WM_ICON property, and not the older XWMHints mechanism used by e.g. xterm. |
| Deepin | | | Did not start |
| Enlightenment | ✔️ | ➖ | Taskbar falls back on the icon from the Desktop Entry file the app was launched with. Only picks the correctly scaled icon variant in about half of the places, and just scales the largest one in the other half. |
| Fluxbox | ✔️ | | |
| GNOME Flashback / Metacity | ✔️ | | Title bars have no icons |
| GNOME | ✔️ | ✔️ | Title bars have no icons |
| GNOME Classic | | | How do you get this running? The variables just start regular GNOME. |
| … | | | Taskbar only supports manually transferred icons. Scaling of class-derived icons in title bars is broken. |
| xmonad | | | No title bars |
I tested all window managers, compositors, and/or desktop environments at their latest version as of January 2025 in their default configuration. There were no differences between the X11 and Wayland versions for the ones that offer both.
Yes, you can probably rice title bars and icons onto WMs that don't have them by default. I don't have the time.
That's only 6 out of 33 window managers with a bug-free implementation of class-derived icons, and still 6 out of 28 if we disregard all the tiling window managers where icons are not in scope. If you actually want icons in the title bar, the number drops to just 2, KDE and Pantheon. I'm really impressed by IceWM there though, beating all other similarly old and minimal window managers by shipping with an almost correct implementation.
For now, we'll stay with class-derived icons for budget reasons, but we could add a pixel transfer solution in the future. And that was the 2,000-word story behind this single line of code… 📕
On to packaging then, starting with Arch! Writing my first PKGBUILD was a breeze; as you'd expect from the Arch Wiki, the format and process are very well documented, and the AUR provides tons of examples in case you still need any.
The PKGBUILD guidelines have some opinions about how to handle submodules, but applying them would complicate the PKGBUILD quite a bit while bringing us nowhere close to the 📝 nirvana of shallow and sparse submodules I've scripted earlier. But since PKGBUILDs are just shell scripts that can naturally call other shell scripts, we can just ignore these guidelines, run build.sh, and end up with a simpler PKGBUILD and the intended shorter and less bloated package creation process.
Sadly, PKGBUILDs don't easily support specifying a dependency on either one of two packages, which we would need to codify the font situation. Due to the way the AUR packages both IPAMonaGothic and MS Gothic together with their Mincho and proportional variants, either of them would be Shuusou Gyoku's largest individual dependency. So you'd only want to install one or the other, but probably not both. We could resolve this by editing the PKGBUILDs of both font packages and adding a provides entry for a new and potentially controversial virtual package like ttf-japanese-14-and-16-pixel-bitmap that Shuusou Gyoku could then depend on. But with both of the packages being exclusive to the AUR, this dependency would still be annoying to resolve and you'd have no context about the difference.
Thus, the best we can do is to turn both MS Gothic and IPAMonaGothic into optional dependencies with a short one-line description of the difference, and elaborating on this difference in a comment at the top of the PKGBUILD. Thankfully, the culture around Arch makes this a non-issue because you can reasonably expect people to read your PKGBUILD if they build something from the AUR to begin with. You do always read the PKGBUILD, right?
Flatpak, on the other hand… I'm not at all opposed to the fundamental idea of installing another distro on top of an already existing distro for wider ABI compatibility; heck, Flatpak is basically no different from Wine or WSL in this regard. It's just that this particular ABI-widening distro works in a rather… unnatural way that crosses the border into utter cringe at times.
There are enough rants about Flatpak from a user's perspective out there, criticizing the bloat relative to native packages, the security implications of bundling libraries, and the questionable utility of its sandbox. But something I rarely see people talk about is just how awful Flatpak is from a developer's point of view:
The documentation is written in this weird way that presents Flatpak and its concepts in complete isolation. Without drawing any connections to previous packaging and dependency management systems you might have worked with, it left a lot of my seemingly basic questions unanswered. While it is important to explain your concepts with example code, the lack of a simple and complete reference of the manifest format doesn't exactly inspire confidence in what you're doing. Eventually, I just resorted to cross-checking features in the JSON Schema to get a better idea of what's actually possible.
The ABI-expanding distro part of Flatpak is actually called the Freedesktop platform, a currently 680 MB large stack of typical GUI application libraries updated once a year. It's accompanied by the Freedesktop SDK containing the matching development libraries and tools in another 1.7 GB. As the name implies, this distro is maintained by a separate entity with a homepage that makes the entire thing look deeply self-important and unprofessional. A blurry 25 FPS logo video, a front page full of spelling mistakes, a big focus on sponsors and events… come on, you have one job, and it's compiling and packaging a bunch of open-source libraries. Was this a result of the usual corporate move of creating more departments in order to shift blame and responsibility?
Optics aside, their documentation is even more bizarrely useless. The single bit of actually useful information I was looking for – the concrete list of packages bundled as part of their runtimes, and their versions – is best found by going straight to their code repo.
The manifest of a Flatpak app can be written in your preferred lesser evil of the two most popular markup languages: JSON (slightly ugly for humans and machines alike), or YAML, the underspecified mess that uses syntactically significant whitespace while outlawing the closest thing we have to a semantic indentation character. Oh well, YAML at least supports comments, and we sorely need them to justify our bleeding-edge C++ module setup to the Flathub maintainers.
Adding more dependencies on top of the basic runtime can be done by either using runtime extensions or BaseApps. That's two entirely separate concepts that appear to do the same thing on the surface, except that you can only have one BaseApp. The documentation then waffles on and tries to explain both concepts with words that have meaning in isolation but once again answer exactly zero of my questions. Must a BaseApp contain a collection of at least two dependencies or why would anyone ever write the sentence that raises this question? Why do they judge BaseApps to be a "specialized concept" without elaborating, as if to suggest that their audience is too dumb to understand them? Why does a page named Dependencies document extensions as if I wanted to prepare my own package for extension by others? Why be all weird and require "extension points" to be defined when it all just comes down to overlaying another filesystem? Who cares about the special significance of the .Debug, .Locale, and .Sources conventions in the context of dependencies?
In the end, you once again get a clearer idea by simply looking at how existing code uses these concepts. Basically, SDK extensions = build-time dependencies, BaseApps = run-time dependencies, and extension points don't matter at all for our purposes because you can just arbitrarily extend the org.freedesktop.Sdk anyway. 🤷
Speaking of extensions: This exact architectural split between build-time and run-time dependencies is why the org.freedesktop.Sdk.Extension.llvm19 extension packages Clang, but not libc++. When questioned about this omission, one of the maintainers responded with the lamest of excuses: Copying the library would be inconvenient (for them), and something we can't even imagine a use case for. Um, guys? Here's a table. Compare the color of each cell between GCC and Clang. There's your use case.
Thankfully, you can build libc++ without building LLVM as a whole. Seeing how building libc++ takes basically no time at all compared to the rest of LLVM just raises even more questions about not simply providing some kind of script to copy it over.
Why does flatpak-builder create its .flatpak-builder cache directory in the current working directory and not under $XDG_CACHE_HOME where it belongs?
The modules in a Flatpak work in a similarly layered way as the commands in a Dockerfile, causing edits to a lower layer to evict previous builds of all successive layers from the cache. Any tweaking work in the lower layers therefore suffers from the same disruptive workflow you might already know from Docker, where you constantly shift the layers around to minimize unnecessary rebuilds because there's never an optimal order. Will we ever see container bros move on from layers to a proper build graph of the entire system? The stagnation in this space is saddening.
The --ccache option sort of mitigates the layering by at least caching object files in .flatpak-builder/ccache, which reduces repeated C compilation to a mere file copy from the cache to the package. But not only is this option not enabled by default, it also doesn't appear in any of the flatpak-builder example command lines in the documentation.
Also, it only appears to work with GCC, and setting CCACHE_COMPILERTYPE=clang seems to have no effect. Fortunately, my investment into C++ modules pays off here as well and keeps compile times decently short.
flatpak-builder doesn't validate the manifest schema? Misspelled or misplaced properties just silently do nothing?
Speaking of validation, why does flatpak-builder-lint take 8 seconds to validate a manifest, even if it just consists of a single line? Sure, it's written in Python, but that's an order of magnitude too slow for even that language.
No tab completion for any of the org.flatpak.Builder tools. Sandbox working as designed, I guess 🤷
Git submodule handling. Oh my goodness.
Flatpak recursively clones and checks out all of a repository's submodules. This might be necessary for some codebases, but not for this one: The Linux build doesn't need the SDL submodule, and nothing needs the second miniaudio submodule that the dr_libs use for their testing code. And since these recursive clones aren't shallow, you end up with lots of disk space wasted for no reason; 166.1 MiB in our case.
Except that it's actually twice that amount. There's the download cache that persists across multiple flatpak-builder runs, and then there's the temporary directory the build runs in, which gets a full second clone of the entire tree of submodules. This isn't Windows 8, there are no excuses for not using read-only symlinks.
None of this would be too bad if we could just do the same thing we did with Arch, ignore the default or recommended submodule processing, and let our shell script run the show and selectively download and check out the submodules required for the Linux build. But no – the build process of a Flatpak is strictly separated into a download stage and a build stage, and the build stage cannot access the network. Once again, Flatpak would have the option to allow build-time network access, but enabling it would mean no hosting and discoverability on Flathub for you.
I guess it makes sense from a security point of view, as reviewers would only have to audit a fixed set of declaratively specified sources rather than all code run by the build commands? But even this can only ever apply to the initial review. Allowing app developers to push updates independently from the Flathub maintainers is one of Flathub's biggest selling points. Once you're in, you or your supply chain can just simply hide the malware in an updated version of a module source. 🤷
Getting Tup to work within the Flatpak build environment is slightly tricky. The build sandbox doesn't provide access to the kernel's FUSE module, which Tup uses to track syscalls by default. Thankfully, Tup also supports syscall tracking via LD_PRELOAD, which allows us to still build Shuusou Gyoku in a parallelized way with a regular Tup binary. Imagine compiling FUSE from source only to make Tup compile, but then having to build the game via a tup generated single-threaded shell script…
One common user complaint about Flatpak is that it allows Windows app developers to stick to their beloved and un-Linux-y way of bundling all dependencies, as if they actually ever enjoyed doing that. In reality, it's not the app authors, but the Flathub maintainers and submission reviewers who do everything in their power to prevent Flathub from turning into a typical package manager. Since they ended up with a system where every new extension to the Freedesktop SDK somehow places a burden on the maintainers, they're quick to shut down everything they consider a bad idea, including a Tup package I submitted. What a great job for people who always wanted to be gatekeepers and arbiters of good ideas. If your system treats CMake as one of two blessed build systems that get first-class support, we already fundamentally disagree on basic questions of good taste.
Because even the build stages of individual modules are sandboxed from each other, the only way to persist a module's build outputs for further modules is by installing them into the same /app/ path that the final application is supposed to live in. Since most of these foundational modules will be libraries, /app/ will be full of C header files, static library files, and library-related tooling that you don't want to bloat your shipped package. Docker solves this with multi-stage builds: After building your app into an image full of all build-time dependencies and other artifacts vomited out by your build system, you can start from a fresh, minimal base image and selectively copy over only the files your app actually needs to run. Flatpak solves this in the opposite way, merely letting you manually clean up after your dependencies in the end. At least they support wildcards…
So you've built your Flatpak, but it has an issue that your native build doesn't have and it's time for some debugging. You open up a shell into the image, fire up gdb… and don't get debug symbols despite your build definitely emitting them. The documentation mentions that debug symbols are placed into a separate package, just like Arch Linux's makepkg does it, but the suggested command line to install them doesn't work:
error: No remote refs found for ‘$FLATPAK_ID’
The apparently correct command line can only be found in third-party blog posts. Pulling the package directly out of the builder cache is as random as it gets for someone not deeply familiar with the system.
Before you publish your package, you might want to inspect the bundle to make sure that your --cleanup entries actually covered all the library bloat you suddenly have to care about. Flatpak also adds a few slight annoyances there:
You could look into the build directory (not the repo directory! Very important difference! 🤪) you pass to flatpak-builder, but it also contains all the debug files and source code.
You could open the --devel shell and inspect the contents of /app/. This shell environment is rather minimal and misses both a lot of typical Linux userland tools and (of course) a package manager, but ls and find work and can do the job.
So if all of Flatpak feels like Docker anyway, why isn't it built on top of Docker to begin with? Instead, we got what amounts to a worse copy that doesn't innovate in any way I can notice. Why throw away compatibility with all of Docker's existing tooling just to gain hash-based deduplication at the file level for a couple of images? How can they seriously use a tagline like "Git for apps", which only makes sense for very, very loose definitions of "Git"?
Or maybe all the innovation went into the portals that make this thing work at all, and have at least this little game work indistinguishably from a native build past the initial load time…
… except when parts of it don't! 🤣 Audio is only supported through PulseAudio, which you might not have installed on Arch Linux. Thus, Flatpak ironically enforces another dependency on the host system that the app itself might not have needed.
Alright, you've submitted your app, incorporated the changes requested by the reviewers, waited a while, and now your app is live and has its own page on Flathub. You'd think I'd be done ranting at this point, but no:
You give them nice lossless PNG screenshots and icons, and they convert both to lossy WebP with clearly visible compression artifacts. How about some trust in the fact that people who give you small PNG files know what they're doing, verified by a programmatic check of whether such a lossy recompression even noticeably improves the file size, instead of blindly blowing up our icon to 4.58× the size of the original PNG? Source-quality images are way more important to me than brand colors.
The screenshot area on the app pages has a fixed height of 468 pixels. Is this some kind of a sick joke? How could anyone look at that height and not go "nah, that looks wrong, 12 more pixels and we'd be VGA-compatible, barely makes a difference anyway"?
That leaves us with two choices:
Crop those 12 pixels out of the raw game screenshots I originally wanted to have there, or
take the screenshots inside a decorated window instead.
The latter probably isn't the worst idea as it also gives us a chance to show off the 16×16 variant of the icon at its intended size. But I sure didn't immediately find a KDE theme that both has 16-pixel window icons (unlike Breeze's 15 pixels at the Small size) and doesn't have obscenely large and asymmetric shadows (unlike Materia or Klassy). Shoutout to the Arc theme for matching all these constraints!
Might as well try converting these images to lossless WebP while I'm at it, in the hope that they then leave them alone… but nope, they still get lossily recompressed! 🤪 You know what, I'm not gonna bother with the rest of their guidelines, this is an embarrassment.
Finally, game controller support comes with a very similar asterisk. By default, it's disabled just like any other piece of hardware, and the documentation tells you to specify --device=input to activate it. However, this specific permission is a fairly recent development in Flatpak terms and thus isn't widely available yet? Therefore, the reviewers don't yet allow it in manifests, and your only alternative is a blanket permission for all devices in the user's system. But then, Flathub lists your app as having potentially unsafe user device (and even webcam!) access, even though you had no alternative except for disabling game controller support. What a nice sandbox they have there… 🙄
If that's the supposed future of shipping programs on Linux, they've sure made this dev look back into the past with newfound fondness. I'm now more motivated than ever to separately package Shuusou Gyoku for every distribution, if only to see whether there's just a single distro out there whose packaging system is worse than Flatpak. But then again, packaging this game for other distros is one of the most obvious contribution-ideas there is.
In the end though, the fact that we need to patch Pango to correctly render MS Gothic means that there is a point to shipping Shuusou Gyoku as a Flatpak, beyond just having a single package that works on every distro. And with a download size of 3.4 MiB and an installed size of 6.4 MiB, Shuusou Gyoku almost exemplifies the ideal use case of Flatpak: Apart from miniaudio, BLAKE3, the IPAMonaGothic font, the temporary libc++, and the patched Pango, all other dependencies of the Linux port happen to be part of the Freedesktop runtime and don't add more bloat to the system.
And so, we finally have a 100% native Linux port of Shuusou Gyoku, working and packaged, after 36 pushes! 🎉 But as usual, there's always that last bit of optional work left. The three biggest remaining portability gaps are
guaranteed support for ARM CPUs, which currently fail to build the project on Flathub due to a Tup issue, and who knows what other issues there might be,
MIDI support, and
patching IPAMonaGothic.
Despite 📝 spending 10 pushes on accurate waveform BGM, MIDI support seems to be the most worthwhile feature out of the three. The whole point of the BGM work was that Linux doesn't have a native MIDI synth, so why should packagers or even the users themselves jump through the hoops of setting up some kind of softsynth if it most likely won't sound remotely close to an SC-88Pro? But if you already did, the lack of support might indeed seem unexpected.
But as described in the issue, MIDI support can also mean "a Windows-like plug-and-play" experience, without downloading a BGM pack. Despite the resulting unauthentic sound, this might also be a worthwhile thing to fund if we consider that 14 of the 17 YouTube channels that have uploaded Shuusou Gyoku videos since P0275 still had MIDI playing through the Microsoft GS Wavetable Synth and didn't bother to set up a BGM pack.
Finally, we might want to patch IPAMonaGothic at some point down the line. While a fix for the ascent and descent values that achieves perfect glyph placement without relying on hinting hacks would merely be nice to have, matching the Unicode coverage of its embedded bitmaps with MS Gothic will be crucial for non-ASCII Latin script translations. IPAMonaGothic's outlines do cover the entire Latin-1 Supplement block, but the font is missing embedded bitmaps for all of this block's small letters. Since the existing outlines prevent any glyph fallback in both Fontconfig and GDI, letters like ä, ö, ü, and ñ currently render as spaces.
Not pictured here is the fact that IPAMonaGothic also suffers from Greek and Cyrillic glyphs being full-width, like most Japanese fonts from the Shift-JIS era. If we ever translate Shuusou Gyoku into those scripts, we'd probably just hunt for a different font altogether. But it's not worth going on such a hunt for Latin scripts that are only missing a few special characters.
Ideally, I'd like to apply these edits by modifying the embedded bitmaps in a more controlled, documented, and diffable way and then recompiling the font using a pipeline of some sort. The whole field of fonts often feels impenetrable because the usual editing workflow involves throwing a binary file into a bulky GUI tool and writing out a new binary file, and it doesn't have to be this way. But it looks like I'd have to write key parts of that pipeline myself:
The venerable ttx provides no comfort features for embedded bitmaps and simply dumps their binary representation as hex strings.
The more modern UFO format does specify embedded images, but both of the biggest implementations (defcon and ufoLib2) just throw away any embedded bitmaps, and thus, the whole selling point of such tools.
That would increase the price of translations by about one extra push if you all agree that this is a good idea. If not, then we just go for the usual way of patching the .ttf file after all. In any case, we then get to host the edited font at a much nicer place than the Wayback Machine.
TH05's OP.EXE? It's not one of the 📝 main blockers for multilingual translation support, but fine, let's push it to 100% RE. This didn't go all too quickly after all, though – sure, we were only missing the High Score viewer, but that's technically a menu. By now, we all know the level of code quality we can reasonably expect from ZUN's menu code, especially if we simultaneously look at how it's implemented in TH04 as well. But how much could I possibly say about even a static screen?
Then again, with half of the funding for this push not being constrained to RE, OP.EXE wasn't the worst choice. In both TH04 and TH05, the High Score viewer's code is preceded by all the functions needed to handle the GENSOU.SCR scorefile format, which I already RE'd 📝 in late 2019. Back then, it turned out to be one of the most needlessly inconsistent pieces of code in all of PC-98 Touhou, with a slightly different implementation in each of the 6 binaries that was waiting for its equally messy decompilation ever since.
Most of these inconsistencies just add bloat, but TH05's different stage number defaults for the Extra Stage do have the tiniest visible impact on the game. Since 2019 was before we had our current system of classifying weird code, let's take a quick look at this again:
In the end, this is a landmine, albeit a slightly unusual one. OP.EXE always needs to load GENSOU.SCR to determine whether the Extra Stage is unlocked and can be selected in the main menu. If that file is corrupted or doesn't exist yet, OP.EXE will always recreate it. Therefore, MAINE.EXE's recreation code would only ever run if GENSOU.SCR got deleted or corrupted while playing the game. This can only happen through code that runs outside the game or as the result of failing hardware, and thus goes beyond our criteria for observability.
On to the actual High Score screen then! The OP.EXE code I decompiled here only covers the viewer, the actual score registration is part of MAINE.EXE and is a completely different beast that only shares a few code snippets at best. This means that I'll have to do this all over again at some point down the line, which will result in another few pushes that look very similar to this one. 🥲
By now, it's no surprise that even this static screen has more or less the same density of bugs, landmines, and bloat as ZUN's more dynamic and animated menus. This time however, the worst source of bloat lies more on the meta level: TH04's version explicitly spells out every single loading and rendering call for both of that game's playable characters, rather than covering them with loops like TH05 does for its four characters. As a result, the two games only share 3¼ out of the 7 functions in even this simple viewer screen. It definitely didn't have to be this way.
On the bright side, the code starts off with a feature that probably only scoreplayers and their followers have been consciously aware of: The High Score screens can display 9-digit scores without glitches, unlike the in-game HUD's infamous overflow that turns the 8th digit into a letter once the score exceeds 100 million points.
To understand why this is such a surprise, we have to look at how scores are tracked in-game where the glitch does happen. This brings us back to the binary-coded decimal format that the final three PC-98 Touhou games use for their scores, which we didn't have to deal with 📝 for almost three years. On paper, the fixed-size array of 8 digits used by the three games would leave no room for a 9th one, so why don't we get a counterstop at 99,999,999 points, similar to what happens in modern Touhou? Let's look at the concrete example of adding, say, 200,000 points to a score of 99,899,990 points, and step through the algorithm for the most significant four digits:
| score | BCD delta |
| --- | --- |
| 09 09 08 09 09 09 09 00 | + 00 00 02 00 00 00 00 00 |
| = 09 09 08 09 09 09 09 00 | + 00 00 02 00 00 00 00 00 |
| = 09 0A 00 09 09 09 09 00 | + 00 00 02 00 00 00 00 00 |
| = 0A 00 00 09 09 09 09 00 | + 00 00 02 00 00 00 00 00 |
| = 0A 00 00 09 09 09 09 00 | |
It sure is neat how ZUN arranged the gaiji font in such a way that the HUD's rendering is an exact visual representation of the bytes in memory… at least for scores between 100,000,000 (A0000000) and 159,999,999 (F9999999) inclusive.
Formatted as big-endian for easier reading. Here's the relevant undecompilable ASM code, featuring the venerable AAA instruction.
In other words: The carry of each addition is regularly added to the next digit as if it were binary, and then the next iteration has to adjust that value as necessary and pass along any carry to the digit after that. But once we've reached the most significant digit, there is nowhere left for its carry to go. So it just stays there, leaving the last digit with a value greater than 9 and effectively turning it from a BCD digit into a regular old 8-bit binary value. This leaves us with a maximum representable score of 2,559,999,999 points (FF 09 09 09 09 09 09 09) – and with the scores achieved by current TAS runs being far below that limit in both games, it's definitely not worth it to bother about rendering that 10th score digit anywhere.
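Translated into C++ for clarity, the algorithm amounts to the following sketch. This is a re-interpretation of the behavior described above, not ZUN's actual code, which is built around AAA and can't be directly expressed in C-level languages anyway:

```c++
#include <array>
#include <stdint.h>

// One BCD digit per byte; index 0 = most significant, as in the table above.
using Score = std::array<uint8_t, 8>;

void score_add(Score &score, const Score &delta)
{
	uint8_t carry = 0;
	for(int i = 7; i >= 0; i--) {
		const uint8_t sum = (score[i] + delta[i] + carry);
		carry = (sum >= 10);

		// AAA-style adjustment… except for the most significant digit,
		// whose carry has nowhere to go. That digit simply keeps its full
		// 8-bit binary value, which is how scores of up to 2,559,999,999
		// points remain representable.
		score[i] = ((carry && (i != 0)) ? (sum - 10) : sum);
	}
}
```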
In the High Score screens, ZUN also zero-padded each score to 8 digits, but only blitted the 9th digit into the padding between name and score if it's nonzero. From this code detail alone, we can tell that ZUN was fully aware of ≥100 million points being possible, but probably considered such high scores unlikely enough to not bother rearranging the in-game HUD to support 9 digits. After all, it only looks like there's plenty of unused space next to the HUD, but in reality, it's tightly surrounded by important VRAM regions on both sides: The 32 pixels to the left provide the much-needed sprite garbage area to support 📝 visually clipped sprites despite master.lib's lack of sprite clipping, and the 64 pixels to the right are home to the 📝 tile source area:
It sure wouldn't have been impossible. You could either sacrifice the two tiles that would cover the 9th digit in both the HiScore and Score row, or – even better – move these tiles under the existing padding space within the HUD. 📝 The tile sections of TH04 and TH05 already address their images using raw VRAM addresses, so this wouldn't have even required an additional tile index→VRAM address lookup table.
And sure enough, ZUN confirms this awareness in TH04's OMAKE.TXT:
However, the highest score that the High Score screens of both games can display without visual glitches is not 999,999,999, as you would expect from 9 digits, but rather…
959 million?
(Also, this 9th digit nicely highlights a slight asymmetry in TH04's screen, where Marisa gets 4 fewer pixels of padding between names and scores.)
What a weird limit. Regardless of whether GENSOU.SCR saves its scores in a sane unsigned 32-bit format or a silly 8-digit BCD one, this limit makes no sense in either representation. In fact, GENSOU.SCR goes even further than BCD values, and instead uses… the ID of the corresponding gaiji in the 📝 bold font?
How cute. No matter how you look at it, storing digits with an added offset of 160 makes no sense:
It's suboptimal for the High Score screens (which want to display scores with the digit sprites from SCNUM.BFT and thus have to subtract 160 from every digit),
it's suboptimal for the HiScore row in the in-game HUD (which also needs actual digits under the hood for easier comparison and replacement with the current Score, and rendering just adds 160 again), and
it doesn't even work as obfuscation (with an offset of 160 / 0xA0, you can always read the number by just looking at the lower 4 bits, and each character/rank section in GENSOU.SCR is encrypted with its own key anyway).
It does start to explain the 959 million limit, though. Since each digit in GENSOU.SCR takes up 1 byte as well, they are indeed limited to a maximum value of (255 - 160) = 95 before they wrap back to 0.
But wait. If the game simply subtracts 160 from the gaiji index to get the digit value, shouldn't this subtraction also wrap back around from 0 to 255 and recover higher values without issue? The answer is, 📝 again, C's integer promotion: Splitting the binary value into two digits involves a division by 10, and the C standard mandates that a regular untyped 10 is always of type int. The uint8_t digit operand therefore gets promoted to match, the subtraction result is actually negative, and it never gets recognized as a 9th digit because no negative value is ≥10.
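Reduced to a minimal example; the variable names are made up, but the arithmetic is exactly what the standard mandates:

```c++
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	// A top digit of 100 would be stored as ((160 + 100) & 0xFF) = 4.
	const uint8_t gaiji = 4;

	// Promoted to int: (4 - 160) = -156. No wraparound back to 100.
	const int digit = (gaiji - 160);

	printf("%d\n", (digit >= 10));        // 0: never seen as a 9th digit…
	printf("%d\n", uint8_t(gaiji - 160)); // 100: …despite an explicit
	                                      // uint8_t truncation recovering
	                                      // the intended value just fine.
	return 0;
}
```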
So what would happen if we were to enter a score that exceeds this limit? The registration screen in MAINE.EXE doesn't display the 9th digit and the 8th one wraps around. But it still sorts the score correctly, so at least the internal processing seems to work without any problem…
(160 + 99) = 259, which wraps around to 3, so this makes perfect sense. We'll figure out the exact logic behind the differently colored sprite once RE progress reaches this screen.
But once you try viewing this score, you're instead greeted with VRAM corruption resulting from master.lib's super_put() function not bounds-checking the negative sprite IDs passed by the viewer:
In a rare case for PC-98 Touhou, the High Score viewer also hides two interesting details regarding its BGM. Just like for the graphics, ZUN also coded a fade-in call for the music. In abbreviated ASM code:
mov ax, 0000h ; PMD AH=00h (start music playback)
int 60h
mov ax, 0280h ; PMD AH=02h (fade in/out)
int 60h
However, the AH=02h fade-in call has no effect because AH=00h resets the music volume and would need to be followed by a volume-lowering AH=19h call. But even if there were such a call, the fade-in would sound terrible: 80h corresponds to the fastest possible fade-in speed of -128, which is almost but not quite instant. As such, the fade-in would leave the initial note on each channel muted while the rest of the track fades in very abruptly, which clashes badly with the bass and chord notes you'd expect to hear in the name registration themes of the two games:
At least the first issue could have been avoided if PMD's AH=00h call took optional parameters that describe the initial playback state instead of relying on these mutating calls later on. After all, it might be entirely possible for a bunch of interrupts to fire between AH=00h and these further calls, and if those interrupts take a while, the FM chip might have already played a few samples at PMD's default volume. Sure, Real Mode doesn't stop you from wrapping this sequence in CLI and STI instructions to work around this issue, but why rely on even more CPU state mutation when there would have been plenty of free x86 registers for passing more initial state to AH=00h?
The second detail is the complete opposite: It's a fade-out when leaving the menu, it uses PMD's slowest fade speed, and it does work and sound good. However, the speed is so slow that you typically barely notice the feature before the main menu theme starts playing again. But ZUN hid a small easter egg in the code: After the title screen background faded back in, the games wait for all inputs to be released before moving back into the main menu and playing the title screen theme. By holding any key when leaving the High Score viewer, you can therefore listen to the fade-out for as long as you want.
Although when I said that it works, this does not include TH04. 📝 As 📝 usual, this game's menus do not address the PC-98's keyboard scancode quirk with regard to held keys, causing the loop to break even while the player is still holding a key. There are 21 not yet RE'd input polling calls in TH02 and TH04 that will most certainly reveal similar inconsistencies – are you excited yet?
But in TH05, holding a key indeed reveals the hidden content of a 37-second fade-out:
I'm holding Esc here, but this works with any key, even the ⬅️ left and ➡️ right arrow keys that don't quit out of the menu.
As you can already tell by the markers, the final bugs in TH05's (and only TH05's) OP.EXE are palette-related and revealed by switching between these two screens:
Why does the title screen initially use an ever so slightly darker palette than it does when returning from the menu?
What's with the sudden palette change between frames 1 and 2? Why are the colors suddenly much brighter?
1) is easily traced and attributed to an off-by-one error in the animation's palette fade code, but 2) is slightly more complex. This palette glitch only happens if the High Score viewer is the first palette-changing submenu you enter after the 📝 title animation. Just like 📝 TH03's character portraits, both TH04 and TH05 load the sprites for the High Score screen's digits (SCNUM.BFT) and rank indicator (HI_M.BFT) as soon as the title animation has finished. Since these are regular BFNT sprite sheets, ZUN loads them using master.lib's super_entry_bfnt(), and that's where the issue hides: master.lib's blocking palette fade functions operate on master.lib's main 8-bit palette, and super_entry_bfnt() overwrites this palette with the one in the BFNT header. Synchronizing the hardware palette with this newly loaded one would have immediately revealed this possibly unintended state mutation, but I get why master.lib might not have wanted to do that – after all, 📝 palette uploads aren't exactly cheap and would be very noticeable when loading multiple sprite sheets in a row.
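In master.lib terms, the sequence amounts to this sketch. super_entry_bfnt() is the call named above; palette_show() is master.lib's function for uploading the software palette to the hardware, i.e., exactly the call that isn't happening here:

```c++
#include <master.h>

void highscore_sprites_load(void)
{
	// Loads the sprite sheet, but also overwrites master.lib's main
	// 8-bit palette with the one from the BFNT header…
	super_entry_bfnt("HI_M.BFT");

	// …without a
	//     palette_show();
	// call that would sync the hardware palette with the newly loaded
	// one. The next blocking palette fade then suddenly operates on
	// (and reveals) HI_M.BFT's brighter colors.
}
```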
In any case, this is no problem in TH04 as that game's HI_M.BFT and OP1.PI have identical palettes. But in TH05, HI_M.BFT has a significantly brighter palette:
[Palette comparison: the 16 colors (0–15) of OP1.PI next to the noticeably brighter ones of HI01.PI / HI_M.BFT]
And that's 100% RE for TH05's OP.EXE! 🎉 TH04's counterpart is not far behind either now, and only misses its title screen animation to reach the same mark.
As for 100% finalization, there's still the not yet decompiled TH04/TH05 version of the ZUN Soft logo that separates both OP.EXE binaries from this goal. But as I've mentioned 📝 time and time again, the most fitting moment for decompiling that animation would be right before reaching 100% on the entirety of either game. Really – as long as we aren't there, your funding is better invested into literally anything else. The ZUN Soft logo does not interact with or block work on any other part of the game, and any potential modding should be easy enough on the ASM level.
But thankfully, nobody actually scrolls down to the Finalized section. So I can rest assured that no one will take that moment away from me!
Next up: I'd kinda like to stay with PC-98 Touhou for a little longer, but the current backlog is pulling in too many different directions and doesn't convincingly point toward one goal over any other. TH02 is close, but with an active subscription, it makes more sense to accumulate 3 pushes of funding and then go for that game's bullet system in January. This is why I'm OK with subscriptions exceeding the cap every once in a while, because they do allow me to plan ahead in the long term.
So, let's wait a few days for all of you to capture the open cap towards something more specific. But if the backlog stays as indecisive as it is now, I'll instead go for finishing the Shuusou Gyoku Linux port, hopefully in time for the holiday season.
As for prices, this indeed seems to be the point where my supply meets the community's demand for this project and the store no longer sells out immediately. So for the time being, we're going to stay at that push price and I won't increase it any further upon hitting the cap.
Remember when ReC98 was about researching the PC-98 Touhou games? After over half a year, we're finally back with some actual RE and decompilation work. The 📝 build system improvement break was definitely worth it though, the new system is a pure joy to use and injected some newfound excitement into day-to-day development.
And what game would be better suited for this occasion than TH03, which currently has the highest number of individual backers interested in it? Funding the full decompilation of TH03's OP.EXE is the clearest signal you can send me that 📝 you want your future TH03 netplay to be as seamlessly integrated and user-friendly as possible. We're just two menu screens away from reaching that goal anyway, and the character selection screen fits nicely into a single push.
The code of a menu typically starts with loading all its graphics, and TH03's character selection already stands out in that regard due to the sheer amount of image data it involves. Each of the game's 9 selectable characters comes with
a 192×192-pixel portrait (??SL.CD2),
a 32×44-pixel pictogram describing her Extra Attack (in SLEX.CD2), and
a 128×16-pixel image of her name (in CHNAME.BFT). While this image merely consists of regular boldfaced versions of font ROM glyphs that the game could render procedurally, pre-rendering these names and keeping them around in memory does make sense for performance reasons, as we're soon going to see. What doesn't make sense, though, is the fact that this is a 16-color BFNT image instead of a monochrome one, wasting both memory and rendering time.
Luckily, ZUN was sane enough to draw each character's stats programmatically. If you've ever looked through this game's data, you might have wondered where the game stores the sprite for an individual stat star. There's SLWIN.CDG, but that file just contains a full stat window with five stars in all three rows. And sure enough, ZUN renders each character's stats not by blitting sprites, but by painting (5 - value) yellow rectangles over the existing stars in that image.
The only stat-related image you will find as part of the game files. The number of stat stars per character is hardcoded and not based on any other internal constant we know about.
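In master.lib terms, that overpainting could look roughly like this sketch; the coordinates, the star cell size, and the yellow color index are all made up for illustration:

// Overpaint the rightmost (5 - value) stars of the pre-rendered 5-star
// row inside SLWIN.CDG. All constants here are hypothetical.
#define STAR_LEFT 64
#define STAR_TOP  96
#define STAR_W    16
grcg_setcolor(GC_RMW, 14 /* hypothetical yellow */);
for(int star = value; star < 5; star++) {
	grcg_boxfill(
		(STAR_LEFT + (star * STAR_W)), STAR_TOP,
		(STAR_LEFT + ((star + 1) * STAR_W) - 1), (STAR_TOP + STAR_W - 1)
	);
}
grcg_off();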
Together with the EXTRA🎔 window and the question mark portrait for Story Mode, all of this sums up to 255,216 bytes of image data across 14 files. You could remove the unnecessary alpha plane from SLEX.CD2 (-1,584 bytes) or store CHNAME.BFT in a 1-bit format (-6,912 bytes), but using 3.3% less memory barely makes a difference in the grand scheme of things.
From the code, we can assume that loading such an amount of data all at once would have led to a noticeable pause on the game's target PC-98 models. The obvious alternative would be to just start out with the initially visible images and lazy-load the data for other characters as the cursors move through the menu, but the resulting mini-latencies would have been bound to cause minor frame drops as well. Instead, ZUN opted for a rather creative solution: By segmenting the loading process into four parts and moving three of these parts ahead into the main menu, we instead get four smaller latencies in places where they don't stick out as much, if at all:
1) The loading process starts at the logo animation, with Ellen's, Kotohime's, and Kana's portraits getting loaded after the 東方夢時空 letters finished sliding in. Why ZUN chose to start with characters #3, #4, and #5 is anyone's guess.
2) Reimu's, Mima's, and Marisa's portraits as well as all 9 EXTRA🎔 attack pictograms are loaded at the end of the flash animation once the full title image is shown on screen and before the game is waiting for the player to press a key.
3) The stat and EXTRA🎔 windows are loaded at the end of the main menu's slide-in animation… together with the question mark portrait for Story Mode, even though the player might not actually want to play Story Mode.
4) Finally, the game loads Rikako's, Chiyuri's, and Yumemi's portraits after it cleared VRAM upon entering the Select screen, regardless of whether the latter two are even unlocked.
I don't like how ZUN implemented this split by using three separately named standalone functions with their own copy-pasted character loop, and the load calls for specific files could have also been arranged in a more optimal order. But otherwise, this has all the ingredients of good-code. As usual, though, ZUN then definitively ruins it all by counteracting the intended latency hiding with… deliberately added latency frames:
The entire initialization process of the character selection screen, including Step #4 of image loading, is enforced to take at least 30 frames, with the count starting before the switch to the Selection theme. Presumably, this is meant to give the player enough time to release the Z key that entered this menu, because holding it would immediately select Reimu (in Story mode) or the previously selected 1P character (in VS modes) on the very first frame. But this is a workaround at best – and a completely unnecessary one at that, given that regular navigation in this menu already needs to lock keys until they're released. In the end, you can still auto-select the default choice by just not releasing the Z key.
And if that wasn't enough, the 1P vs. 2P variant of the menu adds 16 more frames of startup delay on top.
Sure, loading the fourth part's 69,120 bytes from a highly fragmented hard drive might even have taken longer than 30 frames on a period-correct PC-98, but the point still stands that these delays don't solve the problem they are supposed to solve.
But the unquestionable main attraction of this menu is its fancy background animation. Mathematically, it consists of Lissajous curves with a twist: Instead of calculating each point as
x = sin((fx·t) + ẟx), y = sin((fy·t) + ẟy),
TH03 effectively calculates its points as
x = cos(fx·((t + ẟx) % 0x100)), y = sin(fy·((t + ẟy) % 0x100)),
due to t and ẟ being 📝 8-bit angles. Since the result of the addition remains 8-bit as well, it can and will regularly overflow before the frequency scaling factors fx and fy are applied, thus leading to sudden jumps between both ends of the 8-bit value range. The combination of this overflow and the gradual changes to fx and fy creates all these interesting splits along the 360° of the curve:
At a high level, there really is just one big curve and one small curve, plus an array of trailing curves that approximate motion blur by subtracting from ẟx and ẟy.
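In code, the per-point math boils down to something like this sketch; the table names, the Q8.8 format of the frequency factors, and the plotting helper are assumptions based on the description above, not the actual decompilation:

#include <stdint.h>

extern const int16_t SIN256[0x100]; // one full sine period over 8-bit angles
extern const int16_t COS256[0x100];
extern void plot_point(int16_t x, int16_t y); // clipping + GRCG write

void draw_curve(uint8_t delta_x, uint8_t delta_y, uint16_t fx_q8_8, uint16_t fy_q8_8)
{
	for(unsigned int i = 0; i < 0x100; i++) {
		// The 8-bit sums wrap modulo 0x100 *before* the frequency factors
		// are applied – this overflow is what splits the curve.
		const uint8_t tx = static_cast<uint8_t>(i + delta_x);
		const uint8_t ty = static_cast<uint8_t>(i + delta_y);
		const uint8_t ax = static_cast<uint8_t>((tx * fx_q8_8) >> 8);
		const uint8_t ay = static_cast<uint8_t>((ty * fy_q8_8) >> 8);
		plot_point(COS256[ax], SIN256[ay]);
	}
}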
In a rather unusual display of mathematical purity, ZUN fully re-calculates all variables and every point on every frame from just the single byte of state that indicates the current time within the animation's 128-frame cycle. However, that beauty is quickly tarnished by the sheer cost of fully recalculating these curves every frame:
In total, the effect calculates, clips, and plots 16 curves: 2 main ones, with up to 7×2 = 14 darker trailing curves.
Each of these curves is made up of the 256 maximum possible points you can get with 8-bit angles, giving us 4,096 points in total.
Each of these points takes at least 333 cycles on a 486 if it passes all clipping checks, not including VRAM latencies or the performance impact of the 📝 GRCG's RMW mode.
Due to the larger curve's diameter of 440 pixels, a few of the points at its edges are needlessly calculated only to then be discarded by the clipping checks as they don't fit within the 400 VRAM rows. Still, >1.3 million cycles for a single frame remains a reasonable ballpark assumption.
This is decidedly more than the 1.17 million cycles we have between each VSync on the game's target 66 MHz CPUs. So it's not surprising that this effect is not rendered at 56.4 FPS, but instead drops the frame rate of the entire menu by targeting a hardcoded 1 frame per 3 VSync interrupts, or 18.8 FPS. Accordingly, I reduced the frame rate of the video above to represent the actual animation cycle as cleanly as possible.
Apparently, ZUN also tested the game on the 33 MHz PC-98 model that he targeted with TH01 and TH02, and realized that 4,096 points were way too much even at 18.8 FPS. So he also added a mechanism that decrements the number of trailing curves if the last frame took ≥5 VSync interrupts, down to a minimum of only a single extra curve. You can see this in action by underclocking the CPU in your Neko Project fork of choice.
But were any of these measures really necessary? Couldn't ZUN just have allocated a 12 KiB ring buffer to keep the coordinates of previous curves, thus reducing per-frame calculations to just 512 points? Well, he could have, but such a buffer wouldn't help us optimize the original animation now: The 8-bit main angle offset/animation cycle variable advances by 0x02 every frame, but some of the trailing curves subtract odd numbers from this variable and thus fall between two frames of the main curves.
So let's shelve the idea of high-level algorithmic optimizations. In this particular case though, even micro-optimizations can have massive benefits. The sheer number of points magnifies the performance impact of every suboptimal code generation decision within the inner point loop:
Frequency scaling works by multiplying the 8-bit angles with a fixed-point Q8.8 factor. The result is then scaled back to regular integers via… two divisions by 256 rather than two bitshifts? That's another ≥46 cycles where ≥4 would have sufficed – see the sketch after this list.
The biggest gains, however, would come from inlining the two far calls to the 5-instruction function that calculates one dimension of a polar coordinate, saving another ≥100 cycles.
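To make that first point concrete, here's the difference as a sketch, using made-up variable names and the cycle numbers from above:

// As compiled: two 16-bit signed divisions per point, ≥46 cycles in total.
x = ((angle_x * fx_q8_8) / 256);
y = ((angle_y * fy_q8_8) / 256);
// With bitshifts: ≥4 cycles in total, and identical results as long as the
// scaled angle values can't go negative.
x = ((angle_x * fx_q8_8) >> 8);
y = ((angle_y * fy_q8_8) >> 8);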
Multiplied by the number of points, even these low-hanging fruit already save a whopping ≥753,664 cycles per frame on an i486, without writing a single line of ASM! On Pentium CPUs such as the one in the PC-9821Xa7 that ZUN supposedly developed this game on, the savings are slightly smaller because far calls are much faster, but still come in at a hefty ≥491,520 cycles. Thus, this animation easily beats 📝 TH01's sprite blitting and unblitting code, which just barely hit the 6-digit mark of wasted cycles, and snatches the crown of being the single most unoptimized code in all of PC-98 Touhou.
The incredible irony here is that TH03 is the point where ZUN 📝 really 📝 started 📝 going 📝 overboard with useless ASM micro-optimizations, yet he didn't even begin to optimize the one thing that would have actually benefitted from it. Maybe he 📝 once again went for the 📽️ cinematic look 📽️ on purpose?
Unlike TH01's sprites though, all this wasted performance doesn't really matter much in the end. Sure, optimizing the animation would give us more trailing curves on slower PC-98 models, but any attempt to increase the frame rate by interpolating angles would send us straight into fanfiction territory. Due to the 0x02/2.8125° increment per cycle, tripling the frame rate of this animation would require a change to a very awkward log₂(384) ≈ 8.58-bit angle format, complete with a new 384-entry sine/cosine lookup table. And honestly, the effect does look quite impressive even at 18.8 FPS.
There are three more bugs and quirks in this animation that are unrelated to performance:
If you've tried counting the number of trailing dots in the video above, you might have noticed that the very first frame actually renders 8×2 trailing curves instead of 7×2, thus plotting an even higher total of 4,608 points. What's going on there is that ZUN actually requested 8 trailing curves, but then forgot to reset the VSync counter after the initial 30-frame delay. As a result, the game always thinks that the first frame of the menu took ≥30 VSync interrupts to render, thus causing the decrement mechanism to kick in and deterministically reduce the trailing curve count to 7.
This is a textbook example of my definition of a ZUN bug: The code unmistakably says 8, and we only don't get 8 because ZUN forgot to mutate a piece of global state.
The small trailing curves have a noticeable discontinuity where they suddenly get rotated by ±90° between the last and first frame of the animation cycle.
This quirk comes down to the small curve's ẟy angle offset being calculated as ((c/2)-i), with i being the number of the trailing curve. Halving the main cycle variable effectively restricts this smaller curve to only the first half of the sine oscillation, between [0x00, 0x80[. For the main curve, this is fine as i is always zero. But once the trailing curves leave us with a negative value after the subtraction, the resulting angle suddenly flips over into the second half of the sine oscillation that the regular curve never touches. And if you recall how a sine wave looks, the resulting visual rotation immediately makes sense:
Negated input, negated output.
Removing the division would be the most obvious fix, but that would double the speed of the sine oscillation and change the shape of the curve way beyond ZUN's intentions. The second-most obvious fix involves matching the trailing curves to the movement of the main one by restricting the subtraction to the first half of the oscillation, i.e., calculating ẟy as (((c/2)-i) % 0x80) instead. With c increasing by 0x02 on each frame of the animation, this fix would only affect the first 8 frames.
ZUN decided to plot the darker trailing curves on top of the lighter main ones. Maybe it should have been the other way round?
Now with the full 18 curves, a direction change of the smaller trailing curves at the end of the loop that only looks slightly odd, and a reversed and more natural plotting order.
Now that we fully understand how the curve animation works, there's one more issue left to investigate. Let's actually try holding the Z key to auto-select Reimu on the very first frame of the Story Mode Select screen:
The confirmation flash even happens before the menu's first page flip.
Stepping through the individual frames of the video above reveals quite a bit of tearing, particularly when VRAM is cleared in frame 1 and during the menu's first page flip in frame 49. This might remind you of 📝 the tearing issues in the Music Rooms – and indeed, this tearing is once again the expected result of ZUN landmines in the code, not an emulation bug. In fact, quite the contrary: Scanline-based rendering is a mark of quality in an emulator, as it always requires more coding effort and processing power than not doing it. Everyone's favorite two PC-98 emulators from 20 years ago might look nicer on a per-frame basis, but only because they effectively hide ZUN's frequent confusion around VRAM page flips.
To understand these tearing issues, we need to consider two more code details:
1) If a frame took longer than 3 VSync interrupts to render, ZUN flips the VRAM pages immediately without waiting for the next VSync interrupt.
2) The hardware palette fade-out is the last thing done at the end of the per-frame rendering loop, but before busy-waiting for the VSync interrupt.
The combination of 1) and the aforementioned 30-frame delay quirk explains Frame 49. There, the page flip happens within the second frame of the three-frame chunk while the electron beam is drawing row #156. DOSBox-X doesn't try to be cycle-accurate to specific CPUs, but 1 menu frame taking 1.39 real-time frames at 56.4 FPS is roughly in line with the cycle counting we did earlier.
Frame 97 is the much more intriguing one, though. While it's mildly amusing to see the palette actually go brighter for a single frame before it fades out, the interesting aspect here is that 2) practically guarantees its palette changes to happen mid-frame. And since the CRT's electron beam might be anywhere at that point… yup, that's how you'd get more than 16 colors out of the PC-98's 16-color graphics mode. 🎨
Let's exaggerate the brightness difference a bit in case the original difference doesn't come across too clearly on your display:
Probably not too much of a reason for demosceners to get excited; generic PC-98 code that doesn't try to target specific CPUs would still need a way of reliably timing such mid-frame palette changes. Bit 6 (0x40) of I/O port 0xA0 indicates HBlank, and the usual documentation suggests that you could just busy-wait for that bit to flip, but an HBlank interrupt would be much nicer.
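Sketched out, such a busy-wait could look like this; the HBlank bit is the one documented above, and the 16-color palette ports at 0xA8-0xAE are the usual documented set, but this is an illustration rather than tested code:

#include <dos.h>

// Spin until HBlank, then write one palette entry during the blanking period.
void palette_set_during_hblank(int index, int r, int g, int b)
{
	while(!(inportb(0xA0) & 0x40)) { /* wait for bit 6 to flip */ }
	outportb(0xA8, index); // palette index register
	outportb(0xAA, g);     // green
	outportb(0xAC, r);     // red
	outportb(0xAE, b);     // blue
}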
This reproduces on both DOSBox-X and Neko Project 21/W, although the latter needs the Screen → Real palettes option enabled to actually emulate a CRT electron beam. Unfortunately, I couldn't confirm it on real hardware because my PC-9821Nw133's screen vinegar'd at the beginning of the year. But just as with the image loading times, TH03's remaining code sort of indicates that mid-frame palette changes were noticeable on real hardware, by means of this little flag I RE'd way back in March 2019. Sure, palette_show() takes >2,850 cycles on a 486 to downconvert master.lib's 8-bit palette to the GDC's 4-bit format and send it over, and that might add up with more than one palette-changing effect per frame. But tearing is a way more likely explanation for deferring all palette updates until after VSync and to the next frame.
And that completes another menu, placing us a very likely 2 pushes away from completing TH03's OP.EXE! Not many of those left now…
To balance out this heavy research into a comparatively small amount of code, I slotted in 2024's Part 2 of my usual bi-annual website improvements. This time, they went toward future-proofing the blog and making it a lot more navigable. You've probably already noticed the changes, but here's the full changelog:
The Progress blog link in the main navigation bar now points to a new list page with just the post headers and each post's table of contents, instead of directly overwhelming your browser with a view of every blog post ever on a single page.
If you've been reading this blog regularly, you've probably been starting to dread clicking this link just as much as I've been. 14 MB of initially loaded content isn't too bad for 136 posts with an increasing amount of media content, but laying out the now 2 MB of HTML sure takes a while, leaving you with a sluggish and unresponsive browser in the meantime. The old one-page view is still available at a dedicated URL in case you want to Ctrl-F over the entire history from time to time, but it's no longer the default.
The new 🔼 and 🔽 buttons now allow quick jumps between blog posts without going through the table of contents or the old one-page view. These work as expected on all views of the blog: On single-post pages, the buttons link to the adjacent single-post pages, whereas they jump up and down within the same page on the list of posts or the tag-filtered and one-page views.
The header section of each post now shows the individual goals of each push that the post documents, providing a sort of title. This is much more useful than wasting space with meaningless commit hashes; just like in the log, links to the commit diffs don't need to be longer than a GitHub icon.
The web feeds that 📝 handlerug implemented two years ago are now prominently displayed in the new blog navigation sub-header. Listing them using <link rel="alternate"> tags in the HTML <head> is usually enough for integrated feed reader extensions to automatically discover their presence, but it can't hurt to draw more attention to them. Especially now that Twitter has been locking out unregistered users for quite some time…
Speaking of microblogging platforms, I've now also followed a good chunk of the Touhou community to Bluesky! The algorithms there seem to treat my posts much more favorably than Twitter has been doing lately, despite me having less than 1/10 of mostly automatically migrated followers there. For now, I'm going to cross-post new stuff to both platforms, but I might eventually spend a push to migrate my entire tweet history over to a self-hosted PDS to own the primary source of this data.
Next up: Staying with main menus, but jumping forward to TH04 and TH05 and finalizing some code there. Should be a quick one.
P0286
tupblocks (import std; support)
P0287
Seihou / Shuusou Gyoku (Code cleanup + Game logic portability, part 2/? + Fixes for bugs and landmines)
P0288
Seihou / Shuusou Gyoku (Getting pbg's code through static analysis)
P0289
Seihou / Shuusou Gyoku (Game logic portability, part 3/? + Graphics refactoring, part 3/5: Preparations and colors)
P0290
Seihou / Shuusou Gyoku (Graphics refactoring, part 4/5: Geometry, enumeration, and software rendering)
P0291
Seihou / Shuusou Gyoku (Graphics refactoring, part 5/5: Clipping, sprites, and initialization)
P0292
Seihou / Shuusou Gyoku (Cross-platform APIs, part 3/?: Main loop + Main menu refactoring)
P0293
Seihou / Shuusou Gyoku (Cross-platform APIs, part 4/?: SDL_Renderer backend)
P0294
Seihou / Shuusou Gyoku (Window and scaling modes, part 1/2)
P0295
Seihou / Shuusou Gyoku (Window and scaling modes, part 2/2 + Hotkeys) + Website (Adding missing money amounts to the log)
💰 Funded by:
Ember2528, [Anonymous]
And then, the Shuusou Gyoku renderer rewrite escalated to another 10-push monster that delayed the planned Seihou Summer™ straight into mid-fall. Guess that's just how things go these days at my current level of quality. Testing and polish made up half of the development time of this new build, which probably doesn't surprise anyone who has ever dealt with GPUs and drivers…
But first, let's finally deploy C++23 Standard Library Modules! I've been waiting for the promised compile-time improvements of modules for 4 years now, so I was bound to jump at the very first possible opportunity to use them in a project. Unfortunately, MSVC further complicates such a migration by adding one particularly annoying proprietary requirement:
Our own code wants to use both static analysis and modules.
MSVC therefore insists that the modules are also compiled with static analysis enabled.
But this in turn forces every other translation unit that consumes these modules, including pbg's code, to be built with static analysis enabled as well, …
… which means we're now faced with hundreds of little warnings and C++ Core Guideline violations from pbg's code. Sure, we could just disable all warnings when compiling pbg's source files and get on with rolling out modules, because they would still count as "statically analyzed" in this case. But that's silly. As development continues and we write more of our own modern code, more and more of it will invariably end up within pbg's files, merging and intertwining with original game code. Therefore, not analyzing these files is bound to leave more and more potential issues undetected. Heck, I've already committed a static initialization order fiasco by accident that only turned into an actual crash halfway through the development of these 10 pushes. Static analysis would have caught that issue.
So let's meet in the middle. Focus on a sensible subset of warnings that we would appreciate in our own code or that could reveal bugs or portability issues in pbg's code, but disable anything that would lead to giant and dangerous refactors or that won't apply to our own code. For example, it would sure be nice to rewrite certain instances of goto spaghetti into something more structured, but since we ourselves won't use goto, it's not worth worrying about within a porting project.
After deduplicating lots of code to reduce the sheer number of warnings, the single biggest remaining group of issues were the C-style casts littered throughout the code. These combine the unconstrained unsafety of C with the fact that most of them use the classic uppercase integer types from <windows.h>, adding a further portability aspect to this class of issues.
The perhaps biggest problem about them, however, is that casts are a unary operator with its own place in the precedence hierarchy. If you don't surround them with even more brackets to indicate the exact order of operations, you can confuse and mislead the hell out of anyone trying to read your code. This is how we end up with the single most devious piece of arithmetic I've found in this game so far:
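The snippet in question got lost on the way into this text, but judging from the rewrites below, it must have looked roughly like this; the exact spelling of the declaration and the 8-bit type name is my guess:

const auto d = (BYTE)(t->d + 4) / 8; // does the cast apply to the division, or to the dividend?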
If you don't look at vintage C code all day, this cast looks redundant at first glance. Why would you separately cast the result of this expression to the type of the receiving variable? However, casting has higher precedence than division, so the code actually downcasts the dividend, (t->d+4), not the result of the division. And why would pbg do that? Because the regular, untyped 4 is implicitly an int, C promotes t->d to int as well, thus avoiding the intended 8-bit overflow. If t->d is 252, removing the cast would therefore result in
((int{ 252 } + int{ 4 }) / 8) = (256 / 8) = 32,
not the 0 we wanted to have. And since this line is part of the sprite selection for VIVIT-captured-'s feather bullets, omitting the cast has a visible effect on the game:
The first file in GRAPH.DAT explains what we're seeing here.
So let's add brackets and replace the C-style cast with a C++ static_cast to make this more readable:
const auto d = (static_cast<uint8_t>(t->d + 4) / 8);
But that only addresses the precedence pitfall and doesn't tell us why we need that cast in the first place. Can we be more explicit?
const auto d = (((t->d + 4) & 0xFF) / 8);
That might be better, but still assumes familiarity with integer promotion for that mask to not appear redundant. What's the strongest way we could scream integer promotion to anyone trying to touch this code?
const auto d = (Cast::down_sign<uint8_t>(t->d + 4) / 8);
Of course, I also added a lengthy comment above this line.
Now we're talking! Cast::down_sign() uses static_asserts to enforce that its argument must be both larger and differently signed than the target type inside the angle brackets. This unmistakably clarifies that we want to truncate a promoted integer addition because the code wouldn't even compile if the argument was already a uint8_t. As such, this new set of casts I came up with goes even further in terms of clarifying intent than the gsl::narrow_cast() proposed by the C++ Core Guidelines, which is purely informational.
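Based on that description, a minimal sketch of the helper could look like this; the actual implementation in the codebase may well differ:

// Truncating cast that refuses to compile unless it actually truncates:
// the argument type must be larger and differently signed than [To].
// (Assumes `import std;` or <type_traits> for the traits.)
namespace Cast {
template <typename To, typename From> constexpr To down_sign(From v) noexcept
{
	static_assert(sizeof(To) < sizeof(From));
	static_assert(std::is_signed_v<To> != std::is_signed_v<From>);
	return static_cast<To>(v);
}
}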
OK, so replacing C-style casts is better for readability, but why care about it during a porting project? Wouldn't it be more efficient to just typedef the <windows.h> types for the Linux code and be done with it? Well, the ECL and SCL interpreters provide another good reason not to do that:
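The instances in question boil down to this pattern, reconstructed here for illustration; the buffer name and offset are made up, but the idea is a raw 32-bit read out of a byte-level script command buffer:

// Vintage style: grab a 32-bit parameter directly out of the byte stream.
const DWORD arg = *(DWORD *)(cmd + 1);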
In these instances, the DWORD type communicates that this codebase originally targeted Windows, and implies that the cmd buffer stores these 32-bit values in little-endian format. Therefore, replacing DWORD with the seemingly more portable uint32_t would actually be worse as it no longer communicates the endianness assumption. Instead, let's make the endianness explicit:
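…with a byte-level load helper along these lines, for example; U32LEAt() is a name I made up, and the codebase's actual helper may differ:

constexpr uint32_t U32LEAt(const uint8_t *p)
{
	// Little-endian load that is correct on any host endianness.
	return (
		(uint32_t{ p[0] } <<  0) |
		(uint32_t{ p[1] } <<  8) |
		(uint32_t{ p[2] } << 16) |
		(uint32_t{ p[3] } << 24)
	);
}

const auto arg = U32LEAt(cmd + 1);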
No surprises once we port this game to a big-endian system – and much fewer characters than a pedantic reinterpret_cast, too.
With that and another pile of improvements for my Tup building blocks, we finally get to deploy import std; across the codebase, and improve our build times by…
…not exactly the mid-three-digit percentages I was hoping for. Previously, a full parallel compilation of the Debug build took roughly 23.9s on my 6-year-old 6-core Intel Core i5-8400T. With modules, we now need to compile the C++ standard library a single time on every from-scratch rebuild or after a compiler version update, which adds an unparallelizable ~5.8s to the build time. After that though, all C++ code compiles within ~12.4s, yielding a still decent 92% speedup for regular development. 🎉 Let's look more closely into these numbers and the resulting state of the codebase:
Expecting three-digit speedups was definitely a bit premature as there were still several game-code translation units that #include <windows.h>. The subsequent graphics work removed a few more of these instances, which did bring the speedup into the three-digit range with a compilation time of ~11.6s by the end of P0295.
Supporting import-then-#include is crucial for supporting gradual migrations from headers to modules, but this is one of the most challenging features for compilers to implement, with both MSVC and Clang struggling. By now, MSVC admirably seems to handle all of the cases I ran into, except for this one:
// ENEMY.H
import std.compat;
inline bool LaserHITCHK(/* … */)
{
// […]
// Causes the compiler to instantiate the overloaded C++ version of
// std::abs() via the global namespace re-export in `std.compat`,
// not the C version.
w = abs(-sinl(d,tx) + cosl(d,ty));
// […]
}
// Later, in another header file included via <windows.h>…
// This header defines the C version of abs(), thus causing a duplicate
// definition error.
#include <stdlib.h>
The best solution here is to simply not define functions in headers. We could also blame this one on the std.compat module which re-exports the C standard library into the global namespace and thus creates these duplicated definitions in the first place, but come on, std::uint32_t is 13 characters. That is way too much typing and screen space for referring to basic fixed-size integer types.
📝 As we've thoroughly explored last time, Tup still ain't batching. Could it be that Tup's paradigm of spawning one cl.exe process per translation unit prevents us from using modules to their full throughput potential? The /cgthreads1 flag sounds like it should help in this regard. Let's do some profiling using cl.exe's undocumented /Bt flag to find out how the compilation times are distributed between the parsing and semantic analysis frontend (c1*.dll) and the code generation backend (c2.dll):
[Table: cumulative frontend (c1*.dll) and backend (c2.dll) compilation times of a Debug build on my system, as reported by /Bt, together with the total real time, for both the library code and the game code (60 TUs around the migration, 58 TUs at the end of P0295). Since the library code is all C and therefore unaffected by modules, its numbers are the average of the builds at all three tested commits.]
So yes, the Tup tax is real and adds somewhere between 30 and 40 ms per translation unit to the compilation time. cl.exe is simply better at parallelizing itself than any attempt to parallelize it from the outside. It feels inevitable that I'll eventually just fork Tup and add this batching functionality myself; the entire trajectory of my development career has been pointing towards that goal, and it would be the logical conclusion of my C++ build frustrations. But certainly not any time soon; the cost is not too high all things considered, I update libraries maybe once every second push, and I'll have done enough build system work for the foreseeable future after the Linux port is done.
These numbers also explain why /cgthreads1 has no measurable performance benefit for this codebase. You might think it's a good idea because Tup spawns one parallel cl.exe process per CPU core and we can't get any more real parallelism in such a situation. However, that's not what this option does – it only limits the number of code generation threads, and as the numbers show, code generation is the opposite of our bottleneck.
However, these compile time improvements come at the cost of modules completely breaking every major LSP at this point in time:
The C++ extension for Visual Studio Code crashes with this error in any file that includes several headers in addition to modules:
IntelliSense process crash detected: handle_initialize
Quick info operation failed: FE: 'Compiler exited with error - No IL available'
Consequently, it no longer provides any IntelliSense for either header or standard library code.
The big Visual Studio IDE politely remarks that C++ IntelliSense support for C++20 Modules is currently experimental and then silently doesn't provide IntelliSense for anything either.
When given a compile_commands.json from Tup via tup compiledb, clangd does continue to provide IntelliSense for both header code and the C++ standard library, but its actual lack of module support puts so many false-positive squiggly lines all over the code that it's not worth using either.
But in the end, the halved compile times during regular development are well worth sacrificing IntelliSense for the time being… especially given that I am the only one who has to live in this codebase. 🧠 And besides, modules bring their own set of productivity boosts to further offset this loss: We can now freely use modern C++ standard library features at a minuscule fraction of their usual compile time cost, and get to cut down the number of necessary #include directives. Once you've experienced the simplicity of import std;, headers and their associated micro-optimization of #include costs immediately feel archaic. Try the equally undocumented /d1reportTime flag to get an idea of the compile time impact of function definitions and template instantiations inside headers… I've definitely been moving quite a few of those to .cpp files within these 10 pushes.
However, it still felt like the earliest possible point in time where doing this was feasible at all. Without LSP support, modules still feel way too bleeding-edge for a feature that was added to the C++ standard 4 years ago. This is why I only chose to use them for covering the C++ standard library for now, as we have yet to see how well GCC or Clang handle it all for the Linux port. If we run into any issues, it makes sense to polyfill any workarounds as part of the Tup building blocks instead of bloating the code with all the standard library header inclusions I'm so glad to have gotten rid of.
Well, almost all of them, because we still have to #include <assert.h> and <stdlib.h> because modules can't expose preprocessor macros and C++23 has no macro-less alternative for assert() and offsetof(). 🤦 [[assume()]] exists, but it's the exact opposite of assert(). How disappointing.
As expected, static analysis also brought a small number of pbg code pearls into focus. This list would have fit better into the static analysis section, but I figured that my audience might not necessarily care about C++ all that much, so here it is:
Shuusou Gyoku only ever seeds its RNG in three places:
At program startup (with 0),
immediately before the game picks a random attract replay after 10 seconds of no input in the top level of the menu (with the current system time in milliseconds), and, obviously,
when starting a replay (with the replay's recorded seed), which ironically counteracts the above seed immediately after the game selected the replay.
Since neither the main menu nor any of the three weapon previews utilize the RNG, any new unrecorded round started immediately after launching the .exe will always start with a seed of 0. Similarly, recorded rounds calculate their seed from the next two RNG numbers, and will always start with a seed of 347 in the same situation. RNG manipulation is therefore as simple as crafting a replay file with the intended seed, starting its playback, and immediately quitting back to the main menu. The stage of the crafted replay only matters insofar as Stage 6 starts out by reading 320 numbers from the RNG to initialize its wavy clock and shooting star animations, so you'd preferably use any other stage as all of them take a while until they read their first random number.
Of course, even a shmup with a fixed seed is only as deterministic as the input it receives from the player, and typical human input deviations will quickly add more randomness back into the game.
The effective cap of stage enemies, player shots, enemy bullets, lasers, and items is 1 entity smaller than their static array sizes would suggest. pbg did this to work around a potential out-of-bounds write in a generic management function.
The in-game score display no longer overflows into negative numbers once the score exceeds (2³¹ - 1) points. Shuusou Gyoku did track the score using a signed 64-bit integer, but pbg accidentally used a 32-bit specifier for sprintf().
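The bug class in a nutshell (a sketch, not pbg's exact line):

#include <stdint.h>
#include <stdio.h>

int main()
{
	const int64_t score = INT64_C(2147483648); // one past INT32_MAX
	char buf[32];
	sprintf(buf, "%d", score);   // ⚠️ 32-bit specifier: formally undefined; in a
	                             // 32-bit build, this printed the low half: -2147483648
	sprintf(buf, "%lld", score); // correct 64-bit conversion specifier
	return 0;
}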
Alright, on to graphics! With font rendering and surface management mostly taken care of last year, the main focus for this final stretch was on all the geometric shapes and color gradients. pbg placed a bunch of rather game-specific code in the platform layer directly next to the Direct3D API calls, including point generation for circles and even the colors of gradient rectangles, gradient polygons, and the Music Room's spectrum analyzer. We don't want to duplicate any of this as part of the new SDL graphics layer, so I moved it all into a new game-level geometry system. By placing both the 8-bit and 16-bit approaches next to each other, this new system also draws more attention to the different approaches used at each bit depth.
So far, so boring. Said differences themselves are rather interesting though, as this refactor uncovered all of the remaining inconsistencies between the two modes:
In 8-bit mode, the game draws circles by writing pixels along the accurate outline into the framebuffer. The hardware-accelerated equivalent for the 16-bit mode would be a large unwieldy point list, so the game instead approximates circles by drawing straight lines along a regular 32-sided polygon:
It's not like the APIs prevent the 16-bit mode from taking the same approach as the 8-bit mode, so I suppose that pbg profiled this and concluded that lines offloaded to the GPU performed better than locking the framebuffer and writing pixels? Then again, given Shuusou Gyoku's comparatively high system requirements…
There's an off-by-one error in the playfield clipping region for Direct3D-rendered shapes, which ends at (511, 479) instead of (512, 480):
The fix is obvious.
There's an off-by-one error in the 8-bit rendering code for opaque rectangles that causes them to appear 1 pixel wider than in 16-bit mode. The red backgrounds behind the currently entered score are the only such boxes in the entire game; the transparent rectangles used everywhere else are drawn with the same width in both modes.
The game code also clearly asks for 400 and 14 pixels, respectively.
If we move the nice and accurate 8-bit circle outlines closer to the edge of the playfield, we discover, you guessed it, yet another off-by-one error:
No circle pixels at the right edge of the playfield. Obviously, I had to fix bug #2 in order for the line approximation to not also get clipped at the same coordinate.
The final off-by-one clipping error can be found in the filled circle part of homing lasers in 8-bit mode, but it's so minor that it doesn't deserve its own screenshot.
Now that all of the more complex geometry is generated as part of game code, I could simplify most of the engine's graphics layer down to the classic immediate primitives of early 3D rendering: Line strips, triangle strips, and triangle fans, although I'm retaining pbg's dedicated functions for filled boxes and single gradient lines in case a backend can or needs to use special abstractions for these. (Hint, hint…)
So, let's add an SDL graphics backend! With all the earlier preparation work, most of the SDL-specific sprite and geometry code turned out as a very thin wrapper around the, for once, truly simple function calls of the DirectMedia layer. Texture loading from the original color-keyed BMP files, for example, turned into a sequence of 7 straight-line function calls, with most of the work done by SDL_LoadBMP_RW(), SDL_SetColorKey(), and SDL_CreateTextureFromSurface(). And although SDL_LoadBMP_RW() definitely has its fair share of unnecessary allocations and copies, the whole sequence still loads textures ~300 µs faster than the old GDI and DirectDraw backend.
Being more modern than our immediate geometry primitives, SDL's triangle renderer only either renders vertex buffers as triangle lists or requires a corresponding index buffer to realize triangle strips and fans. On paper, this would require an additional memory allocation for each rendered shape. But since we know that Shuusou Gyoku never passes more than 66 vertices at once to the backend, we can be fancy and compute two constant index buffers at compile time. 🧠 SDL_RenderGeometryRaw() is the true star of the show here: Not only does it allow us to decouple position and color data compared to SDL's default packed vertex structure, but it even allows the neat size optimization of 8-bit index buffers instead of enforcing 32-bit ones.
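Here's what the triangle fan one of these two index buffers could look like; the 66-vertex maximum is the number from above, everything else is an illustrative sketch:

#include <array>
#include <stdint.h>

constexpr size_t VERTEX_MAX = 66;

// Expands a triangle fan over up to VERTEX_MAX vertices into the triangle
// list that SDL_RenderGeometryRaw() consumes, using 8-bit indices.
constexpr auto FAN_INDICES = [] {
	std::array<uint8_t, ((VERTEX_MAX - 2) * 3)> ret{};
	for(size_t tri = 0; tri < (VERTEX_MAX - 2); tri++) {
		ret[(tri * 3) + 0] = 0; // every triangle starts at the fan center
		ret[(tri * 3) + 1] = static_cast<uint8_t>(tri + 1);
		ret[(tri * 3) + 2] = static_cast<uint8_t>(tri + 2);
	}
	return ret;
}();

A fan with N vertices then simply passes the first ((N - 2) × 3) of these indices to SDL_RenderGeometryRaw(), with a size_indices of 1.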
By far the funniest porting solution can be found in the Music Room's spectrum analyzer, which calls for 144 1-pixel gradient lines of varying heights. SDL_Renderer has no API for rendering lines with multiple colors… which means that we have to render them as 144 quads with a width of 1 pixel.
The wireframe was generated via a raw glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
But all these simple abstractions have to be implemented somehow, and this is where we get to perhaps the biggest technical advantage of SDL_Renderer over pbg's old graphics backend. We're no longer locked into just a single underlying graphics API like Direct3D 2, but can choose any of the APIs that the team implemented the high-level renderer abstraction for. We can even switch between them at runtime!
On Windows, we have the choice between 3 Direct3D versions, 2 OpenGL versions, and the software renderer. And as we're going to see, all we should do here is define a sensible default and then allow players to override it in a dedicated menu:
Huh, we default to OpenGL 2.1? Aren't we still on Windows?
Since such a menu is pretty much asking for people to try every GPU ever with every one of these APIs, there are bound to be bugs with certain combinations. To prevent the potentially infinite workload, these bugs are exempt from my usual free bugfix policy as long as we can get the game working on at least one API without issues. The new initialization code should be resilient enough to automatically fall back on one of SDL's other driver APIs in case the default OpenGL 2.1 fails to initialize for whatever reason, and we can still fight about the best default API.
But let's assume the hopefully usual case of a functional GPU with at least decently written drivers where most of the APIs will work without visible issues. Which of them is the most performant/power-saving one on any given system? With every API having a slightly different idea about 3D rendering, there are bound to be some performance differences, and maybe these even differ between GPUs. But just how large would they be?
The answer is yes:
[Table: lowest and median FPS for each rendering API across the tested systems, from an Intel Core i5-2520M with Intel HD Graphics 3000 (2011) upward, rendered at 1120×840. Computed using pbg's original per-second debugging algorithm. Except for the Intel i7-4790 test, all of these use SDL's default geometry scaling mode as explained further below. The GeForce GTX 1070 could probably be twice as fast if it weren't inside a laptop that thermal-throttles after about 10 seconds of unlimited rendering.]
The two tested replays decently represent the entire game: In Stage 6, the software renderer frequently drops into low 1-digit FPS numbers as it struggles with the blending effects used by the Laser shot type's bomb, whereas GPUs enjoy the absence of background tiles. In the Extra Stage, it's the other way round: The tiled background and a certain large bullet cancel emphasize the inefficiency of unbatched rendering on GPUs, but the software renderer has a comparatively much easier time.
And that's why I picked OpenGL as the default. It's either the best or close to the best choice everywhere, and in the one case where it isn't, it doesn't matter because the GPU is powerful enough for the game anyway.
If those numbers still look way too low for what Shuusou Gyoku is (because they kind of do), you can try enabling SDL's draw call batching by setting the environment variable SDL_RENDER_BATCHING to 1. This at least doubles the FPS for all hardware-accelerated APIs on the Intel UHD 630 in the Extra Stage, and astonishingly turns Direct3D 11 from the slowest API into by far the fastest one, speeding it up by 22× for a median FPS of 1617. I only didn't activate batching by default because it causes stability issues with OpenGL ES 2.0 on the same system. But honestly, if even a mid-range laptop from 13 years ago manages a stable 60 FPS on the default OpenGL driver while still scaling the game, there's no real need to spend budget on performance improvements.
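For reference, the same hint can also be set programmatically before renderer creation, which is what a future build might do once that stability issue is resolved:

// Equivalent to SDL_RENDER_BATCHING=1 in the environment:
SDL_SetHint(SDL_HINT_RENDER_BATCHING, "1");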
If anything, these numbers justify my choice of not focusing on a specific one of these APIs when coding retro games. There are only very few fields that target a wider range of systems with their software than retrogaming, and as we've seen, each of SDL's supported APIs could be the optimal choice on some system out there.
📝 Last year, it seemed as if the 西方Project logo screen's lens ball effect would be one of the more tricky things to port to SDL_Renderer, and that impression was definitely accurate.
The effect works by capturing the original 140×140 pixels under the moving lens ball from the framebuffer into a temporary buffer and then overwriting the framebuffer pixels by shifting and stretching the captured ones according to a pre-calculated table. With DirectDraw, this is no big deal because you can simply lock the framebuffer for read and write access. If it weren't for the fact that you need to either generate or hand-write different code for every supported bit depth, this would be one of the most natural effects you could implement with such an API. Modern graphics APIs, however, don't offer this luxury because it didn't take long for this feature to become a liability. Even 20 years ago, you'd rather write this sort of effect as a pixel shader that would directly run on the GPU in a much more accelerated way. Which is a non-starter for us – we sure ain't breaking SDL's abstractions to write a separate shader for every one of SDL_Renderer's supported APIs just for a single effect in the logo screen.
As such, SDL_Renderer doesn't even begin to provide framebuffer locking. We can only get close by splitting the two operations:
Writing can only be done by getting the new pixels onto a texture first. Which in turn can either be done by updating a rectangular area with prepared pixel data from system memory, or locking a rectangular area and writing the pixels into a buffer. However, even SDL_LockTexture() is explicitly labeled as write-only. By returning an effectively uninitialized texture, you're forced to software-render your entire scene onto this texture anyway after locking.
This little detail in the API contract makes locking entirely unusable for this lens effect. Its code does not write to every pixel within the 140×140 area and relies on the unwritten pixels retaining their rendered color, just as you would expect regular memory to behave. If we are forced to prepare the full 140×140 pixels on the CPU, we might as well just go for the simpler and faster SDL_UpdateTexture().
Also, if SDL says "write-only access", does this mean we can't even be sure that the locked buffer is readable after we wrote some pixels and before we unlock the texture again? We'd only have to look at the PC-98's GRCG for an example of memory-mapped I/O where reading and writing can work fundamentally differently depending on the mode register. The OpenGL driver implements texture locking by allocating a separate buffer in main memory and then uploading this modified buffer to the GPU via glTexSubImage2D() upon unlocking, but the docs do leave open the possibility for a driver to return a pointer to GPU memory we can't or shouldn't read from.
In fact, the only sanctioned way of reading pixels back from a texture involves turning the texture into a render target and calling SDL_RenderReadPixels().
Within these API limitations, we can now cobble together a first solution:
1) Rely on render-to-texture being supported. This is the case for all APIs that are currently implemented for SDL 2's renderer and SDL 3 even made support mandatory, but who knows if we ever get our hands on one of the elusive SDL 2 console ports under NDA and encounter one of them that doesn't support it…
2) Create a 640×480 texture that serves as our editable framebuffer.
3) Create a 140×140 buffer in main memory, serving as the input and output buffer for the effect. We don't need the full 640×480 here because the effect only modifies the pixels below the magnified 140×140 area and doesn't push them further outside.
4) Retain the original main-memory 140×140 buffer from the DirectDraw implementation that captures the current frame's pixels under the lens ball before we modify the pixels.
Each frame, we then
a) render the scene onto 2),
b) capture the magnified area using SDL_RenderReadPixels(), reading from 2) and writing to 3),
c) copy 3) to 4) using a regular memcpy(),
d) apply the lens effect by shifting around pixels, reading from 4) and writing to 3),
e) write 3) back to 2), and finally
f) use 2) as the texture for a quad that scales the texture to the size of the window.
Compared to the DirectDraw approach, this adds the technical insecurity of render-to-texture support, one additional texture, one additional fullscreen blit, at least one additional buffer, and two additional copies that comprise a round-trip from GPU to CPU and back. It surely would have worked, but the warnings in the documentation and the horror stories surrounding SDL_RenderReadPixels() put me off even trying that approach. Also, it would turn out to clash with an implementation detail we're going to look at later.
However, our scene merely consists of a 320×42 image on top of a black background. If we need the resulting pixels in CPU-accessible memory anyway, there's little point in hardware-rendering such a simple scene to begin with, especially if SDL lets you create independent software renderers that support the same draw calls but explicitly write pixels to buffers in regular system memory under your full control.
This simplifies our solution to the following, with a code sketch after the list:
1) Create a 640×480 surface in main memory, acting as the target surface for SDL_CreateSoftwareRenderer(). But since the potentially hardware-accelerated renderer drivers can't render pixels from such surfaces, we still have to
2) create an additional 640×480 texture in write-only GPU memory.
3) Retain the original main-memory 140×140 buffer from the DirectDraw implementation that captures the current frame's pixels under the lens ball before we modify the pixels.
Each frame, we then
a) software-render the scene onto 1),
b) capture the magnified area using a regular memcpy(), reading from 1) and writing to 3),
c) apply the lens effect by shifting around pixels, reading from 3) and writing to 1),
d) upload all of 1) onto 2), and finally
e) use 2) as the texture for a quad that scales the texture to the size of the window.
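In SDL 2 calls, the whole thing could look roughly like this sketch, where hw is the regular hardware renderer, lens_buf is the buffer from 3), and the scene/lens helpers are made-up names for the steps described above:

#include <SDL.h>

// 1) + 2), created once when entering the logo screen:
SDL_Surface *fb = SDL_CreateRGBSurfaceWithFormat(0, 640, 480, 32, SDL_PIXELFORMAT_RGB888);
SDL_Renderer *soft = SDL_CreateSoftwareRenderer(fb);
SDL_Texture *tex = SDL_CreateTexture(hw, SDL_PIXELFORMAT_RGB888, SDL_TEXTUREACCESS_STATIC, 640, 480);

// Every frame:
render_scene(soft);                      // a) software-render onto 1)
capture_lens_area(lens_buf, fb);         // b) memcpy() out of 1)
apply_lens(fb, lens_buf);                // c) shifted writes back into 1)
SDL_UpdateTexture(tex, nullptr, fb->pixels, fb->pitch); // d) upload 1) onto 2)
SDL_RenderCopy(hw, tex, nullptr, nullptr);              // e) scale 2) to the window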
This cuts out the GPU→CPU pixel transfer and replaces the second lens pixel buffer with a software-rendered surface that we can freely manipulate. This seems to require more memory at first, but this memory would actually come in handy for screenshots later on. It also requires the game to enter and leave the new dedicated software rendering mode to ensure that the 西方Project image gets loaded as a system-memory "texture" instead of a GPU-memory one, but that's just two additional calls in the logo and title loading functions.
Also, we would now software-render all of these 256 frames, including the fades. Since software rendering requires the 西方Project image to reside in main memory, it's hard to justify an additional GPU upload just to render the 127 frames surrounding the animation.
Still, we've only eliminated a single copy, and SDL_UpdateTexture() can and will do even more under the hood. Suddenly, SDL having its own shader language seems like the lesser evil, doesn't it?
When writing it out like this, it sure looks as if hardware rendering adds nothing but overhead here. So how about full-on dropping into software rendering and handling the scaling from 640×480 to the window resolution in software as well? This would allow us to cut out steps 2) and d), leaving 1) as our one and only framebuffer.
It sure sounds a lot more efficient. But actually trying this solution revealed that I had a completely wrong idea of the inefficiencies here:
We do want to hardware-render the rest of the game, so we'd need to switch from software to hardware at the end of the logo animation. As it turns out, this switch is a rather expensive operation that would add an awkward ~500 ms pause between logo and title screen.
Most importantly, though: Hardware-accelerating the final scaling step is kind of important these days. SDL's CPU scaling implementation can get really slow if a bilinear filter is involved; on my system, software-scaling 62.5 frames per second by 1.75× to 1120×840 pixels increases CPU usage by ~10%-20% in Release mode, and even drops FPS to 50 in Debug mode.
This was perhaps the biggest lesson in this sudden 25-year jump from optimizing for a PC-98 and suffering under slow DirectDraw and Direct3D wrappers into the present of GPU rendering. Even though some drivers technically don't need these redundant CPU copies, a slight bit of added CPU time is still more than worth it if it means that we get to offload the actually expensive stuff onto the GPU.
But we all know that 4-digit frame rates aren't the main draw of rendering graphics through SDL. Besides cross-platform compatibility, the most useful aspect for Shuusou Gyoku is how SDL greatly simplifies the addition of the scaled window and borderless fullscreen modes you'd expect for retro pixel graphics on modern displays. Of course, allowing all of these settings to be changed in-engine from inside the Graphic options menu is the minimum UX comfort level we would accept here – after all, something like a separate DPI-aware dialog window at startup would be harder to port anyway.
For each setting, we can achieve this level of comfort in one of two ways:
1) We could simply shut down SDL's underlying render driver, close the window, and reopen/reinitialize the window and driver, reloading any game graphics as necessary. This is the simplest way: We can just reuse our backend's full initialization code that runs at startup and don't need any code on top. However, it would feel rather janky and cheap.
2) Or we could use SDL's various setter functions to only apply the single change to the specific setting… and anything that setting depends on. This would feel really smooth to use, but would require additional code with a couple of branches.
pbg's code was already geared slightly towards 2) with its feature of seamlessly changing the bit depth. And with the amount of budget I'm given these days, it should be obvious what I went with. This definitely wasn't trivial and involved lots of state juggling and careful ordering of these procedural, imperative operations, even at the level of "just" using high-level SDL API calls for everything. It must have undoubtedly been worse for the SDL developers; after all, every new option for a specific parameter multiplies the amount of potential window state transitions.
In the end though, most of it ended up working at our preferred high level of quality, leaving only a few cases where either SDL or the driver API forces us to throw away and recreate the window after all:
When changing rendering APIs, because certain API transitions would fail to initialize properly and only leave a black window,
when changing from borderless fullscreen into exclusive fullscreen on any API. This one is fixed in SDL 3, and they may or may not backport a fix in response to my bug report.
As for the actual settings, I decided on making the windowed-mode scale factor customizable at intervals of 0.25, or 160×120 pixels, up to the taskbar-excluding resolution of the current display the game window is placed on. Sure, restricting the factor to integer values is the idealistically correct thing to do, but 640×480 is a rather large source resolution compared to the retro consoles where integer scaling is typically brought up. Hence, such a limitation would be suboptimal for a large number of displays, most notably any old 720p display or those laptop screens with 1366×768 resolutions.
In the new borderless fullscreen mode, the configurable scaling factor breaks down into all three possible interpretations of "fitting the game window onto the whole screen", sketched in code after this list:
A [Integer] fit that applies the largest possible integer scaling factor and windowboxes the game accordingly,
a [4:3] fit that stretches the game as large as possible while maintaining the original aspect ratio and either pillarboxes the game on landscape displays or letterboxes it on portrait ones,
and the cursed, aspect ratio-ignoring [Stretch] fit that may or may not improve gameplay for someone out there, but definitely evokes nostalgia for stretching Game Boy (Color) games on a Game Boy Advance.
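Reduced to destination rectangles, the three fits could look like this – a sketch assuming a 640×480 source, not the actual shipped code:

#include <algorithm>
#include <SDL.h>

enum class Fit { Integer, AspectRatio, Stretch };

SDL_Rect FitRect(Fit fit, int screen_w, int screen_h)
{
    int w = screen_w;
    int h = screen_h;
    switch(fit) {
    case Fit::Integer: {
        // The largest integer factor that still fits, clamped to 1× for
        // screens smaller than the game
        const int factor = std::max(1, std::min((screen_w / 640), (screen_h / 480)));
        w = (640 * factor);
        h = (480 * factor);
        break;
    }
    case Fit::AspectRatio:
        if((screen_w * 480) > (screen_h * 640)) {
            w = ((screen_h * 640) / 480); // pillarbox on landscape displays
        } else {
            h = ((screen_w * 480) / 640); // letterbox on portrait ones
        }
        break;
    case Fit::Stretch:
        break; // simply cover the entire screen
    }
    // Center the game within any remaining black bars
    return { ((screen_w - w) / 2), ((screen_h - h) / 2), w, h };
}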
What currently can't be configured is the image filter used for scaling. The game always uses nearest-neighbor at integer scaling factors and bilinear filtering at fractional ones.
The three scaling options available in borderless fullscreen mode as rendered on a 1280×720 display, which is one of the worst display resolutions you could play this game on.
And yes – as the presence of the FullScr[Borderless] option implies, the new build also still supports exclusive, display mode-changing 640×480 boomer fullscreen. 🙌
That ScaleMode, though…
And then, I was looking for one more small optional feature to complete the 9th push and came up with the idea of hotkeys that would allow changing any of these settings at any point. Ember2528 considered it the best one of my ideas, so I went ahead… but little did I know that moving these graphics settings out of the main menu would not only significantly reshape the architecture of my code, but also uncover more bugs in my code and even a replay-related one from the original game. Paraphrasing the release notes:
The original game had three bugs that affected the configured difficulty setting when playing the Extra Stage or watching an Extra Stage replay. When returning to the main menu from an Extra Stage replay, the configured difficulty would be overridden with either
the difficulty selected before the last time the Extra Stage's Weapon Select screen was entered, or
Easy, when watching the replay before having been to the Extra Stage's Weapon Select screen during one run of the program.
Also, closing the game window during the Extra Stage (both self-played and replayed) would override the configured difficulty with Hard (the internal difficulty level of the Extra Stage).
But the award for the greatest annoyance goes to this SDL quirk that would reset a render target's clipping region when returning to raw framebuffer rendering, which causes sprites to suddenly appear in the two black 128-pixel sidebars for the one frame after such a change. As long as graphics settings were only available from the unclipped main menu, this quirk only required a single silly workaround of manually backing up and restoring the clipping region. But once hotkeys allowed these settings to be changed while SDL_Renderer clips all draw calls to the 384×480 playfield region, I had to deploy the same exact workaround in three additional places… 🥲 At least I wrote it in a way that allows it to be easily deleted if we ever update to SDL 3, where the team fixed the underlying issue.
In the end, I'm not at all confident in the resulting jumbled mess of imperative code and conditional branches, but at least it proved itself during the 1½ months this feature has existed on my machine. If it's any indication, the testers in the Seihou development Discord group thought it was fine at the beginning of October when there were still 8 bugs left to be discovered.
As for the mappings themselves: F10 and F11 cycle the window scaling factor or borderless fullscreen fit, F9 toggles the ScaleMode described below, and F8 toggles the frame rate limiter. The latter in particular is very useful for not only benchmarking, but also as a makeshift fast-forward function for replays. Wouldn't rewinding also be cool?
So we've ported everything the game draws, including its most tricky pixel-level effect, and added windowed modes and scaling on top. That only leaves screenshots and then the SDL backend work would be complete. Now that's where we just call SDL_RenderReadPixels() and write the returned pixels into a file, right? We've been scaling the game with the very convenient SDL_RenderSetLogicalSize(), so I'd expect to get back the logical 640×480 image to match the original behavior of the screenshot key…
…except that we don't? Why do we only get back the 640×480 pixels in the top-left corner of the game's scaled output, right before it hits the screen? How unfortunate – if SDL forces us to save screenshots at their scaled output resolution, we'd needlessly multiply the disk space that these uncompressed .BMP files take up. But even if we did compress them, there should be no technical reason to blow up the pixels of these screenshots past the logical size we specified…
Taking a closer look at SDL_RenderSetLogicalSize() explains what's going on there. This function merely calculates a scale factor by comparing the requested logical size with the renderer's output size, as well as a viewport within the game window if it has a different aspect ratio than the logical size. Then, it's up to the SDL_Renderer frontend to multiply and offset the coordinates of each incoming vertex using these values.
Therefore, SDL_RenderReadPixels() can't possibly give us back a 640×480 screenshot because there simply is no 640×480 framebuffer that could be captured. As soon as the draw calls hit the render API and could be captured, their coordinates have already been transformed into the scaled viewport.
The solution is obvious: Let's just create that 640×480 image ourselves. We'd first render every frame at that resolution into a texture, and then scale that texture to the window size by placing it on a single quad. From a preservation standpoint, this is also the academically correct thing to do, as it ensures that the entire game is still rendered at its original pixel grid. That's why this framebuffer scaling mode is the default, in contrast to the geometry scaling that SDL comes with.
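In SDL 2 terms, this mode amounts to little more than a render target texture and a final SDL_RenderCopy(). A minimal sketch with hypothetical names:

#include <SDL.h>

// The 640×480 render target that all of the game's draw calls land on
SDL_Texture *CreateFramebuffer(SDL_Renderer *renderer)
{
    return SDL_CreateTexture(
        renderer, SDL_PIXELFORMAT_ARGB8888, SDL_TEXTUREACCESS_TARGET, 640, 480
    );
}

// Per frame: Draw onto the framebuffer's fixed pixel grid, then scale the
// result onto the window with a single textured quad
void PresentScaled(SDL_Renderer *renderer, SDL_Texture *framebuffer, const SDL_Rect& dst)
{
    SDL_SetRenderTarget(renderer, framebuffer);
    // … all game draw calls go here, at the original 640×480 resolution …
    SDL_SetRenderTarget(renderer, nullptr);
    SDL_RenderCopy(renderer, framebuffer, nullptr, &dst);
    SDL_RenderPresent(renderer);
}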
With integer scaling factors and nearest-neighbor filtering, we'd expect the two approaches to deliver exactly identical pixels as far as sprite rendering is concerned. At fractional resolutions though, we can observe the first difference right in the menu. While geometry scaling always renders boxes with sharp edges, it noticeably darkens the text inside the boxes because it separately scales and alpha-blends each shadowed line of text on top of the already scaled pixels below – remember, 📝 the shadow for each line is baked into the same sprite. Framebuffer scaling, on the other hand, doesn't work on layers and always blurs every edge, but consequently also blends together all pixels in a much more natural way:
Look closer, and you can even see texture coordinate glitches at the edges of the individual text line quads.
Surprisingly though, we don't see much of a difference with the circles in the Weapon Select screen. If geometry scaling only multiplies and offsets vertices, shouldn't the lines along the 32-sided polygons still be just one pixel thick? As it turns out, SDL puts in quite a bit of effort here: It never actually uses the API's line primitive when scaling the output, but instead takes the endpoints, rasterizes the line on the CPU, and turns each point on the resulting line into a quad the size of the scale factor. Of course, this completely nullifies pbg's original intent of approximating circles with lines for performance reasons.
The result looks better and better the larger the window is scaled. On low fractional scale factors like 1.25×, however, lines end up looking truly horrid as the complete lack of anti-aliasing causes the 1.25×1.25-pixel point quads to be rasterized as 2 pixels rather than a single one at regular intervals:
Also note how you can either have bright circle colors or bright text colors, but not both.
But once we move in-game, we can even spot differences at integer resolutions if we look closely at all the shapes and gradients. In contrast to lines, software-rasterizing triangles with different vertex colors would be significantly more expensive as you'd suddenly have to cover a triangle's entire filled area with point quads. But thanks to that filled nature, SDL doesn't have to bother: It can merely scale the vertex coordinates as you'd expect and pass them onto the driver. Thus, the triangles get rasterized at the output resolution and end up as smooth and detailed as the output resolution allows:
Note how the HP gauge, being a gradient, also looks smoother with geometry scaling, whereas the Evade gauge, being 9 additively-blended red boxes with decreasing widths, doesn't differ between the modes.
For an even smoother rendering, enable anti-aliasing in your GPU's control panel; SDL unfortunately doesn't offer an API-independent way of enabling it.
You might now either like geometry scaling for adding these high-res elements on top of the pixelated sprites, or you might hate it for blatantly disrespecting the original game's pixel grid. But the main reasons for implementing and offering both modes are technical: As we've learned earlier when porting the lens ball effect, render-to-texture support is technically not guaranteed in SDL 2, and creating an additional texture is technically a fallible operation. Geometry scaling, on the other hand, will always work, as it's just additional arithmetic.
If geometry scaling does find its fans though, we can use it as a foundation for further high-res improvements. After all, this mode can't ever deliver a pixel-perfect rendition of the original Direct3D output, so we're free to add whatever enhancements we like while any accuracy concerns would remain exclusive to framebuffer scaling.
Just don't use geometry scaling with fractional scaling factors. These look even worse in-game than they do in the menus: The glitching texture coordinates reveal both the boundaries of on-screen tiles as well as the edge pixels of adjacent tiles within the set, and the scaling can even discolor certain dithered transparency effects, what the…?!
That green color is supposed to be the color key of this sprite sheet… 🤨
With both scaling paradigms in place, we now have a screenshot strategy for every possible rendering mode (a minimal readback sketch follows the list):
Software-rendering (i.e., showing the 西方Project logo)?
This is the optimal case. We've already rendered everything into a system-memory framebuffer anyway, so we can just take that buffer and write it to a file.
Hardware-rendering at unscaled 640×480?
Requires a transfer of the GPU framebuffer to the system-memory buffer we initially allocate for software rendering, but no big deal otherwise.
Hardware-rendering with framebuffer scaling?
As we've seen with the initial solution for the lens ball effect, flagging a texture as a render target thankfully always allows us to read pixels back from the texture, so this is identical to the case above.
Hardware-rendering with geometry scaling?
This is the initial case where we must indeed bite the bullet and save the screenshot at the scaled resolution because that's all we can get back from the GPU. Sure, we could software-scale the resulting image back to 640×480, but:
That would defeat the entire point of geometry scaling as it would throw away all the increased detail displayed in the screenshots above. Maybe that is something you'd like to capture if you deliberately selected this scale mode.
If we scaled back an image rendered at a fractional scaling factor, we'd lose every last trace of sharpness.
The only sort of reasonable alternative: We could respond to the keypress by setting up a parallel 640×480 software renderer, rendering the next frame in both hardware and software in parallel, and delivering the requested screenshot with a 1-frame lag. This might be closer to what players expect, but it would make quite a mess of this already way too stateful graphics backend. And maybe, the lag is even longer than 1 frame because we simultaneously have to recreate all active textures in CPU-accessible memory…
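For the first three cases, the readback itself is simple enough. A minimal sketch, assuming it runs while the current render target is either the unscaled backbuffer or our 640×480 framebuffer texture:

#include <SDL.h>

bool SaveScreenshot(SDL_Renderer *renderer, const char *path)
{
    SDL_Surface *surface = SDL_CreateRGBSurfaceWithFormat(
        0, 640, 480, 32, SDL_PIXELFORMAT_ARGB8888
    );
    if(!surface) {
        return false;
    }
    // Reads from the *current render target* – which, with framebuffer
    // scaling, is our own 640×480 texture rather than the scaled window output
    const bool ok = (SDL_RenderReadPixels(
        renderer, nullptr, surface->format->format, surface->pixels, surface->pitch
    ) == 0);
    if(ok) {
        SDL_SaveBMP(surface, path);
    }
    SDL_FreeSurface(surface);
    return ok;
}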
Now that we can take screenshots, let's take a few and compare our 640×480 output to pbg's original Direct3D backend to see how close we got. Certain small details might vary across all the APIs we can use with SDL_Renderer, but at least for Direct3D 9, we'd expect nothing less than a pixel-perfect match if we pass the exact same vertices to the exact same APIs. But something seems to be wrong with the SDL backend at the subpixel level with any triangle-based geometry, regardless of which rendering API we choose…
As if each polygon was shifted slightly up and to the left…
As it turns out, this is a half-pixel offset: SDL_Renderer adjusts vertex coordinates to unify the rasterization behavior of its backing APIs, whereas pbg's code already laid out its vertices with Direct3D's rules in mind – so the offsets stack, and every polygon lands slightly off its original position. The other, much trickier accuracy issue is the line rendering. We saw earlier that SDL software-rasterizes any lines if we geometry-scale, but we do expect it to use the driver's line primitive if we framebuffer-scale or regularly render at 640×480. And at one point, it did, until the SDL team discovered accuracy bugs in various OpenGL implementations and decided to just always software-rasterize lines by default to achieve identical rendered images regardless of the chosen API. Just like with the half-pixel offset, this is the correct choice for new code, but the wrong one for accurately porting an existing Direct3D game.
Thankfully, you can opt into the API's native line primitive via SDL's hint system, but the emphasis here is on API. This hint can still only ensure a pixel-perfect match if SDL renders via any version of Direct3D and you either use framebuffer scaling or no scaling at all. OpenGL will draw lines differently, and the software renderer just uses the same point rasterizing algorithm that SDL uses when scaling.
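In code, and assuming SDL ≥2.0.20, the hint is a single call that must happen before the renderer is created:

#include <SDL.h>

void SetAccurateLines(void)
{
    // "1" = software point rasterizer (the default), "2" = the API's native
    // line primitive, "3" = two triangles per line
    SDL_SetHint(SDL_HINT_RENDER_LINE_METHOD, "2");
}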
Pixels written into the framebuffer along the accurate outline, as we've covered above. Also note the slightly brighter color compared to the 3D-rendered variants.
The original Direct3D line rendering used in pbg's original code, touching a total of 568 pixels.
OpenGL's line rendering gets close, but still puts 16 pixels into different positions. Still, 97.2% of points are accurate to the original game.
The result of SDL's software line rasterizer, which you'd still see in the P0295 build when using either the software renderer or geometry scaling with any API. Slightly more accurate than OpenGL in this particular case with only 14 diverging pixels, matching 97.5% of the original circle.
As another alternative, SDL also offers a mode that renders each line as two triangles. This method naturally scales to any scale factor, but ends up drawing slightly thicker diagonals. You can opt into this mode via SDL's hint system by setting the environment variable SDL_RENDER_LINE_METHOD to 3.
The triangle method would also fit great with the spirit of geometry scaling, rendering smooth high-res circles analogous to the laser examples we saw earlier. This is what it would look like with the game scaled to 3200×2400… yeah, maybe we do want the point list after all; you can clearly see the 32 corners at this scale.
Replacing circles with point lists, as mentioned earlier, won't solve everything though, because Shuusou Gyoku also has plenty of non-circle lines:
6884 pixels touched by the Direct3D line renderer, a 98.3% match by the OpenGL rasterizer with 119 diverging pixels, and a 97.9% match by the SDL rasterizer with 147 diverging pixels. Looks like OpenGL gets better the longer the lines get, making line render method #2 the better choice even on non-Direct3D drivers.
So yeah, this one's kind of unfortunate, but also very minor as both OpenGL's and SDL's algorithms are at least 97% accurate to the original game. For now, this does mean that you'll manually have to change SDL_Renderer's driver from the OpenGL default to any of the Direct3D ones to get those last 3% of accuracy. However, I strongly believe that everyone who does care at this level will eventually read this sentence. And if we ever actually want 100% accuracy across every driver, we can always reverse-engineer and reimplement the exact algorithm used by Direct3D as part of our game code.
That completes the SDL renderer port for now! As all the GitHub issue links throughout this post have already indicated, I could have gone even further, but this is a convincing enough state for a first release. And once I've added a Linux-native font rendering backend, removed the few remaining <windows.h> types, and compiled the whole thing with GCC or Clang as a 64-bit binary, this will be up and running on Linux as well.
If we take a step back and look at what I've actually ended up writing during these SDL porting endeavors, we see a piece of almost generic retro game input, audio, window, rendering, and scaling middleware code, on top of SDL 2. After a slight bit of additional decoupling, most of this work should be reusable for not only Kioh Gyoku, but even the eventual cross-platform ports of PC-98 Touhou.
Perhaps surprisingly, I'm actually looking forward to Kioh Gyoku now. At first glance, that game seems to require raw access to the underlying 3D API due to a few effects that involve a Z coordinate, but all of these are transformed in software, just like the few 3D effects in Shuusou Gyoku. Coming from a time when hardware T&L wasn't yet a ubiquitous standard feature on GPUs, neither game even bothers with it, only ever passing Z coordinates of 0 to the graphics API, thus staying within the scope of SDL_Renderer. The only true additional high-level features that Kioh Gyoku requires from a renderer are sprite rotation and scaling, which SDL_Renderer conveniently supports as well. I remember some of my backers thinking that Kioh Gyoku was going to be a huge mess, but looking at its code and not seeing a separate 8-bit render path makes me rather excited to be facing a fraction of Shuusou Gyoku's complexity. The 3D engine sure seems featureful on the surface, and the hundreds of source files sure feel intimidating, but a lot of the harder-to-port parts remained unused in the final game. Kind of ironic that pbg wrote a largely new engine for this game, and yet we're closer to porting it back to our own enhanced, now almost fully cross-platform version of the Shuusou Gyoku engine.
Speaking of 8-bit render paths though, you might have noticed that I didn't even bother to port that one to SDL. This is certainly suboptimal from a preservation point of view; after all, pbg specifically highlights in the source code's README how the split between palettized 8-bit and direct-color 16-bit modes was a particularly noteworthy aspect of the period in time when this game was written:
Times have changed though, and SDL_Renderer doesn't even expose the concept of rendering bit depth at the API level. 📝 If we remember the initial motivation for these Shuusou Gyoku mods, Windows ≥8 doesn't even support anything below 32-bit anymore, and neither do most of SDL_Renderer's hardware-accelerated drivers as far as texture formats are concerned. While support for 24-bit textures without an alpha channel is still relatively common, only the Linux DirectFB driver might support 16-bit and 8-bit textures, and you'd have to go back to the PlayStation Vita, PlayStation 2, or the software renderer to find guaranteed 16-bit support.
Therefore, full software rendering would be our only option. And sure enough, SDL_Renderer does have the necessary palette mapping code required for software-rendering onto a palettized 8-bit surface in system memory. That would take care of accurately constraining this render path to its intended 256 colors, but we'd still have to upconvert the resulting image to 32-bit every frame and upload it to GPU for hardware-accelerated scaling. This raises the question of whether it's even worth it to have 8-bit rendering in the SDL port to begin with if it will be undeniably slower than the GPU-accelerated direct-color port. If you think it's still a worthwhile thing to have, here is the issue to invest in.
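For reference, a hypothetical sketch of what such an 8-bit render path would look like, with error handling omitted:

#include <SDL.h>

// surface8:  SDL_CreateRGBSurfaceWithFormat(0, 640, 480, 8, SDL_PIXELFORMAT_INDEX8)
// soft:      SDL_CreateSoftwareRenderer(surface8)
// texture32: a 640×480 ARGB8888 streaming texture on the GPU renderer [gpu]
void Render8BitFrame(
    SDL_Surface *surface8, SDL_Renderer *soft, SDL_Texture *texture32, SDL_Renderer *gpu
)
{
    // … draw the frame via [soft]; SDL maps every color onto the 256-entry
    // palette of [surface8] …
    SDL_RenderPresent(soft);

    // The per-frame upconversion that makes this path undeniably slower
    SDL_Surface *surface32 = SDL_ConvertSurfaceFormat(surface8, SDL_PIXELFORMAT_ARGB8888, 0);
    SDL_UpdateTexture(texture32, nullptr, surface32->pixels, surface32->pitch);
    SDL_FreeSurface(surface32);

    // Hardware-accelerated scaling to the window, as usual
    SDL_RenderCopy(gpu, texture32, nullptr, nullptr);
    SDL_RenderPresent(gpu);
}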
In the meantime though, there is a much simpler way of continuing to preserve the 8-bit mode. As usual, I've kept pbg's old DirectX graphics code working all the way through the architectural cleanup work, which makes it almost trivial to compile that old backend into a separate binary and continue preserving the 8-bit mode in that way.
This binary is also going to evolve into the upcoming Windows 98 backport, and will be accompanied by its own SDL DLL that throws out the Direct3D 11, 12, OpenGL 2, and WASAPI backends as they don't exist on Windows 98. I've already thrown out the SSE2 and AVX implementations of the BLAKE3 hash function in preparation, which explains the smaller binary size. These Windows 98-compatible binaries will obviously have to remain 32-bit, but I'm undecided on whether I should update the regular Windows build to a 64-bit binary or keep it 32-bit:
Going 64-bit would give Windows users easy access to both builds and could help with testing and debugging rare issues that only occur in either the 64-bit or the 32-bit build, whereas
staying 32-bit would make it less likely for us to actually break the 32-bit Windows build because all Windows users (and developers) would continue using it.
I'm open to strong opinions that sway me in one or the other direction, but I'm not going to do both – unless, of course, someone subscribes for the continued maintenance of three Windows builds. 😛
Speaking about SDL, we'll probably want to update from SDL 2 to SDL 3 somewhere down the line. It's going to be the future, cleans up the API in a few particularly annoying places, and adds a Vulkan driver to SDL_Renderer. Too bad that the documentation still deters me from using the audio subsystem despite the significant improvements it made in other regards…
For now, I'm still staying on SDL 2 for two main reasons:
While SDL 3 is bound to be more available on Linux distributions in the future, that's not the case right now. Everyone is still waiting for its first stable release, and so it currently isn't packaged in any distribution repo outside the AUR from what I can tell. Wide Linux compatibility is the whole point of this port.
The funding for a Windows 98 port of SDL 2 was obviously intended to help with other existing SDL 2 games and not just Shuusou Gyoku.
Finally, I decided against a Japanese translation of the new menu options for now because the help text communicates too much important information. That will have to wait until we make the whole game translatable into other languages.
📝 I promised to recreate the Sound Canvas VA packs once I know about the exact way real hardware handles the 📝 invalid Reverb Macro messages in ZUN's MIDI files, and what better time to keep that promise than to tack it onto the end of an already long overdue delivery. For some reason, Sound Canvas VA exhibited several weird glitches during the re-rendering processes, which prompted some rather extensive research and validation work to ensure that all tracks generally sound like they did in the previous version of the packages. Figuring out why this patch was necessary could have certainly taken a push on its own…
Interestingly enough, all these comparisons of renderings against each other revealed that the fix only makes a difference in a lot fewer than the expected 34 out of 39 MIDIs. Only 19 tracks – 11 in the OST and 8 in the AST – actually sound different depending on the Reverb Macro, because the remaining 15 set the reverb effect's main level to 0 and are therefore unaffected by the fix.
And then, there is the Stage 1 theme, which only activates reverb during a brief portion of its loop:
Thus, this track definitely counts toward the 11 with a distinct echo version. But comparing that version against the no-echo one reveals something truly mind-blowing: The Sound Canvas VA rendering only differs within exactly the 8 bars of the loop, and is bit-by-bit identical anywhere else. 🤯 This is why you use softsynths.
This is the OST version, but it works just as well with the AST.
Since the no-echo and echo BGM packs are aligned in both time and volume, you can reproduce this result – and explore the differences for any other track across both soundtracks – by simply phase-inverting a no-echo variant file and mixing it into the corresponding echo file. Obviously, this works best with the FLAC files. Trying it with the lossy versions gets surprisingly close though, and simultaneously reveals the infamous Vorbis pre-echo on the drums.
So yeah, the fact that ZUN enabled reverb by suddenly increasing the level for just this 8-bar piano solo erases any doubt about the panning delay having been a quirk or accident. There is no way this wasn't done intentionally; whether the SC-88Pro's default reverb is at 0 or 40 barely makes an audible difference with all the notes played in this section, and wouldn't have been worth the unfortunate chore of inserting another GS SysEx message into the sequence. That's enough evidence to relegate the previous no-echo Sound Canvas VA packs to a strictly unofficial status, and only preserve them for reference purposes. If you downloaded the earlier ones, you might want to update… or maybe not if you don't like the echo, it's all about personal preference at the end of the day.
While we're that deep into reproducibility, it makes sense to address another slight issue with the March release. Back then, I rendered 📝 our favorite three MIDI files, the AST versions of the three Extra Stage themes, with their original long setup area, and then trimmed the respective samples at the audio level. But since the MIDI-only BGM pack features a shortened setup area at the MIDI level, rendering these modified MIDI files yourself wouldn't give you back the exact waveforms. 📝 As PCM behaves like a lollipop graph, any change to the position of a note at a tempo that isn't an integer factor of the sampling rate will most likely result in completely different samples, which simple phase-cancelling can then no longer compare.
In our case though, all three of the tracks in question render with a slightly higher maximum peak amplitude when shortening their MIDI setup area. Normally, I wouldn't bother with such a fluctuation, but remember that シルクロードアリス is by far the loudest piece across both soundtracks, and thus defines the peak volume that every other track gets normalized to.
But wait a moment, doesn't this mean that there's maybe a setup area length that could yield a lower or even much lower peak amplitude?
And so I tested all setup area lengths at regular intervals between our target 2-beat length and ZUN's original lengths, and indeed found a great solution: When manipulating the setup area of the Extra Stage theme to an exact length of 2850 MIDI pulses, the conversion process renders it with a peak amplitude of 1.900, compared to its previous peak amplitude of 2.130 from the March release. That translates to an extra +0.56 dB of volume tricked out of all other tracks in the AST! Yeah, it's not much, but hey, at least it's not worse than what it used to be. The shipped MIDIs of the Extra Stage themes still don't correspond to the rendered files, but now this is at least documented together with the MIDI-level patch to reproduce the exact optimal length of the setup area.
Still, all that testing effort for tracks that, in my subjective opinion, don't even sound all that good… The resulting shrill resonant effects stick out like a sore thumb compared to the more basic General MIDI sound of every other track across both soundtrack variants. Once again, unofficial remixes such as Romantique Tp's one edit to 二色蓮花蝶 ~ Ancients can be the only solution here.
As far as preservation is concerned, this is as good as it gets, and my job here is done.
Then again, now that I've further refined (and actually scripted) the loop construction logic, I'd love to also apply it to Kioh Gyoku's MIDI soundtrack once its codebase is operational. Obviously, there's much less of an incentive for putting SC-88Pro recordings back into that game given that Kioh Gyoku already comes with an official (and, dare I say, significantly more polished) waveform soundtrack. And even if there was an incentive, it might not extend to a separate Sound Canvas VA version: As frustrating as ZUN's sequencing techniques in the final three Shuusou Gyoku Extra Stage arrangements are when dealing with rendered output, the fact that he reserved a lot more setup space to fit the more detailed sound design of each Kioh Gyoku track is a good thing as far as real-hardware playback is concerned. Consequently, the Romantique Tp recordings suffer far less from 📝 the SC-88Pro's processing lag issues, and thus might already constitute all the preservation anyone would ever want.
Once again though, generous MIDI setup space also means that Kioh Gyoku's MIDI soundtrack has lots of long and awkward pauses at the beginning of stages before the music starts. The two worst offenders here are
天鵞絨少女戦 ~ Velvet Battle and 桜花之恋塚 ~ Flower of Japan, with a 3.429s pause each. So, preserving the MIDI soundtrack in its originally intended sound might still be a worthwhile thing to fund if only to get rid of those pauses. After all, we can't ever safely remove these pauses at the MIDI level unless users promise that they use a GS-supporting device.
What we can do as part of the game, however, is hotpatch the original MIDI files from Shuusou Gyoku's MUSIC.DAT with the Reverb Macro fix. This way, the fix is also available for people who want to listen to the OST through their own copy of Sound Canvas VA or an SC-8850 and don't want to download recordings. This isn't necessary for the AST because we can simply bake the fix into the MIDI-only BGM pack, but we can't do the same for the OST for copyright reasons. The hotpatch remains optional simply because hotpatching MIDIs is rather insidious in principle, but it's enabled by default due to the evidence we found earlier.
The game currently pauses when it loses focus, which also silences any currently playing MIDI notes. Thus, we can verify the active reverb type by switching between the game and VST windows:
Maximum volume recommended.
Still saying Panning Delay, even though we obviously hear the default reverb. A clear bug in the Sound Canvas VA UI.
Next up: You decide! This delivery has opened up quite a bit of budget, so this would be a good occasion to take a look at something else while we wait for a few more funded pushes to complete the Shuusou Gyoku Linux port. With the previous price increases effectively increasing the monetary value of earlier contributions, it might not always be exactly obvious how much money is needed right now to secure another push. So I took a slight bit out of the Anything funds to add the exact € amount to the crowdfunding log.
In the meantime, I'll see how far I can get with porting all of the previous SDL work back to Windows 98 within one push-equivalent microtransaction, and do some internal website work to address some long-standing pain points.
P0002 – Build system improvements, part 2 (Preparations / Codebase cleanup)
P0003 – Build system improvements, part 3 (Lua rewrite of the Tupfile / Tup bugfixes for MS-DOS Player)
P0004 – Build system improvements, part 4 (Merging the 16-bit build part into the Tupfile)
P0281 – Build system improvements, part 5 (MS-DOS Player bugfixes and performance tuning for Turbo C++ 4.0J)
P0282 – Build system improvements, part 6 (Generating an ideal dumb batch script for 32-bit platforms)
P0283 – Build system improvements, part 7 (Researching and working around Windows 9x batch file limits)
P0284 – #include cleanup, part 1/2 / Decompilation (TH04/TH05 .REC loading)
P0285 – #include cleanup, part 2/2 / Decompilation (TH02 MAIN.EXE High Score entry)
💰 Funded by:
GhostPhanom, [Anonymous], Blue Bolt, Yanga
I'm 13 days late, but 🎉 ReC98 is now 10 years old! 🎉 On June 26, 2014, I first tried exporting IDA's disassembly of TH05's OP.EXE and reassembling and linking the resulting file back into a binary, and was amazed that it actually yielded an identical binary. Now, this doesn't actually mean that I've spent 10 years working on this project; priorities have been shifting and continue to shift, and time-consuming mistakes were certainly made. Still, it's a good occasion to finally fully realize the good future for ReC98 that GhostPhanom invested in with the very first financial contribution back in 2018, deliver the last three of the first four reserved pushes, cross another piece of time-consuming maintenance off the list, and prepare the build process for hopefully the next 10 years.
But why did it take 8 pushes and over two months to restore feature parity with the old system? 🥲
The original plan for ReC98's good future was quite different from what I ended up shipping here. Before I started writing the code for this website in August 2019, I focused on feature-completing the experimental 16-bit DOS build system for Borland compilers that I'd been developing since 2018, and which would form the foundation of my internal development work in the following years. Eventually, I wanted to polish and publicly release this system as soon as people stopped throwing money at me. But as of November 2019, just one month after launch, the store kept selling out with everyone investing into all the flashier goals, so that release never happened.
In theory, this build system remains the optimal way of developing with old Borland compilers on a real PC-98 (or any other 32-bit single-core system) and outside of Borland's IDE, even after the changes introduced by this delivery. In practice though, you're soon going to realize that there are lots of issues I'd have to revisit in case any PC-98 homebrew developers are interested in funding me to finish and release this tool…
The main idea behind the system still has its charm: Your build script is a regular C++ program that #includes the build system as a static library and passes fixed structures with names of source files and build flags. By employing static structure initialization, even a 1994 Turbo C++ would let you define the whole build at compile time, although this certainly requires some dank preprocessor magic to remain anywhere near readable at ReC98 scale. 🪄 While this system does require a bootstrapping process, the resulting binary can then use the same dependency-checking mechanisms to recompile and overwrite itself if you change the C++ build code later. Since DOS just simply loads an entire binary into RAM before executing it, there is no lock to worry about, and overwriting the originating binary is something you can just do.
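Since that system was never released, here's a hypothetical sketch of the idea; none of these structures or names are the real ones:

struct Binary {
    const char *exe;         // output executable
    const char *flags;       // single TCC command line, shared by all sources
    const char *sources[16]; // translation units, batched into one TCC call
};

// The whole build, defined at compile time via static structure
// initialization – well within the capabilities of a 1994 Turbo C++
static const Binary BUILD[] = {
    { "bin\\op.exe",      "-ml -O2", { "op_main.cpp", "frame.cpp" } },
    { "bin\\reiiden.exe", "-ml -O2", { "main.cpp", "player.cpp", "bullet.cpp" } },
};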
Later on, the system also made use of batched compilation: By passing more than one source file to TCC.EXE, you get to avoid TCC's quite noticeable startup times, thus speeding up the build proportional to the number of translation units in each batch. Of course, this requires that every passed source file is supposed to be compiled with the same set of command-line flags, but that's a generally good complexity-reducing guideline to follow in a build script. I went even further and enforced this guideline in the system itself, thus truly making per-file compiler command line switches considered harmful. Thanks to Turbo C++'s #pragma option, changing the command line isn't even necessary for the few unfortunate cases where parts of ZUN's code were compiled with inconsistent flags.
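As a hypothetical example of such a #pragma, a single file that needs Borland's standard stack frame option disabled can do this itself instead of demanding a custom command line:

// Equivalent to appending -k- to this file's command line
#pragma option -k-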
I combined all these ideas with a general approach of "targeting DOSBox": By maximizing DOS syscalls and minimizing algorithms and data structures, we spend as much time as possible in DOSBox's native-code DOS implementation, which should give us a performance advantage over DOS-native implementations of MAKE that typically follow the opposite approach.
Of course, all this only matters if the system is correct and reliable at its core. Tup teaches us that it's fundamentally impossible to have a reliable generic build system without
augmenting the build graph with all actual files read and written by each invoked build tool, which involves tracing all file-related syscalls, and
persistently serializing the full build graph every time the system runs, allowing later runs to detect every possible kind of change in the build script and rebuild or clean up accordingly.
Unfortunately, the design limitations of my system only allowed half-baked attempts at solving both of these prerequisites:
If your build system is not supposed to be generic and only intended to work with specific tools that emit reliable dependency information, you can replace syscall tracing with a parser for those specific formats. This is what my build system was doing, reading dependency information out of each .OBJ file's OMF COMENT record.
Since DOS command lines are limited to 127 bytes, DOS compilers support reading additional arguments from response files, typically indicated with an @ next to their path on the command line. If we now put every parameter passed to TCC or TLINK into a response file and leave these files on disk afterward, we've effectively serialized all command-line arguments of the entire build into a makeshift database. In later builds, the system can then detect changed command-line arguments by comparing the existing response files from the previous run with the new contents it would write based on the current build structures. This way, we still only recompile the parts of the codebase that are affected by the changed arguments, which is fundamentally impossible with Makefiles.
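Such a TCC response file might look as simple as this – hypothetical flags, but the idea stands:

-ml -3 -O2 -DGAME=5 main.cpp utils.cpp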
But this strategy only covers changes within each binary's compile or link arguments, and ignores the required deletions in "the database" when removing binaries between build runs. This is a non-issue as long as we keep decompiling on master, but as soon as we switch between master and similarly old commits on the debloated/anniversary branches, we can get very confusing errors:
The symptom is a calling convention mismatch: The two vector functions use __cdecl on master and pascal on debloated/anniversary. We've switched from anniversary (which compiles to ANNIV.EXE) back to master (which compiles to REIIDEN.EXE) here, so the .obj file on disk still uses the pascal calling convention. The build system, however, only checks the response files associated with the current target binary (REIIDEN.EXE) and therefore assumes that the .obj files still reflect the (unchanged) command-line flags in the TCC response file associated with this binary. And if none of the inputs of these .obj files changed between the two branches, they aren't rebuilt after switching, even though they would need to be.
Apparently, there's also such a thing as "too much batching", because TCC would suddenly stop applying certain compiler optimizations at very specific places if too many files were compiled within a single process? At least you quickly remember which source files you then need to manually touch and recompile to make the binaries match ZUN's original ones again…
But the final nail in the coffin was something I'd notice on every single build: 5 years down the line, even the performance argument wasn't convincing anymore. The strategy of minimizing emulated code still left me with an 𝑂(𝑛) algorithm, and with this entire thing still being single-threaded, there was no force to counteract the dependency check times as they grew linearly with the number of source files.
At P0280, each build run would perform a total of 28,130 file-related DOS syscalls to figure out which source files have changed and need to be rebuilt. At some point, this was bound to become noticeable even despite these syscalls being native, not to mention that they're still surrounded by emulator code that must convert their parameters and results to and from the DOS ABI. And with the increasing delays before TCC would do its actual work, the entire thing started feeling increasingly jankier.
While this system was waiting to be eventually finished, the public master branch kept using the Makefile that dates back to early 2015. Back then, it didn't take long for me to abandon raw dumb batch files because Make was simply the most straightforward way of ensuring that the build process would abort on the first compile error.
The following years also proved that Makefile syntax is quite well-suited for expressing the build rules of a codebase at this scale. The built-in support for automatically turning long commands into response files was especially helpful because of how naturally it works together with batched compilation. Both of these advantages culminate in this wonderfully arcane incantation of ASCII special characters and syntactically significant linebreaks:
tcc … @&&|
$**
|
Which translates to "take the filenames of all dependents of this explicit rule, write them into a temporary file with an autogenerated name, insert this filename into the tcc … @ command line, and delete the file after the command finished executing". The @ is part of TCC's command-line interface, the rest is all MAKE syntax.
But 📝 as we all know by now, these surface-level niceties change nothing about Makefiles inherently being unreliable trash due to implementing none of the aforementioned two essential properties of a generic build system. Borland got so close to a correct and reliable implementation of autodependencies, but that would have just covered one of the two properties. Due to this unreliability, the old build16b.bat called Borland's MAKER.EXE with the -B flag, recompiling everything all the time. Not only did this leave modders with a much worse build process than I was using internally, but it also eventually got old for me to merge my internal branch onto master before every delivery. Let's finally rectify that and work towards a single good build process for everyone.
As you would expect by now, I've once again migrated to Tup's Lua syntax. Rewriting it all makes you realize once again how complex the PC-98 Touhou build process is: It has to cover 2 programming languages, 2 pipeline steps, and 3 third-party libraries, and currently generates a total of 39 executables, including the small programs I wrote for research. The final Lua code comprises over 1,300 lines – but then again, if I had written it in 📝 Zig, it would certainly be as long or even longer due to manual memory management. The Tup building blocks I constructed for Shuusou Gyoku quickly turned out to be the wrong abstraction for a project that has no debug builds, but their 📝 basic idea of a branching tree of command-line options remained at the foundation of this script as well.
This rewrite also provided an excellent opportunity for finally dumping all the intermediate compilation outputs into a separate dedicated obj/ subdirectory, leaving bin/ nice and clean with only the final executables. I've also merged this new system into most of the public branches of the GitHub repo.
As soon as I first tried to build it all though, I was greeted with a particularly nasty Tup bug. Due to how DOS specified file metadata mutation, MS-DOS Player has to open every file in a way that current Tup treats as a write access… but since unannotated file writes introduce the risk of a malformed build graph if these files are read by another build command later on, Tup providently deletes these files after the command finished executing. And by these files, I mean TCC.EXE as well as every one of its C library header files opened during compilation.
Due to a minor unsolved question about a failing test case, my fix has not been merged yet. But even if it was, we're now faced with a problem: If you previously chose to set up Tup for ReC98 or 📝 Shuusou Gyoku and are maybe still running 📝 my 32-bit build from September 2020, running the new build.bat would in fact delete the most important files of your Turbo C++ 4.0J installation, forcing you to reinstall it or restore it from a backup. So what do we do?
Should my custom build get a special version number so that the surrounding batch file can fail if the version number of your installed Tup is lower?
Or do I just put a message somewhere, which some people invariably won't read?
The easiest solution, however, is to just put a fixed Tup binary directly into the ReC98 repo. This not only allows me to make Tup mandatory for 64-bit builds, but also cuts out one step in the build environment setup that at least one person previously complained about. *nix users might not like this idea all too much (or do they?), but then again, TASM32 and the Windows-exclusive MS-DOS Player require Wine anyway. Running Tup through Wine as well means that there's only one PATH to worry about, and you get to take advantage of the tool checks in the surrounding batch file.
If you're one of those people who doesn't trust binaries in Git repos, the repo also links to instructions for building this binary yourself. Replicating this specific optimized binary is slightly more involved than the classic ./configure && make && make install trinity, so having these instructions is a good idea regardless of the fact that Tup's GPL license requires it.
One particularly interesting aspect of the Lua code is the way it handles sprite dependencies:
If build commands read from files that were created by other build commands, Tup requires these input dependencies to be spelled out so that it can arrange the build graph and parallelize the build correctly. We could simply put every sprite into a single array and automatically pass that as an extra input to every source file, but that would effectively split the build into a "sprite convert" and "code compile" phase. Spelling out every individual dependency allows such source files to be compiled as soon as possible, before (and in parallel to) the rest of the sprites they don't depend on. Similarly, code files without sprite dependencies can compile before the first sprite got converted, or even before the sprite converter itself got compiled and linked, maximizing the throughput of the overall build process.
Running a 30-year-old DOS toolchain in a parallel build system also introduces new issues, though. The easiest and recommended way of compiling and linking a program in Turbo C++ is a single tcc invocation:
tcc … main.cpp utils.cpp master.lib
This performs a batched compilation of main.cpp and utils.cpp within a single TCC process, and then launches TLINK to link the resulting .obj files into main.exe, together with the C++ runtime library and any needed objects from master.lib. The linking step works by TCC generating a TLINK command line and writing it into a response file with the fixed name turboc.$ln… which obviously can't work in a parallel build where multiple TCC processes will want to link different executables via the same response file.
Therefore, we have to launch TLINK with a custom response file ourselves. This file is echo'd as a separate parallel build rule, and the Lua code that constructs its contents has to replicate TCC's logic for picking the correct C++ runtime .lib file for the selected memory model.
The response file for TH02's ZUN_RES.COM, consisting of the C++ standard library, two files of ZUN code, and master.lib.
While this does add more string formatting logic, not relying on TCC to launch TLINK actually removes the one possible PATH-related error case I previously documented in the README. Back in 2021 when I first stumbled over the issue, it took a few hours of RE to figure this out. I don't like these hours to go to waste, so here's a Gist, and here's the text replicated for SEO reasons:
Issue: TCC compiles, but fails to link, with Unable to execute command 'tlink.exe'
Cause: This happens when invoking TCC as a compiler+linker, without the -c flag. To locate TLINK, TCC needlessly copies the PATH environment variable into a statically allocated 128-byte buffer. It then constructs absolute tlink.exe filenames for each of the semicolon- or \0-terminated paths, writing these into a buffer that immediately follows the 128-byte PATH buffer in memory. The search is finished as soon as TCC finds an existing file, which gives precedence to earlier paths in the PATH. If the search didn't complete until a potential "final" path that runs past the 128 bytes, the final attempted filename will consist of the part that still managed to fit into the buffer, followed by the previously attempted path.
Workaround: Make sure that the BIN\ path to Turbo C++ is fully contained within the first 127 bytes of the PATH inside your DOS system. (The 128th byte must either be a separating ; or the terminating \0 of the PATH string.)
Now that DOS emulation is an integral component of the single-part build process, it even makes sense to compile our pipeline tools as 16-bit DOS executables and then emulate them as part of the build. Sure, it's technically slower, but realistically it doesn't matter: Our only current pipeline tools are 📝 the converter for hardcoded sprites and the 📝 ZUN.COM generators, both of which involve very little code and are rarely run during regular development after the initial full build. In return, we get to drop that awkward dependency on the separate Borland C++ 5.5 compiler for Windows and yet another additional manual setup step. 🗑️ Once PC-98 Touhou becomes portable, we're probably going to require a modern compiler anyway, so you can now delete that one as well.
That gives us perfect dependency tracking and minimal parallel rebuilds across the whole codebase! While MS-DOS Player is noticeably slower than DOSBox-X, it's not going to matter all too much; unless you change one of the more central header files, you're rarely if ever going to cause a full rebuild. Then again, given that I'm going to use this setup for at least a couple of years, it's worth taking a closer look at why exactly the compilation performance is so underwhelming…
On the surface, MS-DOS Player seems like the right tool for our job, with a lot of advantages over DOSBox:
It doesn't spawn a window that boots an entire emulated PC, but is instead perfectly integrated into the Windows console. Using it in a modern developer console would allow you to click on a compile error and have your editor immediately open the relevant file and jump to that specific line! With DOSBox, this basic comfort feature was previously unthinkable.
Heck, Takeda Toshiya originally developed it to run the equally vintage LSI C-86 compiler on 64-bit Windows. Fixing any potential issues we'd run into would be well within the scope of the project.
It consists of just a single comparatively small binary that we could just drop into the ReC98 repo. No manual setup steps required.
But once I began integrating it, I quickly noticed two glaring flaws:
Back in 2009, Takeda Toshiya chose to start the project by writing a custom DOS implementation from scratch. He was aware of DOSBox, but only adapted small tricky parts of its source code rather than starting with the DOSBox codebase and ripping out everything he didn't need. This matches the more research-oriented nature that all of his projects appear to follow, where the primary goal of writing the code is a personal understanding of the problem domain rather than a widely usable piece of software. MS-DOS Player is even the outlier in this regard, with Takeda Toshiya describing it as 珍しく実用的かもしれません ("this one might, for once, actually be practical"). I am definitely sympathetic to this mindset; heck, my old internal build system falls under this category too, being so specialized and narrow that it made little sense to use it outside of ReC98. But when you apply it to emulators for niche systems, you end up with exactly the current PC-98 emulation scene, where there's no single universally good emulator because all of them have some inaccuracy somewhere. This scene is too small for you not to eventually become part of someone else's supply chain… 🥲
Emulating DOS is a particularly poor fit for a research/NIH project because it's Hyrum's Law incarnate. With the lack of memory protection in Real Mode, programs could freely access internal DOS (and even BIOS) data structures if they only knew where to look, and frequently did. It might look as if "DOS command-line tools" just equals x86 plus INT 21h, but soon you'll also be emulating the BIOS, PIC, PIT, EMS, XMS, and probably a few more things, all with their individual quirks that some application out there relies on. DOSBox simply had much more time to grow and mature and figure out all of these details by trial and error. If you start a DOS emulator from scratch, you're bound to duplicate all this research as people want to use your emulator to run more and more programs, until you've ended up with what's effectively a clone of DOSBox's exact logic. Unless, of course, if you draw a line somewhere and limit the scope of the DOS and BIOS emulation. But given how many people have wanted to use MS-DOS Player for running DOS TUIs in arbitrarily sized terminal windows with arbitrary fonts, that's not what happened. I guess it made sense for this use case before DOSBox-X gained a TTF output mode in late 2020?
As usual, I wouldn't mention this if I didn't run into two bugs when combining MS-DOS Player with Turbo C++ and Tup. Both of these originated from workarounds for inaccuracies in the DOS emulation that date back to MS-DOS Player's initial release and were thankfully no longer necessary with the accuracy improvements implemented in the years since.
For CPU emulation, MS-DOS Player can use either MAME's or Neko Project 21/W's x86 core, both of which are interpreters and won't win any performance contests. The NP21/W core is significantly better optimized and runs ≈41% faster, but still pales in comparison to DOSBox-X's dynamic recompiler. Running the same sequential commands that the P0280 Makefile would execute, the upstream 2024-03-02 NP21/W core build of MS-DOS Player would take to compile the entire ReC98 codebase on my system, whereas DOSBox-X's dynamic core manages the same in , or 94% faster.
Granted, even the DOSBox-X performance is much slower than we would like it to be. Most of it can be blamed on the awkward time in the early-to-mid-90s when Turbo C++ 4.0J came out. This was the time when DOS applications had long grown past the limitations of the x86 Real Mode and required DOS extenders or even sillier hacks to actually use all the RAM in a typical system of that period, but Win32 didn't exist yet to put developers out of this misery. As such, this compiler not only requires at least a 386 CPU, but also brings its own DOS extender (DPMI16BI.OVL) plus a loader for said extender (RTM.EXE), both of which need to be emulated alongside the compiler, to the great annoyance of emulator maintainers 30 years later. Even MS-DOS Player's README file notes how Protected Mode adds a lot of complexity and slowdown:
8086 binaries are much faster than 80286/80386/80486/Pentium4/IA32 binaries.
If you don't need the protected mode or new mnemonics added after 80286,
I recommend i86_x86 or i86_x64 binary.
The immediate reaction to these performance numbers is obvious: Let's just put DOSBox-X's dynamic recompiler into MS-DOS Player, right?! 🙌 Except that once you look at DOSBox-X, you immediately get why Takeda Toshiya might have preferred to start from scratch. Its codebase is a historically grown tangled mess, requiring intimate familiarity and a significant engineering effort to isolate the dynamic core in the first place. I did spend a few days trying to untangle and copy it all over into MS-DOS Player… only to be greeted with an infinite loop as soon as everything compiled for the first time. 😶 Yeah, no, that's bound to turn into a budget-exceeding maintenance nightmare.
Instead, let's look at squeezing at least some additional performance out of what we already have. A generic emulator for the entire CISCy instruction set of the 80386, with complete support for Protected Mode, but it's only supposed to run the subset of instructions and features used by a specific compiler and linker as fast as possible… wait a moment, that sounds like a use case for profile-guided optimization! This is the first time I've encountered a situation that would justify the required 2-phase build process and lengthy profile collection – after all, writing into some sort of database for every function call does slow down MS-DOS Player by roughly 15×. However, profiling just the compilation of our most complex translation unit (📝 TH01 YuugenMagan) and the linking of our largest executable (TH01's REIIDEN.EXE) should be representative enough.
I'll get to the performance numbers later, but even the build output is quite intriguing. Based on this profile, Visual Studio chooses to optimize only 104 out of MS-DOS Player's 1976 functions for speed and the rest for size, shaving off a nice 109 KiB from the binary. Presumably, keeping rare code small is also considered kind of fast these days because it takes up less space in your CPU's instruction cache once it does get executed?
With PGO as our foundation, let's run a performance profile and see if there are any further code-level optimizations worth trying out:
Removing redundant memset() calls: MS-DOS Player is written in a very C-like style of C++, and initializes a bunch of its statically allocated data by memset()ing it with 00 bytes at startup. This is strictly redundant even in C; Section 6.7.9/10 of the C standard mandates that all static data is zero-initialized by default. In turn, the program loaders of modern operating systems employ all sorts of paging tricks to reduce the CPU cost (and actual RAM usage!) of this initialization as much as possible. If you manually memset() afterward, you throw all these advantages out of the window.
Of course, these calls would only ever show up among the top CPU consumers in a performance profile if a program uses a large amount of static data, but the hardcoded 32 MiB of emulated RAM in ≥i386-supporting builds definitely qualifies. Zeroing 32.8 MiB of memory makes up a significant chunk of the runtime of some of the shorter build steps and quickly adds up; a full rebuild of the ReC98 codebase currently spawns a total of 361 MS-DOS Player instances, totaling 11.5 GiB of needless memory writes.
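In code, the redundant pattern looks like this – a minimal sketch, with the array name standing in for MS-DOS Player's statically allocated emulated RAM:

```cpp
#include <string.h>

// Hypothetical stand-in for MS-DOS Player's statically allocated emulated
// RAM. As static data, it is guaranteed to start out as all zeros.
static unsigned char mem[32 << 20]; // 32 MiB

void reset_memory(void)
{
	// Redundant: the OS loader already provides zeroed, lazily-mapped
	// pages for [mem]. This call forces 32 MiB of actual writes and
	// faults in every single page at startup.
	memset(mem, 0, sizeof(mem));
}
```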
Limiting the emulated instruction set: NP21/W's x86 core emulates everything up to the SSE3 extension from 2004, but Turbo C++ 4.0J's x86 instruction set usage doesn't stretch past the 386. It doesn't even need the x87 FPU for compiling code that involves floating-point constants. Disabling all these unneeded extensions speeds up x86's infamously annoying instruction decoding, and also reduces the size of the MS-DOS Player binary by another 149.5 KiB. The source code already had macros for this purpose, and only needed a slight fix for the code to compile with these macros disabled.
Removing x86 paging: Borland's DOS extender uses segmented memory addressing even in Protected Mode. This allows us to remove the MMU emulation and the corresponding "are we paging" check for every memory access.
Removing cycle counting: When emulating a whole system, counting the cycles of each instruction is important for accurately synchronizing the CPU with other pieces of hardware. As hinted above, MS-DOS Player does emulate and periodically update a few pieces of hardware outside the CPU, but we need none of them for a build tool.
Testing Takeda Toshiya's optimizations: In a nice turn of events, Takeda Toshiya merged every single one of my bugfixes and optimization flags into his upstream codebase. He even agreed with my memset() and cycle counting removal optimizations, which are now part of all upstream builds as of 2024-06-24. For the 2024-06-27 build, he claims to have gone even further than my more minimal optimization, so let's see how these additional changes affect our build process.
Further risky optimizations: A lot of the remaining slowness of x86 emulation comes from the segmentation and protection fault checks required for every memory access. If we assume that the emulator only ever executes correct code, we can remove these checks and implement further shortcuts based on their absence.
The L[DEFGS]S group of instructions that load a segment and offset register from a 32-bit far pointer, for example, are both frequently used in Turbo C++ 4.0J code and particularly expensive to emulate. Intel specified their Real Mode operation as loading the segment and offset part in two separate 16-bit reads. But if we assume that neither of those reads can fault, we can compress them into a single 32-bit read and thus only perform the costly address translation once rather than twice. Emulator authors are probably rolling their eyes at this gross violation of Intel documentation now, but it's at least worth a try to see just how much performance we could get out of it.
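A sketch of what this shortcut boils down to, with hypothetical accessor names standing in for NP21/W's checked memory reads:

```cpp
#include <stdint.h>

// Hypothetical stand-ins for NP21/W's checked memory accessors.
uint16_t memory_read_word(uint32_t linear_address);
uint32_t memory_read_dword(uint32_t linear_address);

// As documented by Intel: two separate 16-bit reads, each paying for its
// own address translation and fault checks.
void lds_accurate(uint32_t addr, uint16_t* offset, uint16_t* segment)
{
	*offset = memory_read_word(addr);
	*segment = memory_read_word(addr + 2);
}

// The risky shortcut: assume that neither read can fault, and fetch the
// whole 32-bit far pointer with a single translated access.
void lds_fast(uint32_t addr, uint16_t* offset, uint16_t* segment)
{
	uint32_t farptr = memory_read_dword(addr);
	*offset = (uint16_t)farptr;
	*segment = (uint16_t)(farptr >> 16);
}
```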
Measured on a 6-year-old 6-core Intel Core i5 8400T on Windows 11. The first number in each column represents the codebase before the #include cleanup explained below, and the second one corresponds to this commit. All builds are 64-bit, 32-bit builds were ≈5% slower across the board. I kept the fastest run within three attempts; as Tup parallelizes the build process across all CPU cores, it's common for the long-running full build to take up to a few seconds longer depending on what else is running on your system. Tup's standard output is also redirected to a file here; its regular terminal output and nice progress bar will add more slowdown on top.
The key takeaways:
By merely disabling certain x86 features from MS-DOS Player and retaining the accuracy of the remaining emulation, we get speedups of ≈60% (full build), ≈70% (median TU), and ≈80% (largest TU).
≈25% (full build), ≈29% (median TU), and ≈41% (largest TU) of this speedup came from Visual Studio's profile-guided optimization, with no changes to the MS-DOS Player codebase.
The effects of removing cycle counting are the biggest surprise. Between ≈17% and ≈23%, just for removing one subtraction per emulated instruction? Turns out that in the absence of a "target cycle amount" setting, the x86 emulation loop previously ran for only a single cycle. This caused the PIC check to run after every instruction, followed by PIT, serial I/O, keyboard, mouse, and CRTC update code every millisecond. Without cycle counting, the x86 loop actually keeps running until a CPU exception is raised or the emulated process terminates, skipping the hardware code during the vast majority of the program's execution time. (A code sketch of both variants follows after this list.)
While Takeda Toshiya's changes in the 2024-06-27 build completely throw out the cycle counter and clean up process termination, they also reintroduce the hardware updates that made up the majority of the cycle removal speedup. This explains the results we're getting: The small speedup for full rebuilds is too insignificant to bother with and might even fall within a statistical margin of error, but the build slows down more and more the longer the emulated process runs. Compiling and linking YuugenMagan takes a whole 14% longer on generic builds, and ≈9-12% longer on PGO builds. I did another in-between test that just removed the x86 loop from the cycle removal version, and got exactly the same numbers. This just goes to show how much removing two writes to a fixed memory address per emulated instruction actually matters. Let's not merge this one back, and stay on top of 2024-06-24 for the time being.
The risky optimizations of ignoring segment limits and speeding up 32-bit segment+offset pointer load instructions could yield a further speedup. However, most of these changes boil down to removing branches that would never be taken when emulating correct x86 code. Consequently, these branches get recorded as unlikely during PGO training, which then causes the profile-guided rebuild to rearrange the instructions on these branches in a way that favors the common case, leaving the rest of their effective removal to your CPU's branch predictor. As such, the 10%-15% speedup we can observe in generic builds collapses down to 2%-6% in PGO builds. At this rate and with these absolute durations, it's not worth it to maintain what's strictly a more inaccurate fork of Neko Project 21/W's x86 core.
The redundant header inclusions afforded by #include guards do in fact have a measurable performance cost on Turbo C++ 4.0J, slowing down compile times by 5%.
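To picture the cycle counting difference from two points above in code form, here's a simplified model – hypothetical function names, not the actual NP21/W source:

```cpp
// All functions are placeholders for NP21/W emulator functionality.
bool cpu_exception_raised(void);
bool process_terminated(void);
void execute_one_instruction(void);
void update_pic(void);
bool millisecond_elapsed(void);
void update_pit_serial_keyboard_mouse_crtc(void);

void run_with_cycle_counting(void)
{
	while (!cpu_exception_raised() && !process_terminated()) {
		execute_one_instruction(); // the loop ran for a single "cycle"…
		update_pic();              // …so this ran after every instruction,
		if (millisecond_elapsed()) {
			update_pit_serial_keyboard_mouse_crtc();
		}
	}
}

void run_without_cycle_counting(void)
{
	// No hardware updates in the hot loop; build tools never need them.
	while (!cpu_exception_raised() && !process_terminated()) {
		execute_one_instruction();
	}
}
```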
But how does this compare to DOSBox-X's dynamic core? Dynamic recompilers need some kind of cache to ensure that every block of original ASM gets recompiled only once, which gives them an advantage in long-running processes after the initial warmup. As a result, DOSBox-X compiles and links YuugenMagan in , ≈92% faster than even our optimized MS-DOS Player build. That percentage resembles the slowdown we were initially getting when comparing full rebuilds between DOSBox-X and MS-DOS Player, as if we hadn't optimized anything.
On paper, this would mean that DOSBox-X barely lost any of its huge advantage when it comes to single-threaded compile+link performance. In practice, though, this metric is supposed to measure a typical decompilation or modding workflow that focuses on repeatedly editing a single file. Thus, a more appropriate comparison would also have to add the aforementioned constant 28,130 syscalls that my old build system required to detect that this is the one file/binary that needs to be recompiled/relinked. The video at the top of this blog post happens to capture the best time () I got for the detection process on DOSBox-X. This is almost as slow as the compilation and linking itself, and would have only gotten slower as we continue decompiling the rest of the games. Tup, on the other hand, performs its filesystem scan in a near-constant , matching the claim in Section 4.7 of its paper, and thus shrinking the performance difference to ≈14% after all. Sure, merging the dynamic core would have been even better (contribution-ideas, anyone?), but this is good enough for now.
Just like with Tup, I've also placed this optimized binary directly into the ReC98 repo and added the specific build instructions to the GitHub release page.
I do have more far-reaching ideas for further optimizing Neko Project 21/W's x86 core for this specific case of repeated switches between Real Mode and Protected Mode while still retaining the interpreted nature of this core, but these already strained the budget enough.
The perhaps more important remaining bottleneck, however, is hiding in the actual DOS emulation. Right now, a Tup-driven full rebuild spawns a total of 361 MS-DOS Player processes, which means that we're booting an emulated DOS 361 times. This isn't as bad as it sounds, as "booting DOS" basically just involves initializing a bunch of internal DOS structures in conventional memory to meaningful values. However, these structures also include a few environment variables like PATH, APPEND, or TEMP/TMP, which MS-DOS Player seamlessly integrates by translating them from their value on the Windows host system to the DOS 8.3 format. This could be one of the main reasons why MS-DOS Player is a native Windows program rather than being cross-platform:
On Windows, this path translation is as simple as calling GetShortPathNameA(), which returns a unique 8.3 name for every component along the path. (A minimal example follows below.)
Also, drive letters are an integral part of the DOS INT 21h API, and Windows still uses them as well.
However, the NT kernel doesn't actually use drive letters, and views them as just a legacy abstraction over its reality of volume GUIDs. Converting paths back and forth between these two views therefore requires communication with a mount point manager service, which can coincidentally also be observed in debug builds of Tup.
As a result, calling any path-retrieving API is a surprisingly expensive operation on modern Windows. When running a small sprite through our 📝 sprite converter, MS-DOS Player's boot process makes up 56% of the runtime, with 64% of that boot time (or 36% of the entire runtime) being spent on path translation. The actual x86 emulation to run the program only takes up 6.5% of the runtime, with the remaining 37.5% spent on initializing the multithreaded C++ runtime.
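For illustration, the Win32 side of such a translation is a single call; the example path is arbitrary:

```cpp
#include <windows.h>
#include <stdio.h>

// Minimal demonstration of the translation step: asking Windows for the
// unique 8.3 form of a long path.
int main(void)
{
	char buf[MAX_PATH];
	DWORD len = GetShortPathNameA(
		"C:\\Program Files\\Common Files", buf, MAX_PATH
	);
	if ((len > 0) && (len < MAX_PATH)) {
		printf("%s\n", buf); // e.g. C:\PROGRA~1\COMMON~1
	}
	return 0;
}
```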
But then again, the truly optimal solution would not involve MS-DOS Player at all. If you followed general video game hacking news in May, you'll probably remember the N64 community putting the concept of statically recompiled game ports on the map. In case you're wondering where this seemingly sudden innovation came from and whether a reverse-engineered decompilation project like ReC98 is obsolete now, I wrote a new FAQ entry about why this hype, although justified, is at least in part misguided. tl;dr: None of this can be meaningfully applied to PC-98 games at the moment.
On the other hand, recompiling our compiler would not only be a reasonable thing to attempt, but exactly the kind of problem that recompilation solves best. A 16-bit command-line tool has none of the pesky hardware factors that drag down the usefulness of recompilations when it comes to game ports, and a recompiled port could run even faster than it would on 32-bit Windows. Sure, it's not as flashy as a recompiled game, but if we got a few generous backers, it would still be a great investment into improving the state of static x86 recompilation by simply having another open-source project in that space. Not to mention that it would be a great foundation for improving Turbo C++ 4.0J's code generation and optimizations, which would allow us to simplify lots of awkward pieces of ZUN code… 🤩
That takes care of building ReC98 on 64-bit platforms, but what about the 32-bit ones we used to support? The previous split of the build process into a Tup-driven 32-bit part and a Makefile-driven 16-bit part sure was awkward and I'm glad it's gone, but it did give you the choice between 1) emulating the 16-bit part or 2) running both parts natively on 32-bit Windows. While Tup's upstream Windows builds are 64-bit-only, it made sense to 📝 compile a custom 32-bit version and thus turn any 32-bit Windows ≥Vista into the perfect build platform for ReC98. Older Windows versions that can't run Tup had to build the 32-bit part using a separately maintained dumb batch script created by tup generate, but again, due to Make being trash, they were fully rebuilding the entire codebase every time anyway.
Driving the entire build via Tup changes all of that. Now, it makes little sense to continue using 32-bit Tup:
We need to DLL-inject into a 64-bit MS-DOS Player. Sure, we could compile a 32-bit build of MS-DOS Player, but why would we? If we look at current market shares, nobody runs 32-bit Windows anymore, not even by accident. If you run 32-bit Windows in 2024, it's because you know what you're doing and made a conscious choice for the niche use case of natively running DOS programs. Emulating them defeats the whole point of setting up this environment to begin with.
It would make sense if Tup could inject into DOS programs, but it can't.
Also, as we're going to see later, requiring Windows ≥Vista goes in the opposite direction of what we want for a 32-bit build. The earlier the Windows version, the better it is at running native DOS tools.
This means that we could now only support 32-bit Windows via an even larger tup generated batch file. We'd have to move the MS-DOS Player prefix of the respective command lines into an environment variable to make Tup use the same rules for both itself and the batch file, but the result seems to work…
…but it's really slow, especially on Windows 9x. 🐌 If we look back at the theory behind my previous custom build system, we can already tell why: Efficiently building ReC98 requires a completely different approach depending on whether you're running a typical modern multi-core 64-bit system or a vintage single-core 32-bit system. On the former, you'd want to parallelize the slow emulation as much as you can, so you maximize the number of TCC processes to keep all CPU cores as busy as possible. But on the latter, you'd want the exact opposite – there, the biggest annoyance is the repeated startup and shutdown of the VDM, TCC, and its DOS extender, so you want to continue batching translation units into as few TCC processes as possible.
CMake fans will probably feel vindicated now, thinking "that sounds exactly like you need a meta build system 🤪". Leaving aside the fact that the output vomited by all of CMake's Makefile generators is a disgusting monstrosity that's far removed from addressing any performance concerns, we sure could solve this problem by adding another layer of abstraction. But then, I'd have to rewrite my working Lua script into either C++ or (heaven forbid) Batch, which are the only options we'd have for bootstrapping without adding any further dependencies, and I really wouldn't want to do that. Alternatively, we could fork Tup and modify tup generate to rewrite the low-level build rules that end up in Tup's database.
But why should we go for any of these if the Lua script already describes the build in a high-level declarative way? The most appropriate place for transforming the build rules is the Lua script itself…
… if there wasn't the slight problem of Tup forbidding file writes from Lua. 🥲 Presumably, this limitation exists because there is no way of replicating these writes in a tup generated dumb shell script, and it does make sense from that point of view.
But wait, printing to stdout or stderr works, and we always invoke Tup from a batch file anyway. You can now tell where this is going. Hey, exfiltrating commands from a build script to the build system via standard I/O streams works for Rust's Cargo too!
Just like Cargo, we want to add a sufficiently unique prefix to every line of the generated batch script to distinguish it from Tup's other output. Since Tup only reruns the Lua script – and would therefore print the batch file – if the script changed between the previous and current build run, we only want to overwrite the batch file if we got one or more lines. Getting all of this to work wasn't all too easy; we're once again entering the more awful parts of Batch syntax here, which apparently are so terrible that Wine doesn't even bother to correctly implement parts of it. 😩
Most importantly, we don't really want to redirect any of Tup's standard I/O streams. Redirecting stdout disables console output coloring and the pretty progress bar at the bottom, and looping over stderr instead of stdout in Batch is incredibly awkward. Ideally, we'd run a second Tup process with a sub-command that would just evaluate the Lua script if it changed - and fortunately, tup parse does exactly that. 😌
In the end, the optimally fast and ERRORLEVEL-preserving solution involves two temporary files. But since creating files between two Tup runs causes it to reparse the Lua code, which would print the batch file to the unfiltered stdout, we have to hide these temporary files from Tup by placing them into its .tup/ database directory. 🤪
On a more positive note, programmatically generating batches from single-file TCC rules turned out to be a great idea. Since the Lua code maps command-line flags to arrays of input files, it can also batch across binaries, surpassing my old system in this regard. This works especially well on the debloated and anniversary branches, which replace ZUN's little command-line flag inconsistencies with a single set of good optimization flags that every translation unit is compiled with.
Time to fire up some VMs then… only to see the build failing on Windows 9x with multiple unhelpful Bad command or file name errors. Clearly, the long echo lines that write our response files run up against some length limit in command.com and need to be split into multiple ones. Windows 9x's limit is larger than the 127 characters of DOS, that's for sure, and the exact number should just be one search away…
…except that it's not the 1024 characters recounted in a surviving newsgroup post. Sure, lines are truncated to 1023 bytes and that off-by-one error is no big deal in this context, but that's not the whole story:
This not unrealistic command line is 137 bytes long and fails on Windows 9x?!
> echo -DA=1 2 3 a/b/c/d/1 a/b/c/d/2 a/b/c/d/3 a/b/c/d/4 a/b/c/d/5 a/b/c/d/6 a/b/c/d/7 a/b/c/d/8 a/b/c/d/9 a/b/c/d/10 a/b/c/d/11 a/b/c/d/12
Bad command or file name
Wait, what, something about / being the SWITCHAR? And not even just that…
Down to 132 bytes… and 32 "assignments"?
> echo a=0 b=1 c=2 d=3 e=4 f=5 g=6 h=7 i=8 j=9 k=0 l=1 m=2 n=3 o=4 p=5 q=6 r=7 s=8 t=9 u=0 v=1 w=2 x=3 y=4 z=5 a=0 b=1 c=2 d=3 e=4 f=5
Bad command or file name
And what's perhaps the worst example:
64 slashes. Works on DOS, works on `cmd.exe`, fails on 9x.
> echo ////////////////////////////////////////////////////////////////
Bad command or file name
My complete set of test cases: 2024-07-09-Win9x-batch-tokenizer-tests.bat
So, time to load command.com into DOSBox-X's debugger and step through some code. 🤷 The earliest NT-based Windows versions were ported to a variety of CPUs and therefore received the then-all-new cmd.exe shell written in C, whereas Windows 9x's command.com was still built on top of the dense hand-written ASM code that originated in the very first DOS versions. Fortunately though, Microsoft open-sourced one of the later DOS versions in April. This made it somewhat easier to cross-reference the disassembly even though the Windows 9x version significantly diverged in the parts we're interested in.
And indeed: After truncating to 1023 bytes and parsing out any redirectors, each line is split into tokens around whitespace and = signs and before every occurrence of the SWITCHAR. These tokens are written into a statically allocated 64-element array, and once the code tries to write the 65th element, we get the Bad command or file name error instead.
| #  | String | Switch flag |
|----|--------|-------------|
| 0  | echo   |             |
| 1  | -DA    |             |
| 2  | 1      |             |
| 3  | 2      |             |
| 4  | 3      |             |
| 5  | a      |             |
| 6  | /B     | 🚩          |
| 7  | /C     | 🚩          |
| 8  | /D     | 🚩          |
| 9  | /1     | 🚩          |
| 10 | a      |             |
| 11 | /B     | 🚩          |
| 12 | /C     | 🚩          |
| 13 | /D     | 🚩          |
| 14 | /2     | 🚩          |
The first few elements of command.com's internal argument array after calling the Windows 9x equivalent of parseline with my initial example string. Note how all the "switches" got capitalized and annotated with a flag, whereas the = sign no longer appears in either string or flag form.
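In C++ terms, the observed behavior roughly corresponds to the following model. This is reconstructed from the debugging session and my test cases, not from any actual source code:

```cpp
#include <cctype>
#include <cstring>

const char SWITCHAR = '/';
enum { MAX_TOKENS = 64, MAX_TOKEN_LEN = 128 };

// Returns false on the "Bad command or file name" condition, i.e. when a
// line would tokenize into more than 64 elements.
bool parseline(
	const char* line,
	char tokens[MAX_TOKENS][MAX_TOKEN_LEN],
	bool is_switch[MAX_TOKENS]
)
{
	int count = 0;
	while (*line) {
		// Whitespace and `=` separate tokens, but are never stored.
		if ((*line == ' ') || (*line == '\t') || (*line == '=')) {
			line++;
			continue;
		}
		if (count >= MAX_TOKENS) {
			return false; // the 65th element has nowhere to go
		}
		char* token = tokens[count];
		int len = 0;
		const bool sw = (*line == SWITCHAR);
		is_switch[count] = sw;
		if (sw) {
			token[len++] = *line++; // a switch begins at its SWITCHAR…
		}
		while (
			*line && !strchr(" \t=", *line) && (*line != SWITCHAR) &&
			(len < (MAX_TOKEN_LEN - 1))
		) {
			// …and its contents get capitalized.
			token[len++] = (char)(sw ? toupper((unsigned char)*line) : *line);
			line++;
		}
		token[len] = '\0';
		count++;
	}
	return true;
}
```

Fed with the three test cases above, this model produces exactly 65 tokens for each of them, which is why they all fail.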
Needless to say, this makes no sense. Both DOS and Windows pass command lines as a single string to newly created processes, and since this tokenization is lossy, command.com will just have to pass the original string anyway. If your shell wants to handle tokenization at a central place, it should happen after it decided that the command matches a builtin that can actually make use of a pointer to the resulting token array – or better yet, as the first call of each builtin's code. Doing it before is patently ridiculous.
I don't know what's worse – the fact that Windows 9x blindly grinds each batch line through this tokenizer, or the fact that no documentation of this behavior has survived on today's Internet, if any even ever existed. The closest thing I found was this page that doesn't exist anymore, and it also just contains a mere hint rather than a clear description of the issue. Even the usual Batch experts who document everything else seem to have a blind spot when it comes to this specific issue. As do emulators: DOSBox and FreeDOS only reimplement the sane DOS versions of command.com, and Wine only reimplements cmd.exe.
Oh well. 71 lines of Lua later, the resulting batch file does in fact work everywhere:
The clear performance winner at 11.15 seconds after the initial tool check, though sadly bottlenecked by strangely long TASM32 startup times. As for TCC though, even this performance would be the worst case for a recompiled port. Modern compiler optimizations are probably going to shave off another second or two, and implementing support for #pragma once into the recompiled code will get us the aforementioned 5% on top.
If you run this on VirtualBox on modern Windows, make sure to disable Hyper-V to avoid the slower snail execution mode. 🐢
Building in Windows XP under Hyper-V exchanges Windows 98's slow TASM32 startup times for slightly slower DOS performance, resulting in a still decent 13.4 seconds.
29.5 seconds?! Surely something is getting emulated here. And this is the best time I randomly got; my initial preview recording took 55 seconds, which is closer to DOSBox-X's dynamic core than it is to Windows 9x. Given how poorly 32-bit Windows 10 performs, Microsoft should have probably discontinued 32-bit Windows after 8 already. If every 16-bit program you could possibly want to run is either too slow or likely to exhibit other compatibility issues (📝 Shuusou Gyoku, anyone?), the existence of 32-bit Windows 10 is nothing but a maintenance burden. Especially because Windows 10 simultaneously overhauled the console subsystem, which is bound to cause compatibility issues anyway. It sure did for me back in 2019 when I tried to get my build system to work…
But wait, there's more! The codebase now compiles on all 32-bit Windows systems I've tested, and yields binaries that are equivalent to ZUN's… except on 32-bit Windows 10. 🙄 Suddenly, we're facing the exact same batched compilation bug from my custom build system again, with REIIDEN.EXE being 16 bytes larger than it's supposed to be.
Looks like I have to look into that issue after all, but figuring out the exact cause by debugging TCC would take ages again. Thankfully, trial and error quickly revealed a functioning workaround: Separating translation unit filenames in the response file with two spaces rather than one. Really, I couldn't make this up. This is the most ridiculous workaround for a bug I've encountered in a long time.
The TCC response file generation code for all current decompiled TH04 code, split into multiple echo calls based on the Windows 9x batch tokenizer rules and with double spaces between each parameter for added "safety". Would this also have been the solution for the batched compilation bugs I was experiencing with my old build system in DOSBox? I was suddenly unable to reproduce these bugs, so we won't know for the time being…
Hopefully, you've now got the impression that supporting any kind of 32-bit Windows build is way more of a liability than an asset these days, at least for this specific project. "Real hardware", "motivating a TCC recompilation", and "not dropping previous features" really were the only reasons for putting up with the sheer jank and testing effort I had to go through. And I wouldn't even be surprised if real-hardware developers told me that the first reason doesn't actually hold up because compiling ReC98 on actual PC-98 hardware is slow enough that they'd rather compile it on their main machine and then transfer the binaries over some kind of network connection.
I guess it also made for some mildly interesting blog content, but this was definitely the last time I bothered with such a wide variety of Windows versions without being explicitly funded to do so. If I ever get to recompile TCC, it will be 64-bit only by default as well.
Instead, let's have a tier list of supported build platforms that clearly defines what I am maintaining, with just the most convincing 32-bit Windows version in Tier 1. Initially, that was supposed to be Windows 98 SE due to its superior performance, but that's just unreasonable if key parts of the OS remain undocumented and make no sense. So, XP it is.
*nix fans will probably once again be disappointed to see their preferred OS in Tier 2. But at least, all we'd need for that to move up to Tier 1 is a CI configuration, contributed either via funding me or sending a PR. (Look, even more contribution-ideas!)
Getting rid of the Wine requirement for a fully cross-platform build process wouldn't be too unrealistic either, but would require us to make a few quality decisions, as usual:
Do we run the DOS tools by creating a cross-platform MS-DOS Player fork, or do we statically recompile them?
Do we replace 32-bit Windows TASM with the 16-bit DOS TASM.EXE or TASMX.EXE, which we then either run through our forked MS-DOS Player or recompile? This would further slow down the build and require us to get rid of these nice long non-8.3 filenames… 😕 I'd only recommend this after the looming librarization of ZUN's master.lib fork is completed.
Or do we try migrating to JWasm again? As an open-source assembler that aims for MASM compatibility, it's the closest we can get to TASM, but it's not a drop-in replacement by any means. I already tried in late 2014, but encountered too many issues and quickly abandoned the idea. In any case, this migration will only get easier the less ASM code remains in the codebase as we approach the 100% finalization mark – so maybe it already works better today.
Y'know what I think would be the best idea for right now, though? Savoring this new build system and spending an extended amount of time doing actual decompilation or modding for a change.
Now that even full rebuilds are decently fast, let's make use of that productivity boost by doing some urgent and far-reaching code cleanup that touches almost every single C++ source file. The most immediately annoying quirk of this codebase was the silly way each translation unit #included the headers it needed. Many years ago, I measured that repeatedly including the same header did significantly impact Turbo C++ 4.0J's compilation times, regardless of any include guards inside. As a consequence of this discovery, I slightly overreacted and decided to just not use any include guards, ever. After all, this emulated build process is slow enough, and we don't want it to needlessly slow down even more! This way, redundantly including any file that adds more than just a few #define macros won't even compile, throwing lots of Multiple definition errors.
Consequently, the headers themselves #included almost nothing. Starting a new translation unit therefore always involved figuring out and spelling out the transitive dependencies of the headers the new unit actually wants to use, in a short trial-and-error process. While not too bad by itself, this was bound to become quite counterproductive as we get closer to porting these games: If some inlined function in a header needed access to, let's say, PC-98-specific I/O ports as an implementation detail, the header would have externalized this dependency to the top-level translation unit, which in turn made that unit appear to contain PC-98-native code even if the unit's code itself was perfectly portable.
But once we start making some of these implicit transitive dependencies optional, it all stops being justifiable. Sometimes, a.hpp declared things that required declarations from b.hpp, but these things were used so rarely that they didn't justify adding #include "b.hpp" to all translation units that #include "a.hpp". So how about conditionally declaring these things based on previously #included headers?
#if (defined(SUBPIXEL_HPP) && defined(PLANAR_H))
// Sets the [tile_ring] tile at (x, y) to the given VRAM offset.
void tile_ring_set_vo(subpixel_t x, subpixel_t y, vram_offset_t image_vo);
#endif
You can maybe do this in a project that consistently sorts the #include lists in every translation unit… err, no, don't do this, ever, it's awful. Just separate that declaration out into another header.
Now that we've measured that the sane alternative of include guards comes with a performance cost of just 5% and we've further reduced its effective impact by parallelizing the build, it's worth it to take that cost in exchange for a tidy codebase without such surprises. From now on, every header file will #include its own dependencies and be a valid translation unit that must compile on its own without errors. In turn, this allows us to remove at least 1,000 #includes of transitive dependencies from .cpp files. 🗑️
However, that 5% number was only measured after I reduced these redundant #includes to their absolute minimum. So it still makes sense to only add include guards where they are absolutely necessary – i.e., transitively dependent headers included from more than one other file – and continue to (ab)use the Multiple definition compiler errors as a way of communicating "you're probably #including too many headers, try removing a few". Certainly a less annoying error than Undefined symbol.
Since all of this went way over the 7-push mark, we've got some small bits of RE and PI work to round it all out. The .REC loader in TH04 and TH05 is completely unremarkable, but I've got at least a bit to say about TH02's High Score menu. I already decompiled MAINE.EXE's post-Staff Roll variant in 2015, so we were only missing the almost identical MAIN.EXE variant shown after a Game Over or when quitting out of the game. The two variants are similar enough that it mostly needed just a small bit of work to bring my old 2015 code up to current standards, and allowed me to quickly push TH02 over the 40% RE mark.
Functionally, the two variants only differ in two assignments, but ZUN once again chose to copy-paste the entire code to handle them. This was one of ZUN's better copy-pasting jobs though – and honestly, I can't even imagine how you would mess up a menu that's entirely rendered on the PC-98's text RAM. It almost makes you wonder whether ZUN actually used the same #if ENDING preprocessor branching that my decompilation uses… until the visual inconsistencies in the alignment of the place numbers and labels clearly give it away as copy-pasted:
Next up: Starting the big Seihou summer! Fortunately, waiting two more months was worth it: In mid-June, Microsoft released a preview version of Visual Studio that, in response to my bug report, finally, finally makes C++ standard library modules fully usable. Let's clean up that codebase for real, and put this game into a window.
P0280
TH03 RE (Coordinate transformations / Player entity movement / Global shared hitbox / Hit circles)
💰 Funded by:
Blue Bolt, JonathKane, [Anonymous]
TH03 gameplay! 📝 It's been over two years. People have been investing some decent money with the intention of eventually getting netplay, so let's cover some more foundations around player movement… and quickly notice that there's almost no overlap between gameplay RE and netplay preparations?
That makes for a fitting opportunity to think about what TH03 netplay would look like. Regardless of how we implement them into TH03 in particular, these features should always be part of the netcode:
You'd want UDP rather than TCP for both its low latency and its NAT hole-punching ability
However, raw UDP does not guarantee that the packets arrive in order, or that they even arrive at all
WebRTC implements these reliability guarantees on top of UDP in a modern package, providing the best of both worlds
NAT traversal via public or self-hosted STUN/TURN servers is built into the connection establishment protocol and APIs, so you don't even have to understand the underlying issue
I'm not deep enough into networking to argue here, and it clearly works for Ju.N.Owen. If we do explore other options, it would mainly be because I can't easily get something as modern as WebRTC to natively run on Windows 9x or DOS, if we decide to go for that route.
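To get an idea of the amount of code we're talking about: a bare-bones connection sketch using libdatachannel (the library that will come up again below), with a placeholder STUN server and channel name. The offer/answer exchange between both peers is omitted:

```cpp
#include <rtc/rtc.hpp>
#include <iostream>
#include <memory>

int main()
{
	rtc::Configuration config;
	config.iceServers.emplace_back("stun:stun.l.google.com:19302");

	auto pc = std::make_shared<rtc::PeerConnection>(config);
	pc->onLocalDescription([](rtc::Description sdp) {
		// This is the "signaling code" that Pure P2P play would have
		// you copy-paste to the other player.
		std::cout << std::string(sdp) << std::endl;
	});

	auto dc = pc->createDataChannel("inputs");
	dc->onOpen([]() {
		std::cout << "connected" << std::endl;
	});
	dc->onMessage([](rtc::message_variant) {
		// The remote player's inputs arrive here, reliably and in order.
	});
	// (keep the process alive, exchange descriptions, etc.)
}
```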
Matchmaking: I like Ju.N.Owen's initial way of copy-pasting signaling codes into chat clients to establish a peer-to-peer connection without a dedicated matchmaking server. progre eventually implemented rooms on the AWS cloud, but signaling codes are still used for spectating and the Pure P2P mode. We'll probably copy the same evolution, with a slight preference for Pure P2P – if only because you would have to check a GDPR consent box before I can put the combination of your room name and IP address into a database. Server costs shouldn't be an issue at the scale I expect this to have.
Rollback: In emulators, rollback netcode can be and has been implemented by keeping savestates of the last few frames together with the local player's inputs and then replaying the emulation with updated inputs of the remote player if a prediction turned out to be incorrect. This technique is a great fit for TH03 for two reasons:
All game state is contained within a relatively small bit of memory. The only heap allocations done in MAIN.EXE are the 📝 .MRS images for gauge attack portraits and bomb backgrounds, and the enemy scripts and formations, both of which remain constant throughout a round. All other state is statically allocated, which can reduce per-frame snapshots from the naive 640 KiB of conventional DOS memory to just the 37 KiB of MAIN.EXE's data segment. And that's the upper bound – this number is only going to go down as we move towards 100% PI, figure out how TH03 uses all its static data, and get to consolidate all mutated data into an even smaller block of memory.
For input prediction, we could even let the game's existing AI play the remote player until the actual inputs come in, guaranteeing perfect play until the remote inputs prove otherwise. Then again… probably only while the remote player is not moving, because the chance for a human to replicate the AI's infamous erratic dodging is fairly low.
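Put into code, the rollback loop could look like this conceptual sketch – all names and the history depth are made up, with the 37 KiB figure from above as the snapshot size:

```cpp
#include <stdint.h>

// Placeholders for emulator/game functionality.
void save_snapshot(uint8_t* buf);
void restore_snapshot(const uint8_t* buf);
void simulate_one_frame(uint16_t p1_input, uint16_t p2_input, bool render);

enum { HISTORY = 8, SNAPSHOT_SIZE = (37 * 1024) }; // MAIN.EXE's data segment

struct Frame {
	uint8_t snapshot[SNAPSHOT_SIZE]; // game state *before* this frame ran
	uint16_t local_input;
	uint16_t predicted_remote_input;
};
static Frame history[HISTORY];

// Called once the remote player's actual input for [frame] has arrived.
void on_remote_input(uint32_t frame, uint16_t actual, uint32_t current_frame)
{
	if (history[frame % HISTORY].predicted_remote_input == actual) {
		return; // prediction held, nothing to do
	}
	// Misprediction: restore the state from before the mispredicted frame
	// and re-simulate everything since, ideally skipping all rendering
	// except for the newest frame.
	restore_snapshot(history[frame % HISTORY].snapshot);
	for (uint32_t i = frame; i <= current_frame; i++) {
		Frame& f = history[i % HISTORY];
		if (i == frame) {
			f.predicted_remote_input = actual;
		}
		save_snapshot(f.snapshot);
		simulate_one_frame(
			f.local_input, f.predicted_remote_input, (i == current_frame)
		);
	}
}
```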
The only issue with rollback in specifically a PC-98 emulator is its implications for performance. Rendering is way more computationally expensive on PC-98 than it is on consoles with hardware sprites, involving lots of memory writes to the disjointed 4 bitplane segments that make up the 128 KB framebuffer, and equally as many reads and bitshift operations on sprite data. TH03 lessens the impact somewhat thanks to most of its rendering being EGC-accelerated and thus running inside the emulator as optimized native code, but we'd still be emulating all the x86 code surrounding the EGC accesses – from the emulator's point of view, it looks no different than game logic. Let's take my aging i5 system for example:
With the Screen → No wait option, Neko Project 21/W can emulate TH03 gameplay at 260 FPS, or 4.6× its regular speed.
This leaves room for each frame to contain 3.6 frames of rollback in addition to the frame that's supposed to be displayed,
which results in a maximum safe network latency of ≈63 ms, or a ping of ≈126 ms. According to this site, that's enough for a smooth connection from Germany to any other place in Europe and even out to the US Midwest. At this ping, my system could still run the game without slowdown even if every single frame required a rollback, which is highly unlikely.
Any higher ping, however, could occasionally lead to a rollback queue that's too large for my system to process within a single frame at the intended 56.4 FPS rate. As a result, me playing anyone in the western US is highly likely to involve at least occasional slowdowns. Delaying inputs on purpose is the usual workaround, but isn't Touhou that kind of game series where people use vpatch to get rid of even the default input delay in the Windows games?
So we'd ideally want to put TH03 into an update-only mode that skips all rendering calls during re-simulation of rolled-back frames. Ironically, this means that netplay-focused RE would actually focus on the game's rendering code and ensure that it doesn't mutate any statically allocated data, allowing it to be freely skipped without affecting the game. Imagine palette-based flashing animations that are implemented by gradually mutating statically allocated values – these would cause wrong colors for the rest of the game if the animation doesn't run on every frame.
The integration of all of this into TH03 can be approached from several angles. Of course, as long as we don't port the game, netplay will still require a PC-98 emulator to run on modern systems. PC-98 emulation is typically regarded as difficult to set up and the additional configuration required for some of these methods would only make it harder. However, yksoft1 demonstrates that it doesn't have to be: By compiling the (potentially modified) PC-98 emulator to WebAssembly, running any of these non-native methods becomes as simple as opening a website. To stay legally safe, I wouldn't host the game myself, so you'd still have to drag your th03.hdi onto that browser tab. But if you're happy with playing in a browser, this would be as user-friendly as it gets.
Here's an overview of the various approaches with their most important pros and cons:
Depending on what the backers prefer, we can go for one, a few, or all of these.
Generic PC-98 netcode for one or more emulators
This is the most basic and puristic variant that implements generic netplay for PC-98 games in general by effectively providing remote control of the emulated keyboard and joypad. The emulator will be unaware of the game, and the game will be unaware of being netplayed, which makes this solution particularly interesting for the non-Touhou PC-98 scene, or competitive players who absolutely insist on using ZUN's original binaries and won't trust any of my modded game builds.
Applied to TH03, this means that players would select the regular hot-seat 1P vs 2P mode and then initiate a match through a new menu in the emulator UI. The same UI must then provide an option to manually remap incoming key and button presses to the 2P controls (newly introducing remapping to the emulator if necessary), as well as blocking any non-2P keys. The host then sends an initial savestate to the guest to ensure an identical starting state, and starts synchronizing and rolling back inputs at VSync boundaries.
This generic nature means that we don't get to include any of the TH03-specific rollback optimizations mentioned above, leading to the highest CPU and memory requirements out of all the variants. It sure is the easiest to implement though, as we get to freely use modern C++ WebRTC libraries that are designed to work with the network stack of the underlying OS.
I can try to build this netcode as a generic library that can work with any PC-98 emulator, but it would ultimately be up to the respective upstream developers to integrate it into official releases. Therefore, expect this variant to require separate funding and custom builds for each individual emulator codebase that we'd like to support.
Emulator-level netcode with game-specific hooks
Takes the generic netcode developed in 1) and adds the possibility for the game to control it via a special interrupt API (see the sketch after this list). This enables several improvements:
Online matches could be initiated through new options in TH03's main menu rather than the emulator's UI.
The game could communicate the memory region that should be backed up every frame, cutting down memory usage as described above.
The exchanged input data could use the game's internal format instead of keyboard or joypad inputs. This removes the need for key remapping at the emulator level and naturally prevents the inherent issue of remote control where players could mess with each other's controls.
The game could be aware of the rollbacks, allowing it to jump over its rendering code while processing the queue of remote inputs and thus gain some performance as explained above.
The game could add synchronization points that block gameplay until both players have reached them, preventing the rollback queue from growing infinitely. This solves the issue of 1) not having any inherent way of working around desyncs and the resulting growth of the rollback queue. As an example, if one of the two emulators in 1) took, say, 2 seconds longer to load the game due to a random CPU spike caused by some bloatware on their system, the two players would be out of sync by 2 seconds for the rest of the session, forcing the faster system to render 113 frames every time an input prediction turned out to be incorrect.
Good places for synchronization points include the beginning of each round, the WARNING!! You are forced to evade / Your life is in peril popups that pause the game for a few frames anyway, and whenever the game is paused via the ESC key.
During such pauses, the game could then also block the resuming ESC key of the player who didn't pause the game.
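Since none of this interrupt API exists yet, here's a purely hypothetical sketch of the game-side calls in Turbo C++ 4.0J style – the vector number and the register protocol are entirely made up:

```cpp
#include <dos.h>

#define NETCODE_INT 0xF8 /* hypothetical, must not clash with DOS/BIOS */

/* Hypothetical function numbers of the emulator-provided netcode API. */
#define NETCODE_SUBMIT_INPUT 0x0001
#define NETCODE_SYNC_POINT   0x0002

void netcode_submit_input(unsigned int input_bits)
{
	union REGS regs;
	regs.x.ax = NETCODE_SUBMIT_INPUT;
	regs.x.bx = input_bits; /* game-internal format, not key codes */
	int86(NETCODE_INT, &regs, &regs);
}

void netcode_sync_point(void)
{
	union REGS regs;
	regs.x.ax = NETCODE_SYNC_POINT;
	int86(NETCODE_INT, &regs, &regs); /* blocks until both players arrive */
}
```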
Emulated serial port communicating over named pipes with a standalone netplay tool
This approach would take the netcode developed in 2) out of the emulator and into a separate application running on the (modern) host OS, just like Ju.N.Owen or Adonis. The previous interrupt API would then be turned into a binary protocol communicated over the PC-98's serial port, while the rollback snapshots would be stored inside the emulated PC-98 in EMS or XMS/Protected Mode memory. Netplay data would then move through these stages:
🖥️ PC-98 game logic ⇄ Serial port ⇄ Emulator ⇄ Named pipe ⇄ Netcode logic ⇄ WebRTC Data Channel ⇄ Internet 🛜
All green steps run natively on the host OS.
Sending serial port data over named pipes is only a semi-common feature in PC-98 emulators, and would currently restrict netplay to Neko Project 21/W and NP2kai on Windows. This is a pretty clean and generally useful feature to have in an emulator though, and emulator maintainers will be much more likely to include this than the custom netplay code I proposed in 1) and 2). DOSBox-X has an open issue that we could help implement, and the NP2kai Linux port would probably also appreciate a mkfifo(3) implementation.
This could even work with emulators that only implement PC-98 serial ports in terms of, well, native Windows serial ports. This group currently includes Neko Project II fmgen, SL9821, T98-Next, and rare bundles of Anex86 that replace MIDI support with COM port emulation. These would require separately installed and configured virtual serial port software in place of the named pipe connection, as well as support for actual serial ports in the netplay tool itself. In fact, this is the only way that die-hard Anex86 and T98-Next fans could enjoy any kind of netplay on these two ancient emulators.
If it works though, it's the optimal solution for the emulated use case if we don't want to fork the emulator. From the point of view of the PC-98, the serial port is the cheapest way to send a couple of bytes to some external thing, and named pipes are one of many native ways for two Windows/Linux applications to efficiently communicate.
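For illustration, the named pipe end inside the netplay tool wouldn't amount to much more than this sketch; the pipe name is a placeholder that would have to match the emulator's setting:

```cpp
#include <windows.h>

// Host side of approach 3): a named pipe server that the emulator's
// serial port redirection connects to.
int main(void)
{
	HANDLE pipe = CreateNamedPipeA(
		"\\\\.\\pipe\\pc98-serial", PIPE_ACCESS_DUPLEX,
		(PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT),
		1, 4096, 4096, 0, NULL
	);
	if (pipe == INVALID_HANDLE_VALUE) {
		return 1;
	}
	if (!ConnectNamedPipe(pipe, NULL)) { // blocks until the emulator connects
		return 1;
	}
	unsigned char byte;
	DWORD transferred;
	while (ReadFile(pipe, &byte, 1, &transferred, NULL) && (transferred == 1)) {
		// Bytes of the game's serial port protocol arrive here, and
		// would be forwarded into the WebRTC data channel.
	}
	CloseHandle(pipe);
	return 0;
}
```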
The only slight drawback of this approach is the expected high DOS memory requirement for rollback. Unless we find a way to really compress game state snapshots to just a few KB, this approach will require a more modern DOS setup with EMS/XMS support instead of the pre-installed MS-DOS 3.30C on a certain widely circulated .HDI copy. But apart from that, all you'd need to do is run the separate netplay tool, pick the same pipe name in both the tool and the emulator, and you're good to go.
It could even work for real hardware, but would require the PC-98 to be linked to the separately running modern system via a null modem cable.
Native PC-98 Windows 9x netcode (only for real PC-98 hardware equipped with an Ethernet card)
Equivalent in features to 2), but pulls the netcode into the PC-98 system itself. The tool developed in 3) would then run as a separate 32-bit or 16-bit Windows application that somehow communicates with the game running in a DOS window. The handful of real-hardware owners who have actually equipped their PC-98 with a network card such as the LGY-98 would then no longer require the modern PC from 3) as a bridge in the middle.
This specific card also happens to be low-level-emulated by the 21/W fork of Neko Project. However, it makes little sense to use this technique in an emulator when compared to 3), as NP21/W requires a separately installed and configured TAP driver to actually be able to access your native Windows Internet connection. While the setup is well-documented and I did manage to get a working Internet connection inside an emulated Windows 95, it's definitely not foolproof. Not to mention DOSBox-X, which currently emulates the apparently hardware-compatible NE2000 card, but disables its emulation in PC-98 mode, most likely because its I/O ports clash with the typical peripherals of a PC-98 system.
And that's not the end of the drawbacks:
Netplay would depend on the PC-98 versions of Windows 9x and its full network stack, nothing of which is required for the game itself.
Porting libdatachannel (and especially the required transport encryption) to Windows 95 will probably involve a bit of effort as well.
As would actually finding a way to access V86 mode memory from a 32-bit or 16-bit Windows process, particularly due to how isolated DOS processes are from the rest of the system and even each other. A quick investigation revealed three potential approaches:
A 32-bit process could read the memory out of the address space of the console host process (WINOA32.MOD). There seems to be no way of locating the specific base address of a DOS process, but you could always do a brute-force search through the memory map.
If started before Windows, TSRs will share their resident memory with both DOS and Win16 processes. The segment pointer would then be retrieved through a typical interrupt API.
Writing a VxD driver 😩
Correctly setting up TH03 to run within Windows 95 to begin with can be rather tricky. The GDC clock speed check needs to be either patched out or overridden using mode-setting tools, Windows needs to be blocked from accessing the FM chip, and even then, MAIN.EXE might still immediately crash during the first frame and leave all of VRAM corrupted:
This is probably a bug in the latest ver0.86 rev92β3 version of Neko Project 21/W; I got it to work fine on real hardware. 📝 StormySpace did run on the same emulated Windows 95 system without any issues, though. Regardless, it's still worth mentioning as a symbol of everything that can go wrong.
A matchmaking server would be much more of a requirement than in any of the emulator variants. Players are unlikely to run their favorite chat client on the same PC-98 system, and the signaling codes are way too unwieldy to type them in manually. (Then again, IRC is always an option, and the people who would fund this variant are probably the exact same people who are already running IRC clients on their PC-98.)
Native PC-98 DOS netcode (only for real PC-98 hardware equipped with an Ethernet card)
Conceptually the same as 4), but going yet another level deeper, replacing the Windows 9x network stack with a DOS-based one. This might look even more intimidating and error-prone, but after I got ping and even Telnet working, I was pleasantly surprised at how much simpler it is when compared to the Windows variant. The whole stack consists of just one LGY-98 hardware information tool, a LGY-98 packet driver TSR, and a TSR that implements TCP/IP/UDP/DNS/ICMP and is configured with a plaintext file. I don't have any deep experience with these protocols, so I was quite surprised that you can implement all of them in a single 40 KiB binary. Installed as TSRs, the entire stack takes up an acceptable 82 KiB of conventional memory, leaving more than enough space for the game itself. And since both of the TSRs are open-source, we can even legally bundle them with the future modified game binaries.
The matchmaking issue from the Windows 9x approach remains though, along with the following issues:
Porting libdatachannel and the required transport encryption to the TEEN stack seems even more time-consuming than a Windows 95 port.
The TEEN stack has no UI for specifying the system's or gateway's IP addresses outside of its plaintext configuration file. This provides a nice opportunity for adding a new Internet settings menu with great error feedback to the game itself. Great for UX, but it's another thing I'd have to write.
Native netcode in a modern port
As always, this is the premium option. If the entire game already runs as a standalone executable on a modern system, we can just put all the netcode into the same binary and have the most seamless integration possible.
That leaves us with these prerequisites:
1), by definition, needs nothing from ReC98, and I could theoretically start implementing it right now. If you're interested in funding it, just tell me via the usual Twitter or Discord channels.
2) through 5) require at least 100% RE of TH03's OP.EXE to facilitate the new menu code. Reverse-engineering all rendering-related code in MAIN.EXE would be nice for performance, but we don't strictly need all of it before we start. Re-simulated frames can just skip over the few pieces of rendering code we do know, and we can gradually increase the skipped area of code in future pushes. 100% PI won't be a requirement either, as I expect the MAIN.EXE part of the interfacing netcode layer to be thin enough that it can easily fit within the original game's code layout.
Therefore, funding TH03 OP.EXE RE is the clearest way you can signal to me that you want netplay with nice UX.
6), obviously, requires all of TH03 to be RE'd, decompiled, cleaned up, and ported to modern systems. Currently, TH03 appears to be the second-easiest game to port behind TH02:
Although TH03 already has more needlessly micro-optimized ASM code than TH02 and there's even more to come, it still appears to have way less than TH04 or TH05.
Its game logic and rendering code seem to be somewhat neatly separated from each other, unlike TH01 which deeply intertwines them.
Its graphics seem free of obvious bugs, unlike – again – the flicker-fest that is TH01.
But still, it's the game with the least amount of RE%. Decompilation might get easier once I've worked myself up to the higher levels of game code, and even more so if we're lucky and all of the 9 characters are coded in a similar way, but I can't promise anything at this point.
Once we've reached any of these prerequisites, I'll set up a separate campaign funding method that runs parallel to the cap. As netplay is one of those big features where incremental progress makes little sense and we can expect wide community support for the idea, I'll go for a more classic crowdfunding model with a fixed goal for the minimum feature set and stretch goals for optional quality-of-life features. Since I've still got two other big projects waiting to be finished, I'd like to at least complete the Shuusou Gyoku Linux port before I start working on TH03 netplay, even if we manage to hit any of the funding goals before that.
For the first time in a long while, the actual content of this push can be listed fairly quickly. I've now RE'd:
conversions from playfield-relative coordinates to screen coordinates and back (a first in PC-98 Touhou; even TH02 uses screen space for every coordinate I've seen so far),
the low-level code that moves the player entity across the screen,
a copy of the per-round frame counter that, for some reason, resets to 0 at the start of the Win/Lose animation, resetting a bunch of animations with it,
a global hitbox with one variable that sometimes stores the center of an entity, and sometimes its top-left corner,
and the 48×48 hit circles from EN2.PI.
It's also the third TH03 gameplay push in a row that features inappropriate ASM code in places that really, really didn't need any. As usual, the code is worse than what Turbo C++ 4.0J would generate for idiomatic C code, and the surrounding code remains full of untapped and quick optimization opportunities anyway. This time, the biggest joke is the sprite offset calculation in the hit circle rendering code:
A multiplication by 6 would have compiled into a single IMUL instruction. This compiles into 4 MOVs, one IMUL (with 2), and two ADDs. This surely must have been left in on purpose for us to laugh about it one day?
But while we've all come to expect the usual share of ZUN bloat by now, this is also the first push without either a ZUN bug or a landmine since I started using these terms! 🎉 It does contain a single ZUN quirk though, which can also be found in the hit circles. This animation comes in two types with different caps: 12 animation slots across both playfields for the enemy circles shown in alternating bright/dark yellow colors, whereas the white animation for the player characters has a cap of… 1? P2 takes precedence over P1 because its update code always runs last, which explains what happens when both players get hit within the 16 frames of the animation:
If they both get hit on the exact same frame, the animation for P1 never plays, as P2 takes precedence.
If the other player gets hit within 16 frames of an active white circle animation, the animation is reinitialized for the other player as there's only a single slot to hold it. Is this supposed to telegraph that the other player got hit without them having to look over to the other playfield? After all, they're drawn on top of most other entities, but below the player.
SPRITE16 uses the PC-98's EGC to draw these single-color sprites. If the EGC is already set up, it can be set into a GRCG-equivalent RMW mode using the pattern/read plane register (0x4A2) and foreground color register (0x4A6), together with setting the mode register (0x4A4) to 0x0CAC. Unlike the typical blitting operations that involve its 16-dot pattern register, the EGC even supports 8- or 32-bit writes in this mode, just like the GRCG. 📝 As expected for EGC features beyond the most ordinary ones though, T98-Next simply sets every written pixel to black on a 32-bit write. Comparing the actual performance of such writes to the GRCG would be 📝 yet another interesting question to benchmark.
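In (Turbo) C terms, that setup boils down to three port writes. A minimal sketch, assuming the EGC has already been unlocked and activated – note that only the 0x0CAC mode value is documented above, so the plane mask is left to the caller as a placeholder:

#include <dos.h>

void egc_setup_rmw(unsigned plane_mask, unsigned color)
{
	outportw(0x04A2, plane_mask); /* pattern/read plane register */
	outportw(0x04A4, 0x0CAC);     /* mode register: the GRCG-equivalent RMW mode */
	outportw(0x04A6, color);      /* foreground color register */
}

From then on, regular writes to the VRAM segment at A800h blit their set bits in the given color, just like they would in the GRCG's RMW mode.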
Next up: I think it's time for ReC98's build system to reach its final form.
For almost 5 years, I've been using an unreleased sane build system on a parallel private branch that was just missing some final polish and bugfixes. Meanwhile, the public repo is still using the project's initial Makefile that, 📝 as typical for Makefiles, is so unreliable that BUILD16B.BAT force-rebuilds everything by default anyway. While my build system has scaled decently over the years, something even better happened in the meantime: MS-DOS Player, a DOS emulator exclusively meant for seamless integration of CLI programs into the Windows console, has been forked and enhanced enough to finally run Turbo C++ 4.0J at an acceptable speed. So let's remove DOSBox from the equation, merge the 32-bit and 16-bit build steps into a single 32-bit one, set all of this up in a user-friendly way, and maybe squeeze even more performance out of MS-DOS Player specifically for this use case.
That was quick: In a surprising turn of events, Romantique Tp themselves came in just one day after the last blog post went up, updated me with their current and much more positive opinion on Sound Canvas VA, and confirmed that real SC-88Pro hardware clamps invalid Reverb Macro values to the specified range. I promised to release a new Sound Canvas VA BGM pack for free once I knew the exact behavior of real hardware, so let's go right back to Seihou and also integrate the necessary SysEx patches into the game's MIDI player behind a toggle. This would also be a great occasion to quickly incorporate some long overdue code maintenance and build system improvements, and a migration to C++ modules in particular. When I started the Shuusou Gyoku Linux port a year ago, the combination of modules and <windows.h> threw lots of weird errors and even crashed the Visual Studio compiler. But nowadays, Microsoft even uses modules in the Office code base. This must mean that these issues are fixed by now, right?
Well, there's still a bug that causes the modularized C++ standard library to be basically unusable in combination with the static analyzer, and somehow, I was the first one to report it. So it's 3½ years after C++20 was finalized, and somehow, modules are still a bleeding-edge feature and a second-class citizen in even the compiler that supports them the best. I want fast compile times already! 😕
Thankfully, Microsoft agrees that this is a bug, and will work on it at some point. While we're waiting, let's return to the original plan of decompiling the endings of the one PC-98 Touhou game that still needed them decompiled.
After the textless slideshows of TH01, TH02 was the first Touhou game to feature lore text in its endings. Given that this game stores its 📝 in-game dialog text in fixed-size plaintext files, you wouldn't expect anything more fancy for the endings either, so it's not surprising to see that the END?.TXT files use the same concept, with 44 visible bytes per line followed by two bytes of padding for the CR/LF newline sequence. Each of these lines is typed to the screen in full, with all whitespace and a fixed time for each 2-byte chunk.
As a result, everything surrounding the text is just as hardcoded as TH01's endings were, which once again opens up the possibility of freely integrating all sorts of creative animations without the overhead of an interpreter. Sadly, TH02 only makes use of this freedom in a mere two cases: the picture scrolling effect from Reimu's head to Marisa's head in the Bad Endings, and a single hardware palette change in the Good Endings.
Powered by master.lib's egc_shift_down().
Same image, different palette. Note how the palette for 2️⃣ must still contain a green color for the VRAM-rendered bold text, which the image is not supposed to use.
Hardcoding also still made sense for this game because of how the ending text is structured. The Good and Bad Endings for the individual shot types respectively share 55% and 77% of their text, and both only diverge after the first 27 lines. In straight-line procedural code, this translates to one branch for each shot type at a single point, neatly matching the high-level structure of these endings.
But that's the end of the positive or neutral aspects I can find in these scripts. The worst part, by far, is ZUN's approach to displaying the text in alternating colors, and how it impacts the entire structure of the code.
The simplest solution would have involved a hardcoded array with the color of each line, just like how the in-game dialogs store the face IDs for each text box. But for whatever reason, ZUN did not apply this piece of wisdom to the endings and instead hardcoded these color changes by… mutating a global variable before calling the text typing function for every individual line. This approach ruins any possibility of compressing the script code into loops. While ZUN did use loops, all of them are very short because they can only last until the next color change. In the end, the code contains 90 explicitly spelled-out calls to the 5-parameter line typing function that only vary in the pointer to each line and in the slower speed used for the one or two final lines of each ending. As usual, I've deduplicated the code in the ReC98 repository down to a sensible level, but here's the full inlined and macro-expanded horror:
It's highly likely that this is what ZUN hacked into his PC-98 and was staring at back in 1997.
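For illustration, the underlying pattern looks roughly like this; all identifiers are hypothetical, and the real typing function takes 5 parameters:

/* What ZUN did: mutate a global before every call… */
text_color = 3;
end_type_line(/* line 0, … */);
end_type_line(/* line 1, … */);
text_color = 7;
end_type_line(/* line 2, … */);
/* …and so on, for all 90 calls. */

/* What a per-line color array would have allowed: one loop per ending. */
static const unsigned char LINE_COLORS[LINE_COUNT] = { 3, 3, 7, /* … */ };
int i;
for (i = 0; i < LINE_COUNT; i++) {
	text_color = LINE_COLORS[i];
	end_type_line(/* line i, … */);
}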
All this redundancy bloats the two script functions for the 6 endings to a whopping 3,344 bytes inside TH02's MAINE.EXE. In particular, the single function that covers the three Good Endings ends up with a total of 631 x86 ASM instructions, making it the single largest function in TH02 and the 7th longest function in all of PC-98 Touhou. If the 📝 single-executable build for TH02's debloated and anniversary branches ends up needing a few more KB to reduce its size below the original MAIN.EXE, there are lots of opportunities to compress it all.
The ending text can also be fast-forwarded by holding any key. As we've come to expect for this sort of ZUN code, the text typing function runs its own rendering loop with VSync delays and input detection, which means that we 📝 once again have to talk about the infamous quirk of the PC-98 keyboard controller in relation to held keys. We've still got 54 not yet decompiled calls to input detection functions left in this codebase, are you excited yet?!
Holding any key speeds up the text of all ending lines before the last one by displaying two kana/kanji instead of one per rendered frame and reducing the delay between the rendered frames to 1/3 of its regular length. In pseudocode:
for(i = 0; i < number_of_2_byte_chunks_on_displayed_line; i++) {
input = convert_current_pc98_bios_input_state_to_game_specific_bitflags();
add_chunk_to_internal_text_buffer(i);
blit_internal_text_buffer_from_the_beginning();
if(input == INPUT_NONE) {
// Basic case, no key pressed
frame_delay(frames_per_chunk);
} else if((i % 2) == 1) {
// Key pressed, chunk number is odd.
frame_delay(frames_per_chunk / 3);
} else {
// Key pressed, chunk number is even.
// No delay; next iteration adds to the same frame.
}
}
This is exactly the kind of code you would write if you wanted to deliberately maximize the impact of this hardware quirk. If the game happens to read the current input state right after a key up scancode for the last previously held and game-relevant key, it will then wrongly take the branch that uninterruptibly waits for the regular, non-divided amount of VSync interrupts. In my tests, this broke the rhythm of the fast-forwarded text about once per line. Note how this branch can also be taken on an even chunk: Rendering glyphs straight from font ROM to VRAM is not exactly cheap, and if each iteration (needlessly) blits one more full-width glyph than the last one, the probability of a key up scancode arriving in the middle of a frame only increases.
The fact that TH02 allows any of the supported input keys to be held points to another detail of this quirk I haven't mentioned so far. If you press multiple keys at once, the PC-98's keyboard controller only sends the periodic key up scancodes as long as you are holding the last key you pressed. Because the controller only remembers this last key, pressing and releasing any other key would get rid of these scancodes for all keys you are still holding.
As usual, this ZUN bug only occurs on real hardware and with DOSBox-X's correct emulation of the PC-98 keyboard controller.
After the ending, we get to witness the most seamless transition between ending and Staff Roll in any Touhou game as the BGM immediately changes to the Staff Roll theme, and the ending picture is shifted into the same place where the Staff Roll pictures will appear. Except that the code misses the exact position by four pixels, and cuts off another four pixels at the right edge of the picture:
Also, note the green 1-pixel line at the right edge of this specific picture. This is a bug in the .PI file where the picture is indeed shifted one pixel to the left.
What follows is a comparatively large amount of unused content for a single scene. It starts right at the end of this underappreciated 11-frame animation loaded from ENDFT.BFT:
Wastefully using the 4bpp BFNT format. The single frame at the end of the animation is unused; while it might look identical to the ZUN glyphs later on in the Staff Roll, that's only because both are independently rendered boldfaced versions of the same font ROM glyphs. Then again, it does prove that ZUN created this animation on a PC-98 model made by NEC, as the Epson clones used a font ROM with a distinctly different look.
TH02's Staff Roll is also unique for the pre-made screenshots of all 5 stages that get shown together with a fancy rotating rectangle animation while the Staff Roll progresses in sync with the BGM. The first interesting detail shows up immediately after the first image, where the code jumps over one of the 320×200 quarters in ED06.PI, leaving the screenshot of the Stage 2 midboss unused.
All of the cutscenes in PC-98 Touhou store their pictures as 320×200 quarters within a single 640×400 .PI file. Anywhere else, all four quarters are supposed to be displayed with the same palette specified in the .PI header, but TH02's Staff Roll screenshots are also unique in how all quarters beyond the top-left one require palettes loaded from external .RGB files to look right. Consequently, the game doesn't clearly specify the intended palette of this unused screenshot, and leaves two possibilities:
The unused second 320×200 quarter of TH02's ED06.PI, displayed in the Stage 2 color palette used in-game.
The unused second 320×200 quarter of TH02's ED06.PI, displayed in the palette specified in the .PI header. These are the colors you'd see when looking at the file in a .PI viewer, when converting it into another format with the usual tools, or in sprite rips that don't take TH02's hardcoded palette changes into account. These colors are only intended for the Stage 1 screenshot in the top-left quarter of the file.
The unused second 320×200 quarter of TH02's ED06.PI, displayed in the palette from ED06B.RGB, which the game uses for the following screenshot of the Meira fight. As it's from the same stage, it almost matches the in-game colors seen in 1️⃣, and only differs in the white color (#FFF) being slightly red-tinted (#FCC).
It might seem obvious that the Stage 2 palette in 1️⃣ is the correct one, but ZUN indeed uses ED06B.RGB with the red-tinted white color for the following screenshot of the Meira fight. Not only does this palette not match Meira's in-game appearance, but it also discolors the rectangle animation and the surrounding Staff Roll text:
Also, that tearing on frame #1 is not a recording artifact, but the expected result of yet another VSync-related landmine. 💣 This time, it's caused by the combination of 1) the entire sequence from the ending to the verdict screen being single-buffered, and 2) this animation always running immediately after an expensive operation (640×400 .PI image loading and blitting to VRAM, 320×200 VRAM inter-page copy, or hardware palette loading from a packed file), without waiting for the VSync interrupt. This makes it highly likely for the first frame of this animation to start rendering at a point where the (real or emulated) electron beam has already traveled over a significant portion of the screen.
But when I went into Stage 2 to compare these colors to the in-game palette, I found something even more curious. ZUN obviously made this screenshot with the Reimu-C shot type, but one of the shot sprites looks slightly different from how it does in-game. These screenshots must have been made earlier in development when the sprite didn't yet feature the second ring at the top. The same applies to the Stage 4 screenshot later on:
Finally, the rotating rectangle animation delivers one more minor rendering bug. Each of the 20 frames removes the largest and outermost rectangle from VRAM by redrawing it in the same black color of the background before drawing the remaining rectangles on top. The corners of these rectangles are placed on a shrinking circle that starts with a radius of 256 pixels and is centered at (192, 200), which results in a maximum possible X coordinate of 448 for the rightmost corner of the rectangle. However, the Staff Roll text starts at an X coordinate of 416, causing the first two full-width glyphs to still fall within the area of the circle. Each line of text is also only rendered once before the animation. So if any of the rectangles then happens to be placed at an angle that causes its edges to overlap the text, its removal will cut small holes of black pixels into the glyphs:
The green dotted circle corresponds to the newest/smallest rectangle. Note how ZUN only happened to avoid the holes for the two final animations by choosing an initial angle and angular velocity that causes the resulting rectangles to just barely avoid touching the TEST PLAYER glyphs.
At least the following verdict screen manages to have no bugs aside from the slightly imperfect centering of its table values, and only comes with a small amount of additional bloat. Let's get right to the mapping from skill points to the 12 title strings from END3.TXT, because one of them is not like the others:
Skill     Title
≥100      神を超えた巫女!!
90 - 99   もはや神の領域!!
80 - 89   A級シューター!!
78 - 79   うきうきゲーマー!
77        バニラはーもにー!
70 - 76   うきうきゲーマー!
60 - 69   どきどきゲーマー!
50 - 59   要練習ゲーマー
40 - 49   非ゲーマー級
30 - 39   ちょっとだめ
20 - 29   非人間級
10 - 19   人間でない何か
≤9        死んでいいよ、いやいやまじで
Looks like I'm the first one to document the required skill points as well? Everyone else just copy-pastes END3.TXT without providing context.
So how would you get exactly 77 and achieve vanilla harmony? Here's the formula:
* Ranges from 0 (Easy) to 3 (Lunatic). † Across all 5 stages.
With Easy Mode capping out at 85, this is possible on every difficulty, although it requires increasingly perfect play the lower you go. Reaching 77 on purpose, however, pretty much demands a careful route through the entire game, as every collected and missed item will influence the item_skill in some way. This almost feels like it's the ultimate challenge that this game has to offer. Looking forward to the first Vanilla Harmony% run!
And with that, TH02's MAINE.EXE is both fully position-independent and ready for translation. There's a tiny bit of undecompiled code left in the binary, but I'll leave that for rounding out a future TH02 decompilation push.
With one of the game's skill-based formulas decompiled, it's fitting to round out the second push with the other two. The in-game bonus tables at the end of a stage also have labels that we'd eventually like to translate, after all.
The bonus formula for the 4 regular stages is also the first place where we encounter TH02's rank value, as well as the only instance in PC-98 Touhou where the game actually displays a rank-derived value to the player. KirbyComment and Colin Douglas Howell accurately documented the rank mechanics over at Touhou Wiki two years ago, which helped quite a bit as rank would have been slightly out of scope for these two pushes. 📝 Similar to TH01, TH02's rank value only affects bullet speed, but the exact details of how rank is factored in will have to wait until RE progress arrives at this game's bullet system.
These bonuses are calculated by taking a sum of various gameplay metrics and multiplying it with the amount of point items collected during the stage. In the 4 regular stages, the sum consists of:
難易度: Difficulty level* × 2,000
ステージ: (Rank + 16) × 200
ボム: max((2,500 - (Bombs used* × 500)), 0)
ミス: max((3,000 - (Lives lost* × 1,000)), 0)
靈撃初期数: (4 - Starting bombs) × 800
靈夢初期数: (5 - Starting lives) × 1,000

* Within this stage, across all continues.
Yup, 封魔録.TXT does indeed document this correctly.
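Spelled out in C, the whole thing would look something like this sketch; the identifiers are made up rather than taken from ReC98's decompilation:

long stage_bonus(
	int difficulty,   /* 0 (Easy) to 3 (Lunatic) */
	int rank,
	int bombs_used,   /* within this stage, across all continues */
	int lives_lost,   /* within this stage, across all continues */
	int starting_bombs,
	int starting_lives,
	long point_items  /* collected during the stage */
)
{
	long sum = ((difficulty * 2000L) + ((rank + 16) * 200L));
	long bomb_term = (2500L - (bombs_used * 500L));
	long miss_term = (3000L - (lives_lost * 1000L));

	sum += ((bomb_term > 0) ? bomb_term : 0);
	sum += ((miss_term > 0) ? miss_term : 0);
	sum += ((4 - starting_bombs) * 800L);
	sum += ((5 - starting_lives) * 1000L);
	return (sum * point_items);
}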
As rank can range from -6 to +4 on Easy, and up to +16 on the other difficulties, this sum can range between:

         Easy     Normal   Hard     Lunatic
Minimum  2,800    4,800    6,800    8,800
Maximum  16,700   21,100   23,100   25,100
The sum for the Extra Stage is not documented in 封魔録.TXT:
クリア: 10,000
ミス回数: max((20,000 - (Lives lost × 4,000)), 0)
ボム回数: max((20,000 - (Bombs used × 4,000)), 0)
クリアタイム: ⌊max((20,000 - Boss fight frames*), 0) ÷ 10⌋ × 10
* Amount of frames spent fighting Evil Eye Σ, counted from the end of the pre-boss dialog until the start of the defeat animation.
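The ⌊…⌋ in the クリアタイム row maps to nothing more than C's integer division; a small sketch with a made-up parameter name:

long clear_time_term(long boss_fight_frames)
{
	long term = (20000L - boss_fight_frames);
	if (term < 0) {
		term = 0;
	}
	return ((term / 10) * 10); /* truncates the bonus to a multiple of 10 */
}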
And that's two pushes packed full of the most bloated and copy-pasted code that's unique to TH02! So bloated, in fact, that TH02 RE as a whole jumped by almost 7%, which in turn finally pushed overall RE% over the 60% mark. 🎉 It's been a while since we hit a similar milestone; 50% overall RE happened almost 2 years ago during 📝 P0204, a month before I completed the TH01 decompilation.
Next up: Continuing to wait for Microsoft to fix the static analyzer bug until May at the latest, and working towards the newly popular dreams of TH03 netplay by looking at some of its foundational gameplay code.
📝 Over two years since the previous largest delivery, we've now got a new record in every regard: 12 pushes across 5 repos, 215 commits, and a blog post with over 14,000 words and 48 pieces of media. 😱 Who would have thought that the superficially simple task of putting SC-88Pro recordings into Shuusou Gyoku would actually mainly focus on deep research into the underlying MIDI files? I don't typically cover much music-related content because it's a non-issue as far as PC-98 Touhou code is concerned, so it's quite fitting how extensive this one turned out. So here we go, the result of virtually unlimited funding and patience:
So where's the controversy? Romantique Tp obviously made the best and most careful real-hardware SC-88Pro recordings of all of ZUN's old MIDIs, including the original (OST) and arranged (AST) soundtrack of Shuusou Gyoku, right? Surely all I have to do now is to cut them into seamless loops to save a bit of disk space, and then put them into the game? Let's start at the end of the track list with the name registration theme, since it's light on instruments and has an obvious loop point that will be easy to spot in the waveform. But, um… wait a moment, that very first drum note comes a bit late, doesn't it?
At a notated tempo of 96 BPM, these first four beats should take exactly 2.5 seconds, which they do in this seamlessly looping softsynth rendering.
That's… not quite the accuracy and perfection I was expecting. But I think I know what we're seeing and hearing there. Let's look at the first few MIDI events across all channels:
Delta Pulse Beat Channel Event
+540 960 2:000 1 Controller { CC 0, value 0 }
+0 960 2:000 1 Controller { CC 32, value 0 }
+0 960 2:000 1 ProgramChange { 37 }
[…]
+0 960 2:000 2 Controller { CC 0, value 0 }
+0 960 2:000 2 Controller { CC 32, value 0 }
+0 960 2:000 2 ProgramChange { 19 }
[…]
+0 960 2:000 3 Controller { CC 0, value 0 }
+0 960 2:000 3 Controller { CC 32, value 0 }
+0 960 2:000 3 ProgramChange { 6 }
[…]
+0 960 2:000 4 Controller { CC 0, value 0 }
+0 960 2:000 4 Controller { CC 32, value 0 }
+0 960 2:000 4 ProgramChange { 2 }
[…]
Also, the fact that GS doesn't put its drums on a non-general voice bank and instead relies on external channel configuration to differentiate drums from pitched instruments is making this Yamaha kid uncontrollably furious. 🤬
Yup. That's the sound of a vintage hardware synth being slow and taking a two-digit number of milliseconds to process a barrage of simultaneous Program Change messages, playing a MIDI file that doesn't take this reality into account and expects program changes to happen instantly.
I can only speak from my own experience of writing MIDIs for hardware synths here, but having the first note displaced by 50 ms is very much not the way a composer would have intended the music to be heard if the note is clearly notated to occur on the beat. If you had told me about such an issue when playing one of my MIDIs on a certain synth, I would have thanked you for the bug report! And I would have promptly released a fixed version of the MIDI with the Program Change events moved back by a beat or two. In the case of Shuusou Gyoku's MIDIs, this wouldn't even have added any additional delay in-game, as all of these files already start with at least one beat of leading silence to make room for setting Roland-specific synth parameters.
OK, but that's just a single isolated bass drum hit. If we wanted to, we could even fix this issue ourselves by splicing the same note from around the loop end point. Maybe this is just an isolated case and the rest of Romantique Tp's recordings are fine? Well…
By the way, this seamless audio player is what consumed most of the two website pushes this time. The rest went to the slightly redesigned main page, whose progress bars now use the cap bar style and the GitHub badge colors.
This one is even worse. Here, the delay is so long relative to the tempo of the piece that the intended five drum hits pretty much turn into four.
This type of issue doesn't even have to be isolated to the very beginning of a piece. A few of the tracks in both the OST and AST start with an anacrusis on just one or two channels and leave the Program Change event barrage at the beginning of the first full measure. In 幻想科学 ~ Doll's Phantom for example, this creates a flam-like glitch where the bass on channel 2 is pretty much on time, but the crash hit on channel 10 only follows 50 ms later, after the SC-88Pro took its sweet time to process all the Program Change events on the channels between:
This is from the arranged soundtrack for a change. In that one, ZUN at least fixed the issue in the final three MIDIs (シルクロードアリス, 魔女達の舞踏会, and 二色蓮花蝶 ~ Ancients) that closed out this rearranging project in May 2001, which spread out their per-channel setup events over at least a single measure before playing any note.
Sure, all of this is barely noticeable in casual listening, but very noticeable if you're the one who now has to cut these recordings into seamless loops. And these are just the most obvious timing issues that can be easily pinpointed and documented – the actual worst aspects are all the minor tempo and timing fluctuations throughout most of the pieces. With recordings that deviate ever so slightly from the tempo defined in the MIDI files, you can no longer rely on mathematically exact sample positions when cutting loops. Even if those positions do work out from time to time, there'd pretty much always be a discontinuity in the waveform at both ends of the loop, manifesting as a clearly audible click. In the end, the only way of finding good loop points in existing recordings involves straining your ears and listening very, very closely to avoid any audible glitches. 😩
But if you've taken a look at the second tabs in the clips above, you will have noticed that we don't necessarily have to be stuck with recordings from real hardware. In late 2015, Roland released Sound Canvas VA, a VST plugin that emulates the classic core of Roland's old Sound Canvas lineup, including the SC-88Pro. As long as we run such a software synthesizer through a quality VST host, a purely software-based solution should be way superior for recording looped BGM:
By moving from real-time recording to an offline rendering paradigm, we get perfectly accurate note timing, as it no longer matters how long the synth takes to produce each output sample.
We stay entirely in the digital realm instead of going from digital (SC-88Pro) to analog (RCA cable) to digital (line-in recording) again, removing any chance for noise or distortion to ruin audio quality.
We get to directly render at 44,100 Hz instead of being limited to the 32,000 Hz signal coming out of the SC-88Pro's DAC. This can be easily noticed in the half-speed video above, whose SCVA version retains significantly more sibilant high-frequency content compared to the more muffled sound of Romantique Tp's recording.
Doing that also makes it feasible to preserve loudness differences between the pieces of a soundtrack instead of eradicating them by normalizing the volume of each individual track to the digital maximum.
Finally, it's much more time-efficient. We simply hit foobar2000's Convert button and get all MIDIs rendered within a few seconds each, instead of having to wait the entire length of a piece.
Any drawbacks? For our use case, all of them are found in the abysmal software quality of everything around the synth engine. As is typical for the VST industry, Sound Canvas VA is excessively DRM'd – it takes multiple seconds to start up, and even then only allows a single process to run at any given time, immediately quitting every process beyond the first one with a misleading Parameter File1 Read Error message box. I totally believe anyone who claims that this makes SCVA more annoying than real hardware when composing new music. Retro gamers also dislike how Roland themselves no longer sell the 32-bit builds they used to offer for the first few versions. These old versions are now exclusively available through resellers, or on the seven seas.
But as far as the SC-88Pro emulation is concerned, there don't seem to be any technical reasons against it. There is a long thread over at VOGONS discussing all sorts of issues, but you have to dig quite deep to find any clear descriptions of bugs in SCVA's synth engine. Everything I found either only applies to the SC-55 emulation and not the SC-88Pro, was fixed by Roland in the meantime, or turned out to be a fixable bug in a MIDI file.
But wait, we've already heard one obvious difference between the real SC-88Pro and Sound Canvas VA. Let's listen to the very first clip again:
Ha! You can clearly hear a panning echo in the real-hardware recording that is missing from the Sound Canvas VA rendering. That's an obvious case of a core system effect not being reproduced correctly. If even that's undeniably broken, who knows which other subtle bugs SCVA suffers from, right? Case closed, Romantique Tp was right all along, SCVA is trash, real hardware reigns supreme
Actually, let's look closer into this one. Panning delay effects like this are typically reverb-related, but General MIDI only specifies a single controller for setting the per-channel reverb level from 0 to 127. Any specific characteristics of the reverb therefore have to be configured using vendor-specific system-exclusive messages, or SysEx for short.
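In raw bytes, the contrast looks like this. The controller in question is CC #91 (Effects 1 Depth); the values here are made-up examples, and the SysEx frame layout matches the manual excerpt we'll get to below:

B0 5B 28                          the single GM knob: set channel 1's reverb level to 40 (CC #91)
F0 41 10 42 12 aa aa aa vv cc F7  a Roland "DT1" SysEx write: value vv to address aa aa aa, checksum cc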
So it's down to one of the four SysEx messages at the beginning of the MIDI file:
Since these byte strings represent Roland-specific instructions, we can't learn anything from a raw MIDI event dump alone here. No problem though, let's just load these files into some old MIDI sequencer that targeted Roland synths, open its MIDI event list, and then they will be automatically decoded into a human-readable representation…
…or at least that's what I expected. In Yamaha land, XGworks has done that for Yamaha's own XG SysEx messages ever since 1997:
No configuration required. You can even edit the textual Value1 representation and XGworks parses it back into the closest supported value!
But for Roland synths, there's… nothing similar? Seriously? 😶 Roland fanboys, how do you even live?! I mean, they are quick to recommend the typical bloated and sluggish big-name DAWs that take up multiple gigabytes of disk space, but none of the ones I tried seemed to have this feature. They can't have possibly been flinging around raw byte strings for the past 33 years?!
But once you look more into today's MIDI community, it becomes clear that this is exactly what they've been doing. Why else would so many people use the word complicated to describe Roland SysEx, or call it an old school/cryptic communication protocol in hexadecimal format? The latter is particularly hilarious because if you removed the word cryptic, this might as well describe all of MIDI, not just SysEx. Everything about this is a tooling issue, and Yamaha showed how easily it could have been solved. Instead, we get Sound Canvas experts, who should know more about the ecosystem than I do, making the incredible mental leap from "my DAW doesn't decode or easily generate SysEx" to "SysEx is antiquated" to "please just lift up these settings to the VST level and into my proprietary DAW's proprietary project format, that would be so much better"…
Thankfully that's not entirely true. After some more digging and configuration, I found a somewhat workable solution involving a comparatively modern sequencer called Domino:
Open the File → Preferences menu and associate your MIDI output device with a module map. This makes sense for SysEx encoding/generation since it can limit the options in the UI to what's actually available on your target hardware, but is also required for selecting the respective SysEx map into Domino's SysEx decoder. There is no technical reason for this because SC-88Pro SysEx messages can be uniquely identified by the three vendor, device, and model ID bytes that every message starts with, but that would apparently have been too easy and user-friendly. The perception of SysEx being a black art must be upheld at all costs.
I've kept the garbled text of the partial translation to emphasize the sheer amount of jank involved in this entire process.
Load a MIDI file and let Domino "analyze" it:
Strangely enough, this will take quite a while – on my system, this analysis step runs at a speed of roughly 4.25 KB/s of MIDI data. Yes, kilobytes.
Unfortunately, "control change macro restoration" also seems to mean that you don't get to see any raw bytes when selecting the respective MIDI track in the UI, but at least we get what we were looking for:
…for the most part?
Alright, that's something we can work with. The GS Reset message is something that every Roland GS MIDI should start with, but it's immediately followed by a message that Domino failed to decode? The two subsequent reverb parameters make sense, but panning delays typically have more parameters than just a reverb level and time.
That unknown SysEx message shares much of the same bytes with the decoded ones though. So let's do what we maybe should have done all along, return to caveman, and check the SC-88Pro manual:
The relevant section from page 194. We can see how the address and value correspond to bytes 5-7 and 8 in the SysEx messages. Byte 9 is a checksum and byte 10 signals the end of the message.
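The checksum part is simple enough to express in a few lines of C – this sketch sums the address and data bytes (bytes 5-8 above) and derives the correction that brings the total to a multiple of 128:

unsigned char roland_checksum(const unsigned char *body, int len)
{
	int sum = 0;
	int i;
	for (i = 0; i < len; i++) { /* address and data bytes only */
		sum += body[i];
	}
	return (unsigned char)((128 - (sum % 128)) % 128);
}

Feeding it the four bytes 40 01 30 14 yields 0x85 as the sum and therefore 0x7B as the checksum, matching byte 9 of the message we're about to decode.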
And that's where we find what this particular issue boils down to. The missing SysEx message is clearly intended to be a Reverb Macro command, whose value can range from 0 to 7 inclusive on the SC-88Pro, but ZUN tries to specify Reverb Macro #14h, or 20 in decimal. The SC-88Pro manual does not specify what happens if a SysEx message wants to write an invalid value to a valid address, which means that we've firmly entered the territory of undefined behavior. Edit (2024-03-10): Romantique Tp confirmed that the real SC-88Pro clamps these Reverb Macro IDs to the supported range of 0-7. Therefore, the appropriate course of action for guaranteeing the same sound on other Roland synths would be to fix the MIDI file and specify Reverb Macro #7 instead. But since this behavior remains technically undefined, we can still argue about ZUN's intention behind specifying the Reverb Macro like this:
Clearly, ZUN did want to specify a valid Reverb Macro, but made a typo when manually entering the SysEx byte string, as he was forced to do thanks to terrible tooling. He clearly liked the resulting sound though, so the track should still be preserved with the panning reverb intact.
Clearly, the typical behavior for MIDI synths is to ignore invalid and unsupported SysEx messages, because validating user input is an important characteristic of quality software. This is what SCVA does, and what we hear in its rendering is the default hall reverb with ZUN's level and time adjustments. Therefore, SCVA is right, and the fact that we get a panning delay on the real SC-88Pro is a bug in real hardware.
Clearly, ZUN did not care enough about the reverb to specify a valid Reverb Macro. Whether we get the default reverb or a panning delay is an irrelevant performance detail, and does intentionally not matter when it comes to the intended sound of this track – especially since these four SysEx messages are the full extent of Roland GS-specific sound design in this piece, and the rest of it only uses standard MIDI features.
In fact, 32 out of the 39 MIDIs across both of Shuusou Gyoku's soundtracks use this invalid Reverb Macro. The only ones that don't are
both versions of Gates' theme (天空アーミー), which use the equally invalid Reverb Macro #11,
both versions of Milia's theme (プリムローズシヴァ), which use Reverb Macro #0 (Room 1),
and, again, the three arranged MIDIs that ZUN released last (シルクロードアリス, 魔女達の舞踏会, and 二色蓮花蝶 ~ Ancients), which feature a more detailed effect setup with custom chorus and EQ settings. In the case of Reimu's theme, these settings are even commented within the MIDI file.
And that's where this quest seemed to end, until Romantique Tp themselves came in and suggested that I take a closer look at the GS Advanced Editor, or GSAE for short.
Make sure to connect a MIDI input device before starting GSAE, or it will silently crash immediately after this splash screen. At least it accepts any controller, so this might just be a bug instead of the typical user-hostile kind of hardware dongle DRM that is pervasive in today's synth industry. 1999 would seem a bit too early for that, thankfully.
I was aware of this tool, but hadn't initially considered it because it's always described as just a SysEx generator/encoder. In fact, the very existence of such a tool made no sense to me at first, and seemed to prove my point that the usability of GS SysEx was wholly inferior to what I was used to in Yamaha land. Like, why not build at least a tiny and stripped-down MIDI sequencer around this functionality that would allow you to insert SC-88Pro-specific messages at any point within a sequence, and not just the beginning? I can see the need for such a tool in today's world of closed-source DAWs where hardware MIDI modules are niche and retro and are only kept alive by a small community of enthusiasts. But why would its developers take it for granted, even back in 1997, that MIDI composers would have to hop between programs? I can only imagine that they saw how every just slightly advanced MIDI sequencer or DAW back then already used its own project format instead of raw Standard MIDI Files, and assumed that composers would therefore be program-hopping anyway?
However, GSAE does support the import of settings from a MIDI file and features a SysEx history window that decodes every newly processed Roland SysEx byte string, which is all I was looking for. So let's throw in that same MIDI and…
That's the result of sending just the single F0 41 10 42 12 40 01 30 14 7B F7 message at the top.
Now that's some wild numbers. An equally invalid Reverb Character, and Reverb Level and Time values that even exceed their defined range of 0-127? Could it be that GSAE emulates the real-hardware response to invalid Reverb Macros here, and gives us the exact reverb setting we can hear in Romantique Tp's recording? This could even be the reason why GSAE is still used and recommended within today's Roland MIDI sequencing scene, and hasn't been supplanted by some more modern open-source tool written by the community.
In any case, these values have to come from somewhere, so let's reverse-engineer GSAE and figure out the logic behind them. Shoutout to IDR for being a great help with its automatic generation of IDC debug symbols for the Delphi standard library, and even including a few names of application-level widget class methods by reading Delphi-specific type information from the binary. This little sub-project made me also come around to appreciating Ghidra, whose decompiler and data type manager helped a lot and allowed me to find the relevant code section within just a few hours.
A~nd it turns out that the values all come from out-of-bounds accesses into arrays on the stack. If we combine 25, 235, and 132 back into a 32-bit value, we get 0x19EB84, which is the virtual address of the relevant function's stack frame base pointer.
But it gets even more hilarious: If you enable debug text output via Option → Other Options → SMF → Insert text events to setup measures and export these imported settings back into a MIDI file, GSAE not only retains these invalid Reverb Macro IDs, but stringifies them via a simple lookup into a hardcoded string pointer array, again without any bounds checks. The effects of this are roughly what you would expect:
Reverb Macro IDs between 8 and 27 simply insert wrong strings from adjacent string pointer arrays
Reverb Macro 28 crashes GSAE
Reverb Macro 64 causes GSAE to vomit 65,512 bytes of garbage into the MIDI file
In the end, we have Domino not decoding the Reverb Macro message, and GSAE, the premier SysEx tool for Roland synths, responding to it in even more undefined and clearly bugged ways than real hardware apparently does. That's two programs confirming that whatever ZUN intended was never supposed to work reliably. And while we still don't know exactly what these reverb parameters are supposed to be, these observations solve the mystery as far as I'm concerned, and solidify my personal opinion on the matter.
So what do we do now, and which version do we go with? Optimally, I'd offer both versions and turn this controversy into a personal choice so that everybody wins… and Ember2528 agreed and generously provided all the funding to make it happen. 💸
If you haven't picked your favorite yet, here are some final arguments:
The Romantique Tp recordings certainly have something going for them with their provenance of coming from real hardware, and the care that Romantique Tp put into manually recording every single track, warts and all. I wholeheartedly agree that preserving the raw sound of playing the MIDI files into the hardware without thinking about bugs or quirks is an important angle to take when it comes to preservation. It's good that these recordings exist – after all, you wouldn't know which musical elements you'd possibly be missing in an emulation if you have nothing to compare it to. Even the muffled sound in the half-speed clip above can be an argument in their favor, as the SC-88Pro's DAC operates at 32 kHz and you wouldn't expect any meaningful frequency content between 16,000 and 22,050 Hz to begin with. Any frequency content in that range that does remain in Romantique Tp's recording is simply 📝 rolled-off imaging noise added during the ADC's resampling process.
All this is why they are a definite improvement over kaorin's 2007 recordings of only the AST, which were the previous reference recordings within the community. Those had all of the same timing issues and more, in addition to being so excessively volume-boosted that 0.15% of the samples across the entire soundtrack ended up clipped. That's 6.25 seconds out of 68:39m being lost to pure digital noise.
Most importantly though: ZUN himself said that only the real SC-88Pro will play back these files as he intended them to sound. This quote is likely where the tagline of Romantique Tp's entire recording project came from in the first place:
> 全てのデータはSC-88ProもしくはSC-8850(ローランド社)にて最適に聴けるように調整してあります
> それ以外の音源でも、作者の意図した音ではない場合があります。
> (All the data has been adjusted to sound best on an SC-88Pro or SC-8850 (made by Roland). On other sound sources, the music might not sound the way the composer intended.)
— ZUN on 東方幻想的音楽, his old MIDI page
However. ZUN is not exactly known for accurately and carefully preserving the legacy of his series, or really doing anything beyond parading his old games as unobtainable showpieces at conventions. With all the issues we've seen, preferring real hardware is ultimately just that: an angle, and a preference. This is why I disagree with the heavy and uncritical advertising that is mainly responsible for elevating the Romantique Tp recordings to their current reference status within the community, especially if at least half of the alleged superiority of real hardware is founded on undefined behavior that can easily be fixed in the MIDI files themselves if people only bothered to look.
Here's where I stand: MIDI files are digital sheet music first and foremost, not an inferior version of tracker modules where the samples are sold separately. As such, the specific synth a MIDI file was written for is merely a secondary property of the composition – and even more so if the MIDI file contains little to nothing in terms of sound design and mostly restricts itself to the basic feature set of General MIDI. In turn, synth quirks and bugs are not a defined part of the composition either, unless they are clearly annotated and documented in the file itself. And most importantly: If the MIDI file specifies a certain timing and a recording fails to reproduce that timing, then that recording is not an accurate representation of the MIDI file.
In that regard, Sound Canvas VA is not only the closest alternative to the real thing, as a few people in the MIDI and retrogaming scene do have to admit, but superior to the real thing. I'll gladly take clarity and perfect timing accuracy in exchange for minor differences in effects, especially if the MIDI file does not explicitly and correctly define said effects to begin with. If I want a panning delay as part of the reverb, I add the respective and correct SysEx message to define one – and if I don't, I do not care about the reverb. You might still get a panning delay on a certain synth, and you might even prefer how it sounds, but it's ultimately a rendering artifact and not a consciously intended part of the composition. In that way, it's similar to the individual flavor a musician adds to a performance of a piece of classical music.
And as far as the differences in frequency response and resonant filters are concerned: In Yamaha land, these are exactly the main distinguishing factors between vintage WF-192XG sound cards (resembling the real SC-88Pro in these characteristics) and the S-YXG50 softsynth (resembling SCVA). Once I found out about that softsynth and how much clearer it sounded in comparison, I sold that old PCI sound card soon after.
In the interest of preservation though, there's still one more unexplored solution that could be the ideal middle ground between the two approaches:
Play the MIDIs through a real-hardware SC-88Pro again
Capture the actually observed system-exclusive settings that fall within the synth's supported and documented ranges
Insert them back into the MIDI file, creating a new bugfixed version
Re-record that bugfixed version through Sound Canvas VA
Edit (2024-03-10): And since Romantique Tp has confirmed what exactly happens on real hardware, I'm going to do exactly that. These bugfixed Sound Canvas VA renderings will be a free bonus of the single next Shuusou Gyoku push, and will add another angle to the preservation of these soundtracks. In the meantime though, the Sound Canvas VA packs will sound like they do in the preview videos above.
Just to be clear: I'm not suggesting that Romantique Tp should have been the one to cut their recordings into loops, or even just the one who defined where the loop points are supposed to be. On the surface, this seems to be a non-issue, and you'd just pick a point wherever each track appears to loop, right? But with 39 MIDIs to cut and all the financial support from Ember2528, it made sense to also solve this problem more thoroughly, and algorithmically detect provably correct loop points for all of these files. Who knows, maybe we even find some surprises that make it all worth it?
This is the algorithm I came up with; a condensed code sketch follows right after this list:
At a basic level, we loop over the list of MIDI events and return the earliest and longest subrange that is immediately followed by an identical copy.
MIDI players, however, need loop point definitions that use MIDI pulse units rather than event list indices. This is especially necessary for multi-track/SMF Type 1 sequences, which would otherwise require one loop start/end index pair per track, and then it still wouldn't work because some of the tracks might not even have an event at the loop start/end point. This requires the detection algorithm and the player to agree on how to map event indices to time points and back, and simply going for the first event of each pulse (i.e., any event with a nonzero delta time) makes the most sense here. In turn, we can skip any potential start or end events that have a delta time of 0, speeding up the algorithm significantly for typical compositions with a high degree of polyphony.
Naively considering just the raw MIDI events works for MIDI playback. But as soon as we want to cut a recording based on the detected loop points, we need to account for the fact that MIDI playback is inherently stateful. Each of the 16 channels at the protocol level features at least the 128 continuous controllers (CCs) with a 7-bit state, the 14-bit pitch bend controller, and the 7-bit instrument program value, in addition to the global tempo of the piece. As a result, two ranges of events might look identical, but can still sound different if the events before the first range changed one piece of state which is then only touched again near the end of that range. This requires us to track the full MIDI state at both the start and end of a loop, and reject any potential loop that differs in these states:
In this example, a naive event-level scan would detect a loop between beats 3 and 6 as the same events are immediately repeated between beats 6 and 9. However, the piece starts with the first four notes at a channel volume of 50, which is only set to its later value of 100 on beat 5. Therefore, the actual loop ranges from beat 5 to 8. In turn, the piece needed to be at least 11 beats long to include the full second copy of the looped events and prove the loop as such.
This check can be a bit too strict in some cases, though. A channel might start with one of its CCs at a specific value but then change the same CC to a different value at a later point before playing the first note. In such a case, the detected loop would be delayed to the second CC change even though the initial CC value has no impact on the sound. By filtering these redundant CC changes, we get to move the loop start point of a few tracks (original 夢機械 ~ Innocent Power and arranged 魔法少女十字軍) back by a few seconds, to the position you'd expect.
Finally, we reject any overlong loops that themselves fully consist of multiple successive copies of the first N events.
Shuusou Gyoku's original MIDI files hide the original game's lack of MIDI looping by simply duplicating the looping sections enough times so that a typical player won't notice. The algorithm we have so far, however, would return a much longer loop if a MIDI file contains more than three successive copies of a looping section. The original version of ハーセルヴズ in particular repeats its 8 looping bars a total of 15 times before the MIDI ends, and this condition is necessary to detect the actual 8-bar loop instead of a 56-bar one.
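Condensed into code, the event-level core of this search could look like the following sketch, which uses a simplified event type and leaves out the pulse mapping, state tracking, redundant-CC filtering, and overlong-loop rejection covered in the last four points; mly's actual implementation differs:

#include <stddef.h>

typedef struct {
	unsigned long delta; /* pulses since the previous event */
	unsigned char status, data1, data2;
} Event;

static int ranges_equal(const Event *a, const Event *b, size_t len)
{
	size_t i;
	for (i = 0; i < len; i++) {
		if ((a[i].delta != b[i].delta) || (a[i].status != b[i].status) ||
		    (a[i].data1 != b[i].data1) || (a[i].data2 != b[i].data2)) {
			return 0;
		}
	}
	return 1;
}

/* Earliest occurrence of the longest event range that is immediately
   followed by an identical copy. Returns the loop's length in events,
   or 0 if the sequence never repeats; the start index goes to *start. */
size_t find_loop(const Event *ev, size_t count, size_t *start)
{
	size_t best_len = 0;
	size_t s;
	*start = 0;
	for (s = 0; s < count; s++) {
		size_t len;
		/* Only strictly longer candidates matter, which keeps the
		   earliest start among equally long loops. */
		for (len = ((count - s) / 2); len > best_len; len--) {
			if (ranges_equal(&ev[s], &ev[s + len], len)) {
				*start = s;
				best_len = len;
				break;
			}
		}
	}
	return best_len;
}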
Of course, this algorithm isn't perfect and won't work for every MIDI file out there. It doesn't consider things like differently ordered events within the same MIDI pulse, (non-)registered parameter numbers, or the effect that SysEx messages can have on the state of individual channels. The latter would require the general SysEx decoding logic that I would have liked to have for the research above… actually, let's add an issue and add the project to the order form. I'd really like to see a comprehensive open-source cross-vendor SysEx decoder library in my lifetime.
As for the implementation, I was happy to write some Rust again for a change, as it's a great fit for these standalone greenfield command-line tools that don't have to directly interact with the legacy C++ code bases that this project usually deals with. It's even better if the foundational functionality is not just available in a crate, but in four, with the community already having gone through multiple iterations to arrive at a tried and tested winner. Who knows, maybe I even get to rewrite this website in it one day? Just for the sheer meme value of doing so, of course.
I also enjoyed this a lot from a technical point of view:
You might think that Rust's typical safety guarantees don't matter for the problem at hand. But then you accidentally write -= instead of += for a u32 that starts out at 0, and Rust immediately panics instead of silently underflowing to u32::MAX. This must have saved me at least 5 minutes of debugging the resulting logic error.
As it turns out, my loop detection algorithm is embarrassingly parallel. You might initially think about it in a sequential way because we always want the earliest occurrence of the longest repeating section of MIDI events, which means that each new loop candidate further into the track has to be longer than the previous one. But since we always iterate over the entire MIDI, it makes perfect sense to divide and conquer the problem. Let's split the list of possible loop end points into equal chunks, scan them all in parallel for the earliest and longest loop within that chunk, and then pick the earliest and longest loop among those intermediate results as the final one. In Rust, you don't even have to think much about the chunks, as all of that can be easily done by replacing the iteration with Rayon's parallel fold and adding a reduce() with the same condition for the final step. This sped up the algorithm by exactly the number of cores in my system.
This algorithm works well for the long MIDI files of Shuusou Gyoku's OST that all contain multiple duplicates of their loop section, but it quickly reaches its limit with the AST. Following the classic two-loop + fade-out format, that soundtrack was meant to be played back in generic MIDI players, and not to actually be put back into the game in looped form. Since the loop algorithm did, in fact, find inconsistencies even in the OST, two copies of the apparent loop are sometimes not enough to prove cases where the actual loop ends much later than you think it does. In a few cases, it would be enough to simply remove all volume change events from the fade-out to prove the actual loop, but in others, the algorithm would need MIDI event data far past the end of the fade-out.
However, just giving up and not looping any of these tracks would be equally unfortunate. So how about shifting the question, from what's the best loop in this MIDI file to what's the best loop if the MIDI didn't fade out and instead repeated its apparent second loop a third time? As long as the detected loop in such a pre-processed file ends before the repeated range, it's still a valid loop in terms of the unmodified original.
Ideally, we want to do this pre-processing programmatically with the same Rust library instead of manually editing the MIDI. Many sequencers (and especially XGworks) apply significant changes to a MIDI file's internal structure when saving its internal representation back to a MIDI file, which might even mess with our loop algorithm. So it would be very nice to have a more trustworthy tool that applies only the edit we actually want, and perfectly retains the rest of the MIDI.
And that's how this sub-project turned into a small suite of command-line MIDI operations in the classic Unix filter/pipeline style: Each command reads a MIDI file from stdin, transforms it, and outputs text or the resulting MIDI file on stdout. This way, we gain maximum transparency and reproducibility as I can document the unique pre-processing steps for each AST track by simply providing the command lines. And sure, we're re-encoding and re-decoding the full MIDI sequence at every step along such a pipeline, but computers are fast, Rust and the midly library in particular are ⚡ blazingly fast ⚡, and the usability benefits of this pipeline model far outweigh any theoretical performance drops.
Here's the full list of commands that made it into the resulting mly tool:
cut: Extremely basic removal of MIDI events within a certain range.
dump: Dumps all MIDI events into a textual table. All event lists in this blog post are based on this output.
duration: Shows the duration of a MIDI file in pulses, beats, seconds, and PCM samples.
filter-note: Removes all Note On events within a certain range, retaining all other events. This allows us to generate separate intro and loop MIDIs, whose renderings we can then splice back into a single loopable waveform with no discontinuities, which is not guaranteed when rendering a single MIDI file. This provides the last missing piece needed for rendering perfect, sample-accurate loops through Sound Canvas VA.
loop-find: The loop detection algorithm described above.
loop-unfold: Duplicates MIDI events from a given point to the end of the track. A budget solution for the problem of creating synthetic loops – arbitrary copying of arbitrary subranges to arbitrary destinations would have been undeniably nicer, but also much more complex, and I didn't need that full flexibility for the task at hand.
smf0: Flattening multi-track/SMF Type 1 MIDI sequences into single-track/SMF Type 0 ones. Having this conversion as a distinct operation in our toolset allows other operations to exclusively support SMF Type 0 if a Type 1 implementation would either take significant additional effort or just duplicate the Type 0 flattening algorithm. This group of operations includes loop-find, cut, and even the real-time output for duration because tempo events can theoretically occur on any track.
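For reference, the flattening at the heart of that last command boils down to a classic merge; here's a C sketch with a placeholder event type, which is only an assumption about how such a conversion can be written, not mly's actual internals:

#include <stdlib.h>

typedef struct {
	unsigned long pulse; /* absolute time, accumulated from the track's delta times */
	size_t seq;          /* position in concatenated track order, for a stable sort */
	/* … the actual event payload … */
} AbsEvent;

static int by_pulse_then_seq(const void *pa, const void *pb)
{
	const AbsEvent *a = pa;
	const AbsEvent *b = pb;
	if (a->pulse != b->pulse) {
		return ((a->pulse < b->pulse) ? -1 : 1);
	}
	/* qsort() isn't stable, so the original order breaks the tie. */
	return ((a->seq < b->seq) ? -1 : ((a->seq > b->seq) ? 1 : 0));
}

/* Type 1 → Type 0: 1) convert each track's delta times into absolute
   pulses, 2) concatenate all tracks into a single array, 3) sort it,
   4) rewrite the delta times as all[i].pulse - all[i - 1].pulse. */
void flatten_to_type0(AbsEvent *all, size_t count)
{
	qsort(all, count, sizeof(AbsEvent), by_pulse_then_seq);
}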
This feature set should strike a good balance between not spending too much of the Shuusou Gyoku budget on tangential problems, but still offering a decent solution for the problem at hand. As a counterexample, the obvious killer feature – deserializing a dump back into a Standard MIDI File – would have gone way past the budget. While there are crates that free you from the need to write manual parsing code for basic data structures, they would instead require a lot of attribute boilerplate – and if the library that provided the structures doesn't already come with these attributes, you now have to duplicate all the structures, and convert back and forth between the original structures and your copies. Not to mention that we'd still have to write code for the high-level structure of the dump output…
If we put it all together, this is what we can do:
The best loop found in the raw MIDI file spans 4 events and 200 milliseconds. Clearly, this is not the loop we're looking for.
Let's cut off all events from the start of the fade-out to the end, do a loop-unfold copy of all events from the position during the apparent second loop that corresponds to where the fade-out started, and try looking for a loop in that modified MIDI.
The resulting loop is 1:31m long, which is exactly what we were hoping to find.
The note space loop represents the earliest possible event range with equivalent per-channel controller and pitch bend state at both ends. This loop is only appropriate for MIDI players, as its bounds can fall into the middle of notes that are played with a different channel state at the start and end of the loop. This is why it doesn't show any sample positions.
The recording space loop ensures that this doesn't happen. It's also always placed on a Note On event with non-zero velocity, which eases the splicing of separate filter-note recordings. This way, it's enough to remove leading silence from the loop part and mix it exactly at the indicated sample position.
The detected loop is also nowhere close to the cut point at beat 466, matching our condition for validity. All events within the loop came from ZUN's original composition, and the cut/loop-unfold combo merely provided the remaining 63% of events necessary to prove this loop as such.
So, where are these loop quirks that justify why some of these audio files are longer than you'd think they should be? Just listing them as text wouldn't really communicate just how minor these are. It would be much nicer to visualize them in a way that highlights the exact inconsistencies within a fixed range of MIDI measures. Screenshots of MIDI sequencer or DAW windows won't capture these aspects all too well because these programs are geared toward fine-grained editing of single tracks, not visualization of details across all channels.
REAPER's piano roll nicely snaps to a certain range, but good luck picking out the individual lines from the single volume lane at the bottom of the screen, or spotting a 7-point difference. Not to mention that CC #11 (Expression) makes up an equal part of a channel's final perceived volume, which is the metric we'd actually want to visualize.
Typical MIDI visualizers, however, are on the complete opposite end of the spectrum. In recent years, MIDI visualization has become synonymous with the typical Synthesia style of YouTube videos with a big keyboard at the bottom, note bars flying in from the top, and optional fancy effects once those notes hit the top of the keyboard. The Black MIDI community in particular has been churning out tons of identical-looking MIDI visualizers that mainly seem to differ in the programming language they're written in, and in how well they can cope with the blackest of black MIDIs.
Thankfully, most of these visualizers are open-source and have small and manageable codebases. The project with the most GitHub stars and the most generic name seemed to be the best starting point for hacking in the missing features, despite using GLSL shaders which I had no prior experience with. It was long overdue that I did something with GLSL though – it added a nice educational aspect to these hacks, and it still was easier than deciphering whatever the fastest and hyper-optimized Rust visualizer is doing.
Still, this visualizer needed a total of 18 small features and bugfixes to be actually usable for demonstrating Shuusou Gyoku's loop quirks. As such, these hacks turned into yet another tangential sub-project that could have easily consumed another two pushes if I cleaned up the code and published the result. But that would have really gone way past the budget for something that people might not even care about. So here's what we're going to do:
I've added this MIDI visualizer as a new goal to the order form. This goal is eligible for microtransactions, so you don't have to fund a full push to see the first changes committed and released.
The upstream project seems to have been abandoned recently, which is the perfect excuse for not even trying to merge in my sweeping changes with a series of pull requests. The code sure needs a lot of cleanup and deduplication, and especially a more build system-friendly way of embedding its shader source code.
Every backer who supports this goal with at least 0.1 pushes or microtransactions will get a Windows binary with my current hacked-in changes as a preview, immediately after the purchase. Shoutout to the MIT license for letting me do this 😛
As usual, once the code is done, the final cleaned-up version will be available for free for everyone, in both source code and binary release form.
Alright then! Here's how to read the visualizations:
The transparency of each note represents its velocity multiplied by the channel volume and expression. To spot volume inconsistencies, you'd compare the opacity of equivalent notes in the two ranges.
The X-axis of these visualizations uses linear/real time, so the width of each measure represents the exact time it takes to be played relative to the other measures in the visualized range. To spot tempo inconsistencies, you'd compare the distance between the bar lines.
Notes that are duplicated on two or more channels may be colored differently in the loop start and end views. These are rendering order inconsistencies and don't communicate anything about the MIDI.
Stage 1 theme (フォルスストロベリー), original and arranged version: The string and harmonica channels are slightly louder on the apparent first loop than on the others.
Apparent loop:
0:01m – 1:31m
Actual loop:
1:04m – 2:34m
Mei and Mai's theme (ディザストラスジェミニ), arranged version: The one and only quirk that's caused by different notes – the first loop has an E♭ on the slap bass channel in measure 32, but the second loop has a G♭ in the corresponding measure 72.
Apparent loop:
0:01m – 1:02m
Actual loop:
0:50m – 1:51m
Stage 3 theme (華の幻想 紅夢の宙), original and arranged version:
The trumpet channel starts out panned to the center of the stereo field (64), before being left-panned by 25% (48) at 1:04m, where it stays for the rest of the track.
Apparent loop:
0:01m – 1:29m
Actual loop:
1:04m – 2:32m
I didn't come up with a good way of visualizing panning in a 2D plane, so you have to trust your ears with this one.
Marie's theme (機械サーカス ~ Reverie), arranged version: Every apparent loop modulates up by a semitone 16 measures before it ends, and remains in that new key at the start of the next loop, so the piece technically doesn't loop at all. The original stays in G♯m throughout.
Stage 5 theme (カナベラルの夢幻少女), original version: The ritardando near the supposed end of the first loop drops from 145 BPM to 118 BPM, but only to 129 BPM in all further loops.
Apparent loop:
0:01m – 1:39m
Actual loop:
1:33m – 3:11m
Yup, that means that the intro part technically makes up almost the entire apparent loop. ZUN replaced the ritardando with instant tempo changes in the arranged version, which moves the loop to its expected place at the start of the track.
The loop start and end points are in the respective next measure past this range.
Stage 6 theme (アンティークテラー), arranged version: The string channel starts out with the maximum expression of 127, but then only goes up to 120 after some fading notes later in the piece, where it stays for the beginning of the second loop.
Apparent loop:
0:01m – 1:53m
Actual loop:
0:13m – 2:05m
Same here.
VIVIT-captured-'s first theme (夢機械 ~ Innocent Power), arranged version: Has a unique ending section that starts in Gm and then modulates through Em and Fm before it fades out on F♯m.
VIVIT-captured-'s second theme (幻想科学 ~ Doll's Phantom), original and arranged version: Another fade-related 127 vs. 120 expression inconsistency, this time on the orange square channel.
Apparent loop:
0:01m – 1:32m
Actual loop:
1:03m – 2:34m
VIVIT-captured-'s third theme (少女神性 ~ Pandora's Box), original and arranged version: Another tempo inconsistency: A slightly differently shaped ritardando before the bell tree hit in the supposed first loop.
Marisa's theme (魔女達の舞踏会), arranged version: Has a unique 8-bar ending section that is first played in Cm and then loops in C♯m while fading out.
Ending theme (ハーセルヴズ), arranged version: Probably the best-known one out of these, and I'm talking of course about the beautiful ending section. I'm making the executive decision to not loop this track in-game, and letting it fade to silence instead.
Before we package up these looped soundtracks, let's take a quick look at how they would be shown off in the Music Room. The Seihou Music Rooms carry over the per-channel keyboards from TH05, add the current per-channel volume, expression, and pan pot values, and top it off with a fake spectrum analyzer. All of these visualizations rely on MIDI data, and the Music Room would feel very dull and boring without them. Just look at Kioh Gyoku, whose Music Room basically turns into a still image in WAVE mode.
Retaining these visualizations even when playing waveform BGM was very important for me, and not just because it would make for a unique high-quality feature that would break new ground. It can also double as proof that the waveform versions are, in fact, in perfect sync with both the MIDIs they are based on, and, by extension, the respective stage scripts.
However, this would require the game to process the MIDIs and update the internal visualization state without simultaneously playing them back through the WinMM / MME / midiOut*() API. And just like graphics and text rendering, Shuusou Gyoku's original code came with zero architectural separation between platform-independent processing logic and platform-specific playback…
So I accidentally rewrote almost the entire MIDI code to achieve said separation. This also provided a great occasion to modernize this code and add some much-needed robustness for potential MIDI mods, while retaining the original code's approach of iterating over raw SMF byte streams. It might all have been very excessive for a delivery that was supposed to be just about waveform BGM support, but on the plus side, MIDI output is now portable to any other system's MIDI API as well.
Surprisingly though, it was Shuusou Gyoku's original MIDI timing that quickly turned out to be rather inaccurate, and not the waveforms. The exact numbers vary depending on the piece, but the game played back every MIDI about 1% slower than notated, adding about 2 or 3 seconds to their total playback time after 5 minutes. Tempo changes in particular were the biggest causes of desynchronizations with the waveforms…
To understand how this can happen to begin with, we have to look closer at how you're supposed to use the midiOut*() API. This API is as low-level as it gets, only covering the transmission of a single MIDI message to the selected output device right now. There is no concept of note timing at this low level, so it's completely up to the program to parse delta times and tempo change events out of the MIDI file and correctly time the calls to this API for each MIDI message. With all the code that runs between the API and the actual renderer of the synth for every single message, the resulting timing can only ever be an approximation of the MIDI file. This doesn't really matter for the timescales and polyphony levels of typical music because, again, computers are fast, but such an API is fundamentally unsuitable for accurately playing back even just a moderately complex million-note Black MIDI.
Shuusou Gyoku handles this required manual timing in the simplest possible way: It runs a MIDI processing function (Mid_Proc() in the code) at an interval of 10 ms, which processes and instantly sends out all MIDI events that have occurred at any point within the last 10 ms, maintaining merely their order. This explains not only why the original game incremented its MIDI TIMER by multiples of 10, but also the infamous missing drums when playing the soundtrack through the Microsoft GS Wavetable Synth:
ZUN reduced all drum notes to the minimum possible length allowed by the 480 PPQN pulse resolution of these MIDI files.
In regular music notation, this corresponds to 1/1920th notes.
While the exact real-time length in purely mathematical terms depends on the tempo of a piece, a 1/1920th note at T BPM lasts (125 / T) ms, so T only has to be ≥13 BPM for such a note to be shorter than 10 ms.
Therefore, the higher the BPM, the higher the chance that both a drum note's Note On and Note Off messages are sent within the same call to Mid_Proc(), with the respective two midiOut*() API calls only being at best a two-digit number of microseconds apart.
So it only makes sense why cheap MIDI synths that don't even respond to reverb or release time messages completely drop any note with such a short length. After all, at a sampling rate of 44,100 Hz, a note would have to be at least 22.7 µs long to be represented by even a single PCM sample.
This also extends to the visualizations above, and was the reason why I chose to render all drum notes as fixed-size diamonds. Otherwise, they would barely be visible.
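For illustration, here's a hedged sketch of the two WinMM calls that reach the synth in such a case – midiOutOpen() and midiOutShortMsg() are the real API, but the hardcoded drum messages are just an example:

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main()
{
	HMIDIOUT out;
	if(midiOutOpen(&out, MIDI_MAPPER, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR) {
		return 1;
	}

	// Note On for an acoustic bass drum (note 35) on channel 10 (0x99),
	// packed into the DWORD layout that midiOutShortMsg() expects:
	// status byte in the lowest 8 bits, data bytes above.
	midiOutShortMsg(out, (0x99 | (35 << 8) | (127 << 16)));

	// …and since both events fell into the same 10 ms Mid_Proc() window,
	// the Note Off (a Note On with velocity 0) follows mere microseconds
	// later. A cheap synth will simply drop this note.
	midiOutShortMsg(out, (0x99 | (35 << 8) | (0 << 16)));

	midiOutClose(out);
	return 0;
}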
But while sending MIDI events in such quantized chunks might not be perfect, it can't be the cause behind multi-second playback slowdowns. Instead, this issue has to boil down to the way Shuusou Gyoku times each individual message, and specifically how it converts between MIDI pulse units and real-time (milli)seconds. pbg's original MIDI code chose to do this in an equally confusing and inaccurate way: it kept two counters that tracked the current MIDI pulse before and after the latest tempo change, used the value of the latter counter to decide which events to process, and only added the pulse equivalent of 10 ms to this counter at the end of Mid_Proc() in the then current tempo. The commit message for my rewritten algorithm details the problems with this approach using nice ASCII art in case you're interested, but in short, the main problem lies in how the single final addition can only consider a single tempo change within each call to Mid_Proc(). If a MIDI file contains tempo ramps with less than 10 ms between each different tempo, the original game would only use the last of these tempo values as the basis for converting the entire 10 ms back into MIDI pulses. Not to mention that maybe MIDI pulses aren't the best unit in a game that still 📝 treats the FPU as lava and doesn't use any fixed-point means of increasing the resolution of the 10 ms→pulse division either…
On the contrary, it's much more accurate to immediately convert every encountered MIDI delta time to a real-time quantity and use that unit for event timing, especially if we want to restrict ourselves to integer math. Signed 64-bit integers are enough to fit the product of the slowest possible MIDI tempo ((2²⁴ − 1) µs per quarter note) and the highest possible MIDI delta time (2²⁸ − 1) at nanosecond precision (× 10³), with one bit to spare. Then, we arrive at a much simpler timing algorithm:
Each simultaneously playing track gets a next event timer, starting out at 0
When looking at the next event, add the converted nanosecond value of its delta time to this timer
Subtract the equivalent of 10 ms from each track's timer at the beginning of the processing function
As long as the timer is ≤0, process and send the next message
The additive nature of this timer not only naturally allows more than one event to happen within a single Mid_Proc() call, but also averages out any minor timing inconsistencies across the length of a track.
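Expressed as a hedged C++ sketch – the types and names are all hypothetical, only the algorithm itself is the real one:

#include <cstddef>
#include <cstdint>
#include <vector>

struct Event {
	uint32_t delta_pulses; // pulses since the previous event on this track
	// … the actual MIDI message would go here …
};

struct Track {
	std::vector<Event> events;
	size_t next = 0;           // index of the next unprocessed event
	int64_t timer_ns = 0;      // the "next event timer", starting out at 0
	bool delta_pending = true; // [next]'s delta time not yet added?
};

constexpr int64_t FRAME_NS = (10 * 1000 * 1000); // 10 ms per call

// [tempo_us] is the MIDI-standard "µs per quarter note" value, [ppqn] the
// pulse resolution from the file header. The range analysis above shows
// why this product can't overflow a signed 64-bit integer.
int64_t DeltaToNS(uint32_t delta_pulses, uint32_t tempo_us, uint16_t ppqn)
{
	return ((int64_t{ delta_pulses } * tempo_us * 1000) / ppqn);
}

void ProcessAndSend(const Event& event); // hypothetical; would also update the current tempo

void Mid_Proc_Sketch(Track& t, uint32_t tempo_us, uint16_t ppqn)
{
	t.timer_ns -= FRAME_NS; // subtract the equivalent of 10 ms
	while(t.next < t.events.size()) {
		if(t.delta_pending) {
			// When looking at the next event, add the converted
			// nanosecond value of its delta time to the timer
			t.timer_ns += DeltaToNS(t.events[t.next].delta_pulses, tempo_us, ppqn);
			t.delta_pending = false;
		}
		if(t.timer_ns > 0) {
			break; // not due yet; we'll get here again in a later call
		}
		ProcessAndSend(t.events[t.next]);
		t.next++;
		t.delta_pending = true;
	}
}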
So where did the ~1% slowdown come from, then? Not from the timing algorithm, but from the one place you'd least expect it: the deserialization of the 3-byte tempo value in every tempo change event.

assert(length_of_tempo_message == 3);
uint32_t tempo = 0;
for(int i = 0; i < length_of_tempo_message; i++) {
-	tempo += ((tempo << 8) + (*track_data++));
+	tempo  = ((tempo << 8) + (*track_data++));
}
Yup – the original code performed two additions per byte, which incorrectly added the interim value at every byte to the final result, and yielded a tempo that is ≈0.8% / ≈1 BPM slower than notated in the MIDI file, matching the number we were looking for. That's why the | (OR) operator is the safer choice in such a bit-twiddling context…
But now I'm curious. This is such a tiny bug that is bound to remain unnoticed until someone compares the game's MIDI output to another renderer. It must have certainly made it into other games whose MIDI code is based on Shuusou Gyoku's, or that pbg was involved with. And sure enough, not only did this bug survive Kioh Gyoku's OOP refactoring, but it even traveled into Windows Touhou, where it remained in every single game that supported MIDI playback. Now we know for a fact that pbg's Program Support role in the TH06 credits involved sharing ready-made, finished code with ZUN:
The broken tempo deserialization in the respective latest full versions of TH06 through TH10. And yes, that's TH10 – even though TH09's trial version was the last game to ship MIDI versions of its soundtrack, TH10 still contained all of pbg's MIDI code that originated back in Shuusou Gyoku, before TH11 finally removed it.
Amusingly, ZUN's compiler even started optimizing the combination of left-shifting and addition into a multiplication by 257 for TH09 – after all, tempo + (tempo << 8) equals (tempo × 257) – which even sort of highlights this bug if you're used to reading x86 ASM.
That leaves support for MIDI loop points as the only missing feature for syncing MIDI data with a looping waveform track. While it didn't require all too much code, pbg's original zero-copy approach of iterating over raw MIDI data definitely injected a lot of complexity into the required branches. Multi-track/SMF Type 1 files require quite a bit of extra thought to correctly calculate delta times across loop boundaries that reach past the end of the respective track, while still allowing the real-time delta values to be resynchronized at tempo changes within the loop – and yes, 3 of ZUN's 19 arranged MIDI files actually do use more than one track, so this wasn't just about maximizing MIDI compatibility for mods. I stuck to the original approach mostly as a challenge and to prove that it's possible without first parsing the entire MIDI sequence into a friendlier internal representation, but I absolutely do not recommend this to anyone else.
After hardcoding the loop points detected by mly into the binary, we only need to call Mid_Proc() once per frame in the Music Room and pass the frame delta time instead of the 10 ms constant. And then, we get this:
The MIDI TIMER now shows off the arguably more interesting current MIDI pulse value rather than just formatting the PASSED TIME in milliseconds. Ironically, displaying this value in a constantly counting way takes more effort now – the new nanosecond-based timing code doesn't use any measure of total MIDI pulses anymore, and they don't naturally fall out of the algorithm either. Instead, the code remembers the total pulse value of the last event it processed and adds the real-time duration that has passed since, similar to the original timing algorithm.
This naturally causes the timer to jump from the loop end pulse to the loop start pulse, proving that Mid_Proc() is in fact looping the sequence.
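The pulse readout itself boils down to a single conversion – a hedged sketch with hypothetical names:

#include <cstdint>

// The total pulse count of the last processed event, plus the real time
// that has passed since, converted back into pulses at the current tempo.
// ([tempo_us] µs per quarter note × 1000 = ns per quarter note.)
int64_t DisplayPulse(
	int64_t last_event_pulse, int64_t ns_since_last_event,
	uint32_t tempo_us, uint16_t ppqn
)
{
	return (last_event_pulse + (
		(ns_since_last_event * ppqn) / (int64_t{ tempo_us } * 1000)
	));
}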
Alright, now we know what to package:
We're going to have 8 BGM packs, one for each permutation of soundtrack (OST / AST), sound source (Romantique Tp / Sound Canvas VA), and codec (FLAC / Vorbis), making up 1.15 GiB of music data in total.
When looking at the package names, you will notice that I don't particularly highlight the FLAC versions as lossless. And for good reason – the Romantique Tp recordings had dithering and noise shaping applied to them, and the Sound Canvas VA versions will necessarily have to be volume-normalized and quantized to 16-bit during the conversion to FLAC. If we wanted a BGM pack with the actual raw Sound Canvas VA output, we'd have to implement WavPack support, which is the only lossless codec that supports 32-bit float – and even that codec could only compress these files down to 14 MiB per minute of music, or 508 MB for the entire original soundtrack. That's 1.4× the size of an equivalent thbgm.dat!
The whole packaging process will be complex enough to warrant a build system. I'd also like to generate an extensive README file for each package, not least to describe the Sound Canvas VA rendering and loop-cutting process in complete detail.
The AST packs need to bundle the MIDI files from ZUN's site for Music Room visualization. We might as well add a 9th MIDI-only AST pack then, as it will naturally fall out of the packaging pipeline anyway. Some people sure love their MIDI synths, after all.
The OST packs can fall back on the original game's MIDI files from MUSIC.DAT for their Music Room visualization, so there's no need to bundle those and infringe copyright. Ironically, the game will still require a MUSIC.DAT even if you use a BGM pack, if only for the one number in that file that says that Shuusou Gyoku's soundtrack consists of 20 tracks in total.
ZUN didn't arrange タイトルドメイド, so we need to copy the OST version recorded with the respective sound source into the AST pack.
Unfortunately, we still haven't reached the end of the complications and weird issues that haunt Shuusou Gyoku's music:
1) The original game reads the in-game track title directly out of the first Sequence Name event of the playing MIDI file. The waveform equivalent would be the Vorbis comment TITLE tag, which therefore should exactly match the original track's title, down to the exact placement of whitespace. As usual, if I emphasize minor things like this, it's not without reason: 幻想科学 ~ Doll's Phantom inconsistently uses halfwidth spaces at both sides of the ~, and wouldn't fit into the Music Room's limited space otherwise.
2) However, the AST MIDI files jam a bunch of other metadata into their Sequence Names, roughly following the format
【 $title 】 from 秋霜玉 for sc88Pro comp.ZUN
The track titles should definitely not appear in this format in-game, but how do we get rid of this format without hardcoding either the names or the magic to parse the names out of this format?
3) The absolute state of GS SysEx tooling rears its ugly head one final time in three of the AST MIDIs, which for some reason are missing the Roland vendor prefix byte in all of their SysEx messages and are therefore undeniably bugged. There even seemed to be another SysEx-related bug which Romantique Tp explained away, but not this one:
The irony of using invalid Reverb Macros within already invalid SysEx messages is not lost on me.
This is something we should fix even before running these files through Sound Canvas VA in order to render these with the reverb settings that ZUN clearly (and, for once, unironically) intended.
4) For perfect preservation of the original BGM/gameplay synchronicity, it makes sense for the waveform versions to retain the leading 1 or 2 beats of silence that the original MIDI files use for their SysEx setup. While some of the AST tracks use a slightly different tempo compared to their OST counterparts, they would still be largely in sync as ZUN didn't rearrange the layout of their setup area… except for, once again, the three tracks used in the Extra Stage. Marisa's and Reimu's boss themes aren't too bad with their 4 beats of setup, but シルクロードアリス takes the cake with a whopping 12 beats of leading silence. That's 5 seconds from the start of the Extra Stage to the first note you'd hear. 🐌
2) and 4) could theoretically be worked around in Shuusou Gyoku's MIDI code, but there's no way around editing the MIDI files themselves as far as 3) is concerned. Thus, it makes sense to apply all of the workarounds to the AST MIDIs as part of the BGM build process – parsing the titles out of the 【brackets】, inserting the Roland vendor prefix byte where necessary, and compressing the setup bars in the Extra Stage themes to match their OST counterparts. Adding any hidden magic to the MIDI code would only have needlessly increased complexity and/or annoyed some modder in the future who would then have to work around it.
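As a hedged sketch, the title extraction part of that build step could look like this – everything here is hypothetical, and the real pipeline operates on mly dump output rather than on an in-memory string:

#include <string>

// Extracts $title from "【 $title 】 from 秋霜玉 for sc88Pro comp.ZUN",
// trimming the fullwidth brackets and any surrounding halfwidth spaces.
// Returns the input unchanged if it doesn't match the format.
std::string TitleFromASTSequenceName(const std::string& name)
{
	const std::string OPEN = "【";  // U+3010, 3 bytes in UTF-8
	const std::string CLOSE = "】"; // U+3011
	const auto open = name.find(OPEN);
	const auto close = name.find(CLOSE, (open + OPEN.size()));
	if((open == std::string::npos) || (close == std::string::npos)) {
		return name;
	}
	auto title = name.substr((open + OPEN.size()), (close - open - OPEN.size()));
	const auto first = title.find_first_not_of(' ');
	if(first == std::string::npos) {
		return "";
	}
	const auto last = title.find_last_not_of(' ');
	return title.substr(first, ((last - first) + 1));
}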
Ideally, these edits would involve taking the mly dump output, performing the necessary replacements at a plaintext level, and rebuilding the result back into a MIDI file, bu~t we're unfortunately missing the latter feature. Luckily, someone else had the same idea 13 years ago and
wrote a tool in C that does exactly what we need. Getting it to compile in 2024 only required fixing a typical C thing… why are students and boomers defending this antique of a language again? 🙄
The single most glaring issue, however, is the drastic difference in volume between the individual tracks in both soundtracks. While Romantique Tp had to normalize each track to the maximum possible volume individually as a consequence of the recording process, the Sound Canvas VA renderings reveal just how inconsistent the volume levels of these MIDI files really are:
The peak amplitudes of every track in both soundtracks, as rendered by Sound Canvas VA at maximum volume. Looking at these, you might think that kaorin's 2007 recordings were purposely trying to preserve the clipping that would come out of an SC-88Pro if you don't manually adjust the volume knob for each song, but those recordings are still much louder than even these numbers.
So how do we interpret this? Is this a bug, because no one in their right mind would want their music to clip on purpose, and that in turn means that everything about these volume levels is arbitrary and unintentional? Or is this a quirk, and ZUN deliberately chose these volume levels for compositional reasons? It certainly would make sense for the name registration theme.
Once again, the AST version of シルクロードアリス is the worst offender in this regard as well, but it might also provide some evidence for the quirk interpretation. The fact that almost all of its MIDI channels blast away at full volume might have been an accident that could have gone unnoticed if the volume knob of ZUN's SC-88Pro was turned rather low during the time he arranged this piece, but the excessive left-panning must have been deliberate. Even Romantique Tp agrees:
It might have even made compositional sense if Silk Road Alice was supposed to be a "Western-style piece", but it's not.
And that's with the volume already normalized. Because this one channel of this one track is almost twice as loud as anything else in the AST, we would consequently have to bring down the volume of every other arranged track and the right channel of the same track by almost 50% if we wanted to maintain the volume differences between the individual tracks of the AST. In the process, we lose almost one entire bit of dynamic range. At this rate, you might even consider remixing and remastering the entire thing, but that would involve so many creative decisions to definitely fall into fanfiction territory…
However, normalizing each track to a peak level of 0 dBFS makes much more sense for in-game playback if you consider how loud Shuusou Gyoku's sound effects are. Once again, the best solution would involve offering both versions, but should we really add two more SCVA BGM packs just to cover volume differences? ReplayGain solves this exact problem for regular music listening in a non-destructive way by writing the per-track and per-album gain levels into an audio file's metadata. Since we need metadata support for titles anyway, we can do something similar, albeit not exactly the same for two reasons:
ReplayGain is specified to target an average volume of −17 dBFS, whereas we'd like to target a peak volume of 0 dBFS in order to always use the entire available digital scale. We've got some loud sound effects to compete with, after all.
ReplayGain expresses its gain values in dB, which is cumbersome to work with. In the realm of PCM, volume changes don't need to involve more than a simple multiplication, so let's go with a simple scalar GAIN FACTOR.
And so, we hard-apply the album-level gain during the conversion from 32-bit float to FLAC to preserve the volume differences between the tracks, calculate the track-level GAIN FACTOR based on the resulting peak levels, add a volume normalization toggle to the Sound / Config menu, enable it by default, and thus make everyone happy. ✅
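In code, both halves of this normalization scheme are about as simple as PCM processing gets. A minimal sketch over 32-bit float samples, with made-up function names:

#include <algorithm>
#include <cmath>
#include <cstddef>

// Scaling by the reciprocal of a track's peak amplitude normalizes it to
// exactly 0 dBFS – that reciprocal is the GAIN FACTOR.
float GainFactorFor(const float* samples, size_t count)
{
	float peak = 0.0f;
	for(size_t i = 0; i < count; i++) {
		peak = std::max(peak, std::fabs(samples[i]));
	}
	return ((peak > 0.0f) ? (1.0f / peak) : 1.0f);
}

// With the normalization toggle enabled, playback then multiplies every
// decoded sample with that factor.
void ApplyGain(float* samples, size_t count, float gain_factor)
{
	for(size_t i = 0; i < count; i++) {
		samples[i] *= gain_factor;
	}
}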
The final interesting tidbit in building these packages can be found in the way the Sound Canvas VA recordings are looped. When manually cutting loops, you always have to consider that the intro might end with unique notes that aren't present at the end of the loop, which will still be fading out at the calculated loop start point. This necessitates shifting the loop start point by a few bars until these notes are no longer audible – or you could simply ignore the issue because ZUN's compositions are so frantic that no one would ever notice.
With the separate intro and loop files generated by mly, on the other hand, the reverb/release trails are immediately visible and, after trimming trailing silence, exactly define the number of samples that the calculated loop start point needs to be shifted by. The .loop file then remains always exactly as long, in samples, as the duration of the loop reported by mly. If a piece happens to have a constant tempo whose beat duration corresponds to an integer number of samples, we get some very satisfying, round loop durations out of this process. ☺️
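Playback of such a spliced track then reduces to a bit of index arithmetic. A hedged sketch, assuming both parts have been decoded into buffers measured in sample frames:

#include <cstddef>

// Maps a linear playback position onto the intro and loop parts: the
// intro plays once, after which the position wraps around within the
// loop part forever.
size_t SpliceSampleAt(size_t pos, size_t intro_len, size_t loop_len)
{
	if(pos < intro_len) {
		return pos; // still within the intro part
	}
	return (intro_len + ((pos - intro_len) % loop_len));
}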
So let's play it all back in-game… and immediately run into two unexpected miniaudio limitations, what the…?!
miniaudio uses a fixed linear function for its fade-out envelope, and doesn't offer anything else? We might not even want a logarithmic one this time because symmetry with MIDI's simple quadratic curve would be neat, but we sure don't want a linear function – those stay near the original volume for too long, and then turn quiet way too quickly.
There is no way to access FLAC metadata from miniaudio's public API, even though the library bundles the author's own FLAC library which has this feature?
📝 Back when I evaluated miniaudio, I alluded that I consider single-file C libraries to be massively overrated, and this is exactly why: Once they grow as massive as miniaudio (how ironic), they can quickly lead to their authors treating their dependencies as implementation details and melting down the interfaces that would naturally arise. In a regular library, dr_flac would be a separate, proper dependency, and the API would have a way to initialize a stream from an externally loaded drflac object. But since the C community collectively pretends that multi-file libraries are a burden on other developers, miniaudio ended up with dr_flac copy-pasted into its giant single file, with a silly ma_ namespacing prefix added to all its functions. And why? Did we have to move so far in the other direction just because CMake doesn't support globbing? That's a symptom of CMake not actually solving any problem, not a valid architectural decision that libraries should bend around. 🙄
So unless we fork and hack around in miniaudio, there's now no way around depending on a second, regular copy of dr_flac. Which has now led to the same project organization bloat that single-file libraries originally set out to prevent…
Sigh. At this rate, it makes more sense to just copy-paste and adapt the old BGM streaming code I wrote for thcrap in late 2018, which used dr_flac directly, and extend it with metadata support. With the streaming code moved out of the platform layer and into game logic, it also makes much more sense to implement the squared fade-out curve at that same level instead of copy-pasting and adjusting an unhealthy amount of miniaudio's verbose C code.
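The curve itself is then a one-liner at that level. A sketch, with fade_pos running linearly from 0 (start of the fade) to 1 (silence), and the squared shape being my reading of MIDI's quadratic volume fades:

// Linear fades stay near full volume for too long and then collapse at
// the very end…
float FadeGainLinear(float fade_pos)
{
	return (1.0f - fade_pos);
}

// …whereas squaring the remaining volume yields an immediately audible
// drop that smoothly eases out toward silence.
float FadeGainSquared(float fade_pos)
{
	return ((1.0f - fade_pos) * (1.0f - fade_pos));
}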
While I'm doing the same for the old Vorbis streaming code, it would also make sense to rewrite that one to use stb_vorbis instead of the old libogg+libvorbis reference libraries. There's no need to add two more dependencies if miniaudio already comes with stb_vorbis.c, and that library is widely acclaimed. So, integration should be a breeze, right?
Well, surprise, rarely have I seen a C library so actively hostile toward being integrated. Both of its API variants are completely unreasonable:
The pulldata API pulls Vorbis data as needed from either a memory buffer containing the entire Vorbis file, or a C FILE* handle.
Effectively, this forces you to either give up disk streaming completely, or to shackle your program to C's terrible I/O API with all its buffering slowness and Unicode issues on Windows. The documentation even goes on to suggest just modifying the code if you need anything else, which might be acceptable in the strange world of game development this library originates from, but it sure isn't in the kind of open-source development I do.
The pushdata API expects the caller to gradually feed chunks of Vorbis data. How large do these chunks have to be? Nobody knows – and, even worse, the API doesn't retain any of the data already pushed in. If the buffer you passed is too small, which you don't get to know in advance, you have to pass the same data plus more in the next call. I get that you might want an API like this to avoid dynamic memory allocations, but not only does this API perform plenty of allocations itself, it actively forces its caller to realloc() over and over again. 🙄 The lack of seeking support reveals that this API is geared towards live-streamed audio, and it might very well be acceptable in such a case, but it's nothing we could use for BGM.
What happened to the tried-and-true idea of providing a structure with read, tell, and seek callbacks, and then providing an optional variant for C FILE* handles if you absolutely must? Sure, the whole point of Vorbis is to be small and nobody these days would care about spending a few MB on keeping an entire Vorbis file in memory, but come on. If pulldata made the deliberate and opinionated choice to only support buffers of complete Vorbis streams and argued in the name of simplicity that hand-coded disk streaming isn't worth it in this day and age, I might have even been convinced. And this is from the guy who popularized the concept of single-file C libraries in the first place?
Oh well, tupblocks go brrr. libvorbis definitely shows its age with all the old command-line tools in the lib/ directory that were never moved out of there and that we now have to exclude from our glob. But even that just adds a single line to the Tupfile, and then we get to enjoy its much friendlier API. That sure beats the almost 800 lines of code that miniaudio had to write to integrate stb_vorbis… which I can't even link because the file is too big for GitHub. 🤷
At this point, it would have even made sense to upgrade from a 24-year-old lossy codec to an 11-year-old lossy codec and use Opus instead, since the enforced 48,000 Hz sampling rate is a non-issue when you control the entire audio pipeline. But let's keep compatibility with existing thcrap mods for now.
In the end, the Windows build ended up using only a single one of the miniaudio features that DirectSound doesn't have, and that's the ability to use the more modern WASAPI instead of DirectSound. We're still going to use miniaudio for the Linux port, but as far as Windows is concerned, it would be quite nice to backport BGM streaming to the game's original DirectSound backend. The P0275 build is pushing 1 MiB of binary size for a game that originally came in a 220 KiB binary, so it would remove a noticeable amount of bloat from GIAN07.EXE, but it would also allow waveform BGM to work in the Windows 98-compatible i586 build. If that sounds cool to you, this is the issue you want to fund.
That only left some logic and UI busywork to put it all together, which means that we've almost reached the end of things to talk about! Here's what it all looks like:
BGM pack selection is done in-game through a new submenu. The <Download> option will open the BGM pack release page in the system's preferred browser:
This window presented a great occasion for already implementing the generic boilerplate for vertically scrolling windows with an unlimited number of items. That will come in quite handy once we introduce better replay support… 👀
Even with per-track BGM volume normalization, Shuusou Gyoku's sound effects are still a bit too loud in comparison, especially when mixed on top of that excessively and unfixably left-panned AST version of the Extra Stage theme. Adding separate volume controls for BGM and sound effects really was the only sustainable solution here, and conveniently checks an important quality-of-life box the original game lacked. So important that it was the very first issue I added to the GitHub tracker of my fork:
I really wanted to have Japanese help text in these menus, as it makes them look just so much more consistent and polished. Many thanks to Elfin, who responded to my bounty offer, and will most likely also provide localizations for future features.
In-game music titles are now consistently right-aligned. Leading whitespace in 4 of the original MIDI Sequence Names suggests that pbg might have intended these titles to be centered within the 216 maximum pixels that the original code designated for music titles, but none of those 4 had the correct amount of spaces that would have been required for exact centering:
Right-aligned text matches the one certain intention I can read out of the code, and allows us to consistently trim whitespace from both the original MIDI Sequence Names and the TITLE tags in the BGM packs… at the cost of significantly changing the animation. 🤔
Maybe, all this whitespace had the explicit purpose of making the animation look the way it did originally? But hard-padding the title tags in the BGM packs would be so dumb… 😩 Let's keep it like this for now and fix the animation later.
At startup, the game now shows a new screen if any of the game's .DAT files are missing, displaying their expected absolute path. This is bound to be very important on Linux because each distribution might have its own idea of where these files are supposed to be stored. But even on Windows, this allows GIAN07.EXE to at least run and show something if one or more of these files are not present, instead of crashing at the first attempt of loading anything from them. The ¥ instead of \ is, 📝 once again, a font issue. Good luck finding a font not named MS Gothic that looks good when rendered in this game…
On a more unfortunate note, I dropped the i586 build from this release. Visual Studio 2022's CRT implements the new filesystem and threading code using Win32 API functions that are only available on Vista or later and are not covered by the one ready-made KernelEx package I was able to find, so I couldn't easily test such a build on Windows 98 anymore. Resurrecting the i586 build would therefore involve additional platform abstraction layers that we wouldn't need otherwise. Writing them wouldn't be too expensive, but it only makes sense if there's actual demand. Backporting waveform BGM to DirectSound to restore feature parity would also be a good idea here, as it would avoid the need to litter the current code with #ifdefs at any place that references anything related to BGM packs.
After half a year of being bought out way past the cap, I've finally got some small room left for new orders again. If it weren't for this blog post and the required research and web development work, this delivery would have probably come out in early January, taking half the time it ended up taking. So I really have to start factoring the blog posts into the push prices in a better and fairer way.
Meanwhile, the hate toward my day job only keeps growing, but there's little point in looking for a new one as long as ReC98 remains this motivating and complex. It leaves pretty much no cognitive room for any similarly demanding job. Thus, I want 2024 to be the year where ReC98 either becomes profitable enough to be my only full-time job, or where we conclusively find out that it can't, I go look for a better day job, and ReC98 shifts to a slower pace. Here's the plan:
From now on, I will immediately increase the push price whenever we reach 100% of the cap, either directly through new orders or indirectly through existing subscriptions. The price increase will be relative to how long it took to reach that point since the last re-opening.
If the store continues selling out, I will aim for per push by the end of the year.
In exchange, microtransactions (i.e., deliveries containing just code and no blog posts) will now be half the price of regular pushes for the same amount of delivered code. Or in other words: If you want to fund a goal that's eligible for microtransactions, you can now decide whether your fixed amount of money goes to 2× coding work and 0× blogging, or 1× coding work and 1× blogging.
I'll permanently increase the default level of the cap from 8 to 10 pushes. The past 12 months were full of mod releases that raised the bar, and 2024 shows no signs of stopping that trend.
If we ever reach per push, I plan to hire people for some of the contribution-ideas or anything else that might improve this project. (Well-produced YouTube videos about the findings of this project might be a nice idea!) At that point, I will have reached my goal of living decently off this project alone, and it's time for others to make money in this space as well.
With the new price of per push, this means that there's now a small window in which you can get a full push worth of functionality for , until the current cap is filled up again.
Next up: Probably TH02's endings to relax a bit. Maybe we're also getting some new Touhou-related contributions?
P0264
TH03/TH04/TH05 decompilation (Music Rooms, part 1/2)
P0265
TH03/TH04/TH05 decompilation (Music Rooms, part 2/2 + MAINE.EXE main()) + TH02 PI/RE (Boss damage and position)
💰 Funded by:
Blue Bolt, [Anonymous], iruleatgames
🏷️ Tags:
Oh, it's 2024 already and I didn't even have a delivery for December or January? Yeah… I can only repeat what I said at the end of November, although the finish line is actually in sight now. With 10 pushes across 4 repositories and a blog post that has already reached a word count of 9,240, the Shuusou Gyoku SC-88Pro BGM release is going to break 📝 both the push record set by TH01 Sariel two years ago, and 📝 the blog post length record set by the last Shuusou Gyoku delivery. Until that's done though, let's clear some more PC-98 Touhou pushes out of the backlog, and continue the preparation work for the non-ASCII translation project starting later this year.
But first, we got another free bugfix according to my policy! 📝 Back in April 2022 when I researched the Divide Error crash that can occur in TH04's Stage 4 Marisa fight, I proposed and implemented four possible workarounds and let the community pick one of them for the generally recommended small bugfix mod. I still pushed the others onto individual branches in case the gameplay community ever wants to look more closely into them and maybe pick a different one… except that I accidentally pushed the wrong code for the warp workaround, probably because I got confused with the second warp variant I developed later on.
Fortunately, I still had the intended code for both variants lying around, and used the occasion to merge the current master branch into all of these mod branches. Thanks to wyatt8740 for spotting and reporting this oversight!
As the final piece of code shared in largely identical form between 4 of the 5 games, the Music Rooms were the biggest remaining piece of low-hanging fruit that guaranteed big finalization% gains for comparatively little effort. They seemed to be especially easy because I already decompiled TH02's Music Room together with the rest of that game's OP.EXE back in early 2015, when this project focused on just raw decompilation with little to no research. 9 years of increased standards later though, it turns out that I missed a lot of details, and ended up renaming most variables and functions. Combined with larger-than-expected changes in later games and the usual quality level of ZUN's menu code, this ended up taking noticeably longer than the single push I expected.
The undoubtedly most interesting part about this screen is the animation in the background, with the spinning and falling polygons cutting into a single-color background to reveal a spacey image below. However, the only background image loaded in the Music Room is OP3.PI (TH02/TH03) or MUSIC3.PI (TH04/TH05), which looks like this in a .PI viewer or when converted into another image format with the usual tools:
Let's call this "the blank image".
That is definitely the color that appears on top of the polygons, but where is the spacey background? If there is no other .PI file where it could come from, it has to be somewhere in that same file, right?
And indeed: This effect is another bitplane/color palette trick, exactly like the 📝 three falling stars in the background of TH04's Stage 5. If we set every bit on the first bitplane and thus change any of the resulting even hardware palette color indices to odd ones, we reveal a full second 8-color sub-image hiding in the same .PI file:
The spacey sub-image. Never before seen!1!! …OK, touhou-memories beat me by a month. Let's add each image's full 16-color palette to deliver some additional value.
On a high level, the first bitplane therefore acts as a stencil buffer that selects between the blank and spacey sub-image for every pixel. The important part here, however, is that the first bitplane of the blank sub-images does not consist entirely of 0 bits, but does have 1 bits at the pixels that represent the caption that's supposed to be overlaid on top of the animation. Since there now are some pixels that should always be taken from the spacey sub-image regardless of whether they're covered by a polygon, the game can no longer just clear the first bitplane at the start of every frame. Instead, it has to keep a separate copy of the first bitplane's original state (called nopoly_B in the code), captured right after it blitted the .PI image to VRAM. Turns out that this copy also comes in quite handy with the text, but more on that later.
Then, the game simply draws polygons onto only the reblitted first bitplane to conditionally set the respective bits. ZUN used master.lib's grcg_polygon_c() function for this, which means that we can entirely thank the uncredited master.lib developers for this iconic animation – if they hadn't included such a function, the Music Rooms would most certainly look completely different.
This is where we get to complete the series on the PC-98 GRCG chip with the last remaining four bits of its mode register. So far, we only needed the highest bit (0x80) to either activate or deactivate it, and the bit below (0x40) to choose between the 📝 RMW and 📝 TCR/📝 TDW modes. But you can also use the lowest four bits to restrict the GRCG's operations to any subset of the four bitplanes, leaving the other ones untouched:
// Enable the GRCG (0x80) in regular RMW mode (0x40). All bitplanes are
// enabled and written according to the contents of the tile register.
outportb(0x7C, 0xC0);
// The same, but limiting writes to the first bitplane by disabling the
// second (0x02), third (0x04), and fourth (0x08) one, as done in the
// PC-98 Touhou Music Rooms.
outportb(0x7C, 0xCE);
// Regular GRCG blitting code to any VRAM segment…
pokeb(0xA800, offset, …);
// We're done, turn off the GRCG.
outportb(0x7C, 0x00);
This could be used for some unusual effects when writing to two or three of the four planes, but it seems rather pointless for this specific case at first. If we only want to write to a single plane, why not just do so directly, without the GRCG? Using that chip only involves more hardware and is therefore slower by definition, and the blitting code would be the same, right?
This is another one of these questions that would be interesting to benchmark one day, but in this case, the reason is purely practical: All of master.lib's polygon drawing functions expect the GRCG to be running in RMW mode. They write their pixels as bitmasks where 1 and 0 represent pixels that should or should not change, and leave it to the GRCG to combine these masks with its tile register and OR the result into the bitplanes instead of doing so themselves. Since GRCG writes are done via MOV instructions, not using the GRCG would turn these bitmasks into actual dot patterns, overwriting any previous contents of each VRAM byte that gets modified.
Technically, you'd only have to replace a few MOV instructions with OR to build a non-GRCG version of such a function, but why would you do that if you haven't measured polygon drawing to be an actual bottleneck?
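To make the difference concrete, here's a hedged one-byte continuation of the snippet above, with an arbitrary mask value:

// With the GRCG active in RMW mode, the written byte acts as a mask:
// only the pixels with 1 bits change, and their color comes from the
// GRCG's tile register.
pokeb(0xA800, offset, 0x3C);

// Without the GRCG, the same byte is literal pixel data for this one
// bitplane, also overwriting the other 4 pixels covered by this byte. A
// non-GRCG polygon function would therefore have to OR explicitly:
pokeb(0xA800, offset, (peekb(0xA800, offset) | 0x3C));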
An example with three polygons drawn from top to bottom. Without the GRCG, edges of later polygons overwrite any previously drawn pixels within the same VRAM byte. Note how treating bitmasks as dot patterns corrupts even those areas where the background image had nonzero bits in its first bitplane.
As far as complexity is concerned though, the worst part is the implicit logic that allows all this text to show up on top of the polygons in the first place. If every single piece of text is only rendered a single time, how can it appear on top of the polygons if those are drawn every frame?
Depending on the game (because of course it's game-specific), the answer involves either the individual bits of the text color index or the actual contents of the palette:
Colors 0 or 1 can't be used, because those don't include any of the bits that can stay constant between frames.
If the lowest bit of a palette color index has no effect on the displayed color, text drawn in either of the two colors won't be visually affected by the polygon animation and will always appear on top. TH04 and TH05 rely on this property with their colors 2/3, 4/5, and 6/7 being identical, but this would work in TH02 and TH03 as well.
But this doesn't apply to TH02 and TH03's palettes, so how do they do it? The secret: They simply include all text pixels in nopoly_B. This allows text to use any color with an odd palette index – the lowest bit then won't be affected by the polygons ORed into the first bitplane, and the other bitplanes remain unchanged.
TH04 is a curious case. Ostensibly, it removes support for odd text colors, probably because the new 10-frame fade-in animation on the comment text would require at least the comment area in VRAM to be captured into nopoly_B on every one of the 10 frames. However, the initial pixels of the tracklist are still included in nopoly_B, which would allow those to still use any odd color in this game. ZUN only removed those from nopoly_B in TH05, where it had to be changed because that game lets you scroll and browse through multiple tracklists.
The contents of nopoly_B with each game's first track selected.
Finally, here's a list of all the smaller details that turn the Music Rooms into such a mess:
Due to the polygon animation, the Music Room is one of the few double-buffered menus in PC-98 Touhou, rendering to both VRAM pages on alternate frames instead of using the other page to store a background image. Unfortunately though, this doesn't actually translate to tearing-free rendering because ZUN's initial implementation for TH02 mixed up the order of the required operations. You're supposed to first wait for the GDC's VSync interrupt and then, within the display's vertical blanking interval, write to the relevant I/O ports to flip the accessed and shown pages. Doing it the other way around and flipping as soon as you're finished with the last draw call of a frame means that you'll very likely hit a point where the (real or emulated) electron beam is still traveling across the screen. This ensures that there will be a tearing line somewhere on the screen on all but the fastest PC-98 models that can render an entire frame of the Music Room completely within the vertical blanking interval, causing the very issue that double-buffering was supposed to prevent.
ZUN only fixed this landmine in TH05.
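For reference, the intended order of operations, sketched with master.lib-style calls – the exact function names are from memory and might differ, but the sequence is what matters:

// Draw the next frame to the page that isn't currently shown…
render_music_room_frame(); // hypothetical
// …then block until the GDC's VSync interrupt fires, which puts us
// right at the start of the vertical blanking interval…
vsync_wait();
// …and only now flip the shown and accessed pages.
graph_showpage(back_page);
graph_accesspage(front_page);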
The polygons have a fixed vertex count and radius depending on their index, everything else is randomized. They are also never reinitialized while OP.EXE is running – if you leave the Music Room and reenter it, they will continue animating from the same position.
As for the PC-98 keyboard's quirky handling of held keys: TH02 and TH04 don't handle it at all, causing held keys to be processed again after about a second.
TH03 and TH05 correctly work around the quirk, at the usual cost of a 614.4 µs delay per frame. Except that the delay is actually twice as long in frames in which a previously held key is released, because this code is a mess.
But even in 2024, DOSBox-X is the only emulator that actually replicates this detail of real hardware. On anything else, keyboard input will behave as ZUN intended it to. At least I've now mentioned this once for every game, and can just link back to this blog post for the other menus we still have to go through, in case their game-specific behavior matches this one.
TH02 is the only game that
1) separately lists the stage and boss themes of the main game, rather than following the in-game order of appearance,
2) continues playing the selected track when leaving the Music Room,
3) always loads both MIDI and PMD versions, regardless of the currently selected mode, and
4) does not stop the currently playing track before loading the new one into the PMD and MMD drivers.
The combination of 2) and 3) allows you to leave the Music Room and change the music mode in the Option menu to listen to the same track in the other version, without the game changing back to the title screen theme. 4), however, might cause the PMD and MMD drivers to play garbage for a short while if the music data is loaded from a slow storage device that takes longer than a single period of the OPN timer to fill the driver's song buffer. Probably not worth mentioning anymore though, now that people no longer try fitting PC-98 Touhou games on floppy disks.
The Music Room comment texts are read from plain text files with a rigid format:
Exactly 40 (TH02/TH03) / 38 (TH04/TH05) visible bytes per line,
padded with 2 bytes that can hold a CR/LF newline sequence for easier editing.
Every track starts with a title line that mostly just duplicates the names from the hardcoded tracklist,
followed by a fixed 19 (TH02/TH03/TH04) / 9 (TH05) comment lines.
In TH04 and TH05, lines can start with a semicolon (;) to prevent them from being rendered. This is purely a performance hint, and is visually equivalent to filling the line with spaces.
All in all, the quality of the code is even slightly below the already poor standard for PC-98 Touhou: More VRAM page copies than necessary, conditional logic that is nested way too deeply, a distinct avoidance of state in favor of loops within loops, and – of course – a couple of gotos to jump around as needed.
In TH05, this gets so bad with the scrolling and game-changing tracklist that it all gives birth to a wonderfully obscure inconsistency: When pressing both ⬆️/⬇️ and ⬅️/➡️ at the same time, the game first processes the vertical input and then the horizontal one in the next frame, making it appear as if the latter takes precedence. Except when the cursor is highlighting the first (⬆️ ) or 12th (⬇️ ) element of the list, and said list element is not the first track (⬆️ ) or the quit option (⬇️ ), in which case the horizontal input is ignored.
And that's all the Music Rooms! The OP.EXE binaries of TH04 and especially TH05 are now very close to being 100% RE'd, with only the respective High Score menus and TH04's title animation still missing. As for actual completion though, the finalization% metric is more relevant as it also includes the ZUN Soft logo, which I RE'd on paper but haven't decompiled. I'm 📝 still hoping that this will be the final piece of code I decompile for these two games, and that no one pays to get it done earlier…
For the rest of the second push, there was a specific goal I wanted to reach for the remaining anything budget, which was blocked by a few functions at the beginning of TH04's and TH05's MAINE.EXE. In another anticlimactic development, this involved yet another way too early decompilation of a main() function…
Generally, this main() function just calls the top-level functions of all other ending-related screens in sequence, but it also handles the TH04-exclusive congratulating All Clear images within itself. After a 1CC, these are an additional reward on top of the Good Ending, showing the player character wearing a different outfit depending on the selected difficulty. On Easy Mode, however, the Good Ending is unattainable because the game always ends after Stage 5 with a Bad Ending, but ZUN still chose to show the EASY ALL CLEAR!! image in this case, regardless of how many continues you used.
While this might seem inconsistent with the other difficulties, it is consistent within Easy Mode itself, as the enforced Bad Ending after Stage 5 also doesn't distinguish between the number of continues. Also, Try to Normal Rank!! could very well be ZUN's roundabout way of implying "because this is how you avoid the Bad Ending".
With that out of the way, I was finally able to separate the VRAM text renderer of TH04 and TH05 into its own assembly unit, 📝 finishing the technical debt repayment project that I couldn't complete in 2021 due to assembly-time code segment label arithmetic in the data segment. This now allows me to translate this undecompilable self-modifying mess of ASM into C++ for the non-ASCII translation project, and thus unify the text renderers of all games and enhance them with support for Unicode characters loaded from a bitmap font. As the final finalized function in the SHARED segment, it also allowed me to remove 143 lines of particularly ugly segmentation workarounds 🙌
The remaining 1/6th of the second push provided the perfect occasion for some light TH02 PI work. The global boss position and damage variables represented some equally low-hanging fruit, being easily identified global variables that aren't part of a larger structure in this game. In an interesting twist, TH02 is the only game that uses an increasing damage value to track boss health rather than decreasing HP, and also doesn't internally distinguish between bosses and midbosses as far as these variables are concerned. Obviously, there's quite a bit of state left to be RE'd, not least because Marisa is doing her own thing with a bunch of redundant copies of her position, but that was too complex to figure out right now.
Also doing their own thing are the Five Magic Stones, which need five positions rather than a single one. Since they don't move, the game doesn't have to keep 📝 separate position variables for both VRAM pages, and can handle their positions in a much simpler way that made for a nice final commit.
And for the first time in a long while, I quite like what ZUN did there!
Not only are their positions stored in an array that is indexed with a consistent ID for every stone, but these IDs also follow the order you fight the stones in: The two inner ones use 0 and 1, the two outer ones use 2 and 3, and the one in the center uses 4. This might look like an odd choice at first because it doesn't match their horizontal order on the playfield. But then you notice that ZUN uses this property in the respective phase control functions to iterate over only the subrange of active stones, and you realize how brilliant it actually is.
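In C++ terms, the iteration trick boils down to something like this sketch, with my own names for everything:

struct screen_point_t { int x, y; };

// One position per stone, indexed with the consistent IDs described above:
// [0]/[1] = the two inner stones, [2]/[3] = the two outer ones, [4] = center.
screen_point_t stone_pos[5];

void update_stone(int id);

// Each phase then only has to iterate over the contiguous ID subrange of
// the stones that are actually active during it.
void stone_phase_update(int first_active, int last_active)
{
	for(int id = first_active; id <= last_active; id++) {
		update_stone(id);
	}
}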
This seems like a really basic thing to get excited about, especially since the rest of their data layout sure isn't perfect. Splitting each piece of state and even the individual X and Y coordinates into separate 5-element arrays is still counter-productive because the game ends up paying more memory and CPU cycles to recalculate the element offsets over and over again than this would have ever saved in cache misses on a 486. But that's a minor issue that could be fixed with a few regex replacements, not a misdesigned architecture that would require a full rewrite to clean it up. Compared to the hardcoded and bloated mess that was 📝 YuugenMagan's five eyes, this is definitely an improvement worthy of the good-code tag. The first actual one in two years, and a welcome change after the Music Room!
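For comparison, here are the two layouts side by side, with placeholder field names:

// TH02's structure-of-arrays layout: every piece of state gets its own
// 5-element array, and every access recalculates a separate
// (array base + index) offset.
int stone_x[5];
int stone_y[5];
int stone_phase_frame[5];

// The array-of-structures alternative that a few regex replacements could
// arrive at: a single indexed offset calculation yields all of a stone's
// state at once.
struct stone_t {
	int x;
	int y;
	int phase_frame;
};
stone_t stone[5];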
These three pieces of data alone yielded a whopping 5% of overall TH02 PI in just 1/6th of a push, bringing that game comfortably over the 60% PI mark. MAINE.EXE is guaranteed to reach 100% PI before I start working on the non-ASCII translations, but at this rate, it might even be realistic to go for 100% PI on MAIN.EXE as well? Or at least technical position independence, without the false positives.
Next up: Shuusou Gyoku SC-88Pro BGM. It's going to be wild.
And once again, the Shuusou Gyoku task was too complex to be satisfyingly solved within a single month. Even just finding provably correct loop sections in both the original and arranged MIDI files required some rather involved detection algorithms. I could have just defined what sounded like correct loops, but the results of these algorithms were quite surprising indeed. Turns out that not even Seihou is safe from ZUN quirks, and some tracks technically loop much later than you'd think they do, or don't loop at all. And since I then wanted to put these MIDI loops back into the game to ensure perfect synchronization between the recordings and MIDI versions, I ended up rewriting basically all the MIDI code in a cross-platform way. This rewrite also uncovered a pbg bug that has traveled from Shuusou Gyoku into Windows Touhou, where it survived until ZUN ultimately removed all MIDI code in TH11 (!)…
Fortunately, the backlog still had enough general PC-98 Touhou funds that I could spend on picking some soon-important low-hanging fruit, giving me something to deliver for the end of the month after all. TH04 and TH05 use almost identical code for their main/option menus, so decompiling it would make number go up quite significantly and the associated blog post won't be that long…
Wait, what's this, a bug report from touhou-memories concerning the website?
Tab switchers tended to break on certain Firefox versions, and
video playback didn't work on Microsoft Edge at all?
Those are definitely some high-priority bugs that demand immediate attention.
The tab switcher issue was easily fixed by replacing the previous z-index trickery with a more robust solution involving the hidden attribute. The second one, however, is much more aggravating, because video playback on Edge has been broken ever since I 📝 switched the preferred video codec to AV1.
This goes so far beyond not supporting a specific codec. Usually, unsupported codecs aren't supposed to be an issue: As soon as you start using the HTML <video> tag, you'll learn that not every browser supports all codecs. And so you set up an encoding pipeline to serve each video in a mix of new and ancient formats, put the <source> tag of the most preferred codec first, and rest assured that browsers will fall back on the best-supported option as necessary. Except that Edge doesn't even try, and insists on staying on a non-playing AV1 video. 🙄
The codecs parameter for the <source> type attribute was the first potential solution I came across. Specifying the video codec down to the finest encoding details right in the HTML markup sounds like a good idea, similar to specifying sizes of images and videos to prevent layout reflows on long pages during the initial page load. So why was this the first time I heard of this feature? The fact that there isn't a simple ffprobe -show_html_codecs_string command to retrieve this string might already give a clue about how useful it is in practice. Instead, you have to manually piece the string together by grepping your way through all of a video's metadata…
…and then it still doesn't change anything about Edge's behavior, even when also specifying the string for the VP9 and VP8 sources. Calling the infamously ridiculous HTMLMediaElement.canPlayType() method with a representative parameter of "video/webm; codecs=av01.1.04M.08.0.000.01.13.00.0" explains why: Both the AV1-supporting Chrome and Edge return "probably", but only the former can actually play this format. 🤦
But wait, there is an AV1 video extension in the Microsoft Store that would add support to any unspecified favorite video app. Except that it stopped working inside Edge as of version 116. And even if it did: If you can't query the presence of this extension via JavaScript, it might as well not exist at all.
Not to mention that the favorite video app part is obviously a lie as a lot of widely preferred Windows video apps are bundled with their own codecs, and have probably long supported AV1.
In the end, there's no way around the utter desperation move of removing the AV1 <source> for Edge users. Serving each video in two other formats means that we can at least do something here – try visiting the GitHub release page of the P0234-1 TH01 Anniversary Edition build in Edge and you also don't get to see anything, because that video uses AV1 and GitHub understandably doesn't re-encode every uploaded video into a variety of old formats.
Just for comparison, I tried both that page and the ReC98 blog on an old Android 6 phone from 2014, and even that phone picked and played the AV1 videos with the latest available Chrome and Firefox versions. This was the phone whose available Firefox version didn't support VP9 in 2019, which was my initial reason for adding the VP8 versions. Looks like it's finally time to drop those… 🤔 Maybe in the far future once I start running out of space on this server.
Removing the <source> tags can be done in one of two places:
1) server-side, detecting Edge via the User-Agent header, or
2) client-side, letting JavaScript remove the tags from the DOM.
I went with 2) because more dynamic server-side code would only move us further away from static site generation, which would make a lot of sense as the next evolutionary step in the architecture of this website. The client-side solution is much simpler too, and we can defer the deletion until a user actually hovers over a specific video.
And while we're at it, let's also add a popup complaining about this whole state of affairs. Edge is heavily marketed inside Windows as "the modern browser recommended by Microsoft", and you sure wouldn't expect low-quality chroma-subsampled VP9 from such a tagline. With such a level of anti-support for AV1, Edge users deserve to know exactly what's going on, especially since this post also explains what they will encounter on other websites.
That's the polite way of putting it.
Alright, where was I? For TH01, the main menu was the last thing I decompiled before the 100% finalization mark, so it's rather anticlimactic to already cover the TH04/TH05 one now, with both of the games still being very far away from 100%, just because people will soon want to translate the description text in the bottom-right corner of the screen. But then again, the ZUN Soft logo animation would make for an even nicer final piece of decompiled code, especially since the bouncing-ball logo from TH01, TH02, and TH03 was the very first decompilation I did, all the way back in 2015.
The code quality of ZUN's VRAM-based menus has barely increased between TH01 and TH05. Both the top-level and option menu still need to know the bounding rectangle of the other one to unblit the right pixels when switching between the two. And since ZUN sure loved hardcoded and copy-pasted numbers in the PC-98 days, the coordinates both tend to be excessively large, and excessively wrong. Luckily, each menu item comes with its own correct unblitting rectangle, which avoids any graphical glitches that would otherwise occur.
As for actual observable quirks and bugs, these menus only contain one of each, and both are exclusive to TH04:
Quitting out of the Music Room moves the cursor to the Start option. In TH05, it stays on Music Room.
Changing the S.E. mode seems to do nothing within TH04's menus, and would only take effect if you also change the Music mode afterward, or launch into the game.
And yes, these videos do have a frame rate of 2 FPS.
Now that 100% finalization of their OP.EXE binaries is within reach, all this bloat made me think about the viability of a 📝 single-executable build for TH04's and TH05's debloated and anniversary versions. It would be really nice to have such a build ready before I start working on the non-ASCII translations – not just because they will be based on the anniversary branch by default, but also because it would significantly help their development if there are 4 fewer executables to worry about.
However, it's not as simple for these games as it was for TH01. The unique code in their OP.EXE and MAINE.EXE binaries is much larger than Borland's easily removed C++ exception handler, so I'd have to remove a lot more bloat to keep the resulting single binary at or below the size of the original MAIN.EXE. But I'm sure going to try.
Speaking of code that can be debloated for great effect: The second push of this delivery focused on the first-launch sound setup menu, whose BGM and sound effect submenus are almost complete code duplicates of each other. The debloated branch could easily remove more than half of the code in there, yielding another ≈800 bytes in case we need them.
If hex-editing MIKO.CFG is more convenient for you than deleting that file, you can set its first byte to FF to re-trigger this menu. Decompiling this screen was not only relevant now because it contains text rendered with font ROM glyphs and it would help dig our way towards more important strings in the data segment, but also because of its visual style. I can imagine many potential mods that might want to use the same backgrounds and box graphics for their menus.
How about an initial language selection menu in the same style?
With the two submenus being shown in a fixed sequence, there's not a lot of room for the code to do anything wrong, and it's even more identical between the two games than the main menu already was. Thankfully, ZUN just reblits the respective options in the new color when moving the cursor, with no 📝 palette tricks. TH04's background image only uses 7 colors, so he could have easily reserved 3 colors for that. In exchange, the TH05 image gets to use the full 16 colors with no change to the code.
Rounding out this delivery, we also got TH05's rolling Yin-Yang Orb animation before the title screen… and it's just more bloat and landmines on a smaller scale that might be noticeable on slower PC-98 models. In total, there are three unnecessary inter-page copies of the entire VRAM that can easily insert lag frames, and two minor page-switching landmines that can potentially lead to tearing on the first frame of the roll or fade animation. Clearly, ZUN did not have smoothness or code quality in mind there, as evidenced by the fact that this animation simply displays 8 .PI files in sequence. But hey, a short animation like this is 📝 another perfectly appropriate place for a quick-and-dirty solution if you develop with a deadline.
And that's 1.30% of all PC-98 Touhou code finalized in two pushes! We're slowly running out of these big shared pieces of ASM code…
I've been neglecting TH03's OP.EXE quite a bit since it simply doesn't contain any translatable plaintext outside the Music Room. All menu labels are gaiji, and even the character selection menu displays its monochrome character names using the 4-plane sprites from CHNAME.BFT. Splitting off half of its data into a separate .ASM file was more akin to getting out a jackhammer to free up the room in front of the third remaining Music Room, but now we're there, and I can decompile all three of them in a natural way, with all referenced data.
Next up, therefore: Doing just that, securing another important piece of text for the upcoming non-ASCII translations and delivering another big piece of easily finalized code. I'm going to work full-time on ReC98 for almost all of December, and delivering that and the Shuusou Gyoku SC-88Pro recording BGM back-to-back should free up about half of the slightly higher cap for this month.
And we're back to PC-98 Touhou for a brief interruption of the ongoing Shuusou Gyoku Linux port.
Let's clear some of the Touhou-related progress from the backlog, and use
the unconstrained nature of these contributions to prepare the
📝 upcoming non-ASCII translations commissioned by Touhou Patch Center.
The current budget won't cover all of my ambitions, but it would at least be
nice if all text in these games was feasibly translatable by the time I
officially start working on that project.
At a little over 3 pushes, it might be surprising to see that this took
longer than the
📝 TH03/TH04/TH05 cutscene system. It's
obvious that TH02 started out with a different system for in-game dialog,
but while TH04 and TH05 look identical on the surface, they only
actually share 30% of their dialog code. So this felt more like decompiling
2.4 distinct systems, as opposed to one identical base with tons of
game-specific differences on top.
The table of contents was pretty popular last time around, so let's have
another one:
Let's start with the ones from TH04 and TH05, since they are not that
broken. For TH04, ZUN started out by copy-pasting the cutscene system,
causing the result to inherit many of the caveats I already described in the
cutscene blog post:
It's still a plaintext format geared exclusively toward full-width
Japanese text.
The parser still ignores all whitespace, forcing ASCII text into hacks
with unassigned Shift-JIS lead bytes outside the second byte of a 2-byte
chunk.
Commands are still preceded by a 0x5C byte, which renders
as either a \ or a ¥ depending on your font and
interpretation of Shift-JIS.
Command parameters are parsed in exactly the same way, with all the same
limits.
A lot of the same script commands are identical, including 7 of them
that were not used in TH04's original dialog scripts.
Then, however, he greatly simplified the system. Mainly, this was done by
moving text rendering from the PC-98 graphics chip to the text chip, which
avoids the need for any text-related unblitting code, but ZUN also added a
bunch of smaller changes:
The player must advance through every dialog box by releasing any held
keys and then pressing any key mapped to a game action. There are no
timeouts.
The delay for every 2 bytes of text was doubled to 2 frames, and can't
be overridden.
Instead of holding ESC to fast-forward, pressing any key
will immediately print the entire rest of a text box.
Dialogs run in their own single-buffered frame loop, interrupting the
rest of the game. The other VRAM page keeps the background pixels required
for unblitting the face images.
All script commands that affect the graphics layer are preceded by a
1-frame delay. ZUN most likely did this because of the single-buffered
nature, as it prevents tearing on the first frame by waiting for the CRT
beam to return to the top-left corner before changing any pixels.
Both boxes are intended to contain up to 30 half-width characters on
each of their up to 3 lines, but nothing in the code enforces these limits.
There is no support for automatic line breaks or starting new boxes.
While it would seem that TH05 has no issues with ASCII 0x20
spaces, the text as a whole is still blindly processed two bytes at a
time, and any commands can only appear at even byte positions within a
line. I dimmed the VRAM pixels to 25% of their original brightness to make the
text easier to read.
The same text backported to TH04, additionally demonstrating how that
game's dialog system inherited the whitespace skipping behavior of
TH03's cutscene system. Just like there, ASCII 0x20 spaces
only work at odd byte positions because the game treats them as the
trailing byte of a full-width Shift-JIS codepoint. I don't know how
large the budget for the upcoming non-ASCII translations will be, but
I'm going to fix this even in the very basic fully static variant.
I dimmed the VRAM pixels to 25% of their original brightness to make the
text easier to read.
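To make that chunking explicit, here's a rough model of both games' text loops; the names are mine, and TH04 additionally skips whitespace between chunks:

#include <stddef.h>

size_t parse_command(const unsigned char* cmd); // returns bytes consumed
bool is_command_byte(unsigned char c); // 0x5C in TH04, a raw byte in TH05
void put_chunk(const unsigned char* chunk); // prints 2 bytes as one glyph

// Why commands can only appear at even byte positions within a line: the
// loop unconditionally consumes two bytes per text iteration and only ever
// checks the first of them for a command.
void render_line(const unsigned char* line, size_t len)
{
	for(size_t i = 0; i < len; ) {
		if(is_command_byte(line[i])) {
			i += parse_command(&line[i]);
		} else {
			put_chunk(&line[i]); // blindly treated as one full-width glyph
			i += 2;
		}
	}
}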
TH05 then moved from TH04's plaintext scripts to the binary
.TX2 format while removing all the unused commands copy-pasted
from the cutscene system. Except for a
single additional command intended to clear a text box, TH05's dialog
system only supports a strict subset of the features of TH04's system.
This change also introduced the following differences compared to TH04:
The game now stores the dialog of all 4 playable characters in the same file, with a (4 + 1)-word header that indicates the byte offset and length of each character's script; one plausible layout is sketched after this list. This way, it can load only the one script for the currently played character.
Since there is no need for whitespace in a binary format, you can now
use ASCII 0x20 spaces even as the first byte of a 2-byte text
chunk! 🥳
All command parameters are now mandatory.
Filenames are now passed directly by pointer to the respective game
function. Therefore, they now need to be null-terminated, but can in turn be
as long as
📝 the number of remaining bytes in the allocated dialog segment.
In practice though, the game still runs on DOS and shares its restriction of
8.3 filenames…
When starting a new dialog box, any existing text in the other box is
now colored blue.
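Put into code, here's one plausible reading of that header – the exact layout is my guess, not confirmed beyond its 10-byte size:

#include <stdint.h>

// Hypothetical layout: one script offset per playable character, plus one
// final word so that the length of the 4th script can be derived just like
// the other three – as the difference between two consecutive offsets.
struct tx2_header_t {
	uint16_t script_offset[4 + 1];
};

Loading only the currently played character's script would then come down to seeking to script_offset[chara] and reading (script_offset[chara + 1] - script_offset[chara]) bytes.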
Thanks to ZUN messing up the return values of the command-interpreting
switch function, you can effectively use only line break and gaiji commands in the middle of text. All other
commands do execute, but the interpreter then also treats their command byte
as a Shift-JIS lead byte and places it in text RAM together with whatever
other byte follows in the script.
This is why TH04 can and does put its \= commands into the boxes
started with the 0 or 1 commands, but TH05 has to
put its 0x02 commands before the equivalent 0x0D.
Writing the 0x02 byte to text RAM results in a garbage character, which is simply the PC-98 font ROM's glyph for that
Shift-JIS codepoint. Also note how each face change is now
preceded by two frames of delay.
No problem in TH04. Note how the dialog also runs a bit faster – TH04
only adds the aforementioned one frame of delay to each face change, and
has fewer two-byte chunks of text to display overall.
For modding these files, you probably want to use TXDEF from
-Tom-'s MysticTK. It decodes these
files into a text representation, and its encoder then takes care of the
character-specific byte offsets in the 10-byte header. This text
representation simplifies the format a lot by avoiding all corner cases and
landmines you'd experience during hex-editing – most notably by interpreting
the box-starting 0x0D as a
command to show text that takes a string parameter, avoiding the broken
calls to script commands in the middle of text. However, you'd still have to
manually ensure an even number of bytes on every line of text.
In the entry function of TH05's dialog loop, we also encounter the hack that
is responsible for properly handling
📝 ZUN's hidden Extra Stage replay. Since the
dialog loop doesn't access the replay inputs but still requires key presses
to advance through the boxes, ZUN chose to just skip the dialog altogether in the
specific case of the Extra Stage replay being active, and replicated all
sprite management commands from the dialog script by just hardcoding
them.
And you know what? Not only do I not mind this hack, but I would have
preferred it over the actual dialog system! The aforementioned sprite
management commands effectively boil down to manual memory management,
deallocating all stage enemy and midboss sprites and thus ensuring that the
boss sprites end up at specific master.lib sprite IDs (patnums). The
hardcoded boss rendering function then expects these sprites to be available
at these exact IDs… which means that the otherwise hardcoded bosses can't
render properly without the dialog script running before them.
There is absolutely no excuse for the game to burden dialog scripts with
this functionality. Sure, delayed deallocation would allow them to blit
stage-specific sprites, but the original games don't do that; probably
because none of the two games feature an unblitting command. And even if
they did, it would have still been cleaner to expose the boss-specific
sprite setup as a single script command that can then also be called from
game code if the script didn't do so. Commands like these are just a recipe
for crashes, especially with parsers that expect fullwidth Shift-JIS
text and where misaligned ASCII text can easily cause these commands to be
skipped.
But then again, it does make for funny screenshot material if you
accidentally the deallocation and then see bosses being turned into stage
enemies:
Some of the more amusing consequences of not calling the
sprite-deallocating
\c /
0x04 command inside a dialog
script.
In the case of 4️⃣, the game then even crashes on this frame at the end
of the dialog, in a way that resembles the infamous
📝 TH04 crash before Stage 5 Yuuka if no EMS driver is loaded.
Both the stage- and boss-specific BFNT sprites are loaded into memory at
this point, leaving no room for the 256×256-pixel background image on
the size-limited master.lib heap.
With all the general details out of the way, here's the command reference:
0 1
0x00 0x01
Selects either the player character (0) or the boss (1) as the
currently speaking character, and moves the cursor to the beginning of
the text box. In TH04, this command also directly starts the new dialog
box, which is probably why it's not prefixed with a \ as it
only makes sense outside of text. TH05 requires a separate 0x0D command to do the
same.
\=1
0x02 0x!!
Replaces the face portrait of the currently active speaking
character with image #1 within her .CD2
file.
\=255
0x02 0xFF
Removes the face portrait from the currently active text box.
\l,filename
0x03 filename 0x00
Calls master.lib's super_entry_bfnt() function, which
loads sprites from a BFNT file to consecutive IDs starting at the
current patnum write cursor.
\c
0x04
Deallocates all stage-specific BFNT sprites (i.e., stage enemies and
midbosses), freeing up conventional RAM for the boss sprites and
ensuring that master.lib's patnum write cursor ends up at
128 /
180.
In TH05's Extra Stage, this command also replaces
📝 the sprites loaded from MIKO16.BFT with the ones from ST06_16.BFT.
\d
Deallocates all face portrait images.
The game automatically does this at the end of each dialog sequence.
However, ZUN wanted to load Stage 6 Yuuka's 76 KiB of additional
animations inside the script via \l, and would have once again
run up against the master.lib heap size limit without that extra free
memory.
\m,filename
0x05 filename 0x00
Stops the currently playing BGM, loads a new one from the given
file, and starts playback.
\m$
0x05 $ 0x00
Stops the currently playing BGM.
Note that TH05 interprets $ as a null-terminated filename as
well.
\m*
Restarts playback of the currently loaded BGM from the
beginning.
\b0,0,0
0x06 0x!!!! 0x!!!! 0x!!
Blits the master.lib patnum with the ID indicated by the third
parameter to the current VRAM page at the top-left screen position
indicated by the first two parameters.
\e0
Plays the sound effect with the given ID.
\t100
Sets palette brightness via master.lib's
palette_settone() to any value from 0 (fully black) to 200
(fully white). 100 corresponds to the palette's original colors.
\fo1
\fi1
Calls master.lib's palette_black_out() or
palette_black_in() to play a hardware palette fade
animation from or to black, spending roughly 1 frame on each of the 16 fade steps.
\wo1
\wi1
0x09 0x!!
0x0A 0x!!
Calls master.lib's palette_white_out() or
palette_white_in() to play a hardware palette fade
animation from or to white, spending roughly 1 frame on each of the 16 fade steps. The
TH05 version of 0x09 also clears the text in both boxes
before the animation.
\n
0x0B
Starts a new line by resetting the X coordinate of the TRAM cursor
to the left edge of the text area and incrementing the Y coordinate.
The new line will always be the next one below the last one that was
properly started, regardless of whether the text previously wrapped to
the next TRAM row at the edge of the screen.
\g8
Plays a blocking 8-frame screen shake
animation. Copy-pasted from the cutscene parser, but actually used right
at the end of the dialog shown before TH04's Bad Ending.
\ga0
0x0C 0x!!
Shows the gaiji with the given ID from 0 to 255
at the current cursor position, ignoring the per-glyph delay.
\k0
Waits 0 frames (0 = forever) for any key
to be pressed before continuing script execution.
Takes the current dialog cursor as the top-left corner of a
240×48-pixel rectangle, and replaces all text RAM characters within that
rectangle with whitespace.
This is only used to clear the player character's text box before
Shinki's final いくよ‼ ("Here I go!!") box. Shinki has two
consecutive text boxes in all 4 scripts here, and ZUN probably wanted to
clear the otherwise blue text to imply a dramatic pause before Shinki's
final sentence. Nice touch.
(You could, however, also use it after a
box-ending 0xFF command to mess with text RAM in
general.)
\#
Quits the currently running loop. This returns from either the text
loop to the command loop, or it ends the dialog sequence by returning
from the command loop back to gameplay. If this stage of the game later
starts another dialog sequence, it will start at the next script
byte.
\$
Like \#, but first waits for any key to be
pressed.
0xFF
Behaves like TH04's \$ in the text loop, and like
\# in the command loop. Hence, it's not possible in TH05 to
automatically end a text box and advance to the next one without waiting
for a key press.
Unused commands are in gray.
At the end of the day, you might criticize the system for how its landmines
make it annoying to mod in ASCII text, but it all works and does what it's
supposed to. ZUN could have written the cleanest single and central
Shift-JIS iterator that properly chunks a byte buffer into halfwidth and
fullwidth codepoints, and I'd still be throwing it out for the upcoming
non-ASCII translations in favor of something that either also supports UTF-8
or performs dictionary lookups with a full box of text.
The only actual bug can be found in the input detection, which once
again doesn't correctly handle the infamous key
up/key down scancode quirk of PC-98 keyboards. All it takes
is one wrongly placed input polling call, and suddenly you have to think
about how the update cycle behind the PC-98 keyboard state bytes
might cause the game to run the regular 2-frame delay for a single
2-byte chunk of text before it shows the full text of a box after
all… But even this bug is highly theoretical and could probably only be
observed very, very rarely, and exclusively on real hardware.
The same can't be said about TH02 though, but more on that later. Let's
first take a look at its data, which started out much simpler in that game.
The STAGE?.TXT files contain just raw Shift-JIS text with no
trace of commands or structure. Turning on the whitespace display feature in
your editor reveals how the dialog system even assumes a fixed byte
length for each box: 36 bytes per line which will appear on screen, followed
by 4 bytes of padding, which the original files conveniently use to visually
split the lines via a CR/LF newline sequence. Make sure to disable trimming
of trailing whitespace in your editor to not ruin the file when modding the
text…
Two boxes from TH02's STAGE5.TXT with visualized whitespace.
These also demonstrate how the CR/LF newlines only make up 2 of the 4
padding bytes, and require each line to be padded with two more bytes; you
could not use these trailing spaces for actual text. Also note how
the exquisite mixture of fullwidth and halfwidth spaces demands that the text be viewed with only the most metrically consistent monospace fonts to preserve the intended alignment. 🍷 It appears quite misaligned on my phone.
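Expressed as a C++ structure (my naming, not ZUN's), one such line is a fixed 40-byte record:

// Only text[] is ever rendered. The original files fill the padding with
// two trailing spaces followed by the CR/LF sequence, purely for
// readability in text editors.
struct th02_dialog_line_t {
	char text[36];
	char padding[4]; // "  \r\n" in the original files
};
static_assert(sizeof(th02_dialog_line_t) == 40, "line records are 40 bytes");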
Consequently, everything else is hardcoded – every effect shown between text
boxes, the face portrait shown for each box, and even how many boxes are
part of each dialog sequence. Which means that the source code now contains
a
long hardcoded list of face IDs for most of the text boxes in the game,
with the rest being part of the
dedicated hardcoded dialog scripts for 2/3 of the
game's stages.
Without the restriction to a fixed set of scripting commands, TH02 naturally
gravitated to having the most varied dialog sequences of all PC-98 Touhou
games. This flexibility certainly facilitated Mima's grand entrance
animation in Stage 4, or the different lines in Stage 4 and 5 depending on
whether you already used a continue or not. Marisa's post-boss dialog even
inserts the number of continues into the text itself – by, you guessed it,
writing to hardcoded byte offsets inside the dialog text before printing it
to the screen. But once again, I have nothing to
criticize here – not even the fact that the alternate dialog scripts have to
mutate the "box cursor" to jump to the intended boxes within the file. I
know that some people in my audience like VMs, but I would have considered
it more bloated if ZUN had implemented a full-blown scripting
language just to handle all these special cases.
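In sketch form, the continue-count trick could look like this – both the offset and the full-width digit encoding are my assumptions for illustration, not verified against the binary:

#include <stddef.h>

// Patches the number of continues into the loaded dialog text before it is
// printed. Full-width Shift-JIS digits are consecutive, starting with ０ at
// the two bytes 0x82 0x4F.
void patch_continue_count(unsigned char* text, int continues_used)
{
	const size_t offset = 42; // hypothetical hardcoded byte offset
	text[offset + 0] = 0x82;
	text[offset + 1] = (0x4F + continues_used);
}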
Another unique aspect of TH02 is the way it stores its face portraits, which
are infamous for how hard they are to find in the original data files. These
sprites are actually map tiles, stored in MIKO_K.MPN,
and drawn using the same functions used to blit the regular map tiles to the
📝 tile source area in VRAM. We can only guess
why ZUN chose this one out of the three graphics formats he used in TH02:
BFNT supports transparency, but sacrifices one of the 16 colors to do
so. ZUN only used 15 colors for the face portraits, but might have wanted to
keep open the option to use that 16th color. The detailed
backgrounds also suggest that these images were never supposed to be
transparent to begin with.
PI is used for all bigger and non-transparent images, but ZUN would have
had to write a separate small function to blit a 48×48 subsection of such an
image. That certainly wouldn't have stopped him in the TH01 days, but he
probably was already past that point by this game.
That only leaves .MPN. Sure, he did have to slice each face into 9
separate 16×16 "map" tiles to use this format, but that's a small price to
pay in exchange for not having to write any new low-level blitting code,
especially since he must have already had an asset pipeline to generate
these files.
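Blitting a face would then come down to a 3×3 loop over the regular tile blitting function, roughly like this sketch – the function name is invented, not master.lib's, and I'm assuming that the 9 tiles of a face are stored consecutively:

void blit_mpn_tile(int left, int top, int tile_id); // the regular tile blit

// Assembles a 48×48 face portrait out of its 9 16×16 "map" tiles from
// MIKO_K.MPN.
void blit_face(int left, int top, int first_tile_id)
{
	for(int row = 0; row < 3; row++) {
		for(int col = 0; col < 3; col++) {
			blit_mpn_tile(
				(left + (col * 16)),
				(top + (row * 16)),
				(first_tile_id + ((row * 3) + col))
			);
		}
	}
}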
TH02's MIKO_K.MPN, arranged into a 16×16-tile layout that
reveals how these tiles are combined into face portraits. MPNDEF from -Tom-'s MysticTK conveniently uses
this exact layout in its .BMP output. Earlier MPNDEF
versions crashed when converting this file as its 256 tiles led to an
8-bit overflow bug, so make sure you've updated to the current version
from the end of October 2023 if you want to convert this file yourself.
The format stores the 4 bitplanes of each 16×16 tile in order, so good
luck finding a different planar image viewer that would support both
such a tiled layout and a custom palette. Sometimes, a weird
internal format is the best type of obfuscation.
And since you're certainly wondering about all these black tiles at the
edges: Yes, these are not only part of the file and pad it from the required
240×192 pixels to 256×256, but also kept in memory during a stage, wasting
9.5 KiB of conventional RAM. That's 172 seconds of potential input
replay data, just for those people who might still think that we need EMS
for replays.
Alright, we've got the text, we've got the faces, let's slide in the box and
display it all on screen. Apparently though, we also have to blit the player
and option sprites using raw, low-level master.lib function calls in the
process? This can't be right, especially because ZUN
always blits the option sprite associated with the Reimu-A shot type,
regardless of which one the player actually selected. And if you keep moving
above the box area before the dialog starts, you get to see exactly how
wrong this is:
Let's look closer at Reimu's sprite during the slide-in animation, and in
the two frames before:
This one image shows off no less than 4 bugs:
ZUN blits the stationary player sprite here, regardless of whether the
player was previously moving left or right. This is a nice way of indicating
that Reimu stops moving once the dialog starts, but maybe ZUN should
have unblitted the old sprite so that the new one wouldn't have appeared on
top. The game only unblits the 384×64 pixels covered by the dialog box on
every frame of the slide-in animation, so Reimu would only appear correctly
if her sprite happened to be entirely located within that area.
All sprites are shifted up by 1 pixel in frame 2️⃣. This one is not a
bug in the dialog system, but in the main game loop. The game runs the
relevant actions in the following order:
1. Invalidate any map tiles covered by entities
2. Redraw invalidated tiles
3. Decrement the Y coordinate at the top of VRAM according to the scroll speed
4. Update and render all game entities
5. Scroll in new tiles as necessary according to the scroll speed, and report whether the game has scrolled one pixel past the end of the map
6. If that happened, pretend it didn't by incrementing the value calculated in #3 for all further frames and skipping to #8
7. Issue a GDC SCROLL command to reflect the line calculated in #3 on the display
8. Wait for VSync
9. Flip VRAM pages
10. Start boss if we're past the end of the map
The problem here: Once the dialog starts, the game has already rendered
an entire new frame, with all sprites being offset by a new Y scroll
offset, without adjusting the graphics GDC's scroll registers to
compensate. Hence, the Y position in 3️⃣ is the correct one, and the
whole existence of frame 2️⃣ is a bug in itself. (Well… OK, probably a
quirk because speedrunning exists, and it would be pretty annoying to
synchronize any video regression tests of the future TH02 Anniversary
Edition if it renders one fewer frame in the middle of a stage.)
ZUN blits the option sprites to their position from frame 1️⃣. This
brings us back to
📝 TH02's special way of retaining the previous and current position in a two-element array, indexed with a VRAM page ID.
Normally, this would be equivalent to using dedicated prev and
cur structure fields and you'd just index it with the back page
for every rendering call. But if you then decide to go single-buffered for dialogs and render them onto the front page instead… (see the sketch after this list)
Note that fixing bug #2 would not cancel out this one – the sprites would
then simply be rendered to their position in the frame before 1️⃣.
And of course, the fixed option sprite ID also counts as a bug.
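Here's the underlying pattern and its single-buffered failure mode in sketch form (again, not the decompiled code):

struct screen_point_t { int x, y; };
void render_option(const screen_point_t& pos);

// TH02 keeps the previous and current position of an entity in a
// two-element array indexed with a VRAM page ID, rather than in dedicated
// prev/cur fields.
screen_point_t option_pos[2];

void render_during_gameplay(int page_back)
{
	// Double-buffered: the back page's slot was just updated with the
	// position for the current frame, so this works out.
	render_option(option_pos[page_back]);
}

void render_during_dialog(int page_front)
{
	// Single-buffered onto the front page: the same indexing scheme now
	// yields the position from one frame earlier.
	render_option(option_pos[page_front]);
}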
As for the boxes themselves, it's yet another loop that prints 2-byte chunks
of Shift-JIS text at an even slower fixed interval of 3 frames. In an
interesting quirk though, ZUN assumes that every box starts with the name of
the speaking character in its first two fullwidth Shift-JIS characters,
followed by a fullwidth colon. These 6 bytes are displayed immediately at
the start of every box, without the usual delay. The resulting alignment
looks rather janky with Genjii, whose single right-padded 亀
kanji looks quite awkward with the fullwidth space between the name
and the colon. Kind of makes you wonder why ZUN just didn't spell out his
proper name, 玄爺, instead, but I get the stylistic
difference.
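As a sketch (invented names once more):

#include <stddef.h>

void put_shiftjis(const unsigned char* text, size_t bytes);
void frame_delay(int frames);

// TH02's box printing: the assumed 2-kanji name plus the full-width colon
// (3 full-width characters = 6 bytes) appear instantly, and the rest
// follows in 2-byte chunks at the fixed interval of 3 frames.
void print_box(const unsigned char* text, size_t len)
{
	put_shiftjis(text, 6);
	for(size_t i = 6; i < len; i += 2) {
		frame_delay(3);
		put_shiftjis((text + i), 2);
	}
}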
In Stage 4, the two-kanji assumption then breaks with Marisa's three-kanji
name, which causes the full-width colon to be printed as the first delayed
character in each of her boxes:
That's all the issues and quirks in the system itself. The scripts
themselves don't leave much room for bugs as they basically just loop over
the hardcoded face ID array at this level… until we reach the end of the
game. Previously, the slide-in animation could simply use the tile
invalidation and re-rendering system to unblit the box on each frame, which
also explained why Reimu had to be separately rendered on top. But this no
longer works with a custom-rendered boss background, and so the game just
chooses to flood-fill the area with graphics chip color #0:
Then again, transferring pixels from the back page would be just as wrong, since those pixels lag one frame behind. No way around capturing these 384×64
pixels to main memory here… Oh well, this flood-fill at least adds even more
legibility on top of the already half-transparent text box. A property that
the following dialog sequence unfortunately lacks…
For Mima's final defeat dialog though, ZUN chose to not even show the box.
He might have realized the issue by that point, or simply preferred the more
dramatic effect this had on the lines. The resulting issues, however, might
even have ramifications for such un-technical things as lore and
character dynamics. As it turns out, the code
for this dialog sequence does in fact render Mima's smiling face for all
boxes?! You only don't see it in the original game because it's rendered to
the other VRAM page that remains invisible during the dialog sequence:
Caution, flashing lights.
Here's how I interpret the situation:
The function that launches into the final part of the dialog script
starts with dedicated
code to re-render Mima to the back page, on top of the previously
rendered planet background. Since the entire script runs on the front
page (and thus, on top of the previous frame) and the game launches into
the ending immediately after, you don't ever get to see this new partial
frame in the original game.
Showing this partial frame would also ensure that you can actually
read the dialog text without a surrounding box. Then, the white
letters won't ever be put on top of any white bullets – or, worse, be completely invisible if the
dialog is triggered in the middle of Reimu-B's bomb animation, which
fills VRAM with lots of white pixels.
Hence, we've got enough evidence to classify not showing the back page
as a ZUN
bug. 🐞
However, Mima's smiling face jars with the words she says here. Adding
the face would deviate more significantly from the original game than
removing the player shot, item, bullet, or spark sprites would. It's
imaginable that ZUN just forgot about the dedicated code that
re-rendered just Mima to the back page, but the faces add
something to the dialog, and ZUN would have clearly noticed and
fixed it if their absence wasn't intended. Heck, ZUN might have just put
something related to Mima into the code because TH02's dialog system has
no way of not drawing a face for a dialog box. Filling the face
area with graphics chip color #0, as seen in the first and third boxes
of the Extra Stage pre-boss dialog, would have been an alternative, but
that would have been equally wrong with regard to the background.
Hence, the invisible face portrait from the original game is a ZUN
quirk. 🎺
So, the future TH02 Anniversary Edition will fix the bug by showing
the back page, but retain the quirk by rewriting the dialog code to
not blit the face.
And with that, we've secured all in-game dialog for the upcoming non-ASCII
translations! The remaining 2/3 of the last push made
for a good occasion to also decompile the small amount of code related to
TH03's win messages, stored in the @0?TX.TXT files. Similar to
TH02's dialog format, these files are also split into fixed-size blocks of
3×60 bytes. But this time, TH03 loads all 60 bytes of a line, including the
CR/LF line breaking codepoints in the original files, into the statically
allocated buffer that it renders from. These control characters are then
only filtered to whitespace by ZUN's graph_putsa_fx() function.
If you remove the line breaks, you get to use the full 60 bytes on every
line.
The final commits went to the MIKO.CFG loading and saving
functions used in TH04's and TH05's OP.EXE, as well as TH04's
game startup code to finally catch up with
📝 TH05's counterpart from over 3 years ago.
This brought us right in front of the main menu rendering code in both TH04
and TH05, which is identical in both games and will be tackled in the next
PC-98 Touhou delivery.
Next up, though: Returning to Shuusou Gyoku, and adding support for SC-88Pro
recordings as BGM. Which may or may not come with a slight controversy…
And now we're taking this small indie game from the year 2000 and porting
its game window, input, and sound to the industry-standard cross-platform
API with "simple" in its name.
Why did this have to be so complicated?! I expected this to take maybe 1-2
weeks and result in an equally short blog post. Instead, it raised so many
questions that I ended up with the longest blog post so far, by quite a wide
margin. These pushes ended up covering so many aspects that could be
interesting to a general and non-Seihou-adjacent audience, so I think we
need a table of contents for this one:
Before we can start migrating to SDL, we of course have to integrate it into
the build somehow. On Linux, we'd ideally like to just dynamically link to a
distribution's SDL development package, but since there's no such thing on
Windows, we'd like to compile SDL from source there. This allows us to reuse
our debug and release flags and ensures that we get debug information,
without needing to clone build scripts for every
C++ library ever in the process or something.
So let's get my Tup build scripts ready for compiling vendored libraries… or
maybe not? Recently, I've kept hearing about a hot new
technology that not only provides the rare kind of jank-free
cross-compiling build system for C/C++ code, but innovates by even
bundling a C++ compiler into a single 279 MiB package with no
further dependencies. Realistically replacing both Visual Studio and Tup
with a single tool that could target every OS is quite a selling point. The
upcoming Linux port makes for the perfect occasion to evaluate Zig, and to
find out whether Tup is still my favorite build system in 2023.
Even apart from its main selling point, there's a lot to like about Zig:
First and foremost: It's a modern systems programming language with
seamless C interop that we could gradually migrate parts of the codebase to.
The feature set of the core language seems to hit the sweet spot between C
and C++, although I'd have to use it more to be completely sure.
A native, optimized Hello World binary with no string formatting is
4 KiB when compiled for Windows, and 6.4 KiB when cross-compiled
from Windows to Linux. It's so refreshing to see a systems language in 2023
that doesn't bundle a bulky runtime for trivial programs and then defends it
with the old excuse of "but all this runtime code will come in handy the
larger your program gets". With a first impression like this, Zig
managed to realize the "don't pay for what you don't use" mantra that C++
typically claims for itself, but only pulls off maybe half of the time.
You can directly
target specific CPU models, down to even the oldest 386 CPUs?! How
amazing is that?! In contrast, Visual Studio only describes its /arch:IA32
compatibility option in very vague terms, leaving it up to you to figure out
that "legacy 32-bit x86 instruction set without any vector
operations" actually means "i586/P5 Pentium, because the startup code
still includes an unconditional CPUID instruction". In any
case, it means that Zig could also cover the i586 build.
Even better, changing Zig's CPU model setting recompiles both its
bundled C/C++ standard library and Zig's own compiler-rt polyfill
library for that architecture. This ensures that no unsupported
instructions ever show up in the binary, and also removes the need for
any CPUID checks. This is so much better than the Visual
Studio model of linking against a fixed pre-compiled standard library
because you don't have to trust that all these newer instructions
wouldn't actually be executed on older CPUs that don't have them.
I love the auto-formatter. Want to lay out your struct literal into
multiple lines? Just add a trailing comma to the end of the last element.
It's very snappy, and a joy to use.
Like every modern programming language, Zig comes with a test framework
built into the language. While it's not all too important for my grand plan
of having one big test that runs a bunch of replays and compares their game
states against the original binary, small tests could still be useful for
protecting gameplay code against accidental changes. It would be great if I
didn't have to evaluate and choose among
the many testing frameworks for C++ and could just use a language
standard.
Package
management is still in its infancy, but it's looking pretty good so far,
resembling Go's decentralized approach of just pointing to a URL but with
specific version selection from the get-go.
However, as a version number of 0.11.0 might already suggest, the whole
experience was then bogged down by quite a lot of issues:
While Zig's C/C++ compilation feature is very
well architected to reuse the C/C++ standard libraries of GCC and MinGW and
thus automatically keeps up with changes to the C++ standard library,
it's ultimately still just a Clang frontend. If you've been working with a
Visual Studio-exclusive codebase – which, as we're going to see below, can
easily happen even if you compile in C++23 mode – you'd now have to
migrate to Clang and Zig in a single step. Obviously, this can't ever
be fixed without Microsoft open-sourcing their C++ compiler. And even then,
supporting a separate set of command-line flags might not be worth it.
The standard library is very poorly documented, especially in the
build-related parts that are meant to attract the C++ audience.
Often, the only documentation is found in blog posts from a few years
ago, with example code written against old Zig versions that doesn't compile
on the newest version anymore. It's all very far from stable.
However, Zig's project generation sub-commands (zig
init-exe and friends) do emit well-documented boilerplate
code? It does make sense for that code to double as a comprehensive example,
but Zig advertises itself as so simple that I didn't even think about
bootstrapping my project with a CLI tool at first – unlike, say, Rust, where
a project always starts with filling out a small form in
Cargo.toml.
There's no progress output for C/C++ compilation? Like, at all?
This hurts especially because compilation times are significantly longer
than they were with Visual Studio. By default, the current Tupfile builds
Shuusou Gyoku in both debug and release configurations simultaneously. If I
fully rebuild everything from a clean cache, Visual Studio finishes such a
build in roughly the same amount of time that Zig takes to compile just a
debug build.
The --global-cache-dir option is only supported by specific
subcommands of the zig CLI rather than being a top-level
setting, and throws an error if used for any other subcommand. Not having a
system-wide way to change it and being forced into writing a wrapper script
for that is fine, but it would be nice if said wrapper script didn't have to
also parse and switch over the subcommand just to figure out whether it is
allowed to append the setting.
compiler-rt still needs a bit of dead code elimination work. As soon as
your program needs a single polyfilled function, you get all of them,
because they get referenced in some exception-related table even if nothing
uses them? Changing the link_eh_frame_hdr option had no
effect.
And that was not the only std.Build.Step.Compile option
that did nothing. Worse, if I just tweaked the options and changed nothing
about the code itself, Zig simply copied a previously built executable
out of its build cache into the output directory, as revealed by the
timestamp on the .EXE. While I am willing to believe that Zig correctly
detects that all these settings would just produce the same binary, I do not
like how this behavior inspires distrust and uncertainty in Zig's build
process as a whole. After all, we still live in a world where clearing
the build cache is way too often the solution for weird problems in
software, especially when using CMake. And it makes sense why it would be:
If you develop a complex system and then try solving the infamously hard
problem of cache invalidation on top, the risk of getting cache invalidation
wrong is, by definition, higher than if that was the only thing your system
did. That's the reason why I like Tup so much: It solely focuses on
getting cache invalidation right, and rather errs on the side of caution by
maybe unnecessarily rebuilding certain files every once in a while because
the compiler may have read from an environment variable that has changed in
the meantime. But this is the one job I expect a build system to do, and Tup
has been delivering for years and has become fundamentally more trustworthy
as a result.
Zig activates Clang's UBSan
in debug builds by default, which executes a program-crashing
UD2 instruction whenever the program is about to rely on
undefined C++ behavior. In theory, that's a great help for spotting hidden
portability issues, but it's not helpful at all if these crashes are
seemingly caused by C++ standard library code?! Without any clear info
about the actual cause, this just turned into yet another annoyance on
top of all the others. Especially because I apparently kept searching for
the wrong terms when I first encountered this issue, and only found
out how to deactivate it after I already decided against Zig.
Also, can we get /PDBALTPATH?
Baking absolute paths from the filesystem of the developer's machine into
released binaries is not only cringe in itself, but can also cause potential
privacy or security accidents.
So for the time being, I still prefer Tup. But give it maybe two or three
years, and I'm sure that Zig will eventually become the best tool for
resurrecting legacy C++ codebases. That is, if the proposed divorce of the
core Zig compiler from LLVM isn't an indication that the
productive parts of the Zig community consider the C/C++ building features
to be "good enough", and are about to de-emphasize them to focus more
strongly on the actual Zig language. Gaining adoption for your new systems
language by bundling it with a C/C++ build system is such a great and unique
strategy, and it almost worked in my case. And who knows, maybe Zig will
already be good enough by the time I get to port PC-98 Touhou to modern
systems.
(If you came from the Zig
wiki, you can stop reading here.)
A few remnants of the Zig experiment still remain in the final delivery. If
that experiment worked out, I would have had to immediately change the
execution encoding to UTF-8, and decompile a few ASM functions exclusive to
the 8-bit rendering mode which we could have otherwise ignored. While Clang
does support inline assembly with Intel syntax via
-fms-extensions, it has trouble with ; comments
and instructions like REP STOSD, and if I have to touch that
code anyway… (The REP STOSD function translated into a single call to memset(), by the way.)
Another smaller issue was Visual Studio's lack of standard library header
hygiene, where #including some of the high-level STL features also includes
more foundational headers that Clang requires to be included separately, but
I've already known about that. Instead, the biggest shocker was that Visual
Studio accepts invalid syntax for a language feature as recent as C++20
concepts:
// Defines the interface of a text rendering session class. To simplify this
// example, it only has a single `Print(const char* str)` method.
template <class T> concept Session = requires(T t, const char* str) {
	t.Print(str);
};

// Once the rendering backend has started a new session, it passes the session
// object as a parameter to a user-defined function, which can then freely call
// any of the functions defined in the `Session` concept to render some text.
template <class F, class S> concept UserFunctionForSession = (
	Session<S> && requires(F f, S& s) {
		{ f(s) };
	}
);

// The rendering backend defines a `Prerender()` method that takes the
// aforementioned user-defined function object. Unfortunately, C++ concepts
// don't work like this: The standard doesn't allow `auto` in the parameter
// list of a `requires` expression because it defines another implicit
// template parameter. Nevertheless, Visual Studio compiles this code without
// errors.
template <class T, class S> concept BackendAttempt = requires(
	T t, UserFunctionForSession<S> auto func
) {
	t.Prerender(func);
};

// A syntactically correct definition would use a different constraint term for
// the type of the user-defined function. But this effectively makes the
// resulting concept unusable for actual validation because you are forced to
// specify a type for `F`.
template <class T, class S, class F> concept SyntacticallyFixedBackend = (
	UserFunctionForSession<F, S> && requires(T t, F func) {
		t.Prerender(func);
	}
);

// The solution: Defining a dummy structure that behaves like a lambda as an
// "archetype" for the user-defined function.
struct UserFunctionArchetype {
	void operator ()(Session auto& s) {
	}
};

// Now, the session type disappears from the template parameter list, which
// even allows the concrete session type to be private.
template <class T> concept CorrectBackend = requires(
	T t, UserFunctionArchetype func
) {
	t.Prerender(func);
};
What's this, Visual Studio's infamous delayed template parsing applied to
concepts, because they're templates as well? Didn't
they get rid of that 6 years ago? You would think that we've moved
beyond the age where compilers differed in their interpretation of the core
language, and that opting into a current C++ standard turns off any
remaining antiquated behaviors…
So let's actually get my Tup build scripts ready for compiling
vendored libraries, because the
📝 previous 70 lines of Lua definitely
weren't. For this use case, we'd like to have some notion of distinct build
targets that can have a unique set of compilation and linking flags. We'd
also like to always build them in debug and release versions even if you
only intend to build your actual program in one of those versions – with the
previous system of specifying a single version for all code, Tup would
delete the other one, which forces a time-consuming and ultimately needless
rebuild once you switch to the other version.
The solution I came up with treats the set of compiler command-line options
like a tree whose branches can concatenate new options and/or filter the
versions that are built on this branch. In total, this is my 4th
attempt at writing a compiler abstraction layer for Tup. Since we're
effectively forced to write such layers in Lua, it will always be a
bit janky, but I think I've finally arrived at a solid underlying design
that might also be interesting for others. Hence, I've split off the result
into its own separate
repository and added high-level documentation and a documented example.
And yes, that's a Code Nutrition
label! I've wanted to add one of these ever since I first heard about the
idea, since it communicates nicely how seriously such an open-source project
should be taken. Which, in this case, is actually not all too
seriously, especially since development of the core Tup project has all but
stagnated. If Zig does indeed get better and better at being a Clang
frontend/build system, the only niches left for Tup will be Visual
Studio-exclusive projects, or retrocoding with nonstandard toolchains (i.e.,
ReC98). Quite ironic, given Tup's Unix heritage…
Oh, and maybe general Makefile-like tasks where you just want to run
specific programs. Maybe once the general hype swings back around and people
start demanding proper graph-based dependency tracking instead of just a command runner…
Alright, alternatives evaluated, build system ready, time to include SDL!
Once again, I went for Git submodules, but this time they're held together
by a
batch file that ensures that the intended versions are checked out before
starting Tup. Git submodules have a bad rap mainly because of their
usability issues, and such a script should hopefully work around
them? Let's see how this plays out. If it ends up causing issues after all,
I'll just switch to a Zig-like model of downloading and unzipping a source
archive. Since Windows comes with curl and tar
these days, this can even work without any further dependencies.
Compiling SDL from a non-standard build system requires a
bit of globbing to include all the code that is being referenced, as
well as a few linker settings, but it's ultimately not much of a big deal.
I'm quite happy that it was possible at all without pre-configuring a build,
but hey, that's what maintaining a Visual Studio project file does to a
project.
By building SDL with the stock Windows configuration, we then end up with
exactly what the SDL developers want us to use… which is a DLL. You
can statically link SDL, but they really don't want you to do
that. So strongly, in fact, that they don't merely argue how well the textbook
advantages of dynamic linking have worked for them and gamers as a whole, but
have implemented a whole dynamic API
system that enforces overridable dynamic function loading even in static
builds. Nudging developers to their preferred solution by removing most
advantages from static linking by default… that's certainly a strategy. It
definitely fits with SDL's grassroots marketing, which is very good at
painting SDL as the industry standard and the only reliable way to keep your
game running on all originally supported operating systems. Well, at least
until SDL 3 is so stable that SDL 2 gets deprecated and won't
receive any code for new backends…
However, dynamic linking does make sense if you consider what SDL is.
Offering all those multiple rendering, input, and sound backends is what
sets it apart from its more hip competition, and you want to have all of
them available at any time so that SDL can dynamically select them based on
what works best on a system. As a result, everything in SDL is being
referenced somewhere, so there's no dead code for the linker to eliminate.
Linking SDL statically with link-time code generation just prolongs your
link time for no benefit, even without the dynamic API thwarting any chance
of SDL calls getting inlined.
There's one thing I still don't like about all this, though. The dynamic
API's table references force you to include all of SDL's subsystems in the
DLL even if your game doesn't need some of them. But it does fit with their
intention of having SDL2.dll be swappable: If an older game
stopped working because of an outdated SDL2.dll, it should be
possible for anyone to get that game working again by replacing that DLL
with any newer version that was bundled with any random newer game. And
since that would fail if the newer SDL2.dll was size-optimized
to not include some of the subsystems that the older game required, they
simply removed (or de-prioritized) the possibility altogether.
Maybe that was their train of thought? You can always just use the official Windows
DLL, whose whole point is to include everything, after all. 🤷
So, what do we get in these 1.5 MiB? There are:
renderer backends for Direct3D 9/11/12, regular OpenGL, OpenGL ES 2.0,
Vulkan, and a software renderer,
and audio backends for WinMM, DirectSound, WASAPI, and direct-to-disk
recording.
Unfortunately, SDL 2 also statically references some newer Windows API
functions and therefore doesn't run on Windows 98. Since this build of
Shuusou Gyoku doesn't introduce any new features to the input or sound
interfaces, we can still use pbg's original DirectSound and DirectInput code
for the i586 build to keep it working with the rest of the
platform-independent game logic code, but it will start to lag behind in
features as soon as we add support for SC-88Pro BGM or more sophisticated input
remapping. If we do want to keep this build at the same feature level as
the SDL one, we now have a choice: Do we write new DirectInput and
DirectSound code and get it done quickly but only for Shuusou Gyoku, or do
we port SDL 2 to Windows 98 and benefit all other SDL 2 games as
well? I leave
that for my backers to decide.
Immediately after writing the first bits of actual SDL code to initialize
the library and create the game window, you notice that SDL makes it very
simple to gradually migrate a game. After creating the game window, you can
call SDL_GetWindowWMInfo()
to retrieve HWND and HINSTANCE handles that allow
you to continue using your original DirectDraw, DirectSound, and DirectInput
code and focus on porting one subsystem at a time.
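Here's a minimal sketch of that hook, using the real SDL 2 API; only the function name and the error handling are invented:
#include <SDL_syswm.h>

// Retrieves the Win32 handles of an SDL window, which the original
// DirectDraw, DirectSound, and DirectInput code can then keep using.
bool GetWin32Handles(SDL_Window* window, HWND* hWnd, HINSTANCE* hInstance)
{
    SDL_SysWMinfo info;
    SDL_VERSION(&info.version); // must be filled in before the call
    if (!SDL_GetWindowWMInfo(window, &info) ||
        (info.subsystem != SDL_SYSWM_WINDOWS)) {
        return false;
    }
    *hWnd = info.info.win.window;
    *hInstance = info.info.win.hinstance;
    return true;
}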
Sadly, D3DWindower can no longer turn SDL's fullscreen mode into a windowed
one, but DxWnd still works, albeit a bit jankily, insisting on minimizing the
game whenever its window loses focus. But in exchange, the
game window can surprisingly be moved now! Turns out that the originally
fixed window position had nothing to do with the way the game created its
DirectDraw context, and everything to do with pbg
blocking the Win32 "syscommand" that allows a window to be moved. By
deleting a system menu… seriously?! Now I'm dying to hear the Raymond
Chen explanation for how this behavior dates back to an unfortunate decision
during the Win16 days or something.
As implied by that commit, I immediately backported window movability to the
i586 build.
However, the most important part of Shuusou Gyoku's main loop is its frame
rate limiter, whose Win32 version leaves a bit of room for improvement.
Outside of the uncapped [おまけ] ("bonus") DrawMode, the
original main loop continuously checks whether at least 16 milliseconds have
elapsed since the last simulated (but not necessarily rendered) frame. And
by that I mean continuously, and deliberately without using any of
the Windows system facilities to sleep the process in the meantime, as
evidenced by a commented-out Sleep(1) call. This has two
important effects on the game:
The 60Fps DrawMode actually corresponds to a
frame rate of
(1000 / 16) = 62.5 FPS,
not 60. Since the game didn't account for the missing
2/3 ms to bring the limit down to exactly 60 FPS,
62.5 FPS is Shuusou Gyoku's actual official frame rate in a
non-VSynced setting, which we should also maintain in the SDL port.
Not sleeping the process turns Shuusou Gyoku's frame rate limitation
into a busy-waiting loop, which always uses 100% of a single CPU core just
to wait for the next frame.
Sure, modern computers are fast, but a frame won't ever take an
infinitely fast 0 milliseconds to render. So we still need to take the
current frame time into account.
SDL_Delay()'s documentation says that the wake-up could be
further delayed due to OS scheduling.
To address both of these issues, I went with a base delay time of
15 ms minus the time spent on the current frame, followed by
busy-waiting for the last millisecond to make sure that the next frame
starts on the exact frame boundary. And lo and behold: Even though this
still technically wastes up to 1 ms of CPU time, it still dropped CPU
usage into the 0%-2% range during gameplay on my Intel Core i5-8400T CPU,
which is over 5 years old at this point. Your laptop battery will appreciate
this new build quite a bit.
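Expressed in code, the new limiter amounts to something like this – a minimal sketch that assumes SDL's millisecond tick counter, with invented names rather than the exact shipped implementation:
// Hybrid limiter: sleep away most of the 16 ms frame, then busy-wait for
// at most the final millisecond to hit the exact frame boundary.
void WaitForFrameBoundary(Uint32& frame_start)
{
    const Uint32 FRAME_DURATION = 16; // → the original 62.5 FPS

    const Uint32 frame_time = (SDL_GetTicks() - frame_start);
    if (frame_time < (FRAME_DURATION - 1)) {
        // Base delay of 15 ms, minus the time spent on the current frame
        SDL_Delay((FRAME_DURATION - 1) - frame_time);
    }
    while ((SDL_GetTicks() - frame_start) < FRAME_DURATION) {
        // Busy-wait for up to 1 ms
    }
    frame_start += FRAME_DURATION;
}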
Time to look at audio then, because it sure looks less complicated than
input, doesn't it? Loading sounds from .WAV file buffers, playing a fixed
number of instances of every sound at a given position within the stereo
field and with optional looping… and that's everything already. The
DirectSound implementation is so straightforward that the most complex part
of its code is the .WAV file parser.
Well, the big problem with audio is actually finding a cross-platform
backend that implements these features in a way that seamlessly works with
Shuusou Gyoku's original files. DirectSound really is the perfect sound API
for this game:
It doesn't require the game code to specify any output sample format.
Just load the individual sound effects in their original format, and
playback just works and sounds correctly.
Its final sound stream seems to have a latency of 10 ms, which is
perfectly fine for a game running at 62.5 FPS. Even 15 ms would be
OK.
Sound effect looping? Specified by passing the
DSBPLAY_LOOPING flag to
IDirectSoundBuffer::Play().
Stereo panning (well, balancing)? One method call.
Playing the same sound multiple times simultaneously from a single memory
buffer? One method call, as sketched after this list. (It can fail though,
requiring you to copy the data after all.)
Pausing all sounds while the game window is not focused? That's the
default behavior, but it can be equally easily disabled with just
a single per-buffer flag.
Future streaming of waveform BGM? No problem either. Windows Touhou has
always done that, and here's
some code I wrote 12½ years ago that would even work without DirectSound
8's notification feature.
No further binary bloat, because it's part of the operating system.
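For illustration, here's what the duplication point boils down to – a hedged sketch around the real IDirectSound methods, with invented names and no real error handling:
// Starts one additional playing instance of an already-loaded sound effect.
// The duplicate shares the sample memory of [original], but gets its own
// play cursor, pan, and volume.
void PlayNewInstance(IDirectSound* ds, IDirectSoundBuffer* original, LONG pan)
{
    IDirectSoundBuffer* instance = nullptr;
    if (ds->DuplicateSoundBuffer(original, &instance) == DS_OK) {
        instance->SetPan(pan);                 // stereo balancing
        instance->Play(0, 0, DSBPLAY_LOOPING); // looping via a single flag
    }
}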
That last point about binary bloat can't really be an argument against anything, but we'd still
be left with 7 other boxes that a cross-platform alternative would have to
tick. We already picked SDL for our portability needs, so how does its audio
subsystem stack up? Unfortunately, not great:
It's fully DIY. All you get is a single output buffer, and you have to
do all the mixing and effect processing yourself. In other words, it's the
masochistic approach to cross-platform audio.
There are helper functions for resampling and mixing, but the
documentation of the latter is full of FUD. With a disclaimer that so
vehemently discourages the use of this function, what are you supposed to do
if you're newly integrating SDL audio into a game? Hunt for a separate sound
mixing library, even though your only quality goal is parity with stone-age
DirectSound? 🙄
It forces the game to explicitly define the PCM sampling rate, bit depth, and
channel count of the output buffer (see the sketch after this list). You can't
just pass a nullptr to SDL_OpenAudioDevice(),
and if you pass a zeroed SDL_AudioSpec structure, SDL just defaults
to an unacceptable 22,050 Hz sampling rate, regardless of what the
audio device would actually prefer. It took until last year for them to
notice that people would at least like to query the native
format. But of course, this approach requires the backend to actually
provide this information – and since we've seen above that DirectSound
doesn't care, the
DirectSound version of this function has to actually use the more modern
WASAPI, and remains unimplemented if that API is not available.
Standardizing the game on a single sampling rate, bit depth, and channel
count might be a decent choice for games that consistently use a single
format for all its sounds anyway. In that case, you get to do all mixing and
processing in that format, and the audio backend will at most do one final
conversion into the playback device's native format. But in Shuusou Gyoku,
most sound effects use 22,050 Hz, the boss explosion sound effect uses
11,025 Hz, and the future SC-88Pro BGM will obviously use
44,100 Hz. In such a scenario, you would have to pick the highest
sampling rate among all sound sources, and resample any lower-quality sounds
to that rate. But if the audio device uses a different sampling rate, those
lower-quality sounds would get resampled a second time.
I know that this
will be fixed in SDL 3, but that version is still under heavy
development.
Positives? Uh… the callback-based nature means that BGM streaming is
rather trivial, and would even be comparatively less complicated than with
DirectSound. Having a mutex to prevent
writes to your sound instance structures while they're being read by the
audio thread is nice too.
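To make the DIY nature concrete, here's roughly what SDL 2 audio setup looks like – a hedged sketch with the real API, but a stubbed-out callback:
// Hypothetical DIY mixing callback: SDL hands us a raw byte buffer that we
// have to completely fill with our final mixed output.
void SDLCALL MixAllSounds(void* userdata, Uint8* stream, int len)
{
    SDL_memset(stream, 0, len); // silence; actual mixing would go here
}

// SDL 2 forces us to fully specify the output format up front; fields left
// at 0 fall back to SDL's own defaults, not the device's native format.
SDL_AudioSpec desired = {};
desired.freq = 48000;
desired.format = AUDIO_F32SYS;
desired.channels = 2;
desired.samples = 512;
desired.callback = MixAllSounds;

SDL_AudioSpec obtained;
const SDL_AudioDeviceID device = SDL_OpenAudioDevice(
    nullptr, 0, &desired, &obtained, 0
);
SDL_PauseAudioDevice(device, 0); // start the audio thread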
OK, sure, but you're not supposed to use it for anything more than a
single stream of audio. SDL_mixer exists precisely to cover such non-trivial
use cases, and it even supports sound effect looping and panning with just a
single function call! But as far as the rest of the library is concerned, it
manages to be an even bigger disappointment than raw SDL audio:
As it sits on top of SDL's audio subsystem, it still can't just use your
audio device's native sample format.
It only offers a very opinionated system for streaming – and of course,
its opinion is wrong. 😛 The fact that it only supports a single streaming
audio track wouldn't matter all too much if you could switch to another
track at sample precision. But since you can't, you're forced to implement
looping BGM using a single file…
…which brings us to the unfortunate issue of loop point definitions.
And, perhaps most importantly, the complete lack of any way to set them
through the API?! It doesn't take long until you come up with a theory for
why the API only offers a function to retrieve loop points: The
"music" abstraction is so format-agnostic that it even supports MIDI
and tracker formats where a typical loop point in PCM samples doesn't make
sense. Both of these formats already have in-band ways of specifying loop
points in their respective time units. They
might not be standardized, but it's still much better than usual
single-file solutions for PCM streams where the loop point has to be stored
in an out-of-band way – such as in a metadata tag or an entirely separate
file.
Speaking of MIDI, why is it so common among these APIs to not have
any way of specifying the MIDI device? The fact that Windows Vista
removed the Control Panel option for specifying the system-wide default
MIDI output device is no excuse for your API lacking the option as well.
In fact, your MIDI API now needs such a setting more than it was
needed in the Windows XP and 9x days.
Funnily enough, they did once receive a patch for a function to set loop
points which was never upstreamed… and this patch came from
the main developer behind PyTouhou, who needed that feature for obvious
reasons. The world sure is a small place.
As a result, they turned loop points into a property that each
individual format may
or may
not have. Want to loop
MP3 files at sample precision? Tough luck, time to reconvert to another
lossy format. 🙄 This is the exact jank I decided against when I implemented
BGM modding for thcrap back in 2018,
where I concluded that separate intro and
loop files are the way to go.
But OK, we only plan to use FLAC and Ogg Vorbis for the SC-88Pro BGM, for
which SDL_mixer does support loop points in the form of Vorbis comments,
and hey, we can even pass them at sample accuracy. Sure, it's wrong and
everything, but nothing I couldn't work with…
However, the final straw that makes SDL_mixer unsuitable for Shuusou
Gyoku is its core sound mixing paradigm of distributing all sound effects
onto a fixed number of channels, set to 8
by default. Which raises the quite ridiculous question of how many we
would actually need to cover the maximum amount of sounds that can
simultaneously be played back in any game situation. The theoretic maximum
would be 41, which is the combined sum of individual sound buffer instances
of all 20 original sound effects. The practical limit would surely be a lot
smaller, but we could only find out that one through experiments, which
honestly is quite a silly proposition.
It makes you wonder why they went with this paradigm in the first
place. And sure enough, they actually
use the aforementioned SDL core function for mixing audio. Yes, the
same function whose current documentation advises against using it for
this exact use case. 🙄 What's the argument here? "Sure, 8 is significantly
more than 2, but any mixing artifacts that will occur for the next 6 sounds
are not worth worrying about, yet they get really bad after the 8th
sound, so we're just going to protect you from that"?
This dire situation made me wonder if SDL was the wrong choice for Shuusou
Gyoku to begin with. Looking at other low-level cross-platform game
libraries, you'll quickly notice that all of them come with mostly
equally capable 2D renderers these days, and mainly differentiate themselves
in minute API details that you'd only notice upon a really close look. raylib is another one of those
libraries and has been getting exceptionally popular in recent years, to the
point of even having more than twice as many GitHub stars as SDL. By
restricting itself to OpenGL, it can even offer an
abstraction for shaders, which we'd really like for the 西方Project lens ball effect.
In the case of raylib's audio system, the lack of sound effect looping is
the minute API detail that would make it annoying to use for Shuusou Gyoku.
But it might be worth a look at how raylib implements all this if it doesn't
use SDL… which turned out to be the best look I've taken in a long time,
because raylib builds on top of miniaudio
which is exactly the kind of audio library I was hoping to find.
Let's check the list from above:
🟢 miniaudio's high-level API initialization defaults to the native
sample format of the playback device. Its internal processing uses 32-bit
floating-point samples and only converts back to the native bit depth as
necessary when writing the final stream into the backend's audio buffer.
WASAPI, for example, never needs any further conversion because it operates
with 32-bit floats as well.
🟢 The final audio stream uses the same 10 ms update period (and
thus, sound effect latency) that I was getting with DirectSound.
🟢 Stereo panning (well, balancing)? ma_sound_set_pan(),
although it does require a conversion from Shuusou Gyoku's dB units into a
linear attenuation factor.
🟢 Sound effect looping? ma_sound_set_looping(). (Both of these calls are shown in the sketch after this list.)
🟢 Playing the same sound multiple times simultaneously from a single
memory buffer? Perfectly possible, but requires a bit of digging in the
header to find the best solution. More on that below.
🟢 Future streaming of waveform BGM? Just call
ma_sound_init_from_file() with the
MA_SOUND_FLAG_STREAM flag.
👍 It also comes with a FLAC decoder in the core library and an Ogg
Vorbis one as part of the repo, …
🤩 … and even supports gapless switching between the intro and loop
files via a single declarative call to
ma_data_source_set_next()!
(Oh, and it also has ma_data_source_set_loop_point_in_pcm_frames()
for anyone who still believes in obviously and objectively
inferior out-of-band loop points.)
🟢 Pausing all sounds while the game window is not focused? It's not
automatic, but adding new functions to the sound interface and calling
ma_engine_stop() and ma_engine_start() does the
trick, and most importantly doesn't cause any samples to be lost in the
process.
🟡 Sound control is implemented in a lock-free way, allowing your main
game thread to call these at any time without causing glitches on the audio
thread. While that looks nice and optimal on the surface, you now have to
either believe in the soundness (ha) of the implementation, or verify that
atomic structure fields actually are enough to not cause any race
conditions (which I did for the calls that Shuusou Gyoku uses, and I didn't
find any). "It's all lock-free, don't worry about it" might be
easier, but I consider SDL's approach of just providing a mutex to
prevent the output callback from running while you mutate the sound state to
actually be simpler conceptually.
🟡 miniaudio adds 247 KB to the binary in its minimum
configuration, a bit more than expected. Some of that is bloat from effect
code that we never use, but it does include backends for all three Windows
audio subsystems (WASAPI, DirectSound, and WinMM).
✅ But perhaps most importantly: It natively supports all modern
operating systems that one could seriously want to port this game to, and
could be easily ported to any other backend, including
SDL.
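Put together, the high-level calls from this list read pleasantly declarative. A hedged sketch with real miniaudio identifiers, but an invented file name and no error handling:
ma_engine engine;
if (ma_engine_init(NULL, &engine) != MA_SUCCESS) {
    // ...
}

// Streamed BGM with looping and panning, in four declarative calls
ma_sound bgm;
ma_sound_init_from_file(
    &engine, "bgm.flac", MA_SOUND_FLAG_STREAM, NULL, NULL, &bgm
);
ma_sound_set_looping(&bgm, MA_TRUE);
ma_sound_set_pan(&bgm, 0.0f); // -1.0 = left, +1.0 = right
ma_sound_start(&bgm);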
Oh, and it's written by the same developer who also wrote the best FLAC
library back in 2018. And that's despite them being single-file C libraries,
which I consider to be massively overrated…
The drawback? Similar to Zig, it's only on version 0.11.18, and also focuses
on good high-level documentation at the expense of an API reference. Unlike
Zig though, the three issues I ran into turned out to be actual and fixable
bugs: Two minor
ones related to looping of streamed sounds shorter than 2 seconds which
won't ever actually affect us before we get into BGM modding, and a critical one that
added high-frequency corruption to any mono sound effect during its
expansion to stereo. The latter took days to track down – with symptoms
like these, you'd immediately suspect the bug to lie in the resampler or its
low-pass filter, both of which are so much more of a fickle and configurable
part of the conversion chain here. Compared to that, stereo expansion is so
conceptually simple that you wouldn't imagine anyone getting it wrong.
While the latter PR has been merged, the fix is still only part of the
dev branch and hasn't been properly released yet. Fortunately,
raylib is not affected by this bug: It does currently
ship version 0.11.16 of miniaudio, but its usage of the library predates
miniaudio's high-level API and it therefore uses a different,
non-SSE-optimized code path for its format conversions.
The only slightly tricky part of implementing a miniaudio backend for
Shuusou Gyoku lies in setting up multiple simultaneously playing instances
for each individual sound. The documentation and answers on the issue
tracker heavily push you toward miniaudio's resource manager and its file
abstractions to handle this use case. We surely could turn Shuusou Gyoku's
numeric sound effect IDs into fake file names, but it doesn't really fit the
existing architecture where the sound interface just receives in-memory .WAV
file buffers loaded from the SOUND.DAT packfile.
In that case, this seems to be the best way (sketched in code after these steps):
Call ma_decode_memory() to decode from any of the supported
audio formats to a buffer of raw PCM samples. At this point, you can
choose between
1) decoding into the original format the sound effect is stored in, which
would require it to be converted to the playback format every time it's
played, or
2) decoding into 32-bit floats (the native bit depth of the miniaudio
engine) and the native sampling rate of the playback device, which avoids
any further resampling and floating-point conversion, but takes up more
memory.
Nowadays, it's not clear at all which of the two approaches is faster.
Does it actually matter if we save the audio thread from doing all those
floating-point operations on every sample? Or is that no longer true these
days because the audio thread is probably running on a different CPU core,
the rest of the game largely doesn't touch the floating-point parts of your
CPU anyway, and you'd rather want to keep sound effects small so that they
can better fit into the CPU cache? That would be an interesting question to
benchmark, but just like the similar text rendering question from the last
blog posts, it doesn't matter for this tiny 2000s retro game. 😌
I went with 2) mainly because it simplified all the debugging I was doing.
At a sampling rate of 48,000 Hz, this increases the memory usage for
all sound effects from 379 KiB to 3.67 MiB. At least I'm not
channel-expanding all sound effects as well here…
We've seen earlier that mono➜stereo expansion
is SSE-optimized, so it's very hard to justify a further doubling of the
memory usage here.
Then, for each instance of the sound, call
ma_audio_buffer_ref_init() to create a reference
buffer with its own playback cursor, and
ma_sound_init_from_data_source() to create a new
high-level sound node that will play back the reference buffer.
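In code, the two steps could look like this – a hedged sketch with real miniaudio functions, but illustrative names and a fixed instance count:
// 1) Decode the in-memory .WAV buffer into the engine's native format,
//    which corresponds to choice 2) above.
ma_decoder_config config = ma_decoder_config_init(
    ma_format_f32,
    ma_engine_get_channels(&engine),
    ma_engine_get_sample_rate(&engine)
);
ma_uint64 frame_count = 0;
void* frames = NULL;
ma_decode_memory(wav_buffer, wav_size, &config, &frame_count, &frames);

// 2) One reference buffer (with its own playback cursor) and one high-level
//    sound per simultaneously playing instance, all sharing [frames].
ma_audio_buffer_ref refs[INSTANCE_COUNT];
ma_sound sounds[INSTANCE_COUNT];
for (int i = 0; i < INSTANCE_COUNT; i++) {
    ma_audio_buffer_ref_init(
        config.format, config.channels, frames, frame_count, &refs[i]
    );
    ma_sound_init_from_data_source(&engine, &refs[i], 0, NULL, &sounds[i]);
}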
As a side effect of hunting that one critical bug in miniaudio, I've now
learned a fair bit about audio resampling in general. You'll probably need
some knowledge about basic
digital signal behavior to follow this section, and that video is still
probably the best introduction to the topic.
So, how could this ever be an issue? The only time I ever consciously
thought about resampling used to be in the context of the Opus codec and its
enforced sampling rate of 48,000 Hz, and how Opus advocates
claim that resampling is a solved problem and nothing to worry about,
especially in the context of a lossy codec. Still, I didn't add Opus to
thcrap's BGM modding feature entirely because the mere thought of having to
downsample to 44,100 Hz in the decoder was off-putting enough. But even
if my worries were unfounded in that specific case: Recording the
Stereo Mix of Shuusou Gyoku's now two audio backends revealed that
apparently not every audio processing chain features an Opus-quality
resampler…
If we take a look at the material that resamplers actually have to work with
here, it quickly becomes obvious why their results are so varied. As
mentioned above, Shuusou Gyoku's sound effects use rather low sampling rates
that are pretty far away from the 48,000 Hz your audio device is most
definitely outputting. Therefore, any potential imaging noise across the
extended high-frequency range – i.e., from the original Nyquist frequencies
of 11,025 Hz/5,512.5 Hz up to the new limit of 24,000 Hz – is
still within the audible range of most humans and can clearly color the
resulting sound.
But it gets worse if the audio data you put into the resampler is
objectively defective to begin with, which is exactly the problem we're
facing with over half of Shuusou Gyoku's sound effects. Encoding them all as
8-bit PCM is definitely excusable because it was the turn of the millennium
and the resulting noise floor is masked by the BGM anyway, but the blatant
clipping and DC offsets definitely aren't:
Waveforms for all 20 of Shuusou Gyoku's sound effects (KEBARI, TAME, LASER,
LASER2, BOMB, SELECT, HIT, CANCEL, WARNING, SBLASER, BUZZ, MISSILE, JOINT,
DEAD, SBBOMB, BOSSBOMB, ENEMYSHOT, HLASER, TAMEFAST, and WARP), in the order
they appear inside SOUND.DAT and with their internal names and measured true
peaks. We can see quite an abundance of clipping, as well as a significant DC
offset in WARNING, BUZZ, JOINT, SBBOMB, and BOSSBOMB.
Wait a moment, true peaks? Where do those come from? And, equally
importantly, how can we even observe, measure, and store anything
above the maximum amplitude of a digital signal?
The answer to the first question can be directly derived from the Xiph.org
video I linked above: Digital signals are lollipop graphs, not stairsteps as
commonly depicted in audio editing software. Converting them back to an
analog signal involves constructing a continuous curve that passes through
each sample point, and whose frequency components stay below the Nyquist
frequency. And if the amplitude of that reconstructed wave changes too
strongly and too rapidly, the resulting curve can easily overshoot the
maximum digital amplitude of 0
dBFS even if none of the defined samples are above that limit.
So let's store the resampled output as a FLAC file and load it into Audacity
to visualize the clipped peaks… only to find all of them replaced with the
typical kind of clipping distortion? 😕 Turns out that I've stumbled over
the one case where the FLAC format isn't lossless and there's
actually no alternative to .WAV: FLAC just doesn't support
floating-point samples and simply truncates them to discrete integers during
encoding. When we measured inter-sample peaks above, we weren't only
resampling to a floating-point format to avoid any quantization to discrete
integer values, but also to make it possible to store amplitudes beyond the
0 dBFS point of ±1.0 in the first place. Once we lose that ability,
these amplitudes are clipped to the maximum value of the integer bit depth,
and baked into the waveform with no way to get rid of them again. After all,
the resampled file now uses a higher sampling rate, and the clipping
distortion is now a defined part of what the sound is.
Finally, storing a digital signal with inter-sample peaks in a
floating-point format also makes it possible for you to reduce the
volume, which moves these peaks back into the regular, unclipped amplitude
range. This is especially relevant for Shuusou Gyoku as you'll probably
never listen to sound effects at full volume.
Now that we understand what's going on there, we can finally compare the
output of various resamplers and pick a suitable one to use with miniaudio.
And immediately, we see how they fall into two categories:
High-quality resamplers are the ones I described earlier: They cleanly
recreate the signal at a higher sampling rate from its raw frequency
representation and thus add no high-frequency noise, but can lead to
inter-sample peaks above 0 dBFS.
Linear resamplers use much simpler math to merely interpolate
between neighboring samples. Since the newly interpolated samples can only
ever stay within 0 dBFS, this approach fully avoids inter-sample
clipping, but at the expense of adding high-frequency imaging noise that has
to then be removed using a low-pass filter.
miniaudio only comes with a linear resampler – but so does DirectSound as it
turns out, so we can actually get pretty close to how the game sounded
originally:
All of Shuusou Gyoku's sound effects combined and resampled into a
single 48,000 Hz / 32-bit float .WAV file, using GoldWave's File Merger tool. By
converting to 32-bit float first and then resampling, the
conversion preserved the exact frequency range of the original
22,050 Hz and 11,025 Hz files, even despite clipping. There
are small noise peaks across the entire frequency range, but they
only occur at the exact boundary between individual sound effects. These
are a simple result of the discontinuities that naturally occur in the
waveform when concatenating signals that don't start or end at a 0
sample.
As mentioned above, you'll only get this sound out of your DAC at lower
volumes where all of the resampled peaks still fit within 0 dBFS.
But you most likely will have reduced your volume anyway, because these
effects would be ear-splittingly loud otherwise.
The result of converting 1️⃣ into FLAC. The necessary bit depth
conversion from 32-bit float to 16-bit integers clamps any data above
0 dBFS or ±1.0f to the discrete
[-32,768; 32,767] range, the value range of such
an integer. The resulting straight lines at maximum amplitude in the
time domain then turn into distortion across the entire 24,000 Hz
frequency domain, which then remains a part of the waveform even at
lower volumes. The locations of the high-frequency noise exactly match
the clipped locations in the time-domain waveform images above.
The resulting additional distortion can be best heard in
BOSSBOMB, where the low source frequency ensures that any
distortion stays firmly within the hearing range of most humans.
All of Shuusou Gyoku's sound effects as played through DirectSound and
recorded through Stereo Mix. DirectSound also seems to use a linear
resampler, with a low-pass filter that leaves quite a bit of high-frequency noise in the
signals, making these effects sound crispier than they should be.
Depending on where you stand, this is either highly inaccurate and
something that should be fixed, or actually good because the sound
effects really benefit from that added high end. I myself am definitely
in the latter camp – and hey, this sound is the result of original game
code, so it is accurate at least in that regard.
All of Shuusou Gyoku's sound effects as converted by miniaudio and
directly saved to a file, with the same low-pass filter setting used in
the P0256 build. This first-order low-pass filter is a decent
approximation of DirectSound's resampler, even though it sounds slightly
crispier as the high-frequency noise is boosted a little further. By
default, miniaudio would use a 4th-order low-pass filter, so
this is the second-lowest resampling quality you can get, short of
disabling the low-pass filter altogether.
Conversion results when using miniaudio's 8th-order low-pass
filter for resampling, the highest quality supported. This is the
closest we can get to the reference conversion without using a custom
resampler. If we do want to go for perfect accuracy though, we might as
well go
for 1️⃣ directly?
These spectrum images were initially created using ffmpeg's -lavfi
showspectrumpic=mode=combined:s=1280x720 filter. The samples
appear in the same order as in the waveform above.
And yes, these are indeed the first videos on this blog to have sound! I
spent another push on preparing the
📝 video conversion pipeline for audio
support, and on adding the highly important volume control to the player.
Web video codecs only support lossy audio, so the sound in these videos will
not exactly match the spectrum image, but the lossless source files do
contain the original audio as uncompressed PCM streams.
Compared to that whole mess of signals and noise, keyboard and joypad input
is indeed much simpler. Thanks to SDL, it's almost trivial, and only
slightly complicated because SDL offers two subsystems with seemingly
identical APIs:
SDL_GameController provides a consistent interface for the typical kind
of modern gamepad with two analog sticks, a D-pad, and at least 4 face and 2
shoulder buttons. This API is implemented by simply combining SDL_Joystick
with a
long list of mappings for specific controllers, and therefore doesn't
work with joypads that don't match this standard.
According
to SDL, this is what a "game controller" looks like. Here's
the source of the SVG.
SDL_Joystick, on the other hand, exposes the raw axes, buttons, and POV hats
of any joypad, but identifies each of them with nothing but a numeric ID.
To match Shuusou Gyoku's original WinMM backend, we'd ideally want to keep
the best aspects from both APIs but without being restricted to
SDL_GameController's idea of a controller. The Joy
Pad menu just identifies each button with a numeric ID, so
SDL_Joystick would be a natural fit. But what do we do about directional
controls if SDL_Joystick doesn't tell us which joypad axes correspond to the
X and Y directions, and we don't have the SDL-recommended configuration UI yet?
Doing that right would also mean supporting
POV hats and D-pads, after all… Luckily, all joypads we've tested map
their main X axis to ID 0 and their main Y axis to ID 1, so this seems like
a reasonable default guess.
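A hedged sketch of that default guess, using the real SDL_Joystick API; the dead zone value is illustrative, not tuned:
struct Direction {
    bool left, right, up, down;
};

// Reads the directional state from the main X/Y axes at IDs 0 and 1.
Direction ReadDirection(SDL_Joystick* joystick)
{
    const Sint16 DEAD_ZONE = 8192;
    const Sint16 x = SDL_JoystickGetAxis(joystick, 0); // main X axis
    const Sint16 y = SDL_JoystickGetAxis(joystick, 1); // main Y axis
    return {
        (x <= -DEAD_ZONE), (x >= +DEAD_ZONE),
        (y <= -DEAD_ZONE), (y >= +DEAD_ZONE),
    };
}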
The necessary consolidation of the game's original input handling uncovered
several minor bugs around the High Score and Game Over screen that I
sufficiently described in the release notes of the new build. But it also
revealed an interesting detail about the Joy Pad
screen: Did you know that Shuusou Gyoku lets you unbind all these
actions by pressing more than one joypad button at the same time? The
original game indicated unbound actions with a [Button
0] label, which is pretty confusing if you have ever programmed
anything because you now no longer know whether the game starts numbering
buttons at 0 or 1. This is now communicated much more clearly.
ESC is not bound to any joypad button in
either screenshot, but it's only really obvious in the P0256
build.
With that, we're finally feature-complete as far as this delivery is
concerned! Let's send a build over to the backers as a quick sanity check…
a~nd they quickly found a bug when running on Linux and Wine. When holding a
button, the game randomly stops registering directional inputs for a short
while on some joypads? Sounds very much like a Wine bug, especially if the
same pad works without issues on Windows.
And indeed, on certain joypads, Wine maps the buttons to completely
different and disconnected IDs, as if it simply invents new buttons or axes
to fill the resulting gaps. Until we can differentiate joypad bindings
per controller, it's therefore unlikely that you can use the same joypad
mapping on both Windows and Linux/Wine without entering the Joy Pad menu and remapping the buttons every time you
switch operating systems.
Still, by itself, this shouldn't cause any issues with my SDL event handling
code… except, of course, if I forget a break; in a switch case.
🫠
This completely preventable implicit fallthrough has now caused a few hours
of debugging on my end. I'd better crank up the warning level to keep this
from ever happening again. Opting into this specific warning also revealed
why we haven't been getting it so far: Visual Studio did gain a whole host
of new warnings related to the C++ Core
Guidelines a while ago, including the one I
was looking for, but actually getting the compiler to throw these
requires activating
a separate static analysis mode together with a plugin, which
significantly slows down build times. Therefore I only activate them for
release builds, since these already take long enough.
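For the record, the bug was of this shape (the handler names are hypothetical), shown together with the C++17 attribute for the cases where a fallthrough is actually intended:
switch (event.type) {
case SDL_JOYAXISMOTION:
    HandleAxis(event.jaxis);
    break; // ← the line whose absence cost those hours

case SDL_JOYBUTTONDOWN:
    [[fallthrough]]; // C++17: silences the warning where it's deliberate
case SDL_JOYBUTTONUP:
    HandleButton(event.jbutton);
    break;
}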
Since all that input debugging already started a 5th push, I
might as well fill that one by restoring the original screenshot feature.
After all, it's triggered by a key press (and is thus related to the input
backend), reads the contents of the frame buffer (and is thus related to the
graphics backend), and it honestly looks bad to have this disclaimer in the
release notes just because we're one small feature away from 100% parity
with pbg's original binary.
Coincidentally, I had already written code to save a DirectDraw surface to a
.BMP file for all the debugging I did in the last delivery, so we were
basically only missing filename generation. Except that Shuusou
Gyoku's original choice of mapping screenshots to the PrintScreen key did
not age all too well:
And as of Windows 11, the OS takes full control of the key by binding it
to the Snipping Tool by default, complete with a UI that politely steals
focus when hitting that key.
As a result, both Arandui and I independently arrived at the
idea of remapping screenshots to the P key, which is the same screenshot key
used by every Windows Touhou game since TH08.
The rest of the feature remains unchanged from how it was in pbg's original
build and will save every distinct frame rendered by the game (i.e., before
flipping the two framebuffers) to a .BMP file as long as the P key is being
held. At a 32-bit color depth, these screenshots take up 1.2 MB per
frame, which will quickly add up – especially since you'll probably hold the
P key for more than 1/60 of a second and therefore end
up saving multiple frames in a row. We should probably compress
them one day.
Since I already translated some of Shuusou Gyoku's ASM code to C++ during
the Zig experiment, it made sense to finish the fifth push by covering the
rest of those functions. The integer math functions are used all throughout
the game logic, and are the main reason why this goal is important for a
Linux port, or any port to a 64-bit architecture for that matter. If you've
ever read a micro-optimization-related blog post, you'll know that hand-written ASM is a great recipe for the finest jank, and the game's square root function definitely delivers in that regard, right out of the gate.
What slightly differentiates this algorithm from the typical definition of
an integer
square root is that it rounds up: In real numbers, √3 is
≈ 1.73, so isqrt(3) returns 2 instead of 1. However, if
the result is always rounded down, you can determine whether you have to
round up by simply squaring the calculated root and comparing it to the radicand. And even that
is only necessary if the difference between the two doesn't naturally fall
out of the algorithm – which is what also happens with Shuusou Gyoku's
original ASM code, but pbg
didn't realize this and squared the result regardless.
That's one suboptimal detail already. Let's call the original ASM function
in a loop over the entire supported range of radicands from 0 to
231 and produce a list of results that I can verify my C++
translation against… and watch as the function's linear time complexity with
regard to the radicand causes the loop to run for over 15 hours on my
system. 🐌 In a way, I've found the literal opposite of Q_rsqrt()
here: Not fast, not inverse, no bit hacks, and surely without the
awe-inspiring kind of WTF.
I really didn't want to run the same loop over a
literal C++ translation of the same algorithm afterward. Calculating
integer square roots is a common problem with lots of solutions, so let's
see if we can go better than linear.
And indeed, Wikipedia
also has a bitwise algorithm that runs in logarithmic time, uses only
additions, subtractions, and bit shifts, and even ends up with an error term
that we can use to round up the result as necessary, without a
multiplication. And this algorithm delivers the exact same results over the
exact same range in… 50 seconds. 🏎️ And that's including the I/O to print
the first radicand that yields each of the 46,341 distinct square root
results.
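Here's the gist of that algorithm in C++, adapted to Shuusou Gyoku's round-up behavior. This is a sketch following the Wikipedia version, not necessarily the exact code that ended up in the repository:
#include <stdint.h>

// Logarithmic-time integer square root, using only additions, subtractions,
// and bit shifts. The remainder left in [n] doubles as the error term that
// tells us whether to round up.
uint32_t SqrtCeil(uint32_t n)
{
    uint32_t root = 0;
    uint32_t bit = (1u << 30); // largest power of 4 ≤ 2³¹
    while (bit > n) {
        bit >>= 2;
    }
    while (bit != 0) {
        if (n >= (root + bit)) {
            n -= (root + bit);
            root = ((root >> 1) + bit);
        } else {
            root >>= 1;
        }
        bit >>= 2;
    }
    // [n] now holds (radicand − root²). If it's nonzero, the real square
    // root lies above [root], so we round up – no multiplication required.
    return ((n != 0) ? (root + 1) : root);
}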
"But wait a moment!", I hear you say. "Why are you bothering with
an integer square root algorithm to begin with? Shouldn't good old
round(sqrt(x)) from <math.h> do the trick
just fine? Our CPUs have had SSE for a long time, and this probably compiles
into the single SQRTSD instruction. All that extra
floating-point hardware might mean that this instruction could even run in
parallel with non-SSE code!"
And yes, all of that is technically true. So I tested it, and my very
synthetic and constructed micro-benchmark did indeed deliver the same
results in… 48 seconds. That's not enough of a
difference to justify breaking the spirit of treating the FPU as lava that
permeates Shuusou Gyoku's code base. Besides, it's not used for that much to
begin with:
pre-calculating the 西方Project lens ball effect
the fade animation when entering and leaving stages
rendering the circular part of stationary lasers
pulling items to the player when bombing
After a quick C++ translation of the RNG function that spells out a 32-bit
multiplication on a 32-bit CPU using 16-bit instructions, we reach the final
pieces of ASM code for the 8-bit atan2() and trapezoid
rendering. These could actually pass for well-written ASM code in how they
express their 64-bit calculations: atan8() prepares its 64-bit
dividend in the combined EDX and EAX registers in
a way that isn't obvious at all from a cursory look at the code, and the
trapezoid functions effectively use Q32.32 subpixels. C++ allows us to
cleanly model all these calculations with 64-bit variables, but
unfortunately compiles the divisions into a call to a comparatively much
more bloated 64-bit/64-bit-division polyfill function. So yeah, we've
actually found a well-optimized piece of inline assembly that even Visual
Studio 2022's optimizer can't compete with. But then again, this is all
about code generation details that are specific to 32-bit code, and it
wouldn't be surprising if that part of the optimizer isn't getting much
attention anymore. Whether that optimization was useful, on the other hand…
Oh well, the new C++ version will be much more efficient in 64-bit builds.
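To illustrate what Q32.32 subpixels mean in practice, here's a hedged reconstruction of the underlying pattern – not pbg's actual trapezoid code, just the kind of edge stepping it amounts to:
#include <stdint.h>

// Steps one trapezoid edge from [x_top] to [x_bottom] over [height]
// scanlines, keeping 32 bits of subpixel precision.
void StepEdge(int32_t x_top, int32_t x_bottom, int32_t height)
{
    int64_t x = (int64_t{ x_top } << 32); // Q32.32
    // This is the 64-bit division that Visual Studio compiles into a call
    // to a polyfill function in 32-bit builds.
    const int64_t slope = (((int64_t{ x_bottom } - x_top) << 32) / height);
    for (int32_t y = 0; y < height; y++) {
        const int32_t x_pixel = int32_t(x >> 32);
        // ... render the scanline starting at [x_pixel] ...
        x += slope;
    }
}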
And with that, there's no more ASM code left in Shuusou Gyoku's codebase,
and the original DirectXUTYs directory is slowly getting
emptier and emptier.
Phew! Was that everything for this delivery? I think that was everything.
Here's the new build, which checks off 7 of the 15 remaining portability
boxes:
Next up: Taking a well-earned break from Shuusou Gyoku and starting with the
preparations for multilingual PC-98 Touhou translatability by looking at
TH04's and TH05's in-game dialog system, and definitely writing a shorter
blog post about all that…
And then I'm even late by yet another two days… For some reason, preparing
Shuusou Gyoku for an OpenGL port has been the most difficult and drawn-out
task I've worked on so far throughout this project. These pushes had been in
development since April, over two months in total. Tackling a legacy
codebase with such a rather vague goal while simultaneously wanting to keep
everything running did not do me any favors, and it was pretty hard to
resist the urge to fix everything that had better be fixed to make
this game portable… 📝 2022 ended with Shuusou Gyoku working at full speed on Windows ≥8 by itself, without external tools, for the first
time. However, since it all came down to just one small bugfix, the
resulting build still had several issues:
The game might still start in the slow, mitigated 8-bit or 16-bit
mode if the respective app compatibility flag is still present in the
registry from the earlier 📝 P0217 build. A
player would then have to manually put the game into 32-bit mode via the
Option menu to make it run at its actual intended speed. Bypassing this flag
programmatically would require some rather fiddly .EXE patching techniques.
(#33)
The 32-bit mode tends to lag significantly if a lot of sprites are
onscreen, for example when canceling the final pattern of the Extra Stage
midboss. (#35)
If the game window lost and regained focus during the ending (for
example via Alt-Tabbing), the game reloads the wrong sprite sheet. (#19)
And, of course, we still have no native windowed mode, or support for
rendering in the higher resolutions you'd want to use on modern high-DPI
displays. (#7)
Now, we could tackle all of these issues one by one, in focused pushes… or
wait for one hero to fund a full-on OpenGL backend as part of the larger
goal of porting this game to Linux. This would take much longer, but fix all
these issues at once while bringing us significantly closer to Shuusou Gyoku
being cross-platform. Which is exactly what Ember2528 did.
Shuusou Gyoku is a very Windows-native codebase. Its usage of types
declared in <windows.h> even extends to core gameplay
code, the rendering code is completely architected around DirectDraw's
features and drawbacks, and text rendering is not abstracted at all. Looks
like it's now my task to write all the abstractions that pbg didn't manage
to write…
Therefore, I chose to stay with DirectDraw for a few more pushes while I
would build these abstractions. In hindsight, this was the least efficient
approach one could possibly imagine for the exact goal of porting the game
to Linux. Suddenly, I had to understand all this DirectDraw and GDI
jank, just to keep the game running at every step along the way. Retaining
Shuusou Gyoku's 8-bit mode in particular was a huge pain, but I didn't want
to remove it because it's currently the only way I can easily debug the game
in windowed mode at a scaled resolution, through DxWnd. In 16-bit or
32-bit mode, DxWnd slows down to a crawl, roughly resembling the performance
drop we used to get with Windows' own compatibility mitigations for the
original build.
The upside, though, is that everything I've built so far still works with
the original 8-bit and 16-bit graphics modes. And with just one compiler flag to disable
any modern x86 instructions, my build can still run on i586/P5 Pentium
CPUs, and only requires KernelEx and its latest
Kstub822 patches to run on Windows 98. And, surprisingly, my core
audience does appreciate this fact. Thus, I will include an i586 build
in all of my upcoming Shuusou Gyoku releases from now on. Once this codebase
can compile into a 64-bit binary (which will obviously be required for a
native Linux build), the i586 build will remain the only 32-bit Windows
build I'll include in my releases.
So, what was DirectDraw? In the shortest way that still describes it
accurately from the point of view of a developer: "A hardware acceleration
layer over Ye Olde Win32 GDI, providing double-buffering and fast blitting
of rectangles." There's the primary double-buffered framebuffer
surface, the offscreen surfaces that you create (which are
comparable to what 3D rendering APIs would call textures), and you
can blit rectangular regions between the two. That's it. Except for
double-buffering, DirectDraw offers no feature that GDI wouldn't also
support, while not covering some of GDI's more complex features. I mean,
DirectDraw can blit rectangles only? How
lame.
However, DirectDraw's relative lack of features is not as much of a problem
as it might appear at first. The reason for that lies in what I consider to
be DirectDraw's actual killer feature: compatibility with GDI's device
context (DC) abstraction. By acquiring a DC for a DirectDraw surface,
you can use all existing GDI functions to draw onto the surface, and, in
general, it will all just work. 😮 Most notably, you can use GDI's blitting
functions (i.e., BitBlt() and friends) to transfer pixel data
from a GDI HBITMAP in system memory onto a DirectDraw surface
in video memory, which is the easiest and most straightforward way to, well,
get sprite data onto a DirectDraw surface in the first place.
In theory, you could do that without ever touching GDI by locking the
surface memory and writing the raw bytes yourself. But in practice, you
probably won't, because your game has to run under multiple bit depths and
your data files typically only store one copy of all your sprites in a
single bit depth. And the necessary conversion and palette color matching…
is a mere implementation detail of GDI's blitting functions, using a
supposedly optimized code path for every permutation of source and
destination bit depths.
All in all, DirectDraw doesn't look too bad so far, does it? Fast blitting,
and you can still use the full wealth of GDI functions whenever needed… at
the small cost of potentially losing your surface memory at any time. 🙄
Yup, if a DirectDraw game runs in true resolution-changing fullscreen mode
and you switch to the Windows desktop, all your surface memory is freed and
you have to manually restore it once the game regains focus, followed by
manually copying all intended bitmap data back onto all surfaces. DirectDraw
is where this concept of surface loss originated, which later carried over
to the early versions of Direct3D and, infamously,
Direct2D as well.
Looking at it from the point of view of the mid-90s, it does make sense to
let the application handle trashed video memory if that's an unfortunate
reality that your graphics API implementation has to deal with. You don't
want to retain a second copy of each surface in a less volatile part of
memory because you didn't have that much of it. Instead, the application can
now choose the most appropriate way to restore each individual surface. For
procedurally generated surfaces, it could just re-run the generating code,
whereas all the fixed sprite sheets could be reloaded from disk.
In practice though, this well-intentioned freedom turns into a huge pain.
Suddenly, it's no longer enough to load every sprite sheet once before it's
needed, blit its pixel data onto the DirectDraw surface, and forget about
it. Now, the renderer must also be able to refresh the pixel data of every
surface from within itself whenever any of DirectDraw's blitting
functions fails with a DDERR_SURFACELOST error. This fact alone
is enough to push your renderer interface towards central management and
allocation of surfaces. You could maybe avoid the conceptual
SurfaceManager by bundling each surface with a regeneration
callback, but why should you? Any other graphics API would work with
straight-line procedural load-and-forget initialization code, so why slice
that code into little parts just because of some DirectDraw quirk?
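In practice, every blit in the renderer ends up wrapped in a pattern like this – a hedged sketch; the central [surfaces] manager and its Refresh() method are my invention for this example:
HRESULT hr = dst->Blt(&dst_rect, src, &src_rect, DDBLT_WAIT, nullptr);
if (hr == DDERR_SURFACELOST) {
    dst->Restore();     // reallocates the surface memory...
    src->Restore();
    surfaces.Refresh(); // ...but re-blitting all pixel data is up to us
    hr = dst->Blt(&dst_rect, src, &src_rect, DDBLT_WAIT, nullptr);
}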
So if your surfaces can get trashed at any time, and you already use
GDI to copy them from system memory to DirectDraw-managed video memory,
and your game features at least one procedurally generated surface…
you might as well retain every currently loaded surface in the form of an
additional GDI device-independent bitmap. 🤷 In fact, that's even better
than what Shuusou Gyoku did originally: For all .BMP-sourced surfaces, it
only kept a buffer of the entire decompressed .BMP file data, which means
that it had to recreate said intermediate GDI bitmap every time it needed to
restore a surface. The in-game music title was originally restored
via regeneration callback that re-rendered the intended title directly onto
the DirectDraw surface, but this was handled by an additional "restore hook"
system that remained unused for anything else.
Anything more involved would be a micro-optimization, especially since the
goal is to get away from DirectDraw here. Not much point in "neatly"
reloading sprite surfaces from disk if the total size of all loaded sprite
sheets barely exceeds the 1 MiB mark. Also, keeping these GDI DIBs loaded
and initialized does speed up getting back into the game… in theory,
at least. After all, the game still runs in fullscreen mode, and resolution
switching already takes longer on modern flat-panel displays than any
surface restoration method we could come up with.
So that was all pretty annoying. But once we start rendering in 8-bit mode,
it gets even worse as we suddenly have to bother with palette management.
Similar to PC-98 Touhou, Shuusou Gyoku
uses way too many different palettes. In fact, it creates
a separate DirectDraw palette to retain the palette embedded into every
loaded .BMP file, and simply sets the palette of the primary surface and the
backbuffer to the one it loaded last. Like, why would you retain
per-surface palettes, and what effect does this even have? What even happens
when you blit between two DirectDraw surfaces that have different palettes?
Might this be the cause of the discolored in-game music title when playing
under DxWnd? 😵 But if we try throwing out those extra palettes, it
only takes until Stage 3 for us to be greeted with… the infamous golf
course:
As you might have guessed, these exact colors come from Gates' face sprite,
whose palette apparently doesn't match the sprite sheets used in Stage 3.
Turns out that 256 colors are not enough for what Shuusou Gyoku would like
to use across the entire stage. In sprite loading order:
Sprite sheet             GRAPH.DAT file   Additional unique colors   Total unique colors
General system sprites   #0               +96                        96
Stage 3 enemies          #3               +42                        138
Stage 3 map tiles        #9               +40                        178
Wide Shot bomb cut-in    #26              +3                         181
VIVIT's faceset          #13              +40                        221
Unknown face             #14              +35                        256
Gates' faceset           #17              +40                        296
And that's why Shuusou Gyoku does not only have to retain these palettes,
but also contains stage
script commands (!) to switch the current palette back to either the map
or enemy one, after the dialog system enforced the face palette.
But the worst aspects about palettes rear their ugly head at the boundary
between GDI and DirectDraw, when GDI adds its own palettes into the mix.
None of the following points are clearly documented in either ancient or
current MSDN, forcing each new DirectDraw developer to figure them out on
their own:
When calling IDirectDraw::CreateSurface() in 8-bit mode,
DirectDraw automatically sets up the newly created surface with a reference
(not a copy!) to the palette that's currently assigned to the primary
surface.
When locking an 8-bit surface for GDI blitting via
IDirectDrawSurface::GetDC(), DirectDraw is supposed to set the
GDI palette of the returned DC to the current palette of the DirectDraw…
primary surface?! Not the surface you're actually calling
GetDC() on?!
Interestingly, it took until March of this year for DxWnd to discover a
different game that relied on this detail, while DDrawCompat had
implemented it for years. DxWnd version 2.05.95 then introduced the
DirectX(2) → Fix DC palette tweak, and it's this option that would
fix the colors of the in-game music title on any Shuusou Gyoku build older
than P0251.
Make sure to never BitBlt() from a 24-bit RGB GDI
image to a palettized 8-bit DirectDraw offscreen surface. You might be
tempted to just go 24-bit because there's no palette to worry about and you
can retain a single GDI image for every supported bit depth, but the
resulting palette mapping glitches will be much worse than if you just
stayed in 8-bit. If you want to procedurally generate a GDI bitmap for a
DirectDraw surface, for example if you need to render text, just create
a bitmap that's compatible with the DC of DirectDraw's primary or
backbuffer surface. Doing that magically removes all palette woes, and
CreateCompatibleBitmap() is much easier to call anyway.
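To illustrate that last point, here's a hedged sketch of the pattern – back stands in for the backbuffer's IDirectDrawSurface pointer, the text and sizes are made up, and error handling is omitted:

    HDC surface_dc;
    if (SUCCEEDED(back->GetDC(&surface_dc))) {
        // Created from the surface's DC, this bitmap automatically matches
        // the surface's bit depth and palette.
        HBITMAP bmp = CreateCompatibleBitmap(surface_dc, 256, 16);
        HDC bmp_dc = CreateCompatibleDC(surface_dc);
        HGDIOBJ prev = SelectObject(bmp_dc, bmp);

        TextOutW(bmp_dc, 0, 0, L"Palette-safe text", 17);

        // 8-bit → 8-bit, so GDI can't mess up any color mapping.
        BitBlt(surface_dc, 0, 0, 256, 16, bmp_dc, 0, 0, SRCCOPY);

        SelectObject(bmp_dc, prev);
        DeleteDC(bmp_dc);
        DeleteObject(bmp);
        back->ReleaseDC(surface_dc);
    }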
Ultimately, all of this is why Shuusou Gyoku's original DirectDraw backend
looks the way it does. It might seem redundant and inefficient in places,
but pbg did in fact discover the only way where all the undocumented GDI and
DirectDraw color mapping internals come together to make the game look as
intended. 🧑‍🔬
And what else are you going to do if you want to target old hardware? My
PC-9821Nw133, for example, can only run the original Shuusou Gyoku in 8-bit
mode. For a Windows game on such old hardware, 8-bit DirectDraw looks like
the only viable option. You certainly don't want to use GDI alone, because
that's probably slow and you'd have to worry about even more palette-related
issues. Although people have reported that Shuusou Gyoku does actually
run faster on their old Windows 9x machine if they disable DirectDraw
acceleration…?
In that case, it might be worth a try to write a completely new 8-bit
software renderer, employing the same retained VRAM techniques that the
PC-98 Touhou games used to implement their scrolling playfields with a
minimum of redraws. The hardware scrolling feature of the PC-98 GDC would
then be replicated by blitting the playfield in two halves every frame. I
wonder how fast that would be…
Or you go straight back to DOS, and bring your own font renderer and
MIDI/PCM sound driver.
So why did we have to learn about all this? Well, if GDI functions can
directly render onto any kind of DirectDraw surface, this also includes text
rendering functions like TextOut() and DrawText().
If you're really lazy, you can even render your text directly onto
the DirectDraw backbuffer, which probably re-rasterizes all glyphs
every frame!
Which, you guessed it, is exactly how Shuusou Gyoku renders most of its
text. 🐷 Granted, it's not too bad with MS Gothic thanks to its embedded
bitmaps for font
heights between 7 and 22 inclusive, which replace the usual Bézier curve
rasterization for TrueType fonts with a rather quick bitmap lookup. However,
it would not only become a hypothetical problem if future translations end
up choosing more complex fonts without embedded bitmaps, but also as soon as
we port the game to other systems. Nobody in their right mind would
integrate a cross-platform font renderer directly with a 3D graphics API… right?
Instead, let's refactor the game to render all its existing text to and from
a bitmap,
extending the way the in-game music title is rendered to the rest of the
game. Conceptually, this is also how the Windows Touhou games have always
rendered their text. Since they've always used Direct3D, they've always had
to blit GDI's output onto a texture. Through the definitions in
text.anm, this fixed-size texture is then turned into a sprite
sheet, allowing every rendered line of text to be individually placed on the
screen and animated.
However, the static nature of both the sprite sheet and the texture caused
its fair share of problems for thcrap's translation support. Some of the
sprites, particularly the ones for spell card titles, don't originally take
up the entire width of the playfield, cutting off translations long before
they reach the left edge. Consequently, thcrap's base patch
for the Windows Touhou games has to resize the respective sprites to
make translators happy. Before I added .ANM header
patching in late 2018, this had to be done through a complete modified
copy of text.anm for every game – with possibly additional
variants if ZUN changed the layout of this file between game versions. Not
to mention that it's bound to be quite annoying to manually allocate a
rectangle for every line of text we want to show. After all, I have at least
two text-heavy future features in mind already…
So let's not do exactly that. Since DirectDraw wants us to manage all
surfaces in a central place, we keep the idea of using a single surface for
all text. But instead of predefining anything about the surface layout, we
fully build up the surface at runtime based on whatever rectangles we need,
using a rectangle
packing algorithm… yup, I wouldn't have expected to enter such territory
either. For now, we still hardcode a fixed size that each piece of text is
allowed to maximally take up. But once we get translations, nothing is
stopping us from dynamically extending this size to fit even longer strings,
and fitting them onto the fixed screen space via smooth scrolling.
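As a sketch of the general idea (though explicitly not the algorithm linked above), the simplest form of such packing fills fixed rows, or "shelves", from left to right:

    #include <algorithm>

    struct Shelf {
        int x = 0; // next free X position within the current shelf
        int y = 0; // top Y coordinate of the current shelf
        int h = 0; // height of the tallest rectangle on the current shelf
    };
    struct PixelPoint { int x, y; };

    // Returns the position of a newly packed (w × h) rectangle.
    PixelPoint Pack(Shelf& shelf, int w, int h, int surface_w)
    {
        if ((shelf.x + w) > surface_w) {
            // Shelf full → open a new one below the current one
            shelf.y += shelf.h;
            shelf.x = 0;
            shelf.h = 0;
        }
        const PixelPoint ret = { shelf.x, shelf.y };
        shelf.x += w;
        shelf.h = std::max(shelf.h, h);
        return ret;
    }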
To prevent the surface from arbitrarily growing as the game wants to render
more and more text, we also reset all allocated rectangles whenever the game
state changes. In turn, this will also recreate the text surface to match
the new bounding box of all rectangles before the first prerendering call
with the new layout. And if you remember the first bullet point about
DirectDraw palettes in 8-bit mode, this also means that the text surface
automatically receives the current palette of the primary surface, giving
us correct colors even without requiring DxWnd's DC palette tweak. 🎨
In fact, the need to dynamically create surfaces at custom sizes was the
main reason why I had to look into DirectDraw surface management to begin
with. The original game created
all of its surfaces at once, at startup or after changing the bit depth
in the main menu, which was a bad idea for many reasons:
It hardcoded and limited the size of all sprite sheets,
added another rendering-API-specific function that game code should not
need to worry about,
introduced surface IDs that have to be synchronized with the
surface pointers used throughout the rest of the game,
and was the main reason why the game had to distribute the six 320×240
ending pictures across two of the fixed 640×480 surfaces, which ended up
causing the sprite reload
bug in the ending. As implied in the issue, this was a DirectDraw bug
that pretty much had to fix itself before I could port the game to OpenGL,
and was the only bug where this was the case. Check the issue comments for
more details about this specific bug.
In the end, we get four different layouts for the text surface: One for the
main menu, the Music Room, the in-game portion, and the ending. With,
perhaps surprisingly, not too much text on either of them:
Yes, the ending uses just a single rectangle that takes up the entire screen
space below the pictures and credits.
For the menus, the resulting packed layout reveals how I'm assigning a
separately cached rectangle to each possible option – otherwise, they
couldn't be arranged vertically on screen with this bitmap layout. Right
now, I'm only storing all text for the current menu level, which requires
text to be rendered again when entering or leaving submenus. However, I'm
allocating as many rectangles as required for the submenu with the most
amount of items to at least prevent the single text surface from being
resized while navigating through the menu. As a side effect, this is also
why you can see multiple Exit labels: These simply come from
other submenus with more elements than the currently visited Sound /
Music one.
Still, we're re-rasterizing whole lines of text exactly as they appear on
screen, and are even doing so multiple times to apply any drop shadows.
Isn't that exactly what every text rendering tutorial nowadays advises
against doing? Why not directly go for the classic solution to this problem
and render using a font texture
atlas? Well…
Most of the game text is still in Japanese. If we were to build a font
atlas in advance, we'd have to add a separate build step that collects all
needed codepoints by parsing all text the game would ever print, adding a
build-time dependency on the original game's copyrighted data files. We'd
also have to move all hardcoded strings to a separate file since we surely
don't want to parse C++ manually during said build step. Theoretically, we
would then also give up the idea of modding text at run-time without
re-running that build step, since we'd restrict all text to the glyphs we've
rasterized in the atlas… yeah, that's more than enough reasons for static
atlas generation to be a non-starter.
OK, then let's build the atlas dynamically, adding new glyphs as we
encounter them. Since this game is old, we can even be a bit lazy as far as
the packing is concerned, and don't have to get as fancy as the GIF in the
link above. Just assume a fixed height for each glyph, and fill the atlas
from left to right. We can even clear it periodically to keep it from
getting too big, like before entering the Music Room, the in-game portion,
or the ending, or after switching languages once we have translations.
Should work, right?
Except that most text in Shuusou Gyoku comes with a shadow, realized by
first drawing the same string in a darker color and displaced by a few
pixels. With a 3D renderer, none of this would be an issue because we can
define vertex colors. But we're still using DirectDraw, which has no way of
applying any sort of color formula – again, all it can do is take a
rectangle and blit it somewhere else. So we can't just keep one atlas with
white glyphs and let the renderer recolor it. Extending Shuusou Gyoku's
Direct3D code with support for textured quads is also out of the question
because then we wouldn't have any text in the Direct3D-less 8-bit mode. So
what do we do instead? Throw the atlas away on every color change? Keep
multiple atlases for every color we've seen so far? Turn shadows into a
high-level concept? Outright forgetting the idea seems to be the best choice
here…
For a rather square language like Japanese where one Shift-JIS codepoint
always corresponds to one glyph, a texture atlas can work fine and without
too much effort. But once we support languages with more complex ligatures,
we suddenly need to get a shaping
engine from somewhere, and directly interact with it from our rendering
code. This necessarily involves changing APIs and maybe even bundling the
first cross-platform libraries, which I wanted to avoid in an already packed
and long overdue delivery such as this one. If we continue to render
line-by-line, translations would only need a line break algorithm.
Most importantly though: It's not going to matter anyway. The
game ran fine on early 2000s hardware even though it called
TextOut() every frame, and any approach that caches the result
of this call is going to be faster.
While the Music Room and the ending can be easily migrated to a prerendering
system, it's much harder for the main menu. Technically, all option
strings of the currently active submenu are rewritten every frame, even
though that would only be necessary for the scrolling MIDI device name in
the Sound / Music submenu. And since all this rewriting is done
via a classic sprintf() on fixed-size char
buffers, we'd have to deploy our own change detection before prerendering
can have any performance difference.
In essence, we'd be shifting the text rendering paradigm from the original
immediate approach to a more retained one. If you've ever used any of the
hot new immediate-mode GUI or web frameworks that have become popular over
the last 10 years, your alarm bells are probably already ringing by now.
Adding retained elements is always a step back in terms of code quality, as
it increases complexity by storing UI state in a second place.
Wouldn't it be better if we could just stay with the original immediate
approach then? Absolutely, and we only need a simple cache system to get
there. By remembering the string that was last rendered to every registered
rectangle, the text renderer can offer an immediate API that combines the
distinct Prerender() and Blit() steps into a
single Render() call. There still has to be an initialization
point that registers all rectangles for each game state (which,
surprisingly, was not present for the in-game portion in the original code),
but the rendering code remains architecturally unchanged in how we call the
text renderer every frame. As long as the text doesn't change, the text
renderer just blits whatever it previously rendered to the respective
rectangle. With an API like this, the whole pre-rendering part turns into a
mere implementation detail.
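In code, such a cache can be as simple as remembering one string per registered rectangle. A sketch with made-up names, not the actual interface:

    #include <string>

    struct PixelPoint { int x, y; };
    struct PixelRect { int x, y, w, h; };

    void Prerender(const PixelRect& rect, const std::string& str); // GDI rasterization
    void Blit(PixelPoint topleft, const PixelRect& rect); // surface-to-surface blit

    struct TextRect {
        PixelRect rect;   // allocated rectangle within the shared text surface
        std::string last; // string that was last prerendered into [rect]
    };

    // The entire immediate-style API: rasterize only on change, blit always.
    void Render(TextRect& tr, const std::string& str, PixelPoint topleft)
    {
        if (str != tr.last) {
            Prerender(tr.rect, str);
            tr.last = str;
        }
        Blit(topleft, tr.rect);
    }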
So, how much faster is the result? Since I can only measure non-VSynced
performance in a quite rudimentary way using DxWnd's FPS counter, it highly
depends on the selected renderer. Weirdly enough, even just switching font
creation to the Unicode APIs tripled the FPS inside the Music Room
when rendering with OpenGL? That said, the primary surface renderer
seems to yield the most realistic numbers, as we still stay entirely within
DirectDraw and perform no API wrapping. Using this renderer, I get speedups
of roughly:
~3.5× in the Music Room,
~1.9× during in-game dialog, and
~1.5× in the main menu.
Not bad for something I had to do anyway to port the game away from
DirectDraw! Shuusou Gyoku is rather infamous among the vintage computer
scene for being ridiculously unoptimized, so I should definitely be able to
get some performance gains out of the in-game portion as well.
For a final test of all the new blitting code, I also tried running
outside DxWnd to verify everything against real and unpatched
DirectDraw. Amusingly, this revealed how blitting from the new text surface
seems to reach the color mapping limits of the DWM mitigation in 8-bit mode:
For some reason, my system maps the intended #FFFFFF text
color to #E4E3BB in the main menu?
8-bit mode did render correctly when I ran the same build in a Windows 98
VirtualBox on the same system though, so it's not worth looking into a mode
that the system reports as unsupported to begin with. Let's leave this as
somewhat of a visual reminder for players to select 32-bit mode instead.
Alright, enough about the annoying parts of GDI and DirectDraw for now.
Let's stop looking back and start looking forward, to a time within this
Seihou revolution when we're going to have lots of new options in the main
menu. Due to the nature of delivering individual pushes, we can expect lots
of revisions to the config file format. Therefore, we'd like to have a
backward-compatible system that allows players to upgrade from any older
build, including the original 秋霜玉.exe, to a newer one. The
original game predominantly used single-byte values for all its options, but
we'd like our system to work with variables of any size, including strings
to store things like the
name of the selected MIDI device in a more robust way. Also, it's pure
evil to reset the entire configuration just because someone tried to
hex-edit the config file and didn't keep the checksum in mind.
It didn't take long for me to arrive at a common
Size()/Read()/Write() interface. By
using the same interface for both arrays and individual values, new config
file versions can naturally expand older ones by taking the array of option
references from the previous version and wrapping it into a new array,
together with the new options.
The classic way of implementing this in C++ involves a typical
object-oriented class hierarchy: An Option base class would
define the interface in the form of virtual abstract functions, and the
Value, Array, and ConfigVersion
subclasses would provide different implementations. This works, but
introduces quite a bit of boilerplate, not to mention the runtime bloat from
all the virtual functions which Visual C++ can't inline. Why should we do
any runtime dispatch here? We know the set of configuration options
at compile time, after all…
Let's try looking into the modern C++ toolbox and see if we can do better.
The only real challenge here is that the array type has to support
arbitrarily sized option value types, which sounds like a job for
template parameter packs. If we save these into a
std::tuple, we can then "iterate" over all options with std::apply
and fold
expressions, in a nice functional style.
I was amazed by just how clearly the "crazy" modern C++ approach with
template parameter packs, std::apply() over giant
std::tuples, and fold expressions beats a classic polymorphic
hierarchy of abstract virtual functions. With the interface moved into an
even optional concept, the class hierarchy can be completely
flattened, which surprisingly also makes the code easier to both read and
write.
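Here's a conceptual sketch of that technique, with my own simplified names rather than the actual ReC98 code. Any type with Size() and Write() methods can act as an option – including ConfigVersion itself, which is exactly how a newer version can wrap the previous one:

    #include <cstddef>
    #include <cstdio>
    #include <tuple>

    template <typename... Options> class ConfigVersion {
        std::tuple<Options&...> options;

    public:
        constexpr ConfigVersion(Options&... opts) : options(opts...) {}

        std::size_t Size() const {
            // Fold expression: sum of all option sizes
            return std::apply([](const auto&... opt) {
                return (opt.Size() + ... + std::size_t{ 0 });
            }, options);
        }

        bool Write(std::FILE* fp) const {
            // Short-circuits on the first failed write
            return std::apply([fp](const auto&... opt) {
                return (opt.Write(fp) && ...);
            }, options);
        }
    };

A hypothetical version 1 would then simply wrap its predecessor together with any new options – ConfigVersion v1{ v0, new_option }; – and get correct Size() and Write() implementations for free, without a single virtual function.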
Here's how the new system works from the player's point of view:
The config files now use a kanji-less and explicitly forward-compatible
naming scheme, starting with SSG_V00.CFG in the P0251 build.
The format of this initial version simply includes all values from the
original 秋霜CFG.DAT without padding bytes or a checksum. Once
we release a new build that adds new config options, we go up to
SSG_V01.CFG, and so on.
When loading, the game starts at its newest supported config file
version. If that file doesn't exist, the game retries with each older
version in succession until it reaches the last file in the chain, which is
always the original 秋霜CFG.DAT. This makes it possible to
upgrade from any older Shuusou Gyoku build to a newer one while retaining
all your settings – including, most importantly, which shot types you
unlocked the Extra Stage with. The newly introduced settings will simply
remain at their initial default in this case, as sketched below this list.
When saving, the game always writes all versions it knows about,
down to and including the original 秋霜CFG.DAT, in the
respective version-specific format. This means that you can change options
in a newer build and they'll show up changed in older builds as well if they
were supported there.
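The loading part of this scheme boils down to a single fallback loop. A sketch with assumed names, where SSG_V01.CFG stands in for the hypothetical next version:

    // LoadConfigFile() is a placeholder for the per-version reader.
    bool LoadNewestConfig()
    {
        // Newest format first, ending with the original game's file.
        static const char *const CHAIN[] = {
            "SSG_V01.CFG", // hypothetical future version
            "SSG_V00.CFG",
            "秋霜CFG.DAT",
        };
        for (const auto* file : CHAIN) {
            if (LoadConfigFile(file)) { // includes per-value validation
                return true; // any newer options keep their defaults
            }
        }
        return false; // no config found → full defaults
    }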
And yes, this also means that we can stop writing the unsupported 32-bit bit
depth setting to 秋霜CFG.DAT, which would cause a validation
failure on the original build. This is now avoided by simply turning 32-bit
into 16-bit just for the configuration that gets saved to this file. And
speaking of validation failures…
This per-value validation is also done if my builds loaded the
original 秋霜CFG.DAT. The checksum is still written for
compatibility with the original build, but my builds ignore it.
With that, we've got more than enough code for a new build:
This build also contains two more fixes that didn't fit into the big
DirectDraw or configuration categories:
The P0226 build had a bug that allowed invalid stages to be selected for
replay recording. If the ReplaySave option was
[O F F], pressing the ⬅️ left arrow key on the
StageSelect
option would overflow its value to 255. The effects of this weren't all too
serious: The game would simply stay on the Weapon Select screen for an
invalid stage number, or launch into the Extra Stage if you scrolled all the
way to 131. Still, it's fixed in this build. Whoops! That one was fully my fault.
The render time for the in-game music title is now roughly cut in half:
Achieved by simply trimming trailing whitespace and using slightly more
efficient GDI functions to draw the gradient. Spending 4 frames on
rendering a gradient is still way too much though. I'll optimize that
further once I actually get to port this effect away from GDI.
These videos also show off how DxWnd's DC palette bug affected the
original game, and how it doesn't affect the P0251 build.
These 6 pushes still left several of Shuusou Gyoku's DirectDraw portability
issues unsolved, but I'd better look at them once I've set up a basic OpenGL
skeleton to avoid any more premature abstraction. Since the ultimate goal is
a Linux port, I might as well already start looking at the current best
platform layer libraries. SDL would be the standard choice here, and while
SDL_ttf looks regrettably misdesigned, the core SDL library seems to cover
all we could possibly want for Shuusou Gyoku, including a 2D renderer… wait,
what?!
Yup. Admittedly, I've been living under a rock as far as SDL is concerned,
and thus wasn't aware that SDL 2 introduced its own abstraction for 2D
rendering that just happens to almost exactly cover everything we need
for Shuusou Gyoku. This API even covers all of the game's Direct3D code,
which only draws alpha-blended, untextured, and pre-transformed
vertex-colored triangles and lines. It's the exact abstraction over OpenGL I
thought I had to write myself, and such a perfect match for this game that
it would be foolish to go for a custom OpenGL backend – especially since SDL
will automatically target the ideal graphics API for any given operating
system.
Sadly, the one thing SDL_Renderer is missing is something equivalent to
pixel shaders, which we would need to replicate the 西方Project lens ball effect shown at startup. Looks like we have
to drop into a
completely separate, unaccelerated rendering mode and continue to
software-render this one effect before switching to hardware-accelerated
rendering for the rest of the game. But at least we can do that in a
cross-platform way, and don't have to bother with shading languages –
or, perhaps even worse, SDL's own shading
language.
If we were extremely pedantic, we'd also have to do the same for the
📝 unused spiral effect that was originally intended for the staff roll.
Software rendering would be even more annoying there, since we don't
just have to software-render these staff sprites, but also the ending
picture and text, complete with their respective fade effects. And while I
typically do go the extra mile to preserve whatever code was present in
these games, keeping this effect would just needlessly drive up the
cost of the SDL backend. Let's just move this one to the museum of unused
code and no longer actively compile it. RIP spiral 🥲 At least you're
still preserved in lossless video form.
Now that SDL has become an integral part of Shuusou Gyoku's portability plan
rather than just being one potential platform layer among many, the optimal
order of tasks has slightly changed. If we stayed within the raw Win32 API
any longer than absolutely necessary, we'd only risk writing more
Win32-native code for things like audio streaming that we'd
then have to throw away and rewrite in SDL later. Next up, therefore:
Staying with Shuusou Gyoku, but continuing in a much more focused manner by
fixing the input system and starting the SDL migration with input and sound.
Yet another small interruption before we get to Shuusou Gyoku, but only
because I've got a big announcement to make! Touhou Patch Center has just
commissioned the basic feature set that would allow PC-98 Touhou to be
translated into non-ASCII languages. 💰 And we're in fact doing it on PC-98,
not waiting for the games to be ported to other systems first.
How is this going to work?
This project will start sometime after I've completed the current big
project of porting Shuusou
Gyoku to Linux, so probably during the summer of 2024. Similar to
the previous MediaWiki update, this will bypass the ReC98 push and cap
model: Touhou Patch Center is going to guarantee a minimum budget out of
their Open Collective funds, which can be increased with further donations
from the community, and I'm going to send an invoice once I'm done. In
addition, I'm also going to keep in contact with all interested translators
and backers via a Discord room throughout the process for additional
technical quality control. Edit (2024-04-11): Over the last few months, I've focused all unconstrained RE funding on increasing the amount of moddable text-related code. As a result, the translation project could now cover the majority of text in PC-98 Touhou, including:
With still a bit of time left until the Shuusou Gyoku Linux port is done,
I'll put any general and unconstrained reverse-engineering,
position independence, or anything contributions that come in during
the next few months towards covering everything that's still missing there:
TH04's and TH05's MAINE.EXE contains some not
yet RE'd text in their verdict screens. 1.5 pushes there, since it's
unfortunately contained in the same function that also performs the highly
complex skill value calculation.
TH05's Extra Stage ending is followed by an All Cast screen listing the characters of all 5 games, to the tune of Peaceful Romancer. Shouldn't take longer than 0.5 pushes.
TH03's MAINL.EXE needs 100% PI to enable convenient
translations of the win messages, the character titles and names at the
beginning of a stage, the Stage 8/9 cutscenes, and the endings. Let's go
with 2 pushes there just to be safe, and finalize the missing code to not 📝 incur more technical debt.
Technically, we'd need TH02's MAIN.EXE to be 100%
position-independent for any translation-related code modifications, but
reaching that goal before I get to work on translation support is probably
unrealistic. However, this new translation code needs to work across 13
executables to begin with, so I'm going to put most of it into a
separate TSR program anyway. Including this TSR in a non-PI'd executable
shouldn't be that painful, then.
The same is true for TH03's MAIN.EXE, but the WINNER BONUS popup is the only translatable piece of text there. Should be even less of a problem.
TH04's and TH05's High Score menus contain a single string about scores not being recorded in Slow Mode (スローモードでのプレイでは、スコアは記録されません). Regularly, this means that we'd have to decompile the whole menu, together with TH05's intricate "glyph ball" animation, which would be way too excessive just for this one string. If the Shuusou Gyoku Linux port gets done sooner than this gets decompiled, I'll figure something out.
In total, that's the next 4 general pushes that will go towards ensuring
translatability of most of PC-98 Touhou. If you'd like your
contribution (or existing subscription) to go to gameplay code instead, be sure to tell me!
What's the minimum guaranteed set of features?
The main feature will be a custom renderer for a subsetted, monospaced
Unicode bitmap font, and its integration into any translatable part of the
game. For the script files, this means UTF-8 support with Shift-JIS
fallback. For the glyphs, I'll use GNU Unifont by default, but we
could also use any other freely licensed bitmap font with 8×16 or 16×16
glyphs for alphabets of certain languages. Everything about this will be the
real deal: The system will potentially support all of Unicode without font
ROM hacks so that the translations will work on real hardware, and there
will be no shortcuts for just a few Latin characters. And if someone wants
to translate this game into a language with more complex
shaping rules, I'll make sure that they look pretty as well if there's
some budget left.
This will allow translation teams to build static translation patches into
any language by editing the original script files, and using
-Tom-'s existing
tools for any images. Modifications of hardcoded strings would still
require recompiling the binary, and each group would have to distribute and
advertise the result on their own.
Which languages are we getting?
As of 2023-10-10, the following translators and teams have expressed
interest:
Spanish, Latin American: Xziled, DarkeyeSide, Mr. Tremolo Measure
Vietnamese: Shinka
Wait, Arabic?! On my PC-98?! What's the plan there?
The two challenges with Arabic scripts are transforming
a text to use the codepoints for contextual glyph forms
(shaping), and right-to-left rendering. Shaping requires not too
much code, which is easily added to the font subsetting build step.
Right-to-left rendering, on the other hand, must be a feature of the new
PC-98-native text renderer, because there are several places in PC-98 Touhou
where text is gradually typed character-by-character. So it will require a
bit of dedicated budget, but not all too much from what I can tell.
Bidirectional text would add a great deal of complexity here, but we most
likely won't need to implement it – I'll simply pick a direction based
on the
first codepoint on a line, and ask translators to manually reverse
any Latin-script runs of text in the middle of an Arabic-script line.
How much better could it all be?
The most important feature: We could finally move away from the concept
of translation patches, integrate all translations as part of the
ReC98 repo, and ship them directly as part of new ReC98 builds. Languages
could then be switched at runtime, through a new setting in the Option
menu.
And why stop there? How about binding a keyboard key to a new
language selection window that can be opened at any point during the
game, and even switches out any text that is currently shown on
screen?
I could finally translate a
canon Touhou game via a gettext-like dictionary
system. This would allow modded source text to override translations, and
even make it possible to translate mods as well.
Ideally, all translators I get to work with are highly motivated and
finish translating each game they start, so that we don't even have to think
about translation
stacking, but maybe we still should.
We could use proportional fonts instead of aligning every glyph to the
8×16 text RAM grid. Unlike the Windows Touhou games where proportional fonts
are crucial because adding more text space would desync replays, they are
not that important in PC-98 Touhou. With no replays to be desynced,
we can arbitrarily add new boxes without worrying about the font.
However, supporting proportional fonts would make it possible to
lift some of the text sprites into the custom font system, allowing
their glyphs to be shared more easily across languages:
Some of the image text from TH03's, TH04's, and TH05's main menus.
Turning these sprites into text so that translators won't have to
manually shift pixels around may or may not be worth it.
On the topic of new text boxes: Automatic line and box breaks at word
boundaries would completely remove the need for in-game proofreading.
In-game TL notes… nah, probably not. Where would we even put them in the
original screen layout?
Due to the continued interest in TH01's Anniversary Edition, any code
modifications would be exclusive to a respective game's bugfixed
anniversary branch – i.e., any translated builds will be
bundled with a growing number of fixes for issues in the original games that
fall under my
current definition of bugs. This avoids a combinatorial explosion
of the number of branches, merges, and releases I'd have to do. For a small
amount of extra money though, I could merge them back to the ZUN
bug-preserving debloated branch. And for a lot of extra
money, I could reimplement everything on master while
preserving the original memory layout of ZUN's original binaries. This would
allow the translation-supporting binaries to be easily diffed against the
original ones, and retain compatibility with existing hacks or cheat tables.
The latter was already something that
📝 the Shuusou Gyoku community previously expected my recompiled builds to have.
I could top off the project with some smaller, more intricate
localizations that translators might request for certain languages. Most
notably, this category would include any localization of TH01's
東方★靈異伝,
STAGE #, and
HARRY UP popups that goes beyond just
fixing the Engrish.
The previous static English patches from 2014 introduced quite a few
fanfiction changes that have been interpreted as canon in the years since. I
could write a blog post to highlight these, and also compare the translation
as a whole with the more literal English translation we're likely to get
this time around.
Finally, if you all really want to, I could move all translatable
content to the Touhou Patch Center interface, which would truly turn that
site into the one central translation source for all canon Touhou games.
Automatic updates won't be feasible before porting away the games from PC-98
hardware, so the thpatch server would have to communicate with the ReC98
repo via a GitHub webhook. This will be rather expensive though, as I'd also
have to set up some kind of build/release CI for ReC98 first.
These features are mostly independent of each other, and it will be up to
Touhou Patch Center to pick a priority order. That's also where all of you
could come in and influence this order with your donations. So it's closer
to a traditional crowdfunding campaign with stretch goals, where the sky is
the limit, than it is to the usual ReC98 model. And while there can be no
fixed prices for any of the goals, you can be sure that anything you invest
will improve the quality of the final product.
From now on, this will be the only way of funding any translation-related
goals; I've removed the respective options from the ReC98 order form.
Looking forward to how many of these additional ideas I get to implement –
but, as always, please invest responsibly.
P0245
TH04/TH05 finalization (Sprite clipping + gather circles + boss explosions) + TH01 Anniversary Edition (Lines, part 1/?)
💰 Funded by:
Blue Bolt, Ember2528, [Anonymous]
And then, the supposed boilerplate code revealed yet another confusing issue
that quickly forced me back to serial work, leading to no parallel progress
made with Shuusou Gyoku after all. 🥲 The list of functions I put together
for the first ½ of this push seemed so boring at first, and I was so sure
that there was almost nothing I could possibly talk about:
TH02's gaiji animations at the start and end of each stage, resembling
opening and closing window blind slats. ZUN should have maybe not defined
the regular whitespace gaiji as what's technically the last frame of the
closing animation, but that's a minor nitpick. Nothing special there
otherwise.
The remaining spawn functions for TH04's and TH05's gather circles. The
only dumb antic there is the way ZUN initializes the template for bullets
fired at the end of the animation, featuring ASM instructions that are
equivalent to what Turbo C++ 4.0J generates for the __memcpy__
intrinsic, but show up in a different order. Which means that they must have
been handwritten. I already figured that out in 2022
though, so this was just more of the same.
EX-Alice's override for the game's main 16×16 sprite sheet, loaded
during her dialog script. More of a naming and consistency challenge, if
anything.
The regular version of TH05's big 16×16 sprite sheet.
EX-Alice's variant of TH05's big 16×16 sprite sheet.
The rendering function for TH04's Stage 4 midboss, which seems to
feature the same premature clipping quirk we've seen for
📝 TH05's Stage 5 midboss, 7 months ago?
The rendering function for the big 48×48 explosion sprite, which also
features the same clipping quirk?
That's three instances of ZUN removing sprites way earlier than you'd want
to, intentionally deciding against those sprites flying smoothly in and out
of the playfield. Clearly, there has to be a system and a reason behind it.
Turns out that it can be almost completely blamed on master.lib. None of the
super_*() sprite blitting functions can clip the rendered
sprite to the edges of VRAM, and much less to the custom playfield rectangle
we would actually want here. This is exactly the wrong choice to make for a
game engine: Not only is the game developer now stuck with either rendering
the sprite in full or not at all, but they're also left with the burden of
manually calculating when not to display a sprite.
However, strictly limiting the top-left screen-space coordinate to
(0, 0) and the bottom-right one to (640, 400) would actually
stop rendering some of the sprites much earlier than the clipping conditions
we encounter in these games. So what's going on there?
The answer is a combination of playfield borders, hardware scrolling, and
master.lib needing to provide at least some help to support the
latter. Hardware scrolling on PC-98 works by dividing VRAM into two vertical
partitions along the Y-axis and telling the GDC to display one of them at
the top of the screen and the other one below. The contents of VRAM remain
unmodified throughout, which raises the interesting question of how to deal
with sprites that reach the vertical edges of VRAM. If the top VRAM row that
starts at offset 0x0000 ends up being displayed below
the bottom row of VRAM that starts at offset 0x7CB0 for 399 of
the 400 possible scrolling positions, wouldn't we then need to vertically
wrap most of the rendered sprites?
For this reason, master.lib provides the super_roll_*()
functions, which unconditionally perform exactly this vertical wrapping. But
this creates a new problem: If these functions still can't clip, and don't
even know which VRAM rows currently correspond to the top and bottom row of
the screen (since master.lib's graph_scrollup() function
doesn't retain this information), won't we also see sprites wrapping around
the actual edges of the screen? That's something we certainly
wouldn't want in a vertically scrolling game…
The answer is yes, and master.lib offers no solution for this issue. But
this is where the playfield borders come in, and helpfully cover 16 pixels
at the top and 16 pixels at the bottom of the screen. As a result, they can
hide up to 32 rows of potentially wrapped sprite pixels below them:
The earliest possible frame that TH05 can start rendering the Stage 5
midboss on. Hiding the text layer reveals how master.lib did in fact
"blindly" render the top part of her sprite to the bottom of the
playfield. That's where her sprite starts before it is correctly
wrapped around to the top of VRAM.
If we scrolled VRAM by another 200 pixels (and faked an equally shifted
TRAM for demonstration purposes), we get an equally valid game scene
that points out why a vertically scrolling PC-98 game must wrap all sprites
at the vertical edges of VRAM to begin with.
Also, note how the HP bar has filled up quite a bit before the midboss can
actually appear on screen.
And that's how the lowest possible top Y coordinate for sprites blitted
using the master.lib super_roll_*() functions during the
scrolling portions of TH02, TH04, and TH05 is not 0, but -16. Any lower, and
you would actually see some of the sprite's upper pixels at the
bottom of the playfield, as there are no more opaque black text cells to
cover them. Theoretically, you could lower this number for
some animation frames that start with multiple rows of transparent
pixels, but I thankfully haven't found any instance of ZUN using such a
hack. So far, at least…
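For reference, the wrapping arithmetic itself is simple. A sketch (not master.lib's actual code), assuming the usual 80 bytes per VRAM row:

    // Maps any Y coordinate into the 400-line bitplane that the
    // super_roll_*() functions wrap within.
    int WrappedVRAMRow(int y)
    {
        return (((y % 400) + 400) % 400); // e.g. -16 → 384, 407 → 7
    }

    int VRAMOffset(int y, int x)
    {
        return ((WrappedVRAMRow(y) * 80) + (x / 8)); // 0x0000 … 0x7CFF
    }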
Visualized like that, it all looks quite simple and logical, but for days, I
did not realize that these sprites were rendered to a scrolling VRAM.
This led to a much more complicated initial explanation involving the
invisible extra space of VRAM between offsets 0x7D00 and
0x7FFF that effectively grant a hidden additional 9.6 lines
below the playfield. Or even above, since PC-98 hardware ignores the highest
bit of any offset into a VRAM bitplane segment
(& 0x7FFF), which prevents blitting operations from
accidentally reaching into a different bitplane. Together with the
aforementioned rows of transparent pixels at the top of these midboss
sprites, the math would have almost worked out exactly.
The need for manual clipping also applies to the X-axis. Due to the lack of
scrolling in this dimension, the boundaries there are much more
straightforward though. The minimum left coordinate of a sprite can't fall
below 0 because any smaller coordinate would wrap around into the
📝 tile source area and overwrite some of the
pixels there, which we obviously don't want to re-blit every frame.
Similarly, the right coordinate must not extend into the HUD, which starts
at 448 pixels.
The last part might be surprising if you aren't familiar with the PC-98 text
chip. Contrary to the CGA and VGA text modes of IBM-compatibles, PC-98 text
cells can only use a single color for either their foreground or
background, with the other pixels being transparent and always revealing the
pixels in VRAM below. If you look closely at the HUD in the images above,
you can see how the background of cells with gaiji glyphs is slightly
brighter (◼ #100) than the opaque black
cells (◼ #000) surrounding them. This
rather custom color clearly implies that those pixels must have been
rendered by the graphics GDC. If any other sprite was rendered below the
HUD, you would equally see it below the glyphs.
So in the end, I did find the clear and logical system I was looking for,
and managed to reduce the new clipping conditions down to a
set of basic rules for each edge. Unfortunately, we also need a second
macro for each edge to differentiate between sprites that are smaller or
larger than the playfield border, which is treated as either 32×32 (for
super_roll_*()) or 32×16 (for non-"rolling"
super_*() functions). Since smaller sprites can be fully
contained within this border, the games can stop rendering them as soon as
their bottom-right coordinate is no longer seen within the playfield, by
comparing against the clipping boundaries with <= and
>=. For example, a 16×16 sprite would be completely
invisible once it reaches (16, 0), so it would still be rendered at
(17, 1). A larger sprite during the scrolling part of a stage, like,
say, the 64×64 midbosses, would still be rendered if their top-left
coordinate was (0, -16), so ZUN used < and
> comparisons to at least get an additional pixel before
having to stop rendering such a sprite. Turbo C++ 4.0J sadly can't
constant-fold away such a difference in comparison operators.
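Transcribed into code, my reading of these rules looks as follows. The names and the exact playfield constants are my own, and the games use separate macros per edge:

    // For the super_roll_*() case with its 32×32 border; the playfield's
    // visible top-left corner is at (32, 16) in screen space.

    // 16×16 sprite: culled with <= as soon as its bottom-right corner can
    // no longer be seen inside the playfield → culled at (16, 0), still
    // rendered at (17, 1).
    #define SMALL_CULLED(left, top) \
        ((((left) + 16) <= 32) || (((top) + 16) <= 16))

    // 64×64 midboss: can never hide inside the border, so it's culled at
    // the hard minimum instead, with < gaining one extra rendered pixel.
    #define LARGE_CULLED(left, top) \
        (((left) < 0) || ((top) < -16))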
And for the most part, ZUN did follow this system consistently. Except for,
of course, the typical mistakes you make when faced with such manual
decisions, like how he treated TH04's Stage 4 midboss as a "small" sprite
below 32×32 pixels (it's 64×64), losing that precious one extra pixel. Or
how the entire rendering code for the 48×48 boss explosion sprite pretends
that it's actually 64×64 pixels large, which causes even the initial
transformation into screen space to be misaligned from the get-go.
But these are additional bugs on top of the single
one that led to all this research.
Because that's what this is, a bug. 🐞 Every resulting pixel boundary is a
systematic result of master.lib's unfortunate lack of clipping. It's as much
of a bug as TH01's byte-aligned rendering of entities whose internal
position is not byte-aligned. In both cases, the entities are alive,
simulated, and partake in collision detection, but their rendered appearance
doesn't accurately reflect their internal position.
Initially, I classified
📝 the sudden pop-in of TH05's Stage 5 midboss
as a quirk because we had no conclusive evidence that this wasn't
intentional, but now we do. There have been multiple explanations for why
ZUN put borders around the playfield, but master.lib's lack of sprite
clipping might be the biggest reason.
And just like byte-aligned rendering, the clipping conditions can easily be
removed when porting the game away from PC-98 hardware. That's also what
uth05win chose to do: By using OpenGL and not having to rely on hardware
scrolling, it can simply place every sprite as a textured quad at its exact
position in screen space, and then draw the black playfield borders on top
in the end to clip everything in a single draw call. This way, the Stage 5
midboss can smoothly fly into the playfield, just as defined by its movement
code:
The entire smooth Stage 5 midboss entrance animation as shown in
uth05win. If the simultaneous appearance of the Enemy!! label
doesn't lend further proof to this having been ZUN's actual intention, I
don't know what will.
Meanwhile, I designed the interface of the 📝 generic blitter used in the TH01 Anniversary Edition entirely around
clipping the blitted sprite at any explicit combination of VRAM edges. This
was nothing I tacked on in the end, but a core aspect that informed the
architecture of the code from the very beginning. You really want to
have one and only one place where sprite clipping is done right – and
only once per sprite, regardless of how many bitplanes you want to write to.
Which brings us to the goal that the final ¼ of this push went toward. I
thought I was going to start cleaning up the
📝 player movement and rendering code, but
that turned out too complicated for that amount of time – especially if you
want to start with just cleanup, preserving all original bugs for the
time being.
Fixing and smoothening player and Orb movement would be the next big task in
Anniversary Edition development, needing about 3 pushes. It would start with
more performance research into runtime-shifting of larger sprites, followed
by extending my generic blitter according to the results, writing new
optimized loaders for the original image formats, and finally rewriting all
rendering code accordingly. With that code in place, we can then start
cleaning up and fixing the unique code for each boss, one by one.
Until that's funded, the code still contains a few smaller and easier pieces
of code that are equally related to rendering bugs, but could be dealt with
in a more incremental way. Line rendering is one of those, and first needs
some refactoring of every call site, including
📝 the rotating squares around Mima and
📝 YuugenMagan's pentagram. So far, I managed
to remove another 1,360 bytes from the binary within this final ¼ of a push,
but there's still quite a bit to do in that regard.
This is the perfect kind of feature for smaller (micro-)transactions. Which
means that we've now got meaningful TH01 code cleanup and Anniversary
Edition subtasks at every price range, no matter whether you want to invest
a lot or just a little into this goal.
If you can, because Ember2528 revealed the plan behind
his Shuusou Gyoku contributions: A full-on Linux port of the game, which
will be receiving all the funding it needs to happen. 🐧 Next up, therefore:
Turning this into my main project within ReC98 for the next couple of
months, and getting started by shipping the long-awaited first step towards
that goal.
I've raised the cap to avoid the potential of rounding errors, which might
prevent the last needed Shuusou Gyoku push from being correctly funded. I
already had to pick the larger one of the two pending TH02 transactions for
this push, because we would have mathematically ended up
1/25500 short of a full push with the smaller
transaction. And if I'm already at it, I might
as well free up enough capacity to potentially ship the complete OpenGL
backend in a single delivery, which is currently estimated to cost 7 pushes
in total.
🎉 After almost 3 years, TH04 finally caught up to TH05 and is now 100%
position-independent as well! 🎉
For a refresher on what this means and does not mean, check the
announcements from back in 2019 and 2020 when we chased the goal for TH05's
📝 OP.EXE and
📝 the rest of the game. These also feature
some demo videos that show off the kind of mods you were able to efficiently
code back then. With the occasional reverse-engineering attention it
received over the years, TH04's code should now be slightly easier to work
with than TH05's was back in the day. Although not by much – TH04 has
remained relatively unpopular among backers, and only received more than the
funded attention because it shares most of its core code with the more
popular TH05. Which, coincidentally, ended up becoming
📝 the reason for getting this done now.
Not that it matters a lot. Ever since we reached 100% PI for TH05, community
and backer interest in position independence has dropped to near zero. We
just didn't end up seeing the expected large amount of community-made mods
that PI was meant to facilitate, and even the
📝 100% decompilation of TH01 changed nothing
about that. But that's OK; after all, I do appreciate the business of
continually getting commissioned for all the
📝 large-scale mods. Not focusing on PI is
also the correct choice for everyone who likes reading these blog posts, as
it often means that I can't go that much into detail due to cutting corners
and piling up technical debt left and right.
Surprisingly, this only took 1.25 pushes, almost twice as fast as expected.
As that's closer to 1 push than it is to 2, I'm OK with releasing it like
this – especially since it was originally meant to come out three days ago.
🍋 Unfortunately, it was delayed thanks to surprising
website bugs and a certain piece of code that was way more difficult to
document than it was to decompile… The next push will have slightly less
content in exchange, though.
📝 P0240 and P0241 already covered the final
remaining structures, so I only needed to do some superficial RE to prove
the remaining numeric literals as either constants or memory addresses. For
example, I initially thought I'd have to decompile the dissolve animations
in the staff roll, but I only needed to identify a single function pointer
type to prove all false positives as screen coordinates there. Now, the TH04
staff roll would be another fast and cheap decompilation, similar to the
custom entity types of TH04. (And TH05 as well!)
The one piece of code I did have to decompile was Stage 4's carpet
lighting animation, thanks to hex literals that were way too complicated to
leave in ASM. And this one probably takes the crown for TH04's worst set of
landmines and bloat that still somehow results in no observable bugs or
quirks.
This animation starts at frame 1664, roughly 29.5 seconds into the stage,
and quickly turns the stage background into a repeated row of dark-red plaid
carpet tiles by moving out from the center of the playfield towards the
edges. Afterward, the animation repeats with a brighter set of tiles that is
then used for the rest of the stage. As I explained
📝 a while ago in the context of TH02, the
stage tile and map formats in PC-98 Touhou can't express animations, so all
of this needed to be hardcoded in the binary.
The repeating 384×16 row of carpet tiles at the beginning of TH04's
Stage 4 in all three light levels, shown twice for better visibility.
And ZUN did start out making the right decision by only using fully-lit
carpet tiles for all tile sections defined in ST03.MAP. This
way, the animation can simply disable itself after it completed, letting the
rest of the stage render normally and use new tile sections that are only
defined for the final light level. This means that the "initial" dark
version of the carpet is as much a result of hardcoded tile manipulation as
the animation itself.
But then, ZUN proceeded to implement it all by directly manipulating the
ring buffer of on-screen tiles. This is the lowest level before the tiles
are rendered, and rather detached from the defined content of the
📝 .MAP tile sections. Which leads to a whole
lot of problems:
If you decide to do this kind of tile ring modification, it should ideally
happen at a very specific point: after scrolling in new tiles into
the ring buffer, but before blitting any scrolled or invalidated
tiles to VRAM based on the ring buffer. Which is not where ZUN chose to put
it, as he placed the call to the stage-specific render function after both
of those operations. By the time the function is
called, the tile renderer has already blitted a few lines of the fully-lit
carpet tiles from the defined .MAP tile section, matching the scroll speed.
Fortunately, these are hidden behind the black TRAM cells above and below
the playfield…
Still, the code needs to get rid of them before they would become visible.
ZUN uses the regular tile invalidation function for this, which will only
cause actual redraws on the next frame. Again, the tile rendering call has
already happened by the time the Stage 4-specific rendering function gets
called.
But wait, this game also flips VRAM pages between frames to provide a
tear-free gameplay experience. This means that the intended redraw of the
new tiles actually hits the wrong VRAM page.
And sure, the code does attempt to invalidate these newly blitted lines
every frame – but only relative to the current VRAM Y coordinate that
represents the top of the hardware-scrolled screen. Once we're back on the
original VRAM page on the next frame, the lines we initially set out to
remove could have already scrolled past that point, making it impossible to
ever catch up with them in this way.
The only real "solution": Defining the height of the tile invalidation
rectangle at 3× the scroll speed, which ensures that each invalidation call
covers 3 frames worth of newly scrolled-in lines. This is not intuitive at
all, and requires an understanding of everything I have just written to even
arrive at this conclusion. Needless to say that ZUN didn't comprehend it
either, and just hardcoded an invalidation height that happened to be enough
for the small scroll speeds defined in ST03.STD for the first
30 seconds of the stage.
The effect must consistently modify the tile ring buffer to "fix" any new
tiles, overriding them with the intended light level. During the animation,
the code not only needs to set the old light level for any tiles that are
still waiting to be replaced, but also the new light level for any tiles
that were replaced – and ZUN forgot the second part. As a result, newly scrolled-in tiles within the already animated
area will "remain" untouched at light level 2 if the scroll speed is fast
enough during the transition from light level 0 to 1.
All that means that we only have to raise the scroll speed for the effect to
fall apart. Let's try, say, 4 pixels per frame rather than the original
0.25:
By hiding the text RAM layer and revealing what's below the usually
opaque black cells above and below the playfield, we can observe all
three landmines – 1) and 2) throughout light level 0, and 3) during the
transition from level 0 to 1.
All of this could have been so much simpler and actually stable if ZUN
applied the tile changes directly onto the .MAP. This is a much more
intuitive way of expressing what is supposed to happen to the map, and would
have reduced the code to the actually necessary tile changes for the first
frame and each individual frame of the animation. It would have still
required a way to force these changes into the tile ring buffer, but ZUN
could have just used his existing full-playfield redraw functions for that.
In any case, there would have been no need for any per-frame tile
fixing and redrawing. The CPU cycles saved this way could have then maybe
been put towards writing the tile-replacing part of the animation in C++
rather than ASM…
Wow, that was an unreasonable amount of research into a feature that
superficially works fine, just because its decompiled code didn't make
sense. To end on a more positive note, here are
some minor new discoveries that might actually matter to someone:
The laser part of Marisa's Illusion Laser shot type always does 3
points of damage per frame, regardless of the player's power level. Its
hitbox also remains identical on all power levels, no matter how wide the
laser appears on screen. The strength difference between the levels purely
comes from the number of frames the laser stays active before a fixed
non-damaging 32-frame cooldown time:
Power level   Frames per cycle (including 32-frame cooldown)
2             64
3             72
4             88
5             104
6             128
7             144
8             168
9             192
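Since the cooldown is constant, the laser's relative strength follows directly from this table: At power level 2, the laser is active for (64 − 32) = 32 of its 64 frames and thus deals 32 × 3 = 96 points of damage per cycle, while power level 9 is active for (192 − 32) = 160 frames and deals 480.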
The decay animation for player shots is faster in TH05 (12 frames) than in
TH04 (16 frames).
In the first phase of her Stage 6 fight, Yuuka moves along one of two
randomly chosen hardcoded paths, defined as a set of 5 movement angles.
After reaching the final point and firing a danmaku pattern, she teleports
back to her initial position to repeat the path one more time before the
phase times out.
Similarly, TH04's Stage 3 midboss also goes through 12 fixed movement angles
before flying off the playfield.
The formulas for calculating the skill rating on both TH04's and TH05's
final verdict screen are going to be very long and complicated.
Next up: ¾ of a push filled with random boilerplate, finalization, and TH01
code cleanup work, while I finish the preparations for Shuusou Gyoku's
OpenGL backend. This month, everything should finally work out as intended:
I'll complete both tasks in parallel, ship the former to free up the cap,
and then ship the latter once its 5th push is fully funded.
P0242
TH02 RE (Score tracking + HUD rendering)
P0243
TH02 RE (Items)
💰 Funded by:
Yanga
OK, let's decompile TH02's HUD code first, gain a solid understanding of how
increasing the score works, and then look at the item system of this game.
Should be no big deal, no surprises expected, let's go!
…Yeah, right, that's never how things end up in ReC98 land.
And so, we get the usual host of newly discovered
oddities in addition to the expected insights into the item mechanics. Let's
start with the latter:
Some regular stage enemies appear to randomly drop either or items. In reality, there is
very little randomness at play here: These items are picked from a
hardcoded, repeating ring of 10 items
(𝄆 𝄇), and the only source of
randomness is the initial position within this ring, which changes at
the beginning of every stage. ZUN further increased the illusion of
randomness by only dropping such a semi-random item for every
3rd defeated enemy that is coded to drop one, and also having
enemies that drop fixed, non-random items. I'd say it's a decent way of
ensuring both randomness and balance.
There's a 1/512 chance for such a semi-random
item drop to turn into a item instead –
which translates to 1/1536 enemies due to the
fixed drop rate.
Edit (2023-06-11): These are the only ways that items can randomly drop in this game. All other drops, including
any items, are scripted and deterministic.
After using a continue (either after a Game Over, or after manually
choosing to do so through the Pause menu for whatever reason), the next
(Stage number + 1) semi-random item
drops are turned into items instead.
Items can contribute up to 25 points to the skill value and subsequent
rating (あなたの腕前, "your skill") on the final verdict
screen. Doing well at item collection first increases a separate
collect_skill value:
Item | Collection condition | collect_skill change
 | below max power | +1
 | at or above max power | +2
 | value == 51,200 | +8
 | value ≥20,000 and <51,200 | +4
 | value ≥10,000 and <20,000 | +2
 | value <10,000 | +1
 | with 5 bombs in stock | +16
Note, again, the lack of anything involving
items. At the maximum of 5 lives, the item spawn function transforms
them into bomb items anyway. It is possible though to gain
the 5th life by reaching one of the extend scores while a
item is still on screen; in that case,
collecting the 1-up has no effect at all.
Every 32 collect_skill points will then raise the
item_skill by 1, whereas every 16 dropped items will lower
it by 1. Before launching into the ending sequence,
item_skill is clamped to the [0; 25] range and
added to the other skill-relevant metrics we're going to look at in
future pushes.
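If both counters only ever increase during a game, the end result described above can be expressed as a simple calculation (a sketch; all names are made up for illustration):

int item_skill = ((collect_skill / 32) - (items_dropped / 16));
if(item_skill < 0) {
	item_skill = 0;
} else if(item_skill > 25) {
	item_skill = 25;
}
skill_total += item_skill;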
When losing a life, the game will drop a single
and 4 randomly picked or items in a random order
around Reimu's position. Contrary to an
unsourced Touhou Wiki edit from 2009, each of the 4 does have an
equal and independent chance of being either a
or item.
Finally, and perhaps most
interestingly, item values! These are
determined by the top Y coordinate of an item during the frame it is
collected on. The maximum value of 51,200 points applies to the top 48
pixels of the playfield, and drops off as soon as an item falls below
that line. For the rest of the playfield, point items then use a formula
of (28,000 - (top Y coordinate of item in
screen space × 70)):
Point items and their collection value in TH02. The numbers
correspond to items that are collected while their top Y coordinate
matches the line they are directly placed on. The upper
item in the image would therefore give
23,450 points if the player collected it at that specific
position.
Reimu collects any item whose 16×16 bounding box lies fully within
the red 48×40 hitbox. Note that
the box isn't cut off in this specific case: At Reimu's lowest
possible position on the playfield, the lowest 8 pixels of her
sprite are clipped, but the item hitbox still happens to end exactly
at the bottom of the playfield. Since an item's Y velocity
accelerates on every frame, it's entirely possible to collect a
point item at the lowest value of 2,240 points, on the exact frame
before it falls below the collection hitbox.
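Expressed in code, the value calculation from above presumably amounts to the following (a sketch; the function name is made up, and the playfield is assumed to start at screen Y = 16, which puts the end of the 48-pixel maximum-value zone at Y = 64 and lines up with the example values in the image):

// Value of a point item, based on the top Y coordinate (in screen space)
// of the item during the frame it is collected on
long point_item_value(int top_y)
{
	if(top_y < (16 + 48)) {
		return 51200;
	}
	return (28000L - (top_y * 70L));
}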
Onto score tracking then, which only took a single commit to raise another
big research question. It's widely known that TH02 grants extra lives upon
reaching a score of 1, 2, 3, 5, or 8 million points. But what hasn't been
documented is the fact that the game does not stop at the end of the
hardcoded extend score array. ZUN merely ends it with a sentinel value of
999,999,990 points, but if the score ever increased beyond this value, the
game will interpret adjacent memory as signed 32-bit score values and
continue giving out extra lives based on whatever thresholds it ends up
finding there. Since the following bytes happen to turn into a negative
number, the next extra life would be awarded right after gaining another 10
points at exactly 1,000,000,000 points, and the threshold after that would
be 11,114,905,600 points. Without an explicit counterstop, the number of
score-based extra lives is theoretically unlimited, and would even continue
after the signed 32-bit value overflowed into the negative range. Although
we certainly have bigger problems once scores ever reach that point…
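The underlying logic presumably looks something like this unbounded loop (a hypothetical reconstruction with made-up names, not ZUN's actual code):

void extra_lives_add(int amount); // assumed helper

// Extend thresholds, terminated with the sentinel value
static const long EXTEND_SCORES[5 + 1] = {
	1000000, 2000000, 3000000, 5000000, 8000000, 999999990,
};

void extends_update(long score)
{
	static int next = 0;
	// No bounds check: Once the score passes the sentinel, [next] runs off
	// the end of the array, and adjacent memory is reinterpreted as the
	// next signed 32-bit threshold.
	while(score >= EXTEND_SCORES[next]) {
		extra_lives_add(1);
		next++;
	}
}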
That said, it seems impossible that any of this could ever happen
legitimately. The current high scores of 42,942,800 points on
Lunatic and 42,603,800 points on
Extra don't even reach 1/20 of ZUN's sentinel
value. Without either a graze or a bullet cancel system, the scoring
potential in this game is fairly limited, making it unlikely for high scores
to ever increase by that additional order of magnitude to end up anywhere
near the 1 billion mark.
But can we really be sure? Is this a landmine because it's impossible
to ever reach such high scores, or is it a quirk because these extends
could be observed under rare conditions, perhaps as the result of
other quirks? And if it's the latter, how many of these adjacent bytes do we
need to preserve in cleaned-up versions and ports? We'd pretty much need to
know the upper bound of high scores within the original stage and boss
scripts to tell. This value should be rather easy to calculate in a
game with such a simple scoring system, but doing that only makes sense
after we RE'd all scoring-related code and could efficiently run such
simulations. It's definitely something we'd need to look at before working
on this game's debloated version in the far future, which is
when the difference between quirks and landmines will become relevant.
Still, all that uncertainty just because ZUN didn't restrict a loop to the
size of the extend threshold array…
TH02 marks a pivotal point in how the PC-98 Touhou games handle the current
score. It's the last game to use a 32-bit variable before the later games
would regrettably start using arrays of binary-coded
decimals. More importantly though, TH02 is also the first game to
introduce the delayed score counting animation, where the displayed score
intentionally lags behind and gradually counts towards the real one over
multiple frames. This could be implemented in one of two ways:
Option 1: Keep the displayed score as a separate variable inside the
presentation layer, and let it gradually count up to the real score value
passed in from the logic layer.
Option 2: Burden the game logic with this presentation detail, and split the
score into two variables: One for the displayed score, and another for the
delta between that score and the actual one. Newly gained points are
first added to the delta variable, and then gradually subtracted from there
and added to the real score before being displayed.
And by now, we can all tell which option ZUN picked for the rest of the
PC-98 games, even if you don't remember
📝 me mentioning this system last year.
📝 Once again, TH02 immortalized ZUN's initial
attempt at the concept, which lacks the abstraction boundaries you'd want
for managing this one piece of state across two variables, and messes up the
abstractions it does have. In addition to the regular score
transfer/render function, the codebase therefore has
1) a function that transfers the current delta to the score immediately,
but does not re-render the HUD, and
2) a function that adds the delta to the score and re-renders the HUD, but
does not reset the delta.
And – you guessed it – I wouldn't have mentioned any of this if it didn't
result in one bug and one quirk in TH02. The bug resulting from 1) is pretty
minor: The function is called when losing a life, and simply stops any
active score-counting animation at the value rendered on the frame where the
player got hit. This one is only a rendering issue – no points are lost, and
you just need to gain 10 more for the rendered value to jump back up to its
actual value. You'll probably never notice this one because you're likely
busy collecting the single spawned around Reimu
when losing a life, which always awards at least 10 points.
The quirk resulting from 2) is more intriguing though. Without a separate
reset of the score delta, the function effectively awards the current delta
value as a one-time point bonus, since the same delta will still be
regularly transferred to the score on further game frames.
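A minimal sketch of the whole system, with all names made up:

void hud_score_render(long score); // assumed helper

long score = 0;       // the value that is rendered to the HUD
long score_delta = 0; // gained, but not yet displayed

void score_add(long points)
{
	score_delta += points;
}

// The regular score transfer/render function, called on every game frame.
// (The exact amount moved over per frame is a detail I'm glossing over.)
void score_transfer_and_render(void)
{
	if(score_delta > 0) {
		score += 10;
		score_delta -= 10;
		hud_score_render(score);
	}
}

// Function 2) from above: Transfers the entire delta at once, but misses a
// `score_delta = 0;` at the end. The per-frame function above will
// therefore transfer the same delta a second time, turning it into a
// one-time point bonus.
void score_transfer_immediately(void)
{
	score += score_delta;
	hud_score_render(score);
}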
This function is called at the start of every dialog sequence. However, TH02
stops running the regular game loop between the post-boss dialog and the
next stage where the delta is reset, so we can only observe this quirk for
the pre-boss sequences and the dialog before Mima's form change.
Unfortunately, it's not all too exploitable in either case: Each of the
pre-boss dialog sequences is preceded by an ungrazeable pellet pattern and
followed by multiple seconds of flying over an empty playfield with zero
scoring opportunities. By the time the sequence starts, the game will have
long transferred any big score delta from max-valued point items. It's
slightly better with Mima since you can at least shoot her and use a bomb to
keep the delta at a nonzero value, but without a health bar, there is little
indication of when the dialog starts, and it'd be long after Mima
gave out her last bonus items in any case.
But two of the bosses – that is, Rika, and the Five Magic Stones – are
scrolled onto the playfield as part of the stage script, and can also be hit
with player shots and bombs for a few seconds before their dialog starts.
While I'll only get to cover shot types and bomb damage within the next few
TH02 pushes, there is an obvious initial strategy for maximizing the effect
of this quirk: Spreading out the A-Type / Wide / High Mobility shot to land
as many hits as possible on all Five Magic Stones, while firing off a bomb.
Turns out that the infamous button-mashing mechanics of the
player shot are also more complicated than simply pressing and releasing the
Shot key at alternating frames. Even this result took way too many
takes.
Wow, a grand total of 1,750 extra points! Totally worth wasting a bomb for…
yeah, probably not. But at the very least, it's
something that a TAS score run would want to keep in mind. And all that just
because ZUN "forgot" a single score_delta = 0; assignment at
the end of one function…
And that brings TH02 over the 30% RE mark! Next up: 100% position
independence for TH04. If anyone wants to grab the
that have now been freed up in the cap: Any small Touhou-related task would
be perfect to round out that upcoming TH04 PI delivery.
P0240
TH04 PI/RE (Stage 5 star rendering + Stage 6 Yuuka checkerboard + Custom entity structures, part 1/2)
P0241
TH04 PI/RE (Custom entity structures, part 2/2 + Thick laser structure + PI false positives + .STD loading)
💰 Funded by:
JonathKane, Blue Bolt, [Anonymous]
🏷️ Tags:
Well, well. My original plan was to ship the first step of Shuusou Gyoku
OpenGL support on the next day after this delivery. But unfortunately, the
complications just kept piling up, to a point where the required solutions
definitely blow the current budget for that goal. I'm currently sitting on
over 70 commits that would take at least 5 pushes to deliver as a meaningful
release, and all of that is just rearchitecting work, preparing the
game for a not too Windows-specific OpenGL backend in the first place. I
haven't even written a single line of OpenGL yet… 🥲
This shifts the intended Big Release Month™ to June after all. Now I know
that the next round of Shuusou Gyoku features should better start with the
SC-88Pro recordings, which are much more likely to get done within their
current budget. At least I've already completed the configuration versioning
system required for that goal, which leaves only the actual audio part.
So, TH04 position independence. Thanks to a bit of funding for stage
dialogue RE, non-ASCII translations will soon become viable, which finally
presents a reason to push TH04 to 100% position independence after
📝 TH05 had been there for almost 3 years. I
haven't heard back from Touhou Patch Center about how much they want to be
involved in funding this goal, if at all, but maybe other backers are
interested as well.
And sure, it would be entirely possible to implement non-ASCII translations
in a way that retains the layout of the original binaries and can be easily
compared at a binary level, in case we consider translations to be a
critical piece of infrastructure. This wouldn't even just be an exercise in
needless perfectionism, and we only have to look to Shuusou Gyoku to realize
why: Players expected
that my builds were compatible with existing SpoilerAL SSG files, which
was something I hadn't even considered the need for. I mean, the game is
open-source 📝 and I made it easy to build.
You can just fork the code, implement all the practice features you want in
a much more efficient way, and I'd probably even merge your code into my
builds then?
But I get it – recompiling the game just yields yet another build that can't
be easily compared to the original release. A cheat table is much more
trustworthy in giving players the confidence that they're still practicing
the same original game. And given the current priorities of my backers,
it'll still take a while for me to implement proof by replay validation,
which will ultimately free every part of the community from depending on the
original builds of both Seihou and PC-98 Touhou.
However, such an implementation within the original binary layout would
significantly drive up the budget of non-ASCII translations, and I sure
don't want to constantly maintain this layout during development. So, let's
chase TH04 position independence like it's 2020, and quickly cover a larger
amount of PI-relevant structures and functions at a shallow level. The only
parts I decompiled for now contain calculations whose intent can't be
clearly communicated in ASM. Hitbox visualizations or other more in-depth
research would have to wait until I get to the proper decompilation of these
features.
But even this shallow work left us with a large amount of TH04-exclusive
code that had its worst parts RE'd and could be decompiled fairly quickly.
If you want to see big TH04 finalization% gains, general TH04 progress would
be a very good investment.
The first push went to the often-mentioned stage-specific custom entities
that share a single statically allocated buffer. Back in 2020, I
📝 wrongly claimed that these were a TH05 innovation,
but the system actually originated in TH04. Both games use a 26-byte
structure, but TH04 only allocates a 32-element array rather than TH05's
64-element one. The conclusions from back then still apply, but I also kept
wondering why these games used a static array for these entities to begin
with. You know what they call an area of memory that you can cleanly
repurpose for things? That's right, a heap!
And absolutely no one would mind one additional heap allocation at the start
of a stage, next to the ones for all the sprites and portraits.
However, we are still running in Real Mode with segmented memory. Accessing
anything outside a common data segment involves modifying segment registers,
which has a nonzero CPU cycle cost, and Turbo C++ 4.0J is terrible at
optimizing away the respective instructions. Does this matter? Probably not,
but you don't take "risks" like these if you're in a permanent
micro-optimization mindset…
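To illustrate the cost (a sketch with a made-up structure name; the instruction sequences are typical of Turbo C++ 4.0J's output, not a literal disassembly):

#include <alloc.h> // Turbo C++: farmalloc()

struct custom_entity_t { /* … 26 bytes … */ };

// Static array in the default data segment: Accesses compile down to
// plain DS-relative MOV instructions.
custom_entity_t entities_static[32];

// Far heap allocation: Every access first has to load the segment half of
// the pointer into ES, e.g.
//   LES BX, [entities_heap]
//   MOV AX, ES:[BX+offset]
custom_entity_t far *entities_heap;

void entities_allocate(void)
{
	entities_heap = static_cast<custom_entity_t far *>(
		farmalloc(sizeof(custom_entity_t) * 32)
	);
}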
In TH04, this system is used for:
Kurumi's symmetric bullet spawn rays, fired from her hands towards the left
and right edges of the playfield. These are rather infamous for being the
last thing you see before
📝 the Divide Error crash that can happen in ZUN's original build.
Capped to 6 entities.
The 4 📝 bits used in Marisa's Stage 4 boss
fight. Coincidentally also related to the rare Divide Error
crash in that fight.
Stage 4 Reimu's spinning orbs. Note how the game uses two different sets
of sprites just to have two different outline colors. This was probably
better than messing with the palette, which can easily cause unintended
effects if you only have 16 colors to work with. Heck, I have an entire blog post tag just to highlight
these cases. Capped to the full 32 entities.
The chasing cross bullets, seen in Phase 14 of the Stage 6 Yuuka
fight. Featuring some smart sprite work, making use of point symmetry to
achieve a fluid animation in just 4 frames. This is
good-code in sprite form. Capped to 31 entities, because the 32nd custom entity during this fight is defined to be…
The single purple pulsating and shrinking safety circle, seen in Phase 4 of
the same fight. The most interesting aspect here is actually still related
to the cross bullets, whose spawn function is wrongly limited to 32 entities
and could theoretically overwrite this circle. This
is strictly landmine territory though:
Yuuka never uses these bullets and the safety circle
simultaneously
She never spawns more than 24 cross bullets
All cross bullets are fast enough to have left the screen by the
time Yuuka restarts the corresponding subpattern
The cross bullets spawn at Yuuka's center position, and assign its
Q12.4 coordinates to structure fields that the safety circle interprets
as raw pixels. The game does try to render the circle afterward, but
since Yuuka's static position during this phase is nowhere near a valid
pixel coordinate, it is immediately clipped.
The flashing lines seen in Phase 5 of the Gengetsu fight,
telegraphing the slightly random bullet columns.
These structures only took 1 push to reverse-engineer rather than the 2 I
needed for their TH05 counterparts because they are much simpler in this
game. The "structure" for Gengetsu's lines literally uses just a single X
position, with the remaining 24 bytes being basically padding. The only
minor bug I found on this shallow level concerns Marisa's bits, which are
clipped at the right and bottom edges of the playfield 16 pixels earlier
than you would expect:
The remaining push went to a bunch of smaller structures and functions:
The structure for the up to 2 "thick" (a.k.a. "Master Spark") lasers. Much
saner than the
📝 madness of TH05's laser system while being
equally customizable in width and duration.
The structure for the various monochrome 16×16 shapes in the background of
the Stage 6 Yuuka fight, drawn on top of the checkerboard.
The rendering code for the three falling stars in the background of Stage 5.
The effect here is entirely palette-related: After blitting the stage tiles,
the 📝 1bpp star image is ORed
into only the 4th VRAM plane, which is equivalent to setting the
highest bit in the palette color index of every pixel within the star-shaped
region (see the sketch after this list). This of course raises the question
of how the stage would look if it was fully illuminated:
The full tile map of TH04's Stage 5, in both dark and fully
illuminated views. Since the illumination effect depends on two
matching sets of palette colors that are distinguished by a single
bit, the illuminated view is limited to only 8 of the 16 colors. The
dark view, on the other hand, can freely use colors from the
illuminated set, since those are unaffected by the OR
operation.
Most code that modifies a stage's tile map, and directly specifies tiles via
their top-left offset in VRAM.
For code alignment reasons, this forced a much longer detour into the
.STD format loader. Nothing all too noteworthy there since we're still
missing the enemy script and spawn structures before we can call .STD
"reverse-engineered", but maybe still helpful if you're looking for an
overview of the format. Also features a buffer overflow landmine if a .STD
file happens to contain more than 32 enemy scripts… you know, the usual
stuff.
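Coming back to the falling stars, here's the promised sketch of that plane OR, with made-up names and a byte-aligned position for simplicity (the segment address of the 4th plane is real PC-98 hardware):

// ORs a 1bpp sprite into VRAM plane 3 ("E", at segment 0xE000), setting
// the highest palette index bit for every lit pixel of the sprite
void star_put(int left, int top, const uint8_t *star, int w_bytes, int h)
{
	uint8_t far *plane_e = reinterpret_cast<uint8_t far *>(
		MK_FP(0xE000, ((top * (640 / 8)) + (left / 8)))
	);
	for(int y = 0; y < h; y++) {
		for(int x = 0; x < w_bytes; x++) {
			plane_e[x] |= star[(y * w_bytes) + x];
		}
		plane_e += (640 / 8);
	}
}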
To top off the second push, we've got the vertically scrolling checkerboard
background during the Stage 6 Yuuka fight, made up of 32×32 squares. This
one deserves a special highlight just because of its needless complexity.
You'd think that even a performant implementation would be pretty simple:
1. Set the GRCG to TDW mode
2. Set the GRCG tile to one of the two square colors
3. Start with Y as the current scroll offset, and X as some indicator of
which color is currently shown at the start of each row of squares
4. Iterate over all lines of the playfield, filling in all pixels that
should be displayed in the current color, skipping over the other ones
5. Count down Y for each line drawn
6. If Y reaches 0, reset it to 32 and flip X
7. At the bottom of the playfield, change the GRCG tile to the other color,
and repeat with the initial value of X flipped
The most important aspect of this algorithm is how it reduces GRCG state
changes to a minimum, avoiding the costly port I/O that we've identified
time and time again as one of the main bottlenecks in TH01. With just 2
state variables and 3 loops, the resulting code isn't that complex either. A
naive implementation that just drew the squares from top to bottom in a
single pass would barely be simpler, but much slower: By changing the GRCG
tile on every color, such an implementation would burn a low 5-digit number
of CPU cycles per frame for the 12×11.5-square checkerboard used in the
game.
And indeed, ZUN retained all important aspects of this algorithm… but still
implemented it all in ASM, with a ridiculous layer of x86 segment arithmetic
on top? Which blows up the complexity to 4 state
variables, 5 nested loops, and a bunch of constants in unusual units. I'm
not sure what this code is supposed to optimize for, especially with that
rather questionable register allocation that nevertheless leaves one of the
general-purpose registers unused. Fortunately,
the function was still decompilable without too many code generation hacks,
and retains the 5 nested loops in all their goto-connected
glory. If you want to add a checkerboard to your next PC-98
demo, just stick to the algorithm I gave above.
(Using a single XOR for flipping the starting X offset between 32 and 64
pixels is pretty nice though, I have to give him that.)
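Here's what that could look like in practice: a sketch of the 7 steps above for the 12×11.5-square playfield, assuming master.lib-style grcg_setcolor() / grcg_off() helpers and the usual playfield position constants:

static const int ROW_BYTES = (640 / 8);
static const int SQUARE_H = 32;
static const int SQUARE_BYTES = (32 / 8);
static const int PLAYFIELD_TOP = 16;
static const int PLAYFIELD_H = 368;
static const int PLAYFIELD_LEFT_BYTE = (32 / 8);
static const int SQUARES_PER_ROW = (384 / 32);

void checkerboard_put(int scroll_y, int color_1, int color_2)
{
	for(int pass = 0; pass < 2; pass++) {
		// Steps 1, 2, and 7: Just one GRCG state change per color
		grcg_setcolor(GC_TDW, ((pass == 0) ? color_1 : color_2));
		uint8_t far *line = reinterpret_cast<uint8_t far *>(
			MK_FP(0xA800, (PLAYFIELD_TOP * ROW_BYTES))
		);
		int y = (scroll_y % SQUARE_H); // Step 3
		int x = (pass == 1);           // Steps 3 and 7
		if(y == 0) {
			y = SQUARE_H;
		}
		for(int i = 0; i < PLAYFIELD_H; i++) { // Step 4
			for(int sq = x; sq < SQUARES_PER_ROW; sq += 2) {
				uint8_t far *p = (
					line + PLAYFIELD_LEFT_BYTE + (sq * SQUARE_BYTES)
				);
				for(int b = 0; b < SQUARE_BYTES; b++) {
					p[b] = 0xFF; // the operand is ignored in TDW mode
				}
			}
			line += ROW_BYTES;
			if(--y == 0) { // Steps 5 and 6
				y = SQUARE_H;
				x = !x;
			}
		}
	}
	grcg_off();
}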
This makes for a good occasion to talk about the third and final GRCG mode,
completing the series I started with my previous coverage of the
📝 RMW and
📝 TCR modes. The TDW (Tile Data Write) mode
is the simplest of the three and just writes the 8×1 GRCG tile into VRAM
as-is, without applying any alpha bitmask. This makes it perfect for
clearing rectangular areas of pixels – or even all of VRAM by doing a single
memset():
// Set up the GRCG in TDW mode.
outportb(0x7C, 0x80);
// Fill the tile register with color #7 (0111 in binary).
outportb(0x7E, 0xFF); // Plane 0: (B): (********)
outportb(0x7E, 0xFF); // Plane 1: (R): (********)
outportb(0x7E, 0xFF); // Plane 2: (G): (********)
outportb(0x7E, 0x00); // Plane 3: (E): ( )
// Set the 32 pixels at the top-left corner of VRAM to the exact contents of
// the tile register, effectively repeating the tile 4 times. In TDW mode, the
// GRCG ignores the CPU-supplied operand, so we might as well just pass the
// contents of a register with the intended width. This eliminates useless load
// instructions in the compiled assembly, and even sort of signals to readers
// of this code that we do not care about the source value.
*reinterpret_cast<uint32_t far *>(MK_FP(0xA800, 0)) = _EAX;
// Fill the entirety of VRAM with the GRCG tile. A simple C one-liner that will
// probably compile into a single `REP STOS` instruction. Unfortunately, Turbo
// C++ 4.0J only ever generates the 16-bit `REP STOSW` here, even when using
// the `__memset__` intrinsic and when compiling in 386 mode. When targeting
// that CPU and above, you'd ideally want `REP STOSD` for twice the speed.
memset(MK_FP(0xA800, 0), _AL, ((640 / 8) * 400));
However, this might make you wonder why TDW mode is even necessary. If it's
functionally equivalent to RMW mode with a CPU-supplied bitmask made up
entirely of 1 bits (i.e., 0xFF, 0xFFFF, or
0xFFFFFFFF), what's the point? The difference lies in the
hardware implementation: If all you need to do is write tile data to
VRAM, you don't need the read and modify parts of RMW mode
which require additional processing time. The PC-9801 Programmers'
Bible claims a speedup of almost 2× when using TDW mode over equivalent
operations in RMW mode.
And that's the only performance claim I found, because none of these old
PC-98 hardware and programming books did any benchmarks. Then again, it's
not too interesting of a question to benchmark either, as the byte-aligned
nature of TDW blitting severely limits its use in a game engine anyway.
Sure, maybe it makes sense to temporarily switch from RMW to TDW mode
if you've identified a large rectangular and byte-aligned section within a
sprite that could be blitted without a bitmask? But the necessary
identification work likely nullifies the performance gained from TDW mode,
I'd say. In any case, that's pretty deep
micro-optimization territory. Just use TDW mode for the
few cases it's good at, and stick to RMW mode for the rest.
So is this all that can be said about the GRCG? Not quite, because there are
4 bits I haven't talked about yet…
And now we're just 5.37% away from 100% position independence for TH04! From
this point, another 2 pushes should be enough to reach this goal. It might
not look like we're that close based on the current estimate, but a
big chunk of the remaining numbers are false positives from the player shot
control functions. Since we've got a very special deadline to hit, I'm going
to cobble these two pushes together from the two current general
subscriptions and the rest of the backlog. But you can, of course, still
invest in this goal to allow the existing contributions to go to something
else.
… Well, if the store were actually open. So I'd better
continue with a quick task to free up some capacity sooner rather than
later. Next up, therefore: Back to TH02, and its item and player systems.
Shouldn't take that long, I'm not expecting any surprises there. (Yeah, I
know, famous last words…)
Stripe is now
properly integrated into this website as an alternative to PayPal! Now, you
can also financially support the project if PayPal doesn't work for you, or
if you prefer one of the greater variety of payment providers that Stripe
supports. It's unfortunate that I had to
ship this integration while the store is still sold out, but the Shuusou
Gyoku OpenGL backend has turned out way too complicated to be finished next
to these two pushes within a month. It will take quite a while until the
store reopens and you all can start using Stripe, so I'll just link back to
this blog post when it happens.
Integrating Stripe wasn't the simplest task in the world either. At first,
the Checkout API
seems pretty friendly to developers: The entire payment flow is handled on
the backend, in the server language of your choice, and requires no frontend
JavaScript except for the UI feedback code you choose to write. Your
backend API endpoint initiates the Stripe Checkout session, answers with a
redirect to Stripe, and Stripe then sends a redirect back to your server if
the customer completed the payment. Superficially, this server-based
approach seems much more GDPR-friendly than PayPal, because there are no
remote scripts to obtain consent for. In reality though, Stripe shares
much more potential personal data about your credit card or bank
account with a merchant, compared to PayPal's almost bare minimum of
necessary data.
It's also rather annoying how the backend has to persist the order form
information throughout the entire Checkout session, because it would
otherwise be lost if the server restarts while a customer is still busy
entering data into Stripe's Checkout form. Compare that to the PayPal
JavaScript SDK, which only POSTs back to your server after the
customer completed a payment. In Stripe's case, more JavaScript actually
only makes the integration harder: If you trigger the initial payment
HTTP request from JavaScript, you will have
to improvise a bit to avoid the CORS error when redirecting away to a
different domain.
But sure, it's all not too bad… for regular orders at least. With
subscriptions, however, things get much worse. Unlike PayPal, Stripe
kind of wants to stay out of the way of the payment process as much as
possible, and just be a wrapper around its supported payment methods. So if
customers aren't really meant to register with Stripe, how would they cancel
their subscriptions?
Answer: Through
the… merchant? Which I quite dislike in principle, because why should
you have to trust me to actually cancel your subscription after you
requested it? It also means that I probably should add some sort of UI for
self-canceling a Stripe subscription, ideally without adding full-blown user
accounts. Not that this solves the underlying trust issue, but it's more
convenient than contacting me via email or, worse, going through your bank
somehow. Here is how my solution works:
When setting up a Stripe subscription, the server will generate a random
ID for authentication. This ID is then used as a salt for a hash
of the Stripe subscription ID, linking the two without storing the latter on
my server.
The thank you page, which is parameterized with the Stripe
Checkout session ID, will use that ID to retrieve the subscription
ID via an API call to Stripe, and display it together with the above
salt. This works indefinitely – contrary to what the expiry field in the
Checkout session object suggests, Stripe sessions are indeed stored
forever. After all, Stripe also displays this session information in a
merchant's transaction log with an excessive amount of detail. It might have
been better to add my own expiration system to these pages, but this had
been taking long enough already. For now, be aware that sharing the link to
a Stripe thank you page is equivalent to sharing your subscription
cancellation password.
The salt is then used as the key for a subscription management page. To
cancel, you visit this page and enter the Stripe subscription ID to confirm.
The server then checks whether the salt and subscription ID pair belong to
each other, and sends the actual cancellation
request back to Stripe if they do.
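Or, in (not the website's actual) code, assuming generic hash and database helpers:

#include <string>

std::string sha256_hex(const std::string &data); // assumed helper
std::string random_id();                         // assumed helper
void database_store(const std::string &salt, const std::string &hash);
std::string database_lookup(const std::string &salt);

// Setup: Link the salt to the subscription without storing the latter's ID
void subscription_link(const std::string &stripe_subscription_id)
{
	const std::string salt = random_id();
	database_store(salt, sha256_hex(salt + stripe_subscription_id));
}

// Cancellation: Verify the visitor-supplied pair by recomputing the hash
bool cancellation_authorized(
	const std::string &salt, const std::string &claimed_subscription_id
)
{
	return (sha256_hex(salt + claimed_subscription_id)
		== database_lookup(salt));
}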
I might have gone a bit overboard with the crypto there, but I liked the
idea of not storing any of the Stripe session IDs in the server database.
It's not like that makes the system more complex anyway, and it's nice to
have a separate confirmation step before canceling a subscription.
But even that wasn't everything I had to keep in mind here. Once you
switch from test to production mode for the final tests, you'll notice that
certain SEPA-based
payment providers take their sweet time to process and activate new
subscriptions. The Checkout session object even informs you about that, by
including a payment status field. Which initially seems just like
another field that could indicate hacking attempts, but treating it as such
and rejecting any unpaid session can also reject perfectly valid
subscriptions. I don't want all this control… 🥲
Instead, all I can do in this case is to tell you about it. In my test, the
Stripe dashboard said that it might take days or even weeks for the initial
subscription transaction to be confirmed. In such a case, the respective
fraction of the cap will unfortunately need to remain red for that entire time.
And that was 1½ pushes just to replicate the basic functionality of a simple
PayPal integration with the simplest type of Stripe integration. On the
architectural side, all the necessary refactoring work made me finally
upgrade my frontend code to TypeScript at least, using the amazing esbuild to handle transpilation inside
the server binary. Let's see how long it will now take for me to upgrade to
SCSS…
With the new payment options, it makes sense to go for another slight price
increase, from up to per push.
The amount of taxes I have to pay on this income is slowly becoming
significant, and the store has been selling out almost immediately for the
last few months anyway. If demand remains at the current level or even
increases, I plan to gradually go up to by the end
of the year. 📝 As 📝 usual,
I'm going to deliver existing orders in the backlog at the value they were
originally purchased at. Due to the way the cap has to be calculated, these
contributions now appear to have increased in value by a rather awkward
13.33%.
This left ½ of a push for some more work on the TH01 Anniversary Edition.
Unfortunately, this was too little time for the grand issue of removing
byte-aligned rendering of bigger sprites, which will need some additional
blitting performance research. Instead, I went for a bunch of smaller
bugfixes:
ANNIV.EXE now launches ZUNSOFT.COM if
MDRV98 wasn't resident before. In hindsight, it's completely obvious
why this is the right thing to do: Either you start
ANNIV.EXE directly, in which case there's no resident
MDRV98 and you haven't seen the ZUN Soft logo, or you have
made a single-line edit to GAME.BAT and replaced
op with anniv, in which case MDRV98 is
resident and you have seen the logo. These are the two
reasonable cases to support out of the box. If you are doing
anything else, it shouldn't be that hard to adjust though?
You might be wondering why I didn't just include all code of
ZUNSOFT.COM inside ANNIV.EXE together with
the rest of the game. The reason: ZUNSOFT.COM has
almost nothing in common with regular TH01 code. While the rest of
TH01 uses the custom image formats and bad rendering code I
documented again and again during its RE process,
ZUNSOFT.COM fully relies on master.lib for everything
about the bouncing-ball logo animation. Its code is much closer to
TH02 in that respect, which suggests that ZUN did in fact write this
animation for TH02, and just included the binary in TH01 for
consistency when he first sold both games together at Comiket 52.
Unlike the 📝 various bad reasons for splitting the PC-98 Touhou games into three main executables,
it's still a good idea to split off animations that use a completely
different set of rendering and file format functions. Combined with
all the BFNT and shape rendering code, ZUNSOFT.COM
actually contains even more unique code than OP.EXE,
and only slightly less than FUUIN.EXE.
The optional AUTOEXEC.BAT is now correctly encoded in
Shift-JIS instead of accidentally being UTF-8, fixing the previous
mojibake in its final ECHO line.
The command-line option that just adds a stage selection without
other debug features (anniv s) now works reliably.
This one's quite interesting because it only ever worked
because of a ZUN bug. From a superficial look at the code, it
shouldn't: While the presence of an 's' branch proves
that ZUN had such a mode during development, he nevertheless forgot
to initialize the debug flag inside the resident structure within
this branch. This mode only ever worked because master.lib's
resdata_create() function doesn't clear the resident
structure after allocation. If anything on the system previously
happened to write something other than 0x00,
0x01, or 0x03 to the specific byte that
then gets repurposed as the debug mode flag, this lack of
initialization does in fact result in a distinct non-test and
non-debug stage selection mode.
This is what happens on a certain widely circulated .HDI copy of
TH01 that boots MS-DOS 3.30C. On this system, the memory that
master.lib will allocate to the TH01 resident structure was
previously used by DOS as stack for its kernel, which left the
future resident debug flag byte at address 9FF6:0012 at
a value of 0x12. This might be the entire reason why
game s is even widely documented to trigger a stage
selection to begin with – on the widely circulated TH04 .HDI that
boots MS-DOS 6.20, or on DOSBox-X, the s parameter
doesn't work because both DOS systems leave the resident debug flag
byte at 0x00. And since ANNIV.EXE pushes
MDRV98 into that area of conventional DOS RAM, anniv s
previously didn't work even on MS-DOS 3.30C.
Both bugs in the
📝 1×1 particle system during the Mima fight
have been fixed. These include the off-by-one error that killed off the
very first particle on the 80th
frame and left it in VRAM, and, just like every other entity type, a
replacement of ZUN's EGC unblitter with the new pixel-perfect and fast
one. Until I've rearchitected unblitting as a whole, the particles will
now merely rip barely visible 1×1 holes into the sprites they overlap.
The bomb value shown in the lowest line of the in-game
debug mode output is now right-aligned together with the rest of the
values. This ensures that the game always writes a consistent number
of characters to TRAM, regardless of the magnitude of the
bomb value, preventing the seemingly wrong
timer values that appeared in the original game
whenever the value of the bomb variable changed to a
lower number of digits:
Finally, I've streamlined VRAM page access changes, which allowed me to
consistently replace ZUN's expensive function call with the optimal two
inlined x86 instructions. Interestingly, this change alone removed
2 KiB from the binary size, which is almost all of the difference
between 📝 the P0234-1 release and this
one. Let's see how much longer we can make each new release of
ANNIV.EXE smaller than the previous one.
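For reference: On a PC-98, the VRAM page that CPU accesses go to is selected via I/O port 0xA6 (while 0xA4 selects the displayed page), so those two inlined instructions presumably boil down to nothing more than this (the function name is made up):

// Selects the VRAM page (0 or 1) for all subsequent CPU reads and writes.
// Inlined, this is just a `MOV AL, page` followed by an `OUT 0A6h, AL`.
inline void vram_page_access_set(uint8_t page)
{
	outportb(0xA6, page);
}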
The final point, however, raised the question of what we're now going to do
about
📝 a certain issue in the 地獄/Jigoku Bad Ending.
ZUN's original expensive way of switching the accessed VRAM page was the
main reason behind the lag frames on slower PC-98 systems, and
search-replacing the respective function calls would immediately get us to
the optimized version shown in that blog post. But is this something we
actually want? If we wanted to retain the lag, we could surely preserve that
function just for this one instance… The discovery of this issue
predates the clear distinction between bloat, quirks, and bugs, so it makes
sense to first classify what this issue even is. The distinction all comes
down to observability, which I defined as changes to rendered frames
between explicitly defined frame boundaries. That alone would be enough to
categorize any cause behind lag frames as bloat, but it can't hurt to be
more explicit here.
Therefore, I now officially judge observability in terms of an infinitely
fast PC-98 that can instantly render everything between two explicitly
defined frames, and will never add additional lag frames. If we plan to port
the games to faster architectures that aren't bottlenecked by disappointing
blitter chips, this is the only reasonable assumption to make, in my
opinion: The minimum system requirements in the games' README files are
minimums, after all, not recommendations. Chasing the exact frame
drop behavior that ZUN must have experienced during the time he developed
these games can only be a guessing game at best, because how can we know
which PC-98 model ZUN actually developed the games on? There might even be
more than one model, especially when it comes to TH01 which had been in
development for at least two years before ZUN first sold it. It's also not
like any current PC-98 emulator even claims to emulate the specific timing
of any existing model, and I sure hope that nobody expects me to import a
bunch of bulky obsolete hardware just to count dropped frames.
That leaves the tearing, where it's much more obvious how it's a bug. On an
infinitely fast PC-98, the ドカーン ("kaboom")
frame would never be visible, and thus falls into the same category as the
📝 two unused animations in the Sariel fight.
With only a single unconditional 2-frame delay inside the animation loop, it
becomes clear that ZUN intended both frames of the animation to be displayed
for 2 frames each:
No tearing, and 34 frames in total for the first of the two
instances of this animation.
Next up: Taking the oldest still undelivered push and working towards TH04
position independence in preparation for multilingual translations. The
Shuusou Gyoku OpenGL backend shouldn't take that much longer either,
so I should have lots of stuff coming up in May afterward.
P0235
TH02 RE (Stage tiles, part 1/2)
P0236
TH02 RE (Stage tiles, part 2/2)
P0237
TH02 RE (Spark structure + Point number popups + Bomb animation effects)
💰 Funded by:
Ember2528, Yanga
🏷️ Tags:
So, TH02! Being the only game whose main binary hadn't seen any dedicated
attention ever, we get to start the TH02-related blog posts at the very
beginning with the most foundational pieces of code. The stage tile system
is the best place to start here: It not only blocks every entity that is
rendered on top of these tiles, but is curiously placed right next to
master.lib code in TH02, and would need to be separated out into its own
translation unit before we can do the same with all the master.lib
functions.
In late 2018, I already RE'd
📝 TH04's and TH05's stage tile implementation, but haven't properly documented it on this
blog yet, so this post is also going to include the details that are unique
to those games. On a high level, the stage tile system works identically in
all three games:
The tiles themselves are 16×16 pixels large, and a stage can use 100 of
them at the same time.
The optimal way of blitting tiles would involve VRAM-to-VRAM copies
within the same page using the EGC, and that's exactly what the games do.
All tiles are stored on both VRAM pages within the rightmost 64×400 pixels
of the screen just right next to the HUD, and you only don't see them
because the games cover the same area in text RAM with black cells:
The initial screen of TH02's Stage 1, with the tile source
area uncovered by filling the same area in text RAM with transparent
cells instead of black ones. In TH02, this also reveals how the tile
area ends with a bunch of glitch tiles, tinted blue in the image. These
are the result of ZUN unconditionally blitting 100 tile images every
time, regardless of how many are actually contained in an
.MPN file.
These glitch tiles are another good example of a ZUN
landmine. Their appearance is the result of reading heap memory
outside allocated boundaries, which can easily cause segmentation faults
when porting the game to a system with virtual memory. Therefore, these
would not just be removed in this game's Anniversary Edition, but on the
more conservative debloated branch as well. Since the game
never uses these tiles and you can't observe them unless you manipulate
text RAM from outside the confines of the game, it's not a bug
according to our definition.
To reduce the memory required for a map, tiles are arranged into fixed
vertical sections of a game-specific constant size.
The 6 24×8-tile sections defined in TH02's STAGE0.MAP, in
reverse order compared to how they're defined in the file. Note the
duplicated row at the top of the final section: The boss fight starts
once the game scrolled the last full row of tiles onto the top of the
screen, not the playfield. But since the PC-98 text chip
covers the top tile row of the screen with black cells, this final row
is never visible, which effectively reduces a map's final tile section
to 7 rows rather than 8.
The actual stage map then is simply a list of these tile sections,
ordered from the start/bottom to the top/end.
Any manipulation of specific tiles within the fixed tile sections has to
be hardcoded. An example can be found right in Stage 1, where the Shrine
Tank leaves track marks on the tiles it appears to drive over:
This video also shows off the two issues with Touhou's first-ever
midboss: The replaced tiles are rendered below the midboss
during their first 4 frames, and maybe ZUN should have stopped the
tile replacements one row before the timeout. The first one is
clearly a bug, but it's not so clear-cut with the second one. I'd
need to look at the code to tell for sure whether it's a quirk or a
bug.
The differences between the three games can best be summarized in a table:
 | TH02 | TH04 | TH05
Tile image file extension | .MPN | .MPN | .MPN
Tile section format | .MAP | .MAP | .MAP
Tile section order defined as part of | .DT1 | .STD | .STD
Tile section index format | 0-based ID | 0-based ID × 2 | 0-based ID × 2
Tile image index format | Index between 0 and 100, 1 byte | VRAM offset in tile source area, 2 bytes | VRAM offset in tile source area, 2 bytes
Scroll speed control | Hardcoded | Part of the .STD format, defined per referenced tile section | Part of the .STD format, defined per referenced tile section
Redraw granularity | Full tiles (16×16) | Half tiles (16×8) | Half tiles (16×8)
Rows per tile section | 8 | 5 | 5
Maximum number of tile sections | 16 | 32 | 32
Lowest number of tile sections used | 5 (Stage 3 / Extra) | 8 (Stage 6) | 11 (Stage 2 / 4)
Highest number of tile sections used | 13 (Stage 4) | 19 (Extra) | 24 (Stage 3)
Maximum length of a map | 320 sections (static buffer) | 256 sections (format limitation) | 256 sections (format limitation)
Shortest map | 14 sections (Stage 5) | 20 sections (Stage 5) | 15 sections (Stage 2)
Longest map | 143 sections (Stage 4) | 95 sections (Stage 4) | 40 sections (Stage 1 / 4 / Extra)
The most interesting part about stage tiles is probably the fact that some
of the .MAP files contain unused tile sections. 👀 Many
of these are empty, duplicates, or don't really make sense, but a few
are unique, fit naturally into their respective stage, and might have
been part of the map during development. In TH02, we can find three unused
sections in Stage 5:
The non-empty tile sections defined in TH02's STAGE4.MAP,
showing off three unused ones.
These unused tile sections are much more common in the later games though,
where we can find them in TH04's Stage 3, 4, and 5, and TH05's Stage 1, 2,
and 4. I'll document those once I get to finalize the tile rendering code of
these games, to leave some more content for that blog post. TH04/TH05 tile
code would be quite an effective investment of your money in general, as
most of it is identical across both games. Or how about going for a full-on
PC-98 Touhou map viewer and editor GUI?
Compared to TH04 and TH05, TH02's stage tile code definitely feels like ZUN
was just starting to understand how to pull off smooth vertical scrolling on
a PC-98. As such, it comes with a few inefficiencies and suboptimal
implementation choices:
The redraw flag for each tile is stored in a 24×25 bool
array, wasting 7 of the 8 bits of every byte.
During bombs and the Stage 4, 5, and Extra bosses, the game disables the
tile system to render more elaborate backgrounds, which require the
playfield to be flood-filled with a single color on every frame. ZUN uses
the GRCG's RMW mode rather than TDW mode for this, leaving almost half of
the potential performance on the table for no reason. Literally, changing
modes only involves changing a single constant (see the sketch after this
list).
The scroll speed could theoretically be changed at any time. However,
the function that scrolls in new stage tiles can only ever blit part of a
single tile row during every call, so it's up to the caller to ensure
that scrolling always ends up on an exact 16-pixel boundary. TH02 avoids
this problem by keeping the scroll speed constant across a stage, using 2
pixels for Stage 4 and 1 pixel everywhere else.
Since the scroll speed is given in pixels, the slowest speed would be 1
pixel per frame. To allow the even slower speeds seen in the final game,
TH02 adds a separate scroll interval variable that only runs the
scroll function every 𝑛th frame, effectively adding a prescaler to the
scroll speed. In TH04 and TH05, the speed is specified as a Q12.4 value
instead, allowing true fractional speeds at any multiple of
1/16 pixels. This also necessitated a fixed algorithm
that correctly blits tile lines from two rows.
Finally, we've got a few inconsistencies in the way the code handles the
two VRAM pages, which cause a few unnecessary tiles to be rendered to just
one of the two pages. Mentioning that just in case someone tries to play
this game with a fully cleared text RAM and wonders where the flickering
tiles come from.
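About that single constant: The GRCG mode is selected through a write to I/O port 0x7C, and the two write modes differ in exactly one bit (a sketch; the values match the PC-98 hardware documentation):

outportb(0x7C, 0xC0); // GRCG on + RMW mode: The CPU operand acts as an
                      // alpha bitmask, costing an internal read-modify-
                      // write cycle for every VRAM access
outportb(0x7C, 0x80); // GRCG on + TDW mode: The CPU operand is ignored,
                      // and the tile register is written as-is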
Even though this was ZUN's first attempt at scrolling tiles, he already saw
it fit to write most of the code in assembly. This was probably a reaction
to all of TH01's performance issues, and the frame rate reduction
workarounds he implemented to keep the game from slowing down too much in
busy places. "If TH01 was all C++ and slow, TH02 better contain more ASM
code, and then it will be fast, right?"
Another reason for going with ASM might be found in the kind of
documentation that may have been available to ZUN. Last year, the PC-98
community discovered and scanned two new game programming tutorial books
from 1991 (1, 2).
Their example code is not only entirely written in assembly, but restricts
itself to the bare minimum of x86 instructions that were available on the
8086 CPU used by the original PC-9801 model 9 years earlier. Such code is
not only suboptimal
on the 486, but can often be actually worse than what your C++
compiler would generate. TH02 is where the trend of bad hand-written ASM
code started, and it
📝 only intensified in ZUN's later games. So,
don't copy code from these books unless you absolutely want to target the
earlier 8086 and 286 models. Which,
📝 as we've gathered from the recent blitting benchmark results,
are not all too common among current real-hardware owners.
That said, all that ASM code really only impacts readability and
maintainability. Apart from the aforementioned issues, the algorithms
themselves are mostly fine – especially since most EGC and GRCG operations
are decently batched this time around, in contrast to TH01.
Luckily, the tile functions merely use inline assembly within a
typical C function and can therefore be at least part of a C++ source file,
even if the result is pretty ugly. This time, we can actually be sure that
they weren't written directly in a .ASM file, because they feature x86
instruction encodings that can only be generated with Turbo C++ 4.0J's
inline assembler, not with TASM. The same can't unfortunately be said about
the following function in the same segment, which marks the tiles covered by
the spark sprites for redrawing. In this one, it took just one dumb hand-written ASM
inconsistency in the function's epilog to make the entire function
undecompilable.
The standard x86 instruction sequence to set up a stack frame in a function prolog looks like this:
PUSH BP
MOV BP, SP
SUB SP, ?? ; if the function needs the stack for local variables
When compiling without optimizations, Turbo C++ 4.0J will
replace this sequence with a single ENTER instruction. That one
is two bytes smaller, but much slower on every x86 CPU except for the 80186
where it was introduced.
In functions without local variables, BP and SP
remain identical, and a single POP BP is all that's needed in
the epilog to tear down such a stack frame before returning from the
function. Otherwise, the function needs an additional MOV SP,
BP instruction to pop all local variables. With x86 being the helpful
CISC architecture that it is, the 80186 also introduced the
LEAVE instruction to perform both tasks. Unlike
ENTER, this single instruction
is faster than the raw two instructions on a lot of x86 CPUs (and
even current ones!), and it's always smaller, taking up just 1 byte instead
of 3. So what if you use LEAVE even if your function
doesn't use local variables? The fact that the
instruction first does the equivalent of MOV SP, BP doesn't
matter if these registers are identical, and who cares about the additional
CPU cycles of LEAVE compared to just POP BP,
right? So that's definitely something you could theoretically do, but
not something that any compiler would ever generate.
And so, TH02 MAIN.EXE decompilation already hits the first
brick wall after two pushes. Awesome! Theoretically,
we could slowly mash through this wall using the 📝 code generator. But having such an inconsistency in the
function epilog would mean that we'd have to keep Turbo C++ 4.0J from
emitting any epilog or prolog code so that we can write our
own. This means that we'd once again have to hide any use of the
SI and DI registers from the compiler… and doing
that requires code generation macros for 22 of the 49 instructions of
the function in question, almost none of which we currently have. So, this
gets quite silly quite fast, especially if we only need to do it
for one single byte.
Instead, wouldn't it be much better if we had a separate build step between
compile and link time that allowed us to replicate mistakes like these by
just patching the compiled .OBJ files? These files still contain the names
of exported functions for linking, which would allow us to look up the code
of a function in a robust manner, navigate to specific instructions using a
disassembler, replace them, and write the modified .OBJ back to disk before
linking. Such a system could then naturally expand to cover all other
decompilation issues, culminating in a full-on optimizer that could even
recreate ZUN's self-modifying code. At that point, we would have sealed away
all of ZUN's ugly ASM code within a separate build step, and could finally
decompile everything into readable C++.
Pulling that off would require a significant tooling investment though.
Patching that one byte in TH02's spark invalidation function could be done
within 1 or 2 pushes, but that's just one issue, and we currently have 32
other .ASM files with undecompilable code. Also, note that this is
fundamentally different from what we're doing with the
debloated branch and the Anniversary Editions. Mistake patching
would purely be about having readable code on master that
compiles into ZUN's exact binaries, without fixing weird
code. The Anniversary Editions go much further and rewrite such code in
a much more fundamental way, improving it further than mistake patching ever
could.
Right now, the Anniversary Editions seem much more
popular, which suggests that people just want 100% RE as fast as
possible so that I can start working on them. In that case, why bother with
such undecompilable functions, and not just leave them in raw and unreadable
x86 opcode form if necessary… But let's first
see how much backer support there actually is for mistake patching before
falling back on that.
The best part though: Once we've made a decision and then covered TH02's
spark and particle systems, that was it, and we will have already RE'd
all ZUN-written PC-98-specific blitting code in this game. Every further
sprite or shape is rendered via master.lib, and is thus decently abstracted.
Guess I'll need to update
📝 the assessment of which PC-98 Touhou game is the easiest to port,
because it sure isn't TH01, as we've seen with all the work required for the first Anniversary Edition build.
Until then, there are still enough parts of the game that don't use any of
the remaining few functions in the _TEXT segment. Previously, I
mentioned in the 📝 status overview blog post
that TH02 had a seemingly weird sprite system, but the spark and point popup
() structures showed that the game just
stores the current and previous position of its entities in a slightly
different way compared to the rest of PC-98 Touhou. Instead of having
dedicated structure fields, TH02 uses two-element arrays indexed with the
active VRAM page. Same thing, and such a pattern even helps during RE since
it's easy to spot once you know what to look for.
There's not much to criticize about the point popup system, except for maybe
a landmine that causes sprite glitches when trying to display more than
99,990 points. Sadly, the final push in this delivery was rounded out by yet
another piece of code at the opposite end of the quality spectrum. The
particle and smear effects for Reimu's bomb animations consist almost
entirely of assembly bloat, which would just be replaced with generic calls
to the generic blitter in this game's future Anniversary Edition.
If I continue to decompile TH02 while avoiding the brick wall, items would
be next, but they probably require two pushes. Next up, therefore:
Integrating Stripe as an alternative payment provider into the order form.
There have been at least three people who reported issues with PayPal, and
Stripe has been working much better in tests. In the meantime, here's a temporary Stripe
order link for everyone. This one is not connected to the cap yet, so
please make sure to stay within whatever value is currently shown on the
front page – I will treat any excess money as donations.
If there's some time left afterward, I might
also add some small improvements to the TH01 Anniversary Edition.
Turns out I was not quite done with the TH01 Anniversary Edition yet.
You might have noticed some white streaks at the beginning of Sariel's
second form, which are in fact a bug that I accidentally added to the
initial release.
These can be traced back to a quirk
I wasn't aware of, and hadn't documented so far. When defeating Sariel's
first form during a pattern that spawns pellets, it's likely for the second
form to start with additional pellets that resemble the previous pattern,
but come out of seemingly nowhere. This shouldn't really happen if you look
at the code: Nothing outside the typical pattern code spawns new pellets,
and all existing ones are reset before the form transition…
Except if they're currently showing the 10-frame delay cloud
animation, activated for all pellets during the symmetrical radial 2-ring
pattern in Phase 2 and left activated for the rest of the fight. These
pellets will continue their animation after the transition to the second
form, and turn into regular pellets you have to dodge once their animation
completed.
By itself, this is just one more quirk to keep in mind during refactoring.
It only turned into a bug in the Anniversary Edition because the game tracks
the number of living pellets in a separate counter variable. After resetting
all pellets, this counter is simply set to 0, regardless of any delay cloud
pellets that may still be alive, and it's merely incremented or decremented
when pellets are spawned or leave the playfield.
In the original game, this counter is only used as an optimization to skip
spawning new pellets once the cap is reached. But with batched
EGC-accelerated unblitting, it also makes sense to skip the rather costly
setup and shutdown of the EGC if no pellets are active anyway. Except if the
counter you use to check for that case can be 0 even if there are
pellets alive, which consequently don't get unblitted…
There is an optimal fix though: Instead of unconditionally resetting the
living pellet counter to 0, we decrement it for every pellet that
does get reset. This preserves the quirk and gives us a
consistently correct counter, allowing us to still skip every unnecessary
loop over the pellet array.
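In code, the fix amounts to something like this sketch; the identifiers are invented here, and the actual Anniversary Edition code looks different:

void pellets_reset(void)
{
	for(int i = 0; i < PELLET_CAP; i++) {
		// Preserving the quirk: Pellets in the delay cloud animation
		// survive the reset…
		if(pellets[i].alive && !pellets[i].in_delay_cloud) {
			pellets[i].alive = false;
			pellets_alive--; // …and the counter stays consistent, unlike
			                 // the original blanket `pellets_alive = 0;`
		}
	}
}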
Cutting out the lengthy defeat animation makes it easier to see where the
additional pellets come from. Also, note how regular unblitting resumes
once the first pellet gets clipped at the top of the playfield – the
living pellet counter then gets decremented to -1, and who uses
<= rather than == on a seemingly unsigned
counter, right?
Ultimately, this was a harmless bug that didn't affect gameplay, but it's
still something that players would have probably reported a few more times.
So here's a free bugfix:
P0229
TH01 debloating (Single-executable build, part 1/2)
P0230
TH01 debloating (Single-executable build, part 2/2)
P0231
Research (Spawning TSRs from C)
P0232
Portability (PC-98 platform layer, part 1)
P0233
Research (Performance of various PC-98 blitting approaches)
P0234
TH01 Anniversary Edition (Removing interlaced pellet rendering + Merging previous fixes)
💰 Funded by:
Ember2528, [Anonymous]
🏷️ Tags:
128 commits! Who would have thought that the ideal first release of the TH01
Anniversary Edition would involve so much maintenance, and raise so many
research questions? It's almost as if the real work only starts after
the 100% finalization mark… Once again, I had to steal some funding from the
reserved JIS trail word pushes to cover everything I wanted to research,
which means that the next pushes towards the
anything goal will repay this debt. Luckily, this doesn't affect any
immediate plans, as I'll be spending March with tasks that are already fully
funded.
So, how did this end up so massive? The list of things I originally set out
to do was pretty short:
Build entire game into single executable
Fix rendering issues in the one or two most important parts of the game
for a good initial impression
But even the first point already started with tons of little cleanup
commits. A part of them can definitely be blamed on the rush to hit the 100%
decompilation mark before the 25th anniversary last August.
However, all the structural changes that I can't commit to
master reveal how much of a mess the TH01 codebase actually
is.
Merging the executables is mainly difficult because of all the
inconsistencies between REIIDEN.EXE and FUUIN.EXE.
The worst parts can be found in the REYHI*.DAT format code and
the High Score menu, but the little things are just as annoying, like how
the current score is an unsigned variable in
REIIDEN.EXE, but a signed one in FUUIN.EXE.
If it takes me this long and this many
commits just to sort out all of these issues, it's no wonder that the only
thing I've seen being done with this codebase since TH01's 100%
decompilation was a single porting attempt that ended in a rather quick
ragequit.
So why are we merging the executables in preparation for the Anniversary
Edition, and not waiting with it until we start doing ports?
Distributing and updating one executable is cleaner than doing the same
with three, especially as long as installation will still involve manually
dropping the new binary into the game directory.
The Anniversary Edition won't be the only fork binary. We are already
going to start out with a separate DEBLOAT.EXE that contains
only the bloat removal changes without any bug fixes, and spaztron64
will probably redo his seizure-less edition. We don't want to clutter
the game directory with three binaries for each of these fork builds, and we
especially don't want to remember things like oh, but this fork
only modifies REIIDEN.EXE…
All forks should run side-by-side with the original game. During the
time I was maintaining thcrap, I've had countless bug reports of people
assuming that thcrap was
responsible for bugs that were present in the original game, and the
same is certain to happen with the Anniversary Edition. Separate binaries
will make it easier for everyone to check where these bugs came from.
Also, I'd like to make a point about how bloated the original
three-executable structure really is, since I've heard people defending it
as neat software architecture. Really, even in Real Mode where you typically
want to use as little of the 640 KiB of conventional memory as possible, you
don't want to split your game up like this.
The game actually is so bloated that the combined binary ended up
smaller than the original REIIDEN.EXE. If all you see are the
file sizes of the original three executables, this might look like a
pretty impressive feat. Like, how can we possibly get 407,812
bytes into less than 238,612 bytes, without using compression?
If you've ever looked at the linker map though, it's not at all surprising.
Excluding the aforementioned inconsistencies that are hard to quantify,
OP.EXE and FUUIN.EXE only feature 5,767 and 6,475
bytes of unique code and data, respectively. All other code in these
binaries is already part of REIIDEN.EXE, with more than half of
the size coming from the Borland C++ runtime. The single worst offender here
is the C++ exception handler that Borland forces
onto every non-.COM binary by default, which alone adds 20,512 bytes
even if your binary doesn't use C++ exceptions.
On a more hilarious note, this
single line is responsible for pulling another unnecessary 14,242 bytes
into OP.EXE and FUUIN.EXE. This floating-point
multiplication is completely unnecessary in this context because all
possible parameters are integers, but it's enough for Turbo C++ and TLINK to
pull in the entire x87 FPU emulation machinery. These two binaries don't
even draw lines, but since this function is part of the general
graphics code translation unit and contains other functions that these
binaries do need, TLINK links in the entire thing. Maybe, multiple
executables aren't the best choice either if you use a linker that can't do
dead code elimination…
Since the 📝 Orb's physics do turn the entire
precision of a double variable into gameplay effects, it's not
feasible to ever get rid of all FPU code in TH01. The exception handler,
however, can
be removed, which easily brings the combined binary below the size of
the original REIIDEN.EXE. Compiling all code with a single set
of compiler optimization flags, including the more x86-friendly
pascal calling convention, then gets us a few more KB on top.
As does, of course, removing unused code: The only remaining purpose of
features such as 📝 resident palettes is to
potentially make porting more difficult for anyone who doesn't immediately
realize that nothing in the game uses these functions.
Technically, all unused code would be bloat, but for now, I'm keeping
the parts that may tell stories about the game's development history (such
as unused effects or the 📝 mouse cursor), or
that might help with debugging. Even with that in mind, I've only scratched
the surface when it comes to bloat removal, and the binary is only going to
get smaller from here. A lot smaller.
If only we now could start MDRV98 from this new combined binary, we wouldn't
need a second batch file either…
Which brings us to the first big research question of this delivery. Using
the C spawn() function works fine on this compiler, so
spawn("MDRV98.COM") would be all we need to do, right? Except
that the game crashes very soon after that subprocess returned.
So it's not going to be that easy if the spawned process is a TSR.
But why should this be a problem? Let's take a look at the DOS heap, and how
DOS lays out processes in conventional memory if we launch the game
regularly through GAME.BAT:
The rough layout of the DOS heap when launching TH01 from
GAME.BAT.
The batch file starts MDRV98 first, which will therefore end up below
the game in conventional memory. This is perfect for a TSR: The program can
resize itself arbitrarily before returning to DOS, and the rest of memory
will be left over for the game. If we assume such a layout, a DOS program
can implement a custom memory allocator in a very simple way, as it only has
to search for free memory in one direction – and this is exactly how Borland
implemented the C heap for functions like malloc() and
free(), and the C++ new and delete
operators.
But if we spawn MDRV98 after starting TH01, well…
MDRV98 will spawn in the next free memory location, allocate itself, return
to TH01… which suddenly finds its C heap blocked from growing. As a result,
the next big allocation will immediately fail with a rather misleading "out
of memory" error.
So, what can we do about this? Still in a bloat removal mindset, my gut
reaction was to just throw out Borland's C heap implementation, and replace
it with a very thin wrapper around the DOS heap as managed by INT 21h,
AH=48h/49h/4Ah. Like, why
did these DOS compilers even bother with a custom allocator in the first
place if DOS already comes with a perfectly fine native one? Using the
native allocator would completely erase the distinction between TSR memory
and game memory, and inherently allow the game to allocate beyond
MDRV98.
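Borland's runtime even exposes these syscalls as C functions, which makes the core of such a wrapper almost trivial. A sketch, not the actual code I ended up writing:

#include <dos.h>

// allocmem(), setblock(), and freemem() map directly onto INT 21h,
// AH=48h/4Ah/49h. All sizes are given in 16-byte paragraphs.
int dos_heap_alloc(unsigned long bytes, unsigned *seg)
{
	unsigned paragraphs = ((bytes + 15) >> 4);

	// Careful: allocmem() returns -1 on *success*, and the size of the
	// largest free block on failure.
	return (allocmem(paragraphs, seg) == -1);
}

void dos_heap_free(unsigned seg)
{
	freemem(seg);
}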
I did in fact implement this, and noticed even more benefits:
While DOS uses 16 bytes rather than Borland's 4 bytes for the control
structure of each memory block, this larger size automatically aligns all
allocations to 16-byte boundaries. Therefore, all allocation addresses would
fit into 16-bit segment-only pointers rather than needing 32-bit
far ones. On the Borland heap, the 4-byte header further limits
regular far pointers to 65,532 bytes, forcing you into
expensive huge pointers for bigger allocations.
Debuggers in DOS emulators typically have features to show and manage
the DOS heap. No need for custom debugging code.
You can change the memory placement
strategy to allocate from the top of conventional memory down to the
bottom. This is how the games allocate their resident structures.
Ultimately though, the drawbacks became too significant. Most of them are
related to the PC-98 Touhou games only ever creating a single DOS
process, even though they contain multiple executables.
Switching executables is done via exec(), which resizes a
program's main allocation to match the new binary and then overwrites the
old program image with the new one. If you've ever wondered why DOSBox-X
only ever shows OP as the active process name in the title bar,
you now know why. As far as DOS is concerned, it's still the same
OP.EXE process rooted at the same segment, and
exec() doesn't bother rewriting the name either. Most
importantly though, this is how REIIDEN.EXE can launch into
another REIIDEN.EXE process even if there are less than 238,612
bytes free when exec() is called, and without consuming more
memory for every successive binary.
For now, ANNIV.EXE still re-exec()s itself at
every point where the original game did, as ZUN's original code really
depends on being reinitialized at boss and scene boundaries. The resulting
accidental semi-hot reloading is also a useful property to retain
during development.
So why is the DOS heap a bad idea for regular game allocation after all?
Even DOS automatically releases all memory associated with a process
during its termination. But since we keep running the same process until the
player quits out of the main menu, we lose the C heap's implicit cleanup on
exec(), and have to manually free all memory ourselves.
Since the binary can be larger after hot reloading, we in fact have
to allocate all regular memory using the last fit strategy.
Otherwise, exec() fails to resize the program's main block for
the same reason that crashed the game on our initial attempt to
spawn("MDRV98.COM").
Just like Borland's heap implementation, the DOS heap stores its control
structures immediately before each allocation, forming a singly linked list.
But since the entire OS shares this single list, corruptions from heap
overflows also affect the whole system, and become much more disastrous.
Theoretically, it might be possible to recover from them by forcibly
releasing all blocks after the last correct one, or even by doing a
brute-force search for valid memory
control blocks, but in reality, DOS will likely just throw error code #7
(ERROR_ARENA_TRASHED) on the next memory management syscall,
forcing a reboot.
With a custom allocator, small corruptions remain isolated to the process.
They can be even further limited if the process adds some padding between
its last internal allocation and the end of the allocated DOS memory block;
Borland's heap sort of does this as well by always rounding up the DOS block
to a full KiB. All this might not make a difference in today's emulated and
single-tasked usage, but would have back then when software was still
developed inside IDEs running on the same system.
TH01's debug mode uses heapcheck() and
heapchecknode(), and reimplementing these on top of the DOS
heap is not trivial. On the contrary, it would be the most complicated part
of such a wrapper, by far.
I could release this DOS heap wrapper in unused form for another push if
anyone's interested, but for now, I'm pretty happy with not actually using
it in the games. Instead, let's stay with the Borland C heap, and find a way
to push MDRV98 to the very top of conventional RAM.
Which is much easier said than done. It would be nice if we could just use
the last fit allocation strategy here, but .COM executables always
receive all free memory by default anyway, which eliminates any difference
between the strategies.
But we can still change memory itself. So let's temporarily claim all
remaining free memory, minus the exact amount we need for MDRV98, for our
process. Then, the only remaining free space to spawn MDRV98 is at the exact
place where we want it to be.
Obviously, we release all the additional memory after spawning MDRV98.
Now we only need to know how much memory to not temporarily allocate. First,
we need to replicate the assumption that MDRV98's -M7
command-line parameter corresponds to a resident size of 23,552 bytes. This
is not as bad as it seems, because the -M parameter explicitly
has a KiB unit, and we can nicely abstract it away for the API.
The (env.) block though? Its minimum size equals the combined length
of all environment variables passed to the process, but its maximum size is…
not limited at all?! As in, DOS implementations can add and have
historically added more free space because some programs insisted on storing
their own new environment variables in this exact segment. DOSBox and
DOSBox-X follow this tradition by providing a configuration option for the
additional amount of environment space, with the latter adding 1024
additional bytes by default, y'know, just in case someone wants to compile
FreeDOS on a slow emulator. It's not even worth sending a bug report for
this specific case, because it's only a symptom of the fact that
unexpectedly large program environment blocks can and will happen, and are
to be expected in DOS land.
So thanks to this cruel joke, it's technically impossible to achieve what we
want to do there. Hooray! The only thing we can kind of do here is an
educated guess: Sum up the length of all environment variables in our
environment block, compare that length against the allocated size of the
block, and assume that the MDRV98 process will get as much additional memory
as our process got. 🤷
The remaining hurdles came courtesy of some Borland C runtime implementation
details. You would think that the temporary reallocation could even be done
in pure C using the sbrk(), coreleft(), and
brk() functions, but all values passed to or returned from
these functions are inaccurate because they don't factor in the
aforementioned KiB padding to the underlying DOS memory block. So we have to
directly use the DOS syscalls after all. Which at least means that learning
about them wasn't completely useless…
The final issue is caused inside Borland's
spawn() implementation. The environment block for the
child process is built out of all the strings reachable from C's
environ pointer, which is what that FreeDOS build process
should have used. Coalescing them into a single buffer involves yet
another C heap allocation… and since we didn't report our DOS memory block
manipulation back to the C heap, the malloc() call might think
it needs to request more memory from DOS. This resets the DOS memory block
back to its intended level, undoing our manipulation right before the actual
INT 21h, AH=4Bh
EXEC syscall. Or in short:
Manipulate DOS heap ➜ spawn() call ➜ _LoadProg() ➜ allocate and prepare environment block ➜ _spawn() ➜ DOS EXEC syscall
The obvious solution: Replace _LoadProg(), implement the
coalescing ourselves, and do it before the heap manipulation. Fortunately,
Borland's internal low-level _spawn() function is not
static, so we can call it ourselves whenever we want to:
Allocate and prepare environment block ➜ manipulate DOS heap ➜ _spawn() call ➜ DOS EXEC syscall
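Condensed into a sketch, with invented names, no error handling, and the environment block preparation omitted, the whole trick then looks like this:

#include <dos.h>

void spawn_at_top_of_conventional_memory(unsigned mdrv98_paragraphs)
{
	unsigned barrier;

	// Deliberately over-allocate to get the size of the largest free block…
	unsigned largest = allocmem(0xFFFF, &barrier);

	// …then turn all of it, minus our guess for what MDRV98's process will
	// need, into a temporary barrier allocation.
	allocmem((largest - mdrv98_paragraphs), &barrier);

	// The only remaining free space now lies at the top of conventional
	// memory. (Actual parameters of Borland's internal _spawn() omitted.)
	_spawn(/* … "MDRV98.COM", prepared environment block, … */);

	freemem(barrier); // release the barrier again
}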
So yes, launching MDRV98 from C can be done, but it involves advanced
witchcraft and is completely ridiculous.
Launching external sound drivers from a batch file is the right way
of doing things.
Fortunately, you don't have to rely on this auto-launching feature. You can
still launch DEBLOAT.EXE or ANNIV.EXE from a batch
file that launched MDRV98.COM before, and the binaries will
detect this case and skip the attempt of launching MDRV98 from C. It's
unlikely that my heuristic will ever break, but I definitely recommend
replicating GAME.BAT just to be completely sure – especially
for user-friendly repacks that don't want to include the original game
anyway.
This is also why ANNIV.EXE doesn't launch
ZUNSOFT.COM: The "correct" and stable way to launch
ANNIV.EXE still involves a batch file, and I would say that
expecting people to remove ZUNSOFT.COM from that file is worse
than not playing the animation. It's certainly a debate we can have, though.
This deep dive into memory allocation revealed another previously
undocumented bug in the original game. The RLE decompression code for the
東方靈異.伝 packfile contains two heap overflows, which are
actually triggered by SinGyoku's BOSS1_3.BOS and Konngara's
BOSS8_1.BOS. The only reason they don't immediately crash the
game when loading these bosses lies in two implementation details of
Borland's C heap.
Obviously, this is a bug we should fix, but according to the definition of
bugs, that fix would be exclusive to the anniversary branch.
Isn't that too restrictive for something this critical? This code is
guaranteed to blow up with a different heap implementation, if only in a
Debug build. And besides, nobody would notice a fix
just by looking at the game's rendered output…
Looks like we have to introduce a fourth category of weird code, in addition
to the previous bloat, bug, and quirk categories, for
invisible internal issues like these. Let's call it landmine, and fix
them on the debloated branch as well. Thanks to
Clerish for the naming inspiration!
With this new category, the full definitions for all categories have become
quite extensive. Thus, they now live in CONTRIBUTING.md
inside the ReC98 repository.
With the new discoveries and the new landmine category, TH01 is now at 67
bugs and 20 landmines. And the solution for the landmine in question? Simplifying
the 61 lines of the original code down to 16. And yes, I'm including
comments in these numbers – if the interactions of the code are complex
enough to require multi-paragraph comments, these are a necessary and
valid part of the code.
While we're on the topic of weird code and its visible or invisible effects,
there's one thing you might be concerned about. With all the rearchitecting
and data shifting we're doing on the debloated branch, what
will happen to the 📝 negative glitch stages?
These are the result of a clearly observable bug that, by definition, must
not be fixed on the debloated branch. But given that the
observable layout of the glitch stages is defined by the memory
surrounding the scene stage variable, won't the
debloated branch inherently alter their appearance (= ⚠️
fanfiction ⚠️), or even remove them completely?
Well, yes, it will. But we can still preserve their layout by
hardcoding
the exact original data that the game would originally read, and even emulate
the original segment relocations and other pieces of global data.
Doing this is feasible thanks to the fact that there are only 4 glitch
stages. Unfortunately, the same can't be said for the timer values, which
are determined by an array lookup with the un-modulo'd stage ID. If we
wanted to preserve those as well, we'd have to bundle an exact copy of the
original REIIDEN.EXE data segment to preserve the values of all
32,768 negative stages you could possibly enter, together with a map
of all relocations in this segment. 😵 Which I've decided against for now,
since this has been going on for far too long already. Let's first see if
anyone ever actually complains about details like this…
Alright, time to start the anniversary branch by rendering
everything at its correct internal unaligned X position? Eh… maybe not quite
yet. If we just hacked all the necessary bit-shifting code into all the
format-specific blitting functions, we'd still retain all this largely
redundant, bad, and slow code, and would make no progress in terms of
portability. It'd be much better to first write a single generic blitter
that's decently optimized, but supports all kinds of sprites to make this
optimization actually worth something.
So, next research question: What would such a blitter look like? After I
learned during my
📝 first foray into cycle counting that port
I/O is slow on 486 CPUs, it became clear that TH04's
📝 GRCG batching for pellets was one of the
more useful optimizations that probably contributed a big deal towards
achieving the high bullet counts of that game. This leads to two
conclusions:
master.lib's super_*() sprite functions are slow, and not
worth looking at for inspiration. Even the 📝 tiny format reinitializes the GRCG on every color change, wasting 80
cycles.
Hence, our low-level blitting API should not even care about colors. It
should only concern itself with blitting a given 1bpp sprite to a single
VRAM segment. This way, it can work for both 4-plane sprites and
single-plane sprites, and just assume that the GRCG is active.
Maybe we should also start by not even doing these unaligned bit shifts
ourselves, and instead expect the call site to
📝 always deliver a byte-aligned sprite that is correctly preshifted,
if necessary? Some day, we definitely should measure how slow runtime
shifting would really be…
What we should do, however, are some further general optimizations that I
would have expected from master.lib: Unrolling the vertical
loop, and baking a single function for every sprite width to eliminate
the horizontal loop. We can then use the widest possible x86
MOV instruction for the lowest possible number of cycles per
row – for example, we'd blit a 56-wide sprite with three MOVs
(32-bit + 16-bit + 8-bit), and a 64-wide one with two 32-bit
MOVs.
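As an illustration of the baking idea (this is not the actual ReC98 blitter code), the generated function for a 56-pixel-wide sprite row could look like this:

// 56 pixels = 7 bytes per row = one 32-bit, one 16-bit, and one 8-bit MOV,
// with no loop overhead left.
inline void blit_row_56(unsigned char far *vram, const unsigned char far *sprite)
{
	*(unsigned long far *)(vram + 0) = *(const unsigned long far *)(sprite + 0);
	*(unsigned int far *)(vram + 4) = *(const unsigned int far *)(sprite + 4);
	vram[6] = sprite[6];
}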
Or maybe not? There's a lot of blitting code in both master.lib and PC-98
Touhou that checks for empty bytes within sprites to skip needlessly writing
them to VRAM:
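// A sketch of that pattern (not a verbatim quote from either codebase):
if(sprite_row.left != 0x00) {
	vram_row[0] = sprite_row.left;
}
if(sprite_row.right != 0x00) {
	vram_row[1] = sprite_row.right;
}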
Which goes against everything you think you know about computers. We aren't
running on an 8-bit CPU here, so wouldn't it be faster to always write both
halves of a sprite in a single operation?
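// Again sketched: both halves of the row, unconditionally, in one 16-bit MOV
*(unsigned int far *)vram_row = sprite_row.both;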
That's a single CPU instruction, compared to two instructions and two
branches. The only possible explanation for this would be that VRAM writes
are so slow on PC-98 that you'd want to avoid them at all costs, even
if that means additional branching on the CPU to do so. Or maybe that was
something you would want to do on certain models with slow VRAM, but not on
others?
So I wrote a benchmark to answer all these questions, and to compare my new
blitter against typical TH01 blitting code:
A not really representative run on DOSBox-X. Since the master.lib sprite
functions are also unbatched, I expect them to not be much faster than
the naive C implementation.
2023-03-05-blitperf.zip
And here are the real-hardware results I've got from the PC-9800
Central Discord server:
Model                    | PC-286LS      | PC-9801ES      | PC-9821Cb/Cx  | PC-9821Ap3      | PC-9821An       | PC-9821Nw133     | PC-9821Ra20
CPU                      | 80286, 12 MHz | i386SX, 16 MHz | 486SX, 33 MHz | 486DX4, 100 MHz | Pentium, 90 MHz | Pentium, 133 MHz | Pentium Pro, 200 MHz
Year                     | 1987          | 1989           | 1994          | 1994            | 1994            | 1997             | 1996
Unchecked, C, GRCG       | 36,85 / 38,42 | 26,02 / 26,87  | 3,98 / 4,13   | 2,08 / 2,16     | 1,81 / 1,87     | 0,86 / 0,89      | 1,25 / 1,25
Unchecked, MOVS, GRCG    | 15,22 / 16,87 |  9,33 / 10,19  | 1,22 / 1,37   | 0,44 / 0,44     |                 |                  |
Unchecked, MOV, GRCG     | 15,42 / 17,08 |  9,65 / 10,53  | 1,15 / 1,30   | 0,44 / 0,44     |                 |                  |
Unchecked, 4-plane       | 37,23 / 43,97 | 29,20 / 32,96  | 4,44 / 5,01   | 4,39 / 4,67     | 5,11 / 5,32     | 5,61 / 5,74      | 6,63 / 6,64
Checking first, GRCG     | 17,49 / 19,15 | 10,84 / 11,72  | 1,27 / 1,44   | 1,04 / 1,07     | 0,54 / 0,54     |                  |
Checking first, 4-plane  | 46,49 / 53,36 | 35,01 / 38,79  | 5,66 / 6,26   | 5,43 / 5,74     | 6,56 / 6,80     | 8,08 / 8,29      | 10,25 / 10,29
Checking second, GRCG    | 16,47 / 18,12 | 10,77 / 11,65  | 1,25 / 1,39   | 1,02            | 0,51 / 0,51     |                  |
Checking second, 4-plane | 43,41 / 50,26 | 33,79 / 37,82  | 5,22 / 5,81   | 5,14 / 5,43     | 6,18 / 6,40     | 7,57 / 7,77      | 9,58 / 9,62
Checking both, GRCG      | 16,14 / 18,03 | 10,84 / 11,71  | 1,33 / 1,49   | 1,01            | 0,49 / 0,49     |                  |
Checking both, 4-plane   | 43,61 / 50,45 | 34,11 / 37,87  | 5,39 / 5,99   | 4,92 / 5,23     | 5,88 / 6,11     | 7,19 / 7,43      | 9,10 / 9,13
Number of frames required to render 2000 16×8 pellet sprites on a variety of
PC-98 models, using the new generic blitter. Each cell lists the preshifted
time first and the runtime-shifted one second; empty cells correspond to
times faster than a single frame. Thanks to cuba200611,
Shoutmon, cybermind, and Digmac for running the tests!
The key takeaways:
Checking for empty bytes has never been a good idea.
Preshifting sprites made a slight difference on the 286. Starting with
the 386 though, that difference got smaller and smaller, until it completely
vanished on Pentium models. The memory tradeoff is especially not worth it
for 4-plane sprites, given that you would have to preshift each of the 4
planes and possibly even a fifth alpha plane. Ironically, ZUN only ever
preshifted monochrome single-bitplane sprites with a width of 8 pixels.
That's the smallest possible amount of memory a sprite can possibly take,
and where preshifting consequently has the smallest effect on performance.
Shifting 8-wide sprites on the fly literally takes a single ROL
or ROR instruction per row.
You might want to use MOVS instead of MOV when
targeting the 286 and 386, but the performance gains are barely worth the
resulting mess you would make out of your blitting code. On Pentium models,
there is no difference.
Use the GRCG whenever you have to render lots of things that share a
static 8×1 pattern.
These are the PC-98 models that the people who are willing to test your
newly written PC-98 code actually use.
Since this won't be the only piece of game-independent and explicitly
PC-98-specific custom code involved in this delivery, it makes sense to
start a
dedicated PC-98 platform layer. This code will gradually eliminate the
dependency on master.lib and replace it with better optimized and more
readable C++ code. The blitting benchmark, for example, is already
implemented completely without master.lib.
While this platform layer is mainly written to generate optimal code within
Turbo C++ 4.0J, it can also serve as general PC-98 documentation for
everyone who prefers code over machine-translating old Japanese books. Not
to mention the immediacy of having all actual relevant information in
one place, which might otherwise be pretty well hidden in these books, or
some obscure old text file. For example, did you know that uploading gaiji
via INT 18h might end up disabling the VSync interrupt trigger,
deadlocking the process on the next frame delay loop? This nuisance is not
replicated by any emulators, and it's quite frustrating to encounter it when
trying to run your code on real hardware. master.lib works around it by
simply hooking INT 18h and unconditionally reenabling the VSync
interrupt trigger after the original handler returns, and so does our
platform layer.
So, with the pellet draw calls batched and routed through the new renderer,
we should have gained enough free CPU cycles to disable
📝 interlaced pellet rendering without any
impact on frame rates?
Well, kinda. We do get 56.4 FPS, but only together with noticeable and
reproducible tearing in the top part of the playfield, suggesting exactly
why ZUN interlaced the rendering in the first place. 😕 So have we
already reached the limit of single-buffered PC-98 games here, or can we
still do something about it?
As it turns out, the main bottleneck actually lies in the pellet
unblitting code. Every EGC-"accelerated" unblitting call in TH01 is
as unbatched as the pellet blitting calls were, spending an additional 17
I/O port writes per call to completely set up and shut down the EGC, every
time. And since this is TH01, the two-instruction operation of changing the
active PC-98 VRAM page isn't inlined either, but instead done via a function
call to a faraway segment. On the 486, that's:
>341 cycles for EGC setup and teardown, plus
>72 cycles for each 16-pixel chunk to be unblitted.
This sums up to
>917 cycles of completely unnecessary work for every active pellet,
in the optimal 50% of cases where it lies on an even VRAM byte,
or
>1493 cycles if it lies on an odd VRAM byte, because ZUN's code
extends the unblitted rectangle to a gargantuan 32×8 pixels in this case.
And this calculation even ignores all the small micro-optimizations that
could further speed up the unblitting loop. Multiply that by the game's pellet
cap of 100, and we get a 6-digit number of wasted CPU cycles. On
paper, that's roughly 1/6 of the time we have for each frame at
our target 56.423 FPS on the game's target 33 MHz systems. Might not
sound all too critical, but the single-buffered nature of the game means
that we're effectively racing the beam on every frame. In turn, we have to
be even more serious about performance.
So, time to also add a batched EGC API to our PC-98 platform layer? Writing
our own EGC code presents a nice opportunity to finally look deeper into all
its registers and configuration options, and see what exactly we can do
about ZUN's enforced 16-pixel alignment.
To nobody's surprise, this alignment is completely unnecessary, and only
displays a lack of knowledge about the chip. While it is true that
the EGC wants VRAM to be exclusively addressed in 16-bit chunks at
16-bit-aligned addresses, it specifically provides
an address register (0x4AC) for shifting the horizontal
start offsets of the source and destination to any pixel within the
16 pixels of such a chunk, and
a bit length register (0x4AE) for specifying the total
width of pixels to be transferred, which also implies the correct end
offsets.
And it gets even better: After ⌈bitlength ÷ 16⌉ write
instructions, the EGC's internal shifter state automatically reinitializes
itself in preparation for blitting another row of pixels with the same
initially configured bit addresses and length. This is perfect for blitting
rectangles, as two I/O port writes before the start of your blitting loop
are enough to define your entire rectangle.
The manual nature of reading and writing in 16-pixel chunks does come with a
slight pitfall though. If the source bit address is larger than the
destination bit address, the first 16-bit read won't fill the EGC's internal
shift register with all pixels that should appear in the first 16-pixel
destination chunk. In this case, the EGC simply won't write anything and
leave the first chunk unchanged. In a
📝 regular blitting loop, however, you expect
that memory to be written and immediately move on to the next chunks within
the row. As a result, the actual blitting process for such a rectangle will
no longer be aligned to the configured address and bit length. The first row
of the rectangle will appear 16 pixels to the right of the destination
address, and the second one will start at bit offset 0 with pixels from the
rightmost byte of the first line, which weren't blitted and remained in the
tile register.
There is an easy solution though: Before the horizontal loop on each line of
the rectangle, simply read one additional 16-pixel chunk from the source
location to prefill the shift register. Thankfully, it's large enough to
also fit the second read of the then full 16 pixels, without dropping any
pixels along the way.
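In code, the hot part of such an unaligned copy could look like the following sketch. The names are invented, the EGC mode/ROP setup and VRAM page switching are omitted, and the exact bit layout of the 0x4AC port write is assumed here:

#include <dos.h>

void egc_copy_rows(
	unsigned src_left, unsigned dst_left, unsigned w, unsigned top, unsigned h
)
{
	const unsigned src_bit = (src_left & 15);
	const unsigned dst_bit = (dst_left & 15);

	// Two port writes define the entire rectangle; the EGC reinitializes its
	// internal shifter state after every ⌈w ÷ 16⌉ write instructions.
	outportw(0x04AC, ((src_bit << 12) | dst_bit)); // bit layout assumed
	outportw(0x04AE, (w - 1));

	const unsigned chunks = (((dst_bit + w) + 15) >> 4);
	for(unsigned y = 0; y < h; y++) {
		unsigned row = ((top + y) * (640 / 8));
		volatile unsigned far *src = (unsigned far *)MK_FP(
			0xA800, (row + ((src_left / 8) & ~1))
		);
		volatile unsigned far *dst = (unsigned far *)MK_FP(
			0xA800, (row + ((dst_left / 8) & ~1))
		);
		if(src_bit > dst_bit) {
			(void)*(src++); // prefill the EGC's shift register
		}
		for(unsigned x = 0; x < chunks; x++) {
			*(dst++) = *(src++); // the EGC intercepts these reads and writes
		}
	}
}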
And that's how we get arbitrarily unaligned rectangle copies with the EGC!
Except for a small register allocation trick to use two-register addressing,
there's not much use in further optimizations, as the runtime of these
inter-page blit operations is dominated by the VRAM page switches anyway.
Except that T98-Next seems to disagree about the register prefilling issue.
Every other emulator agrees with real hardware in this regard, so we can
safely assume this to be a bug in T98-Next. Just in case this old emulator
with its last release from June 2010 still has any fans left nowadays… For
now though, even they can still enjoy the TH01 Anniversary Edition: The only
EGC copy algorithm that TH01 actually needs is the left one during the
single-buffered tests, which even that emulator gets right.
That only leaves
📝 my old offer of documenting the EGC raster ops,
and we've got the EGC figured out completely!
And that did in fact remove tearing from the pellet rendering function! For
the first time, we can now fight Elis, Kikuri, Sariel, and Konngara with a
doubled pellet frame rate:
Switchable videos like these can nicely provide evidence that these
changes have no effect on gameplay, making it easy to see that the Orb
still collides with all pellets on the same frames. Also, check out the
difference in remaining conventional memory (coreleft)…
With only pellets and no other animation on screen, this exact pattern
presents the optimal demonstration case for the new unblitter. But as you
can already tell from the invincibility sprites, we'd also need to route
every other kind of sprite through the same new code. This isn't all too
trivial: Most sprites are still rendered at byte-aligned positions, and
their blitting APIs hide that fact by taking a pixel position regardless.
This is why we can't just replace ZUN's original 16-pixel-aligned EGC
unblitting function with ours, and always have to replace both the blitter
and the unblitter on a per-sprite basis.
To completely remove all flickering, we'd also like to get rid of all the
sprite-specific unblit ➜ update ➜ render sequences, and instead
gather all unblitting code to the beginning of the game loop, before any
update and rendering calls. So yeah, it will take a long time to completely
get rid of all flickering. Until we're there, I recommend any backer to tell
me their favorite boss, so that I can focus on getting that one
rendered without any flickering. Remember that here at ReC98, we can have a
Touhou character popularity contest at any time during the year, whenever
the store is open!
In the meantime, the consistent use of 8×8 rectangles during pellet
unblitting does significantly reduce flickering across the entire game,
and shrinks certain holes that pellets tend to rip into lazily reblitted
sprites:
SinGyoku's "crossing pellets" pattern, shortly before completing
the transformation back to the sphere.
To round out the first release, I added all the other bug fixes to achieve
parity with my previously released patched REIIDEN.EXE builds:
I removed the 📝 shootout laser crash by
simply leaving the lasers on screen if a boss is defeated,
prevented the HP bar heap corruption bug in test or debug mode by not
letting it display negative HP in the first place.
So here it is, the first build of TH01's Anniversary Edition:
2023-03-05-th01-anniv.zip
Edit (2023-03-12): If you're playing on Neko Project and seeing more
flickering than in the original game, make sure you've checked the Screen
→ Disp vsync option.
Next up: The long overdue extended trip through the depths of TH02's
low-level code. From what I've seen of it so far, the work on this project
is finally going to become a bit more relaxing. Which is quite welcome
after, what, 6 months of stressful research-heavy work?
P0227
TH05 decompilation (Sara) / Research (Relativity of near references)
P0228
TH05 finalization (Lasers)
💰 Funded by:
nrook, [Anonymous]
🏷️ Tags:
Starting the year with a delivery that wasn't delayed until the last
day of the month for once, nice! Still, very soon and
high-maintenance did not go well together…
It definitely wasn't Sara's fault though. As you would expect from a Stage 1
Boss, her code was no challenge at all. Most of the TH02, TH04, and TH05
bosses follow the same overall structure, so let's introduce a new table to
replace most of the boilerplate overview text:
Phase #      | Patterns | HP boundary | Timeout condition
(Entrance)   | –        | 4,650       | 288 frames
2            | 4        | 2,550       | 2,568 frames (= 32 patterns)
3            | 4        | 450         | 5,296 frames (= 24 patterns)
4            | 1        | 0           | 1,300 frames
Total        | 9        |             | 9,452 frames
In Phases 2 and 3, Sara cycles between waiting, moving randomly for a
fixed 28 frames, and firing a random pattern among the 4 phase-specific
ones. The pattern selection makes sure to never
pick any pattern twice in a row. Both phases contain spiral patterns that
only differ in the clockwise or counterclockwise turning direction of the
spawner; these directions are treated as individual unrelated patterns, so
it's possible for the "same" pattern to be fired multiple times in a row
with a flipped direction.
The two phases also differ in the wait and pattern durations:
In Phase 2, the wait time starts at 64 frames and decreases by 12
frames after each of the first 5 patterns, ending at the minimum of 4 frames.
In Phase 3, it's a constant 16 frames instead.
All Phase 2 patterns are fired for 28 frames, after a 16-frame
gather animation. The Phase 3 pattern time starts at 80 frames and
increases by 24 frames for the first 6 patterns, ending at 200 frames
for all later ones.
Phase 4 consists of the single laser corridor pattern with additional
random bullets every 16 frames.
And that's all the gameplay-relevant detail that ZUN put into Sara's code. It doesn't even make sense to describe the remaining
patterns in depth, as their groups can significantly change between
difficulties and rank values. The
📝 general code structure of TH05 bosses
won't ever make for good-code, but Sara's code is just a
lesser example of what I already documented for Shinki.
So, no bugs, no unused content, only inconsequential bloat to be found here,
and less than 1 push to get it done… That makes 9 PC-98 Touhou bosses
decompiled, with 22 to go, and gets us over the sweet 50% overall
finalization mark! 🎉 And sure, it might be possible to pass through the
lasers in Sara's final pattern, but the boss script just controls the
origin, angle, and activity of lasers, so any quirk there would be part of
the laser code… wait, you can do what?!?
TH05 expands TH04's one-off code for Yuuka's Master and Double Sparks into a
more featureful laser system, and Sara is the first boss to show it off.
Thus, it made sense to look at it again in more detail and finalize the code
I had purportedly
📝 reverse-engineered over 4 years ago.
That very short delivery notice already hinted at a very time-consuming
future finalization of this code, and that prediction certainly came true.
On the surface, all of the low-level laser ray rendering and
collision detection code is undecompilable: It uses the SI and
DI registers without Turbo C++'s safety backups on the stack,
and its helper functions take their input and output parameters from
convenient registers, completely ignoring common calling conventions. And
just to raise the confusion even further, the code doesn't just set
these registers for the helper function calls and then restores their
original values, but permanently shifts them via additions and
subtractions. Unfortunately, these convenient registers also include the
BP base pointer to the stack frame of a function… and shifting
that register throws any intuition behind accessed local variables right out
of the window for a good part of the function, requiring a correctly shifted
view of the stack frame just to make sense of it again.
How could such code even have been written?! This
goes well beyond the already wrong assumption that using more stack space is
somehow bad, and straight into the territory of self-inflicted pain.
So while it's not a lot of instructions, it's quite dense and really hard to
follow. This code would really benefit from a decompilation that
anchors all this madness as much as possible in existing C++ structures… so
let's decompile it anyway?
Doing so would involve emitting lots of raw machine code bytes to hide the
SI and DI registers from the compiler, but I
already had a certain
📝 batshit insane compiler bug workaround abstraction
lying around that could make such code more readable. Hilariously, it only
took this one additional use case for that abstraction to reveal itself as
premature and way too complicated. Expanding
the core idea into a full-on x86 instruction generator ended up simplifying
the code structure a lot. All we really want there is a way to set all
potential parameters to e.g. a specific form of the MOV
instruction, which can all be expressed as the parameters to a force-inlined
__emit__() function. Type safety can help by providing
overloads for different operand widths here, but there really is no need for
classes, templates, or explicit specialization of templates based on
classes. We only need a couple of enums with opcode, register,
and prefix constants from the x86 reference documentation, and a set of
associated macros that token-paste pseudoregisters onto the prefixes of
these enum constants.
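A sketch of the resulting syntax, with invented names; the bytes come straight out of the x86 reference:

// Register numbers as encoded in x86 opcodes
enum x86_reg16_t { _rAX = 0, _rCX, _rDX, _rBX, _rSP, _rBP, _rSI, _rDI };

// MOV reg16, imm16 = one byte of (0xB8 + register number), followed by the
// little-endian immediate. char-sized __emit__() arguments emit one byte,
// int-sized ones emit two.
#define MOV_RI16(reg, imm16) \
	__emit__((unsigned char)(0xB8 + (reg)), (unsigned int)(imm16))

// Usage: set SI behind the compiler's back, in a regular C++ function.
// MOV_RI16(_rSI, 0x1234);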
And that's how you get a custom compile-time assembler in a 1994 C++
compiler and expand the limits of decompilability even further. What's even
truly left now? Self-modifying code, layout tricks that can't be replicated
with regularly structured control flow… and that's it. That leaves quite a
few functions I previously considered undecompilable to be revisited once I
get to work on making this game more portable.
With that, we've turned the low-level laser code into the expected horrible
monstrosity that exposes all the hidden complexity in those few ASM
instructions. The high-level part should be no big deal now… except that
we're immediately bombarded with Fixup overflow errors at link
time? Oh well, time to finally learn the true way of fixing this highly
annoying issue in a second new piece of decompilation tech – and one
that might actually be useful for other x86 Real Mode retro developers at
that.
Earlier in the RE history of TH04 and TH05, I often wrote about the need to
split the two original code segments into multiple segments within two
groups, which makes it possible to slot in code from different
translation units at arbitrary places within the original segment. If we
don't want to define a unique segment name for each of these slotted-in
translation units, we need a way to set custom segment and group names in C
land. Turbo C++ offers two #pragmas for that:
#pragma option -zCsegment -zPgroup – preferred in most
cases as it's equivalent to setting the default segment and group via the
command line, but can only be used at the beginning of a translation unit,
before the first non-preprocessor and non-comment C language token
#pragma codeseg segment <group> – necessary if a
translation unit needs to emit code into two or more segments
For the most part, these #pragmas work well, but they seemed to
not help much when it came to calling near functions declared
in different segments within the same group. It took a bit of trial and
error to figure out what was actually going on in that case, but there
is a clear logic to it:
Symbols are allocated to the segment and group that's active during
their first appearance, no matter whether that appearance is a declaration
or definition. Any later appearance of the function in a different segment
is ignored.
The linker calculates the 16-bit offsets of such references relative to
the symbol's declared segment, not its actual one. Turbo C++ does
not show an error or warning if the declared and actual segments are
different, as referencing the same symbol from multiple segments is a valid
use case. The linker merely throws the Fixup overflow error if
the calculated distance exceeds 64 KiB and thus couldn't possibly fit
within a near reference. With a wrong segment declaration
though, your code can be incorrect long before a fixup hits that limit.
Summarized in code:
#pragma option -zCfoo_TEXT -zPfoo
void bar(void);
void near qux(void); // defined somewhere else, maybe in a different segment
#pragma codeseg baz_TEXT baz
// Despite the segment change in the line above, this function will still be
// put into `foo_TEXT`, the active segment during the first appearance of the
// function name.
void bar(void) {
}
// This function hasn't been declared yet, so it will go into `baz_TEXT` as
// expected.
void baz(void) {
// This `near` function pointer will be calculated by subtracting the
// flat/linear address of qux() inside the binary from the base address
// of qux()'s declared segment, i.e., `foo_TEXT`.
void (near *ptr_to_qux)(void) = qux;
}
So yeah, you might have to put #pragma codeseg into your
headers to tell the linker about the correct segment of a
near function in advance. 🤯 This is an important insight for
everyone using this compiler, and I'm shocked that none of the Borland C++
books documented the interaction of code segment definitions and
near references at least at this level of clarity. The TASM
manuals did have a few pages on the topic of groups, but that syntax
obviously doesn't apply to a C compiler. Fixup overflows in particular are
such a common error and really deserved better than the unhelpful 🤷
of an explanation that ended up in the User's Guide. Maybe this whole
technique of custom code segment names was considered arcane even by 1993,
judging from the mere three sentences that #pragma codeseg was
documented with? Still, it must have been common knowledge among Amusement
Makers, because they couldn't have built these exact binaries without
knowing about these details. This is the true solution to
📝 any issues involving references to near functions,
and I'm glad to see that ZUN did not in fact lie to the compiler. 👍
OK, but now the remaining laser code compiles, and we get to write
C++ code to draw some hitboxes during the two collision-detected states of
each laser. These confirm what the low-level code from earlier already
uncovered: Collision detection against lasers is done by testing a
12×12-pixel box at every 16 pixels along the length of a laser, which leaves
obvious 4-pixel gaps at regular intervals that the player can just pass
through. This adds
📝 yet 📝 another 📝 quirk to the growing list of quirks that
were either intentional or must have been deliberately left in the game
after their initial discovery. This is what constants were invented for, and
there really is no excuse for not using them – especially during
intoxicated coding, and/or if you don't have a compile-time abstraction for
Q12.4 literals.
When detecting laser collisions, the game checks the player's single
center coordinate against any of the aforementioned 12×12-pixel boxes.
Therefore, it's correct to split these 12×12 pixels into two 6×6-pixel
boxes and assign the other half to the player for a more natural
visualization. Always remember that hitbox visualizations need to keep
all colliding entities in mind –
📝 assigning a constant-sized hitbox to "the player" and "the bullets" will be wrong in most other cases.
Using subpixel coordinates in collision detection also introduces a slight
inaccuracy into any hitbox visualization recorded in-engine on a 16-color
PC-98. Since we have to render discrete pixels, we cannot exactly place a
Q12.4 coordinate in the 93.75% of cases where the fractional part is
non-zero. This is why pretty much every laser segment hitbox in the video
above shows up as 7×7 rather than 6×6: The actual W×H area of each box is 13
pixels smaller, but since the hitbox lies between these pixels, we
cannot indicate where it lies exactly, and have to err on the
side of caution. It's also why Reimu's box slightly changes size as she
moves: Her non-diagonal movement speed is 3.5 pixels per frame, and the
constant focused movement in the video above halves that to 1.75 pixels,
making her end up on an exact pixel every 4 frames. Looking forward to the
glorious future of displays that will allow us to scale up the playfield to
16× its original pixel size, thus rendering the game at its exact internal
resolution of 6144×5888 pixels. Such a port would definitely add a lot of
value to the game…
The remaining high-level laser code is rather unremarkable for the most
part, but raises one final interesting question: With no explicitly defined
limit, how wide can a laser be? Looking at the laser structure's 1-byte
width field and the unsigned comparisons all throughout the update and
rendering code, the answer seems to be an obvious 255 pixels. However, the
laser system also contains an automated shrinking state, which can be most
notably seen in Mai's wheel pattern. This state shrinks a laser by 2 pixels
every 2 frames until it reaches a width of 0. This presents a problem with
odd widths, which would fall below 0 and overflow back to 255 due to the
unsigned nature of this variable. So rather than, I don't know, treating
width values of 0 as invalid and stopping at a width of 1, or even adding a
condition for that specific case, the code just performs a signed
comparison, effectively limiting the width of a shrinkable laser to a
maximum of 127 pixels. This small signedness
inconsistency now forces the distinction between shrinkable and
non-shrinkable lasers onto every single piece of code that uses lasers. Yet
another instance where
📝 aiming for a cinematic 30 FPS look
made the resulting code much more complicated than if ZUN had just evenly
spread out the subtraction across 2 frames. 🤷
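Sketched out with invented identifiers, the shrink update thus boils down to:

laser->width -= 2; // an odd width would now have wrapped around to 255…
if((signed char)laser->width <= 0) { // …if this comparison weren't signed,
	laser_despawn(laser);            // which also "kills" any width ≥ 128
}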
Oh well, it's not as if any of the fixed lasers in the original scripts came
close to any of these limits. Moving lasers are much more streamlined and
limited to begin with: Since they're hardcoded to 6 pixels, the game can
safely assume that they're always thinner than the 28 pixels they get
gradually widened to during their decay animation.
Finally, in case you were missing a mention of hitboxes in the previous
paragraph: Yes, the game always uses the aforementioned 12×12 boxes,
regardless of a laser's width.
This video also showcases the 127-pixel limit because I wanted
to include the shrink animation for a seamless loop.
That was what, 50% of this blog post just being about complications that
made lasers difficult for no reason? Next up: The first TH01 Anniversary
Edition build, where I finally get to reap the rewards of having a 100%
decompiled game and write some good code for once.
> "OK, TH03/TH04/TH05 cutscenes done, let's quickly finish the Touhou Patch Center MediaWiki upgrade. Just some scripting and verification left, it will be done so quickly that I don't even have to mention it on this blog"
> Still not done after 3 weeks
> Blocked by one final critical bug that really should be fixed upstream
> Code reviewers are probably on vacation
And so, the year unfortunately ended with yet another slow month. During the
MediaWiki upgrade, I was slowly decompiling the TH05 Sara fight on the side,
but stumbled over one interesting but high-maintenance detail there that
would really enhance her blog post. TH02 would need a lot of attention for
the basic rendering calls as well…
…so let's end the year with Shuusou Gyoku instead, looking at its most
critical issue in particular. As if that were the easy option here…
The game does not run properly on modern Windows systems due to its usage of
the ancient DirectDraw APIs, with issues ranging from unbearable slowdown to
glitched colors to the game not even starting at all. Thankfully, Shuusou
Gyoku is not the only ancient Windows game affected by these issues, and
people have developed a variety of generic DirectDraw wrappers and patches
for playing such games on modern systems. Out of all these, DDrawCompat is one of the
simpler solutions for Shuusou Gyoku in particular: Just drop its
ddraw proxy DLL into the game directory, and the game will run
as it's supposed to.
So let's just bundle that DLL with all my future Shuusou Gyoku releases
then? That would have been the quick and dirty option, coming with
several drawbacks:
Linux users might be annoyed by the potential need to configure a native
DLL override for ddraw.dll. It's not too much of an issue as we
could simply rename the DLL and replace the import with the new name.
However, doing that reproducibly would already involve changes to either the
DDrawCompat or Shuusou Gyoku build process.
Win32 API hooking is another potential point of failure in general,
requiring continual maintenance for new Windows versions. This is not even a
hypothetical concern: DDrawCompat does rely on particularly volatile Win32
API details, to the point that the recent Windows 11 22H2 update completely
broke it, causing a hang at startup that required a workaround.
But sure, it's still just a single third-party component. Keeping it up to
date doesn't sound too bad by itself…
…if DDrawCompat weren't evolving way beyond what we need to keep Shuusou
Gyoku running. Being a typical DirectDraw wrapper, it has always aimed to
solve all sorts of issues in old DirectDraw games. However, the latest
version, 0.4.0, has gone above and beyond in this regard, adding lots of
configuration options with default settings that actually
break Shuusou Gyoku.
To get a glimpse of how this is likely to play out, we only have to look at
the more mature DxWnd
project. In its expert mode, DxWnd features three rows of tabs, each packed
with checkboxes that toggle individual hacks, and most of these are
related to something that Shuusou Gyoku could be affected by. Imagine
checking a precise permutation of a three-digit number of checkboxes just to
keep an old game running at full speed on modern systems…
Finally, aesthetic and bloat considerations. If
📝 C++ fstreams were already too embarrassing
with the ~100 KB of bloat they add to the binary, a 565 KiB DLL is
even worse. And that's the old version 0.3.2 – version 0.4.0 comes in
at 2.43 MiB.
Fortunately, I had the budget to dig a bit deeper and figure out what
exactly DDrawCompat does to make Shuusou Gyoku work properly. Turns
out that among all the hooks and patches, the game only needs the most
central one: Enforcing a 32-bit display mode regardless of whatever lower
bit depth the game requests natively, combined with converting the game's
pixel buffer to 32-bit on the fly.
So does this mean that adding 32-bit to the game's list of supported bit
depths is everything we have to do?
Interestingly, Shuusou Gyoku already saved the DirectDraw enumeration flag
that indicates support for 32-bit display modes. The official version just
did nothing with it.
Well, almost everything. Initially, this surprised me as well: With
all the if statements checking for precise bit depths, you
would think that supporting one more bit depth would be way harder in this
code base. As it turned out though, these conditional branches are not
really about 8-bit or 16-bit color for the most part, but instead
differentiate between two very distinct rendering approaches:
"8-bit" is a pure 2D mode with palettized colors,
while "16-bit" is a hybrid 2D/3D mode that uses Direct3D 2 on top of DirectDraw, with
3-channel RGB colors.
Consequently, most of these branches deal with differences between these two
approaches that couldn't be nicely abstracted away in pbg's renderer
interface: Specific palette changes that are exclusive to "8-bit" mode, or
certain entities and effects whose Direct3D draw calls in "16-bit" mode
require tailor-made approximations for the "8-bit" mode. Since our new
32-bit mode is equivalent to the 16-bit mode in all of these branches, I
only needed to replace the raw number comparisons with more meaningful
method calls.
That only left a very small number of 2D raster effects that directly write
to or read from DirectDraw surface memory, and therefore do need to know the
bit size of each pixel. Thanks to std::variant and
std::visit(), adding 32-bit support becomes trivial here: By
rewriting the code in a generic manner that derives all offsets from the
template type, you only have to say hey,
I'd like to have 32-bit as well, and C++ will automatically
instantiate correct 32-bit variants of all bit depth-dependent code
snippets.
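As a sketch of what this looks like in practice (the names are mine, not the game's):

```cpp
#include <stdint.h>
#include <stddef.h>
#include <variant>

// One pixel buffer type per bit depth; all offsets derive from the
// template type via regular pointer arithmetic.
template <typename Pixel> struct PixelBuffer {
	Pixel* pixels;
	size_t stride; // in pixels

	Pixel& At(unsigned x, unsigned y) {
		return pixels[(y * stride) + x];
	}
};

// Supporting 32-bit then boils down to adding one variant alternative…
using AnyBuffer = std::variant<
	PixelBuffer<uint8_t>, PixelBuffer<uint16_t>, PixelBuffer<uint32_t>
>;

void ClearPixel(AnyBuffer& buf, unsigned x, unsigned y) {
	// …and std::visit() instantiates the generic lambda for all three.
	std::visit([&](auto& b) { b.At(x, y) = {}; }, buf);
}
```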
There are only three features in the entire game that access pixel buffers
this way: a color key retrieval function, the lens ball animation on the
logo screen, and… the ending staff roll? Sure, the text sprites fade in and
out, but so does the picture next to it, using Direct3D alpha blending or
palette color ramping depending on the current rendering mode. Instead, the
only reason why these sprites directly access their pixel buffer is… an
unused and pretty wild spiral effect. 😮 It's still part of the code, and
only doesn't show up because the
parameters that control its timing were commented out before release:
They probably considered it too wild for the mood of this
ending.
The main ending text was the only remaining issue of mojibake present in my
previous Shuusou Gyoku builds, and is now fixed as well. Windows can
render Shift-JIS text via GDI even outside Japanese locale, but only when
explicitly selecting a font that supports the SHIFTJIS_CHARSET,
and the game simply didn't select any font for rendering this text.
Thus, GDI fell back onto its default font, which obviously is only
guaranteed to support the SHIFTJIS_CHARSET if your system
locale is set to Japanese. This is why the font in the original game might
look different between systems.
For my build, I chose the font that would appear on a clean Windows
installation – a basic 400-weighted MS Gothic at font size 16, which is
already used all throughout the game.
Alright, 32-bit mode complete, let's set it as the default if possible… and
break compatibility to the original 秋霜CFG.DAT format in the
process? When validating this file, the original game only allows the
originally supported 8-bit or 16-bit modes. Setting the
BitDepth field to any other value causes the entire file
to be reset to its defaults, re-locking the Extra Stage in the process.
Introducing a backward-compatible version
system for 秋霜CFG.DAT was beyond the scope of this push.
Changing the validation to a per-field approach was a good small first step
to take though. The new build no longer validates the BitDepth
field against a fixed list, but against the actually supported bit depths on
your system, picking a different supported one if necessary. With the
original approach, this would have caused your entire configuration to fail
the validation check. Instead, you can now safely update to the new build
without losing your option settings, or your previously unlocked access to
the Extra Stage.
Side note: The validation limit for starting bombs is off by one, and the
one for starting lives is off by two. By modifying
秋霜CFG.DAT, you could theoretically get new games to start with
7 lives and 3 bombs… if you then calculate a correct checksum for your
hacked config file, that is. 🧑💻
Interestingly, DirectDraw doesn't even indicate support for 8-bit or 16-bit
color on systems that are affected by the initially mentioned issues.
Therefore, these issues are not the fault of DirectDraw, but of
Shuusou Gyoku, as the original release requested a bit depth that it has
even verified to be unsupported. Unfortunately, Windows sides with
Shuusou Gyoku here, just as it famously did with SimCity: If you previously experimented with the
Windows app compatibility settings, you might have ended up with the
DWM8And16BitMitigation flag assigned to the full file path of
your Shuusou Gyoku executable in either
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers, or the corresponding Layers key under HKEY_CURRENT_USER.
As the term mitigation suggests, these modes are (poorly) emulated,
which is exactly what causes the issues with this game in the first place.
Sure, this might be the lesser evil from the point of view of an operating
system: If you don't have the budget for a full-blown DDrawCompat-style
DirectDraw wrapper, you might consider it better for users to have the game
run poorly than have it fail at startup due to incorrect API usage.
Controlling this with a flag that sticks around for future runs of a binary
is definitely suboptimal though, especially given how hard it
is to programmatically remove this flag within the binary itself. It
only adds additional complexity to the ideal clean upgrade path.
So, make sure to check your registry and manually remove these flags for the
time being. Without them, the new Config → Graphic menu will
correctly prevent you from selecting anything else but 32-bit on modern
Windows.
After all that, there was just enough time left in this push to implement
basic locale independence, as requested by the Seihou development
Discord group, without looking into automatic fixes for previous mojibake
filenames yet. Combining std::filesystem::path with the native
Win32 API should be straightforward and bloat-free, especially with all the
abstractions I've been building, right?
Well, turns out that std::filesystem::path does not
actually meet my expectations. At least as long as it's not
constexpr-enabled, because you still get the unfortunate
conversion from narrow to wide encoding at runtime, even for globals with
static storage duration. That brings us back to writing our path abstraction
in terms of the regular std::string and
std::wstring containers, which at least allow us to enforce the
respective encoding at compile time. Even std::string_view only
adds to the complexity here, as its strings are never inherently
null-terminated, which is required by both the POSIX and Win32 APIs. Not to
mention dynamic filenames: C++20's std::format() would be the
obvious idiomatic choice here, but using it almost doubles the size
of the compiled binary… 🤮
In the end, the most bloat-free way of implementing C++ file I/O in 2023 is
still the same as it was 30 years ago: Call system APIs, roll a custom
abstraction that conditionally uses the L prefix, and pass
around raw pointers. And if you need a dynamic filename, just write the
dynamic characters into arrays at fixed positions. Just as PC-98 Touhou used
to do…
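Such an abstraction can be as small as this sketch (hypothetical macro and variable names):

```cpp
#ifdef _WIN32
	using PATH_LITERAL = const wchar_t *;
	#define PATH(str) L##str // conditionally applies the L prefix
#else
	using PATH_LITERAL = const char *;
	#define PATH(str) str
#endif

// Enforces the correct encoding at compile time, with zero runtime
// conversions even for globals with static storage duration.
constexpr PATH_LITERAL CFG_FN = PATH("秋霜CFG.DAT");
```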
Oh, and the game's window also uses a Unicode title bar now.
And that's it for this push! Make sure to rename your configuration
(秋霜CFG.DAT), score (秋霜SC.DAT), and replay
(秋霜りぷ*.DAT) filenames if you were previously running the
game on a non-Japanese locale, and then grab the new build:
Next up: Starting the new year with all my plans hopefully working out for
once. TH05 Sara very soon, ZMBV code review afterward, low-hanging fruit of
the TH01 Anniversary Edition after that, and then kicking off TH02 with a
bunch of low-level blitting code.
More than three months without any reverse-engineering progress! It's been
way too long. Coincidentally, we're at least back with a surprising 1.25% of
overall RE, achieved within just 3 pushes. The ending script system is not
only more or less the same in TH04 and TH05, but actually originated in
TH03, where it's also used for the cutscenes before stages 8 and 9. This
means that it was one of the final pieces of code shared between three of
the four remaining games, which I got to decompile at roughly 3× the usual
speed, or ⅓ of the price.
The only other bargains of this nature remain in OP.EXE. The
Music Room is largely equivalent in all three remaining games as well, and
the sound device selection, ZUN Soft logo screens, and main/option menus are
the same in TH04 and TH05. A lot of that code is in the "technically RE'd
but not yet decompiled" ASM form though, so it would shift Finalized% more
significantly than RE%. Therefore, make sure to order the new
Finalization option rather than Reverse-engineering if you
want to make number go up.
So, cutscenes. On the surface, the .TXT files look simple enough: You
directly write the text that should appear on the screen into the file
without any special markup, and add commands to define visuals, music, and
other effects at any place within the script. Let's start with the basics of
how text is rendered, which are the same in all three games:
First off, the text area has a size of 480×64 pixels. This means that it
does not correspond to the tiled area painted into TH05's
EDBK?.PI images:
The yellow area is designated for character names.
Since the font weight can be customized, all text is rendered to VRAM.
This also includes gaiji, despite them ignoring the font weight
setting.
The system supports automatic line breaks on a per-glyph basis, which
move the text cursor to the beginning of the red text area. This might seem like a piece of long-forgotten
ancient wisdom at first, considering the absence of automatic line breaks in
Windows Touhou. However, ZUN probably implemented it more out of pure
necessity: Text in VRAM needs to be unblitted when starting a new box, which
is way more straightforward and performant if you only need to worry
about a fixed area.
The system also automatically starts a new (key press-separated) text
box after the end of the 4th line. However, the text cursor is
also unconditionally moved to the top-left corner of the yellow name
area when this happens, which is almost certainly not what you expect, given
that automatic line breaks stay within the red area. A script author might
as well add the necessary text box change commands manually if they're
forced to anticipate the automatic ones anyway…
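In code, the combined line and box break behavior roughly looks like this sketch; the name area width and the function name are placeholders, not the games' actual values:

```cpp
void StartNewTextBox(); // hypothetical; equivalent to \s

constexpr unsigned BOX_W = 480;      // full text area width
constexpr unsigned NAME_AREA_W = 64; // assumed width of the yellow area
constexpr unsigned GLYPH_W = 16;     // full-width glyph
constexpr unsigned LINE_H = 16;
constexpr unsigned LINE_COUNT = 4;

void AdvanceCursor(unsigned& x, unsigned& y)
{
	x += GLYPH_W;
	if (x >= BOX_W) {
		// Automatic line break: skips the name area…
		x = NAME_AREA_W;
		y += LINE_H;
		if (y >= (LINE_COUNT * LINE_H)) {
			// Automatic box break: …but this one doesn't!
			StartNewTextBox();
			x = 0;
			y = 0;
		}
	}
}
```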
Due to ZUN forgetting an unblitting call during the TH05 refactoring of the
box background buffer, this feature is even completely broken in that game,
as any new text will simply be blitted on top of the old one:
Wait, why are we already talking about game-specific differences after
all? Also, note how the ⏎ animation appears one line below where you'd
expect it.
Overall, the system is geared toward exclusively full-width text. As
exemplified by the 2014 static English patches and the screenshots in this
blog post, half-width text is possible, but comes with a lot of
asterisks attached:
Each loop of the script interpreter starts by looking at the next
byte to distinguish commands from text. However, this step also skips
over every ASCII space and control character, i.e., every byte
≤ 32. If you only intend to display full-width glyphs anyway, this
sort of makes sense: You gain complete freedom when it comes to the
physical layout of these script files, and it especially allows commands
to be freely separated with spaces and line breaks for improved
readability. Still, enforcing commands to be separated exclusively by
line breaks might have been even better for readability, and would have
freed up ASCII spaces for regular text…
Non-command text is blindly processed and rendered two bytes at a
time. The rendering function interprets these bytes as a Shift-JIS
string, so you can use half-width characters here. While the
second byte can even be an ASCII 0x20 space due to the
parser's blindness, all half-width characters must still occur in pairs
that can't be interrupted by commands:
As a workaround for at least the ASCII space issue, you can replace
them with any of the unassigned
Shift-JIS lead bytes – 0x80, 0xA0, or
anything between 0xF0 and 0xFF inclusive.
That's what you see in all screenshots of this post that display
half-width spaces.
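Summed up in code, each loop iteration of the interpreter therefore dispatches roughly like this. This is a reconstruction from the behavior described above, with hypothetical helper names, and TH05's @ commands left out:

```cpp
#include <stdint.h>

const uint8_t* InterpretCommand(const uint8_t* p);    // hypothetical
void RenderShiftJISPair(uint8_t lead, uint8_t trail); // hypothetical

// Note: the real games only stop at the \$ command; the [end] parameter
// is my addition.
void InterpretScript(const uint8_t* p, const uint8_t* const end)
{
	while (p < end) {
		if (*p <= 32) {
			p++; // skips *every* ASCII space and control character
		} else if (*p == 0x5C) { // '\' or '¥', depending on your font
			p = InterpretCommand(p + 1);
		} else {
			// Blindly taken as one 2-byte Shift-JIS unit, even if the
			// second byte is an ASCII 0x20 space
			RenderShiftJISPair(p[0], p[1]);
			p += 2;
		}
	}
}
```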
Finally, did you know that you can hold ESC to fast-forward
through these cutscenes, which skips most frame delays and reduces the rest?
Due to the blocking nature of all commands, the ESC key state is
only updated between commands or 2-byte text groups though, so it can't
interrupt an ongoing delay.
Superficially, the list of game-specific differences doesn't look too long,
and can be summarized in a rather short table:
It's when you get into the implementation that the combined three systems
reveal themselves as a giant mess, with more like 56 differences between the
games. Every single new weird line of code opened up
another can of worms, which ultimately made all of this end up with 24
pieces of bloat and 14 bugs. The worst of these should be quite interesting
for the general PC-98 homebrew developers among my audience:
The final official 0.23 release of master.lib has a bug in
graph_gaiji_put*(). To calculate the JIS X 0208 code point for
a gaiji, it is enough to ADD 5680h onto the gaiji ID. However,
these functions accidentally use ADC instead, which incorrectly
adds the x86 carry flag on top, causing weird off-by-one errors based on the
previous program state. ZUN did fix this bug directly inside master.lib for
TH04 and TH05, but still needed to work around it in TH03 by subtracting 1
from the intended gaiji ID. Anyone up for maintaining a bug-fixed master.lib
repository?
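Translated into C++, the bug boils down to this difference:

```cpp
#include <stdint.h>

// What graph_gaiji_put*() is supposed to compute (ADD 5680h):
uint16_t GaijiToJIS(uint8_t gaiji_id)
{
	return (0x5680 + gaiji_id);
}

// What master.lib 0.23 actually computes (ADC 5680h): whatever the x86
// carry flag happens to be after the preceding code leaks into the sum.
uint16_t GaijiToJIS_Buggy(uint8_t gaiji_id, bool leftover_carry)
{
	return (0x5680 + gaiji_id + leftover_carry);
}
```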
The worst piece of bloat comes from TH03 and TH04 needlessly
switching the visibility of VRAM pages while blitting a new 320×200 picture.
This makes it much harder to understand the code, as the mere existence of
these page switches is enough to suggest a more complex interplay between
the two VRAM pages which doesn't actually exist. Outside this visibility
switch, page 0 is always supposed to be shown, and page 1 is always used
for temporarily storing pixels that are later crossfaded onto page 0. This
is also the only reason why TH03 has to render text and gaiji onto both VRAM
pages to begin with… and because TH04 doesn't, changing the picture in the
middle of a string of text is technically bugged in that game, even though
you only get to temporarily see the new text on very underclocked PC-98
systems.
These performance implications made me wonder why cutscenes even bother with
writing to the second VRAM page anyway, before copying each crossfade step
to the visible one.
📝 We learned in June how costly EGC-"accelerated" inter-page copies are;
shouldn't it be faster to just blit the image once rather than twice?
Well, master.lib decodes .PI images into a packed-pixel format, and
unpacking such a representation into bitplanes on the fly is just about the
worst way of blitting you could possibly imagine on a PC-98. EGC inter-page
copies are already fairly disappointing at 42 cycles for every 16 pixels, if
we look at the i486 and ignore VRAM latencies. But under the same
conditions, packed-pixel unpacking comes in at 81 cycles for every 8
pixels, or almost 4× slower. On lower-end systems, that can easily sum up to
more than one frame for a 320×200 image. While I'd argue that the resulting
tearing could have been an acceptable part of the transition between two
images, it's understandable why you'd want to avoid it in favor of the
pure effect on a slower framerate.
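For a sense of scale, here's the back-of-the-envelope math behind that claim, still on an i486 and still ignoring VRAM latencies:

	EGC inter-page copy:  42 cycles / 16 pixels =  2.625 cycles per pixel
	Packed-pixel unpack:  81 cycles /  8 pixels = 10.125 cycles per pixel (≈3.86×)
	320×200 picture:      64,000 pixels × 10.125 ≈ 648,000 cycles

Even at 33 MHz, those 648,000 cycles alone take ≈19.6 ms, which is already longer than one 56.4 Hz PC-98 frame (≈17.7 ms), before adding any VRAM latencies on top.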
Really makes me wonder why master.lib didn't just directly decode .PI images
into bitplanes. The performance impact on load times should have been
negligible? It's such a good format for
the often dithered 16-color artwork you typically see on PC-98, and
deserves better than master.lib's implementation which is both slow to
decode and slow to blit.
That brings us to the individual script commands… and yes, I'm going to
document every single one of them. Some of their interactions and edge cases
are not clear at all from just looking at the code.
Almost all commands are preceded by… well, a 0x5C lead byte.
Which raises the question of whether we should
document it as an ASCII-encoded \ backslash, or a Shift-JIS-encoded
¥ yen sign. From a gaijin perspective, it seems obvious that it's a
backslash, as it's consistently displayed as one in most of the editors you
would actually use nowadays. But interestingly, iconv
-f shift-jis -t utf-8 does convert any 0x5C
lead bytes to actual ¥ U+00A5 YEN SIGN code points.
Ultimately, the distinction comes down to the font. There are fonts
that still render 0x5C as ¥, but mainly do so out
of an obvious concern about backward compatibility to JIS X 0201, where this
mapping originated. Unsurprisingly, this group includes MS Gothic/Mincho,
the old Japanese fonts from Windows 3.1, but even Meiryo and Yu
Gothic/Mincho, Microsoft's modern Japanese fonts. Meanwhile, pretty much
every other modern font, and freely licensed ones in particular, render this
code point as \, even if you set your editor to Shift-JIS. And
while ZUN most definitely saw it as a ¥, documenting this code
point as \ is less ambiguous in the long run. It can only
possibly correspond to one specific code point in either Shift-JIS or UTF-8,
and will remain correct even if we later mod the cutscene system to support
full-blown Unicode.
Now we've only got to clarify the parameter syntax, and then we can look at
the big table of commands:
Numeric parameters are read as sequences of up to 3 ASCII digits. This
limits them to a range from 0 to 999 inclusive, with 000 and
0 being equivalent. Because there's no further sentinel
character, any further digit from the 4th one onwards is
interpreted as regular text.
Filename parameters must be terminated with a space or newline and are
limited to 12 characters, which translates to 8.3 basenames without any
directory component. Any further characters are ignored and displayed as
text as well.
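Sketched in code, the numeric rule amounts to this (my naming):

```cpp
#include <stdint.h>

int ParseNumericParameter(const uint8_t *& p)
{
	int ret = 0;
	for (int digits = 0; digits < 3; digits++) {
		if ((*p < '0') || (*p > '9')) {
			break; // no sentinel: the first non-digit ends the number
		}
		ret = (ret * 10) + (*p - '0');
		p++;
	}
	// Any 4th digit stays in the stream and is rendered as text.
	return ret; // 0–999; "000" and "0" are equivalent
}
```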
Each .PI image can contain up to four 320×200 pictures ("quarters") for
the cutscene picture area. In the script commands, they are numbered like
this:
0 | 1
2 | 3
\@
Clears both VRAM pages by filling them with VRAM color 0. 🐞
In TH03 and TH04, this command does not update the internal text area
background used for unblitting. This bug effectively restricts usage of
this command to either the beginning of a script (before the first
background image is shown) or its end (after no more new text boxes are
started). See the image below for an
example of using it anywhere else.
\b2
Sets the font weight to a value from 0 (raw font ROM glyphs) to 3
(very thicc). Specifying any other value has no effect.
🐞 In TH04 and TH05, \b3 leads to glitched pixels when
rendering half-width glyphs due to a bug in the newly micro-optimized
ASM version of
📝 graph_putsa_fx(); see the image below for an example.
In these games, the parameter also directly corresponds to the
graph_putsa_fx() effect function, removing the sanity check
that was present in TH03. In exchange, you can also access the four
dissolve masks for the bold font (\b2) by specifying a
parameter between 4 (fewest pixels) to 7 (most
pixels). Demo video below.
\c15
Changes the text color to VRAM color 15.
\c=字,15
Adds a color map entry: If 字 is the first code point
inside the name area on a new line, the text color is automatically set
to 15. Up to 8 such entries can be registered
before overflowing the statically allocated buffer.
🐞 The comma is assumed to be present even if the color parameter is omitted.
\e0
Plays the sound effect with the given ID.
\f
(no-op)
\fi1
\fo1
Calls master.lib's palette_black_in() or
palette_black_out() to play a hardware palette fade
animation from or to black, spending roughly 1 frame on each of the 16 fade steps.
\fm1
Fades out BGM volume via PMD's AH=02h interrupt call,
in a non-blocking way. The fade speed can range from 1 (slowest) to 127 (fastest).
Values from 128 to 255 technically correspond to
AH=02h's fade-in feature, which can't be used from cutscene
scripts because it requires BGM volume to first be lowered via
AH=19h, and there is no command to do that.
\g8
Plays a blocking 8-frame screen shake
animation.
\ga0
Shows the gaiji with the given ID from 0 to 255
at the current cursor position. Even in TH03, gaiji always ignore the
text delay interval configured with \v.
@3
TH05's replacement for the \ga command from TH03 and
TH04. The default ID of 3 corresponds to the
gaiji. Not to be confused with \@, which starts with a backslash,
unlike this command.
@h
Shows the gaiji.
@t
Shows the gaiji.
@!
Shows the gaiji.
@?
Shows the gaiji.
@!!
Shows the gaiji.
@!?
Shows the gaiji.
\k0
Waits 0 frames (0 = forever) for an advance key to be pressed before
continuing script execution. Before waiting, TH05 crossfades in any new
text that was previously rendered to the invisible VRAM page…
🐞 …but TH04 doesn't, leaving the text invisible during the wait time.
As a workaround, \vp1 can be
used before \k to immediately display that text without a
fade-in animation.
\m$
Stops the currently playing BGM.
\m*
Restarts playback of the currently loaded BGM from the
beginning.
\m,filename
Stops the currently playing BGM, loads a new one from the given
file, and starts playback.
\n
Starts a new line at the leftmost X coordinate of the box, i.e., the
start of the name area. This is how scripts can "change" the name of the
currently speaking character, or use the entire 480×64 pixels without
being restricted to the non-name area.
Note that automatic line breaks already move the cursor into a new line.
Using this command at the "end" of a line with the maximum number of 30
full-width glyphs would therefore start a second new line and leave the
previously started line empty.
If this command moved the cursor into the 5th line of a box,
\s is executed afterward, with
any of \n's parameters passed to \s.
\p
(no-op)
\p-
Deallocates the loaded .PI image.
\p,filename
Loads the .PI image from the given file into the single .PI slot
available to cutscenes. TH04 and TH05 automatically deallocate any
previous image, 🐞 TH03 would leak memory without a manual prior call to
\p-.
\pp
Sets the hardware palette to the one of the loaded .PI image.
\p@
Sets the loaded .PI image as the full-screen 640×400 background
image and overwrites both VRAM pages with its pixels, retaining the
current hardware palette.
\p=
Runs \pp followed by \p@.
\s0
\s-
Ends a text box and starts a new one. Fades in any text rendered to
the invisible VRAM page, then waits 0 frames
(0 = forever) for an advance key to be
pressed. Afterward, the new text box is started with the cursor moved to
the top-left corner of the name area. \s- skips the wait time and starts the new box
immediately.
\t100
Sets palette brightness via master.lib's
palette_settone() to any value from 0 (fully black) to 200
(fully white). 100 corresponds to the palette's original colors.
Preceded by a 1-frame delay unless ESC is held.
\v1
Sets the number of frames to wait between every 2 bytes of rendered
text.
\v2
Sets the number of frames to spend on each of the 4 fade
steps when crossfading between old and new text. The game-specific
default value is also used before the first use of this command.
\vp0
Shows VRAM page 0. Completely useless in
TH03 (this game always synchronizes both VRAM pages at a command
boundary), only of dubious use in TH04 (for working around a bug in \k), and the games always return to
their intended shown page before every blitting operation anyway. A
debloated mod of this game would just remove this command, as it exposes
an implementation detail that script authors should not need to worry
about. None of the original scripts use it anyway.
\w64
\wk64
\wm64,64
\wmk64,64
\w and \wk wait for the given number of frames.
\wm and \wmk wait until PMD has played back the current BGM for the
total number of measures, including loops, given in the first parameter,
and fall back on calling \w and \wk with the second parameter as the
frame number if BGM is disabled.
🐞 Neither PMD nor MMD reset the internal measure when stopping
playback. If no BGM is playing and the previous BGM hasn't been
played back for at least the given number of measures, this command
will deadlock.
Since both TH04 and TH05 fade in any new text from the invisible VRAM
page, these commands can be used to simulate TH03's typing effect in
those games. Demo video below.
Contrary to \k and \s, specifying 0 frames would
simply remove any frame delay instead of waiting forever.
The TH03-exclusive k variants allow the delay to be
interrupted if ⏎ Return or Shot are held down.
TH04 and TH05 recognize the k as well, but removed its
functionality.
All of these commands have no effect if ESC is held.
\wi1
\wo1
Calls master.lib's palette_white_in() or
palette_white_out() to play a hardware palette fade
animation from or to white, spending roughly 1 frame on each of the 16 fade steps.
\=4
Immediately displays the given quarter of the loaded .PI image in
the picture area, with no fade effect. Any value ≥ 4 resets the picture area to black.
\==4,1
Crossfades the picture area between its current content and quarter
#4 of the loaded .PI image, spending 1 frame on each of the 4 fade steps unless
ESC is held. Any value ≥ 4 is
replaced with quarter #0.
\$
Stops script execution. Must be called at the end of each file;
otherwise, execution continues into whatever lies after the script
buffer in memory.
TH05 automatically deallocates the loaded .PI image, TH03 and TH04
require a separate manual call to \p- to not leak its memory.
Bold values signify the default if the parameter
is omitted; \c is therefore
equivalent to \c15.
The \@ bug. Yes, the ¥ is fake. It
was easier to GIMP it than to reword the sentences so that the backslashes
landed on the second byte of a 2-byte half-width character pair.
The font weights and effects available through \b, including the glitch with
\b3 in TH04 and TH05.
Font weight 3 is technically not rendered correctly in TH03 either; if
you compare 1️⃣ with 4️⃣, you notice a single missing column of pixels
at the left side of each glyph, which would extend into the previous
VRAM byte. Ironically, the TH04/TH05 version is more correct in
this regard: For half-width glyphs, it preserves any further pixel
columns generated by the weight functions in the high byte of the 16-dot
glyph variable. Unlike TH03, which still cuts them off when rendering
text to unaligned X positions (3️⃣), TH04 and TH05 do bit-rotate them
towards their correct place (4️⃣). It's only at byte-aligned X positions
(2️⃣) where they remain at their internally calculated place, and appear
on screen as these glitched pixel columns, 15 pixels away from the glyph
they belong to. It's easy to blame bugs like these on micro-optimized
ASM code, but in this instance, you really can't argue against it if the
original C++ version was equally incorrect.
Combining \b and \s- into a partial dissolve
animation. The speed can be controlled with \v.
Simulating TH03's typing effect in TH04 and TH05 via \w. Even prettier in TH05 where we
also get an additional fade animation
after the box ends.
So yeah, that's the cutscene system. I'm dreading the moment I will have to
deal with the other command interpreter in these games, i.e., the
stage enemy system. Luckily, that one is completely disconnected from any
other system, so I won't have to deal with it until we're close to finishing
MAIN.EXE… that is, unless someone requests it before. And it
won't involve text encodings or unblitting…
The cutscene system got me thinking in greater detail about how I would
implement translations, being one of the main dependencies behind them. This
goal has been on the order form for a while and could soon be implemented
for these cutscenes, with 100% PI being right around the corner for the TH03
and TH04 cutscene executables.
Once we're there, the "Virgin" old-school way of static translation patching
for Latin-script languages could be implemented fairly quickly:
Establish basic UTF-8 parsing for less painful manual editing of the
source files
Procedurally generate glyphs for the few required additional letters
based on existing font ROM glyphs. For example, we'd generate ä
by painting two short lines on top of the font ROM's a glyph,
or generate ¿ by vertically flipping the question mark; see the
sketch after this list. This way, the text retains a consistent look
regardless of whether the translated game is run with an NEC or EPSON
font ROM, or the font that Neko Project II auto-generates if you
don't provide either.
(Optional) Change automatic line breaks to work on a per-word
basis, rather than per-glyph
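As a sketch of that generation step, assuming 8×16 glyphs stored as 16 row bytes; the exact dot placement for the umlaut is a guess:

```cpp
#include <array>
#include <stdint.h>

using Glyph8x16 = std::array<uint8_t, 16>;

// ä: paint two short lines on top of the font ROM's 'a'
Glyph8x16 GenerateAUmlaut(const Glyph8x16& a)
{
	Glyph8x16 ret = a;
	ret[1] = 0b01101100; // assumed to be a free row above the x-height
	return ret;
}

// ¿: vertically flip the question mark
Glyph8x16 GenerateInvertedQuestion(const Glyph8x16& question)
{
	Glyph8x16 ret;
	for (int y = 0; y < 16; y++) {
		ret[y] = question[15 - y];
	}
	return ret;
}
```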
That's it – script editing and distribution would be handled by your local
translation group. It might seem as if this would also work for Greek and
Cyrillic scripts due to their presence in the PC-98 font ROM, but I'm not
sure if I want to attempt procedurally shrinking these glyphs from 16×16 to
8×16… For any more thorough solution, we'd need to go for a more "Chad" kind
of full-blown translation support:
Implement text subdivisions at a sensible granularity while retaining
automatic line and box breaks
Compile translatable text into a Japanese→target language dictionary
(I'm too old to develop any further translation systems that would overwrite
modded source text with translations of the original text)
Implement a custom Unicode font system (glyphs would be taken from GNU
Unifont unless translators provide a different 8×16 font for their
language)
Combine the text compiler with the font compiler to only store needed
glyphs as part of the translation's font file (dealing with a multi-MB font
file would be rather ugly in a Real Mode game)
Write a simple install/update/patch stacking tool that supports both
.HDI and raw-file DOSBox-X scenarios (it's different enough from thcrap to
warrant a separate tool – each patch stack would be statically compiled into
a single package file in the game's directory)
Add a nice language selection option to the main menu
(Optional) Support proportional fonts
Which sounds more like a separate project to be commissioned from
Touhou Patch Center's Open Collective funds, separate from the ReC98 cap.
This way, we can make sure that the feature is completely implemented, and I
can talk with every interested translator to make sure that their language
works.
It's still cheaper overall to do this on PC-98 than to first port the games
to a modern system and then translate them. On the other hand, most
of the tasks in the Chad variant (3, 4, 5, and half of 2) purely deal with
the difficulty of getting arbitrary Unicode characters to work natively in a
PC-98 DOS game at all, and would be either unnecessary or trivial if we had
already ported the game. Depending on where the patrons' interests lie, it
may not be worth it. So let's see what all of you think about which
way we should go, or whether it's worth doing at all. (Edit
(2022-12-01): With Splashman's
order towards the stage dialogue system, we've pretty much confirmed that it
is.) Maybe we want to meet in the middle – using e.g. procedural glyph
generation for dynamic translations to keep text rendering consistent with
the rest of the PC-98 system, and just not support non-Latin-script
languages in the beginning? In any case, I've added both options to the
order form. Edit (2023-07-28): Touhou Patch Center has agreed to fund
a basic feature set somewhere between the Virgin and Chad level. Check the
📝 dedicated announcement blog post for more
details and ideas, and to find out how you can support this goal!
Surprisingly, there was still a bit of RE work left in the third push after
all of this, which I filled with some small rendering boilerplate. Since I
also wanted to include TH02's playfield overlay functions,
1/15 of that last push went towards getting a
TH02-exclusive function out of the way, which also ended up including that
game in this delivery.
The other small function pointed out how TH05's Stage 5 midboss pops into
the playfield quite suddenly, since its clipping test thinks it's only 32
pixels tall rather than 64:
Good chance that the pop-in might have been intended. Edit (2023-06-30): Actually, it's a
📝 systematic consequence of ZUN having to work around the lack of clipping in master.lib's sprite functions.
There's even another quirk here: The white flash during its first frame
is actually carried over from the previous midboss, which the
game still considers as actively getting hit by the player shot that
defeated it. It's the regular boilerplate code for rendering a
midboss that resets the responsible damage variable, and that code
doesn't run during the defeat explosion animation.
Next up: Staying with TH05 and looking at more of the pattern code of its
boss fights. Given the remaining TH05 budget, it makes the most sense to
continue in in-game order, with Sara and the Stage 2 midboss. If more money
comes in towards this goal, I could alternatively go for the Mai & Yuki
fight and immediately develop a pretty fix for the cheeto storage
glitch. Also, there's a rather intricate
pull request for direct ZMBV decoding on the website that I've still got
to review…
P0218
Website (Video pipeline, part 1/2: Preparations / Source file gathering)
P0219
Website (Video pipeline, part 2/2: Encoding / Tweaking settings / Caching)
P0220
Website (Video player, part 1/3: Basic controls and frame-accurate seeking)
P0221
Website (Video player, part 2/3: Tabs and markers)
P0222
Website (Video player, part 3/3: Dynamic captions and useful fullscreen mode)
💰 Funded by:
[Anonymous], Yanga, Ember2528
Yes, I'm still alive. This delivery was just plagued by all of the worst
luck: Data loss, physical hard drive failure, exploding phone batteries,
minor illness… and after taking 4 weeks to recover from all of that, I had
to face this beast of a task. 😵
Turns out that neither part of improving video performance and usability on
this blog was particularly easy. Decently encoding the videos into all
web-supported formats required unexpected trade-offs even for the low-res,
low-color material we are working with, and writing custom video player
controls added the timing precision resistance of HTML
<video> on top of the inherent complexity of frontend web
development. Why did this need to be 800 lines of commented JavaScript and
200 lines of commented CSS, and consume more than 5 pushes?!
Apparently, the latest price increase also seemed to have raised the minimum
level of acceptable polish in my work, since that's more than the maximum of
3.67 pushes it should have taken. To fund the rest, I stole some of the
reserved JIS trail word rendering research pushes, which means that the
next pushes ordered towards anything will go back towards that goal.
The codec situation is especially sad because it seems like so much of a
solved problem. ZMBV, the lossless capture codec introduced by DOSBox, is
both very well suited for retro game footage and remarkably simple too:
DOSBox-X's implementation of both an encoder and decoder comes in at under
650 lines of C++, excluding the Deflate implementation. Heck, the AVI
container around the codec is more complicated to write than the
compressed video data itself, and AVI is already the easiest choice you have
for a widely supported video container format.
Currently, this blog contains 9:02 minutes of video across 86 files, with a
total frame count of 24,515. In case this post attracts a general video
encoding audience that isn't familiar with what I'm encoding here: The
maximum resolution is 640×400, and most of the video uses 16 colors, with
some parts occasionally using more. With ZMBV, the lossless source files
take up 43.8 MiB, and that's even with AVI's infamously bad
overhead. While you can always spend more time on any compression task and
precisely tune your algorithm to match your source data even better,
43.8 MiB looks like a more than reasonable amount for this type of
content.
Especially compared with what I actually have to ship here, because sadly,
ZMBV is not supported by browsers. 😔 Writing a WebAssembly player for ZMBV
would have certainly been interesting, but it already took 5 pushes to get
to what we have now. So, let's instead shell out to ffmpeg and build a
pipeline to convert ZMBV to the ill-suited codecs supported by web browsers,
replacing the previously committed VP9 and VP8 files. From that point, we
can then look into AV1, the latest and greatest web-supported video codec,
to save some additional bandwidth.
But first, we've got to gather all the ZMBV source files. While I was
working on the 📝 2022-07-10 blog post, I
noticed some weirdly washed-out colors in the converted videos, leading to
the shocking realization that my previous, historically grown conversion
script didn't actually encode in a lossless way. 😢 By extension,
this meant that every video before that post could have had minor
discolorations as well.
For the majority of videos, I still had the original ZMBV capture files
straight out of DOSBox-X, and reproducing the final videos wasn't too big of
a deal. For the few cases where I didn't, I went the extra mile, took the
VP9 files, and manually fixed up all the minor color errors based on
reference videos from the same gameplay stage. There might be a huge ffmpeg
command line with a complicated filter graph to do the job, but for such a
small 4-digit number of frames, it is much more straightforward to just dump
each frame as an image and perform the color replacement with ImageMagick's
-opaque and -fill options.
So, time to encode our new definite collection of source files into AV1, and
what the hell, how slow is this codec? With ffmpeg's
libaom-av1, fully encoding all 86 videos takes almost 9
hours on my mid-range
development system, regardless of the quality selected.
But sure, the encoded videos are managed by a cache, and this obviously only
needs to be done once. If the results are amazing, they might even justify
these glacial encoding speeds. Unfortunately, they don't: In its lossless
-crf 0 mode, AV1 performs even worse than VP9, taking up
222 MiB rather than 182 MiB. It might not sound bad now,
but as we're later going to find out, we want to have a lot of
keyframes in these videos, which will blow up video sizes even further.
So, time to go lossy and maybe take a deep dive into AV1 tuning? Turns out
that it only gets worse from there:
The alternative libsvtav1 encoder is fast and creates small
files… but even on the highest-quality settings, -crf 0 and
-qp 0, the video quality resembled the terrible x264 YUV420P
format that Twitter enforces on uploaded videos.
I don't remember the librav1e results, but they sure
weren't convincing either.
libaom-av1's -usage realtime option is a
complete joke. 771 MiB for all videos, and it doesn't even compress
in real time on my system, taking more like 2.5× the videos' combined
duration. For comparison,
a certain stone-age technology by the name of "animated GIF" would take
54.3 MiB, encode in sub-realtime (0.47×), and the only necessary tuning
you need is an easily
googled palette generation and usage filter. Why can't I just use
those in a <video> tag?! These results have
clearly proven the top-voted just use modern video codecs Stack
Overflow answers wrong.
What you're actually supposed to do is to drop -cpu-used to
maybe 2 or 3, and then selectively add back prediction filters that suit
your type of content. In our case, these would be flags like
-enable-intrabc, and maybe others, depending on how much time you want to waste.
Because that's what all this tuning ended up being: a complete waste of
time. No matter which tuning options I tried, all they did was cut down
encoding time in exchange for slightly larger files on average. If there is
a magic tuning option that would suddenly cause AV1 to maybe even beat ZMBV,
I haven't found it. Heck, at particularly low settings,
-enable-intrabc even caused blocky glitches with certain pellet
patterns that looked like the internal frame block hashes were colliding all
over the place. Unfortunately, I didn't save the video where it happened.
So yeah, if you've already invested the computation time and encoded your
content by just specifying a -crf value and keeping the
remaining settings at their time-consuming defaults, any further tuning will
make no difference. Which is… an interesting choice from a usability
perspective. I would have expected the exact
opposite: default to a reasonably fast and efficient profile, and leave the
vast selection of tuning options for those people to explore who do
want to wait 5× as long for their encodes in exchange for that additional 5% of
compression efficiency. On the other hand, that surely is one way to get
people to extensively study your glorious engineering efforts, I guess? You
know what would maybe even motivate people to intrinsically do that?
Good documentation, with examples of the intent behind every option and its
optimal use case. Nobody needs long help strings that just spell out all of
the abbreviations that occur in the name of the option…
But hey, that at least means there's no reason to use anything but ZMBV
for storing and archiving the lossless source files. Best compression
efficiency, encodes in real-time, and the files are much easier to edit.
OK, end of rant. To understand why anyone could be hyped about AV1 to begin
with, we just have to compare it to VP9, not to ZMBV. In that light, AV1
is pretty impressive even at -crf 1, compressing all 86
videos to 68.9 MiB, and even preserving 22.3% of frames completely
losslessly. The remaining frames exhibit the exact kind of quality loss
you'd want for retro game footage: Minor discoloration in individual pixels,
so minuscule that subtracting the encoded image from the source yields an
almost completely black image. Even after highlighting the errors by
normalizing such a difference image, they are barely visible even if you
know where to look. If "compressed PNG size of the normalized difference
between ZMBV and AV1 -crf 1" is a useful metric, this would be
its median frame among the 77.7% of non-lossless frames:
Whether you can actually spot the difference is pretty much down to the
glass between the physical pixels and your eyes. In any case, it's very
hard, even if you know where to look. As far as I'm concerned, I can
confidently call this "visually lossless", and it's definitely good enough
for regular watching and even single-frame stepping on this blog.
Since the appeal of the original lossless files is undeniable though, I also
made those more easily available. You can directly download the one for the
currently active video with the ⍗ button in the new video player – or directly
get all of them from the Git repository if you don't like clicking.
Unfortunately, even that only made up for half of the complexity in this
pipeline. As impressive as the AV1 -crf 1 result may be, it
does in fact come with the drawback of also being impressively heavy to
decode within today's browsers. Seeking is dog slow, with even the latencies
for single-frame stepping being way beyond what I'd consider
tolerable. To compensate, we have to invest another 78 MiB into turning
every 10th frame into a keyframe until single-stepping through an
entire video becomes as fast as it could be on my system.
But fine, 146 MiB, that's still less than the 178 MiB that the old
committed VP9 files used to take up. However, we still want to support VP9
for older browsers, older
hardware, and people who use Safari. And it's this codec where keyframes
are so bad that there is no clear best solution, only compromises. The main
issue: The lower you turn VP9's -crf value, the slower the
seeking performance with the same number of keyframes. Conversely,
this means that raising quality also requires more keyframes for the same
seeking performance – and at these file sizes, you really don't want to
raise either. We're talking 1.2 GiB for all 86 videos at
-crf 10 and -g 5, and even on that configuration,
seeking takes 1.3× as long as it would in the optimal case.
Thankfully, a full VP9 encode of all 86 videos only takes some 30 minutes as
opposed to 9 hours. At that speed, it made sense to try a larger number of
encoding settings during the ongoing development of the player. Here's a
table with all the trials I've kept:
| Codec | -crf | -g | Other parameters | Total size | Seek time |
|---|---|---|---|---|---|
| VP9 | 32 | 20 | -vf format=yuv420p | 111 MiB | 32 s |
| VP8 | 10 | 30 | -qmin 10 -qmax 10 -b:v 1G | 120 MiB | 32 s |
| VP8 | 7 | 30 | -qmin 7 -qmax 7 -b:v 1G | 140 MiB | 32 s |
| **AV1** | **1** | **10** | | **146 MiB** | **32 s** |
| VP8 | 10 | 20 | -qmin 10 -qmax 10 -b:v 1G | 147 MiB | 32 s |
| VP8 | 6 | 30 | -qmin 6 -qmax 6 -b:v 1G | 149 MiB | 32 s |
| VP8 | 15 | 10 | -qmin 15 -qmax 15 -b:v 1G | 177 MiB | 32 s |
| VP8 | 10 | 10 | -qmin 10 -qmax 10 -b:v 1G | 225 MiB | 32 s |
| VP9 | 32 | 10 | -vf format=yuv422p | 329 MiB | 32 s |
| VP8 | 0-4 | 10 | -qmin 0 -qmax 4 -b:v 1G | 376 MiB | 32 s |
| VP8 | 5 | 30 | -qmin 5 -qmax 5 -b:v 1G | 169 MiB | 33 s |
| VP9 | 63 | 40 | | 47 MiB | 34 s |
| VP9 | 32 | 20 | -vf format=yuv422p | 146 MiB | 34 s |
| VP8 | 4 | 30 | -qmin 0 -qmax 4 -b:v 1G | 192 MiB | 34 s |
| VP8 | 4 | 40 | -qmin 4 -qmax 4 -b:v 1G | 168 MiB | 35 s |
| VP9 | 25 | 20 | -vf format=yuv422p | 173 MiB | 36 s |
| VP9 | 15 | 15 | -vf format=yuv422p | 252 MiB | 36 s |
| VP9 | 32 | 25 | -vf format=yuv422p | 118 MiB | 37 s |
| VP9 | 20 | 20 | -vf format=yuv422p | 190 MiB | 37 s |
| VP9 | 19 | 21 | -vf format=yuv422p | 187 MiB | 38 s |
| VP9 | 32 | 10 | | 553 MiB | 38 s |
| VP9 | 32 | 10 | -tune-content screen | 553 MiB | |
| VP9 | 32 | 10 | -tile-columns 6 -tile-rows 2 | 553 MiB | |
| **VP9** | **15** | **20** | **-vf format=yuv422p** | **207 MiB** | **39 s** |
| VP9 | 10 | 5 | | 1210 MiB | 43 s |
| VP9 | 32 | 20 | | 264 MiB | 45 s |
| VP9 | 32 | 20 | -vf format=yuv444p | 215 MiB | 46 s |
| VP9 | 32 | 20 | -vf format=gbrp10le | 272 MiB | 49 s |
| VP9 | 63 | | | 24 MiB | 67 s |
| VP8 | 0-4 | | -qmin 0 -qmax 4 -b:v 1G | 119 MiB | 76 s |
| VP9 | 32 | | | 107 MiB | 170 s |
The bold rows correspond to the final encoding choices that
are live right now. The seeking time was measured by holding → Right on
the 📝 cheeto dodge strategy video.
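Expressed as ffmpeg command lines, the two bold rows correspond to something like the following. The VP9 two-pass flags are omitted, and -b:v 0 is needed to put both encoders into their constant-quality mode; treat these as sketches rather than the pipeline's exact commands:

```
ffmpeg -i video.avi -c:v libaom-av1 -crf 1 -b:v 0 -g 10 video.av1.webm
ffmpeg -i video.avi -c:v libvpx-vp9 -crf 15 -b:v 0 -g 20 -vf format=yuv422p video.vp9.webm
```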
Yup, the compromise ended up including a chroma subsampling conversion to
YUV422P. That's the one thing you don't want to do for retro pixel
graphics, as it's the exact cause behind washed-out colors and red fringing
around edges:
The worst example of chroma subsampling in a VP9-encoded file according
to the above metric, from frame 130 (0-based) of
📝 Sariel's restored leaf "spark" animation,
featuring smeared-out contours and even an all-around darker image,
blowing up the image to a whopping 3653 colors. It's certainly an
aesthetic.
But there simply was no satisfying solution around the ~200 MiB mark
with RGB colors, and even this compromise is still a disappointment in both
size and seeking speed. Let's hope that Safari
users do get AV1 support soon… Heck, even VP8, with its exclusive
support for YUV420P, performs much better here, with the impact of
-crf on seeking speed being much less pronounced. Encoding VP8
also just takes 3 minutes for all 86 videos, so I could have experimented
much more. Too bad that it only matters for really ancient systems…
Two final takeaways about VP9:
-tune-content screen and the tile options make no
difference at all.
All results used two-pass encoding. VP9 is the only codec where two
passes made a noticeable difference, cutting down the final encoded size
from 224 MiB to 207 MiB. For AV1, compression even seems to be
slightly worse with two passes, yielding 154,201,892 bytes rather than the
153,643,316 bytes we get with a single pass. But that's a difference of
0.36%, and hardly significant.
Alright, now we're done with codecs and get to finish the work on the
pipeline with perhaps its biggest advantage. With a ffmpeg conversion
infrastructure in place, we can also easily output a video's first frame as
a poster image to be passed into the <video> tag.
If this image is kept at the exact resolution of the video, the browser
doesn't need to wait for an indeterminate amount of "video metadata" to be
loaded, and can reserve the necessary space in the page layout much faster
and without any of these dreaded loading spinners. For the big
/blog page, this cuts down the minimum amount of required
resources from 69.5 MB to 3.6 MB, finally making it usable again without
waiting an eternity for the page to fully load. It's become pretty bad, so I
really had to prioritize this task before adding any more blog posts on top.
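With ffmpeg already in the pipeline, extracting such a poster image is a one-liner along these lines:

```
ffmpeg -i video.av1.webm -frames:v 1 poster.png
```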
That leaves the player itself, which is basically a sum of lots of little
implementation challenges. Single-frame stepping and seeking to discrete
frames is the biggest one of them, as it's technically
not possible within the <video> tag, which only
returns the current time as a continuous value in seconds. It only sort
of works for us because the backend can pass the necessary FPS and frame
count values to the frontend. These allow us to place a discrete grid of
frame "frets" at regular intervals, and thus establish a consistent mapping
from frames to seconds and back. The only drawback here is a noticeably
weird jump back by one frame when pausing a video within the second half of
a frame, caused by snapping the continuous time in seconds back onto the
frame grid in order to maintain a consistent frame counter. But the whole
feature of frame-based seeking more than makes up for that.
The new scrubbable timeline might be even nicer to use with a mouse or a
finger than just letting a video play regularly. With all the tuning work I
put into keyframes, seeking is buttery smooth, and much better than the
built-in <video> UI of either Chrome or Firefox.
Unfortunately, it still costs a whole lot of CPU, but I'd say it's worth it.
🥲
Finally, the new player also has a few features that might not be
immediately obvious:
Keybindings for almost everything you might want them for, indicated by
hovering on top of each button. The tab switchers additionally support the
↑ Up and ↓ Down keys to cycle through all tabs, or the number keys
to jump to a specific tab. Couldn't find a way to indicate these mappings in
the UI yet.
Per-video captions now reserve the maximum height of any caption in the
layout. This prevents layout reflows when switching through such videos,
which previously caused quite annoying lag on the big /blog
page.
Useful fullscreen modes on both desktop and mobile, including all
markers and the video caption. Firefox made this harder than it needed to
be, and if it weren't for display: contents, the implementation
would have been even worse. In the end though, we didn't even need any video
pixel sizes from the backend – just as it should be…
… and supporting Firefox was definitely worth it, as it's the only
browser to support nearest-neighbor interpolation on videos.
As some of the Unicode codepoints on the buttons aren't covered by the
default fonts of some operating systems, I've taken them from the Catrinity font, licensed under the SIL
Open Font License. With all
the edits I did on this font, that license definitely was necessary. I
hope I applied it correctly though; it's not straightforward at all how to
properly license a Modified Version of an original font with a
Reserved Font Name.
And with that, development hell is over, and I finally get to return to the
core business! Just more than one month late.
Next up: Shipping the oldest still pending order, covering the TH04/TH05
ending script format. Meanwhile, the Seihou community also wants to keep
investing in Shuusou Gyoku, so we're also going to see more of that on the
side.
Thanks to handlerug for
implementing and PR'ing the feature in a very clean way. That makes at least
two people I know who wanted to see feed support, so there are probably
a few more out there.
So, Shuusou Gyoku. pbg released the original source code for the first two
Seihou games back in February 2019, but notably removed the crucial
decompression code for the original packfiles due to… various unspecified
reasons, considerations, and implications. This vague
language and subsequent rejection of a pull request
to add these features back in were probably the main reasons why no one
has publicly done anything with this codebase since.
The only other fork I know about is Priw8's private fork from 2020, but only
because WishMakers
informed me about it shortly after this push was funded. Both of them
might also contribute some features to my fork in the future if their time
allows it.
In this fork, Priw8 replaced packfile decompression with raw reads from
directories with the pre-extracted contents of all the .DAT files. This
works for playing the game, but there are actually two more things that
require the original packfile code:
High scores are stored as a bitstream with every variable separated by
an alternating 0 or 1 bit, using the same bit-level access functions as the
packfile reader. That's a quite… unique form of obfuscation: It requires way
too much code to read and write the format, and doesn't even obfuscate the
data that well because you can still see clear patterns when opening
these scorefiles in a hex editor.
Replays are 2-"file" archives compressed using the same algorithm as the
packfile. The first "file" contains metadata like the shot type, stage, and
RNG seed, and the second one contains the input state for every frame.
We can surely implement our own simple and uncompressed formats for these
things, but it's not the best idea to build all future Shuusou Gyoku
features on top of a replay-incompatible fork. So, what do we do? On the one
hand, pbg expressed the clear wish to not include data reverse-engineered
from the original binary. On the other hand, he released the code under the
MIT license, which allows us to modify the code and distribute the results
in any way we wish.
So, let's meet in the middle, and go for a clean-room implementation of the
missing features as indicated by their usage, without looking at either the
original binary or wangqr's reverse-engineered code.
With incremental rebuilds being broken in the latest Visual Studio project
files as well, it made sense to start from scratch on pbg's last commit. Of
course, I can't pass up a chance to use
📝 Tup, my favorite build system for every
project I'm the main developer of. It might not fit Shuusou Gyoku as well as
it fits ReC98, but let's see whether it would be reasonable at all…
… and it's actually not too bad! Modern Visual Studio makes this a bit
harder than it should be with all the intermediate build artifacts you have
to keep track of. In the end though, it's still only 70
lines of Lua to have a nice abstraction for both Debug and Release
builds. With this layer underneath, the actual
Shuusou Gyoku-specific part can be expressed as succinctly as in any
other modern build system, while still making every compiler flag explicit.
It might be slightly slower than a traditional .vcxproj build
due to launching
one cl.exe process per translation unit, but the result is
way more reliable and trustworthy compared to anything that involves Visual
Studio project files. This simplicity paves the way for expanding the build
process to multiple steps, and doing all the static checking on translation
strings that I never got to do for thcrap-based patches. Heck, I might even
compile all future translations directly into the binary…
Every C++ build system will invariably be hated by someone, so I'd
say that your goal should always be to simplify the actually important parts
of your build enough to allow everyone else to easily adapt it to their
favorite system. This Tupfile definitely does a better job there than your
average .vcxproj file – but if you still want such a thing (or,
gasp, 🤮 CMake project files 🤮) for better Visual Studio IDE
integration, you should have no problem generating them for yourself.
There might still be a point in doing that because that's the one part that
unfortunately sucks about this approach. Visual Studio is horribly broken
for any nonstandard C++ project even in 2022:
Makefile projects can be nicely integrated with Debug and Release
configurations, but setting a later C++ language standard requires dumb
.vcxproj hacks that don't even work properly anymore.
Folder projects are ridiculously ugly: The Build toolbar is permanently
grayed out even if you configured a build task. For some reason,
configuring these tasks merely adds one additional element to a 9-element
context menu in the Solution Explorer. Also, why does the big IDE use a
different JSON schema than the perfectly functional and adequate one from
Visual Studio Code?
In both cases, IntelliSense doesn't work properly at all even if it
appears to be configured correctly, and Tup's dependency tracking appeared
to be weirdly cut off for the very final .PDB file. Interestingly though,
using the big Visual Studio IDE for just debugging a binary via
devenv bin/GIAN07.exe suddenly eliminates all the IntelliSense
issues. Looks like there's a lot of essential information stored in the .PDB
files that Visual Studio just refuses to read in any other context.
But now compare that to Visual Studio Code: Open it from the x64_x86
Cross Tools Command Prompt via code ., launch a build or
debug task, or browse the code with perfect IntelliSense. Three small
configuration files and everything just works – heck, you even get the Tup
progress bar in the terminal. It might be Electron bloatware and horribly
slow at times, but Visual Studio Code has long outperformed regular Visual
Studio in terms of non-debug functionality.
On to the compression algorithm then… and it's just textbook LZSS,
with 13 bits for the offset of a back-reference and 4 bits for its length?
Hardly a trade secret there. The hard parts all come from unexpected
inefficiencies in the bitstream format:
Encoding back-references as offsets into an 8 KiB ring buffer dictionary
means that the most straightforward implementation actually needs an 8 KiB
array for the LZSS sliding window. This could have easily been done with
zero additional memory if the offset was encoded as the difference to the
current byte instead.
The packfile format stores the uncompressed size of every file in its
header, which is a good thing because you want to know in advance how much
heap memory to allocate for a specific file. Nevertheless, the original game
only stops reading bits from the packfile once it encounters a
back-reference with an offset of 0. This means that the compressor not only
has to write this technically unneeded back-reference to the end of the
compressed bitstream, but also ignore any potential other longest
back-reference with an offset of 0 within the file. The latter can
easily happen with a ring buffer dictionary.
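To have something concrete to point at, here's a minimal sketch of a
decompressor for such a format. Every identifier is mine, and details like
the bit order, the meaning of the flag bit, and the minimum back-reference
length are assumptions rather than reverse-engineered facts – but it shows
both the 8 KiB array and the role of the offset-0 terminator:

#include <cstddef>
#include <cstdint>
#include <span>
#include <vector>

struct BitStream {
	std::span<const uint8_t> buf;
	size_t bit = 0;

	// MSB-first bit order – an assumption, like every name in here.
	unsigned Read(size_t bits) {
		unsigned ret = 0;
		while(bits--) {
			ret = ((ret << 1) | ((buf[bit / 8] >> (7 - (bit % 8))) & 1));
			bit++;
		}
		return ret;
	}
};

std::vector<uint8_t> Decompress(BitStream& bits, size_t size_uncompressed)
{
	std::vector<uint8_t> ret;
	ret.reserve(size_uncompressed); // known in advance, thanks to the header

	// The 8 KiB array that a relative offset encoding wouldn't have needed
	uint8_t ring[8192] = { 0 };
	size_t ring_cursor = 0;

	auto emit = [&](uint8_t byte) {
		ret.push_back(byte);
		ring[(ring_cursor++) % 8192] = byte;
	};

	while(true) {
		if(bits.Read(1) == 1) {
			emit(static_cast<uint8_t>(bits.Read(8))); // literal byte
		} else {
			const unsigned offset = bits.Read(13); // absolute ring position
			if(offset == 0) {
				break; // the only way the stream can end
			}
			const unsigned length = (bits.Read(4) + 3); // +3 is an assumption
			for(unsigned i = 0; i < length; i++) {
				emit(ring[(offset + i) % 8192]);
			}
		}
	}
	return ret;
}

Note how the reserve() call is the only place that even uses the
uncompressed size from the header – the loop itself never looks at it.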
The original game used a single BIT_DEVICE class with mode
flags for every combination of reading and writing memory buffers and
on-disk files. Since that would have necessitated a lot of error checking
for all (pseudo-)methods of this class, I wrote one dedicated small class
for each one of these permutations instead. To further emphasize the
clean-room property of this code, these use modern C++ memory ownership
features: std::unique_ptr for the fixed-size read-only buffers
we get from packfiles, std::vector for the newly compressed
buffers where we don't know the size in advance, and std::span
for a borrowed reference to an immutable region of memory that we want to
treat as a bitstream. Definitely better than using the native Win32
LocalAlloc() and LocalFree() allocator, especially
if we want to port the game away from Windows one day.
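In sketch form, that ownership split looks like this (variable names are
hypothetical):

std::unique_ptr<uint8_t[]> packed; // fixed-size read-only packfile buffer
std::vector<uint8_t> compressed;   // grows while compressing, size unknown
std::span<const uint8_t> bits;     // borrowed, immutable bitstream view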
One feature I didn't use though: C++ fstreams, because those are trash.
These days, they would seem to be the natural
choice with the new std::filesystem::path type from C++17:
Correctly constructed, you can pass that type to an fstream constructor and
gain both locale independence on Windows and portability to
everything else, without writing any Windows-specific UTF-16 code. But even
in a Release build, fstreams add ~100 KB of locale-related bloat to the .EXE
which adds no value for just reading binary files. That's just too
embarrassing if you look at how much space the rest of the game takes up.
Writing your own platform layer that calls the Win32
CreateFileW(), ReadFile(), and
WriteFile() API functions is apparently still the way to go
even in 2022. And with std::filesystem::path still being a
welcome addition to C++, it's not too much code to write either.
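For illustration, the read-only half of such a layer might boil down to
something like this – a hedged sketch rather than the actual code, which
will certainly want better error reporting than a std::nullopt:

#include <cstdint>
#include <filesystem>
#include <optional>
#include <vector>
#include <windows.h>

std::optional<std::vector<uint8_t>> FileLoad(const std::filesystem::path& path)
{
	// .c_str() already is a UTF-16 wchar_t* on Windows – no conversion code
	HANDLE handle = CreateFileW(
		path.c_str(), GENERIC_READ, FILE_SHARE_READ, nullptr, OPEN_EXISTING,
		FILE_ATTRIBUTE_NORMAL, nullptr
	);
	if(handle == INVALID_HANDLE_VALUE) {
		return std::nullopt;
	}
	LARGE_INTEGER size;
	if(!GetFileSizeEx(handle, &size)) {
		CloseHandle(handle);
		return std::nullopt;
	}
	std::vector<uint8_t> buf(static_cast<size_t>(size.QuadPart));
	DWORD bytes_read = 0;
	const bool ok = (ReadFile(
		handle, buf.data(), static_cast<DWORD>(buf.size()), &bytes_read, nullptr
	) && (bytes_read == buf.size()));
	CloseHandle(handle);
	if(!ok) {
		return std::nullopt;
	}
	return buf;
}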
This gets us file format compatibility with the original release… and a
crash as soon as the ending starts, but only in Release mode? As it turns
out, this crash is caused by an
out-of-bounds array
access bug that was present even in the original game, and only turned
into a crash now because the optimizer in modern Visual Studio versions
reorders static data. As a result, the 6-element pFontInfo
array got placed in front of an ECL-related counter variable that then got
corrupted by the write to the 7th element, which subsequently
crashed the game with a read access to previously deallocated danmaku script
data. That just goes to show that these technical bugs are important
and worth fixing even if they don't cause issues in the original game. Who
knows how many of these will turn into crashes once we get to porting PC-98
Touhou?
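Schematically, the bug looks like this; the 6-element array size and the
write to a 7th element are taken from the actual code, everything else is
hypothetical:

FontInfo pFontInfo[6];
int ecl_counter; // hypothetical name for the counter that modern
                 // optimizers happen to place directly before the array

// Somewhere in the initialization code:
pFontInfo[6] = /* … */; // out of bounds; corrupts ecl_counter in the
                        // reordered data layout of a Release build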
So here we go, a new build of Shuusou Gyoku, compiled with Visual Studio
2022, and compatible with all original data formats:
Inside the regular Shuusou Gyoku installation directory, this binary works
as a full-fledged drop-in replacement for the original
秋霜玉.exe. It still has all of the original binary's problems
though:
Separate Japanese locale emulation is still needed to correctly refer to
the original names of the configuration (秋霜CFG.DAT), score
(秋霜SC.DAT), and replay (秋霜りぷ*.DAT) files.
It's also required for the ending text to not render as mojibake.
Running the game at full speed and without graphical glitches on modern
Windows still requires a separate DirectDraw patch such as DDrawCompat. To
eliminate any remaining flickering, configure the game to use 16-bit
graphics in the Config → Graphic menu.
As well as some of its own:
The original screenshot feature is still missing, as it also wasn't part
of pbg's released source code.
So all in all, it's a strict downgrade at this point in time.
And more of a symbol that we can now start
doing actual work on this game. Seihou has been a fun change of pace, and I
hope that I get to do more work on the series. There is quite a lot to be
done with Shuusou Gyoku alone, and the 21 GitHub issues I've opened
are probably only scratching the surface.
However, all the required research for this one consumed more like 1⅔
pushes. Despite just one push being funded, it wouldn't have made sense to
release the commits or this binary in any earlier state. To repay this debt,
I'm going to put the next for Seihou towards the
small code maintenance and performance tasks that I usually do for free,
before doing any more feature and bugfix work. Next up: Improving video
playback on the blog, and maybe delivering some microtransaction work on the
side?
On August 15, 1997, at Comiket 52, an unknown doujin developer going by the
name of ZUN released his first game, 東方靈異伝 ~
The Highly Responsive to Prayers, marking the start of the
Touhou Project game series that keeps running to this day. Today, exactly 25
years later, the C++ source code to version 1.10 of that game has been
completely and perfectly reconstructed, reviewed, and documented.
And with that, a warm welcome to all game journalists who have
(re-)discovered this project through these news! Here's a summary for
everyone who doesn't want to go through 3 years worth of blog posts:
What does this mean?
All code that ZUN wrote as part of a TH01 installation has now been
decompiled to C++ code. The only parts left in assembly are two third-party
libraries (master.lib and PiLoad), which were originally written in
assembly, and are built from their respective official source code.
You can clone the ReC98
repository, set up the build environment, and get a binary with an
identical program image. The hashes of the resulting executables won't match
those of ZUN's original release, but all differences there stem from details
in the .EXE header that don't influence program execution, such as the
on-disk order of the conceptually unordered set of x86 memory segment
relocations. If you're interested in that level of correctness, you can
order Easier verification against original binaries from the store.
For now though, use mzdiff for
verifying the builds against ZUN's binaries.
Ever since this crowdfunding started 3 years ago, the goal of this
project has shifted more and more towards a full-on code review rather than
being just a mechanical decompilation:
Hardcoded constants were derived from as few truly hardcoded values
as possible, which uncovered their intended meaning and highlighted any
inconsistencies
Code was deduplicated to a perhaps obsessive level (I'm still trying
to find a balance)
Tons of comments everywhere to put everything into context
And, of course, 2½ years worth of blog
posts summarizing any highlights, glitches, and secrets. (There
might still be some left to be discovered!)
As a result, modding the games and porting them away from the PC-98
platform is now a lot easier.
What does this not mean?
This is not a piracy release. ReC98 only provides the code that the
game's .EXE and .COM files are built out of. Without the rest of the
original data files, supplied from a pre-existing game copy, the code won't
do very much.
Even apart from ZUN's own code quality, the ReC98 repository is not as
polished and consistent as it could be, having seen multiple code structure
evolutions over the 8 years of its existence.
TH01 hasn't magically reached Doom levels of easy portability now. As a
decompilation of the exact code that ZUN wrote for the PC-98 platform, it is
very PC-98-native, and wildly mixes game logic with hardware
accesses. As ZUN's first foray into game development, he understandably
didn't see the need for writing an engine or hardware abstraction layer
yet.
So while this milestone opened the floodgates to PC-98-native mods, I
wouldn't advise attempting a port away from PC-98 right now. But then
again, I have a financial interest in being a part of the porting process,
and who knows, maybe you can just merge in a PC-98 emulator core and
get started with something halfway decent in a short amount of time. After
all, TH01 is by far the easiest PC-98 Touhou game to port to other systems,
as it makes the least use of hardware features. (Edit
(2023-03-30): 📝 Turns out that this
crown actually goes to TH02. It features the least amount of ZUN-written
PC-98-specific rendering code out of all the 5 games, with most of it
being decently abstracted via master.lib.)
However, this game in particular raises the question of what exactly
one would even want to port. TH01 is a broken flicker-fest that
overwhelmingly suffers the drawbacks of PC-98 hardware rather than using it
to its advantage. Out of the 78 bugs that I ended up labeling as such, the
majority are sprite blitting issues,
while you can count the instances of
good hardware use on one hand.
And even at the level of game logic, this game features a lot of
weird, inconsistent behavior. Less rigorous projects such as uth05win would
probably identify these issues as bugs and promptly fix them. On the one hand, this
shows that there is a part of the community that wants sane versions of
these games which behave as expected. In other parts of the community
though, such projects quickly gain the reputation of being too inaccurate to
bother about them.
Some terminology might help here. If you look over the ReC98 codebase,
you'll find that I classified any weird code into three categories.
Edit (2023-03-05): These have been overhauled with a new
landmine category for invisible issues. Check CONTRIBUTING.md
for the complete and current definition of all weird code
categories.
🔗 ZUN bugs: Broken
code that results from logic errors or incorrect programming
language/API/hardware use, with enough evidence in the code to indicate that
ZUN did not intend the bug. Fixing these issues must not affect hypothetical
replay compatibility, and any resulting visual changes must match ZUN's
provable intentions.
🔗 ZUN quirks:
Weird code that looks incorrect in context. Fixing these issues would change
gameplay enough to desync a hypothetical replay recorded on the original
version, or affect the visuals so much that the result is no longer faithful
to ZUN's original release. It might very well be called a fangame at that
point.
🔗 ZUN bloat:
Code that wastes memory, CPU cycles, or even just the mental capacity of
anyone trying to read and understand the code. If you want to write a
particularly resource-intensive mod, these are the places you could claim
some of those resources from.
Some examples:
All crashes are bugs
All blitting issues related to inappropriate VRAM byte alignment are
bugs
The idea of splitting TH01 across three executables is its biggest
source of bloat. It wastes disk space, the game doesn't even make use of the
memory gained from unloading unneeded code and data, it complicates the
build process and code structure with inconsistencies between the individual
binaries, and the required inter-process communication via shared memory
adds another piece of global state mutation headache.
Since I'm not in the business of writing fanfiction, I won't offer any
option that fixes quirks. That's where all of you can come in, and
use ReC98 as a base for remasters and remakes. As for bloat and bugs though,
there are many ways we could go from here:
If you want to ultimately try porting the game yourself, but still
support ReC98 somehow, I can recommend the ZUN code cleanup goal.
This is the most conservative option that leaves all bugs and quirks in
place and only removes bloat, rearchitecting the codebase so that
it's easier to work with.
For an improved gameplay experience on PC-98, choose the TH01
Anniversary Edition goal. In addition to the above code cleanup, this
goal fixes every bug with the game, most notably all the sprite
flickering by implementing a completely new renderer, while maintaining
hypothetical replay compatibility to ZUN's original release.
If you're mainly interested in seeing any variety of TH01 ported away
from PC-98 to any system, choose the Portability to non-PC-98 systems
goal. In this one, I'm going to develop the abstraction layers that would
ultimately bring this game to the aforementioned Doom level of portability,
while still keeping it running with better than original performance on
PC-98.
Replay support is also something you could order…
… as is Multilingual translation support (on PC-98), for those
sweet non-ASCII characters if that's your thing.
Then again, with all these choices in mind, maybe we should just let TH01 be
what it is: ZUN's first game, evidence for the truth that no programmer
writes good code the first time around, and more of a historical curiosity
than anything you'd want to maintain and modernize. The idea of moving on to
the next game and decompiling all 5 PC-98 Touhou games in order has
certainly proven to be
popular among the backers who funded this 100% goal.
Since the beginning of the year, I've been dramatically raising the level of
quality and care I've been putting into this project, leading to 9 of the 10
longest blog posts having been written in the past 8 months. The community
reception has been even more supportive as well, with all of you still
regularly selling out the store in return. To match the level of quality
with the community demand, I'm raising push prices from
to per push, as of this blog
post. 📝 As usual, I'm going to deliver any
existing orders in the backlog at the value they were originally purchased
at. Due to the way the cap has to be calculated, these contributions now
appear to have increased in value by 25%.
However, I do realize that this might make regular pushes prohibitively
expensive for some. This could especially prevent all these exciting modding
goals from ever getting off the ground. Thinking about it though, the push
system is only really necessary for the core reverse-engineering business,
where longer, concentrated stretches of work allow me to study a new piece
of code in a larger context and improve the quality of the final result. In
contrast, modding-related goals could theoretically be segmented into
arbitrarily small portions of work, as I have a clear idea of where I want
to go and how to get there.
Thus, I'm introducing microtransactions, now available for all
modding-related goals. These allow you to order fractional pieces of work
for as low as 1 €, which I will immediately deliver without requiring others
to fund a full push first. Edit (2022-08-16): And then the
store still sold out with a single regular contribution by
nrook towards more reverse-engineering. Guess that this
experiment will have to wait a little while longer, then… 😅
Next up: Taking a break and recovering from crunch time by improving video
playback on this blog and working on Shuusou Gyoku,
before returning to Touhou in September.
P0214
TH01 decompilation (Orb and Game Over animations + Pause, continue, and debug menus)
P0215
TH01 decompilation (REIIDEN.EXE main() function / 100%)
💰 Funded by:
Ember2528, Yanga
🏷️ Tags:
Last blog post before the 100% completion of TH01! The final parts of
REIIDEN.EXE would feel rather out of place in a celebratory
blog post, after all. They provided quite a neat summary of the typical
technical details that are wrong with this game, and that I now get to
mention for one final time:
The Orb's animation cycle is maybe two frames shorter than it should
have been, showing its last sprite for just 1 frame rather than 3:
The text in the Pause and Continue menus is not quite correctly
centered.
The memory info screen hides quite a bit of information about the .PTN
buffers, and obscures even the info that it does show behind
misleading labels. The most vital information would have been that ZUN could
have easily saved 20% of the memory by using a structure without the
unneeded alpha plane… Oh, and the REWIRTE option
mapped to the ⬇️ down arrow key simply redraws the info screen. Might be
useful after a NODE CHEAK, which replaces the output
with its own, but stays within the same input loop.
But hey, there's an error message if you start REIIDEN.EXE
without a resident MDRV2 or a correctly prepared resident structure! And
even a good, user-friendly one, asking the user to launch the batch file
instead. For some reason, this convenience went out of fashion in the later
games.
The Game Over animation (how fitting) gives us TH01's final piece of weird
sprite blitting code, which seriously manages to include 2 bugs and 3 quirks
in under 50 lines of code. In test mode (game t or game
d), you can trigger this effect by pressing the ⬇️ down arrow key,
which certainly explains why I encountered seemingly random Game Over events
during all the tests I did with this game…
The animation appears to have changed quite a bit during development, to the
point that probably even ZUN himself didn't know what he wanted it to look
like in the end:
The original version unblits a 32×32 rectangle around Reimu that only
grows on the X axis… for the first 5 frames. The unblitting call is
only run if the corresponding sprite wasn't clipped at the edges of the
playfield in the frame before, and ZUN uses the animation's frame
number rather than the sprite loop variable to index the per-sprite
clip flag array. The resulting out-of-bounds access then reads the
sprite coordinates instead, which are never 0, thus interpreting
all 5 sprites as clipped.
This variant would interpret the declared 5 effect coordinates as
distinct sprites and unblit them correctly every frame. The end result
is rather wimpy though… hardly appropriate for a Game Over, especially
with the original animation in mind.
This variant would not unblit anything, and is probably closest to what
the final animation should have been.
Finally, we get to the big main() function, serving as the duct
tape that holds this game together. It may read rather disorganized with all
the (actually necessary) assignments and function calls, but the only
actual minor issue I've seen there is that you're robbed of any
pellet destroy bonus collected on the final frame of the final boss. There
is a certain charm in directly nesting the infinite main gameplay loop
within the infinite per-life loop within the infinite stage loop. But come
on, why is there no fourth scene loop? Instead, the
game just starts a new REIIDEN.EXE process before and after a
boss fight. With all the wildly mutated global state, that was probably a
much saner choice.
The final secrets can be found in the debug stage selection. ZUN
implemented the prompts using the C standard library's scanf()
function, which is the natural choice for quick-and-dirty testing features
like this one. However, the C standard library is also complete and utter
trash, and so it's not surprising that both of the scanf()
calls do… well, probably not what ZUN intended. The guaranteed out-of-bounds
memory access in the select_flag route prompt thankfully has no
real effect on the game, but it gets really interesting with the 面数 stage prompt.
Back in 2020, I already wrote about
📝 stages 21-24, and how they're loaded from actual data that ZUN shipped with the game.
As it now turns out, the code that maps stage IDs to STAGE?.DAT
scene numbers contains an explicit branch that maps any (1-based) stage
number ≥21 to scene 7. Does this mean that an Extra Stage was indeed planned
at some point? That branch seems way too specific to just be meant as a
fallback. Maybe
Asprey was on to something after all…
However, since ZUN passed the stage ID as a signed integer to
scanf(), you can also enter negative numbers. The only place
that kind of accidentally checks for them is the aforementioned stage
ID → scene mapping, which ensures that (1-based) stages < 5 use
the shrine's background image and BGM. With no checks anywhere else, we get
a new set of "glitch stages":
Stage -1, Stage -2, Stage -3, Stage -4, Stage -5
The scene loading function takes the entered 0-based stage ID value modulo
5, so these 4 are the only ones that "exist", and lower stage numbers will
simply loop around to them. When loading these stages, the function accesses
the data in REIIDEN.EXE that lies before the statically
allocated 5-element stages-of-scene array, which happens to encompass
Borland C++'s locale and exception handling data, as well as a small bit of
ZUN's global variables. In particular, the obstacle/card HP on the tile I
highlighted in green corresponds to the
lowest byte of the 32-bit RNG seed. If it weren't for that and the fact that
the obstacles/card HP on the few tiles before are similarly controlled by
the x86 segment values of certain initialization function addresses, these
glitch stages would be completely deterministic across PC-98 systems, and
technically canon…
Stage -4 is the only playable one here as it's the only stage to end up
below the
📝 heap corruption limit of 102 stage objects.
Completing it loads Stage -3, which crashes with a Divide Error
just like it does if it's directly selected. Unsurprisingly, this happens
because all 50 card bytes at that memory location are 0, so the first
division (or in this case, modulo operation) by the number of cards is a
division by zero, which crashes the game.
Stage -5 is modulo'd to 0 and thus loads the first regular stage. The only
apparent broken element there is the timer, which is handled by a completely
different function that still operates with a (0-based) stage ID value of
-5. Completing the stage loads Stage -4, which also crashes, but only
because its 61 cards naturally cause the
📝 stack overflow in the flip-in animation for any stage with more than 50 cards.
And that's REIIDEN.EXE, the biggest and most bloated PC-98
Touhou executable, fully decompiled! Next up: Finishing this game with the
main menu, and hoping I'll actually pull it off within 24 hours. (If I do,
we might all have to thank 32th
System, who independently decompiled half of the remaining 14
functions…)
P0212
TH01 decompilation (Stage bonus and TOTLE screens, part 1/2)
P0213
TH01 decompilation (Stage bonus and TOTLE screens, part 2/2 + Data finalization, part 2/2 + FUUIN.EXE 100%)
Wow, it's been 3 days and I'm already back with an unexpectedly long post
about TH01's bonus point screens? 3 days used to take much longer in my
previous projects…
Before I talk about graphics for the rest of this post, let's start with the
exact calculations for both bonuses. Touhou Wiki already got these right,
but it still makes sense to provide them here, in a format that allows you
to cross-reference them with the source code more easily. For the
card-flipping stage bonus:
Time: min((Stage timer * 3), 6553)
Continuous: min((Highest card combo * 100), 6553)
Bomb&Player: min(((Lives * 200) + (Bombs * 100)), 6553)
STAGE: min(((Stage number - 1) * 200), 6553)
BONUS Point: Sum of all above values * 10
The boss stage bonus is calculated from the exact same metrics, despite half
of them being labeled differently. The only actual differences are in the
higher multipliers and in the cap for the stage number bonus. Why remove it
if raising it high enough also effectively disables it?
Time: min((Stage timer * 5), 6553)
Continuous: min((Highest card combo * 200), 6553)
MIKOsan: min(((Lives * 500) + (Bombs * 200)), 6553)
Clear: min((Stage number * 1000), 65530)
TOTLE: Sum of all above values * 10
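Or, as plain C++ that you can cross-reference with the tables. Parameter
names are mine; the boss version works identically, just with the
multipliers from the second table and the raised 65530 cap on the stage
metric:

long StageBonus(long timer, long max_combo, long lives, long bombs, long stage)
{
	const auto capped = [](long metric) {
		return ((metric < 6553) ? metric : 6553);
	};
	return (10 * (
		capped(timer * 3) +
		capped(max_combo * 100) +
		capped((lives * 200) + (bombs * 100)) +
		capped((stage - 1) * 200)
	));
}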
The transition between the gameplay and TOTLE screens is one of the more
impressive effects showcased in this game, especially due to how wavy it
often tends to look. Aside from the palette interpolation (which is, by the
way, the first time ZUN wrote a correct interpolation algorithm between two
4-bit palettes), the core of the effect is quite simple. With the TOTLE
image blitted to VRAM page 1:
Shift the contents of a line on VRAM page 0 by 32 pixels, alternating
the shift direction between right edge → left edge (even Y
values) and the other way round (odd Y values)
Keep a cursor for the destination pixels on VRAM page 1 for every line,
starting at the respective opposite edge
Blit the 32 pixels at the VRAM page 1 cursor to the newly freed 32
pixels on VRAM page 0, and advance the cursor towards the other edge
Successive line shifts will then include these newly blitted 32 pixels
as well
Repeat (640 / 32) = 20 times, after which all new pixels
will be in their intended place
So it's really more like two interlaced shift effects with opposite
directions, starting on different scanlines. No trigonometry involved at
all.
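Reduced to plain byte arrays, one shift step of a single line fits into a
single hypothetical function. The real thing of course works on planar VRAM
through the EGC, and alternates [rightward] between even and odd lines:

#include <cstdint>
#include <cstring>

// [dst] models a line on VRAM page 0, [src] the same line of the finished
// TOTLE image on page 1, and [cursor] that line's read position on page 1,
// initialized to the edge opposite the freed one: 608 for rightward lines,
// 0 for leftward ones.
void shift_step(uint8_t (&dst)[640], const uint8_t (&src)[640],
	int& cursor, bool rightward)
{
	if(rightward) {
		memmove(&dst[32], &dst[0], (640 - 32)); // shift the line right…
		memcpy(&dst[0], &src[cursor], 32); // …fill the freed left edge…
		cursor -= 32;                      // …and advance towards the left
	} else {
		memmove(&dst[0], &dst[32], (640 - 32));
		memcpy(&dst[640 - 32], &src[cursor], 32);
		cursor += 32;
	}
	// After 20 such steps, every 32-pixel block of [src] has marched
	// exactly into its intended place within [dst].
}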
Horizontally scrolling pixels on a single VRAM page remains one of the few
📝 appropriate uses of the EGC in a fullscreen 640×400 PC-98 game,
regardless of the copied block size. The few inter-page copies in this
effect are also reasonable: With 8 new lines starting on each effect frame,
up to (8 × 20) = 160 lines are transferred at any given time, resulting
in a maximum of (160 × 2 × 2) = 640 VRAM page switches per frame for the newly
transferred pixels. Not that frame rate matters in this situation to begin
with though, as the game is doing nothing else while playing this effect.
What does sort of matter: Why 32 pixels every 2 frames, instead of 16
pixels on every frame? There's no performance difference between doing one
half of the work in one frame, or two halves of the work in two frames. It's
not like the overhead of another loop has a serious impact here,
especially with the PC-98 VRAM being said to have rather high
latencies. 32 pixels over 2 frames is also harder to code, so ZUN
must have done it on purpose. Guess he really wanted to go for that 📽
cinematic 30 FPS look 📽 here…
Removing the palette interpolation and transitioning from a black screen
to CLEAR3.GRP makes it a lot clearer how the effect works.
Once all the metrics have been calculated, ZUN animates each value with a
rather fancy left-to-right typing effect. As 16×16 images that use a single
bright-red color, these numbers would be
perfect candidates for gaiji… except that ZUN wanted to render them at the
more natural Y positions of the labels inside CLEAR3.GRP that
are far from aligned to the 8×16 text RAM grid. Not having been in the mood
for hardcoding another set of monochrome sprites as C arrays that day, ZUN
made the still reasonable choice of storing the image data for these numbers
in the single-color .GRC form– yeah, no, of course he once again
chose the .PTN hammer, and its
📝 16×16 "quarter" wrapper functions around nominal 32×32 sprites.
The three 32×32 TOTLE metric digit sprites inside
NUMB.PTN.
Why do I bring up such a detail? What's actually going on there is that ZUN
loops through and blits each digit from 0 to 9, and then continues the loop
with "digit" numbers from 10 to 19, stopping before the number whose ones
digit equals the one that should stay on screen. No problem with that in
theory, and the .PTN sprite selection is correct… but the .PTN
quarter selection isn't, as ZUN wrote (digit % 4)
instead of the correct ((digit % 10) % 4).
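In code, the typing loop for a single digit might have looked roughly like
this (all names are hypothetical):

for(int digit = 0; digit < (10 + shown); digit++) {
	// Sprite selection: correct. Quarter selection: should have been
	// ((digit % 10) % 4).
	ptn_put_quarter(left, top, (PTN_NUMB + ((digit % 10) / 4)), (digit % 4));
	delay(4); // the hardcoded 4 milliseconds
}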
Since .PTN quarters are indexed in a row-major way, the 10-19 part of the
loop thus ends up blitting 2 → 3 → 0 → 1 → 6 → 7 → 4 → 5 → (nothing):
This footage was slowed down to show one sprite blitting operation per
frame. The actual game waits a hardcoded 4 milliseconds between each
sprite, so even theoretically, you would only see roughly every
4th digit. And yes, we can also observe the empty quarter
here, only blitted if one of the digits is a 9.
Seriously though? If the deadline is looming and you've got to rush
some part of your game, a standalone screen that doesn't affect
anything is the best place to pick. At 4 milliseconds per digit, the
animation goes by so fast that this quirk might even add to its
perceived fanciness. It's exactly the reason why I've always been rather
careful with labeling such quirks as "bugs". And in the end, the code does
perform one more blitting call after the loop to make sure that the correct
digit remains on screen.
The remaining ¾ of the second push went towards transferring the final data
definitions from ASM to C land. Most of the details there paint a rather
depressing picture about ZUN's original code layout and the bloat that came
with it, but it did end on a real highlight. There was some unused data
between ZUN's non-master.lib VSync and text RAM code that I just moved away
in September 2015 without taking a closer look at it. Those bytes kind of
look like another hardcoded 1bpp image though… wait, what?!
Lovely! With no mouse-related code left in the game otherwise, this cursor
sprite provides some great fuel for wild fan theories about TH01's
development history:
Could ZUN have 📝 stolen the basic PC-98
VSync or text RAM function code from a source that also implemented mouse
support?
Or was this game actually meant to have mouse-controllable portions at
some point during development? Even if it would have just been the
menus.
… Actually, you know what, with all shared data moved to C land, I might as
well finish FUUIN.EXE right now. The last secret hidden in its
main() function: Just like GAME.BAT supports
launching the game in various debug modes from the DOS command line,
FUUIN.EXE can directly launch one of the game's endings. As
long as the MDRV2 driver is installed, you can enter
fuuin t1 for the 魔界/Makai Good Ending, or
fuuin t for 地獄/Jigoku Good Ending.
Unfortunately, the command-line parameter can only control the route.
Choosing between a Good or Bad Ending is still done exclusively through
TH01's resident structure, and the continues_per_scene array in
particular. But if you pre-allocate that structure somehow and set one of
the members to a nonzero value, it would work. Trainers, anyone?
Alright, gotta get back to the code if I want to have any chance of
finishing this game before the 15th… Next up: The final 17
functions in REIIDEN.EXE that tie everything together and add
some more debug features on top.
P0207
TH01 decompilation (YuugenMagan, part 1/5: Preparation)
P0208
TH01 decompilation (YuugenMagan, part 2/5: Helper functions)
P0209
TH01 decompilation (YuugenMagan, part 3/5: Main function)
P0210
TH01 decompilation (YuugenMagan, part 4/5: Eye opening/closing + 邪 colors)
P0211
TH01 decompilation (YuugenMagan, part 5/5: Quirk research + Data finalization, part 1/2 + Common part of endings)
Whew, TH01's boss code just had to end with another beast of a boss, taking
way longer than it should have and leaving uncomfortably little time for the
rest of the game. Let's get right into the overview of YuugenMagan, the most
sequential and scripted battle in this game:
The fight consists of 14 phases, numbered (of course) from 0 to 13.
Unlike all other bosses, the "entrance phase" 0 is a proper gameplay-enabled
part of the fight itself, which is why I also count it here.
YuugenMagan starts with 16 HP, second only to Sariel's 18+6. The HP bar
visualizes the HP threshold for the end of phases 3 (white part) and 7
(red-white part), respectively.
All even-numbered phases change the color of the 邪 kanji in the stage
background, and don't check for collisions between the Orb and any eye.
Almost all of them consequently don't feature an attack, except for phase
0's 1-pixel lasers, spawning symmetrically from the left and right edges of
the playfield towards the center. Which means that yes, YuugenMagan is in
fact invincible during this first attack.
All other attacks are part of the odd-numbered phases:
Phase 1: Slow pellets from the lateral eyes. Ends
at 15 HP.
Phase 3: Missiles from the southern eyes, whose
angles first shift away from Reimu's tracked position and then towards
it. Ends at 12 HP.
Phase 5: Circular pellets sprayed from the lateral
eyes. Ends at 10 HP.
Phase 7: Another missile pattern, but this time
with both eyes shifting their missile angles by the same
(counter-)clockwise delta angles. Ends at 8 HP.
Phase 9: The 3-pixel 3-laser sequence from the
northern eye. Ends at 2 HP.
Phase 11: Spawns the pentagram with one corner out
of every eye, then gradually shrinks and moves it towards the center of
the playfield. Not really an "attack" (surprise) as the pentagram can't
reach the player during this phase, but collision detection is
technically already active here. Ends at 0 HP, marking the earliest
point where the fight itself can possibly end.
Phase 13: Runs through the parallel "pentagram
attack phases". The first five consist of the pentagram alternating its
spinning direction between clockwise and counterclockwise while firing
pellets from each of the five star corners. After that, the pentagram
slams itself into the player, before YuugenMagan loops back to phase
10 to spawn a new pentagram. On the next run through phase 13, the
pentagram grows larger and immediately slams itself into the player,
before starting a new pentagram attack phase cycle with another loop
back to phase 10.
Since the HP bar fills up in a phase with no collision detection,
YuugenMagan is immune to
📝 test/debug mode heap corruption. It's
generally impossible to get YuugenMagan's HP into negative numbers, with
collision detection being disabled every other phase, and all odd-numbered
phases ending immediately upon reaching their HP threshold.
All phases until the very last one have a timeout condition, independent
from YuugenMagan's current HP:
Phase 0: 331 frames
Phase 1: 1101 frames
Phases 2, 4, 6, 8, 10, and 12: 70 frames each
Phases 3 and 7: 5 iterations of the pattern, or
1845 frames each
Phase 5: 5 iterations of the pattern, or 2230
frames
Phase 9: The full duration of the sequence, or 491
frames
Phase 11: Until the pentagram reaches its target
position, or 221 frames
This makes it possible to reach phase 13 without dealing a single point of
damage to YuugenMagan: (331 + 1101 + (6 × 70) + (2 × 1845) + 2230 + 491 +
221) = 8484 frames, or almost exactly 2½ minutes at 56.4 FPS, on any
difficulty.
Your actual time will certainly be higher though, as you will have to
HARRY UP at least once during the attempt.
And let's be real, you're very likely to subsequently lose a
life.
At a pixel-perfect 81×61 pixels, the Orb hitboxes are laid out rather
generously this time, reaching quite a bit outside the 64×48 eye sprites:
And that's about the only positive thing I can say about a position
calculation in this fight. Phase 0 already starts with the lasers being off
by 1 pixel from the center of the iris. Sure, 28 may be a nicer number to
add than 29, but the result won't be byte-aligned either way? This is
followed by the eastern laser's hitbox somehow being 24 pixels larger than
the others, stretching a rather unexpected 70 pixels compared to the 46 of
every other laser.
On a more hilarious note, the eye closing keyframe contains the following
(pseudo-)code, comprising the only real accidentally "unused" danmaku
subpattern in TH01:
// Did you mean ">= RANK_HARD"?
if(rank == RANK_HARD) {
	eye_north.fire_aimed_wide_5_spread();
	eye_southeast.fire_aimed_wide_5_spread();
	eye_southwest.fire_aimed_wide_5_spread();

	// Because this condition can never be true otherwise.
	// As a result, no pellets will be spawned on Lunatic mode.
	// (There is another Lunatic-exclusive subpattern later, though.)
	if(rank == RANK_LUNATIC) {
		eye_west.fire_aimed_wide_5_spread();
		eye_east.fire_aimed_wide_5_spread();
	}
}
Featuring the weirdly extended hitbox for the eastern laser, as well as
an initial Reimu position that points out the disparity between
byte-aligned rendering and the internal coordinates one final time.
After a few utility functions that look more like a quickly abandoned
refactoring attempt, we quickly get to the main attraction: YuugenMagan
combines the entire boss script and most of the pattern code into a single
2,634-instruction function, totaling 9,677 bytes inside
REIIDEN.EXE. For comparison, ReC98's version of this code
consists of at least 49 functions, excluding those I had to add to work
around ZUN's little inconsistencies, or the ones I added for stylistic
reasons.
In fact, this function is so large that Turbo C++ 4.0J refuses to generate
assembly output for it via the -S command-line option, aborting
with a Compiler table limit exceeded in function error.
Contrary to what the Borland C++ 4.0 User Guide suggests, this
instance of the error is not at all related to the number of function bodies
or any metric of algorithmic complexity, but is simply a result of the
compiler's internal text representation for a single function overflowing a
64 KiB memory segment. Merely shortening the names of enough identifiers
within the function can help to get that representation down below 64 KiB.
If you encounter this error during regular software development, you might
interpret it as the compiler's roundabout way of telling you that it inlined
way more function calls than you probably wanted to have inlined. Because
you definitely won't explicitly spell out such a long function
in newly-written code, right?
At least it wasn't the worst copy-pasting job in this
game; that trophy still goes to 📝 Elis. And
while the tracking code for adjusting an eye's sprite according to the
player's relative position is one of the main causes behind all the bloat,
it's also 100% consistent, and might have been an inlined class method in
ZUN's original code as well.
The clear highlight in this fight though? Almost no coordinate is
precisely calculated where you'd expect it to be. In particular, all
bullet spawn positions completely ignore the direction the eyes are facing:
Combining the bottom of the pupil with the exact horizontal center of the sprite as a whole might sound like a good idea, but looks especially wrong if the eye is facing right.
Here it's the other way round: OK for a right-facing eye, really wrong for a left-facing one.
Dude, the eye is even supposed to track the laser in this one!
Hint: That's not the center of the playfield. At least the pellets spawned from the corners are sort of correct, but with the corner coordinates precomputed, you could only get them wrong on purpose.
Due to their effect on gameplay, these inaccuracies can't even be called
"bugs", and made me devise a new "quirk" category instead. More on that in
the TH01 100% blog post, though.
While we did see an accidentally unused bullet pattern earlier, I can
now say with certainty that there are no truly unused danmaku
patterns in TH01, i.e., pattern code that exists but is never called.
However, the code for YuugenMagan's phase 5 reveals another small piece of
danmaku design intention that never shows up within the parameters of
the original game.
By default, pellets are clipped when they fly past the top of the playfield,
which we can clearly observe for the first few pellets of this pattern.
Interestingly though, the second subpattern actually configures its pellets
to fall straight down from the top of the playfield instead. You never see
this happening in-game because ZUN limited that subpattern to a downwards
angle range of 0x73 or 162°, resulting in none of its pellets
ever getting close to the top of the playfield. If we extend that range to a
full 360° though, we can see how ZUN might have originally planned the
pattern to end:
YuugenMagan's phase 5 patterns on every difficulty, with the
second subpattern extended to reveal the different pellet behavior that
remained in the final game code. In the original game, the eyes would stop
spawning bullets on the marked frame.
If we also disregard everything else about YuugenMagan that fits the
upcoming definition of quirk, we're left with 6 "fixable" bugs, all
of which are a symptom of general blitting and unblitting laziness. Funnily
enough, they can all be demonstrated within a short 9-second part of the
fight, from the end of phase 9 up until the pentagram starts spinning in
phase 13:
General flickering whenever any sprite overlaps an eye. This is caused
by only reblitting each eye every 3 frames, and is an issue all throughout
the fight. You might have already spotted it in the videos above.
Each of the two lasers is unblitted and blitted individually instead of
each operation being done for both lasers together. Remember how
📝 ZUN unblits 32 horizontal pixels for every row of a line regardless of its width?
That's why the top part of the left, right-moving laser is never visible,
because it's blitted before the other laser is unblitted.
ZUN forgot to unblit the lasers when phase 9 ends. This footage was
recorded by pressing ↵ Return in test mode (game t or
game d), and it's probably impossible to achieve this during
actual gameplay without TAS techniques. You would have to deal the required
6 points of damage within 491 frames, with the eye being invincible during
240 of them. Simply shooting up an Orb with a horizontal velocity of 0 would
also only work a single time, as boss entities always repel the Orb with a
horizontal velocity of ±4.
The shrinking pentagram is unblitted after the eyes were blitted,
adding another guaranteed frame of flicker on top of the ones in 1). Like in
2), the blockiness of the holes is another result of unblitting 32 pixels
per row at a time.
Another missing unblitting call in a phase transition, as the pentagram
switches from its not quite correctly interpolated shrunk form to a regular
star polygon with a radius of 64 pixels. Indirectly caused by the massively
bloated coordinate calculation for the shrink animation being done
separately for the unblitting and blitting calls. Instead of, y'know, just
doing it once and storing the result in variables that can later be
reused.
The pentagram is not reblitted at all during the first 100 frames of
phase 13. During that rather long time, it's easily possible to remove
it from VRAM completely by covering its area with player shots. Or HARRY UP pellets.
Definitely an appropriate end for this game's entity blitting code.
I'm really looking forward to writing a
proper sprite system for the Anniversary Edition…
And just in case you were wondering about the hitboxes of these pentagrams
as they slam themselves into Reimu:
62 pixels on the X axis, centered around each corner point of the star, 16
pixels below, and extending infinitely far up. The latter part becomes
especially devious because the game always collision-detects
all 5 corners, regardless of whether they've already clipped through
the bottom of the playfield. The simultaneously occurring shape distortions
are simply a result of the line drawing function's rather poor
re-interpolation of any line that runs past the 640×400 VRAM boundaries;
📝 I described that in detail back when I debugged the shootout laser crash.
Ironically, using fixed-size hitboxes for a variable-sized pentagram means
that the larger one is easier to dodge.
The final puzzle in TH01's boss code comes
📝 once again in the form of weird hardware
palette changes. The 邪 kanji on the background
image goes through various colors throughout the fight, which ZUN
implemented by gradually incrementing and decrementing either a single one
or none of the color's three 4-bit components at the beginning of each
even-numbered phase. The resulting color sequence, however, doesn't
quite seem to follow these simple rules:
Phase 0: #DD5邪
Phase 2: #0DF邪
Phase 4: #F0F邪
Phase 6: #00F邪, but at the
end of the phase?!
Phase 8: #0FF邪, at the start
of the phase, #0F5邪, at the end!?
Phase 10: #FF5邪, at the start of
the phase, #F05邪, at the end
Second repetition of phase 12: #005邪
shortly after the start of the phase?!
Adding some debug output sheds light on what's going on there:
Since each iteration of phase 12 adds 63 to the red component, integer
overflow will cause the color to infinitely alternate between dark-blue
and red colors on every 2.03 iterations of the pentagram phase loop. The
65th iteration will therefore be the first one with a dark-blue color
for a third iteration in a row – just in case you manage to stall the
fight for that long.
Yup, ZUN had so much trust in the color clamping done by his hardware
palette functions that he did not clamp the increment operation on the
stage_palette itself. Therefore, the 邪
colors and even the timing of their changes from Phase 6 onwards are
"defined" by wildly incrementing color components beyond their intended
domain, so much that even the underlying signed 8-bit integer ends up
overflowing. Given that the decrement operation on the
stage_palette is clamped though, this might be another
one of those accidents that ZUN deliberately left in the game,
📝 similar to the conclusion I reached with infinite bumper loops.
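In sketch form, the asymmetry looks like this (names are mine, and the
actual variable layout certainly differs):

int8_t stage_palette[16][3]; // 4-bit components, stored as signed bytes

void palette_set_all(void)
{
	// Clamps every component into the 0…15 hardware range before sending
	// it to the GDC – the only clamping that happens anywhere.
}

void phase_12_color_change(void)
{
	stage_palette[COL_JA][COMP_RED] += 63; // no clamping here, so the value
	// leaves its 0…15 domain immediately, and wraps around the int8_t
	// range soon after, producing the alternation observed above
	palette_set_all();
}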
But guess what, that's also the last time we're going to encounter this type
of palette component domain quirk! Later games use master.lib's 8-bit
palette system, which keeps the comfort of using a single byte per
component, but shifts the actual hardware color into the top 4 bits, leaving
the bottom 4 bits for added precision during fades.
OK, but now we're done with TH01's bosses! 🎉 That was the
8th PC-98 Touhou boss in total, leaving 23 to go.
With all the necessary research into these quirks going well into a fifth
push, I spent the remaining time in that one with transferring most of the
data between YuugenMagan and the upcoming rest of REIIDEN.EXE
into C land. This included the one piece of technical debt in TH01 we've
been carrying around since March 2015, as well as the final piece of the
ending sequence in FUUIN.EXE. Decompiling that executable's
main() function in a meaningful way requires pretty much all
remaining data from REIIDEN.EXE to also be moved into C land,
just in case you were wondering why we're stuck at 99.46% there.
On a more disappointing note, the static initialization code for the
📝 5 boss entity slots ultimately revealed why
YuugenMagan's code is as bloated and redundant as it is: The 5 slots really
are 5 distinct variables rather than a single 5-element array. That's why
ZUN explicitly spells out all 5 eyes every time, because the array he could
have just looped over simply didn't exist. 😕 And while these slot variables
are stored in a contiguous area of memory that I could just have
taken the address of and then indexed it as if it were an array, I
didn't want to annoy future port authors with what would technically be
out-of-bounds array accesses for purely stylistic reasons. At least it
wasn't that big of a deal to rewrite all boss code to use these distinct
variables, although I certainly had to get a bit creative with Elis.
Next up: Finding out how many points we got in TOTLE, and hoping that ZUN
didn't hide more unexpected complexities in the remaining 45 functions of
this game. If you have to spare, there are two ways
in which that amount of money would help right now:
I'm expecting another subscription transaction
from Yanga before the 15th, which would leave to
round out one final TH01 RE push. With that, there'd be a total of 5 left in
the backlog, which should be enough to get the rest of this game done.
I really need to address the performance and usability issues
with all the small videos in this blog. Just look at the video immediately
above, where I disabled the controls because they would cover the debug text
at the bottom… Edit (2022-10-31): …which is no longer an
issue with our 📝 custom video player.
I already reserved this month's anonymous contribution for this work, so it would take another to be turned into a full push.
P0205
TH01 decompilation (Mima, part 1/2: Patterns 1-4)
P0206
TH01 decompilation (Mima, part 2/2: Patterns 5-8 + main function) + Research (TH01's unexpected palette changes)
💰 Funded by:
[Anonymous], Yanga
🏷️ Tags:
Oh look, it's another rather short and straightforward boss with a rather
small number of bugs and quirks. Yup, contrary to the character's
popularity, Mima's premiere is really not all that special in terms of code,
and continues the trend established with
📝 Kikuri and
📝 SinGyoku. I've already covered
📝 the initial sprite-related bugs last November,
so this post focuses on the main code of the fight itself. The overview:
The TH01 Mima fight consists of 3 phases, with phases 1 and 3 each
corresponding to one half of the 12-HP bar.
📝 Just like with SinGyoku, the distinction
between the red-white and red parts is purely visual once again, and doesn't
reflect anything about the boss script. As usual, all of the phases have to
be completed in order.
Phases 1 and 3 cycle through 4 danmaku patterns each, for a total of 8.
The cycles always start on a fixed pattern.
3 of the patterns in each phase feature rotating white squares, thus
introducing a new sprite in need of being unblitted.
Phase 1 additionally features the "hop pattern" as the last one in its
cycle. This is the only pattern where Mima leaves the seal in the center of
the playfield to hop from one edge of the playfield towards the other, while
also moving slightly higher up on the Y axis, and staying on the final
position for the next pattern cycle. For the first time, Mima selects a
random starting edge, which is then alternated on successive cycles.
Since the square entities are local to the respective pattern function,
Phase 1 can only end once the current pattern is done, even if Mima's HP are
already below 6. This makes Mima susceptible to the
📝 test/debug mode HP bar heap corruption bug.
Phase 2 simply consists of a spread-in teleport back to Mima's initial
position in the center of the playfield. This would only have been strictly
necessary if phase 1 ended on the hop pattern, but is done regardless of the
previous pattern, and does provide a nice visual separation between the two
main phases.
That's it – nothing special in Phase 3.
And there aren't even any weird hitboxes this time. What is maybe
special about Mima, however, is how there's something to cover about all of
her patterns. Since this is TH01, it won't surprise anyone that the
rotating square patterns are one giant copy-pasta of unblitting, updating,
and rendering code. At least ZUN placed the core polar→Cartesian
transformation in a separate function for creating regular polygons
with an arbitrary number of sides, which might hint toward some more varied
shapes having been planned at one point?
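That separate function presumably amounts to the standard regular-polygon
construction – something along these lines, modulo TH01's fixed-point
sine/cosine tables (names are mine):

#include <cmath>
#include <numbers>

struct Point { float x, y; };

// Corner points of a regular [n]-gon around [center], rotated by [angle]
void polygon_corners(Point ret[], int n, Point center, float radius, float angle)
{
	for(int i = 0; i < n; i++) {
		const float a = (angle + ((i * 2.0f * std::numbers::pi_v<float>) / n));
		ret[i].x = (center.x + (radius * std::cos(a)));
		ret[i].y = (center.y + (radius * std::sin(a)));
	}
}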
5 of the 6 patterns even follow the exact same steps during square update
frames:
Calculate square corner coordinates
Unblit the square
Update the square angle and radius
Use the square corner coordinates for spawning pellets or missiles
Recalculate square corner coordinates
Render the square
Notice something? Bullets are spawned before the corner coordinates
are updated. That's why their initial positions seem to be a bit off – they
are spawned exactly in the corners of the square, it's just that it's
the square from 8 frames ago.
Mima's first pattern on Normal difficulty.
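Spelled out as code, the list above turns into something like this (names
are mine):

square_corners_calculate(corners, center, angle, radius); // 1)
square_unblit(corners);                                   // 2)
angle += ANGLE_DELTA;                                     // 3)
radius += RADIUS_DELTA;                                   //
pellets_spawn_from(corners);                              // 4) 8 frames old!
square_corners_calculate(corners, center, angle, radius); // 5)
square_render(corners);                                   // 6)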
Once ZUN reached the final laser pattern though, he must have noticed that
there's something wrong there… or maybe he just wanted to fire those
lasers independently from the square unblit/update/render timer for a
change. Spending an additional 16 bytes of the data segment for conveniently
remembering the square corner coordinates across frames was definitely a
decent investment.
When Mima isn't shooting bullets from the corners of a square or hopping
across the playfield, she's raising flame pillars from the bottom of the playfield within very specifically calculated
random ranges… which are then rendered at byte-aligned VRAM positions, while
collision detection still uses their actual pixel position. Since I don't
want to sound like a broken record all too much, I'll just direct you to
📝 Kikuri, where we've seen the exact same issue with the teardrop ripple sprites.
The conclusions are identical as well.
Mima's flame pillar pattern. This video was recorded on a particularly
unlucky seed that resulted in great disparities between a pillar's
internal X coordinate and its byte-aligned on-screen appearance, leading
to lots of right-shifted hitboxes.
Also note how the change from the meteor animation to the three-arm 🚫
casting sprite doesn't unblit the meteor, and leaves that job to
any sprite that happens to fly over those pixels.
However, I'd say that the saddest part about this pattern is how choppy it
is, with the circle/pillar entities updating and rendering at a meager 7
FPS. Why go that low on purpose when you can just make the game render ✨
smoothly ✨ instead?
So smooth it's almost uncanny.
The reason quickly becomes obvious: With TH01's lack of optimization, going
for the full 56.4 FPS would have significantly slowed down the game on its
intended 33 MHz CPUs, requiring more than cheap surface-level ASM
optimization for a stable frame rate. That might very well have been ZUN's
reason for only ever rendering one circle per frame to VRAM, and designing
the pattern with these time offsets in mind. It's always been typical for
PC-98 developers to target the lowest-spec models that could possibly still
run a game, and implementing dynamic frame rates into such an engine-less
game is nothing I would wish on anybody. And it's not like TH01 is
particularly unique in its choppiness anyway; low frame rates are actually a
rather typical part of the PC-98 game aesthetic.
The final piece of weirdness in this fight can be found in phase 1's hop
pattern, and specifically its palette manipulation. Just from looking at the
pattern code itself, each of the 4 hops is supposed to darken the hardware
palette by subtracting #444 from every color. At the last hop,
every color should have therefore been reduced to a pitch-black
#000, leaving the player completely blind to the movement of
the chasing pellets for 30 frames and making the pattern quite ghostly
indeed. However, that's not what we see in the actual game:
Nothing in the pattern's code would cause the hardware palette to get
brighter before the end of the pattern, and yet…
The expected version doesn't look all too unfair, even on Lunatic…
well, at least at the default rank pellet speed shown in this
video. At maximum pellet speed, it is in fact rather brutal.
Looking at the frame counter, it appears that something outside the
pattern resets the palette every 40 frames. The only known constant with a
value of 40 would be the invincibility frames after hitting a boss with the
Orb, but we're not hitting Mima here…
But as it turns out, that's exactly where the palette reset comes from: The
hop animation darkens the hardware palette directly, while the
📝 infamous 12-parameter boss collision handler function
unconditionally resets the hardware palette to the "default boss palette"
every 40 frames, regardless of whether the boss was hit or not. I'd classify
this as a bug: That function has no business doing periodic hardware palette
resets outside the invincibility flash effect, and it completely defies
common sense that it does.
That explains one unexpected palette change, but could this function
possibly also explain the other infamous one, namely, the temporary green
discoloration in the Konngara fight? That glitch comes down to how the game
actually uses two global "default" palettes: a default boss
palette for undoing the invincibility flash effect, and a default
stage palette for returning the colors back to normal at the end of
the bomb animation or when leaving the Pause menu. And sure enough, the
stage palette is the one with the green color, while the boss
palette contains the intended colors used throughout the fight. Sending the
latter palette to the graphics chip every 40 frames is what corrects
the discoloration, which would otherwise be permanent.
The green color comes from BOSS7_D1.GRP, the scrolling
background of the entrance animation. That's what turns this into a clear
bug: The stage palette is only set a single time in the entire fight,
at the beginning of the entrance animation, to the palette of this image.
Apart from consistency reasons, it doesn't even make sense to set the stage
palette there, as you can't enter the Pause menu or bomb during a blocking
animation function.
And just 3 lines of code later, ZUN loads BOSS8_A1.GRP, the
main background image of the fight. Moving the stage palette assignment
there would have easily prevented the discoloration.
But yeah, as you can tell, palette manipulation is complete jank in this
game. Why differentiate between a stage and a boss palette to begin with?
The blocking Pause menu function could have easily copied the original
palette to a local variable before darkening it, and then restored it after
closing the menu. It's not as easy for bombs, as the intended palette could
change between the start and end of the animation, but the code could have
still been simplified a lot if there were just one global "default palette"
variable instead of two. Heck, even the other bosses who manipulate their
palettes correctly only do so because they manually synchronize the two
after every change. The proper defense against bugs that result from wild
mutation of global state is to get rid of global state, and not to put up
safety nets hidden in the middle of existing effect code.
The easiest way of reproducing the green discoloration bug in
the TH01 Konngara fight, timed to show the maximum amount of time the
discoloration can possibly last.
In any case, that's Mima done! 7th PC-98 Touhou boss fully
decompiled, 24 bosses remaining, and 59 functions left in all of TH01.
In other thrilling news, my call for secondary funding priorities in new
TH01 contributions has given us three different priorities so far. This
raises an interesting question though: Which of these contributions should I
now put towards TH01 immediately, and which ones should I leave in the
backlog for the time being? Since I've never liked deciding on priorities,
let's turn this into a popularity contest instead: The contributions with
the least popular secondary priorities will go towards TH01 first, giving
the most popular priorities a higher chance to still be left over after TH01
is done. As of this delivery, we'd have the following popularity order:
TH05 (1.67 pushes), from T0182
Seihou (1 push), from T0184
TH03 (0.67 pushes), from T0146
Which means that T0146 will be consumed for TH01 next, followed by T0184 and
then T0182. I only assign transactions immediately before a delivery though,
so you all still have the chance to change up these priorities before the
next one.
Next up: The final boss of TH01 decompilation, YuugenMagan… if the current
or newly incoming TH01 funds happen to be enough to cover the entire fight.
If they don't turn out to be, I will have to pass the time with some Seihou
work instead, missing the TH01 anniversary deadline as a result.
Edit (2022-07-18): Thanks to Yanga for
securing the funding for YuugenMagan after all! That fight will feature
slightly more than half of all remaining code in TH01's
REIIDEN.EXE and the single biggest function in all of PC-98
Touhou, let's go!
P0203
TH01 decompilation (Card-flipping stages, part 3/4: Bumpers and turrets)
P0204
TH01 decompilation (Card-flipping stages, part 4/4: Portals + Bomb animation)
💰 Funded by:
GhostRiderCog, [Anonymous], Yanga
🏷️ Tags:
Let's start right with the milestones:
More than 50% of all PC-98 Touhou game code has now been
reverse-engineered! 🎉 While this number isn't equally distributed among the
games, we've got one game very close to 100% and reverse-engineered most of
the core features of two others. During the last 32 months of continuous
funding, I've averaged an overall speed of 1.11% total RE per month. That
looks like a decent prediction of how much more time it will take for 100%
across all games – unless, of course, I'd get to work towards some of the
non-RE goals in the meantime.
70 functions left in TH01, with less than 10,000 ASM instructions
remaining! Due to immense hype, I've temporarily raised the cap by 50% until
August 15. With the last TH01 pushes delivering at roughly 1.5× of the
currently calculated average speed, that should be more than enough to get
TH01 done – especially since I expect YuugenMagan to come with lots of
redundant code. Therefore, please also request a secondary priority for
these final TH01 RE contributions.
So, how did this card-flipping stage obstacle delivery get so horribly
delayed? With all the different layouts showcased in the 28 card-flipping
stages, you'd expect this to be among the more stable and bug-free parts of
the codebase. Heck, with all stage objects being placed on a 32×32-pixel
grid, this is the first TH01-related blog post this year that doesn't have
to describe an alignment-related unblitting glitch!
That alone doesn't mean that this code is free from quirky behavior though,
and we have to look no further than the first few lines of the collision
handling for round bumpers to already find a whole lot of that. Simplified,
they do the following:
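(A sketch from my side rather than ZUN's exact code – the identifiers are
invented, and the condition is my best guess at the structure:)

	int orb_moving_down = (orb_velocity_y > 0);	// assumed condition
	if(orb_moving_down) {
		orb_top = (bumper_top - 24);	// warp the Orb to the bumper's top edge
	} else {
		orb_top = (bumper_top + 24);	// …or to its bottom edge
	}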
Immediately, you wonder why these assignments only exist for the Y
coordinate. Sure, hitting a bumper from the left or right side should happen
less often, but it's definitely possible. Is it really a good idea to warp
the Orb to the top or bottom edge of a bumper regardless?
What's more important though: The fact that these immediate assignments
exist at all. The game's regular Orb physics work by producing a Y velocity
from the single force acting on the Orb and a gravity factor, and are
completely independent of its current Y position. A bumper collision does
also apply a new force onto the Orb further down in the code, but these
assignments still bypass the physics system and are bound to have
some knock-on effect on the Orb's movement.
To observe that effect, we just have to enter Stage 18 on the 地獄/Jigoku route, where it's particularly trivial to
reproduce. At a 📝 horizontal velocity of ±4,
these assignments are exactly what can cause the Orb to endlessly
bounce between two bumpers. As rudimentary as the Orb's physics may be, just
letting them do their work would have entirely prevented these loops:
One of at least three infinite bumper loop constellations within just
this 10×5-tile section of TH01's Stage 18 on the 地獄/Jigoku route. With an effective 56 horizontal
pixels between both hitboxes, the Orb would have to travel an absolute
Y distance of at least 16 vertical pixels within
(56 / 4) = 14 frames to escape the
other bumper's hitbox. If the initial bounce reduces the Orb's Y
velocity far enough for it to not manage that distance the first time,
it will never reach the necessary speed again. In this loop, the
bounce-off force even stabilizes, though this doesn't have to happen.
The blue areas indicate the pixel-perfect* hitboxes of each bumper.
TH01 bumper collision handling without ZUN's manual assignment of the Y
coordinate. The Orb still bounces back and forth between two bumpers
for a while, but its top position always follows naturally
from its Y velocity and the force applied to it, and gravity wins out
in the end. The blue areas indicate the pixel-perfect* hitboxes of each bumper.
Now, you might be thinking that these Y assignments were just an attempt to
prevent the Orb from colliding with the same bumper again on the next frame.
After all, those 24 pixels exactly correspond to ⅓ of the height of a
bumper's hitbox with an additional pixel added on top. However, the game
already perfectly prevents repeated collisions by turning off collision
testing with the same bumper for the next 7 frames after a collision. Thus,
we can conclude that ZUN either explicitly coded bumper collision handling
to facilitate these loops, or just didn't take out that code after
inevitably discovering what it did. This is not janky code, it's not a
glitch, it's not sarcasm from my end, and it's not the game's physics being
bad.
But wait. Couldn't these assignments just be a remnant from a time in
development before ZUN decided on the 7-frame delay on further
collisions? Well, even that explanation stops holding water after the next
few lines of code. Simplified, again:
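(Sketched with my own identifiers and simplified values once more:)

	switch(orb_velocity_x) {
		case -4:	orb_velocity_x = +4;	break;	// bounce back to the right
		case  0:	/* (bounce into either direction, details omitted) */	break;
		case +4:	orb_velocity_x = -4;	break;	// bounce back to the left
	}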
What's important here is the part that's not in the code – namely,
anything that handles X velocities of -8 or +8. In those cases, the Orb
simply continues in the same horizontal direction. The manual Y assignment
is the only part of the code that actually prevents a collision there, as
the newly applied force is not guaranteed to be enough:
An infinite loop across three bumpers, made possible by the edge of the
playfield and bumper bars on opposite sides, an unchanged horizontal
direction, and the Y assignments neatly placing the Orb on either the
top or bottom side of a bumper. The alternating sign of the force
further ensures that the Orb will travel upwards half the time,
canceling out gravity during the short time between two hitboxes.
With the unchanged horizontal direction and the Y assignments removed,
nothing keeps an Orb at ±8 pixels per frame from flying into/over a
bumper. The collision force pushes the Orb slightly, but not enough to
truly matter. The final force sends the Orb on a significant downward
trajectory beyond the next bumper's hitbox, breaking the original loop.
Forgetting to handle ⅖ of your discrete X velocity cases is simply not
something you do by accident. So we might as well say that ZUN deliberately
designed the game to behave exactly as it does in this regard.
Bumpers also come in vertical or horizontal bar shapes. Their collision
handling also turns off further collision testing for the next 7 frames, and
doesn't do any manual coordinate assignment. That's definitely a step up in
cleanliness from round bumpers, but it doesn't seem to keep in mind that the
player can fire a new shot every 4 frames when standing still. That makes it
immediately obvious why this works:
The green numbers show the amount of
frames since the last detected collision with the respective bumper bar,
and indicate that collision testing with the bar below is currently
disabled.
That's the most well-known case of reducing the Orb's horizontal velocity to
0 by exactly hitting it with shots in its center and then button-mashing it
through a horizontal bar. This also works with vertical bars and yields even
more interesting results there, but if we want to have any chance of
understanding what happens there, we have to first go over some basics:
Collision detection for all stage obstacles is done in row-major
order from the top-left to the bottom-right corner of the
playfield.
All obstacles are collision-tested independently from each other, with
the collision response code immediately following the test.
The hitboxes for bumper bars extend far past their 32×32 sprites to make
sure that the Orb can collide with them from any side. They are a
pixel-perfect* 87×56 pixels for horizontal bars, and 57×87 pixels for
vertical ones. Yes, that's no typo, they really do differ in one pixel.
Changing the Y velocity during such a collision just involves applying a
new force with the magnitude of the negated current Y velocity, which can be
done multiple times during a frame without changing the result. This
explains why the force is correctly inverted in the clip above, despite the
Orb colliding with two bumpers simultaneously.
Lacking a similar force system, the X coordinate is simply directly
inverted.
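Expressed in code – mine, not ZUN's, with orb_apply_force() standing in for
the game's force system – the asymmetry between the two axes boils down to:

	orb_apply_force(-orb_velocity_y);	// idempotent: a simultaneous second
	                                 	// collision applies the same force again
	orb_velocity_x = -orb_velocity_x;	// not idempotent: a simultaneous second
	                                 	// collision cancels out the first inversion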
However, if that were everything the game did, kicking the Orb into a column
of vertical bumper bars would lead them to behave more like a rope that the
Orb can climb, as the initial collision with two hitboxes cancels out the
intended sign change that reflects the Orb away from the bars:
This footage was recorded without the workaround I am about to describe.
It does not reflect the behavior of the original game. You
cannot do this in the original game.
While the visualization reveals small sections where three hitboxes
overlap, the Orb can never actually collide with three of them at the
same time, as those 3-hitbox regions are 2 pixels smaller than they
would need to be to fit the Orb. That's exactly the difference between
using < rather than <= in these hitbox
comparisons.
While that would have been a fun gameplay mechanic on its own, it
immediately breaks apart once you place two vertical bumper bars next to
each other. Due to how these bumper bar hitboxes extend past their sprites,
any two adjacent vertical bars will end up with the exact same hitbox in
absolute screen coordinates. Stage 17 on the
魔界/Makai route contains exactly such a layout:
The collision handlers of adjacent vertical bars always activate in the
same frame, independently invert the Orb's X velocity, and therefore
fully cancel out their intended effect on the Orb… if the game did not
have the workaround I am about to describe. This cannot happen
in the original game.
ZUN's workaround: Setting a "vertical bumper bar block flag" after any
collision with such a bar, which simply disables any collision with
any vertical bar for the next 7 frames. This quick hack made all
vertical bars work as intended, and avoided the need for involving the Orb's
X velocity in any kind of physics system.
Edit (2022-07-12): This flag only works around glitches
that would be caused by simultaneously colliding with more than one vertical
bar. The actual response to a bumper bar collision still remains unaffected,
and is very naive:
Horizontal bars always invert the Orb's Y velocity
Vertical bars invert either the Y or X velocity depending on whether
the Orb's current X velocity is 0 (Y) or not (X)
These conditions are only correct if the Orb comes in at an angle roughly
between 45° and 135° on either side of a bar. If it's anywhere close to 0°
or 180°, this response will be incorrect, and send the Orb straight
through the bar. Since the large hitboxes make this easily possible, you can
still get the Orb to climb a vertical column, or glide along a horizontal
row:
Here's the hitbox overlay for
地獄/Jigoku Stage 19, and here's an updated
version of the 📝 Orb physics debug mod that
now also shows bumper bar collision frame numbers:
2022-07-10-TH01OrbPhysicsDebug.zip
See the th01_orb_debug
branch for the code. To use it, simply replace REIIDEN.EXE, and
run the game in debug mode, via game d on the DOS prompt. If you
encounter a gameplay situation that doesn't seem to be covered by this blog
post, you can now verify it for yourself. Thanks to touhou-memories for bringing these
issues to my attention! That definitely was a glaring omission from the
initial version of this blog post.
With that clarified, we can now try mashing the Orb into these two vertical
bars:
At first, that workaround doesn't seem to make a difference here. As we
expect, the frame numbers now tell us that only one of the two bumper bars
in a row activates, but we couldn't have told otherwise as the number of
bars has no effect on newly applied Y velocity forces. On a closer look, the
Orb's rise to the top of the playfield is in fact caused by that
workaround though, combined with the unchanged top-to-bottom order of
collision testing. As soon as any bumper bar completed its 7
collision delay frames, it resets the aforementioned flag, which already
reactivates collision handling for any remaining vertical bumper bars during
the same frame. Look out for frames with both a 7 and a 1, like the one marked in the video above:
The 7 will always appear before
the 1 in the row-major order. Whenever
this happens, the current oscillation period is cut down from 7 to 6
frames – and because collision testing runs from top to bottom, this will
always happen during the falling part. Depending on the Y velocity, the
rising part may also be cut down to 6 frames from time to time, but that one
at least has a chance to last for the full 7 frames. This difference
adds those crucial extra frames of upward movement, which add up to send the
Orb to the top. Without the flag, you'd always see the Orb oscillating
between a fixed range of the bar column.
Finally, it's the "top of playfield" force that gradually slows down the Orb
and makes sure it ultimately only moves at sub-pixel velocities, which have
no visible effect. Because
📝 the regular effect of gravity is reset with
each newly applied force, it's completely negated during most of the climb.
This even holds true once the Orb has reached the top: Since the Orb requires a
negative force to repeatedly arrive up there and be bounced back, this force
will stay active for the first 5 of the 7 collision frames and not move the
Orb at all. Once gravity kicks in at the 5th frame and adds 1 to
the Y velocity, it's already too late: The new velocity can't be larger than
0.5, and the Orb only has 1 or 2 frames before the flag reset causes it to
be bounced back up to the top again.
Portals, on the other hand, turn out to be much simpler than the old
description that ended up on Touhou Wiki in October 2005 might suggest.
Everything about their teleportations is random: The destination portal, the
exit force (as an integer between -9 and +9), as well as the exit X
velocity, with each of the
📝 5 distinct horizontal velocities having an
equal chance of being chosen – which also means that the 0 velocity is
always selected with a constant 20% probability. And of course, if the
destination portal is next to the left or right edge of the playfield and
it chooses to fire the Orb towards that edge, the Orb immediately bounces
off into the opposite direction.
The selection process for the destination portal involves a bit more than a
single rand() call. The game bundles all obstacles in a single
structure of dynamically allocated arrays, and only knows how many obstacles
there are in total, not per type. Now, that alone wouldn't have much
of an impact on random portal selection, as you could simply roll a random
obstacle ID and try again if it's not a portal. But just to be extra cute,
ZUN instead iterates over all obstacles, selects any non-entered portal with
a chance of ¼, and just gives up if that dice roll wasn't successful after
16 loops over the whole array, defaulting to the entered portal in that
case.
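In code, the whole selection is equivalent to something like this – my
sketch, with invented names for the obstacle arrays:

	// <stdlib.h>: rand()
	int destination = portal_entered;	// the fallback
	for(int tries = 0; (tries < 16) && (destination == portal_entered); tries++) {
		for(int i = 0; i < obstacle_count; i++) {
			if(
				(obstacle_type[i] == OBSTACLE_PORTAL) &&
				(i != portal_entered) &&
				((rand() % 4) == 0)	// the ¼ chance
			) {
				destination = i;
				break;
			}
		}
	}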
In all its silliness though, this works perfectly fine. All 16 loops fail
with a chance of 0.75 per other portal, which results in a chance of
0.75^(16 × (𝑛 − 1)) for the Orb exiting out of the same portal it entered,
with 𝑛 being the total number of portals in a stage. That's 1% for two
portals, and 0.01% for three. Pretty decent for a
random result you don't want to happen, but that hurts nobody if it does.
The one tiny ZUN bug with portals is technically not even part of the newly
decompiled code here. If Reimu gets hit while the Orb is being sent through
a portal, the Orb is immediately kicked out of the portal it entered, no
matter whether it already shows up inside the sprite of the destination
portal. Neither of the two portal sprites is reset when this happens,
leading to "two Orbs" being visible simultaneously.
This makes very little sense no matter how you look at it. The Orb doesn't
receive a new velocity or force when this happens, so it will simply
re-enter the same portal once the gameplay resumes on Reimu's next life:
That left another ½ of a push over at the end. Way too much time to finish
FUUIN.exe, way too little time to start with Mima… but the bomb
animation fit perfectly in there. No secrets or bugs there, just a bunch of
sprite animation code wasting at least another 82 bytes in the data segment.
The special effect after the kuji-in sprites uses the same single-bitplane
32×32 square inversion effect seen at the end of Kikuri's and Sariel's
entrance animation, except that it's a 3-stack of 16-rings moving at 6, 7,
and 8 pixels per frame respectively. At these comparatively slow speeds, the
byte alignment of each square adds some further noise to the discoloration
pattern… if you even notice it below all the shaking and seizure-inducing
hardware palette manipulation.
And yes, due to the very destructive nature of the effect, the game does in
fact rely on it only being applied to VRAM page 0. While that will cause
every moving sprite to tear holes into the inverted squares along its
trajectory, keeping a clean playfield on VRAM page 1 is what allows all that
pixel damage to be easily undone at the end of this 89-frame animation.
Next up: Mima! Let's hope that stage obstacles already were the most complex
part remaining in TH01…
P0201
TH01 decompilation (SinGyoku, part 1/2: Preparation + sphere movement + patterns 1-2)
P0202
TH01 decompilation (SinGyoku, part 2/2: Patterns 3-6 + main function + Missiles, part 2/2 + YuugenMagan setup)
💰 Funded by:
Ember2528, Yanga, [Anonymous]
🏷️ Tags:
The positive:
It only took a record-breaking 1½ pushes to get SinGyoku done!
No 📝 entity synchronization code after
all! Since all of SinGyoku's sprites are 96×96 pixels, ZUN made the rather
smart decision of just using the sphere entity's position to render the
📝 flash and person entities – and their only
appearance is encapsulated in a single sphere→person→sphere transformation
function.
Just like Kikuri, SinGyoku's code as a whole is not a complete
disaster.
The negative:
It's still exactly as buggy as Kikuri, with both of the ZUN bugs being
rendering glitches in a single function once again.
It also happens to come with a weird hitbox, …
… and some minor questionable and weird pieces of code.
The overview:
SinGyoku's fight consists of 2 phases, with the first one corresponding
to the white part from 8 to 6 HP, and the second one to the rest of the HP
bar. The distinction between the red-white and red parts is purely visual,
and doesn't reflect anything about the boss script.
Both phases cycle between a pellet pattern and SinGyoku's sphere form
slamming itself into the player, followed by it slightly overshooting its
intended base Y position on its way back up.
Phase 1 only consists of the sphere form's half-circle spray pattern.
Technically, the phase can only end during that pattern, but adding
that one additional condition to allow it to end during the slam+return
"pattern" wouldn't have made a difference anyway. The code doesn't rule out
negative HP during the slam (have fun in test or debug mode), but the sum of
invincibility frames alone makes it impossible to hit SinGyoku 7 times
during a single slam in regular gameplay.
Phase 2 features two patterns for both the female and male forms
respectively, which are selected randomly.
This time, we're back to the Orb hitbox being a logical 49×49 pixels in
SinGyoku's center, and the shot hitbox being the weird one. What happens if
you want the shot hitbox to be offset a bit to the left while also
stretching across the entire width of SinGyoku's sprite? You get a hitbox
that ends in mid-air, far away from the right edge of the sprite:
Due to VRAM byte alignment, all player shots fired between
gx = 376 and gx = 383 inclusive
appear at the same visual X position, but are internally already partly
outside the hitbox and therefore won't hit SinGyoku – compare the
marked shot at gx = 376 to the one at gx =
380. So much for precisely visualizing hitboxes in this game…
Since the female and male forms also use the sphere entity's coordinates,
they share the same hitbox.
Onto the rendering glitches then, which can – you guessed it – all be found
in the sphere form's slam movement:
ZUN unblits the delta area between the sphere's previous and current
position on every frame, but reblits the sphere itself on… only every second
frame?
For negative X velocities, ZUN made a typo and subtracted the Y velocity
from the right edge of the area to be unblitted, rather than adding the X
velocity. On a cursory look, this shouldn't affect the game all too
much due to the unblitting function's word alignment. Except when it does:
If the Y velocity is much smaller than the X one, the left edge of the
unblitted area can, on certain frames, easily align to a word address past
the previous right edge of the sphere. As a result, not a single sphere
pixel will actually be unblitted, and a small stripe of the sphere will be
left in VRAM for one frame, until the alignment has caught up with the
sphere's movement in the next one.
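In code, the typo boils down to this – with my names, not ZUN's:

	unblit_right = (sphere_prev_right - velocity_y);	// what ZUN wrote
	// unblit_right = (sphere_prev_right + velocity_x);	// what was intended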
By having the sphere move from the right edge of the playfield to the
left, this video demonstrates both the lazy reblitting and broken
unblitting at the right edge for negative X velocities. Also, isn't it
funny how Reimu can partly disappear from all the sloppy
SinGyoku-related unblitting going on after her sprite was blitted?
Due to the low contrast of the sphere against the background, you typically
don't notice these glitches, but the white invincibility flashing after a
hit really does draw attention to them. This time, all of these glitches
aren't even directly caused by ZUN having never learned about the
EGC's bit length register – if he just wrote correct code for SinGyoku, none
of this would have been an issue. Sigh… I wonder how many more glitches will
be caused by improper use of this one function in the last 18% of
REIIDEN.EXE.
There's even another bug here, with ZUN hardcoding a horizontal delta of 8
pixels rather than just passing the actual X velocity. Luckily, the maximum
movement speed is 6 pixels on Lunatic, and this would have only turned into
an additional observable glitch if the X velocity were to exceed 24 pixels.
But that just means it's the kind of bug that still drains RE attention to
prove that you can't actually observe it in-game under some
circumstances.
The 5 pellet patterns are all pretty straightforward, with nothing to talk
about. The code architecture during phase 2 does hint towards ZUN having had
more creative patterns in mind – especially for the male form, which uses
the transformation function's three pattern callback slots for three
repetitions of the same pellet group.
There is one more oddity to be found at the very end of the fight:
Right before the defeat white-out animation, the sphere form is explicitly
reblitted for no reason, on top of the form that was blitted to VRAM in the
previous frame, and regardless of which form is currently active. If
SinGyoku was meant to immediately transform back to the sphere form before
being defeated, why isn't the person form unblitted before then? Therefore,
the visibility of both forms is undeniably canon, and there is some
lore meaning to be found here…
In any case, that's SinGyoku done! 6th PC-98 Touhou boss fully
decompiled, 25 remaining.
No FUUIN.EXE code rounding out the last push for a change, as
the 📝 remaining missile code has been
waiting in front of SinGyoku for a while. It already looked bad in November,
but the angle-based sprite selection function definitely takes the cake when
it comes to unnecessary and decadent floating-point abuse in this game.
The algorithm itself is very trivial: Even with
📝 .PTN requiring an additional quarter parameter to access 16×16 sprites,
it's essentially just one bit shift, one addition, and one binary
AND. For whatever reason though, ZUN casts the 8-bit missile
angle into a 64-bit double, which turns the following explicit
comparisons (!) against all possible 4 + 16 boundary angles (!!)
into FPU operations. Even with naive and readable
division and modulo operations, and the whole existence of this function not
playing well with Turbo C++ 4.0J's terrible code generation at all, this
could have been 3 lines of code and 35 un-inlined constant-time
instructions. Instead, we've got this 207-instruction monster… but hey, at
least it works. 🤷
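For the record, those 3 lines could have looked something like this. Only a
sketch from my end: the names are invented, and ptn_put_quarter() stands in
for whatever .PTN call would take the quarter parameter:

	// Round the 8-bit angle to the nearest of the 16 sprite directions…
	unsigned int dir = (((angle + 0x08) >> 4) & 0x0F);
	// …then split it into a .PTN sprite ID and one of its 16×16 quarters:
	ptn_put_quarter(left, top, (PTN_MISSILE + (dir >> 2)), (dir & 3));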
The remaining time then went to YuugenMagan's initialization code, which
allowed me to immediately remove more declarations from ASM land, but more
on that once we get to the rest of that boss fight.
That leaves 76 functions until we're done with TH01! Next up: Card-flipping
stage obstacles.
P0198
TH01 decompilation (Kikuri, part 1/3: Preparation + soul, tear, and ripple animations)
P0199
TH01 decompilation (Kikuri, part 2/3: Patterns)
P0200
TH01 decompilation (Kikuri, part 3/3: Main function + Ending boss slideshow + Good/Bad endings)
What's this? A simple, straightforward, easy-to-decompile TH01 boss with
just a few minor quirks and only two rendering-related ZUN bugs? Yup, 2½
pushes, and Kikuri was done. Let's get right into the overview:
Just like 📝 Elis, Kikuri's fight consists
of 5 phases, excluding the entrance animation. For some reason though, they
are numbered from 2 to 6 this time, skipping phase 1? For consistency, I'll
use the original phase numbers from the source code in this blog post.
The main phases (2, 5, and 6) also share Elis' HP boundaries of 10, 6,
and 0, respectively, and are once again indicated by different colors in the
HP bar. They immediately end upon reaching the given number of HP, making
Kikuri immune to the
📝 heap corruption in test or debug mode that can happen with Elis and Konngara.
Phase 2 solely consists of the infamous big symmetric spiral
pattern.
Phase 3 fades Kikuri's ball of light from its default bluish color to bronze over 100 frames. Collision detection is deactivated
during this phase.
In Phase 4, Kikuri activates her two souls while shooting the spinning
8-pellet circles from the previously activated ball. The phase ends shortly
after the souls fired their third spread pellet group.
Note that this is a timed phase without an HP boundary, which makes
it possible to reduce Kikuri's HP below the boundaries of the next
phases, effectively skipping them. Take this video for example,
where Kikuri has 6 HP by the end of Phase 4, and therefore directly
starts Phase 6.
(Obviously, Kikuri's HP can also be reduced to 0 or below, which will
end the fight immediately after this phase.)
Phase 5 combines the teardrop/ripple "pattern" from the souls with the
"two crossed eye laser" pattern, on independent cycles.
Finally, Kikuri cycles through her remaining 4 patterns in Phase 6,
while the souls contribute single aimed pellets every 200 frames.
Interestingly, all HP-bounded phases come with an additional hidden
timeout condition:
Phase 2 automatically ends after 6 cycles of the spiral pattern, or
5,400 frames in total.
Phase 5 ends after 1,600 frames, or the first frame of the
7th cycle of the two crossed red lasers.
If you manage to keep Kikuri alive for 29 of her Phase 6 patterns,
her HP are automatically set to 1. The HP bar isn't redrawn when this
happens, so there is no visual indication of this timeout condition even
existing – apart from the next Orb hit ending the fight regardless of
the displayed HP. Due to the deterministic order of patterns, this
always happens on the 8th cycle of the "symmetric gravity
pellet lines from both souls" pattern, or 11,800 frames. If dodging and
avoiding orb hits for 3½ minutes sounds tiring, you can always watch the
byte at DS:0x1376 in your emulator's memory viewer. Once
it's at 0x1E, you've reached this timeout.
So yeah, there's your new timeout challenge.
The few issues in this fight all relate to hitboxes, starting with the main
one of Kikuri against the Orb. The coordinates in the code clearly describe
a hitbox in the upper center of the disc, but then ZUN wrote a < sign
instead of a > sign, resulting in an in-game hitbox that's not
quite where it was intended to be…
Kikuri's actual hitbox.
Since the Orb sprite doesn't change its shape, we can visualize the
hitbox in a pixel-perfect way here. The Orb must be completely within
the red area for a hit to be registered.
Much worse, however, are the teardrop ripples. It already starts with their
rendering routine, which places the sprites from TAMAYEN.PTN at byte-aligned VRAM positions in the ultimate piece of if(…) {…}
else if(…) {…} else if(…) {…} meme code. Rather than
tracking the position of each of the five ripple sprites, ZUN suddenly went
purely functional and manually hardcoded the exact rendering and collision
detection calls for each frame of the animation, based on nothing but its
total frame counter.
Each of the (up to) 5 columns is also unblitted and blitted individually
before moving to the next column, starting at the center and then
symmetrically moving out to the left and right edges. This wouldn't be a
problem if ZUN's EGC-powered unblitting function didn't word-align its X
coordinates to a 16×1 grid. If the ripple sprites happen to start at an
odd VRAM byte position, their unblitting coordinates get rounded both down
and up to the nearest 16 pixels, thus touching the adjacent 8 pixels of the
previously blitted columns and leaving the well-known black vertical bars in
their place.
OK, so where's the hitbox issue here? If you just look at the raw
calculation, it's a slightly confusingly expressed, but perfectly logical 17
pixels. But this is where byte-aligned blitting has a direct effect on
gameplay: These ripples can be spawned at any arbitrary, non-byte-aligned
VRAM position, and collisions are calculated relative to this internal
position. Therefore, the actual hitbox is shifted up to 7 pixels to the
right, compared to where you would expect it from a ripple sprite's
on-screen position:
Due to the deterministic nature of this part of the fight, it's
always 5 pixels for this first set of ripples. These visualizations are
obviously not pixel-perfect due to the different potential shapes of
Reimu's sprite, so they instead relate to her 32×32 bounding box, which
needs to be entirely inside the red
area.
We've previously seen the same issue with the
📝 shot hitbox of Elis' bat form, where
pixel-perfect collision detection against a byte-aligned sprite was merely a
sidenote compared to the more serious X=Y coordinate bug. So why do I
elevate it to bug status here? Because it directly affects dodging: Reimu's
regular movement speed is 4 pixels per frame, and with the internal position
of an on-screen ripple sprite varying by up to 7 pixels, any micrododging
(or "grazing") attempt turns into a coin flip. It's sort of mitigated
by the fact that Reimu is also only ever rendered at byte-aligned
VRAM positions, but I wouldn't say that these two bugs cancel out each
other.
Oh well, another set of rendering issues to be fixed in the hypothetical
Anniversary Edition – obviously, the hitboxes should remain unchanged. Until
then, you can always memorize the exact internal positions. The sequence of
teardrop spawn points is completely deterministic and only controlled by the
fixed per-difficulty spawn interval.
Aside from more minor coordinate inaccuracies, there's not much of interest
in the rest of the pattern code. In another parallel to Elis though, the
first soul pattern in phase 4 is aimed on every difficulty except
Lunatic, where the pellets are once again statically fired downwards. This
time, however, the pattern's difficulty is much more appropriately
distributed across the four levels, with the simultaneous spinning circle
pellets adding a constant aimed component to every difficulty level.
Kikuri's phase 4 patterns, on every difficulty.
That brings us to 5 fully decompiled PC-98 Touhou bosses, with 26 remaining…
and another ½ of a push going to the cutscene code in
FUUIN.EXE.
You wouldn't expect something as mundane as the boss slideshow code to
contain anything interesting, but there is in fact a slight bit of
speculation fuel there. The text typing functions take explicit string
lengths, which precisely match the corresponding strings… for the most part.
For the "Gatekeeper 'SinGyoku'" string though, ZUN passed 23
characters, not 22. Could that have been the "h" from the Hepburn
romanization of 神玉?!
Also, come on, if this text is already blitted to VRAM for no reason,
you could have gone for perfect centering at unaligned byte positions; the
rendering function would have perfectly supported it. Instead, the X
coordinates are still rounded up to the nearest byte.
The hardcoded ending cutscene functions should be even less interesting –
don't they just show a bunch of images followed by frame delays? Until they
don't, and we reach the 地獄/Jigoku Bad Ending with
its special shake/"boom" effect, and this picture:
Picture #2 from ED2A.GRP.
Which is rendered by the following code:
for(int i = 0; i <= boom_duration; i++) { // (yes, off-by-one)
if((i & 3) == 0) {
graph_scrollup(8);
} else {
graph_scrollup(0);
}
end_pic_show(1); // ← different picture is rendered
frame_delay(2); // ← blocks until 2 VSync interrupts have occurred
if(i & 1) {
end_pic_show(2); // ← picture above is rendered
} else {
end_pic_show(1);
}
}
Notice something? You should never see this picture because it's
immediately overwritten before the frame is supposed to end. And yet
it's clearly flickering up for about one frame with common emulation
settings as well as on my real PC-9821 Nw133, clocked at 133 MHz.
master.lib's graph_scrollup() doesn't block until VSync either,
and removing these calls doesn't change anything about the blitted images.
end_pic_show() uses the EGC to blit the given 320×200 quarter
of VRAM from page 1 to the visible page 0, so the bottleneck shouldn't be
there either…
…or should it? After setting it up via a few I/O port writes, the common
method of EGC-powered blitting works like this:
Read 16 bits from the source VRAM position on any single
bitplane. This fills the EGC's 4 16-bit tile registers with the VRAM
contents at that specific position on every bitplane. You do not care
about the value the CPU returns from the read – in optimized code, you would
make sure to just read into a register to avoid useless additional stores
into local variables.
Write any 16 bits
to the target VRAM position on any single bitplane. This copies the
contents of the EGC's tile registers to that specific position on
every bitplane.
To transfer pixels from one VRAM page to another, you insert an additional
write to I/O port 0xA6 before 1) and 2) to set your source and
destination page… and that's where we find the bottleneck. Taking a look at
the i486 CPU and its cycle
counts, a single one of these page switches costs 17 cycles – 1 for
MOVing the page number into AL, and 16 for the
OUT instruction itself. EGC-copying a 320×200-pixel image involves
((320 / 16) × 200) = 4,000 word transfers with two page switches each.
Therefore, these 8,000 page switches alone require 136,000 cycles in total.
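In Turbo C++ terms, a single such 16-pixel copy would be sketched like
this, assuming that the EGC was already set up for a VRAM-to-VRAM copy:

	// <dos.h>: MK_FP(), outportb()
	unsigned far *vram_word = (unsigned far *)MK_FP(0xA800, offset);	// any plane

	outportb(0xA6, 1);          	// page switch #1: CPU accesses page 1
	unsigned dummy = *vram_word;	// 1) fill the EGC tile registers
	outportb(0xA6, 0);          	// page switch #2: CPU accesses page 0
	*vram_word = dummy;         	// 2) write the tile registers to page 0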
And that's the optimal case of using only those two
instructions. 📝 As I implied last time, TH01
uses a function call for VRAM page switches, complete with creating
and destroying a useless stack frame and unnecessarily updating a global
variable in main memory. I tried optimizing ZUN's code by throwing out
unnecessary code and using 📝 pseudo-registers
to generate probably optimal assembly code, and that did speed up the
blitting to almost exactly 50% of the original version's run time. However,
it did little about the flickering itself. Here's a comparison of the first
loop with boom_duration = 16, recorded in DOSBox-X with
cputype=auto and cycles=max, and with
i overlaid using the text chip. Caution, flashing lights:
The original animation, completing in 50 frames instead of the expected
34, thanks to slow blitting. Combined with the lack of
double-buffering, this results in noticeable tearing as the screen
refreshes while blitting is still in progress.
(Note how the background of the ドカーン image is shifted 1 pixel to the left compared to pic
#1.)
This optimized version completes in the expected 34 frames. No tearing
happens to be visible in this recording, but the ドカーン image is still visible on every
second loop iteration. (Note how the background of the ドカーン image is shifted 1 pixel to the left compared to pic
#1.)
I pushed the optimized code to the th01_end_pic_optimize
branch, to also serve as an example of how to get close to optimal code out
of Turbo C++ 4.0J without writing a single ASM instruction.
And if you really want to use the EGC for this, that's the best you can do.
It really sucks that it merely expanded the GRCG's 4×8-bit tile register to
4×16 bits. With 32 bits, ≥386 CPUs could have taken advantage of their wider
registers and instructions to double the blitting performance. Instead, we
now know the reason why
📝 Promisence Soft's EGC-powered sprite driver that ZUN later stole for TH03
is called SPRITE16 and not SPRITE32. What a massive disappointment.
But what's perhaps a bigger surprise: Blitting planar
images from main memory is much faster than EGC-powered inter-page
VRAM copies, despite the required manual access to all 4 bitplanes. In
fact, the blitting functions for the .CDG/.CD2 format, used from TH03
onwards, would later demonstrate the optimal method of using REP
MOVSD for blitting every line in 32-pixel chunks. If that was also
used for these ending images, the core blitting operation would have taken
((12 + (3 × (320 / 32))) × 200 × 4) =
33,600 cycles, with not much more overhead for the surrounding row
and bitplane loops. Sure, this doesn't factor in the whole infamous issue of
VRAM being slow on PC-98, but the aforementioned 136,000 cycles don't even
include any actual blitting either. And as you move up to later PC-98
models with Pentium CPUs, the gap between OUT and REP
MOVSD only becomes larger. (Note that the page I linked above has a
typo in the cycle count of REP MOVSD on Pentium CPUs: According
to the original Intel Architecture and Programming Manual, it's
13+𝑛, not 3+𝑛.)
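A main-memory blitter along those lines could be sketched like this; the
plane segments are PC-98 facts, everything else – including the use of
_fmemcpy() as a stand-in for hand-written REP MOVSD – is my own invention:

	// <dos.h>: MK_FP() · <mem.h>: _fmemcpy()
	// [image] points to the 4 bitplanes of a 320×200 picture in main memory.
	void pic_blit_from_main_memory(const char far *const image[4])
	{
		static const unsigned PLANE_SEG[4] = { 0xA800, 0xB000, 0xB800, 0xE000 };
		for(int plane = 0; plane < 4; plane++) {
			const char far *src = image[plane];	// planar pixels in main memory
			char far *dst = (char far *)MK_FP(PLANE_SEG[plane], 0);
			for(int y = 0; y < 200; y++, src += (320 / 8), dst += (640 / 8)) {
				_fmemcpy(dst, src, (320 / 8));	// 40 bytes per row and plane
			}
		}
	}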
This difference explains why later games rarely use EGC-"accelerated"
inter-page VRAM copies, and keep all of their larger images in main memory.
It especially explains why TH04 and TH05 can get away with naively redrawing
boss backdrop images on every frame.
In the end, the whole fact that ZUN did not define how long this image
should be visible is enough for me to increment the game's overall bug
counter. Who would have thought that looking at endings of all things
would teach us a PC-98 performance lesson… Sure, optimizing TH01 already
seemed promising just by looking at its bloated code, but I had no idea that
its performance issues extended so far past that level.
That only leaves the common beginning part of all endings and a short
main() function before we're done with FUUIN.EXE,
and 98 functions until all of TH01 is decompiled! Next up: SinGyoku, who not
only is the quickest boss to defeat in-game, but also comes with the least
amount of code. See you very soon!
P0193
TH01 decompilation (Elis, part 1/4: Preparations + patterns 1-3)
P0194
TH01 decompilation (Elis, part 2/4: Patterns 4-6 + transformations)
P0195
TH01 decompilation (Elis, part 3/4: Patterns 7-13)
P0196
TH01 decompilation (Elis, part 4/4: Entrance animation + main function)
P0197
TH01 research (HP bar heap corruption + boss defeat crashes) + decompilation (Verdict screen)
💰 Funded by:
Ember2528, Yanga
🏷️ Tags:
With Elis, we've not only reached the midway point in TH01's boss code, but
also a bunch of other milestones: Both REIIDEN.EXE and TH01 as
a whole have crossed the 75% RE mark, and overall position independence has
also finally cracked 80%!
And it got done in 4 pushes again? Yup, we're back to
📝 Konngara levels of redundancy and
copy-pasta. This time, it didn't even stop at the big copy-pasted code
blocks for the rift sprite and 256-pixel circle animations, with the words
"redundant" and "unnecessary" ending up a total of 18 times in my source
code comments.
But damn is this fight broken. As usual with TH01 bosses, let's start with a
high-level overview:
The Elis fight consists of 5 phases (excluding the entrance animation),
which must be completed in order.
In all odd-numbered phases, Elis uses a random one-shot danmaku pattern
from an exclusive per-phase pool before teleporting to a random
position.
There are 3 exclusive girl-form patterns per phase, plus 4
additional bat-form patterns in phase 5, for a total of 13.
Due to a quirk in the selection algorithm in phases 1 and 3, there
is a 25% chance of Elis skipping an attack cycle and just teleporting
again.
In contrast to Konngara, Elis can freely select the same pattern
multiple times in a row. There's nothing in the code to prevent that
from happening.
This pattern+teleport cycle is repeated until Elis' HP reach a certain
threshold value. The odd-numbered phases correspond to the white (phase 1),
red-white (phase 3), and red (phase 5) sections of the health bar. However,
the next phase can only start at the end of each cycle, after a
teleport.
Phase 2 simply teleports Elis back to her starting screen position of
(320, 144) and then advances to phase 3.
Phase 4 does the same as phase 2, but adds the initial bat form
transformation before advancing to phase 5.
Phase 5 replaces the teleport with a transformation to the bat form.
Rather than teleporting instantly to the target position, the bat gradually
flies there, firing a randomly selected looping pattern from the 4-pattern
bat pool on the way, before transforming back to the girl form.
This puts the earliest possible end of the fight at the first frame of phase
5. However, nothing prevents Elis' HP from reaching 0 before that point. You
can nicely see this in 📝 debug mode: Wait
until the HP bar has filled up to avoid heap corruption, hold ↵ Return
to reduce her HP to 0, and watch how Elis still goes through a total of
two patterns* and four
teleport animations before accepting defeat.
But wait, heap corruption? Yup, there's a bug in the HP bar that already
affected Konngara as well, and it isn't even just about the graphical
glitches generated by negative HP:
The initial fill-up animation is drawn to both VRAM pages at a rate of 1
HP per frame… by passing the current frame number as the
current_hp number.
The target_hp is indicated by simply passing the current
HP…
… which, however, can be reduced in debug mode at an equal rate of up to
1 HP per frame.
The completion condition only checks if
((target_hp - 1) == current_hp). With the
right timing, both numbers can therefore run past each other.
In that case, the function is repeatedly called on every frame, backing
up the original VRAM contents for the current HP point before blitting
it…
… until frame ((96 / 2) + 1), where the
.PTN slot pointer overflows the heap buffer and overwrites whatever comes
after. 📝 Sounds familiar, right?
Since Elis starts with 14 HP, which is an even number, this corruption is
trivial to cause: Simply hold ↵ Return from the beginning of the
fight, and the completion condition will never be true, as the
HP and frame numbers run past the off-by-one meeting point.
Edit (2023-07-21): Pressing ↵ Return to reduce HP
also works in test mode (game t). There, the game doesn't
even check the heap, and consequently won't report any corruption,
allowing the HP bar to be glitched even further.
Regular gameplay, however, entirely prevents this due to the fixed start
positions of Reimu and the Orb, the Orb's fixed initial trajectory, and the
50 frames of delay until a bomb deals damage to a boss. These aspects make
it impossible to hit Elis within the first 14 frames of phase 1, and ensure
that her HP bar is always filled up completely. So ultimately, this bug ends
up comparable in seriousness to the
📝 recursion / stack overflow bug in the memory info screen.
These wavy teleport animations point to a quite frustrating architectural
issue in this fight. It's not even the fact that unblitting the yellow star
sprites rips temporary holes into Elis' sprite; that's almost expected from
TH01 at this point. Instead, it's all because of this unused frame of the
animation:
With this sprite still being part of BOSS5.BOS, Girl-Elis has a
total of 9 animation frames, 1 more than the
📝 8 per-entity sprites allowed by ZUN's architecture.
The quick and easy solution would have been to simply bump the sprite array
size by 1, but… nah, this would have added another 20 bytes to all 6 of the
.BOS image slots. Instead, ZUN wrote the manual
position synchronization code I mentioned in that 2020 blog post.
Ironically, he then copy-pasted this snippet of code often enough that it
ended up taking up more than 120 bytes in the Elis fight alone – with, you
guessed it, some of those copies being redundant. Not to mention that just
going from 8 to 9 sprites would have allowed ZUN to go down from 6 .BOS
image slots to 3. That would have actually saved 420 bytes in
addition to the manual synchronization trouble. Looking forward to SinGyoku,
that's going to be fun again…
As for the fight itself, it doesn't take long until we reach its most janky
danmaku pattern, right in phase 1:
The "pellets along circle" pattern on Lunatic, in its original version
and with fanfiction fixes for everything that can potentially be
interpreted as a bug.
For whatever reason, the lower-right quarter of the circle isn't
animated? This animation works by only drawing the new dots added with every
subsequent animation frame, expressed as a tiny arc of a dotted circle. This
arc starts at the animation's current 8-bit angle and ends on the sum of
that angle and a hardcoded constant. In every other (copy-pasted, and
correct) instance of this animation, ZUN uses 0x02 as the
constant, but this one uses… 0.05 for the lower-right quarter?
As in, a 64-bit double constant that truncates to 0 when added
to an 8-bit integer, thus leading to the start and end angles being
identical and the game not drawing anything.
On Easy and Normal, the pattern then spawns 32 bullets along the outline
of the circle, no problem there. On Lunatic though, every one of these
bullets is instead turned into a narrow-angled 5-spread, resulting in 160
pellets… in a game with a pellet cap of 100.
Now, if Elis teleported herself to a position near the top of the playfield,
most of the capped pellets would have been clipped at that top edge anyway,
since the bullets are spawned in clockwise order starting at Elis' right
side with an angle of 0x00. On lower positions though, you can
definitely see a difference if the cap were high enough to allow all coded
pellets to actually be spawned.
The Hard version gets dangerously close to the cap by spawning a total of 96
pellets. Since this is the only pattern in phase 1 that fires pellets
though, you are guaranteed to see all of the unclipped ones.
The pellets also aren't spawned exactly on the telegraphed circle, but 4 pixels to the left.
Then again, it might very well be that all of this was intended, or, most
likely, just left in the game as a happy accident. The latter interpretation
would explain why ZUN didn't just delete the rendering calls for the
lower-right quarter of the circle, because seriously, how would you not spot
that? The phase 3 patterns continue with more minor graphical glitches that
aren't even worth talking about anymore.
And then Elis transforms into her bat form at the beginning of Phase 5,
which displays some rather unique hitboxes. The one against the Orb is fine,
but the one against player shots…
… uses the bat's X coordinate for both X and Y dimensions.
In regular gameplay, it's not too bad as most
of the bat patterns fire aimed pellets which typically don't allow you to
move below her sprite to begin with. But if you ever tried destroying these
pellets while standing near the middle of the playfield, now you know why
that didn't work. This video also nicely points out how the bat, like any
boss sprite, is only ever blitted at positions on the 8×1-pixel VRAM byte
grid, while collision detection uses the actual pixel position.
The bat form patterns are all relatively simple, with little variation
depending on the difficulty level, except for the "slow pellet spreads"
pattern. This one is almost easiest to dodge on Lunatic, where the 5-spreads
are not only always fired downwards, but also at the hardcoded narrow delta
angle, leaving plenty of room for the player to move out of the way:
The "slow pellet spreads" pattern of Elis' bat form, on every
difficulty. Which version do you think is the easiest one?
Finally, we've got another potential timesave in the girl form's "safety
circle" pattern:
After the circle spawned completely, you lose a life by moving outside it,
but doing that immediately advances the pattern past the circle part. This
part takes 200 frames, but the defeat animation only takes 82 frames, so
you can save up to 118 frames there.
Final funny tidbit: As with all dynamic entities, this circle is only
blitted to VRAM page 0 to allow easy unblitting. However, it's also kind of
static, and there needs to be some way to keep the Orb, the player shots,
and the pellets from ripping holes into it. So, ZUN just re-blits the circle
every… 4 frames?! 🤪 The same is true for the Star of David and its
surrounding circle, but there you at least get a flash animation to justify
it. All the overlap is actually quite a good reason for not even attempting
to 📝 mess with the hardware color palette instead.
Reproducing the crash was the whole challenge here. Even after moving Elis
and Reimu to the exact positions seen in Pearl's video and setting Elis' HP
to 0 on the exact same frame, everything ran fine for me. It's definitely no
division by 0 this time, the function perfectly guards against that
possibility. The line specified in the function's parameters is always
clipped to the VRAM region as well, so we can also rule out illegal memory
accesses here…
… or can we? Stepping through it all reminded me of how this function brings
unblitting sloppiness to the next level: For each VRAM byte touched, ZUN
actually unblits the 4 surrounding bytes, adding one byte to the left
and two bytes to the right, and using a single 32-bit read and write per
bitplane. So what happens if the function tries to unblit the topmost byte
of VRAM, covering the pixel positions from (0, 0) to (7, 0)
inclusive? The VRAM offset of 0x0000 is decremented to
0xFFFF to cover the one byte to the left, 4 bytes are written
to this address, the CPU's internal offset overflows… and as it turns out,
that is illegal even in Real Mode as of the 80286, and will raise a General Protection
Fault. Which is… ignored by DOSBox-X,
every Neko Project II version in common use, the CSCP
emulators, SL9821, and T98-Next. Only Anex86 accurately emulates the
behavior of real hardware here.
OK, but no laser fired by Elis ever reaches the top-left corner of the
screen. How can such a fault even happen in practice? That's where the
broken laser reset+unblit function comes in: Not only does it just flat out pass the wrong
parameters to the line unblitting function – describing the line
already traveled by the laser and stopping where the laser begins –
but it also passes them
wrongly, in the form of raw 32-bit fixed-point Q24.8 values, with no
conversion other than a truncation to the signed 16-bit pixels expected by
the function. What then follows is an attempt at interpolation and clipping
to find a line segment between those garbage coordinates that actually falls
within the boundaries of VRAM:
right/bottom correspond to a laser's origin position, and
left/top to the leftmost pixel of its moved-out top line. The
bug therefore only occurs with lasers that stopped growing and have started
moving.
Moreover, it will only happen if either (left % 256) or
(right % 256) is ≤ 127 and the other one of the two is ≥ 128.
The typecast to signed 16-bit integers then turns the former into a large
positive value and the latter into a large negative value, triggering the
function's clipping code. (See the worked example after this list.)
The function then follows Bresenham's
algorithm: left is ensured to be smaller than right
by swapping the two values if necessary. If that happened, top
and bottom are also swapped, regardless of their value – the
algorithm does not care about their order.
The slope in the X dimension is calculated using an integer division of
((bottom - top) /
(right - left)). Both subtractions are done on signed
16-bit integers, and overflow accordingly.
(-left × slope_x) is added to top,
and left is set to 0.
If both top and bottom are < 0 or
≥ 640, there's nothing to be unblitted. Otherwise, the final
coordinates are clipped to the VRAM range of [(0, 0),
(639, 399)].
If the function got this far, the line to be unblitted is now very
likely to reach from
the top-left to the bottom-right corner, starting out at
(0, 0) right away, or
from the bottom-left corner to the top-right corner. In this case,
you'd expect unblitting to end at (639, 0), but thanks to an
off-by-one error,
it actually ends at (640, -1), which is equivalent to
(0, 0). Why add clipping to VRAM offset calculations when
everything else is clipped already, right?
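To make the truncation above concrete, here's a worked example with numbers
that trigger the bug – the names are mine, and int is the 16-bit type it is
under Turbo C++:

	long left_q24_8  = ((long)200 << 8);	// left pixel X = 200 → (200 % 256) ≥ 128
	long right_q24_8 = ((long)300 << 8);	// origin pixel X = 300 → (300 % 256) ≤ 127
	int left  = (int)left_q24_8; 	// → -14,336, a large negative value
	int right = (int)right_q24_8;	// → +11,264, a large positive value
	// left < 0 and right ≥ 640 → the clipping code runs on pure garbage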
Possible laser states that will cause the fault, with some debug
output to help understand the cause, and any pellets removed for better
readability. This can happen for all bosses that can potentially have
shootout lasers on screen when being defeated, so it also applies to Mima.
Fixing this is easier than understanding why it happens, but since y'all
love reading this stuff…
tl;dr: TH01 has a high chance of freezing at a boss defeat sequence if there
are diagonally moving lasers on screen, and if your PC-98 system
raises a General Protection Fault on a 4-byte write to offset
0xFFFF, and if you don't run a TSR with an INT
0Dh handler that might handle this fault differently.
The easiest fix option would be to just remove the attempted laser
unblitting entirely, but that would also have an impact on this game's…
distinctive visual glitches, in addition to touching a whole lot of
code bytes. If I ever get funded to work on a hypothetical TH01 Anniversary
Edition that completely rearchitects the game to fix all these glitches, it
would be appropriate there, but not for something that purports to be the
original game.
(Sidenote to further hype up this Anniversary Edition idea for PC-98
hardware owners: With the amount of performance left on the table at every
corner of this game, I'm pretty confident that we can get it to work
decently on PC-98 models with just an 80286 CPU.)
Since we're in critical infrastructure territory once again, I went for the
most conservative fix with the least impact on the binary: Simply changing
any VRAM offsets ≥ 0xFFFD to 0x0000 to avoid
the GPF, and leaving all other bugs in place. Sure, it's rather lazy and
"incorrect"; the function still unblits a 32-pixel block there, but adding a
special case for blitting 24 pixels would add way too much code. And
seriously, it's not like anything happens in the 8 pixels between
(24, 0) and (31, 0) inclusive during gameplay to begin with.
To balance out the additional per-row if() branch, I inlined
the VRAM page change I/O, saving two function calls and one memory write per
unblitted row.
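In C terms, the workaround amounts to no more than this – the names are mine,
and the actual patch of course modifies the assembly:

// Before each per-row 4-byte VRAM write:
if(vram_offset >= 0xFFFD) { // the write would cross the segment limit…
    vram_offset = 0x0000;   // …so lazily redirect it to the top-left corner
}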
That means it's time for a new community_choice_fixes
build, containing the new definitive bugfixed versions of these games:
2022-05-31-community-choice-fixes.zip
Check the th01_critical_fixes
branch for the modified TH01 code. It also contains a fix for the HP bar
heap corruption in test or debug mode – simply changing the ==
comparison to <= is enough to avoid it, and negative HP will
still create aesthetic glitch art.
Once again, I then was left with ½ of a push, which I finally filled with
some FUUIN.EXE code, specifically the verdict screen. The most
interesting part here is the player title calculation, which is quite
sneaky: There are only 6 skill levels, but three groups of
titles for each level, and the title you'll see is picked from a random
group. It looks like this is the first time anyone has documented the
calculation?
As for the levels, ZUN definitely didn't expect players to do particularly
well. With a 1cc being the standard goal for completing a Touhou game, it's
especially funny how TH01 expects you to continue a lot: The code has
branches for up to 21 continues, and the on-screen table explicitly leaves
room for 3 digits worth of continues per 5-stage scene. Heck, these
counts are even stored in 32-bit long variables.
Next up: 📝 Finally finishing the long
overdue Touhou Patch Center MediaWiki update work, while continuing with
Kikuri in the meantime. Originally I wasn't sure about what to do between
Elis and Seihou,
but with Ember2528's surprise
contribution last week, y'all have
demonstrated more than enough interest in the idea of getting TH01 done
sooner rather than later. And I agree – after all, we've got the 25th
anniversary of its first public release coming up on August 15, and I might
still manage to completely decompile this game by that point…
TH05 has passed the 50% RE mark, with both MAIN.EXE and the
game as a whole! With that, we've also reached what -Tom-
wanted out of the project, so he's suspending his discount offer for a
bit.
Curve bullets are now officially called cheetos! 76.7% of
fans prefer this term, and it fits into the 8.3 DOS filename scheme much
better than homing lasers (as they're called in
OMAKE.TXT) or Taito
lasers (which would indeed have made sense as well).
…oh, and I managed to decompile Shinki within 2 pushes after all. That
left enough budget to also add the Stage 1 midboss on top.
So, Shinki! As far as final boss code is concerned, she's surprisingly
economical, with 📝 her background animations
making up more than ⅓ of her entire code. Going straight from TH01's
📝 final 📝 bosses
to TH05's final boss definitely showed how much ZUN had streamlined
danmaku pattern code by the end of PC-98 Touhou. Don't get me wrong, there
is still room for improvement: TH05 not only
📝 reuses the same 16 bytes of generic boss state we saw in TH04 last month,
but also uses them 4× as often, and even for midbosses. Most importantly
though, defining danmaku patterns using a single global instance of the
group template structure is just bad no matter how you look at it:
The script code ends up rather bloated, with a single MOV
instruction for setting one of the fields taking up 5 bytes. By comparison,
the entire structure for regular bullets is 14 bytes large, while the
template structure for Shinki's 32×32 ball bullets could have easily been
reduced to 8 bytes.
Since it's also one piece of global state, you can easily forget to set
one of the required fields for a group type. The resulting danmaku group
then reuses these values from the last time they were set… which might have
been as far back as another boss fight from a previous stage.
And of course, I wouldn't point this out if it
didn't actually happen in Shinki's pattern code. Twice.
Declaring a separate structure instance with the static data for every
pattern would be both safer and more space-efficient, and there's
more than enough space left for that in the game's data segment.
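A sketch of how that can play out, with hypothetical structure, field, and
function names:

bullet_group_template_t group_template; // one global instance for everything

void pattern_yumeko_lasers(void)
{
    group_template.angle = 0x40;
    group_template.laser_width = 6; // set here…
    bullet_group_spawn(&group_template);
}

void pattern_shinki_wing_lasers(void)
{
    group_template.angle = 0x00;
    // …but forgotten here: these lasers silently reuse Yumeko's 6-pixel
    // width, which might have been set a whole stage earlier.
    bullet_group_spawn(&group_template);
}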
But all in all, the pattern functions are short, sweet, and easy to follow.
The "devil"
patternis significantly more complex than the others, but still
far from TH01's final bosses at their worst. I especially like the clear
architectural separation between "one-shot pattern" functions that return
true once they're done, and "looping pattern" functions that
run as long as they're being called from a boss's main function. There isn't
much else of interest in these pattern functions, except for two pieces of
evidence that Shinki was coded after Yumeko:
The gather animation function in the first two phases contains a bullet
group configuration that looks like it's part of an unused danmaku
pattern. It quickly turns out to just be copy-pasted from a similar function
in Yumeko's fight though, where it is turned into actual
bullets.
As one of the two places where ZUN forgot to set a template field, the
lasers at the end of the white wing preparation pattern reuse the 6-pixel
width of Yumeko's final laser pattern. This actually has an effect on
gameplay: Since these lasers are active for the first 8 frames after
Shinki's wings appear on screen, the player can get hit by them in the last
2 frames after they grew to their final width.
Of course, there are more than enough safespots between the lasers.
Speaking about that wing sprite: If you look at ST05.BB2 (or
any other file with a large sprite, for that matter), you notice a rather
weird file layout:
A large sprite split into multiple smaller ones with a width of
64 pixels each? What's this, hardware sprite limitations? On my
PC-98?!
And it's not a limitation of the sprite width field in the BFNT+ header
either. Instead, it's master.lib's BFNT functions which are limited to
sprite widths up to 64 pixels… or at least that's what
MASTER.MAN claims. Whatever the restriction was, it seems to be
completely nonexistent as of master.lib version 0.23, and none of the
master.lib functions used by the games have any issues with larger
sprites.
Since ZUN stuck to the supposed 64-pixel width limit though, it's now the
game that expects Shinki's winged form to consist of 4 physical
sprites, not just 1. Any conversion from another, more logical sprite sheet
layout back into BFNT+ must therefore replicate the original number of
sprites. Otherwise, the sequential IDs ("patnums") assigned to every newly
loaded sprite no longer match ZUN's hardcoded IDs, causing the game to
crash. This is exactly what used to happen with -Tom-'s
MysticTK automation scripts,
which combined these exact sprites into a single large one. This issue has
now been fixed – just in case there are some underground modders out there
who used these scripts and wonder why their game crashed as soon as the
Shinki fight started.
And then the code quality takes a nosedive with Shinki's main function.
Even in TH05, these boss and midboss update
functions are still very imperative:
The origin point of all bullet types used by a boss must be manually set
to the current boss/midboss position; there is no concept of a bullet type
tracking a certain entity.
The same is true for the target point of a player's homing shots…
… and updating the HP bar. At least the initial fill animation is
abstracted away rather decently.
Incrementing the phase frame variable also must be done manually. TH05
even "innovates" here by giving the boss update function exclusive ownership
of that variable, in contrast to TH04 where that ownership is given out to
the player shot collision detection (?!) and boss defeat helper
functions.
Speaking about collision detection: That is done by calling different
functions depending on whether the boss is supposed to be invincible or
not.
Timeout conditions? No standard way either, and all done with manual
if statements. In combination with the regular phase end
condition of lowering (mid)boss HP to a certain value, this leads to quite a
convoluted control flow.
The manual calls to the score bonus functions for cleared phases at least provide some sense of orientation.
One potentially nice aspect of all this imperative freedom is that
phases can end outside of HP boundaries… by manually incrementing the
phase variable and resetting the phase frame variable to 0.
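Condensed into a sketch, with hypothetical names and heavily abbreviated:

void shinki_update(void)
{
    bullet_template.origin = boss.pos; // bullets don't track entities…
    homing_shot_target = boss.pos;     // …and neither do homing shots
    hp_bar_update(boss.hp);            // HUD, also updated manually
    boss.phase_frame++;                // frame counter, owned here

    switch(boss.phase) {
    case 2:
        if(pattern_devil()) {          // one-shot pattern done?
            boss.phase++;              // manual phase transition…
            boss.phase_frame = 0;
        }
        break;
    // …
    }
    if(boss_invincible) {              // separate collision functions
        collision_detect_invincible();
    } else {
        collision_detect_damage();
    }
}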
The biggest WTF in there, however, goes to using one of the 16 state bytes
as a "relative phase" variable for differentiating between boss phases that
share the same branch within the switch(boss.phase)
statement. While it's commendable that ZUN tried to reduce code duplication
for once, he could have just branched depending on the actual
boss.phase variable? The same state byte is then reused in the
"devil" pattern to track the activity state of the big jerky lasers in the
second half of the pattern. If you somehow managed to end the phase after
the first few bullets of the pattern, but before these lasers are up,
Shinki's update function would think that you're still in the phase
before the "devil" pattern. The main function then sequence-breaks
right to the defeat phase, skipping the final pattern with the burning Makai
background. Luckily, the HP boundaries are far away enough to make this
impossible in practice.
The takeaway here: If you want to use the state bytes for your custom
boss script mods, alias them to your own 16-byte structure, and limit each
of the bytes to a clearly defined meaning across your entire boss script.
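Something like this, with hypothetical names, assuming the 16 bytes live in a
boss_state array:

struct mymod_boss_state_t {
    uint8_t relative_phase; // one clearly defined meaning per byte…
    uint8_t laser_active;
    uint8_t unused[14];     // …with all 16 bytes accounted for
};
#define mymod_state (*reinterpret_cast<mymod_boss_state_t *>(boss_state))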
One final discovery that doesn't seem to be documented anywhere yet: Shinki
actually has a hidden bomb shield during her two purple-wing phases.
uth05win got this part slightly wrong though: It's not a complete
shield, and hitting Shinki will still deal 1 point of chip damage per
frame. For comparison, the first phase lasts for 3,000 HP, and the "devil"
pattern phase lasts for 5,800 HP.
And there we go, 3rd PC-98 Touhou boss
script* decompiled, 28 to go! 🎉 In case you were expecting a fix for
the Shinki death glitch: That one
is more appropriately fixed as part of the Mai & Yuki script. It also
requires new code, should ideally look a bit prettier than just removing
cheetos between one frame and the next, and I'd still like it to fit within
the original position-dependent code layout… Let's do that some other
time.
Not much to say about the Stage 1 midboss, or midbosses in general even,
except that their update functions have to imperatively handle even more
subsystems, due to the relative lack of helper functions.
The remaining ¾ of the third push went to a bunch of smaller RE and
finalization work that would have hardly got any attention otherwise, to
help secure that 50% RE mark. The nicest piece of code in there shows off
what looks like the optimal way of setting up the
📝 GRCG tile register for monochrome blitting
in a variable color:
mov ah, palette_index ; Any other non-AL 8-bit register works too.
; (x86 only supports AL as the source operand for OUTs.)
rept 4 ; For all 4 bitplanes…
shr ah, 1 ; Shift the next color bit into the x86 carry flag
sbb al, al ; Extend the carry flag to a full byte
; (CF=0 → 0x00, CF=1 → 0xFF)
out 7Eh, al ; Write AL to the GRCG tile register
endm
Thanks to Turbo C++'s inlining capabilities, the loop body even decompiles
into a surprisingly nice one-liner. What a beautiful micro-optimization, at
a place where micro-optimization doesn't hurt and is almost expected.
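Rendered as C++ – my own hedged equivalent, not the actual decompilation:

#include <dos.h>

void grcg_set_tile_color(unsigned char palette_index)
{
    // Plane 0 receives bit 0 of the color, plane 1 receives bit 1, etc.;
    // each plane's tile register is set to either 0x00 or 0xFF.
    for(int plane = 0; plane < 4; plane++) {
        outportb(0x7E, (palette_index & (1 << plane)) ? 0xFF : 0x00);
    }
}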
Unfortunately, the micro-optimizations went all downhill from there,
becoming increasingly dumb and undecompilable. Was it really necessary to
save 4 x86 instructions in the highly unlikely case of a new spark sprite
being spawned outside the playfield? That one 2D polar→Cartesian
conversion function then pointed out Turbo C++ 4.0J's woefully limited
support for 32-bit micro-optimizations. The code generation for 32-bit
📝 pseudo-registers is so bad that they almost
aren't worth using for arithmetic operations, and the inline assembler just
flat out doesn't support anything 32-bit. No use in decompiling a function
that you'd have to entirely spell out in machine code, especially if the
same function already exists in multiple other, more idiomatic C++
variations.
Rounding out the third push, we got the TH04/TH05 DEMO?.REC
replay file reading code, which should finally prove that nothing about the
game's original replay system could serve as even just the foundation for
community-usable replays. Just in case anyone was still thinking that.
Next up: Back to TH01, with the Elis fight! Got a bit of room left in the
cap again, and there are a lot of things that would make a lot of
sense now:
TH04 would really enjoy a large number of dedicated pushes to catch up
with TH05. This would greatly support the finalization of both games.
Continuing with TH05's bosses and midbosses has shown to be good value
for your money. Shinki would have taken even less than 2 pushes if she
hadn't been the first boss I looked at.
Oh, and I also added Seihou as a selectable goal, for the two people out
there who genuinely like it. If I ever want to quit my day job, I need to
branch out into safer territory that isn't threatened by takedowns, after
all.
Slight change of plans, because we got instructions for
reliably reproducing the TH04 Kurumi Divide Error crash! Major thanks to
Colin Douglas Howell. With those, it also made sense to immediately look at
the crash in the Stage 4 Marisa fight as well. This way, I could release
both of the obligatory bugfix mods at the same time.
Especially since it turned out that I was wrong: Both crashes are entirely
unrelated to the custom entity structure that would have required PI-centric
progress. They are completely specific to Kurumi's and Marisa's
danmaku-pattern code, and really are two separate bugs
with no connection to each other. All of the necessary research nicely fit
into Arandui's 0.5 pushes, with no further deep understanding
required here.
But why were there still three weeks between Colin's message and this blog
post? DMCA distractions aside: There are no easy fixes this time, unlike
📝 back when I looked at the Stage 5 Yuuka crash.
Just like how division by zero is undefined in mathematics, it's also,
literally, undefined what should happen instead of these two
Divide error crashes. This means that any possible "fix" can
only ever be a fanfiction interpretation of the intentions behind ZUN's
code. The gameplay community should be aware of this, and
might decide to handle these cases differently. And if we
have to go into fanfiction territory to work around crashes in the
canon games, we'd better document what exactly we're fixing here and how, as
comprehensible as possible.
With that out of the way, let's look at Kurumi's crash first, since it's way
easier to grasp. This one is known to primarily happen to new players, and
it's easy to see why:
In one of the patterns in her third phase, Kurumi fires a series of 3
aimed rings from both edges of the playfield. By default (that is, on Normal
and with regular rank), these are 6-way rings.
6 happens to be quite a peculiar number here, due to how rings are
(manually) tuned based on the current "rank" value (playperf)
before being fired. A sketch of that tuning logic, with hypothetical
thresholds and deltas standing in for ZUN's exact values:
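int ring_points = 6;                    // 6-way ring on Normal
if(playperf < 12) { ring_points -= 2; } // rank-based tuning…
if(playperf <  8) { ring_points -= 2; }
if(playperf <  6) { ring_points -= 2; }
if(rank == RANK_EASY) {
    ring_points /= 2;                   // …always halved on Easy Mode
}
// Easy Mode, playperf = 16:       (6 / 2) = 3-ring
// Easy Mode, playperf =  4: ((6 - 6) / 2) = 0-ring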
Let's look at the range of possible playperf values per
difficulty level:
               Easy   Normal   Hard   Lunatic   Extra
playperf_min      4       11     20        22      16
playperf_max     16       24     32        34      20
Edit (2022-05-24): This blog post initially had
26 instead of 16 for playperf_min for the Extra Stage. Thanks
to Popfan for pointing out that typo!
Reducing rank to its minimum on Easy mode will therefore result in a
0-ring after tuning.
To calculate the individual angles of each bullet in a ring, ZUN divides
360° (or, more correctly,
📝 0x100) by the total number of
bullets…
Boom, division by zero.
The pattern that causes the crash in Kurumi's fight. Also
demonstrates how the number of bullets in a ring is always halved on
Easy Mode after the rank-based tuning, leading to just a 3-ring on
playperf = 16.
So, what should the workaround look like? Obviously, we want to modify
neither the default number of ring bullets nor the tuning algorithm – that
would change all other non-crashing variations of this pattern on other
difficulties and ranks, creating a fork of the original gameplay. Instead, I
came up with four possible workarounds that all seemed somewhat logical to
me:
Firing no bullet, i.e., interpreting 0-ring literally. This would
create the only constellation in which a call to the bullet group spawn
functions would not spawn at least one new bullet.
Firing a "1-ring", i.e., a single bullet. This would be consistent with
how the bullet spawn functions behave for "0-way" stack and spread
groups.
Firing a "∞-ring", i.e., 200 bullets, which is as much as the game's cap
on 16×16 bullets would allow. This would poke fun at the whole "division by
zero" idea… but given that we're still talking about Easy Mode (and
especially new players) here, it might be a tad too cruel. Certainly the
most trollish interpretation.
Triggering an immediate Game Over, exchanging the hard crash for a
softer and more controlled shutdown. Certainly the option that would be
closest to the behavior of the original games, and perhaps the only one to
be accepted in Serious, High-Level Play™.
As I was writing this post, it felt increasingly wrong for me to make this
decision. So I once again went to Twitter, where 56.3%
voted in favor of the 1-bullet option. Good that I asked! I myself was
more leaning towards the 0-bullet interpretation, which only got 28.7% of
the vote. Also interesting are the 2.3% in favor of the Game Over option,
but I get it – low-rank Easy Mode isn't exactly the most competitive way of
playing TH04.
There are reports of Kurumi crashing on higher difficulties as well, but I
could verify none of them. If they aren't fixed by this workaround, they're
caused by an entirely different bug that we have yet to discover.
Onto the Stage 4 Marisa crash then, which does in fact apply to all
difficulty levels. I was also wrong on this one – it's a hell of a lot more
intricate than being just a division by the number of on-screen bits.
Without having decompiled the entire fight, I can't give a completely
accurate picture of what happens there yet, but here's the rough idea:
Marisa uses different patterns, depending on whether at least one of her
bits is still alive, or all of them have been destroyed.
Destroying the last bit will immediately switch to the bit-less
counterpart of the current pattern.
The bits won't respawn before the pattern ended, which ensures that the
bit-less version is always shown in its entirety after being started or
switched into.
In two of the bit-less patterns, Marisa gradually moves to the point
reflection of her position at the start of the pattern across the playfield
coordinate of (192, 112), or (224, 128) on screen.
Reference points for Marisa's point-reflected movement. Cyan:
Marisa's position, green: (192, 112), yellow: the intended end
point.
The velocity of this movement is determined by both her distance to that
point and the total amount of frames that this instance of the bit-less
pattern will last.
Since this frame amount is directly tied to the frame the player
destroyed the last bit on, it becomes a user-controlled variable. I think
you can see where this is going…
The last 12 frames of this duration, however, are always reserved for a
"braking phase", where Marisa's velocity is halved on each frame.
This part of the code only runs every 4 frames though. This expands the
time window for this crash to 4 frames, rather than just the two frames you
would expect from looking at the division itself.
Both of the broken patterns run for a maximum of 160 frames. Therefore,
the crash will occur when Marisa's last bit is destroyed between frame 152
and 155 inclusive. On these frames, the
last_frame_with_bits_alive variable is set to 148, which is the
crucial 12 duration frames away from the maximum of 160.
Interestingly enough, the calculated velocity is also only
applied every 4 frames, with Marisa actually staying still for the 3 frames
inbetween. As a result, she either moves
too slowly to ever actually reach the yellow point if the last bit
was destroyed early in the pattern (see destruction frames 68 or
112),
or way too quickly, and almost in a jerky, teleporting way (see
destruction frames 144 or 148).
Finally, as you may have already gathered from the formula: Destroying
the last bit between frame 156 and 160 inclusive results in
duration values of 8 or 4. These actually push Marisa
away from the intended point, as the divisor becomes negative.
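Putting the pieces together, the velocity calculation works out to something
like this – hypothetical names, everything in playfield coordinates:

// Only updated every 4 frames, hence the 4-frame crash window
int duration = (160 - last_frame_with_bits_alive);

// The end point is the start position, point-reflected across (192, 112)
int distance_x = ((2 * 192) - (2 * marisa_start_x));
int distance_y = ((2 * 112) - (2 * marisa_start_y));

// The last 12 frames are reserved for the braking phase…
velocity_x = (distance_x / (duration - 12)); // duration == 12 → ÷ 0 → crash
velocity_y = (distance_y / (duration - 12)); // duration  < 12 → pushed away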
One of the two patterns in TH04's Stage 4 Marisa boss fight that feature
frame number-dependent point-reflected movement. The bits were hacked to
self-destruct on the respective frame.
tl;dr: "Game crashes if last bit destroyed within 4-frame window near end of
two patterns". For an informed decision on a new movement behavior for these
last 8 frames, we definitely need to know all the details behind the crash
though. Here's what I would interpret into the code:
Not moving at all, i.e., interpreting 0 as the middle ground between
positive and negative movement. This would also make sense because a
12-frame duration implies 100% of the movement to consist of
the braking phase – and Marisa wasn't moving before, after all.
Move at maximum speed, i.e., dividing by 1 rather than 0. Since the
movement duration is still 12 in this case, Marisa will immediately start
braking. In total, she will move exactly ¾ of the way from her initial
position to (192, 112) within the 8 frames before the pattern
ends.
Directly warping to (192, 112) on frame 0, and to the
point-reflected target on 4, respectively. This "emulates" the division by
zero by moving Marisa at infinite speed to the exact two points indicated by
the velocity formula. It also fits nicely into the 8 frames we have to fill
here. Sure, Marisa can't reach these points at any other duration, but why
shouldn't she be able to, with infinite speed? Then again, if Marisa
is far away enough from (192, 112), this workaround would warp her
across the entire playfield. Can Marisa teleport according to lore? I
have no idea…
Triggering an immediate Game O– hell no, this is the Stage 4 boss,
people already hate losing runs to this bug!
Asking Twitter worked great for the Kurumi workaround, so let's do it again!
Gotta attach a screenshot of an earlier draft of this blog post though,
since this stuff is impossible to explain in tweets…
…and it went
through the roof, becoming the most successful ReC98 tweet so far?!
Apparently, y'all really like to just look at descriptions of overly complex
bugs that I'd consider way beyond the typical attention span that can be
expected from Twitter. Unfortunately, all those tweet impressions didn't
quite translate into poll turnout. The results
were pretty evenly split between 1) and 2), with option 1) just coming out
slightly ahead at 49.1%, compared to 41.5% of option 2).
(And yes, I only noticed after creating the poll that warping to both the
green and yellow points made more sense than warping to just one of the two.
Let's hope that this additional variant wouldn't have shifted the results
too much. Both warp options only got 9.4% of the vote after all, and no one
else came up with the idea either. In the end,
you can always merge together your preferred combination of workarounds from
the Git branches linked below.)
So here you go: The new definitive version of TH04, containing not only the
community-chosen Kurumi and Stage 4 Marisa workaround variant, but also the
📝 No-EMS bugfix from last year.
Edit (2022-05-31): This package is outdated, 📝 the current version is here!
2022-04-18-community-choice-fixes.zip
Oh, and let's also add spaztron64's TH03 GDC clock fix
from 2019 because why not. This binary was built from the community_choice_fixes
branch, and you can find the code for all the individual workarounds on
these branches:
Again, because it can't be stated often enough: These fixes are
fanfiction. The gameplay community should be aware of
this, and might decide to handle these cases differently.
With all of that taking way more time to evaluate and document, this
research really had to become part of a proper push, instead of just being
covered in the quick non-push blog post I initially intended. With ½ of a
push left at the end, TH05's Stage 1-5 boss background rendering functions
fit in perfectly there. If you wonder why these static backdrop images even
need any boss-specific code to begin with, you're right – it's basically the
same function copy-pasted 4 times, differing only in the backdrop image
coordinates and some other inconsequential details.
Only Sara receives a nice variation of the typical
📝 blocky entrance animation: The usually
opaque bitmap data from ST00.BB is instead used as a transition
mask from stage tiles to the backdrop image, by making clever use of the
tile invalidation system:
TH04 uses the same effect a bit more frequently, for its first three bosses.
Next up: Shinki, for real this time! I've already managed to decompile 10 of
her 11 danmaku patterns within a little more than one push – and yes,
that one is included in there. Looks like I've slightly
overestimated the amount of work required for TH04's and TH05's bosses…
P0186
TH04/TH05 decompilation (Stage transition animation + smaller boss blockers)
P0187
TH04 RE (Shared boss state bytes)
P0188
TH04/TH05 decompilation (Boss defeat sequence / collision + Shinki's 32×32 balls (logic))
💰 Funded by:
Blue Bolt, [Anonymous], nrook
Did you know that moving on top of a boss sprite doesn't kill the player in
TH04, only in TH05?
Yup, Reimu is not getting hit… yet.
That's the first of only three interesting discoveries in these 3 pushes,
all of which concern TH04. But yeah, 3 for something as seemingly simple as
these shared boss functions… that's still not quite the speed-up I had hoped
for. While most of this can be blamed, again, on TH04 and all of its
hardcoded complexities, there still was a lot of work to be done on the
maintenance front as well. These functions reference a bunch of code I RE'd
years ago and that still had to be brought up to current standards, with the
dependencies reaching from 📝 boss explosions
over 📝 text RAM overlay functionality up to
in-game dialog loading.
The latter provides a good opportunity to talk a bit about x86 memory
segmentation. Many aspiring PC-98 developers these days are very scared
of it, with some even going as far as to rather mess with Protected Mode and
DOS extenders just so that they don't have to deal with it. I wonder where
that fear comes from… Could it be because every modern programming language
I know of assumes memory to be flat, and lacks any standard language-level
features to even express something like segments and offsets? That's why
compilers have a hard time targeting 16-bit x86 these days: Doing anything
interesting on the architecture requires giving the programmer full
control over segmentation, which always comes down to adding the
typical non-standard language extensions of compilers from back in the day.
And as soon as DOS stopped being used, these extensions no longer made sense
and were subsequently removed from newer tools. A good example for this can
be found in an old version of the
NASM manual: The project started as an attempt to make x86 assemblers
simple again by throwing out most of the segmentation features from
MASM-style assemblers, which made complete sense in 1996 when 16-bit DOS and
Windows were already on their way out. But there was a point to all
those features, and that's why ReC98 still has to use the supposedly
inferior TASM.
Not that this fear of segmentation is completely unfounded: All the
segmentation-related keywords, directives, and #pragmas
provided by Borland C++ and TASM absolutely can be the cause of many
weird runtime bugs. Even if the compiler or linker catches them, you are
often left with confusing error messages that aged just as poorly as memory
segmentation itself.
However, embracing the concept does provide quite the opportunity for
optimizations. While it definitely was a very crazy idea, there is a small
bit of brilliance to be gained from making proper use of all these
segmentation features. Case in point: The buffer for the in-game dialog
scripts in TH04 and TH05.
// Thanks to the semantics of `far` pointers, we only need a single 32-bit
// pointer variable for the following code.
extern unsigned char far *dialog_p;
// This master.lib function returns a `void __seg *`, which is a 16-bit
// segment-only pointer. Converting to a `far *` yields a full segment:offset
// pointer to offset 0000h of that segment.
dialog_p = (unsigned char far *)hmem_allocbyte(/* … */);
// Running the dialog script involves pointer arithmetic. On a far pointer,
// this only affects the 16-bit offset part, complete with overflow at 64 KiB,
// from FFFFh back to 0000h.
dialog_p += /* … */;
dialog_p += /* … */;
dialog_p += /* … */;
// Since the segment part of the pointer is still identical to the one we
// allocated above, we can later correctly free the buffer by pulling the
// segment back out of the pointer.
hmem_free((void __seg *)dialog_p);
If dialog_p was a huge pointer, any pointer
arithmetic would have also adjusted the segment part, requiring a second
pointer to store the base address for the hmem_free call. Doing
that will also be necessary for any port to a flat memory model. Depending
on how you look at it, this compression of two logical pointers into a
single variable is either quite nice, or really, really dumb in its
reliance on the precise memory model of one single architecture.
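For comparison, a sketch of the huge/flat alternative, with a hypothetical
allocator:

unsigned char huge *dialog_base; // second pointer, kept only for freeing
unsigned char huge *dialog_p;    // walks the script

dialog_base = (unsigned char huge *)huge_alloc(/* … */);
dialog_p = dialog_base;
dialog_p += /* … */; // arithmetic now adjusts segment and offset together
huge_free(dialog_base);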
Why look at dialog loading though, wasn't this supposed to be all about
shared boss functions? Well, TH04 unnecessarily puts certain stage-specific
code into the boss defeat function, such as loading the alternate Stage 5
Yuuka defeat dialog before a Bad Ending, or initializing Gengetsu after
Mugetsu's defeat in the Extra Stage.
That's TH04's second core function with an explicit conditional branch for
Gengetsu, after the
📝 dialog exit code we found last year during EMS research.
And I've heard people say that Shinki was the most hardcoded fight in PC-98
Touhou… Really, Shinki is a perfectly regular boss, who makes proper use of
all internal mechanics in the way they were intended, and doesn't blast
holes into the architecture of the game. Even within TH05, it's Mai and Yuki
who rely on hacks and duplicated code, not Shinki.
The worst part about this though? How the function distinguishes Mugetsu
from Gengetsu. Once again, it uses its own global variable to track whether
it is called the first or the second time within TH04's Extra Stage,
unrelated to the same variable used in the dialog exit function. But this
time, it's not just any newly created, single-use variable, oh no. In a
misguided attempt to micro-optimize away a few bytes of conventional memory,
TH04 reserves 16 bytes of "generic boss state", which can be (and are) freely
used for anything a boss doesn't want to store in a more dedicated
variable.
It might have been worth it if the bosses actually used most of these
16 bytes, but the majority just use (the same) two, with only Stage 4 Reimu
using a whopping seven different ones. To reverse-engineer the various uses
of these variables, I pretty much had to map out which of the undecompiled
danmaku-pattern functions corresponds to which boss
fight. In the end, I assigned 29 different variable names for each of the
semantically different use cases, which made up another full push on its
own.
Now, 16 bytes of wildly shared state, isn't that the perfect recipe for
bugs? At least during this cursory look, I haven't found any obvious ones
yet. If they do exist, it's more likely that they involve reused state from
earlier bosses – just how the Shinki death glitch in
TH05 is caused by reusing cheeto data from way back in Stage 4 – and
hence require much more boss-specific progress.
And yes, it might have been way too early to look into all these tiny
details of specific boss scripts… but then, this happened:
Looks similar to another
screenshot of a crash in the same fight that was reported in December,
doesn't it? I was too much in a hurry to figure it out exactly, but notice
how both crashes happen right as the last of Marisa's four bits is destroyed.
KirbyComment has suspected
this to be the cause for a while, and now I can pretty much confirm it
to be an unguarded division by the number of on-screen bits in
Marisa-specific pattern code. But what's the cause for Kurumi then?
As for fixing it, I can go for either a fast or a slow option:
Superficially fixing only this crash will probably just take a fraction
of a push.
But I could also go for a deeper understanding by looking at TH04's
version of the 📝 custom entity structure. It
not only stores the data of Marisa's bits, but is also very likely to be
involved in Kurumi's crash, and would get TH04 a lot closer to 100%
PI. Taking that look will probably need at least 2 pushes, and might require
another 3-4 to completely decompile Marisa's fight, and 2-3 to decompile
Kurumi's.
OK, now that that's out of the way, time to finish the boss defeat function…
but not without stumbling over the third of TH04's quirks, relating to the
Clear Bonus for the main game or the Extra Stage:
To achieve the incremental addition effect for the in-game score display
in the HUD, all new points are first added to a score_delta
variable, which is then added to the actual score at a maximum rate of
61,110 points per frame.
There are a fixed 416 frames between showing the score tally and
launching into MAINE.EXE.
As a result, TH04's Clear Bonus is effectively limited to
(416 × 61,110) = 25,421,760 points.
Only TH05 makes sure to commit the entirety of the
score_delta to the actual score before switching binaries,
which fixes this issue.
And after another few collision-related functions, we're now truly,
finally ready to decompile bosses in both TH04 and TH05! Just as the
anything funds were running out… The
remaining ¼ of the third push then went to Shinki's 32×32 ball bullets,
rounding out this delivery with a small self-contained piece of the first
TH05 boss we're probably going to look at.
Next up, though: I'm not sure, actually. Both Shinki and Elis seem just a
little bit larger than the 2¼ or 4 pushes purchased so far, respectively.
Now that there's a bunch of room left in the cap again, I'll just let the
next contribution decide – with a preference for Shinki in case of a tie.
And if it will take longer than usual for the store to sell out again this
time (heh), there's still the
📝 PC-98 text RAM JIS trail word rendering research
waiting to be documented.
Two years after
📝 the first look at TH04's and TH05's bullets,
we finally get to finish their logic code by looking at the special motion
types. Bullets as a whole still aren't completely finished as the
rendering code is still waiting to be RE'd, but now we've got everything
about them that's required for decompiling the midboss and boss fights of
these games.
Just like the motion types of TH01's pellets, the ones we've got here really
are special enough to warrant an enum, despite all the
overlap in the "slow down and turn" and "bounce at certain edges of the
playfield" types. Sure, including them in the bitfield I proposed two years
ago would have allowed greater variety, but it wouldn't have saved any
memory. On the contrary: These types use a single global state variable for
the maximum turn count and delta speed, which a proper customizable
architecture would have to integrate into the bullet structure. Maybe it is
possible to stuff everything into the same amount of bytes, but not without
first completely rearchitecting the bullet structure and removing every
single piece of redundancy in there. Simply extending the system by adding a
new enum value for a new motion type would be way more
straightforward for modders.
Speaking about memory, TH05 already extends the bullet structure by 6 bytes
for the "exact linear movement" type exclusive to that game. This type is
particularly interesting for all the prospective PC-98 game developers out
there, as it nicely points out the precision limits of Q12.4 subpixels.
Regular bullet movement works by adding a Q12.4 velocity to a Q12.4 position
every frame, with the velocity typically being calculated only once on spawn
time from an 8-bit angle and a Q12.4 speed. Quantization errors from this
initial calculation can quickly compound over all the frames a bullet spends
moving across the playfield. If a bullet is only supposed to move on a
straight line though, there is a more precise way of calculating its
position: By storing the origin point, movement angle, and total distance
traveled, you can perform a full polar→Cartesian transformation every frame.
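Conceptually – with hypothetical names, and sin8()/cos8() standing in for a
lookup into an 8-bit-angle sine table:

// Regular movement: a Q12.4 velocity, calculated once at spawn time,
// accumulates its quantization error on every frame
bullet.pos.x += bullet.velocity.x;
bullet.pos.y += bullet.velocity.y;

// Exact linear movement: a fresh polar→Cartesian transformation per frame,
// relative to the fixed origin point
bullet.distance += bullet.speed;
bullet.pos.x = (bullet.origin.x + (cos8(bullet.angle) * bullet.distance));
bullet.pos.y = (bullet.origin.y + (sin8(bullet.angle) * bullet.distance));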
Out of the 10 danmaku patterns in TH05 that use this motion type, the
difference to regular bullet movement can be best seen in Louise's final
pattern:
Louise's final pattern in its original form, demonstrating
exact linear bullet movement. Note how each bullet spawns slightly
behind the delay cloud: ZUN simply forgot to shift the fixed origin
point along with it.
The same pattern with standard bullet movement, corrupting
its intended appearance. No delay cloud-related oversights here though,
at least.
Not far away from the regular bullet code, we've also got the movement
function for the infamous curve / "cheeto" bullets. I would have almost
called them "cheetos" in the code as well, which surely fits more nicely
into 8.3 filenames than "curve bullets" does, but eh, trademarks…
As for hitboxes, we got a 16×16 one on the head node, and a 12×12 one on the
16 trail nodes. The latter simply store the position of the head node during
the last 16 frames, Snake style. But what you're all here for is probably
the turning and homing algorithm, right? Boiled down to its essence, it
works like this:
// [head] points to the controlled "head" part of a curve bullet entity.
// Angles are stored with 8 bits representing a full circle, providing free
// normalization on arithmetic overflow.
// The directions are ordered as you would expect:
// • 0x00: right (sin(0x00) = 0, cos(0x00) = +1)
// • 0x40: down (sin(0x40) = +1, cos(0x40) = 0)
// • 0x80: left (sin(0x80) = 0, cos(0x80) = -1)
// • 0xC0: up (sin(0xC0) = -1, cos(0xC0) = 0)
uint8_t angle_delta = (head->angle - player_angle_from(
head->pos.cur.x, head->pos.cur.y
));
// Stop turning if this bullet already points within 1/128th of a circle of
// the player.
const uint8_t SNAP = 0x02;
// Else, turn either clockwise or counterclockwise by 1/256th of a circle,
// depending on what would reach the player the fastest.
if((angle_delta > SNAP) && (angle_delta < static_cast<uint8_t>(-SNAP))) {
angle_delta = (angle_delta >= 0x80) ? -0x01 : +0x01;
}
head->angle -= angle_delta;
5 lines of code, and not all too difficult to follow once you are familiar
with 8-bit angles… unlike what ZUN actually wrote. Which is 26 lines,
and includes an unused "friction" variable that is never set to any value
that makes a difference in the formula. uth05win
correctly saw through that all and simplified this code to something
equivalent to my explanation. Redoing that work certainly wasted a bit of my
time, and means that I now definitely need to spend another push on RE'ing
all the shared boss functions before I can start with Shinki.
So while a curve bullet's speed does get faster over time, its
angular velocity is always limited to 1/256th of a
circle per frame. This reveals the optimal strategy for dodging them:
Maximize this delta angle by staying as close to 180° away from their
current direction as possible, and let their acceleration do the rest.
At least that's the theory for dodging a single one. As a danmaku
designer, you can now of course place other bullets at these technically
optimal places to prevent a curve bullet pattern from being cheesed like
that. I certainly didn't record the video above in a single take either…
After another bunch of boring entity spawn and update functions, the
playfield shaking feature turned out as the most notable (and tricky) one to
round out these two pushes. It's actually implemented quite well in how it
simply "un-shakes" the screen by just marking every stage tile to be
redrawn. In the context of all the other tile invalidation that can take
place during a frame, that's definitely more performant than
📝 doing another EGC-accelerated memmove().
Due to these two games being double-buffered via page flipping, this
invalidation only really needs to happen for the frame after the next
one though. The immediately next frame will show the regular, un-shaken
playfield on the other VRAM page first, except during the multi-frame
shake animation when defeating a midboss, where it will also appear shifted
in a different direction… 😵 Yeah, no wonder why ZUN just always invalidates
all stage tiles for the next two frames after every shaking animation, which
is guaranteed to handle both sporadic single-frame shakes and continuous
ones. So close to good-code here.
Finally, this delivery was delayed a bit because -Tom-
requested his round-up amount to be limited to the cap in the future. Since
that makes it kind of hard to explain on a static page how much money he
will exactly provide, I now properly modeled these discounts in the website
code. The exact round-up amount is now included in both the pre-purchase
breakdown, as well as the cap bar on the main page.
With that in place, the system is now also set up for round-up offers from
other patrons. If you'd also like to support certain goals in this way, with
any amount of money, now's the time for getting in touch with me about that.
Known contributors only, though! 😛
Next up: The final bunch of shared boring boss functions. Which certainly
will give me a break from all the maintenance and research work, and speed
up delivery progress again… right?
Been 📝 a while since we last looked at any of
TH03's game code! But before that, we need to talk about Y coordinates.
During TH03's MAIN.EXE, the PC-98 graphics GDC runs in its
line-doubled 640×200 resolution, which gives the in-game portion its
distinctive stretched low-res look. This lower resolution is a consequence
of using 📝 Promisence Soft's SPRITE16 driver:
Its performance simply stems from the fact that it expects sprites to be
stored in the bottom half of VRAM, which allows them to be blitted using the
same EGC-accelerated VRAM-to-VRAM copies we've seen again and again in all
other games. Reducing the visible resolution also means that the sprites can
be stored on both VRAM pages, allowing the game to still be double-buffered.
If you force the graphics chip to run at 640×400, you can see them:
The full VRAM contents during TH03's in-game portion, as seen when forcing the system into a 640×400 resolution.
Note that the text chip still displays its overlaid contents at 640×400,
which means that TH03's in-game portion technically runs at two
resolutions at the same time.
But that means that any mention of a Y coordinate is ambiguous: Does it
refer to undoubled VRAM pixels, or on-screen stretched pixels? Especially
people who have known about the line doubling for years might almost expect
technical blog posts on this game to use undoubled VRAM coordinates. So,
let's introduce a new formatting convention for both on-screen
640×400 and undoubled 640×200 coordinates,
and always write out both to minimize the confusion.
Alright, now what's the thing gonna be? The enemy structure is highly
overloaded, being used for enemies, fireballs, and explosions with seemingly
different semantics for each. Maybe a bit too much to be figured out in what
should ideally be a single push, especially with all the functions that
would need to be decompiled? Bullet code would be easier, but not exactly
single-push material either. As it turns out though, there's something more
fundamental left to be done first, which both of these subsystems depend on:
collision detection!
And it's implemented exactly how I always naively imagined collision
detection to be implemented in a fixed-resolution 2D bullet hell game with
small hitboxes: By keeping a separate 1bpp bitmap of both playfields in
memory, drawing in the collidable regions of all entities on every frame,
and then checking whether any pixels at the current location of the player's
hitbox are set to 1. It's probably not done in the other games because their
single data segment was already too packed for the necessary 17,664 bytes to
store such a bitmap at pixel resolution, and 282,624 bytes for a bitmap at
Q12.4 subpixel resolution would have been prohibitively expensive in 16-bit
Real Mode DOS anyway. In TH03, on the other hand, this bitmap is doubly
useful, as the AI also uses it to elegantly learn what's on the playfield.
By halving the resolution and only tracking tiles of 2×2 / 2×1 pixels, TH03 only requires an adequate total
of 6,624 bytes of memory for the collision bitmaps of both playfields.
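A minimal sketch of the idea, with hypothetical names and constants:

unsigned char bitmap[TILES_Y][TILES_X / 8]; // 1 bit per 2×2 / 2×1 tile

// Every frame: clear the bitmap, then OR in the collidable shape of every
// entity via functions like this one…
void collision_add_box(int left, int top, int w, int h);

// …and finally test the bits underneath the player's hitbox:
int player_is_hit(int left, int top, int w, int h)
{
    for(int y = top; y < (top + h); y++) {
        for(int x = left; x < (left + w); x++) {
            if(bitmap[y][x >> 3] & (0x80 >> (x & 7))) {
                return 1;
            }
        }
    }
    return 0;
}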
So how did the implementation not earn the good-code tag this time? Because the code for drawing into these bitmaps is undecompilable hand-written x86 assembly. And not just your usual ASM that was basically compiled from C and then edited to maybe optimize register allocation and maybe replace a bunch of local variables with self-modifying code, oh no. This code is full of overly clever bit twiddling, abusing the fact that the 16-bit AX,
BX, CX, and DX registers can also be
accessed as two 8-bit registers, calculations that change the semantic
meaning behind the value of a register, or just straight-up reassignments of
different values to the same small set of registers. Sure, in some way it is
impressive, and it all does work and correctly covers every edge
case, but come on. This could have all been a lot more readable in
exchange for just a few CPU cycles.
What's most interesting though are the actual shapes that these functions
draw into the collision bitmap. On the surface, we have:
1) vertical slopes at any angle across the whole playfield; exclusively
   used for Chiyuri's diagonal laser EX attack
2) straight vertical lines, with a width of 1 tile; exclusively used for
   the 2×2 / 2×1 hitboxes of bullets
3) rectangles at arbitrary sizes
But only 2) actually draws a full solid line. 1) and 3) are only ever drawn
as horizontal stripes, with a hardcoded distance of 2 vertical tiles
between every stripe of a slope, and 4 vertical tiles between every stripe
of a rectangle. That's 66-75% of each rectangular entity's intended hitbox
not actually taking part in collision detection. Now, if player hitboxes
were ≤ 6 / 3 pixels, we'd have one
possible explanation of how the AI can "cheat", because it could just
precisely move through those blank regions at TAS speeds. So, let's make
this two pushes after all and tell the complete story, since this is one of
the more interesting aspects to still be documented in this game.
And the code only gets worse. While the player
collision detection function is decompilable, it might as well not
have been, because it's just more of the same "optimized", hard-to-follow
assembly. With the four splittable 16-bit registers having a total of 20
different meanings in this function, I would have almost preferred
self-modifying code…
In fact, it was so bad that it prompted some maintenance work on my inline
assembly coding standards as a whole. Turns out that the _asm
keyword is not only still supported in modern Visual Studio compilers, but
also in Clang with the -fms-extensions flag, and compiles fine
there even for 64-bit targets. While that might sound like amazing news at
first ("awesome, no need to rewrite this stuff for my x86_64 Linux
port!"), you quickly realize that almost all inline assembly in this
codebase assumes either PC-98 hardware, segmented 16-bit memory addressing,
or is a temporary hack that will be removed with further RE progress.
That's mainly because most of the raw arithmetic code uses Turbo C++'s
register pseudovariables where possible. While they certainly have their
drawbacks, being a non-standard extension that's not supported in other
x86-targeting C compilers, their advantages are quite significant: They
allow this code to stay in the same language, and provide slightly more
immediate portability to any other architecture, together with
📝 readability and maintainability improvements that can get quite significant when combined with inlining:
// This one line compiles to five ASM instructions, which would need to be
// spelled out in any C compiler that doesn't support register pseudovariables.
// By adding typed aliases for these registers via `#define`, this code can be
// both made even more readable, and be prepared for an easier transformation
// into more portable local variables.
_ES = (((_AX * 4) + _BX) + SEG_PLANE_B);
However, register pseudovariables might cause potential portability issues
as soon as they are mixed with inline assembly instructions that rely on
their state. The lazy way of "supporting pseudo-registers" in other
compilers would involve declaring the full set as global variables, which
would immediately break every one of those instances:
_DI = 0;
_AX = 0xFFFF;
// Special x86 instruction doing the equivalent of
//
// *reinterpret_cast<uint16_t far *>(MK_FP(_ES, _DI)) = _AX;
// _DI += sizeof(uint16_t);
//
// Only generated by Turbo C++ in very specific cases, and therefore only
// reliably available through inline assembly.
asm { movsw; }
What's also not all too standardized, though, are certain variants of
the asm keyword. That's why I've now introduced a distinction
between the _asm keyword for "decently sane" inline assembly,
and the slightly less standard asm keyword for inline assembly
that relies on the contents of pseudo-registers, and should break on
compilers that don't support them. So yeah, have some minor
portability work in exchange for these two pushes not having all that much
in RE'd content.
With that out of the way and the function deciphered, we can confirm the
player hitboxes to be a constant 8×8 /
8×4 pixels, and prove that the hit stripes are nothing but
an adequate optimization that doesn't affect gameplay in any way.
And what's the obvious thing to immediately do if you have both the
collision bitmap and the player hitbox? Writing a "real hitbox" mod, of
course:
Reorder the calls to rendering functions so that player and shot sprites
are rendered after bullets
Blank out all player sprite pixels outside an
8×8 / 8×4 box around the center
point
After the bullet rendering function, turn on the GRCG in RMW mode and
set the tile register set to the background color
Stretch the negated contents of collision bitmap onto each playfield,
leaving only collidable pixels untouched
Do the same with the actual, non-negated contents and a white color, for
extra contrast against the background. This also makes sure to show any
collidable areas whose sprite pixels are transparent, such as with the moon
enemy. (Yeah, how unfair.) Doing that also loses a lot of information about
the playfield, such as enemy HP indicated by their color, but what can you
do:
A decently busy TH03 in-game frame and its underlying collision bitmap,
showing off all three different collision shapes together with the
player hitboxes.
2022-02-18-TH03-real-hitbox.zip
The secret for writing such mods before having reached a sufficient level of
position independence? Put your new code segment into DGROUP,
past the end of the uninitialized data section. That's why this modded
MAIN.EXE is a lot larger than you would expect from the raw amount of new code: The file now actually needs to store all these
uninitialized 0 bytes between the end of the data segment and the first
instruction of the mod code – normally, this number is simply a part of the
MZ EXE header, and doesn't need to be redundantly stored on disk. Check the
th03_real_hitbox
branch for the code.
And now we know why so many "real hitbox" mods for the Windows Touhou games
are inaccurate: The games would simply be unplayable otherwise – or can
you dodge rapidly moving 2×2 /
2×1 blocks as an 8×8 /
8×4 rectangle that is smaller than your shot sprites,
especially without focused movement? I can't.
Maybe it will feel more playable after making explosions visible, but that
would need more RE groundwork first.
It's also interesting how adding two full GRCG-accelerated redraws of both
playfields per frame doesn't significantly drop the game's frame rate – so
why did the drawing functions have to be micro-optimized again? Those two
redraws would even be possible in a single pass by using the GRCG's TDW mode,
which should theoretically be 8× faster, but I have to stop somewhere.
Next up: The final missing piece of TH04's and TH05's
bullet-moving code, which will include a certain other
type of projectile as well.
P0174
TH01 decompilation (Sariel, part 2/9: Preparation + birds)
P0175
TH01 decompilation (Sariel, part 3/9: Shield/wand/dress animation + patterns 1-3)
P0176
TH01 decompilation (Sariel, part 4/9: Background transition animation + vertical 2×2 particles)
P0177
TH01 decompilation (Sariel, part 5/9: Patterns 4-9 + wavy 2×2 particles)
P0178
TH01 decompilation (Sariel, part 6/9: Patterns 10-11)
P0179
TH01 decompilation (Sariel, part 7/9: Patterns 12-13 + horizontal 2×2 particles)
P0180
TH01 decompilation (Sariel, part 8/9: Patterns 14-16)
P0181
TH01 decompilation (Sariel, part 9/9: Main function)
💰 Funded by:
Ember2528, Yanga
Here we go, TH01 Sariel! This is the single biggest boss fight in all of
PC-98 Touhou: If we include all custom effect code we previously decompiled,
it amounts to a total of 10.31% of all code in TH01 (and 3.14%
overall). These 8 pushes cover the final 8.10% (or 2.47% overall),
and are likely to be the single biggest delivery this project will ever see.
Considering that I only managed to decompile 6.00% across all games in 2021,
2022 is already off to a much better start!
So, how can Sariel's code be that large? Well, we've got:
16 danmaku patterns; including the one snowflake detonating into a giant
94×32 hitbox
Gratuitous usage of floating-point variables, bloating the binary thanks
to Turbo C++ 4.0J's particularly horrid code generation
The hatching birds that shoot pellets
3 separate particle systems, sharing the general idea, overall code
structure, and blitting algorithm, but differing in every little detail
The "gust of wind" background transition animation
5 sets of custom monochrome sprite animations, loaded from
BOSS6GR?.GRC
A further 3 hardcoded monochrome 8×8 sprites for the "swaying leaves"
pattern during the second form
In total, it's just under 3,000 lines of C++ code, containing a total of 8
definite ZUN bugs, 3 of them being subpixel/pixel confusions. That might not
look all too bad if you compare it to the
📝 player control function's 8 bugs in 900 lines of code,
but given that Konngara had 0… (Edit (2022-07-17):
Konngara contains two bugs after all: A
📝 possible heap corruption in test or debug mode,
and the infamous
📝 temporary green discoloration.)
And no, the code doesn't make it obvious whether ZUN coded Konngara or
Sariel first; there's just as much evidence for either.
Some terminology before we start: Sariel's first form is separated
into four phases, indicated by different background images, that
cycle until Sariel's HP reach 0 and the second, single-phase form
starts. The danmaku patterns within each phase are also on a cycle,
and the game picks a random but limited number of patterns per phase before
transitioning to the next one. The fight always starts at pattern 1 of phase
1 (the random purple lasers), and each new phase also starts at its
respective first pattern.
Sariel's bugs already start at the graphics asset level, before any code
gets to run. Some of the patterns include a wand raise animation, which is
stored in BOSS6_2.BOS:
Umm… OK? The same sprite twice, just with slightly different
colors? So how is the wand lowered again?
The "lowered wand" sprite is missing in this file simply because it's
captured from the regular background image in VRAM, at the beginning of the
fight and after every background transition. What I previously thought to be
📝 background storage code therefore has a
different meaning in Sariel's case. Since this captured sprite is fully
opaque, it will reset the entire 128×128 wand area… wait, 128×128, rather
than 96×96? Yup, this lowered sprite is larger than necessary, wasting 1,967
bytes of conventional memory. That still doesn't quite explain the
second sprite in BOSS6_2.BOS though. Turns out that the black
part is indeed meant to unblit the purple reflection (?) in the first
sprite. But… that's not how you would correctly unblit that?
The first sprite already eats up part of the red HUD line, and the second
one additionally fails to recover the seal pixels underneath, leaving a nice
little black hole and some stray purple pixels until the next background
transition. Quite ironic given that both
sprites do include the right part of the seal, which isn't even part of the
animation.
Just like Konngara, Sariel continues the approach of using a single function
per danmaku pattern or custom entity. While I appreciate that this allows
all pattern- and entity-specific state to be scoped locally to that one
function, it quickly gets ugly as soon as such a function has to do more than one thing.
The "bird function" is particularly awful here: It's just one if(…)
{…} else if(…) {…} else if(…) {…} chain with different
branches for the subfunction parameter, with zero shared code between any of
these branches. It also uses 64-bit floating-point double as
its subpixel type… and since it also takes four of those as parameters
(y'know, just in case the "spawn new bird" subfunction is called), every
call site has to also push four double values onto the stack.
Thanks to Turbo C++ even using the FPU for pushing a 0.0 constant, we
have already reached maximum floating-point decadence before even having
seen a single danmaku pattern. Why decadence? Every possible spawn position
and velocity in both bird patterns just uses pixel resolution, with no
fractional component in sight. And there goes another 720 bytes of
conventional memory.
Speaking about bird patterns, the red-bird one is where we find the first
code-level ZUN bug: The spawn cross circle sprite suddenly disappears after
it finished spawning all the bird eggs. How can we tell it's a bug? Because
there is code to smoothly fly this sprite off the playfield, that
code just suddenly forgets that the sprite's position is stored in Q12.4
subpixels, and treats it as raw screen pixels instead.
As a result, the well-intentioned 640×400
screen-space clipping rectangle effectively shrinks to 38×23 pixels in the
top-left corner of the screen. Which the sprite is always outside of, and
thus never rendered again.
The intended animation is easily restored though:
Sariel's third pattern, and the first to spawn birds, in its original
and fixed versions. Note that I somewhat fixed the bird hatch animation
as well: ZUN's code never unblits any frame of animation there, and
simply blits every new one on top of the previous one.
Also, did you know that birds actually have a quite unfair 14×38-pixel
hitbox? Not that you'd ever collide with them in any of the patterns…
Another 3 of the 8 bugs can be found in the symmetric, interlaced spawn rays
used in three of the patterns, and the 32×32 debris "sprites" shown at their endpoint, at
the edge of the screen. You kinda have to commend ZUN's attention to detail
here, and how he wrote a lot of code for those few rapidly animated pixels
that you most likely don't
even notice, especially with all the other wrong pixels
resulting from rendering glitches. One of the bugs in the very final pattern
of phase 4 even turns them into the vortex sprites from the second pattern
in phase 1 during the first 5 frames of
the first time the pattern is active, and I had to single-step the blitting
calls to verify it.
It certainly was annoying how much time I spent making sense of these bugs,
and all weird blitting offsets, for just a few pixels… Let's look at
something more wholesome, shall we?
So far, we've only seen the PC-98 GRCG being used in RMW (read-modify-write)
mode, which I previously
📝 explained in the context of TH01's red-white HP pattern.
The second of its three modes, TCR (Tile Compare Read), affects VRAM reads
rather than writes, and performs "color extraction" across all 4 bitplanes:
Instead of returning raw 1bpp data from one plane, a VRAM read will instead
return a bitmask, with a 1 bit at every pixel whose full 4-bit color exactly
matches the color at that offset in the GRCG's tile register, and 0
everywhere else. Sariel uses this mode to make sure that the 2×2 particles
and the wind effect are only blitted on top of "air color" pixels, with
other parts of the background behaving like a mask. The algorithm, sketched in code after this list:
Set the GRCG to TCR mode, and all 8 tile register dots to the air
color
Read N bits from the target VRAM position to obtain an N-bit mask where
all 1 bits indicate air color pixels at the respective position
AND that mask with the alpha plane of the sprite to be drawn, shifted to
the correct start bit within the 8-pixel VRAM byte
Set the GRCG to RMW mode, and all 8 tile register dots to the color that
should be drawn
Write the previously obtained bitmask to the same position in VRAM
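Expressed in code, it could look somewhat like the following sketch. It assumes Turbo C++'s outportb() from <dos.h>, master.lib's GRCG mode constants, and a byte-aligned sprite that is 8 pixels wide; everything else, including the function names, is made up:

```cpp
#include <dos.h>

#define GRCG_MODE 0x7C      /* GRCG mode register */
#define GRCG_TILE 0x7E      /* GRCG tile register, one write per plane */
#define GC_TCR    0x80      /* Tile Compare Read (master.lib's name) */
#define GC_RMW    0xC0      /* Read-Modify-Write (master.lib's name) */
#define ROW_SIZE  (640 / 8) /* VRAM bytes per row */

/* Sets the GRCG to [mode], with all 8 tile register dots set to the
 * 4-bit [color]. */
void grcg_set(unsigned char mode, unsigned char color)
{
	outportb(GRCG_MODE, mode);
	for(int plane = 0; plane < 4; plane++, color >>= 1) {
		outportb(GRCG_TILE, ((color & 1) ? 0xFF : 0x00));
	}
}

/* Blits the 8×[h] 1bpp sprite in [alpha] to [vram] in [sprite_color],
 * on top of pixels in [air_color] only. */
void blit_on_air(
	const unsigned char *alpha, int h, unsigned char far *vram,
	unsigned char sprite_color, unsigned char air_color
)
{
	for(int y = 0; y < h; y++, vram += ROW_SIZE) {
		grcg_set(GC_TCR, air_color);
		unsigned char mask = *vram;  /* 1 = air-colored pixel */
		grcg_set(GC_RMW, sprite_color);
		*vram = (mask & alpha[y]);   /* GRCG colors the 1 bits */
	}
	outportb(GRCG_MODE, 0x00); /* GRCG off */
}
```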
Quite clever how the extracted colors double as a secondary alpha plane,
making for another well-earned good-code tag. The wind effect really doesn't deserve it, though:
ZUN calculates every intermediate result inside this function
over and over and over again… Together with some ugly
pointer arithmetic, this function turned into one of the most tedious
decompilations in a long while.
This gradual effect is blitted exclusively to the front page of VRAM,
since parts of it need to be unblitted to create the illusion of a gust of
wind. Then again, anything that moves on top of air-colored background –
most likely the Orb – will also unblit whatever it covered of the effect…
As far as I can tell, ZUN didn't use TCR mode anywhere else in PC-98 Touhou.
Tune in again later during a TH04 or TH05 push to learn about TDW, the final
GRCG mode!
Speaking about the 2×2 particle systems, why do we need three of them? Their
only observable difference lies in the way they move their particles:
Up or down in a straight line (used in phases 4 and 2,
respectively)
Left or right in a straight line (used in the second form)
Left and right in a sinusoidal motion (used in phase 3, the "dark
orange" one)
Out of all possible formats ZUN could have used for storing the positions
and velocities of individual particles, he chose a) 64-bit /
double-precision floating-point, and b) raw screen pixels. Want to take a
guess at which data type is used for which particle system?
If you picked double for 1) and 2), and raw screen pixels for
3), you are of course correct! Not that I'm implying
that it should have been the other way round – screen pixels would have
perfectly fit all three systems' use cases, as all 16-bit coordinates
are extended to 32 bits for trigonometric calculations anyway. That's what,
another 1,080 bytes of wasted conventional memory? And that's even
calculated while keeping the current architecture, which allocates
space for 3×30 particles as part of the game's global data, although only
one of the three particle systems is active at any given time.
That's it for the first form, time to put on "Civilization
of Magic"! Or "死なばもろとも"? Or "Theme of 地獄めくり"? Or whatever SYUGEN is
supposed to mean…
… and the code of these final patterns comes out roughly as exciting as
their in-game impact. With the big exception of the very final "swaying
leaves" pattern: After 📝 Q4.4,
📝 Q28.4,
📝 Q24.8, and double variables,
this pattern uses… decimal subpixels? Like, multiplying the number by
10, and using the decimal one's digit to represent the fractional part?
Well, sure, if you really insist on moving the leaves in cleanly
represented integer multiples of ⅒, which is infamously impossible in IEEE
754. Aside from aesthetic reasons, it only really combines less precision
(10 possible fractions rather than the usual 16) with the inferior
performance of having to use integer divisions and multiplications rather
than simple bit shifts. And it's surely not because the leaf sprites needed
an extended integer value range of [-3276, +3276], compared to
Q12.4's [-2047, +2048]: They are clipped to 640×400 screen space
anyway, and are removed as soon as they leave this area.
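For illustration, here's the difference between the two formats in a hypothetical snippet – Q12.4 only needs cheap bit operations, while the decimal format requires real multiplications and divisions:

```cpp
/* Q12.4: fractions in 1/16 steps, converted with shifts */
int q = ((5 << 4) | 8);  /* 5.5 pixels */
int q_pixels = (q >> 4); /* → 5 */

/* The "swaying leaves" format: fractions in 1/10 steps */
int d = ((5 * 10) + 5);  /* 5.5 pixels */
int d_pixels = (d / 10); /* → 5 */
```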
This pattern also contains the second bug in the "subpixel/pixel confusion
hiding an entire animation" category, causing all of
BOSS6GR4.GRC to effectively become unused:
The "swaying leaves" pattern. ZUN intended a splash animation to be
shown once each leaf "spark" reaches the top of the playfield, which is
never displayed in the original game.
At least their hitboxes are what you would expect, exactly covering the
30×30 pixels of Reimu's sprite. Both animation fixes are available on the th01_sariel_fixes
branch.
After all that, Sariel's main function turned out fairly unspectacular, just
putting everything together and adding some shake, transition, and color
pulse effects with a bunch of unnecessary hardware palette changes. There is
one reference to a missing BOSS6.GRP file during the
first→second form transition, suggesting that Sariel originally had a
separate "first form defeat" graphic, before it was replaced with just the
shaking effect in the final game.
Speaking about the transition code, it is kind of funny how the… um,
imperative and concrete nature of TH01 leads to these 2×24
lines of straight-line code. They kind of look like ZUN rattling off a
laundry list of subsystems and raw variables to be reinitialized, making
damn sure to not forget anything.
Whew! Second PC-98 Touhou boss completely decompiled, 29 to go, and they'll
only get easier from here! 🎉 The next one in line, Elis, is somewhere
between Konngara and Sariel as far as x86 instruction count is concerned, so
that'll need to wait for some additional funding. Next up, therefore:
Looking at a thing in TH03's main game code – really, I have little
idea what it will be!
Now that the store is open again, also check out the
📝 updated RE progress overview I've posted
together with this one. In addition to more RE, you can now also directly
order a variety of mods; all of these are further explained in the order
form itself.
TH03 finally passed 20% RE, and the newly decompiled code contains no
serious ZUN bugs! What a nice way to end the year.
There's only a single unlockable feature in TH03: Chiyuri and Yumemi as
playable characters, unlocked after a 1CC on any difficulty. Just like the
Extra Stages in TH04 and TH05, YUME.NEM contains a single
designated variable for this unlocked feature, making it trivial to craft a
fully unlocked score file without recording any high scores that others
would have to compete against. So, we can now put together a complete set
for all PC-98 Touhou games: 2021-12-27-Fully-unlocked-clean-score-files.zip
It would have been cool to set the randomly generated encryption keys in
these files to a fixed value so that they cancel out and end up not actually
encrypting the file. Too bad that TH03 also started feeding each encrypted
byte back into its stream cipher, which makes this impossible.
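To see why, here's a minimal sketch of such a feedback construction – not TH03's actual cipher, just the general shape of one:

```cpp
/* Even a key of 0 only leaves the first byte unencrypted: every
 * ciphertext byte feeds back into the key stream. */
void crypt(unsigned char *buf, int len, unsigned char key)
{
	for(int i = 0; i < len; i++) {
		buf[i] += key;
		key ^= buf[i];
	}
}
```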
The main loading and saving code turned out to be the second-cleanest
implementation of a score file format in PC-98 Touhou, just behind TH02.
Only two of the YUME.NEM functions come with nonsensical
differences between OP.EXE and MAINL.EXE, rather
than 📝 all of them, as in TH01 or
📝 too many of them, as in TH04 and TH05. As
for the rest of the per-difficulty structure though… well, it quickly
becomes clear why this was the final score file format to be RE'd. The name,
score, and stage fields are directly stored in terms of the internal
REGI*.BFT sprite IDs used on the high score screen. TH03 also
stores 10 score digits for each place rather than the 9 possible ones, keeps
any leading 0 digits, and stores the letters of entered names in reverse
order… yeah, let's decompile the high score screen as well, for a full
understanding of why ZUN might have done all that. (Answer: For no reason at
all.)
And wow, what a breath of fresh air. It's surely not
good-code: The overlapping shadows resulting from using
a 24-pixel letterspacing with 32-pixel glyphs in the name column led ZUN to
do quite a lot of unnecessary and slightly confusing rendering work when
moving the cursor back and forth, and he even forgot about the EGC there.
But it's nowhere close to the level of jank we saw in
📝 TH01's high score menu last year. Good to
see that ZUN had learned a thing or two by his third game – especially when
it comes to storing the character map cursor in terms of a character ID,
and improving the layout of the character map:
That's almost a nicely regular grid there. With the question mark and the
double-wide SP, BS, and END options, the cursor
movement code only comes with a reasonable two exceptions, which are easily
handled. And while I didn't get this screen completely decompiled,
one additional push was enough to cover all important code there.
The only potential glitch on this screen is a result of ZUN's continued use
of binary-coded
decimal digits without any bounds check or cap. Like the in-game HUD
score display in TH04 and TH05, TH03's high score screen simply uses the
next glyph in the character set for the most significant digit of any score
above 1,000,000,000 points – in this case, the period. Still, it only
really gets bad at 8,000,000,000 points: Once the glyphs are
exhausted, the blitting function ends up accessing garbage data and filling
the entire screen with garbage pixels. For comparison though, the current world record
is 133,650,710 points, so good luck getting 8 billion in the first
place.
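The rendering side of this probably boils down to a digit→sprite mapping along these lines – a hypothetical reconstruction with made-up IDs:

```cpp
#define GLYPH_0 0x10 /* hypothetical REGI*.BFT sprite ID of the '0' glyph */

/* Without a (digit <= 9) cap, a digit of 10 selects whatever glyph
 * follows '9' in the character set (the period), and even higher
 * values eventually run past the end into garbage sprite data. */
int score_glyph(int bcd_digit)
{
	return (GLYPH_0 + bcd_digit);
}
```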
Next up: Starting 2022 with the long-awaited decompilation of TH01's Sariel
fight! Due to the 📝 recent price increase,
we've now got a window in the cap that
is going to remain open until tomorrow, providing an early opportunity to
set a new priority after Sariel is done.
The "bad" news first: Expanding to Stripe in order to support Google Pay
requires bureaucratic effort that is not quite justified yet, and would only
be worth it after the next price increase.
Visualizing technical debt has definitely been overdue for a while though.
With 1 of these 2 pushes being focused on this topic, it makes sense to
summarize once again what "technical debt"
means in the context of ReC98, as this info was previously kind of scattered
over multiple blog posts. Mainly, it encompasses
any ZUN-written code
that we did name and reverse-engineer,
but which we simply moved out into dedicated files that are then
#included back into the big .ASM translation units,
without worrying about decompilation or proving undecompilability for
now.
Technically (ha), it would also include all of master.lib, which has
always been compiled into the binaries in this way, and which will require
quite a bit of dedicated effort to be moved out into a properly linkable
library, once it's feasible. But this code has never been part of any
progress metric – in fact, 0% RE is
defined as the total number of x86 instructions in the binary minus
any library code. There is also no relation between instruction numbers and
the time it will take to finalize master.lib code, let alone a precedent of
how much it would cost.
If we now want to express technical debt as a percentage, it's clear where
the 100% point would be: when all RE'd code is also compiled in from a
translation unit outside the big .ASM one. But where would 0% be? Logically,
it would be the point where no reverse-engineered code has ever been moved
out of the big translation units yet, and nothing has ever been decompiled.
With these boundary points, this is what we get:
Not too bad! So it's 6.22% of total RE that we will have to revisit at some
point, concentrated mostly around TH04 and TH05 where it resulted from a
focus on position independence. The prices also give an accurate impression
of how much more work would be required there.
But is that really the best visualization? After all, it requires an
understanding of our definition of technical debt, so it's maybe not the
most useful measurement to have on a front page. But how about subtracting
those 6.22% from the number shown on the RE% bars? Then, we get this:
Which is where we get to the good news: Twitter surprisingly helped me out
in choosing one visualization over the other, voting
7:2 in favor of the Finalized version. While this one requires
you to manually calculate € finalized - € RE'd to
obtain the raw financial cost of technical debt, it clearly shows, for the
first time, how far away we are from the main goal of fully decompiling all
5 games… at least to the extent it's possible.
Now that the parser is looking at these recursively included .ASM files for
the first time, it needed a small number of improvements to correctly handle
the more advanced directives used there, which no automatic disassembler
would ever emit. Turns out I've been counting some directives as
instructions that never should have been, which is where the additional
0.02% total RE came from.
One more overcounting issue remains though. Some of the RE'd assembly slices
included by multiple games contain different if branches for
each game, like this:
```asm
; An example assembly file included by both TH04's and TH05's MAIN.EXE:
if (GAME eq 5)
	; (Code for TH05)
else
	; (Code for TH04)
endif
```
Currently, the parser simply ignores if, else, and
endif, leading to the combined code of all branches being
counted for every game that includes such a file. This also affects the
calculated speed, and is the reason why finalization seems to be slightly
faster than reverse-engineering, at currently 471 instructions per push
compared to 463. However, it's not that bad of a signal to send: Most of the
not yet finalized code is shared between TH04 and TH05, so finalizing it
will roughly be twice as fast as regular reverse-engineering to begin with.
(Unless the code then turns out to be twice as complex as average code…).
For completeness, finalization is now also shown as part of the per-commit metrics. Now it's clearly visible what I was
doing in those very slow five months between P0131 and P0140, where
the progress bar didn't move at all: Repaying 3.49% of previously
accumulated technical debt across all games. 👌
As announced, I've also implemented a new caching system for this website,
as the second main feature of these two pushes. By appending a hash string
to the URLs of static resources, your browser should now both cache them
forever and re-download them once they did change on the server. This
avoids the unnecessary (and quite frankly, embarrassing) re-requests for all
static resources that typically just return a 304 Not Modified
response. As a result, the blog should now load a bit faster on repeated
visits, especially on slower connections. That should allow me to
deliberately not paginate it for another few years, without it getting all
too slow – and should prepare us for the day when our first game
reaches 100% and the server will get smashed.
However, I am open to changing the progress blog link in the
navigation bar at the top to the list of tags, once
people start complaining.
Apart from some more invisible correctness and QoL improvements, I've also
prepared some new funding goals, but I'll cover those once the store
reopens, next year. Syntax highlighting for code snippets would have also
been cool, but unfortunately didn't make it into those two pushes. It's
still on the list though!
Next up: Back to RE with the TH03 score file format, and other code that
surrounds it.
Made it through almost three years without a price increase! It's been
overdue for a while, though.
With the last months being full of rather research- and documentation-heavy
pushes, I've been just about able to keep up with the existing
subscriptions. By now, the amount of quality control and documentation I
found myself putting into this project has far surpassed the raw
reverse-engineering work. Back at the beginning of 2019 when I decided on
the previous push price of 30 €, I didn't have this blog nor the current
aspirations at code quality. Neither of these have ever been reflected in
the price, and I still find it hard to put a number on them. On the other
hand, I continue to dislike the typical Patreon model of no inherent defined
obligations on my part, and no direct association of the resulting work with
the person who funded it. You might have noticed that I don't use the word
"donations" anywhere, and instead refer to them as "orders" or "purchases" –
and that's precisely for this reason.
The result, however, has been a sold-out store for pretty much all of 2021.
I can only begin to imagine how much potential revenue I've already lost
from people who might have wanted to contribute at one point, but couldn't,
and have already written off this project…
Raising prices is pretty much the only way to get the pending workload back
to a more comfortable amount. I also thought about a two-tiered system: Have
a documentation-less option for 30 €, and take 60 € for any push that should
be accompanied by a blog post. However, skimping on documentation will
compromise the quality of the code as well. Writing these blog posts
presents another chance of improving it before release, which has made quite
a difference on many occasions. And after all, this documentation is the one
thing about ReC98 that people mainly interact with. As long as we haven't
hit 100% RE, the actual code seems to be an afterthought, which is perfectly
understandable: Why start work on a bigger mod or port now if the code is
steadily improving in every aspect, and it all will be just a bit more
maintainable in a few months?
But why go more commercial then, and especially now? If the recent attention
to spaztron64's PC-98 Touhou
collection package is any indication, ReC98 has a way bigger career
potential than the dead-end RL job I found myself in. Demand for fixed
translations and replay
support is definitely there – and given that these haven't been done so
far, it's very likely that I'll end up as the one to implement such mods,
especially if that should happen before reaching 100% RE or PI. People also
still seem to want* a port to IBM-compatible DOS,
📝 even though this makes no sense? But if
this is something you all want to pay for, then sure, why not.
And even right now, working on ReC98 sure beats writing junk software using
ill-suited technologies for highly corporate clients, or living close to a
world where academic papers are valued higher than working and maintained
code. I am in fact very happy whenever I'm done with that for the day, and
get to work on ReC98! Who would have thought.
So let's try to grow this into an actual business and raise prices to match
demand, going up to a nicely divisible 60 € per push. If you all still
manage to regularly sell out the store at this level and I get to
raise prices again, I should be able to reduce RL work further and therefore
raise the cap as well. Now that I've also clarified a potential route
towards self-employment, I'm going to react to these sell-out events more
quickly, and with smaller raises. So, no further immediate doubling in the
future.
Now, will this delay the currently highly awaited 100% completion of TH01
past August 2022, the 25th anniversary of its release? We'll see once we're
back to an almost empty backlog, after I'm done with the TH01 Sariel fight.
I'm hopeful that such a price increase will give a new voice to the goals
and priorities of less wealthy potential patrons. This crowdfunding is very
much designed to be hacked by "microtransactions" – small contributions with
specific requests that require other, larger generic contributions to be
fulfilled – and I'd like to see more of that. 😛 And even if 60 € per push
is already more than the combined fandom wants to pay, that means I can get
the 📝 16-bit build system done before the
first big 100% release. (Trust me, you really want that!)
I will still deliver the entire current backlog at the value the
contributions were originally purchased at. Due to the way the cap has to be
calculated, these contributions now appear to have doubled in value. All
existing subscriptions will then pay for half of their original pushes
starting with their respective December 2021 transaction.
Next up: A bunch of smaller website features, including:
a caching strategy for static content,
a third set of percentage bars to visualize the remaining technical
debt,
and, hopefully, some new payment methods that expand the number of
countries I can accept money from. (Yes, this was requested at one point
earlier this year!)
P0168
TH04/TH05 decompilation (EMS swap area, part 1/2)
P0169
TH04/TH05 decompilation (EMS swap area, part 2/2) + Research (TH04 No-EMS Reimu Stage 5 crash)
💰 Funded by:
rosenrose, Blue Bolt
🏷️ Tags:
EMS memory! The
infamous stopgap measure between the 640 KiB ("ought to be enough for
everyone") of conventional
memory offered by DOS from the very beginning, and the later XMS standard for
accessing all the rest of memory up to 4 GiB in the x86 Protected Mode. With
an optionally active EMS driver, TH04 and TH05 will make use of EMS memory
to preload a bunch of situational .CDG images at the beginning of
MAIN.EXE:
The "eye catch" game title image, shown while stages are loaded
The character-specific background image, shown while bombing
The player character dialog portraits
TH05 additionally stores the boss portraits there, preloading them
at the beginning of each stage. (TH04 instead keeps them in conventional
memory during the entire stage.)
Once these images are needed, they can then be copied into conventional
memory and accessed as usual.
Uh… wait, copied? It certainly would have been possible to map EMS
memory to a regular 16-bit Real Mode segment for direct access,
bank-switching out rarely used system or peripheral memory in exchange for
the EMS data. However, master.lib doesn't expose this functionality, and
only provides functions for copying data from EMS to regular memory and vice
versa.
But even that still makes EMS an excellent fit for the large image files
it's used for, as it's possible to directly copy their pixel data from EMS
to VRAM. (Yes, I tried!) Well… would, because ZUN doesn't do
that either, and always naively copies the images to newly allocated
conventional memory first. In essence, this dumbs down EMS into just another
layer of the memory hierarchy, inserted between conventional memory and
disk: Not quite as slow as disk, but still requiring that
memcpy() to retrieve the data. Most importantly though: Using
EMS in this way does not increase the total amount of memory
simultaneously accessible to the game. After all, some other data will have
to be freed from conventional memory to make room for the newly loaded data.
The most idiomatic way to define the game-specific layout of the EMS area
would be either a struct or an enum.
Unfortunately, the total size of all these images exceeds the range of a
16-bit value, and Turbo C++ 4.0J supports neither 32-bit enums
(which are silently degraded to 16-bit) nor 32-bit structs
(which simply don't compile). That still leaves raw compile-time constants
though; you only have to manually define the offset to each image in terms
of the size of its predecessor. But instead of doing that, ZUN just placed
each image at a nice round decimal offset, each slightly larger than the
actual memory required by the previous image, just to make sure that
everything fits. This results not only in quite
a bit of unnecessary padding, but also in technically the single
biggest amount of "wasted" memory in PC-98 Touhou: Out of the 180,000 (TH04)
and 320,000 (TH05) EMS bytes requested, the game only uses 135,552 (TH04)
and 175,904 (TH05) bytes. But hey, it's EMS, so who cares, right? Out of all
the opportunities to take shortcuts during development, this is among the
most acceptable ones. Any actual PC-98 model that could run these two games
comes with plenty of memory for this to not turn into an actual issue.
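For comparison, the chained compile-time constants mentioned above could look like this. The image names and byte sizes here are made up; the point is merely that every offset derives from its predecessor with zero padding, using 32-bit long constants to stay clear of the 16-bit limits:

```cpp
/* Hypothetical EMS layout. Each offset = previous offset + its size. */
static const long EMS_EYECATCH  = 0L;
static const long EMS_BOMB_BG   = (EMS_EYECATCH + 64000L);
static const long EMS_PORTRAITS = (EMS_BOMB_BG + 32768L);
static const long EMS_END       = (EMS_PORTRAITS + 38784L);
```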
On to the EMS-using functions themselves, which are the definition of
"cross-cutting concerns". Most of these have a fallback path for the non-EMS
case, and keep the loaded .CDG images in memory if they are immediately
needed. Which totally makes sense, but also makes it difficult to find names
that reflect all the global state changed by these functions. Every one of
these is also just called from a single place, so inlining
them would have saved me a lot of naming and documentation trouble
there.
The TH04 version of the EMS allocation code was actually displayed on ZUN's monitor in the
2010 MAG・ネット documentary; WindowsTiger already transcribed the low-quality video image
in 2019. By 2015 ReC98 standards, I would have just run with that, but
the current project goal is to write better code than ZUN, so I didn't. 😛
We sure ain't going to use magic numbers for EMS offsets.
The dialog init and exit code then is completely different in both games,
yet equally cross-cutting. TH05 goes even further in saving conventional
memory, loading each individual player or boss portrait into a single .CDG
slot immediately before blitting it to VRAM and freeing the pixel data
again. People who play TH05 without an active EMS driver are surely going to
enjoy the hard drive access lag between each portrait change…
TH04, on the other hand, also abuses the dialog
exit function to preload the Mugetsu defeat / Gengetsu entrance and
Gengetsu defeat portraits, using a static variable to track how often the
function has been called during the Extra Stage… who needs function
parameters anyway, right?
This is also the function in which TH04 infamously crashes after the Stage 5
pre-boss dialog when playing with Reimu and without any active EMS driver.
That crash is what motivated this look into the games' EMS usage… but the
code looks perfectly fine? Oh well, guess the crash is not related to EMS
then. Next u–
OK, of course I can't leave it like that. Everyone is expecting a fix now,
and I still got half of a push left over after decompiling the regular EMS
code. Also, I've now RE'd every function that could possibly be involved in
the crash, and this is very likely to be the last time I'll be looking at
them.
Turns out that the bug has little to do with EMS, and everything to do with
ZUN limiting the amount of conventional RAM that TH04's
MAIN.EXE is allowed to use, and then slightly miscalculating
this upper limit. Playing Stage 5 with Reimu is the most asset-intensive
configuration in this game, due to the combination of
6 player portraits (Marisa has only 5), at 128×128 pixels each
a 288×256 background for the boss fight, tied in size only with the
ones in the Extra Stage
the additional 96×80 image for the vertically scrolling stars during
the stage, wastefully stored as 4 bitplanes rather than a single one.
This image is never freed, not even at the end of the stage.
The star image used in TH04's Stage 5.
Remove any single one of the above points, and this crash would have never
occurred. But with all of them combined, the total amount of memory consumed
by TH04's MAIN.EXE just barely exceeds ZUN's limit of 320,000
bytes, by no more than 3,840 bytes, the size of the star image.
But wait: As we established earlier, EMS does nothing to reduce the amount
of conventional memory used by the game. In fact, if you disabled TH04's EMS
handling, you'd still get this crash even if you are running an EMS
driver and loaded DOS into the High Memory Area to free up as much
conventional RAM as possible. How can EMS then prevent this crash in the
first place?
The answer: It's only because ZUN's usage of EMS bypasses the need to load
the cached images back out of the XOR-encrypted 東方幻想.郷
packfile. Leaving aside the general
stupidity of any game data file encryption*, master.lib's decryption
implementation is also quite wasteful: It uses a separate buffer that
receives fixed-size chunks of the file, before decrypting every individual
byte and copying it to its intended destination buffer. That really
resembles the typical slowness of a C fread() implementation
more than it does the highly optimized ASM code that master.lib purports to
be… And how large is this well-hidden decryption buffer? 4 KiB.
So, looking back at the game, here is what happens once the Stage 5
pre-battle dialog ends:
Reimu's bomb background image, which was previously freed to make space
for her dialog portraits, has to be loaded back into conventional memory
from disk
BB0.CDG is found inside the 東方幻想.郷
packfile
file_ropen() ends up allocating a 4 KiB buffer for the
encrypted packfile data, getting us the decisive ~4 KiB closer to the memory
limit
The .CDG loader tries to allocate 52,608 contiguous bytes for the
pixel data of Reimu's bomb image
This would exceed the memory limit, so hmem_allocbyte()
fails and returns a nullptr
ZUN doesn't check for this case (as usual)
The pixel data is loaded to address 0000:0000,
overwriting the Interrupt Vector Table and whatever comes after
The game crashes
The final frame rendered by a crashing TH04.
The 4 KiB encryption buffer would only be freed by the corresponding
file_close() call, which of course never happens because the
game crashes before it gets there. At one point, I really did suspect the
cause to be some kind of memory leak or fragmentation inside master.lib,
which would have been quite delightful to fix.
Instead, the most straightforward fix here is to bump up that memory limit
by at least 4 KiB. Certainly easier than squeezing in a
cdg_free() call for the star image before the pre-boss dialog
without breaking position dependence.
Or, even better, let's nuke all these memory limits from orbit
because they make little sense to begin with, and fix every other potential
out-of-memory crash that modders would encounter when adding enough data to
any of the 4 games that impose such limits on themselves. Unless you want to
launch other binaries (which need to do their own memory allocations) after
launching the game, there's really no reason to restrict the amount of
memory available to a DOS process. Heck, whenever DOS creates a new one, it
assigns all remaining free memory by default anyway.
Removing the memory limits also removes one of ZUN's few error checks, which
ends up quitting the game if the given maximum amount of conventional RAM
isn't available. While it might be tempting to reserve enough
memory at the beginning of execution and then never check any allocation for
a potential failure, that's exactly where something like TH04's crash
comes from.
This game is also still running on DOS, where such an initial allocation
failure is very unlikely to happen – no one fills close to half of
conventional RAM with TSRs and then tries running one of these games. It
might have been useful to detect systems with less than 640 KiB of
actual, physical RAM, but none of the PC-98 models with that little amount
of memory are fast enough to run these games to begin with. How ironic… a
place where ZUN actually added an error check, and then it's mostly
pointless.
Here's an archive that contains both fix variants, just in case. These were
compiled from the th04_noems_crash_fix
and mem_assign_all
branches, and contain as little code changes as possible. Edit (2022-04-18): For TH04, you probably want to download
the 📝 community choice fix package instead,
which contains this fix along with other workarounds for the Divide
error crashes.
2021-11-29-Memory-limit-fixes.zip
So yeah, quite a complex bug, leaving no time for the TH03 scorefile format
research after all. Next up: Raising prices.
P0165
TH01 decompilation (Missiles, part 1/2 + large boss sprites, part 1/3)
P0166
TH01 decompilation (Large boss sprites, part 2/3)
P0167
TH01 decompilation (Large boss sprites, part 3/3 + Stage initialization + Defeat animation + Route selection)
💰 Funded by:
Ember2528
🏷️ Tags:
OK, TH01 missile bullets. Can we maybe have a well-behaved entity type,
without any weirdness? Just once?
Ehh, kinda. Apart from another 150 bytes wasted on unused structure members,
this code is indeed more on the low end in terms of overall jank. It does
become very obvious why dodging these missiles in the YuugenMagan, Mima, and
Elis fights feels so awful though: An unfair 46×46 pixel hitbox around
Reimu's center pixel, combined with the comeback of
📝 interlaced rendering, this time in every
stage. ZUN probably did this because missiles are the only 16×16 sprite in
TH01 that is blitted to unaligned X positions, which effectively ends up
touching a 32×16 area of VRAM per sprite.
But even if we assume VRAM writes to be the bottleneck here, it would
have been totally possible to render every missile in every frame at roughly
the same amount of CPU time that the original game uses for interlaced
rendering:
Note that all missile sprites only use two colors, white and green.
Instead of naively going with the usual four bitplanes, extract the
pixels drawn in each of the two used colors into their own bitplanes.
master.lib calls this the "tiny format".
Use the GRCG to draw these two bitplanes in the intended white and green
colors, halving the amount of VRAM writes compared to the original
function.
(Not using the .PTN format would have also avoided the inconsistency of
storing the missile sprites in boss-specific sprite slots.)
That's an optimization that would have significantly benefitted the game, in
contrast to all of the fake ones
introduced in later games. Then again, this optimization is
actually something that the later games do, and it might have in fact been
necessary to achieve their higher bullet counts without significant
slowdown.
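If we assume that the two 1bpp color planes have already been extracted into the "tiny format" at load time, the per-frame blitting function could then be as simple as this sketch, reusing the grcg_set() helper from the TCR example above. Again, the names, the byte-aligned 8-pixel sprite width, and green's palette index are all assumptions:

```cpp
/* Draws an 8×[h] two-color missile sprite with two GRCG passes (= two
 * VRAM writes per row) instead of one write per bitplane. */
void missile_put(
	const unsigned char *white, /* 1bpp plane of the white pixels */
	const unsigned char *green, /* 1bpp plane of the green pixels */
	unsigned char far *vram, int h
)
{
	grcg_set(GC_RMW, 15); /* white */
	for(int y = 0; y < h; y++) {
		vram[y * ROW_SIZE] = white[y];
	}
	grcg_set(GC_RMW, 4); /* green, assuming palette index 4 */
	for(int y = 0; y < h; y++) {
		vram[y * ROW_SIZE] = green[y];
	}
	outportb(GRCG_MODE, 0x00); /* GRCG off */
}
```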
After some effectively unused Mima sprite effect code that is so broken that
it's impossible to make sense out of it, we get to the final feature I
wanted to cover for all bosses in parallel before returning to Sariel: The
separate sprite background storage for moving or animated boss sprites in
the Mima, Elis, and Sariel fights. But, uh… why is this necessary to begin
with? Doesn't TH01 already reserve the other VRAM page for backgrounds?
Well, these sprites are quite big, and ZUN didn't want to blit them from
main memory on every frame. After all, TH01 and TH02 had a minimum required
clock speed of 33 MHz, half of the speed required for the later three games.
So, he simply blitted these boss sprites to both VRAM pages, leading
the usual unblitting calls to only remove the other sprites on top of the
boss. However, these bosses themselves want to move across the screen…
and this makes it necessary to save the stage background behind them
in some other way.
Enter .PTN, and its functions to capture a 16×16 or 32×32 square from VRAM
into a sprite slot. No problem with that approach in theory, as the size of
all these bigger sprites is a multiple of 32×32; splitting a larger sprite
into these smaller 32×32 chunks makes the code look just a little bit clumsy
(and, of course, slower).
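In code, that chunked capture is a simple double loop – ptn_capture() here stands in for the actual .PTN slot function, and all names are assumptions:

```cpp
extern void ptn_capture(int slot, int left, int top); /* 32×32 VRAM grab */

/* Captures the background behind a [w]×[h] boss sprite (both multiples
 * of 32) into consecutive .PTN slots, one 32×32 chunk at a time. */
void bg_capture(int left, int top, int w, int h, int first_slot)
{
	for(int y = 0; y < h; y += 32) {
		for(int x = 0; x < w; x += 32) {
			ptn_capture(first_slot++, (left + x), (top + y));
		}
	}
}
```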
But somewhere during the development of Mima's fight, ZUN apparently forgot
that those sprite backgrounds existed. And once Mima's 🚫 casting sprite is
blitted on top of her regular sprite, using just regular sprite
transparency, she ends up with her infamous third arm:
Ironically, there's an unused code path in Mima's unblit function where ZUN
assumes a height of 48 pixels for Mima's animation sprites rather than the
actual 64. This leads to even clumsier .PTN function calls for the bottom
128×16 pixels… Failing to unblit the bottom 16 pixels would have also
yielded that third arm, although it wouldn't have looked as natural. Still
wouldn't say that it was intentional; maybe this casting sprite was just
added pretty late in the game's development?
So, mission accomplished, Sariel unblocked… at 2¼ pushes. That's quite some time left for some smaller stage initialization
code, which bundles a bunch of random function calls in places where they
logically really don't belong. The stage opening animation then adds a bunch
of VRAM inter-page copies that are not only redundant but can't even be
understood without knowing the hidden internal state of the last VRAM page
accessed by previous ZUN code…
In better news though: Turbo C++ 4.0 really doesn't seem to have any
complexity limit on inlining arithmetic expressions, as long as they only
operate on compile-time constants. That's how we get macro-free,
compile-time Shift-JIS to JIS X 0208 conversion of the individual code
points in the 東方★靈異伝 string, in a compiler from 1994. As long as you
don't store any intermediate results in variables, that is…
But wait, there's more! With still ¼ of a push left, I also went for the
boss defeat animation, which includes the route selection after the SinGyoku
fight.
As in all other instances, the 2× scaled font is accomplished by first
rendering the text at regular 1× resolution to the other, invisible VRAM
page, and then scaled from there to the visible one. However, the route
selection is unique in that its scaled text is both drawn transparently on
top of the stage background (not onto a black one), and can also change
colors depending on the selection. It would have been no problem to unblit
and reblit the text by rendering the 1× version to a position on the
invisible VRAM page that isn't covered by the 2× version on the visible one,
but ZUN (needlessly) clears the invisible page before rendering any text.
Instead, he assigned a separate VRAM color for both
the 魔界 and 地獄 options, and only changed the palette value for
these colors to white or gray, depending on the correct selection. This is
another one of the
📝 rare cases where TH01 demonstrates good use of PC-98 hardware,
as the 魔界へ and 地獄へ strings don't need to be reblitted during the selection process, only the Orb "cursor" does.
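In (pseudo-)code, the entire per-selection update then reduces to two palette writes; palette_set() and the color indices are made up, as TH01 actually goes through its own hardware palette wrappers:

```cpp
extern void palette_set(int color, unsigned int rgb444);

#define COL_MAKAI  12 /* hypothetical dedicated VRAM color of 魔界へ */
#define COL_JIGOKU 13 /* hypothetical dedicated VRAM color of 地獄へ */

void route_select_update(int makai_selected)
{
	/* White for the selected option, gray for the other one. No text
	 * needs to be reblitted; only the Orb cursor sprite moves. */
	palette_set(COL_MAKAI,  (makai_selected ? 0xFFF : 0x888));
	palette_set(COL_JIGOKU, (makai_selected ? 0x888 : 0xFFF));
}
```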
Then, why does this still not count as good-code? When
changing palette colors, you kinda need to be aware of everything
else that can possibly be on screen, which colors are used there, and which
aren't and can therefore be used for such an effect without affecting other
sprites. In this case, well… hover over the image below, and notice how
Reimu's hair and the bomb sprites in the HUD light up when Makai is
selected:
This push did end on a high note though, with the generic, non-SinGyoku
version of the defeat animation being an easily parametrizable copy. And
that's how you decompile another 2.58% of TH01 in just slightly over three
pushes.
Now, we're not only ready to decompile Sariel, but also Kikuri, Elis, and
SinGyoku without needing any more detours into non-boss code. Thanks to the
current TH01 funding subscriptions, I can plan to cover most, if not all, of
Sariel in a single push series, but the currently 3 pending pushes probably
won't suffice for Sariel's 8.10% of all remaining code in TH01. We've got
quite a lot of not specifically TH01-related funds in the backlog to pass
the time though.
Due to recent developments, it actually makes quite a lot of sense to take a
break from TH01: spaztron64 has
managed what every Touhou download site so far has failed to do: Bundling
all 5 games onto a single .HDI together with pre-configured PC-98
emulators and a nice boot menu, and hosting the resulting package on a
proper website. While this first release is already quite good (and much
better than my attempt from 2014), there is still a bit of room for
improvement to be gained from specific ReC98 research. Next up,
therefore:
Researching how TH04 and TH05 use EMS memory, together with the cause
behind TH04's crash in Stage 5 when playing as Reimu without an EMS driver
loaded, and
reverse-engineering TH03's score data file format
(YUME.NEM), which hopefully also comes with a way of building a
file that unlocks all characters without any high scores.
P0162
TH01 decompilation (Player control, part 1/3)
P0163
TH01 decompilation (Player control, part 2/3)
P0164
TH01 decompilation (Player control, part 3/3)
💰 Funded by:
Ember2528, Yanga
🏷️ Tags:
No technical obstacles for once! Just pure overcomplicated ZUN code. Unlike
📝 Konngara's main function, the main TH01
player function was every bit as difficult to decompile as you would expect
from its size.
With TH01 using both separate left- and right-facing sprites for all of
Reimu's moves and separate classes for Reimu's 32×32 and 48×*
sprites, we're already off to a bad start. Sure, sprite mirroring is
minimally more involved on PC-98, as the planar
nature of VRAM requires the bits within an 8-pixel byte to also be
mirrored, in addition to writing the sprite bytes from right to left. TH03
uses a 256-byte lookup table for this, generated at runtime by an infamous
micro-optimized and undecompilable ASM algorithm. With TH01's existing
architecture, ZUN would have then needed to write 3 additional blitting
functions. But instead, he chose to waste a total of 26,112 bytes of memory
on pre-mirrored sprites…
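For reference, generating such a table is short and cheap even in C++; here's a sketch of the general technique (this is not TH03's actual micro-optimized ASM):

```cpp
unsigned char byte_mirror[256];

/* Builds a lookup table that maps every 8-pixel VRAM byte to its
 * horizontally flipped counterpart. */
void byte_mirror_init(void)
{
	for(int i = 0; i < 256; i++) {
		unsigned char flipped = 0;
		for(int bit = 0; bit < 8; bit++) {
			if(i & (1 << bit)) {
				flipped |= (0x80 >> bit);
			}
		}
		byte_mirror[i] = flipped;
	}
}
```

Mirrored blitting then just reads a sprite line's bytes in reverse order and looks up each one in byte_mirror[].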
Alright, but surely selecting those sprites from code is no big deal? Just
store the direction Reimu is facing in, and then add some branches to the
rendering code. And there is in fact a variable for Reimu's direction…
during regular arrow-key movement, and another one while shooting and
sliding, and a third as part of the special attack types,
launched out of a slide.
Well, OK, technically, the last two are the same variable. But that's even
worse, because it means that ZUN stores two distinct enums at
the same place in memory: Shooting and sliding uses 1 for left,
2 for right, and 3 for the "invalid" direction of
holding both, while the special attack types indicate the direction in their
lowest bit, with 0 for right and 1 for left. I
decompiled the latter as bitflags, but in ZUN's code, each of the 8
permutations is handled as a distinct type, with copy-pasted and adapted
code… The interpretation of this
two-enum "sub-mode" union variable is controlled
by yet another "mode" variable… and unsurprisingly, two of the bugs in this
function relate to the sub-mode variable being interpreted incorrectly.
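Spelled out in code, the overlap could look somewhat like this; the enum and variable names are mine, not ZUN's:

```cpp
/* Interpretation 1: regular shooting and sliding */
enum shot_direction_t {
	SD_LEFT = 1,
	SD_RIGHT = 2,
	SD_BOTH = 3 /* "invalid": both keys held */
};

/* Interpretation 2: special attacks, with the direction in bit 0 */
enum special_direction_t {
	SPD_RIGHT = 0,
	SPD_LEFT = 1
};

int player_submode; /* stores either enum… */
int player_mode;    /* …depending on the value of this variable */
```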
Also, "rendering code"? This one big function basically consists of separate
unblit→update→render code snippets for every state and direction Reimu can
be in (moving, shooting, swinging, sliding, special-attacking, and bombing),
pasted together into a tangled mess of nested if(…) statements.
While a lot of the code is copy-pasted, there are still a number of
inconsistencies that defeat the point of my usual refactoring treatment.
After all, with a total of 85 conditional branches, anything more than I did
would have just obscured the control flow too badly, making it even harder
to understand what's going on.
In the end, I spotted a total of 8 bugs in this function, all of which leave
Reimu invisible for one or more frames:
2 frames after all special attacks
2 frames after swing attacks, and
4 frames before swing attacks
Thanks to the last one, Reimu's first swing animation frame is never
actually rendered. So whenever someone complains about TH01 sprite
flickering on an emulator: That emulator is accurate, it's the game that's
poorly written.
And guess what, this function doesn't even contain everything you'd
associate with per-frame player behavior. While it does
handle Yin-Yang Orb repulsion as part of slides and special attacks, it does
not handle the actual player/Orb collision that results in lives being lost.
The funny thing about this: These two things are done in the same function…
Therefore, the life loss animation is also part of another function. This is
where we find the final glitch in this 3-push series: Before the 16-frame
shake, this function only unblits a 32×32 area around Reimu's center point,
even though it's possible to lose a life during the non-deflecting part of a
48×48-pixel animation. In that case, the extra pixels will just stay on
screen during the shake. They are unblitted afterwards though, which
suggests that ZUN was at least somewhat aware of the issue?
Finally, the chance to see the alternate life loss sprite is exactly ⅛.
As for any new insights into game mechanics… you know what? I'm just not
going to write anything, and leave you with this flowchart instead. Here's
the definitive guide on how to control Reimu in TH01 we've been waiting for
24 years:
Pellets are deflected during all gray
states. Not shown is the obvious "double-tap Z and X" transition from
all non-(#1) states to the Bomb state, but that would have made this
diagram even more unwieldy than it turned out. And yes, you can shoot
twice as fast while moving left or right.
While I'm at it, here are two more animations from MIKO.PTN
which aren't referenced by any code:
With that monster of a function taken care of, we've only got boss sprite animation as the final blocker of uninterrupted Sariel progress. Due to some unfavorable code layout in the Mima segment though, I'll need to spend a bit more time with some of the features used there. Next up: The missile bullets used in the Mima and YuugenMagan fights.
P0160
TH01 decompilation (Pellet speed modification + HUD, part 3 (Stage timer) + Particle system)
P0161
Research (Turbo C++ 4.0J's jump optimization bug after SCOPY@)
💰 Funded by:
Yanga, [Anonymous]
🏷️ Tags:
Nothing really noteworthy in TH01's stage timer code, just yet another HUD
element that is needlessly drawn into VRAM. Sure, ZUN applies his custom
boldfacing effect on top of the glyphs retrieved from font ROM, but he could
have easily installed those modified glyphs as gaiji.
Well, OK, halfwidth gaiji aren't exactly well documented, and sometimes not
even correctly emulated
📝 due to the same PC-98 hardware oddity I was researching last month.
I've reserved two of the pending anonymous "anything" pushes for the
conclusion of this research, just in case you were wondering why the
outstanding workload is now lower after the two delivered here.
And since it doesn't seem to be clearly documented elsewhere: Every 2 ticks
on the stage timer correspond to 4 frames.
So, TH01 rank pellet speed. The resident pellet speed value is a
factor ranging from a minimum of -0.375 up to a maximum of 0.5 (pixels per
frame), multiplied with the difficulty-adjusted base speed for each pellet
and added on top of that same speed. This multiplier is modified
every time the stage timer reaches 0 and
HARRY UP is shown (+0.05)
for every score-based extra life granted below the maximum number of
lives (+0.025)
every time a bomb is used (+0.025)
on every frame in which the rand value (shown in debug
mode) is evenly divisible by
(1800 - (lives × 200) - (bombs × 50)) (+0.025)
every time Reimu got hit (set to 0 if higher, then -0.05)
when using a continue (set to -0.05 if higher, then -0.125)
Apparently, ZUN noted that these deltas couldn't be losslessly stored in an
IEEE 754 floating-point variable, and therefore didn't store the pellet
speed factor exactly in a way that would correspond to its gameplay effect.
Instead, it's stored similar to Q12.4 subpixels: as a simple integer,
pre-multiplied by 40. This results in a raw range of -15 to 20, which is
what the undecompiled ASM calls still use. When spawning a new pellet, its
base speed is first multiplied by that factor, and then divided by 40 again.
This is actually quite smart: The calculation doesn't need to be aware of
either Q12.4 or the ×40 format, as
((Q12.4 × (factor × 40)) / 40) still comes out as a correctly scaled
Q12.4 subpixel even if all numbers are integers. The only limiting issue
here would be the potential overflow of the 16-bit multiplication at
unadjusted base speeds of more than 50 pixels per frame, but that'd be
seriously unplayable.
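As a sketch with hypothetical names, the whole spawn-time calculation thus boils down to:

```cpp
typedef int subpixel_t; /* Q12.4, i.e. 16 units per pixel */

int rank_factor40 = 0; /* raw range [-15, +20], i.e. (factor × 40) */

/* Returns the rank-adjusted speed for a new pellet. Neither the Q12.4
 * nor the ×40 scale needs any explicit conversion here; only the
 * 16-bit multiplication limits the usable base speed range. */
subpixel_t pellet_speed(subpixel_t base)
{
	return (base + ((base * rank_factor40) / 40));
}
```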
So yeah, pellet speed modifications are indeed gradual, and don't just fall
into the coarse three "high, normal, and low" categories.
That's ⅝ of P0160 done, and the continue and pause menus would make good
candidates to fill up the remaining ⅜… except that it seemed impossible to
figure out the correct compiler options for this code?
The issues centered around the two effects of Turbo C++ 4.0J's
-O switch:
Optimizing jump instructions: merging duplicate successive jumps into a
single one, and merging duplicated instructions at the end of conditional
branches into a single place under a single branch, which the other branches
then jump to
Compressing ADD SP and POP CX
stack-clearing instructions after multiple successive CALLs to
__cdecl functions into a single ADD SP with the
combined parameter stack size of all function calls
But how can the ASM for these functions exhibit #1 but not #2? How
can it be seemingly optimized and unoptimized at the same time? The
only option that gets somewhat close would be -O- -y, which
emits line number information into the .OBJ files for debugging. This
combination provides its own kind of #1, but these functions clearly need
the real deal.
The research into this issue ended up consuming a full push on its own.
In the end, this solution turned out to be completely unrelated to compiler
options, and instead came from the effects of a compiler bug in a totally
different place. Initializing a local structure instance or array like
const uint4_t flash_colors[3] = { 3, 4, 5 };
always emits the { 3, 4, 5 } array into the program's data
segment, and then generates a call to the internal SCOPY@
function which copies this data array to the local variable on the stack.
And as soon as this SCOPY@ call is emitted, the -O
optimization #1 is disabled for the entire rest of the translation
unit?!
So, any code segment with an SCOPY@ call followed by
__cdecl functions must strictly be decompiled from top to
bottom, mirroring the original layout of translation units. That means no
TH01 continue and pause menus until we've decompiled the bomb
animation, which contains such an SCOPY@ call. 😕
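The game's own translation units obviously have to keep their layout, but newly written code can sidestep the bug: giving the aggregate static storage duration removes the stack copy, and with it the SCOPY@ call. A sketch, with a made-up function name:

```cpp
typedef unsigned char uint4_t; /* 4-bit value, as in the example above */

void flash_animation(void)
{
	/* Automatic storage: copied onto the stack at runtime via SCOPY@,
	 * which disables jump optimization for the rest of the translation
	 * unit: */
	const uint4_t flash_colors[3] = { 3, 4, 5 };

	/* Static storage: placed into the data segment and referenced
	 * directly – no copy, no SCOPY@, no bug: */
	static const uint4_t flash_colors_static[3] = { 3, 4, 5 };
}
```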
Luckily, TH01 is the only game where this bug leads to significant
restrictions in decompilation order, as later games predominantly use the
pascal calling convention, in which each function itself clears
its stack as part of its RET instruction.
What now, then? With 51% of REIIDEN.EXE decompiled, we're
slowly running out of small features that can be decompiled within ⅜ of a
push. Good that I haven't been looking a lot into OP.EXE and
FUUIN.EXE, which pretty much only got easy pieces of
code left to do. Maybe I'll end up finishing their decompilations entirely
within these smaller gaps? I still ended up finding one more small
piece in REIIDEN.EXE though: The particle system, seen in the
Mima fight.
I like how everything about this animation is contained within a single
function that is called once per frame, but ZUN could have really
consolidated the spawning code for new particles a bit. In Mima's fight,
particles are only spawned from the top and right edges of the screen, but
the function in fact contains unused code for all other 7 possible
directions, written in quite a bloated manner. This wouldn't feel quite as
unused if ZUN had used an angle parameter instead…
Also, why unnecessarily waste another 40 bytes of
the BSS segment?
But wait, what's going on with the very first spawned particle that just
stops near the bottom edge of the screen in the video above? Well, even in
such a simple and self-contained function, ZUN managed to include an
off-by-one error. This one then results in an out-of-bounds array access on
the 80th frame, where the code attempts to spawn a 41st
particle. If the first particle was unlucky to be both slow enough and
spawned away far enough from the bottom and right edges, the spawning code
will then kill it off before its unblitting code gets to run, leaving its
pixel on the screen until something else overlaps it and causes it to be
unblitted.
Which, during regular gameplay, will quickly happen with the Orb, all the
pellets flying around, and your own player movement. Also, the RNG can
easily spawn this particle at a position and velocity that causes it to
leave the screen more quickly. Kind of impressive how ZUN laid out the
structure
of arrays in a way that ensured practically no effect of this bug on the
game; this glitch could have easily happened every 80 frames instead.
He almost got close to all of these bugs canceling each other out here!
Next up: The player control functions, including the second-biggest function
in all of PC-98 Touhou.
P0158
TH01 decompilation (Items, part 1/2)
P0159
TH01 decompilation (Items, part 2/2 + Cards)
💰 Funded by:
Yanga
🏷️ Tags:
Of course, Sariel's potentially bloated and copy-pasted code is blocked by
even more definitely bloated and copy-pasted code. It's TH01, what did you
expect?
But even then, TH01's item code is on a new level of software architecture
ridiculousness. First, ZUN uses distinct arrays for both types of items,
with their own caps of 4 for bomb items, and 10 for point items. Since that
obviously makes any type-related switch statement redundant,
he also used distinct functions for both types, with copy-pasted
boilerplate code. The main per-item update and render function is
shared though… and takes every single accessed member of the item
structure as its own reference parameter. Like, why, you have a
structure, right there?! That's one way to really practice the C++ language
concept of passing arbitrary structure fields by mutable reference…
To complete the unwarranted grand generic design of this function, it calls
back into per-type collision detection, drop, and collect functions with
another three reference parameters. Yeah, why use C++ virtual methods when
you can implement effectively the same polymorphism by hand? Oh, and the
coordinate clamping code in one of these callbacks could
only possibly have come from nested min() and
max() preprocessor macros. And that's how you extend such
dead-simple functionality to 1¼ pushes…
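To make that concrete, the shape of the interface looks roughly like this – a hypothetical reconstruction for illustration, not ZUN's actual declarations:

struct item_t {
    int x, y, velocity_y;
    int flag;
};

// Every accessed member as its own mutable reference, plus three more
// reference-taking callbacks as hand-rolled "virtual methods"…
void item_update_and_render(
    int &x, int &y, int &velocity_y, int &flag,
    bool (*on_collision)(int &x, int &y, int &flag),
    void (*on_drop)(int &x, int &y),
    void (*on_collect)(int &x, int &y, int &flag)
);

// …instead of the single parameter you'd expect:
void item_update_and_render(item_t &item);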
Amidst all this jank, we've at least got a sensible item↔player hitbox this
time, with 24 pixels around Reimu's center point to the left and right, and
extending from 24 pixels above Reimu down to the bottom of the playfield.
It absolutely didn't look like that from the initial naive decompilation
though. Changing entity coordinates from left/top to center was one of the
better lessons from TH01 that ZUN implemented in later games, it really
makes collision detection code much more intuitive to grasp.
The card flip code is where we find out some slightly more interesting
aspects about item drops in this game, and how they're controlled by a
hidden cycle variable:
At the beginning of every 5-stage scene, this variable is set to a
random value in the [0..59] range
Point items are dropped at every multiple of 10
Every card flip adds 1 to its value after this mod 10
check
At a value of 140, the point item is replaced with a bomb item, but only
if no damaging bomb is active. In any case, its value is then reset to
1.
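Or, as a sketch in code – all names are hypothetical, and the exact interleaving of the reset and the increment is glossed over:

extern int bomb_damaging; // nonzero while a damaging bomb is active
extern void drop_point_item(void);
extern void drop_bomb_item(void);

int cycle; // set to (rand() % 60) at the beginning of every 5-stage scene

void items_drop_on_card_flip(void)
{
    if((cycle % 10) == 0) {
        if((cycle == 140) && !bomb_damaging) {
            drop_bomb_item();
        } else {
            drop_point_item();
        }
        if(cycle == 140) {
            cycle = 1; // "in any case"
        }
    }
    cycle++; // added after the mod 10 check
}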
Then again, score players largely ignore point items anyway, as card
combos simply have a much bigger effect on the score. With this, I should
have RE'd all information necessary to construct a tool-assisted score run,
though? Edit: Turns out that 1) point items are becoming
increasingly important in score runs, and 2) Pearl already did a TAS some
months ago. Thanks to
spaztron64 for the info!
The Orb↔card hitbox also makes perfect sense, with 24 pixels around
the center point of a card in every direction.
The rest of the code confirms the
card
flip score formula documented on Touhou Wiki, as well as the way cards
are flipped by bombs: During each of the 90 "damaging" frames of the
140-frame bomb animation, there is a 75% chance to flip the card at the
[bomb_frame % total_card_count_in_stage] array index. Since
stages can only have up to 50 cards
📝 thanks to a bug, even a 75% chance is high
enough to typically flip most cards during a bomb. Each of these flips
still only removes a single card HP, just like after a regular collision
with the Orb.
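Sketched in code, with hypothetical names again:

#include <stdlib.h>

extern bool frame_is_damaging(int bomb_frame); // true on 90 of the 140 frames
extern void card_flip_once(int card_id); // removes a single card HP

void bomb_update_card_flips(int bomb_frame, int total_card_count_in_stage)
{
    if(frame_is_damaging(bomb_frame) && ((rand() % 4) != 0)) { // 75% chance
        card_flip_once(bomb_frame % total_card_count_in_stage);
    }
}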
Also, why are the card score popups rendered before the cards
themselves? That's two needless frames of flicker during that 25-frame
animation. Not all too noticeable, but still.
And that's over 50% of REIIDEN.EXE decompiled as well! Next
up: More HUD update and rendering code… with a direct dependency on
rank pellet speed modifications?
P0157
TH01 decompilation (16× TRAM letters: 東方★靈異伝, STAGE #, and HARRY UP)
💰 Funded by:
Yanga
🏷️ Tags:
Yup, there still are features that can be fully covered in a single push
and don't lead to sprawling blog posts. The giant
STAGE number and
HARRY UP messages, as well as the
flashing transparent 東方★靈異伝 at the beginning of each scene are drawn
by retrieving the glyphs for each letter from font ROM, and then "blitting"
them to text RAM by placing a colored fullwidth 16×16 square at every pixel
that is set in the font bitmap.
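The core of the technique fits into a few lines. A sketch, with the actual text RAM access hidden behind a hypothetical helper function:

extern void tram_put_fullwidth_space(int x, int y, int atrb); // hypothetical

// Blit one 16×16 font ROM glyph to text RAM at 16× scale: every set pixel
// becomes one colored fullwidth space in the corresponding TRAM cell.
void tram_putglyph_16x(const unsigned short glyph[16], int left, int top, int atrb)
{
    for(int y = 0; y < 16; y++) {
        for(int x = 0; x < 16; x++) {
            if(glyph[y] & (0x8000 >> x)) {
                tram_put_fullwidth_space(left + x, top + y, atrb);
            }
        }
    }
}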
And 📝 once again, ZUN's code there matches
the mediocre example code for the related hardware interrupt from the
PC-9801 Programmers' Bible. It's not 100% copied this time, but
definitely inspired by the code on page 121. Therefore, we can conclude
that these letters are probably only displayed as these 16× scaled glyphs
because that book had code on how to achieve this effect.
ZUN "improved" on the example code by implementing a write-only cursor over
the entire text RAM that fills every 16×16 cell with a differently colored
space character, fully clearing the text RAM as a side effect. For once, he
even removed some redundancy here by using helper functions! It's all still
far from good-code though. For example, there's a
function for filling 5 rows worth of cells, which he uses for both the top
and bottom margin of these letters. But since the bottom margin starts at
the 22nd line, the code writes past the 25th line and into the second TRAM
page. Good thing this page isn't used by either the hardware or the game.
These cursor functions can actually write any fullwidth JIS code point to
text RAM… and seem to do that in a rather simplified way, because shouldn't
you set the most significant bit to indicate the right half of a fullwidth
character? That's what's written in the same book that ZUN copied all
functions out of, after all. 🤔 Researching this led me down quite the
rabbit hole, where I found an oddity in PC-98 text RAM rendering that no
single one of the widely-used PC-98 emulators gets completely right. I'm
almost done with the 2-push research into this issue, which will
include fixes for DOSBox-X and Neko Project II. The only thing I'm missing
to get these fully accurate is a screenshot of the output created by this binary, on any PC-98 model made by EPSON:
2021-09-12-jist0x28.com.zip
That's the reason why this push was rather delayed. Thanks in advance to
anyone who'd like to help with this!
In maybe more disappointing news: Sariel is going to be delayed for a while
longer. 😕 The player- and HUD-related functions, which previously delayed
further progress there, turned out to call a lot of not yet RE'd functions
themselves. Seems as if we're doing most of the
card-flipping code second, after all? Next up: Point and bomb items, which at least are a significant step in terms of position
independence.
P0153
TH01 decompilation (Konngara, part 3/5.5: Patterns 2-4)
P0154
TH01 decompilation (Konngara, part 4/5.5: Patterns 5-8)
P0155
TH01 decompilation (Konngara, part 5/5.5: Patterns 9-12)
P0156
TH01 decompilation (Konngara, part 5.5/5.5: Main function + Sariel entrance animation + HARRY UP pellets)
💰 Funded by:
Ember2528
🏷️ Tags:
📝 7 pushes to get Konngara done, according to my previous estimate?
Well, how about being twice as fast, and getting the entire boss fight done
in 3.5 pushes instead? So much copy-pasted code in there… without any
flashy unused content, apart from four calculations with an unclear purpose. And the three strings "ANGEL", "OF",
"DEATH", which were probably meant to be rendered using those giant
upscaled font ROM glyphs that also display the
STAGE # and
HARRY UP strings? Those three strings
are also part of Sariel's code, though.
On to the remaining 11 patterns then! Konngara's homing snakes, shown in
the video above, are one of the more notorious parts of this battle. They
occur in two patterns – one with two snakes and one with four – with
all of the spawn, aim, update, and render code copy-pasted between
the two. Three gameplay-related discoveries
here:
The homing target is locked once the Y position of a snake's white head
diamond is below 300 pixels.
That diamond is also the only one with collision detection…
…but comes with a gigantic 30×30 pixel hitbox, reduced to 30×20 while
Reimu is sliding. For comparison: Reimu's regular sprite is 32×32 pixels,
including transparent areas. This time, there is a clearly defined
hitbox around Reimu's center pixel that the single top-left pixel can
collide with. No imagination necessary, which people apparently
📝 still prefer over actually understanding an
algorithm… Then again, this hitbox is still not intuitive at all,
because…
… the exact collision pixel, marked in
red, is part of the diamond sprite's
transparent background
This was followed by really weird aiming code for the "sprayed
pellets from cup" pattern… which can only possibly have been done on
purpose, but is sort of mitigated by the spraying motion anyway.
After a bunch of long if(…) {…} else if(…) {…} else if(…)
{…} chains, which remain quite popular in certain corners of
the game dev scene to this day, we've got the three sword slash
patterns as the final notable ones. At first, it seemed as if ZUN just
improvised those raw number constants involved in the pellet spawner's
movement calculations to describe some sort of path that vaguely
resembles the sword slash. But once I tried to express these numbers in
terms of the slash animation's keyframes, it all worked out perfectly, and
resulted in this:
Yup, the spawner always takes an exact path along this triangle. Sometimes,
I wonder whether I should just rush this project and not bother with
naming these repeated number literals. Then I gain insights like these, and
it's all worth it.
Finally, we've got Konngara's main function, which coordinates the entire
fight. Third-longest function in both TH01 and all of PC-98 Touhou, only
behind some player-related stuff and YuugenMagan's gigantic main function…
and it's even more of a copy-pasta, making it feel not nearly as long as it
is. Key insights there:
The fight consists of 7 phases, with the entire defeat sequence being
part of the if(boss_phase == 7) {…}
branch.
The three even-numbered phases, however, only light up the Siddhaṃ seed
syllables and then progress to the next phase.
Odd-numbered phases are completed after passing an HP threshold or after
seeing a predetermined number of patterns, whichever happens first. No
possibility of skipping anything there.
Patterns are chosen randomly, but the available pool of patterns
is limited to 3 specific "easier" patterns in phases 1 and 5, and 4 patterns
in phase 3. Once Phase 7 is reached at 9 HP remaining, all 12 patterns can
potentially appear. Fittingly, that's also the point where the red section
of the HP bar starts.
Every time a pattern is chosen, the code only makes a maximum of two
attempts at picking a pattern that's different from the one that
Konngara just completed. Therefore, it seems entirely possible to see
the same pattern twice; see the sketch after this list. Calculating an
actual seed to prove that is out of the scope of this project, though.
Due to what looks like a copy-paste mistake, the pool for the second
RNG attempt in phases 5 and 7 is reduced to only the first two patterns
of the respective phases? That's already quite some bias right there,
and we haven't even analyzed the RNG in detail yet…
(For anyone interested, it's a
LCG,
using the Borland C/C++ parameters as shown here.)
The difficulty level only affects the speed and firing intervals (and
thus, number) of pellets, as well as the number of lasers in the one pattern
that uses them.
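As promised, a sketch of that selection logic – hypothetical names, ignoring the phase-specific pool bug from the list above, and using the commonly documented Borland C/C++ LCG parameters:

// Borland C/C++'s rand(): an LCG with multiplier 22695477 and increment 1.
static unsigned long rand_seed = 1;
int borland_rand(void)
{
    rand_seed = ((rand_seed * 0x015A4E35UL) + 1);
    return ((rand_seed >> 16) & 0x7FFF);
}

// A maximum of two attempts at avoiding the previous pattern.
int pattern_next(int pool_size, int pattern_prev)
{
    int pick = (borland_rand() % pool_size);
    if(pick == pattern_prev) {
        pick = (borland_rand() % pool_size); // second and final attempt
    }
    return pick; // can still end up equal to pattern_prev
}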
After the 📝 kuji-in defeat sequence, the
fight ends in an attempted double-free of Konngara's image
data. Thankfully, the format-specific
_free() functions defend against such a thing.
And that's it for Konngara! First boss with not a single piece of ASM left,
30 more to go! 🎉 But wait, what about the cause behind the temporary green
discoloration after leaving the Pause menu? I expected to find something on
that as well, but nope, it's nothing in Konngara's code segment. We'll
probably only get to figure that out near the very end of TH01's
decompilation, once we get to the one function that directly calls all of
the boss-specific main functions in a switch statement. Edit (2022-07-17): 📝 Only took until Mima.
So, Sariel next? With half of a push left, I did cover Sariel's first few
initialization functions, but all the sprite unblitting and HUD
manipulation will need some extra attention first. The first one of these
functions is related to the HUD, the stage timer, and the
HARRY UP mode, whose pellet pattern I've
also decompiled now.
All of this brings us past 75% PI in all games, and TH01 to under 30,000
remaining ASM instructions, leaving TH03 as now the most expensive game to
be completely decompiled. Looking forward to how much more TH01's code will
fall apart if you just tap it lightly… Next up: The aforementioned helper
functions related to HARRY UP, drawing the
HUD, and unblitting the other bosses whose sprites are a bit more animated.
…or maybe not that soon, as it would have only wasted time to
untangle the bullet update commits from the rest of the progress. So,
here's all the bullet spawning code in TH04 and TH05 instead. I hope
you're ready for this, there's a lot to talk about!
(For the sake of readability, "bullets" in this blog post refers to the
white 8×8 pellets
and all 16×16 bullets loaded from MIKO16.BFT, nothing else.)
But first, what was going on 📝 in 2020? Spent 4 pushes on the basic types
and constants back then, still ended up confusing a couple of things, and
even getting some wrong. Like how TH05's "bullet slowdown" flag actually
always prevents slowdown and fires bullets at a constant speed
instead. Or how "random spread" is not the
best term to describe that unused bullet group type in TH04.
Or that there are two distinct ways of clearing all bullets on screen,
which deserve different names:
Mechanic #1: Clearing bullets for a custom amount of
time, awarding 1000 points for all bullets alive on the first frame,
and 100 points for all bullets spawned during the clear time.
Mechanic #2: Zapping bullets for a fixed 16 frames,
awarding a semi-exponential and loudly announced Bonus!! for all
bullets alive on the first frame, and preventing new bullets from being
spawned during those 16 frames. In TH04 at least; thanks to a ZUN bug,
zapping got reduced to 1 frame and no animation in TH05…
Bullets are zapped at the end of most midboss and boss phases, and
cleared everywhere else – most notably, during bombs, when losing a
life, or as rewards for extends or a maximized Dream bonus. The
Bonus!! points awarded for zapping bullets are calculated iteratively,
so it's not trivial to give an exact formula for these. For a small number
𝑛 of bullets, it would exactly be 5𝑛³ + 10𝑛² + 15𝑛
points – or, using uth05win's (correct) recursive definition,
Bonus(𝑛) = Bonus(𝑛-1) + 15𝑛² + 5𝑛 + 10.
However, one of the internal step variables is capped at a different number
of points for each difficulty (and game), after which the points only
increase linearly. Hence, "semi-exponential".
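To see how the two definitions agree, just iterate the recursion – this little function reproduces the closed form for any 𝑛 below the cap:

// Iterative Bonus!! calculation, ignoring the per-difficulty cap on the
// internal step variable. Bonus(1) = 30, Bonus(2) = 110, Bonus(3) = 270, …
long bonus(int n)
{
    long ret = 0;
    for(long i = 1; i <= n; i++) {
        ret += ((15 * i * i) + (5 * i) + 10);
    }
    return ret; // = 5n³ + 10n² + 15n
}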
On to TH04's bullet spawn code then, because that one can at least be
decompiled. And immediately, we have to deal with a pointless distinction
between regular bullets, with either a decelerating or constant
velocity, and special bullets, with preset velocity changes during
their lifetime. That preset has to be set somewhere, so why have
separate functions? In TH04, this separation continues even down to the
lowest level of functions, where values are written into the global bullet
array. TH05 merges those two functions into one, but then goes too far and
uses self-modifying code to save a grand total of two local variables…
Luckily, the rest of its actual code is identical to TH04.
Most of the complexity in bullet spawning comes from the (thankfully
shared) helper function that calculates the velocities of the individual
bullets within a group. Both games handle each group type via a large
switch statement, which is where TH04 shows off another Turbo
C++ 4.0 optimization: If the range of case values is too
sparse to be meaningfully expressed in a jump table, it usually generates a
linear search through a second value table. But with the -G
command-line option, it instead generates branching code for a binary
search through the set of cases. 𝑂(log 𝑛) as the worst case for a
switch statement in a C++ compiler from 1994… that's so cool.
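Conceptually, the two lowering strategies correspond to this piece of C++ – a sketch of the emitted control flow, with made-up case values:

void bullet_group_velocities(int type)
{
    // switch(type) { case 1: … case 7: … case 23: … case 40: … }
    //
    // Without -G: a linear search through a table of the four case values.
    // With -G: branching code equivalent to this binary search:
    if(type <= 7) {
        if(type == 1) { /* … */ }
        else if(type == 7) { /* … */ }
    } else {
        if(type == 23) { /* … */ }
        else if(type == 40) { /* … */ }
    }
}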
But still, why are the values in TH04's group type enum all
over the place to begin with?
Unfortunately, this optimization is pretty rare in PC-98 Touhou. It only
shows up here and in a few places in TH02, compared to at least 50
switch value tables.
In all of its micro-optimized pointlessness, TH05's undecompilable version
at least fixes some of TH04's redundancy. While it's still not even
optimal, it's at least a decently written piece of ASM…
if you take the time to understand what's going on there, because it
certainly took quite a bit of that to verify that all of the things which
looked like bugs or quirks were in fact correct. And that's how the code
for this function ended up with 35% comments and blank lines before I could
confidently call it "reverse-engineered"…
Oh well, at least it finally fixes a correctness issue from TH01 and TH04,
where an invalid bullet group type would fill all remaining slots in the
bullet array with identical versions of the first bullet.
Something that both games also share in these functions is an over-reliance
on globals for return values or other local state. The most ridiculous
example here: Tuning the speed of a bullet based on rank actually mutates
the global bullet template… which ZUN then works around by adding a wrapper
function around both regular and special bullet spawning that saves the
base speed before the call, and restores it afterward.
Add another set of wrappers to bypass that exact
tuning, and you've expanded your nice 1-function interface to 4 functions.
Oh, and did I mention that TH04 pointlessly duplicates the first set of
wrapper functions for 3 of the 4 difficulties, which can't even be
explained with "debugging reasons"? That's 10 functions then… and probably
explains why I've procrastinated this feature for so long.
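Here's a sketch of that save-and-restore dance, with hypothetical names:

struct bullet_template_t {
    int speed;
    // …
};
extern bullet_template_t bullet_template;

// Spawning tunes the template's speed based on rank – by mutating it.
extern void bullets_add_regular(void);

// Hence, the wrapper that makes the mutation survivable:
void bullets_add_regular_wrapped(void)
{
    int speed_base = bullet_template.speed; // save…
    bullets_add_regular();
    bullet_template.speed = speed_base; // …and restore
}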
At this point, I also finally stopped decompiling ZUN's original ASM just
for the sake of it. All these small TH05 functions would look horribly
unidiomatic, are identical to their decompiled TH04 counterparts anyway,
except for some unique constant… and, in the case of TH05's rank-based
speed tuning function, actually become undecompilable as soon as we
want to return a C++ class to preserve the semantic meaning of the return
value. Mainly, this is because Turbo C++ does not allow register
pseudo-variables like _AX or _AL to be cast into
class types, even if their size matches. Decompiling that function would
have therefore lowered the quality of the rest of the decompiled code, in
exchange for the additional maintenance and compile-time cost of another
translation unit. Not worth it – and for a TH05 port, you'd already have to
decompile all the rest of the bullet spawning code anyway!
The only thing in there that was still somewhat worth being
decompiled was the pre-spawn clipping and collision detection function. Due
to what's probably a micro-optimization mistake, the TH05 version continues
to spawn a bullet even if it was spawned on top of the player. This might
sound like it has a different effect on gameplay… until you realize that
the player got hit in this case and will either lose a life or deathbomb,
both of which will cause all on-screen bullets to be cleared anyway.
So it's at most a visual glitch.
But while we're at it, can we please stop talking about hitboxes? At least
in the context of TH04 and TH05 bullets. The actual collision detection is
described way better as a kill delta of 8×8 pixels between the
center points of the player and a bullet. You can distribute these pixels
to any combination of bullet and player "hitboxes" that make up 8×8. 4×4
around both the player and bullets? 1×1 for bullets, and 8×8 for the
player? All equally valid… or perhaps none of them, once you keep in mind
that other entity types might have different kill deltas. With that in
mind, the concept of a "hitbox" turns into just a confusing abstraction.
The same is true for the 36×44 graze box delta. For some reason,
this one is not exactly around the center of a bullet, but shifted to the
right by 2 pixels. So, a bullet can be grazed up to 20 pixels right of the
player, but only up to 16 pixels left of the player. uth05win also spotted
this… and rotated the deltas clockwise by 90°?!
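Both deltas, expressed directly in code – the extents are the ones given above, while the exact inclusivity of each comparison is a guess:

// Kill: an 8×8 delta between the center points of player and bullet.
bool bullet_kills(int dx, int dy) // (bullet center) − (player center)
{
    return ((dx >= -4) && (dx < 4) && (dy >= -4) && (dy < 4));
}

// Graze: the 36×44 delta, shifted 2 pixels to the right.
bool bullet_grazes(int dx, int dy)
{
    return ((dx >= -16) && (dx < 20) && (dy >= -22) && (dy < 22));
}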
Which brings us to the bullet updates… for which I still had to
research a decompilation workaround, because
📝 P0148 turned out to not help at all?
Instead, the solution was to lie to the compiler about the true segment
distance of the popup function and declare its signature near
rather than far. This matches ZUN's original code, which saves the
ridiculous overhead of 1 additional far function call/return per frame, and
those precious 2 bytes in the BSS segment that he didn't have to spend on a
segment value.
📝 Another function that didn't have just a
single declaration in a common header file… really,
📝 how were these games even built???
The function itself is among the longer ones in both games. It especially
stands out in the indentation department, with 7 levels at its most
indented point – and that's the minimum of what's possible without
goto. Only two more notable discoveries there:
Bullets are the only entity affected by Slow Mode. If the number of
bullets on screen is ≥ (24 + (difficulty * 8) + rank) in TH04,
or (42 + (difficulty * 8)) in TH05, Slow Mode reduces the frame
rate by 33%, by waiting for one additional VSync event every two frames.
The code also reveals a second tier, with 50% slowdown for a slightly
higher number of bullets, but that conditional branch can never be
executed. (The first tier is sketched in code after this list.)
Bullets must have been grazed in a previous frame before they can
be collided with. (Note how this does not apply to bullets that spawned
on top of the player, as explained earlier!)
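The first tier, sketched with hypothetical names (including the per-game macro):

extern void vsync_wait(int count); // hypothetical

int slow_mode_threshold(int difficulty, int rank)
{
#if defined(GAME_TH04) // hypothetical per-game macro
    return (24 + (difficulty * 8) + rank);
#else // TH05, which ignores rank here
    return (42 + (difficulty * 8));
#endif
}

// 33% slowdown: waiting for one additional VSync event every two frames.
void frame_wait(int bullet_count, int difficulty, int rank, int frame)
{
    int vsyncs = 1;
    if(
        (bullet_count >= slow_mode_threshold(difficulty, rank)) &&
        ((frame % 2) == 0)
    ) {
        vsyncs++;
    }
    vsync_wait(vsyncs);
}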
Whew… When did ReC98 turn into a full-on code review?! 😅 And after all
this, we're still not done with TH04 and TH05 bullets, with all the
special movement types still missing. That should be less than one push
though, once we get to it. Next up: Back to TH01 and Konngara! Now have fun
rewriting the Touhou Wiki Gameplay pages 😛
P0148
TH04/TH05 decompilation (Text popups, gather circle rendering, player position clamping)
💰 Funded by:
[Anonymous]
🏷️ Tags:
Back after taking way too long to get Touhou Patch Center's MediaWiki
update feature-complete… I'm still waiting for more translators to test and
review the new translation interface before delivering and deploying it
all, which will most likely lead to another break from ReC98 within the
next few months. For now though, I'm happy to have mostly addressed the
nagging responsibility I still had after willing that site into existence,
and to be back working on ReC98. 🙂
As announced, the next few pushes will focus on TH04's and TH05's bullet
spawning code, before I get to put all that accumulated TH01 money towards
finishing all of Konngara's code in TH01. For a full
picture of what's happening with bullets, we'd really also like to
have the bullet update function as readable C code though.
Clearing all bullets on the playfield will trigger a Bonus!! popup,
displayed as 📝 gaiji in that proportional
font. Unfortunately, TLINK refused to link the code as soon as I referenced
the function for animating the popups at the top of the playfield? Which
can only mean that we have to decompile that function first…
So, let's turn that piece of technical debt into a full push, and first
decompile another random set of previously reverse-engineered TH04 and TH05
functions. Most of these are stored in a different place within the two
MAIN.EXE binaries, and the tried-and-true method of matching
segment names would therefore have introduced several unnecessary
translation units. So I resorted to a segment splitting technique I should
have started using way earlier: Simply creating new segments with names
derived from their functions, at the exact positions they're needed. All
the new segment start and end directives do bloat the ASM code somewhat,
and certainly contributed to this push barely removing any actual lines of
code. However, what we get in return is total freedom as far as
decompilation order is concerned,
📝 which should be the case for any ReC project, really.
And in the end, all these tiny code segments will cancel out anyway.
If only we could do the same with the data segment…
The popup function happened to be the final one I RE'd before my long break
in the spring of 2019. Back then, I didn't even bother looking into that
64-frame delay between changing popups, and what that meant for the game.
Each of these popups stays on screen for 128 frames, during which, of
course, another popup-worthy event might happen. Handling this cleanly
without removing previous popups too early would involve some sort of event
queue, whose size might even be meaningfully limited to the number of
distinct events that can happen. But still, that'd be a data structure, and
we're not gonna have that! Instead, ZUN
simply keeps two variables for the new and current popup ID. During an
active popup, any change to that ID will only be committed once the current
popup has been shown for at least 64 frames. And during that time,
that new ID can be freely overwritten with a different one, which drops any
previous, undisplayed event. But surely, there won't be more than two
events happening within 63 frames, right?
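In sketch form – hypothetical names, with some initialization details glossed over:

enum { POPUP_NONE = -1 };
int popup_id_new = POPUP_NONE;
int popup_id_cur = POPUP_NONE;
int popup_frames_shown = 0;

void popup_set(int id)
{
    popup_id_new = id; // freely overwrites any undisplayed event
}

void popup_update(void)
{
    if(popup_id_new != popup_id_cur) {
        if((popup_id_cur == POPUP_NONE) || (popup_frames_shown >= 64)) {
            popup_id_cur = popup_id_new; // commit after 64 shown frames
            popup_frames_shown = 0;
        }
    }
    if(popup_id_cur != POPUP_NONE) {
        popup_frames_shown++;
        if(popup_frames_shown >= 128) {
            popup_id_cur = POPUP_NONE; // every popup lasts 128 frames
        }
    }
}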
The rest was fairly uneventful – no newly RE'd functions in this push,
after all – until I reached the widely used helper function for applying
the current vertical scrolling offset to a Y coordinate. Its combination of
a function parameter, the pascal calling convention, and no
stack frame was previously thought to be undecompilable… except that it
isn't, and the decompilation didn't even require any new workarounds to be
developed? Good thing that I already forgot how impossible it was to
decompile the first function I looked at that fell into this category!
Oh well, this discovery wasn't too groundbreaking. Looking back at
all the other functions with that combination only revealed a grand total
of 1 additional one where a decompilation made sense: TH05's version of
snd_kaja_interrupt(), which is now compiled from the same C++
file for all 4 games that use it. And well, looks like some quirks really
remain unnoticed and undocumented until you look at a function for the 11th
time: Its return value is undefined if BGM is inactive – that is, if the
user disabled it, or if no FM board is installed. Not that it matters for
the original code, which never uses this function to retrieve anything from
KAJA's drivers. But people apparently do copy ReC98 code into their own
projects, so it is something to keep in mind.
All in all, nothing quite at jank level in this one, but we were surely grazing that tag. Next up, with that out of the way: The bullet update/step function! Very soon in fact, since I've mostly got it done already.
Didn't quite get to cover background rendering for TH05's Stage 1-5
bosses in this one, as I had to reverse-engineer two more fundamental parts
involved in boss background rendering before.
First, we've got those blocky transitions from stage tiles to bomb and
boss backgrounds, loaded from BB*.BB and ST*.BB,
respectively. These files store 16 frames of animation, with every bit
corresponding to a 16×16 tile on the playfield. With 384×368 pixels to be
covered, that would require 69 bytes per frame. But since that's a very odd
number to work with in micro-optimized ASM, ZUN instead stores 512×512
pixels worth of bits, ending up with a frame size of 128 bytes, and a
per-frame waste of 59 bytes. At least it was
possible to decompile the core blitting function as __fastcall
for once.
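A sketch of how one such frame would be indexed, given the layout described above (the bit order within a byte is a guess):

// Does [frame] of a .BB file cover the 16×16 tile at (tile_x, tile_y)?
// 512×512 pixels = 32×32 tiles = 1024 bits = 128 bytes per frame.
bool bb_covers(const unsigned char *bb, int frame, int tile_x, int tile_y)
{
    const unsigned char *frame_data = (bb + (frame * 128));
    int bit = ((tile_y * 32) + tile_x);
    return ((frame_data[bit / 8] >> (7 - (bit % 8))) & 1);
}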
But wait, TH05 comes with, and loads, a bomb .BB file for every character,
not just for the Reimu and Yuuka bomb transitions you see in-game… 🤔
Restoring those unused stage tile → bomb image transition
animations for Mima and Marisa isn't that trivial without having decompiled
their actual bomb animation functions before, so stay tuned!
Interestingly though, the code leaves out what would look like the most
obvious optimization: All stage tiles are unconditionally redrawn
each frame before they're erased again with the 16×16 blocks, no matter if
they weren't covered by such a block in the previous frame, or are
going to be covered by such a block in this frame. The same is true
for the static bomb and boss background images, where ZUN simply didn't
write a .CDG blitting function that takes the dirty tile array into
account. If VRAM writes on PC-98 really were as slow as the games'
README.TXT files claim them to be, shouldn't all the
optimization work have gone towards minimizing them?
Oh well, it's not like I have any idea what I'm talking about here. I'd
better stop talking about anything relating to VRAM performance on PC-98…
Second, it finally was time to solve the long-standing confusion about all
those callbacks that are supposed to render the playfield background. Given
the aforementioned static bomb background images, ZUN chose to make this
needlessly complicated. And so, we have two callback function
pointers: One during bomb animations, one outside of bomb
animations, and each boss update function is responsible for keeping the
former in sync with the latter.
Other than that, this was one of the smoothest pushes we've had in a while;
the hardest parts of boss background rendering all were part of
📝 the last push. Once you figured out that
ZUN does indeed dynamically change hardware color #0 based on the current
boss phase, the remaining one function for Shinki, and all of EX-Alice's
background rendering becomes very straightforward and understandable.
Meanwhile, -Tom- told me about his plans to publicly
release 📝 his TH05 scripting toolkit once
TH05's MAIN.EXE hits around 50% RE! That pretty much
defines what the next bunch of generic TH05 pushes will go towards:
bullets, shared boss code, and one
full, concrete boss script to demonstrate how it's all combined. Next up,
therefore: TH04's bullet firing code…? Yes, TH04's. I want to see what I'm
doing before I tackle the undecompilable mess that is TH05's bullet firing
code, and you all probably want readable code for that feature as
well. Turns out it's also the perfect place for Blue Bolt's
pending contributions.
Y'know, I kinda prefer the pending crowdfunded workload to stay more near
the middle of the cap, rather than being sold out all the time. So to reach
this point more quickly, let's do the most relaxing thing that can be
easily done in TH05 right now: The boss backgrounds, starting with Shinki's,
📝 now that we've got the time to look at it in detail.
… Oh come on, more things that are borderline undecompilable, and
require new workarounds to be developed? Yup, Borland C++ always optimizes
any comparison of a register with a literal 0 to OR reg, reg,
no matter how many calculations and inlined function calls you replace the
0 with. Shinki's background particle rendering function contains a
CMP AX, 0 instruction though… so yeah,
📝 yet another piece of custom ASM that's worse
than what Turbo C++ 4.0J would have generated if ZUN had just written
readable C. This was probably motivated by ZUN insisting that his modified
master.lib function for blitting particles takes its X and Y parameters as
registers. If he had just used the __fastcall convention, he
also would have got the sprite ID passed as a register. 🤷
So, we really don't want to be forced into inline assembly just
because of the third comparison in the otherwise perfectly decompilable
four-comparison if() expression that prevents invisible
particles from being drawn. The workaround: Comparing to a pointer
instead, which only the linker gets to resolve to the actual value of 0.
This way, the compiler has to make room for
any 16-bit literal, and can't optimize anything.
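The workaround in sketch form – the symbol name is hypothetical, and its actual definition would live in an .ASM file where it can be placed at offset 0:

extern unsigned char linker_zero; // resolved to offset 0 at link time

bool particle_is_visible(unsigned int coord)
{
    // The compiler must assume an arbitrary 16-bit constant here, and
    // therefore emits a real CMP instead of OR reg, reg.
    return (coord != (unsigned int)&linker_zero);
}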
And then we go straight from micro-optimization to
waste, with all the duplication in the code that
animates all those particles together with the zooming and spinning lines.
This push decompiled 1.31% of all code in TH05, and thanks to alignment,
we're still missing Shinki's high-level background rendering function that
calls all the subfunctions I decompiled here.
With all the manipulated state involved here, it's not at all trivial to
see how this code produces what you see in-game. Like:
If all lines have the same Y velocity, how do the other three lines in
background type B get pushed down into this vertical formation while the
top one stays still? (Answer: This velocity is only applied to the top
line, the other lines are only pushed based on some delta.)
How can this delta be calculated based on the distance of the top line
with its supposed target point around Shinki's wings? (Answer: The velocity
is never set to 0, so the top line overshoots this target point in every
frame. After calculating the delta, the top line itself is pushed down as
well, canceling out the movement.)
Why don't they get pushed down infinitely, but stop eventually?
(Answer: We only see four lines out of 20, at indices #0, #6, #12, and
#18. In each frame, lines [0..17] are copied to lines [1..18], before
anything gets moved. The invisible lines are pushed down based on the delta
as well, which defines a distance between the visible lines of (velocity *
array gap). And since the velocity is capped at -14 pixels per frame, this
also means a maximum distance of 84 pixels between the midpoints of each
line.)
And why are the lines moving back up when switching to background type
C, before moving down? (Answer: Because type C increases the
velocity rather than decreasing it. Therefore, it relies on the previous
velocity state from type B to show a gapless animation.)
So yeah, it's a nice-looking effect, just very hard to understand. 😵
With the amount of effort I'm putting into this project, I typically
gravitate towards more descriptive function names. Here, however,
uth05win's simple and seemingly tiny-brained "background type A/B/C/D" was
quite a smart choice. It clearly defines the sequence in which these
animations are intended to be shown, and as we've seen with point 4
from the list above, that does indeed matter.
Next up: At least EX-Alice's background animations, and probably also the
high-level parts of the background rendering for all the other TH05 bosses.
P0143
Website (Progress number caching)
P0144
Website (Blog tag system, part 1: Manual and automatic tag assignment, blog filtering, design)
P0145
Website (Blog tag system, part 2: Combining tags, per-tag descriptions)
Who said working on the website was "fun"? That code is a mess.
This right here is the first time I seriously
wrote a website from (almost) scratch. Its main job is to parse over a Git
repository and calculate numbers, so any additional bulky frameworks would
only be in the way, and probably need to be run on some sort of wobbly,
unmaintainable "stack" anyway, right? 😛
📝 As with the main project though, I'm only
beginning to figure out the best structure for this, and these new features
prompted quite a lot of upfront refactoring…
Before I start ranting though, let's quickly summarize the most visible
change, the new tag system for this blog!
Yes, I manually went through every one of the 82 posts I've written so
far, and assigned labels to them.
The per-project (rec98 and
website) and per-game (th01, th02, th03, th04, th05)
tags are automatically generated from the
database and the Git commit history, respectively. That might have
left us with a fair bit of category clutter, as any single change
to a tiny aspect is enough for a blog post to be tagged with an
otherwise unrelated game. For now, it doesn't seem to be too much of
an issue though.
Filtering already works for an arbitrary number of tags. Right now,
these are always combined with AND – no arbitrary boolean expressions for tag filtering yet.
Adding filters simply works by adding components to the URL path:
https://rec98.nmlgc.net/blog/tag/tag1/tag2/tag3/… and so
on.
Hovering over any tag shows a brief description of what that tag is
about. Some of the terms really needed a definition, so I just added one for
all of them. Hope you all enjoy them!
These descriptions are also shown on the new
tag overview page, which now kind of doubles as a
glossary.
Finally, the order page now shows the exact number of pushes a contribution
will fund – no more manual divisions required.
Shoutout to the one email I received, which pointed out this potential
improvement!
As for the "invisible" changes: The one main feature of this website, the
aforementioned calculation of the progress metrics, also turned out as its
biggest annoyance over the years. It takes a little while to parse all the
big .ASM files in the source tree, once for every push that can affect the
average number of removed instructions and unlabeled addresses. And without
a cache, we've had to do that every time we re-launch the app server
process.
Fundamentally, this is – you might have guessed it – a dependency tracking
problem, with two inputs: the .ASM files from the ReC98 repo, and the
Golang code that calculates the instruction and PI numbers. Sure, the code
has been pretty stable, but what if we do end up extending it one day? I've
always disliked manually specified version numbers for use cases like this
one, where the problem at hand could be exactly solved with a hashing
function, without being prone to human error.
(Sidenote: That's why I never actively supported thcrap mods that affected
gameplay while I was still working on that project. We still want to be
able to save and share replays made on modded games, but I do not
want to subject users to the unacceptable burden of manually remembering
which version of which patch stack they've recorded a given replay with.
So, we'd somehow need to calculate a hash of everything that defines the
gameplay, exclude the things that don't, and only show
replays that were recorded on the hash that matches the currently running
patch stack. Well, turns out that True Touhou Fans™ quite enjoy watching
the games get broken in every possible way. That's the way ZUN intended the
games to be experienced, after all. Otherwise, he'd be constantly
maintaining the games and shipping bugfix patches… 🤷)
Now, why haven't I been caching the progress numbers all along? Well,
parallelizing that parsing process onto all available CPU cores seemed
enough in 2019 when this site launched. Back then, the estimates were
calculated from slightly over 10 million lines of ASM, which took about 7
seconds to be parsed on my mid-range dev system.
Fast forward to P0142 though, and we have to parse 34.3 million lines of
ASM, which takes about 26 seconds on my dev system. That would have only
got worse with every new delivery, especially since this production server
doesn't have as many cores.
I was thinking about a "doing less" approach for a while: Parsing only the
files that had changed between the start and end commit of a push, and
keeping those deltas across push boundaries. However, that turned out to be
slightly more complex than the few hours I wanted to spend on it.
And who knows how well that would have scaled. We've still got a few
hundred pushes left to go before we're done here, after all.
So with the tag system, as always, taking longer and consuming more pushes
than I had planned, the time had come to finally address the underlying
dependency tracking problem.
Initially, this sounded like a nail that was tailor-made for
📝 my favorite hammer, Tup: Move the parser
to a separate binary, gather the list of all commits via git
rev-list, and run that parser binary on every one of the commits
returned. That should end up correctly tracking the relevant parts of
.git/ and the new binary as inputs, and cause the commits to
be re-parsed if the parser binary changes, right? Too bad that Tup both
refuses to track
anything inside .git/, and can't track a Golang binary
either, due to all of the compiler's unpredictable outputs into its build
cache. But can't we at least turn off–
> The build cache is now required as a step toward eliminating $GOPATH/pkg.
— Go 1.12 release notes
Oh, wonderful. Hey, I always liked $GOPATH! 🙁
But sure, Golang is too smart anyway to require an external build system.
The compiler's
build
ID is exactly what we need to correctly invalidate the progress number
cache. Surely there is a way to retrieve the build ID for any package that
makes up a binary at runtime via some kind of reflection, right? Right? …Of
course not, in the great Unix tradition, this functionality is only
available as a CLI tool that prints its result to stdout.
🙄
But sure, no problem, let's just exec() a separate process on
the parser's library package file… oh wait, such a thing doesn't exist
anymore, unless you manually install the package. This would
have added another complication to the build process, and you'd
still have to manually locate the package file, with its version-specific
directory name. That might have worked out in the end, but figuring
all this out would have probably gone way beyond the budget.
OK, but who cares about packages? We just care about one single file here,
anyway. Didn't they put the official Golang source code parser into the
standard library? Maybe that can give us something close to the
build ID, by hashing the abstract syntax tree of that file. Well, for
starters, one does not simply serialize the returned AST. At least
into Golang's own, most "native" Gob
format, which requires all types from the go/ast package
to be manually registered first.
That leaves
ast.Fprint() as the
only thing close to a ready-made serialization function… and guess what,
that one suffers from Golang's typical non-deterministic order when
rendering any map to a string. 🤦
Guess there's no way around the simplest, most stupid solution:
calculating a cryptographically secure hash over the ASM parser file. 😶
It's not like we frequently change comments in this file, but still, this
could have been so much nicer.
Oh well, at least I did get that issue resolved now, in an
acceptable way. If you ever happened to see this website rebuilding: That
should now be a matter of seconds, rather than minutes. Next up: Shinki's
background animations!
P0140
Research (PC-98 DOS graph mode, with implementation into DOSBox-X)
P0141
TH01 decompilation (Konngara, part 1/5.5: Entrance animation)
P0142
TH01 decompilation (Konngara, part 2/5.5: Rendering, pattern 1)
💰 Funded by:
[Anonymous], rosenrose, Yanga
🏷️ Tags:
Alright, onto Konngara! Let's quickly move the escape sequences used later
in the battle to C land, and then we can immediately decompile the loading
and entrance animation function together with its filenames. Might as well
reverse-engineer those escape sequences while I'm at it, though – even if
they aren't implemented in DOSBox-X, they're well documented in all those
Japanese PDFs, so this should be no big deal…
…wait, ESC )3 switches to "graph mode"? As opposed to the
default "kanji mode", which can be re-entered via ESC )0?
Let's look up graph mode in the PC-9801 Programmers' Bible then…
> Kanji cannot be handled in this mode.
…and that's apparently all it has to say. Why have it then, on a platform
whose main selling point is a kanji ROM, and where Shift-JIS (and, well,
7-bit ASCII) are the only native encodings? No support for graph mode in
DOSBox-X either… yeah, let's take a deep dive into NEC's
IO.SYS, and get to the bottom of this.
And yes, graph mode pretty much just disables Shift-JIS decoding for
characters written via INT 29h, the lowest-level way of "just
printing a char" on DOS, which every printf()
will ultimately end up calling. Turns out there is a use for it though,
which we can spot by looking at the 8×16 half-width section of font ROM:
The half-width glyphs marked in red
correspond to the byte ranges from 0x80-0x9F and 0xE0-0xFF… which Shift-JIS
defines as lead bytes for two-byte, full-width characters. But if we turn
off Shift-JIS decoding…
(Yes, that g in the function row is how NEC DOS
indicates that graph mode is active. Try it yourself by pressing
Ctrl+F4!)
Jackpot, we get those half-width characters when printing their
corresponding bytes. I've
re-implemented all my findings into DOSBox-X, which will include graph
mode in the upcoming 0.83.14 release. If P0140 looks a bit empty as a
result, that's why – most of the immediate feature work went into
DOSBox-X, not into ReC98. That's the beauty of "anything" pushes.
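If you'd like to reproduce the effect in code, the escape sequences from above are all you need – a minimal sketch for any DOS C compiler, relying on stdout ultimately going through INT 29h:

#include <stdio.h>

int main(void)
{
    printf("\x1B)3"); // graph mode: Shift-JIS decoding off
    putchar(0x84);    // prints a half-width glyph from the red-marked range
    printf("\x1B)0"); // back to the default kanji mode
    return 0;
}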
So, after switching to graph mode, TH01 does… one of the slowest possible
memset()s over all of text RAM – one printf(" ")
call for every single one of its 80×25 half-width cells – before switching
back to kanji mode. What a waste of RE time…? Oh well, at least we've now
got plenty of proof that these weird escape sequences actually do
nothing of interest.
As for the Konngara code itself… well, it's script-like code, what can you
say. Maybe minimally sloppy in some places, but ultimately harmless.
One small thing that might not be widely known though: The large,
blue-green Siddhaṃ seed syllables are supposed to show up immediately, with
no delay between them? Good to know. Clocking your emulator too low tends
to roll them down from the top of the screen, and will certainly add a
noticeable delay between the four individual images.
… Wait, but this means that ZUN could have intended this "effect".
Why else would he not only put those syllables into four individual images
(and therefore add at least the latency of disk I/O between them), but also
show them on the foreground VRAM page, rather than on the "back buffer"?
Meanwhile, in 📝 another instance of "maybe
having gone too far in a few places":
Expressing distances on the playfield as fractions of its width
and height, just to avoid absolute numbers? Raw numbers are bad because
they're in screen space in this game. But we've already been throwing
PLAYFIELD_ constants into the mix as a way of explicitly
communicating screen space, and keeping raw number literals for the actual
playfield coordinates is looking increasingly sloppy… I don't know,
fractions really seemed like the most sensible thing to do with what we're
given here. 😐
So, 2 pushes in, and we've got the loading code, the entrance animation,
facial expression rendering, and the first one out of Konngara's 12
danmaku patterns. Might not sound like much, but since that first pattern
involves those
blue-green diamond sprites and therefore is one of the more complicated
ones, it all amounts to roughly 21.6% of Konngara's code. That's 7 more
pushes to get Konngara done, then? Next up though: Two pushes of website
improvements.
Secured a 22.5-hour RL workweek to leave plenty of time for this project,
and Touhou Patch Center's commissioned MediaWiki update work is also nearing
completion – time to reopen the store! Since it's been a long time, here's
an overview of where we currently are in each game and binary, and what
the next logical step would be:
TH02:
MAIN.EXE: The final PC-98-specific low-level rendering functions are blocked by a single inconsistent and thus undecompilable assembly instruction. Rather than going for 📝 code generation and turning the rest of the function into a mess, I'd like to introduce a new build step between compilation and linking to patch "mistakes" like these. This would be an investment of 1-2 pushes into more readable code. Otherwise, I could continue with
player shots and the control code for the three shot types,
lasers,
player character movement, or
bomb rendering.
MAINE.EXE: main() and the congratulation picture screens.
OP.EXE: Story Mode initialization or the title screen… but most importantly, we want this to be at 100% RE so that we can build nice menus for future netplay.
TH03:
MAIN.EXE: The enemy structure? Or the bullet structure, maybe? Either way, we're still missing a lot of essential gameplay structures.
OP.EXE: Finalizing the ZUN Soft logo animation, decompiling the game title slide-in animation, and then we're done!
TH04:
MAIN.EXE:
There's a lot of partially reverse-engineered but not yet decompiled code, including all of this game's custom entity types. Covering that would significantly boost finalization% for very little money. Otherwise, we could go for either
Gengetsu's boss script
HUD rendering (in parallel with TH05)
player update code (in parallel with TH05), required for all midbosses in this game
player shot control functions
the end-of-stage bonus calculation
MAINE.EXE: High score name entry.
TH05:
OP.EXE: Finalizing the ZUN Soft logo animation, and then we're done!
MAIN.EXE: Got quite a lot of segment splits where we could immediately continue:
midboss and boss script code, either continuing with the in-game order and the Stage 2 midboss fight, or going directly for
Mai & Yuki,
the Extra Stage midboss,
or EX-Alice
stage tile rendering
player update code (in parallel with TH04)
items and extends
bomb rendering
the single-color areas of boss backgrounds, which use the GRCG's TDW mode
HUD rendering (in parallel with TH04)
boss sprite rendering boilerplate
player shot collision detection and rendering
MAINE.EXE: The staff roll animation.
But as always, you can request pretty much any other part of any game.
We're now at a pretty good place as far as arbitrary requests are
concerned, as I simply can't decide myself where to put all the current
pending contributions in the funding backlog. 😅 By
spending only the missing amount of money to complete any of those, you can
capture any of those "fractional" contributions towards a specific goal.
The next specific requests are going to set the priorities of this project
for quite some time! The best strategy: Spend a low amount of money on
something very specific, and watch as existing generic contributions will
necessarily have to be put towards making that specific goal happen 😛
Technical debt, part 10… in which two of the PMD-related functions came
with such complex ramifications that they required one full push after
all, leaving no room for the additional decompilations I wanted to do. At
least, this did end up being the final one, completing all
SHARED segments for the time being.
The first one of these functions determines the BGM and sound effect
modes, combining the resident type of the PMD driver with the Option menu
setting. The TH04 and TH05 version is apparently coded quite smartly, as
PC-98 Touhou only needs to distinguish "OPN- /
PC-9801-26K-compatible sound sources handled by PMD.COM"
from "everything else", since all other PMD varieties are
OPNA- / PC-9801-86-compatible.
Therefore, I only documented those two results returned from PMD's
AH=09h function. I'll leave a comprehensive, fully documented
enum to interested contributors, since that would involve research into
basically the entire history of the PC-9800 series, and even the clearly
out-of-scope PC-88VA. After all, distinguishing between more versions of
the PMD driver in the Option menu (and adding new sprites for them!) is
strictly mod territory.
The honor of being the final decompiled function in any SHARED
segment went to TH04's snd_load(). TH04 contains by far the
sanest version of this function: Readable C code, no new ZUN bugs (and
still missing file I/O error handling, of course)… but wait, what about
that actual file read syscall, using the INT 21h, AH=3Fh DOS
file read API? Reading up to a hardcoded number of bytes into PMD's or
MMD's song or sound effect buffer, 20 KiB in TH02-TH04, 64 KiB in
TH05… that's kind of weird. About time we looked closer into this.
Turns out that no, KAJA's driver doesn't give you the full 64 KiB of one
memory segment for these, as especially TH05's code might suggest to
anyone unfamiliar with these drivers. Instead,
you can customize the size of these buffers on its command line. In
GAME.BAT, ZUN allocates 8 KiB for FM songs, 2 KiB for sound
effects, and 12 KiB for MMD files in TH02… which means that the hardcoded
sizes in snd_load() are completely wrong, no matter how you
look at them. Consequently, this read syscall
will overflow PMD's or MMD's song or sound effect buffer if the
given file is larger than the respective buffer size.
Now, ZUN could have simply hardcoded the sizes from GAME.BAT
instead, and it would have been fine. As it also turns out though,
PMD has an API function (AH=22h) to retrieve the actual
buffer sizes, provided for exactly that purpose. There is little excuse
not to use it, as it also gives you PMD's default sizes if you don't
specify any yourself.
(Unless your build process enumerates all PMD files that are part of the
game, and bakes the largest size into both snd_load() and
GAME.BAT. That would even work with MMD, which doesn't have
an equivalent for AH=22h.)
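For illustration, a bounds-respecting version of that read could look like this – the PMD query is left as a hypothetical helper, since its exact register interface isn't covered here:

#include <dos.h>

// Hypothetical wrapper around PMD's AH=22h "retrieve buffer sizes" call.
unsigned pmd_song_buffer_size(void);

// Clamp the INT 21h, AH=3Fh read (here via its C runtime wrapper) to the
// size that the driver actually allocated.
int snd_load_song(int handle, void far *song_buffer)
{
    unsigned bytes_read;
    return _dos_read(
        handle, song_buffer, pmd_song_buffer_size(), &bytes_read
    );
}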
What'd be the consequence of loading a larger file then? Well, since we
don't get a full segment, let's look at the theoretical limit first.
PMD prefers to keep both its driver code and the data buffers in a single
memory segment. As a result, the limit for the combined size of the song,
instrument, and sound effect buffer is determined by the amount of
code in the driver itself. In PMD86 version 4.8o (bundled with TH04
and TH05) for example, the remaining size for these buffers is exactly
45,555 bytes. Being an actually good programmer who doesn't blindly trust
user input, KAJA thankfully validates the sizes given via the
/M, /V, and /E command-line options
before letting the driver reside in memory, and shuts down with an error
message if they exceed 40 KiB. Would have been even better if he calculated
the exact size – even in the current
PMD version 4.8s from
January 2020, it's still a hardcoded value (see line 8581).
Either way: If the file is larger than this maximum, the concrete effect
is down to the INT 21h, AH=3Fh implementation in the
underlying DOS version. DOS 3.3 treats the destination address as linear
and reads past the end of the segment,
DOS
5.0 and DOSBox-X truncate the number of bytes to not exceed the remaining
space in the segment, and maybe there's even a DOS that wraps around
and ends up overwriting the PMD driver code. In any case: You will
overwrite what's after the driver in memory – typically, the game .EXE and
its master.lib functions.
It almost feels like a happy accident that this doesn't cause issues in
the original games. The largest PMD file in any of the 4 games, the -86
version of 幽夢 ~ Inanimate Dream, takes up 8,099 bytes,
just under the 8,192 byte limit for BGM. For modders, I'd really recommend
implementing this properly, with PMD's AH=22h function and
error handling, once position independence has been reached.
Whew, didn't think I'd be doing more research into KAJA's drivers during
regular ReC98 development! That's probably been the final time though, as
all involved functions are now decompiled, and I'm unlikely to iterate
over them again.
And that's it! Repaid the biggest chunk of technical debt, time for some
actual progress again. Next up: Reopening the store tomorrow, and waiting
for new priorities. If we got nothing by Sunday, I'm going to put the
pending [Anonymous] pushes towards some work on the website.
P0138
Separating translation units, part 9/10 (focused around TH03 / TH04) + TH04 RE (.MPN format)
💰 Funded by:
[Anonymous], Blue Bolt
🏷️ Tags:
Technical debt, part 9… and as it turns out, it's highly impractical to
repay 100% of it at this point in development. 😕
The reason: graph_putsa_fx(), ZUN's function for rendering
optionally boldfaced text to VRAM using the font ROM glyphs, in its
ridiculously micro-optimized TH04 and TH05 version. This one sets the
"callback function" for applying the boldface effect by self-modifying
the target of two CALL rel16 instructions… because
there really wasn't any free register left for an indirect
CALL, eh? The necessary distance, from the call site to the
function itself, has to be calculated at assembly time, by subtracting the
target function label from the call site label.
This usually wouldn't be a problem… if ZUN didn't store the resulting
lookup tables in the .DATA segment. With code segments, we
can easily split them at pretty much any point between functions because
there are multiple of them. But there's only a single .DATA
segment, with all ZUN and master.lib data sandwiched between Borland C++'s
crt0 at the
top, and Borland C++'s library functions at the bottom of the segment.
Adding another split point would require all data after that point to be
moved to its own translation unit, which in turn requires
EXTERN references in the big .ASM file to all that moved
data… in short, it would turn the codebase into an even greater
mess.
Declaring the labels as EXTERN wouldn't work either, since
the linker can't do fancy arithmetic and is limited to simply replacing
address placeholders with one single address. So, we're now stuck with
this function at the bottom of the SHARED segment, for the
foreseeable future.
We can still continue to separate functions off the top of that segment,
though. Pretty much the only thing noteworthy there, so far: TH04's code
for loading stage tile images from .MPN files, which we hadn't
reverse-engineered yet, and which nicely fit into one of
Blue Bolt's pending ⅓ RE contributions. Yup, we finally moved
the RE% bars again! If only for a tiny bit.
Both TH02 and TH05 simply store one pointer to one dynamically allocated
memory block for all tile images, as well as the number of images, in the
data segment. TH04, on the other hand, reserves memory for 8 .MPN slots,
complete with their color palettes, even though it only ever uses the
first one of these. There goes another 458 bytes of conventional RAM… I
should start summing up all the waste we've seen so far. Let's put the
next website contribution towards a tagging system for these blog posts.
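As for those 8 slots, here's a hypothetical reconstruction of the difference –
all field names are mine, and the exact palette layout is an assumption:
// TH02/TH05: one dynamically allocated block, one count. That's it.
struct mpn_state {
	void far *tile_images; // single allocation for all tile images
	int image_count;
};

// TH04: 8 statically reserved slots, palettes included, even though only
// slot 0 is ever loaded.
struct mpn_slot {
	void far *tile_images;
	int image_count;
	uint8_t palette[16][3]; // 16 colors × 3 components (assumed layout)
};
extern struct mpn_slot mpn_slots[8]; // 7 of these are pure data segment waste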
At 86% of technical debt in the SHARED segment repaid, we
aren't quite done yet, but the rest is mostly just TH04 needing to catch
up with functions we've already separated. Next up: Getting to that
practical 98.5% point. Since this is very likely to not require a full
push, I'll also decompile some more actual TH04 and TH05 game code I
previously reverse-engineered – and after that, reopen the store!
P0137
Separating translation units, part 8/10 (focused around TH03) + Segment alignment research
💰 Funded by:
[Anonymous]
Whoops, the build was broken again? Since
P0127 from
mid-November 2020, on TASM32 version 5.3, which also happens to be the
one in the DevKit… That version changed the alignment for the default
segments of certain memory models when requesting .386
support. And since redefining segment alignment apparently is highly
illegal and absolutely has to be a build error, some of the stand-alone
.ASM translation units didn't assemble anymore on this version. I only
spotted this because I casually compiled ReC98 somewhere else –
on my development system, I happened to have TASM32 version 5.0 in the
PATH during all this time.
At least this was a good occasion to
get rid of some
weird segment alignment workarounds from 2015, and replace them with the
superior convention of using the USE16 modifier for the
.MODEL directive.
ReC98 would highly benefit from a build server – both in order to
immediately spot issues like this one, and as a service for modders.
Even more so than the usual open-source project of its size, I would say.
But that might be exactly
because it doesn't seem like something you can trivially outsource
to one of the big CI providers for open-source projects, and quickly set
it up with a few lines of YAML.
That might still work in the beginning, and we might get by with a regular
64-bit Windows 10 and DOSBox running the exact build tools from the DevKit.
Ideally, though, such a server should really run the optimal configuration
of a 32-bit Windows 10, allowing both the 32-bit and the 16-bit build step
to run natively, which already is something that no popular CI service out
there offers. Then, we'd optimally expand to Linux, every other Windows
version down to 95, emulated PC-98 systems, other TASM versions… yeah, it'd
be a lot. An experimental project all on its own, with additional hosting
costs and probably diminishing returns, the more it expands…
I've added it as a category to the order form, let's see how much interest
there is once the store reopens (which will be at the beginning of May, at
the latest). That aside, it would 📝 also be
a great project for outside contributors!
So, technical debt, part 8… and right away, we're faced with TH03's
low-level input function, which
📝 once 📝 again 📝 insists on being word-aligned in a way we
can't fake without duplicating translation units.
Being undecompilable isn't exactly the best property for a function that
has been interesting to modders in the past: In 2018,
spaztron64 created an
ASM-level mod that hardcoded more ergonomic key bindings for human-vs-human
multiplayer mode: 2021-04-04-TH03-WASD-2player.zip
However, this remapping attempt remained quite limited, since we hadn't
(and still haven't) reached full position independence for TH03 yet.
There's quite some potential for size optimizations in this function, which
would allow more BIOS key groups to already be used right now, but it's not
all that obvious to modders who aren't intimately familiar with x86 ASM.
Therefore, I really wouldn't want to keep such a long and important
function in ASM if we don't absolutely have to…
… and apparently, that's all the motivation I needed? So I took the risk,
and spent the first half of this push on reverse-engineering
TCC.EXE, to hopefully find a way to get word-aligned code
segments out of Turbo C++ after all.
And there is! The -WX option, used for creating
DPMI
applications, messes up all sorts of code generation aspects in weird
ways, but does in fact mark the code segment as word-aligned. We can
consider ourselves quite lucky that we get to use Turbo C++ 4.0, because
this feature isn't available in any previous version of Borland's C++
compilers.
That allowed us to restore all the decompilations I previously threw away…
well, two of the three – that lookup table generator was too much of a mess
in C. But what an abuse this is. The
subtly different code generation has basically required one creative
workaround per usage of -WX. For example, enabling that option
causes the regular PUSH BP and POP BP prolog and
epilog instructions to be wrapped with INC BP and
DEC BP – possibly Borland's way of marking far-call stack frames
with an odd BP value for stack-walking tools:
a_function_compiled_with_wx proc
	inc	bp	; ???
	push	bp
	mov	bp, sp
	; [… function code …]
	pop	bp
	dec	bp	; ???
	ret
a_function_compiled_with_wx endp
Luckily again, all the functions that currently require -WX
don't set up a stack frame and don't take any parameters.
While this hasn't directly been an issue so far, it's been pretty
close: snd_se_reset(void) is one of the functions that require
word alignment. Previously, it shared a translation unit with the
immediately following snd_se_play(int new_se), which does take
a parameter, and therefore would have had its prolog and epilog code messed
up by -WX.
Since the latter function has a consistent (and thus, fakeable) alignment,
I simply split that code segment into two, with a new -WX
translation unit for just snd_se_reset(void). Problem solved –
after all, two C++ translation units are still better than one ASM
translation unit. Especially with all the
previous #include improvements.
The rest was more of the usual, getting us 74% done with repaying the
technical debt in the SHARED segment. A lot of the remaining
26% is TH04 needing to catch up with TH03 and TH05, which takes
comparatively little time. With some good luck, we might get this
done within the next push… that is, if we aren't confronted with all too
many more disgusting decompilations, like the two functions that ended this
push.
If we are, we might be needing 10 pushes to complete this after all, but
that piece of research was definitely worth the delay. Next up: One more of
these.
P0135
Separating translation units, part 6/10 (TH05 PMD loading / Music Room piano)
P0136
Separating translation units, part 7/10 (starting to catch up with TH04)
💰 Funded by:
[Anonymous]
Alright, no more big code maintenance tasks that absolutely need to be
done right now. Time to really focus on parts 6 and 7 of repaying
technical debt, right? Except that we don't get to speed up just yet, as
TH05's barely decompilable PMD file loading function is rather…
complicated.
Fun fact: Whenever I see an unusual sequence of x86 instructions in PC-98
Touhou, I first consult the disassembly of Wolfenstein 3D. That game was
originally compiled with the quite similar Borland C++ 3.0, so it's very
helpful to compare its ASM to the
officially released source
code. If I find the instructions in question there, they mostly come from
that game's ASM code, leading to the amusing realization that "even John
Carmack was unable to get these instructions out of this compiler".
This time though, Wolfenstein 3D did point me
to Borland's intrinsics for common C functions like memcpy()
and strchr(), available via #pragma intrinsic.
Bu~t those unfortunately still generate worse code than what ZUN
micro-optimized here. Commenting how these sequences of instructions
should look in C is unfortunately all I could do here.
The conditional branches in this function did compile quite nicely
though, clarifying the control flow, and clearly exposing a ZUN
bug: TH05's snd_load() will hang in an infinite loop when
trying to load a non-existing -86 BGM file (with a .M2
extension) if the corresponding -26 BGM file (with a .M
extension) doesn't exist either.
Unsurprisingly, the PMD channel monitoring code in TH05's Music Room
remains undecompilable outside the two most "high-level" initialization
and rendering functions. And it's not because there's data in the
middle of the code segment – that would have actually been possible with
some #pragmas to ensure that the data and code segments have
the same name. As soon as the SI and DI registers are referenced
anywhere, Turbo C++ insists on emitting prolog code to save these
on the stack at the beginning of the function, and epilog code to restore
them from there before returning.
Found that out in
September 2019, and confirmed that there's no way around it. All the
small helper functions here are quite simply too optimized, throwing away
any concern for such safety measures. 🤷
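It takes remarkably little to trigger this, by the way – a sketch:
// Sketch: referencing SI or DI anywhere, even via pseudo-registers, is
// enough to make Turbo C++ emit PUSH SI / PUSH DI in the prolog and the
// matching POP instructions in the epilog. ZUN's hand-optimized helpers
// simply don't contain that code, so the binaries can't match.
void any_function(void)
{
	_SI = 0x1234; // this single line already forces the prolog and epilog
}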
Oh well, the two functions that were decompilable at least indicate
that I do try.
Within that same 6th push though, we've finally reached the one function
in TH05 that was blocking further progress in TH04, allowing that game
to finally catch up with the others in terms of separated translation
units. Feels good to delete more of those .ASM files we've
decompiled a while ago… finally!
But since that was just getting started, the most satisfying development
in both of these pushes actually came from some more experiments with
macros and inline functions for near-ASM code. By adding
"unused" dummy parameters for all relevant registers, the exact input
registers are made more explicit, which might help future port authors who
then maybe wouldn't have to look them up in an x86 instruction
reference quite as often. At its best, this even allows us to
declare certain functions with the __fastcall convention and
express their parameter lists as regular C, with no additional
pseudo-registers or macros required.
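As a sketch of that best case (the function is made up): Borland's 16-bit
__fastcall convention passes the first integer parameters in AX, DX, and BX,
so an ASM-implemented function that happens to expect its inputs in exactly
these registers can be declared as plain C:
// Hypothetical example: this ASM function takes its two inputs in AX and DX.
// Declared like this, C callers simply pass regular arguments, and the
// compiler loads the correct registers on its own.
extern void __fastcall example_asm_func(int value /* AX */, int flags /* DX */);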
As for output registers, Turbo C++'s code generation turns out to be even
more amazing than previously thought when it comes to returning
pseudo-registers from inline functions. A nice example for
how this can improve readability can be found in this piece of TH02 code
for polling the PC-98 keyboard state using a BIOS interrupt:
inline uint8_t keygroup_sense(uint8_t group) {
	_AL = group;
	_AH = 0x04;
	geninterrupt(0x18);
	// This turns the output register of this BIOS call into the return value
	// of this function. Surprisingly enough, this does *not* naively generate
	// the `MOV AL, AH` instruction you might expect here!
	return _AH;
}

void input_sense(void)
{
	// As a result, this assignment becomes `_AH = _AH`, which Turbo C++
	// never emits as such, giving us only the three instructions we need.
	_AH = keygroup_sense(8);

	// Whereas this one gives us the one additional `MOV BH, AH` instruction
	// we'd expect, and nothing more.
	_BH = keygroup_sense(7);

	// And now it's obvious what both of these registers contain, from just
	// the assignments above.
	if(_BH & K7_ARROW_UP || _AH & K8_NUM_8) {
		key_det |= INPUT_UP;
	}
	// […]
}
I love it. No inline assembly, as close to idiomatic C code as something
like this is going to get, yet still compiling into the minimum possible
number of x86 instructions on even a 1994 compiler. This is how I keep
this project interesting for myself during chores like these.
We might have even reached peak
inline already?
And that's 65% of technical debt in the SHARED segment repaid
so far. Next up: Two more of these, which might already complete that
segment? Finally!
P0134
Separating translation units, part 5/10 (TH05 .PI functions)
💰 Funded by:
[Anonymous]
Technical debt, part 5… and we only got TH05's stupidly optimized
.PI functions this time?
As far as actual progress is concerned, that is. In maintenance news
though, I was really hyped for the #include improvements I've
mentioned in 📝 the last post. The result: A
new x86real.h file, bundling all the declarations specific to
the 16-bit x86 Real Mode in a smaller file than Turbo C++'s own
DOS.H. After all, DOS is not the same thing as the underlying
CPU. And while it didn't speed up build times quite as much as I had hoped,
it now clearly indicates the x86-specific parts of PC-98 Touhou code to
future port authors.
After another couple of improvements to parameter declaration in ASM land,
we get to TH05's .PI functions… and really, why did ZUN write all of
them in ASM? Why (re)declare all the necessary structures and data in
ASM land, when all these functions are merely one layer of abstraction
above master.lib, which does all the actual work?
I get that ZUN might have wanted masked blitting to be faster, which is
used for the fade-in effect seen during TH05's main menu animation and the
ending artwork. But, uh… he knew how to modify master.lib. In fact, he
did already modify the graph_pack_put_8() function
used for rendering a single .PI image row, to ignore master.lib's VRAM
clipping region. For this effect though, he first blits each row regularly
to the invisible 400th row of VRAM, and then does an EGC-accelerated
VRAM-to-VRAM blit of that row to its actual target position with the mask
enabled. It would have been way more efficient to add another version of
this function that takes a mask pattern. No amount of REP
MOVSW is going to change the fact that two VRAM writes per line are
slower than a single one. Not to mention that it doesn't justify writing
every other .PI function in ASM to go along with it…
This is where we also find the most hilarious aspect about this: For most
of ZUN's pointless micro-optimizations, you could have maybe made the
argument that they do save some CPU cycles here and there, and
therefore did something positive to the final, PC-98-exclusive result. But
some of the hand-written ASM here doesn't even constitute a
micro-optimization, because it's worse than what you would have got
out of even Turbo C++ 4.0J with its 80386 optimization flags!
At least it was possible to "decompile" 6 out of the 10 functions
here, making them easy to clean up for future modders and port authors.
Could have been 7 functions if I also decided to "decompile"
pi_free(), but all the C++ code is already surrounded by ASM,
resulting in 2 ASM translation units and 2 C++ translation units.
pi_free() would have needed a single translation unit by
itself, which wasn't worth it, given that I would have had to spell out
every single ASM instruction anyway.
There you go. What about this needed to be written in ASM?!?
The function calls between these small translation units even seemed to
glitch out TASM and the linker in the end, leading to one CALL
offset being weirdly shifted by 32 bytes. Usually, TLINK reports a fixup
overflow error when this happens, but this time it didn't, for some reason?
Mirroring the segment grouping in the affected translation unit did solve
the problem, and I already knew this, but only thought of it after spending
quite some RTFM time… during which I discovered the -lE
switch, which enables TLINK to use the expanded dictionaries in
Borland's .OBJ and .LIB files to speed up linking. That shaved off roughly
another second from the build time of the complete ReC98 repository. The
more you know… Binary blobs compiled with non-Borland tools would be the
only reason not to use this flag.
So, even more slowdown with this 5th dedicated push, since we've still only
repaid 41% of the technical debt in the SHARED segment so far.
Next up: Part 6, which hopefully manages to decompile the FM and SSG
channel animations in TH05's Music Room, and hopefully ends up being the
final one of the slow ones.
P0133
Separating translation units, part 4/10 (focused around TH02 / TH05)
💰 Funded by:
[Anonymous]
Wow, 31 commits in a single push? Well, what the last push had in
progress, this one had in maintenance. The
📝 master.lib header transition absolutely
had to be completed in this one, for my own sanity. And indeed,
it reduced the build time for the entirety of ReC98 to about 27 seconds on
my system, just as expected in the original announcement. Looking forward
to even faster build times with the upcoming #include
improvements I've got up my sleeve! The port authors of the future are
going to appreciate those quite a bit.
As for the new translation units, the funniest one is probably TH05's
function for blitting the 1-color .CDG images used for the main menu
options. Which is so optimized that it becomes decompilable again,
by ditching the self-modifying code of its TH04 counterpart in favor of
simply making better use of CPU registers. The resulting C code is still a
mess, but what can you do.
This was followed by even more TH05 functions that clearly weren't
compiled from C, as evidenced by their padding
bytes. It's about time I've documented my lack of ideas of how to get
those out of Turbo C++.
And just like in the previous push, I also had to 📝 throw away a decompiled TH02 function purely due to alignment issues. Couldn't have been a better one though, no one's going to miss a residency check for the MMD driver that is largely identical to the corresponding (and indeed decompilable) function for the PMD driver. Both of those should have been merged into a single function anyway, given how they also mutate the game's sound configuration flags…
In the end, I've slightly slowed down with this one, with only 37% of technical debt done after this 4th dedicated push. Next up: One more of these, centered around TH05's stupidly optimized .PI functions. Maybe also with some more reverse-engineering, after not having done any for 1½ months?
P0132
Separating translation units, part 3/10 (focused around TH02 / TH03)
💰 Funded by:
[Anonymous]
Now that's the amount of translation unit separation progress I was
looking for! Too bad that RL is keeping me more and more occupied these
days, and ended up delaying this push until 2021. Now that
Touhou Patch Center is also commissioning me to update their
infrastructure, it's going to take a while for ReC98 to return to full
speed, and for the store to be reopened. Should happen by April at the
latest, though!
With everything related to this separation of translation units explained
earlier, we've really got a push with nothing to talk about, this
time. Except, maybe, for the realization that
📝 this current approach might not be the
best fit for TH02 after all: Not only did it force us to
📝 throw away the previous decompilation of
the sound effect playback functions, but OP.EXE also contains
obviously copy-pasted code in addition to the common, shared set of
library functions. How was that game even built, originally??? No
way around compiling that one instance of the "delay until given BGM
measure" function separately then, if it insists on using its own
instance of the VSync delay function…
Oh well, this separated layout still works better for the later games, and
consistency is good. Smooth sailing with all of the other functions, at
least.
Next up: One more of these, which might even end up completing the
📝 transition to our own master.lib header file.
In terms of the total amount of ASM code left in the SHARED
code segments, we're now 30% done after 3 dedicated pushes. It really
shouldn't require 7 more pushes, though!
P0130
TH01 decompilation (Boss HP and collision handling, part 1/2)
P0131
TH01 decompilation (Boss HP and collision handling, part 2/2)
💰 Funded by:
Yanga
50% hype! 🎉 But as usual for TH01, even that final set of functions
shared between all bosses had to consume two pushes rather than one…
First up, in the ongoing series "Things that TH01 draws to the PC-98
graphics layer that really should have been drawn to the text layer
instead": The boss HP bar. Oh well, using the graphics layer at least made
it possible to have this half-red, half-white pattern
for the middle section.
This one pattern is drawn by making surprisingly good use of the GRCG. So
far, we've only seen it used for fast monochrome drawing:
// Setting up fast drawing using color #9 (1001 in binary)
grcg_setmode(GC_RMW);
outportb(0x7E, 0xFF); // Plane 0: (B): (********)
outportb(0x7E, 0x00); // Plane 1: (R): ( )
outportb(0x7E, 0x00); // Plane 2: (G): ( )
outportb(0x7E, 0xFF); // Plane 3: (E): (********)
// Write a checkerboard pattern (* * * * ) in color #9 to the top-left corner,
// with transparent blanks. Requires only 1 VRAM write to a single bitplane:
// The GRCG automatically writes to the correct bitplanes, as specified above
*(uint8_t *)(MK_FP(0xA800, 0)) = 0xAA;
But since this is actually an 8-pixel tile register, we can set any
8-pixel pattern for any bitplane. This way, we can get different colors
for every one of the 8 pixels, with still just a single VRAM write of the
alpha mask to a single bitplane:
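A hedged reconstruction of the idea – the exact patterns and colors here are
mine, not necessarily the game's:
// Multi-color version of the example above: the left 4 pixels come out in
// color #2 (0010 = red), the right 4 in color #7 (0111 = white)…
grcg_setmode(GC_RMW);
outportb(0x7E, 0x0F); // Plane 0: (B): (    ****)
outportb(0x7E, 0xFF); // Plane 1: (R): (********)
outportb(0x7E, 0x0F); // Plane 2: (G): (    ****)
outportb(0x7E, 0x00); // Plane 3: (E): (        )
// …and it's still just one VRAM write of the alpha mask to a single bitplane:
*(uint8_t *)(MK_FP(0xA800, 0)) = 0xFF;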
And I thought TH01 only suffered the drawbacks of PC-98 hardware, making
so little use of its actual features that it's perhaps not fair to even
call it "a PC-98 game"… Still, I'd say that "bad PC-98 port of an idea"
describes it best.
However, after that tiny flash of brilliance, the surrounding HP rendering
code goes right back to being the typical sort of confusing TH01 jank.
There's only a single function for the three distinct jobs of
- incrementing HP during the boss entrance animation,
- decrementing HP if hit by the Orb, and
- redrawing the entire bar (because it's still all in VRAM, and Sariel wants
  different backgrounds),
with magic numbers to select between all of these.
VRAM of course also means that the backgrounds behind the individual hit
points have to be stored, so that they can be unblitted later as the boss
is losing HP. That's no big deal though, right? Just allocate some memory,
copy what's initially in VRAM, then blit it back later using your
foundational set of blitting funct– oh, wait, TH01 doesn't have this sort
of thing, right? The closest thing,
📝 once again, are the .PTN functions. And
so, the game ends up handling these 8×16 background sprites with 16×16
wrappers around functions for 32×32 sprites.
That's quite the recipe for confusion, especially since ZUN
preferred copy-pasting the necessary ridiculous arithmetic expressions for
calculating positions, .PTN sprite IDs, and the ID of the 16×16 quarter
inside the 32×32 sprite, instead of just writing simple helper functions.
He did manage to make the result mostly bug-free this time
around, though! (Edit (2022-05-31): Nope, there's a
📝 potential heap corruption after all, which can be triggered in some fights in test mode (game t) or debug mode (game d).)
There's one minor hit point discoloration bug if the red-white or white
sections start at an odd number of hit points, but that's never the case for
any of the original 7 bosses.
The remaining sloppiness is ultimately inconsequential as well: The game
always backs up twice the number of hit point backgrounds, and thus
uses twice the amount of memory actually required. Also, this
self-restriction of only unblitting 16×16 pixels at a time requires any
remaining odd hit point at the last position to, of course, be rendered
again.
After stumbling over the weakest imaginable random number
generator, we finally arrive at the shared boss↔orb collision
handling function, the final blocker among the final blockers. This
function takes a whopping 12 parameters, 3 of them being references to
int values, some of which are duplicated for every one of the
7 bosses, with no generic boss struct anywhere.
📝 Previously, I speculated that YuugenMagan might have been the first boss to be programmed for TH01.
With all these variables though, there is some new evidence that SinGyoku
might have been the first one after all: It's the only boss to use its own
HP and phase frame variables, with the other bosses sharing the same two
globals.
While this function only handles the response to a boss↔orb
collision, it still does way too much to describe it briefly. Took me
quite a while to frame it in terms of invincibility (which is the
main impact of all of this that can be observed in gameplay code). That
made at least some sort of sense, considering the other usages of
the variables passed as references to that function. Turns out that
YuugenMagan, Kikuri, and Elis abuse what's meant to be the "invincibility
frame" variable as a frame counter for some of their animations 🙄
Oh well, the game at least doesn't call the collision handling function
during those, so "invincibility frame" is technically still a
correct variable name there.
And that's it! We're finally ready to start with Konngara, in 2021. I've
been waiting quite a while for this, as all this high-level boss code is
very likely to speed up TH01 progress quite a bit. Next up though: Closing
out 2020 with more of the technical debt in the other games.
P0128
TH01 decompilation (Card-flipping stages, part 1/4)
P0129
TH01 decompilation (Card-flipping stages, part 2/4)
💰 Funded by:
Yanga
So, only one card-flipping function missing, and then we can start
decompiling TH01's two final bosses? Unfortunately, that had to be the one
big function that initializes and renders all gameplay objects. #17 on the
list of longest functions in all of PC-98 Touhou, requiring two pushes to
fully understand what's going on there… and then it immediately returns
for all "boss" stages whose number is divisible by 5, yet is still called
during Sariel's and Konngara's initialization 🤦
Oh well. This also involved the final file format we hadn't looked at
yet – the STAGE?.DAT files that describe the layout for all
stages within a single 5-stage scene. Which, for a change, is a very
well-designed form– no, of course it's completely weird, what did you
expect? Development must have looked somewhat like this:
Weirdness #1: "Hm, the stage format should
include the file names for the background graphics and music… or should
it?" And so, the 22-byte header still references some music and
background files that aren't part of the final game. The game doesn't use
anything from there, and instead derives those file names from the
scene ID.
That's probably nothing new to anyone who has ever looked at TH01's data
files. In a slightly more interesting discovery though, seeing the
📝 .GRF extension, in some of the file names
that are short enough to not cut it off, confirms that .GRF was initially
used for background images. Probably before ZUN learned about .PI, and how
it achieves better compression than his own per-bitplane RLE approach?
Weirdness #2: "Hm, I might want to put
obstacles on top of cards?" You'd probably expect this format to
contain one single array for every stage, describing which object to place
on every 32×32 tile, if any. Well, the real format uses two arrays:
One for the cards, and a combined one for all "obstacles" – bumpers, bumper
bars, turrets, and portals. However, none of the card-flipping stages in
the final game come with any such overlaps. That's quite unfortunate, as it
would have made for some quite interesting level designs:
As you can see, the final version of the blitting code was not written
with such overlaps in mind either, blitting the cards on top of all
the obstacles, and not the other way round.
Weirdness #3: "In contrast to obstacles, of
which there are multiple types, cards only really need 1 bit. Time for some
bit twiddling!" Not the worst idea, given that the 640×336 playfield
can fit 20×10 cards, which would fit exactly into 25 bytes if you use a
single bit to indicate card or no card. But for whatever
reason, ZUN only stored 4 card bits per byte, leaving the other 4 bits
unused, and needlessly blowing up that array to 50 bytes. 🤷
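Spelled out in code (a hedged sketch with invented names; which nibble goes
unused is an assumption):
// 20×10 = 200 tiles, at 4 card bits per byte = 50 bytes instead of 25.
void parse_cards(const uint8_t stage_cards[50], uint8_t card_present[200])
{
	for(int i = 0; i < 200; i++) {
		// Only bits 0-3 of every byte are used; bits 4-7 are never set
		card_present[i] = ((stage_cards[i / 4] >> (i % 4)) & 1);
	}
}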
Oh, and did I mention that the contents of the STAGE?.DAT files are
loaded into the main data segment, even though the game immediately parses
them into something more conveniently accessible? That's another 1250 bytes
of memory wasted for no reason…
Weirdness #4: "Hm, how about requiring the
player to flip some of the cards multiple times? But I've already written
all this bit twiddling code to store 4 cards in 1 byte. And if cards should
need anywhere from 1 to 4 flips, that would need at least 2 more bits,
which won't fit into the unused 4 bits either…" This feature
must have come later, because the final game uses 3 "obstacle" type
IDs to act as a flip count modifier for a card at the same relative array
position. Complete with lookup code to find the actual card index these
modifiers belong to, and ridiculous switch statements to not include
those non-obstacles in the game's internal obstacle array.
With all that, it's almost not worth mentioning how there are 12 turret
types, which only differ in which hardcoded pellet group they fire at a
hardcoded interval of either 100 or 200 frames, and that they're all
explicitly spelled out in every single switch statement. Or
how the layout of the internal card and obstacle SoA classes is quite
disjointed. So here's the new ZUN bugs you've probably already been
expecting!
Cards and obstacles are blitted to both VRAM pages. This way, any other
entities moving on top of them can simply be unblitted by restoring pixels
from VRAM page 1, without requiring the stationary objects to be redrawn
from main memory. Obviously, the backgrounds behind the cards have to be
stored somewhere, since the player can remove them. For faster transitions
between stages of a scene, ZUN chose to store the backgrounds behind
obstacles as well. This way, the background image really only needs to be
blitted for the first stage in a scene.
All that memory for the object backgrounds adds up quite a bit though. ZUN
actually made the correct choice here and picked a memory allocation
function that can return more than the 64 KiB of a single x86 Real Mode
segment. He then accesses the individual backgrounds via regular array
subscripts… and that's where the bug lies, because he stores the returned
address in a regular far pointer rather than a
huge one. This way, the game can still only display a
total of 102 objects (i. e., cards and obstacles combined) per stage,
without any unblitting glitches.
What a shame, that limit could have been 127 if ZUN didn't needlessly
allocate memory for alpha planes when backing up VRAM content.
And since array subscripts on far pointers wrap around after
64 KiB, trying to save the background of the 103rd object is guaranteed to
corrupt the memory block header at the beginning of the returned segment.
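In code, the whole bug comes down to a single keyword – a minimal sketch,
with an invented per-object backup size:
#include <alloc.h>
#define BG_SIZE 640U // hypothetical per-object backup size, in bytes

// farmalloc() correctly returns a single block larger than 64 KiB…
char far *backups = (char far *)farmalloc(200L * BG_SIZE);

// …but array subscripts on a *far* pointer only do 16-bit arithmetic on its
// offset part. Backing up the 103rd object touches byte offsets of up to
// ((102 × 640) + 639) = 65,919, which wrap right back to the beginning of
// the segment – and into farmalloc()'s memory block header:
backups[(102U * BG_SIZE) + 639U] = 0xFF; // actually writes to offset 383!

// A *huge* pointer would have normalized the address instead of wrapping:
((char huge *)backups)[(102L * BG_SIZE) + 639] = 0xFF; // correct address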
When TH01 runs in debug mode, it
correctly reports a corrupted heap in this case.
After detecting such a corruption, the game loudly reports it by playing the
"player hit" sound effect and locking up, freezing any further gameplay or
rendering. The locking loop can be left by pressing ↵ Return, but the
game will simply re-enter it if the corruption is still present during the
next heapcheck(), in the next frame. And since heap
corruptions don't tend to repair themselves, you'd have to constantly hold
↵ Return to resume gameplay. Doing that could actually get you
safely to the next boss, since the game doesn't allocate or free any further
heap memory during a 5-stage card-flipping scene, and
just throws away its C heap when restarting the process for a boss. But then
again, holding ↵ Return will also auto-flip all cards on the way there…
🤨
Finally, some unused content! Upon discovering TH01's stage selection debug
feature, probably everyone tried to access Stage 21,
just to see what happens, and indeed landed in an actual stage, with a
black background and a weird color palette. Turns out that ZUN did
ship an unused scene in STAGE7.DAT, which is exactly what's
loaded there.
However, it's easy to believe that this is just garbage data (as I
initially did): At the beginning of "Stage 22", the game seems to enter an
infinite loop somewhere during the flip-in animation.
Well, we've had a heap overflow above, and the cause here is nothing but a
stack buffer overflow – a perhaps more modern kind of classic C bug,
given its prevalence in the Windows Touhou games. Explained in a few lines
of code:
void stageobjs_init_and_render()
{
	int card_animation_frames[50]; // even though there can be up to 200?!
	int total_frames = 0;

	// (sketch of the actual loop – card_count is the number of cards in
	// the stage)
	for(int i = 0; i < card_count; i++) {
		card_animation_frames[i] = 0; // card #51 resets total_frames instead…
	}
}
The number of cards in "Stage 22"? 76. There you have it.
But of course, it's trivial to disable this animation and fix these stage
transitions. So here they are, Stages 21 to 24, as shipped with the game
in STAGE7.DAT:
Wow, what a mess. All that was just a bit too much to be covered in two
pushes… Next up, assuming the current subscriptions: Taking a vacation with
one smaller TH01 push, covering some smaller functions here and there to
ensure some uninterrupted Konngara progress later on.
P0126
TH03/TH04/TH05 decompilation (EGC-powered blitting + .MRS format, part 1/2)
P0127
TH03 decompilation (.MRS format, part 2/2) + separating translation units, part 2/10
💰 Funded by:
Blue Bolt, [Anonymous]
Alright, back to continuing the master.hpp transition started
in P0124, and repaying technical debt. The last blog post already
announced some ridiculous decompilations… and in fact, not a single
one of the functions in these two pushes was decompilable into
idiomatic C/C++ code.
As usual, that didn't keep me from trying though. The TH04 and TH05
version of the infamous 16-pixel-aligned, EGC-accelerated rectangle
blitting function from page 1 to page 0 was fairly average as far as
unreasonable decompilations are concerned.
The big blocker in TH03's MAIN.EXE, however, turned out to be
the .MRS functions, used to render the gauge attack portraits and bomb
backgrounds. The blitting code there uses the additional FS and GS segment
registers provided by the Intel 386… which
are not supported by Turbo C++'s inline assembler, and
can't be turned into pointers, due to a compiler bug in Turbo C++ that
generates wrong segment prefix opcodes for the _FS and
_GS pseudo-registers.
Apparently I'm the first one to even try doing that with this compiler? I
haven't found any other mention of this bug…
Compiling via assembly (#pragma inline) would work around
this bug and generate the correct instructions. But that would incur yet
another dependency on a 16-bit TASM, for something honestly quite
insignificant.
What we can always do, however, is using __emit__() to simply
output x86 opcodes anywhere in a function. Unlike spelled-out inline
assembly, that can even be used in helper functions that are supposed to
inline… which does in fact allow us to fully abstract away this compiler
bug. Regular if() comparisons with pseudo-registers
wouldn't inline, but "converting" them into C++ template function
specializations does. All that's left is some C preprocessor abuse
to turn the pseudo-registers into types, and then we do retain a
normal-looking poke() call in the blitting functions in the
end. 🤯
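Here's a heavily condensed sketch of the core idea – the real version hides
the opcode bytes behind the aforementioned templates and preprocessor
machinery:
// Condensed sketch, not the actual ReC98 code: writing one byte through FS
// by emitting the raw instruction bytes ourselves. 0x64 is the FS: segment
// prefix that Turbo C++ would get wrong, 0x88 0x07 is MOV [BX], AL. Unlike
// an `asm` block, __emit__() doesn't prevent this helper from being inlined.
inline void poke_fs(uint16_t off, uint8_t value)
{
	_BX = off;
	_AL = value;
	__emit__(0x64, 0x88, 0x07); // MOV FS:[BX], AL
}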
Yeah… the result is batshit insane.
I may have gone too far in a few places…
One might certainly argue that all these ridiculous decompilations
actually hurt the preservation angle of this project. "Clearly, ZUN
couldn't have possibly written such unreasonable C++ code.
So why pretend he did, and not just keep it all in its more natural ASM
form?" Well, there are several reasons:
- Future port authors will merely have to translate all the pseudo-registers
  and inline assembly to C++. For the former, this is typically as easy as
  replacing them with newly declared local variables. No need to bother with
  function prolog and epilog code, calling conventions, or the build system.
- No duplication of constants and structures in ASM land.
- As a more expressive language, C++ can document the code much better.
  Meticulous documentation seems to have become the main attraction of ReC98
  these days – I've seen it appreciated quite a number of times, and the
  continued financial support of all the backers speaks volumes. Mods, on the
  other hand, are still a rather rare sight.
- Having as few .ASM files in the source tree as possible looks better to
  casual visitors who just look at GitHub's repo language breakdown. This way,
  ReC98 will also turn from an "Assembly project" to its rightful state of
  "C++ project" much sooner.
- And finally, it's not like the ASM versions are gone – they're still part
  of the Git history.
Unfortunately, these pushes also demonstrated a second disadvantage in
trying to decompile everything possible: Since Turbo C++ lacks TASM's
fine-grained ability to enforce code alignment on certain multiples of
bytes, it might actually be unfeasible to link in a C-compiled object file
at its intended original position in some of the .EXE files it's used in.
Which… you're only going to notice once you encounter such a case. Due to
the slightly jumbled order of functions in the
📝 second, shared code segment, that might
be long after you decompiled and successfully linked in the function
everywhere else.
And then you'll have to throw away that decompilation after all 😕 Oh
well. In this specific case (the lookup table generator for horizontally
flipping images), that decompilation was a mess anyway, and probably
helped nobody. I could have added a dummy .OBJ that does nothing but
enforce the needed 2-byte alignment before the function if I
really insisted on keeping the C version, but it really wasn't
worth it.
Now that I've also described yet another meta-issue, maybe there'll
really be nothing to say about the next technical debt pushes?
Next up though: Back to actual progress
again, with TH01. Which maybe even ends up pushing that game over the 50%
RE mark?
P0124
TH04 decompilation (Character selection, part 1/2)
P0125
TH04 decompilation (Character selection, part 2/2)
💰 Funded by:
Blue Bolt, [Anonymous]
Turns out that TH04's player selection menu is exactly three times as
complicated as TH05's. Two screens for character and shot type rather than
one, and a way more intricate implementation for saving and restoring the
background behind the raised top and left edges of a character picture
when moving the cursor between Reimu and Marisa. TH04 decides to back up
precisely the two 256×8 (top) and 8×244 (left) strips behind the
edges, indicated in red in the picture below.
These take up just 4 KB of heap memory… but require custom blitting
functions, and expanding this explicitly hardcoded approach to TH05's 4
characters would have been pretty annoying. So, rather than, uh, not
explicitly hardcoding it all, ZUN decided to just be lazy with the backup
area in TH05, saving the entire 640×400 screen, and thus spending 128 KB
of heap memory on this rather simple selection shadow effect.
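The memory math, for the record – assuming that both strips are backed up for
each of the two character pictures, across all 4 bitplanes:
// Back-of-the-envelope confirmation of the numbers above. (16-bit ints, so
// the TH05 total needs a long.)
const unsigned TOP = ((256 / 8) * 8); // 256×8-pixel strip → 256 bytes/plane
const unsigned LEFT = ((8 / 8) * 244); // 8×244-pixel strip → 244 bytes/plane
const unsigned TH04_BACKUP = ((TOP + LEFT) * 4 * 2); // = 4,000 bytes ≈ 4 KB
const long TH05_BACKUP = ((640L / 8) * 400 * 4); // = 128,000 bytes = 128 KB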
So, this really wasn't something to quickly get done during the first half
of a push, even after already having done TH05's equivalent of this menu.
But since life is very busy right now, I also used the occasion to start
addressing another code organization annoyance: master.lib's single master.h header file.
Now that ReC98 is trying to develop (or at least mimic) a more
type-safe C++ foundation to model the PC-98 hardware, a pure C header
(with counter-productive C++ extensions) is becoming increasingly
unidiomatic. By moving some of the original assumptions about function
parameters into the type system, we can also reduce the reliance on its
Japanese-only documentation without having to translate it.
It's quite bloated, with at least 2800 lines of code that
currently are #included into the vast majority of files, not
counting master.h's recursively included C standard library
headers. PC-98 Touhou only makes direct use of a rather small fraction of
its contents.
And finally, all the DOS/V compatibility definitions are especially
useless in the context of ReC98. As I've noted
📝 time and
📝 time again, porting PC-98 Touhou to
IBM-compatible DOS won't be easy, and MASTER_DOSV won't be
helping much. Therefore, my upstream version of ReC98 will never include
all of master.lib. There's no point in lengthening compile times for
everyone by default, and those will be getting quite noticeable
after moving to a full 16-bit build process.
(Actually, what retro system ports should rather be doing: Get rid
of master.lib's original ASM code, replace it with
readable, modern
C++, and then simply convert the optimized assembly output of modern
compilers to your ISA of choice. Improving the landscape of such
assembly or object file converters would benefit everyone!)
So, time to start a new master.hpp header that would contain
just the declarations from master.h that PC-98 Touhou
actually needs, plus some semantic (yes, semantic) sugar. Comparing just
the old master.h to just the new master.hpp
after roughly 60% of the transition has been completed, we get median
build times of 319 ms for master.h, and 144 ms for
master.hpp on my (admittedly rather slow) DOSBox setup.
Nice!
As of this push, ReC98 consists of 107 translation units that have to be
compiled with Turbo C++ 4.0J. Fully rebuilding all of these currently
takes roughly 37.5 seconds in DOSBox. After the transition to
master.hpp is done, we could therefore shave some 10 to 15
seconds off this time, simply by switching header files. And that's just
the beginning, as this will also pave the way for further
#include optimizations. Life in this codebase will be great!
Unfortunately, there wasn't enough time to repay some of the actual
technical debt I was looking forward to, after all of this. Oh well, at
least we now also have nice identifiers for the three different boldface
options that are used when rendering text to VRAM, after procrastinating
that issue for almost 11 months. Next up, assuming the existing
subscriptions: More ridiculous decompilations of things that definitely
weren't originally written in C, and a big blocker in TH03's
MAIN.EXE.
Done with the .BOS format, at last! While there's still quite a bunch of
undecompiled non-format blitting code left, this was in fact the final
piece of graphics format loading code in TH01.
📝 Continuing the trend from three pushes ago,
we've got yet another class, this time for the 48×48 and 48×32 sprites
used in Reimu's gohei, slide, and kick animations. The only reason these
had to use the .BOS format at all is simply because Reimu's regular
sprites are 32×32, and are therefore loaded from
📝 .PTN files.
Yes, this makes no sense, because why would you split animations for
the same character across two file formats and two APIs, just because
of a sprite size difference?
This necessity for switching blitting APIs might also explain why Reimu
vanishes for a few frames at the beginning and the end of the gohei swing
animation, but more on that once we get to the high-level rendering code.
Now that we've decompiled all the .BOS implementations in TH01, here's an
overview of all of them, together with .PTN to show that there really was
no reason for not using the .BOS API for all of Reimu's sprites:
|                         | CBossEntity               | CBossAnim | CPlayerAnim | ptn_* (32×32) |
|-------------------------|---------------------------|-----------|-------------|---------------|
| Format                  | .BOS                      | .BOS      | .BOS        | .PTN          |
| Hitbox                  | ✔                         | ✘         | ✘           | ✘             |
| Byte-aligned blitting   | ✔                         | ✔         | ✔           | ✔             |
| Byte-aligned unblitting | ✔                         | ✘         | ✔           | ✔             |
| Unaligned blitting      | Single-line and wave only | ✘         | ✘           | ✘             |
| Precise unblitting      | ✔                         | ✘         | ✔           | ✔             |
| Per-file sprite limit   | 8                         | 8         | 32          | 64            |
| Pixels blitted at once  | 16                        | 16        | 8           | 32            |
And even that last property could simply be handled by branching based on
the sprite width, and wouldn't be a reason for switching formats. But then,
it just wouldn't be TH01 without all that redundant bloat, would it?
The basic loading, freeing, and blitting code was yet another variation
on the other .BOS code we've seen before. So this should have caused just
as little trouble as the CBossAnim code… except that
CPlayerAnim did add one slightly difficult function to
the mix, which led to it requiring almost a full push after all.
Similar to 📝 the unblitting code for moving lasers we've seen in the last push,
ZUN tries to minimize the amount of VRAM writes when unblitting Reimu's
slide animations. Technically, it's only necessary to restore the pixels
that Reimu traveled by, plus the ones that wouldn't be redrawn by
the new animation frame at the new X position.
The theoretically arbitrary distance between the two sprites is, of
course, modeled by a fixed-size buffer on the stack, coming with the further
assumption that the sprite surely hasn't moved by more than 1 horizontal
VRAM byte compared to the last frame. Which results in glitches if that's
not the case, leaving little Reimu parts in VRAM if the slide speed ever
exceeded 8 pixels per frame. (Which it never does,
being hardcoded to 6 pixels, but still.) As it also turns out, all those
bit masking operations easily lead to incredibly sloppy C code.
Which compiles into incredibly terrible ASM, which in turn might end up
wasting way more CPU time than the final VRAM write optimization would
have gained? Then again, in-depth profiling is way beyond the scope of
this project at this point.
Next up: The TH04 main menu, and some more technical debt.
This time around, laser is 📝 actually not
difficult, with TH01's shootout laser class being simple enough to nicely
fit into a single push. All other stationary lasers (as used by
YuugenMagan, for example) don't even use a class, and are simply treated
as regular lines with collision detection.
But of course, the shootout lasers also come with the typical share of
TH01 jank we've all come to expect by now. This time, it already starts
with the hardcoded sprite data:
A shootout laser can have a width from 1 to 8 pixels, so ZUN stored a
separate 16×1 sprite with a line for each possible width (left-to-right).
Then, he shifted all of these sprites 1 pixel to the right for all of the
8 possible start positions within a planar VRAM byte (top-to-bottom).
Because… doing that bit shift programmatically is way too
expensive, so let's pre-shift at compile time, and use 16× the memory per
sprite?
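For comparison, here's what the "way too expensive" programmatic shift would
roughly look like (names invented, GRCG assumed to be set up) – the 8 pixels
of a row span at most two adjacent VRAM bytes:
// Hypothetical runtime alternative to the pre-shifted sprite sheet.
inline void put_laser_row(uint16_t vram_offset, int shift /* 0-7 */, uint8_t row)
{
	uint16_t pixels = (((uint16_t)row << 8) >> shift);
	uint8_t far *vram = (uint8_t far *)MK_FP(0xA800, vram_offset);
	vram[0] = (pixels >> 8); // with the GRCG active, plain writes suffice
	vram[1] = (pixels & 0xFF);
}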
Since a bunch of other sprite sheets need to be pre-shifted as well (this
is the 5th one we've found so far), our sprite converter has a feature to
automatically generate those pre-shifted variations. This way, we can
abstract away that implementation detail and leave modders with .BMP files
that still only contain a single version of each sprite. But, uh…, wait,
in this sprite sheet, the second row for 1-pixel lasers is accidentally
shifted right by one more pixel than it should have been?! Which means that:
- we can't use the auto-preshift feature here, and have to store this
  weird-looking (and quite frankly, completely unnecessary) sprite sheet in
  its entirety, and that
- ZUN, at least during TH01's development, did not have a sprite converter,
  and directly hardcoded these dot patterns in the C++ code.
The waste continues with the class itself. 69 bytes, with 22 bytes
outright unused, and 11 not really necessary. As for actual innovations
though, we've got
📝 another 32-bit fixed-point type, this
time actually using 8 bits for the fractional part. Therefore, the
ray position is tracked to the 1/256th of a pixel, using the full
precision of master.lib's 8-bit sin() and cos() lookup
tables.
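As a sketch of how such a type plays out in practice – assuming master.lib's
Cos8(), whose lookup table returns values scaled to ±256 (the exact name is
an assumption):
// Hypothetical reconstruction: a 24.8 fixed-point X coordinate, advancing
// along the laser's angle at sub-pixel precision.
typedef int32_t SubpixelQ8; // 24 integer bits, 8 fractional bits

inline int advance_ray_x(SubpixelQ8 *ray_x, int speed, unsigned char angle)
{
	*ray_x += ((int32_t)speed * Cos8(angle)); // ±256 scale = 8 fractional bits
	return (int)(*ray_x >> 8); // whole-pixel part, for blitting
}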
Unblitting is also remarkably efficient: It's only done once the laser
stopped extending and started moving, and only for the exact pixels at the
start of the ray that the laser traveled by in a single frame. If only the
ray part were also rendered as efficiently – it's fully blitted every frame,
right next to the collision detection for each row of the ray.
With a public interface of two functions (spawn, and update / collide /
unblit / render), that's superficially all there is to lasers in this
game. There's another (apparently inlined) function though, to both reset
and, uh, "fully unblit" all lasers at the end of every boss fight… except
that it fails hilariously at doing the latter, and ends up effectively
unblitting random 32-pixel line segments, due to ZUN confusing both the
coordinates and the parameter types for the line unblitting function.
A while ago, I was asked about
this crash that tends to
happen when defeating Elis. And while you can clearly see the random
unblitted line segments that are missing from the sprites, I don't
quite think we've found the cause for the crash, since the
📝 line unblitting function used there does clip its coordinates to the VRAM range.
Next up: The final piece of image format code in TH01, covering Reimu's
sprites!
P0120
TH01 decompilation (.BOS format, part 4/5 + Shape blitting)
P0121
TH01 decompilation (Invincibility sprites, VRAM effects)
💰 Funded by:
Yanga
Back to TH01, and its boss sprite format… with a separate class for
storing animations that only differs minutely from the
📝 regular boss entity class I covered last time?
Decompiling this class was almost free, and the main reason why the first
of these pushes ended up looking pretty huge.
Next up were the remaining shape drawing functions from the code segment
that started with the .GRC functions. P0105 already started these with the
(surprisingly sanely implemented) 8×8 diamond, star, and… uh, snowflake
(?) sprites, prominently seen in the Konngara, Elis, and Sariel fights,
respectively.
Now, we've also got:
- ellipse arcs with a customizable angle distance between the individual
  dots – mostly just used for drawing full circles, though,
- line loops – which are only used for the rotating white squares around
  Mima, meaning that the white star in the YuugenMagan fight got a completely
  redundant reimplementation –
- and, the surprisingly weirdest one, drawing the red invincibility sprites.
The weirdness becomes obvious with just a single screenshot:
First, we've got the obvious issue of the sprites not being clipped at the
right edge of VRAM, with the rightmost pixels in each row of the sprite
extending to the beginning of the next row. Well, that's just what you get
if you insist on writing unique low-level blitting code for the majority
of the individual sprites in the game… 🤷
More importantly though, the sprite sheet looks like this:
So how do we even get these fully filled red diamonds?
Well, turns out that the sprites are never consistently unblitted during
their 8 frames of animation. There is a function that looks
like it unblits the sprite… except that it starts by enabling the
GRCG and… reading from the first bitplane on the background page?
If this was the EGC, such a read would fill some internal registers with
the contents of all 4 bitplanes, which can then subsequently be blitted to
all 4 bitplanes of any VRAM page with a single memory write. But with the
GRCG in RMW mode, reads do nothing special, and simply copy the memory
contents of one bitplane to the read destination. Maybe ZUN thought
that setting the RMW color to red
also sets some internal 4-plane mask register to match that color?
Instead, the rather random pixels read from the first bitplane are then
used as a mask for a second blit of the same red sprite.
Effectively, this only really "unblits" the invincibility pixels that are
drawn on top of Reimu's sprite. Since Reimu is drawn first, the
invincibility sprites are overwritten anyway. But due to the palette color
layout of Reimu's sprite, its pixels end up fully masking away any
invincibility sprite pixels in that second blit, leaving VRAM untouched as
a result. Anywhere else though, this animation quickly turns into the
union of all animation frames.
Then again, if that 16-dot-aligned rectangular unblitting function is all
you know about the EGC, and you can't be bothered to write a perfect
unblitter for 8×8 sprites, it becomes obvious why you wouldn't want to use
it:
Because Reimu would barely be visible under all that flicker. In
comparison, those fully filled diamonds actually look pretty good.
After all that, the remaining time wouldn't have been enough for the next
few essential classes, so I closed out the push with three more VRAM
effects instead:
Single-bitplane pixel inversion inside a 32×32 square – the main effect
behind the discoloration seen in the bomb animation, as well as the
expanding squares at the end of Kikuri's and Sariel's entrance
animation
EGC-accelerated VRAM row copies – the second half of smooth and fully
hardware-accelerated scrolling for backgrounds that are twice the size of
VRAM
And finally, the VRAM page content transition function using meshed 8×8
squares, used for the blocky transition to Sariel's first and second phases.
Which is quite ridiculous in just how needlessly bloated it is. I'm positive
that this sort of thing could have also been accelerated using the PC-98's
EGC… although simply writing better C would have already gone a long way.
The function also comes with three unused mesh patterns.
And with that, ReC98, as a whole, is not only ⅓ done, but I've also fully
caught up with the feature backlog for the first time in the history of
this crowdfunding! Time to go into maintenance mode then, while we wait
for the next pushes to be funded. Got a huge backlog of tiny maintenance
issues to address at a leisurely pace, and of course there's also the
📝 16-bit build system waiting to be
finished.
So, TH05 OP.EXE. The first half of this push started out
nicely, with an easy decompilation of the entire player character
selection menu. Typical ZUN quality, with not much to say about it. While
the overall function structure is identical to its TH04 counterpart, the
two games only really share small snippets inside these functions, and do
need to be RE'd separately.
The high score viewing (not registration) menu would have been next.
Unfortunately, it calls one of the GENSOU.SCR loading
functions… which are all a complete mess that still needed to be sorted
out first. 5 distinct functions in 6 binaries, and of course TH05 also
micro-optimized its MAIN.EXE version to directly use the DOS
INT 21h file loading API instead of master.lib's wrappers.
Could have all been avoided with a single method on the score data
structure, taking a player character ID and a difficulty level as
parameters…
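Something like this, say – completely hypothetical, purely to illustrate the
point:
// The single method that could have replaced all 5 distinct loading
// functions across the 6 binaries (sketch; names and fields invented):
struct scoredat_t {
	// [… score fields …]

	int load(int playchar, int difficulty); // one implementation, everywhere
};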
So, no score menu in this push then. Looking at the other end of the ASM
code though, we find the starting functions for the main game, the Extra
Stage, and the demo replays, which did fit perfectly to round out
this push.
Which is where we find an easter egg! 🥚 If you've ever looked into
怪綺談2.DAT, you might have noticed 6 .REC files
with replays for the Demo Play mode. However, the game only ever seems to
cycle between 4 replays. So what's in the other two, and why are they
40 KB instead of just 10 KB like the others? Turns out that they
combine into a full Extra Stage Clear replay with Mima, with 3 bombs and 1
death, obviously recorded by ZUN himself. The split into two files for the
stage (DEMO4.REC) and boss (DEMO5.REC) portion is
merely an attempt to limit the amount of simultaneously allocated heap
memory.
To watch this replay without modding the game, unlock the Extra Stage with
all 4 characters, then hold both the ⬅️ left and ➡️ right arrow keys in the
main menu while waiting for the usual demo replay.
I can't possibly be the first one to discover this, but I couldn't find
any other mention of it. Edit (2021-03-15): ZUN did in fact document this replay
in Section 6 of TH05's OMAKE.TXT, along with the exact method
to view it.
Thanks
to Popfan for the discovery!
Here's a recording of the whole replay:
Note how the boss dialogue is skipped. MAIN.EXE actually
contains no less than 6 if() branches just to distinguish
this overly long replay from the regular ones.
I'd really like to do the TH04 and TH05 main menus in parallel, since we
can expect a bit more shared code after all the initial differences.
Therefore, I'm going to put the next "anything" push towards covering the
TH04 version of those functions. Next up though, it's back to TH01, with
more redundant image format code…
🎉 TH05 is finally fully position-independent! 🎉 To celebrate this
milestone, -Tom- coded a little demo, which we recorded on
both an emulator and on real PC-98 hardware:
You can now freely add or remove both data and code anywhere in TH05, by
editing the ReC98 codebase, writing your mod in ASM or C/C++, and
recompiling the code. Since all absolute memory addresses have now been
converted to labels, this will work without causing any instability. See
the position independence section in the FAQ
for a more thorough explanation about why this was a problem.
By extension, this also means that it's now theoretically possible
to use a different compiler on the source code. But:
What does this not mean?
The original ZUN code hasn't been completely reverse-engineered yet, let
alone decompiled. As the final PC-98 Touhou game, TH05 also happens to
have the largest amount of actual ZUN-written ASM that can't ever
be decompiled within ReC98's constraints of a legit source code
reconstruction. But a lot of the originally-in-C code is also still in
ASM, which might make modding a bit inconvenient right now. And while I
have decompiled a bunch of functions, I selected them largely
because they would help with PI (as requested by the backers), and not
because they are particularly relevant to typical modding interests.
As a result, the code might also be a bit confusingly organized. There's
quite a conflict between various goals there: On the one hand, I'd like to
only have a single instance of every function shared with earlier games,
as well as reduce ZUN's code duplication within a single game. On the
other hand, this leads to quite a lot of code being scattered all over the
place and then #include-pasted back together, except for the
places where
📝 this doesn't work, and you'd have to use multiple translation units anyway…
I'm only beginning to figure out the best structure here, and some more
reverse-engineering attention surely won't hurt.
Also, keep in mind that the code still targets x86 Real Mode. To work
effectively in this codebase, you'd need some familiarity with
memory
segmentation, and how to express it all in code. This tends to make
even regular C++ development about an order of magnitude harder,
especially once you want to interface with the remaining ASM code. That
part made -Tom- struggle quite a bit with implementing his
custom scripting language for the demo above. For now, he built that demo
on quite a limited foundation – which is why he also chose to release
neither the build nor the source publicly for the time being.
So yeah, you're definitely going to need the TASM and Borland C++ manuals
there.
tl;dr: We now know everything about this game's data, but not quite
as much about this game's code.
So, how long until source ports become a realistic project?
You probably want to wait for 100% RE, which is when everything
that can be decompiled has been decompiled.
Unless your target system is 16-bit Windows, in which case you could
theoretically start right away. 📝 Again,
this would be the ideal first system to port PC-98 Touhou to: It would
require all the generic portability work to remove the dependency on PC-98
hardware, thus paving the way for a subsequent port to modern systems,
yet you could still just drop in any undecompiled ASM.
Porting to IBM-compatible DOS would only be a harder and less universally
useful version of that. You'd then simply exchange one architecture, with
its idiosyncrasies and limits, for another, with its own set of
idiosyncrasies and limits. (Unless, of course, you already happen to be
intimately familiar with that architecture.) The fact that master.lib
provides DOS/V support would have only mattered if ZUN consistently used
it to abstract away PC-98 hardware at every single place in the code,
which is definitely not the case.
The list of actually interesting findings in this push is,
📝 again, very short. Probably the most
notable discovery: The low-level part of the code that renders Marisa's
laser from her TH04 Illusion Laser shot type is still present in
TH05. Insert wild mass guessing about potential beta version shot types…
Oh, and did you know that the order of background images in the Extra
Stage staff roll differs by character?
Next up: Finally driving up the RE% bar again, by decompiling some TH05
main menu code.
P0117
Rebuilding ZUN.COM + overall position independence
💰 Funded by:
[Anonymous]
🏷️ Tags:
Wouldn't it be a bit disappointing to have TH05 completely
position-independent, but have it still require hex-editing of the
original ZUN.COM to mod its gaiji characters? As in, these
custom "text" glyphs, available to the PC-98 text RAM:
Especially since we now even have a sprite converter… the lack of which
was exactly 📝 what made rebuilding ZUN.COM not that worthwhile before.
So, before the big release, let's get all the remaining
ZUN.COM sub-binaries of TH04 and TH05 dumped into .ASM files,
and re-assembled and linked during the build process.
This is also the moment in which Egor's 2018
reimplementation of O. Morikawa's comcstm finally gets
to shine. Back then, I considered it too early to even bother with
ZUN.COM and reimplementing the .COM wrapper that ZUN
originally used to bundle multiple smaller executables into that single
binary. But now that the time is right, it is nice to have that
code, as it allowed me to get these rebuilds done in half a push.
Otherwise, it would have surely required one or two dedicated ones.
Since we like correctness here, newly dumped ZUN code means that it also
has to be included in the RE%
baseline calculation. This is why TH04's and TH05's overall RE% bars
have gone back a tiny bit… in case you remember how they previously
looked. After all, I would like to figure
out where all that memory allocated during TH04's and TH05's memory check
is freed, if at all.
Alright, one half of a push left… Y'know, getting rid of those last few PI
false positives is actually one of the most annoying chores in this
project, and quite stressful as well: I have to convince myself that the
remaining false positives are, in fact, not memory references, but with
way too little time for in-depth RE and to denote what they are
instead. In that situation, everyone (including myself!)
is anticipating that PI goal, and no one is really interested in RE.
(Well… that is, until they actually get to developing their mod. But more
on that tomorrow.) Which means that it boils
down to quite some hasty, dumb, and superficial RE around those remaining
numbers.
So, in the hope of making it less annoying for the other 4 games in the
future, let's systematically cover the sources of those remaining false
positives in TH05, over all games. I/O port accesses with either the port
or the value in registers (and thus, no longer as an immediate argument to
the IN or OUT instructions, which the PI counter
can clearly ignore), palette color arithmetic, or heck, 0xFF constants that
obviously just mean "-1" and are not a reference to offset 0xFF in
the data segment. All of this, of course, once again had a way bigger
effect on everything but an almost position-independent TH05… but
hey, that's the sort of thing you reserve the "anything" pushes for. And
that's also how we get some of the single biggest PI% gains we have seen
so far, and will be seeing before the 100% PI mark. And yes, those will
continue in the next push.
Finally, after a long while, we've got two pushes with barely anything to
talk about! Continuing the road towards 100% PI for TH05, these were
exactly the two pushes that TH05 MAINE.EXE PI was estimated
to additionally cost, relative to TH04's. Consequently, they mostly went
to TH05's unique data structures in the ending cutscenes, the score name
registration menu, and the
staff roll.
A unique feature in there is TH05's support for automatic text color
changes in its ending scripts, based on the first full-width Shift-JIS
codepoint in a line. The \c=codepoint,color
commands at the top of the _ED??.TXT set up exactly this
codepoint→color mapping. As far as I can tell, TH05 is the only Touhou
game with a feature like this – even the Windows Touhou games went back to
manually spelling out each color change.
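If you ever wanted to replicate that feature, it boils down to a lookup like this sketch – the mechanism matches the description above, but all names and the array capacity are my assumptions:
// Filled from the \c=codepoint,color commands at the top of a script.
struct color_mapping_t {
	unsigned short codepoint; // full-width Shift-JIS codepoint
	int color;
};
color_mapping_t color_map[8]; // capacity is an assumption
int color_mappings = 0;

int color_for_line(const unsigned char *line, int default_color)
{
	// The first full-width codepoint of the line selects the color.
	unsigned short first = (line[0] << 8) | line[1];
	for(int i = 0; i < color_mappings; i++) {
		if(color_map[i].codepoint == first) {
			return color_map[i].color;
		}
	}
	return default_color;
}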
The orb particles in TH05's staff roll also try to be a bit unique by
using 32-bit X and Y subpixel variables for their current position. With
still just 4 fractional bits, I can't really tell yet whether the extended
range was actually necessary. Maybe due to how the "camera scrolling"
through "space" was implemented? All other entities were pretty much the
usual fare, though.
12.4, 4.4, and now a 28.4 fixed-point format… yup,
📝 C++ templates were
definitely the right choice.
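As a sketch of why – a single class template covers all three formats, with only the integer type swapped out. (Invented names; the actual class in the repository is more involved.)
// 4 fractional bits in all three cases; only the integer width differs.
template <class T> struct FixedPoint {
	T v;

	FixedPoint(T pixels) : v(pixels << 4) {}
	T to_pixel() const { return (v >> 4); }
};
typedef FixedPoint<char> Subpixel4;  //  4.4
typedef FixedPoint<int>  Subpixel12; // 12.4
typedef FixedPoint<long> Subpixel28; // 28.4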
At the end of its staff roll, TH05 not only displays
the usual performance
verdict, but then scrolls in the scores at the end of each stage
before switching to the high score menu. The simplest way to smoothly
scroll between two full screens on a PC-98 involves a separate bitmap…
which is exactly what TH05 does here, reserving 28,160 bytes of its global
data segment for just one overly large monochrome 320×704 bitmap that
both screens are rendered to. That's… one benefit of splitting your
game into multiple executables, I guess?
Not sure if it's common knowledge that you can actually scroll back and
forth between the two screens with the Up and Down keys before moving to
the score menu. I surely didn't know that before. But it makes sense –
might as well get the most out of that memory.
The necessary groundwork for all of this may have actually made
TH04's (yes, TH04's) MAINE.EXE technically
position-independent. Didn't quite reach the same goal for TH05's – but
what we did reach is ⅔ of all PC-98 Touhou code now being
position-independent! Next up: Celebrating even more milestones, as
-Tom- is about to finish development on his TH05
MAIN.EXE PI demo…
Alright, tooling and technical debt. Shouldn't really be much to talk
about… oh, wait, this is still ReC98.
For the tooling part, I finished up the remaining ergonomics and error
handling for the
📝 sprite converter that Jonathan Campbell contributed two months ago.
While I familiarized myself with the tool, I've actually run into some
unreported errors myself, so this was sort of important to me. Still got
no command-line help in there, but the error messages can now do that job
probably even better, since we would have had to write them anyway.
So, what's up with the technical debt then? Well, by now we've accumulated
quite a number of 📝 ASM code slices that
need to be either decompiled or clearly marked as undecompilable. Since we
define those slices as "already reverse-engineered", that decision won't
affect the numbers on the front page at all. But for a complete
decompilation, we'd still have to do this someday. So, rather than
incorporating this work into pushes that were purchased with the
expectation of measurable progress in a certain area, let's take the
"anything goes" pushes, and focus entirely on that during them.
The second code segment seemed like the best place to start with this,
since it affects the largest number of games simultaneously. Starting with
TH02, this segment contains a set of random "core" functions needed by the
binary. Image formats, sounds, input, math, it's all there in some
capacity. You could maybe call it all "libzun" or something like
that? But for the time being, I simply went with the obvious name,
seg2. Maybe I'll come up with something more convincing in
the future.
Oh, but wait, why were we assembling all the previous undecompilable ASM
translation units in the 16-bit build part? By moving those to the 32-bit
part, we don't even need a 16-bit TASM in our list of dependencies, as
long as our build process is not fully 16-bit.
And with that, ReC98 now also builds on Windows 95, and thus, every 32-bit
Windows version. 🎉 Which is certainly the most user-visible improvement
in all of these two pushes.
Back in 2015, I already decompiled all of TH02's seg2
functions. As suggested by the Borland compiler, I tried to follow a "one
translation unit per segment" layout, bundling the binary-specific
contents via #include. In the end, it required two
translation units – and that was even after manually inserting the
original padding bytes via #pragma codestring… yuck. But it
worked, compiled, and kept the linker's job (and, by extension,
segmentation worries) to a minimum. And as long as it all matched the
original binaries, it still counted as a valid reconstruction of ZUN's
code.
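In other words, the layout looked roughly like this, with invented file names:
// th02_seg2.cpp – one translation unit covering the whole segment.
// Every function group is #included in its original segment order,
// with ZUN's padding bytes manually re-inserted via #pragma
// codestring in between.
#include "th02/snd_load.cpp"
#include "th02/input.cpp"
// …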
However, that idea ultimately falls apart once TH03 starts mixing
undecompilable ASM code in between C functions. Now, we officially have no
choice but to use multiple C and ASM translation units, with maybe only
just one or two #includes in them…
…or we finally start reconstructing the actual seg2 library,
turning every sequence of related functions into its own translation unit.
This way, we can simply reuse the once-compiled .OBJ files for all the
binaries those functions appear in, without requiring that additional
layer of translation units mirroring the original segmentation.
The best example for this is
TH03's
almost undecompilable function that generates a lookup table for
horizontally flipping 8 1bpp pixels. It's part of every binary since
TH03, but only used in that game. With the previous approach, we would
have had to add 9 C translation units, which would all have just
#included that one file. Now, we simply put the .OBJ file
into the correct place on the linker command line, as soon as we can.
💡 And suddenly, the linker just inserts the correct padding bytes itself.
The most immediate gains there also happened to come from TH03. Which is
also where we did get some tiny RE% and PI% gains out of this after
all, by reverse-engineering some of its sprite blitting setup code. Sure,
I should have done even more RE here, to also cover those 5 functions at
the end of code segment #2 in TH03's MAIN.EXE that were in
front of a number of library functions I already covered in this push. But
let's leave that to an actual RE push 😛
All in all though, I was just getting started with this; the real
gains in terms of removed ASM files are still to come. But in the
meantime, the funding situation has become even better in terms of
allowing me to focus on things nobody asked for. 🙂 So here's a slightly
better idea: Instead of spending two more pushes on this, let's shoot for
TH05 MAINE.EXE position independence next. If I manage to get
it done, we'll have a 100% position-independent TH05 by the time
-Tom- finishes his MAIN.EXE PI demo, rather
than the 94% we'd get from just MAIN.EXE. That's bound to
make a much better impression on all the people who will then
(re-)discover the project.
P0001
Build system improvements, part 1 (Using Tup for the 32-bit build)
💰 Funded by:
GhostPhanom
🏷️ Tags:
(tl;dr: ReC98 has switched to Tup for
the 32-bit build. You probably want to get
💾 this build of Tup, and put it somewhere in your
PATH. It's optional, and always will be, but highly
recommended.)
P0001! Reserved for the delivery of the very first financial contribution
I've ever received for ReC98, back in January 2018. GhostPhanom
requested the exact opposite of immediate results, which motivated me to
go on quite a passionate quest for the perfect ReC98 build system. A quest
that went way beyond the crowdfunding…
Makefiles are a decent idea in theory: Specify the targets to generate,
the source files these targets depend on and are generated from, and the
rules to do the generating, with some helpful shorthand syntax. Then, you
have a build dependency graph, and your make tool of choice
can provide minimal rebuilds of only the targets whose sources changed
since the last make call. But, uh… wait, this is C/C++ we're
talking about, and doesn't pretty much every source file come with a
second set of dependent source files, namely, every single
#include in the source file itself? Do we really
have to duplicate all these inside the Makefile, and keep it in sync with the source file? 🙄
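To spell out the problem with a hypothetical example:
// main.cpp – a rule like "main.obj: main.cpp" only knows about this
// file. Both of these #includes are dependencies too, but the Makefile
// won't rebuild main.obj when they change unless someone manually
// mirrors every single one of them:
#include "th01/score.h" // invented paths, for illustration
#include "th01/hud.h"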
This fact alone means that Makefiles are inherently unsuited for
any language with an #include feature… that is, pretty
much every language out there. Not to mention other aspects like changes
to the compilation command lines, or the build rules themselves, all of
which require metadata of the previous build to be persistently stored in
some way. I have no idea why such a trash technology is even touted as a
viable build tool for code.
So, I decided to just
write my own build system, tailor-made for the needs of ReC98's 16-bit
build process, and combining a number of experimental ideas. Which is
still not quite bug-free and ready for public use, given that the
entire past year has kept me busy with actual tangible RE and PI progress.
What did finally become ready, however, is the improvement for the
32-bit build part, and that's what we've got here.
💭 Now, if only there was a build system that would perfectly track
dependencies of any compiler it calls, by injecting code and
hooking file opening syscalls. It'd be completely unrealistic for it to
also run on DOS (and we probably don't want to traverse a graph database
in a cycle-limited DOSBox), but it would be perfect for our 32-bit build
part, as long as that one still exists.
Sure, it might seem really minor to worry about not unconditionally
rebuilding all 32-bit .asm files, which just takes a couple
of seconds anyway. But minimal rebuilds in the 32-bit part also provide
the foundation for minimal rebuilds in the 16-bit part – and those
TLINK invocations do take quite some time after all.
Using Tup for ReC98 was an idea that dated back to January 2017. Back
then, I already opened
the pull request with a fix to allow Tup to work together with 32-bit
TASM. As much as I love Tup though, the fact that it only worked on
64-bit Windows ≥Vista would have meant that we had to exchange perfect
dependency tracking for the ability to build on 32-bit and older Windows
versions at all. For a project that relies on DOS compilers, this
would have been exactly the wrong trade-off to make.
What's worse though: TLINK fails to run on modern 32-bit
Windows with Loader error (0000) : Unrecognized Error.
Therefore, the set of systems that Tup runs on, and the set of systems
that can actually compile ReC98's 16-bit build part natively, would have
been exactly disjoint, with no OS getting to use both at the same time.
So I've kept using Tup for only my own development, but indefinitely
shelved the idea of making it the official build system, due to those
drawbacks. Recently though, it all came together:
The tup generate sub-command can generate a
.bat file that does a full dumb rebuild of everything, which
can serve as a fallback option for systems that can't run Tup. All we have
to do is to commit that .bat file to the ReC98 Git repository
as well, and tell build32b.bat to fall back on that if Tup
can't be run. That alone would have given us the benefits of Tup without
being worse than the current dumb build process.
In the meantime, other contributors improved Tup's own build process to
the point where 32-bit builds were simple enough to accomplish from the
comfort of a WSL terminal.
Two commits of mine
later, and 32-bit Windows Tup was fully functional. Another one later,
and 32-bit Windows Tup even gained one potential advantage over its 64-bit
counterpart. Since it only has to support DLL injection into 32-bit
programs, it doesn't need a separate 32-bit binary for retrieving function
pointers to the 32-bit version of Windows' DLL loading syscalls. Weirdly
enough, Windows Defender on current Windows 10 falsely flags that binary as
malware, despite it doing nothing but printing those pointer values to
stdout. 🤷
I've also added it to the DevKit, for any newcomers to ReC98.
After the switch to Tup and the fallback option, I extensively tested
building ReC98 on all operating systems I had lying around. And holy cow,
so much in that build was broken beyond belief. In the end, the solution
involved just fully rebuilding the entire 16-bit part by default.
Which, of course, nullifies any of the
advantages we might have gotten from a Makefile in the first place, due to
just how unreliable they are. If you had problems building ReC98 in the
past, try again now!
And sure, it would certainly be possible to also get Tup working on
Windows ≤XP, or 9x even. But I leave that to all those tinkerers out there
who are actually motivated to keep those OSes alive. My work here is
done – we now have a build process that is optimal on 32-bit
Windows ≥Vista, and still functional and reliable on 64-bit
Windows, Linux, and everything down to Windows 98 SE, and therefore also
real PC-98 hardware. Pretty good, I'd say.
(If it weren't for that weird crash of the 16-bit TASM.EXE in
that Windows 95 command prompt I've tried it in, it would also work on
that OS. Probably just a misconfiguration on my part?)
Now, it might look like a waste of time to improve a 32-bit build part
that won't even exist anymore once this project is done. However, a fully
16-bit DOS build will only make sense after
master.lib has been turned into a proper library, linked in by
TLINK rather than #included in the big .ASM
files.
This affects all games. If master.lib's data was consistently placed at
the beginning or end of each data segment, this would be no big deal, but
it's placed somewhere else in every binary.
So, this will only make sense sometime around 90% overall PI, and maybe
~50% RE in each game. Which is something else than 50% overall –
especially since it includes TH02, the objectively worst Touhou game,
which hasn't received any dedicated funding ever.
Then, it will probably still require a couple of dedicated pushes to
move all the remaining data to C land.
Oh, and my 16-bit build system project also needs to be done before that,
because, again, Makefiles are trash and we shouldn't rely on them even
more.
And who knows whether this project will get funded for that long. So yeah,
the 32-bit build part will stay with us for quite some more time, and for
all upcoming PI milestones. And with the current build process, it's
pretty much the most minor among all the minor issues I can think of.
Let's all enjoy the performance of a 32-bit build while we can 🙂
Next up: Paying some technical debt while keeping the RE% and PI% in place.
P0111
TH05 RE (Code around the final MAIN.EXE data references, part 1/2)
P0112
TH05 RE (Code around the final MAIN.EXE data references, part 2/2)
💰 Funded by:
[Anonymous], Blue Bolt
🏷️ Tags:
Only one newly ordered push since I've reopened the store? Great, that's
all the justification I needed for the extended maintenance delay that was
part of these two pushes 😛
Having to write comments to explain whether coordinates are relative to
the top-left corner of the screen or the top-left corner of the playfield
has finally become old. So, I introduced
distinct
types for all the coordinate systems we typically encounter, applying
them to all code decompiled so far. Note how the planar nature of PC-98
VRAM meant that X and Y coordinates also had to be different from each
other. On the X side, there's mainly the distinction between the
[0; 640] screen space and the corresponding [0; 80] VRAM byte
space. On the Y side, we also have the [0; 400] screen space, but
the visible area of VRAM might be limited to [0; 200] when running in
the PC-98's line-doubled 640×200 mode. A VRAM Y coordinate also always
implies an added offset for vertical scrolling.
During all of the code reconstruction, these types can only have a
documenting purpose. Turning them into anything more than just
typedefs to int, in order to define conversion
operators between them, simply won't recompile into identical binaries.
Modding and porting projects, however, now have a nice foundation for
doing just that, and can entirely lift coordinate system transformations
into the type system, without having to proofread all the meaningless
int declarations themselves.
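As a sketch of what that foundation enables in a port – within ReC98 itself, these have to stay plain typedefs:
// Port-only sketch, following the descriptions above.
struct vram_byte_x_t { int v; }; // [0; 80]

struct screen_x_t {
	int v; // [0; 640]

	// Planar VRAM packs 8 pixels into one byte:
	vram_byte_x_t to_vram() const {
		vram_byte_x_t ret = { v / 8 };
		return ret;
	}
};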
So, what was left in terms of memory references? EX-Alice's fire waves
were our final unknown entity that can collide with the player. Decently
implemented, with little to say about them.
That left the bomb animation structures as the one big remaining PI
blocker. They started out nice and simple in TH04, with a small 6-byte
star animation structure used for both Reimu and Marisa. TH05, however,
gave each character her own animation… and what the hell is going
on with Reimu's blue stars there? Nope, not going to figure this out on
ASM level.
A decompilation first required some more bomb-related variables to be
named though. Since this was part of a generic RE push, it made sense to
do this in all 5 games… which then led to nice PI gains in anything
but TH05. Most notably, we now got the
"pulling all items to player" flag in TH04 and TH05, which is
actually separate from bombing. The obvious cheat mod is left as an
exercise to the reader.
So, TH05 bomb animations. Just like the
📝 custom entity types of this game, all 4
characters share the same memory, with a superficially identical 10-byte
structure.
But let's just look at the very first field. Seen from a low level, it's a
simple struct { int x, y; } pos, storing the current position
of the character-specific bomb animation entity. But all 4 characters use
this field differently:
For Reimu's blue stars, it's the top-left position of each star, in the
12.4 fixed-point format. But unlike the vast majority of these values in
TH04 and TH05, it's relative to the top-left corner of the
screen, not the playfield. Much better represented as
struct { Subpixel screen_x, screen_y; } topleft.
For Marisa's lasers, it's the center of each circle, as a regular 12.4
fixed-point coordinate, relative to the top-left corner of the playfield.
Much better represented as
struct { Subpixel x, y; } center.
For Mima's shrinking circles, it's the center of each circle in regular
pixel coordinates. Much better represented as
struct { screen_x_t x; screen_y_t y; } center.
For Yuuka's spinning heart, it's the top-left corner in regular pixel
coordinates. Much better represented as
struct { screen_x_t x; screen_y_t y; } topleft.
And yes, singular. The game is actually smart enough to only store a single
heart, and then create the rest of the circle on the fly. (If it were even
smarter, it wouldn't even use this structure member, but oh well.)
Therefore, I decompiled it as 4 separate structures once again, bundled
into a union of arrays.
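Roughly like this – only the first field of each 10-byte structure is shown, and BOMB_COUNT is a placeholder:
union bomb_animation_t {
	struct { struct { Subpixel screen_x, screen_y; } topleft; /* … */ } reimu[BOMB_COUNT];
	struct { struct { Subpixel x, y; } center; /* … */ } marisa[BOMB_COUNT];
	struct { struct { screen_x_t x; screen_y_t y; } center; /* … */ } mima[BOMB_COUNT];
	struct { struct { screen_x_t x; screen_y_t y; } topleft; /* … */ } yuuka[BOMB_COUNT]; // only [0] is used
};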
As for Reimu… yup, that's some pointer arithmetic straight out of
Jigoku* for setting and updating the positions of the falling star
trails. While that certainly required several
comments to wrap my head around the current array positions, the one "bug"
in all this arithmetic luckily has no effect on the game.
There is a small glitch with the growing circles, though. They are
spawned at the end of the loop, with their position taken from the star
pointer… but after that pointer has already been incremented. On
the last loop iteration, this leads to an out-of-bounds structure access,
with the position taken from some unknown EX-Alice data, which is 0 during
most of the game. If you look at the animation, you can easily spot these
bugged circles, consistently growing from the top-left corner (0, 0)
of the playfield:
After all that, there was barely enough remaining time to filter out and
label the final few memory references. But now, TH05's
MAIN.EXE is technically position-independent! 🎉
-Tom- is going to work on a pretty extensive demo of this
unprecedented level of efficient Touhou game modding. For a more impactful
effect of both the 100% PI mark and that demo, I'll be delaying the push
covering the remaining false positives in that binary until that demo is
done. I've accumulated a pretty huge backlog of minor maintenance issues
by now…
Next up though: The first part of the long-awaited build system
improvements. I've finally come up with a way of sanely accelerating the
32-bit build part on most setups you could possibly want to build ReC98
on, without making the building experience worse for the other few setups.
P0110
TH05 RE (Shinki and EX-Alice background animation structures)
💰 Funded by:
[Anonymous], Blue Bolt
🏷️ Tags:
… and just as I explained 📝 in the last post
how decompilation is typically more sensible and efficient than ASM-level
reverse-engineering, we have this push demonstrating a counter-example.
The reason why the background particles and lines in the Shinki and
EX-Alice battles contributed so much to position dependence was simply
because they're accessed in a relatively large amount of functions, one
for each different animation. Too many to spend the remaining precious
crowdfunded time on reverse-engineering or even decompiling them all,
especially now that everyone anticipates 100% PI for TH05's
MAIN.EXE.
Therefore, I only decompiled the two functions of the line structure that
also demonstrate best how it works, which in turn also helped with RE.
Sadly, this revealed that we actually can't 📝 overload operator =() to get
that nice assignment syntax for 12.4 fixed-point values, because one of
those new functions relies on Turbo C++'s built-in optimizations for
trivially copyable structures. Still, impressive that this abstraction
caused no other issues for almost one year.
As for the structures themselves… nope, nothing to criticize this time!
Sure, one good particle system would have been awesome, instead of having
separate structures for the Stage 2 "starfield" particles and the one used
in Shinki's battle, with hardcoded animations for both. But given the
game's short development time, that was quite an acceptable compromise,
I'd say.
And as for the lines, there just has to be a reason why the game
reserves 20 lines per set, but only renders lines #0, #6, #12, and #18.
We'll probably see once we get to look at those animation functions more
closely.
This was quite a 📝 TH03-style RE push,
which yielded way more PI% than RE%. But now that that's done, I can
finally not get distracted by all that stuff when looking at the
list of remaining memory references. Next up: The last few missing
structures in TH05's MAIN.EXE!
P0109
TH04/TH05 decompilation (Boss movement / Bullet group tuning)
💰 Funded by:
[Anonymous], Blue Bolt
🏷️ Tags:
Back to TH05! Thanks to the good funding situation, I can strike a nice
balance between getting TH05 position-independent as quickly as possible,
and properly reverse-engineering some missing important parts of the game.
Once 100% PI gets the attention of modders, the code will then be in
better shape, and a bit more usable than if I just rushed that goal.
By now, I'm apparently also pretty spoiled by TH01's immediate
decompilability, after having worked on that game for so long.
Reverse-engineering in ASM land is pretty annoying, after all,
since it basically boils down to meticulously editing a piece of ASM into
something I can confidently call "reverse-engineered". Most of the
time, simply decompiling that piece of code would take just a little bit
longer, but be massively more useful. So, I immediately tried decompiling
with TH05… and it just worked, at every place I tried!? Whatever the issue
was that made 📝 segment splitting so
annoying at my first attempt, I seem to have completely solved it in the
meantime. 🤷 So yeah, backers can now request pretty much any part of TH04
and TH05 to be decompiled immediately, with no additional segment
splitting cost.
(Protip for everyone interested in starting their own ReC project: Just
declare one segment per function, right from the start, then group them
together to restore the original code segmentation…)
Except that TH05 then just throws more of its infamous micro-optimized and
undecompilable ASM at you. 🙄 This push covered the function that adjusts
the bullet group template based on rank and the selected difficulty,
called every time such a group is configured. Which, just like pretty
much all of TH05's bullet spawning code, is one of those undecompilable
functions. If C allowed labels of other functions as goto
targets, it might have been decompilable into something useful to
modders… maybe. But like this, there's no point in even trying.
This is such a terrible idea from a software architecture point of view, I
can't even. Because now, you suddenly have to mirror your C++
declarations in ASM land, and keep them in sync with each other. I'm
always happy when I get to delete an ASM declaration from the codebase
once I've decompiled all the instances where it was referenced. But for
TH05, we now have to keep those declarations around forever. 😕 And all
that for a performance increase you probably couldn't even measure. Oh
well, pulling off Galaxy Brain-level ASM optimizations is kind of
fun if you don't have portability plans… I guess?
If I started a full fangame mod of a PC-98 Touhou game, I'd base it on
TH04 rather than TH05, and backport selected features from TH05 as
needed. Just because it was released later doesn't make it better, and
this is by far not the only one of ZUN's micro-optimizations that just
went way too far.
Dropping down to ASM also makes it easier to introduce weird quirks.
Decompiled, one of TH05's tuning conditions for
stack
groups on Easy Mode would look something like:
case BP_STACK:
	// […]
	if(spread_angle_delta >= 2) {
		stack_bullet_count--;
	}
The fields of the bullet group template aren't typically reset when
setting up a new group. So, spread_angle_delta in the context
of a stack group effectively refers to "the delta angle of the last
spread group that was fired before this stack – whenever that was".
uth05win also spotted this quirk, considered it a bug, and wrote
fanfiction by changing spread_angle_delta to
stack_bullet_count.
As usual for functions that occur in more than one game, I also decompiled
the TH04 bullet group tuning function, and it's perfectly sane, with no
such quirks.
In the more PI-focused parts of this push, we got the TH05-exclusive
smooth boss movement functions, for flying randomly or towards a given
point. Pretty unspectacular for the most part, but we've got yet another
uth05win inconsistency in the latter one. Once the Y coordinate gets close
enough to the target point, it actually speeds up twice as much as the
X coordinate would, whereas uth05win used the same speedup factors for
both. This might make uth05win a couple of frames slower in all boss
fights from Stage 3 on. Hard to measure though – and boss movement partly
depends on RNG anyway.
Next up: Shinki's background animations – which are actually the single
biggest source of position dependence left in TH05.
P0105
TH01 decompilation (.GRC format / Hardcoded sprites, part 1)
P0106
TH01 decompilation (Boss entity classes / .BOS format, part 1/5)
P0107
TH01 decompilation (Boss entity classes / .BOS format, part 2/5)
P0108
TH01 decompilation (Boss entity classes / .BOS format, part 3/5)
💰 Funded by:
Yanga
🏷️ Tags:
And indeed, I got to end my vacation with a lot of image format and
blitting code, covering the final two formats, .GRC and .BOS. .GRC was
nothing noteworthy – one function for loading, one function for
byte-aligned blitting, and one function for freeing memory. That's it –
not even an unblitting function for this one. .BOS, on the other hand…
…has no generic (read: single/sane) implementation, and is only
implemented as methods of some boss entity class. And then again for
Sariel's dress and wand animations, and then again for Reimu's
animations, both of which weren't even part of these 4 pushes. Looking
forward to decompiling essentially the same algorithms all over again… And
that's how TH01 became the largest and most bloated PC-98 Touhou game. So
yeah, still not done with image formats, even at 44% RE.
This means I also had to reverse-engineer that "boss entity" class… yeah,
what else to call something a boss can have multiple of, that may or may
not be part of a larger boss sprite, may or may not be animated, and that
may or may not have an orb hitbox?
All bosses except for Kikuri share the same 5 global instances of this
class. Since renaming all these variables in ASM land is tedious anyway, I
went the extra mile and directly defined separate, meaningful names for
the entities of all bosses. These also now document the natural order in
which the bosses will ultimately be decompiled. So, unless a backer
requests anything else, this order will be:
Konngara
Sariel
Elis
Kikuri
SinGyoku
(code for regular card-flipping stages)
Mima
YuugenMagan
As everyone kind of expects from TH01 by now, this class reveals yet
another… um, unique and quirky piece of code architecture. In
addition to the position and hitbox members you'd expect from a class like
this, the game also stores the .BOS metadata – width, height, animation
frame count, and 📝 bitplane pointer slot
number – inside the same class. But if each of those still corresponds to
one individual on-screen sprite, how can YuugenMagan have 5 eye sprites,
or Kikuri have more than one soul and tear sprite? By duplicating that
metadata, of course! And copying it from one entity to another…
At this point, I feel like I even have to congratulate the game for not
actually loading YuugenMagan's eye sprites 5 times. But then again, 53,760
bytes of waste would have definitely been noticeable in the DOS days.
Makes much more sense to waste that amount of space on an unused C++
exception handler, and a bunch of redundant, unoptimized blitting
functions.
(Thinking about it, YuugenMagan fits this entire system perfectly. And
together with its position in the game's code – last to be decompiled
means first on the linker command line – we might speculate that
YuugenMagan was the first boss to be programmed for TH01?)
So if a boss wants to use sprites with different sizes, there's no way
around using another entity. And that's why Girl-Elis and Bat-Elis are two
distinct entities internally, and have to manually sync their position.
Except that there's also a third one for Attacking-Girl-Elis,
because Girl-Elis has 9 frames of animation in total, and the global .BOS
bitplane pointers are divided into 4 slots of only 8 images each.
Same for SinGyoku, who is split into a sphere entity, a
person entity, and a… white flash entity for all three forms,
all at the same resolution. Or Konngara's facial expressions, which also
require two entities just for themselves.
And once you decompile all this code, you notice just how much of it the
game didn't even use. 13 of the 50 bytes of the boss entity class are
outright unused, and 10 bytes are used for a movement clamping and lock
system that would have been nice if ZUN also used it outside of
Kikuri's soul sprites. Instead, all other bosses ignore this system
completely, and just
party on
the X/Y coordinates of the boss entities directly.
As for the rendering functions, 5 out of 10 are unused. And while those
definitely make up less than half of the code, I still must have
spent at least 1 of those 4 pushes on effectively unused functionality.
Only one of these functions lends itself to some speculation. For Elis'
entrance animation, the class provides functions for wavy blitting and
unblitting, which use a separate X coordinate for every line of the
sprite. But there's also an unused and sort of broken one for unblitting
two overlapping wavy sprites, located at the same Y coordinate. This might
indicate that Elis could originally split herself into two sprites,
similar to TH04 Stage 6 Yuuka? Or it might just have been some other kind
of animation effect, who knows.
After over 3 months of TH01 progress though, it's finally time to look at
other games, to cover the rest of the crowdfunding backlog. Next up: Going
back to TH05, and getting rid of those last PI false positives. And since
I can potentially spend the next 7 weeks on almost full-time ReC98 work,
I've also re-opened the store until October!
P0103
TH01 decompilation (HUD, part 1)
P0104
TH01 decompilation (HUD, part 2)
💰 Funded by:
Ember2528
🏷️ Tags:
It's vacation time! Which, for ReC98, means "relaxing by looking at
something boring and uninteresting that we'll ultimately have to cover
anyway"… like the TH01 HUD.
📝 As noted earlier, all the score, card
combo, stage, and time numbers are drawn into VRAM. Which turns TH01's HUD
rendering from the trivial, gaiji-assisted text RAM writes we see in later
games to something that, once again, requires blitting and unblitting
steps. For some reason though, everything on there is blitted to both
VRAM pages? And that's why the HUD chose to allocate a bunch of .PTN
sprite slots to store the background behind all "animated" elements at the
beginning of a 4-stage scene or boss battle… separately for every
affected 16×16 area. (Looking forward to the completely unnecessary
code in the Sariel fight that updates these slots after the backgrounds
were animated!) And without any separation into helper functions, we end
up with the same blitting calls separately copy-pasted for every single
HUD element. That's why something as seemingly trivial as this isn't even
done after 2 pushes, as we're still missing the stage timer.
Thankfully, the .PTN function signatures come with none of ZUN's little
inconsistencies, so I was able to mostly reduce this copy-pasta to a bunch
of small inline functions and macros. Those interfaces still remain a bit
annoying, though. As a 32×32 format, .PTN merely supports 16×16 sprites
with a separate bunch of functions that take an additional
quarter parameter from 0 to 3, to select one of the 4 16×16
quarters in such a sprite…
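For reference, the shape of that interface in hypothetical signatures – the real names in the codebase differ:
// 32×32 sprite, with X snapped to a multiple of 8:
void ptn_put_8(int left, int top, int ptn_id);

// One 16×16 quarter of a 32×32 sprite, with quarter ∈ [0; 3]:
void ptn_put_quarter_8(int left, int top, int ptn_id, int quarter);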
For life and bomb counts, there was no way around VRAM though, since ZUN
wanted to use more than a single color for those. This is where we find at
least somewhat of a mildly interesting quirk in all of this: Any life
counts greater than the intended 6 will wrap into new rows, with the bombs
in the second row overlapping those excess lives. With the way the rest of
the HUD rendering works, that wrapping code had to be explicitly
written… which means that ZUN did in fact accommodate (his own?) cheating
there.
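A sketch of that wrapping logic – the cap of 6 per row is the game's, the row layout and all names are my assumptions:
for(int i = 0; i < lives; i++) {
	// Lives beyond the intended 6 wrap into the next row – which is
	// exactly where the bomb icons are drawn:
	draw_life_icon(
		LIFE_LEFT + ((i % 6) * CELL_W),
		LIFE_TOP + ((i / 6) * CELL_H)
	);
}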
Now, I promised image formats, and in the middle of this copy-pasta, we
did get one… sort of. MASK.GRF, the red HUD
background, is entirely handled with two small bespoke functions… and
that's all the code we have for this format. Basically, it's a variation
on the 📝 .GRZ format we've seen earlier. It
uses the exact same RLE algorithm, but only has a single byte stream for
both RLE commands and pixel data… as you would expect from an RLE format.
.GRF actually stores 4 separately encoded RLE streams, which suggests that
it was intended for full 16-color images. Unfortunately,
MASK.GRF only contains 4 copies of the same HUD background,
so no unused beta data for us there. The only
thing we could derive from 4 identical bitplanes would be that the
background was originally meant to be drawn using color #15, rather than
the red seen in the final game. Color
#15 is a stage-specific background color that would have made the
HUD blend in quite nicely – in the YuugenMagan fight, it's the changing
color of the 邪 in the background, for example. But
really, with no generic implementation of this format, that's all just
speculation.
Oh, and in case you were looking for a rip of that image:
So yeah, more of the usual TH01 code, with the usual small quirks, but
nothing all too horrible – as expected. Next up: The image formats that
didn't make it into this push.
P0099
TH01 decompilation (Pellets, part 1)
P0100
TH01 decompilation (Pellets, part 2)
P0101
TH01 decompilation (Pellets, part 3)
P0102
TH01 decompilation (Pellets, part 4)
💰 Funded by:
Ember2528, Yanga
🏷️ Tags:
Well, make that three days. Trying to figure out all the details behind
the sprite flickering was absolutely dreadful…
It started out easy enough, though. Unsurprisingly, TH01 had a quite
limited pellet system compared to TH04 and TH05:
The cap is 100, rather than 240 in TH04 or 180 in TH05.
Only 6 special motion functions (with one of them broken and unused)
instead of 10. This is where you find the code that generates SinGyoku's
chase pellets, Kikuri's small spinning multi-pellet circles, and
Konngara's rain pellets that bounce down from the top of the playfield.
A tiny selection of preconfigured multi-pellet groups. Rather than
TH04's and TH05's freely configurable n-way spreads, stacks, and rings,
TH01 only provides abstractions for 2-, 3-, 4-, and 5-way spreads (yup,
no 6-way or beyond), with a fixed narrow or wide angle between the
individual pellets. The resulting pellets are also hardcoded to linear
motion, and can't use the special motion functions. Maybe not the best
code, but still kind of cute, since the generated groups do follow a
clear logic.
As expected from TH01, the code comes with its fair share of smaller,
insignificant ZUN bugs and oversights. As you would also expect
though, the sprite flickering points to the biggest and most consequential
flaw in all of this.
Apparently, it started with ZUN getting the impression that it's only
possible to use the PC-98 EGC for fast blitting of all 4 bitplanes in one
CPU instruction if you blit 16 horizontal pixels (= 2 bytes) at a time.
Consequently, he only wrote one function for EGC-accelerated sprite
unblitting, which can only operate on a "grid" of 16×1 tiles in VRAM. But
wait, pellets are not only just 8×8, but can also be placed at any
unaligned X position…
… yet the game still insists on using this 16-dot-aligned function to
unblit pellets, forcing itself into using a super sloppy 16×8 rectangle
for the job. 🤦 ZUN then tried to mitigate the resulting flickering in two
hilarious ways that just make it worse:
An… "interlaced rendering" mode? This one's activated for all Stage 15
and 20 fights, and separates pellets into two halves that are rendered on
alternating frames. Collision detection with the Yin-Yang Orb and the
player is only done for the visible half, but collision detection with
player shots is still done for all pellets every frame, as are
motion updates – so that pellets don't end up moving half as fast as they
should.
So yeah, your eyes weren't deceiving you. The game does effectively
drop its perceived frame rate in the Elis, Kikuri, Sariel, and Konngara
fights, and it does so deliberately.
📝 Just like player shots, pellets
are also unblitted, moved, and rendered in a single function.
Thanks to the 16×8 rectangle, there's now the (completely unnecessary)
possibility of accidentally unblitting parts of a sprite that was
previously drawn into the 8 pixels right of a pellet. And this
is where ZUN went all-in and went "oh, I
know, let's test the entire 16 pixels, and in case we got an entity
there, we simply make the pellet invisible for this frame! Then
we don't even have to unblit it later!"
Except that this is only done for the first 3 elements of the player
shot array…?! Which don't even necessarily have to contain the 3 shots
fired last. It's not done for the player sprite, the Orb, or, heck,
other pellets that come earlier in the pellet array. (At least
we avoided going 𝑂(𝑛²) there?)
Actually, and I'm only realizing this now as I type this blog post:
This test is done even if the shots at those array elements aren't
active. So, pellets tend to be made invisible based on comparisons
with garbage data.
And then you notice that the player shot
unblit/move/render function is actually only ever called from the
pellet unblit/move/render function on the one global instance
of the player shot manager class, after pellets were unblitted. So, we
end up with a sequence in which all pellets are unblitted before any
shot is moved or rendered – which means that we can't ever unblit a previously rendered shot
with a pellet. Sure, as terrible as this one function call is from
a software architecture perspective, it was enough to fix this issue.
Yet we don't even get the intended positive effect, and walk away with
pellets that are made temporarily invisible for no reason at all. So,
uh, maybe it was all just an attempt at increasing the
framerate on lower-spec PC-98 models?
Yup, that's it, we've found the most stupid piece of code in this game,
period. It'll be hard to top this.
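To make the root cause concrete: the pellet unblitting call boils down to this sketch, with an invented name for the EGC copy helper:
// Copies a rectangle from the background VRAM page back to the
// displayed one, on the EGC's grid of 16×1 tiles.
void pellet_unblit(int left, int top)
{
	// Snapping an 8×8 pellet to that grid drags 8 unrelated pixels
	// along, erasing whatever else was rendered next to it:
	vram_copy_rect_1_to_0(left & ~15, top, 16, 8);
}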
I'm confident that it's possible to turn TH01 into a well-written, fluid
PC-98 game, with no flickering, and no perceived lag, once it's
position-independent. With some more in-depth knowledge and documentation
on the EGC (remember, there's still
📝 this one TH03 push waiting to be funded),
you might even be able to continue using that piece of blitter hardware.
And no, you certainly won't need ASM micro-optimizations – just a bit of
knowledge about which optimizations Turbo C++ does on its own, and what
you'd have to improve in your own code. It'd be very hard to write
worse code than what you find in TH01 itself.
(Godbolt for Turbo C++ 4.0J when?
Seriously though, that would 📝 also be a
great project for outside contributors!)
Oh well. In contrast to TH04 and TH05, where 4 pushes only covered all the
involved data types, they were enough to completely cover all of
the pellet code in TH01. Everything's already decompiled, and we never
have to look at it again. 😌 And with that, TH01 has also gone from by far
the least RE'd to the most RE'd game within ReC98, in just half a year! 🎉
Still, that was enough TH01 game logic for a while.
Next up: Making up for the delay with some
more relaxing and easy pieces of TH01 code, that hopefully make just a
bit more sense than all this garbage. More image formats, mainly.
TH01 pellets are coming up next, and for the first time, we'll have the
chance to move hardcoded sprite data from ASM land to C land. As it would
turn out, bad luck with the 2-byte alignment at the end of
REIIDEN.EXE's data segment pretty much forces us to declare
TH01's pellet sprites in C if we want to decompile the final few pellet
functions without ugly workarounds for the float literals there. And while
I could have just converted them into a C array and called it a day, it
did raise the question of when we are going to do this The Right And
Moddable Way, by auto-converting actual image files into ASM or C arrays
during the build process. These arrays are even more annoying to edit in
C, after all – unlike TASM, the old C++ we have to work with doesn't
support binary number literals, only hexadecimal or, gasp, octal.
Without the explicit funding for such a converter,
I reached out on
GitHub, asking backers and outside contributors whether they'd be in
favor of it. As something that requires no RE skills and collides with
nothing else, it would be a perfect task for C/C++ coders who want to
support ReC98 with something other than money.
And surprisingly, those still exist!
Jonathan Campbell, of
DOSBox-X fame,
went ahead and implemented all the required functionality, within just a
few days. Thanks again! The result is probably a lot more portable than it
would have been if I had written it. Which is pretty relevant for future
port authors – any additional tooling we write ourselves should not
add to the list of problems they'll have to worry about.
Right now, all of the sprites are #included from the big ASM
dump files, which means that they have to be converted before those files
are assembled during the 32-bit build part. We could have introduced a
third distinct build step there, perhaps even a 16-bit one so that we can
use Turbo C++ 4.0J to also compile the converter… However, the more
reasonable option was to do this at the beginning of the 32-bit build
step, and add a 32-bit Windows C++ compiler to the list of tools required
for ReC98's build process.
And the best choice for ReC98 is, in fact… 🥁… the 20-year-old Borland C++
5.5 freeware release.
See the README for a lengthy justification, as well as
download links.
So yes, all sprites mentioned in the GitHub issue can now be modded by
simply editing .BMP files, using an image editor of your choice. 🖌
And now that that's dealt with, it's finally time for more actual
progress! TH01 pellets coming tomorrow.
P0096
TH01 decompilation (.PTN format, part 2)
P0097
TH01 decompilation (Orb physics)
P0098
TH01 decompilation (Player shots)
💰 Funded by:
Ember2528, Yanga
🏷️ Tags:
So, let's finally look at some TH01 gameplay structures! The obvious
choices here are player shots and pellets, which are conveniently located
in the last code segment. Covering these would therefore also help in
transferring some first bits of data in REIIDEN.EXE from ASM
land to C land. (Splitting the data segment would still be quite
annoying.) Player shots are immediately at the beginning…
…but wait, these are drawn as transparent sprites loaded from .PTN files.
Guess we first have to spend a push on
📝 Part 2 of this format.
Hm, 4 functions for alpha-masked blitting and unblitting of both 16×16 and
32×32 .PTN sprites that align the X coordinate to a multiple of 8
(remember, the PC-98 uses a
planar
VRAM memory layout, where 8 pixels correspond to a byte), but only one
function that supports unaligned blitting to any X coordinate, and only
for 16×16 sprites? Which is only called twice? And doesn't come with a
corresponding unblitting function?
Yeah, "unblitting". TH01 isn't
double-buffered,
and uses the PC-98's second VRAM page exclusively to store a stage's
background and static sprites. Since the PC-98 has no hardware sprites,
all you can do is write pixels into VRAM, and any animated sprite needs to
be manually removed from VRAM at the beginning of each frame. Not using
double-buffering theoretically allows TH01 to simply copy back all 128 KB
of VRAM once per frame to do this. But that
would be pretty wasteful, so TH01 just looks at all animated sprites, and
selectively copies only their occupied pixels from the second to the first
VRAM page.
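In sketch form, with an invented name for the rectangle copy:
// Page 1 permanently holds the stage background and static sprites,
// page 0 is the one being displayed.
void sprite_unblit(int left, int top, int w, int h)
{
	vram_copy_rect(1, 0, left, top, w, h); // page 1 → page 0
}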
Alright, player shot class methods… oh, wait, the collision functions
directly act on the Yin-Yang Orb, so we first have to spend a push on
that one. And that's where the impression we got from the .PTN
functions is confirmed: The orb is, in fact, only ever displayed at
byte-aligned X coordinates, divisible by 8. It's only thanks to the
constant spinning that its movement appears at least somewhat
smooth.
This is purely a rendering issue; internally, its position is
tracked at pixel precision. Sadly, smooth orb rendering at any unaligned X
coordinate wouldn't be that trivial of a mod, because well, the
necessary functions for unaligned blitting and unblitting of 32×32 sprites
don't exist in TH01's code. Then again, there's so much potential for
optimization in this code, so it might be very possible to squeeze those
additional two functions into the same C++ translation unit, even without
position independence…
More importantly though, this was the right time to decompile the core
functions controlling the orb physics – probably the highlight in these
three pushes for most people.
Well, "physics". The X velocity is restricted to the 5 discrete states of
-8, -4, 0, 4, and 8, and gravity is applied by simply adding 1 to the Y
velocity every 5 frames. No wonder that this can
easily lead to situations in which the orb infinitely bounces from the
ground.
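In sketch form – all names are invented, but the discrete X states and the gravity rhythm are exactly as described:
struct orb_t {
	int x, y; // tracked at pixel precision
	int vx;   // only ever -8, -4, 0, +4, or +8
	int vy;
};

void orb_update(orb_t &orb, int frame)
{
	if((frame % 5) == 0) {
		orb.vy++; // gravity, in its entirety
	}
	orb.x += orb.vx;
	orb.y += orb.vy;
}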
At least fangame authors now have
a
reference of how ZUN did it originally, because really, this bad
approximation of physics had to have been written that way on purpose. But
hey, it uses 64-bit floating-point variables!
…sometimes at least, and quite randomly. This was also where I had to
learn about Turbo C++'s floating-point code generation, and how rigorously
it defines the order of instructions when mixing double and
float variables in arithmetic or conditional expressions.
This meant that I could only get ZUN's original instruction order by using
literal constants instead of variables, which is impossible right now
without somehow splitting the data segment. In the end, I had to resort to
spelling out ⅔ of one function, and one conditional branch of another, in
inline ASM. 😕 If ZUN had just written 16.0 instead of
16.0f there, I would have saved quite some hours of my life
trying to decompile this correctly…
To sort of make up for the slowdown in progress, here's the TH01 orb
physics debug mod I made to properly understand them. Edit
(2022-07-12): This mod is outdated,
📝 the current version is here!
2020-06-13-TH01OrbPhysicsDebug.zip
To use it, simply replace REIIDEN.EXE, and run the game
in debug mode, via game d on the DOS prompt.
Its code might also serve as an example of how to achieve this sort of
thing without position independence.
Alright, now it's time for player shots though. Yeah, sure, they
don't move horizontally, so it's not too bad that those are also
always rendered at byte-aligned positions. But, uh… why does this code
only use the 16×16 alpha-masked unblitting function for decaying shots,
and just sloppily unblits an entire 16×16 square everywhere else?
The worst part though: Unblitting, moving, and rendering player shots
is done in a single function, in that order. And that's exactly where
TH01's sprite flickering comes from. Since different types of sprites are
free to overlap each other, you'd have to first unblit all types, then
move all types, and then render all types, as done in later
PC-98 Touhou games. If you do these three steps per-type instead, you
will unblit sprites of other types that have been rendered before… and
therefore end up with flicker.
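In (hypothetical) code, with stand-ins for TH01's real per-type functions, the difference between the two orders looks like this:

```cpp
enum { TYPE_COUNT = 3 }; // e.g. player shots, pellets, items

// Stand-ins for TH01's real per-type operations:
void unblit_all(int type);
void move_all(int type);
void render_all(int type);

// Flicker-free, as in later PC-98 Touhou games: finish each step for
// *all* types before starting the next one.
void frame_correct(void)
{
	for(int t = 0; t < TYPE_COUNT; t++) unblit_all(t);
	for(int t = 0; t < TYPE_COUNT; t++) move_all(t);
	for(int t = 0; t < TYPE_COUNT; t++) render_all(t);
}

// TH01's order: all three steps per type. Unblitting type 1 can erase
// the pixels of an overlapping type-0 sprite that was already rendered
// this frame – and that's the flicker.
void frame_th01(void)
{
	for(int t = 0; t < TYPE_COUNT; t++) {
		unblit_all(t);
		move_all(t);
		render_all(t);
	}
}
```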
Oh, and finally, ZUN also added an additional sloppy 16×16 square unblit
call if a shot collides with a pellet or a boss, for some
guaranteed flicker. Sigh.
And that's ⅓ of all ZUN code in TH01 decompiled! Next up: Pellets!
P0095
TH01 PI (Completing OP and FUUIN, .BOS pointers, scrolling)
💰 Funded by:
Yanga
🏷️ Tags:
🎉 TH01's OP.EXE and FUUIN.EXE are now fully
position-independent! 🎉
What does this mean?
You can now add any data or code to TH01's main menu or ending cutscenes,
by simply editing the ReC98 source, writing your mod in ASM or C++, and
recompiling the code. Since all absolute memory addresses in OP
and FUUIN have now been converted to labels, this
will work without causing any instability. See the
position independence section in the FAQ for a more thorough
explanation about why this was a problem.
As an example, the most popular TH01 mod idea, replacing MDRV2 with PMD,
could now at least be prototyped and tested in
OP.EXE, without having to worry about x86 instruction lengths.
📝 Check the video I made for the TH04/TH05 OP.EXE PI announcement for a basic overview of how to do that.
What does this not mean?
The original ZUN code hasn't been completely decompiled yet. The final
high-level parts of both the main menu and the cutscenes are still ASM,
which might make modding a bit inconvenient right now.
It's not that much more code though, and could quickly be covered in a few
pushes if requested. Due to the plentiful monthly subscriptions, the shop
will stay closed for regular orders until the end of June, but backers
with outstanding contributions could request that now if they want
to – simply drop me a mail. Otherwise, the "generic TH01 RE" money will
continue to go towards the main game. That way, we'll have more substance
to show once we do decide to decompile the rest of
OP.EXE and FUUIN.EXE, and likely get some press
coverage as a result.
Then again, we've been building up to this point over the last few pushes,
and it only really needed a quick look over the remaining false positives.
The majority of the time therefore went towards more PI in
REIIDEN.EXE, where the bitplane pointers for .BOS files yielded
some quite big gains. Couldn't really find any obvious reason why ZUN used
two slightly different variations on loading and blitting those files,
though…
As the final function in this rather random push, we got TH01's
hardware-powered scrolling function, used for screen shaking effects and
the scrolling backgrounds at the start of the Final Boss stages. And while
I tried to document all these I/O writes… it turned out that ZUN actually
copied the entire function straight from the PC-9801 Programmers'
Bible, with no changes. It's the
setgsta() example function on page 150. Which is terribly
suboptimal and bloated – all those integer divisions are really
not how you'd write such code for a 16-bit compiler from the 90s…
And that gives us 60% PI overall, and 50% PI over all of TH01! Next up:
More structures… and classes, even?
P0092
TH01 decompilation (Score menu, part 2)
P0093
TH01 decompilation (Score menu, part 3)
P0094
TH01 decompilation (Score menu, part 4 + Endings, part 1)
💰 Funded by:
Yanga, Ember2528
🏷️ Tags:
Three pushes to decompile the TH01 high score menu… because it's
completely terrible, and needlessly complicated in pretty much every
aspect:
Another, final set of differences between the REIIDEN.EXE
and FUUIN.EXE versions of the code. Which are so
insignificant that it must mean that ZUN kept this code in two
separate, manually and imperfectly synced files. The REIIDEN.EXE
version, only shown when game-overing, automatically jumps to the
enter/終 button after the 8th character was entered,
and also has a completely invisible timeout that force-enters a high score
name after 1000… key presses? Not frames? Why. Like, how do you
even realistically reach such a number? (Best guess: It's a hidden easter egg to
amuse players who place drinking glasses on cursor keys. Or beer bottles.)
That's all the differences that are maybe visible if you squint
hard enough. On top of that though, we got a bunch of further, minor code
organization differences that serve no purpose other than to waste
decompilation time, and certainly did their part in stretching this out to
3 pushes instead of 2.
Entered names are restricted to a set of 16-bit, full-width Shift-JIS
codepoints, yet are still accessed as 8-bit byte arrays everywhere. This
bloats both the C++ and generated ASM code with needless byte splits,
swaps, and bit shifts. Same for the route kanji. You have this 16-, heck,
even 32-bit CPU, why not use it?! (Fun fact: FUUIN.EXE is
explicitly compiled for an 80186, for the most part – unlike
REIIDEN.EXE, which does use Turbo C++'s 80386 mode.)
The sensible way of storing the current position of the alphabet
cursor would simply be two variables, indicating the logical row and
column inside the character map. When rendering, you'd then transform
these into screen space. This can keep the on-screen position constants in
a single place of code.
TH01 does the opposite: The selected character is stored directly in terms
of its on-screen position, which is then mapped back to a character
index for every processed input and the subsequent screen update. There's
no notion of a logical row or column anywhere, and consequently, the
position constants are vomited all over the code.
Which might not be as bad if the character map had a uniform
grid structure, with no gaps. But the one in TH01 looks like this:
And with no sense of abstraction anywhere, both input handling and
rendering end up with a separate if branch for at least 4 of
the 6 rows.
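Spelled out, the sensible approach could look something like this (an entirely hypothetical sketch, with made-up names and coordinates):

```cpp
// The cursor is stored as a logical (row, column) pair, and only
// transformed into screen space when rendering.
struct AlphabetCursor {
	int row;
	int col;
};

// The on-screen position constants live in exactly one place of code.
// (All values made up.)
static const int MAP_LEFT = 112;
static const int MAP_TOP  = 256;
static const int CELL_W   =  16;
static const int CELL_H   =  16;

inline int screen_x(const AlphabetCursor& c) { return MAP_LEFT + (c.col * CELL_W); }
inline int screen_y(const AlphabetCursor& c) { return MAP_TOP  + (c.row * CELL_H); }
```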
In the end, I just gave up with my usual redundancy reduction efforts for
this one. Anyone wanting to change TH01's high score name entering code
would be better off just rewriting the entire thing properly.
And that's all of the shared code in TH01! Both OP.EXE and
FUUIN.EXE are now only missing the actual main menu and
ending code, respectively. Next up, though: The long-awaited TH01 PI push.
Which will not only deliver 100% PI for OP.EXE and
FUUIN.EXE, but also probably quite some gains in
REIIDEN.EXE. With now over 30% of the game decompiled, it's about
time we get to look at some gameplay code!
P0090
TH01 decompilation (Input blockers + Input, part 1)
P0091
TH01 decompilation (Input, part 2 + Score menu, part 1)
💰 Funded by:
Yanga, Ember2528
🏷️ Tags:
Back to TH01, and its high score menu… oh, wait, that one will eventually
involve keyboard input. And thanks to the generous TH01 funding situation,
there's really no reason not to cover that right now. After all,
TH01 is the last game where input still hadn't been RE'd.
But first, let's also cover that one unused blitting function, together
with REIIDEN.CFG loading and saving, which are in front of
the input function in OP.EXE… (By now, we all know about
the hidden start bomb configuration, right?)
Unsurprisingly, the earliest game also implements input in the messiest
way, with a different function for each of the three executables. "Because
they all react differently to keyboard inputs",
apparently? OP.EXE even has two functions for it, one for the
START / CONTINUE / OPTION / QUIT main
menu, and one for both Option and Music Test menus, both of which directly
perform the ring arithmetic on the menu cursor variable. A consistent
separation of keyboard polling from input processing apparently wasn't all
too obvious of a thought, since it's only truly done from TH02 on.
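"Ring arithmetic", by the way, is nothing more than wrap-around cursor movement – something like this generic sketch (not ZUN's code):

```cpp
// Wrap-around movement of a menu cursor, assuming |delta| is never
// larger than option_count.
int cursor_ring_move(int cursor, int delta, int option_count)
{
	return ((cursor + delta + option_count) % option_count);
}
// cursor_ring_move(0, -1, 4) == 3;  cursor_ring_move(3, +1, 4) == 0
```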
This lack of proper architecture becomes actually hilarious once you
notice that it did in fact facilitate a recursion bug!
In case you've been living under a rock for the past 8 years, TH01 shipped
with debugging features, which you can enter by running the game via
game d from the DOS prompt. These features include a
memory info screen, shown when pressing PgUp, implemented as one blocking
function (test_mem()) called directly in response to the
pressed key inside the polling function. test_mem() only
returns once that screen is left by pressing PgDown. And in order to poll
input… it directly calls back into the same polling function that called
it in the first place, after a 3-frame delay.
Which means that this screen is actually re-entered for every 3 frames
that the PgUp key is being held. And yes, you can, of course, also
crash the system via a stack overflow this way by holding down PgUp for a
few seconds, if that's your thing. Edit (2020-09-17): Here's a video from
spaztron64, showing off this
exact stack overflow crash while running under the
VEM486
memory manager, which displays additional information about these
sorts of crashes:
What makes this even funnier is that the code actually tracks the last
state of every polled key, to prevent exactly that sort of bug. But the
copy-pasted assignment of the last input state is only done after
test_mem() already returned, making it effectively pointless
for PgUp. It does work as intended for PgDown… and that's why you
have to actually press and release this key once for every call to
test_mem() in order to actually get back into the game. Even
though a single call to PgDown will already show the game screen
again.
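Distilled into a (hypothetical) sketch – only test_mem() is the game's actual name, everything else is a stand-in:

```cpp
bool key_pressed(int scancode); // stand-in for the real polling
void frame_delay(int frames);
void show_memory_info(void);
enum { KEY_PGUP, KEY_PGDOWN };

void input_poll(void);

// Blocks until PgDown – and polls by calling right back into the
// function that called it in the first place.
void test_mem(void)
{
	show_memory_info();
	do {
		frame_delay(3);
		input_poll(); // ← re-enters test_mem() while PgUp is held
	} while(!key_pressed(KEY_PGDOWN));
}

void input_poll(void)
{
	if(key_pressed(KEY_PGUP)) {
		test_mem();
	}
	// The copy-pasted "last input state" assignment only happens here,
	// after test_mem() already returned – too late to prevent the
	// re-entry above.
}
```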
In maybe more relevant news though, this function also came with what can
be considered the first piece of actual gameplay logic! Bombing via
double-tapping the Z and X keys is also handled here, and now we know that
both keys simply have to be tapped twice within a window of 20 frames.
They are tracked independently from each other, so you don't necessarily
have to press them simultaneously.
In debug mode, the bomb count tracks precisely this window of
time. That's why it only resets back to 0 when pressing Z or X if it's
≥20.
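As a (hypothetical) sketch of what we now know, per key:

```cpp
// Z and X each get their own instance of this state.
struct DoubleTap {
	int window; // frames since the first tap; the debug "bomb count"
	int taps;
};

bool doubletap_update(DoubleTap& dt, bool tapped_this_frame)
{
	dt.window++;
	if(tapped_this_frame) {
		if(dt.window >= 20) { // window expired → start a new one
			dt.window = 0;
			dt.taps = 0;
		}
		dt.taps++;
		if(dt.taps >= 2) {
			dt.taps = 0;
			return true; // bomb!
		}
	}
	return false;
}
```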
Sure, TH01's code is expectedly terrible and messy. But compared to the
micro-optimizations of TH04 and TH05, it's an absolute joy to work on, and
opening all these ZUN bug loot boxes is just the icing on the cake.
Looking forward to more of the high score menu in the next pushes!
As expected, we've now got the TH04 and TH05 stage enemy structure,
finishing position independence for all big entity types. This one was
quite straightforward, as the .STD scripting system is pretty simple.
Its most interesting aspect can be found in the way timing is handled. In
Windows Touhou, all .ECL script instructions come with a frame field that
defines when they are executed. In TH04's and TH05's .STD scripts, on the
other hand, it's up to each individual instruction to add a frame time
parameter, anywhere in its parameter list. This frame time defines for how
long this instruction should be repeatedly executed, before it manually
advances the instruction pointer to the next one. From what I've seen so
far, these instructions typically apply their effect on the first frame
they run on, and then do nothing for the remaining frames.
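In (made-up) code, that timing model boils down to something like this:

```cpp
// Hedged sketch; all names are made up. Every instruction carries its
// own frame time, anywhere in its parameter list, and repeats until
// that time has elapsed.
struct STDInstruction {
	int opcode;
	int frame_time; // how long to keep executing this instruction
	// …further parameters…
};

struct STDEnemy {
	const STDInstruction* ip;
	int frames_on_instruction;
};

void std_tick(STDEnemy& enemy)
{
	if(enemy.frames_on_instruction == 0) {
		// Typically, the actual effect only happens here, on the
		// first frame: execute_opcode(enemy, *enemy.ip);
	}
	if(++enemy.frames_on_instruction >= enemy.ip->frame_time) {
		enemy.ip++; // manually advance to the next instruction
		enemy.frames_on_instruction = 0;
	}
}
```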
Oh, and you can't nest the LOOP instruction, since the enemy
structure only stores one single counter for the current loop iteration.
Just from the structure, the only innovation introduced by TH05 seems to
have been enemy subtypes. These can be used to parametrize scripts via
conditional jumps based on this value, as a first attempt at cutting down
the need to duplicate entire scripts for similar enemy behavior. And
thanks to TH05's favorable segment layout, this game's version of the
.STD enemy script interpreter is even immediately ready for decompilation,
in one single future push.
As far as I can tell, that now only leaves
.MPN file loading
player bomb animations
some structures specific to the Shinki and EX-Alice battles
plus some smaller things I've missed over the years
until TH05's MAIN.EXE is completely position-independent.
Which, however, won't be all it needs for that 100% PI rating on the front
page. And with that many false positives, it's quite easy to get lost with
immediately reverse-engineering everything around them. This time, the
rendering of the text dissolve circles, used for the stage and BGM title
popups, caught my eye… and since the high-level code to handle all of
that was near the end of a segment in both TH04 and TH05, I just decided
to immediately decompile it all. Like, how hard could it possibly be?
Sure, it needed another segment split, which was a bit harder due
to all the existing ASM referencing code in that segment, but certainly
not impossible…
Oh wait, this code depends on 9 other sets of identifiers that haven't
been declared in C land before, some of which require vast reorganizations
to bring them up to current consistency standards. Whoops! Good thing that
this is the part of the project I'm still offering for free…
Among the referenced functions was tiles_invalidate_around(),
which marks the stage background tiles within a rectangular area to be
redrawn this frame. And this one must have had the hardest function
signature to figure out in all of PC-98 Touhou, because it actually
seems impossible. Looking at all the ways the game passes the center
coordinate to this function, we have
X and Y as 16-bit integer literals, merged into a single
PUSH of a 32-bit immediate
X and Y calculated and pushed independently from each other
by-value copies of entire Point instances
Any single declaration would only lead to at most two of the three cases
generating the original instructions. No way around separately declaring
the function in every translation unit then, with the correct parameter
list for the respective calls. That's how ZUN must have also written it.
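Or, in (similarly hypothetical) code – three different declarations of the same function, one per translation unit, with made-up parameter names beyond the function itself:

```cpp
struct Point { int x, y; };

// 1) X and Y merged into a single 32-bit PUSH of an immediate:
void tiles_invalidate_around(long xy_packed, int w, int h);

// 2) X and Y calculated and pushed independently from each other:
void tiles_invalidate_around(int x, int y, int w, int h);

// 3) A by-value copy of an entire Point instance:
void tiles_invalidate_around(Point center, int w, int h);
```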
Oh well, we would have needed to do all of this some time. At least
there was quite a bit of insight to be gained from the actual
decompilation, where using const references actually made it
possible to turn quite a number of potentially ugly macros into wholesome
inline functions.
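A small (made-up) example of that pattern:

```cpp
struct Point { int x, y; };

// A potentially ugly macro, evaluating its argument twice:
#define POINT_TO_TILE_X(p) (((p).x) >> 4)

// The wholesome alternative: a const reference avoids both the double
// evaluation and a by-value structure copy on a 16-bit target.
inline int point_to_tile_x(const Point& p)
{
	return (p.x >> 4);
}
```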
But still, TH04 and TH05 will come out of ReC98's decompilation as one big
mess. A lot of further manual decompilation and refactoring, beyond the
limits of the original binary, would be needed to make these games
portable to any non-PC-98, non-x86 architecture.
And yes, that includes IBM-compatible DOS – which, for some reason, a
number of people see as the obvious choice for a first system to port
PC-98 Touhou to. This will barely be easier. Sure, you'll save the effort
of decompiling all the remaining original ASM. But even with
master.lib's MASTER_DOSV setting, these games still very much
rely on PC-98 hardware, with corresponding assumptions all over ZUN's
code. You will need to provide abstractions for the PC-98's
superimposed text mode, the gaiji, and planar 4-bit color access in
general, exchanging the use of the PC-98's GRCG and EGC blitter chips with
something else. At that point, you might as well port the game to one
generic 640×400 framebuffer and away from the constraints of DOS,
resulting in that Doom source code-like situation which made that
game easily portable to every architecture to begin with. But ZUN just
wasn't a John Carmack, sorry.
Or what do I know. I've never programmed for IBM-compatible DOS, but maybe
ReC98's audience does include someone who is intimately familiar
with it, so that these constraints aren't much of an issue
for them? But even then, 16-bit Windows would make much more sense
as a first porting target if you don't want to bother with that
undecompilable ASM.
At least I won't have to look at TH04 and TH05 for quite a while now.
The delivery delays have made it obvious that
my life has become pretty busy again, probably until September. With a
total of 9 TH01 pushes from monthly subscriptions now waiting in the
backlog, the shop will stay closed until I've caught up with most of
these. Which I'm quite hyped for!
Alright, the score popup numbers shown when collecting items or defeating
(mid)bosses. The second-to-last remaining big entity type in TH05… with
quite some PI false positives in the memory range occupied by its data.
Good thing I still got some outstanding generic RE pushes that haven't
been claimed for anything more specific in over a month! These
conveniently allowed me to RE most of these functions right away, the
right way.
Most of the false positives were boss HP values, passed to a "boss phase
end" function which sets the HP value at which the next phase should end.
Stage 6 Yuuka, Mugetsu, and EX-Alice have their own copies of this
function, in which they also reset certain boss-specific global variables.
Since I always like to cover all varieties of such duplicated functions at
once, it made sense to reverse-engineer all the involved variables while I
was at it… and that's why this was exactly the right time to cover the
implementation details of Stage 6 Yuuka's parasol and vanishing animations
in TH04.
With still a bit of time left in that RE push afterwards, I could also
start looking into some of the smaller functions that didn't quite fit
into other pushes. The most notable one there was a simple function that
aims from any point to the current player position. Which actually only
became a separate function in TH05, probably since it's called 27 times in
total. That's 27 places no longer being blocked from further RE progress.
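Conceptually, such a function is just an atan2() from one point to another, mapped onto the games' 8-bit angle unit, where 0x00–0xFF covers the full circle. A sketch, using floating-point math purely for clarity (the globals and the function name are stand-ins):

```cpp
#include <math.h>

extern int player_x, player_y; // stand-ins for the real globals

// Aim from any point at the player, as an 8-bit angle.
unsigned char aim_at_player(int from_x, int from_y)
{
	double rad = atan2(
		(double)(player_y - from_y), (double)(player_x - from_x)
	);
	int angle = (int)((rad * 128.0) / 3.14159265358979); // [-128, +128]
	return (unsigned char)(angle & 0xFF);
}
```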
WindowsTiger already
did most of the work for the score popup numbers in January, which meant
that I only had to review it and bring it up to ReC98's current coding
styles and standards. This one turned out to be one of those rare features
whose TH05 implementation is significantly less insane than the
TH04 one. Both games lazily redraw only the tiles of the stage background
that were drawn over in the previous frame, and try their best to minimize
the amount of tiles to be redrawn in this way. For these popup numbers,
this involves calculating the on-screen width, based on the exact number
of digits in the point value. TH04 calculates this width every frame
during the rendering function, and even resorts to setting that field
through the digit iteration pointer via self-modifying code… yup. TH05, on
the other hand, simply calculates the width once when spawning a new popup
number, during the conversion of the point value to
binary-coded
decimal. The "×2" multiplier suffix being removed in TH05 certainly
also helped in simplifying that feature in this game.
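The TH05 way, roughly (a sketch with made-up names; the point is just the "once, at spawn time" aspect):

```cpp
struct PopupNumber {
	unsigned char bcd[4]; // two decimal digits per byte
	int digits;
	int width; // in pixels – never recalculated during rendering
};

void popup_spawn(PopupNumber& p, unsigned long points, int digit_width)
{
	// Convert the point value to binary-coded decimal…
	unsigned long rest = points;
	for(int i = 0; i < 4; i++) {
		p.bcd[i] = (unsigned char)((((rest / 10) % 10) << 4) | (rest % 10));
		rest /= 100;
	}
	// …and derive the on-screen width from the exact digit count.
	p.digits = 1;
	for(unsigned long v = points; v >= 10; v /= 10) {
		p.digits++;
	}
	p.width = (p.digits * digit_width);
}
```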
And that's ⅓ of TH05 reverse-engineered! Next up, one more TH05 PI push,
in which the stage enemies hopefully finish all the big entity types.
Maybe it will also be accompanied by another RE push? In any case, that
will be the last piece of TH05 progress for quite some time. The next TH01
stretch will consist of 6 pushes at the very least, and I currently have
no idea of how much time I can spend on ReC98 a month from now…
Wait, PI for FUUIN.EXE is mainly blocked by the high score
menu? That one should really be properly decompiled in a separate
RE push, since it's also present in largely identical form in
REIIDEN.EXE… but I currently lack the explicit funding to do
that.
And as it turns out, I shouldn't really capture any of the existing generic
RE contributions for it either. Back in 2018 when I ran the crowdfunding
on the Touhou Patch Center Discord server, I said that generic RE
contributions would never go towards TH01. No one was interested in that
game back then, and as it's significantly different from all the other
games, it made sense to only cover it if explicitly requested.
As Touhou Patch Center still remains one of the biggest supporters and
advertisers for ReC98, someone recently believed that this rule was still
in effect, despite not being mentioned anywhere on this website.
Fast forward to today, and TH01 has become the single most supported game
lately, with plenty of incomplete pushes still open to be completed.
Reverse-engineering it has proven to be quite efficient, yielding lots of
completion percentage points per push. This, I suppose, is exactly what
backers that don't give any specific priorities are mainly interested in.
Therefore, I will allocate future partial
contributions to TH01, whenever it makes sense.
So, instead of rushing TH01 PI, let's wait for Ember2528's
April subscription, and get the 25% total RE milestone with some TH05 PI
progress instead. This one primarily focused on the gather circles
(spirals…?), the third-last missing entity type in TH05. These are
rendered using the same 8×8 pellet sprite introduced in TH02… except that
the actual pellets received a darkened bottom part in TH04.
Which, in turn, is actually rendered quite efficiently – the games first
render the top white part of all pellets, followed by the bottom gray part
of all pellets. The PC-98 GRCG is used throughout the process, doing its
typical job of accelerating monochrome blitting, and by arranging the
rendering like this, only two GRCG color changes are required to draw any
number of pellets. I guess that makes it quite a worthwhile
optimization? Don't ask me for specific performance numbers or even saved
cycles, though.
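Still, the structure of the optimization is nice and simple (hypothetical code, with stand-ins for the actual GRCG helpers and palette indices):

```cpp
// Stand-ins for the real GRCG setup and blitting calls:
void grcg_set_color(int color);
void grcg_off(void);
void pellet_blit_top(int i);
void pellet_blit_bottom(int i);

// Two passes over all pellets → exactly two color changes per frame,
// no matter how many pellets are on screen.
void pellets_render(int count)
{
	grcg_set_color(15); // white (made-up palette index)
	for(int i = 0; i < count; i++) {
		pellet_blit_top(i);
	}
	grcg_set_color(8); // gray (made-up palette index)
	for(int i = 0; i < count; i++) {
		pellet_blit_bottom(i);
	}
	grcg_off();
}
```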
P0084
TH01 decompilation (REYHI*.DAT loading and creation)
💰 Funded by:
Yanga
🏷️ Tags:
Final TH01 RE push for the time being, and as expected, we've got the
superficially final piece of shared code between the TH01 executables.
However, just having a single implementation for loading and recreating
the REYHI*.DAT score files would have been way above ZUN's
standards of consistency. So ZUN had the unique idea to mix up the file
I/O APIs, using master.lib functions in REIIDEN.EXE, and
POSIX functions (along with error messages and disabled interrupts) in
FUUIN.EXE… Could have been worse
though, as it was possible to abstract that away quite nicely.
That code didn't quite come up in the natural order of decompilation either. As it
turns out though, 📝 segment splitting isn't
so painful after all if one of the new segments only has a few functions.
Definitely going to do that more often from now on, since it allows a much
larger number of functions to be immediately decompiled. Which is always
superior to somehow transforming a function's ASM into a form that I can
confidently call "reverse-engineered", only to revisit it again later for
its decompilation.
And while I unfortunately missed the 25% total RE mark by a bit, this push
reached two other and perhaps even more significant milestones:
After (finally) compressing all unknown parts of the BSS segments
using arrays, the number of remaining lines in the
REIIDEN.EXE ASM dump has fallen below TASM's limit of 65,535. Which
means that we no longer need that annoying th01_reiiden_2.inc
file that everyone has forgotten about at least once.
Nope, RL has given me plenty of things to do from home after all,
so the current cap still remains an accurate representation of my free
time. 😕
For now though, we've got one more TH01 file format push, covering the
core functions for loading and displaying the 32×32 and 16×16 sprites from
the .PTN files, as announced – and probably one of the last ones for quite
a while to yield both RE and PI progress way above average. But what is
this, error return values in a ZUN game?! And actually good code
for deriving the alpha channel from the 16th color in the hardware
palette?! Sure, the rest of the code could still be improved a lot, but
that was quite a surprise, especially after the spaghetti code of
📝 the last push. That makes up for two of
the .PTN structure fields (one of them always 0, and one of them always 1)
remaining unused, and therefore unknown.
ZUN also uses the .PTN image slots to store the background of frequently
updated VRAM sections, in order to be able to repeatedly draw on top of
them – like for example the HUD area where the score and time numbers are
drawn. Future games would simply use the text RAM and gaiji for those
numbers. This would have worked just fine for TH01 too – especially since
all the functions decompiled so far align the VRAM X coordinate to the
8-pixel byte grid, which is the simplest way of accessing VRAM given the
PC-98's
planar
memory layout. Looks as if ZUN simply wasn't aware of gaiji during the
development of TH01.
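For reference, this is all that byte-aligned blitting needs on this hardware (a minimal sketch; the plane segments and the 80-byte row pitch are PC-98 facts, the function itself is made up):

```cpp
#include <dos.h> // MK_FP() – Turbo C++

// With X on the 8-pixel grid, one byte per bitplane covers exactly
// 8 pixels – no shifting or masking across byte boundaries needed.
void vram_put_8px_mono(int x, int y, unsigned char pattern)
{
	unsigned offset = (y * 80) + (x >> 3); // 640 / 8 = 80 bytes per row
	*(unsigned char far *)MK_FP(0xA800, offset) = pattern; // B plane
	*(unsigned char far *)MK_FP(0xB000, offset) = pattern; // R plane
	*(unsigned char far *)MK_FP(0xB800, offset) = pattern; // G plane
	*(unsigned char far *)MK_FP(0xE000, offset) = pattern; // E plane
}
```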
This won't be the last time I cover the .PTN format, since all the
blitting functions that actually use alpha are exclusive to
REIIDEN.EXE, and currently out of decompilation reach. But after
some more long overdue cleaning work, TH01 has now passed both TH02 and
even TH04 to become the second-most reverse-engineered game in
all of ReC98, in terms of absolute numbers! 🎉
Also, PI for TH01's OP.EXE is imminent. Next up though, we've
first got the probably final double-speed push for TH01, covering the last
set of duplicated functions between the three binaries – quite fitting for
the currently last fully funded, outstanding TH01 RE push. Then, we also
might get FUUIN.EXE PI within the same push
afterwards? After that, TH01 progress will be slowing down, since
I'd then have to cover either the main menu or in-game code
or the cutscenes, depending on what the backers request. (By
default, it's going to be in-game code, of course.)
Last of the 3 weeks of almost full-time ReC98 work, supposedly the least
stressful one, and then things still get delayed thanks to illness 😕 In
better news though, it looks like I'll be able to extend these 3 weeks to
8, as my RL is shutting down for coronavirus reasons. I'm going to
wait a bit for the dust to settle before raising the crowdfunding cap
though, since RL might give me more to do from home after all. I may or
may not also get commissioned for a non-Touhou translation patch project
to be worked on in that time…
The .GRP file functions turned out to, of course, also be present in
FUUIN.EXE. In fact, that binary had the largest share of
progress in this push, since it's the only one to include another
reimplementation of master.lib-style hardware palette fading. As a typical
little ZUN inconsistency, the FUUIN.EXE version of one .GRP
palette function directly calls one of these functions.
As for the functions themselves, they basically wrap the single-function
Pi load and
display library by 電脳科学研究所/BERO in a bowl of global state
spaghetti. 🍝 At least the function names now clearly encode important
side effects like, y'know, a changed hardware palette. The reason ZUN used
this separate library over master.lib's PI loading functions was probably
its support for defining a color as transparent. This feature is used for
the red box in the main menu, and the large cyan Siddhaṃ seed syllables in
(again) the Konngara fight.
Sadly, we've already reached the end of fast triple-speed TH01 progress
with 📝 the last push, which decompiled the
last segment shared by all three of TH01's executables. There's still a
bit of double-speed progress left though, with a small number of code
segments that are shared between just two of the three executables.
At the end of the first one of these, we've got all the code for the .GRZ
format – which is yet another run-length encoded image format, but this
time storing up to 16 full 640×400 16-color images with an alpha bit. This
one is exclusively used to wastefully store Konngara's sword slash and
kuji-in kill
animations. Due to… suboptimal code organization, the code for the format
is also present in OP.EXE, despite not being used there. But
hey, that brings TH01 to over 20% in RE!
Decoupling the RLE command stream from the pixel data sounds like a nice
idea at first, allowing the format to efficiently encode a variety of
animation frames displayed all over the screen… if ZUN actually made
use of it. The RLE stream also has quite some ridiculous overhead,
starting with 1 byte to store the 1-bit command (putting a single 8×1
pixel block, or entering a run of N such blocks). Run commands then store
another 1-byte run length, which has to be followed by another
command byte to identify the run as putting N blocks, or skipping N blocks.
And the pixel data is just a sequence of these blocks for all 4 bitplanes,
in uncompressed form…
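A decoder for that command stream would look roughly like this (the structure follows the description above; the byte values and names are made up):

```cpp
enum { GRZ_CMD_SINGLE = 0, GRZ_CMD_PUT = 0, GRZ_CMD_SKIP = 1 }; // made up

void grz_put_block(void);  // stand-in: blit one 8×1-pixel block
void grz_skip_block(void); // stand-in: leave one block untouched

void grz_run_commands(const unsigned char* cmd, int block_count)
{
	while(block_count > 0) {
		unsigned char c = *cmd++; // a whole byte for a 1-bit command
		if(c == GRZ_CMD_SINGLE) {
			grz_put_block(); // a single 8×1-pixel block
			block_count--;
		} else {
			unsigned char n = *cmd++;    // a run: 1-byte run length…
			unsigned char kind = *cmd++; // …and *another* command byte
			for(int i = 0; i < n; i++) {
				(kind == GRZ_CMD_PUT) ? grz_put_block() : grz_skip_block();
			}
			block_count -= n;
		}
	}
}
```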
Also, have some rips of all the images this format is used for:
To make these, I just wrote a small viewer, calling the same decompiled
TH01 code: 2020-03-07-grzview.zip
Obviously, this means that it not only must be run on a PC-98, but also
discards the alpha information.
If any backers are really interested in having a proper converter
to and from PNG, I can implement that in an upcoming push… although that
would be the perfect thing for outside contributors to do.
Next up, we got some code for the PI format… oh, wait, the actual files
are called "GRP" in TH01.
Last part of TH01's main graphics function segment, and we've got even
more code that alternates between being boring and being slightly weird.
But at least, "boring" also meant "consistent" for once. And
so progress continued to be as fast as expected from the last TH01 pushes,
yielding 3.3% in TH01 RE%, and 1% in overall RE%, within a single day.
There even was enough time to decompile another full code segment, which
bundles all the hardware initialization and cleanup calls into single
functions to be run when starting and exiting the game. Which might be
interesting for at least one person, I guess.
But seriously, trying to access page 2 on a system with only page 0 and 1?
Had to get out my real PC-98 to double-check that I wasn't missing
anything here, since every emulator only looks at the bottom bit of the
page number. But real hardware seems to do the same, and there really is
nothing special to it semantically, being equivalent to page 0. 🤷
Next up in TH01, we'll have some file format code!
To finish this TH05 stretch, we've got a feature that's exclusive to TH05
for once! As the final memory management innovation in PC-98 Touhou, TH05
provides a single static (64 * 26)-byte array for storing up to 64
entities of a custom type, specific to a stage or boss portion.
(Edit (2023-05-29): This system actually debuted in
📝 TH04, where it was used for much simpler
entities.)
TH05 uses this array for
the Stage 2 star particles,
Alice's puppets,
the tip of curve ("jello") bullets,
Mai's snowballs and Yuki's fireballs,
Yumeko's swords,
and Shinki's 32×32 bullets,
which makes sense, given that only one of those will be active at any
given time.
On the surface, they all appear to share the same 26-byte structure, with
consistently sized fields, merely using its 5 generic fields for different
purposes. Looking closer though, there actually are differences in
the signedness of certain fields across the six types. uth05win chose to
declare them as entirely separate structures, and given all the semantic
differences (pixels vs. subpixels, regular vs. tiny master.lib sprites,
…), it made sense to do the same in ReC98. It quickly turned out to be the
only solution to meet my own standards of code readability.
Which blew this one up to two pushes once again… But now, modders can
trivially resize any of those structures without affecting the other types
within the original (64 * 26)-byte boundary, even without full position
independence. While you'd still have to reduce the type-specific
number of distinct entities if you made any structure larger, you
could also have more entities with fewer structure members.
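The underlying pattern, in (hypothetical, modernized) code:

```cpp
// One static arena, reinterpreted as whichever entity type the current
// stage or boss needs. Sizes from the post; all type names made up.
static unsigned char entity_arena[64 * 26];

struct StarParticle { int x, y; /* …up to 26 bytes in the original… */ };
struct YumekoSword  { int x, y, angle, frame; /* … */ };

// Any type can now grow beyond 26 bytes, as long as the entity count
// shrinks accordingly: (count * sizeof(T)) <= sizeof(entity_arena).
template <class T> inline T* entities(void)
{
	return (T *)entity_arena;
}

// Usage: StarParticle* stars = entities<StarParticle>();
```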
As for the types themselves, they're full of redundancy once again – as
you might have already expected from seeing #4, #5, and #6 listed as
unrelated to each other. Those could have indeed been merged into a single
32×32 bullet type, supporting all the unique properties of #4
(destructible, with optional revenge bullets), #5 (optional number of
twirl animation frames before they begin to move) and #6 (delay clouds).
The *_add(), *_update(), and *_render()
functions of #5 and #6 could even already be completely
reverse-engineered from just applying the structure onto the ASM, with the
ones of #3 and #4 only needing one more RE push.
But perhaps the most interesting discovery here is in the curve bullets:
TH05 only renders every second one of the 17 nodes in a curve
bullet, yet hit-tests every single one of them. In practice, this is an
acceptable optimization though – you only start to notice jagged edges and
gaps between the fragments once their speed exceeds roughly 11 pixels per
frame:
And that brings us to the last 20% of TH05 position independence! But
first, we'll have more cheap and fast TH01 progress.
Well, that took twice as long as I thought, with the two pushes containing
a lot more maintenance than actual new research. Spending some time
improving both field names and types in
32th System's
TH03 resident structure finally gives us all of those
structures. Which means that we can now cover all the remaining
decompilable ZUN.COM parts at once…
Oh wait, their main() functions have stayed largely identical
since TH02? Time to clean up and separate that first, then… and combine
two recent code generation observations into the solution to a
decompilation puzzle from 4½ years ago. Alright, time to decomp-
Oh wait, we'd kinda like to properly RE all the code in TH03-TH05
that deals with loading and saving .CFG files. Almost every outside
contributor wanted to grab this supposedly low-hanging fruit a lot
earlier, but (of course) always just for a single game, while missing how
the format evolved.
So, ZUN.COM. For some reason, people seem to consider it
particularly important, even though it contains neither any game logic nor
any code specific to PC-98 hardware… All that this decompilable part does
is to initialize a game's .CFG file, allocate an empty resident structure
using master.lib functions, release it after you quit the game,
error-check all that, and print some playful messages~ (OK, TH05's also
directly fills the resident structure with all data from
MIKO.CFG, which all the other games do in OP.EXE.)
At least modders can now freely change and extend all the resident
structures, as well as the .CFG files? And translators can translate those
messages that you won't see on a decently fast emulator anyway? Have fun,
I guess 🤷
And you can in fact do this right now – even for TH04 and TH05,
whose ZUN.COM currently isn't rebuilt by ReC98. There is
actually a rather involved reason for this:
One of the missing files is TH05's GJINIT.COM.
Which contains all of TH05's gaiji characters in hardcoded 1bpp form,
together with a bit of ASM for writing them to the PC-98's hardware gaiji
RAM
Which means we'd ideally first like to have a sprite compiler, for
all the hardcoded 1bpp sprites
Which must compile to an ASM slice in the meantime, but should also
output directly to an OMF .OBJ file (for performance now), as well as to C
code (for portability later)
Which I won't put in as long as the backlog contains actual
progress to drive up the percentages on the front page.
So yeah, no meaningful RE and PI progress at any of these levels. Heck,
even as a modder, you can just replace the zun zun_res
(TH02), zun -5 (TH03), or zun -s (TH04/TH05)
calls in GAME.BAT with a direct call to your modified
*RES*.COM. And with the alternative being "manually typing 0 and 1
bits into a text file", editing the sprites in TH05's
GJINIT.COM is way more comfortable in a binary sprite editor
anyway.
For me though, the best part in all of this was that it finally made sense
to throw out the old Borland C++ run-time assembly slices 🗑 This giant
waste of time
became obvious 5 years ago, but any ASM dump of a .COM
file would have needed rather ugly workarounds without those slices. Now
that all .COM binaries that were originally written in C are
compiled from C, we can all enjoy slightly faster grepping over the entire
repository, which now has 229 fewer files. Productivity will skyrocket!
Next up: Three weeks of almost full-time ReC98 work! Two more PI-focused
pushes to finish this TH05 stretch first, before switching priorities to
TH01 again.
P0072
TH04/TH05 PI (Bullet structure)
P0073
TH04/TH05 RE (32×32 + monochrome 16×16 sprite rendering)
P0074
TH04/TH05 RE (Bullet sprites)
P0075
TH04/TH05 RE (Bullet group types, spawn types, and templates)
Long time no see! And this is exactly why I've been procrastinating
bullets while there was still meaningful progress to be had in other parts
of TH04 and TH05: There was bound to be quite some complexity in this most
central piece of game logic, and so I couldn't possibly get to a
satisfying understanding in just one push.
Or in two, because their rendering involves another bunch of
micro-optimized functions adapted from master.lib.
Or in three, because we'd like to actually name all the bullet sprites,
since there are a number of sprite ID-related conditional branches. And
so, I was refining things I supposedly RE'd in the commits from the
first push until the very end of the fourth.
When we talk about "bullets" in TH04 and TH05, we mean just two things:
the white 8×8 pellets, with a cap of 240 in TH04 and 180 in TH05, and any
16×16 sprites from MIKO16.BFT, with a cap of 200 in TH04 and
220 in TH05. These are by far the most common types of… err, "things the
player can collide with", and so ZUN provides a whole bunch of pre-made
motion, animation, and
n-way spread / ring / stack group options for those, which can be
selected by simply setting a few fields in the bullet template. All the
other "non-bullets" have to be fired and controlled individually.
Which is nothing new, since uth05win covered this part pretty accurately –
I don't think anyone could just make up these structure member
overloads. The interesting insights here all come from applying this
research to TH04, and figuring out its differences compared to TH05. The
most notable one there is in the default groups: TH05 allows you to add
a stack
to any single bullet, n-way spread or ring, but TH04 only lets you create
stacks separately from n-way spreads and rings, and thus gets by with
fewer fields in its bullet template structure. On the other hand, TH04 has
a separate "n-way spread with random angles, yet still aimed at the
player" group? Which seems to be unused, at least as far as
midbosses and bosses are concerned; can't say anything about stage enemies
yet.
In fact, TH05's larger bullet template structure illustrates that these
distinct group types actually are a rather redundant piece of
over-engineering. You can perfectly indicate any permutation of the basic
groups through just the stack bullet count (1 = no stack), spread bullet
count (1 = no spread), and spread delta angle (0 = ring instead of
spread). Add a 4-flag bitfield to cover the rest (aim to player, randomize
angle, randomize speed, force single bullet regardless of difficulty or
rank), and the result would be less redundant and even slightly
more capable.
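Spelled out as a (hypothetical) structure, with the semantics taken straight from the paragraph above:

```cpp
// The less redundant template proposed above; not ZUN's actual layout.
struct BulletTemplate {
	unsigned char stack_count;        // 1 = no stack
	unsigned char spread_count;       // 1 = no spread
	unsigned char spread_delta_angle; // 0 = ring instead of spread
	unsigned char flags;              // combination of the BT_* flags
};

enum BulletTemplateFlags {
	BT_AIM_TO_PLAYER   = (1 << 0),
	BT_RANDOMIZE_ANGLE = (1 << 1),
	BT_RANDOMIZE_SPEED = (1 << 2),
	BT_FORCE_SINGLE    = (1 << 3), // regardless of difficulty or rank
};
```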
Even those 4 pushes didn't quite finish all of the bullet-related types,
stopping just shy of the most trivial and consistent enum that defines
special movement. This also left us in a
📝 TH03-like situation, in which we're still
a bit away from actually converting all this research into actual RE%. Oh
well, at least this got us way past 50% in overall position independence.
On to the second half! 🎉
For the next push though, we'll first have a quick detour to the remaining
C code of all the ZUN.COM binaries. Now that the
📝 TH04 and TH05 resident structures no
longer block those, -Tom- has requested TH05's
RES_KSO.COM to be covered in one of his outstanding pushes.
And since 32th System
recently RE'd TH03's resident structure, it makes sense to also review and
merge that, before decompiling all three remaining RES_*.COM
binaries in hopefully a single push. It might even get done faster than
that, in which case I'll then review and merge some more of
WindowsTiger's
research.
Turns out that covering TH03's 128-byte player structure was way
more insightful than expected! And while it doesn't include every
bit of per-player data, we still got to know quite a bit about the game
from just trying to name its members:
50 frames of invincibility when starting a new round
110 frames of invincibility when getting hit
64 frames of knockback when getting hit
128 frames before a charged up gauge/boss attack is fired
automatically
The damage a player will take from the next hit starts out at ½ heart
at the beginning of each round, and increases by another ½ heart every
1024 frames, capped at a maximum of 3 hearts. This guarantees that a
player will always survive at least two hits. (See the sketch after this list.)
In Story Mode, hit damage is biased in favor of the player for the
first 6 stages. The CPU will always take an additional 1½ hearts of damage
in stages 1 and 2, 1 heart in stages 3 and 4, and ½ heart in stages 5 and
6, plus the above frame-based and capped damage amount. So while it's
therefore possible to cause 4½ hearts of damage in Stages 1 and 2 if the
first hit is somehow delayed for at least 5120 frames, you'd still win
faster if the CPU gets hit as soon as possible.
CPU players will charge up a gauge/boss attack as soon as their gauge
has reached a certain level. These levels are now proven to be random; at
the start of every round, the game generates a sequence of 64 gauge level
positions (from 1 to 4), separately for each player. If a round were to
last long enough for a CPU player to fire all 64 of those predetermined
attacks, you'd observe that sequence repeating.
Yes, that means that in theory, these levels can be
RNG-manipulated. More details on that once we got this game's resident
structure, where the seed is stored.
CPU players follow two main strategies: trying to not get hit, and…
not quite doing that once they've survived for a certain safety threshold
of frames. For the first 2000 frames of a round, this safety frame counter
is reset to 0 every 64 frames, leading the CPU to switch quickly between
the two strategies in the first few Story Mode stages on lower
difficulties, where this safety threshold is less than 64. The calculation
of the actual value is a bit more complex; more on that also once we got
this game's resident structure.
Section 13 of 夢時空.TXT states that Boss Attacks are only counted
towards the Clear Bonus if they were caused by reaching a certain number
of spell points. This is incorrect; manually charged Level 4 Boss Attacks
are counted as well.
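Here's the damage calculation from the two points above as a worked sketch, in units of ½ heart (hypothetical code; only the constants come from the game):

```cpp
// Damage dealt by the next hit, in units of ½ heart.
int hit_damage(long frames_into_round, int story_stage, bool victim_is_cpu)
{
	int damage = 1 + (int)(frames_into_round / 1024); // +½ per 1024f
	if(damage > 6) {
		damage = 6; // capped at 3 hearts
	}
	if(victim_is_cpu && (story_stage >= 1) && (story_stage <= 6)) {
		damage += ((8 - story_stage) / 2); // stages 1–2: +1½ hearts,
	}                                      // 3–4: +1, 5–6: +½
	return damage;
}
// After ≥5120 frames in Stage 1 or 2: 6 + 3 = 9 → the 4½ hearts above.
```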
The next TH03 pushes can now cover all the functions that reference this
structure in one way or another, and actually commit all this research and
translate it into some RE%. Since the non-TH05 priorities have become a
bit unclear after the last 50 € RE contribution though (as of this
writing, it's still 10 € to decide on what game to cover in two RE
pushes!), I'll be returning to TH05 until that's decided.
As noted in 📝 P0061, TH03 gameplay RE is
indeed going to progress very slowly in the beginning. A lot of the
initial progress won't even be reflected in the RE% – there are just so
many features in this game that are intertwined into each other, and I
only consider functions to be "reverse-engineered" once we understand
every involved piece of code and data, and labeled every absolute
memory reference in it. (Yes, that means that the percentages on the front
page are actually underselling ReC98's progress quite a bit, and reflect a
pretty conservative lower bound on our actual understanding of the games.)
So, when I get asked to look directly at gameplay code right now,
it's quite the struggle to find a place that can be covered within a push
or two and that would immediately benefit
scoreplayers. The basics of score and combo handling themselves
managed to fit in pretty well, though:
Just like TH04 and TH05, TH03 stores the current score as 8
binary-coded
decimal digits. Since the last constant 0 is not included, the maximum
score displayable without glitches therefore is 999,999,990 points, but
the game will happily store up to 24,699,999,990 points before the score
wraps back to 0.
There are (surprisingly?) only 6 places where the game actually
adds points to the score. Not quite sure about all of them yet, but they
(of course) include ending a combo, killing enemies, and the bonus at the
end of a round.
Combos can be continued for 80 frames after a 2-hit. The hit counter
can only be increased in the first 48, and effectively resets to 0 for the
last 32, when the Spell Point value starts blinking.
TH03 can track a total of 16 independent "hit combo sources" per
player, simultaneously. These are not related to the number of
actual explosions; rather, each explosion is assigned to one of the 16
slots when it spawns, and all consecutive explosions spawned from that one
will then add to the hit combo in that slot. The hit number displayed in
the top left is simply the largest one among all these.
Oh well, at least we still got a bit of PI% out of this one. From this
point though, the next push (or two) should be enough to cover the big
128-byte player structure – which by itself might not be immediately
interesting to scoreplayers, but surely is quite a blocker for everything
else.
Now that's more like the speed I was expecting! After a few more
unused functions for palette fading and rectangle blitting, we've reached
the big line drawing functions. And the biggest one among them,
drawing a straight line at any angle between two points using
Bresenham's algorithm, actually happens to be the single longest
function present in more than one binary in all of PC-98 Touhou, and #23
on the list of individual longest functions.
And it technically has a ZUN bug! If you pass a point outside the
(0, 0) - (639, 399) screen range, the function will calculate a new point
at the edge of the screen, so that the resulting line will retain the
angle intended by the points given. Except that it does so by calculating
the line slope using an integer division rather than a floating-point
one. Doesn't seem like it actually causes any weirdly
skewed lines to be drawn in-game, though; that case is only hit in the
Mima boss fight, which draws a few lines with a bottom coordinate of
400 rather than the maximum of 399. It might also cause the wrong
background pixels to be restored during parts of the YuugenMagan fight,
leading to flickering sprites, but seriously, that's an issue everywhere
you look in this game.
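The bug itself fits into two lines (a miniature reconstruction, not the actual decompilation):

```cpp
// Clipping a line against the bottom edge of the 640×400 screen: the
// new endpoint is calculated with a truncated slope.
void clip_to_bottom(int x0, int y0, int &x1, int &y1)
{
	int slope = ((x1 - x0) / (y1 - y0)); // integer division – the bug
	// double slope = ((double)(x1 - x0) / (y1 - y0)); // the intent
	x1 = (x0 + (slope * (399 - y0)));
	y1 = 399;
}
```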
Together with the rendering-text-to-VRAM function we've mostly already
known from TH02, this pushed the total RE percentage well over 20%, and
almost doubled the TH01 RE percentage, all within three pushes. And
comparatively, it went really smoothly, to the point (ha) where I
even had enough time left to also include the single-point functions that
come next in that code segment. Since about half of the remaining
functions in OP.EXE are present in more than just itself,
I'll be able to at least keep up this speed until OP.EXE hits
the 70% RE mark. That is, as long as the backers' priorities continue to
be generic RE or "giving some love to TH01"… we don't have a precedent for
TH01's actual game code yet.
And that's all the TH01 progress funded for January! Next up, we actually
do have a focus on TH03's game and scoring mechanics… or at least
the foundation for that.
So, the thing that made me so excited about TH01 were all those bulky C
reimplementations of master.lib functions. Identical copies in all three
executables, trivial to figure out and decompile, removing tons of
instructions, and providing a foundation for large parts of the game
later. The first set of functions near the end of that shared code segment
deals with color palette handling, and master.lib's resident palette
structure in particular. (No relation to the game's
resident structure.) Which directly starts us out with pretty much
all the decompilation difficulties imaginable:
iteration over internal DOS structures via segment pointers – Turbo
C++ doesn't support a lot of arithmetic on those, requiring tons of casts
to make it work (see the sketch after this list)
calls to a far function near the beginning of a segment
from a function near the end of a segment – these are undecompilable until
we've decompiled both functions (and thus, the majority of the segment),
and need to be spelled out in ASM for the time being. And if the caller
then stores some of the involved variables in registers, there's no
way around the ugliest of workarounds, spelling out opcode bytes…
surprising color format inconsistencies – apparently, GRB (rather than
RGB) is some sort of wider standard in PC-98 inter-process communication,
because it matches the order of the hardware's palette register ports
(0AAh = green,
0ACh = red,
0AEh = blue)? Yet the
game's actual palette still uses RGB…
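To illustrate the first point with a classic example – walking the DOS memory block chain. This is not the actual resident palette code, but the same kind of cast gymnastics:

```cpp
#include <dos.h> // MK_FP(), FP_SEG() – Turbo C++

struct MCB { // DOS Memory Control Block, simplified
	char type;           // 'M' = more blocks follow, 'Z' = last block
	unsigned owner_psp;
	unsigned paragraphs; // block size, in 16-byte units
};

// Segment arithmetic has to go through FP_SEG()/MK_FP() round-trips,
// with a cast at every step.
void walk_mcb_chain(unsigned first_segment)
{
	MCB far *mcb = (MCB far *)MK_FP(first_segment, 0);
	while(mcb->type == 'M') {
		mcb = (MCB far *)MK_FP(FP_SEG(mcb) + mcb->paragraphs + 1, 0);
	}
}
```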
And as it turns out, the game doesn't even use the resident palette
feature. Which adds yet another set of functions to the, uh, learning
experience that ZUN must have chosen this game to be. I wouldn't be
surprised if we manage to uncover actual scrapped beta game content later
on, among all the unused code that's bound to still be in there.
At least decompilation should get easier for the next few TH01 pushes now…
right?
A~nd resident structures ended up being exactly
the right thing to start off the new year with.
WindowsTiger and
spaztron64 have already been
pushing for them with their own reverse-engineering, and together with my
own recent GENSOU.SCR RE work, we've clarified just enough
context around the harder-to-explain values to make both TH04's and TH05's
structures fit nicely into the typical time frame of a single push.
With all the apparently obvious and seemingly just duplicated values, it
has always been easy to do a superficial job for most of the structure,
then lose motivation for the last few unknown fields. Pretty glad to got
this finally covered; I've heard that people are going to write trainer
tools now?
Also, where better to slot in a push that, in terms of figures, seems to
deliver 0% RE and only minuscule PI progress, than at the end of
Touhou Patch Center's 5-push order that already had multiple pushes
yielding above-average progress? As usual,
we'll be reaping the rewards of this work in the next few TH04/TH05
pushes…
…whenever they get funded, that is, as for January, the backers have
shifted the priorities towards TH01 and TH03. TH01 especially is something
I'm quite excited about, as we're finally going to see just how fast this
bloated game is really going to progress. Are you excited?
🎉 TH04's and TH05's OP.EXE are now fully
position-independent! 🎉
What does this mean?
You can now add any data or code to the main menus of the two games, by
simply editing the ReC98 source, writing your mod in ASM or C/C++, and
recompiling the code. Since all absolute memory addresses have now been
converted to labels, this will work without causing any instability. See
the position independence section in the FAQ
for a more thorough explanation about why this was a problem.
What does this not mean?
The original ZUN code hasn't been completely reverse-engineered yet, let
alone decompiled. Pretty much all of that is still ASM, which might make
modding a bit inconvenient right now.
Since this push was otherwise pretty unremarkable, I made a video
demonstrating a few basic things you can do with this:
Now, what to do for the last outstanding Touhou Patch Center push?
Bullets, or resident structures?
Just like most of the time, it was more sensible to cover
GENSOU.SCR, the last structure missing in TH05's
OP.EXE,
everywhere it's used, rather than just rushing out OP.EXE
position independence. I did have to look into all of the functions to
fully RE it after all, and to find out whether the unused fields actually
are unused. The only thing that kept this push from yielding even
more above-average progress was the sheer inconsistency in how the games
implemented the operations on this PC-98 equivalent of score*.dat:
OP.EXE declares two structure instances, for simultaneous
access to both Reimu and Marisa scores. TH05 with its 4 playable
characters instead uses a single one, and overwrites it successively for
each character when drawing the high score menu – meaning, you'd only see
Yuuka's scores when looking at the structure inside the rendered high
score menu. However, it still declares the TH04 "Marisa" structure as a
leftover… and also decodes it and verifies its checksum, despite
nothing ever being loaded into it
MAIN.EXE uses a separate ASM implementation of the decoding
and encoding functions
TH05's MAIN.EXE also reimplements the basic loading
functions
in ASM – without the code to regenerate GENSOU.SCR with
default data if the file is missing or corrupted. That actually makes
sense, since any regeneration is already done in OP.EXE, which
always has to load that file anyway to check how much has been cleared
However, there is a regeneration function in TH05's
MAINE.EXE… which actually generates different default
data: OP.EXE consistently sets Extra Stage records to Stage 1,
while MAINE.EXE uses the same place-based stage numbering that
both versions use for the regular ranks
Technically though, TH05's OP.EXE is
position-independent now, and the rest are (should be?)
merely false positives. However, TH04's is
still missing another structure, in addition to its false
positives. So, let's wait with the big announcement until the next push…
which will also come with a demo video of what will be possible then.
Big gains, as expected, but not much to say about this one. With TH05 Reimu
being way too easy to decompile after
📝 the shot control groundwork done in October,
there was enough time to give the comprehensive PI false-positive
treatment to two other sets of functions present in TH04's and TH05's
OP.EXE. One of them, master.lib's super_*()
functions, was used a lot in TH02, more than in any other game… I
wonder how much more that game will progress without even focusing on it
in particular.
Alright then! 100% PI for TH04's and TH05's OP.EXE upcoming…
(Edit: Already got funding to cover this!)
Did WindowsTiger just cover
2% over all games on his
own? While not all of that passed my review, +1.59% RE and +1.66% PI
over all 5 games is still pretty noteworthy, and comfortably pushes TH05
over the 25% mark in RE, and the 60% mark in PI.
However.
While I definitely do appreciate such contributions, reviewing and
adapting these to my current code organization standards also takes more
time than I'd like it to take. And taken to this level, it does
kind of undermine this crowdfunding project, causing both a literal
denial of service and exactly the stress that this crowdfunding was
designed to avoid. Most of the time, I can't merge all of that as-is
without knowingly creating annoyances down the line. But I don't want to
just ignore it either, or reject every non-perfect commit…
That's also why I let it slide this time, due to some of the RE work in
there being genuinely amazing. In the future though, be aware that your
chance of having your work merged diminishes the further you move ahead of
my current master branch. In extreme cases like this one, I'll
then just be waiting until enough generic reverse-engineering pushes have
accrued, and treat the merge as regular work.
But now, time to continue with the regular programming… I am kind
of exhausted from all of this, so no bullets for the next two
Touhou Patch Center pushes, still… Good thing there's still plenty of
simpler things with big percentage gains to be done:
WindowsTiger mostly focused on OP.EXE which I tended to
neglect, as the big MAIN executables seemed to be more
interesting to my backers. (It's not like anyone ever requested OP to be done either – like, who even cares about boring menu
source code, right?) Good that I therefore sort of left it as low-hanging
fruit to be grabbed by outside contributors – because now, TH04's and
TH05's OP.EXE are close to 100% position-independence. The
GENSOU.SCR format is pretty much the only thing missing there
right now, so let's finally go all the way there, I'd say.
And in TH05, there's still Reimu's shot type functions left to be
decompiled.
… nope, with a game whose MAIN.EXE is still just 5%
reverse-engineered and which naturally makes heavy use of
structures, there's still a lot more PI groundwork to be done before RE
progress can speed up to the levels that we've now reached with TH05. The
good news is that this game is (now) way easier to understand: In contrast
to TH04 and TH05, where we needed to work towards player shots over a
two-digit number of pushes, TH03 only needed two for SPRITE16, and half of
one for the playfield shaking mechanism. After that, I could even already
decompile the per-frame shot update and render functions, thanks to TH03's
high number of code segments. Now, even the big 128-byte player structure
doesn't seem all too far off.
Then again, as TH03 shares no code with any other game, this actually was
a completely average PI push. For the remaining three, we'll return to
TH04 and TH05 though, which should more than make up for the slight drop
in RE speed after this one.
In other news, we've now also reached peak C++, with the introduction of
templates! TH03 stores movement speeds in a 4.4 fixed-point
format, which is an 8-bit spin on the usual 16-bit, 12.4 fixed-point
format.
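Here's a minimal sketch of why a template fits here: one definition covers both widths. Names are mine, not necessarily ReC98's:

```cpp
// A minimal sketch of the idea, with made-up names – the actual
// ReC98 type is likely more elaborate:
template <class T, int FRAC> struct FixedPoint {
	T v;  // raw value; 1.0 is stored as (1 << FRAC)

	FixedPoint(T whole) : v(whole << FRAC) {}
	T whole() const { return v >> FRAC; }
};

typedef FixedPoint<char, 4> Speed4_4;     // TH03's 8-bit speeds
typedef FixedPoint<int, 4>  Subpixel12_4; // the usual 16-bit format
```

And since Turbo C++ can inline class methods, accessors like these shouldn't cost anything over raw shift expressions.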
So, where to start? Well, TH04 bullets are hard, so let's
procrastinate and start with TH03 instead.
The 📝 sprite display functions are the
obvious blocker for any structure describing a sprite, and therefore most
meaningful PI gains in that game… and I actually did manage to fit a
decompilation of those three functions into exactly the amount of time
that the Touhou Patch Center community votes allotted to TH03
reverse-engineering!
And a pretty amazing one at that. The original code was so obviously
written in ASM and was just barely decompilable by exclusively using
register pseudovariables and a bit of goto, but I was able to
abstract most of that away, not least thanks to a few helpful optimization
properties of Turbo C++… seriously, I can't stop marveling at this ancient
compiler. The end result is readable, clear, and, dare I say,
portable?! To anyone interested in porting TH03,
take a look. How painful would it be to port that away from 16-bit
x86?
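If you've never seen register pseudovariables: the raw, pre-abstraction state of such a decompilation looks roughly like this made-up sketch – not the actual TH03 code:

```cpp
// Turbo C++'s register pseudovariables map 1:1 onto the original
// instructions, before any abstraction is applied:
void sprite16_put_raw(void)
{
	_ES = 0xA800;       // MOV AX, 0A800h + MOV ES, AX (VRAM plane)
	_BX = _BX << 5;     // SHL BX, 5 (sprite ID → byte offset)
row:
	// … one sprite row blitted through ES:[BX] here …
	_CX--;
	if(_CX != 0) goto row;  // LOOP row
}
```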
However, this push is also a typical example of how the RE/PI priorities
can only control what I look at – the outcome can still differ
greatly. Even though the priorities were 65% RE and 35% PI, the progress
outcome was +0.13% RE and +1.35% PI. But hey, we've got one more push with
a focus on TH03 PI, so maybe that one will include more RE than
PI, and then everything will end up just as ordered?
With no feedback to 📝 last week's blog post,
I assume you all are fine with how things are going? Alright then, another
one towards position independence, with the same approach as before…
Since -Tom- wanted to learn something about how the PC-98
EGC is used in TH04 and TH05, I took a look at master.lib's
egc_shift_*() functions. These simply do a hardware-accelerated
memmove() of any VRAM region, and are used for screen shaking
effects. Hover over the image below for the raw effect:
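In C terms, each of these functions boils down to something like this sketch of the unaccelerated equivalent, for one 640-pixel-wide plane and a downward shift of a single row:

```cpp
#include <mem.h>

#define ROW_BYTES (640 / 8)

void shift_down_1(unsigned char far *plane, int top, int bottom)
{
	_fmemmove(
		plane + ((top + 1) * ROW_BYTES),  // destination: one row lower
		plane + ( top      * ROW_BYTES),  // source: the region itself
		(bottom - top) * ROW_BYTES
	);
}
// The EGC performs this copy inside the graphics hardware, 16 pixels
// at a time and across all 4 bitplanes at once.
```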
Then, I finally wanted to take a look at the bullet structures, but it
required way too much reverse-engineering to even start within ¾ of
a position independence push. Even with the help of uth05win –
bullet handling was changed quite a bit from TH04 to TH05.
What I ultimately settled on was more raw, "boring" PI work based around
an already known set of functions. For this one, I looked at vector
construction… and this time, that actually made the games a little
bit more position-independent, and wasn't just all about removing
false positives from the calculation. This was one of the few sets of
functions that would also apply to TH01, and it revealed just how
chaotically that game was coded. This one commit shows three ways in which
ZUN stored regular 2D points in TH01:
"regularly", like in master.lib's Point structure (X first, Y second)
reversed (Y first, X second)
and, obviously, as two distinct variables declared next to each other
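The same chaos in code form, with made-up variable names:

```cpp
struct Point       { int x, y; };  // 1) master.lib order
struct PointYFirst { int y, x; };  // 2) reversed
int orb_y, orb_x;                  // 3) no structure at all
```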
… yeah. But in more productive news, this did actually lay the
groundwork for TH04 and TH05 bullet structures. Which might even be coming
up within the next big, 5-push order from Touhou Patch Center? These are
the priorities I got from them, let's see how close I can get!
So, here we have the first two pushes with an explicit focus on position
independence… and they start out looking barely different from regular
reverse-engineering? They even already deduplicate a bunch of item-related
code, which was simple enough that it required little additional work?
Because the actual work, once again, was in comparing uth05win's
interpretations and naming choices with the original PC-98 code? So that
we only ended up removing a handful of memory references there?
(Oh well, you can mod item drops now!)
So, continuing to interpret PI as a mere by-product of reverse-engineering
might ultimately drive up the total PI cost quite a bit. But alright then,
let's systematically clear out some false positives by looking at
master.lib function calls instead… and suddenly we get the PI progress we
were looking for, nicely spread out over all games since TH02. That kinda
makes it sound like useless work, only done because it's dictated by some
counting algorithm on a website. But decompilation will want to convert
all of these values to decimal anyway. We're merely doing that right now,
across all games.
Then again, it doesn't actually make any game more
position-independent, and only proves how position-independent it already
was. So I'm really wondering right now whether I should just rush
actual position independence by simply identifying structures and
their sizes, and not bother with members or false positives until that's
done. That would certainly get the job done for TH04 and TH05 in just a
few more pushes, but then leave all the proving work (and the road
to 100% PI on the front page) to reverse-engineering.
I don't know. Would it be worth it to have a game that's "maybe
fully position-independent", only for there to maybe be rare edge
cases where it isn't?
Or maybe, continuing to strike a balance between identifying false
positives (fast) and reverse-engineering structures (slow) will continue
to work out like it did now, and make us end up close to the current
estimate, which was attractive enough to sell out the crowdfunding for the
first time… 🤔
Please give feedback! If possible, by Friday evening UTC+1, before I start
working on the next PI push, this time with a focus on TH04.
No priorities, again…?! Please don't do this to me… 😕
Well, let's not continue with TH05 then 😛 And instead use the occasion to
commit this
interesting discovery, made by @m1yur1 last year. Yup, TH03's "ZUNSP"
sprite driver is actually a "rebranded" version of Promisence Soft's
SPRITE16.COM. Sure, you were allowed to use this
driver in your own game, but replacing the copyright with your own isn't
exactly the nicest thing to do… That now makes three library programmers
that ZUN didn't credit. Makes me wonder what makes M. Kajihara so special.
Probably the fact that Touhou has always been about the music for ZUN,
first and foremost.
But what makes this more than a piece of trivia is the fact that
Promisence Soft's SPRITE16 sample game StormySpace was bundled
with documentation on the driver. Shoutout to the Neo Kobe PC-98
collection for preserving the original release!
That means more documented third-party code that we don't necessarily have
to reverse-engineer, just like master.lib or KAJA's PMD driver. However,
the PC-98 EGC is rather complex and definitely not designed
for alpha-tested 16-color sprite blitting. So it (once again) took quite a
while to make sense of SPRITE16's code and the available documentation on
the EGC, to come up with satisfying function names. As a result, I'm going
to distribute the entire RE work related to TH03's SPRITE16 interface
across a total of three pushes, this one being the first of them.
The second one will reverse-engineer the SPRITE16 code reachable from
its interrupt handler, and also come with somewhat detailed English
documentation on the PC-98 EGC raster ops in particular.
And just in time for zorg's last outstanding pushes, the
TH05 shot type control functions made the speedup happen!
TH05 as a whole is now 20% reverse-engineered, and 50% position
independent,
TH05's MAIN.EXE is now even below TH02's in terms of not
yet RE'd instructions,
and all price estimates have now fallen significantly.
It would have been really nice to also include Reimu's shot
control functions in this last push, but figuring out this entire system,
with its weird bitflags and switch statement
micro-optimizations, was once again taking way longer than it should
have. Especially with my new-found insistence on turning this obvious
copy-pasta into something somewhat readable and terse…
But with such a rather tabular visual structure, things should now be
moddable in a hopefully consistent way. Of course, since we're
only at 54% position independence for MAIN.EXE,
this isn't possible yet without
crashing the game, but modifying damage would already work.
Deathbombs confirmed, in both TH04 and TH05! On the surface, it's the same
8-frame window as in most Windows games, but due to the slightly lower
PC-98 frame rate of 56.4 Hz, those 8 frames amount to ≈141.8 ms rather
than the ≈133.3 ms they would be at 60 fps – TH04 and TH05 are actually
slightly more lenient.
The last function in front of the TH05 shot type control functions marks
the player's previous position in VRAM to be redrawn. But as it turns out,
"player" not only means "the player's option satellites on shot levels ≥
2", but also "the explosion animation if you lose a life", which required
reverse-engineering both things, ultimately leading to the confirmation of
deathbombs.
It actually was kind of surprising that we then had reverse-engineered
everything related to rendering all three things mentioned above,
and could also cover the player rendering function right now. Luckily,
TH05 didn't decide to also micro-optimize that function into
un-decompilability; in fact, it wasn't changed at all from TH04. Unlike
the one invalidation function whose decompilation would have
actually been the goal here…
But now, we've finally gotten to where we wanted to… and only got 2
outstanding decompilation pushes left. Time to get the website ready for
hosting an actual crowdfunding campaign, I'd say – It'll make a better
impression if people can still see things being delivered after the big
announcement.
The glacial pace continues, with TH05's unnecessarily, inappropriately
micro-optimized, and hence, un-decompilable code for rendering the current
and high score, as well as the enemy health / dream / power bars. While
the latter might still pass as well-written ASM, the former goes to such
ridiculous levels that it ends up being technically buggy. If you
enjoy quality ZUN code, it's
definitely worth a read.
In TH05, this all still is at the end of code segment #1, but in TH04,
the same code is scattered all over that segment. And since I really
wanted to move that code into its final form now, I finally did the
research into decompiling from anywhere else in a segment.
Turns out we actually can! It's kinda annoying, though: After splitting
the segment after the function we want to decompile, we then need to group
the two new segments back together into one "virtual segment" matching the
original one. But since all ASM in ReC98 heavily relies on being
assembled in MASM mode, we then start to suffer from MASM's group
addressing quirk. Which then forces us to manually prefix every single
function call
from inside the group
to anywhere else within the newly created segment
with the group name. It's stupidly boring busywork – not least because of
all the function calls you mustn't prefix, which rules out a blind
search-and-replace. Special tooling might make this
easier, but I don't have it, and I'm not getting crowdfunded for it.
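For illustration, a sketch of the resulting structure, with hypothetical segment and function names:

```asm
; Both halves of the split get grouped back together into one
; "virtual segment" matching the original:
main_01_TEXT	segment byte public 'CODE'
	; … ASM functions in front of the decompiled one …
main_01_TEXT	ends

main_01_TEXT_2	segment byte public 'CODE'
	; … ASM functions behind it …
main_01_TEXT_2	ends

main_01	group	main_01_TEXT, main_01_TEXT_2

; The quirk: calls that cross from one half into the other have to
; spell out the group to get offsets relative to the original
; segment, while calls to targets outside the group must not:
	call	main_01:_func_in_the_other_half
	call	_func_in_some_other_segment
```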
So while you now definitely can request any specific thing in any
of the 5 games to be decompiled right now, it will take slightly
longer, and cost slightly more.
(Except for that one big segment in TH04, of course.)
Only one function away from the TH05 shot type control functions now!
Here we go, new C code! …eh, it will still take a bit to really get
decompilation going at the speeds I was hoping for. Especially with the
sheer amount of stuff that is set in the first few significant
functions we actually can decompile, which now all has to be
correctly declared in the C world. Turns out I had spent the last 2 years
screwing up the case of exported functions, and even some of their names,
so that they didn't actually reflect their calling convention… yup. (With
Turbo C++, pascal functions are exported in all-caps and without a leading
underscore, while cdecl ones keep their case and gain one.) That's
just the stuff you tend to forget while it doesn't matter.
To make up for that, I decided to research whether we can make use of some
C++ features to improve code readability after all. Previously, it seemed
that TH01 was the only game that included any C++ code, whereas TH02 and
later seemed to be 100% C and ASM. However, during the development of the
soon to be released new build system, I noticed that even this old
compiler from the mid-90's, infamous for prioritizing compile speeds over
all but the most trivial optimizations, was capable of quite surprising
levels of automatic inlining with class methods…
…leading the research to culminate in the mindblow that is
9d121c7 – yes, we can use C++ class methods
and operator overloading to make the code more readable, while still
generating the same code as if we had just used C and preprocessor
macros.
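The gist of that commit, as a sketch with made-up names – both versions should compile down to the same two ADD instructions:

```cpp
// The C way:
#define pp_add(p, o) ((p).x += (o).x, (p).y += (o).y)

// The C++ way – readable at the call site, and inlined by Turbo C++:
struct PlayfieldPoint {
	int x, y;  // 12.4 fixed-point
	void operator +=(const PlayfieldPoint &other) {
		x += other.x;
		y += other.y;
	}
};
// Usage: pos += velocity; instead of pp_add(pos, velocity);
```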
Looks like there's now the potential for a few pull requests from outside
devs that apply C++ features to improve the legibility of previously
decompiled and terribly macro-ridden code. So, if anyone wants to help
without spending money…
Back to actual development! Starting off this stretch with something
fairly mechanical, the few remaining generic boss and midboss state
variables. And once we started converting the constant numbers used for and
around those variables into decimal, the estimated position independence
probability immediately jumped by 5.31% for TH04's MAIN.EXE,
and 4.49% for TH05's – despite not having made the games any more
position-independent than they were before. Yup… lots of false positives in
there, but
who can really know for sure without having put in the work.
But now, we've RE'd enough to finally decompile something again next,
4 years after the last decompilation of anything!
Calculating the average speed of the previous crowdfunded pushes, we arrive at estimated "goals" of…
So, time's up, and I didn't even get to the entire PayPal integration and FAQ parts… 😕 Still got to clarify a couple of legal questions before formally starting this, though. So for now, let's continue with zorg's next 5 TH05 reverse-engineering and decompilation pushes, and watch those prices go down a bit… hopefully quite significantly!
In order to be able to calculate how many instructions and absolute memory references are actually being removed with each push, we first need the database with the previous pushes from the Discord crowdfunding days. And while I was at it, I also imported the summary posts from back then.
Also, we now got something resembling a web design!
So yeah, "upper bound" and "probability". In reality it's certainly better than the numbers suggest, but as I keep saying, we can't say much about position independence without having reverse-engineered everything.
Now with the number of not yet RE'd x86 instructions that you might have seen in the thpatch Discord. They're a bit smaller now; back then, I didn't filter out a couple of directives.
Yes, requesting these currently is super slow. That's why I didn't want to have everyone here yet!
Next step: Figuring out the actual total number of game code instructions, for that nice "% done". Also, trying to do the same for position independence.
Boss explosions! And… urgh, I really also had to wade through that overly complicated HUD rendering code. Even though I had to pick -Tom-'s 7th push here as well, the worst of that is still to come. TH04 and TH05 exclusively store the current and high score internally as unpacked little-endian BCD, with some pretty dense ASM code involving the venerable x86 BCD instructions to update it.
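To visualize that format, here's a sketch – the helper is mine, written in C for clarity, whereas ZUN's original gets the same effect with ADD/AAA on each digit:

```cpp
// One decimal digit per byte, least significant digit first.
// 12,345,600 points would be stored as
unsigned char score[8] = { 0, 0, 6, 5, 4, 3, 2, 1 };

void score_add(unsigned char digits[8], unsigned long points)
{
	for(int i = 0; i < 8; i++) {
		digits[i] += (unsigned char)(points % 10);
		points /= 10;
		if(digits[i] >= 10) {  // decimal carry into the next digit
			digits[i] -= 10;
			points++;
		}
	}
}
```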
So, what's actually the goal here? Since I was given no priorities, I still haven't had to (potentially) waste time researching whether we really can decompile from anywhere else inside a segment other than backwards from the end. So, the most efficient place for decompilation right now still is the end of TH05's main_01_TEXT segment. With maybe 1 or 2 more reverse-engineering commits, we'd have everything for an efficient decompilation up to sub_123AD. And that mass of code just happens to include all the shot type control functions, and makes up 3,007 instructions in total, or 12% of the entire remaining unknown code in MAIN.EXE.
So, the most reasonable thing would be to actually put some of the upcoming decompilation pushes towards reverse-engineering that missing part. I don't think that's a bad deal, since it will allow us to mod TH05 shot types in C sooner – but zorg and qp might disagree.
Next up: thcrap TL notes, followed by finally finishing GhostPhanom's old ReC98 future-proofing pushes. I really don't want to decompile without a proper build system.
Sometimes, "strategically picking things to reverse-engineer" unfortunately also means "having to move seemingly random and utterly uninteresting stuff, which will only make sense later, out of the way". Really, this was so boring. Gonna get a lot more exciting in the next ones though.
So, let's continue with player shots! …eh, or maybe not directly, since they involve two other structure types in TH05, which we'd have to cover first. One of them is a different sort of sprite, and since I like me some context in my reverse-engineering, let's disable every other sprite type first to figure out what it is.
One of those other sprite types turned out to be the little sparks flying away from killed stage enemies, midbosses, and grazed bullets; easy enough to also RE right now. Turns out they use the same 8 hardcoded 8×8 sprites in TH02, TH04, and TH05. Except that it's actually 64 16×8 sprites, because ZUN wanted to pre-shift them for all 8 possible start pixels within a planar VRAM byte (rather than, like, just writing a few instructions to shift them programmatically), leading to them taking up 1,024 bytes rather than just 64.
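Those few instructions could have looked like this sketch, with vram_put_16() standing in as a hypothetical 16-pixel VRAM write:

```cpp
void vram_put_16(int byte_x, int y, unsigned int pixels);  // hypothetical

void spark_put(const unsigned char sprite[8], int x, int y)
{
	int shift = x & 7;  // start pixel within the planar VRAM byte
	for(int row = 0; row < 8; row++) {
		// Widen the 8-pixel row into the 16-pixel, pre-shifted form
		// that the data files store explicitly, 8 times per sprite:
		unsigned int wide = (unsigned int)sprite[row] << (8 - shift);
		vram_put_16(x >> 3, y + row, wide);
	}
}
```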
Oh, and the thing I *actually* wanted to RE was the decay animation whenever a shot hits something. Not too complex either, especially since it's exclusive to TH05.
And since there was some time left and I actually have to pick some of the next RE places strategically to best prepare for the upcoming 17 decompilation pushes, here's two more function pointers for good measure.
Stumbled across one more drawing function in the way… which was only a duplicated and seemingly pointlessly micro-optimized copy of master.lib's super_roll_put_tiny() function, used for fast display of 4-color 16×16 sprites.
With this out of the way, we can tackle player shot sprite animation next. This will get rid of a lot of code, since every power level of every character's shot type is implemented in its own function. Which makes up thousands of instructions in both TH04 and TH05 that we can nicely decompile in the future without going through a dedicated reverse-engineering step.
P0043
TH04/TH05 RE (Scrolling stage backgrounds, part 1)
P0044
TH04/TH05 RE (Scrolling stage backgrounds, part 2)
P0045
TH04/TH05 RE (Scrolling stage backgrounds, part 3)
Turns out I had only been about half done with the drawing routines. The rest was all related to redrawing the scrolling stage backgrounds after other sprites were drawn on top. Since the PC-98 does have hardware-accelerated scrolling, but no hardware-accelerated sprites, everything that draws animated sprites into scrolling VRAM must also make sure that the background tiles covered by the sprite are redrawn in the next frame, which required a bit of ZUN code. And those are the functions that have been in the way of the expected rapid reverse-engineering progress that uth05win was supposed to bring. So, looks like everything's going to go really fast now?
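The core of that redraw logic amounts to something like this sketch (names are mine): after blitting a sprite, flag every 16×16 tile it covers, so that the next frame can restore just those tiles from the stage background.

```cpp
extern unsigned char tile_dirty[400 / 16][640 / 16];

void tiles_invalidate(int left, int top, int w, int h)
{
	for(int ty = (top >> 4); ty <= ((top + h - 1) >> 4); ty++) {
		for(int tx = (left >> 4); tx <= ((left + w - 1) >> 4); tx++) {
			tile_dirty[ty][tx] = 1;
		}
	}
}
```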
… yeah, no, we won't get very far without figuring out these drawing routines.
Which process data that comes from the .STD files.
Which have various arrays related to the background… including one to specify the scrolling speed. And wait, setting that to 0 actually is what starts a boss battle?
So, have a TH05 Boss Rush patch: 2018-12-26-TH05BossRush.zip
Theoretically, this should have also worked for TH04, but for some reason,
the Stage 3 boss gets stuck on the first phase if we do this?
Actually, I lied, and lasers ended up coming with everything that makes reverse-engineering ZUN code so difficult: weirdly reused variables, unexpected structures within structures, and those TH05-specific nasty, premature ASM micro-optimizations that will waste a lot of time during decompilation, since the majority of the code actually was C, except for where it wasn't.
Laser… is not difficult. In fact, out of the remaining entity types I checked, it's the easiest one to fully grasp from uth05win alone, as it's only drawn using master.lib's line, circle, and polygon functions. Everything else ends up calling… something sprite-related that needs to be RE'd separately, and which uth05win doesn't help with, at all.
Oh, and since the speed of shoot-out lasers (as used by TH05's Stage 2 boss, for example) always depends on rank, we also got this variable now.
This only covers the structure itself – uth05win's member names for the LASER structure were not only a bit too unclear, but also plain wrong and misleading in one instance. The actual implementation will follow in the next one.
So, after introducing instruction number statistics… let's go for over 2,000 lines that won't show up there immediately. That being (mid-)boss HP, position, and sprite ID variables for TH04/TH05. Doesn't sound like much, but it kind of is if you insist on decimal numbers for easier comparison with uth05win's source code.
Let's start this stretch with a pretty simple entity type, the growing and shrinking circles shown during bomb animations and around bosses in TH04 and TH05. Which can be drawn in varying colors… wait, what's all this inlined and duplicated GRCG mode and color setting code? Let's move that out into macros too, it takes up too much space when grepping for constants, and will raise a "wait, what was that I/O port doing again" question for most people reading the code again after a few months.
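Roughly what those macros look like – the names are mine, but 0x7C and 0x7E are the PC-98's actual GRCG I/O ports:

```cpp
#include <dos.h>

#define GRCG_MODEREG 0x7C  // off / TDW / TCR / RMW mode
#define GRCG_TILEREG 0x7E  // written 4 times, once per bitplane

#define grcg_setcolor(mode, col) { \
	outportb(GRCG_MODEREG, mode); \
	outportb(GRCG_TILEREG, ((col) & 1) ? 0xFF : 0x00); /* B */ \
	outportb(GRCG_TILEREG, ((col) & 2) ? 0xFF : 0x00); /* R */ \
	outportb(GRCG_TILEREG, ((col) & 4) ? 0xFF : 0x00); /* G */ \
	outportb(GRCG_TILEREG, ((col) & 8) ? 0xFF : 0x00); /* E */ \
}
#define grcg_off() outportb(GRCG_MODEREG, 0x00)
```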
🎉 With this push, we've also hit a milestone! Less than 200,000 unknown x86 instructions remain until we've completely reverse-engineered all of PC-98 Touhou.
While we're waiting for Bruno to release the next thcrap build with ANM header patching, here are the resulting commits of the ReC98 CDG/CD2 special offer purchased by DTM, reverse-engineering all code that covers these formats.
> OK, let's do a quick ReC98 update before going back to thcrap, shouldn't take long
> Hm, all that input code is kind of in the way, would be nice to cover that first to ease comparisons with uth05win's source code
> What the hell, why does ZUN do this? Need to do more research
> …
> OK, research done, wait, what are those other functions doing?
> Wha, everything about this is just ever so slightly awkward
Which ended up turning this one update into 2/10, 3/10, 4/10 and 5/10 of zorg's reverse-engineering commits. But at least we now got all shared input functions of TH02-TH05 covered and well understood.
What do you do if the TH06 text image feature for thcrap should have been done 3 days™ ago, but keeps getting more and more complex, and you have a ton of other pushes to deliver anyway? Get some distraction with some light ReC98 reverse-engineering work. This is where it becomes very obvious how much uth05win helps us with all the games, not just TH05.
5a5c347 is the most important one in there – it's the missing substructure that now makes every other sprite-like structure trivial to figure out.