- 📝 Posted:
- 💰 Funded by:
- Ember2528, Arandui
- 🏷️ Tags:
Here we go, the finale of the Shuusou Gyoku Linux port, culminating in packages for the Arch Linux AUR and Flathub! No intro, this is huge enough as it is.
- Compiling with C++ Standard Library Modules for Linux
- Porting the remaining logic code to Clang
- Picking a free MS Gothic replacement
- Reasons for using the standard Linux text library stack
- The individual Linux text rendering libraries
- Debugging vertical placement issues
- The new icon
- Challenges with pixel art icons in modern OS UIs
- Windows icon jank
- Linux icon jank
- Packaging
- Arch Linux
- Flatpak
- Future work
- Porting the MIDI backend
- Patching IPAMonaGothic
Before we could compile anything for Linux, I still needed to add GCC/Clang support to my Tup building blocks, in what's hopefully the last piece of build system-related work for a while. Of course, the decision to use one compiler over the other for the Linux build hinges entirely on their respective support for C++ standard library modules. I 📝 rolled out import std;
for the Windows build last time and absolutely do not want to code without it anymore. According to the cppreference compiler support table at the time I started development, we had the choice between
- experimental support in the not-yet-released GCC 15, and
- partial support as of Clang 17, two versions ago.
GCC's current implementation does compile in current snapshot builds, but still throws lots of errors when used within the Shuusou Gyoku codebase. Clang's allegedly partial support, on the other hand, turned out just fine for our purposes. So for now, Clang it is, despite not being the preferred C/C++ compiler on most Linux distributions. In the meantime, please forgive the additional run-time dependency on libc++
, its C++ standard library implementation. 🙇 Let's hope that it all will actually work in GCC 15 once that version comes out sometime in 2025.
At a high level, my Tup building blocks only have to do a single thing to support standard library modules with a given compiler: Finding the std
and std.compat
module interface units at the compiler's standard locations, and compiling them with the same compiler flags used for the rest of the project. Visual Studio got the right idea about this: If you compile on its command prompts, you're already using a custom shell with environment variables that define the necessary paths and parameters for your target platform. Therefore, it makes sense to store these module units at such an easily reachable path – and sure enough, you can reliably find the std
module unit at %VCToolsInstallDir%\modules\std.ixx
. While this is hands down the optimal way of locating this file, I can understand why GCC and Clang would want module lookup to work in generic shells without polluting environment variables. In this case, asking some compiler binary for that path is a decent second-best option.
Unfortunately, that would have been way too simple. Instead, these two compilers approached the problem from the angle of general module usage within the common build systems out there:
- Using modules within a project introduces a new kind of dependency relation between C++ source files, forcing all such code to be compiled in an implicitly defined order. For Tup, this isn't much of a problem because it has always required 📝 order-relevant dependencies to be explicitly specified. So it's been quite amusing for me to hear all these CMake-entrenched CppCon speakers in recent years comment on how this aspect of modules places such a burden on build systems… 🤭
- Then again, their goal is a world where devs just write
import name_of_module;
and the build system figures out a project's dependency graph on its own by scanning all source files prior to compilation. Or rather, asking the compiler to parse the source files and dump out this information, using the -fdeps-* options on GCC, the separate clang-scan-deps tool for Clang, or the cl /scanDependencies option for MSVC.
- Because each of the three major compilers has its own implementation of modules, it's understandable why the options and tools are different. Obviously though, CMake is interested in at least getting all three to output the dependency information in the same format. So they got onto the C++ committee's SG15 working group and proposed a JSON format, which GCC and Clang subsequently implemented.
- But wait! The source files for the std and std.compat modules don't lie inside the source tree and couldn't be found by such a scan over the declared project files. So SG15 later simply proposed using the same JSON format for this purpose and installing such a JSON file together with the standard library implementation.
- But wait! That only shifted the problem, because now we need to find that JSON file. What does the paper have to say on that issue?
- For the Standard Library:
- The build system should be able to query the toolchain (either the compiler or relevant packaging tools) for the location of that metadata file.
Wonderful. Just what we wanted to do all along, only with an additional layer of indirection that now forces every build system to include a JSON parser somewhere in its architecture. 🤦
In CMake's defense, they did try to get other build systems, including Tup, involved in these proposals. Can't really complain now if that was the consensus of everybody who wanted to engage in this discussion at the time. Still, what a sad irony that they reached out to Tup users on the exact day in 2019 at which I retired from thcrap and shelved all my plans of using Tup for modern C++ code…
So, to locate the interface units of standard library modules on Clang and GCC, a build system must do the following:
1. Ask the compiler for the path to the modules.json file, using the 30-year-old -print-file-name option.
   GCC and Clang implement this option in the worst possible way by basically conditionally prepending a path to the argument and then printing it back out again. If the compiler can't find the given file within its inscrutable list of paths or you made a typo, you can only detect this by string-comparing its output with your parameter. I can't imagine any use case that wouldn't prefer an error instead.
   Clang was supposed to offer the conceptually saner -print-library-module-manifest-path option, but of course, this is modern C++, and every single good idea must be accompanied by at least one other half-baked design or implementation decision.
2. Load the JSON file with the returned file name.
3. Parse the JSON file.
4. Scan the "modules" array for an entry whose "logical-name" matches the name of the standard module you're looking for.
5. Discover that the "source-path" is actually relative and will need to be turned into an absolute one for your compilation command line. Thankfully, it's just relative to the path of the JSON file we just parsed.
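The JSON part of this lookup is simple enough to sketch in a few lines of deliberately naive C++. (The manifest shape is paraphrased from the proposal, and the string scanning assumes that each "source-path" follows its "logical-name" — a real build system would want an actual JSON parser here, which is exactly the complaint.)

```cpp
#include <filesystem>
#include <string>
#include <string_view>

// Returns the string value of the first [key] at or after [from].
std::string JSONValue(std::string_view json, std::string_view key, size_t from)
{
	const auto colon = json.find(':', json.find(key, from));
	const auto start = (json.find('"', colon) + 1);
	return std::string{ json.substr(start, (json.find('"', start) - start)) };
}

// Finds the interface unit of `std` inside the given modules.json manifest,
// previously loaded from [manifest_fn].
std::filesystem::path StdModulePath(
	const std::filesystem::path& manifest_fn, std::string_view manifest
)
{
	// The quotes conveniently prevent a match with `std.compat`.
	const auto entry = manifest.find("\"std\"");

	// "source-path" is relative to the directory of modules.json itself.
	return (manifest_fn.parent_path() / JSONValue(manifest, "source-path", entry));
}
```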
Sure, you can turn everything into a one-liner on Linux shells, but at what cost?
clang++ -stdlib=libc++ -c -Wno-reserved-module-identifier -std=c++2c --precompile $(dirname $(clang -print-file-name=libc++.modules.json))/$(jq -r '.["modules"][] | select(."logical-name"=="std")."source-path"' $(clang -print-file-name=libc++.modules.json))
This one-liner calls clang -print-file-name at both of the places in the command line where we need the file name. But, uh, CMake's implementation is 170 lines long…
At least it's pretty straightforward to then use these compiled modules. As far as our Tup building blocks are concerned, it's just another explicit input and a set of command-line flags, indistinguishable from a library. For Clang, the -fmodule-file=module_name=path
option is all that's required for mapping the logical module names to the respective compiled debug or release version.
GCC, however, decided to tragically over-engineer this mapping by devising a plaintext protocol for a microservice like it's 2014. Reading the usage documentation is truly soul-crushing as GCC tries everything in its power to not be like Clang and just have simple parameters. Fortunately, this mapper does support files as the closest alternative to parameters, which we can just echo
from Tup for some 📝 90's response file nostalgia. At least I won't have to entertain this folly for a moment longer after the Lua code is written and working…
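For reference, this is roughly how the same logical name → compiled module mapping looks on both compilers' command lines. (A sketch with made-up file names; GCC's mapper file format – one module name/CMI path pair per line – should be double-checked against its documentation.)

```sh
# Clang: one flag per module, straight on the command line.
clang++ -std=c++2c -stdlib=libc++ -fmodule-file=std=obj/std.pcm -c main.cpp

# GCC: the same mapping goes through the module mapper. A plain file –
# here, a hypothetical obj/mapper.txt containing
#
#     std obj/std.gcm
#     std.compat obj/std.compat.gcm
#
# – is the closest it gets to simple parameters:
g++ -std=c++23 -fmodules-ts -fmodule-mapper=obj/mapper.txt -c main.cpp
```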
So modules are justifiably hard and we should cut compiler writers some slack for having to come up with an entirely new way of serializing C++ code that still works with headers. But surely, there won't be any problems with the smaller new C++ features I've started using. If they've been working in MSVC, they surely do in Clang as well, right? Right…?
Once again, C++ standard versions are proven to be utterly meaningless to anyone outside the committee and the CppCon presenters who try to convince you they matter. Here's the list of features that still don't work in Clang in early 2025:
- C++20's std::jthread, which fixes an important design flaw of C++'s regular thread class. This would have been very unfortunate if I hadn't coincidentally already rewritten my threading code to use SDL's more portable thread API as part of the Windows 98 backport. Thus, I could adopt that work into this delivery, gifting a much-needed extra 0.3 pushes of content to the Windows 98 backport. 🙌
- C++17's std::from_chars() for floating-point values, which we use to parse 📝 gain factors for waveform BGM out of Vorbis comment tags. This one is a medium-sized tragedy: Since it's not worth it to polyfill this function with a third-party library for just a single call, the best thing we can do is to fall back on strtof() from the C standard library. Why wasn't I using this function all along, you may ask? Well, as we all know by now, the C standard library is complete and utter trash, and strtof() is no exception by suffering from locale braindeath.
- A good chunk() (ha) of the C++23 range adaptors. As a rather new addition to the language, I've only made sporadic use of them so far to get a feel for their optimal usage. But as it turns out, sporadic use of range adaptors makes very little sense because the code is much simpler and easier to read without them. And this is what the C++ committee has been demanding our respect for all this time? They have played us for absolute fools.
The -2 might look slightly cryptic at first, but since this code is part of a constinit block, we'd get a compiler error if we either wrote too few elements (and left parts of the array uninitialized) or wrote too many (and thus out of the array's bounds). Therefore, the number can't be anything else.
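For illustration, the from_chars()-or-strtof() fallback could be wrapped up like this sketch. (ParseGainFactor is a hypothetical name, not the function in the codebase; the strtof() branch assumes the "C" locale, which is precisely the braindeath in question.)

```cpp
#include <charconv>
#include <cstdlib>
#include <string>
#include <string_view>
#include <version>

float ParseGainFactor(std::string_view str, float fallback)
{
	#if defined(__cpp_lib_to_chars)
		// The nice, locale-independent C++17 way…
		float ret = fallback;
		const auto result = std::from_chars(str.data(), (str.data() + str.size()), ret);
		return ((result.ec == std::errc{}) ? ret : fallback);
	#else
		// …and the C fallback. strtof() requires a null-terminated string,
		// so we must copy, and its result depends on the process locale.
		const std::string zstr{ str };
		char *end = nullptr;
		const float ret = std::strtof(zstr.c_str(), &end);
		return ((end == zstr.c_str()) ? fallback : ret);
	#endif
}
```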
It almost looked like it'd finally be time for my long-drafted rant about the state of modern C++, but the language just barely redeemed itself with the last two sentences there. Some other time, then…
On the bright side, all my portability work on game logic code had exactly the effect I was hoping for: Everything just worked after the first successful compilation, with zero weird run-time bugs resulting from the move from a 32-bit MSVC build to 64-bit Clang. 🎉
Before we can tackle text rendering as the last subsystem that still needs to be ported away from Windows, we need to take a quick look at the font situation. Even if we don't care about pixel-perfectly matching the game's text rendering on Windows, MS Gothic seems to be the only font that fits the game's design at all:
- All text areas are dimensioned around the exact metrics of MS Gothic's embedded bitmaps. In menus, each half-width character is expected to be exactly 7×14 pixels large because most of the submenu items are aligned with spaces. In text boxes and the Music Room, glyphs can be smaller than the intended 8×16 pixels per half-width character, but they can't be larger without cutting off something somewhere.
- Only bitmap fonts can deliver the sharp and pixelated look the game goes for. Subpixel rendering techniques are crucial for making vector fonts look good, but quickly get ugly when applied to drop-shadowed text rendered at these small sizes:
That's MS Gothic in both pictures. The smoothed rendering on the help text might arguably look nicer, but it clashes very badly with the drop shadow in the menus.
However, MS Gothic is non-free and any use of the font outside of a Windows system violates Microsoft's EULA. In spite of that, the AUR offers three ways of installing this font regardless:
- The ttf-ms-*auto-* packages download a Windows 10 or 11 ISO from a somewhat official download link on Microsoft's CDN and extract the font files from there. Probably good enough if downloading 5 GB only to scrape a single 9 MB font file out of that image doesn't somehow feel wrong to you.
- The ttf-ms-win10-cdn-* packages download just the font files from… somewhere on IPFS.
- The regular, non-
auto
or-cdn
ttf-ms-win*
packages leave it up to you where exactly you get the files from. While these are the clearest
options in how they let you manually perform the EULA infringement, this manual nature breaks automated AUR helpers. And honestly, requiring you to copy over all 141 font files shipped with modern Windows is massively overkill when we only need a single one of them. At that point, you might as well just copy msgothic.ttc
to ~/.local/share/fonts
and not bother with any package. Which, by the way, works for every distro as well as Flatpaks, which can freely access fonts on the host system.
You might want to go the extra mile and use any of these methods for perfectly accurate text rendering on Linux, and supporting MS Gothic should definitely be part of the intended scope of this port. But we can't expect this from everyone, and we need to find something that we can bundle as part of the Flatpak.
So, we need an alternative free Japanese font that fits the metric constraints of MS Gothic, has embedded bitmaps at the exact sizes we need, and ideally looks somewhat close. Checking all these boxes is not too easy; Japanese fonts with a full set of all Kanji in Shift-JIS are a niche to begin with, and nobody within this niche advertises embedded bitmaps. As the DPI resolutions of all our screens only get higher, well-designed modern fonts are increasingly unlikely to have them, thus further limiting the pool to old fonts that have long been abandoned and probably only survived on websites that barely function anymore.
Ultimately, the ideal alternative turned out to be a font named IPAMonaGothic, which I found while digging through the Winetricks source code. While its embedded bitmaps only cover the first half of MS Gothic's range, with font heights between 10 and 16 pixels rather than going all the way up to 22 pixels, that happens to be exactly the range we need for this game.


Both of these screenshots were made on Windows. Obviously, the Linux port shouldn't settle for anything less than pixel-perfectly matching these reference renderings with both fonts.
Alright then, how are we going to get these fonts onto the screen with something that isn't GDI? With all the emphasis on embedded bitmaps, you might come to the conclusion that all we want to do is to place these bitmap glyphs next to each other on a monospaced grid. Thus, all we'd need is a TTF/OTF library that gives us the bitmap for a given Unicode code point. Why should we use any potentially system-specific API then?
But if we instead approach this from the point of view of GDI's feature set, it does seem better to match a standard Windows text rendering API with the equivalent stack of text rendering libraries that are typically used by Linux desktop environments. And indeed, there are also solid reasons why this is a better idea for now:
- There actually is a single instance where this game uses MS Gothic at a height of 24 pixels, which is too large to be covered by its embedded bitmaps and thus requires rasterization of vector outlines. Whenever the SCL parser encounters an unknown opcode, it shows this error message:
  Modders may very well end up seeing this one as a result of bugs in SCL compilers.
- You might see debug text as not worth bothering with, but then there's Kioh Gyoku. Not only does that game display its text at much bigger sizes throughout, but it also renders every string at 3× the size it is ultimately downscaled to, similar to the 2× scale factor used by the 640×480 Windows Touhou games. Going for a full-featured solution that works with both embedded bitmaps and outlines saves us time later.
- We'd be ready for translations into even the most complex-to-render non-ASCII scripts.
- Since our fonts might not support these scripts, having the API fall back on other fonts installed in the system as necessary would allow us to add these translations independently of figuring out the font situation for them.
- In fact, text rendering must technically already support glyph fallback because 📝 the BGM pack selection just displays path names, which count as user input. If people use code points in their BGM pack folder names that aren't covered by either of our two fonts, they probably have some font installed on their system that can display them. Also, the missing .DAT file screen further below in that post shows that GDI already does glyph fallback with emoji, so wouldn't it be lame if the Linux version didn't have at least feature parity in this regard? Instead, the Linux stack would actually outperform GDI thanks to the former's natural support for color emoji. 🎨
- Since we're explicitly porting to desktop Linux here, using the standard Linux text rendering stack is the least bloated option because Linux users will have it installed anyway. We can still reach for more minimalistic alternatives later once we do port this game to something other than Linux.
Let's look at what this stack consists of and how the libraries interact with each other:
-
FreeType provides access to everything related to the rendering of TTF and OTF fonts, including their embedded bitmaps, as well as a rasterizer for regular vector glyphs. It's completely obvious why we need this library.
-
GLib2 is a collection of various general utility functions that modern non-C languages would have in their standard libraries. Most notably, it provides the tables and APIs for Unicode character data, but its
iconv
wrapper also comes in quite handy for converting the Shift-JIS text from the original .DAT files to UTF-8 without additional dependencies. -
FriBidi implements the Unicode Bidirectional Algorithm, just in case you've thrown some Arabic or Hebrew into your string.
-
HarfBuzz implements shaping, i.e., the translation of raw Unicode into a sequence of glyph indices and positions depending on what's supported by the font. We might not strictly need this library right now, but it's completely obvious why we will eventually need it for translations.
-
Fontconfig manages all fonts installed on the system, maps user-friendly font names to file names, tracks their Unicode coverage, and offers a central place for storing various font tweaking options.
Normally, games wouldn't need this library because they just bundle all the fonts they need and hardcode any required tweaking settings to make them look as intended. Looking back at our font situation though, installing MS Gothic in a system-wide way through a package that puts the font into a standard location will be the simplest method of meeting that optional dependency. This is a reasonable assumption in a neatly packaged Linux system where the font is just another item on the game's dependency list, but also within a Flatpak, where "system-wide" includes any fonts shipped with the image. If we now assume that IPAMonaGothic is installed in the same way, we can let Fontconfig handle the actual selection. All we need to do is to specify a preference for MS Gothic over IPAMonaGothic, and Fontconfig will take care of the rest, without us writing a single line of TTF-loading code. -
Pango combines the three libraries above into an API that somewhat matches GDI's simplicity, laying out text in one or multiple lines based on the shaped output of HarfBuzz and substituting glyphs as necessary based on Fontconfig information. The actual rendering, however, is delegated to…
-
Cairo, a… "2D graphics library"? Now why would we need one of those if all we want is a buffer filled with pixels? Wikipedia's description emphasizes its vector graphics capabilities, which seems to describe the library better than the nondescript blurb on its official website, but doesn't FreeType already do this for text? After looking at it for way too long, the best summary I can come up with is "a collection of font rasterization code that should have maybe been part of FreeType, plus the aforementioned general 2D vector graphics code we don't need". Just like Pango wraps HarfBuzz and Fontconfig to lay out the individual glyphs, Cairo wraps FreeType and raw pixel buffers to actually place these glyphs on its surface abstraction. (And also Fontconfig because of all its configuration settings that can influence the rendering.) Ultimately, this means that each font is represented by a HarfBuzz+FreeType handle, a Pango+Cairo handle, and a Cairo+FreeType handle, which I'm sure won't be relevant later on. 👀
Pango does have a raw FreeType backend that could render text entirely without Cairo, but it's not really maintained and supports neither embedded bitmaps nor color emoji. So we don't have much of a choice in the matter.
Created using pango-view -t 'effective. Power لُلُصّبُلُلصّبُررً ॣ ॣh ॣ ॣ🌈冗' --font='MS Gothic 16px' --backend=cairo.
Created using pango-view -t 'effective. Power لُلُصّبُلُلصّبُررً ॣ ॣh ॣ ॣ🌈冗' --font='MS Gothic 16px' --backend=ft2.
Fun fact: Since Cairo also manages the temporary CPU image buffer we draw on and then hand to SDL, our backend for Shuusou Gyoku ends up with 3× as many Cairo function calls as Pango function calls.
-
Pixman is the library that actually performs all the management of and operations on pixel buffers that you would have thought to be Cairo's job. The combination of it also being a core dependency of the X server and not having any documentation gives off much stronger Nebraska vibes than the ones HarfBuzz advertises itself with. Initially, the dependency on this library comes off as completely useless because Pango's FreeType backend doesn't need anything like it, but judging by the presence of optimized blitting and scaling implementations for various CPU instruction set extensions, it seems to do a pretty good job at what it does. Unlike Cairo, whose abstraction reduces Pixman's support for a variety of 32-bit color formats to the single ARGB one. We're very lucky that this format is also supported for textures by all SDL backends on all operating systems…
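To make the Fontconfig part concrete: the preference for MS Gothic over IPAMonaGothic mentioned above could be expressed with a single rule along these lines, dropped into a fontconfig configuration directory. (A sketch – the exact family names would have to match the fonts' metadata.)

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- When "MS Gothic" is requested, append IPAMonaGothic to the list of
       candidates. If MS Gothic isn't installed, matching falls through. -->
  <alias>
    <family>MS Gothic</family>
    <default>
      <family>IPAMonaGothic</family>
    </default>
  </alias>
</fontconfig>
```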
In the end, a typical desktop Linux program requires every single one of these 8 libraries to end up with a combined API that resembles Ye Olde Win32 GDI in terms of functionality and abstraction level. Sure, the combination of these eight is more powerful than GDI, offering e.g. affine transformations and text rendering along a curved path. But you can't remove any of these libraries without falling behind GDI.
Even then, my Linux implementation of text rendering for Shuusou Gyoku still ended up slightly longer than the GDI one due to all the Pango and Cairo contexts we have to manually manage. But I did come up with a nice trick to reduce at least our usage of Cairo: Since GDI needs to be used together with DirectDraw, the GDI implementation must keep a system-memory copy of the entire 📝 text surface due to 📝 DirectDraw's possibility of surface loss. But since we only use Cairo with SDL, the Cairo surface in system memory does not actually need to match the SDL-managed GPU texture. Thus, we can reduce the Cairo surface to the role of a merely temporary system-memory buffer that is only as large as the single largest text rectangle, and then copy this single rectangle to the intended packed place within the texture. I probably wouldn't have realized this if the seemingly simplest way to limit rendering to a fixed rectangle within a Cairo surface didn't involve creating another Cairo surface, which turned out to be quite cumbersome.
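Stripped of the Cairo and SDL specifics, the copy at the heart of this trick is nothing more than a rectangle blit between two 32-bit pixel buffers. A sketch (names and layout are illustrative, strides are in pixels rather than bytes):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Copies the [w]×[h] text rectangle from the top-left corner of the small
// temporary buffer [src] to its packed place at ([x], [y]) inside the
// texture-sized buffer [dst].
void BlitRect(
	const uint32_t *src, size_t src_stride, uint32_t *dst, size_t dst_stride,
	size_t x, size_t y, size_t w, size_t h
)
{
	for(size_t row = 0; row < h; row++) {
		std::memcpy(
			&dst[((y + row) * dst_stride) + x], &src[row * src_stride],
			(w * sizeof(uint32_t))
		);
	}
}
```

With SDL, the equivalent job can be done by handing the small buffer, its pitch, and a target rectangle to SDL_UpdateTexture().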
But can this stack deliver the pixel-perfect rendering we'd like to have? Well, almost:




Cue hours of debugging to find the cause behind these vertical shifts. The overview above already suggested it, but this bug hunt really drove home how this entire stack of libraries is a huge pile of redundantly implemented functionality that interacts with and overrides each other in undocumented and mostly unconfigurable ways. Normally, I don't have much of a problem with that as long as I can step through the code, but stepping through Cairo and especially Pango is a special kind of awful. Both libraries implement dynamic typing and object-oriented paradigms in C, thus hiding their actually interesting algorithms under layers and layers of "clean" management functions. But the worst part is a particularly unexpected piece of recursion: To lay out a paragraph of text, Pango requires a few font metrics, which it calculates by laying out a language-specific paragraph of example text. No, I do not like stepping through functions that much, please don't put a call to the text layout function into the text layout function to make me debug while I debug, dawg…
It'll probably take many more years until most of this stack has been displaced with the planned Rust rewrites. But honestly, I don't have great hopes as long as they stay with this pile-of-libraries approach. This pile doesn't even deserve to be called a stack given the circular dependency between FreeType and HarfBuzz…
Ultimately, these are the bugs we're seeing here:
-
When rendering strings that contain both Japanese and Latin characters with MS Gothic, the Japanese characters are pushed down by about 1/8th of the font height. This one was already reported in June 2023 and is a bug in either HarfBuzz, Pango, or MS Gothic. With the main HarfBuzz developer confused and without an idea for a clean solution, the bug has remained unfixed for 1½ years.
For now, the best workaround would be to revert the commit that introduced the baseline shift. Since the Flatpak release can bundle whatever special version of whatever library it needs, I can patch this bug away there, but distro-specific packages or self-compiled builds would have to patch Pango themselves. LD_LIBRARY_PATH is a clean way of opting into the patched library without interfering with the regular updates of your distro, but there's still a definite hurdle to setting it up. -
The remaining 1-pixel vertical shift is, weirdly enough, caused by hinting. Now why would a technique intended for improving the sharpness of outline fonts even apply to bitmap fonts to begin with? As you might have guessed, the pile-of-libraries approach strikes once more:
-
Hinting is meant to be controlled by Fontconfig settings, but the setting that takes precedence here is Cairo's slightly different metric hint setting, which is enabled by default.
-
Pango then responds to Cairo's hinting request by rounding the font's ascent and descent metrics up to the nearest integer, causing the exact downward shift we see above.
-
We can override Cairo's metric hinting defaults with the API documented in the page I linked above. But we must only do so conditionally because 16-pixel MS Gothic does require metric hinting for its glyph placement to match GDI. The resulting hack is very much not pretty.
-
Cairo's font options can only be really changed at the level of a Cairo context. Any Pango font handle created from a Pango layout mapped to a Cairo context will get a copy of that context's font options at creation time. And of course, the Pango level treats these options as an implementation detail that cannot be modified from the outside. So, we need to figure out the font using raw Fontconfig calls instead of Pango's abstraction. Oh, and this copy also forces us to recreate the Pango layout if we change between 14- and 16-pixel MS Gothic, which is not necessary with IPAMonaGothic.
-
Actually overwriting this setting involves creating a new font option object, filling it with the Cairo context's existing font options, modifying the setting, copying the new font option object back to the Cairo context, and then deleting the temporary font option object. This is real C, done by real C programmers. Reminds me of the one place in TH01 where ZUN tried C++ copy constructors for a class that didn't need them at all, which only added 1,056 bytes of bloat to
REIIDEN.EXE
.
-
Don't you love it when the concerns are so separated that they end up overlapping again? I'm so looking forward to writing my own bitmap font renderer for the multilingual PC-98 translations, where the memory constraints of conventional DOS RAM make it infeasible to use any libraries of this pile to begin with 😛
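In actual Cairo calls, that create/fill/modify/copy/delete dance corresponds to a fragment like this (a sketch, assuming an existing cairo_t *cr):

```c
/* Overriding the metric hinting default on an existing Cairo context.
   Five calls for what could have been a single setter… */
cairo_font_options_t *options = cairo_font_options_create();
cairo_get_font_options(cr, options); /* fill with the context's options */
cairo_font_options_set_hint_metrics(options, CAIRO_HINT_METRICS_OFF);
cairo_set_font_options(cr, options); /* copy them back */
cairo_font_options_destroy(options);
```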
Before we can package this port for Flathub, there's one more obstacle we have to deal with. Flathub mandates that any published and publicly listed app must come with an icon that's at least 128×128 pixels in size. pbg did not include the game's original 32×32 icon in the MIT-licensed source code release, but even if he did, just taking that icon and upscaling it by 4× would simultaneously look lame and more official than it perhaps should.
So, the backers decided to commission a new one, depicting VIVIT in her title screen pose but drawn in a different style so as not to look too official. Mr. Tremolo Measure quickly responded to our search and Ember2528 liked his PC-98-esque pixel art style, so that's what we went for:




The repo also contains textless and boxless variants.
However, the problem with pixel art icons is that they're strongly tied to specific resolutions. This clashes with modern operating system UIs that want to almost arbitrarily scale icons depending on the context they appear in. You can still go for pixel art, and it sure looks gorgeous if its resolution exactly matches the size a GUI wants to display it at. But that's a big if – if the size doesn't match and the icon gets scaled, the resulting blurry mess lacks all the definition you typically expect from pixel art. Even nearest-neighbor integer upscaling looks cheap rather than stylized, as the coarse pixel grid of the icon clashes with the finer pixel grid of everything surrounding it.
So you'd want multiple versions of your icon that cover all the exact sizes it will appear at, which is definitely more expensive than a single smooth piece of scalable vector artwork. On a cursory look through Windows 11, I found no fewer than 7 different sizes that icons are displayed at:
- 16×16 in the title bar and all of Explorer's list views
- 24×24 in the taskbar
- 28×28 in the small icon next to the file name in Explorer's detail pane (which is never sharp for some reason, even if you provide a 28×28 variant?!)
- 32×32 in the old-style Properties window
- 48×48 in Explorer's Medium icons view
- 96×96 in Explorer's Large icons view, and the large icon in its detail pane
- 256×256 in Explorer's Extra large icons view
And that's just at 1× display scaling and the default zooming factors in Explorer.
But it gets worse. Adding our commissioned multi-resolution icon to an .exe seems simple enough:
- Bundle the individual images into a single .ico file using magick in1.png in2.png … out.ico
- Write a small resource script, call rc, and add the resulting .res file to the link command line
- Be amazed as that icon appears in the title and task bars without you writing a single line of code, thanks to SDL's window creation code automatically setting the first icon it finds inside the executable
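Spelled out, the whole dance amounts to a one-line resource script and three commands (all file names here are placeholders):

```sh
# 1) Bundle the size variants into a single .ico file:
magick icon-16.png icon-24.png icon-32.png icon-48.png icon-256.png icon.ico

# 2) icon.rc consists of a single line:
#
#     1 ICON "icon.ico"
#
# 3) Compile it and add the result to the link command line:
rc /nologo /fo icon.res icon.rc
link main.obj icon.res /OUT:game.exe
```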
But what's going on in Explorer?


That's the 48×48 variant sitting all tiny in the center of a 256×256 box, in a context where we expect exactly what we get for the .ico file. Did I just stumble right into the next underdocumented detail? What was the point of having a different set of rules for icons in .exe files? Make that 📝 another Raymond Chen explanation I'm dying to hear…
Until then, here's what the rules appear to be:
- 256×256 is the one and only mandatory size for high-res program icons on Windows.
- 48×48 is the next smallest supported size, as unbelievable as that sounds. Windows will never use any other icon variant in between. Some sites claim that 64×64 is supported as well, but I sure couldn't confirm that in my tests.
- Those 96×96 use cases from the list above? Yup, Windows will never actually display an embedded 96×96 icon at its native resolution, and either scale up the 48×48 variant (in the Large icons view) or scale down the 256×256 variant (in the detail pane).
- You only ever see an embedded icon with a size between 48×48 and 256×256 if it's the only icon available – and then it still gets scaled to 48×48. Or to 96×96, depending on how Explorer feels.
- Getting different results in your tests? Try rebuilding the icon cache, because of course Windows still struggles with cache invalidation. This must have caused unspeakable amounts of miscommunication with artists over the decades.
Oh well, let's nearest-neighbor-scale our 128×128 icon by 2× and move on to Linux, where we won't have such archaic restrictions…
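If you also use ImageMagick for that scaling step, `-filter point` selects nearest-neighbor interpolation; the file names here are illustrative:

```shell
# 128×128 → 256×256 with no smoothing: -filter point = nearest neighbor,
# and 200% is an exact integer scale, so the pixel grid stays intact.
magick icon-128.png -filter point -resize 200% icon-256.png
```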
…which is not to say that pixel art icons don't come with their own issues there. 🥲
On Linux, this kind of metadata is not part of the ELF format, but is typically stored in separate Desktop Entry files, which are analogous to .lnk shortcuts on Windows. Their plaintext nature already suggests that icon assignment is refreshingly sane compared to the craziness we've seen above, and indeed, you simply refer to PNG or even SVG files in a separate directory tree that supports arbitrary size variants and even different themes. For non-SVG icons, menus and panels can then pick the best size variant depending on how many pixels they allot to an icon. The overwhelming majority of the ones I've seen do a good job at picking exactly the icon you'd expect, and bugs are rare.
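As a sketch, such a Desktop Entry might look like this – the ID, file name, and Exec value are made up, but the Icon key is what menus and panels use to look up the size variants in the hicolor icon theme directories:

```ini
# ~/.local/share/applications/org.example.ShuusouGyoku.desktop (hypothetical)
[Desktop Entry]
Type=Application
Name=Shuusou Gyoku
Exec=gian07
# Resolved against e.g. ~/.local/share/icons/hicolor/<size>/apps/*.png
Icon=org.example.ShuusouGyoku
Categories=Game;
```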
But how would this work for title and task bars once you started the app? If you launched it through a Desktop Entry, a smart window manager might remember that you did and automatically use the entry's icon for every window spawned by the app's process. Apparently though, this feature is rather rare, maybe because it only covers this single use case. What about just directly starting an app's binary from a shell-like environment without going through a Desktop Entry? You wouldn't expect window managers to maintain a reverse mapping from binaries to Desktop Entries just to also support icons in this other case.
So, there must be some way for a program to tell the window manager which icon it's supposed to use. Let's see what SDL has to offer… and the documentation only lists a single function that takes a single image buffer and transfers its pixels to the X11 or Wayland server, overriding any previous icon. 😶
Well great, another piece of modern technology that works against pixel art icons. How can we know which size variant we should pick if icon sizing is the job of the window manager? For the same reason, this function used to be unimplemented in the Wayland backend until the committee of Wayland stakeholders agreed on the `xdg-toplevel-icon` protocol last year.
Now, we could query the size of the window decorations at all four edges to at least get an approximation, but that approach creates even more problems:
- Which edge do we pick? The top one? The largest one? How can we possibly be sure that the one we pick is the one that will show the icon?
- Even if we picked the correct edge, the icon will likely be smaller and not cover the full area. Again, anything less than an exact match isn't good enough for pixel art.
- This function is not implemented on Wayland because client windows aren't supposed to care about how the server is decorating them.
- But even among X11 window managers, there's at least one that doesn't report back the border sizes immediately after window creation. 🙄
Most importantly though: What if that icon is also used in a taskbar whose icons have a different size than the ones in title bars? Both X11's `_NET_WM_ICON` property and Wayland's `xdg-toplevel-icon-v1` protocol support multiple size variants, but SDL's function does not expose this possibility. It might look as if SDL 3 supports this use case via its new support for alternate images in surfaces, but this feature is currently only used for mouse cursors. That sounds like a pull request waiting to happen though; I can't think of a reason not to do the same for icons. contribution-ideas?
But if SDL 2's single window icon function used to be unsupported on Wayland, did SDL 2 apps just not have icons on Wayland before October 2024?
Digging deeper reveals the tragically undocumented `SDL_VIDEO_X11_WMCLASS` environment variable, which does what we were hoping to find all along. If you set it to the name of your program's Desktop Entry file, the window manager is supposed to locate the file, parse it, read out the `Icon` value, and perform the usual icon and size lookup. Window class names are a standard property in both X11 and Wayland, and since SDL helpfully falls back on this variable even on Wayland, it will work on both of them.
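In practice, setting the variable before SDL creates the window is all it takes; the Desktop Entry ID below is a made-up placeholder:

```shell
# SDL reads this variable during window creation and uses it as the
# window's class name, which the WM can then match against a
# (hypothetical) org.example.ShuusouGyoku.desktop file and its Icon= value.
export SDL_VIDEO_X11_WMCLASS="org.example.ShuusouGyoku"
```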
Or at least it should. Ultimately, it's up to the window manager to actually implement class-derived icons, and sadly, correct support is not as widespread as you would expect.
How would I know this? Because I've tested them all. 🥲 That is, all non-AUR options listed on the Arch Wiki's Desktop environment and Window manager pages that provide something vaguely resembling a desktop you can launch arbitrary programs from:
WM / DE | Manually transferred pixels | Class-derived icons | Notes |
---|---|---|---|
awesome | ✔️ | | Does not report border sizes back to SDL immediately after window creation |
Blackbox | | | |
bspwm | | | No title bars |
Budgie | ✔️ | ✔️ | Title bars have no icons. Taskbar falls back on the icon from the Desktop Entry file the app was launched with. |
Cinnamon | ✔️ | ✔️ | Title bars have no icons, but they work fine in the taskbar. Points out the difference between native and Flatpak apps! |
COSMIC | ✔️ | ✔️ | Title bars have no icons, but they work fine in the taskbar. Points out the difference between native and Flatpak apps! |
Cutefish | | ➖ | Title bars have no icons. The status bar only seems to support the X11 _NET_WM_ICON property, and not the older XWMHints mechanism used by e.g. xterm. |
Deepin | | | Did not start |
Enlightenment | ✔️ | ➖ | Taskbar falls back on the icon from the Desktop Entry file the app was launched with. Only picks the correctly scaled icon variant in about half of the places, and just scales the largest one in the other half. |
Fluxbox | ✔️ | | |
GNOME Flashback / Metacity | ✔️ | | Title bars have no icons |
GNOME | ✔️ | ✔️ | Title bars have no icons |
GNOME Classic | | | How do you get this running? The variables just start regular GNOME. |
herbstluftwm | | | No title bars |
i3 | ✔️ | | |
IceWM | ✔️ | ➖ | Only doesn't work for Flatpaks because it uses a hardcoded list of icon paths rather than $XDG_DATA_DIRS |
KDE (Plasma) | ✔️ | ✔️ | Taskbar (but not window) falls back on the icon from the Desktop Entry file the app was launched with |
LXDE | ✔️ | | |
LXQt | ✔️ | | |
MATE | ✔️ | | Title bars have no icons |
MWM | | | |
Notion | | | No title bars |
Openbox | ✔️ | | |
Pantheon | ✔️ | ✔️ | |
PekWM | | | |
Qtile | | | No title bars |
Stumpwm | | | Did not start |
Sway | | | Architected in a way that made icons too complex to bother with. Might get easier once they take a look at the xdg-toplevel-icon protocol. |
twm | | | |
UKUI | | | Window decorations and taskbar didn't work |
Weston | | | Only supports client-side decorations |
Xfce | ✔️ | ➖ | Taskbar only supports manually transferred icons. Scaling of class-derived icons in title bars is broken. |
xmonad | | | No title bars |
Yes, you can probably rice title bars and icons onto WMs that don't have them by default. I don't have the time.
That's only 6 out of 33 window managers with a bug-free implementation of class-derived icons, and still 6 out of 28 if we disregard all the tiling window managers where icons are not in scope. If you actually want icons in the title bar, the number drops to just 2, KDE and Pantheon. I'm really impressed by IceWM there though, beating all other similarly old and minimal window managers by shipping with an almost correct implementation.
For now, we'll stay with class-derived icons for budget reasons, but we could add a pixel transfer solution in the future. And that was the 2,000-word story behind this single line of code… 📕
On to packaging then, starting with Arch! Writing my first PKGBUILD was a breeze; as you'd expect from the Arch Wiki, the format and process are very well documented, and the AUR provides tons of examples in case you still need any.
The PKGBUILD guidelines have some opinions about how to handle submodules, but applying them would complicate the PKGBUILD quite a bit while bringing us nowhere close to the 📝 nirvana of shallow and sparse submodules I've scripted earlier. But since PKGBUILDs are just shell scripts that can naturally call other shell scripts, we can just ignore these guidelines, run `build.sh`, and end up with a simpler PKGBUILD and the intended shorter and less bloated package creation process.
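In PKGBUILD terms, that boils down to something like the following sketch; the directory layout is assumed for illustration and doesn't necessarily match the published PKGBUILD:

```shell
# Hypothetical PKGBUILD excerpt: delegate submodule handling and the
# Tup build to the repository's own script instead of replicating it
# with makepkg's recommended submodule incantations.
build() {
  cd "$srcdir/$pkgname"
  ./build.sh
}
```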
Sadly, PKGBUILDs don't easily support specifying a dependency on either one of two packages, which we would need to codify the font situation. Due to the way the AUR packages both IPAMonaGothic and MS Gothic together with their Mincho and proportional variants, either of them would be Shuusou Gyoku's largest individual dependency. So you'd only want to install one or the other, but probably not both. We could resolve this by editing the PKGBUILDs of both font packages and adding a `provides` entry for a new and potentially controversial virtual package like `ttf-japanese-14-and-16-pixel-bitmap` that Shuusou Gyoku could then depend on. But with both of the packages being exclusive to the AUR, this dependency would still be annoying to resolve, and you'd have no context about the difference.
Thus, the best we can do is to turn both MS Gothic and IPAMonaGothic into optional dependencies with a short one-line description of the difference, and elaborating on this difference in a comment at the top of the PKGBUILD. Thankfully, the culture around Arch makes this a non-issue because you can reasonably expect people to read your PKGBUILD if they build something from the AUR to begin with. You do always read the PKGBUILD, right?
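The resulting declaration could look roughly like this – the package names and one-line descriptions are placeholders, not the exact ones from the AUR:

```shell
# Hypothetical PKGBUILD excerpt; one line of context per font choice.
optdepends=(
  'ipamonafont: free, metrically MS Gothic-compatible replacement font'
  'ttf-ms-gothic: the original MS Gothic, for the authentic look'
)
```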
Flatpak, on the other hand… I'm not at all opposed to the fundamental idea of installing another distro on top of an already existing distro for wider ABI compatibility; heck, Flatpak is basically no different from Wine or WSL in this regard. It's just that this particular ABI-widening distro works in a rather… unnatural way that crosses the border into utter cringe at times.
There are enough rants about Flatpak from a user's perspective out there, criticizing the bloat relative to native packages, the security implications of bundling libraries, and the questionable utility of its sandbox. But something I rarely see people talk about is just how awful Flatpak is from a developer's point of view:
- The documentation is written in this weird way that presents Flatpak and its concepts in complete isolation. Without drawing any connections to previous packaging and dependency management systems you might have worked with, it left a lot of my seemingly basic questions unanswered. While it is important to explain your concepts with example code, the lack of a simple and complete reference of the manifest format doesn't exactly inspire confidence in what you're doing. Eventually, I just resorted to cross-checking features in the JSON Schema to get a better idea of what's actually possible.
- The ABI-expanding distro part of Flatpak is actually called the Freedesktop platform, a currently 680 MB large stack of typical GUI application libraries, updated once a year. It's accompanied by the Freedesktop SDK, which contains the matching development libraries and tools in another 1.7 GB. As the name implies, this distro is maintained by a separate entity, with a homepage that makes the entire thing look deeply self-important and unprofessional. A blurry 25 FPS logo video, a front page full of spelling mistakes, a big focus on sponsors and events… come on, you have one job, and it's compiling and packaging a bunch of open-source libraries. Was this a result of the usual corporate move of creating more departments in order to shift blame and responsibility?
  Optics aside, their documentation is even more bizarrely useless. The single bit of actually useful information I was looking for – i.e., the concrete list of packages bundled as part of their runtimes and their versions – is best found by going straight to their code repo.
- The manifest of a Flatpak app can be written in your preferred lesser evil of the two most popular markup languages: JSON (slightly ugly for humans and machines alike), or YAML, the underspecified mess that uses syntactically significant whitespace while outlawing the closest thing we have to a semantic indentation character. Oh well, YAML at least supports comments, and we sure sorely need them to justify our bleeding-edge C++ module setup to the Flathub maintainers.
- Adding more dependencies on top of the basic runtime can be done by either using runtime extensions or BaseApps. That's two entirely separate concepts that appear to do the same thing on the surface, except that you can only have one BaseApp. The documentation then waffles on and tries to explain both concepts with words that have meaning in isolation but once again answer exactly zero of my questions. Must a BaseApp contain a collection of at least two dependencies, or why would anyone ever write the sentence that raises this question? Why do they judge BaseApps to be a "specialized concept" without elaborating, as if to suggest that their audience is too dumb to understand them? Why does a page named Dependencies document extensions as if I wanted to prepare my own package for extension by others? Why be all weird and require "extension points" to be defined when it all just comes down to overlaying another filesystem? Who cares about the special significance of the `.Debug`, `.Locale`, and `.Sources` conventions in the context of dependencies?
  In the end, you once again get a clearer idea by simply looking at how existing code uses these concepts. Basically, SDK extensions = build-time dependencies, BaseApps = run-time dependencies, and extension points don't matter at all for our purposes because you can just arbitrarily extend the `org.freedesktop.Sdk` anyway. 🤷
- Speaking of extensions: This exact architectural split between build-time and run-time dependencies is why the `org.freedesktop.Sdk.Extension.llvm19` extension packages Clang, but not libc++. When questioned about this omission, one of the maintainers responded with the lamest of excuses: Copying the library would be "inconvenient" (for them), and "something we can't even imagine a use case for". Um, guys? Here's a table. Compare the color of each cell between GCC and Clang. There's your use case.
  Thankfully, you can build libc++ without building LLVM as a whole. Seeing how building libc++ takes basically no time at all compared to the rest of LLVM just raises even more questions about not simply providing some kind of script to copy it over.
- Flatpak stores all data of an app in an app-specific subdirectory under `~/.var/app`, inverting and blatantly violating the XDG Base Directory Specification. Everybody hates this, and it's indefensible no matter how you look at it. The OpenSSH excuse of being old and having a well-known standard path that long predates the XDG spec does not apply to Flatpak at all, and neither does any sandboxing argument. Oh, and if your application ships both a Flatpak and XDG-conforming native packages, it must now add a special case for Flatpak if it wants to prevent its own XDG directory names from becoming even uglier. Still, the Flatpak developers remain stubborn about this choice.
  But wait, what's that? Couldn't you theoretically add
      - --filesystem=xdg-data/myapp
      - --env=XDG_DATA_HOME=~/.local/share/myapp
  to your manifest's `finish-args`? Too bad that Flatpak deliberately prevents this from working. Not to mention that the resulting package would fail the `finish-args-unnecessary-xdg-data-subdir-mode-access` lint, which would prevent it from being published on Flathub without applying for an exception.
- Speaking of XDG directories, why do they create the `.flatpak-builder` cache directory in the current working directory and not under `$XDG_CACHE_HOME` where it belongs?
- The `modules` in a Flatpak work in a similarly layered way as the commands in a Dockerfile, causing edits to a lower layer to evict previous builds of all successive layers from the cache. Any tweaking work in the lower layers therefore suffers from the same disruptive workflow you might already know from Docker, where you constantly shift the layers around to minimize unnecessary rebuilds because there's never an optimal order. Will we ever see container bros move on from layers to a proper build graph of the entire system? The stagnation in this space is saddening.
- The `--ccache` option sort of mitigates the layering by at least caching object files in `.flatpak-builder/ccache`, which reduces repeated C compilation to a mere file copy from the cache to the package. But not only is this option not enabled by default, it also doesn't appear in any of the `flatpak-builder` example command lines in the documentation. Also, it only appears to work with GCC, and setting `CCACHE_COMPILERTYPE=clang` seems to have no effect. Fortunately, my investment into C++ modules pays off here as well and keeps compile times decently short.
- `flatpak-builder` doesn't validate the manifest schema? Misspelled or misplaced properties just silently do nothing?
- Speaking of validation, why does `flatpak-builder-lint` take 8 seconds to validate a manifest, even if it just consists of a single line? Sure, it's written in Python, but that's an order of magnitude too slow for even that language.
- No tab completion for any of the `org.flatpak.Builder` tools. Sandbox working as designed, I guess 🤷
- Git submodule handling. Oh my goodness.
  - Flatpak recursively clones and checks out all of a repository's submodules. This might be necessary for some codebases, but not for this one: The Linux build doesn't need the SDL submodule, and nothing needs the second miniaudio submodule that the dr_libs use for their testing code. And if these recursive submodules didn't opt into shallow clones, you end up with lots of disk space wasted for no reason; 166.1 MiB in our case.
  - Except that it's actually twice that amount. There's the download cache that persists across multiple `flatpak-builder` runs, and then there's the temporary directory the build runs in, which gets a full second clone of the entire tree of submodules. This isn't Windows 8; there are no excuses for not using read-only symlinks.
  - None of this would be too bad if we could just do the same thing we did with Arch, ignore the default or recommended submodule processing, and let our shell script run the show and selectively download and check out the submodules required for the Linux build. But no – the build process of a Flatpak is strictly separated into a download stage and a build stage, and the build stage cannot access the network. Once again, Flatpak would have the option to allow build-time network access, but enabling it would mean no hosting and discoverability on Flathub for you.
    I guess it makes sense from a security point of view, as reviewers would only have to audit a fixed set of declaratively specified sources rather than all code run by the build commands? But even this can only ever apply to the initial review. Allowing app developers to push updates independently from the Flathub maintainers is one of Flathub's biggest selling points. Once you're in, you or your supply chain can simply hide the malware in an updated version of a module source. 🤷
- Getting Tup to work within the Flatpak build environment is slightly tricky. The build sandbox doesn't provide access to the kernel's FUSE module, which Tup uses to track syscalls by default. Thankfully, Tup also supports syscall tracking via `LD_PRELOAD`, which allows us to still build Shuusou Gyoku in a parallelized way with a regular Tup binary. Imagine compiling FUSE from source only to make Tup compile, but then having to build the game via a `tup generate`d single-threaded shell script…
- One common user complaint about Flatpak is that it allows Windows app developers to stick to their "beloved" and un-Linux-y way of bundling all dependencies, as if they actually ever enjoyed doing that. In reality, it's not the app authors, but the Flathub maintainers and submission reviewers who do everything in their power to prevent Flathub from turning into a typical package manager. Since they ended up with a system where every new extension to the Freedesktop SDK somehow places a burden on the maintainers, they're quick to shut down everything they consider a bad idea, including a Tup package I submitted. What a great job for people who always wanted to be gatekeepers and arbiters of good ideas. If your system treats CMake as one of two blessed build systems that get first-class support, we already fundamentally disagree on basic questions of good taste.
- Because even the build stages of individual modules are sandboxed from each other, the only way to persist a module's build outputs for further modules is by installing them into the same `/app/` path that the final application is supposed to live in. Since most of these foundational modules will be libraries, `/app/` will be full of C header files, static library files, and library-related tooling that you don't want bloating your shipped package. Docker solves this with multi-stage builds: After building your app into an image full of all build-time dependencies and other artifacts vomited out by your build system, you can start from a fresh, minimal base image and selectively copy over only the files your app actually needs to run. Flatpak solves this in the opposite way, merely letting you manually clean up after your dependencies in the end. At least they support wildcards…
- So you've built your Flatpak, but it has an issue that your native build doesn't have, and it's time for some debugging. You open up a shell into the image, fire up gdb… and don't get debug symbols despite your build definitely emitting them. The documentation mentions that debug symbols are placed into a separate package, just like Arch Linux's `makepkg` does it, but the suggested command line to install them doesn't work: `error: No remote refs found for '$FLATPAK_ID'`. The apparently correct command line can only be found in third-party blog posts. Pulling the package directly out of the builder cache is as random as it gets for someone not deeply familiar with the system.
- Before you publish your package, you might want to inspect the bundle to make sure that your `--cleanup` entries actually covered all the library bloat you suddenly have to care about. Flatpak also adds a few slight annoyances there:
  - You could look into the build directory (not the repo directory! Very important difference! 🤪) you pass to `flatpak-builder`, but it also contains all the debug files and source code.
  - You could open the `--devel` shell and inspect the contents of `/app/`. This shell environment is rather minimal and misses both a lot of typical Linux userland tools and (of course) a package manager, but `ls` and `find` work and can do the job.
  - The ideal solution would read explicitly and only from the bundle file. But Flatpak provides no help in this regard, leaving you to resort to low-level hacks that work on the physical container format. Where's the Dive counterpart?
- So if all of Flatpak feels like Docker anyway, why isn't it built on top of Docker to begin with? Instead, we got what amounts to a worse copy that doesn't innovate in any way I can notice. Why throw away compatibility with all of Docker's existing tooling just to gain hash-based deduplication at the file level for a couple of images? How can they seriously use a tagline like "Git for apps", which only makes sense for very, very loose definitions of "Git"?
  Or maybe all the innovation went into the portals that make this thing work at all, and have at least this little game work indistinguishably from a native build past the initial load time…
- … except when parts of it don't! 🤣 Audio is only supported through PulseAudio, which you might not have installed on Arch Linux. Thus, Flatpak ironically enforces another dependency on the host system that the app itself might not have needed.
- Alright, you've submitted your app, incorporated the changes requested by the reviewers, waited a while, and now your app is live and has its own page on Flathub. You'd think I'd be done ranting at this point, but no:
  - You give them nice lossless PNG screenshots and icons, and they convert both of them to lossy WebP with clearly visible compression artifacts. How about some trust in the fact that people who give you small PNG files know what they're doing? Verified by a programmatic check of whether such a lossy recompression even noticeably improves the file size, instead of blindly blowing up our icon to 4.58× the size of the original PNG. Source-quality images are way more important to me than brand colors.
  - The screenshot area on the app pages has a fixed height of 468 pixels. Is this some kind of a sick joke? How could anyone look at that height and not go "nah, that looks wrong, 12 more pixels and we'd be VGA-compatible, barely makes a difference anyway"? That leaves us with two choices:
    - Crop those 12 pixels out of the raw game screenshots I originally wanted to have there, or
    - follow their preferred approach of screenshotting the entire window with its native decorations, rounded corners, and shadows, and hope the contents still look somewhat presentable when scaled down.
    The latter probably isn't the worst idea as it also gives us a chance to show off the 16×16 variant of the icon at its intended size. But I sure didn't immediately find a KDE theme that both has 16-pixel window icons (unlike Breeze's 15 pixels at the Small size) and doesn't have obscenely large and asymmetric shadows (unlike Materia or Klassy). Shoutout to the Arc theme for matching all these constraints!
  - Might as well try converting these images to lossless WebP while I'm at it, in the hope that they then leave them alone… but nope, they still get lossily recompressed! 🤪 You know what, I'm not gonna bother with the rest of their guidelines, this is an embarrassment.
  - Why does Flathub claim that the game can access the microphone? I don't remember opting into that. Once again, PulseAudio is to blame, as its security model isn't fine-grained enough. If your app wants to play sound, it has to request access to the PulseAudio socket, which always covers both output and input. Everybody hates this, but it's only going to be fixed with PipeWire and once the XDG developers have agreed on an audio portal.
  - Finally, game controller support comes with a very similar asterisk. By default, it's disabled just like any other piece of hardware, and the documentation tells you to specify `--device=input` to activate it. However, this specific permission is a fairly recent development in Flatpak terms and thus isn't widely available yet. Therefore, the reviewers don't yet allow it in manifests, and your only alternative is a blanket permission for all devices in the user's system. But then, Flathub lists your app as having "potentially unsafe user device (and even webcam!) access", even though you had no alternative except for disabling game controller support. What a nice sandbox they have there… 🙄
If that's the supposed future of shipping programs on Linux, they've sure made this dev look back into the past with newfound fondness. I'm now more motivated than ever to separately package Shuusou Gyoku for every distribution, if only to see whether there's just a single distro out there whose packaging system is worse than Flatpak. But then again, packaging this game for other distros is one of the most obvious contribution-ideas there is.
In the end though, the fact that we need to patch Pango to correctly render MS Gothic means that there is a point to shipping Shuusou Gyoku as a Flatpak, beyond just having a single package that works on every distro. And with a download size of 3.4 MiB and an installed size of 6.4 MiB, Shuusou Gyoku almost exemplifies the ideal use case of Flatpak: Apart from miniaudio, BLAKE3, the IPAMonaGothic font, the temporary libc++, and the patched Pango, all other dependencies of the Linux port happen to be part of the Freedesktop runtime and don't add more bloat to the system.
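Stripped of all the module-level detail, the shape of such a manifest then looks roughly like this; the app ID, runtime version, and repository URL are illustrative rather than copied from the published package:

```yaml
# Hypothetical skeleton of a flatpak-builder manifest.
app-id: org.example.ShuusouGyoku
runtime: org.freedesktop.Platform
runtime-version: '24.08'
sdk: org.freedesktop.Sdk
command: GIAN07
finish-args:
  - --socket=x11
  - --socket=wayland
  - --socket=pulseaudio   # covers both output and (unwanted) input
modules:
  - name: shuusou-gyoku
    buildsystem: simple
    build-commands:
      - ./build.sh
    sources:
      - type: git
        url: https://example.com/shuusou-gyoku.git
```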
And so, we finally have a 100% native Linux port of Shuusou Gyoku, working and packaged, after 36 pushes! 🎉 But as usual, there's always that last bit of optional work left. The four biggest remaining portability gaps are
- the 8-bit render path, 📝 as I've explained when I ported the graphics,
- guaranteed support for ARM CPUs, which currently fail to build the project on Flathub due to a Tup issue, and who knows what other issues there might be,
- the aforementioned proper icon support, and
- MIDI playback.
Despite 📝 spending 10 pushes on accurate waveform BGM, MIDI support seems to be the most worthwhile feature out of the four. The whole point of the BGM work was that Linux doesn't have a native MIDI synth, so why should packagers or even the users themselves jump through the hoops of setting up some kind of softsynth if it most likely won't sound remotely close to an SC-88Pro? But if you already did, the lack of support might indeed seem unexpected.
But as described in the issue, MIDI support can also mean a "Windows-like plug-and-play" experience, without downloading a BGM pack. Despite the resulting unauthentic sound, this might also be a worthwhile thing to fund if we consider that 14 of the 17 YouTube channels that have uploaded Shuusou Gyoku videos since P0275 still had MIDI playing through the Microsoft GS Wavetable Synth and didn't bother to set up a BGM pack.
Finally, we might want to patch IPAMonaGothic at some point down the line. While a fix for the ascent and descent values that achieves perfect glyph placement without relying on hinting hacks would merely be nice to have, matching the Unicode coverage of its embedded bitmaps with MS Gothic will be crucial for non-ASCII Latin script translations. IPAMonaGothic's outlines do cover the entire Latin-1 Supplement block, but the font is missing embedded bitmaps for all of this block's small letters. Since the existing outlines prevent any glyph fallback in both Fontconfig and GDI, letters like ä, ö, ü, and ñ currently render as spaces.


Ideally, I'd like to apply these edits by modifying the embedded bitmaps in a more controlled, documented, and diffable way and then recompiling the font using a pipeline of some sort. The whole field of fonts often feels impenetrable because the usual editing workflow involves throwing a binary file into a bulky GUI tool and writing out a new binary file, and it doesn't have to be this way. But it looks like I'd have to write key parts of that pipeline myself:
- The venerable `ttx` provides no comfort features for embedded bitmaps and simply dumps their binary representation as hex strings.
- The more modern UFO format does specify embedded images, but both of the biggest implementations (defcon and ufoLib2) just throw away any embedded bitmaps, and thus the whole selling point of such tools.
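To see what ttx gives you today, dumping just the embedded-bitmap tables is enough; the font file name is a placeholder:

```shell
# Dump only the embedded bitmap tables (EBLC = sizes and metrics,
# EBDT = glyph bitmap data) instead of the whole font. Requires fontTools.
ttx -t EBLC -t EBDT IPAMonaGothic.ttf
```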
That would increase the price of translations by about one extra push if you all agree that this is a good idea. If not, then we just go for the usual way of patching the .ttf file after all. In any case, we then get to host the edited font at a much nicer place than the Wayback Machine.
But for now, here's the new build:
- Shuusou Gyoku P0303 Windows build (now with the new icon)
- Shuusou Gyoku on the AUR
- Shuusou Gyoku on Flathub
Next up: TH02 bullets! Here's to 2025 bringing less build system and maintenance work and more actual progress.