Showing all posts tagged website

📝 Posted:
🚚 Summary of:
P0170, P0171
⌨ Commits:
(Website) 0c4ab41...4f04091, (Website) 4f04091...e12cf26
💰 Funded by:
[Anonymous]
🏷 Tags:
website, meta

The "bad" news first: Expanding to Stripe in order to support Google Pay requires bureaucratic effort that is not quite justified yet, and would only be worth it after the next price increase.

Visualizing technical debt has definitely been overdue for a while though. With one of these two pushes focused on this topic, it makes sense to summarize once again what "technical debt" means in the context of ReC98, as this info was previously kind of scattered over multiple blog posts. Mainly, it encompasses

Technically (ha), it would also include all of master.lib, which has always been compiled into the binaries in this way, and which will require quite a bit of dedicated effort to be moved out into a properly linkable library once that's feasible. But this code has never been part of any progress metric – in fact, the 0% RE baseline is defined as the total number of x86 instructions in the binary, minus any library code. There is also no relation between instruction numbers and the time it will take to finalize master.lib code, let alone any precedent for how much that would cost.

If we now want to express technical debt as a percentage, it's clear where the 0% point would be: when all RE'd code is also compiled in from a translation unit outside the big .ASM one. But where would 100% be? Logically, it would be the point where no reverse-engineered code has ever been moved out of the big translation units yet, and nothing has ever been decompiled. With these boundary points, this is what we get:

Not too bad! So it's 6.22% of total RE that we will have to revisit at some point, concentrated mostly around TH04 and TH05 where it resulted from a focus on position independence. The prices also give an accurate impression of how much more work would be required there.
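Put as a rough formula – this is only my reading of the boundary points above, not an official definition – the debt percentage boils down to:

technical debt % = (RE'd instructions still inside the big .ASM translation units) ÷ (all RE'd instructions)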

But is that really the best visualization? After all, it requires an understanding of our definition of technical debt, so it's maybe not the most useful measurement to have on a front page. But how about subtracting those 6.22% from the number shown on the RE% bars? Then, we get this:

Which is where we get to the good news: Twitter surprisingly helped me out in choosing one visualization over the other, voting 7:2 in favor of the Finalized version. While this one requires you to manually calculate € finalized − € RE'd to obtain the raw financial cost of technical debt, it clearly shows, for the first time, how far away we are from the main goal of fully decompiling all 5 games… at least to the extent it's possible.


Now that the parser is looking at these recursively included .ASM files for the first time, it needed a small number of improvements to correctly handle the more advanced directives used there, which no automatic disassembler would ever emit. Turns out I've been counting some directives as instructions that never should have been counted, which is where the additional 0.02% of total RE came from.

One more overcounting issue remains though. Some of the RE'd assembly slices included by multiple games contain different if branches for each game, like this:

; An example assembly file included by both TH04's and TH05's MAIN.EXE:
if (GAME eq 5)
	; (Code for TH05)
else
	; (Code for TH04)
endif

Currently, the parser simply ignores if, else, and endif, leading to the combined code of all branches being counted for every game that includes such a file. This also affects the calculated speed, and is the reason why finalization seems to be slightly faster than reverse-engineering, at currently 471 instructions per push compared to 463. However, it's not that bad of a signal to send: Most of the not yet finalized code is shared between TH04 and TH05, so finalizing it will roughly be twice as fast as regular reverse-engineering to begin with. (Unless the code then turns out to be twice as complex as average code… :tannedcirno:).
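To illustrate what a fix could look like: a branch-aware pass would have to track which if branch is active for the game whose instructions are currently being counted. Here's a minimal sketch in Go that only understands the exact if (GAME eq N) pattern from the example above, none of the other conditions the real files use – the actual parser would of course work on its own data structures:

package main

import (
	"regexp"
	"strings"
)

var ifGame = regexp.MustCompile(`(?i)^if\s*\(\s*GAME\s+eq\s+(\d+)\s*\)`)

// activeLines returns only the lines that would actually be assembled for
// the given game, skipping the branches of if/else/endif blocks that belong
// to other games. Assumes well-formed nesting.
func activeLines(lines []string, game string) []string {
	var out []string
	active := []bool{true} // top of stack: are we inside an active branch?
	for _, line := range lines {
		t := strings.TrimSpace(line)
		switch {
		case ifGame.MatchString(t):
			cond := ifGame.FindStringSubmatch(t)[1] == game
			active = append(active, active[len(active)-1] && cond)
		case strings.EqualFold(t, "else"):
			parent := active[len(active)-2]
			active[len(active)-1] = parent && !active[len(active)-1]
		case strings.EqualFold(t, "endif"):
			active = active[:len(active)-1]
		default:
			if active[len(active)-1] {
				out = append(out, line)
			}
		}
	}
	return out
}

The instruction-counting pass could then run once per game over the filtered lines, which would also fix the slightly skewed per-push speed comparison mentioned above.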

For completeness, finalization is now also shown as part of the per-commit metrics. Now it's clearly visible what I was doing in those very slow five months between P0131 and P0140, where the progress bar didn't move at all: Repaying 3.49% of previously accumulated technical debt across all games. 👌


As announced, I've also implemented a new caching system for this website, as the second main feature of these two pushes. By appending a hash string to the URLs of static resources, your browser should now both cache them forever and re-download them whenever they change on the server. This avoids the unnecessary (and quite frankly, embarrassing) re-requests for all static resources, which typically just returned a 304 Not Modified response. As a result, the blog should now load a bit faster on repeated visits, especially on slower connections. That should allow me to deliberately not paginate it for another few years without it getting all too slow – and should prepare us for the day when our first game reaches 100% and the server gets smashed. :onricdennat: However, I am open to changing the progress blog link in the navigation bar at the top to the list of tags once people start complaining.
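For the curious, here's a minimal sketch of how this kind of cache busting can be implemented in Go. This is not this site's actual code, and the directory, URL layout, and port are made up for the example:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"net/http"
	"os"
	"path/filepath"
)

// hashedURL builds the URL for a static file with a content hash appended,
// so that the URL changes exactly when the file's contents do.
func hashedURL(staticDir, name string) (string, error) {
	data, err := os.ReadFile(filepath.Join(staticDir, name))
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(data)
	return "/static/" + name + "?" + hex.EncodeToString(sum[:8]), nil
}

func main() {
	// Since the hash in the URL handles invalidation, hashed resources can
	// be cached for (practically) forever.
	files := http.StripPrefix("/static/", http.FileServer(http.Dir("static")))
	http.Handle("/static/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
		files.ServeHTTP(w, r)
	}))
	http.ListenAndServe(":8080", nil)
}

Templates would then call something like hashedURL() when emitting <script>, <link>, or <img> tags, so that a changed file automatically gets a new URL while unchanged ones keep their cached copies.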

Apart from some more invisible correctness and QoL improvements, I've also prepared some new funding goals, but I'll cover those once the store reopens next year. Syntax highlighting for code snippets would have also been cool, but unfortunately didn't make it into these two pushes. It's still on the list though!

Next up: Back to RE with the TH03 score file format, and other code that surrounds it.

📝 Posted:
🚚 Summary of:
P0143, P0144, P0145
⌨ Commits:
(Website) 9069fb7...c8ac7e5, (Website) c8ac7e5...69dd597, (Website) 69dd597...71417b6
💰 Funded by:
[Anonymous], Yanga, Lmocinemod
🏷 Tags:
website

Who said working on the website was "fun"? That code is a mess. :tannedcirno: This right here is the first time I seriously wrote a website from (almost) scratch. Its main job is to parse its way through a Git repository and calculate numbers, so any additional bulky frameworks would only be in the way, and would probably need to be run on some sort of wobbly, unmaintainable "stack" anyway, right? 😛 📝 As with the main project though, I'm only beginning to figure out the best structure for this, and these new features prompted quite a lot of upfront refactoring…

Before I start ranting though, let's quickly summarize the most visible change, the new tag system for this blog!

Finally, the order page now shows the exact number of pushes a contribution will fund – no more manual divisions required. Shoutout to the one email I received, which pointed out this potential improvement!


As for the "invisible" changes: The one main feature of this website, the aforementioned calculation of the progress metrics, also turned out to be its biggest annoyance over the years. It takes a little while to parse all the big .ASM files in the source tree, once for every push that can affect the average number of removed instructions and unlabeled addresses. And without a cache, we had to do that every time the app server process was re-launched.
Fundamentally, this is – you might have guessed it – a dependency tracking problem, with two inputs: the .ASM files from the ReC98 repo, and the Golang code that calculates the instruction and PI numbers. Sure, the code has been pretty stable, but what if we do end up extending it one day? I've always disliked manually specified version numbers for use cases like this one, where the problem at hand could be exactly solved with a hashing function, without being prone to human error.
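In other words, a cached set of progress numbers is only valid for a specific combination of both inputs. Conceptually – and this is just a sketch of the idea, not the site's actual data structure – the cache key would have to look something like this:

// progressCacheKey identifies one cached set of progress numbers. The
// commit pins the .ASM inputs, and ParserHash pins the version of the
// Golang code that parses them – however that hash ends up being
// computed, which is what the rest of this post is about.
type progressCacheKey struct {
	Commit     string // Git commit hash in the ReC98 repository
	ParserHash string // hash identifying the parser code
}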

(Sidenote: That's why I never actively supported thcrap mods that affected gameplay while I was still working on that project. We still want to be able to save and share replays made on modded games, but I do not want to subject users to the unacceptable burden of manually remembering which version of which patch stack they've recorded a given replay with. So, we'd somehow need to calculate a hash of everything that defines the gameplay, exclude the things that don't, and only show replays that were recorded on the hash that matches the currently running patch stack. Well, turns out that True Touhou Fans™ quite enjoy watching the games get broken in every possible way. That's the way ZUN intended the games to be experienced, after all. Otherwise, he'd be constantly maintaining the games and shipping bugfix patches… 🤷)

Now, why haven't I been caching the progress numbers all along? Well, parallelizing that parsing process onto all available CPU cores seemed enough in 2019 when this site launched. Back then, the estimates were calculated from slightly over 10 million lines of ASM, which took about 7 seconds to be parsed on my mid-range dev system.
Fast forward to P0142 though, and we have to parse 34.3 million lines of ASM, which takes about 26 seconds on my dev system. That would have only got worse with every new delivery, especially since this production server doesn't have as many cores.

I was thinking about a "doing less" approach for a while: Parsing only the files that had changed between the start and end commit of a push, and keeping those deltas across push boundaries. However, that turned out to be slightly more complex than the few hours I wanted to spend on it. And who knows how well that would have scaled. We've still got a few hundred pushes left to go before we're done here, after all.

So with the tag system, as always, taking longer and consuming more pushes than I had planned, the time had come to finally address the underlying dependency tracking problem.
Initially, this sounded like a nail that was tailor-made for 📝 my favorite hammer, Tup: Move the parser to a separate binary, gather the list of all commits via git rev-list, and run that parser binary on every one of the commits returned. That should end up correctly tracking the relevant parts of .git/ and the new binary as inputs, and cause the commits to be re-parsed if the parser binary changes, right? Too bad that Tup both refuses to track anything inside .git/, and can't track a Golang binary either, due to all of the compiler's unpredictable outputs into its build cache. But can't we at least turn off–

> The build cache is now required as a step toward eliminating $GOPATH/pkg. — Go 1.12 release notes

Oh, wonderful. Hey, I always liked $GOPATH! 🙁

But sure, Golang is too smart anyway to require an external build system. The compiler's build ID is exactly what we need to correctly invalidate the progress number cache. Surely there is a way to retrieve the build ID for any package that makes up a binary at runtime via some kind of reflection, right? Right? …Of course not, in the great Unix tradition, this functionality is only available as a CLI tool that prints its result to stdout. 🙄
But sure, no problem, let's just exec() a separate process on the parser's library package file… oh wait, such a thing doesn't exist anymore, unless you manually install the package. This would have added another complication to the build process, and you'd still have to manually locate the package file, with its version-specific directory name. That might have worked out in the end, but figuring all this out would have probably gone way beyond the budget.
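(For reference, that CLI is go tool buildid. Reading the build ID of the whole running binary would have looked something like the sketch below – which both spawns a process and over-invalidates the cache, since any change anywhere in the binary changes its build ID, not just changes to the parser.)

package main

import (
	"os"
	"os/exec"
	"strings"
)

// ownBuildID shells out to `go tool buildid` on the running executable.
// Requires the Go toolchain to be installed on the server – one more
// reason this approach wasn't worth it.
func ownBuildID() (string, error) {
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	out, err := exec.Command("go", "tool", "buildid", exe).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}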

OK, but who cares about packages? We just care about one single file here, anyway. Didn't they put the official Golang source code parser into the standard library? Maybe that can give us something close to the build ID, by hashing the abstract syntax tree of that file. Well, for starters, one does not simply serialize the returned AST. At least into Golang's own, most "native" Gob format, which requires all types from the go/ast package to be manually registered first.
That leaves ast.Fprint() as the only thing close to a ready-made serialization function… and guess what, that one suffers from Golang's typical non-deterministic order when rendering any map to a string. 🤦

Guess there's no way around the simplest, most stupid solution of just calculating a cryptographically secure hash over the ASM parser source file. 😶 It's not like we frequently change comments in this file, but still, this could have been so much nicer.
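To make it concrete, the gist of it is nothing more than this – with the file name and the specific hash function being placeholders rather than necessarily the ones the site actually uses:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"os"
)

// parserHash pins the current version of the ASM parsing code by hashing
// its source file. Any change to that file – even to a comment – yields a
// new hash and therefore invalidates all cached progress numbers.
func parserHash() (string, error) {
	src, err := os.ReadFile("asm_parser.go") // hypothetical file name
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(src)
	return hex.EncodeToString(sum[:]), nil
}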
Oh well, at least I did get that issue resolved now, in an acceptable way. If you ever happened to see this website rebuilding: That should now be a matter of seconds, rather than minutes. Next up: Shinki's background animations!

📝 Posted:
🚚 Summary of:
P0016, P0017
⌨ Commits:
(Website) 98b5090...bca833b, (Website) 3f81d1f...d5b9ea2
💰 Funded by:
qp
🏷 Tags:
website

Website development time: 12/12

Calculating the average speed of the previous crowdfunded pushes, we arrive at estimated "goals" of…

So, time's up, and I didn't even get to the entire PayPal integration and FAQ parts… 😕 Still got to clarify a couple of legal questions before formally starting this, though. So for now, let's continue with zorg's next 5 TH05 reverse-engineering and decompilation pushes, and watch those prices go down a bit… hopefully quite significantly!

📝 Posted:
🚚 Summary of:
P0013, P0014, P0015
⌨ Commits:
(Website) b9805d2...efeddd8, (Website) 31474a0...9dc9632, (Website) 9dc9632...8d3652f
💰 Funded by:
qp
🏷 Tags:
website

Website development time: 10/12

In order to be able to calculate how many instructions and absolute memory references are actually being removed with each push, we first need the database with the previous pushes from the Discord crowdfunding days. And while I was at it, I also imported the summary posts from back then.

Also, we've now got something resembling a web design!

📝 Posted:
🚚 Summary of:
P0012
⌨ Commits:
(Website) b9918cc...b9805d2
💰 Funded by:
qp
🏷 Tags:
website

Website development time: 7/12

So yeah, "upper bound" and "probability". In reality it's certainly better than the numbers suggest, but as I keep saying, we can't say much about position independence without having reverse-engineered everything.

Next up: Money goals.

📝 Posted:
🚚 Summary of:
P0011
⌨ Commits:
(Website) 40c1e98...b9918cc
💰 Funded by:
qp
🏷 Tags:
website

Website development time: 6/12

Here we go, overall ReC98 reverse-engineering progress. Now viewable for every commit on the page.

📝 Posted:
🚚 Summary of:
P0010, P0054, P0055
⌨ Commits:
(Website) cbda977...94127fb, (Website) 94127fb...3161d7e, (Website) 3161d7e...40c1e98
💰 Funded by:
DTM, Egor
🏷 Tags:
website

Website development time: 5/12

Now with the number of not yet RE'd x86 instructions that you might have seen in the thpatch Discord. They're a bit smaller now – I didn't filter out a couple of directives back then.

Yes, requesting these currently is super slow. That's why I didn't want to have everyone here yet!

Next step: Figuring out the actual total number of game code instructions, for that nice "% done". Also, trying to do the same for position independence.