It's fine that others want other things, but I think this is part of the challenge with improving terminals: a lot of the user base are advanced users who want to improve their own workflows rather than write something user friendly, and who want tools that are easy to combine and chain. It just isn't a priority for me to make my tools usable for anyone else (though I do separate out parts of them into libraries / gems as functionality matures).

As such, I'm not looking for improved terminals in order to make things that don't require cheat sheets (to use my editor you'd have to be comfortable reading the source). I expect to remain its only user, and that's fine. I think we're looking for very different things.

True, but you could also get the address of the screen buffer and define a structured type to overlay it, which allowed you to put values directly into memory. Each screen location was two bytes: one byte for the character value, and one for the character attributes, a set of bits that controlled red, green, blue and intensity for both the foreground color and the background color.

Then you could write routines that would fill a rectangular region with a color, scroll the text of a region up or down, and do all sorts of other windowy things.

Alternatively, you would issue an interrupt call to put the screen into 320x200 256-color mode, get the address of that buffer ($A000 if memory serves), similarly overlay it with a typed grid, then start poking byte values in and getting all sorts of nice colors out of it. There were also interrupt routines you could call that did some of these, and certainly routines in Crt that did some of this.

Then you were on your way to developing your own TUI library!
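The two-byte cell layout described above can be sketched as follows. This is a Python illustration of the bit packing only (the DOS-era code itself would have been Pascal or assembly); the function names are mine:

```python
# Each text-mode screen cell is two bytes: a character code and an
# attribute byte. The attribute byte packs the foreground color in
# bits 0-3 (blue, green, red, intensity) and the background color in
# bits 4-6, with bit 7 as background intensity or blink.

def make_attribute(fg_r, fg_g, fg_b, fg_intensity, bg_r, bg_g, bg_b, blink=False):
    """Pack the color bits into a single attribute byte."""
    fg = (fg_intensity << 3) | (fg_r << 2) | (fg_g << 1) | fg_b
    bg = (bg_r << 2) | (bg_g << 1) | bg_b
    return (int(blink) << 7) | (bg << 4) | fg

def make_cell(char, attribute):
    """One screen cell as the two bytes that would sit in the $B800 buffer."""
    return bytes([ord(char), attribute])

# e.g. bright white on blue: make_attribute(1, 1, 1, 1, 0, 0, 1) → 0x1F
```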
"Hardcore optimization" is inordinately expensive, so it can only be used sparingly, but "non-pessimization" is much easier and should be used throughout your codebase. His argument is that just applying non-pessimization alone represents a 100x-1000x speedup compared to typical code. In particular, if a codebase is pessimized, it's largely pointless to do hardcore optimization, because every time a new section is optimized it just reveals countless other sections that are hopelessly unoptimized and become the new bottleneck: non-pessimization is a prerequisite to effective hardcore optimization.

I also like the idea of isolating "bad code" as he puts it, but I feel that term is a bit uncharitable and implies an unnecessarily narrow scope. I don't think "bad" in the sense of a general value judgement is the operative term here; it would be better stated more flatly as "slow code" or maybe "expensive code", because that applies equally well but to more situations.

Caching with a key derived from hash(concat(,)) is a pretty clever idea to multiplex the hash table, but I don't really love that it has to hash the input bytes multiple times, and it doesn't solve the problem of figuring out how many output glyphs it should look up. For example, we'd still like to cache the output of a library that is written as well as could be expected for what it does but is still too expensive compared to our required throughput. I suspect that adding an output length parameter to the cache entry, and using a glyph cache that supports outputting multiple glyph sizes (even as separate arrays of power-of-2 sizes: 1, 2, 4, 8, 16), would have been less pessimized.
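A minimal sketch of the cache-entry idea above (all names are hypothetical; this is not the renderer's actual data structure): storing the output length alongside the glyphs means a hit tells the caller how many glyphs it produced, and the input bytes are hashed only once per lookup.

```python
# Hypothetical glyph-cache sketch: each entry records both the shaped
# glyphs and how many output glyphs the input produced, so a hit can be
# consumed without separately figuring out the output length.

class GlyphCache:
    def __init__(self):
        # input bytes -> (glyph_count, glyphs); one hash per lookup
        self._entries = {}

    def lookup(self, text: bytes):
        """Return (glyph_count, glyphs), or None on a miss."""
        return self._entries.get(text)

    def insert(self, text: bytes, glyphs: list):
        self._entries[text] = (len(glyphs), list(glyphs))
```

For instance, after `cache.insert(b"fi", ["fi_ligature"])`, a lookup of `b"fi"` returns the glyph list together with its length, 1.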
Nice recommendation, that was an enjoyable watch. I like the conceptual separation of the two (real) optimization techniques: "hardcore optimization" (my own term), which is intensive, careful measurement and fine-tuning, and this idea of "non-pessimization", the principle of doing the least possible amount of work while delivering the required features, analyzed from a "basic algorithms" and "fuzzy mechanical sympathy" perspective.
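As a toy illustration of the distinction (my own example, not from the talk): non-pessimization is just not doing obviously redundant work, with no profiler or fine-tuning involved.

```python
# Pessimized: each += copies the whole accumulated string, so total
# work grows quadratically with the number of lines.
def render_lines_pessimized(lines):
    out = ""
    for line in lines:
        out = out + line + "\n"
    return out

# Non-pessimized: same feature, single pass over the input.
def render_lines(lines):
    return "\n".join(lines) + "\n" if lines else ""
```

Both deliver the same feature; the second simply avoids work the first never needed to do, which is the point of the principle.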