Why You Should Program for Old Computers
May 23rd, 2022 at 5:00 PM by Kugee
Ever since I was a dumb fat kid, I had always wanted to become a computer programmer. But even as I learned to lose weight, programming still felt like a sort of arcane magic I couldn't quite grasp. The main problem I had was chasing a large goal too soon without really knowing all the essentials I needed beforehand. It doesn't help that many tutorials are structured to tell you the exact steps to program a specific function rather than help you understand how it works.
So how did I manage to go from sparingly writing extremely niche programs to creating a full-fledged game for MS-DOS in a little over three months? It's not as simple as it sounds, but it does boil down to a few things: stop trying to get every little thing in all at once, focus on refining the most important things first, and then build on top of that. To execute this effectively, one has to think like a computer. I know nothing of the farces of object-oriented programming or the fancy features of most heavyweight languages these days, but I can tell you that thinking more about what is being stored in computer memory is the first step to leveraging it with exacting precision.
There were a few good explanations of what a game basically does, some of which I have held onto. It runs in a constant loop, ideally at least 60 times per second. Each iteration, it runs its logic, then it draws stuff on the screen - or the two may be intertwined. Rinse and repeat. The game is driven by collisions, unless it is driven by menu selections. Behind the fancy graphics are usually just very simple sets of polygons (ideally rectangles in a 2D space).
Don't Rely on Libraries (and "easy" languages)
One of the first plunges I took into game programming was in 2016, I believe. I looked over my options and wanted to settle on the MonoGame framework, which is an evolution of XNA, from what I recall. I do not mean to say it was bad or anything like that. I guess the mere fact that it was a framework ended up constraining my thinking: is there a ceiling on what I can do? Can I do this, or can I not? Am I using it properly? Is there something to it I'm missing?
It doesn't help that a ton of the online tutorials I had been looking at back then were unclear, incomplete, or just not very versatile. So, am I supposed to bite the bullet and get a formal education in programming? What if that is still insufficient? I took a couple of programming-related courses in high school, and I have to say, they were not very helpful. One covered Turbo Pascal, which didn't go far beyond very barebones character-mode programming, and another involved Java. Eugh... I think that speaks for itself. I still have the code for my assignments there, including a craps game that ended up being good enough to be showcased on the classroom projector. I guess the creativity was there, but it was not being channeled properly.
I suspect tons of other programmers are going to raise their pitchforks over what I'm about to say, but I think relying on libraries is not a healthy approach to programming; eventually, it will force you into their inherent limitations. If you can write your own procedures in tandem with them, that's great, but in my experience the best approach is really to work on bare metal, develop your own methods, and only interface directly with whatever is absolutely necessary to get the results you need, whether that's an API or a direct line to some memory space or hardware port.
Even when I was following an old book about assembly programming, it came with a lot of premade routines that, while nice to have around, raised concerns about whether I should even be using them for basic tasks in my programs, like printing strings or numbers - because copyright, I guess. Regardless, that book proved far more helpful in getting me to understand DOS programming than any tutorial out there, at least to some degree.
I may be overthinking this; some libraries are really good, certainly - the C standard library is one I like to use - but even then, you never know when complications related to libraries will start to get to you, perhaps in regard to licensing. Doom is known to suffer from exactly that.
Why Old Stuff?
One of the first things I recommend to anyone getting started with programming is QBASIC, with good reason. While it is not a language to count on for serious programs, it is very simple to approach and will have you drawing graphics in no time. It is also designed for an extremely simple operating environment: MS-DOS, which takes very little control of the system compared to Windows or some other proper 32-bit operating system. There are no hassles in installing something you're not sure you're gonna use; QBASIC is just a self-contained executable that's immediately ready to run, even in NTVDM or DOSBox.
Once you've gotten the hang of that, the next things I would advise moving up to are C and 8086 assembly. The latter, when paired with a good debugger, really helps a ton in understanding how the CPU works, and therefore what needs to be thrown at it. However, getting results from it upfront may prove difficult, which is why C may be a better starting point, if only for a while.
But isn't programming in 8086 assembly for DOS masochistic? Yes, absolutely. But it's like being given a pencil and paper. You could draw nothing more than a deformed circle, or you could illustrate an absolutely gorgeous landscape inhabited by magical dragons, taking up the entire page, complete with attentive shading. I am not that great of an artist, but programming on bare metal takes much the same approach. The pencil is the 8086 instruction set, and this cheatsheet from ChibiAliens is an excellent reference on how to use the pencil to greater effect.
That, I think, is what eventually flicked on some switches in me that helped me understand programming a lot better. Ultimately, it can do a lot to help one think for oneself about how a program should be written. Not every little thing can be worked out immediately, so something as simple as taking time off to reflect on what went wrong can eventually lead to the answer. Isolating yourself from spoonfed answers goes a long way.
For instance, printing a string could really be as simple as pointing to a location in memory, running through each byte and putting it on the screen, and stopping upon hitting the zero byte that terminates the string.
It's Less Wasteful
After programming The First Cell, I have become enlightened. I always knew that a program shouldn't have to be incompatible with Windows 95 if there is a shred of a possibility for it to be written to fit within it, but only through writing that program have I solidified that conviction. Try to picture what 2GB of RAM means now versus what it meant back in 1998. If you're dealing with very large files for a good reason, that much memory could very well be useful, and indeed it would have been in high-end applications back then. But when that much is going to, say, a few web browser tabs, I start to raise a lot of questions.
I say this as someone who has 64GB of RAM installed on my main computer (and uses a lot of it from time to time): there is no reason for a program to take up so many resources and require so many brand new dependencies (at least those from Microsoft). Programming for old computers like an XT clone or some Pentium rig from 1995 provides a great baseline for helping one be mindful of the resources they're utilizing.
It's the same reason I've been very conscious of how my website is designed for the most part; I want it to load fast on a 486 computer, because it accomplishes the same task that modern websites tend to convolute. There is still some work to be done there; an image 4000 pixels wide should load just fine on many computers now, but one thing that comes to mind is the need to provide smaller resolutions for older computers to view comfortably. I also had to drop the idea of an animated header for The First Cell's page because it was strangling Internet Explorer on my Pentium II Xeon machine at 1600x1200, as cool as it looked. Loading speed comes first!
HTML is simple enough, and I've got a guide on how to get started with that, but what about x86 assembly? Well, as obscure as the art is, don't worry. When The First Cell releases, so too will its source code, under a CC0 license, which is basically public domain and more. I want to help others learn this kind of programming as well, and such a source release should be at least a start toward just that.
The First Cell is only the beginning, and I fully intend to keep going with programming for old systems.