
Using the indexed start-of-disk system would be really handy for writing functions in a more efficient manner, and for making a program that can easily generate an Overlay-ready file with functions inside of it - being able to consistently refer to a function by ID sounds like an absolute lifesaver!

My idea for the IDE would, at first, basically just be a preprocessor.  All code is still written purely in TC-06 assembly, but you have some special "codes" the IDE knows how to use (e.g., "FUNC <function_name>" automatically inserts the code to jump to a function - the names are for human purposes only, it'll automatically pick an integer ID for 'em at generation time), and it lets you work on individual functions more easily.  That alone would probably make my life way easier when making games, OSes, etc.

Actually building up to a proper C/C++/C#/Java/etc compiler/parser of some sort that can output TC-06 assembly code would be INCREDIBLY COOL and definitely stands as a goal for sometime in the future, but I do have a feeling it's going to require some additions to the TC-06 language itself.  There's space for 5 more op-codes (plus, those codes could end up being some sort of "sub-code" thing - e.g., SUBONE [4-bit op-code from the sub-code #1 list] [24 bits of arguments] could be a new "command" to allow 15 more commands with less argument data), which I should hope would be enough!

I'm definitely gonna poke around in QT and see what tools I have available for making an IDE/preprocessor, and if I can wrap my brain around it, I definitely wanna try making a basic test of the preprocessor tool, because I really do think having some basic built-in function/memory location management could be a lifesaver while programming bigger things.

Unrelated-ish, but I'm actually kinda tempted now to run some sort of Senbir Game Jam where the goal is just to make a game that runs in Senbir, in, say, a week.  Not many people are likely to participate, I suspect, and there are some limitations in Senbir itself at the moment (like the apparent lack of support for holding down a key?), but I know I for one would have fun trying (and probably failing) to make a proper game in it.  The one major caveat I see there is the fact that the (apparent) maximum clock speed of 180Hz is definitely a major limiting factor that could mess up people's entries, and having to deal with the gamified render layer could annoy potential players of said games.

EDIT: Also, possible thing of interest - I've set up an official Senbir Discord server!  Link/join widget is in the main Itch page.

I think the start-of-disk function index system will mostly be useful for when you're doing it manually and referring to the functions by ID yourself.

If your assembler/compiler/preprocessor/IDE (or whatever you want to call it) lets you use "FUNC <name>" and inserts the code to jump there on its own, keeping track of the IDs for you - then it must also keep track of the addresses for those IDs, which suggests it can probably just as easily use those addresses directly, without needing the IDs or such an index table.

Though, I suppose we would want to minimize the number of instructions (or other resources) needed to do a call, to maximize what's available for other code, so I guess having the table might help with that.

Note: the rest of this post ended up being a rather in-depth analysis of the available approaches that such an assembler/etc. could use. Feel free to correct me if I've missed something, but apparently there's no cut-and-dried "best" approach, and it depends on circumstances... Also, I originally intended to respond to other parts of your post as well, but this is too long already, so I'll add separate replies for those later.

Let's see now. I don't think the JMP to the loader routine can be avoided (unless you can put the call at the end of the memory area), so that's one instruction, but since it's unavoidable we can ignore it for the purpose of comparing the different approaches.

So the main thing we need is to somehow get the actual disk address to load the function from into a register for the loader to use.

With index table

Now, if we have and use an index table somewhere on disk, what would the code for that look like?

Well, to start with, it means we must have a GETDATA instruction to load the actual address from the index table, based on the index.

GETDATA supports an immediate (constant) value, so can we use that for the index, to load the address directly, with just one instruction?

*checks the docs* ... well, that constant value gets shifted up by 10 bits (documentation says 11, but I think that's incorrect, and 11 would be even worse for this). This means that it can only be used to load from addresses that are multiples of 1024. Which means we get one function per 4KiB of disk space, minus one for address 0 since that's where the initially bootloaded program is. Not really useful with the default 256-word disk.
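To put numbers on that shift (a quick JavaScript illustration; names are mine, and I'm assuming the 10-bit shift semantics as described above):

```javascript
// Illustration of the granularity argument: if the immediate is
// shifted left by 10 bits, only every 1024th word is reachable.
const SHIFT = 10;
const wordsPerSlot = 1 << SHIFT;        // 1024 words between reachable addresses
const bytesPerSlot = wordsPerSlot * 4;  // 32-bit words -> 4096 bytes = 4 KiB

// The only disk addresses an immediate GETDATA could name directly:
function reachableAddress(immediate) {
  return immediate << SHIFT; // 0, 1024, 2048, ...
}
```

So a 256-word disk has exactly one reachable slot (address 0), which is where the bootloaded program already lives.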

So, we can't practically have the table index inside that one instruction, which means it must be stored either in memory or in a register.

The memory variant means having the function index stored in memory somewhere, probably loaded as part of the current function, and having the GETDATA instruction point to that address. The obvious way to do this would leave each call site using two memory addresses.

Trying to be clever and putting the instruction in the loader doesn't really help either, since we'll either be limited to one call per function, or have to spend instructions on modifying memory (either the instruction or the address with the index), which will probably take more addresses total.

The register variant means getting the function index into a register somehow, which will (generally) require at least one instruction, but in return it's possible to move the GETDATA instruction to the loader.

If we dedicate a register to it and limit ourselves to 256 functions (minus the initially bootloaded program), we can use SET to have it use just one instruction/memory address per call site.

If we don't dedicate a register to it, or need more functions, we have to either store the function index in memory somewhere (to use MOVI), or use multiple instructions (e.g. MATH to clear + SET) - so at least two addresses per call site. This is no better than the memory variant, so we can disregard this option.

So, with an index table, we end up with these options to choose from:
- two memory addresses per call site
- one memory address per call site, plus one in the loader and one register, with limitations

This is in addition to the space taken by the table itself, and the JMP for each call site.

Without index table

Now, if we don't have an index table, and just use the address directly, what can we do then?

Using MOVI will work, and will take two memory addresses - one for the instruction, one for the function address. Simple and straightforward.

Using SET can work, and will take just one instruction, but requires all functions to start at addresses before 256 (since we can only use 8 bits), and that we're careful about how the load-address register is used (since the other 3 bytes must remain 0).

However, the loader is currently using R2 for the load address (to use IFJMP), which I doubt we can be that careful with, so we'll need to use another one, and copy it into R2 as needed. That copy instruction can be in the loader, but this will essentially require a dedicated register.

It's also possible to use multiple SET instructions, to ease (or even eliminate) the limitations, but that means using between two and four instructions per call site, without being any better than using MOVI, so we can disregard this option.

Using PMOV or MATH essentially requires already having the address (or something that can easily be turned into the address) in a register, which seems rather unlikely in the general case, without spending extra instructions on it. Generating a function index using this method is somewhat more likely, but not much. So either way, this can only be used for special cases.

So, without an index table, we end up with these options to choose from:
- two memory addresses per call site
- one memory address per call site, plus one in the loader and one register, with limitations

This is in addition to the JMP for each call site, but avoids taking space for a table.

These options are very similar to the ones with a table, though not quite the same, in that the limitations of the second option are a bit different.

Summary of options

The first option for both uses the same number of instructions, addresses and clock cycles, which suggests that with those approaches, the table is an unnecessary indirection that just takes extra disk space.

The second option for both also uses the same number of instructions, addresses and clock cycles. But they have different limitations.

Now, if the disk is no larger than 256 addresses (like the default disk), then the limitations are void for both, since they are based on needing more than 8 bits to address things. So in this case the table again just takes extra disk space.

However, if the disk is larger than that, then the limitations come into play, and then the table is actually useful as it allows us to have functions stored anywhere on the disk instead of just in the first 256 addresses. However, even with the table, we are limited to at most 256 functions, minus the space taken by the initial program.

Which approach to choose

The above suggests that the choice of approach will mainly be based on which resources are most dear, and how many function calls you have.

First off, if you need more than 256 functions, you obviously need to use the first option, and thus don't need the function table.

If you need all the registers you can get, but have room to spare in memory, then again the first option is probably best.

If you can spare a register, but have lots of calls and little room in memory, then the second option is probably best, as it saves you one word of memory per function call.

If you have little disk space, and at least a couple calls, then the second option without a table is probably best, as it saves you both the table and one word per function call (not just in memory, but on disk).

If your dearest resource is your sanity (because you don't have an assembler/etc. that does this work for you), then I think the second option with a table is probably best, closely followed by the first option with a table, with the non-table options pretty far behind...

Aaargh. Of course. Just moments after posting, and I realized I missed something rather significant... (The worst part is that I'm pretty sure I had noticed it early on and started to write something about it, but it apparently got lost somewhere while editing/expanding, and then I forgot about it...)

For both of the with-table options, I missed/forgot that GETDATA always puts the result in R1, while the loader expects to get the load address in R2, so it can use it with IFJMP without having to move it there first.

This means that those options require another instruction, to move the address from R1 to R2, either per call site or in the loader itself.

Thus, the with-table options are always more expensive (in both memory size and clock cycles) than the non-table options, instead of being equivalent.

... I do think that most of the post is still valid, though, including the last section.

PSA: There is an edit button, so in the future, if you post something and go "aw crap, I should've remembered to add [insert thing here]!" - you have a way out without double-posting.  I know I've done that before - it happens to the best of us!  At least this isn't Twitter or something, right?  :P

I am inclined to agree with pretty much all your conclusions on efficiency analysis - the tables are pretty much only good for human sanity when developing without an IDE or similar.  Having to search & replace for address changes is an absolute pain.  I don't have much in the way of comments beyond that - brain is a bit numb from it being late where I am, and from programming a kernel for the past several hours.  Will re-read everything in the morning when I'm less zombified.  >.<


Yeah, I know there's an edit button, but at the time I really didn't feel like doing the editing required, since it wasn't just a matter of "should have added this" - to edit in the correction I would have had to change and rewrite significant chunks of the text, such as the summary, and it was already late (getting towards early), and it would have taken enough time that others might have read the first version in the meantime, so I'd have to add something about the correction anyway, and... yeah.

I suppose I could have added my reply as an "EDIT:" at the bottom or something, but it felt more appropriate as a reply, since it was a pretty significant correction. (And in my mind, double-posting is something entirely different than responding to yourself. But I guess I might be out of touch on that.)


(Note: This is based on the version that was out before the update mentioned in the below post about the OS kernel. I don't know if that update changes anything that matters for this post, but thought I'd mention it just in case.)

Regarding the op-codes, one thing I've noticed is that the current instruction set is fairly RISC-like. It even mostly uses a load/store architecture, with MOVI/MOVO being the only memory-accessing instructions - except for GETDATA/SETDATA with flag 1 or 2. That's pretty neat.

Also, several of the existing instructions already have the kind of sub-code you mentioned, though usually called "flag", and their bit positions vary.

E.g. MATH has sub-codes for ADD, SUB, MOV, etc., and IFJMP has two separate ones for forward/back and for EQ, GT, etc..

Regarding the max clock speed, based on the numbers you're using, I'm guessing your clock ticks are currently tied directly to the game's rendering frame rate, or maybe to some kind of timer (with a limited resolution).

However, I don't think you really need them to be tied that way - after all, for an emulator, it doesn't really matter if the ticks are evenly spaced. The most you might need is for them to appear that way to the player.

So, a fairly simple technique for that is to run two or more ticks per frame (or per timer triggering). This would double (or more) the apparent clock frequency, as long as the CPU can actually keep up with the emulation.
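A minimal sketch of that technique (illustrative names, not the actual Senbir or TC-06.js API):

```javascript
// Run N emulator ticks per timer/frame callback, so the apparent
// clock rate is N times the callback rate. Purely illustrative.
function makeMultiTicker(tick, targetHz, callbackHz) {
  const ticksPerCallback = Math.max(1, Math.round(targetHz / callbackHz));
  return function onCallback() {
    for (let i = 0; i < ticksPerCallback; i++) tick();
    return ticksPerCallback;
  };
}
```

E.g. a 60Hz frame callback with a 360Hz target runs six ticks per frame.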

I used that technique in my JS TC-06 emulator, and have managed to hit 500kHz that way. Admittedly only with the debugger off, and a very simple program (the max I've reached depends on what it's doing) - but this was in JavaScript, running in Firefox, with code that isn't primarily built for max speed - so I suspect you can do better than that in C#, without needing to move to C++ or using things like threads.

Of course, that assumes you actually are timer bound and not CPU bound, but I don't really see how you could be CPU bound at that frequency, without trying to be...


RedCode and RISC/CISC (as described by my dad, a C64 vet - can never remember which, though >.<) have been my main inspirations for developing the TC-06 instruction set.  While many of the functions already support sub-codes of their own, the main purpose of adding a dedicated sub-code function would be for utilities, mainly - which is what the current UTL code does.  UTL 0 is the offset function used in the kernel, and the assembler will just directly accept OFST [starting address] to output the equivalent of UTL 0 [address].

The max clock speed seems to be a limitation specifically of the Unity function I'm using to execute the actual function that does a clock cycle, MonoBehaviour.InvokeRepeating(string methodName, float time, float repeatRate).  If I could get past that, I have a feeling the clock speed limitation would disappear.  Your idea about having it run multiple ticks per call of the function from InvokeRepeating is a good one - it wouldn't be hard to amp it up to at least 1.8kHz, maybe further!  Unity can be a bit badly-optimized, in my experience, so even a 1kHz max would be a big deal to me - getting up to, say, 1MHz would be huge (and I'm genuinely not sure it's possible without using an external library).  I would LIKE to keep it trying to run at a specific clock speed (e.g., keep a timer, rather than just "execute as many cycles per frame as possible"), because the limitations in processor speed are a fun part of the challenge for TC-06 development.

You made a JS implementation of the TC-06?  That's mind-blowingly cool!  Do you plan to share it anywhere?  Also, 500kHz really isn't half-bad, especially when you compare it to the current max speed of ~1/2778th that in Unity - I'm not sure that 500kHz could be topped in Unity with just raw C# stuff, but handling it with a native C++ library or similar should pretty much guarantee much higher speeds.


Yeah, I did - well, it's not complete, and probably has bugs in it (as in, things that don't work quite the same as in yours), and parts that could use optimization, but it seems to work, more or less.

I'm not sure why I made it, really, but couldn't seem to stop myself... Probably mostly the challenge, to see if I could, without looking (at least too much) at how you did it, just at the docs and the in-game behaviour. (I haven't really looked through most of your code yet, just some of the latest diffs to see what exactly had changed - and I haven't yet integrated those changes into mine.)

I haven't shared it anywhere yet (wasn't sure you'd approve and figured I'd ask first), but I can push it somewhere (probably GitHub) if you'd like to see it (and don't mind it becoming public).

I agree that it's good to try to hit a specific frequency rather than simply running it as fast as it can - that's what my emulator does, too. I don't usually run it as fast as it can go, unless I'm specifically trying to find the max frequency or compare the performance of different implementations. (The default frequency is set at 60Hz to match Senbir.)

Also, that 500kHz figure is pretty much the highest one I've managed so far - most of the time I don't get anywhere near that (not that I really try to, or check all that often). Especially enabling the memory debugger limits the max frequency to a lot less, but it also depends on exactly which instructions it's running, since those take varying amounts of real time (e.g. a simple MOVO vs updating the screen). It has also varied over time with changes to the implementation. I think I've seen something like 30kHz too, at the lower end.

Of course, if I had done just one emulator tick per timer tick, I wouldn't have gotten anywhere close - IIRC setInterval in browsers is usually limited to something like at least 10ms between ticks, which would have limited me to about 100Hz. I still set it to the desired frequency, so it doesn't tick more often than it needs to (e.g. 100Hz when I want 60Hz), but it also doesn't always tick as often as I'd want.

My code for getting around that is a bit odd, though, in that it tries to maintain the asked-for clock frequency even if the underlying timer is unreliable (as in, has significant jitter) - so it keeps track of the time for the last tick that was run, and when the timer triggers, it checks what the current time is and runs as many emulator ticks as necessary to catch up to what it should have run by that time (whether that's 0 or 100k).
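In simplified sketch form (this is not the actual clock code, just the idea):

```javascript
// Simplified sketch of the catch-up approach: on each (possibly
// jittery) timer event, run however many ticks are needed to catch
// up with where the clock should be by now. Illustrative names only.
function makeCatchUpTicker(tick, targetHz, now = Date.now) {
  let last = now();
  return function onTimer() {
    const elapsedMs = now() - last;
    const due = Math.floor(elapsedMs * targetHz / 1000);
    for (let i = 0; i < due; i++) tick();
    last += due * 1000 / targetHz; // keep the fractional remainder
    return due;
  };
}
```

Tracking the leftover fraction of a tick is what keeps the average rate correct even when the timer fires at odd intervals.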

This approach is a bit risky, since if the asked-for frequency is too high, it might take so much time to run those ticks that it keeps needing to run even more ticks the next time the timer triggers, which (if not stopped) would eventually make it seem to have locked up since it takes so long.

(My approach for measuring the max Hz is to start the emulator with a reasonably high frequency, make a note of actual time and tick count, wait 10 seconds, make another note of actual time and tick count, and stop it. Then the frequency is calculated as (endTicks - startTicks) / (endTime - startTime). If I hit the target frequency, I increase the target and try again, until I fail to hit it. That gives me an approximation of how fast it can go (with that program etc.) - I usually round that result downwards to avoid problems.)
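The calculation itself, for completeness (times in milliseconds; naming is mine):

```javascript
// Measured frequency from two (tickCount, time) samples, per the
// procedure described above. Times are in milliseconds.
function measuredHz(startTicks, startMs, endTicks, endMs) {
  return (endTicks - startTicks) / ((endMs - startMs) / 1000);
}
```

So 5,000,000 ticks over a 10-second window works out to the 500kHz figure mentioned earlier.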

I've considered adding code to detect and protect against that runaway effect, but haven't done so yet - which enables me to measure the max without having that protection kick in.

A similar risk exists even for a simple loop that gives you N emulator ticks per time tick, too, if you don't protect against setting N too high - but at least that wouldn't start seemingly safe and then run away getting worse. (As long as your timer doesn't try to catch up to missed ticks, anyway.)


I know the feels - I'm prone to "doodling" in Unity and making little projects, just because they seem like an interesting challenge.

Senbir, and by extension, the TC-06 architecture, are LGPL-licensed.  You're 100% free to redistribute mods, updates, reimplementations, etc - I believe they just have to be LGPL themselves.  I'm not sure how it affects something based on the architecture (maybe I should explicitly define that as Creative Commons or something?  dunno), but far as I'm concerned/can tell, you're good to go for sharing the JS version.  You'd need to include a link (I think) to here, and/or the Gitlab page, but that should be the extent of extra garbage you need to do for sharing it.

I'd be interested in seeing the catch-up/timer code you've got - even if it's not perfect, it'd probably be useful in figuring out how to give Senbir better support for clockrates >180Hz.  My current ideas vaguely revolve around getting some sort of average (e.g., the InvokeRepeating speed is 1f/(clockSpeed/numberOfCyclesPerInvocation), and numberOfCyclesPerInvocation=clockSpeed//180f, assuming // is a "divide and round to integer" function of some variety), but I'm not quite sure what formula would be best, and it could still cap out around some relatively low number, like 324kHz.
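As a language-neutral sketch of that formula pair (a JS stand-in for the Unity C#; the 180Hz cap and the exact InvokeRepeating behaviour are assumptions from the discussion, not verified):

```javascript
// Sketch of the formula pair above: how many cycles each
// InvokeRepeating call should run, and the interval between calls,
// for a desired clock speed. Names follow the text.
function invokeParams(clockSpeed, maxInvokeHz = 180) {
  const numberOfCyclesPerInvocation =
    Math.max(1, Math.floor(clockSpeed / maxInvokeHz));
  const repeatRate = 1 / (clockSpeed / numberOfCyclesPerInvocation); // seconds
  return { numberOfCyclesPerInvocation, repeatRate };
}
```

E.g. a 1.8kHz target comes out as 10 cycles per invocation at a 1/180s repeat rate.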

EDIT: *fixing derp wording because sleepy*


Well, regarding reimplementations, IANAL, but I think clones (whether of games or other things) are an example of how that doesn't require following the license of the original (as long as the clone authors haven't copied any of that original's code, just written equivalent code themselves based on observed behaviour (and maybe documentation?)). (Come to think of it, isn't Mono in some ways an example of that? Having reimplemented .NET?)

So, given that I wrote it without referencing the original code, I probably don't really have to use the same license etc. for it. On the other hand, it's not like I'm really against using the LGPL (though I'm not very familiar with it, esp. v3, as I usually tend to use simpler ones) - and I have since started looking at the code of the original (not to directly copy it but to figure out what exactly it supports/handles (actually for a different project)), so I guess that might be going into a gray area, since it might affect whatever code I write later even if I'm not trying to copy it. So I guess it would probably be safer and easier to just use the LGPL like you said.

Anyway, here you go: https://github.com/edorfaus/TC-06.js/

You can see it live here: https://edorfaus.github.io/TC-06.js/

Regarding the architecture itself, I'm not sure if that can really be protected with a copyright license, as I think the copyright can protect your documentation about that architecture, but IIUC not the actual information inside that documentation about the architecture itself? (Meaning that if someone learned it well enough to write their own docs from scratch, without copying from yours, I think they could do that without infringing.) I think you might have to get into patents to protect the information itself. But again, IANAL, and this is a pretty messy legal area IIUC. Not to mention the international aspects of it. (I'm in Norway, myself.)

However, I think the idea about using CC for that instead of GPL is probably a good one regardless, as the GPL is not really meant for documentation like that (or documents in general, really). Do look into it yourself before deciding, though, don't take my word for it. (I'm not an expert.)

--

That formula pair looks about right to me (maybe depending on implementation details), though you may want to tinker with that 180 number - if a lower number is more reliable (in terms of the timer actually hitting the frequency), it might be better to do that and just loop a few more times for each timer tick instead. You may also want to add some limits to avoid having someone make it try to loop 1e9 times per timer tick or something crazy like that. (Ideally it would probably adjust that relative to what the host PC can do, but that's harder to get right.)

Here's the class that handles the ticking in my implementation: https://github.com/edorfaus/TC-06.js/blob/master/clock.js

That class emits a "tick" event to run an emulator tick, while the "render-tick" event was added later for optimization purposes (the memory debugger now uses that to avoid updating its HTML elements repeatedly in one render cycle).

(I've tried to cleanly separate the various components, to keep them independent of each other (loose coupling), with just the main index.js linking them together. I do think I have a few places left with some tighter coupling that I haven't gotten rid of yet though. Feel free to ask if something's confusing, though I can't promise to always (or ever) respond quickly.)


I, also not a lawyer, concur with your thoughts on all the legal jargon.

Really solid JS implementation!  I've spent a bit of time toying around with it, trying to write a basic Blinkenlights program - I have to admit, having an assembler has spoiled me.  Before Senbir, I was toying with an older prototype of the TC-06 architecture, and made a Blinkenlights program with naught but my documentation, and manually writing out the binary code for it.  I'm having a bad time of trying to do the same in the JS version, and I'm not willing to pin that on, say, unfinished debug tools or what-have-you.  >.<

[GIF: Classic Terminal]
(Context for the surroundings: I just hijacked a previous test/doodle project that had a basic first-person controller, so it was possible to walk around and turn the computer on/off)
I've done some maths for the formula being /100 instead of /180, and the numbers come out to something like needing to run the actual clock cycle function ~10k times per call from InvokeRepeating at 100Hz to run the computer at 1MHz, and...My Unity experience tells me it'd probably just lock up if you asked it to crunch like that.  With a native C++ library, it might be able to handle it, though - even in a low-power VM, I can do some heavy crunching with minimal slowdown.  The XScreensaver Galaxy program can be modified to be a proper N-body simulation, without MUCH lag even when there's several galaxies with 1000+ bodies each, and still manage ~45 FPS - in a VM.

I think that's probably gonna be my next development project, just trying to optimize things so it's possible to run at even a meagre 1kHz.  Well, that, and toying with a networking peripheral.  I have an idea for the actual hardware side (i.e., what you have to work with as a programmer), but I'm not sure how to go about letting players actually connect to one another, from a UX standpoint.  Do you pick a peripheral in-game, go to the debug screen, type in a real-world IP, and connect to it?  Does it automatically host things, or do you have to be a host/connector?  Stuff like that.


Well, 1kHz would only require each 100Hz timer tick to run the actual clock cycle function 10 times (not 10k), which doesn't sound like it ought to be a problem. (And at 180Hz it'd be just 5.6 times. Lowest integer ratio would be 8 times at 125Hz.)

I'd say try it, and see how high you can make the loop count go at a given frequency before it starts to bog down, not just in theory but in actual practice. It might surprise you (in either direction), and even if it doesn't, at least then you'd have some hard data to compare with, to see if your optimization efforts are actually helping any (and how much). (Sometimes an apparently obvious optimization actually has the opposite effect.)


As for my version not having an integrated assembler, yeah, that's probably one of the reasons I haven't done much with it yet either (other than make it).

Though, on the other hand, there's still an assembler in Senbir, so I suppose we could use that to compile the program, and then copy over the result to the browser. Still nowhere near as convenient as an integrated one, but at least we wouldn't have to do the bit-twiddling manually.

I've been meaning to add save/load actually, with support for the disk image format of Senbir, so you could just load the file instead of copying the bits manually - but I haven't gotten further than including the GUI buttons for it (and then hiding them since they weren't implemented yet).

I'll probably get around to adding an assembler eventually, but it hasn't really been a high priority yet. I was primarily going for the emulator itself, and wanted to get that into decent shape first. But, I suppose having an assembler would make testing it easier, so... Maybe I should push it up the list a bit. We'll see, I guess. (What I'm currently working on is closer to the assembler than the emulator anyway... But it's not in JS, so...)

(Oh, and, in case you didn't realize, it currently comes with a default program (for testing purposes), which includes a pixel blinker.)


Regarding the networking peripheral, yeah, UX for that could get tricky. Security suggests no auto-hosting and the ability to bind to specific IPs and such, but that's rather less user-friendly. I don't think I really have much advice for that; I'm more of a backend dev. Implementation might get tricky too, what with (probably) needing NAT traversal and such (unless you only intend it for LAN connections or people who can set up forwarding).

By having the IP in debug, I'm guessing you've decided to make it only do predetermined point-to-point connections? (And probably just one at a time?) Otherwise it could be solved by having the program tell the peripheral which IP to connect to (which fits in a single word).

A somewhat related issue: if someone wants to test their networking code locally, before connecting to anyone else, would they have to run another instance of the game in parallel, or is it feasible to have more than one virtual computer in the same 3D environment, and connect them virtually? The answer will probably affect the UX, either way.


Using 125Hz as the ratio-maker works nicely - tried at 1kHz, 10kHz, 100kHz, and 1MHz.  Up to 100kHz, at least, performance is well within acceptable levels for the end-user - Unity's profiler estimates the game runs at 100 frames per second at 100kHz.  1MHz takes it down to ~8-10 FPS.  Mind, there's nothing fancy put in to make it more efficient - it might be possible to get better performance than that in C#.


It doesn't particularly show in the gif, I don't think, but the paint program feels MUCH smoother at 1kHz - the redraw function when moving the cursor often doesn't even end up showing because it's just running *that fast*.  If 100kHz is acceptable, I'm quite confident some low-fi games should be possible with acceptable levels of performance.  Running at the default-mode resolution of 16x9, a full self-modified screen redraw function would take ~576 cycles, which would let the in-game game run at ~174 FPS.  The same for the Compy6c preset is a bit less optimistic - 3 FPS in 32768 cycles per full redraw.  That said, most games aren't likely to need to re-draw the entire screen at once, and they might run at lower resolutions anyway.

(EDIT: To clarify, by "full self-modified cycle", I mean one where there's at least 4 operations to modify, say, a SETDATA function.  Any sort of basic self-modifying code that relies on data already in the registers (like a loop's counter) tends to need 4 ops: MOVI the data being, well, modified, PMOV it or similar, use MOVO to replace it in RAM, and then actually execute it.  Any sort of fancy redraw function would probably need a lot more than that.)
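For concreteness, that cycle-budget math works out like this (a quick sketch; `redraw_fps` is just a name I made up, and the 4-ops-per-pixel figure is the self-modifying-code estimate from above):

```python
# Rough cycle-budget math: ~4 ops per pixel for a self-modifying
# redraw, divided into the clock rate to get in-game frame rate.
def redraw_fps(clock_hz, cycles_per_redraw):
    """In-game FPS if every frame is one full screen redraw."""
    return clock_hz / cycles_per_redraw

cycles_16x9 = 16 * 9 * 4                 # 576 cycles per full redraw
print(redraw_fps(100_000, cycles_16x9))  # ~173.6, i.e. ~174 FPS
print(redraw_fps(100_000, 32768))        # ~3 FPS for the Compy6c preset
```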


The Senbir file format is painfully simple, it's literally just plaintext, one address' bits on each line, with the first line being the number of addresses to load.  Meant to add a "save to bytes" function of some sort that'll save it in proper binary, but...never found the right functions for it.  I have seen the pixel-blinker in the default emulator state - I was just trying to recreate my old one from the original TC-06 prototype for the fun of it.  ^_^


I'm thinking binding to a specific IP (and more importantly, a specific port) is the way to go for the networking system, which means you could have multiple network ports open on your computer at once (as well as maybe an alternate computer you can spawn in that's mostly network ports, to act as a router & enable some larger networks to be built).  Port-forwarding might be kind of a pain at times, but it'd be worth it for the ability to let a bunch of people connect to the same computer via different in-game network devices.

I'd like to do predetermined point-to-point connections - it keeps them simple to understand, and it adds the possibility of getting to write a custom networking system (like TCP/IP) - which would be SUPER cool.  Far as I'm aware, ethernet cables & whatnot are basically just big connections between computers that funnel data from one device to another - the computers' OSes actually handle figuring out where the data goes.  I don't think it'd be too hard to write a simple TCP/IP-style networking API that can handle automatic packet forwarding and such, so it'd be possible to make a router computer (or several) to act as a LAN, or on a larger scale, potentially even a miniature internet.
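At its simplest, that "figuring out where the data goes" could just be a table in the router computer's OS mapping destination addresses to outgoing network peripherals - a toy sketch, with all the names being made up for illustration:

```python
# Hypothetical routing table for an in-game "router computer":
# destination address -> which network peripheral to forward out of.
routes = {1: "net0", 2: "net0", 3: "net1"}

def next_hop(dest_addr, default="net0"):
    """Pick the outgoing port for a packet, with a fallback default route."""
    return routes.get(dest_addr, default)
```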

An ingame IRC system is definitely among my programming end-goals - who needs Discord when you can just connect to the Senbirnet, fire up your IRC client, and connect to the official server to chat with people?

Yep, not surprised you can get fairly high with C# (I figure it's generally faster than JS).

Assuming it's a linear relationship between those numbers (which it quite possibly isn't), and using 8fps (the worst) at 1MHz, you should still be able to get 60fps at 491kHz (~59fps at 500kHz). (Though that's on your system, other players might have slower ones. (Or faster.))
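That interpolation is easy to sanity-check - a straight line through the same two measured points, solved in both directions:

```python
# Linear fit through the two measured points (100 kHz, 100 fps)
# and (1 MHz, 8 fps), assuming fps varies linearly with clock rate.
F1, V1 = 100e3, 100.0
F2, V2 = 1e6, 8.0

def fps_at(clock_hz):
    return V1 + (clock_hz - F1) * (V2 - V1) / (F2 - F1)

def clock_for(target_fps):
    return F1 + (target_fps - V1) * (F2 - F1) / (V2 - V1)

print(round(clock_for(60)))  # ~491304 Hz, i.e. ~491 kHz
print(round(fps_at(500e3)))  # ~59 fps
```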

Some optimization might let you get at least a bit higher - like 600kHz @ 60fps at least, I'd guess, though maybe not 1MHz. Of course, it depends on how optimizable the code really is - for all I know it might already be about as fast as it will go without major effort.

Frankly, though, I think even 100kHz should be more than plenty, considering the entire idea of Senbir is for it to be a very restricted computer - so, mission already accomplished? :)


Huh, I didn't realize the first line was the number of addresses to load - I thought it was just a plain file of data words, with no header and loading simply stopping at EOF. But then I hadn't really done much more than glance at it yet...

Yeah, saving in binary can be a bit tricky, in part because the easiest APIs are generally only for text, but also because there are often subtler things to consider about the format you save in. As an example, take 32-bit words like you have here: are you saving them in big-endian, little-endian, or platform-endian order? The latter is usually easiest (as in, it's what you'd do if you don't even consider it), but it also makes the format less portable, as there'll technically be multiple variants of it - unless you have it declare the endianness in the headers somehow, but then loading is trickier. And if you want nice extensibility, you really have to design it carefully. Text formats are often easier.
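For illustration, Python's struct module makes the three options concrete:

```python
import struct

word = 0x12345678
big    = struct.pack(">I", word)  # always b'\x12\x34\x56\x78'
little = struct.pack("<I", word)  # always b'\x78\x56\x34\x12'
native = struct.pack("=I", word)  # depends on the host CPU - the
                                  # "platform-endian" trap described above
```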

... Huh. If we consider that the file content is a program (with associated data)... Save in ELF format? :D (As if we didn't have enough complications...) ^_^'


Re: networking, if we ignore the physical and data link layers (which also have addresses, forwarding, etc.), that's more or less correct, with the wrinkle that the thing we're ignoring isn't really a point-to-point link since you can reach multiple (local) computers with it. (I could (initially did) go into quite a bit more detail about this, but it's probably not really needed.)

IIUC, I think that what you have in mind essentially means that the IP address and port you put into the peripheral settings will identify the equivalent of a physical port on a switch or network card, and the peripheral device only gives us what is essentially a bidirectional point-to-point serial link that you can send individual words (or bytes?) across, one at a time?

Then, if you want to connect more than two virtual computers together, at least one of them must have more than one network card peripheral (unless you intend to have it act differently in listen mode vs connect mode?) and be forwarding between the others (essentially implementing the function of a switch)?

That would be interesting in several ways, because it means that two-player-only cases could use the serial link directly without any switch computer or non-application networking protocol - while more complicated scenarios (like multi-application networking) would essentially require us to implement our own data link layer protocol with addresses, routing, etc. built into it - and both would be possible with the same peripheral. (Though it's more like a modem than a network card really. Or even just a serial port. (... Oooh. Implement Hayes.))
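As a taste of what that homebrew data link layer might look like: steal a few bits of each word for a destination address, and let the switch computer forward on those bits alone. This layout is entirely hypothetical, just to show the idea:

```python
# Hypothetical link-layer framing: top 4 bits of each 32-bit word are
# the destination address, the low 28 bits are payload. A "switch"
# computer only ever needs to inspect the address bits to forward.
ADDR_BITS, PAYLOAD_BITS = 4, 28

def make_frame(dest, payload):
    assert 0 <= dest < (1 << ADDR_BITS)
    assert 0 <= payload < (1 << PAYLOAD_BITS)
    return (dest << PAYLOAD_BITS) | payload

def split_frame(frame):
    return frame >> PAYLOAD_BITS, frame & ((1 << PAYLOAD_BITS) - 1)
```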

IRC could actually be in between those two - if everyone connects directly to the IRC server it doesn't require any general networking, since IRC has message routing built in. (See RFC1459 for details... no, wait, it's been updated. And come to think of it, it might be a bit too complex for the TC-06 to do quickly anyway... Unless we use a subset I suppose, that might work...)


... Man, it's been years (...decade?) since I played around with that protocol... Or even used it... I used to be on there 24/7, even wrote some bots... Still remember a server name, even, but that network shut down IIRC...

Oh, man. ^_^' I just remembered the crazy setup I used a little while for getting onto IRC... It's even kind of relevant, since it involves both a modem and a custom serial link, to use a network...

See, me and a sibling both used IRC (same channels even), but we got online with a modem and didn't really have a real LAN. I wanted to be on IRC at the same time they were, and knew a little bit of programming, so the solution we ended up with for a little while was having their PC be online (via modem), with two instances of mIRC (one for them, one for me in the background - on Win95 IIRC), with my mIRC instance scripted up to forward everything to a custom program which connected to a COM port with a null-modem serial cable link to my PC, which had another instance of that program forwarding to a mIRC instance on my PC.

So: Internet -> modem -> PC 1 -> mIRC 1 -> custom program 1 -> serial -> PC 2 -> custom program 2 -> mIRC 2 -> me.

A bit complicated, and completely ad-hoc, but it mostly worked. (Mostly. Not everything, and not perfectly, but I think it usually worked well enough for me to chat at least. (When the serial link cooperated.) No DCC obviously. Nor anything other than IRC. Since it wasn't really networking.)

(... wait... Was it a serial link or parallel? I remember messing around with the LPT port more than the COM port... Though that might have been later... eh, it's been too long. Can't really remember, and I guess it doesn't really matter much anyway. It was a homebrew job at least.)