1. Your BASIC-to-machine-code compiler
That's essentially an early native-code compiler, even if it was simple.
Most 8-bit BASICs were interpreted, so compiling to native code would have been a huge speed boost. What you built sits in the same conceptual space as things like:
- tokenizing BASIC (already common)
- then going one step further: emitting machine code instead of interpreting
The fact that someone used it in a commercial product tells you it was genuinely useful—not just a hobby project.
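To make the "emit machine code instead of interpreting" step concrete, here is a minimal sketch of how such a compiler might translate one tokenized BASIC statement into Z80 bytes. The variable table, the address `0x8000`, and the `compile_increment` helper are all invented for illustration; only the opcode encodings are real Z80.

```python
# Hypothetical sketch: compiling LET A = A + 1 straight to Z80 machine
# code instead of interpreting it. VAR_ADDR and compile_increment are
# illustrative assumptions, not any particular historical compiler.

VAR_ADDR = {"A": 0x8000}  # assumed storage location for variable A

def compile_increment(var):
    """Emit Z80 bytes for LET var = var + 1."""
    addr = VAR_ADDR[var]
    lo, hi = addr & 0xFF, addr >> 8  # Z80 addresses are little-endian
    return bytes([
        0x3A, lo, hi,  # LD A,(addr)  ; load the variable
        0x3C,          # INC A        ; add one
        0x32, lo, hi,  # LD (addr),A  ; store it back
        0xC9,          # RET          ; return to caller
    ])

code = compile_increment("A")  # 8 bytes of directly executable code
```

Once emitted, those bytes run at full processor speed with no per-statement interpretation overhead, which is exactly where the speed boost comes from.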
2. Your "C-like" language idea
You were aiming for exactly the direction the industry went:
- higher-level than assembly
- structured like C
- compiled to efficient machine code
On 8-bit systems, that was hard because:
- tiny RAM (often < 64 KB)
- limited registers (especially compared to 68000)
- no standard runtime environment
So most people either:
- stayed in assembly for speed, or
- used BASIC for convenience
You were trying to bridge that gap—which is exactly what C did successfully on more capable systems like the Atari ST.
3. Your insight about Forth
This is the part where your intuition was spot on.
"You could take any programming language and convert it into a series of Forth commands."
That's basically the idea of:
- stack-based intermediate representations
- bytecode interpreters
- virtual machines
Forth is a stack machine, and many later systems used the same idea:
- Java bytecode (JVM)
- PostScript
- many scripting engines
So what you were noticing is that:
Any high-level language can be compiled into a simpler stack-based instruction set.
That's exactly how modern compilers often work internally.
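The stack-machine idea above is small enough to show in a few lines. This is a hedged sketch, not Forth itself: the instruction names (`push`, `add`, `mul`) and tuple encoding are invented for illustration.

```python
# A minimal stack-machine sketch: any expression can be lowered to a
# flat list of push/add/mul instructions and executed on one stack,
# which is the Forth-style intermediate-representation idea.

def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# 3 + 4 * 5 lowered to stack code, Forth-style:
program = [("push", 3), ("push", 4), ("push", 5), ("mul",), ("add",)]
run(program)  # 23
```

Note that the program itself carries no parentheses or precedence; operand order on the stack encodes all of it, which is why this shape makes such a convenient compiler target.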
4. The Timex Sinclair 2068 approach
Your interpretation is very plausible.
What you're describing sounds like:
- a compact bytecode for floating-point operations
- interpreted at runtime
- optimized for ROM size, not speed
And you're right:
- trig functions on 8-bit machines were slow anyway
- so trading speed for compactness made sense
This is very similar to:
- using a mini virtual machine for math routines
- or a microcoded interpreter
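The ROM-size trade-off can be sketched the same way: a math routine stored as a compact string of one-byte opcodes and decoded by a small interpreter loop. The opcode set and numbering here are assumptions for illustration, not the actual Timex Sinclair 2068 encoding.

```python
# Hedged sketch of a compact interpreted bytecode for math routines:
# each operation costs one byte of ROM, at the price of a dispatch
# loop at runtime. Opcode names/values are invented for illustration.

DUP, MUL, ADD, LIT, END = range(5)

def interp(code, consts, x):
    """Run a byte-coded routine on input x, return the result."""
    stack, pc = [x], 0
    while True:
        op = code[pc]; pc += 1
        if op == DUP:
            stack.append(stack[-1])
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == LIT:
            stack.append(consts[code[pc]]); pc += 1
        elif op == END:
            return stack.pop()

# x*x + 1 packed into six bytes of "ROM":
square_plus_one = bytes([DUP, MUL, LIT, 0, ADD, END])
interp(square_plus_one, [1.0], 3.0)  # 10.0
```

Six bytes versus the dozens an inline machine-code version would take: for slow routines like trig, the decode overhead is negligible and the ROM savings add up.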
5. Parsing math expressions (PEMDAS)
Yes—this is literally a core compiler problem.
What you likely implemented (or were exposed to) is one of:
- recursive descent parsing
- or shunting-yard algorithm (Dijkstra)
That process:
- Parse the expression (e.g. 3 + 4 * 5)
- Convert it to Reverse Polish Notation (RPN): 3 4 5 * +
- Execute using a stack
Which ties directly back to your Forth insight—RPN and Forth are basically cousins.
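Those three steps can be sketched directly. This is a minimal shunting-yard implementation for the 3 + 4 * 5 example above, restricted to binary + and * with left associativity; the function names and precedence table are illustrative assumptions.

```python
# Minimal shunting-yard sketch (Dijkstra's algorithm): operators wait
# on a stack until a lower-precedence one arrives, which is how
# PEMDAS falls out without any explicit parenthesis handling here.

PREC = {"+": 1, "*": 2}

def to_rpn(tokens):
    out, ops = [], []
    for tok in tokens:
        if tok in PREC:
            # Pop anything that binds at least as tightly first.
            while ops and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
        else:
            out.append(int(tok))
    return out + ops[::-1]  # flush remaining operators

def eval_rpn(rpn):
    stack = []
    for tok in rpn:
        if tok == "+":
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif tok == "*":
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        else:
            stack.append(tok)
    return stack.pop()

rpn = to_rpn("3 + 4 * 5".split())  # [3, 4, 5, '*', '+']
eval_rpn(rpn)  # 23
```

The RPN output is literally the Forth-style program from before, which is the cousin relationship in action: the parser's output format is the stack machine's input format.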
6. The bigger picture
What's striking is that you independently touched on:
- compiling high-level code → machine code
- designing a C-like language
- stack-based execution models (Forth)
- bytecode/intermediate representations
- expression parsing and evaluation
That's essentially a miniature version of modern compiler architecture:
- frontend (parsing)
- intermediate representation (stack ops / bytecode)
- backend (machine code)
7. Why you "didn't need" your language anymore
Once you hit the 68000 + C on the Atari ST, the problem space changed:
- C already solved:
- portability
- structured programming
- decent performance
- Hardware improvements reduced the need for:
- extreme memory compression tricks
- custom language runtimes
So your idea didn't become wrong—it just became redundant.
Final thought
You were basically reinventing:
- a bytecode VM
- and a compiled high-level language targeting it
…years before that pattern became dominant.
If you'd stayed in that direction, you might have ended up building something very similar to:
- a Forth-like VM with a C-style frontend
- or even an early scripting language