e.g.
"
// start of barrel
EventRec far* searchp = (EventRec far*) work.bufs;
"
My eyes! My eyes! That was COMPACT model code, so 64K of code and 1MB
of data: code addresses were 16-bit offsets from the CS reg, and data
was far, i.e. 32 bits of segment and offset used with DS or ES. And of
course you had to be extra careful with any pointer arithmetic, as a
far pointer wrapped after 64K. You had to use slower HUGE pointers to
get automatic normalisation. God it was shit.
On 25.11.24 18:33, mm0fmf wrote:
> My eyes! My eyes! That was COMPACT model code, so 64k of code and 1MB of
> data, code addresses were 16bit offsets to the CS reg and data was far
> so 32 bits of segment and offset of DS or ES. And of course you had to
> be extra careful of any pointer arithmetic as a far pointer wrapped
> after 64k. You had to use slower HUGE pointers to get automatic
> normalisation. God it was shit.
And to consider that, at that time, processors like MC68000 or NS32016
were readily available.
On 2024-11-26, Josef Möllers <josef@invalid.invalid> wrote:
> On 25.11.24 18:33, mm0fmf wrote:
>> My eyes! My eyes! That was COMPACT model code, so 64k of code and 1MB of
>> data, code addresses were 16bit offsets to the CS reg and data was far
>> so 32 bits of segment and offset of DS or ES. And of course you had to
>> be extra careful of any pointer arithmetic as a far pointer wrapped
>> after 64k. You had to use slower HUGE pointers to get automatic
>> normalisation. God it was shit.
> And to consider that, at that time, processors like MC68000 or NS32016
> were readily available.
Which proves once again that a shitty design beats a good one
if it's released first.
Everybody was yapping about the 640K barrier. I was more concerned
with the 64K barrier. I remember manually normalizing pointers
everywhere, and if I wanted to work with large arrays of structures
I'd copy individual structures to a work area byte by byte so I
didn't get bitten by segment wrap-around in the middle of a structure.
As the joke goes, aren't you glad the iAPX432 died out?
Otherwise a truly horrible Intel architecture might have
taken over the world.
On 26/11/2024 17:37, Josef Möllers wrote:
> On 25.11.24 18:33, mm0fmf wrote:
>> My eyes! My eyes! That was COMPACT model code, so 64k of code and 1MB
>> of data, code addresses were 16bit offsets to the CS reg and data was
>> far so 32 bits of segment and offset of DS or ES. And of course you
>> had to be extra careful of any pointer arithmetic as a far pointer
>> wrapped after 64k. You had to use slower HUGE pointers to get
>> automatic normalisation. God it was shit.
> And to consider that, at that time, processors like MC68000 or NS32016
> were readily available.
Backwards compatibility.
DOS came from 8080-based CP/M, to run on an 8086, so that 8-bit code
could be easily ported.
And so we were stuck with that architecture.
On Tue, 26 Nov 2024 18:37:02 +0100, Josef Möllers
<josef@invalid.invalid> wrote:
> And to consider that, at that time, processors like MC68000 or NS32016
> were readily available.
At the time when the design decision was made, the Motorola 68000 was
not ready for production.
From https://en.wikipedia.org/wiki/IBM_Personal_Computer :
"The 68000 was considered the best choice,[19] but was not
production-ready like the others."
Robert Roland wrote:
> On Tue, 26 Nov 2024 18:37:02 +0100, Josef Möllers
> <josef@invalid.invalid> wrote:
>> And to consider that, at that time, processors like MC68000 or NS32016
>> were readily available.
> At the time when the design decision was made, the Motorola 68000 was
> not ready for production.
> From https://en.wikipedia.org/wiki/IBM_Personal_Computer :
> "The 68000 was considered the best choice,[19] but was not
> production-ready like the others."
I also remember a zilog Z8000?
The Natural Philosopher <tnp@invalid.invalid> writes:
> I also remember a zilog Z8000?
Yes, although also with a segmented memory model.
Intel put the "backward" in "backward compatible".
On Thu, 28 Nov 2024 19:42:18 GMT, Charlie Gibbs wrote:
> Intel put the "backward" in "backward compatible".
I recall the term “backward combatible” used to describe the feelings
of violence some people had towards the requirement for backward
compatibility with certain kinds of brain death ...
Qwerty keyboards being a prime example.
On Sun, 01 Dec 2024 15:11:05 +0000, Richard Kettlewell wrote:
> The Natural Philosopher <tnp@invalid.invalid> writes:
>> I also remember a zilog Z8000?
> Yes, although also with a segmented memory model.
Its segmentation scheme made Intel x86 look good.
On 18/12/2024 06:22, Lawrence D'Oliveiro wrote:
> On Sun, 01 Dec 2024 15:11:05 +0000, Richard Kettlewell wrote:
>> The Natural Philosopher <tnp@invalid.invalid> writes:
>>> I also remember a zilog Z8000?
>> Yes, although also with a segmented memory model.
> Its segmentation scheme made Intel x86 look good.
Not that unusual. Compare to some of the Microchip PICs. Some have
really bizarre bank switching arrangements and so on.
On Mon, 23 Dec 2024 03:26:11 +0000, Brian Gregory wrote:
> On 18/12/2024 06:22, Lawrence D'Oliveiro wrote:
>> On Sun, 01 Dec 2024 15:11:05 +0000, Richard Kettlewell wrote:
>>> The Natural Philosopher <tnp@invalid.invalid> writes:
>>>> I also remember a zilog Z8000?
>>> Yes, although also with a segmented memory model.
>> Its segmentation scheme made Intel x86 look good.
> Not that unusual. Compare to some of the Microchip PICs. Some have
> really bizarre bank switching arrangements and so on.
I think the Apple II RAM expansion card worked by switching to a
different bank (48K each?) every time a particular control register
byte was written. You couldn’t just write a bank number: instead, you
had to repeat the write N number of times, and I guess remember where
you started from, to get to the right bank.
But this was because the CPU itself only supported 16-bit addressing.
What was Zilog’s excuse?
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> On Mon, 23 Dec 2024 03:26:11 +0000, Brian Gregory wrote:
>> On 18/12/2024 06:22, Lawrence D'Oliveiro wrote:
>>> On Sun, 01 Dec 2024 15:11:05 +0000, Richard Kettlewell wrote:
>>>> The Natural Philosopher <tnp@invalid.invalid> writes:
>>>>> I also remember a zilog Z8000?
>>>> Yes, although also with a segmented memory model.
>>> Its segmentation scheme made Intel x86 look good.
>> Not that unusual. Compare to some of the Microchip PICs. Some have
>> really bizarre bank switching arrangements and so on.
> I think the Apple II RAM expansion card worked by switching to a
> different bank (48K each?) every time a particular control register
> byte was written. You couldn’t just write a bank number: instead, you
> had to repeat the write N number of times, and I guess remember where
> you started from, to get to the right bank.
> But this was because the CPU itself only supported 16-bit addressing.
> What was Zilog’s excuse?
Apple sold three memory expansion cards for the (8-bit) Apple IIs: the
16KB Language Card, which allowed bank-switching RAM in place of the
built-in BASIC ROM; the 64KB Memory Expansion Card for the Apple IIe,
which allowed bank switching a second 64KB of RAM over the built-in
64KB (both banks were further bank-switched the same way as a 48KB
Apple II equipped with a Language Card); and finally the
“slinky”-style 256KB - 1MB card, which was not bank-switched but
supported sequential reads or writes through an autoincremented
register set up by the programmer (used primarily as a RAM disk).
Many other manufacturers offered cards of various capacities emulating
the architecture of each of Apple’s cards.
I know of no expansion card that required multiple control-byte
accesses to select a particular bank. Instead, the bank value was
stored in a control register, but this was only for third-party cards
with more than one additional bank. Since Apple never shipped such a
card, different manufacturers did not all choose the same control-byte
address, nor did they interpret the control value the same way.
Apple set the standard and a large number of applications used it. The
third-party extensions were supported by a smaller group of
applications, sometimes by design and sometimes by patches to
applications.
BTW, banks were switched in selectively for reading or writing, so
copying data from one bank to another or executing code that wrote to
another bank was quite easy.