"David Taylor" <david-taylor@blueyonder.co.uk.invalid> wrote
| This is all just for fun - no serious use intended, but perhaps to see whether
| any x86 programs might work.
|
It's an ARM CPU. It requires ARM Windows, which is
probably a simple kiosk system for running tablet trinket
apps. There would be no way to adapt Win32 to that
without a whole set of system files to intercept calls,
with the Win32 software in some kind of sandbox.
Something like WINE. But even WINE runs on the
same CPU: it's just translating function calls to Linux
libraries, not translating CPU instructions.
It needs something like Apple's Rosetta
which they used when they went from PPC -> Intel
or Rosetta 2 when they went from Intel -> ARM
Using their method 1 (no PC) I get as far as Windows-10 starting to
install on a RasPi-400, and then complaining it can't find its
installation files. The 128 GB USB stick I created mounts on a PC as
two drives, with the first drive looking like an RPi boot, and the
second drive having a large .WIM (Windows image) file, so I'm wondering
whether the second partition isn't being seen?
On Fri, 20 Oct 2023 16:53:05 +0200
Björn Lundin <bnl@nowhere.com> wrote:
It needs something like Apple's Rosetta
which they used when they went from PPC -> Intel
or Rosetta 2 when they went from Intel -> ARM
Microsoft have something like that too - for windows-11
On 20/10/2023 16:24, Ahem A Rivet's Shot wrote:
On Fri, 20 Oct 2023 16:53:05 +0200
Björn Lundin <bnl@nowhere.com> wrote:
It needs something like Apple's Rosetta
which they used when they went from PPC -> Intel
or Rosetta 2 when they went from Intel -> ARM
Microsoft have something like that too - for windows-11
It ought to be simple to write CISC microcode for an ARM to emulate at
least an 8086. But how fast it would run is another matter.
On 20 Oct 2023 at 17:01:31 BST, "The Natural Philosopher" <tnp@invalid.invalid> wrote:
On 20/10/2023 16:24, Ahem A Rivet's Shot wrote:
On Fri, 20 Oct 2023 16:53:05 +0200
Björn Lundin <bnl@nowhere.com> wrote:
It needs something like Apple's Rosetta
which they used when they went from PPC -> Intel
or Rosetta 2 when they went from Intel -> ARM
Microsoft have something like that too - for windows-11
It ought to be simple to write CISC microcode for an ARM to emulate at
least an 8086. But how fast it would run is another matter.
You have to have duplicates of all the libraries and frameworks, too. All of
which need testing when you make a new OS version. Which is why Apple doesn't
keep Rosetta around for ever, especially as you also need duplicates of all
the libraries and frameworks for 32-bit and 64-bit. It's prolly why Apple
dropped 32-bit support in going from Mojave to Catalina (which I'm now on), as
they knew they had the transition to ARM coming up. Not that the loss of
32-bit app support affected me, as the only remaining one I had was the Usenet
client I was running. But then lo and behold, up pops someone and writes a new
macOS Usenet client from the ground up. No need even to mess with Thunderbird.
TimS <tim@streater.me.uk> wrote:
On 20 Oct 2023 at 17:01:31 BST, "The Natural Philosopher"
<tnp@invalid.invalid> wrote:
On 20/10/2023 16:24, Ahem A Rivet's Shot wrote:
On Fri, 20 Oct 2023 16:53:05 +0200
Björn Lundin <bnl@nowhere.com> wrote:
It needs something like Apple's Rosetta
which they used when they went from PPC -> Intel
or Rosetta 2 when they went from Intel -> ARM
Microsoft have something like that too - for windows-11
It ought to be simple to write CISC microcode for an ARM to emulate at
least an 8086. But how fast it would run is another matter.
Arms don't use microcode. However you can write an emulator in Arm instructions, which Microsoft have done.
You have to have duplicates of all the libraries and frameworks, too. All of
which need testing when you make a new OS version. Which is why Apple doesn't
keep Rosetta around for ever, especially as you also need duplicates of all
the libraries and frameworks for 32-bit and 64-bit. It's prolly why Apple
dropped 32-bit support in going from Mojave to Catalina (which I'm now on), as
they knew they had the transition to ARM coming up. Not that the loss of
32-bit app support affected me, as the only remaining one I had was the Usenet
client I was running. But then lo and behold, up pops someone and writes a new
macOS Usenet client from the ground up. No need even to mess with Thunderbird.
Well then, it's handy that Microsoft have done all that, and you can buy laptops
and desktops with Windows on Arm, including x86 emulation, right now.
https://www.notebookcheck.net/Lenovo-ThinkPad-X13s-G1-Laptop-review-Introducing-the-Qualcomm-Snapdragon-8cx-Gen-3.665008.0.html
https://www.notebookcheck.net/Microsoft-Surface-Pro-9-ARM-review-The-high-end-ARM-convertible-disappoints.699137.0.html
https://www.windowscentral.com/software-apps/windows-11/i-tried-using-the-windows-dev-kit-2023-as-my-primary-pc-heres-why-you-shouldnt
(although there are a few rough edges to the experience at present, as the above attest)
Theo
Anyone used this method to install Win-10/ARM successfully on a
Raspberry Pi?
https://www.tomshardware.com/how-to/install-windows-11-raspberry-pi
Using their method 1 (no PC) I get as far as Windows-10 starting to
install on a RasPi-400, and then complaining it can't find its
installation files. The 128 GB USB stick I created mounts on a PC as
two drives, with the first drive looking like an RPi boot, and the
second drive having a large .WIM (Windows image) file, so I'm wondering whether the second partition isn't being seen?
This is all just for fun - no serious use intended, but perhaps to see whether any x86 programs might work.
On 10/19/23 11:37 PM, David Taylor wrote:
Anyone used this method to install Win-10/ARM successfully on a
Raspberry Pi?
https://www.tomshardware.com/how-to/install-windows-11-raspberry-pi
Using their method 1 (no PC) I get as far as Windows-10 starting to
install on a RasPi-400, and then complaining it can't find its
installation files. The 128 GB USB stick I created mounts on a PC as
two drives, with the first drive looking like an RPi boot, and the
second drive having a large .WIM (Windows image) file, so I'm wondering whether the second partition isn't being seen?
First off ... Win10/11 are *huge* ... you'd likely
need a large-ish external drive. Samsung sells some
good USB3 external SSD drives and I've used 'em with Pi.
BEST chance ... VirtualBox or KVM ..... you are NOT
gonna run Win even semi-directly on a Pi/ARM. Oh, DO
buy the Pi4 with the BIG memory. By the time you're
done it'd be easier to just find a used i5 board
and run Win on that.
On 20/10/2023 17:53, TimS wrote:
On 20 Oct 2023 at 17:01:31 BST, "The Natural Philosopher"
<tnp@invalid.invalid> wrote:
On 20/10/2023 16:24, Ahem A Rivet's Shot wrote:
On Fri, 20 Oct 2023 16:53:05 +0200
Björn Lundin <bnl@nowhere.com> wrote:
It needs something like Apple's Rosetta
which they used when they went from PPC -> Intel
or Rosetta 2 when they went from Intel -> ARM
Microsoft have something like that too - for windows-11
It ought to be simple to write CISC microcode for an ARM to emulate at
least an 8086. But how fast it would run is another matter.
You have to have duplicates of all the libraries and frameworks, too. All of
which need testing when you make a new OS version. Which is why Apple doesn't
keep Rosetta around for ever, especially as you also need duplicates of all
the libraries and frameworks for 32-bit and 64-bit. It's prolly why Apple
dropped 32-bit support in going from Mojave to Catalina (which I'm now on), as
they knew they had the transition to ARM coming up. Not that the loss of
32-bit app support affected me, as the only remaining one I had was the Usenet
client I was running. But then lo and behold, up pops someone and writes a new
macOS Usenet client from the ground up. No need even to mess with Thunderbird.
I think the point of a full processor emulation is that all the *86
code 'just runs' on it.
But that doesn't mean it will run *fast*.
The Natural Philosopher <tnp@invalid.invalid> wrote:
On 20/10/2023 17:53, TimS wrote:
On 20 Oct 2023 at 17:01:31 BST, "The Natural Philosopher"
<tnp@invalid.invalid> wrote:
On 20/10/2023 16:24, Ahem A Rivet's Shot wrote:
On Fri, 20 Oct 2023 16:53:05 +0200
Björn Lundin <bnl@nowhere.com> wrote:
It needs something like Apple's Rosetta
which they used when they went from PPC -> Intel
or Rosetta 2 when they went from Intel -> ARM
Microsoft have something like that too - for windows-11
It ought to be simple to write CISC microcode for an ARM to emulate at
least an 8086. But how fast it would run is another matter.
You have to have duplicates of all the libraries and frameworks, too. All of
which need testing when you make a new OS version. Which is why Apple doesn't
keep Rosetta around for ever, especially as you also need duplicates of all
the libraries and frameworks for 32-bit and 64-bit. It's prolly why Apple
dropped 32-bit support in going from Mojave to Catalina (which I'm now on), as
they knew they had the transition to ARM coming up. Not that the loss of
32-bit app support affected me, as the only remaining one I had was the Usenet
client I was running. But then lo and behold, up pops someone and writes a new
macOS Usenet client from the ground up. No need even to mess with Thunderbird.
I think the point of a full processor emulation is that all the *86
code 'just runs' on it.
But that doesn't mean it will run *fast*.
The preferred high-performance emulation strategy is object code
translation, usually managed dynamically, like “throw-away compilation”.
When applied to loops and other frequently recurring code, performance can
approach native host performance.
On 02/11/2023 16:16, Michael J. Mahon wrote:
The Natural Philosopher <tnp@invalid.invalid> wrote:
On 20/10/2023 17:53, TimS wrote:
On 20 Oct 2023 at 17:01:31 BST, "The Natural Philosopher"
<tnp@invalid.invalid> wrote:
On 20/10/2023 16:24, Ahem A Rivet's Shot wrote:
On Fri, 20 Oct 2023 16:53:05 +0200
Björn Lundin <bnl@nowhere.com> wrote:
It needs something like Apple's Rosetta
which they used when they went from PPC -> Intel
or Rosetta 2 when they went from Intel -> ARM
Microsoft have something like that too - for windows-11
It ought to be simple to write CISC microcode for an ARM to emulate at
least an 8086. But how fast it would run is another matter.
You have to have duplicates of all the libraries and frameworks, too. All of
which need testing when you make a new OS version. Which is why Apple doesn't
keep Rosetta around for ever, especially as you also need duplicates of all
the libraries and frameworks for 32-bit and 64-bit. It's prolly why Apple
dropped 32-bit support in going from Mojave to Catalina (which I'm now on), as
they knew they had the transition to ARM coming up. Not that the loss of
32-bit app support affected me, as the only remaining one I had was the Usenet
client I was running. But then lo and behold, up pops someone and writes a new
macOS Usenet client from the ground up. No need even to mess with Thunderbird.
I think the point of a full processor emulation is that all the *86
code 'just runs' on it.
But that doesn't mean it will run *fast*.
The preferred high-performance emulation strategy is object code
translation, usually managed dynamically, like “throw-away compilation”.
When applied to loops and other frequently recurring code, performance can
approach native host performance.
Well possibly. What you are essentially describing is external microcode
to turn a RISC core into a CISC machine.
But a quad core i7 or i9 has massive pipelining as well.
And that needs to be on chip.
On 03/11/2023 06:58, Michael J. Mahon wrote:
By “native host performance” I meant that the code performance can approach
the native performance of the code compiled for the RISC host. In fact, it
can exceed this performance, since the object code translation is done
dynamically in the presence of actual data, a benefit unavailable to source
code compilers.
And RISC architectures are quite amenable to aggressive pipelining and
multiple instruction dispatch—something that gets quite hairy with the x86
architecture (but of course Intel is not afraid of hairiness ;-).
One of the nice features of 32 bit ARM, at least as far as human
programmers of earlier ARMs were concerned, was conditional instructions,
which eliminated the need for pipeline-stalling branches around small
sections of code.
However, on more sophisticated processors this prevents the branch
predictor from eliminating the instructions entirely from the pipeline
(the vast majority of the time); instead they evaluate to NOPs taking up
execution slots. It also makes the go-faster stripes of superscalar and
out-of-order execution more tricky and less efficient.
With a code translator the conditional 32 bit instructions can be turned
into 64 bit code sections surrounded by small branches, making it look
like horrible spaghetti code to humans (if anyone still eyeballs
AArch64), but far better suited to modern CPUs.
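The textbook illustration of those conditional instructions is a branch-free two-way max. The assembly in the comments below is illustrative hand-written code, not any particular compiler's output; the C function is what both sequences compute.

```c
/* Illustrative hand-written ARMv7 code for max(a, b), branch-free thanks
   to conditional execution:
       CMP   r0, r1      @ compare a with b, set flags
       MOVGE r2, r0      @ if a >= b, result = a
       MOVLT r2, r1      @ if a <  b, result = b
   On AArch64 the general conditional forms are gone; a compiler would
   typically emit a compare-and-branch, or a conditional select:
       CMP  w0, w1
       CSEL w0, w0, w1, GE
   Branch-free code keeps a simple in-order pipeline full, but on a wide
   out-of-order core a well-predicted branch can be cheaper -- the
   trade-off described above. */
static int max_int(int a, int b)
{
    return (a >= b) ? a : b;   /* what both instruction sequences compute */
}
```
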
---druck
On 24/10/2023 03:08, 56d.1152 wrote:
First off ... Win10/11 are *huge* ... you'd likely need a large-ish
external drive. Samsung sells some good USB3 external SSD drives and
I've used 'em with Pi.
BEST chance ... VirtualBox or KVM ..... you are NOT gonna run Win even
semi-directly on a Pi/ARM. Oh, DO buy the Pi4 with the BIG memory. By
the time you're done it'd be easier to just find a used i5 board and
run Win on that.
Thanks for the suggestions.
Actually Win-11 fits on a 32 GB SD, but not much room for big programs!
A 1TB SanDisk external SSD is noticeably faster, though. Transfer speed
is then limited by the RPi-400 USB-3 port.
Yes, I have Win-11/64/ARM running /directly/ on an RPi-400. It runs
some x86 programs too. There's no support for the RPi built-in Wi-Fi,
so it's either an external adapter (I used a Wi-Fi-to-Ethernet bridge,
but a slow one!) or a direct LAN connection.
PIs were meant to fit a *niche* between microcontrollers
and "real PCs" ... and IMHO they oughtta stay there. It's
a valuable niche. Anything much more than a Pi4 or Pi5
and you may as well just buy a low-end mini PC or use
an old Win laptop. There are 'NUC's these days, and
some newer "Bee"-somethings, that are reasonably small
and 'inexpensive'. Could probably run Linux on some,
but the Pi and Linux go together much more logically.
On 07/11/2023 06:35, 56d.1152 wrote:
PIs were meant to fit a *niche* between microcontrollers
and "real PCs" ... and IMHO they oughtta stay there. It's
a valuable niche. Anything much more than a Pi4 or Pi5
and you may as well just buy a low-end mini PC or use
and old Win laptop. There are 'NUC's these days, and
some newer "Bee"-somethings, that are reasonably small
and 'inexpensive'. Could probably run Linux on some,
but the Pi and Linux go together much more logically.
NUCs cost vastly more.
Why should Raspberry Pi not produce more powerful machines? They aren't
abandoning the lower-end machines: the Pico, Zero 2W, 3A+, and 4B 1/2/4/8GB
are still being sold, so along with the Pi 5 there is a range of
machines covering a vast number of use cases and budgets.
On 2023-11-07, 56d.1152 <56d.1152@ztq9.net> wrote:
PIs were meant to fit a *niche* between microcontrollers
and "real PCs" ... and IMHO they oughtta stay there. It's
a valuable niche. Anything much more than a Pi4 or Pi5
and you may as well just buy a low-end mini PC or use
and old Win laptop. There are 'NUC's these days, and
NUCs aren't that cheap. But Linux works fine on them (my main desktop is
a NUC).
When a 16G Pi5 comes out, I'd be interested in what its cost would be
with an NVME adapter. But suspect that'll be well into next year. I
reckon from what I've read that the performance would be pretty close to
my 16G I5 NUC with NVME SSD - and I suspect it would be a lot cheaper.
And I suspect it would use less power.
The "credit-card" profile of the PIs has always been
a big plus. The (relatively) low power required has
also been a plus. The low(er) PRICE has always been
a plus. As I said, they fill a *niche* quite nicely.
But their AMBITIONS seem to aim towards leaving that
safe and useful niche ...... and it'll kill them.
On 11/7/23 4:26 PM, druck wrote:
On 07/11/2023 06:35, 56d.1152 wrote:
PIs were meant to fit a *niche* between microcontrollers
and "real PCs" ... and IMHO they oughtta stay there. It's
a valuable niche. Anything much more than a Pi4 or Pi5
and you may as well just buy a low-end mini PC or use
and old Win laptop. There are 'NUC's these days, and
some newer "Bee"-somethings, that are reasonably small
and 'inexpensive'. Could probably run Linux on some,
but the Pi and Linux go together much more logically.
NUCs cost vastly more.
But not :
https://www.amazon.com/Beelink-Desktop-Computer-Support-Ethernet/dp/B0BVLS7ZHP/ref=sr_1_3?keywords=beelink+pc&qid=1699414262&s=electronics&sr=1-3
Face it, Pi5 + 'SD' power supply + case & wires ... these
"bee" units are VERY competitive.
Why should Raspberry Pi not produce more powerful machines? They
aren't abandoning the lower end machines, the Pico, Zero 2W, 3A+, 4B
1/2/4/8GB are still being sold, so along with the Pi 5 there is a
range of machines covering a vast number of use cases and budgets.
Can't buy rPi 1s anymore. There are a few 2s still out
there, but for how long? Those ARE useful - retooled
one into good service last week. JUST strong enough.
The good bit is that they've managed to keep updated OSs
that'll still run on the old units - I know, I have a Pi-1b,
the kind with fewer GPIO pins, that's still been doing its
ONE simple thing for a LONG time. Still ran on basically the
original Raspbian. Recently updated to Bullseye - and it
all still worked. Should be good for another decade. Not
actively networked so 'security' is not a prob. Amazed the
SD card worked for basically a decade though ....
(now use Samsung 'Endurance' SD cards ... and I think
they've just come out with something that'll allegedly
survive two or three times as many cycles)
I love "long-term support" - be it hardware or software.
If I'm gonna DO something complicated I want it to LAST.
Built a bunch of embedded devices on 'Rabbits' and the
company *assured* they'd still make that model for a
decade. They lied. The "New and Better" had wiring SO
small mere humans couldn't DEAL. Now my previous embedded
device was based on an 8051 clone with a 'battery' of
some kind in the fat case to keep NV RAM alive. THEY
kept selling THOSE for a good decade ... even kept using
a rec I'd mailed them. Had a good 'BASIC' compiler too.
Amazing what you can do with 32kb if you have a good
tight compiler ...
On 08/11/2023 04:04, 56d.1152 wrote:
On 11/7/23 4:26 PM, druck wrote:
On 07/11/2023 06:35, 56d.1152 wrote:
PIs were meant to fit a *niche* between microcontrollers
and "real PCs" ... and IMHO they oughtta stay there. It's
a valuable niche. Anything much more than a Pi4 or Pi5
and you may as well just buy a low-end mini PC or use
and old Win laptop. There are 'NUC's these days, and
some newer "Bee"-somethings, that are reasonably small
and 'inexpensive'. Could probably run Linux on some,
but the Pi and Linux go together much more logically.
NUCs cost vastly more.
But not :
https://www.amazon.com/Beelink-Desktop-Computer-Support-Ethernet/dp/B0BVLS7ZHP/ref=sr_1_3?keywords=beelink+pc&qid=1699414262&s=electronics&sr=1-3
Face it, Pi5 + 'SD' power supply + case & wires ... these
"bee" units are VERY competitive.
Why should Raspberry Pi not produce more powerful machines? They
aren't abandoning the lower end machines, the Pico, Zero 2W, 3A+, 4B
1/2/4/8GB are still being sold, so along with the Pi 5 there is a
range of machines covering a vast number of use cases and budgets.
Can't buy rPi 1s anymore. There are a few 2s still out
there, but for how long ? Those ARE useful - retooled
one into good service last week. JUST strong enough.
I've just bought a B+, new, in box, for £10.00 off eBay, to test
Bookworm. Only problem was to get USB WiFi dongle working. Hey-ho!
Bookworm 'seems' faster than Bullseye or Buster.
On 08/11/2023 03:32, 56d.1152 wrote:
The "credit-card" profile of the PIs has always been
a big plus. The (relatively) low power required has
also been a plus. The low(er) PRICE has always been
a plus. As I said, they fill a *niche* quite nicely.
But their AMBITIONS seem to aim towards leaving that
safe and useful niche ...... and it'll kill them.
Not really. If they are competitive they will win. If not, they won't.
Stop thinking in ideological terms and look to why people buy what they
buy.
If someone brought out a Pi that cost the same as an Intel MoBo, and
was inferior in performance, no one would buy it.
If they brought out a Pico that cost more than an equivalent Arduino,
no one would buy it.