Sunday, January 24

Forget the Cloud, the revolution may already be on your desktop

For several years now, we have been told that cloud computing and cloud gaming will revolutionize the way we use our machines. Yet the real revolution may well be driven by an architecture we never expected to see again on our desktop machines…

For more than 20 years, the tech world, usually so quick to evolve and to question its dogmas, has left untouched an element that is nevertheless central to our digital lives. An element both omnipresent and invisible, used every day by millions of people around the world without their necessarily being aware of it. That element is the microprocessor's instruction set. [OMINOUS MUSIC]

A LITTLE CONTEXT

To summarize very briefly, we will say that an architecture, or instruction set, is what allows the microprocessor (CPU) to do its work. (That's brief, eh? I warned you.)

OK, REALLY, A LITTLE CONTEXT

An instruction set is the collection of instructions that tell the CPU how to do its CPU job, namely to compute, store, and manage its various registers. Without an instruction set, a CPU is just a cluster of inert transistors. A high-precision cluster, of course, but a cluster all the same. The most famous instruction set these days is the x86.
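To make the idea concrete, here is a toy sketch (in Python, and matching no real CPU): the "instruction set" is simply the vocabulary of operations the machine understands, and a program is a sequence of words from that vocabulary. The three instructions below are entirely made up for illustration.

```python
# A hypothetical 3-instruction machine with two registers, A and B.
def run(program):
    regs = {"A": 0, "B": 0}
    for op, *args in program:
        if op == "LOAD":      # LOAD reg, value -> put a constant into a register
            regs[args[0]] = args[1]
        elif op == "ADD":     # ADD dst, src    -> dst = dst + src
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":    # stop execution
            break
    return regs

# Compute 2 + 3 with this made-up instruction set.
result = run([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("HALT",)])
print(result["A"])  # → 5
```

Swap out the vocabulary and you have a different architecture: that, in miniature, is what distinguishes an x86 chip from a Z80 or an ARM core.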

Without going back to the prehistory of computing, let us note that in the beginning many instruction sets roamed free, each tied to one of the various processors on the market. We can mention, for example, the Zilog Z80 used in the ZX Spectrum, MSX and Amstrad CPC computers.

The Amstrad CPC 464 powered by a Zilog Z80 clocked at 4 MHz


We can also cite Motorola's famous 68000, alias "68k", which powered the heyday of the first Macintosh, all the Amigas, the Atari ST, Palm PDAs and, on the video game side, SEGA's Megadrive and SNK's Neo-Geo.

SEGA's Megadrive, Motorola 68k ambassador in many homes


The x86 dates from this era. Originally a 16-bit instruction set, it has been with us since the Intel 8086 and evolved along with each generation of the brand's CPUs (Intel 80386, 80486, etc., hence the nickname "x86").

Intel's 8086 which would launch the x86 architecture


The x86 evolved, moved to 32 and then 64 bits (the x64 variant), but remained fundamentally the same: an architecture with a complex instruction set (known as "CISC", for "Complex Instruction Set Computer"), whereas shortly afterwards came architectures with a reduced instruction set ("RISC", for "Reduced Instruction Set Computer").
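The difference between the two philosophies can be caricatured in a few lines. The sketch below is deliberately simplified, matches no real ISA, and uses a Python dict as stand-in "RAM": the CISC style exposes one rich memory-to-memory instruction, while the RISC style only allows simple register operations, touching memory through explicit loads and stores.

```python
mem = {"x": 2, "y": 3, "z": 0}  # toy "RAM"

# CISC style: one rich instruction does memory -> memory in a single step
# (on real chips, decoded internally into microcode).
def cisc_add(dst, a, b):
    mem[dst] = mem[a] + mem[b]

# RISC style: only simple register operations; memory is reached solely
# through explicit loads and stores.
def risc_add(dst, a, b):
    r1 = mem[a]        # LOAD  r1, a
    r2 = mem[b]        # LOAD  r2, b
    r3 = r1 + r2       # ADD   r3, r1, r2
    mem[dst] = r3      # STORE r3, dst

cisc_add("z", "x", "y")
assert mem["z"] == 5
risc_add("z", "x", "y")
assert mem["z"] == 5   # same result, more (but simpler) steps
```

Same result either way; the debate is about which side of that trade-off (rich instructions vs. simple, fast-to-decode ones) the silicon should take.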

This CISC-versus-RISC opposition has, like the legendary battles of Betamax against VHS or HD-DVD against Blu-ray, shaped our habits and computing as we practice it today.

THE LOVE OF RISC

RISC vs CISC, a 40-year-old opposition


In the 1980s, then, RISC made its appearance. Since simplifying the instruction set was the goal, everything that existed before became, by definition, "complex" (CISC). Two types of processor architecture, two design philosophies that will not be detailed here; we will simply point out that where CISC CPUs began storing microcode in the CPU's ROM, RISC bet everything on the simplicity of instructions, their execution, and the use of RAM to store intermediate results. In an era when ROM cost next to nothing, RAM cost a kidney, and production volumes were lower, it is easy to guess which architecture the market preferred to stick with.

Especially since, once marketing got involved, the waters were muddied further, notably with the idea, floated at the presentation of Intel's 386, that the CPU actually contained RISC inside while keeping x86 compatibility, to get the best of both worlds. It was false, but no matter: in tech, marketing often counts for far more than rigor. Ultimately, RISC on desktop machines was only adopted by Apple, in the form of the PowerPC chips developed with IBM and Motorola and used between 1994 and 2006.

Power Mac G5, last use of PowerPC at Apple


Fast forward to 2020: CISC has reigned supreme over personal computers since Apple abandoned PowerPC in 2006, but all is not rosy for Intel. The Santa Clara chipmaker has remained stuck on 14 nm CPUs for years, AMD has repositioned itself as a serious competitor with its Zen architecture, and Apple has taken the plunge by deciding to produce its own chips.

BUT THEN… IS IT A REVOLUTION?

Apple is therefore returning to a RISC-type architecture by adopting the M1, an ARM ("Advanced RISC Machines") SoC of its own design. Unlike Intel, ARM manufactures nothing: the company designs architectures and leaves it to others to build their own chips. In the case of the M1, it is thus an Apple design based on the ARMv8-A architecture, manufactured by the foundry TSMC on a 5 nm process.

The machines using this chip are also the first in the world to use such a fine process (AMD's Ryzen 5000 chips are made at 7 nm, Intel's 10th-generation Core CPUs at 10 nm).

With the M1, the industry expected a transitional chip, but, as Julien Cadot at Numerama points out:

Our expectations were low and Apple blew them up.

Quite simply. Without getting lost in benchmarks and comparison tables: the M1 demolishes the best Core i9 in the same machine (MacBook Pro) and is only overtaken in multicore by a Mac Pro sold for at least three times the price. But rather than the technical side, let's focus on what this implies for everyday use. The M1 lets us envision a new era of mobile computing, with machines that are certainly powerful, but above all offer battery life and silent operation similar to what we are used to on our tablets and smartphones.

17 hours of battery life in mixed use, in the silence of a cathedral: what if that is the future?

A FOOT IN THE DOOR?

Because now that Apple has shown what is possible by going back to RISC, it's a safe bet that this will give ideas to other players in the market for our good old Windows PCs, long accustomed to seeing nothing but x86. There are plenty of other ARM-compatible OSes, whether we are talking about Linux or its derivatives (on Odroid, Arduino or Raspberry Pi), but in most cases these OSes run on machines that address very specific needs and rarely rival a more versatile desktop machine.

Microsoft has been circling the subject for a few years, with Windows RT becoming Windows 10 on ARM, but had never pushed the experiment further. That has changed in recent months, between the fairly recent announcement of x64 application emulation (Windows 10 on ARM was until now limited to 32-bit applications) and Microsoft's very recent announcement that it wants to produce its own ARM chips for its Azure servers and Surface devices: everything seems aligned for 2021 to be an important year for the RISC architecture.

In addition, there is more and more talk of the RISC-V architecture (pronounced "RISC Five") which, to summarize again, emphasizes a simpler central processor in favor of dedicated co-processors.

RISC-V chip prototype


And finally, more than the decades-old RISC-versus-CISC opposition, what we are witnessing above all is a reversal in how to respond to the slowdown of Moore's Law. Moore's original law stated that the number of transistors on a chip doubles every year. It has been remixed over time, and nowadays we speak of a doubling of performance every 18 months. What is being questioned now is less that injunction than how to get there.
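A quick back-of-the-envelope calculation shows what is at stake in that remixing. Taking the two doubling periods mentioned above at face value:

```python
# Compound growth over 10 years (120 months) under the two readings
# of Moore's Law quoted above.
months = 120
every_18 = 2 ** (months / 18)  # doubling every 18 months
every_12 = 2 ** (months / 12)  # doubling every 12 months

print(round(every_18))  # → 102  (roughly a 100x gain)
print(round(every_12))  # → 1024 (a 1000x gain)
```

A ten-fold gap over a single decade: that is why the industry cares so much about *how* the doubling is achieved once shrinking transistors alone no longer delivers it.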

Since we have trouble going faster, we might as well do everything in parallel ¯\_(ツ)_/¯

Questioning the almighty multi-core CPU in favor of a simpler central processor plus dedicated co-processors means replacing raw power with speed, in a field where we have always been told that power IS speed. And for those who remember (or who were even born then), it looks a lot like what Nintendo decided to do with its Super Nintendo, which backed the brave Ricoh 5A22 with dedicated co-processors for sound and video (ah, the famous Mode 7!).
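The "split the work instead of going faster" idea can be sketched in a few lines. This is only a toy analogy in Python (real co-processors are dedicated silicon, not threads), but it shows the principle: the job is cut into chunks handled side by side, and the combined answer matches what one worker would have produced alone.

```python
from concurrent.futures import ThreadPoolExecutor

# One simple task: checksum a block of data.
def checksum(chunk):
    return sum(chunk)

data = list(range(1000))
# Cut the work into four chunks instead of speeding up a single worker.
chunks = [data[i:i + 250] for i in range(0, 1000, 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_total = sum(pool.map(checksum, chunks))

assert parallel_total == sum(data)  # same answer as the serial version
print(parallel_total)  # → 499500
```

Whether the workers are SNES co-processors, GPU shader units or the M1's dedicated engines, the bet is the same: many specialized hands beat one very fast pair.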

Super Nintendo's Ricoh 5A22: the future on the move!


THE LAST NAIL IN THE CLOUD'S COFFIN

And these moves caught by surprise a tech world that lately swears by cloud gaming and cloud computing as the only valid vectors of innovation. We talk about the uberization of our machines and the netflixization (yes, that's a word now) of content, in a perpetual headlong rush, even though the potential of our machines is barely exploited. Shadow is only the shadow (get it?) of what the service should have been, and on the cloud gaming side we will soon count more offerings than users.

And I am not even touching on the client-side infrastructure, with a very-high-speed broadband rollout that seems to be taking its time, but an eternal optimist like Manu will tell you:

In 6 months everyone will be playing on Amazon Luna from their 5G connection! MARK MY WORDS!

Still, a good point for Stadia and GeForce Now which, like vultures, managed to take advantage of the Cyberpunk 2077 industrial accident to push their respective offers to disappointed gamers.

In short, forget the cloud, my good friends: the future is playing out right now, on the ground. Because as Creative Gamesdone sings so wonderfully:

You don’t have to look up, sometimes the rainbows are on the ground.
