Quote:
Originally Posted by GTO: I can't for the life of me figure out how or why Intel missed out on the smartphone revolution (just like their longtime partner Microsoft). Can any BHPian explain why in layman's terms?
For a very simple reason: Intel's chip architecture was never thermally efficient. In mobile (or handheld) devices, heat is not just wasted energy; it also means poor battery life and a poor user experience.
ARM-based chips had a lower thermal envelope and were thus more energy efficient.
Mobile devices run off a battery that supplies only about 3 V (early batteries were 2.7 V).
Intel did not want to work on low-power, high-efficiency devices. From first-hand experience I can say that in the handset world, even 1 microampere of leakage current sets alarm bells ringing. Intel was focused on devices that could carry 55 Wh batteries, not some puny 3 Wh pack.
Look at it from a product-positioning perspective: why would a company chase a significantly lower price segment? A handset CPU is priced at a fraction of what a desktop or laptop CPU fetches. For what it costs to buy a laptop-class (15-18 W) 8th-gen i5 CPU, you can buy multiple complete handsets (not just the CPU). To make the same amount of profit, Intel would probably have needed ten times the volume.
What Intel missed completely is the GPU game. They were caught napping when cryptocurrency and AI drove demand for parallel compute. Intel (or any other CPU maker) could not match GPUs. FPGAs are even better at crypto, but that is a story and subject for another day.
CPUs have for the most part lived with low levels of parallelism; the GPU brings parallelism at an entirely different scale.
Compare: the cheapest of Nvidia's GT-150Mx series (found abundantly in laptops) has 142 cores and 2 GB of onboard memory, with each core clocking up to 1.8 GHz.
The most expensive server-class CPUs have fewer than 64 cores, and in the consumer class we are mostly talking fewer than 6 cores.
AI workloads really do benefit from parallelism. Sample this: what takes 4 hours on a GTX 1660 Ti takes about 3 days on a 10th-gen i7. I run this kind of workload myself.
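To make the CPU-vs-GPU gap concrete, here is a minimal sketch, assuming PyTorch is installed and a CUDA-capable Nvidia GPU is present. It only times one large matrix multiplication on each device (the matrix size is arbitrary), not the full training workload quoted above:

```python
import time
import torch

N = 4096
a = torch.randn(N, N)
b = torch.randn(N, N)

# Time the matrix multiply on the CPU.
start = time.perf_counter()
c_cpu = a @ b
cpu_time = time.perf_counter() - start
print(f"CPU matmul: {cpu_time:.3f} s")

if torch.cuda.is_available():
    # Report how many streaming multiprocessors (each holding many CUDA cores)
    # the GPU exposes, to contrast with the handful of CPU cores.
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.multi_processor_count} SMs, "
          f"{props.total_memory / 2**30:.1f} GiB memory")

    a_gpu, b_gpu = a.cuda(), b.cuda()
    _ = a_gpu @ b_gpu            # warm-up run to exclude one-time CUDA init cost
    torch.cuda.synchronize()     # GPU work is asynchronous; wait before timing

    start = time.perf_counter()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()
    gpu_time = time.perf_counter() - start
    print(f"GPU matmul: {gpu_time:.3f} s (~{cpu_time / gpu_time:.0f}x faster)")
```

Even a modest laptop GPU typically finishes the multiply an order of magnitude or more faster than the CPU; that is the same effect that turns a multi-day CPU training run into a few hours on a GPU.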
I am hearing AMD is attacking this game by partnering with M$ to put some clever GPU-sharing technology into the Azure cloud and thus bring GPU access costs down. So some challenge is being thrown at Nvidia, and Radeon graphics units may perform satisfactorily. Exciting times ahead.
Google is attempting to bring its own processing chip for AI workloads, called the TPU. But the ease of use and sheer pervasiveness of Nvidia is a big challenge, and it does not help that one cannot buy a TPU as of now.