
Who cares where chips come from?

How such an innocuous word came to represent the power behind AI—and an unlikely driver of the world economy
February 25, 2026

While artificial intelligence may look like software, its real engine is hardware. Known as “the brain of AI”, chips are tiny slices of semiconductor material (usually silicon) carrying billions of microscopic transistors. Behind every AI model, chatbot and image generator sits a vast network of chips, making them the most valuable and contested resource in our global economy.

We are in the middle of a chip crisis. AI companies are buying up all the high-performance chips they can, causing a global shortage and a huge hike in prices. Chips are now so expensive and difficult to source that smaller companies are unable to compete.

Some governments view chip supply as so important, in fact, that they have begun tying semiconductor manufacturing to national security and trade policy. “Everyone who wants to build memory has two choices: they can pay a 100 per cent tariff, or they can build in America,” the US commerce secretary, Howard Lutnick, said recently. At around the same time, Taiwan, home to TSMC, the world’s biggest chip maker and a supplier to American companies such as Nvidia and Apple, promised to invest $250bn in US chipmaking in exchange for tariff exemptions on chip imports.

The word “chip” was first applied to semiconductors in 1961, at an aeronautical conference in Oslo focused on “microminiaturization”: the development of extremely small versions of electronic components, at the time used mainly in military vehicles. Discussing the use of micrologic elements on silicon slices, two researchers from Fairchild Semiconductor Corporation in Palo Alto (where the first silicon integrated circuit, later known as the “silicon chip”, had been conceived by the physicist Robert Noyce in 1959) noted: “The slices are diced into the small chips shown which are then mounted on TO5 or TO18 headers.”

Chips have evolved hugely since then, with AI workloads relying on several types: graphics processing units (GPUs), to train AI models because they can perform many calculations in parallel; application-specific integrated circuits (ASICs), purpose-built for particular AI tasks; field-programmable gate arrays (FPGAs), often called “chameleon chips”, because they can be reconfigured after manufacture for specific AI tasks; and neural processing units (NPUs), circuits designed to speed up computations on mobile devices, for use in real-time voice translation and face recognition.

Just as the AI revolution can’t happen without chips, AI chips can’t exist without fabs (short for fabrication plants): sophisticated dust-free facilities where chips are made. A single speck of dust is far larger than the nanoscale features of a microcircuit and can ruin a chip. Modern transistors are so small that thousands of them could fit within the width of a single strand of human hair. Building a fab capable of producing these intricate chips is extraordinarily expensive, often costing billions of dollars and taking years to complete. As a result, only a small number of companies, such as Samsung in South Korea and TSMC in Taiwan, operate the most advanced facilities. Nvidia is considered “fabless”, meaning it owns no fabs and depends entirely on third-party companies to manufacture its chip designs.

The challenge in today’s chip world is to build chips that can handle massive data processing, machine learning, deep learning and fast, efficient real-time inference. The market is evolving rapidly, and there is a race to design chips that improve speed, performance, personalisation and energy efficiency. While AI may be powered by algorithms, its future depends on who can design and build the chips.