Niels Bohr’s famous line that “prediction is very difficult, especially about the future” rings true for climate change. To help us understand how man-made emissions affect our planet, climate scientists have developed some of the most sophisticated computer models ever created. These models allow us to evaluate hypotheticals, such as what our climate would be like if the industrial revolution hadn’t happened and no greenhouse gases had been emitted. They can also be used to make projections of future change, allowing us to estimate how hot it will get in our lifetimes and further into the future. Perhaps most importantly, they allow us to unpick how climate change will affect different regions of the world, enabling us to better understand its human impacts, and to adapt where possible.

Yet climate models, like all models, are imperfect. Today, progress in improving these models seems to be slowing down, even as computer power multiplies exponentially. Climate models are released in “generations,” about once every six years, to coincide with each IPCC (Intergovernmental Panel on Climate Change) assessment report, allowing each report to take advantage of the most up-to-date simulations. The previous generation of models, released for the 2007 IPCC report, captured the present-day climate approximately 30 per cent more accurately than its predecessor. The state-of-the-art models, developed for last year’s report, improved on their predecessor by only a further 20 per cent. Yet in the six years between the latest two generations there was a 60-fold increase in computer power.

Why, then, are improvements in climate models lagging behind technological advances? To answer that, we first need to understand how climate models work, and why they are flawed. Climate models operate by simulating the relevant physical processes, such as thermodynamics and fluid flow, across the globe.
These models allow us to explore how the climate will be affected by various scenarios, from “business as usual” scenarios in which the world continues to emit more and more CO2, to extreme mitigation scenarios in which emissions start to fall rapidly within the next few years. The models take years to develop and require powerful supercomputers to run. These computers certainly earn their name: the Met Office supercomputer in Exeter cost £30m and fills a room the size of two football pitches. It uses as much power as a small town to perform as many calculations as 10,000 laptops combined.

Even supercomputers have limits, though, and it is not possible for models to simulate every point of the globe. Instead, they split the world into a grid, where all physical properties are assumed to be uniform inside each grid box. Although weather forecast models work in broadly the same way, they are only required to simulate a small region of the globe a few days into the future. For climate models, which need to simulate the entire globe several centuries into the future, the limits of our computer power mean that each grid box is typically more than a hundred kilometres across in both longitude and latitude. In climate models, therefore, the whole UK is reduced to just a handful of boxes.

This gridded design allows climate models to simulate large-scale climate processes, enabling accurate predictions of properties such as temperature. Climate processes smaller than these grid boxes, such as precipitation, cannot be properly captured, and so must be approximated. The intuitive approach to simulating these smaller-scale processes, then, is to take advantage of increasing computational power and make the grid boxes smaller. And climate models have undoubtedly improved through such increases in grid resolution.
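To get a feel for the numbers involved, the following toy calculation shows how the resolution of a uniform longitude-latitude grid translates into box counts and box sizes. This is a hypothetical back-of-the-envelope sketch, not the grid scheme of any real climate model:

```python
import math

EARTH_RADIUS_KM = 6371.0

def grid_cells(resolution_deg):
    """Total number of boxes in a uniform longitude-latitude grid
    with the given angular resolution in degrees."""
    return int(360 / resolution_deg) * int(180 / resolution_deg)

def box_width_km(resolution_deg, latitude_deg=0.0):
    """East-west width of one grid box at a given latitude.
    Boxes narrow towards the poles as lines of longitude converge."""
    circumference = 2 * math.pi * EARTH_RADIUS_KM * math.cos(math.radians(latitude_deg))
    return circumference * resolution_deg / 360

# A one-degree grid covers the globe in 360 * 180 = 64,800 boxes,
# each roughly 111 km wide at the equator but only ~64 km wide at
# the latitude of the UK.
print(grid_cells(1.0))
print(round(box_width_km(1.0, 0.0), 1), round(box_width_km(1.0, 55.0), 1))
```

Halving the resolution quadruples the number of boxes (and the simulation must also take smaller time steps), which is why refining the grid is so computationally expensive, and why a uniform grid spends much of its budget on boxes that matter little.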
However, even if all advances in technology were devoted to refining the grid, it would still be many decades before climate models could simulate all the important processes directly. Given the threat posed by global warming, we cannot afford to wait. The results of climate models are analysed by a broad range of academics and professionals, from economists to physicists. Within this diverse community, though, there is growing frustration at the decelerating rate of progress. As a result, academics from around the world are trying to break free of the cycle of simply increasing resolution or adding complexity. Computer scientists, mathematicians and physicists are challenging the assumptions on which climate models are traditionally based. There are a host of new approaches to modelling the climate, from changing the way the models are designed to choosing which models to run in the first place.

Almost all climate models use grids that are set up in the same way: a uniform longitude-latitude grid in which all boxes are roughly the same size. In reality, though, not all grid boxes are equally important. Some areas, such as the Gulf Stream, have a larger impact on the climate system than others. “Adaptive meshing” aims to tackle exactly this problem. Adaptive meshes form non-uniform grids, so that models can dedicate more grid boxes, and so more computer power, to the areas that need them most. Although adaptive meshes are still relatively new, the Met Office plans to incorporate them into its climate models within the next five years.

While adaptive meshing aims to improve simulations by changing the structure of climate models, other scientists are investigating the possibility of changing the machines on which the models are run. By relaxing the requirement that the computer hardware perform every calculation perfectly, climate models could be run much more quickly.
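The idea of relaxing arithmetic exactness can be illustrated in miniature. The sketch below deliberately rounds every result of a toy simulation to a reduced number of mantissa bits, mimicking inexact hardware in software (real reduced-precision computing happens at the circuit level; this is illustrative only, and the relaxation model is invented for the example):

```python
import math

def reduce_precision(x, mantissa_bits=52):
    """Round x to the given number of mantissa bits, mimicking hardware
    that is allowed to compute inexactly (52 bits = full double precision)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

def relax_to_equilibrium(steps, mantissa_bits=52):
    """A toy 'simulation': a temperature relaxing towards 15 degrees C,
    with every intermediate result rounded to reduced precision."""
    t = 0.0
    for _ in range(steps):
        t = reduce_precision(t + 0.1 * (15.0 - t), mantissa_bits)
    return t
```

Run with only 10 mantissa bits instead of 52, the toy simulation still lands within a few hundredths of a degree of the full-precision answer, despite each individual calculation being far cruder. The bet behind inexact hardware is that this kind of robustness carries over to real climate simulations, where the savings in silicon and energy could be spent on higher resolution instead.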
Just as it is quicker to estimate the answer to a complex sum than to calculate it precisely, allowing a computer to make certain mistakes could make models far more efficient without substantially sacrificing accuracy. More efficient models could be run much faster, and existing climate models could be run on much more affordable supercomputers. Early testing by a group of researchers at the University of Oxford has confirmed that improvements in efficiency should be possible with this approach, though actual implementations are still in their infancy.

Ultimately, the vast majority of scientists working on climate models focus on improving the quality of simulations. When it comes to simulating the climate, though, quantity can be just as important as quality. With this in mind, the climateprediction.net project deliberately makes use of an out-of-date climate model. Although this last-generation model has been superseded, it is still sufficient to capture the essential detail of many climate phenomena. What’s more, improvements in technology mean supercomputers are no longer required to run it: predictions can now be made using home computers. Climateprediction.net has taken advantage of this by running simulations on thousands of different home computers while their owners aren’t using them: a screensaver with a purpose. This has enabled hundreds of thousands of simulations to be carried out, in contrast with state-of-the-art models that can only be run a handful of times. This incredible array of simulations has enabled a vast number of climate change scenarios to be studied. It has also allowed climate uncertainties to be evaluated in a way that would simply not be possible in the traditional framework of running one state-of-the-art model a small number of times.
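The quantity-over-quality approach can be sketched as follows. The “model” here is a hypothetical one-line stand-in, not climateprediction.net’s actual code; the point is only how running many simulations with perturbed parameters yields a spread of outcomes, and hence an uncertainty estimate, rather than a single answer:

```python
import random
import statistics

def toy_model(sensitivity, forcing=3.7):
    """A drastically simplified stand-in for a climate model: equilibrium
    warming (degrees C) for a given climate sensitivity parameter
    (degrees C per W/m^2), with ~3.7 W/m^2 the standard forcing from a
    doubling of CO2. Illustrative only."""
    return sensitivity * forcing

def run_ensemble(n, seed=0):
    """Run the toy model many times, each with a perturbed (and here
    entirely invented) sensitivity parameter, as an ensemble project
    does with a full model across thousands of home computers."""
    rng = random.Random(seed)
    results = [toy_model(rng.uniform(0.5, 1.2)) for _ in range(n)]
    return statistics.mean(results), statistics.stdev(results)
```

A single state-of-the-art run would give one number; the ensemble gives a central estimate together with a spread, which is exactly the uncertainty information that a handful of expensive simulations cannot provide.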
While the adage that “all models are wrong, but some are useful” remains true, there is invaluable work being done to make models less wrong and more useful. Unfortunately, usefulness does not mean results are used, and there remains a gulf between scientific understanding and political action on climate change. There are still no meaningful international agreements on reducing greenhouse gases, and indeed emissions continue to rise each year. We can only hope that as time progresses, and our understanding of climate change continues to advance, it will become harder for our leaders to excuse inaction.