You can imagine the start of a climate geoengineering programme in a number of ways. The way that most appeals to me is as part of a policy portfolio aimed at reducing the future risks of climate change. This would entail careful consideration of a variety of proposals for reducing incoming sunlight, research into the weaknesses of all of them and the choice of a preferred option. Then, if as the result of a deliberative process that has been going on in parallel, with each strand informing the other, you — for a suitably inclusive, legitimate value of “you” — decide that such risk management is worth trying, you start implementing such a programme, with the aim of slowly but steadily ramping up to the level of offset you have decided is wise, while continuing with other mitigation and adaptation measures.
On the other hand, a programme might be triggered by a specific event — for example, something sudden and dire happening in the Arctic. Some such events (lots of methane coming out of permafrost) might indeed be checked by prompt cooling, though you might need rather a lot of it. Other catastrophes (radical destabilisation of Greenland ice) probably wouldn’t be helped at all. But such an emergency might trigger demands for prompt climate action that politicians found hard to ignore, and climate geoengineering might be the prompt action they turned to whether or not it met the needs of the specific emergency.
I’ve always seen this as a rather worrying scenario. Much better to think carefully about climate geoengineering’s merits and dangers and build it into a portfolio of climate action than to be bounced into it as some sort of new alternative. Among other drawbacks, a programme put together in the context of a climate emergency might have to be sized so as to deliver a dramatic effect — one with a cooling that might be measured in watts per square metre, rather than something a tenth that size — right away. This seems likely to be imprudent.
A new paper by Jim Haywood and colleagues at the Met Office and the University of Exeter in Nature Climate Change brings up a new version of this question, though, one which I find intriguing. What about the use of geoengineering to counteract a natural, rather than man-made, climatic event?
Filed under: Geoengineering, Global change, Interventions in the carbon/climate crisis
I recently had the great pleasure of attending this year’s Breakthrough Dialogue at Cavallo Point, an event at which the Breakthrough Institute brought together kindred spirits of disparate views to hash out some of the many issues that that Institute takes an interest in. On the basis of this Economist special report I was invited to talk about nuclear power, but in the many fruitful interstices of the meeting found myself talking about geoengineering quite a lot, because this is the sort of crowd where that sort of discussion makes sense, and because I am working on a book on the subject.
Towards the end of the meeting, a friend mentioned to me that perhaps I should be more careful in such conversations – people seemed to be getting the wrong idea about what I believed. This may be the case – I can’t really vouch for what message people were picking up, and I’ll admit that I sometimes run off at the mouth and that jet lag when drink has been taken doesn’t always help matters.
That said, I think there is a danger to being too careful in talking about geoengineering. If all the people who know about geoengineering are meticulous in the care that they take in talking about it, they will create no new misapprehensions – but they may do little to dispel old misapprehensions, and they may pass up the opportunity to carve out for geoengineering a more central place in our ongoing discussion on climate. I think it deserves that place; if I didn’t I wouldn’t be writing a book about it.
But while there may be good reason to be expansive in one’s talk, there’s no good reason for being careless, or even sloppy, in one’s reasoning. I have tried to be pretty careful in published stuff in the past, such as this 2007 piece in Nature and this 2010 piece in Prospect. Some time in the future I hope to provide all the clarity and nuance one could wish for in the book. But for the time being, here are a few key points in my current thinking, expressed with what I hope is appropriate care.
Filed under: Interventions in the carbon/climate crisis
Lord knows this shouldn’t need saying, but it does. Earlier this week I received a press release from a UK green electricity company claiming that for a couple of months last year wind power had provided 10% of the UK’s energy needs. Today, The Guardian prints a Reuters report saying that during the post-Christmas gales it was 12.2%. The same report ended up at Scientific American and quite a lot of other places. In both cases the numbers came from the UK Renewable site (Reuters’ source here) with which I have no beef. But both had taken figures explicitly about electricity consumption and claimed that they reflected total energy use.
I really don’t understand how it is that people sitting in warm homes or offices with cars going past their windows think that electricity and energy are the same thing. But here are the numbers. Page 59 of the latest International Energy Agency figures (pdf) gives TPES (total primary energy supply) for the UK as 197mtoe (million tonnes of oil equivalent). Converting that into the sort of units electricity is measured in (the IEA provides a handy converter here) you get 2290TWh. In the same table on page 56 you will see UK net electricity consumption given as 350TWh. So only about 15% of the UK TPES is consumed as electricity.
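For anyone who wants to check this sort of claim themselves, the arithmetic is short enough to sketch in a few lines of Python (the conversion factor of 11.63 TWh per mtoe is the standard one the IEA converter uses; the 197mtoe and 350TWh figures are the IEA numbers quoted above):

```python
# Sanity check: what share of UK primary energy is consumed as electricity?
MTOE_TO_TWH = 11.63          # standard conversion: 1 mtoe = 11.63 TWh

tpes_mtoe = 197              # UK total primary energy supply (IEA, page 59)
tpes_twh = tpes_mtoe * MTOE_TO_TWH    # ~2291 TWh, i.e. the "2290TWh" above
electricity_twh = 350        # UK net electricity consumption (IEA, page 56)

share = electricity_twh / tpes_twh    # ~0.15
print(f"TPES: {tpes_twh:.0f} TWh; electricity share of TPES: {share:.1%}")
```

Which is the point of the whole post: multiply a wind-power share of electricity by roughly 0.15 before calling it a share of energy.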
The two numbers are not quite equivalent. The share of TPES devoted to generating electricity is larger than the share consumed as electricity, because more than half the energy content of the coal and gas burned at power stations doesn’t actually get turned into electricity. So though I don’t have figures to hand on how much of the TPES is devoted to electricity generation, it’s probably around twice as large, which fits with my sense that about 30-40% of energy supply is used for generating electricity.
Anyway, everyone makes mistakes, but this one is at once egregious, distressingly common and genuinely harmful. When people hear that Britain’s rather paltry wind fleet is generating 10% of its energy they are seriously misled about the scale of the decarbonisation challenge. In good months, as far as I can see, wind currently provides a bit less than half of the country’s renewable electricity, which means about 5% of its consumed electricity, which means less than 1% of its TPES.
The renewables company corrected its press release as soon as I pointed out the error. I trust that Reuters and its subscribers will too.
Filed under: Geoengineering, Interventions in the carbon/climate crisis, Published stuff
This week’s Economist carries an obituary of Steve Schneider. Excerpt:
Mr Schneider’s high profile as a proponent of action on climate change—he was the editor of an important journal, Climatic Change, and an influential member of the Intergovernmental Panel on Climate Change (IPCC) more or less from its inception—would have made him a favourite target for such antagonists anyway, but he came in for particular scorn because of his willingness to discuss the inevitable tensions between advocacy and academic integrity. Critics of Mr Schneider, including this newspaper, portrayed him as giving in to this tension, and being willing to tell “necessary lies” when it suited his purposes. He countered such attacks vehemently, saying such a conclusion rested on a slanted reading of what he had said on the subject. He had no time for advocacy without truth.
Many comments and memories on this post of Andy Revkin’s
To sit next to Steve Schneider while listening to someone else give a talk about climate science is like watching a DVD with a commentary track by an insightful but rather grumpy director. As the speaker makes her points, Schneider, a veteran climate scientist now at Stanford University, will mutter about who first made all the interesting points in the talk, and when this or that bit of science was first appreciated, and how stupid people have been not to act on this knowledge years ago.
The purpose is to remind anyone listening that climate science has a history, if a fairly brief one, and that the message of that history is reasonably consistent — scientists have believed much of what they believe now about global warming for decades, and if climate scientists in general and Schneider in particular had been listened to better, the world would have faced up to the issue better and sooner.
This personal memoir by Schneider provides a similar effect…
Image courtesy of Stanford, I believe
Filed under: Interventions in the carbon/climate crisis
My friend Jonathan Rauch — who is undoubtedly one of the best columnists I know — hits what seems to me a rare wrong note in his current column in the National Journal (link subject to rot after a week or so, I think). Riffing off the incandescent light bulbs issue, he moves on to the “don’t regulate, just price carbon” argument. His case against compact fluorescents is that he, like many other consumers, doesn’t find them to be very good, and that the energy savings they make possible will be eaten up by the Jevons (or “rebound”) effect:
Is this a smart way to save some energy? Or, rather, an example of ham-handed environmental grandstanding?
Europhobia aside, there is a case for the phaseout. Incandescents are famously wasteful, emitting much more heat than light. Though cheap to buy, they are expensive to run… Moreover, lightbulbs are low-hanging fruit on the conservation tree. Unlike, say, an air conditioner or a furnace, they are quick and easy to replace. Savings flow instantly. Compact fluorescents may be imperfect, but the new mandate will drive down their prices while stimulating technological advances. Everybody wins.
That case has its points. Nonetheless, I’m going to vote for No. 2: ham-handed environmental grandstanding.
It is true that consumers can and often do undervalue energy efficiency…but replacing your incandescent bulbs with fluorescents is not the same as replacing your low-efficiency refrigerator with a high-efficiency one. As someone who has recently made a good-faith effort to switch, I can tell you that fluorescents deserve their not-ready-for-prime-time reputation…The compact fluorescent lamp, at least in its currently commonplace incarnations, is a lousy product. Consumers who reject it are not necessarily numskulls. Many if not most are exercising a very understandable preference…
The incandescent phaseout is saying: Never mind that you might be willing to raise your summertime thermostat a notch or two in exchange for keeping incandescent bulbs; you still can’t have them. Never mind that your house is full of other potential energy savings; it’s CFLs for you…
Then there is the problem of what Jerry Taylor, an energy analyst at the Cato Institute, calls the rebound effect. Downsizing cars makes driving cheaper, so people do more of it, offsetting some of the gains. Similarly, fluorescents make keeping the lights on cheaper, with the same likely effect.
The Competitive Enterprise Institute’s Sam Kazman notes that in the 1980s a town in Iowa gave out 18,000 free fluorescents in an effort to conserve electricity. “Despite the fact that over half of the town’s households participated, electricity use actually rose by 8 percent. Once people realized they could keep their lights on at lower cost, they kept them on longer.” Having told the public that compact fluorescents cost practically nothing to run and last practically forever, how could we expect people not to leave them on? (I know I do.)
In his fair-minded way, Jon points to the strongest arguments on the other side, but I don’t think he gives them sufficient weight. In particular, as he says, the new marketplace is one where we can expect a great deal of competition in terms of better, cheaper and yet more efficient products. It seems to me that this is a really powerful point. With enlightened regulation, governments around the world (and it is important that this is happening in a synchronised way) are forcing innovation into a market where the low price and economies of scale of the previous incumbent technology made the barriers to entry very high. As Jon points out, if you don’t care much about energy costs, incandescents are a pretty good technology, which is why, as he also notes, compact fluorescents sat around for a long time not getting much better. Now we can foresee a creative free-for-all that will permit a range of new technologies to compete, and to change more profoundly the manner in which things are lit. As my former colleague Stefano Tonzani noted in a feature in Nature (subscribers only, I think):
The general-purpose incandescent light bulb might not be replaced by a single new source, but by a range of technologies, each suited to a particular use. For example, if organic light emitting diode (OLED) lighting can economically be produced in continuous sheets by industrial roll-to-roll techniques, it will be a natural candidate for flat panels that generate a diffuse glow for area lighting. That would make OLEDs a natural complement to the bright, directional light coming from semiconductor LEDs, which could instead be used for more light-intensive tasks such as reading. Such combinations could lead to new concepts of lighting design, so that architects could help save energy by not wasting light where it is not needed.
It is true that by banning incandescents governments are imposing a cost on current consumers who, like Jon, don’t like fluorescents. But for that one-time cost they are bringing into being a more permissive technological state of play with the potential for far more efficient and better products down the line. (Though I’ll admit, in my turn, that the lower turnover of light bulbs in the post-incandescent era will slow this process down, with people locked into the intermediate CFL technology in a way they haven’t been locked into the often-blowing incandescent technology. Unless, that is, they just throw out old fluorescents, which defeats part of the purpose.) This opening up of innovation seems, on balance, a good way to use regulation.
The way that regulation can change contexts bears on Jon’s more general point that the best thing to do is to simply price carbon, rather than also regulate some activities and purchase choices that lead to carbon emissions. This seems to ignore the degree to which consumption takes place in a complex system defined, in parts, by regulatory frameworks. There are all sorts of things that make it hard or easy to emit carbon that pricing carbon, in and of itself, doesn’t affect very much, but on which regulations and other government decisions have a huge impact. It is possible, and laudably nifty, to find ways to put new low emissions technology straight into existing systems, for example by making roof shingles that work just as roof shingles always have, but also generate solar electricity. In general, though, changing the price of carbon without changing the system in which people live is, on its own, going to be a suboptimal strategy. Matt Yglesias was making this point recently while writing about Stockholm buses:
A decision to take the bus is heavily influenced by someone’s decision about where to put the bus stops, where to make the routes go, how frequently to run the buses. [It] is also influenced by the relative paucity of parking spaces in the city, which in turn relates to public policy decisions about minimum parking regulations, maximum allowable density and so forth. …Nobody drives on freeways that weren’t built any more than anyone rides subways that don’t exist.
Whether or not putting a solar panel on your roof makes economic sense depends in part on whether you can sell energy to the grid during surplus periods … Whether or not it makes sense to build a huge wind farm in Kansas depends on whether you have a grid robust enough to transmit that energy to population centers.
We also have regulatory issues limiting our ability to innovate…Multi-family structures are more efficient to heat than are detached houses (it’s a surface area to volume thing) but in many places it’s illegal to build a multi-family structure. So if what you want to do is leave this up to the market, you need to take active legislative steps, not just impose a price and say we’ll let the chips fall where they may.
Nick Stern argues, in a manner that might be seen as fence-sitting but which I find convincing, that carbon markets, carbon taxes and regulations all have roles to play in emissions reduction. Carbon taxes work on transport fuel, for example, in a way that cap-and-trade would not. At the same time, people in Europe don’t think it odd to have fuel taxes as well as regulations on efficiency; the situation reflects, among other things, the fact that fuel taxes high enough to force large efficiency improvements across the whole fleet would prove politically unpalatable. And this seems to me to be a key point. If you insist on thinking that the best thing to do is just to price carbon, even within a system not set up to help people cope with that pricing — if you think that using just the price tool, rather than all the tools, is in principle a superior approach — you have to face the fact that in some cases, for some types of emission, a price that makes a real dent in emissions is not going to be politically feasible. This is the territory on which Boxer-Kerry, and all such attempts to impose prices, will be fought. If a carbon price causes real pain to significant lobbies it becomes very hard to set. Unless you can solve that, gains made through regulations seem a reasonable path.
As to the Jevons effect: yes, but… Yes, efficiency gains tend to spur consumption, to a degree that is often ignored, and this means efficiency does not represent the cornucopia of low-hanging fruit that it is sometimes suggested to be. But as Jon honestly points out, this effect does not necessarily eat up all the efficiency gains. What’s more, there is a time lag between the efficiency gain and the increase in use, and that time lag represents real saving. There is also the point (systems thinking again) that in the presence of energy taxes or other complementary interventions we might expect the size of the effect to be diminished (another reason why we have both efficiency standards and fuel taxes).
And we should not forget that it is possible to saturate the effect, at least in specific modalities. If I keep my well-insulated, efficient-boilered house warm enough to suit me, I will be emitting less carbon than I did when it was less well insulated and the boiler less efficient (this is a hypothetical example: I hope to make it a real one in the next year or so). And once things are efficient, I am unlikely to turn the house into a sauna just because I can. Similarly, I can look forward to a time when I will have a range of devices in my house that allow me any level of illumination up to that of bright daylight and down to that of a dim moody glow in any room, with keylights and fills and bounces and spots and so on allowing me to compose my experience like my own director of photography — and the whole thing will still consume less energy than having a bunch of incandescent bulbs doing a less good job. Of course, the money I save may be used on some completely different sort of consumption. But the more that is done to make consumption of all sorts more efficient, the less that worries me.
I don’t discount the Jevons effect; it is real and powerful, and shows that efficiency alone is not enough. But within an overall system which is trying to make it sensible for people to use less energy while having better experiences in all sorts of ways, the effect can I think be diminished.
Image from Martin Acosta/Greenpeace, used in accord with these conditions.
Filed under: Geoengineering, Interventions in the carbon/climate crisis, Published stuff
It is time again for the annual feast of fun that is Time’s Heroes of the Environment list. As always it is a thought-provoking reminder of how narrow my environmental issues are. Climate and energy issues dominate what I think of under that rubric, but here there is lots of room for good old-fashioned pollution: mines, dirty rivers, rubbish and the like. Not to mention bloody organic farmers, and various people who would not really make my list (Pen Hadow? Really?)
But climate and energy do top the bill: Mohamed Nasheed of the Maldives leads off the whole package, and there’s a nice spread about Joe Romm, who gives his take on the honour here. (Nice note of irony: the piece on Joe Romm is written by Bryan Walsh, eviscerated by Joe earlier this year for a piece that took the Breakthrough Institute’s line on energy R&D; in last year’s Heroes Bryan profiled the Breakthrough Institute’s founders Ted Nordhaus and Michael Shellenberger.)
My contribution this year (following Jim Lovelock in 2007 and Kim Stanley Robinson in 2008) is on David Keith, who I imagine is probably suitably embarrassed by the whole thing, but to my mind deserves the recognition. His heroism consists of thinking hard and clearly about things other people are hardly thinking about at all. That has let him do a great deal to help frame and further the debate on geoengineering, which needed to be done, and now he’s pursuing ideas about direct air carbon capture, which again can but benefit from the serious attention. It also makes him one of the best people to talk to about climate and energy issues, bar none. Excerpt:
Early success in pure physics (his graduate project, supervised by a professor noted for his mentoring of future Nobelists, was a long-awaited experimental breakthrough in atomic optics) did not satisfy him. Climate work promised a greater opportunity to do good while at the same time throwing up what ambitious physicists always want most: questions no one yet knows the answers to.
Soon he was working on nitty-gritty climate-modeling problems while learning economic and policy analysis. That breadth has helped him communicate climate concerns to the often skeptical energy industry; it’s also part of why he is listened to by people like Bill Gates, who relies on meetings organized by Keith to stay up-to-date on climate science. “While he’s got informed and strong opinions,” Gates says, “he’s also incredibly open-minded, pointing out the unknowns in his opinions and just as readily pointing out the merits of others’ opinions.”
Image of David Keith by Ewan Nicholson, used with permission, all rights reserved
Filed under: Geoengineering, Global change, Interventions in the carbon/climate crisis, Trees
An interesting paper in Climatic Change: Irrigated afforestation of the Sahara and Australian Outback to end global warming by Leonard Ornstein, Igor Aleinov and David Rind, DOI: 10.1007/s10584-009-9626-y. (Mason Inman has a nice write-up with some background and comment over at ScienceNow; [update] and corresponding author Len Ornstein chronicles the idea’s rocky research road on his own site). The central idea is that with enough irrigation you can turn big deserts into big forests: forests big enough to suck up a large part of total carbon dioxide emissions for decades or even centuries. I think that you can take this notion as a serious plan, a thought experiment, a jeu d’esprit, a warning or a jumping-off point, depending on predisposition. Aspects of all that in what follows.
Here are the basic numbers: The Sahara is about a billion hectares in area, on which you could fit a trillion eucalyptus trees. Those trees, if working flat out, could each put on twenty kilos of biomass a year. If roughly half that biomass is carbon, that would mean a net annual sink on the order of ten billion tonnes of carbon. That’s about the amount that humans currently emit.
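The chain of multiplications above is easy to get lost in, so here it is laid out explicitly (the implied density of a thousand trees per hectare is my back-calculation from “a trillion trees on a billion hectares”, not a figure taken directly from the paper):

```python
# Back-of-the-envelope: annual carbon sink of a full Sahara forest.
area_ha = 1e9               # Sahara area: about a billion hectares
trees_per_ha = 1000         # implied density for a trillion trees in total
growth_kg_per_tree = 20     # new biomass per tree per year, working flat out
carbon_fraction = 0.5       # roughly half of dry biomass is carbon

trees = area_ha * trees_per_ha                        # 1e12 trees
sink_tonnes = trees * growth_kg_per_tree * carbon_fraction / 1000
print(f"Annual sink: {sink_tonnes / 1e9:.0f} billion tonnes of carbon")
```

Ten billion tonnes of carbon a year, in round numbers the size of current human emissions — which is the whole attraction.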
To create such a forest in a century, you would have to plant as many hectares of trees every year as are currently lost to deforestation worldwide. And, even harder, you’d have to provide them with what they need in order to grow. You need a great many things to turn a desert into a forest — soil nutrients, microbiota, possibly pioneer plants, a compelling reason for doing the work, and so on — but the biggest hurdle, pretty obviously, is water. Eucalyptus, the authors say, needs about a metre of rainfall a year. For a billion hectares, that’s 10 trillion tonnes of water. The authors assume, reasonably for all that I know, that if you have smart irrigation getting the water to just where it is needed you can get away with half that amount. Even so, even the vast aquifers beneath the Sahara don’t contain the amount of water required, so it will have to come from desalination plants on the coast and be pumped up to where it is needed (the average elevation of the Sahara is about 450m). The size of this undertaking — more than 50 new Niles, flowing in reverse — may explain why the authors feel they need to use that fine old-school term “terraforming” for their undertaking. The power requirement, if I’m reading their figures right (4.04kWh/m^3 fresh water delivered), is a bit to the north of 2.2 terawatts, about 40% of it for desalination by reverse osmosis and about 60% for pumping.
The world’s electricity generators currently provide about 18,000 TWh of energy, which averages out at 2TW of constant supply. So in energy terms the desalination and pumping needed for the Sahara forest would use a bit more electricity than the world currently generates for every other purpose. This unavoidably sounds nutty. But that is at least in part because of the nuttiness of the situation, rather than its proposed solution — the nutty situation in which we burn fossil carbon at tens or hundreds of thousands of times the rate at which it is sequestered over geological time. If humanity insists on putting so much carbon dioxide into the air every year that it would take a brand new forest the size of the Sahara to suck it all up, then that’s where the madness starts. That creating such a forest would have to be a large undertaking — large in terms of the whole world economy — is just a consequence of the initial folly.
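If you want to check my reading of those figures, the power requirement and the comparison with world electricity generation both fall out of one short calculation (the 5 trillion m³ is the halved irrigation figure from above; 8,760 is the number of hours in a year):

```python
# Continuous power needed to desalinate and pump the Sahara's irrigation
# water, compared with current world electricity generation.
HOURS_PER_YEAR = 8760

water_m3 = 5e12                        # half of the 10 trillion tonnes/year
energy_twh = water_m3 * 4.04 / 1e9     # 4.04 kWh per m^3 delivered -> TWh/yr
project_tw = energy_twh / HOURS_PER_YEAR   # continuous terawatts, ~2.3 TW

world_tw = 18000 / HOURS_PER_YEAR      # 18,000 TWh/yr averages to ~2.1 TW
print(f"Project: {project_tw:.1f} TW; world electricity: {world_tw:.1f} TW")
```

So the forest’s water supply alone would indeed draw slightly more continuous power than today’s entire electricity system delivers.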
And in practice the investment would be smaller. A nice thing about forests is that they can go some way to creating their own weather, and the authors have looked at this effect with some climate modelling work. If a forest with irrigation-dampened soil is imposed on the Sahara, rain begins to fall, in some places as much as a metre of it every year. This rainfall doesn’t obviate the need for irrigation, because it is strongly seasonal — basically an extension of the West African monsoon of April to November. But it might significantly reduce the irrigation requirements. Maybe you could get away with just a terawatt…
The Sahel, to the south of the Sahara, also gets damper in those enhanced and extended monsoon rains, which is definitely a plus, I’d guess, and the African Easterly Jet, a feature which is driven in large part by the temperature contrast between the desert and surrounding land, seems to more or less vanish. Since a large number of Atlantic hurricanes get their starts as kinks in the AEJ, that might be a pretty significant change, too. Beyond that, the rest of the world seems pretty much unaffected. In particular, the authors say that their models show no additional warming that might be laid at the door of the change of albedo which comes with replacing light desert with darker trees. (I think this fits with the 2007 Bala et al paper in PNAS, which suggested that warming associated with afforestation would be due to changes in boreal, rather than tropical, forest cover).
There is, however, a fly in the ointment. The Bodélé depression in Northern Chad is only a small part of the Sahara, but it is the world’s greatest source of mineral dust, with the winds drawing some 700,000 tonnes a day off the surface. According to Koren et al (ERL, 2007), 40 million tonnes of dust a year travels from the Bodélé to the Amazon rain forest, half the total annual mineral inputs into the forest basin (the dust fertilises the mid-Atlantic, too, and it may play a role in abating hurricanes as well — Jim Giles wrote a lovely piece on this for Nature some time back). There’s a real chance that this dust is crucial to maintaining the soil fertility of the forest, and even if the Bodélé itself were left unirrigated and unforested, the increase in precipitation all round it, and the wetter atmosphere downwind of it, would probably shut it down as a dust producer. If growing a forest in the Sahara hurts the one we already have in the Amazon it obviously becomes a less attractive proposition (though if we are going to lose the Amazon forest anyway, things might look different…). That said, if you are pumping trillions of tonnes of water across continental scales, then paying to air-drop a few tens of millions of tonnes of fine-particle mineral fertiliser upwind of where you want it is hardly going to break the bank.
Something the authors don’t look into is that the higher the CO2 level in the atmosphere gets, the easier this all becomes. Higher carbon dioxide levels make plants more water efficient, all other things being equal. All other things are not, necessarily, equal — higher CO2 also makes things hotter, which plants don’t much care for. In a world with some solar radiation management, though (such as aerosols in the stratosphere) all things might indeed be kept equal, or at least temperature might be. Martin Claussen has been working for some time on the idea that the Sahara is a “tipping element” in the climate regime, one that can be pushed from a dry state to a wetter one relatively easily. In a more carbon rich but not-too-hot world the circumstances might be right for it to tip the other way, and it might take rather less than a 50-Nile terraforming project to nudge it over.
In the final analysis, I don’t think I take this paper very seriously as a practical proposition. Doubling global electricity generation for a single project seems far fetched. For such a thing to be put anywhere near the top of one’s list of African infrastructure investments would require that a great many other large and important development initiatives (provision of power, water, roads, cold chains, vastly improved agronomical advice, etc to the vast majority of the population, for starters) would already have had to have been put in place. But it’s kind of nice to imagine a world in which we were wealthy and together enough to have actually taken the pressing need for those changes to heart, and were thus in a position to consider greening a great desert too.
And regardless of practicalities I think there’s real value in taking the analysis further. A big idea like this throws off many fascinating questions that force you to look at the earth, and what we know about it, in new ways (or old ways but with a new twist):
What polycultures would you build the new forest with? (all-eucalyptus-all-the-time is fine for first calculations, but doesn’t sound like anyone’s idea of a proper landscape. Baobabs? Laurels? And what fauna might be good, or bad?)
What genetic engineering — reduced flammability, higher albedo leaves, more refractory soil carbon, who knows what else — might help?
How much bioenergy with carbon capture could be built into the scheme, perhaps initially to power some of the inland pumping stations?
Can biochar help? (and a million other soil-creation questions)
What are the best silvicultural ways to make the new woodlands pay, as that is something people by and large like their environments to do, and can there be room for some agriculture too?
How could local people best be convinced this was a good idea? And what are the property title reforms that would be prerequisite?
If the AEJ stops, do hurricanes stop too? Or does some other mechanism initiate them, maybe somewhere else? And does the dust really have an effect?
When the Sahara was wetter and less dusty in the past, did the Amazon actually suffer from lack of nutrients? (I think there is actually some research already out there on that — but can’t offhand think where)
How can the transformation be made stunningly beautiful?
What regions and landforms do you want to keep as monuments/heritage sites/national or world parks? There would undoubtedly be a real aesthetic/biodiversity loss in the removal of the desert, not to mention risks to some utterly wonderful buildings.
How to stop the Fremen becoming soft and decadent now that Arrakis has become a land of milk and honey?
and so on.
In particular, it would be nice to see some analysis of halfway houses: where in the Sahel and points north might merely huge, as opposed to planet-sized, afforestation be attempted, and what would be the costs and benefits? It is possible to transform land on very large scales, if not quite this large: 40m hectares of the Brazilian cerrado have been brought into agricultural production over the past fifty years. Can afforestation/silvicultural interventions on such scales ever make sense? And where else might be suitable for such things?
And on the topic of where else: my apologies to any Australian readers for not going into the paper’s analysis of foresting the Outback in addition to, or as an alternative to, the Sahara. Basically the arguments are largely the same but the costs and effects are a bit smaller. There’s also a risk of interfering with El Niño that would definitely merit further attention. If anyone wants to blog more on that aspect of the subject send me a link and I’ll post it up here.
Image credits: Eucalyptus trees at the top from Big Lands Brazil, who would like to sell you some…; Bodele from Charlie Bristow, reused with permission; Tree of life from Flickr user Solvo under Creative Commons license
So yesterday the Royal Society’s report on geoengineering came out, with a launch event and a press conference. It (82pp PDF, press release) is undoubtedly the best overall briefing on geoengineering technologies and their policy/governance implications that you can find right now; John Shepherd and his team did a comprehensive and thoughtful job.
I’m sure that when I get into it in depth I’ll find lots of interesting gems, but here are some highlights:
- The overall frame is that none of these options in any way takes the place of emissions control.
- The report makes a clear distinction between carbon dioxide removal (CDR) techniques — afforestation, burning biomass with carbon capture, biochar, “artificial trees” (possibly the most misleading label any technology is currently labouring under) and so on — and “solar radiation management” (SRM) techniques — sulphate aerosols, cloud-whitening, mirrors in space, etc. CDR interventions will always be very slow to have their effects, while some SRM techniques could be very quick.
- Some of the CDR techniques — those that involve no major interventions in ecosystems — are seen as pretty much unproblematic, if not currently affordable; transnational issues only arise if they start to reduce the carbon dioxide level too far (whatever that might be). CDR that gets into major ecosystem issues — eg ocean fertilization techniques — gives greater cause for concern.
- Pretty much all of the SRM techniques are seen as having significant risks, except for painting roofs white, which simply doesn’t do much good.
- In CDR, two technologies stand out: direct-air carbon capture and BECS, biomass energy with carbon sequestration. Both cost a fair bit, but a decent carbon price would help sort that out. BECS has the advantage of producing energy rather than using it; direct capture, though it uses quite a lot of energy, has the advantage of a footprint that is hundreds or thousands of times smaller per tonne of carbon sucked up. Both assume that there are places to put the carbon once it has been purified.
- There’s also more discussion than I’ve seen elsewhere of “enhanced weathering” — reacting carbon dioxide with rocks ground into the soil and things like that. Low on affordability and readiness, and requires a massive new global mining industry, but since it can scale up in a big way worth keeping an eye on…
- In SRM, stratospheric aerosols are the most impressive option, ranking as high as or higher than anything else with comparable potential. The impacts on other things, though, most notably the hydrological cycle, are a worry. In the 1990s the sulphates from Mt Pinatubo not only dimmed the sun — they also dried the world’s rains and reduced the flow of its rivers. Working out how much this effect matters is probably the most important open scientific question in geoengineering (that’s my opinion, not something the report says).
- Cloud-whitening proponents will be disappointed, possibly a little aggrieved, at the technique being seen as considerably less effective than aerosols; they argue that it can offset a doubling of CO2. On the other hand the report is kinder than one might expect to space-based systems. “Kinder” here means saying someone should go and think about everything so far proposed in that arena a bit more seriously for a few years and then come back and make a case, rather than simply laughing.
- There needs to be a thorough audit of the many international agreements currently in place for other reasons — the UN framework convention on climate change, the London convention, the Montreal protocol, the law of the sea, the convention to combat desertification, the outer space treaty, the convention on biological diversity, and various others — to see which currently have bearing on any of these techniques, and how they could be used to exert control or to provide incentives.
- The UK should commit to £10m a year for ten years in research; worldwide a suitable figure might be ten times that. As John Shepherd put it, this would be ten times current spending on such things, a tenth of total climate research spending and a hundredth of spending on energy technologies.
All reasonable stuff, it seems to me, and well referenced if not well illustrated. The launch event and press conference, though, did feel a little stifled by worries about being seen as championing the technologies under discussion. The press release was actually headed “Stop emitting CO2 or geoengineering could be our only hope”, framing geoengineering principally as a threat. A little more of a sense that some or all of these technologies might be useful adjuncts to emissions reduction rather than a dread alternative could have been helpful — a little less of a sense that they all must be bad. Interestingly, one of the people discussing the issues at the launch event did go further than others in this, pointing out that if you want to get carbon dioxide levels low enough to do something about ocean acidification you are undoubtedly talking about CDR, not as a “plan B”, but as part of the basic strategy. That was John Beddington, the UK government’s chief science adviser.
Yesterday Dan Lack of NOAA gave a talk to the NCAR media fellows about his work on pollution from shipping, and told us something I found pretty flabbergasting. Last year the International Maritime Organisation, as part of a number of measures aimed at air pollution, decided to do something about the sulphur emissions from shipping by reducing the permissible sulphur content of marine fuel from 4.5% today to 0.5% in 2020. This would have great benefits; sulphate pollution, and associated particulate matter, cause significant health problems. According to a new paper in Environmental Science and Technology by Winebrake et al, if in 2012 the world’s shipping complied with this requirement, the associated sulphate pollution would cause 46,000 premature deaths; if that shipping used today’s higher-sulphur fuels the death toll would be 87,000.
However, sulphur emissions from shipping have another effect: the sulphate aerosols that form from the gas make the oceans cooler by increasing the cloud cover above them, as the image at the top of this post shows. The effect is large enough that shipping cools the planet through sulphate aerosols much more than it warms the planet through greenhouse gas emissions. In a companion paper in Environmental Science and Technology, this time with modeller Axel Lauer as first author, the same team looks at this effect. Using the same 2012 scenarios they used for the health figures the researchers find that the cooling effect using fuel like today’s, expressed in terms of radiative forcing, is about 0.57 watts per square metre. The cooling effect if everyone uses the new low sulphur fuels is 0.27 W/m². That means a difference of 0.3 W/m² — which is to say that that’s the amount of warming that switching to low-sulphur fuels would produce.
What does a radiative forcing of 0.3 W/m² mean? Here’s a chart from the IPCC showing the radiative forcings associated with all human climate-changing activities as of today. The total (with biggish error bars) is 1.6 W/m², which shows straight off that 0.3 is quite a lot. It is, for example, twice the amount of forcing as is due to N2O, 60% of the forcing due to methane, and the same as the amount due to halocarbons (HFCs). A huge amount of money is currently being spent on the HFC problem.
Put another way (and I calculated these numbers myself, so please check and correct if you have the necessary skills) 0.3 W/m² is the radiative forcing you would expect if you dumped 47.5 billion tonnes of carbon (in the form of carbon dioxide) into the atmosphere, raising the concentration of CO2 from today’s 387 parts per million to 409 parts per million. That’s well over a decade’s worth of carbon emissions and an enormous amount of warming for the IMO to have committed the world to with no-one, as far as I can see, paying very much attention. (The most obvious environmental response to the IMO changes, from the Clean Air Task Force, was to applaud the health effects of the cuts in sulphur while deploring the lack of action on greenhouse gases and not mentioning the cooling issues at all. If you accept Dan Lack’s figure of just 0.06 W/m² for the total warming from shipping, that seems an odd omission.)
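Those numbers are straightforward to check. Here is a minimal sketch of the arithmetic in Python, assuming the standard simplified expression for CO2 forcing, ΔF = 5.35 ln(C/C0) W/m² (Myhre et al., as used by the IPCC), and roughly 2.13 GtC per ppm of atmospheric CO2 — my assumptions, not necessarily exactly the ones used for the figures above:

```python
import math

C0 = 387.0          # today's CO2 concentration, ppm (as quoted in the text)
GTC_PER_PPM = 2.13  # billion tonnes of carbon per ppm of atmospheric CO2

# Invert dF = 5.35 * ln(C / C0) to find the concentration that gives
# a 0.3 W/m^2 forcing, then convert the increment to tonnes of carbon.
target_forcing = 0.3
c_new = C0 * math.exp(target_forcing / 5.35)  # ppm
gtc = (c_new - C0) * GTC_PER_PPM              # billion tonnes of carbon

print(f"{c_new:.0f} ppm, {gtc:.1f} GtC")  # 409 ppm, 47.5 GtC
```

Which reproduces the roughly 409 parts per million and 47.5 billion tonnes quoted above.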
Now there are obviously complexities and caveats. This is just one modelling study — but its figures for the amount of cooling due to sulphur fit with those quoted by others, such as Dan Lack. Taken at face value it would imply both that the total cooling effect of sulphur on clouds was probably greater than the IPCC best guess, and that sulphate from shipping was responsible for a disproportionate amount of it. But the IPCC’s guess has big error bars, and you would indeed expect sulphate from ships to be peculiarly effective — it gets sprayed into places where the clouds are very susceptible to such things. (This is the effect that John Latham’s geoengineering scheme based on cloud brightening seeks to emulate.) The papers compare effects for 2012, not 2020, which is when the regulations will call for all fuel to be low sulphur — but does anyone expect less shipping in 2020 than in 2012?
So is this a matter of balancing 40,000 lives a year against a decade of global warming? Not necessarily. There is another sulphate reduction option: burn low-sulphur fuels when close to land, and ordinary fuels when far off. There are already some areas where ships have to use low sulphur fuels, and they could be extended to all the places where the sulphate is likely to do its greatest harm. In further scenarios the authors of the two papers looked at a world of 2012 in which ships’ sulphur was reduced to 0.5% or even 0.1% when within 200 nautical miles of land, but left unchanged in mid voyage. In terms of fatalities the 0.1% in coastal waters is slightly better than 0.5% all over the place (44,000 deaths), 0.5% in coastal waters is slightly worse. In terms of cooling these two options are lower than business as usual but higher than a global reduction to 0.5% — their forcing is 0.45-0.48 W/m².
Low-sulphur fuels in coastal areas could lessen the warming associated with a global sulphur reduction and still save as many lives — or more. They would impose other costs, though. Getting sulphur out of fuel costs money, and this might make getting down from 0.5% to 0.1% an issue. Ships would have to carry two different types of fuel, which is also problematic, though not impossible. And going low-sulphur still deprives the world of a lot of cooling, even if the regulations only apply in coastal waters. That’s largely because most shipping is coastal. (This suggests that forcing ships to take longer, less coastal routes — to put out straight to sea where possible, and spend more time further from land — might be an option. Again it has costs.)
Beyond preferring coastal controls to global controls I have no real policy case to make here. I’m aware that there is in general a trade-off between air-quality reasons for reducing sulphates and the possibility that their cooling effects can be climatically helpful. But the fact that this measure involves reducing sulphur emissions in places where they do no harm (the mid oceans) and where their cooling effects are greatly enhanced (by the presence of low clouds they can brighten) makes the question particularly pointed. I have no way to balance the advantages of reduced global warming against the advantages of decreased mortality. I don’t know who has. But I do think it’s kind of extraordinary that a regulatory change with this much effect on global warming could be made with so little apparent fuss.
And I also think this all makes the case for experiments with Latham-type techniques that brighten clouds to cool the seas even stronger than it already is. If, for good reason, we are actively reducing the amount of cooling provided by shipping, surely we should at least look at possible ways of putting it back?
“Mitigating the Health Impacts of Pollution from Oceangoing Shipping: An Assessment of Low-Sulfur Fuel Mandates”, Winebrake, J. J. et al, Environ. Sci. Technol., 2009, 43 (13), pp 4776–4782
“Assessment of Near-Future Policy Instruments for Oceangoing Shipping: Impact on Atmospheric Aerosol Burdens and the Earth’s Radiation Budget” Lauer, Axel et al, Environ. Sci. Technol., 2009, 43 (13), pp 5592–5598
My friend Gideon Rachman was so kind as to quote me in his column in Tuesday’s FT, “Climate activists in denial“, to the effect that
Building two terawatts of nuclear capacity by 2050 – enough to supply 10 per cent of the total carbon-free energy that’s needed – means building a large nuclear power station every week; the current worldwide rate is about five a year. A single terawatt of wind – 5 per cent of the overall requirement – requires about 4m large turbines.
Some people have since been in touch to get a source on those claims (a book proposal, as it happens: Gideon and I have the same agent) and I thought I might as well post the answer I emailed them here, as well.
They’re rule of thumb figures — by which I mean certainly good to an order of magnitude and ideally to a factor of two or so — derived not from research per se but from simple arithmetic.
Current world energy use is about 13TW. With realistic/optimistic growth figures for industrialising and less developed economies, 20TW in 2050 seems a fair ballpark. It also seems fair to think that electricity (and thus nuclear reactors and windmills) will become a larger part of the mix.
As a rule of thumb, a nuclear station will be rated at about a gigawatt of electric power (IAEA figures that my former colleagues and I quoted here http://www.nature.com/news/2008/080813/full/454816a.html have 439 power plants with a combined capacity of 370 GW, for an average of 840 megawatts each, but newer stations are on the larger side, with the new Westinghouse design at a little over a gigawatt and the new French design at a gigawatt and a half or so) and it will actually supply almost that much (nuclear power plants in mature systems typically run at almost 95% of the stated capacity, with a month of downtime every year and a half). So to build 2 terawatts of capacity (10% of 20TW) you have to build roughly 2,000 stations. 2,000 stations in 40 years works out at 50 stations a year.
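The build rate follows from simple division; a quick sketch, using just the round numbers above:

```python
# 2 TW of nuclear capacity (10% of ~20 TW projected 2050 demand) built
# from roughly 1 GW stations over the ~40 years remaining before 2050.
capacity_tw = 2.0
station_gw = 1.0
years = 40

stations = capacity_tw * 1000 / station_gw  # total stations needed
per_year = stations / years
per_week = per_year / 52

print(f"{stations:.0f} stations: {per_year:.0f} a year, {per_week:.1f} a week")
```

Hence the headline figure: roughly a large nuclear power station every week for forty years.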
By a large turbine I meant a 1MW installation. You need a million of those for a terawatt of capacity. But unlike nuclear stations, wind turbines do not produce at their rated capacity very much of the time. On the basis of a system generating 25% of its stated capacity, which is pretty common, you would need 4TW of capacity for a terawatt of generation — hence four million turbines.
Those wind figures are, with hindsight, a little pessimistic. Though anyone who has seen one will agree that a 1MW turbine is large, big wind installations these days, especially those offshore, tend to work on the basis of 1.5MW – 3MW turbines, sometimes even more. And a good farm well placed might generate as much as 33% of its stated capacity. With 3MW turbines at 33%, you get a terawatt with a million turbines, rather than 4 million.
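The gap between the two wind estimates comes down entirely to assumed turbine size and capacity factor; a small helper function (a sketch using the figures above) makes that explicit:

```python
def turbines_needed(average_tw, turbine_mw, capacity_factor):
    """Turbines required to deliver average_tw of average power."""
    nameplate_tw = average_tw / capacity_factor  # capacity you must install
    return nameplate_tw * 1e6 / turbine_mw       # 1 TW = 1e6 MW

# Pessimistic: 1 MW turbines running at 25% of rated capacity.
pessimistic = turbines_needed(1.0, 1.0, 0.25)  # 4 million turbines

# Revised: 3 MW turbines at 33%, as at a good offshore farm.
optimistic = turbines_needed(1.0, 3.0, 1 / 3)  # ~1 million turbines

print(f"{pessimistic / 1e6:.0f} million vs {optimistic / 1e6:.0f} million")
```

So a factor of three in turbine size and a third again in capacity factor together cut the count by a factor of four.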
You can find more in the Nature article I linked to above, and more still at David MacKay’s excellent site (http://www.withouthotair.com/).