The morning of August 23, 1856, saw hundreds of men of science, inventors and curious persons gathered in Albany, New York, for the Eighth Annual Meeting of the American Association for the Advancement of Science, the largest attended to date. The annual meetings of the AAAS brought together scientists from around the United States to share groundbreaking new discoveries, discuss advancements in their fields and explore new areas of investigation. Yet this particular meeting failed to deliver any papers of quality—with one notable exception.
That exception was a paper entitled “Circumstances affecting the heat of the sun’s rays,” by Eunice Foote. In two brisk pages, Foote’s paper anticipated the revolution in climate science by experimentally demonstrating the effects of the sun on certain gases and theorizing how those gases would interact with Earth’s atmosphere for the first time. In a column of the September 1856 issue of Scientific American titled “Scientific Ladies,” Foote is praised for supporting her opinions with “practical experiments.” The writers noted: “this we are happy to say has been done by a lady.”
Foote’s paper demonstrated the interaction of the sun’s rays with different gases through a series of experiments using an air pump, four thermometers, and two glass cylinders. First, Foote placed two thermometers in each cylinder and, using the air pump, removed the air from one cylinder and condensed it in the other. After allowing both cylinders to reach the same temperature, she placed the cylinders with their thermometers in the sun to measure temperature variance once heated and under various states of moisture. She repeated this process with hydrogen, common air and CO2, all of which heated when exposed to the sun.
Looking back on Earth’s history, Foote explains that “an atmosphere of that gas would give to our earth a high temperature ... at one period of its history the air had mixed with it a larger proportion than at present, an increased temperature from its own action as well as from increased weight must have necessarily resulted.” Of the gases tested, she concluded that carbonic acid trapped the most heat, having a final temperature of 125 °F. Foote was years ahead of her time. What she described and theorized was the gradual warming of the Earth’s atmosphere—what today we call the greenhouse effect.
Three years later, the well-known Irish physicist John Tyndall published similar results demonstrating the greenhouse effects of certain gases, including carbonic acid. Though controversial, Tyndall’s theory that Northern Europe was once covered in ice that gradually melted due to atmospheric changes was well recognized at the time. It laid the groundwork for understanding how atmospheric variations over time, in addition to carbon dioxide emissions, could have profound effects on global climate. Presently, Tyndall’s work is widely accepted as the foundation of modern climate science, while Foote’s remains in obscurity.
It goes without saying that the 19th century was not an easy era in which to be a woman and scientifically curious. With limited opportunities for women in higher education and the gatekeeping of scientific institutions like the AAAS, which was all-male until 1850, science was largely a male-dominated field. Even the Smithsonian Institution, one of America’s premier scientific research institutions, was built on the clause “for the increase and diffusion of knowledge among men” (emphasis added). Born in 1819, Foote found herself navigating this landscape.
Although nothing is known about Foote’s early education, it is clear from her experiments that she must have received some form of higher education in science. Her appearance, along with her husband Elisha Foote, at the 1856 AAAS meeting is the first recorded account of her activity in science.
Unlike many other scientific societies, the AAAS did allow amateurs and women to become members. Astronomer Maria Mitchell became the first elected female member in 1850, and Almira Phelps and Foote later joined, though without election by its standing members. But despite the society’s seemingly open-door policy, there were hierarchies within the society itself. Historian Margaret Rossiter, author of the comprehensive three-volume series Women Scientists in America, notes that the AAAS created distinctions between male and female members by reserving the title of “professional” or “fellow” almost exclusively for men, whereas women were regarded as mere members.
These gender disparities were highlighted during the August 23 meeting, where Foote was not permitted to read her own paper. Instead, her work was presented by Professor Joseph Henry of the Smithsonian Institution. (Foote's husband, by contrast, was able to read his paper, also on gases.)
At the meeting, Henry appended Foote’s paper with his own added preface: “Science was of no country and of no sex. The sphere of woman embraces not only the beautiful and the useful, but the true.” The introduction, intended to praise Foote, more than anything highlights her difference as a woman in a sea of men, indicating that her presence among them was indeed unusual and needed justification. Even Scientific American’s praise of Foote’s paper was included in a column two pages after the AAAS meeting report. Though both Henry and Scientific American seemed to see Foote as an equal in scientific endeavors, she was still kept separate from the fold.
Adding insult to injury, Foote’s paper was left out of the society’s annual Proceedings, a published record of the papers presented at the annual meetings. In The Establishment of Science in America, historian Sally Gregory Kohlstedt gives some indication of why this might be.
In the 1850s, Alexander Dallas Bache, a leading force for the AAAS, promoted open membership. But Bache also enforced strict and critical reviews of all papers published in the Proceedings in order to cultivate a specific image and voice for American science; even if a local committee of the association approved papers for publication, the standing committee of the AAAS, on which Bache served, could reject them. Just by glancing at the member list and published papers, it is clear that image and that voice were predominantly male.
The only copy of Foote’s paper published in its entirety is found in The American Journal of Science and Arts, and without this outside publication, only Henry’s read version would remain. Compared to other papers published from this meeting, Foote’s—a demonstration of rigorous experimentation and sound reasoning—should arguably have been included in the 1856 collection.
I spoke with Raymond Sorenson, an independent researcher and co-editor for Oil-Industry History, who was the first to publish a paper on Foote in 2011. A collector of scientific manuals, Sorenson found Foote’s paper as read by Joseph Henry in David A. Wells’s Annual of Scientific Discovery. (Wells is the only known source to include Joseph Henry’s impromptu introduction, most likely retrieved through stenographer records of meetings.)
Sorenson says that Foote’s biographical information is difficult to find and piece together, but he has found her correspondence archived at the Library of Congress and has traced some of her familial connections. The more Sorenson researched Foote, the more he realized he had a book project on his hands. Yet before writing that book, Sorenson decided to go ahead and publish his 2011 article because, as he says, “Eunice Foote deserves credit for being the first to recognize that certain atmospheric gases, such as carbon dioxide would absorb solar radiation and generate heat…[three] years before Tyndall’s research that is conventionally credited with this discovery.”
It now appears that Foote was the first to demonstrate the greenhouse effects of certain gases and also the first to theorize about their interaction with the Earth’s atmosphere over an extended period of time. Her explanation of the greenhouse effect—which would help scientists understand the underlying mechanisms behind global warming in the 20th century—predated Tyndall’s by three years.
For a woman like Eunice Foote—who was also active in the women’s rights movement—it could not have been easy to be relegated to the audience of her own discovery. The Road to Seneca Falls by Judith Wellman shows that Foote signed the 1848 Seneca Falls Convention Declaration of Sentiments, and was appointed alongside Elizabeth Cady Stanton herself to prepare the Convention proceedings for later publication. As with many women scientists forgotten by history, Foote’s story highlights the more subtle forms of discrimination that have kept women on the sidelines of science.
Foote’s work with greenhouse gases does not supersede that of Tyndall, whose body of work overall has been more integral to current climate science. Yet, by including Foote’s 1856 work in the history of climate science, we are reminded that the effort to understand the Earth’s atmosphere and human interactions with it has been an ongoing endeavor over a century in the making. And one of the first steps toward that understanding, it turns out, was taken by a lady.
Sometimes it feels as if every week brings a new host of stories about how climate change is affecting the planet, or new plans to battle its effects like the one announced by President Barack Obama today. But the concept itself isn't new at all — in fact, scientists have been exploring questions about climate change for almost 200 years.
The idea of "greenhouse gases" goes back to 1824, when Joseph Fourier wondered what was regulating the earth’s temperature. Fourier deduced that the atmosphere must responsible for containing the heat absorbed from the sun and described it as like a box with a glass lid: as light shines through the glass, the insides get warmer as the lid traps the heat, writes David Wogan for Scientific American. As Fourier’s ideas spread, it came to be called “the greenhouse effect.”
Scientists continued to study the greenhouse effect, but it wasn’t until a Swedish chemist named Svante Arrhenius came along that scientists understood how global warming actually works. In 1896, Arrhenius published a paper titled “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground” that finally quantified the effect that increased carbon dioxide had on the greenhouse effect.
Arrhenius first became interested in the topic through one of the great questions in the scientific community at the time: what caused ice ages? Believing that it could be the result of dramatic swings in the levels of atmospheric carbon dioxide, Arrhenius began to calculate the precise amounts that would heat the Earth, writes Ian Sample for The Guardian. After years of work, Arrhenius determined that the level of carbon dioxide in the atmosphere did in fact have a direct effect on global temperatures.
“...if the quantity of carbonic acid [CO2] increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression,” Arrhenius wrote in what is now known as “the greenhouse law.”
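Arrhenius’s “greenhouse law” says warming grows with the logarithm of the CO2 concentration ratio: each doubling adds the same temperature increment. A minimal sketch of that relationship in code, using an assumed modern climate-sensitivity value (degrees of warming per CO2 doubling), which is not a figure from the 1896 paper:

```python
import math

def warming_from_co2(c_new_ppm, c_old_ppm, sensitivity_per_doubling=3.0):
    """Arrhenius-style greenhouse law: CO2 rising in geometric progression
    yields temperature rising in arithmetic progression, i.e. warming is
    proportional to the log of the concentration ratio.

    sensitivity_per_doubling is an illustrative assumed value (deg C per
    doubling of CO2), not one taken from Arrhenius's paper.
    """
    return sensitivity_per_doubling * math.log2(c_new_ppm / c_old_ppm)

# Doubling CO2 (280 -> 560 ppm) produces one "sensitivity" of warming:
print(warming_from_co2(560, 280))   # 3.0
# Quadrupling (two doublings) produces twice that, not four times:
print(warming_from_co2(1120, 280))  # 6.0
```

The second call illustrates the law’s key property: equal multiplicative steps in CO2 give equal additive steps in temperature.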
Arrhenius found that CO2 and other gases trap infrared radiation, which warms the atmosphere. As a result, the atmosphere can hold on to more water vapor, the biggest contributor to global warming. Arrhenius was the first to suspect that burning coal could contribute to the greenhouse effect. But, as Sample reports, Arrhenius welcomed the warming effect on the planet. At a lecture later that year, Arrhenius noted that residents of a warmer Earth "might live under a milder sky and in less barren surroundings than is our lot at present."
While Arrhenius' findings won him the 1903 Nobel Prize for chemistry, scientists kept debating whether the greenhouse effect was increasing until 1950, when researchers finally began to find strong data supporting it. By the end of the 1950s, American scientists were sounding the alarm on the long-term consequences of climate change.
Climate change research has come a long way since Fourier first described the greenhouse effect — still, maybe Arrhenius should have been more careful of what he wished for.
For his senior thesis at Princeton, Mark Herrema studied farm subsidies and devised a market-driven solution to world hunger. Nothing seems too tall an order for the determined entrepreneur, who majored in politics.
Herrema, 33, has since shifted his focus to climate change—specifically, finding a way to capture greenhouse gases and put them to good use. He and Kenton Kimmel, a high school classmate, founded the Irvine, California-based company Newlight Technologies in 2003. After years of research, the team unveiled a way to produce plastic from carbon emissions that is actually more affordably priced than oil-based plastics. The "secret sauce" is a biocatalyst that combines air and methane, and reassembles all of the carbon, hydrogen and oxygen molecules into a thermoplastic the makers call AirCarbon.
Herrema shares his story with Smithsonian.com.
Let's start with the problem. What problem are you trying to fix?
Newlight started in 2003 with a question. Instead of looking at carbon emissions as a problem, what if we could use carbon emissions as a raw material to make materials, and what if those materials could outcompete oil-based materials on price and performance?
If we could do that, we would have a powerful process to address two issues: first, oil dependency, by replacing oil with captured carbon emissions, and second, climate change, by creating a market-driven carbon capture platform. What if the world was competing for the use of carbon emissions as a resource? There are few things we can imagine that would be so powerful in addressing climate change.
So, what exactly is Newlight Technologies? Could you give me your elevator pitch?
Newlight was founded to realize this vision. The net result, after over a decade of research and development, is AirCarbon, a thermoplastic material made by combining air and captured methane-based carbon emissions that would otherwise become part of the air. The material is as strong as oil-based plastics and significantly less expensive.
How exactly do you make the plastic?
The production process starts with methane emissions generated at places like landfills, farms, water treatment plants and energy production facilities—anywhere that methane is being emitted where it would otherwise be vented or flared. The first thing we do is capture that methane.
For example, at a farm, organic material is often held in a confined area, such as a tank, where it produces methane, and this methane is vented or routed into a pipe and eventually combusted, with essentially 100 percent of the carbon being released to air. In our process, instead of letting that pipe vent or feed a combustion device, we redirect the pipe to our conversion reactor. Inside the reactor, we mix the methane emissions with water, air and our biocatalyst. Here, the biocatalyst pulls oxygen out of the air, and carbon and hydrogen out of the methane, and combines those molecules to make a long-chain thermoplastic polymer molecule, called AirCarbon.
Next, we remove AirCarbon from the reactor, and after a downstream processing step, melt it into a pellet, where it can then be processed into shapes and used to replace oil-based plastics.
How were you able to make this process cost-effective?
The basic science to convert methane into thermoplastic polymers existed for many decades. Unfortunately, while the science existed, the key challenge, and the reason the process had never been commercialized, was cost. Prior to Newlight, the cost to produce polymers from methane emissions was about 2 to 3 times higher than the cost to produce oil-based plastics. Unfortunately, very few companies can afford to use a material at that price level. So, our founding challenge was: how do we carry out this process in such a way where we can outcompete oil-based plastics on price? Ultimately, our key breakthrough was our biocatalyst.
Specifically, in the past, all biocatalysts were self-limiting, meaning that they could only make a certain amount of polymer before they would turn themselves off and make carbon dioxide instead of polymer. Numerically, to make one kilogram of plastic, you needed to make one kilogram of biocatalyst, and that was the maximum yield, which rendered the production cost very expensive.
Over the course of about ten years of work, we developed a new kind of biocatalyst that does not turn itself off. Every kilogram of biocatalyst we make produces about nine kilograms of polymer—nine times more material for the same input than previous options, enabling Newlight to manufacture polymer from greenhouse gases at a price point that features a double-digit percentage cost reduction compared to the cost to produce plastics from oil.
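The economics of that yield improvement reduce to simple arithmetic: at a 1:1 yield every kilogram of polymer carries the full cost of a kilogram of biocatalyst, while at roughly 9:1 that cost is spread over nine kilograms. A sketch of the comparison, with a purely illustrative catalyst cost (not Newlight’s actual figure):

```python
def catalyst_cost_per_kg_polymer(catalyst_cost_per_kg, polymer_per_kg_catalyst):
    """Biocatalyst cost attributed to each kilogram of polymer produced."""
    return catalyst_cost_per_kg / polymer_per_kg_catalyst

# Illustrative assumption: $10 to produce one kilogram of biocatalyst.
old = catalyst_cost_per_kg_polymer(10.0, 1.0)  # self-limiting catalyst, 1:1 yield
new = catalyst_cost_per_kg_polymer(10.0, 9.0)  # non-self-limiting, ~9:1 yield
print(old, new)  # roughly a nine-fold drop in catalyst cost per kg of polymer
```

Whatever the real input costs, the ratio is what matters: the same catalyst expenditure yields nine times the product.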
How did you come up with this concept?
I read a Los Angeles Times article about methane emissions from cows in 2003, called “Getting the Cows to Cool It.” The article described the precise volume of methane—634 quarts—emitted per cow per day, and this number started a chain of questions: how much methane does a farm produce? What about a county, a state, a landfill or an energy facility?
What had seemed like an abstract concept of carbon emissions now seemed so real, so touchable. The burning question was, if so many of our materials are made from carbon, why are we letting all of this carbon go into the air? Why not use it to make materials, particularly materials that would otherwise be made from oil, like plastics?
I teamed up with Kenton Kimmel to found Newlight, and in 2006, Evan Creelman joined our team. For nine years, Newlight worked in radio-silence—no website, no public presence—because we said that until we could outcompete oil-based plastics on price, there was nothing to talk about.
How would you describe your success to date?
In August of 2013, ten years after our founding, Newlight commenced operations at the world’s first commercial-scale AirCarbon manufacturing facility in California, where we are combining air with methane from a farm to make AirCarbon thermoplastics.
Since Newlight’s commercial scale-up, AirCarbon has been used in a number of products, including chairs from KI, bags from Dell and cell phone cases from Sprint. In 2013, Newlight had five product applications. Within 12 months of our commercial scale-up, we grew to over 75 applications, and today we are working with over 60 Fortune 500 companies to launch AirCarbon in various products in the U.S., Europe and Asia, from automotive applications and electronics components, to bottles, caps and films.
Our focus today is on expansion, with our next benchmark step being to scale production up to 50 million pounds per year. One of the last major innovations in plastics production—Union Carbide's UNIPOL technology—reduced the capital and operating costs of the plastics production process and grew from an idea to an annual production of over 60 billion pounds. We see an equally significant step-change in cost savings, and we aim to achieve similar scale.
In parallel, our focus is on building more conversion facilities, so that we can expand the AirCarbon production technology quickly and efficiently, from places like farms and landfills to the [fracking] flares of North Dakota and Texas, where the amount of carbon being flared and emitted every day—carbon that we could be converting into materials—is so intense that from space these rural areas light up the night sky like Chicago or New York.
What impact do you see Newlight Technologies having on the reduction of greenhouse gas emissions?
When you hold AirCarbon in your hand, about 40 percent of the weight you feel is oxygen pulled from air and 60 percent is carbon and hydrogen from captured carbon emissions—carbon that would have otherwise become part of the air.
Our hope is that AirCarbon starts a paradigm shift, where we start to view greenhouse gas emissions as a resource, a raw material that can be used to produce the highest quality, most cost-advantaged, most sustainable materials in the world.
Do you think Newlight can help resolve the tension between those interested in restricting carbon emissions and those who feel such restrictions will be crippling to the economy? If so, how?
Absolutely. AirCarbon is one part Atlas Shrugged and one part An Inconvenient Truth. It is our belief that climate change is not going to be solved by subsidies or taxes. We think that the only way we are going to solve climate change, in the time and at the scale that is required, is through market-driven solutions, where consumers and brands are part of the solution, where the products we make cost less and capture carbon, and where we all participate in that.
Ultimately, what is so exciting to us about AirCarbon is that it changes the terms of the debate. If the reality is political deadlock, we have to stop fighting the same fight and focus on common ground and solutions. We can all agree that we would rather use captured domestic carbon emissions than oil to make products, and we can all agree that we would rather use carbon-capture materials that cost less than oil-based materials.
Most methods of fighting climate change are about reducing greenhouse gas emissions: inventing cleaner power plants, engineering greener cars. Then, there’s the camp of researchers who focus on drawing gases from the atmosphere once they’ve already been released.
So-called “carbon dioxide capture” has been controversial, often dismissed as impractical or inadequate. Yet as global efforts to reduce emissions have proven difficult and sometimes disappointing, the approach seems increasingly alluring.
A new invention, from scientists at University of California, Berkeley, offers a novel take on carbon capture. The researchers have created a nanomaterial that destroys carbon dioxide by splitting it into oxygen and carbon monoxide.
Scientists have long tried to get rid of carbon dioxide by splitting its molecule. These splitting attempts can be energy-intensive, which defeats the environmental purpose. So researchers have used various catalysts to speed up the reaction, reducing the amount of electricity needed to split the molecules. Many scientists have focused on porphyrins, ring-shaped organic molecules, to make these reactions happen. Though porphyrins can have various atoms at their centers, the ones used for this purpose are cobalt porphyrins, which are especially catalytically active. When these porphyrins are added to a solution with two electrodes, an electrolyte and some dissolved carbon dioxide, the porphyrins are attracted to the electrolyte. This causes the electrons to move to the carbon dioxide, splitting it into carbon monoxide and oxygen. But this approach has not been perfect. The porphyrins clump together and lose effectiveness over time, and the solutions used to make the process happen are environmentally questionable themselves.
The Berkeley researchers seem to have found a new way to deal with this by creating a porous nanomaterial linking porphyrins together into a mesh-like substance. This is called a covalent organic framework (COF). The carbon dioxide percolates through the COF, splitting into carbon monoxide and oxygen with very little added energy. It works about 60 times more efficiently than splitting the carbon dioxide using free-floating porphyrins. The research was reported in the journal Science.
So what can be done with the oxygen and carbon monoxide created by the process?
“Carbon monoxide is important because it’s one of the feedstocks of the chemical industry, which makes fuels based on carbon monoxide,” says Christian Diercks, one of the lead researchers on the study. “The idea is basically to use carbon dioxide, which is a waste, and turn it into fuel.”
In the future, factories could use sheets of these nanomaterials around carbon dioxide-producing areas, such as smokestacks, turning it directly into carbon monoxide for fuel. But this is a long way down the road.
"If you really want to get something like carbon dioxide reduction to happen on a large scale, I think you always need government incentives," Diercks says, "because it always takes industry a long time to pick up new ideas like this."
So far, the lab has only made the material in tiny amounts, 30 milligrams at a time. It takes multiple days to produce, so the process will need to become more efficient to be implemented at an industrial level. The researchers' next step is to look into ways to more efficiently transform the carbon monoxide into fuel.
Official estimates of U.S. emissions of the greenhouse gas methane may be far too low, according to a report published today by the Proceedings of the National Academy of Sciences. Oil and gas production is contributing much more methane than either the U.S. Environmental Protection Agency (EPA) or the best global survey of the greenhouse gas assumes.
Carbon dioxide tends to get the most attention in climate change discussions because it’s the greenhouse gas most responsible for the changes we’re now seeing on Earth. But methane (CH4) has similar heat-trapping effects, and pound for pound, it traps 70 times more heat than carbon dioxide (CO2). However, methane has a shorter atmospheric lifespan, sticking around only for about ten years, compared to a century for CO2.
Like carbon dioxide, methane has been on the rise. Atmospheric concentrations of CH4 have increased from a pre-industrial range of around 680 to 715 parts per billion (ppb) to approximately 1,800 ppb today. Determining where all that extra methane is coming from is important for efforts to reduce greenhouse gas emissions and limit future climate change effects.
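Pound-for-pound comparisons like the one above are how methane is typically folded into carbon accounting, as "CO2-equivalent" emissions. A minimal sketch using the article's factor of 70 (reported warming potentials vary with the time horizon considered, so this number is only as good as its source):

```python
def co2_equivalent_tons(methane_tons, warming_factor=70):
    """Convert a mass of methane to CO2-equivalent tons using a
    pound-for-pound heat-trapping factor. The default of 70 is the
    figure quoted in this article; official inventories use
    horizon-dependent values instead.
    """
    return methane_tons * warming_factor

# One ton of methane counts the same as 70 tons of CO2 under this factor:
print(co2_equivalent_tons(1))  # 70
```

This is why even modest methane leaks from oil and gas production can loom large in an emissions inventory: the conversion factor multiplies them before they are compared with CO2 sources.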
The EPA currently lists livestock production as the biggest methane contributor, followed by, in order, natural gas production, landfills and coal mining. Methane measurements made from aircraft, however, are calling that order, and the EPA’s methane estimates, into question. The EPA and the Emissions Database for Global Atmospheric Research (EDGAR) both use a “bottom up” method of estimating methane, which depends on taking samples and calculating how much methane comes from known emitters, such as livestock herds and petroleum fields, then adding it all up. The aircraft studies take a “top-down” approach instead, starting with measurements of methane in atmospheric samples.
In the new study, Scot M. Miller of Harvard University and colleagues used aircraft-based sampling and a National Oceanic and Atmospheric Administration/Department of Energy air-sampling network to tally 12,694 observations of methane from across the United States in 2007 and 2008. They then used those observations and a computer model to create estimates of monthly methane emissions. The analysis found large differences between their observations and the EPA and EDGAR estimates: The new figures were 1.5 times greater than those of the EPA and 1.7 times those from EDGAR.
Nearly a quarter of the nation’s methane emissions came from just three states—Texas, Oklahoma and Kansas. The estimates for CH4 emissions from these three states were 2.7 times higher than those of EDGAR. “Texas and Oklahoma were among the top five natural gas producing states in the country in 2007,” the researchers note in their paper. The team was able to trace the methane to oil and gas production not simply through coincidences of geography but also because their observations found propane in the atmosphere above certain areas in these states. Propane is not produced by methane sources such as livestock or landfills; rather, it is released during fossil fuel extraction. Thus, its presence indicates that some fraction of the methane over those regions must come from fossil fuels.
“This is the first study to quantify methane emissions at regional scales within the continental United States with enough spatial resolution to significantly criticize the official inventories,” study co-author Marc L. Fischer, of the University of California Berkeley, said in a statement. “Even if we made emissions from livestock several times higher than inventory estimates would suggest for the southwest, you still don’t get enough to cover what’s actually being observed. That’s why it looks like oil and gas are likely responsible for a large part of the remainder…Cows don’t produce propane; oil and gas does.”
Cow farts aren’t getting off the hook here, and clearly the oil and gas industry is already known to be a big contributor to climate change. But one of the selling points of natural gas has been that it is more climate-friendly–or at least less climate-damaging–than other forms of fossil fuels, such as coal. If producing that natural gas results in more methane emissions than currently assumed, then it might not be such a good choice after all.
At this week’s United Nations Conference on Climate Change, over 40,000 attendees will discuss the future of global attempts to reduce greenhouse gas emissions. But which countries are the biggest supporters of climate change action?
A new study by Pew Research shows that in most countries, there’s a large gap between concern about climate change and the willingness to act. But the results are a bit different than you might expect.
The study, which polled people in 40 nations, found that people worldwide are worried about how climate change will affect Earth’s future. A majority of people surveyed in each nation felt that climate change is a serious problem, and 54 percent worldwide characterized climate change as “a very serious problem.”
The most concerned respondents were in Latin America (74 percent) and Africa (61 percent), with the smallest number of concerned citizens in the Middle East (38 percent). United States respondents ranked near the bottom of concern—only 45 percent of Americans surveyed believe that global warming is very serious, and only 30 percent were “very concerned that climate change will harm me personally.”
The poll also showed that concern and willingness to act are two very different things. But surprisingly, even people who didn't believe climate change was a serious concern called for action to curb emissions.
The survey found that in 37 out of 40 nations polled, support of limiting greenhouse gas emissions as part of an international agreement exceeded personal concern about climate change as a “very serious” problem. A median of 78 percent of respondents felt that treaties should be in place, despite only 54 percent agreeing that climate change is “very serious.”
In places like China, there was a 53 percentage point differential between personal concern about climate change (18 percent) and support of an international treaty to cut emissions (71 percent). The United States had a smaller differential: 45 percent were personally concerned, while 69 percent supported an international treaty. In India, the differential was only six percentage points. Pew concludes that for people in many countries, a “better safe than sorry” attitude prevails.
But despite growing awareness of the effects of climate change, there are still large gaps in who supports real action. Even though a global median of 51 percent of respondents believe that people are already being harmed by climate change, respondents from many of the countries that emit the most greenhouse gas have the smallest amount of support for limiting their emissions. The two biggest opponents of limiting greenhouse gas emissions were Turkey (26 percent opposed) and the United States (24 percent opposed).
So who’s most open to climate change curbs? To meet the world’s most climate-aware respondents, you’ll want to travel to Uganda (91 percent in favor, five percent opposed to limiting emissions), Spain (91 percent in favor, six percent opposed) and Tanzania (90 percent in favor, seven percent opposed).
Cities are to greenhouse-gas emissions what Chernobyl was to nuclear power plant failures, which is to say, they’re the worst offenders out there. Cities consume two-thirds of the world’s energy and cough up 70 percent of global CO2 emissions. Some are even gaining notoriety: Air pollution in Beijing is so severe these days that residents can’t even escape it by going indoors, according to scientists at Columbia University’s Earth Institute.
But many cities are making progress in shrinking their greenhouse-gas footprints, and a new study shows that they can make reductions of as much as 70 percent. Scientists at the University of Toronto’s Civil Engineering department used Toronto as a test case for studying cities’ carbon footprints, and they outlined how changes in transportation, buildings and energy supplies–things like boosting insulation, switching to LED lighting and putting in building management systems and automatic lighting controls–can reduce emissions.
A 30 percent reduction would be fairly simple, the researchers say. “With current policies, especially cleaning of the electricity grid, Toronto’s per-capita GHG emissions could be reduced by 30 per cent over the next 20 years,” study author Chris Kennedy said in a statement. “To go further, however, reducing emissions in the order of 70 per cent, would require significant retrofitting of the building stock, utilization of renewable heating and cooling systems, and the complete proliferation of electric, or other low carbon, automobiles.”
Toronto has yet to begin adopting the plan Kennedy and his colleagues have outlined, but it is among the 58 city-members of the C40 Cities Climate Leadership Group, an organization committed to developing and implementing policies and practices to reduce greenhouse gas emissions. The group’s chair is New York City Mayor Michael Bloomberg, and in fact, New York is one of the most innovative and aggressive cities in the world when it comes to emissions reduction. “In my mind London and NYC are providing the greatest leadership,” Kennedy told Surprising Science.
Many other cities are also making strides, according to a 2011 study issued by C40 that details what its member-cities are doing to reduce their emissions. Forty major cities participated in the research, including Chicago, Houston, Los Angeles, Philadelphia and New York in the U.S., and cities from Moscow and Jakarta to Beijing and Mexico City internationally–many of the most populated, high-traffic urban centers in the world. Engineering and design firm Arup, along with the Clinton Climate Initiative, surveyed city officials and conducted research on their greenhouse-gas output and actions to reduce emissions.
Five cities stood out–here’s a breakdown of some highlights:
São Paulo: When landfills were reaching capacity in South America’s most populous city, the Brazilian metropolis installed thermoelectric power plants to capture and burn biogases emitted by the decaying waste. São Paulo’s 10 million citizens generate 15,000 tons of garbage each day, and trash is one of the city’s biggest greenhouse-gas challenges—as opposed to other cities, which struggle more with emissions from buildings and energy supplies. This step allowed São Paulo to reduce methane emissions and produce clean energy at the same time, and now 7 percent of the city’s electricity needs are met this way.
Copenhagen: Known for its bicycle culture, Denmark’s capital is a leader in green transportation, with 36 percent of work- or school-related commutes done by pedaling, according to the C40 study. Other cities have used Copenhagen as a model for their cycle parking, lanes, signage and other biking infrastructure. But Copenhagen is also a leader in waste management. Since 1988, it has reduced the amount of garbage it sends to landfills from 40 percent to less than 2 percent, and fully half of the city’s waste is recycled and used to generate heat. Nearly all of Copenhagen’s buildings utilize an underground piping network that distributes hot water or steam in lieu of relying on boilers or furnaces. Citizens are required to pay for the heat regardless of whether they’re connected to the system.
Addis Ababa: In Ethiopia’s capital, shoddy water pipes are being replaced to help reduce the city’s 50 percent leakage rate. “Cities can lose huge amounts of their often energy-intensively produced potable water due to leakage from pipes during distribution,” the C40 study authors wrote. “Wasting potable water… increases greenhouse gas emissions, and is also a major issue for those cities that are threatened with droughts. The number of drought-threatened cities is rising due to climate change.”
That project joins large-scale, low-carbon housing developments that will create new homes for people currently living in Addis Ababa’s shanty towns, the C40 study showed. The city is also planning to convert 40 percent of its land to green space, which serves to absorb CO2 emissions and reduce the urban-heat-island effect. To that end, Addis Ababa’s mayor instituted a plan to plant three million new trees (the most ambitious tree-planting project in the world) and create a giant nature reserve featuring every tree and plant native to Ethiopia.
New York City: The city that never sleeps is a leader in green policy, according to the C40 study. Its PlaNYC, a program designed to reduce greenhouse gas emissions and otherwise prepare for climate change, includes planting trees and other vegetation to enhance 800 acres of parks and open spaces and pushing new development to areas with existing transit access so that new subway and bus lines don’t have to be added. The Greener Greater Buildings plan mandates upgrades to meet the NYC Energy Conservation Code for renovations, and the NYC Green Infrastructure Plan integrates details like green roofs and porous pavement into the city’s quest to manage storm runoff and alleviate pressure on wastewater treatment plants, which overflow in storms. New York is also known for its innovative system of pneumatic tubes that remove trash from Roosevelt Island through underground tunnels and eliminate the need for fleets of fossil-fuel-burning garbage trucks that clog traffic and wear down streets.
London: Greenhouse-gas reductions in the UK’s capital and largest city are impressive in part because it’s the only city to have achieved them “by diminishing consumption rather than a change of energy sources,” according to another study published last fall by Kennedy. His research showed that London was also the sole city where carbon emissions from commercial and institutional buildings have dropped. How did London make it happen? Establishing a so-called Congestion Charge Zone was one key measure. A fee structure tied to emissions restricts the movement of freight and other heavy goods vehicles within the city’s center and allows electric vehicles to travel for free in the zone. The scheme, introduced in 2003, “has reduced vehicle numbers in the central business district by over 70,000 per day, cutting carbon emissions in the zone by 15%,” according to the study authors. Also, the city’s transit systems are integrated and easy to use thanks to a smart-ticket program, attracting more riders who might otherwise drive gas-guzzling cars.
While the overall effect of these emissions-reduction efforts hasn’t yet been measured, C40 study authors say the 40 cities have taken a combined total of 4,734 actions to tackle climate change. The simplest and most immediate change cities can make, according to Kennedy, is to decarbonize their electricity grids. “This is important because a low-carbon electricity source can be an enabler of low carbon technologies in other sectors, for example electric vehicles, or heating via ground source heat pumps,” he says. But the most effective change Kennedy recommends that city residents make in lowering their carbon footprints is to set their home thermostats 1 or 2 degrees lower in the winter or higher in the summer.
What does or could your city do to reduce its emissions? Leave us a note with your ideas!
What do you imagine when you hear the phrase “greenhouse gases?” If a factory belching smoke or a packed freeway filled with idling cars comes to mind, you’re on the right track: Emissions from these and other human-driven processes pump tens of billions of tons of carbon dioxide into the air each year. But it turns out that CO2 isn’t the only game in town. It’s one of several greenhouse gases that trap heat in the atmosphere, driving global warming and climate change. Here’s what you need to know about CO2’s cousins—greenhouse gases that get less air time, but are no less important to Earth’s atmosphere.
Scientists have known about greenhouse gases since Joseph Fourier, a French physicist and mathematician, theorized that the planet’s temperature must be regulated by something that both absorbs the sun’s rays and emits some of the resulting heat back to Earth. Fourier theorized that gases must be that something, and his work in the 1820s was soon continued by other scientists determined to find out which gases trap heat from the sun on Earth. Eventually, people started comparing the work of those gases to that of the glass covering a greenhouse, which bounces heat back into the building and keeps it warm even when it’s cold outside.
Over time, scientists began to develop a more nuanced view of how gases form and act. Not all gases on Earth are greenhouse gases. The amount of greenhouse gases in the atmosphere depends on sources (natural and man-made processes that produce them) and sinks (reactions that remove the gases from the atmosphere). Carbon dioxide is only part of that equation, and only the second most abundant greenhouse gas on Earth.
At the top of the list is water vapor, the granddaddy of all greenhouse gases. Water vapor is present wherever there’s measurable humidity. Clouds aren’t water vapor—water vapor is invisible. But that doesn’t mean it’s not abundant: About 80 percent of the atmosphere’s total mass of greenhouse gas is water vapor.
Water vapor sounds pretty nonthreatening, but it’s part of a cycle that is warming Earth. Here’s where it gets confusing: Water vapor doesn’t cause global warming, but it worsens it. As carbon dioxide and other emissions grow, water vapor increases, too. More concentrated water vapor and higher evaporation rates mean more global warming.
The phenomenon is called stratospheric water vapor feedback, and it’s concerning to Sean Davis, a CIRES research scientist working at the National Oceanic and Atmospheric Administration whose research focuses on the gas. “It’s really a complicated problem,” he tells Smithsonian.com. In 2013, Davis and colleagues showed evidence of that vicious cycle—and suggested that it contributes significantly to the sensitivity of Earth’s climate. Though satellites and space-based radar that monitor precipitation are now available to researchers, he says, they still need more data about how water vapor and carbon dioxide interact in Earth’s atmosphere.
Methane, the third-most abundant greenhouse gas, presents a similar quandary for researchers. In recent years, they’ve learned much more about how the gas, which is the second most emitted in the United States, contributes to global warming. Methane is emitted by everything from farting cows to wetlands and natural gas systems, and industry, agriculture and rotting trash make sure plenty is spewed into the atmosphere. But even though, molecule for molecule, the gas warms Earth far more than CO2 (up to 86 times as much over a 20-year period), both sensors and environmental watchdogs often underestimate its emissions.
Other gases contribute to climate change and global warming—there’s nitrous oxide, which is emitted by fertilizer and has become one of the biggest ozone depleters in the atmosphere. You may know the gas better in its incarnation in dentists’ offices and whipped cream dispensers, but there’s plenty of nitrous in the atmosphere, too. Since the beginning of the industrial era in the 1700s, nitrous oxide levels have grown, and atmospheric levels of the gas could nearly double by 2050.
Nitrous oxide isn’t alarming just because of its warming power (one molecule traps as much heat as 300 CO2 molecules). It can take over a century for a molecule of N2O to degrade. In the meantime, it contributes to ozone loss in the atmosphere, which in turn spurs warming on Earth. There’s still plenty scientists don’t know about N2O: For example, its ozone-depleting potential seems sensitive to different environmental conditions. It may take decades before it’s clear just how the gas reacts with other GHGs and the changing climate.
Though chlorofluorocarbons, or CFCs, are non-toxic to humans and are inert in the lower atmosphere, things are different once they reach the stratosphere. There, the man-made chemicals eat up ozone, and they are still present in today's atmosphere despite sweeping regulation aimed at closing the ozone hole.
Like N2O, CFCs last long periods of time in the upper atmosphere. They’re being phased out with good reason: On a molecule-by-molecule basis, CFCs have a much higher global warming potential than carbon dioxide. For example, CFC-13 (also known as Freon 13), which cools some industrial freezers, is 16,400 times as warming as carbon dioxide over a 500-year period. CFCs are banned in the United States, but plenty made their way into the atmosphere before the Montreal Protocol, which was agreed to in 1987. Though they are no longer present in deodorant cans and spray bottles, they’re still up above, breaking down ozone. (It would hypothetically be beneficial for N2O and CFCs to "eat" ozone when it's in the troposphere, where it's technically considered a "bad" greenhouse gas. But once ozone makes it up to the stratosphere, it actually protects Earth from the sun's brutal rays.)
It’s tempting to think that because CO2 has so many counterparts, it’s not worth worrying about. But just because CO2 isn’t the only greenhouse gas doesn't mean it’s not cause for concern. “A lot of people use [greenhouse gases] to downplay the importance of carbon dioxide,” says Davis. “That’s the biggest issue we face.” Some gases may be more abundant, but none stand alone—and with CO2 rates rising at unprecedented levels, it’s difficult to estimate just how dire the consequences of unchecked emissions of any kind might be.
Darius Nassiry, an assistant of Dr. Bert Drake, a plant physiologist at the Smithsonian Environmental Research Center (SERC), examines a marsh plant in a steel pressure cylinder at SERC near the Chesapeake Bay in Edgewater, Maryland. This was an innovative Smithsonian experiment to explore the future effects of the greenhouse effect.
Venus might be the closest planet to Earth and the most similar in size, but it's a scary place: the atmosphere is hot, the air is poison, rain is made of sulfuric acid and volcanoes pepper the surface. It was also the inspiration for our modern understanding of the hazards of a strong global greenhouse effect.
Yet for all its importance, Venus has been studied relatively little compared to some of the other planets. After a burst of activity in the 1970s and 80s, our attention on Earth's smoldering twin has largely waned.
The European Space Agency's Venus Express orbiter is an exception, and for the past eight years VEX has been circling the planet. But now the spacecraft is out of fuel, and its main mission has come to an end.
Not content to let VEX retire just yet, the ESA is going to take one last step to get everything it can out of the little spacecraft. Over the coming weeks the space agency is going to maneuver VEX out of its safe orbit and gradually push it into Venus' atmosphere.
Sensors aboard VEX will be able to gather direct observations of the temperature and pressure within Venus' atmosphere, says Space Fellowship, along with measurements of the planet's magnetic field, the properties of the solar wind and the composition of the air.
The increased drag from the thickening atmosphere will likely kill the orbiter, says the ESA, yet with so few missions making it to Venus it's important to study everything you can while you're there.
Despite its hellish conditions today, Venus may once have been a welcoming world. It's just a bit smaller than Earth, and if water arrived at both planets the same way, Venus could have once hosted oceans on its surface. At some point, however, its atmosphere took off in a runaway greenhouse effect, and now surface temperatures are hot enough to melt lead.
Planetary scientists have been trying to figure out what happened to poor Venus to trigger this dramatic transformation. Now simulations have offered an intriguing—if still very early—theory: Venus developed its stifling atmosphere following a collision with a Texas-sized object.
Cedric Gillmann of the Royal Observatory of Belgium and his colleagues simulated what would happen if various sized objects crashed into Venus. They found that immediate effects, such as blowing part of the atmosphere into space, made only small changes that the planet could quickly recover from. But a significant impact could have driven changes deep within the mantle that could have changed the geology and atmosphere of the planet over hundreds of millions of years, especially if it occurred when Venus was relatively young.
"There are some periods of time when a large impact can be enough to switch a cool surface to a hot surface and change the history of the planet," Gillmann says.
According to their models, if a spherical object between 500 and 1,000 miles wide hit Venus, energy from the colliding object would have heated the upper mantle enough to melt it. That melted portion would have risen to the surface, spreading into a long, shallow layer just beneath the crust. Water and carbon dioxide within the mantle could then be released to the surface as gases, which could have caused a significant shift in the planet's atmosphere.
If Venus suffered an impact early enough in its lifetime, water released from the mantle could have then been stripped away by the stronger solar wind streaming from a more active young sun, leaving behind a drier planet. With the bulk of the planet's water pulled from the mantle early on, little would be left to become trapped in the atmosphere once solar activity calmed down. The resulting dense atmosphere, rich in carbon dioxide, would help to dramatically heat the planet, the team reports in the April issue of Icarus.
"A large collision is going to affect not just the formation of large craters on the surface, but it may also affect the atmosphere through a range of processes," says Simone Marchi of the Southwest Research Institute in Colorado, who was not involved in the research. "[The new study] focuses on an effect that perhaps has not been fully investigated in the past—what happens precisely to the internal evolution of the planet."
Impacts of objects of this size are rare. According to other studies, bodies roughly the size of the dwarf planet Ceres, which is 590 miles wide, crash into planets approximately once in their lifetime. Larger objects are even rarer.
"No such impacts should have happened in the last 3 billion years or so," Gillmann says. Still, we know that the early solar system went through a period called the Late Heavy Bombardment, when fragments of protoplanets smashed into the rocky worlds near the sun, leaving scores of craters. And there's plenty of evidence Earth suffered a significant collision in its youth. Scientists think that a Mars-sized body slammed into our planet, carving out the material that formed the moon.
So why didn't Earth wind up with a super-greenhouse effect? The object that struck Earth is estimated to have been far larger, around 4,000 miles wide. Such a drastic impact would have completely removed and reformed Earth's surface, essentially allowing it to be reset. On Venus, however, the crust would have remained intact, with only a small portion of the mantle allowed to leak out into the planet's atmosphere.
Radar maps of Venus's surface show a world dominated by volcanic structures. (NASA/JPL)
If a massive impact really did scar Venus enough to change its atmosphere, other effects aren't readily apparent. The planet's surface is fairly young, covered up with lava that could have come from an impact or from its once active volcanoes. But there are more indirect clues. The planet has a strangely slow rotation—a day on Venus is longer than its year—and it spins backwards compared to the rest of the planets in the solar system.
Previous studies have suggested that Venus's strange spin could have been caused by a major impact. Still, a significant impactor isn't the only way to heat up the planet's atmosphere. Volcanoes erupting over the course of billions of years could also have funneled carbon dioxide from the mantle to the surface, heating the planet over its history.
Marchi adds that he would like to have seen more detailed estimates on the amounts and composition of the gases removed from the various collisions, factors which would depend on when in the history of the planet an impact occurred.
"This is a very fundamental process not just for Venus, but for all the terrestrial planets," he says.
One of the biggest difficulties in creating more detailed models comes from the fact that we have very little data to work with. While Mars has received a slew of robotic visitors over the past 40 years, Earth's "evil twin" has garnered much less attention.
"At the moment, we simply do not have a lot of information on the history of Venus, which could help us find out evidence of an impact," Gillmann says. "We hope that further missions and observations could find some areas which could be older."
Scientists now understand how the carbon and methane emissions from our cars, livestock and electricity use are helping drive dramatic shifts in our climate through their contribution to the greenhouse effect. But they’re just beginning to untangle the effects of some of the other pollutants we produce. For instance, iron emissions from coal burning and steel smelting could actually be helping the oceans thrive and suck up more atmospheric carbon, according to new research.
If that sounds like a good thing, it isn't. When we reduce our levels of iron oxide emissions—which we ultimately have to, to protect humans and animals from inflammation and other adverse health effects—it will necessitate an even more drastic reduction in pollution to avoid the effects of climate change, the researchers warn.
Iron is a vital nutrient for nearly all living things. Humans need it to make new blood cells, while many plants need it to perform photosynthesis. However, iron is relatively rare in the open ocean, since it mainly comes in the form of soil particles blown from the land. For the trillions of phytoplankton in Earth's oceans, iron is a "limiting nutrient," meaning the available amount of it is a natural check on these creatures' population size. (To prove this, scientists in the early 1990s dumped iron across a 64 square kilometer region of the open ocean and quickly observed a doubling in the amount of phytoplankton biomass.)
Some scientists have proposed taking advantage of this fact through geoengineering, or deliberately intervening in the climate system using technology. Much like forests on land, phytoplankton in the ocean serve as "carbon sinks" because they take up carbon dioxide and then take that carbon with them into the deep ocean when they die. Therefore, adding more iron to the seas could potentially make these sinks even more potent at sucking up the carbon humans have dumped into the atmosphere, these proponents reason.
But the new research suggests that humans are already—albeit inadvertently—geoengineering this process, according to a study published today in the journal Science Advances.
Despite its promises to halt the growth of its carbon emissions by 2030, China remains the world's largest producer and burner of coal and the largest manufacturer of steel. Along with carbon, steel smelting and coal burning release particles of iron that can easily be carried away by the wind. Scientists have speculated for years that all those emissions could be fertilizing the oceans with extra iron, thus driving phytoplankton population growth, says Zongbo Shi, an environmental scientist at England's University of Birmingham.
These iron particles come in the form of iron oxides produced by burning, and are thus insoluble and unable to be consumed by the plankton on their own. However, emitted along with those iron oxide particles are acidic gases like sulfur dioxide and nitrogen oxides, Shi says. These gases could react with the iron oxide molecules as they're carried through the atmosphere to form soluble forms of iron.
"No one could prove this definitively," Shi says. He and his collaborators set out to fix that. In 2013, the researchers carefully collected aerosol particle samples from the air from a boat in the Yellow Sea between China and South Korea. Then, they used sophisticated electron microscopes and other detection techniques to parse out the composition of these particles.
The researchers found that the particles included sulfates that contained soluble iron. Since there is no natural source of iron sulfates in the atmosphere, Shi says, they concluded that these particles must have derived from human emissions. "We have proved that this process indeed exists," Shi says.
Phillip Boyd, a marine biogeochemist at the University of Tasmania who was not involved in the research, says the study provides "compelling evidence" that these atmospheric interactions can make emitted iron available to ocean life. However, the scientists are "sort of halfway there" when it comes to seeing how much impact manmade iron fertilization actually has, says Boyd, who is a leading researcher on ocean-climate interactions and geoengineering.
Eastern China has iron-rich soil and is close to the iron-rich Gobi Desert, Boyd says, meaning that there is plentiful natural iron potentially seeding the oceans there. Determining how much of the iron in the air is from natural versus industrial sources will be the "acid test" for how much effect human emissions are actually having on ocean life, according to Boyd.
Shi agrees that it is vital to understand the human contribution to this process. Next, he plans on working to collect more atmospheric and oceanic data to build a thorough model of human iron fertilization of the oceans going back a century. This model would also be able to predict how much impact our 150 years of human industry have had on the levels of carbon in the atmosphere.
It may turn out, Shi says, that our emitted iron has helped tamp down atmospheric carbon levels. "If the amount of soluble iron is being doubled [in the oceans]," says Shi, referencing a 2011 study, "then you'd expect to have something like 30 [extra] gigatons of carbon dioxide being absorbed by the ocean in a century."
Reducing the amount of iron being deposited into the oceans through reducing emissions could make efforts to reduce the greenhouse effect even harder, he says. "There will be less phytoplankton, less carbon dioxide absorbed by the ocean," Shi says.
However, Shi is wary of proposals to dump iron into the oceans to geoengineer away the greenhouse effect. "Geoengineering is a very controversial subject," he notes, referencing the fierce debate over this kind of large-scale human intervention and its many potentially unintended effects. With respect to artificial iron fertilization, biologists fear that it could lead to widespread algal blooms which could choke out oxygen from the water for other ocean creatures and lead to yet unknown effects.
What’s certain is that we cannot continue spewing iron emissions at our current rate, says Shi, because they have been shown to cause inflammation in people who inhale them and could harm other living things. People may think that “by releasing iron, it could potentially do us a favor,” he says. But while they may help the planet, at least in the short term, these “particles are always not very good” for human health, he adds.
The 2015 Paris climate agreement represents one of the first attempts at a truly global response to the threat of climate change. For nearly two years, the pact has linked almost every country in the joint effort to cut back greenhouse gas emissions and stave off human-influenced climate change. As of yesterday, that effort does not include the United States.
President Donald Trump announced on Thursday that the U.S.—a major player on the climate scene and one of the treaty's de facto leaders—would be pulling out of the historic pact. “In order to fulfill my solemn duty to protect America and its citizens, the United States will withdraw from the Paris Climate Accord,” he announced at a press conference at the White House Rose Garden.
The controversial decision makes the U.S. one of just three countries that are not part of the voluntary agreement, the other two being Syria and Nicaragua. It also reverses the past administration’s efforts on climate change, following recent actions to begin dismantling Obama-era climate protection policies.
But it doesn't take America out of the climate equation. No matter how you crunch the numbers, the U.S. still ranks among the top greenhouse gas emitters in the world. Based on data from the European Commission, Joint Research Center/Netherlands Environmental Agency and Emissions Database for Global Atmospheric Research, the top five emitters in what’s known as "carbon dioxide equivalents" (CO2 eq) released in 2012 are as follows:
China (12.45 million kilotons CO2 eq)
United States (6.34 million kilotons CO2 eq)
India (3.00 million kilotons CO2 eq)
Brazil (2.99 million kilotons CO2 eq)
Russian Federation (2.80 million kilotons CO2 eq)
Importantly, these numbers are based on CO2 equivalents. That means they include all the greenhouse gases a country emits—including carbon dioxide, methane, nitrous oxide and fluorinated compounds—to reflect the fact that warming results from a combination of gases released from both natural and human activities. By measuring emissions in equivalents, scientists can take into account the differing impacts of each of these gases on the atmosphere.
You’re probably familiar with carbon dioxide, which is emitted through fossil fuel combustion and industrial processes, as well as forestry and land use. It’s by far the most ubiquitous gas humans emit, making up 76 percent of global greenhouse gas emissions in 2010. But methane comes in at an important second. Methane is a much more potent warming agent: scientists estimate that it has 25 times the impact of CO2 over a 100-year period. And while it isn’t just cow farts driving this trend, agricultural activities—including waste management—and burning of biomass do release methane into the environment.
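The arithmetic behind those “CO2 equivalent” figures is simple: each gas’s tonnage is multiplied by its global warming potential (GWP) relative to CO2 and the results are summed. A minimal, illustrative Python sketch follows, using the 100-year figures cited in this article (25 for methane, roughly 300 for nitrous oxide); the function name and the tonnages in the example are hypothetical.

```python
# 100-year global warming potentials (GWPs), relative to CO2.
# Methane's value of 25 and nitrous oxide's ~300 come from the
# figures cited in this article; real inventories use the official
# IPCC tables.
GWP_100 = {"co2": 1, "ch4": 25, "n2o": 300}

def co2_equivalent(emissions_tons):
    """Return total warming impact in tons of CO2 equivalent.

    emissions_tons maps a gas name to tons of that gas emitted.
    """
    return sum(GWP_100[gas] * tons for gas, tons in emissions_tons.items())

# Hypothetical example: 100 tons of CO2 plus 4 tons of methane
# count the same as 200 tons of CO2 alone (100*1 + 4*25).
print(co2_equivalent({"co2": 100, "ch4": 4}))  # 200
```

This weighting is why a country with modest CO2 output but heavy agricultural methane emissions can still rank high in CO2-equivalent terms.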
Under the Obama administration, the U.S. had committed to a 26 to 29 percent reduction of greenhouse gas emissions below its 2005 level by 2025. However, as an analysis from four European research organizations known as the Climate Action Tracker points out, without any further action, the country will miss its commitment “by a large margin.” One of the most significant steps in U.S. climate actions was the Clean Power Plan, announced in August 2015. But the EPA has been ordered to review and possibly revise this plan, which means significant challenges lie ahead in meeting emissions targets.
Overall, global CO2 emissions have slowed since 2012, which could reflect changes in the world’s economy and investments in energy efficiency. Both China and India—the two other leading greenhouse gas emitters—are well on track to meeting their emission goals, according to the Climate Action Tracker. China in particular has taken significant steps toward shuttering coal-fired power plants and increasing its reliance on renewable energy. Experts predict that America’s withdrawal from the Paris agreement gives the Chinese government the opportunity to take the lead in the fight against climate change.
What will the actual effects of America’s withdrawal look like? For starters, nothing will happen immediately. The accord stipulates a four-year legal process for a country to pull out, meaning the soonest the U.S. could officially withdraw is 2020 (as news outlets have pointed out, this also means that a future U.S. president could potentially opt to stay in).
Even then, many argue that the move won’t necessarily change U.S. progress toward reducing its emissions. From falling renewable energy prices to state-level commitments to continue efforts to stanch emissions, America is already working toward lowering greenhouse gases. Others have argued that the Paris Agreement could even be stronger without the U.S., whose participation—given President Trump’s stated commitment to bringing back coal and reducing regulations on industry emissions—could “water down” the treaty’s goals, writes Robinson Meyer for The Atlantic.
Moreover, as a recent Gallup poll suggests, the American public strongly supports a continued shift away from environmentally harmful forms of energy like oil, gas and coal, with 71 percent favoring an emphasis on alternative energy sources like solar and wind. “Given the choice, the majority of Americans think protecting the environment should take precedence over developing more energy supplies, even at the risk of limiting the amount of traditional supplies the U.S. produces,” according to Gallup’s website.
It is now up to the American public—as individuals, companies and communities—to take the lead in reducing their impact on the environment in whatever way they can. As David Moore, ecosystems scientist at the University of Arizona wrote on Twitter after the announcement: “Walk it off … walk it off … then get to work with your local school, city, or state to make the world more sustainable.”
One day the world will end, and unless we've managed to Noah's Ark ourselves into the deep recesses of space, we'll end along with it. The sun is getting brighter—roughly 1 percent every 110 million years—and eventually this ticking increase is going to cook us out of a home.
When it really comes down to it, the sun has control over our planetary thermostat. As the temperature rises, more water is evaporated into the atmosphere. Water vapor is a strong greenhouse gas, and soon enough we've got a runaway greenhouse effect. Then, bam, 650 million years later the Earth has turned into Venus.
According to a new study, though, we may have a little more time than that. What previous estimates tended to neglect was the Earth's climate system—how the land and the air and the sea all interact, keeping each other in check. Using a more advanced climate model, two scientists, Eric Wolf and Owen Brian Toon, dug into the details of the apocalypse.
We don't need to get all the way to a Venus-level disaster, they say, for Earth to be a rather awful place to live.
“While a catastrophic runaway greenhouse would unquestionably sterilize the planet, habitability may become threatened before this ultimate tipping point is reached,” the scientists wrote in their study. “A more stringent estimate for the hot limit to planetary habitability is based on the so-called moist greenhouse climate.”
Even at lesser levels of warming, the upper portions of the Earth's atmosphere will become wetter. And water in the upper atmosphere is more likely to break down and be lost to space. Eventually, the scientists say, the increasing warming will cause “Earth’s oceans to effectively evaporate away to space.”
On the one hand, this will delay the transformation of the Earth into a giant hot mess. On the other, the oceans will be evaporating.
The scientists found that the Earth would stay “habitable” until the sun's output ticked up to at least 15.5 percent higher than it is now—giving us roughly 1.5 billion years left.
But these end days would not be happy days.
First off, when it gets this hot, clouds will cease to exist. Instead, the air will be steam. Then, says Nanci Bompey, reporting on the paper for the AGU's blog:
Temperatures in areas just below the Arctic Circle would resemble today’s tropics, and there would be a lot more rain as the oceans evaporated...
"There would be twice as much rainfall everywhere, a lot more floods and things like that,” Toon said. “It will be like a really unpleasant day in the Sahara Desert, but rainy.”
With a 15.5 percent increase in solar output, the scientists say, the annual average temperature in the tropics would be 114°F. At the poles, 74°F.
But, still, good news, right?
“While such a hot climate would undoubtedly provide great challenges for humanity, Earth will remain safe from both water loss and thermal runaway limits to habitability even for a 15.5% increase in solar constant," the authors of the new study write. We'll just all be camping out at the South Pole (the North Pole will be long gone) and pretending it's the Australian outback.
One important aside: In terms of comparing the sun-induced apocalypse against modern warming, the two are really not on the same scale, at all. In this study, the authors say that a 2 percent increase in the sun's energy is equal to us doubling the atmospheric concentration of carbon dioxide. Matching a 15.5 percent increase in solar energy, then, isn't really something we could do.
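Taking the study's stated equivalence at face value—2 percent more solar output per doubling of CO2—a quick back-of-envelope calculation (a sketch under that assumption, not a claim from the paper itself) shows just how far out of reach a 15.5 percent increase is:

```python
# The study equates a 2% rise in solar output with one doubling of CO2.
SOLAR_PCT_PER_DOUBLING = 2.0

def equivalent_co2_factor(solar_increase_pct):
    """Multiple of today's CO2 concentration that would match a given
    percentage increase in solar output, under the 2%-per-doubling rule."""
    doublings = solar_increase_pct / SOLAR_PCT_PER_DOUBLING
    return 2 ** doublings

# A 15.5% brighter sun corresponds to 7.75 doublings of CO2,
# i.e. roughly 215 times today's concentration.
print(round(equivalent_co2_factor(15.5)))  # 215
```

Two hundred-odd doublings' worth of carbon is not something human emissions could plausibly match, which is the point of the aside.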
The Pritzker is considered to be the Nobel Prize of the architecture world, and now there's a new name to add to the list of the award that recognizes the profession's greats. Previously honored luminaries include Frank Gehry, Oscar Niemeyer and I. M. Pei. The Pritzker Architecture Prize jury announced on Wednesday that Chilean architect Alejandro Aravena is the 2016 laureate.
Aravena, a 48-year-old architect based in Santiago, Chile, is perhaps best known for what he left unfinished. In a characteristic Chilean public housing commission, Aravena designed unfinished buildings—essentially half-completed houses—that were then finished and perfected by the homeowners themselves. The project was designed to meet tight budget constraints ($7,500 per house) and give the low-income residents a sense of ownership and investment.
"Instead of designing a small house," a representative from Aravena's architecture firm, ELEMENTAL, tells ArchDaily, "...we provided a middle-income house, out of which we were giving just a small part now."
This creative approach to modern architecture has already won Aravena plenty of accolades. Last year, he was named director of the prestigious 2016 Venice Biennale, one of the world's most famous exhibitions. He's been working to help rebuild the Chilean city of Constitución, which was hit hard by a 2010 earthquake and tsunami. The team he leads was given just 100 days to create a master plan for the entire city—and their ambitious scheme includes planting an entire forest to make the city less vulnerable to flooding.
His monolithic public buildings incorporate energy-efficient details, too. When the Universidad Católica de Chile asked him to build a glass tower, for example, Aravena pushed back. Though glass works well for a building's exterior, he reasoned, it also creates hot, greenhouse-like conditions inside. So he designed a glass building and then designed a building within that building made out of fiber cement to encourage hot air convection and reduce the greenhouse effect of the glass. His "Siamese Towers" are now one of Santiago, Chile's most graceful—and energy-efficient—structures.
Aravena will receive a grant of $100,000 and a swanky gold medallion as his prize, which he will accept at an award ceremony at the United Nations Building in New York this spring. Here's what the Pritzker Prize committee had to say about his work:
He understands materials and construction, but also the importance of poetry and the power of architecture to communicate on many levels....As the jury visited Aravena’s projects, they felt a sense of wonder and revelation; they understood that his is an innovative way of creating great architecture, with the best yet to come.
As wind energy capacity continues to grow, those who oppose it—for instance, people who don't want turbines built near their property—have seized upon all sorts of reasons to argue against construction of new turbines.
One of these reasons is the idea that in harnessing wind energy, turbines disturb air currents to a degree that they actually alter the climate of the surrounding area. Most of these arguments cite a 2012 study that observed 1.3°F of warming over the course of a decade in western Texas and attributed it to the construction of several large wind farms.
But the researchers of that study noted that the warming they observed occurred only at night, and was simply the effect of warmer air—which generally settles higher than ground level during nighttime—getting chopped up by whirling turbines, with some of it coming down to ground level. As a result, this mechanism would not drive long-term climate change in the same way as the greenhouse effect—it would simply make the area immediately surrounding the turbines a bit warmer than otherwise, and air at higher altitudes a bit cooler.
A new study, published today in Nature Communications, considers the climatic effect of mass wind turbine construction on a much broader region: Europe. Using climate modeling software, a group of French researchers led by Robert Vautard calculated the impact of doubling current wind energy capacity across Europe, the amount necessary to hit the EU's goal of reducing greenhouse gas emissions by 20 percent by 2020.
They found that the construction of all these turbines would only alter climate during the winter, and wouldn't cause temperatures to rise by more than 0.54°F (0.3°C)—firmly within the range of natural year-to-year variability, and far less than the long-term effect of greenhouse gas emissions in driving global climate change.
The researchers came to the finding by using existing atmospheric models and adding in the simulated effect of turbines, which causes increased turbulence between air layers and increased drag on wind currents. For existing turbines, they incorporated manufacturer data on height and rotor size, using it to calculate effects on passing wind currents. They placed hypothetical future turbines in areas with the fastest wind speeds (mostly in Northern Germany, Denmark, Spain and Italy, along with offshore farms on the coasts of the English Channel, the North Sea and the Baltic Sea). With the turbines in place, they simulated Europe's climate over the course of 33 years, and compared it to a scenario where the continent had no turbines at all.
The model predicted that, even with the projected increase in European wind turbines by 2020, the effects on daily temperature and rainfall would be minimal. The turbines would produce a slight current of air flow moving clockwise over Europe, but its influence on weather would be undetectable for most of the year.
Only in December, January and February were the turbines projected to trigger fluctuations in weather that the researchers could detect, but these were still considered negligible: temperature might increase or decrease, but not by more than 0.54°F, and precipitation might increase somewhere between zero and five percent in total.
Compare this to normal fluctuations: On an annual basis, European temperatures naturally vary by 10 percent on average, and precipitation varies by 20 percent. Superimposed on this, the effect of the turbines barely registers a blip.
Of course, with any predictive model, there is uncertainty. But in building the model, the scientists calibrated it with actual weather data (temperature, wind speed, precipitation, air pressure and other measures) collected every three hours in thousands of weather stations across Europe for all of 2012, making slight adjustments until the model closely replicated the behavior of air currents as they actually flowed across Europe during that period. This calibration increases the chance that the model reflects real world conditions.
The researchers do allow that water-atmosphere interactions are more complex (and less well-understood) than land-atmosphere interactions, so the findings may apply better to onshore wind farms than those located offshore. Another possible limitation is that rotating turbines could alter atmospheric currents at an even larger scale, which wouldn't be detected by the model, as it only simulated climate conditions over Europe.
Nevertheless, the new study is one of the largest-scale pieces of research into the climatic effects of wind turbines yet, and its findings are pretty damning for the claim that they dramatically alter climate. There are other plausible environmental reasons why you might be anti-wind power (they do kill birds, although significantly fewer than fossil fuel power plants do through pollution and climate change), but if you're looking for a more substantive argument against turbines other than the fact that they ruin your view, you'll probably have to look elsewhere.
The MLS's Processing Board/Filter Bank processed data received through the GHz Receiver, looking for the spectral signatures of key chemicals, such as chlorine monoxide, a molecule that breaks down and depletes atmospheric ozone.
NASA transferred this object to the museum in 2016.
When talking about climate change, not all fossil fuels are created equal. Burning natural gas, for instance, produces nearly half as much carbon dioxide per unit of energy compared with coal. Natural gas is thus considered by many to be a “bridge fuel” that can help nations lower carbon emissions while they transition more slowly from fossil fuels to renewable, carbon-neutral forms of energy. The recent boom in natural gas production in the United States, for instance, contributed to a 3.8 percent drop in carbon emissions in 2012.
But natural gas has a climate downside—it’s mostly composed of methane. “Methane is a potent greenhouse gas,” said energy researcher Adam Brandt of Stanford University. The gas is about 30 times better at holding in the atmosphere’s heat compared with carbon dioxide. So if enough methane leaks during production, natural gas’s slim advantage over other fuels could be wiped out.
A report published today in Science, however, concludes that the United States’ leaky natural gas production system currently isn’t leaking enough methane to make it a worse fuel for the climate than coal.
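To see why the leak rate is the crux, it helps to sketch the break-even arithmetic. The per-unit-heat emission figures below are illustrative round numbers assumed for this sketch—only the roughly 2-to-1 coal-to-gas CO2 ratio and the ~30x methane factor come from the article—so the exact break-even point should not be read as the report's finding:

```python
# Back-of-envelope: at what upstream leak rate does natural gas lose its
# heat-for-heat climate advantage over coal?
# Assumed round numbers: coal ~95 g CO2/MJ, gas ~50 g CO2/MJ,
# ~18 g of CH4 burned per MJ of gas; CH4 GWP ~30 (per the article).
COAL_CO2_G_PER_MJ = 95.0
GAS_CO2_G_PER_MJ = 50.0
CH4_G_PER_MJ = 18.0
CH4_GWP = 30.0

def gas_co2e_per_mj(leak_fraction):
    """Grams of CO2e per MJ of delivered gas, counting leaked CH4 upstream."""
    leaked_ch4 = leak_fraction / (1 - leak_fraction) * CH4_G_PER_MJ
    return GAS_CO2_G_PER_MJ + leaked_ch4 * CH4_GWP

# Scan leak rates to find roughly where gas stops beating coal.
for pct in range(0, 11):
    if gas_co2e_per_mj(pct / 100) > COAL_CO2_G_PER_MJ:
        print(f"break-even leak rate is roughly {pct}%")
        break
```

Under these assumptions, gas loses its edge only at leak rates of several percent—well above the device leak rates described below—which is the intuition behind the report's conclusion.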
The natural gas production system is not sealed tight. There are some areas where methane is allowed to leak intentionally for safety reasons, but there are also a lot of leaky valves and cracked pipes out there that can let the gas out. Quantifying all those leaks, though, has proven tricky.
The Environmental Protection Agency provides estimates of methane emitted in the United States. To calculate these estimates, someone has to go to a facility and take direct measurements from various equipment and devices. Those measurements are added up to get a total for the facility. And the facilities where the measurements are taken will serve as the basis for calculations of methane emissions for a type of source or a region.
These official estimates, however, probably underestimate total methane leaked because the devices that are sampled to provide those estimates aren't necessarily representative of all of the devices used by the natural gas industry to produce and move its product. In addition, sampling is expensive and limited. It also only takes place at locations where facilities let the EPA in—those facilities may be different from the average facility, leading to sampling bias.
Studies that have directly measured methane levels have gotten much different results. Atmospheric tests that have covered the entire United States come up with methane emissions that are about 50 percent higher than the EPA estimates, according to the new paper in Science. Partly that’s because air sampling will pick up both anthropogenic methane and methane from natural sources, such as wetlands. But natural sources account for only a fraction of the discrepancy—the rest points to the EPA’s estimates falling short.
The air sampling studies, though, have found some odd peaks in regional methane emissions, causing scientists to worry that there could be a lot more methane leaking from sites of natural gas production than thought. So Brandt and his colleagues began tallying up all the places where natural gas production could be leaking methane along with other sources of methane that could be mistaken for natural gas emissions.
The large natural gas leaks suggested in regional studies “are unlikely to be representative of the entire [natural gas] industry,” they write. If there were natural gas leaks of that magnitude across the natural gas industry, then methane levels in the atmosphere would be much higher than those measured in the air sampling studies. “Most devices do not leak,” Brandt noted. Only about 1 to 2 percent of the devices used in natural gas production leak any methane, and large emitters—what the researchers nickname “superemitters”—are even rarer.
Brandt and his team then took a look at all the excess methane being released into the atmosphere. For their calculations, they assumed all that methane was coming from the natural gas industry. That’s unlikely, they note, but it makes for a good worst-case scenario. But even that level of methane wasn’t enough to make natural gas a bigger greenhouse gas contributor than coal, the researchers found. And switching from coal to natural gas for energy production does reduce the total greenhouse effect on a scale of 100 years, the standard scientists use in calculations like these.
“We believe the leakage rates are likely higher than official estimates, but they are unlikely to be high enough to disfavor shifting from coal to natural gas,” Brandt said.
Natural gas has also been promoted as a cleaner fuel than diesel, and it’s replaced that fuel in many trucks and buses on city streets. But the climate benefits of such a switch are not as clear as the switch from coal to natural gas.
Taking into account methane leaks from extraction all the way down the pipeline to the pump may actually make natural gas less climate friendly than diesel. But it’s probably not time to abandon the natural gas bus. “There’s all sorts of reasons we might want to [replace] diesel buses,” Brandt says. For example, burning natural gas results in less air pollution and less reliance on imported petroleum.
For natural gas to assert itself as a more environmentally friendly fuel, though, the industry is going to have to plug up its leaky system. Companies may find it worth their while to do so, and not simply for the climate benefits. Less leakage equals more profit, and plugging just a few of the biggest leaks could easily increase income, Brandt says. “If we can develop ways to quickly and cheaply find these sources, it’s going to be very profitable for companies.”
Tesla Motors and SpaceX founder Elon Musk appeared on The Late Show with Stephen Colbert Wednesday and raised eyebrows among a surprising set of viewers: scientists. As Samantha Masunaga reports for The Los Angeles Times, Musk’s claim that nuking the poles of Mars could make the planet livable is “raising red flags” within the scientific community.
That’s right — Musk suggested using nuclear weapons to make Mars a better place for humans on national television. He was referring to the idea of terraforming, in which Mars would be transformed from what the entrepreneur calls “a fixer-upper of a planet” to a habitable environment. In order to do so, scientists would have to figure out a way to transform the planet’s inhospitable surface to one on which human life could thrive. One theory on how to do so involves dropping thermonuclear weapons on Mars’ icy poles so they would vaporize and raise the planet’s miserably chilly temperatures.
Musk’s comment led Colbert to call him a “supervillain,” but scientists seem to think his idea is more misguided than evil. Brian Toon of the University of Colorado, Boulder tells Masunaga that, while there are many theories on how to terraform Mars, “blowing up bombs is not a good one.”
Gary King, a microbiologist at Louisiana State University, agrees—he tells NBC News’ Keith Wagstaff that scientists aren’t sure dropping nuclear bombs would work. In fact, King says, the plan could backfire. “Cloud formation could have a dampening effect, for instance, cooling Mars rather than warming it,” he tells Wagstaff.
There is a slower solution: Create a greenhouse effect, heating the planet until it can support liquid water, plants and perhaps humans. But as physicist and “acolyte of Mars” Casey Handmer writes, the energy required would be “10 million times more than the energy provided by the largest nuclear weapon ever built.”
Maybe a nuclear bomb could help speed up that process, but it would almost certainly result in the destruction of Mars’ surface — something NASA is against. An agency representative told Masunaga that though they, too, are all about colonizing Mars, they’re also “committed to promoting exploration of the solar system in a way that protects explored environments as they exist in their natural state.”
The MLS's GHz Receiver helped analyze the behavior and effects of several molecules in the upper atmosphere, particularly chlorine monoxide, a molecule that breaks down and depletes atmospheric ozone.
NASA transferred this object to the museum in 2016.
A tree that Sam Van Aken grows might look like any other—until it blooms. First, its branches blossom in different shades of pink, white and crimson, and then, quite magically, the tree displays a mix of fruit.
Van Aken's Tree of 40 Fruit, an invention that’s just what it sounds like, is capable of producing 40 different varieties of fruit—plums, peaches, apricots, nectarines, cherries and others. The 42-year-old sculptor and art professor at Syracuse University created his first multi-fruit tree back in 2008, by grafting together branches from different trees. He intended to produce a piece of natural art that would transform itself. He thought of the tree as a sculpture, because he could, based on what he grafted where, determine how it morphed.
Today, there are 18 of these wondrous trees across the country, with three more being planted this spring in Illinois, Michigan and California. Seven are located in New York—including the very first Tree of 40 Fruit that’s still on the Syracuse campus—and six more are in a small grove in Portland, Maine. Other individual trees, reportedly costing up to $30,000, have been purchased for private homes and museums, such as the 21C Museum/Hotel in Bentonville, Arkansas. That one, says Van Aken, may be the “most beloved” of his trees. “From the day it was planted,” he says, “it seemed to have some draw for people.”
The kindest cut
While it takes precision, the grafting required to create these multi-fruit trees is not that complicated a process. Van Aken, who grew up on a farm in Pennsylvania, takes a slice of a fruit tree that includes buds and inserts it into a matching incision in a host tree, one that’s been growing for at least three years. He then wraps electrical tape around the spot to hold the pieces together. When all goes well, the “veins,” he says, of the different trees flow into each other so that they share a vascular system.
Other times, Van Aken uses a type of grafting involving just the buds. He removes healthy buds from a tree in February and stores them in a freezer until August. Then, he trims buds off a host tree’s branches and replaces them with the ones that have been in cold storage. He wraps the new buds in plastic, creating a greenhouse effect, and the following spring cuts off any of the remaining old buds near the graft. The idea, says Van Aken, is to trick the host tree into believing the new pieces are part of itself. He explained how the Tree of 40 Fruit came to be at a TED talk in Manhattan last year.
For three years after one of his trees is sited, the artist visits it twice a year, once in the spring to prune the branches and again in the summer to add more grafts. Van Aken estimates that it takes at least nine years for a Tree of 40 Fruit to reach its peak—that is five years for the grafts to develop and another four for the different fruit to appear.
Van Aken uses only trees that produce stone fruits, or those that have pits, because these species tend to be compatible with each other. He was able to gain access to almost 250 different varieties, but to the general public, most of these types of peaches, plums and apricots are unfamiliar, because they aren't the preferred size or color and don't have a shelf-life long enough to allow them to be sold in stores. But that means people are missing out on a wide variety of taste sensations. Some of the fruits, Van Aken says, are so sweet, “they’ll hurt your teeth,” and others are sour.
The art project, in this sense, gradually became a means of conservation. Van Aken is doing his part to keep these fruit species from disappearing.
In fact, his work with lesser known types of fruit attracted the attention of DARPA, the research arm of the Department of Defense. This past fall he met with people from the agency’s Biological Technologies Office to share what he has learned about preserving heirloom and native varieties of fruit.
While he continues to create Trees of 40 Fruit, Van Aken’s agricultural focus is broadening. His latest project, based on the German concept of the Streuobstwiese, or community orchard, is a step toward not only educating communities about the fruits native to their region, but also engaging a younger generation in the fading tradition of growing food. Van Aken, art historian and entrepreneur Chris Thompson and some local businesses and community groups hope to start their first Streuobstwiese in Freeport, Maine. Some multi-fruit trees will be planted in the orchard, but most of the trees will provide only one type of fruit—the goal being to bring back local varieties that most people have never tasted.
“The Trees of 40 Fruit were a way for me to collapse an entire orchard into one tree to preserve varieties and diversity,” says Van Aken. “But if the Tree of 40 Fruit is collapse, the streuobstweise is explosion, returning these varieties to individual trees.”
Venus is one of Earth’s closest neighbors, but astronauts won’t be setting foot on the second planet from the sun anytime soon. Venus is a true hellscape, sporting an atmosphere thick enough to crush a person, temperatures high enough to melt lead and pervasive clouds of sulfuric acid. But new simulations suggest that wasn't always the case. Venus was downright Earth-like for 2 to 3 billion years and didn’t turn into the violent no-man’s land we know today until 700 million years ago.
Venus was a cloudy mystery to astronomers until 1978, when the Pioneer Venus Project reached the planet and found indications that it was once home to shallow seas. To understand whether the planet could have ever supported liquid water, and possibly life, researchers from NASA’s Goddard Institute for Space Science ran five simulations each representing different levels of water covering the planet. In all scenarios, they found that the planet would have been able to maintain a stable temperate climate for a couple billion years. The research was presented at the European Planetary Science Congress—Division for Planetary Sciences Joint Meeting 2019 in Geneva, Switzerland.
NASA’s Michael Way and Anthony Del Genio calculated three scenarios based on the topography of Venus we see today: one with a 1,017-foot-deep average ocean, one with a shallow 30-foot-deep ocean and one in which the moisture was locked in the soil. The team adjusted their model to account for changing atmospheric conditions and for the sun heating up over time. They found that in all scenarios the planet could maintain an average temperature between 68 and 122 degrees Fahrenheit.
“Venus currently has almost twice the solar radiation that we have at Earth. However, in all the scenarios we have modeled, we have found that Venus could still support surface temperatures amenable for liquid water,” Way says in a press release. “Our hypothesis is that Venus may have had a stable climate for billions of years. It is possible that the near-global resurfacing event is responsible for its transformation from an Earth-like climate to the hellish hot-house we see today.”
Soon after it first formed around 4.2 billion years ago, Venus cooled off quickly and had an atmosphere dominated by carbon dioxide, the researchers hypothesize. If the planet followed similar patterns to the early Earth, much of that carbon dioxide would have been absorbed by silicate rocks and locked into the surface over the course of 3 billion years. Roughly 715 million years ago, the Venusian atmosphere would have been pretty similar to Earth’s, dominated by nitrogen with trace amounts of carbon dioxide and methane.
Around that time, however, massive amounts of carbon dioxide re-entered the atmosphere, setting off the runaway greenhouse effect that transformed the planet into what it is today. The researchers believe it was likely a volcanic event that released gas trapped in massive amounts of magma but prevented the carbon dioxide from being reabsorbed.
“Something happened on Venus where a huge amount of gas was released into the atmosphere and couldn’t be re-absorbed by the rocks,” Way says. “On Earth we have some examples of large-scale outgassing, for instance the creation of the Siberian Traps 500 million years ago which is linked to a mass extinction, but nothing on this scale. It completely transformed Venus.”
There are still some big questions about whether Venus was habitable. First, researchers need to learn more about how quickly Venus cooled off after its formation. It’s possible that it never cooled down enough for liquid water to form. It’s also unknown whether the event that reshaped the planet was one mega-cataclysm or if it was a series of smaller events over billions of years that gradually turned Venus into what it is today.
If Venus was habitable for billions of years, it opens up the possibility that exoplanets spotted in a so-called “Venus Zone,” orbiting roughly the same distance from their stars as Venus is from the sun, could be candidates for supporting life. But confirming the hypothesis will take more missions to study the planet.
There are plenty of compelling reasons to go back. A study released last month shows that cyclical dark patches that appear and disappear in the upper reaches of Venus’s thick atmosphere are associated with changes in the planet’s brightness and energy levels. Astronomer Carl Sagan and other notable scientists have hypothesized that the unusual darkening could be caused by microscopic life in the clouds.