By Chris Lang, March 20, 2014. Source: REDD-Monitor

Image: Santiago Armengod and Melanie Cervantes

The World Bank continues with its push to trade the carbon stored in forests. But new research shows that safeguards and legal protections for indigenous peoples and local communities in these new forest carbon markets are “non-existent”.

The research was carried out by the Rights and Resources Initiative (RRI) together with the Ateneo School of Government in the Philippines. It includes a survey of 23 countries in Latin America, Asia, and Africa, covering two-thirds of the Global South’s forests. Twenty-one of these countries are members of the UN-REDD programme and/or the World Bank’s Forest Carbon Partnership Facility. Brazil has a US$1 billion REDD agreement with Norway. India is the only non-REDD country included in the research.

In a press release, Arvind Khare, RRI’s Executive Director, said,

“As the carbon in living trees becomes another marketable commodity, the deck is loaded against forest peoples, and presents an opening for an unprecedented carbon grab by governments and investors. Every other natural resource investment on the international stage has disenfranchised Indigenous Peoples and local communities, but we were hoping REDD would deliver a different outcome. Their rights to their forests may be few and far between, but their rights to the carbon in the forests are non-existent.”

The report, “Status of Forest Carbon Rights and Implications for Communities, the Carbon Trade, and REDD+ Investments”, can be downloaded here.

The report argues that, “The dispossession of local communities and Indigenous Peoples does not have to be an outcome of the emergence of carbon markets.” But the drive to create markets to trade forest carbon could actually be impeding progress on establishing Indigenous Peoples’ and local communities’ rights to their land.

Currently REDD country governments legally control vast areas of forest land. Even without forest carbon markets, governments are reluctant to hand over the rights to this land. If carbon markets make forests more valuable, are governments more or less likely to try to hold on to forest land?

RRI and the Ateneo School of Government point out in their report that most REDD countries have recognised the importance of land tenure rights. We might therefore expect that in the six years since REDD was launched the area of forest land recognised as owned by Indigenous Peoples and local communities would have increased. Instead, the report reviews community tenure in 28 REDD countries and finds that the area of forest land secured for community ownership since 2008 was less than one-fifth of the area secured in the previous six years.

The report is critical of the World Bank Carbon Fund’s Methodological Framework, noting that it “does not identify or provide adequate guidance on how to address the risks associated with the existing ambiguity on carbon rights”. The Methodological Framework states, “The status of rights to carbon and relevant lands should be assessed to establish a basis for successful implementation of the emissions reduction program.” But the Methodological Framework says nothing about respecting or enforcing those rights.

Of the 23 countries studied in the report, only Mexico and Guatemala have national legislation defining tenure rights over carbon. None of the countries have a national legal framework establishing rules and institutions for trade in forest carbon. Bolivia has passed legislation prohibiting the commodification of ecosystem services.

Six of the countries have draft national laws to establish carbon rights. While 17 countries have legal frameworks that could provide legislation for carbon trading, these laws have not been harmonised, and do not provide safeguards or institutions to arbitrate grievances.

The report points out that,

Even countries that recognize Indigenous Peoples’ and local communities’ tenure rights over their lands do not necessarily extend this tenure to include ownership of natural resources such as minerals, oil, timber, and other forest products, which can often remain under State ownership. While emissions reductions are not tangible products in the same way as timber or minerals, it is quite possible that governments may perceive them in the same manner, should it suit their interests.

On the other hand, REDD won’t work unless the rights of Indigenous Peoples and local communities are protected. Andy White, RRI’s coordinator, points out that,

Since the birth of REDD+, organisations have called for secure land rights for indigenous peoples and local communities as a critical component, highlighting the copious amount of research showing that local communities excel at the sustainable management of their land and resources when they are entrusted with greater ownership and control.

A Catch-22, then? To work, REDD requires that governments respect the land rights and carbon rights of local communities. But a market for forest carbon could generate large revenues for governments, thus increasing the chances of a “carbon grab” by corporations and governments.


By Brian John, March 19, 2014. Source: Spinwatch

Anne Glover, EU Chief Scientific Adviser. Photo: Wikimedia Commons

Anne Glover, the EU’s chief scientist, is playing politics with science, warns Brian John of GM-Free Cymru. Her role in promoting GMOs as safe and attempting to get rid of the precautionary principle, he argues in this guest post, is all part of a carefully crafted attempt to redesign science and to impose a scientific orthodoxy worked out with the “learned” academies. Dismissing some GMO discoveries by claiming they are “contested” ignores how scientific debate really works.

Since taking office in 2012, EU Science Chief Anne Glover seems to have taken it upon herself to redefine the meaning of the term “scientific evidence” and to shake up the manner in which scientists work and communicate with each other.

In two staged “interviews” published in Euractiv,1 it has become apparent that she sees her role as providing science in support of the predetermined political positions of her boss, EC President Barroso. She will deny this, of course, but if we look at one field – the contentious area of GMOs – we see immediately that she is intent upon the “scientific validation” of the line pushed relentlessly by the Commission – i.e. that GMOs are safe and that they should be allowed without any great hindrance into the food supply, even if there is no public taste for them and no demand from retailers. This pressure towards market liberalisation and the insidious dismantling of GMO regulations is now being increased by the US Transatlantic Trade and Investment Partnership (TTIP) negotiators.

Politics will always be a dirty business. But there is something far more worrying going on here, and it has to do with science itself. If you look at the carefully crafted statements from Anne Glover, a number of things become apparent:

1. She is seeking to re-define the term “scientific evidence” by claiming that there is no substantiated evidence of harm associated with GMO crops and foods. Leaving aside the matter of what is substantiated, and what is not, and who does the substantiating, Glover is perpetrating a falsehood here. She has repeated it over and again.2 As she knows full well, there is a large body of peer-reviewed literature which shows harmful direct and indirect effects arising from the growing of GMO crops. She seems to be suggesting that the evidence cited by hundreds of scientists, in these publications, is not “evidence” at all, simply because she does not agree with it or finds it inconvenient. If nothing else, this shows a deep disrespect for working scientists and a lack of awareness of how science works through a process of data collection in the field and in the laboratory, and by hypothesis-testing in a climate of mutual respect.3

2. She is arguing that the Precautionary Principle is no longer needed in the assessment of GMO crops and foods, since in her view the arguments about safety are over. This is both arrogant and dangerous, putting at risk the health of Europeans on the basis of a deeply flawed premise. This undermines one of the key underpinnings of EU law and Codex Alimentarius in the matter of GMOs – namely that they are different from conventional organisms and are liable therefore to be uniquely unpredictable and potentially dangerous for health and the environment. Many would agree that the “presumption of high risk” has now been amply confirmed through scientific investigation.4

3. She is also arguing that GMOs in Europe have been “regulated to death” – with the implication that these regulations are now holding back progress and need to be changed or even dismantled. That again is a dangerous attitude, in which she seems (on the basis of her own convictions about GMO safety) to be seeking to undermine the regulations which protect the people of the EU. Her role is to advise Mr Barroso, not to seek to change the EU regulations in tune with her personal views.

4. She refers to learned “independent” bodies like the European Academies of Science Advisory Council (EASAC) in support of her position on GMOs, presumably on the assumption that such academies are uniquely qualified to tell the rest of us what to believe and what to do. The report to which she frequently refers, called “Planting the Future”, does not even have any cited authors – and it is essentially a position statement distributed by a relatively small scientific community whose members are seeking to protect their own status and to guarantee a flow of research funds into their own pet projects.5 Is that too cynical a view? Having watched the Royal Society’s take on GMOs over the years, it seems reasonable. It is of course a standard tactic for somebody in Glover’s position to “defer” to some distant reputable body and to quote it verbatim, giving her words a gloss of respectability and deflecting responsibility in the event that their assurances about safety turn out, in due course, to be false.

5. She repeats the EASAC line that controversies about the negative impacts of GMO crops and foods are based upon “contested science”.1 This is disingenuous and even dishonest, especially when she implies that papers purporting to show that GMOs are safe are somehow “uncontested”. That is of course nonsense – all the science in the GMO debate is contested in the literature and in public discourse, and so it should be.

6. Finally – and this is the most serious issue of all – Glover seems intent upon establishing a scientific orthodoxy with respect to GMOs, determined by an unelected and biased small group of scientists (including herself) who have decided that the GMO safety debate is over, and that GMOs are harmless. She does not recognise the integrity of the scientists who argue otherwise, and indeed she denies that their published conclusions are based upon proper “evidence”. We have known for years that the scientific establishment hates mavericks and researchers who challenge the “consensus” or “accepted wisdom” – and it seems blissfully unaware of the irony of this situation, given the long history of religious and political suppression of scientific research and results. The Flat Earth comes to mind, as do the names of Galileo, Stalin and Orwell. The scientific establishment has already been heavily implicated in the disgraceful treatment meted out to such honest scientists as Pusztai, Ermakova, Carman, Chapela and Quist, Huber, Carrasco and Seralini.6 There is no sign of a change in this attitude.

What we want from Europe’s Chief Scientist is a demonstration that she knows what science is and how it works, and an acceptance that “scientific evidence” exists on both sides of every scientific argument. We want respect and recognition for those whose views she might not personally accept. And we want an acceptance that in the field of GMOs (as in many others) there is no consensus about safety and environmental impacts. Can she bring herself to make a simple statement to that effect? On the current evidence in the public domain, we doubt it.


By Martin Robards, March 24, 2014. Source: The Guardian

Staining the vista of the Chugach Mountains, the Exxon Valdez lies atop Bligh Reef two days after the grounding on 25 March 1989. Photograph: Natalie B Fobes/NG/Getty Images

Even after the recent Deepwater Horizon incident in the Gulf of Mexico – a much larger accident in terms of the amount of oil released – the spectre of Exxon Valdez remains fresh in the minds of many Americans old enough to remember the wall-to-wall media coverage of crude-smothered rocks, birds, and marine mammals.

In the quarter century since the Exxon Valdez foundered, changing economic and climatic conditions have led to increased Arctic shipping, including increasing volumes of petroleum products through the Arctic. Sadly, apart from a few areas around oil fields, there is little to no capacity to respond to an accident – leaving the region’s coastal indigenous communities and iconic wildlife at risk of a catastrophe.

Local Alaskans and conservationists like myself – who witnessed the Exxon Valdez impact at close range – will never forget the damage. The wake of oil spread far from Bligh Reef, devastating life in Prince William Sound, killing over a quarter of a million seabirds at the large colonies in neighbouring Cook Inlet, before moving along the coast of Kodiak and to a point on the Alaska Peninsula 460 miles to the south.

Yet more than memories remain. Oil persists beneath the boulders and cobbles of the affected region, sea otters have only just recovered after 25 years, and some species such as Pacific herring and the fisheries reliant on them are still not recovering at all, despite Exxon’s overly optimistic prediction of a quick and full recovery of Prince William Sound.

The fact is that even under ideal conditions, relatively little oil is actually recovered from a large spill. Its long-term impacts demand that we redouble our efforts on prevention to protect natural resources and the communities that rely on them – particularly in the Arctic where the environmental challenges are greater, the response and cleanup infrastructure frequently poor, and the logistics for mounting a response in remote environments immense. Furthermore, Arctic wildlife tends to aggregate in staggering numbers, rendering large portions of entire species vulnerable to a spill, like the seabirds of Cook Inlet.

Late last year, recognising that accidents will happen, I helped to lead a workshop with representatives of government agencies and coastal communities to address the lack of oil spill response capacity in the waterways separating Alaska in the United States from Chukotka in the Russian Federation. Residents from the Bering and Anadyr Straits and other villages met with representatives from federal and state agencies and other organisations in order to better identify the best ways to understand, prepare for, and respond to, an oil spill in a co-ordinated manner.

While overall co-ordination of any large oil spill naturally rests with a formalised incident command, the first responders to a future oil spill in Arctic waters will more often than not be from the nearest local communities. Local hunters possess knowledge of natural resources passed down over centuries, including the migratory movements of birds, marine mammals, and fish, as well as how to operate safely in their coastal waters.

These are the people who stand to lose the most in the event of a spill, which could devastate regional wildlife and fish populations. Providing them the proper training, equipment, and infrastructure for their communities will help them to play a more meaningful role in planning for and safely responding to any future environmental disasters.

Communities, agencies, and other responsible groups on both sides of the political border must also establish predetermined roles and priorities. For example, will oil be allowed to wash ashore or will an attempt at dispersal be made? While oil on Arctic beaches is nobody’s wish, the long-term impacts of dispersant use on food security in the Arctic environment are unknown. Both options have long-term environmental and human health consequences – and only through local input into the planning process can these difficult decisions be addressed.

During the Exxon Valdez incident, villages dependent on fishing were financially ruined. A similar event farther north, impacting the health and abundance of marine mammal populations, could be even more devastating. Such losses of iconic wildlife and damage to this stunning environment threaten not only a unique and precious part of our planet, but also the nutritional needs of coastal communities and a critical component of their cultures.

In the end, the story of the Exxon Valdez remains a cautionary tale. While simply hoping for the best may be the cheapest way forward given the resources required to establish functional networks of community and government bodies willing and able to work together, accidents do and will continue to happen. If we are to secure the long-term health and security of the Arctic’s magnificent natural resources and vibrant indigenous cultures there can be little doubt concerning the value of both prevention and preparedness.

About these ads

div { margin-top: 1em; } #google_ads_div_wpcom_below_post_adsafe_ad_container { display: block !important; }
]]>

By Martin Robards, March 24, 2014. Source: The Guardian

Staining the vista of the Chugach Mountains, the Exxon Valdez lies atop Bligh Reef two days after the grounding on 25 March 1989. Photograph: Natalie B Fobes/NG/Getty Images

Staining the vista of the Chugach Mountains, the Exxon Valdez lies atop Bligh Reef two days after the grounding on 25 March 1989. Photograph: Natalie B Fobes/NG/Getty Images

Even after the recent Deepwater Horizon incident in the Gulf of Mexico— a much larger accident in terms of the amount of oil released — the spectre of Exxon Valdez remains fresh in the minds of many Americans old enough to remember the wall-to-wall media coverage of crude-smothered rocks, birds, and marine mammals.

In the quarter century since the Exxon Valdez foundered, changing economic and climatic conditions have led to increased Arctic shipping, including increasing volumes of petroleum products through the Arctic. Sadly, apart from a few areas around oil fields, there is little to no capacity to respond to an accident – leaving the region’s coastal indigenous communities and iconic wildlife at risk of a catastrophe.

Local Alaskans and conservationists like myself – who witnessed the Exxon Valdez impact at close range – will never forget the damage. The wake of oil spread far from Bligh Reef, devastating life in Prince William Sound, killing over a quarter of a million seabirds at the large colonies in neighbouring Cook Inlet, before moving along the coast of Kodiak and to a point on the Alaska Peninsula 460 miles to the south.

Yet more than memories remain. Oil persists beneath the boulders and cobbles of the affected region, sea otters have only just recovered after 25 years, and some species such as Pacific herring and the fisheries reliant on them are still not recovering at all, despite Exxon’s overtly optimistic prediction of a quick and full recovery of Prince William Sound.

The fact is that even under ideal conditions, relatively little oil is actually recovered from a large spill. Its long-term impacts demand that we redouble our efforts on prevention to protect natural resources and the communities that rely on them – particularly in the Arctic where the environmental challenges are greater, the response and cleanup infrastructure frequently poor, and the logistics for mounting a response in remote environments immense. Furthermore, Arctic wildlife tends to aggregate in staggering numbers, rendering large portions of entire species vulnerable to a spill, like the seabirds of Cook Inlet.

Late last year, recognising that accidents will happen, I helped to lead a workshop with representatives of government agencies and coastal communities to address the lack of oil spill response capacity in the waterways separating Alaska in the United States from Chukotka in the Russian Federation. Residents from the Bering and Anadyr Straits and other villages met with representatives from federal and state agencies and other organisations in order to better identify the best ways to understand, prepare for, and respond to, an oil spill in a co-ordinated manner.

While overall co-ordination of any large oil spill naturally rests with a formalised incident command, the first responders to a future oil spill in Arctic waters will more often than not be from the nearest local communities. Local hunters possess knowledge of natural resources passed down over centuries, including the migratory movements of birds, marine mammals, and fish, as well as how to operate safely in their coastal waters.

These are the people who stand to lose the most in the event of a spill, which could devastate regional wildlife and fish populations. Providing them the proper training, equipment, and infrastructure for their communities will help them to play a more meaningful role in planning for and safely responding to any future environmental disasters.

Communities, agencies, and other responsible groups on both sides of the political border must also establish predetermined roles and priorities. For example, will oil be allowed to wash ashore or will an attempt at dispersal be made? While oil on Arctic beaches is nobody’s wish, the long-term impacts of dispersant use on food security in the Arctic environment are unknown. Both options have long-term environmental and human health consequences – and only through local input into the planning process can these difficult decisions be addressed.

During the Exxon Valdez incident, villages dependent on fishing were financially ruined. A similar event farther north, impacting the health and abundance of marine mammal populations, could be even more devastating. Such losses of iconic wildlife and damage to this stunning environment threaten not only a unique and precious part of our planet; but also the nutritional needs of coastal communities and a critical component of their cultures.

In the end, the story of the Exxon Valdez remains a cautionary tale. While simply hoping for the best may be the cheapest way forward given the resources required to establish functional networks of community and government bodies willing and able to work together, accidents do and will continue to happen. If we are to secure the long-term health and security of the Arctic’s magnificent natural resources and vibrant indigenous cultures there can be little doubt concerning the value of both prevention and preparedness.

About these ads

div { margin-top: 1em; } #google_ads_div_wpcom_below_post_adsafe_ad_container { display: block !important; }
]]>

By Martin Robards, March 24, 2014. Source: The Guardian

Staining the vista of the Chugach Mountains, the Exxon Valdez lies atop Bligh Reef two days after the grounding on 25 March 1989. Photograph: Natalie B Fobes/NG/Getty Images

Staining the vista of the Chugach Mountains, the Exxon Valdez lies atop Bligh Reef two days after the grounding on 25 March 1989. Photograph: Natalie B Fobes/NG/Getty Images

Even after the recent Deepwater Horizon incident in the Gulf of Mexico— a much larger accident in terms of the amount of oil released — the spectre of Exxon Valdez remains fresh in the minds of many Americans old enough to remember the wall-to-wall media coverage of crude-smothered rocks, birds, and marine mammals.

In the quarter century since the Exxon Valdez foundered, changing economic and climatic conditions have led to increased Arctic shipping, including increasing volumes of petroleum products through the Arctic. Sadly, apart from a few areas around oil fields, there is little to no capacity to respond to an accident – leaving the region’s coastal indigenous communities and iconic wildlife at risk of a catastrophe.

Local Alaskans and conservationists like myself – who witnessed the Exxon Valdez impact at close range – will never forget the damage. The wake of oil spread far from Bligh Reef, devastating life in Prince William Sound, killing over a quarter of a million seabirds at the large colonies in neighbouring Cook Inlet, before moving along the coast of Kodiak and to a point on the Alaska Peninsula 460 miles to the south.

Yet more than memories remain. Oil persists beneath the boulders and cobbles of the affected region, sea otters have only just recovered after 25 years, and some species such as Pacific herring and the fisheries reliant on them are still not recovering at all, despite Exxon’s overtly optimistic prediction of a quick and full recovery of Prince William Sound.

The fact is that even under ideal conditions, relatively little oil is actually recovered from a large spill. Its long-term impacts demand that we redouble our efforts on prevention to protect natural resources and the communities that rely on them – particularly in the Arctic where the environmental challenges are greater, the response and cleanup infrastructure frequently poor, and the logistics for mounting a response in remote environments immense. Furthermore, Arctic wildlife tends to aggregate in staggering numbers, rendering large portions of entire species vulnerable to a spill, like the seabirds of Cook Inlet.

Late last year, recognising that accidents will happen, I helped to lead a workshop with representatives of government agencies and coastal communities to address the lack of oil spill response capacity in the waterways separating Alaska in the United States from Chukotka in the Russian Federation. Residents from the Bering and Anadyr Straits and other villages met with representatives from federal and state agencies and other organisations in order to better identify the best ways to understand, prepare for, and respond to, an oil spill in a co-ordinated manner.

While overall co-ordination of any large oil spill naturally rests with a formalised incident command, the first responders to a future oil spill in Arctic waters will more often than not be from the nearest local communities. Local hunters possess knowledge of natural resources passed down over centuries, including the migratory movements of birds, marine mammals, and fish, as well as how to operate safely in their coastal waters.

These are the people who stand to lose the most in the event of a spill, which could devastate regional wildlife and fish populations. Providing them the proper training, equipment, and infrastructure for their communities will help them to play a more meaningful role in planning for and safely responding to any future environmental disasters.

Communities, agencies, and other responsible groups on both sides of the political border must also establish predetermined roles and priorities. For example, will oil be allowed to wash ashore or will an attempt at dispersal be made? While oil on Arctic beaches is nobody’s wish, the long-term impacts of dispersant use on food security in the Arctic environment are unknown. Both options have long-term environmental and human health consequences – and only through local input into the planning process can these difficult decisions be addressed.

During the Exxon Valdez incident, villages dependent on fishing were financially ruined. A similar event farther north, impacting the health and abundance of marine mammal populations, could be even more devastating. Such losses of iconic wildlife and damage to this stunning environment threaten not only a unique and precious part of our planet; but also the nutritional needs of coastal communities and a critical component of their cultures.

In the end, the story of the Exxon Valdez remains a cautionary tale. While simply hoping for the best may be the cheapest way forward given the resources required to establish functional networks of community and government bodies willing and able to work together, accidents do and will continue to happen. If we are to secure the long-term health and security of the Arctic’s magnificent natural resources and vibrant indigenous cultures there can be little doubt concerning the value of both prevention and preparedness.

About these ads

div { margin-top: 1em; } #google_ads_div_wpcom_below_post_adsafe_ad_container { display: block !important; }
]]>

By Martin Robards, March 24, 2014. Source: The Guardian

Staining the vista of the Chugach Mountains, the Exxon Valdez lies atop Bligh Reef two days after the grounding on 25 March 1989. Photograph: Natalie B Fobes/NG/Getty Images

Staining the vista of the Chugach Mountains, the Exxon Valdez lies atop Bligh Reef two days after the grounding on 25 March 1989. Photograph: Natalie B Fobes/NG/Getty Images

Even after the recent Deepwater Horizon incident in the Gulf of Mexico— a much larger accident in terms of the amount of oil released — the spectre of Exxon Valdez remains fresh in the minds of many Americans old enough to remember the wall-to-wall media coverage of crude-smothered rocks, birds, and marine mammals.

In the quarter century since the Exxon Valdez foundered, changing economic and climatic conditions have led to increased Arctic shipping, including increasing volumes of petroleum products through the Arctic. Sadly, apart from a few areas around oil fields, there is little to no capacity to respond to an accident – leaving the region’s coastal indigenous communities and iconic wildlife at risk of a catastrophe.

Local Alaskans and conservationists like myself – who witnessed the Exxon Valdez impact at close range – will never forget the damage. The wake of oil spread far from Bligh Reef, devastating life in Prince William Sound, killing over a quarter of a million seabirds at the large colonies in neighbouring Cook Inlet, before moving along the coast of Kodiak and to a point on the Alaska Peninsula 460 miles to the south.

Yet more than memories remain. Oil persists beneath the boulders and cobbles of the affected region, sea otters have only just recovered after 25 years, and some species such as Pacific herring and the fisheries reliant on them are still not recovering at all, despite Exxon’s overtly optimistic prediction of a quick and full recovery of Prince William Sound.

The fact is that even under ideal conditions, relatively little oil is actually recovered from a large spill. Its long-term impacts demand that we redouble our efforts on prevention to protect natural resources and the communities that rely on them – particularly in the Arctic where the environmental challenges are greater, the response and cleanup infrastructure frequently poor, and the logistics for mounting a response in remote environments immense. Furthermore, Arctic wildlife tends to aggregate in staggering numbers, rendering large portions of entire species vulnerable to a spill, like the seabirds of Cook Inlet.

Late last year, recognising that accidents will happen, I helped to lead a workshop with representatives of government agencies and coastal communities to address the lack of oil spill response capacity in the waterways separating Alaska in the United States from Chukotka in the Russian Federation. Residents from villages along the Bering and Anadyr straits met with representatives from federal and state agencies and other organisations to identify the best ways to understand, prepare for, and respond to an oil spill in a co-ordinated manner.

While overall co-ordination of any large oil spill naturally rests with a formalised incident command, the first responders to a future oil spill in Arctic waters will more often than not be from the nearest local communities. Local hunters possess knowledge of natural resources passed down over centuries, including the migratory movements of birds, marine mammals, and fish, as well as how to operate safely in their coastal waters.

These are the people who stand to lose the most in the event of a spill, which could devastate regional wildlife and fish populations. Providing them the proper training, equipment, and infrastructure for their communities will help them to play a more meaningful role in planning for and safely responding to any future environmental disasters.

Communities, agencies, and other responsible groups on both sides of the political border must also establish predetermined roles and priorities. For example, will oil be allowed to wash ashore or will an attempt at dispersal be made? While oil on Arctic beaches is nobody’s wish, the long-term impacts of dispersant use on food security in the Arctic environment are unknown. Both options have long-term environmental and human health consequences – and only through local input into the planning process can these difficult decisions be addressed.

During the Exxon Valdez incident, villages dependent on fishing were financially ruined. A similar event farther north, impacting the health and abundance of marine mammal populations, could be even more devastating. Such losses of iconic wildlife and damage to this stunning environment threaten not only a unique and precious part of our planet, but also the nutritional needs of coastal communities and a critical component of their cultures.

In the end, the story of the Exxon Valdez remains a cautionary tale. While simply hoping for the best may be the cheapest way forward, given the resources required to establish functional networks of community and government bodies willing and able to work together, accidents do and will continue to happen. If we are to secure the long-term health and security of the Arctic’s magnificent natural resources and vibrant indigenous cultures, there can be little doubt about the value of both prevention and preparedness.


March 21, 2014. Source: GMO Free Europe

On March 20th, about 100 activists from the social centers of Emilia-Romagna, Marche and north-east Italy entered EFSA’s headquarters in Parma and occupied it for half an hour, also disrupting the ongoing inauguration of the new executive director.

EFSA, the European Food Safety Authority, is not a “private” place; it is a public one.

First, because of its status as a European institution. But above all because it is charged with our collective and individual safety, and with ecological security, in everything related to food and crops.

As we detail below, many criticisms can be raised about the way EFSA evaluates the risks of GMOs. These criticisms have been raised and discussed in the past by many scientists and environmental associations.

Yet no answers have come from EFSA, nor have its guidelines been modified in any respect.

While biotech companies are aggressively trying to push GMO crops into Europe, it is essential to publicly expose the lack of a rigorous, scientific evaluation of GMO risks in the EFSA process, as reflected in its guidelines and practice.

Furthermore, the risk assessment is only one of the factors considered when the EU Commission makes its final decision (almost always against the opinion of the majority of member states). We would like to know precisely what the other factors are, and how risk and safety issues rank among them.

In our opinion, EFSA and the EU Commission act much more like agents of biotech companies than like the institutions they are supposed to be.

That’s why our action was aimed at drawing unavoidable attention to this issue, and we consider it entirely legitimate.

The police charged us right after the action, when we had already left the building, and tried to shut us all inside the courtyard.

We tried to get out, and the police reaction was grossly and unreasonably violent: batons, spitting, punches and insults.

A police officer pulled out his gun.

Afterwards we improvised a demonstration, during which the police remained extremely aggressive and further clashes took place. Several people were injured, and the police sought to identify all the activists over the occupation and the subsequent resistance in the courtyard and during the demonstration.

However, the activists, by then blocked by dozens of police on a bridge, refused to be identified and demanded instead the identification of the officers responsible for the violence.

The police then gave up on the mass identification and let the activists go.

All of this is shameful, and it clearly shows how sensitive the issues we raised are.

Food, earth and community: against bio-capitalism and GMOs, for food sovereignty and safety.

Today a hundred activists from social centers of Emilia-Romagna, Marche and north-east Italy occupied the EFSA headquarters in Parma.

EFSA is the European Food Safety Authority, and with this action we reignite a direct and radical conflict against GMO crops in Europe.

Social centers, activists, farmers: many of us are organizing, starting from the rage and dignity that move farmers all around the world.

GMO crops, as a form of intensive industrial cultivation, exert an unacceptable violence on agriculture, the environment and our own bodies.

They have no reason to exist other than the profit of the companies that produce them and the control over the food chain they enable: the wishful thinking with which they are advertised is nothing more than a smuggling of false ideas and data.

They do not increase yields, they lead to increased herbicide and pesticide use, they drive the emergence of superweeds and resistant insects, and they exhibit potentially dangerous effects on human health.

Along with industrial intensive agriculture, they are among the main contributors to climate change and the ecological crisis.

By reducing biodiversity, impoverishing the soil and unacceptably imposing intellectual property rights on seeds, they undermine food security and sovereignty, the freedom of choice of farmers and communities, and the sharing of food sources.

All this is unacceptable, nor are we willing to wait out the lobbies’ games. The farmers who choose to act as agents of biotech are not waiting either, constantly pushing against the law and its limits and, above all, trespassing the biological borders of our own bodies.

For all these reasons, today we occupied the EFSA headquarters, because the Authority and the European Commission serve only to guarantee biotech companies a gateway to Europe.

EFSA has not rejected a single application – because of the high scientific quality of the proposals, they say. Yet scientists have often criticised the fact that EFSA’s decisions rely solely on documentation provided by the applicants themselves, and that this documentation is frequently based on grey data (not publicly available, not published in peer-reviewed journals).

EFSA’s guidelines are simply shocking.

- Applicant companies have full freedom to determine the essential elements on which the risk assessment is based, and the approaches adopted are among those most favourable to the biotech industry, having been established by, or in close collaboration with, scientists tied to industry.

- The main pillar of the risk assessment is the so-called “comparative safety assessment”, the parallel of the “substantial equivalence” principle used by the FDA.

It was defined by scientists working with ILSI, a biotech-funded institute, who in the same years held senior positions at EFSA.

EFSA, following ILSI’s suggestion, treats the comparative safety assessment as the basis of the safety assessment itself, rather than as merely a starting tool in a more rigorous process.

- The comparative safety assessment has no real scientific basis, not least because its definition is nebulous and, above all, not at all quantitative.

Moreover, it entirely ignores the fact that the methods of DNA engineering have nothing to do with ordinary gene regulation and heredity. The risk that newly introduced genes escape from, or interfere with, normal gene regulation is specific to this technique, and the comparative assessment is wholly inadequate to address it.

Yet even the comparative assessment, were it applied as rigorously as possible, would suffice to reject many applications. It is well known that many GM plants differ significantly in levels of nutrients and proteins, and sometimes in toxins and allergens. Precisely to avoid this, the comparison is not made between the GM plant and its isogenic counterpart cultivated at the same time, in the same place and under the same conditions (which would comply with EU Directive 2001/18).

Instead, the comparison is made against a very large database, built by ILSI, containing a huge range of varieties of the species, cultivated at different times, in different places and under entirely different conditions. The database even includes highly unusual varieties with very low or very high levels of some components.

The purpose is to make the comparison range so wide that virtually anything fits within it. EFSA allows the use of this database without demanding a more rigorous application of even the comparative assessment itself.

- EFSA does not require any assessment of the synergistic and combinatorial effects of the different toxins/herbicides expressed, despite the fact that combinatorial effects cannot be predicted from the isolated effects of each factor.

- EFSA does not require a comprehensive assessment of the risk to non-target organisms at all levels of the food chain.

- EFSA does not require that the stacked effects of different genetically engineered traits be evaluated; that is, a GM crop with more than one trait is not considered a new species, and its assessment relies on the assessments of each trait taken singly.

- There is no clear definition of the conditions under which a GM application must be rejected.

- There is no provision requiring that the submitted raw data be made available to the scientific community for further independent studies and evaluations.

All of this is totally unacceptable, and would be simply ridiculous if it weren’t outrageous, considering that national authorities must rely on safety assessments from EFSA.

However, one might say, the safety assessment is just “one among others” of the factors that the EU Commission evaluates when approving an application. It would be interesting to know whether there is a ranking among those factors, and what it is.

Finally, let’s make a “comparative assessment” of our own, between what Monsanto declares:

“Monsanto should not have to vouchsafe the safety of biotech food. Our interest is in selling as much of it as possible. Assuring its safety is the FDA’s job.” (P. Angell, NYT Magazine, 25 Oct 1998)

and the EFSA guidelines:

“It is not foreseen that EFSA carry out such studies as the onus is on the applicants to demonstrate the safety of the GM product.”

A kind of aporia arises at this point: who is actually in charge of ensuring the safety of GM food and crops? Who checks that the raw data are scientifically sound? And in what sense, exactly, does EFSA guarantee that its “safety assessments” are reliable?

GMOs are a real battlefield, an open conflict between bio-capitalism and the freedom to defend the earth, food, health and the ecosystem.

Note: Well now that we’ve debunked this crackpot scheme, we can refocus on the priorities — like leaving fossil fuels in the ground, and transforming an economy based on extractive industry and relations.

-The GJEP Team

By Becky Oskin, March 20, 2014. Source: Live Science

Photo: Peter Barritt/Alamy

During Earth’s last ice age, iron dust dumped into the ocean fertilized the garden of the sea, feeding a plankton bloom that soaked up carbon dioxide from the air, a new study confirms.

But the results deal a blow to some geoengineering schemes that claim that people may be able to use iron fertilization to slow global warming. The planet’s natural experiment shows it would take at least a thousand years to lower carbon dioxide levels by 40 parts per million — the amount of the drop during the ice age.

Meanwhile, carbon dioxide is now increasing by 2 parts per million yearly, so in about 20 years human emissions could add another 40 parts per million of carbon dioxide to the atmosphere. Levels currently hover around 400 parts per million.
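The arithmetic behind that comparison can be sketched in a few lines. This is purely illustrative, using only the figures cited in the article (a 40 ppm drawdown over at least 1,000 years versus a 2 ppm annual rise):

```python
# Back-of-the-envelope comparison of the ice-age CO2 drawdown rate with the
# modern emissions rate, using only the numbers cited in the article.

ICE_AGE_DROP_PPM = 40          # total CO2 drop attributed to iron fertilization
ICE_AGE_YEARS = 1000           # minimum duration of the natural "experiment"
MODERN_RISE_PPM_PER_YEAR = 2   # current annual increase in atmospheric CO2

# Rate at which the natural iron-fertilized sink drew down CO2
natural_drawdown_rate = ICE_AGE_DROP_PPM / ICE_AGE_YEARS  # 0.04 ppm/year

# Years for modern emissions to add back the same 40 ppm
years_to_offset = ICE_AGE_DROP_PPM / MODERN_RISE_PPM_PER_YEAR  # 20 years

print(f"Natural drawdown: {natural_drawdown_rate} ppm/year")
print(f"Modern emissions add {ICE_AGE_DROP_PPM} ppm in {years_to_offset:.0f} years")
print(f"Emissions outpace the natural sink by roughly "
      f"{MODERN_RISE_PPM_PER_YEAR / natural_drawdown_rate:.0f}x")
```

In other words, even taken at face value, the natural drawdown ran about fifty times slower than the rate at which emissions are currently adding CO2 — which is the mismatch Martínez-García points to below.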

“Even if we could reproduce what works in the natural world, it’s not going to solve the carbon dioxide problem,” said Alfredo Martínez-García, a climate scientist at ETH Zurich in Switzerland and author of the study, published today (March 20) in the journal Science.

Iron and ice

The idea of fertilizing the ocean with iron to combat rising carbon-dioxide levels has intrigued scientists for more than 20 years, since the late researcher John Martin observed that the ice-age drop in carbon dioxide (noted in ice cores) synced with a surge in iron-rich dust.

The link between more iron in the ocean and less carbon dioxide in the air lies in the tiny ocean-dwelling plants called phytoplankton. For them, iron is an essential nutrient. In some regions, such as the Southern Ocean surrounding Antarctica, the water lacks iron but has plenty of the other nutrients that phytoplankton need to grow. Sprinkling a little iron dust in that region could boost plankton numbers considerably, the theory goes.

When climate changes during the ice age boosted the amount of iron-rich dust blowing into the Southern Ocean, the phytoplankton there grew and spread, gobbling up more carbon dioxide from the atmosphere in the process, Martin said.

The model, called the iron fertilization hypothesis, has been borne out by modern tests. Seeding small areas of the oceans does, indeed, cause big phytoplankton growth spurts.

In the new study, Martínez-García and his co-authors examined seafloor sediments from the Subantarctic Zone of the Southern Ocean, southwest of Africa. When the last ice age peaked between 26,500 and 19,000 years ago, dust blowing off of Patagonia and the southern part of South America settled there, the drill core shows.

To gauge the changes in seawater composition at the time, the researchers examined the fossilized shells of microscopic marine animals called foraminifera, which eat plankton and preserve the local ocean chemistry in their shells. During the ice age, nitrogen levels dropped when iron-rich dust increased at the drill core site, Martínez-García discovered.

“It is particularly gratifying to see such persuasive evidence for the iron hypothesis now appear in the sediment record,” said Kenneth Coale, director of the Moss Landing Marine Laboratories in Moss Landing, Calif., who was not involved in the study.

In previous research, Coale and colleagues looked at the effect of iron enrichment in these waters for over 40 days. The new study shows “the effects of iron enrichment for over 40,000 years, providing a historical validation of the iron hypothesis,” Coale said.

Too big to succeed?

The dust level in the drill core suggests that about four to five times more sediment fell across the Southern Ocean between South America and Africa during the ice age than the amount that falls there today, Martínez-García said.

“The magnitude of the area we are talking about is equivalent to three times the areas of the entire United States, and is maintained for several thousand years,” he told Live Science. “This helps put into perspective what we can do in terms of the modern ocean.”

The new study supported the argument that the amount of iron needed for geoengineering is untenable in the long term, said Gabriel Filippelli, a biogeochemist at Indiana University-Purdue University in Indianapolis. “It is difficult to imagine even a decade-long international effort of iron fertilization, sustained by continual ship runs dumping iron in a weather-hostile and isolated region of the world, let alone an effort that lasts a millennium,” Filippelli said.

But Filippelli also said he thinks the ice-age iron story is more complicated than just dust blowing in the wind. “The authors note only one source of iron — from above,” he said. There is also evidence that the oceans were richer in iron because of more river input during the ice ages, he said. Thus, the ice-age ocean had extra iron from above and from below.


Note: Well now that we’ve debunked this crackpot scheme, we can refocus on the priorities — like leaving fossil fuels in the ground, and transforming an economy based in extractive industry and relations.

-The GJEP Team

By Becky Oskin, March 20, 2014. Source: Live Science

Photo: Peter Barritt/Alamy

Photo: Peter Barritt/Alamy

During Earth’s last ice age, iron dust dumped into the ocean fertilized the garden of the sea, feeding a plankton bloom that soaked up carbon dioxide from the air, a new study confirms.

But the results deal a blow to some geoengineering schemes that claim that people may be able use iron fertilization to slow global warming. The planet’s natural experiment shows it would take at least a thousand years to lower carbon dioxide levels by 40 parts per million — the amount of the drop during the ice age.

Meanwhile, carbon dioxide is now increasing by 2 parts per million yearly, so in about 20 years human emissions could add another 40 parts per million of carbon dioxide to the atmosphere. Levels currently hover around 400 parts per million.

“Even if we could reproduce what works in the natural world, it’s not going to solve the carbon dioxide problem,” said Alfredo Martínez-García, a climate scientist at ETH Zurich in Switzerland and author of the study, published today (March 20) in the journal Science.

Iron and ice

The idea of fertilizing the ocean with iron to combat rising carbon-dioxide levels has intrigued scientists for more than 20 years, since the late researcher John Martin observed that the ice-age drop in carbon dioxide (noted in ice cores) synced with a surge in iron-rich dust.

The link between more iron in the ocean and less carbon dioxide in the air lies in the tiny ocean-dwelling plants called phytoplankton. For them, iron is an essential nutrient. In some regions, such as the Southern Ocean surrounding Antarctica, the water lacks iron but has plenty of the other nutrients that phytoplankton need to grow. Sprinkling a little iron dust in that region could boost plankton numbers considerably, the theory goes.

When climate changes during the ice age boosted the amount of iron-rich dust blowing into the Southern Ocean, the phytoplankton there grew and spread, gobbling up more carbon dioxide from the atmosphere in the process, Martin said.

The model, called the iron fertilization hypothesis, has been borne out by modern tests. Seeding small areas of the oceans does, indeed, cause big phytoplankton growth spurts. [7 Schemes to Geoengineer the Planet]

In the new study, Martínez-García and his co-authors examined seafloor sediments from the Subantarctic Zone of the Southern Ocean, southwest of Africa. When the last ice age peaked between 26,500 and 19,000 years ago, dust blowing off of Patagonia and the southern part of South America settled there, the drill core shows.

To gauge the changes in seawater composition at the time, the researchers examined the fossilized shells of microscopic marine animals called foraminifera, which eat plankton and preserve the local ocean chemistry in their shells. During the ice age, nitrogen levels dropped when iron-rich dust increased at the drill core site, Martínez-García discovered.

“It is particularly gratifying to see such persuasive evidence for the iron hypothesis now appear in the sediment record,” said Kenneth Coale, director of the Moss Landing Marine Laboratories in Moss Landing, Calif., who was not involved in the study.

In previous research, Coale and colleagues looked at the effect of iron enrichment in these waters for over 40 days. The new study shows “the effects of iron enrichment for over 40,000 years, providing a historical validation of the iron hypothesis,” Coale said.

Too big to succeed?

The dust level in the drill core suggests that about four to fives times more sediment fell across the Southern Ocean between South America and Africa during the ice age than the amount that falls there today, Martínez-García said.

“The magnitude of the area we are talking about is equivalent to three times the areas of the entire United States, and is maintained for several thousand years,” he told Live Science. “This helps put into perspective what we can do in terms of the modern ocean.”

The new study supported the argument that the amount of iron needed for geoengineering is untenable in the long term, said Gabriel Filippelli, a biogeochemist at Indiana University-Purdue University in Indianapolis. “It is difficult to imagine even a decade-long international effort of iron fertilization, sustained by continual ship runs dumping iron in a weather-hostile and isolated region of the world, let alone an effort that lasts a millennium,” Filippelli said.

But Filippelli also said he thinks the ice-age iron story is more complicated than just dust blowing in the wind. “The authors note only one source of iron — from above,” he said. There is also evidence that the oceans were richer in iron because of more river input during the ice ages, he said. Thus, the ice-age ocean had extra iron from above and from below.

About these ads

div { margin-top: 1em; } #google_ads_div_wpcom_below_post_adsafe_ad_container { display: block !important; }
]]>

Note: Well now that we’ve debunked this crackpot scheme, we can refocus on the priorities — like leaving fossil fuels in the ground, and transforming an economy based in extractive industry and relations.

-The GJEP Team

By Becky Oskin, March 20, 2014. Source: Live Science

Photo: Peter Barritt/Alamy

Photo: Peter Barritt/Alamy

During Earth’s last ice age, iron dust dumped into the ocean fertilized the garden of the sea, feeding a plankton bloom that soaked up carbon dioxide from the air, a new study confirms.

But the results deal a blow to some geoengineering schemes that claim that people may be able use iron fertilization to slow global warming. The planet’s natural experiment shows it would take at least a thousand years to lower carbon dioxide levels by 40 parts per million — the amount of the drop during the ice age.

Meanwhile, carbon dioxide is now increasing by 2 parts per million yearly, so in about 20 years human emissions could add another 40 parts per million of carbon dioxide to the atmosphere. Levels currently hover around 400 parts per million.

“Even if we could reproduce what works in the natural world, it’s not going to solve the carbon dioxide problem,” said Alfredo Martínez-García, a climate scientist at ETH Zurich in Switzerland and author of the study, published today (March 20) in the journal Science.

Iron and ice

The idea of fertilizing the ocean with iron to combat rising carbon-dioxide levels has intrigued scientists for more than 20 years, since the late researcher John Martin observed that the ice-age drop in carbon dioxide (noted in ice cores) synced with a surge in iron-rich dust.

The link between more iron in the ocean and less carbon dioxide in the air lies in the tiny ocean-dwelling plants called phytoplankton. For them, iron is an essential nutrient. In some regions, such as the Southern Ocean surrounding Antarctica, the water lacks iron but has plenty of the other nutrients that phytoplankton need to grow. Sprinkling a little iron dust in that region could boost plankton numbers considerably, the theory goes.

When climate changes during the ice age boosted the amount of iron-rich dust blowing into the Southern Ocean, the phytoplankton there grew and spread, gobbling up more carbon dioxide from the atmosphere in the process, Martin said.

The model, called the iron fertilization hypothesis, has been borne out by modern tests. Seeding small areas of the oceans does, indeed, cause big phytoplankton growth spurts. [7 Schemes to Geoengineer the Planet]

In the new study, Martínez-García and his co-authors examined seafloor sediments from the Subantarctic Zone of the Southern Ocean, southwest of Africa. When the last ice age peaked between 26,500 and 19,000 years ago, dust blowing off of Patagonia and the southern part of South America settled there, the drill core shows.

To gauge the changes in seawater composition at the time, the researchers examined the fossilized shells of microscopic marine animals called foraminifera, which eat plankton and preserve the local ocean chemistry in their shells. During the ice age, nitrogen levels dropped when iron-rich dust increased at the drill core site, Martínez-García discovered.

“It is particularly gratifying to see such persuasive evidence for the iron hypothesis now appear in the sediment record,” said Kenneth Coale, director of the Moss Landing Marine Laboratories in Moss Landing, Calif., who was not involved in the study.

In previous research, Coale and colleagues looked at the effect of iron enrichment in these waters for over 40 days. The new study shows “the effects of iron enrichment for over 40,000 years, providing a historical validation of the iron hypothesis,” Coale said.

Too big to succeed?

The dust level in the drill core suggests that about four to fives times more sediment fell across the Southern Ocean between South America and Africa during the ice age than the amount that falls there today, Martínez-García said.

“The magnitude of the area we are talking about is equivalent to three times the areas of the entire United States, and is maintained for several thousand years,” he told Live Science. “This helps put into perspective what we can do in terms of the modern ocean.”

The new study supported the argument that the amount of iron needed for geoengineering is untenable in the long term, said Gabriel Filippelli, a biogeochemist at Indiana University-Purdue University in Indianapolis. “It is difficult to imagine even a decade-long international effort of iron fertilization, sustained by continual ship runs dumping iron in a weather-hostile and isolated region of the world, let alone an effort that lasts a millennium,” Filippelli said.

But Filippelli also said he thinks the ice-age iron story is more complicated than just dust blowing in the wind. “The authors note only one source of iron — from above,” he said. There is also evidence that the oceans were richer in iron because of more river input during the ice ages, he said. Thus, the ice-age ocean had extra iron from above and from below.

About these ads

div { margin-top: 1em; } #google_ads_div_wpcom_below_post_adsafe_ad_container { display: block !important; }
]]>

Note: Well now that we’ve debunked this crackpot scheme, we can refocus on the priorities — like leaving fossil fuels in the ground, and transforming an economy based in extractive industry and relations.

-The GJEP Team

By Becky Oskin, March 20, 2014. Source: Live Science

Photo: Peter Barritt/Alamy

Photo: Peter Barritt/Alamy

During Earth’s last ice age, iron dust dumped into the ocean fertilized the garden of the sea, feeding a plankton bloom that soaked up carbon dioxide from the air, a new study confirms.

But the results deal a blow to some geoengineering schemes that claim that people may be able use iron fertilization to slow global warming. The planet’s natural experiment shows it would take at least a thousand years to lower carbon dioxide levels by 40 parts per million — the amount of the drop during the ice age.

Meanwhile, carbon dioxide is now increasing by 2 parts per million yearly, so in about 20 years human emissions could add another 40 parts per million of carbon dioxide to the atmosphere. Levels currently hover around 400 parts per million.

“Even if we could reproduce what works in the natural world, it’s not going to solve the carbon dioxide problem,” said Alfredo Martínez-García, a climate scientist at ETH Zurich in Switzerland and author of the study, published today (March 20) in the journal Science.

Iron and ice

The idea of fertilizing the ocean with iron to combat rising carbon-dioxide levels has intrigued scientists for more than 20 years, since the late researcher John Martin observed that the ice-age drop in carbon dioxide (noted in ice cores) synced with a surge in iron-rich dust.

The link between more iron in the ocean and less carbon dioxide in the air lies in the tiny ocean-dwelling plants called phytoplankton. For them, iron is an essential nutrient. In some regions, such as the Southern Ocean surrounding Antarctica, the water lacks iron but has plenty of the other nutrients that phytoplankton need to grow. Sprinkling a little iron dust in that region could boost plankton numbers considerably, the theory goes.

When climate changes during the ice age boosted the amount of iron-rich dust blowing into the Southern Ocean, the phytoplankton there grew and spread, gobbling up more carbon dioxide from the atmosphere in the process, Martin said.

The model, called the iron fertilization hypothesis, has been borne out by modern tests. Seeding small areas of the oceans does, indeed, cause big phytoplankton growth spurts. [7 Schemes to Geoengineer the Planet]

In the new study, Martínez-García and his co-authors examined seafloor sediments from the Subantarctic Zone of the Southern Ocean, southwest of Africa. When the last ice age peaked between 26,500 and 19,000 years ago, dust blowing off of Patagonia and the southern part of South America settled there, the drill core shows.

To gauge the changes in seawater composition at the time, the researchers examined the fossilized shells of microscopic marine animals called foraminifera, which eat plankton and preserve the local ocean chemistry in their shells. During the ice age, nitrogen levels dropped when iron-rich dust increased at the drill core site, Martínez-García discovered.

“It is particularly gratifying to see such persuasive evidence for the iron hypothesis now appear in the sediment record,” said Kenneth Coale, director of the Moss Landing Marine Laboratories in Moss Landing, Calif., who was not involved in the study.

In previous research, Coale and colleagues looked at the effect of iron enrichment in these waters for over 40 days. The new study shows “the effects of iron enrichment for over 40,000 years, providing a historical validation of the iron hypothesis,” Coale said.

Too big to succeed?

The dust levels in the drill core suggest that about four to five times more sediment fell across the Southern Ocean between South America and Africa during the ice age than falls there today, Martínez-García said.

“The magnitude of the area we are talking about is equivalent to three times the area of the entire United States, and is maintained for several thousand years,” he told Live Science. “This helps put into perspective what we can do in terms of the modern ocean.”

The new study supported the argument that the amount of iron needed for geoengineering is untenable in the long term, said Gabriel Filippelli, a biogeochemist at Indiana University-Purdue University in Indianapolis. “It is difficult to imagine even a decade-long international effort of iron fertilization, sustained by continual ship runs dumping iron in a weather-hostile and isolated region of the world, let alone an effort that lasts a millennium,” Filippelli said.

But Filippelli also said he thinks the ice-age iron story is more complicated than just dust blowing in the wind. “The authors note only one source of iron — from above,” he said. There is also evidence that the oceans were richer in iron because of more river input during the ice ages, he said. Thus, the ice-age ocean had extra iron from above and from below.

Note: Well, now that we’ve debunked this crackpot scheme, we can refocus on the priorities: leaving fossil fuels in the ground, and transforming an economy based on extractive industry and relations.

-The GJEP Team
