Uncertainty in Clean Air Rules Continues to Impede Planning

by Bob Shively, Enerdynamics President and Lead Instructor

Pity the poor owner or developer of a power plant in the U.S. Power plants are at least 20-year investments, and longer for some types of generation such as coal or nuclear. Yet in the U.S. we no longer have clarity on air quality rules for power plants from month to month, let alone from year to year.

First, a bit of history: The 1970 Clean Air Act gave the EPA jurisdiction over certain emissions from power plants, and Congress modified and expanded the act in 1990. Resulting regulation included rules on particulates, the Acid Rain Program regulating emissions of sulfur dioxide (SO2) and nitrogen oxides (NOx), and the ground-level ozone rule regulating emissions of NOx and volatile organic compounds (VOCs).

In 2005, EPA updated the rules for SO2 and NOx with the Clean Air Interstate Rule (CAIR). This rule was challenged and found invalid in 2008 by the U.S. Court of Appeals for the District of Columbia Circuit. The court let the rule stand until EPA could replace it but required EPA to do so as quickly as possible. EPA went through a lengthy process to develop a new rule and finally issued the finalized Cross-State Air Pollution Rule (CSAPR) in July 2011.

Most in the industry assumed this was going into effect and most spent significant time and money to develop (and begin implementing) plans to deal with the significant regulatory changes.  Strategies included plans to shut down many older less-efficient coal plants and to replace needed capacity with other sources such as combined-cycle gas turbines. They also installed emissions control equipment on units they planned to keep running.  And parties began trading emissions credits under the new rule to manage their financial impacts.

But, as often happens, certain parties sued in court to overturn the rule. On December 30, 2011, less than 48 hours before CSAPR was to go into effect, it was stayed by the U.S. Court of Appeals for the District of Columbia Circuit. Then in August 2012 the court ruled in a 2-1 decision that EPA had overstepped its authority in issuing the rules. The court, stating that EPA needed to give the states more time to craft their own rules, vacated the rule and required EPA to come up with yet another one. EPA says it is reviewing the decision and may appeal. Alternatively, EPA will have to start over on crafting a new rule in response to the court's 2008 ruling.

Meanwhile, power plant owners and utilities are left with continuing uncertainty.  No one knows what the emissions rules will be in the near future.  Should owners move forward with shutting down older coal plants and installing emissions control equipment on others?  Or should they just put all plans on hold and wait?  How should they plan for the presumed upcoming implementation of the new EPA mercury and air toxics emissions rules? Under what assumptions should they develop future resource plans and develop new projects?

The problem with this state of affairs is that it fosters very short-term planning. The result is that our existing power plant fleet fails to get modernized; older, less-efficient, more-polluting units keep running; and new power plants are developed based on the lowest level of risk. In practice that means new plants will be natural gas combined-cycle turbines, simple-cycle turbines, or wind. Meanwhile, more innovative concepts such as integrated gasification combined-cycle (IGCC), combined heat and power with district heating, and new-generation nuclear units are left to be developed by entities in other countries. And sometime in the future, we will have to import these technologies rather than profit by selling them to others.

Posted in Electricity

A Brighter Future for Coal Integrated Gasification Combined-Cycles? Part II

by Bob Shively, Enerdynamics President and Lead Instructor

Last week in Part I of this post we examined what a coal Integrated Gasification Combined-cycle (IGCC) unit is, how it works, and how it may change the future of coal as a fuel source. So is IGCC a mainstream reality and, if so, where is it currently being implemented?

Development of IGCC has been slowed by uncertain costs and a lack of operating experience, with only limited demonstration units around the world. But recent developments will soon give us a new window into the actual costs and operational capabilities of IGCC units. In the U.S., Duke will soon complete the 618 MW Edwardsport plant in Indiana, and Southern Company's 582 MW Kemper County plant is under construction in Mississippi. These units are not initially designed to capture carbon, although they are designed to easily add carbon capture later. Each project has been beset by rising costs throughout construction, which has made them less than popular with consumers and regulators in their home states. Until recently this made it appear unlikely that more projects would be built in the U.S. until more was known about the results of these initial units[1].

But this perception may be changing with the $2.5 billion Texas Clean Energy Project. The 400 MW project is planned to include 90% carbon capture, with the resulting CO2 used for injection into oil fields in the Permian Basin. Such injection helps oil flow from underground formations, thus stimulating production. This project differs from the Duke and Southern projects in that it is being developed by an independent power producer rather than by a utility protected by inclusion of costs in customer rates. Thus it must be financed on its own merits rather than off a utility's balance sheet. The project has already secured $450 million in grants from the U.S. Department of Energy and has acquired the necessary permits.

The likelihood of the project moving to construction increased recently with word that China Petrochemical Corp, commonly called Sinopec, is in talks to acquire an equity stake and provide up to $1 billion to help fund the project.  Sinopec’s participation makes it much more likely that the project will be able to complete its financing needs and fosters U.S. and Chinese government goals of cooperation on clean energy research and development.

Again, hearing from DOE Secretary Steven Chu:  “Cooperation with China on clean energy is good for Americans and good for the world. As the world’s largest producers of energy, consumers of energy and greenhouse gas emitters, the energy and climate challenge cannot be solved without the United States and China. What we do — or do not do — in the coming decades matters to the entire world.”[2]

While the role of IGCC in our future generation mix remains to be seen, the current developments in the U.S. along with developments in Europe and Asia are noteworthy and deserve a close watch as they navigate this new terrain.

Posted in Electricity, Natural Gas

A Brighter Future for Coal Integrated Gasification Combined-Cycles?

by Bob Shively, Enerdynamics President and Lead Instructor

Turbine for combined cycle power plant (Photo credit: Worklife Siemens)

In the U.S. and some other markets around the world, the future of coal generation appears uncertain as concerns about emissions including sulfur dioxide (SO2), nitrogen oxides (NOx), mercury, and carbon dioxide (CO2) have slowed new construction. Yet reserves of coal remain robust in many key electric markets including China, India, Russia, and the U.S. As U.S. Energy Secretary Steven Chu stated to the Washington Post in 2009: "Even if the United States turns its back on coal, China and India will not, and so, given the state of affairs, I would prefer to say let's try to develop technologies that can get a large fraction of the carbon dioxide out of coal."

One way to clean up coal generation is to increase the efficiency of power plants, thus reducing the amount of emissions per MWh. Significant improvements have been made in this area with the development of supercritical and ultra-supercritical units. And currently available emissions technology including baghouses, scrubbers, and selective catalytic reduction can significantly reduce emissions of SO2, NOx, and mercury. But no off-the-shelf technologies are available to capture carbon emissions.

The potential to reduce carbon emissions comes through post-combustion technologies placed on the exhaust stack or through pre-combustion technologies[1]. Post-combustion technologies have proven to work in limited demonstration projects, but their operational and economic effectiveness in plant-scale applications remains uncertain.

An alternative is to convert coal to a synthetic gas similar to natural gas through a gasification process and then to utilize it in a gas combined-cycle turbine[2].  This provides multiple benefits including the higher efficiency of the combined-cycle gas turbine generation process, the operational flexibility of a gas turbine that can mesh well with other clean generation such as wind and solar, and the opportunity to capture and sequester carbon prior to combustion.

Development of IGCC has been slowed by uncertain costs and lack of operating experience with only limited demonstration units around the world.  But recent developments will soon provide a new window into actual costs and operational capabilities of IGCC units.

Next week's post on Energy Currents will look at current IGCC developments in the U.S. along with developments in Europe and Asia that bear watching.


[1] See “Coming Clean on Coal” available at http://www.enerdynamics.com/documents/Insider41408_000.pdf

[2] For a description of the IGCC process, see  “A Closer Look at IGCC Coal Generation” available at http://www.enerdynamics.com/documents/Insider72406.pdf

Posted in Electricity, Natural Gas

Energy Efficiency – Real Usage Reductions or Simply a Mirage?

by Bob Shively, Enerdynamics President and Lead Instructor

According to the Energy Information Administration (EIA), peak electric loads in the U.S. are 20,800 MW lower than they would be without utility energy efficiency programs[1]. If correct, this represents perhaps 100 power plants that would otherwise have been required but did not need to be built. Similarly, EIA data shows that electric usage is 86,926,000 MWh lower than it would be without these programs[2].

But others argue this is a mirage due to the "rebound effect," a theory that says people who reduce energy consumption through efficiency simply take their saved money and spend it on other energy-consuming devices, pushing their demand and usage back up. For instance, a report prepared for the Breakthrough Institute[3] suggests that rebound effects can erode energy efficiency gains by 10-30% in developed countries and as much as 40-80% in developing nations[4].

Their belief is that the effect will be much higher in developing countries because, for example, someone who installs an efficient heater in one room may take the savings and heat other rooms that previously they could not afford to heat. The theory, if proven true, matters not because it suggests we should stop pursuing energy efficiency, but because it implies the value of efficiency programs may be significantly lower than initially thought, making many programs no longer cost-effective.

The American Council for an Energy-Efficient Economy (ACEEE) surveyed studies of the rebound effect in a white paper released in August 2012[5]. They examined both direct rebound effects (e.g., if I weatherize my house, I might now set my winter thermostat to 74° instead of 70° since it costs less to heat the house) and indirect effects (e.g., if I save a lot of money on my heating bill after weatherizing, I might use it to build a new addition that consumes additional energy). The ACEEE concluded that different uses have different rebound effects:

  • Passenger vehicles: 10%
  • Space heating: 1-12%
  • Space cooling: 1-13%
  • Residential lighting and appliances: 0-12%

Thus the ACEEE suggests that for each unit of reduction achieved through energy efficiency investment, we get roughly 0.9 units of reduced overall energy use. And indeed, EIA data shows that average consumption per household has declined significantly since 1980 even while the average size of a house has increased:

Source: EIA Residential Energy Consumption Survey, average consumption per household[6] and average home size[7] (charts omitted).
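The ACEEE's 0.9-unit conclusion is just the gross saving scaled by one minus the rebound rate. A quick sketch of that arithmetic (the rebound rates are drawn from the ACEEE ranges above; the 100 MWh gross saving is a made-up illustration):

```python
# Net savings after rebound: net = gross * (1 - rebound_rate)
def net_savings(gross_mwh, rebound_rate):
    """Energy actually saved once rebound consumption is added back."""
    return gross_mwh * (1.0 - rebound_rate)

gross = 100.0  # MWh of engineering-estimate savings (hypothetical)
for end_use, rate in [("passenger vehicles", 0.10),
                      ("space heating, high end", 0.12),
                      ("residential lighting, low end", 0.00)]:
    print(f"{end_use}: {net_savings(gross, rate):.0f} MWh net of rebound")
```

At the roughly 10% rebound typical of the uses surveyed, each unit of gross efficiency savings delivers about 0.9 units of real reduction.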

Based on this data, we can conclude that, at least for the U.S., increasing energy efficiency has allowed us to increase the size of our homes while using less energy. And we can say that, at least for households, any rebound effect is not overwhelming the benefits of more efficient homes.

Posted in Electricity, Renewables

When Onsite Energy Training Makes Economic Sense

By John Ferrare, Enerdynamics’ CEO

At the public seminars we host across the country, I often meet groups of attendees from the same company who are taking advantage of our buy-three-get-the-fourth-free policy. It’s not unusual to see the same company send employees to the same seminars located in different cities. I’m especially pleased to see this as it usually means that employees attending one of our seminars are returning to work and recommending it to their colleagues. It’s great for our public seminar business – but is this the best use of an organization’s training dollars?

I often advise clients who have 8-10 employees interested in a course that it is more economical to bring the seminar onsite. Doing so has many advantages:

  • Saves employee travel time
  • Saves employee travel expenses
  • Allows content to be customized to fit a company’s specific training needs
  • Gets people from the same organization together, which facilitates networking and sharing of ideas
  • Trains more employees for less cost per employee

But here’s some interesting math. Let’s say you are sending four employees to our Electric Business Understanding (EBU) seminar in Chicago this fall. Below is a ballpark estimation of what that might cost your company:

  • Seminar fee for four employees: 3 x $1190 = $3,570 (remember that the fourth attendee is free)
  • Airfare and transportation to/from airport: 4 x $700 = $2,800 (of course this depends on airline and location, but fares have increased significantly in the past year)
  • Two nights’ hotel for four employees: 4 x $450 = $1,800
  • Miscellaneous expenses including food for four employees: 4 x $150 = $600
  • The grand total is $8,770

This figure doesn’t include the cost in unproductive time for four employees to fly from your site to Chicago and back. (And let’s not forget the personal cost of the obligatory retail excursion down Michigan Avenue!)

So here’s where an onsite seminar really pays off. To send just four employees to Chicago, your cost is already more than 50% of what it would cost to bring the same seminar onsite for up to 30 employees! Send a group of four twice and you are well beyond the cost of bringing the same seminar to your company site – with the option of including 22 additional employees. If you calculate the per-employee cost of these two options, you can see that the onsite option is tremendously more cost-effective:

  • Approximate cost per employee to attend EBU in Chicago: $2,200
  • Approximate cost per employee to bring EBU onsite for 30 employees: $450
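The arithmetic above is easy to reproduce. In this sketch the travel-side figures are the estimates from this post, while the $13,500 onsite total is simply the quoted $450 per employee times 30 employees (an inference, not a published price):

```python
# Travel option: seminar fees under the buy-three-get-the-fourth-free
# policy, plus the per-person travel estimates quoted above.
attendees = 4
seminar_fees = 3 * 1190        # fourth attendee is free
airfare = attendees * 700
hotel = attendees * 450        # two nights per person
misc = attendees * 150         # food and incidentals
travel_total = seminar_fees + airfare + hotel + misc

# Onsite option: $450 per employee for up to 30 employees (inferred).
onsite_total = 30 * 450

print(travel_total)                  # grand total for four travelers
print(travel_total / attendees)      # per-employee cost, travel option
print(travel_total / onsite_total)   # fraction of the onsite price
```

This reproduces the $8,770 grand total and the roughly $2,200 per-employee figure, and shows that sending just four travelers already costs about 65% of the full 30-person onsite option.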

And you get the added benefits of onsite training listed above.

One last note about onsite training: First-time clients often voice concerns about filling a class. If you are offering one of our basic gas or electric business understanding sessions, or even one of our market dynamics sessions, my experience is this: The only companies that fail to fill these classes are very small organizations and those that do not market the seminar in a way that helps employees understand what's being offered. Countless times I've seen a new client fill one of these classes (often with a waiting list) just by making it available to those who could benefit from it.

If you’d like to explore the costs and benefits of onsite training including the results other companies like yours have seen, please call me at 866-765-5432 (extension 700) or e-mail me at jferrare@enerdynamics.com.

Posted in Energy Training

Demand Side Management and Its Impact on Wholesale Electricity Markets

By Matthew Rose, Enerdynamics Instructor

The debate over the impact of demand response and energy efficiency in the wholesale power markets remains active. On the surface, there is an intuitive belief that if we better manage and even reduce demand, especially at the higher-cost peak periods, then the market will benefit from lower wholesale prices.

An initial effort to demonstrate this idea was attempted in 2007 when the Brattle Group conducted a research study designed to quantify demand response benefits in PJM [1]. The results indicate that a 3% reduction of demand during peak periods translates to wholesale price reductions of 5% to 8% on average, with reductions even greater in some regions within PJM. The study has been held up by policymakers as reason to advance demand side management (DSM) options for customers.
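As a back-of-the-envelope illustration of the Brattle ratios (the 3% demand reduction and 5-8% price response come from the study; the $80/MWh peak price is a hypothetical starting point, not a figure from the report):

```python
# Scale a hypothetical peak wholesale price by the Brattle study's
# observed 5-8% price response to a 3% peak-demand reduction.
base_price = 80.0   # $/MWh at peak (illustrative)
demand_cut = 0.03   # 3% reduction in peak demand

for price_drop in (0.05, 0.08):
    new_price = base_price * (1 - price_drop)
    leverage = price_drop / demand_cut
    print(f"{price_drop:.0%} price drop -> ${new_price:.2f}/MWh "
          f"(~{leverage:.1f}x the demand reduction)")
```

The leverage shown (roughly 1.7x to 2.7x) reflects how steep the supply curve is at the peak: a small demand cut backs generation down the steepest part of the bid stack.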


The Federal Energy Regulatory Commission (FERC) has been a strong proponent of advancing DSM as a resource in the wholesale market.  FERC has consistently ruled to position DSM as a wholesale market resource competing on a “level playing field” with supply options. A quick scan of the formal ISO/RTO operations indicates a growing number of ISO/RTO-facilitated markets open to DSM resources addressing energy, capacity, and ancillary services opportunities.

DSM programs on the rise
As evidence, throughout the organized wholesale markets there are active programs to include price-responsive demand response and energy efficiency in ISO/RTO markets. Customers, either directly or through a third-party agent (e.g. curtailment service provider), routinely bid customer demand response reductions in the relevant markets to compete through the auction process. If the customer resources are “cleared,” then customers must reduce their loads as bid, similar to conventional supply resources. According to some of the latest data, the role of demand response in ISO/RTO operations remains very strong. The demand response potential at the ISO/RTO level increased 16% between 2009 and 2010. This trend seems to be continuing today.

Demand Response Resource Potential at U.S. ISOs and RTOs

Region                                   2009 MW   % of 2009 Peak   2010 MW   % of 2010 Peak
California ISO                             3,267             7.1%     2,135             4.5%
Electric Reliability Council of Texas      1,309             2.1%     1,484             2.3%
ISO New England                            2,183             8.7%     2,116             7.8%
Midwest ISO                                5,300             5.5%     8,663             8.0%
NY-ISO                                     3,291            10.7%     2,498             7.5%
PJM Interconnection                       10,454             7.2%    13,306            10.5%
Southwest Power Pool                       1,385             3.5%     1,500             3.3%
Total                                     27,189             6.1%    31,702             7.0%

Source: 2011 Assessment of Demand Response and Advanced Metering, Staff Report, Federal Energy Regulatory Commission, November 2011.
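The 16% growth figure cited earlier falls straight out of the table totals; a quick check (numbers transcribed from the FERC table):

```python
# Year-over-year change in total U.S. ISO/RTO demand response potential.
total_2009 = 27189  # MW
total_2010 = 31702  # MW
growth = (total_2010 - total_2009) / total_2009
print(f"{growth:.1%}")  # about 16.6%, consistent with the ~16% cited
```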

Measuring DSM’s impact on wholesale prices
The use of demand response, including price-responsive demand reduction, direct load control, and displacement through on-site generation or active building controls, remains a viable resource option in dispatch operations in wholesale power markets. What remains cloudy in these transactions is the direct and explicit impact these efforts have on wholesale prices.

There is general consensus that strategic demand response and even energy efficiency resources result in lower wholesale prices. However, there are still questions about how much, how long, and any variations across regions and locations. There seems to be greater attention on quantifying the benefits and costs of DSM as a resource option including proposed efforts in the Commonwealth of Pennsylvania where distribution utilities are required to reduce demand in their territories by 4.5% for the 100 hottest hours this coming summer. The plan calls for a formal analysis of the demand response efforts and determination of their impacts on ISO/RTO operations and wholesale prices [2].

Energy efficiency in forward capacity markets
One of the recent initiatives in some of the ISOs/RTOs has been the inclusion of energy efficiency as a resource in the forward capacity markets. Energy efficiency differs from demand response as it provides a measurable decrease in consumption over the course of a year rather than just a reduction in consumption for selected hours in a given year. A large commercial or industrial establishment could presumably invest in energy efficiency improvements and bid the savings from the project as a resource in the capacity market. This poses a whole new set of issues for the system operators who are most interested in making sure the capacity reductions are measurable and persistent over the course of a given year in the future.

This opportunity has been in place at ISO-New England for a number of years and became a formal element in the full forward capacity market in 2010. PJM also has offered opportunities for energy efficiency projects and demand response resources to bid into its forward capacity market (i.e. Reliability Pricing Model). In combination with the growth of DSM throughout the PJM region as a result of state mandates, the sample of projects continues to grow. In fact, the most recent PJM Base Residual Auction for its capacity market yielded large increases in DSM projects as part of the pool of resources.  The most recent results follow:

Auction              New Generation   Generation Uprates   Demand Response   Energy Efficiency
2015/2016 Auction           4,898.9                477.4          14,832.8               922.5
2014/2015 Auction             415.5                341.1          14,118.4               822.1

(Values in MW.)

PJM: 2015-16 Base Residual Auction Results, PJM Docs #699093, May 2012.

Where all this leads is not entirely certain, but there are signals as to where things are moving directionally.  It appears that the FERC continues to advance DSM in the ISO/RTO wholesale markets.  Additionally, with the growth of advanced meters and alternative rate options being offered at the wholesale and the retail levels, customers will be offered greater opportunity and incentive to participate. Though regional variations will remain, wholesale cost impacts will differ, and year-to-year participation levels will be difficult to predict, the impact of DSM is real and likely to expand moving forward.


References

[1] Quantifying Demand Response Benefits in PJM, The Brattle Group, Prepared for PJM Interconnection, LLC and the Mid-Atlantic Distributed Resources Initiative (MADRI), January 29, 2007.

[2] The general discussion of the demand reduction requirements is included in Pennsylvania House Bill 2200-Act 129 as passed by the General Assembly in 2008.

Posted in Electricity

Gas-to-Liquids: What Is It and How May It Change the Natural Gas Marketplace? Part II

by Christina Nagy-McKenna, Enerdynamics Instructor

NOTE: In Part I of this article we discussed what GTL is and how it works. Part II looks at where GTL is becoming a major market player as well as if and how GTL could carve a niche in the U.S. gas market.

Where is it done today or being planned for tomorrow – and who is doing it?
The economics of GTL demand that the process take place in an area that possesses a large, long-term supply of cheap natural gas.  Countries with high natural gas reserves and low marginal costs for natural gas such as Qatar, Nigeria, and Trinidad and Tobago are great candidates.

There are several plants in place today:

    • Royal Dutch Shell owns and operates the world’s largest GTL plant, Pearl GTL in Qatar. It also produces diesel fuel from natural gas in Malaysia.
    • SASOL, a South African company, has built a GTL plant in Qatar.

Photo: Storing liquid fuels in Qatar. Among its products, Pearl GTL will make enough diesel to fill over 160,000 cars a day. Photo courtesy of Shell Flickr album: http://www.flickr.com/photos/royaldutchshell/5552566092/in/set-72157623792015947.

Known future plans include the following:

  • Chevron is partnering with the Nigerian National Petroleum Corporation to build a GTL plant at Escravos, Nigeria. It is expected to begin operation in 2013.
  • Petrobras, a Brazilian company, intends to place two small experimental GTL plants offshore. It would be cost-prohibitive to build pipelines from the offshore wells to onshore GTL plants; thus, the gas would otherwise be stranded.
  • Royal Dutch Shell is studying the feasibility of a plant in Louisiana.
  • SASOL is studying the feasibility of a plant near Lake Charles, La.

What are the economics of GTL?
The capital outlay for a GTL plant is enormous: the Pearl GTL plant owned by Shell has an estimated cost of US$18-19 billion. Variable costs are unpredictable, particularly for the natural gas that will be converted into liquid. While natural gas prices have been low for the past two years, the volatility of the market is dramatic; in the past 12 years, gas has swung from $15/MMBtu to $2/MMBtu. This volatility makes it very difficult to forecast future prices. Operating costs and shipping costs must also be factored into the equation. Shipping costs are estimated to be the same as for oil tankers.

Could GTL work in the United States?
GTL could physically work in the U.S. However, in order to work financially, a GTL processing plant would need to be located next to a source of natural gas that would remain cheap and stable for the next 20 years. It is possible that a large shale gas field could provide such a stable fuel source; however, we have no data from the modern era of natural gas to support such stable market behavior. The U.S. Energy Information Administration (EIA) estimates that such stability would require natural gas to be in the range of $6/MMBtu as long as oil hovers around $100/barrel.
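To see why long-term gas prices dominate the economics, consider a rough feedstock calculation. The $6/MMBtu gas price comes from the EIA figure above; the 5.8 MMBtu-per-barrel energy content and roughly 60% gas-to-liquids conversion efficiency are illustrative assumptions of mine, not figures from the article:

```python
# Rough GTL feedstock cost per barrel of liquid product.
MMBTU_PER_BBL = 5.8   # energy in a barrel of oil-equivalent (assumed)
efficiency = 0.60     # fraction of gas energy recovered as liquids (assumed)
gas_price = 6.0       # $/MMBtu, the EIA threshold quoted above

gas_per_bbl = MMBTU_PER_BBL / efficiency   # MMBtu of gas per barrel out
feed_cost = gas_per_bbl * gas_price        # feedstock dollars per barrel
print(f"{gas_per_bbl:.1f} MMBtu of gas -> ${feed_cost:.0f}/bbl in feedstock")
```

Under these assumptions, roughly $58 of gas goes into each barrel of product; against $100/barrel oil, the remaining margin must cover operations, shipping, and recovery of a Pearl-scale capital outlay, which is why a cheap, stable 20-year gas supply is the gating requirement.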

In comparison, when it reaches peak production in mid-2012, the Pearl GTL facility will process close to 1.6 Bcf/day of wellhead gas from the North Field in Qatar, the largest non-associated gas field in the world. It will process the equivalent of 3 billion barrels of oil-equivalent over its useful life. Pricing for gas from this field is expected to remain stable.

If so, how would it change gas and diesel markets?
The impact of GTL on transportation fuels is expected to be very modest. EIA forecasts that at least 1 million barrels of transportation fuel would need to be produced per day to have any meaningful impact on diesel prices; worldwide GTL production is currently estimated at a little over 400,000 barrels per day.

So, while the future for GTL is bright in some parts of the world, it remains to be seen whether it will be meaningful to the U.S. market. Long-haul trucking is used extensively in the U.S., and diesel fuel accounts for nearly half of all vehicle fuel worldwide. Developing countries also use diesel for buses in addition to trucks, so GTL-produced diesel may well find a robust global market.



Posted in Uncategorized

Gas-to-Liquids: What Is It and How May It Change the Natural Gas Marketplace? Part I

by Christina Nagy-McKenna, Enerdynamics Instructor

Gas-to-liquids (GTL) is a technology that chemically converts natural gas to a liquid synthetic fuel that can be used in place of diesel and jet fuels. The process is based on work done by scientists in the 1920s and the 1970s. While GTL can also produce other chemical feedstock, the primary commercial interest in this technology is the creation of transportation fuels.

For many years natural gas and crude oil prices moved in tandem with one another: if one went up or down, so did the other. But since 2009, when oil prices rose again after their spectacular rise and dramatic fall in 2008, natural gas prices have remained low. Analysts agree that the two fuels are now decoupled and move independently of one another. U.S. gas producers, flush with production from shale gas fields, have spent two years watching natural gas prices drop to levels not seen in a decade.

Although the U.S. imports liquefied natural gas (LNG), producers are now considering exporting LNG to Europe and Asia as gas prices there are much higher. GTL offers another way to monetize natural gas and get a piece of the more lucrative transportation fuels market instead of just a larger piece of the traditional end-user energy market.

How does it work?
GTL takes hydrocarbons such as methane-rich natural gas and converts them into longer-chain hydrocarbons such as diesel fuel. There are two primary processes: Fischer-Tropsch and Mobil. Fischer-Tropsch was developed by two German scientists in the 1920s, while the Mobil process was developed in the mid-1970s. The Fischer-Tropsch process begins with the partial oxidation of natural gas (methane) into carbon dioxide, carbon monoxide, hydrogen, and water. The ratio of hydrogen to carbon monoxide is adjusted, and the excess carbon dioxide and water are removed. This leaves a synthesis gas (syngas) that is reacted over an iron or cobalt catalyst. The result is liquid hydrocarbons and other byproducts.
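The chemistry described above can be summarized in idealized textbook form (real partial oxidation also yields some carbon dioxide and water, as noted):

```latex
% Partial oxidation of methane to synthesis gas:
\mathrm{CH_4} + \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{CO} + 2\,\mathrm{H_2}

% Fischer-Tropsch chain growth over an iron or cobalt catalyst:
(2n+1)\,\mathrm{H_2} + n\,\mathrm{CO} \;\longrightarrow\; \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}
```

The hydrogen-to-carbon-monoxide ratio adjustment mentioned above matters because chain growth consumes H2 and CO in roughly a 2:1 ratio.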

GTL process using Fischer-Tropsch process

The Mobil process converts the natural gas to a syngas and then converts the syngas to methanol. The methanol is then polymerized into alkanes over a zeolite catalyst.

Now that we’ve laid out what GTL is and how it works, next week’s blog post will look at where GTL is becoming a major market player as well as if and how GTL could carve a niche in the U.S. gas market.  Stay posted for Part II!

Posted in Uncategorized

Is Offshore Wind Power a U.S. Reality?

by Dan Bihn, Enerdynamics Instructor

Note: This blog post is an excerpt from an article recently featured in our Q2 2012 issue of Energy Insider. To read the full article, click here.

Today wind turbines generate about 3% of electricity in the United States, all of it onshore. Europe gets about 6% of its electricity from wind, and nearly 4 GW of its roughly 90 GW of installed wind capacity now comes from turbines mounted on the seabed miles offshore. This is an impressive statistic considering the first commercial-scale offshore turbine was installed in 2001 and the real push for offshore didn't kick off until 2007.

Offshore wind isn’t easy and it isn’t cheap. Electricity and water don’t mix. Steel and saltwater don’t mix. And few things mix well with hurricanes. Not surprisingly the International Energy Agency (IEA) estimates that offshore wind costs two to three times more than onshore wind.

So why would anyone want to put a 300-foot-tall wind turbine 20 miles out to sea?

The main reason is that offshore is where strong, consistent, unobstructed winds are closest to coastal population centers. The result is wind output that more closely resembles a baseload power plant than a variable resource. And there are other advantages: no need to buy land (try that 20 miles from New York City); no one around to complain about ruined views (a 400-foot structure is completely over the horizon 25 miles offshore); and blade length isn't constrained by trucking limits on the autobahn or the interstate.
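The over-the-horizon claim checks out against the standard distance-to-horizon approximation; the formula is a textbook geometric estimate, not something from the article:

```python
import math

# Distance to the horizon for a structure of height h (feet), as seen by
# an observer at sea level: d ~ 1.22 * sqrt(h) statute miles (a standard
# geometric approximation).
def horizon_miles(height_ft):
    return 1.22 * math.sqrt(height_ft)

print(f"{horizon_miles(400):.1f} miles")  # 24.4 miles for a 400-ft structure
```

So a 400-foot turbine does drop fully out of sight roughly 25 miles offshore, consistent with the figure quoted above.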

Initial offshore wind development in the U.S.
While the U.S. doesn’t have any commercial offshore wind turbines as of 2012, the idea isn’t new here. Since 2001, a group of developers has been proposing an ambitious 468 MW wind farm off the coast of Cape Cod — a project called Cape Wind. The plan is to build 130 3.6-MW turbines four to 11 miles offshore. The tips of the 182-foot blades will reach 440 feet above the water, making them visible onshore.

Being visible is the problem — and the source of local and regional opposition from the late Ted Kennedy to former Massachusetts Governor Mitt Romney. While the project is still officially in play, it seems unlikely it will be constructed. Time will tell.

But the idea of offshore wind in the U.S. is far from dead. In 2010, the U.S. Department of the Interior (whose jurisdiction extends to offshore federal waters) launched the “Smart from the Start” initiative to promote the construction of 10 GW of offshore wind by 2020 and another 44 GW by 2030.

A total of 10 GW of offshore wind along the Atlantic seaboard probably means 10 to 30 wind farms. Each could potentially construct its own underwater transmission line. Surprisingly, underwater transmission lines can be cheaper than traditional overhead lines — especially when those overhead lines need to go through heavily populated areas.

But it may be more economical to aggregate the power from these future farms and use a single system to bring that power back to shore. This is exactly what Google and financial heavyweight Marubeni are proposing. It’s called the Atlantic Wind Connection — a high-voltage DC superhighway with floating substations. The 230-mile offshore transmission system would span the U.S. mid-Atlantic seaboard from New Jersey to Virginia, bringing 7 GW of proposed offshore wind power to shore. On May 15, 2012, the Department of the Interior moved its permit to the next stage.



Breaking Down Net Zero Building: Reality or Wishful Thinking?

By Ashley Halligan, an analyst at Software Advice and guest author to Enerdynamics’ Energy Currents 

Commercial and industrial facilities throughout the United States account for 40% of all energy use. In response, the Energy Independence and Security Act of 2007 requires that all federal buildings achieve net zero energy consumption by 2030 – and all commercial buildings by 2050.

But what exactly does “net zero” consumption imply? Though there are variations of this phrase, the most widely accepted definition is as follows: net zero indicates that a building has generated at least as much energy as it has consumed over a 12-month benchmarking period – the standard benchmark because it factors in seasonal variation.
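As an illustration of that 12-month balance, here is a minimal sketch with made-up monthly figures (the numbers are hypothetical, not drawn from any real building):

```python
# Hypothetical monthly figures in MWh -- consumption dips and solar
# generation peaks in summer, illustrating why a full 12-month
# benchmarking period is used rather than any single month.
monthly_consumption = [80, 70, 65, 60, 55, 50, 55, 60, 60, 65, 70, 80]
monthly_generation  = [50, 60, 75, 85, 95, 100, 100, 95, 80, 65, 50, 45]

consumed = sum(monthly_consumption)   # 770 MWh
generated = sum(monthly_generation)   # 900 MWh

# Net zero: annual generation at least equals annual consumption,
# even though several individual months run a deficit.
is_net_zero = generated >= consumed
print(is_net_zero, generated - consumed)  # True, with a 130 MWh surplus
```

Note that the winter months all run deficits; only the annual total determines net zero status.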

Several experts have chimed in to discuss how an existing facility can make retrofits when undertaking a net zero project, how to begin a from-the-ground-up initiative, and whether the idea is, in fact, possible on a wide scale.

First is the case study of McCormick, the food conglomerate. Sustainable Manufacturing Manager Jeff Blankman explains how the company developed a five-year energy efficiency plan at its distribution center in Belcamp, Maryland – a 363,000-square-foot facility. Initially, net zero was not the goal – rather an overall reduction in consumption through common upgrades like efficient lighting, HVAC systems, and motion sensors.

After seeing a 55% reduction in consumption, McCormick decided to go further by installing photovoltaic solar panels on the facility's roof, provided by Constellation Energy. After the standard 12-month benchmarking period, the company was pleasantly surprised to find it had not only achieved net zero consumption – but also generated an energy surplus.

Blankman explains, “The most important maneuver in a net zero makeover is to focus on energy efficiency first. You must reduce consumption – making a facility as efficient as possible.”

Then, an alternative energy resource can be used to supply the remaining energy needed to operate.
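In rough numbers – entirely hypothetical, not McCormick's actual figures – the efficiency-first strategy Blankman describes looks like this:

```python
# "Efficiency first": cut consumption before sizing renewables.
baseline_consumption = 1000.0  # MWh/year, a made-up baseline
efficiency_reduction = 0.55    # a 55% cut, echoing the reduction cited above

remaining_load = baseline_consumption * (1 - efficiency_reduction)
# The renewable system now only needs to cover 450 MWh/year instead
# of 1,000 -- a far smaller (and cheaper) installation to reach net zero.
print(round(remaining_load, 1))  # 450.0
```

The order matters because every unit of consumption avoided is a unit of generating capacity that never has to be bought.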

In the case of ground-up initiatives, the experts offer four considerations:

1) Do thorough energy and cost modeling beforehand, and don’t be too optimistic. Keep in mind things like occupant behavior and climate fluctuations. It’s impossible to predict consumption with complete accuracy.

2) Research funding opportunities and tax incentives. There are many state and federal-level funding options and tax incentives that can save your organization significantly.

3) Have an integrated plan for the design phase. Involving people from all roles of a project – the design team, management team, CEO, etc. – brings a collective set of perspectives to the project analysis.

4) Know the project will be challenging and will require ongoing insight. Dru Crawley, Director of Building Performance at Bentley Systems, says, “It requires a commitment from the building owner and operator to ensure the design intent is carried out. It requires that all energy use in the building be considered. It also takes periodic testing – energy simulation – to ensure performance goals remain on track.”

So, from a wide-scale perspective, is this a reality or wishful thinking?

Crawley suggested that net zero communities may be more realistic than expecting every building to achieve an equilibrium between generation and consumption. Multi-story buildings, for example, cannot generate the same amount of solar power as a large single-floor distribution center like McCormick’s. Crawley explains, “The potential over-supply from lower energy-intensity one and two-story buildings can offset higher energy-intensity higher-rise buildings.”
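Crawley's community-level idea can be sketched with hypothetical annual balances – generation minus consumption – for a handful of imaginary buildings:

```python
# Hypothetical annual net balances in MWh (generation minus consumption).
# Low-rise buildings with large roofs run surpluses; high-rises run deficits.
building_net_balance = {
    "one-story warehouse": 120,
    "two-story office": 40,
    "mid-rise apartments": -60,
    "high-rise office": -90,
}

# No single building is required to hit net zero on its own; the
# community reaches net zero if surpluses offset deficits in aggregate.
community_total = sum(building_net_balance.values())
print(community_total >= 0)  # True: a small community-wide surplus
```

Here two of the four buildings fail individually, yet the community as a whole still nets out positive.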

Read the original story here.
