The Triumphs

By Baselinemag  |  Posted 2006-12-06

Technology teams at Chevron, Hendrick Motorsports and the Bank of New York are named in our annual report on the starkest I.T. lessons of the past year. Did those companies shine or earn demerits?



The Triumphs
$138 billion
The potential revenue at Chevron from a computer-assisted oil discovery.

For years, Chevron pumped billions of dollars into deep-water oil exploration in the Gulf of Mexico. In September, the company and its partners, Devon Energy and Statoil, may have struck a massive return—buried five miles beneath the surface.

The successful test drilling of a well 175 miles south of the Louisiana coastline is potentially one of the most substantial oil discoveries in years, with estimates placing the total oil capacity of the surrounding area at between 3 billion and 15 billion barrels.

But the oil didn't spout up by itself: The successful test of the "Jack 2" test well marked the culmination of 10 years of planning and investing—and the maturing of seismic imaging, the mix of science and technology used to determine geologic compositions.

In 2001, when geophysics manager Brad Hoffman and his team sought to speed up the production of seismic images, San Ramon, Calif.-based Chevron upgraded its computing systems to 64-bit Linux clusters. The company now has more than 1,000 central processing units working to create underground projections from algorithms and sound-wave tests.

In 1999, testing a 600-square-kilometer area took about 30 days. Today, Chevron can chart three times that space over a weekend—a 45-fold improvement. And the stepped-up imaging helped the company navigate the Gulf's thick, salt-canopied bottom, which Hoffman deemed more difficult to see beneath than the sandstone and shale sea floors the company usually surveys.
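The 45-fold figure follows directly from the numbers above. A back-of-the-envelope sketch (the areas and day counts come from the article; "over a weekend" is taken here as two days):

```python
# Back-of-the-envelope check of Chevron's stated 45-fold imaging speedup.
area_1999_km2 = 600          # area surveyed in 1999
days_1999 = 30               # ...in about 30 days

area_today_km2 = 3 * 600     # "three times that space"
days_today = 2               # "over a weekend" (assumed: 2 days)

throughput_1999 = area_1999_km2 / days_1999     # 20 km^2 per day
throughput_today = area_today_km2 / days_today  # 900 km^2 per day

speedup = throughput_today / throughput_1999
print(speedup)  # 45.0 -- matches the article's figure
```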

Chevron says the 200-mile strip in the Gulf, of which the Jack 2 site covers only a few miles, could yield up to 15 billion barrels. While the company says it could take years to determine a potential return, results from another Gulf project might offer some clues.

Chevron and its partners spent approximately $3.5 billion to survey and drill the "Tahiti" site, a small field in the Gulf that may yield 125,000 barrels a day and an eventual total of 400 to 500 million barrels.

Based on per-barrel prices in mid-November, that could mean as much as $7 million a day and an eventual total of at least $23 billion. The Jack site, at the low end of its range, could deliver six times more. —Brian P. Watson
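The revenue math can be reconstructed under one assumption: the article doesn't state the exact per-barrel price, so roughly $57 a barrel (the mid-November 2006 level implied by its own figures) is used here:

```python
# Rough check of the Tahiti revenue figures. The $57/barrel price is an
# assumption consistent with the article's $7M/day and $23B totals.
price_per_barrel = 57.0

daily_barrels = 125_000
daily_revenue = daily_barrels * price_per_barrel
print(f"${daily_revenue / 1e6:.1f}M per day")   # ~$7.1M, near the article's $7M

low_total_barrels = 400_000_000
total_revenue = low_total_barrels * price_per_barrel
print(f"${total_revenue / 1e9:.1f}B total")     # ~$22.8B, near the article's $23B

# Jack's low-range estimate of 3 billion barrels is six times
# Tahiti's 500-million-barrel high end.
print(3_000_000_000 / 500_000_000)  # 6.0
```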

40X
The consistency improvement the Hendrick Motorsports team realized from product life-cycle management.

The image of a half-dozen good ol' boys tuning their souped-up car engine beneath a big ol' shade tree is one that the multibillion-dollar NASCAR racing world has long left behind. Today, the use of product life-cycle management (PLM) software to monitor and improve engine performance is every bit as important to winning as a fast car and a driver with skill and stamina to burn.

Nowhere is that more true than at Hendrick Motorsports. As the 36-race NASCAR season concluded Nov. 19 with the Ford 400 in Miami, Hendrick driver Jimmie Johnson—aided by some high-tech supercharging in the form of UGS' Teamcenter PLM—won NASCAR's highest honor, the coveted Nextel Cup.

Johnson, whose No. 48 car was maintained and overhauled each week by the Hendrick engine team, was the most consistent competitor of the season, with 24 top-10 finishes—the best of any Cup driver.

The PLM software played a key role, enabling Hendrick, which overhauls 700 engines each year, to standardize the weekly rebuilding of engines for all of its half-dozen NASCAR racing teams, making engine performance more dependable for team drivers. That's critical, because in pre-PLM days, the variance of performance among the Hendrick firm's engines was so great that drivers argued among themselves over who would get the fastest car.

Before PLM, drivers routinely had to accept discrepancies in horsepower of up to 20% among cars on the same team, which could mean a variation of up to 80 or even 160 horsepower, depending on the type of engine. "The teams were not happy," says Hendrick chief engineer Jim Wall. "They wanted more uniform performance." The PLM enabled the engine overhaul team to ensure that each engine was rebuilt identically and tuned for almost exactly the same output, cutting the variation down to just 0.5%—a 40-fold increase in predictability.
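The article's variance figures check out arithmetically. The 400 hp and 800 hp engine outputs below are implied by its own 80 hp and 160 hp examples (20% of each):

```python
# Pre- and post-PLM horsepower spread, per the article's 20% and 0.5% figures.
# Engine outputs of 400 and 800 hp are inferred from the 80/160 hp examples.
for horsepower in (400, 800):
    before = horsepower * 0.20   # pre-PLM variation
    after = horsepower * 0.005   # post-PLM variation
    print(horsepower, before, after)
# 400 hp engine: 80.0 hp spread before, 2.0 hp after
# 800 hp engine: 160.0 hp spread before, 4.0 hp after

print(0.20 / 0.005)  # 40.0 -- the 40-fold improvement in predictability
```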

As a result, Hendrick drivers like Johnson and four-time NASCAR champion Jeff Gordon know exactly how much power they can depend on from their car when heading into a turn or closing on a competitor on a straightaway.—Doug Bartholomew

35%
Decline in service disruptions that resulted from Bank of New York's embrace of ITIL practices.

The Bank of New York has reduced major service disruptions—from server crashes to network outages—from 65 incidents a month last year to 42 a month now.
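Those monthly counts are where the headline 35% comes from:

```python
# The headline figure: severity-one incidents fell from 65 to 42 a month.
before, after = 65, 42
decline = (before - after) / before
print(f"{decline:.0%}")  # 35%
```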

How? By focusing on processes instead of technology.

Bank managers say the decline in severity-one incidents, defined as a disruption that affects three or more internal or external customers, is due to a companywide effort to follow best practices for managing technology operations as prescribed by the Information Technology Infrastructure Library, or ITIL. The guidelines, started by the U.K. government in the late 1980s and covering areas such as problem and asset management, give organizations a common vocabulary and metrics to track and improve technology operations.

The bank's use of ITIL's problem management guideline is one example. Bank of New York technology managers meet daily at 9 a.m. to investigate and verify the causes of each severe service disruption and develop a remediation plan. The process lets them look for patterns in outages and discover causes they may have initially overlooked.

Bank managers are also deploying a tool to help track hardware and software assets from acquisition to disposal. When a need is recognized, say, for a server to support an application, an asset request is generated as part of an approved business proposal. The asset is then ordered, and the order is tracked through receipt and deployment by the asset management system. Financial information about the cost, amortization and useful life of the asset is also part of the asset record. Changes are recorded as the value of the asset depreciates, until the asset is removed from use.
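The lifecycle the article describes—order, receipt, deployment, depreciation, disposal—can be sketched as a simple record type. This is an illustration of the general idea, not the bank's actual tool; the class name, fields, and straight-line depreciation method are assumptions:

```python
# Illustrative sketch of an asset record of the kind the article describes,
# tracking an asset from order through disposal with straight-line
# depreciation. Not the bank's actual system.
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    name: str
    cost: float
    useful_life_years: int
    status: str = "ordered"          # ordered -> received -> deployed -> retired
    history: list = field(default_factory=list)

    def advance(self, new_status: str) -> None:
        """Record each stage of the asset's life cycle."""
        self.history.append(new_status)
        self.status = new_status

    def book_value(self, years_in_service: int) -> float:
        """Straight-line depreciation down to zero."""
        remaining = self.useful_life_years - years_in_service
        return max(0.0, self.cost * remaining / self.useful_life_years)

# A server requested for an application, then received and deployed.
server = AssetRecord("app server", cost=12_000.0, useful_life_years=4)
for stage in ("received", "deployed"):
    server.advance(stage)
print(server.book_value(1))  # 9000.0 after one year of a four-year life
```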

Besides problem and asset management, the bank is working on four other ITIL processes: incident, change, service-level and configuration management.

Despite the promising early results, the bank doesn't expect improvements from some of its ITIL efforts to be visible for years. "If you rush service management efforts too much, you end up with a weak implementation," says Gordon Green, a bank vice president. "It's worth our time to do it correctly." —Anna Maria Virzi
