New Storage System Increases Performance Tenfold

By Maggie O’Neill

San Francisco’s Morrison Planetarium has shaken things up—and not just through its recent production of the show Earthquake.

About two years ago, Michael Garza, planetarium and production engineering manager, realized that the planetarium needed to expand its storage infrastructure to continue producing shows with high-quality CGI (computer-generated imagery). For some time, the nonprofit has created one digital show every two years and has also completed a number of smaller projects for the California Academy of Sciences and for outside organizations such as Science Today online.

“In general, the shortcoming [of the existing system] was an inability to handle the amount of data that we were dealing with,” he says.

Garza evaluated various commercial storage solutions but found they were out of the planetarium’s price range and would have required replacing the existing system up front. He also considered open-source projects and building his own storage solution.

Ultimately, Garza selected Avere based on a demo that showed him how effective the system would be, and because its cost fit within the budget. The Avere FXT 3200 filers added about 200 terabytes of back-end storage and about 12 terabytes of cache capacity.

The network-attached storage solution allowed the planetarium to increase its overall input/output performance tenfold. “We’re very pleased with it,” Garza says. “It has been working remarkably well and has met or exceeded expectations in every way.”

The Avere filers were deployed as a three-node cluster that sits between the clients and the back-end data storage systems. The vendor provided top-notch customer service and even wrote a script that gives Garza a snapshot of what is going on in the system at any moment, something that took a “fair amount of work” to achieve in the past.

The upgrade allowed the planetarium’s production team of six CGI staff members to more quickly download the large data files coming from research centers and institutions that have been integral to research and production. In the past, these files have sometimes approached 100 terabytes. In addition, the planetarium works with enormous data files when creating CGI for its shows because the dome screen is six to 10 times larger than a regular theater screen.

The storage upgrade has also enabled the CGI team to shorten turnaround times in its production workflow and to send and receive content files more quickly than before. However, the end goal of the upgrade is not so much to improve production time as to continue creating realistic, high-quality CGI education films.

For example, in Earthquake, the CGI team wanted viewers to feel as if they were standing on the buckling streets of San Francisco during the 1906 earthquake. That level of realism can be very difficult to achieve and requires significant storage optimization, according to Garza.

Currently, the planetarium’s production team is working on a show with a tentative title of Eco, and, so far, all systems are doing well.

“We’re reaching the point where, in the past, we would start to see issues arise, but as it stands now, we have been forging ahead without restrictions,” Garza says.

The planetarium has been using its upgraded storage system for approximately six months, and the results have been promising. In fact, the upgrade was so seamless that the planetarium’s CGI team was not even aware that it occurred. “I’m not receiving the same number of complaints that I was previously,” he says.

Garza says he will continue to expand storage capacity at the planetarium. Because each production project seems to double in size, further expansion is something he anticipates and is prepared for.

“I went through the process of adding additional nodes myself, and it was remarkably simple,” he reports. “It was just a matter of plugging it in.”