By Tom Steinert-Threlkeld  |  Posted 2005-07-08

Decades before the idea took hold in the dot-com era, Reader's Digest kept a "360-degree view" of each of its customers, tracking every contact it ever had with a subscriber to its magazine or a purchaser of any of its condensed books or other products.


Age. Income. Magazine subscriber. Recency of payment. Number of times bought from a particular series of books.

They had been just statistics that publishers had collected. But to Nester and the quant jocks who preceded her at Reader's Digest, they and hundreds of variables like them were a means to an end: making predictions.

Nester is director of database marketing. Her job is to help Reader's Digest make more money by mailing less. To get $600,000 of new revenue from a campaign sent to 1.6 million potential buyers, instead of $500,000 from one sent to 2 million.
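The economics behind that goal come down to revenue per piece mailed. A quick sketch using the figures above:

```python
# Hypothetical comparison using the figures cited in the article:
# a broad mailing vs. a smaller, better-targeted one.
broad = {"mailed": 2_000_000, "revenue": 500_000}
targeted = {"mailed": 1_600_000, "revenue": 600_000}

for name, campaign in (("broad", broad), ("targeted", targeted)):
    per_piece = campaign["revenue"] / campaign["mailed"]
    print(f"{name}: ${per_piece:.3f} revenue per piece mailed")
```

The targeted list earns $0.375 per piece against $0.25 for the broad one, before even counting the postage and printing saved on 400,000 fewer mailings.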

That means finding those names on record who are most likely to respond to a particular promotion.

With the advent of a single database that kept records on every Reader's Digest customer, that became truly possible, nearly five decades after founder DeWitt Wallace started the Digest in the basement of a New York speakeasy, printed 5,000 copies of the first edition and had the perspicacity to buy a mailing list to send potential subscribers a sales pitch.

The brazenness of the Unified File System was its simple devotion to comprehensive information. Sure, the names and addresses of present and past customers would go into it. But so would their payment history, and their mailing history.

But Reader's Digest's marketers and circulation managers weren't keeping track of whether each issue showed up in mailboxes nationwide. Instead, as the system got its legs, they kept track of every mailing campaign they conducted, now as many as 630 a year.

They would keep track of each campaign in a shorthand code, and mark off, in another field, if there was a response. They would also keep track of how fast the company got paid, after that.

Nester's job would then be to use a form of mathematics known as regression analysis to figure out which variables actually mattered. Such analysis assumes there are significant relationships between variables. If the relationship is strong, the result can be depicted as a straight line, and knowing where one variable, say, age, falls on the line can help predict where the other, say, spending, also falls. If a variable was particularly useful at predicting what you wanted, for example, the propensity to open red mailing envelopes, it was lauded for its ability to "discriminate."
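As a sketch of the idea, here is a toy linear regression in Python. The data is invented for illustration; Reader's Digest's actual models ran on mainframe systems, not code like this:

```python
import numpy as np

# Invented data: customer age vs. annual spending, chosen to lie on a line.
age = np.array([25.0, 35.0, 45.0, 55.0, 65.0])
spending = np.array([40.0, 55.0, 70.0, 85.0, 100.0])

# Fit spending = slope * age + intercept by least squares.
slope, intercept = np.polyfit(age, spending, 1)
print(f"spending = {slope:.2f} * age + {intercept:.2f}")

# Knowing where a customer falls on one axis predicts the other:
predicted = slope * 50 + intercept
print(f"predicted spending for a 50-year-old: {predicted:.2f}")
```

The fitted line is what lets a marketer read off a predicted spending figure for any age, which is exactly the "predicting where the other variable falls" step the article describes.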

Even when the magazine, condensed book and one-shot businesses had separate databases, variables were analyzed. But the only variable in a magazine record might be when the subscription ended. Pulling together records of all businesses, as well as payment history, into one file vastly expanded the variables that could be analyzed. "It was like quadrupling the size of your dictionary,'' Otten explains.

Nester could use all the data to test different models of customer behavior. The trick would be to rank each variable, to find which were the best predictors. And then which combinations could really make a difference. Age by itself might give some insight into the willingness to buy a condensed book. But age and time since last order might be a whole lot more telling.
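A minimal sketch of that comparison, on invented data and using ordinary least squares, shows why a combination of variables can "discriminate" far better than any single one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 70, n)
months_since_order = rng.uniform(0, 24, n)
# Invented behavior: spending depends on both variables, plus noise.
spend = 0.5 * age - 2.0 * months_since_order + rng.normal(0, 5, n)

def r_squared(features, y):
    # Least-squares fit with an intercept column; return R^2 (fit quality).
    X = np.column_stack([features, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("age alone:   ", round(r_squared(age[:, None], spend), 2))
print("age + recency:", round(r_squared(
    np.column_stack([age, months_since_order]), spend), 2))
```

On this synthetic data the two-variable model explains far more of the variation in spending than age alone, which is the "whole lot more telling" effect Nester was hunting for.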

"I'm not a gambler,'' says Nester, who came to Reader's Digest 24 years ago. "I'm a statistician.''

Recency of an order, frequency of an order and the amount spent all come into play. Out of the constant iterating of models, at some point in the '70s, popped one particularly useful predictor: Promotions Since Last Order.

The PSLO helped separate the wheat from the chaff. If a person, on average, bought one tape a year, and a year had gone by, it was time to send a promotion to that person. If the person had just bought, it was time to back off. Sending a new piece of mail was very likely a waste.
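The back-off logic described above can be sketched as a simple rule. The function, field names and thresholds here are hypothetical, not Reader's Digest's actual code:

```python
from datetime import date

def should_mail(avg_orders_per_year: float, last_order: date, today: date,
                promos_since_last_order: int, max_promos: int = 3) -> bool:
    # Hypothetical rule in the spirit of PSLO: back off right after a
    # purchase, and stop promoting someone who has ignored several
    # mailings in a row since last ordering.
    years_since_order = (today - last_order).days / 365.25
    expected_gap = 1.0 / avg_orders_per_year  # avg years between orders
    if promos_since_last_order >= max_promos:
        return False          # repeated non-response: mailing is wasted
    return years_since_order >= expected_gap  # due for another purchase

# Buys once a year, a year has gone by: time to send a promotion.
print(should_mail(1.0, date(2004, 6, 1), date(2005, 7, 1), 1))  # True
# Just bought last month: back off.
print(should_mail(1.0, date(2005, 6, 1), date(2005, 7, 1), 0))  # False
```

The `max_promos` cutoff captures the "discriminating" power of PSLO itself: someone who has shrugged off several promotions since their last order is unlikely to respond to one more.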

When uncovered, the reaction among programmers and quantitative analysts at Reader's Digest was, "'Wow, that baby really discriminates,'" Burns recalls.

Before Nester's forerunners started applying regression analysis, the term "PSLO" didn't even exist at Reader's Digest. But as the analysis got into full swing, such benchmarks would be codified in little reusable bits of code, called "terms."

Each term would measure customer activity in some way, such as total number of purchases from a product line or total number of payments made in a product line.
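In modern terms, each "term" resembles a small reusable function that reduces a customer's history to a single number a marketer can select on. This sketch, with invented field names and data, illustrates the idea:

```python
# Hypothetical equivalents of two "terms": total purchases and total
# payments within one product line. Field names are invented.
def total_purchases(history, product_line):
    return sum(1 for order in history if order["line"] == product_line)

def total_payments(history, product_line):
    return sum(order["paid"] for order in history
               if order["line"] == product_line)

history = [
    {"line": "condensed_books", "paid": 12.95},
    {"line": "condensed_books", "paid": 14.50},
    {"line": "music", "paid": 9.99},
]
print(total_purchases(history, "condensed_books"))  # 2
print(total_payments(history, "condensed_books"))   # 27.45
```

Because each term is computed the same way for every customer, a marketer can mix and match them to define a mailing list without writing any new analysis code, which is the convenience the article describes next.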

The terms would make it easier for non-technical staff, like marketing director Kathy Gilbert Haggerty, to run campaigns. Based on prior analysis, she could have a pretty good idea of which variables would provide the best results for a new gardening book she might want to promote. And she could specify the terms.

Knowing which terms really worked was not of idle importance. After 35.5 years in operation, the results of each model, each set of variables, had been fed back into the system. Haggerty could pretty much predict what the profitability would be of any given campaign, based on the choices she made about who should receive which offer.

That's critical, because Haggerty is now responsible for the results of condensed book and one-shot sales efforts. Even at the height of the company's mail marketing prowess, "On a good day, 95% of your customers don't respond,'' she says.

Tom was editor-in-chief of Interactive Week, from 1995 to 2000, leading a team that created the Internet industry's first newspaper and won numerous awards for the publication. He also has been an award-winning technology journalist for the Dallas Morning News and Fort Worth Star-Telegram. He is a graduate of the Harvard Business School and the University of Missouri School of Journalism.
