April 17, 1983

Getting It On: How Did It Test?

by Tim Brooks, Director of Program Research

For NBC this year there was plentiful product—a crop of 35 pilots to select from, the highest and costliest level of pilot activity for NBC Entertainment since 1979. The shows that finally made it owe their selection to one thing: a favorable judgment on their basic appeal. Arriving at those judgments is one of the toughest jobs in television, but there’s help available. It’s called program testing, described here by Tim Brooks, Director of Program Research.


One night last January about 400 people sat in a theater in Los Angeles watching a sneak preview of a new TV series called The A-Team. They were ordinary people, recruited at random from the greater Los Angeles area, but they were in no ordinary theater. This was Preview House, where every seat is equipped with a little knob. Turn the knob to the right to indicate thumbs-up: you like what you are seeing at that moment. Turn it to the left for thumbs-down. For The A-Team that night, it was thumbs-up.

Most people know which programs they like and don’t like—and will tell you at the slightest provocation. A few people have a much harder question to answer: what are other people going to like? Picking the programs that will attract the largest possible audience is perhaps the most difficult job in our business, and one that ultimately determines a network’s success or failure.

One of the tools program executives use to help make these multimillion-dollar decisions is program testing, which is carried out for almost all new programs by the NBC Research Department. The idea behind program testing is simple enough. Before deciding whether to put a program on the air, show it to some people—preferably ordinary viewers, scattered around the country—and see what they think of it.

Sometimes (as was the case with The A-Team) the decision to schedule the show is made first, and the testing is used to help detect possible weaknesses that might be corrected. Almost every business pre-tests its products like this, before going ahead with expensive national distribution.

How does NBC pre-test its TV series? Research often begins with a concept, a written description similar to what you might see in TV Guide. This can be read to a sampling of viewers for their reactions. Few program ideas are ever dropped as a result of this test, because a written description is a poor substitute for an actual film or tape of the program. But viewers’ reactions may produce some angles that had not been thought of previously.

Next comes the pilot, an actual sample episode of the series. This is an expensive commitment. A script must be commissioned, roles cast, sets built, all to see what the idea actually looks like on the screen. Typically a pilot episode of the series will cost between $500,000 and $1 million or more to produce. A pilot episode can be shown to viewers in a variety of ways. The theater test for The A-Team was unusual for NBC. Normally we prefer to feed the pilot over unused channels on specially selected cable TV systems around the country. This way viewers have a chance to see the program in their own homes, under relatively normal viewing conditions.

A sample of subscribers to the cable system is contacted and asked their opinion of the program as a whole and of specific elements of the show, such as the story, setting, actors, etc. Thus the test is more than a simple popularity measure. Sometimes it suggests changes that might make the program more attractive to viewers. In the case of The A-Team, subtle changes were made that helped improve the episode’s flow. This kind of finding—called “diagnostic”—may be helpful even after the program has gone on the air. One of the most heavily tested NBC series of recent years was The Facts of Life, which gradually defined its focus and narrowed its initially large cast of regulars with the help of testing. The Facts of Life is now one of NBC’s most successful series.

Program test results do not by themselves determine a pilot’s fate, nor do producers have to use the diagnostic findings. Researchers are the first to tell you that you can’t simply patch together a program out of popular elements and expect to have a success. For these reasons program test results must be used carefully, taking into account the type of program being tested and past experience with a wide range of examples.

How accurate is program testing in predicting success or failure? The overall record is pretty good. A strong-testing pilot stands a much better chance of making it than a moderate-testing one, and a weak-testing pilot rarely succeeds. So why not simply put the strong-testing shows on the air and dump the weak ones? The answer is that strong-testing pilots are extremely rare for any network, and there are never enough of them to fill a network’s needs. Most pilots test poorly, but some of those can be nurtured into successes through careful scheduling and perhaps content changes—if the basic appeal is there.

Program testing is only one part of the program selection process. Basically, it’s a way of letting the public have some say before the final decision is made. It cannot create hits like The A-Team or Diff’rent Strokes, but used carefully it can spot them a little earlier, and help make them a little bit better.

© 2011 Tim Brooks. All rights reserved.