COLLEGE STATION --
An image of an exploding rocket hangs in Texas A&M University statistician Valen Johnson's
campus office with a caption addressed to him that reads, "In Grateful Appreciation for Teaching Us the 'Ways of Bayes.'"
"Bayes" -- Bayesian statistics, the branch of the discipline that uses betting odds to assign probabilities -- is the renowned researcher's field of study. The photo is a gift from the 45th Space Wing at Patrick Air Force Base in Florida, beneficiaries of Johnson's past consulting work there that found the catastrophic failure rate of a NASA space shuttle launch was roughly 1 in 125 -- far greater than 1 in a half-million, as the revered space agency once believed.
Whether used by NASA and the United States Air Force to assess the reliability of space systems or behind the scenes in the recent presidential election, when fellow Bayesian statistician Nate Silver of The New York Times correctly called the vote in all 50 states, statistics is a powerful, flexible tool with wide-ranging impact and application. Johnson's own career is a case study spanning fields well beyond space: he has used statistics to gauge the effectiveness of the U.S. nuclear arsenal, analyze the intelligence of non-human primates, probe grade inflation at American universities and develop more effective tests for evaluating cancer drugs.
"Statistics is the heart of the scientific method," said Johnson, a 1999 American Statistical Association
fellow who joined the Texas A&M Department of Statistics
faculty in September after most recently serving as a professor of biostatistics and interim head of the Division of Quantitative Sciences at The University of Texas MD Anderson Cancer Center. "You use past experience to predict future outcomes. So statisticians end up getting involved in almost every area of science."
This is an age with more data than ever, but quantity doesn't necessarily lead to quality. Poor decisions based on flawed statistical models or flimsy science can lead to catastrophic consequences, such as the bursting of the housing bubble after ratings agencies grossly underestimated the riskiness of mortgage-backed securities, according to Silver.
Although foreseeing future events can never be perfect, Johnson notes that statistics is the science of making better predictions.
So how could NASA, an agency filled with some of the brightest minds in the world, predict a space shuttle launch failure rate so off the mark?
"Optimism," Johnson said. "It wasn't based on any real valid scientific knowledge."
It turns out the agency's analysis consisted of calculating the failure rate of each individual component of the shuttle and concluding that the overall launch failure rate would be roughly the sum of the individual component failure rates. In addition to being overly simplistic, the method didn't account for human error, Johnson says.
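The arithmetic being described can be sketched with hypothetical numbers (the failure probabilities below are illustrative, not NASA's actual figures). For independent components in series, the exact launch failure probability is one minus the product of the component success probabilities; summing the component rates is only a small-probability approximation of that, and neither calculation captures human error or common-cause failures.

```python
# Hypothetical component failure probabilities -- illustrative only,
# not NASA's actual figures.
p_fail = [0.001, 0.0005, 0.002, 0.0008]

# The shortcut described in the text: sum the component rates.
approx = sum(p_fail)

# Exact result for independent components in series: the launch
# fails unless every component succeeds.
p_success = 1.0
for p in p_fail:
    p_success *= (1.0 - p)
exact = 1.0 - p_success

print(round(approx, 6), round(exact, 6))
# Neither number includes human error or correlated failures,
# which is one way such estimates end up far too optimistic.
```

For tiny, independent failure probabilities the two numbers are close; the deeper problem Johnson identifies is that the inputs themselves, and the independence assumption, were never validated against data.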
"That's been a real challenge in statistics," Johnson said. "It's easy to specify very complex models. But the challenge -- and it's often a challenge that's not really addressed -- is to verify that the models fit the data. That's actually another research area of mine: developing methods to assess whether a model you specify for data actually fits. So goodness-of-fit diagnostics and model assessment are a big part of statistics, and Bayesian statistics, in particular."
While at MD Anderson and now at Texas A&M, Johnson researches clinical trial design. He says new drugs are typically tested in three phases: the first gauges how much of the drug can be administered before it becomes toxic, the second evaluates the drug's effectiveness, and the third tests it on large groups of people. Johnson is looking at ways to combine the first two phases, which could make the development of life-saving drugs faster and cheaper.
"There's that trade off," Johnson said. "For a cancer that would certainly kill someone, you might be willing to accept a 10 percent increase in efficacy if it was accompanied by only a 5 percent increase in toxicity. Combining those two phases would mean researchers would be getting efficacy information while they're doing the preliminary experiments on dosing."
In the mid-1990s, when Johnson was a statistics professor at Duke University, the provost there wanted to address the issue of grade inflation, given that the median course grade for undergraduates at the Durham, N.C., school had risen to almost an A-minus. After requesting and receiving a trove of data from the registrar's office, Johnson found the problem wasn't that grades were being inflated, but that faculty were using vastly different criteria to assign grades. For instance, some would bestow As far more freely than others.
Johnson's solution involved creating a grading system that adjusted for professor variances so that an A in a class where everyone got an A would be weighted less than one in which everyone earned a B or C. Although his model had administrative backing, Johnson said the administration made a tactical error by trying to implement the change too quickly. In the end, it didn't come to pass because it was unpopular with many faculty and students.
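The weighting idea can be shown with a deliberately simplified scheme. Johnson's actual proposal was a Bayesian latent-variable model, not the z-score-style centering below; this is only a toy to make the intuition concrete.

```python
# Toy illustration of adjusting for instructor leniency -- a gross
# simplification of Johnson's Bayesian model, invented for this sketch.
def adjusted_scores(grades):
    """Center each section's grades on its own mean, so an A in an
    all-A section carries less weight than an A in a B-average one."""
    mean = sum(grades) / len(grades)
    return [g - mean for g in grades]

easy_section = [4.0, 4.0, 4.0]    # everyone got an A
tough_section = [4.0, 3.0, 3.0]   # one A among B's

print(adjusted_scores(easy_section))   # every A washes out to zero
print(adjusted_scores(tough_section))  # the lone A stands out (about +0.67)
```

The mechanism matches the description above: an A awarded where everyone gets an A contributes nothing, while the same letter grade in a tougher section signals genuine distinction.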
Meanwhile, Johnson was able to glean valuable insight from the data, including 2-to-1 odds that students who knew in advance that the average grade for a particular course section was an A-minus would sign up for that class rather than other open sections of the same course with B averages. He also probed the relationship between grades and teacher evaluations and published a book about his findings in 2003, "Grade Inflation: A Crisis in College Education."
In 2000, Johnson spent a year at Los Alamos National Laboratory developing reliability models for the nation's arsenal of nuclear weapons. He says a challenge was determining whether the weapons -- which were never intended to last indefinitely and were often moved around on planes and ships -- would work if needed. Because the weapons obviously couldn't be tested, Johnson conducted his analysis by combining historical data with tests of individual components.
"The challenge there was to take historical data along with any current test data and physical models you can come up with, and combine those to predict what the reliability of a specific weapons system would be," Johnson said.
Recently, Johnson helped analyze intelligence experiments on non-human primates to gauge whether there was intelligence variance among species or in animals within a primate species. An example test might consist of putting a banana in a clear tube and putting a broom in the room to see if the animal could figure out how to use the broom to push the banana out. Johnson worked with Duke researchers Rob Deaner and Carel van Schaik, who had sifted through decades of published research on the topic. Many of the statistical methods used in the analyses of these data are summarized in a book that Johnson co-wrote with Bowling Green State University Bayesian expert James Albert titled "Ordinal Data Modeling."
"We used some very sophisticated Bayesian hierarchical models to combine all this data together and get the probability that each species was more intelligent than every other species," Johnson said. "We were able to quantify the probability that, say, a chimpanzee had a higher general intelligence than a macaque, but we weren't able to detect strong domain-specific intelligences. Animals that did well on one type of task also seemed to do well on other, unrelated tasks."
To learn more about Johnson and his career research, visit http://www.stat.tamu.edu/directory-details.php?directoryid=321
For additional information about Texas A&M Statistics, go to http://www.stat.tamu.edu/
###
About 12 Impacts for 2012: 12 Impacts for 2012 is an ongoing series throughout 2012 highlighting the significant contributions of Texas A&M University students, faculty, staff and former students to their community, state, nation and world. To learn more about the series and see additional examples, visit http://12thman.tamu.edu/
Contact: Vimal Patel, (979) 845-7246 or firstname.lastname@example.org or Dr. Valen Johnson, (979) 845-3141 or email@example.com