
Think You’re an Expert? Then Take the Test


By Jennifer Spencer

Robert Mislevy, Ph.D., advocates testing to prove whether someone is an expert in his or her field. Photo courtesy of the University of Maryland


Whether being used to ensure the competency of dental practitioners, assess the skills of aspiring architects, or understand how effectively children learn, psychological research can have important practical implications.

At the fourth annual Anne Anastasi Lecture, Robert Mislevy, Ph.D., spoke on the implications of expertise research for educational assessment, exploring the kind of meaningful questions that were so highly valued by the lecture’s namesake.

Anne Anastasi, a groundbreaking leader in the field of psychology, was a member of the Fordham faculty from 1947 to 1985 and chair of the Department of Psychology.

Mislevy focused on how research in the area of expertise could be applied to designing more effective assessment tools. He said that by understanding how experts think, psychology professionals can more effectively create tools that measure whether someone is, in fact, expert at his or her craft.

“Experts organize their knowledge more effectively. It’s not just knowing more information; it’s organizing it around key principles,” said Mislevy, the Frederic M. Lord Chair in Measurement and Statistics at Educational Testing Service (ETS) and professor emeritus at the University of Maryland.

All people have certain cognitive limitations, such as limited attention and working memory. They overcome such limitations by organizing information into patterns. By measuring patterns rather than just knowledge, assessment developers can more efficiently and accurately assess expertise.

Mislevy also spoke about how technological advances continue to affect the field of assessment. While technology can’t solve every problem faced by test writers and researchers, he suggested that it can often help save time and money.

He cited an example from his work at ETS. The Architect Registration Examination, an assessment used to measure the skills of architects, previously required 10 hours of hand-drawn work. However, the assessment sought to measure expertise and understanding of the patterns required to compose complex architectural plans, not basic drawing skills. As such, ETS employed computer-aided design technology to save time and more accurately assess the level of expertise that the test was designed to measure.

Mislevy said that the benefits of technology can be most effectively applied when assessment developers thoughtfully consider the goals of the assessment and how, specifically, technological tools might create a benefit.

“We can use technology to help us [create better assessments], but this means that we have to think about assessments more deeply than just simply writing a bunch of items,” Mislevy said.

“We have to be more explicit as to what our assessment arguments are, about the nature of the capabilities that we want to see, and the conditions that we need to see in action.”

