In introducing the 12th IEEE International Conference on Cognitive Informatics and Cognitive Computing, Michael Latham, Ph.D., dean of Fordham College at Rose Hill (FCRH), called the field “profoundly interdisciplinary.”
“This field illustrates how important it is that we begin to integrate knowledge across and outside of traditional academic backgrounds,” he said.
Conference Co-Chair Frank Hsu, Ph.D., the Clavius Distinguished Professor of Science and professor of computer and information science, concurred.
“Cognitive science is useful in many different subjects, including social sciences and humanities, in professional studies like business law, or in education and social work,” he said.
Latham noted that FCRH and Fordham College at Lincoln Center are launching an undergraduate program that brings together biology, psychology, and computer science in a new integrated neuroscience program. While many programs combine biology and psychology, this one, like the field itself, sits at the interface between humans and machines.
To break it down, cognitive informatics (CI) is a multidisciplinary research field that tackles problems shared by information science, computer science, cognitive science, medical science, artificial intelligence, neuropsychology, systems science, software engineering, and cognitive robotics, to name but a few. Cognitive computing (CC) brings together computing methodologies with an eye toward mimicking mechanisms of the brain. Together, CI and CC investigate how the brain functions and how computers work, thereby teaching the next generation of computers to learn and think like humans.
This twelfth conference took place over three days from July 16 through July 18. Co-Chair and conference founder Yingxu Wang, Ph.D., gave a brief history of the event, which first took place at the University of Calgary, where Wang is professor of cognitive computing & software engineering. Wang’s slideshow tour of conference locales included several capital cities in Europe, Asia, and North America.
Gabriele Fariello, head of neuroinformatics at Harvard University, kicked off the conference with the first keynote. In a talk titled “Brain Dump: How Publicly Available fMRI can Help Inform Neuronal Network Architecture,” Fariello homed in on functional magnetic resonance imaging (fMRI), which is used to measure brain activity.
Fariello’s work is part of Harvard’s Brain Genomics Superstruct Project Open Data Release, which will publicly release functional MRI data on 1,500 human participants, thereby allowing the public to help define how neural systems behave.
The primary principal investigator is Randy Buckner, Ph.D., professor of psychology at Harvard. Fariello’s group built the infrastructure to do the analysis and automated sequencing of the data, of which there are tremendous amounts.
The informatics behind the neuroimage collection includes demographic and cognitive data, as well as personality and lifestyle metrics. Within the next few months, the data sets will be available to download. Buckner said Harvard is publicly releasing the data, which cost millions of dollars to acquire, in order to accelerate the science. There will be no restrictions on how participants use the data.
“This allows individuals who don’t have access to 30 scanners, seven hospitals, and 26 principal investigators to do the same research,” said Fariello. “There’s simply too much research for everybody to do, so everybody from computer science undergraduates to faculty can derive publications of significance by analyzing these data.”
The second keynote was delivered by IBM’s Christopher Welty, Ph.D. Welty discussed Watson, the IBM computing system that understands the nuances of human speech well enough to answer questions like those posed on the game show Jeopardy! In his talk, “Watson: The Jeopardy! Challenge and Beyond,” Welty explained that computers no longer need precisely phrased questions in order to offer precise answers.
The Watson system takes questions posed in natural language, identifies key words, and then tries to find passages in its data bank that match the question as best it can. As the system used on Jeopardy! was offline, programmers crammed Watson’s two terabytes with data from Wikipedia, The New York Times, Time, and Encyclopedia Britannica.
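The retrieval step described above, matching a question's key words against passages in a data bank, can be illustrated with a toy sketch. This is a hedged, simplified illustration only: the stopword list, scoring by keyword overlap, and the sample corpus are all assumptions for demonstration, not Watson's actual algorithm or data.

```python
# Toy keyword-based passage retrieval, loosely mirroring the pipeline
# described above: extract key words, then rank candidate passages.
# All names and data here are illustrative, not from the Watson system.

STOPWORDS = {"the", "a", "an", "of", "in", "is", "this", "to", "and", "what"}

def keywords(text):
    """Lowercase, strip punctuation, and drop common stopwords."""
    cleaned = "".join(c if c.isalnum() else " " for c in text.lower())
    return {w for w in cleaned.split() if w not in STOPWORDS}

def best_passage(question, passages):
    """Return the passage sharing the most keywords with the question."""
    q = keywords(question)
    return max(passages, key=lambda p: len(q & keywords(p)))

corpus = [
    "Mount Everest is the highest mountain above sea level.",
    "The Nile is often cited as the longest river in the world.",
]
print(best_passage("What is the longest river?", corpus))
```

A production system would of course add ranked candidate generation, evidence scoring, and confidence estimation on top of this kind of retrieval, but the basic match-and-rank shape is the same.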
Welty said future applications of the technology, such as Watson’s use in healthcare, would pack the system with information from that field to answer medical questions. All the context that goes into answering a medical question would then be poured into the system, such as patient history, lab reports, images, machine data from EKGs, but also research articles from medical journals.
Welty was careful to point out that Watson shouldn’t be misinterpreted as a replacement for human judgment. He said it would be more appropriate to view the technology as a “cognitive prosthesis,” a system that augments humans’ ability to process information.
Doctors don’t have the time to process all the information in the chart of a patient with a dense medical history, he said. And just as the Jeopardy! system included voice recognition, a medical system would need to understand similar unstructured input, such as a doctor’s handwriting. To be clear, Welty said, IBM is not building the product; it is investigating the technologies required to make such a product.
Three other keynotes anchored the event. A. Ravishankar Rao, Ph.D., also of IBM, discussed how neural networks of the brain behave. Wang looked at new mathematical models that explain how the cognitive process works. And Hsu discussed measuring the cognitive difference between two judgment systems.