Sat. Dec 2nd, 2023

Human-aware AI is helping to accelerate scientific discoveries, new research shows

A new study explores how artificial intelligence can not only better predict new scientific discoveries but also usefully extend them. The researchers, who published their work in Nature Human Behaviour, built models that account for human inferences, and for the scientists who make them, in order to predict future discoveries.

The authors also built models that deliberately avoided human inferences in order to generate scientifically promising “alien” hypotheses that might otherwise not be considered until the distant future. The two demonstrations, the first accelerating human discovery and the second identifying and overcoming its blind spots, suggest that human-aware AI can push science beyond its contemporary frontiers.


“If you make the AI aware of what people are doing, you can improve prediction and spur them on to accelerate science,” says co-author James A. Evans, the Max Palevsky Professor in the Department of Sociology and director of the Knowledge Lab. “But you can also identify what people cannot do currently, or won’t be able to do for decades or more into the future, and you can augment them by giving them that kind of complementary intelligence.”

AI models trained on published scientific findings have been used to discover valuable materials and targeted therapies, but they typically ignore the human scientists who produce those findings. Because humans have competed and collaborated in research throughout history, the researchers wondered what could be gained if AI programs were made explicitly aware of human expertise: could AI better complement collective human capacity by following, and also exploring beyond, the places humans have already explored?

Predicting the future of discovery

To test the question, the team first simulated human reasoning processes by constructing random walks through the research literature. Each walk starts with a property of interest, such as efficacy against COVID-19, then jumps to a paper studying that property, then to another paper by one of the same authors, or to a material mentioned in that paper, as sketched below.
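To make the walk concrete, here is a minimal sketch in Python of that jumping procedure. The papers, authors, and materials are invented for illustration, and the study’s actual models operate over millions of real publications rather than a small dictionary like this:

```python
import random

# A minimal, hypothetical sketch of the random-walk idea described above.
# All names here are invented; this is not the authors' code.
papers = {
    "paper_1": {"property": "covid_efficacy", "authors": ["kim", "osei"],
                "materials": ["drug_A"]},
    "paper_2": {"property": "covid_efficacy", "authors": ["osei"],
                "materials": ["drug_B"]},
    "paper_3": {"property": "thermoelectricity", "authors": ["kim"],
                "materials": ["drug_A", "compound_C"]},
}

def random_walk(property_name, steps=4):
    """Start at a property, then hop between papers that share an author,
    collecting the materials each visited paper mentions."""
    start = [p for p, d in papers.items() if d["property"] == property_name]
    if not start:
        return []
    current, visited = random.choice(start), []
    for _ in range(steps):
        data = papers[current]
        visited.extend(data["materials"])
        # Jump to another paper written by one of the same authors.
        author = random.choice(data["authors"])
        neighbors = [p for p, d in papers.items()
                     if author in d["authors"] and p != current]
        if not neighbors:
            break
        current = random.choice(neighbors)
    return visited

print(random_walk("covid_efficacy"))
```

Materials that such walks reach often from a given property become natural candidates for future discoveries, since the paths trace the expertise and relationships that would lead a human scientist there.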

They ran millions of these random walks, and the resulting model offered up to a 400% improvement in predicting future discoveries over models that focus on research content alone, especially where the relevant literature was sparse. It could also predict the actual people who would make each discovery with more than 40% accuracy, because the program learned that the predicted discoverer was one of the few whose experience, or whose relationships with experienced colleagues, connected the material and property in question.
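That discoverer prediction can be pictured as a simple counting exercise over the same walks: tally how often each author sits on a path connecting the property to a predicted material. The counting scheme below is an assumption for exposition, not the paper’s exact estimator:

```python
from collections import Counter

def rank_discoverers(walk_traces, target_material):
    """walk_traces: (authors_on_walk, materials_on_walk) pairs from many walks.
    Count how often each author lies on a walk that reaches the target
    material; frequent bridges are the likeliest discoverers."""
    counts = Counter()
    for authors, materials in walk_traces:
        if target_material in materials:
            counts.update(authors)
    return counts.most_common()

# Invented walk traces for illustration.
traces = [
    (["kim", "osei"], ["drug_A"]),
    (["osei"], ["drug_B"]),
    (["kim"], ["drug_A", "compound_C"]),
]
print(rank_discoverers(traces, "drug_A"))  # [('kim', 2), ('osei', 1)]
```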


Evans describes the model as a “digital double” of the scientific system, one that lets researchers simulate what might happen within it and test alternative possibilities. It highlights the ways scientists relate to the materials, properties, and people they encounter.

“It also allows us to learn things about that system and its limitations,” he says. “It suggests that some aspects of our current scientific system, such as undergraduate and graduate education, are not tuned for discovery. They are tuned to provide a credential that helps people get jobs, to fill the job market. They do not optimize for the discovery of new and technologically relevant findings. To do that, every student would be an experiment, crossing new gaps in the landscape of expertise.”

In the paper’s second demonstration, the team asked the AI model not for the predictions most likely to be discovered by people, but for predictions that are scientifically plausible yet unlikely to be discovered by people.

The researchers call these alien, or complementary, hypotheses, and they have three characteristics: humans are unlikely to discover them; left to itself, the scientific system would not reorganize to reach them for many years; and they are, on average, scientifically better than human hypotheses, because humans squeeze every ounce of discovery out of an existing theory or approach before exploring a new one. Because these models avoid the connections and configurations of human scientific activity, they explore entirely new territory, as the sketch below illustrates.
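One hedged way to picture the alien-hypothesis step: score candidate material and property pairs with an independent plausibility signal, then keep only those that the human-trained walks rarely visit. Both scoring functions below are stand-ins, not the authors’ actual models:

```python
def alien_hypotheses(candidates, plausibility, human_visits, visit_cutoff=0.05):
    """Keep candidates that human-trained walks rarely visit, then rank the
    survivors by an independent scientific-plausibility score."""
    rare = [c for c in candidates if human_visits.get(c, 0.0) < visit_cutoff]
    return sorted(rare, key=lambda c: plausibility.get(c, 0.0), reverse=True)

# Invented scores for two candidate (material, property) pairs.
plausibility = {("compound_C", "covid_efficacy"): 0.8,
                ("drug_A", "thermoelectricity"): 0.3}
human_visits = {("compound_C", "covid_efficacy"): 0.01,
                ("drug_A", "thermoelectricity"): 0.40}

print(alien_hypotheses(list(plausibility), plausibility, human_visits))
# [('compound_C', 'covid_efficacy')]
```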

Radically enhanced intelligence

Evans explains that treating AI as an attempt to copy human ability, building on Alan Turing’s idea of the imitation game in which humans are the benchmark of intelligence, does not help scientists accelerate their ability to solve problems. He argues we are likely to benefit more from a radical augmentation of our collective intelligence than from an artificial copy of it.

“People in these domains of science, technology, and culture try to stay with the pack,” Evans said. “You survive by influencing others, when they use your ideas or technology, and you maximize that influence by staying close to the pack. Our models complement that bias by creating algorithms that follow signals of scientific credibility but avoid the pack itself.”

Rather than simply reflecting what human scientists are likely to think of next, using AI to move outside existing methods and collaborations augments human capacity and supports better exploration.

“It’s about changing the framing of AI from artificial intelligence to radically augmented intelligence, which requires learning more about individual and collective cognitive capacity, not less,” Evans said. “As we understand more about human cognition, we can explicitly design systems that compensate for its limitations and lead us to collectively know more.”

This story originally appeared on the UChicago Social Sciences Division website.
