Presented By: Interdisciplinary Committee on Organizational Studies - ICOS
Using AI to Study Problems of Innovation in Science and Business
Brian Uzzi, Northwestern University
Innovation involves recombining past knowledge. The increasing rate at which scientific knowledge is expanding should bode well for innovation. Nevertheless, new problems related to the appraisal and valuation of new ideas and inventions are inhibiting innovation. One problem is that more scientific studies fail than pass replication tests, which has led to a deep skepticism of science, billions in economic losses, funding cuts, and a weakened job market in psychology. A second problem is that patent review has slowed while patent examiner disagreement and wrongful rejections of meritorious patent applications have risen. In this exploratory study, we investigate the potential of AI and machine learning to enhance the innovation evaluation process.
In study 1, we trained an artificial intelligence model to estimate a paper’s replicability using ground-truth data on studies that had passed or failed manual replication tests, and then tested the model’s generalizability on an extensive set of out-of-sample studies. The model predicts replicability better than the base rate of reviewers and on par with prediction markets, the best present-day method for predicting replicability. We then used the model to conduct a discipline-wide census of replicability in Psychology. The analysis covers 14,129 papers, the near universe of papers published in Psychology journals over a 20-year period in the six major subfields of Developmental, Social, Clinical, Cognitive, Organizational, and Personality psychology. We find that replicability varies by subfield and research design, but not by institutional prestige. Further, researchers’ past research records and social media attention strongly predict replicability; in particular, media attention predicts non-replication.
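To make the study-1 setup concrete, here is a minimal sketch of training a classifier on papers with known replication outcomes and scoring it on held-out papers. The abstract does not specify the model architecture, so the TF-IDF plus logistic-regression pipeline below is an illustrative stand-in, and the inputs (`texts`, `replicated`) are hypothetical.

```python
# Minimal sketch: fit a model on ground-truth replication labels, then
# evaluate on an out-of-sample split, analogous to the generalizability
# test described above. Not the authors' actual architecture.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def train_replicability_model(texts, replicated):
    """texts: paper texts; replicated: 1 if the manual replication passed."""
    X_train, X_test, y_train, y_test = train_test_split(
        texts, replicated, test_size=0.2, stratify=replicated, random_state=0)
    vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    model = LogisticRegression(max_iter=1000)
    model.fit(vectorizer.fit_transform(X_train), y_train)
    # Out-of-sample AUC as the generalizability check.
    auc = roc_auc_score(
        y_test, model.predict_proba(vectorizer.transform(X_test))[:, 1])
    return vectorizer, model, auc
```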
In study 2, we use new data on almost 4,000,000 U.S. patent applications, 2,000,000 EU patents, and 250,000 Canadian patents to test AI’s ability to identify patentable inventions and their future citation impact. We report three key findings. First, AI accurately predicts human experts’ decisions in spotting meritorious innovation at agreement levels of up to 95%, which is remarkable given the degree of variation and potential disagreement among individual patent examiners. Second, although hit patents disproportionately drive investment and innovation, current models and analysts have been unable to predict a patent’s future influence. We find that AI accurately predicts an invention’s future influence from application data, providing a new view of technological trajectories at the earliest possible time. Third, AI can reduce review-process biases and misevaluations. On applications that examiners mistakenly rejected but should have accepted, the AI model would have made 47% fewer wrongful rejections.
In the case of both scientific papers and patents, AI appears to garner its error-reducing information not from a submission’s quantified data but from its descriptive free text, which machines quantify better than humans. We discuss how these findings can improve innovation, scientific training, and performance.
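The free-text claim above suggests a simple comparison: fit one model on a submission’s quantified fields and another on its descriptive text, then compare out-of-sample performance. The sketch below illustrates that comparison only; the column names (`abstract`, `n_claims`, `n_references`, `accepted`) are hypothetical and not the authors’ actual variables.

```python
# Hedged illustration: does free text carry more predictive signal than
# the quantified fields? Expects pandas DataFrames with the hypothetical
# columns named above.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def compare_signal(df_train, df_test, label="accepted"):
    # Model 1: descriptive free text only.
    text_model = make_pipeline(TfidfVectorizer(max_features=50_000),
                               LogisticRegression(max_iter=1000))
    text_model.fit(df_train["abstract"], df_train[label])

    # Model 2: quantified fields only (hypothetical examples).
    numeric_cols = ["n_claims", "n_references"]
    num_model = LogisticRegression(max_iter=1000)
    num_model.fit(df_train[numeric_cols], df_train[label])

    return {
        "free_text_auc": roc_auc_score(
            df_test[label],
            text_model.predict_proba(df_test["abstract"])[:, 1]),
        "quantified_auc": roc_auc_score(
            df_test[label],
            num_model.predict_proba(df_test[numeric_cols])[:, 1]),
    }
```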
Livestream Information
Zoom
February 4, 2022 (Friday), 1:30pm
Meeting ID: 91018242181
Meeting Password: 102357