Global publisher Wiley released a survey earlier this month indicating that researchers expect the use of artificial intelligence (AI) tools in preparing papers, writing grant applications and conducting peer reviews to become widely accepted within the next two years.
The survey gathered responses from 4,946 researchers across more than 70 countries, assessing how they currently use generative AI tools, such as ChatGPT and DeepSeek, and their perspectives on AI's potential applications.
Most respondents believe AI will become integral to scientific research and publishing. Over half of the surveyed researchers rated AI as superior to humans in more than 20 listed tasks, including reviewing vast amounts of literature, summarizing research findings, detecting writing errors, checking for plagiarism and organizing citations. Additionally, more than half anticipate AI will become mainstream in 34 out of 43 research-related tasks within two years.
Among the surveyed researchers, 27 percent are in the early stages of their careers. Of an initial group of respondents, 45 percent (1,043 individuals) reported already using AI in their research, most commonly for translation, proofreading and manuscript editing. Among these AI users, 81 percent have used OpenAI's ChatGPT for personal or professional purposes, yet only one-third were familiar with other generative AI tools, such as Google's Gemini and Microsoft's Copilot.
The survey also revealed significant disparities across disciplines and regions, with computer scientists being the most likely to integrate AI into their work.
A report published in Nature on January 23 echoes the survey, indicating that the Chinese-built large language model DeepSeek-R1 is thrilling scientists as an affordable and open rival to "reasoning" models such as OpenAI's o1.
The report said that initial tests of DeepSeek-R1 showed that its performance on certain tasks in chemistry, mathematics and coding is on a par with that of OpenAI's o1.
The report suggested that DeepSeek-R1-type models demonstrate capabilities beyond those of earlier language models in addressing scientific problems and hold potential for research applications.