Using Generative AI (Artificial Intelligence) tools in research

The potential impact of generative AI (GenAI) on research is undeniable. While these tools promise greater efficiency, researchers should carefully weigh a variety of factors before using them in their work. GenAI tools such as ChatGPT, Stable Diffusion, and Bard can be applied across many areas of research: idea development, proposal writing, manuscript preparation, simulated data generation and modeling, conference preparation, and more. But should researchers use them? What are the guidelines?

Best practices 

Use of GenAI tools carries risks in key areas including information security, plagiarism, fabrication, data privacy, and bias. To minimize these risks, Duke researchers should adhere to the following best practices.

  • Identifiable information should not be entered into GenAI tools. Unless permitted under an executed research contract with approved information security controls, information containing identifiers should not be uploaded to AI tools. This includes information collected in a research project, information gleaned from the medical record, student information, employee information, and more. One way to screen text before it leaves your environment is sketched after this list.
  • Don't presume accuracy. Output from GenAI tools may be biased or inaccurate, may contain copyrighted material, or may be entirely false (i.e., hallucinated). It is the responsibility of researchers and scholars at Duke to thoroughly review any AI-generated material before using it.
  • Let others know if you use AI tools. Duke researchers employing GenAI tools should always provide attribution, citing the tool as the source of the generated material: "Text generated by [the AI tool]. Date content was generated. URL for the tool." Publishers may have specific guidelines on how AI can and cannot be used, and on whether an AI tool can be credited as an author, so review author guidelines carefully.
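As a concrete illustration of the first practice above, the sketch below shows one way to scrub obvious identifiers from text locally before it is ever pasted into a GenAI tool. This is a minimal sketch with hypothetical patterns and a made-up sample note, not a Duke-endorsed tool; simple pattern matching is not formal de-identification.

```python
import re

# Illustrative patterns only: real de-identification (e.g., HIPAA Safe Harbor)
# requires far more than regexes, plus review by your IRB and security office.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholder tags before text leaves your environment."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Reached at jdoe@duke.edu or 919-555-0123 (MRN: 00123456); reported mild symptoms."
print(scrub(note))
# Reached at [EMAIL REDACTED] or [PHONE REDACTED] ([MRN REDACTED]); reported mild symptoms.
```

Note that names, dates, and other quasi-identifiers slip past simple patterns like these, which is why identifiable data belongs in GenAI tools only under an executed contract with approved information security controls.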

Use of Microsoft Copilot 

Copilot is a powerful assistant built on Large Language Models (LLMs) such as GPT-4, advanced models designed to predict and generate text based on your internal data (emails, reports, presentations, articles, etc.). Whether you use the Copilot app or access it through a web browser, keep Duke data secure by signing in with your NetID. Duke's OIT offers additional guidance and information on using Copilot for your work.

Discover the resources below, each listed with the topics it covers, and visit often for updates.

AI and Academic Research Town Hall (April 25, 2023)
  • AI in Health-related Research (predictive and generative)
  • Image-based research
  • Regulatory considerations
  • Trustworthiness and AI

AI and Teaching at Duke
  • Guidance for using generative AI in teaching
  • Guide to writing AI policies
  • AI detection software

AI Health Events
  • AI Health Seminar Series
  • Workshops and Studios

ABCDS
  • Algorithm-Based Clinical Decision Support for Duke Health

Generative AI interest group in Teams
  • MS Teams group for people interested in generative AI in healthcare

ICMJE Guidelines
  • Author guidelines on use of AI-assisted technology

NIH: Artificial Intelligence in Research: Policy Considerations and Guidance
  • Research Participant Protections
  • Data Management and Sharing
  • Health Information Privacy
  • Licensing, Intellectual Property, & Technology Transfer
  • Peer Review
  • Biosecurity and Biosafety

NSF notice; NIH notice
  • Notice to the research community on use of generative AI technologies in the merit/grant review process

Guidance from Duke Health Technology Solutions
  • Appropriate use of LLMs in Duke clinical activity

FAQs

Frequently asked questions coming soon!

Do you know of additional resources of use to the Duke research community? Contact researchinitiatives@duke.edu with ideas.