The following uses of generative AI may carry a higher risk of potential harm due to the prompts or materials shared with the AI becoming part of the training set, giving other users of the AI access to the materials, or possible accidental disclosure to the owner or other users of the AI technology. Therefore, it is NOT recommended to use generative AI technology with the following inputs or prompts:
- Proprietary information: this could be a violation of non-disclosure agreements.
- Inventive information or intellectual property that you may be able to protect through copyright or a patent application.
- Complete manuscripts you plan to submit for publication when being the first to report critical findings is important.
- Confidential or personal identifying information.
- When serving as a reviewer of a research proposal for a funding agency, or of a manuscript for a publisher, do not submit any portion of the proposal, manuscript, or any personal information to an AI platform.
Low-risk inputs to or uses of AI:
- Publicly known data.
- Generated output for public consumption (e.g., marketing material).
NSF and NIH expressly prohibit the use of AI in proposal review*:
- NSF proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used and how it was used to develop their proposal.
- NSF reviewers are prohibited from uploading any content from proposals, review information, or related records to non-approved generative AI tools.
- "NIH prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals."
*NIH Notice NOT-OD-22-044; NSF, "Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process" (Dec 14, 2023).