
Artificial Intelligence (AI) Policy

 

AI tools, such as ChatGPT, are based on large language models. They essentially crowdsource information, generating likely answers based on the vast amount of text in their training data. While they can provide some helpful information and may spur your thinking in some areas, they are NOT reliable sources and cannot reliably provide citations or references to sound data or evidence.

Things you can do: You may use ChatGPT for brainstorming purposes. That is, you may ask it questions. Given the concerns about the accuracy and veracity of its output, you will then need to do some research to find peer-reviewed, reliable evidence that corroborates (or contradicts) what the AI tool told you. Use those articles to find other articles that consider the same question (review each citation list for further articles to read). Either before or after you ask ChatGPT a question, conduct a more traditional search (e.g., PubMed, other CCNY library resources, Google Scholar). Review, compare, and investigate. Repeat this cycle, keeping in mind that what you're getting from AI is crowdsourced information, not the reliable product of research and assessment.

Things you cannot do:

Do not use ChatGPT to draft your research papers, clinical notes, clinical case/PowerPoint presentations, or other graded assignments. Do not use ChatGPT to generate your citations. This matters both for producing reliable evidence and for academic integrity (i.e., avoiding cheating). If you didn't write it, don't put your name on it and claim that you wrote it. Don't modify a few words here and there and claim you wrote it, either. Close the search/chat window before you start drafting, and put the real evidence and articles you've found into your own words. Do your own analysis and critical thinking.