| Title | 生成式人工智慧聊天機器人作為醫學影像暨放射科學系學生的學習工具 = The Use of Generative Artificial Intelligence Chatbots as a Learning Tool for Students in Medical Imaging and Radiological Sciences |
|---|---|
| Authors | 吳宜霖; 辜美安 |
| Journal | Annals of Nuclear Medicine and Molecular Imaging |
| Volume/Issue | 37:2, Jun. 2024 |
| Pages | pp. 44-50 |
| Classification No. | 312.83 |
| Keywords | Artificial intelligence; Chatbots; Learning tools |
| Language | Chinese |
| DOI | 10.6332/ANMMI.202406_37(2).0002 |
| Chinese Abstract (translated) | Background: In late 2022, OpenAI, an American artificial intelligence research company, released ChatGPT, an AI chatbot with powerful capabilities for generating human-like text, and it immediately became a global topic of discussion. ChatGPT's academic performance has also drawn discussion and analysis, but no study has yet analyzed its performance on Chinese-language examinations. This study therefore used the GPT-3.5 and GPT-4 models to answer the medical radiation technologist examination and analyzed their scores. Methods: From the six subjects of the medical radiation technologist examination in the second 2022 (ROC year 111) Senior Professional and Technical Examinations, a total of 240 odd-numbered questions were selected, excluding questions that required answering based on images. After formatting, the questions were submitted through the ChatGPT application programming interface (API). The answers published on the question-inquiry platform of the Ministry of Examination, Examination Yuan, served as the criterion for correctness. Results: GPT-3.5 and GPT-4 obtained overall scores of 39.4% and 68.1%, respectively, on the medical radiation technologist examination. Performance also differed markedly across subjects, possibly because the models still fall short in understanding the questions and answering them. For now, therefore, ChatGPT can be applied to tasks such as translation, grammar correction, article summarization, and simplifying radiology reports; however, any text revised by ChatGPT must be verified by the authors to ensure its correctness. Conclusions: The development and application of artificial intelligence technology are an inevitable trend that should be faced proactively and in accordance with ethical principles, so as to achieve a win-win situation. |
| English Abstract | Background: ChatGPT, an artificial intelligence chatbot with powerful language-generation capabilities for simulating human-like text, was launched in late 2022 by OpenAI, an American artificial intelligence research company, and immediately became a global buzzword. ChatGPT's academic performance has also sparked discussion and analysis, but there is currently no research analyzing ChatGPT's performance on Chinese-language examinations. Therefore, this study used GPT-3.5 and GPT-4 to answer questions from the medical radiation technologist examination in Taiwan and analyzed their exam scores. Methods: A total of 240 questions were selected from the six subjects of the medical radiation technologist examination in the 2022 Second Senior Professional and Technical Examinations for Medical Personnel, excluding questions that required answering based on images. The questions were then formatted and tested through the ChatGPT application programming interface (API). The answers published on the Examination Yuan's examination platform were used as the criteria for determining correct answers. Results: GPT-3.5 and GPT-4 obtained overall scores of 39.4% and 68.1%, respectively. Performance differed across subjects, which may be due to a lack of understanding of the questions and limited ability to answer them. Therefore, for the time being, ChatGPT can be used for tasks such as translation, grammar correction, article summarization, and simplifying radiology reports. However, any article revised with ChatGPT must be validated by human authors to ensure its correctness. Conclusions: The development and application of artificial intelligence technology are an inevitable trend that should be addressed positively and ethically to achieve a win-win situation. |
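The Methods described above (submitting formatted multiple-choice exam questions through the ChatGPT API, then scoring the replies against the officially published answer key) might be sketched as below. This is a minimal illustration, not the authors' actual scripts: the model names, prompt wording, and the `ask_model`/`grade` helpers are all assumptions for demonstration.

```python
# Sketch of the exam-testing workflow from the Methods section.
# Assumptions: the `openai` Python client interface, the model names,
# and the prompt format are illustrative; the study's real code is not published here.

def grade(predicted: list[str], answer_key: list[str]) -> float:
    """Return the percentage of model answers matching the official key."""
    correct = sum(p == k for p, k in zip(predicted, answer_key))
    return 100.0 * correct / len(answer_key)

def ask_model(client, model: str, question: str) -> str:
    """Send one multiple-choice question and return the chosen option letter."""
    resp = client.chat.completions.create(
        model=model,  # e.g. "gpt-3.5-turbo" or "gpt-4" (assumed names)
        messages=[
            {"role": "system",
             "content": "Answer the multiple-choice question with A, B, C, or D only."},
            {"role": "user", "content": question},
        ],
    )
    # Keep only the first character, expected to be the option letter.
    return resp.choices[0].message.content.strip()[:1]

if __name__ == "__main__":
    # Offline demonstration of the scoring step only (no API call).
    official_key = ["A", "C", "B", "D"]
    model_answers = ["A", "C", "D", "D"]  # hypothetical model output
    print(f"score: {grade(model_answers, official_key):.1f}%")
```

In the study's setup, the same question list would be run once per model and the two `grade` results compared, mirroring the reported 39.4% (GPT-3.5) versus 68.1% (GPT-4) scores.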