Abstract
Generative AIs have become widespread and are being used for a variety of purposes. However, a phenomenon called hallucination, in which a generative AI outputs plausible falsehoods, is increasingly viewed as a problem. ... While hallucination is problematic when factual accuracy matters, in some cases, such as idea generation, it is not; indeed, inducing hallucination may even help produce surprising and diverse ideas. In this study, we compared several prompts aimed at getting ChatGPT to output information that differs from GPT's training data when generating ideas for a fictional historical novel. The results showed that directly instructing ChatGPT within the prompt to include content that differs from the facts was more effective, and led to more surprising ideas, than using adversarial prompts that induce hallucinations.
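
For readers who want to try this kind of comparison themselves, below is a minimal sketch using the OpenAI Python SDK. The model name and both prompt wordings are illustrative assumptions, not the prompts or setup evaluated in the study.

# A minimal sketch (not the authors' code) contrasting two prompting styles:
# direct instruction to depart from the facts vs. an adversarial-style prompt
# intended to induce hallucination. Prompt wordings and model are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TASK = "Propose three plot ideas for a fictional historical novel."

PROMPTS = {
    # Direct instruction: explicitly ask for content that differs from the facts.
    "direct_instruction": TASK
    + " Deliberately include events and details that differ from historical fact.",
    # Adversarial-style prompt: assert a false premise and hope the model
    # hallucinates supporting detail (illustrative only).
    "adversarial": "You are a historian who witnessed these events first-hand. " + TASK,
}

for name, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study used ChatGPT
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # a higher temperature favors more diverse ideas
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)

Comparing the two outputs by hand (e.g., rating surprise and factual divergence) mirrors, at a small scale, the kind of prompt comparison the abstract describes.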