掲示板 Forums - How accurate is ChatGPT?
Top > 会話 / General discussion > Anything Goes
Sometimes I use ChatGPT to break down difficult sentences and to understand the nuances of synonyms. As for the sentence breakdowns, I think the translations and explanations are pretty good, and it translates much better than translator apps. But when it comes to the nuances of synonyms, I'm never sure whether ChatGPT explains them correctly or whether it just makes things up sometimes. I find it time-consuming to search the internet for explanations, so I tend to just ask ChatGPT. Does anyone know if I can trust it? Or is it better if I just use Hinative?
I can't say with certainty whether it is better or not, as I haven't used it much for translation work (or at all, for that matter), so I won't say it is bad. I do know, however, that AI has a tendency to hallucinate information from time to time.
Long story short, I would not use ChatGPT alone. I would not trust it 100% and would rather use it as a rough indicator or a springboard. I mean, if ChatGPT can mess up preparing legal advice in English from English sources, chances are it can mess up Japanese to English as well.
I see! Thank you. I often compare ChatGPT's answers with Hinative's (or other platforms') answers because I trust them more.
My comment is similar to ヂワルワ ヂーキー's point: AI tends to hallucinate information. While it may sometimes provide accurate translation information, it is not guaranteed. It can be used to proofread if you have nobody to help you, but I wouldn't personally trust it with helping me learn a language, especially one as difficult as Japanese.
As a side note on how dangerous ChatGPT's hallucinations can be, a lawyer got sanctioned after ChatGPT generated fabricated cases. Granted, the lawyer did then try to cover up the fact that the cases were fabricated, but the fact of the matter is that ChatGPT started the chain of events that led to a lawyer being sanctioned.
-クラウディア
ChatGPT is only as good as its training data. If it has seen your text before, it will give you the translation it was trained on. But if there is something novel in there, it might skip over it or substitute something completely unrelated. It's best not to trust it.
That's interesting! Thank you all for your answers! Do you think that AIs like ChatGPT will someday get so smart that you can trust them at least 90%? Or is that rather unlikely to happen?
Think of it like this: our AI is rather like a human. If it learns from good sources, that is great; it will get things wrong if the source is wrong. And it might try to gaslight you, because maybe it has performance anxiety and thinks pretending it is right might help it. In that sense it is more human than the AI in movies. If it were me, I would either not use it, or treat it like another human: give it a sentence and double-check the answer. In the end it is up to you, though.
ChatGPT is made to give nice-sounding answers more than to be correct. If people tell it something completely wrong, it will repeat it. I personally don't ask it Japanese questions because I don't trust it to know or give an accurate explanation. Since it doesn't even know math, I doubt its language knowledge. You can ask it, but as everyone said, remember that it's like asking somebody else who is also learning: you have to take what it says with a grain of salt.
No one else has pointed this out, but I feel that the biggest problem with ChatGPT is the *tone* of the messaging. We humans have 1,000 different ways of expressing doubt or certainty when using language, and LLMs have none of that. So everything is presented with wording that suggests "What I'm saying is a fact."
I'm not suggesting that you do this just for this one thing, but if you ever listen to the Discord learning events that I teach, I (just like most people on this earth) use hedging language all over the place when I am not 100% sure about what I am teaching/saying.
Yes, ChatGPT speaks with authority that it doesn’t have.
It so happens that a new paper came out just today: "LLMs Will Always Hallucinate, and We Need to Live With This."
Key points from the paper.
Thank you. That looks very interesting. I'll read it soon, when I have some free time.
-クラウディア