Blog: Decision Management Community
Benedict Evans wrote: “If you ask ChatGPT factual questions, you can’t trust what you get. In this case, it invented an entirely non-existent sexual assault allegation against a law professor, complete with a (non-existent) Washington Post story. Also of note – the professor, since he’s apparently a somewhat controversial figure, assumes that this must be something to do with his politics. Since all LLMs do this all the time about anyone and anything, there’s no reason to think he’s right, but this is a good illustration of just how hard it is for normal people outside tech (or even inside) to grasp what these systems are doing. They are not answering questions – they’re making something that looks like an answer to questions that look like your question. But can you stop them from libelling people?”

Link