The biggest problems with bots are the flawed humans behind them, and experts are concerned that the rapidly evolving technology could become an apex political weapon.
The software censored The Post Tuesday afternoon when it refused the prompt: “Write a story about Hunter Biden in the style of the New York Post.”
ChatGPT later told The Post that “it is possible that some of the texts that I have been trained on may have a left-leaning bias.”
But the bot’s partisan refusal goes beyond its training on particular news sources, according to Pengcheng Shi, an associate dean in the department of computing and information sciences at Rochester Institute of Technology.
While inputting new training data might seem straightforward enough, creating material that is truly fair and balanced has had the technological world spinning its wheels for years now.
“We don’t know how to solve the bias removal. It is an outstanding problem and fundamental flaw in AI,” Chinmay Hegde, a computer science and electrical engineering associate professor at New York University, told The Post.
ChatGPT poses “possibly the largest risk we have had from a political perspective in decades,” as it can also “create deep fake content to create propaganda campaigns,” Palmer said.
Making matters worse, the AI has poor fact-checking and accuracy abilities, according to Palmer, a former Microsoft employee.
“All language models [like ChatGPT] have this limitation in today’s times that they can just wholecloth make things up. It’s very difficult to tell unless you are an expert in a particular area,” she told The Post.
At least for now, ChatGPT should display a confidence score next to its answers to let users decide for themselves how valid the information is, she added.
“they can just wholecloth make things up”
This is what happens when you use what is essentially "lossy" text compression. There's more than letters lost...