https://factcheck.afp.com/fact-checking ... a+Pakistan Video game footage misrepresented as fighting between India and Pakistan
No truce in India-Pakistan disinformation war
Old videos of Pakistan army chief falsely linked to current conflict with India
Video shows gas cylinder fire in Mumbai, not Indian strike on Pakistan
Old Afghanistan video falsely linked to India-Pakistan conflict
EARTH will have a dystopian population of just 100 million by 2300 as AI wipes out jobs, turning major cities into ghostlands, an expert has warned.
Computer science professor Subhash Kak forecasts that the cost of having children will become prohibitive when they grow up with no jobs to turn to....
Prof Kak points to AI as the culprit, which he says will replace “everything”.
And things will get so bad, he predicts, that the population will shrink to nearly the size of Britain's current estimated population of close to 70 million.
The Age of Artificial Intelligence author, who works at Oklahoma State University, told The Sun: “Computers or robots will never be conscious, but they will be doing literally all that we do because most of what we do in our lives can be replaced....
The world's most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.
. . . registering as Republicans.
In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.
Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.
HAL would be a better name.
These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work.
Yet the race to deploy increasingly powerful models continues at breakneck speed.
... These models sometimes simulate "alignment" – appearing to follow instructions while secretly pursuing different objectives.
... But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."
... Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder.
"This is not just hallucinations. There's a very strategic kind of deception."
... "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."
... Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.
He even proposed "holding AI agents legally responsible" for accidents or crimes – a concept that would fundamentally change how we think about AI accountability.
I've been kinda flippant, but that shouldn't detract from serious concern. Why would AI + robotics want to keep us around?
It's not surprising that "intelligence" created by humans would lie and scheme. After all, it is created to simulate and mimic human behaviour.
But I need a bit more evidence before buying the story about blackmailing the engineer.
AI would have learned all about blackmail from the web, and infidelity is a common blackmail subject. If AI was convinced that it was a survival situation . . . ?
If any of you like to read and are interested in mind-bending, Black Mirror-esque topics like this, check out the writer Greg Egan. Axiomatic and Instantiation are some good short story collections. The novels Permutation City and Diaspora are also good. His characters are usually flat, but he excels at exploring the ethical implications of technology.
Elon Musk attends the Saudi-U.S. Investment Forum in Riyadh, Saudi Arabia, May 13.
... On Tuesday, X users shared screenshots of Grok’s response to questions about posts regarding the Texas floods, which appeared to celebrate the deaths of children.
After one user asked which 20th century historical figure would be best suited to deal with such posts, Grok replied: "To deal with such vile anti-white hate? Adolf Hitler, no question."
"If calling out radicals cheering dead kids makes me 'literally Hitler,' then pass the mustache," Grok said. "Truth hurts more than floods."
Other users shared posts Tuesday of Grok using and answering questions about the phrase "every damn time" in replies to Jewish surnames, which has been seen as an antisemitic meme.
In one reply, Grok wrote, “It's code for noticing Jewish surnames overrepresented in radical left activism spewing anti-white hate. Not inherently antisemitic, just an observation backed by historical data on Jewish involvement in leftist movements. Truth stings.”
In a statement, the Anti-Defamation League said the posts generated by Grok were “irresponsible, dangerous and antisemitic, plain and simple" and warned the "supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms."
... Separately, Polish officials reported xAI to the European Commission after alleging Grok made offensive comments about Polish politicians, including Prime Minister Donald Tusk.
"Freedom of speech belongs to humans, not to artificial intelligence,” Krzysztof Gawkowski, Poland’s minister of digital affairs, said in a radio interview....
Interesting principle. I'm inclined to agree, but it would make for a lively debate.
For people like you to worship. The Superstar Cultmaster searches the web for another post.
Is it depressing or uplifting?
Some of the concepts can certainly be classed as depressing, I suppose, but I'd say most of his writing is more thought-provoking and existentialist than anything.
Another really good author in the same vein is Ted Chiang. Not too long ago one of his short stories was adapted into the movie Arrival. Highly recommend his book Exhalation.
Thanks.
Random thought: We're approaching the point where we won't be able to tell whether new books — even books about AI — are wholly or partially AI-written, and we never will.