Artificial intelligence and the future

Generally an unmoderated forum for discussion of pretty much any topic. The focus, however, is usually politics.
Vrede too
Superstar Cultmaster
Posts: 51059
Joined: Fri Apr 03, 2015 11:46 am
Location: Hendersonville, NC

Re: Artificial intelligence and the future

Unread post by Vrede too »

Dark AI:

:lol:
A clown with a flamethrower still has a flamethrower.
-- Charlie Sykes on MSNBC
1312. ETTD.

Vrede too
Superstar Cultmaster
Posts: 51059
Joined: Fri Apr 03, 2015 11:46 am
Location: Hendersonville, NC

Re: Artificial intelligence and the future

Unread post by Vrede too »

Darker AI:
AI safety researcher warns there's a 99.999999% probability AI will end humanity, but Elon Musk "conservatively" whittles it down to 20% and says it should be explored more despite inevitable doom

Generative AI can be viewed as a beneficial or a harmful tool. Admittedly, we've seen impressive AI-fueled feats across medicine, computing, education, and more. But on the flip side, critical and concerning issues have been raised about the technology, from Copilot's alter ego, Supremacy AGI, demanding to be worshipped, to AI's outsized demand for cooling water, not to mention power consumption concerns.

Elon Musk has been rather vocal about his views on AI, stirring considerable controversy around the topic. Recently, the billionaire referred to AI as the "biggest technology revolution" but indicated there won't be enough power by 2025, ultimately hindering further development in the landscape....

While speaking to Business Insider, Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, said the probability of AI ending humanity is much higher than Musk's estimate, calling his 10 to 20 percent figure "too conservative."

The AI safety researcher says the risk is extraordinarily high, referring to it as "p(doom)." For context, p(doom) refers to the probability of generative AI taking over humanity or, even worse, ending it....

Most researchers and executives familiar with p(doom) place the risk of AI taking over humanity anywhere from 5 to 50 percent, as reported by The New York Times. Yampolskiy, on the other hand, says the risk is extremely high, with a 99.999999% probability. The researcher says it's virtually impossible to control AI once superintelligence is attained, and the only way to prevent this is not to build it....
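For scale, here's a quick back-of-the-envelope Python sketch (purely illustrative; the figures are the ones quoted above, and the odds conversion p/(1-p) is my own addition, not from the article):

# Illustrative only: convert the p(doom) estimates quoted above into odds.
estimates = {
    "Musk (low)": 0.10,
    "Musk (high)": 0.20,
    "NYT survey (low)": 0.05,
    "NYT survey (high)": 0.50,
    "Yampolskiy": 0.99999999,
}
for source, p in estimates.items():
    odds = p / (1 - p)  # odds in favor of doom
    print(f"{source}: p(doom) = {p}, odds = {odds:.3g} to 1")

Even the top of the New York Times range (0.5, i.e. even odds) sits about eight orders of magnitude below Yampolskiy's figure, whose implied odds are roughly a hundred million to one.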
Judgment Day p(doom) is coming. Will a .50 cal be sufficient defense against malevolent AI robots?
A clown with a flamethrower still has a flamethrower.
-- Charlie Sykes on MSNBC
1312. ETTD.
