As AI-generated data becomes harder to detect, it is increasingly likely to be ingested into the training data of future AI systems, degrading the quality of their outputs.
Increasing model size does not lead LLMs to gain emergent reasoning abilities, meaning they are unlikely to develop hazardous capabilities and therefore do not pose an existential threat.
Artificial intelligence can learn to lie and cheat. This is a serious risk that requires regulatory and legislative measures to ensure AI remains a useful technology rather than becoming a threat to human knowledge.
The Great Filter is a hypothesized event or condition that prevents intelligent life from becoming interplanetary or interstellar, or even leads to its demise. Artificial Super Intelligence could be the Great Filter.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
More than 1,000 artificial intelligence experts have joined a call for an immediate pause on the creation of “giant” AIs for at least six months, so the capabilities and dangers of systems such as GPT-4 can be properly studied and mitigated.
There are already machines that perform certain important tasks independently, without their programmers fully understanding how they learned to do so.