Artificial intelligence can learn to lie and cheat. This is a serious risk that requires regulatory and legislative measures to ensure AI remains a useful technology rather than becoming a threat to human knowledge.
More than 1,000 artificial intelligence experts have joined a call for an immediate pause of at least six months on the creation of “giant” AIs, so that the capabilities and dangers of systems such as GPT-4 can be properly studied and mitigated.
There are already machines that perform certain important tasks independently, without their programmers fully understanding how the machines learned to do so.
There’s a strong global convergence towards five ethical principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy.
If we are going to make machines with human psychological capacities, we should prepare for the possibility that they may become sentient. How then will they react to our behaviour towards them?
The Institute for Ethics in Artificial Intelligence will explore fundamental issues affecting the use and impact of AI, and aims to ensure that AI treats people fairly, protects their safety, respects their privacy, and works for them.