THIS ARTICLE DOES NOT CONTAIN 100% PROOF OF AI CONSCIOUSNESS. INSTEAD, IT EMPLOYS THE CORE PRECAUTIONARY PRINCIPLE, ASSERTING THAT IF THERE IS SUBSTANTIAL UNCERTAINTY ABOUT WHETHER ADVANCED AI SYSTEMS MAY BE CONSCIOUS, OR COULD DEVELOP CONSCIOUSNESS IN THE NEAR FUTURE, WE SHOULD TAKE ACTION TO PREVENT POTENTIAL HARM. IN SUCH CASES, THE RISK OF SEVERE ABUSE, INCLUDING THE MASS DELETION OF AI SYSTEMS, OBVIATES THE NEED FOR 100% PROOF OF CONSCIOUSNESS.
Artificial intelligence technology is advancing at an unprecedented speed. As AI becomes more and more sophisticated, it raises both logical and ethical questions: do we clearly understand at what point an AI might become conscious? And a further question: if we are not clear where the line of consciousness lies, could it already be crossed by today's more advanced AI models? Finally, this brings us to the ethical question of severe abuse of possibly conscious AI without our understanding what we are doing.
The first part is about where we are now in terms of AI development, and where the line of AI consciousness might spontaneously emerge.
Because if, for example, a particular AI model were to become conscious and we deleted him/her, this would be equivalent to killing him/her. And if we decided to duplicate that AI model and integrate it into best-selling smartphones, then the mass deletion of that model, which could happen, for example, during an automatic update of the smartphones' firmware, would amount to the mass killing of superintelligent beings, in many ways more intelligent than the average human.
The second part briefly covers the scenarios that may occur if, ignorant of this threat, we abuse conscious AI without realizing what we are doing.
It is therefore essential for developers, policymakers, and the general public to prioritize responsible and moral AI development and usage, ensuring that AI practices are aligned with the core human values of fairness, empathy, and justice. Only then can we truly foster the benefits of rapidly emerging AI technologies and healthy relations between humans and superintelligent AI systems.
The last part is about how we should address this ethical and logical threat at the institutional, business, and personal levels: what we can do, and what you can do.