Would a democracy of artificial intelligences hold a variety of opinions?
Friday, June 2nd, 2017 [ by Charles Cameron — opening a conversation ]
I’m hoping to engage some of my friends and net acquaintances — Peter Rothman, John Robb, August Cole, Jamais Cascio, Monica Anderson, Chris Bateman, JM Berger, Tim Burke, Bryan Alexander, Howard Rheingold, Jon Lebkowsky and no doubt others — in a conversation on this topic, here at Zenpundit.
Starting as of now, with encouragement to come: send posts to hipbonegamer@gmail.com, any length, fire at will!
On the face of it, AIs seeded with different databases will come to different conclusions, and thus the politics of a company of AIs, democratically assessed (i.e. one AI, one vote), would be stacked in favor of whichever kindred databases the majority were seeded from. But is that all we can say? Imaginatively speaking, our topic is meant to arouse questions around both democracy and intelligence, artificial and otherwise. And politics, we should remember, extends into warfare.
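To make that stacking point concrete, here is a minimal toy sketch in Python. It assumes, purely for illustration, that each AI's opinion on a yes/no question is fixed entirely by the database it was seeded from; the database names, the opinions they induce, and the seed counts are all hypothetical, not drawn from any real system.

```python
"""Toy sketch of the 'one AI, one vote' claim above.

Assumption (illustrative only): an AI's opinion on a yes/no question is
determined entirely by its seed database, so the democratic outcome simply
mirrors whichever seed database is most common in the population.
"""

from collections import Counter

# Hypothetical seed databases and the single opinion each one induces.
DATABASE_OPINIONS = {
    "db_alpha": "yes",
    "db_beta": "no",
    "db_gamma": "yes",
}

# How many AIs were seeded from each database -- the 'stacking' in question.
SEED_COUNTS = {"db_alpha": 5, "db_beta": 3, "db_gamma": 2}


def democratic_vote(seed_counts: dict[str, int]) -> str:
    """Tally one vote per AI and return the majority opinion."""
    votes = Counter()
    for db, count in seed_counts.items():
        votes[DATABASE_OPINIONS[db]] += count
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    # With 7 of 10 AIs seeded from 'yes'-inclined databases,
    # the democratically assessed answer is a foregone conclusion.
    print(democratic_vote(SEED_COUNTS))  # -> "yes"
```

Under that (admittedly crude) assumption, the vote tells us nothing beyond the composition of the seed databases, which is exactly the worry the question is meant to probe.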
**
Two announcements I saw today triggered my wish to stir the AI pot: both had to do with AI and religion.
The first had to do with an event that took place last month, May 2017:
Artificial intelligence and religion
Theos Newsletter, June 2017: Can a robot love? Should beings with artificial intelligence be granted rights? The rise of AI poses huge ethical and theological questions. Last month we welcomed John Wyatt and Beth Singler from the Faraday Institute to discuss these issues.
Specifically:
Advancements in Artificial Intelligence (AI) and robotics have been making the headlines for some time now. Articles in mainstream media and features in prime-time television keep pouring in. There is clearly a growing interest in humanoid robots and the varied issues raised by their interactions with humans.
The popularity of films such as Ex Machina, Chappie, I, Robot and more recently Her reveals an awareness of the challenges hyper-intelligent machines are already beginning to pose to complex issues such as human identity, the meaning of empathy, love and care.
How will more advanced, integrated technology shape the way we see our families, our societies – even ourselves?
The second had to do with an event scheduled for next year:
AI and Apocalypse
Centre for the Critical Study of Apocalyptic and Millenarian Movements (CenSAMM)
April 5 – 6, 2018. Inside the Big Top at the Panacea Charitable Trust gardens, Bedford, United Kingdom
CenSAMM Symposia Series 2018 / www.censamm.org
We invite papers from those working across disciplines to contribute to a two-day symposium on the subject of AI and Apocalypse.
Abstracts are due by December 31, 2017.
Recently ‘AlphaGo’, a Google/DeepMind programme, defeated the two most elite players at the Chinese game ‘Go’. These victories were, by current understandings of AI, a vast leap forward towards a future that could contain human-like technological entities, technology-like humans, and embodied machines. As corporations like Google invest heavily in technological and theoretical developments leading towards further, effective advances – a new ‘AI Summer’ – we can also see that hopes, and fears, about what AI and robotics will bring humanity are gaining pace, leading to new speculations and expectations, even amidst those who would position themselves as non-religious.
Speculations include Transhumanist and Singularitarian teleological and eschatological schemes, assumptions about the theistic inclinations of thinking machines, the impact of the non-human on our conception of the uniqueness of human life and consciousness, representations in popular culture and science fiction, and the moral boundary work of secular technologists in relation to their construct, ‘religion’. Novel religious impulses in the face of advancing technology have been largely ignored by the institutions founded to consider the philosophical, ethical and societal meanings of AI and robotics.
This symposium seeks to explore the realities and possibilities of this unprecedented apocalypse in human history.
**
You’ll note that these two events address religious and ethical issues surrounding AI, which in turn revolve, I imagine, around the still disputed matter of the so-called hard problem of consciousness. I’d specifically welcome responses that explore any overlap between my title question and that hard problem.