
Archive for the ‘monica anderson’ Category

Would a democracy of artificial intelligences hold a variety of opinions?

Friday, June 2nd, 2017

[ by Charles Cameron — opening a conversation ]
.

I’m hoping to engage some of my friends and net acquaintances — Peter Rothman, John Robb, August Cole, Jamais Cascio, Monica Anderson, Chris Bateman, JM Berger, Tim Burke, Bryan Alexander, Howard Rheingold, Jon Lebkowsky and no doubt others — in a conversation on this topic, here at Zenpundit.

Starting as of now, with encouragement to come — send posts to hipbonegamer@gmail.com, any length, fire at will!

On the face of it, AIs that are seeded with different databases will come to different conclusions, and thus the politics of the company of AIs, democratically assessed — i.e. one AI, one vote — would be stacked in favor of the majority of kindred DBs from which the set was seeded. But is that all we can say? Imaginatively speaking, our topic is meant to arouse questions around both democracy and intelligence, artificial and otherwise. And politics, we should remember, extends into warfare.
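The intuition above — that if each AI's conclusions are a function of its seed database, a one-AI-one-vote poll simply reflects the seed distribution — can be sketched in a toy model. Everything here is hypothetical illustration: the database names, the opinions, and the 70/30 split are invented for the example.

```python
from collections import Counter

# Toy model: 70% of the AIs are seeded from kindred databases ("db_A"),
# 30% from a minority lineage ("db_B").
SEED_DBS = ["db_A"] * 7 + ["db_B"] * 3

def opinion(db):
    # Assumption of the sketch: kindred databases yield kindred conclusions,
    # so an AI's vote is a deterministic function of its seed.
    return "policy_X" if db == "db_A" else "policy_Y"

votes = Counter(opinion(db) for db in SEED_DBS)
winner, count = votes.most_common(1)[0]
print(winner, count)  # the majority seed-lineage wins: policy_X 7
```

Under these assumptions the "democracy" never deliberates at all; the outcome was fixed the moment the seed databases were allocated — which is precisely why the question of whether real AIs could diverge from their seeds is the interesting one.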

**

Two announcements I saw today triggered my wish to stir the AI pot: both had to do with AI and religion.

The first had to do with an event that took place last month, May 2017:

Artificial intelligence and religion
Theos Newsletter, June 2017:

Can a robot love? Should beings with artificial intelligence be granted rights? The rise of AI poses huge ethical and theological questions. Last month we welcomed John Wyatt and Beth Singler from the Faraday Institute to discuss these issues.

Specifically:

Advancements in Artificial Intelligence (AI) and robotics have been making the headlines for some time now. Articles in mainstream media and features in prime-time television keep pouring in. There is clearly a growing interest in humanoid robots and the varied issues raised by their interactions with humans.

The popularity of films such as Ex Machina, Chappie, I-Robot and more recently Her reveal an awareness of the challenges hyper-intelligent machines are already beginning to pose to complex issues such as human identity, the meaning of empathy, love and care.

How will more advanced, integrated technology shape the way we see our families, our societies – even ourselves?

and one event next year:

AI and Apocalypse
Centre for the Critical Study of Apocalyptic and Millenarian Movements (CenSAMM)
April 5 – 6, 2018. Inside the Big Top at the Panacea Charitable Trust gardens, Bedford, United Kingdom
CenSAMM Symposia Series 2018 / www.censamm.org

We invite papers from those working across disciplines to contribute to a two-day symposium on the subject of AI and Apocalypse.
Abstracts are due by December 31, 2017.

Recently ‘AlphaGo’, a Google/Deepmind programme, defeated the two most elite players at the Chinese game ‘Go’. These victories were, by current understandings of AI, a vast leap forward towards a future that could contain human-like technological entities, technology-like humans, and embodied machines. As corporations like Google invest heavily in technological and theoretical developments leading towards further, effective advances – a new ‘AI Summer’ – we can also see that hopes, and fears, about what AI and robotics will bring humanity are gaining pace, leading to new speculations and expectations, even amidst those who would position themselves as non-religious.

Speculations include Transhumanist and Singularitarian teleological and eschatological schemes, assumptions about the theistic inclinations of thinking machines, the impact of the non-human on our conception of the uniqueness of human life and consciousness, representations in popular culture and science fiction, and the moral boundary work of secular technologists in relation to their construct, ‘religion’. Novel religious impulses in the face of advancing technology have been largely ignored by the institutions founded to consider the philosophical, ethical and societal meanings of AI and robotics.

This symposium seeks to explore the realities and possibilities of this unprecedented apocalypse in human history.

**

You’ll note that these two events address religious and ethical issues surrounding AI, which in turn revolve, I imagine, around the still disputed matter of the so-called hard problem in consciousness. I’d specifically welcome responses that explore any overlap between my title question and that hard problem.

Considering Viv, Wolfram Language, Syntience, and the GBG

Wednesday, June 8th, 2016

[ by Charles Cameron — expanding the computable to include qualitative ideation ]
.

Let’s start with Viv. It looks pretty phenomenal:

That video is almost exactly a month old, and it’s pitched at “the universe of things” with a marked tilt towards e-commerce. Fair enough.

**

It’s instructive to compare it with Wolfram Language, although here I’ve had to go with a video that’s a couple of years old:

Stephen Wolfram, the creator of both Mathematica and Wolfram Alpha, is focused on the world of numbers — and incidentally, that includes graphs of the sort I’ve been discussing in my series here On the felicities of graph-based game-board design, as you can see in the video above.

It will be interesting to see how the two of them — Viv and Wolfram — interact over time. After all, one of the purposes of these lines of development is to dissolve the “walled gardens” which serve as procrustean beds for current thinking about the nature and possibilities of the web. Do these two gardens open to each other? If so, why? If not, why not?

**

I’ve talked enough for my purposes about AlphaGo and its narrowly focused though impressive recent triumph, and the wider picture behind it, as expressed by Monica Anderson — and tying the two together, we have this video from Monica’s timeline, Bob Hearn: AlphaGo and the New Era of Artificial Intelligence:

Bob Hearn: AlphaGo and the New Era of Artificial Intelligence from Monica Anderson on Vimeo.

Monica’s Syntience, it seems to me, is a remarkable probing of the possibilities before us.

**

But I’m left asking — because Hermann Hesse in his Nobel-winning novel The Glass Bead Game prompts me to ask — what about the universe of concepts — and in particular for my personal tastes, the universe of musical, philosophical, religious and poetic concepts? What of the computational mapping of the imagination?

My question might well have large financial implications, but I’m asking it in a non-commercial and not only quantitative way. I believe it stands in relationship to these other endeavors, in fact, as pure mathematics stands in relation to physics, and hence also to chemistry, biology and more. And perhaps music stands in that relationship to mathematics? — but I digress.

If I’m right about the universe of concepts / Glass Bead Game project, it will be the most intellectually demanding, the least commercially obvious, and finally the most revelatory of these grand-sweep ideas.

From my POV, it’s also the one that can give the most value-add to human thinking in human minds, and to CT analysts, strategists, journos, educators, therapists, bright and playful kids — you name them all!

Seeing it in terms of counterpoint, as Hesse did — it’s the virtual music of ideas.

Conkers (the game) and Deep Learning

Wednesday, April 13th, 2016

[ by Charles Cameron — offered to 3QD ]
.

This is my second submission attempting (and failing) to become a regular contributor at the fine web-aggregator known as 3QD — you can read my first here. On rereading this more recent attempt, I am not sure I would have selected it myself had I been one of the judges — it perhaps expects too much “British” knowledge of its readers, who are as likely to come from Hoboken or Pakistan as from Oxford (hat-tip there to my two 3QD friends, Bill Benzon & Omar Ali). For those of you, therefore, who may not know what conkers is, here’s a brief video introduction, with the actual game demo starting around the 1’35” mark:

I’ll add a fantastic passage from Seamus Heaney at the end of my post, to give it the final mahogany polish a conkers post deserves.

____________________________________________________________________________________________

Three Quarks Daily submission:

Conkers is for kids. So what does it have to do with computers, AI, “deep learning” or robots?

It’s a very British game, conkers, played with the seeds of horse chestnut trees, of which Britain counts almost half a million, pierced and strung on string. I suppose you could think of it as a primitive form of rosary bashing, with each player’s rosary having only one bead, but that might give others the wrong impression since conkers (the game) is sacred the way taste is sacred before anyone has told you, “this is strawberry”, not the way sanctuaries are sacred after someone has put an altar rail round them, or screened them off with a rood screen.

Even giving the game the name Conkers with a capital C puts it on a pedestal, when all it’s about is gleefully finding a suitable conker, maybe loose on the ground or maybe encased in its prickly green shell, drilling it through with some sort of skewer or Swiss Army device, threading it (stringing it) on string, finding a gleeful or shy playmate, and whazzam! conking their conker with your own so theirs falls apart and yours remains triumphant. Battered perhaps, but victorious.

conkers in the wild - bbc

An artificial general intelligence, left to its own devices and skilled in “deep learning”, will surely figure out that play plays a significant role in learning, by reading Johan Huizinga perhaps, or figuring out “the play’s the thing” – or noting that play is how learning develops in mammalian infancy and expresses itself in human mastery. And since learning is what artificial general intelligence is good for and would like to be even better at, you can bet your best conker that artificial general intelligence will want to learn to play.

Okay, computers have played, and beaten humanity’s best, at such games as Pong, Space Invaders, Draughts, Backgammon, Chess, Jeopardy, Rock-Paper-Scissors, and Go

— and apparently one set of AI researchers is considering Texas Hold’em as a plausible next challenge. I want to see them try their hands (hands?) and minds at conkers, though.

Consider: they’d need more than brainpower, they’d need mobility. It wouldn’t take a rocket scientist but a robot to do the trick – locating chestnut trees, okay, with a judicious use of Google maps for targeting and drones for close observation, agility to get around the trees (climbing ‘em?) gathering and evaluating nuts, their sizes, densities, colors and weights, testing different angles of attack and types of needles for threading, the respective efficacies of polished (ooh like mahogany!) vs unpolished nuts (somewhat more in the spirit of wabi-sabi), styles of rough or silken string — and then the dexterity to swing the strung nut at its similarly strung and loosely hanging sibling-opponent!!

conkers - sun

Ah, but there’s a child to find first, shy or enthusiastic, and the very approach of a disembodied brain or robot might scare or enchant said child. Your successful robot will need to avoid the uncanny valley of too close resemblance, in which a machine garners the same emotional thrill as Chucky the scary doll from Child’s Play and its endless sequels…

All this, to beat the poor kid at the kid’s own game?

Perhaps our robot overchild will have read the bit about “It matters not who won, or lost, but how you played the game” – and will have the good sense and humility to build itself an arm that’s not quite as strong as a child’s arm, and will eventually have played enough child opponents to develop a style that wins only fractionally more games than it loses – say 503 games out of a thousand.

What is man, that thou art mindful of him? and the son of man, that thou playest conkers with him?

Oh, conkers isn’t the only game I’d like to see the computers try for – in fact it’s one of two games that have been outlawed in some British schools – and to live outside the law you must be honest, as Dylan says. Another banned — and therefore extra-interesting — game is leap-frog. How do you win and how do you lose at that? You don’t – you just play.

That’s when things will begin to get really interesting, I think – when an artificial general intelligence, with or without robotic body, learns playfulness. There will need to be constraints of course, of the Do No Harm variety – a fireproof Faraday cage playpen perhaps?

**

But play is by no means limited to infancy, it also finds an outlet in genius. So here’s my more serious, though still playful, question for the AI folks out there, and Monica Anderson with her “Intuitive AI” in particular:

When will an AI be able to play Hermann Hesse’s Glass Bead Game?

After all, it’s the only game design that has arguably won its designer a Nobel Prize. And at the moment, it’s brilliantly undefined.

Grail for Game Designers

Hesse conceptualized the game as a virtual music of ideas – contrapuntal, polyphonic. Its origins were found in musical games, mathematicians soon added their own quotient of arcane symbols to those of musicians, and other disciplines followed suit until as Hesse puts it:

The Glass Bead Game is thus a mode of playing with the total contents and values of our culture; it plays with them as, say, in the great age of the arts a painter might have played with the colors on his palette. All the insights, noble thoughts, and works of art that the human race has produced in its creative eras, all that subsequent periods of scholarly study have reduced to concepts and converted into intellectual values the Glass Bead Game player plays like the organist on an organ. And this organ has attained an almost unimaginable perfection; its manuals and pedals range over the entire intellectual cosmos; its stops are almost beyond number. Theoretically this instrument is capable of reproducing in the Game the entire intellectual content of the universe.

Various people, myself among them, have proposed playable variants on Hesse’s great Game, and with a little further precision, definition and development, one of us might be able to propose a specific GBG format that would challenge AI to, essentially, produce profundity, beauty, perhaps even “holiness.” For when all’s said and done, that’s where Hesse’s game inexorably leads, as his Game Master Joseph Knecht tells us:

Every transition from major to minor in a sonata, every transformation of a myth or a religious cult, every classical or artistic formulation was, I realized in that flashing moment, if seen with a truly meditative mind, nothing but a direct route into the interior of the cosmic mystery, where in the alternation between inhaling and exhaling, between heaven and earth, between Yin and Yang, holiness is forever being created.

Sample “island” of moves from Joshua Fost‘s Toward the Glass Bead Game – a rhetorical invention

3QD is a polyphonic glass bead game of sorts, and comes far closer to Hesse’s ideal than the internet as a whole, since it curates the thoughts and insights it delivers — so maybe the challenge will be for an AI to win a place at the 3QD table by providing an essay, 1,000 to 2,500 words in length, to Abbas, by email at or before 11.59pm New York City time on some Saturday yet to be announced.

GBG cover Hesse

Preferably, such an AI will also have a stock of essays available, touching on a variety of subjects with passion, clarity, and at least a hint of quasi-human humor, for posting on specified Mondays. And before it writes in, it may need to think itself up a nom de plume.

____________________________________________________________________________________________

So that was my 3QD submission, with a couple of additional images thrown in. As I say, I’m not convinced I got it right — and next time, if there is a next time, I’ll likely avoid the Glass Bead Game and pick on some other topic. To give you a “rounded experience”, however, I am going to return now to conkers and chestnut trees, and offer you that quote I promised from Seamus Heaney, taken from his essay The Placeless Heaven: Another Look at Kavanagh:

In 1939, the year that Patrick Kavanagh arrived in Dublin, an aunt of mine planted a chestnut tree in a jam jar. When it began to sprout, she broke the jar, made a hole and transplanted the thing under a hedge in front of the house. Over the years, the seedling shot up into a young tree that rose taller and taller above the boxwood hedge. And over the years I came to identify my own life with the life of the chestnut tree.

This was because everybody remembered and constantly repeated the fact that it had been planted the year I was born; also because I was something of a favourite with that particular aunt, so her affection came to be symbolised in the tree; and also perhaps because the chestnut was the one significant thing that grew visibly bigger by the year…

When I was in my early teens, the family moved away from that house and the new owners of the place eventually cut down every tree around the yard and the lane and the garden including the chestnut tree. We all deplored that, of course, but life went on satisfactorily enough where we resettled, and for years I gave no particular thought to the place we had left or to my tree which had been felled.

Then, all of a sudden, a couple of years ago, I began to think of the space where the tree had been or would have been. In my mind’s eye I saw it as a kind of luminous emptiness, a warp and waver of light, and once again, in a way that I find hard to define, I began to identify with that space just as years before I had identified with the young tree.

