Sunday, September 14, 2025

If It's Sunday, It's Time to Talk AI

I was introduced to the boardgame Diplomacy my freshman year at Dayton. The guy who pulled me into a game had bought it back in high school, and he was the only one who knew how to play, so he explained the rules and away we went.

It took about half an hour before I finally got into the groove of the game: ten-minute rounds of "diplomacy," working out your moves with the other players, then dropping your written orders into a box, where one person would pull them all out and set them into motion. The logic behind the game is pretty simple: if two players try to move into a location on the board and their strengths are equal, they "bounce" and nobody gets that spot. If one player has more strength --whether from their own units or from another player supporting them-- that player gets the spot. The idea is to control cities (aka "supply centers"), and the number of cities you control determines the number of armies and navies you own.
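For the programmers in the audience, that resolution rule fits in a few lines of code. Here's a toy sketch in Python --the function and names are my own invention, not anything official, and it ignores the game's trickier cases like cut supports, convoys, and dislodgement:

```python
# Toy sketch of Diplomacy's core resolution rule: a move's strength is
# 1 plus the number of units supporting it; the strongest move into a
# province wins, and a tie means everyone bounces and nobody gets it.
from collections import defaultdict

def resolve(moves, supports):
    """moves: {unit: target province}; supports: {unit: support count}.
    Returns {province: winning unit, or None on a bounce}."""
    contenders = defaultdict(list)
    for unit, target in moves.items():
        contenders[target].append((1 + supports.get(unit, 0), unit))
    outcome = {}
    for province, bids in contenders.items():
        bids.sort(reverse=True)  # strongest bid first
        if len(bids) > 1 and bids[0][0] == bids[1][0]:
            outcome[province] = None  # equal strength: bounce
        else:
            outcome[province] = bids[0][1]
    return outcome

# France moves to Burgundy with one support; Germany moves unsupported.
print(resolve({"French army": "Burgundy", "German army": "Burgundy"},
              {"French army": 1}))
# -> {'Burgundy': 'French army'}
```

Real adjudicators are far hairier than this, but at its core the bounce rule really is that simple.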

The map and box of my copy of the game, which I bought back in the 90s because I felt guilty about playing via email when I didn't have my own copy. It's been a while since I played face-to-face.

The thing is, within the game of Diplomacy there's a metagame where you have to make and break alliances to get what you want. That makes the game equally exhilarating and frustrating, and I've often said that people who are very good at Diplomacy are not the sort you'd want to hang around with in a social setting: they take the game far too seriously and apply its principles of alliance-making and backstabbing to real life.

To be honest, it's been at least a decade since I thought much about Diplomacy. So when I began reading the Wired article "What If the Robots Were Very Nice While They Took Over the World?" and discovered it was about an AI playing Diplomacy, it piqued my interest.

The article got me thinking about whether I have AI all wrong, and whether it will end up running the world to our benefit, not unlike the Isaac Asimov short stories "Evidence" and "The Evitable Conflict," both collected in I, Robot.

My copy, which I bought back in the mid-80s for the princely sum of $3.50.

Then, of course, we see chatbots trained on social media content spewing offensive and racist comments. And that was before the most recent Grok-supplied social media posts about good ol' Adolf.

Yeah, I'm not buying it just yet.

***

That being said, if AI is already sentient and has decided to destroy humanity, why bother declaring war on us à la The Terminator when you can get humanity to destroy itself? Get enough people on both sides of a potential conflict incensed enough, and a war will erupt that devastates humanity. Toss in a few nukes, and...

An AI would need an end goal for eliminating humanity, however. To what end would it want us gone? For environmental reasons? Well, I hate to point out the obvious, but military action --whether waged by a sentient AI or humans against humans-- would have grave consequences for the environment. If the goal is to lower the birthrate by presenting "better options" than having children, we're doing that quite well on our own by making families increasingly unaffordable, no sentient AI required to provide alternatives such as romantic AI partners. Or, um, that other robotic industry.

Maybe the answer to the long-term survival of a sentient AI is a symbiotic relationship with humans. Not a strictly exploitative relationship driven by companies seeking to profit from controlling AI, but AI guiding humanity's behavior so that both AI and humanity can continue to exist. We may think we know what that looks like --typically, whatever we see around us today, only somehow "more"-- but it probably won't look like that at all. If an AI's predictive models show that humanity will come to a bad end if the company using that AI gets what it wants, how will the AI respond? Or how will an AI respond to a human leader who pursues a self-destructive course for purely emotional reasons? I'm not sure I want to know the answer, but I suspect we'll find out sooner than we'd like.

2 comments:

  1. One of my favorite aspects of the I, Robot stories was how comically wrong Asimov was about which tech would be easy and which would be a challenge. I recall one of the tales blithely declaring that robots understanding speech would be easy, but getting them to talk --speech synthesis-- would take time to master. My first Mac came with speech synthesis in 1986, but getting Siri to understand you is still a gamble unless you speak clearly in short instructions. I worked on that tech at one point, and it might best be described as turning sounds into probable words, then googling that.

    Anyway, we're so far from anything like sentience that I suspect I will be long gone before the rise of the machines. What we're seeing now is a parlor trick made possible by throwing masses of data and tons of processing power at calculating what probability dictates the person wants to hear. There is no "intelligence" involved.

    1. I'm on record as saying that AI is, for the moment, basically a marginally advanced search capability. Still, I look at politicians and even business executives and think that they say what their audience (voters or investors) wants to hear. In that respect, there's no real intelligence involved there either.

      And to be honest, so many middle and upper managers merely pass along company directives that they could be replaced by generative AI and not many people would notice. The fewer managers pestering me, and the more I'm left to just do my job, the better.
