Apparently I’m Dead According to OpenAI

The world is continuing to rush headlong into a new universe of Artificial Intelligence but apparently I died before the gold rush began.


At least that’s what WatchGPT, an OpenAI-based app for the Apple Watch, thinks. I decided to give it a try and asked it a simple question: Who is Warner Crocker?

After explaining that it wasn’t a search engine, it spat out some “facts,” including that I died on Christmas Day 2020. I must have missed that.


Although my demise sounds pretty factual, I’m still hanging around as far as I know. And sure, the years since December 2020 were kinda sketchy due to the pandemic, but so far I have avoided COVID and other life-threatening adventures. The cast for my current gig seems to respond to me as if I’m alive and kicking. My family and friends still treat me like I’m walking around and causing trouble.

Granted, most of these AI products let you know that they’re still in development. Still, I can imagine quite a few scenarios where folks input a query, get a death notice, and then report it as fact in whatever vehicle they’re doing the research for.

Let’s list a few flights of fancy here:

  • A school paper on a contemporary figure.
  • Checking out info on someone you might want to date.
  • Running a check on a prospective employee or employer.

And to think, big companies are jumping into this pool feet first assuming they will be able to save money by cutting their workforce.

I’m wondering if my wife can use this to make an insurance claim?

What’s So Artificial About Artificial Intelligence?

Why are we calling this current fad/trend/gold rush into Artificial Intelligence “artificial?” Shouldn’t we be calling it Accumulated Intelligence?

From what I’m reading the output these new services are spitting out is more like a mash-up of what they’ve scraped and collected from around the Internet. You know. Stuff created by humans. Apparently the writings, the artwork, the photos, the music, the code, the thoughts, the you name it, have been collected and are being tumbled and jumbled up and presented as responses. So somebody can charge you for it or sell ads against it.


And knocking the moniker again here, that of course means it’s all been said and done before. There’s not much we can really credit to divine inspiration beyond the talent to discover, describe or display what already exists. Because that’s sorta kinda how we humans evolve (or are intelligently designed) anyway. We gain knowledge and intelligence through our experiences. And through those experiences we become who we are, think what we think, and create what we create based on the knowledge we accumulate.

I’m assuming that’s what the makers of artificial intelligence call real or natural intelligence. But it’s tough to sell ads against that.

Given that we humans are known for both brilliance and the not-so-brilliant in what we say, do, think, create and accumulate, you can say we as a species struggle a bit with the tensions brought about by natural intelligence. Certainly we seem to be hitting a speed bump on the brilliance part as the not-so-brilliant part continues to plow ahead of late.

But again, this AI fad is taking what exists, shaking and baking, stirring the pot, and presenting it to us in a newly polished form we can get on our smartphones while waiting for the transit apps to give us wrong information about our train’s arrival time.

The very human response when someone learns something new, or that an answer is wrong, can certainly be “I didn’t know that.” What’s funny with these machine learners, though, is that in the early going they seem to be spitting out mistakes just like humans do, and taking the same kind of offense when called on it. So nothing new under the sun there.

And apparently these machines need to be governed by rules. Well, that’s only human too. We govern ourselves (well, some of us do) in order to try to remain civil and polite. And to protect our profit margins. Again, only human.

So, I’m saying it’s early enough in this game that we should strip away the “artificial” in AI and change it to “accumulated.” Because sure as shooting at some point down the line some big error is going to be spit out by a machine that causes something bad to happen. And we’ll shift the blame to the machines. Just like we humans always do.

But I guess there’s one benefit to this “artificialness.” The machines can’t plead ignorance or “I don’t recall” when things get inconvenient or uncomfortable. At least until we start using “artificial lawyers.”