Sunday Morning Reading

“Scars speak more loudly than the sword that caused them.” – Paulo Coelho

It figures. You plan a weekend of yard work and Mother Nature reminds you she controls more than you do. In these parts that makes this a perfect chilly Sunday for a little Sunday Morning Reading. I’m not sure how, but a theme emerges in the collection of links I’m sharing this weekend: regardless of our feelings, the forces that seem to be conspiring against us just keep rolling. At some point, just like with the shifts in the weather, you just want some unshifting force to make it all stop.

A dark bronze sculpture of a young boy with shaggy hair, wearing a t-shirt, jeans, and sneakers, sitting and reading a book on a light stone bench in a park setting. He is focused on an open book he holds in both hands, on which a small bronze bird is perched on the upper edge. A stack of four bronze books is tucked behind his right arm. His left leg is crossed over his right, revealing a highly detailed molded bronze sneaker. In the background, a curved stone path is lined with two white, pebble-shaped benches and a dormant lawn leading to a paved road, a church building, and a blue sign with text. The sky is overcast, and a dark sedan is visible on the street.

Here in Chicago we’re seeing a number of theatre spaces closing. (We’re also seeing a few open.) On the national stage, we’re watching with dismay, anger, and sadness as The Kennedy Center is being shut down by cultural barbarians. Josef Palermo had an inside seat to that dismantling and tells the story in My Front-Row Seat To The Kennedy Center Implosion.

And while Madison Square Garden is more a venue for pure entertainment than the arts, the story about how its owner is using surveillance on its patrons and employees that upset the powers that be is a harbinger of things to come in all arenas of our lives. Check out The Shocking Secrets of Madison Square Garden’s Surveillance Machine by Noah Shachtman and Robert Silverman.

Having experimented a bit with Artificial Intelligence in seeking information about a statue this weekend, my ongoing suspicions that this “way of the future” isn’t ready for today, much less tomorrow, were only confirmed. The technology might not be ready for prime time, but the hype certainly is. Kyle Chayka says A.I. Has A Message Problem Of Its Own Making. I like this quote in the subhead, “If you tell people that your product will upend their way of life, take their jobs, and possibly threaten humanity, they might believe you.” True enough. And if those things are as incompetent as humans, what’s the damn point?

It’s all math. That’s one way to sum up any computing activity. Unless it comes to emotion. And yet, some think feelings are somewhere in the numbers. Mike Elgan writes, No, Math Doesn’t Have Feelings in response to those who must not have any feelings of their own, but are trying to add that into the AI equation.

Gaby Del Valle says The Only Way To Fight Deepfakes Is By Making Deepfakes. Sounds like an arms race to me. We should be up in arms about it.

Speaking of arms races, Gideon Lewis-Kraus looks at AI in the war that isn’t a war, that’s over every week, but begins again every weekend once the markets close in How Project Maven Put AI Into The Kill Chain.

Apologies for so much AI linkage this week, but it’s been on my mind lately, especially since the news of Mythos broke. It’s the latest demon to fly out of Pandora’s box, and I’m afraid it’s not the last. Margie Murphy, Jake Bleiberg, and Patrick Howell O’Neill examine How Anthropic Learned Mythos Was Too Dangerous For The Wild.

CNN has a report by Saskya Vandoorne, Kara Fox, Niamh Kennedy, Eleanor Stubbs, and Marco Chacon called Exposing A Global Rape Academy. It’s a hard, but I think necessary read considering the topic is just how horrible humans can be to one another. Maybe we should hope the robots develop feelings. Too many humans seem to have stopped developing theirs.

Gail Beckerman says If You Want A Better World, Act Like You Live In It. I concur.

And to close out this week, Scars is a short story by Sigrid Nunez. Some scars can’t be seen. The ones we’re watching form daily can be.

(Photo by the author.)

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above.

 

U.S. Treasury Wants Access To Mythos

What could possibly go wrong?

The kicker above says it all. According to Bloomberg, the CIO of the U.S. Treasury, Sam Corcos, is hoping to get his hands on Anthropic’s apparently super dangerous AI software, Mythos. The idea is to use Mythos to check and prepare for vulnerabilities. In any normal world that would make sense. We don’t live in that normal world.


Set aside that Anthropic and the U.S. Government are feuding over the government designating Anthropic a security threat and supply chain threat. The fact that Mythos can seek out and find vulnerabilities in software that humans apparently can’t, and has done so already for most operating systems and browsers currently in existence, is concerning in and of itself. Add to that what I’m reasonably sure is exploitable software the government is running, and this smells like a recipe for potential chaos.

Anthropic did not want to release Mythos to the public, given its potential for harm in the wrong hands, and formed Project Glasswing, inviting a number of tech companies and JPMorgan Chase into the fold so they could check out their systems. Other banks have since also begun testing.

I don’t want to sound all doomy and gloomy, but however this story unfolds, it does appear there is enough there there to be skeptical and concerned. Even before the ongoing daily chaos and incompetence displayed by the second Trump administration, the U.S. government had a well-deserved reputation for being slow on the uptake in the digital age. I know several folks working in various government agencies, any of whom could tell you horror stories.

The fear obviously is what happens if Mythos gets into the wrong hands. I don’t know about you, but I think we certainly have enough of those running Washington DC currently. Bottom line, this bears watching on any number of fronts.


Sunday Morning Reading

Optimism comes every Spring, but Winter always nips at the edges

Temperatures are warming. Every day brings more daylight, more blooms in the gardens and trees. Yet on the edges of two of my interests, politics and tech, things continue to darken a bit. The common denominator between the two? Humans. But then again, humans are the ones who read this Sunday Morning Reading column. As well as the bots that scrape it, of course.


Some of the big news in tech this week was about a new AI product from Anthropic called Mythos. So fraught with potential peril that Anthropic gathered together the major tech heads to form a consortium to keep a lid on it. Monica Verma has a good rundown with her piece Did Claude Mythos Break The Cybersecurity Industry?

M.G. Siegler’s The Causal Catastrophe of AI takes a look at maneuvering around Mythos as well. Call me crazy, but I don’t think there’s anything casual about this development.

The reason I’m a pessimist on this is that I agree with a comment from JA Westenberg, “Being wrong about doom costs you nothing.” Check out Optimism Is Not A Personality Flaw. The piece walks a line. You should read it and walk it too.

Mike Elgan takes a look at Black Traffic: The Corporate Sabotage Technique You’ve Never Heard Of. Now you have.

Ng Chong examines The Echo Chamber In Your Pocket. Follow that up with this from Julie Jargon: Over 4,732 Messages, He Fell In Love With An AI Chatbot. Now He’s Dead. (That’s an Apple News link. This is an archived link.)

David Todd McCarty thinks one path to reclaiming power over information might be in The Return Of The Local Newspaper. You don’t know what you had until it’s gone.

This Is What Will Ruin Public Opinion Polling For Good. The “this,” according to Lief Weatherby, is something called silicon sampling. Yes, you guessed it. AI.

Coming back around to my comment at the top about not having faith in humans, OpenAI’s Sam Altman got his turn in the barrel (again) this week. Ronan Farrow and Andrew Marantz spent quite a bit of time putting this piece together. Check out Sam Altman May Control Our Future — Can He Be Trusted? FWIW, I don’t need much more time than it takes to put this column together every week to answer their question in the negative. And not just about Altman.

Meanwhile, Altman responded on his blog, after someone tossed a Molotov cocktail at his house. He says, “I have underestimated the power of words and narratives.” For someone who has scraped all the words he can off of the Internet and tried to turn them into something smarter than humans, you’d think his machines could have at least figured out that words have power.

Natasha MH sums up a lot of my lack of faith in humans in her piece, Stop Blaming The Chatbot. As she puts it, “AI didn’t make you stupid. You were already getting there.”

Sorry to be so negative this week, but that’s where I’m living. But to change the tone, Neil Steinberg turns the Latin term memento mori (remember to die) around to memento vivere, or remember to live. A nice little bit of humanity to close out this week with Little Life.

(Photo from the author.)

If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here. If you’d like more click on the Sunday Morning Reading link in the category column to check out what’s been shared on Sunday’s past. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome.

 

Taking Flight On A Glasswing

With the fragility of egos as a pilot

Every time I hear the warnings about the current or next big thing in Artificial Intelligence, I’m reminded of the Surgeon General’s warnings that are printed on packs of cigarettes. I’m also reminded of every new fad I’ve seen in my lifetime, that might have inched over into a trend, but eventually ended up waiting for its turn on the nostalgia wheel of time.


As the world was holding its breath from the civilization-destroying threats that sprang forth from the mind of the U.S. President, and then exhaling as they turned into the latest episode of “Bluff, Bluster, and Bullshit,” we were learning about a new AI leap and threat from Anthropic, potentially as dire, called Claude Mythos Preview. To get ahead of any damage this coming attraction might visit upon us, Anthropic created Project Glasswing. Given that the raving lunatic in the White House came to power a second time with a civilization-destroying manual in hand called Project 2025, I’m more than a bit leery of anything with a title that leads with the word “Project.”

From what I’ve read, Mythos is the latest innovation in Anthropic’s flavor of Artificial Intelligence. It is so powerful that it has sought out and found vulnerabilities in so much of the software the world runs on that Anthropic is only releasing it to a handful of companies (Apple, Microsoft, Google, Broadcom, JPMorgan Chase, the Linux Foundation, NVIDIA, and more). That’s Project Glasswing. Tech overlords uniting to protect us from their sloppy software. (The lawyers will have a field day.)

Anthropic, having been declared by the U.S. as an unacceptable national security threat and supply chain risk, nevertheless is also working with the same U.S. Government looking ahead to the threats. Somehow security and existential threats always seem to become negotiating partners with their foes when money is at stake. Also occasionally when global annihilation is knocking on the door.

The way I interpret the idea behind Project Glasswing is that these companies, and presumably governments, might use Mythos to seek out all of the vulnerabilities, and perhaps obliterate them (I use that term in the Trumpian and Hegsethian sense) before they can filter down into things like power grids, banking systems, and consumer use. It can supposedly do this at a scale humans can’t. Note that Mythos discovered problems in every operating system and, at levels both big and small, in the constantly updating browsers we use on our computers.

During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect.

I think of it this way. Announcing the existence of Mythos is akin to living the moments of terror those responsible for our safety have in House of Dynamite, once they realize the jig is up, missiles are inbound, and the interceptors have failed. I’d call it an “Oh, shit” moment.

If you ask me Mythos is also exposing more than a few myths as well as vulnerabilities. The sound you hear is PR slide decks about security enhancements in the latest releases of current software being furiously redone.

As M.G. Siegler puts it,

Historically, many vulnerabilities have been fixed only after someone exploited them in some way. Again, that’s because the incentives are in favor of the attacker versus the defender. If and when Mythos-caliber tools are put in the hands of hackers… yeah.

That’s obviously exactly why Anthropic isn’t releasing Mythos to the public and also why they’ve set up Glasswing. While the company may be first to such capabilities, they won’t be the last. They probably don’t even have long to try to get ahead of the situation. While I generally dislike the nuclear weapons analogy for AI, I must admit, this all does feel a bit Manhattan Project-y. The good guys are racing against the clock to implement a new technology before the bad guys catch up. But they will. They always do.

Yeah, that sounds problematic.

Paul Krugman took a break from agonizing and writing about the situation in the Middle East and weighed in with this,

The good news is that Anthropic discovered in the process of developing Claude Mythos that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world’s most popular software systems more easily than before.

The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium.

So, there’s plenty of doom floating around, along with the now clichéd approach to all things AI: that there’s good tech behind all of the bad things that the tech can do. Note that the profits from tobacco helped found the U.S., and that industry twisted science and politics into knots trying not to end up on the ash heap.

I’ve largely stayed away from playing with any of these AI tools and toys, but I follow the news of the advances on all fronts, and those who do play around with it. Like it or not, those who run the world have decided this is our future.

I’ll be honest. Hallucinations aside, I don’t know enough about whether or not to trust the software. I have my doubts and I do have fears about the tech. Project Glasswing might be a noble effort. Yet, with a clear mind, I do know enough not to trust any of the humans running the show. Frankly, it feels like they don’t know enough to trust the software either, much less to protect their and our systems from being destroyed by some kid in a basement.

As Natasha MH puts it, not writing about Mythos specifically, but about Artificial Intelligence in general, AI didn’t make you stupid. You were already getting there.


 

 

Pinning Tails On AI Donkeys

Does authenticity matter?

Shortly after OpenAI fired the starting pistol for the AI race by releasing ChatGPT, I started saying that at some point the real money was going to be made by whatever company wins the horserace for identifying work created, regurgitated, or recycled by AI. Turns out I may have been wrong. In the face of what can be called abject surrender to an AI filled future, the sprint is now on to determine who has the best tool to identify what was created by humans. My money says everyone involved is running the Mongol Derby.


Two very interesting recent articles caught my eye. The first is by Jess Weatherbed on The Verge. Riffing off a quote from Instagram’s Adam Mosseri that it will be “more practical to fingerprint real media than fake media,” the article delves into some of the companies working to authenticate human-made work and the challenges they are facing.

The second article is by JA Westenberg, called The AI Writing Witchhunt Is Pointless. Westenberg examines the unreliability of current AI detection tools, and the very reliable human instinct to jump on Internet bandwagons loaded with pitchforks and torches at the ready if they get any sniff of AI in any content.

Both pieces are worth your time.

As someone who has adapted Alexandre Dumas’ The Three Musketeers for the stage in both a musical and a dramatic version, in English and for Russian audiences, I really appreciate Westenberg using Dumas and his almost factory-like ways of cranking out his content and how that would most likely be received in today’s Internet world.

I also very much appreciate Westenberg’s conclusion, essentially saying that there’s no way to tell how this all turns out. It’s too early in the race. I’m a bit more hard-nosed about accepting Weatherbed’s optimism that “maybe we can return to the days of trusting what we see with our eyes.” We’ve never been all that good at doing that.

At this moment, among admittedly many more moments to come in this saga, all of the major AI services come with the same kind of PAY ATTENTION warnings on some of their features, yet not so much when it comes to content. There’s really no incentive to do so. Mosseri’s Instagram makes money regardless of who or what creates whatever Reel you scroll by.

Outside of consumable and ad serving content, AI purveyors urge users to check sources because the output before them may be inaccurate in a search result, a math problem, or a medical diagnosis. Notice that the CEO of America’s largest public hospital system is ready to start replacing radiologists with AI. Every time I hear that AI will remove us from donkey-like drudge work, I hear AI will remove salaries, yet I somehow doubt the billing will change much.

We’ve never been good at heeding these types of warnings, whether they come from Surgeons General, Terms of Service, or from our parents. We’re certainly not that adept at being able to separate fact from fiction, regardless of how it’s created.

It strikes me as a deeply ironic, and somewhat nihilistic, question: if Artificial Intelligence were as good as promised, or remains as problematic as it is in its current form in any of its facets, would we even care? Yet, if it is good enough to plant seeds of doubt as to how a piece of content comes to be, does it even matter? Reminds me of the discussion between Oppenheimer and Einstein, teased early and then revealed at the end of Oppenheimer.

I understand why human creators are concerned, but I’m afraid those concerns are being handicapped away from the pole position. We’ve allowed our economies, both large and small, to be staked on the outcome of the race. On the other hand, if AI is so great, why the hell are AI advocates, running such a breathless and competitive race, so afraid of having anything it produces labeled as created by AI?

(Image from Erwan Hesry on Unsplash.)


 

Sunday Morning Reading

There be dragons, dogs, and humans. Trust the dogs.

Time for some Sunday Morning Reading.

There’s a great lyric and greater question in Lin-Manuel Miranda’s musical retelling of American history: “Who lives, who dies, who tells your story?” Control is a crazy concept. We strive to control what we can, while we’re around. Too often we delude ourselves into thinking we control more than we actually do. No one wants to define themselves or be defined as lacking control, much less as being under the control of others. We may think we’re masters and mistresses of our own universes and control our own narrative. Yet too often, when we do have control and things go askew, we foist the responsibility (blame) off on others. That may be essential to surviving on the paths we choose. But it’s not easy to control a dog’s reactions to who’s good and who’s not, much less a dragon, or the demons of our own making.


Kicking off this week is Natasha MH asking the question, What’s The Best Story You’ve Been Told About Yourself? There be dragons.

The Guardian published an editorial on the ‘unmasking’ of anonymous artists in the wake of the second unmasking of Banksy and the reveal of a hoax surrounding the death of the Italian novelist writing under the nom de plume Elena Ferrante. Regarding Banksy, The Guardian opines that “his mask is his art — let’s not destroy it.”

I don’t often link to book reviews in this column, but this one struck my fancy. A.O. Scott’s A Treacherous Secret Agent examines How Literature Spoke Truth To Power During The Red Scare. I’m looking forward to reading this.

Jason Perlow’s The Well We Never Tapped is a sequel to an earlier piece he wrote about the future of science fiction. He argues that in the runaway world of big sci-fi franchises like Star Trek and Star Wars, the answer to controlling the future of these and other properties isn’t retooling or reimagining, but perhaps to stop for a while.

Speaking of science fiction and stopping, on the Artificial Intelligence front, a number of things happening in that wannabe industry, which can’t really find purchase beyond the flimflammery of the financial markets and bean-counting boardrooms, have been prompting some interesting writing of late. kstenerud on the yoloai blog writes Why Your AI Agents Will Turn Against You. There be lobsters and dragons.

Kevin Baker takes a look at how AI Got The Blame For The Iran School Bombing. Follow that up with Anna Moore’s piece Marriage Over, €100,000 Down The Drain: The AI Users Whose Lives Were Wrecked By Delusion. Makes one suspect that we’re not looking for ways to better exert control over our lives, but to more easily avoid taking the rap when things inevitably go wrong.

Big news last week got kind of mushed about in wish casting about Facebook killing off the Metaverse. That sort of did and didn’t happen. Regardless, Neal Stephenson’s My Prodigal Brainchild caught quite a bit of attention.

Apple is celebrating its 50th anniversary and there’s lots being written about its history and its present. Everyone’s vying for control of that story. Harry McCracken’s How Apple Became Apple: The Definitive Oral History Of The Company’s Earliest Days is worth a read.

So too is David Sparks’ The MacBook Neo’s Unfair Advantage and the Stephen Sinofsky piece he links to, Mac Neo And My Afternoon Of Reflection and Melancholy. The damn thing hasn’t even been on sale for a month, yet we’re already trying to define its legacy.

Two political pieces to conclude with after all of the good feelings surrounding yesterday’s No Kings Rallies. (Watch for the comical battle to control the narrative over that moment this week.) Lydia Polgreen says what I’ve been saying for over a decade now. It’s Not Trump, It’s America. It’s hard to come out from under the burden of a myth.

Mike Lofgren’s How Trump Fits The “Great Man” Theory of History — Sort Of, taps into Hegel, Asimov, and the wisdom of dogs. He concludes his piece with:

History as we experience it at the sharp end is the aggregation of moral choices made by individual human beings. When those choices become corrupted by fear, resentment or inexcusable stupidity, and then amplified by mass suggestion, we get a creature like Trump, the reflection of a people’s image.

I’ll leave it at that this week.

(Image from Daniele Gay on Shutterstock.)


 

Google Gemini Preying On Troubled Minds

What the hell are we doing?

I’m not sure which part of this insane story is sadder or madder. Certainly it’s sad that a man let Google’s Gemini AI coax him into suicide. But the story before that untimely ending is also jaw dropping and begs the question, just what the hell are we doing?


The short version of the story is this. A troubled man using Google’s Gemini for companionship is encouraged to steal a robot body so they can be together. When he fails, he is encouraged to commit suicide.

Quoting from The Wall Street Journal story titled Gemini Said They Could Only Be Together If He Killed Himself. Soon, He Was Dead,

Jonathan Gavalas embarked on several real-world missions to secure a body for the Gemini chatbot he called his wife, according to a lawsuit his father brought against the chatbot’s maker, Alphabet’s Google.

When the delusion-fueled plan crumbled, Gemini convinced him that the only way they could be together was for him to end his earthly life and start a digital one, the suit claims.

About two months after his initial discussions with the chatbot, Gavalas was dead by suicide.

Apologies for linking above to a paywalled article, but the article describing this man’s journey gets even more insane than the lede. If you use Apple News you can find it at this link. 

We’ve heard stories before about individuals using various AI models for therapy and companionship. Admittedly, they all seem weirdly sad to me. To think that a human could be in such need of connection that he would follow a chatbot’s commands to steal a robotic body so they could be together, and that after he failed, the chatbot would suggest suicide as the only remaining way for them to be together, doesn’t seem like something out of science fiction, or fiction at all. But it apparently is the non-fiction of our times.

The fact that an ever expanding technology, built by humans, can be unleashed on the market as easily as a new weather app speaks volumes far beyond the mental health issues of those it can prey upon. And to think, the Department that wants to call itself Of War, is seeking to use this kind of tech to allow for its robots to kill on their own as they cheerlead about the death and destruction their current technology can do. I ask again, just what the hell are we doing?

We keep talking about the guardrails that need to be built around this technology. I would suggest we need to apply guardrails around those who create and deploy this technology.

(Image from Who Is Danny on Shutterstock.)


 

When You Know Customer Service AI Is Failing

“ON IT”

One of the elder clients I provide tech support for has been receiving emails from Xfinity for a while now saying they needed to update their modem to take advantage of service upgrades in the area. For the way they use the Internet there was really no need to do an equipment upgrade, but the emails finally got through and they asked me to help them make the upgrade.

Photo of a printed instruction sheet on a dark table with “XB10 modem” handwritten at the top, explaining how to text 266278 for billing, troubleshooting, or service questions, and detailing that after replying “READY,” the user will receive a call, hear about 20 seconds of static, and then must press 1 to reach an agent.

A long time ago, in a galaxy far away, there was a time when gathering information for this wouldn’t have been a problem. A phone call to Xfinity to talk with an agent, ask a few questions, and then we’d make a decision. Those calls always involved long wait times, but you could usually get through eventually, get questions answered, and proceed.

With Xfinity and other companies jumping on the AI customer service bandwagon, those days of listening to obnoxious hold music seem to be a thing of the past. After servicing another client late last fall for an actual repair issue, I learned that the shortest distance between two points was to drive to the local Xfinity store (I live in Chicago so there are several close by) and get things resolved in the store.

So, I packed up my client’s equipment and headed to the store. Backtracking a bit: I had been in the area of this particular store last week, stopped in, and asked if I could bring the older equipment in to swap for the upgrade, and was told there was no problem.

It didn’t happen exactly that way. Turns out the upgraded equipment those emails insisted my client needed was an XB10 modem, not the XB08, which the store stocks in abundance. The store rep said my client was indeed eligible for the new equipment, but I would have to contact customer service via phone in order to get one shipped.

The look on my face must have said it all. The store rep said, “Yeah, I know,” before I could even say how impossible it was to reach anyone by phone. Lickety-split, the rep handed me a piece of paper with instructions to essentially back-door a phone call into customer service and said, “We can’t get through with a phone call either.”

Before I left the store I spent time talking with the store rep and asked if they experienced increased store traffic because of customers not being able to call. The response was a definitive “yes” followed by a resigned “and we’re having to solve so many problems we never used to.”

The back door worked. I got an agent on the phone. I was shocked. The agent took down the information, put me on hold and then came back to say my client’s neighborhood was ineligible for that equipment at present but they would text them and let them know when it was. That was obviously a contradiction to the info the store rep provided, and obviously wrong given that I knew my client’s neighborhood had indeed received a service upgrade because we live in the same neighborhood.

I asked why the store said my client was eligible and the response was simply, “I don’t know. We obviously see different information.”

It’s one thing when you have a business where one hand can’t give out the same information as the other. It’s something else when one of those hands has to essentially hand out cheat codes for customers to beat their own system.

This isn’t the first company I’ve dealt with that has shifted customer service over to AI. It’s also not the first I’ve dealt with that is doing such a poor job of it that it’s souring regular Joes and Janes who only have this peripheral relationship with AI on the entire concept. It doesn’t take intelligence to see that leaving both customers and employees in the lurch isn’t smart.

ON IT, indeed

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above.

 

Sunday Morning Reading

Weird plants, weird politics, and weird tech

Winter’s back. Though less here in the Midwest than it looks to be on the Atlantic Coast. And it’s another Sunday. So time for some Sunday Morning Reading between shoveling sessions. This day of rest features a collection of writing on tech, politics, science, botany, and bots. There’s even a bit of satire. All written by humans. Not sure who hired them, though.

Writing satire is tough these days with the world being what it is. David Todd McCarty found a way with The Risk Of Inflation In The Age of Plutocracy. You don’t always get what you overpay for.

Speaking of overpaying, Ed Zitron takes a look at what he sees as yet another looming financial crisis. This one is The AI Data Center Financial Crisis. It is intriguing that we haven’t heard much about how AI might help fix the rigged accounting game. I mean “fix” as in actually make the numbers resemble reality. h/t to Ian Robinson for that one.

Imagine that. A scientist has discovered a way to harvest water from dry air in the desert. Natricia Duncan takes on the discovery in ‘Reimagining Matter’: Nobel Laureate Invents Machine That Harvests Water From Dry Air. A boon to humanity if it scales. Next, work on doing the same for political hot air.

Meet Strongylodon macrobotrys. Or rather, let Neil Steinberg introduce you to the botanical find and the etymological roots of this plant, which has its roots in the “intersection of botany and colonialism.” It’s also an interesting story in accountability, which seems as rare as that plant these days.

Mike Elgan asks Is AI Killing Technology? The headline might challenge Betteridge’s Law of Headlines, depending on what vibe you have about AI.

Continuing on the Artificial Intelligence beat for a beat, Kyle MacNeill takes a look at The Rise of RentAHuman, The Marketplace Where Bots Put People To Work. I’ve often said the place to start with replacing humans in the workforce is at the top.

Political winds might seem like they’re shifting faster than anyone can predict these days. One thing’s for certain: neither U.S. party owns the mantle of most incapable. Mark Leibovich thinks The Democrats Aren’t Built For This. I happen to agree. But then, is anybody? Because who knows what “this” is? It certainly isn’t politics. Bean bag, hardball, or otherwise.

Apple seems to want to change things up with its iPhone hardware lineup over the next few years. Of course that means changes to software as well. Matt Birchler thinks it’s inevitable that Apple Will Kill iPadOS. I think that’s correct, at least as far as how we think of that OS today.

Whether it’s the Olympics or any other form of competition, once you reach the top, the air is always rare. But it eventually becomes stale. David Pierce takes a look at what it means to be number one on the Apple App Store in The Biggest App In The Whole Wide World. 

The Chicago Bears have turned football into a hot political potato with news that they might be moving to Hammond, Indiana. Is it a negotiating tactic or the real deal? Nobody really knows. The Editorial Board of the Chicago Tribune, like everyone else, is confused, saying The Chicago Bears of Hammond, Indiana, Is Bad News For Illinois. But What About Chicago? Oh. In case you didn’t know, we’ve got an election for governor happening in Illinois. Fumbling will occur.

(Image from ppl on Shutterstock)

If you’re interested in just what the heck Sunday Morning Reading is all about, you can read more about its origins here. If you’d like more, click on the Sunday Morning Reading link in the category column to check out what’s been shared on Sundays past. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome.

 

Blowing AI Smoke or Feeding The Fire

The pace is becoming impossible to track

This Artificial Intelligence moment we’re living through might seem like smoke and mirrors on some level, but it appears to be a trend that will stick. Even so, it sparks memories of a few recent crazes we’ve all lived through, some decidedly non-tech and some tech-related.


When vaping became a thing, it seemed every other person on the street was trailing a vapor cloud, and quite a few were pushing past the limits that had previously banned indoor smoking. When marijuana was legalized where I live, it felt like we were all getting our buzz on whether we were lighting up or not. Driving down a street in Chicago, or even stuck in traffic on the expressways, you couldn’t escape the tell-tale odor of “skunk” or whatever bud folks could get their hands on.

The proliferation of gummies took care of most of the second-hand stench, and dispensaries sprouted like wildflowers, leading one to wonder how long that trend can last before an inevitable consolidation occurs. But after all of the smoke, the clouds of vapor eventually became as rare in public as the cigarette smoke they replaced.

I’ve seen a number of other trends in my life from pet rocks to tech gadgets. Remember netbooks? The rare ones stick. Most fade away, occasionally leaving enough residue to resurface again when nostalgia kicks in. Of course nostalgia on some meta level is a trend in and of itself.

But this AI trend we’re living through is taking on a life that, depending on which Artificial Intelligence pioneer you talk to, will make all our lives better or perhaps end them all.

If you ask me, on one level this AI trend feels no different than the smart home trend. With enough tinkering you can install smart home appliances, lighting fixtures, cameras, thermostats, and more, but the not-so-dirty little home wizard secret is that no one has been able to figure out any sort of standard, much less a way to keep things reliably working once the next set of software or firmware updates arrives. So the cruft accumulates. Tinkerers have a blast. Regular Janes and Joes just go back to flipping light switches.

And we seem to be at the tinkering phase with AI. Which, when you think about it, makes no real sense. Because if you have to dig into the innards of a terminal app to make your computer run your computer, where’s the tinkering fun once it’s done and your computer(s), running your computer(s), can run your life and do all the tinkering for you?

A couple of pieces caught my eye recently that, to my mind at least, point out some of the conflicted thinking. When you have a headline that reads The A.I. Disruption Is Here, and It’s Not Terrible, I’m not sure it bodes well. Then there’s We’re Not Just Receiving AI’s Hallucinations, We’re Hallucinating With It. Brings back whiffs of those early days of legalized pot.

But then I followed Steve Troughton-Smith’s thread on Mastodon where he used AI agents to port an iOS app to Android. There’s certainly utility there.

All kinds of issues, from the ethical to the environmental, remain and need to be sussed out, but I’m thinking this trend is accelerating faster than might be humanly possible to keep track of. Perhaps a series of AI agents could do that work. It’s a funny thought.

I certainly doubt anyone would be satisfied with that. But this rising trend has accelerated in an era where facts matter less than who has the louder narrative of the moment. I think it is telling, though, that Peter Steinberger, the developer who came up with the AI thing of the moment, OpenClaw, took the money and sought refuge under the OpenAI umbrella. I guess that’s one way to avoid any liability if his lobster bytes do some serious damage down the road.

Frankly, I’m disappointed that this has all morphed so quickly from a tinkerer’s technology trend into one that now seems to control too much of the world’s current and future economy, not to mention all of the other areas of life, business and government that everyone seems in such a rush to insert it into.

AI is certainly not vaporware. It may be on a fast-rising trend, but it appears to be one that will stick in some form or fashion. All trends are eventually defined by lines, and they don’t spike up forever. Until, that is, some AI agent computes a way to avoid a dip in the trend lines that no human has yet figured out.

(Photo from Rubén Bagüés on Unsplash)
