The Promise No Tech CEO or Politician Will Ever Make

A promise not made is easier to avoid than a promise made

There’s an issue out there that could change the way people think about a nuisance we all increasingly live with. That issue is spam. Emails, texts, phone calls, you name it. We’re swarmed by it like mosquitoes at dusk. And every effort you hear a tech company make to render unwanted calls and messages less of a problem is essentially a sop, soon to be defeated. The bad guys are better at this game, and quite frankly, the good guys don’t really care.

Cans of Spam displayed in a grid. Photo by Hannes Johnson on Unsplash.

I’ve often said that any politician running for national office promising to end spam in all forms as we know it would instantly find a constituency. I still believe that.

Politicians won’t do it because, hey, they are part of the spamming problem. Note that they’ve exempted themselves from the soft-shelled regulations they’ve legislated in the past.

These days, tech CEOs also have an opening they’ll never take advantage of. Not that they don’t care the way politicians don’t, but spam is good for their business. Take the AI push and the reactions to it. The folks pushing Artificial Intelligence are worried about a backlash from consumers, corporations, and maybe a government or two spoiling their game. And that backlash appears to be growing.

Who knew that if the sales pitch was that AI would take your job, some would be unhappy?

Who knew that if your CEO discovered they weren’t racking up bottom-line savings by dismissing the workforce, they’d be a bit peeved?

Who knew that in law firms undergoing AI-induced downsizing, feeding legal advice or sensitive information into an AI chatbot could remove attorney-client privilege?

Who knew that folks watching in plain sight as local politicians took cash to push through new data center construction that would increase their utility bills would shockingly rise up in anger?

Who knew that employees of AI companies would be so concerned about how governments might use AI for surveillance and war fighting that they would petition their CEOs to stop government contracts?

Who knew that governments that, at one point, were fat and happy to let AI run its race, given all the cash lobbyists were stuffing in their pockets, would discover that perhaps these robots could indeed bring chaos to things like financial systems and just about anything else?

Who knew that in order to keep AI chatbots from hallucinating, the user has to tell the AI chatbot not to hallucinate? It’s like telling your kid or a politician not to lie and expecting that to happen.

Here’s a small hint. Everybody knew. Everybody knows. It sounds like, for the most part, the chumps are catching on.

While there are spheres where AI might actually benefit society, AI might not get that chance unencumbered. So far, on a consumer level, its time-saving and life-altering benefits seem to have boiled down to sorting through emails and calendars, creating nonconsensual porn, making music and podcasts nobody wants, dishing out bad therapy advice, and creating conversational partners for those who can’t converse with others in real life.

Essentially the same promises computer technology has always made. Only this time around the wheel it’s becoming exponentially easier to collect data from anyone using the computers. And that’s the end game.

Even with this growing backlash, tech CEOs aren’t going to make a promise to use this new super intelligence that can schedule a flower delivery, or spit out your calendar, to derail the possibility of them controlling that game. It is funny though that no one seems to have created a chatbot or LLM that can solve PR problems.

I don’t pretend to understand all of the technological ins and outs of chatbots, LLMs, MCPs, and other terms that seem to change each time a new version comes out or something goes wrong. I do suspect that the technology they are promising could fix the spam problem if that was the desire. In the same way, politicians could do so with regulation.

There’s a part of me that thinks these are actually political promises with technological problems that could actually be solved, or at least ameliorated. But promises not made are easier to deal with than keeping promises made.

There’s money to be made, and plenty of suckers willing to pony up. So why upset the game by pandering to sentiment?

(Image from Hannes Johnson on Unsplash)

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above. This site does not use affiliate links.

 

Sunday Morning Reading

Sometimes a pear is just a pear.

Another Sunday dawns, so it must be time for Sunday Morning Reading. An interesting collection of pieces to share this week. On one hand it seems like any other week. On the other, this week’s edition offers a few nuggets worth chewing on. Don’t overthink it. Enjoy.

Three green pears on a tabletop in various stages of ripening. Photo by Tijana Drndarski on Unsplash.

Leading off, I’m highlighting an excellent series from The Baffler called The Profession That Does Not Exist. The Baffler bills itself as “America’s leading voice of incisive and unconventional left-wing criticism,” for what that’s worth. I find it an excellent source of good writing. Each of the pieces in the series, which carries the subhead “writing won’t make you a living,” is worth your time, but I’ll highlight two.

A Pear Is Just A Pear by Timmy Straw. Making your way in a crazy world you can find that sometimes a pear is just that. A pear.

Bertrand Cooper’s ISpyForGood recounts his experience as a social media investigator, a job that offered a possible step out of poverty while entailing an examination of how others often scammed their way to the same end.

Apparently the ruling class in Silicon Valley is worried that folks don’t take too kindly to their products or their ruling. David Wallace-Wells takes a look in A.I. Populism Is Here. And No One Is Ready. I guess when you threaten to turn the world upside down, folks do get a bit antsy.

Open your arms and wave at just about anything happening around and to us and you can’t miss the obvious. Tom Wellborn takes it all on in The Frequency At Which Accountability Cannot Reach. Sometimes a pear is just a pear.

JA Westenberg says Outrage Is Letting Someone Else Set The Frame. Westenberg also offers up The War Between Fast And Legitimate Is Here. I suggest getting out of these messes we’re in calls for new frames or new acceptance of coloring outside the lines. Oh, wait. All the lines have been blurred.

James O’Sullivan thinks We’ll Soon Find Out What Is Truly Special About Human Writing. I suggest we’ll “rediscover” rather than finding out, but his point is spot on.

Meanwhile, Will Gottsegen says Sam Altman Wants To Know Whether You’re Human. It appears Altman and his ilk are looking at the problem through the wrong end of a telescope at a tiny mirror reflecting back.

On another front, Marianne Dhenin takes a look at The Small Wisconsin City That Defeated A Giant Data Center. I don’t think the robots will ever be able to muster this kind of civil action.

You, like I, may be overly tired of hearing anything having to do with the Epstein Files. Even so, I encourage you to take a look at this excellent piece from Gabrielle Glancy. I Grew Up With Epstein In Brooklyn. Our Neighborhood Held Dark Secrets not only tells a tale that should frighten, but one that I guess more might share than most ever want to acknowledge.

Happy Mother’s Day to all our mothers out there and all to come. Sometimes a pear is just a pear.

(Image from Tijana Drndarski on Unsplash)

If you’re interested in just what the heck Sunday Morning Reading is all about, you can read more about the origins of Sunday Morning Reading here. If you’d like more, click on the Sunday Morning Reading link in the category column to check out what’s been shared on Sundays past. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome. This site does not use affiliate links.

 

Streaming Services and Sports

Streaming fantasies

I have a fantasy. It’s a sports fantasy. Actually it’s a sports viewing fantasy. Perhaps it’s an entertainment streaming fantasy. Regardless, it’ll never be fulfilled.

Just about every streaming service has jumped onto the live sports streaming bandwagon. That’s understandable. Sports attracts eyeballs. Eyeballs equal money. Money makes the balls bounce.

Streaming services that I turn to insist on pushing their sports investments onto the top of their poorly designed homepages, forcing the user to scroll if they aren’t interested. Of course, streaming services’ homepages are notoriously poor user experiences to begin with.

Like I said, I get all the reasons behind this. I get that streaming executives have overpaid for the right to stream whatever they’re streaming and are trying to capitalize on the investment, on the way to raising prices to cover that cost, and perhaps to finding a few new viewers who might not already be fans. It feels very much like my grandkids screaming “watch this, watch this!”

To be fair, things have gotten better. Streaming services that feature live sports have at least reduced some top-line overexposure along the way, or provided tabs for different categories that segment sports from other viewing genres. But they could go further.

So, here’s my fantasy.

Give users an option to not see sports programming so prominently displayed on the already atrociously and algorithmically designed homepages. A simple switch that says “give me more of this” or “give me less.” Trust me, as someone who likes to view sports, I’ll find a game or a match that I’m interested in if I want it. And I’m sure there are plenty of users who will want to see sports programming prominently featured. So let viewers choose. Those who run these networks should be interested in that choice.

Streaming services could also extend a “give me more or less” feature to other programming. How many times do you need to see the same title displayed in different categories, or after you’ve watched it, or have to scroll past a genre you have no interest in?

Whether it’s sports or any other entertainment genre it seems to me it would be better to gauge interest ahead of time, instead of waiting for viewership numbers after the fact. Who knows, it might be a good way to provide metrics that might actually be meaningful when it comes to thinking about where these services are going to spend money in the future.

Like I said, it’s a fantasy.

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above. This site does not use affiliate links.

 

Sunday Morning Reading

It’s all a loop

Back from spending time with the grandkids and back for some Sunday Morning Reading. There’s an interesting context to the many issues we face that evolves while watching the little ones grow and learn. Things are happening that will affect their lives in the years ahead. Yet there’s a blissful innocence cocooning them from it all. At the moment.

In my reading, and in my sharing of that reading, I find I’m doing so mostly for the thousands of tomorrows they have in their future, much more so than for anything that will happen in this week’s tomorrows that might affect me in the moment. Read on.

Neil Steinberg’s Meet My Metaphors #5: ConAgra is about so much more than the agricultural giant moving to Chicago years ago. If you like metaphors, it’s a must read. If you’re approaching the last leg of the journey, it’s a must read. If you’re concerned about what you may leave behind, well, it’s a must read.

JA Westenberg posits that it’s all a loop. Joke’s on us, I guess. Check out The Loop: Everything Has Happened Before, And Everything Will Happen Again. 

Ky Decker wonders, Do I Belong In Tech Anymore? I find if you’re asking that question about anything, you already know the answer.

Wesley Hilliard thinks we should Stop With The Tech Celebrity Worship. I concur. AND I’m for knocking down all the pedestals we erect for celebrities to ascend in any and all fields of human endeavor.

Timothy Noah takes a look at How The Tech World Turned Evil. Pop the bubbles. Tear down the pedestals. Endless loops.

Meanwhile, Makena Kelly examines how Palantir Employees Are Talking About The Company’s Descent Into Fascism. 

Follow that up with Jasmine Sun’s piece, Silicon Valley Is Bracing For A Permanent Underclass. 

The previous four links speak to a much darker future in one way or the other. Read them. Then go back and re-read the first two links by Steinberg and Westenberg. Looping context.

Closing out this week, here’s a couple of links that feel a bit more uplifting. First up, check out Mat Duggan’s Boy Was I Wrong About the Fediverse. 

Then follow that up with David Todd McCarty’s Becoming A Local. Sometimes the horizon is much closer than you think.

If you’re interested in just what the heck Sunday Morning Reading is all about, you can read more about the origins of Sunday Morning Reading here. If you’d like more, click on the Sunday Morning Reading link in the category column to check out what’s been shared on Sundays past. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome. This site does not use affiliate links.

 

Nilay Patel on Software Brain

“I’m sorry, Dave. I’m afraid I can’t do that.”

Every now and then someone crystallizes a lot of the thoughts that spin around discussions, debates, and dialogues about a topic. When those topics are of great import, when the crystallization shows up, it is not only wise but essential to pay attention. Call it a benchmark. Call it a new starting point for the conversation going forward. Nilay Patel has delivered just such a benchmark with his monologue of sorts on his Decoder podcast. If you’re not up for a listen, you can give it a read on The Verge.

Artwork of The Decoder Podcast, featuring Nilay Patel

For the clear thinking presented, there is a confusing array of headlines to choose from depending on where you look, including The People Do Not Yearn For Automation and Why People Hate AI, but the one I think should stick shows up in my browser tab: Beware Software Brain.

Patel takes a well-considered tour through the arguments and discussions that are scattered about and pulls them together nicely. If you ask for a core theme, I’d say he argues that there are two schools of thought: one rushing to turn AI into what controls our lives, the other not buying the sales pitch.

To me it’s always been a tough sell to foist this innovation on people if one of your selling points is that it will make their jobs unnecessary, let alone create environmentally hazardous data centers to run the machines that are going to eventually unemploy them. I know a few folks who trained themselves up on AI to do what they do, only to be dismissed in favor of the AI once that training was complete. I don’t think it’s going to be much longer before that predicament touches someone everyone knows.

Getting inside the thinking of the folks pushing AI, Patel defines “Software Brain” as follows:

So what is software brain? The simplest definition I’ve come up with is that it’s when you see the whole world as a series of databases that can be controlled with the structured language of software code. Like I said, this is a powerful way of seeing things. So much of our lives run through databases, and a bunch of important companies have been built around maintaining those databases and providing access to them.

He later goes on:

Anyone who’s actually ever run a database knows this. At some point, the database stops matching reality. At that point, we usually end up tweaking the database, not the world. But the AI industry has fully lost sight of this, because AI thrives on data. It’s just software, after all. And so the ask is for more and more of us to conform our lives to the database, not the other way around.

You need to read or listen to the whole piece.

While I think “Software Brain” well defines the mindset of those celebrating and working towards an AI future, the crux of the matter for me, on perhaps a larger scale, is that for some reason, as ambiguous and arbitrary as we humans can be, we seem to shy away from our own ambiguity in favor of looking for a binary solution. On or off. Right or wrong. Correct or incorrect. We get angry with the shades and shadows of grey that muddy our yearning for black and white.

Perhaps a binary approach to everything seems like it would make life easier. It certainly helps avoid the danger zones of responsibility.

These are certainly early days of whatever Artificial Intelligence may or may not become. Even so, it appears to me it’s just going to be yet another tool humans develop, market, and use to avoid facing the tough choices life tosses at us, or we toss at each other. I’m glad to see there is increasing skepticism.

I don’t build or code things with AI, so I can’t speak to that part of what seems so exciting to so many. That said, the one thing I keep coming back to in my own, very rudimentary experiments with AI is this. At the moment it’s as error-prone, and often as ambiguous and obsequious in correcting itself, as any human. It seems to be a very human response etched into the code by its creators, knowing things don’t add up. Much like, apparently, our DNA. The machines and the math behind them just don’t care.

I don’t think the humans running this race do either.

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above. This site does not use affiliate links.

 

Sunday Morning Reading

“Scars speak more loudly than the sword that caused them.” – Paulo Coelho

It figures. You plan a weekend of yard work and Mother Nature reminds you she controls more than you do. In these parts that makes this a perfect chilly Sunday for a little Sunday Morning Reading. I’m not sure how, but a theme emerges in the collection of links I’m sharing this weekend, somehow suggesting that regardless of our feelings, the forces that seem to be conspiring against us just keep rolling. At some point, just like with the shifts in the weather, you just want some unshifting force to make it all stop.

A dark bronze sculpture of a young boy sitting on a stone bench in a park, reading an open book with a small bronze bird perched on its upper edge and a stack of bronze books tucked behind his arm.

Here in Chicago we’re seeing a number of theatre spaces closing. (We’re also seeing a few open.) On the national stage, we’re watching with dismay, anger, and sadness as The Kennedy Center is being shut down by cultural barbarians. Josef Palermo had an inside seat to that dismantling and tells the story in My Front-Row Seat To The Kennedy Center Implosion.

And while Madison Square Garden is more a venue for pure entertainment than the arts, the story about how its owner is using surveillance on patrons and employees who upset the powers that be is a harbinger of things to come in all arenas of our lives. Check out The Shocking Secrets of Madison Square Garden’s Surveillance Machine by Noah Shachtman and Robert Silverman.

Having experimented a bit with Artificial Intelligence in seeking information about a statue this weekend, I found my ongoing suspicions confirmed that this “way of the future” isn’t ready for today, much less tomorrow. The technology might not be ready for prime time, but the hype has never been. Kyle Chayka says A.I. Has A Message Problem Of Its Own Making. I like this quote in the subhead: “If you tell people that your product will upend their way of life, take their jobs, and possibly threaten humanity, they might believe you.” True enough. And if those things are as incompetent as humans, what’s the damn point?

It’s all math. That’s one way to sum up any computing activity. Unless it comes to emotion. And yet, some think feelings are somewhere in the numbers. Mike Elgan writes, No, Math Doesn’t Have Feelings in response to those who must not have any feelings of their own, but are trying to add that into the AI equation.

Gaby Del Valle says The Only Way To Fight Deepfakes Is By Making Deepfakes. Sounds like an arms race to me. We should be up in arms about it.

Speaking of arms races, Gideon Lewis-Kraus looks at AI in the war that isn’t a war, that’s over every week, but begins again every weekend once the markets close in How Project Maven Put AI Into The Kill Chain.

Apologies for so much AI linkage this week, but it’s been on my mind lately, especially since the news of Mythos broke. It’s the latest demon to fly out of Pandora’s box, and I’m afraid it’s not the last. Margie Murphy, Jake Bleiberg, and Patrick Howell O’Neill examine How Anthropic Learned Mythos Was Too Dangerous For The Wild.

CNN has a report by Saskya Vandoorne, Kara Fox, Niamh Kennedy, Eleanor Stubbs, and Marco Chacon called Exposing A Global Rape Academy. It’s a hard, but I think necessary read considering the topic is just how horrible humans can be to one another. Maybe we should hope the robots develop feelings. Too many humans seem to have stopped developing theirs.

Gail Beckerman says If You Want A Better World, Act Like You Live In It. I concur.

And to close out this week, Scars is a short story by Sigrid Nunez. Some scars can’t be seen. The ones we’re watching form daily can be.

(Photo by the author.)

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above.

 

Apple and Google Still Generating Profits from Grok’s Sexualized Image Generation

It’s my rule. I’ll break it if I want.

Rules are made to be broken is the cliché. That’s a theme that’s running louder and wilder through much of life these days. Build complicated and successful things. Create rules to protect what you’ve built. Mass enough power and then bend or ignore the rules when they become inconvenient.


That theme surfaces frequently enough that it’s almost a meme. In politics it happens every day, enough to make a mocking myth of things like the rule of law and the constitution. We see it in religion. We see it in business, too frequently in the business of tech. When you’re big enough that you have to, and can, create rules to protect what you’ve built against others, and yourself, you trade the inconvenience of principle for the convenience of ignoring the rules.

Back in January, (damn that seems so long ago), Elon Musk’s Grok released an AI image editing feature that allowed users to create nonconsensual sexualized deepfakes. It was ugly and disgusting.

As with all new things tech, it caught on like wildfire, and then X took fire from many quarters, including some governments. (Not ours — caterwauling congress critters no longer count.) Apple and Google also took hits for continuing to allow the app on their respective app stores in violation of existing rules. There were calls for both Apple and Google to follow those rules and take the app down, something both companies have done for other rule-violating apps, with and without publicity.

That didn’t happen.

Yesterday, a report from NBC revealed that Apple, in a letter to U.S. Senators, claimed that it worked behind the scenes of the public uproar to demand that the developers “create a plan to improve content moderation.” According to The Verge, 

Throughout this covert back-and-forth, Grok and X appear to have remained live on the App Store, a drawn-out process that may help explain the confusing, haphazard rollout of moderation changes announced in real time. This included limiting Grok on X to paying subscribers and attempting to stop Grok from undressing women. Our investigations revealed that neither were particularly effective beyond making the tool a bit harder to access. Later interventions, like X letting users block Grok from editing their photos, are also easily circumvented.

Despite Apple’s approval and xAI’s claims it has tightened safeguards, Grok still appears to be able to generate sexualized deepfakes with relative ease.

So, essentially nothing of any real effect happened. Scratch that. Something did. X and Grok put the feature behind a paying subscription. One that Apple also reaped profits from and still does. As does Google.

The one rule this era has taught us is that if you’re big and rich enough, and can weather the storm of public scorn, you can essentially ignore the rules. Even those you’ve written yourself. With impunity.

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above.

 

U.S. Treasury Wants Access To Mythos

What could possibly go wrong?

The kicker above says it all. According to Bloomberg, the CIO of the U.S. Treasury, Sam Corcos, is hoping to get his hands on Anthropic’s apparently super dangerous AI software, Mythos. The idea is to use Mythos to check and prepare for vulnerabilities. In any normal world that would make sense. We don’t live in that normal world.


Set aside that Anthropic and the U.S. Government are feuding over the government designating Anthropic a security threat and supply chain threat. The fact that Mythos can seek out and find vulnerabilities in software that humans apparently can’t, and has done so already for most operating systems and browsers currently in existence, is concerning in and of itself. Add to that what I’m reasonably sure is exploitable software the government is running, and this smells like a recipe for potential chaos.

Anthropic did not want to release Mythos to the public, given its potential for harm in the wrong hands, and formed Project Glasswing, inviting a number of tech companies and JPMorgan Chase into the fold so they could check out their systems. Other banks have since also begun testing.

I don’t want to sound all doomy and gloomy, but however this story unfolds, it does appear there is enough there there to be skeptical and concerned. Even before the ongoing daily chaos and incompetence displayed by the second Trump administration, the U.S. government had a much-deserved reputation for being slow on the uptake in the digital age. I know several folks working in various government agencies, any of whom could tell you horror stories.

The fear, obviously, is what happens if Mythos gets into the wrong hands. I don’t know about you, but I think we certainly have enough of those running Washington DC currently. Bottom line, this bears watching on any number of fronts.

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above.

Sunday Morning Reading

Optimism comes every Spring, but Winter always nips at the edges

Temperatures are warming. Every day brings more daylight, more blooms in the gardens and trees. Yet on the edges of two of my interests, politics and tech, things continue to darken a bit. The common denominator between the two? Humans. But then again, humans are the ones who read this Sunday Morning Reading column. As well as the bots that scrape it, of course.


Some of the big news in tech this week was about a new AI product from Anthropic called Mythos, so fraught with potential peril that Anthropic gathered the major tech heads to form a consortium to keep a lid on it. Monica Verma has a good rundown in her piece Did Claude Mythos Break The Cybersecurity Industry.

M.G. Siegler’s The Casual Catastrophe of AI takes a look at maneuvering around Mythos as well. Call me crazy, but I don’t think there’s anything casual about this development.

The reason I’m a pessimist on this is that I agree with a comment from JA Westenberg: “Being wrong about doom costs you nothing.” Check out Optimism Is Not A Personality Flaw. The piece walks a line. You should read it and walk it too.

Mike Elgan takes a look at Black Traffic: The Corporate Sabotage Technique You’ve Never Heard Of. Now you have.

Ng Chong examines The Echo Chamber In Your Pocket. Follow that up with this from Julie Jargon: Over 4,732 Messages, He Fell In Love With An AI Chatbot. Now He’s Dead. (That’s an Apple News link. This is an archived link.)

David Todd McCarty thinks one path to reclaiming power over information might be in The Return Of The Local Newspaper. You don’t know what you had until it’s gone.

This Is What Will Ruin Public Opinion Polling For Good. The “this,” according to Lief Weatherby, is something called silicon sampling. Yes, you guessed it. AI.

Coming back around to my comment at the top about not having faith in humans, OpenAI’s Sam Altman got his turn in the barrel (again) this week. Ronan Farrow and Andrew Marantz spent quite a bit of time putting this piece together. Check out Sam Altman May Control Our Future — Can He Be Trusted? FWIW, I don’t need much more time than it takes to put this column together every week to answer their question in the negative. And not just about Altman.

Meanwhile, Altman responded on his blog after someone tossed a Molotov cocktail at his house. He says, “I have underestimated the power of words and narratives.” For someone who has scraped all the words he can off of the Internet and tried to turn them into something smarter than humans, you’d think his machines could have at least figured out that words have power.

Natasha MH sums up a lot of my lack of faith in humans in her piece, Stop Blaming The Chatbot. As she puts it, “AI didn’t make you stupid. You were already getting there.”

Sorry to be so negative this week, but that’s where I’m living. To change the tone, Neil Steinberg turns the Latin term memento mori (remember to die) around to memento vivere, or remember to live. A nice little bit of humanity to close out this week with Little Life.

(Photo from the author)

If you’re interested in just what the heck Sunday Morning Reading is all about, you can read more about the origins of Sunday Morning Reading here. If you’d like more, click on the Sunday Morning Reading link in the category column to check out what’s been shared on Sundays past. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome.

 

Taking Flight On A Glasswing

With the fragility of egos as a pilot

Every time I hear the warnings about the current or next big thing in Artificial Intelligence, I’m reminded of the Surgeon General’s warnings that are printed on packs of cigarettes. I’m also reminded of every new fad I’ve seen in my lifetime, that might have inched over into a trend, but eventually ended up waiting for its turn on the nostalgia wheel of time.


As the world was holding its breath from the civilization destroying threats that sprung forth from the mind of the U.S. President, and then exhaling as they turned into the latest episode of “Bluff, Bluster, and Bullshit,” we were learning about a new AI leap and threat from Anthropic, potentially as dire, called Claude Mythos Preview. To get ahead of any damage this coming attraction might visit upon us, Anthropic created Project Glasswing. Given that the raving lunatic in the White House came to power a second time with a civilization destroying manual in hand called Project 2025, I’m more than a bit leery of anything with a title that leads with the word “Project.”

From what I’ve read, Mythos is the latest innovation in Anthropic’s flavor of Artificial Intelligence. It is so powerful that it has sought out and found vulnerabilities in so much of the software the world runs on that Anthropic is only releasing it to a handful of companies (Apple, Microsoft, Google, Broadcom, JPMorganChase, the Linux Foundation, NVIDIA, and more). That’s Project Glasswing. Tech overlords uniting to protect us from their sloppy software. (The lawyers will have a field day.)

Anthropic, having been declared by the U.S. as an unacceptable national security threat and supply chain risk, nevertheless is also working with the same U.S. Government looking ahead to the threats. Somehow security and existential threats always seem to become negotiating partners with their foes when money is at stake. Also occasionally when global annihilation is knocking on the door.

The way I interpret the idea behind Project Glasswing is that these companies, and presumably governments, might use Mythos to seek out all of the vulnerabilities, and perhaps obliterate them (I use that term in the Trumpian and Hegsethian sense) before they can filter down into things like power grids, banking systems, and consumer use. It can supposedly do this at a scale humans can’t. Note that Mythos discovered problems, both big and small, in every operating system and in the constantly updating browsers we use on our computers.

During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect.

I think of it this way. Announcing the existence of Mythos is akin to living the moments of terror those responsible for our safety have in House of Dynamite, once they realize the jig is up, missiles are inbound, and the interceptors have failed. I’d call it an “Oh, shit” moment.

If you ask me, Mythos is exposing more than a few myths as well as vulnerabilities. The sound you hear is PR slide decks about security enhancements in the latest releases of current software being furiously redone.

As M.G. Siegler puts it,

Historically, many vulnerabilities have been fixed only after someone exploited them in some way. Again, that’s because the incentives are in favor of the attacker versus the defender. If and when Mythos-caliber tools are put in the hands of hackers… yeah.

That’s obviously exactly why Anthropic isn’t releasing Mythos to the public and also why they’ve set up Glasswing. While the company may be first to such capabilities, they won’t be the last. They probably don’t even have long to try to get ahead of the situation. While I generally dislike the nuclear weapons analogy for AI, I must admit, this all does feel a bit Manhattan Project-y. The good guys are racing against the clock to implement a new technology before the bad guys catch up. But they will. They always do.

Yeah, that sounds problematic.

Paul Krugman took a break from agonizing and writing about the situation in the Middle East and weighed in with this,

The good news is that Anthropic discovered in the process of developing Claude Mythos that the A.I. could not only write software code more easily and with greater complexity than any model currently available, but as a byproduct of that capability, it could also find vulnerabilities in virtually all of the world’s most popular software systems more easily than before.

The bad news is that if this tool falls into the hands of bad actors, they could hack pretty much every major software system in the world, including all those made by the companies in the consortium.

So, there’s plenty of doom floating around, along with the now clichéd approach to all things AI, that there’s good tech behind all of the bad things that the tech can do. Note that the profits from tobacco helped found the U.S. and twisted science and politics into knots trying not to end up on the ash heap.

I’ve largely stayed away from playing with any of these AI tools and toys, but I follow the news of the advances on all fronts, and those who do play around with it. Like it or not, those who run the world have decided this is our future.

I’ll be honest. Hallucinations aside, I don’t know enough to decide whether or not to trust the software. I have my doubts, and I do have fears about the tech. Project Glasswing might be a noble effort. Yet, with a clear mind, I do know enough not to trust any of the humans running the show. Frankly, it feels like they don’t know enough to trust the software either, much less to protect their and our systems from being destroyed by some kid in a basement.

As Natasha MH puts it, writing not about Mythos specifically but about Artificial Intelligence in general, “AI didn’t make you stupid. You were already getting there.”

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above.