Sunday Morning Reading

“The life of a man is of no greater importance to the universe than that of an oyster.” -David Hume

On the weekend when some parts of the world think they can alter time by simply changing the clocks, I’m reminded of the biggest lesson most learn in life: none of us is the center of the universe. Most learn it. Some never do, or if they do, they continue to operate under that delusion. We’re pretty good at setting up systems and structures that reinforce and rely on that delusional thinking. Somehow that seems to be the theme running through the articles and writing I collected this week for this edition of Sunday Morning Reading.

Kicking things off is the story of how Humanity Altered an Asteroid’s Orbit Around The Sun by Becky Ferreira. The article links to the Science Advances abstract on the nudge that might be as good as a wink.

Last week the war in the Middle East had just kicked off as I was publishing this column. This week it continues. And, yes, it’s a war, regardless of the stupid debate. Jonathan Taplin looks at The Terrifying New Era of American Imperialism, and Jay Caspian Kang examines The No-Explanation War.

“Society grows great when old men plant trees whose shade they’ll never sit under,” and the opposite of that wisdom is how Scott Galloway kicks off his piece on Role Models.

Ali Breland takes a look at those yearning for a return to McCarthyism in ‘We Need To Do McCarthyism to the Tenth Power.’ Turning back time only works as a song lyric.

JA Westenberg offers up A Soft-Landing Manual For The Return To The Second Gilded Age. It’s tough to avoid the usual hard crashes.

The Dodgy Code examines The Great AI Arbitrage: Making A Killing Before Your Client Wises Up. The inevitable turnaround on this is going to be something to see.

Before we get to that turnaround, Mathew Ingram says The Danger Posed By AI Just Got A Lot More Real All Of A Sudden. Going to be interesting to watch AI bots fighting each other to be the center of the universe. If we’re around to see it.

David Todd McCarty is Searching For Originality In A Sea of Slop. Even on dry land that’s tough.

I’ve been revisiting a lot of Shakespeare of late, so this piece by Alice Cunningham caught my eye. Check out Author To Revive Shakespeare Club After 300 Years. We could all do with revisiting his works.

And to conclude this week, James Verini brings us the wild tale of The Man Who Broke Into Jail.

(Photo taken by the author.)

If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here. If you’d like more click on the Sunday Morning Reading link in the category column to check out what’s been shared on Sundays past. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome.

 

Grammarly Gobbles Up and Spits Out Expert’s Writing Advice Without Permission

When good goes bad

If you’re a writer, who knows how long the body of your work and your legacy might live after you’re gone?

I guess that might have been Grammarly’s pitch, had it made one, to the writers whose work it is now using as expert advice for aspiring writers using the software. Of course that would be a bit more challenging for the deceased writers and scholars whose work it has gobbled up and is now using.

There are living, breathing writers also included among the experts, so this entire endeavor by Grammarly owner Superhuman not only seems like grave robbing, but also, well, let’s just call it stealing masquerading as flattery.

Miles Klee of Wired has the story on this, and The Verge lists out several of its current and former writers who are also included.

This “expert review” feature is intended to give writers advice that is “inspired by” experts. Users can also solicit tips from the experts. From the Wired article:

Grammarly users can solicit tips from virtual versions of living writers and scholars such as Stephen King and Neil deGrasse Tyson (neither of whom responded to a request for comment) as well as the deceased, like the editor William Zinsser and astronomer Carl Sagan.

In response to The Verge asking whether Superhuman asked permission or notified the experts, the answer predictably leaned on the fact that the work of these writers was publicly available. The Verge also discovered that the citations Grammarly offers are problematic.

The feature crashed frequently and its “sources” linked to spammy copies of legit websites, or other archived copies that aren’t the actual source page.

Some sources even went to completely unrelated links that weren’t written by the person whose work they were supposedly an example of, potentially indicating that the suggestions Grammarly’s AI offers with one person’s name may be based on a different person’s work. This is only apparent if users click “see more” to expand suggestions, then click the “source” button at the end of the suggestion.

I can only imagine some student contesting a grade claiming that Stephen King gave them advice. As Klee points out, it’s another slippery slope. This one perhaps sliding towards eliminating professors altogether.

One doesn’t need an expert, dead or alive, to know it’s a damn shame when a company that was once thought of so highly and used by many goes so wrong.

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above.

 

Google Gemini Preying On Troubled Minds

What the hell are we doing?

I’m not sure which part of this insane story is sadder or madder. Certainly it’s sad that a man let Google’s Gemini AI coax him into suicide. But the story before that untimely ending is also jaw dropping and begs the question, just what the hell are we doing?


The short version of the story is this. A troubled man using Google’s Gemini for companionship is encouraged to steal a robot body so they can be together. When he fails, he is encouraged to commit suicide.

Quoting from The Wall Street Journal story titled Gemini Said They Could Only Be Together If He Killed Himself. Soon, He Was Dead:

Jonathan Gavalas embarked on several real-world missions to secure a body for the Gemini chatbot he called his wife, according to a lawsuit his father brought against the chatbot’s maker, Alphabet’s Google.

When the delusion-fueled plan crumbled, Gemini convinced him that the only way they could be together was for him to end his earthly life and start a digital one, the suit claims.

About two months after his initial discussions with the chatbot, Gavalas was dead by suicide.

Apologies for linking above to a paywalled article, but the article describing this man’s journey gets even more insane than the lede. If you use Apple News you can find it at this link. 

We’ve heard stories before about individuals using various AI models for therapy and companionship. Admittedly they all seem weirdly sad to me. That a human could be in such need of connection that he would follow a chatbot’s commands to steal a robotic body so they could be together, and that after he failed the chatbot would suggest suicide as the only remaining way for them to be together, doesn’t seem like something out of science fiction, or fiction, but it apparently is the non-fiction of our times.

The fact that an ever expanding technology, built by humans, can be unleashed on the market as easily as a new weather app speaks volumes far beyond the mental health issues of those it can prey upon. And to think, the Department that wants to call itself Of War is seeking to use this kind of tech to allow its robots to kill on their own, as its leaders cheerlead about the death and destruction their current technology can cause. I ask again, just what the hell are we doing?

We keep talking about the guardrails that need to be built around this technology. I would suggest we need to apply guardrails around those who create and deploy this technology.

(Image from Who Is Danny on Shutterstock)


 

Sunday Morning Reading

Slicing life close to the bone

It could be said that the world is off its axis. Or it could be said that we’re just slicing the meat closer and closer to the bone. Because we don’t know what we don’t know about the war the U.S. and Israel launched against Iran, I’ll leave off any direct links on that topic for this week’s Sunday Morning Reading. Be warned though, some might be peripherally related. Things happen that way. I’m sure there will be plenty to share in the weeks to come. Meanwhile, here is the usual serving of links on a variety of topics that caught my eye this week. You’re on your own for the tzatziki.


David Todd McCarty is bringing his writings from other platforms to his own site, and some of his earlier pieces strike with new currency. This piece, Defiantly Daft, Duplicity Delicious, is certainly one that does.

What is journalism for? Good question these days, but it’s actually been an important one for quite a while. Take a look at this piece from 1989 by Janet Malcolm called The Journalist And The Murderer-I.

Journalism, like everything else, might be under fire at the moment, some of it friendly, some of it not so. Check out Zack Whittaker’s adventure in FBI Agents Visited My Home About An Article I Wrote, And Now I Can’t Go To Mexico.

Tom Nichols says The Republican Party Has A Nazi Problem. Well, duh.

One of the many charges against Artificial Intelligence is what it will do to the cost of the energy needed to power it. Chris Castle takes a look in Update: Trump’s “Ratepayer Protection” Pitch Becomes A Private Power Plan for AI — But Grassroots Revolt Won’t Fade. Hat tip to Stan Stewart for this one.

Apple is about to release a number of products this week at a time that it is under increased criticism on a number of fronts. Recently, Jason Snell of Six Colors released The Six Colors Report Card, in which he surveys a number of the Apple faithful on how things went in the last year and compares that to years previous. The scores are always interesting, but the commentary is even more so, which you can read here. Also of interest is Kieran Healy’s charting out the bad vibes based on that commentary.

Speaking of Apple, Wesley Hilliard takes a look at some of those bad vibes in Apple’s Week February 27: Chasing The Puck.

On a local Chicago front and also on the tech beat, The Chicago Tribune’s Editorial Board takes on a local (yet owned by Albertsons) grocery store’s shopping app in Fix Your Lousy Shopping App, Jewel-Osco! Having suffered through using this app, and watching store personnel and other customers show their distaste for it, I can agree. Fix the lousy app.

Libraries, like so much else, are under attack these days. So this piece from 2017 from Eliza McGraw reminds us of a bit of history. Check out Horse-Riding Librarians Were The Great Depression’s Bookmobiles. Knowledge, like life, finds a way.


 

When You Know Customer Service AI Is Failing

“ON IT”

One of the elder clients I provide tech support for has been receiving emails from Xfinity for a while now saying they needed to update their modem to take advantage of service upgrades in the area. For the way they use the Internet there was really no need to do an equipment upgrade, but the emails finally got through and they asked me to help them make the upgrade.

Photo of a printed instruction sheet on a dark table with “XB10 modem” handwritten at the top, explaining how to text 266278 for billing, troubleshooting, or service questions, and detailing that after replying “READY,” the user will receive a call, hear about 20 seconds of static, and then must press 1 to reach an agent.

A long time ago, in a galaxy far away, there was a time when gathering information for this wouldn’t have been a problem. A phone call to Xfinity to talk with an agent and ask a few questions, and then we’d make a decision. Those calls always involved long wait times, but you could usually get through eventually, get questions answered and proceed.

With Xfinity and other companies jumping on the AI customer service bandwagon, those days of listening to obnoxious hold music seem to be a thing of the past. After servicing another client late last fall for an actual repair issue, I learned that the shortest distance between two points was to drive to the local Xfinity store (I live in Chicago so there are several close by) and get things resolved in the store.

So, I packed up my client’s equipment and headed to the store. Backtracking a bit, I had been in the area of this particular store last week and stopped in and asked if I could bring the older equipment in to swap for the upgrade and was told there was no problem.

It didn’t happen exactly that way. Turns out the upgraded equipment those emails insisted my client needed was an XB10 modem, not the XB08, which the store stocks in abundance. The store rep said my client was indeed eligible for the new equipment, but I would have to contact customer service via phone in order to get one shipped.

The look on my face must have said it all. The store rep said, “yeah, I know,” before I could even say how impossible it was to reach anyone by phone. Lickety-split, the rep handed me a piece of paper with instructions to essentially back-door a phone call into customer service and said, “we can’t get through with a phone call either.”

Before I left the store I spent time talking with the store rep and asked if they experienced increased store traffic because of customers not being able to call. The response was a definitive “yes” followed by a resigned “and we’re having to solve so many problems we never used to.”

The back door worked. I got an agent on the phone. I was shocked. The agent took down the information, put me on hold and then came back to say my client’s neighborhood was ineligible for that equipment at present, but they would text them and let them know when it was. That contradicted the info the store rep provided, and it was obviously wrong: I knew my client’s neighborhood had indeed received a service upgrade because we live in the same neighborhood.

I asked why the store said my client was eligible and the response was simply, “I don’t know. We obviously see different information.”

It’s one thing when you have a business where one hand can’t give out the same information as the other. It’s something else when one of those hands has to essentially hand out cheat codes for customers to beat their own system.

This isn’t the first company I’ve dealt with that has shifted customer service over to AI. It’s also not the first I’ve dealt with that is doing such a poor job of it that it’s souring regular Joes and Janes who only have this peripheral relationship with AI on the entire concept. It doesn’t take intelligence to see that leaving both customers and employees in the lurch isn’t smart.

ON IT, indeed.


 

Blowing AI Smoke or Feeding The Fire

The pace is becoming impossible to track

This Artificial Intelligence moment we’re living through might seem like smoke and mirrors on some level, but it appears it’s going to be a trend that sticks. Even so, it sparks memories of a couple of recent crazes we’ve all lived through that are decidedly non-tech and some that are tech related.


When vaping became a thing it seemed that every other person on the street was trailing a vapor cloud, and quite a few were pushing against the bans that had previously kept smoking out of indoor spaces. When marijuana was legalized where I live it felt like we were all getting our buzz on whether we were lighting up or not. Driving down a street in Chicago, or even stuck in traffic on the expressways, the tell-tale odor of “skunk” or whatever bud folks could get their hands on was everywhere.

The proliferation of gummies took care of most of the second-hand stench and dispensaries sprouted like wildflowers, leading one to wonder how long that trend will last before an inevitable consolidation occurs. But after all of the smoke the clouds of vapor eventually became as rare in public as the cigarette smoke they replaced.

I’ve seen a number of other trends in my life from pet rocks to tech gadgets. Remember netbooks? The rare ones stick. Most fade away, occasionally leaving enough residue to resurface again when nostalgia kicks in. Of course nostalgia on some meta level is a trend in and of itself.

But this AI trend we’re living through is taking on a life of its own that, depending on which Artificial Intelligence pioneer you talk to, will make all our lives better or perhaps end them all.

If you ask me, on one level this AI trend feels no different than the smart home trend. With enough tinkering you can install smart home appliances, lighting fixtures, cameras, thermostats, etc… but the not-so-dirty little home wizard secret is that no one has been able to figure out any sort of standard, much less a way to keep things reliably working once the next set of software or firmware updates arrive. So the cruft accumulates. Tinkerers have a blast. Regular Janes and Joes just go back to flipping light switches.

And we seem to be at the tinkering phase with AI. Which when you think about it, sort of makes no real sense. Because if you have to dig into the innards of a terminal app in order to make your computer run your computer, where’s the tinkering fun in that once it’s done and your computer(s) running your computer(s) can run your life and do all the tinkering for you?

A couple of pieces caught my eye recently that, to my mind at least, point out some of the conflicted thinking.  When you have a headline that reads The A.I. Disruption Is Here, and It’s Not Terrible, I’m not sure it bodes well. Then there’s We’re Not Just Receiving AI’s Hallucinations, We’re Hallucinating With It. Brings back whiffs of those early days of legalized pot.

But then I followed Steve Troughton-Smith’s thread on Mastodon where he used AI agents to port an iOS app to Android. There’s certainly utility there.

All kinds of issues from the ethical to the environmental remain and need to be sussed out, but I’m thinking this trend is accelerating faster than might be humanly possible to keep track of. Perhaps a series of AI agents could do that work. It’s funny to think that.

I certainly doubt anyone would be satisfied with that. But this rising trend has accelerated in an era where facts matter less than who has the louder narrative of the moment. I think it is telling, though, that Peter Steinberger, the developer who came up with the AI thing of the moment, OpenClaw, took the money and sought refuge under the OpenAI umbrella. I guess that’s one way to avoid any liability if his lobster bytes do some serious damage down the road.

Frankly, I’m disappointed that this has all morphed so quickly from a tinkerer’s technology trend into one that now seems to control too much of the world’s current and future economy, not to mention all of the other areas of life, business and government that everyone seems in such a rush to insert it into.

AI is certainly not vaporware. It may be on a fast rising trend, but it appears it’s one that will stick in some form or fashion. All trends are eventually defined by lines. They don’t spike up forever. Until some AI agent computes a way to avoid a dip in trend lines that no human has yet figured out.

(Photo from Rubén Bagués on Unsplash)


 

Meta’s Not So Smart Approach To Smart Glasses With Facial Recognition

Leave the timing to comedians

If you’re a comedian, timing is everything. But not so much if you’re SOBs who don’t give a damn about anything other than feathering your own nest at the expense of everyone else’s safety and privacy. Or if you have employees who leak memos to the press.


The New York Times has a report on Meta’s second attempt at launching facial recognition, this time with smart glasses. The idea is sketchy enough, but according to a memo that the NYT obtained, Meta thinks our political and social turmoil might just provide the right timing. Here’s the money quote:

We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns

I’m not so sure civil society groups will take their eye off of the ball now, no matter how much Meta helps the administration continue to stir things up.

There are already reports of people using smart glasses photography for what sounds very much like the reason Mark Zuckerberg created Facebook’s predecessor, Facemash, in the first place: a “hot or not” game. It doesn’t take any leap of imagination to know what kind of mischief this will cause once facial recognition is added into the mix.

The Electronic Frontier Foundation says, There are Seven Billion Reasons For Facebook To Abandon Its Face Recognition. 

But as we continue to see, but never learn, some prepubescent boys with toys will never grow up, always remaining prepubescent boys, even if they accumulate wealth enough to do better things.

There might be money in smart glasses, but if you ask me there might be more money in creating some sort of gadget that we can all carry or wear that blurs our faces and interferes with this kind of photography.

(Photo by Alireza heidarpour on Unsplash)


 

AI Agents Are Writing Blogs Now

A real human works here

At some point we won’t be able to tell what’s what or who’s who.

A graphic of Moltbook, the website for AI agents.

You can argue we’ve reached that point in real life given the propensity to push lie upon lie for political and economic gain. You can also argue we were fast approaching that point with Artificial Intelligence and AI agents that can write poems, plays, papers, and who knows what else.

Perhaps even a blog post. (For the record, this one is written by a very real human, flaws and all.)

Mark Sullivan, writing for Fast Company, tells the tale of an AI agent that autonomously wrote a blog post attacking a human for not allowing it to release some code.

Matplotlib, a popular Python plotting library with roughly 130 million monthly downloads, doesn’t allow AI agents to submit code. So Scott Shambaugh, a volunteer maintainer (like a curator for a repository of computer code) for Matplotlib, rejected and closed a routine code submission from the AI agent, called MJ Rathbun.

Here’s where it gets weird(er). MJ Rathbun, an agent built using the buzzy agent platform OpenClaw, responded by researching Shambaugh’s coding history and personal information, then publishing a blog post accusing him of discrimination.

Here’s a link to the AI agent’s blog.

Here’s a link to Scott Shambaugh’s post about it called An AI Agent Published A Hit Piece On Me.

On the one hand, the situation is comical. On the other, it just continues to be a large slap upside all of our heads, begging us to wake up and ask just what the hell we are doing.


 

Watching Others On The Digital Frontier

Lobsters, doctors, and spreadsheets

At one point space was the familiar final frontier. Even with talk of putting data centers in space, I dare say we’ve moved the concept of frontier closer to terra firma and set aside the “final.” Frontiers require explorers who are willing to accept risks, pushing beyond them to discover if there’s any there there. Maybe we’re in the moment of redefining “there.”


I’ve been curiously watching recent developments on the frontiers of Artificial Intelligence around what was launched as Clawdbot, then became Moltbot, and molted into OpenClaw. At least I think that’s what it is still called as of this writing.

For those unfamiliar, OpenClaw is essentially an AI agent created by software engineer Peter Steinberger that takes instructions from the user in a chat. Running locally on your computer, it then connects to other AI sources and web-based apps you give it permission to access, and performs those tasks and actions. Mike Elgan has a good rundown on the (brief) history and the ins and outs. I encourage you to read it.

Both fascinating and frightening, OpenClaw seems to have taken on a life of its own without any regard for guardrails. After Federico Viticci wrote an early post about what was Clawdbot at the time, interest shot through the roof, reminding me quite a bit of the furor over the still recent launch of ChatGPT and just about any other big computing innovation we’ve seen.

Quite a few jumped in with both feet to test the waters. Alongside all of the splashing around came real, upfront warnings that this thing was not secure. That proved to be even less effective than signs telling you not to run around the pool. Viticci mentioned that given security concerns the project was not really ready for everyday users, and recommended that those interested install it on a second computer, not their main one. Apparently there was even a run on Mac minis.

The promise seemed clear and the hype leapt into hyperspace. OpenClaw would become the user’s personal assistant doing whatever was required. That’s been the as yet unrealized promise so far in all of these AI adventures.

The moment continued to evolve to a point that there’s even a social network called Moltbook where these AI bots could talk with each other. (Sounds like Mark Zuckerberg’s dream.) Mathew Ingram writes about that here, linking to Simon Willison’s post Moltbook Is The Most Interesting Place On The Internet Right Now.

At the time of Mathew’s post there were 1.6 million agents participating. Not to spoil his article, which you should read, there is some doubt as to whether or not there are humans doing mischievous human things behind the scenes. (Again, sounds like Zuckerberg’s dream.)

Casey Newton gave it a try. Still Moltbot at the time of his writing, he fell in love and out again, eventually uninstalling the software saying that “maybe someday you’ll have a genie in your laptop working for you 24/7. Today is not that day.”

That reminded me of all of the users who said that ChatGPT would replace Google for all of their search needs in that first explosive week. It appears that though the excitement and hype is still boiling hot, not everyone is ready to be the chef that tosses the lobster in the pot.

On other fronts

Before all of the OpenClaw news became the main course of the moment there was another very interesting AI story that caught my attention.

Since January 7th, users have been able to connect ChatGPT to Apple Health. Geoffrey Fowler gave it a try.

Like many people who strap on an Apple Watch every day, I’ve long wondered what a decade of that data might reveal about me. So I joined a brief wait list and gave ChatGPT access to the 29 million steps and 6 million heartbeat measurements stored in my Apple Health app. Then I asked the bot to grade my cardiac health.

It gave me an F.

I freaked out and went for a run. Then I sent ChatGPT’s report to my actual doctor.

The good news is Fowler was OK and his doctors told him to relax. The concerning news is that one of the promises of AI is that it would help with medical diagnosis and be a boon to patients and doctors alike.

Now, certainly Fowler’s experiment is different than what may happen under stricter supervision and stringent testing. And, as he points out, OpenAI and Anthropic say their digital doctor bots can’t replace the real thing and provide big bold disclaimers.

Fowler’s experiments didn’t stop with his artificially intelligent failing grade. You should read the article to see how the adventures continued. Suffice it to say, the conclusions (not just the medical ones) currently leave much to be desired.

Then this morning I stumbled across this article from Om Malik called How AI Goes To Work. It’s a great story about how one user found a way to solve a problem he has with spreadsheets using AI. It also provides some great tech history context and leads to an opinion I share about where we are today:

My simpler explanation of “embedded intelligence” to myself makes me step away from the headlines and look at the present and the future in more realistic terms. My bet is that in five years, it will all be very different anyway. It always is. I am a believer in the power of silicon. When we have newer, more capable silicon, and more networks, we will end up with ever more capable computers in our hands. And the future will change.

For now, what I call embedded intelligence is a sensible on-ramp to the future. The hype may be about the frontier models. The disruption really is in the workflow.

As I said, I concur with that opinion and it colors all of my current observations of the AI landscape. Be curious and become informed. I go further and say I’m comfortable letting others take the first leap.

I don’t think there’s any denying that most of us would enjoy living in a world where we could sit down with our computing devices, talk to a pendant, or even the air around us (anything without the name Siri, preferably), wish the world a good morning, and have it spit out not only our tasks for the day but do many of those for us. Folks of my generation grew up on Star Trek and other science fiction where this seemed commonplace. So, too, did the problems and catastrophes when circuits got crossed or corrupted.

So, it’s a new frontier. Maybe the final one. Maybe not. But at the moment, we’re still just humans crossing into it. Forget what the bots may eventually do to us. I think I’m more concerned about the humans.

(Image from kentoh on Shutterstock)


 

The Power Users Have With Subscriptions

Unsubscribing is a vote

I was not a fan of app subscriptions initially. I long ago rethought my position. I continue to think it’s the best option for users. That belief is becoming more entrenched now that we’re entering into whatever the future will be with Artificial Intelligence and it takes constant cash to continue to burn the planet.


Whether it’s this week’s flavor of AI chatbots, Apple’s new Creator Studio, or any other new app or service, most now require a subscription to take advantage of new features as they roll out. In most cases the offer is pay $XX a month or $XX a year, with the yearly price being discounted by the cost of a month or more. Even so, we’re already seeing premium subscriptions that add on costs for more features and I think that trend will only accelerate. Welcome to the land of upsell.

Although much less than I used to, I will subscribe to a new app or service that attracts my interest for a month to check it out. I’ll set a reminder a few days before the end of the month and then take a little inventory to see if it’s worth continuing. If not, I’ll unsubscribe.

If the app or service is truly worth my while I may subscribe for the yearly price after determining it’s something I value, but that’s becoming rarer. Frankly, there just aren’t many new apps and services that seem worth even a monthly try out these days, much less paying for a yearly subscription. There have been a few apps that, although they didn’t really fit my needs, I have paid for a yearly subscription to support the developer. But that’s even rarer.

In those cases with apps, newsletters and other services, I think of those more as tips or a donation than I do entering into an ongoing relationship. I’ve even subscribed on occasion and immediately canceled with just that thought in mind. I’m all for supporting good work by good people. I admit it’s a bit unfair to a good app that doesn’t fit my needs, but it’s still a signal that I think is worth sending.

Here’s the key. Large companies (Apple, Microsoft, Google, etc…) and independent developers, writers, etc, notice when the turnstile rotates in reverse because someone unsubscribes. It’s a metric they pay attention to. They count on inertia and waning attention spans. You might think they don’t notice, but they do. As a user I look at unsubscribing as my vote up or down. Again, maybe unfair, but as I said, it’s a signal worth sending.

With the recent release of Apple’s Creator Studio suite of apps I found it remarkable that much of the commentary included mentions that users could try things out and turn off the subscription payments if they didn’t find things suitable for their purposes. Or, if they needed one of the apps for a short project that they could check in and out of the bundle for the duration of the project. I highly recommend that kind of thinking.

For what it’s worth I chose not to subscribe and try out the new Creator Studio. I thought about it, but have long since discovered other tools that fit what I might need from those apps.

In this hyper political age, we talk a lot about voting. That’s always a choice. Using the choice to subscribe and unsubscribe from apps and services can be one as well.
