Apple Intelligently Delays New Siri

Apple says stay tuned. The question is: For what?

It has been inevitable for some time that Apple was going to delay launching whatever the new personalized Siri with Apple Intelligence was supposed to be. To expect otherwise was as foolish as hoping the new American government wasn’t going to wreak havoc on its own citizenry and the rest of the world after the most recent election.

Now Apple has owned up to the inevitable. In a statement given to Daring Fireball’s John Gruber, Apple announced the delay and a new set of expectations:

“Siri helps our users find what they need and get things done quickly, and in just the past six months, we’ve made Siri more conversational, introduced new features like type to Siri and product knowledge, and added an integration with ChatGPT. We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.”

Note that last sentence includes “we anticipate.” I anticipate dying at some point. I also anticipate warmer days this summer, rain occasionally, and eating pizza on some day in the future. So, the message is stay tuned.

I have several thoughts on this and I’ll lay them out below, along with links to some interesting hot takes following the announcement, some of which have already cooled off a bit.

First, I think Apple was smart to make this announcement even if everyone paying attention already knew this was going to be the case. This delay wasn’t and isn’t news. That said, the announcement comes after Apple, generally perceived as rushing to catch up in the push for Artificial Intelligence, has made what can only be called a poor first impression. Sure, you can call Apple Intelligence a beta if you want. Apple does. But advertising a flawed beta as the tent pole to push new iPhones can’t be called anything but a marketing misfire, if not malpractice.

First impressions of shipping products matter more than clever shiny announcements of things yet to come.

Apple should know this because they are no strangers to bad first impressions. MobileMe left a stain that iCloud still has difficulty erasing. The Vision Pro continues to struggle with poor perception and reception. Yes, Apple also has a history of turning some poorly received rollouts around. The best examples of that are Apple Maps and the Apple Watch. Even so, once a product launched becomes a product laughed at, it’s difficult to erase the echoes of that laughter.

But perhaps the most crucial product ridiculed here is the one that Apple married to this all-out AI effort: Siri. Purchased, proudly launched, and then allowed to wallow, like too many of Apple’s other efforts (*cough* iPadOS *cough*), Siri has become not just a joke, but one that keeps on giving. Some say it has improved. I’ll agree with that to a point, but that depends on the day.

Siri has never fulfilled Apple’s bold promises with any consistent value beyond setting a timer or adding a reminder. Even that fails enough of the time to earn users’ distrust and provide late night comedians with jokes so easy to make that the shrewder jokesters have moved on.

The debate in pundit circles following this recent Apple announcement seems to be over whether or not Apple should jettison Siri and start from scratch. I’m sure that debate has gone round and round in the circular halls of the Apple campus. I doubt a restart happens, given that the marketing mavens in Cupertino seem to be erratically driving the bus these days. There’s been a huge investment in Siri branding, problematic as it has always been. Unfortunately, salvaging a brand is also expensive.

Apple’s Long Game Mindset Might Just Be Short Sighted

The success of the iPhone has given Apple the benefit of playing a long game, plotting product and growth strategy with a large enough cushion to weather the occasional storm. It’s certainly easier to sail through rough seas in a large ship, but the bigger the boat, the more maintenance is required to keep the hull from rusting and the engines running smoothly. The nuts and bolts matter.

Artificial Intelligence, regardless of what company is pushing it, is nuts and bolts, bits and bytes, ones and zeros. Everyone scanning the horizon thinks this is the future we’re sailing towards, full steam ahead. But nothing that’s been released or demonstrated yet has really proven that anyone can chart a correct course. The current moment resembles that scene in Jaws when all the ships set out in an armada to chase a bounty, not knowing really what they’re up against.

Don’t get me wrong. I think Artificial Intelligence may indeed prove useful. Someday. On an enterprise level. I’m just not so sure if it will ever be as big a deal on the consumer front as the marketers want us to believe it is or will be.

I also doubt Apple Intelligence will end up being another Butterfly Keyboard, MobileMe, or Siri, but at the moment it has as good a shot at joining the ranks of those jokes in Apple lore as it does at becoming a success, much less something useful.

Ian Betteridge, in this piece, lays out what I think the AI true believer vision is in this excerpt:

But AI presents a fundamentally different challenge. This isn’t merely a new product category to be perfected – it’s a paradigm shift in how humans interact with technology. Unlike hardware innovations where Apple could polish existing concepts, AI is redefining the entire computing experience, from point-click or touch-tap to conversations. The interface layer between humans and devices is transforming in ways that might render Apple’s traditional advantages increasingly irrelevant.

He also captures the key context that reveals the tension between the long and short game as Apple has historically played it in this excerpt from earlier in that post:

Apple has long been characterised as a “fast follower” rather than a pioneering innovator. It wasn’t the first to make an MP3 player, smartphone, or even a personal computer. This strategy served Apple brilliantly in the past – observing others’ mistakes, then delivering exquisitely refined products with unmatched attention to design, usability, and integration. The first iPhone wasn’t novel in concept, but revolutionary in execution because it had a unique interface: multitouch. In fact, I would argue this was the last time Apple’s user interfaces went in a bold direction.

What is obvious in this frenzied sea of Artificial Intelligence is that Apple did a quick course correction and tried to “fast follow” before the mistakes of others could be identified well enough to refine or correct, the way Apple has historically succeeded. In the case of Siri, the fact that Apple has let it languish for so long more than hints that it just doesn’t see enough value in the voice assistant proposition.

Were those bad moves? Who can really say at present? It is true that Apple had to react. OpenAI’s release of ChatGPT upset a lot of apple carts, and not just those in Cupertino. But Apple’s quick course correction, coupled with the less than enthusiastic response that same year to its other attempted paradigm shift in computer interaction, spatial computing with the Vision Pro, has cut down the chances for any short-term smooth sailing.

Some are positioning this moment Apple has created for itself as a necessary gamble Apple had to make. Here’s an excerpt from Jason Snell at Six Colors:

And if you asked those same Apple executives if they were aware that the cost of underdelivering those features in the spring of 2025 would be getting beaten up in the press a little bit for delaying features, perhaps even back to iOS 19? I’m pretty sure they’d say that a little bit of negative press today, when the world isn’t really paying that close attention to Apple and AI, would totally be worth it.

That may indeed be true in and of itself. I have no way of knowing. What I do know is that this gamble might have had better odds if Siri, prior to all of this, hadn’t been such a historical and neglected mess for far too long.

Security and Privacy

This delay announcement has also opened wider the door for criticism that might shatter another of Apple’s tent pole marketing strengths: security and privacy. Here’s a post from Simon Willison, who has a hunch that the delay might be related to those issues. It’s also worth taking a look at Willison’s earlier post on prompt injection. John Gruber of Daring Fireball takes Willison’s point further in this post. Here’s the key excerpt:

Prompt injection seems to be a problem that LLM providers can mitigate, but cannot completely solve. They can tighten the lid, but they can’t completely seal it. But with your private information, the lid needs to be provably sealed — an airtight seal, not a “well, don’t turn it upside down or shake it” seal. So a pessimistic way to look at this personalized Siri imbroglio is that Apple cannot afford to get this wrong, but the nature of LLMs’ susceptibility to prompt injection might mean it’s impossible to ever get right. And if it is possible, it will require groundbreaking achievements. It’s not enough for Apple to “catch up”. They have to solve a vexing problem — as yet unsolved by OpenAI, Google, or any other leading AI lab — to deliver what they’ve already promised.

“Ay, there’s the rub,” as Hamlet would say. No one has those solutions, yet it’s full speed ahead as the selling and hype continue. There may be a dream in there somewhere, but as for now, whether sleeping, sleepwalking, or blindly chasing bounties, all the consumer is left with at the moment is “stay tuned.”

For better or worse, we are not going to return to our regularly scheduled programming.

(I note that as I was putting the final touches on this piece, Bloomberg reported that Apple plans the biggest user interface design overhaul in quite some time with this year’s new operating system releases, to be unveiled at WWDC. Apple is under pressure not only from this Apple Intelligence delay, but from other issues that concern developers as well. Shiny distractions generally win when it comes to taking the heat off of failures and problems.)

 You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above. 

Sunday Morning Reading

Bogus science, finance, politics, and tech dominate this Groundhog Day edition of Sunday Morning Reading.

Here we go again. If it feels like Groundhog Day, that’s because it is. It happens every year, but the things going on in this country feel very similar to what they were eight years ago, yet even more dangerous. It’s a movie we don’t want to revisit, but are living through. Live through it we must. Enjoy today’s Sunday Morning Reading while we try to avoid repeating the same mistakes, or at least dodge them.

With trade wars now needlessly underway, most of the big news ahead this week will be in the financial markets. John Lanchester has an excellent piece with excellent context about finance and what he calls “its grotesquely outsize role in the way we live now” in For Every Winner A Loser.

Meanwhile, as the world focuses on trade wars, Elon Musk and who knows who else is rampaging through the federal government in ways that sound more than illegal. Josh Marshall asks Who Can Stop Elon’s ‘Team’ Wilding Its Way Through The Federal Government?

I don’t often link to Wall Street Journal pieces in this column unless they are about tech-related topics. This one by The Editorial Board is worth a read and definitely worth the headline: The Dumbest Trade War In History. Seems like Murdoch and his scribes got what they wished for. Again.

On the tech front, running parallel to our political misfortunes is a river of thought on Artificial Intelligence, most of it negative these days, but also thoughtful. Alex Kirshner interviews Ed Zitron and came away with One Of Big Tech’s Angriest Critics Explains The Problem.

Audrey Watters tackles the issue and says “In this AI future, there is no accountability. There is no privacy. There is no public education. There is no democracy. AI is the antithesis of all of this.” I fear she’s correct. Check out AI Foreclosure for her piece, but also the excellent collection of links on the subject she provides.

Whether it’s the science of tech or the science of finance, there’s science. We ignore it at our peril. But what happens if some of the science is bogus? Frederick Joelving, Cyril Labbé, and Guillaume Cabanac tell us that Bogus Research Is Undermining Good Science, Slowing Lifesaving Research.

In this day and age, going viral is the equivalent of getting that infamous 15 minutes of fame. Both are fleeting. Joan Westenberg says Trust Me. You Don’t Want To Go Viral.

NatashaMH writes about a woman finding meaning in memoirs in Drowning In Sobriety.

And, as we enter Black History Month in the U.S., check out Deborah W. Parker’s piece on Belle da Costa Greene in The Black Librarian Who Rewrote The Rules Of Power, Gender and Passing As White.

If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here.

Picking Your Tech Poison

It’s not easy loving tech these days.

There are no good options when it comes to choosing your tech these days. Let me rephrase that slightly: if you’re hesitant or resistant to AI taking over your tech, there are no good options these days. Whether it be mobile devices, laptops, or desktop rigs, the makers of the major operating systems have all jumped on the Artificial Intelligence bandwagon and are doing really poor Harold Hill impersonations trying to sell us on it. Sure, there are different flavors, but they’ve become or are becoming intrusively the default. We all know where this appears to be heading. Computing devices without AI will be the flip phones of tomorrow, if they are even available.

Apple has turned on Apple Intelligence by default (even though it is still in beta). Microsoft is forcing Copilot into Office 365 and its operating system and charging you more for it, wanted or not. (There are ways to ditch it.) Google is doing the same thing with Android. Even if you don’t use an Android device but use Google services, Google’s AI now accompanies anything you do with those services. Of course, other smartphone makers that rely on Android are following along, but there’s really no choice.

If Artificial Intelligence were a virus, we’ve all been infected, and there’s no vaccine to argue over, nor will wearing a mask help, because it extends beyond our own computing lives to the interactions we have with our doctors, banks, any form of customer service, and other affiliations of our daily lives. Yes, there are still refuges where you can attempt to avoid AI, but that’s not the real world of daily commerce and daily personal interaction.

Now, it sounds like I’m 100% in the anti-AI camp. I’m not. I think there are legitimate uses. Some are even quite good. Some offer promise. I actually experiment with some of that. But I also think that there’s too much that isn’t useful, too much that just doesn’t work as advertised (beta or not), and too much that’s more than potentially harmful, especially in greedy hands. I can get excited about the technology, especially on some of the exciting hardware we now see. I just consider it a shame that all of that computing power is going to be put to the uses it appears we’re in for.

We’ve been here before with new technology. First it’s a curious trickle, then it becomes a tidal wave that sweeps us along in its path. It’s tough to live daily life without a smartphone these days. That’s a more recent fact than many want to acknowledge.

There’s another factor. Part of the hesitancy and resistance I know I’m feeling is that I don’t feel like I can trust the likes of Apple, Google, and Microsoft, much less the social networks and other applications that run on their hardware. I’ve always been skeptical, but that trust level took a knock with the recent knee-bending by these companies, trading cash for favors from the evil regime now in place in the U.S. I’m not sure how much more capitulation will be required, but I’m betting the folks trying to stay in the game will find themselves lying prostrate before this is all over.

I’ve used Apple products and have been a fan for quite some time. I imagine I will continue to be a user of those products going forward, given the investment I have in that ecosystem. But I also use Microsoft and Google products and support a coterie of folks who do as well. I also use services on my Apple devices from both Google and Microsoft. In order to support the folks I do, I keep up to speed with this increasing and haphazard pace we’re all forced into. The questions I deal with lately focus more on how to remove or prevent these AI features than on how to guide users through new ones. When everyday users are asking those questions, there’s obviously a problem.

As for me, tasting the poison in order to understand which antidote is needed feels unhealthy, a bit dangerous, and just plain dirty. So, I’m starting to check out other hardware to become even more familiar, but also to look at my own options. Again, there’s no easy choice. I picked up a Pixel 9 Pro recently and am checking that out. Does that mean I’m thinking of changing horses in this stream we’re in? Probably not. As I said, there are no good choices. It really is a pick-your-poison era we’re in. I’m not happy about it. I’ve always been tech curious; it’s just sad my current curiosity is bred from such distaste, distrust, and disgust.

Sunday Morning Reading

Looking back, while heading forward, with a nod to Beckett wandering through a lot of good questions.

This is the first edition of Sunday Morning Reading in the New Year, 2025. A new year certainly has meaning astronomically. From a human perspective it is a way of looking back in remembrance, even as we continue to evolve and move forward. Often these days, the evolving part seems more and more in question, even as humans make strides and advances in their various fields of endeavors. Some improve our lives, even as it appears so many of us remain stuck in the habits of the past and feel good about celebrating that choice to turn the clock back.

This week’s edition, in a way, marks that always thin dividing line between one year and the next, when what was old carries over into the new.

Natasha MH kicks things off with a lovely remembrance of her grandfather, It Begins With A Grain Of Salt. There’s a lovely quote:

“Human intuition is not always reliable. Our perceptions can be distorted by biases and the limitations of our senses, which capture only a small fraction of the world’s phenomena.”

Christopher Luu offers a terrific look at one who made choices in ‘She Believed You Have To Take Sides’: How Audrey Hepburn Became A Secret Spy During World War Two.

Om Malik has a lovely piece about his “re-birthday” after surviving a heart attack in The Story of The Stent.

James Thomson, the developer of PCalc and other Apple software, looks back on the last 25 years in I Live My Life A Quarter Century At A Time.

The Next Big Idea Club shares some insights from Greg Epstein’s new book Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation, in The Weird Worship of Tech That Demands Serious Questioning. Epstein is the Humanist Chaplain at Harvard and at MIT, where he advises students, faculty and staff on ethical and existential concerns from a humanist perspective.

One thing is certain as we head into the new year, Artificial Intelligence will continue to dominate discourse. Jennifer Ouellette examines what happened at the Journal of Human Evolution when all but one member of the editorial board resigned. Some of the issues predate the current AI moment, but that seems to have been a breaking point as she explains in Evolution Journal Editors Resign En Masse.

Simon Willison takes a look at Things We Learned About LLMs in 2024. It’s an excellent look back and worth hanging onto as we plunge ahead, willingly or no.

Edward Zitron believes that generative AI has no killer apps, nor can it justify its valuations. Here’s him quoting himself from March 2024:

What if what we’re seeing today isn’t a glimpse of the future, but the new terms of the present? What if artificial intelligence isn’t actually capable of doing much more than what we’re seeing today, and what if there’s no clear timeline when it’ll be able to do more? What if this entire hype cycle has been built, goosed by a compliant media ready and willing to take career-embellishers at their word?

Strip out the reference to AI and apply it anywhere along the timeline of human evolution and innovation, and the questions resonate in a very Beckett-like way. Check out his piece Godot Isn’t Making It.

Judges in the U.S. Sixth Circuit drove a stake through the heart of Net Neutrality as the new year dawned. Brian Barrett says it’s a crushing blow, not just for how we live our lives on the Internet but for consumer protections in general, in The Death Of Net Neutrality Is A Bad Omen. He’s correct.

And finally this week, an incredible piece of reporting from Joshua Kaplan at ProPublica. The Militia And The Mole is at once terrifying and also confirming when it comes to the fears those paying attention harbor heading into whatever this next year is going to bring.


Sunday Morning Reading

Drones may be circling and society may be circling the drain, but there’s always time for Sunday Morning Reading.

Drones may (or may not) be circling the skies overhead, but that doesn’t mean we shouldn’t keep our eyes peeled for some good writing and good reading. This week’s Sunday Morning Reading features a usual mix of writing on tech, Artificial Intelligence, politics, and culture. Buckle up and enjoy.

A man reading a newspaper on a porch with a sky full of drones and a cityscape background. AI generated

Speaking of Artificial Intelligence, Arvind Narayanan and Sayash Kapoor tell us that Human Misuse Will Make Artificial Intelligence More Dangerous. I’ve been saying that for a while, and so have any number of science fiction writers. Still, this short piece is worth a read.

Matthew Ingram asks and answers the question Are AI Chatbots Good or Bad For Mental Health? Yes. Good read.

Reed Albergotti chronicles an interview with Google’s Sundar Pichai on Google going all in on AI and the next move,  Agentic AI. Check out Why Sundar Pichai Never Panicked.

Rounding out this group of links on AI, take a look at this intelligent and very human piece from NatashaMH. In No Society Left Behind she posits that AI will still leave us with uneven playing fields across the different strata of society.

John Gruber has an interesting piece On The Accountability of Unnamed Public Relations Spokespeople. It’s politics specific but it also speaks more broadly about the, in my opinion, decline of PR as an effective tool.

We still haven’t come to grips with the shooting of the UnitedHealthcare executive and the reaction to it. Adrienne LaFrance takes that as a cue for Decivilization May Already Be Under Way. I would argue it’s been under way for quite some time now. It’s just accelerating.

David Todd McCarty says We’re All Going to Need To Hunker Down For A Long-Ass Storm. I concur, although I fear it’s going to be looked back on as a major climate shift.

Dave Troy in the Washington Spectator gives us The Wide Angle: “Project Russia,” Unknown In The West, Reveals Putin’s Playbook. It will never cease to amaze me how we let this one slip by us.

Looking back a bit in history take a look at this piece from the Atlantic’s 1940 issue called The Passive Barbarian by Lewis Mumford. With the exception of a few references in the article and the publication date, I bet you would think it had been written in this current moment.

And finally, with the holiday drone buzzing around us David Todd McCarty offers up Struggling To Find Peace In The Midst of Exuberant Joy.


Getting Smart About Apple Intelligence

Time to start getting smart about Apple Intelligence

Everybody on the Apple Intelligence bus! Everybody on the bus! That sure seems to be the rallying cry from Wall Street to the tech blogs and the pundit beat. It’s quite exciting, but only in the way a trailer for the next big movie might get us excited. The story the feature will tell isn’t ready yet, and only those in the know have seen the script and what secrets it might contain.

I’m not pooh-poohing what Apple will be offering. I have no way to make any judgment on whether it’ll be the next big thing, the future of computing, a train wreck, or an also-ran. All we have to go on is a very polished presentation designed to elicit interest. Apple boldly promises Apple Intelligence is AI for the rest of us. If that proves to be the case, it raises the question of who the “them” or “they” are that aren’t “us.”

On a promotional level alone Apple achieved success, and at the moment it looks like it accomplished one of its goals with the announcement. Wall Street is certainly jumping on the Apple Intelligence bus. Investment trends don’t always prove that intelligence and common sense go hand in hand, but the market has indeed moved, with Apple setting a new record high.

If all or most of what Apple promises comes close to reality it indeed does look promising. But as most of us should have learned by now, technology promises, especially AI promises of late, can have some bumps along the hallucination highway.

The timing will also be interesting, given that most of this won’t be rolling out when new iPhones debut this fall. Apple may have shifted the focus and succeeded in swinging the spotlight squarely back onto itself and yet, all we really know at the moment are the promises with a big helping of “coming later this year” tagged on at the end.

That said, it is probably wise to add to our own knowledge bases with what we do know about Apple Intelligence to this point and going forward. To that end I’ve put together a reading list of what I’ve seen so far that attracts my attention. Some of it is punditry. Some of it is technical. All of it makes for interesting reading.

First up is an interview with Apple CEO Tim Cook by Josh Tyrangiel in The Washington Post. When asked what his confidence was that Apple Intelligence will not hallucinate Cook responded:

“It’s not 100 percent. But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in. So I am confident it will be very high quality. But I’d say in all honesty that’s short of 100 percent. I would never claim that it’s 100 percent.”

John Hwang in Enterprise AI Trends lays out what he views as Apple’s AI Strategy in a Nutshell. From what I know, his thoughts make sense to me.

Casey Newton says Apple’s AI Moment Arrives. Here’s a quote:

“The question now is how polished those features will feel at release. Will the new, more natural Siri deliver on its now 13-year-old promise of serving as a valuable digital assistant? Or will it quickly find itself in a Google-esque scenario where it’s telling anyone who asks to eat rocks?”

Craig Federighi did an interview with Michael Grothaus of Fast Company about Apple Intelligence.

For some historical context and picking up on the discussion on how this changes (hopefully improves?) Siri, M.G. Siegler gives us The Voice Assistant Who Cried Wolf.

Wes Davis at The Verge gives us a list of every new AI feature coming to the iPhone and Mac.

Security and privacy are two of the big-picture concerns that accompany anything about AI in general. A couple of technical entries into the discussion that I find informative include Private Cloud Compute: A new frontier for AI privacy in the cloud from Apple Security Research and Introducing Apple’s On-Device and Server Foundation Models. Both are released via Apple and focus on privacy and responsible AI development.

Here’s a terrific interview with Tim Cook from YouTuber MKBHD. It’s the most relaxed I’ve ever seen Cook in an interview.

Brian Fung of CNN takes a look at how Apple is handling your data with Apple Intelligence and the differences between Apple Intelligence and ChatGPT.

Sebastiaan de With ponders Apple’s Intelligent Strategy. 

Michael Parekh weighs in with his thoughts on how Apple is thinking different.

Steven Aquino weighs in on how Apple’s efforts have the potential to affect Accessibility.

Peter Kafka has published a transcript of an interview with Ben Thompson. Here’s a quote:

The real risk is execution risk. Apple does have the luxury of coming to market later, and they benefited from a huge amount of research and improvements. Like shrinking down these models, giving them high efficiencies, so they can run on-device. They’ve had all those benefits.

What they are proposing to do — to actually orchestrate different apps and different bits of data — no one has done well, yet. Apple’s bet is they can do it well because they have the data, because they are on the device. But there is a real execution risk.

Will Knight on Wired suggests that Apple Proved AI is a Feature, Not a Product. 

Some folks are concerned that one of the ways Apple is training its Apple Intelligence includes crawling the open web. If you run a website there are ways to exclude it from being crawled, but unless whatever data has already been crawled is jettisoned by Apple prior to a new training it’s a bit late. Here’s some coverage from MacStories and Six Colors on that. This issue will be one to watch in the future.

Jason Snell of Six Colors opines that Apple’s skin in the game might not have been as willingly all in as Apple would have preferred.

Dwight Silverman of The Houston Chronicle wonders Will Apple’s new AI plans Help Siri Fulfill its Promise or will it disappoint again?


Sunday Morning Reading

Secret octopi, culture wars, convictions, and reading between the letters. In this week’s Sunday Morning Reading.

Life is beginning to settle in after the big move, although there are still parts of it we can’t find because we don’t know which box we packed them in. Perhaps we need some sort of A.I. bot to help us figure that out. But we’ll get there. In the meantime, here’s some Sunday Morning Reading to share.

Speaking of AI, WTF is AI? That’s the question posed, with some attempted answers, by Devin Coldewey. It’s a decent primer on the topic. Watch out for secret octopi.

A couple of pieces on AI from Nico Grant at the NY Times show just how unknown, and perhaps reliably unreliable, this fast-evolving tech territory is. First up is Google's A.I. Search Leaves Publishers Scrambling. Follow that up with Google Rolls Back A.I. Search Feature After Flubs and Flaws. I wonder how AI will spit all of this back at us once articles like these are trained in. I also wonder when publishers will standardize whether we write it as AI or A.I.

Some think The AI Revolution Is Already Losing Steam. I happen to agree with Christopher Mims, the author of this piece.

Even in the midst of moving it's been tough to ignore the political comings, goings, and convictions in the news. Check out David Todd McCarty on Bedtime for Bonzo, Or Nothing To See Here. Even after 34 convictions for the orange dude, this piece holds up.

This piece from July of 2021 by John Pavlovitz resurfaced in my feeds in the last week. The Sadness of Sharing A Country With Trump Supporters is worth a re-read in the wake of this week’s news. Somehow I think it will remain relevant for quite some time.

With all that is going on in the political world, it’s a good idea to always remember there is so much more going on behind the scenes than we ever want to realize. Check out Ken Silverstein’s look behind the curtain in Off Leash: Inside The Secret, Global, Far-Right Group Chat. You might be sorry you did.

I hope Wonkette is a site you visit often. There's an excellent serial novel there called The Split by Ellis Weiner and Steve Radlauer. It's up to Chapter 30. It's terrific and worth your time.

There's a new book worth highlighting, and highlighted by Laura Collins-Hughes in the NY Times. James Shapiro's The Playbook chronicles the history of The Federal Theatre Project. The subtitle teases well: A Story of Theatre, Democracy and The Making Of A Culture War. A great story from back in the day when folks believed live theatre was dangerous enough to change minds.

And to close out this week’s edition check out Natasha MH’s Writing The Unpretentious Prose. Don’t just read the words. Look between the letters.

If you're interested in just what the heck Sunday Morning Reading is all about, you can read more about the origins of Sunday Morning Reading here. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome.

But the Demos Aren’t Lying

Steven Levy has seen enough AI demos to think we should believe the hype. I’m still in wait and see mode.

Call me curious. Call me skeptical. Two sides of the same coin. The tech industry is dancing on the edge of a coin called Artificial Intelligence, waiting to see which side lands face up. As they dance, we also dance, because the promise/hope/hype/hyperbole is that the technology will make lives better, fill lots of coffers, and set us all free (except for that sure-to-increase-every-year subscription price) to enjoy more of life.

Steven Levy has apparently seen enough demos that he has penned a piece telling us that It’s Time to Believe the AI Hype. It’s a well reasoned piece, as usual from Levy, and worth a read if you’re trying to follow what all of this means. But the moment that caught me was this quote:

Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying.

“But the demos aren’t lying.” They may not be. It all might come true. Or some of it. Or enough of it to matter. Even so, I've been around enough blocks too many times to stake anything on any demo for any product. Some do pan out. Too many do not. Given the pace of things in tech these days, I'm guessing that once the inevitable explosion yields to the equally inevitable contraction, there's a better-than-average chance we'll be eyeing some other piece of universe-altering tech within a year or two.

The reality is that what's coming in AI is coming. We'll all get a taste. The proof will be in how we digest whatever tech-related nutritional value it offers.

You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome.

What Happens to Ads with AI Summaries of Web Pages?

Will AI summarize web ads into submission?

Artificial Intelligence is still the dominant tech craze of the moment. Big announcements are expected within the next several weeks from Apple, Google, and just about anyone else who can prompt an AI-generated press release into being. I'm sure AI will continue to be on the tips of most digital tongues.

Or will it all just be summarized? 

One of the trends I'm seeing predicted is how users will take advantage of Artificial Intelligence to summarize web pages. That sounds like a useful, perhaps even noble, idea, but it raises questions. The web relies heavily on advertising to generate revenue. AI is supposed to help ad creators and marketers design and target more efficiently. What happens when users stop visiting web pages and just rely on summaries? That's a genuine question, and I'd love to read some possible answers.

It's not that I'm a big fan of ads, but I remember back in the heyday of RSS there was all sorts of tension over lost ad impressions between web publishers and the users who relied on RSS readers. Then RSS feeds of web articles got truncated into teasers to send users clicking. Then ads got inserted into RSS. Will the same thing happen with ads being inserted into AI summaries? How would that work with something like the AI Pin or the Rabbit R1? (Although I doubt those devices will be around long enough for us to find out.)

Given that one of the other predicted AI trends is being able to verbally converse with whatever AI machine you choose, how would that work with advertising? Will a user need to listen to ads before getting a response to their prompt? There’s already a lag in compute capacity resulting in delays delivering responses to queries with most current AI engines. I don’t imagine waiting for an ad insertion will help improve on that. 

Again, these are sincere questions that I’d love to hear some thoughts on. Just don’t summarize them.

You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome.

The AI Pin Feels Less Than Humane

It's tough to do a hot take on the AI Pin from Humane given how creepily cold the launch and its video were. The makers' chill approach sure didn't light any fires of enthusiasm. I've seen friends in a hangover stupor with more enthusiasm about their prospects of greeting a new day. If that's the sort of calm, cool, and collected monotone our future AI world promises, it sure doesn't feel like a very Humane one.

As for the technology, certainly at some point in the future we’re headed to something like this and I’ll give the makers credit for their efforts so far. At this point there’s no way to really judge the product or its future, but you can see a certain promise in this kind of Star Trek type of human to computer interaction. 

Even so, whether collated and sorted by AI or generated by apps you still need to somehow get something “on screen” at some point. And I’m thinking that needs to be larger than your palm. I can’t imagine negotiating with a laser image of someone’s face in my palm, and “voice only” only gets you so far.

That's the big disconnect in my first reaction. The AI Pin feels more like an input accessory than an end point. If I'm out for the day and snap a few pictures or video, they need to be viewed before they're of any value beyond further training an AI engine or sending location-tracking data. And yes, I can imagine a future with some sort of headset or glasses to view those images, but I also imagine that whatever that face computer might be, it will have approximately the same features as the AI Pin.

So, I say kudos for pushing the discussion. Push it with a little more human enthusiasm next time around. 

Here’s the video.