We all fell for it. We all thought it would be beneficial to us as users. I don’t want to say we were all suckers, so I’ll just say we were naive. But in the end we were all suckers. Targeted advertising was supposed to cater to our needs, desires, and wishes. Surfacing what we were interested in out of the clutter was a hope and a promise that died in colliding avalanches of greed and gluttony.
To be fair some ad targeting actually works. To also be fair, even a broken clock is right twice a day. But the money came rolling in and the temptation to grab it all became far too much and made it far too easy to let slip those early promises.
Now the brains behind Artificial Intelligence are doing what many suspected from the get go and edging their way into the browser wars. TechCrunch has an interesting post talking about Perplexity’s plans to get to know us better by building a better browser.
Here’s the money quote:
“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”
Focus on the “personal” part.
Both Perplexity and OpenAI have made statements they would be interested in buying Google’s Chrome browser should Google be forced into a breakup for anti-trust reasons. But that’s years away. So why wait? Better to get in the game now before the regulators catch up. Or before all the data that’s good to grab gets grabbed and starts feeding on itself.
There’s irony in all of this that underlies and underlines the dissembling behind it that might just be seeping into the open. One of the promises of this new technology is that it will free us from drudgery, giving us all more time for creative pursuits and more balanced lifestyles. But the underlying goal is the same. Grab as much data as possible, especially “personal” data. That’s the currency. That will always be the currency.
Here’s the second money quote from Perplexity’s Aarvind Srinivasa:
“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you.”
AI might continue its move into the enterprise, but that’s not enough. And if the corporate mindset of using AI to replace workers continues, that equation points to diminishing returns eventually, even if the advertisers never catch on.
We all know how this story plays out. Because it’s a rerun. And too often a plagiarized one as well.
You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome.I can also be found on social media under my name as above.
We’re all circling. We’re not listening. We should be reading.
Everything changes. Everything remains the same. Damnit. With that said, here is this week’s Sunday Morning Reading with links to articles worth sharing and perhaps pondering over. There’s a bit of satire, a golden toilet heist, and the evolving nature of a piece from draft to final polish. And, yes, there is politics. Everything changes. Everything remains the same. Damnit.
Let’s kick off with Tina He and The Last Human Choice. That link is to the final version of the story. I also strongly encourage you to check out the draft version she shared here.
Mark Jacob always has a great look at the media, especially in this moment, In this one he examines When The Media Take MAGA Liars At Their Word. I mentioned to Mark that what infuriates me is not just the media taking him at his word–ignorance and stupidity know no bounds–but that they know better and report it out as if they don’t.
Bringing my words at the topfull circle, NatashaMH once again delves deep into the personal past through a contemporary moment (her reaction to the streaming hit Adolescence) in A Requiem For My Dreams. I’ll close with a quote from her piece about the series that applies to everything, everywhere all at once:
People say the series is about a new world that’s happening. Fuck that, ignoramuses. It’s about a world that has always been out there behind closed doors when ears weren’t listening
If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome.
Trust is not an easy thing to earn. It’s far easier to burn. When it catches fire, it quickly consumes whatever is in its path. Such a conflagration is made worse when it singes those who have long cozied up, supported, and promulgated that trust as their own. Apple and those who make a living covering the company are both fighting a fire neither can put out without the other, regardless of what caused Apple’s rush to market whatever Apple Intelligence and the new personalized Siri was supposed to be.
The fiasco is that Apple pitched a story that wasn’t true, one that some people within the company surely understood wasn’t true, and they set a course based on that.
You could say it starts and stops there. You wouldn’t be wrong.
WWDC 2024 changed all that and gave me hope that Apple was in the AI race, but there were worrisome signs even back then that because, well, it was Apple, I chose to ignore or forgive.
It’s clear Apple must radically rethink its reason for being.
The heat on Apple has been smoldering for some time now with smoke in the air, wafting on a number of fronts. While I’m not pointing fingers and criticizing Apple pundits directly, (they were misled in my view), they’ve carried a lot of water for Apple, keeping these other recent flare-ups from burning too hot.
I’ve written about this Apple Intelligence episode previously, but to recap the particulars: Apple announced its flavor of Artificial Intelligence at last year’s World Wide Developers Conference (WWDC), carving out a fire line to slow down the burning narrative that it was behind and possibly missing the moment with AI. Boldly branding it as Apple Intelligence, the key reveal was unveiling a more personalized Siri, that unlike all of the other AI efforts on the market, would give users “AI For The Rest Of Us,” that would retain the firewall of Apple’s marketing mantra of being more secure and private.
Turns out it was a reveal that wasn’t really a reveal, but has now proven all too revealing.
As has been typical with new operating system features the last few years, Apple was clear at WWDC that some of this newness would roll out over the course of the year, so there was no surprise there. Also typical since COVID is that Apple’s announcement was a canned commercial.
Atypical, however, none of the flashier features were ever shown to pundits and journalists, even under cover of an NDA. As Gruber and others are now saying, that smoky smell reeks of vaporware.
Each year Apple faces some degree of heat as it heads into WWDC. I think things will be hotter than most this year with a higher degree of skepticism. What we’re witnessing is a landscape built by years of trust, earning the benefit of doubt, turned to ashes. They say that hell has no fury like a woman scorned, but I’m here to tell you that might take second place when it comes to torching the trust relationship between a company’s PR reps and those who cover them.
Let’s talk about that trust.
Back in my gadget blogging days for GottaBeMobile.com the first rule of thumb was always be skeptical of PR. I’ve been on both sides of that fence, pushing out PR for my own projects and covering it for others. A PR pro tells you the story they want you to cover. Covering that story, you look for the holes in addition to covering it. By and large most of the well know Apple pundits have done a reasonably good job of revealing those holes in my opinion.
Apple was different in that for the most part if they made a claim it usually held up. I remember distinctly when the first iPad was released with a claimed battery life of 10 hours. Those of us at GBM were surprised when those claims proved accurate once we had the devices in our hands. Promise made. Promise fulfilled. Trust earned.
No company is perfect, certainly not Apple. But Apple has been reasonably consistent for most of the time I’ve been covering or using their hardware and software. There have been lapses — Siri being a prime example — but nothing that wasn’t overcome and perhaps, now in retrospect, wrongly overlooked because of the trust Apple built with the media and enthusiasts who covered the company. As most now realize, the smoke and mirror show of last year’s WWDC Apple Intelligence announcement was a red flag warning that needed more scrutiny than relying on trust banked through good will and follow through.
It’s currently being endlessly debated whether or not this failure was caused by a rush to satisfy Wall Street deep in its AI bubble, poor leadership, or just trying to climb too high a mountain too fast in an attempt to create a technical solution that, as announced, would one up those already on the market. In the end I don’t think it matters much what exactly sparked this blaze. I do think it matters how Apple chooses to put out the fire. Those who cover Apple, and more importantly users, feel scorched. I’m guessing there are some in Cupertino feeling that as well.
Burn scars don’t heal well or quickly.
You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome.I can also be found on social media under my name as above.
Tough reads for tough times with a nod to the Commodore 64.
The rapid decay of all things continues. I’m not even sure if “decay” is the right word. “Collapse” might be a better choice. Regardless, there’s no “decay” or “collapse” in my sharing articles and writing every week in Sunday Morning Reading. Enjoy.
Marc Elias says We Can’t Give In To Fear. He’s right. But with those we mistakenly counted on having already done so, it makes it tougher for the rest of us.
Brian Barrett of Wired (which continues to do excellent reporting) gives us a rundown on The United States of Elon Musk. Good piece with good context. I don’t disagree with his premise that it’s unsustainable. The larger concern is what’s left in its wake.
NatashaMH opens up a personal tale of exploring justice, relationships, and personal power in The Price of Guns And Butter.
Things aren’t just decaying on political and social fronts, technology is marching right alongside, if not leading the charge. John Gruber lays out a mea culpa of sorts in discussing Apple’s less than intelligent move into Artificial Intelligence in Something Is Rotten In The State of Cupertino.Om Malik also weighs in with Apple Intelligence, Fud, Dud or Both. I’ll have more to say on this later this week. I wrote a bit about it last week also.
In times of uncertain futures it’s always somewhat uncomfortably comforting to reminisce about simpler times. When it comes to technology there was perhaps no simpler or more innocent time than during the age of the Commodore 64, which was my first home computer. We’ve come a long way. Gareth Edwards takes a look at Jack Tramiel’s success in How Commodore Invented The Mass Market Computer.
If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome.
It has been inevitable for some time that Apple was going to delay launching whatever the new personalized Siri with Apple Intelligence was supposed to be. To expect otherwise was as foolish as hoping the new American government wasn’t going to wreak havoc on its own citizenry and the rest of the world after the most recent election.
Now Apple has owned up to the inevitable. In a statement to Daring Fireball’s John Gruber announced the delay and a new set of expectations:
“Siri helps our users find what they need and get things done quickly, and in just the past six months, we’ve made Siri more conversational, introduced new features like type to Siri and product knowledge, and added an integration with ChatGPT. We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.”
Note that last sentence includes “we anticipate.” I anticipate dying at some point. I also anticipate warmer days this summer, rain occasionally, and eating pizza on some day in the future. So, the message is stay tuned.
I have several thoughts on this and I’ll lay them out below, along with links to some interesting hot takes following the announcement, some of which have already cooled off a bit.
First, I think Apple was smart to make this announcement even if everyone paying attention already knew this was going to be the case. This delay wasn’t and isn’t news. That said, the announcement comes after Apple, generally perceived as rushing to catch up in the push for Artificial Intelligence, has made what can only be called a poor first impression. Sure, you can call Apple Intelligence a beta if you want. Apple does. But advertising a flawed beta as the tent pole to push new iPhones can’t be called anything but a marketing misfire, if not malpractice.
First impressions of shipping products matter more than clever shiny announcements of things yet to come.
Apple should know this because they are no strangers to bad first impressions. MobileMe left a bad stain that iCloud still has difficulty erasing. The VisionPro continues struggling with poor perception and reception. Yes, Apple also does have a history of turning some poorly received rollouts around. The best examples of that are Apple Maps and the Apple Watch. Even so, once a product launched becomes a product laughed at, it’s difficult to erase the echos of that laughter.
But perhaps the product ridiculed crucially here is the one that Apple married to this all out AI effort: Siri. Purchased, proudly launched, and then allowed to wallow — like too many other of Apple’s efforts (*cough* iPadOS *cough*), Siri has become not just a joke, but one that keeps on giving. Some say it has improved. I’ll agree with that to a point but that depends on the day.
Siri has never fulfilled Apple’s bold promises with any consistent value beyond setting a timer or adding a reminder. Even that fails enough of the time to earn users’ distrust and provide late night comedians with jokes so easy to make that the shrewder jokesters have moved on.
The debate following this recent Apple announcement in pundit circles seems to be on whether or not Apple should jettison Siri and start from scratch. I’m sure that debate has gone round and round in the circular halls of the Apple campus. I doubt that happens, given that the marketing mavens in Cupertino seem to be erratically driving the bus these days. There’s been a huge investment in Siri branding, problematic as it has always been. Unfortunately salvaging a brand is also expensive.
Apple’s Long Game Mindset Might Just Be Short Sighted
The success of the iPhone has given Apple the benefit of playing a long game, plotting product and growth strategy with a large enough cushion to weather the occasional storm. It’s certainly easier to sail through rough seas in a large ship, but the bigger the boat, the more maintenance is required to keep the hull from rusting and the engines running smoothly. The nuts and bolts matter.
Artificial Intelligence, regardless of what company is pushing it, is nuts and bolts, bits and bytes, ones and zeros. Everyone scanning the horizon thinks this is the future we’re sailing towards, full steam ahead. But nothing that’s been released or demonstrated yet has really proven that anyone can chart a correct course. The current moment resembles that scene in Jaws when all the ships set out in an armada to chase a bounty, not knowing really what they’re up against.
Don’t get me wrong. I think Artificial Intelligence may indeed prove useful. Someday. On an enterprise level. I’m just not so sure if it will ever be as big a deal on the consumer front as the marketers want us to believe it is or will be.
I also doubt Apple Intelligence will end up being another Butterfly Keyboard, MobileMe, or Siri, but at the moment there’s as good a shot of it joining the ranks of those jokes in Apple lore as there is for it becoming a success, much less useful.
Ian Betteridge in this piece, lays out what I think the AI true believer vision is in this excerpt:
But AI presents a fundamentally different challenge. This isn’t merely a new product category to be perfected – it’s a paradigm shift in how humans interact with technology. Unlike hardware innovations where Apple could polish existing concepts, AI is redefining the entire computing experience, from point-click or touch-tap to conversations. The interface layer between humans and devices is transforming in ways that might render Apple’s traditional advantages increasingly irrelevant.
He also captures the key context that reveals the tension between the long and short game as Apple has historically played it in this excerpt from earlier in that post:
Apple has long been characterised as a “fast follower” rather than a pioneering innovator. It wasn’t the first to make an MP3 player, smartphone, or even a personal computer. This strategy served Apple brilliantly in the past – observing others’ mistakes, then delivering exquisitely refined products with unmatched attention to design, usability, and integration. The first iPhone wasn’t novel in concept, but revolutionary in execution because it had a unique interface: multitouch. In fact, I would argue this was the last time Apple’s user interfaces went in a bold direction.
What is obvious in this frenzied sea of Artificial Intelligence is that Apple did a quick course correction and tried to “fast follow” before the mistakes of others could be identified well enough to refine and/or correct the way Apple has historically been successful in the past. In the case of Siri, the fact that Apple has let it languish for so long more than hints that it just doesn’t see enough value in the voice assistant proposition.
Were those bad moves? Who can really say at present. It is true that Apple had to react. OpenAI’s release of ChatGPT upset a lot of apple carts and not just those in Cupertino. But Apple’s quick course correction, coupled with a less than enthusiastic response in the same year of its other attempt at a computer interaction paradigm shift–spatial computing with the Vision Pro–has cut down the chances for any short term smooth sailing.
Some are positioning this moment Apple has created for itself as a necessary gamble Apple had to make. Here’s an excerpt from Jason Snell at Six Colors:
And if you asked those same Apple executives if they were aware that the cost of underdelivering those features in the spring of 2025 would be getting beaten up in the press a little bit for delaying features, perhaps even back to iOS 19? I’m pretty sure they’d say that a little bit of negative press today, when the world isn’t really paying that close attention to Apple and AI, would totally be worth it.
That may indeed be true in and of itself. I have no way of knowing. What I do know is that this gamble might have had better odds if Siri, prior to all of this, hadn’t been such a historical and neglected mess for far too long.
Security and Privacy
This delay announcement has also opened wider the door for criticism that might shatter another of Apple’s tent pole marketing strengths: security and privacy. Here’s a post from Simon Willison, who has a hunch that the delay might be related to those issues. It’s also worth taking a look at Willison’s earlier post on prompt injection. John Gruber of Daring Fireball takes Willison’s point further in this post. Here’s the key excerpt:
Prompt injection seems to be a problem that LLM providers can mitigate, but cannot completely solve. They can tighten the lid, but they can’t completely seal it. But with your private information, the lid needs to be provably sealed — an airtight seal, not a “well, don’t turn it upside down or shake it” seal. So a pessimistic way to look at this personalized Siri imbroglio is that Apple cannot afford to get this wrong, but the nature of LLMs’ susceptibility to prompt injection might mean it’s impossible to ever get right. And if it is possible, it will require groundbreaking achievements. It’s not enough for Apple to “catch up”. They have to solve a vexing problem — as yet unsolved by OpenAI, Google, or any other leading AI lab — to deliver what they’ve already promised.
‘Ay there’s the rub,” as Hamlet would say. No one has those solutions, yet it’s full speed ahead as the selling and hype continues. There may be a dream in there somewhere, but as for now, whether sleeping, sleepwalking, or blindly chasing bounties, all the consumer is left with at the moment is “stay tuned.”
For better or worse, we are not going to return to our regularly scheduled programming.
(I note that I was putting the final touches on this piece Bloomberg is reporting that Apple plans the biggest user interface design overhaul in quite some time with this year’s new operating system releases that will be unveiled at WWDC. Apple is under pressure from not only this Apple Intelligence, but other issues that concern developers as well. Shiny distractions generally win when it comes to taking the heat off of failures and problems.)
You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome.I can also be found on social media under my name as above.
Bogus science, finance, politics, and tech dominate this Groundhog Day edition of Sunday Morning Reading.
Here we go again. If it feels like Groundhog Day that’s because it is. Happens every year, but the things going on in this country feel very similar to, yet even more dangerous, than they did eight years ago. It’s a movie we don’t want to revisit, but are living through. Live through it we must. Enjoy today’s Sunday Morning Reading while we try to avoid repeating the same mistakes, or at least dodging them.
With trade wars now needlessly underway most of the big news ahead this week will be in the financial markets. John Lanchester has an excellent piece with excellent context about finance and what he calls “its grotesquely outsize role in the way we live now” in For Every Winner A Loser.
Meanwhile as the world focuses on trade wars, Elon Musk and who knows who else is rampaging through the federal government in ways that sound more than illegal. Josh Marshall asks Who Can Stop Elon’s ‘Team’ Wilding Its Way Through The Federal Government?
I don’t often link to Wall Street Journal pieces in this column unless they are about tech related topics. This one by The Editorial Board is worth a read and definitely worth the headline: The Dumbest Trade War In History. Seems like Murdoch and his scribes got what they wished for. Again.
On the tech front, running parallel to our political misfortunes is a river of thought on Artificial Intelligence, most of it negative these days, but also thoughtful. Alex Kirshner interviews Ed Zitron and came away with One Of Big Tech’s Angriest Critics Explains The Problem. Audrey Watters tackles the issue and says “In this AI future, there is no accountability. There is no privacy. There is no public education. There is no democracy. AI is the antithesis of all of this.” I fear she’s correct. Check out AI Foreclosure for her piece, but also the excellent collection of links on the subject she provides.
Whether it’s the science of tech or the science of finance, there’s science. We ignore it at our peril. But what happens if some of the science is bogus? Frederick Joelving, Cyril Labbé, and Guillaume Cabanac tell us that Bogus Research Is Undermining Good Science, Slowing Lifesaving Research.
In this day and age going viral is the equivalent of getting that infamous 15 minutes of fame. Both are fleeting. Joan Westenberg says Trust Me. You Don’t Want To Go Viral.
NatashaMH writes about a woman finding meaning in memoirs in Drowning In Sobriety.
And, as we enter Black History Month in the U.S., check out Deborah W. Parker’s piece on Belle da Costa Greene in The Black Librarian Who Rewrote The Rules Of Power, Gender and Passing As White.
If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome. You can also find me on social networks under my own name.
There are no good options when it comes to choosing your tech these days.
Let me rephrase that slightly, if you’re hesitant or resistant to AI taking over your tech there are no good options these days. Whether it be mobile devices, laptops, or desktop rigs, the makers of the major operating systems have all jumped on the Artificial Intelligence band wagon and are doing really poor Harold Hill impersonations trying to sell us on it.
Sure there are different flavors, but they’ve become or are becoming intrusively the default. We all know where this appears to be heading. Computing devices without AI will be the flip phones of tomorrow, If they are even available.
Apple has turned on Apple Intelligence by default, (even though it is still in beta).
Microsoft is forcing Copilot into Office 365 and its operating system and charging you more for it, wanted or not. (There are ways to ditch it.)
Google is doing the same thing with Android. Even if you don’t use an Android device, but use Google services, Google’s AI now accompanies anything you do with those services. Of course other smartphone users that rely on Android are following along, but there’s really no choice.
If Artificial Intelligence was a virus, we’ve all been infected and there’s no vaccine to argue over, nor will wearing a mask help, because it extends beyond our own computing lives to the interactions we have with our doctors, banks, any form of customer service, and other affiliations of our daily lives. Yes, there are still refuges where you can attempt to avoid AI, but that’s not the real world of daily commerce and daily personal interaction.
Now, it sounds like I’m 100% in the anti-AI camp. I’m not. I think there are legitimate uses. Some are even quite good. Some offer promise. I actually experiment with some of that. But I also think that there’s too much that isn’t useful, too much that just doesn’t work as advertised (beta or not), and too much that’s more than potentially harmful, especially in greedy hands.
I can get excited about the technology, especially on some of the exciting hardware we now see. I just consider it a shame that all of that computing power is going to be put to the uses it appears we’re in for.
We’ve been here before with new technology. First it’s a curious trickle then it becomes a tidal wave that sweeps us along in its path. It’s tough to live daily life without a smartphone these days. That’s a more recent fact than many want to acknowledge.
There’s another factor. Part of the hesitancy and resistance I know I’m feeling is that I don’t feel like I can trust the likes of Apple, Google, and Microsoft, much less the social networks and other applications that run on their hardware. I’ve always been skeptical, but that trust level took a knock with the recent knee-bending by these companies, trading cash for favors from the evil regime now in place in the U.S. I’m not sure how much more capitulation will be required, but I’m betting the folks trying stay in the game will find themselves laying prostrate before this is all over.
I’ve used Apple products and have been a fan for quite some time. I imagine I will continue to be a user of those products going forward, given the investment I have in that ecosystem. But I also use Microsoft and Google products and support a coterie of folks who do as well. I also use services on my Apple devices by both Google and Microsoft.
In order to support the folks I do, I keep up to speed with this increasing and haphazard pace we’re all forced into. The questions I deal with lately focus on how to remove or prevent these AI features more than they do about how to guide them through new features. When every day users are asking those questions there’s obviously a problem. As for me, tasting the poison in order to understand the which antidote is needed feels unhealthy, a bit dangerous, and just plain dirty.
So, I’m starting to check out other hardware to become even more familiar, but also to look at my own options. Again, there’s no easy choice. I picked up a Pixel Pro 9 recently and am checking that out. Does that mean I’m thinking of changing horses in this stream we’re in? Probably not. As I said, there are no good choices. It really is a pick your poison era we’re in. I’m not happy about it. I’ve always been tech curious, it’s just sad my current curiosity is bred from such distaste, distrust, and disgust.
You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome.I can also be found on social media under my name as above.
Looking back, while heading forward, with a nod to Beckett wandering through a lot of good questions.
This is the first edition of Sunday Morning Reading in the New Year, 2025. A new year certainly has meaning astronomically. From a human perspective it is a way of looking back in remembrance, even as we continue to evolve and move forward. Often these days, the evolving part seems more and more in question, even as humans make strides and advances in their various fields of endeavors. Some improve our lives, even as it appears so many of us remain stuck in the habits of the past and feel good about celebrating that choice to turn the clock back.
This week’s edition, in a way, marks that always thin dividing line between one year and the next, when what was old carries over into the new.
Natasha MH kicks things off with a lovely remembrance of her grandfather, It Begins With A Grain Of Salt. There’s a lovely quote:
Human intuition is not always reliable. Our perceptions can be distorted by biases and the limitations of our senses, which capture only a small fraction of the world’s phenomena.”
The Next Big Idea Club shares some insights from Greg Epstein’s new book Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation, in The Weird Worship of Tech That Demands Serious Questioning. Epstein is the Humanist Chaplin at Harvard and at MIT, where he advises students, faculty and staff on ethical and existential concerns from a humanist perspective.
One thing is certain as we head into the new year, Artificial Intelligence will continue to dominate discourse. Jennifer Ouellette examines what happened at the Journal of Human Evolution when all but one member of the editorial board resigned. Some of the issues predate the current AI moment, but that seems to have been a breaking point as she explains in Evolution Journal Editors Resign En Masse.
Simon Willison takes a look at Things We Learned About LLMs in 2024. It’s an excellent look back and worth hanging onto as we plunge ahead, willingly or no.
Edward Zitron believes that generative AI has no killer apps, nor can it justify its valuations. Here’s him quoting himself from March 2024:
What if what we’re seeing today isn’t a glimpse of the future, but the new terms of the present? What if artificial intelligence isn’t actually capable of doing much more than what we’re seeing today, and what if there’s no clear timeline when it’ll be able to do more? What if this entire hype cycle has been built, goosed by a compliant media ready and willing to take career-embellishers at their word?
Strip out the reference to AI and apply it anywhere along the timeline of human evolution and innovation and the questions resonant in a very Beckett-like way. Check out his piece Godot Isn’t Making It.
Judges in the U.S. Sixth Circuit drove a stake through the heart of Net Neutrality as the new year dawned. Brian Barrett says it’s crushing blow not just for how we live our lives on the Internet but consumer protections in general in The Death Of Net Neutrality Is A Bad Omen. He’s correct.
And finally this week, an incredible piece of reporting from Joshua Kaplan at ProPublica. The Militia And The Mole is at once terrifying and also confirming when it comes to the fears those paying attention harbor heading into whatever this next year is going to bring.
If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome. You can also find me on social networks under my own name.
Drones may be circling and society may be circling the drain, but there’s always time for Sunday Morning Reading.
Drones may (or may not) be circling the skies overhead, but that doesn’t mean we shouldn’t keep our eyes peeled for some good writing and good reading. This week’s Sunday Morning Reading features a usual mix of writing on tech, Artificial Intelligence, politics, and culture. Buckle up and enjoy.
Speaking of Artificial Intelligence, Arvino Narayanan and Sayash Kapoor tell us that Human Misuse Will Make Artificial Intelligence More Dangerous. I’ve been saying that for a while and so have any number of science fiction writers. Still, this short piece is worth a read.
Reed Albergotti chronicles an interview with Google’s Sundar Pichai on Google going all in on AI and the next move, Agentic AI. Check out Why Sundar Pichai Never Panicked.
Rounding out this group of links on AI, take a look at this intelligent and very human piece from NatashaMH. In No Society Left Behind she posits that AI will still leave us with uneven playing fields across the different strata of society.
We still haven’t come to grips with the shooting of the United Health Care executive and the reaction to it. Adrienne LaFrance takes that as a cue for Decivilization May Already Be Under Way. I would argue it’s been under way for quite some time now. Itt’s just accelerating.
Looking back a bit in history take a look at this piece from the Atlantic’s 1940 issue called The Passive Barbarian by Lewis Mumford. With the exception of a few references in the article and the publication date, I bet you would think it had been written in this current moment.
If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome. You can also find me on social networks under my own name.
Time to start getting smart about Apple Intelligence
Everybody on the Apple Intelligence Bus!Everybody on the bus! That sure seems to be the rallying cry from Wall Street to tech blogs and the pundit beat. It’s quite exciting but only in the way a trailer for the next big movie might get us excited. The story the feature will tell isn’t ready yet, and only those in the know have knowledge of the script and what secrets it might contain.
I’m not poo-pooing what Apple will be offering. I have no way to make any judgment on whether it’ll be the next big thing, the future of computing, a train wreck or an also ran. All we have to go on is a very polished presentation designed to illicit interest. Apple boldly promises Apple Intelligence is AI for the rest of us. If that proves to be the case it begs the question as to who are the “them” or “they” that aren’t “us.”
On a promotional level alone Apple achieved success and at the moment it looks like it accomplished one of its goals with the announcement. Wall Street is certainly jumping on the Apple Intelligence bus based. Investment trends don’t always prove that intelligence and common sense go hand in hand but the market has indeed moved with Apple setting a new record high.
If all or most of what Apple promises comes close to reality it indeed does look promising. But as most of us should have learned by now, technology promises, especially AI promises of late, can have some bumps along the hallucination highway.
The timing will also be interesting, given that most of this won’t be rolling out when new iPhones debut this fall. Apple may have shifted the focus and succeeded in swinging the spotlight squarely back onto itself and yet, all we really know at the moment are the promises with a big helping of “coming later this year” tagged on at the end.
That said, it is probably wise to add to our own knowledge bases with what we do know about Apple Intelligence to this point and going forward. To that end I’ve put together a reading list of what I’ve seen so far that attracts my attention. Some of it is punditry. Some of it is technical. All of it makes for interesting reading.
First up is an interview with Apple CEO Tim Cook by Josh Tyrangiel in The Washington Post. When asked what his confidence was that Apple Intelligence will not hallucinate Cook responded:
“It’s not 100 percent. But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in. So I am confident it will be very high quality. But I’d say in all honesty that’s short of 100 percent. I would never claim that it’s 100 percent.”
John Hwang in Enterprise AI Trends lays out what he views as Apple’s AI Strategy in a Nutshell.From what I know his thoughts make sense to me.
“The question now is how polished those features will feel at release. Will the new, more natural Siri deliver on its now 13-year-old promise of serving as a valuable digital assistant? Or will it quickly find itself in a Google-esque scenario where it’s telling anyone who asks to eat rocks?”
For some historical context and picking up on the discussion on how this changes (hopefully improves?) Siri, M.G. Siegler gives us The Voice Assistant Who Cried Wolf.
The real risk is execution risk. Apple does have the luxury of coming to market later, and they benefited from a huge amount of research and improvements. Like shrinking down these models, giving them high efficiencies, so they can run on-device. They’ve had all those benefits.
What they are proposing to do — to actually orchestrate different apps and different bits of data — no one has done well, yet. Apple’s bet is they can do it well because they have the data, because they are on the device. But there is a real execution risk.
Some folks are concerned that one of the ways Apple is training its Apple Intelligence includes crawling the open web. If you run a website there are ways to exclude it from being crawled, but unless whatever data has already been crawled is jettisoned by Apple prior to a new training it’s a bit late. Here’s some coverage from MacStories and Six Colors on that. This issue will be one to watch in the future.
Jason Snell of Six Colors opines that Apple’s skin the game might not have been as willingly all in as Apple would have preferred.
You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome.I can also be found on social media under my name as above.