You Will Be Assimilated as OpenAI Seeks Single Sign On Capabilities

Resistance to Single Sign On is not futile

News on so many fronts is fast and furious these days, and this little Artificial Intelligence nugget seemed to slip under quite a few radars. OpenAI, the purveyor of ChatGPT, is working on a Sign in with ChatGPT feature.

OpenAI logo

As I said on social media when this news broke, we’ve seen this movie before. It’s a complex plot that never seems to work out in the end. Beginning what seems like a generation ago, signing in with Facebook, Twitter, Google, and the like proliferated, and many users joined the parade out of convenience. Apple has its own Sign in with Apple feature and swears up and down that it doesn’t share your data. That may be true, but we now know differently about most, if not all, of the others.

As happens with most new technology, we jumped into the pool without really knowing what lurked beneath, and once it became apparent how single sign-on allowed companies to track you across most of your online activity, folks began changing their habits. Swimming with sharks is never fun.

The tracking is the key. So is the passage of time. There’s an entire new generation of users who have embraced Artificial Intelligence, OpenAI’s ChatGPT in particular. TechCrunch reports that ChatGPT has 600 million monthly active users. I’d wager that a large number of those users were too young to experience the last single sign-on revolution.

As I said, we’ve seen this movie before, and by and large it never ends well. Data is tracked, traded, and now used for AI training, in ways that should prompt far more care about the convenience tradeoff we accept when consenting to those user agreements no one ever reads.

As the TechCrunch article points out, the intent here is to use that data for commercial purposes, supposedly to “help people with a wide range of online services.” That’s the pitch. But it’s a knuckleball that is difficult to control, much less swing at. It’s always about the money, and data is money.

OpenAI may be the first of the AI companies vying to sign you in, but it won’t be the last. In my opinion, the safest bet in the big data casino is to always create a separate sign-in for each online service you use. Don’t let the convenience factor outweigh what little control you do have over how your data is used and abused.

You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above. 

Sunday Morning Reading

So what is this future we’re heading into anyway?

If you’re observing Memorial Day weekend in the U.S., I hope you have had a pleasant one, even if the weather isn’t cooperating, seemingly echoing the threats of seeing that tradition, like so many others, diminished. We’re on the road again for a dear friend’s memorial service, but there’s still time for a little Sunday Morning Reading. Mostly tech related this week, some politics, and of course some cultural happenings. If you’re paying attention, it’s all intertwining. Listening to a lot of Bruce Springsteen. Enjoy.

Adding to what’s becoming a recurring theme in this column, Ian Dunt is looking for ways to get the most out of our digital lives while taking back a bit of control from the tech-god wannabes. Check out Taking Back Control of Our Digital Life.

Mathew Ingram wonders If AI Helps To Kill The Open Web What Will Replace It? Excellent piece and excellent topic, because, like it or not, it’s the current and next movement we on the ground are going to have to contend with. Pay attention.

Neil Steinberg, one of my favorites among a dying breed of Chicago journalists, gives his take on the recent Chicago Sun-Times AI flap in The AI Genie Is Out of The Bottle, and the Granted Wish Often Brings Trouble.

Lucy Bannerman takes on AI’s abuse of copyright and artists’ rights in Nick Clegg: Artists’ Demands Over Copyright Are Unworkable. They aren’t. Those demands just cost more than the folks counting the beans want to pay.

Lynette Bye’s Misaligned AI Is No Longer Just Theory raises the specter that haunts this entire episode of our lives across all spectrums, one that seems easy either to fall prey to or to dismiss, depending on which side of the coin you’re on. Frankly, if you don’t think the future of this can be manipulated, you’re not paying attention.

I think Jason Snell’s take on the recent announcement that OpenAI has bought Jony Ive’s company to produce new AI hardware is the correct one. Check out Sam and Jony and Skepticism.

Chloe Rabinowitz fills us in on the outgoing president of the Kennedy Center’s response to the bullshit coming out of the White House in Deborah Rutter Releases Statement In Response to Trump Kennedy Center Allegations.

The real boss, Bruce Springsteen, continues to piss off the orange buffoon in the White House and I’m glad to see it. So is Eric Alterman in a guest essay in The New York Times proclaiming Bruce Springsteen Will Never Surrender to Donald Trump. We need more of this.

And to wrap up this week, here’s NatashaMH wondering Do We Really Need To Have This Discussion? No hints. No clues. Just good stuff for you to read.

If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here. You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome.

Time For The Shibboleth of Targeted Ads To Die

It’s always the data.

We all fell for it. We all thought it would be beneficial to us as users. I don’t want to say we were all suckers, so I’ll just say we were naive. But in the end we were all suckers. Targeted advertising was supposed to cater to our needs, desires, and wishes. Surfacing what we were interested in out of the clutter was a hope and a promise that died in colliding avalanches of greed and gluttony.

 


To be fair, some ad targeting actually works. To also be fair, even a broken clock is right twice a day. But the money came rolling in, the temptation to grab it all became far too much, and it became far too easy to let those early promises slip.

Now the brains behind Artificial Intelligence are doing what many suspected from the get-go and edging their way into the browser wars. TechCrunch has an interesting post about Perplexity’s plans to get to know us better by building a better browser.

Here’s the money quote:

“That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you,” Srinivas said. “Because some of the prompts that people do in these AIs is purely work-related. It’s not like that’s personal.”

Focus on the “personal” part.

Both Perplexity and OpenAI have said they would be interested in buying Google’s Chrome browser should Google be forced into a breakup for antitrust reasons. But that’s years away. So why wait? Better to get in the game now before the regulators catch up. Or before all the data that’s good to grab gets grabbed and starts feeding on itself.

There’s an irony in all of this that underlies, and underlines, the dissembling behind it, and it might just be seeping into the open. One of the promises of this new technology is that it will free us from drudgery, giving us all more time for creative pursuits and more balanced lifestyles. But the underlying goal is the same: grab as much data as possible, especially “personal” data. That’s the currency. That will always be the currency.

Here’s the second money quote from Perplexity’s Aravind Srinivas:

“On the other hand, what are the things you’re buying; which hotels are you going [to]; which restaurants are you going to; what are you spending time browsing, tells us so much more about you.”

AI might continue its move into the enterprise, but that’s not enough. And if the corporate mindset of using AI to replace workers continues, that equation points to diminishing returns eventually, even if the advertisers never catch on.

We all know how this story plays out. Because it’s a rerun. And too often a plagiarized one as well.

You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above. 

Scorched Apple Trust

Hey Siri, how do you rebuild trust?

Trust is not an easy thing to earn. It’s far easier to burn. When it catches fire, it quickly consumes whatever is in its path. Such a conflagration is made worse when it singes those who have long cozied up to, supported, and promulgated that trust as their own. Apple and those who make a living covering the company are both fighting a fire neither can put out without the other, regardless of what caused Apple’s rush to market whatever Apple Intelligence and the new personalized Siri were supposed to be.


The money quote of this episode and this moment is from John Gruber at Daring Fireball in Something Is Rotten in the State of Cupertino.

The fiasco is that Apple pitched a story that wasn’t true, one that some people within the company surely understood wasn’t true, and they set a course based on that.

You could say it starts and stops there. You wouldn’t be wrong.

Here’s a quote from Lance Ulanoff on TechRadar:

WWDC 2024 changed all that and gave me hope that Apple was in the AI race, but there were worrisome signs even back then that because, well, it was Apple, I chose to ignore or forgive.

Om Malik says:

It’s clear Apple must radically rethink its reason for being.

The heat on Apple has been smoldering for some time now, with smoke in the air wafting on a number of fronts. While I’m not pointing fingers and criticizing Apple pundits directly (they were misled, in my view), they’ve carried a lot of water for Apple, keeping these other recent flare-ups from burning too hot.

I’ve written about this Apple Intelligence episode previously, but to recap the particulars: Apple announced its flavor of Artificial Intelligence at last year’s Worldwide Developers Conference (WWDC), carving out a fire line to slow down the burning narrative that it was behind and possibly missing the moment with AI. Boldly branded as Apple Intelligence, the key reveal was a more personalized Siri that, unlike all of the other AI efforts on the market, would give users “AI For The Rest Of Us” while retaining the firewall of Apple’s marketing mantra of being more secure and private.

Turns out it was a reveal that wasn’t really a reveal, but has now proven all too revealing.

As has been typical with new operating system features over the last few years, Apple was clear at WWDC that some of this newness would roll out over the course of the year, so there was no surprise there. Also typical since COVID, the announcement itself was a canned commercial.

Atypically, however, none of the flashier features were ever shown to pundits and journalists, even under cover of an NDA. As Gruber and others are now saying, that smoky smell reeks of vaporware.

Each year Apple faces some degree of heat as it heads into WWDC. I think things will be hotter than most years this time, with a higher degree of skepticism. What we’re witnessing is a landscape built by years of trust and the earned benefit of the doubt, now turned to ashes. They say hell hath no fury like a woman scorned, but I’m here to tell you that might take second place when it comes to torching the trust relationship between a company’s PR reps and those who cover them.

Let’s talk about that trust.

Back in my gadget blogging days for GottaBeMobile.com, the first rule of thumb was to always be skeptical of PR. I’ve been on both sides of that fence, pushing out PR for my own projects and covering it for others. A PR pro tells you the story they want you to cover. Covering that story, you look for the holes as well. By and large, most of the well-known Apple pundits have done a reasonably good job of revealing those holes, in my opinion.

Apple was different in that, for the most part, if they made a claim, it usually held up. I remember distinctly when the first iPad was released with a claimed battery life of 10 hours. Those of us at GBM were surprised when those claims proved accurate once we had the devices in our hands. Promise made. Promise fulfilled. Trust earned.

No company is perfect, certainly not Apple. But Apple has been reasonably consistent for most of the time I’ve been covering or using their hardware and software. There have been lapses — Siri being a prime example — but nothing that wasn’t overcome and, perhaps, in retrospect, wrongly overlooked because of the trust Apple built with the media and enthusiasts who covered the company. As most now realize, the smoke-and-mirrors show of last year’s WWDC Apple Intelligence announcement was a red flag warning that needed more scrutiny than relying on trust banked through goodwill and follow-through.

It’s being endlessly debated whether this failure was caused by a rush to satisfy Wall Street deep in its AI bubble, poor leadership, or simply trying to climb too high a mountain too fast in an attempt to create a technical solution that, as announced, would one-up those already on the market. In the end, I don’t think it matters much what exactly sparked this blaze. I do think it matters how Apple chooses to put out the fire. Those who cover Apple, and more importantly users, feel scorched. I’m guessing there are some in Cupertino feeling that as well.

Burn scars don’t heal well or quickly.

You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above. 

Sunday Morning Reading

History has its layers and facts might be damned, but that’s what myths are made of.

Tick. Tock. Or is that TikTok? Regardless, it’s the last Sunday Morning Reading column before things take a drastic turn here in the United States. Plenty to be concerned about, but Sunday Morning Reading will still keep chugging along until they turn out the lights. That said, quite a bit of today’s chugging focuses on that messy intersection of tech and politics, because, well, you know, that muddled mess of things is what attracts my attention. They are no small things.

Speaking of small things, David Todd McCarty suggests that when we get too overwhelmed perhaps it’s time to get small. Check out Let’s Get Small.

There’s A Reason Why It Feels Like The Internet Has Gone Bad is actually a short interview with Cory Doctorow by Allison Morrow about a term Doctorow coined that I think fits much (and not just on the Internet) of what we’re already living through and what we’re in for: enshittification.

George Dillard wonders why modern business tycoons are like their forebears in Nerds, Curdled.

Jared Yates Sexton has some thoughts on dealing with what’s coming in Back Into The Breach: Thoughts On The Second Trump Presidency. Good read.

The toadstools salivating to use government to dismantle government no longer grow in shadowy, dank places. Joan Westenberg takes on Silicon Valley’s Secret Love Affair With The State.

John Gruber highlights and expands on an article by Kyle Wiggers at TechCrunch that hit amidst the growing chaos this week, when Google announced that its ever-declining search product would now require JavaScript in order to use Google Search. Check out Google Search, More Machine Now Than Man, Begins Requiring JavaScript.

Joseph Finder takes a look at The Russian Roots Of American Crime Fiction—And The O.G. It’s not that the characters created by Dostoevsky and Gogol were Russian. They were merely human.

We love stories, but we love our myths more. Neil Steinberg takes on The Myths of Telephone History. The lies we agree upon might just be the most pungent of them all.

To close things out, NatashaMH reminds us in Chestnut Roasting On An Open Fire that some say that “the strength of a superhero is determined by the strength of his villain—the greater the adversary, the mightier the hero.”  We’re about to find out if that’s true or not.

If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here.  You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome. You can also find me on social networks under my own name.

Sunday Morning Reading

Looking back, while heading forward, with a nod to Beckett wandering through a lot of good questions.

This is the first edition of Sunday Morning Reading in the New Year, 2025. A new year certainly has meaning astronomically. From a human perspective it is a way of looking back in remembrance, even as we continue to evolve and move forward. Often these days, the evolving part seems more and more in question, even as humans make strides and advances in their various fields of endeavor. Some of those advances improve our lives, even as it appears so many of us remain stuck in the habits of the past and feel good about celebrating that choice to turn the clock back.

This week’s edition, in a way, marks that always thin dividing line between one year and the next, when what was old carries over into the new.

Natasha MH kicks things off with a lovely remembrance of her grandfather, It Begins With A Grain Of Salt. There’s a lovely quote:

Human intuition is not always reliable. Our perceptions can be distorted by biases and the limitations of our senses, which capture only a small fraction of the world’s phenomena.

Christopher Luu offers a terrific look at one who made choices in ‘She Believed You Have To Take Sides’: How Audrey Hepburn Became A Secret Spy During World War Two.

Om Malik has a lovely piece about his “re-birthday” after surviving a heart attack in The Story of The Stent.

James Thomson, the developer of PCalc and other Apple software, looks back on the last 25 years in I Live My Life A Quarter Century At A Time.

The Next Big Idea Club shares some insights from Greg Epstein’s new book Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation, in The Weird Worship of Tech That Demands Serious Questioning. Epstein is the Humanist Chaplain at Harvard and at MIT, where he advises students, faculty, and staff on ethical and existential concerns from a humanist perspective.

One thing is certain as we head into the new year, Artificial Intelligence will continue to dominate discourse. Jennifer Ouellette examines what happened at the Journal of Human Evolution when all but one member of the editorial board resigned. Some of the issues predate the current AI moment, but that seems to have been a breaking point as she explains in Evolution Journal Editors Resign En Masse.

Simon Willison takes a look at Things We Learned About LLMs in 2024. It’s an excellent look back and worth hanging onto as we plunge ahead, willingly or not.

Edward Zitron believes that generative AI has no killer apps, nor can it justify its valuations. Here he is, quoting himself from March 2024:

What if what we’re seeing today isn’t a glimpse of the future, but the new terms of the present? What if artificial intelligence isn’t actually capable of doing much more than what we’re seeing today, and what if there’s no clear timeline when it’ll be able to do more? What if this entire hype cycle has been built, goosed by a compliant media ready and willing to take career-embellishers at their word?

Strip out the reference to AI and apply it anywhere along the timeline of human evolution and innovation, and the questions resonate in a very Beckett-like way. Check out his piece Godot Isn’t Making It.

Judges in the U.S. Sixth Circuit drove a stake through the heart of Net Neutrality as the new year dawned. Brian Barrett says it’s a crushing blow, not just for how we live our lives on the Internet but for consumer protections in general, in The Death Of Net Neutrality Is A Bad Omen. He’s correct.

And finally this week, an incredible piece of reporting from Joshua Kaplan at ProPublica. The Militia And The Mole is at once terrifying and also confirming when it comes to the fears those paying attention harbor heading into whatever this next year is going to bring.

If you’re interested in just what the heck Sunday Morning Reading is all about you can read more about the origins of Sunday Morning Reading here.  You can also find more of my writings on Medium at this link, including in the publications Ellemeno and Rome. You can also find me on social networks under my own name.

What Happens to Ads with AI Summaries of Web Pages?

Will AI summarize web ads into submission?

Artificial Intelligence is still the dominant tech craze of the moment. Big announcements are expected within the next several weeks from Apple, Google, and just about anyone else who can prompt an AI-generated press release into being. I’m sure AI will continue to be on the tips of most digital tongues.

Or will it all just be summarized? 

One of the trends I’m seeing predicted is how users will take advantage of Artificial Intelligence to summarize web pages. That sounds like a useful, perhaps even noble, idea, but it raises questions. The web relies so much on advertising to generate revenue, and AI is supposed to help ad creators and marketers design and target more efficiently. What happens when users stop visiting web pages and just rely on summaries? That’s a genuine question I have, and I would love to read some possible answers.

It’s not that I’m a big fan of ads, but I remember that back in the heyday of RSS there was all sorts of tension between web publishers and the web users who relied on RSS readers over lost ad impressions. Then RSS feeds of web articles got truncated into teasers to send users clicking. Then ads got inserted into RSS. Will the same thing happen with ads being inserted into AI summaries? How would that work with something like an AI Pin or the Rabbit R1? (Although I doubt those devices will be around long enough for us to find out.)

Given that one of the other predicted AI trends is being able to verbally converse with whatever AI machine you choose, how would that work with advertising? Will a user need to listen to ads before getting a response to their prompt? There’s already a lag in compute capacity resulting in delays delivering responses to queries with most current AI engines. I don’t imagine waiting for an ad insertion will help improve on that. 

Again, these are sincere questions that I’d love to hear some thoughts on. Just don’t summarize them.

You can find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome.