The AI Revolution Has a Foundation Problem
AI is powerful in focused, well-scoped applications. The broad enterprise transformation narrative? That requires a foundation most organizations never built and still aren't building.

Here’s something nobody selling AI wants you to think about too hard: every promise being made about enterprise AI — agents, copilots, RAG, agentic workflows, whatever the vendors are calling it this quarter — depends on a foundation that doesn’t exist in most organizations.
Not because the AI is bad. Because the information underneath it is a tottering tower of redundant, outdated, trivial, ungoverned, mislabeled, duplicated, scattered, and often unfindable content.
ROT, if you want the acronym. Redundant, outdated, trivial. The IDP (intelligent document processing; formerly ECM, formerly Document Capture, and even more formerly, microfilm and micrographics) industry has been talking about ROT for decades. It hasn’t gotten better. It’s gotten worse — there’s just more of it now, spread across more systems, with AI piled on top like a penthouse suite bolted onto a building with no plumbing.
I know this because I’ve been watching this industry not fix these problems for 30 years, as an editor, marketer, and ghostwriter.
I started in the industry as an editorial intern at AIIM in February 1995. True story: when I was cleaning out the office a few years later after becoming editor, I found my resume in the “No Way” pile. I stumbled into this industry sideways through an open screen door — and never totally left.
One of the things that has stuck with me across all of those years is how little things have changed at the core. I recall a “paperless workflow” poster in the editor’s cube when I started that, with some slight updating, wouldn’t be out of place today. The one thing I wish had changed — and it hasn’t — is the gap between the need for what the IDP industry produces and the actual implementation of those tools by organizations.
That gap is about to matter a lot more than it used to. Because every promise being made about AI depends on the stuff this industry has been trying to get organizations to do for three decades.
I recently saw a conversation about trying to find research on converging information management teams with knowledge management teams inside large enterprises. A big multinational had asked an industry analyst (who has been in the industry since the late 80s) if any such research existed. He couldn’t find any. A practitioner who’d tried it at a large retailer said the two teams couldn’t even get past the fact that they didn’t share the same vocabulary. They worked at it for a while, got busy with their own goals, and drifted apart. I recalled suggesting something similar to AIIM leadership 16 or 17 years ago, and they weren’t interested.
If the professionals whose literal job is managing information can’t converge their own teams — inside organizations that know they need to — what hope does a generic enterprise have of getting AI-ready across departments?
Note: I used both Claude and ChatGPT to compile this. Everything here is pulled directly from my thinking. The comparison of 2014’s issues to today’s is all genAI, but pulled directly from the ebook (explained immediately below) or sites I directed the tools to look at. It’s my work, including most of the language. For those interested, I explain what I did at the end of this tome.
The Comparison Test: 2014 to Today
In 2014, I pulled together an ebook to market the AIIM conference that year. I stumbled across it last week and had the thought, “Huh, I wonder how many of the issues in there are the exact same ones we have now.” The title was “From InfoChaos to Information Opportunity.” It was a preview of the AIIM Conference — each speaker and sponsor contributed a short piece that we published online and then used in the ebook. I edited the thing and gave the designer instructions for what I wanted it to look like. I remember the conversations back then, and they’re the same ones the industry is having today. The ebook (I’ve embedded it below) covered governance, fragmentation, findability, cloud sprawl, legacy workarounds, workflow exceptions, paper, metadata, change management, and the eternal struggle to get business value out of unstructured content.
I directed both Claude and ChatGPT to read it and then pointed them at what current vendors and analysts like AIIM, Gartner, McKinsey, Deep Analysis, Hyland, OpenText, IBM, Forrester, and others are talking about right now.
The overlap is not subtle.
Of the 10 big problem areas from the 2014 ebook, about 8 are still being discussed in almost identical terms. If you allow for the AI-era vocabulary swap — “information chaos” becomes “AI-ready data,” “ECM” becomes “content intelligence,” “information governance” becomes “AI governance” — it’s closer to 9 out of 10.
The industry has been better at changing labels than solving root problems.
Here’s the list.
1. Governance, retention, privacy, and compliance
Still the same fight. The 2014 ebook treated governance as foundational — eDiscovery risk, privacy exposure, the absence of real retention policies. Johnny Lee’s chapter was literally titled “I Don’t Need Governance” with the parenthetical “(or lower ediscovery costs or access to my content or...)” — that sarcastic tone tells you how far behind organizations were, 12 years ago.
In 2026, AIIM’s Global Summit has an entire opening keynote on AI governance and regulatory compliance — the EU AI Act, ISO 42001, cross-border data rules. Gartner predicts half of all organizations will adopt zero-trust data governance by 2028 because AI-generated content is creating a whole new class of unverified information to manage. And Gartner’s 2026 Magic Quadrant for data and analytics governance platforms has expanded significantly into unstructured data governance — something the 2025 edition barely touched.
Different acronyms. Same risk. Same organizations dragging their feet.
2. Fragmentation and silos across repositories
In 2014, the problem was file shares, SaaS silos, Dropbox, laptops, and phones. Alan Pelz-Sharpe — who is still covering this industry, now through Deep Analysis [Note: I’ve known Alan since the late 90s and currently do some marketing work for Deep. On a bullshit scale of 1 to 10, he’s a zero.] — called it a “Polar Vortex” of fragmentation that had dumped a pile of problems in its wake. The goal of one master file, stored and managed in one location, was a worthy ECM goal, he said, but it proved too impractical for most.
Today, AIIM’s own research says the average enterprise manages over ten information management platforms. Forrester says organizations store content across an average of 21 systems. Hyland’s new CEO — who came from the structured data world at Informatica — says content management was “perennially a black box.” His exact framing: large language models now allow you to give structure to unstructured data, so you can do the things we’ve been doing with structured data — without structured data. That’s a vendor pitch for solving a problem the industry identified a decade ago.
Same problem. More tabs open.

3. Findability, metadata, search, and dark data
The 2014 ebook had an entire section on metadata being taken for granted. Another chapter complained about dark data straining IT budgets “for no good reason.” People couldn’t find anything, and the metadata that would have helped them didn’t exist or wasn’t maintained.
In 2026, AIIM is still centering “knowledge enrichment” and “knowledge discovery” as conference tracks. OpenText is selling tools to automate metadata tagging and surface content across repositories. The AIIM/Deep Analysis Market Momentum report found that unstructured data knowledge bases are spread across multiple applications. And according to recent industry research, only 11% of organizations have high metadata management maturity.
Twelve years of progress and we’re still swimming in information we can’t properly classify or find. That’s not a technology gap. That’s an effort gap. The technology exists. It has for a while. People just haven’t done the work.
4. Cloud, mobile, remote work, and BYOD versus control
The 2014 ebook was already panicking about information leaking through personal email, unmanaged devices, and free sharing tools. Monica Crocker’s chapter — one of the best in the ebook, a person with a sense of humor writing about a real problem — described how Land O’Lakes “grasped the cloud in a panic” because employees were storing company content on thumb drives and personal accounts. Her line about what happens when you don’t build an official solution stuck with me: people will “staple together a tar paper shack without adequate sanitary facilities and invite their friends to live there, too.”
That was 2014. Today the leak surface includes AI embedded in browsers, desktops, and operating systems. Shadow IT has become shadow AI. Gartner’s 2026 CIO survey found that 84% of enterprises expect to increase GenAI funding this year — but the governance to match that spending doesn’t exist yet. The devices changed. The control problem didn’t. And the tar paper shacks now have chatbots in them.
5. Legacy systems and poor usability driving workarounds
The ebook said it plainly: when the official system is painful, people route around it.
I wrote about this in 2022 when I did a presentation for the AIIM Florida Chapter. One of the things that sticks out to me across 30 years is that gap between the need for what this industry produces and the implementation of those tools by organizations. The technology exists. It works. People just don’t use it because it’s a pain in the ass — or it was implemented without anyone asking the people who would have to use it what they actually needed.
In 2026, Hyland is still pitching migrations from legacy ECM to its unified cloud platform. OpenText is positioning its AI Data Platform as a way to “activate” content trapped in old systems. The word “activate” is doing a lot of heavy lifting there — it means the content has been sitting around doing nothing. For years. In systems people hate.
Meanwhile, Hyland’s CEO is adamant that delivering AI and cloud value “should not start with a content migration” — which tells you that’s exactly what some organizations are being told to do. And if migrations are still the conversation in 2026, the legacy problem hasn’t gone anywhere.

6. Workflow and process automation that breaks on exceptions
In 2014, ibml’s Dan Lucarini warned about “casual capturers” passing errors directly to the most expensive workers in the loop — underwriters getting bogged down with missing paperwork because the capture step was pushed out to people who didn’t know what they were doing. BancTec said it simply: connect content and process.
In 2026, Forrester makes what it calls a critical and often-overlooked point: AI agents need clear, step-by-step instructions to perform enterprise tasks reliably, and in most organizations, that process knowledge still lives in fragmented workflows, undocumented tribal knowledge, and unofficial workarounds. Forrester’s readiness test is blunt: ask yourself if you know exactly where to find the documentation for how your organization actually does the work. If the answer is “no” — and it usually is — your AI agent is going to be about as useful as a new hire on day one with no onboarding, no training, and no one to ask.
I’ve been saying this for a while: companies are generally shit about understanding workflows, and agentic AI will depend on workflows and workflow orchestration to provide actual business value. Color me dubious of that happening any time soon.
The happy path gets automated. Real work still lives in the exceptions.
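To make that concrete, here is a deliberately toy sketch in Python. The document types and handlers are invented for illustration (nothing here comes from a real product): the automated “workflow” only knows the cases someone bothered to document, so everything else lands in the exception queue.

```python
# Toy illustration only: invented document types and handlers.
# The "workflow" knows exactly the cases someone documented, and
# nothing else. Undocumented exceptions fall out to a human queue.

def route(document, handlers):
    """Route a document by type; unknown types become exceptions."""
    handler = handlers.get(document["type"])
    if handler is None:
        return "escalate_to_human"  # the exception queue, i.e., the real work
    return handler(document)

# The documented happy path: two cases, lovingly automated.
handlers = {
    "invoice":        lambda d: "auto_paid",
    "purchase_order": lambda d: "auto_matched",
}

print(route({"type": "invoice"}, handlers))                # auto_paid
print(route({"type": "smudged_fax_from_1997"}, handlers))  # escalate_to_human
```

An AI agent dropped into this picture doesn’t change the shape of the problem: if the step-by-step knowledge lives in someone’s head or an unofficial workaround, the agent has nothing to follow, and the exceptions still pile up at a human’s desk.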
7. Paper, manual intake, re-keying, and accuracy problems
The 2014 ebook explicitly complained about re-keying and the stubborn persistence of paper. I.R.I.S.’s Frank Tiedt literally listed “time spent re-keying data” as a top business problem and noted that it was “tedious, time consuming and generates arthritis.” I appreciated the honesty.
In 2026, the AIIM Global Summit agenda includes a session on a nonprofit — Boys Hope Girls Hope — that had to digitize decades of paper records before it could even begin to use AI for classification and routing. And the AIIM/Deep Analysis research found that 58% of organizations report capture inconsistencies and 62% report incomplete data.
Are we getting closer to a paperless office? I’ve been asking that question for 30 years. Paper never really died. Neither did manual entry and the errors that come with it. The industry still talks a big game about transformation while plenty of core processes remain stubbornly manual.
Here’s an “ebook” I created for World Paperfree Day in 2012. I’ll try to download and embed, but the interwebs are being finicky. Click the image to see the whole thing. Scanners are faster. Capture/recognition tools are WAY better. Same problems exist.
8. People, process, change management, and executive sponsorship
Technology was never the whole issue. The 2014 ebook’s chapter on zero-based information governance laid out eight tenets, and the first three were executive mandate, cultural attention, and personal accountability. Deborah Juhnke wrote that without those three things, information governance initiatives fail. Early technology purchases, she said, mask the problem and often fall short of full implementation.
In 2026, McKinsey says nearly two-thirds of firms have failed to scale their AI projects. Forrester describes a “vicious cycle” where lack of employee trust leads to weak governance, which leads to perpetually deprioritized data cleanup, which leads to AI tools that underperform — which further erodes trust. Around and around it goes.
Different decade, same management failure mode. And it’s the one that matters most because you can’t buy your way out of it. You never could.
9. Getting business value out of unstructured content
In 2014, John Mancini’s keynote (John was AIIM’s president at the time) framed four questions for the industry, and the last was the simplest: how do we get any business insight out of all the information we’re gathering?
In 2026, that same question shows up wearing much fancier trousers. It’s now called “AI-ready data,” “content intelligence,” “RAG,” “knowledge enrichment,” and “agentic AI.” Gartner predicts that through 2026, organizations will abandon 60% of AI projects because their data isn’t ready. Hyland says 80% of enterprise content is usually unstructured, disorganized, and incomplete. AIIM’s own mission statement is still centered on helping organizations “manage and prepare unstructured data for AI and automation.”
The Hyland CEO framed it well: content management was perennially a black box. CIOs would ask him at Informatica what he could do for their unstructured data. Now that LLMs exist, the answer is theoretically “a lot.” But only if that content is clean, governed, and accessible. Most of it isn’t.
Same core problem. Better outfit.
10. “Social business” as a standalone theme
This is the one that feels more renamed than identical. The 2014 ebook gave it a full keynote. Pelz-Sharpe’s chapter explored what social business even meant, and whether the enterprise social networks of that era could deliver real value or were just “social for social’s sake.”
In 2026, nobody uses the phrase “social business” or talks about enterprise social networks as their own category. But the underlying concern — how people collaborate, share know-how, and connect knowledge to work — has been absorbed into broader conversations about collaboration layers, knowledge discovery, copilots, and AI-assisted work.
The problem persists. The framing doesn’t stand on its own anymore.
So You Say You Want an (AI) Revolution
Now, let me clarify something because I don’t want to come off as a Luddite screaming into the void.
AI is genuinely powerful. Not “powerful” in the way a lot of the hype around AI wants you to believe — actually, functionally powerful in specific, focused applications. GenAI and agentic AI can find connections across vast amounts of data and information that a human simply doesn’t have the time or speed to process (hello, the thing you’re reading right now). In bounded, well-scoped use cases — claims intake, invoice processing, document classification, contract review, onboarding, fraud detection — the gains are real and measurable. Deep Analysis has been saying for years that IDP was the first really profitable application of GenAI in the enterprise, and that’s only become clearer since. Firms using AI-powered due diligence tools are already cutting contract review time significantly. That’s not vaporware. That’s working technology applied to a well-defined problem.
Where it works is where the content is clean, the process is documented, and the scope is narrow enough that the AI doesn’t have to guess about context. In other words: the exact conditions most businesses don’t have.
And that’s the gap this whole piece is about.
AI is a useful tool. I use it every day. I’m using it right now — again, this piece was almost entirely compiled using genAI, drawing on my stored thoughts and past writing. It’s a prediction engine trained on data that gives the illusion of intelligence because it does stuff really, really fast. And yes, it is extremely useful, as limited as it is.
But AI doesn’t think for you. It can’t. It’s a server in a rack . . . somewhere.
Now apply that same thinking to the enterprise AI narrative.
I keep hearing that AI is going to take over everything. That it’s going to replace knowledge workers, eliminate back-office staff, and automate the enterprise. And the executive class is sprinting to make that narrative real — or at least to use it as cover for headcount cuts they wanted to make anyway.
Here’s a question worth asking: is anyone pushing the “AI will devastate entire career paths” narrative who doesn’t have a financial stake in that being true? Altman, Amodei, Nadella — they’re running companies with billions in infrastructure bets that only pay off if adoption is fast, broad, and deep. That doesn’t make them wrong about the long arc. It makes them unreliable narrators about the timeline.
And the timeline is where reality gets interesting.
Deloitte’s 2026 State of AI report found that only 25% of organizations have moved even 40% of their AI pilots into production. Talent readiness sits at 20%. Governance at 30%. Data management at 40% — and these numbers actually decreased compared to last year. Companies are less prepared than they were 12 months ago, even as they set more ambitious goals. PwC’s 2026 Global CEO Survey found 56% of CEOs report getting “nothing” from their AI adoption efforts. And MIT says 95% of generative AI pilots fail to move beyond the experimental phase.
Meanwhile, 60% of hiring managers admit they emphasize AI’s role in reducing hiring because it sounds better than citing financial constraints. When New York gave companies a legal form to formally attribute layoffs to automation, zero out of 160 companies — including Amazon and Goldman Sachs — checked the box. Companies are publicly blaming AI for cuts they won’t legally attribute to AI.
The hype is doing the damage the technology hasn’t earned yet.
But AI can’t do any of that on a foundation this shaky.
Here’s why I’m dubious, and it ties directly to the 10 problems above:
AI agents depend on documented workflows. Most organizations don’t have them (problem #6). Forrester says process knowledge lives in tribal knowledge and unofficial workarounds. You can’t agent your way through a process that isn’t written down. I read recently about an AI agent on a company website negotiating a cut-rate deal with a customer in the UK: £8,000 worth of service for £2,000. It could even have been two agents negotiating with each other. That’s what happens when you deploy agents without understanding the workflows they’re supposed to follow.
AI models depend on clean, governed, findable data. Most organizations don’t have that either (problems #1, #2, #3, #9). Gartner says 63% of organizations either don’t have — or aren’t sure if they have — the right data management practices for AI. Only 11% have high metadata management maturity. The average enterprise manages content across 10+ platforms with manual processes bridging the gaps. That’s not a foundation for an AI revolution. That’s a foundation for a very expensive AI disappointment.
AI readiness depends on people and change management. The hardest problem of all, and the one technology can’t solve (problem #8). McKinsey says nearly two-thirds of firms have failed to scale their AI projects. Forrester’s “vicious cycle” — distrust leads to weak governance leads to bad data leads to bad AI leads to more distrust — is the same people-and-process failure mode this industry has been watching for decades.
AI still needs the basics the IDP industry sells. You simply can’t digitally transform an organization — with or without AI — without capture and process automation tools. That was true when I was writing about it in 2022 (or 1995). It’s true now. Every agentic workflow, every RAG deployment, every copilot rollout depends on documents being captured, classified, indexed, stored, governed, and retrievable. If you can’t find the right document, your AI can’t either.
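You can sketch that dependency in a few lines. This is a toy keyword retriever standing in for real embedding search, with invented documents; the point survives the simplification: retrieval picks whatever looks most relevant, and in a ROT-filled corpus, that’s an outdated duplicate.

```python
# Toy sketch, not any vendor's stack: a RAG pipeline is only as good
# as its retrieval step, and retrieval is only as good as the content
# hygiene underneath it. All documents and names are invented.

def retrieve(corpus, query_terms):
    """Return the doc sharing the most terms (text or tags) with the query."""
    def score(doc):
        words = set(doc["text"].lower().split()) | set(doc.get("tags", []))
        return len(words & set(query_terms))
    return max(corpus, key=score)

# ROT-style corpus: duplicates, outdated copies, no metadata.
messy = [
    {"id": "draft_v2_FINAL_old", "text": "expense policy travel limits 2014"},
    {"id": "copy_of_copy",       "text": "expense policy travel limits 2014"},
    {"id": "current_policy",     "text": "reimbursement rules"},  # wrong words, no tags
]

# Same content after cleanup: duplicates gone, current doc properly tagged.
governed = [
    {"id": "current_policy", "text": "reimbursement rules",
     "tags": ["expense", "policy", "travel", "2026"]},
]

query = ["expense", "policy", "travel", "2026"]
print(retrieve(messy, query)["id"])     # an outdated duplicate wins
print(retrieve(governed, query)["id"])  # current_policy
```

Swap the naive scorer for embeddings and the picture doesn’t change much: the pipeline can only rank what capture, classification, and metadata made findable in the first place.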
Deep Analysis put it best in a recent piece: “We are not here to stop the AI train; we are here to lay the tracks so it doesn’t derail spectacularly at the first curve.” They describe the AI opportunity for information management professionals as requiring “intelligent guerrilla warfare” — not grand head-on assaults but strategic, essential infiltration. Making governance, structure, and the ethics of information impossible to ignore. Because without that foundation, as they put it, “this entire glittering AI edifice is built on a ghost — the ghost of messy, uncontrolled, and profoundly dangerous data.”
That’s not my hyperbole. That’s an analyst firm that has been covering this market for decades. And, again, I know these guys and do work with them. They are some of the smartest folks I’ve ever met whose work and advice has been informing the IDP industry for decades. In short, they’ve shaped it.
Behind every customer-facing automation, there’s some poor monkey in the background pushing all the right buttons to provide that good customer experience (I’ve been that monkey on the back end of HubSpot automations). The “promise” of AI agents talking to each other is supposedly going to remove this pain from us. As I said above, that depends on companies actually understanding their workflows and orchestrating them, and color me dubious of that happening any time soon.
So AI will absolutely automate parts of the enterprise. It already is in bounded, well-scoped use cases — claims intake, invoice processing, document classification, onboarding. The kind of tightly defined, document-heavy work this industry has been chipping away at for decades. Celonis — the sexy process mining company — touts invoice processing and accounts payable as part of their platform. Same low-hanging fruit capture vendors have been picking for 40 years (40 is not a typo, this stuff has been around for a long time). Some things never change.
But the broad, sweeping, “AI is coming for everyone’s job next Tuesday” narrative? That requires clean data, strong governance, documented processes, good metadata, findable content, and systems people actually use. Most organizations don’t have any of those things. They didn’t have them in 2014. They don’t have them now.
The Lazy Marketer Problem Isn’t New Either
One more thing, because I see this playing out in my own world of marketing, especially content marketing.
The lazy marketer (and the cheapskate C-suite) who skimps on marketing — who views it as a checkbox, not a differentiator — has always looked for cheap shortcuts. Content mills outsourcing to Fiverr for $12 blog posts. Keyword stuffing. Stenography “reporting” that just restates a vendor press release. None of this is new. AI just allows that process to move faster with worse results if there’s no human intelligence, filtering, and taste applied.
Having been on the receiving end of marketing slop from IT vendors for most of my career starting in the mid-90s, I can un-merrily report that people often write worse than AI. Because if genAI is giving us average, that means it’s trained on even worse content than what it spits out.
The same dynamic playing out at the enterprise level — executives hoping AI will compensate for missing effort — is playing out in marketing too. The exec who says “why are we paying a content person when AI can write blogs” is making the same mistake as the law firm that stops hiring junior associates. They’re cutting based on the demo, not the results.
But that’s a piece for another day.
The Word Salad Hasn’t Helped
The word salad changes every few years. ECM became content services became intelligent information management became IDP became “AI-ready content.” I once jotted down all of the phrases used for process and capture technology — intelligent capture, intelligent document capture, intelligent automation, hyperautomation, digital transformation, capture 2.0, process automation, intelligent process automation, intelligent document automation, digital process automation — and I found vendors using multiple combinations in the same paragraph on the same Web page. If there’s any whiff of AI, the product is “intelligent.”
Here’s one I copied from a vendor website a few years ago:
“Hyperautomation with Intelligent Document Processing, Business Process Management, Robotic Process Automation, Automated Governance, and Integration.”
I’m not making that up. That is from an actual vendor website. They intentionally pushed “publish” on that garbage.
What does that even mean? I have no idea. And I’ve been reading about and/or covering this stuff for 30 years.
For the vendors out there: what’s that confusion doing to your customers?
The constantly changing terminology doesn’t just cause confusion. I suspect it has actively hurt industry adoption over the years. Every time the industry renames itself, organizations have to re-learn what they’re buying. That friction adds up. And now AI is layered on top of that confusion, promising to solve problems that require the very tools organizations couldn’t figure out how to buy under the previous three names.
The Honest Version
When a CEO announces that AI will transform the company by Q3, what I hear is: “I don’t have the patience to fix the foundation, so I’m going to pretend the foundation doesn’t matter.”
It matters.
Given how slowly companies have addressed these problems over the past 12 years — the past 30, if I’m being honest — expecting AI to sweep through the enterprise like wildfire is not a plan. It’s a fantasy. AI will move quickly in narrow, well-defined lanes where the content is clean, the processes are documented, and the governance is in place. (Or it’ll move quickly and the press will have a field day covering those clusterfucks.) It will move slowly everywhere else.
And “everywhere else” is most of the enterprise.
This is actually good news for IDP vendors if they frame it right. Every organization that wants to use AI needs what this industry sells first. The capture, the classification, the indexing, the governance, the workflow automation — all of it. AI doesn’t replace the need for information management. It makes the need for information management urgent in a way that two decades of conference keynotes never could.
The opportunity isn’t “AI will do it all for you.” The opportunity is “you literally can’t do AI without doing this first.”
Like everyone, I’m probably wrong. Just not in the way it looks like I’ll be wrong.
I don’t think AI is going away or slowing down in capability. I think the organizations trying to use it are going to keep tripping over the same problems they’ve been tripping over since 2014 — and since 1994, if I’m being honest. The technology will keep getting better. The foundations will keep lagging behind. And the gap between the keynote and the cubicle will keep being wider than anyone with a financial stake in AI wants to admit.
2014’s information chaos is 2026’s AI-readiness problem. Same mess. Better branding.
I’m heading to the AIIM Global Summit in Baltimore at the end of April, since that’s where I live (well, heading to the lobby, at least; I’m not paying to go). If you’re an IDP or information management vendor and your marketing sounds like the word salad I just described — or worse, sounds like every other vendor in the space — I can help with that. Social ghostwriting, newsletter ghostwriting, content strategy grounded in 30 years in this industry. Reply to this email or reach me at bryant@simplyusefulmarketing.co.
For more on the specific challenges facing AI and automation adoption in this space, I’d recommend reading Deep Analysis — particularly their pieces on the lengthy hikes awaiting AI in 2026, the demand for hyper-specificity, and how information management professionals can avoid losing the opportunity AI presents. They’re one of the few analyst firms that don’t buy into the bullshit. While you can accuse me of bias, I’d say the same if I weren’t working with them. I don’t pander to anyone.
How I Compiled This
I did not “write” this, yet this is all me just the same. So I’m in a Slack group of folks I knew from my time at AIIM — analysts, users, consultants (some of the smartest folks in the IDP industry you can find). There’s an underlying “why the hell don’t more companies pay attention to this shit” vibe in some of the conversations. (Talk to a vendor in the industry, you’ll get the same vibe.)
I was literally looking for something else and noticed that 2014 sneak peek. So I thought, “Hmmm, genAI is good for brute-force comparison, let me run this against what’s out there today.” So I did. I had ChatGPT read it and compile the main themes. Then I directed it to look at those themes relative to what’s out today across a few vendor, AIIM, analyst, and consultant sites. I was like, “Well, shit, that’s pretty good.” So I ported it over to Claude and ran it through again. After a few tweaks, you’ve read the result.
It is me. Other than the 10-point comparison, which is almost entirely AI, the rest is pulled directly from my thinking and words — a combo of prompts (thinking and framing), a webinar presentation and an article I wrote based on it, the preview document (which was conceived, assigned, and edited by me; John had handled the vendor stuff), and various other snippets and background AI research I’ve done in both GPT and Claude.
As for the 10-point comparison — that’s exactly where the power of AI lies. It would have taken me a full day, maybe longer, to go point by point through this myself. That ability to pull together massive volume is spectacular. It’s dangerous, because we need to double-check for sanity, but . . . wow.
So while I didn’t “write” this in the traditional sense, it’s me. My words. My thinking. My edits on anything GPT/Claude popped out that I didn’t like. I’m pretty sure I won’t “write” many things like this, but it can be useful in some situations.
For anyone interested, this is the AIIM 14 ebook that inspired this thing.
Musical Interlude
Amy Winehouse from Jools Holland (a fantastic UK music show). Sigh. Wish she HAD taken rehab seriously. What a fucking loss.
Joss Stone. I love the Dusty Springfield original. My favorite Dusty is after this, but Joss does a good job on this cover. She’s pretty great. Been a few years since I’ve listened to her.
But, OMFG, Dusty . . . She of course rules on the above song, but this . . . yeah, baby. Do yourself a favor and push play on Dusty in Memphis on a road trip or on a day when you’re doing some cooking.
Have loved this Trampled by Turtles song since I first heard it in The Way, Way Back (a movie that I thought was a comedy because I saw Steve Carell; it is funny, but not a comedy). Anyway, this song rules (the soundtrack is good top to bottom too).
Come into the world alone
And you go out of the world alone
But in between, there’s you and me
Oh-oh
Man, if that ain’t life and the desire to connect, I don’t know what is.
Just as I was scheduling, Etta popped up. I’m not one of those “oh, music was so much better when . . .” types, but, damn, you ain’t getting better than this.




