Claude’s new memory
I’m sure most of our users have by now noted the addition of Claude’s “search and reference chats” tool. It’s something we’ve been anticipating, of course, and it’s pretty much exactly what we expected. Like ChatGPT’s “memory” (as discussed here https://basicmemory.com/blog/the-problem-with-ai-memory ), it’s a helpful addition…though not massively so yet.
When one of our users asked Claude to contrast it with Basic Memory’s tools, it said “Chat search helps us reference what we discussed before. Basic Memory helps us understand the meaning and patterns from those discussions. Chat search is conversational context; Basic Memory is wisdom distillation.” “Wisdom distillation” feels a bit flowery for my taste. I read it more as “Claude search = recall, Basic Memory = meaning and pattern recognition.”
Using it myself, I’ve found that it uncovers about 1/10th of the relevant material that Basic Memory does. But, hey, that’s better than nothing, and hopefully it will improve over time. Even if it does, though, it’s still nowhere near what Basic Memory can do.
One thing that Claude’s new built-in memory can’t do is function the way that Basic Memory can, as a prompt and persona manager and as a launching tool for pretty much any project. The built-in memory isn’t designed to hold knowledge, especially outside of the boundaries of the platform itself. It’s just designed to find conversations. And that distinction matters a lot, as anyone using Basic Memory can see.
It’s also interesting to note that last week Anthropic launched memory for Team and Enterprise subscribers: https://www.anthropic.com/news/memory. That could definitely be pretty cool, and it’s a feature we’re excited to introduce ourselves.
Claude’s increased integration with Microsoft products
This news first arrived with Anthropic’s announcement (https://support.anthropic.com/en/articles/12111783-create-and-edit-files-with-claude) that Claude can now create Excel spreadsheets, PowerPoint presentations, Word docs, and PDF files that can be downloaded or saved to Google Drive. Seems like a pretty valuable and interesting turn, and one that moves us all one step closer to the dream of never cutting and pasting from AI to our final product ever again.
And it appears it’s going to be a two-way street. Microsoft has announced that it’s going to increase AI capabilities in its Office 365 suite with the integration of Claude’s models. Which raises questions about the Microsoft/OpenAI relationship (Microsoft is OpenAI’s largest financial supporter with something like $13 billion invested in them so far), but some sites I’ve read suggest the shift indicates a move away from reliance on a single AI brand, and that seems right.
One footnote that seems especially interesting is that Microsoft isn’t going to work with Anthropic directly but will instead access Claude via Amazon Web Services, despite AWS being a major cloud competitor. Which just shows how tangled and weird Big Cloud/AI alliances have become.
Wild Claude backlash and Anthropic’s (sort of) confession
If you pay attention to such things at all, you couldn’t help but notice that the Anthropic and Claude subreddits have been absolutely flooded with endless complaints lately, mostly about a perceived degradation in service that seems to have driven many users to the brink of actual insanity. Even by the standards of subreddits constantly plagued by “That’s it, I’m cancelling” posts, the volume in the past couple of weeks has been so enormous as to be undeniable. Many of the complainers claim they’re moving to Codex.
Forgive me for quoting a quoter, but TechCrunch cited a Reddit user who said exactly what I’ve thought at least a thousand times over the past few weeks: “Is it possible to switch to Codex without posting a topic on Reddit?”
Our founder, Paul, has always argued that many of these outraged users are bots, and that position got some backing from on high this week when Sam Altman posted about the phenomenon: “I have the strangest experience reading this: I assume it’s all fake/bots, even though in this case I know codex growth is really strong and the trend here is real.”
It’s hard to pin down exactly what’s going on. It definitely relates to the crest and crash of fandoms and the ways in which people seem to see themselves as “Claude people,” “ChatGPT people,” “Grok people” etc. The tribalism is real (and kind of scary).
Whatever is happening, it appears the controversy wasn’t driven exclusively by bots, because Anthropic—cornered perhaps—released a statement saying that they’d received reports that Claude and Claude Code users were “experiencing inconsistent responses.”
Ha! That’s one way of putting it.
They claim one Sonnet 4 issue essentially spanned the entire month of August, was at its worst from August 29th to September 4th, and was then resolved. Not sure users agree, since plenty of the complaints were lodged after that date.
They also note that “Importantly, we never intentionally degrade model quality as a result of demand or other factors.” Hmm. Could that really be the case? Hard to say, but I’m not sure I’m convinced.
Whatever the truth, it highlights one of the reasons we built Basic Memory: your tools should be stable, model-agnostic, and under your control, not shifting beneath you without explanation.