
What AI Truly Lacks Now Is Not Computing Power, But 'Memory'


Over the past couple of years, I’ve read numerous analysis articles about AI, and I’ve written a few myself. But the more I write, the more I feel the focus of the discussion might be a bit off. Whether it’s computing power, model parameters, or comparisons of AI investment scales between China and the US, these essentially remain comparisons at the “outcome level”: who computes faster, who applies it more, who raises more capital. This way of discussing things isn’t meaningless, but it implies a hidden assumption: as long as you master enough knowledge, data, and methods, wisdom will naturally emerge.

However, when you place AI back into the real-world structures of organizations, enterprises, and society, this assumption immediately falls apart. In the real world, wisdom is never a mere accumulation of knowledge; it is what settles out of long-term experience. And experience is precisely what most current AI lacks most severely.

Wisdom Is Not Just Knowledge, But Also Experience

Many people tend to understand AI’s capability as “knowing a lot.” But real-world wisdom is often reflected in what one has seen, what one has lived through, and what mistakes one has made in similar situations. The real value of a seasoned lawyer, engineer, or manager lies not in how many rules they can recite or how much data they have mastered, but in having weathered enough uncertain scenarios to know when the rules should be broken and when the data should be read against its face value.

This experience isn’t a set of abstract knowledge entries but a whole body of accumulated judgment tied to timing, environment, emotion, and gamesmanship. The problem is that current AI lives almost entirely in the “world of knowledge” and rarely truly enters the “world of experience,” let alone bears the consequences of that experience in real environments.

In the real business world, almost no important decision is made automatically by running a single rule. Take the same product: why does Customer A get a 10% discount while Customer B gets 20%? If you actually conduct a post-mortem on such decisions, you’ll find the answers are often not in the sales system or in Excel spreadsheets. Instead, they are scattered across all kinds of unstructured information: background mentioned in a phone call, the identity of the contact person, a single sentence in a WeChat group saying “this customer’s situation is special,” a temporary compromise during cross-department communication, or even a manager’s intuitive call based on a risk assessment in that moment.

This information is extremely important but is almost never systematically recorded. What ultimately remains is just cold, hard result data.

This also explains why many companies, despite having “a lot of data,” find themselves at a complete loss when it comes to summarizing experiences, formulating rules, or replicating successful paths. Because data records outcomes, not the decision-making process itself.
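To make that asymmetry concrete, here is a minimal sketch (all field names and values are hypothetical, purely for illustration): the structured record a sales system keeps, next to the unstructured context that actually explained the decision and was never written down.

```python
# A minimal sketch of the asymmetry described above. All field names
# are hypothetical; the point is what gets persisted versus what doesn't.

# What the sales system actually stores: the outcome.
transaction_record = {
    "customer_id": "A-1042",
    "product": "SKU-7",
    "discount": 0.20,
    "approved_by": "manager_03",
    "timestamp": "2025-06-12T14:03:00",
}

# What actually explained the 20% discount, scattered and unrecorded:
decision_context = [
    "Phone call: customer hinted a competitor quoted 15% off",
    "WeChat group: 'this customer's situation is special'",
    "Cross-department compromise: logistics needed the order this quarter",
    "Manager's gut call on churn risk at that moment",
]

# An AI trained only on transaction_record sees the 20% but none of the why.
```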

It’s worth noting that recently, after reading several articles about AI “Agents,” I’ve found many startups are already moving in this direction.

For example, Foundation Capital’s essay “AI’s trillion-dollar opportunity: Context graphs” argues: “The previous generation of enterprise software created a trillion-dollar software ecosystem by building systems of record. Today, startups building context graphs are laying the foundation for the next trillion dollars.” However, articles like this still hold a somewhat “idealistic” view of AI agents.
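The Foundation Capital essay argues for the concept rather than prescribing a schema, but the idea can be sketched as a small graph that links a decision node to the scattered evidence behind it. A minimal sketch, with names of my own invention:

```python
# A toy context graph: decisions as nodes, edges pointing to the
# unstructured evidence that shaped them. Purely illustrative; this is
# not Foundation Capital's design, just one way to picture the idea.

graph = {
    "nodes": {
        "decision:discount-20pct": {"type": "decision", "outcome": 0.20},
        "call:2025-06-10":         {"type": "phone_call", "note": "competitor quoted 15%"},
        "chat:wechat-msg-881":     {"type": "message", "text": "this customer is special"},
        "person:manager_03":       {"type": "actor", "role": "approver"},
    },
    "edges": [
        ("decision:discount-20pct", "informed_by", "call:2025-06-10"),
        ("decision:discount-20pct", "informed_by", "chat:wechat-msg-881"),
        ("decision:discount-20pct", "approved_by", "person:manager_03"),
    ],
}

def context_of(decision_id):
    """Walk outgoing edges to recover why a decision was made."""
    return [(rel, graph["nodes"][dst])
            for src, rel, dst in graph["edges"] if src == decision_id]

# context_of("decision:discount-20pct") returns the call, the chat
# message, and the approver: the "why" that result data alone drops.
```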

Of course, as I mentioned earlier in “AI Investment May Drag the US into Another Quagmire of a War on Terror,” the US now habitually blows AI bubbles, and without a bit of “idealism” it would be hard to inflate this one. After all, human experience is itself subjective, not necessarily correct, sometimes vague, and not always logically connected. The idiom “once bitten by a snake, ten years afraid of the rope” is itself a fuzzy judgment; whether it’s ten years, twenty, or one isn’t quantifiable, yet that falls within the error tolerance of human behavior.

Most AIs Only See Outcomes

Shifting the perspective back to AI, the fundamental problem becomes clearer. The vast majority of current AI systems, especially consumer-facing conversational AI, are essentially good at one thing: simulating from existing results. They can summarize, generalize, and generate, and they appear very intelligent, but only on the condition that those results have already been structured, recorded, and fed to the model.

Yet in the real world, the most valuable “experience” is precisely what cannot be structured. This leads to a very awkward situation: AI can analyze ten thousand historical transaction records but cannot understand why that deal “had to be done that way”; AI can generate seemingly reasonable decision suggestions but struggles to explain “why that choice was made at the time.”

In other words, many AIs today are more like perpetual post-mortem observers who have never truly participated in the decision-making process. The vast majority of decision-making procedures are actually told to them by humans.

You may have noticed a phenomenon in daily AI use: most AI products on the market can only remember context within a single conversation. And once a conversation runs too many turns, rules entered early on may simply be dropped.

From a technical perspective, this problem is often packaged as engineering issues like “context length,” “multi-turn dialogue capability,” or “memory module.” But frankly, such AI is turn-based. Once a conversation ends, the “wisdom” is cleared; once the context breaks, all rule judgments have to start over.
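Mechanically, “turn-based” means that on every request the client replays the conversation history into a fixed-size window, and when the window overflows, the oldest turns, often the ones carrying your rules, are silently dropped. A minimal sketch with a made-up token budget and a crude stand-in tokenizer:

```python
# A minimal sketch of why early instructions get "forgotten". The token
# budget and tokenizer are stand-ins; real systems differ in detail.

MAX_TOKENS = 200  # hypothetical context window

def count_tokens(msg):
    return len(msg["content"].split())  # crude stand-in for a tokenizer

def build_context(history):
    """Keep the most recent turns that fit the window, newest first."""
    kept, used = [], 0
    for msg in reversed(history):
        used += count_tokens(msg)
        if used > MAX_TOKENS:
            break  # everything older than this point is silently dropped
        kept.append(msg)
    return list(reversed(kept))

history = [{"role": "user", "content": "Rule: never offer more than 10% discount."}]
history += [{"role": "user", "content": "filler turn " * 30} for _ in range(10)]

context = build_context(history)
# The rule from turn 1 is gone: the model never sees it again.
print(any("Rule:" in m["content"] for m in context))  # False
```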

But real-world decision-making is never like this. Most of the time, our individual decisions are continuous and causal. Many judgments themselves depend on “what you thought before,” “why you didn’t choose that path back then,” or “has this exception happened before?” Without long-term memory, it’s impossible to form true experience.

This is why many AIs today seem “very articulate” but struggle to truly take on long-term decision-making roles. They resemble smart temporary consultants rather than veteran employees who have worked in an organization for years and understand priorities.

ChatGPT and Gemini have gone a bit further here, supporting memory-saving features. But in a technical post I recently saw on Hacker News, “I Reverse Engineered ChatGPT’s Memory System, and Here’s What I Found!”, the author, after reverse-engineering ChatGPT’s memory feature, found that what gets saved is essentially “metadata”: nationality, language, gender, age, profession, conversation direction, project type, and so on. It’s more of an engineering workaround than a complete decision memory.
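If that analysis is right, the saved “memory” looks closer to a profile card than to a decision log. A hypothetical illustration of the gap (field names are mine, not ChatGPT’s internal schema):

```python
# Hypothetical illustration of the gap the reverse-engineering post
# points at. Field names are invented for illustration only.

# Roughly what gets persisted: profile-style metadata.
saved_memory = {
    "language": "zh-CN",
    "profession": "lawyer",
    "age_range": "30-40",
    "current_project": "contract review tooling",
    "conversation_topics": ["AI agents", "data compliance"],
}

# What a real "decision memory" would also need, and doesn't get:
missing = {
    "past_judgments": "which clause risks I flagged and why",
    "rejected_options": "what I decided against, and the reasoning",
    "exceptions": "the one deal where we knowingly broke our own rule",
}
```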

Information Fragmentation Is Increasing AI’s Costs

If AI’s lack of decision memory is a global issue, then in China the problem is further amplified. The reason isn’t complicated; as I mentioned earlier in “Why Can Calls Between Different Carriers Go Through Unimpeded, While Chat Apps ‘Refuse to Communicate’?”: information fragmentation.

WeChat, DingTalk, Feishu, Douyin, Weibo, QQ—each forms a closed ecosystem. Decision traces are scattered across different platforms, permissions, and data structures. Superficially, this is due to protocol inconsistency, different encryption methods, and varying interface standards. But the root cause isn’t technical; it lies in commercial interests and ecosystem closure.

The core asset of these platforms isn’t “connection” but “user relationships” and “behavioral data.” And decision memory is precisely the most sensitive part of that data, the part they are least willing to share externally.

The result is that for AI to reconstruct a complete decision, it has to traverse not technical obstacles but one artificially erected wall after another.

The West has a slight advantage here, as I noted in a recent article by Tao Xiaotao, “The EU Seems to Prefer Making Rules with Its Mouth Rather Than Doing Cutting-Edge Technology.” In the Western context, whether driven by regulation or by a historically formed protocol culture, some room has objectively been left for partial interoperability between platforms. For example, after the EU’s Digital Markets Act took effect, major messaging apps such as WhatsApp and Facebook Messenger began opening up to third-party interoperability.

However, this isn’t a simple matter of who is more advanced; it reflects differences in institutions and ecosystems. Nor does it mean commercial giants will voluntarily give up their interests, but at least at the institutional level, data mobility hasn’t been completely walled off. And AI is exactly the kind of system that is extremely sensitive to the continuity of information.

AI’s Long-Term Persistent Memory Capability Is a Major Trend

Lately, a lot of group chats have been grumbling about soaring memory prices. Setting aside the “conspiracy theories,” “explosive demand for AI computing” is at least a plausible line of explanation. Still, I don’t think that explanation is complete.

Undoubtedly, the most explosive area of AI development right now is “agent” applications, and running agents requires large amounts of memory to participate in computation. Compared with the previous pattern of relying primarily on GPU compute, this is a significant change.

In other words, AI is transitioning from “compute once, give an answer” to “participate in decisions over the long term,” and from “turn-based” to “long-term memory.” For such systems, memory matters far more than raw computation. Remembering context, remembering past judgments, remembering the process details that resist structuring: once these become core capabilities, memory is no longer just a hardware spec but a new kind of infrastructure, much as NVIDIA’s GPUs became one.
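One concrete way to see the shift from compute to memory: the KV cache an inference server must hold for a long-lived session grows linearly with context length. A back-of-the-envelope sketch with hypothetical model numbers (a 70B-class model with grouped-query attention):

```python
# Back-of-the-envelope: KV-cache memory for one long-lived session.
# The model numbers are hypothetical; the formula itself is the
# standard one for transformer inference.

layers        = 80       # transformer layers
kv_heads      = 8        # key/value heads (grouped-query attention)
head_dim      = 128      # dimension per head
bytes_per_val = 2        # fp16
context_len   = 131_072  # a 128k-token running memory

# Keys and values are both cached, hence the factor of 2.
kv_bytes = 2 * layers * kv_heads * head_dim * bytes_per_val * context_len
print(f"{kv_bytes / 2**30:.1f} GiB per sequence")  # 40.0 GiB

# Every concurrent agent session holds its own cache: memory,
# not FLOPs, becomes the binding constraint.
```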

Looking back at this wave of AI frenzy, I increasingly feel that the real watershed in AI development may lie not in model parameters or inference speed, nor even entirely in computing scale, but in who can build sustainable AI memory systems, who can use AI to take over high-frequency, process-heavy, experience-dependent work, and who can link up the data and information scattered across different platforms and organizations.

The ultimate winner might not be the one who computes best, but the one who remembers best.

But as noted above, wisdom includes not only knowledge but also experience. When you ask an AI what to have for lunch, a future AI might draw on all kinds of data accumulated over time (dietary records, taste preferences, account balance, schedule) to produce the most “ideal” answer. But it will never know about the spur-of-the-moment choice you make when you suddenly remember how good the food at your grandmother’s house tastes.

#artificial intelligence #ai agent #contextual memory #decision experience #information fragmentation #chinese internet
