The biggest difference is not that OpenClaw can answer questions. Plenty of systems can do that. The more interesting difference is that it can carry context forward in a way that feels less disposable.
The real value emerges when you have genuinely hard technical problems. The "I trust you, figure it out" prompt unlocks autonomous multi-hour workflows.
CuspAI, which uses AI to help customers discover new materials, is reported to be raising $200 million or more in funding at a valuation of more than $1 billion.
As AI models have become more sophisticated, we see customers evolving their use of AI models: from answering questions in a chatbot-like fashion, to automating tasks on their behalf, to automating process flows within the organization.
Those early adopters found, to their surprise, not only that the models were good at puzzles, but that they could help break genuinely new ground. Soon, mathematicians were using AI to discover and prove new results, accomplishing in a day what would have once taken them weeks or months.
AI magnifies your strengths and weaknesses—if your deployment pipeline is broken or your code review process is manual chaos, AI will just help you ship broken code faster. Fix your fundamentals first, then pour gasoline on the fire.
why he sees AI as the best shot at restoring global growth and democratic resilience; and why it's so important that more nations than just the world's biggest benefit.
For all the book smarts of LLMs, they currently have little sense for how the real world works, and there's a growing list of "world model" AI startups trying to fill the gap.
There's a crucial difference between asking AI to categorize things repeatedly versus using it to build code that handles structured data through APIs. Yash used OpenClaw to build a Slack digest that pulls notifications via API endpoints—AI built the tool once, but the categorization runs on deterministic code (except for the final action/read/FYI sorting).
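The pattern above can be sketched in a few lines. This is a hypothetical reconstruction, not Yash's actual tool: the names (`fetch_notifications`, `classify`, `build_digest`) are illustrative, the Slack fetch is stubbed with fixed data instead of real API calls, and the one model-driven step is stood in for by a keyword heuristic so the sketch runs offline.

```python
# Sketch of "AI builds the tool once, deterministic code runs it":
# everything below is plain code except classify(), which in the real
# digest would be the single LLM call doing action/read/FYI sorting.
from dataclasses import dataclass

@dataclass
class Notification:
    channel: str
    author: str
    text: str

def fetch_notifications():
    # In the real tool this would hit the Slack Web API
    # (e.g. conversations.history); stubbed here with fixed data.
    return [
        Notification("#deploys", "ci-bot", "Build 412 failed on main"),
        Notification("#general", "ana", "Lunch menu posted"),
        Notification("#incidents", "oncall", "Please ack alert PD-7731"),
    ]

def classify(note):
    # The one non-deterministic step: in the real digest this is an LLM
    # call. A keyword heuristic stands in so the sketch runs offline.
    text = note.text.lower()
    if any(w in text for w in ("please", "ack", "review")):
        return "action"
    if any(w in text for w in ("failed", "error", "incident")):
        return "read"
    return "fyi"

def build_digest():
    # Deterministic assembly: fetch, bucket, format.
    buckets = {"action": [], "read": [], "fyi": []}
    for note in fetch_notifications():
        line = f"{note.channel} {note.author}: {note.text}"
        buckets[classify(note)].append(line)
    return buckets

digest = build_digest()
```

The design point is the boundary: the fetch-and-format plumbing is written once and runs the same way every time, while the model is confined to the single judgment call it is actually good at.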
Cybersecurity is inherently adversarial; if attackers use a very powerful AI coding model to hack, defenders probably have to use a model that's equally good or better to defend — and vice versa. This can lead to an arms race where neither side can afford not to shell out big bucks for the latest and greatest model they can get their hands on.
However, a point I make on Sharp Tech is that Anthropic's exponential growth includes the part of the curve everyone misses: the company has been on this once-barely-visible trajectory for nearly two years now. Now the company has what is undoubtedly the most powerful model in the world, so powerful, in fact, that Anthropic says it can't release it publicly. There's reason for cynicism, given Anthropic's history, but the part of the "Boy Who Cried Wolf" fable everyone forgets is that the wolf did come in the end.
What Arc and the OpenAI Foundation are doing is what private capital and motivated foundations can do that institutional science usually can't: pick a hard problem, fund a full-stack experimental-and-AI engine, and run the loop fast enough that we might actually get somewhere by the time it matters to my family.
With global data-center power consumption (the energy behind AI infrastructure) expected to roughly double to nearly 1,000 terawatt-hours by the end of the decade, according to an estimate by the International Energy Agency, solar arrays in space, on the Moon, or in lunar orbit beaming energy back to Earth aren't as crazy as they sound.
Basically, none of these groups thinks that any amount of AI capabilities will enable economic take-off. To me, that suggests that they're thinking — perhaps subconsciously — about something more than just friction and slow adoption. One possibility — which I should write about more — is that people suspect that humanity is getting satisfied, at least in the developed countries, and that the amount of new valuable things that even a godlike AI could create for us is limited by our inability to desire more goods and services.
It genuinely feels to me like GPT-5.2 and Opus 4.5 in November represent an inflection point: the moment when AI coding agents crossed from "mostly works" to "actually works."
The more I play with OpenClaw, the more convinced I am that it is one of the most powerful AI tools for personal use, and a sign of where these tools are going.
This is an era of managing AIs, rather than working with them. This new approach to AI is the outcome of the rapid exponential improvement in AI abilities. That means you can't understand where we are, and where we might be going, without understanding the increasing capability of AI.
If you wanted more evidence that AI is changing everything, look no further than Arm: the company was famous for its high-margin IP-licensing business model, but this week announced that instead of (just) enabling other companies to make chips, it would start making and selling chips itself.
Over the past fifty years, the U.S. economy built a giant rent-extraction layer on top of human limitations: things take time, patience runs out, brand familiarity substitutes for diligence, and most people are willing to accept a bad price to avoid more clicks. Trillions of dollars of enterprise value depended on those constraints persisting.
Rather than building complex, permanent workflows, these micro-agents handle specific tasks when needed and then disappear. This approach is becoming a trend across general-purpose AI tools, blurring the line between consuming and building agents.
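The micro-agent lifecycle described above can be sketched minimally. This is a hypothetical illustration of the pattern, not any real tool's API: `MicroAgent` and `run_once` are invented names, and the model call is stubbed so the sketch is self-contained.

```python
# Sketch of the ephemeral "micro-agent" pattern: an agent is created for
# one specific task, runs, and is discarded -- no permanent workflow.
class MicroAgent:
    def __init__(self, task):
        self.task = task
        self.scratch = []  # ephemeral working memory, gone after use

    def step(self, prompt):
        # Stand-in for a model call; a real agent would invoke an LLM
        # (and possibly tools) here.
        self.scratch.append(prompt)
        return f"done: {prompt}"

    def run(self):
        return self.step(self.task)

def run_once(task):
    # The whole lifecycle in one call: create, run, discard. Only the
    # result survives; the agent and its state do not.
    agent = MicroAgent(task)
    result = agent.run()
    del agent
    return result

result = run_once("summarize today's unread notifications")
```

The contrast with a permanent workflow is that nothing here is registered, scheduled, or persisted: the agent exists exactly as long as the task does.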
We shouldn't seek to quell AI anxiety; we should embrace and analyze it. The truth is, the U.S. labor market is in serious trouble, and it has little to do with AI so far.
There could be more complex patterns in nature — too complex for a human to hold in their mind, or even notice in the first place, but stable and useful nonetheless. What if there are other complex-but-useful patterns in other domains, like materials science and biology? If they exist, I think AI will be able to find them and apply them.
Agents are fundamentally changing the shape of demand for compute, both in terms of how they work and in terms of who will use them. They're so compelling that I no longer believe we're in a bubble.
Instead of designers creating comprehensive design packages with every state documented, AI enables bidirectional flow between Figma and code. Pull production code into Figma to see what actually exists, make changes in Figma, then push those changes directly back to code.
The interface of how we interact with AI will become more multi-sensory, hands-off and accessible. The dream of ambient computing has a lot of potential to evolve in the decade ahead.
Like almost everyone in the AI model-making industry, Anthropic's employees believe that they are literally creating a god, and that this god will come into its full existence sooner rather than later.
Because of this change, you have to consider three things when deciding what AI to use: Models, Apps, and Harnesses. Models are the underlying AI brains, and the big three are GPT-5.2/5.3, Claude Opus 4.6, and Gemini 3 Pro (the companies are releasing new models much more rapidly than in the past, so version numbers may change in the coming weeks).
ByteDance's new Seedance 2.0 AI video model seemed unstoppable—until heavy demand strained the company's compute capacity and copyright complaints began piling up.
Models are what determine how smart the system is, how well it reasons, how good it is at writing or coding or analyzing a spreadsheet, and how well it can see images or create them.
Advanced autonomy, improved sensors, millions of miles more experience, integrated software and embedded AI have all improved considerably in the last decade. That world of robots, delivery drones, robo-taxis and consumer ambient intelligence: well, it's finally coming.
The idea that humans will always be "in the loop" is quickly fading. The ethics of national defense is giving way to new reinforcement-learning engineering optimizations.
Now the same model can behave very differently depending on what harness it's operating in. Claude Opus 4.6 talking to you in a chat window is a very different experience from Claude Opus 4.6 operating inside Claude Code, autonomously writing and testing software for hours at a stretch.
Until a few months ago, for the vast majority of people, "using AI" meant talking to a chatbot in a back-and-forth conversation. But over the past few months, it has become practical to use AI as an agent: you can assign it a task and it carries the task out, using tools as appropriate.
Until recently, you didn't have to know this. The model was the product, the app was the website, and the harness was minimal. You typed, it responded, you typed again. Now the same model can behave very differently depending on what harness it's operating in.
Underscored — save the words that stop you in your tracks.