For years, tech companies have asked contractors to behave like machines so that machines can learn to behave like people. Now Meta is asking its own full-time employees, who once occupied the top of the digital labor hierarchy, to do the same.
As AI models have become more sophisticated, we see customers evolving their use of them: from answering questions in a chatbot-like fashion, to automating tasks on their behalf, to automating process flows within the organization.
AI-assisted writing is creeping into newsrooms under the guise of efficiency. But the tradeoff may be more profound than publishers are willing to admit.
There's a crucial difference between asking AI to categorize things repeatedly and using it to build code that handles structured data through APIs. Yash used OpenClaw to build a Slack digest that pulls notifications via API endpoints: the AI built the tool once, but the categorization runs on deterministic code (except for the final action/read/FYI sorting).
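A minimal sketch of that pattern in Python, assuming a Slack bot token: the conversations.history endpoint is Slack's real Web API, but the channel ID and the classify_with_llm() helper are illustrative placeholders, not details from the original digest. Everything except the final sort is plain deterministic code.

```python
import os
import requests

SLACK_TOKEN = os.environ["SLACK_BOT_TOKEN"]
CHANNEL_ID = "C0123456789"  # hypothetical channel, for illustration only

def fetch_messages(channel: str, limit: int = 50) -> list[dict]:
    """Pull recent messages via Slack's conversations.history endpoint."""
    resp = requests.get(
        "https://slack.com/api/conversations.history",
        headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
        params={"channel": channel, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("messages", [])

def classify_with_llm(text: str) -> str:
    """The only nondeterministic step: the final action/read/FYI sort.
    Placeholder return; a real model call would go here."""
    return "fyi"  # hypothetical: swap in an LLM classification

def build_digest() -> dict[str, list[str]]:
    digest: dict[str, list[str]] = {"action": [], "read": [], "fyi": []}
    for msg in fetch_messages(CHANNEL_ID):
        text = msg.get("text", "")
        if not text:  # deterministic filtering: drop empty/system messages
            continue
        digest[classify_with_llm(text)].append(text)
    return digest
```

The design point is that the model is invoked once per message for a narrow judgment call, while fetching, filtering, and grouping stay in code that behaves the same way every run.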
So far, AI is replacing tasks, not jobs. Alex Imas and Soumitra Shukla have written that as long as there are a few things that only humans can do, this pattern can be expected to hold.
Meta has been shifting more of its trust and safety functions from humans to automated systems as the company looks to cut costs to support its AI infrastructure buildout.
This generative AI momentum is creating a lot of optimism around the potential of one-person companies, or solopreneurs, using agentic AI. If agentic AI works out, small businesses might gain a new array of powerful tools as well.
The fact that we call most of the new tasks "services" doesn't change the observation that the set of new human tasks seems to have expanded faster than machines have replaced old ones.
Rather than building complex, permanent workflows, these micro-agents handle specific tasks when needed and then disappear. This approach is becoming a trend across general-purpose AI tools, blurring the line between consuming and building agents.
Until now, working with AI has been copy-paste. You take information from one app, paste it into ChatGPT or Claude, get a result, then manually move that result into another app. You're the middleman between AI and the tools you use in your work. Plugins remove you from that role.
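One way to read that shift, sketched below under the assumption that a "plugin" is essentially a tool the model can invoke directly: each app registers a function, and a JSON tool call from the model replaces the human copy-paste step. The tool names (fetch_doc, post_update) and the dispatch format are hypothetical, not any vendor's actual API.

```python
import json
from typing import Callable

# Instead of the human shuttling text between apps, each app exposes a tool.
TOOLS: dict[str, Callable[..., str]] = {
    "fetch_doc": lambda url: f"<contents of {url}>",        # stand-in for app #1
    "post_update": lambda text: f"posted: {text[:40]}...",  # stand-in for app #2
}

def run_tool_call(raw: str) -> str:
    """Execute a tool call the model emitted as JSON, e.g.
    {"tool": "fetch_doc", "args": {"url": "https://example.com/spec"}}."""
    call = json.loads(raw)
    return TOOLS[call["tool"]](**call["args"])

if __name__ == "__main__":
    # The model emits this string; no human copies anything between apps.
    print(run_tool_call('{"tool": "fetch_doc", "args": {"url": "https://example.com/spec"}}'))
```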
In other words, many of the biggest flaws from the original ChatGPT have been substantially mitigated, at least for verifiable use cases like coding: LLMs are much more likely to be right the first time, they reason over their results to increase their chances, and now agents actively verify the results without humans needing to be in the loop.
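A minimal sketch of that verification loop for a coding task, assuming a test suite exists: generate_patch() is a hypothetical model call with a placeholder body, and pytest's exit code serves as the deterministic check. This illustrates the pattern, not any particular product's implementation.

```python
import pathlib
import subprocess

def generate_patch(task: str, feedback: str = "") -> str:
    # Hypothetical model call; placeholder body so the sketch runs end to end.
    return f"# candidate code for: {task}\n"

def tests_pass() -> tuple[bool, str]:
    """Run the test suite; the exit code is the verifier, not a human."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout

def solve(task: str, max_attempts: int = 3) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate_patch(task, feedback)          # model proposes code
        pathlib.Path("candidate.py").write_text(candidate)  # apply the candidate
        ok, feedback = tests_pass()                         # agent self-verifies
        if ok:
            return candidate                                # no human in the loop
    return None
```

The loop only works for verifiable use cases: the tests stand in for human review, so failures feed straight back into the next attempt.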
The deeper problem appears to be that the tech is neither creating jobs nor meaningfully increasing productivity outside of a few roles, even within technology companies.
The idea that humans will always be "in the loop" is quickly fading. In national defense, ethics is giving way to new reinforcement-learning engineering optimizations.