Tag: ai

You also got Nat Friedman and Daniel Gross. These guys have backed a lot of founders. They've worked with a lot of AI startups. They can understand the team that they're trying to build over there.

TBPN
1d ago

“Apps are the expression of intelligence, whereas core intelligence itself is derived from the models. Look at any apps or agents that exist in the market, and how few lines of code they are. They don’t need to be what we thought of previously as software, because they’re effectively just wrapping up this new thing, this alien intelligence that we now have.”

5d ago

“Venture capitalists are no longer in the foundation model race, because they can’t be at this scale, so the narrative quickly becomes that the application layer is going to win,” says Kant. “If your incentive structure is oriented towards the application layer, that’s what you have to say,” adds Warner.

5d ago

One phenomenon we've seen when teams are building things really quickly with AI is that the more AI-generated or assisted they are, the more generic they tend to turn out. Which makes sense if you think about how LLMs were developed: they're all basically pre-trained on the same data. And so in this excitement, in this rush to say, "Wow, look at how fast we can build it," you actually end up with something that is less differentiated.

1w ago

Autonomy, sensors, integrated software and embedded AI have all advanced considerably in the last decade, along with millions of miles more experience. That world of robots, delivery drones, robo-taxis and consumer ambient intelligence: well, it's finally coming.

1w ago

Treat your AI agent like a new hire, not an extension of yourself. Jesse’s entire agent management philosophy comes from her experience hiring employees. She gives agents their own identities, separate data access, and communication channels—never full access to her email or accounts. Progressive trust is the model: start limited, expand as the agent proves reliable.

1w ago
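A minimal sketch of the progressive-trust idea above, purely for illustration: the tier names, scopes, and promotion threshold here are assumptions, not anything the quoted post prescribes.

```python
from enum import Enum

# Illustrative only: tier names, scopes, and threshold are assumptions.
class TrustTier(Enum):
    PROBATION = 0   # read-only access to a limited, shared data set
    TRUSTED = 1     # may draft messages from the agent's own address
    SENIOR = 2      # may send from its own address (never the owner's account)

# Scopes granted at each tier; the agent keeps its own identity and channels.
TIER_SCOPES = {
    TrustTier.PROBATION: {"read:shared_docs"},
    TrustTier.TRUSTED: {"read:shared_docs", "draft:agent_email"},
    TrustTier.SENIOR: {"read:shared_docs", "draft:agent_email", "send:agent_email"},
}

def promote(tier: TrustTier, successful_tasks: int, threshold: int = 20) -> TrustTier:
    """Expand access only after the agent has proven reliable."""
    if successful_tasks >= threshold and tier is not TrustTier.SENIOR:
        return TrustTier(tier.value + 1)
    return tier
```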

These labs, like other larger tech companies, already employ former government agents and counter-terror experts, as well as red teams to test for all kinds of vulnerabilities and risks, so these jobs aren’t totally out of left field. Google DeepMind posted a job listing more than a year ago for a research scientist focused on biosecurity and its high-impact risks, too.

1w ago

Post by post, it can be impossible to tell whether what you are reading was indeed written by a bot, or whether a human being exercised a heavy hand. (“Who really made this thing I am looking at?” is a question that is becoming ever more salient this year.)

1w ago

As we develop increasingly capable AI models, it’s currently necessary to deprecate and retire our past models due to the cost and complexity of maintaining public access. However, model deprecation carries some downsides. These include costs to users who value particular models, limitations on research, and potential risks both to AI safety and to the welfare of the models themselves.

1w ago
