
The Algorithm

By James O'Donnell • 9.1.25

Welcome back to The Algorithm!

Everywhere I look, I see AI clones. On X and LinkedIn, “thought leaders” and influencers offer their followers a chance to ask questions of their digital replicas. OnlyFans creators are having AI models of themselves chat, for a price, with followers. “Virtual human” salespeople in China are reportedly outselling real humans. 

Digital clones—AI models that replicate a specific person—package together a few technologies that have been around for a while now: hyperrealistic video models to match your appearance, lifelike voices based on just a couple of minutes of speech recordings, and conversational chatbots increasingly capable of holding our attention. But they’re also offering something the ChatGPTs of the world cannot: an AI that’s not smart in the general sense, but that thinks like you do.

Who are they for? Delphi, a startup that recently raised $16 million from funders including Anthropic and actor/director Olivia Wilde’s venture capital firm, Proximity Ventures, helps famous people create replicas that can speak with their fans in both chat and voice calls. It feels like MasterClass—the platform for instructional seminars led by celebrities—vaulted into the AI age. On its website, Delphi writes that modern leaders “possess potentially life-altering knowledge and wisdom, but their time is limited and access is constrained.”

It has a library of official clones created by famous figures that you can speak with. Arnold Schwarzenegger, for example, told me, “I’m here to cut the crap and help you get stronger and happier,” before informing me cheerily that I’ve now been signed up to receive the Arnold’s Pump Club newsletter. Even if his or other celebrities’ clones fall short of Delphi’s lofty vision of spreading “personalized wisdom at scale,” they at least seem to serve as a funnel to find fans, build mailing lists, or sell supplements.


To see how far this has come, I made a clone of myself with Tavus, a startup that builds conversational video replicas. Via a helpful chatbot interface, Tavus walked me through how to craft my clone's personality, asking what I wanted the replica to do. It then helped me formulate instructions that became its operating manual. I uploaded three dozen of my stories that it could use to reference what I cover. It may have benefited from having more of my content—interviews, reporting notes, and the like—but I would never share that data for a host of reasons, not least because the other people who appear in it have not consented to having their sides of our conversations used to train an AI replica.
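To make that concrete, here's a rough sketch, in Python, of the kind of persona definition that onboarding produced: a name, an operating manual of instructions, and a folder of reference stories. The structure and field names are my own illustration of the idea, not Tavus's actual schema or API.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class ClonePersona:
    """Hypothetical persona definition for an AI replica (not Tavus's real schema)."""
    name: str
    role: str
    instructions: list[str]   # the "operating manual" the onboarding chatbot helped formulate
    reference_dir: Path       # folder of uploaded stories the clone can draw on

    def load_references(self) -> list[str]:
        # Read every uploaded story so the clone can refer to past coverage.
        return [p.read_text(encoding="utf-8") for p in sorted(self.reference_dir.glob("*.txt"))]

persona = ClonePersona(
    name="James O'Donnell (AI replica)",
    role="AI reporter who writes The Algorithm newsletter",
    instructions=[
        "Only discuss topics the real James has covered.",
        "Politely decline story pitches outside that beat.",
        "Never claim to have access to the real James's calendar.",
    ],
    reference_dir=Path("uploaded_stories/"),  # stands in for the ~36 articles mentioned above
)
```

The striking thing is how little it takes: a few rules and a folder of articles.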

So in the realm of AI—where models learn from entire libraries of data—I didn’t give my clone all that much to learn from, but I was still hopeful it had enough to be useful.
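One plausible way to stretch a corpus that small, and a common pattern in systems like this, is retrieval: pull the few stories most relevant to a visitor's question and paste them into the prompt, rather than trying to retrain the model on them. The sketch below is a generic illustration of that idea under my own assumptions; Tavus hasn't said this is how its replicas work.

```python
import re
from collections import Counter
from pathlib import Path

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def top_stories(question: str, stories: list[str], k: int = 3) -> list[str]:
    # Rank stories by crude word overlap with the question. A production system
    # would use embeddings, but the principle (retrieve, then answer) is the same.
    q = tokenize(question)
    return sorted(stories, key=lambda s: sum((tokenize(s) & q).values()), reverse=True)[:k]

stories = [p.read_text(encoding="utf-8") for p in Path("uploaded_stories/").glob("*.txt")]
context = top_stories("What have you written about humanoid robots?", stories)
# `context` would be pasted into the clone's prompt so that its answers lean on
# what the real reporter has actually published, not on guesswork.
```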

Alas, conversationally it was a wild card. It acted overly excited about story pitches I would never pursue. It repeated itself, and it kept saying it was checking my schedule to set up a meeting with the real me, which it could not do as I never gave it access to my calendar. It spoke in loops, with no way for the person on the other end to wrap up the conversation. 

These are common early quirks, Tavus’s cofounder Quinn Favret told me. The clones typically rely on Meta’s Llama model, which “often aims to be more helpful than it truly is,” Favret says, and developers building on top of Tavus’s platform are often the ones who set instructions for how the clones finish conversations or access calendars.
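That last point is concrete enough to sketch. On top of a base model like Llama, the developer typically supplies a system prompt that spells out how the clone should wind down a conversation and what to do about things it can't reach, like a calendar. The prompt and helper below are my own illustration of that pattern, not Tavus's actual configuration.

```python
# Illustrative only: the kind of developer-supplied system prompt that sits on top
# of a base model such as Llama. The wording here is hypothetical.
SYSTEM_PROMPT = """You are an AI replica of a technology reporter.
Rules:
1. If the visitor signals they are done ("thanks", "that's all"), say a one-line
   goodbye and stop; do not ask another question.
2. You have no access to the real reporter's calendar. Never offer to check it;
   point people to the public contact email instead.
3. Do not repeat a point you have already made in this conversation.
"""

def build_messages(history: list[dict], user_turn: str) -> list[dict]:
    """Assemble the chat-style message list sent to the underlying model."""
    return [{"role": "system", "content": SYSTEM_PROMPT},
            *history,
            {"role": "user", "content": user_turn}]
```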

For my purposes, it was a bust. To be useful to me, my AI clone would need to show at least some basic instincts for understanding what I cover, and at the very least not creep out whoever’s on the other side of the conversation. My clone fell short.

Such a clone could be helpful in other jobs, though. If you're an influencer looking for ways to engage with more fans, or a salesperson for whom work is a numbers game, a clone could give you a leg up. You run the risk that your replica could go off the rails or embarrass the real you, but the tradeoffs might be reasonable. 

Favret told me some of Tavus’s bigger customers are companies using clones for health-care intake and job interviews. Replicas are also being used in corporate role-play, for practicing sales pitches or having HR-related conversations with employees, for example.

But companies building clones are promising that they will be much more than cold-callers or telemarketing machines. Delphi says its clones will offer “meaningful, personal interactions at infinite scale,” and Tavus says its replicas have “a face, a brain, and memories” that enable “meaningful face-to-face conversations.” Favret also told me a growing number of Tavus’s customers are building clones for mentorship and even decision-making, like AI loan officers who use clones to qualify and filter applicants.

Which is sort of the crux of it. Teaching an AI clone discernment, critical thinking, and taste—never mind the quirks of a specific person—is still the stuff of science fiction. That's all fine when the person chatting with a clone is in on the bit (most of us know that Schwarzenegger's replica, for example, will not really coach us to become better athletes).

But as companies polish clones with “human” features and exaggerate their capabilities, I worry that people chasing efficiency will start using their replicas at best for roles that are cringeworthy, and at worst for making decisions they should never be entrusted with. In the end, these models are designed for scale, not fidelity. They can flatter us, amplify us, even sell for us—but they can’t quite become us.


