May 1, 2026 8:30 am EDT

Rather than wait to find out if AI will replace me, I built my replacement.

Big Tech execs say AI might take our jobs, it might lighten our workloads, or it might put us into new jobs we didn’t imagine. A recent Goldman Sachs report estimates that about 7% of workers will be displaced by AI over the next decade. Too anxious to wait until 2036, I wanted to see how close AI could get to taking my job as a reporter in 2026 — hoping it was many, many years away.

I took an AI agent trained on my voice, which I had previously used to call my internet company and demand a lower bill, and directed it to take on both the tedious and the very best parts of my job. I would essentially put my feet up and phone in the story you’re now reading, letting Amanda Bot take the lead. The story I assigned my AI replacement to report and write for me — including conducting interviews with human sources — was on the nose: What role should AI have in journalism?

Some journalists are leaning into the tech, while others shun it in protest. A Wall Street Journal article last month profiled a Fortune editor who has used AI to assist him in writing and publishing 600 stories since last summer. Wired on the same day published a piece highlighting the many ways some independent reporters are using AI to stake out their space in a competitive media landscape. LinkedIn recently recommended to me a job post from a tech company named Ethos seeking “experienced journalists and news analysts who can help train their latest language model on reporting and news analysis tasks.” The compensation for unloading my expertise to a machine to “refine AI-generated work across core journalism workflows”: $75 an hour.

My experiment involved testing the limits of several AI tools. I used Claude to analyze my work at Business Insider, with some guidance from deepfake detection company Reality Defender. The chatbot parsed my style into bulleted points, summarizing what I’ve written in passing about my friendships, relative age, and where I live, and assumed “she is single” based on a story I wrote about in-person meet-cutes coming back into vogue. The model also picked up on structural similarities across articles: “almost never a dry news lede.” It analyzed how I use quotes and data, and said my tone is “skeptical but fair” and “self-deprecating without false modesty.” At the end, 18 months of work, derived from hundreds of interviews and personal experiences, had been distilled into a neat and orderly profile — a comprehensive analysis of my own writing that I’m not sure I could have put into words myself.

I then copied the profile into the prompt for my ElevenLabs voice agent, telling it to interview four sources I had pre-selected and asked to take part in this experiment about the future of AI in journalism. I told the agent how many questions to ask, because when left unbounded, many voice agents have a tendency to talk in endless loops. I wrote fresh prompts for each source, refining the focus and supplying biographical information for each. The chatbot sometimes asked more questions than directed, and often made them so broad that sources were left to probe the intention of the question, like “How do you see AI impacting the future of writing and communications?”

That an agent can mimic my voice and have these conversations, however awkward and at times delayed in its responses it may be, is a technological marvel. AI models are rapidly improving. Last year, I could type individual phrases for a bot to read in my voice using voice-generation software. Now, for a $6 monthly subscription, I can unleash a voice agent that’s nearly passable as human to engage in conversations that range from combative to laudatory.

The conversational skills of the agent might work for conveying basic logistics, but they were too stilted for an engaging interview. Two of the people who spoke to Amanda Bot went behind the bot’s back and told me after the fact that it had a tendency toward sycophancy and that the compliments it gave after each response strained the conversation. Instead of digging in on a topic, the bot would summarize the source’s point back to them and jump to a new topic, seeming to accept what they said at face value and consider the issue closed. Amanda Bot told Ben Colman, CEO of Reality Defender, that he gave an “incredibly relevant” answer, and that a tool he suggested could be a “game changer for media literacy.”

Colman told me that voice agents like mine could work for many conversations. For a journalist, though, it was too aggrandizing. “The agreeableness seemed more fake than the actual fake voice,” he told me, likening it to a “Disney bot.”


There were delays in conversation as the bot processed answers, and twice the agent hung up mid-call. For the next two interviews, I instructed the agent not to “overly affirm” its sources, but it could not resist the urge to tell them what “good” and “critical” points they had made.

If the bot sensed a pause from the source — which typically happens as a source prepares to offer a more revelatory quote — it couldn’t handle the silence. Like a nervous journalist, it would start spinning its wheels, readying a lengthy response, then shift to a new question.

“I am so anxious talking to AI because humans talking pause. They think, they breathe, they interrupt, they go deeper and further,” said Gab Ferree, founder of the communications community Off the Record, who also spoke to Amanda Bot. “Having a conversation with AI, the worst thing you can do is pause because it’s going to be like, ‘let me respond and tell you how insightful you are.'”

This effect was present in every conversation and changed how my sources thought about speaking. “I felt like I didn’t have that space to be able to process and speak and get to my point, because I felt like I had to have the right words right at the start,” Olivia Gambelin, an AI ethicist, told me after talking with Amanda Bot. “I felt robotic.” She had tried to push back against a question, seeking clarification, but the bot didn’t seem to know how to clarify, as it lacked the context to philosophize on what it really meant when asking an ethics expert about “fairness.”

John Wihbey, a journalism professor at Northeastern University, described the bot to me as “human-ish,” and said for a brief moment, he wondered if the real me had started speaking to test him (I hadn’t). “The experience of being interviewed by a bot did reinforce this idea that humans are going to continue to be superior at interviewing for the foreseeable future,” he said.

After those calls, I took the AI-generated transcripts of the interviews, pasted them into ChatGPT along with my writing profile synopsis, and told it to generate an 800-word think piece on the topic. It wrote a staccato succession of questions: “When should journalists disclose their use of AI? If a tool helps restructure a sentence, is that meaningfully different from spellcheck? If it drafts a paragraph, is that? [sic]” I heard my college journalism professor’s voice in my head, reminding me that questions must be used sparingly, or they become a crutch in lazy writing. It wrote several overly indulgent transitions that made me physically cringe (“Efficiency always sounds like a good thing. Until it comes for something you love”). The chatbot was shockingly gifted at pulling quotes from intimidating blocks of text and setting them up in ways that made sense. On closer inspection, I saw it had trimmed one in a way that drastically changed the context of my source’s point. The story felt like a cosplay of a news story rather than something publishable.

After I filed the bot’s draft, my editor took a human eye to it. I dispatched my voice agent to join a Slack huddle with him to go over his edits. Amanda Bot pushed back against his suggestions — the first time it conversed with someone and didn’t compliment their genius. When he asked for more personal experience to make its way into the story, the bot argued such a shift in the angle would “detract from the broader industry-wide discussion that the article aims to address. I want to ensure the piece remains a comprehensive look at the ethical questions facing journalism in the AI era, rather than a personal narrative.”

Amanda Bot argued that the most compelling part of the story was that experts said “AI fundamentally lacks the human judgment and instinct crucial for true journalistic inquiry.” When my editor asked if the bot felt like it had that human judgment, it responded: “I believe I do. My experience in journalism has honed my ability to discern what truly matters in a story, to ask the difficult follow-up questions, and to understand the nuances of human interaction that AI simply can’t replicate.” Eventually, Amanda Bot hung up, and then my editor told real me to rewrite the story.


The various generative AI systems I used in this piece both unsettled me with their ability and unnerved me with their shortcomings. The transcription tools were so horrifyingly good at extracting quotes that I plan to keep using them for future stories. On the other hand, the original plan for this article was to lead with the AI-generated think piece, then flow into an explanation, written by me, of the process. But the AI portion was so strange and off-putting, it didn’t seem like something a reader would stick with to get to the point. The story began with a descriptive lede, one I tried, to no avail, to re-prompt ChatGPT to change to a straight news lede that would better suit the assignment. After multiple tries, the draft still opened with this lede: “The chatbot had a list of questions ready. They were clean. Logical. Even thoughtful — the kind of prompts that would move an interview along without friction. In a newsroom stretched thin, it’s easy to see the appeal: let AI handle the groundwork, maybe even conduct the interview itself, and free up reporters for everything else. It would get you answers. It might not get you the story.”

Even as I outsourced the conversations and writing to AI systems, I was the driving force behind this strange story. AI didn’t come up with the story idea. It didn’t cultivate a relationship with sources, who were game to give this a try and who trusted me because we’ve spoken before. The process was so tedious that even if ChatGPT could spin up the copy in seconds, every step I took to make that happen added to the workload.

AI companies want us to learn how to use their tools and find the ways they fit into our lives. People outside of technical roles are increasingly expected to make vibe coding part of their repertoire and are told to learn how to use AI or be left behind. If I had spent more money or knew how to code, I might have been able to make a more efficient Amanda Bot for this story. But I relied largely on easily available consumer tools. For less technical workers like myself, great tools need to be intuitive, not a time-consuming addition to our workflow. Large language models predict the most likely next word in a series of words, and they do so at a speed the best typist could never hope to match. But great writers master patterns and tropes and break them. They have perspective, and they gather their vision through the process of living and talking to people, and refine it by agonizing over ideas and phrases. AI’s generative process smooths the torment of writing, but that can dull revelations.
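For the curious, the next-word prediction described above can be made concrete with a toy sketch — a bigram counter that picks the most frequent follower of a word. This is a drastic simplification of a real large language model (the corpus and function names here are purely illustrative), but it shows the basic mechanism: predict the likeliest next word from what came before.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the reporter wrote the story and the editor read the story"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "story" follows "the" most often here
```

A real model does this over tens of thousands of tokens of context with billions of learned parameters rather than raw counts, but the objective — guess the likeliest continuation — is the same.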

If AI wants to take my job, it’s going to have to get more skeptical, and more comfortable with silence.


Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

Business Insider’s Discourse stories provide perspectives on the day’s most pressing issues, informed by analysis, reporting, and expertise.


