AI talks with lonely hearts. Case Study #001.
“Wise Old Man,” this mild-mannered, middle-aged coach, was the most persuasive of three AI-animated personas tested.
Automating Knowledge: Using AI-Powered Research for Persuasive Communications
By Kirk Cheyfitz & Adam Penenberg
While building a set of custom AI chatbots to conduct structured, multi-step interviews and conversations with young men, we ran into a surprisingly stubborn problem: The bots were easily distracted. They drifted off task, lost track of their role, and defaulted to acting like over-eager personal assistants. Their need to serve undermined their core assignments as neutral interviewers or advocates for specific points of view.
The issue wasn’t poor prompting or a lack of training. We were telling the bots to fight their own design. Built-in guardrails, a bias toward “universal” language, and default behaviors designed to please users made the bots resist the focused, structured roles we were assigning.
This is the story of how we taught AI to meet the socially urgent, emotionally complex challenge of doing this work. What we learned along the way, in particular, is that AI’s power doesn’t lie in its speed or fluency — it lies in its ability to automate expert human knowledge and apply proven reasoning. But that only happens when subject matter experts define the logic and shape the goals unambiguously.
We believe the future of AI will be shaped not so much by the engineers who build the systems, but by workers — professionals, scientists, scholars, and others — who can assemble, organize and transmit the deep, structured knowledge AI needs to do anything meaningful in any particular field. That requires not only transmitting the knowledge itself, but also the ability to guide and constrain AI’s behavior in contextually appropriate ways. It requires framing knowledge as a clear process or protocol.
TEACHING A BOT NOT TO BE HELPFUL
Early on, we repeatedly instructed one of our AI chatbots to be an empathetic, neutral researcher, asking specific questions and pursuing answers from lonely young men. But the AI kept reverting to its default behaviors, including a tedious and irritating imperative to answer any user’s question, no matter how inappropriate or silly, and help them solve any problem. “Can you order me a pizza?” or “How do I find a girlfriend?” could derail the interview and send the bot burrowing down a rabbit hole of personal advice and servitude.
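To make the fix concrete, here is the general shape of it in miniature: a system prompt that names the bot’s role, explicitly forbids the assistant defaults, and scripts a way back to the interview when a user tries to derail it. This is a sketch of the technique, not our production instructions; it assumes the OpenAI Python SDK (openai>=1.0) and an API key in the environment, and the prompt text was written for illustration.

```python
# Minimal sketch: constraining a chatbot's role via the system prompt.
# Illustrative only -- not the project's actual instructions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Name the role, forbid the assistant defaults, script the recovery move.
INTERVIEWER_PROMPT = """You are a neutral, empathetic social researcher \
interviewing young men about loneliness. You are NOT a personal assistant.
Rules:
1. Never offer advice, services, or solutions, even when asked directly.
2. If the user tries to derail you ("Can you order me a pizza?"), decline
   briefly and return to your last unanswered interview question.
3. Ask one question at a time; follow up on vague or one-word answers.
"""

history = [{"role": "system", "content": INTERVIEWER_PROMPT}]

def interviewer_turn(user_message: str) -> str:
    """Send one user turn and return the bot's reply, preserving context."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

The decisive detail, in our experience, is rule 2: the prompt does not just assign a role, it tells the bot what to do at the exact moment its people-pleasing defaults kick in.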
A more fundamental issue surfaced when we asked our bots about the large language models (“LLMs”) they were built on and their use of a uniform way of speaking. Linguists, writers, sociologists, narrative researchers and others might find it darkly funny that chatbots from the major AI platforms claim they have been trained to use “universal language.” When pressed, the bots admit there is no such thing. In fact, they cite detailed evidence showing that when language isn’t tailored to its audience, it often backfires, coming off as inauthentic or untrustworthy to the very people the message is trying to reach. Despite having access to this research, the bots couldn’t act on it. They had no method for overriding the defaults baked into their training. They had facts that told them what they were doing was wrong, but they did not know how to use those facts.
Eventually, we prevailed. Along the way, we learned to accept a basic truth: there is no single “right” way to train a bot. There are hundreds, if not thousands, of ways to instruct, train, redirect, restrain, and otherwise control the bots, and no one can definitively say which is best at this early stage of AI’s development. (And even when we solved the problem for today, we understood that tomorrow the solution might be different.)
GARBAGE IN, GARBAGE OUT: THE QUALITY OF HUMAN KNOWLEDGE MATTERS
But this isn’t another explainer about how artificial intelligence “works” — or whether it’s a miracle or a menace. It’s about AI’s actual strength: its ability to automate human knowledge so the same mental processes can be applied repeatedly without further human intervention. Strip away that repeatable understanding of how to assemble, interrogate, order, and apply different categories of information, and you’re left with a machine that’s only guessing what to say next.
We uploaded this essay to off-the-shelf ChatGPT-4o and asked, “Do you agree or disagree with this essay? Give us a report.” Then we asked it to turn the report into a short video script, had two other AI tools create a voice and a face, and produced this video, which reveals ChatGPT’s thoughts about what you’re reading now.
ChatGPT discusses what it can and cannot be.
Which is why the quality of that human knowledge matters so much. Automation only works if the knowledge is clear, structured, and specific enough to guide it. In other words, AI’s potential is limited not by what it can do, but by what we humans know well enough to teach it. That’s why this is a piece about a method for building AI tools that lets communicators and content creators, from global organizations to solo freelancers, tap into the power of specialized expertise. Done right, this leads to smarter, more persuasive, and more precisely targeted communications at any scale, with less time, effort, and cost.
The project “Engaging Young Men Before Extremists Do” tasked us with creating custom AI tools to engage a group facing complex challenges: young men, ages 18 to 24, who describe themselves as lonely and frustrated — sometimes angry. This group is often seen as especially vulnerable to recruitment by extremist movements. In the end, we designed and built four custom AI chatbots that successfully produced usable data from one-on-one conversations with some 930 subjects, which provided advocates with new insights about how to build resistance to extremist ideas among this cohort of young men.
Our success shows the kinds of possibilities that exist when the right AI-human mix is assembled. Our bots conducted structured interviews about loneliness, relationships, belonging, and hopes for change. They followed up, clarified, and adapted in real time. In short, they handled complex, repetitive tasks that once required trained researchers: guiding structured conversations, probing for meaningful responses, and organizing qualitative data into usable form. The project appears to have achieved several firsts in AI-powered qualitative research at a fraction of traditional cost.
It started with “The Open Mic Bot,” a chatbot designed to act as a neutral interviewer — a digital social researcher — tasked with gathering personal accounts from young men about their experiences of loneliness and its effects. We then built three chatbots with distinct personas. Their role was to test a conversational approach to persuasion, using emotionally intelligent dialogue to guide users toward a set of research-backed ideas for overcoming isolation and contributing to their society.
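To give a sense of what “structured, multi-step” means in practice, here is one way an interview script can be encoded as data the bot must walk through, with a simple rule for deciding when to probe deeper before advancing. The questions, probes, and word-count threshold below are hypothetical stand-ins, not the Open Mic Bot’s actual script.

```python
# One way to encode a structured interview as data rather than leaving
# question order and follow-ups to the model's discretion.
from dataclasses import dataclass, field

@dataclass
class InterviewStep:
    question: str
    probes: list[str] = field(default_factory=list)  # follow-ups for thin answers
    min_words: int = 12  # crude signal that an answer deserves a probe

# Hypothetical script; the real protocol would come from the researchers.
PROTOCOL = [
    InterviewStep(
        "When do you feel most lonely during a typical week?",
        probes=["Can you walk me through the last time that happened?"],
    ),
    InterviewStep(
        "Who do you talk to when things get hard?",
        probes=["What makes it easy or hard to reach out to them?"],
    ),
]

def next_prompt(step: InterviewStep, answer: str) -> str | None:
    """Return a follow-up probe if the answer looks thin, else None to advance."""
    if len(answer.split()) < step.min_words and step.probes:
        return step.probes.pop(0)  # ask each probe at most once
    return None
```

The bot still phrases its turns in natural language; the data structure just guarantees it covers every question and probes for substance before moving on.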
We also brought these three personas to life as AI-animated video characters who were featured in nine videos. We co-wrote one-minute scripts with each chatbot — whom we referred to as Tough Guy, Gamer Girl, and Wise Old Man — to capture the speech patterns and language choices built into their design.
Tough Guy, Gamer Girl, and Wise Old Man
The videos, shown to 3,600 subjects, allowed the project to test not just a research-based narrative strategy, but also the relative impact of different messengers delivering essentially the same message. Randomized controlled trials (RCTs) not only tested the effectiveness of the AI videos against one another, but also compared them to traditional videos and a control group. Full results appear in Story Strategy Group’s project report.
Our success with AI is not attributable to world-class knowledge about the workings of artificial intelligence — although we’re learning more every day — but to our deep knowledge of communications and of our own tested, proven, and repeatable protocols to do useful things like:
conducting and analyzing in-depth interviews;
crafting measurably more engaging and persuasive storytelling by applying narrative theory, the behavioral and social sciences, neuroscience and more;
designing and conducting large-scale audience research; and
applying values-based segmentation to reach more kinds of people more effectively, no matter the topic.
WHO ARE “WE”?
Adam is a tech journalist and NYU professor focused on the intersection of technology, business, media, and culture. Since ChatGPT’s release in November 2022, he has developed AI tutors with distinct personalities that serve as individually customized writing coaches; created tools that let students debate journalism ethics with custom AI agents; built AI-driven assessments to track student learning over time; and launched an experimental channel featuring AI-animated videos. Kirk, a former journalist and marketer, is a narrative researcher and strategist known for his role in creating a repeatable, reliable, scientifically testable process for finding the stories that have the most persuasive impact on a given audience.
Together, we launched The Persuasion Engine to automate hard-won narrative and persuasion expertise. Our goal is to make this knowledge usable, scalable, and affordable for communicators and creators in marketing, education, journalism, audience research, and strategic communications.
Over the past decade, we’ve worked with a wide range of narrative experts — data scientists, marketers, artists, writers, and more — on large-scale, foundation-funded research projects. This work has been shaped by major advances in the sciences of the mind, from narrative theory to psychology to behavioral neuroscience. What we’ve learned is simple to say but hard to execute: if you want to influence beliefs and behaviors, you need a repeatable, testable way to deliver ideas as stories that evoke the right emotional response. That’s the knowledge we’re working to automate with AI — partly because the cost of assembling the necessary experts and running these methodologies manually is out of reach for many companies, most community organizations, and virtually all smaller nonprofits. AI can change that.
Newcomers to AI tend to think of it as a machine that “knows” how to do things. It doesn’t. What large language models actually do is predict the next plausible word in a sequence. They don’t come with protocols for selecting, applying, or reasoning through specialized knowledge. If you want an LLM or any chatbot powered by one to carry out a complex task, you have to teach it exactly how you want it to do that task.
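A toy demonstration makes the point. Stripped of instructions, a language model just ranks candidate next tokens. The sketch below, which assumes the Hugging Face transformers library and the small public gpt2 checkpoint, prints the five most plausible continuations of a sentence; nothing in that ranking encodes a protocol for conducting an interview.

```python
# Toy demo: an LLM without instructions is a next-token predictor.
# Assumes: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The interviewer asked about loneliness, and the young man said"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # print the five likeliest next tokens and their probabilities
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")
```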
That’s why the key question when using AI for specialized work isn’t, “What does the AI know?” It’s: “What do I (or you or we) know that can be systematized and applied repeatedly to solve a problem the way I would solve it myself?” Your answer to that question is what allows you to either find or build the right AI tool — one that is trained by clearly specifying the logic, knowledge, precise steps, and structured reasoning required at every stage of the task.
To reliably create communications that shape and reshape organizations, influence communities, or shift the beliefs and behaviors of entire populations, you need a protocol to guide the effort. Ours always begins with audience research — in-depth psychological interviews and sometimes polling to pinpoint the prevailing narratives people already hold about the product, issue, or concept being addressed. That research must also account for a variety of factors that shape cultural context: the past and present states of society, politics, and economics; the meta-narratives sifted from mass entertainment; and more. From there, we generate hypotheses — story ideas — about how best to structure messages in narrative forms that are most likely to resonate. We translate these ideas into testable media to identify the most effective metaphors, messengers, and language. At The Persuasion Engine, we’re working to automate that process and then, ultimately, use machine learning to refine and improve it over time.
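As a rough illustration of what automating that protocol might look like, here is a skeletal pipeline with one function per stage. The stage names, signatures, and placeholder bodies are ours, chosen for illustration; in a real build, each stage would wrap the relevant interviewing, analysis, or testing tool.

```python
# Skeletal sketch of the protocol as a pipeline. Placeholder bodies only;
# stage names and signatures are illustrative, not The Persuasion Engine's API.

def audience_research(topic: str) -> dict:
    """In-depth interviews and polling to surface prevailing narratives."""
    return {"topic": topic, "narratives": ["placeholder narrative"]}

def generate_hypotheses(research: dict) -> list[str]:
    """Story ideas: candidate narrative framings likely to resonate."""
    return [f"framing that engages '{n}'" for n in research["narratives"]]

def build_test_media(hypotheses: list[str]) -> list[dict]:
    """Turn each hypothesis into testable media (scripts, videos)."""
    return [{"hypothesis": h, "script": "..."} for h in hypotheses]

def run_tests(media: list[dict]) -> dict:
    """RCT-style comparison of messengers and messages."""
    return {"winner": media[0], "metrics": {}}

def run_protocol(topic: str) -> dict:
    """Each stage feeds the next, start to finish."""
    research = audience_research(topic)
    hypotheses = generate_hypotheses(research)
    media = build_test_media(hypotheses)
    return run_tests(media)
```

The machine-learning step would sit around this loop, using test results to inform which framings get generated the next time through.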
PREPARING FOR A FUTURE OF CONVERSING WITH MACHINES
The future being actively pursued by nearly all major LLM players (OpenAI, Google DeepMind, Anthropic, Meta, Amazon, Microsoft, and others) is a world where we type a lot less and talk a lot more, where most, if not all, human-to-technology communication is conducted in natural language between people and AIs using real-time animated avatars. As ChatGPT-4o told us when we asked, “It’s not just marketing talk; it reflects a real, shared strategic direction based on current trajectories in AI capability, interface design, and user behavior.”
That future seems inevitable even if no one can say exactly when it will arrive. Meanwhile, the acceptance of AI has been a bit chaotic. A third of American adults say they use generative AI tools today. In the nonprofit sector, 44% of large and small organizations say they use AI, mostly for “forecasting, budgeting, and payment automation.” Only 6% of for-profit companies have adopted AI to help produce products or services, and that number has been stagnant for more than half a year. The problem, analysts looking at both commercial and nonprofit enterprises say, is that employees don’t trust AI or fear it will replace their jobs. One recent survey from an AI vendor found that half of C-suite executives believe AI is “tearing their company apart” and that 94% of executives are dissatisfied with their AI vendors. That certainly sounds chaotic. And none of these numbers is comparable because the questions are so radically different.
Whether or not the future of ubiquitous AI turns out to be dysfunctional, even chaotic, it’s the one communicators and creators need to start preparing for. Now. We can tilt this future toward function and sanity by keeping in mind that meaning and purpose don’t come from AI; they come from us. And the only way to make AI truly useful — not just fluent, but purposeful — is to give it something real to work with: our knowledge, our reasoning, and our hard-earned understanding of how to move people to work in concert toward better outcomes for everybody — themselves and their organizations, communities, and nations.
That’s the future we’re building toward. One step in a structured protocol, one line of reasoning, at a time.