That Texani Piece Was AI

Do I use AI to write? Yes. Here’s how.

Let me say it upfront: the theatre essay was written with AI. So was this one. The research, the outline, and the first four drafts all came from AI-assisted workflows. I edited the final version and made it sound like me, but the drafting engine was Claude, the editing engine was ChatGPT, and the research started with two back-to-back deep research sessions on Gemini and ChatGPT. 

I want to explain exactly what that means and exactly what it doesn’t mean. AI didn’t generate the argument. It accelerated the process of building it. Most of the discourse around AI writing gets this wrong in both directions, and I’d rather be honest about what the process actually looks like than let the ambiguity fester.

The Excel Analogy Nobody Wants to Accept

Nobody says “I used Microsoft Excel to build this financial model.” They say “here’s my model.” Everyone understands that the analyst understood the underlying business, knew which assumptions to make, knew what the outputs should mean — and that Excel did the arithmetic.

Excel is a calculator for numbers. Large language models (LLMs) are calculators for words.

You still have to know what you’re calculating and why. You still have to know if the output is right. You still have to understand what you’re trying to say and whether what came back actually says it. The tool does the calculation. You do the thinking.

So why are we comfortable with Excel but panicking about GPT? Part of the answer is that we conflate two very different things when we say “writing.”

Recent research on AI-assisted writing makes a useful distinction between content-focused writers and form-focused writers. A 2025 study by Mohi Reza and colleagues at the University of Toronto found that writers differ significantly in which parts of the writing process they feel they must personally control. The authors interviewed journalists, academic researchers, technical writers, and novelists, and the pattern was consistent: the more informational the writing task, the more comfortable authors were delegating sentence-level drafting to AI systems. As the authors put it: “Content-focused writers — academics, analysts, journalists — care most about planning and ideation. Form-focused writers — novelists, poets, essayists — place far greater emphasis on sentence-level drafting and stylistic control.”

In other words: people who write primarily to communicate information care about controlling the ideas. The sentences are implementation. But people who write primarily as a creative act care deeply about the sentences themselves. For them, drafting is not implementation. It is the work.

Both positions are completely legitimate. They describe different relationships to language.

I am, for the most part, a content-focused writer. I care about the argument, the examples, the research, the intellectual architecture. The exact sentences are load-bearing, but they are not the building. That’s why AI-assisted drafting works for me in a way it might not work for a novelist.

Why This Essay in Particular

The theatre piece started with something real: sitting at the Alley Theatre, hearing the multilingual welcome loop, and noticing that Salam wasn’t in it. The curiosity that followed is mine. The question I wanted to answer — why do Muslims feel peripheral to theatre when theatre has existed in Islamic cultures for centuries — is mine.

That observation felt small, almost trivial. But small questions often open large doors. By the end of the research process, that small moment had expanded into a ten-century story about performance in the Islamic world.

I am not a theatre historian. I had never heard of Ibn Dāniyāl before starting this project. I had only a vague sense of taʿziyeh. I knew nothing about Yaqub Sanu, W.S. Rendra, or Shahid Nadeem. The gap between what I wanted to say and what I actually knew was the size of a library.

That’s when AI became useful. Not because it could do the thinking, but because the research problem was genuinely enormous.

What AI Actually Did

Before I even officially started, I launched two deep research requests. First Gemini: “Do a deep dive on the history of theatre in the Islamic world and then Muslims doing theatre in the west as well. Look through all time periods and geography. Give me a detailed history.” Then I fed Gemini’s results into ChatGPT for a second deep research pass, which gave me more sources and examples. I let it run overnight.

I woke up to more information than I knew what to do with.

AI is good at producing a plausible map of a subject quickly. Its weakness is that the sourcing is thin. The deep research runs had generated a convincing-looking survey of Islamic theatre history without having read the actual books and plays they were citing. The map looked right, but I didn’t know if the territory matched.

So day one was verification. Thirty browser tabs open, PDFs scattered across my desktop, one screen with the AI summary and another with academic articles and play scripts — interrogating every claim the deep research had generated against actual scholarship. Some sources I found online. Some I had to purchase. The play texts of Ibn Dāniyāl’s shadow theatre, the academic scholarship on taʿziyeh, the secondary literature on Yaqub Sanu — I read all of the relevant parts myself.

What AI was actually useful for during the research phase was helping me navigate material I had already gathered. I’d finish reading something and use the AI to test my understanding, find the most analytically relevant passage in a long text, or double-check that my takeaway matched what the source was actually arguing.

It was a research assistant, not a researcher.

Day two was outlining. This is where AI became a genuine thinking partner. I sat with ChatGPT and worked through the architecture: what should the sequence of examples be, how should the argument unfold across sections, what does the Ibn Dāniyāl section need to establish so the taʿziyeh section can build on it. These were not questions the AI answered for me. They were questions I worked through with the AI as a sounding board. When I proposed an approach, it would identify where the logic was thin. When I asked it to outline a specific section in detail, I’d tell it what was wrong with the outline and we’d revise. The final outline was over 15,000 words — almost double the final essay’s 8,300.

Day three was drafting. I put the full outline into Claude and had it produce a first draft of the complete essay. Claude’s prose surprised me — it was closer to my actual voice than I expected. Then I handed Claude’s draft to ChatGPT for three rounds of editing: first, a structural pass; second, a line edit focused on repetition and flow; third, a concision pass. The first draft was about 14,000 words. The fourth draft was 10,500. I edited the final version down by about 2,200 words. 

Where AI Was Useless

Let me be direct about where the workflow broke down, because readers should know this too.

AI cannot yet synthesize. Left to its own devices without detailed prompting, it described. It would tell me about Ibn Dāniyāl’s shadow plays without explaining why they mattered for the argument I was making. It would summarize taʿziyeh without surfacing the moment that makes the tradition’s politics captivating: an actor calling out across the arena, “Is there anyone who will help us?” Every time I needed the material to become an argument rather than a list of facts, I had to push it. The synthesis was mine.

AI also cannot yet know which example matters. I almost included a play by another American Muslim playwright — Omer Abbas Salem’s Mosque4Mosque, about a Muslim character wrestling with sexuality — before deciding one such example was already enough. You have to know what you’re building to make that sort of editorial judgment. 

The personal narrative was completely unusable as AI output. The sections about my mother talking about going to the theatre in Egypt, my wife whispering in the dark at the Alley, the moment when the theatre asked me to record my own voice for the welcome loop — the AI’s versions of these passages were emotionally wrong. Technically fine. But not true. I rewrote all of them, the conclusion especially. The final line of the essay — “Welcome to the Alley Theatre, where everyone is truly welcome” — I wrote myself, because the AI’s ending was too hollow.

The difference between technically fine and captivatingly true is where human writing still lives.

The Retention Experiment

There was one question I kept asking myself throughout: did I actually learn any of this?

The research and drafting happened in four days. That’s fast. Writing is normally how I internalize material — the act of composing forces you to discover where your understanding breaks down. If Claude was doing the composing, had I skipped that step?

After publishing, I used Google’s NotebookLM to generate quizzes from the article. Multiple choice, factual questions about the content. I scored 100% twice. This relieved me more than I expected.

What I think happened is this: even though I wasn’t typing every sentence, I was doing something cognitively demanding the entire time. Judging. Every paragraph of every draft had to be evaluated. That continuous act of interrogation meant I had to hold the entire argument in my head. I wasn’t memorizing the material. I was constantly interrogating it. It turns out that’s enough.

I’ll take another quiz in six months and report back.

On Voice, and Whether It Matters

We already accept collaborative authorship in many forms. Obama’s speeches were written by Jon Favreau and Cody Keenan. They were still Obama’s speeches. The ideas, the moral vision, the political argument — all of that was his. Favreau and Keenan produced the sentences. Nobody thinks this disqualifies the work.

So what’s the problem if the LLM is my speechwriter?

I was genuinely worried at the start that AI would flatten my voice into something generic, and that worry turned out to be partly correct. ChatGPT’s writing, when it’s generating rather than editing, does get stilted. Claude writes more naturally and was a better first drafter. ChatGPT was a sharper editor. So I used them for different things, which turned out to matter.

The only part I refused to outsource was the personal narrative. That’s the part where you can tell that it’s me, a human, behind the text. It’s also the part that does the most important work — it’s what tells you this isn’t a Wikipedia article with better paragraph breaks, but a person who was actually sitting in the Alley Theatre when the lights went down and noticed something missing.

What This Means Going Forward

One more piece of research influenced my decision to disclose all of this publicly.

A 2025 experimental study by Tiffany Zhu and colleagues found that readers often cannot distinguish AI-generated text from human writing in blind tests — and in some cases actually prefer the AI version. But once a text is labeled as AI-generated, readers rate it significantly worse. The researchers tested this by presenting identical texts with randomly assigned labels — “AI written” or “human written” — and found that the bias is not in the writing. It’s in the label.

I’m disclosing this anyway. I would rather have the ideas judged on what they are than have readers find out later and decide that my whole project is suspect. The information in the theatre essay is accurate and sourced. The argument is one I actually believe. The personal narrative is real. Those things should stand on their own.

There’s a related psychological dynamic worth mentioning as well. A 2024 study from the Technical University of Darmstadt applied the “IKEA effect” to AI collaboration — the well-documented tendency to value things more when you’ve invested effort in making them. Participants valued AI-assisted work significantly more when they were involved in directing the system rather than passively receiving its output. Collaboration matters psychologically. The human contribution doesn’t disappear just because a machine assisted with the execution.

Which brings me back to the question that cuts through all of this process description: What exactly did I do that the AI couldn’t have done without me?

I conducted the orchestra. 

The tools and information were instruments. I decided which instruments were in the piece, in what order they played, what the movement from one to the next should feel like, and what the whole thing was trying to say. I decided that the Dara material was the right anchor for the Pakistan section. I decided that the Yiddish theatre comparison had to come before the American Muslim section, because without understanding what Fiddler on the Roof accomplished for Jewish immigrant identity, the question of what a Muslim Fiddler would require doesn’t land.

The Real Scarce Skill

What becomes scarce in an AI writing environment is not drafting. It’s judgment.

The ability to know what question to ask, which examples matter, what the argument actually is, when the AI is wrong, and when a paragraph is technically fine but conceptually empty. Basically, critical thought: the ordering of information into meaning, the ethics of what you choose to investigate and why.

AI does not replace thinking. It replaces the mechanical act of drafting.

The best information-centered writers will likely become researchers and editors more than drafters — people who spend most of their time on the high-value work of deciding what to say, and less time on the implementation of saying it. That’s the direction this is heading, and I think it’s worth being honest about rather than anxious about.

I spent four days on the theatre essay. The research was the best part. I learned more about the Islamic theatrical tradition in those four days than I would have in months of casual reading, and I retained it. The drafting was, relatively speaking, the easy part — or at least the part I cared least about. For a different kind of writer, the drafting would be the whole point. For me, it was the implementation.

Excel automated arithmetic. AI automates first drafts. Make your peace with the new Excel.