
Your Vibes Are Not Evidence

A medieval witch-trial scene where the accused on the stand is a tiny laptop with a startled-cat face on the screen, the prosecutor is a Victorian publisher in a powdered wig holding up a piece of parchment that reads “EVIDENCE: SOMETHING SHIFTED” in dripping red ink, the jury is twelve identical men in tweed jackets furrowing their brows in perfect unison, motivational posters on the courthouse walls read “IF IT FEELS LIKE AI IT IS AI (TRUST YOUR GUT)” and “BURDEN OF PROOF IS A SOCIAL CONSTRUCT,” a single pixel tear rolls down the laptop’s screen, dramatic gothic lighting, the banner behind the judge is a giant magnifying-glass emoji over the word “VIBES”

I’ve been in some discussions lately about AI in writing, which is, at the moment, the hottest topic in publishing. Everyone has an opinion. I have one too. Mine is going to make some people on both sides a little mad, which I think means it’s probably about right.

Here’s where I land. Using AI to write a novel and publishing it without telling anyone you did? That’s bad. Using AI to help you communicate — to draft an email, polish a query letter, summarize a contract you can’t decipher — is not bad. Disclosure is the line. Human input is the heartbeat. The middle is where I live, and I think it’s where most thoughtful people live too.

But that’s not really what this post is about. This post is about how the discourse around AI detection has wandered into genuinely dangerous territory, and we should probably stop and look at where we are.

The Shy Girl Mess

If you missed it: a book called Shy Girl by Mia Ballard had its future publication pulled because people felt it was written by AI. Ballard has said that an acquaintance she hired to edit the self-published version used AI in the editing pass. The publisher conducted an internal investigation (in twelve hours) and pulled the book. They have not, to my knowledge, shared what the investigation actually found.

The proof, as best as I can tell, was vibes.

Here is the thing about vibes. They are not evidence. They are a feeling about evidence, which is a different and much worse thing.

The “AI Tells” List

There’s a list of stylistic tics that supposedly prove a piece of writing was generated by AI. It goes something like this:

  • Something shifted
  • Amplification echoes
  • Decorative compound modifier
  • Trailing irony clause
  • Fragment list of dramatic significance
  • One-sentence gravity paragraph
  • Silence as punctuation

Let me pick on a few of these.

“Something shifted.” This is a sentence that has appeared in human-written fiction for approximately as long as fiction has existed. It is a description of an internal beat. “Something shifted in her” was probably the third sentence ever scratched onto a clay tablet. It is not proof of anything. It is proof that the writer is writing about a thing changing.

“One-sentence gravity paragraph.” This is. A common stylistic choice. Cormac McCarthy did entire pages of them. So did Hemingway, on a budget. So have thousands of literary fiction writers since. The reason LLMs “do” this is because thousands of humans do this, and the LLMs were trained on those humans. Discovering “AI uses this technique” by reading the people AI was trained on is a circular argument so tight you could use it as a wedding ring.

“Trailing irony clause.” Sentences that end on a wry undercut, or so we’re told. This is a rhetorical device with a name in classical Greek. It’s older than Christianity. Calling it an AI tell is calling Aristotle an AI.

“Silence as punctuation.” Meaning ellipses and white space, presumably. This one is going to be hard to police given that human writers have been doing it… forever… including, famously, Cormac McCarthy, who I keep mentioning because he is an excellent example of a human writer who looks suspiciously AI-shaped if you measure him against this list.

The pattern, if you’re paying attention, is that the “tells” are mostly just writing. Decisions writers make. Techniques humans have used for centuries. The reason an LLM produces them is because the LLM ate a thousand novels and learned the shape of fiction. You cannot use the shape of fiction to prove someone is not a human writing fiction. That’s not how shapes work.
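To make the circularity concrete, here’s a toy sketch of my own (not any real detector, and not how any publisher claims to work): a naive checker built straight from the “tells” list above, run on a few lines of plainly human-style literary prose. Every name and threshold here is invented for illustration.

```python
# Toy "AI tell" checker built from the tells list in this post.
# This is a hypothetical illustration, not a real detection tool.

TELLS = {
    "something shifted": lambda t: "something shifted" in t.lower(),
    # A short paragraph that is a single sentence -- the "gravity paragraph."
    "one-sentence gravity paragraph": lambda t: any(
        p.count(".") == 1 and len(p.split()) < 8
        for p in t.split("\n\n")
    ),
    # Ellipses standing in for silence.
    "silence as punctuation": lambda t: "..." in t or "…" in t,
}

def flag_tells(text: str) -> list[str]:
    """Return which 'AI tells' a passage trips."""
    return [name for name, check in TELLS.items() if check(text)]

# Ordinary human-style prose, in the vein of the writers named above.
passage = (
    "Something shifted in her.\n\n"
    "She waited.\n\n"
    "The house was quiet... too quiet, she thought, and said nothing."
)

print(flag_tells(passage))
# -> ['something shifted', 'one-sentence gravity paragraph',
#     'silence as punctuation']
```

All three tells fire on three sentences a human could have written in 1950. That’s the whole problem in fourteen lines: the checks describe fiction, not machines.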

When Vibes Become Evidence

Here is where I get less polite.

If we accept that “this feels like AI” is enough to end someone’s publishing career, we have made a small but devastating change to how culture works. We’ve decided the burden of proof sits on the writer to demonstrate they are human. That is not how anything should work. The accused is being asked to prove a negative about their own brain. There is no piece of paper you can produce that says “this came out of my own neurons, on my honor.”

And it gets worse, because the “tells” are normal writing. Anyone who happens to compose sentences in a way that overlaps with what LLMs produce is now a potential target. That is going to be a lot of people. That is, eventually, going to be most people, because the LLMs are trained on us. Their vibes are our vibes. The Venn diagram is just a circle.

This is going to keep coming for human writers. Mia Ballard is the case I happen to know about; there are others, and there will be more. People are going to lose deals, careers, reputations, because their prose pattern-matched to a chatbot in the eyes of someone with a posting habit and no evidence. That should scare us. It scares me. I am a person who writes sentences for a living and I have, on multiple occasions, used the phrase “something shifted.” That apparently makes me a robot now.

Disclosure Is the Real Conversation

Here is what I actually think we should be talking about: disclosure.

I use AI tools constantly. I’m using one right now to help me think through this post. I write code with Claude every day. I am deeply uninterested in pretending I don’t, and I think the writers who currently pretend they don’t will, eventually, look as silly as the writers who pretended they didn’t use spell-check fifteen years after Microsoft Word came out.

The honest framework, the one I keep coming back to, is something like this:

  • If AI wrote the words, you disclose it.
  • If AI helped you brainstorm, draft, edit, or organize, that’s a tool. Like an editor. Like Grammarly. Like the friend who reads your stuff before you send it. Disclose if you want, but you don’t have to.
  • If you don’t disclose and the words came from AI, you’ve lied. That’s the bad thing. Not the AI. The lying.

That’s it. That’s the whole thing. It puts the weight where it belongs — on whether the writer is honest about what they made — instead of on a witch hunt where prose gets read like tea leaves.

I would rather live in a world where some bad-faith writers lie about not using AI than a world where a publisher pulls a book because someone on Twitter said “something shifted” felt suspicious. The first world has cheaters in it. The second world has nobody writing in it, because everyone is too scared to publish a sentence that might match a pattern.

Disclose, or don’t publish. Vibes are not evidence. Mia Ballard probably deserves better, and so do the rest of us.