Researchers Explore Why Writers Are Disappointed with LLMs—And Propose a Solution
Despite their transformative impact on writing, communication, and creativity, large language models (LLMs) often leave professional writers unsatisfied. A collaborative study by Stony Brook University and Salesforce AI Research investigates this disconnect, identifying key shortcomings in AI-generated text and proposing a manually refined model to better align machine output with human expression.
While LLMs such as GPT, Claude, and Llama have transformed tasks ranging from scientific writing to creative storytelling, they still struggle to match the depth and originality of human-authored content. A recent study led by Stony Brook's Assistant Professor Tuhin Chakrabarty, in collaboration with professional writers, pinpoints these limitations and suggests pathways for improvement. The paper received a Best Paper Honorable Mention at CHI 2025.
“A major issue is that LLM-generated text often lacks originality and variation,” says Chakrabarty.
Overreliance on LLMs has led to what the researchers call algorithmic monoculture: a homogenization of style in which outputs become repetitive, clichéd, and rhetorically shallow. Unlike human writers, who employ nuanced narrative techniques, LLMs frequently default to telling rather than showing, missing the layered complexity that defines compelling writing.