New model accurately detects AI text

01 Aug, 2025

Bot-created social media posts can influence people’s behaviour and contribute to the spread of misinformation, disinformation and fraud online, and AI is making this content harder to spot.

But a team of researchers, led by Auckland University of Technology (AUT) academics Dr Weihua Li and Professor Minh Nguyen with PhD student Jinglong Duan, has developed a system that can do just that.

Bots driven by large language models (LLMs) like ChatGPT and Gemini can mimic the way humans interact online, producing coherent, logical and contextually rich text.

“While not all bots are malicious, some have been used to interfere with elections, steal personal information and manipulate stock markets,” says Dr Li, a senior lecturer in AUT’s Department of Data Science and Artificial Intelligence.
“New challenges posed by the advancement of generative AI increase this potential for harm and make current bot detection methods less effective.”

But the newly developed model, LLM-BotGuard, combines linguistic pattern analysis, a mixture-of-experts network and graph-based modelling to detect AI-powered bots online, outperforming existing bot-detection models.

Dr Li says that by looking for linguistic traits in text that are unique to large language models, assessing metadata, and analysing how accounts interact with each other, LLM-BotGuard can differentiate content written by humans from AI-generated content.

The researchers say AI-generated text tends to be more predictable and shares more similarities with other AI-generated text than it does with human-generated content.
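The paper itself does not spell out the mechanics in this article, but one common way to quantify “predictability” is to score text by its perplexity under a reference language model. The sketch below illustrates that general idea only; the model choice (“gpt2”) and threshold-free scoring are illustrative assumptions, not LLM-BotGuard’s method.

```python
# Minimal sketch: score text predictability via perplexity under a small
# reference language model. Lower perplexity = more predictable text.
# This is a generic illustration, not LLM-BotGuard's code.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels yields the average next-token loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Hypothetical usage: AI-written text will often score lower than human text.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```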

LLM-BotGuard also factors in the degree of sentiment expressed in text (humans tend to express more emotion, such as anger or frustration, than AI-powered bots), along with features like punctuation, average words per tweet, lexical diversity, word count, sentence count and the number of unique words per sentence.
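As a rough sketch of what extracting these surface features might look like, the snippet below computes simple versions of the counts named above. The feature names and formulas (for example, lexical diversity as a type-token ratio) are illustrative assumptions, not the exact definitions used in the paper.

```python
# Minimal sketch of the surface features the article lists: punctuation use,
# average words per tweet, lexical diversity, word/sentence counts.
import re
import string

def stylometric_features(tweets: list[str]) -> dict[str, float]:
    text = " ".join(tweets)
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    punct = sum(ch in string.punctuation for ch in text)
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_words_per_tweet": len(words) / max(len(tweets), 1),
        # Type-token ratio: unique words over total words.
        "lexical_diversity": len(set(words)) / max(len(words), 1),
        "punctuation_count": punct,
        "unique_words_per_sentence": len(set(words)) / max(len(sentences), 1),
    }

print(stylometric_features(["Great day!", "Great day again, honestly."]))
```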

Dr Li says that LLM-BotGuard then uses a ‘Mixture of Experts’ approach to process linguistic traits and metadata, and a graph neural network to look at how individual social media accounts are connected and interact with each other.
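To make those two ideas concrete, here is a minimal PyTorch sketch of a gating network that mixes several “expert” MLPs over per-account features, followed by a simple graph step that averages neighbouring accounts’ representations. Dimensions, expert count and the final classifier are illustrative assumptions, not the paper’s architecture.

```python
# Minimal sketch: mixture-of-experts over account features, plus one
# mean-aggregation graph step over the account interaction graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExperts(nn.Module):
    def __init__(self, dim: int, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(n_experts)]
        )
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x):                                   # x: (n_accounts, dim)
        weights = F.softmax(self.gate(x), dim=-1)           # (n, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (n, n_experts, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)    # weighted expert mix

def graph_mean_layer(h, adj):
    """One message-passing step: average each account's neighbours."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return F.relu((adj @ h) / deg)

# Hypothetical usage on 5 accounts with 16-dim features and a follow graph.
feats = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
h = MixtureOfExperts(16)(feats)      # per-account expert mixture
h = graph_mean_layer(h, adj)         # incorporate neighbour interactions
bot_logits = nn.Linear(16, 2)(h)     # human vs bot scores
```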

“Analysis of LLM-BotGuard's performance found our model significantly outperformed other bot detection models, highlighting its effectiveness in addressing the challenges posed by social media bots,” Dr Li says.

Despite this, Dr Li warns that as LLMs rapidly evolve, bots may develop strategies to evade detection, underscoring the need for continued research in this area.

The research paper, LLM-BotGuard: A Novel Framework for Detecting LLM-Driven Bots With Mixture of Experts and Graph Neural Networks, was published in IEEE Transactions on Computational Social Systems.

The paper’s authors are AUT’s Jinglong Duan, Weihua Li and Minh Nguyen, University of Tasmania’s Quan Bai, Yanbian University’s Xiaodan Wang, and Jilin University of Finance and Economics’ Jianhua Jiang.
