The way we communicate in our native language is a lot more complicated than you might think. Say you go to a coffee shop and order a cold brew. You might say, “I’ll have a cold brew.” However, there’s also a good chance you’ll stretch it out a little and say something like, “Ah, it’s pretty hot out. You know, I think I’ll go with a cold brew this morning, please.” That’s because we bring a lot more complexity to a language we know well. We like to play with it and get creative without even realizing it—especially if we grew up speaking it.
If you’re just learning a new language, however, the opposite is probably true. Your sentences are simpler. They’re more direct. There are fewer linguistic flourishes and less variation—which makes sense. You’re more concerned with getting things right than with sounding like a loquacious native speaker.
It’s this same lack of linguistic complexity that distinguishes text written by large language models (LLMs) like ChatGPT or Bard from text written by humans. This idea underpins many of the AI detection apps that professors and teachers use to assess whether a student’s essay was actually written by that student.
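One rough way to put a number on this kind of variation is “burstiness”—how much sentence length swings around its average. The sketch below is an illustrative toy, not the algorithm any real detector uses; the `burstiness` function and its naive sentence splitting are assumptions made for this example.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence length: a rough proxy
    for the 'linguistic variation' signal described above.
    Higher values suggest more human-like variation."""
    # Naive sentence split on ., !, ? — good enough for a sketch.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Standard deviation relative to mean sentence length.
    return statistics.stdev(lengths) / statistics.mean(lengths)

human = ("Ah, it's pretty hot out. You know, I think I'll go with "
         "a cold brew this morning, please. Sound good?")
model = "I would like a cold brew. I would like it iced. Thank you."

print(burstiness(human) > burstiness(model))  # → True
```

Here the chatty, uneven “human” order scores higher than the flat, uniform “model” one—the same intuition real detectors build on with far more sophisticated statistics.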