Toronto student creates a chatbot detector

Edward Tian of Etobicoke has a solution to help stop people from misusing ChatGPT
A new chatbot is both amazing and worrying people with its abilities. (ID 267212497 © Waingro | Dreamstime.com)


A 22-year-old Princeton University student from Etobicoke named Edward Tian has created an innovation that has the whole world paying attention.

His program, called GPTZero, is a brand new chatbot detector. It lets people paste in a section of text and find out whether or not that text was artificially created. It is a tool that educators from high school to university have been eagerly hoping for, and just how eager they were became clear from how quickly the app's popularity grew once Tian released it.

After the app went public on January 3, 300,000 people tried it and over 7 million people viewed it. "It was totally crazy. I was expecting a few dozen people," Tian told CP24.com.

But why did so many people care about this app? Let's dig into the world of chatbots.

"Hello! How can I help you?"

An example of a chatbot used on a smartphone's automated assistant program. (Wikimedia Commons)

A chatbot is an AI (artificial intelligence) computer program that is able to take sentences written by a person and respond to them in real time, much like another human would be able to do. Basically, it is texting with a robot.

Companies have used them as a first point of contact for customers who have problems. The chatbot helps to identify the problem and offers some early solutions.

Of course, if the problem was too complex, the chatbot would eventually have to pass the customer on to a real human to finish the job. And even at its best, it could only really speak in sentences that were a bit clumsy and awkward.

That was the issue with these bots—they could fake it alright for a bit, but they were never able to respond exactly as a human would. Our languages—and the ways that real people use them to respond to each other—are too complex to turn into a program.
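
To picture how those earlier bots worked, here is a tiny sketch of a rule-based helper written in Python. The keywords and canned replies are invented for this example; real customer-service bots are far more elaborate, but the basic idea is the same: match a few known phrases, and hand everything else to a human.

```python
# A toy, rule-based chatbot of the kind companies used before ChatGPT.
# The keywords and canned replies below are made up for illustration only.

RULES = {
    "password": "You can reset your password from the account settings page.",
    "refund": "Refunds usually take 5 to 10 business days to appear.",
    "hours": "Our support line is open 9 a.m. to 5 p.m., Monday to Friday.",
}

def reply(message: str) -> str:
    """Return a canned answer if a keyword matches, otherwise escalate."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # No rule matched: hand the customer over to a real person.
    return "Let me connect you with a human agent who can help."

print(reply("How do I get a refund?"))      # matches a rule
print(reply("My order arrived crushed."))   # escalates to a human
```

A bot like this never really understands anything. It only recognizes words it was told to look for, which is why its conversations felt so stiff.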

A little too good


ChatGPT is not your average chatbot. (Getty Embed)

Or so we thought. Then in November, ChatGPT came along. This new chatbot was something completely different from other bots. It was shockingly good at replicating how humans use language. And it could do way more than just respond to simple questions.

You could ask it to do nearly anything: write a rap, compose a letter, do research and write an entire essay (a long, multipage answer to a complex question). And it would do an excellent job. The bot's writing felt original and complex. It felt very much like how a human would communicate.

And in many cases, it did these things better than most people could.

Plagiarism

Suddenly, educators became worried. If students started using bots to finish their assignments, how could they grade their work?

In school, plagiarism is a very serious issue. It means passing off someone else's work as your own. In the past, teachers had to watch out for students copying text from books or websites. They had methods of hunting down this type of word 'theft'.

But what if a robot was doing the work? Though the text would not have been made by the student, it would be 'original' (not taken from another source that they could track down). How do you catch that?

To the rescue

Edward Tian is a Canadian computer science and journalism student at Princeton University. (Edward Tian/Twitter)

This is what Tian's app does. He has spent a couple of years studying GPT-3, the type of artificial intelligence that powers ChatGPT. And after being amazed by what it could do, he set out to find a way to detect it!

The result, GPTZero, allows a person to paste in a section of text for analysis. Tian's chatbot detector then delivers a score that tells how likely it is that the text was written by a person or by a chatbot.
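
Detectors like GPTZero work by measuring signals in the text itself, such as how predictable the words are to a language model (often called perplexity) and how much the sentences vary in length and rhythm (often called burstiness). The sketch below is not Tian's code; it is a simplified illustration of the burstiness idea only, using sentence-length variation as a rough stand-in and a made-up sample text.

```python
# A very simplified illustration of "burstiness": human writing tends to mix
# long and short sentences, while AI text is often more uniform. This is NOT
# GPTZero's actual method; real detectors combine several stronger signals.

import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    A low value means every sentence is about the same length, which is
    one weak hint that the text might be machine-generated.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

sample = (
    "The app went viral in days. Some teachers cheered. Others wondered "
    "whether any detector could keep up with tools that improve this quickly."
)
print(f"burstiness score: {burstiness(sample):.2f}")
```

On its own, a number like this proves nothing, which is why GPTZero reports a likelihood rather than a simple yes-or-no answer.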

The app is not a perfect test, but it is a great help. It is an example of the kind of challenge that schools face as technology gets better and better at copying how people think and interact.

"No one wants to be deceived if something they are reading is misrepresented as human," Tian said. We agree!

