
Image: ChatGPT logo. Ilgmyzin/Unsplash

I DON’T KNOW, GO ASK CHATGPT

Artificial intelligence (AI) is nothing new. The field dates back to the 1950s, and experts in computer science, engineering, and even philosophy have worked for decades to turn AI into something more than a short-lived novelty. The field even weathered its own “AI winters” in the 1970s and 1990s, when many lost interest in it. It wasn’t until a breakthrough in deep learning in 2012 that AI came back into the spotlight.

But now it seems AI has found its biggest application yet: generative AI. Since OpenAI introduced ChatGPT to the world for free public use in November 2022, AI has found itself being discussed by every news anchor, tech magazine writer, and cybersecurity reporter across the world.

If the leap in technology really is that large, how do we adapt as a society to work with these tools correctly, without developing an over-reliance? Where do we draw the line on generative AI? How do we ensure its use won’t spark unhealthy habits in students and employees, or open up security risks for organizations? The Fulcrum sat down with U of O computer science professor Diana Inkpen to discuss.

What sets newer generative AI apart?

Older-style language models, Inkpen explained, attempted to predict upcoming text from the frequencies of consecutive sequences of words, or n-grams, but these models never performed very well. Smaller neural networks existed to help, but they were simply too small to make much of a difference to AI performance.
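To make the n-gram idea concrete, here is a minimal sketch in Python (our own illustration, not code from the interview) of a bigram model: it counts how often each word follows another in a toy corpus and predicts the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy corpus: a bigram model counts how often each word follows another.
corpus = "the cat sat on the mat and the cat slept".split()

# Count occurrences of each (previous word, next word) pair.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    candidates = following.get(word)
    if not candidates:
        return None  # word never seen, or only seen at the end
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

Scaling this up to longer n-grams helps, but the counts become sparse quickly, which is part of why performance plateaued.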

With the advent of deep learning, we now have large neural networks that not only contain millions of neurons but can actually be run on modern hardware. New techniques were also developed to train those networks more effectively, along with entirely new architectures, such as transformers. Because they can run on current hardware, they are able to achieve high performance.

Inkpen explained, “it’s nothing particularly new, but the functions for training are more efficient now and the hardware allows for more efficient learning. So suddenly, the performance jumped for speech recognition and for this kind of text prediction.” 
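For readers curious about what a transformer actually computes, the sketch below (a simplified illustration, not anything from the interview) shows scaled dot-product attention, the operation at the heart of the architecture: each token’s output becomes a weighted mix of all the others, and the whole thing boils down to matrix multiplication, exactly the kind of work modern hardware runs efficiently.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # how strongly each token attends to each other token
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # each output is a weighted mix of the values

# Four tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8): one mixed vector per token
```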

AI’s use in school

If you were to bet on which students use ChatGPT for their assignments, odds are you’d put your money on first-years, especially those in coding classes. After all, they are new to their programs, and for many, this is the first time they’ve ever been introduced to programming.

As it turns out, Inkpen isn’t sure exactly how many of her students use generative AI, but she suspects the number is low: “I think [there’s] very little [use], because the danger if they do is that they don’t learn how to program by themselves…the students need to learn how to program, not to rely on a tool to generate code for them.”

But she also highlights its positive uses: “Now, if [the students] want to write reports for classes, that’s where ChatGPT can help a little bit more to assist you with writing. It could even write the whole report for you, but you would absolutely have to go and edit it, otherwise, it might be full of nonsense and hallucinations.”

She goes on to explain: “I had a few grad students who told me, ‘I did parts of these paragraphs [using generative AI], but now I am going to check them and rewrite to make sure it is reasonable.’ So, I’m not too worried about people using ChatGPT as a tool to assist in writing. If you are a serious programmer trying to write software with ChatGPT, I wouldn’t trust that code very much.”

How the workplace is adapting

Companies are split on whether to allow their employees to use generative AI in the office. Some permit full use, while others have banned it outright, Apple being one example.

When asked for her opinion on generative AI in the workplace, Inkpen stated, “if we allow it, we should be very careful and check everything manually. And it’s better to not [use generative AI] because we want efficient code, not just any code that runs.”

She mentions that even before ChatGPT, there were ways to generate code, though mostly for basic, repetitive tasks like generating menus rather than for complex or particularly useful functions. That code was automatically generated but controlled, and the programmer would then plug in their own functions afterwards.
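As a rough illustration of that older, controlled style of generation (a hypothetical sketch, not an example Inkpen gave), a menu generator might emit a fixed dispatch loop and leave the actual behaviour as stub functions for the programmer to fill in:

```python
# Hypothetical output of a simple menu generator: the scaffold is fixed
# and predictable; the programmer supplies the functions it dispatches to.

def open_file():   # plug-in point: replace with real logic
    print("opening a file...")

def save_file():   # plug-in point: replace with real logic
    print("saving the file...")

MENU = {"1": ("Open", open_file), "2": ("Save", save_file), "q": ("Quit", None)}

def run_menu():
    while True:
        for key, (label, _) in MENU.items():
            print(f"{key}) {label}")
        choice = input("> ").strip()
        if choice == "q":
            break
        entry = MENU.get(choice)
        if entry:
            entry[1]()  # call the plugged-in function

if __name__ == "__main__":
    run_menu()
```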

She added, “so now I guess you can write those functions with ChatGPT if you have a description of what the function should do, but is it the most efficient code? Is it wrong? It needs to be carefully tested, especially for critical applications.”
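One practical way to follow that advice is to treat generated code as untrusted until it passes tests. In the hypothetical sketch below, parse_price stands in for a function you asked a chatbot to write, and the checks probe the edge cases generators often fumble:

```python
# parse_price is a stand-in for a hypothetical AI-generated function.
def parse_price(text: str) -> float:
    """Convert a price string like '$1,299.99' to a float."""
    return float(text.replace("$", "").replace(",", ""))

# Before trusting generated code, pin down its behaviour with tests,
# including inputs the generator may not have considered.
def test_parse_price():
    assert parse_price("$1,299.99") == 1299.99
    assert parse_price("0.50") == 0.50
    try:
        parse_price("free")  # nonsense input should fail loudly
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for non-numeric input")

test_parse_price()
print("all checks passed")
```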

Security concerns

When it comes to what information we input into online generative AI tools, Inkpen stated, “we shouldn’t be using sensitive data, such as patient data in a hospital or any other kind of confidential data, because then [it] stays on their server and they can use it for training. And other people can access it.”

Inkpen also draws an interesting comparison to a previous mishap within our own government: “I remember the Canadian government using Google Translate for sensitive documents and there was a big fuss that they should have their own translation system because of the sensitive data that stays on Google’s site.”

When asked about the use of generative AI in malware and malicious code creation in the future, Inkpen says “people who create malware are creative anyway, so they’re always one step ahead of the antivirus industry and they could use anything to create malware. But then the antivirus community will catch up.”

She also agreed that there may be a greater risk in the future of people with little coding experience creating malware simply by asking generative AI to write it for them. This was possible at the time of ChatGPT’s initial release, and even though restrictions were soon put in place to prevent it, loopholes continue to pop up in the security research field.

Future outlook

Professor Inkpen hopes to see ChatGPT used as an assistive tool, much like Grammarly, where you can accept or reject suggestions as you see fit.

“I don’t want people to not use technology that can help make your life easier. But I would say it’s a fine balance to keep if you want to use these tools. [Generative AI] can help you write something faster, but you have to definitely take responsibility, edit it and verify it yourself.”

Finally, the Fulcrum asked none other than ChatGPT itself what it thinks the proper use of generative AI should look like. It highlighted some of its strengths in creating content, but ended with a fair caution on its use:

“It is important to note that generative AI creates artifacts that can be inaccurate or biased, making human validation essential and potentially limiting the time it saves workers. Therefore, the proper use of generative AI should be in conjunction with human expertise and not as a replacement for it.”

Wise words, computer, wise words.