UAlbany lab tackles AI's "hallucinations," emotion detection and smart devices



Tom Eschen speaks with a PhD student at UAlbany (WRGB)


As a New York State lawmaker calls for more warnings and disclaimers on online chatbots, the AI in Complex Systems Lab at UAlbany is researching how to make artificial intelligence more efficient and impactful in everyday life.

"It's always always difficult when you deploy your model out in the wild," Abdullah Canbaz, Assistant Professor at the Department of Information Science and Technology at UAlbany, says. "People start using those models, you figure out people are very creative in asking questions and coming up with new ways of using these models. You start seeing more of the problems you have not accounted for before."

Canbaz and his PhD students are researching ways to avoid those mistakes, or "hallucinations," when a model provides misinformation. Right now they're collecting data comparing GPT-3.5, a generative model, with a supervised model called RoBERTa'23, measuring each one's ability to detect emotions in thousands of tweets.

"GPT3.5 says that this tweet is optimistic, but on the other side, the actual model [RoBERTa'23] designed to detect emotion, says that, no, that is not optimism, that is surprise," Canbaz says. "Going forward, these models [pointing at GPT] will get more intelligent to understanding emotions and feelings, of what people are thinking or talking."

"When we talk about emotion detection, it involves linguistics, psychology and there might be some reason behind the logic that happens in the model that chooses different emotions over another one," Mahsa Goodarzi, a 2nd year PhD candidate, says.

For this project, Goodarzi has been working on coding the language and emotions within RoBERTa'23, in an effort to produce more accurate results.

"Machine learning models are able to accumulate this knowledge, but still they're not there yet to say, this is good, this is not good, this is ok," Canbaz says. "Most of the time we generate new versions of these models and then we update those don't do lists, that accumulates over the time, and we create flags around them. Feedback mechanisms are both humans in the loop and the machine is also trying to create its own hit list to try to not make the same mistakes."

Cybersecurity is another element of research underway at the lab, where the team is currently collecting data on dozens of smart devices. They're tracking how often the devices "talk" to one another, which they're not supposed to do, and building a language model that sends out an alert when it happens.
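Stripped of the language-model layer, the alerting rule itself is simple: flag any network flow where both endpoints are smart devices. A minimal sketch, with hypothetical device addresses and flow format:

```python
# Minimal sketch of the device-to-device alerting rule (hypothetical addresses
# and flow format; the lab's actual system layers a language model on top).
SMART_DEVICES = {"192.168.1.20", "192.168.1.21", "192.168.1.22"}

def check_flows(flows: list[tuple[str, str]]) -> None:
    """flows: (source_ip, destination_ip) pairs observed on the network."""
    for src, dst in flows:
        if src in SMART_DEVICES and dst in SMART_DEVICES:
            print(f"ALERT: device-to-device traffic {src} -> {dst}")

check_flows([
    ("192.168.1.20", "192.168.1.1"),   # device -> router: expected
    ("192.168.1.21", "192.168.1.22"),  # device -> device: flagged
])
```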

"I'm really interested in the cybersecurity area, also artificial intelligence," Hakam Otal, a 1st year PhD candidate says. "This is a perfect field where I can combine these two fields, so we can use AI tools and AI models to increase cybersecurity."

A recent report says that as AI gains prominence in society, companies will prioritize candidates with AI skills over experience. These UAlbany students say that, along with the benefits they believe AI can bring, makes this work even more valuable.

"They will require more people that know AI and how to handle this stuff, and data is another aspect, and we're generating more data, so we need more people that can use this data," Otal says.

"Everybody is using AI in a way somehow, I think it's gonna be very impactful because I care about making a positive impact on society, and I'm sure every person finds joy in that," Goodarzi adds. "This is so entangled in our lives now, that if you don't understand how it works, or you don't learn how to work with it, you probably won't have all the skills or the knowledge that you need to improve in whatever you do."

Stay tuned for more on this story Sunday, May 12th on CBS6 News at 11.

A misspelling of Abdullah Canbaz has been corrected in the copy.
