Queen Mary University of London secures a £4.38 million grant for a RAI UK Keystone project

  • Professor Maria Liakata of Queen Mary University of London secures a £4.38 million grant for a RAI UK Keystone project targeting limitations in Large Language Models (LLMs).
  • The project aims to mitigate risks associated with deploying LLMs in healthcare and law, while leveraging their potential for improved services and efficiencies.
  • Despite their capabilities, LLMs pose concerns such as biases and lack of explainability, especially in safety-critical domains.
  • The project has two key objectives: developing an evaluation benchmark and building solutions that mitigate the identified limitations.
  • Collaboration with industry partners and stakeholders ensures alignment with real-world needs and fosters responsible AI development.
  • The project emphasizes the importance of interdisciplinary collaboration and responsible AI innovation to maximize benefits and mitigate risks.

Main AI News:

Professor Maria Liakata of Queen Mary University of London has secured a £4.38 million grant to confront a pressing issue in Artificial Intelligence (AI). A Turing AI Fellow and a leading authority in Natural Language Processing (NLP), she will lead a prestigious RAI UK Keystone project, made possible by a £31 million investment from the UK Government. This initiative aims to address critical sociotechnical limitations in Large Language Models (LLMs), the cutting-edge AI models underpinning systems such as ChatGPT and virtual assistants.

While LLMs boast capabilities such as generating human-like text, facilitating language translation, and providing informative responses, their rapid integration into safety-critical sectors like healthcare and law raises significant concerns. Professor Liakata asserts, “Through this project, we have a genuine opportunity to leverage LLMs for improved services in healthcare and law, while mitigating risks associated with deploying inadequately understood systems.”

Despite their utility, LLMs harbor known shortcomings such as biases, privacy breaches, and lack of interpretability. Their introduction into domains like the legal system, where judges employ ChatGPT to summarize court cases, poses potential risks. Imagine the ramifications if an LLM misinterprets the chronology of events or perpetuates existing racial biases in parole decisions. Likewise, medical question-answering services driven by LLMs could disseminate inaccurate or biased information due to inherent technological limitations.

Professor Liakata stresses, “The potential for harm is substantial. This project endeavors to ensure that society capitalizes on the benefits of LLMs while averting adverse outcomes.”

The project focuses on healthcare and law because of their pivotal roles in the UK economy and because both pair significant risks with the potential for groundbreaking advances. It aims to achieve two primary objectives:

  1. Evaluation benchmark: A comprehensive framework of criteria, metrics, and tasks will be devised to assess LLMs across diverse real-world scenarios and applications (a minimal sketch of such a harness follows this list). Collaboration with partners including Accenture, Bloomberg, Canon Medical, Microsoft, the NHS, and service users will ensure alignment with actual requirements.
  2. Mitigating solutions: Researchers will pioneer innovative machine learning approaches, drawing insights from legal, ethical, and healthcare domains. These solutions will address identified LLM limitations outlined in the evaluation benchmark, with the goal of seamless integration into existing and future LLM-powered systems.
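
To make the evaluation benchmark idea concrete, here is a minimal sketch in Python of what a harness along these lines might look like. It is an illustration under stated assumptions, not the project's actual framework: the task names, the `exact_match` metric, and the `dummy_model` stand-in are hypothetical, and a real benchmark would substitute task-specific criteria for bias, privacy, and interpretability and wrap an actual LLM endpoint.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical benchmark sketch: task definitions, per-case scoring,
# and aggregate reporting. Names and metrics are illustrative only,
# not details from the RAI UK project.

@dataclass
class Case:
    prompt: str
    reference: str  # expected answer or gold summary

@dataclass
class Task:
    name: str                             # e.g. "event chronology", "medical QA"
    cases: List[Case]
    metric: Callable[[str, str], float]   # (model output, reference) -> score in [0, 1]

def exact_match(output: str, reference: str) -> float:
    """Crude placeholder metric; a real benchmark would use task-specific criteria."""
    return 1.0 if output.strip().lower() == reference.strip().lower() else 0.0

def evaluate(model: Callable[[str], str], tasks: List[Task]) -> Dict[str, float]:
    """Run the model over every case in every task and average the scores per task."""
    results = {}
    for task in tasks:
        scores = [task.metric(model(case.prompt), case.reference) for case in task.cases]
        results[task.name] = sum(scores) / len(scores)
    return results

if __name__ == "__main__":
    # Stand-in "model" so the sketch runs without any LLM API.
    def dummy_model(prompt: str) -> str:
        return "arrest (2021) before filing (2022)"

    chronology = Task(
        name="event chronology",
        cases=[Case(prompt="Order the events: filing (2022), arrest (2021).",
                    reference="arrest (2021) before filing (2022)")],
        metric=exact_match,
    )
    print(evaluate(dummy_model, [chronology]))  # {'event chronology': 1.0}
```

In this sketch each task bundles its own cases and scoring function, so new scenarios (court-case summarization, medical question answering) can be added without changing the harness itself; mitigation solutions from the project's second objective could then be compared against the same per-task scores.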

Professor Wen Wang, Vice-Principal and Executive Dean for Science and Engineering at Queen Mary University of London, affirms, “Professor Liakata’s project is a timely and critical endeavor. Responsible AI development and deployment are imperative to foster public trust and optimize benefits across various sectors.”

“These projects serve as keystones of the Responsible AI UK program,” notes Professor Gopal Ramchurn, CEO of Responsible AI UK (RAI UK). “Selected for their relevance to society’s most pressing AI challenges, they epitomize interdisciplinary collaboration to anticipate and address AI-related issues.”

Conclusion:

The Queen Mary project underscores the urgency of responsible AI development in healthcare and law. Addressing risks associated with Large Language Models not only fosters public trust but also unlocks significant potential for innovation and efficiency in critical sectors. Companies investing in responsible AI solutions stand to gain credibility and long-term success in the evolving AI market.

Source