
“First, do no harm.” It is a foundational principle of medicine, enshrined in the Hippocratic Oath, and the first thing they drill into us in medical school. It is the first consideration when you are evaluating a patient and weighing treatment options. The oath does not prevent us from innovating, caring, or being the best at what we do. However, it ensures we do it responsibly.

As a Yale-educated physician, I have worked in direct clinical patient care, worked on health policy at the U.S. Department of Health and Human Services, served global clients at PricewaterhouseCoopers, and now lead health companies as a CEO. Throughout, the first rule has always been the same: do no harm.

Over my decades working in healthcare, I have seen artificial intelligence and algorithms used across the continuum of care in different ways. Sometimes AI increases efficiency and frees up bandwidth for the human experts to focus on more complex tasks. But sometimes AI vendors push a product beyond its current capabilities: using it to analyze the variables in a patient’s case to decide whether another layer of pre-authorization is needed before the patient can receive the treatment the doctor recommends, or even having it recommend a course of care before the AI has been properly trained.

There have been times when my team was asked to correct AI logic after it was already deployed into production. And it’s understandable in our current structure, right? Technology vendors have huge growth and expansion targets, and non-clinical technologists without the same training as clinicians find it hard to resist the temptation to extend a product beyond AI’s current clinical knowledge when there are no rules of the road in place to make them think twice. Nor do they have the clinical knowledge to recognize that they could be, or already are, doing harm.

Medicine and medical knowledge are expanding by leaps and bounds year over year. Diseases that were death sentences decades ago are now manageable or even curable, and there are still more discoveries to be made.

The same can be said of AI: year over year, its applications and utilization are expanding at ever-increasing speed. Just three years ago no one knew what “ChatGPT” was, and now schools are racing to find ways to modify their curricula to ensure students are actually learning rather than simply using AI to do the work.

AI, unlike medicine, currently has no maxim like “do no harm.” Instead, tech is governed by the idea of “move fast and break things,” which may be fine for consumer technology, but when applied to technology that can affect the practice of medicine, that laissez-faire philosophy carries serious risk.

There are few protections ensuring that developers, or those who incorporate AI into their products, abide by the same rules the medical profession does. There is no “do no harm” framework AI is bound by – at least not yet.

There is a growing chorus of lawmakers across the country who recognize the power of AI to make our lives better, but also the importance of providing some protections – a digital equivalent of “do no harm” for the technology space. Connecticut should count itself lucky that one of the national leaders in that effort is our own State Sen. James Maroney.

Late last month the State Senate passed S.B. 2, An Act Concerning Artificial Intelligence. The 53-page, 19-section bill sets forth a framework to regulate the development and use of AI, laying out rules and providing clarity about how this powerful technology will be governed.

There are many ways AI can be used to make our lives better; there are also serious societal and personal risks to unregulated AI. Just last week we saw how nefarious individuals used an AI-powered “nudification” tool to generate images of female athletes, including UConn’s star player. Currently there is no clear penalty for that action. Under this legislation, that use of AI would be treated in the judicial system the same as a case of revenge porn.

The opioid crisis and the challenges stemming from the pandemic have shone a light on the mental health challenges our society is grappling with. No legislation can prevent a bad actor from using AI to create and distribute intimate images of someone, but this legislation can at the very least create a structure of penalties for doing so and, more importantly, make crystal clear that society finds that action unacceptable.

Those of us in the medical space have seen the benefits of AI and how it can increase people’s quality of life. Many of us are optimistic and excited about AI’s potential to improve quality of life across our society. But those of us who have worked in an emergency department have also seen the results of evil actions and bad luck. We understand the risk AI can pose if its development and utilization are allowed to proceed without rules.

The first rule of AI, just like the first rule of medicine, should be “do no harm,” and this year’s Senate Bill 2 is a significant step toward making that a reality. I hope that in the remaining days of this legislative session the House passes this bill and the governor embraces this important framework to harness the power of AI while diminishing unintended consequences and deterring bad actors. Just as abiding by the Hippocratic Oath has not kept US medicine from being innovative, caring, and world-class, applying the core concept of “first, do no harm” to AI will not hold the technology back. It will only make us better, healthier, happier, and safer humans.

Kevin Carr, M.D., owns several companies, including the Connecticut-based Trusted Medical and Compass Innovations.