New York Issues Guidance On AI Cybersecurity Dangers
In 2017, New York’s Department of Financial Services was the first state regulator to issue mandatory cybersecurity regulations for financial institutions such as banks, insurance companies and financial services companies, and now it has taken the lead in issuing new guidelines to deal with the threat and promise of artificial intelligence.

Source: LII / Legal Information Institute, N.Y. Comp. Codes R. & Regs. Tit. 23 § 500.2 - Cybersecurity Program

Source: New York Department of Financial Services, Industry Letter (October 16, 2024): Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks

While the new guidelines do not impose specific new requirements beyond the obligations already contained within the existing cybersecurity regulation, 23 NYCRR Part 500, they do explain how banks, insurance companies and financial services companies can use the framework of that regulation to assess and address the cybersecurity risks posed by AI.

The new guidelines highlight four specific areas where AI has increased the danger of cyberattacks.

The first area is social engineering, which the guidelines describe as “one of the most significant threats to the financial services sector.” Through social engineering, cybercriminals have used targeted spear phishing emails, vishing phone calls and smishing text messages in which they pose as a legitimate customer, government agent, vendor or some other trusted source to try to lure their targeted victims into providing personal information or clicking on a malware-infected link. While some sophisticated cybercriminals have proven to be exceedingly adept at social engineering, in the past many attempts were almost laughable, particularly when the socially engineered communications were created by foreign cybercriminals whose primary language wasn’t English. But no more.

Through the use of AI, cybercriminals are now able to craft totally believable spear phishing emails, vishing phone calls and smishing text messages. And things aren’t as bad as you think; they are far worse. Readily available deepfake technology is enabling cybercriminals to mimic the voice or appearance of bank officials or others to make their cyberattacks even more believable. According to the identity verification company Onfido, deepfake attacks have increased by 3,000% in just the last year.

Source: Onfido, Identity Fraud Insights Report 2024

The second area of cybersecurity risk noted in the guidelines is AI-enhanced cyberattacks, in which AI-enhanced malware, such as ransomware, delivered through social engineering is more sophisticated and increasingly capable of evading defensive security measures. In addition, AI is increasingly being used by less sophisticated cybercriminals to create highly complex malware. As indicated in the guidelines, “This lower barrier to entry for threat actors, in conjunction with AI-enabled deployment speed has the potential to increase the number and severity of cyberattacks.”

The third area of cybersecurity risk identified in the guidelines is the vulnerability of the vast amounts of nonpublic information maintained by banks, insurance companies and financial services companies, including biometric data such as the facial and fingerprint recognition data used for authentication purposes. If stolen by cybercriminals, that data would enable them to bypass some forms of multi-factor authentication as well as create believable deepfakes.

The fourth area of cybersecurity risk described in the guidelines is increased vulnerability due to supply chain dependencies. Even if a company has a robust cybersecurity program, it remains vulnerable to supply chain attacks, in which cybercriminals compromise the developers of software used by banks, insurance companies, financial services companies and others, infecting that software with malware that is then downloaded by its users, who are the real targets. Supply chain attacks have been responsible for major ransomware attacks and data breaches, such as the SolarWinds supply chain attack that affected 18,000 companies using its software, including Microsoft.

Source: Fortinet, What Are Supply Chain Attacks? Examples and Countermeasures
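
The Fortinet piece cited above covers countermeasures as well as examples. One basic, widely used control is verifying the integrity of vendor-supplied software before installing it. Below is a minimal sketch in Python, assuming the vendor publishes a SHA-256 checksum for each release; the file name and expected digest here are hypothetical placeholders, not values from any real vendor.

```python
# Illustrative sketch: verify a downloaded software update against the
# checksum published by the vendor before installing it. The file path and
# expected digest are hypothetical placeholders.
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_update(update_path: str, expected_sha256: str) -> bool:
    """Return True only if the file's digest matches the published value."""
    return sha256_of(Path(update_path)).lower() == expected_sha256.lower()

if __name__ == "__main__":
    # Placeholder values for illustration only.
    update_file = "vendor_update.pkg"
    published_hash = "0000000000000000000000000000000000000000000000000000000000000000"
    if verify_update(update_file, published_hash):
        print("Checksum matches published value; proceed with installation.")
    else:
        print("Checksum mismatch; do not install.", file=sys.stderr)
        sys.exit(1)
```

A check like this only helps if the published checksum itself has not been tampered with, which is one reason the guidelines quoted below call for multiple overlapping layers of controls rather than reliance on any single safeguard.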

While the guidelines do not require specific cybersecurity steps to be taken, they do advise that, in order to comply with 23 NYCRR Part 500 in light of the ongoing threats posed by AI, companies “provide multiple layers of security controls with overlapping protections so that if one control fails, other controls are there to prevent or mitigate the impact of an attack.”

And while AI can be a weapon used by cybercriminals, it can also be used to defend against cyberattacks. The guidelines state, “organizations should explore the substantial cybersecurity benefits that can be gained by integrating AI into cybersecurity tools, controls and strategies. AI’s ability to analyze vast amounts of data quickly and accurately is tremendously valuable for: automating routine repetitive tasks, such as reviewing security logs and alerts, analyzing behavior, detecting anomalies, and predicting potential security threats; efficiently identifying assets, vulnerabilities and threats; responding quickly once a threat is detected; and expediting recovery of normal operations.” So while AI may be the problem, it may also help provide the solution.
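
As a concrete illustration of the “analyzing behavior, detecting anomalies” use the guidelines describe, here is a minimal sketch in Python of an unsupervised detector flagging unusual login activity in security logs. It is not part of the DFS guidance; the feature names (hour of day, failed attempts, megabytes transferred) and the synthetic data are illustrative assumptions, and a real deployment would train on an institution’s own log history.

```python
# Minimal, illustrative sketch: flagging anomalous login events with an
# unsupervised model. The features and data are hypothetical examples of
# the "analyzing behavior, detecting anomalies" task the guidelines mention.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic "normal" login events: [hour_of_day, failed_attempts, MB_transferred]
normal_events = np.column_stack([
    rng.normal(13, 3, 1000),   # most logins during business hours
    rng.poisson(0.2, 1000),    # very few failed attempts
    rng.normal(50, 15, 1000),  # typical data transfer volume
])

# Train the detector on historical activity assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_events)

# Score new events: a routine login, and a 3 a.m. login with many failed
# attempts and a large transfer.
new_events = np.array([
    [14.0, 0, 55.0],
    [3.0, 9, 900.0],
])

# predict() returns 1 for inliers and -1 for anomalies.
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY - review" if label == -1 else "normal"
    print(f"event {event.tolist()} -> {status}")
```

In practice, a detector like this would feed alerts into the layered controls described above rather than acting on its own.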
