In 2017, New York’s Department of Financial Services became the first state regulator to issue mandatory cybersecurity regulations for financial institutions such as banks, insurance companies and financial services companies. Now it has taken the lead again, issuing new guidelines to deal with both the threat and the promise of artificial intelligence.
While the new guidelines do not impose specific new requirements beyond the obligations already contained in the existing cybersecurity regulations of 23 NYCRR Part 500, they do explain how banks, insurance companies and financial services companies can use the framework of the existing regulation to assess and address the cybersecurity risks posed by AI.
The new guidelines highlight four specific areas in which AI has increased the danger of cyberattacks.
The first area is social engineering, which the guidelines describe as “one of the most significant threats to the financial services sector.” Through social engineering, cybercriminals have used targeted spear phishing emails, vishing phone calls and smishing text messages in which they pose as a legitimate customer, government agent, vendor or some other trusted source to lure their victims into providing personal information or clicking on a malware-infected link. While some sophisticated cybercriminals have proven exceedingly adept at social engineering, in the past many attempts were almost laughable, particularly when the socially engineered communications were created by foreign cybercriminals whose primary language wasn’t English. But no more.

Using AI, cybercriminals can now craft totally believable spear phishing emails, vishing phone calls and smishing text messages. And things aren’t as bad as you think; they are far worse. Readily available deepfake technology is enabling cybercriminals to mimic the voice or appearance of bank officials or others to make their attacks even more believable. According to the identity verification company Onfido, deepfake attacks have increased by 3,000% in the last year alone.
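To see why AI-polished messages slip past older defenses, consider a toy filter of the sort that once flagged clumsy phishing. This is a hypothetical sketch in Python, not any real product; the keyword list, the scoring and the function name crude_phishing_score are all invented for illustration:

```python
URGENT_WORDS = {"urgent", "immediately", "suspended", "verify", "password"}

def crude_phishing_score(text: str) -> int:
    # Toy heuristic of the kind that caught clumsy, pre-AI phishing:
    # urgency keywords, SHOUTING and excessive punctuation.
    tokens = text.split()
    score = sum(t.strip(".,!").lower() in URGENT_WORDS for t in tokens)
    score += sum(1 for t in tokens if len(t) > 3 and t.isupper())
    score += text.count("!!")
    return score

print(crude_phishing_score("URGENT!! Verify your password immediately"))  # scores high
print(crude_phishing_score("Hi Dana, the Q3 wire details are attached"))  # scores zero
```

A fluent, well-targeted message generated by AI trips none of these signals, which is the guidelines’ point: the old tells are gone.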
The second area of cybersecurity risk noted in the guidelines is AI-enhanced cyberattacks, in which AI-enhanced malware, such as ransomware, delivered through social engineering is more sophisticated and increasingly capable of evading defensive security tools. In addition, AI is increasingly being used by less sophisticated cybercriminals to create highly complex malware. As the guidelines put it, “This lower barrier to entry for threat actors, in conjunction with AI-enabled deployment speed has the potential to increase the number and severity of cyberattacks.”
The third area of cybersecurity risk identified in the guidelines is the vulnerability of the vast amounts of nonpublic information maintained by banks, insurance companies and financial services companies, including biometric data such as the facial and fingerprint scans used for authentication. When stolen, that data can enable cybercriminals to bypass some forms of multi-factor authentication as well as create believable deepfakes.
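The guidelines’ concern is easiest to see in code. The sketch below, a minimal illustration rather than any institution’s actual login flow, layers a possession factor (an RFC 6238 time-based one-time code) on top of a biometric check; the helper names and the simplified byte-for-byte biometric comparison are assumptions made for brevity:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time code: proof of a possession factor."""
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def authenticate(stored_template: bytes, presented_scan: bytes,
                 totp_secret: bytes, submitted_code: str) -> bool:
    # Layered check: the biometric ("something you are") alone is not enough,
    # because a stolen template can be replayed or used to drive a deepfake.
    biometric_ok = hmac.compare_digest(stored_template, presented_scan)
    possession_ok = hmac.compare_digest(totp(totp_secret), submitted_code)
    return biometric_ok and possession_ok
```

The design point matches the guidelines: once the biometric factor leaks, only the independent second factor keeps the account closed.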
The fourth area of cybersecurity risk described in the guidelines is increased vulnerability due to supply chain dependencies. Even a company with a robust cybersecurity program is vulnerable to supply chain attacks, in which cybercriminals compromise the developers of software used by banks, insurance companies, financial services companies and others and infect that software with malware, which is then downloaded by the software’s users, the real targets. Supply chain attacks have been responsible for major ransomware attacks and data breaches, such as the SolarWinds attack, which affected 18,000 companies using its software, including Microsoft.
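One routine control against tampered downloads is checking each software update against a hash the vendor publishes out-of-band. Here is a minimal Python sketch, with the function names invented for illustration:

```python
import hashlib
import hmac
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_update(path: Path, published_sha256: str) -> bool:
    # A tampered download will not match the hash the vendor published
    # out-of-band; refuse to install anything that fails this check.
    return hmac.compare_digest(sha256_of(path), published_sha256.lower())
```

Note the limit of this control: in the SolarWinds incident the malware was inserted inside the vendor’s own build pipeline, so the compromised update shipped with legitimate signatures. That is precisely why the guidelines press for layered defenses rather than any single check.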
While the guidelines do not require specific cybersecurity steps to be taken, they do advise that, to comply with 23 NYCRR Part 500 in light of the ongoing threats posed by AI, companies “provide multiple layers of security controls with overlapping protections so that if one control fails, other controls are there to prevent or mitigate the impact of an attack.” And while AI can be a weapon used by cybercriminals, it can also be used to defend against cyberattacks. The guidelines state, “organizations should explore the substantial cybersecurity benefits that can be gained by integrating AI into cybersecurity tools, controls and strategies. AI’s ability to analyze vast amounts of data quickly and accurately is tremendously valuable for: automating routine repetitive tasks, such as reviewing security logs and alerts, analyzing behavior, detecting anomalies, and predicting potential security threats; efficiently identifying assets, vulnerabilities and threats; responding quickly once a threat is detected; and expediting recovery of normal operations.” So while AI may be the problem, it may also help provide the solution.
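As one concrete example of the anomaly detection the guidelines mention, the sketch below trains scikit-learn’s IsolationForest on synthetic login events and flags an out-of-pattern one; the three features and all of the numbers are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per login event: hour of day, bytes transferred,
# and number of failed attempts preceding the login.
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),          # business-hours activity
    rng.normal(20_000, 5_000, 500),  # typical transfer volume
    rng.poisson(0.2, 500),           # occasional failed attempts
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

suspicious = np.array([[3.0, 900_000.0, 7.0]])  # 3 a.m., huge transfer, repeated failures
print(model.predict(suspicious))  # -1 means the event is flagged as anomalous
```

In production, a model like this would be one overlapping control among many, feeding alerts to analysts rather than acting alone.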