Hackers have upped their game by exploiting artificial intelligence tools to craft cyberattacks ranging from ransomware to election interference and deepfakes.
“They are increasingly using AI tools to build their codes for cyberattacks,” said William Akoto, assistant professor of international politics at Fordham, adding that every new AI feature added to platforms like ChatGPT makes hackers’ work easier and leaves corporations and government agencies vulnerable. “It’s lowering the bar on these attacks.”
President Joe Biden said the “warp speed” at which this technology is advancing prompted him Monday to sign an executive order using the Defense Production Act to steer how companies develop AI so they can make a profit without risking public safety.
Akoto, who studies the international dynamics of cyberattacks, said the executive order is a step in the right direction.
“Presently, the U.S. lags behind global counterparts such as the E.U., U.K., and China in establishing definitive guidelines for AI’s evolution and application,” he said. “So this directive is a much-needed measure in bridging that gap. It is comprehensive, clarifying the U.S. government’s perspective on AI’s potential to drive economic growth and enhance national security.”
The president’s wide-ranging order in part requires AI developers to share safety test results with the government and to follow safety standards that will be created by the National Institute of Standards and Technology. Biden said this is the first step in government regulation of the AI industry in the U.S., a field he said needs to be governed because of its enormous potential for both promising and dangerous ramifications.
But despite its noble intentions, Akoto said, “The practical implementation of these measures will present significant challenges, both for federal oversight bodies and the technology sector. A critical issue is the misalignment between the economic and market forces currently influencing AI technology firms and the Biden administration’s aspirations for cautious, well-evaluated, and transparent AI development. Without realigning these incentives with the administration’s objectives, tangible, positive outcomes from this executive order will remain elusive.”
Ultimately, the effectiveness of this initiative will hinge on how robustly the government enforces AI technology companies’ compliance, Akoto said.