3 things businesses must do to secure applications in the AI era

Organizations must quickly adapt their application security strategies to address new threats fueled by AI.

These threats include:

  • More sophisticated bot traffic.
  • More believable phishing attacks.
  • The rise of legitimate AI agents accessing customers’ online accounts on behalf of users.

By understanding the implications of AI for identity and access management (IAM) and taking proactive measures, businesses can stay ahead of the curve and protect their digital assets. Here are the top three actions organizations should build into their security strategies as they prepare their applications for the AI era:

Defend against reverse engineering

Any app that exposes AI capabilities client-side is at risk of particularly sophisticated bot attacks looking to “skim” or spam those API endpoints — and we’re already seeing examples of reverse engineering AI-powered sites to get free AI computing.

Consider the example of GPT4Free, a GitHub project dedicated to reverse engineering sites to piggyback on GPT resources. It accumulated an astonishing 15,000-plus stars in just a few days, making it a blatant public example of reverse engineering.

To prevent reverse engineering, organizations should invest in advanced fraud and bot mitigation tools. Standard anti-bot methods like CAPTCHA, rate limiting and JA3 (a form of TLS fingerprinting) are valuable against ordinary bots, but they are easily defeated by the more sophisticated bots targeting AI endpoints. Protecting against reverse engineering requires more advanced tooling, such as custom CAPTCHAs, tamper-resistant JavaScript and device fingerprinting.
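To make the rate-limiting piece concrete, here is a minimal sliding-window limiter sketched in Python. This is an illustration, not a substitute for a dedicated bot-mitigation product: the class and parameter names are ours, and in practice the client key would combine several signals, such as source IP plus a JA3 hash, rather than a single identifier.

```python
import time
from collections import defaultdict, deque


class SlidingWindowRateLimiter:
    """Allow at most `limit` requests per client within `window` seconds."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # client_key -> timestamps of recent requests

    def allow(self, client_key: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        hits = self._hits[client_key]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] >= self.window:
            hits.popleft()
        if len(hits) >= self.limit:
            return False  # over the limit: block or challenge this client
        hits.append(now)
        return True
```

A gateway would call `allow()` with a composite key (e.g., `f"{ip}:{ja3_hash}"`) before forwarding a request to a costly AI endpoint, returning a CAPTCHA challenge or a 429 when it comes back false.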

Safeguard against far more believable phishing attacks

Regarding the increased believability of phishing attacks, cybersecurity firm Darktrace saw the linguistic complexity of phishing emails jump by nearly 20% in Q1 2023. Additionally, 37% of organizations reported deepfake voice fraud attacks, and 29% reported deepfake videos being used for phishing.

The best way to counter these account takeover attacks is to invest in phishing-resistant multifactor authentication (MFA). Unlike one-time and time-based passcodes, phishing-resistant methods like WebAuthn can’t be accidentally forwarded to an attacker, making them a strong defense against AI-generated phishing emails and websites. In addition, WebAuthn supports two distinct categories of device-based authentication factors that help confirm a user’s identity:

  1. Device-based biometrics like TouchID and FaceID.
  2. External hardware keys, like YubiKey.

With the increasing sophistication of phishing attacks, organizations should prioritize integrating phishing-resistant authentication methods. If your organization isn’t ready to adopt these more advanced MFA methods, the best defense is a fine-grained bot detection layer. In addition, Google just announced a phishing-resistant way to authenticate with Gmail: passkeys. This is another great way to both defend against sophisticated phishing attacks and improve the user experience.
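To illustrate why WebAuthn resists phishing, consider the server-side checks sketched below in Python. The browser binds each assertion to the web origin that requested it, so a credential relayed through a look-alike phishing domain fails the origin check. This is a simplified sketch under assumed names (`EXPECTED_ORIGIN`, `verify_client_data`); a real deployment should use a maintained WebAuthn library and must also verify the authenticator’s signature over the authenticator data, which is omitted here.

```python
import base64
import json
import secrets

# The relying party's real origin (an assumed example value).
EXPECTED_ORIGIN = "https://app.example.com"


def new_challenge() -> str:
    """Generate a random challenge, stored server-side per login attempt."""
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()


def verify_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """Check the origin and challenge the authenticator actually signed over.

    A phishing site at a different origin cannot produce clientDataJSON
    with the relying party's origin, so the check fails.
    """
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"
        and data.get("challenge") == expected_challenge
        and data.get("origin") == EXPECTED_ORIGIN
    )
```

The origin field is filled in by the browser, not the page's JavaScript, which is the property that makes this class of MFA phishing-resistant.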

Model new user permission and access control scenarios

Finally, with the emergence of autonomous AI agents like AutoGPT, companies will face increasing complexity in managing customers’ identities and role-based access controls (RBAC). These autonomous AI assistants can browse the web and perform tasks on behalf of users. To keep up, businesses will need to discern human from bot traffic precisely and manage the user-directed bots looking to engage with customer accounts.

AI agents come with benefits, of course. They will eliminate tedious tasks in users’ personal and professional lives. However, AI agents also introduce new risks for applications to manage. Most consumers and businesses will not want AI agents to have unrestricted access to all their application settings due to their autonomous nature. An agent could mistakenly delete crucial profile information, sign up for a more expensive subscription than intended, expose sensitive account access data, or take other self-directed actions that harm the user, the application or both parties.

To address these challenges, companies must adapt their permission and access control models to accommodate well-intentioned bots while thwarting harmful ones. Establishing fine-grained RBAC is crucial to mitigating the security concerns that arise when autonomous AI agents conduct unsupervised tasks on behalf of real users.
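One way to sketch such fine-grained control is a policy function that distinguishes human sessions from agent sessions and forces a human approval step for sensitive actions, like the profile deletions and plan changes mentioned above. Everything here (the action names, the `Principal` shape, the three-valued result) is illustrative rather than a prescribed design.

```python
from dataclasses import dataclass, field

# Illustrative action catalog; a real application defines its own.
SENSITIVE_ACTIONS = {"delete_profile", "change_plan", "export_account_data"}


@dataclass(frozen=True)
class Principal:
    user_id: str
    is_agent: bool                          # True for a user-directed AI agent
    scopes: frozenset = field(default_factory=frozenset)


def authorize(principal: Principal, action: str) -> str:
    """Return 'allow', 'deny', or 'needs_human_approval' for an action."""
    if action not in principal.scopes:
        return "deny"
    # An agent may hold a scope yet still require a human in the loop
    # before performing destructive or billing-related actions.
    if principal.is_agent and action in SENSITIVE_ACTIONS:
        return "needs_human_approval"
    return "allow"
```

The middle result is the interesting one: rather than a flat allow/deny, the application can pause the agent and ask the real user to confirm the action before it proceeds.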

AI calls for new diligence

As with most technologies, AI comes with advantages, but it also carries substantial risks, especially when it comes to application security. Businesses will need to adopt more sophisticated security technologies to guard against reverse engineering and more powerful phishing attacks, and they will need stronger user access controls.

And, when it comes to application security, humans will still play a vital role, as organizations may need to require human-in-the-loop MFA for specific actions initiated by AI agents or restrict some access. The key will be to strike the right balance between convenience and security.