
The debate continues over whether AI can effectively mitigate the security threats that stem from AI itself. Notably, several security companies have begun integrating AI into their defences against cyber threats.
Appdome chief product officer Chris Roeckl said his company will not replace its security engineering team with AI simply to follow the current AI trend.
“We are a security company. We’re not about to replace our highly valuable security engineers with a coding engine anytime soon. The value of our security research team lies in their creativity and deep understanding of our code base. The human element is certainly very important to us in terms of how we provide our solution,” Roeckl told AIM in an exclusive interaction.
Founded in 2012 and headquartered in Redwood City, California, Appdome is a software company that provides a mobile app security and integration platform.
Appdome’s cyber defence automation platform is a no-code platform that enables app developers to add over 300 security, anti-fraud, anti-malware, anti-cheat, and other protections to Android and iOS apps.
Currently, Appdome's platform is powered by a machine-learning engine; it has not yet integrated large language models (LLMs).
“It’s an unanswered question. Our research team is currently exploring – do we leverage AI to combat AI or rely on good old traditional practices? This remains an open question as we evaluate whether an AI-driven response could prove advantageous,” he added.
But There Are Benefits
During our discussion, Roeckl touched upon the benefits AI could bring to Appdome’s platform. For instance, AI could play a great role in recommending security features to Appdome’s clients, choosing from the 300+ features they have on their platform.
“Depending on where your app is, and what kind of app it is, we might be able to provide very specific recommendations on how to secure your app. So the idea of using an AI-based recommendation engine is very interesting to us,” he said.
However, it is not yet decided whether the recommendation engine will be powered by an LLM or built as a conventional recommender. Roeckl noted that it could be a mix of AI and the company's existing machine-learning engine.
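To make the idea concrete, here is a minimal sketch of what a simple rule-based feature recommender might look like. Everything in it is hypothetical: the feature names, app attributes and selection rules are illustrative stand-ins, not Appdome's actual catalogue or logic, and a real system could swap these rules for an ML- or LLM-driven ranker.

```kotlin
// Minimal sketch of a rule-based security-feature recommender.
// All feature names, tags and app attributes are hypothetical.

data class AppProfile(
    val category: String,       // e.g. "banking", "gaming", "retail"
    val region: String,         // e.g. "EU", "US", "IN"
    val handlesPayments: Boolean
)

data class Protection(val name: String, val tags: Set<String>)

// A tiny illustrative catalogue standing in for a 300+ feature list.
val catalogue = listOf(
    Protection("Anti-tampering", setOf("baseline")),
    Protection("Root/jailbreak detection", setOf("baseline")),
    Protection("Anti-cheat", setOf("gaming")),
    Protection("Transaction signing", setOf("payments", "banking")),
    Protection("Data-at-rest encryption", setOf("EU", "banking"))
)

// Recommend every protection whose tags match the app's profile.
fun recommend(app: AppProfile): List<Protection> =
    catalogue.filter { p ->
        "baseline" in p.tags ||
        app.category in p.tags ||
        app.region in p.tags ||
        (app.handlesPayments && "payments" in p.tags)
    }

fun main() {
    val bankingApp = AppProfile(category = "banking", region = "EU", handlesPayments = true)
    recommend(bankingApp).forEach { println(it.name) }
}
```

The open question Roeckl describes is essentially whether hand-written rules like these should be replaced or augmented by an AI-driven recommendation layer on top of the existing machine-learning engine.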
AI is Making Bad Actors Good Coders
While Appdome is still weighing the use of LLMs, new threats are emerging as AI grows more popular. For instance, generative AI is making it easier for bad actors to write malware.
ChatGPT can write code for free, while developer-focused tools such as GitHub Copilot are available for $10 a month. GitHub, however, has measures in place to prevent Copilot from being used to write malware and similar malicious content.
“We see that AI coding is lowering the barrier of entry for bad actors to actually code attacks. So now we see that mobile apps are under increasing threat of attack because bad actors can write good code with the help of AI even though they don’t have good coding abilities,” Roeckl said.
Phone-Based AI Agents Will Open Doors to More Security Threats
AI is touching almost every field, and AI agents are expected to be the next big leap. Experts envision that most consumer interactions online will involve agents performing tasks on users’ behalf.
Pretty soon, your smartphone’s built-in AI agent might talk to your preferred food delivery app’s AI agent to place orders automatically on your behalf. This, according to Roeckl, opens the door to more security vulnerabilities.
These systems rely on application programming interfaces (APIs) to communicate with one another. However, APIs can be intercepted and misused for malicious purposes.
“For instance, many APIs in mobile apps are exposed, leading to potential bot attacks, such as sneaker bots. Mobile channels are increasingly targeted for such attacks due to embedded coding logic within the apps.
“Failing to secure the app can expose the coding logic of the APIs, posing a significant threat. Therefore, as the API economy drives tighter interconnections between systems, brands must heighten vigilance in safeguarding these systems,” Roeckl said.
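To illustrate the kind of exposure Roeckl describes, here is a minimal sketch of a mobile client whose API endpoint and credentials are embedded in the app code. The endpoint, header names and key are all hypothetical. The point is that once an unprotected binary is decompiled, this logic is visible, and a bot (a sneaker bot, for example) can reproduce the same requests outside the app and drive them at scale.

```kotlin
// Hypothetical example of API logic embedded in a mobile app.
// Endpoint, header names and the hard-coded key are illustrative only.
import java.net.HttpURLConnection
import java.net.URL

object CheckoutClient {
    // A secret shipped inside the app binary: recoverable by anyone
    // who decompiles an unprotected APK or IPA.
    private const val API_KEY = "hypothetical-embedded-key"
    private const val ENDPOINT = "https://api.example-shop.test/v1/checkout"

    fun reserveItem(sku: String): Int {
        val conn = URL(ENDPOINT).openConnection() as HttpURLConnection
        conn.requestMethod = "POST"
        conn.doOutput = true
        // Because the "signing" scheme lives in client code, a bot can
        // replicate these headers and hammer the endpoint directly.
        conn.setRequestProperty("X-Api-Key", API_KEY)
        conn.setRequestProperty("Content-Type", "application/json")
        conn.outputStream.use { it.write("""{"sku":"$sku","qty":1}""".toByteArray()) }
        return conn.responseCode
    }
}
```

Protections of the kind Appdome layers onto apps, such as anti-tampering, obfuscation and anti-bot defences, are aimed at making exactly this sort of extraction and replay harder.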
AI is Not Creating a New Category of Threats, Yet
While generative AI does open the door to more threats in mobile security, according to Roeckl, no fundamentally new kinds of threats are emerging. He does, however, acknowledge that AI is driving a volumetric increase in threats overall.
“We’re not seeing unique attacks from AI. While we can track and identify them as originating from AI-powered systems, they largely resemble the standard types of attacks we encounter every day. Recently, we’ve particularly focused on social engineering attacks as a significant area of concern.”
Social engineering attacks are indeed becoming more sophisticated and dangerous with the integration of AI. “There are deepfakes of both images and voices. Additionally, there are phishing attacks, which can lead to mobile app vulnerabilities. AI elements have begun to infiltrate all these various types of attacks,” said Roeckl.