When Automation Meets Poor Security Hygiene

As artificial intelligence becomes a standard feature in mobile applications, a familiar security problem is re-emerging at scale. Developers are racing to integrate cloud services, analytics, and AI models, but many are still embedding sensitive credentials directly into app code. This practice, long considered unsafe, is now exposing vast amounts of infrastructure and user data—often without developers realizing it.
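
To make the practice concrete, the sketch below shows the kind of hardcoding described here, using entirely hypothetical names and placeholder values: a backend credential compiled straight into an Android client, where anyone who downloads the APK can recover it with basic decompilation or string extraction.

    // Hypothetical illustration of the anti-pattern described above.
    // Constants like these ship inside every copy of the APK and can be
    // recovered by decompiling the app or running string extraction on it.
    object BackendConfig {
        const val CLOUD_PROJECT_ID = "example-project-12345"           // placeholder
        const val SERVICE_API_KEY = "AIza-EXAMPLE-DO-NOT-SHIP"         // placeholder
        const val STORAGE_ENDPOINT = "https://storage.example.com/v1"  // placeholder
    }

    // A safer direction (a sketch, not a prescription): keep long-lived secrets on a
    // server you control and hand the client only short-lived, narrowly scoped tokens.
    interface TokenService {
        suspend fun fetchShortLivedToken(userSession: String): String
    }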

A large-scale analysis of Android applications shows how widespread the issue has become. Tens of thousands of AI-enabled apps were found to contain hardcoded secrets such as cloud project identifiers, API keys, and service endpoints. In most cases, these credentials are tied to backend infrastructure rather than AI models themselves, creating an expanded attack surface that automated tools can easily scan and exploit.
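
As a rough illustration of that scanning step, the sketch below walks the output directory of a decompiled APK and pattern-matches every text file for strings that look like credentials. The regex patterns and the directory path are assumptions for illustration, not the researchers' actual tooling.

    import java.io.File

    // Illustrative credential patterns; real scanners use far larger rule sets.
    val secretPatterns = mapOf(
        "Google API key" to Regex("""AIza[0-9A-Za-z_\-]{35}"""),
        "AWS access key ID" to Regex("""AKIA[0-9A-Z]{16}"""),
        "Bearer token" to Regex("""(?i)bearer\s+[0-9a-z._~+/\-]{20,}""")
    )

    // Walk the output of a decompiled APK and report any matching strings.
    fun scanForSecrets(extractedDir: File): List<String> {
        val hits = mutableListOf<String>()
        extractedDir.walkTopDown()
            .filter { it.isFile && it.extension in setOf("xml", "smali", "java", "kt", "json") }
            .forEach { file ->
                val text = file.readText()
                for ((label, pattern) in secretPatterns) {
                    pattern.findAll(text).forEach { match ->
                        hits += "$label in ${file.name}: ${match.value.take(12)}..."
                    }
                }
            }
        return hits
    }

    fun main() {
        // Hypothetical path to decompiler output; replace with your own.
        scanForSecrets(File("decompiled-app/")).forEach(::println)
    }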

According to CyberNews, the problem is not theoretical. Investigators identified hundreds of cloud databases and storage buckets that were not just misconfigured, but actively compromised. Some lacked authentication entirely, leaving user records openly accessible. Others showed clear signs of intrusion, including test tables labeled as proof-of-concept and administrative accounts created by attackers. In total, exposed cloud storage associated with these apps contained hundreds of millions of files, amounting to hundreds of terabytes of data.
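
The no-authentication failure mode is straightforward to verify against infrastructure you own. A minimal sketch, assuming placeholder project and bucket names and following the commonly documented REST endpoint formats for a Firebase Realtime Database and a Google Cloud Storage bucket, simply checks whether a bare, unauthenticated GET returns data at all:

    import java.io.IOException
    import java.net.HttpURLConnection
    import java.net.URL

    // Returns true if a GET with no credentials comes back with HTTP 200.
    // A 401 or 403 at least shows that authentication is being enforced.
    fun isPubliclyReadable(endpoint: String): Boolean {
        val conn = URL(endpoint).openConnection() as HttpURLConnection
        return try {
            conn.requestMethod = "GET"
            conn.connectTimeout = 5_000
            conn.readTimeout = 5_000
            conn.responseCode == HttpURLConnection.HTTP_OK
        } catch (e: IOException) {
            false // unreachable endpoints are not publicly readable
        } finally {
            conn.disconnect()
        }
    }

    fun main() {
        // Placeholder URLs: substitute the database and bucket endpoints you own.
        val candidates = listOf(
            "https://example-project.firebaseio.com/.json?shallow=true",   // database root read
            "https://storage.googleapis.com/storage/v1/b/example-bucket/o" // bucket object listing
        )
        candidates.forEach { println("$it -> publicly readable: ${isPubliclyReadable(it)}") }
    }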

What makes this particularly concerning is how automation amplifies the risk. Attackers are not targeting apps individually; instead, they scan for exposed credentials at scale and exploit whatever they find. Once a secret ships inside a published app, it can be reused indefinitely, even after infrastructure changes, until the credential is rotated and the app itself updated. Many apps still reference cloud resources that no longer exist, reflecting poor security hygiene and limited monitoring.
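
The stale-reference problem can also be checked from the defender's side. A minimal sketch, assuming the hostnames have already been pulled out of an app build (the names below are placeholders), flags references that no longer resolve, a common sign of forgotten infrastructure:

    import java.net.InetAddress
    import java.net.UnknownHostException

    // Flag hostnames referenced by an app that no longer resolve in DNS.
    // A dangling reference often means forgotten infrastructure, and in some cloud
    // setups the missing resource can even be re-registered under the same name.
    fun findDanglingReferences(hostnames: List<String>): List<String> =
        hostnames.filter { host ->
            try {
                InetAddress.getByName(host)
                false // still resolves, so the reference points somewhere
            } catch (e: UnknownHostException) {
                true  // no longer resolves: candidate for cleanup or credential rotation
            }
        }

    fun main() {
        // Placeholder hostnames standing in for values extracted from an app build.
        val extracted = listOf("api.example-live.com", "legacy-bucket.example-gone.com")
        findDanglingReferences(extracted).forEach { println("Dangling reference: $it") }
    }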

Interestingly, integrations with large language models appear to be less problematic. Direct API keys for AI models were rarely found and, when exposed, generally carried lower risk. In most configurations, such keys allow new requests but do not grant access to stored conversations or historical data. The greater danger lies in credentials tied to payments, user messaging, analytics, and cloud storage—where leaked keys can enable impersonation, data manipulation, or even direct financial abuse.

From a defense and homeland security perspective, these findings extend beyond consumer privacy. Mobile apps are widely used by government employees, contractors, and first responders. Insecure applications can become entry points for surveillance, data exfiltration, or lateral movement into sensitive networks. Hardcoded secrets also complicate incident response, as compromised credentials may be unknowingly distributed across thousands of devices.

The takeaway is less about AI itself and more about fundamentals. As AI features move into mainstream apps, old security shortcuts are being reused in new contexts. Without stronger development practices and continuous monitoring, the convenience of AI-powered apps risks turning into a quiet but systemic vulnerability.