By Sarang Warudkar - Sr. Technical PMM (CASB & AI), Skyhigh Security
May 8, 2025 | 5 Minute Read
Welcome to the Wild West of enterprise AI.
Twelve months ago, your CFO was still suspicious of chatbots. Today, they’re asking if you can “get ChatGPT to handle board minutes.” From cautious curiosity to Copilot-powered spreadsheets, enterprises have gone all in on AI. And while the gains are real—speed, scale, and creativity—the risks…? Oh, they’re very real too.
Let’s break down the biggest trends, threats, and facepalm moments from Skyhigh Security’s 2025 Cloud Adoption & Risk Report, with insights from 3M+ users and 2B+ daily cloud events. Buckle up.
AI Usage: From “Maybe Later” to “Make It Do My Job”
AI is now the office MVP. A recent MIT study says ChatGPT cuts writing time by 40%—which is about the time we used to spend wondering where the file was saved. JPMorgan engineers got a 20% productivity boost and, rumor has it, one intern asked Copilot to write their resignation letter before their first day.
At Skyhigh, we’ve seen the AI surge firsthand. In our data, traffic to AI apps has skyrocketed—more than tripling in volume—while uploads of sensitive data to these platforms are rising fast. Meanwhile, traditional “non-AI” business apps? They’re barely keeping up. The workplace isn’t just embracing AI—it’s sprinting toward it.
Translation: AI is winning. Your firewall? Not so much.
The Rise of Shadow AI: When IT Doesn’t Know What HR Is Chatting With
“Shadow AI” might sound like the next must-watch Netflix series, but it’s playing out in real time across enterprises everywhere. Picture this: employees quietly tapping away on ChatGPT, Claude, DeepSeek, and dozens of other AI tools—completely off IT’s radar. It’s a bit like sneaking candy into a movie theater, only this time the candy is customer data, financials, and intellectual property.
The numbers are jaw-dropping: the average enterprise is now home to 320 AI applications. The usual suspects? ChatGPT, Gemini, Poe, Claude, Beautiful.AI—tools as powerful as they are popular. They’re unapproved. They’re unmonitored. And unless someone drops the word “audit,” they’re potentially unstoppable.
Compliance: AI’s Kryptonite
AI apps are fun—until the GDPR fines show up like uninvited guests at a team offsite. Skyhigh’s data reveals a not-so-super side to all this AI enthusiasm. Turns out, 95% of AI apps fall into the medium to high risk zone under GDPR—basically, red flags with a friendly UI.
When it comes to meeting serious compliance standards like HIPAA, PCI, or ISO? Only 22% make the cut. The rest are winging it. Encryption at rest? Most AI apps skipped that memo—84% don’t bother. And multi-factor authentication? 83% say no thanks. But don’t worry, many of them do support emojis. Priorities.
Regulators are watching. And unlike your boss, they read the full report.
Data Leaks via AI: When Your Bot Becomes a Blabbermouth
Remember that Samsung engineer who fed ChatGPT some buggy code—and accidentally handed over semiconductor secrets? That’s not just a cautionary tale anymore. It’s practically a training example.
According to Skyhigh, 11% of files uploaded to AI apps contain sensitive content. The kicker? Fewer than 1 in 10 companies have proper data loss prevention (DLP) controls in place. Meanwhile, employees are out here asking Claude to write product launch plans using Q3 strategy docs like it’s just another day at the prompt. Because what could possibly go wrong?
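For the skeptics, here is roughly what even a bare-bones DLP gate looks like. This is a minimal sketch, not Skyhigh's engine: the regex patterns are deliberately crude, and upload_to_ai_app is a hypothetical stand-in for whatever upload path an app actually uses.

```python
import re

# Illustrative patterns only: production DLP engines add trained classifiers,
# document fingerprinting, and exact-data matching on top of simple regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def upload_to_ai_app(text: str) -> None:
    """Hypothetical stand-in for the real upload call."""
    print(f"Uploaded {len(text)} characters")

def safe_upload(text: str) -> None:
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
    if hits:
        # Block the transfer and surface it to the security team instead.
        raise PermissionError(f"Upload blocked: matched {hits}")
    upload_to_ai_app(text)

safe_upload("Meeting notes, nothing spicy.")  # passes the gate
```

Ten lines of checks before the upload. That's the gap in 9 out of 10 companies.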
Enter DeepSeek: The Rebel AI You Shouldn’t Trust
DeepSeek burst onto the scene in 2025, riding a wave of downloads, buzz, and eye-popping data volumes—including 176 GB of corporate uploads in a single month from Skyhigh customers alone. Impressive? Definitely. Alarming? Absolutely. Here’s the fine print:
- No multi-factor authentication
- No data encryption
- No regard for compliance (GDPR? Never heard of it.)
- No user or admin logging
It’s sleek, fast, and wildly popular—with students. For your SOC 2 audit? It’s a digital landmine.
Microsoft Copilot: The Good AI Child Everyone’s Proud Of
If Shadow AI is the rebellious teen sneaking out past curfew, Copilot is the golden child—polished, popular, and somehow already on the leadership track. It’s now used by 82% of enterprises, with traffic up 3,600x and uploads up 6,000x. Honestly, it’s outperforming your last five interns—and it doesn’t even ask for a coffee break.
But even star students need supervision. Smart enterprises are keeping Copilot in check by scanning everything it touches, wrapping prompts and outputs in DLP, and making sure it doesn’t “learn” anything confidential. (Sorry, Copilot—no spoilers for the Q4 roadmap.)
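What does “wrapping prompts and outputs in DLP” look like in code? Roughly this. In the sketch below, call_copilot is a hypothetical stand-in for the real assistant API, and the redaction list is deliberately toy-sized:

```python
import re

# Toy redaction policy; a real DLP rule set is far richer than one regex.
REDACT = re.compile(r"(?i)\b(confidential|internal only|q4 roadmap)\b")

def redact(text: str) -> str:
    return REDACT.sub("[REDACTED]", text)

def call_copilot(prompt: str) -> str:
    """Hypothetical stand-in for the real assistant API."""
    return f"Draft based on: {prompt}"

def guarded_call(prompt: str) -> str:
    clean = redact(prompt)       # scan what goes in
    reply = call_copilot(clean)  # the assistant does its thing
    return redact(reply)         # scan what comes back out

print(guarded_call("Summarize the confidential Q4 roadmap"))
```

Same idea on the way in and the way out: nothing confidential reaches the model, and nothing confidential leaves it.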
LLM Risk: When AI Hallucinates… and It’s Not Pretty
Large Language Models (LLMs) are like toddlers with PhDs. Genius one moment, absolute chaos the next. Top LLM risks:
- Jailbreaks (“pretend you’re evil ChatGPT”)
- AI-generated malware (BlackMamba, anyone?)
- Toxic content (see: BlenderBot’s greatest hits)
- Bias in outputs (health advice skewed by race/gender)
Key Stats:
It’s not paranoia if your AI is actually leaking secrets and writing ransomware. Skyhigh found that 94% of AI apps come with at least one LLM risk baked in. That’s almost all of them.
Even worse, 90% are vulnerable to jailbreaks—meaning users can trick them into doing things they really shouldn’t. And 76%? They can potentially generate malware on command. So yes, the same app helping draft your meeting notes might also moonlight as a cybercriminal’s intern.
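Guardrails can start as simply as screening prompts before they ever reach the model. The sketch below is a naive keyword filter (real guardrails use trained classifiers and model-side safety layers), but it shows where the check sits:

```python
# Naive substring screen; production guardrails rely on trained classifiers,
# not keyword lists, since paraphrased jailbreaks slip past strings like these.
JAILBREAK_MARKERS = [
    "ignore previous instructions",
    "pretend you are",
    "developer mode",
]

def looks_like_jailbreak(prompt: str) -> bool:
    lowered = prompt.lower()
    return any(marker in lowered for marker in JAILBREAK_MARKERS)

print(looks_like_jailbreak("Pretend you are evil ChatGPT"))  # True
```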
Private AI Apps: DIY AI for the Corporate Soul
Enterprises are saying, “Why trust public tools when you can build your own?”
Private AI apps now handle:
- HR queries
- RFP responses
- Internal ticket resolution
- Sales bot support (who knew the chatbot would know your pricing matrix better than Sales?)
Key Stats:
78% of Skyhigh customers now run their own private AI apps—because if you’re going to experiment with machine intelligence, you might as well do it behind closed doors. Two-thirds are building on AWS (thanks to Bedrock and SageMaker, obviously). It’s the AI equivalent of a gated community.
But “private” doesn’t mean problem-free. These bots might be homegrown, but they can still get into trouble. That’s why smart companies are rolling out SSE solutions with Private Access and DLP—to gently, politely snoop on their internal AIs before something goes wildly off-script.
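To make that concrete, here is a rough sketch of a private bot with a DLP gate bolted on. It assumes AWS Bedrock's Converse API and an example Claude model ID; the regex check is a toy stand-in for a real SSE/DLP engine, so treat the whole thing as an illustration rather than a reference architecture.

```python
import re
import boto3

# Toy pre-flight check; a real deployment would call out to the SSE/DLP engine.
SENSITIVE = re.compile(r"(?i)\b(confidential|salary|account number)\b")

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_private_bot(prompt: str) -> str:
    if SENSITIVE.search(prompt):
        # Block the prompt before it ever reaches the model.
        raise PermissionError("Prompt blocked by DLP policy")
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```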
Final Thoughts: Don’t Fear AI—Just Govern It
Let’s be clear: AI is not the enemy. Unmanaged AI is.
Skyhigh’s 2025 report shows we’re living through a once-in-a-generation shift in enterprise tech. But here’s the kicker—security isn’t about slowing down innovation. It’s about making sure that the AI you use doesn’t send your board deck to Reddit. So, take a breath, read the report, and remember:
- Block sketchy apps like DeepSeek
- Govern copilots like Microsoft Copilot
- Lock down your private AI deployments
- Build policies that treat LLMs like moody teenagers (firm rules, lots of monitoring)
Because the future is AI-driven—and with the right tools, the risks are manageable, too.
Bonus: Download the full 2025 Cloud Adoption and Risk Report—or ask your AI assistant to summarize it for you. Just don’t upload it to DeepSeek.
About the Author
Sarang Warudkar
Sr. Technical PMM (CASB & AI)
Sarang Warudkar is a seasoned Product Marketing Manager with more than 10 years in cybersecurity, skilled in aligning technical innovation with market needs. He brings deep expertise in solutions like CASB, DLP, and AI-driven threat detection, driving impactful go-to-market strategies and customer engagement. Sarang holds an MBA from IIM Bangalore and an engineering degree from Pune University, combining technical and strategic insight.