Skip to main content
Back to Blogs Industry Perspectives

The AI Shockwave: How DeepSeek’s Meteoric Rise is Reshaping the Enterprise Chatbot Landscape

By Thyaga Vasudevan - Executive Vice President, Product

February 3, 2025 6 Minute Read

DeepSeek, a Chinese artificial intelligence startup founded in 2023, has experienced a meteoric rise in popularity over the past week. Not only did it surpass ChatGPT to become the highest-rated free app on the U.S. App Store, but the AI assistant also had a profound market impact, as major technology stocks experienced significant declines. Nvidia, a leading AI chip manufacturer, saw its shares plummet by nearly 17%, resulting in a loss of approximately $589 billion in market value—the largest single-day loss in Wall Street history.

The innovation around DeepSeek represents an increased democratization of AI, which is good for humanity at large. The AI company’s innovation has led to the offering of an open-source AI model that rivals existing platforms in performance while being more cost-effective and energy-efficient. The app’s user-friendly interface and transparent “thinking out loud” feature have further enhanced its appeal, allowing users to follow the AI’s reasoning process.

The advent of yet another AI chatbot with its own LLM model also poses an important question to companies, especially large enterprises, as they increase their AI investment. How should enterprises evaluate a new AI chatbot for their consumption? What factors go into deciding the benefits and downsides to employees’ consumption of the AI application and corporate adoption? Recent reports and real-world incidents show that certain LLMs—especially open-source variants lacking robust security frameworks—pose significant threats to data security, regulatory compliance, and brand reputation.

In this blog, we explore:

  • The rise of risky LLMs, like DeepSeek
  • Key security vulnerabilities associated with AI
  • How enterprises can evaluate, govern, and secure new AI chatbots
  • Why an integrated approach—such as Skyhigh Security SSE—is crucial

The Rise of Risky LLMs and Chatbots

Open-source LLMs like DeepSeek have sparked both excitement and concern. Unlike enterprise-vetted AI solutions, open-source LLMs often lack the robust security controls needed to safeguard sensitive business data as shown in a recent report from Enkrypt AI:

  • 3x more biased than comparable models
  • 4x more likely to generate insecure code
  • 11x more prone to harmful content

Despite these issues, DeepSeek soared to the top of the Apple App Store, surpassing even ChatGPT by hitting 2.6 million downloads in just 24 hours (on 28th Jan 2025). This explosive adoption highlights a fundamental tension: AI is advancing at breakneck speed, but security oversight often lags behind, leaving enterprises exposed to potential data leaks, compliance violations, and reputational damage.

Key Risk Areas When Evaluating AI Chatbots

As we highlighted in our Skyhigh AI Security Blog, businesses must recognize the inherent risks AI introduces, including:

  • Lack of usage data: Security teams don’t understand how many users within their enterprises are using shadow AI apps to get their work done.
  • Limited understanding of LLM risk: Understanding which AI apps and LLM models are risky is key to governance and this information is not easily acquired.
  • Data exfiltration: In the process of getting work done, users upload corporate data into AI apps and this could lead to exfiltration of sensitive data.
  • Adversarial prompts: AI chatbots can often provide responses which are biased, toxic, or simply incorrect (hallucination). In addition, they can provide code which can contain malware. Consumption of these responses can cause problems for the company.
  • Data Poisoning: Enterprises are creating custom public or private AI applications to suit their business needs. These apps are trained and tuned using company data. If the training data is compromised either inadvertently or by malicious intent, it can lead to the custom AI app providing incorrect information.
  • Compliance & Regulatory risks: Use of AI apps opens enterprises up to greater compliance and regulatory risks, either due to data exfiltration, exposure of sensitive data, or incorrect or adversarial prompts associated with custom AI chatbots.

Why an Integrated Approach Matters: Skyhigh Security SSE

As enterprises evaluate new AI apps or chatbots they should consider if they have the tools to apply the necessary controls to protect their corporate assets. They should ensure that their security stack is positioned not just to apply the controls on AI applications, but also to evaluate and respond to malicious activity and threats that arise from these applications.

Security Services Edge (SSE) solutions such as Skyhigh Security are a key component of enterprise AI security. These tools are already integrated with the enterprise security stack as companies have secured on-prem and cloud traffic. Security teams have already defined governance and data protection policies and these can be easily extended to AI applications. And finally, by covering web, shadow apps, sanctioned apps, and private apps by their flexible deployment modes, SSE solutions can cover the spectrum of AI footprint within the enterprise and provide comprehensive security.

Here are the top controls enterprises are looking to apply on AI apps:

  • Governance of Shadow AI: Driving governance of shadow AI applications requires understanding usage and risk of AI applications as well as applying controls. Leading SSE solutions provide comprehensive visibility into shadow AI applications. In addition, they provide a deep understanding of AI risk, which includes how risky the underlying LLM model is to risks such as jailbreaking, bias, toxicity, and malware. Finally, these applications can be detected, grouped, and controls can be enforced without requiring manual intervention.
  • Data Protection: The primary concern that enterprises have with AI apps is the exfiltration of sensitive corporate data into unsanctioned and risky AI apps as employees are looking to take advantage of the significant productivity gains offered by AI. This problem is not different from any other shadow application, but has gained prominence due to the significant growth that AI apps have seen in a short period of time. Using SSE solutions, enterprises can extend their existing data protection controls to AI apps. While some solutions only offer these capabilities for corporate sanctioned apps which are integrated via APIs, leading SSE solutions, such as Skyhigh Security, offer unified data protection controls. This means the same policy can be applied to a shadow app, sanctioned app, or private app.
  • Adversarial Prompt controls: The advent of LLM models has given rise to a new risk vector in adversarial prompts. This refers to end users attempting to manipulate the LLM models to provide undesirable or illegal information such as with jailbreaking or prompt injections. It could also refer to the AI apps providing toxic, biased, dangerous, NSFW, or incorrect content in their responses. In either of these cases, the company is at risk of this content being used within its corporate material and making it vulnerable to regulatory, governance, and reputational risks. Companies are looking to apply controls to detect and remediate risky prompts just like they do with DLP.
  • Data Poisoning remediation: As enterprises increasingly create their custom AI chatbots using OpenAI GPTs or Custom Copilots, the sanctity of the training data used to train these chatbots has gained importance from a security perspective. If someone with access to this corpus of training data ‘poisons’ it with incorrect or malicious inputs, then it likely impacts the chatbot’s responses. This could subject the company to legal or other business risks, especially if the chatbot is open to public access. Enterprises are already performing on-demand (or data-at-rest) DLP scans on training data to remove sensitive data. They are also looking to perform similar scans to identify potential prompt injection or data poisoning attempts.
  • Compliance and regulatory enforcement: Enterprises are using SSE solutions to enforce governance and regulatory compliance, especially with data being uploaded to cloud apps or shared with external parties. As they adopt AI in multiple corporate use cases, they are looking to the SSE solutions to extend these controls to AI apps and continue to enable access to their employees.

The Future of AI Security

The rapid evolution of AI demands a new security paradigm—one that ensures innovation doesn’t come at the cost of data security. Enterprises looking to leverage LLMs must do so with caution, adopting AI security frameworks that protect against emerging threats.

At Skyhigh Security, we are committed to helping businesses securely embrace AI while safeguarding their most critical assets. To learn more about how to protect your organization from risky AI usage, explore our latest insights in the Skyhigh AI Security Blog.

Thyaga Vasudevan

About the Author

Thyaga Vasudevan

Executive Vice President, Product

Thyaga Vasudevan is a high-energy software professional currently serving as the Executive Vice President, Product at Skyhigh Security, where he leads Product Management, Design, Product Marketing and GTM Strategies. With a wealth of experience, he has successfully contributed to building products in both SAAS-based Enterprise Software (Oracle, Hightail – formerly YouSendIt, WebEx, Vitalect) and Consumer Internet (Yahoo! Messenger – Voice and Video). He is dedicated to the process of identifying underlying end-user problems and use cases and takes pride in leading the specification and development of high-tech products and services to address these challenges, including helping organizations navigate the delicate balance between risks and opportunities. Thyaga loves to educate and mentor and has had the privilege to speak at esteemed events such as RSA, Trellix Xpand, MPOWER, AWS Re:invent, Microsoft Ignite, BoxWorks, and Blackhat. He thrives at the intersection of technology and problem-solving, aiming to drive innovation that not only addresses current challenges but also anticipates future needs.

Back to Blogs

Trending Blogs

Industry Perspectives

Why Unified Data Security is Essential for Modern Enterprises

Hari Prasad Mariswamy January 29, 2025

Industry Perspectives

Data Privacy Day 2025 – Clear and Practical Privacy Tips for Everyday Users

Rodman Ramezanian January 28, 2025

Industry Perspectives

OWASP Top 10 LLM Threats: How Skyhigh SSE Leads the Way

Sarang Warudkar December 16, 2024

Industry Perspectives

Four Steps to Align with NIST AI Framework Using Skyhigh SSE

Sarang Warudkar - Sr. CASB Technical Product Marketing Manager, Skyhigh Security and John Duronio December 12, 2024