DeepSeek, AI Controversy, and Secure Deployment: Running Your Own LLM on Akamai

The rise of open-source large language models (LLMs) like DeepSeek has ushered in a new era of AI innovation, enabling developers and enterprises to run their own AI models without dependency on proprietary platforms. However, recent events—including the DDoS attack on DeepSeek, allegations of OpenAI data misuse, and intensifying competition from Alibaba—highlight a crucial reality: deploying AI isn’t just about innovation; it’s about security, compliance, and control.

Akamai Connected Cloud stands out as the ideal platform for running your own LLM, offering cost-effective GPU compute, global accessibility, and enterprise-grade security controls to safeguard against emerging threats. Given the security implications of these models, the U.S. government is also evaluating the risks associated with DeepSeek, particularly in terms of cybersecurity vulnerabilities, potential foreign influence, and unauthorized use of proprietary AI research.

If you choose to deploy DeepSeek or similar models, be aware of these concerns. While Akamai employs a shared security model, where we protect your cloud infrastructure, you retain full responsibility for your application and data within your environment. Ensuring proper security controls, ethical use, and compliance with AI best practices is critical as this space continues to evolve.

My initial thoughts on DeepSeek’s implications can be found here: CloudPortableTech
For an in-depth deployment guide, check out my colleague’s technical breakdown of DeepSeek on Akamai Connected Cloud: DeepSeek R1 on Akamai

The AI Arms Race: Ethical Questions and Security Risks

The AI landscape is evolving rapidly, and recent reports have surfaced allegations that DeepSeek was trained on data taken from OpenAI. These claims suggest that DeepSeek may have leveraged model distillation, a technique in which a smaller "student" model is trained on the outputs of a larger, more advanced "teacher" model, essentially replicating its behavior without direct access to its architecture or weights.

While distillation is widely accepted in AI research, using it to mirror proprietary models without authorization raises major concerns about intellectual property theft, ethical AI development, and fair competition.

Meanwhile, Alibaba has entered the race, claiming its latest AI model surpasses DeepSeek V3 in performance, further intensifying global competition in the LLM space. With these developments, we must ask:
1. Are we in the middle of an AI war?
2. What does this mean for security, ethics, and innovation?

While open-source AI fosters collaboration and accessibility, it also creates new vulnerabilities—from data breaches to potential misuse in surveillance and disinformation campaigns. The responsibility lies in how and where these models are deployed.

A high-stakes battle: The AI arms race demands strategic thinking, ethical considerations, and robust security measures.

Why Choose Akamai Connected Cloud for Your LLM Deployment?

While many enterprises and AI startups rely on centralized AI providers like OpenAI, Anthropic, or Google DeepMind, this approach comes with significant trade-offs. Akamai Connected Cloud offers compelling advantages for running DeepSeek and other LLMs:

Cost-Effectiveness

Akamai’s GPU instances are designed to provide competitive price-performance ratios compared to other major cloud providers. Running your own LLM on Akamai allows you to optimize costs by paying only for the resources you need while benefiting from transparent pricing and predictable billing.

Data Control & Privacy

By hosting your own model on Akamai’s infrastructure, you maintain complete control over your data, eliminating exposure to third-party AI providers—critical for regulated industries and security-conscious enterprises.

Scalability & Edge Deployment

Unlike centralized AI platforms, Akamai’s distributed cloud enables you to deploy LLMs closer to end-users, reducing latency for real-time applications. With over 4,100 points of presence worldwide, we ensure your AI services are secure, scalable, and globally accessible.

Step 1: Deploying DeepSeek on Akamai Connected Cloud

Open-source LLMs like DeepSeek, LLaMA, and Mistral offer developers versatile tools to build diverse AI applications. While high-performance GPUs are often the preferred choice for tasks such as inference and training, it’s important to note that AI inference can also be effectively run on CPUs in certain scenarios. Resources like Harnessing CPU Power for AI Inference and the MovieMind GitHub Repository showcase how CPUs can still deliver strong performance for specific use cases.

For this exercise, however, we’ll focus on deploying DeepSeek with high-performance GPUs to maximize efficiency and scalability. Akamai Connected Cloud provides competitive GPU solutions, making it a cost-effective platform for deploying LLM workloads.

To get started:

  1. Provision an Akamai RTX 4000 Ada GPU instance

  2. Use a Docker container with PyTorch and vLLM for efficient inference

  3. Attach a fast NVMe storage volume for model weights

For a step-by-step DeepSeek deployment guide, refer to: DeepSeek R1 on Akamai
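Once the container is running, vLLM exposes an OpenAI-compatible HTTP API (by default on port 8000). The sketch below shows a minimal client; the model name, base URL, and API key are placeholders you would substitute with your own deployment's values.

```python
import json
import urllib.request

def build_chat_request(prompt, model="deepseek-r1",
                       base_url="http://localhost:8000", api_key=None):
    """Build an OpenAI-compatible chat completion request for a vLLM server.

    `model` and `base_url` are placeholders -- substitute whatever name and
    address your own deployment uses.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:  # vLLM can enforce a key when launched with --api-key
        headers["Authorization"] = f"Bearer {api_key}"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return f"{base_url}/v1/chat/completions", headers, body

def ask(prompt, **kwargs):
    """Send the request to a live server and return the model's reply text."""
    url, headers, body = build_chat_request(prompt, **kwargs)
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Against a live instance, `ask("Hello")` returns the model's reply; keep the endpoint itself behind the access controls covered in Step 2.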

Example deployment via Terraform

Step 2: Securing Your LLM with Akamai’s Shared Security Model

Security is a shared responsibility between Akamai and the user. While Akamai secures the cloud infrastructure, you must protect your application and data. The DDoS attack on DeepSeek highlights how AI deployments can become targets for botnets, prompt injection attacks, and unauthorized access.

Here are some tips to secure your AI model:

DDoS Protection – Use Akamai’s built-in protections to prevent large-scale attacks before they reach your instance.
API Security – Restrict access to your model with authentication, rate limiting, and Web Application Firewall (WAF) rules.
Zero Trust Controls – Limit SSH and API access to authorized users and enforce strong authentication measures.
Data Protection – Store model weights securely, encrypt sensitive data, and monitor for unauthorized access.
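On the API-security point, rate limiting is usually enforced at the WAF or API gateway, but a minimal in-process token bucket (thresholds here are arbitrary) illustrates the mechanism:

```python
import time

class TokenBucket:
    """Per-client rate limiter: `rate` requests/second, bursts up to `capacity`."""

    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per API key keeps a single client from monopolizing the GPU.
buckets = {}

def check_request(api_key: str) -> bool:
    bucket = buckets.setdefault(api_key, TokenBucket())
    return bucket.allow()
```

Combined with authentication and WAF rules in front of the instance, this keeps a runaway or abusive client from exhausting your inference capacity.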

A well-secured AI deployment isn’t just about compliance—it’s about ensuring model integrity, availability, and responsible AI use.
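On the model-integrity point above, one cheap, standard-library control is to record a checksum of the weight files at deploy time and verify it before serving. The file path below is a placeholder for wherever your deployment stores its weights.

```python
import hashlib

def file_sha256(path):
    """Stream a file through SHA-256 so large weight files never load into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(path, expected_hash):
    """Refuse to serve if the weights on disk don't match the recorded checksum."""
    actual = file_sha256(path)
    if actual != expected_hash:
        raise RuntimeError(f"Model weights at {path} failed integrity check")
    return True
```

At deploy time, store the output of `file_sha256(...)` for your weight files somewhere tamper-resistant; at startup, call `verify_weights` with that value before loading the model.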

Conclusion: Build Smarter, Build Safer on Akamai

Deploying your own DeepSeek-powered LLM on Akamai Connected Cloud combines cost-effectiveness, high performance, and robust security in one platform. However, securing AI models is not optional.

With the AI race heating up, organizations must make strategic choices about where and how they deploy their models. Akamai provides a secure, cost-effective, and globally scalable environment for AI workloads—helping businesses innovate without compromising on security or ethics.

Start deploying your AI workloads on Akamai today! Get Started