Beyond GPUs: Harnessing CPU Power and Cloud Portability for AI Inference

A Personal Prelude

As an avid reader, I've always been drawn to the boundless landscapes that books offer, where stories unfold in the vast theater of the mind. That journey through narratives has often led me to the vibrant world of cinema, where the imaginations of countless others come to life on screen. It's within this rich interplay of creation and discovery that the idea for Moviemind took root. Conceived as a quest to harness the burgeoning powers of artificial intelligence (AI), Moviemind curates movie recommendations tailored to your tastes, drawing on the movies you've already enjoyed. It uses Large Language Models (LLMs) and prompt engineering to surface films that resonate with you, all powered by CPUs and bolstered by cloud portability.

Deployment Guide

With just a few clicks and basic command-line knowledge, anyone can deploy Moviemind on their own Akamai cloud compute account using a Terraform template and the instructions provided here, at a cost of $0.22 per hour.

Rethinking GPU Dependency

Traditionally, GPUs have been celebrated for their ability to significantly accelerate the training of LLMs. However, there's an emerging dialogue challenging the belief that GPUs are an absolute necessity for AI deployment. The crux of deploying any AI application lies not in following trends but in a nuanced understanding of the specific requirements of each use case. Essentially, it's about asking the right questions: "What level of compute and network latency is acceptable for my application? Is it possible to optimize compute efficiency and reduce cost while also bringing the application closer to the end user?"

Moviemind serves as a testament to this evolving perspective, showing that CPUs, with their parallel processing capabilities, offer a viable and accessible option for AI inference tasks. For my use case of personal movie recommendations, single- to double-digit milliseconds (ms) of compute latency was acceptable. The project highlights the importance of tailoring processing resources to the particular demands of an application, and it makes a case for CPUs' efficiency and scalability in handling inference workloads. Through Moviemind, we're invited to reconsider our hardware choices, aligning technological resources with the unique needs of each project.
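A quick way to sanity-check that latency budget is to time the CPU-bound steps directly. Here is a minimal sketch, assuming the sentence-transformers embedding model introduced later in this post; the query string is purely illustrative:

```python
import time
from sentence_transformers import SentenceTransformer

# Load the embedding model once at startup (the slow part).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Time a single per-request embedding pass on CPU.
query = "cerebral science fiction with a melancholic tone"
start = time.perf_counter()
model.encode(query)
print(f"embedding latency: {(time.perf_counter() - start) * 1000:.1f} ms")
```

On a typical cloud vCPU this kind of per-request encoding lands in the double-digit-millisecond range, and measurements like this, rather than habit, are what should drive the GPU-versus-CPU decision.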

Technical Integration and Synergy

Moviemind's infrastructure is a harmonious amalgamation of technologies, each playing a pivotal role in operational efficiency. Terraform provisions resources on Akamai compute services for repeatable deployments, while Docker ensures consistency across deployment stages. FastAPI, a modern web framework for building APIs with Python, supports asynchronous operations, enabling Moviemind to handle multiple user requests concurrently. This technical orchestration mirrors the crafting of a compelling narrative, where each element contributes to the seamless delivery of personalized movie recommendations.
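As a rough illustration of that concurrency, here is a minimal FastAPI sketch, not Moviemind's actual API: the /recommend route, the request model, and the generate_recommendations stub are all hypothetical stand-ins for the real pipeline.

```python
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class RecommendRequest(BaseModel):
    liked_movies: list[str]  # titles the user already enjoyed

def generate_recommendations(liked: list[str]) -> list[str]:
    # Stand-in for the real embedding-search + LLM pipeline.
    return [f"Something in the spirit of {title}" for title in liked]

@app.post("/recommend")
async def recommend(req: RecommendRequest) -> dict:
    # Push the blocking, CPU-bound inference to a worker thread so the
    # event loop stays free to accept other requests concurrently.
    recs = await asyncio.to_thread(generate_recommendations, req.liked_movies)
    return {"recommendations": recs}
```

Served with uvicorn, overlapping requests are accepted while earlier ones are still computing, instead of queuing behind a single blocked handler.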

The Core of Moviemind: A Blend of Open-Source Innovation and Self-Sufficiency

At the core of Moviemind lies a strategic integration of two open-source models, encapsulating the essence of modern AI's potential when leveraged within a closed system. This approach demonstrates the project's capability to take pre-trained models, which may well have been trained on GPU-rich cloud infrastructure, and deploy them independently, running inference on a CPU-powered compute instance without relying on external APIs. This methodology underscores the autonomy and flexibility of AI in personalized applications, offering a seamless, private, and efficient user experience.
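To make that self-sufficiency concrete, here is a minimal sketch of loading both models from local files with llama-cpp-python and sentence-transformers; the path and tuning parameters are assumptions, not Moviemind's exact configuration:

```python
from llama_cpp import Llama
from sentence_transformers import SentenceTransformer

# Both models load from local weights; nothing calls out to an external
# inference API at request time.
llm = Llama(
    model_path="models/mistral-7b-openorca.Q4_K_M.gguf",  # quantized weights on disk
    n_ctx=2048,    # context window for the prompt plus retrieved overviews
    n_threads=8,   # roughly match the instance's vCPU count
)
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
```

The Q4_K_M quantization keeps the 7B model's memory footprint around 4 to 5 GB, which is a large part of what makes a modest CPU instance sufficient in the first place.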

Deep Interactions with mistral-7b-openorca.Q4_K_M.gguf: Serving as the primary engine for understanding and responding to user queries, the mistral-7b-openorca.Q4_K_M.gguf model showcases AI's capacity to personalize interactions. This LLM works with context retrieved from an extensive vector database, processing user inputs to deliver customized movie recommendations. Pre-trained on an extensive dataset of movie information up to the year 2021, it enables Moviemind to offer insights and suggestions with a deep understanding of cinematic content.
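A hedged sketch of that interaction, reusing the llm handle from the previous snippet; the context string and prompt wording are illustrative, not Moviemind's actual prompt engineering:

```python
# `context` would normally come from the vector search described below.
context = "Arrival (2016): A linguist races to decode an alien language."

prompt = (
    "You are a movie recommendation assistant.\n"
    f"Candidate movies:\n{context}\n"
    "The user enjoyed Blade Runner 2049 and Interstellar.\n"
    "Recommend one candidate and briefly explain why it fits.\nAnswer:"
)

# Single CPU-only completion; tokens are generated from local weights.
out = llm(prompt, max_tokens=200, temperature=0.7)
print(out["choices"][0]["text"].strip())
```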

Vector Embeddings with sentence-transformers/all-MiniLM-L6-v2: To overcome the temporal constraints of the mistral-7b-openorca.Q4_K_M.gguf model, Moviemind uses the sentence-transformers/all-MiniLM-L6-v2 model. It excels at generating vector embeddings from the latest textual data, which, in my case, is sourced from The Movie Database (TMDB). This allows the system to recommend more recent movies, effectively bridging the gap left by the original model's training cutoff.
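Under the hood this amounts to encoding each overview once and comparing vectors at query time. A minimal sketch, using a handful of in-memory TMDB-style overviews in place of a real vector database:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative stand-ins for overviews fetched from TMDB.
overviews = [
    "Dune: Part Two (2024): Paul Atreides unites with the Fremen.",
    "Oppenheimer (2023): The physicist who led the Manhattan Project.",
    "Past Lives (2023): Two childhood friends reunite decades later.",
]

# Normalized embeddings turn cosine similarity into a plain dot product.
doc_vecs = embedder.encode(overviews, normalize_embeddings=True)
query_vec = embedder.encode("epic science fiction sagas", normalize_embeddings=True)

scores = doc_vecs @ query_vec
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.3f}  {overviews[idx]}")
```

The top-scoring overviews are what get handed to the LLM as candidate context for the final recommendation.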

Conclusion and Future Outlook

Moviemind, beyond being a personal exploration in AI-driven movie recommendations, showcases the broader implications of evolving technology. Running on CPU power and Akamai compute services, the project stands as a testament to efficient, user-focused AI inference. The benefits of Akamai, including predictable costs and ease of use, pave the way for future innovations. With Akamai's upcoming Generalized Edge Compute (Gecko), I anticipate even lower latency, bringing AI inference closer to users and enhancing service quality.

Though Moviemind is small, it hints at vast possibilities across industries like Media and Entertainment, Online Gaming, SaaS, and Autonomous Driving, where CPU inference and proximity computing could significantly improve efficiency and personalization.

Looking ahead, Moviemind's journey highlights the potential within AI's current framework and signals a shift towards more innovative, user-aligned applications. The move towards open-source models, combined with the strategic use of cloud portability and CPU capabilities, opens new avenues for technology to enrich user experiences across domains, pointing to a future rich with opportunities for technological breakthroughs and deeper connections through AI.