Private LLM: Local AI Chatbot That Works Offline and Privately on Your iPhone, iPad, and Mac

Do You Care About Protecting Your Privacy?

No Connection? Private and Local AI Available Anytime, Anywhere!

Private LLM is a local AI chatbot for iOS and macOS that works offline, keeping your information completely on-device, safe and private. It doesn't need an internet connection, so your data never leaves your device; it stays with you. There are no subscription fees: you pay once and use it on all your Apple devices. It's designed for everyone, with easy-to-use features for generating text, helping with language, and much more. Private LLM uses the latest AI models, quantized with state-of-the-art techniques, to provide a high-quality on-device AI experience without compromising your privacy. It's a smart, secure way to get creative and productive, anytime and anywhere.
Screenshot: the Private LLM chat interface on an iPhone, running a sophisticated language model entirely on-device for privacy and offline use.

Harness the Power of Open-Source AI with Private LLM

Private LLM opens the door to the vast possibilities of AI with support for an extensive selection of open-source LLMs, including the Llama 3, Google Gemma, Microsoft Phi-3, and Mixtral 8x7B families and many more, on your iPhone, iPad, and Mac. This wide range of model support ensures that users on iOS, iPadOS, and macOS can fully utilize the power of AI, tailored specifically to their devices.
Screenshot: the Private LLM model list on an iPhone, showing a variety of downloadable LLMs available for offline use.

Build Your Custom AI: No-Code Needed with Siri and Apple Shortcuts

Discover the simplicity of bringing AI to your iOS or macOS devices without writing a single line of code. With Private LLM integrated into Siri and Apple Shortcuts, you can effortlessly create powerful, AI-driven workflows that automate text parsing and generation, surface information instantly, and enhance creativity. This seamless integration brings AI assistance anywhere in your operating system, making every action smarter and more intuitive. Private LLM also supports the popular x-callback-url specification, adopted by over 70 popular iOS and macOS applications, so it can seamlessly add on-device AI functionality to those apps as well; a minimal sketch of such a call follows the screenshot below.
Screenshot: Private LLM invoked from an Apple Shortcut on an iPhone, personalizing AI interactions on iOS.
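For developers curious how an x-callback-url call into Private LLM might look, here is a minimal Swift sketch. The "privatellm" scheme, the "generate" action, and the prompt parameter are illustrative assumptions rather than the app's documented API; only the overall URL shape follows the x-callback-url specification.

```swift
import UIKit

// Hypothetical x-callback-url call into Private LLM from another iOS app.
// The scheme, action, and parameter names below are assumptions for
// illustration; consult the app's documentation for the real ones.
func askLocalModel(prompt: String) {
    var components = URLComponents()
    components.scheme = "privatellm"          // assumed URL scheme
    components.host = "x-callback-url"        // required by the x-callback-url spec
    components.path = "/generate"             // assumed action name
    components.queryItems = [
        URLQueryItem(name: "prompt", value: prompt),
        // Callback URLs defined by the x-callback-url specification:
        URLQueryItem(name: "x-success", value: "myapp://llm-result"),
        URLQueryItem(name: "x-error", value: "myapp://llm-error")
    ]
    if let url = components.url {
        UIApplication.shared.open(url)        // hands the request off to Private LLM
    }
}
```

In practice, most users never need code like this: the same hand-off can be assembled visually in the Shortcuts app by chaining a text action into Private LLM's Shortcuts action.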

No Login, No Tracking, No API Key, No Subscriptions

Ditch subscriptions for a smarter choice with Private LLM. A single purchase unlocks the app across iPhone, iPad, and Mac, and enables Family Sharing for up to six family members. This approach not only simplifies access but also multiplies the value of your purchase, making private, on-device AI available to your whole family.
Screenshot: the Private LLM prompt field on macOS, ready to generate instant, offline responses from the local language model.

AI Language Services Anywhere in macOS

Transform your writing across all macOS apps with AI-powered tools. From grammar correction to summarization and beyond, Private LLM supports multiple languages, including English and several Western European languages, for flawless text enhancement.
Screenshot showing the Private LLM integration within the macOS system-wide services menu.

Superior Model Performance with State-of-the-Art Quantization

The core of Private LLM's superior model performance lies in its use of the state-of-the-art OmniQuant quantization algorithm. When quantizing LLMs for on-device inference, outlier values in the model weights tend to have a marked adverse effect on text generation quality. OmniQuant handles outliers through an optimization-based learnable weight clipping mechanism, which preserves the model's weight distribution with exceptional precision. RTN (round-to-nearest) quantization, used by popular open-source LLM inference frameworks and the apps built on them, does not handle outlier values during quantization, which leads to inferior text generation quality. OmniQuant quantization, paired with optimized model-specific Metal kernels, enables Private LLM to deliver text generation that is not only fast but also of the highest quality, significantly raising the bar for on-device LLMs. The sketch after the comparison screenshots below illustrates the difference between the two approaches.
Screenshot of Private LLM running the 4-bit OmniQuant quantized Mixtral 8x7B Instruct model, with the Sally prompt.
Screenshot of LMStudio running the Q8_0 quantized Mixtral 8x7B Instruct model, with the Sally prompt.
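To make the quantization distinction concrete, here is a toy Swift sketch comparing plain round-to-nearest quantization with a clipping-based variant in the spirit of OmniQuant. The grid search over clipping thresholds stands in for OmniQuant's learned clipping parameters; the function names and numbers are illustrative only and are not Private LLM's implementation.

```swift
import Foundation

// Toy model of the idea; not Private LLM's code.

// 4-bit round-to-nearest (RTN) quantization of one weight row.
// With no clipping, a single outlier stretches the scale, so every
// other weight loses precision.
func rtnQuantize(_ weights: [Double], bits: Int = 4, clip: Double? = nil) -> [Double] {
    let maxCode = Double((1 << (bits - 1)) - 1)               // 7 for signed 4-bit
    let bound = clip ?? weights.map { abs($0) }.max() ?? 1.0  // default: max |w|
    let scale = bound / maxCode
    return weights
        .map { max(-bound, min(bound, $0)) }                  // clip to the threshold
        .map { ($0 / scale).rounded() * scale }               // round to nearest level
}

// Sketch of weight clipping in the spirit of OmniQuant: instead of learning the
// clipping threshold by optimization, scan a few candidates and keep the one
// with the smallest reconstruction error.
func clippedQuantize(_ weights: [Double], bits: Int = 4) -> [Double] {
    let maxAbs = weights.map { abs($0) }.max() ?? 1.0
    var best = rtnQuantize(weights, bits: bits)
    var bestError = Double.greatestFiniteMagnitude
    for ratio in stride(from: 0.05, through: 1.0, by: 0.05) {
        let candidate = rtnQuantize(weights, bits: bits, clip: maxAbs * ratio)
        let error = zip(weights, candidate)
            .map { ($0.0 - $0.1) * ($0.0 - $0.1) }
            .reduce(0, +)
        if error < bestError {
            bestError = error
            best = candidate
        }
    }
    return best
}

// One outlier (12.0) among small weights: plain RTN wastes most of its levels on it.
let row = [0.12, -0.34, 0.05, 12.0, -0.22, 0.18]
print("RTN:     ", rtnQuantize(row))
print("Clipped: ", clippedQuantize(row))
```

With the outlier clipped, the small weights keep far more of the 4-bit resolution, which is the intuition behind the quality difference visible in the screenshots above.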

See what our users say about us on the App Store

Remarkably good app, very active developer
by Conventional Dog, Apr 7, 2024

Possibly the single best app purchase I've ever made. The developer is constantly improving it and talking with users on Discord and elsewhere. One price includes Mac, iPhone, and iPad versions (with family sharing). Mac shortcuts can be used to create what amount to custom GPTs. (There's even a user-contributed, quite clever bedtime story generator on the website.) The 10.7B-parameter SOLAR LLM (one of many included) running on my 16 GB M1 MacBook Air gives me fast responses that are subjectively almost on the level of GPT-3.5. For something running completely locally with full privacy, it's remarkable. More RAM allows an even larger choice of language models. But the tiniest model running on my iPhone 12 Pro is usable. (Tip: Experiment with changing the system prompt to fine-tune it for your purposes.)

Version 1.8.3, United States

Download the Best Open-Source LLMs

macOS

Google Gemma Based Models

All Intel and Apple Silicon Macs
Gemma 2B IT 💎, Gemma 1.1 2B IT 💎

Mixtral 8x7B Based Models

Apple Silicon Macs with at least 32GB of RAM
Mixtral-8x7B-Instruct-v0.1, Dolphin 2.6 Mixtral 8x7B 🐬, Nous Hermes 2 Mixtral 8x7B DPO

Llama 33B Based Models

Apple Silicon Macs with at least 24GB of RAM
WizardLM 33B v1.0 Uncensored

Llama 2 13B Based Models

Apple Silicon Macs with at least 16GB of RAM
Wizard LM 13B, Spicyboros 13B 🌶️, Synthia 13B 1.2, XWin-LM-13B, Mythomax L2 13B

CodeLlama 13B Based Models

Apple Silicon Macs with at least 16GB of RAM
WhiteRabbitNeo-13B-v1

Llama 2 7B Based Models

All Intel and Apple Silicon Macs
airoboros-l2-7b-3.0, Spicyboros 7b 2.2 🌶️, Xwin-LM-7B v0.1

Solar 10.7B Based Models

Apple Silicon Macs with at least 16GB of RAM
Nous-Hermes-2-SOLAR-10.7B

Phi-2 3B Based Models

All Intel and Apple Silicon Macs
Phi-2 Orange 🍊, Phi-2 Orange Version 2 🍊, Dolphin 2.6 Phi-2 🐬

StableLM 3B Based Models

All Intel and Apple Silicon Macs
StableLM Zephyr 3B 🪁

Yi 6B Based Models

All Intel and Apple Silicon Macs
Yi 6B Chat 🇨🇳

Yi 34B Based Models

Apple Silicon Macs with at least 24GB of RAM
Yi 34B Chat 🇨🇳

iOS

Phi-3 Mini 3.8B Based Models

Devices with at least 6GB of RAM
Phi-3 Mini 4K Instruct

Llama 3 8B Based Models

Devices with at least 6GB of RAM
Llama 3 8B Instruct 🦙, Dolphin 2.9 Llama 3 8B Uncensored 🐬, Llama 3 Smaug 8B

Google Gemma Based Models

Devices with at least 8GB of RAM
Gemma 2B IT 💎, Gemma 1.1 2B IT 💎

Llama 2 7B Based Models

Devices with at least 6GB of RAM
Airoboros l2 7b 3.0, Spicyboros 7b 2.2 🌶️

Phi-2 3B Based Models

Devices with at least 4GB of RAM
Phi-2 Orange 🍊, Dolphin 2.6 Phi-2 🐬, Phi-2 Super 🤖, Phi-2 Orange v2 🍊

H2O Danube Based Models

All devices
H2O Danube 1.8B Chat

StableLM 3B Based Models

Devices with at least 4GB of RAM
StableLM 2 Zephyr 1.6B 🪁, Nous-Capybara-3B V1.9, Rocket 3B 🚀

TinyLlama 1.1B Based Models

All devices
TinyLlama 1.1B Chat 🦙, TinyDolphin 2.8 1.1B Chat 🐬

Yi 6B Based Models

Devices with at least 6GB of RAM
Yi 6B Chat 🇨🇳

How Can We Help?

Whether you've got a question or you're facing an issue with Private LLM, we're here to help you out. Just drop your details in the form below, and we'll get back to you as soon as we can.