Private LLM Blog

Microsoft Phi-3 Mini 4K Instruct Now Available for iOS and macOS

The Microsoft Phi-3 Mini 4K Instruct model is now available in Private LLM for iPhone, iPad, and Mac. Experience superior AI performance with 4-bit quantization and a 4K context length. Download the model on iOS devices with 6GB+ of RAM or on any Intel or Apple Silicon Mac and enjoy private, offline AI conversations, text summarization, and idea generation.
Read more

OpenBioLLM 8B: Llama 3 Biomedical Model Now on iOS and macOS

Private LLM introduces OpenBioLLM-8B, a cutting-edge biomedical AI model for iPhone, iPad, and Mac. Developed by Saama AI Labs, the model runs fully on-device in this secure chatbot and delivers unparalleled performance in medical and life sciences applications. Try it today and experience the power of advanced, private AI.
Read more

Run Llama 3 8B Locally on iPhone or Mac With Private LLM

Run Meta Llama 3 8B and other advanced models like Hermes 2 Pro Llama-3 8B, OpenBioLLM-8B, Llama 3 Smaug 8B, and Dolphin 2.9 Llama 3 8B locally on your iPhone, iPad, or Mac with Private LLM, an offline AI chatbot. Engage in private conversations, generate code, and ask everyday questions without the chatbot refusing to respond.
Read more

Run Local AI Models on iPhone or Mac Easily Using Private LLM

Run a local GPT on iPhone, iPad, and Mac with Private LLM, a secure on-device AI chatbot. Get support for over 30 models, integration with Siri, Shortcuts, and macOS services, and unrestricted chats, all with no API key required. Download Private LLM to harness AI's capabilities on your Apple device today.
Read more

Google Gemma 1.1 2B Now Available on Private LLM for iOS

Private LLM v1.7.6 brings Google Gemma 1.1 2B and Dolphin 2.8 Mistral 7B v0.2 models to iOS devices, offering enhanced performance and uncensored AI interactions while ensuring user privacy and offline functionality. Download now and experience cutting-edge language models on your iPhone or iPad.
Read more

32k Context & Mistral Model Updates: What's New in Private LLM v1.7.8 for macOS

The latest v1.7.8 update for the macOS version of Private LLM introduces significant enhancements, most notably to the Mixtral model, which now features unquantized embedding and MoE gate weights while retaining 4-bit OmniQuant quantization for the remaining weights. The update also increases the context length for Mistral models to 32k when sufficient memory is available, and it introduces a grammar correction service that adapts to different English dialects. Additionally, there is experimental support for non-English European languages and a new option to edit prompts by right-clicking. The update solidifies Private LLM's status as a top choice for running Mixtral models on Apple Silicon Macs, and user feedback is encouraged to shape future versions. A simplified sketch of the selective quantization scheme follows this entry.
Read more
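
For readers curious about the selective quantization described in the v1.7.8 entry above, here is a minimal PyTorch sketch. It is not Private LLM's implementation: it uses plain round-to-nearest group quantization as a simplified stand-in for OmniQuant (which learns its clipping parameters), and the parameter-name patterns, group size, and the selectively_quantize helper are assumptions made for the example.

```python
# Illustrative sketch only: round-to-nearest 4-bit group quantization that
# skips embedding and MoE gate weights, mirroring the selective scheme the
# v1.7.8 notes describe. OmniQuant itself learns clipping/transform
# parameters; this stand-in does not. Name patterns and group size are
# assumptions for the example.
import torch

GROUP_SIZE = 64                       # assumed quantization group size
SKIP_PATTERNS = ("embed", "gate")     # assumed name patterns kept in full precision


def fake_quantize_4bit(weight: torch.Tensor, group_size: int = GROUP_SIZE) -> torch.Tensor:
    """Round a 2-D weight to 4-bit integers per group, then dequantize."""
    rows, cols = weight.shape
    assert cols % group_size == 0, "example assumes cols divide evenly into groups"
    w = weight.reshape(rows, cols // group_size, group_size)
    scale = w.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0  # int4 range [-8, 7]
    q = torch.clamp(torch.round(w / scale), -8, 7)
    return (q * scale).reshape(rows, cols)


def selectively_quantize(model: torch.nn.Module) -> None:
    """Quantize every 2-D weight except embeddings and MoE gates, in place."""
    for name, param in model.named_parameters():
        if param.ndim != 2 or any(p in name for p in SKIP_PATTERNS):
            continue                   # embeddings and gates stay unquantized
        with torch.no_grad():
            param.copy_(fake_quantize_4bit(param))
```

Applied to a Mixtral-style checkpoint, a scheme like this leaves the router (gate) and embedding tensors at full precision, which are small but sensitive, while the bulk of the linear weights are stored at 4-bit precision, which is where most of the memory savings come from.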