Microsoft Phi-3 Mini 4K Instruct Now Available for iOS and macOS
We are excited to announce that Private LLM now supports downloading Phi-3-mini-4k-instruct, the model recently released by Microsoft. Despite its compact size of 3.8 billion parameters, it performs comparably on various benchmarks to much larger models such as Mixtral 8x7B (also available for download in Private LLM for macOS) and GPT-3.5.
The Phi-3-mini-4k-instruct model is 4-bit OmniQuant quantized and offers a 4k context length (subject to available free memory). It can be downloaded on iPhones and iPads with 6GB or more of RAM, as well as on any Intel or Apple Silicon Mac, bringing powerful AI chatbot capabilities to your devices.
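As a rough sanity check of the 6GB guideline, here is a minimal Swift sketch (illustrative only, not code from the Private LLM app) that reads installed RAM via ProcessInfo and makes a back-of-envelope estimate of the 4-bit weight footprint. The ~1.9GB figure is simply 3.8 billion parameters at 4 bits each, before quantization scales, the KV cache, and runtime overhead.

```swift
import Foundation

// Illustrative check of the 6 GB RAM guideline mentioned above.
// This is not code from Private LLM; the weight-size figure is a
// back-of-envelope estimate, not an exact measurement.

let requiredRAM: UInt64 = 6 * 1024 * 1024 * 1024        // 6 GB guideline
let installedRAM = ProcessInfo.processInfo.physicalMemory

// 3.8 billion parameters at 4 bits (0.5 bytes) each ≈ 1.9 GB of weights,
// before quantization scales, the KV cache, and runtime overhead.
let approxWeightBytes = 3_800_000_000 * 0.5

print(String(format: "Installed RAM: %.1f GB", Double(installedRAM) / 1_073_741_824))
print(String(format: "Approx. 4-bit weights: %.1f GB", approxWeightBytes / 1_073_741_824))

if installedRAM >= requiredRAM {
    print("This device meets the 6 GB guideline for Phi-3-mini-4k-instruct.")
} else {
    print("This device is below the 6 GB guideline; the model may not be available.")
}
```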
Phi-3 Mini 4K Instruct Delivers Impressive Performance
Microsoft's training innovations, in particular its carefully curated training data, allow the Phi-3 model family to perform well against much larger models. Phi-3-mini is the first model in this series that Microsoft has publicly released.
Built for efficiency, small language models like Phi-3-mini handle simpler tasks effectively while remaining practical on mobile devices and computers with limited resources, letting you run powerful AI locally on the go or at your desk.
Try out the Phi-3-mini-4k-instruct model's capabilities by updating to Private LLM v1.8.1 on your iPhone, iPad, or Mac. Enjoy casual conversations, text summarization, creative idea generation, and access to information on various topics, all powered by this advanced language model.
How Fast Is Microsoft Phi-3 Mini 4K Instruct on iOS?
The Phi-3 technical report mentions observing more than 12 tokens per second on an iPhone 14 with an A16 Bionic processor. This is likely an error: the iPhone 14 and iPhone 14 Plus use the A15 Bionic, while the iPhone 14 Pro, iPhone 14 Pro Max, iPhone 15, and iPhone 15 Plus use the A16 Bionic.
Here are the Apple devices we tested the Phi-3-mini-4k-instruct model on using Private LLM, along with their text generation performance in tokens per second:
- iPhone 12 Pro Max (A14 Bionic) - 9.99 tokens per second
- iPhone 13 Pro Max (A15 Bionic) - 10.18 tokens per second
- iPhone 15 (A16 Bionic) - 15.37 tokens per second
- iPhone 15 Pro (A17 Pro) - 18.13 tokens per second
- 5th Generation iPad Air (M1) - 20.48 tokens per second
- 6th Generation iPad Pro (M2) - 25.89 tokens per second
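These numbers were measured inside Private LLM. If you want to reproduce a similar measurement with your own on-device runtime, the calculation is simply the number of generated tokens divided by wall-clock time. Here is a minimal Swift sketch; the generateNextToken closure is a hypothetical stand-in for whatever inference call your runtime exposes and is not part of Private LLM's API.

```swift
import Foundation

// Minimal sketch of measuring text-generation throughput in tokens per
// second: count the tokens produced and divide by elapsed wall-clock time.
// `generateNextToken` is a hypothetical stand-in for an inference call;
// it is not part of Private LLM's API.

func measureTokensPerSecond(maxTokens: Int = 256, generateNextToken: () -> String?) -> Double {
    let start = Date()
    var generated = 0
    while generated < maxTokens, generateNextToken() != nil {
        generated += 1
    }
    let elapsed = Date().timeIntervalSince(start)
    return elapsed > 0 ? Double(generated) / elapsed : 0
}

// Example with a dummy generator; a real run would wrap your model's
// token-by-token generation loop instead.
var remaining = 256
let tps = measureTokensPerSecond {
    remaining -= 1
    return remaining >= 0 ? "token" : nil
}
print(String(format: "Throughput: %.2f tokens/second", tps))
```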
Run Microsoft Phi-3 Mini 4K Instruct Locally
Private LLM works completely offline on your device, keeping your data secure and private. With no internet connection required, you can use this local chatbot app anytime, anywhere, without data privacy concerns.
As a one-time purchase on the App Store, Private LLM provides unlimited access to its features with no subscription fees. Experience the convenience and power of a private AI chatbot at your fingertips, now enhanced with the Phi-3-mini-4k-instruct model.
Uncensored Phi-3
For users looking for an uncensored Phi-3 fine-tuned model, we've introduced the Kappa-3 Phi Abliterated Model on Private LLM. This model allows you to have unrestricted, uncensored conversations, giving you even more flexibility and freedom in your AI interactions.
Download Private LLM today and unlock the potential of the Phi-3 Mini model on your iPhone, iPad, and Mac. Elevate your AI experience with improved performance, accessibility, and efficiency while maintaining your privacy and security.