32k Context & Mistral Model Updates: What's New in Private LLM v1.7.8 for macOS
We're excited to announce the v1.7.8 update for the macOS version of Private LLM, packed with significant improvements and new features. Here’s what’s new:
Enhancements to the Mixtral Model: We've upgraded our Mixtral model: it now uses unquantized embedding and MoE gate weights while retaining 4-bit OmniQuant quantization for the remaining weights. The previous version of the model is deprecated, but existing users can continue to use it. This update reaffirms Private LLM as the premier choice for running Mixtral models on Apple Silicon Macs.
Increased Context Length for Mistral Models: The Mistral Instruct v0.2, Nous Hermes 2 Mistral 7B DPO, and BioMistral 7B models now support the full 32k context length, provided at least 8.69GB of memory is free; otherwise, they fall back to a 4k context length. Few, if any, comparable on-device apps offer context lengths this long.
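The fallback rule above can be sketched in a few lines. This is purely illustrative: the function name and constants are our own stand-ins, not Private LLM's actual implementation, and only the 8.69GB threshold and the 32k/4k context sizes come from the release notes.

```python
# Illustrative sketch of the context-length fallback described above.
# Only the 8.69GB threshold and the 32k/4k sizes are from the release notes;
# names and structure are hypothetical.

GB = 1024 ** 3
FULL_CONTEXT = 32 * 1024      # 32k-token context window
FALLBACK_CONTEXT = 4 * 1024   # 4k-token fallback
REQUIRED_FREE_BYTES = int(8.69 * GB)

def pick_context_length(free_bytes: int) -> int:
    """Return the context length to use given the free memory in bytes."""
    if free_bytes >= REQUIRED_FREE_BYTES:
        return FULL_CONTEXT
    return FALLBACK_CONTEXT
```

For example, a Mac with 16GB free would get the full 32k window, while one with 4GB free would fall back to 4k.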
Grammar Correction Service Update: Our grammar correction service for macOS now adapts to the OS locale, using the appropriate spellings for British, American, Canadian, and Australian English.
Experimental Support for Non-English European Languages: We're introducing experimental support for non-English European languages in our macOS app. This feature can be enabled in the app settings and currently performs best with Western European languages and larger models.
New Editing Feature: Based on user feedback, we've added the ability to right-click a prompt to edit it and continue the conversation, mirroring a popular feature from our iOS version.
We hope you enjoy the new updates and features. Your feedback is invaluable, so please don't hesitate to reach out with any issues or suggestions. We're eager to see what our macOS users build with the 32k-context 7B models. Happy coding with offline LLMs!