Frequently Asked Questions

Have questions about Private LLM? You're in the right place! Our FAQ page covers everything from basic setup to advanced features, ensuring you have all the information needed to fully leverage Private LLM on your Apple devices. Discover the unique advantages of Private LLM, including its commitment to privacy, offline functionality, and no-subscription model. Explore our FAQs to better understand and use Private LLM today.
  • Private LLM is your private AI chatbot, designed for privacy, convenience, and creativity. It operates entirely offline on your iPhone, iPad, and Mac, ensuring your data stays secure and confidential. Private LLM is a one-time purchase on the App Store, allowing you unlimited access without any subscription fees. NB: we hate subscriptions, and we aren't hypocrites who would subject our users to something we hate.

  • Private LLM works offline and uses a decoder-only transformer (aka GPT) model that you can casually converse with. It can also help you with summarising paragraphs of text, generating creative ideas, and providing information on a wide range of topics.

  • Private LLM offers a range of models to cater to diverse language needs. Our selection includes the Llama 3 and Qwen 2.5 families, both supporting multiple languages. Llama 3 is proficient in English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. Qwen 2.5 extends support to over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic. For users seeking models tailored to specific non-English languages, Private LLM provides options such as SauerkrautLM Gemma-2 2B IT for German, DictaLM 2.0 Instruct for Hebrew, RakutenAI 7B Chat for Japanese, and Yi 6B Chat or Yi 34B Chat for Chinese. This diverse selection ensures that users can choose the model that best fits their language requirements.

  • Private LLM ensures superior text generation quality and performance by utilizing advanced quantization strategies like OmniQuant and GPTQ, which take numerous hours to carefully quantize each model on GPUs. This meticulous process preserves the model's weight distribution more effectively, resulting in faster inference, improved model fidelity, and higher-quality text generation. Our 3-bit OmniQuant models outperform or match the performance of 4-bit RTN-quantized models used by other platforms. Unlike apps that support readily available GGUF files from Hugging Face, Private LLM quantizes models in-house, ensuring they are optimized for speed, accuracy, and quality. This rigorous approach is one of the reasons Private LLM is a paid app, offering much better quality compared to slower and less capable local AI chat apps.

  • We regularly add new models to Private LLM based on user feedback, as shown in our release notes. To request a specific model, join our Discord community and share your suggestion in the #suggestions channel. We review all requests and prioritize popular ones for future updates.

  • Absolutely not. Private LLM is dedicated to ensuring your privacy, operating solely offline, with no internet access for its functions and no access to real-time data. An internet connection is only required when you opt to download updates or new models, during which no personal data is collected, transmitted, or exchanged. Our privacy philosophy aligns with Apple's stringent privacy and security guidelines, and our app upholds the highest standards of data protection. It's worth noting that, on occasion, users might ask Private LLM whether it can access the internet, and model hallucinations may suggest that it can. However, these responses should not be taken as factual. If you would like to independently verify Private LLM's privacy guarantees, we recommend using network monitoring tools like Little Snitch. This way, you can see for yourself that our app maintains strict privacy controls. For those interested in accessing real-time information, Private LLM integrates seamlessly with Apple Shortcuts, allowing you to pull data from RSS feeds, web pages, and even apps like Calendar, Reminders, Notes, and more. This feature offers a creative workaround for incorporating current data into your interactions with Private LLM, while still maintaining its offline, privacy-first ethos. If you have any questions or need further clarification, please don't hesitate to reach out to us.

  • Firstly, Private LLM stands out from other local AI solutions through its advanced model quantization technique known as OmniQuant. Unlike the naive Round-To-Nearest (RTN) quantization used by competing apps, OmniQuant is an optimization-based method that uses learnable weight clipping. This allows more precise control over the quantization range, effectively maintaining the integrity of the original weight distribution. As a result, Private LLM achieves superior model performance and accuracy, nearly matching the performance of an unquantized 16-bit floating-point (fp16) model, but with significantly reduced computational requirements at inference time.

    While the process of quantizing models with OmniQuant is computationally intensive, it's a worthwhile investment. This advanced approach ensures that the perplexity (a measure of a model's text generation quality) of the quantized model remains much closer to that of the original fp16 model than is possible with naive RTN quantization. This ensures that Private LLM users enjoy a seamless, efficient, and high-quality AI experience, setting us apart from similar applications.
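    To make the contrast concrete, here is a toy sketch in Python with NumPy of naive RTN quantization, and of why clipping the weight range first (the intuition behind learnable weight clipping) reduces error on the bulk of the weights. This is purely illustrative and is not Private LLM's actual implementation:

```python
import numpy as np

def rtn_quantize(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Naive round-to-nearest (RTN): map the full range [min, max]
    onto 2**bits evenly spaced levels, round, and dequantize."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels
    return np.round((w - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=10_000)       # bulk of the weights are tiny
w[:5] = [0.5, -0.5, 0.4, -0.4, 0.3]          # a few outliers stretch the grid

# RTN error when outliers dictate the quantization range:
err_with_outliers = np.abs(w - rtn_quantize(w)).mean()

# Clip the range first, sacrificing precision on the outliers to spend
# the quantization levels on the bulk of the weights:
w_clipped = np.clip(w, -0.1, 0.1)
err_clipped = np.abs(w_clipped - rtn_quantize(w_clipped)).mean()
```

    In this toy example the mean quantization error on the clipped tensor is several times smaller, which is the effect learnable weight clipping exploits in a principled, per-tensor way.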

    Secondly, unlike almost every other competing offline LLM app, Private LLM isn't based on llama.cpp. This means advanced features that aren't available in llama.cpp (and, by extension, in apps that use it), like attention sinks and sliding window attention in Mistral models, are available in Private LLM but unavailable elsewhere. It also means that our app is significantly faster than the competition on the same hardware (see YouTube videos comparing performance).
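    For background, sliding window attention limits each token to attending over only the most recent W positions instead of the full context. A minimal boolean-mask sketch in Python with NumPy (purely illustrative; not Private LLM's implementation):

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean attention mask: token i may attend to token j only if
    j <= i (causal) and i - j < window (sliding window)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (i - j < window)

# With a window of 3 over 6 tokens, token 5 attends only to tokens 3, 4, 5.
mask = sliding_window_mask(6, 3)
```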

    Finally, we are machine learning engineers and carefully tune the quantization and parameters of each model to maximize text generation quality. For instance, we do not quantize the embeddings and gate layers in Mixtral models, because quantizing them badly affects the model's perplexity (needless to say, our competition naively quantizes everything). Similarly, with the Gemma models, quantizing the weight-tied embeddings hurts the model's perplexity, so we don't (while our competitors do).

    By prioritizing accuracy and computational efficiency without compromising on privacy and offline functionality, Private LLM provides a unique solution for iOS and macOS users seeking a powerful, private, and personalized AI experience.

  • After a one-time purchase, you can download and use Private LLM on all your Apple devices. The app supports Family Sharing, allowing you to share it with your family members.

  • Unlike almost all other AI chatbot apps that are currently available, Private LLM operates completely offline and does not use an external 3rd party API, ensuring your data privacy. There's no tracking or data sharing. Your data stays on your device. Plus, it's a one-time purchase, giving you lifetime access without having to worry about recurring subscription fees.

  • Private LLM can analyse and summarise lengthy paragraphs of text in seconds. Just paste in the content, and the AI will generate a concise summary, all offline. You could also use Private LLM for rephrasing and paraphrasing with prompts like:

    • Give me a TLDR on this: [paste content here]
    • You're an expert copywriter. Please rephrase the following in your own words: [paste content]
    • Paraphrase the following text so that it sounds more original: [paste content]
  • Absolutely! Private LLM can generate insightful suggestions and ideas, making it a powerful tool for brainstorming and problem-solving tasks. Here are some example brainstorming prompts that you can try asking Private LLM. Please feel free to experiment and try out your own prompts.

    • Can you give me some potential themes for a science fiction novel?
    • I'm planning to open a vegan fast-food restaurant. What are the weaknesses of this idea?
    • I run a two-year-old software development startup with one product that has PMF, planning on introducing a new software product in a very different market. Use the six hats method to analyse this.
    • Utilise the Golden Circle Model to create a powerful brand for a management consulting business.
  • Sampling temperature and Top-P are universal inference parameters for all autoregressive, causal, decoder-only transformer (aka GPT) models, and are not specific to Private LLM. The app sets them to reasonable defaults (0.7 for sampling temperature and 0.95 for Top-P), but you can always tweak them and see what happens. Please bear in mind that changes to these parameters do not take effect until the app is restarted.

    These parameters control the tradeoff between deterministic text generation and creativity. Low values lead to boring but coherent responses; higher values lead to creative but sometimes incoherent responses.
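    Conceptually, temperature divides the logits before the softmax (values below 1 sharpen the distribution, values above 1 flatten it), and Top-P (nucleus) sampling keeps only the smallest set of top tokens whose cumulative probability reaches p. A minimal sketch in Python with NumPy (illustrative only; not the app's actual sampler):

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float = 0.7,
                 top_p: float = 0.95, rng=np.random.default_rng()) -> int:
    """Temperature + Top-P (nucleus) sampling over one logit vector."""
    # Temperature rescales the logits: <1 sharpens, >1 flattens.
    z = logits / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()

    # Top-P: keep the smallest high-probability set whose cumulative
    # probability reaches top_p, renormalize, then sample from it.
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, top_p)) + 1   # always keep >= 1 token
    kept = order[:cutoff]
    return int(rng.choice(kept, p=probs[kept] / probs[kept].sum()))
```

    With a very low temperature or a very low Top-P, the nucleus shrinks to a single token and generation becomes effectively greedy; raising either widens the pool of candidate tokens.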

  • Yes. Private LLM has two app intents that you can use with Siri and the Shortcuts app. Please look for Private LLM in the Shortcuts app. Additionally, Private LLM also supports the x-callback-url specification, which is also supported by Shortcuts and many other apps. Here's an example shortcut using the x-callback-url functionality in Private LLM.
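    For illustration, x-callback-url requests follow the generic pattern scheme://x-callback-url/action?parameters defined by the x-callback-url specification. The scheme and action names below are hypothetical placeholders, not Private LLM's documented ones (check the app's Shortcuts documentation for the real values); this small Python helper just shows the shape of such a URL:

```python
from urllib.parse import urlencode

def build_xcallback_url(scheme: str, action: str, params: dict) -> str:
    """Assemble a URL following the generic x-callback-url pattern:
    scheme://x-callback-url/action?param=value&x-success=..."""
    return f"{scheme}://x-callback-url/{action}?{urlencode(params)}"

# Hypothetical scheme and action names, for illustration only.
url = build_xcallback_url("examplellm", "ask", {
    "prompt": "Summarise this: Hello world",
    "x-success": "shortcuts://",   # where the caller returns on success
})
```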

  • The difference in functionality between iOS and macOS regarding background processing stems primarily from Apple's hardware usage policies. On iOS, Apple restricts background execution of tasks that require intensive GPU usage. This limitation is enforced to preserve battery life and maintain system performance. According to Apple's guidelines, apps attempting to run a Metal kernel in the background will be terminated immediately to prevent unauthorized resource use. For Private LLM, while we can run operations in the background on macOS leveraging the GPU, iOS versions are constrained to CPU processing when the app is not in the foreground. Running Private LLM's AI-driven tasks on the CPU is technically possible, but it would be significantly slower: more than 10 times slower than GPU processing. This slow performance would not provide the seamless, efficient user experience we strive for. We are hopeful that future updates to iOS might offer more flexibility in how background processes can utilize system resources, including potential GPU access for apps like Private LLM. Until then, we continue to optimize our iOS app within the current constraints to ensure you get the best possible performance without compromising the health of your device or the efficiency of your applications. For more technical details, you can refer to Apple's official documentation on preparing your Metal app to run in the background: Apple Developer Documentation.

  • This could be due to the device running low on memory, or the task given to Private LLM being particularly complex. In such cases, consider closing memory-hungry apps that might be running in the background, and try breaking the request down into smaller, more manageable tasks for the LLM to process. In the latter case, simply responding with "Continue", "Go on" or "Tell me" also works.

  • We're sorry to hear you're considering a refund. You can request a refund through the Apple App Store. Simply navigate to your Apple account's purchase history, find Private LLM, and click on 'Report a Problem' to initiate the refund process. We would also love to hear from you about how we can improve. Please reach out to us with your feedback.

  • We would love to hear from you! Join our Discord community to share your thoughts and get support from other users. Prefer a private conversation? Use the contact form on our website to drop us an email directly.