Offline LLM

A client-side LLM tech stack that lets the user run inference on a mobile device even in Airplane mode. An Internet connection is required once, to download and cache the webapp files and model weights; after that, inference runs entirely offline. The app is also installable as a PWA.
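For illustration, here is a minimal TypeScript sketch of that flow using the @mlc-ai/web-llm package that web-llm-chat builds on. The model ID is just an example; any model from the web-llm prebuilt list would work. The first call to `CreateMLCEngine` needs the network to fetch the weights, which are then cached in browser storage, so later runs can execute with no connection at all.

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Illustrative model ID; substitute any model from the web-llm prebuilt list.
const modelId = "Llama-3.2-1B-Instruct-q4f16_1-MLC";

async function main() {
  // First run: downloads and caches the model weights (network required).
  // Later runs: the weights are read back from browser storage, so this
  // works offline, e.g. in Airplane mode.
  const engine = await CreateMLCEngine(modelId, {
    initProgressCallback: (p) => console.log(p.text),
  });

  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello from an offline device!" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```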

This is a static site deployment of the web-llm-chat solution.
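Serving the app shell offline is what the PWA part handles, typically via a service worker. The cache-first sketch below shows the general idea; the cache name and asset paths are placeholders, not the actual files emitted by the web-llm-chat build.

```ts
// sw.ts — minimal cache-first service worker (sketch; paths are placeholders).
const CACHE_NAME = "offline-llm-shell-v1";
const APP_SHELL = ["/", "/index.html", "/assets/app.js", "/assets/app.css"];

self.addEventListener("install", (event: any) => {
  // Pre-cache the app shell so the UI can load without a network connection.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(APP_SHELL)),
  );
});

self.addEventListener("fetch", (event: any) => {
  // Serve cached responses first; fall back to the network when online.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request)),
  );
});
```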