
LM Studio - Local AI on your computer
Run local AI models like gpt-oss, Llama, Gemma, Qwen, and DeepSeek privately on your computer.
Download LM Studio - Mac, Linux, Windows
Discover, download, and run local LLMs with LM Studio for Mac, Linux, or Windows
Welcome to LM Studio Docs!
Learn how to run Llama, DeepSeek, Qwen, Phi, and other LLMs locally with LM Studio.
Get started with LM Studio | LM Studio Docs
Download and run Large Language Models like Qwen, Mistral, Gemma, or gpt-oss in LM Studio.
Tool Use | LM Studio Docs
LM Studio parses the text output from the model into an OpenAI-compliant chat.completion response object. If the model was given access to tools, LM Studio will attempt to parse the …
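The chat.completion object described above follows the OpenAI response schema, so tool calls can be read out of it generically. A minimal sketch, assuming an OpenAI-style response shape; the tool name (get_weather) and its arguments are hypothetical illustrations, not part of LM Studio:

```python
import json

# A sample OpenAI-style chat.completion response, like the one LM Studio
# builds after parsing the model's text output. The tool call shown here
# (get_weather) is a made-up example.
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": "{\"city\": \"Berlin\"}",
                },
            }],
        },
        "finish_reason": "tool_calls",
    }],
}

def extract_tool_calls(resp):
    """Return (name, parsed_arguments) pairs from a chat.completion response."""
    calls = resp["choices"][0]["message"].get("tool_calls") or []
    return [
        (c["function"]["name"], json.loads(c["function"]["arguments"]))
        for c in calls
    ]

print(extract_tool_calls(response))  # → [('get_weather', {'city': 'Berlin'})]
```

Note that arguments arrives as a JSON string, not a dict, so it must be decoded before dispatching to your own tool implementation.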
Model Catalog - LM Studio
FunctionGemma is a lightweight, open model from Google, built as a foundation for creating your own specialized function calling models.
LM Studio Developer Docs
Build with LM Studio's local APIs and SDKs — TypeScript, Python, REST, and OpenAI‑compatible endpoints.
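Because the endpoints are OpenAI-compatible, any HTTP client can talk to the local server. A minimal sketch using only the standard library, assuming the server's default address of localhost:1234; the model identifier is a hypothetical placeholder:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server address

def build_chat_request(model, messages):
    """Build an OpenAI-compatible POST to /v1/chat/completions."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "qwen2.5-7b-instruct",  # placeholder model key; use one you have downloaded
    [{"role": "user", "content": "Hello!"}],
)
# With the local server running, urllib.request.urlopen(req) would send it
# and return a chat.completion JSON body.
print(req.full_url)  # → http://localhost:1234/v1/chat/completions
```

The same request works unchanged with the official OpenAI client libraries by pointing their base_url at the local server.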
lmstudio-python (Python SDK) | LM Studio Docs
Getting started with LM Studio's Python SDK. Read more about lms get in LM Studio's CLI documentation. Interactive Convenience, Deterministic Resource Management, or Structured Concurrency? …
System Requirements | LM Studio Docs
Windows: LM Studio is supported on both x64 and ARM (Snapdragon X Elite) based systems. CPU: AVX2 instruction set support is required (for x64). RAM: LLMs can consume a lot of …
Offline Operation | LM Studio Docs
LM Studio can operate entirely offline; just make sure to download some model files first.