Best LLMs

Large language models for general-purpose text generation, reasoning, summarization, and conversation. Capable of understanding complex instructions and producing high-quality, coherent text responses.

#1
Top Pick
GPT-5.5

Coming Soon

Best rated

by OpenAI

GPT-5.5 is OpenAI's newest frontier model for complex professional work, with strong performance in coding, reasoning, and tool-using workflows. It supports a 1,050,000 token context window, 128,000 max output tokens, configurable reasoning effort, image input, and a broad tool stack including web search, file search, code interpreter, hosted shell, apply patch, skills, MCP, tool search, and computer use.

Featured Models

Top-performing models in this category, recommended by our community and performance benchmarks.

#2
Kimi K2.6

Coming Soon

by Moonshot AI

Kimi K2.6 is Moonshot AI's latest flagship open model for coding, reasoning, multimodal understanding, and agentic execution. It is designed for long-horizon software tasks, reliable tool use, autonomous multi-step workflows, coordinated agent swarms, and visual understanding across image and video inputs in addition to text.

#3
Claude Opus 4.7

API Only

by Anthropic

Claude Opus 4.7 is Anthropic's highest-capability generally available Claude model. It is designed for demanding coding, agent orchestration, multimodal reasoning, and high-stakes professional workflows, with stronger instruction following, better high-resolution vision, adaptive thinking, and a 1M-token context window.

#4
GLM-5.1

by Z.ai

GLM-5.1 is Z.ai’s flagship language model for agentic engineering, coding, reasoning, and tool-driven workflows. It supports a 200K token context window with up to 128K output tokens, deep thinking, function calling, structured output, and streaming tool calls, and is designed to stay effective over long multi-step sessions rather than only short-horizon tasks.
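To illustrate what function calling with streaming tool calls typically looks like, here is a minimal sketch of a request payload in the common OpenAI-compatible chat-completions shape. The model id `"glm-5.1"`, the tool name `get_weather`, and the endpoint conventions are assumptions for illustration, not confirmed Z.ai API details.

```python
import json

def build_tool_call_request(user_message: str) -> dict:
    """Build a chat-completions-style request exposing one tool."""
    weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
    return {
        "model": "glm-5.1",        # assumed model id
        "messages": [{"role": "user", "content": user_message}],
        "tools": [weather_tool],   # JSON Schema describes arguments
        "tool_choice": "auto",     # let the model decide when to call
        "stream": True,            # stream tool-call deltas as they arrive
    }

request = build_tool_call_request("What's the weather in Berlin?")
print(json.dumps(request, indent=2))
```

The model answers either with plain text or with a `tool_calls` entry naming the function and its JSON arguments; the caller executes the tool and feeds the result back as a follow-up message.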

#5
MiniMax M2.7

by MiniMax

MiniMax M2.7 is a long‑context LLM designed for agentic workflows across software engineering, search and tool use, and high‑value office productivity tasks. It’s built for multi‑step execution, with strong instruction following and dependable task decomposition, making it a solid default for production assistants that write code, call tools, and handle complex document workflows.

#6
MiniMax M2.7-Highspeed

by MiniMax

MiniMax M2.7‑Highspeed is the performance‑tuned variant of M2.7, built for lower latency and higher throughput while keeping output behavior consistent with the standard model. It’s a strong fit for interactive coding agents, tool‑calling pipelines, and office automation flows where responsiveness matters.

#7
GPT-5.4 Mini

by OpenAI

GPT-5.4 Mini is a compact, efficient variant of GPT-5.4 designed for coding assistants, subagent orchestration, and multimodal applications requiring faster responsiveness. It supports a 400K token context window and retains native computer use and configurable reasoning effort at a lower cost than the flagship model.

#8
GPT-5.4 Nano

by OpenAI

GPT-5.4 Nano is the smallest and fastest variant of GPT-5.4, designed for high-throughput, low-latency tasks such as classification, data extraction, ranking, and lightweight automation. It prioritizes speed and cost efficiency for simple, high-volume workloads and is available exclusively via the API.

#9
GPT-5.4

by OpenAI

GPT-5.4 is OpenAI's flagship large language model, featuring a 1 million token context window, native computer use, and a 33% reduction in factual errors over GPT-5.2. It integrates coding capabilities from GPT-5.3-Codex, is 47% more token-efficient, and supports configurable reasoning effort for complex professional tasks.

#10
GPT-5.4 Pro

Coming Soon

by OpenAI

GPT-5.4 Pro is the high-performance variant of GPT-5.4, optimized for enterprise-grade professional tasks. It offers deeper reasoning, enhanced accuracy, and extended compute for complex multi-step workflows including document creation, spreadsheet analysis, and autonomous agent orchestration. It shares the 1 million token context window and native computer use capabilities of the standard GPT-5.4.

#11
Gemini 3.1 Flash Lite

by Google

Gemini 3.1 Flash Lite is a fast, cost-efficient multimodal language model from Google that processes text alongside images, audio, video, code, and documents. It offers high-performance reasoning, complex instruction following, and deep contextual understanding for a wide range of tasks across language, analysis, and problem solving.

#12
Gemini 3.1 Pro

by Google

Gemini 3.1 Pro is Google’s flagship multimodal language model that processes text alongside images, audio, video, code, and documents. It offers high-performance reasoning, complex instruction following, and deep contextual understanding for a wide range of tasks across language, analysis, and problem solving.

#13
Claude Sonnet 4.6

API Only

by Anthropic

Claude Sonnet 4.6 is Anthropic's most capable Sonnet model, built for daily production use across coding, agent workflows, long-context reasoning, computer use, and professional knowledge work. It supports adaptive and extended thinking, strong instruction following, high-volume automation, and a 1M-token context window in beta.

#14
MiniMax-M2.5

by MiniMax

MiniMax-M2.5 is MiniMax’s latest frontier model, optimized for fast, low-cost agentic workflows across coding, search/tool use, and high-value office tasks. Trained with large-scale reinforcement learning in complex real-world environments, it delivers strong reasoning, efficient task decomposition, and high-quality outputs for production assistants and enterprise workflows.

#15
GLM-4.7

by Z.ai

GLM-4.7 is a 358 billion parameter Mixture-of-Experts language model from Z.ai optimized for agentic coding, complex reasoning, and long-horizon tasks. It features interleaved thinking, preserved thinking for multi-turn consistency, and turn-level thinking control. It supports a 200K token context window with 128K max output, tool calling, and achieves 73.8% on SWE-bench Verified.

#16
Gemini 3 Flash

by Google

Gemini 3 Flash is a speed-optimized multimodal language model from Google that processes text alongside images, audio, video, code, and documents. It offers high-performance reasoning, complex instruction following, and deep contextual understanding for a wide range of tasks across language, analysis, and problem solving.

#17
Claude Haiku 4.5

API Only

by Anthropic

Claude Haiku 4.5 is Anthropic's fastest and most cost-efficient Claude model. It is built for latency-sensitive applications, high-volume agents, sub-agent orchestration, coding assistance, and budget-conscious deployments that still need strong reasoning and multimodal understanding.
