Z.ai: GLM 4.7 Flash
Provided by OpenRouter
GLM-4.7-Flash is a 30B-class state-of-the-art model that balances performance and efficiency. It is further optimized for agentic coding use cases, with stronger coding capabilities, long-horizon task planning, and tool collaboration, and it has achieved leading performance among open-source models of the same size on several current public benchmark leaderboards.
Specifications
- Context window: 202,752 tokens
- Pricing: $0.060/M input tokens, $0.400/M output tokens
Strengths
- Large context window (203k tokens) for long conversations
- Fast response times for real-time interactions
Use Cases
- Content creation and writing assistance
- General conversations and Q&A
Limitations
Performance may vary based on query complexity, context length, and task type. Consider using higher-tier models for production-critical applications.
Sample Prompts
Try these prompts to explore Z.ai: GLM 4.7 Flash's capabilities:
- Explain quantum computing in simple terms like I'm 10 years old
- Write a compelling email asking for a meeting to discuss a project proposal
- Help me brainstorm creative solutions for improving team productivity
Tip: Customize these prompts to fit your specific needs and use cases.
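The prompts above can be sent through OpenRouter's OpenAI-compatible chat completions endpoint. A minimal sketch follows; note that the model slug `z-ai/glm-4.7-flash` is an assumption here, so verify the exact identifier on the model page before use.

```python
# Minimal sketch of sending a prompt to the model via OpenRouter's
# OpenAI-compatible chat completions API (stdlib only).
# ASSUMPTION: the slug "z-ai/glm-4.7-flash" is illustrative; check the
# model page for the real identifier.
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_payload(prompt: str, model: str = "z-ai/glm-4.7-flash") -> dict:
    """Build a chat completions request body for a single user prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str) -> str:
    """POST one prompt and return the assistant's reply text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("Explain quantum computing in simple terms like I'm 10 years old"))
```

Set `OPENROUTER_API_KEY` in the environment before running; the same payload shape works with any OpenAI-compatible client library.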
Premium Model
This model requires credits to use; free models are available without them. Z.ai: GLM 4.7 Flash offers advanced capabilities and high performance for production-grade applications.
Related Models
Similar models you might be interested in
StepFun: Step 3.5 Flash (free)
Step 3.5 Flash is StepFun's most capable open-source foundation model. Built on a sparse Mixture of Experts (MoE) architecture, it selectively activates only 11B of its 196B parameters per token. It is a reasoning model that remains highly speed-efficient even at long contexts.
Xiaomi: MiMo-V2-Flash
MiMo-V2-Flash is an open-source foundation language model developed by Xiaomi. It is a Mixture-of-Experts model with 309B total parameters and 15B active parameters, adopting a hybrid attention architecture. MiMo-V2-Flash supports a hybrid-thinking toggle and a 256K context window, and excels at reasoning, coding, and agent scenarios. On SWE-bench Verified and SWE-bench Multilingual, MiMo-V2-Flash ranks as the #1 open-source model globally, delivering performance comparable to Claude Sonnet 4.5 while costing only about 3.5% as much. Users can control the reasoning behaviour with the `reasoning.enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config).
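Per the linked reasoning-tokens docs, the toggle is passed as a `reasoning` object in the request body. A minimal sketch of building such a request, assuming an illustrative model slug:

```python
# Sketch of the reasoning toggle described in OpenRouter's
# reasoning-tokens docs: a `reasoning` object with an `enabled` boolean
# is added to the chat completions request body.
# ASSUMPTION: the slug "xiaomi/mimo-v2-flash" is illustrative; check the
# model page for the real identifier.
def build_reasoning_payload(prompt: str, enabled: bool) -> dict:
    """Chat completions body with the `reasoning.enabled` toggle set."""
    return {
        "model": "xiaomi/mimo-v2-flash",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": enabled},
    }
```

With `enabled=False` the model skips its thinking phase, which typically trades some reasoning quality for lower latency and cost.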
Qwen: Qwen3 Coder Flash
Qwen3 Coder Flash is Alibaba's fast, cost-efficient version of its proprietary Qwen3 Coder Plus. It is a powerful coding-agent model specializing in autonomous programming via tool calling and environment interaction, combining coding proficiency with versatile general-purpose abilities.