DeepSeek V3 Chat & R1 Reasoning Quick Start
This workflow integrates DeepSeek's latest V3 chat model and R1 reasoning model, supporting real-time conversations triggered by incoming messages with multi-turn contextual understanding. Users can flexibly call the cloud API or a local model to quickly build intelligent Q&A and reasoning services, suitable for scenarios such as customer service, knowledge management, and educational tutoring. Memory-window management keeps multi-turn interactions coherent and accurate, and the workflow reduces the complexity of AI integration, making it easier for developers and enterprises to build and test intelligent assistants.

Workflow Name
DeepSeek V3 Chat & R1 Reasoning Quick Start
Key Features and Highlights
This workflow integrates DeepSeek's latest V3 chat model and R1 reasoning model, enabling real-time conversational triggers through messaging combined with a powerful memory buffer for multi-turn contextual understanding. It supports invoking both the DeepSeek cloud API and the local Ollama model, offering flexible adaptation to various deployment environments. With simple HTTP request configurations, it enables rapid launch of intelligent Q&A and reasoning services.
Core Problems Addressed
- Delivering efficient and intelligent natural language conversation and reasoning capabilities
- Cross-platform access to DeepSeek cloud and local models to meet diverse performance and privacy requirements
- Managing dialogue context through a memory window to enhance coherence and accuracy in multi-turn interactions
- Lowering the complexity barrier for AI integration to quickly build DeepSeek-based intelligent assistants
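The memory-window idea behind the context management above can be sketched in a few lines. This is an illustrative model only, not the implementation of n8n's memoryBufferWindow node; the class name, the `k` default, and the message shape are assumptions:

```python
from collections import deque


class WindowBufferMemory:
    """Keep only the most recent k exchanges as conversation context."""

    def __init__(self, k=5):
        # Each entry is one (user, assistant) exchange; when the window is
        # full, the oldest exchange is silently dropped.
        self.window = deque(maxlen=k)

    def add_exchange(self, user_msg, assistant_msg):
        self.window.append({"user": user_msg, "assistant": assistant_msg})

    def as_messages(self):
        # Flatten the window into the chat-message list an LLM call expects.
        messages = []
        for turn in self.window:
            messages.append({"role": "user", "content": turn["user"]})
            messages.append({"role": "assistant", "content": turn["assistant"]})
        return messages


memory = WindowBufferMemory(k=2)
memory.add_exchange("Hi", "Hello! How can I help?")
memory.add_exchange("What is DeepSeek V3?", "A chat model from DeepSeek.")
memory.add_exchange("And R1?", "A reasoning model.")
# Only the last 2 exchanges (4 messages) remain in context.
```

Bounding the window this way is what keeps multi-turn prompts coherent without letting context grow unboundedly with conversation length.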
Application Scenarios
- Customer Service Bots: Intelligently understand user queries and provide precise answers
- Knowledge Management: Assist enterprise internal information retrieval and reasoning
- Educational Tutoring: Offer intelligent Q&A and learning guidance
- Product Prototyping: Rapidly validate interactive experiences based on advanced language models
Main Workflow Steps
- Trigger "When chat message received": Listens for chat messages to initiate the workflow
- Basic LLM Chain2: Initializes the conversation message and sets the system assistant role
- Ollama DeepSeek and DeepSeek Nodes: Invoke the local Ollama model and DeepSeek cloud API respectively for language understanding and reasoning
- Window Buffer Memory: Manages dialogue context to support continuous multi-turn conversations
- AI Agent: Acts as the core intelligent agent to process user requests and generate responses
- HTTP Request Node: Directly calls the DeepSeek API, supporting multiple request body formats
- Multiple Sticky Note nodes provide documentation and parameter configuration references
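The body the HTTP Request node sends follows DeepSeek's OpenAI-compatible chat-completions format. A minimal sketch of assembling it, assuming the public endpoint and the documented model names (`deepseek-chat` for V3, `deepseek-reasoner` for R1); the `Authorization: Bearer <api-key>` header must still be configured in the node's header parameters:

```python
import json

# DeepSeek's OpenAI-compatible chat endpoint (per DeepSeek's public docs).
DEEPSEEK_URL = "https://api.deepseek.com/chat/completions"


def build_chat_body(user_message, model="deepseek-chat",
                    system_prompt="You are a helpful assistant."):
    """Assemble the JSON body the HTTP Request node would POST."""
    return {
        "model": model,  # "deepseek-chat" = V3, "deepseek-reasoner" = R1
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": False,  # set True for token-by-token streaming
    }


body = build_chat_body("Explain the difference between V3 and R1.")
print(json.dumps(body, indent=2))
```

Because the format is OpenAI-compatible, the same body works with any OpenAI-style client by pointing its base URL at `https://api.deepseek.com`.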
Systems and Services Involved
- DeepSeek API: Cloud-based chat and reasoning interface, compatible with the OpenAI API format
- Ollama Local Model: Locally running DeepSeek R1 reasoning model
- n8n Platform Nodes: Including LangChain integration nodes (chatTrigger, agent, lmChatOpenAi, lmChatOllama, memoryBufferWindow, etc.)
- HTTP Request Node: Enables flexible extension via direct RESTful API calls
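For the local path, the Ollama node talks to a locally running server instead of the cloud API. A sketch of the equivalent payload, assuming Ollama's default port (11434), its `/api/chat` endpoint, and a `deepseek-r1` model tag previously pulled with `ollama pull`; no API key is needed locally:

```python
# Default local Ollama chat endpoint.
OLLAMA_URL = "http://localhost:11434/api/chat"


def build_ollama_body(messages, model="deepseek-r1"):
    """Payload for Ollama's /api/chat endpoint; same message shape as the
    cloud call, but the model tag must match a locally pulled model."""
    return {"model": model, "messages": messages, "stream": False}


payload = build_ollama_body(
    [{"role": "user", "content": "Solve step by step: 12 * 13"}]
)
```

Keeping both request builders message-compatible is what lets the workflow switch between cloud and local models to trade off performance against privacy.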
Target Users and Value
- AI Developers and Integration Engineers: Quickly build and test DeepSeek-based intelligent dialogue systems
- Enterprise Digital Transformation Teams: Enhance customer service and knowledge management with advanced language models
- Educational and Training Institutions: Develop intelligent tutoring and Q&A bots
- Product Managers and Tech Enthusiasts: Low-barrier access to experience and validate cutting-edge language reasoning technologies
Leveraging n8n's powerful node orchestration capabilities, this workflow seamlessly integrates DeepSeek's latest V3 chat and R1 reasoning models, supporting multi-turn context management and multi-interface invocation. It significantly reduces the development complexity of AI-powered conversational systems, empowering innovative intelligent interactions across diverse scenarios.