Easily Compare LLMs Using OpenAI and Google Sheets
This workflow automates the comparison of large language models by invoking independent responses from multiple models in real time based on user chat input. It records the results and contextual information in Google Sheets for easy subsequent evaluation and comparison. It isolates each model's memory to ensure accurate context handling, and provides user-friendly templates so that non-technical team members can take part in model performance evaluation, improving the team's decision-making efficiency and testing accuracy.
Key Features and Highlights
- Real-time reception of user chat inputs while independently invoking two different large language models (LLMs) to respond to the same input.
- Automatic synchronization and recording of both models’ responses along with contextual information into Google Sheets for convenient subsequent comparison and evaluation.
- Side-by-side display of the two models’ answers within the chat interface, supporting intuitive comparison.
- Support for session ID-based isolation of model memory to ensure accurate context transfer.
- Flexible compatibility with multiple model providers, including OpenRouter, OpenAI, and Google Vertex AI, making it easy to expand and switch models.
- Provides teams with simple, user-friendly Google Sheets templates, enabling non-technical personnel to participate in model performance evaluation.
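The session-ID-based memory isolation above can be sketched as follows. This is a hypothetical illustration, not the workflow's actual node configuration: the `memory_key` helper and the in-memory store are assumptions standing in for n8n's memory nodes.

```python
from collections import defaultdict

# Hypothetical sketch: each model gets its own memory keyed by the
# (session, model) pair, so context never leaks between the two models
# being compared.
memories = defaultdict(list)

def memory_key(session_id: str, model: str) -> str:
    # One isolated memory per (session, model) pair
    return f"{session_id}::{model}"

def record_turn(session_id: str, model: str, user_msg: str, reply: str) -> None:
    memories[memory_key(session_id, model)].append((user_msg, reply))

# The same chat session produces two separate histories
record_turn("chat-1", "openai/gpt-4.1", "Hello", "Hi from GPT")
record_turn("chat-1", "mistralai/mistral-large", "Hello", "Hi from Mistral")
print(len(memories["chat-1::openai/gpt-4.1"]))  # each model sees only its own turn
```

The key point is that deriving the memory key from both the chat session and the model name guarantees that one model's prior answers can never appear in the other model's context.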
Core Problem Addressed
In AI agent development, the non-deterministic nature of large language models means that selecting the most suitable model often requires repeated testing and comparison. This workflow automates that comparison, eliminating tedious manual invocation and answer collation, thereby improving both efficiency and accuracy.
Application Scenarios
- AI product development teams evaluating the performance of different LLMs.
- Selecting the best language model among multiple options for production deployment.
- Enabling non-technical members within organizations to participate in assessing model response quality.
- Educational and research institutions conducting comparative experiments on language models.
Main Process Steps
- User sends a message via the chat interface to trigger the workflow.
- Define and split the list of models to be compared (defaulting to OpenAI GPT-4.1 and Mistral Large).
- Assign independent session IDs for each model to achieve memory isolation.
- Simultaneously invoke both models to generate responses; AI agent nodes handle model calls and context management.
- Aggregate and organize the two models’ answers, formatting them into easily readable and comparable text.
- Write user input, model responses, context, and evaluation fields into Google Sheets.
- Display both models’ answers in the chat interface to support immediate side-by-side comparison.
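The steps above can be sketched in a few lines. This is a minimal illustration under stated assumptions: `call_model` is a stub standing in for the real OpenRouter/agent call, and the field names in the sheet row are hypothetical, not the template's actual columns.

```python
# Assumed model identifiers, mirroring the workflow's defaults
MODELS = ["openai/gpt-4.1", "mistralai/mistral-large"]

def call_model(model: str, session_id: str, prompt: str) -> str:
    # Stub: a real implementation would call the provider's chat API here,
    # passing session_id so each model keeps its own conversation memory.
    return f"[{model}] answer to: {prompt}"

def compare(prompt: str, chat_session: str) -> dict:
    answers = {}
    for model in MODELS:
        # Independent session ID per model keeps memories isolated
        session_id = f"{chat_session}::{model}"
        answers[model] = call_model(model, session_id, prompt)
    # Side-by-side text for the chat interface
    side_by_side = "\n\n".join(f"### {m}\n{a}" for m, a in answers.items())
    # One row per comparison for the Google Sheet (hypothetical columns)
    row = {"input": prompt, **answers, "rating": ""}
    return {"display": side_by_side, "row": row}

result = compare("What is n8n?", "chat-42")
print(result["display"])
```

The empty `rating` field is where a human evaluator would later score each answer in the sheet.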
Involved Systems or Services
- OpenRouter API (supporting calls to OpenAI, Mistral, and other models)
- Google Sheets (used as the results recording and evaluation platform)
- Core n8n automation platform nodes (such as Set, Split, Loop, Aggregate, etc.)
- LangChain-related nodes (chat trigger, memory management, AI agent)
Target Users and Value
- AI developers and data scientists: Quickly benchmark model performance and optimize model selection.
- Product managers and business personnel: Participate intuitively in model evaluation via Google Sheets.
- Educators and researchers: Conveniently set up multi-model comparison experimental environments.
- Teams: Manage and compare model responses on a unified platform, enhancing decision-making efficiency.
This workflow significantly simplifies the multi-model comparison process. Through automation and structured data recording, it helps teams scientifically and systematically select the best language model, reducing trial-and-error costs in AI projects.
AI Agent to Chat with Your Search Console Data Using OpenAI and Postgres
This workflow builds an intelligent AI chat agent that allows users to converse with it in natural language to query and analyze website data from Google Search Console in real time. Leveraging OpenAI's intelligent conversational understanding and the historical memory storage of a Postgres database, users can easily obtain accurate data reports without needing to understand API details. Additionally, the agent can proactively guide users, optimizing the data querying process and enhancing user experience, while supporting multi-turn conversations to simplify data analysis and decision-making processes.
Intelligent Document Q&A – Vector Retrieval Chat System Based on Google Drive and Pinecone
This workflow automatically downloads documents from Google Drive, uses OpenAI to process the text and generate vector embeddings, and stores them in the Pinecone vector database. Users can then ask questions in natural language through a chat interface, and the system returns relevant answers based on vector retrieval. This approach addresses the inefficiency and inaccuracy of traditional document search, making it applicable to corporate knowledge bases, legal, research, and customer service scenarios, and improving both the convenience and accuracy of information retrieval.
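The retrieval step this description relies on can be sketched as follows. This is a toy illustration: a bag-of-words embedding and an in-memory list stand in for OpenAI embeddings and Pinecone, and the document chunks are invented examples.

```python
import math
from collections import Counter

# Toy stand-in for OpenAI embeddings: a bag-of-words vector
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy "index" built from chunks of downloaded documents
index = [
    ("doc1#0", "vacation policy employees get twenty days"),
    ("doc2#0", "expense reports must be filed monthly"),
]
vectors = [(cid, embed(chunk), chunk) for cid, chunk in index]

def retrieve(question: str, top_k: int = 1) -> list[str]:
    # Embed the question and return the most similar chunks
    q = embed(question)
    ranked = sorted(vectors, key=lambda v: cosine(q, v[1]), reverse=True)
    return [chunk for _, _, chunk in ranked[:top_k]]

print(retrieve("how many vacation days do employees get"))
```

In the real workflow the embedding call and the similarity search are handled by OpenAI and Pinecone respectively; only the shape of the pipeline (embed the question, rank stored chunks, return the top matches) is the same.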
Automated Document Note Generation and Export Workflow
This workflow monitors a local folder and automatically extracts new documents, generates intelligent summaries, stores vectors, and produces documents in various formats such as study notes, briefings, and timelines. It supports multiple file formats including PDF, DOCX, and plain text. By integrating advanced AI language models and vector databases, it enhances content understanding and retrieval capabilities, significantly reducing the time required for traditional document organization. It is suitable for academic research, training, content creation, and corporate knowledge management, greatly improving the efficiency of information extraction and utilization.
AI Document Assistant via Telegram + Supabase
This workflow transforms a Telegram bot into an intelligent document assistant. Users can upload PDF documents via Telegram, and the system automatically parses them to generate semantic vectors, which are stored in a Supabase database for easy intelligent retrieval and Q&A. The bot utilizes a powerful language model to answer complex questions in real-time, supporting rich HTML format output and automatically splitting long replies to ensure clear information presentation. Additionally, it integrates a weather query feature to enhance user experience, making it suitable for personal knowledge management, corporate assistance, educational tutoring, and customer support scenarios.
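The automatic splitting of long replies mentioned above can be sketched like this. Telegram caps a single message at 4,096 characters; the splitter below is a hypothetical illustration (not the workflow's actual implementation) that prefers paragraph boundaries and hard-cuts only oversized paragraphs.

```python
# Telegram's documented per-message character limit
TELEGRAM_LIMIT = 4096

def split_reply(text: str, limit: int = TELEGRAM_LIMIT) -> list[str]:
    parts, current = [], ""
    for para in text.split("\n\n"):
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= limit:
            current = candidate  # paragraph still fits in the current message
        else:
            if current:
                parts.append(current)
            # A single paragraph longer than the limit is hard-cut
            while len(para) > limit:
                parts.append(para[:limit])
                para = para[limit:]
            current = para
    if current:
        parts.append(current)
    return parts

chunks = split_reply("A" * 5000 + "\n\nB" * 10)
print([len(c) for c in chunks])  # every chunk fits under the limit
```

A production splitter would also need to avoid cutting inside HTML tags so that Telegram's HTML parse mode still accepts each chunk.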
Create AI-Ready Vector Datasets for LLMs with Bright Data, Gemini & Pinecone
This workflow automates the process of web data scraping, extracting and formatting content, generating high-quality text vector embeddings, and storing them in a vector database, forming a complete data processing loop. By combining efficient data crawling, intelligent content extraction, and vector retrieval technologies, users can quickly build vector datasets suitable for training large language models, enhancing data quality and processing efficiency, and making it applicable to various scenarios such as machine learning, intelligent search, and knowledge management.
API Schema Crawler & Extractor
The API schema crawling and extraction workflow is an intelligent automation tool that efficiently searches, crawls, and extracts API documentation for specified services. By integrating search engines, web crawlers, and large language models, it not only accurately identifies API operations but also structures the information for storage in Google Sheets. Additionally, it generates customized API schema JSON files for centralized management and sharing, significantly enhancing development and integration efficiency and helping users quickly obtain and organize API information.
Intelligent Document Q&A and Vector Database Management Workflow
This workflow automatically downloads eBooks from Google Drive, splits the text, and generates vectors, which are stored in the Supabase vector database. Users can ask questions in real-time through a chat interface, and the system quickly provides intelligent answers using vector retrieval and question-answering chain technology. Additionally, it supports operations for adding, deleting, modifying, and querying documents, enhancing the flexibility of knowledge base management. This makes it suitable for enterprise knowledge management, educational tutoring, and content extraction needs in research institutions.
🤖 AI-Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant
This workflow builds an intelligent chatbot that utilizes retrieval-augmented generation technology to extract information from Google Drive documents, combined with natural language processing for smart Q&A. It supports batch downloading of documents, metadata extraction, and text vectorization storage, enabling efficient semantic search. Operations notifications and manual reviews are implemented through Telegram to ensure data security, making it suitable for scenarios such as enterprise knowledge bases, legal consulting, and customer support, thereby enhancing information retrieval and human-computer interaction efficiency.