Testing Multiple Local LLMs with LM Studio

This workflow automates the testing and performance analysis of multiple large language models (LLMs) running locally. By dynamically retrieving the list of loaded models and applying a standardized system prompt, users can easily compare how different models perform on the same task. The workflow records request and response times, performs multi-dimensional text analysis, and stores the structured results in Google Sheets for straightforward management and comparison. It also supports flexible parameter configuration to meet diverse testing needs, improving both the efficiency and the rigor of model evaluation.

Workflow Diagram
Testing Multiple Local LLMs with LM Studio Workflow diagram

Workflow Name

Testing Multiple Local LLMs with LM Studio

Key Features and Highlights

This workflow enables fully automated end-to-end testing and performance analysis of multiple large language models (LLMs) deployed locally. Highlights include:

  • Dynamically retrieving and iteratively invoking all loaded models on the local LM Studio server.
  • Standardizing model outputs via a unified system prompt to facilitate comparative evaluation across different models on specific tasks.
  • Automatically capturing request and response timestamps to calculate response latency.
  • Performing multi-dimensional text analysis, including word count, sentence count, average sentence length, average word length, and Flesch-Kincaid readability scoring.
  • Structuring test results and automatically saving them to Google Sheets for convenient aggregation and comparative analysis.
  • Supporting flexible configuration of parameters such as temperature, Top P, and presence penalty to meet diverse testing requirements (see the request sketch after this list).
  • Guiding users with in-workflow annotations for quick LM Studio server setup and workflow parameter updates.
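LM Studio's local server follows the OpenAI chat completions format, so the sampling parameters above map directly onto the request body. The sketch below is a minimal illustration only; the model ID, prompts, and parameter values are placeholder assumptions, not the workflow's exact node configuration.

```javascript
// Minimal sketch of the sampling parameters as they appear in an
// OpenAI-compatible chat completions request body sent to LM Studio.
// Model ID, prompts, and parameter values are placeholder assumptions.
const requestBody = {
  model: "your-model-id",              // one of the IDs returned by GET /v1/models
  messages: [
    { role: "system", content: "Answer in one short, plain-language paragraph." },
    { role: "user", content: "Explain what a webhook is." },
  ],
  temperature: 0.7,                    // randomness of token sampling
  top_p: 0.9,                          // nucleus sampling cutoff
  presence_penalty: 0,                 // values > 0 discourage repeating topics
};
```

Because the endpoint is OpenAI-compatible, the same body can be reused unchanged for every model returned by the model-list call.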

Core Problems Addressed

  • Complexity in managing and testing multiple models: Automatically fetches and iterates through all local models, simplifying the testing process.
  • Lack of standardized evaluation for output text quality and readability: Built-in multi-dimensional text analysis algorithms provide quantitative metrics.
  • Dispersed and hard-to-manage test data: Automatically syncs test results to Google Sheets for centralized data management.
  • Difficulty in debugging and reproducing results: Precisely records request and response times to facilitate performance monitoring and issue diagnosis.

Application Scenarios

  • AI researchers and developers comparing performance across different local LLMs.
  • Machine learning engineers tuning local language model parameters and evaluating output quality.
  • Education and content creation sectors assessing readability and conciseness of model-generated text.
  • Enterprises deploying private LLM services for continuous quality monitoring and optimization.
  • Automated workflows requiring batch testing and analysis of model responses.

Main Workflow Steps

  1. LM Studio Server Configuration: Install LM Studio, load required models, and update the server IP address in the workflow.
  2. Chat Message Trigger: Listen for incoming text via webhook to initiate the workflow.
  3. Retrieve Local Model List: Call the LM Studio API to obtain the list of currently loaded model IDs.
  4. Iterative Model Testing: Sequentially send a request to each model using a unified system prompt to standardize response style (see the first sketch after this list).
  5. Timestamp Collection: Record request start and end times to compute response latency.
  6. Text Response Analysis: Run code nodes to calculate word count, sentence count, average sentence length, average word length, and readability scores.
  7. Data Preparation and Saving: Organize all test parameters and analysis results, then automatically append them to a Google Sheets spreadsheet (see the second sketch after this list).
  8. Result Review and Reuse: Users can directly view detailed model test reports within Google Sheets.
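Outside of n8n, steps 3 through 5 can be sketched in a few lines of JavaScript. The code below assumes the default LM Studio base URL (http://localhost:1234) and placeholder prompts; it illustrates the HTTP calls the workflow's nodes make rather than reproducing the workflow itself.

```javascript
// Sketch of steps 3-5: list loaded models, query each one with the same
// system prompt, and record latency. Base URL and prompts are assumptions.
const BASE_URL = "http://localhost:1234/v1";
const SYSTEM_PROMPT = "Answer concisely for a general audience.";
const USER_PROMPT = "Summarize what LM Studio does in two sentences.";

async function testAllModels() {
  // Step 3: retrieve the list of currently loaded model IDs.
  const modelsRes = await fetch(`${BASE_URL}/models`);
  const { data: models } = await modelsRes.json();

  const results = [];
  for (const model of models) {
    // Step 5: timestamp before the request is sent.
    const startedAt = Date.now();

    // Step 4: the same system prompt for every model standardizes response style.
    const res = await fetch(`${BASE_URL}/chat/completions`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: model.id,
        messages: [
          { role: "system", content: SYSTEM_PROMPT },
          { role: "user", content: USER_PROMPT },
        ],
      }),
    });
    const body = await res.json();

    // Step 5: timestamp after the response arrives, and the derived latency.
    const finishedAt = Date.now();
    results.push({
      modelId: model.id,
      latencyMs: finishedAt - startedAt,
      responseText: body.choices[0].message.content,
    });
  }
  return results;
}

testAllModels().then((results) => console.table(results));
```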
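For step 7, each test result is flattened into one row before being appended to the sheet. The column names below are a hypothetical layout for illustration; they should be aligned with the header row of the actual spreadsheet.

```javascript
// Hypothetical flattening of one test result into a sheet row for step 7.
// Column names are assumptions; match them to your spreadsheet's header row.
function toSheetRow({ modelId, latencyMs, responseText }, analysis, params) {
  return {
    "Timestamp": new Date().toISOString(),
    "Model": modelId,
    "Temperature": params.temperature,
    "Top P": params.topP,
    "Presence Penalty": params.presencePenalty,
    "Latency (ms)": latencyMs,
    "Word Count": analysis.wordCount,
    "Sentence Count": analysis.sentenceCount,
    "Avg Sentence Length": analysis.avgSentenceLength,
    "Avg Word Length": analysis.avgWordLength,
    "Flesch-Kincaid Grade": analysis.fleschKincaidGrade,
    "Response": responseText,
  };
}
```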

Involved Systems and Services

  • LM Studio: Locally deployed language model server providing model listings and conversational interfaces.
  • n8n: Automation workflow platform responsible for process control and node orchestration.
  • Google Sheets: Cloud-based spreadsheet service used for storing and managing test data.
  • Webhook: Receives external chat messages to trigger workflow execution.
  • JavaScript Code Nodes: Perform the multi-dimensional text and readability analysis described in step 6 (sketched below).
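As a rough illustration of what the Code node in step 6 computes, the function below derives the listed metrics from a single response. The syllable counter is a simple vowel-group heuristic and the readability value uses the standard Flesch-Kincaid grade-level formula, so the output is an approximation rather than a reproduction of the workflow's exact script.

```javascript
// Sketch of the step 6 analysis for one model response.
// The syllable heuristic is approximate; input field names are assumptions.
function analyzeText(text) {
  const words = text.trim().split(/\s+/).filter(Boolean);
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);

  // Rough syllable estimate: count vowel groups per word, minimum of one.
  const countSyllables = (word) =>
    Math.max(1, (word.toLowerCase().match(/[aeiouy]+/g) || []).length);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);

  const wordCount = words.length;
  const sentenceCount = Math.max(1, sentences.length);
  const avgSentenceLength = wordCount / sentenceCount;                       // words per sentence
  const avgWordLength =
    words.reduce((sum, w) => sum + w.length, 0) / Math.max(1, wordCount);    // characters per word

  // Flesch-Kincaid grade level:
  // 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
  const fleschKincaidGrade =
    0.39 * avgSentenceLength + 11.8 * (syllables / Math.max(1, wordCount)) - 15.59;

  return { wordCount, sentenceCount, avgSentenceLength, avgWordLength, fleschKincaidGrade };
}

// In an n8n Code node, the text would come from the previous node's output,
// e.g. return items.map((item) => ({ json: analyzeText(item.json.responseText) }));
console.log(analyzeText("LM Studio runs models locally. It exposes an OpenAI-compatible API."));
```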

Target Users and Value

  • AI Researchers and Data Scientists: Facilitate rapid evaluation and comparison of multiple local models’ text generation quality.
  • Machine Learning Engineers: Assist in debugging and optimizing model parameters to improve performance.
  • Content Review and Editing Teams: Quantify text readability to ensure outputs meet target audience reading levels.
  • Enterprise Technical Teams: Enable automated testing and performance monitoring of private LLM services.
  • Educational and Training Institutions: Assess whether model outputs are suitable for different educational stages.

By combining automated testing with built-in analysis, this workflow significantly lowers the barrier to evaluating multiple local models, speeds up model selection and tuning, and gives a wide range of users systematic, quantitative insight into model performance.