Documentation Index
Fetch the complete documentation index at: https://docs.vlm.run/llms.txt
Use this file to discover all available pages before exploring further.
Overview
The VLM Run Feedback API enables you to collect and submit feedback on model predictions to continuously improve accuracy and performance. This feedback data is essential for fine-tuning vlm-1 models to better serve your specific use cases and domains.
Why Feedback Matters
Providing feedback on model predictions serves several critical purposes:
- Model Fine-tuning: Feedback data is used to fine-tune models for improved accuracy on your specific data patterns
- Performance Optimization: Helps identify areas where the model needs improvement
- Domain Adaptation: Enables the model to better understand your industry-specific requirements
- Quality Assurance: Provides a mechanism to flag incorrect or problematic predictions
- Evaluations: Structured corrections (and, optionally, note-derived JSON) serve as the ground truth that the platform compares against stored model outputs when you run Evaluations
Submit Feedback
After receiving a prediction, you can submit feedback to help improve future model performance.
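As a minimal sketch, submitting feedback typically means pairing the original prediction's request ID with your corrections and any explanatory notes, then POSTing that to the API. The endpoint path (`/feedback/submit`), the field names (`request_id`, `response`, `notes`), and the base URL below are illustrative assumptions, not a confirmed schema; consult the documentation index above for the authoritative reference.

```python
import json
from urllib import request

API_BASE = "https://api.vlm.run/v1"  # assumed base URL, for illustration only


def build_feedback_payload(request_id: str, corrections: dict, notes: str = "") -> dict:
    """Pair a prediction's request ID with corrected values and free-form notes.

    Field names here (`request_id`, `response`, `notes`) are hypothetical.
    """
    return {
        "request_id": request_id,   # ties the feedback to a specific prediction
        "response": corrections,    # corrected ground-truth values
        "notes": notes,             # free-form reasoning / context
    }


def submit_feedback(payload: dict, api_key: str) -> int:
    """POST the feedback payload; returns the HTTP status code."""
    req = request.Request(
        f"{API_BASE}/feedback/submit",  # assumed endpoint path
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.status


# Build (but do not send) a sample payload with a hypothetical prediction ID.
payload = build_feedback_payload(
    request_id="pred_abc123",
    corrections={"invoice_total": 1250.00},
    notes="Model read the subtotal instead of the grand total.",
)
```

Keeping payload construction separate from the HTTP call makes the payload easy to validate or log before anything is sent over the network.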
Fine-tuning with Feedback
The feedback you provide is used to create fine-tuned models that perform better on your specific use cases. This process involves:
- Data Collection: Feedback is aggregated across your organization
- Model Training: Fine-tuned models are created using your feedback data
- Performance Improvement: Updated models show improved accuracy on similar tasks
Best Practices
- Be Specific: Provide detailed feedback about what was correct or incorrect
- Use Structured Data: Include ratings, categories, and specific metrics when possible
- Add Context: Use the notes field to explain your reasoning
- Consistent Feedback: Maintain consistent criteria across your team for better model training
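The practices above can be combined in a single feedback payload: a structured rating and category, field-level corrections that pinpoint exactly what was wrong, and a notes field explaining the reasoning. Every field name here is a hypothetical illustration rather than a confirmed schema, and the category vocabulary is something your team would agree on for consistency.

```python
# Illustrative structured feedback combining the best practices above.
# All field names and category values are hypothetical, not a confirmed schema.
feedback = {
    "request_id": "pred_abc123",           # ties feedback to a specific prediction
    "rating": 3,                           # structured metric (e.g., a 1-5 scale)
    "category": "field_extraction_error",  # shared category for consistent labeling
    "response": {                          # be specific: correct only the wrong fields
        "invoice_date": "2024-03-15",      # the corrected value for this field
    },
    "notes": "Date was transposed; our invoices use DD/MM/YYYY.",  # context
}

# A consistency check a team might enforce before submitting feedback,
# so everyone labels errors with the same agreed-upon vocabulary:
ALLOWED_CATEGORIES = {"field_extraction_error", "formatting_error", "hallucination"}
assert feedback["category"] in ALLOWED_CATEGORIES
```

Validating the category against a shared list before submission is one simple way to keep feedback criteria consistent across a team, which the model-training process benefits from.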