Providing Feedback
Improve model performance through feedback collection and fine-tuning.
Overview
The VLM Run Feedback API enables you to collect and submit feedback on model predictions to continuously improve accuracy and performance. This feedback data is essential for fine-tuning vlm-1
models to better serve your specific use cases and domains.
Why Feedback Matters
Providing feedback on model predictions serves several critical purposes:
- Model Fine-tuning: Feedback data is used to fine-tune models for improved accuracy on your specific data patterns
- Performance Optimization: Helps identify areas where the model needs improvement
- Domain Adaptation: Enables the model to better understand your industry-specific requirements
- Quality Assurance: Provides a mechanism to flag incorrect or problematic predictions
Submit Feedback
After receiving a prediction, you can submit feedback to help improve future model performance.
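The snippet below is a minimal sketch of submitting feedback over HTTP with Python's `requests` library. The endpoint path, field names, and `submit_feedback` helper are illustrative assumptions, not the authoritative API contract; consult the API reference for the exact schema.

```python
import os
import requests

API_BASE = "https://api.vlm.run/v1"        # assumed base URL
API_KEY = os.environ["VLMRUN_API_KEY"]     # your API key

def submit_feedback(prediction_id, label, notes=None):
    """Attach a corrected output (and optional notes) to an existing prediction."""
    response = requests.post(
        f"{API_BASE}/feedback/submit",      # hypothetical endpoint path
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "request_id": prediction_id,    # the prediction being reviewed
            "label": label,                 # corrected / expected structured output
            "notes": notes,                 # free-form explanation of the correction
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example: flag an incorrectly extracted field on an invoice prediction.
feedback = submit_feedback(
    prediction_id="pred_abc123",
    label={"invoice_total": 1250.00},
    notes="Model read the subtotal instead of the grand total.",
)
print(feedback)
```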
Retrieve Feedback
You can retrieve all feedback associated with a specific prediction.
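Continuing the assumptions above (base URL, bearer-token auth, and a hypothetical GET endpoint keyed by prediction id), a retrieval call might look like this sketch:

```python
import os
import requests

API_BASE = "https://api.vlm.run/v1"        # assumed base URL
API_KEY = os.environ["VLMRUN_API_KEY"]

def get_feedback(prediction_id):
    """Return all feedback entries recorded for a given prediction."""
    response = requests.get(
        f"{API_BASE}/feedback/{prediction_id}",  # hypothetical endpoint path
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Print a short summary of each feedback entry for a prediction.
for entry in get_feedback("pred_abc123"):
    print(entry.get("created_at"), entry.get("notes"))
```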
Fine-tuning with Feedback
The feedback you provide is used to create fine-tuned models that perform better on your specific use cases. This process involves:
- Data Collection: Feedback is aggregated across your organization
- Model Training: Fine-tuned models are created using your feedback data
- Performance Improvement: Updated models show improved accuracy on similar tasks
Best Practices
- Be Specific: Provide detailed feedback about what was correct or incorrect
- Use Structured Data: Include ratings, categories, and specific metrics when possible (see the example payload after this list)
- Add Context: Use the notes field to explain your reasoning
- Consistent Feedback: Maintain consistent criteria across your team for better model training
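The payload below illustrates these practices together: a structured correction rather than free text, an explicit rating and category, and notes that explain the reasoning. The field names are assumptions for illustration only and should be checked against the API reference.

```python
# Illustrative feedback payload following the best practices above.
feedback_payload = {
    "request_id": "pred_abc123",
    "label": {                       # structured correction, not free text
        "invoice_total": 1250.00,
        "currency": "USD",
    },
    "rating": 2,                     # e.g. a 1-5 quality score agreed on by your team
    "categories": ["field_extraction_error"],
    "notes": "Total was read from the subtotal line; the grand total is on page 2.",
}
```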
Fine-tuning capabilities are currently available only to enterprise-tier customers. If you are interested in using this feature, please contact us.