Date: March 2025
Scope: Prompt + API behavior (PRO version)
Status: ✅ Live & stable
🧠 Context
We noticed that OpenAI was rejecting or truncating a significant number of requests on the PRO prompt. These issues were mostly caused by prompt complexity, ambiguous instructions, and high token load.
This update introduces a set of improvements to increase response success, stability, and usability — without sacrificing the core value of the advanced optimization.
🔄 What Changed
1. Prompt Simplification
- Reduced the overall length and complexity of the prompt instructions.
- Removed vague or redundant wording that could confuse the model.
- Focused on direct, structured outputs.
2. Content Truncation Limit
- Max input content reduced from 8000 → 6000 characters.
- Goal: prevent excessive token usage and improve model processing reliability.
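
As a minimal sketch, this is roughly what the new limit can look like in the API layer; the `MAX_INPUT_CHARS` constant and `truncateContent` helper are illustrative names, not taken from the actual codebase:

```ts
// Illustrative names; only the 6000-character limit comes from this changelog.
const MAX_INPUT_CHARS = 6000;

// Truncate user content to the new ceiling before prompt assembly.
function truncateContent(content: string): string {
  if (content.length <= MAX_INPUT_CHARS) return content;
  // Cut at the limit, then back off to the last whitespace so a word is not split.
  const hardCut = content.slice(0, MAX_INPUT_CHARS);
  const lastSpace = hardCut.lastIndexOf(" ");
  return lastSpace > 0 ? hardCut.slice(0, lastSpace) : hardCut;
}
```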
3. Simplified JSON Output Structure
- Removed deeply nested or lower-priority fields: `topicCoherence`, `readabilityMetrics`, `aiComparison`.
- Flattened objects like `structureReview` and `linkingSuggestions`.
- Kept all core fields intact (`summary`, `entities`, `meta`, `schema`, etc.).
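
Sketched below is what the simplified output shape might look like as a TypeScript type; only the field names above come from this update, and all the types and the interface name are assumptions:

```ts
// Field names follow the changelog; types and the interface name are assumptions.
interface ProAnalysisResult {
  summary: string;
  entities: string[];
  meta: Record<string, string>;
  schema: Record<string, unknown>;
  structureReview: string[];    // flattened from a previously nested object
  linkingSuggestions: string[]; // flattened from a previously nested object
  // No longer present: topicCoherence, readabilityMetrics, aiComparison
}
```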
4. Clearer Instructions for Output Format
- Explicitly required:
  - "Return only valid JSON"
  - No explanations, markdown, or code blocks
- Greatly improved compatibility with response parsing.
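
Since the prompt now demands bare JSON, parsing can be close to a plain `JSON.parse`. A minimal sketch, where the helper name and the defensive fence-stripping step are hypothetical rather than documented behavior:

```ts
// Hypothetical parser; the fence-stripping is a defensive extra, not a documented step.
function parseModelOutput(raw: string): ProAnalysisResult {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "") // strip a leading code fence if one sneaks in
    .replace(/```\s*$/, "");          // strip a trailing fence
  return JSON.parse(cleaned) as ProAnalysisResult;
}
```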
5. API Parameter Optimization
- Reduced `max_tokens` from 4000 → 3000.
- Maintained `temperature` at 0.7 to balance structure and creativity.
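
For reference, a minimal sketch of the call with the updated parameters, using the official `openai` Node SDK; the model name and the wrapper function are placeholders, since the changelog does not name them:

```ts
import OpenAI from "openai";

// Hypothetical wrapper; only the max_tokens and temperature values come from this update.
async function runProAnalysis(systemPrompt: string, userContent: string): Promise<string> {
  const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const completion = await openai.chat.completions.create({
    model: "gpt-4o", // placeholder; the actual model is not specified here
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: truncateContent(userContent) },
    ],
    max_tokens: 3000, // reduced from 4000
    temperature: 0.7, // unchanged, balances structure and creativity
  });
  return completion.choices[0]?.message?.content ?? "";
}
```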
6. Improved Error Handling
- Improved fallback logic and error messages in the API layer.
- Logs now include part of the raw OpenAI response for better debugging.
- Partial-results support is under evaluation for a future release.
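
A sketch of how the fallback and logging could fit together, reusing the hypothetical helpers above; the logger, snippet length, and error shape are all assumptions:

```ts
// Illustrative fallback flow; the error shape and 500-character snippet are assumptions.
async function analyzeWithFallback(systemPrompt: string, userContent: string) {
  let raw = "";
  try {
    raw = await runProAnalysis(systemPrompt, userContent);
    return parseModelOutput(raw);
  } catch (err) {
    // Include part of the raw OpenAI response so failures are debuggable from logs.
    console.error("PRO analysis failed:", err, "| raw snippet:", raw.slice(0, 500));
    // Fallback: return a structured error instead of failing the whole request.
    return { error: "ANALYSIS_FAILED", message: "Could not produce a valid analysis." };
  }
}
```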
✅ Outcome
- Significantly fewer 400/500 errors from OpenAI.
- Higher rate of complete and valid responses.
- More stable experience for PRO users while retaining the core depth of analysis.