Script Intelligence
Score screenplays before they shoot; score catalogue films from real evidence.
Two engines on one schema. The catalogue engine grades shipped films against 40 realistic KPIs grounded in box office records, press articles, decode reports, and crew metadata. The upload engine scores unseen screenplays against 200 predictive parameters before a frame is shot.
How it works
The call your team is already trying to make, grounded in evidence.
Catalogue: 40 realistic KPIs (theatrical performance, critic-vs-audience, controversy, comparable-film delta) per shipped film, each with a confidence chip. Upload: 200-parameter screenplay score plus calibrated probability the finished film clears its production budget on first window.
- Catalogue KPIs grounded in real BoxOfficeRecord, Source, Review, decode, and crew rows
- Upload screenplay analysis across pacing, conflict density, mass moments, and act balance
- Confidence chip per KPI so thin-evidence films degrade gracefully instead of inventing scores
- Cast-fit projection conditioned on the attached lead and director
- Genre-conditioned analog search across the seeded film catalogue
- SHAP-style risk drivers for upload predictions; claim-evidence citations for catalogue KPIs
- Greenlight memo export ready for studio committees
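The "degrade gracefully" behaviour above can be sketched in a few lines: KPIs whose evidence is thin carry a low or unknown confidence chip and can be filtered out client-side rather than presented as hard numbers. This is an illustrative sketch in TypeScript; the field names mirror the example API responses on this page, not the official SDK types.

```typescript
type Confidence = "high" | "medium" | "low" | "unknown";

// Shape assumed from the example catalogue response, not the real SDK.
interface Kpi {
  id: string;
  value: number;
  confidence: Confidence;
  citations: number;
}

// Order the chips so they can be compared numerically.
const CONFIDENCE_RANK: Record<Confidence, number> = {
  unknown: 0,
  low: 1,
  medium: 2,
  high: 3,
};

/** Keep only KPIs at or above a minimum confidence level. */
function usableKpis(kpis: Kpi[], floor: Confidence = "medium"): Kpi[] {
  return kpis.filter(
    (k) => CONFIDENCE_RANK[k.confidence] >= CONFIDENCE_RANK[floor]
  );
}

const scored: Kpi[] = [
  { id: "theatrical_roi", value: 2.4, confidence: "high", citations: 3 },
  { id: "music_director_lift", value: 0.22, confidence: "low", citations: 2 },
];

// Only "theatrical_roi" clears the default "medium" floor.
console.log(usableKpis(scored).map((k) => k.id));
```

A thin-evidence film simply returns fewer usable KPIs instead of fabricated scores.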
Signal sources
What feeds the script intelligence model.
- TMDB and IMDb production metadata (live)
- BoxOfficeRecord (39 Suriya films seeded) (live)
- NewsAPI (live)
- NewsData.io (live)
- Reddit (r/kollywood, r/tamilcinema, r/movies) (live)
- Wikipedia REST (live)
- Google News RSS (live)
- GDELT 2.0 (live)
- YouTube trailer / reaction signal (live)
- Behindwoods (reviews + audience polls) (planned)
- Sify Movies reviews archive (planned)
- Letterboxd and Rotten Tomatoes early reviews (planned)
Use cases
What teams actually do with Script Intelligence.
Greenlight reviews
Run scoring across a portfolio of candidate scripts and surface the ones whose Script Intelligence signals diverge most from the internal narrative around them.
Continuous monitoring
Subscribe via webhook and receive a payload only when the score crosses a threshold you defined. No dashboards required.
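The threshold rule above is a crossing check, not a "notify on every re-score" rule: a payload fires only when the score moves across the subscriber's threshold in either direction. A minimal sketch, with hypothetical names (the real webhook payload shape is not documented here):

```typescript
// Hypothetical event shape: the score before and after a re-score run.
interface ScoreEvent {
  entityId: string;
  previousScore: number;
  currentScore: number;
}

/** Fire only when the score crosses `threshold` in either direction. */
function shouldNotify(event: ScoreEvent, threshold: number): boolean {
  const wasAbove = event.previousScore >= threshold;
  const isAbove = event.currentScore >= threshold;
  return wasAbove !== isAbove; // changed side of the threshold
}

// A drop from 0.72 to 0.58 crosses a 0.6 threshold, so this fires (true):
console.log(
  shouldNotify(
    { entityId: "film_kj91", previousScore: 0.72, currentScore: 0.58 },
    0.6
  )
);
```

A move from 0.72 to 0.65 stays above 0.6 and stays silent, which is what makes dashboards unnecessary.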
Cross-vertical analysis
Pull the same entity into other verticals to see whether the signals agree or disagree. Disagreement is often where the alpha is.
Backtesting before commitment
Score the last 24 months of decisions against the model and compare hit rates. The calibration_bucket field tells you how much to trust the model in the bands you care about.
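The backtest above boils down to grouping past decisions by the model's calibration_bucket and computing the hit rate within each band. A hedged sketch; the record shape below is an assumption built from the fields this section names, not an exported SDK type:

```typescript
// Assumed shape of one backtested decision.
interface BacktestRow {
  calibrationBucket: string; // e.g. a predicted-probability band like "0.1-0.2"
  predictedFlop: boolean;
  actualFlop: boolean;
}

/** Fraction of correct calls (hit rate) per calibration bucket. */
function hitRateByBucket(rows: BacktestRow[]): Map<string, number> {
  const totals = new Map<string, { hits: number; n: number }>();
  for (const r of rows) {
    const t = totals.get(r.calibrationBucket) ?? { hits: 0, n: 0 };
    t.n += 1;
    if (r.predictedFlop === r.actualFlop) t.hits += 1;
    totals.set(r.calibrationBucket, t);
  }
  const rates = new Map<string, number>();
  for (const [bucket, t] of Array.from(totals.entries())) {
    rates.set(bucket, t.hits / t.n);
  }
  return rates;
}
```

Bands where the hit rate collapses are the bands where the model should not be trusted, regardless of its headline accuracy.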
API
Calibrated, explainable, ready for production.
Catalogue calls return 40 realistic KPIs grounded in real evidence, each with a confidence chip (high / medium / low / unknown). Every prediction also ships with the nearest historical analogs the model leaned on.
Upload calls additionally return a 200-parameter predictive score for unseen content (today: screenplays).
Available via REST and a typed TypeScript SDK.
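Once an upload response is in hand, its SHAP-style drivers array can be ranked client-side by absolute contribution to find the largest risk factors, positive or negative. A sketch in TypeScript; the types mirror the example payloads on this page and are not the official SDK definitions:

```typescript
// Shape assumed from the example upload response.
interface Driver {
  feature: string;
  contribution: number;
}

/** Feature names sorted by absolute impact, largest first. */
function topDrivers(drivers: Driver[], n = 3): string[] {
  return [...drivers]
    .sort((a, b) => Math.abs(b.contribution) - Math.abs(a.contribution))
    .slice(0, n)
    .map((d) => d.feature);
}

const drivers: Driver[] = [
  { feature: "act2_pacing", contribution: -0.09 },
  { feature: "lead_q_score", contribution: -0.07 },
  { feature: "genre_saturation_q4", contribution: 0.05 },
];

// The two biggest drivers by magnitude, regardless of sign:
console.log(topDrivers(drivers, 2));
```

Sorting by magnitude rather than raw value matters because a driver that pushes pFlop down is just as explanatory as one that pushes it up.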
See full API reference

Example catalogue response:

```json
{
  "entityId": "film_kj91",
  "title": "Soorarai Pottru",
  "kpiCountScored": 40,
  "kpis": [
    { "id": "theatrical_roi", "value": 2.4, "confidence": "high", "citations": 3 },
    { "id": "critic_audience_delta", "value": -0.18, "confidence": "high", "citations": 12 },
    { "id": "controversy_index_30d", "value": 0.07, "confidence": "medium", "citations": 4 },
    { "id": "comparable_film_lift", "value": 0.31, "confidence": "medium", "citations": 6 },
    { "id": "music_director_lift", "value": 0.22, "confidence": "low", "citations": 2 }
  ],
  "decodeReportRef": "report_v2_film_kj91",
  "asOf": "2026-05-10T00:00:00Z"
}
```

Example upload response:

```json
{
  "entityId": "scr_8af21",
  "verdict": "lean_hit",
  "parameterCountScored": 200,
  "pFlop": 0.18,
  "ci95": [0.11, 0.27],
  "drivers": [
    { "feature": "act2_pacing", "contribution": -0.09 },
    { "feature": "lead_q_score", "contribution": -0.07 },
    { "feature": "genre_saturation_q4", "contribution": 0.05 },
    { "feature": "trailer_engagement_index", "contribution": -0.04 },
    { "feature": "budget_to_genre_median", "contribution": 0.03 }
  ],
  "analogs": ["tt0114369", "tt1375666", "tt2543164", "tt6751668", "tt7286456"]
}
```

Compared to
How Script Intelligence differs from what is already out there.
| Vendor | How SignalGrid is different |
|---|---|
| Largo.ai | We expose driver-level explanations and analog films, not just a green or red light. |
| Cinelytic | Our model is conditioned on the full open public signal, not only structured studio data. |
| ScriptBook | We ship calibrated confidence intervals on every prediction. |
Try Script Intelligence on your own entities.
Start free during the public beta. We will pre-select Script Intelligence for you on signup.