Methodology

This page explains how CivicBrief is built today: what we measure, how we prioritize honesty over completeness, and where automation ends and human editorial judgment begins. If something on the site disagrees with this page, treat this page as the source of truth for claims and limits.

What CivicBrief is

CivicBrief is a Texas-focused civic intelligence surface: start from an address or an entity, see which governments apply, and follow money, documents, agendas, and (where we have modeled it) transparency signals. We ship with explicit coverage labels so empty tables read as "not indexed yet," not as proof that nothing exists.

Address resolution and your stack

When you enter an address, we resolve it to census and boundary context, then attach entity ids for the city, school district, county, and other layers we support. The order and labels you see follow the v1 profile routing contract (/entity/[id] and county redirects). See also Who handles what and Address lookup. Resolution quality depends on geocoder accuracy and boundary data freshness; we surface match metadata where the API exposes it.
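
For illustration only, a minimal TypeScript sketch of what a resolved address can carry; the type and field names below (ResolvedAddress, ResolvedLayer, match) are hypothetical assumptions, not the actual API contract.

```ts
// Hypothetical shape of an address-resolution result. Field names are
// illustrative only; they do not describe the real CivicBrief API.
interface ResolvedLayer {
  kind: "city" | "school_district" | "county" | string;
  entityId: string;        // the id used to route to /entity/[id]
  name: string;
}

interface ResolvedAddress {
  query: string;           // the address as entered
  match?: {
    score: number;         // geocoder confidence, surfaced where the API exposes it
    source: string;        // which geocoder or boundary dataset answered
  };
  layers: ResolvedLayer[]; // city, school district, county, and other supported layers
}

// Under the v1 routing contract, layers resolve to /entity/[id]; counties may
// redirect, so this helper only covers the direct case.
function profilePath(layer: ResolvedLayer): string {
  return `/entity/${layer.entityId}`;
}
```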

Coverage, tiers, and entity profiles

Entity profiles combine database-backed rows (where migrated) with CivicBrief narrative and status chips for documents, money, and agendas. Tier labels describe how much of a jurisdiction’s public record we have actually ingested and linked — not how “good” the government is. A sparse profile with honest gaps is preferable to a full-looking profile backed by stubs.
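
As a sketch only, here is one way the tier label and status chips could be modeled; the tier names and chip values are assumptions, not the production schema.

```ts
// Illustrative model of a profile's coverage tier and status chips.
// Tier names and chip values here are assumptions, not the real schema.
type CoverageTier = "stub" | "partial" | "substantial" | "deep";

type ChipStatus = "indexed" | "not_indexed_yet";

interface EntityProfileStatus {
  entityId: string;
  tier: CoverageTier; // how much of the public record we have ingested and linked
  chips: {
    documents: ChipStatus;
    money: ChipStatus;
    agendas: ChipStatus;
  };
}
```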

Some roadmap tools — for example side-by-side compare for subscribers — are reserved until billing and deeper entity joins are production-ready.

Transparency rankings

The transparency table and map use a pilot list of jurisdictions (major metros and a spread of regions), not a full statewide census. Coverage expands monthly as we wire real indexing and scoring.

Scores and per-category values are deterministic placeholders derived from each entity id so the UI, sorting, and sparklines behave consistently. They are not live audits of portals yet. When the transparency engine is fully connected, the same dimensions will be backed by checked sources and review.
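
A minimal sketch of what "deterministic placeholder derived from the entity id" can mean in practice, assuming a simple string hash; the actual derivation may differ, and only the determinism comes from this page.

```ts
// Sketch of a deterministic stub score, assuming a simple 32-bit rolling hash
// over the entity id and category name. Only the "deterministic, derived from
// the entity id" property is documented here; the real derivation may differ.
function placeholderScore(entityId: string, category: string): number {
  const key = `${entityId}:${category}`;
  let h = 0;
  for (let i = 0; i < key.length; i++) {
    h = (h * 31 + key.charCodeAt(i)) >>> 0; // keep the hash in unsigned 32-bit range
  }
  return h % 101;                           // stable 0–100 placeholder
}
```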

Stars (1–5) in the rankings table are a compact banding of the overall score; they are not a count of categories. The sparkline column shows relative stub scores across 10 categories in fixed order (a sketch of both follows the list):

  1. budget
  2. spending
  3. contracts
  4. meetings
  5. gis
  6. permits
  7. service requests
  8. elections
  9. courts jail
  10. debt
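
For concreteness, a sketch under two assumptions: the overall score is on a 0–100 scale, and the star thresholds shown are illustrative. The fixed category order is the one listed above.

```ts
// Fixed sparkline order, exactly as listed above.
const SPARKLINE_CATEGORIES = [
  "budget", "spending", "contracts", "meetings", "gis",
  "permits", "service requests", "elections", "courts jail", "debt",
] as const;

// Hypothetical banding of a 0–100 overall score into 1–5 stars.
// The thresholds are illustrative; this page only states that stars are a
// banding of the overall score, not a count of categories.
function starsFromOverall(overall: number): 1 | 2 | 3 | 4 | 5 {
  if (overall >= 90) return 5;
  if (overall >= 75) return 4;
  if (overall >= 60) return 3;
  if (overall >= 40) return 2;
  return 1;
}
```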

Weekly digest

The /digest page mirrors the composed issue for the current coverage window. Email delivery is advertised as Tuesday mornings (America/Chicago); the on-page “coverage window” label describes the data slice used to compose that issue, not the instant the email leaves our provider. On quiet weeks, we say so plainly; you can append ?sample=1 to see illustrative cards when there are no qualifying items.
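
A minimal sketch of that sample-mode check, assuming a standard URLSearchParams lookup; the function name and surrounding page code are hypothetical.

```ts
// Hypothetical helper for /digest sample mode: illustrative cards are shown
// only when there are no qualifying items and ?sample=1 is present.
function shouldShowSampleCards(search: string, hasQualifyingItems: boolean): boolean {
  const params = new URLSearchParams(search);
  return !hasQualifyingItems && params.get("sample") === "1";
}

// e.g. shouldShowSampleCards("?sample=1", false) === true on a quiet week
```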

Subscribe or manage delivery from /subscribe.

Security, accounts, and updates

Authentication, billing, and automated digest dispatch are still evolving toward production parity with our security checklist. API routes return honest not_implemented or equivalent flags until those systems are fully wired — we do not pretend a feature is live when it is not.
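
As an illustration only, one way an honest stub response could look; the field names are assumptions, not the actual route contract.

```ts
// Hypothetical shape of an honest "not wired yet" response. Only the
// not_implemented flag is documented on this page; the rest is illustrative.
interface NotImplementedResponse {
  ok: false;
  status: "not_implemented";
  detail?: string;
}

const billingStub: NotImplementedResponse = {
  ok: false,
  status: "not_implemented",
  detail: "billing is not wired to production yet",
};
```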

When scoring rubrics, peer groups, or ingestion pipelines change in a material way, we will extend this methodology page and note the version in-product where users rely on comparisons over time.