Why MUTCD instead of descriptive labels

Most annotation projects we see start with descriptive sign labels: "stop sign," "yield sign," "speed limit," "warning sign." Easy to write, easy for annotators to apply, easy for buyers to understand.

And nearly useless to a state DOT, a transportation safety analyst, or any downstream system that has to integrate with existing US road infrastructure data.

The Manual on Uniform Traffic Control Devices (MUTCD) is the federal standard for road signs in the US. Every regulatory sign has a code: R1-1 is a stop sign, R1-2 is yield, R10-6 is stop-here-on-red, R1-3P is the all-way supplemental plaque. Every warning sign has a W code. Every guide sign has a D, M, or I code depending on type. The full reference is about 700 pages.

When you annotate against MUTCD, your labels integrate directly with:

  • State DOT asset inventories (every state DOT manages signs by MUTCD code)
  • FHWA reporting (federal highway data is structured this way)
  • Inventory software (Cartegraph, Lucity, ESRI Roads & Highways — all assume MUTCD)
  • Liability and safety analysis (legal evidence in road-condition litigation uses MUTCD)

When you annotate as "stop sign," every downstream consumer has to do the work of mapping that to R1-1 and hoping they got it right.
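That mapping table is something the consumer ends up maintaining by hand. A minimal sketch of what it looks like in practice (the codes shown are standard MUTCD codes; the table itself is illustrative, not complete):

```python
# A mapping a downstream consumer has to build and maintain when the
# deliverable uses descriptive labels instead of MUTCD codes.
DESCRIPTIVE_TO_MUTCD = {
    "stop sign": "R1-1",
    "yield sign": "R1-2",
    "speed limit": "R2-1",  # assumes the standard speed-limit sign
}

def to_mutcd(label: str) -> str:
    """Resolve a descriptive label, failing loudly instead of guessing."""
    try:
        return DESCRIPTIVE_TO_MUTCD[label]
    except KeyError:
        raise ValueError(f"no MUTCD mapping for {label!r}") from None
```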

How we structure the schema

Primary class: the MUTCD code itself

The annotation's class attribute is the MUTCD code: R1-1, W3-1, D9-2, whatever applies. Annotators reference a code chart during work; senior reviewers verify against MUTCD directly for unusual cases.

For non-standard signs (private-property signs, custom warnings, foreign-jurisdiction signs that appear in the imagery), we use class=NON_MUTCD with a descriptive subclass attribute. These get flagged in the deliverable so the downstream consumer knows they're not regulatory inventory.
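A minimal sketch of that primary-class structure (the field and property names here are ours for illustration, not a fixed deliverable format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignClass:
    # An MUTCD code ("R1-1", "W3-1", ...) or the sentinel "NON_MUTCD".
    code: str
    # Required when code is NON_MUTCD: a descriptive subclass ("VMS", "historical", ...).
    subclass: Optional[str] = None

    @property
    def in_regulatory_inventory(self) -> bool:
        # NON_MUTCD features are flagged so consumers can exclude them.
        return self.code != "NON_MUTCD"
```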

Required attributes for every regulatory sign

Beyond the class, every regulatory sign annotation carries:

  • mounting_type: post, overhead, gantry, bridge, temporary post, vehicle-mounted
  • condition: good, fair, poor, failed (down or detached), missing (where the post is visible but the sign is gone)
  • visibility: clear, partial obstruction, heavy obstruction, perpendicular (face not visible from the capture direction)
  • face_count: 1 for single-face, 2 for back-to-back, 3+ for multi-face
  • assembly_position: standalone, top of assembly, middle, bottom (for multi-sign posts)

These attributes drive the downstream value. Sign condition feeds maintenance scheduling. Visibility feeds safety analysis. Mounting type feeds replacement cost estimates.
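As a sketch, the required attributes reduce to a few closed enumerations plus two structured fields. The Python below is illustrative, not our delivery format; the extra condition and visibility values cover edge cases discussed later in this piece:

```python
from dataclasses import dataclass
from enum import Enum

class MountingType(Enum):
    POST = "post"
    OVERHEAD = "overhead"
    GANTRY = "gantry"
    BRIDGE = "bridge"
    TEMPORARY_POST = "temporary_post"
    VEHICLE_MOUNTED = "vehicle_mounted"

class Condition(Enum):
    GOOD = "good"
    FAIR = "fair"
    POOR = "poor"
    FAILED = "failed"    # down, detached, hanging
    MISSING = "missing"  # post visible, sign gone

class Visibility(Enum):
    CLEAR = "clear"
    PARTIAL = "partial"
    HEAVY = "heavy"
    PERPENDICULAR = "perpendicular"  # face not visible from capture direction

@dataclass
class RequiredAttributes:
    mounting_type: MountingType
    condition: Condition
    visibility: Visibility
    face_count: int         # 1 single-face, 2 back-to-back, 3+ multi-face
    assembly_position: str  # "standalone", "top", "middle", "bottom"
```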

Optional attributes when source supports them

When the source imagery supports it (high resolution, good lighting, multiple angles), we capture additional attributes:

  • legend_text for variable signs (speed limits, exit numbers, distance values)
  • sheeting_material if it's visible enough to call (Type I, III, VIII, IX, XI)
  • mounting_height if vertical accuracy is good enough to call
  • bullet_holes (yes, a real attribute; a common condition issue on rural signs)
  • graffiti_extent
  • retroreflectivity_estimate (low/medium/high) where the source supports the inference

Optional attributes are explicitly nullable. We don't fake them when the source doesn't support the call.
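In schema terms, "explicitly nullable" means every optional field defaults to null and stays null unless the source supports the call. A sketch with illustrative field names:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OptionalAttributes:
    # None always means "source didn't support the call", never a guess.
    legend_text: Optional[str] = None                 # e.g. "SPEED LIMIT 45"
    sheeting_material: Optional[str] = None           # "Type I" ... "Type XI"
    mounting_height_m: Optional[float] = None
    bullet_holes: Optional[bool] = None
    graffiti_extent: Optional[str] = None
    retroreflectivity_estimate: Optional[str] = None  # "low" / "medium" / "high"
```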

The common edge cases and how to handle them

Sign with multiple regulatory meanings

An R1-1 stop sign with an R1-3P "ALL WAY" plaque below it. We annotate it as two features (R1-1 stop, R1-3P plaque) grouped into the same assembly, each with its own assembly_position. Some clients prefer them merged into a single "R1-1 with R1-3P plaque" feature; we follow the client preference, set in the schema document.
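As a sketch, the two-feature form looks like this in a deliverable (assembly_id is an illustrative key, not a fixed field name):

```python
# Two features, one physical post: linked by a shared assembly id, each
# carrying its own MUTCD code and assembly_position.
assembly = [
    {"class": "R1-1",  "assembly_id": "A-0412", "assembly_position": "top"},
    {"class": "R1-3P", "assembly_id": "A-0412", "assembly_position": "bottom"},
]
```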

Variable message signs (VMS)

Digital signs that display different messages at different times. We annotate the physical sign assembly with class=NON_MUTCD subclass=VMS. The displayed message is captured separately if the project needs it.

Construction-zone temporary signs

R-series and W-series codes still apply, but the mounting_type=temporary_post attribute distinguishes them. Some downstream consumers want temporary signs excluded from permanent inventory; the attribute makes that filter trivial.
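The filter really is a one-liner once the attribute is captured per feature; the rows here are hypothetical, shown only for shape:

```python
signs = [
    {"class": "W20-1", "mounting_type": "temporary_post"},  # ROAD WORK AHEAD
    {"class": "R1-1",  "mounting_type": "post"},
]

# Permanent inventory excludes anything on a temporary mount.
permanent = [s for s in signs if s["mounting_type"] != "temporary_post"]
```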

Signs partially obscured by vegetation

We annotate the visible portion of the sign with a visibility=partial attribute and a visible_area_pct value (rough estimate). Downstream maintenance teams use the partial-visibility flag to prioritize vegetation trimming.

Signs facing perpendicular to the camera

A sign perpendicular to the direction of travel is often unreadable from forward-facing imagery. We mark it visibility=perpendicular and skip the legend attribute. For projects that need full coverage of perpendicular signs, the capture rig should include side-facing cameras.

Signs that have fallen down

A sign found face-down on the ground or hanging by one bolt is annotated as condition=failed, with the position attributes reflecting reality. These are usually high-priority for maintenance teams and shouldn't be silently misclassified as good.

Foreign and historical signs

Signs from older MUTCD editions that don't match current codes (the old yellow "STOP" signs from pre-1954, county-specific variants in some rural areas). Annotated as class=NON_MUTCD with subclass=historical and a free-text description.

Quality calibration on MUTCD projects

Three measurements we report on every batch.

Code accuracy — F1 on the MUTCD code itself. Our floor on standard sign classes is 98%. Less-common classes (the W4 series and above, regulatory parking signs, specialty guide signs) usually run 95-97%, and we report the spread.

Attribute completeness — percentage of features where every required attribute was captured. It should be 100%; we audit any cases that fall below to understand why (usually source imagery problems).

Spatial accuracy — RMSE of annotated sign locations, checked against state DOT road and right-of-way centerline data. For mobile-mapping projects with surveyed control, typical RMSE is 1-3 meters horizontal. Worse than that and we flag the batch for source review.
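Code accuracy is a standard F1 computation from any metrics library; the other two reduce to a few lines each. A sketch of the completeness and spatial checks, assuming positions are already in a projected, meters-based coordinate system:

```python
import math

REQUIRED = ("mounting_type", "condition", "visibility",
            "face_count", "assembly_position")

def attribute_completeness(features):
    """Fraction of features with every required attribute non-null."""
    complete = sum(all(f.get(k) is not None for k in REQUIRED) for f in features)
    return complete / len(features)

def horizontal_rmse(annotated_xy, reference_xy):
    """Horizontal RMSE in meters between annotated and reference positions."""
    sq_errors = [(ax - rx) ** 2 + (ay - ry) ** 2
                 for (ax, ay), (rx, ry) in zip(annotated_xy, reference_xy)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```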

What to ask of a vendor labeling traffic signs

Three questions.

Do they label by MUTCD code or descriptive name? If descriptive, expect downstream integration work to add another 20-30% of the project cost on top.

Do their annotators reference MUTCD directly for edge cases, or guess? Most generic vendors guess. Ask for an example of their edge-case log on a real project.

Do they spatial-validate against road network data? Without spatial QA, the annotations look right in the image and are subtly wrong on the map.


Got a road-infrastructure annotation project? Send a sample of your imagery and tell us which MUTCD series matter most for your use case. We'll come back with a schema draft and pilot scope.