AI Workflow Open Spec
A common language for weaving together modular programmatic steps with agentic intelligence.
⚠️ This specification is a work in progress and subject to change.
10. Annexes (Informative)
Example workflows, schema references, and supporting implementation materials.
The annexes provide shared artifacts that accelerate adoption across ecosystems. They are informative rather than normative, but implementers should treat them as living references maintained alongside the core document. Each annex clarifies how to instantiate the patterns described earlier using concrete templates, vocabularies, or supporting materials. Readers can approach this section as the reference library: whenever a clause mentions “see Annex B” or similar, the supporting assets live here.
10.1 Example Workflows
Example workflows illustrate how the specification maps to real-world automations without binding implementers to a particular stack. They should showcase diverse scenarios—content moderation, lead enrichment, document summarization—highlighting how AI steps, human checkpoints, and integrations interact. These narratives bridge the gap between abstract requirements and day-to-day implementation choices.
Recommendations:
- Provide at least one end-to-end example per major profile (standard automation, regulated workflow, edge deployment).
- Annotate each graph with capability usages, data contract references, and guardrail configurations.
- Offer both narrative walkthroughs and machine-readable examples (YAML/JSON) to support tooling tests.
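To make the recommendation concrete, the sketch below shows what a machine-readable example might look like for a content-moderation workflow, expressed as a Python dict mirroring a YAML/JSON document, together with a tiny structural check of the kind tooling tests might run. All field names (`steps`, `kind`, `capability`, `guardrails`, `depends_on`) are illustrative assumptions, not normative identifiers from the spec.

```python
# Hedged sketch of a machine-readable example workflow. Field names are
# illustrative assumptions; the spec does not mandate this shape.
content_moderation = {
    "workflow": "content-moderation-demo",
    "profile": "standard-automation",  # assumed profile identifier
    "steps": [
        {
            "id": "classify",
            "kind": "ai",                         # AI step
            "capability": "text-classification",  # capability usage annotation
            "guardrails": {"max_tokens": 512},    # guardrail configuration
        },
        {
            "id": "review",
            "kind": "human",                      # human checkpoint
            "depends_on": ["classify"],
        },
        {
            "id": "publish",
            "kind": "integration",                # external integration
            "depends_on": ["review"],
        },
    ],
}

def check_step_references(doc: dict) -> list[str]:
    """Return an error per depends_on entry that names an unknown step."""
    ids = {step["id"] for step in doc["steps"]}
    return [
        f"{step['id']}: unknown dependency {dep!r}"
        for step in doc["steps"]
        for dep in step.get("depends_on", [])
        if dep not in ids
    ]

print(check_step_references(content_moderation))  # [] when the graph is well-formed
```

A check like this is the payoff of publishing machine-readable examples alongside the narrative: tooling can verify the graph stays internally consistent as the spec evolves.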
10.2 Schema Definitions and JSON Examples
Reusable schema fragments prevent divergence across implementations. This annex aggregates canonical definitions for the Result Pattern, step metadata, connector descriptors, and audit events. Treat these artifacts as the shared dictionary that keeps different teams aligned on payload structure.
Recommendations:
- Publish schemas in an interchange format (JSON Schema Draft 2020-12, Protocol Buffers, or Zod AST export) with accompanying version markers.
- Include positive and negative sample payloads for each schema to guide contract testing.
- Note any optional fields and default behaviors so engines can validate partial payloads consistently.
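As an illustration of the positive/negative-sample recommendation, here is a hedged sketch of a Result Pattern envelope expressed as a JSON Schema Draft 2020-12 fragment, checked with a deliberately minimal stdlib validator (a real pipeline would use a full JSON Schema library). The field names `ok`, `value`, and `error` are assumptions for illustration, not the canonical definitions.

```python
# Hedged sketch: an assumed Result Pattern envelope as a JSON Schema
# Draft 2020-12 fragment, plus positive and negative sample payloads.
RESULT_SCHEMA = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "required": ["ok"],
    "properties": {
        "ok": {"type": "boolean"},    # success/failure discriminator
        "value": {},                  # payload when ok is true (any type)
        "error": {"type": "string"},  # message when ok is false
    },
}

def validate_result(payload: object) -> bool:
    """Tiny subset validator: required keys and simple property types only."""
    if not isinstance(payload, dict):
        return False
    if any(key not in payload for key in RESULT_SCHEMA["required"]):
        return False
    type_map = {"boolean": bool, "string": str}
    for key, rule in RESULT_SCHEMA["properties"].items():
        expected = type_map.get(rule.get("type"))
        if key in payload and expected and not isinstance(payload[key], expected):
            return False
    return True

# Positive sample: a well-formed success envelope.
assert validate_result({"ok": True, "value": {"score": 0.93}})
# Negative sample: missing the required "ok" discriminator.
assert not validate_result({"value": {"score": 0.93}})
```

Shipping both samples with the schema gives contract tests a ready-made fixture set: the positive case pins down what engines must accept, the negative case what they must reject.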
10.3 Glossary of Terms
A shared vocabulary reduces ambiguity across human and machine consumers. The glossary should cross-reference the terminology introduced in Section 1.2 and expand with domain-specific phrases encountered during implementation. Glossary updates are also the fastest way to onboard new stakeholders who were not part of the original drafting team.
Recommendations:
- Maintain alphabetical order and include aliases or deprecated terms.
- Link glossary entries to relevant clauses (e.g., “Result Pattern” → Section 3.3).
- Capture distinctions between similar concepts (e.g., step vs. node, workflow vs. profile) to support onboarding materials.
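The glossary recommendations above can themselves be made machine-checkable. The sketch below assumes a simple entry shape (`term`, `definition`, `aliases`, `clause`); none of these field names or definitions are normative, and the sample entries paraphrase the spec rather than quote it.

```python
# Hedged sketch of a machine-readable glossary. Entry shape and sample
# definitions are illustrative assumptions.
GLOSSARY = [
    {"term": "Result Pattern",
     "definition": "Uniform success/failure envelope for step outputs.",
     "aliases": ["result envelope"],
     "clause": "3.3"},  # link entry to the relevant clause
    {"term": "Step",
     "definition": "A single unit of work in a workflow graph.",
     "aliases": ["node"],  # commonly conflated term, kept as an alias
     "clause": "1.2"},
    {"term": "Workflow",
     "definition": "An ordered graph of steps; distinct from a profile.",
     "aliases": [],
     "clause": "1.2"},
]

def is_alphabetical(entries: list[dict]) -> bool:
    """CI check for the alphabetical-order recommendation."""
    terms = [entry["term"].lower() for entry in entries]
    return terms == sorted(terms)

assert is_alphabetical(GLOSSARY)
```

Running a check like `is_alphabetical` in CI turns the ordering recommendation from a convention into an enforced invariant, which keeps the glossary trustworthy as entries accumulate.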
10.4 References and Further Reading
Workflow designers benefit from curated references to standards, best practices, and research. This annex aggregates material that influenced the spec or helps implementers deepen their understanding. The list should feel like a syllabus, guiding teams toward reputable resources when questions extend beyond the scope of the spec.
Recommendations:
- Cite related standards (OpenTelemetry, W3C Trace Context, RFC 2119), AI safety guidelines, and governance frameworks.
- Organize references by theme (observability, ethics, model evaluation) for quick lookup.
- Update the list as the ecosystem evolves, ensuring links remain accessible.
While non-normative, the annexes form the knowledge base that keeps implementations aligned and equip teams to extend the specification responsibly. Maintaining them alongside the main clauses ensures the spec remains a living document rather than a static artifact.
Curious about this specification? Reach out at ai-workflow@crca.fyi