# Programming Language

Integration pipelines are fundamentally programs: they orchestrate tasks, route data, react to events, and coordinate work across multiple machines. Yet most pipeline tools rely on approaches that were never designed for this purpose.

## Why Existing Approaches Fall Short

### YAML-Based Configuration

Tools like GitHub Actions, GitLab CI, and CircleCI represent pipelines as YAML files describing sequences of shell commands. This works for simple cases but quickly hits structural limits:

- **No native data passing**: data between jobs must be serialized to disk, uploaded as artifacts, then downloaded and deserialized in the next job. There is no concept of a stream (see the sketch after this list).
- **Coarse parallelism**: jobs can run in parallel, but coordination within a job is sequential. Fine-grained concurrency requires workarounds.
- **Limited expressivity**: conditionals are awkward, loops are limited or absent, and anything beyond a linear sequence requires external scripting.
- **Flat observability**: a failed run produces flat logs. There is no structured trace of which step received which input; diagnosing failures means grepping through output rather than inspecting state.
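As a concrete illustration of the first point, here is a minimal GitHub Actions sketch of that artifact round-trip; the workflow name, job names, `build.sh`, `test.sh`, and the `dist/` directory are hypothetical.

```yaml
# Minimal sketch of a two-job workflow: the build output must be written to
# disk and uploaded as an artifact before the next job can download and use it.
name: build-and-test
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh                     # writes output into dist/
      - uses: actions/upload-artifact@v4    # persist dist/ to the artifact store
        with:
          name: dist
          path: dist/

  test:
    needs: build                            # coordination only at job granularity
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4  # fetch and unpack the same files again
        with:
          name: dist
          path: dist/
      - run: ./test.sh dist/
```

Every boundary between jobs repeats this upload/download round-trip; there is no way to hand the next job a live stream of bytes or events.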
### Imperative Scripting

Groovy (Jenkinsfile), Python, and similar scripting languages offer more flexibility, but introduce their own problems when used for pipeline orchestration:

- **Sequential mental model**: scripts execute line by line. Expressing concurrent execution, fan-out, or reactive behavior requires explicit thread management or async constructs not designed for this use case.
- **No native distribution**: distributing work across machines requires writing coordination code, SSH calls, job queues, or custom orchestration. The pipeline author manages distribution, not the language.
- **Manual stream handling**: passing a stream of bytes or events between machines means writing buffering, chunking, and error-recovery code by hand.
- **Opaque topology**: a script has no intrinsic structure that tooling can inspect. Debugging means reading execution logs, not understanding the shape of the program.

## Why Mélodium Fits

Mélodium was designed specifically for the kind of programs that CI/CD pipelines actually are: distributed, stream-oriented, reactive, and inspectable. Where YAML pipelines pass data through sequential artifact uploads, Mélodium treatments connect via typed streams and execute concurrently as soon as their inputs are available.

### Streams as First-Class Citizens

Data flows between treatments as typed streams, with no serialization round-trips between stages. Files, bytes, and structured values move directly from one treatment to the next, even across machine boundaries. The type system ensures that a `Stream<byte>` cannot be accidentally wired to an input expecting a `Block<string>`, catching connection errors before any execution begins.

### Native Distribution

A Mélodium program declares what runs where. The runtime handles spawning processes on remote machines, establishing connections between them, and coordinating execution. The pipeline author writes logic; Mélodium handles the distribution. There is no coordination code, no SSH scaffolding, and no job queue to manage.

The same Mélodium program can distribute treatments across machines on different infrastructures. The connection topology is declared in the program; the runtime handles the rest.

### Reactive Execution

Treatments execute when their inputs arrive, not according to a predefined schedule. Parallel execution, fan-out, and synchronization are expressed as connections in the program graph. There is no need to explicitly mark jobs as `parallel: true` or manage wait conditions in imperative code; concurrency is a natural consequence of the data flow topology.

### Static Validation

The entire program, including its types, connections, and configuration, is validated before any execution begins. Type mismatches, missing connections, and invalid configurations are caught at startup, not discovered halfway through a 20-minute build when a downstream job fails to receive its expected input.

### Deep Debuggability

Because the execution model has a well-defined topology of treatments connected by typed streams, Cadence.CI can observe inputs, outputs, and intermediate state at each node in the graph. This is what makes treatment-level debugging possible, something that is structurally impossible in script-based pipelines, where there is no concept of “the data flowing between step A and step B.”

## Summary

|               | YAML Config     | Imperative Script    | Mélodium                  |
| ------------- | --------------- | -------------------- | ------------------------- |
| Data passing  | Artifact upload | Manual serialization | Typed streams             |
| Parallelism   | Job-level       | Explicit async       | Topology-driven           |
| Distribution  | Not native      | Manual coordination  | Built into the language   |
| Validation    | At runtime      | At runtime           | At startup (static)       |
| Debuggability | Flat logs       | Flat logs            | Per-treatment I/O tracing |