Getting Started with Declarative Orchestration
See how Kestra can simplify your data pipelines—and scale beyond them.
Prefect orchestrates Python workflows for data engineering teams. Kestra orchestrates workflows across your entire stack, in any language. One serves Python-native teams; the other spans data pipelines, infrastructure, and business processes across languages and teams.
Universal orchestration platform built on declarative YAML and an API-first architecture. Orchestrate data pipelines, ETL jobs, and complex workflows in Python, SQL, Bash, R, or any language without forcing everyone into a single framework.
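As a sketch of what that looks like, a minimal Kestra flow is a single YAML file. The task type below (`io.kestra.plugin.core.log.Log`) is from Kestra's core plugin; the `id` and `namespace` values are placeholders to adapt to your setup:

```yaml
id: hello_world
namespace: company.team

tasks:
  - id: say_hello
    type: io.kestra.plugin.core.log.Log
    message: Hello from Kestra!
```

Paste a file like this into the UI editor and run it; no SDK, decorators, or packaging step involved.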
Python-native orchestration platform where workflows are defined as @flow-decorated functions. Add decorators to existing Python code and deploy to managed or self-hosted workers.
Prefect offers a quick path with flow.serve(), but production deployments need a running server, a work pool, and worker processes. Kestra's single Docker Compose command stands up everything, including the database and UI, in a format that's already production-shaped.
```shell
curl -o docker-compose.yml \
  https://raw.githubusercontent.com/kestra-io/kestra/develop/docker-compose.yml
docker compose up

# Open localhost:8080
# Pick a Blueprint, run it. Done.
```

Download the Docker Compose file, spin it up, and you're ready (database and config included). Open the UI, pick a Blueprint, run it. No Python environment, no workers to configure.
```shell
pip install prefect

# Quick path: flow.serve() runs a single flow
# Production path: server + work pool + worker
prefect server start &
prefect work-pool create my-pool --type process
prefect worker start --pool my-pool

# Now deploy and run your flow...
```

A basic flow runs quickly with flow.serve(), but production scheduling requires a Prefect server, a work pool, and a worker process. Moving from prototype to production means configuring additional infrastructure.
YAML is readable on day 1. Our docs are embedded in the UI for easy reference, the AI Copilot writes workflows for you, or start with our library of Blueprints. No Python environment required.
Requires a Python environment and @flow/@task decorators. Readable for Python developers. SQL analysts and ops engineers can interact with outputs but need Python to modify workflows.
Orchestrate across data pipelines, infrastructure operations, business processes, and customer workflows in one unified platform. Event-driven at its core, with no limits on event-driven automations even in open source.
Focused on Python data engineering workflows. Supports shell tasks for non-Python work, but the orchestration layer is Python. Event-driven automations are capped on Prefect Cloud's free tier.
| Capability | Kestra | Prefect |
|---|---|---|
| Primary use case | Universal workflow orchestration | Python workflow orchestration |
| Workflow definition | Declarative YAML | Python @flow/@task decorators |
| Languages supported | Agnostic (Python, R, Bash, Node.js, SQL & more) | Python-first (shell tasks available for Bash/R) |
| Event-driven workflows (OSS) | Unlimited in open source | Limited (10 automations on free tier) |
| Infrastructure to start | Single Docker Compose command | flow.serve() for basics; server + work pool + worker(s) for production |
| Visual workflow editor | Live DAG topology, updates as you type | Observability UI only (no editor) |
| Self-service for non-engineers | Kestra Apps | Not designed for this |
| Infrastructure automation | Native support | Possible via Python, not first-class |
| Business process automation | Native support | Possible via Python, not a primary use case |
Run Python, SQL, Bash, R, and Node.js in isolated containers. No framework wrappers, no language lock-in. Teams write in the language that fits the task, not the language the orchestrator requires.
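For illustration, here is what mixing languages in one flow might look like. The plugin types shown (`io.kestra.plugin.scripts.python.Script`, `io.kestra.plugin.scripts.shell.Commands`) are assumed from Kestra's script plugins; check the plugin catalog for the exact names in your version:

```yaml
id: polyglot_pipeline
namespace: company.team

tasks:
  # Python step runs in its own container
  - id: transform_in_python
    type: io.kestra.plugin.scripts.python.Script
    script: |
      rows = [1, 2, 3]
      print(f"processed {len(rows)} rows")

  # Bash step runs separately, no shared environment
  - id: archive_in_bash
    type: io.kestra.plugin.scripts.shell.Commands
    commands:
      - echo "archiving output"
```

Each task declares its own runtime, so a SQL analyst and a Python engineer can contribute steps to the same flow without sharing a dependency stack.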
Prefect Cloud's free tier caps event-driven automations at 10 per workspace, each configured through the UI. Kestra ships unlimited event-driven triggers in open source, all defined in YAML: webhooks, Kafka, file arrivals, database changes, API callbacks.
Tasks, triggers, schedules, and retry logic live in a single YAML file. No separate deployment YAML, no decorator layer, no worker pool configuration before you run your first workflow.
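A sketch of how those pieces sit together in one file. The trigger type, retry fields, and HTTP task shown here are assumed from Kestra's core plugin, and the URI is a placeholder:

```yaml
id: daily_report
namespace: company.team

tasks:
  - id: fetch
    type: io.kestra.plugin.core.http.Download
    uri: https://example.com/data.csv
    # Retry policy lives on the task itself
    retry:
      type: constant
      interval: PT1M
      maxAttempt: 3

triggers:
  # Schedule lives in the same file as the tasks it fires
  - id: every_morning
    type: io.kestra.plugin.core.trigger.Schedule
    cron: "0 9 * * *"
```

One file, one review, one deploy: the schedule, the work, and the failure handling version together.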
Find answers to common questions right here, and don't hesitate to Contact Us if you can't find what you're looking for.