What is Buster?
Buster is a platform for building autonomous AI agents that automate dbt and data engineering workflows. Agents are YAML configuration files you commit to your repository that describe tasks in natural language—like updating documentation, reviewing code, or adapting to schema changes. The agents execute these tasks automatically in response to triggers like pull requests, schedules, or data stack events.
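As a rough illustration of the idea, an agent config might look like the following sketch. The field names (`name`, `trigger`, `instructions`) are assumptions for illustration only, not Buster's actual schema:

```yaml
# Hypothetical agent config -- field names are illustrative, not Buster's actual schema
name: keep-docs-current
trigger:
  on: pull_request        # run whenever a PR opens or updates
instructions: |
  When a dbt model changes in this PR, update its model and column
  descriptions in the corresponding YAML file so the documentation
  matches the new logic.
```

The key point is the `instructions` block: it is plain English describing the outcome, not procedural code describing the steps.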
What problems does Buster solve?
Data engineering with dbt involves repetitive tasks that require context about your data and code: keeping documentation current as models change, reviewing PRs for best practices and breaking changes, adapting staging models when upstream schemas change, and monitoring data quality. These tasks are tedious to do manually but perfect for AI agents with access to your warehouse and repository.
How is this different from writing scripts?
Traditional scripts are procedural—you write explicit code defining every step. Buster agents are declarative—you describe what you want accomplished in natural language, and the agent figures out how to do it. Agents can read your code, query your warehouse, reason about patterns, and adapt to different scenarios without requiring you to handle every edge case.
Is it safe? What if an agent does something wrong?
Safety is built into Buster’s design:
- Sandboxed execution: Agents run in isolated containers with precisely controlled permissions
- Granular permissions: You define which files agents can access, what SQL they can run, and what actions they can take
- Approval gates: Require human approval for sensitive operations like PR creation or file deletion
- Complete audit logs: Every agent action is logged—files read, queries run, decisions made
- Version control: Agent configs are committed to git, so you can review changes and roll back if needed
- Test locally first: Use the CLI to test agents with mock data before deploying to production
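A permissions section in an agent config could be sketched along these lines; again, the field names are assumptions for illustration, not Buster's actual schema:

```yaml
# Hypothetical permissions section -- field names are illustrative
permissions:
  level: safe                  # start restrictive; widen later if needed
  files:
    read: ["models/**", "docs/**"]
    write: []                  # no file writes without approval
  sql: read_only               # SELECT only, no DDL/DML
  actions:
    create_pr: require_approval
```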
Start new agents with the safe or standard permission level and read-only database access.
What do I need to get started?
You need:
- A GitHub account with admin access to your repository
- A dbt project
- A data warehouse (Snowflake, BigQuery, Redshift, Databricks, Postgres, MySQL, ClickHouse, SQL Server, or Supabase)
- Warehouse credentials (read-only access is sufficient)
Do I need to know Python or programming?
No. Agents are configured using simple YAML files with natural language instructions. If you can write dbt YAML and describe a task in plain English, you can create agents. No Python, no complex scripting—just describe what you want done.
Can I test agents before deploying?
Yes. The Buster CLI lets you test agents locally in a sandbox environment. You can simulate pull requests, scheduled runs, and events with mock data to see exactly what the agent will do before deploying to production. All agents also support a dry-run mode that executes their logic without making any actual changes.
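If dry-run is toggled in the agent config, it might be a single flag like the sketch below; the field name is an assumption for illustration, not Buster's actual schema:

```yaml
# Hypothetical dry-run toggle -- field name is illustrative
dry_run: true   # log intended actions (PRs, file edits) without performing them
```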
What data warehouses do you support?
Buster integrates with:
- Snowflake
- BigQuery
- Amazon Redshift
- Databricks
- PostgreSQL
- MySQL
- ClickHouse
- SQL Server
- Supabase
How much does it cost?
Buster offers Cloud (managed service), CLI (local testing), and Self-Hosted (enterprise) deployment options. For pricing details, visit buster.so or contact support@buster.so.
What kinds of tasks can agents handle?
Common use cases include:
- Documentation: Auto-update model and column descriptions when code changes
- Code review: Check PRs for SQL anti-patterns, naming conventions, and missing tests
- Schema changes: Detect upstream schema changes and adapt staging models automatically
- Data quality: Monitor freshness, null rates, referential integrity, and anomalies
- Testing: Generate dbt tests for new or modified models
- Audits: Regular checks for documentation coverage, unused models, or compliance issues
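For example, a scheduled data-quality agent combining the trigger and task ideas above might be sketched as follows; the field names and cron trigger are assumptions for illustration, not Buster's actual schema:

```yaml
# Hypothetical scheduled data-quality agent -- field names are illustrative
name: nightly-quality-check
trigger:
  schedule: "0 6 * * *"       # every day at 06:00 UTC
instructions: |
  Check freshness and null rates for the marts models. If any model
  looks stale or a null rate spikes compared to recent history, open
  an issue summarizing what you found and which models are affected.
```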
How do agents understand my dbt project?
When you connect your repository and warehouse, Buster automatically profiles your dbt project: runs metadata queries, analyzes models, discovers patterns, and generates comprehensive documentation. This documentation gives agents deep understanding of your models, business logic, and data patterns—enabling them to make context-aware decisions.
Where do agents run?
Agents run in secure, isolated sandboxes managed by Buster (Cloud) or your infrastructure (Self-Hosted). Each agent execution:
- Clones your repository into a fresh container
- Connects to your warehouse using configured credentials
- Loads project context and documentation
- Executes with only the tools and permissions you’ve defined
- Takes actions like creating PRs or running queries
- Logs everything for audit trails
Can I run agents locally?
Yes. The Buster CLI runs agents locally on your machine for testing and development. Local execution connects to your warehouse but uses mock GitHub operations for safety. Once tested, deploy agents to Buster Cloud or your self-hosted instance for automated production runs.
What if I need help?
Check these resources:
- Quickstart - Build your first agent in 10 minutes
- How It Works - Understand the architecture and execution model
- Guides - Comprehensive configuration documentation
- Examples - Production agent examples
- Testing & Debugging - Troubleshooting common issues