Data Platform Engineer
Contract
Hands-on Data Platform Engineer to build and operate production-grade data pipelines and services. Work across ingestion, transformation, orchestration, and APIs; automate with Terraform; add observability; and write tests for reliability. Azure preferred; strong AWS acceptable. Clear communication and prudent use of AI coding tools expected.
About the Role
You'll build and operate pipelines and services, automate environments with Terraform, instrument observability, and maintain reliability through testing. Azure preferred; strong AWS acceptable. You'll communicate clearly in both async and live settings and use AI coding tools effectively without over-reliance.
What You'll Do
- Data pipelines: Design/build/operate batch and change-driven pipelines; schedule/orchestrate jobs; handle schema evolution and failures.
- Transformations & modeling: Implement clean, tested merges/transformations; produce analytics- and product-friendly models; tune SQL.
- APIs & integration: Build/collaborate on APIs; define contracts; manage auth (OAuth2/OIDC/JWT), idempotency, validation, auditing.
- Infrastructure & DevOps: Provision/manage cloud via Terraform; implement CI/CD; write shell scripts.
- Observability & reliability: Instrument logs/metrics/traces; define alerts/runbooks; track SLOs and prevent regressions.
- Security basics: Least-privilege, secrets management, encryption in transit/at rest; partner on compliance.
- Quality & tests: Unit/integration/data-quality tests (including SQL).
- Collaboration & docs: Clear comms; concise docs (contracts, mappings, runbooks); cross-functional partnership.
What We're Looking For
- 4+ years in Data Engineering / Backend Engineering / DevOps.
- Strong SQL and RDBMS fundamentals (Postgres, SQL Server, MySQL).
- Python for data/services; shell scripting (bash preferred; PowerShell acceptable).
- Terraform and IaC; CI/CD with automated testing and promotion.
- Cloud: Azure preferred or strong AWS background.
- Data integration: Ingestion (batch + change-driven), orchestration, schema/versioning, resilient retries/replays.
- Workflow orchestration: Prefect preferred; Airflow/Dagster acceptable.
- APIs & auth: REST, pagination, validation, rate limiting, OAuth2/OIDC/JWT.
- Observability: Logs/metrics/traces and actionable alerting.
- Testing mindset: Code + SQL tests; fixtures; CI pipelines.
- Communication: Clear written/verbal; async + live.
- AI tooling: Use Claude Code (and similar) with judgment.
Preferred Qualifications
- Regulatory/compliance awareness.
- Analytics experience (dim/fact, BI).
- Full-stack exposure for simple ops views.
- Data platform architecture concepts (storage/layout, catalog/lineage, governance).
- Familiarity with streaming/queues and columnar formats (Parquet).
- Experience with cloud monitoring stacks.
Engagement Details
- Duration: 10+ months (extension/full-time possible).
- Schedule: Flexible, time-zone overlap for key meetings.
- Compensation: Competitive, experience-based.
- Mode: Remote; virtual collaboration and on-call windows as agreed.
Benefits & Perks
- Competitive contract pay.
- Remote-first flexibility.
- Hands-on ownership of modern data stacks.
- Learning budget and mentorship opportunities.
- Exposure to real-world reliability and observability practices.
Ready to Join Our Team?
We're excited to hear from you! Click the button below to apply via email and take the next step in your career with Black Dog Labs.
Note: Clicking "Apply via Email" will open your default email client with a pre-filled application template. Please attach your resume/CV and fill in the requested details before sending.
If your settings aren't configured correctly for the button above, drop us a line at careers@blackdoglabs.io.