We’re seeking a DevOps professional with at least five years of experience to architect, evolve, and fine-tune highly scalable data systems, with a strong emphasis on AWS-based environments. This position is perfect for someone who enjoys tackling intricate engineering problems, collaborating across multiple teams, and contributing to real-time, high-throughput data solutions. You’ll partner with engineering, product, and data teams to keep mission-critical platforms running reliably and efficiently.
The position includes alternating morning/day shifts and participation in an on-call rotation.
Location: Remote (Serbia)
Key Responsibilities
Design, support, and enhance resilient data pipelines and automated ETL processes
Set up and administer Elasticsearch or OpenSearch clusters to power complex search and indexing needs
Build cloud-native infrastructure for secure data ingestion, processing, and storage using AWS or comparable platforms
Apply DevOps methodologies to streamline CI/CD workflows and deployment operations
Monitor system performance to ensure the stability, availability, and smooth operation of data-centric platforms
Work closely with technical and analytical teams to embed data systems into core products and strategic decisions
Help drive initiatives related to data quality, governance, and security standards
Create automation for infrastructure and data processes using Python and IaC technologies
Requirements
5+ years working in DevOps, data engineering, or data operations roles
Demonstrated expertise with Elasticsearch or OpenSearch, including configuration, performance tuning, and query optimization
Strong background with DevOps tooling (e.g., Terraform, Jenkins, GitHub Actions)
Advanced Python skills for automation and scripting
Solid SQL knowledge and experience with structured and semi-structured datasets (Postgres, Athena, Parquet, Iceberg)