About the company:
The company isn’t just another high-frequency trading (HFT) firm. It’s a team of like-minded innovators: creative thinkers, collaborative problem-solvers, and bold risk-takers. The team maintains a constant, 24/7 presence across the world’s leading cryptocurrency exchanges. Their system reacts to market events in real time and delivers 99% uptime. More than one million trades are executed daily, powered by a network of 300+ servers deployed in strategically chosen locations around the globe.
But behind the tech are real people—a talented, diverse team you’d be proud to join. Hailing from elite tech and finance firms like Google, Yandex, and Raiffeisen Bank, they’re engineers and analysts who code at the edge, master markets, and thrive under pressure.
Key Responsibilities:
- Build, monitor, and scale robust data pipelines to support internal analytics and ML projects.
- Own operational data flows including loading into our data warehouse, data pipeline reliability, and incident resolution.
- Help define and implement data quality, observability, and governance standards.
- Collaborate with engineering and business teams to ensure data systems meet both technical and operational needs.
- Proactively optimize performance across SQL workloads, workflows, and infrastructure components.
Required Skills & Experience:
✅ Must Have:
- 5+ years of professional experience in data engineering or backend infrastructure;
- Strong proficiency in Python, including object-oriented programming and writing unit tests;
- Solid experience with SQL, including complex joins, window functions, and query performance optimization;
- Hands-on experience with ClickHouse (especially the MergeTree engine family) or similar columnar databases;
- Familiarity with workflow orchestration tools such as Argo Workflows, Apache Airflow, or Kubeflow;
- Understanding of Kafka architecture, including topics, partitions, producers, and consumers;
- Experience working with CI/CD pipelines (e.g., GitLab CI, ArgoCD, GitHub Actions);
- Practical knowledge of monitoring and BI tools like Grafana for building technical and business dashboards.
🚀 Will Be a Plus:
- Experience with AWS services such as S3, EKS, and RDS;
- Familiarity with Kubernetes and Helm for deployment and scaling;
- Exposure to data quality and observability frameworks;
- Experience supporting ML infrastructure, including feature pipelines and training data workflows.
What's in it for you?
- Competitive compensation that reflects your expertise;
- Location: remote;
- Relocation support: if you’d prefer to work from the Dubai office, the company will help you move and settle in;
- Flexible working hours and a healthy work-life balance;
- The opportunity to work in a thriving, multicultural, fun environment in one of the world’s fastest-growing industries;
- Corporate workations: the team regularly goes on corporate trips to unique locations all over the world to work, explore the local culture, and get to know each other better;
- Chance to work across the full data lifecycle, from ingestion to analytics, with a high-performing cross-functional team;
- Direct involvement in building ML-ready infrastructure.