Job Description

Summary

We’re looking for a Senior Software Engineer to join our Data Cloud team. This group accelerates innovation and helps unlock new business lines by empowering anyone at Chainalysis to quickly and reliably discover, access, analyze, and build on top of any and all data. You’ll be a key player in creating and optimizing our data platform and data processing systems. If you’re passionate about building scalable platforms that enable next-gen real-time streaming data applications, establishing frameworks for data governance and data engineering best practices, and deploying cloud infrastructure at scale, we want you to join our talented and growing team!

In this role, you’ll:

  1. Design, develop, and optimize high-performance, scalable data platforms, with a strong focus on real-time streaming data processing, large data volumes, and cloud infrastructure
  2. Build seamless integrations between Data Cloud and various relational and NoSQL OLTP databases
  3. Design and build frameworks and abstractions to accelerate the development of data pipelines while embedding data engineering best practices
  4. Deploy cloud infrastructure at scale, implement and maintain infrastructure automation and self-service, and create robust CI/CD pipelines
  5. Establish and maintain observability, security, and data governance solutions to ensure high quality, efficiency, and reliability of data pipelines
  6. Help define the technical vision of the team/org and articulate how our data platform and architecture could evolve

We’re looking for candidates who have:

  1. 6+ years of experience as a Data Platform, Software, or Data Infrastructure Engineer, with hands-on expertise in building and maintaining cloud-based data platforms at large scale
  2. Passion for leading and contributing to the technical vision of the team/org, strong ownership of mission-critical systems, and dedication to honing their craft while mentoring others
  3. Expertise in building and maintaining streaming data pipelines using Apache Flink, as well as its underlying infrastructure and deployment
  4. Solid experience with AWS services, a good understanding of cloud architecture, and proficiency with Terraform for provisioning and managing cloud infrastructure
  5. Deep understanding of modern data lakehouse architectures and their ecosystem, including Kafka, Flink, Spark, Databricks, Snowflake, DBT, Airflow, Debezium, Delta/Iceberg/Paimon, StarRocks, and Trino, along with proficiency in Python/Java and SQL
  6. Experience with networking and security concepts within AWS, including VPCs, subnets, routing, security groups, IAM, etc.

Nice-to-have experience:

  1. Exposure to or interest in the cryptocurrency technology ecosystem
  2. Experience working with different blockchain technologies

Technologies we use:

  1. Data Lakehouse: Kafka, Flink, Spark, Databricks, DBT, Debezium
  2. AWS Services: MSK, EC2, VPC, IAM, S3, SQS, Managed Flink, EKS, etc.

Skills
  • AWS
  • Database Management
  • Development
  • Java
  • Leadership
  • Python
  • Software Engineering
  • SQL
© 2025 cryptojobs.com. All rights reserved.