Job Description

Summary

As a Data Engineer II at JPMorgan Chase within the JPM - US Wealth Management Tech, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

  1. Supports review of controls to ensure sufficient protection of enterprise data
  2. Advises on and makes custom configuration changes in one to two tools to generate a product at the business's or customer's request; also updates logical or physical data models based on new use cases
  3. Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
  4. Adds to the team culture of diversity, equity, inclusion, and respect, and creates secure and high-quality production code.
  5. Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development; also gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
  6. Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.

Required qualifications, capabilities, and skills

  1. Formal training or certification on software engineering concepts and 2+ years applied experience
  2. Experience across the data lifecycle with Spark-based frameworks for end-to-end ETL, ELT, and reporting solutions using key components such as Spark SQL and Spark Streaming.
  3. Strong hands-on working experience with the Big Data stack, including Spark and Python (Pandas, Spark SQL).
  4. Good understanding of relational (RDBMS) and NoSQL databases, as well as Linux/UNIX.
  5. Strong knowledge of multi-threading and high-volume batch processing.
  6. Strong performance-tuning skills for Python and Spark, along with experience using the Autosys or Control-M scheduler.
  7. Cloud implementation experience with AWS, including:
       • AWS data services: proficiency in Lake Formation, Glue ETL or EMR, S3, Glue Catalog, Athena, Kinesis or MSK, and Airflow or Lambda + Step Functions + EventBridge
       • Data de/serialization: expertise in at least two of the following formats: Parquet, Avro, Fixed Width
       • AWS data security: good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager

Preferred qualifications, capabilities, and skills

  1. Proficiency in automation and continuous delivery methods. 
  2. Proficient in all aspects of the Software Development Life Cycle.
  3. Solid understanding of agile methodologies, such as CI/CD, application resiliency, and security.

Skills
  • AWS
  • Database Management
  • Development
  • Python
  • SQL