Data Engineer-Data Platforms-AWS

Posted 27 days ago · 2026/05/30
IT Services

Job description

Introduction

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
  • As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will build data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
  • Responsibilities:
  • Build data pipelines to ingest, process, and transform data from files, streams, and databases; process the data with Spark, Python, PySpark, Scala, and Hive, and with HBase or other NoSQL databases, on cloud data platforms (AWS) or HDFS (a minimal sketch follows this list).
  • Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies.
  • Develop streaming pipelines.
  • Work with Hadoop and AWS ecosystem components to implement scalable solutions that keep pace with ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and other cloud services.
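
The pipeline responsibility above follows a standard ingest-transform-write pattern on a Spark data platform. Below is a minimal PySpark sketch of such a batch job; the bucket, file, and column names (example-bucket, orders.csv, amount, status, order_date) are illustrative placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Ingest: read a raw CSV file (a stream or JDBC source would be read similarly).
# The S3 path and schema are hypothetical placeholders.
orders = spark.read.option("header", "true").csv("s3://example-bucket/raw/orders.csv")

# Transform: cast types, keep only completed orders, and derive a daily aggregate.
daily_totals = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write: persist the curated result as partitioned Parquet on the data platform.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_totals/"
)

spark.stop()
```

On AWS, a job like this would typically be packaged and submitted to EMR or Glue rather than run locally, with Kafka or Kinesis feeding the streaming variant of the same transformation.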


Required education
Bachelor's Degree

Preferred education
Master's Degree

Required technical and professional expertise
  • Total of 5-8+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering
  • At least 5 years of experience in big data technologies, with extensive data engineering experience in Spark using Python or Scala
  • At least 3 years of experience with cloud data platforms on AWS
  • Experience with AWS EMR, AWS Glue, Databricks, Amazon Redshift, and DynamoDB


Preferred technical and professional experience
  • Certifications in AWS and Databricks, or Cloudera Spark developer certification


Years of Experience:
5-8




