Data Engineer-Data Platforms-AWS

5 days ago · 2026/05/30
10-49 Employees · IT Services

Job description

Introduction

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities

* Design, implement, and manage large-scale data processing systems using Big Data Technologies such as Hadoop, Apache Spark, and Hive.
* Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL.
* Use scheduling tools such as Airflow, Control-M, or shell scripting to automate data pipelines and workflows.
* Write efficient code in Python and/or Scala for data manipulation and processing tasks.
* Leverage AWS services, including S3, Redshift, and EMR, to create scalable, cost-effective data storage and processing solutions.

Required education
Bachelor's Degree

Preferred education
Master's Degree

Required technical and professional expertise

* Proficiency in Big Data Technologies, including Hadoop, Apache Spark, and Hive.
* Strong understanding of AWS services, particularly S3, Redshift, and EMR.
* Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization.
* Experience using scheduling tools such as Airflow, Control-M, or shell scripting.
* Practical experience in the Python and/or Scala programming languages.

Preferred technical and professional experience

* Knowledge of Core Java (1.8 preferred) is highly desired.
* Excellent communication skills and a willingness to learn.
* Solid experience in Linux and shell scripting.
* Experience with PySpark or Spark is nice to have.
* Familiarity with DevOps tools such as Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have.
* Experience in data modelling, data quality assurance, and load assurance is nice to have.

Years of Experience:
0-7

This job post has been translated by AI and may contain minor differences or errors.
