
Staff/Senior Data Engineer - ETL/AWS/Python/Apache

29 days ago 2026/05/13
Other Business Support Services

Job description

Company Overview: 
10Pearls is an end-to-end digital technology services partner that helps businesses use technology as a competitive advantage. We help our customers digitalize their existing business, build innovative new products, and augment their existing teams with high-performing team members. Our broad expertise in product management, user experience/design, cloud architecture, software development, data insights and intelligence, cybersecurity, emerging tech, and quality assurance ensures that we deliver solutions that address business needs. 10Pearls is proud to have a diverse clientele, including large enterprises, SMBs, and high-growth startups. We work with clients across industries, including healthcare/life sciences, education, energy, communications/media, financial services, and hi-tech. Our many long-term, successful partnerships are built on trust, integrity, and successful delivery and execution.
Role 
We are seeking a highly skilled and experienced Data Engineer to join our team. The ideal candidate will have 5+ years of experience and a strong background in Python, SQL, data pipelines, data modeling, Apache Spark, and Snowflake. The role involves designing, building, and maintaining scalable data solutions that support analytics and business decision-making.

Responsibilities 
• Develop, construct, test, and maintain production-grade, scalable data pipelines
• Design and implement robust data models for analytics and reporting
• Assemble large, complex data sets that meet functional and non-functional business requirements
• Improve data reliability, quality, and performance across pipelines
• Prepare curated datasets for analytics and advanced modeling use cases
• Identify opportunities to automate data workflows and processes
• Build and manage data workflows using Apache Airflow
• Optimize data processing using Apache Spark (batch and/or streaming workloads)
• Collaborate with Product, Analytics, and Engineering teams to understand evolving business requirements and deliver scalable data solutions
• Monitor pipeline health and implement logging, alerting, data quality checks, and performance tuning
• Apply best practices for version control, CI/CD, and deployment using Git and Docker
• Design and implement cloud-native data solutions on AWS or GCP, following cloud-platform best practices
• Ensure data security, governance, access control, and schema evolution best practices are followed
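The pipeline work described above can be pictured as a minimal extract–transform–load flow with a data-quality check. This is only an illustrative sketch, not the company's actual stack: the sample data, table, and column names are hypothetical, and it uses only the Python standard library (in production this would run under an orchestrator such as Airflow against Spark or a warehouse):

```python
import csv
import io
import sqlite3

# Hypothetical raw extract; in practice this would come from a source system
RAW = "order_id,amount\n1,19.99\n2,\n3,5.00\n"

def extract(text):
    """Parse CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Data-quality check: drop rows with a missing amount, then coerce types."""
    clean = [r for r in rows if r["amount"]]
    return [(int(r["order_id"]), float(r["amount"])) for r in clean]

def load(rows, conn):
    """Load typed rows into a target table."""
    conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
count, total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(count, round(total, 2))  # 2 24.99
```

The quality gate sits in `transform`, before the load step, so bad records never reach the target table — the same placement the monitoring and data-quality responsibilities above imply.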

Requirements 
• Bachelor’s degree in Computer Science, Engineering, or a related field
• Minimum of 5 years of hands-on experience in data engineering, building production data pipelines
• Strong hands-on experience with Python and SQL
• Proven experience building ELT/ETL pipelines at scale
• Solid understanding of data modeling concepts, including dimensional, star, and analytical schemas
• Hands-on experience with Apache Spark / PySpark for large-scale data processing
• Experience with workflow orchestration tools such as Apache Airflow
• Experience with cloud data warehouses such as Snowflake or BigQuery
• Hands-on experience building data engineering solutions on cloud platforms (AWS or GCP)
• Experience using Docker for containerized applications
• Familiarity with CI/CD pipelines and modern DevOps practices for data platforms
• Strong problem-solving skills and attention to detail
• Strong communication skills
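The dimensional-modeling requirement above refers to star schemas: a central fact table keyed to surrounding dimension tables. A minimal sketch, using SQLite in place of a warehouse like Snowflake or BigQuery — all table names and sample data here are hypothetical:

```python
import sqlite3

# Hypothetical star schema: one fact table referencing two dimension tables
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales  (
    date_id    INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    amount     REAL
);
INSERT INTO dim_date    VALUES (1, '2024-01-01'), (2, '2024-01-02');
INSERT INTO dim_product VALUES (10, 'widget'), (20, 'gadget');
INSERT INTO fact_sales  VALUES (1, 10, 5.0), (1, 20, 7.5), (2, 10, 2.5);
""")

# Typical analytical query: aggregate the fact table by a dimension attribute
rows = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)  # [('gadget', 7.5), ('widget', 7.5)]
```

Keeping measures in the narrow fact table and descriptive attributes in the dimensions is what makes joins and aggregations like this one cheap at scale.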



This job post has been translated by AI and may contain minor differences or errors.
