Lead Specialist, Integrated Data, AI & Analytics Platform II

Posted 2026/06/05
Other Business Support Services

Job description

Job purpose


The Lead Specialist, Data Engineering is responsible for designing, developing, and optimizing scalable data engineering solutions across both OT (Operational Technology) and IT environments. The role ensures that data products, enterprise data layers, pipelines, and integration flows are consistent, governed, high-performing, and aligned with enterprise architecture.


This position operates as a senior technical specialist providing guidance across multidisciplinary engineering teams and ensuring cohesive OT–IT data ingestion, transformation, and consumption patterns.


Key Accountabilities


Support Enterprise Architecture Direction


  1. Design, build, and optimize ingestion, transformation, and orchestration pipelines across OT and IT data sources.
  2. Implement scalable ELT/ETL workflows supporting enterprise analytics, data products, AI/ML, and operational use cases.
  3. Engineer unified data flows across remote mining sites, disconnected OT environments, manufacturing systems, edge computing devices, and IT cloud platforms.
  4. Embed metadata capture, lineage tracking, and quality checks into data pipelines to support data governance and cataloging requirements (a minimal sketch follows this list).
  5. Translate architectural models into physical engineering implementations.
  6. Partner with business domains to understand data requirements and ensure delivered pipelines meet operational and analytical needs.
  7. Address OT-specific latency, connectivity, protocol variability, and historian/SCADA integration considerations in pipeline designs.
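
For illustration, a minimal Python sketch of accountability 4: a pipeline step that records a row-count quality check and source-to-target lineage alongside its output. All names here (run_step, LineageRecord, the sample readings) are hypothetical and tied to no specific Ma'aden or Azure toolchain.

```python
# Hypothetical sketch: embedding metadata capture, lineage tracking, and a
# quality check into a single pipeline step. Illustrative names only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class LineageRecord:
    step: str
    source: str
    target: str
    rows_in: int
    rows_out: int
    run_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def run_step(name: str, source: str, target: str,
             transform: Callable[[list], list], rows: list) -> tuple:
    out = transform(rows)
    # Quality check: a transform that silently drops every row fails the run.
    if rows and not out:
        raise ValueError(f"{name}: all {len(rows)} input rows were dropped")
    # The lineage record is what a catalog/governance tool would ingest.
    return out, LineageRecord(name, source, target, len(rows), len(out))

# Usage: drop null sensor values from a fake historian extract.
readings = [{"tag": "P-101", "value": 7.2}, {"tag": "P-102", "value": None}]
clean, lineage = run_step(
    "drop_null_values", "historian.extract", "lake.bronze.readings",
    lambda rs: [r for r in rs if r["value"] is not None], readings)
print(clean)
print(lineage)
```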


Minimum Qualification, Experience and Competencies


Minimum Qualification


  • Bachelor’s degree in Computer Science/Engineering, Data Engineering, Information Systems, or a related field.
  • Master’s degree preferred (Data Architecture, Software Engineering, or AI/Analytics).


Minimum Experience


  • 8+ years of hands-on experience in data engineering across modern architectures (cloud, data lake and warehouse, OT/IT).
  • Strong experience building data pipelines integrating OT systems (SCADA, DCS, historians) and IT data sources such as ERP Fusion.
  • Practical experience with Azure, modern data lakehouse patterns, virtualization, and workflow orchestration (a small orchestration sketch follows this list).
  • Exposure to DevOps pipelines, containerization, data quality, and metadata management systems.
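
As a rough illustration of the workflow-orchestration concept referenced above (the concept only, not the API of any particular tool such as Data Factory or Airflow), a dependency-aware runner fits in a few lines of Python; the task names are hypothetical.

```python
# Hypothetical sketch of workflow orchestration: execute tasks in dependency
# order, the core idea behind tools like Data Factory or Airflow.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def extract():   print("extract: pull OT and IT source data")
def transform(): print("transform: conform data to the enterprise layer")
def publish():   print("publish: expose a governed data product")

tasks = {"extract": extract, "transform": transform, "publish": publish}
deps = {"transform": {"extract"}, "publish": {"transform"}}  # task -> prerequisites

for name in TopologicalSorter(deps).static_order():  # extract, transform, publish
    tasks[name]()
```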

Maaden High Performance Competencies


  • Execution Excellence: Delivers high-quality pipelines and engineering solutions with strong reliability.
  • Collaboration & Influence: Works effectively across OT, IT, and business domains.
  • Problem Solving: Tackles complex integration challenges across industrial and corporate environments.
  • Adaptability & Innovation: Applies modern engineering techniques and identifies improvements in architecture/tooling.
  • Technical Leadership: Guides engineering teams and junior specialists.


Skills


  • Advanced ELT/ETL development (e.g., Data Factory, Informatica, custom pipelines).
  • Strong programming in Python, SQL, and data transformation frameworks.
  • Experience with time-series, historian, and sensor data coming from OT.
  • Deep understanding of streaming pipelines, micro-batching, and real-time architectures (a micro-batching sketch follows this list).
  • Strong knowledge of cloud platforms (Azure preferred), data lakehouse, Delta/Parquet formats, API integration, and orchestration tools.
  • Familiarity with DevOps practices, CI/CD, and Git-based workflows.
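
To make the micro-batching idea concrete, a hedged sketch: buffer streaming sensor readings and flush them to Parquet in fixed-size batches, a common lakehouse landing pattern. It assumes pandas with a Parquet engine (pyarrow or fastparquet) installed; the tag names, file paths, and batch size are hypothetical.

```python
# Hypothetical micro-batching sketch: buffer streaming OT sensor readings and
# flush fixed-size batches to Parquet files. Real pipelines would also batch
# by time window and write to cloud storage rather than the local disk.
import pandas as pd

def micro_batch_writer(stream, batch_size=4):
    buffer, batch_no = [], 0
    for reading in stream:  # reading: dict like {"tag": ..., "ts": ..., "value": ...}
        buffer.append(reading)
        if len(buffer) >= batch_size:
            pd.DataFrame(buffer).to_parquet(f"readings_batch_{batch_no:05d}.parquet")
            buffer, batch_no = [], batch_no + 1
    if buffer:  # flush the partial final batch
        pd.DataFrame(buffer).to_parquet(f"readings_batch_{batch_no:05d}.parquet")

# Usage with a fake sensor stream of ten readings:
fake_stream = ({"tag": "TT-101", "ts": i, "value": 20.0 + 0.1 * i} for i in range(10))
micro_batch_writer(fake_stream)
```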


