The more applications you submit, the better your chances of landing a job!
Here is a snapshot of women job seekers' activity over the past month:
Number of opportunities browsed
Number of applications submitted
Keep browsing and applying to increase your chances of landing a job!
Are you looking for employers with a proven track record of supporting and empowering women?
Click here to discover the opportunities available now!
We invite you to take part in a survey designed to help researchers better understand how to connect women job seekers with the jobs they are looking for.
Would you like to participate?
If you are selected, we will contact you by email with the details and instructions for participating.
You will receive $7 for completing the survey.
Project description
We are seeking a highly skilled Databricks Platform Engineer with strong experience in data engineering. The candidate will have a deep understanding of both data platforms and software engineering, enabling them to effectively integrate and operate the platform within a broader IT ecosystem. This role requires a hands-on individual contributor who takes full ownership of deliverables end to end, including design, development, testing, deployment, and ongoing support.

Responsibilities
- Manage and optimize the Databricks data platform, including workspace setup, cluster policies, job orchestration, Unity Catalog, cost controls, and multi-tenancy.
- Design, write, and maintain APIs used for platform automation, serverless workflows, deployment pipelines, release management, and repository management.
- Ensure high availability, security, and performance of data systems, including access control, secrets management, RBAC, monitoring, alerting, RLS, incident handling, and performance tuning.
- Provide valuable insights into Databricks platform usage, including cost attribution, usage analytics, workload patterns, and telemetry.
- Stay current with new Databricks features, including serverless compute, Declarative Pipelines, Agents, Lakebase, etc.
- Design and maintain system libraries (Python) used in ETL pipelines and platform governance (Databricks).
- Optimize ETL processes: enhance and tune existing ETL processes for better performance, scalability, and reliability.

Skills

Must have
- Minimum 10 years of experience in IT/data.
- Minimum 3 years of experience as a Databricks data platform engineer.
- 3+ years of experience designing, writing, and maintaining APIs used for platform automation, serverless workflows, deployment pipelines, release management, and repository management.
- Bachelor's degree in IT or a related field.
- Infrastructure & cloud: Azure, AWS (expertise in storage, networking, compute).
- Programming: proficiency in PySpark for distributed computing; minimum 4 years of Python experience for ETL development.
- SQL: expertise in writing and optimizing SQL queries, preferably with experience in databases such as PostgreSQL, MySQL, Oracle, or Snowflake.
- Data warehousing: experience with data warehousing concepts and the Databricks platform.
- ETL tools: familiarity with ETL tools and processes.
- Data modelling: experience with dimensional modelling, normalization/denormalization, and schema design.
- Version control: proficiency with version control tools like Git to manage codebases and collaborate on development.
- Data pipeline monitoring: familiarity with monitoring tools (e.g., Prometheus, Grafana, or custom monitoring scripts) to track pipeline performance.
- Data quality tools: experience implementing data validation, cleaning, and quality frameworks, ideally Monte Carlo.

Nice to have
- Containerization & orchestration: Docker, Kubernetes.
- Infrastructure as Code (IaC): Terraform.
- Understanding of the investment data domain (desired).

Other languages
English: C1 Advanced

Seniority
Lead
Your application for this job will not be considered and will be removed from the employer's inbox.