
Python + AWS + PySpark (6+ Years) - Data Engineering

Posted 4 days ago | Expires 2026/06/07
Other Business Support Services

Job description

Senior Data Migration Engineer


About Oracle FSGIU - Finergy:


The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.


Job Summary:


We are seeking a skilled Senior Data Migration Engineer with expertise in AWS, Databricks, Python, PySpark, and SQL to lead and execute complex data migration projects. The ideal candidate will design, develop, and implement data migration solutions to move large volumes of data from legacy systems to modern cloud-based platforms, ensuring data integrity, accuracy, and minimal downtime.


Job Responsibilities


Software Development:


  • Design, develop, test, and deploy high-performance, scalable data solutions using Python, PySpark, and SQL (a minimal sketch follows this list).
  • Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
  • Implement efficient and maintainable code using best practices and coding standards.
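
To make that expectation concrete, here is a minimal, illustrative PySpark sketch of a testable transformation, not part of the original posting; the table and column names (account_id, currency) are hypothetical:

    from pyspark.sql import SparkSession, DataFrame
    from pyspark.sql import functions as F

    def normalize_accounts(df: DataFrame) -> DataFrame:
        # Kept as a pure function over a DataFrame so it can be
        # unit-tested without a live cluster: trim identifiers,
        # standardize currency codes, and drop duplicate accounts.
        return (
            df.withColumn("account_id", F.trim(F.col("account_id")))
              .withColumn("currency", F.upper(F.col("currency")))
              .dropDuplicates(["account_id"])
        )

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("normalize-accounts").getOrCreate()
        sample = spark.createDataFrame(
            [(" A-100 ", "usd", 250.0), ("A-101", "eur", 99.5), (" A-100 ", "usd", 250.0)],
            ["account_id", "currency", "balance"],
        )
        normalize_accounts(sample).show()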

AWS & Databricks Implementation:


  • Work with the Databricks platform for big data processing and analytics.
  • Develop and maintain ETL processes using Databricks notebooks.
  • Implement and optimize data pipelines for data transformation and integration.
  • Utilize AWS services (e.g., S3, Glue, Redshift, Lambda) and Databricks to build and optimize data migration pipelines.
  • Leverage PySpark for large-scale data processing and transformation tasks, as sketched below.
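
As an illustration of the kind of migration step described above, a hedged sketch of an S3-to-Delta load; the bucket names and paths are hypothetical, and running it assumes a Spark environment with S3 access and Delta Lake available (as on Databricks):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical locations; substitute real buckets and prefixes.
    SOURCE = "s3://legacy-exports/customers/2024-06/"
    TARGET = "s3://lakehouse/bronze/customers/"

    spark = SparkSession.builder.appName("customer-migration").getOrCreate()

    raw = spark.read.option("header", "true").csv(SOURCE)

    cleaned = (
        raw.filter(F.col("customer_id").isNotNull())        # basic integrity gate
           .withColumn("loaded_at", F.current_timestamp())  # audit column
    )

    # Delta format keeps the load transactional and auditable on Databricks.
    cleaned.write.format("delta").mode("overwrite").save(TARGET)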

Continuous Learning:


  • Stay updated on the latest industry trends, tools, and technologies related to Python, SQL, and Databricks.
  • Share knowledge with the team and contribute to a culture of continuous improvement.

SQL Database Management:


  • Utilize expertise in SQL to design, optimize, and maintain relational databases.
  • Write complex SQL queries for data retrieval, manipulation, and analysis; see the example after this list.
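
For example, a reconciliation-style query of the sort a migration engineer might write, expressed here through Spark SQL to stay in the same stack; the table names (legacy_staging, lake_target) are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("migration-recon").getOrCreate()

    # Flags batches where the migrated row count does not match the source.
    mismatches = spark.sql("""
        SELECT s.batch_id,
               COUNT(DISTINCT s.record_id) AS source_rows,
               COUNT(DISTINCT t.record_id) AS migrated_rows
        FROM legacy_staging s
        LEFT JOIN lake_target t
               ON t.record_id = s.record_id
        GROUP BY s.batch_id
        HAVING COUNT(DISTINCT s.record_id) <> COUNT(DISTINCT t.record_id)
    """)
    mismatches.show()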

Qualifications & Skills:


  • Education: Bachelor’s degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees are a plus.
  • 6 to 10 years of experience with Databricks and big data frameworks.
  • Proficiency in AWS services and data migration.
  • Experience with Unity Catalog.
  • Familiarity with batch and real-time processing.
  • Strong data engineering skills in Python, PySpark, and SQL.
  • Certifications: AWS Certified Solutions Architect, Databricks Certified Professional, or similar are a plus.

Soft Skills:


  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration abilities.
  • Ability to work in a fast-paced, agile environment.







As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity.


We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.


Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.


We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.


Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law. 



Career Level: IC3

