Job Description
Software Requirements:
- Proficiency in Python programming with a strong understanding of its ecosystem and frameworks.
- Experience with data validation techniques and methodologies to ensure the accuracy and integrity of data.
- Hands-on experience with DB2 for database management and operations.
- Experience working with Snowflake for cloud-based data warehousing solutions.
- Proficiency in Databricks for big data processing and analytics on a unified data platform.
Overall Responsibilities:
- Develop and maintain scalable data pipelines and build out new API integrations to support ongoing increases in data volume and complexity.
- Write high-quality, efficient, testable code in Python and other object-oriented languages.
- Collaborate with cross-functional teams to understand data needs, gather requirements, and implement data solutions.
- Conduct data validation to ensure the accuracy and quality of data through all pipelines.
- Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
- Perform data analysis required to troubleshoot data-related issues and assist in the resolution of data issues.
- Work closely with a team of frontend and backend engineers, product managers, and analysts.
- Define and manage Service Level Agreements (SLAs) for all data sets in allocated areas of ownership.
Technical Skills:
- Strong knowledge of Python and its libraries for data analysis (e.g., Pandas, NumPy, SciPy).
- Experience with SQL databases such as DB2, and familiarity with Snowflake and Databricks platforms.
- Understanding of data modeling, data access, and data storage techniques.
- Ability to use version control systems such as Git.
- Knowledge of RESTful APIs and various data formats (JSON, XML).
- Experience with Agile/Scrum development methodologies.
Experience:
- 12+ years of experience in a similar role, working with Python, data validation, DB2, Snowflake, and Databricks.
- Proven track record of designing and deploying high-quality, scalable, and reliable data solutions.
- Experience working with large-scale data-driven applications is a plus.
Qualification:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent work experience.
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication skills.
- Ability to work in a fast-paced, team-oriented environment.
SYNECHRON’S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity, and an environment that is respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and abilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
Candidate Application Notice