Integration Data Engineers

Location: YONDU HQ

Status: Urgent

# of Positions Available: 6

DUTIES AND RESPONSIBILITIES:

● Develops automated data pipelines based on the solution recommended by the Data Engineering Lead, using the official ETL tools determined by the Data Engineering Head.

● Creates automated data pipeline design documentation, Source-to-Target spreadsheets, and technical design specifications.

● Provides technical expertise, troubleshooting, guidance, and support during design, development, testing, production, and post-production.

● Understands the source system, captures and documents business rules.

● Analyzes and proposes changes to maximize code reusability and maintain business value.

● Develops, implements and maintains highly efficient and scalable automated data pipeline processes.

● Provides expert technical knowledge and solutions pertaining to data integration.

● Identifies opportunities to optimize the automated data pipeline, including environment setup, and implements monitoring, quality, and validation processes to ensure data accuracy and integrity.

● Provides input to the data governance organization to drive the data governance charter.

● Ensures compliance with internal standards and guidelines.

● Conducts unit testing and component integration testing for developed solutions.

● Reviews business and technical requirements and ensures the data integration platform meets requirements.

● Applies industry best practices for automated data pipeline design and development.

● Implements end-to-end, complex automated data pipeline systems using common Big Data tools such as (but not limited to) Apache Spark, Talend, Kafka, Airflow, NiFi, and Hadoop tools.

● Implements monitoring and measurement of data quality as defined by business rules.

● Ensures adherence to architectural governance standards and practices.

● Develops best practices, standards of excellence, and guidelines for programming teams.

● Ensures compliance with the DevSecOps practice.

● Conducts system testing: executes job flows, investigates and resolves system defects, and documents results.

● Provides level-of-effort estimates for new initiatives and change controls so that projects can be evaluated, budgeted, and tracked for success.

JOB SPECIFICATIONS:

● Education: Bachelor’s degree in Information Technology, Computer Science, or a related field.

● Related Work Experience:

  • 4-6 years of Java experience. Strong SQL and PL/SQL skills with the ability to solve highly complex challenges. Technically strong in ETL concepts, design, and development.
  • 4-6 years of experience developing medium-to-large, complex integration solutions.
  • 4+ years of experience in providing data warehousing solutions.
  • Experience in automated data pipeline design, implementation, and support.
  • Experience in Big Data Platforms and distributed computing.
  • Technical experience and business knowledge of various SDLC methodologies, including waterfall, iterative, and agile software development life cycles, or related disciplines/processes is preferred.
  • Experience with DevSecOps processes and tools (Git, Nexus, SonarQube, Jenkins, and others).

● Knowledge:

○ Required: Java, Spark, Linux/Unix, Scala, SQL

○ Advantage: Hadoop, AWS services, Jenkins, Bitbucket, CI/CD processes, ETL/DW, Big Data

● Skills:

○ Good communication and interpersonal skills for interacting and collaborating with developers, analysts, and business staff throughout the organization.

○ Ability to communicate clearly in writing to document data requirements and translate them into technical solutions.