Talend Big Data Engineer (100% Remote)

Posted: 3 days ago | Location: United States | Salary: Undisclosed

Job Description

Currently, we are looking for talented resources for one of our listed clients. If interested, please reply with your updated resume, or feel free to reach out to me for more details at 949-371-8011.

Title: Talend Big Data Engineer
Location: Remote
Duration: 1 year

As a Talend Big Data Engineer, you will participate in all aspects of the software development lifecycle, including estimating, technical design, implementation, documentation, testing, deployment, and support of applications developed for our clients. Working as a member of a team, you will collaborate with solution architects and developers on the interpretation and translation of wireframes and creative designs into functional requirements, and subsequently into technical design.

Responsibilities
- Create and code Talend data pipelines for state-of-the-art analytics applications.
- Deploy data processing jobs to production.
- Configure and set up Talend environments.
- Work with stakeholders to identify and document requirements.
- Configure and schedule data pipelines.
- Translate business requirements into technical specifications and coded data pipelines.
- Troubleshoot data pipelines.

Qualifications
- Passionate coders with 3-5 years of application development experience with Talend.
- Proficiency with Talend Open Studio or Talend Big Data is a must.
- Must have worked on projects that resulted in code being deployed to production.
- Experience with Snowflake, Redshift, or Azure Synapse strongly desired.
- Expert knowledge of developing with Talend Cloud.
- Knowledge of data formats and of ETL and ELT processes in a Hadoop environment, including Spark, Hive, Parquet, MapReduce, YARN, HBase, and other NoSQL databases.
- Experience dealing with structured, semi-structured, and unstructured data in batch and real-time environments.
- Experience working in AWS, Azure, and/or Google Cloud environments.
- Familiarity with DevOps and CI/CD, as well as Agile tools and processes, including Git, Jenkins, Jira, and Confluence.
- Good knowledge of MapReduce and Spark design patterns.
- Experience using Spark to process large streams of data.
- Experience running MapReduce and Spark jobs over YARN.
- Client-facing or consulting experience highly preferred.
- Skilled problem solvers with the desire and proven ability to create innovative solutions.
- Flexible and adaptable attitude, with the discipline to manage multiple responsibilities and adjust to varied environments.
- Future technology leaders: dynamic individuals energized by fast-paced personal and professional growth.
- Phenomenal communicators who can explain and present concepts to technical and non-technical audiences alike, including high-level decision makers.
- Bachelor's degree in MIS, Computer Science, Math, Engineering, or a comparable major.
- Solid foundation in Computer Science, with strong competencies in data structures, algorithms, and software design.
- Knowledge and experience in developing software using agile methodologies.
- Proficiency in authoring, editing, and presenting technical documents.
- Ability to communicate effectively via multiple channels (verbal, written, etc.) with technical and non-technical staff.