Remote AWS Big Data Engineer

Posted: 2 days ago | Location: United States | Salary: undisclosed

Job Description

Job Title: AWS Big Data Engineer

Duration: 3-month contract to full-time

Location: Fully remote

**This is primarily a cloud development role, so we are looking for an engineer with some architecture background who is heavily focused on hands-on development with Python, Spark, and Glue.**

Must Have:

  • 3+ years of software development in Python
  • Experience with Python, Spark, Glue and EMR
  • 2+ years of AWS Data Engineering experience

Job Description

  • Collabera is looking to hire an experienced and highly motivated AWS Big Data Engineer to design and develop data pipelines using AWS Big Data tools and services and other modern data technologies.
  • In this role, you will play a crucial part in shaping big data and analytics initiatives for many customers for years to come.
  • The ideal candidate is passionate about building at scale on AWS.
  • You thrive at simplifying hard problems and can articulate solutions to both technical and non-technical stakeholders.

Key Responsibilities

  • Build end-to-end big data pipelines on AWS, including:
  • Ingestion/replication from traditional on-prem RDBMS (e.g. Oracle, MS SQL Server, MySQL, Postgres) to AWS
  • Streaming ingestion with Kinesis Streams, Kinesis Firehose, and Kinesis Analytics
  • Change Data Capture (CDC) logic and partitioning
  • ETL and Analytics with AWS Glue, Glue Streaming, EMR, Spark, Presto, Athena, Flink, Python, PySpark
  • Refactoring of existing RDBMS scripts (e.g. PL/SQL, T-SQL, PL/pgSQL) into PySpark jobs
  • Buildout of data warehouses and published data sets using Redshift, Aurora, RDS, Elasticsearch
  • Python scripting with AWS Lambda
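As a flavor of the Lambda scripting item above, here is a minimal sketch of a Kinesis-triggered Lambda handler in Python. The `lambda_handler(event, context)` signature and the base64-encoded `Records[i]["kinesis"]["data"]` event shape are standard AWS conventions; the decoding logic is illustrative only, and any real pipeline would replace the return value with a write to a downstream sink such as S3 or Firehose.

```python
import base64
import json


def lambda_handler(event, context):
    """Minimal Kinesis-triggered Lambda: decode each record's JSON payload.

    Kinesis delivers record data base64-encoded under
    event["Records"][i]["kinesis"]["data"].
    """
    decoded = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        decoded.append(json.loads(payload))
    # Downstream processing (e.g. writing to S3 or Firehose) would go here.
    return {"recordCount": len(decoded), "records": decoded}


if __name__ == "__main__":
    # Local smoke test with a synthetic Kinesis event.
    sample = {"Records": [{"kinesis": {"data": base64.b64encode(
        json.dumps({"id": 1}).encode()).decode()}}]}
    print(lambda_handler(sample, None))  # → {'recordCount': 1, 'records': [{'id': 1}]}
```

The handler is pure standard-library Python, so it can be exercised locally with a synthetic event before being deployed.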

Experience Requirements

  • 3+ years of experience in software development with Python
  • 2+ years of development experience with Spark/PySpark, Pandas
  • 3+ years of database development experience with RDBMS, including development of stored procedures
  • 2+ years of ETL development experience
  • 2+ years of hands-on data engineering on AWS, including S3, Kinesis, Glue, Athena, RDS/Aurora, Redshift
  • AWS Certified Solutions Architect - Professional and AWS Certified Data Analytics - Specialty (formerly Big Data Specialty) certifications are a plus
  • A Bachelor's Degree from an accredited college in Computer Science or equivalent experience

Keywords: Python, Spark, Glue, EMR, AWS, Engineer, Development, Architecture, Data, data pipelines - provided by Dice