AWS Data Engineer (or Azure) contract for hire, 100% remote, no sponsorship

Posted: 9 days ago · Location: United States · Salary: undisclosed

Job Description

Data Engineer

Our client hired a vendor to take their existing "poor man's" data warehouse and modernize it on AWS. This candidate will own the new environment and build upon it. MUST have experience working with Redshift and building the data pipelines that feed it. Communication is critical, as they will be working closely with the business SMEs and the vendor (the vendor's engagement wraps up in March). A minimum of 3 years of relevant experience is required; there is flexibility to learn along the way. 100% remote.

Role Information

Role Title: AWS Data Engineer

Soft Skills (Very important)

• Willing to color outside the lines
• Comfortable with a start-up pace and ambiguity
• Comfortable leading through abstract requirements
• SDLC processes
• Partner with 3rd-party development teams; be a bridge to share knowledge and develop alongside them
• Good communication skills
• Willing to be the "seed" of an internal development team that will evolve
• Very strong communication skills and confidence being the face of IT to various business partners
• Ability and willingness to play a hands-on role as well as a conceptual architect role

Qualifications

• Bachelor's degree in Computer Engineering, Computer Science, Information Systems, or a related discipline
• 3+ years of relevant experience
• Experience capturing end-user requirements and aligning technical solutions to business objectives
• Understanding of different types of storage (filesystem, relational, MPP, NoSQL) and of working with various kinds of data (structured, unstructured, metrics, logs, etc.)
• Understanding of data architecture concepts such as data modeling, metadata, workflow management, ETL/ELT, real-time streaming, and data quality
• 5+ years of experience working with SQL
• Experience setting up and operating data pipelines using Python or SQL
• 3+ years of experience working on AWS, Google Cloud Platform, or Azure
• Experience working with data warehouses such as Redshift, BigQuery, or Snowflake
• Exposure to open-source and proprietary cloud data pipeline tools such as Airflow, Glue, and Dataflow
• Experience working with relational databases
• Experience with data serialization formats such as JSON, XML, and YAML
• Experience with code management tools (e.g., Git, SVN) and DevOps tools (e.g., Docker, Bamboo, Jenkins)
• Strong analytical problem-solving ability
• Great presentation skills and written and verbal communication skills
• Self-starter with the ability to work independently or as part of a project team
• Capability to conduct performance analysis, troubleshooting, and remediation