Senior Data Engineer

Location:
  • Zagreb, Croatia
  • San Jose, Costa Rica
  • Remote, Latin America

Category: Apache Spark

What's the Project?

COVID-19 has created health and economic challenges around the world. US-based healthcare jobs have become even more desirable because they are stable and vital to meeting today's challenges.
Meanwhile, healthcare data is complex, important, and private.
Our client provides a highly scalable platform that aggregates and normalizes clinical data from EHRs (Electronic Health Records) and uses data science and data engineering to draw actionable insights from that data to provide better care. You would be joining a project built on modern technologies with minimal tech debt. The client's business is growing, and they need talented team members to support their future.

You're a Perfect Match If You Have:
  • Strong experience with Apache Spark (Databricks experience would be a plus).
  • Strong experience with SQL and NoSQL databases such as MongoDB, Postgres, MS SQL, or MySQL.
  • Advanced understanding of data modeling, index analysis & optimization in both non-SQL (MongoDB) and SQL (PostgreSQL and SQL Server) database environments.
  • Advanced understanding of replication, sharding, partitioning, and performance tuning.
  • Advanced understanding of database ETL (Extract, Transform, and Load) and reporting processes and tools (e.g., CloverDX, SSIS, or Talend).
  • Strong SQL abilities and experience with massive relational database systems. Experience with Databricks or Redshift is a huge plus.
  • Strong development skills in Python or other languages that apply to data engineering and science.
  • Strong knowledge of all traditional Data Warehouse-related components (Sourcing, ETL, Data Modeling, Infrastructure, BI, Reporting) and the modern tools to support those components.
  • Flexibility and creativity in solution design – including leveraging emerging technologies.
  • Ability to clearly explain and justify ideas when faced with competing alternatives.
  • Ability to design, communicate, and apply effective architectural design patterns across a wide range of technical problems.
  • Familiarity with continuous delivery and DevOps.
  • Familiarity with Git and release engineering strategies.
  • 3+ years of commercial experience with hands-on Data Engineering and Data Warehousing.
  • 3+ years of experience with a modern Big Data processing stack including Apache Spark, Storm, Kafka, Kinesis, or equivalent technologies.
  • Bachelor’s degree or higher in a technical field of study.
  • Track record of working in Scrum / Agile software teams.
  • Proficient in spoken and written English.
Your day-to-day activities:
  • Build and deploy the infrastructure for ingesting high-volume data from various sources.
  • Develop and maintain the data-related scripting for build/test/deployment automation.
  • Research individually and in collaboration with other teams on how to solve problems.
  • Research, design, test, and evaluate new technologies and services as they apply to data engineering and science.
  • Maintain an organization-wide view of current and future strategy and approach as they apply to data engineering and science.
  • Provide leadership and expertise in the development of standards, architectural governance, design patterns, and practices in data engineering and science.
  • Identify and resolve bottlenecks and bugs.
  • Support the Scrum / Agile software development approach (e.g., sprints, standups, retros, planning, pointing, grooming).

Ready to dive in?

Contact us today or apply below.


5 MB max; .pdf, .xlsx, .xls, .doc, .docx, .ppt, .pptx formats.

© 2022 Newfire
