Middle Data Pipeline Engineer

Location:
  • San Jose, Costa Rica
  • Remote, Latin America

Category: Python

What's the Project?

We exist to power better care.
We are on a mission to power better care by optimizing every patient journey. We help payers, providers, and life sciences companies deliver better care, therapies, and outcomes by providing the most actionable patient journey insights and value-based payments platform. Healthcare organizations benefit from big data efficiencies and self-service, on-demand enterprise insights that light the path to higher-value care.
We are seeking a talented Mid-Level Data Engineer to play a crucial role in designing and implementing scalable, high-performance, and resilient data pipelines using cutting-edge Big Data technologies. Joining our data ingestion and transformation team, you will collaborate closely with software and data engineers, contributing your expertise to deliver top-quality data solutions for platform teams and business stakeholders. The ideal candidate will possess strong communication skills, proven experience in data design and implementation, a passion for innovation, and a consistent drive to deliver results.
Does this sound like you?

  • You are excited to help solve healthcare problems with big data.
  • You are eager to learn new technologies as well as our existing architecture.
  • You have an enthusiastic, energetic personality and an inquiring, investigative mind.
  • You embrace change as an opportunity to learn.
  • You take great care in the details.
You're a perfect match if you have:
  • 3+ years of hands-on experience in developing big data pipelines.
  • Proficiency in Python and SQL.
  • 1+ years of experience working with distributed data processing frameworks, with a preference for expertise in Spark.
  • Practical knowledge of AWS services, including EMR, S3, and Athena.
  • Strong programming and algorithmic skills.
  • Proficiency in tools like Git, Airflow, relational databases, and APIs.
  • A bachelor’s or equivalent degree in Computer Science or a related field and relevant experience.

Your day-to-day activities:
  • Develop, test, and deploy sophisticated data pipelines for big data processing.
  • Demonstrate an understanding of our platform architecture and actively contribute to its advancement.
  • Participate in code optimization and scalability efforts, ensuring the efficiency of new and existing code.
  • Assist in evaluation and implementation of new technologies.
  • Participate in collaborative code reviews, seeking guidance from senior members of the data engineering team.
  • Follow best practices and coding standards within the team.

Ready to dive in?

Contact us today or apply below.


© 2024 Newfire LLC,
45 Prospect St, Cambridge, MA 02139, USA
