Senior Data Engineer

Location:
  • Europe, Remote
  • Zagreb, Croatia

Category:
  • Python
  • AWS
  • SQL
  • Snowflake

What's the Project?

Join a high-performing, tight-knit data team at a fast-growing company that is using the Internet of Things (IoT) to transform how organizations sense, monitor, and make decisions. Founded out of MIT in 2005, SmartSense is trusted by more than 2,000 organizations, including Walmart, SpaceX, Apple, CVS Health, Coca-Cola, and the US State Department, to help them make sensor-driven decisions. We have a solution that our customers rely on every day to make mission-critical decisions.
In this Senior Data Engineer role, you will contribute to strategic data engineering solutions that move data from raw to cold storage, through ETL (Extract, Transform, Load) pipelines, into data sets used to train ML (Machine Learning) models. You will not accept the status quo and will be an agent of positive change. You will collaborate with our Data Scientists, Business Analysts, and Machine Learning Engineers to improve data products for our customers and to facilitate discovery through the democratization of our data. Join us on our data journey.
Core technologies we use:
SQL, Python, Snowflake, RESTful Services, AWS, Kubernetes, Atlassian, Git

You're a perfect match if you have:
  • BA/BS degree in a technical/quantitative field and 5+ years of experience in Data Engineering.
  • Proven SQL and Python skills.
  • Ability to work independently, solve problems, and learn quickly as part of a larger agile team.
  • Experience delivering and articulating data models to support enterprise and data product needs, including physical, logical, and conceptual modeling.
  • Experience in the design, monitoring, and delivery of data pipelines/ETL, relational and non-relational databases, data warehousing, data lakes, and business intelligence platforms.
  • Able to demonstrate sound coding practices (comments, style, consistency, efficiency, reuse, etc.) and experience applying them in code reviews.
  • Must have experience with managed services in AWS, Azure, or GCP (Google Cloud Platform).
  • Must have experience validating data quality, preferably with test automation (pytest, dbt, etc.).
  • Must have experience authoring stories and bugs independently and in team grooming sessions.
Nice to have:
  • Practical knowledge of slowly changing dimensions in data warehousing.
  • Experience with the data science life cycle and working with Data Scientists and MLEs (Machine Learning Engineers).
  • Experience with Machine Learning Data Ops, Data Processing and Architecture
  • Proven experience building data pipelines with orchestration tools such as Luigi or Airflow.
  • Experience with data governance, PII, and data access paradigms.
  • Proven experience in handling time series telemetry data and aggregations
  • Experience with Snowflake or another cloud data platform and/or lakehouse architecture.
  • Experience working with Kubernetes or another container-orchestration system.
  • Experience with BI Platform Data Integrations and Workflows

Within 1 Month, you’ll:

  • Join a tightly knit team solving hard problems the right way
  • Understand the various sensors and environments critical to our customers’ success
  • Learn the data models and flows that are currently in use to transform raw data into analytic products
  • Build relationships with the awesome team members across other functional groups
  • Learn our code practice, work in our code base, write tests, and collaborate with us in our workflows
  • Drink from the fire hose
  • Contribute to onboarding processes and recommend ways to improve them

Within 3 Months, you’ll:

  • Demonstrate your capabilities by defining, implementing, and delivering data products for your user stories and tasks
  • Implement data quality tests
  • Work closely with the product team and stakeholders to understand how our products are used
  • Identify opportunities to improve our infrastructure, operational performance, and data pipeline deliverables and influence us all to be better

Within 6 Months, you’ll:

  • Evaluate new technologies and build proof-of-concept systems to enhance Data Engineering capabilities and data products
  • Contribute to improving the efficiency of our automation and general data operations
  • Design and implement new features and be accountable for their performance
  • Deliver high quality operational data
  • Generate high quality documentation and detailed analysis
  • Articulate conceptual, logical, and physical data models in Confluence

Within 12 Months, you’ll:

  • Establish a reputation as a partner in data analysis and contextualization with clear articulations about our data space for targeted internal audiences.
  • Improve the velocity of data ingestion, orchestration, fusion, transformation, and data analysis deliverables.
  • Deliver infrastructure required for optimal extraction, transformation, and data loading in predictive analytic contexts
  • Transform ETL development with optimizations for efficient storage, retention policies, access, and computation while accounting for cost
  • Contribute to the strategic maturity of our operations and delivery of product requests
  • Be a key player in the design and delivery of the data pipelines and engineering infrastructure that support machine learning systems at scale
  • Collaborate with your teammates to advance our architecture in support of the predictive analytics roadmap

Ready to dive in?

Contact us today or apply below.

Emilio Zinaja
Recruiter

Apply Now
Refer a friend


5 MB max; .pdf, .doc, .docx formats.

© 2024 Newfire LLC,
45 Prospect St, Cambridge, MA 02139, USA

Privacy Policy