Data Engineer at UP.Labs

  • Full-time
  • Remote, Worldwide

Data Engineer

UP.Labs is a dynamic venture studio dedicated to building innovative startup companies from the ground up. Our team thrives on solving complex problems, driving technological advancements, and creating impactful digital products. We’re seeking a highly skilled professional to join our growing team and contribute to our mission of launching the next wave of successful startups.

Responsibilities

  • Design, build, and maintain scalable data pipelines and workflows using multiple cloud services and tools.
  • Collaborate with data scientists, machine learning engineers, and business stakeholders to understand data requirements and deliver appropriate solutions.
  • Optimize data storage solutions and implement best practices for data governance, security, and performance.
  • Implement Python-based solutions for data processing and analysis.
  • Build and refine CI/CD processes to improve data workflows and ensure seamless deployments.
  • Monitor and troubleshoot data pipelines to ensure reliability and minimize downtime.
  • Stay up to date with advancements in data engineering and cloud computing.
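
As an illustration of the pipeline work described above, here is a minimal extract-transform-load sketch. All names and data are hypothetical, and an in-memory SQLite database stands in for the cloud storage a production pipeline would target:

```python
# Minimal ETL sketch: extract raw records, transform them, load into a table.
# Source data and table names are illustrative only.
import sqlite3

def extract():
    # In practice: read from an API, object storage, or an upstream database.
    return [{"id": 1, "amount": "19.99"}, {"id": 2, "amount": "5.00"}]

def transform(rows):
    # Normalize types (amounts arrive as strings from the mock source).
    return [{"id": r["id"], "amount": float(r["amount"])} for r in rows]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (:id, :amount)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

Real pipelines add scheduling, monitoring, and retries around this core, but the extract/transform/load structure stays the same.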

Skills

  • Bachelor's or Master's degree in Data Engineering, Computer Science, or a related field (or equivalent practical experience)
  • 5+ years of experience in designing and implementing data pipelines and ETL workflows
  • Proven track record of delivering production-grade software
  • Strong problem-solving and analytical skills, with attention to detail and the ability to work collaboratively in a team environment
  • Excellent English communication
  • Ability to thrive in a fast-paced, agile environment, with the capability to drive frontend and backend architectural decisions
  • Strong experience with the Python programming language
  • Exceptional SQL proficiency for querying and managing complex data structures
  • Deep experience with Databricks for managing and optimizing large-scale data systems
  • Strong experience with at least one cloud service (AWS, Azure, or GCP)
  • Deep understanding of database systems, data warehousing, and data modeling techniques
  • Familiarity with distributed computing and Big Data frameworks like Apache Spark or Hadoop
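
To give a sense of the SQL proficiency the role calls for, the snippet below aggregates a hypothetical sales table and ranks regions with a window function. It uses Python's built-in sqlite3 module purely for a self-contained demo; the same query pattern applies in Databricks SQL or any modern warehouse:

```python
# Aggregate-plus-window-function query over a tiny sample table.
# Table and column names are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount REAL);
INSERT INTO sales VALUES ('east', 100), ('east', 150), ('west', 80);
""")

rows = conn.execute("""
    SELECT region,
           SUM(amount) AS total,
           RANK() OVER (ORDER BY SUM(amount) DESC) AS rnk
    FROM sales
    GROUP BY region
    ORDER BY total DESC
""").fetchall()
```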

Benefits

  • 🌟 Competitive salary
  • πŸš€ Opportunities for career growth
  • πŸ’» Flexible work environment

Application

Apply through the Gem website.

Published 8 days ago • Expires August 25, 2025 06:01