
Data Engineer (GCP, Snowflake)
Remote | Poland | Big Data
Job description
Addepto is a leading AI consulting (https://addepto.com/ai-consulting/) and data engineering (https://addepto.com/data-engineering-services/) company that builds scalable, ROI-focused AI solutions for some of the world's largest enterprises and pioneering startups, including Rolls-Royce, Continental, Porsche, ABB, and WGU. With an exclusive focus on Artificial Intelligence and Big Data, Addepto helps organizations unlock the full potential of their data through systems designed for measurable business impact and long-term growth.
The company's work extends beyond client engagements. Drawing from real-world challenges and insights, Addepto has developed its own product, ContextClue, and actively contributes open-source solutions to the AI community. This commitment to transforming practical experience into scalable innovation has earned Addepto recognition by Forbes as one of the top 10 AI consulting companies worldwide.
As part of KMS Technology, a US-based global technology group, Addepto combines deep AI specialization with enterprise-scale delivery capabilities, enabling the partnership to move clients from AI experimentation to production impact, securely and at scale.
As a Data Engineer, you will work on data engineering projects within GCP environments, including migrating, analyzing, and managing data structures across platforms such as BigQuery and Snowflake. You will design, develop, and maintain scalable data solutions, working closely with clients and cross-functional teams. You will play a key role in building data pipelines, integrating data from multiple sources, and ensuring data quality, security, and performance across the data platform.
📍 Location: Remote (Poland)
🚀 Your main responsibilities:
Design, develop, test, and maintain data pipelines and ETL/ELT processes using GCP and Snowflake.
Implement data ingestion, transformation, and storage solutions for structured, semi-structured, and unstructured data.
Build and optimize batch, micro-batch, and real-time data pipelines.
Support data migration from legacy systems to cloud platforms (GCP, Snowflake).
Collaborate with business and technical stakeholders to translate requirements into scalable data solutions.
Work with GCP services such as BigQuery, Cloud SQL, Cloud Spanner, and Cloud Bigtable.
Integrate data from various sources and support data platform development.
Ensure data quality by implementing validation rules, testing frameworks, and monitoring solutions.
Work closely with security teams to ensure data protection, access control, and compliance.
Support development of data models and schemas for analytics and reporting.
Contribute to CI/CD processes, version control, and infrastructure automation (e.g., Git, Terraform).
Collaborate with data scientists, analysts, and engineers to support data-driven use cases.
Job requirements
🎯 What you'll need to succeed in this role:
5+ years of experience in Data Engineering or a similar role.
2+ years of experience working with GCP or similar cloud platforms (AWS, Azure).
Hands-on experience with GCP managed data services (e.g., BigQuery, Cloud SQL, Cloud Spanner, Cloud Bigtable).
Experience working with Snowflake.
Strong knowledge of SQL and experience with data transformation tools (e.g., dbt or similar).
Proficiency in Python for data processing and scripting.
Experience with ETL/ELT processes and data pipeline development.
Experience working with structured, semi-structured, and unstructured data.
Familiarity with data orchestration tools (e.g., Airflow, Dagster, or similar).
Experience with version control (Git) and CI/CD practices.
Experience with Infrastructure as Code (e.g., Terraform, Ansible, or similar).
Strong analytical, problem-solving, and troubleshooting skills.
Excellent written and verbal communication skills in English.
Experience in a client-facing or consulting environment is a plus.
➕ Nice to have:
Experience with real-time data streaming solutions.
Familiarity with data governance and data quality frameworks.
Experience with BI tools (Looker, Power BI, Tableau).
Experience supporting data platform migrations to cloud.
🎁 Discover our perks & benefits:
Work in a supportive team of passionate enthusiasts of AI & Big Data.
Engage with top-tier global enterprises and cutting-edge startups on international projects.
Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces.
Accelerate your professional growth through career paths, knowledge-sharing initiatives, language classes, and sponsored training and conferences. This includes a partnership with Databricks, which offers industry-leading training materials and certifications.
Choose your preferred form of cooperation: B2B or a contract of mandate, and make use of 20 fully paid days off.
Participate in team-building events and make use of the team-integration budget.
Celebrate work anniversaries, birthdays, and milestones.
Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching.
Get full work equipment for optimal productivity, including a laptop and other necessary devices.
Boost your personal brand with our backing by speaking at conferences, writing for our blog, or participating in meetups.
Experience a smooth onboarding with a dedicated buddy, and start your journey in our friendly, supportive, and autonomous culture.
