Data Engineer Remote Jobs

105 Results

7h

Senior ETL Data Engineer

AETOS, Remote
Tags: Agile, Bachelor's degree, SQL, Salesforce, Oracle, Linux

AETOS is hiring a Remote Senior ETL Data Engineer

Job Description

Aetos LLC is seeking a Senior ETL Data Engineer to join an existing team providing Extract, Transform and Load (ETL) solutions to a government client. The ideal candidate will have 5+ years of experience with Informatica PowerCenter and will be responsible for successful technical delivery and support of Data Warehousing, Data Migration and Transformation, and Business Intelligence projects using an Agile project management methodology. The duties of this role include all aspects of data processing, storage, and ingestion, as well as data analysis and visualization of related multi-program data. 

Qualifications

Responsibilities:  

ETL/Data Warehouse: 

  • Create, maintain, and reverse engineer the Extract, Transform, and Load (ETL) procedures for the Data Warehouse (DW) environment using the Informatica PowerCenter suite. 
  • Perform analysis of RDBMS tables and PowerCenter objects to answer questions pertaining to the data warehouse and the data transformations. 
  • Create and maintain scripts and files that perform various functions on the Informatica integration servers. Use PuTTY or another SSH client to maintain the Linux environment. 
  • Maintain data model documentation (Erwin) if changes to the ETL require database changes, and develop, test, and deploy the associated DDL. 
  • Manage releases of changes to ETL, scripts, DDL, and scheduling components from Development to Test to Production. 
  • Provide support for the Test, Certification, and Production DW environments. 
  • Maintain the Consolidated Data Model (CDM). 
  • Knowledge of Informatica Cloud Integration Services is a plus. 
  • Provide ongoing development and maintenance of financial data marts and the enterprise data warehouse using BI best practices, relational structures, dimensional data, SQL skills, and data warehouse and reporting techniques. 
  • Collaborate with end users to identify needs and opportunities for improved delivery of data supporting agency financial operations and mission. 
  • Convert business requirements and high-level data collection needs into well-specified ETL, analyses, reporting, and visualizations. 
  • Define and log work using Jira. 
  • Participate in recurring team meetings (Agile). 

Education & Qualifications Required:   

  • Bachelor's degree in Computer Science, Software Engineering, or commensurate experience in a related field. 
  • 5+ years of experience using Informatica PowerCenter at a development level (creating mappings, workflows, etc.). 
  • 7+ years of relevant experience in ETL development, support, and maintenance. 
  • Strong SQL (Oracle) abilities. 
  • Proficiency in shell scripting. 
  • Experience in an ETL environment where Salesforce is a source is a plus. 
  • Experience in an ETL environment where Control-M is used is a plus. 
  • 2+ years of Informatica PowerCenter administration; experience administering it in a Linux environment is a plus. 
  • Knowledge or usage of Informatica IICS, EDC, and/or Axon is a plus. 
  • Excellent analytical, organizational, verbal, and written communication skills. 
  • Experience in gathering requirements and formulating business metrics for reporting. 
  • Familiarity with the Erwin data modeling tool. 
  • Experience working in a Microsoft SharePoint environment. 
  • Experience with Agile and writing user stories. 
  • Must be able to present diagnostic and troubleshooting steps and conclusions to varied audiences. 
  • Experience monitoring and maintaining enterprise Data Warehouse platforms and BI reporting services. 
  • Banking and lending domain experience is a plus. 

See more jobs at AETOS

Apply for this job

10h

Senior Data Engineer

Gemini, Remote (USA)
Tags: remote-first, Airflow, SQL, Design, CSS, Kubernetes, Python, JavaScript

Gemini is hiring a Remote Senior Data Engineer

About the Company

Gemini is a global crypto and Web3 platform founded by Tyler Winklevoss and Cameron Winklevoss in 2014. Gemini offers a wide range of crypto products and services for individuals and institutions in over 70 countries.

Crypto is about giving you greater choice, independence, and opportunity. We are here to help you on your journey. We build crypto products that are simple, elegant, and secure. Whether you are an individual or an institution, we help you buy, sell, and store your bitcoin and cryptocurrency. 

At Gemini, our mission is to unlock the next era of financial, creative, and personal freedom.

In the United States, we have a flexible hybrid work policy for employees who live within 30 miles of our office headquartered in New York City and our office in Seattle. Employees within the New York and Seattle metropolitan areas are expected to work from the designated office twice a week, unless there is a job-specific requirement to be in the office every workday. Employees outside of these areas are considered part of our remote-first workforce. We believe our hybrid approach for those near our NYC and Seattle offices increases productivity through more in-person collaboration where possible.

The Department: Data

The Role: Senior Data Engineer

As a member of our data engineering team, you'll deliver high-quality work while solving challenges that impact all or part of the team's data architecture. You'll keep up with recent advances in the big data space and provide solutions for large-scale applications that align with the team's long-term goals. Your work will help resolve complex problems by identifying root causes, documenting the solutions, and keeping operational excellence (data auditing, validation, automation, maintainability) in mind. Communicating your insights with leaders across the organization is paramount to success.

Responsibilities:

  • Design, architect and implement best-in-class Data Warehousing and reporting solutions
  • Lead and participate in design discussions and meetings
  • Mentor data engineers and analysts
  • Design, automate, build, and launch scalable, efficient and reliable data pipelines into production using Python
  • Build real-time data and reporting solutions
  • Design, build and enhance dimensional models for Data Warehouse and BI solutions
  • Research new tools and technologies to improve existing processes
  • Develop new systems and tools to enable the teams to consume and understand data more intuitively
  • Partner with engineers, project managers, and analysts to deliver insights to the business
  • Perform root cause analysis and resolve production and data issues
  • Create test plans, test scripts and perform data validation
  • Tune SQL queries, reports and ETL pipelines
  • Build and maintain data dictionary and process documentation

Minimum Qualifications:

  • 5+ years experience in data engineering with data warehouse technologies
  • 5+ years experience in custom ETL design, implementation and maintenance
  • 5+ years experience with schema design and dimensional data modeling
  • Experience building real-time data solutions and processes
  • Advanced skills with Python and SQL are a must
  • Experience with one or more MPP databases (Redshift, BigQuery, Snowflake, etc.)
  • Experience with one or more ETL tools (Informatica, Pentaho, SSIS, Alooma, etc.)
  • Strong computer science fundamentals including data structures and algorithms
  • Strong software engineering skills in any server-side language, preferably Python
  • Experienced in working collaboratively across different teams and departments
  • Strong technical and business communication

Preferred Qualifications:

  • Kafka, HDFS, Hive, Cloud computing, machine learning, text analysis, NLP & Web development experience is a plus
  • Experience with Continuous integration and deployment
  • Knowledge and experience of financial markets, banking or exchanges
  • Web development skills with HTML, CSS, or JavaScript

It Pays to Work Here
 
The compensation & benefits package for this role includes:
  • Competitive starting salary
  • A discretionary annual bonus
  • Long-term incentive in the form of a new hire equity grant
  • Comprehensive health plans
  • 401K with company matching
  • Paid Parental Leave
  • Flexible time off

Salary Range: The base salary range for this role is between $136,000 and $170,000 in the State of New York, the State of California, and the State of Washington. This range is not inclusive of our discretionary bonus or equity package. When determining a candidate’s compensation, we consider a number of factors including skillset, experience, job scope, and current market data.

At Gemini, we strive to build diverse teams that reflect the people we want to empower through our products, and we are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Equal Opportunity is the Law, and Gemini is proud to be an equal opportunity workplace. If you have a specific need that requires accommodation, please let a member of the People Team know.

#LI-AA1

Apply for this job

12h

Sr Data Engineer

BeyondTrust, Remote (United States)

BeyondTrust is hiring a Remote Sr Data Engineer

Date posted: 2024-11-19

BeyondTrust is a place where you can bring your purpose to life through the work that you do, creating a safer world through our cyber security SaaS portfolio.

Our culture of flexibility, trust, and continual learning means you will be recognized for your growth, and for the impact you make on our success. You will be surrounded by people who challenge, support, and inspire you to be the best version of yourself.

The Role

As a Senior Data Engineer at BeyondTrust, you will help build and enhance our state-of-the-art data lakehouse, which is responsible for consuming billions of events each day. With security and computational efficiency top of mind, you will help cut through the noise and create valuable, actionable insights from our vast quantity of data to deliver immediate value to our customers. Our engineers are problem solvers at heart and will tackle both business problems and technical engineering challenges alike, with a focus on how & why before solving.

What You’ll Do

  • Optimize data workloads at a software level by improving processing efficiency.
  • Develop new data processing routes to remove redundancy or reduce transformation overhead.
  • Monitor and maintain existing data workflows.
  • Use observability best practices to ensure pipeline performance.
  • Perform complex transformations on both real-time and batch data assets.
  • Create new ML/engineering solutions to tackle existing issues in the cybersecurity space.
  • Leverage CI/CD best practices to effectively develop and release source code.

What You’ll Bring

  • Strong programming and technology knowledge in cloud data processing.
  • Previous experience working in mature data lakes.
  • Strong data modelling skills for analytical workloads.
  • Spark (or an equivalent parallel processing framework) experience is needed; existing Databricks knowledge is a plus.
  • Interest and aptitude for cybersecurity; interest in identity security is highly preferred.
  • Technical understanding of underlying systems and computation minutiae.
  • Experience working with distributed systems and data processing on object stores.
  • Ability to work autonomously.

See more jobs at BeyondTrust

Apply for this job

Databricks is hiring a Remote Big Data engineer


See more jobs at Databricks

Apply for this job

3d

Data Engineer

Derex Technologies Inc, Oak Park Heights, MN, Remote
Tags: DevOps, 8 years of experience, SQL, Salesforce, Azure, Python

Derex Technologies Inc is hiring a Remote Data Engineer

Job Description


Job Title: Data Engineer

Location: Oak Park Heights, MN / Twin Cities, MN
***Must have manufacturing domain experience
***Contract-to-hire role, so only US citizens or Green Card holders

Technology: Fivetran HVR, HVA, dbt Cloud, Snowflake, manufacturing domain, strong SQL skills, Python.

Nice to have: Kafka; any knowledge of Infor LN, Baan ERP, Salesforce, WMS.
Years of experience: 8-15

Job summary
We are seeking a highly skilled and experienced Data Engineer with a strong background in Fivetran HVR, HVA, dbt Cloud, Snowflake, and the manufacturing domain, with strong SQL skills and Python.
The ideal candidate will have 8-15 years of experience and will be responsible for managing and overseeing complex projects.

Responsibilities
Lead the planning and implementation of projects, ensuring they align with company goals and objectives.
Oversee the development and execution of project plans, including timelines, budgets, and resource allocation.
Provide technical expertise in T-SQL, Database and SQL, Cloud SQL, Data Build Tool, Azure DevOps, iPython, Kafka, Snowflake SQL, Snowflake, and Fivetran to guide project teams.
Coordinate with cross-functional teams to ensure seamless integration and delivery of project components.
Monitor project progress and performance, identifying and addressing any issues or risks that may arise.
Ensure that all project deliverables meet quality standards and are completed on time and within budget.
Facilitate communication and collaboration among project stakeholders, including clients, team members, and senior management.
Conduct regular project status meetings and provide updates to stakeholders on project progress and any changes to the project plan.
Develop and maintain project documentation, including project plans, status reports, and risk management plans.
Implement best practices and methodologies for project management to improve efficiency and effectiveness.
Provide mentorship and guidance to project team members, fostering a collaborative and productive work environment.
Utilize Azure DevOps for project tracking and management, ensuring all tasks and deliverables are properly documented and tracked.
Leverage Snowflake and Fivetran to manage and analyze data, providing insights and recommendations to support project decision-making.

Qualifications
Possess a minimum of 8 years of experience, with a strong technical background in T-SQL, Database and SQL, Cloud SQL, Data Build Tool, Azure DevOps, iPython, Kafka, Snowflake SQL, Snowflake, and Fivetran.
Excellent communication skills in both written and spoken forms.
Exhibit strong organizational and time management skills, with the ability to manage multiple projects simultaneously.
Show a proven track record of successfully delivering complex projects on time and within budget.
Display strong problem-solving and analytical skills, with the ability to identify and address project risks and issues.
Possess a strong understanding of project management methodologies and best practices.
Show experience in using Azure DevOps for project tracking and management.
Exhibit proficiency in using Snowflake and Fivetran for data management and analysis.
Display excellent leadership and team management skills, with the ability to motivate and guide project teams.
Demonstrate the ability to effectively communicate and collaborate with cross-functional teams and stakeholders.
Show a commitment to continuous improvement and professional development in the field of program management.
Possess a relevant certification such as PMP, PRINCE2, or equivalent.

Regards,

 

Manoj

Derex Technologies INC

Contact: 973-834-5005 Ext. 206


See more jobs at Derex Technologies Inc

Apply for this job

Tiger Analytics is hiring a Remote Senior Data Engineer

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership has been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives.

  • Bachelor’s degree in Computer Science or similar field
  • 8+ years of experience in a Data Engineer role
  • Experience with relational SQL and NoSQL databases like MySQL, Postgres
  • Strong analytical skills and advanced SQL knowledge
  • Development of ETL pipelines using Python & SQL
  • Good experience with Customer Data Platforms (CDPs)
  • Experience in SQL optimization and performance tuning
  • Experience with data modeling and building high-volume ETL pipelines.
  • Working experience with any cloud platform
  • Experience with Google Tag Manager and Power BI is a plus
  • Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
  • Experience extracting/querying/joining large data sets at scale
  • A desire to work in a collaborative, intellectually curious environment
  • Strong communication and organizational skills

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

See more jobs at Tiger Analytics

Apply for this job

4d

Data Platform Engineer - Remote

Two95 International, Pennsylvania, United States, Remote
Tags: 3 years of experience, 10 years of experience, SQL

Two95 International is hiring a Remote Data Platform Engineer - Remote

Title: Data Platform Engineer

Position: 6+ months, contract to hire

Location: Remote

Rate: $Open (Best Possible)

  • 8-10 years of experience with IBM i server administration and DB2 for i platform (DB2/400) required.
  • At least 5 years of hands-on experience with DB2 for i as a database administrator (DBA).
  • Minimum of 3 years of experience with relational database management systems (RDBMS) and OLTP/OLAP concepts.
  • Proficiency in IBM i navigation, including ACS, Schemas, Run SQL, plan cache, stored procedures, and creating user-defined functions (UDFs).
  • Strong skills in writing complex SQL queries, including joins, sub-selects, and WITH statements (CTEs).
  • Proven experience managing high-volume, high-velocity application/data environments.
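The complex-SQL constructs listed above (joins, sub-selects, and WITH statements) can be sketched in a small runnable example. This is purely illustrative: the role targets DB2 for i, but the same ANSI SQL constructs work in most RDBMSs, so SQLite (via Python's standard library) stands in here, and the table and column names are invented.

```python
import sqlite3

# Illustrative stand-in database; the role itself targets DB2 for i.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'acme', 120.0), (2, 'acme', 80.0), (3, 'globex', 300.0);
""")

# A WITH statement (CTE) computes per-customer totals; the outer query
# joins back to it and uses a sub-select to keep only customers whose
# total exceeds the overall average total.
query = """
WITH totals AS (
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
)
SELECT o.customer, t.total
FROM orders AS o
JOIN totals AS t ON t.customer = o.customer
WHERE t.total > (SELECT AVG(total) FROM totals)
GROUP BY o.customer;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('globex', 300.0)] -- globex's 300.0 exceeds the 250.0 average
```

The same shape (CTE feeding a join, plus a scalar sub-select in the WHERE clause) is what "complex SQL" typically means in DBA screening questions.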

Note: If interested, please send your updated resume with your rate requirement, your contact details, and a suitable time when we can reach you. If you know anyone in your network who would be a perfect match for this job, we would appreciate it if you forwarded this posting to them with a copy to us.

We look forward to hearing from you at the earliest!

See more jobs at Two95 International

Apply for this job

6d

Lead Data Engineer

Extreme Reach, London, England, United Kingdom, Remote Hybrid
Tags: DevOps, Agile, Design

Extreme Reach is hiring a Remote Lead Data Engineer

XR is a global technology platform powering the creative economy. Its unified platform moves creative and productions forward, simplifying the fragmentation and delivering global insights that drive increased business value. XR operates in 130 countries and 45 languages, serving the top global advertisers and enabling $150 billion in video ad spend around the world. More than half a billion creative brand assets are managed in XR’s enterprise platform. 

Above all, we are a supportive and collaborative culture dedicated to DEI. We are caring, dedicated, positive, genuine, trustworthy, experienced, passionate and fun people with loyalty to our customers and our fellow teammates. It is our belief that the better we work together to help our clients achieve their goals, the more successful XR will be.  

The Opportunity 

We are looking for a motivated and results-driven Lead Data Engineer to join our Development Team, responsible for designing and managing the infrastructure and data systems that power analytics and business intelligence within an organization, including, but not limited to, Lake House architecture and solution development, performance optimization, data feeds development, and opportunities to contribute to Machine Learning & AI initiatives. This role blends advanced technical skills with leadership capabilities to drive the development and integration of solutions at scale. You will contribute to bringing the product up to a modern cloud and tool stack. You will play a crucial role in collaborating with and managing cross-functional relationships to ensure seamless integration and alignment of data initiatives, and in translating business requirements into technical solutions. 

Job Responsibilities: 

  • Lead the design and implementation of data lake architecture based on a variety of technologies such as Databricks, Exasol, and S3. 
  • Take accountability and ownership for deploying technical frameworks, processes and best practices which allow engineers of all levels to build extensible, performant and maintainable solutions. 
  • Manage cross-team and stakeholder relationships to drive collaboration and meet shared goals. 
  • Design and implement scalable, reliable, and high-performance data architectures to support large-scale data processing and machine learning workflows. 
  • Architect and develop end-to-end data pipelines, including data extraction, transformation, and loading (ETL) processes. 
  • Optimize data pipelines and storage solutions for performance, scalability, and cost efficiency.  
  • Design the process for monitoring and troubleshooting of data infrastructure issues, identifying performance bottlenecks and ensuring high uptime. 
  • Utilize containerized, serverless architecture patterns in system design. 
  • Promote and drive automated testing, DevOps & CI/CD methodologies to work successfully within an agile environment. 
  • Ensure that data governance, privacy, and security policies are adhered to, in compliance with industry standards and regulations (e.g., GDPR, etc). 
  • Lead, mentor, and support a team of data engineers, providing guidance and support for their technical development. 
  • Collaborate with global cross-functional teams including DevOps, security teams and business stakeholders. 
  • Collaborate with data scientists and machine learning engineers to ensure seamless integration with AI/ML projects. 
  • Stay current with emerging data technologies and trends, evaluating and implementing new tools, frameworks, and platforms to improve the data engineering workflows. 
  • Foster a culture of continuous improvement, encouraging innovation and the adoption of modern tools and best practices in data engineering. 

Qualifications: 

  • MS/BS in Computer Science or related background is essential; 
  • Significant hands-on experience (7+ years) in data engineering, with 2+ years in lead or senior technical role; 
  • Proficiency with Python and SQL is essential; 
  • Proficiency with Spark is essential;  
  • Proven track record of successfully managing large-scale data architectures; 
  • Strong expertise in designing and managing data lakes, data warehouses, data modelling, ETL processes, and database design; 
  • Strong leadership and mentoring skills to guide and develop junior team members; 
  • Experience with shell scripting, system diagnostic and automation tooling; 
  • Experience with various database technologies (MS SQL, Postgres, MySQL) including database performance optimization (e.g., indexing, query optimization); 
  • Experience with No-SQL technologies; 
  • Experience with cloud services (AWS); 
  • Proven experience in implementing DevOps practices; 
  • Experience implementing data quality and code quality practices; 
  • Experience with various programming languages (Java, Scala, Javascript, etc) is beneficial; 
  • Proficiency with infrastructure as a code, code automation, CI/CD is beneficial; 
  • Experience in data governance and compliance is beneficial; 
  • Experience with Docker and containers is desirable; 
  • Experience in visualization tools such PowerBI is desirable; 
  • Excellent interpersonal skills with the ability to collaborate and communicate effectively across diverse teams; 
  • Strong problem solving, organization and analytical skills; 
  • Ability to manage competing priorities, handle complexity, and drive projects to completion; 
  • Keen eye for detail. 

See more jobs at Extreme Reach

Apply for this job

8d

Senior Data Engineer

Nile Bits, Cairo, Egypt, Remote
Tags: Agile, Airflow, SQL, Design, Docker, Linux, Python, AWS

Nile Bits is hiring a Remote Senior Data Engineer

Job Description

  • Designing and implementing core functionality within our data pipeline in order to support key business processes
  • Shaping the technical direction of the data engineering team
  • Supporting our Data Warehousing approach and strategy
  • Maintaining our data infrastructure so that our jobs run reliably and at scale
  • Taking responsibility for all parts of the data ecosystem, including data governance, monitoring and alerting, data validation, and documentation
  • Mentoring and upskilling other members of the team

Qualifications

  • Experience building data pipelines and/or ETL processes
  • Experience working in a Data Engineering role
  • Confident writing performant and readable code in Python, building upon the rich Python ecosystem wherever it makes sense to do so.
  • Good software engineering knowledge & skills: OO programming, design patterns, SOLID design principles and clean code
  • Confident writing SQL and good understanding of database design.
  • Experience working with web APIs.
  • Experience leading projects from a technical perspective
  • Knowledge of Docker, shell scripting, working with Linux
  • Experience with a cloud data warehouse
  • Experience in managing deployments and implementing observability and fault tolerance in cloud based infrastructure (i.e. CI/CD, Infrastructure as Code, container-based infrastructure, auto-scaling, monitoring and alerting)
  • Pro-active with a self-starter mindset; able to identify elegant solutions to difficult problems and able to suggest new and creative approaches.
  • Analytical, problem-solving and an effective communicator; leveraging technology and subject matter expertise in the business to accelerate our roadmap.
  • Able to lead technical discussions, shape the direction of the team, identify opportunities for innovation and improvement
  • Able to lead and deliver projects, ensuring stakeholders are kept up-to-date through regular communication
  • Willing to support the rest of the team when necessary, sharing knowledge and best practices, documenting design decisions, etc.
  • Willing to step outside your comfort zone to broaden your skills and learn new technologies.
  • Experience working with open source orchestration frameworks like Airflow or data analytics tools such as dbt
  • Experience with AWS services or those of another cloud provider
  • Experience with Snowflake
  • Good understanding of Agile

See more jobs at Nile Bits

Apply for this job

10d

Sr Big Data Engineer

Ingenia Agency, Mexico - Remote
Tags: Bachelor's degree, SQL, Oracle, Python

Ingenia Agency is hiring a Remote Sr Big Data Engineer

At Ingenia Agency we’re looking for a Data Engineer to join our team.

Responsible for creating and sustaining pipelines that allow for the analysis of data.

What will you be doing?

  • Conceptualizing and generating infrastructure that allows data to be accessed and analyzed in a global setting.
  • Load raw data from our SQL Servers, transform it, and save it into Google Cloud databases.
  • Detecting and correcting errors in data and writing scripts to clean such data up.
  • Work with scientists and clients in the business to gather requirements and ensure easy flow of data.
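The pipeline duties above (load raw data, clean it, save it to a target database) follow the classic extract-transform-load shape. A minimal sketch, purely for illustration: in the role the source would be SQL Server and the target a Google Cloud database, but here SQLite from Python's standard library stands in for both ends, and all table and column names are hypothetical.

```python
import sqlite3

# Hypothetical source and target; SQLite stands in for SQL Server and a
# Google Cloud database purely for illustration.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.executescript("""
    CREATE TABLE raw_sales (sku TEXT, qty INTEGER, price TEXT);
    INSERT INTO raw_sales VALUES ('A1', 2, ' 9.99 '), ('A1', NULL, '5.00');
""")
target.execute("CREATE TABLE clean_sales (sku TEXT, qty INTEGER, revenue REAL)")

# Extract raw rows, correct errors (missing quantities, unpadded strings),
# derive a revenue column, and load the result into the target.
for sku, qty, price in source.execute("SELECT sku, qty, price FROM raw_sales"):
    qty = qty if qty is not None else 0           # default missing quantity
    revenue = qty * float(price.strip())          # normalize text, derive value
    target.execute("INSERT INTO clean_sales VALUES (?, ?, ?)", (sku, qty, revenue))

total = target.execute("SELECT SUM(revenue) FROM clean_sales").fetchone()[0]
print(total)  # 19.98
```

In practice each stage would be a separate, monitored job, but the extract/clean/load split shown here is the shape the responsibilities describe.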

What are we looking for?

  • Age indifferent.
  • Bachelor's degree in Data Engineering, Big Data Analytics, Computer Engineering, or related field.
  • Master's degree in a relevant field is advantageous.
  • Proven experience as a Data Engineer.
  • Expert proficiency in Python, ETL and SQL.
  • Familiarity with Google Cloud / AWS / Azure or a suitable equivalent.
  • Excellent analytical and problem-solving skills.
  • A knack for independent and group work.
  • Knowledge of Oracle and MDM Hub.
  • Capacity to successfully manage a pipeline of duties with minimal supervision.
  • Advanced English.
  • Be Extraordinary!

What are we offering?

  • Competitive salary
  • Contract for a specific period of time
  • Law benefits:
    • 10 days of vacation upon completing the first year
    • IMSS
  • Additional benefits:
    • Contigo Membership (minor medical expenses insurance)
      • Personal accident policy.
      • Funeral assistance.
      • Dental and vision health assistance.
      • Emotional wellness.
      • Benefits & discounts.
      • Network of medical services and providers with a discount.
      • Medical network with preferential prices.
      • Roadside assistance at a preferential price, among others.
    • 3 special permits per year (half-day equivalent each) for personal errands or appointments
    • Half day off for birthdays
    • 5 days of additional vacation in case of marriage
    • 50% scholarship for language courses at the Anglo
    • Partial scholarship for graduate or master's studies at Tec. de Mty.
    • Agreement with a ticketing company for preferential rates on entertainment events



See more jobs at Ingenia Agency

Apply for this job

10d

Sr Data Engineer GCP

Ingenia Agency, Mexico - Remote
Tags: Bachelor's degree, 5 years of experience, 3 years of experience, Airflow, SQL, API, Python

Ingenia Agency is hiring a Remote Sr Data Engineer GCP


At Ingenia Agency we’re looking for a Sr Data Engineer to join our team.

Responsible for creating and sustaining pipelines that allow for the analysis of data.

What will you be doing?

  • Sound understanding of Google Cloud Platform.
  • Hands-on experience with BigQuery and Workflows or Composer.
  • Ability to reduce BigQuery costs by limiting the amount of data processed by queries.
  • Ability to speed up queries using denormalized data structures, with or without nested repeated fields.
  • Experience exploring and preparing data using BigQuery.
  • Experience delivering Python scripts, Dataflow components, SQL, Airflow DAGs, and Bash/Unix scripts.
  • Building and productionizing data pipelines using Dataflow.
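The cost and performance points above come down to BigQuery's on-demand pricing model: a query is billed by the bytes it scans, so referencing fewer columns and pruning partitions directly reduces cost. A stdlib-only toy model of that accounting (table contents and sizes are invented for illustration; this is not the BigQuery API):

```python
# Toy model of columnar, on-demand pricing: bytes scanned depend only on the
# columns referenced and the partitions kept, not on the number of rows returned.
# All names and sizes here are illustrative assumptions.

TABLE = {  # column name -> list of (partition_date, value) pairs
    "user_id":    [("2024-01-01", 1), ("2024-01-02", 2), ("2024-01-03", 3)],
    "event_json": [("2024-01-01", "x" * 100), ("2024-01-02", "y" * 100),
                   ("2024-01-03", "z" * 100)],
}

def bytes_scanned(columns, partitions):
    """Sum the encoded size of only the requested columns and partitions."""
    total = 0
    for col in columns:
        for part, value in TABLE[col]:
            if part in partitions:
                total += len(str(value).encode())
    return total

# SELECT * over all partitions vs. one column over one partition:
full = bytes_scanned(TABLE.keys(), {"2024-01-01", "2024-01-02", "2024-01-03"})
pruned = bytes_scanned(["user_id"], {"2024-01-03"})
print(full, pruned)  # the pruned query scans a small fraction of the bytes
```

The same logic is why `SELECT *` over an unpartitioned table is the most expensive shape a BigQuery query can take.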

What are we looking for?

  • Bachelor's degree in Data Engineering, Big Data Analytics, Computer Engineering, or related field.
  • Age indifferent.
  • 3 to 5 years of experience in GCP is required.
  • Excellent GCP, BigQuery, and SQL skills.
  • At least 3 years of experience with BigQuery and Dataflow, plus experience with Python and Google Cloud SDK/API scripting to create reusable frameworks.
  • Strong hands-on experience with PowerCenter.
  • In-depth understanding of architecture, table partitioning, clustering, table types, and best practices.
  • Proven experience as a Data Engineer, Software Developer, or similar.
  • Expert proficiency in Python, R, and SQL.
  • Candidates with Google Cloud certification will be preferred.
  • Excellent analytical and problem-solving skills.
  • A knack for independent and group work.
  • Capacity to successfully manage a pipeline of duties with minimal supervision.
  • Advanced English.
  • Be Extraordinary!

What are we offering?

  • Competitive salary
  • Law benefits:
    • 10 days of vacation after completing the first year
    • IMSS
  • Additional benefits:
    • Contigo Membership (Insurance of minor medical expenses)
      • Personal accident policy.
      • Funeral assistance.
      • Dental and visual health assistance.
      • Emotional wellness.
      • Benefits & discounts.
      • Network of medical services and providers with a discount.
      • Medical network with preferential prices.
      • Roadside assistance with preferential price, among others.
    • 3 special half-day permits per year for personal errands or appointments
    • Half day off for birthdays
    • 5 days of additional vacations in case of marriage
    • 50% scholarship for language courses at the Anglo
    • Partial scholarship for graduate or master's studies at Tec. de Monterrey
    • Agreement with a ticketing company for preferential rates on entertainment events

See more jobs at Ingenia Agency

Apply for this job

10d

Data Engineer

In All Media Inc · Argentina - Remote
DevOps · mongodb · api · docker · kubernetes · python · AWS · backend

In All Media Inc is hiring a Remote Data Engineer

Data Analyst Engineer

The objective of this project and role is to deliver the solutions your business partners need to grow the business, e.g. an application, an API, a rules engine, or a data pipeline. You know what it takes to deliver the best possible result within the given deadline.

Deliverables
Conversion tool: deliver recommendations within the tool, resolve technical debt, and maintain and add net-new recommendations. These recommendations can be measured directly by count and impact (for example, how many more features were adopted).
CS: simplify data on active points and deliver the best recommendations, with more net-new ones.


    Requirements:

      Technologies:

      • Backend: Python, Flask, SQLAlchemy, PyMySQL, MongoDB, internal SOA libraries, healthcheck tools, tracing tools
      • DevOps: GitLab CI/CD, Docker, Kubernetes, AWS

      We're seeking a talented Software Engineer Level 2 to join our dynamic team responsible for developing a suite of innovative tools. These tools are essential in automating and streamlining communication processes with our clients. If you are passionate about solving complex problems and improving user experiences, we want you on our team.
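The healthcheck tooling mentioned above is a small pattern in itself. A framework-free sketch using only Python's stdlib WSGI interface (the route and payload are assumptions for illustration, not this team's actual service):

```python
import json

def health_app(environ, start_response):
    """Minimal WSGI app exposing a /health endpoint for liveness probes."""
    if environ.get("PATH_INFO") == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = b"not found"
        start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [body]

# Exercise the app in-process, the way a WSGI server or test client would:
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

resp = b"".join(health_app({"PATH_INFO": "/health"}, fake_start_response))
print(captured["status"], resp)  # 200 OK b'{"status": "ok"}'
```

In a Flask service the same endpoint would be a one-line route; the value of a healthcheck is the contract (a fast, dependency-free 200), not the framework.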

    See more jobs at In All Media Inc

    Apply for this job

    10d

    Data Engineer

    Charlotte Tilbury · London, England, United Kingdom, Remote Hybrid
    terraform · airflow · sql · Design · git · python · AWS · javascript

    Charlotte Tilbury is hiring a Remote Data Engineer

    About Charlotte Tilbury Beauty

    Founded by British makeup artist and beauty entrepreneur Charlotte Tilbury MBE in 2013, Charlotte Tilbury Beauty has revolutionised the face of the global beauty industry by de-coding makeup applications for everyone, everywhere, with an easy-to-use, easy-to-choose, easy-to-gift range. Today, Charlotte Tilbury Beauty continues to break records across countries, channels, and categories and to scale at pace.

    Over the last 10 years, Charlotte Tilbury Beauty has experienced exceptional growth and is one of the most talked about brands in the beauty industry and beyond. It has become a global sensation across 50 markets (and growing), with over 2,300 employees globally who are part of the Dream Team making the magic happen.

    Today, Charlotte Tilbury Beauty is a truly global business, delivering market-leading growth, innovative retail and product launches fuelled by industry-leading tech — all with an internal culture of embracing challenges, disruptive thinking, winning together, and sharing the magic. The energy behind the brand is infectious, and as we grow, we are always looking for extraordinary talent who want to be part of our success and help drive our limitless ambitions.

    The Role

     

    Data is at the heart of our strategy to engage and delight our customers, and we are determined to harness its power to go as far as we can to deliver a euphoric, personalised experience that they'll love. 

     

    We're seeking a skilled and experienced Data Engineer to join our Data function to join our team of data engineers in the design, build & maintenance of the pipelines that support this ambition. The ideal candidate will not only be able to see many different routes to engineering success, but also to work collaboratively with Engineers, Analysts, Scientists & stakeholders to design & build robust data products to meet business requirements.

     

    Our stack is primarily GCP, with Fivetran handling change data capture, Google Cloud Functions for file ingestion, Dataform & Composer (Airflow) for orchestration, GA & Snowplow for event tracking and Looker as our BI Platform. We use Terraform Cloud to manage our infrastructure programmatically as code.
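File ingestion with a serverless function, as in the stack above, usually reduces to a small parse-and-type step. A framework-free sketch of just that step (the CSV layout and column names are invented for illustration; the real trigger would be a GCS event, omitted here):

```python
import csv
import io

def ingest_csv(payload: bytes) -> list:
    """Parse one landed file into typed rows -- the core job of a
    file-ingestion function before handoff to the warehouse.
    The sku/qty columns are illustrative assumptions."""
    reader = csv.DictReader(io.StringIO(payload.decode("utf-8")))
    return [{"sku": row["sku"], "qty": int(row["qty"])} for row in reader]

rows = ingest_csv(b"sku,qty\nlipstick-01,3\nmascara-02,5\n")
print(rows)  # [{'sku': 'lipstick-01', 'qty': 3}, {'sku': 'mascara-02', 'qty': 5}]
```

Keeping the parse step a pure function like this makes it unit-testable independently of the cloud trigger that invokes it.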

     

    Reporting Relationships

     

    This role will report to the Lead Data Engineer.

     

    About you and attributes we're looking for



    • Extensive experience with cloud data warehouses and analytics query engines such as BigQuery, Redshift or Snowflake, and a good understanding of cloud technologies in general. 
    • Proficient in SQL, Python and Git 
    • Prior experience with HCL (Terraform configuration language), YAML, JavaScript, CLIs and Bash.
    • Prior experience with serverless tooling e.g. Google Cloud Functions, AWS Lambdas, etc.
    • Familiarity with tools such as Fivetran and Dataform/DBT 
    • Bachelor's or Master's degree in Computer Science, Data Science, or related field 
    • Collaborative mindset and a passion for sharing ideas & knowledge
    • Demonstrable experience developing high quality code in the retail sector is a bonus

    At Charlotte Tilbury Beauty, our mission is to empower everybody in the world to be the most beautiful version of themselves. We celebrate and support this by encouraging and hiring people with diverse backgrounds, cultures, voices, beliefs, and perspectives into our growing global workforce. By doing so, we better serve our communities, customers, employees - and the candidates that take part in our recruitment process.

    If you want to learn more about life at Charlotte Tilbury Beauty please follow our LinkedIn page!

    See more jobs at Charlotte Tilbury

    Apply for this job

    10d

    Data Engineer

    Legalist · Remote
    agile · nosql · sql · Design · c++ · docker · kubernetes · AWS

    Legalist is hiring a Remote Data Engineer

    Intro description:

    Legalist is an institutional alternative asset management firm. Founded in 2016 and incubated at Y Combinator, the firm uses data-driven technology to invest in credit assets at scale. We are always looking for talented people to join our team.

    As a highly collaborative organization, our data engineers work cross-functionally with software engineering, data science, and product management to optimize growth and strategy of our data pipeline. In this position, you will be joining the data engineering team in an effort to take our data pipeline to the next level.

    Where you come in:

    • Design and develop scalable data pipelines to collect, process, and analyze large volumes of data efficiently.
    • Collaborate with cross-functional teams including data scientists, software engineers, and product managers to understand data requirements and deliver solutions that meet business needs.
    • Develop ELT processes to transform raw data into actionable insights, leveraging tools and frameworks such as Airbyte, BigQuery, Dagster, DBT or similar technologies.
    • Participate in agile development processes, including sprint planning, daily stand-ups, and retrospective meetings, to deliver iterative improvements and drive continuous innovation.
    • Apply best practices in data modeling and schema design to ensure data integrity, consistency, and efficiency.
    • Continuously monitor and optimize data pipelines and systems for performance, availability, scalability, and cost-effectiveness.
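The ELT bullet above (load raw data first, then transform inside the warehouse) can be sketched end-to-end with stdlib sqlite3 standing in for the warehouse; the table and column names are invented for illustration, not Legalist's actual schema:

```python
import sqlite3

# Load raw rows first (the "EL"), then transform with SQL inside the
# database (the "T"), mirroring an Airbyte/Dagster/DBT-style flow.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_cases (case_id TEXT, court TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_cases VALUES (?, ?, ?)",
    [("c1", "ND Cal", 100.0), ("c2", "ND Cal", 250.0), ("c3", "SDNY", 50.0)],
)

# Transform step: derive an aggregate model from the raw layer.
conn.execute("""
    CREATE TABLE court_totals AS
    SELECT court, COUNT(*) AS n_cases, SUM(amount) AS total_amount
    FROM raw_cases GROUP BY court
""")
rows = conn.execute("SELECT * FROM court_totals ORDER BY court").fetchall()
print(rows)  # [('ND Cal', 2, 350.0), ('SDNY', 1, 50.0)]
```

Tools like DBT essentially manage many such SQL transform steps as versioned, dependency-ordered models.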

    What you’ll be bringing to the team:

    • Bachelor’s degree (BA or BS) or equivalent.
    • A minimum of 2 years of work experience in data engineering or similar role.
    • Advanced SQL knowledge and experience working with a variety of databases (SQL, NoSQL, Graph, Multi-model).
    • A minimum of 2 years of professional experience with ETL/ELT, data modeling and Python.
    • Familiarity with cloud environments like GCP, AWS, as well as cloud solutions like Kubernetes, Docker, BigQuery, etc.
    • You have a pragmatic, data-driven mindset and are not dogmatic or overly idealistic about technology choices and trade-offs.
    • You have an aptitude for learning new things quickly and have the confidence and humility to ask clarifying questions.

    Even better if you have, but not necessary:

    • Experience with one or more of the following: data processing automation, data quality, data warehousing, data governance, business intelligence, data visualization.
    • Experience working with TB scale data.

    See more jobs at Legalist

    Apply for this job

    13d

    Senior ML & Data Engineer

    Xe · Brazil, Remote
    ML · DevOps · Design · mobile · api · docker · python · AWS · backend

    Xe is hiring a Remote Senior ML & Data Engineer

    At XE, we live currencies. We provide a comprehensive range of currency services and products, including our Currency Converter, Market Analysis, Currency Data API and quick, easy, secure Money Transfers for individuals and businesses. We leverage technology to deliver these services through our website, mobile apps and by phone. Last year, we helped nearly 300 million people access information about the currencies that matter to them, and over 150,000 people used us to send money overseas. Thousands of businesses relied on us for information about the currency markets, advice on managing their foreign exchange risk or trusted us with their business-critical international payments. At XE, we share the belief that behind every currency exchange, query or transaction is a person or business trying to accomplish something important, so we work together to develop new and better currency services that put our customers first. We are proud to be part of Euronet Worldwide (Nasdaq: EEFT), a global leader in processing secure electronic financial transactions. Under Euronet, we have brought together our key brands – XE, HiFX and Currency Online– to become the business that XE is today.

    The Senior ML and Data Engineer will be responsible for designing, building, and maintaining the infrastructure, platform, and processes required to successfully deploy and manage machine learning models in a production environment. This includes tasks such as developing Entity Resolution Solutions, building production features, and integrating ML solutions into production systems. 

    This role will work closely with data scientists and software engineers to ensure that machine learning models can seamlessly integrate into existing systems and processes. The role will also be responsible for identifying and implementing best practices for managing and optimizing machine learning models in production. 

    The ideal candidate for this role will have extensive experience in both software engineering and machine learning, as well as a deep understanding of the challenges and best practices involved in deploying machine learning models in production. Experience working with cloud computing platforms such as AWS or GCP is a plus. 

     

    What You'll Do

    • Build and maintain production-level real-time and batch MLOps pipelines. 
    • Deploy backend and real-time machine learning features and models. 
    • Design and develop multiple ML microservices and APIs. 
    • Monitor and optimize the performance of machine learning systems in production. 
    • Work closely with data scientists and software engineers to ensure the successful integration of machine learning microservices into existing systems and processes. 
    • Mentor junior engineers and provide technical leadership within the team. 
    • Stay updated with the latest advancements in machine learning and data engineering technologies and methodologies. 
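The entity-resolution work mentioned above reduces, at its simplest, to normalizing identifiers and fuzzy-matching them. A stdlib-only sketch (the normalization and threshold are assumptions for illustration, not Xe's production logic):

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Crude canonicalization: lowercase and keep only alphanumerics."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def same_entity(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two name strings as one entity if similar enough after normalizing."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(same_entity("ACME Corp.", "Acme Corp"))   # True: same entity
print(same_entity("ACME Corp.", "Globex Inc"))  # False: different entities
```

Production systems layer blocking keys, multiple similarity features, and a trained classifier on top of this basic shape, but the match-score-against-threshold core is the same.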

    Who You Are

    • Degree in Computer Science, Software Engineering, or a related discipline. 
    • Extensive experience in developing and maintaining API services in a cloud environment. 
    • Strong object and service-oriented programming skills in Python to write efficient, scalable code. 
    • Knowledge of modern containerization techniques - Docker, Docker Compose. 
    • Experience with relational and unstructured databases and data lakes. 
    • An understanding of business goals and how data policies can affect them. 
    • Effective communication and collaboration skills. 
    • A strong understanding of the concepts associated with privacy and data security. 
    • Proven experience in mentoring and leading engineering teams. 
    • Familiarity with CI/CD pipelines and DevOps practices. 

    Perks & Benefits

    • Annual salary increase review
    • End of the year bonus (Christmas bonus)
    • ESPP (Employee Stock Purchase Plan)
    • 30 days vacation per year
    • Insurance guaranteed for employees (Health, Oncological, Dental, Life Insurance)
    • No fee when using Ria service/wire transfers

    We want Xe to be a great place to work and to ensure that our communities are represented across our workforce.  A vital part of this is ensuring we are a truly inclusive organisation that encourages diversity in all respects. 

    At Xe we are committed to making our recruitment practices barrier-free and as accessible as possible for everyone.  This includes making adjustments or changes for disabled people, neurodiverse people or people with long-term health conditions. If you would like us to do anything differently during the application, interview or assessment process, including providing information in an alternative format, please contact us on recruitment@xe.com 

    See more jobs at Xe

    Apply for this job

    13d

    Senior ML & Data Engineer

    Xe · Chile, Remote
    ML · DevOps · Design · mobile · api · docker · python · AWS · backend

    Xe is hiring a Remote Senior ML & Data Engineer

    At XE, we live currencies. We provide a comprehensive range of currency services and products, including our Currency Converter, Market Analysis, Currency Data API and quick, easy, secure Money Transfers for individuals and businesses. We leverage technology to deliver these services through our website, mobile apps and by phone. Last year, we helped nearly 300 million people access information about the currencies that matter to them, and over 150,000 people used us to send money overseas. Thousands of businesses relied on us for information about the currency markets, advice on managing their foreign exchange risk or trusted us with their business-critical international payments. At XE, we share the belief that behind every currency exchange, query or transaction is a person or business trying to accomplish something important, so we work together to develop new and better currency services that put our customers first. We are proud to be part of Euronet Worldwide (Nasdaq: EEFT), a global leader in processing secure electronic financial transactions. Under Euronet, we have brought together our key brands – XE, HiFX and Currency Online– to become the business that XE is today.

    The Senior ML and Data Engineer will be responsible for designing, building, and maintaining the infrastructure, platform, and processes required to successfully deploy and manage machine learning models in a production environment. This includes tasks such as developing Entity Resolution Solutions, building production features, and integrating ML solutions into production systems. 

    This role will work closely with data scientists and software engineers to ensure that machine learning models can seamlessly integrate into existing systems and processes. The role will also be responsible for identifying and implementing best practices for managing and optimizing machine learning models in production. 

    The ideal candidate for this role will have extensive experience in both software engineering and machine learning, as well as a deep understanding of the challenges and best practices involved in deploying machine learning models in production. Experience working with cloud computing platforms such as AWS or GCP is a plus. 

     

    What You'll Do

    • Build and maintain production-level real-time and batch MLOps pipelines. 
    • Deploy backend and real-time machine learning features and models. 
    • Design and develop multiple ML microservices and APIs. 
    • Monitor and optimize the performance of machine learning systems in production. 
    • Work closely with data scientists and software engineers to ensure the successful integration of machine learning microservices into existing systems and processes. 
    • Mentor junior engineers and provide technical leadership within the team. 
    • Stay updated with the latest advancements in machine learning and data engineering technologies and methodologies. 

    Who You Are

    • Degree in Computer Science, Software Engineering, or a related discipline. 
    • Extensive experience in developing and maintaining API services in a cloud environment. 
    • Strong object and service-oriented programming skills in Python to write efficient, scalable code. 
    • Knowledge of modern containerization techniques - Docker, Docker Compose. 
    • Experience with relational and unstructured databases and data lakes. 
    • An understanding of business goals and how data policies can affect them. 
    • Effective communication and collaboration skills. 
    • A strong understanding of the concepts associated with privacy and data security. 
    • Proven experience in mentoring and leading engineering teams. 
    • Familiarity with CI/CD pipelines and DevOps practices. 

    Perks & Benefits

    • Annual salary increase review
    • End of the year bonus (Christmas bonus)
    • ESPP (Employee Stock Purchase Plan)
    • Paid day off for birthday
    • 15 days vacation per year
    • Insurance guaranteed for employees (Health, Oncological, Dental, Life)
    • No fee when using Ria service/wire transfers

    We want Xe to be a great place to work and to ensure that our communities are represented across our workforce.  A vital part of this is ensuring we are a truly inclusive organisation that encourages diversity in all respects. 

    At Xe we are committed to making our recruitment practices barrier-free and as accessible as possible for everyone.  This includes making adjustments or changes for disabled people, neurodiverse people or people with long-term health conditions. If you would like us to do anything differently during the application, interview or assessment process, including providing information in an alternative format, please contact us on recruitment@xe.com 

    See more jobs at Xe

    Apply for this job

    13d

    Senior ML & Data Engineer

    Xe · El Salvador, Remote
    ML · DevOps · Design · mobile · api · docker · python · AWS · backend

    Xe is hiring a Remote Senior ML & Data Engineer

    At XE, we live currencies. We provide a comprehensive range of currency services and products, including our Currency Converter, Market Analysis, Currency Data API and quick, easy, secure Money Transfers for individuals and businesses. We leverage technology to deliver these services through our website, mobile apps and by phone. Last year, we helped nearly 300 million people access information about the currencies that matter to them, and over 150,000 people used us to send money overseas. Thousands of businesses relied on us for information about the currency markets, advice on managing their foreign exchange risk or trusted us with their business-critical international payments. At XE, we share the belief that behind every currency exchange, query or transaction is a person or business trying to accomplish something important, so we work together to develop new and better currency services that put our customers first. We are proud to be part of Euronet Worldwide (Nasdaq: EEFT), a global leader in processing secure electronic financial transactions. Under Euronet, we have brought together our key brands – XE, HiFX and Currency Online– to become the business that XE is today.

    The Senior ML and Data Engineer will be responsible for designing, building, and maintaining the infrastructure, platform, and processes required to successfully deploy and manage machine learning models in a production environment. This includes tasks such as developing Entity Resolution Solutions, building production features, and integrating ML solutions into production systems. 

    This role will work closely with data scientists and software engineers to ensure that machine learning models can seamlessly integrate into existing systems and processes. The role will also be responsible for identifying and implementing best practices for managing and optimizing machine learning models in production. 

    The ideal candidate for this role will have extensive experience in both software engineering and machine learning, as well as a deep understanding of the challenges and best practices involved in deploying machine learning models in production. Experience working with cloud computing platforms such as AWS or GCP is a plus. 

     

    What You'll Do

    • Build and maintain production-level real-time and batch MLOps pipelines. 
    • Deploy backend and real-time machine learning features and models. 
    • Design and develop multiple ML microservices and APIs. 
    • Monitor and optimize the performance of machine learning systems in production. 
    • Work closely with data scientists and software engineers to ensure the successful integration of machine learning microservices into existing systems and processes. 
    • Mentor junior engineers and provide technical leadership within the team. 
    • Stay updated with the latest advancements in machine learning and data engineering technologies and methodologies. 

    Who You Are

    • Degree in Computer Science, Software Engineering, or a related discipline. 
    • Extensive experience in developing and maintaining API services in a cloud environment. 
    • Strong object and service-oriented programming skills in Python to write efficient, scalable code. 
    • Knowledge of modern containerization techniques - Docker, Docker Compose. 
    • Experience with relational and unstructured databases and data lakes. 
    • An understanding of business goals and how data policies can affect them. 
    • Effective communication and collaboration skills. 
    • A strong understanding of the concepts associated with privacy and data security. 
    • Proven experience in mentoring and leading engineering teams. 
    • Familiarity with CI/CD pipelines and DevOps practices. 

    We want Xe to be a great place to work and to ensure that our communities are represented across our workforce.  A vital part of this is ensuring we are a truly inclusive organisation that encourages diversity in all respects. 

    At Xe we are committed to making our recruitment practices barrier-free and as accessible as possible for everyone.  This includes making adjustments or changes for disabled people, neurodiverse people or people with long-term health conditions. If you would like us to do anything differently during the application, interview or assessment process, including providing information in an alternative format, please contact us on recruitment@xe.com 

    See more jobs at Xe

    Apply for this job

    Multi Media is hiring a Remote Lead Data Platform Engineer

    About us: Multi Media LLC is a leader in digital innovation, focusing on creating modern products for the content creator community. Our main platform, Chaturbate, is a key player in the adult entertainment industry, bringing billions of people together worldwide. We aim to make Chaturbate the best place for users and creators to interact and connect, offering a safe, creative, and engaging space for everyone.

    We’re looking for a Lead Data Platform Engineer to help us scale a data platform that processes petabytes of data to support our analytics, data science, and machine learning teams. In this role, you’ll lead the team that handles the entire ETL, data warehousing, and data governance processes, and you will also focus on architectural decisions for greenfield projects. In addition to that, you will coordinate closely with the head of engineering, head of product, machine learning, and analytics teams to build roadmaps, prioritize work, and drive team success.

    In particular, you will: 

    • Lead and provide ongoing mentorship and support to the data platform team; build a culture that prepares them for high-impact contributions and encourages their professional growth.
    • Administer and manage Snowflake and other data infrastructure.
    • Serve as a subject-matter expert and decision-maker for data security, governance, and performance. 
    • Collaborate with analytics and machine learning teams to ensure they have the tools and infrastructure to deliver game-changing data products. 
    • Manage team’s daily operations. 

    About you:

    • Proven technical leadership or management experience in the areas of data engineering, analytics engineering, or data infrastructure.
    • Deep expertise in ETL architecture and infrastructure tools. 
    • Excellent knowledge of SQL and data modeling tools. 
    • Excellent knowledge of the Snowflake platform. 

    Nice to have: 

    • Experience with DBT (data build tool), GCP, and AWS.
    • Expertise in DataOps and security. 

    What you’ll get:

    • Fair and competitive base salary
    • Fully Remote Optional
    • Health, Vision, Dental, and Life Insurance for you and any dependents, with policy premiums covered by the Company
    • Long & Short term disability insurance
    • Unlimited PTO
    • Annual Year-End Company Closure
    • Optional 401k with 5% matching
    • 12 Paid Holidays
    • Paid Lunches in-office, or if Remote, a $125/week stipend via Sharebite
    • Employee Assistance and Employee Recognition Programs
    • And much more!

    The Base Salary range for this position is $180,000 to $215,000 USD. This range reflects base salary only and does not include additional compensation or benefits. The range displayed reflects the minimum and maximum range for a new hire across the US for the posted position. A candidate’s specific pay will be determined on a case-by-case basis and may vary based on the candidate’s job-related skills, relevant education, training, experience, certifications, and abilities of the candidate, as well as other factors unique to each candidate.

    Multi Media, LLC is an equal opportunity employer and strives for diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We encourage people from underrepresented groups to apply!

    See more jobs at Multi Media

    Apply for this job

    15d

    Senior Data Engineer

    ecobee · Remote in Canada
    sql · Design · python

    ecobee is hiring a Remote Senior Data Engineer

    Hi, we are ecobee. 

    ecobee introduced the world’s first smart Wi-Fi thermostat to help millions of consumers save money, conserve energy, and bring home automation into their lives. That was just the beginning. We continue our pursuit to create technology that brings peace of mind into the home and allows people to focus on the moments that matter most. We take pride in making a meaningful difference to the environment, all while being part of the exciting, connected home revolution. 

    In 2021, ecobee became a subsidiary of Generac Power Systems. Generac introduced the first affordable backup generator and later created the automatic home standby generator category. The company is committed to sustainable, cleaner energy products poised to revolutionize the 21st century electrical grid. Together, we take pride in making a meaningful difference to the environment.

    Why we love to do what we do: 

    We’re helping build the world of tomorrow with solutions that improve everyday life while making a positive impact on the planet. Our products and services work in harmony to provide comfort, efficiency, and peace of mind for millions of homes and businesses. While we’re proud of what we’ve done so far, there’s still a lot we can do—and you can be part of it.  

    Join our extraordinary team. 

    We're a rapidly growing global tech company headquartered in Canada, in the heart of downtown Toronto, with a satellite office in Leeds, UK (and remote ecopeeps in the US). We get to work with some of North America and UK's leading professionals. Our colleagues are proud to bring their authentic selves to work, confident that what we do is grounded in a greater purpose. We’re always looking for curious, talented, and passionate people to join our team.

    This role is open to being 100% remote within Canada while our home office is located in Toronto, Ontario. You may be required to travel to Toronto once per quarter for team and/or company events.

    Who You’ll Be Joining: 

    You will be part of the dynamic data engineering and machine learning services group at ecobee, focused on leveraging data to enhance the smart home experience for customers. This team is responsible for building and maintaining the data infrastructure and machine learning capabilities that power intelligent features across ecobee's product ecosystem, such as integrated AI services, energy optimization, home automation, personalized climate control, and predictive maintenance.

    How You’ll Make an Impact:   

    • Design, build, and maintain scalable and efficient ETL/ELT pipelines for both batch and real-time data ingestion and transformation.
    • Implement data extraction and processing solutions to support analytics, machine learning, and operational use cases.
    • Integrate diverse data sources, including IoT device data, third-party APIs, and internal systems, into centralized data repositories.
    • Develop and maintain data warehousing solutions and ensure data is structured and available for downstream analytics.
    • Monitor and optimize data workflows and infrastructure to ensure high performance and reliability.
    • Implement monitoring, alerting, and logging for data pipelines to proactively identify and resolve issues.
    • Collaborate with data scientists, analysts, product managers, and other engineering teams to understand data requirements and deliver high-quality data solutions.
    • Translate business requirements into technical specifications and provide guidance on data engineering best practices.
    • Implement data quality checks, validation, and cleansing procedures to ensure data integrity and accuracy.
    • Create and maintain comprehensive documentation for data pipelines, architectures, and processes.
    • Share knowledge and best practices with the team, and contribute to the growth and development of the data engineering community within the organization.
    • Architect and implement sophisticated data pipelines that handle massive IoT data streams, ensuring data quality, consistency, and low-latency processing.
    • Introduce frameworks and best practices for feature engineering, data versioning, and experimentation in collaboration with machine learning teams.
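
    Purely as an illustration of the kind of batch pipeline work the responsibilities above describe, here is a minimal, hypothetical extract-transform-load step with a data-quality check, written in plain Python (standard library only). Every name in it — the sample CSV, `validate`, `transform_reading` style helpers — is invented for this sketch and is not ecobee's actual code or schema.

    ```python
    # Minimal hypothetical batch ETL sketch: extract -> validate -> transform -> load.
    import csv
    import io

    # Toy raw input standing in for an ingested IoT extract (illustrative only).
    RAW_CSV = """device_id,temp_c,ts
    tstat-001,21.5,2024-01-01T00:00:00
    tstat-002,,2024-01-01T00:00:00
    tstat-003,19.0,2024-01-01T00:05:00
    """

    def extract(source: str) -> list[dict]:
        """Extract: parse the raw CSV into dict rows."""
        return list(csv.DictReader(io.StringIO(source)))

    def validate(rows: list[dict]) -> tuple[list[dict], list[dict]]:
        """Data-quality check: split rows into valid and rejected (missing temp)."""
        valid, rejected = [], []
        for row in rows:
            (valid if row["temp_c"] else rejected).append(row)
        return valid, rejected

    def transform(rows: list[dict]) -> list[dict]:
        """Transform: derive Fahrenheit readings for downstream consumers."""
        return [
            {**row, "temp_f": round(float(row["temp_c"]) * 9 / 5 + 32, 1)}
            for row in rows
        ]

    def load(rows: list[dict], warehouse: list[dict]) -> None:
        """Load: append transformed rows to an in-memory stand-in for a warehouse table."""
        warehouse.extend(rows)

    warehouse: list[dict] = []
    valid, rejected = validate(extract(RAW_CSV))
    load(transform(valid), warehouse)
    print(len(warehouse), len(rejected))  # 2 rows loaded, 1 rejected
    ```

    In a production setting the same shape would typically be orchestrated as tasks in a scheduler such as Airflow, with the rejected rows routed to a quarantine table and surfaced through alerting rather than silently dropped.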

    What You’ll Bring to the Table:    

    • Proficiency in building data pipelines using Python, SQL, and tools like Apache Spark, Apache Kafka, and Apache Airflow.
    • Experience with cloud-based data platforms (GCP preferred), including services like BigQuery, Bigtable, and Dataflow
    • Familiarity working with SQL-based operational databases
    • Familiarity with data processing and storage solutions tailored for machine learning workflows.
    • Good understanding of the machine learning lifecycle and experience in supporting data preparation, feature engineering, and model deployment processes.
    • Experience working with machine learning frameworks and libraries is a plus.
    • Strong experience in data modeling, schema design, and optimization for data warehousing and data lake solutions.
    • Experience with designing data solutions that support both batch and real-time processing requirements.
    • Excellent communication skills, with the ability to work effectively in a collaborative environment and convey technical concepts to non-technical stakeholders.
    • Proven track record of working in cross-functional teams and driving alignment between technical and business goals.
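
    To make the "data modeling, schema design, and optimization for data warehousing" bullet concrete, here is a tiny hypothetical star-schema sketch using Python's built-in `sqlite3`: one fact table at (device, timestamp) grain joined to a device dimension. Table names, columns, and data are all illustrative assumptions, not ecobee's schema.

    ```python
    # Hypothetical star-schema sketch: a fact table keyed to a dimension table.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Dimension: descriptive attributes of each device.
    cur.execute("""
    CREATE TABLE dim_device (
        device_key INTEGER PRIMARY KEY,
        model TEXT,
        region TEXT
    )""")

    # Fact: measurements at (device, timestamp) grain, keyed to the dimension.
    cur.execute("""
    CREATE TABLE fact_reading (
        device_key INTEGER REFERENCES dim_device(device_key),
        ts TEXT,
        temp_c REAL
    )""")

    cur.executemany("INSERT INTO dim_device VALUES (?, ?, ?)",
                    [(1, "smart-tstat", "CA"), (2, "smart-tstat", "UK")])
    cur.executemany("INSERT INTO fact_reading VALUES (?, ?, ?)",
                    [(1, "2024-01-01T00:00", 21.0),
                     (1, "2024-01-01T01:00", 22.0),
                     (2, "2024-01-01T00:00", 18.0)])

    # Typical warehouse query: aggregate facts by a dimension attribute.
    rows = cur.execute("""
    SELECT d.region, AVG(f.temp_c)
    FROM fact_reading f JOIN dim_device d USING (device_key)
    GROUP BY d.region ORDER BY d.region
    """).fetchall()
    print(rows)  # [('CA', 21.5), ('UK', 18.0)]
    ```

    The same dimensional pattern carries over to columnar warehouses like BigQuery; the design choice is keeping facts narrow and numeric while descriptive attributes live in dimensions.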

    Just so you know: The hired candidate will be required to complete a background check.

    What happens after you apply:   

    Application review. It will happen. By an actual person in Talent Acquisition. We get upwards of 100 applications for some roles, so review can take a few days, but every applicant can expect a note regarding their application status.  

    Interview Process (4 stages):  

    • A 30-minute phone call with a member of Talent Acquisition  
    • A 45-minute call with the Director of Data Engineering and Machine Learning Services focused on behavioural, situational, and culture-fit questions
    • A 90-minute virtual interview with a cross-functional group of engineers. This will be a technical interview in which you are presented with a case study to solve a real-life problem, testing the design and coding skills necessary to succeed in this position.
    • The final interview will be a 45-minute interview with leadership.

    With ecobee, you’ll have the opportunity to: 

    • Be part of something big: Get to work in a fresh, dynamic, and ever-growing industry.  
    • Make a difference for the environment: Make a sustainable impact while on your daily job, and after it through programs like ecobee acts. 
    • Expand your career: Learn with our in-house learning enablement team, and enjoy our generous professional learning budget. 
    • Put people first: Benefit from competitive salaries, health benefits, and a progressive Parental Top-Up Program (75% top-up or five bonus days off). 
    • Play a part in an exceptional culture: Enjoy a fun and casual workplace with an open-concept office, located at Queens Quay W & York St. ecobee Leeds is based at our riverside office on The Calls. 
    • Celebrate diversity: Be part of a truly welcoming workplace. We offer a mentorship program and bias training.  

    Are you interested? Let's make it work. 

    Our people are empowered to take ownership of their schedules with workflows that allow for flexible hours. Depending on your job, you have the option of an office-based, fully remote, or hybrid work environment. New team members working remotely will have all necessary equipment provided and shipped to them, and we conduct our interviews and onboarding sessions primarily through video.

    We’re committed to inclusion and accommodation. 

    ecobee believes that openness and diversity make us better. We welcome applicants from all backgrounds to apply regardless of race, gender, age, religion, identity, or any other aspect which makes them unique. Accommodations can be made upon request for candidates taking part in all aspects of the selection process. Our recruitment team is happy to answer any questions candidates may have about virtual interviewing, onboarding, and future work locations.

    We’re up to incredible things. Come and be part of them. 

    Discover our products and services and learn more about who we are.  

    Ready to join ecobee? View current openings. 

    Please note, ecobee does not accept unsolicited resumes.  

    Apply for this job

    15d

    Staff Data Engineer

    ecobeeRemote in Canada
    DesignazureAWS

    ecobee is hiring a Remote Staff Data Engineer

    Hi, we are ecobee. 

    ecobee introduced the world’s first smart Wi-Fi thermostat to help millions of consumers save money, conserve energy, and bring home automation into their lives. That was just the beginning. We continue our pursuit to create technology that brings peace of mind into the home and allows people to focus on the moments that matter most. We take pride in making a meaningful difference to the environment, all while being part of the exciting, connected home revolution. 

    In 2021, ecobee became a subsidiary of Generac Power Systems. Generac introduced the first affordable backup generator and later created the category of the automatic home standby generator. The company is committed to sustainable, cleaner energy products poised to revolutionize the 21st-century electrical grid. Together, we take pride in making a meaningful difference to the environment.

    Why we love to do what we do: 

    We’re helping build the world of tomorrow with solutions that improve everyday life while making a positive impact on the planet. Our products and services work in harmony to provide comfort, efficiency, and peace of mind for millions of homes and businesses. While we’re proud of what we’ve done so far, there’s still a lot we can do—and you can be part of it.  

    Join our extraordinary team. 

    We're a rapidly growing global tech company headquartered in Canada, in the heart of downtown Toronto, with a satellite office in Leeds, UK (and remote ecopeeps in the US). We get to work with some of North America's and the UK's leading professionals. Our colleagues are proud to bring their authentic selves to work, confident that what we do is grounded in a greater purpose. We’re always looking for curious, talented, and passionate people to join our team.

    This role is open to being 100% remote within Canada while our home office is located in Toronto, Ontario. You may be required to travel to Toronto once per quarter for team and/or company events.

    Who You’ll Be Joining:

    You will be part of the dynamic Data Engineering and Machine Learning Services group at ecobee, focused on leveraging data to enhance the smart home experience for customers. This team is responsible for building and maintaining the data infrastructure and machine learning capabilities that power intelligent features across ecobee’s product ecosystem, such as integrated AI services, energy optimization, home automation, personalized climate control, and predictive maintenance.

    How You’ll Make an Impact:   

    • Cross-Domain Problem Solving: Lead the design and implementation of scalable data pipelines and systems for complex problems that require detailed understanding across multiple domains (e.g., data, machine learning, IoT, cloud infrastructure). These problems will often come with high levels of ambiguity, incomplete data, and evolving requirements.
    • Architectural Impact: Contribute to ecobee’s system architecture with designs that have been battle-tested, resulting in significant, long-lasting impact within a specific domain. Solutions are expected to integrate elegantly with ecobee’s broader enterprise architecture and align with company-wide standards.
    • Enterprise-Wide Architecture: Start to think beyond individual components or domains, considering ecobee’s broader architectural strategy. Collaborate with principal engineers and directors to ensure designs complement the company’s vision.
    • Technical Proposals: Propose technical solutions and strategies that have a significant impact on ecobee’s data ecosystem. These solutions should drive improvement in the scalability, performance, and resilience of the company’s products and services.
    • Component Ownership: Take end-to-end ownership of full components within your domain of expertise, ensuring that their design, implementation, testing, deployment, and operations meet high standards. These components will likely interact with systems in other domains, requiring careful consideration of cross-team dependencies.
    • System Operations & SLAs: Define and track SLAs for the components you own, ensuring they meet operational excellence standards and contribute to the system’s overall reliability.
    • Maintainability & Scalability: Systematically consider maintainability in designs and implementations, with a focus on ensuring systems can scale to support ecobee’s growing data needs.
    • Mentor & Lead: Actively mentor engineers across the organization, helping them achieve concrete technical and professional goals. Drive knowledge-sharing initiatives through code reviews, technical talks, and training sessions.
    • Cross-Team Collaboration: Facilitate and guide technical discussions across squads, ensuring decisions are aligned with ecobee’s strategic goals. You’ll help foster an inclusive environment where all team members feel heard and respected.
    • Technical Expertise Development: Participate in “bar-raiser” groups that focus on elevating engineering standards across ecobee, including leading post-mortem reviews, design sessions, and code reviews.
    • Challenging Best Practices: Continuously review existing processes, best practices, and rituals across ecobee’s engineering organization. Propose and implement improvements that enhance efficiency, collaboration, and quality.
    • Delivery Metrics & Quality: Educate teams on key software delivery metrics and help track progress. Ensure that the team’s testing approaches align with accepted frameworks, and work to close gaps in quality metrics.
    • Documentation & Knowledge Sharing: Foster a culture of documentation and transparency within the team and across stakeholders, ensuring that key processes and decisions are well-documented and accessible.
    • Forward-Thinking Design: Anticipate future data challenges, such as scalability and security concerns, and propose strategies to avoid roadblocks. You’ll look for opportunities to improve existing solutions and identify novel approaches that haven’t been tried before.
    • Technology Evaluation: Stay ahead of industry trends by evaluating and recommending new technologies that align with ecobee’s goals in data engineering, machine learning, and IoT.
    • Domain-Wide Impact: Your work will have a measurable impact across multiple teams within the Data Engineering & Machine Learning Services group. This impact will often have significant customer implications, driving improvements in performance, scalability, and product capabilities.
    • Economic Thinking & Risk Management: Drive a culture of thoughtful decision-making, balancing technical innovation with practical constraints like time, cost, and risk. Work closely with partner teams to prioritize capabilities that will deliver the highest business impact.
    • Proactive Issue Resolution: Anticipate blockers and delays in projects before they require escalation. Proactively work to resolve these challenges by engaging with stakeholders and partner teams.
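
    The "System Operations & SLAs" bullet above can be made concrete with a small sketch. This is a hypothetical example of computing a pipeline's success rate and p95 runtime from run records and checking them against targets; the record shape, thresholds, and percentile shortcut are illustrative assumptions, not ecobee's actual tooling.

    ```python
    # Hypothetical SLA check: success rate and p95 runtime over recent pipeline runs.
    runs = [
        {"status": "success", "runtime_s": 120},
        {"status": "success", "runtime_s": 150},
        {"status": "failed",  "runtime_s": 300},
        {"status": "success", "runtime_s": 130},
    ]

    SLA_SUCCESS_RATE = 0.70    # illustrative target: at least 70% of runs succeed
    SLA_P95_RUNTIME_S = 400    # illustrative target: p95 runtime under 400 seconds

    success_rate = sum(r["status"] == "success" for r in runs) / len(runs)

    # Crude nearest-rank p95 over the sorted runtimes (fine for a sketch).
    runtimes = sorted(r["runtime_s"] for r in runs)
    p95_runtime = runtimes[min(len(runtimes) - 1, int(0.95 * len(runtimes)))]

    meets_sla = success_rate >= SLA_SUCCESS_RATE and p95_runtime <= SLA_P95_RUNTIME_S
    print(success_rate, p95_runtime, meets_sla)  # 0.75 300 True
    ```

    In practice these numbers would come from an orchestrator's run metadata and feed dashboards and alerts, so an SLA breach pages the owning team rather than being discovered after the fact.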

    What You’ll Bring to the Table:  

    • 10+ years of experience in data/software engineering, with a proven track record of owning and delivering complex, cross-domain projects at scale.
    • Extensive experience in building and maintaining scalable data pipelines and architecture using tools like Apache Spark, Kafka, and Airflow.
    • Expertise in cloud data platforms (AWS, GCP, or Azure), with a strong focus on distributed systems, cloud-managed open-source frameworks and services, and IoT data integration.
    • Solid understanding of end-to-end data systems, from ingestion to machine learning model deployment and inference.
    • Expertise in data security, data governance, and compliance regulations relevant to the industry.
    • Extensive experience in data architecture, database design and data engineering methodologies across multiple industries, with at least 5 years in a technical leadership role.
    • Ability to solve problems that span multiple domains, including data engineering, machine learning, IoT, and cloud infrastructure. A deep understanding of how these domains interact is essential.
    • Experience with real-time data processing, analytics platforms, and machine learning integration is highly valued.
    • Proven ability to mentor and guide engineers, from juniors to senior engineers, across multiple teams. Experience facilitating technical discussions and driving consensus.
    • Demonstrated ability to lead cross-functional initiatives and work effectively across squads.
    • A strategic mindset, with the ability to think ahead about potential roadblocks and design systems that can scale and evolve with ecobee’s needs.
    • Experience driving large technical initiatives from ideation through implementation, with a focus on creating systems that deliver high business impact.
    • Demonstrated track record of contributing to new processes and practices within engineering teams. You’re comfortable challenging the status quo and driving improvements.
    • Experience with software delivery metrics and ensuring that teams follow best practices in testing, code quality, and maintainability.

    Just so you know: The hired candidate will be required to complete a background check. 

    What happens after you apply:   

    Application review. It will happen. By an actual person in Talent Acquisition. We get upwards of 100 applications for some roles, so review can take a few days, but every applicant can expect a note regarding their application status.  

    Interview Process:

    • A 30-minute phone call with a member of Talent Acquisition  
    • A 45-minute interview with the Director of Engineering to discuss your experience designing and building scalable data architectures, pipelines, and processing systems
    • A 90-minute technical interview with staff and senior engineers covering system design and architecture, plus a coding challenge
    • A one-hour technical deep-dive discussion with engineering leaders
    • The final interview will be 90 minutes, divided into two parts: 45 minutes with the Director of Engineering, followed by a 45-minute interview with the VP.

    With ecobee, you’ll have the opportunity to: 

    • Be part of something big: Get to work in a fresh, dynamic, and ever-growing industry.  
    • Make a difference for the environment: Make a sustainable impact while on your daily job, and after it through programs like ecobee acts. 
    • Expand your career: Learn with our in-house learning enablement team, and enjoy our generous professional learning budget. 
    • Put people first: Benefit from competitive salaries, health benefits, and a progressive Parental Top-Up Program (75% top-up or five bonus days off). 
    • Play a part in an exceptional culture: Enjoy a fun and casual workplace with an open-concept office, located at Queens Quay W & York St. ecobee Leeds is based at our riverside office on The Calls. 
    • Celebrate diversity: Be part of a truly welcoming workplace. We offer a mentorship program and bias training.  

    Are you interested? Let's make it work. 

    Our people are empowered to take ownership of their schedules with workflows that allow for flexible hours. Depending on your job, you have the option of an office-based, fully remote, or hybrid work environment. New team members working remotely will have all necessary equipment provided and shipped to them, and we conduct our interviews and onboarding sessions primarily through video.

    We’re committed to inclusion and accommodation. 

    ecobee believes that openness and diversity make us better. We welcome applicants from all backgrounds to apply regardless of race, gender, age, religion, identity, or any other aspect which makes them unique. Accommodations can be made upon request for candidates taking part in all aspects of the selection process. Our recruitment team is happy to answer any questions candidates may have about virtual interviewing, onboarding, and future work locations.

    We’re up to incredible things. Come and be part of them. 

    Discover our products and services and learn more about who we are.  

    Ready to join ecobee? View current openings. 

    Please note, ecobee does not accept unsolicited resumes.  

    Apply for this job