Airflow Remote Jobs

141 Results

1d

Senior Analytics Engineer

Up Learn · London, England, United Kingdom · Remote
airflow, python

Up Learn is hiring a Remote Senior Analytics Engineer

Are you looking for a way to reinvent the way the world learns? Do you want to establish best practices in analytics on a modern data stack? Are you excited about being a key part of a growing data team? Up Learn may be the right place for you. You will be helping to lay Up Learn’s foundations for scale and contributing to a data practice that is helping to tackle one of society’s most meaningful problems: education.

About us

Up Learn has built the world’s most effective learning experience. We’ve done this by combining cognitive science, instructional theory and artificial intelligence.

Our mission is to create the most effective learning experiences in the world, and distribute access to as many students as possible.

Up Learn started with A Levels and developed courses that are:

  1. Effective: 97% of students that complete Up Learn courses achieve an A*/A, starting from grades as low as Ds and Es
  2. Engaging: 23.5 million hours of learning thanks to Up Learn, and rising
  3. Scaling: tens of thousands of students use Up Learn today, either independently or through one of our 400 school, university or charity partners

Up Learn has been growing fast, and is backed by investors that share our vision, including leading venture capital firm Forward Partners and the Branson family (Virgin). Social impact is critical to Up Learn’s mission - for every student that pays, Up Learn gives a full scholarship to a student who can’t. We are growing our incredible, 40+ strong team.

Our data stack

Our data stack is built on the following tools, with a particular emphasis on leveraging open-source technologies:

  • Google Cloud Platform for all of our analytics infrastructure
  • dbt and BigQuery for our data modelling and warehousing
  • Python and Streamlit for data science and analysis
  • GitLab for version control and CI/CD
  • Lightdash for BI/dashboards
  • Airflow for orchestration
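
To make the orchestration layer concrete, here is a minimal sketch of how Airflow typically schedules a dbt build in a stack like this. It is illustrative only, not Up Learn's actual configuration: the dag_id, schedule and project paths are assumptions.

    # Illustrative only: a daily dbt build orchestrated by Airflow.
    # The dag_id, schedule and project paths are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_dbt_build",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Run all dbt models and tests against the warehouse (BigQuery here).
        dbt_build = BashOperator(
            task_id="dbt_build",
            bash_command="dbt build --project-dir /opt/dbt --profiles-dir /opt/dbt",
        )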

You should have:

  • Excellent SQL knowledge with strong hands-on data modelling and data warehousing skills
  • Strong attention to detail in order to highlight and address data quality issues
  • Experience in designing and managing data tools and infrastructure
  • Great time management and proactive problem-solving abilities in order to meet deadlines
  • Strong communication and data presentation skills through the use of effective data visualisation and BI tools (e.g. Looker, Tableau, Power BI)

You should be:

  • Self-motivated, responsible and technology-driven individual who performs well both independently and as a team member
  • Excited about learning - enthusiastic to learn something new, and then apply it
  • Effective at building strong, influential relationships with colleagues and partners, with demonstrated success in delivering impactful analytics to stakeholders

Bonus points for:

  • Having used dbt in a business environment
  • Demonstrable track record in mentorship and educating non-technical stakeholders
  • Exposure to Python for data manipulation and analysis

What we offer

Up Learn offers generous remuneration, equity share options, and a fun, friendly, high-calibre team that trusts you and gives you the freedom to be brilliant.

You will have the chance to define the future of education and make a meaningful contribution to the lives of thousands of students, and:

Remuneration

  • A competitive salary
  • Employer-matched pension
  • Perks scheme offering discounts & rewards at 30,000+ brands including up to 55% off cinema tickets

Health & Wellbeing

  • Level 6 (highest level) dental insurance
  • Significantly enhanced maternity and paternity leave
  • Cycle-to-Work: we are registered so you can buy a bike and accessories tax-free
  • Eye test & glasses reimbursement
  • Company library: we have hundreds of books in our company library, topped up monthly with the most highly requested books. You can borrow a book whenever you like
  • Unlimited budget for any work-related books you need
  • Emergency support salary advance
  • Mental health first aiders
  • Family access to Up Learn: your family and close relatives get unlimited access to any Up Learn course for free!

Time

  • Minimum 35 days of paid holiday per year, made up of: 26 days of bookable holiday, plus UK bank holidays, plus unlimited ‘extra days’ (i.e. if you need a few more days, no problem)
  • Ability to work remotely for longer periods
  • Flexible working hours
  • ⭐ 1 fully paid day for volunteering at a charity or not-for-profit of your choice each year

Social

  • Annual company off-site where we get out of the city and take a break together
  • Free sporting activities like 5-a-side football games, lunch-time jogs, badminton games, paid-for monthly CrossFit sessions
  • ☕ Unlimited delicious coffee (high-end coffee beans) at the office, tea selection and other soft drinks, plus unlimited snacks and fresh fruit
  • Weekly ‘Friday celebrations’ with a huge range of drinks, from craft beer to frozen margaritas, alongside soft drinks, smoothies, and fruit juice
  • ☕ Paid-for coffee breaks (a great chance to get to know the team)
  • Regular team outings like go-karting and skiing

All in addition to

  • Influence, trust and impact inside a well-funded VC-backed startup that's scaling
  • A spacious and bright private office in Old Street, with delicious coffee, a selection of teas and unlimited snacks and drinks

Our Core Values

  • Live for Learning - We are open-minded and have a never-quenched thirst for learning, expanding our experiences, getting feedback, iterating and improving
  • Strive for Consistent Excellence - We hold an extremely high standard, pay attention to the details and take pride in consistency
  • Objective and Rational - We think from first principles, avoid biases, use believability, regulate our emotions and are obligated to dissent when we disagree
  • Relentlessly Resourceful - We are honey badgers, we don’t compromise, we work smart and get the job done
  • Caring and Compassionate - We demonstrate care and compassion for ourselves, each other and for students

How to apply

If this sounds like it’s for you, we can’t wait to hear from you!

Use the Apply button below to send us your CV and tell us in 150 words or less why you’d be great for this role.

Inviting someone to join our team is a big deal for us and we put a lot of care and effort into the process, whilst making it take as little of your time as possible. If we figure out we’re not perfect for each other at any stage we’ll let you know quickly and make sure we provide you with feedback (if you want it!).

See more jobs at Up Learn

Apply for this job

2d

Senior Data Engineer

Braze · Remote - Ontario
Sales, Bachelor's degree, airflow, sql, Design, kubernetes

Braze is hiring a Remote Senior Data Engineer

At Braze, we have found our people. We’re a genuinely approachable, exceptionally kind, and intensely passionate crew.

We seek to ignite that passion by setting high standards, championing teamwork, and creating work-life harmony as we collectively navigate rapid growth on a global scale while striving for greater equity and opportunity – inside and outside our organization.

To flourish here, you must be prepared to set a high bar for yourself and those around you. There is always a way to contribute: Acting with autonomy, having accountability and being open to new perspectives are essential to our continued success. Our deep curiosity to learn and our eagerness to share diverse passions with others gives us balance and injects a one-of-a-kind vibrancy into our culture.

If you are driven to solve exhilarating challenges and have a bias toward action in the face of change, you will be empowered to make a real impact here, with a sharp and passionate team at your back. If Braze sounds like a place where you can thrive, we can’t wait to meet you.

WHAT YOU’LL DO

Join our dynamic team dedicated to revolutionizing data infrastructure and products for impactful decision-making at Braze. We collaboratively shape data engineering strategies, optimizing data pipelines and architecture to drive business growth and enhance customer experiences.

Responsibilities:

  • Lead the design, implementation, and monitoring of scalable data pipelines and architectures using tools like Snowflake and dbt
  • Develop and maintain robust ETL processes to ensure high-quality data ingestion, transformation, and storage
  • Collaborate closely with data scientists, analysts, and other engineers to design and implement data solutions that drive customer engagement and retention
  • Optimize and manage data flows and integrations across various platforms and applications
  • Ensure data quality, consistency, and governance by implementing best practices and monitoring systems
  • Work extensively with large-scale event-level data, aggregating and processing it to support business intelligence and analytics
  • Implement and maintain data products using advanced techniques and tools
  • Collaborate with cross-functional teams including engineering, product management, sales, marketing, and customer success to deliver valuable data solutions
  • Continuously evaluate and integrate new data technologies and tools to enhance our data infrastructure and capabilities
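
As a hedged illustration of the data quality monitoring mentioned above (not Braze's actual tooling), the sketch below shows a common freshness-check pattern against a warehouse table. Python's built-in sqlite3 stands in for Snowflake, and the table name and threshold are invented.

    # Illustrative freshness check: fail loudly when a table stops receiving data.
    # sqlite3 stands in for a real warehouse; names and thresholds are hypothetical.
    import sqlite3
    from datetime import datetime, timedelta

    def check_freshness(conn, table: str, max_lag: timedelta) -> None:
        (latest,) = conn.execute(f"SELECT MAX(loaded_at) FROM {table}").fetchone()
        if latest is None:
            raise RuntimeError(f"{table}: no rows loaded")
        lag = datetime.utcnow() - datetime.fromisoformat(latest)
        if lag > max_lag:
            raise RuntimeError(f"{table}: stale by {lag}, threshold {max_lag}")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (loaded_at TEXT)")
    conn.execute("INSERT INTO events VALUES (?)", (datetime.utcnow().isoformat(),))
    check_freshness(conn, "events", max_lag=timedelta(hours=1))

In production a check like this would run on a schedule (for example, as an Airflow task) and page the on-call engineer instead of raising locally.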

WHO YOU ARE

The ideal candidate for this role possesses:

  • 5+ years of hands-on experience in data engineering, cloud data warehouses, and ETL development, preferably in a customer-facing environment
  • Proven expertise in designing and optimizing data pipelines and architectures
  • Strong proficiency in advanced SQL and data modeling techniques
  • A track record of leading impactful data projects from conception to deployment
  • Effective collaboration skills with cross-functional teams and stakeholders
  • In-depth understanding of technical architecture and data flow in a cloud-based environment
  • Ability to mentor and guide junior team members on best practices for data engineering and development
  • Passion for building scalable data solutions that enhance customer experiences and drive business growth
  • Strong analytical and problem-solving skills, with a keen eye for detail and accuracy
  • Extensive experience working with and aggregating large event-level data
  • Familiarity with data governance principles and ensuring compliance with industry regulations
  • Preferred, but not required: experience with Kubernetes for container orchestration and Airflow for workflow management

 #LI-Remote

WHAT WE OFFER

Details of these benefit plans will be provided if a candidate receives an offer of employment. Benefits may vary by location.

From offering comprehensive benefits to fostering flexible environments, we’ve got you covered so you can prioritize work-life harmony.

  • Competitive compensation that may include equity
  • Retirement and Employee Stock Purchase Plans
  • Flexible paid time off
  • Comprehensive benefit plans covering medical, dental, vision, life, and disability
  • Family services that include fertility benefits and equal paid parental leave
  • Professional development supported by formal career pathing, learning platforms, and tuition reimbursement
  • Community engagement opportunities throughout the year, including an annual company wide Volunteer Week
  • Employee Resource Groups that provide supportive communities within Braze
  • Collaborative, transparent, and fun culture recognized as a Great Place to Work®

ABOUT BRAZE

Braze is a leading customer engagement platform that powers lasting connections between consumers and brands they love. Braze allows any marketer to collect and take action on any amount of data from any source, so they can creatively engage with customers in real time, across channels from one platform. From cross-channel messaging and journey orchestration to AI-powered experimentation and optimization, Braze enables companies to build and maintain absolutely engaging relationships with their customers that foster growth and loyalty.

Braze is proudly certified as a Great Place to Work® in the U.S., the UK and Singapore. We ranked #3 on Great Place to Work UK’s 2024 Best Workplaces (Large), #3 on Great Place to Work UK’s 2023 Best Workplaces for Wellbeing (Medium), #4 on Great Place to Work’s 2023 Best Workplaces in Europe (Medium), #10 on Great Place to Work UK’s 2023 Best Workplaces for Women (Large), #19 on Fortune’s 2023 Best Workplaces in New York (Large). We were also featured in Built In's 2024 Best Places to Work, U.S. News Best Technology Companies to Work For, and Great Place to Work UK’s 2023 Best Workplaces in Tech.

You’ll find many of us at headquarters in New York City or around the world in Austin, Berlin, Chicago, Jakarta, London, Paris, San Francisco, Singapore, Sydney and Tokyo – not to mention our employees in nearly 50 remote locations.

BRAZE IS AN EQUAL OPPORTUNITY EMPLOYER

At Braze, we strive to create equitable growth and opportunities inside and outside the organization.

Building meaningful connections is at the heart of everything we do, and that includes our recruiting practices. We're committed to offering all candidates a fair, accessible, and inclusive experience – regardless of age, color, disability, gender identity, marital status, maternity, national origin, pregnancy, race, religion, sex, sexual orientation, or status as a protected veteran. When applying and interviewing with Braze, we want you to feel comfortable showcasing what makes you you.

We know that sometimes different circumstances can lead talented people to hesitate to apply for a role unless they meet 100% of the criteria. If this sounds familiar, we encourage you to apply, as we’d love to meet you.

Please see our Candidate Privacy Policy for more information on how Braze processes your personal information during the recruitment process and, if applicable based on your location, how you can exercise any privacy rights.

See more jobs at Braze

Apply for this job

2d

Network Reliability Engineer

Rust, airflow, Design, ansible, metal, c++, docker, kubernetes, linux, python

Cloudflare is hiring a Remote Network Reliability Engineer

About Us

At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

Hiring Locations: Austin, Texas; Atlanta; Denver; New York City; San Francisco; Seattle; or Washington, D.C.

About the Role (or What you'll do)

Cloudflare operates a large global network spanning hundreds of cities (data centers). You will join a team of talented network engineers who are building software solutions to improve network resilience and reduce operational toil.
This position will be responsible for the technical operation and engineering of Cloudflare's core data center network, including the planning, installation and management of the hardware and software, as well as the day-to-day operations of the network. The core network supports our critical internal needs such as databases, high volume logging, and internal application clusters. This is an opportunity to be part of the team that is building a high-performance network that is accessible to any web property online.

You will build tools to automate operational tasks, streamline deployment processes and provide a platform for other engineering teams to build upon. You will nurture a passion for an “automate everything” approach that makes systems failure-resistant and ready-to-scale. Furthermore, you will be required to play a key role in system design and demonstrate the ability to bring an idea from design all the way to production.
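
As one hedged example of what "automate everything" can look like in practice (not Cloudflare's actual tooling), the sketch below audits switches for configuration drift using the Netmiko library; the hosts, credentials and expected config line are all invented.

    # Illustrative config-drift audit across network devices.
    # Hosts, credentials and the golden-config line are hypothetical.
    from netmiko import ConnectHandler

    EXPECTED = "ip route 0.0.0.0/0 192.0.2.1"  # hypothetical golden-config line

    def audit(host: str) -> bool:
        conn = ConnectHandler(
            device_type="cisco_nxos", host=host,
            username="netops", password="...",  # use a secrets store in practice
        )
        running = conn.send_command("show running-config")
        conn.disconnect()
        return EXPECTED in running

    for host in ["sw1.example.net", "sw2.example.net"]:
        print(host, "OK" if audit(host) else "DRIFT")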

 

Examples of desirable skills, knowledge and experience

  • 5+ years of relevant Network/Site Reliability Engineering experience
  • BA/BS in Computer Science or equivalent experience
  • Solid foundation in configuration management frameworks: SaltStack, Ansible, Chef
  • Experience with NX-OS, JUNOS, EOS, Cumulus, or Sonic Network Operating Systems 
  • Solid Linux systems administration experience
  • Linux networking - iproute2, Traffic Control, Devlink, etc. 
  • Strong software development skills in Go and Python

Bonus Points

  • Deep knowledge of BGP and other routing protocols
  • Workflow management (Airflow, Temporal)
  • Open-source routing daemons (FRR, BIRD, GoBGP)
  • Experience with bare metal switching
  • Experience with network programming in C, C++ or Rust
  • Experience with the Linux kernel and Linux software packaging
  • Strong tooling and automation development experience
  • Time series databases (Prometheus, Grafana, Thanos, ClickHouse)
  • Other tools - Kubernetes, Docker, Prometheus, Consul

Compensation

Compensation may be adjusted depending on work location and level. 

  • For Colorado-based hires: Estimated annual salary of $137,000 - $187,000.
  • For New York City-based and California (excluding Bay Area) and Washington hires: Estimated annual salary of $154,000- $208,000.
  • For Bay Area-based hires: Estimated annual salary of $162,000 - $218,000.

Equity

This role is eligible to participate in Cloudflare’s equity plan.

Benefits

Cloudflare offers a complete package of benefits and programs to support you and your family.  Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!  The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.

Health & Welfare Benefits

  • Medical/Rx Insurance
  • Dental Insurance
  • Vision Insurance
  • Flexible Spending Accounts
  • Commuter Spending Accounts
  • Fertility & Family Forming Benefits
  • On-demand mental health support and Employee Assistance Program
  • Global Travel Medical Insurance

Financial Benefits

  • Short and Long Term Disability Insurance
  • Life & Accident Insurance
  • 401(k) Retirement Savings Plan
  • Employee Stock Participation Plan

Time Off

  • Flexible paid time off covering vacation and sick leave
  • Leave programs, including parental, pregnancy health, medical, and bereavement leave

What Makes Cloudflare Special?

We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work - technology already used by Cloudflare’s enterprise customers, at no cost.

Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal - we don’t store client IP addresses, never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

Sound like something you’d like to be a part of? We’d love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St., San Francisco, CA 94107.

See more jobs at Cloudflare

Apply for this job

3d

Senior Data Engineer

Gemini · Remote (USA)
remote-first, airflow, sql, Design, css, kubernetes, python, javascript

Gemini is hiring a Remote Senior Data Engineer

About the Company

Gemini is a global crypto and Web3 platform founded by Tyler Winklevoss and Cameron Winklevoss in 2014. Gemini offers a wide range of crypto products and services for individuals and institutions in over 70 countries.

Crypto is about giving you greater choice, independence, and opportunity. We are here to help you on your journey. We build crypto products that are simple, elegant, and secure. Whether you are an individual or an institution, we help you buy, sell, and store your bitcoin and cryptocurrency. 

At Gemini, our mission is to unlock the next era of financial, creative, and personal freedom.

In the United States, we have a flexible hybrid work policy for employees who live within 30 miles of our office headquartered in New York City and our office in Seattle. Employees within the New York and Seattle metropolitan areas are expected to work from the designated office twice a week, unless there is a job-specific requirement to be in the office every workday. Employees outside of these areas are considered part of our remote-first workforce. We believe our hybrid approach for those near our NYC and Seattle offices increases productivity through more in-person collaboration where possible.

The Department: Data

The Role: Senior Data Engineer

As a member of our data engineering team, you'll deliver high quality work while solving challenges that impact all or part of the team's data architecture. You'll keep up to date with recent advances in the big data space and provide solutions for large-scale applications that align with the team's long-term goals. Your work will help resolve complex problems by identifying root causes, documenting the solutions, and keeping operational excellence (data auditing, validation, automation, maintainability) in mind. Communicating your insights with leaders across the organization is paramount to success.

Responsibilities:

  • Design, architect and implement best-in-class Data Warehousing and reporting solutions
  • Lead and participate in design discussions and meetings
  • Mentor data engineers and analysts
  • Design, automate, build, and launch scalable, efficient and reliable data pipelines into production using Python
  • Build real-time data and reporting solutions
  • Design, build and enhance dimensional models for Data Warehouse and BI solutions
  • Research new tools and technologies to improve existing processes
  • Develop new systems and tools to enable the teams to consume and understand data more intuitively
  • Partner with engineers, project managers, and analysts to deliver insights to the business
  • Perform root cause analysis and resolve production and data issues
  • Create test plans, test scripts and perform data validation
  • Tune SQL queries, reports and ETL pipelines
  • Build and maintain data dictionary and process documentation
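
For a flavor of the real-time side of this role, here is a minimal, hedged sketch of an event-consumption loop using the kafka-python client (Kafka appears in the preferred qualifications below); the topic, brokers and payload fields are invented.

    # Illustrative real-time ingestion loop using the kafka-python client.
    # The topic, brokers and payload fields are hypothetical.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "trades",                              # hypothetical topic
        bootstrap_servers=["localhost:9092"],
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        # In a real pipeline: validate, enrich, and land the event in the warehouse.
        print(event.get("symbol"), event.get("price"))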

Minimum Qualifications:

  • 5+ years experience in data engineering with data warehouse technologies
  • 5+ years experience in custom ETL design, implementation and maintenance
  • 5+ years experience with schema design and dimensional data modeling
  • Experience building real-time data solutions and processes
  • Advanced skills with Python and SQL are a must
  • Experience with one or more MPP databases (Redshift, BigQuery, Snowflake, etc.)
  • Experience with one or more ETL tools (Informatica, Pentaho, SSIS, Alooma, etc.)
  • Strong computer science fundamentals including data structures and algorithms
  • Strong software engineering skills in any server-side language, preferably Python
  • Experienced in working collaboratively across different teams and departments
  • Strong technical and business communication

Preferred Qualifications:

  • Kafka, HDFS, Hive, Cloud computing, machine learning, text analysis, NLP & Web development experience is a plus
  • Experience with Continuous integration and deployment
  • Knowledge and experience of financial markets, banking or exchanges
  • Web development skills with HTML, CSS, or JavaScript
It Pays to Work Here
 
The compensation & benefits package for this role includes:
  • Competitive starting salary
  • A discretionary annual bonus
  • Long-term incentive in the form of a new hire equity grant
  • Comprehensive health plans
  • 401K with company matching
  • Paid Parental Leave
  • Flexible time off

Salary Range: The base salary range for this role is between $136,000 and $170,000 in the State of New York, the State of California and the State of Washington. This range is not inclusive of our discretionary bonus or equity package. When determining a candidate’s compensation, we consider a number of factors including skillset, experience, job scope, and current market data.

At Gemini, we strive to build diverse teams that reflect the people we want to empower through our products, and we are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Equal Opportunity is the Law, and Gemini is proud to be an equal opportunity workplace. If you have a specific need that requires accommodation, please let a member of the People Team know.

#LI-AA1

Apply for this job

5d

Machine Learning Operations Engineer - Intermediate

Assent · Ottawa, Canada · Remote
ML, S3, EC2, Lambda, 4 years of experience, airflow, sql, docker, python, AWS

Assent is hiring a Remote Machine Learning Operations Engineer - Intermediate

Job Description

The Intermediate Machine Learning Operations Engineer is responsible for developing and maintaining key machine learning infrastructure throughout its lifecycle. This role will ensure that ML and AI models deployed in production at Assent are performant and reliable, maintain (and improve) accuracy over time, and integrate well with the overall Assent software suite. The Intermediate ML-Ops Engineer will also support data pipeline development with respect to ML model inputs and outputs. This person will have an important role in designing cutting-edge infrastructure, supporting key product offerings from Assent. The Intermediate Machine Learning Operations Engineer is a data-oriented, out-of-the-box thinker who is passionate about data, machine learning, understanding the business, and driving business value.

Key Requirements and Responsibilities

  • Develop key data pipelines moving data into and out of ML and AI models in production environments for Assent's digital products;
  • Support the development of a robust ML-Ops framework to support model tracking and continuous improvement;
  • Support ongoing and automated statistical analysis of ML models in production;
  • Work closely with adjacent teams to proactively identify potential issues in performance and availability, with respect to data systems impacting ML models;
  • Be curious, proactive and iterative, prepared to try unconventional ideas to find solutions to difficult problems;
  • Apply engineering principles to proactively identify issues, develop solutions, and recommend improvements to existing ML Operations;
  • Be self-motivated and highly proactive at exploring new technologies;
  • Stay up to date with machine learning and AI principles, models, tools and their applications in data processing and analysis;
  • Find creative solutions to challenges involving data that is difficult to obtain, complex or ambiguous;
  • Manage multiple concurrent projects, priorities and timelines;
  • Support the Machine Learning team in the pursuit of building business critical data products;
  • Configure & deploy relevant implementations to the Amazon Web Services (AWS) cloud;

This is not an exhaustive list of duties. Responsibilities may be altered and/or added from time to time to meet business needs.
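
As a hedged illustration of the "model tracking and continuous improvement" responsibility described above, the sketch below logs a retraining run with MLflow (one of the tools named in the qualifications); the run name, parameters and metric values are invented.

    # Illustrative experiment tracking with MLflow.
    # Run name, params and metric values are hypothetical.
    import mlflow

    with mlflow.start_run(run_name="weekly-retrain"):
        mlflow.log_param("model_type", "baseline")
        # In practice these come from evaluation on a holdout set.
        mlflow.log_metric("accuracy", 0.91)
        mlflow.log_metric("f1", 0.88)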

Qualifications

We strongly value your talent, energy and passion. It will also be valuable to Assent if you have the following qualifications:

  • 2-4 years of experience in MLOps, machine learning engineering, or related fields, with hands-on experience deploying and maintaining ML models in production environments.
  • A degree in Computer Science, Engineering, or a related field (a Master's degree or higher is highly preferred)
  • A demonstrable understanding of machine learning and AI principles, models, tools, and their applications in data processing and analysis.
  • Strong knowledge of SQL for data retrieval.
  • Excellent ability to use Python for data extraction and manipulation
  • Solid working knowledge of AWS systems and services; comfort working with SageMaker, EC2, S3, Lambda, Terraform.
  • Solid working knowledge of MLOps, versioning, orchestration and containerization tools: MLflow, Kubeflow, Airflow, DVC, Weights & Biases, Docker, Kubernetes.
  • Strong understanding of statistical analysis methods and procedures
  • Ability to apply engineering principles to proactively identify issues, develop solutions, and recommend improvements
  • Excellent analytical ability and creative problem-solving skills, including the ability to deal with situations where information is difficult to obtain, complex or ambiguous
  • Excellent organizational skills and ability to manage multiple priorities and timelines
  • You’re a great team player, constantly looking to support your teammates on their mission of building great data products.

Reasonable Accommodations Statement: To perform this job successfully, an individual must be able to perform the aforementioned duties and responsibilities satisfactorily. Reasonable accommodations may be made to enable qualified individuals with disabilities to perform these essential functions. Assent is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

See more jobs at Assent

Apply for this job

9d

Sr. Engineer II, Analytics

ML, agile, tableau, airflow, sql, git, c++, python, backend

hims & hers is hiring a Remote Sr. Engineer II, Analytics

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

About the Role:

We're looking for a savvy and experienced Senior Analytics Engineer to build seamless data products in collaboration with our data engineering, analytics, engineering, business, and product management teams.

You Will:

  • Take the data products to the next level by developing scalable data models 
  • Manage transformations of data after load of raw data through both technical processes and business logic
  • Create an inventory of the data sources and documents needed to implement self-service analytics
  • Define quality standards of the data and partner with the analytics team to define minimum acceptance criteria for the data sources
  • Catalog and document the data sources
  • Regularly meet with business partners and analytics teams to understand and solve data needs, short-term and medium-term
  • Build trust with internal stakeholders to encourage data-driven decision-making
  • Work with all organizations to continually grow the value of our data products by onboarding new data from our backend and third-party systems

You Have:

  • 8+ years of experience with SQL, preferably for data transformation or analytical use cases
  • 4+ years of experience building scalable data models for analytical and BI purposes
  • 3+ years of solid experience with dbt
  • Mastery of data warehouse methodologies and techniques from transactional databases to dimensional data modeling, to wide denormalized data marts
  • Solid experience with BI tools like Tableau and Looker
  • Experience using version control (command-line, Git)
  • Familiarity with one of the major data warehouses (Google BigQuery, Snowflake, Redshift, Databricks)
  • Domain expertise in one or more of Finance, Product, Marketing, Operations, or Customer Experience
  • Demonstrated experience engaging and influencing senior leaders across functions, including an ability to communicate effectively with both business and technical teams
  • Strong analytical and quantitative skills with the ability to use data and metrics to back up assumptions and recommendations to drive actions
  • Ability to articulate vision, mission, and objectives, and change the narrative appropriate to the audience
  • Experience working with management to define and measure KPIs and other operating metrics
  • Understanding of SDLC and Agile frameworks
  • Project management skills and a demonstrated ability to work autonomously

Nice to Have:

  • Experience working in telehealth or e-commerce
  • Previous working experience at startups
  • Knowledge of Python programming
  • Knowledge of Airflow and the modern data stack (Airflow, Databricks, dbt, Fivetran, Tableau/Looker)
  • ML model training and development

Our Benefits (there are more but here are some highlights):

  • Competitive salary & equity compensation for full-time roles
  • Unlimited PTO, company holidays, and quarterly mental health days
  • Comprehensive health benefits including medical, dental & vision, and parental leave
  • Employee Stock Purchase Program (ESPP)
  • Employee discounts on hims & hers & Apostrophe online products
  • 401k benefits with employer matching contribution
  • Offsite team retreats

#LI-Remote

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions, including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range is
$150,000 - $180,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims considers all qualified applicants for employment, including applicants with arrest or conviction records, in accordance with the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance, the California Fair Chance Act, and any similar state or local fair chance laws.

It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, please contact us at accommodations@forhims.com and describe the needed accommodation. Your privacy is important to us, and any information you share will only be used for the legitimate purpose of considering your request for accommodation. Hims & Hers gives consideration to all qualified applicants without regard to any protected status, including disability. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

9d

Mid-Level Backend Developer

Sossego · Brazil · Remote
DevOps, airflow, sql, scrum, typescript, python

Sossego is hiring a Remote Mid-Level Backend Developer

Job Description

THE CHALLENGE

Work on the development and maintenance of data capture, using APIs and scraping automation, with a focus on execution efficiency and the quality of the captured data.

Extract data from a variety of sources, such as HTML pages, XLS/CSV files, PDF files, and others.

Work together with the squad to drive continuous improvement in processes and products.

ROLE DESCRIPTION

We are looking for a mid-level developer with experience in APIs and automation using TypeScript and Playwright. You will have the opportunity to expand your work with Python and Airflow as a plus. Experience with RegEx and SQL is essential.

RESPONSIBILITIES

  • Develop and maintain data capture automations using TypeScript and Playwright

  • Develop and maintain automations for extracting data from a variety of file types (PDF, XLS, etc.)

  • Document and map the automated data capture, extraction, and processing workflows

  • Monitor the performance and quality of the data capture automations

  • Ensure compliance with security and privacy rules when handling data

  • Manipulate data efficiently using SQL

  • Stay up to date with ETL-related trends and advances

  • Collaborate with multidisciplinary teams to identify opportunities for improvement in processes and products

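Since RegEx skills are called out as essential and Python is listed as a plus, here is a small, hedged sketch of regex-based extraction from semi-structured text, the kind of parsing the responsibilities above describe; the record layout is invented.

    # Illustrative regex extraction from semi-structured text.
    # The record layout below is hypothetical.
    import re

    TEXT = """
    Policy: AB-12345  Premium: 1,234.56
    Policy: CD-67890  Premium: 987.00
    """

    PATTERN = re.compile(
        r"Policy:\s*(?P<policy>[A-Z]{2}-\d{5})\s+Premium:\s*(?P<amount>[\d.,]+)"
    )

    for match in PATTERN.finditer(TEXT):
        print(match.group("policy"), match.group("amount"))
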
Qualifications

REQUIREMENTS:

  • At least 3 years of professional experience
  • Experience with automation in TypeScript and Playwright or similar technologies (Puppeteer, Selenium, etc.)
  • Experience with TypeScript
  • Experience with regular expressions (RegEx)
  • Experience with SQL
  • Knowledge of Python
  • A degree in Computer Science, Engineering, Mathematics or a related field
  • Analytical skills for problem analysis and resolution
  • Excellent written and verbal communication

NICE TO HAVE:

  • Familiarity with cloud environments and DevOps processes
  • Experience with Airflow or other ETL orchestrators
  • Knowledge of BPMN
  • Knowledge of the insurance sector
  • Familiarity with agile methodologies (Scrum, Kanban)

See more jobs at Sossego

Apply for this job

9d

Senior Azure Scala Data Engineer

DevOps, agile, Bachelor's degree, 5 years of experience, scala, airflow, sql, Design, azure, scrum

FuseMachines is hiring a Remote Senior Azure Scala Data Engineer

See more jobs at FuseMachines

Apply for this job

11d

Senior Data Engineer

Nile Bits · Cairo, Egypt · Remote
agile, airflow, sql, Design, docker, linux, python, AWS

Nile Bits is hiring a Remote Senior Data Engineer

Job Description

  • Designing and implementing core functionality within our data pipeline in order to support key business processes
  • Shaping the technical direction of the data engineering team
  • Supporting our Data Warehousing approach and strategy
  • Maintaining our data infrastructure so that our jobs run reliably and at scale
  • Taking responsibility for all parts of the data ecosystem, including data governance, monitoring and alerting, data validation, and documentation
  • Mentoring and upskilling other members of the team
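
As a hedged sketch of the monitoring-and-alerting responsibility above (not Nile Bits' actual setup), the example below wires an Airflow failure callback into a DAG so broken jobs notify someone instead of failing silently; the DAG and the notify() body are placeholders.

    # Illustrative Airflow failure callback for pipeline alerting.
    # The notify() body is a placeholder; wire it to Slack, PagerDuty, etc.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def notify(context):
        ti = context["task_instance"]
        print(f"ALERT: {ti.dag_id}.{ti.task_id} failed")  # placeholder alert

    def load():
        ...  # extract/load logic would live here

    with DAG(
        dag_id="nightly_load",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args={"on_failure_callback": notify},
    ) as dag:
        PythonOperator(task_id="load", python_callable=load)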

Qualifications

  • Experience building data pipelines and/or ETL processes
  • Experience working in a Data Engineering role
  • Confident writing performant and readable code in Python, building upon the rich Python ecosystem wherever it makes sense to do so.
  • Good software engineering knowledge & skills: OO programming, design patterns, SOLID design principles and clean code
  • Confident writing SQL and good understanding of database design.
  • Experience working with web APIs.
  • Experience leading projects from a technical perspective
  • Knowledge of Docker, shell scripting, working with Linux
  • Experience with a cloud data warehouse
  • Experience in managing deployments and implementing observability and fault tolerance in cloud-based infrastructure (e.g. CI/CD, Infrastructure as Code, container-based infrastructure, auto-scaling, monitoring and alerting)
  • Pro-active with a self-starter mindset; able to identify elegant solutions to difficult problems and able to suggest new and creative approaches.
  • Analytical, problem-solving and an effective communicator; leveraging technology and subject matter expertise in the business to accelerate our roadmap.
  • Able to lead technical discussions, shape the direction of the team, identify opportunities for innovation and improvement
  • Able to lead and deliver projects, ensuring stakeholders are kept up-to-date through regular communication
  • Willing to support the rest of the team when necessary, sharing knowledge and best practices, documenting design decisions, etc.
  • Willing to step outside your comfort zone to broaden your skills and learn new technologies.
  • Experience working with open source orchestration frameworks like Airflow or data analytics tools such as dbt
  • Experience with AWS services or those of another cloud provider
  • Experience with Snowflake
  • Good understanding of Agile

See more jobs at Nile Bits

Apply for this job

11d

Digital Analytics Manager

Nile Bits · Cairo, Egypt · Remote
tableau, airflow, sql, salesforce, Firebase, mobile, qa, docker, python

Nile Bits is hiring a Remote Digital Analytics Manager

Job Description

We’re looking for a hands-on, highly technical and analytically minded individual who can work cross-functionally with the product team, tech team, marketing team and data team to:

  • Identify what and how we should be collecting data (client-side, server-side) to support deep understanding of customer behavior
  • Devise the technical specifications for data collection, writing and QA-ing code where needed
  • Oversee the tracking implementation and QA-ing process end to end
  • Implement processes to ensure tracking stays robust and up to date
  • Maintain compliance and ethical values with regards to user behavioral tracking
  • Ensure our data collection keeps up with the business!

Key Responsibilities

  • Take ownership of all tag implementations in GTM and server-side GTM to feed data to tools and partners such as Snowplow, Google Analytics, Firebase, Criteo, Epsilon.
  • Working closely with Marketing teams to ensure efficient and well-structured tracking code
  • Devising and owning new tracking specifications to be implemented
  • Managing all project correspondence with stakeholders
  • Experience of managing tracking implementation projects
  • Set the direction of our digital analytics strategy and enforce best practices
  • Audit the existing client-side/server-side data collection setup, identifying gaps in tracking and processes, identify inefficiencies and opportunities to improve the richness and quality of data collection at every step of the process
  • Responsible for the end to end delivery of tracking projects, this encapsulates data capture, testing/validating results and surfacing data in the data warehouse
  • Maintaining and creating documentation of tracking and processes
  • Maintaining our tracking architecture to ensure we follow best practices and reduce tech debt
  • Set up tracking monitoring processes to ensure we minimize downtime and preserve high quality data
  • Administration and maintenance of various tracking-related tools including (but not limited to) Snowplow, GA4, GTM, OneTrust

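As a hedged sketch of the tracking QA described above (not this team's actual process), the example below validates a captured event against a schema before it is allowed downstream, using the jsonschema library; the event shape is invented.

    # Illustrative tracking QA: validate an event payload against a schema
    # before it lands downstream. The event shape is hypothetical.
    from jsonschema import ValidationError, validate

    EVENT_SCHEMA = {
        "type": "object",
        "required": ["event_name", "user_id", "timestamp"],
        "properties": {
            "event_name": {"type": "string"},
            "user_id": {"type": "string"},
            "timestamp": {"type": "string"},
        },
    }

    event = {
        "event_name": "add_to_cart",
        "user_id": "u-123",
        "timestamp": "2024-05-01T12:00:00Z",
    }

    try:
        validate(instance=event, schema=EVENT_SCHEMA)
        print("event OK")
    except ValidationError as err:
        print("tracking bug:", err.message)
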
 

The Deal Breakers

  • Expert technical knowledge of Google Tag Manager and its ecosystem
  • Proven experience setting up and managing complex, large-scale implementations across web and mobile
  • Experience implementing or working with clickstream data
  • Some experience with SQL
  • Comfortable with exploring large datasets, with an emphasis on event data to ensure our tracking is meeting downstream requirements
  • Good understanding of the flow of data from data collection to reporting and insights and the impacts tracking can have on business processes
  • Highly competent in translating and presenting complex technical information to a less informed audience

 

And you are…

  • A doer! Willing to step outside your comfort zone to broaden your skills and learn new technologies
  • Meticulous when it comes to devising processes, documentation and QA work
  • Proactive and highly organized, with strong time management and planning skills
  • Approachable personality, happy to help resolve ad-hoc unscheduled problems
  • Proactive, self-starter mindset; identifying elegant solutions to difficult problems and being able to suggest new and creative approaches
  • Great time management skills with the ability to identify priorities

Qualifications

Nice to have

  • Experience working with Snowplow or other event-level analytics platform is a big plus
  • Experience setting up Server Side Google Tag Manager to reduce page load times
  • Exposure to cloud based data warehousing and modelling
  • Experience setting up analytics integrations with AB testing platforms (we use Optimizely)
  • Knowledge or experience of server-side tracking implementation
  • An engineering mindset looking to leverage modern tools and technologies to drive efficiencies
  • Exposure to Python/R or similar procedural programming language

 

Our data stack

We collect data from dozens of data sources, ranging from transactional data, availability data, payments data, customer event-level data, voice-of-customer data, third-party data and much, much more. Our historical data runs into tens of billions of records and grows at a rate of tens of millions of records every day. Our data is extremely varied: some of it is very finely-grained, event-level data, while other data is already aggregated to various degrees. It also arrives on different schedules!

Our tracking infrastructure contains tools such as GTM, SS GTM, Snowplow, GA4.

Our data stack uses Python for the data pipeline, Airflow for orchestration, and Snowflake as our data warehousing technology of choice. On top of our warehouse we have Tableau to assist with standardized reporting and self-service; there is also a Tableau embedding within Salesforce.

Our wider ecosystem of tools and partners includes Iterable, Docker, Branch, GA4, Salesforce, Tableau. Everything runs in AWS.
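
To make the Python-to-Snowflake leg of a stack like this concrete, here is a hedged sketch using the snowflake-connector-python package; every identifier and credential below is invented, and a real pipeline would pull secrets from a vault rather than hard-coding them.

    # Illustrative load step into Snowflake via snowflake-connector-python.
    # Account, credentials, stage and table names are all hypothetical.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="acme-xy12345", user="pipeline", password="...",
        warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
    )
    try:
        # COPY INTO ingests files from a named stage into a raw table.
        conn.cursor().execute(
            "COPY INTO RAW.EVENTS FROM @EVENTS_STAGE "
            "FILE_FORMAT = (TYPE = 'JSON')"
        )
    finally:
        conn.close()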

Our team culture

The data platform team is an enthusiastic group who are passionate about our profession. We continuously maintain our team culture via things like retrospective meetings, weekly socials, an open-door mentality and cross-profession knowledge sharing. We adopt a fail-fast mentality that promotes a safe environment for our team to upskill comfortably. Our team make-up reflects the company ethos of inclusion and diversity; we are made up of a collection of different people/genders/backgrounds and celebrate our differences. Ultimately we are a team and we work together as one: no individual is solely responsible for any area of our pipeline, and our successes and failures are shared.

See more jobs at Nile Bits

Apply for this job

11d

Senior Engineer - ML Ops

Pindrop · US - Remote
ML, Bachelor's degree, remote-first, terraform, airflow, Design, azure, git, c++, docker, kubernetes, python, AWS

Pindrop is hiring a Remote Senior Engineer - ML Ops

Who we are

Are you passionate about innovating at the intersection of technology and personal security? At Pindrop, we recognize that the human voice is a unique personal identifier, increasingly susceptible to sophisticated fraud, including the threat of deepfakes. We're leading the way in developing cutting-edge authentication, fraud prevention, and deepfake detection. Our mission is to provide seamless and secure digital experiences, safeguarding the most personal aspect of our identity: our voice. Here, you'll be part of a team driven by values of Innovation, Customer Advocacy, Excellence, and Impact. We're not just creating a safer digital landscape by fortifying trust and integrity with those we serve, we’re also building a dynamic, supportive workplace where your contributions make a real difference.

Headquartered in Atlanta, GA, Pindrop is backed by world-class investors such as Andreessen Horowitz, IVP, and CapitalG.

 

What you’ll do

As a Senior Software Engineer, you will play a critical role in the development and maintenance of software applications and systems. You will be responsible for leading and contributing to complex software projects, providing technical expertise, and mentoring junior engineers. You will expand capabilities and bring new solutions to market. As a member of the MLOps team you will be responsible for the systems which train models and produce predictions.

 

More specifically, you will:

  • Software Development: Design, develop, test, and maintain our complex software applications, ensuring high-quality code and adherence to best practices. Play a critical role in the development and maintenance of our software products by designing, building, evolving, and scaling state-of-the-art solutions for our Pindrop platform.
  • Technical Leadership: Provide technical leadership and guidance to junior engineers and the development team, including code reviews, architecture decisions, and mentoring. 
  • Architecture and Design: Contribute to the design and architecture of software systems, ensuring scalability, maintainability, and performance
  • Problem Solving: Analyze and solve complex technical problems, and make recommendations for improvements and optimizations.
  • Quality Assurance: Implement and advocate for best practices in testing and quality assurance, including unit testing, integration testing, and automated testing.
  • Code Review: Participate in code reviews and provide constructive feedback to ensure code quality and consistency.
  • Research and Innovation: Stay current with emerging technologies, tools, and programming languages and apply them where relevant to improve software development processes.
  • Security and Compliance: Ensure software adheres to security standards and compliance requirements, addressing vulnerabilities and potential risks.
  • Design and implement cloud solutions, build MLOps on cloud (AWS, Azure, or GCP)
  • Build CI/CD pipeline orchestration with GitLab CI, GitHub Actions, CircleCI, Airflow or similar tools
  • Data science model review: run code and refactor, optimize, containerize, deploy, version, and monitor its quality
  • Validate and add automated tests for Data Science models
  • Work closely with a team of researchers and data scientists to productionize and document research innovations
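
As a hedged sketch of what productionizing a model can look like (one common pattern, not necessarily Pindrop's), the example below serves a stub model behind a small FastAPI endpoint; in practice the stub would be replaced by a versioned artifact loaded at startup, for example from MLflow.

    # Illustrative model-serving endpoint; the "model" is a stub.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Features(BaseModel):
        values: list[float]

    def predict_stub(values: list[float]) -> float:
        return sum(values) / max(len(values), 1)  # placeholder scoring logic

    @app.post("/predict")
    def predict(features: Features) -> dict:
        return {"score": predict_stub(features.values)}

    # Run with: uvicorn app:app --host 0.0.0.0 --port 8000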

 

Who you are

  • You are resilient in the face of challenges, change, and ambiguity
  • You are optimistic and believe that you can make a problem into a solution
  • You are resourceful, excited to uncover innovative solutions and teach yourself something new when needed
  • You take accountability, do the things you say you’ll do, under-promise and over-deliver
  • You are a strong problem-solver with excellent analytical skills.
  • You are an owner and enjoy taking on project leadership as well as mentorship
  • You are a strong verbal and written communicator 

Your skill-set: 

  • Must Have
    • 5-7 years of software engineering experience
    • Experience with cloud computing environments, especially AWS and container-based deployment using Docker and Kubernetes
    • 2-3 years minimum of experience working with Python
    • Experience operating services in production environments
    • A strong understanding of software design principles, software architecture and design patterns as well as software development best practices, including testing, version control, and continuous integration
    • Experience with infrastructure as code tools like Terraform or AWS CDK
    • Experience in monitoring and performance of Production platforms using tech stacks and tools such as Datadog, ELK, Grafana, Prometheus
    • Participation in on-call rotation required
  • Nice to Have
    • Experience with Machine Learning frameworks and libraries such as XGBoost, SciKit-Learn, H2O, TensorFlow, PyTorch, Keras, Spark MLlib
    • Experience with leading industry Machine Learning tools and operation frameworks such as MLflow, Kubeflow, Airflow, Seldon Core, TFServing
    • Experience building microservices and RESTful APIs
    • CI/CD pipelines using tools such as Git and Jenkins.

 

What’s in it for you:

As a Pindropper, you join a rapidly growing company making technology more human with the power of voice. You will work alongside some of the best and brightest. We’re a passionate group committed to excellence - but that doesn’t stop us from enjoying the journey as a team with chess and poker tournaments, catered lunches and happy hours, wellness programming, and more. Because we take our jobs seriously, we add in time for rest with Unlimited PTO, Focus Thursday, and Company-wide Rest Days. 

0-30 (Acclimating)

    • Complete onboarding and attend New Employee Orientation sessions with other new Pindroppers
    • Onboard in the MLOps Team
    • 1:1s with all the team members
    • Get started with your first project, first PR merged

30-60 (Learning)

    • Be a part of planning and contribute to the smaller tasks to fix existing issues
    • Be a part of triaging only the most important issues for the team to be focusing on
    • Add small features/resolve tech debt for the MLOps team

60-90 (Assimilating)

    • Fully acclimated with the team
    • Be able to pick up any task that comes out of sprint planning
    • Be able to design enhancements to the ML platform
    • Teach us something new

 

What we offer 

As a part of Pindrop, you’ll have a direct impact on our growing list of products and the future of security in the voice-driven economy. We hire great people and take care of them. Here’s a snapshot of the benefits we offer:

  • Competitive compensation, including equity for all employees
  • Unlimited Paid Time Off (PTO)
  • 4 company-wide rest days in 2024 where the entire company rests and recharges!
  • Generous health and welfare plans to choose from - including one employer-paid “employee-only” plan!
  • Best-in-class Health Savings Account (HSA) employer contribution
  • Affordable vision and dental plans for you and your family
  • Employer-provided life and disability coverage with additional supplemental options
  • Paid Parental Leave - Equal for all parents, including birth, adoptive & foster parents
    • One year of diaper delivery for your newest addition! It’s our way of welcoming new Pindroplets to the family!
  • Identity protection through Norton LifeLock
  • Remote-first culture with opportunities for in-person team events
  • Recurring monthly home office allowance
  • When we need a break, we keep it fun with happy hours, ping pong and foosball, drinks and snacks, and monthly massages!
  • Remote and in-person team activities (think cheese tastings, chess tournaments, talent shows, murder mysteries, and more!)
  • Company holidays
  • Annual professional development and learning benefit
  • Pick your own Apple MacBook Pro
  • Retirement plan with competitive 401(k) match
  • Wellness Program including Employee Assistance Program, 24/7 Telemedicine

 

What we live by

At Pindrop, our Core Values are fundamental beliefs at the center of all we do. They are our guiding principles that dictate our actions and behaviors. Our Values are deeply embedded into our culture in big and small ways and even help us decide right from wrong when the path forward is unclear. At Pindrop, we believe in taking accountability to make decisions and act in a way that reflects who we are. We truly believe making decisions and acting with our Core Values in mind will help us to achieve our goals and keep Pindrop a great place to work:    

  • Audaciously Innovate - We continue to change the world, and the way people safely engage and interact with technology. As first principle thinkers, we challenge standards, take risks and learn from our mistakes in order to make positive change and continuous improvement. We believe nothing is impossible.
  • Evangelical Customers for Life - We delight, inspire and empower customers from day one and for life. We create a partnership and experience that results in a shared passion.   We are champions for our customers, and our customers become our champions, creating a universal commitment to one another. 
  • Execution Excellence - We do what we say and say what we do. We are accountable for making the tough decisions and necessary tradeoffs to deliver quality and effective solutions on time.
  • Win as a Company - Every time we win, we win as a company. Every time we lose, we lose as a company. We break down silos, support one another, embrace diversity and celebrate our successes. We are better together. 
  • Make a Difference - Every day we have the opportunity to make a positive impact. We operate with dedication, passion, and uncompromising integrity, creating a safer, more secure world.

Not sure if this is you?

We want a diverse, global team, with a broad range of experience and perspectives. If this job sounds great, but you’re not sure if you qualify, apply anyway! We carefully consider every application and will either move forward with you, find another team that might be a better fit, keep in touch for future opportunities, or thank you for your time.

Pindrop is an Equal Opportunity Employer

Here at Pindrop, it is our mission to create and maintain a diverse and inclusive work environment. As an equal opportunity employer, all qualified applicants receive consideration for employment without regard to race, color, age, religion, sex, gender, gender identity or expression, sexual orientation, national origin, genetic information, disability, marital and/or veteran status.

#LI-REMOTE

See more jobs at Pindrop

Apply for this job

12d

Sr Data Engineer GCP

Ingenia AgencyMexico - Remote
Bachelor's degree5 years of experience3 years of experienceairflowsqlapipython

Ingenia Agency is hiring a Remote Sr Data Engineer GCP


At Ingenia Agency we’re looking for a Sr Data Engineer to join our team.

You will be responsible for creating and maintaining the pipelines that make our data available for analysis.

What will you be doing?

  • Sound understanding of Google Cloud Platform.
  • Should have worked on BigQuery, Workflows, or Composer.
  • Should know how to reduce BigQuery costs by reducing the amount of data processed by queries.
  • Should be able to speed up queries by using denormalized data structures, with or without nested repeated fields (see the sketch below).
  • Exploring and preparing data using BigQuery.
  • Experience delivering artifacts such as Python scripts, Dataflow components, SQL, Airflow DAGs, and Bash/Unix scripts.
  • Building and productionizing data pipelines using Dataflow.
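
To illustrate the cost and denormalization points above, here is a minimal sketch using the google-cloud-bigquery client; the project, table, and column names are hypothetical. A dry run reports the bytes a query would scan, which is what on-demand BigQuery queries are billed on, so selecting only the needed columns from a denormalized table measurably cuts cost.

    # Illustrative sketch; `my_project.sales.orders` and its columns are
    # made-up names, not a real dataset.
    from google.cloud import bigquery

    client = bigquery.Client()

    # Query a denormalized table with a nested repeated field (`items`),
    # selecting only the needed columns instead of SELECT *.
    sql = """
        SELECT o.order_id, i.sku, i.quantity
        FROM `my_project.sales.orders` AS o, UNNEST(o.items) AS i
        WHERE o.order_date = '2024-01-01'
    """

    # A dry run prices the query without executing it.
    job = client.query(
        sql,
        job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False),
    )
    print(f"Would process {job.total_bytes_processed / 1e9:.2f} GB")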

What are we looking for?

  • Bachelor's degree in Data Engineering, Big Data Analytics, Computer Engineering, or related field.
  • Age is not a factor.
  • 3 to 5 years of experience in GCP is required.
  • Excellent GCP, BigQuery, and SQL skills are a must.
  • At least 3 years of experience with BigQuery and Dataflow, plus experience with Python and Google Cloud SDK/API scripting to create reusable frameworks.
  • Strong hands-on experience in PowerCenter.
  • In-depth understanding of architecture, table partitioning, clustering, table types, and best practices.
  • Proven experience as a Data Engineer, Software Developer, or similar.
  • Expert proficiency in Python, R, and SQL.
  • Candidates with Google Cloud certification will be preferred.
  • Excellent analytical and problem-solving skills.
  • A knack for independent and group work.
  • Capacity to successfully manage a pipeline of duties with minimal supervision.
  • Advanced English.
  • Be Extraordinary!

What are we offering?

  • Competitive salary
  • Statutory benefits:
    • 10 days of vacation after the first full year
    • IMSS
  • Additional benefits:
    • Contigo Membership (minor medical expenses insurance)
      • Personal accident policy.
      • Funeral assistance.
      • Dental and visual health assistance.
      • Emotional wellness.
      • Benefits & discounts.
      • Network of medical services and providers with discounts.
      • Medical network with preferential prices.
      • Roadside assistance at preferential prices, among others.
    • 3 special half-day permits per year for personal errands and appointments
    • Half day off for birthdays
    • 5 additional vacation days in case of marriage
    • 50% scholarship for language courses at the Anglo
    • Partial scholarships for graduate or master's studies at Tec. de Mty.
    • Agreements with ticketing companies for preferential rates on entertainment events.

See more jobs at Ingenia Agency

Apply for this job

12d

Data Engineer

Charlotte TilburyLondon,England,United Kingdom, Remote Hybrid
terraformairflowsqlDesigngitpythonAWSjavascript

Charlotte Tilbury is hiring a Remote Data Engineer

About Charlotte Tilbury Beauty

Founded by British makeup artist and beauty entrepreneur Charlotte Tilbury MBE in 2013, Charlotte Tilbury Beauty has revolutionised the face of the global beauty industry by de-coding makeup applications for everyone, everywhere, with an easy-to-use, easy-to-choose, easy-to-gift range. Today, Charlotte Tilbury Beauty continues to break records across countries, channels, and categories and to scale at pace.

Over the last 10 years, Charlotte Tilbury Beauty has experienced exceptional growth and is one of the most talked about brands in the beauty industry and beyond. It has become a global sensation across 50 markets (and growing), with over 2,300 employees globally who are part of the Dream Team making the magic happen.

Today, Charlotte Tilbury Beauty is a truly global business, delivering market-leading growth, innovative retail and product launches fuelled by industry-leading tech — all with an internal culture of embracing challenges, disruptive thinking, winning together, and sharing the magic. The energy behind the brand is infectious, and as we grow, we are always looking for extraordinary talent who want to be part of our success and help drive our limitless ambitions.

The Role

 

Data is at the heart of our strategy to engage and delight our customers, and we are determined to harness its power to go as far as we can to deliver a euphoric, personalised experience that they'll love. 

 

We're seeking a skilled and experienced Data Engineer to join our Data function's team of data engineers in the design, build, and maintenance of the pipelines that support this ambition. The ideal candidate will not only see many different routes to engineering success, but also work collaboratively with Engineers, Analysts, Scientists, and stakeholders to design and build robust data products that meet business requirements.

 

Our stack is primarily GCP, with Fivetran handling change data capture, Google Cloud Functions for file ingestion, Dataform & Composer (Airflow) for orchestration, GA & Snowplow for event tracking, and Looker as our BI platform. We use Terraform Cloud to manage our infrastructure programmatically as code.
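
As a rough sketch of the file-ingestion leg of a stack like this (not Charlotte Tilbury's actual code; the bucket, table, and load settings are assumptions), a GCS-triggered Cloud Function can load new files straight into a BigQuery landing table:

    # Hypothetical example: table name and load settings are placeholders.
    from google.cloud import bigquery

    def ingest_file(event, context):
        """Background Cloud Function triggered when a file lands in GCS."""
        uri = f"gs://{event['bucket']}/{event['name']}"
        client = bigquery.Client()
        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.CSV,
            skip_leading_rows=1,
            autodetect=True,  # a production pipeline would pin an explicit schema
            write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        )
        load_job = client.load_table_from_uri(
            uri, "my_project.raw.landing_table", job_config=job_config
        )
        load_job.result()  # block so failures surface in the function's logs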

 

Reporting Relationships

 

This role will report to the Lead Data Engineer.

 

About you and attributes we're looking for



  • Extensive experience with cloud data warehouses and analytics query engines such as BigQuery, Redshift or Snowflake, and a good understanding of cloud technologies in general.
  • Proficient in SQL, Python and Git
  • Prior experience with HCL (Terraform's configuration language), YAML, JavaScript, CLIs and Bash.
  • Prior experience with serverless tooling e.g. Google Cloud Functions, AWS Lambdas, etc.
  • Familiarity with tools such as Fivetran and Dataform/DBT 
  • Bachelor's or Master's degree in Computer Science, Data Science, or related field 
  • Collaborative mindset and a passion for sharing ideas & knowledge
  • Demonstrable experience developing high quality code in the retail sector is a bonus

At Charlotte Tilbury Beauty, our mission is to empower everybody in the world to be the most beautiful version of themselves. We celebrate and support this by encouraging and hiring people with diverse backgrounds, cultures, voices, beliefs, and perspectives into our growing global workforce. By doing so, we better serve our communities, customers, employees - and the candidates that take part in our recruitment process.

If you want to learn more about life at Charlotte Tilbury Beauty, please follow our LinkedIn page!

See more jobs at Charlotte Tilbury

Apply for this job

13d

Staff Ledger Operations Engineer

GeminiRemote (USA)
remote-firstairflowsqlDesignjavapython

Gemini is hiring a Remote Staff Ledger Operations Engineer

About the Company

Gemini is a global crypto and Web3 platform founded by Tyler Winklevoss and Cameron Winklevoss in 2014. Gemini offers a wide range of crypto products and services for individuals and institutions in over 70 countries.

Crypto is about giving you greater choice, independence, and opportunity. We are here to help you on your journey. We build crypto products that are simple, elegant, and secure. Whether you are an individual or an institution, we help you buy, sell, and store your bitcoin and cryptocurrency. 

At Gemini, our mission is to unlock the next era of financial, creative, and personal freedom.

In the United States, we have a flexible hybrid work policy for employees who live within 30 miles of our office headquartered in New York City and our office in Seattle. Employees within the New York and Seattle metropolitan areas are expected to work from the designated office twice a week, unless there is a job-specific requirement to be in the office every workday. Employees outside of these areas are considered part of our remote-first workforce. We believe our hybrid approach for those near our NYC and Seattle offices increases productivity through more in-person collaboration where possible.

The Department: Customer Support (Ledger Operations)

As a team within the Support group, the Ledger Operations team is data-driven and customer-centric. Team members work closely with data scientists, engineers, product managers, and corporate operational stakeholders to reconcile data. Projects range from urgent short sprints to long-term redesigns that improve scalability and automation. Ledger Operations' primary goal is to ensure high-quality, scalable data reconciliation, with proactive monitoring and fewer delays in reactive corrections. This team is responsible for building the “next generation” of reconciliation tools to maintain and expand Gemini’s internal reconciliation processes.

The Role: Staff Ledger Operations Engineer

Responsibilities:

  • Mentor engineers while also self-managing as an individual contributor
  • Design data pipelines, automate ETL processes, and optimize SQL, shaping the “next generation” of Gemini’s ledger reconciliation processes and reporting (including data warehouse changes where necessary)
  • Partner with third-party vendors, the Data Analytics team, and Crypto Core engineers on data processing
  • Maintain and create data adaptors to process real-time data, build data validation processes and reporting, and perform root cause analysis for exception reports (see the sketch below)
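
For a sense of the reconciliation problem this role owns, here is a minimal pandas sketch; the column names, tolerance, and data sources are illustrative assumptions, not Gemini's systems. An outer join flags records that exist on only one side, plus matched records whose amounts disagree; exception reports for root cause analysis would be built from the returned frame.

    # Hypothetical reconciliation sketch; trade_id/amount columns are made up.
    import pandas as pd

    def find_breaks(ledger: pd.DataFrame, exchange: pd.DataFrame) -> pd.DataFrame:
        merged = ledger.merge(
            exchange,
            on="trade_id",
            how="outer",
            suffixes=("_ledger", "_exchange"),
            indicator=True,
        )
        # Records present on only one side are breaks by definition.
        one_sided = merged["_merge"] != "both"
        # Matched records still break if amounts disagree beyond a tolerance.
        mismatched = (merged["_merge"] == "both") & (
            (merged["amount_ledger"] - merged["amount_exchange"]).abs() > 1e-9
        )
        return merged[one_sided | mismatched]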

Minimum Qualifications:

  • 7+ years of experience with schema design and dimensional data modeling
  • 7+ years designing and implementing reconciliation systems while improving existing data pipelines
  • Must have experience with cryptocurrency data
  • Must have experience in trade or ledger reconciliation
  • Must have advanced SQL skills and database design experience
  • Experience building real-time data solutions and processes to automate reconciliation analysis and reporting
  • Experience building and integrating web analytics solutions
  • Experience with FIX, Kafka, REST, and other data messaging types
  • Experience and expertise in Airflow, Databricks, Spark, Hadoop, etc.
  • Skilled in programming languages Python and/or Java
  • Experience with one or more MPP databases (Redshift, BigQuery, Snowflake, etc.)
  • Extensive ETL and database experience with financial transaction systems (banking systems, exchange systems)
  • Extensive experience in financial reporting for “above the line” revenue used to build annual, audited financials
  • Strong technical and business communication skills

Preferred Qualifications:

  • Experience with financial reporting requirements for publicly traded companies

It Pays to Work Here

The compensation & benefits package for this role includes:
  • Competitive starting salary
  • A discretionary annual bonus
  • Long-term incentive in the form of a new hire equity grant
  • Comprehensive health plans
  • 401K with company matching
  • Paid Parental Leave
  • Flexible time off

Salary Range: The base salary range for this role is between $152,000 - $190,000 in the State of New York, the State of California and the State of Washington. This range is not inclusive of our discretionary bonus or equity package. When determining a candidate’s compensation, we consider a number of factors including skillset, experience, job scope, and current market data.

At Gemini, we strive to build diverse teams that reflect the people we want to empower through our products, and we are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Equal Opportunity is the Law, and Gemini is proud to be an equal opportunity workplace. If you have a specific need that requires accommodation, please let a member of the People Team know.

Apply for this job

16d

Senior Data Engineer (Portfolio Companies)

IFSColombo, Sri Lanka, Remote
S3EC2golang6 years of experienceagilenosqlairflowsqlDesignmongodbdockerelasticsearchjenkinsAWS

IFS is hiring a Remote Senior Data Engineer (Portfolio Companies)

Job Description

  • Design, develop, and maintain a generic ingestion framework capable of processing various types of data (structured, semi-structured, unstructured) from customer sources.
  • Implement and optimize ETL (Extract, Transform, Load) pipelines to ensure data integrity, quality, and reliability as data flows into a centralized datastore like Elasticsearch (see the sketch below).
  • Ensure the ingestion framework is scalable, secure, and efficient, and capable of handling large volumes of data in real-time or batch processes.
  • Continuously monitor and enhance the data ingestion process to improve performance, reduce latency, and handle new data sources and formats.
  • Develop automated testing and monitoring tools to ensure the framework operates smoothly and can quickly adapt to changes in data sources or requirements.
  • Provide documentation, support, and training to other team members and stakeholders on using the ingestion framework.
  • Implement large-scale near real-time streaming data processing pipelines.
  • Design, support and continuously enhance the project code base, continuous integration pipeline, etc.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
  • Perform POCs and evaluate different technologies and continue to improve the overall architecture.
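
As a minimal sketch of the ingestion pattern described above (the endpoint, index name, and record shapes are assumptions, not IFS's framework), the official Python client's bulk helper can batch heterogeneous records into Elasticsearch:

    # Illustrative only; endpoint, index, and documents are placeholders.
    from elasticsearch import Elasticsearch, helpers

    es = Elasticsearch("http://localhost:9200")

    def to_actions(records, index="customer-events"):
        # Wrap each raw record in the action format the bulk API expects.
        for record in records:
            yield {"_index": index, "_source": record}

    records = [
        {"event": "login", "user_id": 1},
        {"event": "purchase", "user_id": 2, "amount": 19.99},
    ]

    # helpers.bulk batches the requests and reports per-document failures.
    ok, errors = helpers.bulk(es, to_actions(records), raise_on_error=False)
    print(f"indexed={ok}, errors={errors}")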

Qualifications

  • Experience building and optimizing Big Data data pipelines, architectures and data sets.
  • Strong proficiency in Elasticsearch, its architecture and optimal querying of data.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data systems.
  • One or more years of experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems.
  • Candidates must have 4 to 6 years of experience in a Data Engineer role with a Bachelor's or Master's (preferred) degree in Computer Science, Information Systems, or an equivalent field, and knowledge of the following technologies/tools:
    • Experience working on Big Data processing systems like Hadoop, Spark, Spark Streaming, or Flink Streaming.
    • Experience with SQL systems like Snowflake or Redshift
    • Direct, hands-on experience in two or more of these integration technologies: Java/Python, React, Golang, SQL, NoSQL (Mongo), RESTful APIs.
    • Versed in Agile, APIs, Microservices, Containerization etc.
    • Experience with CI/CD pipeline running on GitHub, Jenkins, Docker, EKS.
    • Knowledge of at least one distributed datastore such as MongoDB, DynamoDB, or HBase.
    • Experience using batch scheduling frameworks like Airflow (preferred), Luigi, Azkaban, etc. is a plus.
    • Experience with AWS cloud services: EC2, S3, DynamoDB, Elasticsearch

See more jobs at IFS

Apply for this job

17d

Analytics Engineer

CLEAR - CorporateNew York, New York, United States (Hybrid)
airflowsqlDesignpython

CLEAR - Corporate is hiring a Remote Analytics Engineer

At CLEAR, we are pioneers in digital and biometric identification, known for reducing friction wherever identity verification is needed. Now, we’re evolving further, building the next generation of products to go beyond ID, empowering our members to harness the power of a networked digital identity. As an Analytics Engineer, you will play a pivotal role in designing and enhancing our data platform, ensuring it supports data-driven insights while safeguarding member privacy and security.


A brief highlight of our tech stack:

  • SQL / Python / Looker / Snowflake / Dagster / dbt

What you'll do:

  • Design and maintain scalable, self-service data platforms enabling Analysts and Engineers to drive automation, testing, security, and high-quality analytics.
  • Develop robust processes for data transformation, structuring, metadata management, and workflow optimization.
  • Own and manage end-to-end data pipelines—from ingestion to transformation, modeling, and visualization—ensuring high data quality (see the sketch after this list).
  • Collaborate with stakeholders across product and business teams to understand requirements and deliver actionable insights.
  • Lead the development of data models and analytics workflows that support strategic decision-making and reporting.
  • Maintain a strong focus on privacy, ensuring that member data is used securely and responsibly.
  • Drive architectural improvements in data processes, continuously improving CLEAR’s data infrastructure.
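
A minimal sketch of the asset pattern behind a Dagster/dbt stack like the one above (the asset names and transformation are hypothetical, not CLEAR's pipelines): downstream assets declare upstream ones as parameters, and Dagster wires the dependency graph from the names.

    # Illustrative only; the data and asset names are placeholders.
    import pandas as pd
    from dagster import Definitions, asset

    @asset
    def raw_members() -> pd.DataFrame:
        # Stand-in for ingestion from a real source such as Snowflake.
        return pd.DataFrame({"member_id": [1, 2], "status": ["active", "lapsed"]})

    @asset
    def active_members(raw_members: pd.DataFrame) -> pd.DataFrame:
        # Dagster infers the dependency on raw_members from the parameter name.
        return raw_members[raw_members["status"] == "active"]

    defs = Definitions(assets=[raw_members, active_members])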

What you're great at:

  • 6+ years of experience in data engineering, with a focus on data transformation, analytics, and cloud-based solutions.
  • Proficient in building and managing data pipelines using orchestration tools (Airflow, Dagster) and big data tools (Spark, Kafka, Snowflake, Databricks).
  • Expertise in modern data tools like dbt and data visualization platforms like Looker and Tableau.
  • Ability to communicate complex technical concepts clearly to both technical and non-technical stakeholders.
  • Experience mentoring and collaborating with team members to foster a culture of learning and development.
  • Comfortable working in a dynamic, fast-paced environment with a passion for leveraging data to solve complex business challenges.

How You'll be Rewarded:

At CLEAR we help YOU move forward - because when you’re at your best, we’re at our best. You’ll work with talented team members who are motivated by our mission of making experiences safer and easier. Our hybrid work environment provides flexibility. In our offices, you’ll enjoy benefits like meals and snacks. We invest in your well-being and learning & development with our stipend and reimbursement programs. 

We offer holistic total rewards, including comprehensive healthcare plans, family building benefits (fertility and adoption/surrogacy support), flexible time off, free OneMedical memberships for you and your dependents, and a 401(k) retirement plan with employer match. The base salary range for this role is $145,000 - $175,000, depending on levels of skills and experience.

The base salary range represents the low and high end of CLEAR’s salary range for this position. Salaries will vary depending on various factors which include, but are not limited to location, education, skills, experience and performance. The range listed is just one component of CLEAR’s total compensation package for employees and other rewards may include annual bonuses, commission, Restricted Stock Units.

About CLEAR

Have you ever had that green-light feeling? When you hit every green light and the day just feels like magic. CLEAR's mission is to create frictionless experiences where every day has that feeling. With more than 25 million passionate members and hundreds of partners around the world, CLEAR’s identity platform is transforming the way people live, work, and travel. Whether it’s at the airport, stadium, or right on your phone, CLEAR connects you to the things that make you, you - unlocking easier, more secure, and more seamless experiences - making them all feel like magic.

CLEAR provides reasonable accommodation to qualified individuals with disabilities or protected needs. Please let us know if you require a reasonable accommodation to apply for a job or perform your job. Examples of reasonable accommodation include, but are not limited to, time off, extra breaks, making a change to the application process or work procedures, policy exceptions, providing documents in an alternative format, live captioning or using a sign language interpreter, or using specialized equipment.

See more jobs at CLEAR - Corporate

Apply for this job

17d

Cloud Data Engineer

DevoteamTunis, Tunisia, Remote
airflowsqlscrum

Devoteam is hiring a Remote Cloud Data Engineer

Job Description

Within the “Plateforme Data” division, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT.
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize data processing and ELT performance using Airflow, DBT, and BigQuery (see the sketch below).
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.
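
As a sketch of what such ELT orchestration can look like (the dbt project path, schedule, and target below are assumptions, not Devoteam's or a client's actual setup), an Airflow DAG can run and then test dbt models on a daily schedule:

    # Illustrative only; the dbt project path and schedule are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_elt",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt/analytics && dbt run --target prod",
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt/analytics && dbt test --target prod",
        )
        # Only test the models after they have been rebuilt.
        dbt_run >> dbt_test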

 

    Qualifications

    Skills

    What will help you join the team?

    A five-year (Bac+5) engineering school degree or university equivalent, with a specialization in computer science.

    • At least 2 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
    • Advanced SQL skills for data optimization and processing.
    • Google Professional Data Engineer certification is a plus.
    • Excellent written and spoken communication (high-quality deliverables and reporting).

    So, if you want to grow, learn, and share, join us!

    See more jobs at Devoteam

    Apply for this job

    18d

    Data Quality Analyst - Remote

    Paramo TechnologiesBuenos Aires, AR - Remote
    airflowsqlpython

    Paramo Technologies is hiring a Remote Data Quality Analyst - Remote

    To apply for this position, you must be based in the Americas, preferably Latin America (the United States of America is not applicable). Applications from other locations will be disqualified from this selection process.

    We are

    a cutting-edge e-commerce company developing products for our own technological platform. Our creative, smart and dedicated teams pool their knowledge and experience to find the best solutions to meet project needs, while maintaining sustainable and long-lasting results. How? By making sure that our teams thrive and develop professionally. Strong advocates of hiring top talent and letting them do what they do best, we strive to create a workplace that allows for an open, collaborative and respectful culture.

    What you will be doing

    As a Data Quality Analyst, your primary responsibility is to ensure that the data used by the company is accurate, complete, and consistent. High-quality data elevates business decision-making, improves operational efficiency, and enhances customer satisfaction. This role also provides continuous automated and manual testing of data sets for use in internal data systems and for delivery from internal systems.

    As part of your essential functions you will have to:

    • Identify data quality issues and work with other teams to resolve them.
    • Establish data quality standards and metrics: work with stakeholders to define the quality standards that data must meet, and establish metrics to measure data quality.
    • Monitor data quality: monitor data quality on an ongoing basis, using automated tools and manual checks to identify issues.
    • Investigate data quality issues: when issues are identified, investigate them to determine their root cause and work with other teams to resolve them.
    • Develop data quality processes: develop processes to ensure that data is checked for quality as it is collected and processed, and that data quality issues are addressed promptly. This includes using tools and technology to make data checks more efficient (see the sketch below).
    • Train stakeholders: train other stakeholders in the company on data quality best practices, so that everyone works towards the same quality goals.
    • Promote improvements in the development process to ensure the integrity and availability of the data.
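
    To make the tooling point concrete, here is a minimal sketch of an automated batch check; the rules and column names are illustrative assumptions, not Paramo's actual standards.

        # Hypothetical data-quality gate; the rules below are placeholders.
        import pandas as pd

        def check_batch(df: pd.DataFrame) -> list[str]:
            failures = []
            if df["customer_id"].isna().any():
                failures.append("completeness: customer_id contains nulls")
            if df["customer_id"].duplicated().any():
                failures.append("uniqueness: duplicate customer_id values")
            if (df["order_total"] < 0).any():
                failures.append("validity: negative order_total values")
            return failures

        batch = pd.DataFrame(
            {"customer_id": [1, 2, 2], "order_total": [10.0, -5.0, 3.0]}
        )
        # In practice these failures would feed monitoring and alerting.
        for failure in check_batch(batch):
            print(failure)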

    Some other responsibilities are:

    • Provide technical directions and mentor other data engineers about data quality.
    • Perform data validity, accuracy, and integrity tests across different components of the Data Platform.
    • Build automated test frameworks and tools to automate the Data Platform services and applications.
    • Automate regression tests and perform functional, integration, and load testing.
    • Articulate issues to BI developers during meetings and particularly in the daily standups.
    • Articulate issues to data engineers and analysts during meetings and particularly in the daily standups.
    • Triage production-level issues if data is affected, working with the involved teams until the issues are resolved.
    • Proactively solve problems and suggest process improvements.
    • Provide test case coverage and defect metrics to substantiate release decisions.

    Knowledge and skills you need to have

    • Bachelor's degree in Computer Science or Information Systems, or 2+ years’ experience with corporate data management systems in high-compliance contexts.
    • 2+ years of experience writing complex SQL on large customer data sets (complex queries).
    • High proficiency in relational or non-relational databases.
    • Knowledgeable about industry data compliance strategies and practices, such as continuous integration, regression testing, and versioning.
    • Familiarity with Big Data environments, dealing with large diverse data sets.
    • Experience with BI projects.
    • Strong scripting experience with any common scripting language.
    • Accountability and openness to being challenged.
    • Excellent communication skills, with the ability to drive work and collaborate across teams.

    Bonus points for the following

    Additional requirements, not essential but "nice to have".

    • Python experience (for data analysis and Airflow)

    Why choose us?

    We provide the opportunity to be the best version of yourself, develop professionally, and create strong working relationships, whether working remotely or on-site. While offering a competitive salary, we also invest in our people's professional development and want to see you grow and love what you do. We are dedicated to listening to our team's needs and are constantly working on creating an environment in which you can feel at home.

    We offer a range of benefits to support your personal and professional development:

    Benefits:

    • 22 days of annual leave.
    • 10 days of public/national holidays.
    • Maternity and paternity leave.
    • Health insurance options.
    • Access to online learning platforms.
    • On-site English classes in some countries, and many more.

    Join our team and enjoy an environment that values and supports your well-being. If this sounds like the place for you, contact us now!

    See more jobs at Paramo Technologies

    Apply for this job

    20d

    Senior Software Engineer (Generative AI)

    ExperianCosta Mesa, CA, Remote
    MLLambdajiraterraformairflowsqlslackpythonAWSjavascriptNode.js

    Experian is hiring a Remote Senior Software Engineer (Generative AI)

    Job Description

    The Experian Consumer Services Generative AI team is accelerating Experian's impact by bringing together data, technology, and data science to build game-changing products and services for our customers. We are looking for a Senior Software Engineer, reporting to the Head of AI/ML Innovation, to support developing and integrating our Generative AI Models with Consumer Services products. These new capabilities will help provide Financial Power to All our customers. As a growing team, we embrace a startup mentality while operating in a large organization. We value speed and impact – and our results and ways of working are transforming the culture of the larger organizations around us.

    Role accountabilities and essential activities

    • You'll develop our generative AI solutions and integrate them with the products of existing software teams
    • Develop a scalable machine learning framework for data science products
    • You will develop scalable pipelines, tools, and services for building production-ready machine-learning models
    • Work with our data scientists to pilot our products with beta customers
    • Maintain our culture of simple, streamlined code and full CI/CD automation
    • Develop simple, streamlined, and well-tested ML pipeline components

    Qualifications

    • You have 5+ years of experience
    • Strong coding experience in Python, plus familiarity with PySpark and SQL
    • Familiarity with popular Python libraries such as pandas, NumPy, Flask, matplotlib
    • Familiarity with the AWS platform and services, including CI/CD automation methods
    • Familiarity with AWS serverless methodology, particularly Fargate, Lambda, ECR
    • Familiarity with CloudFormation, Terraform, or equivalent
    • You have experience working with machine learning or generative AI libraries such as LangChain, LlamaIndex, LangSmith, and Llama Guard
    • Open-source foundation models: Llama 3, Llama 2, Mistral, Falcon, Phi
    • Orchestration frameworks: MLflow, Airflow (see the sketch below)
    • Some knowledge of JavaScript (Node.js) as a front end
    • You have experience or familiarity with Databricks AI services and Mosaic-branded services
    • Comfortable supporting troubleshooting and assessment of production issues when needed
    • You are experienced with monitoring tools such as Splunk, Datadog, Dynatrace
    • Test-writing discipline in standard development tools and processes, e.g., GitHub, Jira, Slack
    • Record of building and maintaining large-scale software systems in production
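
    For flavor, here is a minimal MLflow tracking sketch; the experiment name, parameters, and metrics are illustrative assumptions, not Experian's pipeline.

        # Hypothetical run logging; names and values are placeholders.
        import mlflow

        mlflow.set_experiment("genai-prompt-eval")

        with mlflow.start_run(run_name="baseline"):
            # Log the configuration that produced this run...
            mlflow.log_param("model", "llama3-8b")
            mlflow.log_param("temperature", 0.2)
            # ...and its evaluation results, so runs stay comparable.
            mlflow.log_metric("answer_relevance", 0.87)
            mlflow.log_metric("latency_ms", 412.0)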

    See more jobs at Experian

    Apply for this job

    21d

    Data Engineer

    Clover HealthRemote - Canada
    MLremote-firsttableauairflowpostgressqlDesignqac++pythonAWS

    Clover Health is hiring a Remote Data Engineer

    At Clover, the Business Enablement team spearheads our technological advancement while ensuring robust security and compliance. We deliver user-friendly corporate applications, manage complex data ecosystems, and provide efficient tech solutions across the organization. Our goal is simple: we make it easy for the business to do what’s right for Clover.

    We are looking for a Data Engineer to join our team. You'll work on the development of data pipelines and tools to support our analytics and machine learning development. Applying insights through data is a core part of our thesis as a company — and you will work on a team that is a central part of helping to deliver that promise by making a wide variety of data easily accessible for internal and external consumers. We work primarily in SQL and Python, and our data is stored primarily in Snowflake. You will work with data analysts, other engineers, and healthcare professionals in a unique environment, building tools to improve the health of real people. You should have extensive experience leading data warehousing projects, with advanced knowledge in data cleansing, ingestion, ETL, and data governance.

    As a Data Engineer, you will:

    • Collaborate closely with operations, IT and vendor partners to understand the data landscape and contribute to the vision, development and implementation of the Data Warehouse solution.
    • Recommend technologies and tools to support the future state architecture.
    • Develop standards, processes and procedures that align with best practices in data governance and data management.
    • Be responsible for logical and physical data modeling, load and query performance.
    • Develop new secure data feeds with external parties as well as internal applications.
    • Perform regular analysis and QA, diagnose ETL and database-related issues, perform root cause analysis, and recommend corrective actions to management (see the sketch after this list).
    • Work with cross-functional teams to support the design, development, implementation, monitoring, and maintenance of new ETL programs.
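
    As a sketch of the QA side of this work (connection details, table names, and the date column are assumptions, not Clover's systems), a simple row-count reconciliation between a source feed and the warehouse can gate a load before sign-off:

        # Hypothetical check; credentials and table names are placeholders.
        import psycopg2
        import snowflake.connector

        LOAD_DATE = "2024-06-01"

        with psycopg2.connect("dbname=claims user=etl") as pg:
            with pg.cursor() as cur:
                cur.execute(
                    "SELECT count(*) FROM claims_feed WHERE load_date = %s",
                    (LOAD_DATE,),
                )
                source_count = cur.fetchone()[0]

        sf = snowflake.connector.connect(
            user="etl", password="...", account="acme-xy12345"
        )
        try:
            cur = sf.cursor()
            cur.execute(
                "SELECT count(*) FROM warehouse.claims WHERE load_date = %s",
                (LOAD_DATE,),
            )
            warehouse_count = cur.fetchone()[0]
        finally:
            sf.close()

        # Surface mismatches for root cause analysis instead of passing silently.
        assert source_count == warehouse_count, (
            f"row count mismatch: source={source_count}, warehouse={warehouse_count}"
        )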

    Success in this role looks like:

    • First 90 days:
      • Develop a strong understanding of our existing data ecosystem and data pipelines.
      • Build relationships with various stakeholder departments to understand their day-to-day operations and their usage of and need for Data Eng products.
      • Contribute to the design and implementation of new ETL programs to support Clover's growth and operational efficiency.
      • Perform root cause analysis after issues have been identified and propose both short-term and long-term fixes to increase the stability and accuracy of our pipelines.
    • First 6 months:
      • Provide feedback and propose opportunities for improvement on current data engineering processes and procedures.
      • Work with platform engineers on improving data ecosystem stability, data quality monitoring and data governance.
      • Lead discussions with key stakeholders; propose, design, and implement new Data Eng projects that solve critical business problems.
    • How will success be measured in the future?
      • Continue the creation and management of ETL programs and data assets.
      • Be the technical Data Eng lead for our data squad’s day-to-day operations.
      • Guide and mentor other junior members of the team.

    You should get in touch if:

    • You have a Bachelor’s degree in Computer Science or related field along with 5+ years of experience in ETL programming.
    • You have professional experience working in a healthcare setting. Health Plan knowledge highly desired, Medicare preferred.  
    • You have expertise in most of these technologies: 
      • Python 
      • Snowflake 
      • DBT
      • Airflow 
      • GCP
      • AWS
      • BigQuery
      • Postgres 
      • Data Governance 
      • Some experience with analytics, data science, ML collaboration tools such as Tableau, Mode, Looker

    #LI-Remote

    Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records. We are an E-Verify company.


    Benefits Overview:

    • Financial Well-Being: Our commitment to attracting and retaining top talent begins with a competitive base salary and equity opportunities. Additionally, we offer a performance-based bonus program and regular compensation reviews to recognize and reward exceptional contributions.
    • Physical Well-Being: We prioritize the health and well-being of our employees and their families by offering comprehensive group medical coverage that include coverage for hospitalization, outpatient care, optical services, and dental benefits.
    • Mental Well-Being: We understand the importance of mental health in fostering productivity and maintaining work-life balance. To support this, we offer initiatives such as No-Meeting Fridays, company holidays, access to mental health resources, and a generous annual leave policy. Additionally, we embrace a remote-first culture that supports collaboration and flexibility, allowing our team members to thrive from any location. 
    • Professional Development: We are committed to developing our talent professionally. We offer learning programs, mentorship, professional development funding, and regular performance feedback and reviews.

    Additional Perks:

    • Reimbursement for office setup expenses
    • Monthly cell phone & internet stipend
    • Flexibility to work from home, enabling collaboration with global teams
    • Paid parental leave for all new parents
    • And much more!

    About Clover: We are reinventing health insurance by combining the power of data with human empathy to keep our members healthier. We believe the healthcare system is broken, so we've created custom software and analytics to empower our clinical staff to intervene and provide personalized care to the people who need it most.

    We always put our members first, and our success as a team is measured by the quality of life of the people we serve. Those who work at Clover are passionate and mission-driven individuals with diverse areas of expertise, working together to solve the most complicated problem in the world: healthcare.

    From Clover’s inception, Diversity & Inclusion have always been key to our success. We are an Equal Opportunity Employer and our employees are people with different strengths, experiences and backgrounds, who share a passion for improving people's lives. Diversity not only includes race and gender identity, but also age, disability status, veteran status, sexual orientation, religion and many other parts of one’s identity. All of our employee’s points of view are key to our success, and inclusion is everyone's responsibility.


    See more jobs at Clover Health

    Apply for this job