Data Engineer Remote Jobs

105 Results

+30d

BI Data Engineer

Credible - Remote, United States
C++

Credible is hiring a Remote BI Data Engineer

See more jobs at Credible

Apply for this job

+30d

Senior Data Engineer

SmartMessage - İstanbul, TR - Remote
ML, S3, SQS, Lambda, Master’s Degree, NoSQL, Design, MongoDB, Azure, Python, AWS

SmartMessage is hiring a Remote Senior Data Engineer

Who are we?

We are a globally expanding software technology company that helps brands communicate more effectively with their audiences. We look forward to expanding our people's capabilities and our success in developing high-end solutions beyond existing boundaries, and to establishing our brand as a global powerhouse.

We are free to work from wherever we want and go to the office whenever we like!

What is the role?

We are looking for a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and managing data pipelines, NoSQL databases, and cloud-based data platforms. You will work closely with data scientists and other engineers to design and implement scalable data solutions.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and architectures.
  • Implement data lake solutions on cloud platforms.
  • Develop and manage NoSQL databases (e.g., MongoDB, Cassandra).
  • Work with graph databases (e.g., Neo4j) and big data technologies (e.g., Hadoop, Spark).
  • Utilize cloud services (e.g., S3, Redshift, Lambda, Kinesis, EMR, SQS, SNS).
  • Ensure data quality, integrity, and security.
  • Collaborate with data scientists to support machine learning and AI initiatives.
  • Optimize and tune data processing workflows for performance and scalability.
  • Stay up-to-date with the latest data engineering trends and technologies.

Detailed Responsibilities and Skills:

  • Business Objectives and Requirements:
    • Engage with business IT and data science teams to understand their needs and expectations from the data lake.
    • Define real-time analytics use cases and expected outcomes.
    • Establish data governance policies for data access, usage, and quality maintenance.
  • Technology Stack:
    • Real-time data ingestion using Apache Kafka or Amazon Kinesis.
    • Scalable storage solutions such as Amazon S3, Google Cloud Storage, or Hadoop Distributed File System (HDFS).
    • Real-time data processing using Apache Spark or Apache Flink.
    • NoSQL databases like Cassandra or MongoDB, and specialized time-series databases like InfluxDB.
  • Data Ingestion and Integration:
    • Set up data producers for real-time data streams.
    • Integrate batch data processes to merge with real-time data for comprehensive analytics.
    • Implement data quality checks during ingestion.
  • Data Processing and Management:
    • Utilize Spark Streaming or Flink for real-time data processing.
    • Enrich clickstream data by integrating with other data sources.
    • Organize data into partitions based on time or user attributes.
  • Data Lake Storage and Architecture:
    • Implement a multi-layered storage approach (raw, processed, and aggregated layers).
    • Use metadata repositories to manage data schemas and track data lineage.
  • Security and Compliance:
    • Implement fine-grained access controls.
    • Encrypt data in transit and at rest.
    • Maintain logs of data access and changes for compliance.
  • Monitoring and Maintenance:
    • Continuously monitor the performance of data pipelines.
    • Implement robust error handling and recovery mechanisms.
    • Monitor and optimize costs associated with storage and processing.
  • Continuous Improvement and Scalability:
    • Establish feedback mechanisms to improve data applications.
    • Design the architecture to scale horizontally.
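
To make the ingestion pattern described above concrete, here is a minimal PySpark Structured Streaming sketch that reads a clickstream topic from Kafka and lands it in a date-partitioned raw layer; the broker, topic, and bucket names are placeholders, not SmartMessage's actual configuration:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("clickstream-raw-ingest").getOrCreate()

# Ingest the real-time stream from Kafka (placeholder broker and topic).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clickstream")
          .load())

# Keep the raw payload and derive a date column for partitioning.
raw = (events
       .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
       .withColumn("event_date", to_date(col("timestamp"))))

# Write the raw layer, partitioned by event date, with checkpointing
# so the pipeline can recover from failures.
(raw.writeStream
    .format("parquet")
    .option("path", "s3a://data-lake/raw/clickstream/")
    .option("checkpointLocation", "s3a://data-lake/_checkpoints/clickstream/")
    .partitionBy("event_date")
    .start()
    .awaitTermination())

The same shape carries over to Flink or Kinesis; the parts that matter are the checkpoint location for recovery and the time-based partitions that feed the processed and aggregated layers.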

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • 5+ years of experience in data engineering or related roles.
  • Proficiency in NoSQL databases (e.g., MongoDB, Cassandra) and graph databases (e.g., Neo4j).
  • Strong experience with cloud platforms (e.g., AWS, GCP, Azure).
  • Hands-on experience with big data technologies (e.g., Hadoop, Spark).
  • Proficiency in Python and data processing frameworks.
  • Experience with Kafka, ClickHouse, Redshift.
  • Knowledge of ETL processes and data integration.
  • Familiarity with AI, ML algorithms, and neural networks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and teamwork skills.
  • Entrepreneurial spirit and a passion for continuous learning.

Join our team!

See more jobs at SmartMessage

Apply for this job

+30d

Sr. Data Engineer, Marketing Tech

ML, DevOps, Lambda, Agile, Airflow, SQL, Design, API, C++, Docker, Jenkins, Python, AWS, JavaScript

hims & hers is hiring a Remote Sr. Data Engineer, Marketing Tech

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving more than a million Hims & Hers subscribers.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability.
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources.
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to Data Lake.
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance.
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling.
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics.
  • Partner with the rest of the Data Platform team to set best practices and ensure they are followed.
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources.
  • Partner with machine learning engineers to deploy predictive models.
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies.
  • Partner with DevOps to build IaC and CI/CD pipelines.
  • Support code versioning and code deployments for data pipelines.
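
As a rough illustration of the REST-to-data-lake ingestion named above, the sketch below pages through a hypothetical JSON API and lands each page in S3; the endpoint, bucket, and key layout are invented for the example:

import json

import boto3
import requests

API_URL = "https://api.example.com/v1/events"   # hypothetical endpoint
BUCKET = "data-lake-raw"                        # hypothetical bucket

s3 = boto3.client("s3")
url, page = API_URL, 0
while url:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    body = resp.json()
    # Land each page verbatim in the raw zone of the data lake.
    s3.put_object(
        Bucket=BUCKET,
        Key=f"events/page={page:05d}.json",
        Body=json.dumps(body["results"]).encode("utf-8"),
    )
    url = body.get("next")   # follow pagination until exhausted
    page += 1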

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages.
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed.
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets.
  • Experience working with customer behavior data.
  • Experience with JavaScript, event tracking tools like GTM, analytics tools like Google Analytics and Amplitude, and CRM tools.
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform.
  • Experience with serverless architecture (Google Cloud Functions, AWS Lambda).
  • Experience with IaC technologies like Terraform.
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres.
  • Experience building event streaming pipelines using Kafka/Confluent Kafka.
  • Experience with a modern data stack: Airflow/Astronomer, Fivetran, Tableau/Looker.
  • Experience with containers and container orchestration tools such as Docker or Kubernetes.
  • Experience with Machine Learning & MLOps.
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI).
  • Thorough understanding of SDLC and Agile frameworks.
  • Project management skills and a demonstrated ability to work autonomously.

Nice to Have:

  • Experience building data models using dbt
  • Experience designing and developing systems with desired SLAs and data quality metrics.
  • Experience with microservice architecture.
  • Experience architecting an enterprise-grade data platform.

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$140,000 - $170,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Sr. Data Engineer, Kafka

DevOps, Agile, Terraform, Airflow, Postgres, SQL, Design, API, C++, Docker, Kubernetes, Jenkins, Python, AWS, JavaScript

hims & hers is hiring a Remote Sr. Data Engineer, Kafka

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources to Data Lake
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
  • Partner with the rest of the Data Platform team to set best practices and ensure they are followed
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources
  • Partner with machine learning engineers to deploy predictive models
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies
  • Partner with DevOps to build IaC and CI/CD pipelines
  • Support code versioning and code deployments for data pipelines
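
Given the Kafka focus of this role, here is a minimal confluent-kafka producer sketch of the event-streaming pattern listed above; the broker, topic, and payload are placeholders:

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker:9092"})  # placeholder broker

def on_delivery(err, msg):
    # Surface failed deliveries so the pipeline's monitoring can alert.
    if err is not None:
        print(f"delivery failed: {err}")

# Keyed by user so events for one user stay ordered within a partition.
producer.produce(
    "user-events",                      # placeholder topic
    key="user-123",
    value=b'{"action": "page_view"}',
    callback=on_delivery,
)
producer.poll(0)    # serve delivery callbacks
producer.flush()    # block until queued messages are delivered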

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
  • Demonstrated experience writing clean, efficient & well-documented Python code and are willing to become effective in other languages as needed
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform
  • Experience building event streaming pipelines using Kafka/Confluent Kafka
  • Experience with IaC technologies like Terraform
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
  • Experience with Databricks platform
  • Experience with a modern data stack: Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
  • Experience with containers and container orchestration tools such as Docker or Kubernetes
  • Experience with Machine Learning & MLOps
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI)
  • Thorough understanding of SDLC and Agile frameworks
  • Project management skills and a demonstrated ability to work autonomously

Nice to Have:

  • Experience building data models using dbt
  • Experience with Javascript and event tracking tools like GTM
  • Experience designing and developing systems with desired SLAs and data quality metrics
  • Experience with microservice architecture
  • Experience architecting an enterprise-grade data platform

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$140,000 - $170,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Junior/Mid Data Analytics Engineer

EXUS - Bucharest, Romania, Remote

EXUS is hiring a Remote Junior/Mid Data Analytics Engineer

EXUS is an enterprise software company, founded in 1989 with the vision of simplifying risk management software. EXUS launched its Financial Suite (EFS) in 2003 to support financial entities worldwide in improving their results. Today, EXUS Financial Suite (EFS) is trusted by risk professionals in more than 32 countries worldwide (MENA, EU, SEA). We introduce simplicity and intelligence into their business processes through technology, improving their collections performance.

Our people constitute the source of inspiration that drives us forward and helps us fulfill our purpose of being role models for a better world.
This is your chance to be part of a highly motivated, diverse, and multidisciplinary team, which embraces breakthrough thinking and technology to create software that serves people.

Our shared Values:

  • We are transparent and direct
  • We are positive and fun, never cynical or sarcastic
  • We are eager to learn and explore
  • We put the greater good first
  • We are frugal and we do not waste resources
  • We are fanatically disciplined, we deliver on our promises

We are EXUS! Are you?

Join our dynamic Data Analytics Team as we expand our capabilities into data lakehouse architecture. We are seeking a Junior/Mid Data Analytics Engineer who is enthusiastic about creating compelling data visualizations, communicating them effectively to customers, conducting training sessions, and gaining experience in managing ETL processes for big data.

Key Responsibilities:

  • Develop and maintain reports and dashboards using leading visualization tools, and craft advanced SQL queries for additional report generation.
  • Deliver training sessions on our Analytic Solution and effectively communicate findings and insights to both technical and non-technical customer audiences.
  • Collaborate with business stakeholders to gather and analyze requirements.
  • Debug issues in the front-end analytic tool, investigate underlying causes, and resolve these issues.
  • Monitor and maintain ETL processes as part of our transition to a data lakehouse architecture.
  • Proactively investigate and implement new data analytics technologies and methods.

Required Skills and Qualifications:

  • A BSc or MSc degree in Computer Science, Engineering, or a related field.
  • 1-5 years of experience with data visualization tools and techniques. Knowledge of MicroStrategy and Apache Superset is a plus.
  • 1-5 years of experience with Data Warehouses, Big Data, and/or Cloud technologies. Exposure to these areas in academic projects, internships, or entry-level roles is also acceptable.
  • Familiarity with PL/SQL and practical experience with SQL for data manipulation and analysis. Hands-on experience through academic coursework, personal projects, or job experience is valued.
  • Familiarity with data Lakehouse architecture.
  • Excellent analytical skills to understand business needs and translate them into data models.
  • Organizational skills with the ability to document work clearly and communicate it professionally.
  • Ability to independently investigate new technologies and solutions.
  • Strong communication skills, capable of conducting presentations and engaging effectively with customers in English.
  • Demonstrated ability to work collaboratively in a team environment.

What we offer:

  • Competitive salary
  • Friendly, pleasant, and creative working environment
  • Remote Working
  • Development Opportunities
  • Private Health Insurance Allowance

Privacy Notice for Job Applications: https://www.exus.co.uk/en/careers/privacy-notice-f...

See more jobs at EXUS

Apply for this job

+30d

Sr Data Engineer

Verisk - Jersey City, NJ, Remote
Lambda, SQL, Design, Linux, Python, AWS

Verisk is hiring a Remote Sr Data Engineer

Job Description

We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data pipeline architecture. The ideal candidate is an experienced data pipeline builder and data wrangler with strong experience in handling data at scale. The Data Engineer will support our software developers, data analysts and data scientists on various data initiatives.

This is a remote role that can be done anywhere in the continental US; work is on Eastern time zone hours.

Why this role

This is a highly visible role within the enterprise data lake team. Working within our Data group and with business analysts, you will be responsible for leading the creation of the data architecture that produces our data assets and enables our data platform. This role requires working closely with business leaders, architects, engineers, data scientists, and a wide range of stakeholders throughout the organization to build and execute our strategic data architecture vision.

Job Duties

  • Extensive understanding of SQL queries. Ability to fine-tune queries using RDBMS performance parameters such as indexes, partitioning, EXPLAIN plans, and cost optimizers.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies stack
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Working with data scientists and industry leaders to understand data needs and design appropriate data models.
  • Participate in the design and development of the AWS-based data platform and data analytics.

Qualifications

Skills Needed

  • Design and implement data ETL frameworks for a secured data lake, creating and maintaining an optimal pipeline architecture.
  • Examine complex data to optimize the efficiency and quality of the data being collected, resolve data quality problems, and collaborate with database developers to improve systems and database designs.
  • Hands-on experience building data applications using AWS Glue, Lake Formation, Athena, AWS Batch, AWS Lambda, Python, and Linux shell and batch scripting.
  • Hands-on experience with AWS database services (Redshift, RDS, DynamoDB, Aurora, etc.).
  • Experience in writing advanced SQL scripts involving self-joins, window functions, correlated subqueries, CTEs, etc.
  • Strong understanding and experience using data management fundamentals, including concepts such as data dictionaries, data models, validation, and reporting.
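
For a concrete picture of the SQL level expected, here is a small self-contained example combining a CTE with a window function, run through Python's sqlite3 module as a stand-in engine; the orders table is invented for illustration, and the same SQL shape applies on Redshift or Aurora:

import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite 3.25+
conn.executescript("""
CREATE TABLE orders (customer_id TEXT, order_date TEXT, amount REAL);
INSERT INTO orders VALUES
    ('c1', '2024-01-05', 120.0),
    ('c1', '2024-03-09',  80.0),
    ('c2', '2024-02-11', 200.0);
""")

# CTE + window function: the latest order per customer.
QUERY = """
WITH ranked AS (
    SELECT customer_id, order_date, amount,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id ORDER BY order_date DESC
           ) AS rn
    FROM orders
)
SELECT customer_id, order_date, amount FROM ranked WHERE rn = 1;
"""

for row in conn.execute(QUERY):
    print(row)   # ('c1', '2024-03-09', 80.0) and ('c2', '2024-02-11', 200.0)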

Education and Training

  • 10 years of full-time software engineering experience preferred, with at least 4 years in an AWS environment focused on application development.
  • Bachelor’s degree or foreign equivalent degree in Computer Science, Software Engineering, or related field
  • US citizenship required

See more jobs at Verisk

Apply for this job

+30d

Senior Data Engineer

Synack - Remote in the US
C++

Synack is hiring a Remote Senior Data Engineer

See more jobs at Synack

Apply for this job

+30d

Data Engineer

Devoteam - Tunis, Tunisia, Remote
Airflow, SQL, Scrum

Devoteam is hiring a Remote Data Engineer

Job Description

Within the "Data Platform" department, the consultant will join a SCRUM team and focus on a specific functional scope.

Your role will be to contribute to data projects by bringing your expertise to the following tasks:

  • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT (a short BigQuery sketch follows this list).
  • Collaborate with business teams to understand data requirements and design appropriate solutions.
  • Optimize data processing and ELT performance using Airflow, DBT, and BigQuery.
  • Implement data quality processes to guarantee data integrity and consistency.
  • Work closely with engineering teams to integrate data pipelines into existing applications and services.
  • Stay up to date with new technologies and best practices in data processing and analytics.
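
As referenced in the list above, here is a minimal google-cloud-bigquery sketch of an ELT-style transform; the project, dataset, and table names are placeholders:

from google.cloud import bigquery

client = bigquery.Client(project="demo-project")  # placeholder project

# ELT: the raw data is already loaded; reshape it inside BigQuery.
sql = """
CREATE OR REPLACE TABLE analytics.daily_sessions AS
SELECT user_id, DATE(event_ts) AS day, COUNT(*) AS sessions
FROM raw.events
GROUP BY user_id, day
"""

client.query(sql).result()  # wait for the query job to finish
print("analytics.daily_sessions refreshed")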

 

    Qualifications

    • Master's degree (Bac+5) from an engineering school or equivalent university degree, with a specialization in computer science.
    • At least 4 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
    • Advanced command of SQL for data processing and optimization.
    • Google Professional Data Engineer certification is a plus.
    • Very good written and oral communication (high-quality deliverables and reporting).

    See more jobs at Devoteam

    Apply for this job

    +30d

    Data Engineer (Australia)

    DemystData - Australia, Remote
    Sales, S3, EC2, Lambda, Remote-first, Design, Python, AWS

    DemystData is hiring a Remote Data Engineer (Australia)

    Our Solution

    Demyst unlocks innovation with the power of data. Our platform helps enterprises solve strategic use cases, including lending, risk, digital origination, and automation, by harnessing the power and agility of the external data universe. We are known for harnessing rich, relevant, integrated, linked data to deliver real value in production. We operate as a distributed team across the globe and serve over 50 clients as a strategic external data partner. Frictionless external data adoption within digitally advancing enterprises is unlocking market growth and allowing solutions to finally get out of the lab. If you actually like to get things done and deployed, Demyst is your new home.

    The Opportunity

    As a Data Engineer at Demyst, you will be powering the latest technology at leading financial institutions around the world. You may be solving a fintech's fraud problems or crafting a Fortune 500 insurer's marketing campaigns. Using innovative data sets and Demyst's software architecture, you will use your expertise and creativity to build best-in-class solutions. You will see projects through from start to finish, assisting in every stage from testing to integration.

    To meet these challenges, you will access data using Demyst's proprietary Python library via our JupyterHub servers, and utilize our cloud infrastructure built on AWS, including Athena, Lambda, EMR, EC2, S3, and other products. For analysis, you will leverage AutoML tools, and for enterprise data delivery, you'll work with our clients' data warehouse solutions like Snowflake, Databricks, and more.
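
    For flavor, here is a generic boto3 sketch of querying S3-backed data through Athena; this is plain AWS SDK usage, not Demyst's proprietary library, and the database, table, and result bucket are placeholders:

    import time

    import boto3

    athena = boto3.client("athena")

    # Start the query; results land in the placeholder S3 location.
    qid = athena.start_query_execution(
        QueryString="SELECT company_name FROM business_registry LIMIT 10",
        QueryExecutionContext={"Database": "external_data"},
        ResultConfiguration={"OutputLocation": "s3://demo-results/athena/"},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if status == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        print(rows)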

    Demyst is a remote-first company. The candidate must be based in Australia.

    Responsibilities

    • Collaborate with internal project managers, sales directors, account managers, and clients’ stakeholders to identify requirements and build external data-driven solutions
    • Perform data appends, extracts, and analyses to deliver curated datasets and insights to clients to help achieve their business objectives
    • Understand and keep current with external data landscapes such as consumer, business, and property data.
    • Engage in projects involving entity detection, record linking, and data modelling
    • Design scalable code blocks using Demyst’s APIs/SDKs that can be leveraged across production projects
    • Govern releases, change management, and maintenance of production solutions in close coordination with clients' IT teams

    Qualifications

    • Bachelor's in Computer Science, Data Science, Engineering or similar technical discipline (or commensurate work experience); Master's degree preferred
    • 1-3 years of Python programming (with Pandas experience)
    • Experience with CSV, JSON, parquet, and other common formats
    • Data cleaning and structuring (ETL experience)
    • Knowledge of API (REST and SOAP), HTTP protocols, API Security and best practices
    • Experience with SQL, Git, and Airflow
    • Strong written and oral communication skills
    • Excellent attention to detail
    • Ability to learn and adapt quickly

    Benefits

    • Distributed working team and culture
    • Generous benefits and competitive compensation
    • Collaborative, inclusive work culture: all-company offsites and local get-togethers in Bangalore
    • Annual learning allowance
    • Office setup allowance
    • Generous paid parental leave
    • Be a part of the exploding external data ecosystem
    • Join an established fast growth data technology business
    • Work with the largest consumer and business external data market in an emerging industry that is fueling AI globally
    • Outsized impact in a small but rapidly growing team offering real autonomy and responsibility for client outcomes
    • Stretch yourself to help define and support something entirely new that will impact billions
    • Work within a strong, tight-knit team of subject matter experts
    • Small enough where you matter, big enough to have the support to deliver what you promise
    • International mobility available for top performer after two years of service

    Demyst is committed to creating a diverse, rewarding career environment and is proud to be an equal opportunity employer. We strongly encourage individuals from all walks of life to apply.

    See more jobs at DemystData

    Apply for this job

    +30d

    Data Engineer - AWS

    Tiger Analytics - Jersey City, New Jersey, United States, Remote
    S3, Lambda, Airflow, SQL, Design, AWS

    Tiger Analytics is hiring a Remote Data Engineer - AWS

    Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

    As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Apache Airflow.

    Key Responsibilities:

    • Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
    • Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
    • Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring.
    • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
    • Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies (a short Lambda-to-Glue sketch follows the requirements list below).

    Requirements:

    • 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
    • Hands-on experience in designing and building data pipelines on AWS cloud infrastructure.
    • Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, Amazon Redshift, etc.
    • Strong experience with Databricks and Apache Spark for data processing and analytics.
    • Hands-on experience with Apache Airflow for orchestrating and scheduling data pipelines.
    • Solid understanding of data modeling, database design principles, and SQL.
    • Experience with version control systems (e.g., Git) and CI/CD pipelines.
    • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
    • Strong problem-solving skills and attention to detail.
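
    As flagged above, here is a minimal sketch wiring two of the named services together: an AWS Lambda handler that kicks off a Glue job run. The job name and arguments are placeholders, not a specific production setup:

    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        # Trigger a Glue ETL job, e.g. from an S3 upload notification.
        run = glue.start_job_run(
            JobName="nightly-transform",                     # placeholder job
            Arguments={"--source_prefix": event.get("prefix", "raw/")},
        )
        return {"JobRunId": run["JobRunId"]}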

    This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

    See more jobs at Tiger Analytics

    Apply for this job

    +30d

    Data Engineer - Snowflake

    Tiger Analytics - Chicago, Illinois, United States, Remote Hybrid
    Design

    Tiger Analytics is hiring a Remote Data Engineer - Snowflake

    Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team in the world.

    The Data Engineer will be responsible for architecting, designing, and implementing advanced analytics capabilities. The right candidate will have broad skills in database design, be comfortable dealing with large and complex data sets, have experience building self-service dashboards, be comfortable using visualization tools, and be able to apply those skills to generate insights that help solve business challenges. We are looking for someone who can bring their vision to the table and implement positive change in taking the company's data analytics to the next level.

    Key Responsibilities:

    Data Integration:

    Implement and maintain data synchronization between on-premises Oracle databases and Snowflake using Kafka and CDC tools.

    Support Data Modeling:

    Assist in developing and optimizing the data model for Snowflake, ensuring it supports our analytics and reporting requirements.

    Data Pipeline Development:

    Design, build, and manage data pipelines for the ETL process, using Airflow for orchestration and Python for scripting, to transform raw data into a format suitable for our new Snowflake data model.
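
    To illustrate the orchestration described above, here is a minimal Airflow DAG sketch with placeholder task bodies; a real pipeline would read the Kafka/CDC stream and MERGE into Snowflake:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull changed rows from the Kafka/CDC stream")

    def transform():
        print("reshape raw rows to fit the Snowflake data model")

    def load():
        print("MERGE the transformed batch into Snowflake")

    with DAG(
        dag_id="oracle_to_snowflake_etl",   # placeholder name
        start_date=datetime(2024, 1, 1),
        schedule="@hourly",                 # Airflow 2.4+ argument name
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load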

    Reporting Support:

    Collaborate with the data architect to ensure the data within Snowflake is structured in a way that supports efficient and insightful reporting.

    Technical Documentation:

    Create and maintain comprehensive documentation of data pipelines, ETL processes, and data models to ensure best practices are followed and knowledge is shared within the team.

    Tools and Skillsets:

    Data engineering: Proven track record of developing and maintaining data pipelines and data integration projects.

    Databases: Strong experience with Oracle, Snowflake, and Databricks.

    Data Integration Tools: Proficiency in using Kafka and CDC tools for data ingestion and synchronization.

    Orchestration Tools: Expertise in Airflow for managing data pipeline workflows.

    Programming: Advanced proficiency in Python and SQL for data processing tasks.

    Data Modeling: Understanding of data modeling principles and experience with data warehousing solutions.

    Cloud Platforms: Knowledge of cloud infrastructure and services, preferably Azure, as it relates to Snowflake and Databricks integration.

    Collaboration Tools: Experience with version control systems (like Git) and collaboration platforms.

    CI/CD Implementation: Utilize CI/CD tools to automate the deployment of data pipelines and infrastructure changes, ensuring high-quality data processing with minimal manual intervention.

    Communication: Excellent communication and teamwork skills, with a detail-oriented mindset. Strong analytical skills, with the ability to work independently and solve complex problems.

    Requirements

    • 8+ years of overall industry experience specifically in data engineering
    • 5+ years of experience building and deploying large-scale data processing pipelines in a production environment.
    • Strong experience in Python, SQL, and PySpark
    • Experience creating and optimizing complex data processing and data transformation pipelines using Python
    • Experience with the Snowflake cloud data warehouse and the dbt tool
    • Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and working familiarity with a variety of databases
    • Understanding of data warehouse (DWH) systems and of migration from DWH to data lakes/Snowflake
    • Understanding of ELT and ETL patterns and when to use each; understanding of data models and of transforming data into those models
    • Strong analytic skills related to working with unstructured datasets
    • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management

    This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

    See more jobs at Tiger Analytics

    Apply for this job

    +30d

    Senior Data Engineer

    Alt - US Remote
    Airflow, Postgres, Design, C++, Python, AWS

    Alt is hiring a Remote Senior Data Engineer

    At Alt, we’re on a mission to unlock the value of alternative assets, and looking for talented people who share our vision. Our platform enables users to exchange, invest, value, securely store, and authenticate their collectible cards. And we envision a world where anything is an investable asset. 

    To date, we’ve raised over $100 million from thought leaders at the intersection of culture, community, and capital. Some of our investors include Alexis Ohanian’s fund Seven Seven Six, the founders of Stripe, Coinbase co-founder Fred Ehrsam, BlackRock co-founder Sue Wagner, the co-founders of AngelList, First Round Capital, and BoxGroup. We’re also backed by professional athletes including Tom Brady, Candace Parker, Giannis Antetokounmpo, Alex Morgan, Kevin Durant, and Marlon Humphrey.

    Alt is a dedicated equal opportunity employer committed to creating a diverse workforce. We celebrate our differences and strive to create an inclusive environment for all. We are focused on fostering a culture of empowerment which starts with providing our employees with the resources needed to reach their full potential.

    What we are looking for:

    We are seeking a Senior Data Engineer who is eager to make a significant impact. In this role, you'll get the opportunity to leverage your technical expertise and problem-solving skills to solve some of the hardest data problems in the hobby. Your primary focus in this role will be on enhancing and optimizing our pricing engine to support strategic business goals. Our ideal candidate is passionate about trading cards, has a strong sense of ownership, and enjoys challenges. At Alt, data is core to everything we do and is a differentiator for our customers. The team's scope covers data pipeline development, search infrastructure, web scraping, detection algorithms, internal tooling, and data quality. We give our engineers a lot of individual responsibility and autonomy, so your ability to make good trade-offs and exercise good judgment is essential.

    The impact you will make:

    • Partner with engineers and cross-functional stakeholders to contribute to all phases of algorithm development, including ideation, prototyping, design, and production
    • Build, iterate, productionize, and own Alt's valuation models
    • Leverage background in pricing strategies and models to develop innovative pricing solutions
    • Design and implement scalable, reliable, and maintainable machine learning systems
    • Partner with product to understand customer requirements and prioritize model features

    What you bring to the table:

    • Experience: 5+ years of experience in software development, with a proven track record of developing and deploying models in production. Experience with pricing models preferred.
    • Technical Skills: Proficiency in programming languages and tools such as Python, AWS, Postgres, Airflow, Datadog, and JavaScript.
    • Problem-Solving: A knack for solving tough problems and a drive to take ownership of your work.
    • Communication: Effective communication skills with the ability to ship solutions quickly.
    • Product Focus: Excellent product instincts, with a user-first approach when designing technical solutions.
    • Team Player: A collaborative mindset that helps elevate the performance of those around you.
    • Industry Knowledge: Knowledge of the sports/trading card industry is a plus.

    What you will get from us:

    • Ground floor opportunity as an early member of the Alt team; you’ll directly shape the direction of our company. The opportunities for growth are truly limitless.
    • An inclusive company culture that is being built intentionally to foster an environment that supports and engages talent in their current and future endeavors.
    • $100/month work-from-home stipend
    • $200/month wellness stipend
    • WeWork office stipend
    • 401(k) retirement benefits
    • Flexible vacation policy
    • Generous paid parental leave
    • Competitive healthcare benefits, including HSA, for you and your dependent(s)

    Alt's compensation package includes a competitive base salary benchmarked against real-time market data, as well as equity for all full-time roles. We want all full-time employees to be invested in Alt and to be able to take advantage of that investment, so our equity grants include a 10-year exercise window. The base salary range for this role is: $194,000 - $210,000. Offers may vary from the amount listed based on geography, candidate experience and expertise, and other factors.

    See more jobs at Alt

    Apply for this job

    Senior Data Platform Engineer

    FanDuel - Remote

    FanDuel is hiring a Remote Senior Data Platform Engineer

    See more jobs at FanDuel

    Apply for this job

    +30d

    Lead Data Engineer

    Devoteam - Tunis, Tunisia, Remote
    Airflow, SQL, Scrum

    Devoteam is hiring a Remote Lead Data Engineer

    Job Description

    Within the "Data Platform" department, the consultant will join a SCRUM team and focus on a specific functional scope.

    Your role will be to contribute to data projects by bringing your expertise to the following tasks:

    • Design, develop, and maintain robust, scalable data pipelines on Google Cloud Platform (GCP), using tools such as BigQuery, Airflow, Looker, and DBT.
    • Collaborate with business teams to understand data requirements and design appropriate solutions.
    • Optimize data processing and ELT performance using Airflow, DBT, and BigQuery.
    • Implement data quality processes to guarantee data integrity and consistency.
    • Work closely with engineering teams to integrate data pipelines into existing applications and services.
    • Stay up to date with new technologies and best practices in data processing and analytics.

     

      Qualifications

      • Master's degree (Bac+5) from an engineering school or equivalent university degree, with a specialization in computer science.
      • At least 3 years of experience in data engineering, with significant experience in a GCP-based cloud environment.
      • Advanced command of SQL for data processing and optimization.
      • Google Professional Data Engineer certification is a plus.
      • Very good written and oral communication (high-quality deliverables and reporting).

      See more jobs at Devoteam

      Apply for this job

      +30d

      Sr. Data Engineer

      Talent Connection - Pleasanton, CA, Remote
      Design, Java

      Talent Connection is hiring a Remote Sr. Data Engineer

      Job Description

      Position Overview: As a Sr. Data Engineer, you will be pivotal in developing and maintaining data solutions that enhance our client's reporting and analytics capabilities. You will leverage a variety of data technologies to construct scalable, efficient data pipelines that support critical business insights and decision-making processes.

      Key Responsibilities:

      • Architect and design data pipelines that meet reporting and analytics requirements.
      • Develop robust and scalable data pipelines to integrate data from diverse sources into a cloud-based data platform.
      • Convert business needs into architecturally sound data solutions.
      • Lead data modernization projects, providing technical guidance and setting design standards.
      • Optimize data performance and ensure prompt resolution of issues.
      • Collaborate with cross-functional teams to create efficient data flows.

      Qualifications

      Required Skills and Experience:

      • 7+ years of experience in data engineering and pipeline development.
      • 5+ years of experience in data modeling for data warehousing and analytics.
      • Proficiency with modern data architecture and cloud data platforms, including Snowflake and Azure.
      • Bachelor’s Degree in Computer Science, Information Systems, Engineering, Business Analytics, or a related field.
      • Strong skills in programming languages such as Java and Python.
      • Experience with data orchestration tools and DevOps/Data Ops practices.
      • Excellent communication skills, capable of simplifying complex information.

      Preferred Skills:

      • Experience in the retail industry.
      • Familiarity with reporting tools such as MicroStrategy and Power BI.
      • Experience with tools like Streamsets and dbt.

      See more jobs at Talent Connection

      Apply for this job

      +30d

      Data Engineer

      SonderMind - Denver, CO or Remote
      S3, Scala, Airflow, SQL, Design, Java, C++, Python, AWS

      SonderMind is hiring a Remote Data Engineer

      About SonderMind

      At SonderMind, we know that therapy works. SonderMind provides accessible, personalized mental healthcare that produces high-quality outcomes for patients. SonderMind's individualized approach to care starts with using innovative technology to help people not just find a therapist, but find the right, in-network therapist for them, should they choose to use their insurance. From there, SonderMind's clinicians are committed to delivering best-in-class care to all patients by focusing on high-quality clinical outcomes. To enable our clinicians to thrive, SonderMind defines care expectations while providing tools such as clinical note-taking, secure telehealth capabilities, outcome measurement, messaging, and direct booking.

      To follow the latest SonderMind news, get to know our clients, and learn about what it’s like to work at SonderMind, you can follow us on Instagram, Linkedin, and Twitter. 

      About the Role

      In this role, you will be responsible for designing, building, and managing the information infrastructure systems used to collect, store, process, and distribute data. You will also be tasked with transforming data into a format that can be easily analyzed. You will work closely with data engineers on data architectures and with data scientists and business analysts to ensure they have the data necessary to complete their analyses.

      Essential Functions

      • Strategically design, construct, install, test, and maintain highly scalable data management systems
      • Develop and maintain databases, data processing procedures, and pipelines
      • Integrate new data management technologies and software engineering tools into existing structures
      • Develop processes for data mining, data modeling, and data production
      • Translate complex functional and technical requirements into detailed architecture, design, and high-performing software and applications
      • Create custom software components and analytics applications
      • Troubleshoot data-related issues and perform root cause analysis to resolve them
      • Manage overall pipeline orchestration
      • Optimize data warehouse performance

       

      What does success look like?

      Success in this role will be measured by the seamless and efficient operation of our data infrastructure. This includes minimal downtime, accurate and timely data delivery, and the successful implementation of new technologies and tools. The individual will have demonstrated their ability to collaborate effectively to define solutions with both technical and non-technical team members across data science, engineering, product, and our core business functions. They will have made significant contributions to improving our data systems, whether through optimizing existing processes or developing innovative new solutions. Ultimately, their work will enable more informed and effective decision-making across the organization.

       

      Who You Are 

      • Bachelor’s degree in Computer Science, Engineering, or a related field
      • Minimum of three years' experience as a Data Engineer or in a similar role
      • Experience with data science and analytics engineering is a plus
      • Experience with AI/ML in GenAI or data software - including vector databases - is a plus
      • Proficient with scripting and programming languages (Python, Java, Scala, etc.)
      • In-depth knowledge of SQL and other database related technologies
      • Experience with Snowflake, DBT, BigQuery, Fivetran, Segment, etc.
      • Experience with AWS cloud services (S3, RDS, Redshift, etc.)
      • Experience with data pipeline and workflow management tools such as Airflow
      • Strong negotiation and interpersonal skills: written, verbal, analytical
      • Motivated and influential – proactive with the ability to adhere to deadlines; work to “get the job done” in a fast-paced environment
      • Self-starter with the ability to multi-task

      Our Benefits 

      The anticipated salary range for this role is between $130,000 and $160,000 per year.

      As a leader in redesigning behavioral health, we are walking the walk with our employee benefits. We want the experience of working at SonderMind to accelerate people’s careers and enrich their lives, so we focus on meeting SonderMinders wherever they are and supporting them in all facets of their life and work.

      Our benefits include:

      • A commitment to fostering flexible hybrid work
      • A generous PTO policy with a minimum of three weeks off per year
      • Free therapy coverage benefits to ensure our employees have access to the care they need (must be enrolled in our medical plans to participate)
      • Competitive Medical, Dental, and Vision coverage with plans to meet every need, including HSA ($1,100 company contribution) and FSA options
      • Employer-paid short-term, long-term disability, life & AD&D to cover life's unexpected events. Not only that, we also cover the difference in salary for up to seven (7) weeks of short-term disability leave (after the required waiting period) should you need to use it.
      • Eight weeks of paid Parental Leave (if the parent also qualifies for STD, this benefit is additive, allowing between 8 and 16 weeks of paid leave)
      • 401(k) retirement plan with 100% matching, immediately vested, on up to 4% of base salary
      • Travel to Denver 1x a year for annual Shift gathering
      • Fourteen (14) company holidays
      • Company Shutdown between Christmas and New Year's
      • Supplemental life insurance, pet insurance coverage, commuter benefits and more!

      Application Deadline

      Recruitment for this position is ongoing, and the role will remain open until filled.


      Equal Opportunity 
      SonderMind does not discriminate in employment opportunities or practices based on race, color, creed, sex, gender, gender identity or expression, pregnancy, childbirth or related medical conditions, religion, veteran and military status, marital status, registered domestic partner status, age, national origin or ancestry, physical or mental disability, medical condition (including genetic information or characteristics), sexual orientation, or any other characteristic protected by applicable federal, state, or local laws.

      Apply for this job

      +30d

      Data Engineer

      Tiger Analytics, London, England, United Kingdom - Remote Hybrid

      Tiger Analytics is hiring a Remote Data Engineer

      Tiger Analytics is pioneering what AI and analytics can do to solve some of the toughest problems faced by organizations globally. We develop bespoke solutions powered by data and technology for several Fortune 100 companies. We have offices in multiple cities across the US, UK, Canada, India, and Singapore, and a substantial remote global workforce.

      If you are passionate about working on business problems that can be solved using structured and unstructured data at a large scale, Tiger Analytics would like to talk to you. We are seeking an experienced and dynamic Data Engineer to play a key role in designing and implementing robust data solutions that help solve clients' complex business problems.

      Responsibilities:

      • Design, develop, and maintain scalable data pipelines using Scala, DBT, and SQL (see the sketch after this list).
      • Implement and optimize distributed data processing solutions using MPP databases and technologies.
      • Build and deploy machine learning models using distributed processing frameworks such as Spark and Glue, together with open table formats such as Iceberg.
      • Collaborate with data scientists and analysts to operationalize ML models and integrate them into production systems.
      • Ensure data quality, reliability, and integrity throughout the data lifecycle.
      • Continuously optimize and improve data processing and ML workflows for performance and scalability.
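
      The posting calls for Scala, but to give a concrete (and deliberately simplified) picture of this kind of pipeline work, here is a comparable sketch in PySpark: a batch step that aggregates an Iceberg source table. It assumes a Spark 3.x session with an Iceberg catalog named "lake" already configured; all table and column names are illustrative.

        # Minimal PySpark sketch of a daily batch aggregation over Iceberg tables.
        # Assumes a Spark 3.x session with an Iceberg catalog named "lake".
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

        orders = spark.read.table("lake.raw.orders")  # hypothetical Iceberg source

        daily_rollup = (
            orders
            .where(F.col("status") == "complete")
            .groupBy(F.to_date("created_at").alias("order_date"))
            .agg(
                F.count("*").alias("order_count"),
                F.sum("amount").alias("revenue"),
            )
        )

        # Replace the aggregate table atomically (DataFrameWriterV2, Spark 3.x).
        daily_rollup.writeTo("lake.analytics.daily_orders").createOrReplace()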

      Requirements:

      • 5+ years of experience in data engineering and machine learning.
      • Proficiency in Scala programming language for building data pipelines and ML models.
      • Hands-on experience with DBT (Data Build Tool) for data transformation and modeling.
      • Strong SQL skills for data querying and manipulation.
      • Experience with MPP (Massively Parallel Processing) databases and distributed processing technologies.
      • Familiarity with distributed processing frameworks such as Spark and Glue, and with open table formats such as Iceberg.
      • Ability to work independently and collaboratively in a team environment.

      Significant career development opportunities exist as the company grows. The position offers a unique opportunity to be part of a small, fast-growing, challenging and entrepreneurial environment, with a high degree of individual responsibility.

      See more jobs at Tiger Analytics

      Apply for this job

      +30d

      Senior Data Engineer

      Expression Networks, DC, US - Remote
      agile, Ability to travel, nosql, sql, Design, scrum, java, python, javascript

      Expression Networks is hiring a Remote Senior Data Engineer

      Expression is looking to hire a Senior Data Engineer (individual contributor) to support the continued growth of our Data Science department. This position reports daily to the program manager and the data team manager on project work, and is responsible for the design and execution of high-impact data architecture and engineering solutions for customers across a breadth of domains and use cases.

      Location:

      • Remote with the ability to travel monthly when needed
        • Local (DC/VA/MD Metropolitan area) is preferred but not required
        • Relocation assistance available for highly qualified candidates

      Security Clearance:

      • US Citizenship required
      • Ability to obtain Secret Clearance or higher

      Primary Responsibilities:

      • Directly working and leading others on the development, testing, and documentation of software code and data pipelines for data extraction, ingestion, transformation, cleaning, correlation, and analytics
      • Leading end-to-end architectural design and development lifecycle for new data services/products, and making them operate at scale
      • Partnering with Program Managers, Subject Matter Experts, Architects, Engineers, and Data Scientists across the organization where appropriate to understand customer requirements, design prototypes, and optimize existing data services/products
      • Setting the standard for Data Science excellence in the teams you work with across the organization, and mentoring junior members in the Data Science department

      Additional Responsibilities:

      • Participating in technical development of white papers and proposals to win new business opportunities
      • Analyzing and providing feedback on product strategy
      • Participating in research, case studies, and prototypes on cutting-edge technologies and how they can be leveraged
      • Working in a consultative fashion to improve communication, collaboration, and alignment amongst teams inside the Data Science department and across the organization
      • Helping recruit, nurture, and retain top data engineering talent

      Required Qualifications:

      • 4+ years of experience bringing databases, data integration, and data analytics/ML technologies to production with a PhD/MS in Computer Science/Data Science/Computer Engineering or relevant field, or 6+ years of experience with a Bachelor’s degree
      • Mastery in developing software code in one or more programming languages (Python, JavaScript, Java, Matlab, etc.)
      • Expert knowledge in databases (SQL, NoSQL, Graph, etc.) and data architecture (Data Lake, Lakehouse)
      • Knowledgeable in machine learning/AI methodologies
      • Experience with one or more SQL-on-Hadoop technologies (Spark SQL, Hive, Impala, Presto, etc.; see the sketch after this list)
      • Experience in short-release cycles and the full software lifecycle
      • Experience with Agile development methodology (e.g., Scrum)
      • Strong writing and oral communication skills to deliver design documents, technical reports, and presentations to a variety of audiences
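
      As a point of reference for the SQL-on-Hadoop requirement above, a minimal Spark SQL example might look like the following; the dataset path and schema are illustrative assumptions, not details from the role.

        # Minimal Spark SQL sketch: query a distributed Parquet dataset with SQL.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("spark_sql_example").getOrCreate()

        # Expose a Parquet dataset to the SQL engine as a temporary view.
        spark.read.parquet("/data/telemetry/parquet").createOrReplaceTempView("telemetry")

        # Run a plain SQL aggregation over the distributed dataset.
        top_sources = spark.sql("""
            SELECT source_id, COUNT(*) AS events
            FROM telemetry
            GROUP BY source_id
            ORDER BY events DESC
            LIMIT 10
        """)
        top_sources.show()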

      Benefits:

      • 401k matching
      • PPO and HDHP medical/dental/vision insurance
      • Education reimbursement
      • Complimentary life insurance
      • Generous PTO and holiday leave
      • Onsite office gym access
      • Commuter Benefits Plan

      About Expression:

      Founded in 1997 and headquartered in Washington, DC, Expression provides data fusion, data analytics, software engineering, information technology, and electromagnetic spectrum management solutions to the U.S. Department of Defense, Department of State, and the national security community. Expression’s “Perpetual Innovation” culture focuses on creating immediate and sustainable value for our clients via agile delivery of tailored solutions built through constant engagement with our clients. Expression was ranked #1 on Washington Technology’s 2018 Fast 50 list of the fastest-growing small-business government contractors and named a Top 20 Big Data Solutions Provider by CIO Review.

      Equal Opportunity Employer/Veterans/Disabled

      See more jobs at Expression Networks

      Apply for this job

      +30d

      Senior Data Engineer

      Tiger Analytics, Jersey City, New Jersey, United States - Remote

      Tiger Analytics is hiring a Remote Senior Data Engineer

      Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. We are the trusted analytics partner for several Fortune 100 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best global analytics consulting team.

      We are seeking an experienced Data Engineer to join our data team. As a Data Engineer, you will be responsible for designing, building, and maintaining data pipelines, data integration processes, and data infrastructure using Dataiku. You will collaborate closely with data scientists, analysts, and other stakeholders to ensure efficient data flow and support data-driven decision making across the organization.

      • 8+ years of overall industry experience specifically in data engineering
      • Strong knowledge of data engineering principles, data integration, and data warehousing concepts.
      • Strong understanding of the pharmaceutical/life science domain, including knowledge of patient data, commercial data, drug development processes, and healthcare data.
      • Proficiency in data engineering technologies and tools, such as SQL, Python, ETL frameworks, data integration platforms, and data warehousing solutions (a minimal example follows this list).
      • Experience with data modeling, database design, and data architecture principles.
      • Familiarity with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure).
      • Strong analytical and problem-solving skills, with the ability to work with large and complex datasets.
      • Strong communication and collaboration abilities.
      • Attention to detail and a focus on delivering high-quality work.
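
      To make the ETL requirement above concrete, here is a minimal, hypothetical Python extract-transform-load step in the life-science flavor the posting describes. Connection strings, table names, and columns are illustrative assumptions, not details of the actual role.

        # Minimal Python ETL sketch: extract one day's visit records, standardize
        # them, and load them into a warehouse staging table.
        import pandas as pd
        from sqlalchemy import create_engine, text

        source = create_engine("postgresql://user:pass@source-db/clinical")
        warehouse = create_engine("postgresql://user:pass@warehouse-db/analytics")

        # Extract: pull a single day's visits from the operational system.
        visits = pd.read_sql(
            text("SELECT visit_id, patient_id, visit_date, site_code "
                 "FROM visits WHERE visit_date = :day"),
            source,
            params={"day": "2024-01-15"},
        )

        # Transform: normalize site codes and drop duplicate visit records.
        visits["site_code"] = visits["site_code"].str.upper().str.strip()
        visits = visits.drop_duplicates(subset=["visit_id"])

        # Load: append the cleaned batch to the warehouse staging table.
        visits.to_sql("stg_visits", warehouse, if_exists="append", index=False)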

      Significant career development opportunities exist as the company grows. The position offers a unique opportunity to be part of a small, challenging, and entrepreneurial environment, with a high degree of individual responsibility.

      See more jobs at Tiger Analytics

      Apply for this job