Lambda Remote Jobs

109 Results

+30d

Data Engineer (AWS, Azure, GCP)

CapTech Consulting - Denver, CO, Remote
DevOps, Lambda, SQL, Oracle, Design, Azure, Git, Java, C++, Docker, PostgreSQL, Kubernetes, Jenkins, Python, AWS

CapTech Consulting is hiring a Remote Data Engineer (AWS, Azure, GCP)

Job Description

CapTech Data Engineering consultants enable clients to build and maintain advanced data systems that bring together data from disparate sources in order to enable decision-makers. We build pipelines and prepare data for use by data scientists, data analysts, and other data systems. We love solving problems and providing creative solutions for our clients. Cloud Data Engineers leverage the client’s cloud infrastructure to deliver this value today and to scale for the future. We enjoy a collaborative environment and have many opportunities to learn from and share knowledge with other developers, architects, and our clients. 

Specific responsibilities for the Data Engineer – Cloud position include:  

  • Developing data pipelines and other data products using Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) 
  • Advising clients on specific technologies and methodologies for utilizing cloud resources to ingest and process data efficiently 
  • Utilizing your skills in engineering best practices to solve complex data problems 
  • Collaborating with end users, development staff, and business analysts to ensure that prospective data architecture plans maximize the value of client data across the organization.  
  • Articulating architectural differences between solution methods and the advantages/disadvantages of each 

Qualifications

Typical experience for successful candidates includes:    

  • Experience delivering solutions on a major cloud platform 
  • Ability to think strategically and relate architectural decisions/recommendations to business needs and client culture 
  • Experience in the design and implementation of data architecture solutions 
  • A wide range of production database experience, usually including substantial SQL expertise, database administration, and scripting data pipelines 
  • Ability to assess and utilize traditional and modern architectural components required based on business needs.   
  • A demonstrable ability to deliver production data pipelines and other data products. This could be hands on experience, degree, certification, bootcamp, or other learning. 

Skills: 

Successful candidates usually have demonstrable experience with technologies in some of these categories:  

  • Languages: SQL, Python, Java, R, C# / C++ / C 
  • Database: SQL Server, PostgreSQL, Snowflake, Redshift, Aurora, Presto, BigQuery, Oracle 
  • DevOps: git, docker, subversion, Kubernetes, Jenkins 
  • Additional Technologies: Spark, Databricks, Kafka, Kinesis, Hadoop, Lambda, EMR 
  • Popular Certifications: AWS Cloud Practitioner, Microsoft Azure Data Fundamentals, Google Associate Cloud Engineer

See more jobs at CapTech Consulting

Apply for this job

+30d

Data Engineer--US Citizens/Green Card

Software Technology Inc - Brentsville, VA, Remote
Lambda, NoSQL, SQL, Azure, API, Git

Software Technology Inc is hiring a Remote Data Engineer--US Citizens/Green Card

Job Description

I am a Lead Talent Acquisition Specialist at STI (Software Technology Inc) and currently looking for a Data Engineer.

Below is a detailed job description. Should you be interested, please feel free to reach me via call or email: amrutha.duddula AT stiorg.com / 732-664-8807

Title:  Data Engineer
Location: Manassas, VA (Remote until Covid)
Duration: Long Term Contract

Required Skills:

  • Experience working in Azure Databricks and Apache Spark
  • Proficient programming in Scala/Python/Java
  • Experience developing and deploying data pipelines for streaming and batch data coming from multiple sources
  • Experience creating data models and implementing business logic using the tools and languages listed
  • Working knowledge of Kafka, Structured Streaming, the DataFrame API, SQL, and NoSQL databases
  • Comfortable with APIs, Azure Data Lake, Git, Notebooks, Spark clusters, Spark jobs, and performance tuning
  • Must have excellent communication skills
  • Familiarity with Power BI, Delta Lake, Lambda Architecture, Azure Data Factory, and Azure Synapse is a plus
  • Telecom domain experience is not required but very helpful

Thank you,
Amrutha Duddula
Lead Talent Acquisition Specialist
Software Technology Inc (STI)

Email: amrutha.duddula AT stiorg.com
Phone : 732-664-8807
www.stiorg.com
www.linkedin.com/in/amruthad/

Qualifications

See more jobs at Software Technology Inc

Apply for this job

+30d

Tech Lead - Microservices

expert global solutions Pvt. Ltd - Aurangabad, India, Remote
S3, SQS, Lambda, 9 years of experience, Design, Java, Docker, Angular, Jenkins, AWS, ReactJS

expert global solutions Pvt. Ltd is hiring a Remote Tech Lead - Microservices

Job Description

Tech Lead – Java (JSP, Spring, MVC)

  • 7 to 9 years of experience with Java / J2EE application development stack
  • Solid experience in design, coding, unit testing and debugging
  • Experience with: Spring framework (Core, JPA, Security, Boot), Hibernate, securing APIs using JWT/OAuth
  • Should be able to design product features/modules and lead the team on technical aspects
  • Proficient understanding of code quality standards and the ability to ensure the team follows them
  • Mentor team members as and when required on Design/Functional Requirements/Development/Testing
  • Good understanding of Single Page App (Angular or ReactJS)
  • Knowledge of Jenkins CI/CD, Build Tools (Maven, Gradle), Docker
  • Good understanding of AWS services such as S3, SQS, ALB, Lambda, RDS, CDN

Qualifications

See more jobs at expert global solutions Pvt. Ltd

Apply for this job

+30d

Solution Architect (AWS)

MazeGeek Technologies BD Ltd. - Dhaka, Bangladesh, Remote
Sales, DevOps, Lambda, Agile, Tableau, Design, Scrum, Python, AWS

MazeGeek Technologies BD Ltd. is hiring a Remote Solution Architect (AWS)

Job Description

MazeGeek is looking for an experienced Amazon Web Services architect to be part of its Cloud practice. You will be working on the industry-leading AWS platform to design, develop, and implement next-generation data modernization solutions leveraging cloud-native and commercial/open-source Big Data technologies.

Key Responsibilities

As an AWS Solutions Architect, you will:

  • Be the force behind shaping the future of our AWS practice by leading innovation efforts and developing Intellectual Property
  • Lead the attainment of AWS competencies, specializations, certifications, and accreditations by MazeGeek
  • Lead efforts on listing MazeGeek solutions on AWS marketplace
  • Lead the development of minimum viable products for MazeGeek & AWS joint innovation solutions
  • Advise clients on AWS optimized architectures for enterprise data management, craft strategy and roadmaps for on-prem to Cloud migration initiatives
  • Be hands-on in developing prototypes and conducting Proof of Concepts
  • Conduct architectural reviews and develop cloud economic models
  • Act as a trusted advisor to client on all things AWS
  • Assist pre-sales team in client pitches
  • Participate in technical interactions with client’s executives and senior management
  • Mentor client/MazeGeek resources on AWS and its services
  • Evaluate and analyze new commercial/open-source services/technologies offered by AWS, and other Big Data technology vendors
  • Interface with AWS Solution Architects and Engineers on:
    • Development of reference architectures and a design pattern library for typical AWS-based solution implementations
    • Creation of MazeGeek’s new services capabilities and packaged solution offerings
  • Advise on AWS set up, security and role based access implementation, and network optimizations
  • Advise on DevOps strategy and methodology
  • Create customer facing pre-sales collateral such as demos, technical presentations
  • Assist in MazeGeek’s achievement of AWS competencies and specializations

Qualifications

  • Hands-on AWS professional
  • Possess thorough understanding of AWS technical landscape, business drivers, and emerging computing trends in the industry
  • AWS Professional/Associate Solutions Architect certification (or similar) along with AWS Specialty level certification(s) in Big Data or Data & Analytics space
  • Minimum 6 years overall experience in systems integration field with strong hands-on experience in data integration, data warehousing, and analytical projects
  • Intimate familiarity with enterprise data modernization efforts, EDW migration to Cloud
  • Proven track record of building technical and business relationships with senior executives
  • Proven track record of driving decisions collaboratively, resolving conflicts and ensuring follow up
  • Problem-solving mentality leveraging internal and/or external resources, where and when needed, to do what’s right for the customer
  • Exceptional verbal and written communication
  • Ability to connect technology with measurable business value
  • Demonstrated technical thought leadership in customer facing situations

Requirements:

  • Certified AWS Professional/Associate Solutions Architect or similar AWS Professional level certification
  • 3+ years of technical management and thought leadership role with a Global/Regional System Integrator delivering success in complex data analytics projects on AWS
  • 3+ years of design and/or implementation of highly distributed applications on AWS
  • 2+ years of experience migrating on-premise workloads (especially EDW, Data Lake, Analytic sandboxes) to AWS
  • Demonstrated experience designing and building Big Data solutions on the AWS stack leveraging Redshift, EMR, Glue, Lambda, Athena, Sagemaker, etc.
  • 3+ years of experience using Python as a programming and scripting language
  • Experience in DevOps to build CI/CD pipelines
  • Experience in AGILE development, SCRUM and Application Lifecycle Management (ALM)
  • Technical architectural and development experience on Massively Parallel Processing technologies, such as Redshift, Hadoop, Spark, Teradata, Netezza
  • Familiarity with technical architectural experience on Data Visualization technologies, such as Tableau, Qlik, etc.
  • Technical architectural experience on data modeling, designing data structures for business reporting
  • Deep understanding of Advanced Analytics, such as predictive, prescriptive, etc.
  • Working knowledge of cloud components: Software design and development, Systems Operations / Management, Database architecture, Virtualization, IP Networking, Storage, IT Security
  • Technical prowess and passion, especially for public Cloud and modern application design practices and principles. Accreditations on AWS preferred.
  • Oversight experience on major transformation projects and successful transitions to operations support teams
  • Presentation skills with a high degree of comfort to both large and small audiences

See more jobs at MazeGeek Technologies BD Ltd.

Apply for this job

+30d

Full Stack Developer, eCommerce (Magento)

Prayag Health - Anywhere, India, Remote
DevOps, S3, EC2, Lambda, PWA, Redis, MariaDB, Magento, RabbitMQ, Design, Mobile, HTML5, Elasticsearch, MySQL, CSS, Ubuntu, Linux, AWS, JavaScript, Node.js, PHP

Prayag Health is hiring a Remote Full Stack Developer, eCommerce (Magento)

Job Description

We are looking for an experienced Full Stack eCommerce Developer (Magento 2) to join our platinum team of entrepreneurial, enthusiastic, and caring people. You will be responsible for developing, maintaining, securing, and optimizing our eCommerce platform, which is based on Magento 2 and runs in a hyperscaler cloud environment, including its underlying components. You will identify new and exciting functionalities and develop new modules. If you are the person who can design and develop web/mobile platforms securely at an unbelievable pace, we would like to meet you. To succeed in this role, you must have a growth mindset and be a self-starter, a top-notch communicator, and a passionate hands-on programmer.

Responsibilities

  • Optimize existing Magento 2 installation
  • Provide regular and emergency support for break-fixes
  • Troubleshoot integration and performance issues
  • Design interfaces, themes, templates while following best practices and maintain world-class coding styles and standard
  • Anticipate the performance requirements and communicate with management on remediation plans to CTO/CIO
  • Develop new Magento modules and functionalities
  • Manage and upgrade Magento add-ons and conduct code reviews/scans
  • Establish DevOps pipelines, Release/Change management procedures, etc.
  • Develop testing scenarios including unit testing, integration testing and automate those
  • Work with UI/UX and graphic designers to implement front-end changes
  • Work with content writer to implement new contents
  • Implement additional components such as Elastic Search, Varnish, Redis, CDN, RabbitMQ, etc.
  • Manage SSL certificates, software accounts, keys, etc.
  • Rearchitect environment and provide vertical and horizontal scaling as demand grows
  • Fine-tune batch jobs (Crons) or Consumers
  • Migrate databases and data
  • Develop integrations for files, images, etc.
  • Manage stack components and upgrade them (Ubuntu, PHP, Percona, Nginx, Composer, Firewall, Redis, etc.)
  • Create plans for developing mobile experience, mobile apps or PWA front end environment
  • Maintain confidentiality about IP, methods, processes and data
  • Participate in conferences, webinars and provide presentations while socializing and partnering with startups
  • Install security patches and monitor web traffic
  • Promote the brand and perform soft-selling while managing and improving brand reputation
  • Collaborate with other stakeholders such as content developer (copywriter), graphics and media designer, and management for consistent brand messaging and best customer experience

Qualifications

  • Strong experience with Magento 2 development and debugging
  • Strong experience with high-performance technologies such as Nginx, Varnish, Redis, ElasticSearch
  • JavaScript and AJAX experience for Front-end
  • Extensive HTML5, LESS and CSS knowledge
  • Understanding of modern UI/UX trends
  • Experience or knowledge of React.js, Vue.js and Node.js
  • Experience with AWS components such as VPC, IAM, EC2, ECS, Lambda, S3, CloudFront, WAF & Cloud Formation
  • Working knowledge of Linux/Ubuntu server operations
  • Experience working with MySQL-family databases such as MariaDB and Percona
  • Basic understanding of networking and ports
  • Experience with Google Tag Manager, SEO, Google Analytics, PPC, Facebook Pixels, and A/B Testing
  • Strong attention to detail regarding design styles to improve look and feel
  • Very good PHP knowledge and understanding of various frameworks
  • Ability to work in a professional capacity and a team environment is a must
  • Ability to manage projects and work to strict deadlines

Education & Certifications

  • Bachelors/Masters in Computer Science, Information Technology or a related field
  • Magento 2 Certification is required
  • Cloud Computing Certification (AWS/GCP/Azure) is a plus

See more jobs at Prayag Health

Apply for this job

+30d

Staff Operations Engineer

Mozilla - Remote
DevOps, Rust, Lambda, Terraform, NoSQL, Design, C++, MySQL, Kubernetes, Jenkins, Python, AWS, JavaScript

Mozilla is hiring a Remote Staff Operations Engineer

To learn the Hiring Ranges for this position, please select your location from the Apply Now dropdown menu.

To learn more about our Hiring Range System, please click this link.

Why Mozilla?

Mozilla Corporation is the non-profit-backed technology company that has shaped the internet for the better over the last 25 years. We make pioneering brands like Firefox, the privacy-minded web browser, and Pocket, a service for keeping up with the best content online. Now, with more than 225 million people around the world using our products each month, we’re shaping the next 25 years of technology. Our work focuses on diverse areas including AI, social media, security and more. And we’re doing this while never losing our focus on our core mission – to make the internet better for everyone. 

The Mozilla Corporation is wholly owned by the non-profit 501(c) Mozilla Foundation. This means we aren’t beholden to any shareholders — only to our mission. Along with 60,000+ volunteer contributors and collaborators all over the world, Mozillians design, build and distribute open-source software that enables people to enjoy the internet on their terms. 

The Role:

Mozilla is seeking a Staff Operations Engineer who will be responsible for maintaining and improving critical systems vital to the everyday operations of the enterprise! The person in this role will succeed by contributing to others’ success and seeking opportunities to collaborate for the collective benefit. Expectations for this position will range from performing individual tasks to deeper involvement such as mentoring others. This will be someone who enjoys working with a wide range of technologies, solutions, and experiences, passionately solving the immediate needs of the day while keeping an eye on the bigger picture.

What you’ll do:

  • Be responsible for a comprehensive Identity Access Management (IAM) system which includes:
    • Manage an Auth0 (Okta CIC) IdP Platform
    • Support team members with integrations of various third-party OIDC/SAML connections
  • Coordinate deployment, configuration, and lifecycle management of various resources and services in GCP and AWS
  • Handle issuance of SSL/TLS certificates
  • Administrate DNS and IPAM
  • Be responsible for backup processes and disaster recovery strategies
  • Contribute to all aspects of a project, including but not limited to leading, planning, testing, implementation, monitoring, and maintenance
  • Lead multi-functional groups to identify, evaluate, and propose solutions for business problems.
  • Function with a customer-service mentality, addressing customer needs with the macro outcome in perspective
  • Roadmap and plan opportunities for continuous improvement
  • Document status, processes, procedures, etc., transparently, so that anyone else can see who/what/when/where/why/how something was or should be done
  • Lead independently in a dynamic environment

What you’ll bring:

  • 6+ years of:
    • Experience in DevOps, SRE, or CloudOps
    • Proficiency in Python and JavaScript (NodeJS) programming languages
    • Proficiency in Terraform, Serverless, Cloudformation and/or other IaC tools
    • Experience with cloud computing such as AWS and GCP, including GCP Functions and/or Lambda
  • 3+ years experience in Identity Access Management products and frameworks such as Auth0, OIDC, SAML, OAuth 2.0, and LDAP
  • Experience in the following is a bonus but not a requirement:
    • Rust programming language
    • Platforms such as GKE, Kubernetes, and Cloud Run
    • CI/CD pipelines such as Github Actions, AWS Codebuild, GCP Cloud Build, and Jenkins
    • RDS and NoSQL database schemas and administration, such as DynamoDB and MySQL
    • Monitoring and logging with services such as CloudWatch, Splunk, Stackdriver
    • Infoblox DNS/IPAM appliance
    • LDAP server administration
  • Commitment to our values:
    • Welcoming differences
    • Being relationship-minded
    • Practicing responsible participation
    • Having grit

What you’ll get:

  • Generous performance-based bonus plans to all eligible employees - we share in our success as one team
  • Rich medical, dental, and vision coverage
  • Generous retirement contributions with 100% immediate vesting (regardless of whether you contribute)
  • Quarterly all-company wellness days where everyone takes a pause together
  • Country specific holidays plus a day off for your birthday
  • One-time home office stipend
  • Annual professional development budget
  • Quarterly well-being stipend
  • Considerable paid parental leave
  • Employee referral bonus program
  • Other benefits (life/AD&D, disability, EAP, etc. - varies by country)

About Mozilla 

When you work at Mozilla, you give yourself a chance to make a difference in the lives of web users everywhere. And you give us a chance to make a difference in your life every single day. Join us to work on the web as the platform and help create more opportunity and innovation for everyone online. We’re not a normal tech company. The things we create prioritize people and their privacy over profits. We exist to make the internet a healthier, happier place for everyone.

Commitment to diversity, equity and inclusion

Mozilla believes in the value of diverse creative practices and forms of knowledge, and knows diversity, equity and inclusion are crucial to and enrich the company’s core mission. We encourage applications from everyone, including members of all equity-seeking communities, such as (but not limited to) women, racialized and Indigenous persons, persons with disabilities, persons of all sexual orientations, gender identities and expressions.

We will ensure that qualified individuals with disabilities are provided reasonable accommodations to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment, as appropriate. Please contact us at hiringaccommodation@mozilla.com to request accommodation.

We are an equal opportunity employer. We do not discriminate on the basis of race (including hairstyle and texture), religion (including religious grooming and dress practices), gender, gender identity, gender expression, color, national origin, pregnancy, ancestry, domestic partner status, disability, sexual orientation, age, genetic predisposition, medical condition, marital status, citizenship status, military or veteran status, or any other basis covered by applicable laws. Mozilla will not tolerate discrimination or harassment based on any of these characteristics or any other unlawful behavior, conduct, or purpose.


Group: C 

#LI-DNI

Req ID: R2408

See more jobs at Mozilla

Apply for this job

+30d

Principal Engineer (Backend)

Octillion Media LLC - Bengaluru, India, Remote
EC2, Lambda, Redis, NoSQL, Design, MongoDB, API, Java, Docker, PostgreSQL, MySQL, Python, AWS

Octillion Media LLC is hiring a Remote Principal Engineer (Backend)

Job Description

  • You’re a techie to the core & you love to code
  • You have at least five years of experience in commercial software development focusing on API development.
  • Your work experience includes writing high-availability, low-latency applications in production.
  • Willing to push the boundaries to achieve submillisecond performance.
  • You’re a team player, a thinker, and a doer.
  • Bonus if you know Java, Docker, AWS, NoSQL.
  • Willing to work in a fast-paced environment.
  • Can-do attitude.

 

Qualifications

  • A degree in Computer Science, Software Engineering, Information Technology or related fields
  • Strong software programming capabilities, exhibits good code design and coding style.
  • 2+ years of experience with at least one of the programming languages: Java, Python, or Go
  • Knowledge of building high-throughput systems is a big plus (our server handles 50k+ web requests per second)
  • Deep understanding of data structure, algorithm design and analysis, networking, data security and highly scalable systems design
  • Experience with big data, data pipelines, loggers, ELK
  • Familiar with distributed cache, message middleware, RPC framework, load balancing, security defense and other technologies.
  • Experience working with relational and NoSQL databases (MySQL, PostgreSQL, MongoDB, Redis, Hazelcast, Cassandra, Aerospike, or other NoSQL databases)
  • Experience with AWS technologies such as EC2, Lambda, Elastic Beanstalk, API Gateway, CloudFront, Fargate/ECS (tasks, services, clusters, Docker containers), and log analysis (Athena, Parquet)
  • Big Data/ML experience is a plus
  • Experience with RTB, Google IMA SDK, VAST, VPAID, Header bidding is a plus.

See more jobs at Octillion Media LLC

Apply for this job

+30d

Senior Software Engineer - MERN Stack

Octillion Media LLC - Bengaluru, India, Remote
EC2, Lambda, Redis, NoSQL, Design, MongoDB, API, Docker, PostgreSQL, MySQL, Angular, Python, AWS, Node.js

Octillion Media LLC is hiring a Remote Senior Software Engineer - MERN Stack

Job Description

  • You’re a techie to the core & you love to code
  • You have at least two years of experience in commercial software development focusing on API development.
  • Your work experience includes writing Node.js + Express + PostgreSQL/MySQL or MongoDB code in production.
  • Willing to push the boundaries to achieve submillisecond performance.
  • You’re a team player, a thinker, and a doer.
  • Bonus if you know Angular (5-8), Docker, AWS, Socket.io as well as HTML / CSS.
  • Willing to work in a fast-paced environment.
  • Can-do attitude.

Qualifications

  • A degree in Computer Science, Software Engineering, Information Technology or related fields
  • Strong software programming capabilities, exhibits good code design and coding style.
  • 2+ years of experience with at least one of the programming languages: Node.js, Python, or JavaScript
  • Experience working with relational and NoSQL databases (MySQL, PostgreSQL, MongoDB, Redis, or other NoSQL databases)
  • Experience with AWS technologies such as EC2, Lambda, Elastic Beanstalk, API Gateway, CloudFront, Fargate/ECS (tasks, services, clusters, Docker containers), and log analysis (Athena, Parquet)

See more jobs at Octillion Media LLC

Apply for this job

+30d

Staff Engineer - Java

Octillion Media LLC - Bengaluru, India, Remote
EC2, Lambda, Redis, NoSQL, Design, MongoDB, API, Java, Docker, PostgreSQL, MySQL, Python, AWS

Octillion Media LLC is hiring a Remote Staff Engineer - Java

Job Description

Responsibilities

- Design and build Octillion's sophisticated video advertising solutions with a core emphasis on back-end technologies

- Willingness to take end-to-end ownership and be accountable for the success of the product.

- Experience in architecting and building a real-time bidding engine and a large-scale video ads platform.

- Break down business requirements into technical solutions at global scale

- Participate in daily stand-ups and provide time estimates.

 

Qualifications

- A degree in Computer Science, Software Engineering, Information Technology or related fields

- Strong software programming capabilities, exhibits good code design and coding style.

- 5+ years of experience with at least one of the programming languages: Java, Python, or Go

- Knowledge of building high-throughput systems is a big plus (our server handles 50k+ web requests per second)

- Deep understanding of data structure, algorithm design and analysis, networking, data security and highly scalable systems design

- Experience with big data, data pipelines, loggers, ELK

- Familiar with distributed cache, message middleware, RPC framework, load balancing, security defense and other technologies.

- Experience working with relational and NoSQL databases (MySQL, PostgreSQL, MongoDB, Redis, Hazelcast, Cassandra, Aerospike, or other NoSQL databases)

- Experience with AWS technologies such as EC2, Lambda, Elastic Beanstalk, API Gateway, CloudFront, Fargate/ECS (tasks, services, clusters, Docker containers), and log analysis (Athena, Parquet)

- Big Data/ML experience is a plus

- Experience with RTB, Google IMA SDK, VAST, VPAID, Header bidding is a plus.


 

See more jobs at Octillion Media LLC

Apply for this job

+30d

Backend Software Engineer, Java (remote, based in the US)

Gremlin - Remote, based in the US
Lambda, SQL, Oracle, Design, Java, C++, Docker, TypeScript, Kubernetes, Linux, AWS, JavaScript, Backend

Gremlin is hiring a Remote Backend Software Engineer, Java (remote, based in the US)

Today’s complex, fast-paced systems have become a minefield of reliability risks—any of which could cause an outage that costs millions and destroys customer confidence. That’s why high-availability teams use Gremlin to find and fix reliability risks before they become incidents.

The Gremlin Reliability Platform helps software teams proactively monitor and test their systems for common reliability risks, build and enforce reliability standards, and automate their reliability practices organization-wide. As the industry leader in Chaos Engineering and reliability testing, we work with hundreds of the world’s largest organizations where high availability is non-negotiable.

About the Role of the Senior Software Engineer

As a Software Engineer at Gremlin, you will have the opportunity to improve the reliability of the internet at large by developing Chaos Engineering tooling. You will be able to leverage your engineering experience to inform product design as well as solve complex technical problems that directly impact our customers (which range from the Fortune 500 to smaller organizations). You will work closely with a small, talented team focused on quality, delivery, and predictability.

In this role, you’ll get to:

  • Work closely with engineers, product managers, and other stakeholders to design and build the latest and greatest in Chaos Engineering tooling
  • Leverage strong collaboration and communication skills to deliver new features within a remote culture
  • Partner with product and other business units to understand business problems and present technical solutions and tradeoffs
  • Actively mentor and grow your teammates
  • Care deeply about the customer experience

We'll expect you to have:

  • 5+ years professional Java software engineering experience
  • Experience in Go & Systems Level Programming
  • Experience in cloud technologies, e.g., AWS, Lambda, Serverless; experience with other cloud technologies such as Google Cloud or Oracle Cloud is also considered
  • Experience with DynamoDB and/or other NoSQL databases, or experience with any major relational database
  • Experience in infrastructure & systems-level technologies, e.g., Linux, Docker, Kubernetes, OpenShift
  • Experience in architecting complex distributed systems and integrating with external systems
  • Strong advocate and practitioner of automated testing, CI/CD, and engineering best practices

Bonus Experience:

  • Has been on-call and participated in an incident management program
  • Familiarity with modern JavaScript frameworks & web development practices: e.g., React, TypeScript, etc.
  • Experience taking features from concept to full production release

*The role does not offer sponsorship employment benefits. 

**If you don't think you meet all of the criteria above but still are interested in the job, please apply. Nobody checks every box—we’re looking for candidates that are particularly strong in a few areas, and have some interest and capabilities in others.

About Gremlin:

Gremlin is a team of industry veterans and people eager to learn from one another. We set the standard for reliability and equip leading organizations with the mindset and expertise needed to drive reliability improvements that move the world forward. We’re backed by top-tier investors Index Ventures, Amplify Partners, and Redpoint Ventures. Our customers love us, and we’re thrilled to be a partner in their success.

What Do We Care About:

  • We Care about our People
    People are our critical differentiators. The company strives to treat our people with respect, empathy, and dignity. We expect that our people will treat each other similarly. In both cases, we will assume good intent. All are welcome at Gremlin. We know our differences make us stronger and that our best ideas and contributions can come from anyone at any level.
  • We Care about Collaboration
    Gremlin is strongest when we come together as one team with shared goals. Be the glue, not the glitter. But as a remote company, teamwork and collaboration won’t happen by accident. We approach every challenge as a shared challenge. We rely on each other for diverse perspectives and creative ideas. We celebrate our wins as a team.
  • We Care about Results
    Be high productivity, low drama. Results matter. To keep our pace, everyone owns the outcomes of their actions and takes action when needed. We reward speed over perfection. We empower each other to iterate and experiment.

You are welcome at Gremlin for who you are. The more voices and ideas we have represented in our business, the more we will all flourish, contribute, and build a more reliable internet. Gremlin is a place where everyone can grow and is encouraged. However you identify and whatever background you bring with you, please apply if this sounds like a role that would make you excited to come into work every day. It’s in our differences that we will find the power to keep building a more reliable internet by building and designing tools used by the best companies in the world.

Visit our website to learn more - https://www.gremlin.com/about

See more jobs at Gremlin

Apply for this job

+30d

Senior Software Engineer, Frontend (remote, based in the US)

Gremlin - Remote, based in the US
Lambda, Agile, Design, Java, C++, Docker, TypeScript, CSS, Kubernetes, Linux, AWS, JavaScript, Frontend

Gremlin is hiring a Remote Senior Software Engineer, Frontend (remote, based in the US)

Job Description: 

Today’s complex, fast-paced systems have become a minefield of reliability risks—any of which could cause an outage that costs millions and destroys customer confidence. That’s why high-availability teams use Gremlin to find and fix reliability risks before they become incidents.

The Gremlin Reliability Platform helps software teams proactively monitor and test their systems for common reliability risks, build and enforce reliability standards, and automate their reliability practices organization-wide. As the industry leader in Chaos Engineering and reliability testing, we work with hundreds of the world’s largest organizations where high availability is non-negotiable.

About the Role of the Senior Software Engineer, Frontend 

As a Senior Software Engineer, Frontend at Gremlin, you will have the opportunity to improve the reliability of the internet at large by developing Reliability Engineering tooling. You will be able to leverage your engineering experience to inform product design as well as solve complex technical problems that directly impact our customers (which range from the Fortune 500 to smaller organizations). You will work closely with a small, talented team focused on quality, delivery, and predictability with an emphasis on providing our customers a great user experience.

In this role, you'll get to:

  • Work closely with engineers, designers, product managers, and other stakeholders to design and build the latest and greatest in Chaos Engineering tooling
  • Leverage strong collaboration and communication skills to deliver new features within a remote culture
  • Partner with design to understand the customer’s needs and design interfaces and experiences that lead our customers to success
  • Partner with product and other business units to understand business problems and present technical solutions and tradeoffs
  • Actively mentor and grow your teammates
  • Care deeply about the customer experience

We'll expect you to have:

  • Experience as a self-driven and collaborative problem solver with strong communication skills
  • 7+ years professional Frontend software engineering experience in modern technologies (TypeScript, JavaScript, React, CSS, etc.)
  • Experience or strong interest in infrastructure & systems level technologies: e.g., Linux, Docker, Kubernetes, OpenShift, etc.
  • Experience with Java software development
  • Experience with agile development environments and practices
  • Strong advocate and practitioner of unit testing and integration testing (Jest/Cypress), CI/CD, code quality, and engineering best practices
  • Leverage your own design skills to collaborate with designers and stakeholders to implement designs and features to required specifications
  • Strong at breaking down ambiguous problems into concrete actions and milestones

Bonus Experience:

  • Experience in cloud technologies: e.g., AWS, DynamoDB, Lambda, Serverless
  • Has been on-call and participated in an incident management program

*The role does not offer sponsorship employment benefits. 

**If you don't think you meet all of the criteria above but still are interested in the job, please apply. Nobody checks every box—we’re looking for candidates that are particularly strong in a few areas, and have some interest and capabilities in others.

About Gremlin:

Gremlin is a team of industry veterans and people eager to learn from one another. We set the standard for reliability and equip leading organizations with the mindset and expertise needed to drive reliability improvements that move the world forward. We’re backed by top-tier investors Index Ventures, Amplify Partners, and Redpoint Ventures. Our customers love us, and we’re thrilled to be a partner in their success.

What Do We Care About:

  • We Care about our People
    People are our critical differentiators. The company strives to treat our people with respect, empathy, and dignity. We expect that our people will treat each other similarly. In both cases, we will assume good intent. All are welcome at Gremlin. We know our differences make us stronger and that our best ideas and contributions can come from anyone at any level.
  • We Care about Collaboration
    Gremlin is strongest when we come together as one team with shared goals. Be the glue, not the glitter. But as a remote company, teamwork and collaboration won’t happen by accident. We approach every challenge as a shared challenge. We rely on each other for diverse perspectives and creative ideas. We celebrate our wins as a team.
  • We Care about Results
    Be high productivity, low drama. Results matter. To keep our pace, everyone owns the outcomes of their actions and takes action when needed. We reward speed over perfection. We empower each other to iterate and experiment.

You are welcome at Gremlin for who you are. The more voices and ideas we have represented in our business, the more we will all flourish, contribute, and build a more reliable internet. Gremlin is a place where everyone can grow and is encouraged. However you identify and whatever background you bring with you, please apply if this sounds like a role that would make you excited to come into work every day. It’s in our differences that we will find the power to keep building a more reliable internet by building and designing tools used by the best companies in the world.

Visit our website to learn more - https://www.gremlin.com/about

See more jobs at Gremlin

Apply for this job

+30d

AWS Senior Cloud Architect

Devoteam - Milano, Italy, Remote
DevOps, EC2, Lambda, Terraform, NoSQL, SQL, Ansible, Azure, Jenkins, AWS

Devoteam is hiring a Remote AWS Senior Cloud Architect

Job Description

Within the Cloud practice, the AWS Cloud Architect is responsible for designing, architecting, and implementing cloud-native applications and services, providing technical and architectural guidance across the AWS cloud services landscape. The role promotes and delivers cloud-migration, cloud-transformation, and modernization projects, and supports the adoption of cloud-native practices and models, with knowledge of DevOps tools and frameworks such as Kubernetes. It also provides consulting and strategic support to clients who intend both to migrate legacy applications and to develop new cloud-native applications.

Qualifications

3-5 years of experience with AWS (knowledge of Azure and GCP is appreciated), with at least 3 years of hands-on experience with the following AWS solutions: EC2, Lambda, CloudWatch, RDS, DynamoDB, Migration Hub, Control Tower, Organizations

Technical and architectural skills and experience with AWS, with the ability to balance technical and economic requirements;

  • Strong competence in AWS technologies, cloud architecture, and integration methodologies;
  • Experience in the design, planning, and implementation of cloud migration and application modernization projects;
  • Knowledge of the hyperscaler's native management tools;
  • Ability to work in a team and, when required, to technically lead the execution of cloud adoption and transformation projects;
  • Knowledge of solutions, architectures, and technologies for servers, storage, backup, networking, security, and virtualization, and of the main OS and DBMS versions (SQL, NoSQL);
  • Experience with container-based and/or serverless architectures and services;
  • Experience in designing and provisioning cloud services using IaC methodologies and tools, both native to the major hyperscalers and market-standard (e.g., Terraform, CloudFormation);
  • CI/CD tools: GitLab, Jenkins, AWS CodePipeline, etc.;
  • Knowledge of one or more configuration management tools, e.g., Chef, Ansible;
  • Knowledge of and/or certifications in microservices architectures and technologies: Kubernetes/OpenShift, EKS, etc.

See more jobs at Devoteam

Apply for this job

+30d

Senior Data Engineer

SmartMessage - İstanbul, TR, Remote
ML, S3, SQS, Lambda, Master’s Degree, NoSQL, Design, MongoDB, Azure, Python, AWS

SmartMessage is hiring a Remote Senior Data Engineer

Who are we?

We are a globally expanding software technology company that helps brands communicate more effectively with their audiences. We look forward to expanding our people capabilities and our success in developing high-end solutions beyond existing boundaries, and to establishing our brand as a Global Powerhouse.

We are free to work from wherever we want and go to the office whenever we like!!!

What is the role?

We are looking for a highly skilled and motivated Senior Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in building and managing data pipelines, noSQL databases, and cloud-based data platforms. You will work closely with data scientists and other engineers to design and implement scalable data solutions.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and architectures.
  • Implement data lake solutions on cloud platforms.
  • Develop and manage noSQL databases (e.g., MongoDB, Cassandra).
  • Work with graph databases (e.g., Neo4j) and big data technologies (e.g., Hadoop, Spark).
  • Utilize cloud services (e.g., S3, Redshift, Lambda, Kinesis, EMR, SQS, SNS).
  • Ensure data quality, integrity, and security.
  • Collaborate with data scientists to support machine learning and AI initiatives.
  • Optimize and tune data processing workflows for performance and scalability.
  • Stay up-to-date with the latest data engineering trends and technologies.

Detailed Responsibilities and Skills (an illustrative pipeline sketch follows this list):

  • Business Objectives and Requirements:
    • Engage with business IT and data science teams to understand their needs and expectations from the data lake.
    • Define real-time analytics use cases and expected outcomes.
    • Establish data governance policies for data access, usage, and quality maintenance.
  • Technology Stack:
    • Real-time data ingestion using Apache Kafka or Amazon Kinesis.
    • Scalable storage solutions such as Amazon S3, Google Cloud Storage, or Hadoop Distributed File System (HDFS).
    • Real-time data processing using Apache Spark or Apache Flink.
    • NoSQL databases like Cassandra or MongoDB, and specialized time-series databases like InfluxDB.
  • Data Ingestion and Integration:
    • Set up data producers for real-time data streams.
    • Integrate batch data processes to merge with real-time data for comprehensive analytics.
    • Implement data quality checks during ingestion.
  • Data Processing and Management:
    • Utilize Spark Streaming or Flink for real-time data processing.
    • Enrich clickstream data by integrating with other data sources.
    • Organize data into partitions based on time or user attributes.
  • Data Lake Storage and Architecture:
    • Implement a multi-layered storage approach (raw, processed, and aggregated layers).
    • Use metadata repositories to manage data schemas and track data lineage.
  • Security and Compliance:
    • Implement fine-grained access controls.
    • Encrypt data in transit and at rest.
    • Maintain logs of data access and changes for compliance.
  • Monitoring and Maintenance:
    • Continuously monitor the performance of data pipelines.
    • Implement robust error handling and recovery mechanisms.
    • Monitor and optimize costs associated with storage and processing.
  • Continuous Improvement and Scalability:
    • Establish feedback mechanisms to improve data applications.
    • Design the architecture to scale horizontally.
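
As a concrete illustration of the kind of pipeline described above, here is a minimal sketch using PySpark Structured Streaming: it ingests a clickstream topic from Kafka, applies a basic data-quality check, and writes time-partitioned Parquet into the raw layer of an S3 data lake. The topic name, schema fields, broker address, and bucket paths are illustrative assumptions, not details from this posting.

# Minimal sketch: Kafka -> Spark Structured Streaming -> time-partitioned raw layer on S3.
# Requires the spark-sql-kafka connector on the classpath; all names below are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("clickstream-raw-ingest").getOrCreate()

# Assumed event schema for the clickstream payload.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("page", StringType()),
    StructField("event_time", TimestampType()),
])

raw_events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker address
    .option("subscribe", "clickstream")                  # assumed topic name
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .filter(F.col("user_id").isNotNull())                # basic data-quality check at ingestion
    .withColumn("event_date", F.to_date("event_time"))   # time-based partition column
)

query = (
    raw_events.writeStream
    .format("parquet")
    .option("path", "s3a://example-data-lake/raw/clickstream/")         # raw layer (assumed path)
    .option("checkpointLocation", "s3a://example-data-lake/_chk/clickstream/")
    .partitionBy("event_date")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()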

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
  • 5+ years of experience in data engineering or related roles.
  • Proficiency in noSQL databases (e.g., MongoDB, Cassandra) and graph databases (e.g., Neo4j).
  • Strong experience with cloud platforms (e.g., AWS, GCP, Azure).
  • Hands-on experience with big data technologies (e.g., Hadoop, Spark).
  • Proficiency in Python and data processing frameworks.
  • Experience with Kafka, ClickHouse, Redshift.
  • Knowledge of ETL processes and data integration.
  • Familiarity with AI, ML algorithms, and neural networks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and teamwork skills.
  • Entrepreneurial spirit and a passion for continuous learning.

Join our team!

See more jobs at SmartMessage

Apply for this job

+30d

Senior Data Scientist

SmartMessage - İstanbul, TR, Remote
ML, S3, SQS, Lambda, Master’s Degree, NoSQL, MongoDB, Azure, Python, AWS

SmartMessage is hiring a Remote Senior Data Scientist

Who are we?

We are a globally expanding software technology company that helps brands communicate more effectively with their audiences. We look forward to expanding our people capabilities and our success in developing high-end solutions beyond existing boundaries, and to establishing our brand as a Global Powerhouse.

We are free to work from wherever we want and go to the office whenever we like!!!

What is the role?

We are seeking an innovative and analytical Senior Data Scientist to join our growing team. The ideal candidate will have a strong background in machine learning, AI, and data analysis. You will work on developing models and algorithms to enhance our RTDM capabilities and drive data-driven decision-making.

Key Responsibilities:

  • Develop, implement, and maintain machine learning models and algorithms.
  • Work with large datasets to extract insights and drive data-driven decisions.
  • Collaborate with data engineers to build scalable data solutions.
  • Utilize cloud-based data platforms (e.g., S3, Redshift, Lambda, Kinesis, EMR).
  • Conduct exploratory data analysis and feature engineering.
  • Choose appropriate algorithms based on the problem type and data characteristics.
  • Implement and optimize AI and neural network models.
  • Create data visualizations and reports to communicate findings.
  • Stay current with the latest research and advancements in data science and AI.
  • Mentor and guide junior data scientists and analysts.

Technical Expertise:

  • Proficiency in Python and data science libraries (e.g., TensorFlow, scikit-learn, PyTorch).
  • Strong experience with noSQL databases (e.g., MongoDB, Cassandra) and big data technologies (e.g., Spark, Hadoop).
  • Experience with cloud platforms (e.g., AWS, GCP, Azure).
  • Knowledge of data engineering processes and data integration.
  • Familiarity with graph databases (e.g., Neo4j) and message queues (e.g., Kafka, SQS).
  • Experience with a wide range of ML and AI algorithms (a brief supervised-learning sketch follows this list):
  • Supervised Learning: Linear Regression, Logistic Regression, SVM, Naive Bayes, Decision Trees, Random Forests, Gradient Boosting Machines (GBM), AdaBoost, K-Nearest Neighbors (KNN), Neural Networks.
  • Unsupervised Learning: K-Means Clustering, Hierarchical Clustering, Principal Component Analysis (PCA), Anomaly Detection, Autoencoders, Generative Adversarial Networks (GANs).
  • Reinforcement Learning: Q-Learning, Deep Q-Networks (DQN), Policy Gradient Methods, Actor-Critic Methods.
  • Deep Learning: Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory Networks (LSTMs), Transformer Models (e.g., BERT, GPT), Capsule Networks.
  • Predictive Recommendation Engines: Collaborative Filtering, Content-Based Filtering, Hybrid Systems.
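
To ground the supervised-learning portion of the list above, the following is a minimal, hypothetical scikit-learn sketch that trains a gradient boosting classifier and evaluates it with standard metrics. The synthetic dataset and hyperparameter choices are illustrative assumptions, not part of the role description.

# Minimal sketch: train and evaluate a supervised model (gradient boosting) with scikit-learn.
# The synthetic dataset and hyperparameters below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real engagement/clickstream features.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, random_state=42)
model.fit(X_train, y_train)

# Evaluate with appropriate metrics (accuracy and ROC AUC here).
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print(f"accuracy: {accuracy_score(y_test, pred):.3f}")
print(f"roc_auc:  {roc_auc_score(y_test, proba):.3f}")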

Qualifications:

  • Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, or related field.
  • 5+ years of experience in data science or related roles.
  • Understand the business problem and its relevance to business objectives.
  • Evaluate model performance using appropriate metrics.
  • Strong analytical and problem-solving skills.
  • Excellent communication and teamwork skills.
  • Entrepreneurial spirit and a passion for continuous learning.

Join our team!

See more jobs at SmartMessage

Apply for this job

+30d

Principal Software Engineer I - Inventory Management Systems

Stitch Fix - Remote, USA
DevOps, S3, Lambda, TDD, Agile, Terraform, Postgres, Design, Slack, GraphQL, Ruby, TypeScript, AWS, JavaScript, Backend

Stitch Fix is hiring a Remote Principal Software Engineer I - Inventory Management Systems

 

About the Team

The Inventory Management Systems (IMS) Team at Stitch Fix is crucial to our mission of delivering personalized styling services and a seamless client experience. Our team is responsible for designing, developing, and maintaining advanced inventory management solutions that ensure the efficient tracking, control, and optimization of our inventory across the entire supply chain. By leveraging cutting-edge technologies and innovative approaches, we enable Stitch Fix to manage inventory levels accurately, minimize costs, and meet client demand effectively. We build modern software with modern techniques like TDD, continuous delivery, DevOps, and service-oriented architecture. We focus on high-value products that solve clearly identified problems but are designed in a sustainable way and deliver long term value.

About the Role

Stitch Fix’s Inventory Management Systems (IMS) team is looking for a dynamic and forward-thinking Principal Engineer who is dedicated to solving complex inventory challenges. You will work within a distributed team of 4-8 software engineers and cross-functional partners including product, design, algorithms, and operations. You're expected to have strong written communication skills and be able to develop strong working relationships with coworkers and business partners. This is a remote position available within the United States. We operate in an agile-inspired manner, collaborating across multiple time zones. You will have the opportunity to develop your non-technical skills by mentoring engineers on your team, leading projects your team is responsible for, and influencing the roadmap of our team. You’ll also participate in our team’s on-call rotation. You will also have the opportunity to be involved in engineering-wide initiatives that aim to improve our culture and developer experience.

You're excited about this opportunity because you will…

  • Work collaboratively as a distributed team—we are a primarily remote team and we use GitHub, Slack, and video conferencing extensively to collaborate.
  • Be at the forefront of tech and fashion, helping Stitch Fix redefine the shopping experience for the next generation.
  • Lead a team in designing solutions that enable our business.
  • Help design, develop, and grow the foundation of client data at Stitch Fix.
  • Have a significant impact on  understanding client needs and preferences by building flexible, scalable systems.
  • Collaborate with stakeholders while leading the technical discovery, decision-making, and project execution.
  • Play a key role in steering design reviews and overseeing solution implementation. 
  • Engage actively in project planning and team ceremonies.
  • Proactively communicate status updates or changes to the scope or timeline of projects to stakeholders and leadership. 
  • Share the responsibility of directing the team’s investment in impactful directions.
  • Contribute to a culture of technical collaboration and scalable, resilient systems.
  • Lead the design of complex systems, recommend solutions and 3rd party integrations, and provide input on technical design documents & project plans
  • Model consistently sustainable results against measurable goals. 
  • Break down projects into actionable milestones.
  • Provide technical leadership, mentorship, pairing opportunities, timely feedback, and code reviews to encourage the growth of others.
  • Invest in the professional development and career growth of your teammates and peers.
  • Frame business problems using high-quality data analysis and empirical evidence for leadership.
  • Find new and better ways of doing things that align with business priorities.
  • Influence other engineers toward right-sized solutions.

We’re excited about you because…

  • Have roughly 10+ years of professional programming experience and are comfortable with multiple modern software development languages.
  • 2+ years of Go experience is preferred.
  • Have strong skills and hands-on experience in backend systems within large-scale service-oriented architectures.
  • Have 3+ years of experience in technical leadership - including driving technical decisions and guiding broader project goals.
  • Experience in integrating and managing third-party APIs, with a strong focus on ensuring seamless data flow, robust error handling, and ensuring business continuity in case of external service failures.
  • Have excellent analytical skills as well as communication skills both verbal and written.
  • Possess an end-to-end mindset, breaking through team silos to deliver best global outcomes.
  • Treasure helping your team members grow and learn.
  • Take initiative and operate with accountability.
  • Are motivated by solving problems and finding creative client-focused solutions.
  • Build high-quality solutions and are pragmatic about weighing project scope and value.
  • Are flexible, dedicated to your craft, and curious.
  • Have expertise in designing high-scale distributed systems, including microservice architecture, containerization and orchestration
  • Might have experience with GraphQL schema design.
  • Might have experience working remotely alongside a distributed software engineering team.

Technologies we rely on to pursue solutions to business problems include things like:

  • Go, Ruby, Rails
  • React, JavaScript, TypeScript
  • GraphQL and Postgres
  • Kafka
  • AWS services such as Lambda, S3, CloudWatch
  • Terraform

Why you'll love working at Stitch Fix...

  • We are a group of bright, kind people who are motivated by challenge. We value integrity, innovation and trust. You’ll bring these characteristics to life in everything you do at Stitch Fix.
  • We cultivate a community of diverse perspectives— all voices are heard and valued.
  • We are an innovative company and leverage our strengths in fashion and tech to disrupt the future of retail. 
  • We win as a team, commit to our work, and celebrate grit together because we value strong relationships.
  • We boldly create the future while keeping equity and sustainability at the center of all that we do. 
  • We are the owners of our work and are energized by solving problems through a growth mindset lens. We think broadly and creatively through every situation to create meaningful impact.
  • We offer comprehensive compensation packages and inclusive health and wellness benefits.

About Stitch Fix

We're changing the industry and bringing personal styling to every body. We believe in a service and a workplace where you can show up as your best, most authentic self. The Stitch Fix experience is not merely curated—it’s truly personalized to each client we style. We are changing the way people find what they love. We’re disrupting the future of retail with the precision of data science by combining it with human instinct to find pieces that fit our client’s unique style. This novel juxtaposition attracts a highly diverse group of talented people who are both thinkers and doers. This results in a simple, yet powerful offering to our customers and a successful, growing business serving millions of men, women and kids throughout the US. We believe we are only scratching the surface and are looking for incredible people like you to help us boldly create our future. 

Compensation and Benefits

Our anticipated compensation reflects the cost of labor across several US geographic markets, and the range below indicates the low end of the lowest-compensated market to the high end of the highest-compensated market. This position is eligible for new hire and ongoing grants of restricted stock units depending on employee and company performance. In addition, the position is eligible for medical, dental, vision, and other benefits. Applicants should apply via our internal or external careers site.
Salary Range
$218,000 – $232,000 USD

This link leads to the machine readable files that are made available in response to the federal Transparency in Coverage Rule and includes negotiated service rates and out-of-network allowed amounts between health plans and healthcare providers. The machine-readable files are formatted to allow researchers, regulators, and application developers to more easily access and analyze data.

Please review Stitch Fix's US Applicant Privacy Policy and Notice at Collection here: https://stitchfix.com/careers/workforce-applicant-privacy-policy

Recruiting Fraud Alert: 

To all candidates: your personal information and online safety are top of mind for us.  At Stitch Fix, recruiters only direct candidates to apply through our official career pages at https://www.stitchfix.com/careers/jobs or https://web.fountain.com/c/stitch-fix.

Recruiters will never request payments, ask for financial account information or sensitive information like social security numbers. If you are unsure if a message is from Stitch Fix, please email careers@stitchfix.com

You can read more about Recruiting Scam Awareness on our FAQ page here: https://support.stitchfix.com/hc/en-us/articles/1500007169402-Recruiting-Scam-Awareness 

 

See more jobs at Stitch Fix

Apply for this job

+30d

Sr. Data Engineer, Marketing Tech

MLDevOPSLambdaagileairflowsqlDesignapic++dockerjenkinspythonAWSjavascript

hims & hers is hiring a Remote Sr. Data Engineer, Marketing Tech

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving the million-plus Hims & Hers subscribers.

You Will:

  • Architect and develop data pipelines to optimize performance, quality, and scalability.
  • Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources.
  • Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the data lake (see the sketch after this list).
  • Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance.
  • Orchestrate sophisticated data flow patterns across a variety of disparate tooling.
  • Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics.
  • Partner with the rest of the Data Platform team to set best practices and ensure they are followed.
  • Partner with the analytics engineers to ensure the performance and reliability of our data sources.
  • Partner with machine learning engineers to deploy predictive models.
  • Partner with the legal and security teams to build frameworks and implement data compliance and security policies.
  • Partner with DevOps to build IaC and CI/CD pipelines.
  • Support code versioning and code deployments for data pipelines.
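
As a rough sketch of the REST-to-data-lake ingestion described above, the following minimal Airflow DAG pulls one day of records from a placeholder endpoint and copies them to object storage. The endpoint, bucket, and schedule are illustrative assumptions, not details of the Hims & Hers platform.

```python
# Minimal sketch of a REST-to-data-lake ingestion DAG (hypothetical endpoint and bucket).
import json
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(ds: str, **_):
    """Pull one day of records from a placeholder REST endpoint and stage them locally."""
    resp = requests.get("https://api.example.com/v1/orders", params={"date": ds}, timeout=30)
    resp.raise_for_status()
    path = f"/tmp/orders_{ds}.json"
    with open(path, "w") as fh:
        json.dump(resp.json(), fh)
    return path  # pushed to XCom for the downstream task


def load_to_lake(ti, **_):
    """Copy the staged file to an S3 data-lake bucket (bucket name is illustrative)."""
    import boto3

    path = ti.xcom_pull(task_ids="extract_orders")
    boto3.client("s3").upload_file(path, "example-data-lake", f"raw/orders/{path.split('/')[-1]}")


with DAG(
    dag_id="orders_ingest_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_lake", python_callable=load_to_lake)
    extract >> load
```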

You Have:

  • 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages.
  • Demonstrated experience writing clean, efficient, and well-documented Python code, with a willingness to become effective in other languages as needed.
  • Demonstrated experience writing complex, highly optimized SQL queries across large data sets.
  • Experience working with customer behavior data. 
  • Experience with JavaScript, event-tracking tools like GTM, analytics tools like Google Analytics and Amplitude, and CRM tools.
  • Experience with cloud technologies such as AWS and/or Google Cloud Platform.
  • Experience with serverless architecture (Google Cloud Functions, AWS Lambda); a sketch follows this list.
  • Experience with IaC technologies like Terraform.
  • Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres.
  • Experience building event streaming pipelines using Kafka/Confluent Kafka.
  • Experience with modern data stack tools like Airflow/Astronomer, Fivetran, and Tableau/Looker.
  • Experience with containers and container orchestration tools such as Docker or Kubernetes.
  • Experience with Machine Learning & MLOps.
  • Experience with CI/CD (Jenkins, GitHub Actions, Circle CI).
  • Thorough understanding of SDLC and Agile frameworks.
  • Project management skills and a demonstrated ability to work autonomously.
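
To make the serverless bullet above concrete, here is a minimal Python AWS Lambda handler in that style; the event shape, bucket name, and key layout are invented for illustration rather than taken from the Hims & Hers platform.

```python
# Sketch of a small AWS Lambda handler in the serverless style mentioned above.
# The event shape, bucket name, and key layout are illustrative assumptions.
import json

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Validate an incoming tracking event and append it to a raw-events prefix in S3."""
    record = json.loads(event.get("body") or "{}")
    if "user_id" not in record or "event_name" not in record:
        return {"statusCode": 400, "body": json.dumps({"error": "missing required fields"})}

    key = f"raw/events/{record['event_name']}/{context.aws_request_id}.json"
    s3.put_object(Bucket="example-events-bucket", Key=key, Body=json.dumps(record))
    return {"statusCode": 202, "body": json.dumps({"stored": key})}
```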

Nice to Have:

  • Experience building data models using dbt
  • Experience designing and developing systems with desired SLAs and data quality metrics.
  • Experience with microservice architecture.
  • Experience architecting an enterprise-grade data platform.

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$140,000 – $170,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics or any other basis forbidden under federal, state, or local law. Hims considers all qualified applicants in accordance with the San Francisco Fair Chance Ordinance.

Hims & hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, you may contact us at accommodations@forhims.com. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

+30d

Sr Data Engineer

VeriskJersey City, NJ, Remote
LambdasqlDesignlinuxpythonAWS

Verisk is hiring a Remote Sr Data Engineer

Job Description

We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data pipeline architecture. The ideal candidate is an experienced data pipeline builder and data wrangler with strong experience in handling data at scale. The Data Engineer will support our software developers, data analysts and data scientists on various data initiatives.

This is a remote role that can be done anywhere in the continental US; work is on Eastern time zone hours.

Why this role

This is a highly visible role within the enterprise data lake team. Working with our Data group and business analysts, you will lead the creation of the data architecture that produces the data assets powering our data platform. This role requires working closely with business leaders, architects, engineers, data scientists, and a wide range of stakeholders throughout the organization to build and execute our strategic data architecture vision.

Job Duties

  • Extensive understanding of SQL. Ability to fine-tune queries using RDBMS performance features such as indexes, partitioning, explain plans, and cost optimizers.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and the AWS technology stack.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with data scientists and industry leaders to understand data needs and design appropriate data models.
  • Participate in the design and development of the AWS-based data platform and data analytics.

Qualifications

Skills Needed

  • Design and implement data ETL frameworks for secured Data Lake, creating and maintaining an optimal pipeline architecture.
  • Examine complex data to optimize the efficiency and quality of the data being collected, resolve data quality problems, and collaborate with database developers to improve systems and database designs.
  • Hands-on experience building data applications using AWS Glue, Lake Formation, Athena, AWS Batch, AWS Lambda, Python, and Linux shell and batch scripting.
  • Hands-on experience with AWS database services (Redshift, RDS, DynamoDB, Aurora, etc.).
  • Experience writing advanced SQL involving self-joins, window functions, correlated subqueries, CTEs, etc. (see the sketch after this list).
  • Strong understanding and experience using data management fundamentals, including concepts such as data dictionaries, data models, validation, and reporting.  
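
As a small, self-contained illustration of the advanced-SQL bullet above, the following Python snippet runs a CTE plus window-function query and inspects its plan; the schema and data are invented, and sqlite3 simply stands in for whatever RDBMS the team actually uses.

```python
# Self-contained illustration of a CTE plus window functions (invented schema and data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, order_date TEXT, amount REAL);
    CREATE INDEX idx_orders_customer ON orders (customer_id, order_date);
    INSERT INTO orders VALUES
        (1, '2024-01-05', 120.0), (1, '2024-02-11', 80.0),
        (2, '2024-01-20', 200.0), (2, '2024-03-02', 50.0);
""")

query = """
WITH ranked AS (
    SELECT customer_id,
           order_date,
           amount,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS rn,
           SUM(amount)  OVER (PARTITION BY customer_id)                          AS customer_total
    FROM orders
)
SELECT customer_id, order_date, amount, customer_total
FROM ranked
WHERE rn = 1;  -- most recent order per customer, plus lifetime spend
"""

for row in conn.execute(query):
    print(row)

# EXPLAIN QUERY PLAN is SQLite's analogue of an explain plan for checking index usage.
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row)
```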

Education and Training

  • 10 years of full-time software engineering experience preferred, with at least 4 years in an AWS environment focused on application development.
  • Bachelor’s degree or foreign equivalent degree in Computer Science, Software Engineering, or related field
  • US citizenship required

#LI-LM03
#LI-Hybrid

See more jobs at Verisk

Apply for this job

+30d

Cloud NetOps Engineer

In All Media IncArgentina - Remote
DevOPSS3EC2LambdaterraformDesignansiblelinuxpythonAWS

In All Media Inc is hiring a Remote Cloud NetOps Engineer

Job Summary:

We are seeking a highly skilled Cloud NetOps Engineer to design, deploy, and manage our scalable, secure, and high-availability AWS cloud infrastructure. The ideal candidate will have extensive experience in network engineering, security solutions implementation, automation, scripting, system administration, and monitoring and optimization.

Key Responsibilities:

Cloud Infrastructure Management:

  • Design, deploy, and manage scalable, secure, and high-availability AWS cloud infrastructure.
  • Optimize AWS services (EC2, VPC, S3, RDS, Lambda, etc.) to ensure efficient operation and cost management.

Network Engineering:

  • Configure, manage, and troubleshoot network routing and switching across cloud and on-premises environments.
  • Implement and maintain advanced network security solutions, including firewalls, VPNs, and intrusion detection/prevention systems.

Security Solutions Implementation:

  • Develop and implement end-to-end network security solutions to protect against internal and external threats.
  • Monitor network traffic and security logs to identify and mitigate potential security breaches.

Automation and Scripting:

  • Automate infrastructure provisioning, configuration management, and deployment processes using tools such as Terraform and Ansible.
  • Develop custom scripts and tools in Python to improve operational efficiency and reduce manual intervention (see the sketch after this list).
  • Implement automation strategies to streamline repetitive tasks and enhance productivity.
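
One way such an operational Python script might look is sketched below: it reports unattached EBS volumes and running EC2 instances missing a required tag. The region and tag key are illustrative assumptions.

```python
# Sketch of an operational script: report unattached EBS volumes and EC2 instances
# missing a required tag. Region and tag key are illustrative assumptions.
import boto3

REQUIRED_TAG = "owner"
ec2 = boto3.client("ec2", region_name="us-east-1")

# Unattached volumes are a common source of avoidable spend.
volumes = ec2.describe_volumes(Filters=[{"Name": "status", "Values": ["available"]}])
for vol in volumes["Volumes"]:
    print(f"unattached volume: {vol['VolumeId']} size={vol['Size']}GiB")

# Flag running instances that are missing the required tag.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                print(f"instance missing '{REQUIRED_TAG}' tag: {instance['InstanceId']}")
```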

System Administration:

  • Perform system administration tasks for Linux servers, including installation, configuration, maintenance, and troubleshooting.
  • Manage and integrate Active Directory services for authentication and authorization.

Firewall and Security Management:

  • Administer and troubleshoot Palo Alto firewalls and Panorama for centralized management and policy enforcement.
  • Manage Cisco Meraki wireless and security stacks, ensuring robust network performance and security compliance.

Monitoring and Optimization:

  • Implement monitoring solutions to track performance metrics, identify issues, and optimize network and cloud resources (a sketch follows this list).
  • Conduct regular performance tuning, capacity planning, and system audits to ensure optimal operation.
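
As a minimal sketch of the monitoring bullet above, the following Python snippet publishes a custom CloudWatch metric and creates an alarm on it; the namespace, metric name, and thresholds are invented for illustration.

```python
# Sketch of publishing a custom CloudWatch metric and alarming on it.
# Namespace, metric name, and thresholds are illustrative assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish a custom metric, e.g. from a health-check script.
cloudwatch.put_metric_data(
    Namespace="Example/NetOps",
    MetricData=[{"MetricName": "VpnTunnelLatencyMs", "Value": 42.0, "Unit": "Milliseconds"}],
)

# Alarm when latency stays high for three consecutive 1-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="example-vpn-latency-high",
    Namespace="Example/NetOps",
    MetricName="VpnTunnelLatencyMs",
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=200.0,
    ComparisonOperator="GreaterThanThreshold",
)
```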

Collaboration and Support:

  • Work closely with cross-functional teams, including DevOps, Security, and Development, to support infrastructure and application needs.
  • Provide technical support and guidance to internal teams, ensuring timely resolution of network and system issues.

Documentation and Compliance:

  • Maintain comprehensive documentation of network configurations, infrastructure designs, and operational procedures.
  • Ensure compliance with industry standards and regulatory requirements through regular audits and updates.

Continuous Improvement:

  • Stay updated with the latest trends and technologies in cloud computing, networking, and cybersecurity.
  • Propose and implement improvements to enhance system reliability, security, and performance.

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • Proven experience as a Cloud Engineer, Network Engineer, or similar role.
  • Strong knowledge of AWS services and cloud infrastructure management.
  • Proficiency in network engineering, including routing, switching, and security solutions.
  • Experience with automation tools such as Terraform, Ansible, and scripting languages like Python.
  • Solid system administration skills, particularly with Linux servers.
  • Experience managing firewalls and security solutions (e.g., Palo Alto, Cisco Meraki).
  • Strong problem-solving skills and the ability to work in a collaborative environment.
  • Excellent documentation and communication skills.

Preferred Qualifications:

  • AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified SysOps Administrator).
  • Familiarity with DevOps practices and tools.
  • Knowledge of regulatory requirements and compliance standards (e.g., PCI, CIS).

See more jobs at In All Media Inc

Apply for this job

+30d

Senior SRE Engineer (Viator)

TripadvisorOxford, London, Lisbon, Krakow hybrid
DevOPSS3EC2LambdaagilejiraterraformnosqlsqlDesigngitjavadockerelasticsearchkubernetesjenkinspythonAWS

Tripadvisor is hiring a Remote Senior SRE Engineer (Viator)

Viator, a Tripadvisor company, is the leading marketplace for travel experiences. We believe that making memories is what travel is all about. And with 300,000+ travel experiences to explore—everything from simple tours to extreme adventures (and all the niche, interesting stuff in between)—making memories that will last a lifetime has never been easier. With industry-leading flexibility and last-minute availability, it's never too late to make any day extraordinary. Viator. One app, 300,000+ travel experiences you’ll remember.

We are looking for a Senior Software Engineer with a blend of software engineering and operations skills, a person who truly believes in and lives by DevOps principles and values. The role includes working within the SRE team while interacting with all feature and platform teams to deliver state-of-the-art solutions that ensure the availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning of our services and applications. If you are looking to be challenged technically and have fun, this is the place for you!

What will you do

  • As part of the SRE team, you will participate in designing and implementing parts of our engineering platform that enable scaling, metrics, and observability, and that ensure and improve reliability.
  • Identify gaps in our engineering platform and improve availability, latency, performance, efficiency, change management, monitoring, and emergency response
  • Guide and mentor other people on the team and help them grow their skills and knowledge
  • Evangelise DevOps and SRE culture and lead innovation across engineering feature teams
  • Become part of a PagerDuty-based on-call rotation

Skills & Experience

  • Comfortable and happy to code in Python and Java. Experience writing commercial application code in Java.
  • Deep knowledge and understanding of Computer Engineering fundamentals and first principles
  • Deep understanding of scaling solutions at both the infrastructure level (caching layers, database replicas, sharding, partitioning, etc.) and the architectural level (denormalisation, CQRS-ES, federation, etc.)
  • Experience building and working with and monitoring microservice architectures in large distributed cloud environments (ideally AWS).
  • Experience with observability tooling – proficiency with tools like Elasticsearch, Kibana, APM, Sentry, Grafana, Prometheus, Overops, or similar (see the sketch after this list)
  • The ability to guide and mentor other members within the team and improve the way we collaborate, learn, and share ideas
  • Documentation and alignment with internal team members; strong written and verbal communication skills are therefore required
  • Excellent collaboration skills to be able to work closely with product engineers and product owners to understand their context and co-design appropriate solutions which balance feature velocity with site reliability
  • Version control and CI/CD – Jenkins, git, bitbucket, GitLab, liquibase
  • Experience in using SQL / NoSQL data stores – RDS, DynamoDB, ElastiCache, Solr
  • Jira and Agile methodologies
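
As a minimal sketch of the observability tooling mentioned above, here is a Python service instrumented with the Prometheus client library; the metric names and port are illustrative choices, not Viator conventions.

```python
# Minimal sketch of service instrumentation with the Prometheus Python client.
# Metric names and the port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["endpoint"])


def handle_request(endpoint: str) -> None:
    """Record a request count and its latency for the given endpoint."""
    REQUESTS.labels(endpoint=endpoint).inc()
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:  # simulate steady traffic so the metrics have data
        handle_request("/search")
```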

Desired Skills & Knowledge

  • Excellent GNU/Linux system administration skills
  • Experience with OpenTelemetry
  • Experience of managing Kubernetes cluster and containerisation
  • AWS and IaC – Terraform, CloudFormation, VPC, IAM, EC2, EKS, Lambda, RDS, S3, CloudWatch, puppet, docker
  • Experience building and running monitoring infrastructure at a large scale. For example, Elasticsearch clusters, Prometheus, Kibana, Grafana, etc
  • Web applications and HTTP servers – Java, apache, nginx
  • Load balancers – ELB, HAProxy, nginx
  • Experience in running  SQL / NoSQL data stores – RDS, DynamoDB, ElastiCache, Solr

 

Perks of Working at Viator

  • Competitive compensation packages, including base salary, annual bonus, and equity.
  • “Work your way” with flexibility to suit your lifestyle. We take a remote-friendly approach to collaboration, with the option to join on-site as often as you’d like in select locations. 
  • Flexible schedule. Work-life balance is ingrained in our culture by design. Trust and accountability make it work.
  • Donation matching. Give back? Give more! We match qualifying charitable donations annually.
  • Tuition assistance. Want to level up your career? We love to hear it! Receive annual support for qualified programs.
  • Lifestyle benefit. An annual benefit to spend on yourself. Use it on travel, wellness, or whatever suits you.
  • Travel perks. We believe that travel is employee development, so we provide discounts and more.
  • Employee assistance program. We’re here for you with resources and programs to help you through life’s challenges.
  • Health benefits. We offer great coverage and competitive premiums.

Our Values

We aspire to lead; we’re relentlessly curious... Want to know more? Read up on our values:

  • We aspire to lead. Tap into your talent, ambition, and knowledge to bring us – and you – to new heights.
  • We’re relentlessly curious. We push beyond the usual, the known, the “that’s just how it’s done.”
  • We’re better together. We learn from, accept, respect, support, and value one another– and are creating something remarkable in the process.
  • We serve our customers, always. We listen, question, respond, and strive for wow moments.  

We strive for better, not perfect. We won’t get it right the first time – or every time. We’ll provide a safe environment in which to make mistakes, iterate, improve, and grow.

Our workplace is for everyone, as is our people powered platform. At Tripadvisor, we want you to bring your unique identities, abilities, and experiences, so we can collectively revolutionize travel and together find the good out there.

Application process

  • 30 minute call with a recruiter to learn more about the role
  • 1 hour technical coding interview with someone from the Viator Engineering team
  • Three one-hour interviews with members of the team, covering technical topics - including some coding - and what you would bring to Viator.

If you need a reasonable accommodation or support during the application or the recruiting process due to a medical condition or disability, please reach out to your individual recruiter or send an email to AccessibleRecruiting@Tripadvisor.com and let us know the nature of your request. Please include the job requisition number in your message.

#LI-TA1

#Viator

#LI-Hybrid

See more jobs at Tripadvisor

Apply for this job

+30d

Data Engineer (Australia)

DemystDataAustralia, Remote
SalesS3EC2Lambdaremote-firstDesignpythonAWS

DemystData is hiring a Remote Data Engineer (Australia)

Our Solution

Demyst unlocks innovation with the power of data. Our platform helps enterprises solve strategic use cases, including lending, risk, digital origination, and automation, by harnessing the power and agility of the external data universe. We are known for harnessing rich, relevant, integrated, linked data to deliver real value in production. We operate as a distributed team across the globe and serve over 50 clients as a strategic external data partner. Frictionless external data adoption within digitally advancing enterprises is unlocking market growth and allowing solutions to finally get out of the lab. If you actually like to get things done and deployed, Demyst is your new home.

The Opportunity

As a Data Engineer at Demyst, you will be powering the latest technology at leading financial institutions around the world. You may be solving a fintech's fraud problems or crafting a Fortune 500 insurer's marketing campaigns. Using innovative data sets and Demyst's software architecture, you will use your expertise and creativity to build best-in-class solutions. You will see projects through from start to finish, assisting in every stage from testing to integration.

To meet these challenges, you will access data using Demyst's proprietary Python library via our JupyterHub servers, and utilize our cloud infrastructure built on AWS, including Athena, Lambda, EMR, EC2, S3, and other products. For analysis, you will leverage AutoML tools, and for enterprise data delivery, you'll work with our clients' data warehouse solutions like Snowflake, DataBricks, and more.

Demyst is a remote-first company. The candidate must be based in Australia.

Responsibilities

  • Collaborate with internal project managers, sales directors, account managers, and clients’ stakeholders to identify requirements and build external data-driven solutions
  • Perform data appends, extracts, and analyses to deliver curated datasets and insights to clients to help achieve their business objectives
  • Understand and keep current with external data landscapes such as consumer, business, and property data.
  • Engage in projects involving entity detection, record linking, and data modelling
  • Design scalable code blocks using Demyst’s APIs/SDKs that can be leveraged across production projects
  • Govern releases, change management and maintenance of production solutions in close coordination with clients' IT teams

Requirements

  • Bachelor's in Computer Science, Data Science, Engineering or similar technical discipline (or commensurate work experience); Master's degree preferred
  • 1-3 years of Python programming (with Pandas experience)
  • Experience with CSV, JSON, parquet, and other common formats
  • Data cleaning and structuring (ETL experience); see the sketch after this list
  • Knowledge of APIs (REST and SOAP), HTTP protocols, and API security best practices
  • Experience with SQL, Git, and Airflow
  • Strong written and oral communication skills
  • Excellent attention to detail
  • Ability to learn and adapt quickly
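
As a small sketch of the Pandas-based cleaning and format handling mentioned above, the snippet below reads a CSV extract, tidies it, and writes Parquet; the file and column names are invented for illustration.

```python
# Small pandas ETL sketch: clean a CSV extract and write it out as Parquet.
# File names and column names are invented for illustration.
import pandas as pd

df = pd.read_csv("raw_customers.csv", dtype={"postcode": "string"})

# Basic cleaning: normalise column names, trim whitespace, drop exact duplicates.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df["email"] = df["email"].str.strip().str.lower()
df = df.drop_duplicates()

# Parse dates and keep only records with a usable signup date.
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df = df.dropna(subset=["signup_date"])

# Parquet preserves types and compresses well for downstream analysis and appends.
df.to_parquet("clean_customers.parquet", index=False)
```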

Benefits

  • Distributed working team and culture
  • Generous benefits and competitive compensation
  • Collaborative, inclusive work culture: all-company offsites and local get-togethers in Bangalore
  • Annual learning allowance
  • Office setup allowance
  • Generous paid parental leave
  • Be a part of the exploding external data ecosystem
  • Join an established fast growth data technology business
  • Work with the largest consumer and business external data market in an emerging industry that is fueling AI globally
  • Outsized impact in a small but rapidly growing team offering real autonomy and responsibility for client outcomes
  • Stretch yourself to help define and support something entirely new that will impact billions
  • Work within a strong, tight-knit team of subject matter experts
  • Small enough where you matter, big enough to have the support to deliver what you promise
  • International mobility available for top performers after two years of service

Demyst is committed to creating a diverse, rewarding career environment and is proud to be an equal opportunity employer. We strongly encourage individuals from all walks of life to apply.

See more jobs at DemystData

Apply for this job