
Job Title

Lead Data Engineer

Company Description:

Infostretch is a pure-play digital engineering services firm focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction. We deliver custom solutions that meet customers’ technology needs wherever they are in their digital lifecycle. Backed by Goldman Sachs and Everstone Capital, Infostretch works with both large enterprises and emerging innovators -- putting digital to work to enable new products and business models, engage with customers in new ways, and create sustainable competitive differentiation.

Infostretch is a digital-native professional services firm. By combining our in-depth experience with niche digital technologies, ready-made tools, frameworks, technologies, and partnerships, we help enterprises get digital right, the first time. Backed by leading private equity firms Goldman Sachs and Everstone Group, the company is trusted by leading Fortune 100 companies as well as emerging innovators to deliver solutions that work seamlessly across channels, leverage predictive analytics to optimize the software lifecycle, and support continuous innovation.
The company has been certified as a Great Place to Work for consecutive years, thanks to its employee-friendly culture, which has enabled consistent double-digit growth.

Job Description:

Infostretch is looking for a Lead Data Engineer based in Denver, CO.

Job Summary

As a Data Engineer at the Analytics Centre of Excellence, you will be very hands-on, responsible for building and maintaining a scalable and robust Data Platform, including the development of complex data processing pipelines, ETL, and data integration. You will work closely with our architect team to design and develop the Enterprise Data Platform, and you will advocate and instill best engineering practices in its development. The Data Engineer is responsible for developing analytics big data transformation flows on a large-scale service analytics platform for various in-house customers. Responsibilities include delivering high-quality, use-case-ready data pipelines for deployment to customers that meet business requirements and align with the solution vision and strategy. The Data Engineer will have a strong influence on the technical design and is expected to have a deep understanding of the business and solution development context.


Key Responsibilities


  • Create and maintain optimal data pipeline architecture for both stream processing and batch processing.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Business, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Work with data and analytics experts to strive for greater functionality in our data systems.


Required Knowledge/Skills/Abilities:

Must Haves

  1. Data Engineer with 5 to 10 years of experience in batch and stream processing
  2. Experience with Apache Spark and Apache Flink (highly desirable)
  3. Experience working with Apache Kafka and Kafka Connect
  4. Strong analytical SQL skills
  5. Experience developing large-scale, high-volume data pipelines is a must
  6. Experience with Java/Scala and/or Python


Highly Desirable

  1. Experience with ingestion, rollups, and real-time analytics using Apache Druid
  2. Experience with stateful stream processing in Apache Flink
  3. Experience with the AWS data analytics stack: AWS EMR, AWS Lake Formation, AWS S3, AWS Glue
  4. Experience with Apache Airflow
  5. Experience with data quality

If you feel that this is a good match for your skill set, please submit a current Word version of your resume along with a cover letter describing your skills, experience, and salary expectations. We are an Equal Opportunity Employer (EOE). You can read our job applicant privacy policy here.

Job Code: IS-SCF2022040803

Category: Engineering

Job Type: Full-Time

Location: Denver, CO, USA

Open Positions: 1
