Lilly Data Engineering Associate Consultant/ Senior Associate Consultant in Bengaluru, India

At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our 35,000 employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We’re looking for people who are determined to make life better for people around the world.

Eli Lilly Services India Pvt Ltd

Business Insights & Analytics Team: Data Engineer

The purpose of the LCCI Business Insights & Analytics team is to partner with US Business Insights & Analytics in providing high-quality analytical support to Brand, Market Research, and other internal partners through the right data, smart analytics, and actionable insights. Our team is responsible for setting up the data warehouses necessary to handle large volumes of data, creating meaningful analyses, and delivering recommendations to leadership.

As part of the LCCI team, we are excited to offer the role of Data Engineer who will be an integral part of the Data Governance and analytics team in 2023 and beyond.

Core Responsibilities

  • Create and maintain optimal data pipeline architecture for ETL/ELT into structured data stores

  • Assemble large, complex data sets that meet functional and non-functional business requirements, and create and maintain multi-dimensional models such as star and snowflake schemas, including normalization, de-normalization, and joining of datasets

  • Apply expert-level experience in creating fact tables and dimension tables and ingesting datasets into cloud-based tools. Job scheduling and automation experience is a must

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  • Set up and maintain data ingestion, streaming, scheduling, and job-monitoring automation. Connectivity between Lambda, Glue, S3, Redshift, and Power BI must be maintained for uninterrupted automation

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and “big data” technologies on cloud platforms such as AWS and Google Cloud

  • Build analytics tools that utilize the data pipeline to provide actionable insight into customer acquisition, operational efficiency and other key business performance metrics

  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs

  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
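
The fact/dimension modelling described above can be sketched in miniature with Python's standard library. This is a hedged illustration only: all table and column names are hypothetical, and a production implementation would target Redshift rather than SQLite.

```python
import sqlite3

# Minimal star-schema sketch: one fact table referencing two dimension
# tables. Table and column names are illustrative, not Lilly's schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_product (
    product_key   INTEGER PRIMARY KEY,
    product_name  TEXT
);
CREATE TABLE dim_date (
    date_key       INTEGER PRIMARY KEY,
    calendar_date  TEXT
);
CREATE TABLE fact_sales (
    product_key  INTEGER REFERENCES dim_product(product_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    units_sold   INTEGER,
    revenue      REAL
);
""")

cur.execute("INSERT INTO dim_product VALUES (1, 'Widget')")
cur.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01')")
cur.execute("INSERT INTO fact_sales VALUES (1, 20240101, 10, 99.5)")

# A typical analytical query joins the fact table to its dimensions
# and aggregates a measure.
cur.execute("""
SELECT p.product_name, d.calendar_date, SUM(f.revenue)
FROM fact_sales f
JOIN dim_product p ON p.product_key = f.product_key
JOIN dim_date d    ON d.date_key = f.date_key
GROUP BY p.product_name, d.calendar_date
""")
rows = cur.fetchall()
print(rows)
```

The same star shape (a central fact table joined to surrounding dimensions) is what candidates would build and maintain at warehouse scale.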


Basic Qualifications

  • 2-7 years of in-depth, hands-on experience in data warehousing with Redshift or any OLAP system to support business/data analytics and business intelligence (BI)

  • Advanced working knowledge of SQL and experience with relational databases and query authoring, as well as working familiarity with a variety of databases and cloud data warehouses such as Redshift

  • Data model development, including creation of additional dimensions and facts, views, and stored procedures, enabling programmability to facilitate automation

  • Prior experience with data modelling and OLAP cube modelling in SQL Server, SSAS, and Power BI

  • Experience with Redshift and OLAP systems is a must; AWS Glue pipeline skills are a must

  • Experience compressing data into Parquet to improve processing, along with fine-tuned SQL programming skills

  • Experience building and optimizing “big data” data pipelines, architectures and data sets

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement

  • Strong analytic skills related to working with structured and unstructured datasets

  • Experience manipulating, processing, and extracting value from large, disconnected datasets

  • Working knowledge of message queuing, stream processing, and highly scalable “big data” stores

  • Experience supporting and working with cross-functional teams and Global IT

  • Familiarity with working in agile-based models
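
The message-queuing and stream-processing familiarity asked for above can be illustrated with a toy producer/consumer using only Python's standard library. This is a teaching sketch, not any particular broker or framework (Kafka, Storm, etc.), and all names in it are hypothetical.

```python
import queue
import threading

# Toy streaming pipeline: a producer pushes events onto a queue and a
# consumer aggregates them, mimicking the shape of a stream processor.
events = queue.Queue()
totals = {}

def producer():
    # Illustrative (key, value) event records.
    for record in [("brand_a", 5), ("brand_b", 3), ("brand_a", 2)]:
        events.put(record)
    events.put(None)  # sentinel marking end of stream

def consumer():
    while True:
        record = events.get()
        if record is None:
            break
        key, value = record
        # Running aggregation per key, as a stream processor would keep.
        totals[key] = totals.get(key, 0) + value

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(totals)
```

Real systems replace the in-process queue with a durable broker and the dictionary with windowed, fault-tolerant state, but the produce/consume/aggregate shape is the same.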

Preferred Qualifications/Expertise

  • Experience with relational SQL and NoSQL databases, including AWS Redshift, Postgres and Cassandra

  • Experience with AWS cloud services, preferably S3, EC2, EMR, RDS, and SageMaker

  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.

  • Experience with object-oriented/functional scripting languages: Python, Java, R, C++, Scala, etc.


  • Bachelor’s or master’s degree in a Technology or Computer Science discipline

Eli Lilly and Company, Lilly USA, LLC and our wholly owned subsidiaries (collectively “Lilly”) are committed to helping individuals with disabilities participate in the workforce and ensuring equal opportunity to compete for jobs. If you require an accommodation to submit a resume for positions at Lilly, please email Lilly Human Resources ( ) for further assistance. Please note: This email address is intended for use only to request an accommodation as part of the application process. Any other correspondence will not receive a response.

Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status.


At Lilly we strive to ensure our employees are part of a team that cares about them and our shared purpose of making life better for those around the world. How do we do this? We continue to look for ways to include, innovate, accelerate and deliver while maintaining integrity, excellence and respect for people. We hope that you seek to join us on our journey as we create medicine and deliver improved outcomes for patients across the globe!