Senior Data Engineer

Data Science

Bangalore, India

Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Have you ever found a new favourite series on Netflix, picked up groceries curbside at Walmart, or paid for something using Square? That’s the power of data in motion in action—giving organisations instant access to the massive amounts of data constantly flowing throughout their business. At Confluent, we’re building the foundational platform for this new paradigm of data infrastructure. Our cloud-native offering is designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organisation. With Confluent, organisations can create a central nervous system to innovate and win in a digital-first world.

We’re looking for self-motivated team members who crave a challenge and feel energised to roll up their sleeves and help realise Confluent’s enormous potential. Chart your own path and take healthy risks as we solve big problems together. We value having diverse teams and want you to grow as we grow—whether you’re just starting out in your career or managing a large team, you’ll be amazed at the magnitude of your impact.

About the Team:
The mission of the Data Science/Data Engineering team at Confluent is to serve as the central nervous system of all things data for the company: we build data and analytics infrastructure, insights, models, and tools to empower data-driven thinking and optimize every part of the business. This position offers limitless opportunities for an ambitious data engineer to make an immediate and meaningful impact within a hyper-growth start-up, and to contribute to a highly engaged open source community.

About the Role:
This is a partnership-heavy role. As a member of the Data team, you will enable functions across the company (Product, Engineering, Go-to-Market, etc.) to be data-driven. As a Data Engineer, you will take on big data challenges in an agile way. You will build data pipelines that make data accessible to data scientists, operations teams, executives, and the rest of the company. You will also build data models that deliver insightful analytics while ensuring the highest standards of data integrity. You are encouraged to think outside the box and explore the limits of the latest technologies. Successful candidates have strong technical capabilities, a can-do attitude, and a highly collaborative working style.

Job Responsibilities:

  • Design, build, and launch highly efficient and reliable data pipelines to move data across a number of platforms, including the data warehouse and real-time systems.
  • Develop strong subject-matter expertise and manage the SLAs for those data pipelines.
  • Set up and improve BI tooling and platforms to help the team create dynamic tools and reporting.
  • Partner with data scientists and business partners to develop internal data products that improve operational efficiency across the organization.
Here are some examples of our work:

  • Data Pipelines - Create new pipelines or rewrite existing ones using SQL and Python on Airflow and DBT.
  • Data Quality and Anomaly Detection - Improve existing tools to detect anomalies in real time and through offline metrics.
  • Data Modeling - Partner with analytics consumers to improve existing datasets and build new ones.
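To give a flavour of the anomaly-detection work above, here is a minimal sketch in plain Python. It flags a pipeline metric (e.g. a daily row count) when it strays too far from its recent history using a simple z-score check. The function name, metric, and threshold are illustrative assumptions, not part of Confluent's actual tooling, which in practice runs on Airflow and DBT.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, z_threshold=3.0):
    """Flag `latest` if it deviates from `history` by more than
    `z_threshold` standard deviations (a simple z-score check).

    Illustrative sketch only; real pipelines would also handle
    seasonality, trends, and sparse history.
    """
    if len(history) < 2:
        return False  # not enough data to estimate spread
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu  # constant history: any change is anomalous
    return abs(latest - mu) / sigma > z_threshold

# Example: daily row counts produced by a hypothetical pipeline run
daily_rows = [10_120, 10_340, 9_980, 10_205, 10_410]
print(is_anomalous(daily_rows, 10_300))  # → False (a typical day)
print(is_anomalous(daily_rows, 2_500))   # → True (a sudden drop)
```

In production, a check like this would typically run as a task after each pipeline load, alerting the on-call engineer before bad data reaches downstream models.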

Job Qualifications:

  • 4 to 6 years of experience in a Data Engineering role, with a focus on data warehouse technologies, data pipelines and BI tooling.
  • Bachelor's or advanced degree in Computer Science, Mathematics, Statistics, Engineering, or a related technical discipline.
  • Expert knowledge of SQL and of relational and cloud database systems and concepts.
  • Strong knowledge of data architectures, data modeling, and the data infrastructure ecosystem.
  • Experience with enterprise business systems such as Salesforce, Marketo, Zendesk, Clari, Anaplan, etc.
  • Experience with ETL pipeline tools like Airflow, DBT, and with code version control systems like Git.
  • The ability to communicate cross-functionally, derive requirements and architect shared datasets; ability to synthesize, simplify and explain complex problems to different types of audiences, including executives.
  • The ability to thrive in a dynamic environment. That means being flexible and willing to jump in and do whatever it takes to be successful.

Nice to Have:

  • Experience with Apache Kafka
  • Knowledge of batch and streaming data architectures
  • A product mindset to understand business needs and devise scalable engineering solutions

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Please review our California Candidate Privacy Notice, which describes how and when Confluent, Inc. and its group companies collect, use, and share certain personal information of California job applicants and prospective employees.



  • Best Teammates on Planet Earth
  • Adjustable Working Arrangements
  • Robust Benefits
  • Rest and Recharge Days
  • Weekly Lunch Spend
  • Flexible Paid Time Off (PTO)

Confluent is Remote-First

At Confluent, we care about how you work - not where. We encourage you to apply for positions outside of the listed location or your immediate region.
