About the Role:
We are looking for a highly skilled Analytics/AI Engineer to bridge the gap between data engineering and data analysis. You will play a critical role in building scalable data pipelines, transforming raw data into actionable insights, and enabling data-driven decision-making across the organization.
Key Responsibilities:
- Design and implement robust, scalable data pipelines using Python, PySpark, Polars, and Gen-AI tooling.
- Develop data models and transformation logic to support analytics and business intelligence needs.
- Leverage Python and Object-Oriented Programming (OOP) principles to build reusable and maintainable data tools and workflows.
- Utilize Databricks and cloud-based platforms to process and manage large datasets efficiently.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver clean, trusted data.
- Ensure high data quality and consistency across different systems and reports.
Must-Have Skills:
- Strong knowledge of Python programming.
- Good understanding of Object-Oriented Programming (OOP) concepts.
- Advanced SQL skills for data manipulation and querying.
- Experience with PySpark and Polars for distributed and in-memory data processing.
- Ability to work independently in a fast-paced, agile environment.
- A problem-solving mindset with an eagerness to learn and experiment.
- Good communication and collaboration skills.
Nice to Have:
- Understanding of data architecture, ETL/ELT processes, and data modelling.
- Knowledge of Databricks and the Azure cloud environment.
- Understanding of data structures and algorithms.
- Familiarity with coding best practices, CI/CD, and version control (Git).