Data Engineer with 3+ years of experience designing and optimizing scalable ETL/ELT pipelines, big data processing workloads, and cloud-based data solutions. Proficient in Python, SQL, PySpark, AWS, Snowflake, dbt, and Azure, with strong expertise in building metadata-driven workflows and automating data pipelines. Currently using Azure Data Factory (ADF) and Databricks to orchestrate and transform large datasets: ingesting CSV/Excel files from SFTP, performing complex transformations in Databricks, and exporting cleansed datasets back to SFTP as well as to Delta tables. Skilled in integrating Snowflake for modern data warehousing and building lightweight analytics applications with Streamlit. Proven ability to improve data quality and pipeline performance and to support analytics/BI teams through robust data modeling and modern warehousing practices.
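
The following is a minimal PySpark sketch illustrating the kind of ADF + Databricks workflow described above, not the exact production pipeline: it assumes ADF has already copied the SFTP files into a landing path, and all paths, table names, and column names (landing/export containers, curated.orders, order_id, order_date) are hypothetical placeholders.

```python
# Minimal sketch of the described workflow: ingest staged CSVs, cleanse, and
# write both a Delta table and a CSV extract for ADF to push back to SFTP.
# All paths, tables, and columns below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

landing_path = "abfss://landing@examplelake.dfs.core.windows.net/sftp/orders/"    # hypothetical
export_path = "abfss://export@examplelake.dfs.core.windows.net/sftp_out/orders/"  # hypothetical

# Ingest raw CSV files staged by the ADF copy activity.
raw_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(landing_path)
)

# Example transformations: standardize column names, cast types,
# drop duplicates, and filter out rows missing a key.
cleansed_df = (
    raw_df
    .toDF(*[c.strip().lower().replace(" ", "_") for c in raw_df.columns])
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))  # hypothetical column
    .dropDuplicates()
    .filter(F.col("order_id").isNotNull())                            # hypothetical column
)

# Persist the cleansed data as a Delta table for downstream analytics/BI.
cleansed_df.write.format("delta").mode("overwrite").saveAsTable("curated.orders")

# Write a cleansed CSV extract; an ADF copy activity is assumed to pick this
# folder up and deliver it back to the SFTP destination.
(
    cleansed_df.coalesce(1)
    .write.mode("overwrite")
    .option("header", "true")
    .csv(export_path)
)
```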