Azure Data Pipeline - (Databricks - Azure Data Factory - Azure Data Lake - Azure Blob Storage - Python) - Earthquake Events and Risks Project

Ingested earthquake event data using Python within Databricks, pulling it from an API endpoint into the bronze layer of a medallion architecture, stored in an Azure Data Lake Storage container within a storage account. Databricks notebooks then transform the raw data from the bronze container into a silver container, and from silver into a gold container that can be consumed directly for business use. Finally, these notebooks were brought into Azure Data Factory, variables were extracted from the notebooks into Data Factory parameters, and the whole pipeline was orchestrated from Data Factory.
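
The bronze-to-silver flattening step could look roughly like the sketch below. It is a minimal, hypothetical example only: it assumes the raw payload follows a GeoJSON earthquake-feed schema (such as the USGS feed, with `features[].properties` and `features[].geometry`), and the function name `flatten_events` is illustrative, not taken from the project's notebooks.

```python
# Hypothetical bronze -> silver transform: flatten raw GeoJSON earthquake
# features into tabular rows. Assumes a USGS-style feed schema; in the real
# pipeline this logic would run in a Databricks notebook and write the
# result to the silver container in Azure Data Lake Storage.
from datetime import datetime, timezone


def flatten_events(raw: dict) -> list[dict]:
    """Flatten raw GeoJSON earthquake features into flat dictionaries."""
    rows = []
    for feature in raw.get("features", []):
        props = feature.get("properties", {})
        # GeoJSON coordinates are ordered [longitude, latitude, depth_km].
        coords = feature.get("geometry", {}).get("coordinates", [None, None, None])
        lon, lat, depth = (coords + [None, None, None])[:3]
        rows.append({
            "id": feature.get("id"),
            # Feed timestamps are milliseconds since the Unix epoch.
            "time_utc": datetime.fromtimestamp(
                props["time"] / 1000, tz=timezone.utc
            ).isoformat(),
            "magnitude": props.get("mag"),
            "place": props.get("place"),
            "longitude": lon,
            "latitude": lat,
            "depth_km": depth,
        })
    return rows


# Small hand-made sample in the assumed schema, for demonstration only.
sample = {
    "features": [{
        "id": "us7000abcd",
        "properties": {"mag": 5.1, "place": "10 km NE of Somewhere", "time": 1700000000000},
        "geometry": {"coordinates": [142.3, 38.1, 10.0]},
    }]
}

rows = flatten_events(sample)
print(rows[0]["magnitude"])  # 5.1
```

In the actual pipeline the equivalent logic would typically use PySpark DataFrames rather than plain Python lists, so that the transform scales and writes Delta tables to the silver container; the plain-Python version above just makes the row-shaping step explicit.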