
Human-Aligned Hazardous Driving (HAHD) Project

For more detailed research documentation, visit: HAHD Research Documentation

Overview

The Human-Aligned Hazardous Driving (HAHD) project is an initiative focused on collecting, processing, and analyzing driving behavior data to train machine learning models that align autonomous vehicle decision-making with human driving tendencies. The project integrates eye-tracking technology, driving simulations, and deep learning models to improve autonomous vehicle decision-making, and it consists of three main components:

  • Data collection: a survey web app (frontend/ and server/) that records participants' eye-gaze while they watch driving footage
  • Data processing: an ETL pipeline (ETL/) that extracts the raw survey and gaze data and produces the processed datasets described below
  • Analysis and modeling: exploratory data analysis (EDA/) and deep learning models for human-aligned hazard detection
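To give a concrete sense of the alignment signal involved, the sketch below aggregates normalized gaze coordinates into a per-video attention heatmap, similar in spirit to the aggregate_gaze_data_by_video.csv output. This is a minimal, hypothetical illustration; the function name and binning parameters are assumptions, not part of the project's codebase.

import numpy as np

# Hypothetical sketch: bin normalized (0-1) gaze coordinates into a 2D
# attention heatmap that a model's predicted hazard map could be compared to.
def gaze_heatmap(x, y, bins=32):
    heat, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    return heat / heat.sum()  # normalize to a probability distribution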


Folder Structure

HAHD/
├── data/
│   ├── processed/                    # Processed data after running the transform and load steps (ETL)
│   │   ├── driving_videos/
│   │   ├── badgazedata.csv
│   │   ├── normalized_gaze_data.csv
│   │   ├── final_user_survey_data.csv
│   │   ├── binned_video_dat_wo_user.csv
│   │   └── aggregate_gaze_data_by_video.csv
│   └── raw/
│       ├── driving_videos/           # Videos from the S3 bucket after running extraction (ETL)
│       ├── survey_results_raw.csv    # Data from MongoDB after running extraction (ETL)
│       └── users_data.csv            # Data from MongoDB after running extraction (ETL)
├── EDA/                              # Exploratory data analysis
├── ETL/                              # ETL process
├── frontend/                         # Frontend of the data collection (survey) web app
├── server/                           # Backend of the data collection (survey) web app
├── VideoProcessingManagement/        # Processing of driving footage before upload to the S3 bucket
├── .env
├── .gitignore
├── README.md
├── package.json
├── package-lock.json
├── requirements.txt
└── sumulationGazePipeline.py         # TBD
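The comments on data/raw/ above describe an extraction step that pulls videos from S3 and survey records from MongoDB. A minimal sketch of what that extraction might look like is shown below; the bucket name, object key, database and collection names, and connection string are assumptions for illustration, not the project's actual configuration.

import boto3
import pandas as pd
from pymongo import MongoClient

# Hypothetical extraction sketch -- bucket, key, and collection names are assumptions.
s3 = boto3.client("s3")
s3.download_file("hahd-driving-videos", "clip_001.mp4",
                 "data/raw/driving_videos/clip_001.mp4")

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
db = client["hahd"]                                # assumed database name

# Dump survey results and user records to the raw CSVs shown in the tree above.
pd.DataFrame(list(db["survey_results"].find())).to_csv(
    "data/raw/survey_results_raw.csv", index=False)
pd.DataFrame(list(db["users"].find())).to_csv(
    "data/raw/users_data.csv", index=False)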

Documentation Links


Getting Started

Step 1: Clone the Repository

git clone https://github.com/Onyx-AI-LLC/Human-Alignment-Hazardous-Driving-Detection.git
cd Human-Alignment-Hazardous-Driving-Detection

Step 2: Create and Activate a Virtual Environment

On macOS/Linux:

python3 -m venv venv
source venv/bin/activate

On Windows:

python -m venv venv
venv\Scripts\activate

Step 3: Install Required Dependencies

pip install -r requirements.txt

Step 4: Run the Main Script

python main.py
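Once the pipeline has run, the processed datasets land under data/processed/ and can be inspected directly. A quick, hypothetical check (the CSV schema is not documented here, so no column names are assumed):

import pandas as pd

# Load the processed gaze data produced by the ETL step.
gaze = pd.read_csv("data/processed/normalized_gaze_data.csv")
print(gaze.shape)
print(gaze.head())  # inspect the first few rows and the actual column names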

This research is made possible through a collaboration between Duke University and Onyx AI LLC.
