India faces significant challenges in disaster management, including:
- Delayed Response: Lack of real-time data to predict and respond to disasters.
- Inefficient Resource Allocation: Mismanagement of relief materials and response teams.
- Poor Communication: Ineffective coordination between authorities, responders, and citizens.
- Lack of Predictive Insights: Absence of AI-powered models to predict disasters and mitigate damage.
In large-scale disasters like floods, earthquakes, and cyclones, these inefficiencies result in increased casualties, prolonged suffering, and immense economic loss.
Sahyog is an AI-powered disaster response platform designed to enhance preparedness, efficiency, and coordination in disaster management. It leverages Google Cloud Technologies and AI models to predict disasters, optimize resource management, and provide real-time coordination.
- Disaster Prediction and Early Warnings using AI models deployed on Vertex AI.
- Incident Reporting and Monitoring through mobile and web apps using Gemini APIs for image, video, and text analysis.
- Resource Inventory Management with real-time tracking using RFID sensors and transparent logging through Hyperledger Fabric.
- Task Allocation and Response Management using AI algorithms to assign the nearest responders.
- Multi-channel Communication for real-time notifications using Twilio and Firebase Cloud Messaging.
- Post-Disaster Analysis to generate actionable insights for future disaster management.
**AI & Machine Learning**

Technology | Purpose |
---|---|
Vertex AI | Training and deploying AI models (LSTMs, CNNs, GANs) for disaster prediction. |
Gemini APIs | Multimodal analysis of images, videos, and text for damage assessment. |
OpenCV | Image processing and damage detection from satellite and drone imagery. |
**Backend & Data Processing**

Technology | Purpose |
---|---|
Node.js + Express | Building RESTful APIs for the backend. |
FastAPI | Serving AI models and handling AI-related requests. |
Apache Kafka | Real-time data streaming from satellites, sensors, and drones. |
Apache Spark | Large-scale data processing and post-disaster analysis. |
**Frontend**

Technology | Purpose |
---|---|
React.js | Building the web dashboard for real-time monitoring and visualization. |
Flutter | Developing the mobile app for citizen reporting and alerts. |
**Databases**

Technology | Purpose |
---|---|
PostgreSQL | Storing structured data like incident reports and resource details. |
MongoDB | Storing unstructured data like images, videos, and sensor logs. |
**Resource Tracking & Blockchain**

Technology | Purpose |
---|---|
RFID Sensors | Real-time tracking of relief resources like medical kits and food supplies. |
Hyperledger Fabric | Blockchain-based transparent logging of resource distribution. |
**Communication & Alerts**

Technology | Purpose |
---|---|
Twilio | Sending SMS and voice alerts to citizens and authorities. |
Firebase Cloud Messaging | Push notifications for real-time updates. |
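To make the alerting path concrete, here is a minimal SMS sketch using Twilio's Python client; the credentials, phone numbers, and message text are placeholders for illustration, not Sahyog's actual alert pipeline.

```python
import os
from twilio.rest import Client

# Credentials come from environment variables (placeholders here).
client = Client(os.environ["TWILIO_ACCOUNT_SID"],
                os.environ["TWILIO_AUTH_TOKEN"])

message = client.messages.create(
    body="[Sahyog] Flood warning for your district. Move to higher ground "
         "and follow instructions from local authorities.",
    from_="+15551234567",   # placeholder Twilio number
    to="+919876543210",     # placeholder recipient
)
print(message.sid)  # Twilio's ID for the queued message
```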
**Visualization & Reporting**

Technology | Purpose |
---|---|
Google Maps API | GIS-based visualizations for disaster-affected areas and resource allocation. |
Google Data Studio | Generating real-time reports and insights for stakeholders. |
**Cloud Infrastructure & DevOps**

Technology | Purpose |
---|---|
Google Cloud Storage (GCS) | Storing satellite images, sensor data, and AI models. |
Docker | Containerization of backend and AI services. |
Kubernetes | Orchestrating containerized applications for scalability. |
Terraform | Infrastructure as Code (IaC) for managing cloud resources. |
- Data Collection: Real-time data from satellites, sensors, and drones is streamed via Apache Kafka and stored in Google Cloud Storage (GCS).
- Disaster Prediction: Vertex AI trains and deploys AI models (LSTMs, CNNs, GANs) for disaster prediction using real-time data.
- Incident Reporting: Citizens report incidents via a Flutter app, and Gemini APIs analyze images, text, and voice inputs for severity assessment.
- Decision Making: AI models process predictions and reports to generate alerts using Vertex AI and Gemini APIs.
- Alert Generation: Multilingual alerts are sent via Twilio (SMS/calls) and Firebase Cloud Messaging (push notifications).
- Resource Management: Resources are tracked using RFID sensors, and Hyperledger Fabric ensures transparent distribution.
- Task Allocation: AI optimizes task assignments using Dijkstra's algorithm over travel times from the Google Maps API (a minimal sketch follows this list).
- Post-Disaster Analysis: Data is aggregated using Apache Spark, and Vertex AI generates insights for future preparedness.
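Below is a minimal sketch of the Dijkstra-based assignment step, assuming edge weights are travel times (in minutes) already fetched from the Google Maps API; the graph layout, node names, and single-incident setup are illustrative, not the production data model.

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel time from source to every reachable node.
    graph: {node: [(neighbor, travel_time), ...]} -- e.g. built from
    Google Maps Distance Matrix results (illustrative structure)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already relaxed
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical road graph: responders R1/R2, junction J1, incident site S1.
graph = {
    "R1": [("J1", 4.0)],
    "R2": [("J1", 9.0)],
    "J1": [("S1", 6.0)],
    "S1": [],
}

# Assign the responder with the shortest travel time to the incident.
times = {r: dijkstra(graph, r).get("S1", float("inf")) for r in ("R1", "R2")}
print(min(times, key=times.get))  # -> R1
```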
This AI service powers the core intelligence layer of the Sahyog disaster management system, providing:
- Real-time disaster prediction using satellite/drone/sensor data
- Multimodal damage assessment (images, text reports, sensor data)
- Optimal resource allocation during emergencies
- Automated reporting & alerting
Component | Technology | Purpose |
---|---|---|
Core Framework | Python 3.9, FastAPI | API development |
AI/ML | TensorFlow 2.12, Vertex AI | Model training/deployment |
Multimodal AI | Gemini API | Text/image analysis |
Cloud Services | GCP (Storage, BigQuery, PubSub) | Data pipeline |
Optimization | OR-Tools | Resource allocation |
Containerization | Docker, Gunicorn | Production deployment |
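For the Gemini-based text/image analysis, a minimal sketch using the `google-generativeai` Python client is shown below; the model name, file name, and prompt are assumptions for illustration, not the service's actual configuration.

```python
import os
import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Model name is an assumption; any Gemini multimodal model works here.
model = genai.GenerativeModel("gemini-1.5-flash")

image = Image.open("incident_photo.jpg")  # hypothetical citizen-report photo
response = model.generate_content([
    "Assess the disaster damage visible in this image. "
    "Reply with a severity level (low/medium/high) and a one-line summary.",
    image,
])
print(response.text)
```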
**Prerequisites**

- Google Cloud account with:
  - Vertex AI enabled
  - Service account with AI Platform Admin role
- Docker installed
- Configure environment variables:

```bash
# Required variables
export GCP_PROJECT_ID="your-project-id"
export GCP_REGION="us-central1"
export GEMINI_API_KEY="your-api-key"
export GCS_BUCKET="your-bucket-name"

# Optional overrides
export DAMAGE_MODEL_ENDPOINT="projects/.../locations/.../endpoints/..."
export RESOURCE_MODEL_ENDPOINT="projects/.../locations/.../endpoints/..."
```
- Build the Docker image:

```bash
docker build -t sahyog-ai-service \
  --build-arg GCP_PROJECT_ID=$GCP_PROJECT_ID \
  --build-arg GEMINI_API_KEY=$GEMINI_API_KEY \
  -f ai-service/Dockerfile .
```
- Run the container:

```bash
docker run -d \
  -p 8000:8000 \
  -e GCP_PROJECT_ID=$GCP_PROJECT_ID \
  -e GEMINI_API_KEY=$GEMINI_API_KEY \
  -e GCS_BUCKET=$GCS_BUCKET \
  --name sahyog-ai \
  sahyog-ai-service
```
- Verify the deployment:

```bash
curl http://localhost:8000/
# Expected: {"status":"healthy","services":{"vertex":true,"gemini":true,"gcp":true}}
```
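The health check above maps to a simple FastAPI route; a minimal sketch is below, assuming the service reports its dependency checks as booleans (the helper stubs are illustrative, not the actual Sahyog code).

```python
from fastapi import FastAPI

app = FastAPI(title="Sahyog AI Service")

def check_vertex() -> bool:
    # Illustrative stub: real code would ping the Vertex AI endpoint.
    return True

def check_gemini() -> bool:
    # Illustrative stub: real code would validate the Gemini API key.
    return True

def check_gcp() -> bool:
    # Illustrative stub: real code would verify GCS bucket access.
    return True

@app.get("/")
def health():
    """Matches the expected response of the curl check above."""
    return {
        "status": "healthy",
        "services": {
            "vertex": check_vertex(),
            "gemini": check_gemini(),
            "gcp": check_gcp(),
        },
    }
```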
Ensure the following environment variables are set:
Variable | Required | Description |
---|---|---|
GCP_PROJECT_ID | Yes | Your Google Cloud Platform project ID |
GEMINI_API_KEY | Yes | API key for Google Gemini services |
GCS_BUCKET | Yes | Default Google Cloud Storage bucket for disaster data |
GCP_REGION | No | GCP region (default: us-central1) |
DAMAGE_MODEL_ENDPOINT | No | Vertex AI endpoint for pre-trained damage assessment model |
RFID_API_ENDPOINT | No | URL for RFID inventory tracking system |
MAX_CONCURRENT_TASKS | No | Limits parallel processing (default: CPU cores * 2) |
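A minimal sketch of how these variables could be read and validated at startup; the variable names and defaults follow the table above, but the `Settings` helper itself is an illustration, not the service's actual config module.

```python
import os

class Settings:
    """Reads the environment variables documented in the table above."""

    def __init__(self):
        # Required -- fail fast if missing.
        self.gcp_project_id = self._require("GCP_PROJECT_ID")
        self.gemini_api_key = self._require("GEMINI_API_KEY")
        self.gcs_bucket = self._require("GCS_BUCKET")
        # Optional, with the documented defaults.
        self.gcp_region = os.environ.get("GCP_REGION", "us-central1")
        self.damage_model_endpoint = os.environ.get("DAMAGE_MODEL_ENDPOINT")
        self.rfid_api_endpoint = os.environ.get("RFID_API_ENDPOINT")
        self.max_concurrent_tasks = int(
            os.environ.get("MAX_CONCURRENT_TASKS", (os.cpu_count() or 1) * 2)
        )

    @staticmethod
    def _require(name: str) -> str:
        value = os.environ.get(name)
        if not value:
            raise RuntimeError(f"Missing required environment variable: {name}")
        return value

settings = Settings()
```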
Models and their architectures:
**Prediction models**

Model File | Architecture | Input Shape | Output | Pretrained Weights |
---|---|---|---|---|
lstm_model.py | Stacked LSTM | (24, 5) | Binary probability | No |
cnn_model.py | EfficientNetB4 | (256, 256, 3) | 5 disaster classes | ImageNet |
gan_model.py | DCGAN | (100,) latent dim | (256, 256, 3) synthetic images | No |
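A minimal Keras sketch matching the `lstm_model.py` row above (24 timesteps × 5 features in, binary probability out); the layer widths are illustrative assumptions, not the repository's exact architecture.

```python
import tensorflow as tf

def build_lstm_model() -> tf.keras.Model:
    """Stacked LSTM: input (24, 5) -> binary disaster probability."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(24, 5)),      # 24 timesteps, 5 features
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(32),                  # second stacked LSTM layer
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_lstm_model()
model.summary()
```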
**Damage assessment models**

Model File | Architecture | Input Shape | Output | Key Features |
---|---|---|---|---|
image_classifier.py | EfficientNetB4 | (512, 512, 3) | 4 damage levels | Transfer learning |
object_detector.py | YOLOv8x | (640, 640, 3) | BBox + 6 classes | COCO pretrained |
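A minimal transfer-learning sketch for the `image_classifier.py` row, assuming ImageNet weights with a frozen backbone and a new 4-class head; the head design is an illustration, not the repository's exact code.

```python
import tensorflow as tf

def build_damage_classifier() -> tf.keras.Model:
    """EfficientNetB4 backbone, (512, 512, 3) input, 4 damage levels."""
    backbone = tf.keras.applications.EfficientNetB4(
        include_top=False, weights="imagenet", input_shape=(512, 512, 3))
    backbone.trainable = False  # freeze pretrained weights (transfer learning)

    inputs = tf.keras.Input(shape=(512, 512, 3))
    x = backbone(inputs, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(4, activation="softmax")(x)  # 4 damage levels

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```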
**Resource optimization models**

Model File | Architecture | Input Features | Output | Optimization Method |
---|---|---|---|---|
predictive_model.py | Hybrid LSTM-RF | 7 temporal features | 5 resource demands | Adam + Gini impurity |
allocation_model.py | OR-Tools MIP | Demand constraints | Allocation plan | Linear Programming |
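A minimal OR-Tools sketch of the allocation step in `allocation_model.py`: ship supplies from depots to affected zones at minimum transport cost while meeting demand. The depots, demands, and costs are made-up illustrative data, not real figures.

```python
from ortools.linear_solver import pywraplp

# Illustrative data: units available per depot, units needed per zone,
# and per-unit transport cost on each (depot, zone) lane.
supply = {"depot_a": 120, "depot_b": 80}
demand = {"zone_1": 90, "zone_2": 70}
cost = {("depot_a", "zone_1"): 4, ("depot_a", "zone_2"): 6,
        ("depot_b", "zone_1"): 5, ("depot_b", "zone_2"): 3}

solver = pywraplp.Solver.CreateSolver("SCIP")  # MIP backend

# Decision variables: integer units shipped on each lane.
x = {k: solver.IntVar(0, solver.infinity(), f"x_{k[0]}_{k[1]}") for k in cost}

# Each depot ships no more than it has.
for d, cap in supply.items():
    solver.Add(sum(x[d, z] for z in demand) <= cap)

# Each zone's demand must be fully met.
for z, need in demand.items():
    solver.Add(sum(x[d, z] for d in supply) >= need)

solver.Minimize(sum(cost[k] * x[k] for k in cost))

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    for k, var in x.items():
        if var.solution_value() > 0:
            print(f"{k[0]} -> {k[1]}: {var.solution_value():.0f} units")
```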
Endpoint | Avg Latency | Throughput (req/s) |
---|---|---|
Disaster Prediction | 320ms | 45 |
Damage Assessment | 480ms | 32 |
Resource Optimization | 210ms | 68 |
Tested on an n1-standard-4 VM with 100 concurrent requests. Note that these figures are averages from that run and will vary with hardware and load.
To set up the project locally, follow these steps:

- Clone the repository:

```bash
git clone https://github.com/CodeCanvas/Sahyog.git
cd Sahyog
```

- Install dependencies for each service (from the repository root):

```bash
cd backend/api && npm install && cd ../..
cd backend/ai-service && pip install -r requirements.txt && cd ../..
cd frontend/web && npm install && cd ../..
```

- Copy .env.example to .env and fill in the required values:

```bash
cp .env.example .env
```

- Start each service in its own terminal (paths relative to the repository root):

```bash
# API server (terminal 1)
cd backend/api && npm start

# AI service (terminal 2)
cd backend/ai-service && uvicorn main:app --reload

# Web frontend (terminal 3)
cd frontend/web && npm start
```
Alternatively, run the AI service with Docker. Build the image:

```bash
docker build -t sahyog-ai-service -f ai-service/Dockerfile .
```

Then run the container:

```bash
docker run -p 8000:8000 -e GCP_PROJECT_ID=your-project sahyog-ai-service
```
```
Sahyog/
├── backend/
│   ├── api/            # Main API server (Node.js + Express)
│   ├── ai-service/     # AI model service (Python + FastAPI)
│   ├── data-pipeline/  # Data processing pipeline
│   └── blockchain/     # Hyperledger Fabric setup
│
├── frontend/
│   ├── web/            # React.js web application
│   └── mobile/         # Flutter mobile application
│
├── infrastructure/
│   ├── terraform/      # Infrastructure as Code
│   ├── docker/         # Docker configurations
│   └── kubernetes/     # Kubernetes configs
│
├── docs/               # Documentation
├── scripts/            # Utility scripts
├── .env.example        # Example environment variables
├── .gitignore          # Git ignore file
└── README.md           # Project overview
```
Contributions to Sahyog are always welcome! Please follow these steps:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature/YourFeatureName`).
3. Commit your changes (`git commit -m 'Add some feature'`).
4. Push to the branch (`git push origin feature/YourFeatureName`).
5. Open a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.