Update README.md
ojh6404 authored Aug 3, 2024
1 parent fc6d9fd commit 9f306f6
22 changes: 21 additions & 1 deletion README.md
@@ -11,9 +11,12 @@ sam_node publishes segmentation prompt which is used by cutie_node to track object
### Detecting and tracking objects using SAM, GroundingDINO and DEVA.
![Alt text](asset/deva_example.gif)

deva_node queries GroundingDINO and SAM for objects at regular intervals, so it can pick up new objects after tracking has started. It runs at ~15 Hz; adjust `cfg['detection_every']` to balance detection frequency against performance.
See [`node_scripts/model_config.py`](node_scripts/model_config.py)
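As a rough illustration of the `detection_every` trade-off (a hedged sketch; the helper below is hypothetical and not part of this repo):

```python
# Hypothetical helper, not part of this repo: deva_node only re-queries
# GroundingDINO and SAM every `detection_every` frames, so a newly
# appeared object can go undetected for at most that many frames.

def worst_case_detection_delay(detection_every: int, fps: float) -> float:
    """Seconds a new object may wait before the next detection pass."""
    return detection_every / fps

# e.g. re-detecting every 5 frames at the node's ~15 Hz rate:
worst_case_detection_delay(5, 15.0)  # ~0.33 s
```

Larger values of `detection_every` reduce per-frame cost but lengthen this worst-case delay.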

### Part detection and segmentation using VLPart.
![Alt text](asset/vlpart.gif)

## Setup

### Prerequisite
@@ -121,6 +124,23 @@ and use dynamic reconfigure to set detection and object tracking by
```bash
rosrun dynamic_reconfigure dynparam set /deva_node classes "cloth; cup; bottle;"
```
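The same update can also be made from Python. A minimal sketch (the helper below is hypothetical and not part of this repo; the commented client call assumes a sourced ROS 1 environment with a running master):

```python
# Hypothetical helper, not part of this repo: build the semicolon-separated
# string that the deva_node `classes` parameter expects.

def format_classes(classes):
    """['cloth', 'cup', 'bottle'] -> 'cloth; cup; bottle;'"""
    return " ".join(f"{c};" for c in classes)

# With a running ROS master, the same reconfigure could be applied via
# the dynamic_reconfigure Python client (requires a ROS 1 setup):
#   import dynamic_reconfigure.client
#   dynamic_reconfigure.client.Client("/deva_node").update_configuration(
#       {"classes": format_classes(["cloth", "cup", "bottle"])})
```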

### Detection and segmentation using VLPart.
```bash
roslaunch deep_vision_ros vlpart_segment.launch \
input_image:=/kinect_head/rgb/image_rect_color \
vocabulary:=custom \
classes:="cup handle; bottle cap;" \
device:=cuda:0
```
or
```bash
./run_docker -cache -host pr1040 -launch vlpart_segment.launch \
input_image:=/kinect_head/rgb/image_rect_color \
vocabulary:=custom \
classes:="cup handle; bottle cap;" \
device:=cuda:0
```
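The `classes` argument packs several queries into one semicolon-separated string. A hedged sketch of how such a string splits into individual entries (the helper is hypothetical, not part of the repo; the assumption is simply that each semicolon-separated entry is one query like `cup handle`):

```python
# Hypothetical helper, not part of this repo: split a custom vocabulary
# string like "cup handle; bottle cap;" into its individual entries.

def parse_vocabulary(spec):
    return [entry.strip() for entry in spec.split(";") if entry.strip()]

parse_vocabulary("cup handle; bottle cap;")  # -> ["cup handle", "bottle cap"]
```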

### TODO
- add rostest and docker build test
- add [CoTracker](https://github.com/facebookresearch/co-tracker.git) and [Track Any Point](https://github.com/google-deepmind/tapnet.git).
