Towards Better Person Re-Identification through Interactive Visual Exploration and Incremental User Feedback
This repository contains the source code used to explore and analyze the search space in person re-identification (Re-ID).
We propose ReIDVis, a novel data exploration and visualization prototype for person Re-ID.
ReIDVis integrates a user-feedback mechanism that combines the person Re-ID model with human insight, and a composite visualization that supports efficient visual browsing, retrieval, and exploration of candidate targets. To help identify the person-of-interest, we propose an extended semi-supervised learning method, introducing a k-fusion post-rank algorithm to support incremental user feedback. In the visualization component, we develop a novel cluster-based visualization with an optimized layout that reduces visual occlusion and preserves the user's mental map. We also propose a multi-scale, pixel-based view to guide user exploration in the search space.
The interface of the system. (A) The probe panel, where users select the person-of-interest (partially covered by an umbrella) as the probe and set the visual parameters of the search space. (B) The search space view, which supports exploration and collecting feedback on the retrieval results. (C) The spatiotemporal information view, which summarizes the spatiotemporal information of the retrieval results after three iterations. (D) The ranking list with the pixel-based visual encoding, which allows users to quickly troubleshoot hard negative samples. (G) The ranking list with the raw images. (E) The exploration of the "path" node, where a front-view photo of the probe is found. (F) A node with the pixel-based visual encoding.
The demonstration video of ReIDVis is available at: https://youtu.be/8FWy6Yr4cos
- **LapSVM.py** (semi-supervised re-ranking from user feedback; see the sketch below)
  - Input data: human-labeled data, where `l_index` gives the positions of the labeled samples in the ranking list and `Yl` gives their label values, e.g. `label = {'l_index': [0, 2, 1, 3, 4], 'Yl': [1, 1, -1, -1, -1]}`.
  - Output data: a new ranking list generated via LapSVM.py.
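A minimal sketch of how such feedback-driven re-ranking can look, using scikit-learn's `LabelSpreading` as a graph-based stand-in for the repository's LapSVM implementation; the feature matrix, the function name `rerank_with_feedback`, and the 0/1 label encoding are illustrative assumptions, not the actual interface of LapSVM.py:

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def rerank_with_feedback(gallery_features, label, base_ranking):
    """Re-rank the gallery samples of the current ranking list from user feedback.

    gallery_features : (N, D) array of Re-ID features.
    label            : dict like {'l_index': [...], 'Yl': [...]}, where 'l_index'
                       indexes into base_ranking and 'Yl' holds +1 / -1 feedback.
    base_ranking     : list of gallery indices ordered by the current model score.
    """
    n = len(base_ranking)
    y = np.full(n, -1, dtype=int)              # -1 marks "unlabeled" for LabelSpreading
    for pos, val in zip(label['l_index'], label['Yl']):
        y[pos] = 1 if val == 1 else 0          # map user feedback to {1: positive, 0: negative}

    X = gallery_features[base_ranking]         # features in ranking-list order
    model = LabelSpreading(kernel='knn', n_neighbors=5)
    model.fit(X, y)

    # the propagated probability of being the person-of-interest drives the new ranking
    pos_score = model.label_distributions_[:, list(model.classes_).index(1)]
    new_order = np.argsort(-pos_score)
    return [base_ranking[i] for i in new_order]

# usage with random stand-in features
if __name__ == '__main__':
    feats = np.random.rand(100, 128)
    ranking = list(range(20))
    label = {'l_index': [0, 2, 1, 3, 4], 'Yl': [1, 1, -1, -1, -1]}
    print(rerank_with_feedback(feats, label, ranking))
```

Samples the user marked as positive, and points near them in feature space, move up the list; hard negatives move down.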
- **k-fusion.py** (see the sketch below)
  - Input data: the branch ranking lists and a parameter k.
  - Output data: a new ranking list generated by k-fusion.py.
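The exact fusion rule of k-fusion.py is not spelled out here; as an illustration, the sketch below fuses the branch ranking lists with reciprocal-rank scoring restricted to each branch's top-k entries (the function name and scoring scheme are assumptions):

```python
from collections import defaultdict

def k_fusion(branch_rankings, k):
    """Fuse several branch ranking lists into one.

    branch_rankings : list of ranking lists, each an ordered list of gallery IDs.
    k               : only the top-k entries of each branch contribute to the fusion.

    Each candidate is scored by the sum of reciprocal ranks it receives in the
    top-k of every branch; the fused list is sorted by that score (higher is better).
    """
    scores = defaultdict(float)
    for ranking in branch_rankings:
        for rank, candidate in enumerate(ranking[:k], start=1):
            scores[candidate] += 1.0 / rank
    return sorted(scores, key=scores.get, reverse=True)

# usage: three branch lists fused with k = 3
branches = [[5, 2, 9, 1], [2, 5, 7, 9], [9, 2, 5, 3]]
print(k_fusion(branches, k=3))   # -> [2, 5, 9, 7]
```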
- **ComputingWeight.py**: ensures the stability of the incremental layout (see the sketch below).
  - Input data: the coordinates of the samples at the previous time step t-1 and the features of the samples at the current time step t.
  - Output data: the coordinates of the samples at the current time step t, generated by ComputingWeight.py.
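A minimal sketch of one way to keep the layout stable across time steps: project the current features to 2-D (PCA is used here as a simple stand-in projection) and blend each sample's new position with its previous position using a stability weight. The actual weighting scheme in ComputingWeight.py may differ; the function name and the fixed blend weight are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def incremental_layout(prev_coords, curr_features, weight=0.6):
    """Compute stable 2-D coordinates for the current time step.

    prev_coords   : (N, 2) coordinates of the samples at time step t-1.
    curr_features : (N, D) feature vectors of the samples at time step t.
    weight        : how strongly each sample is pulled toward its previous
                    position (1.0 keeps the old layout, 0.0 ignores it).
    """
    prev = np.asarray(prev_coords, dtype=float)

    # project the current features to 2-D
    raw = PCA(n_components=2).fit_transform(curr_features)

    # rescale the fresh projection to the extent of the previous layout so the blend is meaningful
    raw = (raw - raw.mean(axis=0)) / (raw.std(axis=0) + 1e-9)
    raw = raw * prev.std(axis=0) + prev.mean(axis=0)

    # blend: a high weight keeps samples near their previous positions, preserving the mental map
    return weight * prev + (1.0 - weight) * raw

# usage with random stand-in data
prev = np.random.rand(50, 2)
feats = np.random.rand(50, 128)
print(incremental_layout(prev, feats).shape)   # (50, 2)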
- **occlussion.py**: removes the occlusion (overlap) between nodes (see the sketch below).
  - Input data: the coordinate, width, and height of each node.
  - Output data: the de-overlapped coordinate of each node, generated by occlussion.py.
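As an illustration of the overlap-removal step, the sketch below iteratively pushes overlapping axis-aligned rectangular nodes apart along the axis with the smaller overlap. The function name and iteration scheme are assumptions and not necessarily the algorithm used in occlussion.py.

```python
import numpy as np

def remove_overlap(coords, widths, heights, iterations=50):
    """Push overlapping rectangular nodes apart until they no longer intersect.

    coords  : (N, 2) centre coordinates of the nodes.
    widths  : (N,) node widths.
    heights : (N,) node heights.
    Returns the adjusted (N, 2) coordinates.
    """
    pos = np.asarray(coords, dtype=float).copy()
    w = np.asarray(widths, dtype=float)
    h = np.asarray(heights, dtype=float)
    n = len(pos)

    for _ in range(iterations):
        moved = False
        for i in range(n):
            for j in range(i + 1, n):
                dx, dy = pos[j] - pos[i]
                # overlap of the two axis-aligned rectangles along each axis
                ox = (w[i] + w[j]) / 2 - abs(dx)
                oy = (h[i] + h[j]) / 2 - abs(dy)
                if ox > 0 and oy > 0:
                    moved = True
                    # push along the axis with the smaller overlap, splitting the shift
                    if ox < oy:
                        shift = ox / 2 * (1 if dx >= 0 else -1)
                        pos[i, 0] -= shift
                        pos[j, 0] += shift
                    else:
                        shift = oy / 2 * (1 if dy >= 0 else -1)
                        pos[i, 1] -= shift
                        pos[j, 1] += shift
        if not moved:
            break
    return pos

# usage: two nodes that overlap are pushed apart
print(remove_overlap([[0, 0], [5, 0]], widths=[10, 10], heights=[4, 4]))
```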
# cd to your preferred directory and clone this repo
git clone https://github.com/xiawang157/Video_Object_FeatureVis.git
# install dependencies
cd Video_Object_FeatureVis/
pip install -r requirements.txt
(1) Install and configure the development environment according to requirements.txt.
(2) Run views.py to start the front-end web service.