Very nice work! However, I have a question:
How can I reproject the estimated 3D pose back into the Camera Coordinate System, considering that the training data use a quantity obtained by minimization, while at inference time it seems hard to obtain that quantity even if I have the R/T/f/c, etc.?
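For concreteness, here is a minimal sketch of the transforms I have in mind (the function names are mine, and I am assuming the usual H3.6M convention where R, T are the camera extrinsics and f, c the per-axis focal length and principal point):

```python
import numpy as np

# Standard pinhole relations (my own sketch, not code from this repo).

def world_to_camera(joints_world, R, T):
    """Map world-space joints [17, 3] to camera space: X_cam = R @ (X_world - T)."""
    return (joints_world - T) @ R.T

def camera_to_image(joints_cam, f, c):
    """Perspective projection: pixel = f * (X/Z, Y/Z) + c."""
    xy = joints_cam[:, :2] / joints_cam[:, 2:3]  # divide by depth Z
    return xy * f + c                            # pixel coordinates [17, 2]
```

Going the other way (from image space back to camera space) seems to require the absolute root depth, which is the part I don't know how to obtain at inference time.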
I've noticed that in `lib/data/datareader_h36m.py` the following snippet:

```python
train_labels = self.dt_dataset['train']['joint3d_image'][::self.sample_stride, :, :3].astype(np.float32)  # [N, 17, 3]
```

suggests that MB didn't use that quantity for training. Instead, it appears to rely on the 3D pose in the Camera Coordinate System. Is that correct?
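If that reading is right, then going back to camera coordinates from the network output would need something like the back-projection below. This is only a sketch under my own assumptions: that `joint3d_image` stores pixel x/y plus a root-relative depth in consistent (possibly rescaled) units, and that the absolute root depth and the intrinsics f/c are known; the helper itself is hypothetical, not part of the repo.

```python
import numpy as np

# Hypothetical back-projection: image-space pose -> camera-space pose.
def image_to_camera(pose_img, root_depth, f, c):
    """pose_img: [17, 3] as (u, v, z_rel); root_depth: absolute depth of the
    root joint; f, c: per-axis focal length and principal point."""
    z = pose_img[:, 2] + root_depth            # absolute depth of every joint
    x = (pose_img[:, 0] - c[0]) / f[0] * z     # back-project u -> X
    y = (pose_img[:, 1] - c[1]) / f[1] * z     # back-project v -> Y
    return np.stack([x, y, z], axis=-1)        # camera-space joints [17, 3]
```

The missing piece is exactly the absolute root depth (plus any scale factor applied when the dataset was built), which R/T/f/c alone don't give you.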