Using Kinect point cloud as input #12
So I just went through your paper again and noticed this line:
I think, according to this, it makes sense that the inputs in the training data should include incompleteness, so the model learns to reconstruct such examples. However, for the weights and sample you have uploaded, the input was already complete. So, in case my assumption is right, I was wondering if you could upload the weights of the model that was trained on incomplete input data?
Hey, regarding the SMPL fitting issue: their registration code works really well if you initialize it with an SMPL mesh in a roughly aligned pose. I did this by doing something similar to how they say they generate data in their paper, i.e. by doing the following:
This I give as initialization to their SMPL registration code, and it works really well. I presume you may face some issues generating keypoints for your scan, but it can be worth a shot.
@kfarivar @ashwath98 @bharat-b7 What's wrong with it? How can I get a reasonable SMPL/SMPL+D fitting output?
AFAIK the code shared here tries to fit SMPL -> SMPL+D from scratch using a scan-to-mesh distance and other terms. In my experiments I got much better results with a good SMPL initialization (which I obtained by estimating 3D joints with OpenPose in multiple views, then optimizing SMPL on those joints). This repo has code for the joint optimization, but you will have to get the 3D joints yourself (by the method I described, or any other method like VNect). Also, your scan seems to be really good, so I think fitting SMPL -> SMPL+D directly should give better results (in my opinion) than running it on the IPNet surface.
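A minimal sketch of the kind of joint-based SMPL initialization described above (not this repo's code; it assumes the `smplx` package and a hypothetical `joints3d.npy` holding triangulated 3D OpenPose joints already mapped to SMPL joint order):

```python
import numpy as np
import torch
import smplx

# Hypothetical input: (24, 3) target joints in SMPL joint order,
# e.g. triangulated from multi-view OpenPose detections.
target = torch.tensor(np.load('joints3d.npy'), dtype=torch.float32).unsqueeze(0)

# 'models' must point at a folder containing the SMPL model files.
model = smplx.create('models', model_type='smpl', gender='neutral')

body_pose = torch.zeros(1, 69, requires_grad=True)      # 23 joints x 3 (axis-angle)
global_orient = torch.zeros(1, 3, requires_grad=True)
transl = torch.zeros(1, 3, requires_grad=True)

optim = torch.optim.Adam([body_pose, global_orient, transl], lr=0.02)
for _ in range(500):
    optim.zero_grad()
    out = model(body_pose=body_pose, global_orient=global_orient, transl=transl)
    # The first 24 joints returned by smplx's SMPL are the kinematic skeleton;
    # mapping OpenPose keypoints onto them is the fiddly part in practice.
    loss = ((out.joints[:, :24] - target) ** 2).sum()
    loss.backward()
    optim.step()

# The posed mesh (out.vertices) can then seed the SMPL+D registration.
```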
@ashwath98 But what puzzles me most now is: am I doing something wrong with my own 3D scan input?
Hi,
I gave the non-parametric part of IPNet a point cloud captured by an Azure Kinect (running `test_IPNet.py`), but I feel the results are not very promising. The results shown are for the body file, and I wanted to check with you whether I'm doing something wrong. (I know I have to fit the SMPL model to the non-parametric reconstruction to get the final result, but I feel that if the SMPL fitting code used these results as input, it wouldn't work very well.) In all the following I use `-batch_points 100000`, since otherwise I get a GPU out-of-memory error. This is the point cloud file I used, as a CSV (just three columns x, y, z; the first row is the column names):
frame_20.zip
I added these lines under the main function in `test_IPNet.py` to read the point cloud, so that instead of sending `pc.vertices` to `pc2vox` I send `kinect_pc`, a numpy array. The point cloud I used as input is around 36k points.
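The reading code is roughly as follows (a minimal sketch; the exact filename inside frame_20.zip and the centring step are assumptions):

```python
import numpy as np

# Load the Kinect capture: a CSV with a header row and x, y, z columns.
# 'frame_20.csv' is a placeholder name for the file inside frame_20.zip.
kinect_pc = np.genfromtxt('frame_20.csv', delimiter=',', skip_header=1)  # (N, 3)

# Assumption: centre the cloud at the origin so it sits inside the
# voxelization volume; whether further scaling is needed depends on how
# pc2vox bounds its grid.
kinect_pc -= kinect_pc.mean(axis=0)
```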
My first attempt (I reversed the direction of the Y axis, since I thought that might help the network, as your sample was also oriented like that):
python kiya_test_IPNet.py assets/scan.obj experiments/IPNet_p5000_01_exp_id01/checkpoints/checkpoint_epoch_249.tar kiya_out_dir -m IPNet -batch_points 100000
Then I tried reversing the direction of the Z axis as well, and I got:
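For reference, the two flips correspond to this on the loaded array (continuing the sketch above):

```python
kinect_pc[:, 1] *= -1  # first attempt: flip the Y axis
kinect_pc[:, 2] *= -1  # second attempt: additionally flip the Z axis
```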
Finally, I also tried increasing the resolution of the voxel grid given to the network, but it actually made the result worse:
python kiya_test_IPNet.py assets/scan.obj experiments/IPNet_p5000_01_exp_id01/checkpoints/checkpoint_epoch_249.tar kiya_out_dir -m IPNet -batch_points 100000 -res 200