Hello @bharat-b7, the work you've done here is fascinating - thank you!
I'm working on generating my own dataset: I run PGN semantic segmentation on my images and want to convert its output into the input data that MGN expects.
Here's the semantic segmentation that I was able to generate:
Your README says some additional "manual" process was needed, and I was wondering how you've processed the outputs of the PGN semantic segmentation to generate your segmentation, like this:
In summary, I believe these are the inputs that are needed, but I don't know how to generate them:
multi_tex.jpg
scan_tex.jpg
registered_tex.jpg
Pants.obj
Shirt.obj
Segmentation.png
scan.obj
scan_labels.npy
smpl_registered.obj
registration.pkl
Any further guidance would be greatly appreciated. Thank you!
Digging the repo a little bit further, I found these issues that might be related:
Consultation on test_data.pkl and assets folder #8 (comment)
This is as far as I've gotten (like the commenter, I have these three files: *.mat, *.png, and *_vis.png). How can these be converted into the `image_x` entries inside test_data.pkl?
Please see the data processing steps in the README. Once you get the segmentation labels from PGN, you just need to change the colours as follows: Pants (65, 0, 65), Short-Pants (0, 65, 65), Shirt (145, 65, 0), T-Shirt (145, 0, 65), and Coat (0, 145, 65); skin and hair are set to white. The colour choice was arbitrary when training MGN, nothing technical about it.
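The recolouring step above can be sketched in a few lines of NumPy. Note this is a minimal sketch, not the authors' script: the PGN label indices in the dictionary are placeholders, so you must substitute the indices your PGN checkpoint actually emits for each garment class; the RGB tuples themselves are the ones quoted above.

```python
import numpy as np

def recolour_segmentation(seg, label_to_colour):
    """Map a (H, W) integer label map to MGN's RGB garment colours.

    Pixels whose label is not in the mapping (skin, hair, background)
    default to white, as described above.
    """
    out = np.full(seg.shape + (3,), 255, dtype=np.uint8)
    for label, colour in label_to_colour.items():
        out[seg == label] = colour
    return out

# Placeholder PGN label indices -> MGN garment colours.
MGN_COLOURS = {
    1: (65, 0, 65),    # Pants
    2: (0, 65, 65),    # Short-Pants
    3: (145, 65, 0),   # Shirt
    4: (145, 0, 65),   # T-Shirt
    5: (0, 145, 65),   # Coat
}

# Tiny synthetic label map standing in for a PGN output image.
labels = np.array([[1, 0], [5, 3]], dtype=np.uint8)
rgb = recolour_segmentation(labels, MGN_COLOURS)
```

To apply this to a real PGN output, load the label PNG with Pillow (`np.array(Image.open(...))`) in place of the synthetic array, and save the result the same way.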
Does each tuple represent an RGB value? I'm currently unable to run OpenPose due to some compilation issues, but once I do, I imagine I'll get a (2, 25, 3)-shaped output that tells me which of the 25 body parts each pixel falls under.
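One clarification on that shape: OpenPose's BODY_25 model outputs 25 *keypoints* per person, not per-pixel labels, so a (2, 25, 3) array means 2 people × 25 joints × (x, y, confidence). When run with `--write_json`, OpenPose stores each person's keypoints as a flat `pose_keypoints_2d` list of 75 numbers. A minimal loader (the file path is a placeholder):

```python
import json
import numpy as np

def load_body25(path):
    """Load OpenPose --write_json output into a (num_people, 25, 3) array.

    Each person's "pose_keypoints_2d" is a flat list of 25 * 3 floats,
    ordered (x, y, confidence) per joint.
    """
    with open(path) as f:
        data = json.load(f)
    people = [
        np.array(p["pose_keypoints_2d"], dtype=np.float32).reshape(25, 3)
        for p in data["people"]
    ]
    return np.stack(people) if people else np.zeros((0, 25, 3), np.float32)
```

Joints with confidence 0 were not detected; filter on the third column before using the (x, y) coordinates.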