Collision detection failure #35
Comments
Hi, glad you like it! How many instances are you running in parallel? I haven't seen the behavior where collisions stop working. Perhaps it's a resource issue?
Hello, thanks for the reply. The training would have crashed if it were a resource issue. Best,
No problem! Ok, thanks for letting me know. I have run jobs in the 5-10 million step range, so well beyond 500k. I'm currently rerunning a job with some collision logging; I'll let you know if I observe the behavior you've described.
Well... then it has to be something I am doing wrong.
Thanks a lot for letting me know.
Best,
Just out of curiosity... approximately how many time-steps did you need to train your agent with the demo script (tesse-gym/baselines/goseek-ppo.ipynb)? Would 1M time-steps be sufficient? I am trying to train it (not precisely the GoSeek problem, though) with another ROS node connected. As a result, my system is slower than a pure-Unity system, which worries me a lot. Lastly, please let me know how the collision logging trial went. Best,
We ran for 3M steps over 4 environments. But, if I remember correctly, 1M should be sufficient to reach near convergence. I did run the collision logging trial and did not see any anomalies. Though here's one possibility: does the issue occur when you simultaneously read metadata from ROS? The collision flag is cleared after metadata is sent to a client so that individual collision instances can be detected. If you're reading metadata from ROS and then trying to access it again in a Gym environment, the collision flag would always be false. If this is the case, you can find the relevant logic in the code that clears the collision flag when metadata is sent.
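
To illustrate the read-and-clear behavior described above, here is a minimal, self-contained Python sketch. The class and method names are hypothetical stand-ins, not the actual tesse-gym or simulator API: after one collision, two readers request metadata and only the first ever sees the flag.

```python
# Illustrative sketch only: hypothetical names, not the actual tesse-gym API.
# It mimics "read-and-clear" metadata semantics: the simulator reports a
# collision once, then clears the flag when metadata is sent to a client.

class FakeSimulator:
    """Stand-in for the simulator's metadata endpoint (hypothetical)."""

    def __init__(self):
        self._collided = False

    def register_collision(self):
        self._collided = True

    def request_metadata(self):
        # The collision flag is cleared after it is sent, so only the
        # first reader after a collision ever sees True.
        collided, self._collided = self._collided, False
        return {"collision": collided}


sim = FakeSimulator()
sim.register_collision()

ros_view = sim.request_metadata()   # e.g. a ROS node polling metadata
gym_view = sim.request_metadata()   # the Gym environment's later read

print(ros_view["collision"])  # True  -> the ROS reader consumed the flag
print(gym_view["collision"])  # False -> the Gym env never observes it
```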
Hello,
Thanks for the nice simulator.
I am trying to train an agent with the "restart on collision" flag enabled.
That is, an episode terminates and restarts whenever a collision occurs.
It works fine for quite a long time. However, after roughly 6-12 hours of training, the agent stops registering collisions. This can happen earlier or later, but once this behavior starts, the agent never recovers its ability to detect collisions.
Could you please tell me if I am missing anything here?
Best,
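
For reference, here is a minimal, dependency-free sketch of the "restart on collision" setup described in this issue: the episode ends as soon as the environment reports a collision, and training then starts a fresh episode. The toy environment, the `info["collision"]` key, and the `reset`/`step` signatures are illustrative assumptions, not the actual tesse-gym interface.

```python
# Minimal, self-contained sketch of a "terminate on collision" episode loop.
# All names here are illustrative assumptions, not the tesse-gym API.

import random


class ToyEnv:
    """Stand-in environment that randomly reports collisions via info."""

    def reset(self):
        self.t = 0
        return 0.0  # dummy observation

    def step(self, action):
        self.t += 1
        collided = random.random() < 0.05        # pretend collision signal
        obs, reward = float(self.t), 0.1
        done = collided or self.t >= 100         # end episode on collision
        info = {"collision": collided}
        return obs, reward, done, info


env = ToyEnv()
for episode in range(3):
    obs, done, steps = env.reset(), False, 0
    while not done:
        obs, reward, done, info = env.step(action=None)
        steps += 1
    reason = "collision" if info["collision"] else "time limit"
    print(f"episode {episode}: {steps} steps, ended by {reason}")
```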