Clean up events more rigorously #163
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master     #163      +/-   ##
==========================================
+ Coverage   83.37%   90.35%   +6.98%
==========================================
  Files          20       19       -1
  Lines        1287     1192      -95
==========================================
+ Hits         1073     1077       +4
+ Misses        214      115      -99
```
But why not just drain events once at disconnect?
Because it's best to mimic the renet interface as much as possible. Messages sent to renet while connecting will be dropped. It might seem like a good idea to be lenient and allow some message buffering, but in my experience that just makes reconnects ambiguous and way more difficult to handle properly (I ran into this with bevy_simplenet). It's best to make reconnects as clean and precise as possible, which means pessimistically dropping messages.
Delay does not matter for dropping events. For precision, we drop everything until the moment renet connects.
In Replicon we reset everything right after disconnect. I would prefer to keep it consistent. I don't think that #166 will affect robustness in any way.
We only reset internal caches of the client that won't be used until the next replication message. Event resetting is an entirely different subject. The only reason not to reset every tick is because you want to allow message buffering while connecting. In my opinion and experience that is not justified and makes the API harder to reason about.
Resetting events every tick is just weird. Why do that?
Resetting events is just a means to an end: we are discarding messages sent to a client that isn't connected. The semantics of a disconnected client need to be very precise: messages will fail to send in the span of time between a disconnect (which happens implicitly, since we don't know precisely when messages start to fail) and a connect event. This ensures connect events cleanly mark the start of a new session.
I understand your point, but I'm not sure it justifies running a system for each event to clean up the event queue. It's not a cost-free check.
This is a great example of my point. If we need to say "make an effort to do things right", then the API is suboptimal. The API should be right automatically.
We only run it when the client is not connected, which should definitely not be the hot path. If you are really concerned then we can
Right, I actually forgot about it. Then performance-wise it should be fine. |
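The drop-while-disconnected semantics discussed above can be sketched in plain Rust. `EventQueue` here is a hypothetical stand-in type, not the actual bevy_replicon API: sends are silently discarded until a connect is observed, and a reset runs every tick while disconnected, so a connect event always marks the start of a clean session.

```rust
/// Hypothetical stand-in for an event queue; not the bevy_replicon API.
struct EventQueue<T> {
    connected: bool,
    events: Vec<T>,
}

impl<T> EventQueue<T> {
    fn new() -> Self {
        Self { connected: false, events: Vec::new() }
    }

    /// Messages sent while not connected are pessimistically dropped.
    fn send(&mut self, event: T) {
        if self.connected {
            self.events.push(event);
        }
    }

    /// Runs every tick while disconnected, so the tick where the client
    /// connects always starts from an empty queue.
    fn reset_system(&mut self) {
        if !self.connected {
            self.events.clear();
        }
    }
}

fn main() {
    let mut queue = EventQueue::new();
    queue.send("sent while connecting"); // discarded: not connected yet
    queue.reset_system();
    queue.connected = true; // connect event observed
    queue.send("first event of the session"); // kept
    assert_eq!(queue.events, ["first event of the session"]);
}
```

Note that the reset only does work on the non-connected path, which is why it stays off the hot path once the client is connected.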
src/network_event/server_event.rs
Outdated
```rust
/// Gets the event queue length.
pub fn len(&mut self) -> usize {
    self.0.values_len()
}

/// Tests whether there are any events queued.
pub fn is_empty(&mut self) -> bool {
    self.len() == 0
}
```
Since we agree on #164, let's remove these methods and just use `.0`.
src/client.rs
Outdated
```rust
ClientSet::ResetEvents
    .after(NetcodeClientPlugin::update_system)
    .before(ClientSet::Receive)
    .run_if(not(has_authority()))
```
Do we need `not(has_authority())`?
One of the tests was breaking because local clients don't have a `RenetClient` (I guess?), so `not(is_connected())` was always true and local resending wasn't working. The new run condition should not have that problem though, so I will remove this.
```rust
/// This is a separate set from `ClientSet::Reset` in case you want to enable/disable it separately.
/// In general it is best practice to clear queued server events after a disconnect, whereas you may want to
/// preserve replication state (in which case `ClientSet::Reset` should be disabled).
```
Maybe keep this part? Looks like a useful explanation.
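To illustrate why the two resets are separate sets, here is a minimal plain-Rust sketch. The flag and type names are illustrative only, not the real `ClientSet` configuration API: each reset can be toggled independently, so you can clear queued events on disconnect while preserving replication state.

```rust
/// Illustrative toggles, analogous to enabling/disabling the two sets.
struct ResetConfig {
    reset_events: bool,      // analogous to ClientSet::ResetEvents
    reset_replication: bool, // analogous to ClientSet::Reset
}

/// Illustrative client-side state; not the real crate types.
struct ClientState {
    queued_events: Vec<String>,
    replicated_entities: Vec<u64>,
}

fn on_disconnect(state: &mut ClientState, config: &ResetConfig) {
    if config.reset_events {
        // Stale server events are rarely useful after a disconnect.
        state.queued_events.clear();
    }
    if config.reset_replication {
        // But the replicated world state may be worth keeping around.
        state.replicated_entities.clear();
    }
}

fn main() {
    let mut state = ClientState {
        queued_events: vec!["queued server event".into()],
        replicated_entities: vec![1, 2, 3],
    };
    // Clear events after disconnect but preserve replication state.
    let config = ResetConfig { reset_events: true, reset_replication: false };
    on_disconnect(&mut state, &config);
    assert!(state.queued_events.is_empty());
    assert_eq!(state.replicated_entities.len(), 3);
}
```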
Thanks!
Could you rebase to the latest master? There are some conflicts because of my PR. And while you are at it, I left two more pure-style suggestions.
I'll be away from the computer for a couple of hours, so I'm approving now; feel free to merge.
src/client.rs
Outdated
```rust
use bevy_renet::{client_connected, client_just_connected, client_just_disconnected, renet::Bytes};
use bevy_renet::{renet::RenetClient, transport::NetcodeClientPlugin, RenetClientPlugin};
```
While you are at it, could you group these imports, please?
src/network_event/client_event.rs
Outdated
```diff
@@ -139,6 +139,7 @@ impl ClientEventAppExt for App {
         self.add_event::<T>()
             .init_resource::<Events<FromClient<T>>>()
             .insert_resource(ClientEventChannel::<T>::new(channel_id))
+            .add_systems(PreUpdate, reset_system::<T>.in_set(ClientSet::ResetEvents))
```
Could you group all `PreUpdate` systems here and in `server_event`? Just for consistency with Bevy and the rest of the codebase.
- `ClientSet::ResetEvents` runs before renet updates, to ensure events are cleared immediately before connecting.
- `ClientSet::ResetEvents` runs every tick while waiting to connect, to ensure the tick where you connect is as clean as possible.
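This ordering can be illustrated with a minimal plain-Rust sketch of a single tick (the steps and types are illustrative stand-ins, not the real schedule): clearing runs first, the transport may flip to connected in the middle, and only then are new events received, so nothing from before the connection survives.

```rust
fn main() {
    // Stale event left over from before the connection.
    let mut events = vec!["stale event from the previous session"];
    let mut connected = false;

    // -- one tick --
    // 1. ClientSet::ResetEvents: runs while still disconnected.
    if !connected {
        events.clear();
    }
    // 2. NetcodeClientPlugin::update_system: transport flips to connected.
    connected = true;
    // 3. ClientSet::Receive: only this session's messages arrive.
    if connected {
        events.push("first event of the new session");
    }

    assert_eq!(events, ["first event of the new session"]);
}
```

If the reset instead ran after the transport update, the client could already be connected when it fired, and the stale event would leak into the new session.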