Diff is not sent when websocket connects on new request #72
Thanks for this very cool project! I should have led with that :)
I do see that it does say that... At the moment the difficulty is that live maintains the list of who is connected via a websocket and then removes them when they disconnect. On the other hand, it throws away the initial HTTP render, because how would we bound how long we keep that for if the client never makes the websocket request? I suppose the solution would be to store it for some arbitrary short period of time and then clean it up if a websocket request never comes. This might also help with reconnections.
Yes, this matches my thoughts exactly. I was looking for a connection mechanism because it already does store the first render to the HTTP socket (which is then thrown away), and it does attempt to diff the previous render on page connect. So almost all of the plumbing is there already, which is why we thought there might be a mechanism in place to handle that.

The solution we were reaching for was a page that could render components with dynamic IDs. Currently, because the page is completely re-rendered when the websocket starts, all the component state on the server is wiped on the re-mount. Unless you have fixed IDs for the components, the IDs in the already-rendered HTML are then out of sync with the IDs in the components on the server.

For updates over the websocket, you are always connected to the same node, so state in memory works. But since we are talking about state across two requests here, which will potentially land on different nodes, it implies a shared cache of some kind with a very short eviction policy. I don't know, but I assume that Phoenix LiveView probably uses distribution here to do something similar (if it does?). You could make stickiness a load-balancer issue, assume it, and then still do a local LRU cache with a short eviction window. Something like that.

Thanks for the confirmation. I will ponder what we could do about it.
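As a sketch of the "fixed IDs" idea mentioned above: if component IDs are derived deterministically from the component's place on the page rather than generated randomly, the re-render at websocket mount assigns the same IDs as the initial HTTP render even though server-side state was wiped. The function name and parameters below are hypothetical, not live's ID scheme.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// componentID derives a stable ID from the page path, component name, and
// its position in the render tree. Because the inputs are the same on both
// the HTTP render and the websocket re-mount, both renders agree on IDs.
// Purely illustrative; live has its own ID mechanism.
func componentID(pagePath, name string, index int) string {
	sum := sha1.Sum([]byte(fmt.Sprintf("%s/%s/%d", pagePath, name, index)))
	return hex.EncodeToString(sum[:8])
}
```

The trade-off is that deterministic IDs only line up if the component tree is itself deterministic between the two renders.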
The lifecycle of the request (intentionally) causes render to happen twice on a new request:

1. once while serving the initial HTTP request, and
2. again when the browser connects the websocket and the page mounts.
Our understanding is that the code should be patching the page if it changed between (1) and (2). This does not seem to happen.
According to the comment here: https://github.com/jfyne/live/blob/master/http.go#L404-L406, when the websocket connects and render is called the second time, the diff should be sent on the websocket. However, that is conditional on `LatestRender` not being `nil` here: https://github.com/jfyne/live/blob/master/render.go#L38

From our testing, `LatestRender` is always `nil`, and so the diff is never sent on the websocket. The render happens on the HTTP socket and is stored to the socket. However, the browser makes a second request with the `upgrade` header set, and it seems that the state of the previous render is lost between these calls. `serveWS` seems to indicate that it should be retrieving the socket from the session: https://github.com/jfyne/live/blob/master/http.go#L323

I don't yet see how it can connect those two renders. Any insight into what's going on here is appreciated. Thanks.