
Add merge method for combining changes from multiple stores #154

Closed
mpiannucci wants to merge 6 commits from the matt/merge-repo branch

Conversation

mpiannucci
Contributor

Still experimental, but putting it out in the open.

        let repository = repository_lock.into_inner();
        Ok(repository)
    })
    .collect::<Result<Vec<_>, StoreError>>()?;
Contributor Author

This is required to check for errors and exit early if there are some. IDK if this is the right approach...

If one fails should the others still merge? What about ones that come before?

Collaborator

I think this is the right approach. We never want jobs to successfully commit if one of their workers failed or is still working. This may even be somewhat common: bad concurrent code tries to merge while another thread is still doing work, and in that case Arc::try_unwrap will fail.

If they want to commit anyway, they can do it explicitly, by not passing those stores. What is very important is recoverability: we let them know something is still running, and they wait and try again. So I think there are better return types for this function, something like:

-> StoreResult<Vec<(usize, Store)>>

which returns the list of Stores that are still pending. The user can wait on those somehow and try to merge them again. The ones that succeeded are gone (not really gone, just merged).
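
A rough sketch of that shape (not the PR's code; Store, StoreError, and the merge_one helper are placeholders, and the pending entries hand back the still-shared Arc so the signature type-checks):

use std::sync::Arc;

// Placeholders standing in for the crate's real Store and error types.
struct Store;
struct StoreError;

// Stand-in for whatever actually folds one store's changes into the target.
fn merge_one(_target: &mut Store, _source: Store) -> Result<(), StoreError> {
    Ok(())
}

// Merge every store whose Arc can be unwrapped; report the indexed stores
// that are still shared (a worker still holds a reference) back to the caller.
fn merge_stores(
    target: &mut Store,
    stores: impl IntoIterator<Item = (usize, Arc<Store>)>,
) -> Result<Vec<(usize, Arc<Store>)>, StoreError> {
    let mut pending = Vec::new();
    for (idx, store) in stores {
        match Arc::try_unwrap(store) {
            Ok(store) => merge_one(target, store)?,
            Err(still_shared) => pending.push((idx, still_shared)),
        }
    }
    Ok(pending)
}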

Things we should think more about:

  • I don't love the Vec in the return type though; we may want to think about it some more. I'm a bit worried about the ugly case in which people use Dask and every chunk becomes a task, and we have millions of things to merge.
  • How do we help them "wait" until they can merge?
  • There is a possible answer to both points: this function keeps retrying until it succeeds. Then we don't need a return type and we are the ones doing the waiting, but that approach sounds quite unsatisfying.

nit: try_collect is usually more readable.
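
For the nit, with the itertools crate the collect would read roughly like this (illustrative only, on a made-up iterator of results rather than the PR's actual chain):

use itertools::Itertools; // external crate providing try_collect

fn parse_all(inputs: &[&str]) -> Result<Vec<i32>, std::num::ParseIntError> {
    // Same behavior as .collect::<Result<Vec<_>, _>>(), but reads a bit better.
    inputs.iter().map(|s| s.parse::<i32>()).try_collect()
}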

Collaborator

A few more thoughts, again about this real-world scenario where one thread is trying to do a set while another is trying to merge all the repos.

  • Should this merge require a &mut instead? It may be more faithful to reality.
  • Either way, my "unsatisfying approach" is not only unsatisfying but potentially deadlocking if not done carefully; both threads are trying to write to the repo.
  • I think I have a much better result type (see the sketch after this list):
-> Result<(), I::IntoIter>

or however you write that; there is probably some amount of IntoIterator missing.

  • The idea is: I start merging, and I stop as soon as I find one Repository that is not ready to commit (by that we mean: its Arc cannot be unwrapped). When I stop, I give you back the iterator of the remaining repos.
  • Calling code can decide what to do next: simply retry with the remaining repos, wait and retry, skip the first repo, etc.
  • Not sure how useful it would be, but we could provide a ready_to_merge function that verifies that the ref count on the Arc is 1.
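
Here is a minimal sketch of that idea (placeholder Repository type, hypothetical merge_changeset helper); note that in this version the repo that blocked the merge is not part of the returned iterator, which is one of the details we would have to decide:

use std::sync::Arc;

// Placeholder standing in for the real Repository type.
struct Repository;

// Stand-in for folding one repository's uncommitted changes into the target.
fn merge_changeset(_target: &mut Repository, _source: Repository) {}

// Merge until we find a repository whose Arc is still shared, then hand back
// the iterator over whatever has not been consumed yet, so the caller can
// wait, retry, or skip.
fn merge_all<I>(target: &mut Repository, repos: I) -> Result<(), I::IntoIter>
where
    I: IntoIterator<Item = Arc<Repository>>,
{
    let mut iter = repos.into_iter();
    while let Some(repo) = iter.next() {
        match Arc::try_unwrap(repo) {
            Ok(repo) => merge_changeset(target, repo),
            // Still shared: its other Arc clones keep it alive elsewhere.
            Err(_still_shared) => return Err(iter),
        }
    }
    Ok(())
}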

We should talk more about all this, fun stuff.

Collaborator

I just had another idea. All the issues arise because we are trying to merge multiple stores. This also makes it harder on the Python side, because we need to be very careful to use a generator and not a list. I think there is a much easier way: only allow merging one store into self, and let the user deal with gathering all of them and calling merge one by one. I think this is also easier on the user: they just need to get results as soon as they are produced and call merge on them.
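
Something like this minimal sketch (placeholder Store and StoreError types; the real method name and signature may well differ):

use std::sync::Arc;

// Placeholders standing in for the real Store and its error type.
struct Store;
struct StoreError;

impl Store {
    // Proposed shape: fold a single finished worker store into self.
    fn merge(&mut self, _other: Store) -> Result<(), StoreError> {
        Ok(()) // stand-in for the real changeset merge
    }
}

// The caller gathers worker stores as they finish and merges them one by one.
fn merge_workers(target: &mut Store, workers: Vec<Arc<Store>>) -> Result<(), StoreError> {
    for worker in workers {
        // If a worker is still holding its store, surface that instead of
        // silently committing a partial result.
        let store = Arc::try_unwrap(worker).map_err(|_| StoreError)?;
        target.merge(store)?;
    }
    Ok(())
}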

Contributor Author

This is also the approach I started doing in Python, so we got to the same place! I wound up reverting it because the lifetimes were driving me nuts, but I totally agree that's the approach we should use.


@@ -281,6 +281,11 @@ impl Repository {
        !self.change_set.is_empty()
    }

    /// Discard all uncommitted changes and return them as a `ChangeSet`
Collaborator

everything in this file lgtm

@mpiannucci
Contributor Author

Closed in favor of #361

@mpiannucci mpiannucci closed this Oct 30, 2024
@mpiannucci mpiannucci deleted the matt/merge-repo branch December 18, 2024 15:46