
Add an explicit #watch method to RedisClient::Cluster #339

Conversation

KJTsanaktsidis
Contributor

This returns a "watcher" object, which can be used for three things:

  • To add keys to be watched on the same connection (by calling #watch)
  • To begin a MULTI transaction on the connection (by calling #multi)
  • To UNWATCH the connection and return it to its original state (by calling #unwatch)

This means that the following pattern becomes possible in redis-cluster-client:

```
client.watch(["{slot}k1", "{slot}k2"]) do |watcher|
  # Further reads can be performed with client directly; this is
  # perfectly safe and they will be individually redirected if required
  # as normal.
  # If a read on a slot being watched is redirected, that's also OK,
  # because it means the final EXEC will fail (since the watch got
  # modified).
  current_value = client.call('GET', '{slot}k1')
  some_other_thing = client.call('GET', '{slot}something_unwatched')

  # You can add more keys to the watch if required.
  # This could raise a redirection error, and cause the whole watch
  # block to be re-attempted.
  watcher.watch('{slot}different_key')
  different_value = client.call('GET', '{slot}different_key')

  if do_transaction?
    # Once you're ready to perform a transaction, you can use multi...
    watcher.multi do |tx|
      # tx is now a pipelined RedisClient::Cluster::Transaction
      # instance, like a normal multi
      tx.call('SET', '{slot}k1', 'new_value')
      tx.call('SET', '{slot}k2', 'new_value')
    end
    # At this point, the transaction is committed
  else
    # You can abort the transaction by calling unwatch
    # (this will also happen if an exception is thrown)
    watcher.unwatch
  end
end
```

This interface is what's required to make redis-clustering/redis-rb work correctly, I think.

@supercaracal
Member

I think we should be careful about adding a public method to the cluster class. I don't think this approach is appropriate for the redis-cluster-client gem. As you said:

redis/redis-rb#1255 (comment)

Basically, redis-clustering is the "adapter" that takes the redis-cluster-client interface and makes it fit the redis-rb API pattern.

Can we implement it on the redis-clustering gem side?

@KJTsanaktsidis
Contributor Author

I don't think this way is proper for the redis-cluster-client gem

I think something like this is required on the redis-cluster-client side. Maybe the following question would help us get on the same page:

With redis-cluster-client, how would you perform the following sequence of operations?

  • Watch a key {slot}k1 (WATCH {slot}k1)
  • Read the value (GET {slot}k1)
  • Decide, based on the value you read, to not proceed with a transaction (UNWATCH)

Or maybe, what about this?

  • Watch a key {slot}k1 (WATCH {slot}k1)
  • Read the value (GET {slot}k1)
  • Decide, based on the value you read, to watch a different key (maybe {slot}k1 contains the name of a different key {slot}k2); so, WATCH {slot}k2
  • Then do a transaction on those keys (MULTI ... EXEC).

In either of these cases, redis-cluster-client needs to remember where to send a command related to the transaction (either UNWATCH in the first example, or WATCH {slot}k2 in the second example). Using an explicit "handle" object yielded from a #watch method is, I think, cleaner than trying to implicitly remember this inside the router.
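The "handle" idea can be illustrated with a small, self-contained sketch. The class names and the stub connection below are hypothetical (this is not the actual redis-cluster-client implementation): the point is that the watcher pins one connection, so UNWATCH is guaranteed to reach the node that received the original WATCH.

```ruby
# Stub standing in for a connection to a single cluster node; it just
# records the commands it receives.
class StubConnection
  attr_reader :commands

  def initialize(name)
    @name = name
    @commands = []
  end

  def call(*command)
    @commands << command
    'OK'
  end
end

# Hypothetical watcher handle: it holds the connection that received the
# original WATCH, so related commands cannot be routed elsewhere.
class Watcher
  def initialize(connection)
    @connection = connection # pinned for the lifetime of the handle
  end

  # Additional keys are watched on the same pinned connection.
  def watch(*keys)
    @connection.call('WATCH', *keys)
  end

  # UNWATCH must also go to the pinned connection, not an arbitrary one.
  def unwatch
    @connection.call('UNWATCH')
  end
end

conn = StubConnection.new('node-1')
watcher = Watcher.new(conn)
watcher.watch('{slot}k1')
watcher.unwatch
p conn.commands
# => [["WATCH", "{slot}k1"], ["UNWATCH"]]
```

Without such a handle, the router would have to remember internally which connection the WATCH went to, which is the implicit state this design avoids.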

@supercaracal
Member

I mean, why don't you use redis-cluster-client's internal modules on the redis-clustering side? I think we can do it.

I can't agree to adding a public method to our user-facing class. It creates a difference between redis-client and redis-cluster-client. As I've said repeatedly, we shouldn't modify the library's design solely for redis-clustering's convenience. Our redis-cluster-client works as a client by itself.

@KJTsanaktsidis
Contributor Author

Our redis-cluster-client works as a client by itself.

I definitely agree with this! I think this #watch API is something redis-cluster-client needs in its own right, though.

Let's ignore redis-clustering/redis-rb for a moment. In an app using only redis-cluster-client, how would you perform the following sequences of Redis operations from my previous comment?

  1. WATCH '{slot}k1'
  2. GET '{slot}k1' # Can be on any connection (to the right node), but
  3. UNWATCH # This must be on the same connection as 1!

Or....

  1. WATCH '{slot}k1'
  2. GET '{slot}k1' # Any connection is fine
  3. WATCH '{slot}k2' # Must be on the same connection as 1!
  4. GET '{slot}k2'
  5. MULTI # must be on the same connection as 1
  6. ...
  7. EXEC

Both of these examples want to manipulate the watches on a connection, without necessarily yet calling MULTI. The #multi API does not provide a way to do this.
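The second sequence can be sketched the same way (again with hypothetical names and a stub connection, not the real implementation): a handle that pins the connection lets WATCH, MULTI and EXEC all land on the same node, while the #multi API alone gives no place to issue the intermediate WATCH.

```ruby
# Stub connection to a single node that records every command.
class FakeConnection
  attr_reader :commands

  def initialize
    @commands = []
  end

  def call(*command)
    @commands << command
    'OK'
  end
end

# Hypothetical handle: WATCH, reads, MULTI and EXEC all use the same
# pinned connection.
class WatchHandle
  def initialize(connection)
    @connection = connection
  end

  def watch(*keys)
    @connection.call('WATCH', *keys)
  end

  def call(*command)
    @connection.call(*command)
  end

  # Commands queued inside the block run between MULTI and EXEC on the
  # pinned connection, so the watches above actually guard them.
  def multi
    @connection.call('MULTI')
    yield self
    @connection.call('EXEC')
  end
end

conn = FakeConnection.new
handle = WatchHandle.new(conn)
handle.watch('{slot}k1')
handle.call('GET', '{slot}k1')
handle.watch('{slot}k2')
handle.multi { |tx| tx.call('SET', '{slot}k2', 'v') }
p conn.commands.map(&:first)
# => ["WATCH", "GET", "WATCH", "MULTI", "SET", "EXEC"]
```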


I mean, why don't you use redis-cluster-client's internal moduels in the redis-clustering side? I think we can do it.

This will of course work, but we'd still need the changes to optimistic_locking.rb from this PR, I think. And it seems to me that the above use case should be supported in redis-cluster-client by itself, as well as when using redis-rb.

Hopefully I've explained what I mean here a bit better! If you still don't agree I can remove the #watch public method from this PR.

@KJTsanaktsidis force-pushed the ktsanaktsidis/watch_method_for_redisrb branch from df9cb70 to 55a28ca on February 26, 2024 at 23:33
@KJTsanaktsidis
Contributor Author

I had a think about this over the weekend... I pushed a change to this PR making the #watch method private, so we can take our time finding the right interface before exposing any new public API. For now, redis-rb can just send(:watch).
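For illustration: Kernel#send bypasses method visibility in Ruby, so a private #watch stays reachable to a caller that accepts the coupling. The stub class below is hypothetical, not the actual redis-cluster-client code.

```ruby
# Hypothetical stand-in for a cluster client with a private watch method.
class FakeClusterClient
  private

  def watch(keys)
    yield "watching #{keys.join(', ')}"
  end
end

client = FakeClusterClient.new

begin
  # A direct call fails because the method is private.
  client.watch(['{slot}k1']) { |w| w }
rescue NoMethodError
  puts 'direct call raises NoMethodError'
end

# send bypasses visibility, so the private method is still callable:
result = client.send(:watch, ['{slot}k1']) { |w| w }
puts result
# => watching {slot}k1
```

This keeps the method out of the documented public API while redis-rb experiments with the interface.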

I did think about deleting it and having redis-rb construct a ::RedisClient::Cluster::OptimisticLocking directly (by calling instance_variable_get(:@router) etc. to get access to the router), but there were already test cases for this functionality in test_cluster.rb, and by keeping the watch method in this repo we can keep the test_transaction_with_dedicated_watch_command test case here too.

@KJTsanaktsidis
Contributor Author

I opened a new version of this PR because I wanted to resolve the conflicts, but I'm on holiday without my work laptop and so can't push to the zendesk/redis-cluster-client fork!

@supercaracal
Member

Have a good one!
