Partition_Architecture.txt
This details a possible way to achieve node partitioning using consistent hashing and
virtual nodes. It outlines four main operations: initial view, key operations, key count,
and view change.
-------------------------------------------------------------------------------------
A. Initial view:
Each node is given a list of ip/port pairs for all other nodes.
In order to maintain an even distribution of keys during a view change, we will
use the concept of virtual nodes. This just means that for each node, we create
several aliases.
The procedure is: for each node, hash the node to an int and also hash several
aliases of that node. For each hashed node we store a translation from its "ring"
number to its IP address. We then sort all ring numbers so we can match keys
to nodes quickly.
ex.
Node1's hash_view = [5, 31, 120, 209]
Node1's translation_view = {120:node0_ip, 5:node1_ip, 209:node2_ip, 31:node3_ip}
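
A rough sketch of building this view in Python follows. The choice of MD5, the
ring size, and 4 aliases per node are illustrative assumptions, not part of the
design.

import hashlib

VNODES = 4          # virtual aliases per physical node (assumed value)
RING_SIZE = 2**32   # ring positions are taken modulo this (assumed value)

def ring_hash(s):
    # Hash a string to an int position on the "ring".
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % RING_SIZE

def build_view(node_addresses):
    # Return (sorted list of ring numbers, ring number -> node address map).
    translation_view = {}
    for addr in node_addresses:
        for i in range(VNODES):
            pos = ring_hash("%s#vnode%d" % (addr, i))  # alias = address + suffix
            translation_view[pos] = addr
    hash_view = sorted(translation_view)
    return hash_view, translation_view

# ex.
# hash_view, translation_view = build_view(["10.0.0.1:8080", "10.0.0.2:8080"])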
-------------------------------------------------------------------------------------
B. Key operations:
PUT, GET, DELETE. All three follow the same routing procedure.
When a node gets a key request:
1. Hash the key to an int on the "ring".
2. From the local hash_view, find the smallest ring number that is greater than
or equal to the key's hash, wrapping around to the smallest ring number if the
key hashes past the largest one. This matches the ownership rule in section D:
the next node on the ring owns the key.
3. Send the key/val to the ip address of that ring number's node. Include some
metadata that says whether this is a PUT, GET, or DELETE.
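
A sketch of the lookup in step 2, reusing ring_hash, hash_view, and
translation_view from the sketch in section A:

import bisect

def owner_of(key, hash_view, translation_view):
    # Find the node that owns `key`: the first ring number at or after the
    # key's hash, wrapping around to the start of the ring if necessary.
    pos = ring_hash(key)
    idx = bisect.bisect_left(hash_view, pos)
    if idx == len(hash_view):   # past the largest ring number -> wrap around
        idx = 0
    return translation_view[hash_view[idx]]

# The node that received the request then forwards the key/val plus metadata
# (PUT, GET, or DELETE) to owner_of(key, hash_view, translation_view).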
-------------------------------------------------------------------------------------
C. Key count:
In order to find the number of keys, we ask each non-virtual (physical) node in
our view how many keys it has. We then sum the counts and return the total to
the client.
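
A sketch of that aggregation; the /key-count endpoint and the use of the
requests library are assumed details of the node API, not specified here:

import requests

def total_key_count(node_addresses):
    # Ask every physical (non-virtual) node for its local count and sum them.
    total = 0
    for addr in node_addresses:
        resp = requests.get("http://%s/key-count" % addr)  # assumed endpoint
        total += resp.json()["count"]                       # assumed response shape
    return total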
-------------------------------------------------------------------------------------
D. View change:
This is the key benefit of consistent hashing: we don't want
to have to rebalance the whole system when a new node joins or an old one
is removed. When a new node is added, we want it to take on some (roughly 1/n) of the
stored keys, but the question is which keys.
Two scenarios: 1. a node is added, 2. a node is removed.
A. Send the new view to all nodes (a rebalancing sketch follows the steps below).
1. When receiving a request for a view change that adds a node, hash the new node and its
virtual alias nodes and add them to the local view. Once we insert all the new "ring" numbers,
ownership of keys must be updated according to our definition of ownership, where
the next node on the ring owns the key.
2. When removing a node, we need to update the ownership of keys for all virtual
nodes being removed. Once a virtual node X is removed, the next node Y on the
"ring" must take X's keys.