4. A distributed lock service spanning two locations

This section defines a lock service spanning two locations, identified by addresses 0 and 1. At each address, three functions can be called by a local user:

  • acq(): acquire the lock.
    • Return only when the user at the other address does not hold the lock; block until then.
    • Call only if the local user does not hold the lock and end() has not been called at any location.
  • rel(): release the lock.
    • Return None.
    • Call only if the local user holds the lock.
  • end(): end the lock service.
    • Return None.
    • Call only if the following holds at both locations: end() has not been called; there is no ongoing call; and the local user does not hold the lock.
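The "call only if" rules above can be summarized as a predicate. The sketch below is illustrative only (may_call and its parameters are not part of the service); it encodes when a call at one address is permitted, given local state and the peer's state:

```python
def may_call(op, holds, ended, ongoing_here=False, ongoing_peer=False,
             peer_holds=False):
    """Return True iff 'op' may be called at this address.

    holds:        local user holds the lock
    ended:        end() has been called at some location
    ongoing_here: a call is ongoing at this address
    ongoing_peer: a call is ongoing at the other address
    peer_holds:   the user at the other address holds the lock
    """
    if ended or ongoing_here:       # no call after end(); calls are sequential
        return False
    if op == "acq":
        return not holds            # may not acquire while holding
    if op == "rel":
        return holds                # may release only while holding
    if op == "end":
        # end() needs both locations idle and lock-free
        return not holds and not peer_holds and not ongoing_peer
    raise ValueError(op)
```
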

Service program (abstract version)

The intended lock service is formally defined by the pseudo-Python class Service in distlock2.service.ppy. As usual, parameter j in a function indicates the address at which the function is called (e.g., acq(j) corresponds to function acq() at address j).

# pseudo-Python module sesf.distlock2.service.ppy
# class Service is distlock2 service (abstract version)

class Service():
  def __init__(self):
    """Maintains the following variables:
    - acqrd: address at which user holds lock; None if lock is free.
    - ongoing[j]: true iff acq(j), rel(j) or end(j) call ongoing; j in 0..1.
    - ended: true iff end(j) has been called for some j
    """
    self.acqrd = None
    self.ongoing = [False, False]
    self.ended = False

  def acq(self, j):
    CIC:
      j in (0, 1) and not self.ended and not self.ongoing[j]
      and self.acqrd != j
    CU:
      self.ongoing[j] = True

    RETURN():
      ROC:
        self.acqrd == None
      RU:
        self.ongoing[j] = False
        self.acqrd = j
   

  def rel(self, j):
    CIC:
      j in (0, 1) and not self.ended and not self.ongoing[j]
      and self.acqrd == j 
    CU:
      self.ongoing[j] = True
      self.acqrd = None

    RETURN():
      ROC:
        True
      RU:
        self.ongoing[j] = False


  def end(self, j):
    CIC:
      j in (0, 1) and not self.ended and self.acqrd == None
      and not self.ongoing[0] and not self.ongoing[1]
    CU:
      self.ongoing[j] = True
      self.ended = True

    RETURN():
      ROC:
        True
      RU:
        self.ongoing[j] = False

Note that rel(j) gives up the lock in the call step. So if rel(j) and acq(1-j) are both ongoing, acq(1-j) can return before rel(j) returns.
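The abstract CIC/CU/ROC/RU structure above can be approximated by ordinary Python. The sketch below is not part of sesf (ServiceModel is an illustrative name); it renders each CIC as an assertion, each CU as an update made before any blocking, the ROC as a condition-variable wait, and each RU as the update made on return:

```python
import threading

class ServiceModel:
    """Runnable sketch of the abstract distlock2 service."""

    def __init__(self):
        self.acqrd = None               # address holding the lock, or None
        self.ongoing = [False, False]   # ongoing call at address j?
        self.ended = False              # end() called somewhere?
        self.cv = threading.Condition()

    def acq(self, j):
        with self.cv:
            assert j in (0, 1) and not self.ended \
                and not self.ongoing[j] and self.acqrd != j   # CIC
            self.ongoing[j] = True                            # CU
            self.cv.wait_for(lambda: self.acqrd is None)      # ROC
            self.ongoing[j] = False                           # RU
            self.acqrd = j

    def rel(self, j):
        with self.cv:
            assert j in (0, 1) and not self.ended \
                and not self.ongoing[j] and self.acqrd == j   # CIC
            self.ongoing[j] = True                            # CU
            self.acqrd = None           # lock given up in the call step
            self.cv.notify_all()
            self.ongoing[j] = False                           # RU

    def end(self, j):
        with self.cv:
            assert j in (0, 1) and not self.ended \
                and self.acqrd is None \
                and not any(self.ongoing)                     # CIC
            self.ongoing[j] = True                            # CU
            self.ended = True
            self.ongoing[j] = False                           # RU
```

Because rel(j) sets acqrd to None in its call step, a blocked acq(1-j) can be woken and return as soon as rel(j)'s call step completes, matching the note above.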

Service program (concrete version)

The Python version of the service is in class Service (same name in the pseudo-Python version):

Module sesf.distlock2.service has class Service, which is the concrete version of the distlock2 service. It is RPC-accessible via pyrodaemon sesf.distlock2.service. Run python service.py to start a Service instance, which can then be accessed by a distributed program that uses the distlock2 service.

class distlock2.service.Service

Service tester (abstract version)

TODO

Service tester (concrete version)

The servicetester is a program that can be used in place of the users of the service, for testing an implementation of the service. For testing, each node of the implementation is wrapped in a “test node”. The service tester makes RPC calls to the test nodes. A test node redirects incoming RPC calls to its implementation node.

The service tester can also use a checker, if one is provided with the implementation, to check implementation-specific assertions. In this case, each implementation node can inform the servicetester of local events by calling the node’s inform_tester() function. This function has no effect if the implementation node has not been started by a test node.
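A minimal sketch of this reporting pattern (hypothetical names; NodeBase and handle_event are not part of sesf): the node keeps the tester proxy it was started with, and inform_tester degrades to a no-op when no tester is present:

```python
class NodeBase:
    """Sketch of an implementation node's tester-reporting hook."""

    def __init__(self, testerproxy=None):
        # testerproxy is set only when the node is started by a test node
        self.tester = testerproxy

    def inform_tester(self, *event):
        # no effect if the node was not started by a test node
        if self.tester is not None:
            self.tester.handle_event(*event)   # hypothetical RPC call
```
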

test_node

Module sesf.distlock2.test_node has class TestNode that implements an RPC wrapper for a node of an implementation of distlock2 service. Run python test_node.py 2 1 --use imp_0.node 2 1 in a shell to start a TestNode object wrapping an implementation imp_0.node.Node.

class distlock2.test_node.TestNode(myid, use)

servicetester

Module sesf.distlock2.servicetester has class ServiceTester that implements the distlock2 servicetester. This can test a distributed implementation of the distlock2 service. For example, to test a distributed system of two imp_0 nodes:

  • Start a Pyro4 nameserver if one is not already running: run python -m Pyro4.naming &.
  • Run python test_node.py 2 0 --use imp_0.node 2 0 and python test_node.py 2 1 --use imp_0.node 2 1. (Or python start_nodes.py 2 test_node.py --use imp_0.node.)
  • Run python servicetester.py 2 to test against the service. Or, run python servicetester.py 2 --checker imp_0.checker 2 to test against the service and the implementation-specific assertions in imp_0.checker.Checker (if present).
class distlock2.servicetester.ServiceTester(num_nodes, num_acqrels, checker_argv)

Implementation Imp_0 (uses msgtransfer2 service)

[Figure: distlock2-config.svg]

Module sesf.distlock2.imp_0.node has class Node that implements a node of a distlock2 implementation making use of a msgtransfer2 node. The following commands start a distributed system of two processes, each with a Node instance that uses a msgtransfer2 node.

  • python imp_0/node.py 2 0 --use sesf.msgtransfer2.imp_0.node 2 0
  • python imp_0/node.py 2 1 --use sesf.msgtransfer2.imp_0.node 2 1
  • Or instead of the above, python ../start_nodes.py 2 imp_0/node.py --use sesf.msgtransfer2.imp_0.node.

The two processes exercise the distlock2 service provided by the two distlock2 nodes and then end the distlock2 service (which also ends the msgtransfer2 service being used).

If the commands do not indicate a msgtransfer2 implementation (i.e., the “use” argument is not present), the distlock2 nodes use a default msgtransfer2 implementation.

The distlock2 nodes can also make use of a running msgtransfer2 servicemodel, instead of a msgtransfer2 implementation. Just replace sesf.msgtransfer2.imp_0.node by sesf.msgtransfer2.model_node in the above commands.

class distlock2.imp_0.node.Node(argv, testerproxy=None)

A node of a distributed system that implements the distlock2 service. The distributed system consists of two Node instances with addresses 0 and 1. For brevity, refer to the node at address j as node_j, where j ranges over 0 and 1.

After node_j completes initialization, it maintains the following variables:

  • myid: id of this node; 0 or 1.
  • mtnode: msgtransfer2 node used by this node.
  • status: ‘T’ (‘thinking’), ‘H’ (‘hungry’), ‘E’ (‘eating’); initially ‘T’.
  • token: True iff node holds the token; initially true iff j=0.
  • req: True iff node holds a request for the token; initially true iff j=1.
  • recv_thread: local thread that receives messages from peer.
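The variables above suggest a token-based protocol. The following single-process sketch of one plausible token/request discipline is illustrative only and is not the imp_0 code: SimNode, pump, and try_enter are invented names, and message delivery is simulated with in-memory lists instead of msgtransfer2.

```python
class SimNode:
    """Sketch of a token-passing node; chan[k] is node_k's inbox."""

    def __init__(self, j, chan):
        self.myid = j
        self.chan = chan
        self.status = 'T'             # 'T'hinking / 'H'ungry / 'E'ating
        self.token = (j == 0)         # node_0 starts with the token
        self.req = (j == 1)           # node_1 starts with the request marker

    def send(self, msg):
        self.chan[1 - self.myid].append(msg)

    def acq(self):                    # become hungry; request token if absent
        self.status = 'H'
        if not self.token:
            self.send('req')          # hand the request marker to the peer
            self.req = False

    def try_enter(self):              # enter critical section if token held
        if self.status == 'H' and self.token:
            self.status = 'E'
            return True
        return False

    def rel(self):                    # leave; pass token if peer requested it
        self.status = 'T'
        if self.req and self.token:
            self.token = False
            self.send('token')

    def recv_one(self):               # process one incoming message
        msg = self.chan[self.myid].pop(0)
        if msg == 'token':
            self.token = True
        else:                         # 'req': peer wants the token
            self.req = True
            if self.status == 'T' and self.token:
                self.token = False
                self.send('token')    # idle, so hand the token over now

def pump(nodes):
    """Deliver pending messages until all inboxes are empty."""
    while any(n.chan[n.myid] for n in nodes):
        for n in nodes:
            while n.chan[n.myid]:
                n.recv_one()
```

In the real implementation, recv_thread plays the role of pump, draining messages from the peer as they arrive over the msgtransfer2 node.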

Module distlock2.imp_0.checker has class Checker, a skeleton checker for a distlock2 implementation, to be fleshed out by you.

class distlock2.imp_0.checker.Checker(tester, argv)

Maintain global state of a distributed system of two distlock2.imp_0 nodes. Receive updates from the nodes and check that the updated global state satisfies desired global assertions. Instantiated by distlock2.servicetester when testing the distributed system.
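As an illustration of the kind of global assertion a fleshed-out checker might enforce, the sketch below is hypothetical (check_global and its state encoding are not part of sesf): given each node's reported (status, token) pair, it checks that exactly one token exists, at most one node is eating, and a node eats only while holding the token.

```python
def check_global(states):
    """states: node id -> (status, token) as reported by each node."""
    tokens = sum(1 for status, token in states.values() if token)
    eating = sum(1 for status, token in states.values() if status == 'E')
    assert tokens == 1, "exactly one token in the system"
    assert eating <= 1, "mutual exclusion: at most one node eating"
    for status, token in states.values():
        assert not (status == 'E' and not token), \
            "a node eats only while holding the token"
```
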


Running servicetester on an implementation using msgtransfer2 service

Start a Pyro4 nameserver if one is not already running (python -m Pyro4.naming &).

Start the msgtransfer2 service:

  • python msgtransfer2/service.py

Run the following two commands:

  • python distlock2/test_node.py 2 0 --use imp_0.node 2 0
  • python distlock2/test_node.py 2 1 --use imp_0.node 2 1
  • Or instead of the above, run python start_nodes.py 2 distlock2/test_node.py --use imp_0.node

Run the service tester with your checker:

  • python distlock2/servicetester.py 2 --checker imp_0.checker 2