More stuff:
Killing the video early loses some output.  Some ideas:

1) Kill the source of the video stream, but delay in killing the receiver.
2) Kill MediaNet before killing the receiver, to give the tracereceiver a
chance to flush.



Tried a couple of things:
1) only reconfig due to creep after 5 secs since last reconfig.  Not sure
how well this is working.
2) reset counters for monitoring on reconfig.  Consequences:
   - does not include potentially irrelevant past information, so the
     reading is more accurate (in some sense)
   - because the initial sender was probably doing major queuing, it will
     get a big burst of data and dropped packets.  We need to give this
     time to shake out, right?  The question is how/where to do this.
     - add some kind of hysteresis to the local schedulers so that they
       don't report as fast/often following a reconfig.  Not sure of the
       consequence to this, other than that it's baking in some policy.
     - figure out how to do it in the global scheduler.  The problem now
       is that even if I delay reconfig, it will set an upper bound
       optimistically, so I might flap.  Not sure.  This seems worth a try;
       how long to wait though?  Should I just have a general delay in
       between reconfigs?  The problem is that the b/w updates will include
       the "benefit" of queuing, so it will appear we have more capacity
       than we do.  This would argue for delaying for one cycle in the local
       scheduler.
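
One way to do the "delay one cycle in the local scheduler" idea is to have
the monitor suppress its first report after a reconfig.  A minimal sketch;
monitor_state, monitor_on_reconfig, and monitor_should_report are made-up
names, not the real local-scheduler interface:

```c
/* Hypothetical sketch: suppress the first monitoring report after a
 * reconfig so the burst of queued data doesn't pollute the b/w reading.
 * All names here are assumptions, not the real interface. */

typedef struct {
    int skip_cycles;   /* reporting cycles left to suppress after a reconfig */
} monitor_state;

/* Called when the local scheduler applies a new configuration. */
void monitor_on_reconfig(monitor_state *m)
{
    m->skip_cycles = 1;           /* delay one cycle, per the note above */
}

/* Returns nonzero if this cycle's counters should be reported. */
int monitor_should_report(monitor_state *m)
{
    if (m->skip_cycles > 0) {
        m->skip_cycles--;         /* let the queue backlog shake out */
        return 0;
    }
    return 1;
}
```

The policy knob is just skip_cycles; bumping it above 1 would trade
responsiveness for a cleaner reading.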

Want to look into:
- how long does it take to do a reconfig?  Are things cascading properly?
Could create a long line of local schedulers (5?) and make sure the reconfig
is pretty fast.  That is, the first one doesn't have to wait until the last
one is ready.

*** Read over the tcp-bw.pdf and atp.pdf papers in ~ with regard
to packet pair estimators.

Redesigns:
- Need to have a coherent strategy for estimating the bandwidth available on
  network links.  Here are the inputs:

  send reports:
  1) if try > send, then BW = send; implies that we can reset any heuristics
     we've accumulated that went into calculating the bandwidth; it
     can now be set to send.
  2) if try == send, then BW >= send; implies that if BW < send, we can set
     BW = send.
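
The two rules above as a sketch; send_report and apply_send_report are
assumed names, and units are bytes/sec:

```c
/* Sketch of the two send-report rules above; the struct and function
 * names are assumptions, not the real report format. */

typedef struct {
    double try_bw;    /* bytes/sec the sender tried to push */
    double send_bw;   /* bytes/sec that actually got through */
} send_report;

/* Apply a send report to the current link bandwidth estimate. */
double apply_send_report(double bw, const send_report *r)
{
    if (r->try_bw > r->send_bw) {
        /* Rule 1: the link is saturated, so send IS the bandwidth;
         * discard accumulated heuristics and set BW = send. */
        return r->send_bw;
    }
    /* Rule 2: try == send, so BW >= send; only raise the estimate. */
    if (bw < r->send_bw)
        return r->send_bw;
    return bw;
}
```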

  Thus it's easy to detect that bandwidth is not available; how do we detect
  that it has returned?

  1) do some kind of additive increase on a regular basis to estimate it.
     Right now we increment it by 3% per second.

     QUESTION: how could we better derive a reasonable rate?
     PROBLEMS: different links will be adding at different rates.  For a
       large network, each increment could cause a reconfig, which is a
       pretty dicey operation; that is, we don't want lots of reconfigs
       one after another.

     How to do this well?  I want to weight the accuracy of my estimate, I
     think, to avoid too many reconfigurations.  Seems like I want to be
     able to 

     a) adjust the rate at which I creep, based on network information.
        That is, I could assume that past performance is an indicator of
        current performance, and thus curb the additive rate.

     b) otherwise, increase by an absolute amount rather than a % of the
        current bandwidth.  I believe this is what TCP does (it adds
        one MSS per RTT to the window size).
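
The two creep policies side by side; CREEP_RATE matches the current 3% per
second, while CREEP_STEP is a made-up absolute increment (roughly one MSS
worth per second):

```c
/* Sketch contrasting the two creep policies discussed above.
 * CREEP_STEP is an assumed constant, not from the real code. */

#define CREEP_RATE 0.03      /* 3% per second, as currently implemented */
#define CREEP_STEP 1500.0    /* absolute bump in bytes/sec (~one MSS) */

/* Current scheme: growth is proportional to the estimate, so large links
 * creep (and potentially trigger reconfigs) much faster than small ones. */
double creep_percent(double bw, double dt)
{
    return bw * (1.0 + CREEP_RATE * dt);
}

/* Alternative: additive increase, analogous to TCP adding one MSS per RTT;
 * every link probes upward at the same absolute rate. */
double creep_absolute(double bw, double dt)
{
    return bw + CREEP_STEP * dt;
}
```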

     XXX more on when to apply creep estimates into network model, which
     itself plays into reconfigs.

  2) do some kind of on-line estimate.

     a) use packet-pair based techniques.  The basic packet pair thing I
        have now doesn't seem to work.  Not sure how easy this would be to
        fix in my infrastructure.

     b) create dummy components that send a lot of useless data to figure
        out where the upper bound is.  In particular, have some creeping
        estimate such that the component wakes up at a regular interval and
        sends enough packets to meet the estimate B/W + increment.  Have a
        size 0 queue so that if a packet can't be sent, we drop back the
        estimate in half.

        The benefits of this approach are:

        1) Accurate.  The measurements going to the GS will be for packets
           actually transmitted successfully.

        2) Moreover, when a packet is dropped, the try/send will be
           non-equal, accurately setting the link bandwidth, but not causing
           a reconfig (because we'll still have enough bandwidth).

        The drawbacks are:

        1) We'll need to schedule this guy at pretty regular intervals, or
           else when it wakes up it will send a ton of data, which will max
           out the queue and make the link appear to have too little b/w.
           Piggybacking on other packets sent would be sensible (but not
           sure how to do this in our setup).

        2) Uses up a lot of bandwidth!  I don't think this will impede flows
           out of MN, because they will have queues that should be able to
           overcome the short-term lack of capacity.  However, not sure how
           this would interact with cross-traffic.  Could claim that a good
           estimator could be dropped in as a replacement.
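
        The core of the dummy-component idea in 2b is just AIMD.  A
        sketch, where probe_state and probe_tick are invented names and
        sent_ok stands in for whether the size-0 queue accepted all the
        probe traffic:

```c
/* Sketch of the dummy-probe idea above: each interval, try to send
 * enough probe data to hit estimate + increment; if a probe can't be
 * queued (size-0 queue), halve the estimate.  Names are assumptions. */

typedef struct {
    double estimate;     /* current b/w estimate, bytes/sec */
    double increment;    /* how far past the estimate we probe */
} probe_state;

/* One wakeup of the probe component.  sent_ok reports whether the
 * probe traffic at (estimate + increment) all got through. */
void probe_tick(probe_state *p, int sent_ok)
{
    if (sent_ok)
        p->estimate += p->increment;   /* additive increase */
    else
        p->estimate /= 2.0;            /* multiplicative decrease on drop */
}
```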

  If time has passed since the last 1) event, then we want to try to bump up
  the bandwidth by some amount, to optimistically assume that capacity has
  returned.
Fixes:
- Use maximum spanning tree instead of AP shortest paths; E lg E should be
  better running time than V^3.
- Has redundant prio comps with multicast.  What we want to do is insert the
  prio before the first send/recv, and then remember the prio as the src
  comp.  This means we need to do things breadth first rather than depth
  first, as we're doing now, I think.
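
A sketch of the first fix above: maximum spanning tree via Kruskal (sort
edges by descending capacity plus union-find), which is O(E lg E) versus
the O(V^3) all-pairs computation.  The edge struct here is illustrative,
not mmsched.h's real type:

```c
/* Maximum spanning tree by Kruskal's algorithm; all types and names
 * here are sketch-only, not copied from mmsched.h. */
#include <stdlib.h>

typedef struct { int a, b; double capacity; } edge;

/* Union-find root lookup with path halving. */
static int uf_find(int *parent, int x)
{
    while (parent[x] != x)
        x = parent[x] = parent[parent[x]];
    return x;
}

/* qsort comparator: widest links first. */
static int cmp_desc(const void *x, const void *y)
{
    double d = ((const edge *)y)->capacity - ((const edge *)x)->capacity;
    return (d > 0) - (d < 0);
}

/* Fills tree[] with the nnodes-1 edges of a maximum spanning tree;
 * returns the number of edges selected. */
int max_spanning_tree(edge *edges, int nedges, int nnodes, edge *tree)
{
    int *parent = malloc(nnodes * sizeof *parent);
    int i, taken = 0;

    for (i = 0; i < nnodes; i++)
        parent[i] = i;
    qsort(edges, nedges, sizeof *edges, cmp_desc);
    for (i = 0; i < nedges && taken < nnodes - 1; i++) {
        int ra = uf_find(parent, edges[i].a);
        int rb = uf_find(parent, edges[i].b);
        if (ra != rb) {               /* joins two components: keep it */
            parent[ra] = rb;
            tree[taken++] = edges[i];
        }
    }
    free(parent);
    return taken;
}
```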

----------------------------------------------------------------------
Current algorithm for find_good_config:

  calculate the score (involving the current locations of comps)
    insert_necessary_conns
      clears all existing intermediate connections
      creates trees between existing user-comp pairs
      inserts send/recvs along these trees
    install_monitors
    calculate the scores
      updates the load on the network links

  for each non-pinned comp
    move it somewhere else
    calculate the score
      (see above)
    if the score improves, keep this configuration

  return the best score
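
A single improvement pass of the loop above could look roughly like this;
score(), the locs/pinned arrays, and toy_score are all stand-ins for the
real mmsched structures, and the sketch skips the connection/monitor
insertion that the real calculate_score does:

```c
/* Sketch of one pass of find_good_config: try relocating each non-pinned
 * comp to every node, keeping any move that improves the score.
 * Everything here is a stand-in for the real mmsched data structures. */

typedef double (*score_fn)(const int *locs, int ncomps);

/* Greedy placement pass: returns the best score found, mutating locs. */
double find_good_config_sketch(int *locs, const int *pinned, int ncomps,
                               int nnodes, score_fn score)
{
    double best = score(locs, ncomps);
    int i, n;

    for (i = 0; i < ncomps; i++) {
        if (pinned[i])
            continue;                 /* user fixed this comp's location */
        int keep = locs[i];
        for (n = 0; n < nnodes; n++) {
            locs[i] = n;              /* tentatively relocate comp i */
            double s = score(locs, ncomps);
            if (s > best) {           /* keep only improving moves */
                best = s;
                keep = n;
            }
        }
        locs[i] = keep;               /* settle on best location seen */
    }
    return best;
}

/* Toy score for experimentation: prefer all comps at node 0. */
static double toy_score(const int *locs, int ncomps)
{
    double s = 0.0;
    for (int i = 0; i < ncomps; i++)
        if (locs[i] == 0)
            s += 1.0;
    return s;
}
```

The "while things keep getting better" outer loop would just rerun this
pass until the returned score stops improving.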

----------------------------------------------------------------------

Overview of mmsched.c.

mmsched_enter --> update_sched --> linear_search_for_sched ----+
                       |                                       |
                       +---------> optimize_sched              |
                       |                \                      |
                        \                \                     v
                         \                -----------> create_comp_list
                           --------------------------> find_good_config
                                                               |
                                                               v
                                                       calculate_score
                                                insert_necessary_connections
                                                               |
                                                               v
                                                       compute_all_paths
                                                            get_path
                                                       create_all_paths
                                                               |
                                                               v
                                                          make_path

mmsched_enter:
When a user specification (e.g. MPEGtraceopt.xml) is received via HTTP, the
function mmsched_enter is ultimately called.  mmsched_enter will simply
parse the XML in the request and then update the global schedule based on
the new information, by calling update_sched.

update_sched:

1) initializes user utility array (each utility to 0.0)

2) saves the current configuration.  This configuration is stored in the
   computations global variable (an array).  The saved configuration is
   compared to the newly generated configuration (due to rerunning the
   scheduler with a new user spec, for example), and if different, then the
   local schedulers are sent the new configurations.

3) linear_search_for_sched will generate a new configuration such that all
   users have the same utility.  Then optimize_sched will try to adjust
   individual user utilities, one at a time, until no more improvements can
   be made.

4) is_new_config compares the old and new configurations, and if they
   differ, will increase the version number.  create_comp_list is called
   to create the
   and optimize_sched; it is the main part of determining a schedule.  By
   calling find_good_config(1) (as opposed to find_good_config(0)), the
   actual schedule is created.  Because there is a new version, this
   schedule will then be sent out to the various local schedulers by calling
   mmsched_send_assignments.

linear_search_for_sched:
Does binary search on utility.  Calls create_comp_list and
find_good_config(0) for each possible utility value, which will generate a
schedule.  find_good_config returns the overall score (<=1); if 0 <= score
<= 1, then the schedule could be met, but if the score < 0 then no possible
configuration was found.  If a reasonable config is found, then it saves it.
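
The shape of that search, as a sketch; search_utility and the feasible_fn
callback are invented names standing in for the create_comp_list +
find_good_config(0) call, where a negative score means no configuration:

```c
/* Sketch of the binary search in linear_search_for_sched: find the
 * highest uniform utility whose schedule is feasible.  All names here
 * are assumptions, not the real mmsched interface. */

typedef double (*feasible_fn)(double utility);

/* Returns the highest utility in [lo, hi] with a nonnegative score,
 * to within eps; returns -1 if even lo is infeasible. */
double search_utility(double lo, double hi, double eps, feasible_fn score)
{
    if (score(lo) < 0)
        return -1.0;                  /* no feasible configuration at all */
    while (hi - lo > eps) {
        double mid = (lo + hi) / 2.0;
        if (score(mid) >= 0)
            lo = mid;                 /* feasible: push utility higher */
        else
            hi = mid;                 /* infeasible: back off */
    }
    return lo;
}

/* Toy feasibility for experimentation: feasible up to utility 0.6. */
static double toy_feasible(double u) { return u <= 0.6 ? u : -1.0; }
```

Note this assumes feasibility is monotone in utility (feasible below some
threshold, infeasible above), which is what makes binary search valid.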

create_comp_list: 
Initializes the computations global array with the relevant user
computations under consideration.  So, for each user, its computations are
added to the global array, and the optional computations are added to the
opt_computations array.  Computations that have the same name are shared
between users.

** find_good_config:
while things keep getting better:
  calculates score
    this assigns computations to particular nodes in the network
  for each user computation (i<nspec_comps)
    try relocating the computation to a different node, if its location
      is not fixed.
    any time the score improves, note the change to the computations

  ** might try to try moving *any* send/recv computation pair, in addition
     to moving user computations.  Not sure how to do this exactly.

  NOTE: computation.assigned_loc is where the global scheduler has
          put a computation
        computation.location is the location specified by the user.
          if location is nonzero, then assigned_loc == location

** calculate_score:
calculates the total score for the computations, based on their current
locations; this has three parts ...  Before generating the three parts, we
must insert the send/recv nodes and the monitors.  This happens in
insert_necessary_connections, and install_all_monitors.

insert_necessary_connections:

1) clear the inputs from the existing computations (in_links and ninput
   computation fields), so as to insert new connections.

2) calls compute_all_paths to determine the all-pairs shortest paths between
   nodes in the network, given the current configuration.

3) for each user computation C, it finds the shortest path between each of
   the computations who output to that computation (determined by C's inputs
   array), by calling get_path.  The results are stored in a path tree, so
   that paths can be merged.

4) finally, creates the paths based on the trees that were generated by
   calling create_paths.  This will in turn call make_path, which will
   insert the send/recv computations along the given paths.
  


======================================================================

Network configurations are at the top of mmsched.c.  Struct definitions are
in mmsched.h:

  struct node
    name: as referred to in specs (e.g. as in the <location>...</location>
      field)
    capacity
    host: where located
    port: HTTP port listening on (for reconfigs)

  struct network
    name: link name
    capacity: current b/w in bytes per second
    link_capacity: maximum b/w in bytes per second
    latency: minimum latency in seconds (ignore for now)
    nodes: string containing node names that are connected by the link
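
The field descriptions above, rendered as a hedged C sketch; the field
types and MAXNAME are guesses, not copied from mmsched.h:

```c
/* Sketch of the struct layouts described above; types and sizes are
 * assumptions, not the real mmsched.h definitions. */

#define MAXNAME 64

struct node {
    char   name[MAXNAME];   /* as referred to in specs (<location> field) */
    double capacity;
    char   host[MAXNAME];   /* where the node is located */
    int    port;            /* HTTP port listened on (for reconfigs) */
};

struct network {
    char   name[MAXNAME];   /* link name */
    double capacity;        /* current b/w in bytes per second */
    double link_capacity;   /* maximum b/w in bytes per second */
    double latency;         /* minimum latency in seconds (ignore for now) */
    char   nodes[MAXNAME];  /* node names connected by the link */
};
```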

