Due to the theoretical content throughout, I found this paper to be a slow read. However, I also found it to be well written. The authors did a nice job of taking a detailed look at the issues involved with on-line software updates. Some of the results provided a good theoretical understanding of problems that would have otherwise been difficult to solve. For example, the first major conclusion reached is that validity is undecidable. While an interesting conclusion, at first this seemed like a huge stumbling block for the software update process. However, the authors were able to simplify that problem enough to continue moving toward a solution. In the end, as is often the case with theoretical papers, the authors had to make several such simplifications to reach their conclusions. Normally I find this irritating because it makes the work seem less applicable to real-world problems. However, in the case of this paper, I found that the restrictions were flexible enough to allow for better applicability. After reading the paper I'm left with one major question: why is this something that we want to do? I suppose that question may seem naive on the surface, but it seems that software tends to fall under two categories of use: 1) software is used in an environment that can afford downtime now and then, 2) software downtime is completely unacceptable. In (1), the update problem is simply not a problem because the program can just be restarted. And in (2), it seems that code would be run in a redundant environment such that updates could be made incrementally throughout the environment without causing any perceivable downtime to the end users. I imagine that there are cases where this logic fails, and perhaps those would be good candidates for the techniques discussed in this paper. But I also suspect that such cases are less frequent than many people may think.

The paper describes a formal framework for online software version change. Online software version change means that software of an older version is changed to a newer version without shutting down the programs running on the older version. This is a very important concept and has far-reaching consequences for real-time applications. The paper does not propose its own online software version change mechanism but gives formal ways to analyze existing online software version change mechanisms for validity. The framework deals with three concepts: programs, processes, and process states. On the basis of these three concepts the paper defines the validity of an online version change: a change in a process from an old program to a new program is valid only if the process is able to reach a reachable state of the new program in finite time. The paper also establishes the undecidability of validity: it is impossible in general to decide whether the process will reach a reachable state of the new program to which we are switching. The paper then describes sufficient conditions for validity and proves them for procedural and object-oriented languages. It makes a very good point that when practically implementing a version change, our main concern is the time taken to actually install the change rather than the time taken to analyze the control points at which changes can be made. The paper has not dealt with real-time applications and has left them for future research.
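The validity criterion described above can be sketched with a toy example (the counter programs and function names here are illustrative assumptions, not the paper's formalism): a transition is valid only if the state handed to the new program is one the new program could have reached on its own.

```python
# Toy sketch of the validity criterion: a change from P to P' is valid
# only if the state passed to P' is reachable by P' from its own start.
# The counter "programs" here are hypothetical, not the paper's formalism.

def reachable_states(step, start, limit=100):
    """Enumerate states P' can reach on its own. The enumeration is
    bounded, since reachability is undecidable in general."""
    states, s = {start}, start
    for _ in range(limit):
        s = step(s)
        states.add(s)
    return states

old_step = lambda s: s + 1        # P  counts by 1
new_step = lambda s: s + 2        # P' counts by 2, starting at 0

def valid_transition(state_of_old):
    # The state carried over from P must lie in P''s own reachable set.
    return state_of_old in reachable_states(new_step, 0)

print(valid_transition(4))   # True: P' alone reaches 0, 2, 4, ...
print(valid_transition(5))   # False: P' alone never reaches an odd state
```

Note that the bounded enumeration is exactly the kind of approximation the undecidability result forces: in general no procedure can decide reachability exactly.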

This paper on on-line software version change emphasizes the need for online software update systems. I found this paper hard to understand. The framework is based on the concept of process states. According to the paper, a running program can be divided into two components: a process and a state. The concept of state is defined pretty well. In the beginning of the paper the authors show that the problem of validity is undecidable. The paper also explains the timing at which an update can be applied. The major issue involved in online software version change is validity, which can be viewed as a form of testing of the software. The authors give a lot of proofs to support their claims. Due to the undecidability of validity, it is not possible to check validity fully automatically. Validation of the online change is an important aspect of any version change. The paper doesn't say much about such validation and leaves room for research in this area.

This paper addressed the theoretical aspects of on-line software updating. By modeling the process of dynamic program transition and its validity, the authors proved that the validity of an on-line change cannot be decided. This conclusion is useful and is cited by Hicks' paper to motivate an alternative way to ensure the robustness of dynamic updating. The authors built up several models, from a simple one to a general one, to prove their point and to find general conditions for validity, which is the main purpose of the paper. Although this paper uses formal language to model the software process, I think it is readable. The theorems and proofs are also not that hard to understand.

This paper introduces a small framework for deciding conditions under which an online version change is valid. The base definition is (paraphrasing): 'a transition is valid if in a finite amount of time the updated version enters a state that would be reachable if the new version was separately compiled, loaded, and run with no updating'. The authors move to a functional language, and then focus on the function as the unit of replacement. Their model for transitions is then based on determining which functions must be off the stack for a transition to be valid. The authors make the observation that it is sufficient for a valid upgrade to consist only of 'functional enhancements', where the pre- and postconditions of said functions are unchanged, or for all changed functions that are not functional enhancements to be called only from functions that are. Somewhere in the beginning of their paper they comment on the necessity of speed in an upgrade. At the end, though, their implementation, for which they understandably give no performance results, is to load an entire separate copy of the program being updated and map the state of the old program to the new. This seems like a lot of overhead for what might be a very small function replacement. One of the key points of this paper, which I think was drastically overemphasized, was the "Undecidability of Validity". After several pages were spent proving it, undecidability was repeatedly invoked as a justification for lack of automation. Paraphrasing: 'it's undecidable so we couldn't hope to automate everything, let the programmer do it'. Are humans somehow exempt from halting problem results? My favorite quote: "The user can, based on his knowledge of the semantics of the change, determine which of the theorems applies and then use the result to determine F" (F being the set of functions which must be off the stack for a transition to be valid). Yes, the programmer will *study the theorems* to see which apply....
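The off-stack restriction described in this review can be sketched as a runtime check (all function names here are hypothetical; the paper gives no such API, and a real updating system would perform this check inside the runtime rather than in user code):

```python
# Sketch of the off-stack restriction: an update may be applied only
# when none of the functions in F (the changed functions plus their
# non-enhancement callers) is currently on the call stack.
# All names here are illustrative, not taken from the paper.

import inspect

F = {"compute_tax", "billing_loop"}   # functions that must be inactive

def safe_to_update():
    """Return True if no frame on the current stack belongs to F."""
    active = {frame.function for frame in inspect.stack()}
    return not (active & F)

def compute_tax():
    # While this frame is live, an update must be deferred.
    return safe_to_update()

print(safe_to_update())   # True: nothing from F is on the stack
print(compute_tax())      # False: compute_tax itself is on the stack
```

This also makes the review's complaint concrete: computing F is left to the programmer, and the system only checks the (easy) stack condition, not the (undecidable) question of whether F was chosen correctly.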

In this paper an effort is made to create a notational framework that can be used to describe and analyze the dynamic patching of running code. At first, the problem is presented under a very simple hypothesis: a language without functions. The undecidability of the problem is proved, and a definition of the validity of a transition is given. Next, the authors discuss a language with functions and place restrictions that must hold for validity to be ensured. The handling of variables and changed functions is discussed in detail, followed by a brief presentation of what could be done in object-oriented languages and distributed programs. Overall, I think the main contribution of this paper is the notation it introduces for encoding the constraints of dynamic upgrading in a concise mathematical way. Its models are simple, and the more realistic (object-oriented and distributed) cases are only skimmed.

Gupta et al.'s paper attempts to capture the task of altering a running program in a formal manner. They begin by defining points in the program's execution where it is safe to transition to a different program (probably a modified version of the original). There is a transition period, where the new program modifies variables after which the new program behaves as if it had been started normally. Unfortunately, they point out that it is undecidable whether or not a program will do this. Therefore, they find a subset of the control points, for which the validity of the change will be decidable. This set of points is chosen by control-flow analysis, so they suggest a further subset to allow large programs with functions to be modified. To do this, they declare the changed functions and some of the functions that call changed functions to be off-limits. The switch can only happen when those functions are not running. This is conservative, but it means that inside the functions where the switch can happen, it can happen at any control point, and it will be a valid change. Although this paper is written in formal terms, the authors also made it directly relevant to applied work in this area. They've spelled out what is undecidable, and they've suggested ways to get conservative estimates for correct behavior. The only thing I disliked about this paper was the downbeat ending. "No realistic system can hope to automatically guarantee validity?" That is sad news.

This paper discusses the theoretical requirements for performing online software updating. It introduces the notion of a state mapping from a state in the current program (P) to a state in the new program (P'), and notes that the mapped state does not have to be a reachable state of P' (that is, the new state may not be reachable from any initial state of P'). The paper does not treat the case of replacing a procedure while the PC is inside that procedure. It also proves several questions undecidable, including the validity of a state change. While these are good assumptions from a theoretical standpoint, in practice they would severely reduce the usefulness of dynamic updating, meaning that additional conditions are required to make this framework useful.
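A state mapping of the kind this review describes might look like the following sketch (the record layouts and field names are hypothetical; the paper defines mappings abstractly, not as code):

```python
# Sketch of a state mapping from P to P' during an update.
# Hypothetical example: P stored a full name in one field, while
# P' splits it into two. The mapping translates P's state into a
# state P' can resume from.

def map_state(old_state):
    """Translate an old-program (P) state into a new-program (P') state."""
    first, _, last = old_state["name"].partition(" ")
    return {
        "first": first,
        "last": last,
        "pc": old_state["pc"],   # resume at an equivalent control point
    }

old = {"name": "Ada Lovelace", "pc": 42}
print(map_state(old))   # {'first': 'Ada', 'last': 'Lovelace', 'pc': 42}
```

The review's caveat applies directly here: nothing forces the mapped state to be one P' could have reached from its own initial state, which is why validity needs a separate argument.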

This paper presents an abstract system for dynamic software updating. Since arbitrary updates to a program mean arbitrary output, some conditions are necessary to determine what forms of updates are valid. Undecidability makes this task much more complicated, because many sets of conditions are not decidable. There are trade-offs involved between the power of updates, when updates can occur, and decidability of validity of updates. As a result, there are many design decisions to be made at even the most abstract level, where this paper works. In general, I found the paper confusing. It did not seem to provide any decidable guarantees for validity except in trivial cases (though I may have missed it). Also, it is not clear that the notion of validity based on reachability given in this paper is a very useful one, because it gives neither necessary nor sufficient conditions for an update to do "what the programmer wants". If validity encompassed preservation of type safety, then it at least would be a necessary condition. Sufficient conditions are of course undecidable, just as in the case of a static program, so it seems that preserving the guarantees that we can provide for static programs is a reasonable goal.

The authors explain their stipulations well, with plenty of examples, and that made this paper much easier to read than most theory papers. At first, I was surprised that computer scientists had continued hacking on the online software change problem without proper formalization for such an extended period of time; but then I realized that this problem does not naturally lend itself to formalization. I see this as a greater issue moving forward: as we tackle more and more complex areas, formalization will become increasingly difficult. When functional enhancements were first introduced by the authors, I came up with many useful scenarios that would not work with Theorem 3, and I was relieved to see that the model was relaxed, yet still sound, with the introduction of Theorem 4. I think the methodology started in this paper will play a big role in bringing high availability to the masses. So far, the most common solution has been to throw a lot of money at the problem. This paper also raised some questions for me. For example, what if the mapping process is more time-consuming than rerunning Prog' with the same set of inputs Prog saw in its lifetime?

Generated: Thu Mar 6 15:39:24 EST 2003 by mwh