Dealing with Novelty in Open Worlds: DNOW
Jan 4th, held in conjunction with WACV 2022
Computer vision algorithms are often developed under a closed-world paradigm, e.g., recognizing objects from a fixed set of categories. The real world, however, is open and changes constantly. When it does, these algorithms are unable to detect the change and continue to perform their tasks, producing incorrect and sometimes misleading predictions. This workshop aims to facilitate research on methods that operate well in the open world while maintaining performance in the closed world. Many real-world applications considered at WACV must deal with changing worlds where a variety of novelty is introduced (e.g., new classes of objects).
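To make the closed-world failure mode concrete, here is a minimal sketch of one common open-world mitigation: thresholding the classifier's confidence so it can abstain on inputs it was never trained for. The function names, class list, and threshold value are illustrative assumptions, not part of any specific system discussed here.

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier logits to a probability distribution."""
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict_open_world(logits, classes, threshold=0.9):
    """Return a class label, or 'unknown' when confidence is low.

    A closed-world classifier always returns the argmax class, even on
    novel inputs; adding a confidence threshold is one simple way to
    flag possible novelty instead of making a misleading prediction.
    """
    probs = softmax(logits)
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return "unknown"
    return classes[top]

classes = ["car", "truck", "pedestrian"]
# Confident logits: behaves like an ordinary closed-world classifier.
print(predict_open_world(np.array([8.0, 1.0, 0.5]), classes))   # car
# Ambiguous logits (e.g., a novel object): the model abstains.
print(predict_open_world(np.array([2.0, 1.8, 1.9]), classes))   # unknown
```

Thresholding the maximum softmax probability is only a baseline; much of the research this workshop targets concerns stronger mechanisms for detecting and adapting to novelty.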
Consider the following motivating examples:
- An autonomous vehicle may not recognize an overturned truck.
- A visual recognition system used in entertainment might not fairly recognize people of different races.
- A fashion recommendation system might not produce satisfactory recommendations for "new-arrival" clothes.
- An e-commerce system might not properly screen new products that are posted in reviews or uploaded for sale.
Addressing novelty in open worlds has broad societal impact. For example,
- It improves the safety and robustness of vision systems, e.g., raising an alert upon seeing an unknown object in autonomous driving.
- It supports interdisciplinary research, e.g., discovering novel species of biological organisms.
- It helps mitigate bias and promote fairness in AI and machine learning applications, e.g., detecting and carefully handling underrepresented subpopulations rather than blindly making predictions about them.
We expect that our workshop will:
- offer new insights to the audience with respect to new challenges and opportunities when studying computer vision systems in the open world
- give voice to the need for more attention to open-world paradigms and continue the discussion, begun in previous workshops and seminars, on the formalization of metrics and datasets in this space
- provide a platform to exchange ideas among people with different backgrounds who come from different fields
- bridge the gap between academic research experiments and the requirements of real-world applications
- explore mechanisms to measure competence at recognizing and dealing with novelty. This is especially important since the distribution of the test data (and, presumably, the evaluation data) will, by definition, differ from that of the training data, which introduces challenges both for evaluation and for achieving competence.