Abstract:
The recent success in mapping between two domains in an unsupervised way, without any prior knowledge other than the network hyperparameters, is nothing short of extraordinary and has far-reaching consequences. As far as we know, nothing in the existing machine learning or cognitive science literature suggests that this would be possible.
We conjecture that functions of minimal complexity play a pivotal role in this success. If our hypothesis is correct, then simply by training networks that are not overly complex, the "correct" target mapping stands out from all alternative mappings. Our analysis leads directly to a new unsupervised cross-domain mapping algorithm that avoids the inherent ambiguity of such mappings while retaining the expressiveness of deep neural networks.
Taking this approach a step further, we define a general Occam's razor property and employ it to derive generalization bounds for unsupervised learning. The bounds hold both in expectation, with applications to hyperparameter selection, and per sample, thus supporting dynamic confidence-based runtime behavior. The latter is crucial for real-world computer vision systems and has not previously been shown for functions learned in an unsupervised way.
I will also present new results on the AI task of identifying analogies across domains without supervision. Recent advances in cross-domain image mapping have concentrated on translating images from one domain to another. Although the progress is impressive, the visual fidelity often does not suffice for identifying the matching sample in the other domain. Our work tackles this very task of finding exact analogies between datasets, e.g., for every image in domain A, finding the analogous image in domain B.
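To make the analogy-identification task concrete, the following is a minimal illustrative sketch, not the authors' algorithm: assuming an unsupervised cross-domain mapping `map_a_to_b` has already been trained, each sample from domain A is matched to its nearest neighbor among the domain-B samples in the mapped space. The function name, the linear toy mapping, and the Euclidean distance are placeholders chosen for illustration only.

```python
import numpy as np

def identify_analogies(samples_a, samples_b, map_a_to_b):
    """For every sample in domain A, return the index of its best-matching
    sample in domain B, given a previously learned A->B mapping.

    samples_a: array of shape (n_a, d_a) -- domain-A samples
    samples_b: array of shape (n_b, d_b) -- domain-B samples
    map_a_to_b: callable mapping a batch of A-samples into the B space
    """
    mapped_a = map_a_to_b(samples_a)  # shape (n_a, d_b)
    # Pairwise Euclidean distances between mapped A-samples and B-samples.
    dists = np.linalg.norm(mapped_a[:, None, :] - samples_b[None, :, :], axis=-1)
    # Proposed analogous B index for each A sample.
    return dists.argmin(axis=1)

# Toy usage with a hypothetical linear mapping standing in for a trained network.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
W = rng.normal(size=(3, 4))
B = rng.normal(size=(8, 4))
matches = identify_analogies(A, B, lambda x: x @ W)
print(matches)  # index into B of the proposed analogy for each A sample
```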