Neural networks encode the complex structure of data manifolds in high-dimensional spaces through latent representations. Ideally, the distribution of data points in the latent space should depend solely on the task, the data, the loss, and architecture-specific constraints. However, factors such as random weight initialization, training hyperparameters, and other sources of randomness during training can lead to incoherent latent spaces that hinder any form of reuse.
Nevertheless, a consistent phenomenon emerges when the data semantics remain unchanged: the angles between encodings within distinct latent spaces are remarkably similar. In this talk, we will delve into two empirical strategies that harness this phenomenon, enabling latent communication across diverse architectures and data modalities:

i. projecting each latent space independently onto a shared relative space, in which every sample is described by its angles (similarities) to a set of anchor samples, yielding a representation that is invariant to angle-preserving transformations of the space;

ii. directly estimating the angle-preserving transformation that translates one latent space into another, from a small set of corresponding anchor samples.
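To make the first strategy concrete, here is a minimal sketch (illustrative only; the helper name `relative_projection` and all specifics are assumptions, not code from the talk). Assuming two encoders whose latent spaces differ by an angle-preserving transformation, modeled here as a random rotation, representing each point by its cosine similarities to a shared set of anchors yields identical relative representations:

```python
import numpy as np

def relative_projection(encodings, anchors):
    # Describe each point by its cosine similarity to every anchor,
    # discarding the absolute coordinates of the latent space.
    enc = encodings / np.linalg.norm(encodings, axis=1, keepdims=True)
    anc = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return enc @ anc.T  # shape: (n_points, n_anchors)

rng = np.random.default_rng(0)
latent_a = rng.normal(size=(100, 16))                  # encodings from model A
rotation, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal map
latent_b = latent_a @ rotation                         # same data, "retrained" model B

anchor_ids = rng.choice(100, size=10, replace=False)
rel_a = relative_projection(latent_a, latent_a[anchor_ids])
rel_b = relative_projection(latent_b, latent_b[anchor_ids])

print(np.allclose(rel_a, rel_b))  # True: angles survive the transformation
```

In practice the anchors would be a small, fixed subset of the data, and real latent spaces match only approximately rather than exactly.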
In both cases, we enable efficient communication between latent spaces, bridging gaps between distinct domains, models, and modalities, and supporting zero-shot model stitching, model reuse, and latent-space evaluation. This holds for both generation and classification tasks, showcasing the versatility and broad applicability of these strategies.
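The second strategy can be sketched in the same spirit (again illustrative; `estimate_orthogonal_map` is a hypothetical helper, and we assume a few anchor pairs with matching semantics are available in both spaces). The classical orthogonal Procrustes solution recovers an angle-preserving map from one space to the other, after which components of the target model, e.g. a frozen classification head, can consume the translated encodings for zero-shot stitching:

```python
import numpy as np

def estimate_orthogonal_map(source_anchors, target_anchors):
    # Orthogonal Procrustes: the angle-preserving T minimizing
    # ||source_anchors @ T - target_anchors|| is U @ Vt, where
    # U, S, Vt is the SVD of source_anchors.T @ target_anchors.
    u, _, vt = np.linalg.svd(source_anchors.T @ target_anchors)
    return u @ vt

rng = np.random.default_rng(1)
latent_a = rng.normal(size=(100, 16))                  # encodings from model A
rotation, _ = np.linalg.qr(rng.normal(size=(16, 16)))
latent_b = latent_a @ rotation                         # same data in model B's space

# A handful of parallel anchors (at least latent-dim many for a well-posed estimate).
anchor_ids = rng.choice(100, size=32, replace=False)
T = estimate_orthogonal_map(latent_a[anchor_ids], latent_b[anchor_ids])

translated = latent_a @ T                 # A's latents expressed in B's coordinates
print(np.allclose(translated, latent_b))  # True: usable by B's frozen head
```

With real models the two spaces are only approximately related, so the translation is a least-squares fit rather than an exact match, but it can suffice to stitch an encoder from one model to the decoder or head of another without retraining.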