While conventional data are represented as discrete vectors, Implicit Neural Representations (INRs) utilize neural networks to represent data points as continuous functions. By incorporating a shared network that maps latent vectors to individual functions, one can model the distribution of functional data, which has proven effective in many applications, such as learning 3D shapes, surface reflectance, and operators. However, the infinite-dimensional nature of these representations makes them prone to overfitting, necessitating sufficient regularization. Naïve regularization methods -- those commonly used with discrete vector representations -- may enforce smoothness to increase robustness but result in a loss of data fidelity due to improper handling of function coordinates. To overcome these challenges, we start by interpreting the mapping from latent variables to INRs as a parametrization of a Riemannian manifold. We then recognize that preserving geometric quantities -- such as distances and angles -- between the latent space and the data manifold is crucial. As a result, we obtain a manifold with minimal intrinsic curvature, leading to robust representations while maintaining high-quality data fitting. Our experiments on various data modalities demonstrate that our method effectively discovers a well-structured latent space, leading to robust data representations even for challenging datasets, such as those that are small or noisy.
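The core idea -- penalizing the decoder so that the pullback metric of the latent-to-function map stays close to a scaled identity -- can be sketched numerically. The following is a minimal illustration, not the paper's implementation: `g` stands for any decoder from latent vectors to sampled function values, the Jacobian is estimated by finite differences, and the penalty measures how far J^T J deviates from a scaled isometry.

```python
import numpy as np

def jacobian_fd(g, z, eps=1e-5):
    """Finite-difference Jacobian of a decoder g at latent point z."""
    m = z.size
    g0 = np.asarray(g(z))
    J = np.zeros((g0.size, m))
    for i in range(m):
        dz = np.zeros(m)
        dz[i] = eps
        J[:, i] = (np.asarray(g(z + dz)) - np.asarray(g(z - dz))) / (2 * eps)
    return J

def isometry_penalty(g, z):
    """Deviation of the pullback metric J^T J from the best-fitting
    scaled identity c*I; zero iff g is a scaled isometry at z."""
    J = jacobian_fd(g, z)
    JtJ = J.T @ J
    m = z.size
    c = np.trace(JtJ) / m  # least-squares optimal global scale
    return float(np.linalg.norm(JtJ / c - np.eye(m)) ** 2)
```

For a decoder with orthogonal, equal-norm Jacobian columns the penalty vanishes; anisotropic stretching of the latent directions makes it positive, so adding it to the reconstruction loss discourages curved, distorted latent geometry.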
Toy example - We show interpolation between two circles (r=0.1 and r=0.5) in the latent space. The auto-decoder (AD) fails to preserve the shape of the circle, while our method (IsoAD) interpolates between the circles at constant velocity.
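The constant-velocity behavior can be checked numerically: under a (scaled) isometric decoder, equal-length steps in latent space decode to equal-length steps in function space. A small sketch, assuming a generic decoder `g` (hypothetical, not the released model):

```python
import numpy as np

def path_speeds(g, z0, z1, steps=10):
    """Decode a linear latent interpolation from z0 to z1 and
    return the arc length of each decoded step."""
    ts = np.linspace(0.0, 1.0, steps + 1)
    pts = np.stack([np.asarray(g((1 - t) * z0 + t * z1)) for t in ts])
    return np.linalg.norm(np.diff(pts, axis=0), axis=1)
```

Near-equal speeds indicate the constant-velocity interpolation shown for IsoAD; large variance across steps flags the kind of distortion seen in the AD baseline.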
2D Surface Reconstruction (N=300 / N=1500) - We show surface reconstructions of MNIST digits extracted from the zero-level set; N denotes the number of digits in the training dataset.
3D Surface Reconstruction (N=271 (5%) / N=542 (10%)) - We show surface reconstructions of the ShapeNet chair dataset from partial observations. Our method (IsoDeepSDF) outperforms the baselines in reconstruction quality.
BRDF Reconstruction (N=20 / N=80) - We show BRDF reconstructions of the MERL dataset from BRDF samples. Our method (IsoAD) outperforms the baselines in both data fidelity and generalization.
Neural operator learning (Reaction-diffusion / Darcy flow) - We demonstrate that incorporating isometric regularization significantly improves operator learning. IsoDONet produces the best qualitative results, predicting the output function with less distortion and without overfitting to the noise.