Isometric Regularization for
Manifolds of Functional Data

ICLR 2025
Seoul National University1
Yonsei University2
Chung-Ang University3
Korea Institute for Advanced Study4
* Corresponding authors

We present isometric regularization for manifolds of functional data, leading to robust data representation learning.

Abstract

While conventional data are represented as discrete vectors, Implicit Neural Representations (INRs) utilize neural networks to represent data points as continuous functions. By incorporating a shared network that maps latent vectors to individual functions, one can model the distribution of functional data, which has proven effective in many applications, such as learning 3D shapes, surface reflectance, and operators. However, the infinite-dimensional nature of these representations makes them prone to overfitting, necessitating sufficient regularization. Naïve regularization methods -- those commonly used with discrete vector representations -- may enforce smoothness to increase robustness but result in a loss of data fidelity due to improper handling of function coordinates. To overcome these challenges, we start by interpreting the mapping from latent variables to INRs as a parametrization of a Riemannian manifold. We then recognize that preserving geometric quantities -- such as distances and angles -- between the latent space and the data manifold is crucial. As a result, we obtain a manifold with minimal intrinsic curvature, leading to robust representations while maintaining high-quality data fitting. Our experiments on various data modalities demonstrate that our method effectively discovers a well-structured latent space, leading to robust data representations even for challenging datasets, such as those that are small or noisy.
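The core idea above can be made concrete with a small sketch (our own illustration, not the authors' released code): the decoder g maps a latent z to function values sampled at fixed coordinates, its Jacobian J = dg/dz induces the pullback metric G = JᵀJ on the latent space, and isometric regularization pushes G toward a scaled identity cI so that latent distances and angles are preserved up to a global scale. The names `jacobian` and `isometry_penalty` are ours.

```python
import numpy as np

def jacobian(g, z, h=1e-5):
    """Finite-difference Jacobian of g at z (column i is dg/dz_i)."""
    d = z.shape[0]
    cols = [(g(z + h * np.eye(d)[i]) - g(z - h * np.eye(d)[i])) / (2 * h)
            for i in range(d)]
    return np.stack(cols, axis=1)

def isometry_penalty(g, z):
    """Deviation of the pullback metric G = J^T J from a scaled identity."""
    J = jacobian(g, z)
    G = J.T @ J                       # pullback metric on the latent space
    c = np.trace(G) / G.shape[0]      # best-fitting global scale
    return np.sum((G / c - np.eye(G.shape[0])) ** 2)

# Toy decoder: a linear map with orthogonal, equal-norm columns is a
# perfect scaled isometry, so its penalty is (numerically) zero.
A = np.array([[2.0, 0.0], [0.0, 2.0], [0.0, 0.0]])
print(isometry_penalty(lambda zz: A @ zz, np.zeros(2)))  # ~0
```

In practice one would estimate this penalty stochastically (e.g. with random tangent vectors and Jacobian-vector products) rather than forming J explicitly, since the decoder output is high-dimensional.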

Results

- Neural SDFs -

[Figure: GT | AD | IsoAD (Ours)]

Toy example - We show interpolation results between two circles (r=0.1 and r=0.5) in the latent space. The auto-decoder (AD) fails to preserve the shape of the circle, while our method (IsoAD) interpolates between the circles at constant velocity.
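To see what "constant velocity" means here, consider a toy setup of our own (not the paper's code): the SDF of a circle of radius r is f_r(x) = ||x|| - r, and under a scale-isometric latent map a straight latent line z(t) = (1-t)z0 + t·z1 should traverse the circle family at constant speed, i.e. the decoded radius changes linearly in t.

```python
import numpy as np

def circle_sdf(x, r):
    """Signed distance to a circle of radius r centered at the origin."""
    return np.linalg.norm(x, axis=-1) - r

# Constant-velocity path between the two circles from the toy example.
ts = np.linspace(0.0, 1.0, 5)
radii = (1 - ts) * 0.1 + ts * 0.5   # radius decoded along the latent line
steps = np.diff(radii)
print(radii)   # equally spaced radii from 0.1 to 0.5
print(steps)   # equal steps -> constant interpolation speed
```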

[Figure, N=300: GT | DeepSDF | LipDeepSDF | IsoDeepSDF (Ours)]

[Figure, N=1500: GT | DeepSDF | LipDeepSDF | IsoDeepSDF (Ours)]

2D Surface Reconstruction - We show surface reconstruction results for MNIST digits extracted from the zero-level set. N denotes the number of digits in the training dataset.

[Figure, N=271 (5%): Input | GT | DeepSDF | LipDeepSDF | IsoDeepSDF (Ours)]

[Figure, N=542 (10%): Input | GT | DeepSDF | LipDeepSDF | IsoDeepSDF (Ours)]

3D Surface Reconstruction - We show surface reconstruction results on the ShapeNet chair dataset from partial observations. Our method (IsoDeepSDF) outperforms the baselines in reconstruction quality.

- Neural BRDFs -

[Figure, N=20: GT | AD | LipAD | IsoAD (Ours)]

[Figure, N=80: GT | AD | LipAD | IsoAD (Ours)]

BRDF Reconstruction - We show BRDF reconstruction results on the MERL dataset from BRDF samples. Our method (IsoAD) outperforms the baselines in both data fidelity and generalization.

- Neural Operators -

Reaction-diffusion

[Figure, noise levels σ = 0.1, 0.2, 0.5: Input | GT | DONet | LipDONet | IsoDONet (Ours)]

Darcy flow

[Figure, noise levels σ = 0.1, 0.2, 0.5: Input | GT | DONet | LipDONet | IsoDONet (Ours)]

Neural operator learning - We demonstrate that incorporating isometric regularization significantly improves operator learning. IsoDONet produces the best qualitative results, predicting the output function with less distortion and without overfitting to the noise.

BibTeX