arXiv:2503.21166

Unveiling the Potential of Superexpressive Networks in Implicit Neural Representations

Published on Mar 27, 2025

Abstract

Superexpressive networks, which add a "height" dimension alongside the usual width and depth, outperform implicit neural representations built on specialized activation functions in computer vision and scientific machine learning tasks.

AI-generated summary

In this study, we examine the potential of one of the "superexpressive" networks in the context of learning neural functions for representing complex signals and performing downstream machine learning tasks. Our focus is on evaluating their performance on computer vision and scientific machine learning tasks, including signal representation/inverse problems and solutions of partial differential equations. Through an empirical investigation across various benchmark tasks, we demonstrate that superexpressive networks, as proposed by [Zhang et al., NeurIPS 2022], which employ a specialized network structure characterized by an additional dimension, "height", beyond the usual width and depth, can surpass recent implicit neural representations that use highly specialized nonlinear activation functions.
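
To make the "height" idea concrete: it can be pictured as nesting, where each activation of the outer network is itself a small trainable subnetwork, so capacity grows along a third axis beyond width and depth. The following is a minimal PyTorch sketch of that reading of the abstract; the class names (NestedActivation, NestNetSketch) and all hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class NestedActivation(nn.Module):
    # Hypothetical elementwise activation realized by a tiny inner MLP,
    # supplying the "height" (nesting) dimension.
    def __init__(self, hidden=4):
        super().__init__()
        self.inner = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # Apply the shared inner MLP to every scalar entry, preserving shape.
        return self.inner(x.reshape(-1, 1)).reshape(x.shape)

class NestNetSketch(nn.Module):
    # Hypothetical height-2 network: a standard width/depth MLP whose
    # nonlinearities are nested subnetworks rather than fixed functions.
    def __init__(self, in_dim=2, width=64, depth=3, out_dim=1):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), NestedActivation()]
            d = width
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

# Toy usage: fit an implicit neural representation f(x, y) -> signal value.
model = NestNetSketch()
coords = torch.rand(1024, 2)                            # sampled 2-D coordinates
target = torch.sin(8.0 * coords).sum(-1, keepdim=True)  # synthetic signal
loss = nn.functional.mse_loss(model(coords), target)
loss.backward()

Training then proceeds as for any coordinate network (e.g., with Adam); the only structural difference from a plain MLP is the extra nesting level in the activations.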
