Neural Networks Trained to Solve Differential Equations Learn General Representations

Abstract

In this work, we introduce a technique based on singular vector canonical correlation analysis (SVCCA) for measuring the generality of neural network layers across a continuously parametrized set of tasks. We illustrate this method by studying generality in neural networks trained to solve parametrized boundary value problems based on the Poisson partial differential equation. Specifically, each neural network in our experiment is trained to solve for the electric potential produced by a localized charge distribution on a square domain with grounded edges. We find that the first hidden layer of these networks is general, in that the same representation is consistently learned across different random initializations and across different problem parameters. Conversely, deeper layers are successively more specific. We validate our method against an existing technique that measures layer generality using transfer learning experiments. We find excellent agreement between the two methods, and note that our method is much faster, particularly for continuously parametrized problems. Finally, we visualize the general representations that we discovered in the first layers of these networks, and interpret them as generalized coordinates over the input domain.
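To make the comparison concrete, below is a minimal NumPy sketch of an SVCCA similarity score, assuming activations have already been collected by evaluating two networks on the same set of input points (e.g. a grid over the square domain). The function name `svcca`, the 99% variance threshold, and the toy demo are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def svcca(acts1, acts2, var_threshold=0.99):
    """Mean SVCCA similarity between two activation matrices.

    acts1, acts2: arrays of shape (n_samples, n_neurons), e.g. the
    activations of one hidden layer of each network, evaluated at the
    same n_samples input points. Returns a value in [0, 1].
    """
    def reduce_(acts):
        # Center, then keep the top singular directions that explain
        # var_threshold of the variance (the "SV" step of SVCCA).
        acts = acts - acts.mean(axis=0, keepdims=True)
        u, s, _ = np.linalg.svd(acts, full_matrices=False)
        keep = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), var_threshold) + 1
        return u[:, :keep] * s[:keep]

    x, y = reduce_(acts1), reduce_(acts2)
    # CCA step: the singular values of qx.T @ qy, with qx and qy
    # orthonormal bases of the two reduced subspaces, are the
    # canonical correlations between the two representations.
    qx, _ = np.linalg.qr(x)
    qy, _ = np.linalg.qr(y)
    rho = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return rho.mean()

if __name__ == "__main__":
    # Toy check: two noisy linear views of the same underlying
    # representation should score near 1; an unrelated one should not.
    rng = np.random.default_rng(0)
    base = rng.normal(size=(2000, 20))
    acts_a = base @ rng.normal(size=(20, 64)) + 0.01 * rng.normal(size=(2000, 64))
    acts_b = base @ rng.normal(size=(20, 64)) + 0.01 * rng.normal(size=(2000, 64))
    print(svcca(acts_a, acts_b))                        # close to 1.0
    print(svcca(acts_a, rng.normal(size=(2000, 64))))   # much lower
```

In this setup, a layer would be called general when scores like these stay high both across random initializations and across problem parameters; a single score comparison is far cheaper than a transfer learning experiment, which is the speed advantage noted above.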

Location: Toronto, ON