GINOT

Geometry-Informed Neural Operator Transformer

Overview of the model architecture

Figure 1: Overview of the GINOT architecture. The boundary point cloud is first processed by sampling and grouping layers to extract local geometric features. These local features are then fused with global geometric information via a cross-attention layer, followed by a series of self-attention layers and a final linear layer, producing the KEY and VALUE matrices for the cross-attention layer in the solution decoder. In the solution decoder, an MLP encodes the query points into the QUERY matrix for the cross-attention layer, which integrates the geometry information from the encoder. The output of the cross-attention layer is then decoded into solution fields at the query points by another MLP.
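The encoder-decoder flow described above can be sketched in PyTorch. This is a minimal illustration, not the reference implementation: the sampling-and-grouping stage is stood in for by a simple uniform subsampling, and all module names, sizes, and hyperparameters here are assumptions for readability.

```python
import torch
import torch.nn as nn


class GeometryEncoder(nn.Module):
    """Encodes a boundary point cloud into KEY/VALUE tokens (simplified sketch)."""

    def __init__(self, d_model=64, n_tokens=32, n_heads=4, n_layers=2):
        super().__init__()
        self.n_tokens = n_tokens
        # Local and global feature extractors (stand-ins for sampling/grouping layers).
        self.local_mlp = nn.Sequential(nn.Linear(3, d_model), nn.GELU(), nn.Linear(d_model, d_model))
        self.global_mlp = nn.Sequential(nn.Linear(3, d_model), nn.GELU(), nn.Linear(d_model, d_model))
        # Cross-attention fuses local features with global geometric information.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Stack of self-attention layers, then a final linear projection.
        self.self_attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True),
            num_layers=n_layers,
        )
        self.out = nn.Linear(d_model, d_model)

    def forward(self, pts):                                  # pts: (B, N, 3)
        # Crude stand-in for farthest-point sampling and grouping: uniform subsample.
        idx = torch.linspace(0, pts.shape[1] - 1, self.n_tokens).long()
        local = self.local_mlp(pts[:, idx])                  # (B, n_tokens, d)
        glob = self.global_mlp(pts)                          # (B, N, d)
        fused, _ = self.cross_attn(local, glob, glob)        # fuse local with global
        return self.out(self.self_attn(fused))               # KEY/VALUE tokens


class SolutionDecoder(nn.Module):
    """Decodes solution fields at query points, attending to geometry tokens."""

    def __init__(self, d_model=64, n_heads=4, out_dim=1):
        super().__init__()
        self.query_mlp = nn.Sequential(nn.Linear(3, d_model), nn.GELU(), nn.Linear(d_model, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.field_mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, out_dim))

    def forward(self, query_pts, geom_tokens):               # query_pts: (B, M, 3)
        q = self.query_mlp(query_pts)                        # QUERY matrix
        attn, _ = self.cross_attn(q, geom_tokens, geom_tokens)
        return self.field_mlp(attn)                          # (B, M, out_dim)


class GINOT(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = GeometryEncoder()
        self.decoder = SolutionDecoder()

    def forward(self, boundary_pts, query_pts):
        return self.decoder(query_pts, self.encoder(boundary_pts))
```

For example, `GINOT()(torch.randn(2, 500, 3), torch.randn(2, 100, 3))` returns a `(2, 100, 1)` field prediction, one scalar per query point.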

Examples

Animation of the micro-periodic unit cell median case

Figure 2: Visualization of Mises stress and displacement solutions for the median testing case of the micro-periodic unit cell. The first column shows the input surface point cloud, the second column presents the true stress on the actual deformed shape, the third column depicts the predicted stress on the predicted deformed shape, and the fourth column highlights the absolute stress error on the actual deformed shape.

Figure 3: Mises stress solutions for the median (top row) and worst (bottom row) testing samples of the JEB dataset. Each row shows (from left to right): the input surface point cloud, the ground truth from finite element analysis, the GINOT prediction, and the absolute error between prediction and ground truth.

Dataset

The datasets used for training and evaluation are publicly available on Zenodo; the micro-periodic unit cell dataset is available at https://doi.org/10.5281/zenodo.15121966. Please download the datasets and unzip them into ./data.
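The download step above might look like the following, assuming the Zenodo archives have already been fetched into the current directory (archive names vary per dataset and are not specified here):

```shell
# Unpack all downloaded Zenodo archives into ./data (expected by the code).
mkdir -p data
for f in *.zip; do
  [ -e "$f" ] || continue   # skip gracefully if no archives are present
  unzip -o "$f" -d data
done
```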

References