
2D-Darcy

Quick experience on AI Studio

# train
python darcy2d.py
# evaluate with the pretrained model
python darcy2d.py mode=eval EVAL.pretrained_model_path=https://paddle-org.bj.bcebos.com/paddlescience/models/darcy2d/darcy2d_pretrained.pdparams
# export the model
python darcy2d.py mode=export
# inference
python darcy2d.py mode=infer
Pretrained model: darcy2d_pretrained.pdparams
Metrics: loss(Residual): 0.36500, MSE.poisson(Residual): 0.00006

1. Background

Darcy flow, based on Darcy's law, is a tool for computing the flow of liquids. It is widely used in groundwater simulation, hydrology, hydrogeology, and petroleum engineering.

For example, in petroleum engineering, Darcy flow is used to predict and simulate how oil moves through porous media. A porous medium is a material composed of small grains with voids between them; oil fills these voids and flows through them. With Darcy flow, engineers can predict and control the flow of oil, optimizing extraction and production.

Darcy flow is also used to study and predict groundwater movement. In agriculture, for instance, simulating groundwater flow can predict how irrigation affects soil moisture and thereby optimize irrigation schedules. In urban planning and environmental protection, Darcy flow is used to predict and prevent groundwater contamination.

2D-Darcy is a two-dimensional case of Darcy seepage flow: when a fluid moves slowly through a porous medium, the flow obeys Darcy's law and the seepage velocity depends linearly on the pressure gradient; such flow is called linear seepage.

2. Problem Definition

Assume that in the Darcy flow model, the velocity \(\mathbf{u}\) and the pressure \(p\) at each point \((x,y)\) satisfy the following relations:

\[ \begin{cases} \begin{aligned} \mathbf{u}+\nabla p =& 0,(x,y) \in \Omega \\ \nabla \cdot \mathbf{u} =& f,(x,y) \in \Omega \\ p(x,y) =& \sin(2 \pi x )\cos(2 \pi y), (x,y) \in \partial \Omega \end{aligned} \end{cases} \]
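Substituting \(\mathbf{u} = -\nabla p\) (from the first equation) into the second eliminates \(\mathbf{u}\) and leaves a Poisson equation in \(p\) alone, \(-\Delta p = f\). Extending the boundary expression \(p(x,y) = \sin(2 \pi x)\cos(2 \pi y)\) over the whole domain as the manufactured solution, direct differentiation gives

\[ \Delta p = \frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial y^2} = -8 \pi^2 \sin(2 \pi x)\cos(2 \pi y), \]

which is exactly the reference value used below as the training target for the Poisson residual.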

3. Solving the Problem

The following explains, step by step, how to translate this problem into PaddleScience code and solve it with deep learning. To keep this quick introduction to PaddleScience focused, only the key steps such as model construction, equation construction, and computational domain construction are described; for the remaining details, please refer to the API documentation.

3.1 Model Construction

In the darcy-2d problem, every known coordinate point \((x, y)\) has a corresponding unknown \(p\) to be solved for. Here we use a fairly simple MLP (Multilayer Perceptron) to represent the mapping function \(f: \mathbb{R}^2 \to \mathbb{R}^1\) from \((x, y)\) to \(p\):

\[ p = f(x, y) \]

Here \(f\) is the MLP model itself, expressed in PaddleScience code as follows:

# set model
model = ppsci.arch.MLP(**cfg.MODEL)

To access the values of specific variables accurately and quickly during computation, we name the network's input variables ("x", "y") and its output variable "p"; these names are kept consistent with the code that follows.

Then, by specifying the number of layers and neurons per layer of the MLP, we instantiate a neural network model with 5 hidden layers of 20 neurons each.
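These layer and width settings come from the MODEL section of the Hydra config file rather than hard-coded arguments. The exact file is not reproduced here; a sketch of the relevant fields (the key names are illustrative, following common PaddleScience config conventions) might look like:

```yaml
MODEL:
  input_keys: ["x", "y"]   # network inputs (x, y)
  output_keys: ["p"]       # network output p
  num_layers: 5            # 5 hidden layers
  hidden_size: 20          # 20 neurons per layer
```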

3.2 Equation Construction

Since the governing equations above reduce to the two-dimensional form of the Poisson equation, we can directly use PaddleScience's built-in Poisson class, setting its dim parameter to 2.

# set equation
equation = {"Poisson": ppsci.equation.Poisson(2)}

3.3 Computational Domain Construction

The 2D darcy problem in this article is posed on the rectangular domain whose diagonal corners are (0.0, 0.0) and (1.0, 1.0), so we can directly use PaddleScience's built-in Rectangle geometry as the computational domain.

# set geometry
geom = {"rect": ppsci.geometry.Rectangle((0.0, 0.0), (1.0, 1.0))}

3.4 Constraint Construction

In this case we use two constraints to guide the model's training on the computational domain: a darcy (Poisson) equation constraint on interior sample points and a boundary-value constraint on boundary points.

Before defining the constraints, we need to specify, for each constraint, the number of sample points it draws from its domain, along with the common sampling configuration.

# set dataloader config
train_dataloader_cfg = {
    "dataset": "IterableNamedArrayDataset",
    "iters_per_epoch": cfg.TRAIN.iters_per_epoch,
}

3.4.1 Interior Point Constraint

Taking the InteriorConstraint acting on interior points as an example, the code is as follows:

# set constraint
def poisson_ref_compute_func(_in):
    return (
        -8.0
        * (np.pi**2)
        * np.sin(2.0 * np.pi * _in["x"])
        * np.cos(2.0 * np.pi * _in["y"])
    )

pde_constraint = ppsci.constraint.InteriorConstraint(
    equation["Poisson"].equations,
    {"poisson": poisson_ref_compute_func},
    geom["rect"],
    {**train_dataloader_cfg, "batch_size": cfg.NPOINT_PDE},
    ppsci.loss.MSELoss("sum"),
    evenly=True,
    name="EQ",
)
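poisson_ref_compute_func is simply the Laplacian of the manufactured solution \(p = \sin(2 \pi x)\cos(2 \pi y)\). A quick sanity check in plain NumPy (independent of PaddleScience) confirms this via central finite differences:

```python
import numpy as np

# Manufactured solution and the Poisson target defined above.
p = lambda x, y: np.sin(2.0 * np.pi * x) * np.cos(2.0 * np.pi * y)
ref = lambda x, y: -8.0 * (np.pi**2) * np.sin(2.0 * np.pi * x) * np.cos(2.0 * np.pi * y)

h = 1e-4
x, y = 0.3, 0.7  # an arbitrary interior point
# Central second differences approximate p_xx and p_yy.
p_xx = (p(x + h, y) - 2.0 * p(x, y) + p(x - h, y)) / h**2
p_yy = (p(x, y + h) - 2.0 * p(x, y) + p(x, y - h)) / h**2
laplacian = p_xx + p_yy

print(abs(laplacian - ref(x, y)) < 1e-3)  # True: the target matches the Laplacian
```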

The first argument of InteriorConstraint is the equation expression, which describes how the constraint target is computed; here we pass in equation["Poisson"].equations, instantiated in Section 3.2 Equation Construction.

The second argument is the target value of the constrained variable. In this problem we want the output of the Poisson equation to be optimized toward its exact solution, so the target is set to the values produced by poisson_ref_compute_func.

The third argument is the computational domain on which the constraint acts; here we pass in geom["rect"], instantiated in Section 3.3 Computational Domain Construction.

The fourth argument is the sampling configuration on that domain. Here we train on the full set of points, so the dataset field is set to "IterableNamedArrayDataset", iters_per_epoch is set to 1, and the number of sample points batch_size is set to 9801 (a 99×99 sampling grid).

The fifth argument is the loss function. Here we use the common MSE loss with reduction set to "sum", meaning the loss terms of all participating sample points are summed.

The sixth argument selects whether the domain is sampled at even intervals. We enable even sampling here so that the training points are uniformly distributed over the domain, which helps convergence.

The seventh argument is the name of the constraint. Every constraint needs a name so it can be referenced later; here we name it "EQ".
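Although the even-sampling logic is internal to PaddleScience, the point count is easy to picture: 9801 evenly spaced interior points on the unit square correspond to a 99×99 grid with the boundary excluded. A plain-NumPy illustration:

```python
import numpy as np

n = 99
# 99 evenly spaced interior coordinates per axis (boundary points at 0 and 1 excluded)
xs = np.linspace(0.0, 1.0, n + 2)[1:-1]
X, Y = np.meshgrid(xs, xs)
points = np.stack([X.ravel(), Y.ravel()], axis=1)
print(points.shape)  # (9801, 2)
```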

3.4.2 Boundary Constraint

Similarly, we also need constraints on the four edges of the rectangle. Unlike InteriorConstraint, these act on the boundary, so we use the BoundaryConstraint class instead:

bc = ppsci.constraint.BoundaryConstraint(
    {"p": lambda out: out["p"]},
    {
        "p": lambda _in: np.sin(2.0 * np.pi * _in["x"])
        * np.cos(2.0 * np.pi * _in["y"])
    },
    geom["rect"],
    {**train_dataloader_cfg, "batch_size": cfg.NPOINT_BC},
    ppsci.loss.MSELoss("sum"),
    name="BC",
)

The first argument of BoundaryConstraint states that the network output out["p"] itself is the quantity constrained at run time.

The second argument specifies how the ground-truth values of that quantity are obtained; here we compute them directly from the analytical solution, defined as follows:

lambda _in: np.sin(2.0 * np.pi * _in["x"]) * np.cos(2.0 * np.pi * _in["y"])

The remaining arguments of BoundaryConstraint have essentially the same meaning as those of InteriorConstraint and are not repeated here.

Once the differential-equation constraint and the boundary constraint have been built, we wrap them into a dictionary keyed by the names we just assigned, for convenient access later.

# wrap constraints together
constraint = {
    pde_constraint.name: pde_constraint,
    bc.name: bc,
}

3.5 Hyperparameter Settings

Next we specify the number of training epochs and the learning rate. Following experimental experience, we train for 10,000 epochs.

# training settings
TRAIN:
  epochs: 10000
  iters_per_epoch: 1
  lr_scheduler:
    epochs: ${TRAIN.epochs}
    iters_per_epoch: ${TRAIN.iters_per_epoch}
    max_learning_rate: 1.0e-3

3.6 Optimizer Construction

Training calls an optimizer to update the model parameters. Here we choose the widely used Adam optimizer, combined with the OneCycle learning-rate schedule common in machine learning.

# set optimizer
lr_scheduler = ppsci.optimizer.lr_scheduler.OneCycleLR(**cfg.TRAIN.lr_scheduler)()
optimizer = ppsci.optimizer.Adam(lr_scheduler)(model)
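The OneCycle policy raises the learning rate to max_learning_rate and then anneals it back down over the course of one training run. A minimal sketch of that shape (the linear warm-up, cosine decay, and parameter names here are illustrative, not PaddleScience's exact implementation):

```python
import math

def one_cycle_lr(step, total_steps, max_lr=1.0e-3, div_factor=25.0, phase_pct=0.3):
    """Toy one-cycle schedule: linear warm-up to max_lr, then cosine decay."""
    start_lr = max_lr / div_factor
    peak = int(total_steps * phase_pct)
    if step < peak:  # warm-up phase
        return start_lr + (step / max(peak, 1)) * (max_lr - start_lr)
    t = (step - peak) / max(total_steps - peak, 1)  # decay progress in [0, 1]
    return max_lr * 0.5 * (1.0 + math.cos(math.pi * t))

lrs = [one_cycle_lr(s, 10000) for s in range(10000)]
print(max(lrs))  # peaks at max_learning_rate = 1e-3
```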

3.7 Validator Construction

During training, the model is usually evaluated on a validation (test) set at fixed epoch intervals, so we build a validator using ppsci.validate.GeometryValidator.

# set validator
residual_validator = ppsci.validate.GeometryValidator(
    equation["Poisson"].equations,
    {"poisson": poisson_ref_compute_func},
    geom["rect"],
    {
        "dataset": "NamedArrayDataset",
        "total_size": cfg.NPOINT_PDE,
        "batch_size": cfg.EVAL.batch_size.residual_validator,
        "sampler": {"name": "BatchSampler"},
    },
    ppsci.loss.MSELoss("sum"),
    evenly=True,
    metric={"MSE": ppsci.metric.MSE()},
    name="Residual",
)
validator = {residual_validator.name: residual_validator}

3.8 Visualizer Construction

If the evaluation results lend themselves to visualization, we can choose a suitable visualizer for the output.

The output here is a set of 2D points in a region, so we simply save the evaluated output as a vtu file and open it with visualization software. The code is as follows:

# set visualizer(optional)
# manually collate input data for visualization,
vis_points = geom["rect"].sample_interior(
    cfg.NPOINT_PDE + cfg.NPOINT_BC, evenly=True
)
visualizer = {
    "visualize_p_ux_uy": ppsci.visualize.VisualizerVtu(
        vis_points,
        {
            "p": lambda d: d["p"],
            "p_ref": lambda d: paddle.sin(2 * np.pi * d["x"])
            * paddle.cos(2 * np.pi * d["y"]),
            "p_diff": lambda d: paddle.sin(2 * np.pi * d["x"])
            * paddle.cos(2 * np.pi * d["y"])
            - d["p"],
            "ux": lambda d: jacobian(d["p"], d["x"]),
            "ux_ref": lambda d: 2
            * np.pi
            * paddle.cos(2 * np.pi * d["x"])
            * paddle.cos(2 * np.pi * d["y"]),
            "ux_diff": lambda d: jacobian(d["p"], d["x"])
            - 2
            * np.pi
            * paddle.cos(2 * np.pi * d["x"])
            * paddle.cos(2 * np.pi * d["y"]),
            "uy": lambda d: jacobian(d["p"], d["y"]),
            "uy_ref": lambda d: -2
            * np.pi
            * paddle.sin(2 * np.pi * d["x"])
            * paddle.sin(2 * np.pi * d["y"]),
            "uy_diff": lambda d: jacobian(d["p"], d["y"])
            - (
                -2
                * np.pi
                * paddle.sin(2 * np.pi * d["x"])
                * paddle.sin(2 * np.pi * d["y"])
            ),
        },
        prefix="result_p_ux_uy",
    )
}
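The ux_ref and uy_ref expressions above are the analytic partial derivatives \(\partial p/\partial x\) and \(\partial p/\partial y\) of \(p = \sin(2 \pi x)\cos(2 \pi y)\). A plain-NumPy central-difference check (independent of the visualizer code) confirms them:

```python
import numpy as np

p = lambda x, y: np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
ux_ref = lambda x, y: 2 * np.pi * np.cos(2 * np.pi * x) * np.cos(2 * np.pi * y)
uy_ref = lambda x, y: -2 * np.pi * np.sin(2 * np.pi * x) * np.sin(2 * np.pi * y)

h = 1e-6
x, y = 0.25, 0.6  # arbitrary test point
dp_dx = (p(x + h, y) - p(x - h, y)) / (2 * h)  # central difference in x
dp_dy = (p(x, y + h) - p(x, y - h)) / (2 * h)  # central difference in y
print(abs(dp_dx - ux_ref(x, y)) < 1e-5, abs(dp_dy - uy_ref(x, y)) < 1e-5)
```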

3.9 Model Training, Evaluation, and Visualization

3.9.1 Training with Adam

With the setup above complete, we pass the instantiated objects, in order, to ppsci.solver.Solver, then launch training, evaluation, and visualization.

# initialize solver
solver = ppsci.solver.Solver(
    model,
    constraint,
    cfg.output_dir,
    optimizer,
    lr_scheduler,
    cfg.TRAIN.epochs,
    cfg.TRAIN.iters_per_epoch,
    eval_during_train=cfg.TRAIN.eval_during_train,
    eval_freq=cfg.TRAIN.eval_freq,
    equation=equation,
    geom=geom,
    validator=validator,
    visualizer=visualizer,
)
# train model
solver.train()
# evaluate after finished training
solver.eval()
# visualize prediction after finished training
solver.visualize()

3.9.2 Fine-tuning with L-BFGS [Optional]

After training with the Adam optimizer, we can switch to the second-order optimizer L-BFGS and continue training for a small number of additional epochs (here, 10% of the Adam epoch count) to further improve model accuracy.

OUTPUT_DIR = cfg.TRAIN.lbfgs.output_dir
logger.init_logger("ppsci", osp.join(OUTPUT_DIR, f"{cfg.mode}.log"), "info")
EPOCHS = cfg.TRAIN.epochs // 10
optimizer_lbfgs = ppsci.optimizer.LBFGS(
    cfg.TRAIN.lbfgs.learning_rate, cfg.TRAIN.lbfgs.max_iter
)(model)
solver = ppsci.solver.Solver(
    model,
    constraint,
    OUTPUT_DIR,
    optimizer_lbfgs,
    None,
    EPOCHS,
    cfg.TRAIN.lbfgs.iters_per_epoch,
    eval_during_train=cfg.TRAIN.lbfgs.eval_during_train,
    eval_freq=cfg.TRAIN.lbfgs.eval_freq,
    equation=equation,
    geom=geom,
    validator=validator,
    visualizer=visualizer,
)
# train model
solver.train()
# evaluate after finished training
solver.eval()
# visualize prediction after finished training
solver.visualize()
Tip

After training with a conventional optimizer finishes, fine-tuning for a small number of epochs with L-BFGS can further improve model accuracy in most scenarios.

4. Complete Code

darcy2d.py
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os import path as osp

import hydra
import numpy as np
import paddle
from omegaconf import DictConfig

import ppsci
from ppsci.autodiff import jacobian
from ppsci.utils import logger


def train(cfg: DictConfig):
    # set random seed for reproducibility
    ppsci.utils.misc.set_random_seed(cfg.seed)
    # initialize logger
    logger.init_logger("ppsci", osp.join(cfg.output_dir, f"{cfg.mode}.log"), "info")

    # set model
    model = ppsci.arch.MLP(**cfg.MODEL)

    # set equation
    equation = {"Poisson": ppsci.equation.Poisson(2)}

    # set geometry
    geom = {"rect": ppsci.geometry.Rectangle((0.0, 0.0), (1.0, 1.0))}

    # set dataloader config
    train_dataloader_cfg = {
        "dataset": "IterableNamedArrayDataset",
        "iters_per_epoch": cfg.TRAIN.iters_per_epoch,
    }

    # set constraint
    def poisson_ref_compute_func(_in):
        return (
            -8.0
            * (np.pi**2)
            * np.sin(2.0 * np.pi * _in["x"])
            * np.cos(2.0 * np.pi * _in["y"])
        )

    pde_constraint = ppsci.constraint.InteriorConstraint(
        equation["Poisson"].equations,
        {"poisson": poisson_ref_compute_func},
        geom["rect"],
        {**train_dataloader_cfg, "batch_size": cfg.NPOINT_PDE},
        ppsci.loss.MSELoss("sum"),
        evenly=True,
        name="EQ",
    )

    bc = ppsci.constraint.BoundaryConstraint(
        {"p": lambda out: out["p"]},
        {
            "p": lambda _in: np.sin(2.0 * np.pi * _in["x"])
            * np.cos(2.0 * np.pi * _in["y"])
        },
        geom["rect"],
        {**train_dataloader_cfg, "batch_size": cfg.NPOINT_BC},
        ppsci.loss.MSELoss("sum"),
        name="BC",
    )
    # wrap constraints together
    constraint = {
        pde_constraint.name: pde_constraint,
        bc.name: bc,
    }

    # set optimizer
    lr_scheduler = ppsci.optimizer.lr_scheduler.OneCycleLR(**cfg.TRAIN.lr_scheduler)()
    optimizer = ppsci.optimizer.Adam(lr_scheduler)(model)

    # set validator
    residual_validator = ppsci.validate.GeometryValidator(
        equation["Poisson"].equations,
        {"poisson": poisson_ref_compute_func},
        geom["rect"],
        {
            "dataset": "NamedArrayDataset",
            "total_size": cfg.NPOINT_PDE,
            "batch_size": cfg.EVAL.batch_size.residual_validator,
            "sampler": {"name": "BatchSampler"},
        },
        ppsci.loss.MSELoss("sum"),
        evenly=True,
        metric={"MSE": ppsci.metric.MSE()},
        name="Residual",
    )
    validator = {residual_validator.name: residual_validator}

    # set visualizer(optional)
    # manually collate input data for visualization,
    vis_points = geom["rect"].sample_interior(
        cfg.NPOINT_PDE + cfg.NPOINT_BC, evenly=True
    )
    visualizer = {
        "visualize_p_ux_uy": ppsci.visualize.VisualizerVtu(
            vis_points,
            {
                "p": lambda d: d["p"],
                "p_ref": lambda d: paddle.sin(2 * np.pi * d["x"])
                * paddle.cos(2 * np.pi * d["y"]),
                "p_diff": lambda d: paddle.sin(2 * np.pi * d["x"])
                * paddle.cos(2 * np.pi * d["y"])
                - d["p"],
                "ux": lambda d: jacobian(d["p"], d["x"]),
                "ux_ref": lambda d: 2
                * np.pi
                * paddle.cos(2 * np.pi * d["x"])
                * paddle.cos(2 * np.pi * d["y"]),
                "ux_diff": lambda d: jacobian(d["p"], d["x"])
                - 2
                * np.pi
                * paddle.cos(2 * np.pi * d["x"])
                * paddle.cos(2 * np.pi * d["y"]),
                "uy": lambda d: jacobian(d["p"], d["y"]),
                "uy_ref": lambda d: -2
                * np.pi
                * paddle.sin(2 * np.pi * d["x"])
                * paddle.sin(2 * np.pi * d["y"]),
                "uy_diff": lambda d: jacobian(d["p"], d["y"])
                - (
                    -2
                    * np.pi
                    * paddle.sin(2 * np.pi * d["x"])
                    * paddle.sin(2 * np.pi * d["y"])
                ),
            },
            prefix="result_p_ux_uy",
        )
    }

    # initialize solver
    solver = ppsci.solver.Solver(
        model,
        constraint,
        cfg.output_dir,
        optimizer,
        lr_scheduler,
        cfg.TRAIN.epochs,
        cfg.TRAIN.iters_per_epoch,
        eval_during_train=cfg.TRAIN.eval_during_train,
        eval_freq=cfg.TRAIN.eval_freq,
        equation=equation,
        geom=geom,
        validator=validator,
        visualizer=visualizer,
    )
    # train model
    solver.train()
    # evaluate after finished training
    solver.eval()
    # visualize prediction after finished training
    solver.visualize()

    # fine-tuning pretrained model with L-BFGS
    OUTPUT_DIR = cfg.TRAIN.lbfgs.output_dir
    logger.init_logger("ppsci", osp.join(OUTPUT_DIR, f"{cfg.mode}.log"), "info")
    EPOCHS = cfg.TRAIN.epochs // 10
    optimizer_lbfgs = ppsci.optimizer.LBFGS(
        cfg.TRAIN.lbfgs.learning_rate, cfg.TRAIN.lbfgs.max_iter
    )(model)
    solver = ppsci.solver.Solver(
        model,
        constraint,
        OUTPUT_DIR,
        optimizer_lbfgs,
        None,
        EPOCHS,
        cfg.TRAIN.lbfgs.iters_per_epoch,
        eval_during_train=cfg.TRAIN.lbfgs.eval_during_train,
        eval_freq=cfg.TRAIN.lbfgs.eval_freq,
        equation=equation,
        geom=geom,
        validator=validator,
        visualizer=visualizer,
    )
    # train model
    solver.train()
    # evaluate after finished training
    solver.eval()
    # visualize prediction after finished training
    solver.visualize()


def evaluate(cfg: DictConfig):
    # set random seed for reproducibility
    ppsci.utils.misc.set_random_seed(cfg.seed)
    # initialize logger
    logger.init_logger("ppsci", osp.join(cfg.output_dir, f"{cfg.mode}.log"), "info")

    # set model
    model = ppsci.arch.MLP(**cfg.MODEL)

    # set equation
    equation = {"Poisson": ppsci.equation.Poisson(2)}

    # set geometry
    geom = {"rect": ppsci.geometry.Rectangle((0.0, 0.0), (1.0, 1.0))}

    # set constraint
    def poisson_ref_compute_func(_in):
        return (
            -8.0
            * (np.pi**2)
            * np.sin(2.0 * np.pi * _in["x"])
            * np.cos(2.0 * np.pi * _in["y"])
        )

    # set validator
    residual_validator = ppsci.validate.GeometryValidator(
        equation["Poisson"].equations,
        {"poisson": poisson_ref_compute_func},
        geom["rect"],
        {
            "dataset": "NamedArrayDataset",
            "total_size": cfg.NPOINT_PDE,
            "batch_size": cfg.EVAL.batch_size.residual_validator,
            "sampler": {"name": "BatchSampler"},
        },
        ppsci.loss.MSELoss("sum"),
        evenly=True,
        metric={"MSE": ppsci.metric.MSE()},
        name="Residual",
    )
    validator = {residual_validator.name: residual_validator}

    # set visualizer
    # manually collate input data for visualization,
    vis_points = geom["rect"].sample_interior(
        cfg.NPOINT_PDE + cfg.NPOINT_BC, evenly=True
    )
    visualizer = {
        "visualize_p_ux_uy": ppsci.visualize.VisualizerVtu(
            vis_points,
            {
                "p": lambda d: d["p"],
                "p_ref": lambda d: paddle.sin(2 * np.pi * d["x"])
                * paddle.cos(2 * np.pi * d["y"]),
                "p_diff": lambda d: paddle.sin(2 * np.pi * d["x"])
                * paddle.cos(2 * np.pi * d["y"])
                - d["p"],
                "ux": lambda d: jacobian(d["p"], d["x"]),
                "ux_ref": lambda d: 2
                * np.pi
                * paddle.cos(2 * np.pi * d["x"])
                * paddle.cos(2 * np.pi * d["y"]),
                "ux_diff": lambda d: jacobian(d["p"], d["x"])
                - 2
                * np.pi
                * paddle.cos(2 * np.pi * d["x"])
                * paddle.cos(2 * np.pi * d["y"]),
                "uy": lambda d: jacobian(d["p"], d["y"]),
                "uy_ref": lambda d: -2
                * np.pi
                * paddle.sin(2 * np.pi * d["x"])
                * paddle.sin(2 * np.pi * d["y"]),
                "uy_diff": lambda d: jacobian(d["p"], d["y"])
                - (
                    -2
                    * np.pi
                    * paddle.sin(2 * np.pi * d["x"])
                    * paddle.sin(2 * np.pi * d["y"])
                ),
            },
            prefix="result_p_ux_uy",
        )
    }

    solver = ppsci.solver.Solver(
        model,
        output_dir=cfg.output_dir,
        equation=equation,
        geom=geom,
        validator=validator,
        visualizer=visualizer,
        pretrained_model_path=cfg.EVAL.pretrained_model_path,
    )
    solver.eval()
    # visualize prediction
    solver.visualize()


def export(cfg: DictConfig):
    # set model
    model = ppsci.arch.MLP(**cfg.MODEL)

    # initialize solver
    solver = ppsci.solver.Solver(
        model,
        pretrained_model_path=cfg.INFER.pretrained_model_path,
    )
    # export model
    from paddle.static import InputSpec

    input_spec = [
        {key: InputSpec([None, 1], "float32", name=key) for key in model.input_keys},
    ]

    solver.export(input_spec, cfg.INFER.export_path)


def inference(cfg: DictConfig):
    from deploy.python_infer import pinn_predictor

    predictor = pinn_predictor.PINNPredictor(cfg)

    # set geometry
    geom = {"rect": ppsci.geometry.Rectangle((0.0, 0.0), (1.0, 1.0))}
    # manually collate input data for visualization,
    input_dict = geom["rect"].sample_interior(
        cfg.NPOINT_PDE + cfg.NPOINT_BC, evenly=True
    )
    output_dict = predictor.predict(
        {key: input_dict[key] for key in cfg.MODEL.input_keys}, cfg.INFER.batch_size
    )
    # mapping data to cfg.INFER.output_keys
    output_dict = {
        store_key: output_dict[infer_key]
        for store_key, infer_key in zip(cfg.MODEL.output_keys, output_dict.keys())
    }
    ppsci.visualize.save_vtu_from_dict(
        "./visual/darcy2d.vtu",
        {**input_dict, **output_dict},
        input_dict.keys(),
        cfg.MODEL.output_keys,
    )


@hydra.main(version_base=None, config_path="./conf", config_name="darcy2d.yaml")
def main(cfg: DictConfig):
    if cfg.mode == "train":
        train(cfg)
    elif cfg.mode == "eval":
        evaluate(cfg)
    elif cfg.mode == "export":
        export(cfg)
    elif cfg.mode == "infer":
        inference(cfg)
    else:
        raise ValueError(
            f"cfg.mode should in ['train', 'eval', 'export', 'infer'], but got '{cfg.mode}'"
        )


if __name__ == "__main__":
    main()

5. Results

Below are the model's predictions of the pressure \(p(x,y)\), the x-direction (horizontal) velocity \(u(x,y)\), and the y-direction (vertical) velocity \(v(x,y)\) at each point of the square domain, together with the reference results and the difference between the two.

darcy 2d

Left: predicted pressure p; middle: reference pressure p; right: pressure difference

darcy 2d

Left: predicted x-direction velocity u; middle: reference x-direction velocity u; right: x-direction velocity difference

darcy 2d

Left: predicted y-direction velocity v; middle: reference y-direction velocity v; right: y-direction velocity difference