
2D-LDC (2D Lid-Driven Cavity Flow)

Quick experience on AI Studio

python ldc2d_unsteady_Re10.py
python ldc2d_unsteady_Re10.py mode=eval EVAL.pretrained_model_path=https://paddle-org.bj.bcebos.com/paddlescience/models/ldc2d_unsteady_Re10/ldc2d_unsteady_Re10_pretrained.pdparams
Pretrained model: ldc2d_unsteady_Re10_pretrained.pdparams

Metrics:
    loss(Residual): 155652.67530
    MSE.momentum_x(Residual): 6.78030
    MSE.continuity(Residual): 0.16590
    MSE.momentum_y(Residual): 12.05981

1. Background

The lid-driven cavity (LDC) flow problem arises in many fields. For example, it is widely used in computational fluid dynamics (CFD) to validate numerical methods. Although its boundary conditions are relatively simple, the resulting flow is quite complex. In the lid-driven cavity, the top wall moves in the x direction with velocity U = 1, while the other three walls are no-slip boundaries, i.e. their velocity is zero.

In addition, the LDC problem is used to study and predict flow phenomena in aerodynamics. For example, in the automotive industry, simulating and analyzing the airflow inside a vehicle body helps optimize vehicle design and performance.

In summary, the lid-driven cavity flow problem is widely used in CFD, aerodynamics, and related fields, and plays an important role in studying and predicting flow phenomena and in optimizing product designs.

2. Problem Definition

In this case, the computational domain is the interior of a square cavity of unit length and width over 16 time instants, and the transient lid-driven cavity flow is governed by the following equations:

Conservation of mass:

\[ \dfrac{\partial u}{\partial x} + \dfrac{\partial v}{\partial y} = 0 \]

Conservation of momentum in \(x\):

\[ \dfrac{\partial u}{\partial t} + u\dfrac{\partial u}{\partial x} + v\dfrac{\partial u}{\partial y} = -\dfrac{1}{\rho}\dfrac{\partial p}{\partial x} + \nu(\dfrac{\partial ^2 u}{\partial x ^2} + \dfrac{\partial ^2 u}{\partial y ^2}) \]

Conservation of momentum in \(y\):

\[ \dfrac{\partial v}{\partial t} + u\dfrac{\partial v}{\partial x} + v\dfrac{\partial v}{\partial y} = -\dfrac{1}{\rho}\dfrac{\partial p}{\partial y} + \nu(\dfrac{\partial ^2 v}{\partial x ^2} + \dfrac{\partial ^2 v}{\partial y ^2}) \]

Let:

\(t^* = \dfrac{L}{U_0}\)

\(x^*=y^* = L\)

\(u^*=v^* = U_0\)

\(p^* = \rho {U_0}^2\)

Define:

dimensionless time \(\tau = \dfrac{t}{t^*}\)

dimensionless coordinates \(X = \dfrac{x}{x^*}\) and \(Y = \dfrac{y}{y^*}\)

dimensionless velocities \(U = \dfrac{u}{u^*}\) and \(V = \dfrac{v}{u^*}\)

dimensionless pressure \(P = \dfrac{p}{p^*}\)

Reynolds number \(Re = \dfrac{L U_0}{\nu}\)
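
As a quick consistency check (not part of the original derivation), substituting \(u = U_0 U\), \(v = U_0 V\), \(x = L X\), \(y = L Y\), \(t = t^* \tau\) and \(p = p^* P\) into the dimensional \(x\) momentum equation and dividing through by \(U_0^2 / L\) gives

\[ \dfrac{\partial U}{\partial \tau} + U\dfrac{\partial U}{\partial X} + V\dfrac{\partial U}{\partial Y} = -\dfrac{\partial P}{\partial X} + \dfrac{\nu}{U_0 L}(\dfrac{\partial ^2 U}{\partial X^2} + \dfrac{\partial ^2 U}{\partial Y^2}) \]

with \(\dfrac{\nu}{U_0 L} = \dfrac{1}{Re}\), which is exactly the nondimensional \(x\) momentum equation listed below.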

This yields the following nondimensional Navier-Stokes equations, enforced inside the cavity:

Conservation of mass:

\[ \dfrac{\partial U}{\partial X} + \dfrac{\partial V}{\partial Y} = 0 \]

Conservation of momentum in \(X\):

\[ \dfrac{\partial U}{\partial \tau} + U\dfrac{\partial U}{\partial X} + V\dfrac{\partial U}{\partial Y} = -\dfrac{\partial P}{\partial X} + \dfrac{1}{Re}(\dfrac{\partial ^2 U}{\partial X^2} + \dfrac{\partial ^2 U}{\partial Y^2}) \]

Conservation of momentum in \(Y\):

\[ \dfrac{\partial V}{\partial \tau} + U\dfrac{\partial V}{\partial X} + V\dfrac{\partial V}{\partial Y} = -\dfrac{\partial P}{\partial Y} + \dfrac{1}{Re}(\dfrac{\partial ^2 V}{\partial X^2} + \dfrac{\partial ^2 V}{\partial Y^2}) \]

On the cavity boundaries, Dirichlet boundary conditions are imposed:

Top boundary:

\[ u=1, v=0 \]

Bottom boundary:

\[ u=0, v=0 \]

Left boundary:

\[ u=0, v=0 \]

Right boundary:

\[ u=0, v=0 \]
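
As a small numeric aside (my own sketch, not from the original tutorial), the values used later in this case (\(\nu = 0.01\), lid speed \(U_0 = 1\), and cavity side length \(L = 0.1\), taken from the Rectangle spanning (-0.05, -0.05) to (0.05, 0.05) in the code) reproduce the Reynolds number in the case name:

NU = 0.01  # kinematic viscosity, used later via cfg.NU
U0 = 1.0   # lid velocity from the top boundary condition
L = 0.1    # side length of the Rectangle defined in the code below
Re = L * U0 / NU
print(Re)  # 10.0, consistent with the case name "ldc2d_unsteady_Re10"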

3. Problem Solving

The following walks through how to translate this problem step by step into PaddleScience code and solve it with a deep-learning approach. To get up to speed with PaddleScience quickly, only the key steps such as model construction, equation construction, and computational-domain construction are explained here; for the remaining details, please refer to the API documentation.

3.1 Model Construction

In the 2D-LDC problem, each known coordinate point \((t, x, y)\) has three unknowns to solve for: the horizontal velocity \(u\), the vertical velocity \(v\), and the pressure \(p\). Here we use a relatively simple MLP (Multilayer Perceptron) to represent the mapping function \(f: \mathbb{R}^3 \to \mathbb{R}^3\) from \((t, x, y)\) to \((u, v, p)\), i.e.:

\[ u, v, p = f(t, x, y) \]

In the equation above, \(f\) is the MLP model itself, expressed in PaddleScience code as follows:

# set model
model = ppsci.arch.MLP(**cfg.MODEL.model)

To access the values of specific variables accurately and quickly during computation, the model's input variable names are specified as ["t", "x", "y"] and its output variable names as ["u", "v", "p"]; these names are kept consistent throughout the rest of the code.

Then, by specifying the number of layers, the number of neurons per layer, and the activation function of the MLP, we instantiate a neural network model model with 9 hidden layers of 50 neurons each, using "tanh" as the activation function.
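
For reference, a minimal sketch of what this configuration amounts to if the MLP were constructed directly rather than from cfg.MODEL.model (the keyword names below follow the common ppsci.arch.MLP signature and are an assumption, not taken from this case's configuration file):

# hypothetical direct construction equivalent to cfg.MODEL.model for this case
import ppsci

model = ppsci.arch.MLP(
    input_keys=("t", "x", "y"),
    output_keys=("u", "v", "p"),
    num_layers=9,
    hidden_size=50,
    activation="tanh",
)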

3.2 Equation Construction

Since 2D-LDC uses the 2D transient form of the Navier-Stokes equations, the NavierStokes equation built into PaddleScience can be used directly:

# set equation
equation = {"NavierStokes": ppsci.equation.NavierStokes(cfg.NU, cfg.RHO, 2, True)}

When instantiating the NavierStokes class, the necessary parameters must be specified: the kinematic viscosity \(\nu=0.01\) and the fluid density \(\rho=1.0\), together with the spatial dimension 2 and True for the transient (time-dependent) form.

3.3 Computational Domain Construction

In this tutorial, the 2D-LDC problem is defined on the 2D rectangular region whose diagonal runs from [-0.05, -0.05] to [0.05, 0.05], with a time domain of 16 instants [0.0, 0.1, ..., 1.4, 1.5]. We can therefore directly use the built-in spatial geometry Rectangle and the time domain TimeDomain of PaddleScience, combined into a space-time TimeXGeometry computational domain.

# set timestamps(including initial t0)
timestamps = np.linspace(0.0, 1.5, cfg.NTIME_ALL, endpoint=True)
# set time-geometry
geom = {
    "time_rect": ppsci.geometry.TimeXGeometry(
        ppsci.geometry.TimeDomain(0.0, 1.5, timestamps=timestamps),
        ppsci.geometry.Rectangle((-0.05, -0.05), (0.05, 0.05)),
    )
}
Tip

Rectangle and TimeDomain are two Geometry subclasses that can also be used on their own.

If the input data come only from a 2D rectangular spatial domain, a spatial geometry object can be created directly with ppsci.geometry.Rectangle(...);

if the input data come only from a 1D time domain, a time-domain object can be built directly with ppsci.geometry.TimeDomain(...).
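
For illustration, a minimal sketch of using the two classes on their own (assuming the standalone geometries expose the same sample_interior interface that is called on the combined TimeXGeometry later in this tutorial):

# hypothetical standalone usage of the two geometry classes
import numpy as np
import ppsci

rect = ppsci.geometry.Rectangle((-0.05, -0.05), (0.05, 0.05))
rect_points = rect.sample_interior(100, evenly=True)  # dict of sampled coordinate arrays

time_domain = ppsci.geometry.TimeDomain(0.0, 1.5, timestamps=np.linspace(0.0, 1.5, 16))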

3.4 Constraint Construction

The nondimensional equations and boundary conditions obtained in 2. Problem Definition correspond to two kinds of constraints that guide model training on the computational domain, namely:

  1. The nondimensional Navier-Stokes equation constraints imposed on the interior points of the rectangle (after moving all terms to one side):

    \[ \dfrac{\partial U}{\partial X} + \dfrac{\partial V}{\partial Y} = 0 \]
    \[ \dfrac{\partial U}{\partial \tau} + U\dfrac{\partial U}{\partial X} + V\dfrac{\partial U}{\partial Y} + \dfrac{\partial P}{\partial X} - \dfrac{1}{Re}(\dfrac{\partial ^2 U}{\partial X^2} + \dfrac{\partial ^2 U}{\partial Y^2}) = 0 \]
    \[ \dfrac{\partial V}{\partial \tau} + U\dfrac{\partial V}{\partial X} + V\dfrac{\partial V}{\partial Y} + \dfrac{\partial P}{\partial Y} - \dfrac{1}{Re}(\dfrac{\partial ^2 V}{\partial X^2} + \dfrac{\partial ^2 V}{\partial Y^2}) = 0 \]

    For convenient access to intermediate variables, the NavierStokes class internally names the left-hand sides of the three equations above continuity, momentum_x, and momentum_y.

  2. The Dirichlet boundary-condition constraints imposed on the top, bottom, left, and right edges of the rectangle:

    \[ \text{top boundary}: u=1, v=0 \]
    \[ \text{bottom boundary}: u=0, v=0 \]
    \[ \text{left boundary}: u=0, v=0 \]
    \[ \text{right boundary}: u=0, v=0 \]

Next, the built-in InteriorConstraint and BoundaryConstraint of PaddleScience are used to build these two kinds of constraints.

Before defining the constraints, the number of sampling points must be specified for each constraint, i.e. how many data points that constraint samples from its corresponding domain, together with a common sampling configuration.

# set dataloader config
train_dataloader_cfg = {
    "dataset": "IterableNamedArrayDataset",
    "iters_per_epoch": cfg.TRAIN.iters_per_epoch,
}

# pde/bc constraint use t1~tn, initial constraint use t0
NPOINT_PDE, NTIME_PDE = 99**2, cfg.NTIME_ALL - 1
NPOINT_TOP, NTIME_TOP = 101, cfg.NTIME_ALL - 1
NPOINT_DOWN, NTIME_DOWN = 101, cfg.NTIME_ALL - 1
NPOINT_LEFT, NTIME_LEFT = 99, cfg.NTIME_ALL - 1
NPOINT_RIGHT, NTIME_RIGHT = 99, cfg.NTIME_ALL - 1
NPOINT_IC, NTIME_IC = 99**2, 1
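
These counts correspond to a 99x99 evenly spaced interior grid, 101 points on each of the top and bottom edges, and 99 points on each of the left and right edges (presumably so that the corner points are not counted twice). A small bookkeeping sketch (my own, for orientation) of the resulting sample counts per epoch:

# rough bookkeeping of the sample counts implied by the constants above
NTIME_ALL = 16
n_pde = 99**2 * (NTIME_ALL - 1)                 # 9801 x 15 = 147015 residual points
n_bc = (101 + 101 + 99 + 99) * (NTIME_ALL - 1)  # 400 x 15 = 6000 boundary points
n_ic = 99**2                                    # 9801 initial-condition points at t0
print(n_pde, n_bc, n_ic)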

3.4.1 Interior Point Constraint

Taking the InteriorConstraint acting on the interior points of the rectangle as an example, the code is as follows:

# set constraint
pde = ppsci.constraint.InteriorConstraint(
    equation["NavierStokes"].equations,
    {"continuity": 0, "momentum_x": 0, "momentum_y": 0},
    geom["time_rect"],
    {**train_dataloader_cfg, "batch_size": NPOINT_PDE * NTIME_PDE},
    ppsci.loss.MSELoss("sum"),
    evenly=True,
    weight_dict=cfg.TRAIN.weight.pde,  # (1)
    name="EQ",
)
  1. In this case, the PDE constraint loss is orders of magnitude larger than the boundary constraint losses, so a small weight is assigned to the PDE constraint, which helps the model converge.

The first argument of InteriorConstraint is the equation expressions, which describe how the constraint targets are computed; here we pass equation["NavierStokes"].equations, instantiated in section 3.2 Equation Construction;

The second argument is the target values of the constrained variables; in this problem we want the three intermediate results produced by the Navier-Stokes equations, continuity, momentum_x, and momentum_y, to be optimized toward 0, so their target values are all set to 0;

The third argument is the computational domain on which the constraint equations act; here we simply pass geom["time_rect"], instantiated in section 3.3 Computational Domain Construction;

The fourth argument is the sampling configuration on the computational domain. Here we train on the full set of data points, so the dataset field is set to "IterableNamedArrayDataset", iters_per_epoch is set to 1, and the number of sampling points batch_size is set to 9801 * 15 (a 99x99 evenly spaced grid at each of 15 time instants);

The fifth argument is the loss function; here we use the common MSE loss with reduction set to "sum", i.e. the loss terms of all participating data points are summed;

The sixth argument chooses whether to sample evenly over the computational domain; even sampling is enabled here so that the training points are uniformly distributed, which helps convergence;

The seventh argument is the weight coefficients, which allow fine-grained control over the weight of each variable in the loss; 0.0001 is a suitable value here;

The eighth argument is the name of the constraint; every constraint is given a name so it can be referenced later. Here we simply name it "EQ".

3.4.2 Boundary Constraints

Similarly, we also need to build Dirichlet boundary constraints on the top, bottom, left, and right edges of the rectangle. Unlike the InteriorConstraint, the constrained region is a boundary, so we use the BoundaryConstraint class.

The constrained target variables also differ: the Dirichlet conditions act on the MLP outputs \(u\) and \(v\) (the pressure \(p\) is not constrained here), so the first argument uses lambda expressions that directly return the MLP outputs out["u"] and out["v"] as the quantities constrained at runtime.

Then the target values are set for \(u\) and \(v\); note that on the top boundary bc_top, the target value of \(u\) must be set to 1.

The sampling and loss-function configurations are similar to those of the InteriorConstraint, with roughly 100 points per edge at each time instant.

Since BoundaryConstraint samples on all boundaries by default, while we need to constrain the four edges separately, the criteria argument is used to single out each edge; for example, the top boundary is the set of boundary points satisfying \(y = 0.05\):

bc_top = ppsci.constraint.BoundaryConstraint(
    {"u": lambda out: out["u"], "v": lambda out: out["v"]},
    {"u": 1, "v": 0},
    geom["time_rect"],
    {**train_dataloader_cfg, "batch_size": NPOINT_TOP * NTIME_TOP},
    ppsci.loss.MSELoss("sum"),
    criteria=lambda t, x, y: np.isclose(y, 0.05),
    name="BC_top",
)
bc_down = ppsci.constraint.BoundaryConstraint(
    {"u": lambda out: out["u"], "v": lambda out: out["v"]},
    {"u": 0, "v": 0},
    geom["time_rect"],
    {**train_dataloader_cfg, "batch_size": NPOINT_DOWN * NTIME_DOWN},
    ppsci.loss.MSELoss("sum"),
    criteria=lambda t, x, y: np.isclose(y, -0.05),
    name="BC_down",
)
bc_left = ppsci.constraint.BoundaryConstraint(
    {"u": lambda out: out["u"], "v": lambda out: out["v"]},
    {"u": 0, "v": 0},
    geom["time_rect"],
    {**train_dataloader_cfg, "batch_size": NPOINT_LEFT * NTIME_LEFT},
    ppsci.loss.MSELoss("sum"),
    criteria=lambda t, x, y: np.isclose(x, -0.05),
    name="BC_left",
)
bc_right = ppsci.constraint.BoundaryConstraint(
    {"u": lambda out: out["u"], "v": lambda out: out["v"]},
    {"u": 0, "v": 0},
    geom["time_rect"],
    {**train_dataloader_cfg, "batch_size": NPOINT_RIGHT * NTIME_RIGHT},
    ppsci.loss.MSELoss("sum"),
    criteria=lambda t, x, y: np.isclose(x, 0.05),
    name="BC_right",
)

3.4.3 Initial-Value Constraint

Finally, we also need to impose the initial condition \(u=0, v=0\) on the interior points of the rectangle at \(t=t_0\), as follows:

ic = ppsci.constraint.InitialConstraint(
    {"u": lambda out: out["u"], "v": lambda out: out["v"]},
    {"u": 0, "v": 0},
    geom["time_rect"],
    {**train_dataloader_cfg, "batch_size": NPOINT_IC * NTIME_IC},
    ppsci.loss.MSELoss("sum"),
    evenly=True,
    name="IC",
)

After the PDE constraint, the boundary constraints, and the initial-value constraint have been built, they are wrapped into a dictionary keyed by the names we just assigned, for convenient later access.

# wrap constraints together
constraint = {
    pde.name: pde,
    bc_top.name: bc_top,
    bc_down.name: bc_down,
    bc_left.name: bc_left,
    bc_right.name: bc_right,
    ic.name: ic,
}

3.5 Hyperparameter Settings

Next, the number of training epochs is specified in the configuration file. Based on experimental experience, we use 20,000 epochs and a cosine-decay learning-rate schedule with warmup.

# training settings
TRAIN:
  epochs: 20000
  iters_per_epoch: 1
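
The cosine schedule with warmup mentioned above is instantiated in the full code of section 4 as follows, with the warmup length set to 5% of the total number of epochs:

# set training hyper-parameters
lr_scheduler = ppsci.optimizer.lr_scheduler.Cosine(
    **cfg.TRAIN.lr_scheduler,
    warmup_epoch=int(0.05 * cfg.TRAIN.epochs),
)()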

3.6 Optimizer Construction

The optimizer is invoked during training to update the model parameters; here we choose the commonly used Adam optimizer, driven by the learning-rate scheduler above.

# set optimizer
optimizer = ppsci.optimizer.Adam(lr_scheduler)(model)

3.7 Validator Construction

During training, the current model is typically evaluated on a validation (test) set at fixed epoch intervals, so a validator is built with ppsci.validate.GeometryValidator.

# set validator
NPOINT_EVAL = NPOINT_PDE * cfg.NTIME_ALL
residual_validator = ppsci.validate.GeometryValidator(
    equation["NavierStokes"].equations,
    {"momentum_x": 0, "continuity": 0, "momentum_y": 0},
    geom["time_rect"],
    {
        "dataset": "NamedArrayDataset",
        "total_size": NPOINT_EVAL,
        "batch_size": cfg.EVAL.batch_size.residual_validator,
        "sampler": {"name": "BatchSampler"},
    },
    ppsci.loss.MSELoss("sum"),
    evenly=True,
    metric={"MSE": ppsci.metric.MSE()},
    with_initial=True,
    name="Residual",
)
validator = {residual_validator.name: residual_validator}

The equation settings are the same as in Constraint Construction and describe how the target variables to be evaluated are computed;

here the label values of the three target variables momentum_x, continuity, and momentum_y are set to 0;

the computational domain is the same as in Constraint Construction, i.e. evaluation is performed on the specified domain;

the sampling configuration must specify the total number of evaluation points total_size, which is set here to 9801 * 16 (a 99x99 evenly spaced grid at each of the 16 evaluation instants);

the evaluation metric metric is simply ppsci.metric.MSE;

the remaining settings are similar to those in Constraint Construction.

3.8 Visualizer Construction

When evaluating the model, if the results are data that can be visualized, a suitable visualizer can be chosen to render the outputs.

The output data here are a 2D point set over the region: at each time instant \(t\) the coordinates are \((x^t_i, y^t_i)\) with corresponding values \((u^t_i, v^t_i, p^t_i)\). We therefore only need to save the evaluation outputs, one per time instant, as 16 VTU-format files and open them in visualization software. The code is as follows:

# set visualizer(optional)
NPOINT_BC = NPOINT_TOP + NPOINT_DOWN + NPOINT_LEFT + NPOINT_RIGHT
vis_initial_points = geom["time_rect"].sample_initial_interior(
    (NPOINT_IC + NPOINT_BC), evenly=True
)
vis_pde_points = geom["time_rect"].sample_interior(
    (NPOINT_PDE + NPOINT_BC) * NTIME_PDE, evenly=True
)
vis_points = vis_initial_points
# manually collate input data for visualization,
# (interior+boundary) x all timestamps
for t in range(NTIME_PDE):
    for key in geom["time_rect"].dim_keys:
        vis_points[key] = np.concatenate(
            (
                vis_points[key],
                vis_pde_points[key][
                    t
                    * (NPOINT_PDE + NPOINT_BC) : (t + 1)
                    * (NPOINT_PDE + NPOINT_BC)
                ],
            )
        )

visualizer = {
    "visualize_u_v": ppsci.visualize.VisualizerVtu(
        vis_points,
        {"u": lambda d: d["u"], "v": lambda d: d["v"], "p": lambda d: d["p"]},
        num_timestamps=cfg.NTIME_ALL,
        prefix="result_u_v",
    )
}

3.9 Model Training, Evaluation, and Visualization

After the above setup, simply pass the instantiated objects to ppsci.solver.Solver in order, then launch training, evaluation, and visualization.

# initialize solver
solver = ppsci.solver.Solver(
    model,
    constraint,
    cfg.output_dir,
    optimizer,
    lr_scheduler,
    cfg.TRAIN.epochs,
    cfg.TRAIN.iters_per_epoch,
    eval_during_train=cfg.TRAIN.eval_during_train,
    eval_freq=cfg.TRAIN.eval_freq,
    equation=equation,
    geom=geom,
    validator=validator,
    visualizer=visualizer,
)
# train model
solver.train()
# evaluate after finished training
solver.eval()
# visualize prediction after finished training
solver.visualize()

4. Full Code

ldc2d_unsteady_Re10.py
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from os import path as osp

import hydra
import numpy as np
from omegaconf import DictConfig

import ppsci
from ppsci.utils import logger


def train(cfg: DictConfig):
    # set random seed for reproducibility
    ppsci.utils.misc.set_random_seed(cfg.seed)
    # initialize logger
    logger.init_logger("ppsci", osp.join(cfg.output_dir, "train.log"), "info")

    # set model
    model = ppsci.arch.MLP(**cfg.MODEL.model)

    # set equation
    equation = {"NavierStokes": ppsci.equation.NavierStokes(cfg.NU, cfg.RHO, 2, True)}

    # set timestamps(including initial t0)
    timestamps = np.linspace(0.0, 1.5, cfg.NTIME_ALL, endpoint=True)
    # set time-geometry
    geom = {
        "time_rect": ppsci.geometry.TimeXGeometry(
            ppsci.geometry.TimeDomain(0.0, 1.5, timestamps=timestamps),
            ppsci.geometry.Rectangle((-0.05, -0.05), (0.05, 0.05)),
        )
    }

    # set dataloader config
    train_dataloader_cfg = {
        "dataset": "IterableNamedArrayDataset",
        "iters_per_epoch": cfg.TRAIN.iters_per_epoch,
    }

    # pde/bc constraint use t1~tn, initial constraint use t0
    NPOINT_PDE, NTIME_PDE = 99**2, cfg.NTIME_ALL - 1
    NPOINT_TOP, NTIME_TOP = 101, cfg.NTIME_ALL - 1
    NPOINT_DOWN, NTIME_DOWN = 101, cfg.NTIME_ALL - 1
    NPOINT_LEFT, NTIME_LEFT = 99, cfg.NTIME_ALL - 1
    NPOINT_RIGHT, NTIME_RIGHT = 99, cfg.NTIME_ALL - 1
    NPOINT_IC, NTIME_IC = 99**2, 1

    # set constraint
    pde = ppsci.constraint.InteriorConstraint(
        equation["NavierStokes"].equations,
        {"continuity": 0, "momentum_x": 0, "momentum_y": 0},
        geom["time_rect"],
        {**train_dataloader_cfg, "batch_size": NPOINT_PDE * NTIME_PDE},
        ppsci.loss.MSELoss("sum"),
        evenly=True,
        weight_dict=cfg.TRAIN.weight.pde,  # (1)
        name="EQ",
    )
    bc_top = ppsci.constraint.BoundaryConstraint(
        {"u": lambda out: out["u"], "v": lambda out: out["v"]},
        {"u": 1, "v": 0},
        geom["time_rect"],
        {**train_dataloader_cfg, "batch_size": NPOINT_TOP * NTIME_TOP},
        ppsci.loss.MSELoss("sum"),
        criteria=lambda t, x, y: np.isclose(y, 0.05),
        name="BC_top",
    )
    bc_down = ppsci.constraint.BoundaryConstraint(
        {"u": lambda out: out["u"], "v": lambda out: out["v"]},
        {"u": 0, "v": 0},
        geom["time_rect"],
        {**train_dataloader_cfg, "batch_size": NPOINT_DOWN * NTIME_DOWN},
        ppsci.loss.MSELoss("sum"),
        criteria=lambda t, x, y: np.isclose(y, -0.05),
        name="BC_down",
    )
    bc_left = ppsci.constraint.BoundaryConstraint(
        {"u": lambda out: out["u"], "v": lambda out: out["v"]},
        {"u": 0, "v": 0},
        geom["time_rect"],
        {**train_dataloader_cfg, "batch_size": NPOINT_LEFT * NTIME_LEFT},
        ppsci.loss.MSELoss("sum"),
        criteria=lambda t, x, y: np.isclose(x, -0.05),
        name="BC_left",
    )
    bc_right = ppsci.constraint.BoundaryConstraint(
        {"u": lambda out: out["u"], "v": lambda out: out["v"]},
        {"u": 0, "v": 0},
        geom["time_rect"],
        {**train_dataloader_cfg, "batch_size": NPOINT_RIGHT * NTIME_RIGHT},
        ppsci.loss.MSELoss("sum"),
        criteria=lambda t, x, y: np.isclose(x, 0.05),
        name="BC_right",
    )
    ic = ppsci.constraint.InitialConstraint(
        {"u": lambda out: out["u"], "v": lambda out: out["v"]},
        {"u": 0, "v": 0},
        geom["time_rect"],
        {**train_dataloader_cfg, "batch_size": NPOINT_IC * NTIME_IC},
        ppsci.loss.MSELoss("sum"),
        evenly=True,
        name="IC",
    )
    # wrap constraints together
    constraint = {
        pde.name: pde,
        bc_top.name: bc_top,
        bc_down.name: bc_down,
        bc_left.name: bc_left,
        bc_right.name: bc_right,
        ic.name: ic,
    }

    # set training hyper-parameters
    lr_scheduler = ppsci.optimizer.lr_scheduler.Cosine(
        **cfg.TRAIN.lr_scheduler,
        warmup_epoch=int(0.05 * cfg.TRAIN.epochs),
    )()

    # set optimizer
    optimizer = ppsci.optimizer.Adam(lr_scheduler)(model)

    # set validator
    NPOINT_EVAL = NPOINT_PDE * cfg.NTIME_ALL
    residual_validator = ppsci.validate.GeometryValidator(
        equation["NavierStokes"].equations,
        {"momentum_x": 0, "continuity": 0, "momentum_y": 0},
        geom["time_rect"],
        {
            "dataset": "NamedArrayDataset",
            "total_size": NPOINT_EVAL,
            "batch_size": cfg.EVAL.batch_size.residual_validator,
            "sampler": {"name": "BatchSampler"},
        },
        ppsci.loss.MSELoss("sum"),
        evenly=True,
        metric={"MSE": ppsci.metric.MSE()},
        with_initial=True,
        name="Residual",
    )
    validator = {residual_validator.name: residual_validator}

    # set visualizer(optional)
    NPOINT_BC = NPOINT_TOP + NPOINT_DOWN + NPOINT_LEFT + NPOINT_RIGHT
    vis_initial_points = geom["time_rect"].sample_initial_interior(
        (NPOINT_IC + NPOINT_BC), evenly=True
    )
    vis_pde_points = geom["time_rect"].sample_interior(
        (NPOINT_PDE + NPOINT_BC) * NTIME_PDE, evenly=True
    )
    vis_points = vis_initial_points
    # manually collate input data for visualization,
    # (interior+boundary) x all timestamps
    for t in range(NTIME_PDE):
        for key in geom["time_rect"].dim_keys:
            vis_points[key] = np.concatenate(
                (
                    vis_points[key],
                    vis_pde_points[key][
                        t
                        * (NPOINT_PDE + NPOINT_BC) : (t + 1)
                        * (NPOINT_PDE + NPOINT_BC)
                    ],
                )
            )

    visualizer = {
        "visualize_u_v": ppsci.visualize.VisualizerVtu(
            vis_points,
            {"u": lambda d: d["u"], "v": lambda d: d["v"], "p": lambda d: d["p"]},
            num_timestamps=cfg.NTIME_ALL,
            prefix="result_u_v",
        )
    }

    # initialize solver
    solver = ppsci.solver.Solver(
        model,
        constraint,
        cfg.output_dir,
        optimizer,
        lr_scheduler,
        cfg.TRAIN.epochs,
        cfg.TRAIN.iters_per_epoch,
        eval_during_train=cfg.TRAIN.eval_during_train,
        eval_freq=cfg.TRAIN.eval_freq,
        equation=equation,
        geom=geom,
        validator=validator,
        visualizer=visualizer,
    )
    # train model
    solver.train()
    # evaluate after finished training
    solver.eval()
    # visualize prediction after finished training
    solver.visualize()


def evaluate(cfg: DictConfig):
    # set random seed for reproducibility
    ppsci.utils.misc.set_random_seed(cfg.seed)
    # initialize logger
    logger.init_logger("ppsci", osp.join(cfg.output_dir, "eval.log"), "info")

    # set model
    model = ppsci.arch.MLP(**cfg.MODEL.model)

    # set equation
    equation = {"NavierStokes": ppsci.equation.NavierStokes(cfg.NU, cfg.RHO, 2, True)}

    # set timestamps(including initial t0)
    timestamps = np.linspace(0.0, 1.5, cfg.NTIME_ALL, endpoint=True)
    # set time-geometry
    geom = {
        "time_rect": ppsci.geometry.TimeXGeometry(
            ppsci.geometry.TimeDomain(0.0, 1.5, timestamps=timestamps),
            ppsci.geometry.Rectangle((-0.05, -0.05), (0.05, 0.05)),
        )
    }

    # pde/bc constraint use t1~tn, initial constraint use t0
    NPOINT_PDE = 99**2
    NPOINT_TOP = 101
    NPOINT_DOWN = 101
    NPOINT_LEFT = 99
    NPOINT_RIGHT = 99
    NPOINT_IC = 99**2
    NTIME_PDE = cfg.NTIME_ALL - 1

    # set validator
    NPOINT_EVAL = NPOINT_PDE * cfg.NTIME_ALL
    residual_validator = ppsci.validate.GeometryValidator(
        equation["NavierStokes"].equations,
        {"momentum_x": 0, "continuity": 0, "momentum_y": 0},
        geom["time_rect"],
        {
            "dataset": "NamedArrayDataset",
            "total_size": NPOINT_EVAL,
            "batch_size": cfg.EVAL.batch_size.residual_validator,
            "sampler": {"name": "BatchSampler"},
        },
        ppsci.loss.MSELoss("sum"),
        evenly=True,
        metric={"MSE": ppsci.metric.MSE()},
        with_initial=True,
        name="Residual",
    )
    validator = {residual_validator.name: residual_validator}

    # set visualizer(optional)
    NPOINT_BC = NPOINT_TOP + NPOINT_DOWN + NPOINT_LEFT + NPOINT_RIGHT
    vis_initial_points = geom["time_rect"].sample_initial_interior(
        (NPOINT_IC + NPOINT_BC), evenly=True
    )
    vis_pde_points = geom["time_rect"].sample_interior(
        (NPOINT_PDE + NPOINT_BC) * NTIME_PDE, evenly=True
    )
    vis_points = vis_initial_points
    # manually collate input data for visualization,
    # (interior+boundary) x all timestamps
    for t in range(NTIME_PDE):
        for key in geom["time_rect"].dim_keys:
            vis_points[key] = np.concatenate(
                (
                    vis_points[key],
                    vis_pde_points[key][
                        t
                        * (NPOINT_PDE + NPOINT_BC) : (t + 1)
                        * (NPOINT_PDE + NPOINT_BC)
                    ],
                )
            )

    visualizer = {
        "visualize_u_v": ppsci.visualize.VisualizerVtu(
            vis_points,
            {"u": lambda d: d["u"], "v": lambda d: d["v"], "p": lambda d: d["p"]},
            num_timestamps=cfg.NTIME_ALL,
            prefix="result_u_v",
        )
    }

    # directly evaluate pretrained model(optional)
    solver = ppsci.solver.Solver(
        model,
        output_dir=cfg.output_dir,
        equation=equation,
        geom=geom,
        validator=validator,
        visualizer=visualizer,
        pretrained_model_path=cfg.EVAL.pretrained_model_path,
    )
    solver.eval()
    # visualize prediction for pretrained model(optional)
    solver.visualize()


@hydra.main(
    version_base=None, config_path="./conf", config_name="ldc2d_unsteady_Re10.yaml"
)
def main(cfg: DictConfig):
    if cfg.mode == "train":
        train(cfg)
    elif cfg.mode == "eval":
        evaluate(cfg)
    else:
        raise ValueError(f"cfg.mode should in ['train', 'eval'], but got '{cfg.mode}'")


if __name__ == "__main__":
    main()

5. Results

Shown below are the model predictions at the final time instant for the interior points of the square computational domain of unit side length, together with the OpenFOAM solution, including the horizontal (x) velocity \(u(x,y)\), the vertical (y) velocity \(v(x,y)\), and the pressure \(p(x,y)\) at each point.

Note

This case is provided only as a demo and has not been fully tuned; some of the results shown below may differ somewhat from the OpenFOAM results.

u_pred_openfoam

Left: predicted u; right: OpenFOAM u

v_pred_openfoam

Left: predicted v; right: OpenFOAM v

p_pred_openfoam

Left: predicted p; right: OpenFOAM p

As can be seen, the model predictions are broadly consistent with the OpenFOAM results.

