
2D-Cylinder (2D Flow Around a Cylinder)


Model training commands:

# linux
wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/cylinder2d_unsteady_Re100/cylinder2d_unsteady_Re100_dataset.tar
# windows
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/cylinder2d_unsteady_Re100/cylinder2d_unsteady_Re100_dataset.tar --output cylinder2d_unsteady_Re100_dataset.tar
# unzip it
tar -xvf cylinder2d_unsteady_Re100_dataset.tar
python cylinder2d_unsteady_Re100.py
Model evaluation commands (using the pretrained model):

# linux
wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/cylinder2d_unsteady_Re100/cylinder2d_unsteady_Re100_dataset.tar
# windows
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/cylinder2d_unsteady_Re100/cylinder2d_unsteady_Re100_dataset.tar --output cylinder2d_unsteady_Re100_dataset.tar
# unzip it
tar -xvf cylinder2d_unsteady_Re100_dataset.tar
python cylinder2d_unsteady_Re100.py mode=eval EVAL.pretrained_model_path=https://paddle-org.bj.bcebos.com/paddlescience/models/cylinder2d_unsteady_Re100/cylinder2d_unsteady_Re100_pretrained.pdparams
Pretrained model: cylinder2d_unsteady_Re100_pretrained.pdparams
Metrics:
loss(Residual): 0.00398
MSE.continuity(Residual): 0.00126
MSE.momentum_x(Residual): 0.00151
MSE.momentum_y(Residual): 0.00120

1. Background

Flow around a cylinder has applications in many fields. In industrial design, for example, it is used to simulate and optimize how fluids move through various devices, such as the fluid-dynamic performance of wind turbines, cars, and aircraft. It also appears in environmental applications, such as predicting and controlling river flooding and studying the spread of pollutants. In addition, the problem is of practical importance in engineering areas such as fluid dynamics, fluid statics, heat exchange, and aerodynamics.

2D Flow Around a Cylinder refers to low-speed, steady flow past a two-dimensional circular cylinder, whose flow pattern depends only on the Reynolds number \(Re\). For \(Re \le 1\), inertial forces are secondary to viscous forces, the streamlines upstream and downstream of the cylinder are fore-aft symmetric, and the drag coefficient is approximately inversely proportional to \(Re\) (drag coefficients of roughly 10~60); this range of \(Re\) is called the Stokes regime. As \(Re\) increases, the streamlines upstream and downstream of the cylinder gradually lose their symmetry.

2. Problem Definition

Conservation of mass:

\[ \dfrac{\partial u}{\partial x} + \dfrac{\partial v}{\partial y} = 0 \]

Conservation of momentum in \(x\):

\[ \dfrac{\partial u}{\partial t} + u\dfrac{\partial u}{\partial x} + v\dfrac{\partial u}{\partial y} = -\dfrac{1}{\rho}\dfrac{\partial p}{\partial x} + \nu(\dfrac{\partial ^2 u}{\partial x ^2} + \dfrac{\partial ^2 u}{\partial y ^2}) \]

Conservation of momentum in \(y\):

\[ \dfrac{\partial v}{\partial t} + u\dfrac{\partial v}{\partial x} + v\dfrac{\partial v}{\partial y} = -\dfrac{1}{\rho}\dfrac{\partial p}{\partial y} + \nu(\dfrac{\partial ^2 v}{\partial x ^2} + \dfrac{\partial ^2 v}{\partial y ^2}) \]

Let the characteristic scales be:

\(t^* = \dfrac{L}{U_0}\)

\(x^*=y^* = L\)

\(u^*=v^* = U_0\)

\(p^* = \rho {U_0}^2\)

and define:

nondimensional time \(\tau = \dfrac{t}{t^*}\)

nondimensional coordinates \(X = \dfrac{x}{x^*}\) and \(Y = \dfrac{y}{y^*}\)

nondimensional velocities \(U = \dfrac{u}{u^*}\) and \(V = \dfrac{v}{v^*}\)

nondimensional pressure \(P = \dfrac{p}{p^*}\)

Reynolds number \(Re = \dfrac{L U_0}{\nu}\)
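
As a quick check of the scaling (a short derivation using only the definitions above), substitute \(u = U_0 U\), \(v = U_0 V\), \(x = L X\), \(y = L Y\), \(t = \dfrac{L}{U_0}\tau\) and \(p = \rho U_0^2 P\) into the dimensional continuity and \(x\)-momentum equations:

\[ \dfrac{\partial u}{\partial x} + \dfrac{\partial v}{\partial y} = \dfrac{U_0}{L}\left(\dfrac{\partial U}{\partial X} + \dfrac{\partial V}{\partial Y}\right) = 0 \]

\[ \dfrac{U_0^2}{L}\left(\dfrac{\partial U}{\partial \tau} + U\dfrac{\partial U}{\partial X} + V\dfrac{\partial U}{\partial Y}\right) = -\dfrac{U_0^2}{L}\dfrac{\partial P}{\partial X} + \dfrac{\nu U_0}{L^2}\left(\dfrac{\partial^2 U}{\partial X^2} + \dfrac{\partial^2 U}{\partial Y^2}\right) \]

Dividing the momentum equation by \(U_0^2 / L\) turns the viscous coefficient into \(\dfrac{\nu}{L U_0} = \dfrac{1}{Re}\), which gives the nondimensional form below.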

This yields the following nondimensional Navier-Stokes equations, imposed on the interior of the fluid domain:

Conservation of mass:

\[ \dfrac{\partial U}{\partial X} + \dfrac{\partial V}{\partial Y} = 0 \]

Conservation of momentum in \(x\):

\[ \dfrac{\partial U}{\partial \tau} + U\dfrac{\partial U}{\partial X} + V\dfrac{\partial U}{\partial Y} = -\dfrac{\partial P}{\partial X} + \dfrac{1}{Re}(\dfrac{\partial ^2 U}{\partial X^2} + \dfrac{\partial ^2 U}{\partial Y^2}) \]

Conservation of momentum in \(y\):

\[ \dfrac{\partial V}{\partial \tau} + U\dfrac{\partial V}{\partial X} + V\dfrac{\partial V}{\partial Y} = -\dfrac{\partial P}{\partial Y} + \dfrac{1}{Re}(\dfrac{\partial ^2 V}{\partial X^2} + \dfrac{\partial ^2 V}{\partial Y^2}) \]

On the outer boundaries of the fluid domain and on the cylinder boundary inside the domain, Dirichlet boundary conditions are imposed:

Inlet boundary of the fluid domain:

\[ u=1, v=0 \]

Cylinder boundary:

\[ u=0, v=0 \]

Outlet boundary of the fluid domain:

\[ p=0 \]

3. Solving the Problem

Next we explain, step by step, how to translate this problem into PaddleScience code and solve it with deep learning. To keep the introduction to PaddleScience brief, only the key steps such as model construction, equation construction, and computational-domain construction are described; for the remaining details please refer to the API documentation.

Before writing any code, first download the dataset required for training and evaluation with the following commands:

wget https://paddle-org.bj.bcebos.com/paddlescience/datasets/cylinder2d_unsteady_Re100/cylinder2d_unsteady_Re100_dataset.tar
tar -xf cylinder2d_unsteady_Re100_dataset.tar

3.1 Model Construction

In the 2D-Cylinder problem, each known coordinate point \((t, x, y)\) has three unknowns to solve for: the velocity \(u\) in the \(x\) direction, the velocity \(v\) in the \(y\) direction, and the pressure \(p\). Here we use a simple MLP (Multilayer Perceptron) to represent the mapping \(f: \mathbb{R}^3 \to \mathbb{R}^3\) from \((t, x, y)\) to \((u, v, p)\), i.e.:

\[ u, v, p = f(t, x, y) \]

In the equation above, \(f\) is the MLP model itself, expressed in PaddleScience code as follows:

# set model
model = ppsci.arch.MLP(**cfg.MODEL)

To access specific variables accurately and quickly during computation, we name the network's input variables ["t", "x", "y"] and its output variables ["u", "v", "p"]; these names are kept consistent throughout the rest of the code.

By then specifying the number of layers, the number of neurons per layer, and the activation function of the MLP, we instantiate a neural network model with 9 hidden layers, 50 neurons per layer, and "tanh" as the activation function, as sketched below.
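
For reference, a minimal sketch of what model = ppsci.arch.MLP(**cfg.MODEL) is assumed to expand to for this configuration; the keyword names follow the ppsci.arch.MLP signature as understood here and are an assumption rather than a copy of the YAML file:

# Hypothetical explicit equivalent of the config-driven construction above.
model = ppsci.arch.MLP(
    input_keys=("t", "x", "y"),   # network inputs: time and spatial coordinates
    output_keys=("u", "v", "p"),  # network outputs: velocities and pressure
    num_layers=9,                 # 9 hidden layers
    hidden_size=50,               # 50 neurons per hidden layer
    activation="tanh",            # tanh activation function
)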

3.2 Equation Construction

Since 2D-Cylinder uses the two-dimensional unsteady form of the Navier-Stokes equations, we can directly use the built-in NavierStokes equation in PaddleScience:

# set equation
equation = {
    "NavierStokes": ppsci.equation.NavierStokes(cfg.VISCOSITY, cfg.DENSITY, 2, True)
}

When instantiating the NavierStokes class we specify the required parameters: the viscosity \(\nu=0.02\) and the fluid density \(\rho=1.0\), as sketched in explicit form below.
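
A sketch of what the config-driven call above is assumed to resolve to; the literal values come from the text, and cfg.VISCOSITY / cfg.DENSITY are assumed to hold them in conf/cylinder2d_unsteady_Re100.yaml:

# Hypothetical explicit form of the instantiation above.
equation = {
    "NavierStokes": ppsci.equation.NavierStokes(
        nu=0.02,    # viscosity
        rho=1.0,    # fluid density
        dim=2,      # two-dimensional problem
        time=True,  # unsteady (time-dependent) form
    )
}
# equation["NavierStokes"].equations exposes the residuals named
# "continuity", "momentum_x" and "momentum_y" used later for constraints.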

3.3 Computational Domain Construction

In this case, the 2D-Cylinder computational domain is a point cloud stored in CSV files, so we can directly use PaddleScience's built-in point-cloud geometry PointCloud and time domain TimeDomain, combined into a space-time TimeXGeometry computational domain.

# set timestamps
train_timestamps = np.linspace(
    cfg.TIME_START, cfg.TIME_END, cfg.NUM_TIMESTAMPS, endpoint=True
).astype("float32")
train_timestamps = np.random.choice(train_timestamps, cfg.TRAIN_NUM_TIMESTAMPS)
train_timestamps.sort()
t0 = np.array([cfg.TIME_START], dtype="float32")

val_timestamps = np.linspace(
    cfg.TIME_START, cfg.TIME_END, cfg.NUM_TIMESTAMPS, endpoint=True
).astype("float32")

logger.message(f"train_timestamps: {train_timestamps.tolist()}")
logger.message(f"val_timestamps: {val_timestamps.tolist()}")

# set time-geometry
geom = {
    "time_rect": ppsci.geometry.TimeXGeometry(
        ppsci.geometry.TimeDomain(
            cfg.TIME_START,
            cfg.TIME_END,
            timestamps=np.concatenate((t0, train_timestamps), axis=0),
        ),
        ppsci.geometry.PointCloud(
            reader.load_csv_file(
                "./datasets/domain_train.csv",
                ("x", "y"),
                alias_dict={"x": "Points:0", "y": "Points:1"},
            ),
            ("x", "y"),
        ),
    ),
    "time_rect_eval": ppsci.geometry.PointCloud(
        reader.load_csv_file(
            "./datasets/domain_eval.csv",
            ("t", "x", "y"),
        ),
        ("t", "x", "y"),
    ),
}
Note: the evaluation data points already contain timestamp information, so there is no need to combine them with a TimeDomain into a TimeXGeometry; simply reading them with PointCloud is enough.
Tip

PointCloud and TimeDomain are two Geometry-derived classes that can also be used on their own.

If the input data come only from a point-cloud geometry, you can directly create a spatial geometry object with ppsci.geometry.PointCloud(...);

if the input data come only from a one-dimensional time domain, you can directly build a time-domain object with ppsci.geometry.TimeDomain(...), as sketched below.
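
A minimal sketch of using the two classes on their own, reusing the CSV layout, config fields, and train_timestamps array from the code above:

# Purely spatial geometry built from the training point cloud (same CSV columns as above).
interior_points = ppsci.geometry.PointCloud(
    reader.load_csv_file(
        "./datasets/domain_train.csv",
        ("x", "y"),
        alias_dict={"x": "Points:0", "y": "Points:1"},
    ),
    ("x", "y"),
)
# Purely temporal domain over the same time range, discretized at the training timestamps.
time_domain = ppsci.geometry.TimeDomain(
    cfg.TIME_START,
    cfg.TIME_END,
    timestamps=train_timestamps,
)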

3.4 Constraint Construction

The nondimensional equations and boundary conditions obtained in Section 2 correspond to three kinds of constraints that guide model training on the computational domain, namely:

  1. The nondimensional Navier-Stokes equation constraint imposed on interior points of the fluid domain (after simple rearrangement):

    \[ \dfrac{\partial U}{\partial X} + \dfrac{\partial V}{\partial Y} = 0 \]
    \[ \dfrac{\partial U}{\partial \tau} + U\dfrac{\partial U}{\partial X} + V\dfrac{\partial U}{\partial Y} + \dfrac{\partial P}{\partial X} - \dfrac{1}{Re}(\dfrac{\partial ^2 U}{\partial X^2} + \dfrac{\partial ^2 U}{\partial Y^2}) = 0 \]
    \[ \dfrac{\partial V}{\partial \tau} + U\dfrac{\partial V}{\partial X} + V\dfrac{\partial V}{\partial Y} + \dfrac{\partial P}{\partial Y} - \dfrac{1}{Re}(\dfrac{\partial ^2 V}{\partial X^2} + \dfrac{\partial ^2 V}{\partial Y^2}) = 0 \]

    For convenient access to these intermediate variables, the NavierStokes class internally names the left-hand sides of the above equations continuity, momentum_x, and momentum_y.

  2. The Dirichlet boundary condition constraints imposed on the fluid domain inlet, the interior cylinder boundary, and the fluid domain outlet:

    Inlet boundary of the fluid domain:

    \[ u=1, v=0 \]

    Outlet boundary of the fluid domain:

    \[ p=0 \]

    Cylinder boundary:

    \[ u=0, v=0 \]
  3. The initial value constraint imposed on interior points of the fluid domain at the initial time:

    \[ u=u_{t0}, v=v_{t0}, p=p_{t0} \]

Next we build the above constraints with PaddleScience's built-in InteriorConstraint and SupervisedConstraint classes.

Before defining the constraints, we specify, for each constraint, the number of sampled points, i.e. how much data each constraint samples from its computational domain, together with the common sampling configuration.

# pde/bc/sup constraint use t1~tn, initial constraint use t0
NTIME_PDE = len(train_timestamps)
ALIAS_DICT = {"x": "Points:0", "y": "Points:1", "u": "U:0", "v": "U:1"}

3.4.1 Interior Point Constraint

Taking the InteriorConstraint acting on interior points of the fluid domain as an example, the code is as follows:

# set constraint
pde_constraint = ppsci.constraint.InteriorConstraint(
    equation["NavierStokes"].equations,
    {"continuity": 0, "momentum_x": 0, "momentum_y": 0},
    geom["time_rect"],
    {
        "dataset": "IterableNamedArrayDataset",
        "batch_size": cfg.NPOINT_PDE * NTIME_PDE,
        "iters_per_epoch": cfg.TRAIN.iters_per_epoch,
    },
    ppsci.loss.MSELoss("mean"),
    name="EQ",
)

The first argument to InteriorConstraint is the set of equation expressions, which describes how the constraint targets are computed; here we pass the equation["NavierStokes"].equations instantiated in Section 3.2.

The second argument is the target values of the constrained variables. In this problem we want the three intermediate results of the Navier-Stokes equations, continuity, momentum_x, and momentum_y, to be driven to 0, so all of their targets are set to 0.

The third argument is the computational domain on which the constraint equations act; here we pass geom["time_rect"] instantiated in Section 3.3.

The fourth argument is the sampling configuration on the computational domain. Here we train on all data points, so the dataset field is set to "IterableNamedArrayDataset", iters_per_epoch is set to 1, and the number of sampled points batch_size is set to 9420 * 30 (9420 data points per time instant, 30 time instants in total; see the small sanity check after this list of arguments).

The fifth argument is the loss function; here we use the common MSE loss with reduction set to "mean", i.e. the loss terms of all participating data points are summed and averaged.

The sixth argument is the name of the constraint; each constraint needs a name so it can be looked up later. Here we simply name it "EQ".
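
As a quick sanity check of the batch size quoted above (a sketch; the two constants are hypothetical stand-ins for cfg.NPOINT_PDE and len(train_timestamps)):

# Hypothetical stand-ins for the values quoted in the text above.
NPOINT_PDE = 9420                     # interior data points generated per time instant
NTIME_PDE = 30                        # number of sampled training time instants
batch_size = NPOINT_PDE * NTIME_PDE   # 282600 points: the full interior set used each iteration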

3.4.2 Boundary Constraints

Similarly, we also need Dirichlet boundary constraints for the three boundaries of the fluid domain: the inlet, the outlet, and the cylinder boundary. Taking the bc_inlet_cylinder constraint as an example, since it acts on a boundary whose data are recorded in a CSV file, we use the SupervisedConstraint class and specify its first argument, the dataloader_cfg configuration dict, according to the following rules:

  • The dataset field of this config dict is itself a configuration dict describing the CSV file ./datasets/domain_inlet_cylinder.csv; its entries are listed below.

  • name specifies how the data are loaded; here we use IterableCSVDataset as a full-batch data loader.

  • file_path specifies the path of the data file, here ./datasets/domain_inlet_cylinder.csv.

  • input_keys specifies which input columns to read from the file, given as the renamed keys, here ("x", "y").

  • label_keys specifies which label columns to read from the file, given as the renamed keys, here ("u", "v").

  • Because the same variable may have different column names in different CSV files, and some of those names are long and easy to mistype, alias_dict specifies aliases for the columns, here {"x": "Points:0", "y": "Points:1", "u": "U:0", "v": "U:1"}.

  • weight_dict specifies the weight of each label in the loss; here we scale the weights of "u" and "v" up to 10, i.e. {"u": 10, "v": 10}.

  • timestamps specifies whether reading the data involves time information; here we set it to the training timestamps train_timestamps.

The second argument to SupervisedConstraint is the loss function; as before, we use the MSE loss with reduction set to "mean", averaging the loss terms of all participating data points.

The third argument is the constraint name, used for later lookup; here we name it "BC_inlet_cylinder".

The remaining bc_outlet constraint is built in the same way; the code for both is shown below:

bc_inlet_cylinder = ppsci.constraint.SupervisedConstraint(
    {
        "dataset": {
            "name": "IterableCSVDataset",
            "file_path": cfg.DOMAIN_INLET_CYLINDER_PATH,
            "input_keys": ("x", "y"),
            "label_keys": ("u", "v"),
            "alias_dict": ALIAS_DICT,
            "weight_dict": {"u": 10, "v": 10},
            "timestamps": train_timestamps,
        },
    },
    ppsci.loss.MSELoss("mean"),
    name="BC_inlet_cylinder",
)
bc_outlet = ppsci.constraint.SupervisedConstraint(
    {
        "dataset": {
            "name": "IterableCSVDataset",
            "file_path": cfg.DOMAIN_OUTLET_PATH,
            "input_keys": ("x", "y"),
            "label_keys": ("p",),
            "alias_dict": ALIAS_DICT,
            "timestamps": train_timestamps,
        },
    },
    ppsci.loss.MSELoss("mean"),
    name="BC_outlet",
)

3.4.3 Initial Value Constraint

For points inside the fluid domain at time \(t=t_0\), we additionally impose initial value constraints on \(u\), \(v\), and \(p\):

ic = ppsci.constraint.SupervisedConstraint(
    {
        "dataset": {
            "name": "IterableCSVDataset",
            "file_path": cfg.IC0_1_PATH,
            "input_keys": ("x", "y"),
            "label_keys": ("u", "v", "p"),
            "alias_dict": ALIAS_DICT,
            "weight_dict": {"u": 10, "v": 10, "p": 10},
            "timestamps": t0,
        },
    },
    ppsci.loss.MSELoss("mean"),
    name="IC",
)

3.4.4 Supervised Constraint

This case also adds a number of supervision points inside the fluid domain to help the model converge, so we finally add a supervised constraint whose data likewise come from a CSV file:

sup_constraint = ppsci.constraint.SupervisedConstraint(
    {
        "dataset": {
            "name": "IterableCSVDataset",
            "file_path": cfg.PROBE1_50_PATH,
            "input_keys": ("t", "x", "y"),
            "label_keys": ("u", "v"),
            "alias_dict": ALIAS_DICT,
            "weight_dict": {"u": 10, "v": 10},
            "timestamps": train_timestamps,
        },
    },
    ppsci.loss.MSELoss("mean"),
    name="Sup",
)

After the PDE, boundary, initial value, and supervised constraints are built, we wrap them into a dict keyed by the names we just assigned, for convenient later access.

# wrap constraints together
constraint = {
    pde_constraint.name: pde_constraint,
    bc_inlet_cylinder.name: bc_inlet_cylinder,
    bc_outlet.name: bc_outlet,
    ic.name: ic,
    sup_constraint.name: sup_constraint,
}

3.5 Hyperparameter Settings

Next we specify the number of training epochs and the learning rate. Based on experimental experience, we train for 40,000 epochs, evaluate every 400 epochs, and set the learning rate to 0.001; a sketch of the corresponding configuration entries is shown below.
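
A sketch of the training-related entries that cfg is assumed to carry for these settings; the actual values live in conf/cylinder2d_unsteady_Re100.yaml, and the exact field layout here is an assumption:

from omegaconf import OmegaConf

# Hypothetical reconstruction of the training configuration described above.
cfg_train = OmegaConf.create(
    {
        "TRAIN": {
            "epochs": 40000,             # total number of training epochs
            "eval_during_train": True,   # run evaluation during training
            "eval_freq": 400,            # evaluate every 400 epochs
            "iters_per_epoch": 1,        # full-batch training: one iteration per epoch
            "learning_rate": 0.001,      # learning rate for the Adam optimizer
        }
    }
)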

3.6 Optimizer Construction

Training uses an optimizer to update the model parameters; here we choose the widely used Adam optimizer.

# set optimizer
optimizer = ppsci.optimizer.Adam(cfg.TRAIN.learning_rate)(model)

3.7 Validator Construction

During training, the model is usually evaluated on a validation (test) set at a fixed epoch interval, so we build a validator with ppsci.validate.GeometryValidator.

# set validator
NPOINT_EVAL = (
    cfg.NPOINT_PDE + cfg.NPOINT_INLET_CYLINDER + cfg.NPOINT_OUTLET
) * cfg.NUM_TIMESTAMPS
residual_validator = ppsci.validate.GeometryValidator(
    equation["NavierStokes"].equations,
    {"continuity": 0, "momentum_x": 0, "momentum_y": 0},
    geom["time_rect_eval"],
    {
        "dataset": "NamedArrayDataset",
        "total_size": NPOINT_EVAL,
        "batch_size": cfg.EVAL.batch_size,
        "sampler": {"name": "BatchSampler"},
    },
    ppsci.loss.MSELoss("mean"),
    metric={"MSE": ppsci.metric.MSE()},
    name="Residual",
)
validator = {residual_validator.name: residual_validator}

The equation settings are the same as in the constraint construction and describe how the evaluation targets are computed.

Here we set the target values of the three variables momentum_x, continuity, and momentum_y to 0.

The computational domain is likewise the same as in the constraint construction and specifies the domain on which the evaluation is performed.

The sampling configuration needs the total number of evaluation points total_size, which we set to 9662 * 50 (9420 interior points + 161 inlet boundary points + 81 outlet boundary points, over 50 evaluation time instants).

For the evaluation metric, ppsci.metric.MSE is sufficient.

The remaining settings are similar to those used in the constraint construction.

3.8 Visualizer Construction

If the evaluation results are data that can be visualized, we can choose an appropriate visualizer to visualize the model outputs.

The output data in this case are a set of 2D points in a region: at each time \(t\) the coordinates are \((x^t_i, y^t_i)\) with corresponding values \((u^t_i, v^t_i, p^t_i)\). We therefore simply save the evaluated outputs, one per time instant, as 50 VTU files and open them with visualization software. The code is as follows:

# set visualizer(optional)
vis_points = geom["time_rect_eval"].sample_interior(
    (cfg.NPOINT_PDE + cfg.NPOINT_INLET_CYLINDER + cfg.NPOINT_OUTLET)
    * cfg.NUM_TIMESTAMPS,
    evenly=True,
)
visualizer = {
    "visualize_u_v_p": ppsci.visualize.VisualizerVtu(
        vis_points,
        {"u": lambda d: d["u"], "v": lambda d: d["v"], "p": lambda d: d["p"]},
        num_timestamps=cfg.NUM_TIMESTAMPS,
        prefix="result_u_v_p",
    )
}

3.9 Model Training, Evaluation, and Visualization

With all of the above in place, we simply pass the instantiated objects, in order, to ppsci.solver.Solver and start training, evaluation, and visualization.

# initialize solver
solver = ppsci.solver.Solver(
    model,
    constraint,
    cfg.output_dir,
    optimizer,
    None,
    cfg.TRAIN.epochs,
    cfg.TRAIN.iters_per_epoch,
    eval_during_train=cfg.TRAIN.eval_during_train,
    eval_freq=cfg.TRAIN.eval_freq,
    equation=equation,
    geom=geom,
    validator=validator,
    visualizer=visualizer,
    checkpoint_path=cfg.TRAIN.checkpoint_path,
)
# train model
solver.train()
# evaluate after finished training
solver.eval()
# visualize prediction after finished training
solver.visualize()

4. Complete Code

cylinder2d_unsteady_Re100.py
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from os import path as osp

import hydra
import numpy as np
from omegaconf import DictConfig

import ppsci
from ppsci.utils import logger
from ppsci.utils import reader


def train(cfg: DictConfig):
    # set random seed for reproducibility
    ppsci.utils.misc.set_random_seed(cfg.seed)

    # initialize logger
    logger.init_logger("ppsci", osp.join(cfg.output_dir, "train.log"), "info")

    # set model
    model = ppsci.arch.MLP(**cfg.MODEL)

    # set equation
    equation = {
        "NavierStokes": ppsci.equation.NavierStokes(cfg.VISCOSITY, cfg.DENSITY, 2, True)
    }

    # set timestamps
    train_timestamps = np.linspace(
        cfg.TIME_START, cfg.TIME_END, cfg.NUM_TIMESTAMPS, endpoint=True
    ).astype("float32")
    train_timestamps = np.random.choice(train_timestamps, cfg.TRAIN_NUM_TIMESTAMPS)
    train_timestamps.sort()
    t0 = np.array([cfg.TIME_START], dtype="float32")

    val_timestamps = np.linspace(
        cfg.TIME_START, cfg.TIME_END, cfg.NUM_TIMESTAMPS, endpoint=True
    ).astype("float32")

    logger.message(f"train_timestamps: {train_timestamps.tolist()}")
    logger.message(f"val_timestamps: {val_timestamps.tolist()}")

    # set time-geometry
    geom = {
        "time_rect": ppsci.geometry.TimeXGeometry(
            ppsci.geometry.TimeDomain(
                cfg.TIME_START,
                cfg.TIME_END,
                timestamps=np.concatenate((t0, train_timestamps), axis=0),
            ),
            ppsci.geometry.PointCloud(
                reader.load_csv_file(
                    cfg.DOMAIN_TRAIN_PATH,
                    ("x", "y"),
                    alias_dict={"x": "Points:0", "y": "Points:1"},
                ),
                ("x", "y"),
            ),
        ),
        "time_rect_eval": ppsci.geometry.PointCloud(
            reader.load_csv_file(
                cfg.DOMAIN_EVAL_PATH,
                ("t", "x", "y"),
            ),
            ("t", "x", "y"),
        ),
    }

    # pde/bc/sup constraint use t1~tn, initial constraint use t0
    NTIME_PDE = len(train_timestamps)
    ALIAS_DICT = {"x": "Points:0", "y": "Points:1", "u": "U:0", "v": "U:1"}

    # set constraint
    pde_constraint = ppsci.constraint.InteriorConstraint(
        equation["NavierStokes"].equations,
        {"continuity": 0, "momentum_x": 0, "momentum_y": 0},
        geom["time_rect"],
        {
            "dataset": "IterableNamedArrayDataset",
            "batch_size": cfg.NPOINT_PDE * NTIME_PDE,
            "iters_per_epoch": cfg.TRAIN.iters_per_epoch,
        },
        ppsci.loss.MSELoss("mean"),
        name="EQ",
    )
    bc_inlet_cylinder = ppsci.constraint.SupervisedConstraint(
        {
            "dataset": {
                "name": "IterableCSVDataset",
                "file_path": cfg.DOMAIN_INLET_CYLINDER_PATH,
                "input_keys": ("x", "y"),
                "label_keys": ("u", "v"),
                "alias_dict": ALIAS_DICT,
                "weight_dict": {"u": 10, "v": 10},
                "timestamps": train_timestamps,
            },
        },
        ppsci.loss.MSELoss("mean"),
        name="BC_inlet_cylinder",
    )
    bc_outlet = ppsci.constraint.SupervisedConstraint(
        {
            "dataset": {
                "name": "IterableCSVDataset",
                "file_path": cfg.DOMAIN_OUTLET_PATH,
                "input_keys": ("x", "y"),
                "label_keys": ("p",),
                "alias_dict": ALIAS_DICT,
                "timestamps": train_timestamps,
            },
        },
        ppsci.loss.MSELoss("mean"),
        name="BC_outlet",
    )
    ic = ppsci.constraint.SupervisedConstraint(
        {
            "dataset": {
                "name": "IterableCSVDataset",
                "file_path": cfg.IC0_1_PATH,
                "input_keys": ("x", "y"),
                "label_keys": ("u", "v", "p"),
                "alias_dict": ALIAS_DICT,
                "weight_dict": {"u": 10, "v": 10, "p": 10},
                "timestamps": t0,
            },
        },
        ppsci.loss.MSELoss("mean"),
        name="IC",
    )
    sup_constraint = ppsci.constraint.SupervisedConstraint(
        {
            "dataset": {
                "name": "IterableCSVDataset",
                "file_path": cfg.PROBE1_50_PATH,
                "input_keys": ("t", "x", "y"),
                "label_keys": ("u", "v"),
                "alias_dict": ALIAS_DICT,
                "weight_dict": {"u": 10, "v": 10},
                "timestamps": train_timestamps,
            },
        },
        ppsci.loss.MSELoss("mean"),
        name="Sup",
    )

    # wrap constraints together
    constraint = {
        pde_constraint.name: pde_constraint,
        bc_inlet_cylinder.name: bc_inlet_cylinder,
        bc_outlet.name: bc_outlet,
        ic.name: ic,
        sup_constraint.name: sup_constraint,
    }

    # set optimizer
    optimizer = ppsci.optimizer.Adam(cfg.TRAIN.learning_rate)(model)

    # set validator
    NPOINT_EVAL = (
        cfg.NPOINT_PDE + cfg.NPOINT_INLET_CYLINDER + cfg.NPOINT_OUTLET
    ) * cfg.NUM_TIMESTAMPS
    residual_validator = ppsci.validate.GeometryValidator(
        equation["NavierStokes"].equations,
        {"continuity": 0, "momentum_x": 0, "momentum_y": 0},
        geom["time_rect_eval"],
        {
            "dataset": "NamedArrayDataset",
            "total_size": NPOINT_EVAL,
            "batch_size": cfg.EVAL.batch_size,
            "sampler": {"name": "BatchSampler"},
        },
        ppsci.loss.MSELoss("mean"),
        metric={"MSE": ppsci.metric.MSE()},
        name="Residual",
    )
    validator = {residual_validator.name: residual_validator}

    # set visualizer(optional)
    vis_points = geom["time_rect_eval"].sample_interior(
        (cfg.NPOINT_PDE + cfg.NPOINT_INLET_CYLINDER + cfg.NPOINT_OUTLET)
        * cfg.NUM_TIMESTAMPS,
        evenly=True,
    )
    visualizer = {
        "visualize_u_v_p": ppsci.visualize.VisualizerVtu(
            vis_points,
            {"u": lambda d: d["u"], "v": lambda d: d["v"], "p": lambda d: d["p"]},
            num_timestamps=cfg.NUM_TIMESTAMPS,
            prefix="result_u_v_p",
        )
    }

    # initialize solver
    solver = ppsci.solver.Solver(
        model,
        constraint,
        cfg.output_dir,
        optimizer,
        None,
        cfg.TRAIN.epochs,
        cfg.TRAIN.iters_per_epoch,
        eval_during_train=cfg.TRAIN.eval_during_train,
        eval_freq=cfg.TRAIN.eval_freq,
        equation=equation,
        geom=geom,
        validator=validator,
        visualizer=visualizer,
        checkpoint_path=cfg.TRAIN.checkpoint_path,
    )
    # train model
    solver.train()
    # evaluate after finished training
    solver.eval()
    # visualize prediction after finished training
    solver.visualize()


def evaluate(cfg: DictConfig):
    # set random seed for reproducibility
    ppsci.utils.misc.set_random_seed(cfg.seed)

    # initialize logger
    logger.init_logger("ppsci", osp.join(cfg.output_dir, "eval.log"), "info")

    # set model
    model = ppsci.arch.MLP(**cfg.MODEL)

    # set equation
    equation = {
        "NavierStokes": ppsci.equation.NavierStokes(cfg.VISCOSITY, cfg.DENSITY, 2, True)
    }

    # set timestamps
    val_timestamps = np.linspace(
        cfg.TIME_START, cfg.TIME_END, cfg.NUM_TIMESTAMPS, endpoint=True
    ).astype("float32")

    logger.message(f"val_timestamps: {val_timestamps.tolist()}")

    # set time-geometry
    geom = {
        "time_rect_eval": ppsci.geometry.PointCloud(
            reader.load_csv_file(
                cfg.DOMAIN_EVAL_PATH,
                ("t", "x", "y"),
            ),
            ("t", "x", "y"),
        ),
    }

    # set validator
    NPOINT_EVAL = (
        cfg.NPOINT_PDE + cfg.NPOINT_INLET_CYLINDER + cfg.NPOINT_OUTLET
    ) * cfg.NUM_TIMESTAMPS
    residual_validator = ppsci.validate.GeometryValidator(
        equation["NavierStokes"].equations,
        {"continuity": 0, "momentum_x": 0, "momentum_y": 0},
        geom["time_rect_eval"],
        {
            "dataset": "NamedArrayDataset",
            "total_size": NPOINT_EVAL,
            "batch_size": cfg.EVAL.batch_size,
            "sampler": {"name": "BatchSampler"},
        },
        ppsci.loss.MSELoss("mean"),
        metric={"MSE": ppsci.metric.MSE()},
        name="Residual",
    )
    validator = {residual_validator.name: residual_validator}

    # set visualizer(optional)
    vis_points = geom["time_rect_eval"].sample_interior(
        (cfg.NPOINT_PDE + cfg.NPOINT_INLET_CYLINDER + cfg.NPOINT_OUTLET)
        * cfg.NUM_TIMESTAMPS,
        evenly=True,
    )
    visualizer = {
        "visualize_u_v_p": ppsci.visualize.VisualizerVtu(
            vis_points,
            {"u": lambda d: d["u"], "v": lambda d: d["v"], "p": lambda d: d["p"]},
            num_timestamps=cfg.NUM_TIMESTAMPS,
            prefix="result_u_v_p",
        )
    }

    # initialize solver
    solver = ppsci.solver.Solver(
        model,
        geom=geom,
        output_dir=cfg.output_dir,
        validator=validator,
        visualizer=visualizer,
        pretrained_model_path=cfg.EVAL.pretrained_model_path,
    )
    # evaluate
    solver.eval()
    # visualize prediction
    solver.visualize()


@hydra.main(
    version_base=None,
    config_path="./conf",
    config_name="cylinder2d_unsteady_Re100.yaml",
)
def main(cfg: DictConfig):
    if cfg.mode == "train":
        train(cfg)
    elif cfg.mode == "eval":
        evaluate(cfg)
    else:
        raise ValueError(f"cfg.mode should in ['train', 'eval'], but got '{cfg.mode}'")


if __name__ == "__main__":
    main()

5. Results

The prediction results are shown below. The horizontal axis of the image is the horizontal direction and the vertical axis is the vertical direction, with the flow going from left to right. The image shows the model's prediction of the streamwise velocity \(u(t, x, y)\) of the flow field at 50 time instants.

Note

This case is only a demo and has not been thoroughly tuned; some of the results shown below may differ somewhat from the OpenFOAM reference.

u_pred.gif

Model prediction of the streamwise velocity \(u\)
