
VIV (Vortex-Induced Vibration)

Quick start on AI Studio

1. Problem overview

When the vortex shedding frequency approaches the natural frequency of the structure, the cylinder undergoes vortex-induced vibration (VIV). The VIV system is equivalent to a spring-damper system:

Figure: 1D spring-damper model of the VIV system (VIV_1D_SpringDamper)

2. Problem definition

The governing equation of this problem involves three physical quantities: \(\lambda_1\), \(\lambda_2\), and \(\rho\), which denote the natural damping, the structural stiffness, and the mass, respectively. The governing equation is defined as follows:

\[ \rho \dfrac{\partial^2 \eta}{\partial t^2} + \lambda_1 \dfrac{\partial \eta}{\partial t} + \lambda_2 \eta = f \]

The model assumes a reduced velocity \(U_r = \dfrac{u}{f_n d} = 8.5\), corresponding to \(Re = 500\). We use the transverse vibration amplitude \(\eta\) of the cylinder induced by the flow past it and the corresponding lift force \(f\) as supervised data.
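For intuition only, if one relates the equation above to the standard single-degree-of-freedom spring-damper form \(m\ddot{x} + c\dot{x} + kx = f\), the parameters can be interpreted as follows (an interpretation the training code does not rely on):

\[ \lambda_1 = 2 \rho \zeta \omega_n, \qquad \lambda_2 = \rho \omega_n^2, \qquad \omega_n = 2 \pi f_n \]

where \(\zeta\) is the damping ratio and \(f_n\) is the natural frequency of the structure.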

3. Solving the problem

Next, we explain step by step how to translate this problem into PaddleScience code and solve it with deep learning. To help you grasp PaddleScience quickly, only the key steps such as model construction, equation construction, and computational domain construction are described below; for the remaining details, please refer to the API documentation.

3.1 Model construction

In the VIV problem, for any given time \(t\), the system has the transverse amplitude \(\eta\) and the lift force \(f\) as unknowns to be solved, and the system itself also contains two parameters \(\lambda_1, \lambda_2\). We therefore use a relatively simple MLP (Multilayer Perceptron) to represent the mapping \(g: \mathbb{R}^1 \to \mathbb{R}^2\) from \(t\) to \((\eta, f)\), i.e.:

\[ \eta, f = g(t) \]

In the equation above, \(g\) is the MLP model itself, expressed in PaddleScience code as follows:

# set model
model = ppsci.arch.MLP(("t_f",), ("eta",), 5, 50, "tanh", False, False)

To access the values of specific variables accurately and quickly during computation, we specify the model's input variable name as ("t_f",) and its output variable name as ("eta",), where t_f denotes the input time \(t\) and eta denotes the output amplitude \(\eta\). These names are kept consistent throughout the subsequent code.

Then, by specifying the number of layers, the number of neurons per layer, and the activation function, we instantiate a neural network model model with 5 hidden layers of 50 neurons each, using "tanh" as the activation function.
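As a quick sanity check of the dict-in / dict-out interface implied by the input/output keys above, a minimal sketch is shown below (the batched call with a dict keyed by "t_f" and the (N, 1) tensor shape are assumptions based on the configuration above):

import paddle
import ppsci

# instantiate the same MLP as above
model = ppsci.arch.MLP(("t_f",), ("eta",), 5, 50, "tanh", False, False)

# feed a small batch of time values keyed by the input variable name "t_f"
t = paddle.rand([4, 1])
out = model({"t_f": t})

# the output is a dict keyed by the output variable name "eta"
print(out["eta"].shape)  # expected: [4, 1]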

3.2 Equation construction

Since this problem is governed by the VIV equation, we can directly use the VIV equation built into PaddleScience:

# set equation
equation = {"VIV": ppsci.equation.Vibration(2, -4, 0)}

In this equation we add two learnable parameters k1 and k2 to estimate \(\lambda_1\) and \(\lambda_2\), related by \(\lambda_1 = e^{k1}\) and \(\lambda_2 = e^{k2}\).

Therefore, when instantiating the VIV class we need to specify the required parameters: the mass rho=2 and the initial values k1=-4 and k2=0.
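After training, the estimated physical parameters can be recovered from the learned k1 and k2 through the exponential mapping above. A minimal sketch, assuming the Vibration object exposes the learned values as attributes named k1 and k2 (check the class for the exact attribute names):

import numpy as np

# hypothetical attribute names; replace with the actual ones exposed by Vibration
k1_value = float(equation["VIV"].k1)
k2_value = float(equation["VIV"].k2)

lambda_1 = np.exp(k1_value)  # estimated natural damping
lambda_2 = np.exp(k2_value)  # estimated structural stiffness
print(lambda_1, lambda_2)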

3.3 Computational domain construction

In this example, the VIV problem is defined on 100 discrete time points within \(t \in [0.0625, 9.9375]\). These 100 time points are already stored in the file examples/fsi/VIV_Training_Neta100.mat and serve as the input data, so there is no need to construct a computational domain explicitly.
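If you want to inspect the supervised data before training, the .mat file can be loaded directly with scipy. A minimal sketch, assuming the file stores its arrays under the same keys used by the dataloader configuration below (t_f, eta, f):

import scipy.io as sio

data = sio.loadmat("./VIV_Training_Neta100.mat")
print(data.keys())        # expect entries such as "t_f", "eta", "f"
print(data["t_f"].shape)  # the 100 discrete time points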

3.4 Constraint construction

We train the model in a supervised manner, constraining two physical quantities: the model output \(\eta\) and the lift force \(f\) computed from \(\eta\).

Before defining the constraint, we need to specify the data-loading configuration for the supervised constraint, such as the file path.

# set dataloader config
ITERS_PER_EPOCH = 1
train_dataloader_cfg = {
    "dataset": {
        "name": "MatDataset",
        "file_path": "./VIV_Training_Neta100.mat",
        "input_keys": ("t_f",),
        "label_keys": ("eta", "f"),
        "weight_dict": {"eta": 100},
    },
    "batch_size": 100,
    "sampler": {
        "name": "BatchSampler",
        "drop_last": False,
        "shuffle": True,
    },
}
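Note that weight_dict above gives the \(\eta\) term a larger weight. Assuming the weights simply scale the per-key loss terms, the effective training loss is roughly:

\[ \mathcal{L} = 100 \, \mathrm{MSE}(\hat{\eta}, \eta) + \mathrm{MSE}(\hat{f}, f) \]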

3.4.1 Supervised constraint

Since we train in a supervised manner, we use the supervised constraint SupervisedConstraint here:

# set constraint
sup_constraint = ppsci.constraint.SupervisedConstraint(
    train_dataloader_cfg,
    ppsci.loss.MSELoss("mean"),
    {"eta": lambda out: out["eta"], **equation["VIV"].equations},
    name="Sup",
)

The first argument of SupervisedConstraint is the data-loading configuration of the supervised constraint; here we pass in the train_dataloader_cfg defined above in Section 3.4 Constraint construction.

The second argument is the loss function. Here we use the commonly used MSE loss with reduction set to "mean", i.e. the losses produced by all data points involved in the computation are averaged;
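Concretely, for a batch of \(N\) data points the loss with "mean" reduction is:

\[ \mathrm{MSE} = \dfrac{1}{N} \sum_{i=1}^{N} \left( \hat{y}_i - y_i \right)^2 \]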

The third argument is the expression dict that describes how to compute the constraint targets; here we pass in the computation function for eta together with the equation["VIV"].equations instantiated in Section 3.2 Equation construction.

The fourth argument is the name of the constraint. Every constraint needs a name so that it can be indexed later; here we simply name it "Sup".

After the supervised constraint is built, we wrap it into a dict keyed by the name we just gave it, for convenient access later.

# wrap constraints together
constraint = {
    sup_constraint.name: sup_constraint,
}

3.5 Hyperparameter settings

Next, we need to specify the number of training epochs and the learning rate. Based on empirical experience, we train for 100,000 epochs and evaluate the model accuracy every 1,000 epochs.

# set training hyper-parameters
EPOCHS = 100000 if args.epochs is None else args.epochs
EVAL_FREQ = 1000

3.6 Optimizer construction

The training process calls an optimizer to update the model parameters. Here we choose the commonly used Adam optimizer together with a Step learning-rate decay schedule.

# set optimizer
lr_scheduler = ppsci.optimizer.lr_scheduler.Step(
    EPOCHS, ITERS_PER_EPOCH, 0.001, step_size=20000, gamma=0.9
)()
optimizer = ppsci.optimizer.Adam(lr_scheduler)((model,) + tuple(equation.values()))
Note

The VIV equation contains two learnable parameters k1 and k2, so the equation must be passed into the optimizer together with model.
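For reference, the Step schedule configured above multiplies the learning rate by gamma=0.9 every step_size=20000 steps (with ITERS_PER_EPOCH=1, steps and epochs coincide here); assuming the standard step-decay behavior, the learning rate at step \(i\) is:

\[ lr(i) = 0.001 \times 0.9^{\left\lfloor i / 20000 \right\rfloor} \]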

3.7 Validator construction

During training, the model is usually evaluated on a validation (test) set at a fixed epoch interval to monitor its progress, so we build a validator with ppsci.validate.SupervisedValidator.

# set validator
valida_dataloader_cfg = {
    "dataset": {
        "name": "MatDataset",
        "file_path": "./VIV_Training_Neta100.mat",
        "input_keys": ("t_f",),
        "label_keys": ("eta", "f"),
    },
    "batch_size": 32,
    "sampler": {
        "name": "BatchSampler",
        "drop_last": False,
        "shuffle": False,
    },
}
eta_mse_validator = ppsci.validate.SupervisedValidator(
    valida_dataloader_cfg,
    ppsci.loss.MSELoss("mean"),
    {"eta": lambda out: out["eta"], **equation["VIV"].equations},
    metric={"MSE": ppsci.metric.MSE()},
    name="eta_mse",
)
validator = {eta_mse_validator.name: eta_mse_validator}

For the evaluation metric metric, ppsci.metric.MSE is sufficient;

The remaining settings are similar to those in the supervised constraint construction.

3.8 Visualizer construction

During model evaluation, if the evaluation results can be visualized, we can choose a suitable visualizer for the outputs.

The data to visualize here are the two relation plots \(t-\eta\) and \(t-f\). Suppose the coordinate of each time instant \(t\) is \(t_i\); then the corresponding network output is \(\eta_i\) and the lift force is \(f_i\), so we only need to save all the \((t_i, \eta_i, f_i)\) produced during evaluation as images. The code is as follows:

# set visualizer(optional)
visu_mat = ppsci.utils.reader.load_mat_file(
    "./VIV_Training_Neta100.mat",
    ("t_f", "eta_gt", "f_gt"),
    alias_dict={"eta_gt": "eta", "f_gt": "f"},
)
visualizer = {
    "visulzie_u": ppsci.visualize.VisualizerScatter1D(
        visu_mat,
        ("t_f",),
        {
            r"$\eta$": lambda d: d["eta"],  # plot with latex title
            r"$\eta_{gt}$": lambda d: d["eta_gt"],  # plot with latex title
            r"$f$": equation["VIV"].equations["f"],  # plot with latex title
            r"$f_{gt}$": lambda d: d["f_gt"],  # plot with latex title
        },
        num_timestamps=1,
        prefix="viv_pred",
    )
}

3.9 Model training, evaluation and visualization

After completing the settings above, we simply pass the instantiated objects to ppsci.solver.Solver in order, and then start training, evaluation, and visualization.

# initialize solver
solver = ppsci.solver.Solver(
    model,
    constraint,
    OUTPUT_DIR,
    optimizer,
    lr_scheduler,
    EPOCHS,
    ITERS_PER_EPOCH,
    eval_during_train=True,
    eval_freq=EVAL_FREQ,
    equation=equation,
    validator=validator,
    visualizer=visualizer,
)
# train model
solver.train()
# evaluate after finished training
solver.eval()
# visualize prediction after finished training
solver.visualize()

# directly evaluate model from pretrained_model_path(optional)
logger.init_logger("ppsci", f"{OUTPUT_DIR}/eval.log", "info")
solver = ppsci.solver.Solver(
    model,
    constraint,
    OUTPUT_DIR,
    equation=equation,
    validator=validator,
    visualizer=visualizer,
    pretrained_model_path=f"{OUTPUT_DIR}/checkpoints/latest",
)
solver.eval()
# visualize prediction from pretrained_model_path(optional)
solver.visualize()
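With all components in place, the training script in Section 4 can be launched directly, e.g. python viv.py, or with overrides such as python viv.py --epochs 100000 --output_dir ./output_viv; the exact flag names are an assumption based on the args.epochs and args.output_dir values read via config.parse_args() in the complete code.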

4. Complete code

viv.py
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import ppsci
from ppsci.utils import config
from ppsci.utils import logger

if __name__ == "__main__":
    args = config.parse_args()
    # set random seed for reproducibility
    ppsci.utils.misc.set_random_seed(42)
    # set output directory
    OUTPUT_DIR = "./output_viv" if args.output_dir is None else args.output_dir
    # initialize logger
    logger.init_logger("ppsci", f"{OUTPUT_DIR}/train.log", "info")

    # set model
    model = ppsci.arch.MLP(("t_f",), ("eta",), 5, 50, "tanh", False, False)

    # set equation
    equation = {"VIV": ppsci.equation.Vibration(2, -4, 0)}

    # set dataloader config
    ITERS_PER_EPOCH = 1
    train_dataloader_cfg = {
        "dataset": {
            "name": "MatDataset",
            "file_path": "./VIV_Training_Neta100.mat",
            "input_keys": ("t_f",),
            "label_keys": ("eta", "f"),
            "weight_dict": {"eta": 100},
        },
        "batch_size": 100,
        "sampler": {
            "name": "BatchSampler",
            "drop_last": False,
            "shuffle": True,
        },
    }
    # set constraint
    sup_constraint = ppsci.constraint.SupervisedConstraint(
        train_dataloader_cfg,
        ppsci.loss.MSELoss("mean"),
        {"eta": lambda out: out["eta"], **equation["VIV"].equations},
        name="Sup",
    )
    # wrap constraints together
    constraint = {
        sup_constraint.name: sup_constraint,
    }

    # set training hyper-parameters
    EPOCHS = 100000 if args.epochs is None else args.epochs
    EVAL_FREQ = 1000

    # set optimizer
    lr_scheduler = ppsci.optimizer.lr_scheduler.Step(
        EPOCHS, ITERS_PER_EPOCH, 0.001, step_size=20000, gamma=0.9
    )()
    optimizer = ppsci.optimizer.Adam(lr_scheduler)((model,) + tuple(equation.values()))

    # set validator
    valida_dataloader_cfg = {
        "dataset": {
            "name": "MatDataset",
            "file_path": "./VIV_Training_Neta100.mat",
            "input_keys": ("t_f",),
            "label_keys": ("eta", "f"),
        },
        "batch_size": 32,
        "sampler": {
            "name": "BatchSampler",
            "drop_last": False,
            "shuffle": False,
        },
    }
    eta_mse_validator = ppsci.validate.SupervisedValidator(
        valida_dataloader_cfg,
        ppsci.loss.MSELoss("mean"),
        {"eta": lambda out: out["eta"], **equation["VIV"].equations},
        metric={"MSE": ppsci.metric.MSE()},
        name="eta_mse",
    )
    validator = {eta_mse_validator.name: eta_mse_validator}

    # set visualizer(optional)
    visu_mat = ppsci.utils.reader.load_mat_file(
        "./VIV_Training_Neta100.mat",
        ("t_f", "eta_gt", "f_gt"),
        alias_dict={"eta_gt": "eta", "f_gt": "f"},
    )
    visualizer = {
        "visulzie_u": ppsci.visualize.VisualizerScatter1D(
            visu_mat,
            ("t_f",),
            {
                r"$\eta$": lambda d: d["eta"],  # plot with latex title
                r"$\eta_{gt}$": lambda d: d["eta_gt"],  # plot with latex title
                r"$f$": equation["VIV"].equations["f"],  # plot with latex title
                r"$f_{gt}$": lambda d: d["f_gt"],  # plot with latex title
            },
            num_timestamps=1,
            prefix="viv_pred",
        )
    }

    # initialize solver
    solver = ppsci.solver.Solver(
        model,
        constraint,
        OUTPUT_DIR,
        optimizer,
        lr_scheduler,
        EPOCHS,
        ITERS_PER_EPOCH,
        eval_during_train=True,
        eval_freq=EVAL_FREQ,
        equation=equation,
        validator=validator,
        visualizer=visualizer,
    )
    # train model
    solver.train()
    # evaluate after finished training
    solver.eval()
    # visualize prediction after finished training
    solver.visualize()

    # directly evaluate model from pretrained_model_path(optional)
    logger.init_logger("ppsci", f"{OUTPUT_DIR}/eval.log", "info")
    solver = ppsci.solver.Solver(
        model,
        constraint,
        OUTPUT_DIR,
        equation=equation,
        validator=validator,
        visualizer=visualizer,
        pretrained_model_path=f"{OUTPUT_DIR}/checkpoints/latest",
    )
    solver.eval()
    # visualize prediction from pretrained_model_path(optional)
    solver.visualize()

5. Results

Figure: predicted and reference results of the amplitude eta and the lift force f (Viv_result)