Loss Module
ppsci.loss
Loss
Bases: Layer
Base class for loss.
Source code in ppsci/loss/base.py
FunctionalLoss
Bases: Loss
Functional loss class, which allows a custom loss-computing function (given as loss_expr) to be used for complex computation cases.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| loss_expr | Callable | Expression of loss calculation. | *required* |
| reduction | Literal['mean', 'sum'] | Reduction method. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Examples:
>>> import ppsci
>>> import paddle.nn.functional as F
>>> def loss_expr(output_dict, *args):
... losses = 0
... for key in output_dict:
... length = int(len(output_dict[key])/2)
...         out = output_dict[key][:length]
...         label = output_dict[key][length:]
...         losses += F.mse_loss(out, label, "sum")
... return losses
>>> loss = ppsci.loss.FunctionalLoss(loss_expr)
Source code in ppsci/loss/func.py
L1Loss
Bases: Loss
Class for l1 loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Reduction method. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Examples:
Source code in ppsci/loss/l1.py
L2Loss
Bases: Loss
Class for l2 loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Reduction method. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Examples:
Source code in ppsci/loss/l2.py
L2RelLoss
Bases: Loss
Class for l2 relative loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Specifies the reduction to apply to the output: 'mean' or 'sum'. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Examples:
Source code in ppsci/loss/l2.py
MAELoss
Bases: Loss
Class for mean absolute error loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Reduction method. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Examples:
Source code in ppsci/loss/mae.py
MSELoss
Bases: Loss
Class for mean squared error loss.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Reduction method. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Examples:
Source code in ppsci/loss/mse.py
MSELossWithL2Decay
Bases: MSELoss
MSELoss with L2 decay.
\(M\) is the number of variables to which L2 regularization is applied.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Specifies the reduction to apply to the output: 'mean' or 'sum'. Defaults to "mean". | 'mean' |
| regularization_dict | Optional[Dict[str, float]] | Regularization dictionary. Defaults to None. | None |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Raises:
| Type | Description |
|---|---|
| ValueError | reduction should be 'mean' or 'sum'. |
Examples:
Source code in ppsci/loss/mse.py
IntegralLoss
Bases: Loss
Class for integral loss with Monte-Carlo integration algorithm.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Reduction method. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Examples:
Source code in ppsci/loss/integral.py
PeriodicL1Loss
Bases: Loss
Class for periodic l1 loss.
\(\mathbf{x_l} \in \mathcal{R}^{N}\) is the first half of batch output, \(\mathbf{x_r} \in \mathcal{R}^{N}\) is the second half of batch output.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Reduction method. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Examples:
Source code in ppsci/loss/l1.py
PeriodicL2Loss
Bases: Loss
Class for periodic l2 loss.
\(\mathbf{x_l} \in \mathcal{R}^{N}\) is the first half of batch output, \(\mathbf{x_r} \in \mathcal{R}^{N}\) is the second half of batch output.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Reduction method. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Examples:
Source code in ppsci/loss/l2.py
PeriodicMSELoss
Bases: Loss
Class for periodic mean squared error loss.
\(\mathbf{x_l} \in \mathcal{R}^{N}\) is the first half of batch output, \(\mathbf{x_r} \in \mathcal{R}^{N}\) is the second half of batch output.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reduction | Literal['mean', 'sum'] | Reduction method. Defaults to "mean". | 'mean' |
| weight | Optional[Union[float, Dict[str, float]]] | Weight for loss. Defaults to None. | None |
Source code in ppsci/loss/mse.py
ppsci.loss.mtl
LossAggregator
Base class for loss aggregators, mainly used in multi-task learning.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | Layer | Training model. | *required* |
Source code in ppsci/loss/mtl/base.py
PCGrad
Bases: LossAggregator
Projecting Conflicting Gradients
Gradient Surgery for Multi-Task Learning
https://github.com/tianheyu927/PCGrad/blob/master/PCGrad_tf.py
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | Layer | Training model. | *required* |
Examples:
>>> import paddle
>>> from ppsci.loss import mtl
>>> model = paddle.nn.Linear(3, 4)
>>> loss_aggregator = mtl.PCGrad(model)
>>> for i in range(5):
... x1 = paddle.randn([8, 3])
... x2 = paddle.randn([8, 3])
... y1 = model(x1)
... y2 = model(x2)
... loss1 = paddle.sum(y1)
... loss2 = paddle.sum((y2 - 2) ** 2)
... loss_aggregator([loss1, loss2]).backward()
Source code in ppsci/loss/mtl/pcgrad.py
AGDA
Bases: LossAggregator
Adaptive Gradient Descent Algorithm
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model | Layer | Training model. | *required* |
| M | int | Smoothing period. Defaults to 100. | 100 |
| gamma | float | Smooth factor. Defaults to 0.999. | 0.999 |
Examples:
>>> import paddle
>>> from ppsci.loss import mtl
>>> model = paddle.nn.Linear(3, 4)
>>> loss_aggregator = mtl.AGDA(model)
>>> for i in range(5):
... x1 = paddle.randn([8, 3])
... x2 = paddle.randn([8, 3])
... y1 = model(x1)
... y2 = model(x2)
... loss1 = paddle.sum(y1)
... loss2 = paddle.sum((y2 - 2) ** 2)
... loss_aggregator([loss1, loss2]).backward()
Source code in ppsci/loss/mtl/agda.py
Created: November 6, 2023