funasr/models/encoder/opennmt_encoders/self_attention_encoder.py
@@ -272,7 +272,7 @@
             position embedded tensor and mask
         """
         masks = (~make_pad_mask(ilens)[:, None, :]).to(xs_pad.device)
-        xs_pad *= self.output_size() ** 0.5
+        xs_pad = xs_pad * self.output_size() ** 0.5
         if self.embed is None:
             xs_pad = xs_pad
         elif (
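The patch swaps in-place scaling (`xs_pad *= …`) for an out-of-place multiplication (`xs_pad = xs_pad * …`). The difference matters in Python whenever the object implements `__imul__`: augmented assignment mutates the object in place, so every other reference to it observes the change, while plain multiplication binds the name to a new object and leaves the caller's data untouched. (For PyTorch tensors specifically, in-place ops can additionally fail on tensors that require grad or are views, which is a plausible motivation for this change, though the patch itself does not say.) A minimal stdlib sketch of the semantic difference, using lists as a stand-in for tensors and hypothetical helper names:

```python
def scale_inplace(xs):
    # Augmented assignment calls __imul__: the object passed in is mutated,
    # so the caller's reference sees the change.
    xs *= 2
    return xs

def scale_outofplace(xs):
    # Plain multiplication builds a new object and rebinds the local name;
    # the caller's original object is untouched.
    xs = xs * 2
    return xs

a = [1, 2]
scale_inplace(a)
print(a)            # the caller's list was mutated in place

b = [1, 2]
scale_outofplace(b)
print(b)            # the caller's list is unchanged; the result was a new object
```

After the patched line, `xs_pad` inside the method refers to a fresh tensor, so the tensor passed in by the caller is never modified in place.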