In loss(), there is a risk of returning 0.

Here the loss tensor is initialized as a 0. tensor. Depending on the logic of the function, this 0. tensor may be returned unchanged as the loss and passed to the backward step. I do not know what the expected behaviour is in that case: since 0. is often the best possible loss value, a silent zero return could be mistaken for a perfect result rather than flagged as "nothing was accumulated".

https://gitlab.orfeo-toolbox.org/releo/releo_mvp/-/blob/develop/src/releo_mvp/models/sits_perceiver.py?ref_type=heads#L600
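For illustration, here is a minimal sketch of the risk. The names (`loss_fn`, `predictions`, `targets`) are made up and not taken from sits_perceiver.py; only the zero-initialized accumulator pattern is the point:

```python
import torch
import torch.nn.functional as F

def loss_fn(predictions, targets):
    # Zero-initialized accumulator, as in the linked code.
    loss = torch.tensor(0.0)
    for name, pred in predictions.items():
        # If this condition never holds, nothing is accumulated.
        if name in targets:
            loss = loss + F.mse_loss(pred, targets[name])
    return loss  # may still be the bare 0. tensor

# With no matching targets, the returned loss is a constant 0. that
# does not require grad, so backward() raises:
#   RuntimeError: element 0 of tensors does not require grad and
#   does not have a grad_fn
loss = loss_fn({"head_a": torch.randn(4)}, {})
print(loss, loss.requires_grad)  # tensor(0.) False
```

Even in a configuration where the accumulator does require grad, calling backward() on an untouched 0. would just produce all-zero gradients and silently skip the update, which is probably the case worth guarding against.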

Also, this multiplication by 0.0 is suspicious: https://gitlab.orfeo-toolbox.org/releo/releo_mvp/-/blob/develop/src/releo_mvp/models/sits_perceiver.py?ref_type=heads#L649

Why is it needed?
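In case it helps the discussion, here is a guess at what the multiplication does, shown with made-up tensors rather than the actual code: multiplying a term by 0.0 zeroes its contribution to the loss value but keeps it connected to the autograd graph, so its parameters still receive (zero) gradients. This trick is sometimes used deliberately, e.g. to silence DistributedDataParallel's unused-parameter errors; if that is the intent here, a comment in the code would make it clear.

```python
import torch

w = torch.randn(3, requires_grad=True)
aux = (w ** 2).sum()             # stand-in for the term multiplied by 0.0
main = torch.tensor(1.0, requires_grad=True)

loss = main + 0.0 * aux          # aux contributes nothing to the value
loss.backward()
print(w.grad)                    # tensor([0., 0., 0.]) -- gradients
                                 # exist but are identically zero
```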