Classically, Relu computes the following on its input:

Relu(x) = max(0, x)
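As a quick illustration, the classical definition can be sketched in plain NumPy (this is not TF Encrypted code, just the element-wise operation it computes):

```python
import numpy as np

def relu(x):
    # Element-wise max(0, x): negative entries become 0,
    # non-negative entries pass through unchanged.
    return np.maximum(0, x)

out = relu(np.array([-2.0, 0.0, 3.5]))
```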

In TF Encrypted, how Relu behaves depends on the underlying protocol you are using.

With Pond, Relu will be approximated using a Chebyshev polynomial approximation.
With SecureNN, Relu will behave as you expect: Relu(x) = max(0, x).
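To see why a polynomial stands in for Relu under Pond, here is an illustrative NumPy sketch of fitting a Chebyshev polynomial to max(0, x) on a bounded interval. This is not TF Encrypted's actual Pond implementation; the degree and fitting interval are arbitrary choices for demonstration:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample ReLU on [-1, 1] and fit a degree-8 Chebyshev polynomial to it.
# (Degree and interval are illustrative assumptions, not Pond's settings.)
xs = np.linspace(-1.0, 1.0, 1000)
coeffs = C.chebfit(xs, np.maximum(0, xs), deg=8)

def relu_approx(x):
    # Evaluate the fitted polynomial; unlike max(0, x), this uses only
    # additions and multiplications, which MPC protocols handle natively.
    return C.chebval(x, coeffs)

# Worst-case deviation from the exact ReLU inside the fitting interval.
err = np.max(np.abs(relu_approx(xs) - np.maximum(0, xs)))
```

The point of the polynomial form is that secure protocols like Pond can evaluate additions and multiplications on secret-shared values cheaply, while an exact comparison (needed for max(0, x)) is expensive; SecureNN has dedicated comparison protocols, so it computes the exact function instead.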
class tf_encrypted.layers.activation.Relu(input_shape: List[int])

Relu Layer

backward(d_y, *args)

backward is not implemented for Relu.

Parameters: x (PondTensor) – The input tensor
Return type: PondTensor
Returns: A pond tensor with the same backing type as the input tensor.
get_output_shape() → List[int]

Returns the layer’s output shape.

initialize(*args, **kwargs) → None

Initialize any necessary tensors.