ReluΒΆ

Classically, Relu computes the following function on its input:

Relu(x) = max(0, x)

In tf-encrypted, the behavior of Relu depends on the underlying protocol you are using.
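
The protocol is typically selected before the computation is built. A minimal sketch of that choice follows; the calls mirror tf-encrypted's own examples, but the exact API surface may vary between versions:

    import tf_encrypted as tfe

    # Choose the protocol up front; Relu's behavior follows from this choice.
    tfe.set_protocol(tfe.protocol.SecureNN())  # exact Relu
    # tfe.set_protocol(tfe.protocol.Pond())    # polynomially approximated Relu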

With Pond, Relu is approximated using a Chebyshev polynomial approximation (see the sketch below).
With SecureNN, Relu behaves exactly as you would expect: Relu(x) = max(0, x).
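
To see what a polynomial approximation of Relu looks like, here is a minimal NumPy sketch. The degree (8) and domain ([-1, 1]) are illustrative assumptions, not the values Pond actually uses:

    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev

    def relu(x):
        # Exact Relu, as computed under SecureNN.
        return np.maximum(0, x)

    # Fit a degree-8 Chebyshev approximation of Relu on [-1, 1],
    # standing in for the kind of polynomial Pond evaluates.
    approx = Chebyshev.interpolate(relu, deg=8, domain=[-1, 1])

    x = np.linspace(-1, 1, 5)
    print(relu(x))    # exact:       [0.  0.  0.  0.5 1. ]
    print(approx(x))  # approximate: close, but not identical

The approximation error is largest near zero, where max(0, x) has a kink that no polynomial can reproduce exactly; this is the trade-off Pond makes in exchange for staying within its supported operations.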