Classically, Relu computes the following on its input:
Relu(x) = max(0, x)
In tf-encrypted, how Relu behaves depends on the underlying protocol you are using.
With Pond, Relu is approximated using a Chebyshev polynomial approximation.
With SecureNN, Relu behaves exactly as you would expect (Relu(x) = max(0, x)).
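To build intuition for the Pond case, here is a minimal plaintext sketch of approximating Relu with a Chebyshev polynomial using numpy. The degree (10) and the interval ([-1, 1]) are illustrative assumptions, not the values Pond actually uses; the point is that the resulting polynomial involves only additions and multiplications, which is what makes it evaluable on encrypted values.

```python
import numpy as np
from numpy.polynomial import Chebyshev

def relu(x):
    """Plaintext Relu: max(0, x)."""
    return np.maximum(0, x)

# Fit a degree-10 Chebyshev polynomial to Relu on [-1, 1].
# (Degree and interval are illustrative choices, not Pond's actual parameters.)
xs = np.linspace(-1.0, 1.0, 1000)
approx = Chebyshev.fit(xs, relu(xs), deg=10)

# Compare the polynomial against the exact Relu on fresh points.
test_x = np.linspace(-1.0, 1.0, 50)
max_err = np.max(np.abs(approx(test_x) - relu(test_x)))
print(max_err)
```

The approximation is close on the fitting interval but degrades outside it, which is why polynomial-based protocols are sensitive to the range of their inputs.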