SecureNN

SecureNN is an implementation of the protocol from the SecureNN paper. It extends the Pond protocol, i.e. SecureNN is a superset of the SPDZ protocol. The main difference between SecureNN and SPDZ is support for exact ReLU and max pooling layers: in SPDZ, max pooling is simply not supported, and ReLU must be approximated.

Approximation can be quicker in some cases, but it breaks down when inputs grow sufficiently large. This forces users to apply workarounds, such as adding a Batchnorm layer before a ReLU to keep inputs within the approximation's valid range.
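A plaintext illustration (not the MPC protocol itself) of why approximating ReLU breaks down for large inputs. The quadratic below is a hypothetical low-degree approximation of the kind an SPDZ-style protocol might use; its coefficients are illustrative, not taken from Pond:

```python
def relu_exact(x):
    return max(0.0, x)

def relu_approx(x):
    # x^2/8 + x/2 + 1/4: a reasonable fit near zero,
    # but the quadratic term dominates and diverges for |x| >> 1
    return 0.125 * x * x + 0.5 * x + 0.25

for x in [0.5, 2.0, 100.0]:
    print(x, relu_exact(x), relu_approx(x))
```

Near zero the two agree closely, but at x = 100 the approximation returns 1300.25 instead of 100, which is why keeping activations small (e.g. via a preceding Batchnorm) matters when approximations are used.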

class tf_encrypted.protocol.securenn.SecureNN(server_0, server_1, server_2, prime_factory, odd_factory, **kwargs)[source]

Implementation of SecureNN from Wagh et al.

argmax(x, axis) → PondTensor[source]

Find the index of the max value along an axis.

>>> argmax([[10, 20, 30], [11, 13, 12], [15, 16, 17]], axis=1)
[2, 1, 2]
See:

tf.argmax

Parameters:
  • x (PondTensor) – Input tensor.
  • axis (int) – The tensor axis to reduce along.
Return type:

PondTensor

Returns:

A new tensor containing the indices of the max values along the specified axis.

bits(x, factory) → PondPublicTensor[source]

Convert a fixed-point precision tensor into its bitwise representation.

Parameters:
  • x (PondPublicTensor) – A fixed-point tensor to extract into a bitwise representation.
  • factory (AbstractFactory) – An optional tensor factory to use for the bitwise representation.
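A plaintext sketch of what bit decomposition produces: each ring element expanded into its binary representation, least significant bit first. In SecureNN this is applied to the backing integer tensors; here we use plain Python ints with a hypothetical bit length of 8:

```python
def bits(x, bitlength=8):
    # extract bit i by shifting right i places and masking the low bit
    return [(x >> i) & 1 for i in range(bitlength)]

print(bits(5))  # 5 = 0b101, LSB first
```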
bitwise_and(x, y) → PondTensor[source]

Computes the bitwise AND of the given inputs, i.e. x & y.

Parameters:
  • x (PondTensor) – Input tensor.
  • y (PondTensor) – Input tensor.
bitwise_not(x) → PondTensor[source]

Computes the bitwise NOT of the input, i.e. ~x.

Parameters:x (PondTensor) – Input tensor.
bitwise_or(x, y) → PondTensor[source]

Computes the bitwise OR of the given inputs, i.e. x | y.

Parameters:
  • x (PondTensor) – Input tensor.
  • y (PondTensor) – Input tensor.
bitwise_xor(x, y) → PondTensor[source]

Computes the bitwise XOR of the given inputs, i.e. x ^ y.

Parameters:
  • x (PondTensor) – Input tensor.
  • y (PondTensor) – Input tensor.
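A plaintext sketch of the bitwise operators' semantics on 0/1 tensors. On arithmetic shares of bits these reduce to arithmetic, a standard observation rather than anything SecureNN-specific: AND is multiplication, XOR is x + y - 2xy, NOT is 1 - x, and OR follows by De Morgan:

```python
def bitwise_and(x, y): return [a * b for a, b in zip(x, y)]
def bitwise_xor(x, y): return [a + b - 2 * a * b for a, b in zip(x, y)]
def bitwise_not(x):    return [1 - a for a in x]
def bitwise_or(x, y):
    # De Morgan: x | y == ~(~x & ~y)
    return bitwise_not(bitwise_and(bitwise_not(x), bitwise_not(y)))

x, y = [1, 1, 0, 0], [1, 0, 1, 0]
print(bitwise_and(x, y), bitwise_or(x, y), bitwise_xor(x, y))
```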
equal_zero(x, dtype) → PondTensor[source]

Evaluates the Boolean expression x == 0 elementwise.

>>> equal_zero([1,0,1])
[0, 1, 0]
Parameters:
  • x (PondTensor) – The tensor to evaluate.
  • dtype (AbstractFactory) – An optional tensor factory, defaults to dtype of x.
greater(x, y) → PondTensor[source]

Returns x > y.

>>> greater([1,2,3], [0,1,5])
[1, 1, 0]
Parameters:
  • x (PondTensor) – Input tensor.
  • y (PondTensor) – Input tensor.
greater_equal(x, y) → PondTensor[source]

Returns x >= y.

>>> greater_equal([1,2,3], [0,1,3])
[1, 1, 1]
Parameters:
  • x (PondTensor) – Input tensor.
  • y (PondTensor) – Input tensor.
less(x, y) → PondTensor[source]

Returns x < y.

>>> less([1,2,3], [0,1,5])
[0, 0, 1]
Parameters:
  • x (PondTensor) – Input tensor.
  • y (PondTensor) – Input tensor.
less_equal(x, y) → PondTensor[source]

Returns x <= y.

>>> less_equal([1,2,3], [0,1,3])
[0, 0, 1]
Parameters:
  • x (PondTensor) – Input tensor.
  • y (PondTensor) – Input tensor.
lsb(x) → PondTensor[source]

Computes the least significant bit of the provided tensor.

Parameters:x (PondTensor) – The tensor to take the least significant bit of.
maximum(x, y) → PondTensor[source]

Computes max(x, y), returning the greater value of the two tensors at each index.

>>> maximum([10, 20, 30], [11, 19, 31])
[11, 20, 31]
Parameters:
  • x (PondTensor) – Input tensor.
  • y (PondTensor) – Input tensor.
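A plaintext sketch of how elementwise maximum can be built from the comparison and selection primitives in this class (the actual protocol composition is an assumption here, not taken from the source): compute a choice bit b = (x < y), then pick y where b is 1 and x where b is 0:

```python
def maximum(x, y):
    # choice bit per index: 1 where y is strictly greater
    choice = [int(a < b) for a, b in zip(x, y)]
    # select y where choice is 1, x otherwise
    return [b if c else a for a, b, c in zip(x, y, choice)]

print(maximum([10, 20, 30], [11, 19, 31]))  # [11, 20, 31]
```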
maxpool2d(x, pool_size, strides, padding) → PondTensor[source]

Performs a MaxPooling2d operation on x.

Parameters:
  • x (PondTensor) – Input tensor.
  • pool_size (List[int]) – The size of the pool.
  • strides (List[int]) – A list describing how to stride over the input.
  • padding (str) – Which type of padding to use (“SAME” or “VALID”).
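A plaintext sketch of 2x2 max pooling with stride 2 and VALID padding, showing the operation maxpool2d performs under encryption. The input here is a plain 2D list rather than the batched tensors the protocol expects:

```python
def maxpool2d_valid(x, pool=2, stride=2):
    # top-left corners of each pooling window (VALID: no padding)
    rows = range(0, len(x) - pool + 1, stride)
    cols = range(0, len(x[0]) - pool + 1, stride)
    return [[max(x[i + di][j + dj] for di in range(pool) for dj in range(pool))
             for j in cols] for i in rows]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(maxpool2d_valid(img))  # [[6, 8], [14, 16]]
```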
msb(x) → PondTensor[source]

Computes the most significant bit of the provided tensor.

Parameters:x (PondTensor) – The tensor to take the most significant bit of.
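A plaintext sketch of lsb and msb on k-bit ring elements, using a hypothetical bit length of 8. The least significant bit gives parity; in two's complement the most significant bit is the sign, which is why msb underlies the comparison protocols above:

```python
K = 8  # illustrative bit length, not the protocol's actual ring size

def lsb(x): return x & 1
def msb(x): return (x >> (K - 1)) & 1

print(lsb(5), lsb(4))        # 1 0
print(msb(3), msb(256 - 3))  # 0 1  (256 - 3 encodes -3 mod 2^8)
```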
negative(x: tf_encrypted.protocol.pond.PondTensor) → tf_encrypted.protocol.pond.PondTensor[source]

Returns x < 0 elementwise.

>>> negative([-1, 0, 1])
[1, 0, 0]
Parameters:x (PondTensor) – The tensor to check.
non_negative(x) → PondTensor[source]

Returns x >= 0 elementwise.

>>> non_negative([-1, 0, 1])
[0, 1, 1]

Note this is the derivative of the ReLU function.

Parameters:x (PondTensor) – The tensor to check.
reduce_max(x, axis) → PondTensor[source]

Find the max value along an axis.

>>> reduce_max([[10, 20, 30], [11, 13, 12], [15, 16, 17]], axis=1)
[30, 13, 17]
See:

tf.reduce_max

Parameters:
  • x (PondTensor) – Input tensor.
  • axis (int) – The tensor axis to reduce along.
Return type:

PondTensor

Returns:

A new tensor with the specified axis reduced to the max value in that axis.

relu(x) → PondTensor[source]

Returns the exact ReLU by computing relu(x) = x * non_negative(x).

>>> relu([-12, -3, 1, 3, 3])
[0, 0, 1, 3, 3]
Parameters:x (PondTensor) – Input tensor.
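A plaintext version of the exact-ReLU identity stated above: relu(x) = x * non_negative(x), where non_negative(x) is the 0/1 indicator of x >= 0 (derived from msb in the actual protocol):

```python
def non_negative(x):
    # 0/1 indicator of x >= 0, elementwise
    return [int(a >= 0) for a in x]

def relu(x):
    # multiplying by the indicator zeroes out the negative entries
    return [a * b for a, b in zip(x, non_negative(x))]

print(relu([-12, -3, 1, 3, 3]))  # [0, 0, 1, 3, 3]
```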
select(choice_bit, x, y) → PondTensor[source]

The select protocol from Wagh et al. Secretly selects and returns elements from two candidate tensors.

>>> option_x = [10, 20, 30, 40]
>>> option_y = [1, 2, 3, 4]
>>> select(choice_bit=1, x=option_x, y=option_y)
[1, 2, 3, 4]
>>> select(choice_bit=[0,1,0,1], x=option_x, y=option_y)
[10, 2, 30, 4]

NOTE: In real use, inputs to this function will not look like the above; in practice they will be secret shares.

Parameters:
  • choice_bit (PondTensor) – The bits representing which tensor to choose. If choice_bit = 0 then choose elements from x, otherwise choose from y.
  • x (PondTensor) – Candidate tensor 0.
  • y (PondTensor) – Candidate tensor 1.
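A plaintext sketch of the arithmetic commonly used for selection (an assumption about the construction, not taken verbatim from the source): with a 0/1 choice bit b, select(b, x, y) = x + b * (y - x), which also works elementwise when b is itself a tensor of bits; on shares, the multiplication is the interactive step:

```python
def select(choice_bit, x, y):
    # broadcast a scalar choice bit across the tensor
    if isinstance(choice_bit, int):
        choice_bit = [choice_bit] * len(x)
    # b == 0 yields x's element, b == 1 yields y's element
    return [a + b * (c - a) for a, c, b in zip(x, y, choice_bit)]

print(select(1, [10, 20, 30, 40], [1, 2, 3, 4]))             # [1, 2, 3, 4]
print(select([0, 1, 0, 1], [10, 20, 30, 40], [1, 2, 3, 4]))  # [10, 2, 30, 4]
```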