Implementing GeneralRelu in Swift

How could we implement GeneralRelu from lesson 10 or 11 in Swift? I tried the following:

public struct GeneralReLU {
    public var leakOptional: Float? // slope for X when it's negative
    public var subOptional: Float?  // value subtracted from the final result
    public var maxvOptional: Float? // maximum value
    
    public init(_ leak: Float? = nil, _ sub: Float? = nil, _ maxv: Float? = nil) {
        self.leakOptional = leak
        self.subOptional = sub
        self.maxvOptional = maxv
    }
    @differentiable
    func applied(to input: Tensor<Float>) -> Tensor<Float> {
        var output = max(input, 0)
        if let leak = self.leakOptional {
            output += min(input, 0) * leak
        }
        if let sub = self.subOptional {
            output -= sub
        }
        if let maxv = self.maxvOptional {
            output = min(output, maxv)
        }
        return output
    }
}

But I’m getting this error:

error: <Cell 27>:14:6: error: function is not differentiable
    @differentiable
    ~^~~~~~~~~~~~~~

<Cell 27>:15:10: note: when differentiating this function definition
    func applied(to input: Tensor<Float>) -> Tensor<Float> {
         ^

<Cell 27>:17:28: note: differentiating control flow is not supported yet
        if let leak = self.leakOptional {
                           ^

You can’t have if statements or for loops in a differentiable function (yet).

Yeah, that’s what it seems to be from the error message, but are there any workarounds?

One simple approach would be to avoid nil by using some defaults, e.g. default to 0 for leak, 0 for sub, and Float.greatestFiniteMagnitude for maxv. Then you don’t need any conditionals.


Lol. Was just writing that up for him.

public struct GeneralReLU {
    public var leak: Float // slope for X when it's negative
    public var sub: Float  // value subtracted from the final result
    public var maxv: Float // maximum value

    public init(_ leak: Float = 0.0, _ sub: Float = 0.0,
                _ maxv: Float = Float.greatestFiniteMagnitude) {
        self.leak = leak
        self.sub = sub
        self.maxv = maxv
    }

    @differentiable
    func applied(to input: Tensor<Float>) -> Tensor<Float> {
        return min(max(input, 0) + (min(input, 0) * leak) - sub, maxv)
    }
}

Swift is yelling about max not being differentiable:

error: <Cell 26>:14:6: error: function is not differentiable
    @differentiable
    ~^~~~~~~~~~~~~~

<Cell 26>:15:10: note: when differentiating this function definition
    func applied(to input: Tensor<Float>) -> Tensor<Float> {
         ^

<Cell 26>:19:20: note: cannot differentiate an external function that has not been marked '@differentiable'
        return min(max(input, 0) + min(input, 0) * self.leak - self.sub, self.maxv)
                   ^

error: <Cell 26>:14:6: error: function is not differentiable
    @differentiable
    ~^~~~~~~~~~~~~~

<Cell 26>:15:10: note: when differentiating this function definition
    func applied(to input: Tensor<Float>) -> Tensor<Float> {
         ^

<Cell 26>:19:20: note: expression is not differentiable
        return min(max(input, 0) + min(input, 0) * self.leak - self.sub, self.maxv)
                   ^

The variants of min(_:_:) and max(_:_:) that take a scalar on one side are not marked @differentiable yet, but they should be. We’ll make sure it’s part of the next release!

In the meantime, the variants of min(_:_:) and max(_:_:) that take Tensors on both sides do have a derivative. As a workaround, you can turn scalars into a Tensor first.

 return min(max(input, Tensor(0)) + min(input, Tensor(0)) * self.leak - self.sub, Tensor(maxv))

It turns out that with everything in one expression I got this error:

error: <Cell 27>:19:16: error: the compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions
        return min(max(input, Tensor(0)) + min(input, Tensor(0)) * self.leak - self.sub, Tensor(self.maxv))
               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I broke the expression up into the following and now it’s not complaining any more:

        let positives = max(input, Tensor(0))
        let leaked = min(input, Tensor(0)) * self.leak
        return min(positives + leaked - self.sub, Tensor(self.maxv))
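
Taking a gradient seems to work as well, e.g. this runs without complaint (I’m using gradient(at:in:) with a .sum() reduction here, which I believe is the right way to do it on this build):

let relu = GeneralReLU(0.1, 0.4, 6.0) // leak, sub, maxv
let x = Tensor<Float>([-2.0, -1.0, 0.5, 10.0])
// Reduce to a scalar so there is something to differentiate.
let dx = gradient(at: x) { x in relu.applied(to: x).sum() }
print(dx) // expect ≈ [0.1, 0.1, 1.0, 0.0]: leak for negatives, 1 in the linear range, 0 where maxv clips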

Thanks a lot for the hint @rxwei!


@rxwei The Colab kernel is reliably crashing when I make GeneralReLU conform to Layer like this:

struct GeneralReLU: Layer {
    var leak: Float // slope for X when it's negative
    var sub: Float  // value subtracted from the final result
    var maxv: Float // maximum value

    init(_ leak: Float = 0.0, _ sub: Float = 0.0, _ maxv: Float = Float.greatestFiniteMagnitude) {
        self.leak = leak
        self.sub = sub
        self.maxv = maxv
    }

    @differentiable
    func applied(to input: Tensor<Float>, in context: Context) -> Tensor<Float> {
        let positives = max(input, Tensor(0))
        let leaked = min(input, Tensor(0)) * self.leak
        return min(positives + leaked - self.sub, Tensor(self.maxv))
    }
}

If I don’t conform to Layer then the cell finishes with no errors!

Same thing for this standalone function; the kernel crashes reliably:

@differentiable
func grelu(_ input: Tensor<Float>, _ leak: Float = 0.0, _ sub: Float = 0.0,
           _ maxv: Float = Float.greatestFiniteMagnitude) -> Tensor<Float> {
    let positives = max(input, Tensor(0))
    let leaked = min(input, Tensor(0)) * leak
    return min(positives + leaked - sub, Tensor(maxv))
}

I noticed from your API usage (applied(to:in:)) that you are using v0.2, which is an older release of Swift for TensorFlow. The issue may have been fixed in v0.3. I tried it and can’t reproduce the crash.

Here’s a slightly modified version of your code that uses v0.3 APIs (call(_:) instead of applied(to:in:)).

import TensorFlow

struct GeneralReLU: Layer {
    var leak: Float // slope for X when it's negative
    var sub: Float  // value subtracted from the final result
    var maxv: Float // maximum value

    init(_ leak: Float = 0.0, _ sub: Float = 0.0, _ maxv: Float = Float.greatestFiniteMagnitude) {
        self.leak = leak
        self.sub = sub
        self.maxv = maxv
    }

    @differentiable
    func call(_ input: Tensor<Float>) -> Tensor<Float> {
        let positives = max(input, Tensor(0))
        let leaked = min(input, Tensor(0)) * self.leak
        return min(positives + leaked - self.sub, Tensor(self.maxv))
    }
}
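
And a quick usage check, calling call(_:) directly (on a v0.3 toolchain you should also be able to apply the layer with plain function call syntax, relu(x)):

let relu = GeneralReLU(0.1, 0.4, 6.0) // leak, sub, maxv
let x = Tensor<Float>([-2, -1, 0, 1, 10])
print(relu.call(x)) // expect [-0.6, -0.5, -0.4, 0.6, 6.0]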

You can follow the instructions here to get newer builds in Colab.


Thanks again @rxwei. I guess you’re right that I’m on an older version, as your solution does not even compile. I see this error:

error: <Cell 11>:4:8: error: type 'GeneralReLU' does not conform to protocol 'Layer'
struct GeneralReLU: Layer {
       ^

TensorFlow.Layer:2:20: note: protocol requires nested type 'Input'; do you want to add it?
    associatedtype Input : Differentiable
                   ^

TensorFlow.Layer:3:20: note: protocol requires nested type 'Output'; do you want to add it?
    associatedtype Output : Differentiable

Is there a way (or a variable) to get the current Swift for TensorFlow version? It could be helpful to print it at the beginning of the kernel.

@marcrasi


It seems that the current version on Colab crashes randomly. Hopefully many issues will get fixed in next Tuesday’s release.

Yeah, this is a good idea. You can kinda already do it; see my post here: S4TF in colab (error)

It would be nice if the commit hash and version string were exposed as variables in Swift itself so that you can query them from a running Swift notebook. I don’t know if they are. Finding and/or exposing them might be a good starter task for someone interested in hacking the Swift compiler/stdlib!
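
In the meantime, one hack that might work is shelling out to the compiler from a cell with Foundation’s Process. Whether this finds the right toolchain depends on the kernel’s PATH and sandboxing, so treat it as a rough sketch:

import Foundation

// Rough sketch: ask the compiler binary for its version string.
// Assumes `swift` is on the kernel's PATH, which may not hold on Colab.
let proc = Process()
proc.executableURL = URL(fileURLWithPath: "/usr/bin/env")
proc.arguments = ["swift", "--version"]
let pipe = Pipe()
proc.standardOutput = pipe
do {
    try proc.run()
    proc.waitUntilExit()
    let out = pipe.fileHandleForReading.readDataToEndOfFile()
    print(String(data: out, encoding: .utf8) ?? "could not read version")
} catch {
    print("failed to launch swift: \(error)")
}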