Jeremy AMA


Thank you for all the resources.

I found this video playlist with good visualisation.
PS. I added it to my watch later list.


Yes, there are a few great GA courses out there on YouTube. Another good YouTube series is from “Life and Math”, e.g. “The Geometric Algebra: The Greatest Secret in Math. Lesson 1: First Introduction”.

One way I stay up to date on resources is via the excellent & friendly LinkedIn group “Pre-University Geometric Algebra” (which I’d also recommend to @ilovescience & @jeremy) run by Jim Smith out of Chiapas, Mexico. It’s largely a group of high school and intro-level university teachers who share teaching resources about making GA accessible.

Another name I’ll mention is electrical engineer Peeter Joot, whose resources include a textbook.


@ilovescience Your question has inspired me to work up the courage to just now write to Chris Doran since I’ll be in Cambridge UK this Tuesday, to see if he’d be up for meeting with me. :open_mouth: :crossed_fingers: Normally I wouldn’t have the temerity to cold-write someone of his stature, but turns out he’s a music buff and into computer gaming too, so maybe there’s a chance! hahah :pray:

UPDATE: Doran said yes!! Looking forward to Tuesday. :partying_face:


Hi @jeremy ,
I am working with tabular data, but it is a time series — capturing financial history for all employees, to be precise. This means my dataset has monthly records of employees’ salaries, incentives, leaves, etc., including records for employees who have resigned. I am trying RF, as you mentioned in the course, but I’m struggling to create train, validation, and test sets that respect the time ordering, which makes me doubt whether RF is the right way to go. I want the model to predict high-risk employees who might resign in the next 6 months, using the historical patterns of such employees. Any advice would be appreciated. I’d also love to hear from the community whether they would tackle the problem differently.
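A minimal sketch of the kind of time-based split I’m attempting (all column names here are made up for illustration):

```python
import pandas as pd

# Hypothetical employee-history frame: one row per employee per month.
df = pd.DataFrame({
    "employee_id": [1, 1, 1, 2, 2, 2],
    "month": pd.to_datetime(
        ["2021-01-01", "2021-02-01", "2021-03-01"] * 2),
    "salary": [100, 100, 110, 90, 95, 95],
    "resigned_within_6mo": [0, 0, 1, 0, 0, 0],
})

# Split by time, not at random: train on earlier months, validate on
# later ones, so the validation set mimics predicting the future
# from the past (the setting the model will actually face).
cutoff = pd.Timestamp("2021-03-01")
train = df[df["month"] < cutoff]
valid = df[df["month"] >= cutoff]
```

The key point is that a random shuffle would leak future months into training, so the split has to be on the date column.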
Thank you :slight_smile:

@ilovescience Follow-up on lunch with Doran: It was great. I had no idea how active he was in the video game industry (via his company Geomerics, whose tech is under the hood of a fair number of big-name titles).

Most instructive was his statement that “You can only take ReLU of a scalar”: he (an expert mathematician) had tried various ways of applying nonlinear activations to vectors, but none of them offered significant advantages or any particular meaning he found useful. He speculated that GA approaches would be helpful in situations where the thing being modeled has some particular geometric structure, such as 3D models (i.e. in the final output of the network), but that using GA representations between layers didn’t seem to make much sense or offer any advantage.
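One way to see the issue he was getting at (my own illustration, not Doran’s): elementwise ReLU on a vector doesn’t commute with rotation, so the result depends on your choice of basis — geometrically arbitrary for anything that isn’t a scalar.

```python
import numpy as np

relu = lambda x: np.maximum(x, 0)

# A 2D rotation by 90 degrees.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, -1.0])

# Elementwise ReLU does not commute with rotation:
a = relu(R @ v)  # rotate, then ReLU
b = R @ relu(v)  # ReLU, then rotate
print(np.allclose(a, b))  # → False
```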

Contrast this with the paper I mentioned earlier by Grassucci et al., who claim to employ their “parameterized hypercomplex convolutional (PHC)” layers to great effect. One part of their paper builds a “HyperComplex ResNet” on page 5 via
$$F(x, \{H_j\}) = \mathrm{PHC}(\mathrm{ReLU}(\mathrm{PHC}(x)))$$ (…aw darn, no MathJax support??)
He & I did not discuss this paper in particular, having a number of other things to talk about (including music!). But I did send him a link to the paper and noted I’d be happy to hear any thoughts he might have.
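For concreteness, here’s the shape of that residual form with a toy stand-in for the PHC layer (this sketch only shows the composition structure; a real PHC layer builds its weights from sums of Kronecker products):

```python
def residual_block(x, phc1, phc2, relu):
    """ResNet-style block: output is x + F(x), where
    F(x) = PHC(ReLU(PHC(x))) as in the paper's equation."""
    fx = phc2(relu(phc1(x)))
    return [a + b for a, b in zip(x, fx)]

# Toy stand-ins: elementwise ReLU, and "doubling" in place of PHC.
relu = lambda v: [max(t, 0.0) for t in v]
double = lambda v: [2 * t for t in v]

print(residual_block([1.0, -1.0], double, double, relu))  # → [5.0, -1.0]
```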


Here’s an open-ended question for @jeremy: Do you have any recommendations or “tips” for using Jupyter notebooks & nbdev-based code in an HPC environment?

i.e. where we’re often submitting non-interactive jobs to a queue via some job manager (such as SLURM)?

This seems to be a case where just writing pure text-based .py console scripts is the most expedient, but I might still like to develop using nbdev & notebooks, and then export my main executable code as .py console scripts (using nbdev.export.notebook2script, or some other method?) before doing big training runs.
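For reference, the sort of batch script I have in mind (paths, module names, and resource options are all hypothetical):

```shell
#!/bin/bash
#SBATCH --job-name=big-train
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00
#SBATCH --output=train_%j.out

# Export the notebooks to .py modules (nbdev v1 API),
# then run the exported training script non-interactively.
python -c "from nbdev.export import notebook2script; notebook2script()"
python my_lib/train.py
```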

But…that’s just my intuition. Wondering if you have more to add? Thanks.


That sounds like a fine approach - or use execnb.


very interesting, glad to hear about your conversations…

huh, I don’t think I understand why this is true but I’ll take his word for it (unless you have some intuition on this?).

Any thoughts on using geometric algebra instead to understand deep learning in any way?


Thanks for your response, Jeremy.

Another HPC-based Jupyter notebook usage idea I stumbled upon tonight (perhaps obvious in retrospect): multiple ssh tunnels, just as many of us are already accustomed to using when connecting to compute resources through a firewall.

Whether running an “interactive” job or even a batch job, if one can get the IP address of the pod/node running the notebook and grab the notebook-authorization URL+token from the stdout log, then…

1. ssh tunnel (& port forward) from my laptop to the cluster’s head node
2. ssh tunnel from the head node to the node running the process
3. paste in the Jupyter URL that was written to stdout…
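Concretely, the two tunnels can be collapsed into one command on the laptop, assuming your ssh client supports ProxyJump (hostnames here are made up):

```shell
# Forward local port 8888 to the compute node's Jupyter port,
# hopping through the cluster head node in a single command.
ssh -N -J me@cluster-head -L 8888:localhost:8888 me@gpu-node-042
# Then open the http://localhost:8888/?token=... URL from the
# job's stdout log in a local browser.
```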

And boom, you’re running on a GPU node of the cluster from your laptop.
Did this tonight on the Stability AI cluster.

Started with execnb (H/T @ilovescience), but then got a full interactive Jupyter notebook running in my browser by the above method, while the nb process itself was being executed by a “non-interactive” batch job!


Hoping to schedule a webinar soon in which we can ask Eleonora questions like that. :wink:
