Lovely Tensors

Hi. I’ve put together a library to display tensors in a more human-readable way:

Documentation

GitHub

I’m not completely set on the UX/workflow, but I feel it’s at a stage where I could use a bit of feedback. Would anyone have any thoughts/suggestions?

Install

pip install lovely-tensors

How to use

How often do you find yourself debugging a neural network? You dump a
tensor to the cell output, and see this:

numbers
tensor([[[-0.3541, -0.3369, -0.4054,  ..., -0.5596, -0.4739,  2.2489],
         [-0.4054, -0.4226, -0.4911,  ..., -0.9192, -0.8507,  2.1633],
         [-0.4739, -0.4739, -0.5424,  ..., -1.0390, -1.0390,  2.1975],
         ...,
         [-0.9020, -0.8335, -0.9363,  ..., -1.4672, -1.2959,  2.2318],
         [-0.8507, -0.7822, -0.9363,  ..., -1.6042, -1.5014,  2.1804],
         [-0.8335, -0.8164, -0.9705,  ..., -1.6555, -1.5528,  2.1119]],

        [[-0.1975, -0.1975, -0.3025,  ..., -0.4776, -0.3725,  2.4111],
         [-0.2500, -0.2325, -0.3375,  ..., -0.7052, -0.6702,  2.3585],
         [-0.3025, -0.2850, -0.3901,  ..., -0.7402, -0.8102,  2.3761],
         ...,
         [-0.4251, -0.2325, -0.3725,  ..., -1.0903, -1.0203,  2.4286],
         [-0.3901, -0.2325, -0.4251,  ..., -1.2304, -1.2304,  2.4111],
         [-0.4076, -0.2850, -0.4776,  ..., -1.2829, -1.2829,  2.3410]],

        [[-0.6715, -0.9853, -0.8807,  ..., -0.9678, -0.6890,  2.3960],
         [-0.7238, -1.0724, -0.9678,  ..., -1.2467, -1.0201,  2.3263],
         [-0.8284, -1.1247, -1.0201,  ..., -1.2641, -1.1596,  2.3786],
         ...,
         [-1.2293, -1.4733, -1.3861,  ..., -1.5081, -1.2641,  2.5180],
         [-1.1944, -1.4559, -1.4210,  ..., -1.6476, -1.4733,  2.4308],
         [-1.2293, -1.5256, -1.5081,  ..., -1.6824, -1.5256,  2.3611]]])

Was it really useful?

What is the shape?
What are the statistics?
Min / max values?
Are any of the values nan or inf?

Is it an image of a man holding a tench?

import torch
import lovely_tensors.tensors as lt
lt.monkey_patch()
numbers
'tensor[3, 196, 196] n=115248 x∈[-2.118, 2.640] μ=-0.388 σ=1.073 x=...'
t = numbers.view(-1)[:12].clone()

# A spicy tensor
t[0] *= 10000
t[1] /= 10000
t[2] = float('inf')
t[3] = float('-inf')
t[4] = float('nan')

t.reshape((2, 6))
tensor[2, 6] n=12 x∈[-3.541e+03, -3.369e-05] μ=-393.776 σ=1.180e+03 +inf! -inf! nan! x=...
print(torch.zeros(10, 10))
print(torch.zeros(10, 10).to(torch.device("cuda")))
print(torch.zeros(10, 10).to(torch.device("cuda")).requires_grad_(True))
print(torch.zeros(10, 10).to(torch.device("cuda")).requires_grad_(True)+1)
tensor[10, 10] all_zeros 
tensor[10, 10] cuda:0 all_zeros 
tensor[10, 10] grad cuda:0 all_zeros 
tensor[10, 10] n=100 x∈[1.000, 1.000] μ=1.000 σ=0. grad_fn cuda:0 x=...

But back to the numbers. Is it our man?

numbers.rgb

[image: the tensor rendered as RGB, still normalized]

That’s a maybe, let’s fix him up.

in_stats = { "mean": (0.485, 0.456, 0.406),
              "std": (0.229, 0.224, 0.225) }
numbers.denorm=in_stats
numbers.rgb

[image: the denormalized image]

It’s our hero, the Tenchman!
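
For context: the denorm step just inverts the standard ImageNet normalization. A minimal sketch of the arithmetic (my own illustration, not the library’s internals):

import torch

# Standard ImageNet statistics, the same values as in_stats above.
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std  = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def denorm(t):
    # Invert (x - mean) / std channel-wise: x * std + mean.
    return t * std + mean

# denorm(numbers) lands roughly in [0, 1], ready to display as RGB.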

Update 21/10/2022: I’m releasing version 0.0.3 of lovely-tensors.

The changes include:

  • Colors! (I’d love to hear your feedback on the choice of colour scheme)

  • Layers!
# Per-channel stats
numbers.deeper
tensor[3, 196, 196] n=115248 x∈[-2.118, 2.640] μ=-0.388 σ=1.073
  tensor[196, 196] n=38416 x∈[-2.118, 2.249] μ=-0.324 σ=1.036
  tensor[196, 196] n=38416 x∈[-1.966, 2.429] μ=-0.274 σ=0.973
  tensor[196, 196] n=38416 x∈[-1.804, 2.640] μ=-0.567 σ=1.178
# You can go even deeper if you want to
dt = torch.randn(3, 3, 5)
dt.deeper(2)
tensor[3, 3, 5] n=45 x∈[-1.398, 3.680] μ=0.312 σ=1.020
  tensor[3, 5] n=15 x∈[-1.198, 3.680] μ=0.289 σ=1.199
    tensor[5] x∈[-1.198, 3.680] μ=0.702 σ=1.804 x=[3.680, 0.083, 0.621, 0.322, -1.198]
    tensor[5] x∈[-1.122, 0.467] μ=-0.426 σ=0.739 x=[-1.122, 0.255, -1.029, -0.699, 0.467]
    tensor[5] x∈[-0.206, 1.124] μ=0.592 σ=0.512 x=[0.678, 1.124, 0.913, -0.206, 0.451]
  tensor[3, 5] n=15 x∈[-1.398, 1.547] μ=0.207 σ=0.859
    tensor[5] x∈[-0.300, 1.252] μ=0.407 σ=0.597 x=[0.558, 1.252, -0.006, 0.532, -0.300]
    tensor[5] x∈[-0.378, 1.547] μ=0.642 σ=0.900 x=[-0.378, 1.547, -0.252, 1.332, 0.964]
    tensor[5] x∈[-1.398, 0.654] μ=-0.428 σ=0.791 x=[0.021, -0.833, 0.654, -1.398, -0.583]
  tensor[3, 5] n=15 x∈[-0.969, 2.827] μ=0.440 σ=1.033
    tensor[5] x∈[-0.744, 1.555] μ=0.373 σ=0.820 x=[0.222, 1.555, -0.744, 0.326, 0.507]
    tensor[5] x∈[-0.441, 2.827] μ=1.072 σ=1.291 x=[0.428, 2.827, -0.441, 1.905, 0.641]
    tensor[5] x∈[-0.969, 0.825] μ=-0.124 σ=0.704 x=[-0.218, -0.553, -0.969, 0.297, 0.825]
  • A saner way to pass norm statistics to tensor.rgb(); tensor.rgb still works as a property (see the sketch after this list).

  • Bug fixes here and there
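
As a sketch of the new call style (the exact signature is my assumption; only the bullet above is from the release notes):

in_stats = {"mean": (0.485, 0.456, 0.406),
            "std":  (0.229, 0.224, 0.225)}

# Old workflow: set the attribute, then read the property.
numbers.denorm = in_stats
numbers.rgb

# New workflow, as I read the bullet above: pass the stats directly.
numbers.rgb(in_stats)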

Update 25/10/2022: I’m releasing version 0.0.5 of lovely-tensors.

The changes include:

  • Plots with .plt

  • Bug fixes. It will no longer use significant amounts of GPU memory on huge tensors.

Update 12/11/2022: Version 0.1.3 is out.

I missed a few updates here. The main new features since 0.0.5 are:

  • .chans: color-maps and displays a tensor in [0., 1.] → RGB (sketch after this list).

  • Grouping: works for both .rgb and .chans, and with a native PyTorch implementation, it’s very fast!
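
A rough sketch of how the two features might be used (the tensor shapes are made-up examples, and the exact behaviour may differ):

import torch
import lovely_tensors.tensors as lt
lt.monkey_patch()

# .chans expects values in [0., 1.], so squash the activations first.
acts = torch.randn(8, 32, 32).sigmoid()
acts.chans   # each of the 8 channels rendered with the color map

# Grouping: a batch of images displayed together.
batch = torch.rand(4, 3, 64, 64)
batch.rgb    # in a notebook, the 4 images appear side by side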

7 Likes

This does sound very interesting. My only comment at the moment (without having tried out the library) is that I got more from your description on the package page than from the image here. :slight_smile: Not a criticism, and I do realize that some people are visual and others are more text-oriented, but on reading the description I was immediately nodding along and saying “Yep, yep!”

So you might want to consider adding the description here so that others like me can immediately see how useful the library can be.

Will install it and provide any feedback after I’ve used it.

2 Likes

Thank you! I improved the description a bit.
I did not realize it was as easy as copy-pasting the markdown. :slight_smile:

Thank you! This looks super useful. From a UX perspective, a small suggestion: I’d love to see colors with more contrast. Dark purple on black is not easy for me to see; maybe others don’t have this issue. This may be due to the terminal theme, so please disregard if so.

1 Like

Thank you.

Yes, this is just the Jupyter notebook in dark mode.
I wish I could use colours, but they don’t work in this case:

1 Like

I actually did find a way to use colours. What do you think?

I’m releasing version 0.0.5.
The changes include:

  • Plots with .plt (see the sketch after this list).

  • Bug fixes. It will no longer use significant amounts of GPU memory on huge tensors.
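
A minimal sketch of .plt in use (the data is just an example):

import torch
import lovely_tensors.tensors as lt
lt.monkey_patch()

t = torch.randn(10000)
t.plt   # draws a compact plot of the value distribution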

I made a demo heavily inspired by the Stable Diffusion deep dive:

https://github.com/xl0/diffusion-nbs/blob/master/Lovely-diffusion.ipynb

1 Like

I love this! Will definitely try this out :smiley:

2 Likes

If you want to display a closer colour approximation of the latents, see keturn’s example and discussion here: Decoding latents to RGB without upscaling - 🧨 Diffusers - Hugging Face Forums

1 Like

Thank you, it’s surprising how easily the latent space colours mapped back to RGB!

For the deep dive, I think it’s better to make it clear that it’s a 4-channel latent space image, but good to know and thank you for sharing.

2 Likes

What a lovely (pun intended) package! I really like using it - you really get a better feeling for the data you’re working with. Also, you can see at a glance whether e.g. a normalization worked as intended, without manually calling min, max, mean, std. Amazing and very much a fast.ai way of working :slight_smile:
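
For instance (my own toy example), a normalization check at a glance:

import torch
import lovely_tensors.tensors as lt
lt.monkey_patch()

x = torch.rand(64, 3, 32, 32) * 5 + 2    # deliberately un-normalized
x_norm = (x - x.mean()) / x.std()

x        # repr shows μ≈4.5, σ≈1.44 - clearly not normalized
x_norm   # repr shows μ≈0.0, σ=1.0 - the normalization worked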

I looked into the GitHub issues and would like to tackle one - unfortunately, I’m getting import errors from lovely_numpy. Maybe you could help me out, @xl0? I tried installing lovely_numpy from pip as well as directly from GitHub.

This line in 00_repr_str.ipynb

from lovely_numpy.utils import np_to_str_common, pretty_str, sparse_join, plain_repr, PRINT_OPTS

…results in this error message:

ImportError                               Traceback (most recent call last)
Cell In [23], line 7
      5 import warnings
      6 import torch
----> 7 from lovely_numpy.utils import np_to_str_common, pretty_str, sparse_join, plain_repr, PRINT_OPTS

ImportError: cannot import name 'plain_repr' from 'lovely_numpy.utils' (/home/daflowjoe/mambaforge/envs/nbdev/lib/python3.8/site-packages/lovely_numpy/utils/__init__.py)
1 Like

Hi. Thank you! :two_hearts:

Yes, my bad. I’m in the process of moving the generic code over to Lovely-NumPy. I will get it fixed tomorrow. You can safely roll back 1 commit if you’d like to get a working git version.

Which issue are you interested in working on? If it’s the int one, maybe it would be best to implement it in lovely-numpy, and it will be picked up by lovely-tensors with no extra effort.

1 Like

Yes, I was interested in the int one and the Parameter one. Thank you for the explanation - I think I will get to it next week :slight_smile:

1 Like

Excited to see your work! When you start on it, please drop a line in the issue. I moved the int one to the lovely-numpy repo.

Should be good now. :two_hearts: