Lesson 11 official topic

A background can have many attributes, so it will be difficult to engineer a prompt for that. But I will look into what can be done.

How do the course teachers scan the literature and pick out papers of interest this fast? Alert filters?

1 Like

Yes, it will work when there is a single object in the foreground, but it is difficult when there are multiple foreground objects.

You could define your foreground object and then extend the query engine so that a symbol like ^ selects everything but that foreground object. Say you want to change the background of the horse image: you could use “^horse on seaside” as a query, as in the sketch below.
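A minimal sketch of how that query syntax could be parsed, assuming a hypothetical ^ prefix on the first word marks the foreground object to keep (the function name and return format are made up for illustration):

```python
def parse_query(query: str):
    """Split a query like "^horse on seaside" into the foreground object
    to keep and the background description to generate (hypothetical syntax)."""
    first, _, rest = query.partition(" ")
    if first.startswith("^"):
        # ^horse -> keep the horse, regenerate everything else as "on seaside"
        return {"keep": first[1:], "background": rest}
    return {"keep": None, "background": query}


print(parse_query("^horse on seaside"))
# {'keep': 'horse', 'background': 'on seaside'}
```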

1 Like

Twitter:
https://twitter.com/_akhaliq ← must follow!
https://twitter.com/arankomatsuzaki
https://twitter.com/HochreiterSepp

Also if I find a paper I really like I will check the authors and if they consistently do good work I often will follow them on Twitter. Sometimes they will post their own tweets or even threads about their paper.

14 Likes

This thread could also be useful: ML news - real-world examples - #2 by bencoman

2 Likes

These sites are also quite good to find trending papers:
https://mlfeed.tech/ and https://papers.labml.ai/papers/weekly

5 Likes

This site shows an aggregated list of new developments in AI. Kind of useful if you don’t want to go all over the place to find all the new papers :slightly_smiling_face:

2 Likes

A pure Python implementation of broadcasting: broadcasting.py · GitHub

```python
from itertools import cycle
from numbers import Number

import numpy as np


def broadcast(a, b, op):
    # Base case: two scalars, just apply the operation.
    if isinstance(a, Number) and isinstance(b, Number):
        return op(a, b)

    result = []
    if a.ndim == b.ndim:
        if a.shape[0] != b.shape[0]:
            # A leading dimension of 1 is repeated to match the other operand.
            if a.shape[0] == 1:
                a = cycle(a)
            elif b.shape[0] == 1:
                b = cycle(b)
            else:
                raise ValueError(
                    f"Could not broadcast together with shapes {a.shape} {b.shape}")
    elif a.ndim < b.ndim:
        # The lower-rank operand is repeated along the new leading dimension.
        a = cycle([a])
    else:
        b = cycle([b])

    # Recurse one dimension at a time; zip stops at the finite operand.
    for a_in, b_in in zip(a, b):
        result.append(broadcast(a_in, b_in, op))

    return np.array(result)
```
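A quick check that this matches NumPy’s own broadcasting, pairing a shape (3,) vector with a shape (2, 1) column:

```python
import operator

import numpy as np

a = np.array([1, 2, 3])      # shape (3,)
b = np.array([[10], [20]])   # shape (2, 1)

out = broadcast(a, b, operator.add)
print(out)
# [[11 12 13]
#  [21 22 23]]
print(np.array_equal(out, a + b))  # True
```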
2 Likes

Be friends with @johnowhitaker, who in turn stalks https://twitter.com/_akhaliq on Twitter. No one knows how _akhaliq does what he does. Magic, I think.

17 Likes

Found this through Twitter (a tool to assist in reading papers): you can highlight some text from the paper and get an explanation of it. You can also ask further questions about the highlighted text. I am not sure about the quality of the results, but sharing it here anyway:

Here is the twitter post from the developer of the tool if you are interested:

15 Likes

Great find! Haven’t tried it yet, but if it works even half as well as I expect, I would still know 100% more than I know now :stuck_out_tongue:

1 Like

What sorcery is this?! I just uploaded a paper on historical linguistics, and it does a really good job of explaining it! (Occasionally it misfires, though, and provides an explanation for something the authors could have said, but didn’t.)

I will be recommending this far and wide!

6 Likes

Yeah, labml is really good for finding PyTorch implementations of papers.

This might also be a good thing to try with GPT-3, to see if it can simplify it using instructions.

I LOVE that this course may go on longer than expected!!! :heart_eyes: :star_struck: :innocent: :partying_face: :boom:

23 Likes

A course that never ends would be something! :+1: I mean I understand that @jeremy has a life, but how cool would that be?? Even if once every two weeks… :roll_eyes:

6 Likes

Yes. This course covers so many interesting topics and includes discussions with great people.

2 Likes

One of the best resources on reading scientific papers is this video from Andrew Ng: https://youtu.be/733m6qBH-jI

4 Likes

I’m playing around with DiffEdit and I want to implement the paper, but I am not sure where I should start.

I figure “Step 1: Compute Mask” is probably a reasonable starting point, but I am having trouble getting started. My understanding of what we have done up to this point is that the U-Net is used to estimate the noise in our pictures, so it seems like I should use the U-Net, but I’m wondering if I also need to include the VAE in order to bring the picture down into latent space.
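To make my current mental model concrete, here is a rough sketch of the mask step as I understand it from the paper. It assumes the `vae`, `unet`, and `scheduler` objects set up as in diffusion-nbs, with `t` a mid-range timestep tensor; the averaging over several noise draws is from the paper, but the 0.5 threshold is my guess:

```python
import torch

@torch.no_grad()
def diffedit_mask(image, ref_emb, query_emb, t, n_samples=10):
    # The VAE *is* needed: DiffEdit works in latent space, so encode first
    # (0.18215 is the usual Stable Diffusion latent scaling factor).
    latents = vae.encode(image).latent_dist.sample() * 0.18215

    diffs = []
    for _ in range(n_samples):
        noise = torch.randn_like(latents)
        noisy = scheduler.add_noise(latents, noise, t)
        # Noise estimates conditioned on the reference vs. the query prompt.
        eps_ref = unet(noisy, t, encoder_hidden_states=ref_emb).sample
        eps_query = unet(noisy, t, encoder_hidden_states=query_emb).sample
        diffs.append((eps_ref - eps_query).abs().mean(dim=1))  # avg channels

    # Average over noise draws, rescale to [0, 1], then binarize.
    diff = torch.stack(diffs).mean(dim=0)
    diff = (diff - diff.min()) / (diff.max() - diff.min())
    return (diff > 0.5).float()
```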

My current plan is to use diffusion-nbs as a starting point, but I just wanted to see if I am on the right track with that, or if I should go a different route.

Thanks!

3 Likes