Julia ML Community Call and FastAI.jl

Meeting link: Launch Meeting - Zoom

Previous Meeting (2021-06-22)

The minutes for the previous meeting can be found here, and the video recording can be found here.

Next Meeting (2021-08-02)

The agenda for the next meeting can be found here. Note that the time for this meeting will be 6PM EDT.

Important Links

Design Docs

The evolving design document (you can think of this as meeting notes - refined edition) is located here.

Minutes and Video Recordings

We keep minutes and video recordings. The agenda for (future) meetings is at the top of the minutes.

Tracking Repo

We have a tracking repo where we host materials. It also serves as a centralized issue hub, since our ML ecosystem is distributed across several Julia packages. We also use issues + GitHub Projects to track long-term progress. You can find a list of our projects on the tracking repo’s project pane.

Original Post

(original links edited to reflect the updated calendar invite)

Hi everyone,

As most of you may already know, there has been an effort to port fast.ai to Julia (see FluxML/FastAI.jl). This work is led by a group of ML community members working to improve the Julia ML ecosystem.

This Tuesday, 8 Sept, @ 10AM EST (new time is Tuesday @ 12PM EST), we will be having a video call to discuss our efforts (calendar link here; new calendar link here and on the Julia community page). The agenda can be found on our tracking repo (it will be updated with minutes from the meeting after it concludes). The purpose of this meeting is to discuss and debug FastAI.jl and other community projects that support it.

Please join us if you are interested in listening in or helping. Several of us are on the fast.ai Discord server and this forum, but all of us discuss things on the Julia Zulip in the #ml-ecosystem-coordination channel. Even if you can’t make the call this week, please let us know here or on Zulip if you are interested in helping.

16 Likes

Thanks @darsnack for setting this up. Will a recording be provided after the meeting for those who couldn’t attend live?

There will definitely be written minutes (at the same tracking repo link above). I’ll look into recording the call as well.

Hi Kyle,

I just tried to join (via the link in the calendar invite - https://uwmadison.zoom.us/j/96153131898?pwd=Q1pOdlIzSVIzOTkxMzFsdzA1cFJVQT09), but I’m being asked for a UW-Madison “NetID” login and password…am I doing something wrong?

I found this forum link from Jeremy Howard’s tweet (https://twitter.com/jeremyphoward/status/1301921873740005381?s=20), where there’s a reference to a “passcode” in the comments but no reference to needing a UW-Madison account?

1 Like

I’m having the same issue

Sorry for the issues. We had several non-UW accounts join, and from the settings it appears that UW NetID shouldn’t be required. Are you trying to join through the browser client? I’m trying to figure out what the issue was. Do you have a paid Zoom account for another domain?

Update: the sign-in issues should be fixed for all clients

The post is updated with the minutes. In the future, I won’t make an additional reply. There was no recording this time due to technical issues, but I will make one for next time.

2 Likes

Hi, I am very interested in this project as I am using Julia (Knet so far) and PyTorch/fastai for deep learning.

I joined the Julia Zulip and I will try to join the next meetings to learn and hopefully also contribute something useful! 🙂

1 Like

Just bumping this to inform everyone that every other Tuesday at 12 PM EST, starting this Tuesday, Sept 29, will be the new recurring time. New calendar link here

Login info:

https://uwmadison.zoom.us/j/96153131898?pwd=Q1pOdlIzSVIzOTkxMzFsdzA1cFJVQT09

Meeting ID: 961 5313 1898

Passcode: 405290

Please see the OP for details, supporting links/discussion, the agenda, and the minutes. It will be continually updated, and we won’t post new replies unless there are new logistical changes.

1 Like

I believe:

(1) GPU memory is often the bottleneck in DL tasks.
(2) Python’s GC is reference counting plus cycle detection (so most CPU tensors are freed as soon as they are no longer needed).

What is the intended solution to address this in Julia? Is there some way to get the GC to recognize when objects holding GPU memory can be freed ASAP?

1 Like

Sorry for the late response. While improving the GC is always on the radar for the team behind GPUCompiler.jl, we do have a batched iterator called CuIterator in CUDA.jl. It was created specifically for the case where the full dataset cannot fit in GPU memory: it moves data to the GPU in batches and eagerly frees each batch to relieve memory pressure.
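For anyone following along, here is a minimal sketch of how CuIterator can be used (the dataset and batch shapes below are made up for illustration):

```julia
using CUDA

# A CPU-side iterator of batches, e.g. from Flux.DataLoader or a manual partition.
batches = [(randn(Float32, 784, 128), randn(Float32, 10, 128)) for _ in 1:100]

for (x, y) in CuIterator(batches)
    # x and y are CuArrays for this iteration only; when the loop advances,
    # CuIterator eagerly frees them (via unsafe_free!) to relieve GPU memory pressure.
    # ... run the forward/backward pass on x and y here ...
end
```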

We are actually discussing the data loading pipeline on today’s call, and CuIterator is one piece of it. If you can’t make it, be sure to check out the minutes to see how it is used in practice.

1 Like

I just wanted to ping this thread to let everyone know that our meetings are now recorded. You can find links to the videos in the wiki post above.

1 Like

Hi, just want to expound on Kyle’s excellent answer: the new compiler work coming in Julia 1.6 is setting things up so that custom, type-specific static memory-management analyses/passes can be written in pure Julia. The longer-term plan is to use these to further improve GPU memory management beyond the work currently being done.

There’s also a mechanism to free memory early, before the GC ever gets to it.
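One mechanism along these lines that exists today is `CUDA.unsafe_free!` in CUDA.jl (assuming that is the kind of early freeing meant here); a minimal sketch:

```julia
using CUDA

x = CUDA.rand(Float32, 1024, 1024)
y = sum(x .^ 2)       # use x while it is still alive
CUDA.unsafe_free!(x)  # hand x's buffer back to the memory pool immediately,
                      # instead of waiting for the GC to collect it
```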

2 Likes