The minutes for the previous meeting can be found here, and the video recording can be found here.
Next Meeting (2021-08-02)
The agenda for the next meeting can be found here. Note that the time for this meeting will be 6PM EDT.
Important Links
Design Docs
The evolving design document (you can think of this as meeting notes - refined edition) is located here.
Minutes and Video Recordings
We keep minutes and video recordings. The agenda for (future) meetings is at the top of the minutes.
Tracking Repo
We have a tracking repo where we host materials. It also serves as a centralized issue hub, since our ML ecosystem is distributed across several Julia packages. We also use issues + GitHub Projects to track long-term progress. You can find a list of our projects on the tracking repo’s project pane.
Original Post
(original links edited to reflect the updated calendar invite)
Hi everyone,
As most of you may already know, there has been an effort to port fast.ai to Julia (see FluxML/FastAI.jl). This work is led by a group of ML community members trying to improve the Julia ML ecosystem.
This Tuesday 8 Sept @ 10AM EST (new time is Tuesday @ 12PM EST), we will be having a video call to discuss our efforts (calendar link here; new calendar link here and on the Julia community page). The agenda can be found on our tracking repo (this will be updated with minutes from the meeting after it concludes). The purpose of this meeting is to discuss and debug FastAI.jl and the other community projects that support it.
Please join us if you are interested in listening in or helping. Several of us are on the fast.ai discord server and this forum, but all of us discuss offline on the Julia Zulip in the #ml-ecosystem-coordination channel. Even if you can’t make the call this week, please let us know here or on Zulip of your interest in helping.
Sorry for the issues. We had several non-UW accounts join, and from the settings it appears that UW NetID shouldn’t be required. Are you trying to join through the browser client? I’m trying to figure out what the issue was. Do you have a paid Zoom account for another domain?
The post is updated with the minutes. In the future, I won’t make an additional reply. There was no recording this time due to technical issues, but I will make one for next time.
Just bumping this to inform everyone that every other Tuesday at 12 PM EST, starting this Tuesday, Sept 29, will be the new recurring time. New calendar link here.
Please see the OP for details and supporting links/discussion, agenda, and minutes. It will be continually updated, and we won’t be posting new replies unless there are new logistical changes.
(1) GPU memory is often the bottleneck in DL tasks.
(2) Python’s GC is ref-count + cycle detection (so most CPU tensors are freed as soon as they are no longer needed).
What is the intended solution to address this in Julia? Is there a way to get the GC to recognize when objects holding GPU memory can be freed ASAP?
Sorry for the late response. While improving the GC is always on the radar for the team behind GPUCompiler.jl, we do have a batched iterator called CuIterator in CUDA.jl. This iterator was specifically created for the case where all the data cannot fit in GPU memory: it moves data to the GPU in batches and eagerly frees each previous batch to relieve memory pressure.
We are actually discussing the data loading pipeline on today’s call, and CuIterator is one piece of it. If you can’t make it, be sure to check out the minutes to see how it is used in practice.
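To make the CuIterator pattern concrete, here is a minimal sketch of how it might be used in a training loop. The data shapes and batch count are made up for illustration, and running it assumes CUDA.jl is installed with a CUDA-capable GPU available:

```julia
using CUDA  # assumes CUDA.jl is installed and a GPU is present

# Suppose `batches` is a collection of (x, y) CPU batches that together
# would exceed GPU memory (shapes here are illustrative).
batches = [(rand(Float32, 28, 28, 1, 32), rand(Float32, 10, 32)) for _ in 1:100]

for (x, y) in CuIterator(batches)
    # x and y are now CuArrays on the GPU. Before yielding this batch,
    # CuIterator eagerly freed the previous one (via unsafe_free!),
    # so only one batch occupies GPU memory at a time.
    # ... run the forward/backward pass on (x, y) here ...
end
```

The key design point is that freeing happens as part of iteration rather than waiting for the GC, which is what relieves the memory pressure described above.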
Hi, just want to expand on Kyle’s excellent answer: the new compiler work coming in Julia 1.6 is setting things up to allow custom, type-specific static memory management analyses/passes written in pure Julia. The longer-term plan is to use these to further improve GPU memory management beyond the work currently being done.
There’s also a mechanism to free memory early, before it’s seen by the GC.
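The early-free mechanism referred to here is `CUDA.unsafe_free!`, which returns an array’s memory to CUDA.jl’s pool immediately instead of waiting for the Julia GC to collect the wrapper object. A minimal sketch (assumes a CUDA-capable GPU; the array is no longer safe to use after the call):

```julia
using CUDA  # assumes a CUDA-capable GPU

a = CUDA.rand(1024, 1024)   # allocates GPU memory
s = sum(a)                  # use the array while it is still live
CUDA.unsafe_free!(a)        # return the memory to the pool right away,
                            # without waiting for the Julia GC
# `a` must not be touched after this point - hence "unsafe"
```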