Things Jeremy says to do (Part 2)

Pulling out and compiling Jeremy’s advice from Part 1 really helped me. Eventually I think a single thread would be best, but since the course isn’t public yet I don’t want to post in the other thread.

Lesson 8
  1. The cutting edge of deep learning is really about engineering, not about papers. The difference between the really effective people in deep learning and the rest is who can make things in code that work properly, and there are very few of those people. The Cutting Edge

  2. There are many opportunities for you in this class. Experiment lots, particularly in your domain area. Write stuff down for the you of six months ago; that’s your audience. It doesn’t have to be perfect. The Opportunities are Much Greater Now

  3. If you don’t understand something from Part 1, go back and watch the video about that thing. Don’t just keep blasting forwards. Especially the later lessons of Part 1 where we went into more detail. It’s Assumed You Understand Part 1

  4. Overfit -> Reduce Overfitting -> There is no step 3. Remember, overfitting doesn’t mean having a lower training loss than validation loss; that is normal. It means your validation error is getting worse; until you see that happening, you’re not overfitting. 3 Steps to Training a Really Good Model

  5. Learn to pronounce Greek letters. A big part of reading papers is getting past the fear of Greek letters. It’s very hard to read something you can’t pronounce. You’re saying to yourself "squiggle bracket 1 + squiggle G squiggle 1 - squiggle". Time to Start Reading Papers

  6. Get very used to…

  7. Homework: Take our most mind-bending broadcast and convince yourself of why it works. Put it in Excel, or do it on paper if it’s not already clear to you why this works. Mind Bending Broadcast
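
For reference, a minimal sketch of the kind of broadcast this refers to (I’m assuming the classic column-vector-times-row-vector example from the notebook; the exact tensor in the lesson may differ):

```python
import torch

c = torch.tensor([10., 20., 30.])

# c[:, None] has shape (3, 1); c[None, :] has shape (1, 3).
# Broadcasting expands both to (3, 3), giving every pairwise product.
outer = c[:, None] * c[None, :]
print(outer)
# tensor([[100., 200., 300.],
#         [200., 400., 600.],
#         [300., 600., 900.]])
```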

  8. Apply the simple broadcasting rules when working with higher-rank tensors. Don’t try to keep it all in your head. (rules are listed in notebook) Apply the Broadcasting Rules
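
A small sketch of applying those rules on paper rather than in your head (shapes here are just illustrative, not from the lesson):

```python
import torch

x = torch.randn(64, 3, 28, 28)            # batch of images: (64, 3, 28, 28)
channel_means = x.mean(dim=(0, 2, 3))      # per-channel mean, shape (3,)

# Align shapes from the right:
#   x:                    64   3  28  28
#   means[:, None, None]:       3   1   1
# Missing leading dims count as 1, and size-1 dims are expanded,
# so the subtraction broadcasts to (64, 3, 28, 28).
centered = x - channel_means[:, None, None]
print(centered.shape)   # torch.Size([64, 3, 28, 28])
```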

  9. Always make sure your validation and training set were normalized in the same way.

  10. Reading papers from competition winners is a very very good idea. Normal Papers vs Competition Papers

  11. More Homework: Go read section 2.2 of the Resnet Paper

  12. When you’re writing your own code, put some comments in to let the next person know what the hell you’re doing. Jeremy Finds Undocumented Suboptimal Constant in PyTorch

  13. More Homework: If you don’t remember the chain rule, go to Khan Academy, they have a great tutorial on the chain rule
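
If a quick numeric sanity check helps alongside the tutorial, here is a tiny sketch (my own example, not from the lesson) comparing the chain rule against a finite difference:

```python
import math

# f(g(x)) with f(u) = sin(u) and g(x) = x**2
# Chain rule: d/dx f(g(x)) = f'(g(x)) * g'(x) = cos(x**2) * 2x
def analytic(x):
    return math.cos(x**2) * 2 * x

def numeric(x, eps=1e-6):
    return (math.sin((x + eps)**2) - math.sin((x - eps)**2)) / (2 * eps)

x = 1.3
print(analytic(x), numeric(x))   # the two should agree to ~6 decimal places
```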

Lesson 9
  1. When you see something in a library, don’t assume that it’s right or that it makes sense. When it comes to deep learning, none of us know what we’re doing. It doesn’t take too much to dig into it yourself. Lesson 9: Don’t assume libraries are correct

  2. I don’t set a random seed. This is very intentional, because I want to see the variation in my model; I don’t want it to be hidden away behind a fixed seed. Lesson 9: When not to use a seed

  3. Coroutines in Python are worth looking up and learning about; we’ll be using them lots.

  4. You can and should schedule everything, your dropout amount, what kind of data augmentation you do, weight decay, learning rate, momentum, everything. It’s very unlikely you would want the same hyperparameters throughout. Lesson 9: Hyperparameter Scheduling
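
A minimal sketch of the kind of annealing function the course notebooks build (the names here are my own, and the exact notebook API may differ):

```python
import math

def sched_cos(start, end):
    "Return a function that cosine-anneals from `start` to `end` as pos goes 0 -> 1."
    def _inner(pos):
        return start + (1 + math.cos(math.pi * (1 - pos))) * (end - start) / 2
    return _inner

# schedule anything: learning rate, momentum, dropout, weight decay, ...
lr_sched  = sched_cos(1e-3, 1e-5)
mom_sched = sched_cos(0.85, 0.95)

for pos in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"pos={pos:.2f}  lr={lr_sched(pos):.2e}  mom={mom_sched(pos):.3f}")
```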

Lesson 10
  1. I want to remind you that it’s totally okay if you’re not keeping up with everything. I’m trying to give you enough to keep you busy until Part 2 next year, but don’t feel like you need to understand everything within a week of first hearing it. Lesson 10: Don’t worry
    Reminder: This was also advice from Part 1 Lesson 2. If you’re here you got through that and you’ll get through this too!

  2. What will tend to happen with stuff in fast.ai is that we’ll start with something trivially easy, and at some point in the next hour or two you might reach a point where you’re feeling totally lost. The trick is to go back to the point where it was trivially easy, figure out where you suddenly noticed you were totally lost, and find the bit in the middle that you missed, because we are going to keep building up from trivially easy stuff. Lesson 10: Where’d you get lost?

  3. Dunder methods: there’s a particular list I suggest you know, and this is the list (a small sketch using a few of them follows it): Lesson 10: Dunder Methods

  • __getitem__
  • __getattr__
  • __setattr__
  • __del__
  • __init__
  • __new__
  • __enter__
  • __exit__
  • __len__
  • __repr__
  • __str__
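
A minimal toy example (my own, not from the lesson) touching a few of the dunders above, just to make them concrete:

```python
class Stack:
    "Toy container showing a few common dunder methods."
    def __init__(self, items=None): self.items = list(items or [])
    def __getitem__(self, i):       return self.items[i]
    def __len__(self):              return len(self.items)
    def __repr__(self):             return f"Stack({self.items!r})"
    def __enter__(self):            return self          # with Stack(...) as s: ...
    def __exit__(self, *args):      self.items.clear()   # cleanup on leaving the block

s = Stack([1, 2, 3])
print(len(s), s[0], s)   # 3 1 Stack([1, 2, 3])
```
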
  4. You need to be really good at browsing source code. This is a list of things you should know how to do in your editor of choice: Lesson 10: What an editor needs
  • Jump to tag/symbol by name (with completions)
  • Jump to current tag
  • Jump to library tags
  • Go back
  • Search
  • Outlining/folding
  5. Standard deviation is more sensitive to outliers than mean absolute deviation. For that reason the mean absolute deviation is very often the thing you want to be using because, in ML, outliers are often more of a problem than a help. Mean absolute deviation is really underused; you should get used to it. Lesson 10: Mean absolute deviation
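
A quick sketch of the outlier point (numbers are made up):

```python
import torch

t = torch.tensor([1., 2., 3., 4., 100.])    # one big outlier

std = (t - t.mean()).pow(2).mean().sqrt()    # standard deviation (squares the deviations)
mad = (t - t.mean()).abs().mean()            # mean absolute deviation

print(std, mad)   # the squared deviations pull std further toward the outlier than mad
```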

  6. Replacing things involving squares with things that use absolute values often works better. It’s a good tip to remember. Lesson 10: Squares are for squares

  7. From now on, you’re not allowed to look at an equation, or type it in LaTeX, without also typing it in Python, actually calculating some values, and plotting it, because this is the only way to get a sense of what these variables and equations actually mean. Lesson 10: Always play with equations
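
For example (my own pick of equation, not the one from the lesson), taking the sigmoid from LaTeX to numbers to a picture:

```python
import numpy as np
import matplotlib.pyplot as plt

# sigma(x) = 1 / (1 + e^{-x})
x = np.linspace(-6, 6, 200)
y = 1 / (1 + np.exp(-x))

print(y.min(), y.max())            # squashes everything into (0, 1)
plt.plot(x, y); plt.title("sigmoid"); plt.show()
```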

  8. Your problem to play with during the week is: how accurate can you make a model using just the layers we’ve created so far, and, for the ones with great accuracy, what does the telemetry look like? How can you tell it’s going to be good? And what insights can you gain from that to make it even better? Try to beat me (0.9898 at the best point, 0.9822 at the end). You can beat it pretty easily with some playing around, but do some experiments. Lesson 10: Toy Problem #1 (Homework)

  9. Epsilon appears in lots of places in deep learning, and it’s a fantastic hyperparameter that you should be using to train things better. Lesson 10: More about epsilon

  10. It’s really good to create interesting little games to play; in research we call them toy problems. Almost everything in research is toy problems you come up with and try to find solutions to. Another toy problem to try during the week is: what’s the best accuracy you can get in a single epoch using whatever normalization you like and only architectures we’ve used up to lesson 7? Lesson 10: Toy Problem #2 (Homework)

Lesson 11
  1. A big part of getting good at deep learning in your domain is knowing how to create small, workable, useful datasets. In your domain area, whether it’s audio or Sanskrit texts, try to come up with a toy problem or two which you hope might give insight into your full problem. Lesson 11: Small useful datasets

  2. If you haven’t seen compose used in programming before, google it. It’s a super useful concept and it comes up all the time. Lesson 11: Compose
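
In case it saves a search, here is a minimal sketch of compose (roughly what the course notebooks build, though the exact signature there may differ):

```python
def compose(x, funcs, **kwargs):
    "Apply each function in `funcs` to `x` in order, passing the result along."
    for f in funcs:
        x = f(x, **kwargs)
    return x

add_one = lambda x: x + 1
double  = lambda x: x * 2
print(compose(3, [add_one, double]))   # (3 + 1) * 2 = 8
```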

  3. A great opportunity to contribute is using telemetry to view activations of different layers and seeing what happens experimentally. The theory people generally don’t know how to train models, and the practitioners forget about actually thinking about the foundations at all, but if you can combine the two and try some experiments to see what happens when you adjust weight decay, you can find some really interesting results. Lesson 11: Another opportunity

  4. The trick to making Adam and Adam-like things work well is to make epsilon 0.1 (or between 1e-3 and 1e-1). Most people use 1e-7 and that’s never going to be a good idea.
    Lesson 11: Another place where epsilon matters
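
To see why epsilon matters so much there, here is the core of the Adam step written out (a sketch of the update rule, not the actual library code):

```python
import torch

def adam_step(p, avg_grad, avg_sq, lr=1e-3, eps=1e-1):
    # avg_grad / avg_sq are the running (debiased) first and second moments.
    # eps sits in the denominator: with eps around 1e-1 the update stays sane
    # even when avg_sq is tiny; with the usual 1e-7, a tiny avg_sq blows the
    # effective step size up enormously.
    return p - lr * avg_grad / (avg_sq.sqrt() + eps)

p, avg_grad, avg_sq = torch.tensor(1.0), torch.tensor(0.01), torch.tensor(1e-8)
print(adam_step(p, avg_grad, avg_sq, eps=1e-7))   # step of roughly 0.1 away from p
print(adam_step(p, avg_grad, avg_sq, eps=1e-1))   # step of roughly 1e-4
```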

  5. When you’re augmenting your data, look at or listen to your augmented data. Don’t just chuck it in a model but look at the augmented output and try to find something to study to see if you’re losing information. Lesson 11: Mindful data augmentation

  6. Image augmentation isn’t just about throwing some transformation functions in there, but think about when you’re going to do it because you have this pipeline where you start with bytes, which become floats. Think about where you’re gonna do the work. Do whatever you can while they’re still bytes, but be careful to not do anything that will cause rounding errors or saturation problems. Lesson 11: Optimizing data augmentation
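
A rough sketch of that ordering point (using PIL/NumPy purely for illustration; "some_image.jpg" is a hypothetical path):

```python
import numpy as np
from PIL import Image

img = Image.open("some_image.jpg")         # hypothetical file; pixels are still uint8 bytes here

# do the geometric work (resize, crop, flips) while it's still bytes...
img = img.resize((128, 128)).crop((0, 0, 112, 112))

# ...then convert to float32 once, at the end, for the numeric steps
x = np.asarray(img, dtype=np.float32) / 255.
x = (x - x.mean()) / x.std()               # e.g. normalize only after the float conversion
```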

  7. When doing data augmentation on non-images, ask yourself what kind of changes could occur in data that wouldn’t cause the label to change but would still leave that data as a reasonable example that could show up in your dataset. Lesson 11: Data augmentation in other domains

Lesson 12
  1. Be careful about using automated formatting and creating too many rules. Sometimes unconventional formatting can help you to understand your code better. Debugging machine learning code is awful, so the only way to make your code work is to make sure the thing you write makes sense and is simple. Lesson 12: Unconventional Formatting

  2. The mixup paper is a pretty easy read by paper standards. I suggest you check it out.
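
If it helps motivate the read, the core idea fits in a few lines (a sketch of the equation from the paper, not fastai’s actual implementation):

```python
import torch

def mixup(x, y, alpha=0.4):
    "Mix each example with a shuffled partner: x_mixed = lam*x_i + (1-lam)*x_j."
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[idx]
    # the loss is then mixed the same way: lam*loss(pred, y) + (1-lam)*loss(pred, y[idx])
    return x_mixed, y, y[idx], lam

xb, yb = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
x_mixed, y_a, y_b, lam = mixup(xb, yb)
```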

  3. Make your equations in code as close as possible to the paper you’re implementing. When you’re comparing something to a paper, you want something that you can look at and straightaway say “oh, that looks very familiar”. If it doesn’t look familiar, you may want to think about how to make it more familiar. Lesson 12: When to use Greek Letters

  4. Don’t listen to people in your organization saying we can’t start modeling until we do all this cleanup work. Start modeling right now, see if the results are okay, and if they are, maybe you can skip the cleanup or do them both simultaneously.

  5. Read the Bag of Tricks paper and, for each of those ResNet tweaks, think about why they did it. It wasn’t some brainless random search where they tried everything; they sat back and thought “how do we use all the inputs we have and take advantage of all the computation we are doing?” Spend time thinking about architectures and experimenting with them; it’s really important for effective practitioners to be able to write nice, concise architectures so that you can change them and understand them. Lesson 12: Build Architectures Thoughtfully

  6. A super important thing to remember when fine-tuning: don’t ever freeze the weights in the batchnorm layers if you’re doing partial layer training. Lesson 12: Batchnorm Does Weird Things
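
A sketch of what that looks like in plain PyTorch, assuming you’re freezing layers by hand rather than via a library helper (this is my own illustration, not fastai’s freeze logic):

```python
import torch.nn as nn

def freeze_except_bn(model):
    "Freeze all parameters except those belonging to batchnorm layers."
    for module in model.modules():
        is_bn = isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d))
        for p in module.parameters(recurse=False):
            p.requires_grad_(is_bn)

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU(), nn.Conv2d(16, 10, 3))
freeze_except_bn(model)
print([p.requires_grad for p in model.parameters()])
# [False, False, True, True, False, False] -> conv weights frozen, batchnorm still trainable
```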

  7. The way to debug in DL is to not make mistakes in the first place. The only way to do that is to make your code so simple that it can’t have a mistake, and to check every single intermediate result along the way. You also need to be a great scientist, which means keeping a journal notebook to keep track of your results. Lesson 12: A War Story About DL Debugging (really long and detailed response that’s worth a second listen)

  8. When preprocessing data for neural nets, the rule of thumb is to leave it as raw as you can.

  9. Anybody who has made it to lesson 12 in this course should be learning Swift for TensorFlow. This one comes with some great advice.

  • Python’s days are numbered.
  • DL libraries change all the time so if you’re spending all the time learning one library in one language, you won’t be prepared for that change.
  • I’ve spent time using, in real-world scenarios, at least a couple dozen languages, and each time I learn a new language, I become a better developer. It’s just a good idea to learn a new language.
    Lesson 12: Why Swift for TensorFlow

Lesson 13
  1. Importing Python into Swift is a super cool feature that you should use to fill in the gaps while this ecosystem doesn’t exist, but as soon as possible, fill in the gaps. We shouldn’t use this as such a crutch that we never write our own even better dataframes and libraries. Lesson 13: Use It, Don’t Abuse It

  2. If you’re hacking around over the coming months and you find things aren’t the way you want, then you can, and should, change them. Lesson 13: Swift Is Fully Customizable

  3. Take a look at the code below after the lesson and think “what is going on here?” because this is something really interesting. Lesson 13: Only Possible In Swift
    [image: the Swift code snippet shown in Lesson 13]

  4. Some advice from Chris: “The thing I want to emphasize is that optimizing unsafe buffer pointers is a super low-level geeky thing that not everyone should do. It exists because at certain points in your journey you may find it useful, but usually you’re not working at this level”. Lesson 13: You Don’t Have To Use Every Feature

Lesson 14
  1. There are many ways to get involved. Pick some piece of this that might be interesting in your domain, and over the next 12-24 months, explore with us so that by the time all the pieces are together you’ll have what you need for your domain. It’s not so much, how do I build this custom derivative thing in Swift for TensorFlow (hard), but rather, starting to use little bits of this in the work that you’re doing. Start by just building a notebook, or converting a small library from Python to Swift, or writing a blogpost (Chris). Lesson 14: How You Can Help
59 Likes

Also, I am prone to missing stuff so if Jeremy gives some awesome advice (especially general, non-technical advice about how to become a great practitioner), and I don’t catch it, please let me know and I’ll add it. Cheers.

3 Likes

Jeremy has a training set and a validation set, and normalizes using train mean and train std for both sets.
I’m thinking that maybe a more natural way to do this is to normalize your entire training set before setting aside a validation set.

This does make me wonder what I’ve been doing with my test sets; I might have normalized my test sets separately in the past…

Normalizing on the whole set does introduce some amount of leakage between your train and validation set. If you want your validation set to give you a good sense of how the model will do on examples it hasn’t seen, then it’s not the best idea to include information from it in your training.
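
In code, the leakage-free version is just this (a sketch with stand-in data):

```python
import torch

x_train, x_valid = torch.randn(50000, 784), torch.randn(10000, 784)   # stand-in data

# statistics come from the training set only...
train_mean, train_std = x_train.mean(), x_train.std()

# ...and the exact same numbers are reused for the validation (and test) sets,
# so no information about the held-out data leaks into preprocessing
x_train = (x_train - train_mean) / train_std
x_valid = (x_valid - train_mean) / train_std
```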

11 Likes

That’s a good point!

Jeremy’s Breadcrumb Trails from Lesson 8

Don’t be overwhelmed… just sniff out a few tantalizing trails and follow your nose!

  1. Math notation: https://simple.wikipedia.org/wiki/List_of_mathematical_symbols
  2. Get the Greek letter for, and LaTeX version of, a symbol that you draw: http://detexify.kirelabs.org/classify.html
  3. “Python fire is a library for automatically generating command line interfaces (CLIs) from absolutely any Python object.” https://github.com/google/python-fire
  4. Frobenius norm: the square root of the sum of the squares of the elements in the matrix. http://mathworld.wolfram.com/FrobeniusNorm.html (a quick code sketch follows this list)
  5. broadcasting rules https://pytorch.org/docs/stable/notes/broadcasting.html
  6. Einstein summation convention: https://rockt.github.io/2018/04/30/einsum
  7. Halide, a language for fast, portable computation on images and tensors:
    http://halide-lang.org/
  8. Polyhedral compilation: https://polyhedral.info/
  9. Chris Lattner http://www.nondot.org/sabre/ (creator of Swift)
  10. “It’s all about initialization” 10,000 layer deep neural net with no normalization layers! “Fixup Initialization” https://arxiv.org/abs/1901.09321
  11. Homework: read at least section 2.2 of this paper: “Delving Deep into Rectifiers” https://arxiv.org/abs/1502.01852, in which Kaiming He (who went on to create ResNet) and his collaborators introduce (1) the “He initialization” method (a great improvement over Glorot / Xavier initialization) and (2) the PReLU (Parameterized ReLU) activation function.
  12. Read the Glorot and Bengio initialization paper from 2010. “Understanding the difficulty of training deep feedforward neural networks”, by Xavier Glorot and Yoshua Bengio http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf Jeremy said we will be implementing much of it in this course.
  13. Look into torch.nn.Conv2d(). Why is there a multiplier of sqrt(5) in the initialization? Jeremy pointed out that it is not documented and he thinks it’s incorrect.
  14. “The matrix calculus you need for deep learning” By Terence Parr and Jeremy Howard: https://explained.ai/matrix-calculus/index.html
  15. Jeremy has proposed a further improvement over “He initialization” in this lecture! He suggested testing it out… See the thread https://forums.fast.ai/t/shifted-relu-0-5/41467
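
A tiny sketch tying together items 4 and 6 above, computing the Frobenius norm directly and via einsum (my own example):

```python
import torch

m = torch.randn(5, 5)

frob_direct = (m * m).sum().sqrt()                 # sqrt of the sum of squared elements
frob_einsum = torch.einsum('ij,ij->', m, m).sqrt() # same thing via Einstein summation

print(frob_direct, frob_einsum, m.norm())          # all three should match
```
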
18 Likes

Thanks @jcatanza! Perhaps you could copy some of those in to the lesson 8 wiki post? You’ll also see there examples of how to create nicely formatted links, if you’re interested.

1 Like

Hey guys, I just wanted to share a realization that’s been really helpful: a distinction between Part 1 and Part 2 of the class. In Part 1 Jeremy emphasizes that you’re not meant to understand everything and just to trust that it will all make sense at some future point. This was amazing advice and worked for me. I kept that mindset going into Part 2, but I think it was a mistake.

Part 1 is top-down and it jumps between new topics that are independent. Part 2 is bottom-up and progresses through dependent topics. In Part 2, if you don’t understand the preceding lecture, you’re unlikely to understand the current one. This clicked for me when I reread the advice below from Part 2. Now I’ve gone back and done some more work with earlier lectures, and everything is clicking. So if you feel lost, go back and dive deeper.

A related note is that I find it very hard to understand two new concepts at once. I’m not a software engineer, so as I was getting into lesson 9, I was trying to understand both complex ML stuff and complex Python stuff, and it wasn’t working. I went back and watched some talks on Python, reviewed decorators, built-in functions, dunder methods, and the Python data model, and then when I came back I only had one thing to worry about, the new ML material, and I found I processed it much more easily. So if you’re stuck during Part 1, trust the process and just keep going. If you’re stuck on Part 2, go back to the last point where you understood everything and work forward at a slower pace.

5 Likes

Super advice, and I’ll add another thing that Jeremy reiterates every class and that I think a lot of people miss. He’s packing enough information into the course to keep us busy for the next 4-8 months or so, until the next course is available. Each lecture is so dense it’s almost necessary to watch it multiple times, or do what I do and convert it to audio for repeated listening. For Part 2 last year I probably listened to it six times. I’m guessing that this year’s Part 2 will take multiple viewings before I can even get to the point where I’m comfortable with the audio only.

So don’t worry if you don’t get it all in the week it’s posted. It’s online for rewatching until you do get it. The forums are also full of people at various stages of ‘getting it’, so if you don’t understand something, ask. Someone who understands it slightly better will share. Fast.ai is most effective when you use all three parts (course, library, and community). Although I’m guessing most people here reading this already know that. :wink:

4 Likes

As a Swift neophyte, I found this Medium post to be helpful in understanding the Lesson 13 code above.

https://link.medium.com/JGSzlVdxjW

1 Like

@MadeUpMasters since the course is public, I’ve taken the liberty of copying these over to the part 1 post. Hope that’s OK!

1 Like