Fastbook Chapter 2 questionnaire solutions (wiki)

fastbook Chapter 2 solutions

Here are the questions:

  1. Provide an example of where the bear classification model might work poorly, due to structural or style differences from the training data.

There are many cases in which the bear classification model could fail, especially if these cases were not represented in the training data:

  • The bear is partially obstructed
  • Nighttime images are passed into the model
  • Low-resolution images are passed into the model
  • The bear is far away from the camera
  • The bear training dataset is highly biased towards one type of feature (e.g. color)

  2. Where do text models currently have a major deficiency?

Text models can generate context-appropriate text (like replies to messages, or text imitating an author’s style). However, text models still struggle to give correct responses. Even given factual information (such as a knowledge base), it is still hard to generate responses that use that information to produce factually correct answers, although the generated text can seem very compelling. This can be dangerous, because a layperson may not be able to evaluate the factual accuracy of the generated text.

  3. What are the possible negative societal implications of text generation models?

The ability of text generation models to produce context-aware, highly compelling responses can be exploited at massive scale to spread disinformation (“fake news”) and encourage conflict.

Such models can also reinforce biases present in the training data (such as gender or racial bias), creating a vicious cycle of biased outputs.

  4. In situations where a model might make mistakes, and those mistakes could be harmful, what is a good alternative to automating a process?

The predictions of the model could be reviewed by human experts, who evaluate the results and determine the best next step. This is especially relevant when applying machine learning to medical diagnosis. For example, a machine learning model for identifying strokes in CT scans could flag high-priority cases for expedited review, while the remaining cases are still sent to radiologists as usual. Other models can augment the medical professional’s abilities, reducing risk while still improving the efficiency of the workflow; for example, deep learning models can provide useful measurements for radiologists or pathologists.

  5. What kind of tabular data is deep learning particularly good at?

Deep learning is particularly good at analyzing tabular data that includes natural language, or high-cardinality categorical columns (i.e. columns containing a large number of discrete choices, such as zip codes).

  6. What’s a key downside of directly using a deep learning model for recommendation systems?

Machine learning approaches for recommendation systems will often only tell you which products a user might like, rather than which recommendations would actually be helpful to the user. For example, if a user bought one of an author’s books and is already familiar with the author’s other books, recommending those other books isn’t helpful; neither is recommending products the user has already purchased.

  7. What are the steps of the Drivetrain approach?

The Drivetrain approach has four steps: start with a clearly defined objective; identify the levers, i.e. the inputs or actions you can control to meet that objective; work out what data you need to collect (or can acquire) about how the levers affect the objective; and finally build models that predict how the different levers influence the objective, so you can choose the actions that produce the best outcome.

  8. How do the steps of the Drivetrain approach map to a recommendation system?

The objective of a recommendation engine is to drive additional sales by surprising and delighting the customer with recommendations of items they would not have purchased without the recommendation. The lever is the ranking of the recommendations. New data must be collected to generate recommendations that will cause new sales; this requires conducting many randomized experiments in order to collect data about a wide range of recommendations for a wide range of customers. This is a step that few organizations take, but without it you don’t have the information you need to actually optimize recommendations based on your true objective (more sales!). Finally, two models can be built to predict purchase probabilities, conditional on seeing or not seeing a recommendation; the difference between these two probabilities is the utility of showing a given recommendation to a customer.

  9. Create an image recognition model using data you curate, and deploy it on the web.

To be done by the reader. Watch the Lesson 2 video for help.

  10. What is DataLoaders?

The DataLoaders class is the class that passes the data to the fastai model. It is essentially a thin wrapper that stores the DataLoader objects it is given (usually one for the training set and one for the validation set) and makes them available to the learner.

  11. What four things do we need to tell fastai to create DataLoaders?
  • what kinds of data we are working with
  • how to get the list of items
  • how to label these items
  • how to create the validation set
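
As a rough sketch of how these four things are supplied in practice, here they are as arguments to the fastai DataBlock API (the bears/ folder layout and the 128-pixel size are illustrative assumptions, not something fixed by fastai):

```python
from fastai.vision.all import *

path = Path('bears')  # hypothetical folder: bears/grizzly/, bears/black/, bears/teddy/
bears = DataBlock(
    blocks=(ImageBlock, CategoryBlock),      # 1. what kinds of data we are working with
    get_items=get_image_files,               # 2. how to get the list of items
    get_y=parent_label,                      # 3. how to label them (here: from the parent folder name)
    splitter=RandomSplitter(valid_pct=0.2),  # 4. how to create the validation set
    item_tfms=Resize(128))                   # resize so images can be collated into mini-batches

dls = bears.dataloaders(path)                # the resulting DataLoaders object
```
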
  12. What does the splitter parameter to DataBlock do?

In a fastai DataBlock, the splitter argument tells fastai how to split up the dataset into subsets (usually a training and a validation set). For example, to randomly split the data, you can use fastai’s predefined RandomSplitter class, providing it with the proportion of the data to use for validation.

  13. How do we ensure a random split always gives the same validation set?

It turns out it is impossible for our computers to generate truly random numbers; instead, they use a process known as a pseudo-random number generator, which can be controlled with a random seed. By setting a fixed seed value, the pseudo-random generator produces the same “random” numbers on every run, so a random split made with a fixed seed always gives the same validation set.
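
For instance, a small sketch with fastai’s RandomSplitter (the ten-item toy “dataset” is just for illustration):

```python
from fastai.vision.all import *

# With a fixed seed, the pseudo-random permutation is the same on every run,
# so the same items always land in the validation set.
splitter = RandomSplitter(valid_pct=0.2, seed=42)
train_idx, valid_idx = splitter(list(range(10)))  # toy "dataset" of 10 items
print(valid_idx)                                  # identical indices every time this cell is run
```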

  14. What letters are often used to signify the independent and dependent variables?

x is independent. y is dependent.

  15. What’s the difference between crop, pad, and squish Resize() approaches? When might you choose one over the other?

default resized (cropped) grizzly image:

crop is the default Resize() method, and it crops the images to fit a square shape of the size requested, using the full width or height. This can result in losing some important details. For instance, if we were trying to recognize the breed of dog or cat, we may end up cropping out a key part of the body or the face necessary to distinguish between similar breeds.

pad is an alternative Resize() method, which pads the matrix of the image’s pixels with zeros (which shows as black when viewing the images). If we pad the images then we have a whole lot of empty space, which is just wasted computation for our model, and results in a lower effective resolution for the part of the image we actually use.

padded grizzly image:

squish is another alternative Resize() method, which can either squish or stretch the image. This can cause the image to take on an unrealistic shape, leading to a model that learns that things look different to how they actually are, which we would expect to result in lower accuracy.

grizzly image squished:

Which resizing method to use therefore depends on the underlying problem and dataset. For example, if the features in the dataset images take up the whole image, cropping may result in a loss of information, so squishing or padding may be more useful.

Another, often better, method is RandomResizedCrop, which crops a randomly selected region of the image. Every epoch the model therefore sees a different part of each image and learns accordingly.
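
As a sketch of how each option is selected in fastai (the bears/ folder and the 128-pixel size are illustrative; only the item_tfms changes between variants):

```python
from fastai.vision.all import *

path = Path('bears')  # hypothetical folder with one sub-folder of images per class
bears = DataBlock(blocks=(ImageBlock, CategoryBlock),
                  get_items=get_image_files, get_y=parent_label,
                  splitter=RandomSplitter(valid_pct=0.2, seed=42))

# Each variant resizes every image to 128x128, but in a different way.
dls_crop   = bears.new(item_tfms=Resize(128)).dataloaders(path)                                      # crop (default)
dls_pad    = bears.new(item_tfms=Resize(128, ResizeMethod.Pad, pad_mode='zeros')).dataloaders(path)  # pad with black
dls_squish = bears.new(item_tfms=Resize(128, ResizeMethod.Squish)).dataloaders(path)                 # squish/stretch
dls_random = bears.new(item_tfms=RandomResizedCrop(128, min_scale=0.3)).dataloaders(path)            # random crop each epoch
```
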

  16. What is data augmentation? Why is it needed?

Data augmentation refers to creating random variations of our input data, such that they appear different but not so different that the meaning of the data changes. Examples include flipping, rotation, perspective warping, brightness changes, etc. Data augmentation helps the model learn the basic concept of what an object is and how objects of interest can appear in images, so it allows machine learning models to generalize better. This is especially important when labeling data is slow and expensive.

  17. What is the difference between item_tfms and batch_tfms?

item_tfms are transformations applied to each individual data sample on the CPU. Resize() is a common one because all images in a mini-batch fed to a CNN must have the same dimensions; applying Resize() as an item_tfms ensures every image ends up with the same width and height.
batch_tfms are transformations applied to batched data samples (i.e. individual samples that have been collated into a mini-batch) on the GPU. They are faster and more efficient than item_tfms. A good example is the set of augmentations provided by aug_transforms(), which contains several batch-level augmentations that help many models.
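
A minimal sketch combining the two (again assuming a hypothetical bears/ folder with one sub-folder of images per class):

```python
from fastai.vision.all import *

path = Path('bears')  # hypothetical folder with one sub-folder of images per class
bears = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files, get_y=parent_label,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    item_tfms=RandomResizedCrop(224, min_scale=0.5),  # per item, on the CPU: give every image the same size
    batch_tfms=aug_transforms())                      # per mini-batch, on the GPU: random augmentations
dls = bears.dataloaders(path)
```
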

  18. What is a confusion matrix?

A confusion matrix is a representation of the predictions made versus the correct labels. The rows of the matrix represent the actual labels, while the columns represent the predictions; the diagonal elements therefore count the correctly classified images, and the off-diagonal elements count the misclassified ones. Confusion matrices provide useful information about how well the model is doing and which classes it tends to confuse.
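
A short sketch of how to get one in fastai, assuming a bears/ folder with one sub-folder of images per class (model and epoch count are arbitrary; vision_learner is the current name for what older fastai versions call cnn_learner):

```python
from fastai.vision.all import *

path = Path('bears')  # hypothetical folder with one sub-folder of images per class
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42, item_tfms=Resize(224))
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(2)

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()  # rows = actual labels, columns = model predictions
```
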

  19. What does export save?

export saves both the architecture and the trained parameters of the neural network. It also saves how the DataLoaders are defined, so the same data transformations can be applied at inference time.
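
For example (assuming learn is a trained fastai Learner, such as the one in the previous sketch; the filename is the conventional default but otherwise arbitrary):

```python
from fastai.vision.all import *

# `learn` is assumed to be a trained Learner (e.g. from the confusion-matrix sketch above).
learn.export('export.pkl')          # architecture + trained weights + DataLoaders definition
print(Path().ls(file_exts='.pkl'))  # the exported file now sits next to the notebook
```
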

  20. What is it called when we use a model for getting predictions, instead of training?

Inference
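
For instance, loading the exported Learner from the previous sketch and running inference on a single image (the image path is a placeholder):

```python
from fastai.vision.all import *

learn_inf = load_learner('export.pkl')   # reload the exported Learner; no training data needed
pred_class, pred_idx, probs = learn_inf.predict('images/grizzly.jpg')  # placeholder image path
print(pred_class, probs[pred_idx])       # predicted class and the probability assigned to it
```
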

  21. What are IPython widgets?

IPython widgets combine JavaScript and Python functionality to let us build and interact with GUI components directly in a Jupyter notebook. An example of this is an upload button, which can be created with the Python function widgets.FileUpload().
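
A minimal sketch in a notebook cell (the variable name is arbitrary):

```python
import ipywidgets as widgets

btn_upload = widgets.FileUpload()  # renders an upload button when displayed in a notebook cell
btn_upload                         # last expression in the cell: Jupyter displays the widget
```
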

  22. When might you want to use CPU for deployment? When might GPU be better?

GPUs are best for doing identical work in parallel. If you will be analyzing single pieces of data at a time (like a single image or single sentence), then CPUs may be more cost-effective, especially given the greater market competition for CPU servers versus GPU servers. A GPU becomes worthwhile if you collect user inputs into batches and perform inference one batch at a time, though this may require users to wait for the model’s predictions. Additionally, there are many other complexities when it comes to GPU inference, like memory management and queuing of the batches.

  23. What are the downsides of deploying your app to a server, instead of to a client (or edge) device such as a phone or PC?

The application will require a network connection, and there will be extra network latency when submitting input and returning results. Additionally, sending private data to a remote server can raise security concerns.

On the flip side, deploying a model to a server makes it easier to iterate and roll out new versions of the model. This is because you as a developer have full control over the server environment and only need to upgrade it in one place, rather than having to make sure that all the endpoints (phones, PCs) upgrade their version individually.

  24. What are 3 examples of problems that could occur when rolling out a bear warning system in practice?

The model we trained will likely perform poorly when:

  1. Handling nighttime images
  2. Dealing with low-resolution images (e.g. some smartphone images)
  3. Returning predictions too slowly to be useful in practice
  25. What is “out of domain data”?

Data that is fundamentally different in some aspect from the model’s training data. For example, an object detector that was trained exclusively on outdoor daytime photos is given a photo taken at night.

  26. What is “domain shift”?

This is when the type of data a model sees changes gradually over time. For example, an insurance company may use a deep learning model as part of its pricing algorithm, but over time its customers change, so the original training data is no longer representative of current data and the model is effectively being applied to out-of-domain data.

  27. What are the 3 steps in the deployment process?
  1. Manual process – the model is run in parallel and not directly driving any actions, with humans still checking the model outputs.
  2. Limited scope deployment – The model’s scope is limited and carefully supervised. For example, doing a geographically and time-constrained trial of the model deployment.
  3. Gradual expansion – The model scope is gradually increased, while good reporting systems are implemented in order to check for any significant changes to the actions taken compared to the manual process (i.e. the models should perform similarly to the humans, unless it is already anticipated to be better).
  28. For a project you’re interested in applying deep learning to, consider the thought experiment “what would happen if it went really, really well?”

To be done by reader

  29. Start a blog, and write your first blog post. For instance, write about what you think deep learning might be useful for in a domain you’re interested in.

To be done by reader. Check this forum post for inspiration.

@jeremy please wiki-fy :slight_smile:

Thanks @jeremy! I have also added the link to the official Lesson 2 topic. I will start adding solutions tonight and throughout the week :slight_smile:

I’ve added almost half of the solutions and I’ll add the remaining during the next two days.

Most of the questions are done, also thanks to the help from the community. I will finish the rest by tomorrow, if someone doesn’t already finish it! :slight_smile:

Hello all,

I have finished the questionnaire (with help from members of the community). Hope this helps!

I’ve created a Quizlet deck from the above questions/answers for anyone who wants to use them for studying. I did a little answer trimming and bullet pointing, but the information is there. Thanks @ilovescience!

https://quizlet.com/_8gx0mu?x=1jqt&i=1f3dh7

Very useful… Thanks. Do you have these for lesson 1 and other lessons as well?

Thanks!

@ilovescience, I would suggest updating this answer to reflect some of the upsides as well.

The application will require network connection, and there will be extra network latency time when submitting input and returning results. Additionally, sending private data to a network server can lead to security concerns.

On the flip side deploying a model to a server makes it easier to iterate and roll out new versions of a model. This is because you as a developer have full control over the server environment and only need to do it once rather than having to make sure that all the endpoints (phones, PCs) upgrade their version individually.

Updated the Wiki. Feel free to change again.

For something like ChatGPT, what would be a good alternative without it getting to the end user?

Yes, you can find solutions to each chapter under the chapter’s video.

I can think of placing a model specifically to flag such answers between the original output and the user. It would increase the loading time a little, but that seems like a fair trade.

Hi, first of all thank you very much for all this useful information and for giving this gift to us. I have a question: let’s say I trained a model and deployed it. After a while I get new data that I want to add or train the model on. What is the best way to redeploy the newly trained model/data?

With question #2, I feel like LLM “hacks” like RAG have already gone a long way to resolving the deficiencies of text models when it comes to correctness.

Other thoughts?

@ilovescience thanks for compiling the questionnaire answers!

Question #2: As suggested by @leeknowlton I would also consider adding a remark about RAG as a commonplace measure to minimize hallucinations by grounding model responses in up-to-date, verifiable sources. Also, some frameworks such as LlamaIndex (example using metadata for source citation) and LangChain (citations) provide mechanisms to facilitate cross-referencing of model answers with the original content.

Question #6: Would bias amplification (if certain types of content are historically more popular among a specific user group, the model might prioritize recommending similar content to similar users, further reinforcing those preferences and biases), reduced content diversity (filter bubbles), and the cold start problem also fall under key downsides of directly using a deep learning model for a recommender system?