Share your work here ✅

Explore the user interface and logic in model-driven apps

The best way to understand the basic UI concepts in model-driven apps is to run a sample app. You’ll also discover model-driven app concepts that you can apply as you develop your own apps. To run a sample app, go to your Power Apps home page, scroll to the bottom of the screen, and select one of the apps whose Type is Model-driven.

Sample app on Power Apps Innovation Challenge.

Before you start building model-driven apps, you need to understand the design approach. Model-driven apps have three design phases:

  • Model your business data
  • Define your business processes
  • Build the app

First, you define the structure of the data stored in Common Data Service; we already explored the basic concepts of Common Data Service and how to create and customize entities. Second, you define and enforce consistent business processes, which is a key aspect of model-driven app design. After modeling data and defining processes, you build your app by selecting and setting up the components you need in the App Designer.

The App Designer

The App Designer is the main tool for building model-driven apps. From the App Designer you can easily navigate to the site map designer, form designer, and view designer as you need them to design the different components of a model-driven app.

To open the App Designer and see how it looks, select Create and then select the Model-driven app from blank tile.


Manet or Monet? Still mixing the two up? Me too. So I have made this little demo from lessons one and two :slight_smile:

Manet or Monet?

My First Henna Classifier Model

I’m thrilled to share my first attempt at building an image classifier using the tools from the FastAI course!

This week, I built a model that can distinguish between henna designs and forest photos—a fun and creative way to explore deep learning.

Here is my notebook: Is it Henna or Forest?

The Challenge

Initially, I ran into some issues. My model wasn’t predicting probabilities correctly—it would identify the image as “henna” but return a probability of 0.0000! This was quite puzzling and required some investigation.

How I Solved It

  1. Understanding Class Labels: I learned that the model’s class labels (learn.dls.vocab) determine how the probabilities are indexed. Using the o2i attribute, I was able to correctly map the class label (“henna”) to its index, resolving the probability issue (see the sketch after this list).

probs[learn.dls.vocab.o2i['henna']]

  2. Cleaning and Balancing the Dataset: I verified all images using verify_images() to ensure no corrupted files were causing issues. Luckily, my dataset had no invalid images, and I balanced the number of images between the “henna” and “forest” classes.

  3. Refining the Model: After fixing the dataset and re-training the model with ResNet18, I achieved an error rate of 0.0000.
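
For reference, here’s a minimal sketch of that fix (assuming a trained learn, an image im opened with PILImage.create, and the downloaded images under path; the names are illustrative rather than my exact notebook code):

from fastai.vision.all import *

# learn.predict returns the decoded label, its index, and the per-class probabilities
pred, pred_idx, probs = learn.predict(im)
print(f"Prediction: {pred}")

# learn.dls.vocab holds the class labels; o2i maps a label to its index,
# so this reads the probability of "henna" regardless of label order
print(f"P(henna) = {probs[learn.dls.vocab.o2i['henna']]:.4f}")

# the dataset check from step 2: find and delete any corrupted downloads
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)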

Excited for What’s Next

This is just the beginning of my deep learning journey. I’m excited to explore more datasets, fine-tune models, and solve real-world problems using AI.

Feel free to check out my Henna Classifier Notebook. Let me know your thoughts or any suggestions to improve it!


Today, I watched the YouTube video of the first chapter and ran the Jupyter notebook on Kaggle. I modified the code to learn different kinds of X-ray images. It identified whether a given image is an X-ray of the chest, spine, leg, or head.

Just made an alternative to bird or not.

Re-implemented the code from scratch and tweaked it a bit in my own way. The model classifies gender based on a face photo. It’s not that good in terms of performance, but maybe I can learn more by trying to improve it!

Made an improved version? IDK if it is better in practice, though.


I made a penguin species classifier using ‘convnext_tiny_in22k’. I was able to get ~92% accuracy with just 3 epochs. Here’s the link.
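
For anyone curious, plugging a timm architecture like that into fastai can look roughly like this (a sketch assuming a dls of penguin photos built the usual way, and that the timm package is installed):

from fastai.vision.all import *

# passing a timm model name as a string works in recent fastai versions
learn = vision_learner(dls, 'convnext_tiny_in22k', metrics=error_rate)
learn.fine_tune(3)   # 3 epochs, as in the post above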

Let me know what you guys think.

Hello,

I made a small image classifier that tells if you’re looking at a mountain or a beach.
Demo
Kaggle

First ML Model: M&M’s Classifier (90% Accuracy!)

Following Chapter 1 of the course, I adapted the bird classifier to identify M&M’s from other chocolates (Snickers, KitKat, Hershey’s). As someone completely new to ML, getting this working felt like magic!

What I did:

  1. Started with the base vision learner architecture
  2. Created my own dataset of chocolate images
  3. Fine-tuned the model
  4. Achieved ~90% accuracy on test images
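
A rough sketch of what those steps can look like in fastai (the folder layout, image size, and batch settings here are illustrative assumptions, not the exact notebook code):

from fastai.vision.all import *

# assumes the chocolate photos were downloaded into one folder per class,
# e.g. chocolates/mnms, chocolates/snickers, chocolates/kitkat, chocolates/hersheys
path = Path('chocolates')

dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),        # images in, category labels out
    get_items=get_image_files,                 # collect every image under path
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,                        # label = name of the parent folder
    item_tfms=[Resize(192, method='squish')]
).dataloaders(path, bs=32)

learn = vision_learner(dls, resnet18, metrics=error_rate)  # transfer learning from ImageNet
learn.fine_tune(3)                                         # fine-tune on the chocolate dataset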

Here’s a quick demo of the results:

# classes holds the label names in index order, e.g. classes = list(learn.dls.vocab)
predicted_class,_,probs = learn.predict(im)
print(f"This is a: {predicted_class}.")
predicted_idx = classes.index(predicted_class)
# print(f"Probability it's a {predicted_class}: {probs[predicted_idx]:.4f}")
for idx, ele in enumerate(classes):
    print(f"Probability it's a {ele}: {probs[idx]:.4f}")

This is a: mnms.
Probability it's a hersheys: 0.0000
Probability it's a kitkat: 0.0000
Probability it's a mnms: 1.0000
Probability it's a snickers: 0.0000

Key learnings:

  • Image preprocessing is crucial (had to handle RGB vs RGBA)
  • The importance of diverse training data
  • How transfer learning makes this accessible to beginners

Code snippet of prediction:

# convert to RGB first: some downloaded PNGs are RGBA, which would trip up prediction
test_image = PILImage.create("test_image.png").convert('RGB')
predicted_class,_,probs = learn.predict(test_image)



https://www.kaggle.com/code/husseinserhan/m-ms-vs-other-chocolates-classification/edit

If you watch YouTube videos for studying, you should try this amazing YouTube To Transcript web tool, which can extract the subtitles from a video just by pasting the YT URL.
Thanks for all the support :innocent:

Followed lecture 1 from fastai. Here is an exercise trying to classify Asian noodle dishes.

Well, I might be late, but I already feel powerful after this Kaggle exercise. Thank you, Jeremy, for this masterclass.
Here’s my work: a simple model for telling whether a given image is a cat or not.
link: https://www.kaggle.com/code/skonteye/is-it-a-cat-creating-a-model-from-your-own-data

Well, I am a software developer (mainly web PHP applications) with over 10 years of experience, but I have also been a handball player and coach.

So, just for the fun of playing around, I modified the “is it a bird” demo and created a model that will tell you whether an image of a sports game shows soccer, handball, basketball, or tennis.

You can see the link here: What sport is it? | Kaggle

:slight_smile:

Hi Jeremy and team, I’m Marcial and I work at the Paraguayan Tax Administration. I was thinking about what use case I could tackle and was out of ideas. One day later, in my daily work, I downloaded a PDF from our system that, according to the file name, should have contained Minutes of Assembly, but when I opened it, it was actually a tax return. From there I got the idea of making a Minutes of Assembly detector. Later I could check with our IT Department how to add this functionality to notify taxpayers to submit the forms correctly. It worked quite well, with 98% effectiveness.

Here is the link, greetings:


Hi there!

My name is Yehor, I’m from Ukraine and I have extensive experience as an ML Engineer in Fintech (Banking). This experience rarely involved Neural Networks, so I feel like Deep Learning could be my space for rapid growth.

For the Part 1 assignment I decided to create a Citrus classifier. The results are rather impressive, but not ideal. For example, a common mistake of my model is to confuse orange with tangerine and vice versa. To be honest, the task of separating those two can be challenging even for me (like in the example below).

I also had fun optimizing the code in the notebook a bit: cleaning up the imports and adding a class that defines the storage folder for each class in a unified way. I also turned the initial binary classification into a multi-class task, which is surprisingly easy.
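
In case it helps, a sketch of those two tweaks (the class names and root folder here are illustrative assumptions, not my exact notebook code): a small class that owns the storage folder for each label, plus a label list that makes the task multi-class.

from fastai.vision.all import *

class CitrusClass:
    "Creates and remembers the storage folder for one class of images."
    def __init__(self, name, root=Path('citrus')):
        self.name, self.path = name, root/name
        self.path.mkdir(parents=True, exist_ok=True)

# adding entries here is all it takes to go from binary to multi-class:
# with get_y=parent_label, the DataBlock derives one category per folder
classes = [CitrusClass(n) for n in ('orange', 'tangerine', 'lemon', 'lime')]

With the images organised like this, the same DataBlock used for binary classification picks up however many categories there are folders, so nothing else needs to change.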

That is very cool! What tool did you use to create the app?


Can’t believe how easy it is to build an insect/arachnid identifier!


Started with simple changes, identifying Parrot vs Crow.
Tried Crow vs Raven, which didn’t work for some reason.


Finished lesson 2 of the course today and deployed a model to classify architectural styles. It’s amazing how easy fast.ai makes things. I learnt so many new things.


Here is the link to my application. And here is the link to the notebook
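
For anyone who hasn’t reached that part yet, a lesson-2 style deployment can be sketched roughly like this (the exported filename is an assumption, not my exact code): export the trained learner, then serve it with a small Gradio app.

from fastai.vision.all import *
import gradio as gr

learn = load_learner('architecture_styles.pkl')   # file produced earlier by learn.export()
labels = learn.dls.vocab

def classify(img):
    # return a {label: probability} dict, which gr.Label renders as a bar chart
    pred, idx, probs = learn.predict(img)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

gr.Interface(fn=classify, inputs=gr.Image(), outputs=gr.Label()).launch()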
