Explore the user interface and logic in model-driven apps
The best way to understand the basic concepts of the UI in model-driven apps is to run a sample app. You'll also discover model-driven app concepts that you can apply as you develop your own apps. To run a sample app, scroll to the bottom of the screen from your home page in Power Apps and select one of the apps where the Type is Model-driven.
Before you start modeling a model-driven app, you need to understand the design approach. Model-driven apps have three design phases:
Model your business data
Define your business processes
Build the app
First, you need to define the structure of the data stored in Common Data Service. We have already explored the basic concepts of Common Data Service and how to create and customize entities. As a second step, you should define and enforce consistent business processes, which is a key aspect of model-driven app design. After modeling the data and defining the processes, you build your app by selecting and setting up the components you need in App Designer.
The App Designer
App Designer is the main tool for building model-driven apps. From App Designer you can easily navigate to the Site map designer, Form designer, and View designer as you need them to design the different components of a model-driven app.
To open App Designer and see how it looks, select Create, and then select the Model-driven app from blank tile.
Initially, I ran into some issues. My model wasn’t predicting probabilities correctly—it would identify the image as “henna” but return a probability of 0.0000! This was quite puzzling and required some investigation.
How I Solved It
Understanding Class Labels: I learned that the model’s class labels (learn.dls.vocab) determine how the probabilities are indexed. Using the o2i attribute, I was able to correctly map the class label (“henna”) to its index, resolving the probability issue.
probs[learn.dls.vocab.o2i['henna']]
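For context, here is a minimal sketch of how that lookup fits into a full prediction, assuming a trained fastai learner called learn; the image file name is illustrative:

from fastai.vision.all import PILImage

img = PILImage.create('henna.jpg')  # illustrative file name
pred_class, pred_idx, probs = learn.predict(img)

# learn.dls.vocab lists the class labels in the same order as probs;
# o2i maps a label back to its index, so this reads out the right probability
henna_prob = probs[learn.dls.vocab.o2i['henna']]
print(f"Predicted: {pred_class}, P(henna) = {henna_prob:.4f}")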
Cleaning and Balancing the Dataset: I verified all images using verify_images() to ensure no corrupted files were causing issues. Luckily, my dataset had no invalid images, and I balanced the number of images between the “henna” and “forest” classes.
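As a rough sketch of that cleanup step, assuming the images live in per-class folders under a dataset directory (the folder name here is illustrative), the usual fastai pattern is:

from pathlib import Path
from fastai.vision.all import get_image_files, verify_images

path = Path('henna_or_forest')                   # illustrative dataset folder
failed = verify_images(get_image_files(path))    # files that fail to open
failed.map(Path.unlink)                          # delete any corrupted images
print(f'Removed {len(failed)} bad images')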
Refining the Model: After fixing the dataset and re-training the model with ResNet18, I achieved an error rate of 0.0000.
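For reference, the re-training step is roughly the following, assuming dls are the DataLoaders built from the cleaned dataset; the number of epochs is illustrative:

from fastai.vision.all import vision_learner, resnet18, error_rate

learn = vision_learner(dls, resnet18, metrics=error_rate)  # pretrained ResNet18
learn.fine_tune(3)                                         # illustrative epoch count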
Excited for What’s Next
This is just the beginning of my deep learning journey. I’m excited to explore more datasets, fine-tune models, and solve real-world problems using AI.
Feel free to check out my Henna Classifier Notebook. Let me know your thoughts or any suggestions to improve it!
Today, I watched the YouTube video for the first chapter and ran the Jupyter notebook on Kaggle. I modified the code to learn different kinds of X-ray images. It identifies whether a given image is an X-ray of the chest, spine, leg, or head.
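As a sketch of what that kind of modification can look like, assuming the X-ray images are sorted into folders named chest, spine, leg, and head (the folder layout and parameters are illustrative, not the actual notebook code):

from fastai.vision.all import *

path = Path('xrays')                      # illustrative root folder, one subfolder per class
dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),   # image in, single category out
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,                   # the folder name becomes the label
    item_tfms=[Resize(192, method='squish')]
).dataloaders(path, bs=32)

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)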
I re-implemented the code from scratch and tweaked it a bit in my own way. The model classifies gender based on the face. It's not that good in terms of performance, but maybe I can learn more by trying to improve it!
Following Chapter 1 of the course, I adapted the bird classifier to distinguish M&M's from other chocolates (Snickers, KitKat, Hershey's). As someone completely new to ML, getting this working felt like magic!
What I did:
Started with the base vision learner architecture
Created my own dataset of chocolate images (see the sketch after this list)
Fine-tuned the model
Achieved ~90% accuracy on test images
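As noted in the list above, here is a rough sketch of how the dataset step can be done, assuming a search_images helper that returns image URLs for a query (as in the course notebook); the queries and sizes are illustrative:

from fastai.vision.all import *

classes = ['mnms', 'snickers', 'kitkat', 'hersheys']
path = Path('chocolates')
for c in classes:
    dest = path/c
    dest.mkdir(parents=True, exist_ok=True)
    # download a batch of images per class, then shrink them for faster training
    download_images(dest, urls=search_images(f'{c} chocolate photo'))
    resize_images(dest, max_size=400, dest=dest)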
Here’s a quick demo of the results:
predicted_class, _, probs = learn.predict(im)
print(f"This is a: {predicted_class}.")
classes = list(learn.dls.vocab)  # class labels, in the same order as probs
predicted_idx = classes.index(predicted_class)
# print(f"Probability it's a {predicted_class}: {probs[predicted_idx]:.4f}")
for idx, ele in enumerate(classes):
    print(f"Probability it's a {ele}: {probs[idx]:.4f}")
This is a: mnms.
Probability it’s a hersheys: 0.0000
Probability it’s a kitkat: 0.0000
Probability it’s a mnms: 1.0000
Probability it’s a snickers: 0.0000
Key learnings:
Image preprocessing is crucial (had to handle RGB vs RGBA; see the sketch after this list)
The importance of diverse training data
How transfer learning makes this accessible to beginners
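On the RGB vs RGBA point mentioned above, here is a minimal sketch of one way to handle it, simply forcing every image into RGB before training; the folder name and extension are illustrative:

from pathlib import Path
from PIL import Image

# convert any RGBA (or palette) images to plain RGB so every input has 3 channels
for p in Path('chocolates').rglob('*.png'):   # illustrative folder and extension
    img = Image.open(p)
    if img.mode != 'RGB':
        img.convert('RGB').save(p)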
If you are watching YouTube videos to study, then you should use this amazing YouTube To Transcript web tool, which can extract the subtitles from a video just by pasting the YouTube URL.
Thanks for all the support.
Well, I am a software developer (mainly web PHP applications) with over 10 years of experience, but I have also been a handball player and coach.
So, just for the fun of playing around, I modified the "is it a bird" demo and created a model that tells you whether an image of a sports game shows soccer, handball, basketball, or tennis.
Hi Jeremy and team, I'm Marcial and I work at the Paraguayan Tax Administration. I was thinking about what use case I could tackle and was out of ideas. One day later, in my daily work, I downloaded a PDF from our system that, according to the file name, should contain Minutes of an Assembly, but when I opened it, it was actually a tax return. From there I got the idea of building a Minutes of Assembly detector. Later I could work with our IT Department on adding this functionality to notify taxpayers to submit the forms correctly. It worked quite well, with 98% effectiveness.
My name is Yehor, I’m from Ukraine and I have extensive experience as an ML Engineer in Fintech (Banking). This experience rarely involved Neural Networks, so I feel like Deep Learning could be my space for rapid growth.
For the Part 1 assignment I decided to create a citrus classifier. The results are rather impressive, but not ideal. For example, a common mistake of my model is to confuse orange with tangerine and vice versa. To be honest, the task of separating those two can be challenging even for me (as in the example below).
I also had a lot of fun optimizing the code in the notebook a bit: tidying the imports and adding a class that defines the storage folders for each class in a unified way. I also turned the initial binary classification into a multi-class task, which is surprisingly easy.
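As a minimal sketch of what such a storage class might look like (the class names and folder layout here are illustrative, not the actual notebook code):

from dataclasses import dataclass
from pathlib import Path

@dataclass
class ClassStorage:
    root: Path
    classes: tuple = ('orange', 'tangerine', 'lemon', 'lime')  # illustrative citrus classes

    def folder(self, name: str) -> Path:
        # one folder per class, created on demand, so adding a class is one line
        dest = self.root / name
        dest.mkdir(parents=True, exist_ok=True)
        return dest

storage = ClassStorage(Path('citrus'))
folders = [storage.folder(c) for c in storage.classes]

With a folder per class, fastai's parent_label picks up the labels automatically, which is why going from binary to multi-class is mostly a matter of adding folders.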
Finished lesson 2 of the course today and deployed a model to classify architectural styles. It's amazing how easy fast.ai makes things. Learnt so many new things.
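For anyone curious what that deployment step looks like, here is a minimal sketch along the lines of lesson 2, assuming the trained learner was exported with learn.export(); the file name and the Gradio interface details are illustrative:

import gradio as gr
from fastai.vision.all import load_learner, PILImage

learn = load_learner('style_classifier.pkl')   # illustrative export file name
labels = learn.dls.vocab                       # class labels saved with the learner

def classify(img):
    # Gradio passes the uploaded image as a numpy array; PILImage.create handles it
    pred, idx, probs = learn.predict(PILImage.create(img))
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

gr.Interface(fn=classify, inputs=gr.Image(), outputs=gr.Label()).launch()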