Article | 20 Dec 2023

Man, Monkey or Martial Artist - Integrating an Azure AI Custom Vision Model with Power Apps (part 2)

In part 1 of this series, we discussed some relevant background information on computer vision and how to build a computer vision model in Azure. In part 2, we'll show you step-by-step how to integrate your Custom Vision model into a mobile app you build from scratch using Microsoft's Power Apps low-code platform.
Anthony Allen | 30 min read

Azure Custom Vision Review

Azure Custom Vision is a cloud-based machine learning service offered by Microsoft Azure. It empowers both citizen and code-first developers to create custom image classifiers tailored to recognize specific objects, patterns, or attributes unique to their respective domains. With this Azure service, users can effortlessly construct, train, and deploy prediction models without the need for extensive expertise in machine learning or computer vision.

Once the model is trained, it generates a confidence score, indicating the likelihood that an image corresponds to specific classes. For instance, one could train a model to determine how confidently an image represents a man, woman, or monkey. Code-first developers can access these deployed Custom Vision models through APIs, SDKs, or a dedicated website, while Citizen Developers have the convenience of accessing their image classifier model via a mobile application created within Power Apps.

In part 1 of the series, we demonstrated how easy it is to build a Custom Vision classifier model that can determine if an image contains a man, a woman, or a monkey, and whether they are wearing a Business suit, a Kung Fu uniform, or Casual Wear (e.g., jeans & sneakers). We also discussed in detail several factors that affect the accuracy of a Custom Vision model, including overfitting, data balance, data quantity, and data variety.

Integrating the Model with Power Apps

Now that we have a trained Azure Custom Vision model that has been quick-tested and published, we need a means for users to easily access it. In keeping with our Low-Code approach, we are going to access the Custom Vision model by building a mobile app with Microsoft Power Apps. This is what we need the app to accomplish:

  • Take a new photo with the device’s built-in camera or use an image already in a gallery or folder 
  • Automatically submit the image to the classifier
  • Return all predictions but only display the top 5 on the screen
  • Autogenerate an image title based on the top predicted result (e.g., Monkey 99.9%  Taken on: 10/24/2023 10:02 AM)
  • Allow the user to identify the actual description of the image by selecting the character class, apparel class, and environment dimension
  • Save the full set of prediction results to SharePoint for later analysis

Here’s a preview of the completed app:

Set Up List in SharePoint 

The first thing you need to do is set up a data store for the prediction results.

You will need to capture the image itself, the auto generated title, the full set of prediction results, and some basic audit information. 

Ideally, we would save the data to a database with an image table joined to a results table. However, that would require more work on the backend as well as during the save process within the application.

A simple Low-Code approach would be to use a Microsoft SharePoint list where we can save the image and the results unformatted in a single row. The table below shows the column names and datatypes you need to set up your list. The first twenty-one columns will be populated by the app. The last four will be auto populated by SharePoint.  

Column names and datatypes you need to set up your list

For the Character, Apparel, and Environment columns, you will need to manually add the choices plus the default choice you want displayed in the app before the user makes a selection.  An example of that is shown below:

Manually add the choices plus the default choice you want displayed in the app before the user makes a selection

Spelling and letter case are important, so take note of the column names. If you change them, intentionally or not, it can cause an error later when adding the "save" functionality to the app.

Also, keep the SharePoint tab open in the background because you will need to copy and paste the URL when you are setting up the data sources for your app.

Start Building the Mobile App

Now that we’ve set up our destination, we can start building the app. We’re going to start by creating a blank canvas app. 

First, log into your account at https://make.powerapps.com/. If you do not have one, you can create one for free here: Microsoft Power Apps for Developers 

After you’ve logged in, click on the “Create” link on the left.  

Click on the "Create" link on the left

On the next screen click the “Create” button under the “Blank canvas app” choice.

Click the “Create” button under the “Blank canvas app”

Give your app a meaningful name; here we're using the name "CustomVisionClassifier". Then select the "Phone" format and click on the "Create" button at the bottom of the screen.

Canvas App from Blank

Connect to Custom Vision Model

Now that we have our blank app we can add the prediction model as a data source for the app.  Click on the Data cylinder / database icon on the left ( 1 ). 

Click on the Data cylinder / database icon on the left

Next, click on “Add Data” ( 2 ).   Under “Select a data source”, type “custom vision” as your search term in the search bar ( 3 ) and choose your Custom Vision model that you created in part 1 of this series ( 4 ).

Connect to Your SharePoint List

Now you need to add the SharePoint list you created earlier as a data source. Go back to the browser tab with your SharePoint list and copy the URL.  Switch back to your Power Apps tab and click on the “Add data” button again ( 1 ). This time, type in “SharePoint” in the search box ( 2 ).  Choose the SharePoint source option ( 3 ) and then click on “Connect to a SharePoint site”.

Add the SharePoint list you created earlier as a data source

Paste the URL you copied above into the prompt.

Paste the URL

If more than one SharePoint list shows up, choose the list ( 1 ) you intend to use for this app and then click on the “Connect" button at the bottom of the screen ( 2 ).

Choose the list you intend to use for this app

Now, your app is connected to both your custom vision model and the SharePoint destination. Next, we’ll work on the app itself.

Adding Controls to the Splash Screen

First, we’ll create the splash screen. Click on the Tree view icon ( 1 ) in the left hand side rail. Click on the “New screen” link to add a new screen ( 2 ) and choose the blank layout.  Once the new screen has been added to the canvas app, rename it to SplashScreen ( 3 ).

Design the splash screen

Under Tree view in the left hand side rail, select the SplashScreen and then click on the "Insert" button at the top of the screen. One by one, add a text label, a button, and an image control to the screen.

Select the SplashScreen and then click on the “Insert” button at the top of the screen

Under Tree view in the left hand side rail, rename the items and rearrange them as shown below:

Designing the Splash Screen

Under Tree view in the left hand side rail, select SplashScreen.  You can change the background color of the canvas app to match or blend in with your logo. We used a Background Fill of RGBA(0, 51, 102, 1) to match our classifier app logo. We’ll discuss adding a logo next. 

Change background color

To include a custom logo on the splash screen of your app, under Tree view in the left hand side rail, select the img_AppIcon and go to its properties. 

Upload your own custom logo here from your desktop. You can resize the image control on the canvas and then change the image properties to center, fill, fit, stretch, or tile depending on how you want it to appear.

Insert logo

Next up is the navigation button at the bottom of the canvas app.

There are a number of properties that you can change to suit the feel of your app. Under Tree view in the left hand side rail, select btn_start control. 

Now that it is selected, you can edit the control's properties via the drop down and function boxes on the left of the screen or via the properties panel on the right hand side of the screen. For example, you can change the text of the button to "Start Classifying" by ( 1 ) using the drop down on the left, ( 2 ) selecting the Text option, and ( 3 ) typing "Start Classifying" into the function box. Just as easily, you can change the text by using the properties panel on the right of the screen ( 4 ).

Add button text

Let’s look at the other properties of this button control.  You can change the position, font, colors, and more. For example, a border radius of 15 makes the button edges nice and rounded.

After you have the title, logo, and button on screen, you can tweak each control's properties as needed so everything fits, is well spaced, and conforms to the look and feel of your organization's style guidelines.

Create the Second Screen

Now you can create the main screen for the app, where the user will upload and submit images to the classifier. Under Tree view in the left hand side rail, click on the "New screen" icon to add a blank screen as you did previously. Click on the new screen to set focus and rename it ModelPredictions. Update the screen's Background Fill property to the same value you used for the splash screen. In our case, it was RGBA(0, 51, 102, 1).

As it is now, your app doesn’t do anything other than look pretty!

You still need to add navigation functionality to btn_Start on the SplashScreen screen. Under Tree view in the left hand side rail, click on btn_Start and modify the OnSelect property to Navigate(ModelPredictions) as shown below:

Add navigation function to the button

Now save the app, press "Preview the app" in the upper right hand corner of the command bar, and test the functionality of your btn_Start button.

When the app opens, click on the “Start Classifying” button. It should immediately take you to your blank ModelPredictions screen. After you have tested the navigation functionality of your app, close the app preview to get back into editing mode.

Designing the Model Predictions screen

At this point, you have become familiar with setting the different control properties.  

Therefore, we’ll only provide the base properties of each control we add to the canvas. We will leave it up to you to customize each control to match the look and feel of your app style.

Now you can begin to design the layout of the controls for the main screen. Under Tree view in the left hand side rail, click on the screen ModelPredictions to set focus. Click on the insert icon at the top and search for the "Add Picture" control. Insert that control onto the ModelPredictions screen. What it actually adds is a Group that contains two items: a button and an image control. Change the group name to AddPictureToClassify. Click on the down arrow next to the group name to expand the view.

Click on the button control under AddPictureToClassify group to set focus and rename the button to btn_AddPicture. Depending on your preference, you can leave the button’s position where it is across the middle of the image or you can experiment with the relative positioning so it is either at the top or bottom of the image. We preferred it to be on the bottom so you can see more of the image on the mobile device.

Now we are going to set up the image control that allows the user to take a picture or pull one from the mobile device's saved pictures gallery. Click on the image control under the AddPictureToClassify group to set focus, rename it to img_SubjectOfInterest, and change the ImagePosition property from "Fill" to "Center".

When you are done setting the AddPictureToClassify group’s button and image properties, your Tree view in the left hand side rail should look similar to this:

Now, insert two text labels, a text input box, and a button onto the ModelPredictions screen.

Add title bar

Click on one of the label controls under the ModelPredictions screen to set focus and rename the label to lbl_Title and change the Text property to “Title:”.

The title of the image that gets saved to SharePoint is at first autogenerated by the app but then it is also modifiable by the user.

Click on the Text Input control under the ModelPredictions screen to set focus and rename the text input control to txt_ImageTitleInSharePoint. Change the Default  property to "Image - " & Now() and change the Mode property to SingleLine.

Now we need a way for the user to submit the image, prediction results, and the title to SharePoint.  Click on the button control under the ModelPredictions screen to set focus and rename the control to btn_Save and change the Text to “Save”.  

We suggest resizing the button so it is a little bigger than the text it displays. You can also set the BorderThickness to 2 and the BorderRadius to 15 to give it a more streamlined appearance. We'll rearrange all these items a bit more once we get the gallery control for the results returned by the image classifier model situated on the canvas app.

As seen earlier, Power Apps has a nice organizational feature to help keep your controls together by adding them to a Group. Using the mouse and Ctrl key, select the controls lbl_Title, txt_ImageTitleInSharePoint, and btn_Save. Click on the ellipsis and click on "Group". Rename the group to Group_SubmittedTitle.

Designing the Returned Results Section

The results section needs a nice large title banner to separate it from the image upload section above it. Click on the remaining label control under the ModelPredictions screen to set focus.  Rename the label to  lbl_GalleryTitle and change the Text property to something meaningful like “Model's Classification Prediction”. Change the Fill and font properties to make it stand out.  After some additional rearranging, your canvas app should look similar to this:

Add grouping

Next step is to add the actual control that will display the prediction  results returned by the image classifier model. Under Tree view in the left hand side rail, click on the ModelPredictions screen to set focus. Click on the insert button at the top and search for “Vertical Gallery” and add the control to the canvas app.

A vertical gallery control acts like a group. Click on the down arrow of the gallery control under the ModelPredictions to expand the Tree view in the left hand side rail. Upon insertion, there were a couple of items automatically added by Power Apps that we do not need for this app.  Remove the Rectangle and NextArrow items from the group and rename the gallery to gal_PredictionResults.  You will be left with two labels and a separator.  Change the Layout property of the gallery to “Title and Subtitle”.  

You can leave the Data source property alone for now.
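Once the classifier has been called (covered later in this article), the collection imgcol will hold the returned results. A simple way to satisfy the "display only the top 5" requirement is to point the gallery's Items property at that collection — a sketch, assuming the predictions come back sorted by probability as the service returns them:

```powerfx
// Items property of gal_PredictionResults — show only the top five predictions
FirstN(imgcol, 5)
```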

Next we need to fix the layout of the results. ResultClass and Probability are the two values the classifier will return to us.  In the Tree view in the left hand side rail, click on the Separator item under the gal_PredictionResults and rename it  to fmt_ClassSeparator.  

Next, click on one of the label controls under the gal_PredictionResults and rename it to “lbl_ResultProbability”, then change the Font size property to 18 and the Text property to “0%”. You may want to use a different font and size depending on your preferences. Click on the other label in the gallery group and rename it to “lbl_ResultClass”.  

For this next step, you need to make sure you are out of the gallery control because location of this item matters. You are going to add a solid rectangle to the canvas and send it to the background so it can visually separate the results section from the other elements in the app. Select the ModelPredictions screen in the Tree view in the left hand side rail to set focus. Insert a rectangle shape and update the item’s name to fmt_Rectangle. Send the rectangle to the background like you did in previous steps. Afterwards, you can change the color of the rectangle to help make the gallery section stand out on the screen.  

Now create a group called Group_ReturnedResults for lbl_GalleryTitle, gal_PredictionResults, and fmt_Rectangle. After resizing and grouping, this is what the results section should look like:

Format results section

The User’s Classification Choices

One of the requirements of this app was that the user should be able to select the actual Character class, Apparel class, and the Environment dimension of the image so we could save it to SharePoint and compare the model’s accuracy to the user’s submitted description.  So now let’s add this functionality by inserting three dropdown controls, four text labels, and a background rectangle to the ModelPredictions screen. Remember, when inserting a new control, you can use the search bar to find the one you need. 

Rename each newly added control according to the table below:

Now add all these items to a group called Group_ActualDescription and position the controls on the screen below the results section as seen below:

Description section formatted

Remember, send the rectangle to the background just like you did for the results section above.  After the controls are in place, you can modify the font and color properties to match the theme of your app. 

Connecting the Dropdown Lists to SharePoint

Once the dropdown boxes are in place, you need to populate them with the user's available choices. You could just programmatically create a list for use in each dropdown. The downside is that if you later decided you needed a different set of choices, you would have to edit the app and republish it. Fortunately, the built-in connector between Power Apps and SharePoint can handle changes in available choices without any code changes in the app, because it pulls the choices directly from the SharePoint column in the list you created at the start of this tutorial.

The code for each choice will be in this format:

Choices({name of the SharePoint datasource}.{column name from which to pull the choices})

Using the table below, update the properties for each dropdown. The "Items" property represents the choices available to the user. The "Default" property is the value the user sees when the app is first loaded. These default values need to match exactly what you put in the choices when you set up the SharePoint list, or they will not show up in the app.
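For example, the Items property of ddl_Character would look like this (using the SharePoint datasource name from this tutorial and the Character choice column you created earlier):

```powerfx
// Items property of ddl_Character — pulls choices live from the SharePoint list
Choices('Custom Vision Classification Repository'.Character)
```

Because the choices come straight from SharePoint, adding or renaming a choice in the list later requires no changes to the app itself.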


Actual description section formatted

Test the Dropdown Controls functionality

To test the dropdown lists, click “Save” and then click on the “Preview the app” button in the upper right hand corner of the command bar. You should see the same choices here that you added to the SharePoint list.  Everything looks great so far and appears to be functioning as expected. 

However, before we get to submitting the image to the classifier and saving the returned results to SharePoint, there’s one more thing we can do to improve the overall look and feel of this screen in the app: add a default image.

Testing the dropdowns

Add a Default Image

You can add an interesting default image that loads up when you first run the mobile app and click on "Start Classifying".

If you do not have an image available already, just copy a training or test image for now and rename it “DefaultImage” to make it easier to find and use in the next step. In the Tree view in the left hand side rail, click on img_SubjectOfInterest under the ModelPredictions screen.

Now upload the image by clicking on the media button as shown below ( 1 ) and then click on the “Upload” button ( 2 ).

In the dropdown properties box ( 3 ) select image and modify the function by replacing “SampleImage” with “DefaultImage” ( 4 ).

Add a default image

Next we have to add a small bit of code in two places to make sure the default image loads when the app starts.  

The first place to add code will be in the app properties.  

Under Tree view in the left hand side rail, select the App icon, use the properties drop down to select "OnStart", and then add this to the function box: img_SubjectOfInterest.Image = DefaultImage. The second place will be right after the navigation code on btn_Start. Click on btn_Start ( 1 ) to set focus.

Use the properties dropdown to select "OnSelect" ( 2 ) and then add Reset(btn_AddPicture); to the end of the existing code ( 3 ). To test the default image, click "Save" and then click on the "Preview the app" button in the upper right hand corner of the command bar.

Set default image on load
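If the default image does not reappear for you after the picture control is reset, one common alternative pattern (a sketch, not the exact wiring used above) is to make the image control itself fall back to the default whenever no picture has been chosen:

```powerfx
// Image property of img_SubjectOfInterest — fall back to the uploaded
// DefaultImage whenever the Add Picture control has no media selected
If(IsBlank(btn_AddPicture.Media), DefaultImage, btn_AddPicture.Media)
```

With this approach, Reset(btn_AddPicture) clears the selected media, and the control automatically shows DefaultImage again.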

Calling the Custom Vision Model

Now it's time to put the Custom Vision model to work. 

From the user’s perspective, it is all done with the click of a button. 

Behind the scenes, there are just a few basic steps to code to send the image to the classifier and get the predicted results on screen. 

First, we need to create an empty collection called imgcol to store the data returned from the classifier. A Collection is a type of variable in Power Apps in which you can store data in a tabular format (i.e., rows and columns). There are four columns in this collection, but you do not need to worry about defining them up front; Power Apps will do that for you once you call the image classifier. We are only interested in the "probability" and "tagName" columns, as seen in the example results below:

Next, log back into https://www.customvision.ai/projects and open up your Custom Vision model. Click on the project settings so you can copy the Project Id. 

In the next step, you will need to paste the Project Id into a connection string in your canvas app.

Switch back to your canvas app. To get the prediction results from the Custom Vision model, you will call the function CustomVision.ClassifyImageV2 and pass it the following parameters: the Project Id, the iteration name of your model, and the image source (e.g., img_SubjectOfInterest.Image).

When the predictions come back from the model, we store them in the collection called imgcol. In order to save the results to the table in SharePoint in the desired format, we essentially have to pivot and transform the returned data stored in the collection. In the code below, we look up the probability for each roll-up class and interaction class in the collection based on the tag name, and then assign the probability of each tag to a separate variable (e.g., varMonkeyInCasualWear).

To get started with adding the necessary code, click on the Tree view ( 1 ) in the left hand side rail and under the ModelPredictions screen, select btn_AddPicture ( 2 ) to set focus. Now select “OnChange” from the properties dropdown ( 3 ) and get ready to type some code!

Actually, you can just copy & paste the code below into the function box ( 4 ).  


/* Create an empty collection for the returned results */
ClearCollect(imgcol, {});

/* Call the Custom Vision classifier */
ClearCollect(
    imgcol,
    CustomVision.ClassifyImageV2(
        "Project Id gets pasted here",
        "Iteration name of your published model",
        img_SubjectOfInterest.Image
    ).predictions
);

/* Look up the values for the roll-up classes */
Set(varWoman, LookUp(imgcol, tagName = "Woman").probability * 100);
Set(varMan, LookUp(imgcol, tagName = "Man").probability * 100);
Set(varMonkey, LookUp(imgcol, tagName = "Monkey").probability * 100);
Set(varSuit, LookUp(imgcol, tagName = "Business suit").probability * 100);
Set(varKFU, LookUp(imgcol, tagName = "Kung Fu uniform").probability * 100);
Set(varCasualWear, LookUp(imgcol, tagName = "Casual Wear").probability * 100);

/* Look up the values for the interaction classes */
Set(varWomanInSuit, LookUp(imgcol, tagName = "Woman in Business suit").probability * 100);
Set(varManInSuit, LookUp(imgcol, tagName = "Man in Business suit").probability * 100);
Set(varMonkeyInSuit, LookUp(imgcol, tagName = "Monkey in Business suit").probability * 100);
Set(varWomanInKFU, LookUp(imgcol, tagName = "Woman in Kung Fu uniform").probability * 100);
Set(varManInKFU, LookUp(imgcol, tagName = "Man in Kung Fu uniform").probability * 100);
Set(varMonkeyInKFU, LookUp(imgcol, tagName = "Monkey in Kung Fu uniform").probability * 100);
Set(varWomanInCasualWear, LookUp(imgcol, tagName = "Woman in Casual Wear").probability * 100);
Set(varManInCasualWear, LookUp(imgcol, tagName = "Man in Casual Wear").probability * 100);
Set(varMonkeyInCasualWear, LookUp(imgcol, tagName = "Monkey in Casual Wear").probability * 100);

/* Everything else, including the kitchen sink (see part 1 of the tutorial) */
Set(varOther, LookUp(imgcol, tagName = "Other").probability * 100);


Even though we added some code to btn_AddPicture, there is no new functionality you can readily test at this point since nothing has actually been sent to the screen yet.  Therefore, the next step is to get the top 5 results to display in the gallery control on the canvas app.

Connect the Gallery to a Data Source

The gallery displays the records returned by the Custom Vision model; to refer to the current record inside the gallery, you use the term "ThisItem". When we display the probability values on the screen, we need to transform and format them by multiplying the value by 100, rounding to the nearest tenth, and then adding a "%" sign. Since this is a gallery control, you only have to format the first record of the list and all subsequent records will follow suit once they are loaded and displayed. To add the probability, go to the Tree view in the left hand side rail ( 1 ), select lbl_ResultProbability ( 2 ) to set focus and then select the Text property ( 3 ).

You can use this expression in the formula bar ( 4 ): Text(ThisItem.probability * 100, "0.0", "en-US") & "%"

Result probability
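As a quick sanity check on the formatting expression, a raw probability of 0.994 renders as:

```powerfx
Text(0.994 * 100, "0.0", "en-US") & "%"   // "99.4%"
```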

In addition to the probability value, the classification model returns one of the tag names we used to train it (e.g., "Monkey in Business suit").

To get the tag name on screen in the app, go to the  Tree view ( 1 ) in the left hand side rail, click on lbl_ResultClass to set focus and then select the Text property ( 3 ).  Update the item’s Text property to this value: ThisItem.tagName ( 4 ).

Result Tagname

Formatting the Gallery Text

Now that the model is connected and we have a working result set, we can change the default image title to include the class with the highest probability and a timestamp for when we submitted it. First, you will need to reference the collection imgcol and pick the top record: the formula Last(FirstN(imgcol, 1)).tagName gets the value of the tag name, and Last(FirstN(imgcol, 1)).probability gets the value of the probability. You should apply the same formatting rule to the probability result as you did in the earlier section. Finally, you will append the date stamp to the title to ensure the name is unique.

To set the default text of the image title, go to Tree view in the left hand side rail, expand the Group_SubmittedTitle, click on txt_ImageTitleInSharePoint to set focus and then select the Default property.  

Update the property to this value:


Last(FirstN(imgcol, 1)).tagName & " " & Text(Last(FirstN(imgcol, 1)).probability * 100, "0.0", "en-US") & "% Taken on: " & Now()
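Since Last(FirstN(imgcol, 1)) simply picks the first record of the collection, the same title can be written a little more directly with First:

```powerfx
// Default property of txt_ImageTitleInSharePoint — equivalent, shorter form
First(imgcol).tagName & " " &
    Text(First(imgcol).probability * 100, "0.0", "en-US") &
    "% Taken on: " & Now()
```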


Preview the App

Let us compare the app's current functionality to the requirements stated at the beginning of this article. Hit the "Preview app" button at the top right of the screen to run the app and click on the "Start Classifying" button. Select an image from your PC image library or use one of the test images you generated previously (requirement #1). We submitted a picture of a monkey in a business suit partially buried in the sand to our classifier model. The image was automatically sent to the classifier (requirement #2).

The results look good!  

The top five results are displayed and formatted correctly (requirement #3). Our auto-generated title for the picture is properly formatted (requirement #4): “Monkey 99.4% Taken on: 11/13/2023 4:32 PM”.  At the bottom of the screen, you can select the character class ( 1 ), apparel class ( 2 ), and environment dimension ( 3 ) (requirement #5). Now we are down to the last requirement for this app: saving the full set of prediction results to SharePoint (requirement #6). To accomplish this, we will add the functionality to the “Save” button ( 4 ) located next to the image title.

What to do with the results

Saving the results to SharePoint

In our earlier blog post, we highlighted the image classifier's challenges in accurately identifying specific classes and apparel styles based on the limited set of examples we tested. To investigate potential patterns in misclassifications, we aim to conduct tests on a larger dataset, collecting the classifier's predictions for thorough analysis. To streamline this data collection process using a low-code approach, we will employ SharePoint to store the outcomes of each image submitted through the canvas app. Subsequently, we can analyze the classifier's performance using tools such as Power BI.  

The syntax for the save function is this: 

Patch( DataSource, BaseRecord, ChangeRecord1 [, ChangeRecord2, … ]) 

The DataSource will be the name of the SharePoint connection we added earlier.  You can find it on the left hand side rail under “Data”.  For the BaseRecord, we will be using the “Defaults” function because we are only adding new records to the list.  ChangeRecord1 consists of the column name and the value being put into that column.  In the diagram below, you can see how the different components of the Patch function map to the objects on the canvas app as well as to the columns in the SharePoint list. 

Sharepoint list mapping

Here’s the complete code you will need to add to “OnSelect” for the button btn_Save.


Patch(
    'Custom Vision Classification Repository',
    Defaults('Custom Vision Classification Repository'),
    {Title: txt_ImageTitleInSharePoint.Text},
    {Image: img_SubjectOfInterest.Image},
    {Character: ddl_Character.SelectedText},
    {Apparel: ddl_Apparel.SelectedText},
    {Environment: ddl_Environment.SelectedText},
    {'Business suit': varSuit},
    {'Kung Fu uniform': varKFU},
    {'Casual Wear': varCasualWear},
    {Woman: varWoman},
    {Man: varMan},
    {Monkey: varMonkey},
    {'Woman in Business suit': varWomanInSuit},
    {'Man in Business suit': varManInSuit},
    {'Monkey in Business suit': varMonkeyInSuit},
    {'Woman in Kung Fu uniform': varWomanInKFU},
    {'Man in Kung Fu uniform': varManInKFU},
    {'Monkey in Kung Fu uniform': varMonkeyInKFU},
    {'Man in Casual Wear': varManInCasualWear},
    {'Woman in Casual Wear': varWomanInCasualWear},
    {'Monkey in Casual Wear': varMonkeyInCasualWear},
    {Other: varOther}
);


Pre-deployment Testing

Now that everything is connected, once again click on the “Preview app” icon in the upper right of the command bar.

Once the app starts up in your browser, click on the "Start Classifying" button at the bottom of the app. When the ModelPredictions screen pops up, click on "Tap or click to add a picture" and upload an AI-generated test photo, or go to your PC's photo gallery and upload an existing photo of one of your pet monkeys in casual wear. After a few seconds, the model will return its predictions. Only the top 5 should be displayed on the screen.

Select the character class, apparel class, and environment dimension for each picture. Click on the “Save” button to send all the predictions to SharePoint. You can modify the title or use the title that was autogenerated for you. Here are a few examples we submitted to the classifier using our Midjourney AI generated test photos:

Since you want to be able to later verify and/or improve the accuracy of the classifier, it is important that you select the correct Character Class, Apparel class, and Environment dimension before saving the results to SharePoint. 

In the examples above, you can see that the classifier predicted “Monkey in a Business suit” for the heavy set man in a frost-covered beige business suit who was standing on a street corner.  With enough submissions, you should be able to determine which specific types of images you need to add to your training sets. We will discuss this in more detail during the next article. After you submit and save a few test images, go to the SharePoint list and see what was stored.

You should see values for every tag, not just the top 5 that were displayed in the gallery control in the app. This is what our saved results look like in SharePoint:

Sharepoint results

Deploying the App

Looks like we are ready to take the app out into the real world and make it mobile. In the upper right hand corner of the command bar, you will see the “Publish” button ( 1 ). Click on it and the publish window opens. 

Here you have the opportunity to rename your app as well as add a meaningful description. 

Once you are all set, click on the “Publish this version” button ( 2 ).

Once it is published, you can log into PowerApps with your Microsoft account on your mobile device. 

Open the app and start classifying your friends, family, neighbors, pets, and assorted household appliances.

Publish

Congratulations! Using both Azure Cognitive Services and Microsoft Power Apps, you have successfully built a Custom Vision canvas app that can help you classify the attendees at your next big technology conference. Well, maybe not exactly…

Next time…

Now that you have a grasp of how Microsoft's Power Platform simplifies the integration of AI-driven models into low-code custom mobile apps, it's time to begin gathering your test prediction data. This data will later be imported into Power BI for thorough analysis. In the upcoming Part 3, we will examine the data and draw comparisons between the predictions and the actual descriptions. Additionally, we'll delve into various strategies for refining your classification model to enhance overall prediction accuracy.

How can we help?

Understanding low-code development applications and their uses, and the variety of complex AI use cases, might be something you are struggling with.

Turning to technologies you do not entirely grasp is a challenge that is sometimes too hard to overcome alone. The best advice on how to do so effectively is, ironically, to get some good advice. As experienced software and data experts, The Virtual Forge is here to help you understand your business problems, with up-front engagement and guidance for you as the client: what are your problems and how can we solve them?

