15 Apr 2024

Man, Monkey or Martial Artist - How To Create a Custom AI Vision Image Classifier and Integrate it with Microsoft Power Platform (part 3)

Anthony Allen | 25 min read

Welcome to part 3 of our series on Azure Computer Vision and Power Platform. In part 1, we discussed some relevant background information on computer vision and how to build a computer vision model in Azure. In part 2, we showed you step-by-step how to integrate your Custom Vision model into a mobile app built from scratch using the Microsoft Power Apps low-code platform. In part 3, we are going to show you how to automate image processing using Microsoft SharePoint, Microsoft PowerToys, and Microsoft Power Automate.

Azure Custom Vision Review

Azure Custom Vision is a cloud-based machine learning service offered by Microsoft Azure. It empowers both citizen and code-first developers to create custom image classifiers tailored to recognize specific objects, patterns, or attributes unique to their respective domains. With this Azure service, users can effortlessly construct, train, and deploy prediction models without the need for extensive expertise in machine learning or computer vision. Once the model is trained, it generates a confidence score, indicating the likelihood that an image corresponds to specific classes. For instance, one could train a model to determine how confidently an image represents a man, woman, or monkey. Code-first developers can access these deployed Custom Vision models through APIs, SDKs, or a dedicated website, while Citizen Developers have the convenience of accessing their image classifier model via a mobile application created within Power Apps.

At the beginning of this series, we demonstrated how easy it is to build a Custom Vision classifier model that can determine whether an image contains a man, a woman, or a monkey, and whether they are wearing a Business suit, a Kung Fu uniform, or Casual Wear (e.g., jeans & sneakers). We also discussed in detail several items that affect the accuracy of a Custom Vision model, including overfitting, data balance, data quantity, and data variety.

Now that we have a trained model and a mobile app to help us access it, we need to start testing the model’s prediction accuracy. To accomplish this, we need to gather a sufficient quantity of images depicting the characters and their apparel in a variety of environments while they are performing numerous actions that differ slightly and/or substantially from the training images. So, the big question is: how will we get this wide variety of test images to submit to our Custom Vision model? We don’t ordinarily see many monkeys in business suits around here this time of year. Naturally, they are all at the beach sporting their best casual wear. As we did during the training phase, we chose to use an AI image service called Midjourney to generate all the images we needed.

Midjourney AI bot

Midjourney is an innovative generative artificial intelligence program and service, developed by the independent research lab Midjourney, Inc., headquartered in San Francisco, California. This cutting-edge technology harnesses the power of natural language descriptions, called "prompts," to translate text into images. This transformative process unfolds seamlessly through user interaction with the AI via a dedicated bot integrated into the popular chat platform, Discord. By issuing commands with varying descriptive complexity, users are able to use language to create intricate visual landscapes.

The bot returns four unique artistic interpretations based on the supplied text. The user can either upscale an image for export or have the bot generate another set of interpretations based on one of the previously generated images. Since its unveiling in open beta on July 12, 2022, individuals from many different fields have been tapping into the capabilities of Midjourney to manifest their creative visions into visual expressions, exploring new possibilities and functionality within the realm of digital creativity.

Designing the test images with Midjourney

As discussed in the first post of this series, it's important to consider Data Variety when designing and training an image classifier, as well as when evaluating its performance on different datasets and real-world scenarios across a wide range of image variations and conditions. In an attempt to ensure a balanced and diverse testing set, we varied the images along the twelve dimensions that make up the prompt template shown later in this article. Along the way, we made the following observations:

  1. We tried to get a range of emotional expressions on the characters but some were just a bit over the top.  Characters in Business suits were usually very somber unless they were jumping.  Characters jumping nearly always had their mouths and eyes wide open. “Angry” monkeys looked quite ferocious, especially when jumping out of bed. “Serious” monkeys produced some really incredible images, especially when they were wearing a Kung Fu uniform.
  2. “Muscled” was used primarily in the gym to reduce the number of images containing scrawny characters lifting impossibly heavy weights.
  3. We included different types of primates to see if the model could handle large variations in facial features and hair coloring in the monkey class.  We also tried generating images with gendered monkeys, gorillas, baboons, and chimpanzees which resulted in either creatures that looked straight out of “The Island of Dr. Moreau” or images with a human central character with the primate being awkwardly behind or next to them.
  4. “Long Hair” was included to see if the Custom Vision model was primarily using hair length to help classify women's images.
  5. The rock wall environment images were impressively highly detailed, down to the little screw holes keeping the rocks fastened to the walls.  However, it was incredibly difficult to get the character to look like they were actually climbing the wall the way you would expect if you just popped in to visit a friend rock climbing at the local gym. In many cases, it looked like the characters were posing suspended mid-air in front of a rock wall while facing the camera for a fashion magazine photoshoot.
  6. Apparently, we could not get the wording just right for the “lifting weights” action. To be fair, it was impressive that the weight size and quantity were evenly distributed on each side of the bar if and when the bar had any weights on it. However, more often than not, the bar would either be going through the character’s head or being held in a way that wasn’t expected (i.e., via the knuckles, shoulder blades, hips, or buttocks). In several instances, the weights were magically suspended perpendicular to the axis of the bar itself. In one image, the weight bar had what essentially looked like tiny tea bags hanging from it.
  7. The “jumping” action was inspired by the children’s story “Five Little Monkeys Jumping on the Bed”. I guess someone has read the AI bot that story, because no images were generated with a character actually jumping up and down on the bed. The characters appeared to be either jumping off the bed or in front of it.
  8. Very few images of characters “jumping rope” were successfully generated. Think “Indiana Jones takes his whip to a dimly-lit gym”. When the character was actually holding the rope in both hands, it was never connected in ways you could visually follow.
  9. Characters on an exercise bike were not always “riding” the bike.  Occasionally, they were just sitting on it doing something else like reading a book or looking at their phone.  In some cases, the character was literally “on” the bike, standing side-saddle between the seat and handlebars. Since those images still satisfied the base requirements of a character clearly doing something in an environment, they were still included in the test batch.
  10. “Running on a treadmill” was more difficult than expected for the AI bot.  Characters would be seen running through, running over, running across, or simply running away from the treadmill, but rarely actually running on the treadmill.
  11. “Cluttered” was primarily used in the “Bedroom” environment.  We definitely got what we asked for!  Oscar the Grouch would be impressed with how cluttered the floors were with random pieces of junk, garbage, or animal body parts.  Some objects on the floor  were completely nightmarish.
  12. The design of the “Gym” environment may have been taking hints from the clothing or ethnicity of the character. For example, we would get monkeys in business suits on an exercise bike in a room that looked more like an office setting than a gym. Sometimes there was oddly culture-specific religious iconography, such as paintings on the walls or statues. In the case of “British” characters, there were suits of armor. Some “Asian“ characters had pottery or shelves of fresh vegetables that looked out of context in a traditional gymnasium. To remedy this, we added the phrase “There are various types of exercise equipment along the walls of the gym” to the Midjourney prompt. After that point, we started to get rooms filled with weights, pulleys, treadmills, bikes, and other hi-tech exercise equipment whose function was not readily apparent but still looked appropriate nonetheless.
  13. As we mentioned in the training phase, “Location” and “Secondary object in the scene” were intrinsically linked. While it is theoretically possible to find some of these combinations in the real world, we did not want to generate a bedroom scene containing a lamp post, a beach scene containing a refrigerator, or a kitchen scene containing a bed with a nightstand. Therefore, not every possible combination of the twelve dimensions was generated.
  14. “Sitting on the edge of a bed” often produced scenes where the character was reclining at an odd angle on the literal horizontal and vertical edge joint of the mattress. 
  15. Generally speaking, the details in the background of the “Kitchen” environment were easily recognizable (e.g., sinks, microwave ovens, refrigerators, coffee machines, etc.), especially if you didn’t stare at them too long trying to figure out how they actually worked. Some noticeable exceptions included a faucet that looped back onto itself, a dishwasher (or oven) that had the window port of a front-loading clothes washing machine, and what was supposed to be a coffee maker but looked more like a giant 5 ft. tall bowl mixer in the center of the floor.
  16. The “Lamp Post” in the “Street” environment  tended to be about the same height as the character, presumably so at least some of  the light bulb would still be in-frame.
  17. We were thinking an “Umbrella” in the “Beach” environment would mean a large beach umbrella, but often it was some type of hand-held parasol. What made these particularly notable was that the umbrella handle was not always connected in a straight line to the underside of the open umbrella. Sometimes there was a mysterious third hand right off the character’s shoulder holding the umbrella.
  18. To further test the limits of the model, we occasionally included a “ball” in the “Beach” environment. We were interested to see if accuracy would significantly diminish when a brightly-colored object was present partially occluding the main character in the image. We used different phrases like “playing with a beach ball”, “catching a beach ball”, and “throwing a beach ball” with varying degrees of success. We also had to add “large brightly colored” as a modifier. Otherwise, instead of getting the highly recognizable standard striped beach ball one would expect to see on a beach, we would get either many grapefruit-sized balls that were magically floating in the air, or ostensibly unpatterned bowling balls being palmed unnaturally like a basketball. These characters clearly have impressive grip strength.

The instructions provided to the Midjourney bot were generally constructed in the following format:

/imagine A highly detailed full body view of a {body modifier} {ethnicity} {character} with {hair color} hair, wearing a {apparel color} / {apparel pattern} / {apparel modifier} {apparel}. The {character} is {action in scene}

OPTION #1: [on / in / up] a {secondary object in the scene} [at / in / on] {environment modifier} {environment}.

OPTION #2: [next to / on / in / in front of / with] a {secondary object in the scene}.

Photo realistic quality.

We used a Google Sheet to carefully mix and match the different dimensions and combine them into a single statement that we could copy and paste into the Discord server prompt. Some specific examples of these statements appear below:

  • A highly detailed full body view of a muscled Mexican Man with brown hair, wearing a heather gray t-shirt, jeans and sneakers. The Man is standing at a street Corner in the snow. Photo realistic quality.
  • A highly detailed full body view of a full figured French Woman with brown hair, wearing a traditional pastel Kung Fu uniform. The Woman is standing next to an umbrella on the beach. Photo realistic quality.
  • A highly detailed full body view of a petite Gorilla with brown hair, wearing a light colored business suit. The Gorilla is running on a treadmill at the gym. There are various types of exercise equipment along the walls of the gym. Photo realistic quality.

Over the course of several months, we used the AI bot to generate around 15,000 images. Initially, we would submit 21 different statements for each of the 15 Apparel by Environment combinations. There was a bit of trial and error, and some environment modifier, apparel modifier, and action combinations were abandoned. Each submission resulted in 4 variations to choose from.

Some images were immediately saved once upscaled to the proper size.

Others needed to be “varied by region”, where you highlight areas of the image and submit them to the AI bot to be re-imagined. The re-imagined images already had about 50-75% of what we were going for, but they needed to be edited for a variety of reasons: to remove extra limbs, to bring the apparel more in line with expectations, to tweak facial expressions or hair color, to reposition limbs in a more natural way, to remove weird and unnecessary items from the background, or to make the Kung Fu uniforms slightly less revealing for women.

About 3,600 images were finally exported to a local drive and saved in one of 45 folders, one for each Character x Apparel x Environment combination. Afterwards, we recruited the help of outside observers to review the images with the intention of removing any additional images from the testing set where the characters had more than two hands, legs, or feet, where questionable items were present in the environment, where the character was not doing something anatomically possible, or in one case where there were inexplicably spilled intestines under the character sitting on the floor. Examples of these rejected images are provided later in the article.

Here’s the final breakdown of the counts of the acceptable images we kept after the second wave of reviews:

Examples from each Character x Apparel x Environment combination:

  • Character in Business suit at the Beach
  • Character in Business suit in a Bedroom
  • Character in Business suit at the Gymnasium
  • Character in Business suit in a Kitchen
  • Character in Business suit on a Street Corner
  • Character in Casual Wear at the Beach
  • Character in Casual Wear in a Bedroom
  • Character in Casual Wear at the Gymnasium
  • Character in Casual Wear in a Kitchen
  • Character in Casual Wear on a Street Corner
  • Character in Kung Fu uniform at the Beach
  • Character in Kung Fu uniform in a Bedroom
  • Character in Kung Fu uniform at the Gymnasium
  • Character in Kung Fu uniform in a Kitchen
  • Character in Kung Fu uniform on a Street Corner

Using Microsoft PowerToys to create standard file naming conventions

In part 4 of this series, we will be using Power BI to import the data we collect now and compare the actual vs predicted contents of the image.  

If you remember, in our mobile app these contents were identified with three dropdown boxes that the user manually selected prior to saving the classified image to SharePoint. Since we intend to automate the testing process, we need a way to simulate those manual user selections. A standardized filename is one way to handle this. However, in Discord the exported file names are auto-generated and limited to about 100 characters.

The auto-generated filename is in the format {username}_{LEFT(submitted instructions, 48)}_{GUID}.png, similar to:

“UserName_A_full-body_view_portrait_of_a_tall_British_man__97354cfc-f9da-4b78-a1bb-961dee8ca751.png”.

Clearly, due to the 100 character limitation, not enough information is present in the exported filename to determine what was being submitted to the Custom Vision classifier.

For simplicity’s sake, we only wanted to have a single folder being monitored for new images in the cloud as opposed to the 45 folders we created locally while generating the images. Therefore, all the files had to be renamed in a way that the actual contents (i.e., Character, Apparel, Environment) could be automatically extracted from the name while still keeping the file name unique.  To accomplish this, we used Microsoft PowerToys which is an open-source set of utilities for Windows operating systems.

PowerToys provides additional features and functionalities to enhance productivity, customization, and overall user experience. It is designed to be a collection of handy tools for power users and developers. The specific tool we were interested in was PowerRename, which provides advanced file renaming capabilities, allowing users to bulk rename files based on specific criteria or patterns. Using regular expressions, we were able to select a folder and rename the files by truncating the string left of the GUID and adding standardized descriptors based on the folder in which they were saved. The final renamed file would look like this:

“Man_Business suit_Beach_97354cfc-f9da-4b78-a1bb-961dee8ca751.png”.  
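To make that concrete, a PowerRename rule along these lines (with the “Use regular expressions” option enabled) would handle the Man / Business suit / Beach folder. Treat this as a sketch rather than the exact pattern from our run; we used a folder-specific variant of the replacement text for each of the 45 folders:

    Search for:   ^.*_([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\.png)$
    Replace with: Man_Business suit_Beach_$1

The capture group keeps the GUID and extension, so each renamed file stays unique while the variable-length Discord prefix is replaced with the standardized descriptors.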

Now we have the Character, Apparel, Environment, and a unique identifier in the filename that we can use in the upcoming Power BI Analysis.   Why we chose this particular file name structure will become more apparent once we start developing tasks in Power Automate.

How do we efficiently test the Custom Vision model?

The Power Apps application we developed in the last article is great for processing images on the go. However, we have over 3,100 images to test. Therefore, using our mobile app to upload, submit for classification, manually identify the Character, Apparel, and Environment, and finally save each image individually to SharePoint would be a monumental task. We needed an easy and inexpensive way to upload and store the files in the cloud, while having a process that monitors the storage folder so that anytime we upload a new file, it is automatically sent to the Custom Vision model for classification. Using a combination of SharePoint for storage and Power Automate for processing is our preferred solution for this example. Let’s walk through how we set up and used SharePoint and Power Automate.

Using SharePoint for image storage and results

SharePoint is often considered an effective platform for managing substantial image databases due to its various key features such as Scalability, Metadata, Tagging, and Workflow Automation. The platform is adept at handling large volumes of data, making it well-suited for the storage of extensive image collections, with the ability to scale to meet the evolving storage demands of organizations. Users benefit from the capability to attach metadata and tags to images within SharePoint, facilitating enhanced organization, searchability, and retrieval of specific images. This contributes to the establishment of a more organized and easily navigable image library.  Moreover, SharePoint supports the automation of diverse business processes, including workflows related to images. This automation streamlines tasks such as approval processes, content publishing, and image categorization, fostering efficiency and reducing manual workload. These features collectively make SharePoint a robust solution for organizations seeking a comprehensive and scalable platform for managing large quantities of images. 

Set up the image folder in SharePoint

We navigated to our Documents folder on SharePoint and created a new folder called “AI_Blog_Post_03”.  No other customization is needed.  Just note, do not upload anything to this folder until the Power Automate task has been developed and turned on because any files you put there now will just be ignored.

Set up results list in SharePoint

The next thing you need to do is set up a data store for the prediction results. For this store, you will need a field for the image, fields for the prediction results of each tag, fields that identify the actual content of the image (Character, Apparel, Environment), the Custom Vision model iteration used for the predictions, as well as some basic audit information. A simple low-code approach is to use a Microsoft SharePoint list where we can simply save the image and the results in a single row. The table below shows the column names and datatypes you need to set up your list. Some of the audit columns will be auto-generated by SharePoint when you create this new list, called “Custom Vision Classification Repository“.
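In brief, the list combines columns along these lines (the tag-probability column names here are examples; yours must match the tag names your flow variables will feed):

  • Title - Single line of text (we will store the image filename here)
  • Image - Image (populated in a later step via an HTTP request)
  • Iteration - Number
  • Character, Apparel, Environment - Choice
  • One Number column per model tag (16 in our case, e.g., Woman, BusinessSuit, Monkey_KungFuUniform)
  • Created / Created By / Modified / Modified By - audit columns auto-generated by SharePoint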

Spelling and letter case are important so take note of the column names. If you change them either intentionally or unintentionally, it can cause an error later when saving the results from within Power Automate.  Also, keep the SharePoint tab open in the background because you will need to copy and paste the URL when you are setting up the SharePoint destination step for your flow.

Microsoft Power Automate

Power Automate, formerly known as Microsoft Flow, is a part of the Microsoft Power Platform and is designed to be a no-code/low-code platform, allowing users with varying technical backgrounds to create automated workflows without extensive programming knowledge. This is beneficial for users who may not have deep coding skills but still need to implement a solution involving AI.  Power Automate seamlessly integrates with Azure services, including Custom Vision. This makes it easy to incorporate Custom Vision AI capabilities into your automated workflows.  In addition, Power Automate provides a visual workflow designer, making it easy to design and understand the flow of actions. This is particularly advantageous when dealing with complex workflows involving multiple steps, data transformations, and interactions with different services. Power Automate is a cloud-based service, offering the advantage of accessibility from anywhere with an internet connection. You can create and manage your workflows through a web browser, and the workflows themselves run in the cloud, providing scalability and reliability.

Integrating the Custom Vision model with Power Automate

Create a new Flow

Log in to the Power Automate portal at make.powerautomate.com

Click on the Create button (1) and select Automated cloud flow (2).

Choose a trigger for the flow

Next you need to select a trigger that will initiate the flow. Remember, we want this flow to send an image to the classifier model whenever we upload a new image file to the SharePoint folder.  In the “Choose your flow’s trigger” search bar (1), type  “When a file is created in SharePoint”.  Put a tick mark next to the suggestion that says “When a file is created (properties only)” (2).  Lastly, type in the name of your flow in the Flow name text box (3). 

Connect to SharePoint

If you do not have an existing connection to SharePoint, you will be asked to log in to create the connection. Enter the same credentials you used to log into SharePoint when you created the image folder. When the first step appears in your flow, click on it to expand it and access the properties. In the dropdown boxes below, select your Site Address, Library Name, and Folder.

Add Control Operation: Get the file content

In the above step, we configured a trigger to monitor when a new file is added to a specific folder in SharePoint. Next we need to communicate to the rest of the flow which folder and file was actually added.  Under the first step, there will be a small plus sign (1).  Click on that to insert a new step and add an action. A dialog box will pop up asking you to choose an operation. In the search bar, type in “get the file content using path”.  There could be many similar items for different connectors returned in the search.  Make sure you select the action with the turquoise SharePoint icon.  Once the step is added, click on the area containing the name of the step so you can expand the box and configure the properties.   From the dropdown box, select the Site Address (2). Next you need to specify the File Path.  We want to use the path returned from the trigger step immediately above this one so click on the “Add dynamic content” link (3).   In the dynamic content search bar (4), type in “Full” and then select the search result “Full Path” (5). Those two properties are all you need to configure for this step.

Add Control Operation: Classify an image

Under the step you just completed, click on the small plus sign to insert a new step and add an action. Again, a dialog box will pop up asking you to choose an operation. This time, type in the search bar “Classify an image” and select the “Classify an image (V2)” operation with the blue Custom Vision icon.  If the step is not already expanded, click on the step name so you can expand the box and configure the properties.  To complete configuration for this step, you will need to log back into Azure, navigate to your Custom Vision model and obtain the ProjectID and published Iteration name you want to use in this flow.  Put them into the appropriate fields as shown below.

Configure: Classify an image

For the Image Content field, click on the blue link to add dynamic content (1).  Type in “File Content” in the search field (2) and select “File Content” under the “Get file content using path” header (3). This is a good place to stop, save your work, and test the steps you have created so far. 

Testing your flow

As you develop your flow, the Flow Checker is a valuable tool for addressing any issues that may arise. Integrated into Microsoft Power Automate, the Flow Checker assists users in identifying and resolving potential problems or errors within their workflows. Its primary objective is to enhance the reliability and performance of flows by detecting common issues and offering recommendations for resolution. To access Flow Checker at any time while working on your flow, simply look for the stethoscope icon positioned next to the "Save" icon in the upper right corner of the screen.

Test your connection to the model

In order to extract the prediction results in the next step, we need to get a sample of the classifier output. To do this, we need to turn the flow “on” and upload a test image. Click on the “Save” icon and then click on “My flows” on the left. After the page loads, your flow should appear at the top of the list. Notice that it looks disabled. Hover over the name with the mouse and click on the ellipses to get more options. Select “Turn On”. After a few seconds, it should no longer look disabled. At this point, it is safe to send up a test image file to SharePoint. It can be any image at this point. We just want to make sure the folder is being monitored for new activity and sending images to the classifier. Afterwards, we want to get the output from the classifier to set up the rest of our flow. After you upload the image, wait about a minute and then go back to “My Flows”. Click on the name of your flow to get to the details page. Hopefully, you will see a “Succeeded” message in green at the bottom.

Get sample output

In the 28-day run history section, click on the start date of this current run.  You can click through each step and see what the inputs and outputs were for each.  In the “Classify an Image (V2)” step, click on the “show raw outputs”.  Select all the JSON text and copy.  

TIP: You should paste the output in a new text file on your desktop since you will need it in the next step we add to the flow.
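For orientation, the raw output of the “Classify an Image (V2)” step is shaped roughly like this (the IDs, iteration name, tags, and numbers below are illustrative, not from our actual run):

    {
      "id": "00000000-0000-0000-0000-000000000000",
      "project": "00000000-0000-0000-0000-000000000000",
      "iteration": "Iteration17",
      "created": "2024-04-15T12:00:00.000Z",
      "predictions": [
        { "probability": 0.9731, "tagId": "...", "tagName": "Woman" },
        { "probability": 0.0112, "tagId": "...", "tagName": "Business suit" }
      ]
    }

The "predictions" array, with one tagName/probability pair per tag, is what the rest of the flow will loop over.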

Parse the JSON Payload

Now go back into edit mode of the flow. Under the “Classify an Image (V2)” step, click on the small plus sign to insert a new step and add an action. A dialog box will pop up asking you to choose an operation. In the search bar, type in “parse JSON”.  There may be other parsing operations returned in the search.  Make sure you select the action under “Data Operation” with the purple parse JSON icon.

We can easily rename the task by clicking on the ellipses (1), clicking on rename (2), and typing in “Parse JSON returned by the Custom Classifier”. For the Content field (3), we want to get the JSON body returned by the classifier in the previous step. To do this, click on the blue link (which is not visible in this image) to add dynamic content as you have in other steps. Type “Body” in the search field and select “body” under the “Classify an Image (V2)” section header. While you could manually enter the JSON schema, Power Automate can generate it for you once you provide a sample. Simply click on “Generate from sample” (4) and the “Insert a sample JSON Payload” window pops up. Go back to the text file on your desktop, copy the JSON output, and paste it into the text area (5). Click on Done (6). The schema is automatically formatted for you so you can move on to the next step.
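If you feed it a sample like the payload shown earlier, the generated schema will look roughly like this (trimmed for brevity; yours may differ slightly depending on your sample):

    {
      "type": "object",
      "properties": {
        "id": { "type": "string" },
        "project": { "type": "string" },
        "iteration": { "type": "string" },
        "created": { "type": "string" },
        "predictions": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "probability": { "type": "number" },
              "tagId": { "type": "string" },
              "tagName": { "type": "string" }
            },
            "required": [ "probability", "tagId", "tagName" ]
          }
        }
      }
    }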

Add Control Operation: Initialize variables

Under the Parse JSON step, there will be a small plus sign (1).  Click on that to insert a new step and add an action. The dialog box will pop up asking you to choose an operation. This time, type in the search bar (2)  “Initialize variable” and select the action with the purple Variable icon (3). Repeat these steps 16 more times so you end up with a total of 17 variable steps.  These variable names correspond to the columns you created earlier for the “Custom Vision Classification Repository“ in SharePoint.

Configure: Initialize variables

Once all the variable steps are added, click on the light purple area containing the step name so you can expand the box and configure the properties. For readability and debugging purposes, you should click on the ellipses and rename each one using the variable name it is initializing in that particular step. 

In the table below, you will find each variable name, type and default value that needs to be initialized.

  1. We are using iteration #17 for this flow. Your iteration number will most likely be different.
  2. We are using variable names that closely resemble the tag names used in the Custom Vision model. We will demonstrate later in this article how that naming convention can be helpful.
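In outline, the set we initialized amounts to the following (Float is our assumption for the probability type; any numeric type that matches your list columns will do):

  • varIterationNumber - Integer - initial value 17 (see note 1 above)
  • 16 probability variables, one per tag - Float - initial value 0 - each named after the tag whose probability it will hold (e.g., varWoman, varBusinessSuit, varMonkey_KungFuUniform)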

Add Control Operation: Apply to each 

As we learned in the previous article, the data returned from the classifier will essentially be a list of tag names and their associated probabilities.  To process each element of the list individually, we need to use the “Apply to each” control.  Every operation in the scope of this step will be applied to each tag name in the returned list. Under the last “Initialize Variable” step you created, there will be a small plus sign (1).  Click on that to insert a new step and add an action. The dialog box will pop up asking you to choose an operation. This time, type in the search bar (2)  “control” and select the control operation with the gray “Apply to each” icon (3).  

Configure: Apply to each 

Click on the ellipses to rename the new step to “Apply to each Prediction” (1). Click on the name to expand the step to see its configurable properties. Click on “Add dynamic content” (2) and type “predictions” in the search box (3).  Select “predictions” with the purple icon (4) under the “Parse JSON returned by Custom Classifier” heading.  Now that we have the predictions, we need to add the action(s) that we want applied to each element of the list.  Click on the “Add an action” link (5). The dialog box will pop up asking you to choose an operation. Type in the search bar “switch” and select the control operation with the dark gray “Switch” icon.

Using the Switch operation

“Switch” acts like the typical conditional CASE statement found in many programming languages. As we iterate through the list, we want to look at the TagName member and, based on its value (e.g., Woman, Business suit, Monkey in a Kung Fu uniform, Other), assign its corresponding probability to one of the variables we initialized (e.g., varWoman, varBusinessSuit, varMonkey_KungFuUniform). Click on the “Switch” step name to expand and see its configurable properties. Click on “Add dynamic content” (1) and type “tagname” in the search box (2). Select “tagname” with the purple icon (3) under the “Parse JSON returned by Custom Classifier” heading.

Add Control Operation: Set variable 

From the “Switch” operation we now have a specific TagName which can be 1 of 16 different values. Each one of those values represents a case, or pathway, to setting a specific variable. Therefore, we need to add 16 different “Case” operations. To keep your flow easy to follow and easy to debug, you should rename each “Case” as you add it via the ellipses (1). In the “Equals” text box (2), use the same value you have for the case name (e.g., “Woman”). Once those are set, click on “Add an action”. Another dialog box will pop up asking you to choose an operation. Type in the search bar (3) “set variable” and select the control operation with the purple “Set variable” icon (4). Using the ellipses, rename this operation to “Set Variable - ” plus the value of the TagName you are working on.

Configure: Set Variable

Now we need to get the probability associated with the current TagName in the list.  Click on “Add dynamic content” (1) and type “probability” in the search box (2).  Select “probability” with the purple icon (3) under the “Parse JSON returned by Custom Classifier” heading.  As a reminder, there are 16 different cases.  You can easily lose track of what you have already added because they will quickly expand past what you can visibly see on the screen at one time.  It will be helpful to use step and variable names that align with the TagName as you create and configure each one (4). 
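Putting the pieces together, the logic inside the loop amounts to the following (shown schematically; 'Apply_to_each_Prediction' is the internal name Power Automate derives from our step rename, and the expressions are what the dynamic content tokens resolve to under the hood):

    Switch on: @items('Apply_to_each_Prediction')?['tagName']

      Case "Woman":          Set variable varWoman        = @items('Apply_to_each_Prediction')?['probability']
      Case "Business suit":  Set variable varBusinessSuit = @items('Apply_to_each_Prediction')?['probability']
      ... one Case per remaining tag, 16 in total ...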

TIP: If you miss adding any cases, it will not be immediately apparent when the results are saved to SharePoint in the next step. However, once you check the columns in your SharePoint list, if one or more is always 0, there’s a good chance you missed a case in the switch operation under this “Apply to each” step.

Send the results to SharePoint

At this point, you know the name and path of the image file, you have assigned the model Iteration to a variable, the JSON payload from the classifier has been parsed, and the probabilities for each TagName have been stored in their corresponding variables. You are almost ready to send the classification results for this image to the “Custom Vision Classification Repository” in SharePoint. To get started, we need to add a new step. Under the “Apply to each Prediction” step, there will be a small plus sign. Click on that to insert a new step and add an action. The dialog box will pop up asking you to choose an operation. Type in the search bar “Create Item” and select the “Create Item” action with the turquoise SharePoint icon. Once added, click on the ellipses to rename it to “Create Item - Add to SharePoint List”. At first, you will only see two configurable items: Site Address and List Name. Since you have already connected to SharePoint in a previous step, you can simply use the dropdown to select your Site Address (1). Once that field is populated, you can use the dropdown for the List Name (2) to select “Custom Vision Classification Repository”. Now that the location has been selected, the other fields from the SharePoint list will be available for configuration. Fields with an asterisk next to them are required fields.

For the Title (3), we are going to use the image file name collected by the flow’s trigger step.  Click on “Add dynamic content” and type “file name” in the search box.  Select “File name with extension” under the “When a file is created (properties only)” header.   For our purposes now, we know the file name will be unique so it is a good choice for the record’s title in SharePoint. 

For the Iteration field (4),  click on “Add dynamic content” and type “iteration” in the search box.  Select the dark purple “varIterationNumber” icon under the Variables header.  For the remaining 16 TagName fields listed (not all are present on the screenshot below), just click on “Add dynamic content” for each one and assign the appropriate variable to the field (5).  

TIP: From a stylistic standpoint, Power Automate is quite flexible.  There are many different ways to get the iteration name used by the classifier.  Since we originally set up the Iteration column in SharePoint to be a number, we have to use the integer value from varIterationNumber we set in the variable section of the flow.  However, if you had set the column up as text in SharePoint, you could use the value from the iteration returned by the “Parse JSON” step which would be the string  “Iteration17” instead of the number 17.  You could also have initialized the iteration variable earlier in the flow by using the text string “Iteration17” and then use that variable as dynamic content to configure the step “Classify an Image (V2)”.  When you click on “Add dynamic content” for the field Iteration in the current step, you would now have the option of either using the variable for the Iteration field or you could use iteration with blue icon under the “Classify an image (V2)” header.

Extracting the actual file contents from the file name

The last three fields we need to populate in this step are for the choice columns Character, Apparel, and Environment. Earlier in this article, we mentioned that we used Microsoft PowerToys to help bulk rename all the image files to follow a specific format.  Now we will explain why.  With simple string manipulation, Power Automate can parse out the appropriate values by using the “_” character as a delimiter. The formulas are shown in the table below:

For the image with the file name “Man_Business suit_Beach_97354cfc-f9da-4b78-a1bb-961dee8ca751.png”, the formula for Character will take all the text up to the first occurrence of the underscore to yield the value of “Man”. The formula for Apparel will take all the text after the first occurrence of the underscore up to the next underscore to yield the value of “Business suit”. The formula for Environment will take all the text after the 2nd occurrence of the underscore up to the next underscore to yield the value of “Beach”.  Since we knew we were using the underscore as a delimiter, we kept the space in “Business suit” so “suit” would not end up being parsed into the Environment column.
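Concretely, the three expressions can be as simple as splitting the filename on the underscore. This is a sketch of one way to write them: we are assuming {FilenameWithExtension} as the trigger token behind “File name with extension”, and array indexes in Power Automate expressions are zero-based:

    Character:    first(split(triggerOutputs()?['body/{FilenameWithExtension}'], '_'))
    Apparel:      split(triggerOutputs()?['body/{FilenameWithExtension}'], '_')[1]
    Environment:  split(triggerOutputs()?['body/{FilenameWithExtension}'], '_')[2]

For “Man_Business suit_Beach_97354cfc-f9da-4b78-a1bb-961dee8ca751.png”, these yield “Man”, “Business suit”, and “Beach” respectively, with the GUID segment left over as the fourth element.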

TIP: If you have not done so recently, please save your work now. Working with expressions can be tricky. Sometimes, if you inadvertently miss something or type something extra, the step can become corrupt and uneditable. If you have a saved copy, you can just go back to it and work on the expression again.

To add these formulas, click on “Add dynamic content” (1) for each field. However, instead of using the search box, click on the “Expression” tab (2), paste the appropriate formula from the table above into the “Fx” box (3), and then hit the “Update” button (4). Save your work again once the expressions are done.

Save the image to SharePoint

As you may have noticed, there was one thing that we could not save to SharePoint in the step above: the actual image. The image field did not even show up in the list of fields to which we could assign values. Currently, the only way to do that via Power Automate is to update the existing record with the image. Therefore, we need to add one last step to accomplish this. Under the “Create Item - Add to SharePoint List” step, there will be a small plus sign. Click on that to insert a new step and add an action. The dialog box will pop up asking you to choose an operation. This time, type in the search bar “send http” and select the SharePoint operation with the turquoise “Send an HTTP request to SharePoint” icon. Since you have already connected to SharePoint multiple times in this flow, you can simply use the dropdown to select your Site Address (1). For the Method field (2), select “POST” as your option. The next and most important thing we need is the ID of the record we just saved to SharePoint in the step immediately preceding this one, since that ID points to the record we need to update with the image file. Once we have this ID, we will dynamically construct the Uniform Resource Identifier (URI) we need to update the column.

To simplify populating the Uri field (3), you will just need to paste the code from the table below, which was generated using a combination of dynamic content and expressions. However, if you renamed the “Create Item” step differently from the example we used above, you will have to use “Add dynamic content”, search for “ID”, and replace the code from the table in red @{outputs('Create_item_-_Add_to_Sharepoint_List')?['body/ID']} with the turquoise ID icon from under the appropriately named header.

For the Headers configuration (4), just type in the value pairs from the headers section shown in the table below.

For the Body configuration (5), you can use and paste the code from the table below.  However, remember to replace the purple text with your actual site details.  If you used a different name for the flow’s initial trigger step, you will need to replace the code from the table in red ['body/{FullPath}']  by using “Add dynamic content” and then finding the turquoise full path icon under the appropriately named header.
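As a reference point, since the tables are rendered as screenshots, here is a sketch of one commonly used shape for this request: the ValidateUpdateListItem endpoint, which can set a SharePoint image column. We are assuming the image column is named “Image” and that your site path prefixes the file’s full path; adapt every detail to your environment:

    Method:  POST

    Uri:     _api/web/lists/getByTitle('Custom Vision Classification Repository')/items(@{outputs('Create_item_-_Add_to_Sharepoint_List')?['body/ID']})/ValidateUpdateListItem

    Headers: Accept: application/json;odata=verbose
             Content-Type: application/json;odata=verbose

    Body:
    {
      "formValues": [
        {
          "FieldName": "Image",
          "FieldValue": "{\"type\":\"thumbnail\",\"serverRelativeUrl\":\"/sites/YourSiteName/@{triggerOutputs()?['body/{FullPath}']}\"}"
        }
      ]
    }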

OK, that was the last step to your Power Automate flow! Make sure you use the Flow Checker one last time to resolve any issues.  Errors will most likely be from missing variables, or differences in the names used.  Once the Flow Checker returns with no errors, click on the “Save” icon at the top. 

Testing and Monitoring your Power Automate Flow

Since we already turned on the flow earlier, you know it is monitoring the correct folder. At this time, you should delete the test image file you sent up when we needed to get a sample JSON payload. Now you can start sending your test images to SharePoint. Depending on the pricing plan you have with Azure, the number of images you process per hour, the size of the images, etc., you may get an alert from Power Automate saying that your operation has been throttled because it is “hitting an action limit designed to protect the connector service being called”. For more information, see our resource links at the end of the article. To monitor the performance of your flow, go to your flow’s details page. There you can click on the “Analytics” link to see how many actions are being called and make adjustments to your flow accordingly.

TIP: If you are processing large quantities of images, you can distribute the workload across multiple flows but you will need to specify a separate upload folder for each flow, otherwise they will all be triggered at the same time when a new image is added to the folder!

Reviewing the Results

After you have successfully processed some images, you can view the results in your “Custom Vision Classification Repository” on SharePoint. You can rearrange the column order to suit your needs. The layout we chose optimizes tracking of what uploaded successfully, especially for when we get into comparing different models in the next article. It is also an easy way to determine whether the images were being updated; otherwise, they would be missing from the record. Remember, inserting the probability results and updating the image are two separate steps in your flow, so it is quite possible to get the probability data without the image being saved. It is also important to note that you are essentially seeing a thumbnail of the image. If you click on the image, it will open in a new browser window with the URL indicating that it is being pulled directly from the uploads folder. Consequently, if you removed the file from the upload folder, the image would appear broken in this SharePoint list.

In the example below, everything seems to be running as expected. The image is visible, the Title column contains the image filename, the filename was successfully parsed into the Character, Apparel, and Environment columns, and if you continued scrolling to the right, you would see all the probabilities for each TagName we created for the Custom Vision model. Only about 3,000 more images to go until we can start the next article!

What’s Next?

In the next article, we will connect Power BI to our SharePoint list and ingest all the probability data for analysis. After transforming the data, building relationships, and creating measures, we will compare the effects of training time on different models across the Character, Apparel, and Environment dimensions. We will also use visualizations to help us identify the images for which the model made the worst predictions, and then look for dimension combinations that may need more images added to the training portion of the model.

Sources and Inspiration: 

  • Five Little Monkeys Jumping on the Bed by Eileen Christelow
  • The Island of Doctor Moreau by H.G. Wells
  • Train your model with Custom Vision: https://learn.microsoft.com/en-us/windows/ai/windows-ml/tutorials/image-classification-train-model
  • Use SharePoint and Power Automate to build workflows: https://learn.microsoft.com/en-us/power-automate/sharepoint-overview
  • Get started with Power Automate: https://learn.microsoft.com/en-us/power-automate/getting-started
  • Microsoft PowerToys: Utilities to customize Windows: https://learn.microsoft.com/en-us/windows/powertoys/
  • Working with the SharePoint Send HTTP Request flow action in Power Automate: https://learn.microsoft.com/en-us/sharepoint/dev/business-apps/power-automate/guidance/working-with-send-sp-http-request
  • Limits of automated, scheduled, and instant flows: https://learn.microsoft.com/en-us/power-automate/limits-and-config
  • Climbing As Easy As Walking For Smaller Primates: https://www.sciencedaily.com/releases/2008/05/080515145406.htm

How can we help?

Understanding low-code development applications and uses, and the variety of complex AI use cases, might be something you are struggling with.

Turning to technologies that you do not entirely grasp is a challenge sometimes too hard to overcome alone. The best advice on how to do so effectively is, ironically, to get some good advice. As experienced software and data experts, The Virtual Forge is here to help you understand your business problems, with up-front engagement and guidance for you as the client: what are your problems and how can we solve them?
