Fun with Azure Functions and the Emotion API

I was speaking recently at the Perth MS Cloud Computing User Group about Azure Functions.

I wanted to do a demo-heavy presentation for a change, including some live coding, and one of the scenarios I was keen to cover was the Cortana Intelligence Suite.

For the demo I thought it would be good to create a function that used a blob trigger, sent the uploaded object to be analysed by Cortana Intelligence and stored the results in a DocumentDB collection.

The main reason for this was to demonstrate not only basic functionality but also how to reference assemblies and how to use package restore.

The thing I like about Azure Functions is that this sort of scenario is actually really trivial to wire up.

Azure Function Setup

I used a dynamic App Service Plan, which is a great way of getting scale on a pay-as-you-go basis.

Once you’ve created a Function App it is straightforward to create a basic function. As mentioned, I went for a blob trigger in C# as I wanted to process uploaded images.

A basic blob trigger function looks like the following.

using System;
public static void Run(string myBlob, TraceWriter log)
{
    log.Info($"C# Blob trigger function processed: {myBlob}");
}

This is not really very useful, as we want to bring in an object that contains all the information about our uploaded blob. For this we change our input from a string to an ICloudBlob.

We also need to set up our output. I won’t go through how to do that here, as the documentation covers it very well. We want to send documents to DocumentDB once we’ve set up our connection, so we need to use IAsyncCollector<object> as our output and change the method to be async.

So we get:

using System;
using System.Threading.Tasks;

public static async Task Run(Microsoft.WindowsAzure.Storage.Blob.ICloudBlob myBlob, IAsyncCollector<object> outputDocument, TraceWriter log)
{
    log.Info($"C# Blob trigger function processed: {myBlob.Name}");
}

Good, so now we have a method signature that is useful!
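
For reference, these two parameters are wired up through bindings in the function’s function.json file. A rough sketch of what that might look like is below; the blob path, database, collection and connection setting names are placeholders, so substitute whatever you configured when setting up the bindings:

{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "photos/{name}",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputDocument",
      "type": "documentDB",
      "direction": "out",
      "databaseName": "photodb",
      "collectionName": "results",
      "createIfNotExists": true,
      "connection": "MyDocumentDBConnection"
    }
  ],
  "disabled": false
}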

Cortana Intelligence Suite

I decided to use the Emotion API for my testing; it’s a great API that tells you the percentage likelihood of each emotion in the faces of people in photos.

You can test the API yourself: just sign up and get an API key, then provide a URI for your image, which in our case is the URI of our uploaded blob, and voila!
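
If you want to try the call outside of Functions first, a quick console sketch along the lines below works. The endpoint shown assumes a westus Cognitive Services subscription, and the key and image URL are placeholders, so swap in whatever your portal gives you:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class EmotionApiSmokeTest
{
    public static void Main()
    {
        RunAsync().GetAwaiter().GetResult();
    }

    private static async Task RunAsync()
    {
        using (var client = new HttpClient())
        {
            // Subscription key from the Azure Portal (placeholder value here).
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-emotion-api-key>");

            // Point the API at a publicly reachable image URI (placeholder).
            var body = new StringContent("{\"url\":\"https://example.com/photo.jpg\"}", Encoding.UTF8, "application/json");

            // Region-specific endpoint - take the exact URI from your subscription.
            var response = await client.PostAsync("https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize", body);

            // The response is a JSON array with a faceRectangle and scores per detected face.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}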

I wanted to check my acting abilities by taking some photos of expressions I was making and check whether I should make the move to Hollywood!

To add this to your Resource Group, search for “Cognitive Services API” in the Azure Portal, click Create like normal, and then choose the API type you’re interested in. As mentioned, I chose Emotion API.

Azure Function Next Steps

We now have the basics for our function, having added a DocumentDB endpoint and got the function signature set up correctly. If we run it as it is it will work, but it won’t be terribly useful.

Once we’ve got our Emotion API key we need to store it in our Application Settings. To do that, click “Function app settings” above the code and then click the “Go to App Service Settings” button. Go to “Application settings” in the resulting Settings blade and add a new App setting; I called mine EMOTION_API_KEY, which is the name the code reads later.

We need to use some assemblies that are included in the Functions deployment but that need to be referenced. To do that we use “#r” to bring them in.

#r "Microsoft.WindowsAzure.Storage"
#r "System.Web"
#r "System.Runtime"
#r "System.Threading.Tasks"
#r "System.IO"

We also need to bring in the assemblies required by the Emotion API. These are not included in the standard set of Functions assemblies, so we need to do a package restore for them.

For this we create a project.json file:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.ProjectOxford.Emotion": "1.0.251"
      }
    }
  }
}

If you’re familiar with App Services, you can do this in a couple of ways:

  • Go to Visual Studio Online from the App Service Tools menu and create the file in place in the main folder of the function
  • Create in the editor of your choice, and upload to the main folder using the Debug console in Kudu

Once you upload the file, you’ll see the package restore happening in the Log Streaming Service window below your code.

The final part of the puzzle is to complete the code required to take the uploaded blob, send the information to the Emotion API, and then create a document and send it to the output.

The final code should look like the following:

#r "Microsoft.WindowsAzure.Storage"
#r "System.Web"
#r "System.Runtime"
#r "System.Threading.Tasks"
#r "System.IO"
#r "Newtonsoft.Json"

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;
using System.Web.Configuration;
using Microsoft.ProjectOxford.Common;
using Microsoft.ProjectOxford.Emotion;
using Microsoft.ProjectOxford.Emotion.Contract;

public static async Task Run(ICloudBlob myBlob, IAsyncCollector<object> outputDocument, TraceWriter log)
{
    log.Info($"C# Blob trigger function processed: {myBlob.Name}");

    // Grab the Emotion API key from the Function App's Application Settings.
    var apiKey = WebConfigurationManager.AppSettings["EMOTION_API_KEY"];

    // Ask the Emotion API to analyse the uploaded blob by its URI.
    EmotionServiceClient emotionServiceClient = new EmotionServiceClient(apiKey);
    Emotion[] emotions = await emotionServiceClient.RecognizeAsync(myBlob.Uri.ToString());

    // Wrap the results, along with some metadata about the blob, in a document.
    var photo = new PhotoResult
    {
        Uri = myBlob.Uri.ToString(),
        Name = myBlob.Name,
        NoMatches = emotions.Length,
        ProcessTime = DateTime.UtcNow,
        Results = emotions
    };

    // Send the document to the DocumentDB output binding.
    await outputDocument.AddAsync(photo);
}

public class PhotoResult
{
    public string Uri;
    public string Name;
    public int NoMatches;
    public DateTime ProcessTime;
    public Emotion[] Results;
}

So looking at this, we grab our key from our App Settings, create an EmotionServiceClient and then call RecognizeAsync, passing in the URI of the uploaded blob.

The API returns an array that contains details about faces that have been identified (FaceRectangle) and matching emotions (Scores). It’s an array because the API actually matches all faces it finds in the image and scores each one.
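
If you want to pull out individual faces rather than storing the whole array, the SDK’s contract types expose the rectangle and the scores directly. As a small illustrative sketch (dropped into the function above, after the RecognizeAsync call):

// Illustrative only: log a couple of scores for each face the API found.
foreach (var face in emotions)
{
    log.Info($"Face at ({face.FaceRectangle.Left},{face.FaceRectangle.Top}): " +
             $"happiness {face.Scores.Happiness:P1}, anger {face.Scores.Anger:P1}");
}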

I created a simple class as I wanted to store some additional information, and this is serialised when we send it through to DocumentDB. The magic happens in the final line, where we call AddAsync and pass in our newly created object.

And that’s it, we’ve created a nice simple mechanism for checking the emotions of people in photos that are uploaded.

Testing my Acting

So first I uploaded the following picture:

Happy

It’s my happy face and the API concurs.

"Scores": {
  "Anger": 4.162134e-10,
  "Contempt": 1.47855415e-13,
  "Disgust": 3.8750686e-10,
  "Fear": 5.99909066e-13,
  "Happiness": 1,
  "Neutral": 4.58070127e-12,
  "Sadness": 3.19319154e-10,
  "Surprise": 1.40573334e-11
}

Next I tried for surprise.

Surprise

This returned the following scores:

"Scores": {
  "Anger": 0.0241219755,
  "Contempt": 0.00002152401,
  "Disgust": 0.00158446026,
  "Fear": 0.0162605,
  "Happiness": 0.000183772121,
  "Neutral": 0.00540139666,
  "Sadness": 0.000415531016,
  "Surprise": 0.95201087
}

Not bad, a good amount of surprise with a small amount of Anger and Fear thrown in.

Next up was sadness:

Sad

I tried to think about a puppy that had lost its favourite toy for this one!

"Scores": {
  "Anger": 0.00167506724,
  "Contempt": 0.00182426057,
  "Disgust": 0.00735369138,
  "Fear": 0.0000129641885,
  "Happiness": 1.29625617e-8,
  "Neutral": 0.002562766,
  "Sadness": 0.9865707,
  "Surprise": 4.980104e-7
}

Again, a good result; I’m clearly an empathetic guy.

Finally I wanted to try anger.

Angry

For this I wanted to think about how angry I was that someone would take a toy away from a puppy.

"Scores": {
  "Anger": 0.4741387,
  "Contempt": 0.0000462272656,
  "Disgust": 0.0145976059,
  "Fear": 0.0000404144557,
  "Happiness": 0.5108594,
  "Neutral": 0.0000221199662,
  "Sadness": 0.0002525145,
  "Surprise": 0.0000430380351
}

Only 47% angry; I guess I’m just not an angry type of guy. In fact, somehow I’m a bit more happy than angry in this, which I can only attribute to the upturn of my mouth.

Anyway, clearly I’ve missed my calling and a career in films awaits, although I think I’ll have to stick to comedy!

Conclusion

I wanted to test out the Cortana Intelligence Suite APIs and I wanted to do a more complex scenario using Azure Functions.

I chose this complexity because I also wanted to demonstrate how to use references, package restore, and non-standard inputs and outputs.

For this I chose a blob trigger that called the Emotion API and stored the result in DocumentDB.

This is a task that I can see could have some real-world applications, including gauging the reaction of people in any number of situations, since the API works out values for all the faces it finds in a picture.

So now go and have a play with Azure Functions and with the range of APIs in the Cortana Intelligence Suite; there is some great and directly applicable stuff in there.
