Azure Automation and PowerApps

One of our applications in our “test playground” is running some code in an Azure WebApp that needs to be restarted once in a while. Rather than trying to fix the underlying problem (no fun in that, right?), I decided to create a small mobile app to restart the WebApp when needed. To make it a bit more fun, I used the following “code-less” solutions to make it work:

  • Azure Automation: Graphical Runbook to restart the WebApp; use a Webhook to call the Runbook using a simple HTTP POST
  • Microsoft Flow: calls the Azure Automation Webhook when a control is selected in a PowerApp
  • PowerApp: simple app with a button that calls the above Flow

Azure Automation

I created an Azure Automation account with the option to create a service principal. This results in an account that is added as Contributor for the subscription in which the Azure Automation account was created. This also means that a runbook that uses this account is allowed to restart a WebApp in the same subscription. In my case, the Automation Account and the WebApp are in the same subscription.

Now, before you can use the Restart-AzureRmWebApp cmdlet, you need to add the AzureRM.Websites module to the Automation Account. To do so, navigate to https://www.powershellgallery.com/packages/AzureRM.Websites/1.1.2 and use the Deploy to Azure Automation button. Follow the instructions to add the module to an existing Azure Automation account. When you are finished, click Assets in the Automation Account’s main pane and then click Modules. You should see the following:

 image

Now you can duplicate the AzureAutomationTutorial graphical runbook. In Runbooks, click that Runbook and use the Export option to export the definition to a local file on your computer. Then add a new Runbook and use the Import an existing runbook option together with the export file you just created. Your copied Runbook will look like this:

image

You can remove everything after Login to Azure (that’s the login with the Service Principal that has Contributor rights). Just add the Restart-AzureRmWebApp cmdlet like so:

image
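
Outside the graphical designer, the equivalent call in plain PowerShell would be something like the sketch below; the WebApp and resource group names are hypothetical:

    # Restart the WebApp; the two parameters match the ones used in the graphical activity
    Restart-AzureRmWebApp -Name "MyWebApp" -ResourceGroupName "MyResourceGroup"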

The Restart-AzureRmWebApp cmdlet only needs two parameters: the name of the WebApp and the resource group of the WebApp. To be able to call the Runbook using an HTTP POST, create a Webhook for it. In the properties of the Runbook, click Webhooks and then add a Webhook. Note that there is no authentication for these Webhooks; it’s just a long, unique URL with an expiration date that you set. Make sure you copy the URL before you save the Webhook because it will not be shown later. I created a RunFromPowerApps webhook like so:

image

You can try the Webhook with Postman (https://www.getpostman.com/) or curl and see if a job gets started.
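
For example, a quick test from PowerShell (instead of Postman or curl) might look like the sketch below; the webhook URL is a placeholder for the one you copied when creating the Webhook:

    # POST to the webhook URL to start the runbook job; no body is required here
    $webhookUrl = "https://<your-automation-webhook-url>"
    Invoke-RestMethod -Method Post -Uri $webhookUrl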

Microsoft Flow

Well, this could not be simpler. Go to https://flow.microsoft.com and log in with your credentials (the same credentials you use for PowerApps; in my case, Azure AD organizational credentials). From My flows, create a new flow that looks like this:

image

In the URI, enter the Webhook address from Azure Automation. Save the flow. We will now use this flow in PowerApps.

PowerApps

To create a PowerApp, install the Windows PowerApps application (a Windows Store app) and log on with the same credentials you used with Flow. I created a blank app with a simple button, nothing fancy. With the button selected, click Flows from the Action menu. You should see the flow you created. Just select the flow to link it to the button. You should see something like this:

image

Note that it is possible to pass data to the flow as parameters to the Run() command. You could, for instance, create a list of WebApps to restart and pass the selected WebApp to the Flow and on to the Webhook.

Test the PowerApp with the play button in the menu bar. When you click Restart, check that the Automation Job fired properly:

image

Now you can run the PowerApp on your iOS or Android device with the PowerApps app for those platforms. Enjoy!

This simple example shows that a lot can be accomplished with tools like Azure Automation, Flow and PowerApps, whether for prototyping or for actual applications with a quick time to value.


Using Azure Cognitive Services from Logic Apps

Azure’s Cognitive Services are very easy to use from within your own applications or more “code-less” solutions such as Azure Logic Apps. In this post, I will show a simple example of a Logic App that does sentiment analysis on incoming tweets. When the sentiment score is very high, an SMS is sent.

To perform sentiment analysis, use the Text Analytics API from Cognitive Services. First, in the Azure Portal, create a Cognitive Services account of type Text Analytics. After that account has been created, you will need the following information to be used in Logic Apps:

  • Endpoint: https://westus.api.cognitive.microsoft.com/text/analytics/v2.0
  • Key: use one of the two secret keys to access the API

If you create a Cognitive Services account in the free tier, you can make 5000 calls per 30 days.

Now create a Logic App in the Azure Portal and go to Triggers and Actions to enter the designer. I will not provide step-by-step details as working with triggers and actions in the graphical designer is very easy. To obtain the results we want, we will have to switch to Code View though.

First, from the Microsoft Managed APIs, add a Twitter trigger. Provide your credentials to Twitter and provide a search term.

Now click the + icon to add an action. To call the Sentiment Analysis API, use the HTTP action and provide the following information (Note: a common error is forgetting to specify which specific API to call; for the URI below, also append /sentiment to perform sentiment analysis):

image

Naturally, replace the key shown with one of the keys obtained from your Cognitive Services account. The body of your HTTP POST can be an array of documents, with each document having an id and the text you want to analyze. In our case, we want to analyze the Tweet text, so we use the graphical designer to insert it. In Code View, this will be:

image
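
As an aside, you can test the same call outside Logic Apps. A rough PowerShell sketch is shown below; the key and sample text are placeholders, and the response shape in the comments is what the condition in the next step relies on:

    # Hypothetical direct call to the Text Analytics sentiment API
    $key  = "<one of your Text Analytics keys>"
    $body = @{ documents = @( @{ id = "1"; text = "Azure Logic Apps are great!" } ) } | ConvertTo-Json -Depth 4
    $result = Invoke-RestMethod -Method Post `
        -Uri "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment" `
        -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
        -ContentType "application/json" `
        -Body $body
    # The response contains a documents array; each document has an id and a score between 0 and 1
    $result.documents[0].score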

Now we want to send an SMS when the sentiment of the Tweet is very positive. The sentiment is expressed as a value between 0 and 1 where 1 is a really, really, really positive tweet.

To send an SMS when the sentiment is above 0.95, first click the + icon and add a condition. The value to evaluate is part of the HTTP body of the previous action, so add that, select greater than or equals, and enter 0.95 as the value. Then switch to advanced view to see the expression you built. It will look like this:

@greaterOrEquals(body('Http'), 0.95)

The above is not going to cut it, though, since the response body is JSON and the sentiment score needs to be extracted from it. Change the expression to the following:

@greaterOrEquals(float(json(string(body('Http'))).documents[0].score), 0.95)

Since the response body contains an array of documents and we only submitted one document, we just obtain the score from the first one.

image

Now we can click Add an action in the If yes section to send an SMS. You can use the Twilio Send Message Managed API to do so, but you will need a Twilio account for this to work. Alternatively, you can send an e-mail or just post the result to http://requestb.in. For Twilio, you will end up with something like the screenshot below. Phone numbers have been blurred to protect the innocent.

image

In the above, we only want to show the score for the Tweet and not the whole body. This can be done in Code View:

In the Send_Message action, change the following:

@{body('Http')} for @{triggerBody()['TweetText']}

to:

@{json(string(body('Http'))).documents[0].score} for @{triggerBody()['TweetText']}

Note that changes like the above can make the UI designer unavailable.

When you save this Logic App, incoming tweets containing Azure should be analyzed and you should get SMSs when tweets are very positive. Hey, it’s Azure, why shouldn’t they be? :-)

You can check if the Logic App is executing correctly from the Operations tile:

image

For a search term like Azure, I recommend turning off the Logic App if you don’t want to exhaust your 5000 free-tier API calls.


Azure Resource Manager REST API from Node

In our video about Fault Domains in Azure IaaSv2, we mentioned Azure Resource Manager and the use of templates to deploy IaaSv2 resources such as virtual machines in a fault domain, a load balancer, a public IP address and more. Azure Resource Manager also has a REST API that can be used from any language. This post discusses the use of the REST API from node.js, including obtaining a token from Azure Active Directory using adal-node.

Before obtaining the token, you need to decide which account to use. In this case, I created a service principal in Azure AD to be used as a service account. The process to create a service principal is well documented here and here. I created the service principal using the procedure in the first link, by creating a dummy application in Azure Active Directory. When you create such a dummy application, you will obtain two of the several pieces of information you need to obtain the token:

  • The client ID (a GUID), which basically serves as the user name
  • A generated key with a validity of one or two years, which serves as the password

Other information you will need is the tenant ID (also a GUID), which is used to construct the authorization URL. To actually obtain the token using ADAL (Active Directory Authentication Library) for node.js with adal-node, take a look at adal-node on npm, in particular the server-to-server with client credentials sample. There are some issues with that sample code, so I modified it as follows:

var adal = require('adal-node');
var AuthenticationContext = adal.AuthenticationContext;
// Service principal and tenant details (replace with your own values)
var tenantID = "TenantIDGUID";
var clientID = "ClientIDGUID";
var resource = "https://management.azure.com/";
var authURL = "https://login.windows.net/" + tenantID;
var secret = "ClientSecret";
var context = new AuthenticationContext(authURL);
context.acquireTokenWithClientCredentials(resource, clientID, secret, function (err, tokenResponse) {
  if (err) {
    console.log('Failed to obtain token: ' + err.stack);
    return;
  }
  // tokenResponse.accessToken holds the bearer token used for the REST calls below
  console.log(tokenResponse);
});

Once you obtain the token, you will get a tokenResponse object in the callback function. The tokenResponse contains:

{ tokenType: 'Bearer',
  expiresIn: 3600,
  expiresOn: Fri Jun 05 2015 10:46:48 GMT+0200 (Romance Daylight Time),
  resource: 'https://management.azure.com/',
  accessToken: 'long, long token',
  isMRRT: true,
  _clientId: 'clientID',
  _authority: 'https://login.windows.net/tenantID' }

So basically, you are getting an OAuth bearer token you can use in a call to a Web API that expects such a token. The Azure Resource Manager REST APIs will be called with this token.

To actually make the API request, I do the following:

  • Use the restler module: see https://www.npmjs.com/package/restler
  • Get the access token from the token response above: the token is obtained with tokenResponse['accessToken']
  • Build the request URL, in this case to list all resources in my subscription. To interactively find out the kind of requests you can make, use Resource Explorer.
  • Make the REST call with restler, passing the accessToken

The code looks like this where you replace {yourSubID} with your Azure subscription ID:

var rest = require('restler');
// Bearer token obtained in the adal-node callback above
var authHeader = tokenResponse['accessToken'];
// List all resources in the subscription (replace {yourSubID} with your subscription ID)
var requestURL = "https://management.azure.com/subscriptions/{yourSubID}/resources?api-version=2015-01-01";
// restler sends the accessToken option as an OAuth bearer token
rest.get(requestURL, { accessToken: authHeader }).on('complete', function (result) {
  console.log(result);
});

If you go to https://github.com/gbaeke/armnode you will find the full samples to get you started. Hope this is helpful. Leave a comment if you have further questions.


Fault Domains in Azure IaaSv2

With the availability of IaaSv2 in Microsoft Azure, several new features are available that dramatically change the way resources are deployed and maintained. One profound change is the introduction of three fault domains for IaaSv2 virtual machines as opposed to two fault domains for IaaSv1 virtual machines. In the case of Azure, a fault domain is basically a rack of servers. A power failure at the rack level will impact all servers in the rack or fault domain. To make sure your application can survive a fault domain failure, you will need to spread your application’s components, for instance front-end web servers, across fault domains. The way to do this in Azure is to assign virtual machines to an availability set. Upon deployment but also during service healing, Azure’s fabric controller will spread the virtual machines that belong to the same availability set across the fault domains automatically. As an administrator, you cannot control this assignment.

If you deploy virtual machines in cloud services (IaaSv1 style), the maximum number of fault domains is two, which can present a problem. For instance, when you deploy a majority node set cluster with three nodes across two fault domains, it is entirely possible that the fault domain that hosts two of the three nodes fails. When that happens, the surviving node does not have majority and will go offline as well. For such deployments, three fault domains are a requirement to survive a failure in one fault domain.

Now that you understand what a fault domain is and the requirement for three fault domains, how do you get three fault domains in Azure? Well, you will need to deploy virtual machines using the IaaSv2 model. This model is based on Azure Resource Manager, which also enables rich, template-based deployment of virtual machines, network interfaces, IP addresses, load balancers, web sites and more. Many Microsoft and community templates can be found at http://azure.microsoft.com/en-us/documentation/templates/

To get a feel for how such a deployment works and to check if your resources are spread across three fault domains, take a look at our Cloud Chat video:

Windows Azure Point-to-Site Networking

If you are having trouble with the point-to-site VPN configuration in Windows Azure, here are some tips about the procedure:

  • Follow the procedure located at http://msdn.microsoft.com/en-us/library/windowsazure/dn133792.aspx for creating the virtual network and the gateway.
  • When configuring the certificates for the VPN connection, first create the self-signed root certificate with the following command:  

    makecert -sky exchange -r -n "CN=RootCertificateName" -pe -a sha1 -len 2048 -ss My

  • The above command creates a self-signed root certificate and stores it in your certificate store (Certificates - Current User\Personal\Certificates). Next, export that certificate to a .cer file and upload it to Azure from the dashboard of the virtual network using the Upload client certificate link (the name of that link will probably change in the future). I also stored the root certificate in my Trusted Roots.
  • Now create a client certificate with the self-signed root certificate as the issuer. The command I used is different from the one in the documentation because the documented command did not work for me. I used:

    makecert -n "CN=ClientCertificateName" -pe -sky exchange -m 96 -ss
    my -a sha1 -is my -in "RootCertificateName"

  • The above command creates the client certificate in the same store as the root certificate and uses the root certificate previously generated as the issuer. Be sure to check that the issuer is the root certificate you uploaded to Azure.

In the dashboard of the virtual network, download the x64 or x86 client VPN package and install it. There will be an extra network connection that uses SSTP to connect to your Azure gateway:

image

 

In Azure the dashboard should show connected clients:

image

Office 365 Identity Management with DirSync without Exchange Server On-Premises

This post describes how users, groups and contacts are provisioned in Office 365 from the on-premises Active Directory. DirSync creates these objects in Office 365 and keeps them synchronized. Without an Exchange Server and the Exchange management tools in place, it is not always obvious how these objects should be created.

The following sections describe the procedures you can follow without Exchange or the Exchange management tools in place.

IMPORTANT NOTE
The sections below only specify the basic actions you need to perform in Active Directory to have the object appear in the right place in Office 365 (user, security group, mailbox, distribution group, contact). Note that almost all properties of these objects need to be set in Active Directory. If you want to hide a distribution group from the address book or configure moderation for one, you have to know the property in Active Directory that’s responsible for the setting, set the value, and perform directory synchronization. You will also need to upgrade the Active Directory schema with the Exchange Server 2010 schema updates. You cannot use the Exchange Server 2010 management tools without having at least one Exchange Server 2010 role installed on-premises.

Create a user account

Create a regular user account in Active Directory. This user account will be replicated by DirSync and it will appear in the Users list in the portal (https://portal.microsoftonline.com).

Important: set the user logon name to a value with a suffix that matches the suffix used for logging on to Office 365. For instance, if you log on with first.last@xylos.com in Office 365, set the UPN to that value:
image
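
If you prefer to script this, a minimal sketch with the Active Directory PowerShell module (the account name and UPN suffix are hypothetical) could be:

    # Align the UPN with the suffix used to log on to Office 365
    Set-ADUser -Identity "first.last" -UserPrincipalName "first.last@xylos.com"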

User accounts without a mailbox (or any other license) can be used in Office 365 to assign roles such as Billing Administrator or Global Administrator. A user account like this is typically used as the DirSync service account.

Create a user account for a user that needs a mailbox

Create a user account as above. Set the user’s primary e-mail address in the email attribute, or you will get only an onmicrosoft.com address:

image

When this user is synchronized and an Exchange Online license is assigned in the portal, a mailbox will be created that uses the address in the E-mail field as its primary SMTP address. A secondary SMTP address of the form prefix@tenantid.onmicrosoft.com is created automatically:

image

What if the user needs extra SMTP addresses?

  • You cannot set extra SMTP addresses in Exchange Control Panel (or Remote PowerShell) because the object is synchronized with DirSync.
  • In the on-premises Active Directory, you need to populate the proxyAddresses attribute of the user object. You can set the values in this field with ADSIEdit or Active Directory Users and Computers (Windows Server 2008 ADUC and higher with Advanced Features turned on).
  • In the proxyAddresses field, make sure that you also list the primary SMTP address with SMTP: (in uppercase) in front of the address. Secondary addresses need smtp: (in lowercase) in front of the address, as shown in the sketch after this list.
    image
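
If you prefer PowerShell over ADSIEdit or ADUC, a sketch with the Active Directory module (account name and addresses are hypothetical) could look like this:

    # Stamp a primary (SMTP:) and a secondary (smtp:) proxy address on the user
    Set-ADUser -Identity "first.last" -Add @{
        proxyAddresses = @("SMTP:first.last@xylos.com", "smtp:f.last@xylos.com")
    }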

Note: instead of editing the proxyAddresses field directly, you can use a free (but, at the time of writing, beta) product: http://www.messageops.com/software/office-365-tools-and-utilities/office-365-active-directory-addin. The tool adds the following tabs to Active Directory Users and Computers:

  • O365 Exchange General: set display name, Email address, additional Email addresses and even a Target Email Address (for mail redirection)
  • O365 Custom Attributes: set custom attributes in AD for replication to Office 365
  • O365 Delivery Restrictions: accept messages from, reject messages from
  • O365 Photo: this photo will appear on the user’s profile and will be used by Lync Online as well
  • O365 Delegates: to set the publicDelegates property

When a user is created in AD, you can use the additional tabs this tool provides to set all needed properties at once.

To summarize the actions for a mailbox:

  • Create a user in ADUC and set both the user logon name (UPN) and the e-mail address to the user’s primary e-mail address (UPN and e-mail address do not have to match, but it’s the most common case)
  • Make sure the user has a display name (done automatically for users if you specify first, last and full name in the AD wizard)
  • Set proxyAddresses manually or with the MessageOps add-in to specify additional e-mail addresses (prefixed with smtp:) and make sure you also specify the primary e-mail address (prefixed with SMTP:).
  • Let DirSync create and sync the user to Office 365
  • Assign an Exchange Online license to the user. A mailbox will be created with the correct e-mail addresses.

Create a security group

Create a security group in Active Directory. The group will be synchronized by DirSync and appear in the Security Groups in the portal. The group will not appear in the Distribution Groups in Exchange Online (obviously).

Create a distribution group

Create a distribution group in Active Directory. In the properties of the group set the primary e-mail address in the E-mail address field:

image

In addition to the e-mail address, the group object also needs a display name (displayname attribute). If the distribution group in AD has an e-mail address and a display name, the group will appear in the Distribution Groups list in Exchange Online after synchronization.

Note that specifying members and alternate e-mail addresses has to be done in the local Active Directory as well. If you have installed the MessageOps add-in, you can easily set those properties.
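
As an alternative to the GUI, here is a sketch with the Active Directory module (group name and address are hypothetical) that creates the group and stamps the two required attributes:

    # Create a universal distribution group with a display name and primary e-mail address
    New-ADGroup -Name "All Employees" -GroupScope Universal -GroupCategory Distribution `
        -DisplayName "All Employees" -OtherAttributes @{ mail = "allemployees@xylos.com" }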

Create a mail-enabled security group

You can add a display name and e-mail address to a standard security group to mail-enable the group. After stamping those two properties, the group will appear in the list of Distribution Groups in Exchange Online. When you list groups with the Get-Group cmdlet, you will see the following:

image

You can stamp the properties manually or use the MessageOps add-in to set these properties easily.
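
A possible PowerShell sketch (group name and address are hypothetical) that stamps both properties on an existing security group:

    # Mail-enable an existing security group by adding a display name and e-mail address
    Set-ADGroup -Identity "Sales Team" -Replace @{
        displayName = "Sales Team"
        mail        = "sales@xylos.com"
    }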

Create a contact

Create a contact object in Active Directory. In the properties of the created object, fill in the E-mail field in the General tab.
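
A scripted equivalent with the Active Directory module (name, OU and address are hypothetical) could be:

    # Create a contact object with a display name and an e-mail address
    New-ADObject -Type contact -Name "John External" -Path "OU=Contacts,DC=xylos,DC=com" `
        -OtherAttributes @{ mail = "john.external@partner.com"; displayName = "John External" }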

Conclusion

Although DirSync makes it easy to create directory objects in Office 365, without an Exchange Server and the Exchange management tools it is not always obvious how to set the needed properties in order to correctly sync these objects. If you find it too much of a hassle to set the required properties on your local Active Directory objects, there are basically two things you can do:

  • Turn off Directory Synchronization and start mastering directory objects in the cloud
  • Install at least one Exchange Server 2010 SP1 server so that you can use the Exchange management tools