Azure App Configuration presentation source code

On 16 April 2021 I gave a presentation at Global Azure Bootcamp 2021 Austria. My talk about Azure App Configuration was presented in German (and you can also find the recording below).

You can download the source code of my presentation here at GitHub. Each finished demo is in its own branch.

It was interesting to prepare, and I created two specific implementation details that I will follow up on in future blog posts:

  • Use Azure Event Grid to push Azure App Configuration settings down to an ASP.NET Core app
  • Sticky Feature Session Manager for FeatureManagement

I hope you enjoyed my session!

AndiP

Cosmos DB RBAC-Access with Managed Identities

The public preview of role-based access control (RBAC) for the Azure Cosmos DB Core (SQL) API was announced yesterday at Microsoft Ignite. This not only allows us to authenticate our requests with an Azure Active Directory identity based on roles, but also lets us audit which identities accessed our data.

In this blog post I walk you through a complete example of how you can use Azure Cosmos DB with RBAC and a managed identity. We will

  • Create the following Azure resources
    • Cosmos DB Account
    • Log Analytics Account
    • Azure Function App
  • Create an Azure Cosmos DB role with specific access permissions
  • Assign the CosmosDB role to a managed identity
  • Write an Azure Function App to access CosmosDB with managed identity

Create and configure Azure Resources

Open https://portal.azure.com and create a new resource group „cosmosrbac“.

Create a new Azure Cosmos DB account in that resource group. For this sample I used the new serverless option. Use the Data Explorer (under Settings) to create a new database „demodb“ and a collection „democol“. I am using „/p“ as a generic partition key. This is especially useful if you store different kinds of documents in your collection which are partitioned differently. Also create a document with the „New document“ button, since our permissions will later be read-only.

{
    "id": "001",
    "p": "Test",
    "First": "Andreas"
}

Now create a new Log Analytics workspace and an Azure Function App (serverless) in the resource group „cosmosrbac“. I named those „cosmosrbaclog“ and „cosmosrbacfunc“.

Now let's configure the Azure Cosmos DB account so that all data plane requests are audited in our Log Analytics workspace. To do that, select your Azure Cosmos DB resource and click on „Diagnostic settings“ in the section „Monitoring“. Add a new diagnostic setting where you select just „DataPlaneRequests“ and your created Log Analytics workspace. I named that setting „SecurityLog“.

Diagnostic settings for Azure Cosmos DB account

In the portal select your Azure Function App and click on „Identity“ in the section „Settings“. Select „System assigned“ and turn the status to „On“. Write down the given Object ID, which represents the Object ID of the service principal in Azure AD. Read more about the managed identity types in the documentation here.

System assigned Identity for Azure Function App

Configure CosmosDB RBAC role and assign to identity

To create a new CosmosDB RBAC role we need to define which permissions the role consists of. This can be done by defining a JSON file. We will create a new role „MyReadOnlyRole“ with the following permissions:

  • Microsoft.DocumentDB/databaseAccounts/readMetadata
  • Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read
  • Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeQuery
  • Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed

You can find all available Cosmos DB permissions here.

In the Azure Portal open the cloud shell. You can find the cloud shell in the upper bar on the right side. I am using the PowerShell version! If you use bash instead you might need to slightly adapt the statements below. To create and assign the role you need the latest Azure Cosmos DB az extension, which you can install with this command:

az extension add --name cosmosdb-preview

Then start the nano editor and create a new file „roledefinition.json“:

nano roledefinition.json

Copy and paste the following JSON document, then save it by pressing CTRL+X and confirming with Y:

{
    "RoleName": "MyReadOnlyRole",
    "Type": "CustomRole",
    "AssignableScopes": ["/"],
    "Permissions": [{
        "DataActions": [
            "Microsoft.DocumentDB/databaseAccounts/readMetadata",
            "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read",
            "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeQuery",
            "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed"
        ]
    }]
}

Now create the defined role with the following statements. Make sure you replace the values of $resourceGroupName and $accountName with those you selected:

$resourceGroupName='cosmosrbac'
$accountName='cosmosrbac'
az cosmosdb sql role definition create --account-name $accountName --resource-group $resourceGroupName --body roledefinition.json

To later get a list of your defined roles you can issue this statement:

az cosmosdb sql role definition list --account-name $accountName --resource-group $resourceGroupName

Write down the value of „name“ of the newly created role definition, as we will need it in the next step.

Now we assign this new role to our previously created system-assigned managed identity. Replace the $readOnlyRoleDefinitionId value with the role name (see above) and the $principalId value with the Object ID we got earlier when we configured our Azure Function App:

$resourceGroupName='cosmosrbac'
$accountName='cosmosrbac'
$readOnlyRoleDefinitionId = 'name of role definition'
$principalId = 'object id of system managed identity'

az cosmosdb sql role assignment create --account-name $accountName --resource-group $resourceGroupName --scope "/" -p $principalId --role-definition-id $readOnlyRoleDefinitionId

We can now use this system-assigned managed identity from the Azure Function App to access Cosmos DB with the permissions we defined for that role. To see all the assignments you made, execute this statement:

az cosmosdb sql role assignment list --account-name $accountName --resource-group $resourceGroupName

Access Azure CosmosDB with Azure Function App

Now start Visual Studio 2019 and create a new function app with an HTTP trigger. Right-click the newly created project and select „Manage NuGet packages…“. Make sure the checkbox „Include prerelease“ is checked! Add the following NuGet packages to your solution:

  • Azure.Identity (Version 1.3.0)
  • Microsoft.Azure.Cosmos (Version 3.17.0-preview1)
    Note: this functionality is only available in the 3.17.0 preview, not in the final release!

Modify Function1.cs file as following. First replace the using statements with these:

using Azure.Identity;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using System;
using System.Threading.Tasks;

Add a Person class which represents our documents in Azure Cosmos DB:

public class Person
{
    public string Id { get; set; }
    public string p { get; set; }
    public string First { get; set; }
}

Inside the class „public static class Function1“ write the following two functions: one to read an item and another to write one. Make sure that you replace <yourAccount> with the name of your Azure Cosmos DB account:

[FunctionName("GetItem")]
public static async Task<IActionResult> GetItem(
  [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
   ILogger log)
{
  try
    {
      // Since we know that we have a managed identity we instantiate that directly
      // var tokenCredential = new DefaultAzureCredential();
      ManagedIdentityCredential tokenCredential = new ManagedIdentityCredential();
      CosmosClient client = new
       CosmosClient("https://<yourAccount>.documents.azure.com:443/", tokenCredential);
      Container container = client.GetContainer("demodb", "democol");
      ItemResponse<Person> res = await container.ReadItemAsync<Person>("001", new PartitionKey("Test"));
      return new OkObjectResult(JsonConvert.SerializeObject(res.Resource));
    }
    catch (Exception ex)
    {
      return new BadRequestObjectResult(ex.ToString());
    }
}
[FunctionName("CreateItem")]
public static async Task<IActionResult> CreateItem(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
  try
  {
    ManagedIdentityCredential tokenCredential = new ManagedIdentityCredential();
    CosmosClient client = 
      new CosmosClient("https://<yourAccount>.documents.azure.com:443/", tokenCredential);
    Container container = client.GetContainer("demodb", "democol");
    ItemResponse<Person> res = await container.CreateItemAsync<Person>(
      new Person()
      {
        Id = "002",
        p = "Test",
        First = "Sonja"
      },
      new PartitionKey("Test"));
    return new OkObjectResult(JsonConvert.SerializeObject(res.Resource));
  }
  catch (Exception ex)
  {
    return new BadRequestObjectResult(ex.ToString());
  }
}

Compile the application and publish it to the Azure Function App (right-click the project – Publish). Open the Azure portal again and navigate to your Azure Function App. Under the section „Functions“ select „Functions“, select each function, and acquire the function URL with the „Get function URL“ button in the upper bar.

Azure Functions deployed

Now use a tool like Postman to try to execute your functions. While the GetItem-Function will return a result

GetItem Function returning data from Azure Cosmos DB

the CreateItem-Function will return an RBAC access error.

CreateItem Function returns an error claiming that the principal does not have the required RBAC permissions
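
If you prefer code over Postman, here is a minimal C# console sketch that calls both deployed functions; the URLs and the ?code= function keys are placeholders you need to replace with the values from „Get function URL“.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class FunctionSmokeTest
{
    // Replace with the URLs (including the ?code= function key) from "Get function URL"
    private const string GetItemUrl = "https://cosmosrbacfunc.azurewebsites.net/api/GetItem?code=<functionKey>";
    private const string CreateItemUrl = "https://cosmosrbacfunc.azurewebsites.net/api/CreateItem?code=<functionKey>";

    static async Task Main()
    {
        using HttpClient client = new HttpClient();

        // Expected to succeed: the role grants read permissions
        HttpResponseMessage readResponse = await client.GetAsync(GetItemUrl);
        Console.WriteLine($"GetItem: {(int)readResponse.StatusCode}");
        Console.WriteLine(await readResponse.Content.ReadAsStringAsync());

        // Expected to fail with an RBAC error: the role does not allow writes
        HttpResponseMessage writeResponse = await client.GetAsync(CreateItemUrl);
        Console.WriteLine($"CreateItem: {(int)writeResponse.StatusCode}");
        Console.WriteLine(await writeResponse.Content.ReadAsStringAsync());
    }
}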

Analyzing Azure CosmosDB data plane requests in Log Analytics

Now navigate to your Azure Log Analytics workspace in the Azure Portal. Under the section „General“ select „Logs“. Issuing the following statement will show the failed access to the document:

AzureDiagnostics 
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
  and Category == "DataPlaneRequests"
  and ResourceGroup == "COSMOSRBAC"
  and requestResourceType_s == "Document"
| where OperationName == 'Create'
| summarize by statusCode_s, OperationName, aadPrincipalId_g, aadAppliedRoleAssignmentId_g

Replacing the operation name with „Read“, on the other hand, will show all 200 success codes for that given principal ID.

AzureDiagnostics 
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
  and Category == "DataPlaneRequests"
  and ResourceGroup == "COSMOSRBAC"
  and requestResourceType_s == "Document"
| where OperationName == 'Read'
| summarize by statusCode_s, OperationName, aadPrincipalId_g, aadAppliedRoleAssignmentId_g

I hope you enjoyed our tour!

Kind regards
AndiP

Step by Step securing Applications with AAD B2C and EasyAuth

Today I show you step by step how you can use Azure Active Directory Business to Consumer (AAD B2C) to secure your backend site/services.

For that reason I create a service backend and two web applications:

  • REST Backend-Service (Web-API) secured by AADB2C
  • Single Page Application
  • ASP.NET Application

Both applications will be able to access the web-api after the user is authenticated. The following diagram illustrates our example:

Create the Infrastructure

First of all we will create our required infrastructure:

  • AAD B2C Instance in Azure
  • 3 Azure Web Apps

Setup the Azure Web Apps

  1. Login to https://portal.azure.com
  2. Create a new resource group (e.g. "b2ctest")
  3. Add 3 Web Apps to this resource group
    • SPA-App: Name b2cspaspec*, Runtime Node 12 (LTS), Plan YourFreePlan-1, URL https://b2cspaspec.azurewebsites.net/
    • Web-API: Name b2ctestspecapi*, Runtime .NET Core 3.1, Plan YourFreePlan-1, URL https://b2ctestspecapi.azurewebsites.net/
    • ASP.NET Core App: Name b2ctestspec*, Runtime .NET Core 3.1, Plan YourFreePlan-1, URL https://b2ctestspec.azurewebsites.net/

*) These names are already taken, choose unique names!

  1. Configure Git deployment for the SPA-App
    • Select the web app in the Azure portal
    • Under Deployment – Deployment Center
      • Select CI / Local Git and click Continue
      • Select "App Service build service" and click Continue
      • Finish
    • Git clone URL: https://b2cspaspec.scm.azurewebsites.net:443/b2cspaspec.git
    • Username: $b2cspaspec
    • Password: av********cs

Copy the Git Clone URL and store the git credentials

Setup Azure AAD B2C Tenant

Next we will create a new Azure Active Directory B2C Tenant. You can skip that step if you already have one.

  1. Login to https://portal.azure.com
  2. Create a new resource group for your AAD B2C
  3. Add an Azure Active Directory B2C instance (Settings should be obvious)
    • Organization Name
    • Initial Domain name
    • Country/Region
  4. To access your AAD B2C Tenant you need to switch to that tenant like you would switch to another AAD tenant. (Upper right corner)

Create User Flows in AAD B2C

Navigate to the AAD B2C Overview page

Select New user flow under Policies / User flows to select a predefined user flow. We select Sign up and sign in in the recommended version. This will allow a potential user to sign up and sign in, or an existing user to sign in.

Configure the flow as follows:

  • Name: B2C_1_ (e.g. signupin ==> B2C_1_signupin)
  • Identity Provider: Select "Email signup"
  • MFA: SMS or phone
  • MFA Enforcement: Conditional
  • Check Conditional Access option
  • Select additional attributes to be collected

Click CREATE to create the flow. Under User Flows select your flow again and click on Properties as we want to modify the password complexity.

I chose the following settings, but you can obviously choose differently:

  • Complexity: Custom
  • Character set: All
  • Minimum length: 16
  • Maximum length: 80
  • Character classes required: None

Create Application registrations in AAD B2C

Each of our services (2 web apps and 1 Web API) needs to be secured and is therefore represented as an AAD application in our B2C tenant.

Again navigate to the AAD B2C Overview page and select App registrations

Register AAD-App Web-API

  • Name: b2ctestspecapi

  • Supported account type: Accounts in any identity provider or organizational directory (for authenticating users with user flows)

  • Redirect URI (Web): https://b2ctestspecapi.azurewebsites.net/.auth/login/aad/callback (Enter your WebApp URI there and append /.auth/login/aad/callback, which is the path EasyAuth of Azure App Service will require!)

  • Under Overview note the Application (Client) ID (af9c********0f66 in my case)

  • Under Manage / Authentication / Implicit Grant

    • Check "Access Tokens"
    • Check "ID Tokens"
  • Under Manage / Certificates & secrets / Client Secrets

    • Create a new secret and keep it (-Ox********** in my case)
  • Under Manage / API permissions add the following permissions (do not forget to GRANT CONSENT afterwards)

    • offline_access (Delegated)
    • openid (Delegated)
    • Grant admin consent for your tenant
  • Under Manage / Expose an API

    • Define the scopes Spec.Read and Spec.Write, which the client applications will request later

Register AAD-App ASP.NET Core App

  • Name: b2ctestspec

  • Supported account type: Accounts in any identity provider or organizational directory (for authenticating users with user flows)

  • Redirect URI (Web): https://b2ctestspec.azurewebsites.net/.auth/login/aad/callback (Enter your WebApp URI there and append /.auth/login/aad/callback, which is the path EasyAuth of Azure App Service will require!)

  • Under Overview note the Application (Client) ID (cf6d********d2ee in my case)

  • Under Manage / Authentication / Implicit Grant

    • Check "Access Tokens"
    • Check "ID Tokens"
  • Under Manage / Certificates & secrets / Client Secrets

    • Create a new secret and keep it (.dSz********** in my case)
  • Under Manage / API permissions add the following permissions if not present (do not forget to GRANT CONSENT afterwards)

    • offline_access (Delegated)
    • openid (Delegated)
    • From My Apis select b2ctestspecapi (app we created earlier) and add both permissions:
      • Spec.Read
      • Spec.Write
    • Grant admin consent for your tenant

Register AAD-App SPA-App

  • Name: spademo

  • Supported account type: Accounts in any identity provider or organizational directory (for authenticating users with user flows)

  • Redirect URI (Single-page application, SPA): https://b2cspaspec.azurewebsites.net/ (We will use MSAL here)

  • Under Overview note the Application (Client) ID (9192********fe5 in my case)

  • Under Manage / Authentication / Platform Configurations

  • Under Manage / Authentication / Implicit Grant

    • Check "Access Tokens"
    • Check "ID Tokens"
  • Under Manage / API permissions add the following permissions if not present (do not forget to GRANT CONSENT afterwards)

    • offline_access (Delegated)
    • openid (Delegated)
    • From My Apis select b2ctestspecapi (app we created earlier) and add both permissions:
      • Spec.Read
      • Spec.Write
    • Grant admin consent for your tenant

Test User Flow and create Test-User

Again navigate to the AAD B2C Overview page and select the B2C_1_signupin policy under Policies / User flows. Click on the Run user flow button and select the spademo application. As reply URL choose https://jwt.ms so we can examine the result. Select no access token resources for now. Click Run user flow.

You should see a sign-in/sign-up dialog. Create a new user with one of your email addresses. Once you hit create you should be redirected to https://jwt.ms where you can examine your auth token.

Secure Web-API and ASP.NET Core App with Azure App Easy Auth

Since the best security code is the code you never write, we want to make use of Azure App Service's Easy Auth to protect our ASP.NET Core app and service.

For this switch back into your regular tenant (where the Azure Apps are located). To view all App Services in your tenant click this link.

Secure B2CTestSpecAPI AppService

First we configure EasyAuth for our API app service. Then we create a simple api application that we deploy there.

Navigate to your b2ctestspecapi App Service.

Under Settings – Authentication/Authorization enable App Service Authentication. Also enable the Token Store (at the end of the page). Select Login with Azure Active Directory as action. This will ensure that no unauthenticated call is reaching your application.

Select Azure Active Directory from the list of authentication providers. Under Management mode click on Advanced.

  • ClientId: af9c************0f66
  • Issuer URL: https://yourtenantname.b2clogin.com/yourtenantname.onmicrosoft.com/B2C_1_signupin/v2.0/.well-known/openid-configuration (We had this url from the Endpoints if you remember)
  • Client Secret: .-OX*************

Create a simple ASP.NET Core app AADDemoAPI in Visual Studio, which will automatically have the WeatherForecast API implemented. Deploy this service to the Azure App Service.

Test the authentication by navigating to https://b2ctestspecapi.azurewebsites.net which should require a login. Once you have logged in you should be able to examine the tokens by accessing: https://b2ctestspecapi.azurewebsites.net/.auth/me

Secure B2CTestSpec AppService

Navigate to your b2ctestspec App Service.

Under Settings – Authentication/Authorization enable App Service Authentication. Also enable the Token Store (at the end of the page). Select Login with Azure Active Directory as action. This will ensure that no unauthenticated call reaches your application. If you want to serve some pages without authentication, choose the other option. In that case, however, you need to check in your code whether you are authenticated or not! If not, you have to make sure to redirect the user to the login page.

Select Azure Active Directory from the list of authentication providers. Under Management mode click on Advanced.

  • ClientId: cf6d************a2ee
  • Issuer URL: https://yourtenantname.b2clogin.com/yourtenantname.onmicrosoft.com/B2C_1_signupin/v2.0/.well-known/openid-configuration (We had this url from the Endpoints if you remember)
  • Client Secret: .dSz*************
  • Allowed Token Audiences:

This should do the trick, right? Well, as long as we only want to authenticate, this will work. But if we also want an access token that can access our Web API, we need to do more.

Right now there is no configuration in the UI where you can specify the required permission scopes. To request the scopes you would have to redirect users to the login URL with the scopes appended manually (see the „Explicit“ link in the view further below).

There is however a workaround to this that I stumbled upon in the documentation. Using the management API you can set the default scopes for EasyAuth. If you want to use a UI to do that, use https://resources.azure.com and navigate to subscriptions/yoursubscription/resourceGroups/yourResourceGroup/providers/Microsoft.Web/sites/yourSite/config/authsettings. Under the entry additionalLoginParams add:

  • resource: Client ID of the Web API you want to call
  • scope: add the web service scopes to openid and offline_access
"additionalLoginParams": [
	  "resource=af9c************0f66",
	  "scope=openid offline_access https://spectologicb2crtm.onmicrosoft.com/specapi/Spec.Read https://spectologicb2crtm.onmicrosoft.com/specapi/Spec.Write"
	]

Let’s now implement an ASP.NET Core application that we can deploy to this Azure App Service and that will call the API web service we created earlier.

In Visual Studio 2019 create a new ASP.NET Core MVC app named AADDemoWF and implement the Index method of HomeController.cs:

// _client is assumed to be a shared HttpClient instance, e.g.:
// private static readonly HttpClient _client = new HttpClient();
public async Task<IActionResult> Index()
{
    ViewData["X-MS-TOKEN-AAD-ID-TOKEN"] = Request.Headers["X-MS-TOKEN-AAD-ID-TOKEN"];
    ViewData["X-MS-TOKEN-AAD-ACCESS-TOKEN"] = Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"];
    ViewData["X-MS-TOKEN-AAD-EXPIRES-ON"] = Request.Headers["X-MS-TOKEN-AAD-EXPIRES-ON"];
    ViewData["X-MS-TOKEN-AAD-REFRESH-TOKEN"] = Request.Headers["X-MS-TOKEN-AAD-REFRESH-TOKEN"];
 
    string accessToken = Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"];
    try
    {
        string url = "https://b2ctestspecapi.azurewebsites.net/weatherforecast";
        HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Add("Authorization", $"Bearer {accessToken}");
        var response = await _client.SendAsync(request);
        ViewData["SERVICERESP"] = await response.Content.ReadAsStringAsync();
 
    }
    catch (Exception ex)
    {
        ViewData["SERVICERESP"] = ex.ToString();
    }
    return View();
}

Also adapt the index.cshtml page to visualize the tokens and the service response.

@{
    ViewData["Title"] = "Home Page";
}

<div class="text-center">
<h1 class="display-4">Auth Tokens</h1>
<div>X-MS-TOKEN-AAD-ID-TOKEN: @ViewData["X-MS-TOKEN-AAD-ID-TOKEN"]</div><br />
<div>X-MS-TOKEN-AAD-ACCESS-TOKEN: @ViewData["X-MS-TOKEN-AAD-ACCESS-TOKEN"]</div><br />
<div>X-MS-TOKEN-AAD-EXPIRES-ON: @ViewData["X-MS-TOKEN-AAD-EXPIRES-ON"]</div><br />
<div>X-MS-TOKEN-AAD-REFRESH-TOKEN: @ViewData["X-MS-TOKEN-AAD-REFRESH-TOKEN"]</div><br />
<div>SERVICERESP: @ViewData["SERVICERESP"]</div><br />

<a href="https://b2ctestspec.azurewebsites.net/.auth/login/aad?p=&post_login_redirect_uri=/&scope=openid+offline_access+https%3A%2F%2Fspectologicb2crtm.onmicrosoft.com%2Fspecapi%2FSpec.Read&redirect_uri=https%3A%2F%2Fb2ctestspec.azurewebsites.net%2F">Explicit</a>
</div>

Deploy this application to the App Service and you should be able to access this site after logging in through AAD B2C.

Secure the SPA Application

In this case I am using a sample provided by Microsoft that we will adapt to our requirements. Open a terminal and clone Microsoft's sample.

First we will change app/authConfig.js. If you do not request all required resource permission scopes during login, the silent token acquisition will fail and fall back to an interactive authentication that shows a brief popup. Since the user is already logged on, this just results in a brief flicker. To avoid this we add all base scopes in the initial login.

// Config object to be passed to Msal on creation.
// For a full list of msal.js configuration parameters, 
// visit https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md

const msalConfig = {
    auth: {
        clientId: "9192**************0fe5",
        authority: "https://spectologicb2crtm.b2clogin.com/spectologicb2crtm.onmicrosoft.com/B2C_1_signupin",
        knownAuthorities: [ 'spectologicb2crtm.b2clogin.com' ],
        redirectUri: "https://b2cspaspec.azurewebsites.net/",
    },
    ... 

// Add here the scopes that you would like the user to consent during sign-in
const loginRequest = {
    scopes: ["openid","offline_access","https://spectologicb2crtm.onmicrosoft.com/specapi/Spec.Read"]
};
 
// Add here the scopes to request when obtaining an access token for MS Graph API
const tokenRequest = {
    scopes: ["User.Read", "Mail.Read"],
    forceRefresh: false // Set this to "true" to skip a cached token and go to the server to get a new token
};
 
const weatherTokenRequest = {
    scopes: ["https://spectologicb2crtm.onmicrosoft.com/specapi/Spec.Read"],
    forceRefresh: false // Set this to "true" to skip a cached token and go to the server to get a new token
};

Change app/graphConfig.js and add weatherConfig there:

// Add here the endpoints for MS Graph API services you would like to use.
const graphConfig = {
    graphMeEndpoint: "https://graph.microsoft.com/v1.0/me",
    graphMailEndpoint: "https://graph.microsoft.com/v1.0/me/messages",
};
...
// Add here the endpoints for Weather services you would like to use.
const weatherConfig = {
    weatherEndpoint: "https://b2ctestspecapi.azurewebsites.net/weatherforecast"
};

Add a function callWeatherAPI to app/graph.js. The code just makes a request to a given endpoint with a bearer token.

function callWeatherAPI(endpoint, token, callback) {
    console.log(endpoint);
    console.log(token);
    const headers = new Headers();
    const bearer = `Bearer ${token}`;
 
    headers.append("Authorization", bearer);
 
    const options = {
        method: "GET",
        headers: headers
    };
 
    console.log('request made to Weather API at: ' + new Date().toString());
 
    fetch(endpoint, options)
        .then(response => response.json())
        .then(response => callback(response, endpoint))
        .catch(error => console.log(error));
}

Change app/authPopup.js and adapt the seeProfile method that is called once a user clicks on the Profile button:

...
function seeProfile() {
    getTokenPopup(weatherTokenRequest).then(response => {
        callWeatherAPI(
            weatherConfig.weatherEndpoint, 
            response.accessToken, 
            updateUI);
    }).catch(error => {
        console.error(error);
    });
}
...

The method above will call the updateUI function after a successful call to the web service. We change the implementation in app/ui.js:

...
function updateUI(data, endpoint) {
    console.log('Graph API responded at: ' + new Date().toString());
 
    if (endpoint === weatherConfig.weatherEndpoint) {
        const dataElement = document.createElement('p');
        dataElement.innerHTML = "<strong>Data: </strong>" + JSON.stringify(data);
        profileDiv.appendChild(dataElement);
    } else if (endpoint === graphConfig.graphMeEndpoint) {
        const title = document.createElement('p');
...

We are almost done now. We need to change Server.js to serve the right port for Azure App Service:

// app.listen(port); Replace this line with the next
app.listen(process.env.PORT||port);

Check in all changes to master in this git repository!

Now deploy our git repository to our NODE Azure App Service by using the credentials we created earlier.

Now you can try your service at https://b2cspaspec.azurewebsites.net/

That’s it for today! Have a great day!

AndiP

Accessing AAD OAuth2 protected API from PHP

Overview

What are we building today?

We have a Web API running in an Azure App Service that is protected with Azure Active Directory (AAD). We want to access this web service from a PHP website. In our setup the PHP website runs in a local Docker container. The first and foremost goal is to show how we can get our hands on the access token from our PHP application.

  1. Request the Index.php through http://localhost:90
  2. Use the OAuth2 library thenetworg/oauth2-azure to request an access token
  3. Authenticate against the app phpDemo in AAD and request access to the resource weatherapi
  4. Return the access token
  5. Return access token. Here you could call the web service with the returned bearer token
  6. Render the token on the website
  7. Use Postman to call the protected webservice

The following sections contain step-by-step guidance on how to implement this scenario.

Creating the API-Application

Open Visual Studio 2019 and create a new „ASP.NET Core Web App“

  • Name: weatherapi
  • Platform: .NET Core / ASP.NET Core 3.1
  • Template: API

In the solution explorer right click the project – Publish…

  • Azure
  • Azure App Service (Windows)
  • Select your subscription
  • + Create a new Azure App Service …
    • Name: weatherapi (select a unique name)
    • Subscription: Your Subscription
    • Resource Group: New… (phpdemo)
    • Hosting Plan: New… (testplan)

    • Click Create to create the Azure Resources
  • Select the web app and click „Finish“
  • Click Publish to publish the WebApp

Now you should be able to call your webservice under following URL: https://weatherapi.azurewebsites.net/weatherforecast

Create AAD App for ASP.NET API with weather.read

Register an AAD application for the weatherapi service (Azure Active Directory / App registrations / + New registration). To expose an application permission „weather.read“, edit the app registration's manifest and add an appRoles entry like this:

"appRoles": [
{
"allowedMemberTypes": ["Application" ],
"description": "Weather API",
"displayName": "Weather API",
"id" : "4B*************08",
"isEnabled": true,
"lang" : null,
"origin": "Application",
"value": "weather.read"
}
],

Securing the API Application with Easy Auth

  • Navigate to Azure App Services
  • Select your weatherapi Application
    • Under Settings click Authentication / Authorization
    • Turn App Service Authentication ON
      • In the drop down select „Log in with Azure Active Directory“
      • Click „Azure Active Directory“
      • Select Advanced
      • Copy ClientID to Client ID field
      • Copy „OpenID Connect metadata document“ to Issuer Url
      • Click OK
    • Save

Take care with the Issuer URL, because your application might break if you use JWT tokens with a „wrong“ issuer. Version 1.0 JWT tokens have a different issuer than version 2.0 JWT tokens.

Open the  OpenID Connect Metadata Url in a browser and locate the „issuer“ element. It will be set to:

If you use V1 tokens you need to set the Issuer in Easy Auth to

The „iss“ element in the tokens must match the issuer configured in EasyAuth! Read more on Access Tokens V1/V2 here!

Create AAD App for PHP Application

  • Open https://portal.azure.com
  • Navigate to Azure Active Directory / App registrations
  • + New registration
    • Name: phpdemo
    • Account Types: Single Tenant
    • Redirect URI: Web – http://localhost:90
    • Register
  • Write down ClientID. In my case it is: c9e*****9ea
  • Click on the button „Endpoints“
  • Click on Manage – Certificates & secrets
    • + New client secret
      • Name: mysecret
      • Expiry: 2 Years
      • Copy down the Secret (we need that later). In my case its: AKX**********
  • Click on Manage – API Permissions
    • + Add a permission
      • Select „My APIs“-Tab
      • Select „weatherapi“
      • Select „Application permissions“
      • Select „weather.read“
      • Add Permissions
    • Note the orange exclamation sign! Use „Grant admin consent for…“ button next to „+ Add permission“ and confirm with Yes.

Creating the PHP Application

Since I did not want to install all PHP stuff on my machine I decided to develop my example in a docker container. Thanks to the TruthSeekers who wrote this excellent post on how to do exactly that. You can clone the source code here.

After I cloned the git repository I had to adapt my docker-compose.yml file slightly, since ports 80 / 8080 are already consumed on my machine.

...
ports:
  - 90:80
...
ports:
  - 9090:8080
...

Then I simply ran

  • docker-compose up -d

which brought up all containers, including SQL, which we won't be needing now. Since we need some OAuth libraries for PHP, I decided to use the package manager tool composer.

I did a quick „docker container list“ to figure out the php container.

Then with

  • docker exec -it php-docker-simple_php_1 bash

I connect to the bash of the running container.

Execute the following statements to install composer:

apt-get update
apt-get install zip unzip
cd ~
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('sha384', 'composer-setup.php') === '795f976fe0ebd8b75f26a6dd68f78fd3453ce79f32ecb33e7fd087d39bfeb978342fb73ac986cd4f54edd0dc902601dc') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
php composer-setup.php
php -r "unlink('composer-setup.php');"

As OAuth2 library I want to use „thenetworg/oauth2-azure“ which can be found here.
It is based on the OAuth2 Library „thephpleague/oauth2-client“ which can be found here.

Then install this package into the sample, which resides in /var/www/html:

cd /var/www/html
php ~/composer.phar require thenetworg/oauth2-azure

Exit to the terminal and start VS-Code in the local directory or any other editor of your choice

For our sample we need following information which we collected earlier:

  • Client ID of the phpdemo app
  • ClientSecret of the phpdemo app
  • redirectUri of phpdemo app
  • aad tenant id
  • OAuth 2.0 Authorize Url
  • OAuth 2.0 Token Url
  • Client ID of the weatherapi app

We add all these values in the following code fragment, which initializes the OAuth provider for Azure AAD:

$provider = new TheNetworg\OAuth2\Client\Provider\Azure([
    'clientId' => 'c9e*****9ea',
    'clientSecret' => 'AKX********',
    'redirectUri' => 'http://localhost:90',
    'tenant' => 'e2*******3d',
    'urlAuthorize' => 'https://login.microsoftonline.com/e2*******3d/oauth2/v2.0/authorize',
    'urlAccessToken' => 'https://login.microsoftonline.com/e2*******3d/oauth2/v2.0/token',
    'urlAPI' => 'b37*******b02',
    'scope' => 'b37*******b02/.default'
]);
$provider->defaultEndPointVersion = TheNetworg\OAuth2\Client\Provider\Azure::ENDPOINT_VERSION_2_0;

We can now retrieve the access token with the following code:

$accessToken = $provider->getAccessToken('client_credentials', [
    'scope' => $provider->scope
]);

Here is the complete code of src/index.php to get and display the access token, which we can then use to make a request to our API service:

<?php
echo "OAuth Sample<br/>";
/* Make sure we include necessary libraries */
require_once(__DIR__."/vendor/autoload.php");
/* Initialize provider */
$provider = new TheNetworg\OAuth2\Client\Provider\Azure([
    'clientId' => 'c9e*****9ea',
    'clientSecret' => 'AKX********',
    'redirectUri' => 'http://localhost:90',
    'tenant' => 'e2*******3d',
    'urlAuthorize' => 'https://login.microsoftonline.com/e2*******3d/oauth2/v2.0/authorize',
    'urlAccessToken' => 'https://login.microsoftonline.com/e2*******3d/oauth2/v2.0/token',
    'urlAPI' => 'b37*******b02',
    'scope' => 'b37*******b02/.default'
]);
$provider->defaultEndPointVersion = TheNetworg\OAuth2\Client\Provider\Azure::ENDPOINT_VERSION_2_0;

try {
    $accessToken = $provider->getAccessToken('client_credentials', [
        'scope' => $provider->scope
    ]);
} catch (\League\OAuth2\Client\Provider\Exception\IdentityProviderException $e) {
    // Failed to get the access token
    exit($e->getMessage());
}
echo "Access-Token:";
echo $accessToken;

Now we can call http://localhost:90 on our machine and retrieve the JWT bearer token.
Open Postman and make a GET request to the weatherforecast endpoint of our API (https://weatherapi.azurewebsites.net/weatherforecast).

In the Authorization tab select type „Bearer Token“ and paste the access token into the token field.
If you hit Send now, you will retrieve the results of the weather forecast.
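
If you would rather script this call than use Postman, a minimal C# sketch could look like this (the URL is the one from above; the token is a placeholder for the one printed by index.php):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class WeatherApiCall
{
    static async Task Main()
    {
        // Paste the access token printed by index.php here
        string accessToken = "<paste JWT bearer token>";

        using HttpClient client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        string json = await client.GetStringAsync(
            "https://weatherapi.azurewebsites.net/weatherforecast");
        Console.WriteLine(json);
    }
}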

That’s it. Thanks
Andreas

Metrics in CosmosDB (DocumentDB-Focus)

Update 19.11.2018: Check out my experimental implementation of a Metrics-Client Library for CosmosDB on GitHub.

With Azure Monitor you have a single way to access metrics for a lot of supported Azure resources. My personal guess on how Azure Monitor is implemented is that each Azure resource provider (like Microsoft.DocumentDB) built an add-on provider „microsoft.insights“ that is used by Azure Monitor.

Azure Monitor only supports these metrics for CosmosDB. The Azure Monitor REST API documentation to query metrics describes the input and output values.

Of these, I found the following most relevant for the DocumentDB API:
– Consumed request units: TotalRequestUnits (Unit:Count, Aggregation: count/total)
– Number of requests made: TotalRequests (Unit:Count, Aggregation: count/total)

which can be filtered by
– DatabaseName
– CollectionName
– Region
– StatusCode

Update: During development of the sample I noticed that the documentation is quite outdated. The new version „2018-01-01“ supports additional metrics that are not documented on the page above. Here are the DocumentDB-relevant ones:
– AvailableStorage
– DataUsage
– DocumentCount
– DocumentQuota
– IndexUsage
– ReplicationLatency
– ServiceAvailability

(Un)Fortunately CosmosDB provides other interesting metrics as well that cannot be retrieved via Azure Monitor. You might have noticed additional metric data in the metrics blade of CosmosDB, like:

  • Available Storage
  • Data Size
  • Index Size
  • Max RUs per Second (filtered on a collection's partition in a specific region)
  • Observed Read Latency
  • Observed Write Latency

These and more metrics can be retrieved directly from the CosmosDB Azure resource provider. This was/is the „old way“ to retrieve metrics before the arrival of Azure Monitor, if I got that right. The reference chapter describes all the various metrics you can consume from the CosmosDB resource provider.

While Azure Monitor provides a nice nuget package (enable preview!) that you can use to access Azure Monitor metrics, you need to work with the REST-API to access the other metrics.

In this article I will focus on the DocumentDB metrics retrieved by REST (also the Azure Monitor ones). You can find an Azure Monitor sample with .NET Core here.
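
Just as a rough illustration of such a REST call, here is a minimal C# sketch using HttpClient. It assumes you obtain an ARM token via Azure.Identity (DefaultAzureCredential) and that the subscription, resource group and account names are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class MetricsRestSketch
{
    static async Task Main()
    {
        // Acquire a token for the ARM endpoint (assumes the identity has read access)
        var credential = new DefaultAzureCredential();
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }), default);

        // Placeholder resource path of the Cosmos DB account
        string resourceUri = "subscriptions/<subscriptionId>/resourceGroups/<rgName>/" +
                             "providers/Microsoft.DocumentDb/databaseAccounts/<accountName>";

        // Azure Monitor metrics request (TotalRequests, 1 minute interval)
        string url = $"https://management.azure.com/{resourceUri}/providers/microsoft.insights/metrics" +
                     "?timespan=2018-10-10T07:57:00.000Z/2018-10-10T08:57:00.000Z" +
                     "&interval=PT1M&metricNames=TotalRequests&aggregation=total&api-version=2018-01-01";

        using HttpClient client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", token.Token);

        string json = await client.GetStringAsync(url);
        Console.WriteLine(json);
    }
}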

Structure of Uri(s) for the Metric REST interfaces

The {resourceUri} is actually a path to the requested Azure resource. Azure Monitor basically always uses the path to the CosmosDB account. If you work directly with the Azure resource provider of CosmosDB you need the other paths as well:

Resource Uri – Definition {resourceUri}

Resource Uri -> Database Account

This path is basically used whenever we work with Azure Monitor REST-API
– subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.DocumentDb/databaseAccounts/{cosmosDBAccountName}

Resource Uri -> DocumentDB Database
– subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.DocumentDb/databaseAccounts/{cosmosDBAccountName}/databases/{databaseResourceId}

Resource Uri -> DocumentDB Collection
(Mostly used in Azure resource Metric queries)
– subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.DocumentDb/databaseAccounts/{cosmosDBAccountName}/databases/{databaseResourceId}/collections/{collectionResourceId}

Resource Uri -> DocumentDB Collection Partition in a specific region
– subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.DocumentDb/databaseAccounts/{cosmosDBAccountName}/region/{regionName}/databases/{databaseResourceId}/collections/{collectionResourceId}/partitions

Region Names {regionName}

You can find out in which regions your CosmosDB is available by querying the ARM REST API of the CosmosDB account Azure resource. Use the „Resources Get REST API“. For CosmosDB you find the documentation on how to retrieve the details of the CosmosDB account resource here.

The documentation misses additional values in „properties“. While „enableAutomaticFailover“ and „enableMultipleWriteLocations“ (multi-master) are quite easy to guess, I have no idea what „capabilities“ and „configurationOverrides“ will contain (maybe other APIs?):
– capabilities: []
– configurationOverrides: {}
– enableAutomaticFailover: false
– enableMultipleWriteLocations: false

Here is a non-exhaustive list of potential regions:

  • North Europe
  • West Europe

Request Examples

CosmosDB Resource Provider GET metrics sample

split across multiple rows for better readability
This request will fetch „Available Storage“, „Data Size“ and „Index Size“ in the time frame
2018-10-10T06:55:00.000Z to 2018-10-10T07:55:00.000Z with a 5-minute interval (PT5M). Since the resource URI path points to a specific collection, only the data of this collection is retrieved!

    https://management.azure.com/
    subscriptions/12a34456-bc78-9d0e-fa1b-c2d3e4567890/resourceGroups/demoRG/
    providers/Microsoft.DocumentDb/databaseAccounts/demodocs/
    databases/6XAQAA==/collections/6XAQAITyPQA=/metrics?
    api-version=2014-04-01
    &$filter=
        (
        name.value eq 'Available Storage' or 
        name.value eq 'Data Size' or 
        name.value eq 'Index Size'
        ) and 
        startTime eq 2018-10-10T06%3A55%3A00.000Z and 
        endTime eq 2018-10-10T07%3A55%3A00.000Z and 
        timeGrain eq duration'PT5M'

Azure Monitor GET metrics sample

split across multiple rows for better readability

This request will fetch the „TotalRequests“ metric within the timespan from 10 Oct 2018 07:57 to 10 Oct 2018 08:57 (one hour). The result will be delivered in 1-minute intervals (PT1M). In this case we want all databases, collections, regions and status codes.

    https://management.azure.com/
    subscriptions/12a34456-bc78-9d0e-fa1b-c2d3e4567890/resourceGroups/demoRG/
    providers/Microsoft.DocumentDb/databaseAccounts/demodocs/
    providers/microsoft.insights/metrics?
    timespan=2018-10-10T07:57:00.000Z/2018-10-10T08:57:00.000Z
    &interval=PT1M
    &metric=TotalRequests
    &aggregation=total
    &$filter=DatabaseName eq '*' and CollectionName eq '*' and Region eq '*' and StatusCode eq '*'
    &api-version=2017-05-01-preview

The azure portal on the CosmosDB metrics blade currently uses this API-Version: 2017-05-01-preview. There is a more recent one „2018-01-01“. To get the supported API-Versions send in a wrong one :-).

Note that the new version requires „metricNames“ instead of „metric“!

    https://management.azure.com/
    subscriptions/12a34456-bc78-9d0e-fa1b-c2d3e4567890/resourceGroups/demoRG/
    providers/Microsoft.DocumentDb/databaseAccounts/demodocs/
    providers/microsoft.insights/metrics?
    timespan=2018-10-10T07:57:00.000Z/2018-10-10T08:57:00.000Z
    &interval=PT1M
    &metricNames=TotalRequests
    &aggregation=total
    &$filter=DatabaseName eq '*' and CollectionName eq '*' and Region eq '*' and StatusCode eq '*'
    &api-version=2018-01-01

Other intervals:
– PT1M (1 minute)
– PT5M (5 minutes)
– PT1H (1 hour)
– PT1D (1 day)
– P7D (7 days)

Azure CosmosDB requests the Azure Portal uses for the metrics blade

Overview TAB

Number of requests (aggregated over 1 minute interval)

  • TYPE: Azure Monitor (microsoft.insights provider)
  • ResourceUri -> Database Account
  • API-Version -> 2017-05-01-preview
  • timespan -> 2018-10-10T07%3A57%3A00.000Z/2018-10-10T07%3A58%3A00.000Z
  • metric -> TotalRequests
  • aggregation -> total
  • interval -> PT1M
  • $filter ->
    DatabaseName eq 'databaseName' and 
    CollectionName eq '*' and 
    Region eq '*' and 
    StatusCode eq '*'

Number of requests (counted over 1 minute interval)

  • TYPE: Azure Monitor (microsoft.insights provider)
  • ResourceUri -> Database Account
  • API-Version -> 2017-05-01-preview
  • timespan-> 2018-10-10T06%3A58%3A00.000Z/2018-10-10T07%3A58%3A00.000Z
  • metric-> TotalRequests
  • aggregation-> count
  • interval-> PT1M
  • $filter ->
    DatabaseName eq 'databaseName' and 
    CollectionName eq 'colName' and 
    StatusCode eq '*'

Data and Index storage consumed

  • TYPE: Azure Resource Provider Metric
  • ResourceUri -> DocumentDB Collection
  • API-Version -> 2014-04-01
  • $filter->
    (
    name.value eq 'Available Storage' or 
    name.value eq 'Data Size' or 
    name.value eq 'Index Size'
    ) and 
    endTime eq 2018-10-10T07%3A55%3A00.000Z and 
    startTime eq 2018-10-10T06%3A55%3A00.000Z and 
    timeGrain eq duration'PT5M'

Documentation for fetching metrics from the Collection:

Max consumed RU/s per partition key range

  • TYPE: Azure Resource Provider Metric
  • ResourceUri -> DocumentDB Collection
  • API-Version -> 2014-04-01
  • $filter->
    (
        name.value eq 'Max RUs Per Second'
    ) and 
    endTime eq 2018-10-10T07%3A58%3A00.000Z and 
    startTime eq 2018-10-10T06%3A58%3A00.000Z and 
    timeGrain eq duration'PT1M'

Depending on the given resourceUri path the result will vary. The portal uses these three combinations of ResourceUri(s):

  • DocumentDB Database
  • DocumentDB Collection
  • DocumentDB Collection Partition in a specific region

You can find the respective documentation here:

For the „DocumentDB Collection Partition in a specific region“ query, the documented „partitionId“ value was missing in my results; I only got „partitionKeyRangeId“. I also got a „region“ value for each entry in my value array. The portal uses the MAX value of all retrieved metric values to display the max RUs for a partition.

Throughput TAB

Number of requests (aggregated over 1 minute interval)

see next section, uses the results from the same query

Number of requests exceeded capacity (aggregated over 1 minute interval) Status:429

This request had already been used in the „Overview“ tab as well. The result is basically grouped by database, collection and status code, so we can filter on the 429 requests to get the result we need.

  • TYPE: Azure Monitor (microsoft.insights provider)
  • ResourceUri -> Database Account
  • API-Version -> 2017-05-01-preview
  • timespan-> 2018-10-10T08%3A30%3A00.000Z/2018-10-10T09%3A30%3A00.000Z
  • metric-> TotalRequests
  • aggregation-> count
  • interval-> PT1M
  • $filter ->
    DatabaseName eq '*' and 
    CollectionName eq '*' and 
    StatusCode eq '*'

The generic result structure is documented here.

Within the value, the metric ID will be „/subscriptions/12a34456-bc78-9d0e-fa1b-c2d3e4567890/resourceGroups/myRG/providers/Microsoft.DocumentDb/databaseAccounts/cosmosDBAccount/providers/Microsoft.Insights/metrics/TotalRequests“.

The timeseries array contains entries that basically represent the groups (by database name, collection name and status code). Each group contains all the values for that group (in this case 60 items, one per minute, PT1M). The metadatavalues.name will be one of the following (see the parsing sketch after this list):

  • collectionname
  • databasename
  • statuscode
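
Here is a minimal parsing sketch (assuming the response JSON is already in a string variable and using Newtonsoft.Json) that walks the timeseries groups and their metadata values:

using System;
using Newtonsoft.Json.Linq;

static void DumpTimeSeries(string metricsJson)
{
    JObject root = JObject.Parse(metricsJson);

    foreach (JToken metric in root["value"])
    {
        Console.WriteLine($"Metric: {metric["name"]?["value"]}");

        foreach (JToken series in metric["timeseries"])
        {
            // Each timeseries entry represents one group (database, collection, status code)
            foreach (JToken meta in series["metadatavalues"])
                Console.Write($"{meta["name"]?["value"]}={meta["value"]} ");
            Console.WriteLine();

            // One data point per interval (e.g. 60 entries for PT1M over one hour)
            foreach (JToken point in series["data"])
                Console.WriteLine($"  {point["timeStamp"]}: count={point["count"]} total={point["total"]}");
        }
    }
}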

End to end observed read latency at the 99th percentile – metrics – Azure Resource

Latency for a 1KB document lookup operation observed in North Europe in the 99th percentile

  • TYPE: Azure Resource Provider Metric
  • ResourceUri -> Database Account
  • API-Version -> 2014-04-01
  • $filter->
    (
        name.value eq 'Observed Read Latency' or 
        name.value eq 'Observed Write Latency'
    ) and 
    endTime eq 2018-10-10T15%3A00%3A00.000Z and 
    startTime eq 2018-10-10T14%3A00%3A00.000Z and 
    timeGrain eq duration'PT5M'

I was really missing a single page that describes all the metric possibilities with CosmosDB. I hope this fills the gap.

Enjoy and have a great day!

AndiP

Adding Changefeed support to CosmosDB Datamigration Tool

In a production environment I would usually use Azure Functions, which use the Change Feed Processor Library internally, to continuously push changes in my CosmosDB to other destination(s).
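
For reference, here is a minimal sketch of such a function using the Cosmos DB trigger from the Microsoft.Azure.WebJobs.Extensions.CosmosDB package (which builds on the Change Feed Processor Library); the database, collection and connection setting names are just examples:

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ChangeFeedPushFunction
{
    [FunctionName("ChangeFeedPushFunction")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "demodb",
            collectionName: "democol",
            ConnectionStringSetting = "myCosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)]
        IReadOnlyList<Document> changedDocuments,
        ILogger log)
    {
        // Push each changed document to the destination(s) of your choice
        foreach (Document doc in changedDocuments)
            log.LogInformation($"Changed document: {doc.Id}");
    }
}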

However, for some small testing scenarios, demos and also some coding fun, I decided to add ChangeFeed support to the Azure CosmosDB data migration tool (link to original version).

So with this special version of the Azure CosmosDB Data Migration Tool you have additional options for the DocumentDB source available under „Advanced Options“:

Once you check „Use change feed of the collection“ you get the following options:

You can select where you want to start reading the change feed: either at the creation time of the collection or from a specific date. I admit I could have added a DateTime picker :-P.

At „File to read/store continuation tokens“ you can provide a file name to store the continuation tokens. If you re-run the wizard and provide the file again, only the new and updated documents will be processed.

Last but not least you need to specify whether you want to update the provided continuation token file with the new continuation tokens, which in most situations is desired.
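
Under the hood this boils down to reading the change feed with ChangeFeedOptions and persisting the continuation token, similar to what the modified tool does internally. A rough sketch of that pattern with the DocumentDB SDK (assuming an existing DocumentClient; the file name, partition key range id and start time are just examples):

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;

static async Task ReadChangeFeedAsync(DocumentClient client)
{
    Uri collectionUri = UriFactory.CreateDocumentCollectionUri("demodb", "democol");

    // Continuation token from a previous run (null = start at the configured start time)
    string continuation = File.Exists("tokens.txt") ? File.ReadAllText("tokens.txt") : null;

    var options = new ChangeFeedOptions
    {
        PartitionKeyRangeId = "0",            // one query per partition key range
        StartTime = new DateTime(2021, 1, 1), // ignored when a continuation token is provided
        RequestContinuation = continuation,
        MaxItemCount = 100
    };

    var query = client.CreateDocumentChangeFeedQuery(collectionUri, options);
    while (query.HasMoreResults)
    {
        FeedResponse<Document> response = await query.ExecuteNextAsync<Document>();
        foreach (Document doc in response)
            Console.WriteLine(doc.Id);

        // Persist the continuation token so the next run only sees new changes
        File.WriteAllText("tokens.txt", response.ResponseContinuation);
    }
}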

Thnx

AndiP

Cosmos DB access control with users and permissions

Overview

I have written a small lab on using user permissions with CosmosDB. This article dwells a bit on the scenarios where to use permissions instead of the master key, and how you can use a set of permissions to grant access to multiple documents, collections or partitions at once.

Scenarios for user/permissions resource keys

Before we dive into the details of „How we can work with users/permissions to access data in Cosmos DB“ let’s first discuss some scenarios where that can be applied.

In an ideal scenario we ensure that our authenticated user can access our services only through a limited set of API’s as shown in this simplified architecture:

In this scenario we have two options.
Option A: Use the master key and ensure access to the right data in the data services depending on the user's permissions.
Option B: Depending on the authenticated user, create a set of user permissions (correlating it either directly 1:1 to a Cosmos DB user, or using a user per tenant) and use these to connect to Cosmos DB in your data services.

In the following scenarios we want to enable direct access to Cosmos DB from applications and services that are not under our full control. We need to ensure that those clients can only access data they are supposed to.

Let’s say Vendor B needs very fast regional access to the data stored in Cosmos DB and wants to avoid the additional latency of constantly going through the business services provided by Vendor A.

In this case Vendor A can grant access to Vendor B’s specific data in Cosmos DB by limiting the access to a single document, single tenant collection or a specific partition within a collection.

Vendor B can use that token/permissions to access the database directly without having access to data of other tenants.

Another scenario might be a mobile application that should be able to fetch the users profile/data quickly directly globally from Cosmos DB.

In this case the application service could provide a resource token/permission to access the database directly. I personally do not like this scenario and would rather use a globally distributed API governed by API Management, because that way I have more control over how the user accesses the data (metrics, throttling, additional protection mechanisms). Such a scenario is described with a Xamarin Forms application in the documentation, including sample code.

Implementing Users/Permissions

Users are stored within the context of the database in Cosmos DB. Each user has a set of unique named permissions. To learn more about the structure within CosmosDB read this.

Each permission object consists of the following (a small creation sketch follows this list):

  • Token (string)… Access token to access cosmos db
  • Id (string)… unique permission id (for user)
  • PermissionMode… either Read or All
  • ResourceId (string)… ID of the resource the permission applies to
  • ResourceLink (string)… Self-link to the resource the permission applies to
  • ResourcePartitionKey (string)… Partition key of the resource the permission applies to
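
For illustration, here is a minimal sketch of creating a user and a read-only permission on a collection, restricted to one partition key value, with the DocumentDB SDK (database, collection and user names are just examples; the client is assumed to be connected with the master key):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

static async Task<Permission> CreateUserPermissionAsync(DocumentClient client)
{
    string dbName = "demodb";

    // Create (or update) the user within the database
    User user = (await client.UpsertUserAsync(
        UriFactory.CreateDatabaseUri(dbName),
        new User { Id = "apollak" })).Resource;

    // Read the collection to get its self-link
    DocumentCollection collection = (await client.ReadDocumentCollectionAsync(
        UriFactory.CreateDocumentCollectionUri(dbName, "democol"))).Resource;

    // Read-only permission on the collection, restricted to one partition key value
    Permission permission = new Permission
    {
        Id = "mydata",
        PermissionMode = PermissionMode.Read,
        ResourceLink = collection.SelfLink,
        ResourcePartitionKey = new PartitionKey("apollak")
    };

    return (await client.UpsertPermissionAsync(
        UriFactory.CreateUserUri(dbName, user.Id), permission)).Resource;
}

The Token property of the returned permission is the resource token you transfer to the client, as shown below.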

Once you have acquired a permission you need only to transfer the Token to the client that should access Cosmos DB. A token is represented as a string.

// token: "type=resource&ver=1&sig=uobSDos7JRdEUfj ... w==;"
string tokenToTransfer = permission.Token;

The client can create a connection to Cosmos DB using that token.

DocumentClient userClient = new DocumentClient(
    new Uri(docDBEndPoint), 
    transferedToken);

!Important! The token stays valid until it expires even if you delete the permission in Cosmos DB. The expiration time can be configured!

You can access multiple resources by providing a set of permissions. Use the constructor that allows passing a list of permissions.
Serializing and deserializing those permissions as JSON strings is a bit painful:

Serialization:

// Serializing to JSON
MemoryStream memStream = new MemoryStream();
somePermission.SaveTo(memStream);
memStream.Position = 0L;
StreamReader sr = new StreamReader(memStream);
string jsonPermission = sr.ReadToEnd();

The serialized permission looks like this:

{
    "permissionMode":"Read",
    "resource":"dbs/q0dwAA==/colls/q0dwAIDfWAU=/",
    "resourcePartitionKey":["apollak"],
    "id":"mydata",
    "_rid":"q0dwAHOyMQBzifcvvscGAA==",
    "_self":"dbs/q0dwAA==/users/q0dwAHOyMQA=/permissions/q0dw ... AA==/",
    "_etag":"\"00001500-0000-0000-0000-5aaa4a6b0000\"",
    "_token":"type=resource&ver=1&sig=uobSD ... 2kWYxA==;2L9WD ... Yxw==;",
    "_ts":1521109611
}

De-serialization:

memStream = new MemoryStream();
StreamWriter sw = new StreamWriter(memStream);
sw.Write(jsonPermission);
sw.Flush();
memStream.Position = 0L;
Permission somePermission = Permission.LoadFrom<Permission>(memStream);
sw.Close();

By adding multiple permissions to a list you can create a document client with access to multiple resources (e.g. 1..n documents, 1..n collections, 1..n partitions).

List<Permission> permList = new List<Permission>();
// adding two permissions
permList.Add(Permission.LoadFrom<Permission>(memStream));
permList.Add(lpollakDataPermission);
DocumentClient userClient = new DocumentClient(
    new Uri(docDBEndPoint,UriKind.Absolute), 
    permList);

Important things to know:
– If you restrict the partition with a permission, you MUST always set the partition key when accessing CosmosDB!
– Permission IDs must be unique for each user and must not be longer than 255 characters.
– Tokens expire after an hour by default. You can set an expiration from 10 minutes up to 24 hours. This is passed within the RequestOptions in seconds.
– Each time you read the permission from the permission feed of a user, a new token gets generated.

Example to customize the expiration time

lpollakDataPermission = await client.UpsertPermissionAsync(
    UriFactory.CreateUserUri(dbName, apollakUserid), 
    lpollakDataPermission, 
    new RequestOptions() { ResourceTokenExpirySeconds = 600});

Simple sample implementation as LAB

I have written a small LAB that you can use with CosmosDB Emulator to play with the permissions. There is a student version to start from and also a finished version. In the project folder you will find a „ReadMe.md“ file describing the steps. Download Cosmos DB Security Lab

Learn more about securing CosmosDB

Azure Functions CosmosDB Output binding with Visual Studio 2017

Azure Functions CosmosDB Output binding

Create a new Azure Function in Visual Studio 2017

  1. Start Visual Studio 2017
  2. CTRL/STRG + Q
    a. Type „extensions“
    b. In the extensions and updates dialog search under Online for „Azure Functions and WebJobs Tools“
    c. Install extension
    d. Restart Visual Studio
  3. Create a new Cloud/Azure Functions Project in your „Work“-Folder.
    a. Name = „CBFuncAppLab“
  4. Right click project and select „Manage Nuget-Packages“
    a. Upgrade Microsoft.Net.SDK.Functions to 1.0.6
    b. Search online package: „Microsoft.Azure.WebJobs.Extensions.DocumentDB“ – 1.1.0-beta4
    c. DO NOT INSTALL „Microsoft.Azure.DocumentDB“! This will result in a binding issue, as WebJobs SDK uses a different instance of „Document“ class (v. 1.13.2)!

  5. CTRL+SHIFT+A to add a new file – select „Azure Function“ (CTRL+E – function) under „Visual C# Items“
    a. Name the class „CosmosDBUpsertFunction.cs“
    b. Choose „Generic Webhook“ and click „OK“

Let’s upsert a single document to the collection

For this we need to modify the code like this

namespace CBFuncAppLab
{
    public static class CosmosDBUpsertFunction
    {
        [FunctionName("CosmosDBUpsertFunction")]
        public static object Run(
            [HttpTrigger(WebHookType = "genericJson")]
            HttpRequestMessage req,
            TraceWriter log,
            [DocumentDB("demodb", "democol",
                        CreateIfNotExists =true, 
                        ConnectionStringSetting = "myCosmosDBConnection")] 
                        out dynamic document)
        {
            log.Info($"Webhook was triggered!");
            var task = req.Content.ReadAsStringAsync();
            task.Wait();
            string jsonContent = task.Result;
            dynamic data = JsonConvert.DeserializeObject(jsonContent);
            if (data!=null)
            {
                document = data;
                return req.CreateResponse(HttpStatusCode.OK, new
                {
                    greeting = $"Will upsert document!"
                });
            }
            else
                return req.CreateResponse(HttpStatusCode.BadRequest, new
                {
                    error = "Document was empty!"
                });
        }
    }
}

Make sure you have declared all using statements

using System.Net;
using Newtonsoft.Json;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

Open ‚local.settings.json‘ and configure your connection string:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "",
    "myCosmosDBConnection": "AccountEndpoint=https://YOURACCOUNT.documents.azure.com:443/;AccountKey=MASTERKEY-READWRITE;"
  }
}

Either in the Azure Portal or with a tool like „DocumentDB Studio“ create a new collection „democol“ within the „demodb“ database in your CosmosDB account.

Let’s try it!

Start your Azure Function App in the Debugger and fire up a tool like „Postman“.
In the Output Console you can find the URL for your Webhook.

Use Postman to do a POST request on this URL with a JSON document as body (or use the small C# sketch below), for example:

{
    "id":"apollak",
    "Name":"Andreas"
}
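If you prefer code over Postman, a minimal C# sketch for sending that request could look like this (the URL is an assumption; copy the real one from the Functions console output):

// Minimal sketch: POST the JSON body to the locally running function.
// The URL below is an assumption, check your own console output.
using (var httpClient = new HttpClient())
{
    var body = new StringContent(
        "{ \"id\":\"apollak\", \"Name\":\"Andreas\" }",
        Encoding.UTF8, "application/json");
    HttpResponseMessage response = await httpClient.PostAsync(
        "http://localhost:7071/api/CosmosDBUpsertFunction", body);
    Console.WriteLine(response.StatusCode);
}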

Verify the output in the console window and check whether the document got written to your documents collection.

Let’s upsert multiple documents to the collection

Now let’s output multiple documents in one call. For this we use a parameter with the IAsyncCollector interface. For synchronous methods you can use ICollector instead. Alter the code like so:

[FunctionName("CosmosDBUpsertFunction")]
public static async Task<object> Run(
    [HttpTrigger(WebHookType = "genericJson")]
    HttpRequestMessage req, TraceWriter log,
    [DocumentDB("demodb", "democol", 
    CreateIfNotExists = true, 
    ConnectionStringSetting = "myCosmosDBConnection")] 
    IAsyncCollector<dynamic> documents)
{
    log.Info($"Webhook was triggered!");

    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);
    if (data!=null)
    {
        foreach(var document in data)
            await documents.AddAsync(document);

        return req.CreateResponse(HttpStatusCode.OK, new
        {
            greeting = $"Will upsert documents!"
        });
    }
    else
        return req.CreateResponse(HttpStatusCode.BadRequest, new
        {
            error = "No documents were provided!"
        });
}

Run this sample again and post an array of JSON documents instead. Verify that the function works and that the documents have been upserted to the CosmosDB collection.

[{
    "id":"apollak",
    "Name":"Andreas",
    "Likes":"Pizza"
},
{
    "id":"smueller",
    "Name":"Susi"
}]

Thnx and Enjoy
AndiP

Digging into CosmosDB storage

**updated section ‘Will it work the other way around’ on 25.07.2017 **

Well, not how the data is stored internally, but rather how CosmosDB seems to handle data that is stored and accessed via the Graph, Table or MongoDB API. Each of these collections/graphs that have been created with the newly available APIs can still be accessed with the “native” DocumentDB SQL API. To date it remains a mystery to me whether the CosmosDB team just uses the “classic” API to provide all these alternate APIs on top, or whether it uses some magic behind the scenes.

Please note that while accessing CosmosDB Graph/Table/MongoDB data with the DocumentDB SQL API is quite interesting, it is not something to use in production and probably not supported by Microsoft. Microsoft might change the way this data is stored in CosmosDB at any time, and your code might break.

The first three sections describe the Graph API, the Table API and the MongoDB API. The 4th section explains how you can change the visual portal experience in Azure, and in the 5th section I try to create documents with the DocumentDB API and query them with the Graph API.

Graph-API

To illustrate this on the Graph API, I created a simple graph by executing a series of commands inside the Graph Explorer. For this article I only use two vertices (one with an additional property) and one edge which connects both vertices (see the upper two vertices in the image):

  • g.addV('place').property('name','Rivendell');
  • g.addV('place').property('name','Elronds Haus').property('FoodRating','Delicious');
  • g.V().has('name','Rivendell').addE('path').to(V().has('name','Elronds Haus')).property('weight',1.0).property('level','difficult');

In parallel I use Azure DocumentDB Studio to connect to my graph with the regular DocumentDB SQL API. If we select “Elronds Haus” in Graph Explorer we can see the Vertex-ID and the properties of this vertex.

In Azure DocDB Studio we can now issue a query on the collection to reveal the vertex for “Elronds Haus”. To reduce complexity I removed the internal fields like _ts, _etag,_self,… in the images.

  • select * from c where c.id = "ae5668e1-0e29-4412-b3ea-a84b2eb68104"
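The query returns roughly the following document (a simplified sketch; the property ids are placeholders and the internal fields are omitted):

{
  "id": "ae5668e1-0e29-4412-b3ea-a84b2eb68104",
  "label": "place",
  "name": [
    { "_value": "Elronds Haus", "id": "<property-id>" }
  ],
  "FoodRating": [
    { "_value": "Delicious", "id": "<property-id>" }
  ]
}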

 

Id and Label of the vertex are just stored as normal JSON fields, but the properties are stored as a combination of _value and a unique property id field. The edge, interestingly, stores its properties differently and in a way that is easier to query with DocDB SQL.


While we can find all paths with a weight of 1 easily with

  • select * from c where c.label = "path" and c.weight = 1

we need to issue a more complex query to find a specific vertex by a property value. I’m not sure why they decided to store these properties as an array, but maybe this is required for some graph functionality I am not aware of yet.

  • SELECT * FROM c WHERE c.label = "place" AND c.name[0]._value = "Elronds Haus"
  • SELECT v FROM v JOIN c IN v.name WHERE v.label = "place" AND c._value = "Elronds Haus"

The edges themselves can be easily discovered by querying the “_isEdge” field. The linked vertices of an edge are stored in the following fields (a sketch of a complete edge document follows after the list):

  • _sink, _sinkLabel … Id and Label of the IN-vertex
  • _vertexId, _vertexLabel … Id and Label of the OUT-vertex
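Putting the pieces together, a complete edge document looks roughly like this (a sketch; the ids are placeholders and the internal fields are omitted):

{
  "id": "<edge-id>",
  "label": "path",
  "weight": 1,
  "level": "difficult",
  "_isEdge": true,
  "_sink": "<id of the 'Elronds Haus' vertex>",
  "_sinkLabel": "place",
  "_vertexId": "<id of the 'Rivendell' vertex>",
  "_vertexLabel": "place"
}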

In this video Rimma Nehme (@rimmanehme) mentioned at 37:20 that the SQL-Extensions will be available at a later point in time enabling you to query the graph with SQL instead of gremlin.

Table API

In this case I use the new Table API of CosmosDB. While you have the same namespace and the same API in .NET, you need to replace the old Storage NuGet package with a new one to make it work.

I created three instances of PersonEntity that derive from TableEntity, as you would expect with the Storage Table API. They store the race of the person as PartitionKey and a combination of first and last name as RowKey. As soon as you create a new table with the CreateIfNotExistsAsync() method, a new database “TablesDB” will be created in CosmosDB with a collection named after your table.

Keep in mind that the API will create a new collection for every table! It might be better from a cost perspective to store various kinds of documents in one collection!

As soon as we add the entities to the table we can see them in DocumentDB Studio. Because we used a combination of first and last name as RowKey, we can see that the RowKey represents the “id” field of the CosmosDB entry.

While you can now query more properties than just RowKey and PartitionKey, always use the PartitionKey to avoid costly partition scans! You could do a LINQ query like this:

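For illustration, a rough sketch of such a query (it assumes that PersonEntity exposes a FirstName property and that table is the CloudTable instance used above):

// Rough sketch: always filter on the PartitionKey to avoid a partition scan.
// "FirstName" is an assumed property of the PersonEntity class.
var bilbo = table.CreateQuery<PersonEntity>()
    .Where(p => p.PartitionKey == "Hobbit" && p.FirstName == "Bilbo")
    .ToList();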

Now let’s load one document in DocumentDB Studio and examine how the data is stored for the Table API. Again I removed all the internal properties like _ts, _rid, …

What instantly catches the eye is the use of the $ sign, which will cause some trouble when constructing DocDB SQL statements, as we will see. Like in the Graph API we have multiple fields defining a property. I find this more approachable, as the shorter naming reduces the size of the document (“_value” vs “$v”). The partition key is stored as $pk.

CosmosDB stores the type of the properties within its $t field, where 2 = string, 16 = integer and 1 = double.
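A stored person entity therefore looks roughly like this (a hedged sketch; the concrete values are made up and the internal fields are omitted):

{
  "id": "BilboBaggins",
  "$pk": "Hobbit",
  "FirstName": { "$t": 2, "$v": "Bilbo" },
  "LastName": { "$t": 2, "$v": "Baggins" }
}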

To query for Bilbo we need to escape the $ character in our query:

  • SELECT * from p where p['FirstName']['$v'] = 'Bilbo' and p['$pk'] = 'Hobbit'

To query the document with LINQ you need to build the entity like in the image (a sketch follows below the query). Then you create a typed LINQ query:

  • var queryable2 = client.CreateDocumentQuery<PersonT>(collection.SelfLink, feedOptions).Where(doc => (doc.FirstName.v == "Bilbo"));
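Here is a hedged sketch of what such a PersonT class could look like (class and property names are my assumptions):

// Sketch: maps the "$v" value fields so that the typed LINQ query above compiles.
public class ValueHolder
{
    [JsonProperty("$v")]
    public string v { get; set; }
}

public class PersonT
{
    [JsonProperty("id")]
    public string Id { get; set; }

    [JsonProperty("$pk")]
    public string PartitionKey { get; set; }

    public ValueHolder FirstName { get; set; }
}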

 

MongoDB

First I create two simple documents with the native MongoDB client. The second document also uses an ISODate type for date/time. You can see that MongoDB stores the ID as an ObjectId type.


There seem to be some issues with other BSON types though. For example there is an article mentioning some compatibility issues between SiteCore and the CosmosDB MongoDB API, and I believe it is related to the yet unsupported BSON type BsonInt32. As far as I have seen (I lost the article on the web), currently only ObjectId and ISODate are supported types in the CosmosDB MongoDB API.

Again, if we now examine those two documents in Azure DocumentDB Studio, we can see that the id is stored twice: first as “id” and second as [“_id”][“$oid”], which is yet another way to declare the data type of fields. The field of type ISODate is stored as an EPOCH value.
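A hedged sketch of how such a document appears through the DocumentDB SQL API (the values are invented and the exact shape may differ slightly):

{
  "id": "59707b02e378e20f4c2d7f9c",
  "_id": { "$oid": "59707b02e378e20f4c2d7f9c" },
  "Name": "Gandalf",
  "Arrival": 1500577200000
}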

 

 

Switching the portal experience

This will work with all APIs except MongoDB. The reason for this might be legacy. If you take a look at the ARM templates that create the Graph API, Table API, Mongo API and DocumentDB API accounts, you will notice that while CosmosDB with the Mongo API has its “kind” property set to MongoDB, all the others have it set to GlobalDocumentDB.


All other APIs rely on the tags collection within the resource definition. So to change the Table experience to the Graph experience, remove the “defaultExperience:Table” tag, add a new “defaultExperience:Graph” tag in the portal and reload the page.
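In ARM template terms the relevant fragment looks roughly like this (a trimmed sketch, not a complete template):

{
  "type": "Microsoft.DocumentDB/databaseAccounts",
  "kind": "GlobalDocumentDB",
  "tags": {
    "defaultExperience": "Graph"
  }
}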


Will it work the other way around? ** updated 25.07.2017 **

Now that we have figured that out, I wonder if I can take a “normal” DocumentDB API collection and fill it with data that looks like what we have created with the Graph API, then change the experience to Graph API and see if we can access the data via Gremlin.

For that purpose I have set up a brand new CosmosDB with DocumentDB API “demobuilddocs” in the portal. I am again using DocumentDB Studio to create a new collection and add three documents to it (you can download the documents here!).

Expressed in gremlin this would be (25.07.2017: replaced ö,ü with oe and ue):

  • g.addV('place').property('name','Hobbithoehle');
  • g.addV('place').property('name','Gasthaus Zum gruenen Drachen');
  • g.V().has('name','Hobbithoehle').addE('path').to(V().has('name','Gasthaus Zum gruenen Drachen')).property('weight',2.0);

In DocumentDB Studio I create a new single partitioned collection “democol” with 400 RU/s for “demobuilddocs”. Then I create the three documents with CTRL+N (Create document in the context menu of the collection). So that’s it.

Finally we will change the defaultExperience tag for “demobuilddocs” in the portal to “Graph”:

Refresh the portal page and navigate to the Data Explorer (Preview). Et voilà:


Next try that with GraphExplorer and it works all fine as well.


Now let’s try that with the cloud bash and the Gremlin client. (Spoiler: Will it break? No, it won’t; it would only break if we used ö, ü, … in the JSON.) First I copy my hobbit.yaml connection configuration to doccb.yaml and edit it with nano to point to the graph URL “demobuilddocs.graphs.azure.com”. Please note that GraphExplorer, which uses the .NET CosmosDB SDK, connects to “demobuilddocs.documents.azure.com”. Then I add the path to my collection and the primary key as password (I have truncated that for security reasons).


Now I run my gremlin client (I have installed that with a tar ball in my cloud bash) and connect to my graph database:


And let’s query for edges and see how that works.


**updated 25.07.2017**
Now when we read the vertices we will get the correct result.


Now let’s try that with vertices. If our JSON contains mutated vowels (Umlaute), the decoder will fail with an exception about some strange missing close marker for OBJECT:


A .NET implementation like GraphExplorer, by the way, can handle the mutated vowels without any problem. But you might want to look out for this error in a Java client. If you examine the exception you can see that the resulting JSON misses some closing brackets.


This will need some further inspection, but I am closing out my article for today. Stay tuned,…

P.S.: If you create a new vertex in the gremlin console, you can query it with no problems. But if you replace that document by just changing the value of a property with DocumentDB Studio, you get the same error when you query the vertex with the gremlin console. Obviously Microsoft is storing more data than meets the eye. On the other hand it is interesting to see that the .NET client SDK just works.

Keep digging and have a great weekend

AndiP

CosmosDB – Build 2017 Recap Vienna

At the //BUILD on Tour event yesterday at Microsoft Österreich I had the pleasure of giving a talk about “CosmosDB”.

You can download the slides for my CosmosDB talk here. On GitHub you will find my source code for the CosmosDB talk, and here the Azure CosmosDB Graph Explorer sample.

Here are also the Gremlin queries for the example from The Hobbit.

Oh, and since it caused some confusion: in CosmosDB there are only Request Units (RU) per second/minute and no orcs. Although that would have had a certain appeal.

Best quote from an attendee in the talk about UWP apps: “Helene Fischer? Well, without the sound it’s okay” *haha*

Have fun

AndiP