Direct Methods with IoT Hub in C#

There is a new preview in town: with it you can invoke a direct method on a device. Currently only MQTT devices are supported in this scenario. There is a nice article with some Node.js samples. When Roman Kiss posted on the Azure forum that he would like to write his simulated device in C#, I thought this might be a nice opportunity to figure out why that does not work.

Well the answer is pretty simple: It is not yet implemented in the C# SDK.

But being me, I decided to make the “impossible” possible (for the fun of it). First I pulled the complete preview of the Azure IoT SDKs from GitHub. Then I spent some time figuring out what the Node.js implementation does. I love debugging JavaScript *sigh*.

And then I quickly modded (aka hacked) Microsoft.Azure.Devices.Client (be aware that this is not an optimal solution). These are the changes I made:

Microsoft.Azure.Devices.Client – MqttIotHubAdapter

sealed class MqttIotHubAdapter : ChannelHandlerAdapter
...
const string TelemetryTopicFormat = "devices/{0}/messages/events/";
// ADDED =>
const string MethodTopicFilterFormat = "$iothub/methods/POST/#";
const string MethodTopicFormat = "$iothub/methods/res/{0}/?$rid={1}";
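
For reference, here is a rough sketch of what these formats resolve to at run time; the method name “writeLog” and the request id 42 are made-up example values:

// Hypothetical example: IoT Hub publishes a direct method call to the device on a topic like
//   $iothub/methods/POST/writeLog/?$rid=42
// and the device acknowledges it on the response topic built from MethodTopicFormat:
string responseTopic = string.Format(MethodTopicFormat, 200, 42);
// => "$iothub/methods/res/200/?$rid=42"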

Microsoft.Azure.Devices.Client – MqttIotHubAdapter – Connect Function

This was the most difficult part to find out, because I did not expect this “hack”. Expect the unexpected!
async void Connect(IChannelHandlerContext context)
{
...
var connectPacket = new ConnectPacket
{
ClientId = this.deviceId,
HasUsername = true,
// CHANGED => You need to add this weird suffix to make it work!
Username = this.iotHubHostName + "/" + this.deviceId + "/DeviceClientType=azure-iot-device%2F1.1.0-dtpreview&api-version=2016-09-30-preview",
HasPassword = !string.IsNullOrEmpty(this.password),

Microsoft.Azure.Devices.Client – MqttIotHubAdapter – SubscribeAsync Function
Here I added the method topic subscription!

async Task SubscribeAsync(IChannelHandlerContext context)
{
if (this.IsInState(StateFlags.Receiving) || this.IsInState(StateFlags.Subscribing))
{
return;
}

this.stateFlags |= StateFlags.Subscribing;

this.subscribeCompletion = new TaskCompletionSource();
string topicFilter = CommandTopicFilterFormat.FormatInvariant(this.deviceId);
var subscribePacket = new SubscribePacket(Util.GetNextPacketId(), new SubscriptionRequest(topicFilter, this.mqttTransportSettings.ReceivingQoS));
System.Diagnostics.Debug.WriteLine($"Topic filter: {topicFilter}");
await Util.WriteMessageAsync(context, subscribePacket, ShutdownOnWriteErrorHandler);
await this.subscribeCompletion.Task;

// ADDED => We are using the const I declared earlier to construct the topic filter
this.subscribeCompletion = new TaskCompletionSource();
topicFilter = MethodTopicFilterFormat.FormatInvariant(this.deviceId);
System.Diagnostics.Debug.WriteLine($"Topic filter: {topicFilter}");
subscribePacket = new SubscribePacket(Util.GetNextPacketId(), new SubscriptionRequest(topicFilter, this.mqttTransportSettings.ReceivingQoS/*QualityOfService.AtMostOnce*/));
await Util.WriteMessageAsync(context, subscribePacket, ShutdownOnWriteErrorHandler);
await this.subscribeCompletion.Task;
// <= ADDED

}
Microsoft.Azure.Devices.Client – MqttIotHubAdapter – SendMessageAsync Function
Since we do want to acknowledge the arrival of the method we need to modify this too:
async Task SendMessageAsync(IChannelHandlerContext context, Message message)
{
// CHANGED => For our publish message we need to send to a different topic
string topicName = null;
if (message.Properties.ContainsKey("methodName"))
topicName = string.Format(MethodTopicFormat, message.Properties["status"], message.Properties["requestID"]);
else
topicName = string.Format(TelemetryTopicFormat, this.deviceId);
// <= CHANGED

PublishPacket packet = await Util.ComposePublishPacketAsync(context, message, this.mqttTransportSettings.PublishToServerQoS, topicName);
...
Microsoft.Azure.Devices.Client – MqttTransportHandler – ReceiveAsync Function
Since we do not get a lock token with the method call, we should not enqueue null in our completion queue:
public override async Task<Message> ReceiveAsync(TimeSpan timeout)
{
...
Message message;
lock (this.syncRoot)
{
this.messageQueue.TryDequeue(out message);
message.LockToken = message.LockToken;
// CHANGED => exclude lock tokens that are null. HACK: better would be to check whether it is a method message
if ((message.LockToken != null)&&(this.qos == QualityOfService.AtLeastOnce) )
{
this.completionQueue.Enqueue(message.LockToken);
}
...
Microsoft.Azure.Devices.Client – Util – ComposePublishPacketAsync
A little change here to prevent this method from “destroying” the topic name we carefully constructed earlier.
public static async Task<PublishPacket> ComposePublishPacketAsync(IChannelHandlerContext context, Message message, QualityOfService qos, string topicName)
{
var packet = new PublishPacket(qos, false, false);

// MODIFIED ==>
if (message.Properties.ContainsKey("methodName"))
packet.TopicName = topicName; // Make sure to keep our Topic Name
else
packet.TopicName = PopulateMessagePropertiesFromMessage(topicName, message);
// <== MODIFIED
...

Microsoft.Azure.Devices.Client – Util – PopulateMessagePropertiesFromPacket
And finally we need to populate our method messages with properties like requestID, methodName, …
public static void PopulateMessagePropertiesFromPacket(Message message, PublishPacket publish)
{
message.LockToken = publish.QualityOfService == QualityOfService.AtLeastOnce ? publish.PacketId.ToString() : null;

// MODIFIED ==>
Dictionary<string, string> properties = null;
if (publish.TopicName.StartsWith("$iothub/methods"))
{
var segments = publish.TopicName.Split('/');
properties = UrlEncodedDictionarySerializer.Deserialize(segments[4].Replace("?$rid", "requestID"), 0);
properties.Add("methodName", segments[3]);
properties.Add("verb", segments[2]);
}
else
properties = UrlEncodedDictionarySerializer.Deserialize(publish.TopicName, publish.TopicName.NthIndexOf('/', 0, 4) + 1);
// <== MODIFIED

foreach (KeyValuePair<string, string> property in properties)
{
...
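
To make the indexing above clearer, here is a hedged walk-through for a hypothetical method topic (the method name and request id are made up):

// Hypothetical topic: "$iothub/methods/POST/writeLog/?$rid=42"
// publish.TopicName.Split('/') yields:
//   segments[0] = "$iothub"
//   segments[1] = "methods"
//   segments[2] = "POST"       -> stored as properties["verb"]
//   segments[3] = "writeLog"   -> stored as properties["methodName"]
//   segments[4] = "?$rid=42"   -> after Replace("?$rid", "requestID") it becomes "requestID=42"
// so the resulting message carries requestID=42, methodName=writeLog and verb=POST.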

Building the simulated device with the modded Microsoft.Azure.Devices.Client SDK
Just create a new console application and reference the modded SDK:
 
using Microsoft.Azure.Devices.Client;
using System;
using System.Collections.Generic;
using System.Text;

namespace DeviceClientCS
{
class Program
{
private static async void ReceiveCloudToDeviceMessageAsync(DeviceClient client,
string theDeviceID)
{
Console.WriteLine($"Receiving messages from Cloud for device {theDeviceID}");
while (true)
{
Message receivedMessage = await client.ReceiveAsync();
if (receivedMessage == null) continue;

Console.ForegroundColor = ConsoleColor.Yellow;
Console.WriteLine($"Received method ({receivedMessage.Properties["methodName"]}): {Encoding.ASCII.GetString(receivedMessage.GetBytes())} for device {theDeviceID} - Verb: {receivedMessage.Properties["verb"]}");
Console.ResetColor();

// ACKNOWLEDGE the method call
byte[] msg = Encoding.ASCII.GetBytes("Input was written to log.");
Message respondMethodMessage = new Message();
foreach (KeyValuePair<string, string> kv in receivedMessage.Properties)
respondMethodMessage.Properties.Add(kv.Key, kv.Value);
respondMethodMessage.Properties.Add("status", "200");
await client.SendEventAsync(respondMethodMessage);
}
}


static void Main(string[] args)
{
string deviceID= "myDeviceId";
string connectionString = "<Your device connection string goes here>";

DeviceClient client = DeviceClient.CreateFromConnectionString(connectionString, deviceID, TransportType.Mqtt);
ReceiveCloudToDeviceMessageAsync(client, deviceID);
Console.ReadLine();
}
}
}
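
For completeness, here is a rough sketch of how the method can be invoked from the service side with the preview service SDK (Microsoft.Azure.Devices). Treat the exact types as an assumption based on that preview; the method name “writeLog”, the payload and the device id are just example values matching the device code above:

using Microsoft.Azure.Devices;
using System;
using System.Threading.Tasks;

namespace InvokeMethodCS
{
    class Program
    {
        // Sketch: invoke the direct method "writeLog" on the device "myDeviceId" and print the result.
        private static async Task InvokeMethodAsync()
        {
            ServiceClient serviceClient = ServiceClient.CreateFromConnectionString("<Your IoT Hub service connection string goes here>");

            var method = new CloudToDeviceMethod("writeLog") { ResponseTimeout = TimeSpan.FromSeconds(30) };
            method.SetPayloadJson("{ \"text\": \"Hello device\" }");

            CloudToDeviceMethodResult result = await serviceClient.InvokeDeviceMethodAsync("myDeviceId", method);
            Console.WriteLine($"Status: {result.Status} Payload: {result.GetPayloadAsJson()}");
        }

        static void Main(string[] args)
        {
            InvokeMethodAsync().Wait();
        }
    }
}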
 
And here is a final screen shot of my results:
[Screenshot: the device receiving and acknowledging the direct method call]
Cheers
AndiP

 
 
 
 

Ubuntu with Visual Studio Code ARM Template

 

Want to set up the new cross-platform ASP.NET Core with Visual Studio Code on a Linux machine quickly? It’s a bit tedious to do all the required installation bits, not to mention figuring out the little issues.

[Deploy to Azure button]

Using our new ARM template you can set up such a box on Microsoft Azure with a single click on the Deploy button (if you already have an Azure account; if not, get one here)!

Then fill in the parameters:

  • Credentials
  • DNS-Name
  • Run full Ubuntu-Desktop? (installation will take much longer, but you can play Mahjong)
  • Resource-Group Name

and click “Create”.

The ARM Template installs:

  • Docker (from Docker Extension)
  • Ubuntu Desktop with XRDP and xfce4 (Full or Minimal)
  • Visual Studio Code
  • .NET Core SDK
  • NodeJS and NPM v6
  • Yeoman with ASP.NET Generator
  • C# extension for Visual Studio Code

Later, use Remote Desktop Connection to connect to your machine! Computer: <DNS-Name>.<Location of Resource Group>.cloudapp.azure.com. Enter your credentials in the xrdp login dialog. Make sure “sesman-Xvnc” is selected!

 

You will find Visual Studio Code under Development, or you can start it from the shell with “code .”. You may also use Yeoman with the preinstalled ASP.NET generator.

Read more about the ASP.NET generator on Scott Hanselman’s blog.

Enjoy playing with .NET Core and Visual Studio Code running on Microsoft Azure.

AndiP

API Management on Global Azure Bootcamp 2016

I recently had the opportunity to give an introduction to API Management at the Global Azure Bootcamp 2016 in Linz. You can find the pickings of that event here (German only). I decided to publish my slides on API Management along with some information about the demo environment I used.

OK, this turned out to be more of a blog post about how to authenticate Web Apps with Web API Apps.

First and foremost, to play around with Azure API Management you need a Microsoft Azure Subscription, which you can get here.

My demo environment looked like this:

  • 1 developer instance of API Management, managed through the classic Azure portal
  • 1 Azure resource group where I run a free App Service plan, managed through the new Azure portal, with
  • 3 Azure API Apps (CalcGAB2016, CalcEnterprise, CalcEnterpriseClient)
  • 1 Azure Active Directory instance, managed through the classic Azure portal

If you plan to create API Apps yourself, I recommend using the “Azure API App” template for ASP.NET applications. It comes preconfigured with the Swashbuckle packages, which let you create an OpenAPI Specification (formerly known as Swagger) document straight from your code. You can read more here about how to customize the Swashbuckle-generated API definitions.
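
For illustration, the Swashbuckle registration in such a project looks roughly like this (the API title “CalcGAB2016” is just an example):

using System.Web.Http;
using Swashbuckle.Application;

public static class SwaggerConfig
{
    public static void Register()
    {
        // Generates the OpenAPI/Swagger document at /swagger/docs/v1 and a UI at /swagger.
        GlobalConfiguration.Configuration
            .EnableSwagger(c => c.SingleApiVersion("v1", "CalcGAB2016"))
            .EnableSwaggerUi();
    }
}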

Now to my sample code. There is plenty of documentation on how to use API Management (you can find an incomplete but helpful list on the last slide of my presentation). My JWT token demo is based on the presentation by Darren Miller (see time 7:30).

I will therefore focus instead on some specifics of the AAD integration of the API app “CalcEnterprise” and the web app “CalcEnterpriseClient”, which I have secured with AAD.

Securing Azure Web/API Apps

I love the idea that you can move authentication out of your application and instead just configure it in the portal. As a former colleague of mine said: you do not want a web developer to write your authentication code. Instead you just write the application with “No Authentication” selected and configure access in the new management portal:

[Screenshot: Authentication/Authorization blade in the new portal]

Depending on the authentication you selected, your ClaimsPrincipal.Current object will hold all claims provided by the authority that authenticated your visitors. Aside from that, you receive the complete token and some other information about the authentication in headers that Azure provides (a small sketch of reading them follows after the list):

X-MS-CLIENT-PRINCIPAL-NAME = e.g. the e-mail address
X-MS-CLIENT-PRINCIPAL-ID = e.g. a GUID, as in AAD
X-MS-CLIENT-PRINCIPAL-IDP = the identity provider (AAD => aad)
X-MS-TOKEN-AAD-ID-TOKEN = with AAD, the JWT token with additional claims, which you can also find in ClaimsPrincipal.Current.Claims if you happen to run the application in ASP.NET
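
As a small illustration, in an ASP.NET application you could read these values roughly like this (a sketch only; the object-identifier claim type is the standard AAD one):

using System.Security.Claims;
using System.Web;

public static class WhoAmIHelper
{
    // Sketch: read the App Service authentication headers and the claims of the current principal.
    public static string Describe()
    {
        string name    = HttpContext.Current.Request.Headers["X-MS-CLIENT-PRINCIPAL-NAME"];
        string idp     = HttpContext.Current.Request.Headers["X-MS-CLIENT-PRINCIPAL-IDP"];
        string idToken = HttpContext.Current.Request.Headers["X-MS-TOKEN-AAD-ID-TOKEN"];

        // The same information also surfaces as claims on the current principal.
        string oid = ClaimsPrincipal.Current?
            .FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;

        return $"{name} via {idp} (oid: {oid}), id token length: {idToken?.Length ?? 0}";
    }
}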

Step 1 – Configure your AAD and secure your Web App/API

After you have created a new AAD instance like <yourChosenName>.onmicrosoft.com, you can define an AAD application which your application will use. Under the Applications tab:

  • Add a new application: “Add an application that my organization is developing”
  • Name it and select “Web Application and/or Web API”
  • Provide the sign-in URL, which will be the URL of your website, like https://<yourapp>.azurewebsites.net
  • Provide a unique ID, which can be any unique URI; for multi-tenant applications use the base tenant URI in combination, like so: https://<yourTenantName>.onmicrosoft.com/<your unique name of the app>

After you have created the application, you will also find the REPLY URL populated, which is important as the identity provider will only send the JWT token to this URL! To configure Authentication/Authorization for your Web App/API:

  • Copy the Client ID of your AAD application
  • Open “View Endpoints” (button at the bottom of the screen) and copy the URL of the Federation Metadata Document
  • Open the Federation Metadata Document URL in a browser
  • In the XML, find the “Entity” element and copy the content of the “entityID” attribute, which contains the link to the issuer (Issuer URL)

You will need these two values to configure Authentication/Authorization in your Web App/API like so:

[Screenshot: Authentication/Authorization settings with Client ID and Issuer URL]

Step 2 – Applying this concept to my sample code

I figured out that I could create at least two different scenarios with my two Web Apps/APIs:

  • Assign the client and the API to a single AAD application
  • Assign the client and the API to separate AAD applications

With the first option I can easily authenticate my call into the API from my client with the same identity that authenticated on the client (implemented in the HomeController action “Index_SameAAD”).

With the second option I can use my client app as a service principal to authenticate to my API, which hides the original identity from the API (implemented in the HomeController action “Index”); a minimal sketch of this call follows below.

[Diagram: client app authenticating to the API as a service principal]
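
A minimal sketch of that client-credentials call with ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory) could look like this. The tenant, client id and secret correspond to the placeholders listed in Step 3; the App ID URI of the API and the calculator route are made up for illustration:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public class CalcApiCaller
{
    // Sketch: acquire a token for the API as the client application (service principal) and call it.
    public static async Task<string> CallCalcApiAsync()
    {
        var authContext = new AuthenticationContext("https://login.microsoftonline.com/<yourTenantName>.onmicrosoft.com");
        var credential = new ClientCredential("<CalcEnterpriseClient AAD CLientID>", "<CalcEnterpriseClient AAD App Secret/Key>");

        // The resource is the App ID URI of the CalcEnterprise AAD application (hypothetical value).
        AuthenticationResult authResult = await authContext.AcquireTokenAsync(
            "https://<yourTenantName>.onmicrosoft.com/CalcEnterpriseAPI", credential);

        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
            return await http.GetStringAsync("<yourcalcEnterpriseAPIwebsitesUrl>/api/calc/add?a=1&b=2");
        }
    }
}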

But I can also re-authenticate the identity on the API to see the original identity. I found this excellent article by Vittorio Bertocci on using ADAL’s AcquireTokenByAuthorizationCode to call a Web API from a Web App, which showed me how to implement this (implemented in the HomeController action “Index_OtherAAD”).

[Diagram: re-authenticating the original identity on the API]

Step 3 – Clone the source

Feel free to clone my source code from my GitHub repository and play with it.

You need to replace the following placeholders with actual values and, of course, deploy and configure your apps in Azure.

  • <your API MGMT API key>
  • <yourAPIMInstanceName>
  • <yourcalcEnterpriseAPIwebsitesUrl>
  • <yourAPIMInstanceName>
  • <CalcEnterpriseClient AAD CLientID>
  • <CalcEnterpriseClient AAD App Secret/Key>
  • <CalcEnterpriseAPI AAD CLientID>
  • <yourTenantName>

First restore all packages. For some reason I had issues in the Calc project with the correct DLLs for System.Web.Http and others not loading (funny enough, it shows errors in Visual Studio 2015 but still compiles fine). Closing the solution and opening the project file instead fixes this.

Clone the Source
Download Slides

Enjoy a nice day – AndiP

Creating a JWT Token in a Windows Phone 8.1 App

I thought I would quickly download the JWT NuGet package for my Windows 8.1 universal app. Well, I was wrong. After some searching I found the article Creating a JWT token to access Windows Azure Mobile Services. But System.Security.Cryptography is no longer available in Windows Phone 8.1 universal apps; you should rather use the classes in Windows.Security.Cryptography, which are of course inherently different.

So I rewrote the JsonWebToken class to work in my universal app and share it here in case you run into the same issue. I validated the output with the JWT debugger at http://jwt.io/.

BTW, before you ask “Why do you not use a Windows 10 universal app?”: I would if the Windows 10 preview on my Windows Phone were in better shape. This was the first preview I have ever had to roll back.

using System;
using System.Collections.Generic;
using System.Text;
using Newtonsoft.Json;
using Windows.Security.Cryptography;
using Windows.Security.Cryptography.Core;

/// <summary>
/// Based on http://www.contentmaster.com/azure/creating-a-jwt-token-to-access-windows-azure-mobile-services
/// Reimplemented the cryptographic part with Windows.Security.Cryptography
/// </summary>
public class JsonWebToken
{
/// <summary>
/// Create a HMACSHA256 Signing HASH
/// </summary>
/// <param name="signingKey"></param>
/// <param name="bytesToSign"></param>
/// <returns></returns>
private static byte[] HMACSHA256(byte[] signingKey, byte[] bytesToSign)
{
var signingKeyBuffer = CryptographicBuffer.CreateFromByteArray(signingKey);
var bytesToSignBuffer = CryptographicBuffer.CreateFromByteArray(bytesToSign);

var hmacAlgorithm = MacAlgorithmProvider.OpenAlgorithm(MacAlgorithmNames.HmacSha256);
var hash = hmacAlgorithm.CreateHash(signingKeyBuffer);
hash.Append(bytesToSignBuffer);
string base64Hash = CryptographicBuffer.EncodeToBase64String(hash.GetValueAndReset());
return Convert.FromBase64String(base64Hash);
}

public static string Encode(object payload, string key)
{
return Encode(payload, Encoding.UTF8.GetBytes(key));
}

public static string Encode(object payload, byte[] keyBytes)
{
var segments = new List<string>();
var header = new { alg = "HS256", typ = "JWT", kid = 0 };
byte[] headerBytes = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(header, Formatting.None));
byte[] payloadBytes = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(payload, Formatting.None));
segments.Add(Base64UrlEncode(headerBytes)); segments.Add(Base64UrlEncode(payloadBytes));
var stringToSign = string.Join(".", segments.ToArray());
var bytesToSign = Encoding.UTF8.GetBytes(stringToSign);
byte[] signature = HMACSHA256(keyBytes, bytesToSign);
segments.Add(Base64UrlEncode(signature));
return string.Join(".", segments.ToArray());
}

// from JWT spec
private static string Base64UrlEncode(byte[] input)
{
var output = Convert.ToBase64String(input);
output = output.Split('=')[0]; // Remove any trailing '='s
output = output.Replace('+', '-'); // 62nd char of encoding
output = output.Replace('/', '_'); // 63rd char of encoding
return output;
}

internal static string TestJWT()
{
var privateKey = "secret";
var issueTime = DateTime.UtcNow; // use UTC so the exp claim is not shifted by the local time zone
var utc0 = new DateTime(1970, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
var exp = (int)issueTime.AddMinutes(60).Subtract(utc0).TotalSeconds;
var payload = new
{
exp = exp,
ver = 1,
aud = "[Your AUD]",
uid = "[A unique identifier for the authenticated user]"
};
return JsonWebToken.Encode(payload, privateKey);
}
}
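
A quick usage sketch (the payload values are made up); the result has the familiar header.payload.signature structure that you can paste into the debugger at http://jwt.io/:

// Sketch: create a token and look at its three Base64Url-encoded segments.
string token = JsonWebToken.Encode(new { sub = "user1", name = "AndiP" }, "secret");
string[] parts = token.Split('.');
// parts[0] = header, parts[1] = payload, parts[2] = HMAC-SHA256 signature
System.Diagnostics.Debug.WriteLine(token);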

Enjoy your day

AndiP

Using HLP files in Windows 10

It is amazing how some vendors of libraries in the automation industry still require you to read help files in the old Microsoft HLP format. Trying to open such a file results in Edge showing you the following page:
[Image: “Error opening Help in Windows-based programs: ‘Feature not included’ or ‘Help not supported’”]

If you think you can download and install the version for Windows 8.1, you are wrong. But do not throw away the downloaded MSU file (for Windows 8.1 x64 the name is Windows8.1-KB917607-x64.msu).

Start your command prompt as Administrator!

First extract the content of the MSU file to another directory:

md ContentMSU
expand Windows8.1-KB917607-x64.msu /F:* .\ContentMSU

Now we can extract the contained CAB-File:

cd ContentMSU

md ContentCAB

expand Windows8.1-KB917607-x64.cab /F:* .\ContentCAB

This will extract 279 files. Depending on your culture and language settings we need to locate the right MUI file. My language is German, so I use “de-”; English folks use “en-”.

cd ContentCAB
dir amd64*de-*.
People who use the x86 variant need to run “dir x86*de-*.”
Navigate to the given path, in my case

cd amd64_microsoft-windows-winhstb.resources_31bf3856ad364e35_6.3.9600.20470_de-de_1ab8cd412c1028d0

Here we will find “winhlp32.exe.mui”. We need to replace %SystemRoot%\de-de\winhlp32.exe.mui with our new file:

takeown /f "%SystemRoot%\de-de\winhlp32.exe.mui"
icacls "%SystemRoot%\de-de\winhlp32.exe.mui" /grant "%UserName%":F
ren %SystemRoot%\de-de\winhlp32.exe.mui winhlp32.exe.mui.w10
copy winhlp32.exe.mui %SystemRoot%\de-de\winhlp32.exe.mui


 

takeown /f "%SystemRoot%\winhlp32.exe"
icacls "%SystemRoot%\winhlp32.exe" /grant "%UserName%":F
ren %SystemRoot%\winhlp32.exe winhlp32.exe.w10

cd ..

dir *.exe /s
Find the right path starting either with amd64 or x86 and navigate to it
cd "amd64_microsoft-windows-winhstb_31bf3856ad364e35_6.3.9600.20470_none_1a54d9f2f676f6c2"
copy winhlp32.exe %SystemRoot%\winhlp32.exe

Cheers
AndiP

Fluid Simulation Integration with Blender

Dear readers, as promised I will now follow up with the integration of 3D objects into our tracked footage. It took a while to continue this series because my beloved father unfortunately passed away.

Today I will create some fluid simulations for our kitchen scene. Last time we tracked only the relevant part of the footage, which produced a tracked camera from frame 431 to frame 618. First we will cut down the background footage; then we will modify the tracked camera to start at frame 0.

Preparations

Change the camera animation in Blender
In Blender, change the timeline to the dope sheet and navigate to frame 430 (one before our first relevant frame). Zoom in so that you can see the keyframes. Now select the menu “Select – Before Current Frame”. With the mouse over the dope sheet, hit “X” to delete all keyframes from 0 to 430.

Now navigate to frame 431 and select the menu “Select – After Current Frame”. Hit “G” and type “431-”, which will move all the selected keyframes back so that they start at frame 0. Make sure to type the minus character at the end!

Switch back to the timeline and set start and end  accordingly (0/187).

Use Adobe Media Encoder to cut down our background footage
Open Adobe Media Encoder and open the preferences (Ctrl+,). Select Appearance/Display Format. Make sure that the frame rate is set to the frame rate of the background footage (in my case 25 fps).
Close the preferences and drag in your footage. Choose uncompressed AVI and name the file, in my case “SHOOT27_431-618.avi”. Since Adobe Media Encoder gives us no direct way to show the current frame we are at, we need to work with the time code provided. Since I recorded my footage without resetting the time code of my camera, the clip starts with the time code 11:40:51:14, which translates to 11 hours, 40 minutes, 51 seconds and 14 frames. The last number defines the frame; in the case of 25 fps the value ranges from 0 to 24.

Since we want to start encoding at frame 431, we need to add this to the current starting frame 14: 431 + 14 = 445. Click on the time code to change it and set the last number to 445.

As soon as you hit <ENTER> to confirm the value, the time code correctly changes to 11:41:08:20. Now set the IN-point by clicking on the button right next to the time code.

Similar to the start frame, we navigate to the required end frame. We move the current position to the first frame and again change the number 14, this time to 618 + 14 = 632. After the current position has changed, we can set our OUT-point by clicking on the second button from the left next to the time code. Make sure that the output video has the same aspect ratio and the same frame rate as the original footage, and encode the video.

Create 3D Model and set background footage

Now let’s create our 3D model of a drinking glass and a plane below it where we want to simulate a wet surface later. I won’t explain creating a simple model of a drinking glass in this tutorial. Align the 3D models with the model you got from PFTrack; in my case this is the part of the kitchen with the window.

Switch the renderer to Cycles in the top dropdown in Blender. Next we want to load in our background footage, so we can move the PFTrack model of our kitchen to another layer. With the mouse in the 3D view, hit “N” to show the right toolbar.

Scroll down to “Background Images” and select the checkbox. Then load in the background movie and set the settings as illustrated in the screenshot.

Now we will set the material for our two objects. Select the drinking glass, then the material tab, and create a new material by clicking on the “+ New Material” button.


Name the material “Glass”. Select the “Glass BSDF” shader and set the color to pure white (1.0/1.0/1.0; the default is 0.8/0.8/0.8). Leave the refraction index (IOR) at 1.310. You can find a list of materials with their corresponding refraction indices here.

 

 

Since we also want to render the background footage so that its reflection is caught in the glass, we need to change the world settings. In the “World” tab, open up the “Surface” section and click on the “Use Nodes” button. Then click on the small button right of the color and select “Image Texture” from the pop-up menu. Select the AVI file we created earlier and set the number of frames, the start frame and Auto Refresh. Make sure you set Vector to “Texture Coordinate | Window”.


Create the fluid simulation system

Create a fluid domain

Create a cube around the surface and the drinking glass; switch to wireframe to see the objects within. This will be our water simulation domain. Create a material “Water” for this object; again use the Glass BSDF shader, but change the refraction index (IOR) to 1.301 and the color to pure white (1/1/1). Select the cube, switch to the “Physics” tab and click on “Fluid”. Set the type to “Domain”.

TIME: The timing of the simulation is very important. There are textboxes for Start and End; these indicate start and end time in SECONDS! So in our case we start with 0 and end with 186 (frames) / 25 (fps) = 7.44 (~7.4). Set the SPEED setting to 1, which indicates normal speed.

DOMAIN SIZE: To create a realistic water simulation the simulator needs to know the size the domain cube represents in the real world. Under section “Fluid World” find the setting “Real World Size”. This value indicates the longest side of the cube in meters. So a value of 0.4 represents 40 cm.

SLIP-TYPE: You can find these settings under the “Fluid Boundary” section. The slip type determines the stickiness of the boundary surface (surface adhesion). You can change the surface’s smoothing options (0 = off, 1 = standard, …) and subdivision (the resolution of the surface for the calculations: 1 = off, 2 = 1 subdivision, 3 = 2 subdivisions, …). Be careful! A high resolution value increases the calculation time of the simulation significantly.

PARTICLES: To create a more realistic simulation, use particles. To be able to use particles (splashes when hitting the boundary or obstacles) you need to set the subdivision (boundary settings) to at least 2. Tracer allows you to define how many particles already exist at the beginning of the simulation.

Create fluid obstacles

Although it is possible to use the objects we already created as obstacles in the simulation, it can sometimes be more effective to create simplified versions of the objects as obstacles to reduce calculation time. To illustrate this, duplicate the drinking glass and the “wet” surface objects and name them “Drinking glass obstacle” and “Kitchenette surface obstacle” or similar.

Make sure you turn rendering off for these obstacle objects. In the “Physics” tab select “Fluid” and choose “Obstacle” as the type.

Since our surface is already set as an obstacle, we can modify the visible kitchenette surface into a wet ground. For that, subdivide the visible plane several times and use extrusion tools to shape it like a little water surface (see an example in the next section, below the glass).

Create InFlow object

Since we want the water to mysteriously appear in mid-air and fill the glass, we need an inflow object to indicate where the water will come from. Create a small sphere, which you also need to hide from rendering. This sphere must reside inside the fluid domain!

In the “Physics” tab activate “Fluid” and set the type to “Inflow”. We initialize the volume with the volume of the object (the sphere), so we set “Volume initialization” to “Volume”. In my case I want the water to have velocity along the positive X axis, so I set the X velocity to 0.6.

We also do not want to constantly pour water into the glass, so we enable the inflow object only for a brief moment in time. For that, navigate to frame 0 in the timeline, activate the “Enabled” checkbox and right-click it to insert a keyframe. Move forward to frame 18, deactivate the “Enabled” checkbox and set another keyframe there.

Bake the fluid simulation

To bake the fluid simulation, simply switch to the fluid domain object and hit the “Bake” button in the Physics tab. This will take a while. After that you will see the simulation when you scrub through the timeline and also when you render it:

[Images: baked water simulation and rendered result]

Preparing render for After Effects composition

Finally, we do not want to render the background footage directly but composite it later in After Effects. To do that we need to extract the background footage. Since we want to keep all the reflections, we cannot simply remove the background render in the World tab.

For the drinking glass and the fluid domain set the PASS INDEX to 1 (object tab). For the water surface set the PASS INDEX to 2. Switch to the “Nodes” view and select “World” in the bottom toolbar. Use ID Mask nodes to isolate an alpha map for the objects with pass index 1 and 2. Then use the “Set Alpha” node to isolate the object from the rendered image. We can now take the result from ID 2, modify it with RGB curves and make it slightly transparent by using an Alpha Over node. We place the result of the isolation in the foreground (lower image input) and set the upper image to black (0/0/0) with alpha 0. We use the factor to make the water surface transparent; in our case I set it to 0.427. We then combine the water surface and the glass with the water again with an Alpha Over node and take the result as the final render.

[Image: compositing node setup]

[Image: rendered result with compositing nodes applied]

Now we can finally render our animation and integrate it with the original footage in After Effects. I hope you have enjoyed this tutorial.

Cheers
AndiP

Blender 3D Integration with PFTrack

I recently bought a DJI Ronin to be able to do smooth shots. Not only is a smooth shot much more enjoyable for the eye, it is also much easier to track if you do 3D integration. While shooting smooth shots with the DJI Ronin still requires a lot of practice, the shots are a lot smoother than if they were shot plain-handed.

Today I will show you how to import a moving shot into PFTrack. Then we will of course track the camera, but also position a 3D model (a glass) on the kitchenette. In the image on the right I have visualized my shot, in which my camera moves from left to right.

I structured the post into the following topics:

  • PFTrack application
    • Creating the required nodes in PFTrack
    • Configure “Photo Survey” node, match points and solve camera
    • Configure “Photo Mesh” and create the mesh
    • Exporting mesh and camera data to Blender
  • Blender application
    • Importing mesh and camera data into Blender
    • Verification (optional) but recommended

I am using PFTrack 2015.05.15 for this. Create a new project in PFTrack first. Change to the project view by clicking on the “PRJ” button in the lower left corner. Click the “Create” button, fill out Name, Path, … and click “Confirm”. Enable the file browser and project media browser by clicking on the corresponding icons at the top of the application. Import the footage by dragging it into the “Default” folder, or create your own project structure.

Creating the required nodes in PFTrack

Drag your shot to the node window. In the lower left corner, enable the nodes menu. Click on “Create” to create a “Photo Survey” node. Set up the “Photo Mesh” and “Export” nodes with the same procedure. Your node tree should look like this:
[Screenshot: node tree with Photo Survey, Photo Mesh and Export nodes]

Configure “Photo Survey” node, match points and solve camera

Double-click the “Photo Survey” node. Since the calculations take quite some time, we should only calculate what is necessary. Since I have a much longer recording (switching the camera on and off while holding the heavy DJI Ronin is quite a challenge), I only need to track a small portion, in my case from frame 431 to 618. Open the “Parameters” of the “Photo Survey” node and set “Start frame” and “End frame” in the “Point Matching” section. Finally hit “AutoMatch” and wait until the calculations are done.

After the points have been tracked, click on the “Solve all” button in the “Camera Solver” section. If you enable the split view (see the buttons in the right corner), you will end up with a point cloud and a solved camera:
[Screenshot: point cloud and solved camera]

Configure “Photo Mesh” node and create the mesh

After solving the camera we need to create depth maps for each frame and then create a mesh. Note that you won’t get a perfect mesh, but it will suffice to help place things in the 3D world in Blender later. Switch to the “Photo Mesh” node. If you do not require all points, set the bounding box accordingly in the “Scene” section. To do this, click the “Edit” button. If you now hover over the planes of the bounding box in the 3D view, they will highlight and can be moved by dragging them with the mouse. Once you are finished, hit “Edit” again.

Let’s create the depth maps next. Depending on your requirements, set the depth map resolution to “Low”, “Medium” or “High”. Be aware that a higher resolution results in a much longer calculation. I left the variation % at the default of 20 and set my resolution to “Medium”. Now hit the “Create” button in the “Depth Maps” section. This will take a while.

After building the depth maps we can create the mesh. Note that you could also create a smaller portion of the mesh by setting the bounding box in the “Mesh” section. Create the mesh simply by hitting the “Create” button in the “Mesh” section. And finally we should have our mesh:
[Screenshot: generated mesh]

Exporting mesh and camera data to Blender

PFTrack can export the mesh and camera data in various formats: “Open Alembic” and “Autodesk FBX 2010 (binary)”. You can also export the mesh without the camera to “Wavefront OBJ” and “PLY”. The “Open Alembic” export fails on my Windows PC, and I have not been able to use it so far.

For Blender we should have two options: “Autodesk FBX 2010 (binary)” and “Wavefront OBJ”.

Unfortunately we have two issues with the FBX format. First of all, Blender can only import “Autodesk FBX 2013 (binary)”, so we need an extra step to convert the FBX file with Autodesk’s FBX Converter 2013.2. This lets us import the cameras and the mesh, but the camera rotations are completely messed up. I do not know whether this is a bug in Blender or in PFTrack, but it does not make for a smooth workflow. So what is the solution?

The solution is to split up the camera and mesh export. So first we export the mesh as “Wavefront OBJ”. Since Blender uses the Z axis for up/down, we change the default settings for the coordinate system to “Righthanded” and “Z up”. Then we name an output file (e.g. Kitchen-z-up.obj) and click on the “Export Mesh” button.

To export the camera data we use the previously created “Export” node that is connected to the “Photo Survey” node. In the parameters of the “Export” node we select the format “Collada DAE”. Choose what to export in the tabs on the right side. Since I won’t need the point cloud, I removed it from the export. Make sure that the camera is selected and “Separate Frame” is not checked. If checked, PFTrack would create a separate camera for each frame; since we want to render an animation later, leave it unchecked. Name the output file (e.g. KitchenNoPC.dae) and hit the “Export Scene” button.


So we end up with two files. One (Kitchen-z-up.obj) contains our model and the other (KitchenNoPC.dae) our animated tracked camera.

Importing mesh and camera data into Blender

Start up Blender (I am using version 2.74). Open the user preferences (Ctrl+Alt+U) and select the “Add-ons” tab. Select the category “Import-Export” and make sure that the “Import-Export: Wavefront OBJ format” add-on is enabled.

First make sure that the render settings are set correctly: set the resolution (it should match the footage) and the frame rate.
(It is crucial to set the frame rate correctly before you import the animated camera! Otherwise the camera will be out of sync, even if you change the frame rate later!)
[Screenshot: render settings]

Select “File/Import/Wavefront (.obj)” from the menu. Navigate to the mesh OBJ file you created with PFTrack (e.g. “Kitchen-z-up.obj”). Make sure that you change the import settings in the lower left corner as shown in the image:

Then click on “Import” to import the object.

 

Select “File/Import/Collada (Default) (.dae)” from the menu. Navigate to the exported Collada camera track file (e.g. KitchenNoPC.dae) and click “Import COLLADA”.

This will import two objects: an empty object “CameraGroup_1” and the animated camera “ImageCamera01_2” (the names can vary, of course). Although the position of the camera looks correct after the import, the camera will rotate 90 degrees around the global X axis once you scrub through the timeline. I assume the Pixel Farm team meant to parent “ImageCamera01_2” to “CameraGroup_1”, because the empty object is rotated 90 degrees on the X axis.

So simply select the animated camera “ImageCamera01_2”. In the object settings, select Parent and choose the empty object “CameraGroup_1”.

And we are almost finished. Since PFTrack exports the camera over the full length of the shot you might want to define the animation range in the timeline window like so:
[Screenshot: animation range in the timeline]

Finally we need to fix the field of view of the camera, which is also not exported/imported correctly. In PFTrack, double-click the “Photo Survey” node; you can find the camera settings in the camera tab:
[Screenshot: PFTrack camera settings]

So back in Blender select the animated camera (“ImageCamera01_2”) and switch to the camera settings. Change the sensor to “Horizontal” and set the width to the film back value from PFTrack. In this case “14.75680”.
(!! Make sure your render settings are set to the same aspect ratio as your footage !!).
[Screenshot: Blender camera sensor settings]

Then change the focal length of the camera to the value from PFTrack. In this case “12.851”.
[Screenshot: camera focal length setting]

Verification (optional but recommended): To see if everything is correct, I recommend loading the original footage as background and checking whether it matches. Mistakes with e.g. the frame rate settings happen easily. To do this, select the animated camera again. With the mouse cursor in the 3D view, hit “N” to show the settings of the selected object in the right bar of the 3D view.

Find the “Background Images” setting and check it. Then hit the “Add Image” button.

Then select “Movie Clip” instead of “Image”. Uncheck the option “Camera Clip”. Click “Open”, navigate to the footage and click “Open Clip”.

Then click “Front” and set the Opacity to 0.500.
Now you can scrub through the time line and see if everything lines up perfectly.

[Screenshot: background footage lined up with the tracked scene]

The next thing, of course, is to create some 3D objects and place them on the table. For the final render we simply move the mesh to another layer and mix the original footage with our CGI objects in After Effects. Pay attention to things like lighting and reflections. Maybe a topic for another post. So long.

Cheers
AndiP