Blender 3D Integration with PFTrack

image
I recently bought a DJI Ronin to be able to do smooth shots. Not only is a smooth shot much more enjoyable to watch, it is also much easier to track if you do 3D integration. While shooting smooth shots with the DJI Ronin still requires a lot of practice, the shots are a lot smoother than if they were shot plain handheld.

Today I will show you how to import a moving shot into PFTrack. We will of course track the camera, but also position a 3D model (a glass) on the kitchenette. In the image on the right I have visualized my shot, where my camera moves from left to right.

I structured the post into the following topics:

  • PFTrack application
    • Creating the required nodes in PFTrack
    • Configure “Photo Survey” node, match points and solve camera
    • Configure “Photo Mesh” and create the mesh
    • Exporting mesh and camera data to Blender
  • Blender application
    • Importing mesh and camera data into Blender
    • Verification (optional but recommended)

I am using PFTrack 2015.05.15 for this. First create a new project in PFTrack. Change to the project view by clicking on the “PRJ” button in the lower left corner (image). Click the “Create” button, fill out Name, Path,… and click “Confirm”. Enable the file browser and the project media browser by clicking on the corresponding icons at the top of the application (image). Import the footage by dragging it into the “Default” folder or create your own project structure.

Creating the required nodes in PFTrack

Drag your shot to the node window. In the lower left corner enable the nodes menu (image). Click on “Create” to create a “Photo Survey” node. Set up the “Photo Mesh” and “Export” nodes with the same procedure. Your node tree should look like this:
image

Configure “Photo Survey” node, match points and solve camera

image
Double click the “Photo Survey” node. Since the calculations take quite some time, we should only calculate what is necessary. Since I have a much longer recording (switching the camera on/off while you are holding the heavy DJI Ronin is quite a challenge), I only need to track a small portion, in my case from frame 431 to 618. Open the “Parameters” of the “Photo Survey” node and set “Start frame” and “End frame” in the “Point Matching” section. Finally hit “AutoMatch” (image) and wait until the calculations are done.

After the points have been tracked, click on the “Solve all” button (image) in the “Camera Solver” section. If you enable the split view (see the buttons in the right corner) you will end up with a point cloud and a solved camera:
image

Configure “Photo Mesh” node and create the mesh

After solving the camera we need to create depth maps for each frame and build a mesh. Note that you won’t get a perfect mesh, but it will suffice to help place things in the 3D world in Blender later. Switch to the “Photo Mesh” node. If you do not require all points, set the bounding box accordingly in the “Scene” section. To do this click the “Edit” button. If you now hover over the planes of the bounding box in the 3D view, they will highlight and can be moved by dragging them with the mouse. Once you are finished, hit “Edit” again.

Let’s create the depth maps next. Depending on your requirements, set the depth map resolution to “Low”, “Medium” or “High”. Be aware that a higher resolution results in a much longer calculation. I left the variation % at the default of 20 and set my resolution to “Medium”. Now hit the “Create” button in the “Depth Maps” section. This will take a while.

After building the depth maps we can create the mesh. Note that you could also create a smaller portion of the mesh by setting the bounding box in the “Mesh” section. Create the mesh simply by hitting the “Create” button in the “Mesh” section. And finally we should have our mesh:
image

Exporting mesh and camera data to Blender

PFTrack can export the mesh and camera data in various formats: “Open Alembic” and “Autodesk FBX 2010 (binary)”. You can also export the mesh without the camera to “Wavefront OBJ” and “PLY”. The “Open Alembic” export fails on my Windows PC and I have not been able to use it so far.

For Blender we should have two options: “Autodesk FBX 2010 (binary)” and “Wavefront OBJ”.

Unfortunately we have two issues with the FBX format. First of all, Blender can only import “Autodesk FBX 2013 (binary)”, so we need an extra step: converting the FBX file with Autodesk’s FBX Converter 2013.2. This allows us to import the cameras and the mesh, but the camera rotations are completely messed up. I do not know if this is a bug in Blender or PFTrack, but it does not help to make a smooth workflow. So what is the solution?

image
The solution is to split up the camera and mesh export. So first we export the mesh as “Wavefront OBJ”. Since Blender uses the z-axis for up/down, we change the default settings for the coordinate system to “Righthanded” and “Z up”. Then we name an output file (e.g. Kitchen-z-up.obj) and click on the “Export Mesh” button.

To export the camera data we use the previously created “Export” node that is connected to the “Photo Survey” node. In the parameters of the “Export” node we select the format “Collada DAE”. Choose what to export in the tabs on the right side. Since I won’t be needing the point cloud, I removed it from the export. Make sure that the camera is selected and “Separate Frame” is not checked. If checked, PFTrack would create a separate camera for each frame; since we want to render an animation later, leave that unchecked. Name the output file (e.g. KitchenNoPC.dae) and hit the “Export Scene” button.

image

So we end up with two files: one (Kitchen-z-up.obj) contains our model and the other (KitchenNoPC.dae) our animated tracked camera.

Importing mesh and camera data into Blender

Start up Blender (I am using version 2.74). Open the user preferences (CTRL+ALT+U) and select the “Add-ons” tab. Select the category “Import-Export” and make sure that the “Import-Export: Wavefront OBJ format” add-on is enabled.

First make sure that the render settings are set correctly. We set the resolution (it should match the footage) and the frame rate.
(It is crucial to set the frame rate correctly before you import the animated camera! Otherwise the camera will be out of sync, even if you change the frame rate later!)
image
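To illustrate why the frame rate matters so much: the imported camera animation is keyed by frame number, while the footage plays back by time, so a mismatched fps shifts every keyframe in time. A small sketch (plain Python, frame 188 is just an example keyframe from a tracked range):

```python
# The Collada camera animation is keyed per frame number, while the footage
# plays back by time. The same keyframe therefore lands at different
# wall-clock times depending on the scene fps:
def frame_to_seconds(frame, fps):
    return frame / fps

for fps in (24, 25, 30):
    print(f"{fps} fps -> keyframe 188 plays at {frame_to_seconds(188, fps):.2f} s")
```

Changing the fps after the import does not re-time the keyframes, which is why the camera stays out of sync with the footage.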

image
Select “File/Import/Wavefront (.obj)” from the menu. Navigate to the mesh OBJ file you created with PFTrack (e.g. “Kitchen-z-up.obj”). Make sure that you change the import settings in the lower left corner as shown in this image:

Then click on “Import” to import the object.

 

Select “File/Import/Collada (Default) (.dae)” from the menu. Navigate to the exported Collada camera track file (e.g. KitchenNoPC.dae) and click “Import COLLADA”.

This will import two objects: an empty object “CameraGroup_1” and the animated camera “ImageCamera01_2” (names can vary of course). Although the position of the camera looks correct right after the import, the camera will rotate 90 degrees on the global x-axis once you scrub through the timeline. I assume that the Pixelfarm team meant to parent “ImageCamera01_2” to “CameraGroup_1”, because the empty object is rotated 90 degrees on the x-axis.

image
So simply select the animated camera “ImageCamera01_2”. In the object settings (image) select parent and choose the empty object “CameraGroup_1”.
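To see why that parent fixes things: PFTrack’s Collada scene is Y-up, while Blender is Z-up, and a +90 degree rotation around the x-axis is exactly the change of basis between the two. A tiny sketch (plain Python, no Blender required):

```python
import math

def rot_x(deg):
    """3x3 rotation matrix around the X axis."""
    a = math.radians(deg)
    return [[1, 0, 0],
            [0, math.cos(a), -math.sin(a)],
            [0, math.sin(a),  math.cos(a)]]

def apply(m, v):
    """Multiply a 3x3 matrix with a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# The empty's +90 degree X rotation maps the Y-up "up" vector onto Blender's
# Z axis (up to floating point noise):
up_y = [0, 1, 0]
print(apply(rot_x(90), up_y))
```

Parenting the camera to the rotated empty applies this conversion to every animated frame, not just the first one.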

And we are almost finished. Since PFTrack exports the camera over the full length of the shot, you might want to define the animation range in the timeline window like so:
image

Finally we need to fix the field of view of the camera, which is also not correctly exported by PFTrack. In PFTrack, double click on the “Photo Survey” node; you can find the camera settings in the camera tab:
image

So back in Blender, select the animated camera (“ImageCamera01_2”) and switch to the camera settings. Change the sensor fit to “Horizontal” and set the width to the film back value from PFTrack, in this case “14.75680”.
(Make sure your render settings are set to the same aspect ratio as your footage!)
image

Then change the focal length of the camera to the value from PFTrack, in this case “12.851”.
image
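As a cross-check, film back and focal length together determine the horizontal field of view the camera will render with. A small sketch of the pinhole relation, assuming both values are in millimetres as PFTrack displays them:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Film back 14.75680 mm and focal length 12.851 mm from the PFTrack camera tab:
print(round(horizontal_fov_deg(14.75680, 12.851), 1))  # a fairly wide lens
```

If the number you get looks implausible for your lens, one of the two values was probably copied wrong.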

image
Verification (optional but recommended): To see if everything is correct, I recommend loading the original footage as a background and checking if it matches. Mistakes with e.g. the frame rate settings happen easily. To do this, select the animated camera again. With the mouse cursor in the 3D view, hit “N” to show the settings of the selected object in the right bar of the 3D view.

Find the “Background Images” setting and check it. Then hit the “Add Image” button.

Then select “Movie Clip” instead of “Image”. Uncheck the option “Camera Clip”. Click “Open”, navigate to the footage and click “Open Clip”.

Then click “Front” and set the opacity to 0.500.
Now you can scrub through the timeline and see if everything lines up perfectly.

image

The next thing, of course, is to create some 3D objects and place them on the table. For the final render we simply move the mesh to another layer and mix the original footage with our CGI objects in After Effects. Pay attention to things like lighting and reflections. Maybe a topic for another post. So long.

Cheers
AndiP

Connect ZyWALL 35 with Azure VPN site to site

Some time ago I got an “old” ZyWALL 35 from an ex-colleague and I always wanted to configure a site-to-site connection to Azure. Although Microsoft only provides automatic scripts for the more advanced professional enterprise VPN gateways, you can configure the device yourself (if it is capable of VPN). This, however, can have some caveats, like different expected key sizes, which you need to work around.

Hopefully this helps others with a ZyWALL 35 to configure a site-to-site connection, and also those who happen to have a different device.

I divided the article into the following sections:

  1. Setting up the virtual network environment in Azure
  2. Set Shared Key Length in Azure VNET to 31
  3. Configuring the ZyWALL35

May you succeed. Cheers, AndiP!

1. Setting up the virtual network environment in Azure

My small private local network operates in the following address range: 10.0.0.0/24 (10.0.0.0 – 10.0.0.255). I want my Azure machines to operate in the VNET in the address range 10.0.1.0/24. So first of all we will create a virtual network in Azure for that purpose. Log into the Azure Management Portal. With the “+ NEW” button in the lower left corner we create a new virtual network.

image

image

image

Then hit the “Create” button. After Azure has created the virtual network we are presented with the dashboard of our new virtual network. From there we add more subnets.

image

Here we add another subnet (do not forget to save the changes with the SAVE button!). I left some of the address range empty because it will be required by the gateway subnet for the VPN site-to-site connection. Unfortunately you cannot add the gateway subnet here in the new portal.

  • Name: Subnet-2, Address Space: 10.0.1.0/24, CIDR Block: 10.0.1.32/27
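The address plan above can be sanity-checked with a few lines of Python (a sketch using the standard ipaddress module; the ranges are the ones used in this walkthrough):

```python
import ipaddress

local_lan = ipaddress.ip_network("10.0.0.0/24")   # home network behind the ZyWALL
vnet      = ipaddress.ip_network("10.0.1.0/24")   # Azure VNET address space
subnet2   = ipaddress.ip_network("10.0.1.32/27")  # Subnet-2 from above

# Site-to-site routing only works if the two sides do not overlap:
assert not local_lan.overlaps(vnet)

# Subnet-2 sits inside the VNET but leaves 10.0.1.0/27 free,
# which Azure later claims for the gateway subnet:
assert subnet2.subnet_of(vnet)
print("address plan is consistent")
```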

Now we need to configure our site-to-site connection by clicking into the VPN connection section. Inside the VPN connection settings we select “Site-to-site” and give the local site a name (in our case “SpectoLogicLocalVPN”). As VPN gateway IP address we provide the public-facing IP address of our ZyWALL 35 behind the internet modem. Finally we provide the address ranges of our local network we would like to connect. Due to a bug in the new Azure Portal you need to check “Create gateway immediately”*.

It is also important to set the optional gateway configuration (see images below). Set the routing type to “Static”!

*Otherwise you will get the error “Deployment Failed”. The reason might be that you are not able to create the required gateway subnet in the subnet section. Once you have created a site-to-site connection and removed it again, you cannot remove the newly created “Gateway Subnet” in the subnets, even though you would not require it any longer (another bug).

image    image

image

We can now see the new automatically added subnet “GatewaySubNet”:
image

Creating the gateway will take some time (up to 25 minutes)!

Once the gateway is created we will see the result here. The public gateway IP address will be needed later when we configure our ZyWALL 35!
image

2. Set Shared Key Length in Azure VNET to 31

Azure uses shared keys that are longer than the ZyWALL 35 supports. So we have to set the key size in Azure manually via a PowerShell script to change it to a smaller value, in our case 31 characters!

As David pointed out in the comment section there is now an easier way to achieve this.

We also need to switch to the old Azure Portal, as the new portal does not offer shared key management. Navigate to your VNET and you will find the “Manage Key” button in the dashboard of the VNET:
image

Unfortunately the key is 32 characters long:
image

While we are at it: although we gave meaningful names in the new Azure Portal, the names of the local VPN network and the Azure network differ completely from what we originally entered. I assume that this is because of the new “Resource Group” management. It makes things a bit more complex, as we need the “real” names for our PowerShell script.

The VNET name can be found either in the new portal here:
image
or in the old portal here:
image

The local network name can be found in the new portal here:
image
or in the old portal here:
image

Now, after we have somehow managed to find out the real names, we can use them in the script below. Make sure you have imported the Azure publishing settings file and that you have imported the certificate either to your personal or local machine store. If not, you need to know the thumbprint of the certificate and assign it to the variable $mgmtCertThumb. If the certificate can be found in the store, the script will locate it for you:

# Sets a VPN shared key with a smaller key length
# © by Andreas Pollak / SpectoLogic

$subID = (Get-AzureSubscription -Current).SubscriptionId
$VNetName = "Group SpectoLogic_Resources SpectoLogicVPN"
$VNetNameLocal = "9A10F5F7_SpectoLogicLocalVPN"
$uri = "https://management.core.windows.net/" + $subID + "/services/networking/" + $VNetName + "/gateway/connection/" + $VNetNameLocal + "/sharedkey"
$body = '<?xml version="1.0" encoding="utf-8"?><ResetSharedKey xmlns="http://schemas.microsoft.com/windowsazure"><KeyLength>31</KeyLength></ResetSharedKey>'

# Identify the management certificate thumbprint
$mgmtCertThumb = $null

$mgmtCertCandidateCount = (Get-ChildItem -Path cert:\CurrentUser\My\ | Where-Object {$_.FriendlyName -like ((Get-AzureSubscription -Current).SubscriptionName + '*')}).Count
if ($mgmtCertCandidateCount -ne 1)
{
    $mgmtCertCandidateCount = (Get-ChildItem -Path cert:\LocalMachine\My\ | Where-Object {$_.FriendlyName -like ((Get-AzureSubscription -Current).SubscriptionName + '*')}).Count
    if ($mgmtCertCandidateCount -eq 1)
    {
        $mgmtCertThumb = (Get-ChildItem -Path cert:\LocalMachine\My\ | Where-Object {$_.FriendlyName -like ((Get-AzureSubscription -Current).SubscriptionName + '*')} | Select-Object -First 1).Thumbprint
    }
    else
    {
        echo "Could not locate the certificate thumbprint of the corresponding Azure management certificate!"
        echo "Please make sure to install the Azure management certificate in the 'Personal' folder"
        echo "of either the 'Local Machine' or 'Current User' certificate store on your machine!"
        echo "The friendly name of the certificate must start with the SubscriptionName to be automatically detected!"
    }
}
else
{
    $mgmtCertThumb = (Get-ChildItem -Path cert:\CurrentUser\My\ | Where-Object {$_.FriendlyName -like ((Get-AzureSubscription -Current).SubscriptionName + '*')} | Select-Object -First 1).Thumbprint
}

$headerDate = '2012-03-01'
$headers = @{"x-ms-version" = "$headerDate"}

Invoke-RestMethod -Uri $uri -Method Put -Body $body -Headers $headers -CertificateThumbprint $mgmtCertThumb

After running the script we can acquire our key and store it for later when we configure the ZyWALL 35. It should be 31 characters long now. ATTENTION: DO NOT RECREATE THE KEY in the portal; otherwise it will be 32 characters long again. Use the script again to regenerate the key!
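Before typing the key into the ZyWALL it is worth a quick sanity check; a minimal sketch (the key below is a made-up placeholder, yours comes from the portal):

```python
# Hypothetical placeholder key; use the real one from the Azure portal.
shared_key = "a" * 31

# The ZyWALL 35 accepts pre-shared keys of at most 31 characters, while a
# portal-regenerated Azure key is 32 characters long again:
assert len(shared_key) <= 31, "regenerate the key via the script, not the portal"
print("key length:", len(shared_key))
```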

3. Configuring the ZyWALL35

Finally we get to configure our ZyWALL 35. First we download a script as a text file, where we can read the basic configuration settings like hash and encryption algorithms,…

image

We log on to the ZyWALL35 Configuration Website and select VPN from the “Security”-Menu.

image image

Configure the Global Settings as shown in this screen shot:
image

Select the tab “VPN Rules (IKE)”.  Add a new gateway policy by clicking on the “add new gateway policy” button (image).

In the section “Property” we name our local VPN Gateway Policy “SpectoLogicVPNGWPolicy”. Also make sure NAT Traversal is checked!

In the section “Gateway Policy Information” we need to provide our local public IP address as well as the public Azure VPN Gateway address. So under “My ZyWALL” we provide our local public IP address:
image

Under “Primary Remote Gateway” we provide the public Azure Gateway IP-Address. In our home scenario we leave IPSec High Availability unchecked.
image

Since I have not set up any PKI infrastructure, I go for the simple “Pre-Shared-Key” authentication. Note that the ZyWALL 35 only supports pre-shared keys that are at most 31 characters long. This conflicts with Azure, which by default does not allow smaller key sizes. See the section “Set Shared Key Length in Azure VNET to 31” above on how to change that.
image

We leave the extended authentication settings untouched (uncheck “Enable Extended Authentication”)  and configure the IKE Proposal.
image

Finally we hit apply. Back in the “VPN Rules (IKE)” tab we select “Add Network Policy”:
image

We name the VPN network policy “SpectoLogic VPN Net Policy” and set it to active (check that checkbox!). Also check “Nailed-up”!
image

The linked Gateway Policy should already appear populated:
image

In the section “Local Network” we select “Subnet Address” from the “Address Type” dropdown and we provide the starting IP address in our local network and define the subnet mask for the range.
image

Now we also need to configure our remote network under the “Remote Network” section. Again we select “Subnet Address” from the “Address Type” dropdown and provide starting IP address and subnet mask:
image

For the IPSec proposal select the same values we already used for the VPN gateway policy (exception: set PFS to NONE!):
image

So we end up with:
image

To connect/disconnect the VPN in the new portal, click on the following elements:
image
You also can pin the last element to your dashboard by right clicking on the name of the VNET in the middle section:
image

Finally we can enjoy our site-to-site connection (new portal / old portal):

imageimage

Setting up Win10 on Raspberry Pi 2 from Win10 running in Hyper-V

Hey,

Welcome to SpectoLogic. I am Andreas, blogging for SpectoLogic, an organization that is currently, hmm, under construction :-). Let’s dive into the topic.

Since my Raspberry Pi 2 had been lying around for a while and Microsoft has now released a first preview of “Windows 10 IoT Core Insider Preview”, I decided to get my hands dirty. I first stumbled across an article from Mario Fraiss on how to set up your Raspberry Pi with Windows 10 IoT in a Hyper-V or physical environment.

Unfortunately I am running Windows 10 Preview under Hyper-V, and the solution he provides for the Hyper-V variant involves messing with the drivers of your SD device. Something I did not want to do on my precious device.

So I came up with the plan to write the image to a virtual drive and then move it over to the SD card with Win32 Disk Imager.

I created a second disk for my Windows 10 machine in my Hyper-V environment (a fixed disk, 8 GB in size, also a VHDX file). Then in Windows 10 I ran the dism statement (see his blog) and applied the image to this virtual disk, which looks physical to Windows 10 *gg*. Everything worked fine.

The challenge began when I tried to run Win32 Disk Imager, because I was not able to choose the fixed disk. This is due to the implementation of Win32 Disk Imager, which only allows the selection of removable devices (probably to protect inexperienced users from overwriting their operating system).

So I downloaded the source code and tools, which I installed promptly on Windows 10. The culprit lies in disk.cpp in the method

  • bool checkDriveType(char *name, ULONG *pid)

Simply replace following code:

if (GetDisksProperty(hDevice, pDevDesc, &deviceInfo) &&
    (
        // removable drives that are not attached via SATA ...
        ((driveType == DRIVE_REMOVABLE) && (pDevDesc->BusType != BusTypeSata)) ||
        // ... or fixed drives on USB, SD (0xC) or MMC (0xD) buses
        ((driveType == DRIVE_FIXED) &&
         ((pDevDesc->BusType == BusTypeUsb) || (pDevDesc->BusType == 0xC) || (pDevDesc->BusType == 0xD))
        )
    )
   )

with this one

if (GetDisksProperty(hDevice, pDevDesc, &deviceInfo))

Then I recompiled the thing, which was an adventure on its own, as usual with QT. With my new version, Win32DiskImager No Protect (use at YOUR OWN RISK), I created an IMG file (just select the first of the volumes; the tool internally uses the physical disk, so all the other partitions follow :-)).

Then I copied the 8 GB IMG file to my primary machine, where I have my SD card attached, and used the regular Win32 Disk Imager to apply the image to the SD card.

And you are ready to use it with your Raspberry Pi 2. The reason why you only see one partition of your SD card in Windows is that no drive letters are assigned to the other partitions.

Cheers

AndiP

Further recommended article: