The Customer Service Experience: Tumi

This is not a site where I focus on product reviews. No one gives me anything, and I'm not sponsored by anyone. Not that I am opposed to it; I am just a simple man going through the world. When I run into experiences I find unique, I like to share them as best I can. I have just had one such experience and feel it should be shared.

Many years ago, when I started traveling heavily for work, I, like many others, simply purchased whatever cheap bag I happened to find as a carry-on. What I found was that these "cheap" bags could not stand up to the abuse of heavy business travel; I ended up destroying 3 or 4 of them. After some heavy research, I finally decided that my next piece of luggage would be the Tumi Ducati Carry-On pictured here.

I took this bag across the United States and around the globe via road, rail, and air. In 6 years of abuse it never had any major issues. It picked up the regular battle scars that you would expect luggage to get, and recently the interior lining tore in a few places and a small aesthetic part fell off the exterior. I had heard about Tumi's repair policy, so I took the bag to a dealer who shipped it to a service center.

After two weeks, I reached out to the dealer, who in turn reached out to Tumi. As it turns out, my faithful Ducati carry-on could not be repaired. Because of that, Tumi was going to give me a credit worth the full retail value of the bag from 6 years ago to put toward the purchase of a new one. I could not believe it. I didn't ask for it; it is apparently simply something they do.

Needless to say, I have already ordered my new Tumi bag for my next round of travel, and when it is time to make any further luggage purchases, I will certainly be making them from Tumi. I find customer service experiences like this to be rare, and I think it is important to highlight companies that are doing it right. Do you have any great customer service stories to share? I would love to hear about them in the comments section below. If you like this content or the content on this site, be sure to follow me on Twitter to get the latest.

Louisiana

I was born in the rain on the Pontchartrain
Underneath the Louisiana Moon
I don’t mind the strain of a hurricane
They come around every June
High black water, a devil’s daughter
She’s hard, she’s cold, and she’s mean
But nobody taught her it takes a lot of water
To wash away New Orleans
- The Band of Heathens

Louisiana will always be home no matter where I may roam. I grew up in a small town on the north shore of Lake Pontchartrain, just north of New Orleans, and it will always have a special place in my heart. Since I started pursuing photography, I had not had a chance to get home and shoot some of the places that defined my childhood.

I was home this weekend to watch my youngest brother get married, but before all of the festivities began, I was able to capture a few images. The two above were the ones that I felt best captured the moment. The sunrise only gave me about five minutes of color before turning flat and grey, but sometimes five minutes is all you need.

I have more images from Louisiana posted in that gallery on my portfolio site, where you can purchase these or any other images.

San Francisco

I have been to the Bay Area a few times, but it seems that every time I do, I end up in Silicon Valley; I have never actually spent any time downtown. While this time of year is not necessarily the best for photography in the Bay Area, I was able to make the best of it. If you are interested in purchasing prints, please check out my portfolio.

 

Washington DC

I often travel for work and have started bringing my tripod and camera rig with me when I do. A few weeks ago I found myself in Washington, DC, and I spent a few mornings downtown before my meetings trying to get some shots. The first morning's sunrise was a bust, but the second produced some of my favorite pictures. If you are interested in prints, you can find them on my portfolio page.

Thoughts on DevOps

My job as a Technology Solutions Professional at Microsoft has me focused on cloud application development. With my background primarily in the infrastructure and data space, it was natural that I would gravitate toward exploring DevOps. It just so happens that DevOps was the first focus of the posts I have written for the Microsoft US Partner Blog. Since I have moved the bulk of my technology blogging to those platforms, I want to make sure I regularly post links to those topics here for readers who still come to this site looking for tech content.

Often when I talk about DevOps in presentations, I present it almost as a theology, and I believe that framing fits. DevOps is not a tool and not something you can just go buy with a credit card. There is certainly tooling designed to aid in the implementation of DevOps, but readers looking for a tool recommendation won't find that here.

**[Introduction to DevOps](https://blogs.technet.microsoft.com/msuspartner/2016/08/23/introduction-to-devops/)**

Prior to working for Microsoft, I spent 12 years as an IT consultant. On every project I worked on, there was tension between different parts of the organization. Most visibly, the people on the business side had needs that they wanted to apply technology to but saw IT as a blocker to addressing those needs, while the people in IT felt like the business had unrealistic expectations. In addition to this external battle, there is often one within IT as well – developers see operations as a blocker to deploying their applications, while operations sees developers as oblivious to the problems that code can cause. These biases can make it difficult for IT to reach its real potential as a force multiplier in an organization.

DevOps can help an organization address this problem. It’s not a certification, role, or tool, but a philosophy for how IT operates, focusing on adding value back to the business faster. It’s not a single change that aligns the organization. Rather, it’s a focus on creating a faster flow from development to production. This is done by creating cross-functional teams with members from both sides, with a focus on products, not projects. [[Read More...](https://blogs.technet.microsoft.com/msuspartner/2016/08/23/introduction-to-devops/)]

**[Infrastructure as Code](https://blogs.technet.microsoft.com/uspartner_learning/2016/08/25/application-development-infrastructure-as-code/)**

The concept of infrastructure as code, or programmable infrastructure, plays a significant part in making DevOps possible and is the first step in bringing the development and operations disciplines together within an organization. [[Read More...](https://blogs.technet.microsoft.com/uspartner_learning/2016/08/25/application-development-infrastructure-as-code/)]

**[Continuous Integration](https://blogs.technet.microsoft.com/uspartner_learning/2016/08/29/application-development-continuous-integration/)**

The software development practice of continuous integration is not simply about tooling. Continuous integration brings together all aspects of a software development project and provides a process that increases the flow of work that’s under way. For the purposes of our discussion related to cloud application development, it is the integration of developers and operations personnel to validate that features or changes to the system work, and that they don’t conflict with the work of others. This is accomplished through a process with source control, automated build, and test at its core. The practice of continuous integration allows bugs to be identified and addressed more quickly, which in turn improves quality. [[Read More...](https://blogs.technet.microsoft.com/uspartner_learning/2016/08/29/application-development-continuous-integration/)]

**[Continuous Delivery](https://blogs.technet.microsoft.com/uspartner_learning/2016/09/06/continuous-delivery/)**

The idea of continuous delivery can strike terror into the heart of operations personnel. It conjures up the thought of constantly having people in a delivery “war room” to address issues when things go wrong, which does not sound appealing.

Continuous delivery done well, though, is a natural extension of continuous integration (covered in part 3 of this blog series), and eliminates the need for a “war room.” And, as with all of the principles we’ve talked about in this series, continuous delivery is rooted in process, not tools. [[Read More...](https://blogs.technet.microsoft.com/uspartner_learning/2016/09/06/continuous-delivery/)]

**Wrap Up**

Are you using DevOps in your shop? If so, we would love to learn more about what is working for you and what is not. Let us know in the comments below. If you are looking for more of my technical blog content, check out the links above and follow those blogs. I will try to do roundup posts like this here to ensure that there are links in both places. If you like this content and want to interact with me on a more regular basis, follow me on [Twitter](http://twitter.com/jgardner04), where I primarily talk about technology.

 

Landscape and Cityscape Photography Training

As I mentioned in my previous post, I have been interested in photography for a while but have yet to really spend the time learning the craft. While I have spent some time learning the science behind how light works with the camera's sensor and the exposure triangle, I have not put much thought into composition, how to achieve a certain look, or post-processing. Time for some training.

When I purchased my camera I did a lot of research and spoke with friends who were more knowledgeable about equipment than I was. The resounding response to the question, "What gear should I buy?", was a return question: "What kind of things do you want to photograph?" When I started looking for photography training, I figured I should ask myself a similar question: who are the photographers I admire, and why?

As I began exploring this question, I came across Elia Locardi (Website | Twitter) and kept coming back to his work. I was drawn to the high dynamic range and color of his images and loved the way he seems to capture cities at that unique moment of sunset when all of the lights are on. That may not be the most precise description, but it is the best way I can explain it. Ultimately, I like the way his images speak to me.

I found that Elia had partnered with the website Fstoppers to create a series of video tutorials covering both how the shots were captured and how they were processed in post-production. Not only was there a series on landscape photography, but one on cityscape photography as well. I grabbed a special they were running to get both of them together, and off I went.

I consumed the videos very quickly, following along and learning a ton. With every tutorial I found myself thinking back through my very thin portfolio and wondering how I could edit those images to bring new life to them. That is just what I have started doing: I have taken them one at a time, processed them with the techniques learned through the tutorials, and been able to bring my photos to a level I never thought possible. I can't recommend the tutorials enough and hope that you get as much out of them as I did.

I have begun publishing them in my portfolio, and as I get them processed I am looking forward to your feedback. If you have any training that you have found to be outstanding, I would love to know about it; I am always looking to learn more. If you like this content or the photos in my portfolio, be sure to follow me on Twitter, Instagram, Flickr, and 500px.

My Photographic Journey

Corey and I moved to Lafayette from New Orleans in April of last year. Around the same time I acquired my Sony a6000 and have used it to capture our travels together. Recently I realized that I have done a poor job of capturing the world close to home. I have also done a poor job of talking about my photographic journey.

This site has been dedicated primarily to my technology posts, but I have really been cataloging those through work. You can find my latest technology posts in the locations below.

So to that end, the current posts will certainly remain here, but future posts will likely be about photographing my journey. I am looking forward to sharing it with you. If you are interested in following me on photo social media sites, check out the links below. You can view my full portfolio through the link in the menu above.

Book Review: The Phoenix Project

914-sugelzl.jpg

Earlier this month I attended a DevOps bootcamp event Microsoft hosted in one of our Bellevue offices. We were able to bring in members of the product group to discuss how Microsoft approaches DevOps internally and how it has contributed to the incredible release pace for Azure features. During the ensuing discussion, the book The Phoenix Project was mentioned. It was not a title I had heard of, but my interest was piqued, so I downloaded it on my Kindle for the flight home. What I uncovered was a great story about how all of IT, through the use of DevOps, can be either a competitive advantage or a business anchor. The choice of which is completely up to each organization.

Many of the books that I read about technology take one of two routes. The first type is technical: click here, or type this line of code. The others sell themselves as IT books but are really more about business processes. I found The Phoenix Project to be more closely aligned with the latter, but it went deeper than most in relating the importance of integrating IT into the business. The authors allow everyone, even non-technical readers, to understand the challenges and the need to approach IT with a DevOps mentality.

“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.”—Donovan Brown in the book, “DevOps on the Microsoft Stack” (Wouter de Kort, 2016).

While The Phoenix Project is fiction, as someone with 14 years of IT experience under my belt I could see its story play out at any number of companies. The challenges are very relatable, and while they are approached from the IT perspective, someone outside of IT would be remiss to dismiss the story. I don't want to spoil any of the details for a potential reader, but approaching IT as if it were a factory allows non-IT personnel to understand DevOps principles as well, and in my opinion the book is well worth the read.

Are you using DevOps in your organization? Have you read The Phoenix Project? I would love to hear your thoughts on how the principles outlined in the book play out in your day-to-day operations in the comments below. Don't forget to follow me on Twitter and follow the Microsoft US Azure Partner Community to stay up to date on the latest about DevOps and the Microsoft cloud.

Advanced VM ARM Template

adobestock_61327476.jpg

As you have seen, I have been doing quite a bit of work with ARM templates and VMs recently, and this post is no different. I have been working on a project where multiple VMs need to be created from a custom image and joined to an existing domain. In this post I will walk through the elements of the ARM template I created. NOTE: This template is not based on any best practices; it is simply a proof of concept.

Tl;dr - Grab the template from my GitHub account.

Creating Multiple Resources

The power of ARM templates is the ability to create complex environments from a single definition file. Part of that power is the ability to create multiple resources of the same type, which happens through the use of the copy element when defining a resource.

[code] "copy": { "name": "storagecopy", "count": "[parameters('count')]" } [/code]

Access to the current iteration is available through the copyIndex() function. This provides the flexibility to append the index to names, creating a unique name for each iteration. An example of this can be seen in the "name" element below.

[code]"name": "[concat(variables('storageAccountName'),copyIndex())]"[/code]

Virtual Machines from a Custom Image

Before we dive into the template, it is important to note that, at the time of this writing, the custom image must be in the same storage account as the .vhd that will be deployed with the new virtual machines. It is for this reason that this template creates a "Transfer VM" with a custom script extension. The script uses PowerShell and AzCopy to move the image from the source storage account to the target storage account. The gold image can be removed after the VMs are deployed without any issue, and the Transfer VM can also be removed; this could be scripted as well but is not included in the current version of the template. If you want to take a deeper look at creating a VM in this transfer model, check out the quick start template on GitHub.
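
For context, here is a rough sketch of the kind of copy that ImageTransfer.ps1 performs with AzCopy. The parameter names mirror the ones passed by the template's custom script extension, but treat this as an illustration under stated assumptions rather than the exact script.

[code language="powershell"]
# Illustrative sketch of the image transfer step (not the exact ImageTransfer.ps1).
param(
    [string]$SourceImage,      # full URI of the generalized .vhd (gold image)
    [string]$SourceSAKey,      # source storage account key
    [string]$DestinationURI,   # e.g. https://<account>.blob.core.windows.net/vhds
    [string]$DestinationSAKey  # destination storage account key
)

# Split the source URI into its container URI and blob name
$imageName       = $SourceImage.Substring($SourceImage.LastIndexOf('/') + 1)
$sourceContainer = $SourceImage.Substring(0, $SourceImage.LastIndexOf('/'))

# Assumes AzCopy is already installed at its default location on the Transfer VM
$azCopy = "C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe"

# Server-side copy of the gold image into the destination vhds container
& $azCopy /Source:$sourceContainer /Dest:$DestinationURI `
    /SourceKey:$SourceSAKey /DestKey:$DestinationSAKey `
    /Pattern:$imageName /Y
[/code]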

Networking

This template also assumes that you already have a virtual network created and takes its details as parameters so that the new virtual machines can be deployed to that network. The public IP addresses and NICs will all be attached to this network. If you have different network requirements, you will need to make those changes before deployment. In my demo environment, my domain controller is on the same vnet that the virtual machines will be deployed to. Because of this, I have set my domain controllers as the DNS servers and set up external forwarders there. This ensures that domain join requests are routed to the domain controllers. In other words, standard networking rules apply, just as they would on-prem.

Domain Join

The domain join is performed by a new extension; previously it needed to be done through DSC, and I find this approach much smoother. More information about the extension can be found on GitHub.
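
For reference, here is the domain join extension resource, trimmed from the full template later in this post so it can be read on its own (the copy element used for multiple VMs is omitted for brevity):

[code]
{
  "apiVersion": "2015-06-15",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), copyIndex(), '/joindomain')]",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'), copyIndex())]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "JsonADDomainExtension",
    "typeHandlerVersion": "1.3",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "Name": "[parameters('domainToJoin')]",
      "OUPath": "[parameters('ouPath')]",
      "User": "[concat(parameters('domainToJoin'), '\\', parameters('adminUserName'))]",
      "Restart": "true",
      "Options": "[parameters('domainJoinOptions')]"
    },
    "protectedSettings": {
      "Password": "[parameters('adminPassword')]"
    }
  }
}
[/code]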

The Business

Now, down to the code; I know that is what everyone really cares to see anyway. If you want to download it directly or make changes or comments, please do so through GitHub.

[code]

{ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "storageAccountName": { "type": "string", "metadata": { "description": "Prefix name of the storage account to be created" } }, "vmCopies": { "type": "int", "defaultValue": 1, "metadata": { "descritpion": "Number of storage accounts to create" } }, "storageAccountType": { "type": "string", "defaultValue": "Standard_LRS", "allowedValues": [ "Standard_LRS", "Standard_GRS", "Standard_ZRS", "Premium_LRS" ], "metadata": { "description": "Storage Account type" } }, "vmName": { "type": "string", "metadata": { "description": "Name prefix for the VMs" } }, "adminUserName": { "type": "string", "metadata": { "description": "Admin username for the virtual machines" } }, "adminPassword": { "type": "securestring", "metadata": { "description": "Admin password for virtual machines" } }, "dnsLabelPrefix": { "type": "string", "metadata": { "description": "DNS Name Prefix for Public IP" } }, "windowsOSVersion": { "type": "string", "defaultValue": "2012-R2-Datacenter", "allowedValues": [ "2008-R2-SP1", "2012-Datacenter", "2012-R2-Datacenter" ], "metadata": { "description": "The Windows version for the VMs. Allowed values: 2008-R2-SP1, 2012-Datacenter, 2012-R2-Datacenter." } }, "domainToJoin": { "type": "string", "metadata": { "description": "The FQDN of the AD domain" } }, "domainUsername": { "type": "string", "metadata": { "description": "Username of the account on the domain" }

}, "ouPath": { "type": "string", "defaultValue": "", "metadata": { "description": "Specifies an organizational unit (OU) for the domain account. Enter the full distinguished name of the OU in quotation marks. Example: 'OU=testOU; DC=domain; DC=Domain; DC=com" } }, "domainJoinOptions": { "type": "int", "defaultValue": 3, "metadata": { "description": "Set of bit flags that define the join options. Default value of 3 is a combination of NETSETUP_JOIN_DOMAIN (0x00000001) & NETSETUP_ACCT_CREATE (0x00000002) i.e. will join the domain and create the account on the domain. For more information see https://msdn.microsoft.com/en-us/library/aa392154(v=vs.85).aspx" } }, "existingVirtualNetworkName": { "type": "string", "metadata": { "description": "Name of the existing VNET" } }, "subnetName": { "type": "string", "metadata": { "description": "Name of the existing VNET" } }, "existingVirtualNetworkResourceGroup": { "type": "string", "metadata": { "description": "Name of the existing VNET Resource Group" } }, "transferVmName": { "type": "string", "defaultValue": "TransferVM", "minLength": 3, "maxLength": 15, "metadata": { "description": "Name of the Windows VM that will perform the copy of the VHD from a source storage account to the new storage account created in the new deployment, this is known as transfer vm." } }, "customImageStorageContainer": { "type": "string", "metadata": { "description": "Name of storace container for gold image" } }, "customImageName": { "type": "string", "metadata": { "description": "Name of the VHD to be used as source syspreped/generalized image to deploy the VM. E.g. mybaseimage.vhd." } }, "sourceImageURI": { "type": "string", "metadata": { "description": "Full URIs for one or more custom images (VHDs) that should be copied to the deployment storage account to spin up new VMs from them. URLs must be comma separated." } }, "sourceStorageAccountResourceGroup": { "type": "string", "metadata": { "description": "Resource group name of the source storage account." 
} } }, "variables": { "storageAccountName": "[parameters('storageAccountName')]", "imagePublisher": "MicrosoftWindowsServer", "imageOffer": "WindowsServer", "OSDiskName": "osdiskforwindows", "nicName": "[parameters('vmName')]", "addressPrefix": "10.0.0.0/16", "subnetName": "Subnet", "subnetPrefix": "10.0.0.0/24", "publicIPAddressName": "[parameters('vmName')]", "publicIPAddressType": "Dynamic", "vmStorageAccountContainerName": "vhds", "vmSize": "Standard_D1", "windowsOSVersion": "2012-R2-Datacenter", "virtualNetworkName": "myVNET", "vnetID": "[resourceId(parameters('existingVirtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', parameters('existingVirtualNetworkName'))]", "subnetRef": "[concat(variables('vnetID'),'/subnets/', parameters('subnetName'))]", "customScriptFolder": "CustomScripts", "trfCustomScriptFiles": [ "ImageTransfer.ps1" ], "sourceStorageAccountName": "[substring(split(parameters('sourceImageURI'),'.')[0],8)]" }, "resources": [ { "name": "[concat(variables('storageAccountName'),copyIndex())]", "copy": { "count": "[parameters('vmCopies')]", "name": "storagecopy" }, "type": "Microsoft.Storage/storageAccounts", "location": "[resourceGroup().location]", "sku": { "name": "[parameters('storageAccountType')]" }, "apiVersion": "2016-01-01", "kind": "Storage", "properties": {} }, { "name": "[concat(variables('publicIPAddressName'),copyIndex())]", "dependsOn": [ "storagecopy" ], "apiVersion": "2016-03-30", "copy": { "count": "[parameters('vmCopies')]", "name": "publicipcopy" }, "type": "Microsoft.Network/publicIPAddresses", "location": "[resourceGroup().location]", "properties": { "publicIPAllocationMethod": "[variables('publicIPAddressType')]", "dnsSettings": { "domainNameLabel": "[concat(parameters('dnsLabelPrefix'),copyIndex())]" } } }, { "name": "[parameters('transferVmName')]", "dependsOn": [ "storagecopy" ], "apiVersion": "2016-03-30", "type": "Microsoft.Network/publicIPAddresses", "location": "[resourceGroup().location]", "properties": { "publicIPAllocationMethod": "[variables('publicIPAddressType')]", "dnsSettings": { "domainNameLabel": "[concat(parameters('dnsLabelPrefix'),'trans1')]" } } }, { "apiVersion": "2016-03-30", "copy": { "count": "[parameters('vmCopies')]", "name": "niccopies" }, "type": "Microsoft.Network/networkInterfaces", "name": "[concat(variables('nicName'),copyIndex())]", "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Network/publicIPAddresses/',variables('publicIPAddressName'),copyIndex())]" ], "properties": { "ipConfigurations": [ { "name": "ipconfig1", "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(variables('publicIPAddressName'),copyIndex()))]" }, "subnet": { "id": "[variables('subnetRef')]" } } } ] } }, { "apiVersion": "2016-03-30", "type": "Microsoft.Network/networkInterfaces", "name": "[parameters('transferVmName')]", "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Network/publicIPAddresses/',parameters('transferVmName'))]" ], "properties": { "ipConfigurations": [ { "name": "ipconfig1", "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses',parameters('transferVmName'))]" }, "subnet": { "id": "[variables('subnetRef')]" } } } ] } },

{ "comments": "# TRANSFER VM", "name": "[parameters('transferVmName')]", "type": "Microsoft.Compute/virtualMachines", "location": "[resourceGroup().location]", "apiVersion": "2015-06-15", "dependsOn": [ "storagecopy", "[concat('Microsoft.Network/networkInterfaces/', parameters('transferVmName'))]" ], "properties": { "hardwareProfile": { "vmSize": "[variables('vmSize')]" }, "osProfile": { "computerName": "[parameters('transferVmName')]", "adminUsername": "[parameters('AdminUsername')]", "adminPassword": "[parameters('adminPassword')]" }, "storageProfile": { "imageReference": { "publisher": "[variables('imagePublisher')]", "offer": "[variables('imageOffer')]", "sku": "[parameters('windowsOSVersion')]", "version": "latest" }, "osDisk": { "name": "[parameters('transferVmName')]", "vhd": { "uri": "[concat('http://', variables('storageAccountName')[0], '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/',parameters('transferVmName'),'.vhd')]" }, "caching": "ReadWrite", "createOption": "FromImage" } }, "networkProfile": { "networkInterfaces": [ { "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('transferVmName'))]" } ] } }, "resources": [ { "comments": "Custom Script that copies VHDs from source storage account to destination storage account", "apiVersion": "2015-06-15", "type": "extensions", "name": "[concat(parameters('transferVmName'),'CustomScriptExtension')]", "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Compute/virtualMachines/', parameters('transferVmName'))]" ], "properties": { "publisher": "Microsoft.Compute", "type": "CustomScriptExtension", "autoUpgradeMinorVersion": true, "typeHandlerVersion": "1.4", "settings": { "fileUris": [ "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/201-vm-custom-image-new-storage-account/ImageTransfer.ps1" ] }, "protectedSettings": { "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ','ImageTransfer.ps1 -SourceImage ',parameters('sourceImageURI'),' -SourceSAKey ', listKeys(resourceId(parameters('sourceStorageAccountResourceGroup'),'Microsoft.Storage/storageAccounts', variables('sourceStorageAccountName')), '2015-06-15').key1, ' -DestinationURI https://', variables('StorageAccountName'), '.blob.core.windows.net/vhds', ' -DestinationSAKey ', listKeys(concat('Microsoft.Storage/storageAccounts/', variables('StorageAccountName')), '2015-06-15').key1)]" } } } ] },

{ "apiVersion": "2015-06-15", "type": "Microsoft.Compute/virtualMachines", "name": "[concat(parameters('vmName'),copyIndex())]", "copy": { "count": "[parameters('vmCopies')]", "name": "vmcopies" }, "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName'),copyIndex())]", "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'),copyIndex())]", "[concat('Microsoft.Compute/virtualMachines/', parameters('transferVmName'),'/extensions/',parameters('transferVmName'),'CustomScriptExtension')]" ], "properties": { "hardwareProfile": { "vmSize": "[variables('vmSize')]" }, "osProfile": { "computerName": "[concat(parameters('vmName'),copyIndex())]", "adminUsername": "[parameters('adminUsername')]", "adminPassword": "[parameters('adminPassword')]" }, "storageProfile": { "osDisk": { "name": "[concat(parameters('vmName'),copyIndex(),'-osdisk')]", "osType": "windows", "createOption": "FromImage", "caching": "ReadWrite", "image": { "uri": "[concat('http://', variables('StorageAccountName'), copyIndex(), '.blob.core.windows.net/',variables('vmStorageAccountContainerName'),'/Microsoft.Compute/Images/',parameters('customImageStorageContainer'),'/',parameters('customImageName'))]" }, "vhd": { "uri": "[concat('http://', variables('StorageAccountName'), copyIndex(), '.blob.core.windows.net/',variables('vmStorageAccountContainerName'),'/',parameters('vmName'),copyIndex(),'-osdisk.vhd')]" } } }, "networkProfile": { "networkInterfaces": [ { "id": "[resourceId('Microsoft.Network/networkInterfaces',concat(variables('nicName'),copyIndex()))]" } ] }, "diagnosticsProfile": { "bootDiagnostics": { "enabled": "true", "storageUri": "[concat('http://',variables('storageAccountName'),'.blob.core.windows.net')]" } } } }, { "apiVersion": "2015-06-15", "type": "Microsoft.Compute/virtualMachines/extensions", "name": "[concat(parameters('vmName'),copyIndex(),'/joindomain')]", "copy": { "count": "[parameters('vmCopies')]", "name": "domainextension" }, "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'),copyIndex())]" ], "properties": { "publisher": "Microsoft.Compute", "type": "JsonADDomainExtension", "typeHandlerVersion": "1.3", "autoUpgradeMinorVersion": true, "settings": { "Name": "[parameters('domainToJoin')]", "OUPath": "[parameters('ouPath')]", "User": "[concat(parameters('domainToJoin'), '\\', parameters('adminUserName'))]", "Restart": "true", "Options": "[parameters('domainJoinOptions')]" }, "protectedsettings": { "Password": "[parameters('adminPassword')]" } } } ] }

[/code]

Are you using Azure Resource Manager Templates?  If so, we would love to hear about how you are using them in the comments below.  If you like this content and want to know how I work with Microsoft Partners, please check out the US Partner Community Blog for some of my other posts.  Don't forget to follow me on Twitter.

 

 

 

Azure PaaS Services: Narrowing Focus

adobestock_61126770.jpg

When I started at Microsoft 18 months ago, I joined the National Partner Technology Strategist team focusing on the Azure platform. In my role as a National Partner Technology Strategist I was focused on three main areas with a national coverage responsibility: community, readiness, and practice development. Because Azure is a platform and it is not possible to be an "expert" in all of Azure, Microsoft leadership recognized the need to focus each PTS on a narrower workload to better serve partners.

Microsoft made some organizational changes, consolidating all PTS resources into a single organization and then into Enablement Team Units. These units are further divided by workload specialization, and it is in this unit that I will be focusing my efforts with partners on Azure PaaS services. While Platform as a Service could cover almost everything on Azure, many services fall under other workload areas. My focus will be on Logic Apps, Service Fabric, Cloud Services, Web Apps, API Apps, and Redis Cache. There are also many underlying topics that will be important with these workloads, including DevOps, application lifecycle management, containerization, Desired State Configuration, and more.

While I am sure that my passion for data and reporting will endure, the focus of postings on this blog will likely reflect the time I am spending in these areas. Is there anything in particular that you would like to see me cover on the topics above? Let me know in the comments below. If you like the content of this blog, follow me on Twitter, where I share and discuss lots of similar content.

Azure Resource Manager Templates and Custom Script Extensions

buildingblocks.jpg

In my previous article, Building Azure Resource Manager Templates, I covered how to get started with Azure Resource Manager templates. While they are certainly great for basic deployments, where they really shine is in their ability to handle complex deployments. This post will cover the Custom Script Extension and how it can be used to configure virtual machines during the deployment process. Note: This article assumes that you are familiar with the Azure portal and Visual Studio. I am not writing a full step-by-step article; while I will outline all of the things that need to happen, I am not doing a “click here” walk-through.

The Setup

When I was working on my ARM template to deploy SQL Server 2016 with the AdventureWorks sample databases installed, I needed a way to configure the virtual machine once it was provisioned. This is done using the Custom Script for Windows extension. As can be seen from the image below, the extension is dependent upon the creation of the virtual machine and requires that the virtual machine exist before the extension is added.

CustomScriptExtension

The Business

After adding the Custom Script Extension, a resource with the type "extensions" is added to the virtual machine in the ARM template; it shows up as nested in the JSON Outline window, and the code can be seen below. Adding the extension also creates a CustomScripts folder in the solution. In the case of a Windows extension, the script is a PowerShell (.ps1) file.

[code]
{
  "name": "test",
  "type": "extensions",
  "location": "[resourceGroup().location]",
  "apiVersion": "2015-06-15",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('Sql2016Ctp3DemoName'))]"
  ],
  "tags": {
    "displayName": "test"
  },
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "[concat(parameters('_artifactsLocation'), '/', variables('testScriptFilePath'), parameters('_artifactsLocationSasToken'))]"
      ],
      "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ', variables('testScriptFilePath'))]"
    }
  }
}
[/code]

From the custom script, I can perform a host of different actions with PowerShell. The code below does a number of things: it creates a folder structure, downloads files, creates and executes a PowerShell function to extract the zip files, moves files, executes T-SQL, and opens firewall ports.

[code language="powershell"] # DeploySqlAw2016.ps1 # # Parameters

# Variables $targetDirectory = "C:\SQL2016Demo" $adventrueWorks2016DownloadLocation = "https://sql2016demoaddeploy.blob.core.windows.net/adventureworks2016/AdventureWorks2016CTP3.zip"

# Create Folder Structure if(!(Test-Path -Path $targetDirectory)){ New-Item -ItemType Directory -Force -Path $targetDirectory } if(!(Test-Path -Path $targetDirectory\adventureWorks2016CTP3)){ New-Item -ItemType Directory -Force -Path $targetDirectory\adventureWorks2016CTP3 } # Download the SQL Server 2016 CTP 3.3 AdventureWorks database files. Set-Location $targetDirectory Invoke-WebRequest -Uri $adventrueWorks2016DownloadLocation -OutFile $targetDirectory\AdventureWorks2016CTP3.zip

# Create a function to expand zip files function Expand-ZIPFile($file, $destination) { $shell = new-object -com shell.application $zip = $shell.NameSpace($file) foreach($item in $zip.items()) { $shell.Namespace($destination).copyhere($item) } }

# Expand the downloaded files Expand-ZIPFile -file $targetDirectory\AdventureWorks2016CTP3.zip -destination $targetDirectory\adventureWorks2016CTP3 Expand-ZIPFile -file $targetDirectory\adventureWorks2016CTP3\SQLServer2016CTP3Samples.zip -destination $targetDirectory\adventureWorks2016CTP3

# Copy backup files to Default SQL Backup Folder Copy-Item -Path $targetDirectory\AdventureWorks2016CTP3\*.bak -Destination "C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup"

# Restore SQL Backups for AdventureWorks and AdventrueWorksDW Import-Module SQLPS -DisableNameChecking cd \sql\localhost\

Invoke-Sqlcmd -Query "USE [master] RESTORE DATABASE [AdventureWorks2016CTP3] FROM  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\AdventureWorks2016CTP3.bak' WITH  FILE = 1,  MOVE N'AdventureWorks2016CTP3_Data' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorks2016CTP3_Data.mdf',  MOVE N'AdventureWorks2016CTP3_Log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorks2016CTP3_Log.ldf',  MOVE N'AdventureWorks2016CTP3_mod' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorks2016CTP3_mod',  NOUNLOAD,  REPLACE,  STATS = 5

GO" -ServerInstance LOCALHOST -QueryTimeout 0

Invoke-Sqlcmd -Query "USE [master] RESTORE DATABASE [AdventureworksDW2016CTP3] FROM  DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\AdventureWorksDW2016CTP3.bak' WITH  FILE = 1,  MOVE N'AdventureWorksDW2014_Data' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2016CTP3_Data.mdf',  MOVE N'AdventureWorksDW2014_Log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2016CTP3_Log.ldf',  NOUNLOAD,  REPLACE,  STATS = 5

GO" -ServerInstance LOCALHOST -QueryTimeout 0

# Firewall Rules #Enabling SQL Server Ports New-NetFirewallRule -DisplayName “SQL Server” -Direction Inbound –Protocol TCP –LocalPort 1433 -Action allow New-NetFirewallRule -DisplayName “SQL Admin Connection” -Direction Inbound –Protocol TCP –LocalPort 1434 -Action allow New-NetFirewallRule -DisplayName “SQL Database Management” -Direction Inbound –Protocol UDP –LocalPort 1434 -Action allow New-NetFirewallRule -DisplayName “SQL Service Broker” -Direction Inbound –Protocol TCP –LocalPort 4022 -Action allow New-NetFirewallRule -DisplayName “SQL Debugger/RPC” -Direction Inbound –Protocol TCP –LocalPort 135 -Action allow #Enabling SQL Analysis Ports New-NetFirewallRule -DisplayName “SQL Analysis Services” -Direction Inbound –Protocol TCP –LocalPort 2383 -Action allow New-NetFirewallRule -DisplayName “SQL Browser” -Direction Inbound –Protocol TCP –LocalPort 2382 -Action allow #Enabling Misc. Applications New-NetFirewallRule -DisplayName “HTTP” -Direction Inbound –Protocol TCP –LocalPort 80 -Action allow New-NetFirewallRule -DisplayName “SSL” -Direction Inbound –Protocol TCP –LocalPort 443 -Action allow New-NetFirewallRule -DisplayName “SQL Server Browse Button Service” -Direction Inbound –Protocol UDP –LocalPort 1433 -Action allow #Enable Windows Firewall Set-NetFirewallProfile -DefaultInboundAction Block -DefaultOutboundAction Allow -NotifyOnListen True -AllowUnicastResponseToMulticast True [/code]

By default the custom script is located in the solution, but it does not have to be. In the code example below, I call the script from GitHub instead; note the fileUris link.

[code]
"resources": [
  {
    "name": "deploySql2016Ctp3",
    "type": "extensions",
    "location": "[resourceGroup().location]",
    "apiVersion": "2015-06-15",
    "dependsOn": [
      "[concat('Microsoft.Compute/virtualMachines/', parameters('Sql2016Ctp3DemoName'))]"
    ],
    "tags": {
      "displayName": "deploySql2016Ctp3"
    },
    "properties": {
      "publisher": "Microsoft.Compute",
      "type": "CustomScriptExtension",
      "typeHandlerVersion": "1.4",
      "autoUpgradeMinorVersion": true,
      "settings": {
        "fileUris": [
          "https://raw.githubusercontent.com/jgardner04/Sql2016Ctp3Demo/master/Sql2016Ctp3Demo/CustomScripts/deploySql2016Ctp3.ps1"
        ],
        "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File deploySql2016Ctp3.ps1"
      }
    }
  }
]
[/code]

With this post we showed how to create a virtual machine and customize it through the use of Azure Resource Manager templates. In future posts we will explore how to expand the use of Azure Resource Manager templates to create complex services that include multiple Azure resources and services. Are you using Azure Resource Manager templates in your environment? We would love to hear about it in the comments below.

If you like the content on my blog, I also blog on the US Azure and Data Analytics Partner Blogs.  I encourage you to check those out for more great resources. Also don't forget to follow me on Twitter as much of what I talk about is related to Azure.

Shutdown tagged VMs with Azure Automation

adobestock_97576601.jpg

In my previous post, PowerShell to update the tags on resources, I added tags to the virtual machines in my subscription. There are a host of reasons why resources may be tagged in Azure. I have seen customers use them to identify the department or application that resources belong to, and I have seen partners use tags for billing by tagging resources by customer. The uses vary, but as a huge fan of automation I am going to use them to automate tasks against my virtual machines. In this post I will use tags in conjunction with Azure Automation to shut down virtual machines at the end of the day. Note: This article assumes that you are familiar with the Azure portal. I am not writing a full step-by-step article; while I will outline all of the things that need to happen, I am not doing a "click here" walk-through.

The Setup

There are a lot of templates in the Azure Automation gallery that can be used for controlling VMs, but for me it was not that simple. The preferred method of securing Azure Automation is to use RBAC. The problem, for me, is that to get Azure Automation working with RBAC you need to be able to add that resource to Azure Active Directory and then to the Azure subscription as a Co-Administrator. All of that works well if you own the Azure subscription or can get a user added. In my case, I do not and cannot, given the way our subscriptions are configured internally at Microsoft.

In my previous post about using Azure Automation, my SQL Agent in the Cloud, I created an Azure Automation Run As account when I created my Azure Automation account. I will use this account to perform the automation actions.

Setting up the Azure RunAs Account

In an effort to practice what I preach, I am going to link out to this portion of the post rather than recreate the wheel. Check out the documentation for how to Authenticate Runbooks with Azure Run As Account. I actually made some corrections to this documentation for the Azure team on GitHub while preparing this article, to get it to flow smoothly.

Once the New-AzureServicePrincipal.ps1 file has been run, two Azure Automation assets will have been created: a Certificate and a Connection. Both will be used in the code to shut down tagged VMs.

The Business

First, create the assets necessary to automate the subscription. The first is a schedule; this script will run daily. The second is a variable holding the subscription name, which is used when calling the Get-AzureRmSubscription command. Both assets can be created in the portal or from PowerShell, as sketched below. The final piece is to create the runbook itself: create a blank PowerShell runbook with the code that follows the sketch.
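
A minimal sketch of creating those two assets with the AzureRM.Automation cmdlets; the resource group, account, schedule, and subscription names below are placeholders for your own environment, and the variable name matches the Get-AutomationVariable call in the runbook that follows.

[code language="powershell"]
# Placeholders: replace the resource group, account, schedule, and
# subscription names with your own.
$rg      = "MyAutomationRG"
$account = "MyAutomationAccount"

# Daily schedule the runbook will be linked to
New-AzureRmAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "DailyShutdown" -StartTime (Get-Date).Date.AddDays(1).AddHours(19) -DayInterval 1

# Variable asset read by Get-AutomationVariable -Name 'Subscription' in the runbook
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "Subscription" -Value "My Azure Subscription" -Encrypted $false
[/code]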

[code language="powershell"] $currentTime = (Get-Date).ToUniversalTime() Write-Output "Runbook started."

# Establish Connection $Conn = Get-AutomationConnection -Name 'AzureRunAsConnection' Add-AzureRMAccount -ServicePrincipal ` -Tenant $Conn.TenantID ` -ApplicationId $Conn.ApplicationID ` -CertificateThumbprint $Conn.CertificateThumbprint

$subName = Get-AutomationVariable -Name 'Subscription' Get-AzureRmSubscription -SubscriptionName $subName

# Get a list of all tagged VMs in the Subscripiton that are running $resourceManagerVMList = @(Get-AzureRmResource | ` where {$_.ResourceType -like "Microsoft.*/virtualMachines"} | ` where {$_.Tags.Count -gt 0 ` -and $_.Tags.Name ` -contains "AutoShutdownSchedule"} | ` sort Name) Write-Output "Found [$($resourceManagerVMList.Count)] tagged VMs in the subscription"

#Shutdown any running VMs foreach($vm in $resourceManagerVMList) { $resourceManagerVM = Get-AzureRmVM -ResourceGroupName ` $vm.ResourceGroupName ` -Name $vm.Name ` -Status foreach($vmStatus in $resourceManagerVM.Statuses) { if($vmStatus.Code.CompareTo("PowerState/running") -eq 0) { $resourceManagerVM | ` Stop-AzureRmVm -Force Write-Output $vm.Name was Shutdown } } }

Write-Output "Runbook finished (Duration: $(("{0:hh\:mm\:ss}" -f ((Get-Date).ToUniversalTime() - $currentTime))))" [/code]

This code assumes that the only VMs in the subscription are running in the v2, or Azure Resource Manager, model. If you are dealing with a mixed environment, additional code will need to be written to shut down the classic VMs.
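
For reference, a rough sketch of what that additional code might look like using the classic Azure Service Management cmdlets. Classic VMs do not support resource tags, so this example simply stops every running classic VM; adjust the filter (for example, by name or cloud service) for your environment.

[code language="powershell"]
# Rough sketch for classic (v1) VMs, which do not support tags.
# Note: the classic cmdlets authenticate separately from the Run As
# service principal used above.
Add-AzureAccount
Select-AzureSubscription -SubscriptionName $subName

$classicVMList = @(Get-AzureVM | where {$_.PowerState -eq "Started"})
Write-Output "Found [$($classicVMList.Count)] running classic VMs in the subscription"

foreach($vm in $classicVMList)
{
    Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
    Write-Output "$($vm.Name) was shut down"
}
[/code]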

While this is designed to shut down my VMs at the end of the day, there are some exciting new features on the horizon to help with this as well.  Some of these power scheduling features will come standard in the new DevTest Labs service in Azure.

Are you using Azure Automation in your environment today?  I would love to hear about it in the comments below.  If you like the content on my blog, I also blog on the US Azure and Data Analytics Partner Blogs.  I encourage you to check those out for more great resources. Also don't forget to follow me on Twitter as much of what I talk about is related to Azure.

Adding Tags to Resources in Azure

adobestock_106652904.png

I am always looking for ways to automate my Azure environment. I use Azure as a demo and testing environment and do not want it running 24/7, and shutting off each virtual machine at the end of the day is time consuming. I want Azure Automation to do that for me. I am working on a post to show just how to do that, but the first step was to set a tag on the virtual machines that I wanted to shut down. In this post I will walk through setting up tags for virtual machines in Azure with PowerShell.

Organizing my resources by tags gives me the flexibility of applying them across resource groups and automating against them across my entire subscription. Tags can be applied in the portal, but with multiple virtual machines in an environment that is a time-consuming proposition. The smarter approach is to apply them systematically through PowerShell. In this short post I will share the script I used to apply tags to my virtual machines.

The Business

In the script below I set the tags on the entire DemoAndTesting resource group before applying them to the virtual machines. This step is not necessary if you only want to tag the virtual machines. I also limit the tags to the virtual machines in that same resource group in the code below.

[code language="powershell"]

Login-AzureRmAccount

$rmGroupName = "DemoAndTesting"

Set-AzureRmResourceGroup -Name $rmGroupName -Tag @( @{ Name="vmType"; Value="test"})

$tags = (Get-AzureRmResourceGroup -Name $rmGroupName).Tags

Get-AzureRmResource | `
    where {$_.ResourceType -eq "Microsoft.Compute/virtualMachines" -and $_.ResourceGroupName -eq $rmGroupName} | `
    ForEach-Object {Set-AzureRmResource -Tag $tags -ResourceId $_.ResourceId -Force}

[/code]

To apply the tags to all virtual machines in a subscription, the code would look like the following.

[code language="powershell"]

Login-AzureRmAccount

Get-AzureRmResource | `
    where {$_.ResourceType -eq "Microsoft.Compute/virtualMachines"} | `
    ForEach-Object {Set-AzureRmResource -Tag @( @{ Name="vmType"; Value="test"}) -ResourceId $_.ResourceId -Force}

[/code]

As a database administrator at heart, I hated to create a cursor (ForEach-Object) to do this, but a set-based pipe didn't work. I would love to hear from you if you are doing this in a different way. How are you using tags in your Azure environment? Let us know in the comments below.

Building Azure Resource Manager Templates

adobestock_99956429.jpg

Part of my responsibility as a Partner Technology Strategist at Microsoft is to work on community motions. I regularly publish blog articles on the Microsoft US Partner Community Azure blog and am working on building the newly created Data & Analytics Partner Community blog. In March I published an article on creating a SQL Server 2016 demo and linked to an Azure Resource Manager template I created to deploy a Windows Server virtual machine with SQL Server 2016 installed and the AdventureWorks sample databases deployed. In this article I am going to go through the details of setting up this ARM template and some of the tools I used to get there. tl;dr - If you don't care about how the sausage was made, you can grab the code from the GitHub repository.

Note: All of this code was written in Visual Studio 2015 Enterprise but can be created using the Community edition as well. If you don't want to use Visual Studio at all, that works too; ARM templates are simply JSON files and can be written in your favorite text editor (at the moment, mine is Atom).

Setup

With Visual Studio installed, make sure that you have the Azure SDK installed as well; this adds the Azure Resource Group project template that we need. With that done, get VS running and create a new project. Select Azure Resource Group, as seen in the image below, to get the project set up.

AzureResourceGroup

The next screen, seen below, takes the templates one step further and allows you to get started with some pre-built configurations. These are certainly a great place to start if you are looking to build one of those solutions. For our purposes, we are going to create a blank template.

SelectTemplate

The Business

Scaffold

This blank template builds out the project scaffolding with 3 main files, seen in the image, but the one where we will focus most of our attention is the azuredeploy.json file. The file has the basic structure of an ARM template, and if you wanted to write the JSON from scratch, this is where you would do it. The Azure team, and the broader community, have created some great reference templates to help get you started and published them on GitHub.
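
For reference, the blank azuredeploy.json the project creates is just the empty ARM template skeleton, roughly like this:

[code]
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
[/code]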

As I mentioned earlier in this article, it is quite possible to use your favorite text editor to create ARM templates, but the power of VS can be seen by opening the azuredeploy.json file. This opens the JSON Outline window, which not only helps you navigate the template but also has some great creation tools. Clicking the cube in the upper left of that window, seen in the image below, will bring up the Add Resource dialog.

JSONOutline

This dialog can be used to create a number of items within the template and build out the associated parameters and variables as well. For my SQL 2016 VM deployment I ended up with 5 resources, 9 parameters, and 18 variables. The full JSON code can be found here. I would normally post it directly in the post, but at 254 lines, and with no JSON support in the syntax formatter, it would make the post troublesome to read.

The Magic

Deploying a virtual machine through an ARM template is not necessarily where the magic happens. Using the Custom Script for Windows extension is where I am able to deploy the AdventureWorks databases. The script does a few things, listed below, and can be downloaded from GitHub.

  • Create the folder structure necessary to deploy the databases
  • Download and extract the files
  • Move the backup files to the proper location
  • Use SQL PowerShell to execute the restore commands
  • Open the firewall ports necessary to connect to the SQL Server

Deployment

As with all things SQL, and most things Azure, the way you deploy an ARM template, well, depends. If you are using Visual Studio, right-clicking on the project will allow for deployment right from inside of VS. If you prefer to stay away from Visual Studio, templates can also be deployed from PowerShell using the New-AzureRmResourceGroupDeployment command, or with azure group deployment create from the Azure CLI. There are additional ways as well, and a great breakdown can be found in the Deploy resources with Azure Resource Manager templates documentation.
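
As an illustration, the PowerShell path looks roughly like this; the resource group name, location, and file names are placeholders for your own project.

[code language="powershell"]
# Create (or reuse) a resource group, then deploy the template into it.
New-AzureRmResourceGroup -Name "Sql2016Demo" -Location "South Central US"

New-AzureRmResourceGroupDeployment -ResourceGroupName "Sql2016Demo" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json

# The equivalent call from the (classic) Azure CLI looks something like:
# azure group deployment create -g Sql2016Demo -n Sql2016DemoDeployment -f azuredeploy.json -e azuredeploy.parameters.json
[/code]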

Azure Resource Manager templates are a powerful way to create repeatable deployments on Azure. Are you using ARM templates today? I would love to hear about how you are using them in the comments below.

Azure Automation, my SQL Agent in the Cloud

death_to_stock_photography_weekend_work-2-of-10.jpg

My focus for the past 18 months at Microsoft has been on Azure, but that does not mean I left my love for SQL behind. In fact, it has become an asset. In the course of regular operations I have built out a workflow to import call statistics and reporting data from our community activities into Azure SQL for reporting in Power BI. In that process I needed the ability to run a stored procedure on a schedule to normalize some data. Without a SQL Agent in Azure SQL Database, I use Azure Automation to get this done. In this article I will walk through the application workflow and how I set up Azure Automation to be my SQL Agent in the cloud.

Workflow & Setup

Before we get started, a bit of context on the data workflow. The raw data is delivered via email in a password-protected Excel file. I extract the relevant data into a .CSV file and upload it to Azure Blob Storage. I have created an Azure Data Factory pipeline to check the storage location and pipe the data into a staging table in an Azure SQL Database. At this point, I need to normalize the data into my database and archive the file in case I want to access the raw data later.
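
For the upload step, here is a short sketch using the Azure Storage cmdlets; the storage account, key, container, and file names are placeholders for your own environment.

[code language="powershell"]
# Upload the extracted .CSV into the blob container that the
# Data Factory pipeline watches.
$storageAccountKey = "<storage account key>"
$ctx = New-AzureStorageContext -StorageAccountName "mystatsstorage" `
    -StorageAccountKey $storageAccountKey

Set-AzureStorageBlobContent -File "C:\Data\CallStats.csv" `
    -Container "incoming" -Blob "CallStats.csv" -Context $ctx
[/code]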

This article assumes that you are familiar with the Azure portal. I am not writing a full step-by-step article; while I will outline all of the things that need to happen, I am not doing a "click here" walk-through. I also am not going to cover the movement of the file from one blob to another; I will do that in a separate post.

Automation Account

As the name of the article suggests, we are going to start with an Automation account. Create an account with the requisite name, subscription, resource group, and location. I chose to create an Azure Run As account, but determine whether that is right for your security needs. Once open, the default Automation account looks like the image below.

AutomationAccount

The Basics

Before we get started with the specific workflow, it is important to understand the structure of an Automation account. Runbooks are where we write the actions that we want to perform; assets are resources of various types that we can call into a runbook; and jobs are the actual executions of a runbook. It is also important to note that you can nest runbooks for complex tasks.

I cover the separation of assets from the code to highlight the fact that a single runbook can be written to execute against many different environments. In this case we can create one runbook that executes across multiple SQL Servers by creating a combination of assets and jobs. An advanced example might be index maintenance that you perform with a single job that connects to all of your databases.

Creating Assets

Before creating the runbook, we will create some assets to call from it. The first is a credential: the credential for the SQL Server that you will connect to. The second asset is a schedule. I run my script daily, so I created a daily schedule, but there is an hourly option as well.
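
If you prefer scripting the setup, here is a minimal sketch of creating those two assets with the AzureRM.Automation cmdlets; the resource group, account, and asset names are placeholders for your own environment.

[code language="powershell"]
# Placeholders: replace the resource group, account, and asset names
# with your own.
$rg      = "MyAutomationRG"
$account = "MyAutomationAccount"

# Credential asset holding the SQL login for the target server
$sqlCred = Get-Credential -Message "SQL login for the target server"
New-AzureRmAutomationCredential -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "SqlCredential" -Value $sqlCred

# Daily schedule that will drive the runbook
New-AzureRmAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "DailyNormalization" -StartTime (Get-Date).Date.AddDays(1).AddHours(6) -DayInterval 1
[/code]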

The Runbook

With my assets created, I create a PowerShell Workflow Runbook with the following code.

[code language="powershell"]

workflow Execute-SQL
{
    param(
        [parameter(Mandatory=$true)]
        [string] $SqlServer,

        [parameter(Mandatory=$false)]
        [int] $SqlServerPort = 1433,

        [parameter(Mandatory=$true)]
        [string] $Database,

        [parameter(Mandatory=$true)]
        [PSCredential] $SqlCredential
    )

    $SqlUsername = $SqlCredential.UserName
    $SqlPassword = $SqlCredential.GetNetworkCredential().Password

    inlinescript{
        $Connection = New-Object System.Data.SqlClient.SqlConnection("Server=tcp:$using:SqlServer,$using:SqlServerPort;Database=$using:Database;User ID=$using:SqlUsername;Password=$using:SqlPassword;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;")
        $Connection.Open()
        $Cmd=New-Object System.Data.SqlClient.SqlCommand("EXECUTE usp_MyStoredProcedure", $Connection)
        $Cmd.CommandTimeout=120
        $DataSet=New-Object System.Data.DataSet
        $DataAdapter=New-Object System.Data.SqlClient.SqlDataAdapter($Cmd)
        [void]$DataAdapter.fill($DataSet)
        $Connection.Close()
    }
}

[/code]

While you can create the workflow yourself, you do not necessarily need to create it from scratch. There is a gallery with hundreds of community-driven templates to get you started. To create a runbook from the gallery, simply hit the Gallery button shown below.

Gallery

Scheduling Execution

The final step to making Azure Automation your SQL Agent in the cloud is to schedule the Runbook.  From the Runbook panel, shown in the image below, select Schedule and associate the one created when setting up our assets.  Then configure the parameters that are defined in the Runbook (SqlServer, SqlServerPort, Database, SqlCredential).

Schedule

Note that SqlCredential takes the name of the credential asset created earlier.  The rest of the parameters take their actual values unless they have also been defined as assets.
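
The same association can be scripted.  Here is a hedged sketch using the Az.Automation module; the runbook, schedule, asset, server, and database names are the hypothetical ones used throughout this post.

[code language="powershell"]

# Link the Runbook to the schedule and supply its parameter values.
# SqlCredential is the *name* of the credential asset, not the credential itself.
Register-AzAutomationScheduledRunbook -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" `
    -RunbookName "Execute-SQL" `
    -ScheduleName "DailyLoad" `
    -Parameters @{
        SqlServer     = "myserver.database.windows.net"
        SqlServerPort = 1433
        Database      = "MyDatabase"
        SqlCredential = "SqlCredential"
    }

[/code]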

Wrap Up

There are a ton of advanced features in Azure Automation that didn't get covered, but this should be enough of the basics to help you get started.  How are you using Azure Automation?  We would love to hear from you in the comments below.

Microsoft and Open Source


This week I spent 2 days in New Orleans at the RedHat North American Partner conference.  As a Microsoft employee, I did not think this would be something that would ever happen, but in a world of consumption economics, my attendance makes perfect sense.  In this post I will talk a bit more about Microsoft's open source landscape, our RedHat partnership, and why consumption economics brought us together.

https://twitter.com/OpenAtMicrosoft/status/717133480845516800

While the embrace of open source appears to be a tectonic shift in Microsoft's company culture, the development of and commitment to open source aligns us with our corporate mission statement.  If we are truly to enable everyone on the planet to do more, we need to arm them with the tools to do so and support efforts that are already moving in that direction.  When I am in discussions with people about this topic, they often question how committed we actually are.  They understand that we want people using Linux on Azure, but how committed are you really?

At Microsoft our mission is to enable people and businesses throughout the world to realize their full potential

To answer that question, it is best to let actions speak louder than words.  I can find no better example than our decision to make one of our flagship products, .NET, open source.  While that is a single example, here are some additional figures that we track: as of this draft, Microsoft employees have committed 860,001 lines of code to open source projects today, and 445,107,561 lines of code since we started tracking the numbers.  The current President of the Apache Foundation, Ross Gardler (Blog | Twitter), is a Microsoft employee.  If we created a slide with the logos of all the open source companies that we work with, it would make NASCAR jealous.

I myself have even participated in those numbers.  I have created a repository on GitHub where I have published Azure solutions to help partners get started with some of our technologies.

The natural extension of our open source journey would then be to form a relationship with the largest players in the space.  We have done just that.  In November of last year, Microsoft and RedHat made the announcement that they were collaborating to get RedHat products working on the Azure cloud.  The announcement goes into deep detail about what the partnership means but some of the highlights are outlined below.

  • RedHat solutions available natively to Azure customers
  • Integrated support teams
  • Unified workload management across hybrid cloud deployments
  • Collaboration on .NET for a new generation of application development capabilities

In his book Consumption Economics, J.B. Wood draws parallels between the rise of electricity and the modern power grid and the development of cloud computing.  Microsoft has embraced this idea with the development of our Azure platform and, in doing so, added fuel to our open source alignment.  For Microsoft to provide the electricity that empowers people to do more, we had to ensure that our cloud platform works with all of the ways people might consume it.  It is because of this that we are so engaged in the most popular open source projects working today.  We want to ensure that solutions developed not only on our technologies but also on open source work equally well on our platform.  The latest numbers I have heard are that 25% of all virtual machines running on Azure are Linux-based.

So to that end, my role at the RedHat North American Partner conference was an extension of what I already do today.  I talked with RedHat partners about how working with Microsoft and our Azure platform can help their customers, and I am looking forward to the development of those relationships.  These are exciting times we live in; I for one cannot wait to see what our partners and customers do with this new RedHat partnership.

Morning Routine


As an engineer at heart, I am always looking for ways to streamline the system.  Over the past few months, I have been analyzing how I approach my mornings.  Through somewhat anecdotal research, I have been able to identify a few patterns and practices that make up my most productive days.  From this information I am working to derive a morning routine.  In this post, I will outline that routine and how each element plays a role in my productivity.

Sleep

Perform a web search and you will find millions of results about why sleep is so important.  My sleep patterns have always been erratic, but on the days I was most productive I had slept well the night before.  I love data and wanted to find a way to determine if my hypothesis was correct, and then how I could monitor my progress.

About the time I started looking into this, I received my shiny new Microsoft Band 2 and began tracking my sleep.  I found some interesting data.  I was not, according to Microsoft Health, sleeping well.  I tended to wake up many times, 6 on average.  For the past 2 years, since she was 6 weeks old, I have shared the bed with my dog.  Her favorite thing to do is to curl up in my armpit.


The problem is that I am a very light sleeper, and when she would move, I would wake up.  Armed with this data I was able to break through the guilty face my wife gave me every time I had tried this in the past.  In a single night of the experiment I was able to get a good night's sleep.  In the weeks that followed, the data piled up, and I am now getting better sleep than ever.

Exercise

Exercise is another one of those things that we all should be doing.  This is not a New Year's resolution discussion; it is a lifestyle discussion.  I have found a direct correlation between my most productive days and exercise.  If you don't want to take my word for it, a simple search will reveal similar findings from much more scientific sources.  Exercise also scales: it does not have to mean a run or a trip to the gym.  While I certainly encourage those things, even a simple walk helps jump-start my creativity.

Oddly enough, one of the major obstacles to my sleep was a driver for my morning exercise.  I have conditioned Maggie to a 2 mile walk every morning.  This simple act, while not always easy to start, helps define the start of my day.  I have heard a lot about meditation but have yet to take up the practice; I find my walks with her to be my meditation time.  They help me get my mind aligned for the day to come.  I don't listen to music, audiobooks, or podcasts.  It is just time for the two of us to enjoy the morning together.

I have also put a renewed focus back on running.  I am still doing CrossFit and Olympic weightlifting in my garage, but I have always enjoyed running and wanted to fold it into my regular rhythm.  I do not have a race goal in mind; I simply enjoy running.  Four mornings a week I put time in on the road after I take Maggie on her morning walk.  Being the data guy that I am, I of course track everything.  I track my runs with my Garmin Fenix 3 Sapphire, and if anyone is interested in seeing my data, it can be found on Garmin Connect.

Diet

Like exercise, this is something I have put renewed focus and energy into.  I love to cook; being a native of South Louisiana means cooking is in my blood.  After watching documentaries like Food, Inc., Fat, Sick, and Nearly Dead, and Forks over Knives, I had focused on better quality ingredients, but I was not on any real diet.  I have always been pretty healthy, even weighing just shy of 130kg.  Microsoft encourages employees to participate in an annual health screening, and when I got my latest blood test back, I found that I was flirting with high cholesterol and knew it was time for some changes.  I was going to get serious about eating better.

I didn't make any dramatic changes; I just started paying better attention to what I was putting into my body.  I downloaded the MyFitnessPal app on my phone and set a goal to lose 1kg per week.  The app gave me a calorie target and I began entering my food.  I started to notice that the macro breakdown of what I was eating and what I thought I was eating didn't match up.  I thought I was getting balance when in fact my diet was very carb heavy.  I focused on fixing that and settled on a regular morning of about a tablespoon of almond butter, a banana or fruit of some kind, and a Progenex recovery shake after my morning workout.  I find that starting the day off with a balanced breakfast sets me up for a productive day.

Writing

I spend much of my life communicating via the written word.  The only way to improve is to actually do it.  I take about 10-15 minutes every morning to write in a journal.  There are a lot of different ways to do this, but much of the time I simply write in my Evernote Moleskine notebook and then capture it for longevity in Evernote.  Once there, it is run through an Optical Character Recognition (OCR) process so I can search for it later.

“There is nothing to writing. All you do is sit down at a typewriter and bleed.” - Ernest Hemingway

No Email

I think there is a place for a discussion about the way email is used today, but this isn't the post for it.  Email in the morning is a dreadful distraction, especially when checked first thing.  Even the most focused individuals can be pulled down the rabbit hole and spend all day inside of email.  I open Outlook in offline mode to begin the working day, and the only reason I open it at all is that without my calendar I would not know what meetings I need to attend or where I am supposed to be.  I answer emails right before lunch and at 4pm, when I find I am not very creative and can get through the administrative task of processing my inbox.

With my calendar open I begin my day.  What does your morning routine look like?  How do you prepare for the day ahead?  Are you using a tool or pattern you would like to share?  I would love to hear from you in the comments section below.

Big Data: What's the Deal?


Marketing strikes again!  Big Data is such a catchy, vague term it was destined to become a buzzword.  It is much like that magical place, the "cloud", where panda bears ride unicorns that sing "It's a Small World After All".  In my job as a Partner Technology Strategist at Microsoft, part of my responsibility is to be technically deep in Data Platforms and Advanced Analytics.  I can't seem to convince my mom what that means and that, yes, it is a real job, but I digress.  I spend quite a bit of time explaining what big data is and what it really means.  In this post I will cover how I actually define "Big Data", the data analytics pipeline, an overview of lambda architecture, and finally how I talk about big data.

Big Data

I love talking to people about their environments and their data.  The environments vary wildly in size and data type.  But do they really have "Big Data"?  Data is considered big data if it exhibits at least one of the three Vs: volume, variety, and velocity.

Volume

For years, organizations have collected vast amounts of data.  This trend is only increasing at an exponential rate.  In a presentation I recently conducted for a partner, I used a few examples of scientific data collection.

  • In 2000 the Sloan Digital Sky Survey collected more data in its 1st week than was collected in the entire history of Astronomy
  • By 2016 the New Large Synoptic Survey Telescope in Chile will acquire 140 terabytes in 5 days - more than Sloan acquired in 10 years
  • The Large Hadron Collider at CERN generates 40 terabytes of data every second

The amount of data being collected can reach into the hundreds of gigabytes, terabytes, or even petabytes.  I saw a statistic recently that in 2010 Twitter was generating over 1 TB of tweets daily.  While these examples are meant to be extreme, I have worked with smaller organizations that have hundreds of terabytes of data, and that qualifies as big data.

Variety

Variety refers to the types of data that an organization collects.  Any given organization may have structured data from its ERP system and unstructured data it is collecting for brand analysis from social media.  These two data sets vary not only by type but in schema as well.  Organizations looking to make sense of these seemingly unrelated data types have big data.  With these data types, questions like the following can be analyzed: is Twitter activity or brand sentiment affecting sales?

Velocity

While velocity is self-explanatory, when it is used in the context of big data the individual records are typically small in size but enter the system at a rapid rate.  This is the type of data generated by sensors, IoT devices, or SCADA systems.  These environments may generate 100,000 1 KB tuples per second, which works out to roughly 100 MB of incoming data every second, or several terabytes per day.

Data Analytics Pipeline and Lambda Architecture

While there is certainly debate about additional ways to define big data, what we have established in this post allows us to shift focus to how we actually process the data.  The stages of the data analytics pipeline follow the logical flow of the data: ingest, processing, storage, and delivery.  When we discuss the three Vs, it is clear that there are many different types of data and that the volumes to be processed can be quite large; enter lambda architecture.

Lambda architecture was designed to meet the challenge of handling the data analytics pipeline through two avenues: streaming and batch processing.  These two data pathways merge just before delivery to create a holistic picture of the data.  The streaming layer handles data with high velocity, processing it in near real time.  The batch layer handles large volumes of data, and batch processing can take extended periods of time.  By combining the layers, the streaming data fills in the time gap left by the batch layer.  The image below illustrates this concept.

Lambda Architecture Outline
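
To make the merge step a little more concrete, here is a deliberately tiny PowerShell toy, not a real implementation and not tied to any particular service: the batch view holds totals precomputed from the full archive, the speed view holds counts for events that arrived after the last batch run, and a query merges the two.

[code language="powershell"]

# Batch layer output: totals recomputed on a schedule from the full archive
$BatchView = @{ "sensor-01" = 125000; "sensor-02" = 98000 }

# Speed layer output: running totals for events seen since the last batch run
$SpeedView = @{ "sensor-01" = 42; "sensor-03" = 7 }

function Get-EventCount {
    param([string] $Key)

    # Serve the query by merging the batch and speed views
    $batch = if ($BatchView.ContainsKey($Key)) { $BatchView[$Key] } else { 0 }
    $speed = if ($SpeedView.ContainsKey($Key)) { $SpeedView[$Key] } else { 0 }
    return $batch + $speed
}

Get-EventCount -Key "sensor-01"   # returns 125042

[/code]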

In my role at Microsoft, I find myself having this discussion not only with partners but with internal resources as well.  I present it in a format very similar to this post.  It is with this basic understanding that we are able to explore the more interesting topic of how Microsoft has created services on Azure to support this model and the interesting products and services our partners and customers are building with them.  Are you using big data services on Azure?  We would love to hear about them in the comments below.  If you are interested in learning more about Azure, you can find more posts and information on the US Partner Blog, where I also write.

Book Review: Extreme Ownership


It is no secret that I am a fan of Tim Ferriss (Blog | Twitter).  I have mentioned it in multiple blog posts and podcasts.  A few weeks ago, his podcast guest was Jocko Willink (Twitter), co-author of a book called Extreme Ownership.  I am fascinated by leadership books but also by the broader notion of personal responsibility.  I have also always been into military books, so it would seem that the trifecta was complete and I had to read this book.  I consumed this tome in 2 days and connected with many of the book's central themes.  One of the points that Jocko and Leif call out is that none of the leadership principles in the book are new.  There is no material in leadership, much like self-help, that has not been explored for thousands of years.  It is a refreshingly honest position to take.  What makes their approach connect with me as a reader is the way they are able to take leadership lessons learned under mortal consequences and relate them to business scenarios.

We hope to dispel the myth that military leadership is easy because subordinates robotically and blindly follow orders.  On the contrary, U.S. military personnel are smart, creative, freethinking individuals - human beings.  They must literally risk life and limb to accomplish the mission.  For this reason, they must believe in the cause for which they are fighting.

The book is structured so that each chapter has three main sections.  The first is a narration of a leadership challenge that the authors faced at war or while preparing for it.  This is followed by an explanation of the leadership principle involved in the story, and finally by relating the principle to a business scenario the duo has encountered in their consulting engagements since leaving military service.  This format works well and enables the authors to drive home the lesson they are trying to teach.

Often, the most difficult ego to deal with is your own.

One of the principles of the book revolves around ego and the challenges it creates when leaders fail to keep it in check.  Ego is not demonized, as it can be what drives leaders to be successful, yet an awareness of ego and the ability to understand how it can adversely affect success is a critical leadership skill.  While I try to keep control of my own ego, I have certainly failed many times.  I have also been around leaders who assume that everyone in the room is smarter than they are, yet are confident enough to lead.  In my experience these leaders draw comments like, "This is the best boss I have ever worked for" or "I would love to work with that guy".

While a simple statement, the Commander’s Intent is actually the most important part of the brief. When understood by everyone involved in the execution of the plan, it guides each decision and action on the ground.

The concept of Commander's Intent may seem like a purely military idea, but in its simplest form it answers the "why" question: "Why are we doing this?"  Everyone executing the plan needs to be able to answer this question.  This concept is one that can get lost in a large organization, but as a leader it is imperative to explain the goal until the individual contributors on the team can explain why they are doing something.  If they understand the why, they can make better decisions, be more creative, and believe in what they are doing.

The title Extreme Ownership implies a need to take personal responsibility, and this is certainly a main theme of the book, but it is also an area of personal interest.  It seems that whenever I pass by a television with the news on, someone is blaming someone for something.  They are fat because of, they are sick because of, I didn't get a raise because of, this project failed because of, etc.  Unless the statement is "because of me", an honest review of the situation has not been completed.  Jocko and Leif are able to shine a light on this problem.  If more people were to conduct themselves with extreme ownership, they would find a sense of enablement instead of a need for entitlement.

These are just a few of the concepts offered by Jocko and Leif, and I feel all leaders should read this book and take it to heart.  Leading people is one of the most challenging jobs around, and understanding the foundations outlined in the book paves the way to success.  This is one of the few books that I have wanted to reread as soon as I finished it.  I even got my wife to read it, and I have talked about it with many of my friends, family, and colleagues.  Have you read Extreme Ownership?  I would love to hear your thoughts in the comments section below.

If you like this review and would like to receive exclusive content from Beyond the Corner Office, sign up for our newsletter.  Finally, don't forget to check out our podcast.