Azure, Docker, Linux, Technical

Bash on Windows Productivity Talk

The biannual Denver Dev Day was last week, and I had the opportunity to present a session titled “Using Bash on Windows to Increase your Productivity” to an awesome room of fellow techies. The idea for the session came from my increasing use of Bash and Linux, specifically the Windows Subsystem for Linux (WSL), and I thought the talk might not only help others learn a few new tools or tricks but also help me learn what others are doing. I was right on both counts! If you were there, thanks for coming, and I hope it was worth your time investment. If you weren’t there, the session abstract and slides are below. That said, the majority of the time was spent in Bash, showing different scenarios and trying things folks threw out at me, which was fun! Here’s a sampling of what I showed (a few representative commands are sketched after the list):

  • Edit Windows files via /mnt and an alias
  • Built-in VS Code support
    • Launch a project from bash
    • Integrated shell
  • Run any Win exe
    • echo $PATH to show what was included and my modifications
    • Launching Visual Studio 2017
    • Docker tools
    • K8s / minikube
      • Running minikube start requires the window to have administrator rights, so we discussed the differences between Windows and Linux users/permissions
  • Run bash from CMD
    • dir | bash "grep Desk"
    • bash -c "ls -lh" | findstr Desk
  • Azure
    • Multi-window/multi-account (I use a separate Linux user for each Azure subscription)
      • az account show | jq .name
    • Multi-pane with Help
  • Dotfiles (My work-in-progress dotfiles)
    • GitHub for environment consistency & rebuild
  • Shell in Azure
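
If you want to try a few of these yourself, here is a rough sketch of the interop bits run from a regular PowerShell prompt on a Windows 10 machine with WSL installed (the last line assumes the Azure CLI and jq are installed inside your WSL distro):

# Pipe Linux output into a Windows tool, and Windows output into a Linux tool
bash -c "ls -lh" | findstr Desk
Get-ChildItem | bash -c "grep Desk"

# Azure CLI plus jq running inside WSL, invoked from the Windows side
bash -c "az account show | jq .name"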

Session Abstract:

Did you know Windows 10 can run a Linux Bash shell? While it may seem weird to see those words together, that’s no reason to shy away from considering how this new capability can be leveraged to increase your day-to-day productivity. Think about all the Linux features, code samples, tutorials and tools that are out in the world. Now think about all the Windows counterparts. Bash on Windows gives us the option to use all of it on a single operating system, and I’ll show you how!

This session will show you how to get up and running, and then we’ll spend some time looking at specific development scenarios and why you would want to use Bash. If development isn’t your focus, we’ll also look at some DevOps scenarios targeting Azure. Along the way I’ll show you some of my favorite tools, tips, and tricks so you leave the room with knowledge you can immediately put to good use.

Azure, PowerShell, Technical

New Service Fabric PowerShell Cmdlets

If you prefer to use PowerShell to interact with Azure and you are working with Service Fabric, today is your lucky day! Technically, //Build, held a couple of weeks ago, was your lucky day, since that’s when these were released, but today is when I’m getting around to writing this post.

Announced a couple of weeks ago, these new cmdlets let you do cluster management from PowerShell. Tasks such as creating a cluster, adding or removing a node, adding or removing a node type, and changing reliability or durability are now all possible. As of today, here are the new commands, currently at v0.1.1:

Add-AzureRmServiceFabricApplicationCertificate
Add-AzureRmServiceFabricClientCertificate
Add-AzureRmServiceFabricClusterCertificate
Add-AzureRmServiceFabricNode
Add-AzureRmServiceFabricNodeType
Get-AzureRmServiceFabricCluster
New-AzureRmServiceFabricCluster
Remove-AzureRmServiceFabricClientCertificate
Remove-AzureRmServiceFabricClusterCertificate
Remove-AzureRmServiceFabricNode
Remove-AzureRmServiceFabricNodeType
Remove-AzureRmServiceFabricSetting
Set-AzureRmServiceFabricSetting
Set-AzureRmServiceFabricUpgradeType
Update-AzureRmServiceFabricDurability
Update-AzureRmServiceFabricReliability

For the latest documentation, check out the docs.

Installation

Admittedly, I’m not a huge PowerShell user, but I wanted to at least give these a quick test run, especially the New-AzureRmServiceFabricCluster command, since it lets us create a new cluster without writing an ARM template! Pretty cool…for the scenarios where it supports the customizations we need. More on that later. When I say I’m not a huge PowerShell user, I mean I didn’t even have the Azure PowerShell SDK installed on my main machine, so off I went to install it. Just my luck, the first install didn’t go well: when I tried running some of these new commands, I got an error saying I needed to run Import-Module on AzureRM.ServiceFabric. When I did that, I got this error:

Import-Module : The module to process '.\Microsoft.Azure.Commands.ServiceFabric.dll', listed in field 'NestedModules' of module manifest 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager\AzureRM.ServiceFabric\AzureRM.ServiceFabric.psd1' was not processed because no valid module was found in any module directory.

Indeed, the dll it was looking for didn’t exist. After some unsuccessful troubleshooting, I gave up, removed the SDK, and reinstalled. That time it worked.
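
If you hit the same thing, a quick sanity check after reinstalling looks something like this (nothing Service Fabric specific here, just the standard module cmdlets):

# Confirm the module manifest and dll are actually on disk, then load it and list the new cmdlets
Get-Module -ListAvailable AzureRM.ServiceFabric
Import-Module AzureRM.ServiceFabric
Get-Command -Module AzureRM.ServiceFabric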

Create a New Cluster

Starting with Hello World, I wanted to create a new cluster. Nothing fancy, just following one of the examples given in the help documentation. After adapting the example to my environment, I ended up with this (watch for wrapping if you copy/paste):

$pwd="OneSuperSecret@99" | ConvertTo-SecureString -AsPlainText -Force
$RGname="testposhasf"
$clusterloc="SouthCentralUS"
$subname="$RGname.$clusterloc.cloudapp.azure.com"
$pfxfolder="c:\MyCertificates\"

Write-Output "create cluster in " $clusterloc "subject name for cert " $subname "and output the cert into " $pfxfolder

New-AzureRmServiceFabricCluster -ResourceGroupName $RGname -Location $clusterloc -ClusterSize 3 -VmPassword $pwd -CertificateSubjectName $subname -CertificateOutputFolder $pfxfolder -CertificatePassword $pwd -OS WindowsServer2016DatacenterwithContainers

That creates a 3-node cluster using the Server 2016 with Containers OS and secures it by creating a new certificate, storing it in Key Vault, and downloading it locally so I can use it. (Install the certificate locally before trying to access Service Fabric Explorer.) It took around 10 minutes and resulted in a usable cluster, all without writing a single line of JSON!
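
Once it finishes, you can confirm the cluster using one of the Get cmdlets from the list above (the exact properties returned will vary):

Get-AzureRmServiceFabricCluster -ResourceGroupName $RGname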

And here’s the Resource Group view, showing all of the artifacts it created for me:

A few things to point out:

  1. While this command does create a secure cluster, notice it created the Key Vault in the same Resource Group. Not really the best deployment scenario, but it gets the job done. If you’d prefer to use an existing Key Vault, or keep the Key Vault in its own Resource Group, use one of the other options of the same command. Examples are shown in the help.
  2. For some reason, it created the cluster with version 5.5.216 of the Service Fabric runtime, whereas the latest version is 5.6.210 (and preferred when using Windows containers). Hopefully this will get fixed soon.
  3. If you don’t like the naming scheme (does “l5nbd6qsesaeu100” mean anything to you?), you’ll need to create a JSON template.
  4. For control over many other options (such as deploying into an existing VNET), you’ll be back in JSON.
  5. Even if you are back in JSON, you can still leverage this command by passing in your template file (see the other examples in the help, and the sketch below).
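
To illustrate that last item, here is a rough sketch of what passing your own template might look like; the parameter names come from the cmdlet help (double-check with Get-Help New-AzureRmServiceFabricCluster -Examples) and the file names are placeholders:

New-AzureRmServiceFabricCluster -ResourceGroupName $RGname `
    -TemplateFile .\cluster.template.json `
    -ParameterFile .\cluster.parameters.json `
    -CertificateSubjectName $subname `
    -CertificateOutputFolder $pfxfolder `
    -CertificatePassword $pwd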

All in all, I think this is a great start and I like where the tooling is going. I can’t wait to see these capabilities grow and, hopefully, be adopted over in the CLI world.

Azure, Azure Government, Technical

Azure Event Hubs vs AWS Kinesis

With Amazon and Microsoft being the main providers of cloud-based telemetry ingestion services, I wanted to do a feature and price comparison between the two. If nothing else, this info should help with an understanding of each service’s capabilities and perhaps help with making a decision on which service is best for your needs. I realize if you’re on AWS you’re probably going to use Kinesis, and if you’re on Azure you’re probably going to use Event Hubs. But at least arm yourself with all the info before diving in!

Two caveats to this info worth noting:

  1. Yes, I work for Microsoft. I did not fudge these numbers or any of the info to paint a nicer picture for Azure. This info is factual based on my research into both services.
  2. Cloud services and their pricing change, so these specs and pricing are current as of the date of this post and you should re-check on Azure or AWS to verify.

This is a purely objective comparison focused on service specs. I’m not going to get into the usability of either service, programming efficiency, portal experiences, or anything else like that. Just numbers. Notice there are a couple question marks on the AWS side because I couldn’t find the info in the Kinesis documentation and folks I asked didn’t know. If you can help fill in those gaps, or notice some of this has changed, please let me know in the comments.

 

| | Event Hubs | AWS Kinesis |
| --- | --- | --- |
| Input Capacity | 1MB/s per Throughput Unit (TU) | 1MB/s per Shard |
| Output Capacity | 2MB/s per TU | 2MB/s per Shard |
| Events/s | 1K | 1K |
| Latency | 50ms Avg, 99th % < 100ms | 10s min |
| Protocol | HTTPS or AMQP 1.0 | HTTPS |
| Max Message Size | 256KB | 1MB |
| Included Storage | 84GB per TU | ?? (none?) |
| Max Consumers | 1 Consumer Group (Basic Tier); 20 Consumer Groups (Standard Tier) | ?? (only limited by output capacity?) (See <Update 6/1/2016> below) |
| Monitoring | Built-in portal metrics or REST API | CloudWatch |
| Message Retention | 24 hrs (up to 7 days) | 24 hrs (up to 7 days) |
| Price per Hour | $0.015/TU Basic Tier; $0.030/TU Standard Tier | $0.015/Shard |
| Price per Million Units | $0.028 Basic & Standard (64KB/unit) | $0.014 (25KB/unit) |
| Extended Data Retention Price | Only if stored event size exceeds 84GB * #TU’s, $0.024/GB (assuming LRS) | $0.020/Shard hour |
| Region used for pricing | East US | US East |
| Throughput Flexibility | Adjust TU’s as needed | Adjust Shards as needed |
| Supported Regions | 18 (plus GovCloud) | 9 |

<Update 6/1/2016> Turns out the answer to Max Consumers for Kinesis isn’t exactly straightforward due to its dependency on HTTP(S), as was pointed out to me after this post was published in February. Kinesis is limited to 5 read transactions per second per shard, so your max consumers will depend on how you spread those transactions across your consumers. If you have five consumers each reading once per second, five is your max. Since output is capped at 2MB/s, you can read up to that capacity in each transaction, but you have to design your consumers to work within those limits. Additional info is on this Stack Overflow thread.</Update 6/1/2016>

To compare pricing, I’m using the sample scenario from AWS. In case they change it, here is the sample the numbers below are based on:

“Let’s assume that our data producers put 100 records per second in aggregate, and each record is 35KB. In this case, the total data input rate is 3.4MB/sec (100 records/sec*35KB/record). For simplicity, we assume that the throughput and data size of each record are stable and constant throughout the day.”
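
To see where the numbers in the two pricing tables below come from, here’s my own quick back-of-the-envelope calculation using the rates from the comparison table above (a sketch, not an official pricing calculator):

$hours = 24 * 31                                          # 744 hours in a 31-day month

# Kinesis: 3.4MB/s of ingress needs 4 shards (1MB/s each); each 35KB record consumes two 25KB PUT payload units
$kinesisShards = 4 * 0.015 * $hours                       # = $44.64
$kinesisPut    = (100 * 2 * 3600 * $hours / 1e6) * 0.014  # ~535.68M units -> ~$7.50

# Event Hubs: 3.4MB/s of ingress needs 4 TUs; each 35KB record fits in a single 64KB ingress unit
$tuBasic       = 4 * 0.015 * $hours                       # = $44.64
$tuStandard    = 4 * 0.030 * $hours                       # = $89.28
$ingress       = (100 * 1 * 3600 * $hours / 1e6) * 0.028  # ~267.84M units -> ~$7.50

"Kinesis: {0:N2}   Event Hubs Basic: {1:N2}   Event Hubs Standard: {2:N2}" -f ($kinesisShards + $kinesisPut), ($tuBasic + $ingress), ($tuStandard + $ingress)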

Kinesis Pricing Sample

| | Kinesis |
| --- | --- |
| Shards | 4 |
| Shard cost/month (31 days) | $44.64 |
| PUT cost/month | $7.50 |
| Total | $52.14 |
| Extended Retention Cost | $59.52 |
| Total w/Extended Retention | $111.66 |

Event Hubs Pricing Sample

| | Basic | Standard |
| --- | --- | --- |
| TU | 4 | 4 |
| TU cost/month (31 days) | $44.64 | $89.28 |
| PUT cost/month | $7.50 | $7.50 |
| Total | $52.14 | $96.78 |
| Extended Retention Cost* | N/A | $47.24 |
| Total w/Extended Retention | N/A | $144.02 |

* Extended storage only available on Standard tier

Results

On the pricing side, I found it interesting that they come out to exactly the same price, unless you need extended retention and have to bump up to the Standard tier on Event Hubs. Comparing the specs, the items that jump out for me that might impact a decision are latency (Event Hubs blows away Kinesis), protocol (no AMQP on Kinesis), max message size (Kinesis allows quite a bit larger), the size of a pricing unit (64KB for Event Hubs vs. 25KB for Kinesis), and the number of regions. Whichever service you choose to go with, hopefully this info helps make the decision a bit easier.

Azure, Technical

Entity Framework Code First Deployment of Azure AD Sample

If you’re interested in building applications using Azure AD (and really, why would you *not* be?), the best code repository to be aware of is https://github.com/AzureADSamples. TONS of samples with documentation showing many different scenarios. This post takes a look at one of the samples in a bit more detail, specifically in the area of deploying the sample to Azure and implementing Code First deployment/migrations. Using EF and Code First can be a bit of a religious debate, which I will avoid in this post. The sample uses EF, and getting Code First to work takes only a few extra steps.

The sample I’m working with here is the WebApp-MultiTenant-OpenIDConnect-DotNet sample. Click the link to get the sample code and read the documentation on how to run the sample. Get everything up and running in Azure AD against a local deployment of the sample (covered in the GitHub documentation for the project), then come back here when you’re ready to get Code First set up and deploy to Azure.

Implement Code First

If you’re not familiar with Code First, take some time to read through the documentation on the asp.net website, specifically this walkthrough, which shows how to get it set up in your project as well as deployed. As you’ll see, I’m really not doing anything fancy here beyond what you see on the asp.net site. Actually, this is a bit simpler since we’re not doing any seeding.

  1. Add a connectionString entry to the web.config pointing to your local database
    In my case, I’m using ProjectsV12, but you could easily use MSSQLLocalDB, v11.0, or full SQL depending on your local dev machine setup:

    (If you previously ran the sample locally you already have a local database named TodoListWebAppContext. Either delete it or rename it. You could also update the project to use a different name. This isn’t necessary, but it helps demonstrate the Code First deployment later on in this post.)

  2. Remove existing initializer

    Because we’re using Code First to build our database, we don’t need the TodoListWebAppInitializer initializer that is currently called in Global.asax.cs to create the database. Open up that file and comment out line 19:

  3. Enable migrations
    Now we need to run a few commands in the Package Manager Console. If it’s not already open, click Tools -> NuGet Package Manager -> Package Manager Console. Once open, type "enable-migrations -ContextTypeName TodoListWebAppContext" and hit enter:

    You’ll notice a new folder called “Migrations” was added to the project along with a new Configuration.cs file. Just leave those as-is.

  4. Add a migration
    Now we need to add our first migration, which in this case is the initial creation of the database since I haven’t deployed the app locally yet. In the Package Manager Console, type "add-migration InitialCreate" and hit enter (InitialCreate is just a label for this migration; use whatever name identifies the change):

    Now you’ll see a few more files added to that new Migrations folder. If you poke through them, you’ll see they define the database changes to apply, and that the class inherits from EF’s DbMigration class. I won’t go through what they do or how they work here, but it’s worth the time to understand those concepts if you don’t already have them down (see the asp.net site linked earlier).

  5. Update the database

    Finally, we run update-database to let EF create the database based on our InitialCreate migration definition. In the Package Manager Console, type "update-database" and hit enter:

    When that’s finished, you now have a local database created for you based on the definition in the project. Open up SQL Server Object Explorer and expand your local DB to see the new database:

Success! Go ahead and run the project locally and, assuming you had everything hooked up correctly in Azure AD prior to these steps, all will work fine using this configuration. Feel free to rinse-and-repeat the above add-migration/update-database commands as you update the data model in the project. Each time you add a migration you’ll see some new files pop up in your Migrations folder.
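
For example, a later model change would look something like this in the Package Manager Console (the migration name here is hypothetical):

add-migration AddDueDateToTodoItem
update-database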

Deploy to Azure

Now let’s look at what it will take to deploy this project into an Azure Web App and SQL Database running in Azure. (I’m using the new Azure Preview Portal in the screenshots below)

  1. Create a new Resource Group
    To help keep your resources organized, create a new Resource Group in the closest region, in my case South Central. We’ll deploy our Web App and SQL Database into this Resource Group.
  2. Create a new Azure Web App
    I’m going with the Free tier here but it will work on any of the pricing tiers. Choose the same Location here that you chose for your Resource Group, in my case South Central.
  3. Create a new SQL Database
    Same story here when creating a SQL Database…choose a server (or create a new one) in the same region and add it to the Resource Group. I’m using the Basic pricing tier, which as of the date of this post is estimated at $4.99/mo.

    There is a free SQL tier that is available in the current management portal when creating a new Web App and that will work for this sample, too, if you prefer to go that route.

  4. Add connection string
    Once the database is created, click into the connection strings tab and copy the ADO.NET connection string:
    It helps to paste the connection string into a text editor so you can easily find the placeholder for the password and update that. If you don’t, your Web App won’t be able to connect to the database.

    Now open up your Web App and go into Application settings to access the Connection strings. Create a new connection string named "TodoListWebAppContext" (or the name you used in your web.config file, if different from what I have above), paste the connection string to your database into the value field, and click Save:

  5. Publish web app to Azure
    Ok, everything is now set up in Azure and ready for us to publish our application.
    1. Go back to your project in Visual Studio, right click the TodoListWebApp project and click on Publish
    2. Choose “Microsoft Azure Web Apps” as your target
    3. Log in to your Azure subscription (if prompted) and choose your web app from the drop down and click OK
    4. Leave the Connection screen without changes and click Next to the Settings screen
    5. On the Settings screen, the TodoListWebAppContext connection string should be pre-populated for you
    6. Check the “Execute Code First Migrations…” check box
    7. Click Publish and wait for the magic to happen
    8. After Visual Studio finishes publishing, your browser should open to your new Azure Web App…but don’t try to Sign Up or Sign In yet…we’re not done :)
  6. Update Azure AD application
    The last step is to get your app properly registered in Azure AD. You can either update the existing app you created when you first set up the sample, or start from scratch and create a completely new application in your AAD tenant. Here, I’m doing the former. If you create a new application, don’t forget to update the Client ID and Password from your new app in the web.config and re-publish your Web App.
    1. Log into the Azure management portal (https://manage.windowsazure.com) and drill down into your existing Azure AD tenant and application
    2. Click on the Configure tab
    3. Update the Sign-On URL to your new Azure Web App URL (use either http or https, just remember which so you navigate to the proper URL later for testing)
    4. Scroll down to the Single Sign-On section and find the Reply URL. Remove the existing URL and add in your Azure Web App URL
    5. Click Save
  7. Now the AAD, Web App and SQL Database are all set up. Navigate to your site and click on Sign Up and enter your AAD info as you previously did in the local sample, log in using your AAD user, and click on todos in the top nav
  8. More magic happened as you accessed the app for the first time. If you had looked at your SQL Database before the previous step, there would have been nothing there. That’s because the app creates the database the first time it’s accessed, which you just did. Open up your Server Explorer in Visual Studio and refresh your Azure connection. You’ll see your new SQL Database listed. Right-click on the database and choose “Open in SQL Server Object Explorer”, log in with the credentials you set up when you created the database, and you’ll be taken to SQL Server Object Explorer where you can interact with your new database like you would any other SQL database.

    The additional table you see there, “__MigrationHistory”, is owned by EF and is populated every time you do a deployment that includes a database change.

And that’s it! Feel free to go back to your project, update the data model, and re-publish to Azure. After you log back into the site and access todos, you’ll see the database reflect the data model change as well as a new entry in the __MigrationHistory table.

Azure, Azure Government, Technical

Using Event Hub and EventProcessorHost on Azure Government

There are a few needs that apply to almost every industry when it comes to building software and solutions. Manufacturing, healthcare, industrial, education, home automation, military and public safety (to name a few) all need to collect data from hundreds, thousands, or millions of data sources and bring that data together in order to either report on it as a whole or send it somewhere else. For example, consider a government agency responsible for monitoring rainfall and temperature across an entire country. It would be great if that agency could set up a few thousand monitoring stations around the country and have those stations report their sensor data to a central location for aggregation, where the agency could begin to see trends across various regions and across given time spans. That’s quite a bit more reliable and closer to real time than sending a worker out to each station to collect the data and bring it back to a data center manually.

In order to manage the intake and processing of what could be billions of pieces of data per day, we will need a scalable and efficient hub for all of the sources to talk to at once. Using architecture speak, we need a durable event stream collection service. Azure Event Hub was built to support these types of use cases and perform as the event stream collection service that sits in the middle of our Internet of Things (IoT) architecture. Once we get our environmental sensors set up to send their data to Event Hub, we can easily scale that service to support the thousands of devices we need and begin building really powerful reporting solutions that utilize the ingested data.

To see what an actual Event Hub implementation looks like on Azure Government, where Event Hubs was recently released (as of the date of this post, joining the other Azure regions), let’s start by setting up a simple Event Hub using a single instance of EventProcessorHost, following the instructions on the Azure documentation site. For the most part, using Event Hubs in Azure Government is the same as in any other Azure region. However, since the endpoint for Azure Government is usgovcloudapi.net instead of the windows.net endpoint used by other Azure regions, the sample needs to be modified a bit. Creating the Event Hub and storage account is exactly the same, as shown in the screenshots below using the USGov Iowa region:

Creating the Event Hub

Creating the Storage Account

Creating the sender client is also the same as shown in the example. The small tweak we need to make is on the receiver, which references the storage account we created previously, since EventProcessorHost uses a storage account while processing messages. Notice the URL for the storage endpoint in Azure Government is *.core.usgovcloudapi.net. When you create the EventProcessorHost in the receiver application, the default behavior of the class is to assume you are using a storage account located in the *.core.windows.net domain. This means if you run the sample as-is (with your Event Hub and Storage Account info, of course), you will get an error:

Since my Storage Account was named “rkmeventhubstorage”, the default behavior is to create a URI of rkmeventhubstorage.blob.core.windows.net. Obviously, that doesn’t exist. I need a URI of rkmeventhubstorage.blob.core.usgovcloudapi.net. What now?

Diving into the source for Microsoft.ServiceBus.Messaging.EventProcessorHost, you’ll see (or just save your time and trust me) that the blob client is created using the CloudStorageAccount class. Looking at the documentation for that class, you won’t see anything to help get that endpoint updated (as of the writing of this post). It turns out there’s an undocumented EndpointSuffix key you can add to the storage connection string. Bingo. All you need to do is set EndpointSuffix to core.usgovcloudapi.net and the stars will align. Here is the full Main method for the Receiver application, showing the use of the EndpointSuffix value.

// Connection and account details for the Azure Government endpoints
string eventHubConnectionString = "Endpoint=sb://rkmeventhub-ns.servicebus.usgovcloudapi.net/;SharedAccessKeyName=ReceiveRule;SharedAccessKey={YourSharedAccessKey}";
string eventHubName = "rkmeventhub";
string storageAccountName = "rkmeventhubstorage";
string endpointSuffix = "core.usgovcloudapi.net";
string storageAccountKey = "{YourStorageAccountKey}";

// The EndpointSuffix value is what points EventProcessorHost at *.core.usgovcloudapi.net
// instead of the default *.core.windows.net
string storageConnectionString = string.Format(
    "DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1};EndpointSuffix={2}",
    storageAccountName, storageAccountKey, endpointSuffix);

string eventProcessorHostName = Guid.NewGuid().ToString();
EventProcessorHost eventProcessorHost = new EventProcessorHost(eventProcessorHostName, eventHubName, EventHubConsumerGroup.DefaultGroupName, eventHubConnectionString, storageConnectionString);

// SimpleEventProcessor is the IEventProcessor implementation from the getting-started sample
eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>().Wait();

Console.WriteLine("Receiving. Press enter key to stop worker.");
Console.ReadLine();
eventProcessorHost.UnregisterEventProcessorAsync().Wait();

After adding that connection string value, your Receiver will be able to receive messages successfully.

Azure, Azure Government, PowerShell, Technical

Get Started with PowerShell on Azure Government

Many folks using Azure Government probably have a subscription or two on public Azure as well. If you’re bouncing between environments and using PowerShell in each, switching between them can become cumbersome. This post shows a method I’ve found easy to implement that makes switching between environments simple. As a footnote, this approach can also be used to set up environments beyond Azure Government, such as on-premises Azure.

If you do nothing after installing the Azure PowerShell modules and then run Get-AzureEnvironment, you’ll get two results (as of this posting): AzureCloud and AzureChinaCloud. So the first thing we need to do is add another environment for Azure Government. After that, we’ll use the certificate method to connect to our subscription. I prefer this method for three reasons:

  1. I can use this same certificate for my other subscriptions, allowing me to easily switch between them on the same machine
  2. Azure Government doesn’t support using Azure AD (Add-AzureAccount), at least based on my experiences (see edit below)
  3. Using a publishing settings file may work, but honestly I haven’t spent time with that method to see whether it works as well as using a certificate

Ok, let’s add that new local environment. Run the following Posh command (I included line breaks for readability):

Add-AzureEnvironment -Name "AzureGovernment" `
    -PublishSettingsFileUrl "https://manage.windowsazure.us/publishsettings/index?client=xplat" `
    -ServiceEndpoint "https://management.core.usgovcloudapi.net" `
    -ManagementPortalUrl "https://manage.windowsazure.us" `
    -StorageEndpoint "core.usgovcloudapi.net" `
    -ActiveDirectoryEndpoint "https://login.windows.net/" `
    -ActiveDirectoryServiceEndpointResourceId "https://management.core.usgovcloudapi.net/"

Feel free to change the -Name parameter value to whatever you want to use, as this is a local environment name, but leave the rest as-is. And don’t forget the trailing slash on -ActiveDirectoryServiceEndpointResourceId or you’ll get an error when authenticating.
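
A quick way to confirm the new environment registered is to run Get-AzureEnvironment again and look for the name you chose:

Get-AzureEnvironment | Select-Object Name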

Now let’s create a local certificate. Open up a Visual Studio command prompt (or any other command line that has makecert available) and run:

makecert -sky exchange -r -n "CN=<YourCertName>" -pe -a sha1 -len 2048 -ss My "c:\temp\<YourCertName>.cer"

For a reference on how to do that, look here: https://msdn.microsoft.com/en-us/library/azure/gg551722.aspx

Once that cert is created, you need to add it to your subscription in Azure Government.

  1. Navigate to https://manage.windowsazure.us and log in
  2. At the bottom of the left navigation, click on “Settings”
  3. Click on “Management Certificates”
  4. At the bottom of the screen, click on “Upload”, choose the .cer file you created earlier and stored in c:\temp, then upload the file
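
As an aside, you can also grab the thumbprint of the certificate you just created straight from your local store, which you’ll need in a moment (standard certificate provider cmdlets, nothing Azure specific):

Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -eq "CN=<YourCertName>" } | Select-Object Subject, Thumbprint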

Once the certificate has been added, you can now add a new subscription entry using the Azure environment and certificate previously created. First, you need to grab some configuration values:

$subId = "<YourSubscriptionId>"
$thumbprint = "<YourCertificateThumbprint>"
$cert = Get-Item Cert:\CurrentUser\My\$thumbprint
$localSubName = "<LocalSubscriptionName>"
$environmentName = "AzureGovernment"

<YourSubscriptionId> can be copied from the Management Certificates screen where you uploaded your certificate. Double-click the value next to your cert and it will highlight the entire value so you can copy it, although it won’t display the entire value. You can expand the width of the column if you’d like to see the entire value (that capability was recently added).

<YourCertificateThumbprint> can be copied from the same location under the Thumbprint column.

<LocalSubscriptionName> is a local name you will use to refer to this subscription, so use a name that makes sense to you. Maybe “ProdAzureGovernment”, as an example.

For environmentName, use the same name you used earlier when creating the local Azure Environment. If you kept my default, the name will be “AzureGovernment”.

Now run the following (I included line breaks for readability):

Set-AzureSubscription -SubscriptionName $localSubName `
    -SubscriptionId $subId -Certificate $cert -Environment $environmentName

If all went well, you’re all set! To see your local subscriptions, run Get-AzureSubscription. You should see your new ProdAzureGovernment subscription (or whatever you called it) along with any other subscriptions you already had configured, if any. You will also see which one is default and also current. The one flagged as default will be used by default when you first fire up PowerShell. The one marked current is what you’re currently hitting when you run commands against your subscription. You can change which subscription is default and current by running Select-AzureSubscription and passing in the desired config.

Assuming you have one subscription called “MSDN” and another called “ProdAzureGovernment”, within the same PowerShell window you can switch between them by simply running Select-AzureSubscription.

Select-AzureSubscription "MSDN" -Current
Get-AzureVM

Will show you all VMs on your MSDN subscription.

Select-AzureSubscription "ProdAzureGovernment" -Current
Get-AzureVM

Will show you all VMs on your Azure Government subscription.

If you have your Azure Government subscription set to current and then run Get-AzureSubscription, you may receive an error stating “The given key was not present in the dictionary.” I’m not sure what the cause of this is, but all other commands I’ve run against the subscription have succeeded just fine. If I figure that out I’ll post an update.

It’s just that simple! Hope that helps. As always, if you have any questions or suggestions please post a comment.

<EDIT>Thanks to a tip from my colleague Keith Mayer, I discovered why I couldn’t get Azure AD to work. My previous script for Add-AzureEnvironment was missing the -ActiveDirectoryEndpoint parameter, which is kind of important. After adding that to the environment definition I was able to use Azure AD and the Add-AzureAccount cmdlet to authenticate against Azure Government. Yeah! This is actually the preferred method going forward as opposed to using a certificate.</EDIT>
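
For reference, with the environment defined as shown earlier, the Azure AD route is as simple as this (using the environment name from above):

Add-AzureAccount -Environment "AzureGovernment"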

Azure, SharePoint

Starting 2015 Fresh with a New Job

The title says it all…my last day with Neudesic is this Friday. To give you a hint for who my new employer is, I just added Satya Nadella to my “coworkers” list on Twitter. More on the new role in a bit.

I was at Neudesic for four and a half years and experienced a few major corporate transitions. While I believed the company was great and had awesome people working there when I first started, that belief has not changed one bit through those transitions. Leaving a place you know is going in the right direction and people you enjoy working with is very difficult, but I’m confident I’ve made the right decision to leave now. I learned a ton and experienced a lot of ups and downs and I hope to stay in touch with as many of my Neudesic colleagues as possible.

As for my new job, I am joining Microsoft starting January 5th and my title will be Azure Sr. Technical Evangelist. I’ve been making a concerted effort to transition my technical skills from a SharePoint focus to an Azure focus for quite some time, so I’m super excited to be able to jump into a new role and have that platform be where I spend the majority of my time. While the title has the word “Evangelist” in it, the bulk of my responsibilities will be to work with ISV’s and help them build awesome technology on Microsoft platforms. Of course Azure, but also Office 365, Windows and phone. At least that was how the role was described to me so check back in a year and we’ll see how close I was!

Changing roles, employers, and technology focus is a lot of change at one time. Yeah, it’s a bit nerve-racking. But it’s also very exciting and I’m ready for the new challenges ahead. With these shifts, you can expect to see me less active in pure SharePoint community events and more active in Azure-related events. Community has always been a focus for me and will continue to be, regardless of technology. Since components of my new role will still touch Office 365 and SharePoint, I will never be too far away, though! Oh, and this does not mean that I’m going to stop helping organize the SharePoint Saturday Denver event coming up on January 17th! You should come!