Docker, Linux, Technical

Oracle Database on Docker for Windows

Coming out of DockerCon this year one of the announcements I was most excited about was from Oracle with their Docker support. I don’t know why I was excited about it as I haven’t used Oracle for a project in over 12 years, but odd things excite me. Since I have Docker for Windows running on my Windows 10 laptop, I decided I would use that to create an image of Oracle Database 11.2.0.2 Express Edition. I won’t rehash the steps here as the good folks at Oracle have done a decent job of this already, but I will call out a few things I noticed:

  • Don’t un-compress the installation binaries after downloading. Yeah, I know they call that out in the docs, but I missed it initially.
  • Be patient. Or multi-task.
  • And most importantly, Express expects at least 2048MB swap space. The MobyLinuxVM used by Docker for Windows only has 1024MB. So you will get an error stating:
    "This system does not meet the minimum requirements for swap space. Based on the amount of physical memory available on the system, Oracle Database 11g Express Edition requires 2048 MB of swap space. This system has 1023 MB of swap space. Configure more swap space on the system and retry the installation."

So unless someone out there can tell me how to set up a larger swap space in that VM, we are stuck and can’t use Docker for Windows. Dammit. I did tag on to an existing Docker forum post, so we’ll see if that bears any fruit.

In the meantime, I fired up an Ubuntu image in Azure, installed Docker, and used that to create the container. I didn’t create the VM with swap space so I did have to go in and add that (I used this method; a rough sketch is below), but once that was set up the image was created just fine. Previous note applies regarding patience and/or multi-tasking.
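For reference, here’s roughly what adding swap to the Ubuntu VM looks like — a minimal sketch assuming a 2GB swap file at /swapfile (the method I linked above may differ slightly in the details):

sudo fallocate -l 2G /swapfile     # create the swap file (or: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048)
sudo chmod 600 /swapfile           # lock down permissions
sudo mkswap /swapfile              # format it as swap
sudo swapon /swapfile              # enable it immediately
swapon --show                      # verify it's active
# To make it survive reboots, add this line to /etc/fstab:
# /swapfile none swap sw 0 0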


Not the smallest of images, but there you have it. Now I can fire up an Express Database running in Docker by running:

docker run --name oracleexpress --shm-size=1g -p 1521:1521 -p 8080:8080 -e ORACLE_PWD=tmppassword oracle/database:11.2.0.2-xe

After about 5 minutes, you’ll have a running container! To test the connection and make sure it was running, I logged in using sqlplus from the container:

docker exec -ti oracleexpress sqlplus system/tmppassword@//localhost:1521/XE

Connection successful, and I was able to query the database!
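If you’d rather script that smoke test than type into sqlplus interactively, something along these lines should also work (a hypothetical one-liner reusing the container name and password from the run command above):

echo "select banner from v\$version;" | docker exec -i oracleexpress sqlplus -s system/tmppassword@//localhost:1521/XE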


Here’s the image up on Docker Hub if you just want to pull it and start playing.

Now to figure out that MobyLinuxVM swap space…

P.S. If there was any doubt it would run on Windows 10, here it is running on my Windows machine after pulling the image down from Docker Hub:

Linux, Technical

Launch Visual Studio from Bash on Windows

Since I’m starting to use Bash on Windows (WSL) more regularly, I added a quick way to launch Visual Studio 2017.

  1. Edit .bashrc and add the VS path (I’m obviously using Enterprise so your path may be different):  export PATH=$PATH:"/mnt/c/Program Files (x86)/Microsoft Visual Studio/2017/Enterprise/Common7/IDE"
  2. I chose to add an alias, so I also added this to my .bashrc:  alias vs2017=devenv.exe
  3. Reload your shell:  . ~/.bashrc

Now I can quickly pop open Visual Studio by using “vs2017”. For example, to open an existing solution I can navigate to the folder containing the .sln and simply type “vs2017 mysolutionfile.sln” at my bash prompt and VS2017 will fire up with that project loaded.
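Putting it all together, the relevant chunk of my .bashrc ends up looking something like this (a sketch assuming the Enterprise install path from step 1, so adjust for your SKU):

# Launch Visual Studio 2017 from WSL
export PATH=$PATH:"/mnt/c/Program Files (x86)/Microsoft Visual Studio/2017/Enterprise/Common7/IDE"
alias vs2017=devenv.exe

# Usage, from a folder containing a solution:
#   vs2017 mysolutionfile.sln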

Here’s my .bashrc if you want to see the full file.

Azure, Azure Government, Technical

Azure Event Hubs vs AWS Kinesis

With Amazon and Microsoft being the main providers of cloud-based telemetry ingestion services, I wanted to do a feature and price comparison between the two. If nothing else, this info should help with an understanding of each service’s capabilities and perhaps help with making a decision on which service is best for your needs. I realize if you’re on AWS you’re probably going to use Kinesis and if you’re on Azure you’re probably going to use Event Hubs. But at least arm yourself with all the info before diving in!

Two caveats to this info worth noting:

  1. Yes, I work for Microsoft. I did not fudge these numbers or any of the info to paint a nicer picture for Azure. This info is factual based on my research into both services.
  2. Cloud services and their pricing change, so these specs and pricing are current as of the date of this post and you should re-check on Azure or AWS to verify.

This is a purely objective comparison focused on service specs. I’m not going to get into the usability of either service, programming efficiency, portal experiences, or anything else like that. Just numbers. Notice there are a couple question marks on the AWS side because I couldn’t find the info in the Kinesis documentation and folks I asked didn’t know. If you can help fill in those gaps, or notice some of this has changed, please let me know in the comments.

 

| | Event Hubs | AWS Kinesis |
| --- | --- | --- |
| Input Capacity | 1MB/s per Throughput Unit (TU) | 1MB/s per Shard |
| Output Capacity | 2MB/s per TU | 2MB/s per Shard |
| Events/s | 1K | 1K |
| Latency | 50ms avg, 99th percentile < 100ms | 10s minimum |
| Protocol | HTTPS or AMQP 1.0 | HTTPS |
| Max Message Size | 256KB | 1MB |
| Included Storage | 84GB per TU | ?? (none?) |
| Max Consumers | 1 Consumer Group (Basic tier); 20 Consumer Groups (Standard tier) | ?? (only limited by output capacity?) (See <Update 6/1/2016> below) |
| Monitoring | Built-in portal metrics or REST API | CloudWatch |
| Message Retention | 24 hrs (up to 7 days) | 24 hrs (up to 7 days) |
| Price per Hour | $0.015/TU Basic tier; $0.030/TU Standard tier | $0.015/Shard |
| Price per Million Units | $0.028 Basic & Standard (64KB/unit) | $0.014 (25KB/unit) |
| Extended Data Retention Price | Only if stored event size exceeds 84GB * #TUs: $0.024/GB (assuming LRS) | $0.020/Shard hour |
| Region used for pricing | East US | US East |
| Throughput Flexibility | Adjust TUs as needed | Adjust Shards as needed |
| Supported Regions | 18 (plus GovCloud) | 9 |

<Update 6/1/2016> Turns out the answer to Max Consumers for Kinesis isn’t exactly straightforward due to their dependency on HTTP(S), as pointed out to me after publishing this post in February. Kinesis is limited to 5 read transactions per shard, so your max consumers is going to depend on how you spread those transactions across your consumers. If you have five consumers each reading once per second, five is your max. Since output is capped at 2MB/s, you can read up to that capacity in each transaction, but you have to design your consumers to work within those limits. Additional info on this Stack Overflow thread.</Update 6/1/2016>

To compare pricing, I’m using the sample from AWS. In case they change their sample, here is the sample the below numbers are based on:

“Let’s assume that our data producers put 100 records per second in aggregate, and each record is 35KB. In this case, the total data input rate is 3.4MB/sec (100 records/sec*35KB/record). For simplicity, we assume that the throughput and data size of each record are stable and constant throughout the day.”
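To sanity-check the numbers in the tables below, here’s that arithmetic spelled out as a quick shell calculation (my own back-of-the-envelope math based on the per-unit prices above, not anything from either vendor’s calculator):

# 100 records/s at 35KB each, over a 31-day month
hours=$((24 * 31))                        # 744 hours
records=$((100 * 86400 * 31))             # 267,840,000 records/month

# Kinesis: 3.4MB/s of ingress needs 4 shards; each 35KB record burns two 25KB PUT units
printf 'Kinesis shards:     $%.2f\n' "$(echo "4 * 0.015 * $hours" | bc -l)"              # $44.64
printf 'Kinesis PUT units:  $%.2f\n' "$(echo "$records * 2 / 1000000 * 0.014" | bc -l)"  # $7.50

# Event Hubs Basic: 4 TUs at the same hourly rate; a 35KB event fits in one 64KB billing unit
printf 'Event Hubs TUs:     $%.2f\n' "$(echo "4 * 0.015 * $hours" | bc -l)"              # $44.64
printf 'Event Hubs ingress: $%.2f\n' "$(echo "$records / 1000000 * 0.028" | bc -l)"      # $7.50

(Swap in $0.030/TU for the Standard tier and the TU line doubles to $89.28, which matches the Standard column below.)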

| Kinesis Pricing Sample | |
| --- | --- |
| Shards | 4 |
| Shard cost/month (31 days) | $44.64 |
| PUT cost/month | $7.50 |
| Total | $52.14 |
| Extended Retention Cost | $59.52 |
| Total w/Extended Retention | $111.66 |

| Event Hubs Pricing Sample | Basic | Standard |
| --- | --- | --- |
| TU | 4 | 4 |
| TU cost/month (31 days) | $44.64 | $89.28 |
| PUT cost/month | $7.50 | $7.50 |
| Total | $52.14 | $96.78 |
| Extended Retention Cost* | N/A | $47.24 |
| Total w/Extended Retention | N/A | $144.02 |

* Extended storage only available on Standard tier

Results

On the pricing side, I found it interesting they are the exact same price! Unless you need extended retention and need to bump up to the Standard tier on Event Hubs. Comparing the specs, the items that jump out for me that might impact a decision are latency (Event Hubs blows away Kinesis), protocol (no AMQP on Kinesis), max message size (Kinesis is quite a bit larger), the size of a pricing unit (64KB for Event Hubs and 25KB for Kinesis), and the number of regions. Whichever service you choose to go with, hopefully this info helps make the decision a bit easier.

Office Dev, Technical

File Upload to SharePoint Online

One of my peers, Doug Perkes, wrote an awesome sample project on GitHub called Office 365 SharePoint File Management which demonstrates a multi-tenant MVC application connecting to Office 365 to allow a user to upload a file to SharePoint Online. Very handy, indeed. This post gives you steps to follow to integrate the same functionality into an existing MVC application. In order to get to the point where your application can upload a file to SharePoint Online, you must first provide the ability for the user to authenticate into their Office 365 tenant and have your application configured as a multi-tenant app, which is handled by Azure Active Directory. Ignoring a bunch of plumbing code which you’ll see by following the steps below, you can then call into SharePoint Online using either the REST or CSOM API (Doug’s sample shows both.)

As you’re walking through these steps, have Doug’s repo open (or clone it locally) as you’ll be grabbing files from it. For every step where you bring a file over from the repo, update the namespace to match your project namespace.

  1. Add an Office 365 Connected Service, if not already done. To do this, you will need an Office 365 developer account which can be obtained either through your MSDN subscription or a free one-year subscription. Doug walks through how to do this in Step 3 on his GitHub repo so I won’t duplicate that here.
  2. Install EntityFramework, if not already done. Right-click -> Manage NuGet Packages, or using the Package Manager Console window.
  3. Install Microsoft.SharePointOnline.CSOM NuGet package.
  4. Add a Utils folder at the root of your project
  5. From the repo, bring in Utils/SettingsHelper.cs
  6. Add Models/ApplicationDbContext.cs
  7. Add Models/ADALTokenCache.cs
  8. Add Models/ADALTokenCacheInitializer.cs
  9. Update your Global.asax.cs
    1. Add using statement for System.Data.Entity
    2. Add the following line to the Application_Start method:
      Database.SetInitializer(new Models.ADALTokenCacheInitializer());
  10. Update your App_Start/Startup.Auth.cs file to incorporate the code from his Startup.Auth.cs
  11. Add Models/SearchResult.cs
  12. Add Models/SearchModel.cs
  13. Add Views/Home/Sites.cshtml
  14. Add ExecuteSearchQuery method from his HomeController.cs to your controller and resolve references
  15. Add Sites method from his HomeController.cs to your controller and resolve references
  16. Add ConsentApp and RefreshSession methods from his AccountController.cs to your AccountController and resolve references

Stopping here, when you run your app you will now be able to log into an Office 365 tenant and display the Sites view, which will show all SharePoint sites the user has access to (you’ll need to add an entry point to the Sites view on your own, something like: @Html.ActionLink("Start Here »", "Sites", "Home", new { @class = "btn btn-primary btn-lg" })). This is done using the Search REST API. Doug continues in his sample to include additional views for Libraries, Upload and UploadFile which all show how to read from SPO and then upload a file to a library in SPO using CSOM. I won’t walk through the steps of how to incorporate that functionality into your project as it’s pretty much a repeat of what was done above to get Sites working.

If you’d like a deeper understanding of what the code is doing, check out the two references below:

Multi-tenant MVC app using AAD to Call O365 API
Searching a SPO site using REST

Office Dev, Technical

Check Users Browser within Office 365 Add-In

When writing an Office 365 Add-In intended to run in Office 365 (as opposed to just an Office thick client, such as Word), you may need to be concerned about which browser your user is running. I’ll cover one specific scenario and one more general scenario, and how to perform the check for each.

Internet Explorer 9 Support

As of the writing of this post, any Add-In published to the Office Store will be validated against IE 9 and rejected if it doesn’t work. Other than random IE 9 JavaScript quirks, your Add-In may be using an Office API feature that isn’t supported in IE 9 such as the coercion type HTML when using setSelectedDataAsync. The validation team realizes there aren’t always work-arounds for these limitations and they allow us to state the Add-In doesn’t support IE 9 in the app description and “fail gracefully” with a kind error message. To check for IE 9 in your Add-In, add the following function to your app.js file within app.initialize:

// App doesn't support IE 9
app.isBrowserSupported = function () {
    var ua = navigator.userAgent, tem,
        M = ua.match(/(opera|chrome|safari|firefox|msie|trident(?=\/))\/?\s*(\d+)/i) || [];
    M = M[2] ? [M[1], M[2]] : [navigator.appName, navigator.appVersion, '-?'];
    if ((tem = ua.match(/version\/(\d+)/i)) != null) M.splice(1, 1, tem[1]);
    var browser = M.join(' ');
    return browser != 'MSIE 9';
};

Now wherever it makes sense in your Add-In to check the browser and display a kind message back to the user (perhaps in Home.js, after app.initialize() is called), add a check and behave accordingly:

if (app.isBrowserSupported()) {
   // All is good, proceed as normal
}
else {
   // Browser not supported, display kind error message and disable functionality
}

General Browser Check

For any other need to check the browser, here’s that same function but a bit more generic so you can modify it to fit your needs. I "stole" this from a co-worker so I’m not sure who the original author is. If you know, please leave a comment so I can give them credit.

var ua = navigator.userAgent, tem,
    M = ua.match(/(opera|chrome|safari|firefox|msie|trident(?=\/))\/?\s*(\d+)/i) || [];
if (/trident/i.test(M[1])) {
    tem = /\brv[ :]+(\d+)/g.exec(ua) || [];
    app.showNotification('IE ' + (tem[1] || ''));
}
if (M[1] === 'Chrome') {
    tem = ua.match(/\b(OPR|Edge)\/(\d+)/);
    if (tem != null) return tem.slice(1).join(' ').replace('OPR', 'Opera');
}
M = M[2] ? [M[1], M[2]] : [navigator.appName, navigator.appVersion, '-?'];
if ((tem = ua.match(/version\/(\d+)/i)) != null) M.splice(1, 1, tem[1]);
app.showNotification(M.join(' '));

As you can see, it uses the Office 365 Add-In built-in app.showNotification method to show the result.

.NET, Office Dev, Technical

Convert Office Add-In Web to MVC

Using Visual Studio to create a new Office Add-In results in two projects:  One for your Office Add-In (basically, the manifest) and another for the web project where you do the bulk of your work to implement functionality. The web project that is created is a basic HTML/JS/CSS application…nothing fancy like ASP.NET. For most situations, a lightweight client-side web application is ideal and it makes sense for that to be the default of the VS project template. How about those other situations where you need something to run on the server, like an ASP.NET MVC application? There are a couple choices:

  1. Add a new MVC project to your solution and pull in the Office “stuff” and trim it down so it can be used as the web project for your Add-In
  2. Convert the existing web project to MVC

I’ll show you how to do the second option in this post.

MVC Plumbing

  1. Using NuGet Package Manager, add Microsoft.AspNet.Mvc to your web project
  2. Add the following folders to your project:  App_Start, Controllers, Views
  3. Right-click the Views folder and add a new Web Configuration File and name it Web.config
  4. Add the following code to this new Web.config, replacing [[YourNamespace]] with the namespace of your project
      <?xml version="1.0"?>
      <configuration>
        <configSections>
          <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
            <section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
            <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
          </sectionGroup>
        </configSections>

        <system.web.webPages.razor>
          <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=5.2.3.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <pages pageBaseType="System.Web.Mvc.WebViewPage">
            <namespaces>
              <add namespace="System.Web.Mvc" />
              <add namespace="System.Web.Mvc.Ajax" />
              <add namespace="System.Web.Mvc.Html" />
              <add namespace="System.Web.Routing" />
              <add namespace="[[YourNamespace]]" />
            </namespaces>
          </pages>
        </system.web.webPages.razor>

        <appSettings>
          <add key="webpages:Enabled" value="false" />
        </appSettings>

        <system.webServer>
          <handlers>
            <remove name="BlockViewHandler"/>
            <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
          </handlers>
        </system.webServer>

        <system.web>
          <compilation>
            <assemblies>
              <add assembly="System.Web.Mvc, Version=5.2.3.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            </assemblies>
          </compilation>
        </system.web>
      </configuration>

CSS

I chose to use the Site.css stylesheet that the NuGet package created for me. To do that, I took all of the CSS from the app.css and the home.css file and put it in the Site.css file.

JavaScript

Copy the App.js and Home.js files to the Scripts directory.

Content

I’ll assume your existing web project has the standard files from the VS template:  app.js, app.css, home.html, home.css and home.js. This section shows how to pull that content into a new view.

  1. Right-click the Controllers folder and add a new Controller named HomeController. This should add a file that looks like the following:
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.Web.Mvc;

      namespace [[YourNamespace]].Controllers
      {
          public class HomeController : Controller
          {
              // GET: Home
              public ActionResult Index()
              {
                  return View();
              }
          }
      }

  2. Add a Home and Shared folder to the Views folder
  3. Right-click the Shared folder and add a new View called _Layout using the “Empty (without model)” template and checking the box for “Create as a partial view”
  4. Replace the <head> content with the following (note the reference to the Office UI Fabric, that’s optional if you aren’t using it…but you should be):
      <meta charset="utf-8" />
      <meta name="viewport" content="width=device-width, initial-scale=1.0">
      <meta http-equiv="X-UA-Compatible" content="IE=Edge" />
      <title>@ViewBag.Title - My ASP.NET Application</title>
      <script src="~/Scripts/modernizr-2.6.2.js"></script>

      <link href="~/Content/Office.css" rel="stylesheet" />
      <script src="https://appsforoffice.microsoft.com/lib/1/hosted/office.js" type="text/javascript"></script>
      <script src="~/Scripts/jquery-1.10.2.min.js"></script>

      <!-- Office UI Fabric -->
      <link rel="stylesheet" href="//appsforoffice.microsoft.com/fabric/1.0/fabric.min.css" />
      <link rel="stylesheet" href="//appsforoffice.microsoft.com/fabric/1.0/fabric.components.min.css" />

      <link href="~/Content/Site.css" rel="stylesheet" type="text/css" />
      <script src="~/Scripts/App.js"></script>
      <script src="~/Scripts/Home.js"></script>

  5. Replace the <body> content with the following:
      <div id="content-header">
          <div class="padding">
              <h1>[[Your application name]]</h1>
          </div>
      </div>

      <div id="content-main">

          @RenderBody()
          <hr />
          <footer>
              <p>&copy; @DateTime.Now.Year - Content Mixr</p>
          </footer>
      </div>

  6. If you have anything like a top nav or other content that is consistent across multiple pages in your app, paste it in there as appropriate
  7. Right-click the Home folder and add a new View called Index (or whatever you want it to be called, the rest of this post assumes Index) using the “Empty (without model)” template and checking the box for “Create as a partial view”
  8. Replace the contents of index.cshtml with the HTML from the <body> section of your home.html file (keep the ViewBag.Title at the top of the file if you’re going to use it)

Config and Cleanup

  1. Right-click the Views folder and add a new partial view called _ViewStart and add the following content:
      @{
          Layout = "~/Views/Shared/_Layout.cshtml";
      }

  2. (This may already exist but create it if not) Right-click the App_Start folder and add a new class file called RouteConfig.cs and add the following content:
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Web;
      using System.Web.Mvc;
      using System.Web.Routing;

      namespace [[Your application namespace]]
      {
          public class RouteConfig
          {
              public static void RegisterRoutes(RouteCollection routes)
              {
                  routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                  routes.MapRoute(
                      name: "Default",
                      url: "{controller}/{action}/{id}",
                      defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                  );
              }
          }
      }

  3. Open the Global.asax.cs file and edit the Application_Start() method to the following:
      protected void Application_Start()
      {
          AreaRegistration.RegisterAllAreas();
          RouteConfig.RegisterRoutes(RouteTable.Routes);
      }

  4. If your app doesn’t use Bootstrap, delete the Bootstrap CSS and JS files (if you are, go back to your _Layout.cshtml file and add the Bootstrap references as the above code doesn’t have it)
  5. You may have noticed the MVC package brought in a reference to a newer version of jQuery (1.10.2 as of the writing of this post.) There are now probably two jQuery versions in the Scripts folder so delete the version you don’t want.
  6. Open the App Manifest XML file (from the Office Add-In project) and set the DefaultValue for SourceLocation to ~remoteAppUrl since the default page of the app is now the web app default page
      <DefaultSettings>
        <SourceLocation DefaultValue="~remoteAppUrl/" />
      </DefaultSettings>

That should do it. Please post a comment here or contact me directly if you hit any snags.

Office Dev, Open XML, Technical

Open XML SDK Intro

Let me start by saying I AM NOT an expert with Open XML. I dabbled with it a few years ago for a small project and then merrily went on my way, doing just fine without the need to touch it again. That changed this week when I had a challenge to do something the Office 365 API and Office JavaScript API don’t support (as of the writing of this post, anyway): the seemingly simple task of determining the page count of a document. The Primary Interop Assembly supports this, but building a VSTO didn’t fit the need…I needed something external that could inspect the document properties without actually opening the document in Word. The answer finally came to me from the other side of the world by way of a co-worker, Andrew Coates (thank you!). He pointed out that I could pull out the page count through Open XML using the Open XML SDK, so I started diving in and learned it’s really simple to use, which is not at all how I remember it! I’ll use this post as a quick introduction to the SDK.

First steps: go get the 2.5 SDK and the SDK Productivity Tool (check out this video to learn more about the tool). If you’re more of a documentation person, here are the docs. I won’t go into the details of the Open XML spec or format, but it’s worth saying that an Open XML document is made up of multiple packages, so to interact with the document in any way we need to figure out which package we need to work with. That’s where the Productivity Tool (or the docs) can help. Firing it up and opening a document lets you inspect the document’s Open XML, find what you’re looking for, and then program against it.

For finding the page count, I needed to look at the Pages property located in the /docProps/app.xml package under the Properties element. The screenshot here shows the Reflected Code tab opened which shows the value (1, in this case) along with the namespace of extended-properties.

Knowing it’s in extended-properties, I can now jump over to Visual Studio and use the SDK to pull out the value for the document using WordprocessingDocument.ExtendedFilePropertiesPart.Properties.Pages. Simple, I don’t even have to mess with an XML object, which is nice.

using System;
using DocumentFormat.OpenXml.Packaging;

namespace LoadOOXMLDocument
{
  class Program
  {
    static void Main(string[] args)
    {
      const string filename = "hi.docx";
      using (WordprocessingDocument wordDoc = WordprocessingDocument.Open(filename, true))
      {
        ExtendedFilePropertiesPart propPart = wordDoc.ExtendedFilePropertiesPart;
        Console.WriteLine("The document has {0} pages.", propPart.Properties.Pages.Text);
        Console.ReadLine();
      }
    }
  }
}

If you want to dive deeper, here are some other online resources:

The Wordmeister
Eric White Blog
OpenXMLDeveloper.org
GitHub Samples

Azure, Technical

Entity Framework Code First Deployment of Azure AD Sample

If you’re interested in building applications using Azure AD (and really, why would you *not*?), the best code repository to be aware of is https://github.com/AzureADSamples. TONS of samples with documentation showing many different scenarios. This post takes a look at one of the samples in a bit more detail, specifically in the area of deploying the sample to Azure and implementing Code First deployment/migrations. Using EF and Code First can be a bit of a religious debate which I will avoid in this post. The sample uses EF and getting Code First to work is only a few extra steps.

The sample I’m working with here is the WebApp-MultiTenant-OpenIDConnect-DotNet sample. Click the link to get the sample code and read the documentation on how to run the sample. Get everything up and running in Azure AD against a local deployment of the sample (covered in the GitHub documentation for the project), then come back here when you’re ready to get Code First set up and deploy to Azure.

Implement Code First

If you’re not familiar with Code First take some time and read through the documentation on the asp.net website, specifically this walkthrough which shows how to get it set up in your project as well as deployed. As you’ll see, I’m really not doing anything fancy here beyond what you see in the asp.net site. Actually, this is a bit more simplified since we’re not doing any seeding.

  1. Add a connectionString entry to the web.config pointing to your local database
    In my case, I’m using ProjectsV12 but you could easily use MSSQLLocalDB, v11.0 or full SQL depending on your local dev machine setup:

    (If you previously ran the sample locally you already have a local database named TodoListWebAppContext. Either delete it or rename it. You could also update the project to use a different name. This isn’t necessary, but it helps demonstrate the Code First deployment later on in this post.)

  2. Remove existing initializer

    Because we’re using Code First to build our database, we don’t need the TodoListWebAppInitializer initializer that is currently called in Global.asax.cs used to create the database. Open up that file and comment out line 19:

  3. Enable migrations
    Now we need to run a few commands in the Package Manager Console. If it’s not already open, click Tools -> NuGet Package Manager -> Package Manager Console. Once open, type "enable-migrations -ContextTypeName TodoListWebAppContext" and hit enter:

    You’ll notice a new folder called “Migrations” was added to the project along with a new Configuration.cs file. Just leave those as-is.

  4. Add a migration
    Now we need to add our first migration, which is the initial creation of the database in this case since I haven’t deployed the app locally yet. In the Package Manager Console, type “add-migration InitialCreate” (InitialCreate is just a label used for the migration that you can change to identify the specific migration) and hit enter:

    Now you’ll see a few more files added to that new Migrations folder. If you poke through them, you’ll see they define the database changes to apply and the class inherits from the DbMigration EF migration class. I won’t go through them here to define what they do or how they work but it’s worth the time to understand those concepts if you don’t already have that down (look at the asp.net site linked earlier.)

  5. Update the database

    Finally, we run update-database to actually let EF create the database based on our InitialCreate migration definition. In the Package Manager Console, type “update-database” and hit enter:

    When that’s finished, you now have a local database created for you based on the definition in the project. Open up SQL Server Object Explorer and expand your local DB to see the new database:

Success! Go ahead and run the project locally and, assuming you had everything hooked up correctly in Azure AD prior to these steps, all will work fine using this configuration. Feel free to rinse-and-repeat the above add-migration/update-database commands as you update the data model in the project. Each time you add a migration you’ll see some new files pop up in your Migrations folder.

Deploy to Azure

Now let’s look at what it will take to deploy this project into an Azure Web App and SQL Database running in Azure. (I’m using the new Azure Preview Portal in the screenshots below)

  1. Create a new Resource Group
    To help keep your resources organized, create a new Resource Group in the closest region, in my case South Central. We’ll deploy our Web App and SQL Database into this Resource Group.
  2. Create a new Azure Web App
    I’m going with the Free tier here but it will work on any of the pricing tiers. Choose the same Location here that you chose for your Resource Group, in my case South Central.
  3. Create a new SQL Database
    Same story here when creating a SQL Database…choose a server (or create a new one) in the same region and add it to the Resource Group. I’m using the Basic pricing tier, which as of the date of this post is estimated at $4.99/mo.

    There is a free SQL tier that is available in the current management portal when creating a new Web App and that will work for this sample, too, if you prefer to go that route.

  4. Add connection string
    Once the database is created, click in to the connection strings tab and copy the ADO.NET connection string:
    It helps to paste the connection string into a text editor so you can easily find the placeholder for the password and update that. If you don’t, your Web App won’t be able to connect to the database.

    Now open up your Web App and go into Application settings to access the Connection strings. Create a new connection string named “TodoListWebAppContext” (or the name you used in your web.config file if different than what I have above) paste your connection string to your database into the value field and click Save:

  5. Publish web app to Azure
    Ok, everything is now set up in Azure and ready for us to publish our application.
    1. Go back to your project in Visual Studio, right click the TodoListWebApp project and click on Publish
    2. Choose “Microsoft Azure Web Apps” as your target
    3. Log in to your Azure subscription (if prompted) and choose your web app from the drop down and click OK
    4. Leave the Connection screen without changes and click Next to the Settings screen
    5. On the Settings screen, the TodoListWebAppContext connection string should be pre-populated for you
    6. Check the “Execute Code First Migrations…” check box
    7. Click Publish and wait for the magic to happen
    8. After Visual Studio finishes publishing, your browser should open to your new Azure Web App…but don’t try to Sign Up or Sign In yet…we’re not done :)
  6. Update Azure AD application
    The last step is to get your app properly registered in Azure AD. You can either update the existing app you created when you first set up the sample, or start from scratch and create a completely new application in your AAD tenant. Here, I’m doing the former. If you create a new application, don’t forget to update the Client ID and Password from your new app in the web.config and re-publish your Web App.
    1. Log into the Azure management portal (https://manage.windowsazure.com) and drill down into your existing Azure AD tenant and application
    2. Click on the Configure tab
    3. Update the Sign-On URL to your new Azure Web App URL (use either http or https, just remember which so you navigate to the proper URL later for testing)
    4. Scroll down to the Single Sign-On section and find the Reply URL. Remove the existing URL and add in your Azure Web App URL
    5. Click Save
  7. Now the AAD, Web App and SQL Database are all set up. Navigate to your site and click on Sign Up and enter your AAD info as you previously did in the local sample, log in using your AAD user, and click on todos in the top nav
  8. More magic was happening as you accessed the app for the first time. If you would’ve looked at your SQL Database before the previous step there would’ve been nothing there. That’s because the app creates the database the first time it’s accessed, which you just did. Open up your Server Explorer in Visual Studio and refresh your Azure connection. You’ll see your new SQL Database listed. Right-click on the database and choose “Open in SQL Server Object Explorer”, log in with your credentials you set up when you created the database, and you’ll be taken to SQL Server Object Explorer where you can interact with your new database like you would any other SQL database.

    The additional table you see there, "__MigrationHistory", is owned by EF and is populated every time you do a deployment which includes a database change.

And that’s it! Feel free to go back to your project, update the data model and re-publish to Azure. After you log back into the site and access todo’s, you’ll see the database reflect the data model change as well as a new entry in the __MigrationHistory table.

Azure, Azure Government, Technical

Using Event Hub and EventProcessorHost on Azure Government

There are a few needs that apply to almost every industry when it comes to building software and solutions for that industry. Manufacturing, healthcare, industrial, education, home automation, military and public safety (to name a few) all have a need to collect data from hundreds/thousands/millions of data sources and bring the data together in order to either report on it as a whole or send the data somewhere else. For example, consider a government agency responsible for monitoring rainfall and temperature across an entire country. It would be great if that agency could set up a few thousand monitoring stations around the country and have those stations report their respective sensor data to a central location for aggregation, where the agency could begin to see trends across various regions within the country and across given time spans. Quite a bit more reliable and closer to real time than sending a worker out to each station to collect data and bring it back manually to a data center.

In order to manage the intake and processing of what could be billions of pieces of data per day we will need a scalable and efficient hub for all of the sources to talk to at once. Using architecture speak, we need a durable event stream collection service. Azure Event Hub was built to support these types of use cases and perform as the event stream collection service that sits in the middle of our Internet of Things (IoT) architecture. Once we get our environmental sensors set up to send their data to Event Hub, we can easily scale that service to support the thousands of devices we need and begin building really powerful reporting solutions that utilize the ingested data.

To see what an actual Event Hub implementation looks like on Azure Government, where the service was recently released (as of the date of this post) alongside all the other Azure regions where it’s already available, let’s start by setting up a simple Event Hub service using a single instance of EventProcessorHost, following the instructions on the Azure documentation site. For the most part, using Event Hubs in Azure Government is the same as in any other Azure region. However, since the endpoint for Azure Government is usgovcloudapi.net instead of the windows.net endpoint used by many other Azure regions, the sample needs to be modified a bit. Creating the Event Hub and storage account is exactly the same, shown in the screenshots below choosing the USGov Iowa region:

Creating the Event Hub

Creating the Storage Account

Creating the sender client is the same as shown in the example, as well. The small tweak we need to make is on the receiver, which references the storage account we created previously since EventProcessorHost utilizes a storage account when processing messages. Notice the URL for the storage endpoint in Azure Government is *.core.usgovcloudapi.net. When you create the EventProcessorHost in the receiver application, the default behavior of the class is to assume you are using a storage account located in the *.core.windows.net domain. This means if you run the sample as-is (with your Event Hub and Storage Account info, of course), you will get an error:

Since my Storage Account was named “rkmeventhubstorage”, the default behavior is to create a URI of rkmeventhubstorage.blob.core.windows.net. Obviously, that doesn’t exist. I need a URI of rkmeventhubstorage.blob.core.usgovcloudapi.net. What now?

Diving into the source for Microsoft.ServiceBus.Messaging.EventProcessorHost, you’ll see (or just save your time and trust me) that the blob client is created using the CloudStorageAccount class. Looking at the documentation for that class, you won’t see anything to help get that endpoint updated (as of the writing of this post.) Turns out there’s an undocumented property for EndpointSuffix. Bingo. All you need to do is add a property for EndpointSuffix to use core.usgovcloudapi.net and the stars will align. Here is the full Main method for the Receiver application, showing the use of the EndpointSuffix property.

string eventHubConnectionString = "Endpoint=sb://rkmeventhub-ns.servicebus.usgovcloudapi.net/;SharedAccessKeyName=ReceiveRule;SharedAccessKey={YourSharedAccessKey}";
string eventHubName = "rkmeventhub";
string storageAccountName = "rkmeventhubstorage";
string endpointSuffix = "core.usgovcloudapi.net";
string storageAccountKey = "{YourStorageAccountKey}";
string storageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1};EndpointSuffix={2}",
storageAccountName, storageAccountKey, endpointSuffix);


string eventProcessorHostName = Guid.NewGuid().ToString();
EventProcessorHost eventProcessorHost = new EventProcessorHost(eventProcessorHostName, eventHubName, EventHubConsumerGroup.DefaultGroupName, eventHubConnectionString, storageConnectionString);
eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>().Wait(); // SimpleEventProcessor is the IEventProcessor implementation from the Azure getting-started sample


Console.WriteLine("Receiving. Press enter key to stop worker.");
Console.ReadLine();

After adding that property, your Receiver will be able to receive the messages successfully.

Technical

How to Install Windows 10 Preview on a WinBook TW700

I picked up a couple WinBook TW700’s from MicroCenter back around Christmas time and after seeing some good stability results running Windows 10 preview on my Surface Pro 3 I thought the tablet would be a good device for the OS, too. I decided to do a clean install completely formatting the “hard” disk so I could get back the recovery partition space. The install is pretty much like on any other computer, but there are some driver issues that you need to be aware of so I wanted to post the steps here. This was with the current technical preview release build 10041 so any future builds may be slightly different and, hopefully, an improved experience with driver updates.

First of all, if you ever want the option to rollback to your Windows 8.1 install, you’ll need to create a recovery USB drive. Plenty of info on how to do that on the Internets so I won’t try to cover it here. For what it’s worth, I didn’t bother. 😉 Once you’re comfortable on that front, hit up the Windows Insider site to download the latest Windows 10 build. You’ll need a USB stick at least 4 GB to use for the tablet install so procure one if you don’t already have one. Format the USB to FAT32, mount the Windows 10 ISO on your computer and copy all of the contents of the ISO to your empty USB stick.

Now head over to your tablet. Since drivers aren’t available for many of the devices on the 700, the easiest way I found to get them updated post-Win 10 install is to copy the existing ones. Navigate to C:\Windows\System32\DriverStore\FileRepository and copy the entire FileRepository folder to another USB stick and keep it handy. Now get a USB hub plugged into your tablet so you can have at least three devices plugged in at once. You’ll need to attach a USB keyboard, USB mouse and your USB drive with the Windows 10 installation files. Once that’s all hooked up, restart the tablet. When it’s starting up, hit F12 to get into the boot menu so you can choose to boot from the USB. From here on out it’s a basic Windows install, so have fun with that and come back when complete, formatting your drives during the process if you so wish.

Now you should have a nice clean Windows 10 install and noticed that pretty much nothing works, except the USB mouse and keyboard :). Remove the Windows 10 USB and plug in the USB with the drivers. Launch Device Manager and you’ll see many yellow exclamation points. For each one you see, right click to update the driver and choose the option to search a local folder for drivers and select your USB. This will take some time and new ones will pop up after updating others, so just be patient and keep going until all the exclamation points are gone. Once Device Manager is clean, restart the tablet. After it comes back up, you’re up and running and all devices, including the touch screen, should be working. Lastly, run Windows Update to grab any updates there and restart if prompted. Unplug the USB hub and have fun tabletting on Windows 10!