Azure, Docker, Linux, Technical

Bash on Windows Productivity Talk

The bi-annual Denver Dev Day was last week and I had the opportunity to present a topic titled “Using Bash on Windows to Increase your Productivity” to an awesome room of fellow techies. The idea for the session came from my increasing use of Bash and Linux, specifically Windows Subsystem for Linux (WSL), and I thought this talk might not only help others learn a few new tools or tricks but also help me learn what others are doing. I was right on both counts! If you were there, thanks for coming and I hope it was worth your time investment. If you weren’t there, the session abstract and slides are below. Most of the time, though, was spent in Bash showing different scenarios and trying different things folks threw out at me, which was fun! Here’s a sampling of what I showed (a few of these are sketched as commands after the list):

  • Edit Windows files, with mnt and alias
  • Built in VS Code support
    • Launch project from bash
    • Integrated shell
  • Run any Win exe
    • Echo $PATH to show what was included and my modifications
    • Launching Visual Studio 2017
    • Docker tools
    • K8s / minikube
      • Running minikube start requires the window to have administrator rights, so we discussed differences between Windows and Linux users/permissions
  • Run bash from CMD
    • dir | bash -c "grep Desk"
    • bash -c "ls -lh" | findstr Desk
  • Azure
    • Multi-window/multi-account (I use a separate Linux user for each Azure subscription)
      • az account show | jq .name
    • Multi-pane with Help
  • Dotfiles (My work-in-progress dotfiles)
    • GitHub for environment consistency & rebuild
  • Shell in Azure
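
For reference, here’s a rough sketch of a few of those demos as shell commands. The alias name and paths are mine (and purely illustrative), so adjust for your own setup:

# edit a Windows file from WSL via /mnt, with an alias to shorten the trip
alias cdwin='cd /mnt/c/Users/myuser'
cdwin && vi Desktop/todo.txt

# launch VS Code for the current project from bash
code .

# run any Windows exe from bash (its folder needs to be on $PATH)
echo $PATH
notepad.exe

# show the active Azure subscription name (az CLI piped through jq)
az account show | jq .name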

Session Abstract:

Did you know Windows 10 can run Bash on Linux?? While it may seem weird seeing those words together, that’s no reason to shy away and not consider how this new capability can be leveraged to increase your day-to-day productivity. Think about all the Linux features, code samples, tutorials and tools that are out in the world. Now think about all the Windows counterparts. Bash on Windows gives us the option to use all of it on a single operating system and I’ll show you how!

This session will show you how to get up and running and then we’ll spend some time looking at specific development scenarios and why you would want to use Bash. If development isn’t your focus, we’ll also look at some DevOps scenarios targeting Azure. Finally, I’ll show you some of my favorite tools, tips and tricks along the way that can help you leave the room with knowledge you can immediately put to good use.

Docker, Linux, Technical

MobyLinuxVM Root Access

I’ve needed/wanted this a couple times so posting here to make it easier to find. When using Docker for Windows in Linux mode, it creates a Linux VM running in Hyper-V which actually hosts the containers you create. If you ever need to access that VM, here’s a method that works (thanks to Docker Saigon):
#get a privileged container with access to Docker daemon
docker run --privileged -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker alpine sh

#run a container with full root access to MobyLinuxVM and no seccomp profile (so you can mount stuff)
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh

#switch to host FS
chroot /host
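
Once you’ve chroot’ed in, you’re effectively root on the VM itself and can poke around freely. A couple of quick sanity checks, for example:

# confirm you're on the Moby VM and inspect its memory/swap allocation
uname -a
grep -i swap /proc/meminfo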

 

Azure, PowerShell, Technical

New Service Fabric PowerShell Cmdlets

If you prefer to use PowerShell to interact with Azure and you are working with Service Fabric, today is your lucky day! Technically, //Build, held a couple of weeks ago, was your lucky day since that’s when these were released, but today is when I’m getting around to writing this post.

Announced a couple of weeks ago, these new cmdlets let you do cluster management: tasks such as creating a cluster, adding/removing a node, adding/removing a node type, and changing reliability or durability are now possible using PowerShell. As of today, here are the new commands, currently at v0.1.1:

Add-AzureRmServiceFabricApplicationCertificate
Add-AzureRmServiceFabricClientCertificate
Add-AzureRmServiceFabricClusterCertificate
Add-AzureRmServiceFabricNode
Add-AzureRmServiceFabricNodeType
Get-AzureRmServiceFabricCluster
New-AzureRmServiceFabricCluster
Remove-AzureRmServiceFabricClientCertificate
Remove-AzureRmServiceFabricClusterCertificate
Remove-AzureRmServiceFabricNode
Remove-AzureRmServiceFabricNodeType
Remove-AzureRmServiceFabricSetting
Set-AzureRmServiceFabricSetting
Set-AzureRmServiceFabricUpgradeType
Update-AzureRmServiceFabricDurability
Update-AzureRmServiceFabricReliability

For the latest documentation, check out the docs.

Installation

Admittedly, I’m not a huge PowerShell user. But I wanted to at least give these a quick test run, especially the New-AzureRmServiceFabricCluster command, as that lets us create a new cluster without writing an ARM template! Pretty cool…for the scenarios where it supports the customizations we need. More on that later. Since I’m not a huge PowerShell user, I didn’t even have the Azure PowerShell SDK installed on my main machine, so I went off to install it. Just my luck, the first install didn’t go so well: when I tried running some of these new commands I got an error saying I needed to run Import-Module on AzureRM.ServiceFabric. When I did that, I got this error:

Import-Module : The module to process '.\Microsoft.Azure.Commands.ServiceFabric.dll', listed in field 'NestedModules' of module manifest 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager\AzureRM.ServiceFabric\AzureRM.ServiceFabric.psd1' was not processed because no valid module was found in any module directory.

Indeed, the dll it was looking for didn’t exist. After some unsuccessful troubleshooting I gave up and removed the SDK and reinstalled. That time it worked.

Create a New Cluster

Starting with Hello World, I wanted to create a new cluster. Nothing fancy, just following one of the examples given in the help documentation. After updating it with my values, I ended up with this (watch for wrapping if you copy/paste):

$pwd="OneSuperSecret@99" | ConvertTo-SecureString -AsPlainText -Force
$RGname="testposhasf"
$clusterloc="SouthCentralUS"
$subname="$RGname.$clusterloc.cloudapp.azure.com"
$pfxfolder="c:\MyCertificates\"

Write-Output "create cluster in " $clusterloc " subject name for cert " $subname " and output the cert into " $pfxfolder

New-AzureRmServiceFabricCluster -ResourceGroupName $RGname -Location $clusterloc -ClusterSize 3 -VmPassword $pwd -CertificateSubjectName $subname -CertificateOutputFolder $pfxfolder -CertificatePassword $pwd -OS WindowsServer2016DatacenterwithContainers

That creates a 3-node cluster using the Server 2016 w/Containers OS and secures it by creating a new cert, storing it in Key Vault, and downloading it locally so I can use it. (Install it locally before trying to access Service Fabric Explorer.) It took around 10 minutes and resulted in a usable cluster, all without writing a single line of JSON!

And here’s the Resource Group view, showing all of the artifacts it created for me.

A few things to point out:

  1. While this command does create a secure cluster, notice it created the Key Vault in the same Resource Group. Not really the best deployment scenario, but it gets the job done. If you’d prefer to use an existing Key Vault, or one in a different Resource Group, the same command has options for that. Examples are shown in the help.
  2. For some reason, it created the cluster with version 5.5.216 of the Service Fabric runtime, whereas the latest version is 5.6.210 (and preferred when using Windows containers). Hopefully this will get fixed soon.
  3. If you don’t like the naming scheme (does “l5nbd6qsesaeu100” mean anything to you?), you’ll need to create a JSON template.
  4. For control over many other options (such as deploying into an existing VNET), you’ll be back in JSON.
  5. Even though you’re back in JSON, you can still leverage this command by passing in your template file (see the other examples).

All in all, I think this is a great start and I like where the tooling is going. I can’t wait to see these capabilities grow and, hopefully, be adopted over in the CLI world.

Docker, Linux, Technical

Oracle Database on Docker for Windows

Coming out of DockerCon this year one of the announcements I was most excited about was from Oracle with their Docker support. I don’t know why I was excited about it as I haven’t used Oracle for a project in over 12 years, but odd things excite me. Since I have Docker for Windows running on my Windows 10 laptop, I decided I would use that to create an image of Oracle Database 11.2.0.2 Express Edition. I won’t rehash the steps here as the good folks at Oracle have done a decent job of this already, but I will call out a few things I noticed:

  • Don’t un-compress the installation binaries after downloading. Yeah, I know they call that out in the docs, but I missed it initially.
  • Be patient. Or multi-task.
  • And most importantly, Express expects at least 2048MB of swap space. The MobyLinuxVM used by Docker for Windows only has 1024MB, so you will get an error stating:
    "This system does not meet the minimum requirements for swap space. Based on the amount of physical memory available on the system, Oracle Database 11g Express Edition requires 2048 MB of swap space. This system has 1023 MB of swap space. Configure more swap space on the system and retry the installation."

So unless someone out there can tell me how to set up a larger swap space in that VM, we are stuck and can’t use Docker for Windows. Dammit. I did tag on to an existing Docker forum post, so we’ll see if that bears any fruit.

In the meantime, I fired up an Ubuntu image in Azure, installed Docker, and used that to create the container. I didn’t create the VM with swap space so I did have to go in and add that (I used this method), but once that was set up the image was created just fine. The previous note applies regarding patience and/or multi-tasking.
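
For anyone curious, adding swap to an Ubuntu VM generally amounts to something like this sketch (the 2GB size matches Oracle’s requirement; adjust as needed):

# create and enable a 2GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# verify the new swap space
grep -i swap /proc/meminfo

# to persist across reboots, add this line to /etc/fstab:
# /swapfile none swap sw 0 0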

[screenshot: docker images output showing the new oracle/database image and its size]

Not the smallest of images, but there you have it. Now I can fire up an Express Database running in Docker by running:

docker run --name oracleexpress --shm-size=1g -p 1521:1521 -p 8080:8080 -e ORACLE_PWD=tmppassword oracle/database:11.2.0.2-xe

After about 5 minutes, you’ll have a running container! To test the connection and make sure it was running, I logged in using sqlplus from the container:

docker exec -ti oracleexpress sqlplus system/tmppassword@//localhost:1521/XE

Connection successful, and I was able to query the database!
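
You can also fire off a one-off query without an interactive session by piping into sqlplus in silent mode. A quick sketch (the query itself is just an example):

# run an ad-hoc query against the containerized database
echo "SELECT sysdate FROM dual;" | docker exec -i oracleexpress sqlplus -s system/tmppassword@//localhost:1521/XE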

[screenshot: sqlplus session showing a successful query]

Here’s the image up on Docker Hub if you just want to pull it and start playing.

Now to figure out that MobyLinuxVM swap space…

P.S. If there was any doubt it would run on Windows 10, I also ran it on my Windows machine after pulling the image down from Docker Hub.

Linux, Technical

Launch Visual Studio from Bash on Windows

Since I’m starting to use Bash on Windows (WSL) more regularly, I added a quick way to launch Visual Studio 2017.

  1. Edit .bashrc and add the VS path (I’m obviously using Enterprise so your path may be different): export PATH=$PATH:"/mnt/c/Program Files (x86)/Microsoft Visual Studio/2017/Enterprise/Common7/IDE"
  2. I chose to add an alias, so I also added this to my .bashrc (the combined fragment is shown after these steps): alias vs2017=devenv.exe
  3. Reload your shell: . ~/.bashrc
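
Putting steps 1 and 2 together, the relevant .bashrc fragment looks like this:

# make Visual Studio 2017 (Enterprise path, in my case) launchable from bash
export PATH=$PATH:"/mnt/c/Program Files (x86)/Microsoft Visual Studio/2017/Enterprise/Common7/IDE"
alias vs2017=devenv.exe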

Now I can quickly pop open Visual Studio by using “vs2017”. For example, to open an existing solution I can navigate to the folder containing the .sln and simply type “vs2017 mysolutionfile.sln” at my bash prompt and VS2017 will fire up with that project loaded.

Here’s my .bashrc if you want to see the full file.

Azure, Azure Government, Technical

Azure Event Hubs vs AWS Kinesis

With Amazon and Microsoft being the main providers of cloud-based telemetry ingestion services, I wanted to do a feature and price comparison between the two. If nothing else, this info should help with an understanding of each service’s capabilities and perhaps help with making a decision on which service is best for your needs. I realize if you’re on AWS you’re probably going to use Kinesis and if you’re on Azure you’re probably going to use Event Hubs. But at least arm yourself with all the info before diving in!

Two caveats to this info worth noting:

  1. Yes, I work for Microsoft. I did not fudge these numbers or any of the info to paint a nicer picture for Azure. This info is factual based on my research into both services.
  2. Cloud services and their pricing change, so these specs and pricing are current as of the date of this post and you should re-check on Azure or AWS to verify.

This is a purely objective comparison focused on service specs. I’m not going to get into the usability of either service, programming efficiency, portal experiences, or anything else like that. Just numbers. Notice there are a couple question marks on the AWS side because I couldn’t find the info in the Kinesis documentation and folks I asked didn’t know. If you can help fill in those gaps, or notice some of this has changed, please let me know in the comments.

 

|  | Event Hubs | AWS Kinesis |
| --- | --- | --- |
| Input Capacity | 1 MB/s per Throughput Unit (TU) | 1 MB/s per Shard |
| Output Capacity | 2 MB/s per TU | 2 MB/s per Shard |
| Events/s | 1K | 1K |
| Latency | 50 ms avg, 99th percentile < 100 ms | 10 s min |
| Protocol | HTTPS or AMQP 1.0 | HTTPS |
| Max Message Size | 256 KB | 1 MB |
| Included Storage | 84 GB per TU | ?? (none?) |
| Max Consumers | 1 Consumer Group (Basic tier); 20 Consumer Groups (Standard tier) | ?? (only limited by output capacity?) (see the 6/1/2016 update below) |
| Monitoring | Built-in portal metrics or REST API | CloudWatch |
| Message Retention | 24 hrs (up to 7 days) | 24 hrs (up to 7 days) |
| Price per Hour | $0.015/TU Basic tier; $0.030/TU Standard tier | $0.015/Shard |
| Price per Million Units | $0.028 Basic & Standard (64 KB/unit) | $0.014 (25 KB/unit) |
| Extended Data Retention Price | Only if stored events exceed 84 GB × #TUs: $0.024/GB (assuming LRS) | $0.020/Shard-hour |
| Region Used for Pricing | East US | US East |
| Throughput Flexibility | Adjust TUs as needed | Adjust Shards as needed |
| Supported Regions | 18 (plus GovCloud) | 9 |

<Update 6/1/2016> Turns out the answer to Max Consumers for Kinesis isn’t exactly straightforward due to its dependency on HTTP(S), as pointed out to me after publishing this post in February. Kinesis is limited to 5 read transactions per shard, so your max consumers will depend on how you spread those transactions across your consumers. If you have five consumers each reading once per second, five is your max. Since output is capped at 2 MB/s, you can read up to that capacity in each transaction, but you have to design your consumers to work within those limits. Additional info on this Stack Overflow thread.</Update 6/1/2016>

To compare pricing, I’m using the sample scenario from AWS. In case they change it, here is the sample the numbers below are based on:

“Let’s assume that our data producers put 100 records per second in aggregate, and each record is 35KB. In this case, the total data input rate is 3.4MB/sec (100 records/sec*35KB/record). For simplicity, we assume that the throughput and data size of each record are stable and constant throughout the day.”
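
Working that sample through with the prices above shows why the two services land on identical base totals (back-of-the-envelope math; monthly figures assume 31 days):

Ingress: 100 records/s × 35 KB ≈ 3.4 MB/s → 4 Shards/TUs (1 MB/s ingress each)
Shard/TU cost: 4 × $0.015/hr × 24 hr × 31 days = $44.64
Kinesis PUT units (25 KB each): 35 KB → 2 units/record → 100 × 2 × 86,400 × 31 ≈ 535.7M units × $0.014/M ≈ $7.50
Event Hubs ingress units (64 KB each): 35 KB → 1 unit/record → 100 × 86,400 × 31 ≈ 267.8M units × $0.028/M ≈ $7.50

The larger Event Hubs billing unit exactly offsets its higher per-million price at this record size.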

Kinesis Pricing Sample

| Shards | 4 |
| Shard cost/month (31 days) | $44.64 |
| PUT cost/month | $7.50 |
| Total | $52.14 |
| Extended Retention Cost | $59.52 |
| Total w/Extended Retention | $111.66 |

Event Hubs Pricing Sample

|  | Basic | Standard |
| --- | --- | --- |
| TUs | 4 | 4 |
| TU cost/month (31 days) | $44.64 | $89.28 |
| PUT cost/month | $7.50 | $7.50 |
| Total | $52.14 | $96.78 |
| Extended Retention Cost* | N/A | $47.24 |
| Total w/Extended Retention | N/A | $144.02 |

* Extended storage only available on Standard tier

Results

On the pricing side, I found it interesting they are the exact same price! Unless you need extended retention and need to bump up to the Standard tier on Event Hubs. Comparing the specs, the items that jump out for me that might impact a decision are latency (Event Hubs blows away Kinesis), protocol (no AMQP on Kinesis), max message size (Kinesis is quite a bit larger), the size of a pricing unit (64KB for Event Hubs and 25KB for Kinesis), and the number of regions. Whichever service you choose to go with, hopefully this info helps make the decision a bit easier.

Office Dev, Technical

File Upload to SharePoint Online

One of my peers, Doug Perkes, wrote an awesome sample project on GitHub called Office 365 SharePoint File Management which demonstrates a multi-tenant MVC application connecting to Office 365 to allow a user to upload a file to SharePoint Online. Very handy, indeed. This post gives you steps to follow to integrate the same functionality into an existing MVC application. In order to get to the point where your application can upload a file to SharePoint Online, you must first provide the ability for the user to authenticate into their Office 365 tenant and have your application configured as a multi-tenant app, which is handled by Azure Active Directory. Ignoring a bunch of plumbing code which you’ll see by following the steps below, you can then call into SharePoint Online using either the REST or CSOM API (Doug’s sample shows both.)

As you’re walking through these steps, have Doug’s repo open (or clone it locally) as you’ll be grabbing files from it. For every step where you bring a file over from the repo, update the namespace to match your project namespace.

  1. Add an Office 365 Connected Service, if not already done. To do this, you will need an Office 365 developer account which can be obtained either through your MSDN subscription or a free one-year subscription. Doug walks through how to do this in Step 3 on his GitHub repo so I won’t duplicate that here.
  2. Install EntityFramework, if not already done. Right-click -> Manage NuGet Packages, or using the Package Manager Console window.
  3. Install Microsoft.SharePointOnline.CSOM NuGet package.
  4. Add a Utils folder at the root of your project
  5. From the repo, bring in Utils/SettingsHelper.cs
  6. Add Models/ApplicationDbContext.cs
  7. Add Models/ADALTokenCache.cs
  8. Add Models/ADALTokenCacheInitializer.cs
  9. Update your Global.asax.cs
    1. Add using statement for System.Data.Entity
    2. Add the following line to the Application_Start method:
      Database.SetInitializer(new Models.ADALTokenCacheInitializer());
  10. Update your App_Start/Startup.Auth.cs file to incorporate the code from his Startup.Auth.cs
  11. Add Models/SearchResult.cs
  12. Add Models/SearchModel.cs
  13. Add Views/Home/Sites.cshtml
  14. Add ExecuteSearchQuery method from his HomeController.cs to your controller and resolve references
  15. Add Sites method from his HomeController.cs to your controller and resolve references
  16. Add ConsentApp and RefreshSession methods from his AccountController.cs to your AccountController and resolve references

Stopping here, when you run your app you will now be able to log into an Office 365 tenant and display the Sites view, which will show all SharePoint sites the user has access to (you’ll need to add an entry point to the Sites view on your own, something like: @Html.ActionLink("Start Here »", "Sites", "Home", new { @class = "btn btn-primary btn-lg" })). This is done using the Search REST API. Doug continues in his sample to include additional views for Libraries, Upload and UploadFile which all show how to read from SPO and then upload a file to a library in SPO using CSOM. I won’t walk through the steps of how to incorporate that functionality into your project as it’s pretty repetitive of what was done above to get Sites working.

If you’d like a deeper understanding of what the code is doing, check out the two references below:

Multi-tenant MVC app using AAD to Call O365 API
Searching a SPO site using REST

Office Dev, Technical

Check User’s Browser within Office 365 Add-In

When writing an Office 365 Add-In intended to run in Office 365 (as opposed to just an Office thick client, such as Word), you may need to be concerned about which browser your user is in. I’ll cover a specific scenario, then a more general one, and show how to perform the check.

Internet Explorer 9 Support

As of the writing of this post, any Add-In published to the Office Store will be validated against IE 9 and rejected if it doesn’t work. Other than random IE 9 JavaScript quirks, your Add-In may be using an Office API feature that isn’t supported in IE 9, such as the HTML coercion type when using setSelectedDataAsync. The validation team realizes there aren’t always work-arounds for these limitations, so they allow us to state in the app description that the Add-In doesn’t support IE 9 and “fail gracefully” with a kind error message. To check for IE 9 in your Add-In, add the following function to your app.js file within app.initialize:

// App doesn't support IE 9
app.isBrowserSupported = function () {
    var ua = navigator.userAgent, tem,
        M = ua.match(/(opera|chrome|safari|firefox|msie|trident(?=\/))\/?\s*(\d+)/i) || [];
    M = M[2] ? [M[1], M[2]] : [navigator.appName, navigator.appVersion, '-?'];
    if ((tem = ua.match(/version\/(\d+)/i)) != null) M.splice(1, 1, tem[1]);
    var browser = M.join(' ');
    return browser != 'MSIE 9';
};

Now wherever it makes sense in your Add-In to check the browser and display a kind message back to the user (perhaps in Home.js, after app.initialize() is called), add a check and behave accordingly:

if (app.isBrowserSupported()) {
   // All is good, proceed as normal
}
else {
   // Browser not supported, display kind error message and disable functionality
}

General Browser Check

For any other need to check the browser, here’s that same logic but a bit more generic so you can modify it to fit your needs (wrapped in a function so the early returns work). I “stole” this from a co-worker so I’m not sure who the original author is. If you know, please leave a comment so I can give them credit.

app.showBrowser = function () {
    var ua = navigator.userAgent, tem,
        M = ua.match(/(opera|chrome|safari|firefox|msie|trident(?=\/))\/?\s*(\d+)/i) || [];
    // IE 11+ reports as Trident; pull the version from the rv token
    if (/trident/i.test(M[1])) {
        tem = /\brv[ :]+(\d+)/g.exec(ua) || [];
        app.showNotification('IE ' + (tem[1] || ''));
        return;
    }
    // Opera and Edge masquerade as Chrome; check for their own tokens
    if (M[1] === 'Chrome') {
        tem = ua.match(/\b(OPR|Edge)\/(\d+)/);
        if (tem != null) {
            app.showNotification(tem.slice(1).join(' ').replace('OPR', 'Opera'));
            return;
        }
    }
    M = M[2] ? [M[1], M[2]] : [navigator.appName, navigator.appVersion, '-?'];
    if ((tem = ua.match(/version\/(\d+)/i)) != null) M.splice(1, 1, tem[1]);
    app.showNotification(M.join(' '));
};

As you can see, it uses the Office 365 Add-In built-in app.showNotification method to show the result.

.NET, Office Dev, Technical

Convert Office Add-In Web to MVC

Using Visual Studio to create a new Office Add-In results in two projects: one for your Office Add-In (basically, the manifest) and another for the web project where you do the bulk of your work to implement functionality. The web project that is created is a basic HTML/JS/CSS application…nothing fancy like ASP.NET. For most situations, a lightweight client-side web application is ideal, and it makes sense for that to be the default of the VS project template. How about those other situations where you need something to run on the server, like an ASP.NET MVC application? There are a couple of choices:

  1. Add a new MVC project to your solution and pull in the Office “stuff” and trim it down so it can be used as the web project for your Add-In
  2. Convert the existing web project to MVC

I’ll show you how to do the second option in this post.

MVC Plumbing

  1. Using NuGet Package Manager, add Microsoft.AspNet.Mvc to your web project
  2. Add the following folders to your project:  App_Start, Controllers, Views
  3. Right-click the Views folder and add a new Web Configuration File and name it Web.config
  4. Add the following code to this new Web.config, replacing [[YourNamespace]] with the namespace of your project
    <?xml version="1.0"?>
    <configuration>
      <configSections>
        <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
          <section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
          <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
        </sectionGroup>
      </configSections>

      <system.web.webPages.razor>
        <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=5.2.3.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        <pages pageBaseType="System.Web.Mvc.WebViewPage">
          <namespaces>
            <add namespace="System.Web.Mvc" />
            <add namespace="System.Web.Mvc.Ajax" />
            <add namespace="System.Web.Mvc.Html" />
            <add namespace="System.Web.Routing" />
            <add namespace="[[YourNamespace]]" />
          </namespaces>
        </pages>
      </system.web.webPages.razor>

      <appSettings>
        <add key="webpages:Enabled" value="false" />
      </appSettings>

      <system.webServer>
        <handlers>
          <remove name="BlockViewHandler"/>
          <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
        </handlers>
      </system.webServer>

      <system.web>
        <compilation>
          <assemblies>
            <add assembly="System.Web.Mvc, Version=5.2.3.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          </assemblies>
        </compilation>
      </system.web>
    </configuration>

CSS

I chose to use the Site.css stylesheet that the NuGet package created for me. To do that, I took all of the CSS from the app.css and home.css files and put it in the Site.css file.

JavaScript

Copy the App.js and Home.js files to the Scripts directory.

Content

I’ll assume your existing web project has the standard files from the VS template:  app.js, app.css, home.html, home.css and home.js. This section shows how to pull that content into a new view.

  1. Right-click the Controllers folder and add a new Controller named HomeController. This should add a file that looks like the following:
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;

    namespace [[YourNamespace]].Controllers
    {
        public class HomeController : Controller
        {
            // GET: Home
            public ActionResult Index()
            {
                return View();
            }
        }
    }

  2. Add a Home and Shared folder to the Views folder
  3. Right-click the Shared folder and add a new View called _Layout using the “Empty (without model)” template and checking the box for “Create as a partial view”
  4. Replace the <head> content with the following (note the reference to the Office UI Fabric, that’s optional if you aren’t using it…but you should be):
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="IE=Edge" />
    <title>@ViewBag.Title - My ASP.NET Application</title>
    <script src="~/Scripts/modernizr-2.6.2.js"></script>

    <link href="~/Content/Office.css" rel="stylesheet" />
    <script src="https://appsforoffice.microsoft.com/lib/1/hosted/office.js" type="text/javascript"></script>
    <script src="~/Scripts/jquery-1.10.2.min.js"></script>

    <!-- Office UI Fabric -->
    <link rel="stylesheet" href="//appsforoffice.microsoft.com/fabric/1.0/fabric.min.css" />
    <link rel="stylesheet" href="//appsforoffice.microsoft.com/fabric/1.0/fabric.components.min.css" />

    <link href="~/Content/Site.css" rel="stylesheet" type="text/css" />
    <script src="~/Scripts/App.js"></script>
    <script src="~/Scripts/Home.js"></script>

  5. Replace the <body> content with the following:
    <div id="content-header">
        <div class="padding">
            <h1>[[Your application name]]</h1>
        </div>
    </div>

    <div id="content-main">
        @RenderBody()
        <hr />
        <footer>
            <p>&copy; @DateTime.Now.Year - Content Mixr</p>
        </footer>
    </div>

  6. If you have anything like a top nav or other content that is consistent across multiple pages in your app, paste it in there as appropriate
  7. Right-click the Home folder and add a new View called Index (or whatever you want it to be called, the rest of this post assumes Index) using the “Empty (without model)” template and checking the box for “Create as a partial view”
  8. Replace the contents of index.cshtml with the HTML from the <body> section of your home.html file (keep the ViewBag.Title at the top of the file if you’re going to use it)

Config and Cleanup

  1. Right-click the Views folder and add a new partial view called _ViewStart and add the following content:
    @{
        Layout = "~/Views/Shared/_Layout.cshtml";
    }

  2. (This may already exist but create it if not) Right-click the App_Start folder and add a new class file called RouteConfig.cs and add the following content:
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Routing;

    namespace [[Your application namespace]]
    {
        public class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                routes.MapRoute(
                    name: "Default",
                    url: "{controller}/{action}/{id}",
                    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
                );
            }
        }
    }

  3. Open the Global.asax.cs file and edit the Application_Start() method to the following:
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }

  4. If your app doesn’t use Bootstrap, delete the Bootstrap CSS and JS files (if it does, go back to your _Layout.cshtml file and add the Bootstrap references, as the above code doesn’t include them)
  5. You may have noticed the MVC package brought in a reference to a newer version of jQuery (1.10.2 as of the writing of this post), so there are now probably two jQuery versions in the Scripts folder; delete the version you don’t want.
  6. Open the App Manifest XML file (from the Office Add-In project) and set the DefaultValue for SourceLocation to ~remoteAppUrl since the default page of the app is now the web app default page
    <DefaultSettings>
      <SourceLocation DefaultValue="~remoteAppUrl/" />
    </DefaultSettings>

That should do it. Please post a comment here or contact me directly if you hit any snags.

Office Dev, Open XML, Technical

Open XML SDK Intro

Let me start by saying I AM NOT an expert with Open XML. I dabbled with it a few years ago for a small project and then merrily went on my way, doing just fine without the need to touch it again. That changed this week when I had a challenge to do something the Office 365 API and Office JavaScript API don’t support (as of the writing of this post, anyway): the seemingly simple task of determining the page count of a document. The Primary Interop Assembly supports this, but building a VSTO didn’t fit the need…I needed something external that could inspect the document properties without actually opening the document in Word. The answer finally came to me from the other side of the world by way of a co-worker, Andrew Coates (thank you!). He pointed out that I could pull out the page count through Open XML using the Open XML SDK, so I started diving in and learned it’s really simple to use, which is not at all how I remember it! I’ll use this post as an introduction to the SDK to show how simple it is.

First steps: go get the 2.5 SDK and the SDK Productivity Tool (check out this video to learn more about the tool). If you’re more of a documentation person, here are the docs. I won’t go into the details of the Open XML spec or format, but it’s worth saying that an Open XML document is a package made up of multiple parts, so to interact with the document in any way we first need to figure out which part to work with. That’s where the Productivity Tool (or the docs) can help you. Firing that up and opening a document will allow you to inspect the Open XML of the document, find what you’re looking for, and then program against it.

For finding the page count, I needed to look at the Pages property located in the /docProps/app.xml part under the Properties element. Opening the document in the Productivity Tool and switching to the Reflected Code tab shows the value (1, in this case) along with the extended-properties namespace.

Knowing it’s in extended-properties, I can now jump over to Visual Studio and use the SDK to pull out the value for the document using WordprocessingDocument.ExtendedFilePropertiesPart.Properties.Pages. Simple, I don’t even have to mess with an XML object, which is nice.

using System;
using DocumentFormat.OpenXml.Packaging;

namespace LoadOOXMLDocument
{
  class Program
  {
    static void Main(string[] args)
    {
      const string filename = "hi.docx";
      using (WordprocessingDocument wordDoc = WordprocessingDocument.Open(filename, true))
      {
        ExtendedFilePropertiesPart propPart = wordDoc.ExtendedFilePropertiesPart;
        Console.WriteLine("The document has {0} pages.", propPart.Properties.Pages.Text);
        Console.ReadLine();
      }
    }
  }
}

If you want to dive deeper, here are some other online resources:

The Wordmeister
Eric White Blog
OpenXMLDeveloper.org
GitHub Samples