Accessing Visual Studio Online from YOUR Azure Account – now we can, thanks to this new PaaS offering

In case you missed Brian Harry's announcement – we can now either link existing Visual Studio Online (VSO) instances or create brand new VSO instances and control them through the Azure Portal. The problem before was that there had to be a 1:1 correspondence between VSO and a Microsoft Account. If you're managing your own projects, that is likely not a problem. If, however, you are in an enterprise, having a direct dependency on a Microsoft Account doesn't work.
The good news is that today, we can associate VSO instances to an Azure Subscription.



[screenshot]

Creating a new VSO instance is easy:

[screenshot]

Note: if you want, an Azure Active Directory instance can be associated with your VSO instance.

[screenshot]

And just like that, we have a new TFS instance courtesy of Visual Studio Online!!

[screenshot]

No more spending unproductive time installing and configuring TFS from scratch: in a matter of minutes, we can stand up TFS with Visual Studio Online. From here, you can choose either TFVC or Git for your source code control, and it is then very simple to connect your Azure Website to your new repository so you can publish straight from it. Like the on-prem version of TFS, you can choose your process template: CMMI, MSF Agile or Scrum. With VSO, you can also create build definitions. For more info on what VSO has to offer, check out this site. For a comparison of features between Team Foundation Server 2013 (on-prem) and Visual Studio Online, check out this resource.

Posted in Team Foundation Server, Visual Studio Online

Restsharp, MultiPartFormDataContent File Name — 400 error

Ok, I really hate and enjoy spending 6 hours hunting down an issue. I hate it because the bug is super stupid and I have no idea why it behaves this way. I enjoy it because this is what developers do: we solve issues, and bugs are issues.

I have spent a total of 6 hours trying to figure out why I was not able to do an HTTP POST to an endpoint with a file attached. I was at an advantage because I had a working example (no direct access to the source, just the running application), so I could upload the file and capture the traffic via Fiddler.

I tried to make my post as similar as possible to the working example, but I kept getting back a 400 error telling me that the file was not included. The problem was that the FILE WAS INCLUDED. I could see the bytes in the message via Fiddler. Finally I decided to pull the source for RestSharp.Portable locally to see if I could find the issue.

After a ton of testing I finally noticed one little difference. A difference that, to me, looked like a non-issue… turns out it was the issue.

Here is the Fiddler result of the working post:
Content-Disposition: form-data; name="file"; filename="awesome-desktop-wallpapers-4.jpg"

Here is the Fiddler result of the non-working post:
Content-Disposition: form-data; name=file; filename=awesome-desktop-wallpapers-4.jpg;

Notice the difference? Yeah, the quotes around the filename header property. This was the issue. It turns out that, per the W3 spec for Content-Disposition, the file name SHOULD be wrapped in quotes.

To fix this issue I simply escaped my file name prior to adding it to my request, and then the post went through.
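Reading "escaped" here as pre-wrapping the name in quote characters, here is a minimal sketch of the idea. It uses System.Net.Http's MultipartFormDataContent directly rather than RestSharp, and the names and paths are purely illustrative:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class UploadSketch
{
    // Passing name/fileName pre-wrapped in quotes makes the header render as
    // Content-Disposition: form-data; name="file"; filename="whatever.jpg"
    static async Task PostFileAsync(string url, string path)
    {
        using (var client = new HttpClient())
        using (var form = new MultipartFormDataContent())
        {
            var fileContent = new ByteArrayContent(File.ReadAllBytes(path));
            // The escaped quotes are the workaround; without them the header
            // is emitted unquoted and the server rejects the request.
            form.Add(fileContent, "\"file\"", "\"" + Path.GetFileName(path) + "\"");
            var response = await client.PostAsync(url, form);
            response.EnsureSuccessStatusCode();
        }
    }
}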

Now on to the core issue. When stepping through the source for the RestSharp library I found the exact line of code causing the issue. Take a look at the screenshot below; the yellow box highlights the bad line.

[screenshot: the offending line in the RestSharp.Portable source]

The issue is that multipartContent.Add is a part of the System.Net.Http namespace.

It turns out that if you do not have the fileName escaped, it will NOT add quotes around it when the HTTP header is created. If the fileName IS escaped, it will add the quotes when it is added to the HTTP header.

Something does not seem right, but at least I know the issue and how to work around it.

Till next time,

Posted in C#, Uncategorized

Debugging xUnit tests in Xamarin Studio and Mono Develop using The Debugging Trick™

Update: This post has been completely revamped, as it turns out there is a much easier way; kudos to Michael for pointing this out. I will lament my image editing skills for a small second :-)

As I mentioned in my recent post on the Splunk blog, we’ve been developing a new awesome portable C# SDK that will support Xamarin/Mono.  We ran into a bunch of issues running on Linux with tests failing due to file paths and supporting pct-encoded URIs, so I decided to try to debug in Xamarin Studio.

I had really hoped this would be easy. It wasn't as easy as I hoped, but after a bunch of trial and error it turns out there is a pretty simple path. Xamarin Studio does have a "Debug Application" option, but with it all I could do was run the xUnit console runner; I couldn't pass it any parameters. I tried various things for a while and came up empty.

And then I decided to tweet the always helpful Michael Hutchinson to see if he had any ideas, and he did!

[screenshot of the tweet]

Update: It turns out you don't need to do what is suggested above, which is what the first version of this post recommended.

Read on, my friends, to find out what you can do…

First open Xamarin Studio to your test project. You can see I've done this below, and I've added a breakpoint on the completely ridiculous, fictitious test we want to debug.

[screenshot: the test project open in Xamarin Studio, with a breakpoint set]
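If you're following along without the screenshot, any trivial test will do; something like this (purely illustrative, not the SDK's actual test) is enough to hang a breakpoint on:

using Xunit;

public class RidiculousTests
{
    // A completely fictitious test; its only job is to give us a line to break on.
    [Fact]
    public void Answer_should_be_42()
    {
        var answer = 40 + 2;   // set the breakpoint here
        Assert.Equal(42, answer);
    }
}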

Now right-click on the project, then select Options->Run->Custom Commands. Next select Execute in the drop-down, and put the following into the Command box.

/Users/glennblock/xunit/xunit.console.clr4.exe ${TargetFile}

[screenshot: the Custom Commands dialog]

The first part points to the location of the xUnit console runner. The second is an expansion which will be replaced with the path to the test dll.
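The resolved command ends up looking roughly like this (the dll path is illustrative):

/Users/glennblock/xunit/xunit.console.clr4.exe /Users/glennblock/tests/bin/Debug/Sdk.Tests.dll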

Now all you have to do is run!

[screenshot: the debugger stopped at the breakpoint]

Voila!

And that, my friends, is "The Debugging Trick™"!

Side note: I don't think anyone should have to do this, though this version is far less painful than what my first post described. I just found out some great news from Michael: there is proper IDE support for xUnit in the works.

[screenshot]

This is very good news! In the meantime, you have the trick :-).

As a side note, this technique is generally useful beyond unit tests; you can use it to debug any managed exe where you need to pass arguments.

Thank you Michael Hutchinson for the help!

Posted in mono, xamarin

#BeyondCallbacks or How Koa helps me Code Better

For the better part of my life I have been a C# programmer, but lately I have ventured into JavaScript land. And I like it. I have gotten over the "what kind of junk is this" phase and come to see the power and beauty: "hidden under a huge steaming pile of good intentions and blunders is an elegant, expressive programming language". You should read that book, by the way; that's the one that made me like JavaScript.

Being a backend guy (I will NEVER understand CSS… there, I've said it!) I soon came to look into Node too, and pretty soon after that I met Express. Express was very nice since it reminded me of Nancy. Felt right at home, back on the super-duper-happy-path!

So after going through a lot of examples and tutorials and writing a couple of applications on my own, I grew really tired of one feature of most Node applications. That was …

…wait (state)

…creating request for fact (state)

Wrapping in logger (state)

And authenticating (state)

Calling service (state)

“THE FACT”

Attaching to state

Out of callback

reformatting response and out of callback

Out of callback

There – we can continue. …the fact that you need to use callbacks so extensively. Don't get me wrong – the non-blocking principles that Node is built around are awesome. I especially like that you "fall into the pit of success": since everything is written around non-blocking code, it automatically helps my application scale and manage resources wisely. But seriously… all those nested callbacks are making my eyes bleed. Talk about hiding the intention of the code. And I also grew very tired of passing state through the chain of callbacks just to be able to use it in the final one. And for the record: yes, I have heard about promises, but for some reason I couldn't wrap my head around them. For me, it didn't feel natural. I never gave them a proper chance, I'm willing to admit. But when I saw Koa, things started to make sense again.

Here is a mini-application that returns a user from MongoDB, looked up by the name sent in the URL.

// Dependencies
var koa = require('koa');
var app = koa();
var logger = require('koa-logger');
var route = require('koa-route');
 
// Db access
var monk = require('monk');
var wrap = require('co-monk');
var db = monk('localhost/koaDemoUsers');
var users = wrap(db.get('users'));
 
// Middleware
app.use(logger());
 
// Route
app.use(route.get('/user/:name', getUser));
 
// Route handling
function *getUser(userName) {
  var user = yield users.findOne({name: userName});
  if (!user) this.throw(404, 'invalid user name');
  this.body = user;
}
Pretty nice, huh? Thumbs up from me! I even threw in some logging and error handling just to make it a little more interesting. Strip that out and you end up with 2-3 significant lines of code. I take my web frameworks like my coffee: short, sweet and powerful. And we have not lost the non-blocking features that we've come to expect and love in Node. In short – Koa helps me Code Better! In the rest of this post I'll introduce you to the concepts of Koa and give you a short overview of how it works. Continue reading

Posted in javascript

Telerik, moving to Kendo

I've been pretty quiet lately. I've spent all my time on our big app, working with the usual MVC, DDD, NHibernate and the like. To our satisfaction, so far we have managed to keep up with the ever-changing landscape of Dutch mental healthcare. For our web pages we were using the Telerik MVC suite; in a previous post I already shared some of our experiences. Telerik is replacing the MVC suite with the new Kendo suite, which has several very appealing options. Besides that, support for the MVC suite is coming to an end. Time to move on.

Replacing all components in one go was not possible. Kendo has the same architecture as the MVC suite and a very similar syntax, but our app is just too large to change all at once. According to the documentation it's possible to mix the two suites, even in the same view. Getting that actually to work took an effort beyond the FAQ. In this post I'll dive into the details.

CSS

Both the MVC suite and Kendo require some style sheets. In the MVC suite you need a StyleSheetRegistrar component to render the CSS links; Kendo follows the standard way.

Registering the style sheets

Code Snippet
  @Html.Telerik().StyleSheetRegistrar().DefaultGroup(group => group.Add("telerik.common.css").Add("telerik.eposoffice2010silver.min.css").Combined(true).Compress(true))
  <link href="@Url.Content("~/Content/kendo.common.min.css")" rel="stylesheet" />
  <link href="@Url.Content("~/Content/kendo.default.min.css")" rel="stylesheet" />
  <link href="@Url.Content("~/Content/epos/epos.css")" rel="stylesheet" type="text/css" />

Completely straightforward

jQuery

When it comes to jQuery, things get somewhat complicated. The MVC suite requires version <= 1.7, Kendo requires version >= 1.9, and these two are not compatible. Thank goodness the jquery-migrate script library patches the leaks, making it possible for the MVC suite to run on jQuery 1.9 and up. The gotcha is that the MVC suite uses a ScriptRegistrar component to register the Telerik scripts. By default this component will also register (the wrong version of) jQuery again. This is prevented by the jQuery(false) method of the registrar.

Code Snippet
  <script src="@Url.Content("~/Scripts/jquery-1.11.0.min.js")" type="text/javascript"></script>
  <script src="@Url.Content("~/Scripts/jquery-migrate-1.2.1.min.js")"></script>
  <script src="@Url.Content("~/Scripts/kendo.all.min.js" + "?v=" + version)"></script>
  <script src="@Url.Content("~/Scripts/kendo.aspnetmvc.min.js" + "?v=" + version)"></script>
  <script src="@Url.Content("~/Scripts/kendo.culture.nl-NL.min.js" + "?v=" + version)"></script>

The MVC suite scripts should be rendered at the end of the page:

Code Snippet
  @(Html.Telerik().ScriptRegistrar().jQuery(false).Globalization(true).DefaultGroup(group => group.Combined(true).Compress(true)))

To sum things up, here is the full master layout:

Code Snippet
  @using Telerik.Web.Mvc.UI
  <!DOCTYPE html>
  <html>
  <head>
      <meta charset="utf-8" />
      <title>@ViewBag.Title</title>
      @Html.Telerik().StyleSheetRegistrar().DefaultGroup(group => group.Add("telerik.common.css").Add("telerik.eposoffice2010silver.min.css").Combined(true).Compress(true))
      <link href="@Url.Content("~/Content/kendo.common.min.css")" rel="stylesheet" />
      <link href="@Url.Content("~/Content/kendo.default.min.css")" rel="stylesheet" />
      <link href="@Url.Content("~/Content/epos/epos.css")" rel="stylesheet" type="text/css" />
      <script src="@Url.Content("~/Scripts/jquery-1.11.0.min.js")" type="text/javascript"></script>
      <script src="@Url.Content("~/Scripts/jquery-migrate-1.2.1.min.js")"></script>
      <script src="@Url.Content("~/Scripts/kendo.all.min.js")"></script>
      <script src="@Url.Content("~/Scripts/kendo.aspnetmvc.min.js")"></script>
      <script src="@Url.Content("~/Scripts/kendo.culture.nl-NL.min.js")"></script>
      @* Scripts required for ajax forms *@
      <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script>
      <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
      <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
      @* App specific scripts *@
      <script src="@Url.Content("~/Scripts/Epos.js")" type="text/javascript"></script>
      <script src="@Url.Content("~/Scripts/tiny_mce/tiny_mce.js")" type="text/javascript"></script>
  </head>
  <body>
      <script type="text/javascript">
          kendo.culture("nl-NL");
      </script>
      <div>
          <div id="maincontent">
              @RenderBody()
          </div>
          @(Html.Telerik().ScriptRegistrar().jQuery(false).Globalization(true).DefaultGroup(group => group.Combined(true).Compress(true)))
      </div>
  </body>
  </html>

In case your views use Ajax forms, rendered using Ajax.BeginForm, the Ajax scripts are required. IMHO these forms are a pita: the handling of events is quite entangled and, even worse, fields edited with a Kendo editor are not included in the postdata. For most of our forms we do a straight post using extension methods. Alas, not all views have been adapted yet, so for now the scripts are still required. The good thing is that they still work in the new jQuery scenario and don't seem to bite anything else.

In action

Using this layout, both suites can be combined on one page. Here's a Kendo ColorPicker used within an MVC suite TabStrip:

[screenshot: a Kendo ColorPicker inside an MVC suite TabStrip]

Localization

The way the localization of the components works differs. Kendo is very much script-based; for instance, the titles of the buttons of the ColorPicker above can only be set from script.

To standardize the look and feel of our components we build them using HTML extension methods, as described here. The same goes for the Telerik components, which have a lot of options that can be set. The Kendo suite also has an HTML extension library, and the extension methods in there provide options to configure the components. Alas, this list of options is not complete; it is smaller than the list of configuration options available from script. For instance, it is not possible to set the captions of the ColorPicker from the extension method.

All the HTML extension methods do is render some script, so it is no big deal to bypass the Kendo extension methods and render the script yourself. The Kendo components all follow the same pattern:

Code Snippet
  <input id="colorpicker" type="color" />
  <script>
      $("#colorpicker").kendoColorPicker({
          buttons: false
      })
  </script>

The main difference is that one component renders an input and the other a div.

To get the full range of configuration options available from script, you can create your own extension methods which render the script directly:

Code Snippet
  internal static MvcHtmlString KendoColorPickerFor<TModel>(this HtmlHelper<TModel> htmlHelper, Expression<Func<TModel, string>> expression)
  {
      /*
          <input id="colorpicker" type="color" />
          <script>
          $("#colorpicker").kendoColorPicker({
              messages: {
                  apply: "Update",
                  cancel: "Discard"
              }
          })
          </script>
      */
      var id = expression.NameBuilder(htmlHelper.ViewData.Model);
      var value = ModelMetadata.FromLambdaExpression(expression, htmlHelper.ViewData).Model;
      var sb = new StringBuilder();
      sb.AppendFormat("<input id='{0}' type='color' value='{1}'/>", id, value);
      sb.AppendLine("<script>");
      sb.AppendFormat("$('#{0}').kendoColorPicker({{ messages: {{ apply: 'OK', cancel: 'Annuleren' }} }})", id);
      sb.AppendLine("</script>");
      return MvcHtmlString.Create(sb.ToString());
  }

The captions are set to OK and Annuleren, hard-coded for now.

The NameBuilder method is the one from one of my previous posts; it is used to generate a unique id.
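If you don't have that post at hand, here is a rough stand-in (purely illustrative; the real NameBuilder lives in the earlier post and may well differ) that derives the id from the expression using the standard MVC helpers:

using System;
using System.Linq.Expressions;
using System.Web.Mvc;

internal static class NameBuilderSketch
{
    // Hypothetical stand-in for the NameBuilder extension used above: it turns the
    // lambda (e.g. m => m.Color) into the field name MVC would generate and then
    // sanitizes it into a valid HTML id. The model parameter only mirrors the call
    // shape in the snippet; this simple version does not need it.
    internal static string NameBuilder<TModel, TProperty>(this Expression<Func<TModel, TProperty>> expression, TModel model)
    {
        var name = ExpressionHelper.GetExpressionText(expression);
        return TagBuilder.CreateSanitizedId(name);
    }
}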

Small gotchas

So far, mixing Kendo and the classical suite works well. We have had two small gotchas.

  • Window stacking. In the MVC suite you can stack windows on top of each other: a popup in a popup. In the Kendo suite you can do that as well. This stacking is done using high z-index values, and the classical suite uses a higher range than Kendo, resulting in a Kendo popup window launched from a classic window appearing behind its originator. You cannot stack Kendo and classical windows. Thank goodness changing a classical window to a Kendo window is no big deal.
  • Ajax forms. As mentioned, fields edited with Kendo components are excluded from the Ajax postdata. In our app we are switching to posting the data straight from script, using our postdata extension. Far more flexible, and no more need for a form.

Winding down

Finding out how the classical suite and Kendo each require jQuery was somewhat of a hurdle. But now all works well and we can convert the rest of the code at our own pace, and please our users with a better look and feel. The way the Kendo window automatically adjusts to the size of its content is worth the migration by itself.

Posted in Uncategorized