Friday, 08 February 2013

If you've been trying to get the FRITZ!Box VPN client to work lately and have been getting the above error, it is probably caused by a missing folder on your machine. In my case I was missing the folder "FRITZ!Fernzugang", which on my machine lives in the directory "C:\ProgramData\AVM".

It seems the FRITZ!Box client tries to save a file to that directory but doesn't check whether it exists, or even try to create the folder if it doesn't. I had this problem with the 64 bit version of the client for Windows 7; I am guessing it may only be a problem on that version.

You can see more of what's going on if you check the folder C:\Program Files\FRITZ!VPN\logs; there is a file named nwtsrv.log which will give you a lot more detail on what's happening.

If you have been having errors importing the configuration for a VPN into the FRITZ!Box itself, try exporting the file encrypted first; for some reason it doesn't seem to like an unencrypted file.

posted on Friday, 08 February 2013 10:26:15 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Monday, 17 September 2012
Using TFS with the WorkflowCentralLogger, PowerShell and PSAKE

I was recently brought into a client site where they had made use of PSAKE to handle their build process. The build was kicked off from the traditional Workflow in TFS using an Invoke Process activity. Everything was working perfectly until they spotted that when the build failed there was no way of viewing which unit tests had failed from within TFS. In short, PowerShell was giving precious little back to the TFS summary view.

The question was: how could we get the rich logging information you get in the build summary when doing a traditional build using Workflow? Setting up a traditional build and observing how MSBUILD is called from TFS starts to shed some light on the situation:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe /nologo /noconsolelogger "C:\Builds\1\Scratch\Test Build\Sources\user\Test\Build.proj" /m:1 /fl /p:SkipInvalidConfigurations=true /p:OutDir="C:\Builds\1\Scratch\Test Build\Binaries\\" /p:VCBuildOverride="C:\Builds\1\Scratch\Test Build\Sources\user\Test\Build.proj.vsprops" /dl:WorkflowCentralLogger,"C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Microsoft.TeamFoundation.Build.Server.Logger.dll";"Verbosity=Normal;BuildUri=vstfs:///Build/Build/111;InformationNodeId=6570;http://mytfshost:8080/tfs/Test%20Collection;"*WorkflowForwardingLogger,"C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Microsoft.TeamFoundation.Build.Server.Logger.dll";"Verbosity=Normal;"


In the above example, the /dl:WorkflowCentralLogger switch is the section I discovered is responsible for the summary view you usually see when kicking off a build from TFS. I discovered this with a bit of guesswork and some Reflector usage to see what was going on inside MSBUILD. Googling for the WorkflowCentralLogger gives precious little back about how it works, and more about the errors people have encountered with it.

Getting to the solution
You would be forgiven for thinking the answer to the problem is just adding the missing WorkflowCentralLogger switch (with arguments) to your MSBUILD command line in PowerShell/PSAKE. Sadly it's not that simple. See the InformationNodeId in the above command line? It appears to tell the WorkflowCentralLogger where it needs to append its logging information. Passing it into the Invoke Process was my first thought; the problem is you're not going to find anything that will give it to you. I wasn't able to find it anywhere.

So how do you get it to work then?
The answer is that you need to build a custom workflow activity, because a custom workflow activity has access to the current context. To build one you inherit from the "CodeActivity" class. It's up to you how you use this custom workflow activity; you have two options:

  • Place it above the Invoke Process in your workflow, get the InformationNodeId and pass this as an OutArgument to the Invoke Process below it (not tested fully)
  • Or invoke PowerShell from within the custom activity using a runspace and pass it the code activity context. (fully tested)
   3:  namespace MyWorkflowActivities
   4:  {
   5:      using System;
   6:      using System.Collections.Generic;
   7:      using System.Linq;
   8:      using System.Text;
   9:      using System.Collections.ObjectModel;
  10:      using System.Management.Automation;
  11:      using System.Management.Automation.Runspaces;
  12:      using System.IO;
  13:      using System.Activities;
  14:      using System.Collections;
  15:      using System.Globalization;
  17:      using Microsoft.TeamFoundation.Build.Client;
  18:      using Microsoft.TeamFoundation.Build.Workflow.Activities;
  19:      using Microsoft.TeamFoundation.Build.Workflow.Services;
  21:      [BuildActivity(HostEnvironmentOption.All)]
  22:      public sealed class GetInformationNodeId : CodeActivity
  23:      {
  24:          public OutArgument<string> InformationNodeIdOut { get; set; }
  26:          protected override void Execute(CodeActivityContext context)
  27:          {
  29:              context.TrackBuildMessage("Getting the Information Node Id", BuildMessageImportance.Low);
  30:              IActivityTracking activityTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking((ActivityContext) context);
  31:              string informationNodeId = activityTracking.Node.Id.ToString("D", (IFormatProvider)CultureInfo.InvariantCulture);
  33:              context.SetValue<string>(this.InformationNodeIdOut, informationNodeId);
  34:          }
  35:      }
  37:  }

The code above illustrates the first solution. It's a lot simpler, and you'll have to pass that node id to MSBUILD when you construct its command line in PowerShell. Lines 30 and 31 are where all the magic takes place; I managed to find them using Reflector on MSBUILD. If you have never written a custom activity before, Ewald Hofman has a short summary of one here

The diagram below illustrates where GetInformationNodeId (code above) sits just above the InvokeProcess which calls PowerShell.



The second solution, which I actually went with, is slightly more complex, and I'll blog about how I did it in another article. You might be wondering what the immediate benefits of one over the other are. The beauty of going with the second solution is that you can make use of the code activity context within your PowerShell scripts. So, for example, instead of writing your PowerShell events out to the host you could wrap that call in context.TrackBuildMessage (as illustrated on line 29 above). Hopefully I'll find some time to blog about that next week!

I'd be interested to hear about other people's experiences.

posted on Monday, 17 September 2012 14:19:34 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Saturday, 25 August 2012
How to check a PDF's page size with iTextSharp

I don't know why I found it so hard to get hold of this information. I've placed it onto my blog for reference purposes. As before, if you can suggest a better method of doing this please leave a comment.


        public string GetPageSize(string PathToPDF)
        {
            var reader = new PdfReader(PathToPDF);

            // A PostScript point is 0.352777778mm
            const float postScriptPoints = (float)0.352777778;

            // iTextSharp returns the height and width in PostScript points
            float height = reader.GetPageSizeWithRotation(1).Height * postScriptPoints;
            float width = reader.GetPageSizeWithRotation(1).Width * postScriptPoints;

            reader.Close();

            // A4 is 210mm x 297mm
            if ((width >= 210 && width < 211)
                && (height >= 297 && height < 298))
            {
                return "A4";
            }

            return "unknown page size";
        }
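For what it's worth, the arithmetic itself is easy to sanity check outside of .NET. Here is the same conversion and A4 check sketched in JavaScript; the function name, constant name and test values are my own, for illustration only:

```javascript
// Millimetres per PostScript point (1 pt = 1/72 inch, 25.4 mm per inch)
const MM_PER_POINT = 25.4 / 72; // ~0.352777778

// Classify a page given its dimensions in PostScript points
function classifyPageSize(widthPts, heightPts) {
  const widthMm = widthPts * MM_PER_POINT;
  const heightMm = heightPts * MM_PER_POINT;

  // A4 is 210mm x 297mm; the range allows a little rounding slack
  if (widthMm >= 210 && widthMm < 211 && heightMm >= 297 && heightMm < 298) {
    return "A4";
  }
  return "unknown page size";
}
```

A nominal A4 page is 595.28 x 841.89 points, which converts to just over 210mm x 297mm and so passes the check; a US Letter page (612 x 792 points) falls outside the range.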
posted on Saturday, 25 August 2012 15:11:48 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Thursday, 26 July 2012
Is YouView too late?

YouView has taken an incredibly long time to launch. The idea behind YouView is, I believe, a brilliant one; however, I can't help but think they're a little late to the market, and when you see what they have to offer you can't help but think "...oh, is that it?"


Don't get me wrong, it's nice having a set top box that enables you to watch on demand content on your TV, but BBC iPlayer, ITV Player, 4OD and Demand 5 have been available on the Xbox 360 (£174) and PlayStation 3 (£187) for several months now. It makes you wonder if anyone will shell out £299 for the YouView set top box. Look at all the new smart TVs out there that already have the above services built in, or available to download as apps, and you end up scratching your head wondering if it's really worth it. The majority of the content YouView boasts, besides the services I have mentioned above, is freely available right now if you have digital TV; there is no need for a set top box. You are basically just getting the above on demand content as an extra.

Enter Sky's new service, NowTV, which will be added to YouView, and all of a sudden the above starts to look a little more viable. While NowTV is currently only a movie service, Sky will later offer Sky Sports and content from its flagship channels Sky1, Sky Atlantic and Sky Living. Granted, if you are a Sky subscriber and have an Xbox you're probably already seeing some of this content on Sky Player. The one problem there is that you have to be a Sky subscriber to get that content, and as always some shows are blocked on the live Sky channels on Xbox because Sky doesn't hold the digital rights to stream them over IPTV. It makes you wonder if we will see the same issue if they end up streaming live Sky channels over YouView.

It's important where you get YouView

Look closer at the offers for YouView available from BT and TalkTalk, as opposed to buying one solo and plugging it in, and suddenly things start to get a bit clearer: not all YouView offers are equal. Get a YouView set top box from TalkTalk and, with an additional "boost", you can get some of the Sky channels without the need for a dish and a Sky subscription. It appears TalkTalk will be offering the service currently available on TalkTalk TV in addition to the YouView content, which is great news for TalkTalk customers.

BT appear to be offering the same content from BT Vision to people who get a YouView box with them. However, looking at BT's line up, I prefer the selection of content and channels that appear to be available from TalkTalk. The dilemma is that I prefer getting my broadband from BT instead of TalkTalk.

Over the coming months it will be interesting to see how this pans out. I'm also interested to see what Sky does with its content. In the past they've always been interested in owning the platform and the rights to the content instead of sharing their content with other platforms for a fee.

posted on Thursday, 26 July 2012 09:03:50 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Thursday, 19 July 2012

I've posted this for my own reference more than anything else.

Basically, if you are trying to work out the value and text of a dropdown (lookup) field inside a SharePoint list, this is the syntax:


listDataFromServer[i].get_item('clients').get_lookupId()      - returns the value (the lookup ID)

listDataFromServer[i].get_item('clients').get_lookupValue()   - returns the text shown in the dropdown list
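To make the shape of those calls concrete, here is a runnable sketch using a stubbed item object in place of a real SP.ListItem. The stub and its data are invented for illustration; on a real page these items come back from executeQueryAsync:

```javascript
// Stub standing in for a real SP.ListItem whose 'clients' field is a lookup.
// In SharePoint the lookup value is an SP.FieldLookupValue exposing
// get_lookupId() and get_lookupValue() accessors.
var fakeItem = {
  get_item: function (fieldName) {
    var fields = {
      clients: {
        get_lookupId: function () { return 7; },           // the value
        get_lookupValue: function () { return "Contoso"; } // the display text
      }
    };
    return fields[fieldName];
  }
};

var lookup = fakeItem.get_item('clients');
console.log(lookup.get_lookupId());    // 7
console.log(lookup.get_lookupValue()); // "Contoso"
```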

posted on Thursday, 19 July 2012 14:31:18 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Friday, 29 June 2012

I thought I'd post this for my own records so I have somewhere to refer back to it. I've also posted it because there was very little help regarding the problem on the Internet, and the workarounds proposed weren't that nice; they ranged from hacks that involved forcing the page to reload itself to having to use CAML instead.

Anyway, here is the scenario: the textbook piece of code below is used to update a list item in SharePoint using JavaScript. Everything works fine; however, if someone else updates the record on another machine after you have retrieved it, and you then update the same record on your machine, you'll get the "Version Conflict" error.


      function updateListItem(id, statusField, valueToChangeTo, listName, newparentId, parentField) {
          var ctx = SP.ClientContext.get_current();
          var list = ctx
              .get_web()
              .get_lists()
              .getByTitle(listName);

          var item = list.getItemById(id);
          item.refreshLoad();
          item.set_item(statusField, valueToChangeTo);
          item.set_item(parentField, newparentId);
          item.update();

          ctx.executeQueryAsync(function () {
              console.log("New value: ", item.get_item(statusField));
          });
      }


So what went wrong?
Well, basically, the object you're accessing is a cached object that you retrieved the first time you saved the item. Since someone else changed the item after you retrieved it, your cached object is going to cause a version conflict, as SharePoint has a newer version of the item.

How do I solve the problem?
You need to load the object again and then update it.

      function updateListItem(id, statusField, valueToChangeTo, listName, newparentId, parentField) {
          var ctx = SP.ClientContext.get_current();
          var list = ctx
              .get_web()
              .get_lists()
              .getByTitle(listName);

          var item = list.getItemById(id);

          // Load a fresh copy of the item before touching it
          ctx.load(item);
          ctx.executeQueryAsync(function () {
              updateListItemAfterData(item, statusField, valueToChangeTo, parentField, newparentId);
          });
      }

      function updateListItemAfterData(item, statusField, valueToChangeTo, parentField, newparentId) {
          var ctx = SP.ClientContext.get_current();
          item.set_item(statusField, valueToChangeTo);
          item.set_item(parentField, newparentId);
          item.update();

          ctx.executeQueryAsync(function () {
              console.log("New value: ", item.get_item(statusField));
          });
      }
So in the code above I call updateListItem with my values. This loads the list item fresh from SharePoint and waits using an async call. When the async call returns, it calls updateListItemAfterData to do the actual saving for us. Please note that in the above example you may want to pass the context along, or declare it globally, to be more efficient.

So far the above solution appears to be working for me, with no version conflicts.

posted on Friday, 29 June 2012 12:12:47 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Monday, 28 May 2012
The EU cookie law, what a mess...

If you haven't already noticed, the EU cookie law became mandatory in the UK over the weekend.

However, it left a terrible taste in the mouths of several website owners when the ICO (Information Commissioner's Office) stated at the last minute that it was OK to use "implied consent", as opposed to explicit consent, before placing cookies on the user's machine. While thousands of website owners will rejoice, those that had committed the resources to meet the explicit consent requirement are probably fuming.

Implied consent effectively places the onus back on the user by telling them that by using your site a cookie will be, or has already been, placed onto their machine. If they are unhappy about this they can remove it themselves, or they can just continue using your site as usual. As a large majority of sites have been informing users about the placing of cookies in their privacy policies for years, you can't help but feel the law has somewhat lost its bite, and it makes a mockery of the whole situation.

What is interesting is that there appears to be an attitude among some companies to sit back and see who gets sued first before taking any action. You can certainly understand their reaction when a large number of government websites are themselves not compliant; this morning many appear to be following the implied consent route, placing cookies on your machine and displaying a small message at the bottom of the page about their cookie policy.


You can't help but feel that when the government came to overhauling their websites to try to meet the explicit consent requirement, someone said "Hang on a minute, we have X hundred sites and we're going to have to recode how all of them handle cookies in one year!". I also couldn't help but wonder whether, when developers looked at the issue, they discovered that certain server technologies they were using just couldn't be changed to handle the new cookie law requirement. The issue probably fell heavily on the ICO's shoulders; you can almost picture that meeting taking place. How on earth could they enforce a law the government itself was not even abiding by?

How are websites implementing the cookie law this morning?

No 10 Downing Street -

No 10's website has (you guessed it) gone for implied consent; I get 4 cookies placed onto my machine. You'll be forgiven if you missed the information about cookies; I've highlighted it for you below.

Amazon -

Amazon placed 9 cookies onto my machine as soon as I visited the website with an anonymous browser. They also appear to have gone with implied consent; scroll right to the bottom of the page and you will see the words "Cookies & Internet Advertising" in the footer.

Lloyds TSB -

Lloyds TSB have a small message at the top of their site that links to their cookie policy.



Visiting several European websites, I found many of them also followed the implied consent pattern. The information about what cookies they placed on your machine was usually buried inside their privacy policy.

While it has been stated that Britain is out of step with EU law because of the use of implied consent, which could lead to fights in the European courts, you can't help but feel the law really doesn't hold much water if the rest of Europe appears to be following the same approach. Perhaps someone somewhere responsible for the law realised what a massive mistake it was, and hopefully it will slowly be forgotten as yet another mistake. You only have to look at the European Union's own website, which also uses implied consent with some details in its "Legal notice", to realise that not much will probably happen as long as you explain your cookie policy in your privacy policy.

Report those offending cookies

The ICO has also created a page to allow members of the public to report their concerns about the use of cookies. Personally, I can't see many people using it if they were not aware of what cookies were to begin with. I would guess it is targeted more towards technically minded people; however, such people are more likely to just delete the offending cookie from their browser than think anything more of it.

Fighting Crime

The ICO also states on its website that "...the intention behind this Regulation is also to reflect concerns about the use of covert surveillance mechanisms online." It goes on to explain about the use of spyware, and that "...such activities often have a criminal purpose behind them." While I appreciate the intention of the law to fight crime, I don't believe a criminal enterprise is going to stop using cookies in this way because it is illegal to do so. However, when a criminal is charged with this very offence, I presume I will stand corrected.

I wait to see what will happen in the coming months, if anything happens at all...

posted on Monday, 28 May 2012 10:30:57 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Thursday, 24 May 2012
Are hasty responses to customer emails harming your business?

We all know how important it is for companies to respond to customer queries. A customer with a complaint can soon become a company's worst nightmare when they begin to vent their frustrations on social media such as Twitter and Facebook. Many companies recognise this and employ teams of people to respond to emails. To assist these people, many are equipped with standard responses to common queries, e.g.:

  • "Our opening times are between x and y."
  • "To place an item in your basket, select a size first and click the yellow add to basket button."
  • "To place a return, log into your account and click on the "return items" button."

The last one in that list is a good example of an issue my wife once had with a website when she was trying to return an item. She informed the company that when she clicked on the "return items" link the site gave her an error. She also copied down the error for the company, to help them fix the issue.

The response she got, you guessed it!

"To place a return, log into your account and click on the "return items" button."

The customer support team were either working on autopilot, saw the word "return" and nothing else, and didn't bother to read the rest of the email, or they had some sort of automated system in place for responding to emails, because when my wife replied and pointed out that they hadn't answered her email, she got the same response again. It was only after several attempts that human sense appeared to kick in and someone at the company acknowledged something was wrong.

I have had several cases myself when asking web based organisations questions. I have even gone to great lengths to stress that "I am NOT referring to X, I am referring to Y", yet it seems that if there is no predetermined script for an error on the site, or for something that doesn't fit into how the company works, someone somewhere just chooses the closest response.

I am starting to see a trend where people are beginning to vent their frustrations on Twitter about this very issue. I wonder if it's becoming almost as big an issue as the outsourced call centre, where an operator working off a script does not understand the problem the customer is having.

posted on Thursday, 24 May 2012 10:00:33 (GMT Standard Time, UTC+00:00)  #    Comments [0]