Monday, 17 September 2012
Using TFS with the WorkflowCentralLogger, PowerShell and PSAKE

I was recently brought into a client site where they had made use of PSAKE to handle their build process. The build would be kicked off from the traditional Workflow in TFS using an InvokeProcess activity. Everything was working perfectly until they spotted that, when a build failed, there was no way of seeing from within TFS which unit tests had failed. In short, PowerShell was giving precious little back to the TFS summary view.

The question was: how could we get the rich logging information you get in the build summary when doing a traditional build using Workflow? Setting up a traditional build and observing how MSBuild is called from TFS starts to shed some light on the situation:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe /nologo /noconsolelogger "C:\Builds\1\Scratch\Test Build\Sources\user\Test\Build.proj" /m:1 /fl /p:SkipInvalidConfigurations=true  /p:OutDir="C:\Builds\1\Scratch\Test Build\Binaries\\" /p:VCBuildOverride="C:\Builds\1\Scratch\Test Build\Sources\user\Test\Build.proj.vsprops" /dl:WorkflowCentralLogger,"C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Microsoft.TeamFoundation.Build.Server.Logger.dll";"Verbosity=Normal;BuildUri=vstfs:///Build/Build/111;InformationNodeId=6570;TargetsNotLogged=GetNativeManifest,GetCopyToOutputDirectoryItems,GetTargetPath;TFSUrl=http://mytfshost:8080/tfs/Test%20Collection;"*WorkflowForwardingLogger,"C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Microsoft.TeamFoundation.Build.Server.Logger.dll";"Verbosity=Normal;"

In the above example, the /dl:WorkflowCentralLogger switch and its arguments (everything from /dl: onwards) turned out to be responsible for the summary view you usually see when kicking off a build from TFS. I discovered this with a bit of guesswork and some time in .NET Reflector seeing what was going on inside MSBuild. Googling for the WorkflowCentralLogger gives precious little back about how it works; most of what you find is about the errors people have encountered with it.

Getting to the solution
You would be forgiven for thinking the answer is simply to add the missing WorkflowCentralLogger switch (with arguments) to your MSBuild command line in PowerShell/PSAKE. Sadly, it's not that simple. See the InformationNodeId in the above command line? This appears to tell the WorkflowCentralLogger where it needs to append its logging information. Passing it into the InvokeProcess was my first thought; the problem is that nothing in the workflow will hand it to you. I wasn't able to find it anywhere.

So how do you get it to work then?
The answer is: you need to build a custom workflow activity, because a custom workflow activity has access to the current context. To do this you need to inherit from the CodeActivity class. It's up to you how you use the custom activity; you have two options:

  • Place it above the InvokeProcess in your workflow, get the InformationNodeId and pass it as an OutArgument to the InvokeProcess below it, as shown in the code below (not fully tested)
  • Or invoke PowerShell from within the custom activity using a runspace and pass it the code activity context (fully tested)
   1:  namespace MyWorkflowActivities
   2:  {
   3:      using System;
   4:      using System.Activities;
   5:      using System.Globalization;
   6:   
   7:      using Microsoft.TeamFoundation.Build.Client;
   8:      using Microsoft.TeamFoundation.Build.Workflow.Activities;
   9:      using Microsoft.TeamFoundation.Build.Workflow.Services;
  10:   
  11:      [BuildActivity(HostEnvironmentOption.All)]
  12:      public sealed class GetInformationNodeId : CodeActivity
  13:      {
  14:          // Exposed to the workflow so the node id can be handed to the InvokeProcess below.
  15:          public OutArgument<string> InformationNodeIdOut { get; set; }
  16:   
  17:          protected override void Execute(CodeActivityContext context)
  18:          {
  19:              context.TrackBuildMessage("Getting the Information Node Id", BuildMessageImportance.Low);
  20:   
  21:              // Ask the build logging extension for the tracking node of this activity.
  22:              IActivityTracking activityTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking((ActivityContext)context);
  23:              string informationNodeId = activityTracking.Node.Id.ToString("D", (IFormatProvider)CultureInfo.InvariantCulture);
  24:   
  25:              context.SetValue<string>(this.InformationNodeIdOut, informationNodeId);
  26:          }
  27:      }
  28:  }

The code above illustrates the first solution. It's a lot simpler, but you'll have to pass that node id to MSBuild yourself when you construct its command line in PowerShell. Lines 22 and 23 are where all the magic takes place; I managed to find them by poking around MSBuild and the TFS logger assembly with .NET Reflector. If you have never written a custom activity before, Ewald Hofman has a short summary of one here.

The diagram below illustrates where GetInformationNodeId (the code above) sits: just above the InvokeProcess which calls PowerShell.

[Diagram: the GetInformationNodeId activity placed directly above the InvokeProcess activity in the build workflow]

The second solution, which I actually went with, is slightly more complex, and I'll blog about how I did it in another article. You might be wondering what the immediate benefits of one over the other are. The beauty of the second solution is that you can make use of the code activity context within your PowerShell scripts. So, for example, instead of writing your PowerShell events out to the host, you could wrap that call in context.TrackBuildMessage (as illustrated on line 19 above). Hopefully I'll find some time to blog about that next week!
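In the meantime, here is a rough sketch of the runspace half of that approach. The script path and the $context variable name are placeholders of mine, not the exact code I'll be writing up:

    // Inside Execute(CodeActivityContext context) of the custom activity.
    // Needs: using System.IO; and using System.Management.Automation.Runspaces;
    using (Runspace runspace = RunspaceFactory.CreateRunspace())
    {
        runspace.Open();

        // Hand the activity context to the script; the script sees it as $context.
        runspace.SessionStateProxy.SetVariable("context", context);

        using (Pipeline pipeline = runspace.CreatePipeline())
        {
            pipeline.Commands.AddScript(File.ReadAllText(@"C:\Builds\Scripts\Build.ps1"));
            pipeline.Invoke();
        }
    }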

I'd be interested to hear about other people's experiences.

posted on Monday, 17 September 2012 14:19:34 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Saturday, 25 August 2012
How to check a PDF's page size with iTextSharp

I don't know why I found it so hard to get hold of this information, so I've placed it on my blog for reference purposes. As before, if you can suggest a better method of doing this, please leave a comment.


public string GetPageSize(string pathToPdf)
{
    var reader = new PdfReader(pathToPdf);

    // A PostScript point is 0.352777778mm.
    const float millimetresPerPoint = 0.352777778f;

    // iTextSharp returns the height and width in PostScript points.
    float height = reader.GetPageSizeWithRotation(1).Height * millimetresPerPoint;
    float width = reader.GetPageSizeWithRotation(1).Width * millimetresPerPoint;

    reader.Close();

    // A4 is 210mm x 297mm; allow a little slack for rounding.
    if ((width >= 210 && width < 211)
        && (height >= 297 && height < 298))
    {
        return "A4";
    }

    return "unknown page size";
}
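Calling it is then a one-liner (the path here is just an example). One thing worth noting: GetPageSizeWithRotation returns the rotated dimensions, so a landscape A4 page will come back roughly 297mm wide by 210mm high, and you would need a second check with the comparisons swapped if you want to catch that too.

    string pageSize = GetPageSize(@"C:\Temp\document.pdf"); // "A4" or "unknown page size"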
posted on Saturday, 25 August 2012 15:11:48 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Friday, 29 June 2012

I thought I'd post this for my own records so I have somewhere to refer back to it. I've also posted it because there was very little help regarding this problem on the Internet, and the workarounds proposed weren't that nice: many ranged from hacks that forced the page to reload itself to having to use CAML instead.

Anyway, here is the scenario. The textbook piece of code below is used to update a list item in SharePoint using JavaScript. Everything works fine; however, if someone else updates a record on another machine after you, and then you update the same record on your machine, you'll get the "Version Conflict" error.


function updateListItem(id, statusField, valueToChangeTo, listName, newparentId, parentField) {

    var ctx = SP.ClientContext.get_current();

    var list = ctx
        .get_web()
        .get_lists()
        .getByTitle(listName);

    var item = list.getItemById(id);

    item.refreshLoad();

    item.set_item(statusField, valueToChangeTo);
    item.set_item(parentField, newparentId);

    item.update();

    ctx.executeQueryAsync(function () {
        console.log("New value: ", item.get_item(statusField));
    });
}


So what went wrong?
Well, basically, the object you're accessing is a cached object that you retrieved the first time you saved the item. Since someone else changed the item in the meantime, your cached copy is going to cause a version conflict, as SharePoint has a newer version of the item.

How do I solve the problem?
You need to load the object again and then update it.

function updateListItem(id, statusField, valueToChangeTo, listName, newparentId, parentField) {

    var ctx = SP.ClientContext.get_current();

    var list = ctx
        .get_web()
        .get_lists()
        .getByTitle(listName);

    var item = list.getItemById(id);

    // Load the item fresh from SharePoint before touching it.
    ctx.load(item);

    ctx.executeQueryAsync(function () {
        updateListItemAfterData(item, statusField, valueToChangeTo, parentField, newparentId);
    });
}

function updateListItemAfterData(item, statusField, valueToChangeTo, parentField, newparentId) {

    var ctx = SP.ClientContext.get_current();

    item.set_item(statusField, valueToChangeTo);
    item.set_item(parentField, newparentId);
    item.update();

    ctx.executeQueryAsync(function () {
        console.log("New value: ", item.get_item(statusField));
    });
}
So in the code above I call updateListItem with my values. This loads the list item fresh from SharePoint and waits on the async call. When the callback fires, it calls updateListItemAfterData to do the actual saving for us. Please note that in the above example you may want to pass the context along, or declare it globally, rather than fetching it twice.

So far the above solution appears to be working for me, with no version conflicts.

posted on Friday, 29 June 2012 12:12:47 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Thursday, 01 March 2012
First look at Visual Studio 11 Beta

Just a quick blog article with my first thoughts on Visual Studio 11 Beta. I suppose the first thing that hits me after the web install (you will need to reboot) is "oh, it's very monochrome". I think I can understand the choice behind the monochrome feel; it's probably targeted towards developers like myself who use ridiculously high resolutions to get everything on the screen. It also reminds me of some Java IDEs and some Linux GUI applications.

[Screenshot: the new monochrome-style IDE]

Have a look in Tools > Options and you'll find the ability to switch to a darker theme. I can already think of four developers I know who would prefer this type of theme; however, the majority of developers will probably be looking for ways to get the old themes back.

[Screenshot: the dark theme option in Tools > Options]

What I do find nice, though, is that Microsoft appear to have geared the IDE towards the capability of the developer's machine. It's a good idea, because not every developer is given the best machine for running a development environment.

[Screenshot: options for adjusting the IDE to the machine's capability]

The Solution Explorer
The Solution Explorer has changed, and appears to be a hybrid between the old Class View and the old Solution Explorer.

[Screenshot: the new Solution Explorer]

Verdict so far?

The new GUI appeared highly responsive; however, I was using it on a machine with a lot of memory and an SSD. I personally like the monochrome feel of the IDE, although I know I'll be in the minority. I noticed that the source control providers I had installed on my machine, such as Git and Mercurial, didn't come up in the source control provider drop-down, so we'll probably need new versions of these plugins (among others) by the time the product is finally released.

Anyway, more later.

posted on Thursday, 01 March 2012 10:56:01 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Wednesday, 15 February 2012

It's true that SharePoint DataForm webparts are incredibly easy to develop; if you need to query lists or data without creating fully grown webparts from scratch in SharePoint, they are definitely the way to go. However, there is a but: most of the SharePoint universe appears to believe that everyone creates DataForm webparts on their production environment using SharePoint Designer. In many production environments SharePoint Designer is disabled by default. If you have stand-alone webparts this may not be an issue, but if you have a set of DataForm webparts that need to be linked to each other, you're going to face all kinds of problems when you try to connect them on a target environment.

So, for example, let's say I have developed some webparts on my development SharePoint box. I access a list from my webparts, and I have had the sense to take the list from the production environment as a template so I won't have any conflicts with field names. All works fine and my webparts talk to each other; but when I deploy them and try to get them to talk to each other, they just refresh the page, even though I've re-created the connections between them. What happened?

Enter what I feel is the Achilles' heel of the DataForm webpart. If you created those webparts in SharePoint Designer and created the webpart connection between them there, you probably didn't realise that it places a bit of code in the webpart which looks something like this:


<xsl:value-of select="ddwrt:GenFireConnection(concat('g_cb4fe2eb_738d_4bbb_8ec7_ce81633092a5*',$fields),string(''))"></xsl:value-of>

The problem with the above bit of code is that when you make a webpart connection in SharePoint Designer with DataForm webparts, it hard-codes the GUID of the target webpart. When you deploy your webparts, the target environment will give them different GUIDs. Even if you try to re-establish the connections in SharePoint's web interface, this won't make a difference at all.

If you are wondering what GenFireConnection does (and I appreciate there is precious little documentation about it, most of it for previous versions of SharePoint): it creates an ASP.NET postback link which contains the consumer webpart's GUID (the g_… value above) and the data we are sending to the consumer webpart, such as the value of a field.

Work Around
The only workarounds I have found to this little problem, unless anyone else has a better method, are to:

  1. Create the link in the SharePoint web interface on the target environment after the webparts are deployed and placed on the page.
  2. Select "Edit Webpart" and use the XSL editor to change the GUID in the GenFireConnection call on the calling webpart to the new GUID of the target webpart.

The other option is to just use query strings in your lists. SharePoint Designer will quite happily accommodate this using parameters, which you can then pass around in links on your fields.

The above seems to always work for me, although I would love to know if there is a more elegant solution to this.

posted on Wednesday, 15 February 2012 13:15:51 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Friday, 20 January 2012
SQL Insert Statement Issues

Have you ever had the following SQL INSERT statement issue?

"There are fewer columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement."

Like me, you probably counted your columns, then counted your values, realised they were the same, and spent ages scratching your head trying to figure out what on earth was going on. Well, here is how I managed to reproduce the issue:

insert into myTable ([columnA], [columnB ) values (1,2)

Did you see what I did in the above statement? I left the "]" off the end of "columnB", and that is enough to produce the above-mentioned error message. It was simply a typo on my part, and it took me ages to find in a large SQL INSERT statement.
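For comparison, the statement as it should have been:

insert into myTable ([columnA], [columnB]) values (1,2)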

Hope this helps anyone who has been trying to solve the above problem and found that they do have matching columns and values.

posted on Friday, 20 January 2012 14:39:31 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Saturday, 01 October 2011
Solving the 3709 error problem

I've spent many weekends looking into this problem and thought I'd best blog it so someone else at least gets the benefit of it.

The issue I am talking about centres around the following error message.

"The connection cannot be used to perform this operation. It is either closed or invalid in this context."

The above error message has been a true bane to me. It was an issue on a classic ASP site that had been quite happily ticking away for many years. I spent ages looking through the code, ensuring that the SQL connection was properly closed after each use and that ADODB.Recordsets were being used correctly. The error didn't make sense to me because the problem only happened occasionally, and I was convinced it was either an issue with MDAC or the version of IIS (we had moved to a new server a few months ago).

The solution
To cut a long story short, the solution I discovered was in SQL Server 2005 itself! Looking through the SQL Server logs, I found that after the last request SQL Server would "Auto Close" the database and release its resources. When the website made its next request, SQL Server would still be busy spinning the database back up, which would then return the error above!

To stop this happening, right-click your database in SQL Server Management Studio, select Properties, then select Options and set "Auto Close" to False. (The option is still there in newer versions of SQL Server, though it defaults to off in most editions.)
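The same setting can also be flipped in T-SQL (swap in your own database name):

ALTER DATABASE [YourDatabase] SET AUTO_CLOSE OFF;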

[Screenshot: the Auto Close setting on the database Options page]

posted on Saturday, 01 October 2011 00:32:33 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Thursday, 16 June 2011
Setting up DasBlog on Windows Server 2008

I've been meaning to do a quick blog article about this for some time so I don't forget. I found setting up DasBlog on Windows Server 2008 pretty difficult. I currently run DasBlog on a Windows Server 2008 server with the following app pool: .NET Framework v2.0, in Integrated mode.

One of the issues I discovered was setting up the permissions so that DasBlog could read and write to the content folders. To do this, follow the steps here: http://learn.iis.net/page.aspx/624/application-pool-identities/

Basically, you need to give the application pool identity that DasBlog runs under permission to these folders; for example, grant the user IIS AppPool\[your app pool name] read and write access on the content folder.
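From an elevated command prompt, that grant boils down to something like this (the path and pool name are placeholders for your own):

icacls "C:\inetpub\wwwroot\dasblog\content" /grant "IIS AppPool\YourAppPool":(OI)(CI)M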

posted on Thursday, 16 June 2011 20:12:10 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Saturday, 02 April 2011
LizaMoon – Injection and Cross Site Scripting attacks

The news on the LizaMoon injection attacks, which have been publicised a lot in the press lately, really made me want to find out more. Being a technically minded person, I wanted to get past the general media version of what was happening and down to what this means for people who run websites that might be vulnerable.

Reading posts on Stack Overflow, it seemed the same old vulnerabilities that have been around for a very long time were once again being exploited. Even though I have checked many sites I have worked on in the past, you can't help but wonder if there is anything you have forgotten. Website security is not something you can declare done with a "yes, I fixed it"; it's an ongoing battle (a bit like an arms race) where you have to keep up to date with the latest vulnerabilities.

One of the classic vulnerabilities exploited by such attacks is the query string SQL injection. Take, for example, the following URLs on a website:

readmessage.asp?messageid=234

or

readmessage.php?messageid=234

There is nothing wrong with the above URLs, as long as what happens behind the scenes makes sure that whichever SQL database you are using, be it MySQL or MS SQL Server, is protected from bad input. Basically, you cannot trust any input you get from the web.

One of the things I like doing with this type of input, before I even reach SQL, is to ensure that the query string value I am being sent (in this case messageid) really is an integer. So, in whatever language you are coding in, a very simple step is: if messageid is intended to be an integer, test that it actually is one. If it isn't, you can either boot the user back to the page they came from, or send them to a generic error page that basically says you couldn't understand what they wanted to do. Never display a detailed error message that divulges SQL statements and lines of code.
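In C#, for instance (purely illustrative; the page and parameter names are made up), that check might look like this:

    // Reject anything that isn't a plain integer before it ever reaches SQL.
    int messageId;
    if (!int.TryParse(Request.QueryString["messageid"], out messageId))
    {
        // Generic error page; never echo SQL statements or stack traces back.
        Response.Redirect("~/error.aspx");
        return;
    }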

What if messageid is supposed to be a string, such as, say, a GUID? Test that all the characters used in the GUID are in a whitelist of acceptable characters first: for example, accept A-Z, a-z, 0-9 and "-", and reject everything else. In addition, you can HTML encode or escape the input before sending it along to the code that persists it to SQL. In the code that does the SQL persistence, you can also help prevent such attacks by using parameterised SQL statements instead of building your UPDATE or INSERT statements as strings.
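As a sketch of both ideas in C# (the table, columns and connection string are invented for the example; Guid.TryParse needs .NET 4, and on older frameworks a Regex whitelist does the same job):

    // using System.Data.SqlClient;
    // Guid.TryParse doubles as a whitelist check for GUID-shaped input.
    Guid messageGuid;
    if (!Guid.TryParse(Request.QueryString["messageid"], out messageGuid))
    {
        Response.Redirect("~/error.aspx");
        return;
    }

    // Parameterised query: the value travels as data, never as part of the SQL text.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT Subject, Body FROM Messages WHERE MessageId = @MessageId", connection))
    {
        command.Parameters.AddWithValue("@MessageId", messageGuid);
        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            // Read the message...
        }
    }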

Another method I have seen used (although I am not a fan of it), where no free text input is expected, is to literally strip out words and symbols such as "update", "insert", "delete", ), ( and '. This, however, can only be done where you definitely know these words are not intended as text values in a table field. If not used properly it could backfire, and you could end up losing data from sentences a user was innocently entering into the system.

The other thing to remember is that just because the content went into the database safely doesn't mean that displaying that same content back to the user is going to be safe. Take, for example, a message board that uses SQL Server to store its messages; it's pretty easy to escape what a user enters so that it's perfectly preserved in SQL. Let's say what they entered happened to be some JavaScript, and that the JavaScript redirects a user to a malicious site. If you do not HTML encode the message board text when it is displayed in the user's browser, you are basically putting users that trust your site at risk. HTML encoding what you display ensures that the user sees the text of what is being presented, and that the browser doesn't suddenly kick in and start executing the code it's been given. Remember that this applies to just about any text you display to the user, including the browser title tag, which may be something like this:

<title>Does anyone know how to make green widgets?</title>

The above, if not encoded, could quite easily be changed to the following by a malicious post on your message board:

<title>Does anyone</title><script>document.location='somesite'</script><title></title>

The code above could potentially redirect a user to a malicious site.
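In ASP.NET, the encoding step is a single call (the input string here is just the example above):

    // using System.Web;
    // HtmlEncode turns markup characters into entities, so the browser
    // displays the text instead of executing it.
    string safeTitle = HttpUtility.HtmlEncode(
        "Does anyone</title><script>document.location='somesite'</script>");
    // safeTitle now contains &lt;script&gt;... entities rather than live markup.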

posted on Saturday, 02 April 2011 20:38:48 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Friday, 04 June 2010
Handling the DropDownList SelectedIndexChanged event in a Repeater

This is more for my own reference, because I keep forgetting how to do it and constantly have to look it up. If it helps you out, even better! And before you say "...but in MVC you can do it like this...": I know, but some of us still have to work with WebForms on legacy apps.

My main problem with the DropDownList-in-Repeater examples on the net is that they don't show you how to figure out which DropDownList in your Repeater fired the SelectedIndexChanged event.

 
// This is bound to the ItemDataBound event on the repeater.
protected void RepeaterBasketItems_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    DropDownList DropDownListQuantity =
        (DropDownList)e.Item.FindControl("DropDownListQuantity");

    // Hint: after typing += you can hit TAB TAB in Visual
    // Studio for it to create the event handler for you.
    DropDownListQuantity.SelectedIndexChanged
        += new EventHandler(DropDownListQuantity_SelectedIndexChanged);
}

// Handles the SelectedIndexChanged event.
void DropDownListQuantity_SelectedIndexChanged(object sender, EventArgs e)
{
    // The sender is the DropDownList that fired the event.
    DropDownList dropdown = (DropDownList)sender;

    // Cast the parent to type RepeaterItem.
    RepeaterItem repeaterRow = (RepeaterItem)dropdown.Parent;

    // Inside the RepeaterItem find a hidden Literal I
    // placed there which contains the Item Id of the row.
    // You could use the DataItem if this is being persisted.
    Literal LiteralItemId = (Literal)repeaterRow.FindControl("LiteralItemId");

    // Parse this string into an integer.
    int itemId = int.Parse(LiteralItemId.Text);

    // You can do some error handling here if the parse doesn't work.

    // Get the value from the dropdown list.
    int newQuantity = int.Parse(dropdown.SelectedValue);

    // Over here you could put your update method that uses itemId and newQuantity.
}
posted on Friday, 04 June 2010 09:09:36 (GMT Standard Time, UTC+00:00)  #    Comments [0]