Thursday, 02 May 2013

Many people complain about SharePoint. Personally, I hadn't had much of a problem with it in the past, until now. The theory behind SharePoint is sound, it's a good idea, and I was an early SharePoint developer back in the days of version 1.

The main problem with SharePoint these days, I find, is actually getting teams to use it. I have worked with a lot of companies in the past and many have it installed. However, there is one key issue with SharePoint that makes it harder for people to use, and I've seen it on several sites.

How can I easily copy my documents up to SharePoint?

Any SharePoint dev, or anyone who has used it, will say "Oh, just click on the explorer view". The explorer view is great for getting documents into SharePoint, provided it actually works. You can also map a network place to a folder in SharePoint; however, this suffers from exactly the same problem. It doesn't always work!
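For anyone who hasn't seen it, the mapping is typically done through Windows' "Map Network Drive" dialog, or with a net use command along these lines (the server and site names here are invented, and the @SSL suffix is only needed for HTTPS sites); when the WebDAV plumbing is working, this gives you a drive letter straight into the library:

net use S: "\\sharepoint.example.com@SSL\sites\team\Shared Documents"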

The frustration!

Visit the Microsoft forums and they are full of people asking the same questions and being presented with all kinds of workarounds: from ensuring certain services are turned on, to adding SharePoint to your trusted sites, right down to wiping all of your browsing history. For some this works, but for quite a lot it just doesn't. For the end user this is frustrating; they may not be very technical, and it takes time out of their day. The main question on their minds is "Why can't it just work?". These users usually raise the problem with their tech support desk, who find themselves in a difficult predicament. What usually happens is people come up with their own solutions, and some of them involve bypassing SharePoint altogether: just placing the final document in there while making use of tools such as Dropbox, SkyDrive (the irony) or email to share documents.

Get it right

Surely Microsoft understands that this is a vital part of getting people to use SharePoint. They need to get this bit right, and it should just work, no questions asked. This not working is one (not all) of the reasons I hear people saying so many unfavourable things about SharePoint. Microsoft, if you get this bit right, you're going to see some happier people.


Note
I am awaiting the next version of Office 365 & SharePoint with interest; if this issue is addressed there, it will be a huge leap forward.

posted on Thursday, 02 May 2013 10:10:46 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Friday, 15 February 2013
The very distant future of retail

Reading several articles about the breakthroughs in 3D printing suddenly gave me a vision of what the future of retail could be, possibly 30 years from now. You'll have to use your imagination as I take you on this journey; you may roll your eyes and utter the word "ppleeeasse!", but bear with me.

Imagine, if you would, a future in which we are all purchasing one physical product that gets delivered to our houses; a bit like gas or water, it is piped to our houses or shipped by lorries. It arrives in large vats filled with not very interesting grey granules. It is a key product, and its value is an indicator of how well an economy is doing, just like the price of oil. We go to comparison websites to choose the company with the better deal, as we do for our telephone, gas or electricity.

Just as every home now has a TV, broadband and a computer of some kind, every home has an advanced, compact and state-of-the-art 3D printer. Online shops such as Amazon and Apple were at the forefront of this technology wave, and just as we had programs such as iTunes installed on our machines for music, we have programs such as iWear and Jungle Stuff. These programs work almost exactly the same as iTunes or Spotify: we find a digital item we want, pay for it, and it is automatically downloaded. The difference is we are now paying for the DRM-protected designs for clothes, shoes, handbags, pans, dishes and small furniture items. We download these items and our 3D printers, which take the grey granules we purchase in bulk, "print" these products into the real world as wearable clothing, shoes or small items.

The quality of what we buy is governed by the price we pay for these granules. We buy cheaper granules for things we do not think we will possess for long. When we are done with an item of clothing, it is fed back into the printer and broken back down into its granular form; the system recycles itself. Thousands of unknown designers, just as unknown authors did, now have a platform to market their designs to the public at incredibly low cost. The industry of manufacturing cheap garments in the Far East has disappeared, replaced by just the need for the most up-to-date designs. But there is a grey area, and as markets have taught us, there will always be a black market for any new product. It takes its place in copies of original copyrighted designs, the breaking of DRM-protected designs, and people "chipping" or hacking their 3D printers so they can accept non-licensed designs. There is of course an open source movement; as usual it is not for the technically inept, and is more focused towards those that understand the technology and can accept designs through an open market where people have contributed free designs.

It is a world where traditional brands such as Nike, Adidas and the rest focus on the next cutting-edge design of their product, and will probably endorse a certain type of granule for their products.

There will still be a market for traditional products, but these will be for the wealthier: as our populations increase, producing cotton, silk and wool becomes more expensive as land is better given over to food production.

There you have it: my thoughts on where retail could be in the very distant future.

posted on Friday, 15 February 2013 10:06:31 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Friday, 08 February 2013

If you've been trying to get the FRITZ!Box VPN client to work lately and have been getting the above error, it is probably caused by a missing folder on your machine. In my case I was missing the folder "FRITZ!Fernzugang", which is found in the following directory on my machine: "C:\ProgramData\AVM".

It seems the FRITZ!Box client tries to save a file to that directory but doesn't check whether it exists, or even try to create the folder if it doesn't. I had this problem with the 64-bit version of the client for Windows 7; I am guessing it may only be a problem on that version.
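If you're seeing the same thing, creating the folder by hand (assuming the same path as above on your machine) should be enough; a single command from a command prompt will do it:

mkdir "C:\ProgramData\AVM\FRITZ!Fernzugang"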

You can see more of what's going on if you check the folder "C:\Program Files\FRITZ!VPN\logs"; there is a file named nwtsrv.log which will give you a lot more detail on what's happening.

If you have been having errors importing the configuration for a VPN into the FRITZ!Box itself, try exporting the file encrypted first; for some reason it doesn't seem to like an unencrypted file.

posted on Friday, 08 February 2013 10:26:15 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Monday, 17 September 2012
Using TFS with the WorkflowCentralLogger, PowerShell and PSAKE

I was recently brought into a client site where they had made use of PSAKE to handle their build process. The build would be kicked off from the traditional workflow in TFS using an InvokeProcess activity. Everything was working perfectly until they spotted that, when the build failed, there was no way of viewing which unit tests had failed from within TFS. In short, PowerShell was giving precious little to the TFS summary view.

The question was: how could we get that rich logging information you get in the build summary when doing a traditional build using Workflow? Setting up a traditional build and observing how MSBuild is called from TFS starts to shed some light on the situation:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe /nologo /noconsolelogger "C:\Builds\1\Scratch\Test Build\Sources\user\Test\Build.proj" /m:1 /fl /p:SkipInvalidConfigurations=true /p:OutDir="C:\Builds\1\Scratch\Test Build\Binaries\\" /p:VCBuildOverride="C:\Builds\1\Scratch\Test Build\Sources\user\Test\Build.proj.vsprops" /dl:WorkflowCentralLogger,"C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Microsoft.TeamFoundation.Build.Server.Logger.dll";"Verbosity=Normal;BuildUri=vstfs:///Build/Build/111;InformationNodeId=6570;TargetsNotLogged=GetNativeManifest,GetCopyToOutputDirectoryItems,GetTargetPath;TFSUrl=http://mytfshost:8080/tfs/Test%20Collection;"*WorkflowForwardingLogger,"C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Microsoft.TeamFoundation.Build.Server.Logger.dll";"Verbosity=Normal;"

 

In the above example, the /dl: switch that loads the WorkflowCentralLogger is the section I discovered is responsible for the summary view you usually see when kicking off a build from TFS. I discovered this with a bit of guesswork and some Reflector usage to see what was going on inside MSBuild. Googling for the WorkflowCentralLogger gives precious little back about how it works, and more about the errors people have encountered with it.

Getting to the solution
You would be forgiven for thinking the answer to the problem is just adding the missing WorkflowCentralLogger switch (with arguments) to your MSBuild command line in PowerShell/PSAKE. Sadly, it's not that simple. See the InformationNodeId in the above command line? This appears to tell the WorkflowCentralLogger where it needs to append its logging information. Passing it into the InvokeProcess was my first thought; the problem is you're not going to find anything that will give it to you. I wasn't able to find it anywhere.

So how do you get it to work then?
The answer is, you need to build a custom workflow activity, because a custom workflow activity will have access to the current context. To do this you need to inherit from the class "CodeActivity". It's up to you how you use this custom workflow activity; you have one of two ways:

  • Place it above the InvokeProcess in your workflow, get the InformationNodeId and pass it as an OutArgument to the InvokeProcess below it (not fully tested), as in the code below
  • Or invoke PowerShell from within the custom activity using a runspace and pass it the code activity context (fully tested)
   1:  namespace MyWorkflowActivities
   2:  {
   3:      using System;
   4:      using System.Globalization;
   5:      using System.Activities;
   6:   
   7:      using Microsoft.TeamFoundation.Build.Client;
   8:      using Microsoft.TeamFoundation.Build.Workflow.Activities;
   9:      using Microsoft.TeamFoundation.Build.Workflow.Services;
  10:   
  11:      [BuildActivity(HostEnvironmentOption.All)]
  12:      public sealed class GetInformationNodeId : CodeActivity
  13:      {
  14:          // The node id is handed back to the workflow through this argument
  15:          public OutArgument<string> InformationNodeIdOut { get; set; }
  16:   
  17:          protected override void Execute(CodeActivityContext context)
  18:          {
  19:              context.TrackBuildMessage("Getting the Information Node Id", BuildMessageImportance.Low);
  20:   
  21:              // Ask the build logging extension which information node this activity is writing to
  22:              IActivityTracking activityTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking((ActivityContext)context);
  23:              string informationNodeId = activityTracking.Node.Id.ToString("D", (IFormatProvider)CultureInfo.InvariantCulture);
  24:   
  25:              context.SetValue<string>(this.InformationNodeIdOut, informationNodeId);
  26:          }
  27:      }
  28:  }

The code above illustrates the first solution. It's a lot simpler, and you'll have to pass that node id to MSBuild when you construct its command line in PowerShell. Lines 22 and 23 are where all the magic takes place; I managed to find them using Reflector on MSBUILD. If you have never written a custom activity before, Ewald Hofman has a short summary of one here.
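To give a feel for what that command line needs to contain, here is a trimmed sketch of the switch your PowerShell script would build, with the node id from GetInformationNodeId substituted in. The project name, logger path, BuildUri and TFSUrl are placeholders lifted from the full command line above; your values will differ:

MSBuild.exe Build.proj /dl:WorkflowCentralLogger,"C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Microsoft.TeamFoundation.Build.Server.Logger.dll";"Verbosity=Normal;BuildUri=vstfs:///Build/Build/111;InformationNodeId=<id from GetInformationNodeId>;TFSUrl=http://mytfshost:8080/tfs/Test%20Collection;"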

The diagram below illustrates where GetInformationNodeId (code above) sits just above the InvokeProcess which calls PowerShell.

 

[Diagram: the GetInformationNodeId activity placed immediately above the InvokeProcess activity in the build workflow]

The second solution, which I actually went with, is slightly more complex, and I'll blog about how I did that in another article. You might be wondering what the immediate benefits of one over the other are. The beauty of going with the second solution is that you can make use of the code activity context within your PowerShell scripts. So, for example, instead of writing your PowerShell events out to the host, you could wrap that call in context.TrackBuildMessage (as illustrated on line 19 above). Hopefully I'll find some time to blog about that next week!

I'd be interested to hear about other people's experiences.

posted on Monday, 17 September 2012 14:19:34 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Saturday, 25 August 2012
How to check a PDF's page size with iTextSharp

I don't know why I found it so hard to get hold of this information, so I've placed it on my blog for reference purposes. As before, if you can suggest a better method of doing this, please leave a comment.

 

using iTextSharp.text.pdf;

// Example usage (hypothetical path): var size = GetPageSize(@"C:\docs\sample.pdf");
public string GetPageSize(string pathToPdf)
{
    var reader = new PdfReader(pathToPdf);

    // iTextSharp reports sizes in PostScript points; one point is 25.4 / 72 = 0.352777778mm
    const float mmPerPoint = 0.352777778f;

    // GetPageSizeWithRotation takes the page's rotation into account (pages are 1-based)
    float height = reader.GetPageSizeWithRotation(1).Height * mmPerPoint;
    float width = reader.GetPageSizeWithRotation(1).Width * mmPerPoint;

    reader.Close();

    // A4 paper is 210mm x 297mm; allow a little tolerance for rounding
    if ((width >= 210 && width < 211)
        && (height >= 297 && height < 298))
    {
        return "A4";
    }

    return "unknown page size";
}
posted on Saturday, 25 August 2012 15:11:48 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Thursday, 26 July 2012
Is YouView too late?

YouView has taken an incredibly long time to launch. The idea behind YouView is, I believe, a brilliant one; however, I can't help but think they're a little late to the market, and when you see what they have to offer you can't help but think "...oh, is that it?"


Don't get me wrong, it's nice having a set-top box that enables you to watch on-demand content on your TV, but BBC iPlayer, ITV Player, 4OD and Demand 5 have been available on the Xbox 360 (£174) and PlayStation 3 (£187) for several months now. It makes you wonder if anyone will shell out £299 for the YouView set-top box. Look at all the new smart TVs out there that already have the above-mentioned services built in, or available to download as apps, and you end up scratching your head wondering if it's really worth it. The majority of the content that YouView boasts, besides the services I have mentioned above, is freely available right now if you have digital TV. There is no need for a set-top box; you are basically just getting the ability to watch the above on-demand content as an extra.

Enter Sky's new service, NowTV, which will be added to YouView, and all of a sudden the above starts to look a little more viable. While NowTV is currently only a movie service, Sky will later be offering Sky Sports and content from its flagship channels Sky1, Sky Atlantic and Sky Living. Granted, if you are a Sky subscriber and have an Xbox, you're probably already seeing some of this content on Sky Player. The one problem there is you have to be a Sky subscriber to get to that content, and as always, some shows are blocked on the live Sky channels on Xbox because they don't hold the digital rights to stream them over IPTV. It makes you wonder if we will see the same issue if they end up streaming live Sky channels over YouView.

It's important where you get YouView

Look closer at the offers for YouView available from BT and TalkTalk, as opposed to buying one solo and plugging it in, and suddenly things start to get a bit clearer: not all YouView offers are equal. Get a set-top box from TalkTalk for YouView and, with an additional "boost", you can get some of the Sky channels without the need for a dish and a subscription with Sky. It appears TalkTalk will be offering the service they currently offer on TalkTalk TV in addition to YouView content, which is great news for TalkTalk customers.

BT appear to be offering the same content as BT Vision to people who get a YouView box with them. However, looking at BT's line-up, I prefer the selection of content and channels that appear to be available from TalkTalk. The dilemma there is that I prefer getting my broadband from BT instead of TalkTalk.

Over the coming months it will be interesting to see how this pans out. I'm also interested to see what Sky does with its content; in the past they've always been interested in owning the platform and the rights to the content, instead of sharing their content with other platforms for a fee.

posted on Thursday, 26 July 2012 09:03:50 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Thursday, 19 July 2012

I've posted this for my own reference more than anything else.

Basically, if you are trying to work out the value and text of a dropdown (lookup) column on a SharePoint list item, this is the syntax:

 

listDataFromServer[i].get_item('clients').get_lookupId()      // returns the value (the lookup id)

listDataFromServer[i].get_item('clients').get_lookupValue()   // returns the text shown in the dropdown list
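For context, here is a minimal sketch of how you might end up with listDataFromServer in the first place. The list title 'My List' and the 'clients' column are just placeholder examples; the original presumably came from a similar query (and may have converted the collection to an array, hence the [i] indexing above):

function logClientLookups() {
    var ctx = SP.ClientContext.get_current();
    var list = ctx.get_web().get_lists().getByTitle('My List');

    // Pull every item back; in practice you would scope this with a CAML query
    var listDataFromServer = list.getItems(SP.CamlQuery.createAllItemsQuery());
    ctx.load(listDataFromServer);

    ctx.executeQueryAsync(function () {
        var enumerator = listDataFromServer.getEnumerator();
        while (enumerator.moveNext()) {
            var lookup = enumerator.get_current().get_item('clients');
            console.log(lookup.get_lookupId(), lookup.get_lookupValue());
        }
    }, function (sender, args) {
        console.log('Request failed: ' + args.get_message());
    });
}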

posted on Thursday, 19 July 2012 14:31:18 (GMT Standard Time, UTC+00:00)  #    Comments [0]

 Friday, 29 June 2012

I thought I'd post this for my own records so I have somewhere to refer back to it. I've also posted it because there was very little help regarding the problem on the Internet, and the workarounds proposed weren't that nice; many ranged from hacks that involved forcing the page to reload itself, to having to use CAML instead.

Anyway, here is the scenario. The textbook piece of code below is used to update a list item in SharePoint using JavaScript. Everything works fine; however, if someone else updates a record on another machine after you, and then you update the same record on your machine, you'll get the "Version Conflict" error.

 

function updateListItem(id, statusField, valueToChangeTo, listName, newparentId, parentField) {

    var ctx = SP.ClientContext.get_current();

    var list = ctx
        .get_web()
        .get_lists()
        .getByTitle(listName);

    var item = list.getItemById(id);

    item.refreshLoad();

    item.set_item(statusField, valueToChangeTo);
    item.set_item(parentField, newparentId);

    item.update();

    ctx.executeQueryAsync(function () {
        console.log("New value: ", item.get_item(statusField));
    });
}

So what went wrong?
Well, basically, the object that you're accessing is a cached object that you retrieved the first time you saved the item. Since someone else changed the object before you this time, your cached object is going to cause a version conflict, as SharePoint has a newer version of the item.

How do I solve the problem?
You need to load the object again and then update it:

function updateListItem(id, statusField, valueToChangeTo, listName, newparentId, parentField) {

    var ctx = SP.ClientContext.get_current();

    var list = ctx
        .get_web()
        .get_lists()
        .getByTitle(listName);

    var item = list.getItemById(id);

    // Load a fresh copy of the item from SharePoint before touching it
    ctx.load(item);

    ctx.executeQueryAsync(function () {
        updateListitemAfterData(item, statusField, valueToChangeTo, parentField, newparentId);
    });
}

function updateListitemAfterData(item, statusField, valueToChangeTo, parentField, newparentId) {

    var ctx = SP.ClientContext.get_current();

    item.set_item(statusField, valueToChangeTo);
    item.set_item(parentField, newparentId);
    item.update();

    ctx.executeQueryAsync(function () {
        console.log("New value: ", item.get_item(statusField));
    });
}
So in the code above I call updateListItem with my values. This then loads the list item fresh from SharePoint and waits using an async call. Once that call returns, it calls updateListitemAfterData to do the actual saving for us. Please note, in the above example you may want to pass the context along, or declare it globally instead, to be more efficient.
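One small addition of my own (not part of the original snippet): executeQueryAsync also accepts a second, failure callback, which is a handy place to spot any remaining conflicts or other errors. For example, the final call in updateListitemAfterData could become:

ctx.executeQueryAsync(function () {
    console.log("New value: ", item.get_item(statusField));
}, function (sender, args) {
    // args.get_message() explains why the request failed, e.g. a version conflict
    console.log("Update failed: " + args.get_message());
});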

So far the above solution appears to be working for me, with no version conflicts. :)

posted on Friday, 29 June 2012 12:12:47 (GMT Standard Time, UTC+00:00)  #    Comments [0]