Quick & Easy: Create a Folder and Assign a Content Type (Or, Have Your KPIs and Eat Them Too)

In order to work around a KPI problem I wrote about here, I did some testing and discovered that KPIs work against folders with meta data the same way they work against documents or list items.  I proved it out by creating a new content type based on the folder content type and then adding a few fields.  I created some indicators and confirmed that KPIs work as expected.  This was welcome news.  It’s not perfect, because the drill-down you get from the KPI against the folders is not exactly what you want.  This isn’t too much of a drawback in my case because 1) the end users don’t know any better and 2) the drill-down goes to a folder.  They click the folder name and they are at the item.  It’s two clicks instead of one, which isn’t the end of the world.

This flowed nicely with the work I was doing.  I am creating a folder for every document that gets uploaded, via an event receiver.  As a result, it’s a piece of cake to keep the parent folder’s meta data in sync with the KPI-driven meta data from the file itself, since the plumbing is already in place.  This allows me to have my KPIs and eat them too 🙂

I modified the event receiver to add the folder and then set this new folder’s content type to my custom KPI-friendly content type.  This bit of code did the trick:

 SPFolderCollection srcFolders = targetWeb.GetFolder("Documents").SubFolders;

 // Create a folder named after the new item's ID, then stamp the folder
 // with the custom KPI-friendly content type.
 SPFolder addedFolder = srcFolders.Add(properties.ListItem.ID.ToString());
 SPContentTypeId kpiCT = new SPContentTypeId("0x0120002A666CAA9176DC4AA8CBAA9DC6B4039F");
 addedFolder.Item["Content Type ID"] = kpiCT;
 addedFolder.Item.Update();

To locate the actual Content Type ID, I accessed that content type via site settings and copy/pasted it from the URL as shown:

[Screenshot: the content type ID appears in the site settings URL]
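If you prefer to grab it programmatically, the ID is just the value of the ctype query string parameter on the ManageContentType.aspx URL.  Here’s a minimal sketch of pulling it out of a copied URL (the URL below is hypothetical; only the content type ID is the real one from this post):

```csharp
using System;

// Hypothetical URL copied from the Manage Content Type page under site settings.
string url = "http://moss/sites/demo/_layouts/ManageContentType.aspx" +
             "?ctype=0x0120002A666CAA9176DC4AA8CBAA9DC6B4039F";

// The content type ID is the value of the "ctype" query string parameter.
string query = url.Substring(url.IndexOf('?') + 1);
string ctype = null;
foreach (string pair in query.Split('&'))
{
    string[] parts = pair.Split('=');
    if (parts[0] == "ctype")
        ctype = parts[1];
}

Console.WriteLine(ctype);  // 0x0120002A666CAA9176DC4AA8CBAA9DC6B4039F
```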


Subscribe to my blog!


Quick and Easy: Get the SPFolder of an SPListItem in an Event Receiver

I hate to admit it, but I struggled with this one all day.  My event receiver needs to update a field of its parent folder.  This little bit shows how to do it:

        private void UpdateParentFolder(SPItemEventProperties properties)
        {
            // Reach the parent folder through the item's File;
            // properties.ListItem.Folder is null here.
            SPFolder thisItemFolder = properties.ListItem.File.ParentFolder;

            thisItemFolder.Item["ZZ Approval Status"] = "Good news, everyone!";
            thisItemFolder.Item.Update();
        } // UpdateParentFolder

In this case, I’m working with a document library and the properties are coming from an ItemAdded event.

The trick is that you can’t get the SPFolder of the item directly from the item itself (i.e. properties.ListItem.Folder is null).  Instead, go to the list item’s associated File and get the File’s folder.


Yet Another Event Receiver Debug Trick

I’m sure I’m not the first person to come up with this.  However, I haven’t noticed anyone publish a trick like this since I started paying close attention to the community last July.  So, I thought I’d post this quick and easy debug tip.

I’m working on an event receiver that started to generate this error in the 12 hive:

Error loading and running event receiver Conchango.xyzzyEventReceiver in xyzzy, Version=1.0.0.0, Culture=neutral, PublicKeyToken=blahbalhbalh. Additional information is below.  : Object reference not set to an instance of an object.    

I didn’t know where I had introduced this bug because I had done too many things in one of my code/deploy/test cycles. 

I tried this solution to get my pdb in there with hopes that SharePoint’s 12 hive would show the stack trace, but no luck.  I don’t know if it’s even possible; if someone does know, please let me know 🙂

I know it’s possible to write your own log messages to the 12 hive.  Frankly, I wanted something a little less scary and quicker to implement.

It occurred to me that I could at least get some basic trace information by catching and re-throwing generic exceptions like this:

  try
  {
    UpdateEditionDate(properties);
  }
  catch (Exception e)
  {
    // Re-throw with enough context to pinpoint the failing handler in the log.
    throw new Exception("Dispatcher, UpdateEditionDate(): Exception: [" + e.ToString() + "].");
  }

This showed up in the 12 hive thusly:

Error loading and running event receiver Conchango.xyzzyEventReceiver in xyzzy, Version=1.0.0.0, Culture=neutral, PublicKeyToken=blahblahblah. Additional information is below.  : Dispatcher, UpdateEditionDate(): Exception: [System.NullReferenceException: Object reference not set to an instance of an object.     at Conchango.xyzzyManagementEventReceiver.UpdateEditionDate(SPItemEventProperties properties)     at Conchango.xyzzyManagementEventReceiver.Dispatcher(SPItemEventProperties properties, String eventDescription)].

That gave me all the detail I needed to track down that particular problem and I expect to use it a lot going forward.
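Since every handler gets the same try/catch treatment, the pattern can be folded into a little helper that tags the exception with the handler’s name automatically.  This is purely an illustrative sketch (RunStep and the handler name are my own inventions, not part of the SharePoint API):

```csharp
using System;

// Illustrative helper: run one step of the dispatcher and re-throw any
// exception with the step's name prepended, so the 12 hive log message
// pinpoints exactly which handler blew up.
static void RunStep(string stepName, Action step)
{
    try
    {
        step();
    }
    catch (Exception e)
    {
        throw new Exception(stepName + "(): Exception: [" + e + "].");
    }
}

// Usage inside the dispatcher (handler name is hypothetical):
// RunStep("UpdateEditionDate", () => UpdateEditionDate(properties));
```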


Sunday Funny: “NOT FOR EXPORT”

Back around 1998, the company I worked for at the time received some funding to create a new e-commerce product.  We had the full gamut of business requirements to meet.  It had to be fast, easy for end users, flashy, multi-language, etc.  Sad to say, I probably haven’t had as ambitious a set of work to accomplish since those heady days.

This effort pre-dated Microsoft.NET.  Plain vanilla ASP was still somewhat new (or at least very unfamiliar to my company).  "Brick and mortar" companies were doomed.  Doomed!  This is to say that it was pioneering work.  Not Hadron Collider pioneering work, but for us in our little world, it was pioneering work.

We were crazy busy.  We were doing mini POCs almost every day, figuring out how to maintain state in an inherently stateless medium, figuring out multi-language issues, row-level security.  We even had to create a vocabulary to define basic terms (I preferred state-persistent but for some reason, the awkward "statefull" won the day).

As we were madly inventing this product, the marketing and sales people were out there trying to sell it.  Somehow, they managed to sell it into our nightmare scenario.  Even though we were designing and implementing an enterprise solution, we really didn’t expect the first customer to use every last feature we built into the product on day zero.  This customer needed multi-language support and a radically different user interface from the "standard" system, but with the same business logic.  Multi-language was especially hard in this case, because we had always focused on Spanish or French, but here it was Chinese (a double-byte character set that required special handling given the technology we used).

Fast forward a few months and I’m on a Northwest Airlines flight to Beijing.  I’ve been so busy preparing for this trip that I have almost no idea what it’s like to go there.  I had once read a book about an American who had been in China for several years and had learned the language.  One day he was walking the city and asked some people for directions.  The conversation went something like this:

  • American: "Could you tell me how to get to [XX] street?"
  • Chinese: "Sorry, we don’t speak English".
  • American: "Oh, well I speak Mandarin." and he asked them again in Chinese, but more clearly (as best he could).
  • Chinese: Very politely, "Sorry, we don’t speak English".

The conversation went on like that for a bit and the American gave up in frustration.  As he was leaving them he overheard one man speaking to the other, "I could have sworn he was asking for directions to [XX] street."

I had picked up a few bits and pieces of other China-related quasi-information and "helpful advice":

  • A Korean co-worker told me that I needed to be careful of the Chinese because "they will try to get you drunk and take advantage of you" in the sense of pressuring me into bad business decisions.
  • We were not allowed to drive cars (there was some confusion as to whether this was a custom, a legal requirement or just the client’s rule).
  • There were special rules for going through customs.
  • We were not allowed to use American money for anything.
  • You’re not supposed to leave tips.  It’s insulting if you do.

And finally, I had relatively fresh memories of the Tiananmen massacre.  When I was in college, I remember seeing real-time Usenet postings as the world looked on in horror.

In short, I was very nervous.  I wasn’t just normal-nervous in the sense that I was delivering a solution that was orders of magnitude more complicated than anything I had ever done before.  I was also worried about accidentally breaking a rule that could get me in trouble.

I’m on this 14-hour flight and even though it was business class, 14 hours is a damned long time.  There are only so many ways to entertain yourself by reading, watching movies or playing with the magnetized cutlery.  Even a really good book is hard to read for several hours straight.

Eventually, I started to read the packaging material on a piece of software I was hand-carrying with me to the client, Netscape’s web server.  I’m reading the hardware/software requirements, the marketing blurbs, looking at the pretty picture and suddenly, I zero in on the giant "NOT FOR EXPORT" warning, something about 128 bit encryption.  I stuffed the box back into my carry bag, warning face-down (as if that would have helped) and tried to keep visions of Midnight Express out of my head. 

Looking back on it now, I should have been worried, if at all, when I left the U.S., not when I was entering China 🙂  Nothing untoward happened and I still consider that to be the best and most memorable business trip I’ve had the pleasure of making.


Solution: SPQuery Does Not Search Folders

This past week I was implementing an "evolving" solution for a client that uses BDC and SPQuery, and I ran into some difficulty using SPQuery against a document library containing folders.  Bottom line: set the query’s ViewAttributes to Scope="Recursive".

My scenario:

  • On Monday, I upload a document and supply some meta data.
  • The following week, I upload a new document.  Much of this new document’s meta data is based on the document I uploaded on Monday (which we call the "master document").
  • We’ve created a web service façade that provides a BDC-friendly interface to the list so that users can easily locate that Monday document via a title search.
  • A BDC data column provides a friendly user interface.  (This is part of my attempt at using BDC for a more friendly Lookup column).

The final BDC façade service uses a query like this to do the lookup:

 // Used U2U tool to assist in generating this CAML query.
      oQuery.Query =
        "<Where>";

      if (titleFilter.Length > 0)
        oQuery.Query +=
          "  <And>";

      oQuery.Query +=
        "    <And>" +
        "      <Geq>" +
        "        <FieldRef Name=\"DocumentId\" />" +
        "        <Value Type=\"Text\">" + minId + "</Value>" +
        "      </Geq>" +
        "      <Leq>" +
        "        <FieldRef Name=\"DocumentId\" />" +
        "        <Value Type=\"Text\">" + maxId + "</Value>" +
        "      </Leq>" +
        "    </And>";

      if (titleFilter.Length > 0)
        oQuery.Query +=
          "    <Contains>" +
          "      <FieldRef Name=\"Title\" />" +
          "      <Value Type=\"Text\">" + titleFilter + "</Value>" +
          "    </Contains>" +
          "  </And>";
      oQuery.Query +=
        "</Where>";

During the initial stage of development, this worked great.  However, we introduced folders into the library to solve some problems and suddenly, my BDC picker wouldn’t return any results.  I tracked this down to the fact that the SPQuery would never return any results.  We used folders primarily to allow multiple files with the same name to be uploaded but with different meta data.  When the file is uploaded, we create a folder based on the list item’s ID and then move the file there (I wrote about that here; we’ve had mixed results with this approach but on the whole, it’s working well).  The users don’t care about folders and, in fact, don’t really understand that there are any.  We have configured all the views on the library to show items without regard to folders.

I hit this problem twice as the technical implementation evolved and solved it differently each time.  The first time, I wasn’t using the CONTAINS operator in the query.  Without a CONTAINS operator, I was able to solve the problem by specifying the view in the SPQuery’s constructor.  Instead of using the default constructor:

SPList oList = web.Lists["Documents"];

SPQuery oQuery = new SPQuery();

I instead used a constructor that specified a view:

SPList oList = web.Lists["Documents"];

SPQuery oQuery = new SPQuery(oList.Views["All Documents"]);

That solved the problem and I started to get my results.

I then added the CONTAINS operator into the mix and it broke again.  It turns out that the CONTAINS operator, so far as I can tell, does not work with the view the same way the simpler GEQ/LEQ operators do.  I did some searching and learned that the query’s ViewAttributes should be set to "Recursive", as in:

oQuery.ViewAttributes = "Scope=\"Recursive\"";

That solved the problem for CONTAINS.  In fact, this also solved my original search problem and if I had specified the recursive attribute the first time, I would not have run into the issue again.

The fact that a view-based SPQuery works for some operators (GEQ/LEQ) and not others (CONTAINS), coupled with the fact that KPIs don’t seem to work at all with folder-containing document libraries leads me to believe that SPQuery has some orthogonality issues.

Special Thanks:

  • The good folks at U2U and their query tool.
  • Michael Hoffer’s great "learning by doing" blog post, comments and responses.


MOSS KPI bug? List Indicator Tied to Document Library With Folders

 

UPDATE 02/29/08: I solved this problem by creating a folder and then assigning a content type to the folder which has the meta data I need for the KPIs.  I described that in a little more detail here.

We have implemented a technical solution where users upload documents to a document library.  An event receiver creates a directory and moves the file to that directory (using a technique similar to what I wrote about here).  We’ve successfully navigated around the potential issues caused by event receivers that rename uploaded files (mainly because users never start their document by clicking on "New" but instead create the docs locally and then upload them).

The meta data for these documents includes a Yes/No site column called "Urgent" and another site column called "Status".  We need to meet a business requirement that shows the percentage of "Urgent" documents whose status is "Pending".

This is usually simple to do and I described something very much like this at the SharePoint Beagle with lots of screen shots if you’re interested.

In a nutshell, I did the following:

  • Create a view on the doc library called "Pending".
  • Configure the view to ignore folder structure.
  • Create a KPI List.
  • Create an indicator in the list that points to the doc lib and that "Pending" view.

This simply does not work.  The KPI shows my target (e.g. five urgent documents) but always shows the actual number of urgent documents as zero.  Paradoxically, if you drill down to the details, it shows the five urgent documents in the list.  I created a very simple scenario with two documents, one in a folder and one not.  Here is the screen shot:

[Screenshot: KPI detail view showing two documents but an actual value of one]

The above screen shot clearly shows there are two documents in the view, but the "value" is one.  The "CamlSchema" document with the blank Document Id is in the root folder; the other is in a folder named "84".

It appears to me that even though you specify a view, the KPI doesn’t honor the view’s "show all items without folders" setting and instead confines itself to the root folder.

If I’m wrong, please drop me a line or leave a comment.


SPD Workflow “Collect Data From A User”: Modify the Generated Task Form

I’m working on a project that uses five different SharePoint Designer workflows to handle some document approvals.  SPD provides the "collect data from a user" action so that we can prompt the user for different bits of information, such as whether they approve it, some comments and maybe what they had for dinner the other night.

The forms are perfectly functional.  They are tied to a task list as a content type.  They are 100% system-generated.  This is their strength and weakness.  If we can live with the default form, then we’re good to go.  However, we don’t have too much control over how SPD creates the form.  If we don’t like that default behavior, we need to resort to various tricks to get around it (for example, setting priority on a task). 

I needed to provide a link on these task forms that opens the view properties page (dispform.aspx) of the "related item" in a new window.  This provides one-click access to the meta data of the related item.  This is what I mean:

[Screenshot: task form with a link to the related item’s properties]

Thankfully, we can do that and it’s not very hard.  Broadly speaking, fire up SPD, navigate to the directory that houses the workflow files and open the ASPX file you want to modify.  These are just classic XSL transform instructions and if you’ve mucked about with itemstyle.xsl, search or other XSL scenarios, this will be easy for you.  In fact, I found it generally easier, since the generated form is somewhat easier to follow than a search core results web part (or the nightmarish CQWP).

Of course, there is one major pitfall.  SPD’s workflow editor expects full control over that file.  If you modify it, SPD will happily overwrite your changes given the right set of circumstances.  I did two quick tests to see how bad this could get.  They both presuppose that you’ve crafted a valid SPD workflow that uses the "collect data from a user" step.

Test 1:

  • Modify the ASPX file by hand.
  • Test it (verify that your changes were properly saved and didn’t break anything).
  • Open up the workflow and add an unrelated action (such as "log to history").
  • Save the workflow.

Result: In this case, SPD did not re-create the form.

Test 2:

  • Do the same as #1 except directly modify the "collect data from a user" action.

Result: This re-creates the form from scratch, overwriting your changes.

Final Notes:

  • At least two SPD actions create forms like this: "Collect Data From a User" and "Assign To Do Item".  Both of these actions’ forms can be manually modified.
  • I was able to generate my link to dispform.aspx because, in this case, the related item always has its ID embedded in its URL.  I was able to extract it and then build an <a href> based on it to provide the one-click meta data access feature.  It’s unlikely that your URL follows this rule.  There may be other ways to get the ID of the related item, but I have not had to cross that bridge, so I don’t know if it gets to the other side of the chasm.
  • I didn’t investigate, but I would not be surprised if there is some kind of template file in the 12 hive that I could modify to affect how SPD generates the default forms (much like we can modify alert templates).
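For what it’s worth, the ID extraction mentioned in the notes above is plain string work.  In this project the event receiver files each document in a folder named after its list item ID, so a sketch might look like this (the URL format is this project’s convention, and the function name is mine; your URLs will almost certainly differ):

```csharp
using System;

// Illustrative only: in this project the related item's URL embeds the item's
// ID as its parent folder name, e.g. ".../Documents/84/Contract.doc".
static string ExtractIdFromUrl(string url)
{
    string[] segments = url.Split('/');
    // The folder name is the second-to-last path segment.
    return segments[segments.Length - 2];
}

Console.WriteLine(ExtractIdFromUrl("http://moss/sites/demo/Documents/84/Contract.doc"));  // 84
```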


Are “Unknown Error” Messages Really Better Than a Stack Trace?

I was reading Madhur’s blog post on how to enable stack trace displays and now I’m wondering: why don’t we always show a stack trace?

Who came up with that rule and why do we follow it?

End users will know something is wrong in either case.  At least with a stack trace, they can press control-printscreen, copy/paste into an email and send it to IT.  That would clearly reduce the time and effort required to solve the issue.


Sunday (Embarrassing) Funny: “My Name is Paul Galvin”

A bunch of years ago, my boss asked me to train some users on a product called Results.  Results is an end-user reporting tool.  It’s roughly analogous to SQL Server Reporting Services or Crystal.  At the time, it was designed to run on green tubes (e.g., a Wyse 50 terminal) connected to a Unix box via telnet.

My default answer to any question that starts with "Can you … " is "Yes" and that’s where all the trouble started.

The client was a chemical company out in southern California and had just about wrapped up a major ERP implementation based on QAD’s MFG/PRO.  The implementation plan now called for training power end users on the Results product.

I wasn’t a big user of this tool and had certainly never trained anyone before.  However, I had conducted a number of other training classes and was quick on my feet, so I was not too worried.  Dennis, the real full-time Results instructor, had given me his training material.  Looking back on it now, it’s really quite absurd.  I didn’t know the product well, had never been formally trained on it and had certainly never taught it.  What business did I have training anyone on it? 

To complicate things logistically, I was asked to go and meet someone in Chicago as part of a pre-sales engagement along the way.  The plan was to fly out of New Jersey, go to Chicago, meet for an hour with prospect and then continue on to California. 

Well, I got to Chicago and the sales guy on my team had made some mistake and never confirmed the meeting.  So, I showed up and the prospect wasn’t there.  Awesome.  I pack up and leave and continue on to CA.  Somewhere during this process, I find out that the client is learning less than 24 hours before my arrival that "Paul Galvin" is teaching the class, not Dennis.  The client loves Dennis.  They want to know "who is this Paul Galvin person?"  "Why should we trust him?"  "Why should we pay for him?"  Dennis obviously didn’t subscribe to my "give bad news early" philosophy.  Awesome.

I arrive at the airport and for some incredibly stupid reason, I had checked my luggage.  I made it to LAX but my luggage did not.  For me, losing luggage is a lot like going through the seven stages of grief.  Eventually I make it to the hotel, with no luggage, tired, hungry and wearing my (by now, very crumpled) business suit.  It takes a long time to travel from Newark — to O’Hare — to a client — back to O’Hare — and finally to LAX.

I finally find myself sitting in the hotel room, munching on a Snickers bar, exhausted and trying to drum up the energy to scan through the training material again so that I won’t look like a complete ass in front of the class.  This was a bit of a low point for me at the time.

I woke up the next day, did my best to smooth out my suit so that I didn’t look like Willy Loman on a bad day and headed on over to the client.  As is so often the case, in person she was nice, polite and very pleasant.  This stood in stark contrast to her extremely angry emails/voicemails from the previous day.  She leads me about 3 miles through building after building to a sectioned-off area in a giant chemical warehouse where we will conduct the class for the next three days.  The 15 or 20 students slowly assemble, most of them still expecting Dennis.

I always start off my training classes by introducing myself, giving some background and writing my contact information on the white board.  As I’m saying, "Good morning, my name is Paul Galvin", I write my name, email and phone number up on the white board in big letters so that everyone can see it clearly.  I address the fact that I’m replacing Dennis and I assure them that I am a suitable replacement, etc. I have everyone briefly tell me their name and what they want to achieve out of the class so that I can tailor things to their specific requirements as I go along.  The usual stuff.

We wrap that up and fire up the projector.  I go to erase my contact info and … I had written it in permanent marker.   I was so embarrassed.  In my mind’s eye, it looked like this: There is this "Paul Galvin" person, last minute replacement for our beloved Dennis.  He’s wearing a crumpled up business suit and unshaven.  He has just written his name huge letters on our white board in permanent marker.  What a sight! 

It all ended happily, however.  This was a chemical company, after all.  A grizzled veteran employee pulled something off the shelf and, probably in violation of EPA regulations, cleared the board.  I managed to stay a half day ahead of the class throughout the course and they gave me a good review in the end.  This cemented my "pinch hitter" reputation at my company.  My luggage arrived the first day, so I was much more presentable on days two and three.

As I was taking the red eye back home, I was contemplating "lessons learned".  There was plenty to contemplate.  Communication is key.  Tell clients about changes in plan.  Don’t ever check your luggage at the airport if you can possibly avoid it.  Bring spare "stuff" in case you do check your luggage and it doesn’t make it.  I think the most important lesson I learned, however, was this: always test a marker in the lower left-hand corner of a white board before writing, in huge letters, "Paul Galvin".


Perspectives: SharePoint vs. the Large Hadron Collider

Due to some oddball United Airlines flights I took in the mid-’90s, I somehow ended up with an offer to transform "unused miles" into about a dozen free magazine subscriptions.  That is how I ended up subscribing to Scientific American magazine.

As software / consulting people, we encounter many difficult business requirements in our careers.  Most of the time, we love meeting those requirements and in fact, it’s probably why we think this career is the best in the world.  I occasionally wonder just what in the world I would have done with myself if I had been born at any other time in history.  How terrible would it be to miss out on the kinds of work I get to do now, at this time and place in world history?  I think: pretty terrible.

Over the years, some of the requirements I’ve faced have been extremely challenging to meet.  Complex SharePoint stuff, building web processing frameworks based on non-web-friendly technology, complex BizTalk orchestrations and the like.  We can all (hopefully) look proudly back on our career and say, "yeah, that was a hard one to solve, but in the end I pwned that sumbitch!"  Better yet, even more interesting and fun challenges await.

I personally think that my resume, in this respect, is pretty deep and I’m pretty proud of it (though I know my wife will never understand 1/20th of it).  But this week, I was reading an article about the Large Hadron Collider in my Scientific American magazine and had one of those rare humbling moments where I realized that despite my "giant" status in certain circles or how deep I think my well of experience, there are real giants in completely different worlds. 

The people on the LHC team have some really thorny issues to manage.  Consider the Moon.  I don’t really think much about the Moon (though I’ve been very suspicious about it since I learned it’s slowing the Earth’s rotation, which can’t be a good thing for us Humans in the long term).  But, the LHC team does have to worry.  LHC’s measuring devices are so sensitive that they are affected by the Moon’s (Earth-rotation-slowing-and-eventually-killing-all-life) gravity.  That’s a heck of a requirement to meet — produce correct measurements despite the Moon’s interference.

I was pondering that issue when I read this sentence: "The first level will receive and analyze data from only a subset of all the detector’s components, from which it can pick out promising events based on isolated factors such as whether an energetic muon was spotted flying out at a large angle from the beam axis."  Really … ?  I don’t play in that kind of sandbox and never will.

Next time I’m out with some friends, I’m going to raise a toast to the good people working on the LHC, hope they don’t successfully weigh the Higgs boson particle and curse the Moon.  I suggest you do the same.  It will be quite the toast 🙂
