Sunday 8 June 2008

Useful SSPU logging

Well, I haven't started rewriting the hyperlink wizard yet, my apologies. I've been wrestling with SSPU, which has repeatedly failed to publish our site over the last two weeks. In an effort to work out what the hell was going on, we turned on all the logging, right up to "debug for all". It was too much to be helpful... so after reviewing the logs I was able to come up with some more meaningful logging levels.

PRIMARY LOG (FILE)
default: ERROR
syndicator: INFO
date-time: CRITICAL
analyser: CRITICAL
replicator: INFO
packagemanager: INFO
ice.cache: VERBOSE
delivery: INFO
delivery.ice: VERBOSE

SECONDARY LOG (DATABASE)
default: ERROR

Ok, let's start with the database log. I know it sounds counter-intuitive, but database logging slows the software down tremendously - it has to read the entire database just to display the SSPU status page (so make sure you purge often!). When you're viewing the SSPU website, the only thing you care about is errors. Ignore everything else.

The primary log file is what you turn to when there is a problem. Set Syndicator to INFO to report the overall status of SSPU; it also includes info about database purges. Set Analyzer to CRITICAL to ignore messages about malformed links (they won't affect the publish anyway). Set Replicator to INFO - this reports which files are actually being processed, along with final summaries and error counts. Set Delivery to INFO to report when a job is pushed to the subscription client/FTP (the subagent). Set Delivery.ice to VERBOSE to report the status of the subagent's delivery and see an actual confirmation message that the publish succeeded. Finally, there is a meaningless bug in the interface, so I set date-time to CRITICAL so it doesn't get reported.

There are two additional settings you might consider. Set Packagemanager to INFO to see exactly which files have been selected (or skipped) for update. Set Ice.cache to VERBOSE in case there are undelivered files floating around - it reports which items are still waiting to be delivered.

One final important tip: Delivery.ice may report warnings about ICE 501 errors. These can safely be ignored. They simply mean that SSPU asked for confirmation that the job was finished, but the subscription client (subagent) was too busy processing the job to respond. SSPU will keep resending the job until it receives a 200 confirmation, but the subagent only processes the first push and ignores the rest. Once the subagent finishes, it sends the confirmation, SSPU stops pushing, and the subagent discards all the repeated pushes. This is the intended behaviour.
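
If that back-and-forth sounds odd, here's a tiny sketch of the pattern in Java. It's purely illustrative - none of it is SSPU's actual code, and the class name, method names and numbers are all made up - it just shows the "keep pushing until you get a 200" loop described above.

    // Purely illustrative - not SSPU code; names and numbers are made up.
    class IceRetrySketch {
        static int pushesSeen = 0;

        // Pretend subagent: answers 501 while it is still "busy", 200 once "done".
        // (A real subagent keeps working on the first push and ignores the repeats.)
        static int subagentReceive(String jobId) {
            pushesSeen++;
            return (pushesSeen >= 4) ? 200 : 501;
        }

        public static void main(String[] args) throws InterruptedException {
            int attempt = 0;
            while (true) {
                attempt++;
                int status = subagentReceive("publish-job");
                System.out.println("push " + attempt + " -> " + status);
                if (status == 200) break;   // confirmed - stop resending
                Thread.sleep(100);          // back off a little, then push the same job again
            }
            System.out.println("publish confirmed after " + attempt + " pushes");
        }
    }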

Oh and the problem with our publish? Too many broken links!

Tuesday 3 June 2008

The Achilles heel of Oracle UCM - hyperlinks

UCM is a great document management system, but its websites are sure awkward. The most obvious blunder was the editor, so thank god that's been fixed! That leaves us with the biggest weakness in the system - hyperlinks.

Let's look at exhibit A - the Hyperlink Wizard. The new UCM sports a brand-new, fully AJAX interface for creating links. There's an amazing amount of code and effort put into it - just to make it work exactly like the old one! Fair dinkum guys, it took a PhD* to understand the old one, so why replicate its horrible functionality? Did you expect your users to be so fully engrossed in the old way that they would be incapable of doing it any more simply? Why waste your time reinventing the wheel when it is just as square as the old one? I wonder if they have ever tried to create a link...

The first thing it does is ask, "Do you want to link to a section, file or URL?" What? Why do I have to choose? What's the difference? I want to link to another web page... who knows? Click on file. "Do you want the current item, existing file from server, upload a file, new file, or new Word doc?" Hmm, I'm editing this link, so I don't want the current item (duh). Why would I create a Word doc? I'm trying to make a link to a web page! I guess I'll have to choose existing; good thing I already know the content ID. Once the initial search results finally load, I have to search again using my content ID. Ok, I have selected my content item. Now it asks, "Use default web section metadata, choose a section or just link to URL?" Do I care? What does a section mean anyway? I think I want a URL, but I know my page is already used in website X, so I'll drill down into that website until I find a "section" that sounds like my page. Click click click click click. Click next and it displays some ugly code, calls it my "link URL" (I thought I chose a section!) and asks me to confirm. Hmm, that looks nothing like the link I expected to see. Click finish and hope for the best.

Wow, what a pointlessly verbose experience (and I even removed a step!). Steve Krug says, "DON'T MAKE ME THINK!", so my contributors skip all that by simply pasting the published URL into the first "URL" field. The system, however, does not recognise published URLs, decides there are no links to that page, and deletes it from the published site. D'oh!

And take a look at the URLs it publishes: every one ends in some seemingly random number! Why? Because the system must give every page a unique ID. C'mon guys, most free CMS software was generating human-sounding URLs before Web 2.0 even happened. Is it really that hard?
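
In case "human-sounding" sounds hand-wavy, this is roughly all it takes - a made-up Java sketch, nothing to do with UCM's internals, with an invented page title and path - to turn a title into a readable URL slug:

    // A made-up illustration of slug-style URLs - not how UCM builds its links.
    class SlugSketch {
        static String slugify(String title) {
            return title.toLowerCase()
                        .replaceAll("[^a-z0-9]+", "-")   // collapse spaces/punctuation into hyphens
                        .replaceAll("(^-|-$)", "");      // trim stray hyphens at either end
        }

        public static void main(String[] args) {
            System.out.println("/news/" + slugify("Enrolment Dates for Semester 2"));
            // prints: /news/enrolment-dates-for-semester-2 - no random numbers in sight
        }
    }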

And so ends another rant. Hopefully my next post will be about a replacement Hyperlink Wizard that I have written for you to download and enjoy.

* I work at a uni; my contributors are academics, and they screw up the links all the time.