Momentum 2008 – CenterStage

November 13, 2008 at 11:00 pm | Posted in D6, Momentum | 1 Comment

Of course the star of the show was CenterStage. If you don't know what CenterStage is (where have you been?), in a single sentence: it's the next generation of Documentum client, providing Web 2.0 features, a significantly different customisation model (compared with WDK) and a no-cost/low-cost licensing model.

I won't go into too much detail about the features except to say they include basic content services, personal spaces, team spaces, blogs, wikis, RSS, tagging and faceted search. The timeline given was: 1.0 to be released in April 2009 (a beta version is available on the download site), 1.5 to follow after that, and a D7 version by the end of 2009.

What did interest me were some of the details of the architecture and development environment. This is a web client that implements rich-client functionality in JavaScript. CenterStage uses ExtJS v2.2, a library with powerful DHTML manipulation facilities. All the back-end logic is provided via DFS, which is accessed through a technology called DWR v2.0. DFS provides a SOAP/WS-* interface, which is difficult to call from Ajax; DWR (Direct Web Remoting) solves this problem by exposing server-side Java objects directly to browser JavaScript. Take a look at the Wikipedia article on DWR; it's a fascinating idea.
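As a rough illustration of the DWR model, a remote call from the browser looks something like the sketch below. The service and method names are invented for illustration, not actual CenterStage or DFS identifiers.

```typescript
// DWR generates a JavaScript proxy for each server-side Java class it
// exposes. This declaration merely types that proxy so the example compiles
// as TypeScript; 'ObjectService' and 'getProperties' are hypothetical names.
declare const ObjectService: {
  getProperties(
    objectId: string,
    callback: (props: Record<string, string>) => void
  ): void;
};

// The call travels as a lightweight HTTP request rather than a SOAP
// envelope; DWR marshals the argument, invokes the Java method on the
// server, and fires the callback asynchronously with the result.
ObjectService.getProperties("0900000180001234", (props) => {
  console.log(props["object_name"]);
});
```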

The UI is composed from numerous separate components which, in concept at least, are like SharePoint Web Parts. Since each component needs to be rendered on the page separately, I wondered whether a page with, say, 20 components would need 20 separate network calls to display. In a high-latency network environment this could be a performance nightmare. Apparently the DWR library allows requests to be batched, which means a page with numerous components can be displayed using a much smaller number of network requests.
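A minimal sketch of what that batching looks like with the DWR 2.0 client API; dwr.engine.beginBatch()/endBatch() are real DWR calls, while the component services here are placeholders:

```typescript
// dwr.engine ships with the DWR client library; the component services are
// hypothetical stand-ins for whatever each CenterStage component invokes.
declare const dwr: {
  engine: { beginBatch(): void; endBatch(): void };
};
declare const BlogComponent: { load(cb: (html: string) => void): void };
declare const WikiComponent: { load(cb: (html: string) => void): void };
declare function render(panel: string, html: string): void;

dwr.engine.beginBatch();
BlogComponent.load((html) => render("blog", html)); // queued, not sent yet
WikiComponent.load((html) => render("wiki", html)); // queued, not sent yet
dwr.engine.endBatch(); // all queued calls go out in a single HTTP request
```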

Momentum 2008 – XML Store

November 13, 2008 at 8:26 am | Posted in Architecture, D6, Momentum, Performance | 2 Comments

On Tuesday and Wednesday I attended a load more sessions covering XML Store, CenterStage, Composer, SharePoint and Web Content Management. In the next few posts I'll share some of my thoughts and impressions, starting with XML Store.

For those that don't know, EMC purchased a company called X-Hive a while back. X-Hive had an XML database product, and that has now been integrated into the full Content Server stack. The easiest way to picture this is to take the old view of the repository as consisting of a relational database and a file system, and add a third element: the XML Store.

From 6.5 (possibly SP1, I don't remember) all XML content is stored in the XML Store. The XML Store is built around the many established XML standards, such as XQuery, XSL and the XQuery full-text standard.
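To give a flavour of what XQuery access looks like, here is a standard FLWOR query held as a string for submission through whatever query API the XML Store exposes. The collection name and element structure are invented for illustration.

```typescript
// A standard XQuery FLWOR expression. The "press-releases" collection and
// the element names are purely illustrative, not a real repository layout.
const query = `
  for $release in collection("press-releases")/release
  where $release/region = "EMEA"
  order by xs:date($release/date) descending
  return $release/title
`;
```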

The XML is not stored in the usual textual XML format but in a DOM format, presumably to allow various types of index to be implemented and query access patterns to be optimised. The performance claims for the database are impressive, although they need to be taken with a pinch of salt: as with all benchmarking, vendors target specific goals in the benchmark, and your real-life workloads could be very different. If you are expecting high throughput from an application using the XML Store, I suggest you put some work into designing and executing your own benchmarks.
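For example, a minimal latency harness along the lines below will tell you more than a vendor datasheet. Here runQuery is a placeholder for whatever call your application actually makes against the XML Store.

```typescript
// A minimal benchmark sketch. 'runQuery' is a hypothetical stand-in for
// your own workload; run it against representative data volumes before
// trusting any throughput numbers.
async function benchmark(
  runQuery: () => Promise<void>,
  iterations = 1000
): Promise<void> {
  const latencies: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = Date.now();
    await runQuery();
    latencies.push(Date.now() - start);
  }
  latencies.sort((a, b) => a - b);
  console.log(`median: ${latencies[Math.floor(iterations * 0.5)]} ms`);
  console.log(`p95:    ${latencies[Math.floor(iterations * 0.95)]} ms`);
}
```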

In addition to indexes there is also a caching facility. This was only discussed at a high level; however, just as relational database performance experts made careers in the 1990s out of sizing the buffer cache properly, we may see something similar with XML database installations, some of which will suffer poor performance as a result of under-sized hardware and mis-configuration. As always, don't expect this to just work without a little effort and research.

One other point I should make is that the XML Store is not limited to the integrated Content Server implementation; you can also install instances of it separately. For example, the forthcoming Advanced Site Caching Services product provides a WebXML target. This is essentially an XML Store database installed alongside the traditional file system target that you currently get with SCS. You can then use the published XML to drive all sorts of clever dynamic and interactive web sites.

Momentum 2008 part 1

November 11, 2008 at 4:22 pm | Posted in Architecture, D6, Momentum | 2 Comments

All this week I am at Momentum in Prague. It's a great opportunity to catch up with Documentum employees, partners and users, and also to see what is going on in the Documentum world.

I arrived yesterday morning and attended the SharePoint Integration product advisory forum, run by Erin Samuels and Andrew Chapman. The session centred on a number of topics relating to SharePoint-Documentum integration.

First of all there was a round-table on the kinds of integration scenarios people were facing. Interestingly, and reassuringly, there seem to be far fewer 'maverick' implementations, as Andrew called them. A maverick implementation is where SharePoint is installed as a generic application that any department or team can simply configure and use, without any guidance or direction from IT. This leads to silos of information and a lack of any control over it. Whilst departments like this quick and easy delivery of applications, it stores up problems for the organisation, which is no longer able to utilise or manage its data enterprise-wide.

Andrew then talked about a forthcoming product called Journalling. Whilst I don't think the naming is great (maybe that's not how it will be sold, but it was certainly the name used for the technology), the principle is very powerful. It uses the Microsoft-provided SharePoint EBS (External BLOB Storage) interface to let you redirect where SharePoint stores its data. By default SharePoint stores content and metadata in a SQL Server database; each SharePoint instance requires its own SQL Server instance (apparently), and this can easily become a big data management problem. Furthermore, as SQL Server stores all content as BLOBs (Binary Large Objects), there can be scalability issues.

With the Documentum EBS implementation, content is stored (transparently to the user) in a Documentum repository rather than SQL Server; there is just a 'journal' entry in SharePoint representing the object. This provides all kinds of useful benefits, such as being able to leverage Documentum's storage scalability, EMC hierarchical storage management, de-duplication and so on.
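To make the mechanism concrete: the real EBS hook on the SharePoint side is an unmanaged COM interface (ISPExternalBinaryProvider); the sketch below mirrors its shape in TypeScript purely for illustration.

```typescript
// Illustrative only: this TypeScript interface mirrors the shape of
// SharePoint's unmanaged EBS provider interface to show how content can be
// redirected to a Documentum repository.
interface ExternalBinaryProvider {
  // Called when SharePoint saves content: store the BLOB externally (here,
  // in Documentum) and return an opaque ID. SharePoint keeps only that ID,
  // the 'journal' entry, in SQL Server.
  storeBinary(content: Uint8Array): string;

  // Called when SharePoint reads content back: use the opaque ID to fetch
  // the BLOB from the external store.
  retrieveBinary(blobId: string): Uint8Array;
}
```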

At this point there was a big discussion around a point introduced by Andrew. Since the data is now stored in Documentum, we can access it via Documentum clients; for example, your average user might be creating content in SharePoint across the organisation, but you may have power users who need the full power of Documentum interfaces to work with the data. But what operations should Documentum clients be allowed on SharePoint-originated data? Read and other operations that don't modify the content or metadata are fine, but should we allow update or delete access? If so, additional work is required, as right now an update made outside of SharePoint would cause SharePoint to throw an error the next time a user accesses the object. Predictably, there was an almost equal three-way split between those who wanted no Documentum access, read-only/no-modify access and total control.

Later on I got to meet up with some people that I only know from the Documentum forums and blogs: Johnny Gee, Erin Riley and Jorg Kraus. It was great to finally get to speak to these guys after years of interacting over the web.
