Category: Alfresco Technical Tips

Website Snapshots with Crafter WEM and Alfresco

Posted by on April 04, 2013

Every now and then we see a requirement from customers who specify that a snapshot of the entire repository for the website be maintained on a per-deployment or daily basis, in order to enable an audit of the entire site at a given point in time or to allow a rollback of the entire site to a given point in time.

The versioning system within Alfresco’s core repository does not natively support snapshots.  While it is possible to model this capability within the system through custom coding, the solutions tend to be complex while the demand for the functionality remains low.   In any technical solution it is important to keep things as simple as possible.  Whenever you run into a narrow requirement that threatens to complicate your systems you need to take a step back and determine:

  • Is the requirement truly necessary?
  • Can the requirement be changed slightly to work better with the existing technology?
  • If the requirement is necessary, can you integrate with a 3rd party that solves the problem directly without complicating the overall solution?

In our example we are left with the third option.  Given this, the first question is: are there any existing, robust, affordable solutions for maintaining snapshots of an entire collection of “assets” that already exist and can be cleanly integrated?

The answer is “Yes.” Snapshot capabilities, while not very common in web content management systems, are extremely common in source code control systems like SVN, Git and others.  All of these off-the-shelf source control systems are extremely robust, free, open source platforms.

Following the idea that we’re going to integrate with an existing source control system to meet the snapshot requirement, the second question is: can our WCM environment represent all content and metadata as files?

Again, with Crafter WEM the answer is yes.  All content and metadata are represented as XML files or raw native formats such as images, rich documents, videos etc.

Our third question is obviously: how does the integration work?

Let’s review our business and technical requirements:

  • We want a snapshot of the content that went LIVE on our site, on either a per-deployment or a daily basis.
  • We need to be able to audit the entire site at a given point in time if an investigation is required (by law or organizational policy).
  • We want an integration with a source control system that does not complicate our overall WCM software and solution.
  • Our solution must cover all content and metadata.

Now let’s review a potential solution for these requirements.

Alfresco WCM repository snapshot architecture

Getting published content into the source control system

Because Crafter WEM is a decoupled system, where the authoring and delivery systems are independent and separated by an approval process and a deployment, multi-channel, multi-endpoint deployment is a natural part of the architecture.  As you can see in the illustration above, we have a typical deployment to our web infrastructure in the DMZ, and we also have a deployment agent inside our corporate firewall which will put content and metadata in a disk location managed by SVN, Git, or some other source control management system.  All metadata and content can be deployed to the source control endpoint.

Delivering content and metadata to the source control system is a simple and robust solution.

Creating a snapshot

To create a snapshot we must perform a check-in (AKA commit) to the source control system.

If we want to create a snapshot on every single deployment, we can use a simple callback in our deployment target configuration to perform a check-in via a command line operation.

Code Listing 1:
The configuration below demonstrates how to invoke a command line operation from the deployment agent after content has been received.

<beans xsi:schemaLocation="...">
    <bean id="MyTarget" init-method="register">
        <property name="name"><value>sample</value></property>
        <property name="manager" ref="TargetManager"/><property name="postProcessors">
            <list><ref bean="commitSiteOnDeploy"/></list>
        </property>
        <property name="params">
            <map>
                <entry key="root"><value>target/sample</value></entry><entry key="contentFolder"><value>content</value></entry>
                <entry key="metadataFolder"><value>meta-data</value></entry>
            </map>
        </property>
    </bean>
    
    <bean id="commitSiteOnDeploy">
        <property name="command" value="svn  "/>
        <property name="arguments">
            <map>
                <entry key="ci" value="PATH TO DEPLOYMENT"/>
            </map>
        </property>
    </bean>
</beans>

However, if we want to create a daily (or any other time-based) snapshot, we can accomplish this with a simple operating system scheduled task that performs the check-in via a command line operation.
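For example, on a Linux host a nightly cron entry could perform the commit.  This is a minimal sketch only: the working-copy path, schedule and commit message are illustrative, and deletions would still need to be handled (for example by inspecting svn status) before the commit.

# Commit the deployed content and metadata to SVN every night at 01:00 (path is illustrative)
0 1 * * * cd /opt/crafter/deployer/target/sample && svn add --force . -q && svn commit -m "Nightly site snapshot"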

Reviewing a snapshot for audit

Reviewing a snapshot in the event of an audit is as simple as checking out the particular version from the snapshot repository and launching a Crafter Engine instance on top of the checked-out content.

Reverting our WCM environment to a particular version

Reversion of our entire site would require the following steps:

  • Check out the particular version
  • Either create a new WEM project based on the checked-out content and re-deploy to your targets
  • OR import the checked-out site into your existing WEM project and then deploy to your targets.  Take care to analyze your deployment history for deletes, as these will need to be managed if you choose to revert over top of your existing project.

Performing Diff Operations

It’s possible to perform a diff operation between versions at any time. Almost all modern source control repositories support version diff functionality.

If you wish to compare the head of your repository with a particular version you can take the following steps:

  • Check out the trunk of your snapshot repository
  • Copy your preview directory into your snapshot repository check-out
  • Use the source control repository to perform the diff operation (a sketch of the commands follows below)
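With SVN, those steps might look something like the following.  This is a rough sketch; the repository URL and local paths are illustrative.

svn checkout https://svn.example.com/repos/site-snapshots/trunk snapshot-checkout
cp -R /opt/crafter/preview/* snapshot-checkout/
cd snapshot-checkout
svn diff

To compare two recorded snapshots instead of the preview against HEAD, svn diff -r REV1:REV2 can be run directly against the repository.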

Summary

Repository snapshots are an important requirement for a small number of organizations.  Crafter WEM is able to support these requirements with simple, robust integrations through its deployment architecture and readily available, affordable source control systems.

Alfresco Cloud’s Key Capabilities

Posted by on March 15, 2013

SaaS Based Collaboration

The first aspect and most basic use of Alfresco Cloud is as a cloud hosted collaboration application for your organization.  Alfresco Cloud is multi-tenant and can host as many organizations (which Alfresco calls networks) and project spaces within each of those networks as is needed.

In the illustration below you can see two independent organizations each with several project teams working independently on the Alfresco Cloud.

 

If you need to spin up a simple collaboration environment for your department, Alfresco Cloud is a great solution.  Alfresco Cloud is affordable and based on per-user pricing.  There is zero software to install or set up, and you get a ton of really rich collaborative features from document libraries to wikis, calendars, blogs and much more.

Cross-Organization Collaboration

Where things start to get really interesting, however, is with cross-organization collaboration.  With Alfresco Cloud you can manage content between organizations to enable B2B interactions between knowledge workers from the different organizations – again all with zero infrastructure setup.

In the illustration below you can see a project team from each organization collaborating with one another through Alfresco Cloud, with permissions that ensure only the content which should be shared is in fact shared.

Alfresco One: Private – Public Cloud Sync

The thing is that not all content is meant to live in the cloud.  Organizations of all sizes generally have some content they feel needs to be controlled and secured inside the firewall, or, as is often the case, there are integrations with critical business systems that are mandatory, and those integrations are only possible between systems located within the firewall.

With Alfresco Cloud this is no issue.  You can set up and host your own private infrastructure internally, which serves as the system of record and hosts all of your content, including those items which must remain internal.  For content you want to collaborate on with organizations outside the firewall, you can create a synchronization (using Alfresco One) with Alfresco Cloud and synchronize specific content between your organization’s private infrastructure and the cloud to facilitate the collaboration.

In the illustration above we have a private infrastructure on the left and the cloud on the right.  You can see that some project teams are working only against this internal infrastructure while others may work only against the cloud.  And we can see a secure synchronization relationship between our internal infrastructure on the left and the Alfresco Cloud on the right.  This synchronization is enabling our teams to collaborate with one another regardless of whether they are working on public or private infrastructure.

Remote API for the Cloud

And finally, Alfresco Cloud supports a remote application programming interface, or API, which is based on CMIS (Content Management Interoperability Services) and a few additional Alfresco-specific non-CMIS APIs.

This is a real game changer because it means that collaboration no longer has to take place through the user interface.  As we can see here in the diagram, we can enable applications and automated processes to participate in our collaborations.  And because we have a sync between private and public cloud infrastructure, we’re not just talking about cloud based content storage here (which is great in its own right); we also have a very powerful integration platform.
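As a rough illustration of what API-driven participation can look like, the sketch below uses Apache Chemistry OpenCMIS to open a CMIS session and create a document.  The endpoint URL, credentials and folder path are placeholders; Alfresco Cloud’s actual public API endpoint and authentication (it uses OAuth) should be taken from Alfresco’s documentation.

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.data.ContentStream;
import org.apache.chemistry.opencmis.commons.enums.BindingType;
import org.apache.chemistry.opencmis.commons.enums.VersioningState;

public class CloudUploadSketch {
    public static void main(String[] args) {
        // Connection parameters are illustrative placeholders, not Alfresco Cloud's real endpoint.
        Map<String, String> params = new HashMap<String, String>();
        params.put(SessionParameter.ATOMPUB_URL, "https://cloud.example.com/cmis/atom");
        params.put(SessionParameter.USER, "user@example.com");
        params.put(SessionParameter.PASSWORD, "secret");
        params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());

        SessionFactory factory = SessionFactoryImpl.newInstance();
        Session session = factory.getRepositories(params).get(0).createSession();

        // Create a simple text document in a shared project folder (path is hypothetical).
        Folder target = (Folder) session.getObjectByPath("/Sites/shared-project/documentLibrary");
        byte[] bytes = "Hello from an automated process".getBytes(StandardCharsets.UTF_8);
        ContentStream content = session.getObjectFactory().createContentStream(
                "hello.txt", bytes.length, "text/plain", new ByteArrayInputStream(bytes));

        Map<String, Object> props = new HashMap<String, Object>();
        props.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
        props.put(PropertyIds.NAME, "hello.txt");
        target.createDocument(props, content, VersioningState.MAJOR);
    }
}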

When you combine the API and the public/private sync what you gain is infrastructure akin to an integration bus.

 

 

 

Alfresco Cloud is much more than meets the eye

Posted by on February 28, 2013

As many of you know Alfresco introduced its cloud offering almost a year ago. At the time of this writing there are a number of unique ways you can interact with Alfresco Cloud:

  • Collaborative SaaS (Software as a Service) application. Teams can quickly spin up collaborative spaces (in Alfresco’s Share application) and begin working together with zero on-premise software.
  • Members of the cloud can join multiple networks, which enables them to work and collaborate across organizational boundaries.
  • Custom applications can use Alfresco Cloud as a store. You can interact with the cloud through an API (CMIS and Alfresco-specific RESTful APIs).
  • And you can sync content between your on-premise instance of Alfresco and the networks within the cloud that you belong to.

Share in the cloud as a SaaS offering is a pretty obvious play. There is a lot of value in this simple use case for organizations that need good collaboration tools but just don’t have the appetite for or enough user volume to justify hosting their own infrastructure.

When you combine this SaaS offering with the ability to securely and selectively collaborate with other organizations, you are now enabling all kinds of people-oriented B2B interactions that can be extremely difficult when you have a system that is stuck behind a firewall.

Add an API to that, and now it’s not just people-oriented B2B and internal interactions that can take place; it’s automation and rich behavior.  At this point Alfresco in the cloud is no longer an application.  It’s a bus.

Now not all content was meant to live outside the firewall and not all systems can or even should live/reach outside the firewall.  With Alfresco’s “cloud sync” capability Alfresco Cloud closes this gap by allowing organizations to selectively and securely sync specific content between an on-premise instance and the cloud.  This is extremely exciting because it opens up a whole new realm of possibilities for B2B integration and mobile enablement.

If you’re thinking about Alfresco Cloud as a simple collaboration application or a simple cloud based content store, it’s time to rethink.  Alfresco Cloud paired with Alfresco on premise is an extremely exciting hybrid architecture and integration middleware that opens up use cases which have traditionally required dedicated business-to-business infrastructure that was difficult to get approved, let alone set up, and in many cases basically not possible.

On March 14th Rivet Logic will co-host a webinar with Alfresco entitled Using Alfresco’s Hybrid Cloud Architecture for Better Web Content Management, where we will discuss and demonstrate how hybrid architectures can be applied in a WCM (Web Content Management) context to enable collaboration with external partners like agencies, and for integrations with other content services, providers and consumers such as AP, Reuters and the like.  While WCM use cases will be the focus of the conversation, the topic is perfect for anyone interested in learning more about Alfresco hybrid architectures.  See you there!

https://www.alfresco.com/events/webinars/using-alfrescos-hybrid-cloud-architecture-better-web-content-management

Web CMS and Digital Assets: Crafter Rivet / Alfresco Integration with Adobe Photoshop

Posted by on February 21, 2013

Digital assets are a key component of almost all web experience and customer engagement projects. In today’s era of engagement, with all of the additional content targeting, personalization, internationalization and multi-channel publishing, the number and permutations of digital assets associated with any given project are growing rapidly.  This trend will only continue as we move forward.  Content workers (authors, designers, content managers) need to be able to create, locate, modify and manage the growing number of assets easily and efficiently in order to maintain brand quality and deliver projects on time and on budget.

In today’s blog entry we’re going to focus on the creative side of WCM (Web Content Management) and DAM (Digital Asset Management) even though this is only a small portion of the overall set of use cases.

Let’s begin by considering the following example use cases:

  • Create mobile-appropriate image resolution variants
  • Create video stills
  • Imprint watermarks
  • Create thumbnails for galleries and promotional kickers

Each of these use cases is an important ingredient in providing the user with a great experience, but they also introduce a lot of additional work for our content teams.  One of the ways to deal with the large volume of asset creation and manipulation responsibilities is to automate them.  The use cases mentioned above, and many others like them, are perfect candidates for automation.

Crafter Rivet leverages Alfresco’s enterprise content management services for image transformation. With a few simple rules applied at the repository level it’s possible to generate image resolution variants and video stills, apply watermarks, scale and crop thumbnails, and then make these assets available for review by our authors, all in an automated fashion with no additional labor required beyond uploading the canonical assets.

Another important way to help our content teams cope with the sheer volume of digital asset related workload is to make sure our teams are able to work with the very best tools at their disposal.  With today’s modern browsers it is possible to provide a fairly decent set of asset manipulation tools right within the browser.  However, while purely web-based tools have their advantages, they are often slower and much less powerful than the desktop tools serious content contributors are used to working with.

The biggest productivity boosts are gained when we empower our designers and other content workers on our team with rich, native tools that they are already familiar with and work  with on a daily basis.

Adobe’s Creative Suite (which contains tools like Photoshop) is the quintessential software package for image and digital asset creation and manipulation.  Designers are deeply familiar with these tools and are able to leverage their enormous arsenal of capabilities to accomplish a tremendous amount of work in a short amount of time.  The issue that many organizations often face is that while the tools themselves are great, the interfaces between the tools and the systems that ultimately store, manage and deliver the assets are either non-existent, based on manual processes, or clunky.  This gap creates a drag on productivity and introduces room for error.

Fortunately Alfresco, Adobe and Crafter Rivet Web Experience Management have a solution that connects rich, creative desktop tools to your system of record (e.g. the repository) and ultimately to your system of engagement (e.g. the website) in a seamless fashion.  Content creators work right within the rich, local tools that they are familiar and productive with, and those tools are deeply integrated with the repository, which means that all of the organization, policies, metadata extraction, versioning and so on provided by the repository are seamlessly applied and enforced.

Alfresco is a CMIS (Content Management Interoperability Services) compliant repository.  This standards-based interface enables it to communicate with external applications like Adobe’s products in order to interact with and operate on the content, metadata, permissions, versions and so on housed within the repository.  Adobe provides a platform called Adobe Drive which enables its tools to connect in a rich fashion over CMIS to Alfresco.  Once we’ve connected our Adobe tools and our Alfresco repository, authors working within Crafter Studio (the authoring and management component of Crafter Rivet) can see content updates coming from the Adobe tools right in context with the work they are doing, through in-context preview and editing.  They can also interact with that content through the web-based tools, workflow, versioning, metadata capture and publishing capabilities of Crafter Studio.

By closing the integration gap we can now provide powerful tools for productivity and at the same time do so in a way that makes it seamless and easy for our creative teams to collaborate across the entire process.

Click on the video below to see Adobe and Crafter Rivet WEM / Alfresco in action together!

Video of Photoshop altering images in Crafter Rivet Web CMS and Alfresco

 

Crafter Rivet is a 100% open source, Java-based web CMS for web experience management and customer engagement.  Learn more about Crafter Rivet at crafterrivet.org.

Web CMS Content Enrichment with OpenCalais, Crafter Rivet and Alfresco

Posted by on February 15, 2013

Content enrichment is the process of mining content in order to add additional value to it.  A few examples of content enrichment include entity extraction, topic detection, SEO (Search Engine Optimization) and sentiment analysis.  Entity extraction is the process of identifying unique entities like people and places and tagging the content with them.  Topic detection looks at the content and determines, to some probabilistic measure, what the content is about.  SEO enrichment will look at the content and suggest edits and keywords that will boost the content’s search engine performance. Sentiment analysis can determine the tone or polarity (negative or positive) of the content.

Content enrichment provides an enormous opportunity to improve the effectiveness of your content.  However, it is clear that detailed analysis and the work of adding additional markup and metadata to content can be extremely time consuming for authors and content managers.  Fortunately there are many free and commercial services available that can be used to enrich your content while saving countless hours for authors and content managers.

One such service is OpenCalais from Thomson Reuters.  OpenCalais is a toolkit of capabilities that includes state-of-the-art semantic data mining of content via RESTful services.

In the video below you’ll find a short demonstration of how OpenCalais can quickly be integrated with Crafter Rivet’s authoring platform (Crafter Studio) to make it extremely fast and easy for authors to enrich articles and other types of content with rich, structured metadata.
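For a sense of what such an integration involves at the service level, the sketch below posts article text to OpenCalais from Java and prints the enrichment response.  The endpoint, header names and response format are assumptions based on the OpenCalais documentation of that era and should be verified against the current service; the API key is a placeholder.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class OpenCalaisSketch {
    public static void main(String[] args) throws Exception {
        String text = "Thomson Reuters announced a new partnership in New York.";

        // Endpoint and header names are assumptions; check the current OpenCalais documentation.
        URL url = new URL("http://api.opencalais.com/tag/rs/enrich");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("x-calais-licenseID", "YOUR_API_KEY");
        conn.setRequestProperty("Content-Type", "text/raw; charset=UTF-8");
        conn.setRequestProperty("Accept", "application/json");

        // Send the raw article text as the request body.
        try (OutputStream out = conn.getOutputStream()) {
            out.write(text.getBytes(StandardCharsets.UTF_8));
        }

        // Print the JSON response containing the extracted entities, topics and tags.
        try (InputStream in = conn.getInputStream();
             Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            System.out.println(scanner.hasNext() ? scanner.next() : "");
        }
    }
}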

Coupled vs Decoupled Web CMS Architecture

Posted by on February 10, 2013

Generally speaking there are two main types of Web CMS architectures: coupled and decoupled.  Coupling refers to the relationship between the authoring tools and content delivery of your live site.

The classic example of a coupled CMS architecture is a blog engine. In a coupled system, the underlying store for your content serves both authoring and delivery.  Your authoring capabilities are part of the live delivery system but are available only to those who have permissions.  In a coupled system, the process of making content live is typically a matter of setting a flag in the database.

Coupled CMS Architecture

A decoupled system by contrast is one that puts authoring and delivery in separate applications, and potentially, on separate infrastructure. In a decoupled system, the process of making content live is done through a publishing mechanism where content is pushed from the authoring platform (and underlying content repository) to a content delivery infrastructure.

Decoupled CMS Architecture

So which approach is the right architecture? The reality is that there is no single right or wrong answer.  The answer depends on context: alignment with your requirements, your business process and your business goals.  The topic of coupled vs. decoupled is not a new one.  It’s a debate that has been going on for a long time and will continue to go on as long as there are Web CMS platforms, “fan boys” and a perception that one size fits all.  The more appropriate way to approach the question is to analyze the strengths and weaknesses of each approach and then to consider these in the context of your own specific use cases.  We’ll see that in different scenarios we come to different conclusions on which architecture to use.

Coupled Architecture

Let’s take a look at the strengths and weaknesses for a coupled CMS architecture.

Strengths

  • Easy to set up and deploy a single instance.
  • Authoring and delivery are on the same infrastructure which can make it easier to build cohesive authoring tools.
  • Relatively easy administration of production system for single sites

Weaknesses

  • SLAs (Service Level Agreements) are coupled — meaning that authoring has to be just as available as the live site.
  • Coupled infrastructures are generally more complex to scale, as they typically depend heavily on database scalability.
  • Content is typically captured in a database schema meant for delivery on the site.  This can make integration and migration difficult.
  • Software complexity is higher because the code base contains both authoring and delivery concerns.  All but the most trivial CMS projects involve some development, so this added complexity becomes a development issue.
  • Pushing content in and out of the CMS to and from third parties takes place in the same system that runs your live site.  Integration is not native to the architecture and it may impact the system’s performance.

Decoupled Architecture

Let’s take a look at the strengths and weaknesses of a decoupled CMS architecture.

Strengths

  • Easier to scale for high traffic websites, and to handle multi-site management.
  • SLAs are decoupled.  Authoring can be down for upgrades without impacting delivery. The reverse is also true.
  • Scale only the components that you need.  If you are not adding more authors then there is no need to scale out authoring.  This affects the complexity of the solution and also license costs where they are involved.
  • Code complexity is isolated within each of the two components (authoring and delivery).  Each piece can be as simple or complex as is needed without impacting the other.
  • Integration is a built-in concept: because content is natively published to the remote delivery system, it is generally straightforward to push to other systems as well. Also note that integration takes place on the authoring tier, safely away from the live site, protecting stability and performance.
  • Content migration and sharing with other systems is generally much more innate to the architecture.
  • Multi-channel support by nature, as publishing to mobile apps, social channels, and other digital channels is a natural extension of the native publishing capability.
  • Content sharing and syndication are more naturally supported.
  • When complexity is isolated and scaling is simple, it’s easier to develop and deploy rich features to your users.

Weaknesses

  • Setup has more components and can be more complex.
  • Publishing introduces additional points of failure.
  • Sub-division of components can go too far driving up complexity and driving down the cohesion of the user experience.

Making a Choice

It’s clear that each approach has its own strengths and weaknesses.  A coupled approach may work really well in one scenario while a decoupled approach may be much more appropriate in another.

Our analysis reveals that a coupled architecture can work well for websites that need to be set up and put on line in short order and that do not need to be able to scale quickly or to publish content beyond the website itself.  On the other hand, we see that a decoupled architecture is ideal for websites that require high levels of availability and performance, need a lot of tailored functionality, must be integrated with third party business systems and must publish to one or more digital channels beyond the website itself.

Build Engaging Web Pages with Crafter Rivet and Alfresco

Posted by on January 31, 2013

Today’s internet is a noisy place.  Our customers are bombarded with content and messaging from all angles.  As a result they learn to filter out anything that is not directly and immediately useful to them.  Thus it is ever more important for our websites and services to be up to date with fresh content and, on top of that, for that content to speak as directly as possible to any given user.

In this article we’ll explore a number of ways to leverage the Crafter Rivet WEM platform to create a dynamic website that lets you quickly and effortlessly keep your content fresh and timely.  We’ll go beyond the basics to show how you can support complex requirements and content production processes, and finally we’ll demonstrate how to target your content directly to the visitors on your site.

Author Selected Content

In many cases our authors want to make a specific editorial choice about what content to display on their site. For example, they may want to decide which promotions to show on the home page.

 

To begin let’s keep things simple and start with a use case that has a very basic set of requirements:

  • Authors manage a collection of promotional objects which can be assigned to the home page for display in a carousel.
  • Authors make a daily decision about which objects from the collection or gallery within the CMS to show on the home page.
  • All site visitors see the same promotional objects.

To allow authors to hand-pick content to be associated with a page, region, or component in Crafter Rivet, we must create a content association between the target content and the object that will point to them.  In the case of our example above there is a parent-child association between the home page and the promotions.

Parent-child, one-to-many content association

To create this association we use the Content Type Editor in Crafter Studio, which is the authoring environment for Crafter Rivet.

First we create the promotion type.  You can see an example of the fields in the image below.

Next we update our home page type and add an item selector field and a child content data source.

Once we’ve updated our models we can create and link promotion objects to the home page by editing the home page.

Finally we edit our home page template to include code that will walk the list of the associated child content (the promotion objects) and render them.

<#assign promos = model.promos.item />
<#list promos as promo>
    <#assign promoItem = siteItemService.getSiteItem(promo.key) />
    <div>
        <img src="${promoItem.image!''}" />
        <div>
            <h1>${promoItem.headline!""}</h1>
            <p>${promoItem.subhead!""}</p>
        </div>
    </div>
</#list>

Looking at the code above, we see that we are getting the key values for the promotional content associated with the home page and then looping over them.  For each item we load the actual content from the SiteItemService and then use the item to build out our markup.

Query Driven Content

Our example of promotional objects on the home page is very simple.  It assumes that every user will get the same content and also that authors will not want to stage more than a single scenario at a time.  This of course may work well for some organizations but it’s far too simplistic for others. Now that we’ve covered the basic mechanics of author managed content associations let’s imagine a use case with much more complex requirements:

  • Our authors need to create different variations of the home page that correspond to marketing campaigns which will take place throughout the year.
  • Editors choose the content that goes with each campaign.
  • A campaign may include a new offering and potentially other associated content that must go out at the same time.
  • Campaign content production doesn’t necessarily happen in sequence.  Each campaign is developed by a different team and may be more or less complex, which in turn means campaigns may complete production at vastly different times.  Occasionally content production will complete right before the campaign starts.
  • Once a campaign is ready it needs to be approved and scheduled for a deployment.  The team assigned to that campaign will move on to the next one.
  • When a campaign is over it needs to stop showing up on the home page

For a set of requirements like the ones we see above, we’re going to rely on a number of features in both authoring and delivery.

First we need a content model that is going to support our use case.  We need a campaign that can be created independently. The campaign will point to promotional content, products and other relevant content.  Our content model might look something like the following:

We can imagine a campaign might have the following properties:

  • A campaign code that matches a code tracked in our CRM system
  • An effective date that indicates when the campaign is allowed to begin operating on the site
  • An expiration date that indicates when it must end its operation
  • A disabled flag that allows it to be shut down even if it is otherwise active
  • Associations to all of the content items that are related to the campaign.  These content items will be selected by the author exactly as described in our earlier use case.

Next we need to update the home page template to use a campaign. Because a new campaign may come along at any time, we will use a query to determine which campaign to show.  At a minimum our query will look for:

  • All objects of type campaign
  • Whose effective date is today or earlier (the campaign has started)
  • Whose expiration date is today or later (the campaign has not yet ended)
  • That are not disabled.

The template code would look something like the following:

<#assign queryStatement = "content-type:/component/campaign " />
<#-- select campaigns that have started (effective date has passed) and have not yet expired -->
<#assign queryStatement = queryStatement + "AND effective_dt:[* TO TODAY] " />
<#assign queryStatement = queryStatement + "AND expiration_dt:[TODAY TO *] " />
<#assign queryStatement = queryStatement + "AND disabled_b:false " />
<#assign query = searchService.createQuery() />
<#assign query = query.setQuery(queryStatement) />
<#assign query = query.setRows(1) />

<#assign campaign = searchService.search(query).response.documents[0] />

<#list campaign.promos as promo>
    <#assign promoItem = siteItemService.getSiteItem(promo.localId) />
    <div>
        <img src="${promoItem.image!''}" />
        <div>
            <h1>${promoItem.headline!""}</h1>
            <p>${promoItem.subhead!""}</p>
        </div>
    </div>
</#list>

In the code above we can see that we are creating a query statement, limiting our results to a single record and then executing the query.  You may recognize that the code after the query, which processes the results and shows the promotional content associated with the campaign, remains nearly identical to the code in our last example.

Now that we have our mechanics set up, teams can begin creating content.  Once campaigns are complete they can be queued up for approval and deployment.  Because a campaign may require product updates that cannot be shown on the site before the start date, content managers may leverage scheduled publishing.  Scheduled publishing allows the editorial team to approve content for publishing but to delay the deployment of the content until a specific date and time.  Finally, editors can be sure all related updates, from associated images to associated products, will be deployed, because Crafter Studio will seamlessly calculate and deploy the campaign’s dependencies.

Targeted Content

Targeting content is all about getting the right content to the right audience at the right time.  In our last example we saw how to facilitate a complex production process and, more importantly for this topic, how to add metadata to content objects and then use that metadata to dynamically determine which content to show on our site.  In essence we’ve covered the “right content at the right time” part of targeting. Now we want to extend this to specific audiences.  Let’s extend our use case from above with a few additional requirements.

  • All requirements from the previous section hold.
  • Authors may create one or more campaigns that are eligible to be active on the site at any given time
  • Campaigns may be specific to different audiences.  In other words, content may be targeted at a particular audience.  For example: all users, or users of a specific gender.
  • Campaigns that match more specific criteria for a site user (that are more targeted) should be preferred over less relevant content.

At this point all of our production mechanics will remain the same.  Addressing the new requirements will require the following additional steps:

  • We’ll  add criteria to our campaign definition and then update the content with the appropriate values.
  • We’ll add additional user specific criteria to our query.
  • We’ll provide the capability for our authors to review how well the targeting rules work through Crafter Studio.

Let’s begin by updating our content model for campaign.

  • We’ll add a new section called “Targeting” to our form definition for campaign
  • To our new section we’ll add a grouped check box for Gender and a data source providing the values: Female, Male, and All

Next we’ll update our existing campaigns with the appropriate targeting data.  Perhaps we mark all existing items with the All value for the gender field.

Now we can turn our attention to the template and our query.  We know we’re going to target some content specifically to women, some to men and some to everyone regardless of gender.  Let’s take a look at the code:

<#assign gender = profile["gender"]!"ALL" />

<#assign queryStatement = "content-type:/component/campaign " />
<#assign queryStatement = queryStatement + "AND effective_dt:[* TO TODAY] " />
<#assign queryStatement = queryStatement + "AND expiration_dt:[TODAY TO *] " />
<#assign queryStatement = queryStatement + "AND disabled_b:false " />
<#assign queryStatement = queryStatement + "AND (gender:\"" + gender + "\"^10 OR gender:\"ALL\") " />
<#assign query = searchService.createQuery() />
<#assign query = query.setQuery(queryStatement) />
<#assign query = query.setRows(1) />

<#assign campaign = searchService.search(query).response.documents[0] />

<#list campaign.promos as promo>
    <#assign promoItem = siteItemService.getSiteItem(promo.localId) />
    <div>
        <img src="${promoItem.image!''}" />
        <div>
            <h1>${promoItem.headline!""}</h1>
            <p>${promoItem.subhead!""}</p>
        </div>
    </div>
</#list>

In the code above we see two new lines of interest.  The first is <#assign gender = profile["gender"]!"ALL" />.  This code gets the value of the gender property from the current profile and puts it into a variable.

The second line of interest is <#assign queryStatement = queryStatement + "AND (gender:\"" + gender + "\"^10 OR gender:\"ALL\") " />.  In this line we constrain our query to the gender coming from the profile OR records that have ALL selected.  The key here is the ^10 boost on the match against the profile’s gender.  This ensures that any records which match that aspect of the query will be given a higher relevance score and thus be chosen for display.

As you can see it is quite simple to create solutions for driving dynamic content targeted at specific attributes in a visitor’s profile.

However, as we discussed previously, the other half of the issue is enabling the author to test.  The reality today is that content authors are creating content for different devices, market segments and all sorts of other facets.  With such a dynamic environment they need tools that will help them review and test their work.

Crafter Studio provides targeting tools for authors to address this issue.  The authoring environment can be configured with any number of predefined personas. A persona is like a profile; in fact it behaves exactly the same way, but instead of setting up and signing in as specific users to test different scenarios, authors can simply switch back and forth between the available configured personas.  Each persona has a name, an image and a description to help authors identify the scenario it represents.  Authors can also change the property values of a given persona once they have assumed it.

 

Managing Component-based Websites with Alfresco

Posted by on November 04, 2012

From time to time at Rivet Logic we get questions concerning Alfresco’s ability to build and manage component-based websites, due to its file / folder based organization. In many cases this question arises from either a limited look at Alfresco’s features (sometimes sourced from industry analyst reports that lack depth) or a far too simplistic understanding of Alfresco’s core capabilities and established best practices for web content management and dynamic content delivery.

Through this short article I will demonstrate the following:

  • Repository organization has no bearing on website placement of components
  • Primary organization in a file / folder structure is a strength and offers technical capability that differentiates Alfresco from lesser Web CMS platforms.

Decoupled CMS Architecture

This article will approach this question with the capabilities of Crafter Rivet, a 100% open source web content and experience management solution built as a plug-in for Alfresco.  Crafter Rivet consists of two major subsystems: 1) Crafter Studio, an authoring environment built on top of Alfresco’s content repository, and 2) Crafter Engine, a high performance content delivery platform which relies on Apache Solr for dynamic content queries. Crafter Rivet and Alfresco enable a de-coupled content management architecture: Crafter Studio and Alfresco for content authoring, management and workflow, and Crafter Engine for high performance dynamic content delivery.  Content is published from Alfresco to the de-coupled Crafter Engine delivery stack, which is where the website application(s) run. Once deployed to Crafter Engine, content updates are immediately available and are reflected in a high performance, queryable Apache Solr index to support dynamic content delivery.  Moreover, all content is cached in memory for very low response times.

With this understanding of how the separation of content management from content delivery works in a de-coupled Web CMS platform such as Alfresco, we can now discuss how the content organization within the Alfresco repository facilitates component-based site development. We refer to content organization within the repository as the “primary organization” of the content.  Crafter Rivet allows authors to build component-based websites and place content components anywhere on the site, regardless of the primary organization found inside the Alfresco repository.  Components may be placed in a region of a page explicitly through drag-and-drop construction, or they may be surfaced through a dynamic query.

Explicit placement simply links the content to a page through an ID-based association, completely independent of the underlying file/folder structure. Query-driven content placement leverages a Solr-based index and returns a list of component IDs (and other attributes) based on the given criteria. And in both cases, the primary organization of the content plays no role in the ability for the content to be used in any region of any page of the website.

Primary Content Organization

Within the repository and authoring life cycle, primary organization is an extremely useful paradigm.  Authors need to be able to find content quickly.  By putting every content item in a structured location (i.e., a folder) it is much easier to find when browsing. Contrast this with the opaque database/data structure approach taken by many other platforms, which offer no direct access or ability to browse for content.  In general, we have found it best to remove abstraction as much as possible when modeling content. For site pages, as an example, we model these as files and folders that match the URL. Crafter Engine understands that, specifically for pages, the URL and this structure are related.  A change to the folder structure also changes the URL.  This is extremely helpful to authors as it maps directly to their expectation when thinking about page hierarchy in their information architecture.
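As a simple illustration (the folder convention shown and the domain are hypothetical), a page’s position in the folder tree maps directly onto its URL, so moving the folder moves the page:

/site/website/products/widgets/index.xml  ->  http://www.example.com/products/widgets/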

Components, on the other hand, tend to be grouped by type or some other such criteria. Authors may create an arbitrary folder structure to house components, forming a primary organization that enables them to quickly find and work with content. This primary organization has nothing to do with how, when or where the content will be rendered on the site. Further, Alfresco allows content managers to attach rules, policies and permissions to folders inside the repository.  This is an extremely useful and powerful capability not found in many other Web CMS systems. By leveraging the primary organization of pages and components, content managers can enact specific rules for content transformation, metadata extraction, validation, workflow, and much more throughout the authoring life cycle.

 

As you begin to work with Alfresco for web content management you want to keep in mind that the organization of your content inside the repository and the delivery of that content are two very different things.  Alfresco as a content repository is generally de-coupled from the delivery mechanism as we have seen with Crafter Rivet.  Take advantage of the fact that Alfresco’s file / folder structure has many organizational and technical benefits during the authoring life cycle and has no bearing on the delivery of that content when deployed to a dynamic content delivery engine like Crafter Engine.

Extending Crafter Engine with Java backed functionality

Posted by on October 06, 2012

Crafter Engine is the high-performance website / web app delivery engine for Crafter Rivet Web Experience Management. Out of the box Crafter Engine ships with support for many of the types of engaging functionality you have come to expect from a WCM/WEM platform.  However, there are times when we want to add additional capabilities to integrate with internal and 3rd party systems to meet specific business objectives.  In this article we’ll demonstrate how you can create Java backed plug-ins for Crafter Engine.

To illustrate the integration process we’ll integrate a simple RSS reader based on the ROME RSS processing library.
You can download this example (and others) at the following SVN location: https://svn.rivetlogic.com/repos/crafter-community/ext/engine

To begin let’s start with some background on Crafter Rivet, Crafter Engine and how a plug-in is organized structurally:

Crafter Rivet

Crafter Rivet is a web experience management solution based on the Alfresco content management platform, with a de-coupled architecture.  This means that the authoring environment and the production delivery environment are separate infrastructure, integrated by workflow and deployment from authoring to delivery.  Crafter Rivet components power some of the internet’s largest websites.  A decoupled architecture makes things easier to support, more flexible and very scalable.

Crafter Engine

Crafter Engine owes its performance characteristics to its simplicity.  Content is stored on disk and is served from memory.  Dynamic support is backed by Apache Solr. At the heart of Crafter Engine is Spring MVC, a simple, high performance application framework based on one of the world’s most popular technologies: Spring framework.

What is a Crafter Engine Plug-in

A Crafter Plug-in is a mechanism for extending the core capabilities of Crafter Engine, the delivery component of Crafter Rivet. Plug-ins allow you to add additional services and make them available to your template / presentation layer.

An example plug-in, as we suggested above, might be an RSS reader which would function as specified below:

  • The plug-in would expose an in-process Java based service like Feed RssReaderService.getFeed(String url)
  • Your presentation templates would then simply call <#assign feedItems = RssReaderService.getFeed("blogs.rivetlogic.com") />

Anatomy of a Crafter Engine Plug-in

You will note from the diagram above that plug-ins are simple JAR files that contain Java code, configuration and a Spring bean factory XML file that loads and registers the services with Crafter Engine.

Loading the plug-in

1. When the container loads the Crafter Engine WAR the shared classes lib folder is included in the class path.

2. When the Crafter Engine WAR starts up it will scan the class path for its own Spring files and any available plug-ins. At this time the services described in the Spring bean XML file within your plug-in will be loaded. Your service interfaces are now available in the presentation layer.

Interacting with your service

A. The user makes a request for a given URL

B. Crafter Engine will load all of the content descriptors from disk for the page and components needed to render the specific URL. Once the descriptors are loaded, the templates for the page and components will be loaded from disk.

C. The templates may now call your service interfaces to render from and interact with your back-end code. Your Java backed service may then do whatever it was intended to do, returning the results to the template for processing. Once complete, the responses are then returned to the user in the form of a rendered web page.

A simple Example

Now that we have a bit of background, let’s get down to the nuts and bolts of the matter and build, install and configure our RSS reader integration.

The Java Service

package org.rivetlogic.crafter.engine.rss;

import java.io.IOException;
import java.net.URL;

import com.sun.syndication.feed.synd.SyndFeed;
import com.sun.syndication.io.FeedException;
import com.sun.syndication.io.SyndFeedInput;
import com.sun.syndication.io.XmlReader;

/**
 * Reads an RSS or Atom feed from a specified URL and returns a data structure that can be accessed from a template.
 */
public class RssReaderService {

    public SyndFeed getFeed(String url) throws IOException, FeedException {
        SyndFeedInput input = new SyndFeedInput();
        XmlReader reader = new XmlReader(new URL(url));

        try {
            // Parse the feed into a SyndFeed that templates can traverse (entries, titles, links, etc.)
            return input.build(reader);
        } finally {
            try {
                reader.close();
            } catch (IOException err) {
                // ignore failures while closing the reader
            }
        }
    }
}

The Configuration

/crafter/engine/extension/services-context.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd">

    <bean id="rssReaderService" class="org.rivetlogic.crafter.engine.rss.RssReaderService" />
</beans>

Packaging and Install

The compiled class and Spring configuration file must be found on the class path.  To facilitate this we build these objects and their dependencies into a single JAR file.  You can find the build process for these at the SVN location above.  Place the JAR file that is produced by the build process into your shared class path, for example /TOMCAT-HOME/shared/lib, and restart the application.

Now that you have restarted the application your service is available to your presentation layer.  This means that in any template you can now add the following:

<#assign feedItems = RssReaderService.getFeed("blogs.rivetlogic.com") />

Using the Plug-in:

Web Ninja: Create an RSS Feed Component for Authors

Create a new content type as a component:

Create a template for the RSS Widget by clicking on the type canvas and editing the template property:
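The widget template itself might look something like the sketch below.  The field name rssUrl is illustrative and depends on how you define the content type; the service name matches the plug-in registered above.

<#assign feed = RssReaderService.getFeed(model.rssUrl) />
<ul>
<#list feed.entries as entry>
    <li><a href="${entry.link}">${entry.title}</a></li>
</#list>
</ul>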

Author: Create and configure an RSS Component

Modify the component properties through the content type form:

And insert the widget into a web page:

Save and close, and the configured component is now showing on your web page!

Resources for Crafter Rivet – Web CMS for Alfresco 4

Posted by on April 16, 2012

Last week was a busy week for those of us working on Crafter Rivet, the WEM/WCM extension for Alfresco 4.0.  We’re extremely excited about this release and are busy scheduling events and demos as word is starting to get out!

You can download Crafter Rivet here:

If you missed our Webinar last week that was co-hosted with Alfresco you can check it out here:
http://www2.alfresco.com/Crafter0412

For existing Alfresco WCM customers on Alfresco versions 2 and 3 using the AVM-based solution, we’ve put together a couple of blog posts to help you think about your migration to Alfresco 4 and the core repository:

For everyone who wants to learn more about this exciting and powerful open source solution for web content and experience management that sits on top of Alfresco, the world’s most open and powerful content management platform, we’re hosting a Crafter Rivet Roadshow in a city near you in May!  Come on out for content packed presentations, demonstrations, Q & A and free lunch!

Sign-up for the Crafter Rivet Roadshow here!

Crafter Roadshow Dates:

San Francisco
Tues. May 8

Los Angeles
Wed. May 9

Chicago
Tues. May 15

New York
Wed. May 16

Boston
Thur. May 17

Washington DC
Tues. May 22