Google Checkout

Ai caramba! Didn’t MS try this a few years ago… Passport?


Everything 2.0

Just when you thought 2.0 had been buried… Everything 2.0 and they don’t have reBlogger! Scandalous!

The Spot 4 Oracle

Sick of collecting hundreds of web feeds to cover all the Oracle blogs? Now get all your Oracle news and blog posts in one place: theSpot4Oracle.

It's built with reBlogger

What will you do with your reBlogger?

reBlogger 4.0

Ivan has posted about the status of reBlogger 4.0

DRM in RSS 2, OPML 2 and ATOM 1

DRM, Digital Rights Management

DRM handles the description, layering, analysis, valuation, trading, monitoring and enforcement of the usage restrictions that accompany a specific instance of a digital work.

Any blog post is a "digital work" and, as a creator of aggregation software, the rights of the author are important to me. But I'll go beyond that and say that I want authors to earn from their work and I want our software to handle that for them. I'll go even further and say that our software can encourage and enforce correct usage restrictions too.

To what extent do the existing web feed specs make provision for DRM? Or… to put it another way, if our software were to implement DRM on behalf of the authors, which spec has the features we need and which ones don't?

I've posted about the need for extensions to RSS in order to safeguard content creators' rights. As a creator of an aggregation product, I think it's a very important topic. Here are some of my posts that contain practical suggestions:

Although these posts don't share my focus area (rights protection), I've found quite a few comments about the limitations of RSS and about the difficulty of influencing the "owners" of the spec, who would need to take charge of the changes that are needed. Here are some of the posts I have found:

Clearly we need some changes, otherwise companies (Microsoft?) and people will just begin to implement their own changes as they see fit, on behalf of their customers. Another wild west scenario.

OPML

Dave might be onto some of the things I am looking for:

I'm leading a lunch discussion today about Identity in RSS and OPML, particularly OPML 2.0, which has an element for the author's identity. It's specified in 2.0 as a URL, and should plug into the work being done in this community.

The OPML 2.0 spec has some really useful information in the <HEAD> area.

<dateCreated> is a date-time, indicating when the document was created.
<dateModified> is a date-time, indicating when the document was last modified.
<ownerName> is a string, the owner of the document.
<ownerEmail> is a string, the email address of the owner of the document.
<ownerId> is the http address of a web page that contains an HTML form that allows a human reader to communicate with the author of the document via email or other means.

Dave is clearly interested in taking the long view by including this element:

<docs> is the http address of documentation for the format used in the OPML file. It's probably a pointer to this page for people who might stumble across the file on a web server 25 years from now and wonder what it is.
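As a rough sketch of what those <head> elements look like in practice, here is a minimal, hypothetical OPML 2.0 document being read with Python's standard library – all the names, dates and URLs are made up for illustration:

    import xml.etree.ElementTree as ET

    # A minimal, hypothetical OPML 2.0 document (values are illustrative only).
    opml = """<opml version="2.0">
      <head>
        <title>Example subscriptions</title>
        <dateCreated>Mon, 27 Feb 2006 12:00:00 GMT</dateCreated>
        <dateModified>Tue, 28 Feb 2006 09:30:00 GMT</dateModified>
        <ownerName>Example Author</ownerName>
        <ownerEmail>author@example.com</ownerEmail>
        <ownerId>http://example.com/contact.html</ownerId>
        <docs>http://example.com/opml-docs.html</docs>
      </head>
      <body>
        <outline text="Example feed" type="rss" xmlUrl="http://example.com/feed.xml"/>
      </body>
    </opml>"""

    head = ET.fromstring(opml).find("head")
    for tag in ("dateCreated", "dateModified", "ownerName", "ownerEmail", "ownerId", "docs"):
        element = head.find(tag)
        print(tag, "=", element.text if element is not None else "(missing)")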

OPML is not designed to contain content, but rather to link to content – and perhaps to link to the content which is linked to by that content (recursively). It's very good and useful at that, but it is not what I'm looking for.

RSS

The RSS 2.0 spec contains only one author-related element, and it's an email address:

An item's author element provides the e-mail address of the person who wrote the item (optional).
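As a sketch (with made-up values), here's how little that gives a program to work with – an item carries an address and nothing more:

    import xml.etree.ElementTree as ET

    # A hypothetical RSS 2.0 item: the only author-related field is an e-mail address.
    rss = """<rss version="2.0">
      <channel>
        <title>Example blog</title>
        <link>http://example.com/</link>
        <description>Illustration only</description>
        <item>
          <title>A sample post</title>
          <link>http://example.com/a-sample-post</link>
          <author>author@example.com (Example Author)</author>
        </item>
      </channel>
    </rss>"""

    item = ET.fromstring(rss).find("channel/item")
    author = item.find("author")
    # The address is the only author information the item provides.
    print("author:", author.text if author is not None else "(not provided)")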

I don't think it's sufficient because email addresses change over time. So RSS would not provide enough information for the protection of the rights of the author.

ATOM

The IETF Atom 1.0 format spec (not Atom 0.3) has far more useful information than either RSS or OPML in terms of tracking the lifetime of the "item" (content) and in always being able to find the original author. Atom even has a "rights" element. No wonder entire sites are converting to ATOM.

The "atom:author" element is a Person construct that indicates the author of the entry or feed.

The "atom:contributor" element is a Person construct that indicates a person or other entity who contributed to the entry or feed.

The "atom:id" element conveys a permanent, universally unique identifier for an entry or feed.

The "atom:published" element is a Date construct indicating an instant in time associated with an event early in the life cycle of the entry.

The "atom:updated" element is a Date construct indicating the most recent instant in time when an entry or feed was modified in a way the publisher considers significant. Therefore, not all modifications necessarily result in a changed atom:updated value.

The "atom:rights" element is a Text construct that conveys information about rights held in and over an entry or feed.

I really like the foresight of this next rule!

If an atom:entry element does not contain an atom:rights element, then the atom:rights element of the containing atom:feed element, if present, is considered to apply to the entry.
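Here's a sketch of that fallback rule in practice, using a hypothetical Atom feed where the feed sets a default atom:rights and only one entry overrides it (all titles, ids and rights text are illustrative):

    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    # A hypothetical Atom feed: feed-level rights plus one entry that overrides them.
    atom = """<feed xmlns="http://www.w3.org/2005/Atom">
      <title>Example feed</title>
      <id>urn:uuid:00000000-0000-0000-0000-000000000001</id>
      <updated>2006-06-29T12:00:00Z</updated>
      <rights>Copyright Example Author. Quoting with attribution permitted.</rights>
      <entry>
        <title>Entry with its own rights</title>
        <id>urn:uuid:00000000-0000-0000-0000-000000000002</id>
        <updated>2006-06-29T12:00:00Z</updated>
        <rights>All rights reserved for this entry.</rights>
      </entry>
      <entry>
        <title>Entry that inherits the feed's rights</title>
        <id>urn:uuid:00000000-0000-0000-0000-000000000003</id>
        <updated>2006-06-29T12:05:00Z</updated>
      </entry>
    </feed>"""

    feed = ET.fromstring(atom)
    feed_rights = feed.find(ATOM + "rights")

    for entry in feed.findall(ATOM + "entry"):
        # Per the spec: use the entry's atom:rights if present, else fall back to the feed's.
        rights = entry.find(ATOM + "rights")
        effective = rights if rights is not None else feed_rights
        title = entry.find(ATOM + "title").text
        print(title, "->", effective.text if effective is not None else "(no rights stated)")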

Atom does a far better job of giving us the elements that can be used to protect the authors of the content. In the two specs above, the main author element is intended to contain an email address. But email addresses change over time – and in this way an author could lose touch with the ways in which their content is being used.

Atom uses this word "person" throughout the spec. What is a "person" in Atom?

A Person construct is an element that describes a person, corporation, or similar entity (hereafter, 'person'). This specification assigns no significance to the order of appearance of the child elements in a Person construct. Person constructs allow extension Metadata elements.

The "atom:name" element's content conveys a human-readable name for the person. The content of atom:name is Language-Sensitive.

The "atom:uri" element's content conveys an IRI associated with the person. Person constructs MAY contain an atom:uri element, but MUST NOT contain more than one.

The "atom:email" element's content conveys an e-mail address associated with the person. Person constructs MAY contain an atom:email element, but MUST NOT contain more than one.

Overall I can imagine Atom providing us with enough elements to be able to implement some form of protection for the rights of the initial author.

What is the issue here?

If we don't take action now, we will have a situation where people earn from content in the same way as people earn from paintings. If I paint a wonderful piece of art, I sell it – and that's the end of my revenue. The artwork can be resold 20 times and increase in value 100 times… but I make nothing. Speculators make everything, I get nothing.

Without protecting the author and providing them with income, we really cannot expect to see the emergence of professional authors who create great content over the long term.

Specs

Here are links to the specs:

This is an important issue to me because we're building the reBlogger website-based aggregator and I want to honor the digital rights of the author… but I can't programmatically determine what their rights are!
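To make the problem concrete, here is a rough sketch of what an aggregator would like to be able to do: look for a machine-readable rights statement (atom:rights, or the channel-level <copyright> element in RSS 2.0) before deciding how content may be republished. The helper name and the feed are hypothetical; the point is that for most feeds the function simply returns nothing.

    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def find_rights(feed_xml: str):
        """Hypothetical helper: return whatever rights statement the feed carries, or None."""
        root = ET.fromstring(feed_xml)
        if root.tag == ATOM + "feed":                      # Atom 1.0
            rights = root.find(ATOM + "rights")
            return rights.text if rights is not None else None
        if root.tag == "rss":                              # RSS 2.0
            copyright_ = root.find("channel/copyright")
            return copyright_.text if copyright_ is not None else None
        return None

    # Illustration: an RSS 2.0 feed with no copyright element gives us nothing to act on.
    bare_rss = """<rss version="2.0"><channel>
      <title>Example</title><link>http://example.com/</link><description>x</description>
    </channel></rss>"""
    print(find_rights(bare_rss))  # -> None: the author's rights are simply unknown to software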


reBlogger past, present and future

Yesterday I completed our first interview about reBlogger with Robin Good. It will be published next week on his excellent site.

I had great fun doing it – a lot more fun than I expected. I've realized that I (and our team) are very passionate about what we're doing.

In essence I sketched out the existing reBlogger 3.x and the forthcoming Next Gen version and the corporate version after that. I'll share the extremely short version here. (I look forward to reading Robin's take on our discussion)

Background to reBlogger: We began to build reBlogger almost 2 years ago for our own internal needs on TopXML. It was about a year ago that we made some copies of reBlogger available to other websites that we have close relationships with. By the end of last year reBlogger was in its 3rd version and we had moved it to .NET because of the massive increase in development productivity over Classic ASP. As I travelled through Europe at the end of last year I found myself in beautiful Venice (Italy), and I read Robin's website (also in Italy). Robin wrote passionately about newsmastering. Amongst other things he wrote:

We need something of an entirely new order of magnitude to manage all of this information.

Search engines, open directories, and millions of bloggers are not enough.

We need a multi-layered, self-organizing approach that allows the load to be highly distributed and the focus and depth to be guaranteed by the combined result of many highly focused individual efforts.

As I travelled around Venice along the Grand Canal (and went to a wonderful masked opera), I began to see a much larger picture of what our existing reBlogger can be used for. I saw that we could provide the answer that Robin had been looking for.

I bought newsmasters.com for a small fortune and ever since we've been gradually building reBlogger into something that will help people and companies manage the torrent of data that is flowing past them – in October 2005 it was called a river of news but these days it's a tidal wave of news.

reBlogger 3.x is predominantly designed for SEO companies and websites that want to track blog content in tightly grouped themes. This product is described in this post covering the "reBlogger engine" and you can view many existing websites that are built out of reBlogger 3.x. One of the great things is that right out of the box reBlogger gives you excellent ranking in search engines because of its focus on creating themes. In this way reBlogger 3.x is similar to a content manager (CM).

reBlogger Next Gen is designed for building meme or web 2.0 websites. It's an engine with a far higher level of functionality than reBlogger 3.x. It's basically DIGG-in-a-box. With the mushrooming number of 2.0 sites out there (all containing voting and Ajax coolness) there is a big need for standardization and componentization. Atlas brings that at a technology level, but we're making DIGG sites (meme, web 2.0) into a commodity that anyone can buy. By using our reBlogger Next Gen you can easily have a DIGG site working on your website. It's got all the functionality of reBlogger 3.x plus all the Ajax goodies that most existing 2.0 sites have – and then some extra innovations that have not been seen yet on the web, for example Hover Comments.

reBlogger corporate version is designed for… corporates. When you have 1,000 bloggers in your company, you have a major headache looming. How do you track the bloggers? When you can get the blog posts of your competitors' employees you have a major opportunity! What can you extract from their blogs? Sales departments want to track buzz about a product: is it increasing or declining? Marketing departments may want to generate buzz about upcoming products and compare that graphically to buzz about upcoming competitor products (think Xbox 360 and PS3).

We think the enormous volume of blog content is a whole new addition to the lives of people who are connected via the internet. Everyone wants to track something of interest to them. Everyone naturally has a desire to play and explore. We have the long term vision to enable it.


The reBlogger engine

I think that what we see in the next version of Windows – the RSS platform – is just one layer in a typical "feed stack" or "RSS Stack". I call it a stack because there are different things happening at higher and higher layers of abstraction above the previous layer, rather like a protocol stack whose layers might be Ethernet > IP > TCP > HTTP.

Ivan mentioned that he is building the next generation reBlogger over the existing engine. In that post look for the words: "No changing original reBlogger code".

In the existing reBlogger 3.x engine, we have a stack of sorts – each layer has a greater level of functionality that depends on the previous layer:

  1. Find, track and manage feeds (all versions of RSS, ATOM, RDF etc.)
  2. Manage types of resources differently (podcasting, blogging, Flickr images)
  3. Aggregate the content intelligently (collate/merge a bunch of feeds together – like the engine in Windows RSS Platform does)
  4. Mixing (combining posts into a single feed, from across different sources)
  5. Filtering (to remove off-topic posts, adverts, Google bad neighbourhoods and profanity)
  6. pre-Publishing (only publish newest items, avoid duplicates)
  7. SEO-friendly publishing (create highly optimized content on your own website)
  8. SEO-friendly site navigation (insert highly optimized navigation structure including URLList and Sitemaps)
  9. Automation (do all of the above automatically with no administrator involvement)
  10. Extensive administrator tools
  11. Allow customization of the look and feel
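reBlogger's internals aren't shown here, but as a rough sketch of how a few of those layers (aggregate, mix, filter, pre-publish) can chain together – each stage consuming the output of the one below it – here is a hypothetical pipeline where all the names and filtering rules are made up for illustration:

    # Hypothetical sketch of layers 3-6: aggregate, mix, filter, pre-publish.
    # Each stage takes the output of the previous one, mirroring the stack above.

    BLOCKED_WORDS = {"viagra", "casino"}      # stand-in for a real profanity/off-topic filter

    def aggregate(feeds):
        """Layers 3-4: collate items from many feeds into one stream."""
        for feed in feeds:
            for item in feed:
                yield item

    def keep(item):
        """Layer 5: drop off-topic posts and obvious junk."""
        text = (item["title"] + " " + item["summary"]).lower()
        return not any(word in text for word in BLOCKED_WORDS)

    def pre_publish(items, already_published):
        """Layer 6: publish only new items, avoiding duplicates by link."""
        for item in items:
            if item["link"] not in already_published:
                already_published.add(item["link"])
                yield item

    feeds = [
        [{"title": "Post A", "summary": "useful content", "link": "http://example.com/a"}],
        [{"title": "Casino spam", "summary": "win big", "link": "http://example.com/spam"},
         {"title": "Post A", "summary": "useful content", "link": "http://example.com/a"}],
    ]
    seen = set()
    for post in pre_publish(filter(keep, aggregate(feeds)), seen):
        print("publish:", post["title"])      # prints only "Post A", once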

Now in the next generation reBlogger we will be adding significantly to this engine. In fact rather than replace it, we will simply build on top of what is already there.
