Blog to log

Online services that relate to managing RSS feeds, mainly from blogs. Some new ideas are emerging, but no killer app yet.

Just sniffing around, I came across:

  1. Reblog: filter and republish other people’s feeds on your own site. “Useful to individuals who want to maintain a weblog but prefer curating content to writing original posts”. This is a poor way of describing a Planet or a portal, but with more editorial control.
  2. TagCloud: extracts keywords from a given set of RSS feeds and builds a ‘tag cloud’ (technically a ‘keyword cloud’); a rough sketch of the idea follows below.
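
For the curious, here is a minimal sketch of the TagCloud idea: pull a handful of feeds, count the recurring words, and you have the raw material for a keyword cloud. The feed URLs and stop-word list are placeholders, and it assumes the third-party feedparser library is installed.

```python
# Minimal sketch of the TagCloud idea: fetch a few RSS feeds and count
# recurring words to approximate a keyword cloud.
from collections import Counter
import re

import feedparser  # third-party RSS/Atom parser

FEEDS = [  # placeholder URLs
    "http://example.com/blog-a/rss",
    "http://example.com/blog-b/rss",
]
STOP_WORDS = {"the", "and", "that", "for", "with", "this", "are", "was", "have"}

def keyword_cloud(feed_urls, top_n=30):
    counts = Counter()
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = "%s %s" % (entry.get("title", ""), entry.get("summary", ""))
            words = re.findall(r"[a-z']{3,}", text.lower())
            counts.update(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

if __name__ == "__main__":
    for word, count in keyword_cloud(FEEDS):
        print(word, count)
```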

Rise of the editor

Beyond the self-publishing revolution to the rise of the editor: as more content comes online, good editorial will be needed to sift through it. This role will become more and more important, and sites like Slashdot might be knocked off the top of the ‘most read geek portal’ list.

I envisaged sites like Vibewire being a collection of blogs, either hosted by them or elsewhere, that feed into individual channels, with the editors simply selecting from what is already published on those blogs. These channels would in turn feed into the main page.
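
A rough sketch of that flow, purely illustrative (the channel names, URLs and helper functions are all made up, and it again assumes feedparser): member blogs feed a channel’s inbox, the editor selects from what is already published, and the selections roll up to the front page.

```python
# Illustrative sketch of two-tier aggregation: blogs -> channels -> front page.
import feedparser

CHANNELS = {  # channel name -> member blog feeds (placeholder URLs)
    "politics": ["http://example.org/blog1/rss", "http://example.org/blog2/rss"],
    "arts": ["http://example.org/blog3/rss"],
}

def channel_inbox(feed_urls):
    """Everything already published by a channel's member blogs."""
    entries = []
    for url in feed_urls:
        entries.extend(feedparser.parse(url).entries)
    return entries

def editor_picks(entries, approve):
    """The editor simply selects from what is already published."""
    return [e for e in entries if approve(e)]

def front_page(channels, approve):
    """Each channel's picks feed into the main page."""
    return {name: editor_picks(channel_inbox(urls), approve)
            for name, urls in channels.items()}

# 'approve' stands in for the human editor, e.g. a tick-box in an admin screen.
```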

Reblog gives anyone the power to be their own Slashdot or Vibewire.

Update: Looks like someone already has.

RSS needs Tags

TagCloud illustrates one of the main problems with RSS: it provides raw information without any means to contextualise it. With the recent rise of tags, or folksonomy (I hate that word), feeds need to be extended to allow keywords/tags to be provided with the feed. The tools to map these remote taxonomies to local ones would then be needed, but they would provide a means to manage large volumes of information effectively.
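
One place such keywords could already live is the per-item <category> element (RSS 2.0) or <category term="…"> (Atom), which feedparser exposes as an entry’s tags. The harder part is the remote-to-local mapping; here is a toy sketch with an entirely made-up mapping table and a placeholder feed URL.

```python
# Toy sketch: read per-item tags from a feed and map them onto a local vocabulary.
import feedparser

REMOTE_TO_LOCAL = {  # made-up mapping from someone else's tags to my own
    "weblogs": "blogging",
    "folksonomy": "tagging",
    "rss": "syndication",
}

def local_tags_for(entry):
    remote = [t.get("term", "") for t in entry.get("tags", [])]
    return sorted({REMOTE_TO_LOCAL.get(t.lower(), t.lower()) for t in remote if t})

feed = feedparser.parse("http://example.com/some-feed")  # placeholder URL
for entry in feed.entries:
    print(entry.get("title", "(untitled)"), local_tags_for(entry))
```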

Update:
Seems someone is already trying to put a name on this problem: Feed Overload Syndrome, and its solution, ‘Meta-Feeds’ (I guess everyone is trying to be the first to name the next big fad). But let’s break Mr Burnham down a bit:

Burnham reckons tagging is no good because it makes ‘tag soup’, as everyone has their own tags. That’s fine, though, and it’s what so-called folksonomies are about: individually the tags are worth little, but en masse these little bits add up to more than the expert-made taxonomies, which are worth a lot (if you know how to use them). The goal, however, is to map from whatever incoming taxonomy to a local, personal one, which will ultimately have more meaning to the viewer (the advantage of folksonomies).

Burnham’s solution breaks down to what Reblog is doing:

…the posts are categorized and placed into a taxonomy using advanced statistical processes such as Bayesian analysis and natural language processing

So, basically, machine keyword scraping and the mythical ‘natural language processing’ (oh, for a computer that can understand!). This is never going to be as valuable to the end user as human tagging, and it won’t map perfectly to an expert taxonomy either. What we really want is some sort of collaborative filtering process that maps from one folksonomy/taxonomy to another based on trust networks that the end user subscribes to. In light of this, Peer to Peer Social Networking looks promising and more like a realistic solution, or even some sort of Google-style ranking based on community and author.
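
To make the trust-network idea a little more concrete, here is a toy sketch (all peers, weights and tags invented): each trusted peer publishes their own remote-to-local tag mapping, and the reader takes the trust-weighted vote.

```python
# Toy sketch of trust-weighted tag mapping: peers I subscribe to each publish
# their own remote->local mappings; the highest trust-weighted vote wins.
from collections import defaultdict

PEERS = {  # peer -> (trust weight, {remote tag: local tag}); all invented
    "alice": (0.9, {"moblogging": "mobile", "folksonomy": "tagging"}),
    "bob": (0.6, {"moblogging": "photos", "folksonomy": "tagging"}),
    "carol": (0.3, {"moblogging": "mobile"}),
}

def map_tag(remote_tag, peers=PEERS):
    votes = defaultdict(float)
    for trust, mapping in peers.values():
        if remote_tag in mapping:
            votes[mapping[remote_tag]] += trust
    if not votes:
        return remote_tag  # no one I trust has an opinion; keep the remote tag
    return max(votes, key=votes.get)

print(map_tag("moblogging"))  # "mobile": alice + carol outweigh bob
print(map_tag("folksonomy"))  # "tagging"
```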

Burnham did get one thing right, however: this mapping process will have to be external to the actual feeds, most likely with a smart feedreader client that talks to a service/community to do the mapping.

11 thoughts on “Blog to log”

  1. BJ and I had done a fair amount of work customising Drupal so that it could aggregate any blog instead of *just* Drupal blogs. We encountered similar problems with remote vs. local taxonomies. However, BJ was (rightly) convinced that taxonomies couldn’t play that important a role in the aggregation of content. That is to say, if I’m quoting BJ correctly, that taxonomies are context-dependent – the taxonomies on my blog are relevant to the cluster of knowledge you find on my site and on my site only. The ‘glue’, if you will, that should bring knowledge together (what we see nowadays as planets or aggregation) should be networks, or more specifically, social networks. BJ’s done a lot of work with this and trust as a relational attribute. You should speak to him about it because I’m only trying to explain what he has told me. I think he’s in a quiet period of his thesis right now.

  2. “The tools to map these remote taxonomies to local ones would be needed….”
    Yes. Yes they would.
    Do you envisage the mapping being done by an automated process? If so, is your concern how such a process might be carried out (given the content-ignorance of computation)?
    Abend writes “…was (rightly) convinced that taxonomies couldn’t play th[is] important role in the aggregation of content…[because] taxonomies are context-dependent [i.e., content-dependent]. The ‘glue’…should be…social networks.”
    Is this claim a function of the (possible) non-computability of remote-local taxonomy mapping and hence a manifesto for real-time agent-based mapping, or a manifesto for the _assumption_ of an implicit mapping (riding on the back of the trusted social network etc.)?
