I read all my feeds through a personal mish-mash of Java, Perl and shell scripts. Although tools and abstractions are fun, there’s no substitute for keeping track of the real state of a technology.
Everything was originally RSS 2.0 based, with special-case conversion for other feed formats, but as Atom adoption has grown that approach became an anachronism. Increasingly, feeds are taking advantage of Atom’s clear content model, defined behaviour for relative URLs and unambiguous date treatment. Now, everything’s based around the Atom model, with RSS feeds converted on the way in.
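One concrete piece of that conversion is the “unambiguous date treatment”: RSS 2.0 uses RFC 822-style dates, while Atom mandates RFC 3339. A minimal sketch of that normalisation step, assuming the hypothetical helper name `toAtomDate` (not anything from my actual scripts):

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class DateNorm {
    // Normalise an RSS 2.0 pubDate (RFC 822/1123 style) to the
    // RFC 3339 form that Atom requires. One small piece of an
    // RSS-to-Atom conversion; illustrative only.
    static String toAtomDate(String rfc822) {
        return OffsetDateTime.parse(rfc822, DateTimeFormatter.RFC_1123_DATE_TIME)
                .format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
    }

    public static void main(String[] args) {
        System.out.println(toAtomDate("Tue, 10 Jun 2003 04:00:00 GMT"));
        // prints 2003-06-10T04:00:00Z
    }
}
```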
As Sam points out, his “experience is that things that people care about tend to get fixed.” It’s interesting, for me, that my motivation was to get pictures from Tim Bray’s feed. The links are relative, so I needed to implement xml:base handling, and now those alpha-channeled drop shadows look pretty good in my reader, even though it’s styled nothing like his site.
Next up was links in Intertwingly — relative links and no xml:base at all. Processing is split into downloading and parsing, with the two stages linked through raw files on disk. Unfortunately, that loses the original locations the resources came from. A little extra metadata parsing and everything’s well again, although the SVG’s still getting dropped by my zealous safe-XHTML-only filtering.
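One way to plug that gap is to record the source URL as a sidecar file next to each raw download, so the parsing stage can fall back to the feed’s own location when there’s no xml:base. A sketch under that assumption — the file layout and names here are hypothetical, not how my scripts actually do it:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FeedStore {
    // Downloader side: write the raw feed bytes plus a sidecar
    // file recording where they came from.
    static void save(Path dir, String name, byte[] body, String sourceUrl)
            throws IOException {
        Files.write(dir.resolve(name + ".xml"), body);
        Files.writeString(dir.resolve(name + ".url"), sourceUrl);
    }

    // Parser side: recover the origin, to use as the base URL
    // when the feed supplies no xml:base of its own.
    static String sourceUrl(Path dir, String name) throws IOException {
        return Files.readString(dir.resolve(name + ".url"));
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("feeds");
        save(dir, "intertwingly", "<feed/>".getBytes(),
                "http://www.intertwingly.net/blog/index.atom");
        System.out.println(sourceUrl(dir, "intertwingly"));
        // prints http://www.intertwingly.net/blog/index.atom
    }
}
```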
I’m not suggesting that anyone exploit corner-cases in a spec to spur on development, but having people with the confidence to publish valuable content without working around others’ bugs seems like an important part of the ecosystem.