Wednesday, 11th September 2002
Testing Pingback client
This post exists partly to list the blogs I know of that support PingBack, but mostly to help test my new PingBack client implementation.
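For the curious, a Pingback client essentially does two things: it discovers the target page’s XML-RPC endpoint (advertised via an X-Pingback HTTP header or a <link rel="pingback"> element) and then calls the pingback.ping method with the source and target URLs. The sketch below is just an illustration of that flow in Python; it is not the client implementation being tested here.

```python
# Illustrative sketch of a Pingback client, not the implementation under test.
import re
import urllib.request
import xmlrpc.client

def send_pingback(source_uri, target_uri):
    """Tell target_uri that source_uri links to it, per the Pingback spec."""
    # Discover the target's Pingback endpoint: check the X-Pingback HTTP
    # header first, then fall back to a <link rel="pingback"> element.
    response = urllib.request.urlopen(target_uri)
    endpoint = response.headers.get("X-Pingback")
    if not endpoint:
        html = response.read().decode("utf-8", "replace")
        match = re.search(r'<link[^>]*rel="pingback"[^>]*href="([^"]+)"',
                          html, re.IGNORECASE)
        endpoint = match.group(1) if match else None
    if not endpoint:
        raise ValueError("target does not advertise a Pingback endpoint")

    # The endpoint is an XML-RPC server exposing pingback.ping(source, target).
    server = xmlrpc.client.ServerProxy(endpoint)
    return server.pingback.ping(source_uri, target_uri)
```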
[... 68 words]
RSS feeds coming soon
A quick note concerning RSS feeds. I have not yet implemented them on my new blog, but I plan to do so in the next few days. On the advice of Chris Coome and Bill Kearney (both of whom replied to my question on [rss-dev]) I will be providing feeds in both RSS 1.0 and RSS 0.91 formats, and I plan to provide individual feeds for the various categories on the site. I also have an idea for a feature that will allow people to “build their own” RSS feed consisting of the categories they are most interested in. As always, watch this space :)
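As a rough sketch of how the “build their own” feature could work, the idea is simply to filter entries by the categories a reader selects before rendering them as a feed. The entry data and field names below are made up purely for illustration.

```python
# Rough sketch of the "build your own feed" idea: filter entries by the
# categories a reader has picked, newest first, then render those as a feed.
# The sample entries and field names are purely illustrative.
from datetime import datetime

entries = [
    {"title": "Testing Pingback client", "category": "blogging",
     "date": datetime(2002, 9, 11)},
    {"title": "effnews part two", "category": "python",
     "date": datetime(2002, 9, 11)},
]

def build_custom_feed(entries, chosen_categories, limit=15):
    """Return the most recent entries in the reader's chosen categories."""
    selected = [e for e in entries if e["category"] in chosen_categories]
    selected.sort(key=lambda e: e["date"], reverse=True)
    return selected[:limit]

# e.g. a reader who only wants Python and blogging-related posts
print(build_custom_feed(entries, {"python", "blogging"}))
```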
Disable CSS bookmarklet
A handy bookmarklet courtesy of Rick on the MACCAWS mailing list:
[... 20 words]
Remind me why people still use IE
The Register: IE 6 SP1 omits fixes for 20 outstanding flaws:
[... 166 words]
effnews part two
Fetching and Parsing RSS Data is the second installment of the effnews project, a series of tutorials on creating an RSS news reader in Python. Topics covered this time include exception handling and event-based XML parsing using xmllib.
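The tutorial builds its parser on xmllib; purely as an illustration of the same event-based approach (and of wrapping the fetch in basic exception handling), here is a sketch using the standard library’s xml.sax module instead.

```python
# Illustration of event-based RSS parsing in the spirit of the effnews
# tutorial. The tutorial itself uses xmllib; this sketch uses the similar
# xml.sax API and wraps the fetch in basic exception handling.
import urllib.request
import xml.sax

class ItemHandler(xml.sax.ContentHandler):
    """Collect the title and link of each <item> as parse events arrive."""
    def __init__(self):
        super().__init__()
        self.items, self.current, self.text = [], None, ""

    def startElement(self, name, attrs):
        if name == "item":
            self.current = {}
        self.text = ""

    def characters(self, content):
        self.text += content

    def endElement(self, name):
        if self.current is not None and name in ("title", "link"):
            self.current[name] = self.text.strip()
        elif name == "item" and self.current is not None:
            self.items.append(self.current)
            self.current = None

def fetch_items(url):
    try:
        data = urllib.request.urlopen(url).read()
    except OSError as err:  # covers network errors and bad URLs
        print("could not fetch", url, "-", err)
        return []
    handler = ItemHandler()
    xml.sax.parseString(data, handler)
    return handler.items
```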
Flash applications
Flash MX and the Bigger Picture: Lightweight Internet Applications:
[... 231 words]
RSS 1.0 feed now available
I’ve set up the first of my new syndication feeds, using RSS 1.0. I’ve checked the feed against this RSS validator and it passes, though the validator warns that item descriptions should be between 0 and 500 characters in length. As I want to provide the full contents of my entries in the feed (for people using aggregators such as AmphetaDesk), I’ve decided to ignore the warning and leave it as it is.
New form of spam protection
I’ve had an idea for a new way of hiding email addresses from spam harvesters—shield the address behind a form that must be submitted via POST. Site visitors can now click a button on my Contact page to reveal my email address. Spammers could always circumvent the system by writing a harvester that parses HTML pages for forms and submits every single one, but I’m hoping they won’t bother.
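The sketch below shows the general shape of the idea, not the actual contact-page code, and the address is a placeholder. The point is that the address only ever appears in a response to a POST, so a harvester that simply follows links never sees it.

```python
# Minimal sketch of hiding an email address behind a POST-only form.
# Not the real contact page; the address is a placeholder.
from wsgiref.simple_server import make_server

EMAIL = "someone@example.com"  # placeholder address

def app(environ, start_response):
    if environ["REQUEST_METHOD"] == "POST":
        # Only a POST ever receives the address.
        body = "Contact me at: %s" % EMAIL
    else:
        # GET responses contain only the form, never the address itself.
        body = ('<form method="post" action="/">'
                '<input type="submit" value="Reveal email address">'
                '</form>')
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```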