
Perl Module Monday: Net::Twitter(::Lite)

September 28th, 2009 | 4 Comments | Posted in CPAN, GitHub, Perl, Twitter

(If I keep covering multiple modules in a post, I’m going to have to change the title and tag I use…)

I generally try to use these posts to highlight lesser-known modules, and I imagine the Net::Twitter module has a rather higher profile than most of my previous choices. But are you familiar with Net::Twitter::Lite, as well?

It’s not unusual for CPAN to offer more than one solution to a given problem. The wide range of XML parsers is a testament to this. And when a subject is popular, the odds are even greater that people may choose to “roll their own” rather than trying to contribute to an existing effort. Fortunately, the interface to the social messaging service Twitter has been spared this. Maybe it’s because the source code is hosted on GitHub, and thus it is easier for people to contribute. Whatever the reason, the only real competition to Net::Twitter for basic Twitter API usage is Net::Twitter::Lite. And it’s not actually a competitor in the general sense.

Rather than representing a competing implementation, Net::Twitter::Lite came about as an (almost completely) interface-compatible alternative to Net::Twitter after it was refactored to use Moose internally. While it doesn’t have 100% of the features that Net::Twitter has, both modules strive for 100% coverage of Twitter’s API. Where N::T::Lite runs without the additional requirement of Moose, N::T gives you finer-grained control over which parts of the API are loaded and made available to connection objects.

I’ve used both modules, and can attest to the fact that the interface is kept consistent between them. At $DAY_JOB I authored a tool to echo data to a Twitter stream, for which N::T::L was the best choice as it had the fewest dependencies and our needs did not call for the additional functionality of N::T. My Twitter-bot (cpan_linked) was written with N::T in the pre-Moose days, and has not had a single problem since I seamlessly upgraded N::T to the Moose-based version. As I work on the next generation CPAN-bot, I’ll be using the OAuth support, as well as possibly the search API. Since it will be a long-running daemon, I’ll stick with the more-featureful N::T for it. But thanks to the diligence of the modules’ authors, I could just as easily swap between them at will.
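To illustrate that interchangeability, here is a rough sketch using the 2009-era basic-auth constructor that both modules share (the account details are made up, and the class-swapping trick is mine, not something either module requires):

```perl
use strict;
use warnings;

# Pick one; the calls below work identically with either class.
my $class = 'Net::Twitter::Lite';        # or 'Net::Twitter'
eval "require $class" or die $@;

my $nt = $class->new(
    username => 'example_account',       # hypothetical credentials
    password => 'example_password',
);

$nt->update('Posting from Perl');        # post a status update
my $statuses = $nt->friends_timeline();  # arrayref of status hashrefs
```

Swapping one class name is the only change needed to move between them, which is exactly the kind of drop-in compatibility the authors have maintained.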

If you’re planning to interface to Twitter from Perl, these two modules should be your starting point. But be sure to look at the other Twitter-oriented modules, just to be sure. There’s a lot of activity around this API, and Perl developers have kept on top of it.


Perl Module Monday: File::Find::Object

September 21st, 2009 | 2 Comments | Posted in CPAN, Perl

When Higher Order Perl came out, one of the first concepts from it that I was able to make immediate use of was that of iterators. Wonderful things, iterators, when suitable to the task at hand. I used an iterator class to hide from user-level code the fact that a DBI-style database statement handle was actually 4 separate handles on 4 separate hosts. So any time I see a stream interface get converted to an iterator, I at least give it a fair looking-over.

The File::Find::Object module is an excellent example of this. It takes the concept of File::Find, as found in Perl’s core, and makes it into an iterative, object-oriented interface. It has two features that sell me on it, over vanilla File::Find:

  • You can create more than one instance of the finder at a time, as it has no global-variable usage to cause problems. This allows side-by-side comparison of finds run in different directories, sub-finds that execute based on interim results from the current find, etc.
  • Once initialized, it acts as an iterator. This has two obvious benefits: firstly, you can stop when you want without using any tricks such as die-ing or forcing $File::Find::prune. The second benefit is less apparent, until you run your find on a huge set of directories and files; as an iterator, the finder will only move forward as you call it. It doesn’t immediately sprint full-steam-ahead over the whole of the search-space.

Shlomi Fish has taken over most of the maintenance of the module. His main write-up on it is here, with links to CPAN, Kobesearch and Freshmeat. That page also links to File::Find::Object::Rule, a port of File::Find::Rule to FFO. Shlomi has also written about the module more extensively, under the heading, “What you can do with File-Find-Object (that you can’t with File::Find)”. This second posting has some very useful examples of FFO in action, and I highly recommend reading it and then giving FFO a try.


Embracing the Ungulate

September 16th, 2009 | 4 Comments | Posted in CPAN, Metaprogramming, Perl

It’s long past time I started learning Moose. I have a CPAN module (WebService::ISBNDB) that currently uses Class::Std to do the inside-out object thing, so converting it to Moose would be the perfect candidate for a “learning experience”.

Can anyone recommend some online resources (tutorials, blog posts, etc.) that resemble what I’ll be trying to do… i.e., go from a less-favorable inside-out solution to Moose? All pointers greatly appreciated.
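For concreteness, here is the shape of the change I have in mind; the class and attribute names are invented for illustration, not WebService::ISBNDB’s real API:

```perl
# Where Class::Std declares an inside-out attribute like
#   my %title_of : ATTR( :name<title> );
# the Moose version would look something like this:
package Book;
use Moose;

has title => (is => 'rw', isa => 'Str');
has isbn  => (is => 'ro', isa => 'Str', required => 1);

no Moose;
__PACKAGE__->meta->make_immutable;

package main;
my $book = Book->new(isbn => '0-596-10092-2', title => 'Higher Order Perl');
print $book->title, "\n";
```

The mechanical part looks straightforward; it’s the idioms around coercion, roles and BUILD that I expect to need the tutorials for.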


Perl Module Monday: HTTP Parsing Triple-Play

September 14th, 2009 | No Comments | Posted in CPAN, HTTP, Perl

For this week’s Module Monday, I’m going to break form a little bit and actually look at three modules. All of these address the same basic problem, which I wrote about yesterday: parsing HTTP messages.

Right after writing the previous post, I discovered (by means of my CPAN Twitter-bot) two other solutions to this problem, both using linked C/C++ code for speed. So let’s have a look at all of them:

  • HTTP::Parser is the first one I discovered, and the one I’ve stepped up to help maintain. It has a pretty straight-forward interface, but requires that the content be passed to it as strings (though it can handle incremental chunks). Unlike the code in HTTP::Daemon that I hope to eventually replace with this, it does not read directly from a socket or any other file-handle-like source. It uses integer return codes to signal when it is finished parsing a message, at which point you can retrieve a ready-to-use object that will be either a HTTP::Request or an HTTP::Response, depending on the message.
  • HTTP::Parser::XS is the one I discovered via the Twitter-bot, and is also the newest of the pack. Tatsuhiko Miyagawa took this and wrote a pure-Perl fallback, then integrated them into Plack (more on the overall Plack progress in this blog post). The interface is a little unusual compared to the more minimal approach of the previous option, in that it stuffs most of the information into environment variables in accordance with the PSGI specification (though in this case it uses a hash-table passed by reference, rather than actual environment variables). That’s great for projects (like Plack) that are specifically built around PSGI, but may not suit more light-weight parsing needs. Also, being very new, the documentation is very spare. It too uses integer return-codes to signal progress, and the codes are very similar in nature to those used by HTTP::Parser (though the meaning of -1 seems to differ).
  • HTTP::HeaderParser::XS is the third of the set, and the one I discovered most recently, as a result of a reference to it in the POD docs of the previous module. This one is over a year old, but seems to have had just the one release. It is based on a C++ state-machine, and also offers only sparse documentation.
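To give a flavor of the first two interfaces side by side, here is a sketch based on my reading of each module’s docs; the return-code details may shift in newer releases:

```perl
use strict;
use warnings;
use HTTP::Parser;
use HTTP::Parser::XS qw(parse_http_request);

my $buf = "GET /index.html HTTP/1.0\r\nHost: example.com\r\n\r\n";

# HTTP::Parser: strings in, a request/response object out
my $parser = HTTP::Parser->new(request => 1);
if ($parser->add($buf) == 0) {       # 0 signals a complete message
    my $req = $parser->object;       # a ready-to-use HTTP::Request
    print $req->uri, "\n";
}

# HTTP::Parser::XS: fills a PSGI-style environment hash
my %env;
my $ret = parse_http_request($buf, \%env);
# $ret >= 0 : bytes consumed; -1 : broken request; -2 : incomplete
print "$env{REQUEST_METHOD} $env{PATH_INFO}\n" if $ret >= 0;
```

The contrast is clear: one hands you an HTTP::Request to work with, the other hands you the raw material for a PSGI environment.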

So, as I move forward with making HTTP::Parser a more generally-useful piece of code, these are my competition and hopefully inspiration. I’d like to see the speed of XS code eventually, but would prefer to make PSGI support an option so that the code is useful in more contexts.

Suggestions always welcome!


Parsing HTTP Headers

September 13th, 2009 | 3 Comments | Posted in CPAN, GitHub, HTTP, Perl

So, I’ve volunteered to co-maintain the HTTP::Parser CPAN module. I did this because I’ve been looking for something I can use in RPC::XML::Server instead of my current approach, which is to rely on the parsing capabilities built into HTTP::Daemon. This is somewhat clumsy, and definitely overkill; I only have to do this in cases where the code is not already running under HTTP::Daemon or Apache. If the code is already using HTTP::Daemon, then it has its own accept() loop it can use, and if the code is running under Apache then the request object has already parsed the headers.

My need arises when the code is in neither of these environments: it has to take the socket it gets from a typical TCP/IP-based accept() and read off the HTTP request. To avoid duplicating code, I trick the socket into thinking that it’s an instance of HTTP::Daemon::ClientConn, which is itself just a GLOB that’s been blessed into that namespace for the sake of calling methods. It works. But it makes the code dependent on having HTTP::Daemon loaded, even when the user is not utilising that class for the daemon functionality of the server. I’ve needed to drop this for a while, now.

(I’m not impugning HTTP::Daemon or the libwww-perl package itself; both are excellent and I utilise them extensively within this module. But if you are not running your RPC server under HTTP::Daemon, then you probably would prefer to not have that code in memory since you aren’t really using it.)

Thing is, you can use the request and response objects without having to load the user-agent or daemon classes. But there isn’t an easy, clean way to use just the header-parsing part of the code by itself. The ClientConn class has a get_request() method that can be instructed to parse only the headers and return the HTTP::Request object without the body filled in. The content of the request can then be read off of the socket/object with sysread(). This is why I use the minor hack that I do.
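Stripped to its essentials, the minor hack looks roughly like this; the real code also has to juggle timeouts, and the `$server` back-pointer is a stand-in for whatever object the surrounding server keeps:

```perl
use strict;
use warnings;
use HTTP::Daemon ();   # only loaded for HTTP::Daemon::ClientConn

# $sock came from an ordinary accept() on a listening TCP socket;
# $server is the enclosing server object (hand-waved here).
bless $sock, 'HTTP::Daemon::ClientConn';
${*$sock}{'httpd_daemon'} = $server;   # some ClientConn methods expect this

my $req = $sock->get_request(1);       # true arg: stop after the headers

# Content already read past the header boundary is in the buffer;
# the rest can be sysread() straight off $sock.
my $early_content = $sock->read_buffer('');
```

It does the job, but every line of it is the dependency on HTTP::Daemon that I want to shed.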

What I want is to be able to do this parsing-out of headers without the ugly hack, without loading all of HTTP::Daemon just to call one subroutine (albeit 200+ lines of subroutine). (And to be fair, I also call the read_buffer() routine after the header has been read, to get any content that was already read but not part of the header.) So I came across HTTP::Parser. It has a lot of promise, but it’s not quite where I need it to be. For one thing, it won’t stop at just parsing the headers. This is something I need, for cases where the user wants to spool larger elements of a message to disk or for handling compressed content. But most of all, it seemed not to be in active maintenance; there were two bugs in RT that had been sitting there, with patches provided, for over a year.

Fortunately, an e-mail to the author let me offer to help out, and he accepted. The code was not in any repository, so I set up a repo on GitHub for it here, and seeded it with the four CPAN releases so that there would be something of a history to fall back on. I’ve applied the patches (well, applied one, and implemented the other with a better solution) and pushed the changes.

Now, I have to decide how to move forward with this, how to make it as efficient (or more so) than the code in HTTP::Daemon, how to make it into something I can use in RPC::XML::Server to eliminate the unsightly hack I have to rely on currently.


Perl Module Monday: Plack

September 7th, 2009 | 1 Comment | Posted in GitHub, Perl, Web Services

This will be a slightly unusual installment of PMM, as I want to look at a module so new that it isn’t actually on CPAN yet, just GitHub: Plack. (When it makes it to CPAN, it should be here.)

Plack is a reference implementation of the burgeoning PSGI initiative. What is PSGI? Well, if you follow that link you’ll get a more complete explanation, but the short form is that it is a Perl alternative to Python’s WSGI (Web Server Gateway Interface) and Ruby’s Rack. The longer form is that it’s a specification layer to decouple web applications from the specifics of how they’re being run, whether that’s CGI, FastCGI, Apache with mod_perl, etc.
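The contract PSGI specifies is disarmingly small. A complete application is one code reference:

```perl
# app.psgi -- about as small as a PSGI application gets
my $app = sub {
    my $env = shift;    # CGI-like environment, passed as a plain hashref
    return [
        200,                                  # HTTP status code
        [ 'Content-Type' => 'text/plain' ],   # headers, as an arrayref
        [ "Hello from PSGI\n" ],              # body, an arrayref of strings
    ];
};
```

Everything about where and how that sub runs (CGI, FastCGI, a standalone server) is the adapter’s problem, not the application’s. That’s the decoupling.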

Back to Plack: Plack is the first reference implementation of the PSGI spec, and already it can pass all of the Catalyst tests. And as of this commit, the plackup script can coerce an app written for Catalyst, CGI, etc. into running under different environments, thanks to the magic of PSGI.

I’ll be watching Plack very closely. I see a PSGI connector for my XML-RPC server in the not-too-distant future.


Muscle Memory, Part 1: The Strain of Repetitiveness

September 3rd, 2009 | No Comments | Posted in Metaprogramming, Perl

Earlier this morning, I worked a bit on my (other) hobby. Specifically, I fired up my airbrush[1] and painted the road wheels for a WWII Soviet tank that I’m working on.

Ask any modeler who builds armor subjects (assuming you know any, other than myself) and odds are that the road wheels are their least-favorite part of the model. They’re numerous, and worst of all, they’re numbingly repetitive. For this model, I had a total of 36 wheels to paint: on each side of the tank there are 12 road wheels in 6 pairs, plus 3 pairs of smaller wheels that act as return-rollers (keeping the tread from sagging too close to the tops of the road wheels) for a total of 18 per side. For some tank designs, the wheels are fairly simple, smooth affairs that are easy to paint. These, however, had a lot of tight corners and angles that I had to work the paint into. To be fair, this is not the worst-case I’ve dealt with; some years back I built a Panzerkampfwagen 35(t), which sports a numbing total of 24 wheels per side. At least those wheels were easier to paint than this morning’s were.

But it got me thinking about repetitive activity, and how it crops up in my coding. Like most dutiful Perl programmers, I use the “strict” and “warnings” pragmas almost religiously. I even set up templates in editors when and where I can, to ensure that these are always present in my modules. (Well, the use of “warnings” is a little more recent, so I still have some older code on CPAN that lacks the pragmata.)

Some would look at my repetitive use of these, and point out the recent addition to CPAN, common::sense. In many ways, this is a useful tool. But it suffers from some drawbacks:

  • It isn’t part of the core, so it would be an additional dependency.
  • It includes features that are specific to 5.10, so if you’re trying to maintain compatibility for older Perls, it isn’t an option.
  • Most of all, it hides too much of what is being done.
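To make that last point concrete, here is what the two spellings look like; my gloss of what common::sense enables is from its docs at the time, and the exact warning set is the author’s choice and has changed between releases:

```perl
# The explicit, self-documenting version:
use strict;
use warnings;

# The shorthand (roughly: strict, a curated subset of warnings,
# and the 5.10 feature bundle -- hence the version constraint):
# use common::sense;
```

Two lines of boilerplate versus one line of abbreviation; the question is whether the saved line is worth the indirection.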

That last point is the most salient to me (that, and the fact that I have modules being used by large-ish projects that are still using 5.6.1). People sometimes talk about “self-documenting” code, code that is very clear in its purpose just from reading it. Truly, a name like “common::sense” is pretty clear. What isn’t as clear is what the author defines as “common sense”, and whether that matches your definition of such. The pragma-module does do its thing fast and with less memory usage than loading the individual parts does. But the user has to ask themselves if their code is clearer and more self-documenting with or without it.

As programmers, we loathe repeating ourselves. We program our editors with cut-and-paste and macro-definition capabilities, just to save a few keystrokes here and there. But we also often find ourselves committing bug-fixes to our repositories with a commit-message that is some variation of “cut/paste error… oops!”

In reasonable, small doses, repeating yourself can be an acceptable thing. Some people in my hobby clean their airbrushes by just running paint thinner through until it comes out clear. But I disassemble and carefully clean mine after every use, even if I plan on immediately loading another color and using it again. A friend in my hobby club back in Denver once said that he does that for the simple reason that the 5 minutes or so that it takes lets him rest his mind and refocus his thoughts on what his next steps are going to be.

When I start a new module or application, putting in the repetitive parts (even if it means only loading a template and making small adjustments) helps me narrow my focus from the project as a whole, down to this one file in particular that I’m about to work on. So, maybe repeating yourself isn’t always a bad thing.

(Edit: This entry is not meant as a critique of common::sense, but rather an argument that repeating oneself is not always a bad thing.)

[1] Before anyone asks: no, I can’t do any custom work for your car or motorcycle. I lack the skill at this point, and my airbrushes are designed for working with model paints. The lacquers one uses for automotive work would be hard on the internal workings.
