“TERRA: The Nature of our World” gets South by Southwest (SXSW) Nomination

I’ve mentioned the TERRA group project in previous posts. Earlier this month, I received some great news regarding the project: “TERRA: The Nature of our World” was nominated in the student/university website category of the SXSW Interactive Web Awards.

TERRA is a partnership between Montana PBS, the Media/Theatre Arts Department at Montana State University, the Montana State University Libraries, and various independent filmmakers. The MSU Libraries were brought in to build and code the site and to handle content management (metadata, data preservation architecture…). The site was designed with a nod to the future of digital libraries. It’s a digital video library with commenting, ratings, tags AND a controlled vocabulary. And it’s all wrapped up with some AJAX functionality and a Dublin Core/OAI metadata backend. The TERRA group also experimented with syndicating our content as podcasts. You can actually search iTunes for TERRA and receive our podcasts. That’s powerful stuff, and the reach of the site has been amazing. Just last week, TERRA podcasts were placed as default content in the download for Democracy Player, effectively doubling the TERRA audience in one fell swoop. More and more, I see leveraging these types of communities as the future of library content distribution.
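For a sense of what that Dublin Core/OAI backend stores, here’s a sketch of an unqualified Dublin Core record of the kind an OAI-PMH repository would expose. (The element values are invented for illustration; only the namespaces are real.)

```xml
<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Sample TERRA episode record</dc:title>
  <dc:creator>A hypothetical filmmaker</dc:creator>
  <dc:type>MovingImage</dc:type>
  <dc:format>video/quicktime</dc:format>
  <dc:date>2006-11-01</dc:date>
  <dc:subject>nature; documentary</dc:subject>
</oai_dc:dc>
```

Keeping the records this simple is what makes harvesting and syndication (RSS, podcasts) relatively painless.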

As for the nomination, I am honored and just a little surprised. SXSW is the center of the web geek world, and to even be considered is quite humbling. We’ve gotten some university press lately, but I didn’t see it going much further. (Check the MSU news release for complete details.) So, I’m heading to Austin, TX with a colleague in March. I’m excited to step off the library circuit and see how the other half lives. Stay tuned for SXSW updates.

And if the mood should strike you, vote for “TERRA: The Nature of our World” in the People’s Choice Award race at https://secure.sxsw.com/peoples_choice/.

Web 2.0 = Library 2.0: Offline 2007 (MT)

I had the opportunity to speak about Library 2.0 for the Montana Library Association yesterday. It’s a really great group. Engaged, interested, friendly… You get the picture. Greg Notess spoke about screencasting and made it look really easy. He was pointing to the ease of distribution for the video content he creates with sites like Wink and YouTube. I surveyed some web 2.0/library 2.0 web sites and made the argument that web 2.0 is not about a single application, but rather a shift in what people can expect from web applications. The medium of the web is changing, and the idea of the network is informing what the web can be. (Slides are available at my slideshare space.)

I also got to see a few demos of Protopage and Google Page Creator. I watched as several web sites were created and made live to the world during a session. That’s pretty useful web 2.0 stuff for some Montana libraries without access to a server. Overall, it was a fun experience, and it was exciting to see the group embrace what web 2.0 principles can do for library web apps.

UPDATE: Suzanne Reymer, the Montana State Libraries Statewide Technology Librarian, was one of the presenters on protopage and Google Page Creator and she has set up a new blog for Montana Library Association (MLA) 2007 at http://mla-conference.blogspot.com/.  I mentioned that the structured data behind blogs (think RSS) is the greatest innovation afforded by blog software.  I stand by that statement.  It’s given me hope for the semantic web.  But… blogs can also be useful in recording conference and event information.  Suzanne already knows this and she’s on the case.

S3 (Simple Storage Service) – Amazon and Libraries

Have you heard of Amazon’s S3 (Simple Storage Service)? From the site:

Amazon S3 is “storage for the Internet” with a simple Web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the Web.

It’s one of Amazon’s newer web services. At $0.15 per gigabyte of storage, it’s a pretty cheap option. Caveat emptor: S3 is intended for developers as a storage option that can be queried with SOAP and REST web services, so they also get you for network traffic at $0.25 per gigabyte. I wasn’t able to find anything in the fine print about checksum routines and the integrity of the objects, but I’m assuming backups and error checking are part of the Amazon routine. (Update from the horse’s mouth: found this thread in the forums which talks about Amazon’s data protection routines. It’s reassuring…)
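To give a flavor of the REST side: every S3 request carries an Authorization header built by signing a canonical “string to sign” with your secret key (HMAC-SHA1, base64-encoded). A minimal sketch in Python — the bucket, key, and credential values are invented, and real requests would also need Content-MD5/Content-Type lines in the string when those headers are sent:

```python
import base64
import hashlib
import hmac

def s3_auth_header(access_key, secret_key, verb, bucket, key, date):
    # Canonical string for the original S3 REST auth scheme:
    # VERB \n Content-MD5 \n Content-Type \n Date \n /bucket/key
    # (the two empty lines stand in for Content-MD5 and Content-Type)
    string_to_sign = "%s\n\n\n%s\n/%s/%s" % (verb, date, bucket, key)
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return "AWS %s:%s" % (access_key, signature)

# Hypothetical values for illustration only
header = s3_auth_header("AKIDEXAMPLE", "my-secret-key", "GET",
                        "terra-masters", "ep101.mov",
                        "Tue, 27 Mar 2007 19:36:42 +0000")
```

You’d send that value as the Authorization header on the HTTP request, along with the same Date header used in the signature.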

Can the library use this? I think so. Even with the mentioned caveats, in the end you are looking at taking the server management side out of the equation. That’s pretty liberating for the small digital shops that our libraries are. At work, we’re experimenting with using the service to store some of our master digitization objects. I mentioned that this was an experiment, right? We’ve got some objects on the S3 servers and are looking into building a web interface that will allow our Special Collections staff to pull down master files when they receive requests from patrons. We’re also working with a campus entity to store media files on S3 and then building a search interface to query S3 for the data. It’s all a work in progress, but something to consider. I can tell you that my library and university will never have the infrastructure or access to a network cloud like Amazon’s. That’s not a knock; them’s just the facts.
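On the integrity question: you can at least spot-check from your own side. S3 returns an MD5-based ETag for objects uploaded in a single PUT, so comparing it against a locally computed digest of the master file catches corruption in transit. A small sketch (the helper name is mine; it streams the file so large digitization masters don’t have to fit in memory):

```python
import hashlib

def local_md5(path, chunk_size=1 << 20):
    # Read the file in 1 MB chunks and accumulate an MD5 digest.
    md5 = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()
```

Compare the hex string against the ETag S3 reports (stripped of its surrounding quotes) before deleting any local copy.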

(Sidebar: If you’re interested in web services, think about browsing around the Amazon Web Services Developer Connection. Lots of code examples, “howtos” and discussion to get you thinking about web service applications. Don’t be afraid to get your hands dirty and make some mistakes. It’s the only way to learn.)

Gettin’ Edumacated… A Digital Library Curriculum

This post on “Beyond the Job” calling for applications to a digital librarian fellowship program at the University of Iowa SLIS came across my feedreader earlier in the month. I’m starting to see more of this, which is pretty exciting from where I’m sitting. It means some schools are taking steps to train students for digital library work. (Most of the schools have used seed money from an IMLS grant for library education.) Here’s a sample curriculum from the Iowa SLIS website:

Students enrolled in this special Digital Libraries track will take the 9-semester-hour core specified for
the general MA in SLIS degree program.

021:120 Computing Foundations 3 s.h.
021:122 Conceptual Foundations 3 s.h.
021:101 Cultural Foundations 3 s.h.

Students in this track will also take the following 6 semester hours:

021:224 Electronic Publishing 3 s.h.
021:226 Digital Libraries 3 s.h.

Students will also enroll in the following course each semester.

021:239 Topics in Digital Libraries 1 s.h.

Additionally students will choose at least 6 semester hours from the following:

021:123 User Education: Multimedia 3 s.h.
021:242 Search and Discovery 3 s.h.
021:220 Programming for Text Manipulation 3 s.h.
021:124 Database Systems 3 s.h.
021:278 Information Policy 2 s.h.

22C:196 Human Computer Interaction 3 s.h.

The remaining 12 semester hours of course work may be taken from the other courses offered by the School as well as courses selected (with advisor approval) from other departments in the University.

Students are strongly encouraged to take a programming course such as Perl or Java.

I love the emphasis here on programming and relational databases. I use these skills daily in mapping out data structures and metadata crosswalks. It’s also nice to see “electronic publishing” get some face time. I’m seeing more of this “library as publisher” direction in my job and an epublishing course could really help out. The only piece I might add would be a digital library practical component – some internships in a local electronic publishing company, a semester practicum with the local digital content group. Give the students an opportunity to show their stuff in a real world setting. I’m sure this could be built into the course of study with those open 12 credits.

I’m also just a bit envious… I remember cobbling together bits and pieces of classes and work experiences that were going to help place me in a digital library shop after graduation. For the most part it worked… I worked for the UWDCC at the University of Wisconsin doing mostly grunt work – scanning, prepping for scanning, entering metadata, a bit of interface design – and it was this experience that really gave me a fuller picture of digital library work. Perl and PHP programming were mostly learned on the job at my “fellowship” with the University of Wisconsin Division of Information Technology, working in their corporate library and on the main web site for the communications team. Seems like this “cobbling” won’t be the reality anymore (not that there’s anything wrong with that). I like the move to standardize a digital library curriculum. Here’s another program moving in the same direction, and a recent article about a digital library curriculum from D-Lib Magazine:

University of North Carolina SILS – http://sils.unc.edu/news/releases/2006/01_digitalcurriculum.htm

The Core: Digital Library Education in Library and Information Science Programs
Jeffrey Pomerantz, Sanghee Oh, Seungwon Yang, Edward A. Fox and Barbara M. Wildemuth
D-Lib Magazine, November 2006, Volume 12, Number 11, ISSN 1082-9873

Just a little food for thought during the holiday season. Best wishes to all.

Anatomy of a Function

It’s been a little while since my last post. My recent work schedule and Turkey Day played a part in that. I’ve been working .com hours on a super cool project. (I mentioned the TERRA group in an earlier post.) I’ve learned so much in the last couple of weeks. It’s amazing what a hard deadline and a shifting set of requirements will do to your web programming skills. All the hard work is about to bear fruit as the new TERRA web site is about to go live. Have a look at a different kind of digital library.

But that’s not the point of this post… I wanted to share a little code that made some data conversion very simple over the course of the TERRA project. It’s a simple little PHP function that converts a MySQL timestamp into an RFC 822 date format. (For the project, we stored the item update fields as timestamps and then converted them when we generated our various XML feeds; an RFC 822 or RFC 2822 date is required for a valid feed.) Here’s the PHP function in all its glory:

//function converts a mysql timestamp into an rfc 822 date
function dateConvertTimestamp($mysqlDate) {
    //strtotime() parses the MySQL datetime string into a Unix timestamp
    $rawdate = strtotime($mysqlDate);
    if ($rawdate === false || $rawdate == -1) {
        $convertedDate = 'conversion failed';
    } else {
        //'H' gives 24-hour time, as RFC 822 expects
        $convertedDate = date('D, d M Y H:i:s T', $rawdate);
    }
    return $convertedDate;
} //end dateConvertTimestamp

You call the function by including it on the page and using the following code:

$newPubdate = dateConvertTimestamp($stringToConvert);
echo $newPubdate;

Where $stringToConvert would be any MySQL timestamp value that needs conversion.

In the end, a string like "2005-05-17 12:00:00" comes out looking like "Tue, 17 May 2005 12:00:00 EST". You could also reverse the conversion using this PHP function:

//function converts an rfc 822 date into a mysql timestamp
function dateConvert($rssDate) {
    //strtotime() parses the RFC 822 date string into a Unix timestamp
    $rawdate = strtotime($rssDate);
    if ($rawdate === false || $rawdate == -1) {
        $convertedDate = 'conversion failed';
    } else {
        //'H' keeps the 24-hour time MySQL expects
        $convertedDate = date('Y-m-d H:i:s', $rawdate);
    }
    return $convertedDate;
} //end dateConvert

NOTE: If/when you copy and paste the above code, make sure all ” (double quotes) and ‘ (single quotes) are retyped. WordPress is doing a number on the proper format.
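For readers who don’t work in PHP, the same round trip can be sketched in Python’s standard library. One simplifying assumption here: the stored MySQL value is treated as UTC, whereas the PHP version formats in the server’s local time zone.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def mysql_to_rfc822(mysql_ts):
    # Parse a MySQL DATETIME string and format it per RFC 822/2822,
    # assuming the stored value is UTC.
    dt = datetime.strptime(mysql_ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return dt.strftime("%a, %d %b %Y %H:%M:%S %z")

def rfc822_to_mysql(rss_date):
    # parsedate_to_datetime handles RFC 822/2822 dates, time zone included.
    return parsedate_to_datetime(rss_date).strftime("%Y-%m-%d %H:%M:%S")
```

So `mysql_to_rfc822("2005-05-17 12:00:00")` yields `"Tue, 17 May 2005 12:00:00 +0000"`, and feeding that back through `rfc822_to_mysql` returns the original string.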

I just wanted to share the wealth a bit. If you’ve got questions or suggestions, don’t be shy about dropping a comment. I’ll be home in Wisconsin for the next several days, but I’ll have limited internet access there.  I’ll try to answer questions if they arise.

Dueling Ajax – couple of articles

I’ve been a bit of the Ajax poster boy lately. Two pieces that I wrote for library audiences have just been published.

“Building an Ajax (Asynchronous JavaScript and XML) Application from Scratch.” Computers in Libraries 26, no. 10 (November/December 2006).

“Ajax (Asynchronous JavaScript and XML): This Isn’t the Web I’m Used To.” Online 30, no. 6 (November/December 2006).
uri: http://www.infotoday.com/Online/nov06/Clark.shtml

Both articles fall into the “introductory” mode, although the CIL article walks you through a proof-of-concept Ajax page update script (mentioned in an earlier post…). I want to be clear: I’m not an Ajax evangelist. I find the suite of technologies that makes Ajax go intriguing, and the improvements the Ajax approach can bring to some library applications are worth learning about and applying. I tried to point out the good and the bad. (Although it is a four-letter word…)

I did want to mention a couple of books that were really helpful in getting me up to speed with the Ajax method.

Ajax in Action by Dave Crane, Eric Pascarello and Darren James

DHTML Utopia: Modern Web Design Using JavaScript & DOM by Stuart Langridge

(Click on the book covers if you are into book learnin’ and want to browse the Amazon records.) Dig in and discover (or rediscover) some of the possibilities when you put JavaScript to work in your apps.
