Monday, 26 November 2012

Metadata Update #12 : Bibliographic Framework

There has been a lot of talk in recent years about what is going to happen with MARC.  We know that MARC works great for libraries and it has been around for a long time.  We also know that it doesn't actually fit into today's world of computing and the Internet.

For about the last year or so, the Library of Congress has been looking seriously at developing something that will not only replace MARC but go beyond it.  The URL below is for their newly proposed framework, which they call BIBFRAME.  Here is the paper if you'd like to read it:
http://www.loc.gov/marc/transition/pdf/marcld-report-11-21-2012.pdf

Just as with RDA, I think that the cataloguing/metadata world is on the brink of really big, exciting and important changes.  I think that this is another development which is worth following. 

I had a quick read of the paper, but I want to sit down and go through it more carefully - for sure before I pick my ALA Mid-winter sessions.  What I've read so far makes me think that if this is something that could be implemented along with RDA, libraries would truly be revolutionized.

Wednesday, 31 October 2012

Metadata Update #11 : Identifying RDA records

In some of the original training materials we used at the University Library to learn about RDA for copy cataloguing, the instructions were to look for an "i" in the 17th position in MARC leaders.  In looking at records, the cataloguers have been finding out that there is something wrong with these instructions.
   
After doing a little digging around, it turns out that these are the instructions that cataloguers have been given if they want to contribute RDA records to WorldCat:

  • Catalogers may contribute original cataloging using RDA to WorldCat if desired. (040 $e with value rda and Leader/18 (Desc) coded i if ISBD punctuation is used or blank if not).


So, there are a few points:

    1. The leader position is 18, not 17.
    2. We are looking for either an "i" or a blank.  If there is an "a", that would indicate AACR2 coding.
    3. The 040 subfield "e" should be used in concert with the leader coding to identify RDA records.

So, it looks like we are making sure that an "a" is not present in position 18 of the leader and that subfield "e" (with the value rda) is present in the 040.
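
To make the check concrete, here is a minimal sketch of how a script might flag likely RDA records.  It assumes the freely available pymarc library and an invented file name ("export.mrc"); it simply applies the two clues above (Leader/18 and the 040 subfield "e"):

# Sketch only: flag likely RDA records by checking Leader/18 and 040 $e.
# Assumes pymarc and a hypothetical file "export.mrc".
from pymarc import MARCReader

with open('export.mrc', 'rb') as fh:
    for record in MARCReader(fh):
        if record is None:                 # skip anything pymarc could not parse
            continue
        desc = str(record.leader)[18]      # Leader/18: 'a' = AACR2, 'i' = ISBD, ' ' = non-ISBD
        conventions = []
        for f040 in record.get_fields('040'):
            conventions.extend(f040.get_subfields('e'))
        is_rda = 'rda' in conventions and desc != 'a'
        print('RDA' if is_rda else 'not RDA',
              '| Leader/18 =', repr(desc),
              '| 040 $e =', conventions or ['(none)'])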

    Thursday, 20 September 2012

    Metadata Update #10 - Virtual International Authority File

    I'd really like to share the VIAF website address:  http://viaf.org/
    This website and search engine combines authority files from national libraries from all over the world.
    It is the use of authority records and the ability to disambiguate terms and names which makes library catalogues useful to searchers.  It is one of the things that makes libraries stand out from information stored on the WWW in general.  Libraries help users identify the specific "John Smith" they are looking for - an almost impossible task on the web.

    Try the search engine out for yourself.  Type in a surname, a place name and then a name of a company, institution or musical group.  Notice that you will see the flags for the countries where a certain authority record is in use.  If you click the heading, it will take you to the authority record in WorldCat.
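
If you would rather poke at VIAF from a script than a browser, here is a tiny sketch using its AutoSuggest service.  Treat the endpoint and the JSON keys (term, nametype, viafid) as my assumptions rather than documented gospel, and the name searched for is just an example:

# Sketch only: query VIAF's AutoSuggest service for a name.
# The endpoint URL and the JSON keys below are assumptions, not official documentation.
import json
import urllib.parse
import urllib.request

query = urllib.parse.quote('Smith, John')
url = 'https://viaf.org/viaf/AutoSuggest?query=' + query
with urllib.request.urlopen(url) as response:
    data = json.load(response)

for hit in (data.get('result') or [])[:5]:
    # Each hit is expected to carry a heading, an entity type and a VIAF identifier.
    print(hit.get('term'), '|', hit.get('nametype'), '| http://viaf.org/viaf/%s' % hit.get('viafid'))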

    I think that as we continue to think more about FRBR and FRAD we will hear increased emphasis in related discussions on using authority records and linking our bibliographic records with authorities.  It makes sense to me.  If this is what libraries are good at, why not make the best possible use of it?

    Wednesday, 8 August 2012

    Google Book Search Project

This morning I was reading an interesting blog post by Karen Coyle about the Google Book Search Project (http://kcoyle.blogspot.ca/2012/07/fair-use-deja-vu.html).  Apparently Google has filed a motion claiming that digitizing a book in order to make it searchable is "fair use", and thus legal in the U.S., as long as proper precautions are taken to ensure that the digitization doesn't essentially make an entire copyrighted book, or substantial sections of it, free to use and download.

One of the arguments made by Google is that full text searching is far superior to searching using library catalogues.  Karen argues that this is essentially "throwing libraries under the bus".  She says that full text searching should be seen as complementary to "standards based metadata", not a superior replacement for it.  I agree with her.  When I used to do a lot of reference work I found that there were times when full text searching was critical.  For example, a patron might come into the library knowing only a line of a poem or song and want to find the full version.  Granger’s poetry index only indexes certain types of poetry in certain ways... for some reason, I couldn’t find a sniff of any poem that begins “I can’t get enoughsky of Lizzie Pitofsky”!  A Google search for a line of text was usually successful.  That is, if the patron got the words and spelling at least more or less correct.  However, I can think of many times when the actual cataloguing of materials helped users do a fairly exhaustive search of the public library's collection for materials about a certain topic, which would never have been possible or reasonable to do with full text searching alone.

There is a term that sounds really geeky, disambiguation, which names a metadata goal that I believe is key to some of the best things that cataloguing in libraries can do for users.  Basically, library catalogues help users to tell the difference between a table that is a piece of furniture and the type that you might use in a chemistry or mathematics class; to know the difference between Sydney, Australia and Sydney, Nova Scotia; or to tell the difference between mechanical and psychological stress.  That, essentially, is what disambiguation is.  I’ve used full text searching in systems such as Dialog where I had to sit down with a piece of paper and figure out, for example, that if I wanted to find information about chemical tables but not the periodic table, I would have to string together some sort of Boolean search which includes a “not” for the term “periodic” and perhaps use some proximity operators so that I wouldn’t end up with every chemistry publication that happened to have a table of contents.  However, in the library catalogue there is a subject heading “chemistry – tables”.  A bit of typing and a few clicks and I have a nice list of resources.  Not so in Dialog.  So, why wouldn’t I just start with the library’s catalogue?  Makes sense to me that I would.
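
Just to make that contrast concrete, here is a toy sketch of the two approaches.  The titles and headings are invented, and this is not real Dialog syntax or a real catalogue - just the logic:

# Toy illustration only: contrast keyword filtering with a subject-heading lookup.
# The titles and headings below are invented for the example.
documents = [
    {'title': 'Handbook of chemical tables', 'subjects': ['Chemistry -- Tables']},
    {'title': 'The periodic table explained', 'subjects': ['Periodic table']},
    {'title': 'Organic chemistry (with table of contents)', 'subjects': ['Chemistry, Organic']},
]

# Full-text style: keyword filtering with a NOT term still catches "table of contents".
keyword_hits = [d['title'] for d in documents
                if 'table' in d['title'].lower() and 'periodic' not in d['title'].lower()]

# Catalogue style: match the controlled subject heading directly.
subject_hits = [d['title'] for d in documents
                if 'Chemistry -- Tables' in d['subjects']]

print('Keyword search:', keyword_hits)   # includes the table-of-contents false positive
print('Subject search:', subject_hits)   # only the genuinely relevant title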

So, in my experience, standards based metadata that is used in library catalogues is generally effective and efficient for most types of information searches.  However, when a person is really looking for a needle in a haystack and the colour and size of that needle is known, full text searching is a real lifesaver.  I do agree that it is most unfortunate that Google has chosen to speak of library catalogues in a way that devalues them in order to make their case.  However, it is interesting that new opportunities for digitizing materials for full text searching could be opening up in the near future.  If you’d like to read the entire filing, the URL is here:  http://www.scribd.com/doc/101229854/Google-Motion-for-Summary-Judgment

    Wednesday, 11 July 2012

    Metadata Update #9 - MARC 21 Standard

For those who weren't at the MARC 21 Standards meeting, this is the link that I recommend for looking up information about MARC tags:
    http://www.loc.gov/marc/bibliographic/

    That's not all folks.....

    As promised, here is OCLC’s version of the MARC standard:


I don’t tend to use this one that much because it is generally less up to date and I find it a little harder to navigate.  However, it does have some strengths, so it may be worth having a look at it sometime.  It gives a slightly different explanation or definition of each MARC tag, which helps clarify why a tag might be used.  A very useful feature is that this website lists some of the odd local or OCLC-specific tags which are not part of the “official” standard; an example is the 099 tag, which can be found in this version of the standard.  In short, the OCLC version of the MARC standard is much more complete, but it is not my first place to look because it is harder to tell which tags are actually part of the international standard and which are local, and because it is less up to date.  However, because of the richness of the information it contains, it is a valuable resource and is often the second place I look when seeking information about MARC tags.

    Tuesday, 3 July 2012

    Libraries, Patrons and eBooks - report of a study

    This is an interesting report of a Pew Internet study which was done in Nov -Dec 2011 and funded by the Gates Foundation:
    http://libraries.pewinternet.org/2012/06/22/libraries-patrons-and-e-books/
As expected with something funded by the Gates Foundation, it focuses on U.S. public libraries, but I think that it's still of interest to those working in academic libraries.  From my point of view, the key findings are that Americans value and use the library and like eBooks, but often either can't find or can't use the eBooks that the library offers.  Unlike paper books that sit on a shelf, a library user who doesn't know how to search or where to begin searching will have a hard time locating eBooks which appear readily accessible to library workers.  In addition, as long as a print book is written in a legible typeface on reasonable quality paper, most readers do not require any special technology to just pull it off a shelf and start reading.  Not so with eBooks.  Given the lack of standardisation and interoperability in eBook readers and content platforms, in addition to financial issues which the study did not address, there appears to be a whole new set of barriers to access to information and to literacy in general.
    Personally speaking, I also think that the study was a little shallow and/or that it oversimplifies the "eBooks in libraries" issue.  However, given the changing, "wild west" nature of eBooks, I think that it will be a long time before the sort of detailed, thoughtful analysis I would like to see will be possible.  In the meantime, I try to read everything I get my hands on about eBooks in libraries!

    Tuesday, 19 June 2012

    Metadata Update #8 - The Cataloguing Calculator

    I've promised to share information about commonly used cataloguing tools.  This is the first of several posts that I am going to make on this topic.  The post includes a series of brief exercises to help you explore the different functions and information that can be accessed through the calculator. 
    This link is for the cataloguing calculator.  It is found in the Cataloguer’s Desktop but it is also freely available on the web from any internet connection. It’s a tool that I used in my cataloguing class and it might be useful to have for those times when you want to quickly cook up a Cutter Number or country code and you don’t want to pull out your cheat sheets:

Here are a few things that you can do to get oriented to using the calculator:

1. Suppose you want to cook up a Cutter Number.  Click the LC Cutter search option, then start typing the last name, title, etc. (except location) you are Cuttering for, and notice the amazing Cutter Number appear at the top of the screen.  The more letters you type, the longer the number it will calculate.  So, just type in as many letters as you need.  Of course, you’ll need to tweak some of the numbers, but at least you’ll have a starting point.
2. Suppose you need to Cutter for a location, or maybe you just wonder about Cuttering for a location: type Saskatchewan or Alberta or China, etc. and click the “Geog. Cutter” button.  You’ll see the Cutter you should use for that location.  Another neat thing is that there are also cross references for some locations.  I use this quite a bit when the instructions in Classification Web give a Cutter range for a geographic location.
3. Speaking of China, while “China” is still in the “find it” box, try clicking the Geog. Area Codes button and you will see a list of both cross references and the various acceptable geographic area codes in use for China.  Try the same for Canada.  Likely you’ll recognize more than a few of those codes.  I don’t know if I’ve seen n-cnp used very much!
4. Now try searching Canada under the “country codes” button (what a surprise!  I think that we have a lot of discontinued codes in the catalogue!).  Try other countries too.
5. Next, click the “language codes” button and search for English.  We know “eng”, but what about the other versions of English?  I don’t think that I’ve ever seen them.  Try some other languages.  You may notice that a lot of language codes have been discontinued.
6. Next, click “AACR2 abbreviations”.  While we are moving to RDA, keep in mind that we will still need to use AACR2 sometimes, so this may continue to be a useful tool - especially as we use AACR2 less often.  Try typing in some common descriptive terms such as “folio” or even just “fol”, “page”, or “illust”.
7. Next, click on “MARC Var. Fields”.  This one can be very useful!  Suppose that you want to look up uniform titles.  Type “uniform” into the “find it” box and see what you get.  Notice all of the different options for uniform title.  Maybe you have never seen a 630 uniform title and want to read up on it.  Click on that line and a window will pop up where you can read OCLC’s technical information about that field.  I actually prefer LC’s MARC standard website, but this one gives you the basic information you need to know.  In my next message I’ll introduce you to the LC MARC standard website in case you haven’t used it before.
8. Now have a look at the field codes along the bottom of the screen.  By default, this section displays the fixed field codes that are used for books.  You may not need to use or look up these codes very often, but maybe you are curious about what they mean.  If you click, for example, “Ills:”, you will see that this field is for recording illustration information, and the window shows you all the details about the information this field can contain.  You can try searching for other types of materials using the drop down menu labeled “enter fixed field below”.  Select “musical recordings”.  Notice that the selection of field codes changes.  Click the “comp” code.  Notice all of the different musical types that can be encoded.  If you are disoriented by these fixed field codes, keep in mind that they are codes typically used in fields such as the Leader, 006 or 008.  Can you imagine what sort of amazing records and searching ability we would have if this aspect of MARC records were used to its full potential?
Playing with the Cataloguing Calculator and learning more about how it might be useful reminded me of just how much information can be encoded in a MARC record and all of the different ways that the same information can be expressed.  Of course, our ILS is not programmed and/or configured to read and use all of the information or metadata that can possibly be encoded.  Even so, with the introduction of new types of software that use our records, such as USearch, we keep finding out about new and interesting ways in which the existing content of MARC records can be used.  We have also been learning that if we don’t understand how the encoding of MARC records works and make poor choices about what is or isn't important, the new software may not work very well.  Programmers generally assume that the MARC standard (or another relevant standard) has been applied and followed.  Searching, displaying information and recognizing duplicate records are examples of the types of things that may not work well in the new systems if there are problems with the underlying metadata.  So, taking the opportunities to learn about the standards we use, even if it is just a little snippet here and there, should be helpful in the long run.  I must be in the right job because I find this stuff fascinating.
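
To make item 8 above a little more concrete, here is a minimal sketch of pulling a couple of those book fixed-field values straight out of the 008.  It assumes the freely available pymarc library and an invented file name ("books.mrc"); the positions follow MARC 21 for books:

# Sketch only: read a couple of book fixed-field codes straight out of the 008.
# Assumes pymarc and a hypothetical file "books.mrc".
from pymarc import MARCReader

with open('books.mrc', 'rb') as fh:
    for record in MARCReader(fh):
        if record is None:
            continue
        fields_008 = record.get_fields('008')
        if not fields_008 or len(fields_008[0].data) < 38:
            continue
        data = fields_008[0].data
        illustrations = data[18:22]   # 008/18-21: illustration codes (the "Ills:" box, for books)
        language = data[35:38]        # 008/35-37: language code, e.g. "eng"
        print('Ills:', repr(illustrations), '| Lang:', language)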

    Tuesday, 15 May 2012

    Getting back on track - Thinking about leadership

    Taking three weeks off for vacation seemed to result in my getting behind in everything.  So, I will try to get back into doing my regular posts again.

My brain isn't in metadata mode at the moment, so I don't have a metadata update to post.  I do, however, have an interesting blog post from Ken Haycock about leadership in libraries and the eight core beliefs of extraordinary bosses:
    http://www.kenhaycock.com/index.php/Ken-s-Blog/Entry/8-core-beliefs-of-extraordinary-bosses.html

    I found this blog post to be very good food for thought.

    Wednesday, 28 March 2012

    Metadata Update #7 - RDA clarity

I finally decided to stop getting the AUTOCAT listserv for the time being and went to the digest version of the OCLC listserv.  Wow!  This is so much better.  I got the OCLC digest this morning and all of the complaining and carrying on is essentially filtered out.  What I would like to pass along is a little bit of clarity about Day One for RDA from OCLC.  This is a direct quote from the head of the Metadata Services Department and chair of the Program for Cooperative Cataloguing (PCC), Linda Barnhart, about what Day One "means":

    This means that for authority records:

    ·Saturday, March 30, 2013 is the last day that new AACR2 authority
    records will be permitted in the LC/NACO Authority File.

    ·Beginning Sunday, March 31, 2013, all new authority records entering
    the LC/NACO Authority File must be coded RDA.

    This means that for bibliographic records:

    ·Beginning Sunday, March 31, 2013, all access points on bibliographic
    records coded "pcc" must be RDA, even if the bibliographic description
    follows AACR2.

    ·There is no set date for PCC institutions to begin contributing RDA
    bibliographic records.  PCC continues to believe that institutions can
    set their own timetable for this transition.

    More information on NACO RDA training and record review will be coming
    soon on PCCLIST.  As soon as catalogers are trained on NACO RDA
    authority work, they may begin contributing those records to the LC/NACO
    Authority File, even before March 31, 2013.  NACO training will be
    general training, and will focus on the differences between AACR2 and
    RDA heading and reference construction, and on the new fields that can
    be added to authority records.  It will not cover specialized areas,
    such as music, law, series, and complicated uniform titles such as the
    Bible, Koran, etc.  Some NACO Funnel Coordinators may wish to plan
    specialized training sessions in these areas to assure satisfactory
    understanding and record review prior to the March 31, 2013
    implementation date.

    Information from the Acceptable Headings Implementation Task Group will
    also continue to be posted on PCCLIST; this group is overseeing the
    changes to the authority file in preparation for RDA, including the
    marking of all headings that are currently not acceptable under RDA.
    Their website
    provides a wealth of information.  The PCC Day One FAQ
    has been updated to reflect this new information.

    Yay Linda!  There is so much confusion about this on the listservs.  It is basically what I remembered hearing at ALA but as soon as the discussions started up, I started to doubt myself.

So, what does this mean?  I think that it means two things.  The first is that any library that works with an authority vendor needs to talk to that vendor and see what their plans are.  The second is that libraries need to focus on learning RDA as it applies to the access points.  Yes, we need to learn all of the background and theory.  We need to understand RDA as a whole.  But, I think that it also means that in our practice, we can direct our energies towards learning how to apply RDA to our access points first.  Details of description can come later.  And, yes, it is OK to mix RDA and AACR2 coding in records as long as the access points are RDA.  This makes sense.

    Tuesday, 20 March 2012

    Metadata Update #6 - The Current Buzz

    There's been so much buzz on the listservs for the last two to three weeks that it has been hard for me to keep up with it all and for me to find something that might be useful to share with everyone.

So, why don't I talk a bit about what the buzz has all been about?  It seems that the announcement of Day One for the implementation of RDA as March 31st, 2013 has started a lot of controversy.  It is almost as though people thought that RDA would go away, and it doesn't appear to be going anywhere.  Now there is a lot of "freaking out".  I haven't been able to read all of what people have written on the issue.  In fact, I think that I might have read about a tenth of it, but I feel that it's enough for me to understand what all the buzz is about.  Here is my summary:

    1)  A lot of people don't like RDA for a variety of reasons and don't want to have to learn or use it.
    2)  There seems to be a lot of confusion about what RDA implementation means.
    3)  Many people distrust LC, OCLC, and cataloguing vendors.  They feel that RDA and the RDA Toolkit are somehow tools for taking the power and ability to catalogue out of the hands of the library.  They don't like how the RDA test was conducted and they don't like it that a small group of people are pushing their agenda (whatever that is) in the process of promoting RDA.
    4)  There is a sense of paranoia in terms of RDA signaling the end of cataloging positions and traditional libraries.

    Just to get a sense of the discussion, here's a little snippet from one discussion:

    I do not think that ???? gets what the "nightmare" is around him. Just take a look at unemployment and lack of jobs.

    Our nightmare is that in 2016 there might not be any catalogers around as we will outsource everything and accept publishers data as it is. There will be only small maintenance units in libraries to clean up tape loads that are coming in. On the other hand, our catalogs will have pictures and lots of links to Amazon, Google and others thus becoming a giant advertising source for these businesses. 

    I am not going to venture if the libraries themselves will still be around not to mention OCLC, which might be replaced by a Google bibliographical network that harvests bibliographical data from all possible sources meshing it together into a multimedia show for our entertainment.

    Hmm, maybe it is not a nightmare but the bright future I have been hearing from the RDA proponents. 

So, what is this all about?  I don't really know.  But, I can offer my humble opinion.  Having recently finished my MLIS through a U.S. university and also having been at ALA in January, one thing that I am hearing is that there is a lot of stress at libraries in the U.S. right now.  The economy has been bad and it is affecting libraries.  Their services are in demand, but they can't afford to run their libraries.  At the King Library in San Jose, librarians have been forced to take "mandatory furloughs", which basically means that their hours and pay have been cut back by one to two days every two weeks.  However, there are much worse situations elsewhere in San Jose, California and the U.S. in general.  Librarians are being laid off and entire branches are being closed.  This creates a lot of real stress.  So, it's no wonder that it comes out on the listservs.

    The second thing is that AACR2 has been in use for a very long time and there are many cataloguers who are very good at using it.  These cataloguers are also in the last few years of their career.  They have seen a lot of change and really understand the mistakes of the past.  There's no question that the road to developing and implementing RDA has been awkward and bumpy.  It's not surprising if cataloguers who have already been through all the bumps before aren't exactly jumping up and down to think that they'll have to end their careers with another bunch of bumps.  Actually, some of them seem hopping mad, if anything.  What's even worse is the sense that I'm reading in the listservs that the current group working on RDA is not listening to "lessons from the past".  I can understand that frustration.  Yet, it seems to be part of human nature that we, at least in part, need to work through things for ourselves and learn from our own mistakes.  In the meantime the older and more experienced stand on the sidelines and shake their heads and wonder if the human race will ever learn.

    The third thing is that people are making some very good points about how the entire library world is not on the same page.  ILS vendors may not be able to support some things that RDA is trying to do.  There is a lot of legacy metadata that folks aren't quite sure what to do with.  LC and OCLC are sometimes sending different messages.  None of this is very surprising either.  RDA is new and there is bound to be a lot of confusion.  It will take a while for it to become mature and for everyone to come to the same level of understanding and agreement that currently exists with AACR2. 

So, have I just brushed off all of the worry and complaining?  I hope not.  I just wanted to put it in perspective.  I think that people are making some good points.  However, there also is a lot of negativity and panic.  I'm not sure that the negative energy is helping me personally!  And, taking the time to sort through it all is certainly eating into my productivity.  I think that I might drop out of these listservs for a while.  Folks need a place to vent, but it's taking up too much of my time.  I'm still going to work on the metadata updates and hope that people are finding them useful.  I am so far behind in my 23Things assignments, I’m not sure how I’m going to catch up.  It might be a project for this weekend.

    Tuesday, 13 March 2012

    Metadata Update #5 - "Legacy"

So, as libraries move toward RDA implementation, there seems to be a lot of talk about the AACR2/MARC records already in our catalogues.  What do we do with them?  When do we leave them alone and when do we convert them to RDA?  One of the interesting words that I often hear people use to refer to our existing metadata is "legacy".  Those old AACR2/MARC records are our "legacy" metadata.  It's an interesting term.  It makes it sound like the existing records in our catalogues are the "inheritance" that the next generation of library workers will receive from the cataloguing community.  I like the term.  Rather than saying that MARC and AACR2-based records are bad and need to be replaced, it suggests that they are something of value to be passed down through time.  Given the number of "legacy" records in existence, I suspect that even though they may be worked over and remade into new formats from time to time, the core of those original records will be the cataloguing legacy of the work done by cataloguers over roughly the last 40 years.

    Wednesday, 7 March 2012

    Metadata Update #4: FRBR

    Hi Everyone!
So, how do we get ready for RDA?  Well, some of the expert trainers recommend starting out by learning about FRBR.  Here's a link to a video on the topic.  It's a few years old, but hopefully it will help to get you thinking about what FRBR is and why it is useful in libraries.
    http://www.loc.gov/today/cyberlc/feature_wdesc.php?rec=4554

If you can't watch the video, try this slide show (there are notes at the bottom of the screen if you view it in "normal" mode):  http://www.loc.gov/catworkshop/RDA%20training%20materials/FRBR%20Overview%20and%20Application_Module%201_CLW.ppt
This presentation is also newer than the video.
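
If it helps to have the skeleton of the model in mind before watching, the heart of FRBR's Group 1 is the work / expression / manifestation / item chain.  Here is a very rough sketch in code - my own simplification for illustration, not anything taken from the video or the slides:

# Rough sketch of FRBR Group 1 entities (work -> expression -> manifestation -> item).
# My own simplification for illustration only; the example data is invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:                  # a single copy on a shelf (or a single file)
    barcode: str

@dataclass
class Manifestation:         # a particular published form (an edition, a format)
    description: str
    items: List[Item] = field(default_factory=list)

@dataclass
class Expression:            # a particular realization (a translation, a performance)
    language: str
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:                  # the abstract intellectual or artistic creation
    title: str
    expressions: List[Expression] = field(default_factory=list)

hamlet = Work('Hamlet', [
    Expression('English', [
        Manifestation('2003 paperback edition', [Item('39085001234567')]),
    ]),
])
print(hamlet.title, '->', hamlet.expressions[0].manifestations[0].description)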

    Thursday, 1 March 2012

    Metadata Update #3 - RDA Day One Announced!

The Library of Congress announced a couple of days ago that Day One for the implementation of RDA will be March 31, 2013.  Their press release says that they hope Library and Archives Canada will adopt the same target date.
    If you want to read more about the plan that LC has for training the remainder of their cataloguers, the information is available in this document:  http://www.loc.gov/aba/rda/pdf/RDA_Long-Range_Training_Plan.pdf

    Monday, 27 February 2012

    Prezi

    I don't know if I'll have time for a Metadata update this week.  However, I did the exercise for 23 Things this morning on Prezi.  It's an interesting tool.  In a few minutes I made this presentation:  http://prezi.com/y5on3l9in7go/using-music-in-informational-and-instructional-videos-and-presentations/
    I think that I like that it is so quick and easy to make a presentation.  However, I usually use other software that has more features and offers me more control so I found some things about Prezi to be a little frustrating.  While a person can sit down and use it almost right away, it is often difficult to achieve the precise results I want.  I found that as I was working with it I just gave up trying to get certain effects.  Instead I just did the sorts of things that the application is well-suited to doing.  This isn't necessarily a bad thing if all I want to do is bang out a nice-looking presentation quickly.  However, if I am trying to design specific things for specific reasons, I think that it would be frustrating and ineffective to use.  It is better to spend the time learning the more complex software in that case.

    Tuesday, 21 February 2012

    Fun Survey

    So, this week I don't have any heavy metadata things to report on.  Instead I have my survey from 23 Things.  It was very easy to set up and I think that it will be useful.  I will share the results with this blog in a week or so.  Please take the survey if you have time:  https://docs.google.com/spreadsheet/embeddedform?formkey=dHkyQUwyVXhmYzJ5VzRYdVdYUUxKWmc6MQ And, you can forward it to others as well.
    Thanks and have fun.

    Monday, 13 February 2012

    Cataloguing and Metadata Update #3 - More Reporting on RDA from ALA Midwinter

    Ok, this post is lazy.  I admit it.  I found a nicely-written blog post that describes two of the sessions that I attended where RDA was discussed.  I'm not sure who Steve is but I think that he captured the content of those sessions well:
    http://cloud.lib.wfu.edu/blog/pd/category/rdafrbr/

    Not to be completely lazy, I was a blogger for ALA Mid Winter as well.  Here is a blog post that I wrote for the Copy Cataloguer's Interest Group that fills in some of the gaps that Steve missed.  You can read it at: http://www.alcts.ala.org/metadatablog/2012/01/copy-cataloging-interest-group-jan-21-2012/

    I hope that you can access my blog post without being a member of the metadata bloggers group!

    Tuesday, 7 February 2012

    Series Authority Records

The question was raised at the cataloguing meeting this morning as to when the creation of authority records for series titles was discontinued by the Library of Congress.  Time flies....  it was June 1st, 2006.  Here is a link to LC's policy about what they do with series titles:  http://www.loc.gov/catdir/cpso/series.html .  According to this document, their decision was "not to create/update series authority records and not to provide controlled series access points in its bibliographic records for resources in series."

    This document also says that there is more information about series titles and the 490 tag in the Cataloger's Desktop.  Our subscription has been renewed so we should have access again if you want to read more about it.
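
Just as a refresher on what this looks like in practice, here is a hypothetical pair of examples (the series title and numbering are invented).  A record with a traced series carries both the transcription and a controlled access point:

490 1_ $a Studies in library metadata ; $v no. 5
830 _0 $a Studies in library metadata ; $v no. 5.

whereas LC copy produced under the 2006 decision typically carries only the untraced transcription:

490 0_ $a Studies in library metadata ; $v no. 5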

Cataloguing and Metadata Update #2 - More Genre Stuff

While we don’t catalogue too many films, I just happen to have a documentary DVD on my desk at this moment and it needs subject headings and a call number.  Yesterday I started thinking about subject headings for this DVD and reflected on the idea of separating what the DVD is about from what type of film is presenting that information.  In the process I was reminded of the concepts behind genre/form headings.  Last night, as I was paging through notes from the conference, I read that there was a group working on a policy document on how to apply genre/form headings to different types of films.  Then, as I was wading through my email this morning, a link to that document happened to show up on one of my listservs:  http://olacinc.org/drupal/capc_files/LCGFTbestpractices.pdf.  So, it seems natural that this week’s post should be an extension of the introduction to genre/form headings I wrote in last week’s blog.
I think that I’ll draw on the document I’ve referred to quite extensively, because it contains some excellent examples and makes some good points.  Here’s a direct quote:
    Genre/form headings are intended to describe what a work is, while subject headings describe what a work is about. For example, True Grit starring John Wayne is a western; it would be assigned the genre/form headings Western films and Fiction films. If classified, it could be placed in PN1997.A2-.Z8 (fictional motion pictures produced through 2000). John Wayne—The Duke: Bigger than Life is a nonfiction study of Wayne’s life and work and includes excerpts from many of Wayne’s westerns. It is a biographical documentary about Wayne and the western film genre. It would be assigned the genre/form headings Biographical films; Documentary films; and Nonfiction films along with the subject headings Wayne, John, 1907-1979; Motion picture actors and actresses—United States—Biography; and Western films—United States— History and criticism. It would be classified in PN2287.A-Z (biography of American actors) or PN1995.9.W4 (history and criticism of western films).

    The document is very kind in providing an example of what we might see in MARC records for these movies:
    Title: True grit
    655 _7 $a Western films. $2 lcgft
    655 _7 $a Feature films. $2 lcgft
    655 _7 $a Fiction films. $2 lcgft

    Title: John Wayne—the Duke : bigger than life
    600 10 $a Wayne, John, $d 1907-1979.
    650 _0 $a Motion picture actors and actresses $z United States $v Biography.
    650 _0 $a Western films $z United States $x History and criticism.
    655 _7 $a Biographical films. $2 lcgft
    655 _7 $a Documentary films. $2 lcgft
    655 _7 $a Nonfiction films. $2 lcgft
Again, this is just a brief example of what can be done with genre headings to describe and classify films more precisely for users.  These are the various types of films that LC cataloguers already recognize and use when they apply genre headings:
    Action and adventure films
    Animated films
    Biographical films
    Children’s films
    Comedy films
    Detective and mystery films
    Epic films
    Fantasy films
    Film adaptations
    Historical films
    Horror films
    Musical films
    Romance films
    Science fiction films
    Silent films
    Sports films
    Thrillers (Motion pictures)
    War films
    Western films  
Having worked in reference at the public library for a number of years, I certainly can see how these headings would be useful in that context.  It is not unusual for a patron to come to the desk wanting help selecting a film which either fits into a certain category or is NOT in a certain category.  The immediate application doesn’t seem as apparent in the academic library.  However, consider the following example of how genre headings can be used in very complex ways to describe a film very specifically:
    Title: Family guy. A new hope
    [A television parody of the Star wars motion picture]
    630 _0 $a Star wars (Motion picture) $v Parodies, imitations, etc.
    650 _0 $a Star Wars films $v Parodies, imitations, etc.
    650 _0 $a Science fiction films $v Parodies, imitations, etc.
    655 _7 $a Parody television programs. $2 lcgft
    655 _7 $a Television comedies. $2 lcgft
    655 _7 $a Science fiction television programs. $2 lcgft
    655 _7 $a Animated television programs. $2 lcgft
    655 _7 $a Television series. $2 lcgft
655 _7 $a Fiction television programs. $2 lcgft
I can see how this type of catalogue record could help university library patrons ensure that they are borrowing the version of the film that they actually require, e.g. that they are getting a recording of the play Romeo and Juliet, not a parody, not a musical, not a comedy, and not a movie about a couple of hamsters, etc.  While LCSH’s free-floating subdivisions have always allowed for some description of the type of material, I think that genre/form terms will be more accessible, or easier to understand, for today’s diverse academic library users.  Considering that many library users are from divergent cultures and speak English as a second language, the genre/form terms are much clearer and easier to understand than the terminology found in LCSH.  For example, looking at the 630 in the example above, we can see that the term “motion picture” is confusing because it is referring to a television program.  In the 650s we see the subdivision “$v Parodies, imitations, etc.”.  That’s not very precise.  We know that it is not the original story, but we don’t know what form the interpretation takes.  In the genre headings we can glean that it is an animated television program which did a comedy parody of a science fiction television program (ok, not quite right but closer than where the LC subject headings take us).
    So, what about that DVD I’m working on at the moment?  I won’t be adding any genre/form headings.  We’re not there yet.  But, I certainly am thinking about them and considering how they will change both the description and access of those notoriously confusing [videorecording]s!
    What’s up for next week?  I’m not sure yet.  I’m reading through my notes on FRBR and FRAD.  These are very interesting and worth exploring but it might take me a while to put something together. 

    Friday, 3 February 2012

    Cataloguing and Metadata Update #1 - Genre Headings

    Maybe you've noticed some new coding in MARC records in the last few months that you don't recognize.  It looks something like this: 
    655 #7 $a Road maps. $2 lcgft
    or
    651 #0 $a Alamo Reservoir $v Maps.
    655 #7 $a Bathymetric maps. $2 lcgft

    So, what is this all about?  Why is the second indicator a 7 and does it mean that the $2 or |2 is some sort of local coding from another system that just needs to be stripped out?  Until last week, I was stripping these out thinking that it wasn't anything that we would want in our records.

Woah nelly.... even though our OPAC won't do anything with this type of 6XX field, don't strip it.  It's a new set of terms developed by LC, called "Genre/Form Terms", that extends MARC practice and is gradually being adopted.  Here is a link to a useful FAQ that explains all about them:  http://www.loc.gov/catdir/cpso/genre_form_faq.pdf.

In short, genre/form terms have been developed to allow cataloguers to describe in a 6xx field what type of resource an item is.  It is a controlled vocabulary which allows cataloguers to describe very specifically what type of material the record represents, and the vocabulary goes way beyond the limited GMDs.  No, it's not the same as the GMD, and we're not getting rid of the GMD yet (baby steps toward that one); it's just one step toward making records more useful to users who are looking for particular types of materials.  Really curious and want to search around the vocabulary?  Here's a link to a place where you can have a look:  http://id.loc.gov/authorities/genreForms.html

So why add these "Genre/form terms" to our MARC records if our catalogues don't do anything with them?

1)  There is a new generation of ILSs ("ILS" being the generic term for systems like Millennium), and they will be designed to make use of these 6xx fields.
    2)  Cataloguers are no longer restricted to describing what an item is "about" in 6xx fields which gives them more options and flexibility in terms of making items findable.  This responds to what users need and want.
3)  Free-floating subdivisions don't work very well in some information retrieval systems - especially ones that use faceted searching, such as Primo.  Genre terms replace many of those subdivisions.
4) The existing GMDs are too limited for the range of materials in existence today and are also limited to the physical format of the item (not only does the GMD not differentiate between videos and DVDs, but it does not differentiate between an instructional DVD and a feature film DVD, which is one of the distinctions that genre/form terms are intended to make).
    5) A bunch of other reasons that were discussed at the conference and I either forget or don't understand yet.

So, what does this mean for us?  Nothing much.  When we see records with this type of coding in them, we'll now understand what it is, and we also know not to strip it out.  It will be useful to us some day.  We don't have to learn or apply it yet.  But I thought that it was good to at least know what is going on with them.
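
For anyone who has scripts that touch incoming records, here is a minimal sketch of how you might list these fields (rather than strip them), assuming the freely available pymarc library and an invented file name ("incoming.mrc"):

# Sketch only: list genre/form terms instead of stripping them.
# Assumes pymarc and a hypothetical file "incoming.mrc".
from pymarc import MARCReader

with open('incoming.mrc', 'rb') as fh:
    for record in MARCReader(fh):
        if record is None:
            continue
        for field in record.get_fields('655'):
            # Genre/form terms use second indicator 7 with $2 naming the vocabulary (lcgft).
            if field.indicators[1] == '7' and 'lcgft' in field.get_subfields('2'):
                print('Genre/form term:', '; '.join(field.get_subfields('a')))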

    Monday, 30 January 2012

    Trying Blogger

    I really appreciate that this is much easier to use than WordPress....