Annual Conference 2010, Middelburg The Netherlands
THE CHALLENGING FUTURE OF INDEXING
Roosevelt Academy, University College, Middelburg
29 September - 1 October 2010
Middelburg Conference Session Reports
The Invisible Indexer (Professor John Sutherland, SI President)
Professor Sutherland began by lamenting the difficulties he had encountered in getting articles published on the subject of indexing. His work is rarely spiked, but contributions on indexing are generally deemed ‘not sexy enough’ to get into print.
As an example of the low visibility of indexers, he cited the Oxford Companion to the Book, an exhaustive work of some 1500 pages, which devotes more space to the difference between the booklouse and the bookworm than to the profession of indexing. Henry Wheatley merits a short entry, but does not make it as far as the index. The index itself is excellent, but the indexer (Philip Aslett) is not even listed as a contributor, although he does appear briefly in the list of acknowledgements. Well-known indexers such as Mulvany and Bell are almost unknown outside the indexing world, while very few reviewers mention the index, although they must all have used it while writing their reviews. In some cases the index itself becomes almost invisible, as in the recent biography of the Greene family (Shades of Greene), in which the index is set in a point size so small as to be almost unreadable.
Professor Sutherland then turned to the issue of remuneration (or lack of it). Indexers are not only poorly paid, but they have to supply their own hardware/software, receive no benefits and are solely responsible for funding their own pensions, yet a bad index compromises a book much more than bad proofreading.
Alternatives to professional indexing have their own problems. It is often said that authors know their own works best, but in-depth knowledge can sometimes be a disadvantage when compiling an index, and it is noteworthy that the Wheatley Medal has only twice been given to an author-indexer. Search engines have not made human indexers obsolete, as they are unable to interpret the text in order to formulate entries and they are prone to information overload – the longest index is not necessarily the best index. There is also the question of field specificity: philosophy is concept-heavy, while history is name-heavy, and specialities like law and medicine have an extensive technical vocabulary. All these require indexes with different structures.
Professor Sutherland concluded that indexers need a little more visibility and a lot more cash, a sentiment that was met with warm applause by the audience.
What does research tell us about the way people search information? (Professor Michaël Steehouder, University of Twente)
Benjamin Surkyn's usability approach was his starting point: usable, useful, desirable, valuable, findable, accessible, credible.
Steehouder moved on to consider project examples from research on computer manuals.
He noted seven steps of information seeking: problem awareness, problem definition, choice of medium, locating the relevant information, understanding the information, inferring a solution and evaluating a solution. Of these he identified problem definition and locating the relevant information as the most relevant for indexers.
As a brief exercise we were asked to give the search terms we would use for the dots above the "e" in Michaël [a nicely topical exercise for St. Michael's day]. Accents? Umlauts? Diacriticals? Punctuation marks? Were our terms our users' terms?
Analysis of helpdesk calls shows that after a bit of narrative, a helpful agent would start asking questions. Time for the 'nyms': synonyms, hyponyms and hypernyms.
Useful advice was that most users study a problem, not its solution.
Keyword clouds, user tagging, and automated problem recognition from natural language are all proving helpful to some users.
There were nods of agreement when he said indexes to many computer manuals are not helpful.
The placing of page numbers in an index may matter. Is indented, or run-on layout easier for the user? Do dots across lines from entry to page number (possibly in a manual) aid users? He gave some interesting examples of research from the 1990s.
Research shows that the way people search for information is more complicated than first thought. However, some developments from information science, doing high-tech work with natural language, seem promising. At the end of the day, research follows practice!
If the mark of an early talk is its relevance and usefulness for the rest of the programme, this was indeed excellent.
How to be a green indexer (Femke Ijsseldijk)
Femke spoke after dinner on the first evening of the recent Middelburg conference concerning the meaning of sustainability and how our everyday choices make us more or less ‘green’. She also provided lots of links for further information about sustainability which have been incorporated into the conference blog <http://adelef.wordpress.com/>.
First we watched a brief cartoon about the meaning of sustainability, and then Femke explained the different ways of assessing it. ‘Cradle to cradle’ requires all systems to be regenerative, so all materials used in production should be capable of re-use. ‘Life cycle analysis’ examines the total impact of a product’s manufacturing, use and disposal in terms of the material, energy and emissions put into and taken out of the process.
‘Green’ can come in many shades. ‘Greenwash’ describes organizations that imply their actions have environmental benefits. For example, the ubiquitous hotel bathroom sign exhorting guests to reuse their towels implies the hotel has green credentials without it actually having to take any action at all to reduce its environmental impact (in fact, it just saves the hotel money). There is also ‘consumer green’, by which products are marketed as being environmentally friendly.
We all then took part in a quiz in which we were shown several pairs of images and were asked to say which from each pair we thought had the lower carbon footprint, with some surprising results. We concluded that it was good to be a vegetarian in a Hummer, and better still to be a vegetarian in a Toyota Prius.
There are lots of ways in which we can work more sustainably (see the links from the blog) but one particularly intriguing one was the ecofont which uses less ink in printing because all the printed characters are full of tiny holes. Ingenious!
New Technologies and Innovative Indexing Methods (Harry Bego - Texyz)
Harry Bego gave a stimulating and enlightening talk about his program TExtract. He described its principles and features, which are based on control and feedback. Control is concerned with links to index entries, and feedback is the ability to check what you have done. The core feature is text accessibility via text linking, which enables highlighting and navigation.
The procedure for using TExtract begins with automatic initial indexing, which produces a list of words selected on the basis of usage. The indexer then edits and expands the index; this step is semi-automatic, requiring manual input.
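To make the idea of automatic initial indexing concrete, here is a toy sketch in Python of selecting candidate index terms by frequency of usage. This is not TExtract's actual algorithm – the stop-word list, length cut-off and scoring are illustrative assumptions only – but it shows why such a first pass still needs an indexer's editing afterwards.

```python
from collections import Counter
import re

# Illustrative stop-word list; a real system would use a much larger one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "that"}

def candidate_terms(text, top_n=5):
    """Rank words by raw frequency, skipping stop words and short words."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [term for term, _ in counts.most_common(top_n)]

sample = ("The index links terms to pages. A good index selects terms "
          "by usage, and the indexer then edits the index.")
print(candidate_terms(sample, 3))
```

Frequency alone surfaces plausible headings but no structure, which is exactly the gap the semi-automatic editing step fills.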
Two editing styles are available. The first is index-oriented, i.e. entries have significance scores reflecting the importance of terms. The other style is text-oriented, where the index is created directly from the text. A hybrid style is also available.
The hybrid style is aimed at authors, who will know their work and know what they want to achieve from the index. Bego’s opinion is that indexers will prefer to start in the text-oriented style and then move on to the index-oriented style for completion of the index.
Bego demonstrated index creation with his program, showing how to search for entries with high significance scores and to delete unwanted entries. Terms can be rotated or inverted as required. Co-occurring terms can be identified to use as sub-entries.
Demonstrations of software always far outshine any descriptions, and to be able to watch Bego demonstrate his program was a valuable experience.
A Pair of Shoes in the thesaurus – reflections on human and computer indexing (Eric Sieverts – University Library Utrecht)
Eric gave a humorous, informative, though speedy, presentation covering the topics: the way search engines work; what's wrong with this way of searching; metadata and indexing; indexing and knowledge organization; knowledge organization and the semantic web.
The Google search-and-find paradigm, and what's wrong with it
How does Google know what the searcher (via the query) means, and how does it know what the text in a document means? We've all experienced the problems – often you don't find what you are looking for, and just as often you find too much. For example, a search for the word "thesaurus" returned a photo of a pair of shoes with the laces intertwined (does this imply that thesauri are not easy to use?). The new Google Instant attempts to predict what you are looking for after you have typed one or two letters. But is Google guessing correctly? Sometimes Google acts as a moral authority (try typing "nude" into the search box). The dilemma: Google is not user-friendly – the user has to invent the correct term(s) to describe what he or she is looking for. On the other hand, indexers are expensive – they have to analyze the document to assign the correct term(s) for the user. Why is the Google user still satisfied despite lousy recall and precision? Because the system looks simple, you always get something back from your search and, thanks to smart relevance ranking, you often don't look beyond the first page of 10 search results.
Language technology on the searcher side refines your original query using a semantic network (or ontology) to return better recall and precision in the results. Statistical analysis of the search result generates additional terms from which you can select an improvement to your query. For example, if you enter the word stones in the search box, you'll get suggestions such as: bladder stones, kidney stones, rolling stones, precious stones etc.
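The "stones" example can be sketched in a few lines: match the user's query against a list of compound terms. The term list below is hypothetical; real systems derive such compounds statistically from the documents and search results themselves rather than from a fixed list.

```python
# Hypothetical term list; in practice this would be generated by
# statistical analysis of the corpus or the search results.
TERM_LIST = ["bladder stones", "kidney stones", "rolling stones",
             "precious stones", "stone age", "gallstones"]

def suggest(query):
    """Propose refinements: any known compound containing the query word."""
    q = query.lower()
    return [t for t in TERM_LIST if q in t.lower()]

print(suggest("stones"))
```

Even this crude substring match shows the point: the user picks a narrower compound instead of having to invent the "correct" term unaided.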
Language technology on the document side can enrich the document with "correct" terms (from a thesaurus) or derive terms that are in the text of the document itself. This is called automatic classification or enrichment.
For example the OpenCalais Web Service (http://www.opencalais.com/) takes unstructured documents (text, HTML, XML) and automatically generates rich semantic metadata (tags) for named entities (people, companies etc), facts and events.
Eric told us how such automatic classification systems are trained to "know" words that are not in the text of a document. These systems will be used in future because it isn't possible to index all new documents manually.
Eric mentioned Google Book Search, which allows geographical recognition of places mentioned in a book. As well as returning links to the page in the book where each place is mentioned, his example displayed a map of the places. (I am not sure how this fits in with the training of systems.)
Eric then talked about Knowledge Organization Systems (KOS). There are four types: categorization systems (taxonomies); metadata models (e.g. MARC, Dublin Core); relational models (thesauri, semantic networks or ontologies); and term lists (such as authority files). A KOS has four functions: description and labelling, definition, translation and navigation.
Ontologies show relations between concepts. Graphical examples shown were a "wine ontology" and the Balzac statue by Rodin. Ontologies are used to represent knowledge about a small domain. Ontologies were then discussed in relation to the semantic web. Some ontology notation:
RDF (Resource Description Framework) is a standard used to describe relations between an object and its metadata using XML. RDF properties (metadata) are expressed in "triples": subject <predicate> object (think of: thing <property> value). Triples are used in "linked data". The graphical example Eric used was an author (the thing) who has a name (the property) with a value (John Smith). Of course the author has other triples associated with him.
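The triple idea can be illustrated without any RDF tooling at all: a minimal sketch that stores (subject, predicate, object) tuples and queries them. The identifiers below are made up for illustration; real linked-data work uses proper URIs and a library such as rdflib.

```python
# RDF-style triples as plain (subject, predicate, object) tuples.
triples = [
    ("author:1", "hasName",  "John Smith"),
    ("author:1", "wrote",    "book:42"),
    ("book:42",  "hasTitle", "An Example Book"),
]

def objects(subject, predicate):
    """Return every object matching a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("author:1", "hasName"))  # ['John Smith']
```

Linked data is essentially this pattern at web scale: because every triple is self-describing, triples from different contributors can be merged and queried together.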
SKOS (Simple Knowledge Organization System) is a standard for describing Knowledge Organization Systems, and the relations between them, in RDF. A graphical example that Eric used was the thesaurus term "Economic cooperation". We also had a glimpse of the XML behind this.
Advantages of RDF and "linked (open) data": it's on the Internet, it's computer-readable, it's open, it's standardized and it's meant to be re-used. But as anyone can contribute to it, things get rather messy. Eric's slide of a "linked data cloud" dated September 2010 showed 24 billion RDF triples online.
Essentially, the semantic web still requires a lot of subject indexing, but with smart systems that can infer meaning and improve dumb searches. Eric finished by saying that even a monkey may find correct information – even information he didn't know he was looking for. Eric Sieverts' website is at
Software techniques for improving indexer consistency (Rudy Hirschmann, Einstein Papers Project)
Rudy had accepted last year’s Wheatley Medal on behalf of the Project and introduced his talk as partly a way of thanking the Society for the honour and for the fruitful contacts that had followed the award.
He began with an overview of the various phases of Einstein’s life (1879-1955), as revealed by surviving documents. I was fascinated to learn that Einstein wrote to his mother after Eddington had detected gravitational lensing, one of the predictions of general relativity, during the 1919 solar eclipse. We also heard about the FBI’s files on this suspicious character, and about some wonderful literary detective work by the Project: tracing descendants of a baby, Lieserl, from Einstein’s first marriage (a scoop which delayed the publication of volume one) and finding proof (in China) that Einstein had indeed known of the Michelson-Morley light propagation experiments.
Einstein bequeathed his papers to the Hebrew University in Jerusalem, which is cooperating with Princeton on the project. The massive files of over 75,000 documents were first indexed on cards, then microfilmed and are now being digitised. The first ten volumes were individually indexed by around 40 editors drawn from many disciplines, and the volume indexes gathered into a cumulation appearing as volume eleven. The work, proceeding chronologically, is only about one-third complete, but software and indexing principles are being applied to impose consistency in later volumes, using volume 11 as an authority, with new terms accepted only after discussion and usually only to reflect new aspects of Einstein’s thinking. Another minor triumph was the resilience of the public website www.alberteinstein.info to extreme load factors (6.4 million hits on its second day).
The software being used (FrameMaker embedding and Soundex coding to control the unproofed text) is certainly a glimpse into an unfamiliar world for most book indexers. A surprising number of Wheatley Medals go to indexers not using MACREX, CINDEX or SKY; slim volumes seldom win (understandably, since indexing affords few economies of scale), but winners will seldom have been required to show the patience and dedication of the Einstein team. Rudy is a man of great charm and modesty; if he feels honoured to be awarded the Wheatley, I’m sure we should feel equally honoured to have made some indirect contribution to this.
Human vs. computer indexing discussion forum
Statements of the panelists:
Harry Bego thinks that e-books provide a good opportunity for automatic indexing.
Eric Sieverts said that the choice between human and automatic indexing depends on the indexing scenario.
Rudy Hirschmann thinks that automatic indexing might be necessary due to the pressure to publish, especially for large indexing projects.
Some opinions expressed:
Jochen Fassbender questions whether automatic indexing will be able 1) to distinguish text words vs. concepts; 2) to identify context of concepts, that is, main headings and subheadings; and 3) to distinguish significant passages vs. passing mentions.
Harry Bego replied that his software TExtract is based on statistical and linguistic aspects and that this approach can identify context to some extent. TExtract is a semi-automatic program which proposes index entries to the indexer who can edit them as needed.
Maureen MacGlashan objected that editing semi-automatic indexes is time-consuming for indexers.
Drusilla Calvert said that automatic indexing might be possible for one book but not the next one because every book is different and the program would have to be changed each time.
The discussion shifted to the difference between British and American index styles. Frances Lennie said that harmonization would be great but tailoring an index to the readers’ expectations remains necessary.
Maureen MacGlashan ended the forum by saying that there is not yet enough research into what users of indexes really need.
Workshop for Dutch editors: How To Handle An Indexer (Pierke Bosschieter)
Pierke led a practical workshop for editors on how to deal with indexers. The workshop was attended by six editors, a typesetter and two indexers. In her workshop Pierke discussed the basics of indexing, types of indexes, the way indexers work, the (software) tools indexers use, and the planning and financing of indexes. She also extensively addressed the needs and expectations of indexers towards editors and vice versa.
Most of the editors present seemed quite aware of the indexing basics. They were particularly interested in how indexing fees are built up, in the training of indexers and in indexing software. With regard to the latter, editors wondered why most indexing programs do not offer the possibility of linking PDFs to them; they therefore expressed interest in TExtract. An interesting, but also delicate, point was that most editors confessed they rarely check submitted indexes for typing errors and other mistakes. Pierke stressed the importance of this and also set out points of attention for judging a good index. She also advised editors to give indexers the chance to check their indexes after typesetting. Pierke wondered whether editors often write the index themselves instead of hiring a professional indexer. This appears to be rarely the case, but editors do have experience of authors wanting to write their own index. Editors wondered whether it would make the indexing process more efficient if authors provided indexers with suggestions for indexing terms. The conclusion was that a list of suggested terms can be useful, but it should be treated as an addition to, rather than a basis for, the index. Pierke also pointed out that in the case of translations, revisions or updates of publications, it is usually much more efficient to write a new index than to translate and repaginate the old one.
Towards the end of the workshop Pierke presented the editors with a top 5 wish list indexers have:
1. Better payment conditions;
2. If the printing process is delayed, don’t let the indexer suffer by making his/her deadline tighter;
3. Don’t make any changes in proofs without notifying the indexer;
4. When it comes to indexing, editors should let the indexer’s expertise prevail over the ‘ignorance’ of some authors;
5. Give indexers a standard mention in the colophon, or ask them if they want to be included in it or not.
Pierke also pointed out what editors may expect of indexers, such as meeting deadlines, delivering an end result as agreed upon, being flexible, and informing the editor about any (typing) errors found in the text. Subsequently Pierke asked if the editors could come up with a top five as well. Apparently the editors present were already pretty much satisfied, as they couldn’t compose such a list. The only thing mentioned again at this point was the possibility of a link between PDFs and indexing software.
After the workshop Pierke was asked by various publishers to organise it again in-house, which demonstrated that the editors were really interested and saw the benefit of it.
Taxonomy development for indexers: methods, software and skills (Evert Jagerman and Jacqueline Pitchford)
In the Middelburg conference programme there was a clear division between indexing and classification. There were sessions on issues related to the familiar back-of-the-book indexing and there were sessions dealing with keyword and classification systems.
The Taxonomy Workshop fitted into the second category. First of all Evert gave a short theoretical introduction on different classification methods. He briefly discussed systems such as taxonomies, thesauri, ontologies, directories, topic maps and folksonomies.
Evert also described how he assisted companies in developing keyword/classification systems to make their documents and internal knowledge accessible. It soon became clear that it is often not practical to stick to the rules of one particular classification system. For instance, a taxonomy can become too rigid, and therefore unusable, fairly quickly. As a consequence, people often turn to other, more flexible systems such as thesauri, or they decide to incorporate features of other classification systems into the taxonomy.
During the second, interactive part of the workshop the group built a joint taxonomy on education. The taxonomy was built in the software Mindmanager, which was operated by Jacqueline. It was built for a website which supports parents who need to select a school or school type for their children. During this practical session it again became clear that for this purpose a taxonomy would be too rigid a system. Moreover, working in a bigger group was instructive and entertaining, but not very efficient.
The workshop as a whole offered a good understanding of how to develop keyword/classification systems within an organisation. Topics such as hints, dos and don’ts for obtaining an assignment, communication with and informing the client, and implementation and maintenance were discussed. All in all, a very useful workshop for people considering offering services in this area.
The question arises how to make the link between indexing and building classifications. There is a big difference between the classical indexer who thinks in detail about the layout and coverage of a back-of-the-book index and the sometimes very broad way in which keywords are allocated to documents. For instance, the classical indexer can rack his or her brains over the question whether a page range needs to be indicated with a comma, dash, arrow or a number of dots. For the classification of documents on the internet on the other hand keyword allocation is sometimes left to the general public (folksonomy) through which many different synonyms can be allocated to documents.
So where will these two worlds meet in future? Traditional hardbacks are increasingly available in an electronic environment, on the internet or as e-books. In these cases, searching an index which sits at the back of the book and refers to a fictive page number is not an option. Can keyword and classification systems such as thesauri, topic maps etc. fulfil such a role in the indexing of books, so that useful search systems can be developed for traditional paper editions and electronic versions alike?
Such considerations were discussed at the end of the workshop. Evert gave professional indexers plenty to think about: as indexers, it is our task to keep thinking about how our expertise can remain useful in the electronic age.
Cross-references workshop (Ruth Pincoe)
Ruth Pincoe presented a workshop on the use of cross-references. This was accompanied by a very useful handout from a previous presentation she had given to the American Society for Indexing, entitled ‘See and See Also: Rules, Issues, and Controversies’. She began by discussing the definition and functions of see references, which direct users from the terms they may look up – synonyms, but also antonyms, variant forms of personal names, geographical names and names of organisations – to the heading actually used in the index. They can also direct readers from general to specific terms. They need to be used with care, as using too many can clutter the index and frustrate users by sending them to and fro. There was then considerable discussion about whether it is best to use see references or double posting.
Ruth then moved on to see also references, which are used to direct the reader to additional information that is related in some way. There are three general types: associative, between synonyms, and general-to-specific. The main indexing texts all identify different categories of see also references.
The use of see under and see below references caused more discussion, with most indexers saying they never used them, although some participants said they found them very useful.
Some cross-references can be added as the index is being written, but the need for others only become clear once the final structure of the index is clear. Careful consideration and checking of cross-references is therefore necessary at the editing stage. Indexing software can check for blind see references and circular see also references, but one must not be too reliant on software to pick up all possible errors.
Discussion then moved to questions of style. Obviously one must follow the publisher’s house style if there is one, but often there is no particular house style. However, the most important rule is to be consistent. The Chicago Manual of Style has very particular recommendations regarding the use of italics, punctuation, and capitalisation. This is where the major differences between US and UK indexing practices were shown up most clearly. The Canadian indexers present seemed to prefer US style, and the Australian indexers tended to follow UK style.
As well as questions of style, there are several other controversial issues regarding cross-references, such as the reciprocity of see also references, and the placing of them – should they be at the beginning or the end of the other entries? Once again, there seemed to be a general divide between US and UK practices, but the general rule seems to be that “it all depends”, although consistency was reiterated as the main rule.
The workshop finished with the use of humour in cross-references, e.g. boring see civil engineers! Oh dear, one must also be careful not to antagonise the index users.
Indexing Modern Islamic Materials Workshop (Caroline Diepeveen and Dr. Joed Elich from Brill Publishing)
Dr Elich explained that Brill publish 500-600 monographs per year, all with indexes alongside their impressive collection of encyclopaedias and reference works, all also indexed.
We were given a brief history of Brill publishing and Dr Elich explained that although Brill now publish in English, this was one of the last languages adopted by the company which also publishes in around 60 other languages.
Brill publish a number of books in various Arab languages and about the Middle East, including Index Islamicus and the Encyclopaedia of Islam. As such, dealing with the transliteration of Arabic names is a big topic within the company. The company is currently developing a font that will encompass 65,000 characters, including diacritics from many languages, to distribute to its authors and indexers as a preferred font.
The issue of transliteration was discussed, with the fact that many texts have been transliterated differently over the years. There are many different versions available as trends for transliteration have changed over the years. These versions may not be accepted at present, but this does not make them incorrect, merely different.
Indexers need to be able to cope with the diacritics required for Islamic transliterations. The use of Unicode with a stable, reliable font was discussed, and handouts showing various diacritics were distributed. Unicode fonts can be used with Macrex and with Sky Index, and Cindex will soon be able to support them thanks to work being done by a Cindex user.
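The need for stable Unicode handling can be seen in miniature with Python's standard unicodedata module: the same transliterated string can be stored with precomposed characters or with base letters plus combining marks, and software must normalize before comparing or sorting. The name below is illustrative only.

```python
import unicodedata

# A transliterated name with precomposed characters (dot below, macron).
composed = "\u1e62al\u0101\u1e25"  # "Ṣalāḥ"

# The same name decomposed into base letters + combining diacritics.
decomposed = unicodedata.normalize("NFD", composed)

# The strings look identical on screen but differ code point by code point.
print(composed == decomposed)                                # False
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```

This is exactly why a shared font and agreed encoding conventions matter for sorting and matching transliterated entries consistently across an index.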
Several national libraries (including the British Library, Library of Congress and others) are attempting to create an authority for Islamic names, but the work is being done in isolation so Brill are developing their own file for in house use, which will be made freely available on the Internet.
This was a very useful workshop for those who work with modern Islamic materials and I learned much about the use of diacritics and the fonts and resources available to cope with such work.
How to Handle Illustrative Material (Max McMaster)
Max McMaster’s interactive workshop took us in depth into the world of indexing illustrations. As with a lot of indexing problems we come across, when a question is posed the answer comes back “It all depends”, so, thanks to his vast experience, Max was able to guide us through many of the situations in which we might find ourselves. Important considerations when indexing illustrative material are: Is the topic of the illustration discussed elsewhere in the text? Does the material have a glossary entry that contributes significantly enough to warrant indexing? If the illustration is on the same page as the discussion of the topic, should it be indexed separately? Is the image simply a generic example of the topic, or is it specific to the discussion?
The main consideration is, as with every indexing decision: is the material important to the text? A lively discussion was generated by one of the examples, a historical portrait of debutantes. Locator and layout options for references to illustrations also generated very detailed discussion. No definite conclusion or consensus was reached, of course. From double-spread illustrations with captions on a different page to drop-in pages with no page numbers, Max guided us through different solutions to these and other problems, covering the “all depends” to our satisfaction.
Web 2.0 and Library 2.0, a Dutch Perspective (Wouter Gerritsma)
Wouter Gerritsma, Information Specialist at Wageningen Agricultural University, was educated as a tropical agronomist and has lived in the tropics and worked as a university researcher. But in 2000 he made a career switch, one he has yet to regret, to the library of Wageningen University and Research.
Gerritsma has a "keen interest" in the rapidly changing information environment of academic libraries. In his daily work, he is involved in his library's adoption of new technologies, and in 2005 he started blogging about that changing information landscape. (You can follow his Wow! Wouter on the Web blog at http://wowter.net/.) In 2007, he was elected The Netherlands' Information Professional of the Year. He has been involved as a teacher and presenter in many practical training courses in web search and in Web 2.0 for information professionals.
The presentation he gave to SI can be found at his SlideShare site (http://www.slideshare.net/Wowter).
He began his talk with a review of the history of the Internet and an explanation of Web 2.0 and Library 2.0. Gerritsma explained that this new Web 2.0 has become both do-it-yourself and customizable. In sum, our use of the Internet has become interactive and now includes routine use of social software. We now learn directly from the behavior of our users and readers. The 2008 U.S. election campaign of Barack Obama, for example, was a milestone in the history of the use of Web 2.0 technologies.
In libraries, the same sort of phenomenon is taking place. Stacks are now 'open': users can tag content for their own use and conduct research off-site. Many library catalogues are searchable from home and publicly available. For example, a search on the Internet related to a new film may take you to a web page about the film with relevant information, including links to where you can check out the book the film is based on.
Another example of Library 2.0 in action is the National Library of Scotland's photostream on Flickr (http://www.flickr.com/photos/nlscotland/), which is publicly available for all to enjoy.
"Make your resources available!" Gerritsma admonished. The library is still very much the place for study, and we should welcome our patrons' participation.
Paul Otlet, the man who wanted to classify the world (Stéphanie Monfroid)
Paul Otlet, known as 'the man who wanted to classify the world', was fascinated by knowledge and its collection and dissemination. He is considered one of the fathers of information science.
He was born in 1868 in Brussels and educated first at home, then at schools in Paris and Brussels. He obtained a law degree at Brussels University in 1890. Despite embarking on a legal career, he was more interested in bibliography and the idea of extracting information from books and making it freely available through catalogues. He became friends with Henri La Fontaine, a fellow lawyer with similar interests, and they worked together on social science bibliographies. In 1895 they discovered the Dewey Decimal Classification and developed it into the Universal Decimal Classification, the first faceted classification scheme. Otlet had previously developed the idea of the 5x3 index card for recording facts, and he began to use these cards together with the UDC scheme to catalogue facts in what became the Universal Bibliographic Repertory. He designed catalogue furniture to house the card collection; these cabinets were to become the standard for library catalogues and won a gold medal at the 1900 World Fair. Based on this collection, Otlet set up a paid mail information service, sending out up to 50 results in response to each query: a kind of early search engine.
In 1906 his father died and he had to attend to his family's business interests for a while, but in 1910 he and La Fontaine developed ideas for a city of knowledge, or 'Palais Mondial' (World Palace), where the world's knowledge would be stored. The First World War intervened, during which time Otlet worked for world peace in various capacities, having founded the Union of International Associations with La Fontaine in 1907; both men also campaigned for what became the League of Nations. In 1919 they were given space in a government building in Brussels for the Palais Mondial, renamed the Mundaneum in 1924. The collection grew to 15 million entries, including letters, reports and newspaper articles, and came to include images: a picture collection that was ahead of its time.
Otlet had plans for a World City of knowledge, with designs from Le Corbusier, and envisaged information being delivered to people in their own homes by telegraphy. He had ideas for linking documents in a way later realised by hyperlinks, and for people sharing information in a manner not dissimilar to today's social networks. While the technology to realise his ambitions was not available, his vision was far ahead of its time and anticipated much of what the internet has since achieved.
Unfortunately, funding and support for his project ran out in 1934, and his collection was eventually partially destroyed under Nazi occupation. What remains, together with many of his writings, is accessible to the public in the Mundaneum museum in Mons.
The International Session: Marketing Indexing to Authors and Academics
The theme of this year's international session was the marketing of indexing to authors and academics. With many international representatives present in Middelburg and only twenty minutes available, the session was unfortunately somewhat rushed. Many of the international representatives took a broad view of the theme and did not limit themselves to marketing to authors and academics, but also touched on the marketing of indexing services in general.
Here is a very brief overview per society/network:
- ASI Int. Rep. (Pilar Wyman): The American Society of Indexers runs a professional executive office, which also handles marketing. Pilar Wyman's personal marketing tool is the calendar she sends out every year, which is always a big success.
- ISC Int. Rep. (Ruth Pincoe): The Indexing Society of Canada publishes an indexers-available directory in hard copy as well as online. Many Canadian indexers are also editors, and conferences are always combined. The society's magpie pins are put to good use for marketing purposes.
- ANZSI Int. Rep. (Mary Russell): The Australian and New Zealand Society of Indexers has a bookmark inserted in an academic journal each year. The bookmark draws much more attention than an advertisement in the same journal would. The society has recently published a guide for businesses on how to index an annual report.
- DNI Int. Rep. (Jochen Fassbender): The German Network of Indexers is still a relatively new group that needs to learn from the experience of others. The network has a website and a flyer, and meets annually at the Frankfurt Book Fair. Jochen Fassbender himself was recently interviewed by Federwelt, a German journal for authors (Aug./Sept. 2010 issue).
- NIN Int. Rep. (Caroline Diepeveen): The Netherlands Indexing Network also still has much groundwork to do. The conference here in Middelburg is an important opportunity to show the Dutch publishing and information world that we exist, and it is hoped that it will create positive spin-off effects.
- ASAIB Int. Rep. (Marlene Burger): The Association of South African Indexers and Bibliographers is always represented at the Cape Town Book Fair, and in that way makes itself known to the South African publishing world. The association also networks actively with other organisations in South Africa, such as the National Library and the National Archives.
- Society of Indexers Int. Rep. (Jill Halliday): The Society of Indexers is very active in seeking feedback from its membership on issues such as marketing. Academics and authors are a very difficult group to target. The SI is currently liaising with university librarians, who have put a leaflet on their website.
Last updated: 01 October 2012