
Composite image of multispectral imaging of an illuminated plate from a Book of Hours.

Fun with PhotoDoc: Multispectral Imaging with MISHA (Edition 13) 

As mentioned in a previous blog post, Andrew and Naomi from Case Western Reserve came to the lab in late February to demo the MISHA portable multispectral imaging system, made possible by a National Endowment for the Humanities (NEH) Research Grant awarded to the Rochester Institute of Technology. In total, Naomi and Andrew imaged five objects from the Public Library, UC Libraries, and one of our third-party institutional clients. Imaged books included one Otto Ege item, two Books of Hours, one undated Latin music manuscript, and a Pentateuch volume from Hebrew Union College. In all, thirteen separate capture sessions were carried out for the five objects. Afterwards, the raw data from the capture sessions was shared with the Lab via OSF (Open Science Framework) so that I could process it in the NEH grant-supported, open-access RCHIVE (Rochester Cultural Heritage Image processing and Visualization Environment) software.

The image gallery above shows the recto of leaf 32 from the Public Library’s copy of Fifty Original Leaves from Medieval Manuscripts, Western Europe, XII-XVI century, by Otto Ege.

While each of the capture sessions took only two minutes to complete, I found that processing the raw data took me a bit longer to figure out. Processing the data felt very similar to using CHI’s RTI Builder and Viewer software. However, in this situation I did not have a week-long training opportunity to learn the ins and outs of the software and its functions. For the Spectral Analysis App, I had only a couple of brief documents to refer to, so the learning curve was a little steeper. I also experienced some issues with the software while processing the data with the flatfield files provided from the capture session. But in the end, the processed files seemed fine without the flatfield data, so it all worked out. 
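For readers curious what the flatfield step I struggled with actually does: it is, conceptually, a simple per-pixel normalization that removes uneven illumination from a capture. This is a minimal sketch of the standard math only, not the RCHIVE implementation (the function name and arguments here are my own illustration):

```python
import numpy as np

def flatfield_correct(raw, flat, dark=None):
    """Standard flatfield correction: divide out uneven illumination.

    raw  -- captured image of the object (2-D array)
    flat -- image of a uniform white target under the same light
    dark -- optional dark frame (sensor signal with the lens capped)
    """
    raw = raw.astype(np.float64)
    flat = flat.astype(np.float64)
    if dark is not None:
        raw = raw - dark
        flat = flat - dark
    # Guard against dead pixels (zero or negative flat values), then
    # rescale so the result keeps the flat field's mean brightness.
    flat = np.where(flat <= 0, 1.0, flat)
    return raw * (flat.mean() / flat)

# Usage: a vignette baked into both raw and flat cancels out,
# leaving an evenly lit image of the object.
```

If the flat frames are mismatched (wrong band, wrong exposure), the division can introduce artifacts rather than remove them, which may be why my results looked fine without them.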

The above image gallery depicts a leaf from Hebrew Union College’s Pentateuch Ms. 1 with adhesive staining, tape, and prior repairs.

What I discovered through processing all the MISHA data and then comparing it to the existing specialized imaging done in the Lab was that the suite of imaging we do in the Lab is very well rounded and, in general, suits our needs and our clientele quite well. In many cases, our results were at least comparable to, if not better than (specifically within the UV wavelengths), the results accomplished using the MISHA. And with our UV workflow in particular, though our capture time might be slightly longer than MISHA's, the data processing time is significantly shorter, and the side-by-side presentation of accurate normal-illumination images next to the full color UV image(s) is ideal for our purposes.

The images above show an example of scraped text on parchment from UC Libraries Hours of the Virgin from 1475, currently in the Lab for treatment. Compare these MISHA generated images to the documentation performed by Catarina Figueirinhas and myself using the Lab’s equipment and processes below.

That said, I am fully aware that not everyone has access to the equipment and training that I have been fortunate to build up and experience over the last five-plus years. Also, not everyone uses their finished data exactly how we do. For instance, the needs and expectations of a conservation lab and of cultural heritage institutions can be very different. Even within the conservation field, how we use the data provided by specialized imaging in our hybrid book and paper lab is quite different from the kind of data needed by a fine arts conservation lab. Ultimately, I think the core audience for a system like MISHA is an organization looking to expand its suite of imaging services, or an institution with no multispectral imaging infrastructure that wants to image collections quickly and easily. For the latter, though, there is a big learning curve in manipulating and processing the data; but if greater focus is put on making the software and processing steps user-friendly, especially for novice users, it is completely manageable. And if that step is taken, I think the system could help a lot of institutions dive deeper into the materiality and history of their collections.

The images above depict another example of faded, scraped text. This flyleaf is from an undated Latin music manuscript that is part of the Public Library's collection. The images below represent imaging done by the Lab, both normal illumination and UV radiation, with the goal of increasing the legibility of the inscription.

In the end, multispectral imaging is just plain FUN! So, the idea of making it more accessible to a wider audience is extremely exciting, and I think the work that NEH, RIT, and colleagues like Andrew and Naomi are doing to share the power and wonder of multispectral imaging is amazing. The idea of a portable multispectral imaging system with free processing software that does not take a PhD to use is boundary-breaking, and it gives us a glimpse into a future of accessible and exciting imaging, which in turn allows us to see and understand more of the past. I will always be an advocate for that kind of imaging!

Jessica Ebert [UCL] – Sr. Conservation & Photographic Documentation Specialist

Fun with PhotoDoc – RTI (Edition 5)

At the beginning of April I was lucky enough to attend an RTI (Reflectance Transformation Imaging) workshop offered by Cultural Heritage Imaging (CHI) at Yale University. CHI is a non-profit organization that shares and teaches RTI and photogrammetry technology with cultural heritage institutions around the world. The class I attended was a four-day, NEH grant-sponsored course taught by three RTI experts from CHI, and it was amazing!

This is a composite image of all the highlight points from one RTI section. The software uses these highlight points to map the surface shape and color of your object.


So, what is RTI?  CHI describes it on their website as “a computational photographic method that captures a subject’s surface shape and color and enables the interactive re-lighting of the subject from any direction”.  For highlight RTI, which is the least expensive and most accessible method for most institutions and the one I was taught in the class, you basically take a series of 36 to 48 images of an object where everything is constant (settings and positions of the object, camera, and spheres) except the light position.  With a reflective black sphere (or two) set up next to your object, you move your light source around the object at varying angles.  Then you take that set of images and plug them into the free RTI software provided by CHI; the algorithm detects the sphere(s) and the highlight points (from your light) captured on the sphere(s) and voila!…you have a fun and interactive way to look at your object’s surface texture.
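The reason the sphere works is pure geometry: the bright highlight sits where the sphere's surface mirrors the light back to the camera, so each highlight position pins down one light direction. CHI's RTI Builder does much more than this, but the core per-highlight calculation is just a mirror reflection; here is a toy sketch under simplified assumptions (orthographic camera looking along +z, coordinates in pixels):

```python
import math

def light_direction(cx, cy, r, hx, hy):
    """Estimate the light direction from a specular highlight on a
    reflective sphere -- the geometry behind highlight RTI.

    (cx, cy), r -- sphere center and radius in the image, in pixels
    (hx, hy)    -- position of the bright specular highlight
    Returns a unit (x, y, z) light vector, camera looking along +z.
    """
    # Surface normal of the sphere at the highlight pixel.
    nx = (hx - cx) / r
    ny = (hy - cy) / r
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    # Mirror-reflect the view vector v = (0, 0, 1) about the normal:
    # L = 2 (n . v) n - v
    return (2 * nz * nx, 2 * nz * ny, 2 * nz * nz - 1.0)

# A highlight dead center on the sphere means the light is directly
# in front: light_direction(100, 100, 50, 100, 100) -> (0.0, 0.0, 1.0)
```

Repeat this for every image in the set and you have the per-image light directions the fitting software needs to build the relightable model.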
Before I attended this fantastic training opportunity, our conservator and I knew right away what the subject of our first capture would be when I returned…a 16th century German Reformation text by Martin Luther with a highly decorated cover that is practically invisible under normal illumination.

Here’s a time lapse video of our first (and second) capture in the Lab…
That day (Tuesday) we were able to capture the upper and lower covers of the Reformation text (from ARB), the original silk cover from a 17th century Chinese manuscript (from Hebrew Union College), and an illuminated page from a German vellum prayer book (from PLCH).  And here are some snapshots of our results from two of those captures (click on the thumbnails for a larger view)…

This possibly 13th century German Prayer Book has a full stiff vellum binding and an illuminated first page.  The varying modes highlight condition issues like worn/abraded parchment and flaking gold illumination, as well as the overall surface texture of the illumination.


I hope you’ve enjoyed getting a little sneak peek into RTI.  I will be demoing RTI and discussing it in further depth this afternoon from 1:30 to 3pm at the Lab’s annual Preservation Week Open House.  I also hope to do more RTI captures and processing in the future and to share the results here.
Jessica Ebert (UCL) – Conservation Technician