Photogrammetric Documentation of Weathered and Damaged Headstones at the Cataraqui Cemetery


George: Today I’d like to begin by acknowledging my collaborators in this project. Ian Longo, one of my undergraduate students, has worked extensively in the Cataraqui Cemetery in Kingston, Ontario, especially over the last year, on a funded project to do high-resolution, differentially corrected GPS mapping of the cemetery. I also give [inaudible 00:00:18] for Alexander Gabov, who holds up the conservation end of the project. He’s
a private conservator at a company named CSMO, and he’s worked extensively on stone conservation
projects. Just a brief introduction to this particular
cemetery, Cataraqui Cemetery is the largest in Kingston, Ontario. Kingston, Ontario, was
Canada’s first capital before it moved to Ottawa. It extends over 100 acres with over
40,000 individual interments. I think the actual number is 46,000 or 47,000 at the moment.
It’s still an actively run cemetery. The layout is mixed. It was rather topical this morning
that Mount Auburn was mentioned. In fact, this was one of the models for the Cataraqui
Cemetery, along with Mount Hope in Rochester. Those areas have a garden-style layout, although
there are areas with landscaped rows. There’s also an extensive military section to the
cemetery. The cemetery was incorporated in 1850, but
burials go back at least 50 years before that time. This has led to some confusion in terms
of who is buried where and aligning headstones with the burial records. Like many historic
cemeteries… I’m sure this goes without saying… it can be a bit difficult to match
archives with the headstones on the ground. The problem I was confronted with is one familiar
to you all. Cemeteries are important to local and national history. Personally, coming from
this as an outsider… in ancient history and classics, I typically deal with things
2,000 years or older… I was really surprised by the local passion for genealogy in Ontario
and elsewhere. The second problem, of course, is insufficient funds for maintenance and conservation.
Even though the Cataraqui Cemetery, since 2011, has been listed as a national historic
site in Canada… and a small area in the cemetery is also a national historic site,
where Canada’s first prime minister, Sir John A. Macdonald, is buried… there are still
not a lot of resources for the maintenance of the headstones.
Some reference has been made already in the conference to the topple test. That’s actively
applied in the cemetery by grounds staff. If a stone can’t withstand 100 pounds of lateral force, then
the headstone goes down, usually face-down in the cemetery. There’s simply no money to
bring it up, unless the families are willing to pay for it.
As I’ve said, there’s poor documentation of the older stones. One thing we’ve noticed
in working with the genealogists is often the death records don’t accurately reflect
the dates. The headstones really are the gold standard there. Even more interesting for
me, someone who works in ancient epigraphy, writing on stone in Greek, Latin, and Semitic
languages, mainly in the Middle East, is that the legibility is very poor. The weathering
and, in some cases, vandalism severely reduces legibility and the utility of the headstones
to identify the gravesites. I thought a certain synergy could be developed
here where I could apply some of the technologies I’ve worked on for rapid and accurate survey,
epigraphic survey, in the Middle East, where I work, in Jordan, to document petroglyphs
and inscriptions. I thought that some of these technologies could be profitably applied to
local cemeteries. It goes, really, both ways, because it’s a very useful way for me to introduce
undergraduates to this technology without having to bring them overseas. We have lots
of different types of headstones to work on, and we can tweak the technologies without
having the great expense of overseas work. In essence, we needed an inexpensive technique
for monitoring and documenting these heavily weathered headstones at various levels of
detail. In the case of the inscriptions, we want sub-centimeter levels of accuracy. Then
at a larger scale, in terms of placing the headstones within the landscape, we wanted
something that could be quickly deployed at the scale of tens of meters.
There were really three proposed solutions we looked at. First was reflectance transformation
imaging. It’s something I’ve been involved with for about six years or so. It was developed
at Hewlett-Packard Labs in 2001. The second is stereophotogrammetry, which has seen rapid
growth in the last six or seven years as high-resolution cameras and fast computers have become widely
available. It’s an old technique. It’s more than a century old now, dating from the time
when cameras were mounted on manual theodolites by Germans around the time of the First World
War. Then the final solution for larger-scale documentation is GPS geotagging of photos,
which is a very cost-effective and almost free way to generate high-accuracy metadata.
Experiments were conducted with these three techniques between 2010 and 2013… a lot
of the data processing happened this year… with undergraduates and graduates from the
Department of Classics at Queen’s University, as well as graduates in the Master of Art Conservation program at Queen’s University, I think one of only five such programs in
North America, as well as some cemetery staff. It was really of paramount importance to me
and my collaborators that we use these techniques with essentially untrained volunteers. We
felt any technique that required highly specialized equipment and extensive expertise could not
easily be deployed in local cemeteries. We were pleasantly surprised by the results.
Just a few remarks now on each of these technologies, why you would select them and the underlying,
dare I say, math involved in each of them. Reflectance transformation imaging requires
multiple exposures from a fixed camera position on a tripod, generally with a strobe light moved to different locations around the headstone. These exposures, anywhere from 16 to 60 of them, are then combined in free, open-source software that produces a pseudo-3D model in which one can dynamically relight the headstone after the
capture. At the heart of this technology, so-called
highlight-based RTI is a clever trick where a red or black sphere is placed in the frame
of the photograph, red and black because they’re easily available through pool and snooker
gaming establishments. When the flash goes off, it puts a very small highlight into the
ball. You can see it here. The software automatically detects the black or red sphere and places
a red cross at the point of maximum light. Then this gives you a light vector for each
position of the strobe around the headstone.
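Since I have raised the math, here is a minimal sketch, in Python, of how that light vector can be recovered from the highlight on the sphere. The function name and the orthographic-view assumption are mine for illustration, not taken from any particular RTI package.

```python
import numpy as np

def light_direction(highlight_xy, sphere_center_xy, sphere_radius_px):
    # Illustrative helper: recover the strobe direction from the specular
    # highlight the flash leaves on a reflective sphere, assuming an
    # orthographic view straight down the +z axis.
    sx = (highlight_xy[0] - sphere_center_xy[0]) / sphere_radius_px
    sy = -(highlight_xy[1] - sphere_center_xy[1]) / sphere_radius_px  # image y grows downward
    sz = np.sqrt(max(0.0, 1.0 - sx * sx - sy * sy))
    n = np.array([sx, sy, sz])              # sphere surface normal at the highlight
    v = np.array([0.0, 0.0, 1.0])           # direction toward the camera
    l = 2.0 * np.dot(n, v) * n - v          # reflect the view ray about the normal
    return l / np.linalg.norm(l)

# Example: a highlight 40 px to the right of the centre of a 100 px-radius sphere
# print(light_direction((540.0, 300.0), (500.0, 300.0), 100.0))
```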
The RTI equipment is fairly basic: a 35-millimeter dSLR camera. I should say that all three of the techniques we’ve used are built around
digital SLR prosumer cameras. I know there are a lot of inexpensive and extremely high-quality
cameras available today, including cell phone cameras, but we think the dSLR is still the
gold standard, not least because it’s a highly modular technology where you can add on different
piggybacking technologies. A wireless shutter release and flash trigger
are also important, because you don’t want to have the camera move at all between the
individual exposures. A tripod to hold the camera. Two reflective black spheres. A high-powered
flash unit for working out of doors. We need essentially to overpower the sun. A portable
power unit, generally a lithium-ion portable power pack, if plug-in power’s not available.
We like to use a monopod with the strobe light so we can lift it up rather high. A ladder
is also helpful, and string, because we want to ensure that the flash unit is at an equal distance from the headstone at each position. This is an inexpensive technique, although
I sometimes underestimate this. We already have studio facilities on campus, so most
of this equipment was already available. Ab initio, you’re probably looking at an expenditure
of, say, 3,000 to 4,000 dollars to get into this. Here’s an example of RTI capture in the field
with a group of students. You can see here the two black spheres. We have gray cards
behind them to increase the contrast. The SLR camera here is mounted on a tripod. Here
we see the monopod in action with a high-powered strobe light there. The string there is placed
to ensure that it’s an equal distance from the stone each time. The student is moving
it around a notional hemisphere with respect to this vertical monument.
From this, you can see one of the problems in implementing RTI in a cemetery environment:
you can’t get that true 180-degree hemisphere around vertical headstones very easily. The
ground is in the way, and if you don’t have a student who’s about six-foot-four like Ian
here, it’s a bit difficult to get those top shots. Sometimes we use ladders. Here you
can see me up on a ladder with an umbrella. I’m trying to keep the sun out of those black
spheres to prevent the sun from putting a highlight in that could confuse the software.
What the software produces are surface normals. It’s worth here just looking at the underlying
math of various 3D techniques for a moment. This here is a 3D surface. At each vertex
of this 3D surface, we have an XYZ point in Cartesian space. These points then can be
linked up to form a continuous surface. This is what we call a mesh. At each point
on the mesh, we have an arrow, or a vector, to be more precise, that is perfectly at right
angles with the surface, even a continuously curving surface. This is what we call the
surface normal vector.
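To make the surface normal concrete, here is a tiny sketch of how the normal of a single triangle in such a mesh is computed; the function name is illustrative only.

```python
import numpy as np

def face_normal(a, b, c):
    # Unit surface normal of one mesh triangle: the cross product of two edge
    # vectors. Per-vertex normals are typically averaged from the normals of
    # the faces that share that vertex.
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

# A triangle lying flat in the XY plane has a normal pointing straight up:
# print(face_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))   # -> [0. 0. 1.]
```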
The surface normal vector is very important if we want to dynamically relight a 3D surface in software, because it is the surface normal
vector that bisects the incident and reflected light. If we want to shine a light within
software on a 3D surface, we need to know what the surface normal vector is. RTI calculates
only surface normals, not the other components like XYZ coordinates, although at the end
of the presentation, I’ll show you an example where we have been able to extract true 3D
information from the RTI.
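As a minimal sketch of what the dynamic relighting mentioned above amounts to, assuming a simple diffuse model rather than the richer fitted functions an RTI viewer actually uses:

```python
import numpy as np

def relight(normals, albedo, light_dir):
    # Simple diffuse (Lambertian) relighting: pixel brightness is the albedo
    # times the dot product of its surface normal with the light direction,
    # clamped at zero. normals is H x W x 3 (unit vectors), albedo is H x W.
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    shading = np.clip(normals @ l, 0.0, None)
    return albedo * shading

# Sweep the virtual light around the stone and keep the most legible frame:
# for angle in np.linspace(0, 2 * np.pi, 36):
#     img = relight(normals, albedo, [np.cos(angle), np.sin(angle), 0.5])
```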
The second technology is stereophotogrammetry. The easiest explanation of this goes back to stereopsis in human vision with two eyes,
where we calculate distance by the parallax or difference between a scene in the left
and right eyes. The important thing to realize here is that stereophotogrammetry is different
than 3D filming or 3D photography, or even our own stereovision.
Our stereovision is limited by the base or distance between our two eyes. We’ve evolved
from monkeys, and our stereovision is really only good out to the ends of our arms, where
we could grasp fruit and swing on branches. Beyond that, we use other techniques to determine
distance. With digital cameras, this is not a problem. We can lengthen the base between
cameras. We won’t be using two separate cameras; instead, we’ll use a single camera and take shots at some distance from each other to calculate distance.
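A minimal sketch of the underlying geometry, with the usual pinhole-camera assumptions and ballpark numbers of my own rather than project measurements:

```python
def depth_from_parallax(focal_px, base_m, disparity_px):
    # Classic pinhole stereo relation: Z = f * B / d. Lengthening the base B
    # between the two camera positions makes the same depth produce a larger,
    # better-resolved disparity d.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * base_m / disparity_px

# Rough, assumed numbers only: a 50 mm lens on a ~36 mm-wide, 7360 px-wide
# sensor gives a focal length of roughly 10,200 px. A 1 m base and 5,000 px
# of disparity then put the point at about 2 m.
# print(depth_from_parallax(10200, 1.0, 5000))
```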
The photogrammetry equipment is a little bit simpler. It’s something we’ve been working
with extensively over the last year. We use a Nikon D800E camera, which is 36 megapixels,
and a fixed 50-millimeter Nikkor f/1.4 lens. For this application, we do not really want
to use zoom lenses. We don’t want to change focal distance. In fact, we don’t want to
change f-stop between the shots we’re taking. That disrupts the photogrammetric parameters.
If we want to get accurate scaling for the monuments, then we’d also put a scaling
target, which could be anything as simple as a ruler or a stadia rod into the image.
There are some fancier scaling targets that are easily printed off for this purpose.
There are a couple different geometries we can use for capturing photogrammetry data.
One is the convergent project. Think of crossed eyes, where the two camera positions
go in and focus on the individual headstone. This is very effective. We only need about
two photographs. If we want to get full coverage of the headstone, sometimes we will take four
or five or six. This gives very high accuracy and doesn’t require any special calibration
procedure. It also gives us a good base-to-distance ratio: base is the distance between the camera positions, distance is the range to the object. This determines the accuracy in the crucial depth
axis.
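As a hedged rule of thumb for how the base-to-distance ratio drives depth accuracy; the matching precision and the pixel focal length here are assumptions for illustration, not calibrated values from our cameras:

```python
def depth_precision(distance_m, base_m, focal_px, sigma_match_px=0.1):
    # Rule of thumb for the expected depth error of a stereo pair:
    #   sigma_Z ~= (Z / B) * (Z / f) * sigma_match
    # The base-to-distance ratio B/Z is the lever: double the base and the
    # depth error roughly halves. The 0.1 px matching precision is an assumed
    # figure typical of good sub-pixel matching, not a measured value.
    return (distance_m / base_m) * (distance_m / focal_px) * sigma_match_px

# e.g. a 50 mm lens (~10,200 px focal length), shot 2 m from the stone with
# a 0.7 m base, comes out around 5.6e-5 m, i.e. a few tens of microns,
# broadly in line with the figures mentioned above.
# print(depth_precision(2.0, 0.7, 10200))
```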
We used to do a lot of strip projects, where we take strips of overlapping photos, with 66 percent overlap and 20 or 22 percent sidelap of the headstone. We did this when we were working with 12-megapixel cameras.
In order to calibrate the camera, we would have to take an additional set of photos where
the camera was tipped up 90 degrees. We take two photos like that and another two at 270.
Given that we now have 36-megapixel cameras and they’re widely available, we tend to
use convergent projects. This gives you a sense of some of the underlying
data. There’s no color data in this particular point cloud, but generally we capture all
of the color data. For a given headstone, we’re collecting right now with a 36-megapixel
camera anywhere from, say, 2 to probably 30 million individual discrete measurements.
The accuracy we can predict is at the level of tens of microns, a micron being a thousandth of a millimeter.
If you think of the width of a human hair, that’s about 200 microns. I just did an
industrial project recently where we achieved accuracies quite easily of 50 microns on machine
parts. Here you can see the weathered inscription
and the lettering. Even if there’s a very small amount of depth preserved on a weathered
headstone, we can still make it out. The technique we used to post-process these point clouds
is what’s called surface depth mapping. The surface of the headstone and its inscription goes up and down. What we do is apply a flowing mean surface, and then
we measure up and down the actual surface of the stone and color code it or assign a
grayscale value. This is very flexible. If the stone were continuously curving, then
we’d have a continuously curving flowing mean and measure from there.
The photogrammetry software generally is commercial. We began with the ADAM Tech CalibCam software,
part of the 3DM Analyst suite of software. This is developed for the mining industry.
I still use it for engineering projects, but it’s high cost, about 12,000 dollars or
so U.S. for a single license. That makes it prohibitive for cemeteries, although I would compare that to LIDAR units, which retail for about 40,000 to 60,000 dollars. The package we’re using
extensively right now is Agisoft PhotoScan Pro, a Russian software package. We can get
that for 600 dollars per license as an educational institution, and that pricing may be available,
I don’t know, for nonprofits. There are other open… I shouldn’t say
open, but free packages where you upload your photos, like 123D Catch run by Autodesk. I
would really advise against using that. If you read the fine print, once you upload your
photos, Autodesk owns them, so that may be something you want to avoid. It also doesn’t
give the high precision of these techniques. Once we produce the high-resolution point
cloud, we can manipulate the data in one of two open-source free software packages, CloudCompare
and MeshLab. Photogrammetry, you might get a sense, is
really the technology that we’ve settled on for cemetery documentation. In fact, it
would be very difficult for me right now to get my undergraduate students to do an RTI
in a cemetery. They don’t want to spend a day hoisting up strobe lights. It is fairly
quick in the field. Generally, per headstone, it’s about two to three minutes, if even
that. The results, as we’re about to see, match or exceed those of reflectance transformation
imaging. The software, yes, it can be expensive, but when you think of the overall expense
and time saved, we think it’s roughly equivalent to RTI. It works well, the photogrammetry
does, with the geotagging workflow, which I’ll mention in just a moment. The software,
however, especially for manipulating the point clouds, can be a bit difficult to use.
One of the cons here of 3D recording in general is the size of the data and finding a way
to store it long term. I really have to throw up my hands on this one: we have no good
method of long-term data storage. The 3D data itself, these point clouds, can be rather
large, over a gigabyte per headstone. That said, what I recommend to cemeteries now is
that they store the underlying photographs, six to eight per headstone, which you would
want to take anyway for good documentation, and then the 3D data can be rebuilt on an as-needed
basis. You can always go back to the photos and generate this data.
Geotagging is, I think, an interesting technology, especially apropos the last presentation.
We use a Solmeta GPS. It’s relatively high accuracy, about three meters accuracy, at
least for a non-differentially corrected system. We could check these points using a Trimble
survey GPS where we were getting around two centimeters of accuracy, and under one centimeter
if we tried. The GPS unit does also have inertial correction with a triple-axis compass and
an IMU. We’ve sometimes observed accuracies of under three meters.
One of the advantages of this is that it’s only about 300 dollars. It’s added to the hot
shoe of the camera. Every time you take a shot, it gives you longitude, latitude, altitude,
and, I think this is also very useful, bearing. If we have a headstone with
multiple faces, it records the direction the camera was facing each time. This is just
freely added, so think of this replacing your photo log in the cemetery.
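As a sketch of what replacing the photo log can look like on the processing side, here is one way the geotags can be read back out of the photos with the Pillow library; it assumes the camera writes standard EXIF GPS tags and that a recent version of Pillow is installed.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def _to_degrees(dms, ref):
    # EXIF stores latitude/longitude as (degrees, minutes, seconds) rationals.
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ("S", "W") else deg

def read_geotag(path):
    # Pull the GPS block out of a photo's EXIF. Assumes a recent Pillow,
    # where the rational values convert cleanly with float().
    exif = Image.open(path)._getexif() or {}
    gps = {GPSTAGS.get(k, k): v for k, v in exif.get(34853, {}).items()}  # 34853 = GPSInfo
    if "GPSLatitude" not in gps:
        return None
    return {
        "lat": _to_degrees(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N")),
        "lon": _to_degrees(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E")),
        "alt_m": float(gps["GPSAltitude"]) if "GPSAltitude" in gps else None,
        "bearing_deg": float(gps["GPSImgDirection"]) if "GPSImgDirection" in gps else None,
    }

# Example (hypothetical path): print(read_geotag("photos/DSC_0001.JPG"))
```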
Also apropos the last presentation, it works nicely with Esri ArcGIS. Here we plotted the photos on a base map… not a particularly good one, just Google Earth imagery of the cemetery. We see the individual
color-coded photographs taken by Ian. Then he can click on the individual geotagged photos
and bring them up in our GIS, and all of this without any operator intervention. They’re
just sucked up into the software. Adobe Lightroom will also do this. That’s a package some
of you may use. You can just click on the individual photos and bring them up.
A lot of work, I think, still needs to be done in integrating GPS geotagging into archaeological
workflows. I’d be interested in talking with others who have applications using this
sort of technology. This was absolutely crucial for me in the field. If I’m recording rock
art in Jordan, within five days, with a team of, say, four or five students, we’re recording
15,000 or 16,000 photos, so we need a way of applying metadata automatically.
Let’s now look at some results here. The first thing I emphasize is that these technologies
are not miracle-working technologies. If there’s no depth remaining in the inscription, we’re
not going to get anything. Here’s a sandstone monument and some RTI enhancement applied
to it. Yes, we can get a few extra characters, but in the areas that are completely lost,
we’re just not going to get anything. I have to be careful not to overpromise on these
technologies, although I should say we’re continually surprised.
Within the Sir John A. Macdonald enclosure, we looked at one headstone that was heavily
stained and highly eroded: the headstone of Margaret Gilchrist. On the left-hand side,
we see a photo taken in 1984 of the monument, and on the right the monument around 2010. You can
see the legibility is severely degraded. Here is the RTI enhancement in 2010. You see many
of the characters are now more legible. This is a dynamic process, so we can go into
the software, apply that filter, and move the light around after the event, to focus
in on particular characters and try and get a better understanding of the inscription.
When you watch epigraphers use this, they’re often just moving the right [inaudible 00:23:19]
around continuously, waiting for their brain to spark and see the letters they want to
see. Let’s look at the photogrammetry here. This is a couple of years later. The stone has been cleaned a little bit, but you can see that there’s still been a lot of erosion on it. Here is a grayscale depth map of the same
stone. If we had time to compare this, you could see that we can see all of the same
things we see on the RTI and maybe even a few more characters. This was data captured
in about two minutes by an undergraduate student. Here’s color coding of the same depth information.
We’d also take this 3D surface and apply dynamic relighting to it within software like
MeshLab. Here we’re taking a surface, changing its texture to essentially be metal rather
than stone, and moving the light around until we get the light angles we want to reveal
the features in the inscription we hope to see.
Here’s another stone where there’s not a problem with weathering. It’s simply low
contrast. The inscription is nicely inscribed. It’s just very difficult to photograph.
Here’s a case where we’ve done RTI, and we can dynamically relight it to make it out
quite nicely. We have the comparable photogrammetry data of the same stone. I think this one turned
out really nicely, much less time and essentially the same results.
Within the Cataraqui Cemetery, we have a number of Chinese burials from the 20s, 30s,
and 40s. For various reasons, these have escaped recording by the Ontario Genealogical
Society. These are particularly endangered. They’re low to the ground and rather humble
monuments. We’ve done RTI enhancement of them as well as photogrammetry. You can see
here what the point cloud of this data looks like, which we can move around. This would be
about two to three million discrete measurements. One of the abiding problems we have in the
cemetery is biogrowth, like lichens and algae, on the headstones. These are very difficult to deal with using RTI. The lichen often disrupts the surface normals, and we simply can’t
get any good data. Here’s a very difficult headstone where we did RTI, we used various
raking-light positions, and we still couldn’t get it. You might say we could go and clean
it. I’ve had Master of Art Conservation students work on this a little bit, but it’s
extremely time consuming given the numbers of headstones we have.
We were pleasantly surprised by what photogrammetry could do in this application. The lichen still
often follows the indentations of the stone. Here’s a case where we have a headstone,
and we did photogrammetry of it, I think with just three shots, and we did depth
mapping here. You might say this is not very impressive, but in terms of trying to match
up this headstone with burial records, we can see at least the name Margaret here,
Grace, which may or may not be part of the name, and some other letters. This
in itself may be enough if we narrow down the location in the cemetery and have a
range of death and burial records we’re looking at. This may be sufficient
to match up the stone. We have many marble stones like this with extensive biogrowth
on them. Here’s an interesting stone. A multi-material
headstone with granite and a marble insert. We have a few of these in the cemetery. As
we’ll see in a moment, the marble insert is starting to bow out… the forces involved there are astonishing… and the front of the stone is beginning to spall. Here we did
RTI of it in 2010 to record the spalling area, so we got some important dates out of that.
Just this past year, the strain built up, and the stone snapped. The marble will simply
be stacked in front of the headstone. There’s no money for any conservation work on it.
What could photogrammetry do? Photogrammetry effectively recorded the bowing of the stone.
You can see this in the 3D model here. If this is something we wanted to monitor over
time, we could do it to high accuracy. This also has applications in conservation treatment, before and after, in terms of liability issues and other concerns. Then on the right, we can
see a depth map, with the filtering applied here. We can see that we get more or less the same
data as the RTI. Another problematic type of stone for us is weathered
marble. RTI has not worked well on this for us, especially where we have this vein-like patterning after the weathering. Here’s a raking-light photo, and here is an RTI enhancement. We’re
moving the light around, and it’s still really not giving us the information we want.
You can see here that the depth information is still there.
Here is a color-coded depth map of the same stone. It’s significantly improved. We can
see that the person buried is Emily Shibley, and we can make out other important numbers
on her headstone. I’m not looking here for perfection. Often, just part of a name is
sufficient for our purposes. Here’s the bottom of the same monument. We see less of
that vein-like patterning. On the top, you see an RTI enhancement, and the bottom a grayscale
depth map. We can see from this passage from Revelation 13 that we’re getting more or
less the same data out of the two techniques. I said at the beginning that RTI produces
surface normal maps. Here’s a color coding of those surface normal vectors. The blue
color indicates that the vector is pointing towards you, red and green that it’s away
from you. We can actually apply algorithms to surface normals to reconstruct the 3-dimensional
surface and to create a depth map, a grayscale depth map from it.
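One standard choice is Fourier-based integration of the gradients implied by the normals (the Frankot-Chellappa method); a bare-bones sketch, without the masking and smoothing a real pipeline would need:

```python
import numpy as np

def normals_to_depth(normals):
    # Integrate an H x W x 3 map of unit surface normals into a relative
    # depth map with the Fourier (Frankot-Chellappa) method.
    nz = np.where(np.abs(normals[..., 2]) < 1e-6, 1e-6, normals[..., 2])
    p = -normals[..., 0] / nz                 # dz/dx implied by the normals
    q = -normals[..., 1] / nz                 # dz/dy
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                         # avoid dividing by zero at the DC term
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                             # depth is only recovered up to an offset
    return np.real(np.fft.ifft2(Z))
```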
We did this, actually, as an apples-to-apples comparison between the photogrammetry and the RTI. We took that, and we took the word “right,” and we did a 3D visualization
of the RTI data here. I was always told that you can’t extract 3D data from the RTI, but here you can see, with this character “right,” as we move it around, real depth information, and you can see the precision of this technique. Now we’re producing a similar sort of data from surface normals as we do from photogrammetry, which produces
that XYZ data. We can also apply color coding according to depth and have a look at it that
way. If we now move to a photogrammetry 3D surface
plot… so, comparing apples to apples… we can do the same thing and look at the same
word. We can see we get a little bit more precision, less smoothing in this technique.
The accuracy here would be probably around 30 or 40 microns, given the lens and camera
combination. Here we see the color coding. I think we’ve proven that the two techniques
are essentially equivalent in terms of accuracy and the data they provide. We really believe
that the photogrammetry has the edge with headstones because of the speed of capture
in the field. There are lots of people who have helped out in this project, providing
funding and moral support and volunteer time. We’d like to thank them by way of this slide.
I should say one thing before I end here: I am always looking for collaborators
in this sort of work and headstones where we can really make a difference in terms of
bringing out genealogically or historically interesting features on a headstone using
these technologies. Please contact me if you have anything you’d like looked at. Thank
you.
