The Muninn Project aims to programmatically recreate scenes of historical events using Linked Open Data - and with the ever-increasing availability of high-quality 3D printers, we are motivated to 3D-print these scenes. In this post, we will talk about how to 3D-print a battlefield: the trenches of Vimy Ridge. We believe that 3D-printed models of battlefields could be quite useful to archaeologists and others studying past historical events, such as the Battle of Vimy Ridge. We will discuss how to retrieve 90m-resolution elevation data inside a bounding box from the Shuttle Radar Topography Mission (SRTM), how to scale & project it with the Geospatial Data Abstraction Library (GDAL), and how to convert it to an STL file that can be 3D-printed. We will also discuss how to retrieve lists of trench coordinates from the Muninn Project's SPARQL server, and how to extrude trenches on our model of Vimy Ridge before 3D-printing it. Lastly, we will discuss issues regarding the size & resolution of our model and suggest how we might improve its quality in the future. Thanks to Lawrence Willett for letting us use his 3D printer.
In order to 3D-print a battlefield, we first need to determine its bounding box. Currently, the Muninn Project's SPARQL server provides lists of trench coordinates for Vimy Ridge contained in the bounding box (2.70251452076269, 50.0620220454661, 2.75071474125368, 50.0722188449797), given as (minimum longitude, minimum latitude, maximum longitude, maximum latitude). We chose to 3D-print a model of the trenches of Vimy Ridge that lie inside of this bounding box.
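As a quick sanity check, the bounding box and a point-containment test can be expressed in a few lines of Python (our own tooling is written in Racket; this sketch is purely illustrative):

```python
# The bounding box as (min_lon, min_lat, max_lon, max_lat) - the order used above.
BBOX = (2.70251452076269, 50.0620220454661, 2.75071474125368, 50.0722188449797)

def in_bbox(lon, lat, bbox=BBOX):
    """Return True if a WGS84 coordinate falls inside the bounding box."""
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

print(in_bbox(2.72, 50.07))   # a point on Vimy Ridge -> True
print(in_bbox(2.35, 48.85))   # Paris -> False
```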
After determining the bounding box for our battlefield, we need to retrieve the elevation data inside of it. We've built software - written in a modern descendant of Scheme known as Racket - that allows us to retrieve raster images of elevation data inside of bounding boxes. Our software also allows us to UTM-project our raster images and scale them to 1m resolution, which is necessary in order for us to extrude trenches on a 3D-printable model of our battlefield; we use GDAL behind the scenes. Here is a projected & scaled raster image of elevation data for our model of the trenches of Vimy Ridge:
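Behind the scenes, this step amounts to a gdalwarp invocation. A minimal Python sketch of building that command follows - the filenames are hypothetical, and EPSG:32631 (UTM zone 31N, which covers Vimy) is our assumption about the target projection:

```python
def gdalwarp_command(src, dst, utm_epsg="EPSG:32631", res=1):
    """Build a gdalwarp invocation that UTM-projects an SRTM tile and
    resamples it to `res`-metre pixels. Bilinear resampling smooths the
    coarse 90m SRTM cells when interpolating up to 1m resolution."""
    return ["gdalwarp",
            "-t_srs", utm_epsg,          # target projection (UTM zone 31N)
            "-tr", str(res), str(res),   # target pixel size in metres
            "-r", "bilinear",            # resampling method
            src, dst]

cmd = gdalwarp_command("srtm_tile.tif", "vimy_utm_1m.tif")
print(" ".join(cmd))
```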
We then create a raster image of trench data for Vimy Ridge with the same origin, dimensions, resolution and projection as our raster image of elevation data. We retrieve lists of trench coordinates inside of our bounding box using the Muninn Project's SPARQL server; as mentioned, all of the trench data that we currently have for Vimy Ridge is contained within the bounding box we chose for our model. Before getting lists of trench coordinates, we first find all known World War 1 objects inside of this bounding box:
SELECT DISTINCT ?thing ?state {
  ?thing <http://geovocab.org/geometry#geometry> ?geoma .
  ?thing <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://rdf.muninn-project.org/ontologies/military#MilitaryTrench> .
  ?geoma <http://linkedgeodata.org/ontology/posSeq> ?SEQ .
  ?SEQ <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/1999/02/22-rdf-syntax-ns#Seq> .
  ?SEQ ?List ?node .
  ?node <http://www.w3.org/2003/01/geo/wgs84_pos#lat> ?LAT .
  ?node <http://www.w3.org/2003/01/geo/wgs84_pos#long> ?LONG .
  OPTIONAL { ?thing <http://rdf.muninn-project.org/ontologies/graves#hasState> ?state . }
  FILTER (?LAT < 50.0722188449797 && ?LAT > 50.0620220454661)
  FILTER (?LONG < 2.75071474125368 && ?LONG > 2.70251452076269)
}
Then, we find lists of trench coordinates for each trench object, after filtering out known WW1 objects that are not trenches. For each trench object, we execute the following SPARQL query, giving us a list of trench coordinates:
SELECT ?List ?LAT ?LONG {
  <trench> <http://geovocab.org/geometry#geometry> ?geoma .
  <trench> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://rdf.muninn-project.org/ontologies/military#MilitaryTrench> .
  ?geoma <http://linkedgeodata.org/ontology/posSeq> ?SEQ .
  ?SEQ <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.w3.org/1999/02/22-rdf-syntax-ns#Seq> .
  ?SEQ ?List ?node .
  ?node <http://www.w3.org/2003/01/geo/wgs84_pos#lat> ?LAT .
  ?node <http://www.w3.org/2003/01/geo/wgs84_pos#long> ?LONG .
  FILTER (?LAT < 50.0722188449797 && ?LAT > 50.0620220454661)
  FILTER (?LONG < 2.75071474125368 && ?LONG > 2.70251452076269)
}
ORDER BY ASC(?List)
where <trench> is the Uniform Resource Identifier (URI) for the trench object.
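Because the trench raster shares its origin, resolution and projection with the elevation raster, each UTM-projected trench coordinate maps to a pixel through a GDAL-style affine geotransform. A minimal sketch, with illustrative origin values:

```python
def world_to_pixel(geotransform, x, y):
    """Map projected (easting, northing) coordinates to (col, row) pixel
    indices using a GDAL-style affine geotransform:
    (origin_x, pixel_width, 0, origin_y, 0, -pixel_height)."""
    origin_x, pixel_w, _, origin_y, _, pixel_h = geotransform
    col = int((x - origin_x) / pixel_w)
    row = int((y - origin_y) / pixel_h)  # pixel_h is negative for north-up rasters
    return col, row

# 1m pixels, origin at the raster's north-west corner (illustrative values)
gt = (550000.0, 1.0, 0.0, 5546000.0, 0.0, -1.0)
print(world_to_pixel(gt, 550123.0, 5545900.0))  # -> (123, 100)
```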
We then UTM-project each list of trench coordinates, and afterwards sort them by finding the shortest path between the two coordinates closest and farthest from the origin in each list. Finally, we produce a raster image of our trench data for Vimy Ridge: for each projected & sorted list of trench coordinates we create a "drawing pen", set its line-width to the desired width of our trenches, set the grayscale value of its colour to the desired depth of our trenches and then draw lines between each sliding pair of coordinates in each list of coordinates, saving our drawing as a raster image:
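The sorting and drawing steps can be sketched in Python (our actual software is written in Racket; the greedy nearest-neighbour ordering and the one-pixel "pen" below are illustrative simplifications):

```python
import math

def order_path(points):
    """Greedily order trench coordinates: start at the point closest to
    the origin and repeatedly hop to the nearest unvisited point, tracing
    a path out to the farthest coordinate."""
    remaining = sorted(points, key=lambda p: math.hypot(p[0], p[1]))
    path = [remaining.pop(0)]
    while remaining:
        last = path[-1]
        nxt = min(remaining, key=lambda p: math.hypot(p[0] - last[0], p[1] - last[1]))
        remaining.remove(nxt)
        path.append(nxt)
    return path

def draw_trench(grid, path, depth=64):
    """Act as the "drawing pen": for each sliding pair of ordered pixel
    coordinates, paint a straight line of grayscale value `depth`,
    sampling along the longer axis so the line has no gaps."""
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(steps + 1):
            x = round(x0 + (x1 - x0) * i / steps)
            y = round(y0 + (y1 - y0) * i / steps)
            grid[y][x] = depth
    return grid

grid = [[0] * 5 for _ in range(5)]
path = order_path([(4, 4), (0, 0), (2, 2)])
draw_trench(grid, path)
print(path)        # -> [(0, 0), (2, 2), (4, 4)]
print(grid[1][1])  # -> 64  (on the drawn diagonal)
```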
Once we have projected & scaled raster images for both our elevation data and our trench data, we create a trench-extruded terrain mesh in Blender, which we can export to an STL file for 3D-printing. First, we create a plane mesh with the same dimensions as our raster images, subdividing it so that it has the same resolution as our raster images as well. Next, we add a displace modifier to set the height value of each point on our plane to the grayscale colour value of the corresponding pixel on our elevation raster image. Lastly, we add another displace modifier, extruding trenches with the raster image of trench data that we created:
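Outside Blender, the combined effect of the two displace modifiers on a grid of vertices can be sketched as follows (the scale and depth parameters are illustrative, not the values we use):

```python
def displace(heights, trench_mask, z_scale=1.0, trench_depth=5.0):
    """Mimic the two displace modifiers on a vertex grid: the first
    raises each vertex by the elevation raster's grayscale value, the
    second lowers it wherever the trench raster was drawn on."""
    rows, cols = len(heights), len(heights[0])
    return [[heights[r][c] * z_scale
             - (trench_depth if trench_mask[r][c] else 0.0)
             for c in range(cols)]
            for r in range(rows)]

elev   = [[100.0, 101.0], [102.0, 103.0]]
trench = [[0, 1], [0, 0]]   # one trench pixel at row 0, column 1
print(displace(elev, trench))  # -> [[100.0, 96.0], [102.0, 103.0]]
```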
Finally, we export the trench-extruded terrain mesh that we created as an STL file, which we send to the 3D printer:
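For readers without Blender, the top surface of a height grid can be exported as ASCII STL by hand - two triangles per grid cell. A minimal sketch (most slicers recompute normals, so we write them as zeros):

```python
def heightmap_to_stl(heights, name="terrain"):
    """Triangulate a height grid (two triangles per cell) and emit the
    top surface as ASCII STL text."""
    lines = [f"solid {name}"]
    rows, cols = len(heights), len(heights[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            a = (c,     r,     heights[r][c])
            b = (c + 1, r,     heights[r][c + 1])
            d = (c,     r + 1, heights[r + 1][c])
            e = (c + 1, r + 1, heights[r + 1][c + 1])
            for tri in ((a, b, d), (b, e, d)):
                lines.append("  facet normal 0 0 0")
                lines.append("    outer loop")
                for x, y, z in tri:
                    lines.append(f"      vertex {x} {y} {z}")
                lines.append("    endloop")
                lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl = heightmap_to_stl([[0.0, 1.0], [2.0, 3.0]])
print(stl.count("facet normal"))  # a 2x2 grid has one cell -> 2 triangles
```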
We were able to produce a 3D-printed model of the trenches of Vimy Ridge with comparable quality & resolution to that of our digital 3D model. In the future, we'd like to increase the size of our 3D-printed model to include more of Vimy Ridge, namely the ridge itself. This may prove somewhat difficult because the polygon count of our trench-extruded terrain mesh in Blender, and hence our STL file, might grow too high to view, export and print; in other words, we might hit our current hardware limitations when we try to 3D-print a larger battlefield. However, we might be able to overcome these potential limitations by (a) increasing our computing power, (b) using Blender as a library to avoid the need to render our trench-extruded terrain mesh in Blender before exporting it, or (c) printing separate parts of our model and gluing them together afterwards.
We encourage you to try printing your own battlefield using the methods & software provided in this blog post - let us know how you made out! Stay tuned for more information on 3D-printing your own historical battlefields.
P.S. This is what it looks like when 3D-printing fails: