<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xml:base="https://blog.muninn-project.org"  xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel>
 <title>The Muninn Project - SPARQL</title>
 <link>https://blog.muninn-project.org/taxonomy/term/13</link>
 <description></description>
 <language>en</language>
<item>
 <title>CanLink: Linked Open Theses</title>
 <link>https://blog.muninn-project.org/node/121</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;h3&gt;
	&lt;a href=&quot;http://canlink.library.ualberta.ca&quot;&gt;CanaLien : un projet de données liées pour les thèses Canadiennes - CanLink : a linked data project for Canadian theses&lt;/a&gt; is now online!&lt;/h3&gt;
&lt;p&gt;CanLink is a collection of thesis data from &lt;a href=&quot;#instittutions&quot; style=&quot;font-family: arial, helvetica, sans-serif;&quot;&gt;collaborating institutions&lt;/a&gt; that are part of the &lt;a href=&quot;https://connect.library.utoronto.ca/display/U5LD/Canadian+Linked+Data+Initiative+Home&quot; style=&quot;font-family: arial, helvetica, sans-serif;&quot;&gt;Canadian Linked Data Initiative&lt;/a&gt;. It features over 5,000 theses from participating Canadian universities&lt;a href=&quot;#instittutions&quot;&gt;[1]&lt;/a&gt; on a broad range of topics, from &quot;&lt;a href=&quot;http://canlink.library.ualberta.ca/subject/d2c5ba6561ecdf514120cc85ea2f37b0&quot;&gt;post-humans&lt;/a&gt;&quot; to &quot;&lt;a href=&quot;http://canlink.library.ualberta.ca/subject/936a5bcd638823a8783d82d76c11bf3b&quot;&gt;mechano-electric feedback&lt;/a&gt;&quot;, with new theses being added on an ongoing basis. The project is an initiative of the Digital Projects committee of the Canadian Linked Data Initiative, with the development work done by Sharon Farnel, Rob Warren and Maharsh Patel&lt;a href=&quot;#mahrash&quot;&gt;[2]&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The data set is described in &lt;a href=&quot;http://canlink.library.ualberta.ca/void/canlinkmaindataset&quot;&gt;void / dcat format&lt;/a&gt; and is also registered in the &lt;a href=&quot;https://old.datahub.io/dataset/can-link&quot;&gt;Data Hub&lt;/a&gt;. The virtual machine is provided by West Grid and the domain name is provided by the University of Alberta.&lt;/p&gt;
&lt;!--break--&gt;&lt;h3&gt;
	Getting the data&lt;/h3&gt;
&lt;p&gt;The data is made available through a Linked Open Data interface and a website permits &lt;a href=&quot;http://canlink.library.ualberta.ca/searchOnline.html&quot;&gt;simple querying&lt;/a&gt; of the data. A &lt;a href=&quot;http://canlink.library.ualberta.ca/downloads/&quot;&gt;download page&lt;/a&gt; is available for bulk retrieval of the raw data itself, as well as the &lt;a href=&quot;http://canlink.library.ualberta.ca/downloads/new_csh.nt.gz&quot;&gt;full Canadian Subject Headings dataset&lt;/a&gt; as re-hosted by CanLink. Individual records can be retrieved through their URL identifiers. Let&#039;s look at the thesis titled &quot;Isolation and identification of the flavouring principle in maple syrup&quot; (in 1925!) by Robinson. The thesis record itself can be retrieved in several formats, including &lt;a href=&quot;http://canlink.library.ualberta.ca/thesis/bffefd164e1a27d50e901670da6d0e9e.rdf&quot;&gt;rdf&lt;/a&gt;/xml, &lt;a href=&quot;http://canlink.library.ualberta.ca/thesis/bffefd164e1a27d50e901670da6d0e9e.ttl&quot;&gt;turtle&lt;/a&gt;, &lt;a href=&quot;http://canlink.library.ualberta.ca/thesis/bffefd164e1a27d50e901670da6d0e9e.json&quot;&gt;json&lt;/a&gt;-ld, &lt;a href=&quot;http://canlink.library.ualberta.ca/thesis/bffefd164e1a27d50e901670da6d0e9e.n3&quot;&gt;ntriples&lt;/a&gt;, &lt;a href=&quot;http://canlink.library.ualberta.ca/thesis/bffefd164e1a27d50e901670da6d0e9e.bib&quot;&gt;bibtex&lt;/a&gt; and &lt;a href=&quot;http://canlink.library.ualberta.ca/thesis/bffefd164e1a27d50e901670da6d0e9e.ris&quot;&gt;ris&lt;/a&gt; by simply adding the extension to the URL or going through &lt;a href=&quot;https://www.w3.org/Protocols/rfc2616/rfc2616-sec12.html&quot;&gt;HTTP content negotiation&lt;/a&gt;.&lt;/p&gt;
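&lt;p&gt;As a quick sketch (our own illustration, not part of the CanLink codebase), the extension-based retrieval can be scripted; the format-to-extension mapping below is taken directly from the links in this post:&lt;/p&gt;

```python
# Build retrieval URLs for a CanLink thesis record. The base URL and the
# record hash are real; the function and dictionary names are our own.

BASE = "http://canlink.library.ualberta.ca/thesis/"

EXTENSIONS = {
    "rdfxml": "rdf",
    "turtle": "ttl",
    "jsonld": "json",
    "ntriples": "n3",
    "bibtex": "bib",
    "ris": "ris",
}

def record_url(record_hash, fmt):
    """Return the URL that serves the given record in the requested format."""
    return BASE + record_hash + "." + EXTENSIONS[fmt]

if __name__ == "__main__":
    maple_syrup = "bffefd164e1a27d50e901670da6d0e9e"
    for fmt in sorted(EXTENSIONS):
        print(fmt, record_url(maple_syrup, fmt))
```

&lt;p&gt;The same representations can also be requested with an HTTP Accept header instead of an extension.&lt;/p&gt;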
&lt;p&gt;&lt;a name=&quot;instittutions&quot; id=&quot;instittutions&quot;&gt;&lt;sup&gt;1&lt;/sup&gt;&lt;/a&gt;. University of British Columbia, University of Alberta, Library and Archives Canada/Bibliothèque et Archives Canada, Queen&#039;s University, University of Toronto, McGill University, Université de Montréal and Memorial University of Newfoundland.&lt;/p&gt;
&lt;p&gt;&lt;a name=&quot;mahrash&quot; id=&quot;mahrash&quot;&gt;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;. Maharsh was supported by Young Canada Works and the University of Alberta Library.&lt;/p&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;form-item form-type-item&quot;&gt;
  &lt;label&gt;Language &lt;/label&gt;
 English
&lt;/div&gt;
&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-above&quot;&gt;&lt;div class=&quot;field-label&quot;&gt;Tags:&amp;nbsp;&lt;/div&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/123&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;#accessyxe&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/49&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;lod&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/124&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;canlink&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/125&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;cldi&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/126&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;canaliens&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/127&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;thesis&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/13&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;SPARQL&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Fri, 29 Sep 2017 15:59:17 +0000</pubDate>
 <dc:creator>warren</dc:creator>
 <guid isPermaLink="false">121 at https://blog.muninn-project.org</guid>
 <comments>https://blog.muninn-project.org/node/121#comments</comments>
</item>
<item>
 <title>Print your own Battlefield</title>
 <link>https://blog.muninn-project.org/node/89</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;The Muninn Project aims to programmatically recreate scenes of historical events using &lt;a href=&quot;http://lod-cloud.net/&quot;&gt;Linked Open Data&lt;/a&gt; - and with the ever-increasing availability of high-quality 3D printers, we are motivated to 3D-print these scenes. In this particular post, we will talk about how to 3D-print a battlefield: the trenches of Vimy Ridge. We believe that 3D-printed models of battlefields, such as the trenches of Vimy Ridge, could be quite useful to archeologists &amp;amp; other individuals studying past historical events, namely the &lt;a href=&quot;http://en.wikipedia.org/wiki/Battle_of_Vimy_Ridge&quot;&gt;Battle of Vimy Ridge&lt;/a&gt;. We will discuss how to retrieve 90m-resolution elevation data inside a bounding box from the &lt;a href=&quot;http://www2.jpl.nasa.gov/srtm/&quot;&gt;Shuttle Radar Topography Mission (SRTM)&lt;/a&gt;, how to scale &amp;amp; project it with the &lt;a href=&quot;http://www.gdal.org/&quot;&gt;Geospatial Data Abstraction Library (GDAL)&lt;/a&gt; and also how to convert it to an &lt;a href=&quot;http://en.wikipedia.org/wiki/STL_%28file_format%29&quot;&gt;STL file&lt;/a&gt; that can be 3D-printed; we will also discuss how to retrieve lists of trench coordinates from the Muninn Project&#039;s &lt;a href=&quot;http://rdf.muninn-project.org/sparql&quot;&gt;SPARQL server&lt;/a&gt;, and how to extrude trenches on our model of Vimy Ridge before 3D-printing it. Lastly, we will discuss issues regarding the size &amp;amp; resolution of our model of Vimy Ridge and suggest how we might improve the quality of our model in the future. Thanks to Lawrence Willett for letting us use his 3D printer.&lt;/p&gt;
&lt;p&gt;In order to 3D-print a battlefield, we first need to determine its bounding box. Currently, the Muninn Project&#039;s SPARQL server provides lists of trench coordinates for Vimy Ridge contained in the bounding box (west, south, east, north) = (2.70251452076269, 50.0620220454661, 2.75071474125368, 50.0722188449797). We chose to 3D-print a model of the trenches of Vimy Ridge that lie inside of this bounding box.&lt;/p&gt;
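&lt;p&gt;For readers following along, the bounding-box test can be written as a few lines of Python (a sketch of ours; the constants are the Vimy Ridge coordinates given above, in west, south, east, north order):&lt;/p&gt;

```python
# Vimy Ridge bounding box from the post, WGS84 degrees.
WEST, SOUTH, EAST, NORTH = (2.70251452076269, 50.0620220454661,
                            2.75071474125368, 50.0722188449797)

def inside_bbox(lon, lat):
    """True if a WGS84 coordinate falls inside the Vimy Ridge bounding box."""
    return lon >= WEST and EAST >= lon and lat >= SOUTH and NORTH >= lat

if __name__ == "__main__":
    print(inside_bbox(2.72, 50.07))   # a point on the ridge: True
    print(inside_bbox(2.35, 48.85))   # Paris: False
```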
&lt;p&gt;After determining the bounding box for our battlefield, we need to retrieve the elevation data inside of it. We&#039;ve built &lt;a href=&quot;https://github.com/markfarrell/elevation&quot;&gt;software&lt;/a&gt; - written in a modern descendant of Scheme known as &lt;a href=&quot;http://racket-lang.org/&quot;&gt;Racket&lt;/a&gt; - that allows us to retrieve raster images of elevation data inside of bounding boxes. Our software also allows us to &lt;a href=&quot;http://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system&quot;&gt;UTM-project&lt;/a&gt; and scale our raster images to 1m resolution, which is necessary in order for us to extrude trenches on a 3D-printable model of our battlefield; we use GDAL behind the scenes. Here is a projected &amp;amp; scaled raster image of elevation data for our model of the trenches of Vimy Ridge:&lt;/p&gt;
&lt;p&gt;&lt;img alt=&quot;&quot; src=&quot;http://i.imgur.com/8SyJsa3.png&quot; style=&quot;width: 500px; height: 162px;&quot; /&gt;&lt;/p&gt;
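&lt;p&gt;The raster above encodes elevation as grayscale. As a simplified sketch of what happens behind the scenes (the project uses GDAL for this; the function here is our own illustration), elevation samples can be linearly rescaled to 8-bit pixel values:&lt;/p&gt;

```python
def to_grayscale(elevations):
    """Linearly rescale elevation samples (metres) to 0-255 grayscale values."""
    lo, hi = min(elevations), max(elevations)
    if hi == lo:
        # A flat scanline maps to a single grayscale value.
        return [0 for _ in elevations]
    return [round(255 * (e - lo) / (hi - lo)) for e in elevations]

if __name__ == "__main__":
    # A toy scanline of SRTM-style elevations around Vimy Ridge.
    print(to_grayscale([60, 100, 145]))  # [0, 120, 255]
```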
&lt;p&gt;We then &lt;a href=&quot;https://github.com/markfarrell/trenches&quot;&gt;create a raster image&lt;/a&gt; of trench data for Vimy Ridge with the same origin, dimensions, resolution and projection as our raster image of elevation data. First, we retrieve lists of trench coordinates inside of our bounding box using the Muninn Project&#039;s SPARQL server; as mentioned, currently all of the trench data that we have for Vimy Ridge is contained within the bounding box we chose to use for our model. Before getting lists of trench coordinates, we find all known World War 1 objects inside of this bounding box:&lt;/p&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;1&quot; cellspacing=&quot;1&quot; style=&quot;width: 500px;&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;pre&gt;
SELECT DISTINCT ?thing ?state {
  ?thing &amp;lt;http://geovocab.org/geometry#geometry&amp;gt; ?geoma .
  ?thing &amp;lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#type&amp;gt;
         &amp;lt;http://rdf.muninn-project.org/ontologies/military#MilitaryTrench&amp;gt; .
  ?geoma &amp;lt;http://linkedgeodata.org/ontology/posSeq&amp;gt; ?SEQ .
  ?SEQ &amp;lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#type&amp;gt;
       &amp;lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#Seq&amp;gt; .
  ?SEQ ?List ?node .
  ?node &amp;lt;http://www.w3.org/2003/01/geo/wgs84_pos#lat&amp;gt; ?LAT .
  ?node &amp;lt;http://www.w3.org/2003/01/geo/wgs84_pos#long&amp;gt; ?LONG .
  OPTIONAL { ?thing &amp;lt;http://rdf.muninn-project.org/ontologies/graves#hasState&amp;gt; ?state . }
  FILTER (?LAT  &amp;lt; 50.0722188449797 &amp;amp;&amp;amp; ?LAT  &amp;gt; 50.0620220454661)
  FILTER (?LONG &amp;lt; 2.75071474125368 &amp;amp;&amp;amp; ?LONG &amp;gt; 2.70251452076269)
}&lt;/pre&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
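&lt;p&gt;A query like the one above can be submitted to the endpoint over plain HTTP. Here is a minimal Python sketch (ours, not the project&#039;s tooling; we assume the endpoint accepts the standard query parameter and the SPARQL JSON results media type):&lt;/p&gt;

```python
import urllib.parse
import urllib.request

ENDPOINT = "http://rdf.muninn-project.org/sparql"

def sparql_request(query):
    """Build a GET request asking the endpoint for SPARQL JSON results.

    The request is only constructed here; pass it to
    urllib.request.urlopen(req) to actually run the query.
    """
    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": query})
    return urllib.request.Request(
        url, headers={"Accept": "application/sparql-results+json"})

if __name__ == "__main__":
    req = sparql_request("SELECT ?thing WHERE { ?thing ?p ?o } LIMIT 5")
    print(req.get_full_url())
```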
&lt;p&gt;Then, we find lists of trench coordinates for each trench object, after filtering out known WW1 objects that are not trenches. For each trench object, we execute the following SPARQL query, giving us a list of trench coordinates:&lt;/p&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;1&quot; cellspacing=&quot;1&quot; style=&quot;width: 500px;&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;pre&gt;
SELECT ?List ?LAT ?LONG {
  &amp;lt;&lt;em&gt;trench&lt;/em&gt;&amp;gt; &amp;lt;http://geovocab.org/geometry#geometry&amp;gt; ?geoma .
  &amp;lt;&lt;em&gt;trench&lt;/em&gt;&amp;gt; &amp;lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#type&amp;gt;
           &amp;lt;http://rdf.muninn-project.org/ontologies/military#MilitaryTrench&amp;gt; .
  ?geoma &amp;lt;http://linkedgeodata.org/ontology/posSeq&amp;gt; ?SEQ .
  ?SEQ &amp;lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#type&amp;gt;
       &amp;lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#Seq&amp;gt; .
  ?SEQ ?List ?node .
  ?node &amp;lt;http://www.w3.org/2003/01/geo/wgs84_pos#lat&amp;gt; ?LAT .
  ?node &amp;lt;http://www.w3.org/2003/01/geo/wgs84_pos#long&amp;gt; ?LONG .
  FILTER (?LAT  &amp;lt; 50.0722188449797 &amp;amp;&amp;amp; ?LAT  &amp;gt; 50.0620220454661)
  FILTER (?LONG &amp;lt; 2.75071474125368 &amp;amp;&amp;amp; ?LONG &amp;gt; 2.70251452076269)
} ORDER BY ASC(?List)&lt;/pre&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;where &lt;em&gt;trench&lt;/em&gt; is the &lt;a href=&quot;http://en.wikipedia.org/wiki/Uniform_resource_identifier&quot;&gt;Uniform Resource Identifier (URI)&lt;/a&gt; for the trench object.&lt;/p&gt;
&lt;p&gt;We then &lt;a href=&quot;https://github.com/markfarrell/utm&quot;&gt;UTM-project&lt;/a&gt; each list of trench coordinates, and afterwards &lt;a href=&quot;https://github.com/markfarrell/sort-by-distance&quot;&gt;sort&lt;/a&gt; them by finding the shortest path between the two coordinates closest and farthest from the origin in each list. Finally, we produce a raster image of our trench data for Vimy Ridge: for each projected &amp;amp; sorted list of trench coordinates we create a &quot;drawing pen&quot;, set its line-width to the desired width of our trenches, set the grayscale value of its colour to the desired depth of our trenches and then draw lines between each sliding pair of coordinates in each list of coordinates, saving our drawing as a raster image:&lt;/p&gt;
&lt;p&gt;&lt;img alt=&quot;&quot; src=&quot;http://i.imgur.com/zL6pO8U.png&quot; style=&quot;width: 496px; height: 161px; border-width: 2px; border-style: solid;&quot; /&gt;&lt;/p&gt;
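&lt;p&gt;The &quot;sliding pair&quot; step is just consecutive pairing over each list of coordinates, which yields the line segments of one trench (a sketch of ours):&lt;/p&gt;

```python
def sliding_pairs(coords):
    """Return consecutive coordinate pairs: the line segments of one trench."""
    return list(zip(coords, coords[1:]))

if __name__ == "__main__":
    # A toy trench as (longitude, latitude) points near Vimy Ridge.
    trench = [(2.710, 50.065), (2.712, 50.066), (2.715, 50.066)]
    for a, b in sliding_pairs(trench):
        print(a, "->", b)
```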
&lt;p&gt;Once we have projected &amp;amp; scaled raster images for both our elevation data and trench data, we can create a trench-extruded terrain mesh in &lt;a href=&quot;http://www.blender.org/&quot;&gt;Blender&lt;/a&gt; and export it to an STL file for 3D-printing. First, we create a plane mesh with the same dimensions as our raster images, subdividing it so that it has the same resolution as our raster images as well. Next, we add a &lt;em&gt;displace modifier&lt;/em&gt; to set the height values of each point on our plane to each corresponding grayscale colour value of pixels on our elevation raster image. Lastly, we add another &lt;em&gt;displace modifier&lt;/em&gt;, extruding trenches with the raster image of trench data that we created:&lt;/p&gt;
&lt;p&gt;&lt;img alt=&quot;&quot; src=&quot;http://i.imgur.com/T9CGAdg.png?2&quot; style=&quot;width: 500px; height: 167px;&quot; /&gt;&lt;/p&gt;
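&lt;p&gt;Conceptually, the two displace modifiers amount to subtracting a trench mask from the elevation heightmap. A toy Python version of the combined effect (our simplification of what Blender computes; the trench depth here is a made-up illustrative value):&lt;/p&gt;

```python
def displace(elevation, trench, trench_depth=3.0):
    """Combine the two displace passes on a heightmap.

    elevation: 2D list of heights in metres.
    trench:    2D list of 0/1 values marking trench pixels.
    Returns the elevation grid with trench_depth subtracted wherever a
    trench pixel is set.
    """
    return [
        [h - trench_depth * t for h, t in zip(erow, trow)]
        for erow, trow in zip(elevation, trench)
    ]

if __name__ == "__main__":
    elev = [[100.0, 101.0], [102.0, 103.0]]
    mask = [[0, 1], [1, 0]]
    print(displace(elev, mask))  # [[100.0, 98.0], [99.0, 103.0]]
```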
&lt;p&gt;Finally, we export the trench-extruded terrain mesh that we created as an STL file, which we send to the 3D printer:&lt;/p&gt;
&lt;p&gt;&lt;img alt=&quot;&quot; src=&quot;http://i.imgur.com/5IEthbD.png?2&quot; style=&quot;height: 172px; width: 500px;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;We were able to produce a 3D-printed model of the trenches of Vimy Ridge with comparable quality &amp;amp; resolution to that of our digital 3D model. In the future, we&#039;d like to increase the size of our 3D-printed model to include more of Vimy Ridge, namely the ridge itself. This may prove somewhat difficult because the polygon count of our trench-extruded terrain mesh in Blender, and hence our STL file, might grow too high to view, export and print; in other words, we might hit our current hardware limitations when we try to 3D-print a larger battlefield. We might, however, be able to overcome these potential limitations by (a) increasing our computing power, (b) using Blender &lt;em&gt;as a library&lt;/em&gt; to avoid the need to render our trench-extruded terrain mesh in Blender before exporting it, or (c) printing separate parts of our model and gluing them together afterwards.&lt;/p&gt;
&lt;p&gt;We encourage you to try printing your own battlefield using the methods &amp;amp; software provided in this blog post - let us know how you made out! Stay tuned for more information on 3D-printing your own historical battlefields.&lt;/p&gt;
&lt;p&gt;P.S. This is what it looks like when 3D-printing fails:&lt;/p&gt;
&lt;table border=&quot;1&quot; cellpadding=&quot;1&quot; cellspacing=&quot;1&quot; style=&quot;width: 500px;&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
				&lt;img alt=&quot;&quot; src=&quot;http://i.imgur.com/wxELaNv.png?1&quot; style=&quot;width: 216px; height: 80px;&quot; /&gt;&lt;/td&gt;
&lt;td&gt;
				&lt;img alt=&quot;&quot; src=&quot;http://i.imgur.com/gMvNZHe.png?1&quot; style=&quot;width: 270px; height: 80px;&quot; /&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;form-item form-type-item&quot;&gt;
  &lt;label&gt;Language &lt;/label&gt;
 English
&lt;/div&gt;
&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-above&quot;&gt;&lt;div class=&quot;field-label&quot;&gt;Tags:&amp;nbsp;&lt;/div&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/93&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;3D Printing&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/90&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Vimy Ridge&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/70&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Trenches&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/89&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Elevation&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/91&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;SRTM&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/13&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;SPARQL&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/92&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;GDAL&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; 
rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/81&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Racket&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/96&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Scheme&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/97&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;LISP&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/87&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Blender&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Fri, 27 Mar 2015 15:11:43 +0000</pubDate>
 <dc:creator>m4farrel</dc:creator>
 <guid isPermaLink="false">89 at https://blog.muninn-project.org</guid>
 <comments>https://blog.muninn-project.org/node/89#comments</comments>
</item>
<item>
 <title>Why you need a SPARQL server</title>
 <link>https://blog.muninn-project.org/node/68</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;The original title to this blog post was supposed to be &quot;&lt;em&gt;Hardening SPARQL Servers in the wild&lt;/em&gt;&quot;. But I&#039;ve since changed it to &quot;&lt;em&gt;Why you need a SPARQL server&lt;/em&gt;&quot; after reading a number of articles critical of SPARQL while at the same time juggling RDF/OWL sources without SPARQL stores and a multitude of APIs. The benefits of having a machine readable export format is gaining traction with data providers as a data-delivery model. However, the lack of support for search and discovery is still hampering data-delivery. If you are serious about delivering RDF/OWL data, having a SPARQL server is the best way to make your data available to the broadest possible audience while keeping your bandwidth costs down.&lt;/p&gt;
&lt;h2&gt;
	Classic Web Search Engines and Data Dumps&lt;/h2&gt;
&lt;p&gt;Many data providers still think of data delivery in terms of the classic web search model: external search engines crawl the web site and then answer queries from end-users. Certainly that is one of the underlying philosophies behind &lt;a href=&quot;http://schema.org/&quot;&gt;schema.org&lt;/a&gt; and &lt;a href=&quot;http://www.w3.org/TR/xhtml-rdfa-primer/&quot;&gt;rdfa&lt;/a&gt;: make each webpage easier to parse semantically by search engines so that they can answer queries or aggregate information accurately. This approach makes sense for a general information retrieval engine, but if your content is niche, a consumer-grade search engine is unlikely to support you or your users. The query language is likely to be &#039;human friendly&#039;, which means the results will be tweaked to the search engine&#039;s preferences instead of your own specific request. Content people tend to think in terms of &quot;The Document&quot; being an html page while data people tend to think of the individual nodes within the RDF as the document. Not all data out there fits the &quot;One Node, One Document, One HTML Page&quot; paradigm. You can make the two ideas co-exist; just be aware that some webpages will get indexed that look a bit odd.&lt;/p&gt;
&lt;p&gt;Another approach to data delivery is the data dump where you make the entirety of your data downloadable from a link on your site. This works especially well for data that is stable or only periodically updated. Of course, if your users only need one item out of the entire site, they are downloading a lot of data for no other reason than to search for it. If your dataset is extremely popular, your bandwidth utilization will go up. &lt;/p&gt;
&lt;p&gt;Erik Mill writes in an Oct 2, 2013 blog post that &lt;em&gt;&lt;a class=&quot;bookmark&quot; href=&quot;http://sunlightfoundation.com/blog/2013/10/02/government-apis-arent-a-backup-plan/&quot;&gt;Government APIs Aren&#039;t A Backup Plan&lt;/a&gt;&lt;/em&gt;, his point being that during the &lt;a href=&quot;http://en.wikipedia.org/wiki/United_States_federal_government_shutdown_of_2013&quot;&gt;US government shutdown&lt;/a&gt; most of the data APIs provided by the government were likely to be offline, and thus data dumps were the only way of ensuring appropriate retention of the data. This is obviously a problem: if there exists only one source for the data and only one copy on the website, the data is in danger of disappearing.&lt;/p&gt;
&lt;p&gt;The problem is that there is a limit to our capacity to manage the Download, Extract, Transform, Load processes that this model entails. If the data is that valuable, you may want to keep a complete copy of the collection, but in practice only a small portion of any given dataset is likely to be of value to you. Perhaps a better way to look at data transfer and retention within the linked open data world is through longer term web caching. Most retrievals (actually SPARQL queries too) in the linked open data world are through an HTTP connection. Through the use of (smarter) SPARQL servers, or inline web caching proxies, the data is retained as long as it is needed, until it expires. You can even calculate the &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;Expires&lt;/span&gt; and &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;Last-Modified&lt;/span&gt; headers based on the information contained within the &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;dcterms:accrualPeriodicity&lt;/span&gt; and &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;dcterms:modified&lt;/span&gt; tags of the &lt;a href=&quot;http://www.w3.org/TR/void/&quot;&gt;void&lt;/a&gt; dataset description. Since only the URIs or query results that are used or repeated get kept, the overhead on the infrastructure is minimal. There is precedent for this in that most of the building blocks of html, rdf and xml schemas are themselves stored on the W3 servers: it is a single point of failure, but the data is so prevalent on the web that it becomes a non-issue.&lt;/p&gt;
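&lt;p&gt;As an illustration of that header calculation (our own sketch; we assume the &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;dcterms:accrualPeriodicity&lt;/span&gt; value has already been parsed into a timedelta):&lt;/p&gt;

```python
from datetime import datetime, timedelta
from email.utils import format_datetime

def cache_headers(modified, period):
    """Derive Last-Modified and Expires HTTP headers from a void description.

    modified: datetime parsed from dcterms:modified.
    period:   timedelta parsed from dcterms:accrualPeriodicity.
    """
    return {
        "Last-Modified": format_datetime(modified),
        "Expires": format_datetime(modified + period),
    }

if __name__ == "__main__":
    # Hypothetical dataset last modified Oct 2, 2013, refreshed monthly.
    hdrs = cache_headers(datetime(2013, 10, 2, 12, 0, 0), timedelta(days=30))
    print(hdrs["Last-Modified"])
    print(hdrs["Expires"])
```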
&lt;p&gt;Dump versus Store remains a challenge: what information should be retrieved in its entirety and what information should simply be queried? Linked Open Data does offer some solutions in terms of providing a data delivery mechanism that allows datasets to be split up for partial retrieval through multiple URLs. But this still leaves the problem of searching the data for the items of interest to the user. Two camps currently exist: the use of custom APIs or standardized SPARQL endpoints.&lt;/p&gt;
&lt;h2&gt;
	REST API versus SPARQL&lt;/h2&gt;
&lt;p&gt;Data APIs are currently popular on websites with queryable datasets as they are a quick (for the programmer) means of exposing the website data for external use. A debate that occasionally flares up even within the linked open data community is the creation of a customized API for searches versus a standard SPARQL endpoint.&lt;/p&gt;
&lt;p&gt;Dave Rog makes a number of criticisms in his June 4th, 2013 blog post &lt;em&gt;&lt;a href=&quot;http://daverog.wordpress.com/2013/06/04/the-enduring-myth-of-the-sparql-endpoint/&quot; rel=&quot;bookmark&quot;&gt;The Enduring Myth of the SPARQL Endpoint&lt;/a&gt;&lt;/em&gt;. Both &lt;a href=&quot;https://en.wikipedia.org/wiki/Representational_state_transfer&quot;&gt;REST&lt;/a&gt; APIs and SPARQL endpoints share many of the same problems: after all, an API is simply a wrapper around code and databases for a specific purpose. For example, the Muninn Trench Map API internally makes use of 2 SPARQL endpoints and 3 different APIs with appropriate caching of intermediate results. Very clever use of SPARQL could probably replicate the functionality of the API in pure SPARQL, but given the specificity of the service (converting Great War coordinate systems is a bit of a niche area that very few people care about), a custom API is appropriate.&lt;/p&gt;
&lt;p&gt;SPARQL endpoints support a subset of the full &lt;a href=&quot;http://www.w3.org/TR/sparql11-overview/&quot;&gt;SPARQL database language&lt;/a&gt;, with a full secondary vocabulary dedicated to &lt;a href=&quot;http://www.w3.org/TR/sparql11-service-description/&quot;&gt;documenting the level of support by the endpoint&lt;/a&gt; and another one &lt;a href=&quot;https://github.com/gatemezing/sdm-vocab/blob/master/sdm-vocab.ttl&quot;&gt;documenting oddities such as the maximum number of rows returned&lt;/a&gt; by a query. To the best of my knowledge, the SOAP/UDDI/&lt;a href=&quot;http://www.w3.org/TR/wsdl&quot;&gt;WSDL&lt;/a&gt; stack is the only other querying mechanism that has that amount of machine-readable documentation. This machine-readable documentation is important because it can be read using the same SPARQL queries as any other data, while describing what the client can expect from the server. With such a setup, a client can optimize its querying of the server by managing the complexity and style of the queries being sent to it.&lt;/p&gt;
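&lt;p&gt;For instance, a client can interrogate the service description graph with an ordinary SPARQL query over the SPARQL 1.1 Service Description vocabulary; a sketch, where the endpoint URL is a placeholder:&lt;/p&gt;

```python
from urllib.parse import urlencode

# A plain SELECT over the sd: vocabulary, asking which features and
# result formats the endpoint advertises about itself.
SD_QUERY = """\
PREFIX sd: <http://www.w3.org/ns/sparql-service-description#>
SELECT ?feature ?format WHERE {
  ?service a sd:Service ;
           sd:feature ?feature ;
           sd:resultFormat ?format .
}"""

# The query is sent like any other; http://example.org/sparql is a placeholder.
request_url = "http://example.org/sparql?" + urlencode({"query": SD_QUERY})
```

&lt;p&gt;The answer comes back in the same result formats as any data query, so no special client code path is needed to read the documentation.&lt;/p&gt;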
&lt;p&gt;As Dave Rog points out, there exists a great opportunity to use a SPARQL server on the client side as well as on the server side. For example, the SPARQL &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;LOAD&lt;/span&gt; command is useful for this purpose since nodes relevant to the query can be pre-fetched or in some SPARQL implementations fetched on an as-needed basis when doing joins.&lt;/p&gt;
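&lt;p&gt;A minimal sketch of that pre-fetching pattern, where the source document and graph URIs are placeholders, would build a SPARQL 1.1 Update request such as:&lt;/p&gt;

```python
from urllib.parse import urlencode

def load_update(source_uri, graph_uri):
    """SPARQL 1.1 Update command asking the local store to fetch a remote
    document into a named graph; SILENT keeps a failed fetch from aborting."""
    return "LOAD SILENT <{}> INTO GRAPH <{}>".format(source_uri, graph_uri)

# POST body for a local endpoint (URIs are placeholders for illustration).
update = load_update("http://example.org/data.rdf",
                     "http://localhost/cache/example")
body = urlencode({"update": update})
```

&lt;p&gt;Once the relevant graphs are loaded locally, joins against them run at local speed instead of paying a network round trip per node.&lt;/p&gt;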
&lt;p&gt;Response times to queries over the Internet can be a problem. A dedicated API designer can control both the query being created and the design of the underlying database to obtain a statistical guarantee of returning a result within a given time frame. This is directly related to the specificity of the API approach: it does one thing and does it well. A SPARQL endpoint is a full-blown database engine answering ad-hoc queries from different clients, with result sets that may or may not be cacheable across clients. Setting a time limit on query runtime will encourage clients to keep their queries manageable, but it does require smarter clients that ask reasonable queries.&lt;/p&gt;
&lt;p&gt;Interestingly, response time is not necessarily dependent on bandwidth - it&#039;s the latency that grinds things down, because an API model functions with individual query-response pairs. In certain situations a SPARQL endpoint can improve timing by aggregating a number of requests into a single query where an API would need to process each case serially.&lt;/p&gt;
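&lt;p&gt;One way to do that aggregation in SPARQL is a &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;VALUES&lt;/span&gt; block, which folds what would be several serial API calls into one query; a sketch with hypothetical resource URIs:&lt;/p&gt;

```python
def batched_label_query(uris):
    """Build one SELECT that fetches labels for many resources at once,
    where an API would typically need one round trip per resource."""
    values = " ".join("<{}>".format(u) for u in uris)
    return ("PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n"
            "SELECT ?s ?label WHERE {\n"
            "  VALUES ?s { " + values + " }\n"
            "  ?s rdfs:label ?label .\n"
            "}")

query = batched_label_query(["http://example.org/a", "http://example.org/b"])
```

&lt;p&gt;However many resources are batched, the client pays the network latency only once.&lt;/p&gt;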
&lt;h2&gt;
	SPARQL endpoints scale&lt;/h2&gt;
&lt;p&gt;When people start talking about &quot;scale&quot; they usually mean a) the hard drive on their desktop is bigger than yours or b) they have more computers than you do. The problem here isn&#039;t that you have billions of bytes to search, it is that accessing them requires dozens, if not hundreds, of different interfaces. &lt;a href=&quot;https://twitter.com/mia_out&quot;&gt;Mia Ridge&lt;/a&gt; has a large list of &lt;a href=&quot;http://museum-api.pbworks.com/w/page/21933420/Museum%C2%A0APIs&quot;&gt;Museum APIs&lt;/a&gt; to browse through that is impressive in its breadth. &lt;/p&gt;
&lt;p&gt;What happens when you are looking for something across several hundred APIs? Where can I find a museum with an 18th-century &lt;a href=&quot;http://en.wikipedia.org/wiki/Luckenbooth_brooch&quot;&gt;brooch&lt;/a&gt; of the type that would have been traded in North America? Even if each API has primitive support for a query parameter &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;q&lt;/span&gt; for a keyword search, aggregating the data is nearly impossible. A standardized query language is the only means of ensuring that the client gets what it wanted in the format that it wanted.&lt;/p&gt;
&lt;p&gt;APIs are useful for one-off, specific query problems. This has to do with their need to implement their own basic query language to communicate requirements using URL parameters and values. This works well for small sets of key-value parameters with names such as &#039;&lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;keyword&lt;/span&gt;&#039;. As the number of parameters and the complexity of the query increase, tracking the parameters and their expected values requires some serious documentation.&lt;/p&gt;
&lt;p&gt;For reference, the &lt;a href=&quot;http://www.geonames.org/export/web-services.html&quot;&gt;Geonames API&lt;/a&gt; has about 19 endpoints (URLs) with about 3 parameters each, and the &lt;a href=&quot;http://trove.nla.gov.au/general/api-technical&quot;&gt;Trove API&lt;/a&gt; has 3 different endpoints with 4-5 parameters each. This is still manageable and, with the good documentation provided, the required data can be retrieved quickly. But this does not hold if what we are aspiring to is the full promise of the semantic web: there will always be one more API to write code for.&lt;/p&gt;
&lt;h2&gt;
	It&#039;s the Internet!&lt;/h2&gt;
&lt;p&gt;As with everything in the WWW stack, there are no guarantees, and given the complexity and the number of institutions involved in getting a browser to load a webpage from a server across the world, it is a wonder that anything works. SPARQL endpoints can time out on queries just like APIs do; this is the price that we pay for running flexible queries with no program completion guarantee. As with APIs, SPARQL queries can also retrieve entirely too much data and completely drown out the client, or create a Denial of Service attack on the server. These problems are not new, and SPARQL is just another application that needs to be protected.&lt;/p&gt;
&lt;p&gt;What we do have with SPARQL, Linked Open Data and HTTP headers are opportunities to negotiate with the client so that an informed decision can be made. There exists a Pareto-like tradeoff between bandwidth utilization, query complexity and processing power, which all data providers need to think about when designing systems. The advantage of SPARQL and Linked Open Data is that there exist standardized, machine-readable interfaces that can negotiate content format (&lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;xml&lt;/span&gt;, &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;ttl&lt;/span&gt;, &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;json&lt;/span&gt;, etc...), endpoint result-set parameters, and the division of workload between server and client.&lt;/p&gt;
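&lt;p&gt;Content negotiation itself is just a standard HTTP &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;Accept&lt;/span&gt; header; a sketch that builds (but does not send) such a request, with a placeholder endpoint:&lt;/p&gt;

```python
from urllib.parse import urlencode
import urllib.request

def sparql_request(endpoint, query, accept="application/sparql-results+json"):
    """Build a GET request whose Accept header negotiates the result format
    (JSON here; text/turtle or application/sparql-results+xml work the same)."""
    url = endpoint + "?" + urlencode({"query": query})
    return urllib.request.Request(url, headers={"Accept": accept})

req = sparql_request("http://example.org/sparql",
                     "SELECT * WHERE { ?s ?p ?o } LIMIT 5")
```

&lt;p&gt;The same query can thus feed a JavaScript widget, a triple store, or a spreadsheet without any change on the server side.&lt;/p&gt;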
&lt;h2&gt;
	Hardening a SPARQL Server&lt;/h2&gt;
&lt;p&gt;When you are running a SPARQL endpoint you are really letting someone else&#039;s limited-instruction-set program run on your computer. That is a scary proposition for a lot of conservative administrators out there. The irony is that we let hundreds of webpages run their JavaScript on our web browsers every day without thinking about it too much.&lt;/p&gt;
&lt;p&gt;There are a few tips that were learned with the &lt;a href=&quot;http://rdf.muninn-project.org/sparql&quot;&gt;Muninn SPARQL endpoint&lt;/a&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
	&lt;strong&gt;Set a reasonable query runtime limit - &lt;/strong&gt;You will likely want to let queries run for a few seconds to enable a client to do some useful work. Muninn will run queries for about 20 seconds before stopping them. The Virtuoso Triple Store will try to estimate the runtime and preemptively refuse to run the query if the estimate is over the limit. Consider using the &lt;a href=&quot;http://github.com/gatemezing/sdm-vocab/blob/master/sdm-vocab.ttl&quot;&gt;triple store vocabulary&lt;/a&gt; to make this limit machine readable by the client so that the client can modify its query if needed.&lt;/li&gt;
&lt;li&gt;
	&lt;strong&gt;Set a maximum number of triples to be retrieved per query -&lt;/strong&gt; A SPARQL endpoint isn&#039;t meant to be a data dump facility, and too much buffer space could be taken up in memory before the result set is streamed to the network. Documenting the maximum result set size with &lt;a href=&quot;http://github.com/gatemezing/sdm-vocab/blob/master/sdm-vocab.ttl&quot;&gt;the triple store vocabulary&lt;/a&gt; is a good way of informing the client of the maximum number of triples that can be retrieved, and it should also head off unfortunate &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;{?s ?o ?p}&lt;/span&gt; queries by beginners. Clients that really need a large result set will send a series of &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;LIMIT&lt;/span&gt; / &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;OFFSET&lt;/span&gt; queries, and you can expect many clients to use this technique.&lt;/li&gt;
&lt;li&gt;
	&lt;strong&gt;Monitor bandwidth and connections -&lt;/strong&gt; Using firewall rules, web server modules or SPARQL server configuration options, implement connection and bandwidth limits. This prevents one runaway client from overwhelming your server with multiple concurrent requests. A favorite among poorly written clients is to issue a series of &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;LIMIT&lt;/span&gt; / &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;OFFSET&lt;/span&gt; queries while assuming that the maximum number of returned triples is 50. The resulting series of queries can send your server into &lt;a href=&quot;http://en.wikipedia.org/wiki/Cardiac_dysrhythmia&quot;&gt;cardiac arrhythmia&lt;/a&gt; as the same result set is recomputed and thrown away over and over again. Limiting the number of concurrent connections to 3 or 4 will encourage clients to change their evil ways. Returning an HTTP &lt;a href=&quot;http://tools.ietf.org/html/rfc6585#section-4&quot;&gt;429 - Too Many Requests&lt;/a&gt; or 509 - Bandwidth Limit Exceeded response will signal that they are the problem and not you. You may want to set a &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;Retry-After&lt;/span&gt; header to promote exponential backoff by the client - the objective isn&#039;t to punish clients but to promote the sharing of resources by communicating server load.&lt;/li&gt;
&lt;li&gt;
	&lt;strong&gt;Set a limit on the system load -&lt;/strong&gt;  Since your data is very valuable and everyone wants it, you can expect your endpoint to be very busy. After a certain machine load limit is reached, have the SPARQL endpoint return &lt;a href=&quot;http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.4&quot;&gt;HTTP 503 Service Unavailable&lt;/a&gt;. It will signal that the problem is on your end and not theirs. You can encourage a retry at a later time with a &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;Retry-After&lt;/span&gt; header, which signals the client to back off from additional queries for a few moments, long enough for the server to catch up with its workload. Of course, some clients will be disappointed, but your machine will still be usable and the clients that get through will still get service. An interesting 2011 proposal by Bryce Nesbitt &lt;a href=&quot;http://lists.w3.org/Archives/Public/ietf-http-wg/2011JanMar/0078.html&quot;&gt;suggested allowing Retry-After with successful HTTP 20x responses&lt;/a&gt;, which would allow servers to suggest that bots wait for the next &quot;quiet period&quot;. That may gain some traction with linked open data, as we could move automated SPARQL queries to times of the day when the server is underutilized.&lt;/li&gt;
&lt;li&gt;
	&lt;strong&gt;Consider a reverse proxy -&lt;/strong&gt; Large internet sites sometimes do this to cache dynamic content. As mentioned in the previous section, there is plenty of information within the linked open data to auto-configure the proxy with the appropriate &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;Expires&lt;/span&gt; and &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;Last-Modified&lt;/span&gt; parameters. In high-volume situations, a small cache that lasts for a few minutes can ensure that frequently requested queries are answered transparently for most SPARQL clients.&lt;/li&gt;
&lt;/ol&gt;
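&lt;p&gt;From the client side, the well-behaved counterpart to these tips looks roughly like the following sketch: page with &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;LIMIT&lt;/span&gt; / &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;OFFSET&lt;/span&gt; at the size the server advertises, and back off exponentially from the server&#039;s &lt;span style=&quot;font-family:courier new,courier,monospace;&quot;&gt;Retry-After&lt;/span&gt; hint (all numbers here are hypothetical):&lt;/p&gt;

```python
def page_queries(base_query, page_size, total_wanted):
    """Yield the LIMIT/OFFSET series a polite client sends, using the page
    size the endpoint advertises rather than assuming one."""
    offset = 0
    while offset < total_wanted:
        yield "{} LIMIT {} OFFSET {}".format(base_query, page_size, offset)
        offset += page_size

def backoff_delays(retry_after, attempts):
    """Exponential backoff schedule in seconds, seeded by the server's
    Retry-After value from a 429 or 503 response."""
    return [retry_after * (2 ** i) for i in range(attempts)]

# Three pages of 50 triples each, and a 5s/10s/20s retry schedule.
pages = list(page_queries("SELECT ?s WHERE { ?s a ?type }", 50, 120))
delays = backoff_delays(5, 3)
```

&lt;p&gt;A client built this way shares the endpoint gracefully instead of hammering it with recomputed result sets.&lt;/p&gt;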
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;form-item form-type-item&quot;&gt;
  &lt;label&gt;Language &lt;/label&gt;
 English
&lt;/div&gt;
&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-above&quot;&gt;&lt;div class=&quot;field-label&quot;&gt;Tags:&amp;nbsp;&lt;/div&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/13&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;SPARQL&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/55&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;API&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/56&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;EndPoint&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Sun, 08 Dec 2013 01:21:57 +0000</pubDate>
 <dc:creator>warren</dc:creator>
 <guid isPermaLink="false">68 at https://blog.muninn-project.org</guid>
 <comments>https://blog.muninn-project.org/node/68#comments</comments>
</item>
<item>
 <title>SPARQL and Linked Open Data</title>
 <link>https://blog.muninn-project.org/2011/05/sparql-and-linked-open-data</link>
 <description>&lt;div class=&quot;field field-name-body field-type-text-with-summary field-label-hidden&quot;&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; property=&quot;content:encoded&quot;&gt;&lt;p&gt;&lt;img alt=&quot;&quot; src=&quot;sites/default/files/field/image/rdf_w3c_icon.gif&quot; style=&quot;width: 118px; height: 128px; float: left;&quot; /&gt;After a few hiccups with the SPARQL database and the web front end, the Muninn website will be undergoing some major re-work. I&#039;ll update this blog post as the new interface features go online. Update: Feb 23, 2012 - The SPARQL server at &lt;a href=&quot;http://rdf.muninn-project.org/sparql&quot;&gt;http://rdf.muninn-project.org/sparql&lt;/a&gt; is answering queries.&lt;/p&gt;
&lt;!--break--&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;form-item form-type-item&quot;&gt;
  &lt;label&gt;Language &lt;/label&gt;
 English
&lt;/div&gt;
&lt;div class=&quot;field field-name-field-tags field-type-taxonomy-term-reference field-label-above&quot;&gt;&lt;div class=&quot;field-label&quot;&gt;Tags:&amp;nbsp;&lt;/div&gt;&lt;div class=&quot;field-items&quot;&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/13&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;SPARQL&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/14&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;Data&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item even&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/10&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;RDF&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;field-item odd&quot; rel=&quot;dc:subject&quot;&gt;&lt;a href=&quot;/taxonomy/term/4&quot; typeof=&quot;skos:Concept&quot; property=&quot;rdfs:label skos:prefLabel&quot; datatype=&quot;&quot;&gt;OWL&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description>
 <pubDate>Wed, 25 May 2011 05:23:42 +0000</pubDate>
 <dc:creator>warren</dc:creator>
 <guid isPermaLink="false">8 at https://blog.muninn-project.org</guid>
 <comments>https://blog.muninn-project.org/2011/05/sparql-and-linked-open-data#comments</comments>
</item>
</channel>
</rss>
