<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<h3 id="anker_DeepVGIDeepLearningVolun" style="font-size: 1.2em;
line-height: 1.1em; color: rgb(51, 51, 51); font-weight: bold;
margin: 20px 0px 17px -1px; font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; letter-spacing: normal; orphans: 2;
text-align: left; text-indent: 0px; text-transform: none;
white-space: normal; widows: 2; word-spacing: 0px;
-webkit-text-stroke-width: 0px; background-color: rgb(255, 255,
255);">DeepVGI – Deep Learning Volunteered Geographic Information
- Combining OpenStreetMap, MapSwipe and Remote Sensing<br>
</h3>
<p style="font-size: 0.9em; margin: 0px 0px 17px; line-height:
1.3em; color: rgb(0, 0, 0); font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; font-weight: normal; letter-spacing:
normal; orphans: 2; text-align: left; text-indent: 0px;
text-transform: none; white-space: normal; widows: 2;
word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);">Deep learning techniques,
esp. Convolutional Neural Networks (CNNs), are now widely studied
for predictive analytics with remote sensing images, which can be
further applied in different domains for ground object detection,
population mapping, etc. These methods usually train predicting
models with the supervision of a large set of training examples.
However, finding ground truths especially for developing and rural
areas is quite hard and manually labeling a large set of training
data is costly. On the other hand Volunteered Geographic
Information (VGI) (e.g., OpenStreetMap (OSM) and MapSwipe) which
is the geographic data provided voluntarily by
individuals, provides a free approach for such big data.</p>
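<p>As an illustration only (not the actual DeepVGI implementation),
  the following minimal Python/Keras sketch shows the kind of
  supervised CNN tile classifier such a pipeline typically trains;
  the tile size, network layout and the VGI-derived 0/1 labels are
  our own assumptions.</p>
<pre>
from tensorflow import keras
from tensorflow.keras import layers

def build_tile_classifier(tile_size=256):
    # Small CNN mapping an RGB satellite tile to P(tile contains a building).
    model = keras.Sequential([
        keras.Input(shape=(tile_size, tile_size, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# tiles:  float32 array of shape (n, 256, 256, 3), pixel values scaled to [0, 1]
# labels: 0/1 array derived from VGI (e.g. OSM footprints or MapSwipe answers)
# model = build_tile_classifier()
# model.fit(tiles, labels, epochs=10, validation_split=0.2)
</pre>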
<p style="font-size: 0.9em; margin: 0px 0px 17px; line-height:
1.3em; color: rgb(0, 0, 0); font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; font-weight: normal; letter-spacing:
normal; orphans: 2; text-align: left; text-indent: 0px;
text-transform: none; white-space: normal; widows: 2;
word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);">In our project "DeepVGI",
we study predictive analytics methods with remote sensing images,
VGI, deep neural networks as well as other learning algorithms. It
aims at deeply learning from satellite imageries with the
supervision of such Volunteered Geographic Information.</p>
<p style="font-size: 0.9em; margin: 0px 0px 17px; line-height:
1.3em; color: rgb(0, 0, 0); font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; font-weight: normal; letter-spacing:
normal; orphans: 2; text-align: left; text-indent: 0px;
text-transform: none; white-space: normal; widows: 2;
word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);">VGI data from OpenStreetMap
(OSM) and the mobile crowdsourcing application MapSwipe which
allows volunteers to label images with buildings or roads for
humanitarian aids are utilized. We develop an active learning
framework with deep neural networks by incorporating both VGI data
with more complete supervision knowledge. Our experiments show
that DeepVGI can achieve high building detection performance for
humanitarian mapping in rural African areas.</p>
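<p>To make the idea of combining the two VGI sources concrete, the
  sketch below derives a binary "building" label per image tile; it
  is a hypothetical illustration under our own assumptions (tile
  bounds, a MapSwipe vote threshold), not the project's actual
  labeling pipeline.</p>
<pre>
from shapely.geometry import box

def tile_label(tile_bounds, osm_buildings, mapswipe_votes, vote_threshold=2):
    """Derive a binary 'building' label for one image tile.

    tile_bounds:    (minx, miny, maxx, maxy) in the same CRS as the footprints
    osm_buildings:  iterable of shapely polygons (OSM building footprints)
    mapswipe_votes: number of volunteers who tagged the tile as 'building',
                    or None if the tile was not covered by MapSwipe
    """
    tile_geom = box(*tile_bounds)
    osm_positive = any(tile_geom.intersects(b) for b in osm_buildings)
    mapswipe_positive = (mapswipe_votes is not None
                         and mapswipe_votes >= vote_threshold)
    # OSM coverage is sparse in rural areas, so either source counts as
    # positive evidence; tiles with neither become negative examples.
    return 1 if (osm_positive or mapswipe_positive) else 0
</pre>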
<p style="font-size: 0.9em; margin: 0px 0px 17px; line-height:
1.3em; color: rgb(0, 0, 0); font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; font-weight: normal; letter-spacing:
normal; orphans: 2; text-align: left; text-indent: 0px;
text-transform: none; white-space: normal; widows: 2;
word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);">Figure 1 shows some initial
results of DeepVGI, where OpenStreetMap and MapSwipe data are
utilized for training together with multi-layer artificial neural
networks and a VGI-based active learning strategy proposed by us.
DeepVGI outperforms Deep-OSM (i.e. deep models trained with only
OpenStreetMap data), and achieves close accuracy to the
volunteers.</p>
<p style="font-size: 0.9em; margin: 0px 0px 17px; line-height:
1.3em; color: rgb(0, 0, 0); font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; font-weight: normal; letter-spacing:
normal; orphans: 2; text-align: left; text-indent: 0px;
text-transform: none; white-space: normal; widows: 2;
word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);"><img
src="cid:part1.AD35CCB1.6CA58247@uni-heidelberg.de"
style="font-size: 12.9691px; border-width: 0px; border-style:
initial; border-color: initial;" width="300"><span
class="Apple-converted-space"> </span><br style="font-size:
12.9691px;">
Figure 1: Initial Results of DeepVGI</p>
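<p>For readers curious what such a VGI-based active learning strategy
  can look like in code, here is a minimal uncertainty-sampling loop;
  the function names, the query_labels callback (e.g. MapSwipe
  volunteer answers) and the round/batch sizes are assumptions for
  illustration, not the exact strategy evaluated in Figure 1.</p>
<pre>
import numpy as np

def active_learning_loop(model, labeled_x, labeled_y, pool_x, query_labels,
                         rounds=5, batch=100):
    """Pool-based active learning: retrain, then query the most uncertain tiles."""
    for _ in range(rounds):
        model.fit(labeled_x, labeled_y, epochs=3, verbose=0)
        probs = model.predict(pool_x, verbose=0).ravel()
        # Uncertainty sampling: pick the tiles whose predicted probability is
        # closest to 0.5 and ask volunteers (e.g. via MapSwipe) to label them.
        uncertain = np.argsort(np.abs(probs - 0.5))[:batch]
        new_y = np.asarray(query_labels(pool_x[uncertain]))
        labeled_x = np.concatenate([labeled_x, pool_x[uncertain]])
        labeled_y = np.concatenate([labeled_y, new_y])
        pool_x = np.delete(pool_x, uncertain, axis=0)
    return model
</pre>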
<p style="font-size: 0.9em; margin: 0px 0px 17px; line-height:
1.3em; color: rgb(0, 0, 0); font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; font-weight: normal; letter-spacing:
normal; orphans: 2; text-align: left; text-indent: 0px;
text-transform: none; white-space: normal; widows: 2;
word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);">On the other hand, such
predictive analytics methods will be applied in geographic
applications like humanitarian mapping. It can help improve VGI
data quality, save volunteers’ time, etc. DeepVGI is also an
attempt to explore the interaction between human beings and
machines, between crowdsourcing and deep learning. Figure 2 shows
the research framework of DeepVGI project, where we will first
focus on learning and prediction between deep neural networks and
big spatial data (including VGI data from our history OSM
project).</p>
<p style="font-size: 0.9em; margin: 0px 0px 17px; line-height:
1.3em; color: rgb(0, 0, 0); font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; font-weight: normal; letter-spacing:
normal; orphans: 2; text-align: left; text-indent: 0px;
text-transform: none; white-space: normal; widows: 2;
word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);"><img
src="cid:part2.9665BBBA.7EBB861D@uni-heidelberg.de"
style="font-size: 12.9691px; border-width: 0px; border-style:
initial; border-color: initial;" width="400"><span
class="Apple-converted-space"> </span><br style="font-size:
12.9691px;">
Figure 2 shows the overal Research Framework of DeepVGI</p>
<p style="font-size: 0.9em; margin: 0px 0px 17px; line-height:
1.3em; color: rgb(0, 0, 0); font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; font-weight: normal; letter-spacing:
normal; orphans: 2; text-align: left; text-indent: 0px;
text-transform: none; white-space: normal; widows: 2;
word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);">Further details will be
made available soon. DeepVGI is a project of the<span
class="Apple-converted-space"> </span><a
href="http://www.geog.uni-heidelberg.de/gis/heigit_bigspatialdata_en.html"
class="blank" title="im neuen Fenster öffnen" style="font-size:
12.9691px; color: rgb(153, 0, 0); text-decoration: none;
background:
url("http://www2.geog.uni-heidelberg.de/media/fenster.png")
100% 2px no-repeat scroll transparent; padding-right: 13px;">HeiGIT
Big Spatial Data Analytics</a><span
class="Apple-converted-space"> </span>in cooperation with the<span
class="Apple-converted-space"> </span><a
href="http://www.geog.uni-heidelberg.de/gis/heigit_disastermanagement_en.html"
class="blank" title="im neuen Fenster öffnen" style="font-size:
12.9691px; color: rgb(153, 0, 0); text-decoration: none;
background:
url("http://www2.geog.uni-heidelberg.de/media/fenster.png")
100% 2px no-repeat scroll transparent; padding-right: 13px;">Humanitarian
VGI group</a><span class="Apple-converted-space"> </span>at
HeiGIT. The Heidelberg Institute for Geoinformation Technology
(HeiGIT) is currently being established with core funding by the
Klaus Tschira Stiftung (KTS) Heidelberg.<br>
</p>
<p style="font-size: 0.9em; margin: 0px 0px 17px; line-height:
1.3em; color: rgb(0, 0, 0); font-family: Arial, Helvetica,
sans-serif; font-style: normal; font-variant-ligatures: normal;
font-variant-caps: normal; font-weight: normal; letter-spacing:
normal; orphans: 2; text-align: left; text-indent: 0px;
text-transform: none; white-space: normal; widows: 2;
word-spacing: 0px; -webkit-text-stroke-width: 0px;
background-color: rgb(255, 255, 255);"><a class="moz-txt-link-freetext" href="http://www.geog.uni-heidelberg.de/gis/deepvgi_en.html">http://www.geog.uni-heidelberg.de/gis/deepvgi_en.html</a><br>
</p>
<pre class="moz-signature" cols="72">
GIScience Research Group Heidelberg University
<a moz-do-not-send="true" class="moz-txt-link-freetext" href="http://uni-heidelberg.de/gis">http://uni-heidelberg.de/gis</a> <a moz-do-not-send="true" class="moz-txt-link-freetext" href="https://www.facebook.com/GIScienceHeidelberg">https://www.facebook.com/GIScienceHeidelberg</a> twitter.com/GIScienceHD
</pre>
</body>
</html>