OK, I think I get it now. After you've hit 4 SDs (or arguably even 3 SDs!),
the contribution of additional points to that square fades to almost
nothing unless there are extreme clusters of points.

Seems like this ought to be clarified in the UI and documentation. Should I
file an enhancement request?

BTW, I dropped down to an SD of 200, and it ran in 212 minutes. I'm trying
an SD of 300 now, but on a grid with 1/4 as many squares, since I cut the
rows and columns by half. We'll see how it does.

Aren

On Thu, Dec 6, 2012 at 12:17 PM, Markus Metz
<markus.metz.giswork@gmail.com> wrote:

On Thu, Dec 6, 2012 at 3:13 PM, Aren Cambre <aren@arencambre.com> wrote:
> Thanks. I am using EPSG:3081, and its unit is meters. So is it the case that
> an SD of x on EPSG:3081 means a search radius of 4x meters? If so, why 4x?

Because the Gaussian function is infinite, a cutoff of 3 to 6 SDs is
usually applied when using a Gaussian function as the kernel density
function. In theory, an alternative is to combine the infinite Gaussian
function with a finite function, e.g. uniform. I think historically
v.kernel had only one kernel function, Gaussian. The original authors
decided to ask for the SD rather than the search radius as input and set
the cutoff to 4 x SD, where the Gaussian function is reasonably close to
zero.
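
For example, relative to its peak, the Gaussian kernel
exp(-d^2 / (2 x SD^2)) has already fallen to exp(-8), about 0.0003, at
d = 4 x SD, so points beyond that cutoff would contribute almost nothing
anyway.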

BTW, search radius = 4 x SD applies only to the Gaussian kernel; for all
other kernel functions, the search radius is equal to the SD.

Markus M
<div class="HOEnZb"><div class="h5"><br>
><br>
> On Thu, Dec 6, 2012 at 2:29 AM, Markus Metz <<a href="mailto:markus.metz.giswork@gmail.com">markus.metz.giswork@gmail.com</a>><br>
> wrote:<br>
>><br>
>> On Thu, Dec 6, 2012 at 3:58 AM, Aren Cambre <<a href="mailto:aren@arencambre.com">aren@arencambre.com</a>> wrote:<br>
>> > It's gotten slow again. This run will probably take more than 10 hours.
>> > However, I am using a standard deviation of 1000. Is that what could be
>> > causing this?
>>
>> Yes. With a standard deviation of 1000, the search radius is now 4000;
>> that is, for each cell an 8000x8000 box is searched. With many densely
>> packed points, this can take quite some time.
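>>
>> (Since the searched area grows with the square of the search radius, so
>> does the number of points examined per cell for evenly spread points:
>> doubling the SD roughly quadruples the run time.)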
>>
>> Markus M
>>
>> >
>> > v.kernel input=tickets@PERMANENT output=tickets_new_heatmap_1000 stddeviation=1000
>> > STDDEV: 1000.000000
>> > RES: 18.290457 ROWS: 2370 COLS: 2650
>> >
>> > Writing output raster map using smooth parameter=1000.000000.
>> >
>> > Normalising factor=6482635.018778.
>> >
>> > On Sat, Nov 24, 2012 at 9:03 PM, Aren Cambre <aren@arencambre.com> wrote:
>> >>
>> >> I installed r53983. The v.kernel execution that took almost a day now
>> >> executes in 25.5 minutes. Thank you!
>> >>
>> >> Aren
>> >>
>> >> On Fri, Nov 23, 2012 at 12:51 PM, Markus Metz
>> >> <markus.metz.giswork@gmail.com> wrote:
>> >>>
>> >>> On Fri, Nov 23, 2012 at 5:35 PM, Aren Cambre <aren@arencambre.com> wrote:
>> >>> > Thanks!
>> >>> >
>> >>> > I am not familiar with GRASS's release customs. Will this become part of
>> >>> > a binary release soon, or should I just pull down the latest release in
>> >>> > the 6.4.2 trunk? I'm assuming this has been merged into the trunk...
>> >>>
>> >>> It should be available as a binary for Windows by tomorrow in the
>> >>> nightly builds [0].
>> >>>
>> >>> Markus M
>> >>>
>> >>> [0] http://wingrass.fsv.cvut.cz/grass64/
>> >>>
>> >>> >
>> >>> > Aren
>> >>> >
>> >>> > On Fri, Nov 23, 2012 at 7:32 AM, Markus Metz
>> >>> > <markus.metz.giswork@gmail.com> wrote:
>> >>> >>
>> >>> >> On Fri, Nov 23, 2012 at 2:07 PM, Aren Cambre <aren@arencambre.com>
>> >>> >> wrote:
>> >>> >> > Isn't taking about 10,000% too much time considered a bug? :-)
>> >>> >>
>> >>> >> Hmm, yes. v.kernel is fixed in devbr6 and relbr6 with r53982 and
>> >>> >> r53983, respectively.
>> >>> >>
>> >>> >> Markus M
>> >>> >>
>> >>> >> >
>> >>> >> > On Nov 23, 2012 5:11 AM, "Markus Metz"
>> >>> >> > <markus.metz.giswork@gmail.com> wrote:
>> >>> >> >>
>> >>> >> >> On Fri, Nov 23, 2012 at 4:14 AM, Aren Cambre <aren@arencambre.com>
>> >>> >> >> wrote:
>> >>> >> >> > I'm able to reproduce reliably here. I'll email you details
>> >>> >> >> > privately.
>> >>> >> >>
>> >>> >> >> Thanks. I can confirm that v.kernel takes a long time in GRASS 6
>> >>> >> >> with the settings you provided. It does not crash, however.
>> >>> >> >>
>> >>> >> >> I can speed up v.kernel in GRASS 6 to complete in 10 minutes instead
>> >>> >> >> of 16+ hours, but I am not sure if this fix can/will go into GRASS 6.4
>> >>> >> >> because at this point only bugs should be fixed.
>> >>> >> >>
>> >>> >> >> Markus M
>> >>> >> >>
>> >>> >> >> >
>> >>> >> >> > Aren
>> >>> >> >> >
>> >>> >> >> > On Thu, Nov 22, 2012 at 9:02 AM, Markus Metz
>> >>> >> >> > <markus.metz.giswork@gmail.com> wrote:
>> >>> >> >> >>
>> >>> >> >> >> On Sat, Nov 17, 2012 at 4:06 PM, Aren Cambre
>> >>> >> >> >> <aren@arencambre.com> wrote:
>> >>> >> >> >> > I have a dataset of just over 700,000 incidents that happened in
>> >>> >> >> >> > a square-ish Texas county that's about 30 miles on each side.
>> >>> >> >> >> >
>> >>> >> >> >> > Here are the parameters reported by v.kernel as it's executing:
>> >>> >> >> >> >
>> >>> >> >> >> > STDDEV: 1000.000000
>> >>> >> >> >> > RES: 111.419043 ROWS: 458 COLS: 447
>> >>> >> >> >> >
>> >>> >> >> >> > Writing output raster map using smooth parameter=1000.000000.
>> >>> >> >> >> >
>> >>> >> >> >> > Normalising factor=6482635.018778.
>> >>> >> >> >> >
>> >>> >> >> >> > I am running this on a Windows 7 x64 machine with 8 GB RAM and an
>> >>> >> >> >> > Intel Core i7 Q720 1.6 GHz with 4 physical cores. I notice that
>> >>> >> >> >> > it's not multithreaded, only using 1 core.
>> >>> >> >> >> >
>> >>> >> >> >> > It takes about 16 hours to complete. Is this correct? I'd like to
>> >>> >> >> >> > use this on a dataset with closer to 5 million records, and I'm
>> >>> >> >> >> > really concerned about how long it may take.
>> >>> >> >> >>
>> >>> >> >> >> The time required by v.kernel is a function of the number of cells
>> >>> >> >> >> and the input parameter stddeviation. The larger either of these
>> >>> >> >> >> values is, the more time v.kernel will need. Nevertheless, I think
>> >>> >> >> >> the 16+ hours are not correct. I tested with a vector of 3 million
>> >>> >> >> >> points on a grid with 2700 rows and 1087 columns, magnitudes larger
>> >>> >> >> >> than your grid. v.kernel completes in just over one minute.
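>> >>> >> >> >>
>> >>> >> >> >> (Roughly, run time scales as rows x cols x the number of points
>> >>> >> >> >> within the search window of each cell, so both a finer grid and a
>> >>> >> >> >> larger stddeviation make it slower.)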
>> >>> >> >> >>
>> >>> >> >> >> >
>> >>> >> >> >> > I posted my question about the 16+ hours at
>> >>> >> >> >> > http://gis.stackexchange.com/questions/41058/how-do-i-compute-v-kernel-maps-in-less-than-16-hours/.
>> >>> >> >> >> > Bill Huber, who is apparently knowledgeable about kernel density
>> >>> >> >> >> > calculations in general, posted a response, and he felt that a
>> >>> >> >> >> > kernel density map shouldn't take much time at all. Digging more
>> >>> >> >> >> > deeply, it turns out he came up with a kernel density calculation
>> >>> >> >> >> > method over a decade ago using Fourier transforms. See
>> >>> >> >> >> > http://www.directionsmag.com/features/convolution/129753 and the
>> >>> >> >> >> > next two articles linked from it (they are short articles).
>> >>> >> >> >> > Apparently this turns it from an O(n^2) problem into an O(n ln n)
>> >>> >> >> >> > problem.
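>> >>> >> >> >> >
>> >>> >> >> >> > (As I understand it, the trick is that once the points are binned
>> >>> >> >> >> > to a grid, the density map is just the convolution of the per-cell
>> >>> >> >> >> > counts with the kernel, and an FFT computes that convolution in
>> >>> >> >> >> > O(n log n) instead of O(n^2).)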
>> >>> >> >> >>
>> >>> >> >> >> The approach of Bill Huber is raster-based, not vector-based, which
>> >>> >> >> >> makes some things easier, at the cost of precision. The coordinate
>> >>> >> >> >> precision, however, is only needed for kernel functions other than
>> >>> >> >> >> uniform. In GRASS, you could get something like a raster-based
>> >>> >> >> >> density map by
>> >>> >> >> >>
>> >>> >> >> >> - exporting the points with v.out.ascii
>> >>> >> >> >> - re-importing the points with r.in.xyz method=n to get the number
>> >>> >> >> >> of points per cell
>> >>> >> >> >> - running a neighborhood analysis using a circular window with
>> >>> >> >> >> r.neighbors method=sum -c
>> >>> >> >> >>
>> >>> >> >> >> Optionally, you could use the gauss option of r.neighbors to get an
>> >>> >> >> >> equivalent to v.kernel kernel=gaussian. A rough sketch follows below.
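>> >>> >> >> >>
>> >>> >> >> >> Something like this (untested; map and file names are placeholders):
>> >>> >> >> >>
>> >>> >> >> >> # export point coordinates as x|y|cat text
>> >>> >> >> >> v.out.ascii input=points output=points.txt
>> >>> >> >> >> # count the points falling in each cell of the current region
>> >>> >> >> >> r.in.xyz input=points.txt output=point_counts method=n
>> >>> >> >> >> # sum the counts in a circular window; size is in cells and must be
>> >>> >> >> >> # odd, e.g. a 4000 m radius at ~18.3 m resolution would need size=437
>> >>> >> >> >> # (assuming r.neighbors accepts a window that large)
>> >>> >> >> >> r.neighbors -c input=point_counts output=density method=sum size=437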
>> >>> >> >> >>
>> >>> >> >> >> HTH,
>> >>> >> >> >>
>> >>> >> >> >> Markus M
>> >>> >> >> >>
>> >>> >> >> >> >
>> >>> >> >> >> > I inspected v.kernel's main.c
>> >>> >> >> >> > (http://trac.osgeo.org/grass/browser/grass/trunk/vector/v.kernel/main.c),
>> >>> >> >> >> > and it looks like v.kernel uses an output-centric method (using
>> >>> >> >> >> > Bill's wording) of calculating the output, which seems like O(n^2)
>> >>> >> >> >> > complexity.
>> >>> >> >> >> >
>> >>> >> >> >> > So I guess what I'm getting at is that the algorithm behind GRASS
>> >>> >> >> >> > GIS's v.kernel appears straightforward but is a greedy algorithm
>> >>> >> >> >> > (http://en.wikipedia.org/wiki/Greedy_algorithm), which is fine, but
>> >>> >> >> >> > it may take a while to execute. Is this true?
>> >>> >> >> >> >
>> >>> >> >> >> > Is there not spatial indexing I could add to the dataset? I've done
>> >>> >> >> >> > various Google searches on that and can't come up with anything
>> >>> >> >> >> > clear.
>> >>> >> >> >> >
>> >>> >> >> >> > Aren
>> >>> >> >> >> >
>> >>> >> >> >> > _______________________________________________
>> >>> >> >> >> > grass-user mailing list
>> >>> >> >> >> > grass-user@lists.osgeo.org
>> >>> >> >> >> > http://lists.osgeo.org/mailman/listinfo/grass-user