<div>Hi Bob, Steve,</div>
<div> </div>
<div>thank you very much for all these helpful clues. Now I think I've got the point of the shp2tile command; it's really a good tool for slicing shapefiles.</div>
<div> </div>
<div>zhonghai<br> </div>
<div><span class="gmail_quote">On 5/18/06, <b class="gmail_sendername">Stephen Woodbridge</b> <<a href="mailto:woodbri@swoodbridge.com">woodbri@swoodbridge.com</a>> wrote:</span>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">Zhonghai Wang wrote:<br>> Hi folks,<br>><br>> I have a large shapefile, and I am trying to use the shp2tile command to
<br>> slice it into pieces. With -r and -c it works fine, but I do not fully<br>> understand the -q parameter. What does it actually mean, and what number<br>> should I use for this parameter normally?<br>><br>> Or something like this? --> shp2tile -q 10000 input_shapefile
<br>> output_shapefile<br><br>Hi Zhonghai,<br><br>The -r -c option breaks the extents of your shapefile into R x C rows<br>and columns and then tries to fit each object into the best tile. If any<br>object crosses a tile boundary by 5-10%, it is put into a "supertile"
<br>that can have the same extents as the original shapefile. So typically<br>you will end up with R x C + 1 tiles.<br><br>The -q N option splits the extents in half, either vertically or<br>horizontally, and then sorts the objects into the two halves, or puts them in
<br>a supertile. Then, if either of the two halves has more than N<br>objects, it is split in half again, and this continues until all files<br>have fewer than N objects. This can cause some strange effects, like tiles<br>
with one or a small number of objects, and most tiles will have fewer than N<br>objects in them. Since this algorithm tends to spatially cluster objects<br>in a file, there is a good chance that if you need the file, all or
<br>most of the objects in the file will be used.<br><br>I recommend trying numbers like 10,000 and 20,000 as your initial tries.<br>I think you should probably not use numbers less than 8,000, but it is<br>really up to you to try and measure the results to find what works best
<br>for your data.<br><br>-Steve W.<br></blockquote></div><br>
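The -q splitting strategy described above can be sketched as a small recursive routine. This is a hypothetical illustration of the idea, not shp2tile's actual code: the Box type, the split_q function, the alternating split direction, and the depth cap are all assumptions made for the example.

```python
# Sketch of a -q N style split: halve the extent, sort objects into the
# two halves, send objects that straddle the split line to a single
# "supertile", and recurse until every tile holds fewer than N objects.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Box:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def split_q(objects: List[Box], extent: Box, n: int) -> List[Tuple[Box, List[Box]]]:
    """Return (extent, objects) tiles, each holding fewer than n objects,
    plus one final supertile with the extents of the whole file."""
    supertile: List[Box] = []

    def rec(objs: List[Box], ext: Box, depth: int) -> List[Tuple[Box, List[Box]]]:
        # Stop when the tile is small enough; the depth cap is a safety
        # guard for degenerate data (e.g. many coincident objects).
        if len(objs) < n or depth > 32:
            return [(ext, objs)]
        # Alternate split direction: vertical on even depths, horizontal on odd.
        if depth % 2 == 0:
            mid = (ext.xmin + ext.xmax) / 2
            lo_ext = Box(ext.xmin, ext.ymin, mid, ext.ymax)
            hi_ext = Box(mid, ext.ymin, ext.xmax, ext.ymax)
            span = lambda o: (o.xmin, o.xmax)
        else:
            mid = (ext.ymin + ext.ymax) / 2
            lo_ext = Box(ext.xmin, ext.ymin, ext.xmax, mid)
            hi_ext = Box(ext.xmin, mid, ext.xmax, ext.ymax)
            span = lambda o: (o.ymin, o.ymax)
        lo: List[Box] = []
        hi: List[Box] = []
        for o in objs:
            mn, mx = span(o)
            if mx <= mid:
                lo.append(o)          # entirely in the low half
            elif mn >= mid:
                hi.append(o)          # entirely in the high half
            else:
                supertile.append(o)   # crosses the split line
        return rec(lo, lo_ext, depth + 1) + rec(hi, hi_ext, depth + 1)

    tiles = rec(objects, extent, 0)
    tiles.append((extent, supertile))  # the one "extra" tile
    return tiles
```

Because the recursion only splits halves that are still over the limit, spatially clustered data naturally ends up grouped into the same tiles, which matches the clustering behavior Steve mentions.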