<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
Hello,<br>
Yes, GPUs are increasingly being used for scientific computing. Although
it is not my specialty, I am leaving here some interesting web
references (they may help anyone unfamiliar with the
topic):<br>
<br>
Excerpt from Wikipedia:<br>
<a class="moz-txt-link-freetext" href="http://en.wikipedia.org/wiki/Graphics_processing_unit">http://en.wikipedia.org/wiki/Graphics_processing_unit</a><br>
<p>"A new concept is to use a <a
href="http://en.wikipedia.org/wiki/GPGPU" title="GPGPU">general
purpose graphics processing unit</a> as a modified form of <a
href="http://en.wikipedia.org/wiki/Stream_processing"
title="Stream processing">stream processor</a>. This concept turns the
massive <a href="http://en.wikipedia.org/wiki/Floating-point"
title="Floating-point" class="mw-redirect">floating-point</a>
computational power of a modern graphics accelerator's shader pipeline
into general-purpose computing power, as opposed to being hard wired
solely to do graphical operations. In certain applications requiring
massive vector operations, this can yield several orders of magnitude
higher performance than a conventional CPU. The two largest discrete
(see "Dedicated graphics cards" above) GPU designers, <a
href="http://en.wikipedia.org/wiki/ATI_Technologies"
title="ATI Technologies">ATI</a> and <a
href="http://en.wikipedia.org/wiki/NVIDIA" title="NVIDIA"
class="mw-redirect">NVIDIA</a>, are beginning to pursue this new
approach with an array of applications. Both nVidia and ATI have teamed
with <a href="http://en.wikipedia.org/wiki/Stanford_University"
title="Stanford University">Stanford University</a> to create a
GPU-based client for the <a
href="http://en.wikipedia.org/wiki/Folding@Home" title="Folding@Home"
class="mw-redirect">Folding@Home</a>
distributed computing project, for protein folding calculations. In
certain circumstances the GPU calculates forty times faster than the
conventional CPUs traditionally used by such applications.<sup
id="cite_ref-9" class="reference"><a
href="http://en.wikipedia.org/wiki/Graphics_processing_unit#cite_note-9"><span>[</span>10<span>]</span></a></sup><sup
id="cite_ref-10" class="reference"><a
href="http://en.wikipedia.org/wiki/Graphics_processing_unit#cite_note-10"><span>[</span>11<span>]</span></a></sup></p>
<p>Recently NVidia began releasing cards supporting an API extension to
the <a href="http://en.wikipedia.org/wiki/C_%28programming_language%29"
title="C (programming language)">C</a> programming language <a
href="http://en.wikipedia.org/wiki/CUDA" title="CUDA">CUDA</a>
("Compute Unified Device Architecture"), which allows specified
functions from a normal C program to run on the GPU's stream
processors. This makes C programs capable of taking advantage of a
GPU's ability to operate on large matrices in parallel, while still
making use of the CPU when appropriate. CUDA is also the first API to
allow CPU-based applications to access directly the resources of a GPU
for more general purpose computing without the limitations of using a
graphics API."</p>
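To make that concrete, here is a minimal sketch of what a CUDA program looks like: a small kernel that adds two vectors, with one GPU thread per element. This is my own illustration, not from any of the linked pages, and it assumes the CUDA toolkit (nvcc) and an NVIDIA card are installed:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes),
          *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Allocate device memory and copy the inputs over.
    float *a, *b, *c;
    cudaMalloc(&a, bytes); cudaMalloc(&b, bytes); cudaMalloc(&c, bytes);
    cudaMemcpy(a, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(b, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaMemcpy(hc, c, bytes, cudaMemcpyDeviceToHost);

    printf("c[3] = %f\n", hc[3]);  // 3 + 6 = 9
    cudaFree(a); cudaFree(b); cudaFree(c);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The point of the paragraph above is visible here: everything is an ordinary C program except the `__global__` function and the `<<<blocks, threads>>>` launch, which run on the GPU's stream processors.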
Meanwhile, one GIS that already uses CUDA (programming on top of the
GPU) is Manifold:<br>
<a class="moz-txt-link-freetext" href="http://www.manifold.net/index.shtml">http://www.manifold.net/index.shtml</a><br>
"Using the optional <a
href="http://www.manifold.net/info/surface_tools.shtml"> <strong>Manifold
Surface Tools</strong></a> extension, Manifold Release 8 is the first
GIS ever to support <b>massively parallel</b> computing using <b>hundreds</b>
of stream processors via NVIDIA® CUDA™ technology. By installing an
NVIDIA GPGPU card (widely available for as little as $150 per card with
up to 256 processors per card) you can add true supercomputing
performance to your GIS installation. Manifold automatically recognizes
and utilizes up to four NVIDIA CUDA cards for up to <b>1024 processors</b>
with teraflops of computational performance. Dozens of functions can be
run by Manifold at supercomputer speeds with an NVIDIA CUDA-capable
card installed."<br>
<br>
A comparison from Tom's Hardware discussing the advantages of using GPUs
(although I don't see any GIS there):<br>
<a class="moz-txt-link-freetext" href="http://www.tomshardware.com/reviews/nvidia-cuda-gpgpu,2299.html">http://www.tomshardware.com/reviews/nvidia-cuda-gpgpu,2299.html</a><br>
<br>
A portal that gathers information on the subject (a good reference):<br>
<a class="moz-txt-link-freetext" href="http://gpgpu.org/">http://gpgpu.org/</a><br>
<br>
ATI (now owned by AMD, for those who missed it...) already offers
capabilities similar to NVIDIA's CUDA, and OpenCL promises to give
access to every manufacturer's hardware:<br>
"The move to open standards with the new <a
href="http://www.khronos.org/news/press/releases/khronos_launches_heterogeneous_computing_initiative/">Heterogeneous
Computing Initiative</a> supporting <a
href="http://en.wikipedia.org/wiki/OpenCL">OpenCL</a>
(Open Computing Language), will be a good move for both AMD and
NVIDIA. The idea is that an application developer would write an
OpenCL-based stream computing application, and it would run on any GPU
or CPU with an OpenCL driver. Both AMD and Nvidia have indicated they
want to support this new standard."<br>
<a class="moz-txt-link-freetext" href="http://fireuser.com/blog/amd_stream_computing_and_nvidia_cuda_similar_but_different/">http://fireuser.com/blog/amd_stream_computing_and_nvidia_cuda_similar_but_different/</a><br>
<br>
It seems OpenCL's advantage will be letting you use whichever processor
you want (or all of them?):<br>
"- The OpenCL platform was created to define a general, open standard for
GPGPU access and usage. It also enables different GPGPU-capable devices
to work in parallel with each other. For example, if you have a big machine
with 2 processors and 3 video cards, the OpenCL device list will provide
you handles for 5 devices (3 GPUs and 2 CPUs) and you can select
which one you want to run your code on. (CUDA was not this flexible.)"<br>
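That device list can be sketched with a few host-side OpenCL calls. This is my own illustration (assuming an OpenCL SDK with headers and a driver installed, linked with -lOpenCL); it enumerates CPUs and GPUs alike on the first platform:

```c
#include <stdio.h>
#include <CL/cl.h>

/* List every OpenCL device (CPU and GPU) on the first available platform. */
int main(void) {
    cl_platform_id platform;
    cl_uint num_devices = 0;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL platform found\n");
        return 1;
    }
    /* First ask how many devices there are, then fetch their handles. */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices);
    if (num_devices > 16) num_devices = 16;

    cl_device_id devices[16];
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, num_devices, devices, NULL);

    for (cl_uint i = 0; i < num_devices; ++i) {
        char name[256];
        cl_device_type type;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
        printf("%u: %s (%s)\n", i, name,
               (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU");
    }
    return 0;
}
```

On the hypothetical machine from the quote (2 processors, 3 video cards), this loop would print five entries, and the application would pick one of those handles to build its context on.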
<br>
I haven't noticed conversations about this in the open source GIS
projects, but I think it should already be possible to make use of these
technologies? Then we could see the power of these cards quickly
reaching many people :-)<br>
<br>
Victor Ferreira<br>
<br>
Pedro Matos wrote:
<blockquote
cite="mid:976dc9e31002240148h6bc894d7of673b5c570347e76@mail.gmail.com"
type="cite">
<div id="ygrp-mlmsg" style="position: relative;">
<div id="ygrp-msg" style="z-index: 1;">
<div id="ygrp-text">
<p>The question of graphics cards originally designed for games being
applied in other areas is very interesting. In some applications, the
American and British militaries (at least) are abandoning the
traditional dedicated-hardware development contracts because the
commercial sector is unbeatable.<br>
<br>
The Pentagon has just bought 2000 Xbox processors to build a parallel
supercomputer, and the British estimate that a 300-pound NVIDIA card
replaces military equipment worth 30,000 pounds. (The reference is from
The Economist.)<br>
<br>
It is time we too started paying as much attention to our graphics
cards as we do to our software. I have just swapped my standard
computer for a more powerful machine with a decent graphics card, and
in some ArcGIS operations the time savings were tenfold.<br>
<br>
Pedro<br>
<br>
<br>
</p>
<p valign="top"><a moz-do-not-send="true"
href="http://newsletter.directionsmag.com/link.php?M=74078&N=2504&L=30331"
target="_blank"><img moz-do-not-send="true"
src="http://www.directionsmag.com/images/articles/thumbnails/3418.jpg"
alt="Why Geospatial Users and Developers Should Know Their GPU from
their CPU"
border="0"></a><b><a moz-do-not-send="true"
href="http://newsletter.directionsmag.com/link.php?M=74078&N=2504&L=30331"
target="_blank">Why Geospatial Users and
Developers Should Know Their GPU from their CPU</a></b></p>
<p>The
buzz about advances in geospatial software has overshadowed that of
hardware for the last five to ten years. But Executive Editor Adena
Schutzberg suggests that perhaps we should pay more attention to
hardware, especially since the graphics processing unit (GPU) may be
one of our best weapons to increase productivity. <br>
</p>
<p><br>
</p>
<p><a moz-do-not-send="true"
href="http://www.directionsmag.com/article.php?article_id=3418">http://www.directionsmag.com/article.php?article_id=3418</a><br>
</p>
</div>
</div>
</div>
</blockquote>
<br>
</body>
</html>