Before we document the FFTW MPI interface in detail, we begin with a simple example outlining how one would perform a two-dimensional N0 by N1 complex DFT.
#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    const ptrdiff_t N0 = ..., N1 = ...;
    fftw_plan plan;
    fftw_complex *data;
    ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

    MPI_Init(&argc, &argv);
    fftw_mpi_init();

    /* get local data size and allocate */
    alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                         &local_n0, &local_0_start);
    data = fftw_alloc_complex(alloc_local);

    /* create plan for in-place forward DFT */
    plan = fftw_mpi_plan_dft_2d(N0, N1, data, data, MPI_COMM_WORLD,
                                FFTW_FORWARD, FFTW_ESTIMATE);

    /* initialize data to some function my_function(x,y) */
    for (i = 0; i < local_n0; ++i)
        for (j = 0; j < N1; ++j)
            data[i*N1 + j] = my_function(local_0_start + i, j);

    /* compute transforms, in-place, as many times as desired */
    fftw_execute(plan);

    fftw_destroy_plan(plan);

    MPI_Finalize();
}
As can be seen above, the MPI interface follows the same basic style of allocate/plan/execute/destroy as the serial FFTW routines. All of the MPI-specific routines are prefixed with ‘fftw_mpi_’ instead of ‘fftw_’. There are a few important differences, however:
First, we must call fftw_mpi_init() after calling MPI_Init (required in all MPI programs) and before calling any other ‘fftw_mpi_’ routine.
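In outline, a minimal sketch of this ordering looks like the following (fftw_mpi_cleanup, which deallocates FFTW’s internal data, is shown only for completeness and is optional):

#include <fftw3-mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* initialize MPI first */
    fftw_mpi_init();          /* then FFTW's MPI support */

    /* ... create, execute, and destroy transforms here ... */

    fftw_mpi_cleanup();       /* optional: free FFTW's internal data */
    MPI_Finalize();           /* finalize MPI last */
    return 0;
}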
Second, when we create the plan with fftw_mpi_plan_dft_2d, analogous to fftw_plan_dft_2d, we pass an additional argument: the communicator, indicating which processes will participate in the transform (here MPI_COMM_WORLD, indicating all processes).
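For reference, the two planner prototypes (paraphrased here; see the serial and MPI reference sections for the authoritative declarations) differ only in the integer type of the dimensions and in the extra communicator argument:

/* serial planner, from <fftw3.h> */
fftw_plan fftw_plan_dft_2d(int n0, int n1,
                           fftw_complex *in, fftw_complex *out,
                           int sign, unsigned flags);

/* MPI planner, from <fftw3-mpi.h>: ptrdiff_t sizes plus an MPI_Comm */
fftw_plan fftw_mpi_plan_dft_2d(ptrdiff_t n0, ptrdiff_t n1,
                               fftw_complex *in, fftw_complex *out,
                               MPI_Comm comm, int sign, unsigned flags);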
Whenever you create, execute, or destroy a plan for an MPI transform,
you must call the corresponding FFTW routine on all processes
in the communicator for that transform. (That is, these are
collective calls.) Note that the plan for the MPI transform
uses the standard fftw_execute and fftw_destroy_plan routines (on the other hand, there are MPI-specific new-array execute functions documented below).
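(As a hedged preview of those new-array execute functions: to re-use an existing in-place MPI plan on a second array, allocated with the same alloc_local size so that it has the same distribution and alignment, a call might look like the following sketch.)

#include <fftw3-mpi.h>

/* Sketch: apply an existing in-place MPI plan to another, identically
   allocated array.  Like fftw_execute, this is a collective call. */
void transform_other_array(fftw_plan plan, ptrdiff_t alloc_local)
{
    fftw_complex *data2 = fftw_alloc_complex(alloc_local);
    /* ... initialize data2 on every process ... */
    fftw_mpi_execute_dft(plan, data2, data2);  /* in-place, as the plan was */
    fftw_free(data2);
}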
Third, all of the FFTW MPI routines take ptrdiff_t arguments instead of int as for the serial FFTW. ptrdiff_t is a standard C integer type which is (at least) 32 bits wide on a 32-bit machine and 64 bits wide on a 64-bit machine. This is to make it easy to specify very large parallel transforms on a 64-bit machine. (You can specify 64-bit transform sizes in the serial FFTW, too, but only by using the ‘guru64’ planner interface. See 64-bit Guru Interface.)
Fourth, and most importantly, you don’t allocate the entire two-dimensional array on each process. Instead, you call fftw_mpi_local_size_2d to find out what portion of the array resides on each processor, and how much space to allocate. Here, the portion of the array on each process is a local_n0 by N1 slice of the total array, starting at index local_0_start. The total number of fftw_complex numbers to allocate is given by the alloc_local return value, which may be greater than local_n0 * N1 (in case some intermediate calculations require additional storage). The data distribution in FFTW’s MPI interface is described in more detail by the next section.
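For example, a hypothetical helper for mapping global row indices to the local slab might look like the following sketch (the name global_to_local_row is ours, not part of FFTW):

#include <stddef.h>

/* Hypothetical helper: return the local row index for global row i_global,
   or -1 if that row is stored on some other process. */
static ptrdiff_t global_to_local_row(ptrdiff_t i_global,
                                     ptrdiff_t local_0_start,
                                     ptrdiff_t local_n0)
{
    if (i_global < local_0_start || i_global >= local_0_start + local_n0)
        return -1;                      /* not in this process's slab */
    return i_global - local_0_start;    /* element (i_global, j) of the global
                                           array is then data[row*N1 + j] */
}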
Given the portion of the array that resides on the local process, it is straightforward to initialize the data (here to a function my_function) and otherwise manipulate it. Of course, at the end of the program you may want to output the data somehow, but synchronizing this output is up to you and is beyond the scope of this manual. (One good way to output a large multi-dimensional distributed array in MPI to a portable binary file is to use the free HDF5 library; see the HDF home page.)