<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://www.wiki.mohid.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=192.168.20.184&amp;*</id>
		<title>MohidWiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://www.wiki.mohid.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=192.168.20.184&amp;*"/>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Special:Contributions/192.168.20.184"/>
		<updated>2026-04-05T09:06:21Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.28.0</generator>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=MPIConfig.dat&amp;diff=448</id>
		<title>MPIConfig.dat</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=MPIConfig.dat&amp;diff=448"/>
				<updated>2008-06-02T15:14:13Z</updated>
		
		<summary type="html">&lt;p&gt;192.168.20.184: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''MPIConfig.dat''' file is no longer used with MPICH 1.2.x; instead, every option must be passed on the command line.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
The '''MPIConfig.dat''' is the setup file that distributes a [[MOHID Water]] simulation across several processors, associating each nested domain with a processor. &lt;br /&gt;
An example is explained below, assuming that:&lt;br /&gt;
# There is a simulation with a nested domain called '''Rias''' nested into a father domain called '''Galicia'''&lt;br /&gt;
# There are 2 computers, called PC1 and PC2, connected through a network, each with one processor available&lt;br /&gt;
# The simulation data files are all located in PC1&lt;br /&gt;
# The full path to the simulation father domain working folder is '''\\PC1\Applications\Galicia\exe'''&lt;br /&gt;
# PC1 has a mapped network drive with the letter '''T:\''' redirecting to the following full path '''\\PC1\Applications\'''. &lt;br /&gt;
# The whole simulation path is relative to this '''T:\''' drive. &lt;br /&gt;
# The MOHID executable compiled with MPI options is located in PC1&lt;br /&gt;
# The operating system being used is MS Windows XP Professional and the installed MPICH version is 1.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The '''MPIConfig.dat''' must be configured as shown below. This file can also be built via the [[MOHID GUI]]. &lt;br /&gt;
&lt;br /&gt;
'''Line 1''' - the first line contains the keyword '''exe''' followed by the full path to the MOHID executable compiled with MPI options&lt;br /&gt;
&lt;br /&gt;
 exe \\PC1\EXE\Mohid_MPI.exe&lt;br /&gt;
&lt;br /&gt;
'''Line 2''' - this line contains the keyword '''dir''' followed by the path, given through the mapped drive, to the father domain working folder ('''\exe'''). &lt;br /&gt;
&lt;br /&gt;
 dir T:\Galicia\exe&lt;br /&gt;
&lt;br /&gt;
'''Line 3''' - this line contains the keyword '''map''' followed by the mapped drive letter '''T:''' and the full path to the father domain working folder ('''\exe'''). &lt;br /&gt;
&lt;br /&gt;
 map T:\\PC1\Applications\Galicia\exe&lt;br /&gt;
&lt;br /&gt;
'''Line 4''' - this line contains the keyword '''hosts'''. Each line below it defines one computer, specifying its name (e.g. PC1) and the number of processes to be launched on that computer.&lt;br /&gt;
&lt;br /&gt;
 hosts&lt;br /&gt;
 PC1 1&lt;br /&gt;
 PC2 1&lt;br /&gt;
&lt;br /&gt;
For example, if PC1 is a computer with 2 processors, the following definition could be written: &lt;br /&gt;
&lt;br /&gt;
 hosts&lt;br /&gt;
 PC1 2&lt;br /&gt;
&lt;br /&gt;
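Putting the four sections together, the complete '''MPIConfig.dat''' for this example reads as follows (every line is taken from the steps above):&lt;br /&gt;
&lt;br /&gt;
 exe \\PC1\EXE\Mohid_MPI.exe&lt;br /&gt;
 dir T:\Galicia\exe&lt;br /&gt;
 map T:\\PC1\Applications\Galicia\exe&lt;br /&gt;
 hosts&lt;br /&gt;
 PC1 1&lt;br /&gt;
 PC2 1&lt;br /&gt;
&lt;br /&gt;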
[[Category:MOHID Water]]&lt;/div&gt;</summary>
		<author><name>192.168.20.184</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=MPI&amp;diff=446</id>
		<title>MPI</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=MPI&amp;diff=446"/>
				<updated>2007-12-12T17:44:59Z</updated>
		
		<summary type="html">&lt;p&gt;192.168.20.184: /* Linux installation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''MPI''' is an acronym for '''Message Passing Interface'''. MPI is a set of programming directives that allow a program to launch child processes on different ''nodes'' (other computers) through the network and to communicate with them. This enables [[parallel processing]].&lt;br /&gt;
To build a MOHID project with MPI refer to [[Compiling Mohid with MPI]].&lt;br /&gt;
==MPICH==&lt;br /&gt;
'''MPICH''' is a library implementing the [[MPI]] directives.&lt;br /&gt;
===Linux installation===&lt;br /&gt;
*To build the '''MPICH''' libraries on Linux, download the sources and run:&lt;br /&gt;
 &amp;gt;./configure FC=ifort --enable-f90&lt;br /&gt;
 &amp;gt;make&lt;br /&gt;
 &amp;gt;make testing&lt;br /&gt;
 &amp;gt;make install&lt;br /&gt;
*To compile Mohid on Linux with MPI refer to [[Compiling Mohid with MPI|this wiki]].&lt;br /&gt;
*To compile any MPI program on Linux simply type&lt;br /&gt;
 &amp;gt;mpif90 -i-static foo.f90&lt;br /&gt;
*To run an MPI program on a Linux machine simply type these lines:&lt;br /&gt;
 mpd &amp;amp;&lt;br /&gt;
 mpiexec -n 4 ./MohidWater&lt;br /&gt;
 mpdallexit&lt;br /&gt;
The three lines do the following: i) start the MPI daemon, ii) run MohidWater on 4 processes, iii) once the program finishes, shut down all daemons.&lt;br /&gt;
&lt;br /&gt;
==Samples==&lt;br /&gt;
===cpi===&lt;br /&gt;
This little C program computes an approximation of pi by numerical integration.&lt;br /&gt;
 #include &amp;quot;mpi.h&amp;quot;&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;math.h&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 double f(double);&lt;br /&gt;
 &lt;br /&gt;
 double f(double a)&lt;br /&gt;
 {&lt;br /&gt;
    return (4.0 / (1.0 + a*a));&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
 int main(int argc,char *argv[])&lt;br /&gt;
 {&lt;br /&gt;
    int    n, myid, numprocs, i;&lt;br /&gt;
    double PI25DT = 3.141592653589793238462643;&lt;br /&gt;
    double mypi, pi, h, sum, x;&lt;br /&gt;
    double startwtime = 0.0, endwtime;&lt;br /&gt;
    int    namelen;&lt;br /&gt;
    char   processor_name[MPI_MAX_PROCESSOR_NAME];&lt;br /&gt;
 &lt;br /&gt;
    MPI_Init(&amp;amp;argc,&amp;amp;argv);&lt;br /&gt;
    MPI_Comm_size(MPI_COMM_WORLD,&amp;amp;numprocs);&lt;br /&gt;
    MPI_Comm_rank(MPI_COMM_WORLD,&amp;amp;myid);&lt;br /&gt;
    MPI_Get_processor_name(processor_name,&amp;amp;namelen);&lt;br /&gt;
 &lt;br /&gt;
    fprintf(stdout,&amp;quot;Process %d of %d is on %s\n&amp;quot;,&lt;br /&gt;
            myid, numprocs, processor_name);&lt;br /&gt;
    fflush(stdout);&lt;br /&gt;
 &lt;br /&gt;
    n = 10000;                  /* default # of rectangles */&lt;br /&gt;
    if (myid == 0)&lt;br /&gt;
        startwtime = MPI_Wtime();&lt;br /&gt;
    MPI_Bcast(&amp;amp;n, 1, MPI_INT, 0, MPI_COMM_WORLD);&lt;br /&gt;
 &lt;br /&gt;
    h   = 1.0 / (double) n;&lt;br /&gt;
    sum = 0.0;&lt;br /&gt;
    /* A slightly better approach starts from large i and works back */&lt;br /&gt;
    for (i = myid + 1; i &amp;lt;= n; i += numprocs)&lt;br /&gt;
    {&lt;br /&gt;
        x = h * ((double)i - 0.5);&lt;br /&gt;
        sum += f(x);&lt;br /&gt;
    }&lt;br /&gt;
    mypi = h * sum;&lt;br /&gt;
 &lt;br /&gt;
    MPI_Reduce(&amp;amp;mypi, &amp;amp;pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);&lt;br /&gt;
 &lt;br /&gt;
    if (myid == 0) {&lt;br /&gt;
        endwtime = MPI_Wtime();&lt;br /&gt;
        printf(&amp;quot;pi is approximately %.16f, Error is %.16f\n&amp;quot;,&lt;br /&gt;
               pi, fabs(pi - PI25DT));&lt;br /&gt;
        printf(&amp;quot;wall clock time = %f\n&amp;quot;, endwtime-startwtime);&lt;br /&gt;
        fflush(stdout);&lt;br /&gt;
    }&lt;br /&gt;
 &lt;br /&gt;
    MPI_Finalize();&lt;br /&gt;
    return 0;&lt;br /&gt;
 }&lt;br /&gt;
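To try the sample, compile it with MPICH's compiler wrapper and launch it under the process manager, as described above. A possible session, assuming the source is saved as '''cpi.c''' and the MPICH binaries are on the PATH:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;mpicc cpi.c -o cpi&lt;br /&gt;
 &amp;gt;mpd &amp;amp;&lt;br /&gt;
 &amp;gt;mpiexec -n 4 ./cpi&lt;br /&gt;
 &amp;gt;mpdallexit&lt;br /&gt;
&lt;br /&gt;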
===f90pi===&lt;br /&gt;
The same little program in Fortran 90:&lt;br /&gt;
 !**********************************************************************&lt;br /&gt;
 !   pi3f90.f - compute pi by integrating f(x) = 4/(1 + x**2)&lt;br /&gt;
 !&lt;br /&gt;
 !  (C) 2001 by Argonne National Laboratory.&lt;br /&gt;
 !      See COPYRIGHT in top-level directory.&lt;br /&gt;
 !&lt;br /&gt;
 !   Each node:&lt;br /&gt;
 !    1) receives the number of rectangles used in the approximation.&lt;br /&gt;
 !    2) calculates the areas of its rectangles.&lt;br /&gt;
 !    3) Synchronizes for a global summation.&lt;br /&gt;
 !   Node 0 prints the result.&lt;br /&gt;
 !&lt;br /&gt;
 !  Variables:&lt;br /&gt;
 !&lt;br /&gt;
 !    pi  the calculated result&lt;br /&gt;
 !    n   number of points of integration.&lt;br /&gt;
 !    x           midpoint of each rectangle's interval&lt;br /&gt;
 !    f           function to integrate&lt;br /&gt;
 !    sum,pi      area of rectangles&lt;br /&gt;
 !    tmp         temporary scratch space for global summation&lt;br /&gt;
 !    i           do loop index&lt;br /&gt;
 !****************************************************************************&lt;br /&gt;
 program main&lt;br /&gt;
 &lt;br /&gt;
 use mpi&lt;br /&gt;
 &lt;br /&gt;
 double precision  PI25DT&lt;br /&gt;
 parameter        (PI25DT = 3.141592653589793238462643d0)&lt;br /&gt;
 &lt;br /&gt;
 double precision  mypi, pi, h, sum, x, f, a&lt;br /&gt;
 integer n, myid, numprocs, i, rc, ierr&lt;br /&gt;
 !                                 function to integrate&lt;br /&gt;
 f(a) = 4.d0 / (1.d0 + a*a)&lt;br /&gt;
 &lt;br /&gt;
 call MPI_INIT( ierr )&lt;br /&gt;
 call MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )&lt;br /&gt;
 call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )&lt;br /&gt;
 print *, 'Process ', myid, ' of ', numprocs, ' is alive'&lt;br /&gt;
 &lt;br /&gt;
 sizetype   = 1&lt;br /&gt;
 sumtype    = 2&lt;br /&gt;
 &lt;br /&gt;
 do&lt;br /&gt;
    if ( myid .eq. 0 ) then&lt;br /&gt;
       write(6,98)&lt;br /&gt;
 98    format('Enter the number of intervals: (0 quits)')&lt;br /&gt;
       read(5,99) n&lt;br /&gt;
 99    format(i10)&lt;br /&gt;
    endif&lt;br /&gt;
 &lt;br /&gt;
    call MPI_BCAST(n,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)&lt;br /&gt;
 &lt;br /&gt;
 !                                 check for quit signal&lt;br /&gt;
    if ( n .le. 0 ) exit&lt;br /&gt;
 &lt;br /&gt;
 !                                 calculate the interval size&lt;br /&gt;
    h = 1.0d0/n&lt;br /&gt;
 &lt;br /&gt;
    sum  = 0.0d0&lt;br /&gt;
    do i = myid+1, n, numprocs&lt;br /&gt;
       x = h * (dble(i) - 0.5d0)&lt;br /&gt;
       sum = sum + f(x)&lt;br /&gt;
    enddo&lt;br /&gt;
    mypi = h * sum&lt;br /&gt;
 &lt;br /&gt;
 !                                 collect all the partial sums&lt;br /&gt;
    call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0, &amp;amp;&lt;br /&gt;
                    MPI_COMM_WORLD,ierr)&lt;br /&gt;
 &lt;br /&gt;
 !                                 node 0 prints the answer.&lt;br /&gt;
    if (myid .eq. 0) then&lt;br /&gt;
        write(6, 97) pi, abs(pi - PI25DT)&lt;br /&gt;
 97     format('  pi is approximately: ', F18.16, &amp;amp;&lt;br /&gt;
               '  Error is: ', F18.16)&lt;br /&gt;
    endif&lt;br /&gt;
 &lt;br /&gt;
 enddo&lt;br /&gt;
 call MPI_FINALIZE(rc)&lt;br /&gt;
 stop&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
==Other references==&lt;br /&gt;
*[[Parallel processing]].&lt;br /&gt;
*[[Setup a MOHID simulation using MPI]].&lt;br /&gt;
*[[Compiling Mohid with MPI]].&lt;br /&gt;
*[[MPIConfig.dat]].&lt;br /&gt;
[[Category:Technology]]&lt;br /&gt;
[[Category:Linux]]&lt;br /&gt;
[[Category:Windows]]&lt;/div&gt;</summary>
		<author><name>192.168.20.184</name></author>	</entry>

	</feed>