<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://www.wiki.mohid.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=AngelaCanas</id>
		<title>MohidWiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://www.wiki.mohid.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=AngelaCanas"/>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Special:Contributions/AngelaCanas"/>
		<updated>2026-04-04T17:02:29Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.28.0</generator>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=3162</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=3162"/>
				<updated>2010-06-08T12:33:48Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The long-standing need to reduce computational time in numerical models became a priority for the Mohid development team when an operational hydrodynamic and water quality model of the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Parallel processing was therefore implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free, portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, thanks to the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching a separate process for each model and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from the implementation of the MPI communication calls. Using this feature, computational speed was improved (the gain varies from application to application), as the whole simulation now takes the time of the slowest model plus the time to communicate with the other processes. Network communication speed plays an important role here, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (two-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; without them, the directives are ignored in normal Fortran compilation. See more on [[compiling Mohid with OpenMP]]. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is being introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times, or involve large resource allocations, in MOHID simulations; hence they are the locations with the largest potential resource gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several loop variables, the variable to parallelize is chosen according to the cost involved in looping over that variable. For example, in a 3D loop (loop variables k, j, i), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced over the j variable, since both the resource costs and the savings achieved by parallelization are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
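As an illustration, a loop of this kind could be parallelized over j as sketched below (the variable names and bounds are illustrative, not actual MOHID code):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k)&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC)&lt;br /&gt;
 do j = JLB, JUB                          ! parallelized: the largest dimension&lt;br /&gt;
     do k = KLB, KUB&lt;br /&gt;
         do i = ILB, IUB&lt;br /&gt;
             NewField(i, j, k) = OldField(i, j, k)&lt;br /&gt;
         enddo&lt;br /&gt;
     enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;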
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is carried out by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being handled by the single thread of a serial execution.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread, and would thereby affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when these are used to index positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These cover several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''Sentinel''' is !$OMP in both fixed and free source format. The continuation of directives (from one code line to the next) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
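For instance, a long directive can be continued onto the next line in free source format (the variable names are illustrative):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1, PrivateVariable2, &amp;amp;&lt;br /&gt;
 !$OMP&amp;amp;               PrivateVariable3)&lt;br /&gt;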
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct;&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing done inside a parallel region should be assigned to a Work Sharing construct; if this is not verified, execution errors can occur. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes this fixed number of iterations; after finishing a chunk, each thread begins another available chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of chunks among the threads requires synchronization for each assignment, which causes an overhead that can be significant. The DYNAMIC option is advisable when each iteration involves an amount of work that is not predictable. This can be the case when IF constructs inside the loop trigger extra processing only in specific cases, or when the threads arrive at the DO loop at different times, e.g. when they come from a previous DO loop ended with a NOWAIT clause. &lt;br /&gt;
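For instance, a loop whose per-iteration work depends on a condition is a good candidate for DYNAMIC scheduling (a hypothetical sketch, not actual MOHID code):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, 10)&lt;br /&gt;
 do j = 1, JMAX&lt;br /&gt;
     if (WaterPoint(j)) then              ! expensive work only in some iterations&lt;br /&gt;
         call HeavyComputation(j)&lt;br /&gt;
     endif&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;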
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all threads, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for a DO loop that is the only one in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, that they must be encountered by all the existing threads or by none at all, and that there is no barrier on entry (an arriving thread is not required to wait for the others) but there is a barrier on exit. The exit barrier can be removed by the NOWAIT clause referred to above.&lt;br /&gt;
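As a sketch (with illustrative names), two independent loops inside one parallel region can be chained without the exit barrier of the first:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     A(i) = A(i) + 1.0&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO NOWAIT&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do i = 1, M                              ! works on B, independent of A&lt;br /&gt;
     B(i) = B(i) * 2.0&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;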
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the sequential lines of code appearing inside the region). If they are dynamically linked to a parallel region (e.g. they appear in a subroutine called from within the region), they are said to be orphaned. If, however, they appear outside any dynamic connection with a parallel region (and also outside the lexical extent of a region), they are ignored, the enclosed code is executed by one thread only, and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations potentially performed by several threads at the same time (problems can occur because the same memory locations are being accessed simultaneously). The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, although only one at a time.&lt;br /&gt;
If more than one critical region is defined in the code, every critical region should have a different name, or the execution outcome can become undetermined. &lt;br /&gt;
Caution should be taken with the naming of critical sections: although these names do not conflict with variable names, they do conflict with the names of subroutines and common blocks.&lt;br /&gt;
&lt;br /&gt;
Within a Work Sharing construct it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread should be the Master thread, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by the Master thread only)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, MASTER has no barriers on entry or exit (SINGLE, by contrast, carries an implicit barrier at END SINGLE unless a NOWAIT clause is specified).&lt;br /&gt;
&lt;br /&gt;
The fact that no barrier exists at the exit of such a construct may cause problems if the code processed by the single thread must complete before the subsequent code is executed. In this situation a barrier can be introduced at the end of the construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
Under this directive, threads wait at the barrier point until all threads reach it.&lt;br /&gt;
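For example (an illustrative sketch, not actual MOHID code), the Master thread might read a value that all threads will subsequently use:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 read (unit = 10, fmt = *) TimeStep      ! only the Master reads the file&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
 !$OMP BARRIER                           ! all threads wait until TimeStep is set&lt;br /&gt;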
&lt;br /&gt;
An important rule about BARRIER is that this directive cannot be nested inside a Work Sharing construct, such as a DO construct.&lt;br /&gt;
&lt;br /&gt;
Sometimes variables used by the threads cannot be made private. One notable case is a variable that accumulates a sum of the values obtained in the iterations of a loop (when summing integers, such variables are sometimes called "counters"). Since these variables must accumulate the values obtained in every iteration, they cannot be made private. This would seem to imply that a critical section must be introduced to update the variable, which would make the code sequential. To avoid this, OpenMP provides the REDUCTION clause, which, applied to a DO Work Sharing construct, has the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK) REDUCTION(+ : Counter)&lt;br /&gt;
&lt;br /&gt;
In this example, "Counter" is the reduction variable. &lt;br /&gt;
&lt;br /&gt;
The REDUCTION operation has the restriction that the reduction variable must be updated only once inside the construct and must be a shared variable. &lt;br /&gt;
&lt;br /&gt;
In practice, during program execution a copy of the variable is made for each thread, which updates it as a local variable; at the end of the construct the per-thread copies are combined into the shared variable from which they originated. Knowing this is important because, when accumulating real values, this process can produce differences relative to the serial result due to rounding. &lt;br /&gt;
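A minimal sketch of a complete reduction loop (with illustrative names):&lt;br /&gt;
&lt;br /&gt;
 Total = 0.0&lt;br /&gt;
 !$OMP PARALLEL&lt;br /&gt;
 !$OMP DO REDUCTION(+ : Total)&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     Total = Total + Field(i)             ! each thread sums into its own copy&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;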
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization potentially yields gains in the use of computational resources, it also involves overheads, which can sometimes be significant.&lt;br /&gt;
&lt;br /&gt;
The creation of the team of Worker threads at the beginning of each parallel region is one source of such overhead. Because of this, in each program/subroutine to be parallelized it is preferable to create a single parallel region, handling the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
Because of these overheads, a parallelized solution may sometimes perform no better than the serial one, especially if the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. assigning one matrix's values to another matrix), and it is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small problems, parallelization may consume more computational resources than the serial solution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
In some cases the execution of a parallel program may be suspended because different threads are competing to update some variable. This is called a race condition.&lt;br /&gt;
&lt;br /&gt;
The programmer should take care to avoid this situation, as it is very difficult to debug. Most commonly the execution is suspended and the processors become idle without any error message. When a large part of the code is parallelized, it is then almost impossible to find the cause of the error unless one checks for possible race conditions.&lt;br /&gt;
&lt;br /&gt;
A way to avoid this is to use critical regions or a REDUCTION clause.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''References:'''''&lt;br /&gt;
&lt;br /&gt;
Chandra, Rohit, 2001, Parallel Programming in OpenMP, Academic Press.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=3161</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=3161"/>
				<updated>2010-06-08T12:33:21Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority to the Mohid development team as an operational hydrodynamic and water quality model to the Tagus Estuary, in Lisbon, Portugal, was implemented using the [[Mohid Water]] model full capabilities. Thus, parallel processing has been implemented in Mohid Water in 2003, by using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, and due to the use of the new Intel Fortran compiler both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage for introducing parallel processing in Mohid was to add the possibility of launching a process by each model to run, and then, using MPICH, establish communication between models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of running all in the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as it is presently implemented in Mohid, could not be achieved without [[object oriented programming]] philosophy, as each model is an instance of [[class Model]] and no changes, exception made to the implementation of the MPI communications calls needed to be added. Using this feature, computational speed was improved (varying from application to application), as now the whole model will take the same time as the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passing between models, depending of course on the memory allocated for each model, has not yet proven to be big enough to make a 100 Mbps network connection time limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decoupling a domain into several subdomain that communicate among them (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are defined as comments in the code and therefore need special compilation options. See more on [[compiling Mohid with OpenMP]]. Without special compilation these instructions are not considered in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used in '''multi core processors''' present in the same computer. It cannot be used to parallel processing using processors located in several computers in a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loops optimization is introduced in a first phase in loops referring to grid variables (grid indexes, k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are possibly used several times or involve a large resource allocation in MOHID simulations, hence these are locations with larger potential resource gains involved in parallelization.&lt;br /&gt;
&lt;br /&gt;
In case of loops with several looping variables, the parallelized variable is chosen according with cost involved in the loop through this variable. E.g. in a 3D loop (k, j, i loop variables) if j dimension is much larger than the k (number of layers) or i then parallel processing is introduced in j variable, since the resource costs and the savings achieved with the parallelization are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
In this section is provided an introduction with the basic concepts in OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is made by a set of threads: the '''Master''' thread and the '''Workers''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of the unique thread existent in unparallel processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by this thread. The choice of the private variables, which are explicitly defined in programming, is a central part of the OpenMP programming. Should be made private all the variables whose values are altered by each thread processing and that affect other threads processing.&lt;br /&gt;
&lt;br /&gt;
An obvious choice of these private variables are the loop variables when these are used to alter positions in matrixes or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However it should be noted that in private variables the values are undefined in enter and exit of the parallel region and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific for the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran directives are case insensitive. The basic syntax in as following:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
'''Sentinel''' is !$OMP in either fixed or free format. The continuation of directives (from one code line to another) are according with the underlying language: &amp;amp; in Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specify that threads will not syncronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; this is specified at the end of a parallel construct;&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed by the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
For every processing made inside a parallel region must be assigned a Work Sharing construct. If this is not verified execution errors can occur. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed by the threads. &lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: provided a Chunk number of iterations each thread will process this fixed number of iterations. After finishing a Chunk each thread will began another available Chunk till all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of Chunk among the threads requires synchronization for each assignment. This causes a overhead that could be important. The DYNAMIC option is advisable when each iteration involves an amount of work which is not predictable. This could be the case when IF constructs are present inside the loop containg extra processing done only in specific cases. Also if the threads arrive to the DO loop at different times, e.g. if they come from a previous DO loop with a NOWAIT end clause. &lt;br /&gt;
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all threads it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for DO loops which are unique in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of the Work Sharing constructs are that they do not create new threads, must contemplate all the existing threads or none at all and there is no barrier on entry (any available thread is not required to wait for the others) but a barrier on exit exists. The barrier on exit can be removed by the referred NOWAIT clause.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (sequential lines of code appearing inside the parallel region). If they are dynamically linked with a parallel region (e.g. they appear in processing a subroutine call) they are orphaned. If, however, they appear outside this dynamic connection with a parallel region (and also not in the lexical extent of the region) they are ignored and the enclosed code is performed by one thread only and no parallelization is processed.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct can be defined a critical region, if it is conveninent that only one thread at a time processes a specific code. This can be used to avoid problems with input/output operations potentially occuring by several threads processing at same time (problems can occur because memory locations are being assessed at same time). This is commanded as following:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist in the entry and exit and all threads process the enclosed code although one at a time.&lt;br /&gt;
If more than one critial region is defined in the code then every critical region should have a different name or the execution outcome could become undeterminated. &lt;br /&gt;
Caution should be given to the naming of critical sections: if these names do not conflict with variable names they do conflict with names of subroutines and common blocks.&lt;br /&gt;
&lt;br /&gt;
Within a Work Sharing construct can also be specified that a portion of the code is processed only by one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When one wants this single thread to be the Master then following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, no barriers exist in SINGLE/MASTER in the entry and exit.&lt;br /&gt;
&lt;br /&gt;
In these contructs the fact that no barrier exists at the exit may cause problems if the single thread processed code section is intended to be dealt with previously from the subsequent code. In this situation a barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
Under this directive threads wait at the barrier point till all threads reach it.&lt;br /&gt;
&lt;br /&gt;
An important rule about the BARRIER use is that this directive cannot be nested in a Work Sharing construct such a DO construct.&lt;br /&gt;
&lt;br /&gt;
Sometimes some variables used by threads cannot be made private. One notorious case consists in variables performing sums of values obtained in iterations of a cycle, which when summing integers are sometimes called «counters». As these variables should accumulate values obtained in each iteration these cannot be made private. This would imply that a critical section should be introduced for the actualization of the variable which would make the code sequential. To avoid this there is an OpenMP clause called REDUCTION which when applied to a DO Work Sharing construct has the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK) REDUCTION(+ : Counter)&lt;br /&gt;
&lt;br /&gt;
Where in this example «Counter» is the reduction variable. &lt;br /&gt;
&lt;br /&gt;
The REDUCTION operation has as restriction that the reduction variable only is actualized once inside the construct and must be a shared variable. &lt;br /&gt;
&lt;br /&gt;
In practice in the program execution a copy of the variable is made for each thread, which actualizes it as a local variable, and in the end of the construct the thread variables are added to the shared variable from which they originate. The knowledge of this process is important because in the case of the accumulation of real values this process can originate differences relative to the unparallel result due to rounded values. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization can bring gains in the use of computational resources, it also introduces overheads that can sometimes be significant.&lt;br /&gt;
&lt;br /&gt;
The creation of the thread team (Workers) at the beginning of each parallel region is one such overhead. For this reason it is preferable, in each program or subroutine to be parallelized, to create a single parallel region and handle the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
Because of these overheads, the parallel version may sometimes perform no better than the serial version, especially when the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. copying one matrix value to another), and it is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small problems parallelization may even consume more computational resources than the serial version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
In some cases the execution of a parallel program may stall because different threads compete to update the same variable. This is called a race condition.&lt;br /&gt;
&lt;br /&gt;
The programmer should take care to avoid this situation, as it is very difficult to debug: most commonly the execution stalls and the processors become idle without any error message. When a large part of the code is parallelized it is then almost impossible to find the cause of the error unless possible race conditions are identified by inspection.&lt;br /&gt;
&lt;br /&gt;
A way to avoid race conditions is to use critical regions or a REDUCTION clause.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''References:'''''&lt;br /&gt;
&lt;br /&gt;
 Chandra, Rohit, 2001, Parallel Programming in OpenMP, Academic Press.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=3154</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=3154"/>
				<updated>2010-05-31T14:47:48Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The long-standing need in numerical modelling to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Parallel processing was therefore implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land also have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by assigning each one a father-son identification, through which the models communicate. The first stage of introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even one belonging to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor with each one having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from the implementation of the MPI communication calls. With this feature computational speed was improved (the gain varies from application to application), as the whole model now takes the time of the slowest sub-model plus the time to communicate with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; without them the directives are ignored in normal Fortran compilation. See more on [[compiling Mohid with OpenMP]]. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is being introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve large resource allocations in MOHID simulations, hence they are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several loop variables, the variable to parallelize is chosen according to the cost of looping over it. E.g. in a 3D loop (loop variables k, j, i), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced over the j variable, since the resource costs, and hence the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is carried out by a team of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being done by the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing, and that would affect the processing of other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
Obvious candidates for private variables are the loop variables, when they are used to index positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
Note, however, that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can be altered by specific OpenMP directives).&lt;br /&gt;
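For instance, a typical parallel region around a grid loop might look like this (a schematic sketch; the array name «Field» and the loop bounds are hypothetical, not taken from the MOHID code):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k)&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
 do k = KLB, KUB&lt;br /&gt;
     Field(i, j, k) = 0.&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
The loop indexes i, j and k are made private so that each thread indexes its own portion of the matrix without interfering with the others.&lt;br /&gt;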
&lt;br /&gt;
Instructions in OpenMP are given by a set of '''Directives'''. These cover several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''Sentinel''' is !$OMP in either fixed or free source format. Directive continuation (from one code line to the next) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct;&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing done inside a parallel region must be assigned to a Work Sharing construct; otherwise execution errors can occur. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
&lt;br /&gt;
The way work is distributed among the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes this fixed number of iterations and, after finishing a chunk, begins another available chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of chunks among the threads requires synchronization at each assignment, which causes an overhead that can be significant. The DYNAMIC option is advisable when the amount of work per iteration is not predictable. This can be the case when IF constructs inside the loop trigger extra processing only in specific cases, or when the threads arrive at the DO loop at different times, e.g. coming from a previous DO loop ended with a NOWAIT clause. &lt;br /&gt;
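For example, a loop whose cost per iteration depends on a condition could use dynamic scheduling (a sketch; «OpenPoints» and «Compute» are hypothetical names, not taken from the MOHID code):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
     if (OpenPoints(j) == 1) then&lt;br /&gt;
         call Compute(j)&lt;br /&gt;
     endif&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Iterations in which the IF branch is taken cost much more than the others, so fixed static chunks could leave some threads idle while others finish their share.&lt;br /&gt;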
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all threads, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for a DO loop that is the only one in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, must be encountered by all the existing threads or by none at all, and have no barrier on entry (an available thread is not required to wait for the others) but a barrier on exit. The exit barrier can be removed with the NOWAIT clause mentioned above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the sequential lines of code appearing inside the parallel region). If they are dynamically linked to a parallel region (e.g. they appear in a subroutine called from the region), they are said to be orphaned. If, however, they appear outside any dynamic connection with a parallel region (and also not in the lexical extent of a region), they are ignored: the enclosed code is performed by one thread only and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
A critical region can be defined inside a Work Sharing construct when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations performed by several threads at the same time (problems can occur because the same memory locations are accessed simultaneously). The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case no barriers exist on entry or exit, and all threads process the enclosed code, although one at a time.&lt;br /&gt;
If more than one critical region is defined in the code, each should have a different name, otherwise the execution outcome can become undetermined. &lt;br /&gt;
Caution should be taken in naming critical sections: although these names do not conflict with variable names, they do conflict with names of subroutines and common blocks.&lt;br /&gt;
&lt;br /&gt;
Within a Work Sharing construct it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread must be the Master, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, there is no barrier on entry to SINGLE or MASTER. MASTER also has no barrier on exit; SINGLE has an implicit barrier on exit unless the NOWAIT clause is used.&lt;br /&gt;
&lt;br /&gt;
In the MASTER construct (or in SINGLE with NOWAIT), the absence of an exit barrier may cause problems if the code processed by the single thread must be completed before the subsequent code runs. In this situation a barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
At this directive each thread waits at the barrier point until all threads have reached it.&lt;br /&gt;
&lt;br /&gt;
An important rule is that the BARRIER directive cannot be nested inside a Work Sharing construct such as a DO construct.&lt;br /&gt;
&lt;br /&gt;
Some variables used by threads cannot be made private. A notable case is a variable that accumulates a sum over the iterations of a loop (when summing integers such variables are sometimes called «counters»). Because it must accumulate the values obtained in every iteration, it cannot be private; yet updating it safely would otherwise require a critical section, which would make the code effectively sequential. To avoid this, OpenMP provides the REDUCTION clause, which, applied to a DO Work Sharing construct, has the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK) REDUCTION(+ : Counter)&lt;br /&gt;
&lt;br /&gt;
where «Counter» is the reduction variable in this example. &lt;br /&gt;
&lt;br /&gt;
The REDUCTION clause has the restrictions that the reduction variable must be updated only once inside the construct and must be a shared variable. &lt;br /&gt;
&lt;br /&gt;
In practice, at run time a copy of the variable is made for each thread, which updates it as a local variable; at the end of the construct the per-thread copies are combined into the shared variable from which they originate. Knowing this is important because, when accumulating real values, this process can produce results that differ from the serial run due to floating-point rounding. &lt;br /&gt;
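As an illustration, a sum over a matrix could be parallelized as follows (a minimal sketch; the names «Matrix», «Total» and the loop bounds are hypothetical, not taken from the MOHID code):&lt;br /&gt;
&lt;br /&gt;
 Total = 0.&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j)&lt;br /&gt;
 !$OMP DO REDUCTION(+ : Total)&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
     Total = Total + Matrix(i, j)&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
Each thread accumulates its own partial sum in «Total», and the partial sums are combined into the shared variable at the end of the construct.&lt;br /&gt;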
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization can bring gains in the use of computational resources, it also introduces overheads that can sometimes be significant.&lt;br /&gt;
&lt;br /&gt;
The creation of the thread team (Workers) at the beginning of each parallel region is one such overhead. For this reason it is preferable, in each program or subroutine to be parallelized, to create a single parallel region and handle the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
Because of these overheads, the parallel version may sometimes perform no better than the serial version, especially when the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. copying one matrix value to another), and it is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small problems parallelization may even consume more computational resources than the serial version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
In some cases the execution of a parallel program may stall because different threads compete to update the same variable. This is called a race condition.&lt;br /&gt;
&lt;br /&gt;
The programmer should take care to avoid this situation, as it is very difficult to debug: most commonly the execution stalls and the processors become idle without any error message. When a large part of the code is parallelized it is then almost impossible to find the cause of the error unless possible race conditions are identified by inspection.&lt;br /&gt;
&lt;br /&gt;
A way to avoid race conditions is to use critical regions or a REDUCTION clause.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2984</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2984"/>
				<updated>2010-04-28T12:27:56Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The long-standing need in numerical modelling to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Parallel processing was therefore implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land also have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by assigning each one a father-son identification, through which the models communicate. The first stage of introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even one belonging to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor with each one having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from the implementation of the MPI communication calls. With this feature computational speed was improved (the gain varies from application to application), as the whole model now takes the time of the slowest sub-model plus the time to communicate with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; without them the directives are ignored in normal Fortran compilation. See more on [[compiling Mohid with OpenMP]]. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is being introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve large resource allocations in MOHID simulations, hence they are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several loop variables, the variable to parallelize is chosen according to the cost of looping over it. E.g. in a 3D loop (loop variables k, j, i), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced over the j variable, since the resource costs, and hence the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is carried out by a team of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being done by the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing, and that would affect the processing of other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
Obvious candidates for private variables are the loop variables, when they are used to index positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
Note, however, that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can be altered by specific OpenMP directives).&lt;br /&gt;
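For instance, a typical parallel region around a grid loop might look like this (a schematic sketch; the array name «Field» and the loop bounds are hypothetical, not taken from the MOHID code):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k)&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
 do k = KLB, KUB&lt;br /&gt;
     Field(i, j, k) = 0.&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
The loop indexes i, j and k are made private so that each thread indexes its own portion of the matrix without interfering with the others.&lt;br /&gt;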
&lt;br /&gt;
Instructions in OpenMP are given by a set of '''Directives'''. These cover several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''Sentinel''' is !$OMP in either fixed or free source format. Directive continuation (from one code line to the next) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct;&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing done inside a parallel region must be assigned to a Work Sharing construct; otherwise execution errors can occur. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
&lt;br /&gt;
The way work is distributed among the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes this fixed number of iterations and, after finishing a chunk, begins another available chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of chunks among the threads requires synchronization at each assignment, which causes an overhead that can be significant. The DYNAMIC option is advisable when the amount of work per iteration is not predictable. This can be the case when IF constructs inside the loop trigger extra processing only in specific cases, or when the threads arrive at the DO loop at different times, e.g. coming from a previous DO loop ended with a NOWAIT clause. &lt;br /&gt;
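For example, a loop whose cost per iteration depends on a condition could use dynamic scheduling (a sketch; «OpenPoints» and «Compute» are hypothetical names, not taken from the MOHID code):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
     if (OpenPoints(j) == 1) then&lt;br /&gt;
         call Compute(j)&lt;br /&gt;
     endif&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Iterations in which the IF branch is taken cost much more than the others, so fixed static chunks could leave some threads idle while others finish their share.&lt;br /&gt;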
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all threads, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for a DO loop that is the only one in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, must be encountered by all the existing threads or by none at all, and have no barrier on entry (an available thread is not required to wait for the others) but a barrier on exit. The exit barrier can be removed with the NOWAIT clause mentioned above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the sequential lines of code appearing inside the parallel region). If they are dynamically linked to a parallel region (e.g. they appear in a subroutine called from the region), they are said to be orphaned. If, however, they appear outside any dynamic connection with a parallel region (and also not in the lexical extent of a region), they are ignored: the enclosed code is performed by one thread only and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
A critical region can be defined inside a Work Sharing construct when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations performed by several threads at the same time (problems can occur because the same memory locations are accessed simultaneously). The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case no barriers exist on entry or exit, and all threads process the enclosed code, although one at a time.&lt;br /&gt;
If more than one critical region is defined in the code, each should have a different name, otherwise the execution outcome can become undetermined. &lt;br /&gt;
Caution should be taken in naming critical sections: although these names do not conflict with variable names, they do conflict with names of subroutines and common blocks.&lt;br /&gt;
&lt;br /&gt;
Within a Work Sharing construct it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread must be the Master, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, there is no barrier on entry to SINGLE or MASTER. MASTER also has no barrier on exit; SINGLE has an implicit barrier on exit unless the NOWAIT clause is used.&lt;br /&gt;
&lt;br /&gt;
In the MASTER construct (or in SINGLE with NOWAIT), the absence of an exit barrier may cause problems if the code processed by the single thread must be completed before the subsequent code runs. In this situation a barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
At this directive each thread waits at the barrier point until all threads have reached it.&lt;br /&gt;
&lt;br /&gt;
An important rule is that the BARRIER directive cannot be nested inside a Work Sharing construct such as a DO construct.&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization involves potentially gains in computer resources use it also involves overheads which can sometimes be important.&lt;br /&gt;
&lt;br /&gt;
The creation of the thread team (Workers) at the beginning of each parallel region is a source of such overheads. Because of this it is preferrable in each program/subrotine to be parallelized to create only one parallel region and then deal with the code that must be done only by one thread with OpenMP directives than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
Because of these overheads, in some cases the parallelized solution may perform worse than the serial solution, especially if the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. copying one matrix value to another matrix), and it is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small problems, parallelization may consume more computational resources than the serial solution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
In some cases the execution of a parallel program may hang because different threads are competing to update the same variable. This is called a race condition.&lt;br /&gt;
&lt;br /&gt;
The programmer should take care to avoid this situation, as it is very difficult to debug. Most commonly, execution hangs and the processors become idle without any error message. When a large part of the code is parallelized it is then almost impossible to find the cause of the error unless possible race conditions are recognized beforehand.&lt;br /&gt;
&lt;br /&gt;
A way to avoid race conditions is to use critical regions or a REDUCTION clause.&lt;br /&gt;
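&lt;br /&gt;
As a hypothetical sketch (variable names are illustrative), the REDUCTION clause gives each thread a private copy of the variable, the copies being combined when the construct ends, so no race condition occurs:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO REDUCTION(+:Total)&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     Total = Total + A(i)&lt;br /&gt;
 end do&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;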
&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2983</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2983"/>
				<updated>2010-04-27T18:06:58Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The long-standing need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by assigning to each one a father-son identification, through which the models communicate. The first stage of introducing parallel processing in Mohid was to add the possibility of launching one process per model to run, and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even if the processor belongs to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed other than the implementation of the MPI communication calls. Using this feature, computational speed was improved (varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends of course on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection time limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' in the same computer. It cannot be used for parallel processing with processors located in several computers in a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced in a first phase in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential resource gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and the savings achieved with parallelization, are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared in the code, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing, and whose alteration would affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when these are used to index positions in matrixes or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that private variables have undefined values on entry to and exit from the parallel region, and that by default they have no storage association with the variables of the same name outside the parallel region (this default behavior can, however, be altered by specific OpenMP clauses).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These cover several actions such as the definition of Parallel regions, Work Sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''sentinel''' is !$OMP in either fixed or free format. Continuation of directives (from one code line to another) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; this clause is placed at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing done inside a parallel region should be assigned to a Work Sharing construct; code left outside any such construct is executed redundantly by every thread, which can cause execution errors. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
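&lt;br /&gt;
For illustration, a minimal sketch of a complete parallel region enclosing a DO construct (array and variable names are hypothetical):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i)&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     C(i) = A(i) + B(i)&lt;br /&gt;
 end do&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;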
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a fixed number of iterations), each thread processes one chunk at a time. After finishing a chunk, each thread begins another available chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of chunks among the threads requires synchronization for each assignment. This causes an overhead that can be significant. The DYNAMIC option is advisable when the amount of work in each iteration is not predictable. This can be the case when IF constructs inside the loop trigger extra processing only in specific cases, or when the threads arrive at the DO loop at different times, e.g. when they come from a previous DO loop with a NOWAIT clause. &lt;br /&gt;
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all threads, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for a DO loop that is the only one in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, that they must be reached by all the existing threads or by none at all, and that there is no barrier on entry (an arriving thread is not required to wait for the others) but there is a barrier on exit. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code appearing textually inside the parallel region). If they are dynamically bound to a parallel region (e.g. they appear in a subroutine called from within the region) they are said to be orphaned. If, however, they appear outside any dynamic connection with a parallel region (and also not in the lexical extent of a region), they are executed by a single thread and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region a critical region can be defined when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations performed by several threads at the same time (problems can occur because the same memory locations are accessed simultaneously). This is specified as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry and exit, and all threads process the enclosed code, although only one at a time.&lt;br /&gt;
If more than one critical region is defined in the code, then every critical region should have a different name, or the execution outcome can become undetermined. &lt;br /&gt;
Caution should be given to the naming of critical sections: although these names do not conflict with variable names, they do conflict with the names of subroutines and common blocks.&lt;br /&gt;
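&lt;br /&gt;
As a hypothetical sketch (names are illustrative), a named critical section protecting the update of a shared maximum:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL (UpdateMax)&lt;br /&gt;
 if (LocalMax &gt; GlobalMax) GlobalMax = LocalMax&lt;br /&gt;
 !$OMP END CRITICAL (UpdateMax)&lt;br /&gt;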
&lt;br /&gt;
Within a parallel region it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread must specifically be the Master, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
Note that MASTER, like CRITICAL, has no implied barrier on entry or exit; SINGLE has no barrier on entry, but an implied barrier on exit unless a NOWAIT clause is given.&lt;br /&gt;
&lt;br /&gt;
The absence of an exit barrier may cause problems if the code processed by the single thread must be completed before the subsequent code is executed. In this situation an explicit barrier can be introduced at the end of the construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
Under this directive each thread waits at the barrier point until all threads reach it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization potentially reduces the use of computational resources, it also involves overheads which can sometimes be significant.&lt;br /&gt;
&lt;br /&gt;
The creation of the thread team (Workers) at the beginning of each parallel region is one source of such overhead. For this reason it is preferable, in each program/subroutine to be parallelized, to create a single parallel region and handle the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
Because of these overheads, in some cases the parallelized solution may perform worse than the serial solution, especially if the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. copying one matrix value to another matrix), and it is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small problems, parallelization may consume more computational resources than the serial solution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
In some cases the execution of a parallel program may hang because different threads are competing to update the same variable. This is called a race condition.&lt;br /&gt;
&lt;br /&gt;
The programmer should take care to avoid this situation, as it is very difficult to debug. Most commonly, execution hangs and the processors become idle without any error message. When a large part of the code is parallelized it is then almost impossible to find the cause of the error unless possible race conditions are recognized beforehand.&lt;br /&gt;
&lt;br /&gt;
A way to avoid race conditions is to use critical regions or a REDUCTION clause.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2953</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2953"/>
				<updated>2010-04-15T11:42:41Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The long-standing need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by assigning to each one a father-son identification, through which the models communicate. The first stage of introducing parallel processing in Mohid was to add the possibility of launching one process per model to run, and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even if the processor belongs to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed other than the implementation of the MPI communication calls. Using this feature, computational speed was improved (varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends of course on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection time limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' in the same computer. It cannot be used for parallel processing with processors located in several computers in a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced in a first phase in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential resource gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and the savings achieved with parallelization, are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared in the code, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing, and whose alteration would affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when these are used to index positions in matrixes or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that private variables have undefined values on entry to and exit from the parallel region, and that by default they have no storage association with the variables of the same name outside the parallel region (this default behavior can, however, be altered by specific OpenMP clauses).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These cover several actions such as the definition of Parallel regions, Work Sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''sentinel''' is !$OMP in either fixed or free format. Continuation of directives (from one code line to another) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; this clause is placed at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing done inside a parallel region should be assigned to a Work Sharing construct; code left outside any such construct is executed redundantly by every thread, which can cause execution errors. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a fixed number of iterations), each thread processes one chunk at a time. After finishing a chunk, each thread begins another available chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of chunks among the threads requires synchronization for each assignment. This causes an overhead that can be significant. The DYNAMIC option is advisable when the amount of work in each iteration is not predictable. This can be the case when IF constructs inside the loop trigger extra processing only in specific cases, or when the threads arrive at the DO loop at different times, e.g. when they come from a previous DO loop with a NOWAIT clause. &lt;br /&gt;
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all threads, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for a DO loop that is the only one in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, that they must be reached by all the existing threads or by none at all, and that there is no barrier on entry (an arriving thread is not required to wait for the others) but there is a barrier on exit. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code appearing textually inside the parallel region). If they are dynamically bound to a parallel region (e.g. they appear in a subroutine called from within the region) they are said to be orphaned. If, however, they appear outside any dynamic connection with a parallel region (and also not in the lexical extent of a region), they are executed by a single thread and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region a critical region can be defined when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations performed by several threads at the same time (problems can occur because the same memory locations are accessed simultaneously). This is specified as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry and exit, and all threads process the enclosed code, although only one at a time.&lt;br /&gt;
If more than one critical region is defined in the code, then every critical region should have a different name, or the execution outcome can become undetermined. &lt;br /&gt;
&lt;br /&gt;
Within a parallel region it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread must specifically be the Master, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
Note that MASTER, like CRITICAL, has no implied barrier on entry or exit; SINGLE has no barrier on entry, but an implied barrier on exit unless a NOWAIT clause is given.&lt;br /&gt;
&lt;br /&gt;
The absence of an exit barrier may cause problems if the code processed by the single thread must be completed before the subsequent code is executed. In this situation an explicit barrier can be introduced at the end of the construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
Under this directive each thread waits at the barrier point until all threads reach it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization potentially reduces the use of computational resources, it also involves overheads which can sometimes be significant.&lt;br /&gt;
&lt;br /&gt;
The creation of the thread team (Workers) at the beginning of each parallel region is one source of such overhead. For this reason it is preferable, in each program/subroutine to be parallelized, to create a single parallel region and handle the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
Because of these overheads, in some cases the parallelized solution may perform worse than the serial solution, especially if the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. copying one matrix value to another matrix), and it is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small problems, parallelization may consume more computational resources than the serial solution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
In some cases the execution of a parallel program may hang because different threads are competing to update the same variable. This is called a race condition.&lt;br /&gt;
&lt;br /&gt;
The programmer should take care to avoid this situation, as it is very difficult to debug. Most commonly, execution hangs and the processors become idle without any error message. When a large part of the code is parallelized it is then almost impossible to find the cause of the error unless possible race conditions are recognized beforehand.&lt;br /&gt;
&lt;br /&gt;
A way to avoid race conditions is to use critical regions or a REDUCTION clause.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2952</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2952"/>
				<updated>2010-04-14T16:27:30Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The long-standing need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003 using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries.&lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land offer parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even a processor belonging to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor with each one having to wait for the others to perform their calculations.&lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], so no changes were needed apart from the implementation of the MPI communication calls. Using this feature, computational speed was improved (the gain varies from application to application), as the whole model now takes the time of the slowest model plus the time needed to communicate with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; in a normal Fortran compilation they are simply ignored. See more on [[compiling Mohid with OpenMP]].&lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.&lt;br /&gt;
&lt;br /&gt;
In a first phase, loop optimization is introduced in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules.&lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the variable to parallelize is chosen according to the cost involved in looping through it. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since both the resource costs and the savings achieved by parallelization are larger for this loop than for the others.&lt;br /&gt;
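&lt;br /&gt;
As an illustration (a minimal sketch with hypothetical array and bound names, not code taken from the MOHID source), such a 3D loop parallelized on the j variable could look as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k)&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC)&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do k = KLB, KUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
     NewField(k, j, i) = OldField(k, j, i)&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
The !$OMP DO directive applies to the loop that immediately follows it, so the j loop is written as the outermost one; the inner loop variables i and k must be declared PRIVATE so that each thread keeps its own copies.&lt;br /&gt;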
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is carried out by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing, and which would affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when these are used to index the positions in matrices or vectors altered in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific for the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
'''Sentinel''' is !$OMP in either fixed or free format. Continuation of directives (from one code line to the next) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing inside a parallel region should be assigned to a Work Sharing construct (or otherwise restricted to specific threads); code left outside these constructs is executed redundantly by every thread, which can lead to execution errors. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a Chunk number of iterations, each thread processes this fixed number of iterations; after finishing a Chunk, each thread begins another available Chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of Chunks among the threads requires synchronization for each assignment. This causes an overhead that can be important. The DYNAMIC option is advisable when each iteration involves an amount of work that is not predictable. This can be the case when IF constructs inside the loop contain extra processing done only in specific cases, or when the threads arrive at the DO loop at different times, e.g. coming from a previous DO loop with a NOWAIT end clause. &lt;br /&gt;
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and similar across iterations, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for DO loops that are the only construct in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, must be encountered by all the existing threads or by none at all, and have no barrier on entry (an arriving thread is not required to wait for the others) but a barrier on exit. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code appearing textually inside the parallel region). If they are dynamically linked to a parallel region (e.g. they appear in a subroutine called from within the region) they are said to be orphaned. If, however, they appear outside any dynamic connection with a parallel region (and also not in the lexical extent of a region), they are ignored: the enclosed code is performed by one thread only and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined, if it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations potentially performed by several threads at the same time (problems can occur because the same memory locations are accessed simultaneously). It is written as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, though only one at a time.&lt;br /&gt;
If more than one critical region is defined in the code, every critical region should have a different name, otherwise the execution outcome can become undetermined. &lt;br /&gt;
&lt;br /&gt;
It can also be specified that a portion of the code inside a parallel region is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread must be the Master thread, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
With MASTER, as with CRITICAL, no barriers exist on entry or exit. With SINGLE there is no barrier on entry, but an implicit barrier exists at the exit unless a NOWAIT clause is specified.&lt;br /&gt;
&lt;br /&gt;
In these constructs the absence of a barrier at the exit may cause problems if the code processed by the single thread must complete before the subsequent code runs. In that situation a barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
Under this directive each thread waits at the barrier point until all threads reach it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization potentially brings gains in the use of computational resources, it also involves overheads which can sometimes be important.&lt;br /&gt;
&lt;br /&gt;
The creation of the thread team (the Workers) at the beginning of each parallel region is one source of such overheads. Because of this, in each program/subroutine to be parallelized it is preferable to create a single parallel region, handling the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
These overheads mean that in some cases the parallelized solution may not perform better than the serial one, especially if the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. copying one matrix's values into another), and is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small problems, parallelization may consume more computational resources than the serial solution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
In some cases execution of a parallel program may be suspended because different threads compete to update the same variable. This is called a race condition.&lt;br /&gt;
&lt;br /&gt;
The programmer should take care to avoid this situation, as it is very difficult to debug. A way to avoid it is to use critical regions or a REDUCTION clause.&lt;br /&gt;
&lt;br /&gt;
It is also convenient to use barriers inside parallel regions to make the threads available when they are required. This may be needed when processing inside a parallel region is restricted in some parts to only some threads, e.g. through the !$OMP MASTER directive. &lt;br /&gt;
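&lt;br /&gt;
As a minimal sketch (with hypothetical variable names, not code taken from the MOHID source), a sum accumulated by several threads, a typical race condition candidate, can be made safe with a REDUCTION clause:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i)&lt;br /&gt;
 !$OMP DO REDUCTION(+:Total)&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     Total = Total + Values(i)&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
Each thread accumulates into its own private copy of Total and the partial sums are combined when the loop ends, so no two threads update the shared variable at the same time.&lt;br /&gt;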
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2951</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2951"/>
				<updated>2010-04-14T15:58:58Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The long-standing need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003 using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries.&lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land offer parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even a processor belonging to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor with each one having to wait for the others to perform their calculations.&lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], so no changes were needed apart from the implementation of the MPI communication calls. Using this feature, computational speed was improved (the gain varies from application to application), as the whole model now takes the time of the slowest model plus the time needed to communicate with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; in a normal Fortran compilation they are simply ignored. See more on [[compiling Mohid with OpenMP]].&lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.&lt;br /&gt;
&lt;br /&gt;
In a first phase, loop optimization is introduced in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules.&lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the variable to parallelize is chosen according to the cost involved in looping through it. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since both the resource costs and the savings achieved by parallelization are larger for this loop than for the others.&lt;br /&gt;
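&lt;br /&gt;
As an illustration (a minimal sketch with hypothetical array and bound names, not code taken from the MOHID source), such a 3D loop parallelized on the j variable could look as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k)&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC)&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do k = KLB, KUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
     NewField(k, j, i) = OldField(k, j, i)&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
The !$OMP DO directive applies to the loop that immediately follows it, so the j loop is written as the outermost one; the inner loop variables i and k must be declared PRIVATE so that each thread keeps its own copies.&lt;br /&gt;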
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is carried out by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing, and which would affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when these are used to index the positions in matrices or vectors altered in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific for the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
'''Sentinel''' is !$OMP in either fixed or free format. Continuation of directives (from one code line to the next) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing inside a parallel region should be assigned to a Work Sharing construct (or otherwise restricted to specific threads); code left outside these constructs is executed redundantly by every thread, which can lead to execution errors. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a Chunk number of iterations, each thread processes this fixed number of iterations; after finishing a Chunk, each thread begins another available Chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of Chunks among the threads requires synchronization for each assignment. This causes an overhead that can be important. The DYNAMIC option is advisable when each iteration involves an amount of work that is not predictable. This can be the case when IF constructs inside the loop contain extra processing done only in specific cases, or when the threads arrive at the DO loop at different times, e.g. coming from a previous DO loop with a NOWAIT end clause. &lt;br /&gt;
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and similar across iterations, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for DO loops that are the only construct in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, must be encountered by all the existing threads or by none at all, and have no barrier on entry (an arriving thread is not required to wait for the others) but a barrier on exit. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code appearing textually inside the parallel region). If they are dynamically linked to a parallel region (e.g. they appear in a subroutine called from within the region) they are said to be orphaned. If, however, they appear outside any dynamic connection with a parallel region (and also not in the lexical extent of a region), they are ignored: the enclosed code is performed by one thread only and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined, if it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations potentially performed by several threads at the same time (problems can occur because the same memory locations are accessed simultaneously). It is written as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, though only one at a time.&lt;br /&gt;
If more than one critical region is defined in the code, every critical region should have a different name, otherwise the execution outcome can become undetermined. &lt;br /&gt;
&lt;br /&gt;
It can also be specified that a portion of the code inside a parallel region is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread must be the Master thread, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
With MASTER, as with CRITICAL, no barriers exist on entry or exit. With SINGLE there is no barrier on entry, but an implicit barrier exists at the exit unless a NOWAIT clause is specified.&lt;br /&gt;
&lt;br /&gt;
In these constructs the absence of a barrier at the exit may cause problems if the code processed by the single thread must complete before the subsequent code runs. In that situation a barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
Under this directive each thread waits at the barrier point until all threads reach it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization potentially brings gains in the use of computational resources, it also involves overheads which can sometimes be important.&lt;br /&gt;
&lt;br /&gt;
The creation of the thread team (the Workers) at the beginning of each parallel region is one source of such overheads. Because of this, in each program/subroutine to be parallelized it is preferable to create a single parallel region, handling the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
These overheads mean that in some cases the parallelized solution may not perform better than the serial one, especially if the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. copying one matrix's values into another), and is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small problems, parallelization may consume more computational resources than the serial solution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
In some cases execution of a parallel program may be suspended because different threads compete to update the same variable. This is called a race condition.&lt;br /&gt;
&lt;br /&gt;
The programmer should take care to avoid this situation, as it is very difficult to debug. A way to avoid it is to use critical regions or a REDUCTION clause.&lt;br /&gt;
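&lt;br /&gt;
As a minimal sketch (with hypothetical variable names, not code taken from the MOHID source), a sum accumulated by several threads, a typical race condition candidate, can be made safe with a REDUCTION clause:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i)&lt;br /&gt;
 !$OMP DO REDUCTION(+:Total)&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     Total = Total + Values(i)&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
Each thread accumulates into its own private copy of Total and the partial sums are combined when the loop ends, so no two threads update the shared variable at the same time.&lt;br /&gt;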
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2950</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2950"/>
				<updated>2010-04-14T15:34:19Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The long-standing need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003 using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries.&lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land offer parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even a processor belonging to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor with each one having to wait for the others to perform their calculations.&lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], so no changes were needed apart from the implementation of the MPI communication calls. Using this feature, computational speed was improved (the gain varies from application to application), as the whole model now takes the time of the slowest model plus the time needed to communicate with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options, the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, so these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the variable to parallelize is chosen according to the cost of looping over that variable. E.g., in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced over the j variable, since the resource costs, and hence the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
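As an illustrative sketch of this choice (the array, bound and factor names below are hypothetical, not actual MOHID code), a 3D loop parallelized over the j index could look like this:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k)&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
     do i = ILB, IUB&lt;br /&gt;
         do k = KLB, KUB&lt;br /&gt;
             NewField(i, j, k) = OldField(i, j, k) * Factor&lt;br /&gt;
         enddo&lt;br /&gt;
     enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
Only the outer j loop is divided among the threads; each thread runs the full inner i and k loops for its own share of j values.&lt;br /&gt;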
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which proceed in parallel (simultaneously), instead of being performed by the single thread of serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread, and whose alteration would affect the processing of other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when these are used to address positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
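As a minimal sketch (the variable names are hypothetical), a scratch scalar written in every iteration must be private, otherwise the threads would overwrite each other's value:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, Depth)&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     Depth = Bottom(i) + Elevation(i)&lt;br /&gt;
     Velocity(i) = Flux(i) / Depth&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
Here each thread keeps its own copy of Depth, while the shared arrays are safe because each iteration writes to a different position.&lt;br /&gt;
&lt;br /&gt;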
The instructions in OpenMP are provided by a set of '''Directives'''. These cover several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''Sentinel''' is !$OMP in either fixed or free format. The continuation of directives (from one code line to another) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; this is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
Any processing inside a parallel region that is not enclosed in a Work Sharing construct is executed redundantly by every thread, which can lead to execution errors. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes one chunk at a time; after finishing a chunk, each thread begins another available chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of chunks among the threads requires synchronization for each assignment. This causes an overhead that can be important. The DYNAMIC option is advisable when each iteration involves an unpredictable amount of work. This can be the case when IF constructs inside the loop contain extra processing done only in specific cases, or when the threads arrive at the DO loop at different times, e.g. coming from a previous DO loop with a NOWAIT end clause. &lt;br /&gt;
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all iterations, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for a DO loop that is the only construct in its parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, that they must be reached by all the existing threads or by none at all, and that there is no barrier on entry (an arriving thread is not required to wait for the others) but there is a barrier on exit. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code textually enclosed by the region). If they are dynamically linked to a parallel region (e.g. they appear in a subroutine called from within the region) they are called orphaned directives. If, however, they appear with no dynamic connection to a parallel region (and also not in the lexical extent of any region), they are ignored and the enclosed code is executed by one thread only, with no parallelization.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations that could otherwise be performed by several threads at the same time (problems can occur because the same memory locations are being accessed simultaneously). The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry and exit, and all threads process the enclosed code, although one at a time. &lt;br /&gt;
&lt;br /&gt;
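For instance (a schematic sketch, not actual MOHID code), a global sum can be accumulated safely by letting each thread build a private partial sum and protecting only the final update with a critical region:&lt;br /&gt;
&lt;br /&gt;
 TotalMass = 0.0&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, LocalMass)&lt;br /&gt;
 LocalMass = 0.0&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     LocalMass = LocalMass + Mass(i)&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP CRITICAL&lt;br /&gt;
 TotalMass = TotalMass + LocalMass&lt;br /&gt;
 !$OMP END CRITICAL&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
OpenMP also provides a REDUCTION clause for this common pattern.&lt;br /&gt;
&lt;br /&gt;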
Within a Work Sharing construct it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When one wants this single thread to be the Master, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, no barriers exist on entry and exit of SINGLE/MASTER.&lt;br /&gt;
&lt;br /&gt;
In these constructs, the fact that no barrier exists at the exit may cause problems if the code section processed by the single thread must be completed before the subsequent code runs. In this situation a barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
With this directive, each thread waits at the barrier point until all threads have reached it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization involves potential gains in the use of computer resources, it also involves overheads which can sometimes be important.&lt;br /&gt;
&lt;br /&gt;
The creation of the team of Worker threads at the beginning of each parallel region is one source of such overheads. Because of this, in each program/subroutine to be parallelized, it is preferable to create a single parallel region and handle the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
Because of these overheads, the parallel solution may sometimes perform no better than the serial solution, especially if the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. assigning one matrix's values to another matrix), and is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small scale problems, parallelization may consume more computational resources than the serial solution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
In some cases the execution of a parallel program can produce results that differ from run to run: when two or more threads read and update the same shared variable without synchronization, the final result depends on the timing of the threads. This situation is called a race condition; it is avoided by making the variable private, by protecting the update with a CRITICAL construct, or by using a REDUCTION clause. &lt;br /&gt;
&lt;br /&gt;
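A schematic example (hypothetical code): if all threads update the same shared variable without protection, two threads can read the same old value and one of the increments is lost:&lt;br /&gt;
&lt;br /&gt;
 ! WRONG: Counter is shared by all threads&lt;br /&gt;
 !$OMP PARALLEL DO&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     Counter = Counter + 1&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END PARALLEL DO&lt;br /&gt;
&lt;br /&gt;
The update can be made safe with a CRITICAL construct around the increment or, for this pattern, with the clause REDUCTION(+:Counter) on the DO directive.&lt;br /&gt;
&lt;br /&gt;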
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2949</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=2949"/>
				<updated>2010-04-14T15:24:41Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The need to reduce computational time in numerical models became a priority for the Mohid development team when an operational hydrodynamic and water quality model of the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Parallel processing was therefore implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free, portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching a separate process for each model and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even one belonging to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor, each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without an [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes, except for the MPI communication calls, needed to be added. Using this feature, computational speed was improved (the gain varying from application to application), as the whole simulation now takes the time of the slowest model plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends of course on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection time-limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options, the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, so these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the variable to parallelize is chosen according to the cost of looping over that variable. E.g., in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced over the j variable, since the resource costs, and hence the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which proceed in parallel (simultaneously), instead of being performed by the single thread of serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread, and whose alteration would affect the processing of other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when these are used to address positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These cover several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''Sentinel''' is !$OMP in either fixed or free format. The continuation of directives (from one code line to another) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; this is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
Any processing inside a parallel region that is not enclosed in a Work Sharing construct is executed redundantly by every thread, which can lead to execution errors. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes one chunk at a time; after finishing a chunk, each thread begins another available chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of chunks among the threads requires synchronization for each assignment. This causes an overhead that can be important. The DYNAMIC option is advisable when each iteration involves an unpredictable amount of work. This can be the case when IF constructs inside the loop contain extra processing done only in specific cases, or when the threads arrive at the DO loop at different times, e.g. coming from a previous DO loop with a NOWAIT end clause. &lt;br /&gt;
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all iterations, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for a DO loop that is the only construct in its parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, that they must be reached by all the existing threads or by none at all, and that there is no barrier on entry (an arriving thread is not required to wait for the others) but there is a barrier on exit. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code textually enclosed by the region). If they are dynamically linked to a parallel region (e.g. they appear in a subroutine called from within the region) they are called orphaned directives. If, however, they appear with no dynamic connection to a parallel region (and also not in the lexical extent of any region), they are ignored and the enclosed code is executed by one thread only, with no parallelization.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations that could otherwise be performed by several threads at the same time (problems can occur because the same memory locations are being accessed simultaneously). The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry and exit, and all threads process the enclosed code, although one at a time. &lt;br /&gt;
&lt;br /&gt;
Within a Work Sharing construct it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When one wants this single thread to be the Master, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, no barriers exist on entry and exit of SINGLE/MASTER.&lt;br /&gt;
&lt;br /&gt;
In these constructs, the fact that no barrier exists at the exit may cause problems if the code section processed by the single thread must be completed before the subsequent code runs. In this situation a barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
With this directive, each thread waits at the barrier point until all threads have reached it.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization involves potential gains in the use of computer resources, it also involves overheads which can sometimes be important.&lt;br /&gt;
&lt;br /&gt;
The creation of the team of Worker threads at the beginning of each parallel region is one source of such overheads. Because of this, in each program/subroutine to be parallelized, it is preferable to create a single parallel region and handle the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
Because of these overheads, the parallel solution may sometimes perform no better than the serial solution, especially if the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. assigning one matrix's values to another matrix), and is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small scale problems, parallelization may consume more computational resources than the serial solution.&lt;br /&gt;
&lt;br /&gt;
'''''Race condition:'''''&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1984</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1984"/>
				<updated>2009-06-02T16:51:11Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority to the Mohid development team as an operational hydrodynamic and water quality model to the Tagus Estuary, in Lisbon, Portugal, was implemented using the [[Mohid Water]] model full capabilities. Thus, parallel processing has been implemented in Mohid Water in 2003, by using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, and due to the use of the new Intel Fortran compiler both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage for introducing parallel processing in Mohid was to add the possibility of launching a process by each model to run, and then, using MPICH, establish communication between models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of running all in the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as it is presently implemented in Mohid, could not be achieved without [[object oriented programming]] philosophy, as each model is an instance of [[class Model]] and no changes, exception made to the implementation of the MPI communications calls needed to be added. Using this feature, computational speed was improved (varying from application to application), as now the whole model will take the same time as the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passing between models, depending of course on the memory allocated for each model, has not yet proven to be big enough to make a 100 Mbps network connection time limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decoupling a domain into several subdomain that communicate among them (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are defined as comments in the code and therefore need special compilation options. See more on [[compiling Mohid with OpenMP]]. Without special compilation these instructions are not considered in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used in '''multi core processors''' present in the same computer. It cannot be used to parallel processing using processors located in several computers in a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loops optimization is introduced in a first phase in loops referring to grid variables (grid indexes, k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are possibly used several times or involve a large resource allocation in MOHID simulations, hence these are locations with larger potential resource gains involved in parallelization.&lt;br /&gt;
&lt;br /&gt;
In case of loops with several looping variables, the parallelized variable is chosen according with cost involved in the loop through this variable. E.g. in a 3D loop (k, j, i loop variables) if j dimension is much larger than the k (number of layers) or i then parallel processing is introduced in j variable, since the resource costs and the savings achieved with the parallelization are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
In this section is provided an introduction with the basic concepts in OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is made by a set of threads: the '''Master''' thread and the '''Workers''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of the unique thread existent in unparallel processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by this thread. The choice of the private variables, which are explicitly defined in programming, is a central part of the OpenMP programming. Should be made private all the variables whose values are altered by each thread processing and that affect other threads processing.&lt;br /&gt;
&lt;br /&gt;
An obvious choice of these private variables are the loop variables when these are used to alter positions in matrixes or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However it should be noted that in private variables the values are undefined in enter and exit of the parallel region and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific for the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran directives are case insensitive. The basic syntax in as following:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
'''Sentinel''' is !$OMP in either fixed or free format. The continuation of directives (from one code line to another) are according with the underlying language: &amp;amp; in Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for a directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (list of private variables);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing inside a parallel region should be assigned to a Work Sharing construct; code that is not is executed redundantly by every thread, which can lead to execution errors (e.g. race conditions). &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. &lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a Chunk size (a number of iterations), each thread processes one Chunk at a time; after finishing a Chunk, each thread begins the next available Chunk, until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of Chunks among the threads requires synchronization at each assignment, which causes an overhead that can be significant. The DYNAMIC option is advisable when the amount of work per iteration is not predictable. This can be the case when IF constructs inside the loop trigger extra processing only in specific cases, or when the threads arrive at the DO loop at different times, e.g. coming from a previous DO loop ended with a NOWAIT clause. &lt;br /&gt;
&lt;br /&gt;
When the amount of work required by each DO loop iteration is predictable and the same for all threads, it is advisable to use the STATIC option of SCHEDULE, in which the assignment of Chunks to threads is fixed in advance. This is particularly useful for DO loops which are unique in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, that they must be reached by all of the existing threads or by none at all, and that there is no barrier on entry (a thread arriving at the construct does not wait for the others) but there is a barrier on exit. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code appearing textually inside the parallel region). If they are dynamically linked to a parallel region (e.g. they appear in a subroutine called from inside the region), they are said to be orphaned. If, however, they appear outside any dynamic connection with a parallel region (and also outside the lexical extent of a region), they are ignored: the enclosed code is executed by one thread only and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations performed by several threads at the same time (problems can occur because the same memory locations are accessed simultaneously). The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, although only one at a time. &lt;br /&gt;
&lt;br /&gt;
Within a parallel region it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread is required to be the Master thread, the following syntax is used instead:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
The MASTER construct has no barrier on entry or exit; SINGLE likewise has no barrier on entry, but it has an implicit barrier on exit unless a NOWAIT clause is specified.&lt;br /&gt;
&lt;br /&gt;
When no exit barrier is present, problems may arise if the code section processed by the single thread must be completed before the subsequent code is executed. In this situation a barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
At this directive each thread waits until all threads of the team have reached the barrier point.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
Although parallelization potentially brings gains in the use of computational resources, it also involves overheads which can sometimes be significant.&lt;br /&gt;
&lt;br /&gt;
The creation of the team of Worker threads at the beginning of each parallel region is one source of such overheads. For this reason, in each program/subroutine to be parallelized it is preferable to create a single parallel region, and to handle the code that must be executed by only one thread with OpenMP directives, rather than to create several parallel regions.&lt;br /&gt;
&lt;br /&gt;
Another source of overhead is the synchronization between threads in Work Sharing constructs.&lt;br /&gt;
&lt;br /&gt;
Because of these overheads, the parallel solution may sometimes perform no better than the serial solution, especially if the computational burden of the problem is low. &lt;br /&gt;
&lt;br /&gt;
This is the case for DO constructs in which each iteration has a small computational burden (e.g. copying one matrix's values to another), and it is to be expected in nested loops when the parallelized DO loop is the inner one.&lt;br /&gt;
&lt;br /&gt;
Generally, as the scale of the problem increases, the overheads become less significant and parallelization more advantageous.&lt;br /&gt;
&lt;br /&gt;
In very small problems, parallelization may actually consume more computational resources than the serial solution.&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>


	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1982</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1982"/>
				<updated>2009-05-29T14:05:56Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority to the Mohid development team as an operational hydrodynamic and water quality model to the Tagus Estuary, in Lisbon, Portugal, was implemented using the [[Mohid Water]] model full capabilities. Thus, parallel processing has been implemented in Mohid Water in 2003, by using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, and due to the use of the new Intel Fortran compiler both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage for introducing parallel processing in Mohid was to add the possibility of launching a process by each model to run, and then, using MPICH, establish communication between models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of running all in the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as it is presently implemented in Mohid, could not be achieved without [[object oriented programming]] philosophy, as each model is an instance of [[class Model]] and no changes, exception made to the implementation of the MPI communications calls needed to be added. Using this feature, computational speed was improved (varying from application to application), as now the whole model will take the same time as the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passing between models, depending of course on the memory allocated for each model, has not yet proven to be big enough to make a 100 Mbps network connection time limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decoupling a domain into several subdomain that communicate among them (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are defined as comments in the code and therefore need special compilation options. See more on [[compiling Mohid with OpenMP]]. Without special compilation these instructions are not considered in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used in '''multi core processors''' present in the same computer. It cannot be used to parallel processing using processors located in several computers in a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loops optimization is introduced in a first phase in loops referring to grid variables (grid indexes, k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are possibly used several times or involve a large resource allocation in MOHID simulations, hence these are locations with larger potential resource gains involved in parallelization.&lt;br /&gt;
&lt;br /&gt;
In case of loops with several looping variables, the parallelized variable is chosen according with cost involved in the loop through this variable. E.g. in a 3D loop (k, j, i loop variables) if j dimension is much larger than the k (number of layers) or i then parallel processing is introduced in j variable, since the resource costs and the savings achieved with the parallelization are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
In this section is provided an introduction with the basic concepts in OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is made by a set of threads: the '''Master''' thread and the '''Workers''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of the unique thread existent in unparallel processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by this thread. The choice of the private variables, which are explicitly defined in programming, is a central part of the OpenMP programming. Should be made private all the variables whose values are altered by each thread processing and that affect other threads processing.&lt;br /&gt;
&lt;br /&gt;
An obvious choice of these private variables are the loop variables when these are used to alter positions in matrixes or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However it should be noted that in private variables the values are undefined in enter and exit of the parallel region and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific for the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran directives are case insensitive. The basic syntax in as following:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
'''Sentinel''' is !$OMP in either fixed or free format. The continuation of directives (from one code line to another) are according with the underlying language: &amp;amp; in Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specify that threads will not syncronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; this is specified at the end of a parallel construct;&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed by the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
For every processing made inside a parallel region must be assigned a Work Sharing construct. If this is not verified execution errors can occur. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed by the threads. &lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: provided a Chunk number of iterations each thread will process this fixed number of iterations. After finishing a Chunk each thread will began another available Chunk till all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of Chunk among the threads requires synchronization for each assignment. This causes a overhead that could be important. The DYNAMIC option is advisable when each iteration involves an amount of work which is not predictable. This could be the case when IF constructs are present inside the loop containg extra processing done only in specific cases. Also if the threads arrive to the DO loop at different times, e.g. if they come from a previous DO loop with a NOWAIT end clause. &lt;br /&gt;
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all threads it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for DO loops which are unique in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Important characteristics of the Work Sharing constructs are that they do not create new threads, must contemplate all the existing threads or none at all and there is no barrier on entry (any available thread is not required to wait for the others) but a barrier on exit exists. The barrier on exit can be removed by the referred NOWAIT clause.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (sequential lines of code appearing inside the parallel region). If they are dynamically linked with a parallel region (e.g. they appear in processing a subroutine call) they are orphaned. If, however, they appear outside this dynamic connection with a parallel region (and also not in the lexical extent of the region) they are ignored and the enclosed code is performed by one thread only and no parallelization is processed.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct can be defined a critical region, if it is conveninent that only one thread at a time processes a specific code. This can be used to avoid problems with input/output operations potentially occuring by several threads processing at same time (problems can occur because memory locations are being assessed at same time). This is commanded as following:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist in the entry and exit and all threads process the enclosed code although one at a time. &lt;br /&gt;
&lt;br /&gt;
Within a Work Sharing construct can also be specified that a portion of the code is processed only by one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When one wants this single thread to be the Master then following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, the MASTER construct has no barrier on entry or exit; SINGLE has no barrier on entry, and its implicit exit barrier can be removed with the NOWAIT clause.&lt;br /&gt;
&lt;br /&gt;
In these constructs, the absence of a barrier at the exit may cause problems if the code executed by the single thread must complete before the subsequent code runs. In this situation an explicit barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
At this directive each thread waits until all threads have reached the barrier point.&lt;br /&gt;
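&lt;br /&gt;
As a hypothetical illustration (the file unit and variable names are invented), a Master-only read followed by an explicit barrier ensures all threads see the value before using it:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 read(ParamUnit, *) GlobalParameter&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
 ... (all threads can now safely use GlobalParameter)&lt;br /&gt;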
&lt;br /&gt;
'''''Parallelization overheads:'''''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1953</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1953"/>
				<updated>2009-05-22T13:55:35Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Reducing computational time, a long-standing need in numerical modelling, became a priority for the Mohid development team when an operational hydrodynamic and water quality model of the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Parallel processing was therefore implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries.&lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features based on [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by assigning each one a father-son identification, through which the models communicate. The first stage of introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even a processor belonging to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor with each one waiting for the others to finish their calculations.&lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from the implementation of the MPI communication calls. With this feature, computational speed was improved (to a degree that varies from application to application): the whole simulation now takes the time of the slowest model plus the time spent communicating with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information exchanged between models, which naturally depends on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (two-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives appear as comments in the code and therefore require special compilation options; without them, the directives are ignored in normal Fortran compilation. See more on [[compiling Mohid with OpenMP]].&lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.&lt;br /&gt;
&lt;br /&gt;
Loop optimization is being introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules.&lt;br /&gt;
In MOHID simulations the Modifier loops are typically executed many times, or involve a large amount of computation, so these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several loop variables, the variable to parallelize is chosen according to the cost of looping over it. For example, in a 3D loop (with k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is applied to the j variable, since both the cost and the savings achieved by parallelization are larger for this loop than for the others.&lt;br /&gt;
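&lt;br /&gt;
As an illustration (a hypothetical sketch with invented array and bound names, not actual MOHID code), such a loop parallelized over the j index could look like:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k)&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do k = KLB, KUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
     Field(i, j, k) = Field(i, j, k) + DT * Source(i, j, k)&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;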
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a team of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being carried out by the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread, and whose modification would affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when they are used to index positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables of the same name outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These cover several actions, such as the definition of Parallel regions, Work Sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''sentinel''' is !$OMP, in either fixed or free format. Continuation of a directive from one code line to the next follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
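&lt;br /&gt;
A hypothetical continued directive in free-format Fortran (the clause contents are illustrative; the continued line must start with the sentinel again):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k) &amp;amp;&lt;br /&gt;
 !$OMP SHARED(Field)&lt;br /&gt;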
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is placed at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''.&lt;br /&gt;
&lt;br /&gt;
All processing performed inside a parallel region must be assigned to a Work Sharing construct; if this is not ensured, execution errors can occur.&lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads.&lt;br /&gt;
&lt;br /&gt;
The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a Chunk number of iterations, each thread processes this fixed number of iterations; after finishing a Chunk, each thread begins another available Chunk until all iterations are completed:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The distribution of Chunks among the threads requires synchronization for each assignment. This causes an overhead that can be significant. The DYNAMIC option is advisable when each iteration involves an amount of work that is not predictable. This can be the case when IF constructs inside the loop trigger extra processing only in specific cases, or when the threads arrive at the DO loop at different times, e.g. coming from a previous DO loop with a NOWAIT clause.&lt;br /&gt;
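&lt;br /&gt;
A hypothetical loop with unpredictable work per iteration, where DYNAMIC scheduling pays off (the condition and chunk size are illustrative only):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, 10)&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
     if (OpenPoint(j)) then&lt;br /&gt;
         ... (extra processing performed only for open points)&lt;br /&gt;
     endif&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;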
&lt;br /&gt;
When the amount of work required in each DO loop iteration is predictable and the same for all iterations, it is advisable to use the STATIC option of SCHEDULE. This is particularly useful for a DO loop that is the only one in a parallel region:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(STATIC, CHUNK)&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Work Sharing constructs have several important characteristics: they do not create new threads; they must be encountered either by all the existing threads or by none; and there is no barrier on entry (an arriving thread is not required to wait for the others), but there is an implicit barrier on exit. The exit barrier can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can also appear outside the lexical extent of a parallel region (the lines of code that appear textually inside the region). If they are dynamically bound to a parallel region (e.g. they appear in a subroutine called from within the region) they are said to be orphaned. If, however, they have no dynamic connection to a parallel region (and are also outside the lexical extent of any region), they are ignored: the enclosed code is executed by a single thread and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
A critical region can be defined inside a Work Sharing construct when it is convenient that only one thread at a time executes a specific piece of code. This can be used to avoid problems with input/output operations performed by several threads at the same time (problems can occur because the same memory locations are accessed simultaneously). The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case no barrier exists on entry or exit, and all threads execute the enclosed code, although only one at a time.&lt;br /&gt;
&lt;br /&gt;
It can also be specified, within a parallel region, that a portion of the code is executed by only one thread, e.g. to read information from a file shared by all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread is required to be the Master thread, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, the MASTER construct has no barrier on entry or exit; SINGLE has no barrier on entry, and its implicit exit barrier can be removed with the NOWAIT clause.&lt;br /&gt;
&lt;br /&gt;
In these constructs, the absence of a barrier at the exit may cause problems if the code executed by the single thread must complete before the subsequent code runs. In this situation an explicit barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
At this directive each thread waits until all threads have reached the barrier point.&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1949</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1949"/>
				<updated>2009-05-18T12:46:57Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Reducing computational time, a long-standing need in numerical modelling, became a priority for the Mohid development team when an operational hydrodynamic and water quality model of the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Parallel processing was therefore implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries.&lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features based on [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by assigning each one a father-son identification, through which the models communicate. The first stage of introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even a processor belonging to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor with each one waiting for the others to finish their calculations.&lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from the implementation of the MPI communication calls. With this feature, computational speed was improved (to a degree that varies from application to application): the whole simulation now takes the time of the slowest model plus the time spent communicating with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information exchanged between models, which naturally depends on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (two-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives appear as comments in the code and therefore require special compilation options; without them, the directives are ignored in normal Fortran compilation. See more on [[compiling Mohid with OpenMP]].&lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.&lt;br /&gt;
&lt;br /&gt;
Loop optimization is being introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules.&lt;br /&gt;
In MOHID simulations the Modifier loops are typically executed many times, or involve a large amount of computation, so these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several loop variables, the variable to parallelize is chosen according to the cost of looping over it. For example, in a 3D loop (with k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is applied to the j variable, since both the cost and the savings achieved by parallelization are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a team of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being carried out by the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread, and whose modification would affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when they are used to index positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables of the same name outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These cover several actions, such as the definition of Parallel regions, Work Sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''sentinel''' is !$OMP, in either fixed or free format. Continuation of a directive from one code line to the next follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is placed at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''.&lt;br /&gt;
&lt;br /&gt;
All processing performed inside a parallel region must be assigned to a Work Sharing construct; if this is not ensured, execution errors can occur.&lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a Chunk number of iterations, each thread processes this fixed number of iterations; after finishing a Chunk, each thread begins another available Chunk until all iterations are completed.&lt;br /&gt;
&lt;br /&gt;
Work Sharing constructs have several important characteristics: they do not create new threads; they must be encountered either by all the existing threads or by none; and there is no barrier on entry (an arriving thread is not required to wait for the others), but there is an implicit barrier on exit. The exit barrier can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can also appear outside the lexical extent of a parallel region (the lines of code that appear textually inside the region). If they are dynamically bound to a parallel region (e.g. they appear in a subroutine called from within the region) they are said to be orphaned. If, however, they have no dynamic connection to a parallel region (and are also outside the lexical extent of any region), they are ignored: the enclosed code is executed by a single thread and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
A critical region can be defined inside a Work Sharing construct when it is convenient that only one thread at a time executes a specific piece of code. This can be used to avoid problems with input/output operations performed by several threads at the same time (problems can occur because the same memory locations are accessed simultaneously). The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case no barrier exists on entry or exit, and all threads execute the enclosed code, although only one at a time.&lt;br /&gt;
&lt;br /&gt;
It can also be specified, within a parallel region, that a portion of the code is executed by only one thread, e.g. to read information from a file shared by all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When this single thread is required to be the Master thread, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, the MASTER construct has no barrier on entry or exit; SINGLE has no barrier on entry, and its implicit exit barrier can be removed with the NOWAIT clause.&lt;br /&gt;
&lt;br /&gt;
In these constructs, the absence of a barrier at the exit may cause problems if the code executed by the single thread must complete before the subsequent code runs. In this situation an explicit barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
At this directive each thread waits until all threads have reached the barrier point.&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1948</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1948"/>
				<updated>2009-05-18T12:43:40Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Reducing computational time, a long-standing need in numerical modelling, became a priority for the Mohid development team when an operational hydrodynamic and water quality model of the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Parallel processing was therefore implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries.&lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features based on [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by assigning each one a father-son identification, through which the models communicate. The first stage of introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even a processor belonging to a different computer, as long as it is on the same network) and in parallel, instead of all models running on the same processor with each one waiting for the others to finish their calculations.&lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from the implementation of the MPI communication calls. With this feature, computational speed was improved (to a degree that varies from application to application): the whole simulation now takes the time of the slowest model plus the time spent communicating with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information exchanged between models, which naturally depends on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (two-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives appear as comments in the code and therefore require special compilation options; without them, the directives are ignored in normal Fortran compilation. See more on [[compiling Mohid with OpenMP]].&lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.&lt;br /&gt;
&lt;br /&gt;
Loop optimization is being introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules.&lt;br /&gt;
In MOHID simulations the Modifier loops are typically executed many times, or involve a large amount of computation, so these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several loop variables, the variable to parallelize is chosen according to the cost of looping over it. For example, in a 3D loop (with k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is applied to the j variable, since both the cost and the savings achieved by parallelization are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a team of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being carried out by the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly declared by the programmer, is a central part of OpenMP programming. All variables whose values are altered by each thread, and whose modification would affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables are the loop variables, when they are used to index positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables of the same name outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These cover several actions, such as the definition of Parallel regions, Work Sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''sentinel''' is !$OMP, in either fixed or free format. Continuation of a directive from one code line to the next follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is placed at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing performed inside a parallel region must be assigned to a Work Sharing construct; if this is not ensured, execution errors can occur. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. How the work is divided over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes this fixed number of iterations and, after finishing a chunk, begins another available chunk until all iterations are completed.&lt;br /&gt;
&lt;br /&gt;
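A complete parallelized loop combining the PARALLEL, PRIVATE, DO and SCHEDULE elements might look as follows (a sketch; the array and index names are hypothetical, not taken from the MOHID code):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j)&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, 10)&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
 Field2D(i, j) = Field2D(i, j) + DT * Source(i, j)&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
Here each thread repeatedly takes a chunk of 10 iterations of the j loop, using its own private copies of the loop variables i and j.&lt;br /&gt;
&lt;br /&gt;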
Important characteristics of the Work Sharing constructs are that they do not create new threads, they must be encountered by all the existing threads or by none at all, and there is no barrier on entry (an available thread is not required to wait for the others) but a barrier on exit exists. The barrier on exit can be removed by the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
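For example, when two consecutive loops in the same parallel region are independent, the exit barrier of the first can be removed (a sketch):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (first do loop)&lt;br /&gt;
 !$OMP END DO NOWAIT&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (second do loop, independent of the first)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Threads finishing the first loop can then start the second without waiting for the others.&lt;br /&gt;
&lt;br /&gt;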
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code appearing textually inside the parallel region). If they are dynamically linked with a parallel region (e.g. they appear in a subroutine called from within the region), they are said to be orphaned. If, however, they appear outside any such dynamic connection with a parallel region (and also not in the lexical extent of a region), they are ignored and the enclosed code is executed by one thread only, with no parallelization.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined, when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations potentially performed by several threads at the same time (problems can occur because the same memory locations are being accessed simultaneously). This is specified as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, although only one at a time. &lt;br /&gt;
&lt;br /&gt;
Within a Work Sharing construct it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When one wants this single thread to be the Master, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, no barriers exist in SINGLE/MASTER on entry or exit.&lt;br /&gt;
&lt;br /&gt;
In these constructs the absence of a barrier on exit may cause problems if the code section processed by the single thread must be completed before the subsequent code is executed. In this situation a barrier can be introduced at the end of the SINGLE/MASTER construct:&lt;br /&gt;
&lt;br /&gt;
 !$OMP BARRIER&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1947</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1947"/>
				<updated>2009-05-15T20:15:21Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even one belonging to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor, each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes needed to be made other than the implementation of the MPI communication calls. Using this feature, computational speed was improved (the gain varying from application to application), as the whole model now takes the time of the slowest model plus the time to communicate with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends of course on the memory allocated for each model, has not yet proven to be large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and are therefore ignored in normal Fortran compilation; special compilation options are needed to activate them. See more on [[compiling Mohid with OpenMP]]. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve large resource allocations in MOHID simulations, hence they are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the variable to parallelize is chosen according to the cost involved in looping through it. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and therefore the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being carried out by the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly defined in programming, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing, and which would otherwise affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for these private variables is the loop variables, when these are used to address positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These refer to several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''Sentinel''' is !$OMP in either fixed or free format. The continuation of directives (from one code line to another) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; this is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing performed inside a parallel region must be assigned to a Work Sharing construct; if this is not ensured, execution errors can occur. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. How the work is divided over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes this fixed number of iterations and, after finishing a chunk, begins another available chunk until all iterations are completed.&lt;br /&gt;
&lt;br /&gt;
Important characteristics of the Work Sharing constructs are that they do not create new threads, they must be encountered by all the existing threads or by none at all, and there is no barrier on entry (an available thread is not required to wait for the others) but a barrier on exit exists. The barrier on exit can be removed by the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code appearing textually inside the parallel region). If they are dynamically linked with a parallel region (e.g. they appear in a subroutine called from within the region), they are said to be orphaned. If, however, they appear outside any such dynamic connection with a parallel region (and also not in the lexical extent of a region), they are ignored and the enclosed code is executed by one thread only, with no parallelization.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined, when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations potentially performed by several threads at the same time (problems can occur because the same memory locations are being accessed simultaneously). This is specified as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, although only one at a time. &lt;br /&gt;
&lt;br /&gt;
Within a Work Sharing construct it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
When one wants this single thread to be the Master, the following syntax is used:&lt;br /&gt;
&lt;br /&gt;
 !$OMP MASTER&lt;br /&gt;
 ... (code to be processed by only Master thread)&lt;br /&gt;
 !$OMP END MASTER&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, no barriers exist in SINGLE/MASTER on entry or exit.&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1946</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1946"/>
				<updated>2009-05-15T20:03:56Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even one belonging to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor, each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes needed to be made other than the implementation of the MPI communication calls. Using this feature, computational speed was improved (the gain varying from application to application), as the whole model now takes the time of the slowest model plus the time to communicate with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends of course on the memory allocated for each model, has not yet proven to be large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and are therefore ignored in normal Fortran compilation; special compilation options are needed to activate them. See more on [[compiling Mohid with OpenMP]]. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve large resource allocations in MOHID simulations, hence they are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the variable to parallelize is chosen according to the cost involved in looping through it. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and therefore the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being carried out by the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of '''Private variables''', which are affected only by that thread. The choice of the private variables, which are explicitly defined in programming, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing, and which would otherwise affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for these private variables is the loop variables, when these are used to address positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These refer to several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The '''Sentinel''' is !$OMP in either fixed or free format. The continuation of directives (from one code line to another) follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; this is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. &lt;br /&gt;
&lt;br /&gt;
All processing performed inside a parallel region must be assigned to a Work Sharing construct; if this is not ensured, execution errors can occur. &lt;br /&gt;
&lt;br /&gt;
An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. How the work is divided over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes this fixed number of iterations and, after finishing a chunk, begins another available chunk until all iterations are completed.&lt;br /&gt;
&lt;br /&gt;
Important characteristics of the Work Sharing constructs are that they do not create new threads, they must be encountered by all the existing threads or by none at all, and there is no barrier on entry (an available thread is not required to wait for the others) but a barrier on exit exists. The barrier on exit can be removed by the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code appearing textually inside the parallel region). If they are dynamically linked with a parallel region (e.g. they appear in a subroutine called from within the region), they are said to be orphaned. If, however, they appear outside any such dynamic connection with a parallel region (and also not in the lexical extent of a region), they are ignored and the enclosed code is executed by one thread only, with no parallelization.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined, when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations potentially performed by several threads at the same time (problems can occur because the same memory locations are being accessed simultaneously). This is specified as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, although only one at a time. &lt;br /&gt;
&lt;br /&gt;
Within a Work Sharing construct it can also be specified that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code to be processed by only one thread)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, no barriers exist in SINGLE on entry or exit.&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1945</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1945"/>
				<updated>2009-05-15T18:28:19Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even one belonging to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor, each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes needed to be made other than the implementation of the MPI communication calls. Using this feature, computational speed was improved (the gain varying from application to application), as the whole model now takes the time of the slowest model plus the time to communicate with the other processes. Here the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends of course on the memory allocated for each model, has not yet proven to be large enough to make a 100 Mbps network connection the limiting factor.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and are therefore ignored in normal Fortran compilation; special compilation options are needed to activate them. See more on [[compiling Mohid with OpenMP]]. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve large resource allocations in MOHID simulations, hence they are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the variable to parallelize is chosen according to the cost involved in looping through it. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and therefore the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being carried out by the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of private variables, which are affected only by that thread. The choice of the private variables, which are explicitly defined in programming, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing, and which would otherwise affect the processing of the other threads, should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for these private variables is the loop variables, when these are used to address positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''Directives'''. These refer to several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
'''Sentinel''' is !$OMP in either fixed or free format. The continuation of directives (from one code line to another) are according with the underlying language: &amp;amp; in Fortran case.&lt;br /&gt;
&lt;br /&gt;
'''Clauses''' are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. The way the work is distributed over the threads is managed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes a chunk of that many iterations; after finishing a chunk, each thread begins another available chunk until all iterations are completed.&lt;br /&gt;
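&lt;br /&gt;
As an example, the dynamic distribution just described could be requested as in the following sketch (the chunk size of 10 is merely illustrative):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, 10)&lt;br /&gt;
 do j = 1, SizeJ&lt;br /&gt;
 ... (work for index j)&lt;br /&gt;
 end do&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Each thread repeatedly takes the next available block of 10 iterations until none remain, which helps balance the load when iterations have unequal costs.&lt;br /&gt;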
&lt;br /&gt;
Important characteristics of the Work Sharing constructs are that they do not create new threads, that they must be encountered by all the existing threads or by none at all, and that there is no barrier on entry (an available thread is not required to wait for the others) but a barrier on exit exists. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region (the lines of code textually enclosed by the parallel region). If they are dynamically bound to a parallel region (e.g. they appear in a subroutine called from within the region) they are called orphaned directives. If, however, they appear outside any such dynamic connection with a parallel region (and also not in the lexical extent of a region) they are ignored: the enclosed code is performed by one thread only and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
A critical region can be defined inside a Work Sharing construct when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid the problems with input/output operations that can potentially occur when several threads are processing at the same time (problems can occur because the same memory locations are being accessed simultaneously). The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, although one at a time. &lt;br /&gt;
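&lt;br /&gt;
For example, a critical region could protect a write statement inside a parallel loop (a sketch with illustrative names):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do j = 1, SizeJ&lt;br /&gt;
 ... (parallel work)&lt;br /&gt;
 !$OMP CRITICAL (LogWrite)&lt;br /&gt;
 write(LogUnit, *) 'finished index ', j&lt;br /&gt;
 !$OMP END CRITICAL (LogWrite)&lt;br /&gt;
 end do&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
The writes are thus performed one thread at a time, while the rest of the loop body still runs in parallel.&lt;br /&gt;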
&lt;br /&gt;
It can also be specified, within a Work Sharing construct, that a portion of the code is processed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
Unlike CRITICAL, the SINGLE construct has an implied barrier on exit, which can be removed with the NOWAIT clause.&lt;br /&gt;
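&lt;br /&gt;
For instance, a SINGLE construct could be used so that only one thread reads a shared input value (a sketch with illustrative names):&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE&lt;br /&gt;
 read(DataUnit, *) GlobalParameter&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
Only one thread executes the read, while the other threads skip the enclosed code.&lt;br /&gt;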
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1944</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1944"/>
				<updated>2009-05-15T18:11:32Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model of the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching a process for each model to run and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor and each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes needed to be made apart from the implementation of the MPI communication calls. Using this feature, computational speed was improved (the gain varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, network communication speed plays an important role, as it can become limiting. However, the amount of information passing between the models, which depends of course on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection time limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore need special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the instructions are simply ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve large resource allocations in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
For loops with several loop variables, the parallelized variable is chosen according to the cost of looping over that variable. For example, in a 3D loop (with k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced over the j variable, since both the resource costs and the savings achieved by parallelization are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is carried out by a team of threads: the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is accomplished by defining '''Parallel regions''' and by creating the threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being carried out by the single thread that exists in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread can have a set of private variables, which are affected only by that thread. The choice of the private variables, which are explicitly declared in the code, is a central part of OpenMP programming. All variables whose values are altered by each thread's processing and that would affect the processing of other threads should be made private.&lt;br /&gt;
&lt;br /&gt;
An obvious choice for private variables is the loop variables, when these are used to index positions in matrices or vectors in each iteration of the loop.&lt;br /&gt;
&lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region, and that by default these variables have no storage association with the variables outside the parallel region (this default behavior can, however, be altered by specific OpenMP directives).&lt;br /&gt;
&lt;br /&gt;
Instructions in OpenMP are provided by a set of '''directives'''. These cover several kinds of actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in either fixed or free source format. The continuation of a directive from one code line to the next follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
Clauses are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. The way the work is distributed over the threads is governed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes a chunk of that many iterations; after finishing a chunk, each thread begins another available chunk until all iterations are completed.&lt;br /&gt;
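&lt;br /&gt;
As an example, the dynamic distribution just described could be requested as in the following sketch (the chunk size of 10 is merely illustrative):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, 10)&lt;br /&gt;
 do j = 1, SizeJ&lt;br /&gt;
 ... (work for index j)&lt;br /&gt;
 end do&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Each thread repeatedly takes the next available block of 10 iterations until none remain, which helps balance the load when iterations have unequal costs.&lt;br /&gt;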
&lt;br /&gt;
Important characteristics of the Work Sharing constructs are that they do not create new threads, that they must be encountered by all the existing threads or by none at all, and that there is no barrier on entry (an available thread is not required to wait for the others) but a barrier on exit exists. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region. If they are dynamically bound to a parallel region (e.g. they appear in a subroutine called from within the region) they are called orphaned directives. If, however, they appear outside any such dynamic connection with a parallel region they are ignored: the enclosed code is performed by the Master only and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
A critical region can be defined inside a Work Sharing construct when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid the problems with input/output operations that can potentially occur when several threads are processing at the same time. The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, although one at a time. &lt;br /&gt;
&lt;br /&gt;
It can also be specified, within a Work Sharing construct, that a portion of the code is executed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
Unlike CRITICAL, the SINGLE construct has an implied barrier on exit, which can be removed with the NOWAIT clause.&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1928</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1928"/>
				<updated>2009-05-12T13:16:51Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model of the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching a process for each model to run and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor and each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes needed to be made apart from the implementation of the MPI communication calls. Using this feature, computational speed was improved (the gain varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, network communication speed plays an important role, as it can become limiting. However, the amount of information passing between the models, which depends of course on the memory allocated to each model, has not yet proven large enough to make a 100 Mbps network connection time limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore need special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the instructions are simply ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi-core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve large resource allocations in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
For loops with several loop variables, the parallelized variable is chosen according to the cost of looping over that variable. For example, in a 3D loop (with k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced over the j variable, since both the resource costs and the savings achieved by parallelization are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is carried out by a team of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then proceed in parallel (simultaneously), instead of being carried out by the single thread (the Master) that exists in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread has a set of private variables, which are affected only by that thread. The choice of the private variables, which are explicitly declared in the code, is a central part of OpenMP programming. All variables whose values are altered by a thread's processing should be made private. &lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region and that by default these variables have no storage association with the variables outside the parallel region.&lt;br /&gt;
&lt;br /&gt;
Instructions in OpenMP are provided by a set of '''directives'''. These cover several kinds of actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in either fixed or free source format. The continuation of directives follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
Clauses are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran DO loop are distributed among the threads. The way the work is distributed over the threads is governed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes a chunk of that many iterations; after finishing a chunk, each thread begins another available chunk until all iterations are completed.&lt;br /&gt;
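&lt;br /&gt;
As an example, the dynamic distribution just described could be requested as in the following sketch (the chunk size of 10 is merely illustrative):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, 10)&lt;br /&gt;
 do j = 1, SizeJ&lt;br /&gt;
 ... (work for index j)&lt;br /&gt;
 end do&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
Each thread repeatedly takes the next available block of 10 iterations until none remain, which helps balance the load when iterations have unequal costs.&lt;br /&gt;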
&lt;br /&gt;
Important characteristics of the Work Sharing constructs are that they do not create new threads, that they must be encountered by all the existing threads or by none at all, and that there is no barrier on entry (an available thread is not required to wait for the others) but a barrier on exit exists. The barrier on exit can be removed with the NOWAIT clause referred to above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region. If they are dynamically bound to a parallel region (e.g. they appear in a subroutine called from within the region) they are called orphaned directives. If, however, they appear outside any such dynamic connection with a parallel region they are ignored: the enclosed code is performed by the Master only and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
A critical region can be defined inside a Work Sharing construct when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid the problems with input/output operations that can potentially occur when several threads are processing at the same time. The syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads process the enclosed code, although one at a time. &lt;br /&gt;
&lt;br /&gt;
It can also be specified, within a Work Sharing construct, that a portion of the code is executed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
Unlike CRITICAL, the SINGLE construct has an implied barrier on exit, which can be removed with the NOWAIT clause.&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=ConvertToHDF5&amp;diff=1927</id>
		<title>ConvertToHDF5</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=ConvertToHDF5&amp;diff=1927"/>
				<updated>2009-05-12T12:57:56Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* CONVERT MERCATOR FORMAT */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''ConvertToHDF5''' is an application which performs several operations, called '''actions''', involving HDF5 files: conversion of data in other formats (e.g. NETCDF) to HDF5, grid interpolation and concatenation of several files.&lt;br /&gt;
&lt;br /&gt;
Running options for this application are specified by the user in an input file named [[ConvertToHDF5#Input file (ConvertToHDF5Action.dat)|'''ConvertToHDF5Action.dat''']]. Several actions can be specified in the same input file and are processed sequentially by the ConvertToHDF5 application.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
The operations involving HDF5 files performed by ConvertToHDF5, each specified individually by an action, can be organized into [[#file management|file management]], [[#grid interpolation|grid interpolation]] and [[#format conversion|format conversion]]. These types and the respective actions are detailed in the next sections. &lt;br /&gt;
&lt;br /&gt;
The input file specification for each action can be found below in the [[#Input file (ConvertToHDF5Action.dat)|Input file (ConvertToHDF5Action.dat)]] section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===File management===&lt;br /&gt;
&lt;br /&gt;
====Glue files====&lt;br /&gt;
This action consists in joining, or gluing, into a single HDF5 file two or more HDF5 files that have the same HDF5 data groups and refer to time periods which come in sequence. Both sets of 2D and sets of 3D HDF5 files can be glued.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Glue MOHID Water results files from several runs produced in continuous running of the model, for reasons of storage space economy. It can also be used to join data from other origins (e.g. results of meteorological models) as long as the HDF5 format is the one supported by MOHID Water.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 files to be glued. &amp;quot;Grid&amp;quot; and &amp;quot;Results&amp;quot; data groups should be equal in all these files.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with glued &amp;quot;Results&amp;quot; data. &amp;quot;Residual&amp;quot; and &amp;quot;Statistics&amp;quot; HDF5 data groups are not copied to the output file since they are time period specific (potentially different values occur in each file). General statistics can be calculated for the glued HDF5 file data using the [[HDF5Statistics]] tool.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#GLUES HDF5 FILES|GLUES HDF5 FILES]].&lt;br /&gt;
&lt;br /&gt;
===Grid interpolation===&lt;br /&gt;
&lt;br /&gt;
====Interpolate files====&lt;br /&gt;
This action performs the conversion of data in one HDF5 file existing on one 2D or 3D spatial grid to another 2D or 3D spatial grid, creating a new HDF5 file. The interpolation is performed only for the data located in a time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
The HDF5 file containing data to be interpolated is called the '''father file'''.&lt;br /&gt;
&lt;br /&gt;
In the case of 3D interpolation the application first conducts the horizontal grid interpolation&lt;br /&gt;
(keeping the father geometry) and only afterwards conducts the vertical interpolation (from the father geometry to the new geometry).&lt;br /&gt;
&lt;br /&gt;
Several types of 2D interpolation are available: bilinear, 2D spline and triangulation.&lt;br /&gt;
For vertical interpolation (used in 3D interpolation) several polynomial degrees can be supplied.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file with data for forcing or providing initial conditions for a MOHID Water model, e.g. a meteorological forcing file.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
For 2D/3D interpolation:&lt;br /&gt;
&lt;br /&gt;
- father HDF5 file;&lt;br /&gt;
&lt;br /&gt;
- father horizontal data grid, in a grid data file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- new horizontal data grid, in a grid data file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
For 3D interpolation also needed:&lt;br /&gt;
&lt;br /&gt;
- father vertical geometry, in a geometry file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- new vertical geometry, in a geometry file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- auxiliary horizontal data grid, in a grid data file in the format supported by MOHID; this file is used for horizontal grid interpolation in 3D interpolation operations.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with interpolated data. In the case of 3D interpolation an auxiliary HDF5 file with the result of the horizontal grid interpolation is also produced, which can be inspected to check whether this operation was performed well.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#INTERPOLATE GRIDS|INTERPOLATE GRIDS]].&lt;br /&gt;
&lt;br /&gt;
====Patch files====&lt;br /&gt;
This action consists in performing an interpolation of HDF5 data between grids, as in the action [[#Interpolate files|Interpolate files]], but considering more than one HDF5 file containing data to be interpolated to the new grid, together with a priority scale. The interpolation is performed only for the data located in the time window specified by the user. The present version of this action operates only on 2D data.&lt;br /&gt;
&lt;br /&gt;
Each HDF5 file containing data to be interpolated is called a '''father file''' and has a user-attributed '''priority level''' to be respected in the interpolation process: for each new grid cell the ConvertToHDF5 application will look for data first in the Level 1 father file and, only if those data are missing, will it look for data in the Level 2 file, proceeding to higher level files if no data are found subsequently.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
To obtain an HDF5 file with data from several HDF5 files, each containing data with a different spatial resolution and only for a specific part of the new grid. This is, for instance, the case when preparing a best-resolution meteorological HDF5 file for forcing MOHID Water from several meteorological model domains with different spatial resolutions and spans, since the best resolution data is not available for all new grid cells.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
The new horizontal data grid, in a grid data file in the format supported by MOHID, and for each father file:&lt;br /&gt;
&lt;br /&gt;
- level of priority: 1 = maximum priority, priority decreases with increasing level value;&lt;br /&gt;
&lt;br /&gt;
- data grid, in the form of a grid data file in the format supported by MOHID.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with patched data.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#PATCH HDF5 FILES|PATCH HDF5 FILES]].&lt;br /&gt;
&lt;br /&gt;
===Format conversion===&lt;br /&gt;
&lt;br /&gt;
====Meteorological model data====&lt;br /&gt;
Mohid does not simulate the atmosphere explicitly, but needs information about atmospheric properties in time and space. This requires that atmospheric properties are supplied to MOHID Water in supported formats, which can be derived from meteorological data in HDF5 format. Because the results of meteorological models are accessed in different formats, conversion is required. &lt;br /&gt;
&lt;br /&gt;
The formats currently convertible to HDF5 by ConvertToHDF5 include MM5 and ERA40. These are succinctly detailed in the next sections.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''ERA40''=====&lt;br /&gt;
This format refers to the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year re-analysis results, accessible at http://data.ecmwf.int/data/d/era40_daily/. Data is available for several meteorological variables, at a periodicity of at most 6 hours, for the period from 1957-09-01 to 2002-08-31. &lt;br /&gt;
&lt;br /&gt;
ERA40 data files are supplied by ECMWF in NetCDF format with a user-customized time window, periodicity (time step ranging from 6 hours to a day) and set of meteorological properties. The ERA40 meteorological properties recognized by MOHID are presented below together with the corresponding MOHID names: &lt;br /&gt;
&lt;br /&gt;
 ---ERA40 NAME---         ---MOHID NAME---&lt;br /&gt;
   sshf                     sensible heat                &lt;br /&gt;
   slhf                     latent heat                  &lt;br /&gt;
   msl                      atmospheric pressure &lt;br /&gt;
   tcc                      cloud cover &lt;br /&gt;
   p10u                     wind velocity X&lt;br /&gt;
   p10v                     wind velocity Y&lt;br /&gt;
   p2t                      air temperature&lt;br /&gt;
   ewss                     wind stress X&lt;br /&gt;
   nsss                     wind stress Y&lt;br /&gt;
&lt;br /&gt;
The standard ConvertToHDF5 action is to convert to HDF5 the data for every MOHID Water recognized property available in the ERA40 file, producing an individual HDF5 file per property. The name of each generated HDF5 file includes the identifier of the ERA40 meteorological property it contains.&lt;br /&gt;
&lt;br /&gt;
Alternatively, ConvertToHDF5 can write to a single ASCII file the heading information for each meteorological variable contained in the original ERA40 file.&lt;br /&gt;
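As an illustration of the one-file-per-property output described above, the sketch below derives one output file name per recognized ERA40 variable. The output_names helper and the underscore naming scheme are hypothetical; only the root-name-plus-identifier idea comes from the text.&lt;br /&gt;

```python
# ERA40 identifiers recognized by MOHID (from the table above) and a
# sketch of deriving one output HDF5 file name per property.
ERA40_TO_MOHID = {
    "sshf": "sensible heat",       "slhf": "latent heat",
    "msl":  "atmospheric pressure", "tcc": "cloud cover",
    "p10u": "wind velocity X",     "p10v": "wind velocity Y",
    "p2t":  "air temperature",
    "ewss": "wind stress X",       "nsss": "wind stress Y",
}

def output_names(root, available_vars):
    # Keep only variables MOHID recognizes; one HDF5 file per variable,
    # named from the root plus the ERA40 identifier (scheme assumed).
    return [root + "_" + var + ".hdf5"
            for var in available_vars if var in ERA40_TO_MOHID]

names = output_names("era40", ["p2t", "msl", "unknown_var"])
# the unrecognized variable is skipped; two files are named
```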
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file with data suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
ERA40 NetCDF file.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
One HDF5 file for each meteorological property contained in the original NetCDF file.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT ERA40 FORMAT|CONVERT ERA40 FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''MM5''=====&lt;br /&gt;
This format relates to the output file format of the Fifth-Generation NCAR / Penn State Mesoscale Model (MM5). Almost every atmospheric property needed by MOHID Water is present in MM5 output files, enabling prediction simulations with MOHID Water when access to MM5 forecast files is available.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts MM5 results files from the original format to HDF5 format, allowing the easy use of these results in the MOHID framework. Conversion is only performed for the MM5 properties and the time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
Besides the conversion, the application can calculate some properties not contained in&lt;br /&gt;
the MM5 files using the available information: these are wind stress, relative humidity and precipitation.&lt;br /&gt;
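The exact formulations used by the converter are in the MOHID source code; as an indication of the kind of calculation involved, the sketch below applies a common bulk formula for wind stress from the 10 m wind components. The density and drag coefficient values are typical assumptions, not values taken from ConvertToHDF5.&lt;br /&gt;

```python
import math

# Bulk-formula sketch for wind stress from 10 m wind components:
# tau = rho_air * Cd * |U| * U. Illustration only; the constants and
# the exact formulation in ConvertToHDF5 may differ.
RHO_AIR = 1.225   # kg/m^3, near-surface air density (assumed)
CD = 1.3e-3       # dimensionless drag coefficient (typical open-sea value)

def wind_stress(u10, v10):
    speed = math.hypot(u10, v10)           # wind modulus |U|
    tau_x = RHO_AIR * CD * speed * u10     # N/m^2
    tau_y = RHO_AIR * CD * speed * v10
    return tau_x, tau_y

tx, ty = wind_stress(10.0, 0.0)  # stress aligned with a pure-X wind
```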
&lt;br /&gt;
To complete the conversion, the horizontal grid information of the MM5 results is required; it is available in special TERRAIN files.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Produce HDF5 meteorological data usable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
MM5 results file to convert and MM5 TERRAIN file. The TERRAIN file supplies the MM5 results grid information. &lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
An HDF5 file with MM5 results and a grid data file in MOHID format with the MM5 grid information.&lt;br /&gt;
This last file can be used to interpolate the MM5 data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]), for instance to produce an HDF5 file suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Compilation:'''&lt;br /&gt;
&lt;br /&gt;
Caution! The ConvertToHDF5 executable must be compiled with the [[Big-endian little-endian|Big-Endian]] option set (see compatibility in the project's settings).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT MM5 FORMAT|CONVERT MM5 FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''Aladin''=====&lt;br /&gt;
This format relates to Aladin meteorological model results. Some of the atmospheric properties needed by MOHID Water are present in Aladin output files, enabling prediction simulations with MOHID Water when access to Aladin forecast files is available.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts Aladin results files from the original format to HDF5 format, allowing the easy use of these results in the MOHID framework. Conversion is only performed for the Aladin properties and the time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Produce HDF5 meteorological data usable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Aladin netcdf results file to convert.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
An HDF5 file with Aladin results and a grid data file in MOHID format with pseudo-information on the Aladin grid: a fake orography of 100 m depth is created.&lt;br /&gt;
This last file can be used to interpolate the Aladin data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]), for instance to produce an HDF5 file suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Compilation:'''&lt;br /&gt;
&lt;br /&gt;
Caution! The ConvertToHDF5 executable must be compiled with the [[Big-endian little-endian|Big-Endian]] option set (see compatibility in the project's settings).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT ALADIN FORMAT|CONVERT ALADIN FORMAT]].&lt;br /&gt;
&lt;br /&gt;
====Ocean model data====&lt;br /&gt;
Ocean model data, available in diverse formats, can be used by MOHID Water to specify boundary conditions (open ocean boundary and surface), initial conditions, or for validation. These uses require the model data to be in HDF5 format, so conversion is needed.&lt;br /&gt;
&lt;br /&gt;
Currently, the large-scale ocean model formats convertible to HDF5 by ConvertToHDF5 include MERCATOR.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''MERCATOR''=====&lt;br /&gt;
MERCATOR data files are supplied in NetCDF format with a user-customized spatial window and periodicity. Water level and water property (temperature and salinity) data are available in type T files, velocity component u data in type U files, and velocity component v data in type V files. The type of data in a specific MERCATOR file is generally indicated in the file name.&lt;br /&gt;
&lt;br /&gt;
The standard ConvertToHDF5 action is to convert to HDF5 the data referring to temperature, salinity, water level, component u of velocity and component v of velocity.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain HDF5 MERCATOR data usable for forcing or validation of MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
NetCDF MERCATOR results data files and NetCDF MERCATOR grid data files. One grid data file of each type (T, U and V) should be provided. These are generally supplied by the MERCATOR services together with the results files.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
One HDF5 file containing all properties from the recognized set (temperature, salinity, water level, velocity u and velocity v), plus the corresponding grid data and geometry files, containing respectively the horizontal grid and the vertical discretization of the HDF5 file. The grid data and geometry files can be used afterwards to interpolate the MERCATOR data to another grid and geometry (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT MERCATOR FORMAT|CONVERT MERCATOR FORMAT]].&lt;br /&gt;
&lt;br /&gt;
====Climatological data====&lt;br /&gt;
Climatological data can be used in MOHID Water to specify boundary conditions (open ocean boundary and surface), initial conditions, or for validation, when more realistic data (measurements or model results) are unavailable. These data are generally supplied by producers in formats not readily usable by MOHID Water, which justifies the existence of a conversion tool.&lt;br /&gt;
&lt;br /&gt;
Two climatological data format conversions are implemented in ConvertToHDF5: Levitus ocean data and Hellerman Rosenstein meteorological data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''Levitus''=====&lt;br /&gt;
The Levitus climatology provides results for water temperature and salinity.&lt;br /&gt;
The ConvertToHDF5 action converts the climatological data for the properties and spatial window requested by the user. &lt;br /&gt;
Typically, it requires 3 steps to complete the task:&lt;br /&gt;
&lt;br /&gt;
- convert Levitus format &lt;br /&gt;
&lt;br /&gt;
- extrapolate the data to the whole Levitus domain (required to avoid non-coincident coastlines) &lt;br /&gt;
&lt;br /&gt;
- interpolate to the model grid (bathymetry)&lt;br /&gt;
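The three steps can be chained as consecutive blocks of the same ConvertToHDF5Action.dat file. A schematic sketch follows, assuming the extrapolation step is handled through the EXTRAPOLATE_2D keyword of the INTERPOLATE GRIDS action; see the keyword reference below for the actual keywords of each action.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT LEVITUS FORMAT&lt;br /&gt;
 ... (conversion keywords)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : INTERPOLATE GRIDS&lt;br /&gt;
 EXTRAPOLATE_2D            : 1&lt;br /&gt;
 ... (interpolation keywords)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;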
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain climatological data in HDF5 format to use as boundary forcing and/or initial condition specification in MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Levitus climatological data files, one per property and per time period (e.g. a month).&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with Levitus climatological data, grid data file with the horizontal&lt;br /&gt;
grid of the data and a geometry file with vertical discretization of the data (MOHID formats).&lt;br /&gt;
The grid data and the geometry files can be used to interpolate the climatological data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT LEVITUS FORMAT|CONVERT LEVITUS FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''Hellerman Rosenstein''=====&lt;br /&gt;
This is a meteorological climatology providing wind stress. There is one file per wind stress component. Since the data refer to surface values, it is a 2D field.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts the climatological data for the properties and spatial window provided by the user.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain climatological data in HDF5 format to use as meteorological forcing in MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Hellerman Rosenstein climatological data ASCII files, one per wind stress component.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with Hellerman Rosenstein climatological data and grid data file with the horizontal&lt;br /&gt;
grid of the climatological data. This grid data file can be used to interpolate the climatological data from the original horizontal grid to a new grid (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT HELLERMAN ROSENSTEIN ASCII|CONVERT HELLERMAN ROSENSTEIN ASCII]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''World Ocean Atlas 2005''=====&lt;br /&gt;
The World Ocean Atlas (WOA) 2005 climatology provides results for water temperature, salinity and several water quality and biology properties.&lt;br /&gt;
&lt;br /&gt;
Description, Action and Input Files are described in a separate page: [[ConvertToHDF5 WOA2005]].&lt;br /&gt;
&lt;br /&gt;
==Input file (ConvertToHDF5Action.dat)==&lt;br /&gt;
===General structure===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt; (block containing instructions for running a specific action) &lt;br /&gt;
 ACTION                    : ... (intended action)&lt;br /&gt;
 ... (action specific instructions)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : ...&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GLUES HDF5 FILES===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : GLUES HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 3D_FILE                   : 0/1 (0 = 2D file, 1 = 3D file)&lt;br /&gt;
 &lt;br /&gt;
 TIME_GROUP                : ... (Default=&amp;quot;Time&amp;quot;. Other option: &amp;quot;SurfaceTime&amp;quot;.)&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP                : ... (Default=&amp;quot;Results&amp;quot;. Other options: &amp;quot;Residual&amp;quot;, &amp;quot;SurfaceResults&amp;quot;.)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 &lt;br /&gt;
 (block of HDF5 data files)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_list&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of HDF5 file with data to be included in glue, one per line, at least two files)&lt;br /&gt;
 ...                      &lt;br /&gt;
 &amp;lt;&amp;lt;end_list&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===INTERPOLATE GRIDS===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION     : ... (type of horizontal interpolation: 1 = Bilinear, 2 = Spline2D,&lt;br /&gt;
                                  3 = Triangulation)&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION_WINDOW      : ... ... ... ... (2D spatial window to consider for interpolation: &lt;br /&gt;
                                              Xmin Ymin Xmax Ymax; default = all domain)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D           : 0/1 (0 = 2D interpolation, 1 = 3D interpolation)&lt;br /&gt;
 &lt;br /&gt;
 FATHER_FILENAME           : ... (path/name of input HDF5 file with data to be interpolated)&lt;br /&gt;
 FATHER_GRID_FILENAME      : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization of input HDF5 file)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of output HDF5 file to be created)&lt;br /&gt;
 NEW_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for output HDF5 file)&lt;br /&gt;
 &lt;br /&gt;
 EXTRAPOLATE_2D            : 0/1/2/3/4/5 (2D extrapolation: 0=no extrapolation, 1=medium&lt;br /&gt;
                                      triangulation, 2=high triangulation, &lt;br /&gt;
                                      3=nearest neighbour, 4=nearest cell, &lt;br /&gt;
                                      5=constant value)&lt;br /&gt;
 &lt;br /&gt;
 EXTRAPOLATE_VALUE         : ... (constant value to extrapolate to when EXTRAPOLATE_2D is&lt;br /&gt;
                                   set to constant value (5))&lt;br /&gt;
 &lt;br /&gt;
 DO_NOT_BELIEVE_MAP        : 0/1 (0=consider input HDF5 file map, 1=do not consider input HDF5&lt;br /&gt;
                                  file map)&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP                : ... (name of base group of HDF5 variables containing data to be &lt;br /&gt;
                                  interpolated; default is &amp;quot;/Results&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (if INTERPOLATION3D : 1 also required:)&lt;br /&gt;
 FATHER_GEOMETRY           : ... (path/name of file (MOHID format) with vertical discretization&lt;br /&gt;
                                  of input HDF5 file)&lt;br /&gt;
 NEW_GEOMETRY              : ... (path/name of file (MOHID format) with vertical discretization&lt;br /&gt;
                                  intended for output HDF5 file)&lt;br /&gt;
 POLI_DEGREE               : 1/... (degree of vertical interpolation: 1=linear, ...)&lt;br /&gt;
 &lt;br /&gt;
 AUX_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                   discretization intended for auxiliary output HDF5 file;&lt;br /&gt;
                                   default is file provided in NEW_GRID_FILENAME)&lt;br /&gt;
 &lt;br /&gt;
 AUX_OUTPUTFILENAME        : ... (path/name of auxiliary output HDF5 file to contain result&lt;br /&gt;
                                   of horizontal grid interpolation)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the file indicated in AUX_GRID_FILENAME can differ from the one indicated in&lt;br /&gt;
   NEW_GRID_FILENAME in terms of bathymetry, while the horizontal grid should, commonly, be the&lt;br /&gt;
   same: the altered bathymetry can be used to extend the water column in the original data so&lt;br /&gt;
   that the vertical interpolation is performed more easily;&lt;br /&gt;
 &lt;br /&gt;
 - in case of INTERPOLATION3D : 1, ConvertToHDF5 can generate new versions of the bathymetry&lt;br /&gt;
   which are consistent with the geometry definition (extension is '.new'); up to three&lt;br /&gt;
   bathymetries may be changed, referring to the father grid, the new grid and the aux grid&lt;br /&gt;
   (the same bathymetry is not altered twice); although the new and aux grids are initially the&lt;br /&gt;
   same, they can end up different because of bathymetry changes;&lt;br /&gt;
 &lt;br /&gt;
 - in case the new geometry is 2D and father geometry is 3D then POLI_DEGREE : 1 &lt;br /&gt;
   (linear interpolation) should be used;&lt;br /&gt;
 &lt;br /&gt;
 - EXTRAPOLATE_2D : 1/2/3/4/5 should be considered if the coastline is not expected to be&lt;br /&gt;
   coincident in the father and new grids, to avoid a lack of data in the interpolation&lt;br /&gt;
   process; extrapolation is performed for all cells, including land cells;&lt;br /&gt;
 &lt;br /&gt;
 - in case of DO_NOT_BELIEVE_MAP : 1 the application generates a map based on the cells where&lt;br /&gt;
   interpolation results are available; consequently, if EXTRAPOLATE_2D : 1/2/3/4/5 is used,&lt;br /&gt;
   the grid in AUX_GRID_FILENAME should have no land cells, so that the new map is consistent&lt;br /&gt;
   with the result of the extrapolation and errors are avoided, especially if&lt;br /&gt;
   INTERPOLATION3D : 1 is considered.&lt;br /&gt;
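As background for TYPE_OF_INTERPOLATION : 1, the sketch below shows bilinear interpolation on a unit cell. This is an illustration of the interpolation idea only; the ConvertToHDF5 implementation (in Fortran) handles general curvilinear grids.&lt;br /&gt;

```python
# Bilinear interpolation on a unit cell: f is known at the four
# corners (0,0), (1,0), (0,1), (1,1); (x, y) lies inside the cell.
# This is the idea behind TYPE_OF_INTERPOLATION : 1 (illustration only).

def bilinear(f00, f10, f01, f11, x, y):
    return (f00 * (1 - x) * (1 - y) + f10 * x * (1 - y)
            + f01 * (1 - x) * y + f11 * x * y)

# At a corner the interpolant reproduces the corner value;
# at the cell centre it is the mean of the four corner values.
centre = bilinear(1.0, 2.0, 3.0, 4.0, 0.5, 0.5)  # 2.5
```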
&lt;br /&gt;
===PATCH HDF5 FILES===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : PATCH HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION     : ... (type of interpolation: 3 = Triangulation, default and only&lt;br /&gt;
                                  one implemented)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 &lt;br /&gt;
 (block for each father HDF5 file, should be at least two)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                     : ... (integer priority level: 1 = highest, increase for lower&lt;br /&gt;
                                  priority)&lt;br /&gt;
 FATHER_FILENAME           : ... (path/name of input HDF5 file with data to be interpolated)&lt;br /&gt;
 FATHER_GRID_FILENAME      : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization of input HDF5 file)&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of output HDF5 file to be created)&lt;br /&gt;
 NEW_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for output HDF5 file)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===CONVERT ERA40 FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ERA40 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : ... (path/name of ERA40 NetCDF file)&lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
                                 (root of name for all files produced)&lt;br /&gt;
 &lt;br /&gt;
 CONVERT_TO_ASCII          : 0/1 (1 = convert variable heading info to ASCII file; 0 = default)&lt;br /&gt;
 CONVERT_TO_HDF5           : 0/1 (1 = convert to HDF5 file; 0 = default)&lt;br /&gt;
 GRIDTO180                 : 0/1 (1 = convert grid from [0 360] to [-180 180], 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 XX_VARIABLE               : ... (name of longitude variable in the input file: usual name &lt;br /&gt;
                                  is &amp;quot;longitude&amp;quot;)&lt;br /&gt;
 YY_VARIABLE               : ... (name of latitude variable in the input file: usual name &lt;br /&gt;
                                   is &amp;quot;latitude&amp;quot;)&lt;br /&gt;
 TIME_VARIABLE             : ... (name of time variable in the input file: usual name is&lt;br /&gt;
                                  &amp;quot;time&amp;quot;)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - either CONVERT_TO_ASCII : 1 or CONVERT_TO_HDF5 : 1 must be chosen for any action to be&lt;br /&gt;
 performed by ConvertToHDF5;&lt;br /&gt;
 &lt;br /&gt;
 - when CONVERT_TO_HDF5 : 1 an HDF5 file is produced for every variable contained in the&lt;br /&gt;
 original ERA40 file; the name of each file is composed of the name indicated in OUTPUTFILENAME&lt;br /&gt;
 concatenated with the ERA40 variable identifier;&lt;br /&gt;
 &lt;br /&gt;
 - the XX_VARIABLE, YY_VARIABLE and TIME_VARIABLE keywords should generally be set to&lt;br /&gt;
 &amp;quot;longitude&amp;quot;, &amp;quot;latitude&amp;quot; and &amp;quot;time&amp;quot;, respectively; these names were&lt;br /&gt;
 made keywords only to keep the application robust to future variable name&lt;br /&gt;
 changes.&lt;br /&gt;
&lt;br /&gt;
===CONVERT MM5 FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MM5 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : ... (path/name of MM5 file)&lt;br /&gt;
 TERRAIN_FILENAME          : ... (path/name of MM5 TERRAIN file)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data file with horizontal grid of MM5 data&lt;br /&gt;
                                  to be created)&lt;br /&gt;
 &lt;br /&gt;
 COMPUTE_WINDSTRESS        : 0/1 (1 = compute and write wind stress field; 0 = default)&lt;br /&gt;
 COMPUTE_RELATIVE_HUMIDITY : 0/1 (1 = compute and write relative humidity field; 0 = default)&lt;br /&gt;
 COMPUTE_PRECIPITATION     : 0/1 (1 = compute and write precipitation field; 0 = default)&lt;br /&gt;
 COMPUTE_WINDMODULUS       : 0/1 (1 = compute wind modulus; 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 WRITE_XYZ                 : 0/1 (1 = write xyz center grid cells; 0 = default)&lt;br /&gt;
 WRITE_TERRAIN             : 0/1 (1 = write MM5 TERRAIN fields; 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
  &lt;br /&gt;
 (block of MM5 properties to convert)&lt;br /&gt;
 &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;&lt;br /&gt;
 ... (name of MM5 property to convert to HDF5 format, one per line)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the name of each MM5 property to convert in &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;...&amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt; block must&lt;br /&gt;
 conform to the MOHID designation specified in code of ModuleGlobalData; the correspondence is &lt;br /&gt;
 the following (see [[Module_InterfaceWaterAir]] for a more detailed explanation).&lt;br /&gt;
 &lt;br /&gt;
           ---MM5 NAME---    ---MOHID NAME---&lt;br /&gt;
             T2             air temperature&lt;br /&gt;
             PSTARCRS       atmospheric pressure&lt;br /&gt;
             U10            wind velocity X&lt;br /&gt;
             V10            wind velocity Y&lt;br /&gt;
             UST            wind shear velocity&lt;br /&gt;
             LHFLUX         latent heat&lt;br /&gt;
             SHFLUX         sensible heat&lt;br /&gt;
             SWDOWN         solar radiation&lt;br /&gt;
             LWDOWN         infrared radiation&lt;br /&gt;
             SWOUT          top outgoing shortwave radiation&lt;br /&gt;
             LWOUT          top outgoing longwave radiation&lt;br /&gt;
             SOIL T 1       soil temperature layer 1&lt;br /&gt;
             SOIL T 2       soil temperature layer 2&lt;br /&gt;
             SOIL T 3       soil temperature layer 3&lt;br /&gt;
             SOIL T 4       soil temperature layer 4&lt;br /&gt;
             SOIL T 5       soil temperature layer 5&lt;br /&gt;
             SOIL T 6       soil temperature layer 6&lt;br /&gt;
             Q2             2-meter mixing ratio&lt;br /&gt;
             TSEASFC        sea water temperature&lt;br /&gt;
             PBL HGT        PBL height&lt;br /&gt;
             PBL REGIME     PBL regime&lt;br /&gt;
             RAIN CON       accumulated convective precipitation        (cm)&lt;br /&gt;
             RAIN NON       accumulated non-convective precipitation    (cm)&lt;br /&gt;
             GROUND T       ground temperature&lt;br /&gt;
             RES TEMP       infinite reservoir slab temperature&lt;br /&gt;
             U              wind velocity X_3D&lt;br /&gt;
             V              wind velocity Y_3D&lt;br /&gt;
             W              wind velocity Z_3D&lt;br /&gt;
             T              air temperature_3D&lt;br /&gt;
             PP             atmospheric pressure_3D&lt;br /&gt;
             Q              mixing ratio_3D&lt;br /&gt;
             CLW            cloud water mixing ratio_3D&lt;br /&gt;
             RNW            rain water mixing ratio_3D&lt;br /&gt;
             ICE            cloud ice mixing ratio_3D&lt;br /&gt;
             SNOW           snow mixing ratio_3D&lt;br /&gt;
             RAD TEND       atmospheric radiation tendency_3D&lt;br /&gt;
&lt;br /&gt;
===CONVERT ALADIN FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ALADIN FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : aladin.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : aladin_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 !Put here the name of any Aladin NetCDF file; it is used only for grid data generation.&lt;br /&gt;
 INPUT_GRID_FILENAME      :   D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 (path to aladin netcdf file)\ALADIN_BULKIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the name of each Aladin property to convert in the &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;...&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt; block must conform to one of the following variables:&lt;br /&gt;
 &lt;br /&gt;
           ---ALADIN NAME---    ---MOHID NAME---&lt;br /&gt;
             soclotot            CloudCover_&lt;br /&gt;
             sohumrel            RelativeHumidity_&lt;br /&gt;
             sofluxir            NonSolarFlux_&lt;br /&gt;
             sosspres            AtmosphericPressure_&lt;br /&gt;
             sosolarf            SolarRadiation_&lt;br /&gt;
             sotemair            AirTemperature_&lt;br /&gt;
             sowinmod            WindModulus_&lt;br /&gt;
             sowaprec            Precipitation_&lt;br /&gt;
             sozotaux            WindStressX_&lt;br /&gt;
             sometauy            WindStressY_&lt;br /&gt;
             sowindu10           WindVelocityX_&lt;br /&gt;
             sowindv10           WindVelocityY_&lt;br /&gt;
&lt;br /&gt;
===CONVERT MERCATOR FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 READ_OPTION               : 1/2/3/4 (version of MERCATOR files)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : ... (path/name of geometry file with vertical discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 &lt;br /&gt;
 (if READ_OPTION : 1:)&lt;br /&gt;
 BASE_BULLETIN             : ...&lt;br /&gt;
 DATES_FILE                : ...&lt;br /&gt;
 NUM_DATES                 : ... &lt;br /&gt;
 &lt;br /&gt;
 (if READ_OPTION : 2/3:)&lt;br /&gt;
 INPUT_GRID_FILENAME       : ... (path/name of file with horizontal discretization of water&lt;br /&gt;
                                  properties and water level data)&lt;br /&gt;
 (if READ_OPTION : 2:)&lt;br /&gt;
 INPUT_GRID_FILENAME_U     : ... (path/name of file with horizontal discretization of velocity&lt;br /&gt;
                                  component U data)&lt;br /&gt;
 INPUT_GRID_FILENAME_V     : ... (path/name of file with horizontal discretization of velocity&lt;br /&gt;
                                  component V data)&lt;br /&gt;
 &lt;br /&gt;
 (if READ_OPTION : 3:)&lt;br /&gt;
 INPUT_BATHY_FILENAME      : ... (path/name of file with bathymetry)&lt;br /&gt;
     &lt;br /&gt;
 (if READ_OPTION : 3/4:)&lt;br /&gt;
 CALC_BAROTROPIC_VEL       : 0/1 (1 = calculate barotropic velocity, 0 = not calculate; &lt;br /&gt;
                                  default = 0)&lt;br /&gt;
 &lt;br /&gt;
 (if CALC_BAROTROPIC_VEL : 1 and READ_OPTION : 3:)&lt;br /&gt;
 INPUT_MESH_ZGRID_FILENAME : ... (path/name of file with information about layer thicknesses)&lt;br /&gt;
 &lt;br /&gt;
 (block of MERCATOR data files)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of MERCATOR NetCDF data file, one per line, can be several)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
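The CALC_BAROTROPIC_VEL keyword above enables a depth-averaged velocity calculation. A minimal sketch of such a thickness-weighted average follows (an illustration only, not MOHID code; layer thicknesses are assumed known, e.g. from the mesh z-grid file):&lt;br /&gt;

```python
# Sketch of a barotropic (depth-averaged) velocity: the thickness-
# weighted mean of the layer velocities over the water column.

def barotropic_velocity(layer_velocities, layer_thicknesses):
    total = sum(layer_thicknesses)
    if total == 0.0:
        raise ValueError("water column has zero thickness")
    weighted = sum(v * h for v, h in zip(layer_velocities, layer_thicknesses))
    return weighted / total

# Two layers: 10 m moving at 0.5 m/s over 30 m moving at 0.1 m/s.
u_bar = barotropic_velocity([0.5, 0.1], [10.0, 30.0])  # 0.2 m/s
```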
&lt;br /&gt;
===CONVERT LEVITUS FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT LEVITUS FORMAT&lt;br /&gt;
  &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : ... (path/name of geometry file with vertical discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY               : ... (periodicity of Levitus data: &amp;quot;monthly&amp;quot;/&amp;quot;annual&amp;quot;; default is&lt;br /&gt;
                                  &amp;quot;monthly&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 SPATIAL_RESOLUTION        : ... (spatial resolution (degrees) of horizontal Levitus grid)&lt;br /&gt;
 &lt;br /&gt;
 FILL_VALUE                : ... (real value identifier for missing data; default is &lt;br /&gt;
                                  &amp;quot;-99.999900&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (definition of spatial window to be present in output HDF5 file)&lt;br /&gt;
 LOWER_LEFT_CORNER         : ... ... (longitude and latitude (degrees) of south west corner)&lt;br /&gt;
 UPPER_RIGHT_CORNER        : ... ... (longitude and latitude (degrees) of north east corner)&lt;br /&gt;
 &lt;br /&gt;
 (block for each water property to be present in output HDF5 file, can be several)&lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : ... (name of property)&lt;br /&gt;
 ANNUAL_FILE               : ... (path/name of Levitus annual file)&lt;br /&gt;
 &lt;br /&gt;
 (block of Levitus data files)&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of Levitus data file (e.g. a monthly data file), one per line, can be several)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===CONVERT HELLERMAN ROSENSTEIN ASCII===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
  &lt;br /&gt;
 PERIODICITY               : ... (periodicity of Hellerman Rosenstein data: &amp;quot;monthly&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 SPATIAL_RESOLUTION        : ... (spatial resolution (degrees) of horizontal Hellerman&lt;br /&gt;
                                  Rosenstein grid: default and only allowed value is &amp;quot;2.&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 FILL_VALUE                : ... (real value identifier for missing data; default is &lt;br /&gt;
                                  &amp;quot;-99.999900&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (definition of spatial window to be present in output HDF5 file)&lt;br /&gt;
 LOWER_LEFT_CORNER         : ... ... (longitude and latitude (degrees) of south west corner)&lt;br /&gt;
 UPPER_RIGHT_CORNER        : ... ... (longitude and latitude (degrees) of north east corner)&lt;br /&gt;
   &lt;br /&gt;
 (block for each Hellerman Rosenstein data file)&lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : ... (name of property: &amp;quot;wind stress X&amp;quot;/&amp;quot;wind stress Y&amp;quot;)&lt;br /&gt;
 FILE                      : ... (path/name Hellerman Rosenstein file)&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Samples==&lt;br /&gt;
All sample files are named ''ConvertToHDF5Action.dat''.&lt;br /&gt;
&lt;br /&gt;
===Glue several MOHID(.hdf5) files===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : GLUES HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : SurfaceHydro_OP.hdf5&lt;br /&gt;
  &lt;br /&gt;
 &amp;lt;&amp;lt;begin_list&amp;gt;&amp;gt;&lt;br /&gt;
 D:\Projectos\SurfaceHydrodynamic_21.hdf5&lt;br /&gt;
 D:\Projectos\SurfaceHydrodynamic_22.hdf5&lt;br /&gt;
 &amp;lt;&amp;lt;end_list&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interpolate 2D MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 FATHER_FILENAME          : D:\Projectos\MohidRun\test\res\Lagrangian_1.hdf5 &lt;br /&gt;
 OUTPUTFILENAME           : OilSpillThickness_GridRegular.hdf5&lt;br /&gt;
  &lt;br /&gt;
 START                    : 2006 6 21 17 22 30&lt;br /&gt;
 END                      : 2006 6 22 17 22 0&lt;br /&gt;
  &lt;br /&gt;
 FATHER_GRID_FILENAME     : D:\Projectos\MohidRun\GeneralData\batim\Tagus.dat_A&lt;br /&gt;
 NEW_GRID_FILENAME        : TagusConstSpacing.dat&lt;br /&gt;
  &lt;br /&gt;
 BASE_GROUP               : /Results/Oil/Data_2D&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interpolate 3D MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION   : 1&lt;br /&gt;
 FATHER_FILENAME         : D:\Projectos\MohidRun\test\res\Lagrangian_1.hdf5 &lt;br /&gt;
 OUTPUTFILENAME          : OilSpillThickness_GridRegular.hdf5&lt;br /&gt;
 &lt;br /&gt;
 START                   : 2006 6 21 17 22 30&lt;br /&gt;
 END                     : 2006 6 22 17 22 0&lt;br /&gt;
 &lt;br /&gt;
 FATHER_GRID_FILENAME    : D:\Projectos\MohidRun\GeneralData\batim\Tagus.dat_A&lt;br /&gt;
 NEW_GRID_FILENAME       : TagusConstSpacing.dat&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP              : /Results/Oil/Data_2D&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D         : 1&lt;br /&gt;
 FATHER_GEOMETRY         : D:\Projectos\MohidRun\test\data\Geometry_1.dat&lt;br /&gt;
 NEW_GEOMETRY            : TagusGeometry.dat&lt;br /&gt;
 AUX_GRID_FILENAME       : TagusConstSpacing.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME      : Aux_GridRegular.hdf5&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Patch several MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : PATCH HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION   : 3&lt;br /&gt;
 &lt;br /&gt;
 START                   : 2005 2 28 13 0 0&lt;br /&gt;
 END                     : 2005 3 1 13 0 0&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 3&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D1.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid1.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 2&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D2.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid2.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 1&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D3.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid3.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME          : MM5Forcing.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME       : K:\Simula\GeneralData\Batim\CostaPortuguesa.dat&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert an ERA40 file to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                  : CONVERT ERA40 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                : D:\Aplica\ERA40\1971ERA1973.nc&lt;br /&gt;
 OUTPUTFILENAME          : D:\Aplica\ERA40\1971ERA1973T2&lt;br /&gt;
 &lt;br /&gt;
 CONVERT_TO_ASCII        : 0&lt;br /&gt;
 CONVERT_TO_HDF5         : 1&lt;br /&gt;
 &lt;br /&gt;
 XX_VARIABLE             : longitude&lt;br /&gt;
 YY_VARIABLE             : latitude&lt;br /&gt;
 TIME_VARIABLE           : time&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert a MM5 file to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                  : CONVERT MM5 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\MMOUT_D3&lt;br /&gt;
 TERRAIN_FILENAME        : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\TERRAIN_D3&lt;br /&gt;
 OUTPUT_GRID_FILENAME    : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\grid3.dat&lt;br /&gt;
 OUTPUTFILENAME          : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\MM5_D3.hdf5&lt;br /&gt;
 &lt;br /&gt;
 COMPUTE_WINDSTRESS        : 0&lt;br /&gt;
 COMPUTE_RELATIVE_HUMIDITY : 1&lt;br /&gt;
 WRITE_XYZ                 : 0&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;&lt;br /&gt;
 solar radiation&lt;br /&gt;
 air temperature&lt;br /&gt;
 wind velocity X&lt;br /&gt;
 wind velocity Y&lt;br /&gt;
 sensible heat&lt;br /&gt;
 latent heat&lt;br /&gt;
 atmospheric pressure&lt;br /&gt;
 sea water temperature&lt;br /&gt;
 &amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert Mercator-Ocean(.nc) to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : Psy2v2r1v_R20060628/MercatorR20060628.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : Psy2v2r1v_R20060628/MercatorGridR20060628.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME : Psy2v2r1v_R20060628/MercatorGeometryR20060628.dat&lt;br /&gt;
 &lt;br /&gt;
 INPUT_GRID_FILENAME      : GridFiles/ist_meteog-gridT.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_U    : GridFiles/ist_meteog-gridU.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_V    : GridFiles/ist_meteog-gridV.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060621_R20060628.nc&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060622_R20060628.nc&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060623_R20060628.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert Levitus format to MOHID(.hdf5) and interpolate grid===&lt;br /&gt;
==== Convert ====&lt;br /&gt;
First convert the Levitus ASCII format to a raw HDF5 format:&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT LEVITUS FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : Levitus.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME : LevitusGeometry.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY              : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION       : 0.25&lt;br /&gt;
 FILL_VALUE               : -99.9999&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER        : -16.0  31&lt;br /&gt;
 UPPER_RIGHT_CORNER       :   1.   40&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : salinity&lt;br /&gt;
 ANNUAL_FILE              : DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s000hr.obj&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s001&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s002&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s003&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s004&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s005&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s006&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s007&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s008&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s009&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s010&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s011&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s012&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : temperature&lt;br /&gt;
 ANNUAL_FILE              : DataCenter\DadosBase\Ocean\Levitus\Data\Temp\t000hr.obj&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t001&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t002&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t003&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t004&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t005&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t006&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t007&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t008&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t009&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t010&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t011&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t012&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Extrapolate ====&lt;br /&gt;
Then extrapolate the data (still in the raw HDF5 format):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 &lt;br /&gt;
 FATHER_FILENAME          : Levitus.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : LeviTusAllPointsWithData.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME        : LevitusGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 START                    : -9999 1  1 0 0 0&lt;br /&gt;
 END                      : -9999 12 1 0 0 0&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D          : 1&lt;br /&gt;
 FATHER_GEOMETRY          : LevitusGeometry.dat&lt;br /&gt;
 NEW_GEOMETRY             : LevitusGeometry.dat&lt;br /&gt;
 AUX_GRID_FILENAME        : LevitusGrid.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME       : AuxLeviTusAllPointsWithData.hdf5&lt;br /&gt;
 &lt;br /&gt;
 POLI_DEGREE              : 3&lt;br /&gt;
 DO_NOT_BELIEVE_MAP       : 1&lt;br /&gt;
 EXTRAPOLATE_2D           : 2&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interpolate ====&lt;br /&gt;
Finally, interpolate to the final grid and geometry (same as the [[#Interpolate 3D MOHID(.hdf5) files to a new grid| Interpolate 3D sample]]):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 FATHER_FILENAME          : LeviTusAllPointsWithData.hdf5&lt;br /&gt;
 OUTPUTFILENAME           : CadizMonthlyLevitus.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 NEW_GRID_FILENAME        : Algarve0.02SigmaSmooth_V3_CartMoreLayers.dat&lt;br /&gt;
 &lt;br /&gt;
 START                    : -9999 1  1 0 0 0&lt;br /&gt;
 END                      : -9999 12 1 0 0 0&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D          : 1&lt;br /&gt;
 FATHER_GEOMETRY          : LevitusGeometry.dat&lt;br /&gt;
 NEW_GEOMETRY             : Geometry_1.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME       : AuxCadizMonthlyLevitus.hdf5&lt;br /&gt;
 AUX_GRID_FILENAME        : Aux12km.dat&lt;br /&gt;
 &lt;br /&gt;
 POLI_DEGREE              : 3&lt;br /&gt;
 DO_NOT_BELIEVE_MAP       : 1&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the program may construct a new bathymetry twice. Use this bathymetry only in the AUX_GRID_FILENAME keyword.&lt;br /&gt;
&lt;br /&gt;
===Convert Hellerman Rosenstein ASCII format to MOHID(.hdf5)  ===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : ClimatologicWindStress.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : ClimatologicWindStressGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY              : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION       : 2.&lt;br /&gt;
 FILL_VALUE               : -99.9999&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER        : -180  -90&lt;br /&gt;
 UPPER_RIGHT_CORNER       : 180  90&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : wind stress X&lt;br /&gt;
 FILE                     : D:\Aplica\Dados\Hellerman_Rosenstein\TAUXX.DAT&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : wind stress Y&lt;br /&gt;
 FILE                     : D:\Aplica\Dados\Hellerman_Rosenstein\TAUYY.DAT&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert ALADIN(.nc) format to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ALADIN FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : aladin.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : aladin_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 !Put here the name of any of the input NetCDF files; it is used only to generate the grid data.&lt;br /&gt;
 INPUT_GRID_FILENAME      :   D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKPRES_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKSOLAR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKTAIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKWIND_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_FLUXPRE_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_STRESSU_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_STRESSV_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_U10_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_V10_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKHUMI_OPASYMP_19723_20088.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== OceanColor modules compilation ==&lt;br /&gt;
Compiling the [[ConvertToHDF5]] tool with the OceanColor modules is more complicated than one might expect. A solution is proposed here for a release version using Compaq Visual Fortran 6.6c. The difficulties arise because C code is embedded through a Fortran interface and because extra libraries, such as HDF4, are required.&lt;br /&gt;
&lt;br /&gt;
=== Pre-requisites ===&lt;br /&gt;
&lt;br /&gt;
This is a list of prerequisites to successfully compile the tool:&lt;br /&gt;
*Compaq Visual Fortran 6.5 with patch 6.6c,&lt;br /&gt;
*VS .NET 2003 (Vc7 in particular),&lt;br /&gt;
*Hdf5 libraries ('''hdf5.lib''' '''hdf5_fortran.lib''' '''hdf5_hl.lib'''),&lt;br /&gt;
*Netcdf libraries ('''netcdf.lib''' '''netcdf_.lib'''),&lt;br /&gt;
*Hdf4 libraries ('''hd421.lib''', '''hm421.lib'''),&lt;br /&gt;
*szlib, zlib and jpeg libraries ('''szlib.lib''', '''zlib.lib''' and '''libjpeg.lib'''),&lt;br /&gt;
*the Fortran source files ('''ModuleConvertModisL2.F90 ModuleConvertModisL3.F90 ModuleConvertOceanColorL2.F90'''),&lt;br /&gt;
*the C source files and their Fortran interface files ('''readL2scan.c readL2Seadas.c''' and '''cdata.f crossp.f fgeonav.f''').&lt;br /&gt;
&lt;br /&gt;
=== CVF IDE configuration ===&lt;br /&gt;
# Configure everything as specified in [[Compiling with CVF]].&lt;br /&gt;
# Add the source files listed in the prerequisites above to the source files listing.&lt;br /&gt;
# Go to '''Tools--&amp;gt;Options...--&amp;gt;Directories'''. There, add '''$DOTNET2K3/Vc7/bin''' to the '''Executable files'''; '''$DOTNET2K3/Vc7/include''' and '''$DOTNET2K3/Vc7/PlatformSDK/include''' to the '''Include files'''; and finally, '''$DOTNET2K3/Vc7/lib''', '''$DOTNET2K3/Vc7/PlatformSDK/lib''' and '''$DOTNET2K3/Vc7/PlatformSDK/bin''' to the '''Library files'''.&lt;br /&gt;
# Go to '''Projects--&amp;gt;Settings--&amp;gt;Release--&amp;gt;Link--&amp;gt;Input'''. There, add the following libraries: '''netcdf.lib netcdf_.lib hd421.lib hm421.lib libjpeg.lib'''. (Make sure the hdf5 libraries as well as the szlib and zlib libraries are already mentioned).&lt;br /&gt;
&lt;br /&gt;
=== Troubleshooting ===&lt;br /&gt;
'''Q: I get unresolved external references during linkage, but I have all the libraries mentioned above included. What should I do?'''&lt;br /&gt;
&lt;br /&gt;
A: Unresolved external references can occur for two reasons:&lt;br /&gt;
#you didn't specify all the required libraries or all the paths for the default libraries, or&lt;br /&gt;
#[http://en.wikipedia.org/wiki/Name_decoration name mangling] problems. Run the [[dumpbin]] utility on the libraries to check which naming convention they use. If that's the problem, you need to obtain new libraries with the correct naming convention.&lt;br /&gt;
&lt;br /&gt;
That's it: you should now be able to build the [[ConvertToHDF5]] project successfully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Q: I got a message saying the entry point _NF_PUT_ATT_REAL@28 could not be located in netcdf.dll'''&lt;br /&gt;
&lt;br /&gt;
A: Copy the file netcdf.dll to the folder containing the executable.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://www.hdfgroup.org/ HDF5 Homepage]&lt;br /&gt;
*[http://www.hdfgroup.org/ HDF4 Homepage]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
*[[Module_Atmosphere]]&lt;br /&gt;
*[[Module_InterfaceWaterAir]]&lt;br /&gt;
*[[Coupling_Water-Atmosphere_User_Manual]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Tools]]&lt;br /&gt;
[[Category:Hdf5]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=ConvertToHDF5&amp;diff=1926</id>
		<title>ConvertToHDF5</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=ConvertToHDF5&amp;diff=1926"/>
				<updated>2009-05-12T12:57:28Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* CONVERT MERCATOR FORMAT */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''ConvertToHDF5''' is an application that performs several operations, called '''actions''', involving HDF5 files: conversion of data in other formats (e.g. NetCDF) to HDF5, grid interpolation, and concatenation of several files.&lt;br /&gt;
&lt;br /&gt;
Running options for this application are specified by the user in an input file named [[ConvertToHDF5#Input file (ConvertToHDF5Action.dat)|'''ConvertToHDF5Action.dat''']]. Several actions can be specified in the same input file; they are processed sequentially by the ConvertToHDF5 application.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
The operations on HDF5 files performed by ConvertToHDF5, each specified by an individual action, can be organized into [[#file management|file management]], [[#grid interpolation|grid interpolation]] and [[#format conversion|format conversion]]. These types and their respective actions are detailed in the next sections.&lt;br /&gt;
&lt;br /&gt;
The input file specification for each action can be found below in the [[#Input file (ConvertToHDF5Action.dat)|Input file (ConvertToHDF5Action.dat)]] section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===File management===&lt;br /&gt;
&lt;br /&gt;
====Glue files====&lt;br /&gt;
This action joins, or glues, into a single HDF5 file two or more HDF5 files that have the same HDF5 data groups and refer to consecutive time periods. Both 2D and 3D sets of HDF5 files can be glued.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Glue MOHID Water results files from several runs produced during continuous operation of the model, to save storage space. The action can also be used to join data from other origins (e.g. results of meteorological models) as long as the HDF5 format is the one supported by MOHID Water.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 files to be glued. &amp;quot;Grid&amp;quot; and &amp;quot;Results&amp;quot; data groups should be equal in all these files.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with glued &amp;quot;Results&amp;quot; data. &amp;quot;Residual&amp;quot; and &amp;quot;Statistics&amp;quot; HDF5 data groups are not copied to the output file since they are time period specific (different values potentially occur in each file). General statistics can be calculated for the glued HDF5 file data using the [[HDF5Statistics]] tool.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#GLUES HDF5 FILES|GLUES HDF5 FILES]].&lt;br /&gt;
&lt;br /&gt;
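A minimal input file for this action might look like the following sketch (file names are illustrative; the full keyword specification is given in the [[#GLUES HDF5 FILES|GLUES HDF5 FILES]] section):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : GLUES HDF5 FILES&lt;br /&gt;
 OUTPUTFILENAME           : GluedResults.hdf5&lt;br /&gt;
 &amp;lt;&amp;lt;begin_list&amp;gt;&amp;gt;&lt;br /&gt;
 Run_1.hdf5&lt;br /&gt;
 Run_2.hdf5&lt;br /&gt;
 &amp;lt;&amp;lt;end_list&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;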
===Grid interpolation===&lt;br /&gt;
&lt;br /&gt;
====Interpolate files====&lt;br /&gt;
This action converts data in an HDF5 file from one 2D or 3D spatial grid to another 2D or 3D spatial grid, creating a new HDF5 file. The interpolation is performed only for the data located in a time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
The HDF5 file containing data to be interpolated is called the '''father file'''.&lt;br /&gt;
&lt;br /&gt;
In the case of 3D interpolation the application first conducts the horizontal grid interpolation (keeping the father geometry) and only afterwards conducts the vertical interpolation (from the father geometry to the new geometry).&lt;br /&gt;
&lt;br /&gt;
Several types of 2D interpolation are available: bilinear, 2D spline and triangulation.&lt;br /&gt;
For the vertical interpolation (used in 3D interpolation) several polynomial degrees can be supplied.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file with data for forcing or providing initial conditions for a MOHID Water model, e.g. a meteorological forcing file.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
For 2D/3D interpolation:&lt;br /&gt;
&lt;br /&gt;
- father HDF5 file;&lt;br /&gt;
&lt;br /&gt;
- father horizontal data grid, in a grid data file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- new horizontal data grid, in a grid data file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
For 3D interpolation also needed:&lt;br /&gt;
&lt;br /&gt;
- father vertical geometry, in a geometry file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- new vertical geometry, in a geometry file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- auxiliary horizontal data grid, in a grid data file in the format supported by MOHID; this file is used for horizontal grid interpolation in 3D interpolation operations.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with interpolated data. In the case of 3D interpolation an auxiliary HDF5 file is also produced with the result of the horizontal grid interpolation, which can be inspected to check whether this operation was performed correctly.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#INTERPOLATE GRIDS|INTERPOLATE GRIDS]].&lt;br /&gt;
&lt;br /&gt;
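A sketch of a minimal 2D interpolation input file (paths and dates are illustrative; see the [[#INTERPOLATE GRIDS|INTERPOLATE GRIDS]] section for the full keyword list):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : INTERPOLATE GRIDS&lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 FATHER_FILENAME          : Father.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME     : FatherGrid.dat&lt;br /&gt;
 OUTPUTFILENAME           : Interpolated.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME        : NewGrid.dat&lt;br /&gt;
 START                    : 2006 6 21 0 0 0&lt;br /&gt;
 END                      : 2006 6 22 0 0 0&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;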
====Patch files====&lt;br /&gt;
This action performs an interpolation of HDF5 data between grids, as in the [[#Interpolate files|Interpolate files]] action, but considers more than one HDF5 file containing data to be interpolated to the new grid, together with a priority scale. The interpolation is performed only for the data located in the time window specified by the user. The present version of this action operates only on 2D data.&lt;br /&gt;
&lt;br /&gt;
Each HDF5 file containing data to be interpolated is called a '''father file''' and has a user-attributed '''priority level''' to be respected in the interpolation process: for each new grid cell the ConvertToHDF5 application will look for data first in the Level 1 father file; only if that data does not exist will it look for data in the Level 2 file, proceeding to higher level files if no data is found.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
To obtain an HDF5 file with data from several HDF5 files, each containing data with a different spatial resolution and covering only a specific part of the new grid. This is, for instance, the case when preparing a best-resolution meteorological HDF5 file for forcing MOHID Water from several meteorological model domains with different spatial resolutions and spans, since the best resolution data is not available for all new grid cells.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
The new horizontal data grid, in a grid data file in the format supported by MOHID, and for each father file:&lt;br /&gt;
&lt;br /&gt;
- level of priority: 1 = maximum priority, priority decreases with increasing level value;&lt;br /&gt;
&lt;br /&gt;
- data grid, in the form of a grid data file in the format supported by MOHID.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with patched data.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#PATCH HDF5 FILES|PATCH HDF5 FILES]].&lt;br /&gt;
&lt;br /&gt;
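A sketch of an input file with two father files, where Level 1 has priority (paths are illustrative; see the [[#PATCH HDF5 FILES|PATCH HDF5 FILES]] section for the full keyword list):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                  : PATCH HDF5 FILES&lt;br /&gt;
 TYPE_OF_INTERPOLATION   : 3&lt;br /&gt;
 START                   : 2005 2 28 13 0 0&lt;br /&gt;
 END                     : 2005 3 1 13 0 0&lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 1&lt;br /&gt;
 FATHER_FILENAME         : HighResolution.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : HighResolutionGrid.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 2&lt;br /&gt;
 FATHER_FILENAME         : LowResolution.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : LowResolutionGrid.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 OUTPUTFILENAME          : Patched.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME       : NewGrid.dat&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;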
===Format conversion===&lt;br /&gt;
&lt;br /&gt;
====Meteorological model data====&lt;br /&gt;
MOHID does not explicitly simulate the atmosphere, but needs information about atmospheric properties in time and space. This requires that atmospheric properties be supplied to MOHID Water in supported formats, which can be derived from meteorological data in HDF5 format. Because the results of meteorological models are accessed in different formats, conversion is required.&lt;br /&gt;
&lt;br /&gt;
The formats currently convertible to HDF5 by ConvertToHDF5 include MM5 and ERA40. These are succinctly detailed in the next sections.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''ERA40''=====&lt;br /&gt;
This format refers to the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year re-analysis results, accessed at http://data.ecmwf.int/data/d/era40_daily/. This data is available for several meteorological variables at a periodicity of up to 6 hours, for days in the period from 1957-09-01 to 2002-08-31.&lt;br /&gt;
&lt;br /&gt;
ERA40 data files are supplied by ECMWF in NetCDF format with a user-customized time window, periodicity (time step ranging from 6 hours to a day) and set of meteorological properties. The ERA40 meteorological properties recognized by MOHID are presented below together with the corresponding MOHID name:&lt;br /&gt;
&lt;br /&gt;
 ---ERA40 NAME---         ---MOHID NAME---&lt;br /&gt;
   sshf                     sensible heat                &lt;br /&gt;
   slhf                     latent heat                  &lt;br /&gt;
   msl                      atmospheric pressure &lt;br /&gt;
   tcc                      cloud cover &lt;br /&gt;
   p10u                     wind velocity X&lt;br /&gt;
   p10v                     wind velocity Y&lt;br /&gt;
   p2t                      air temperature&lt;br /&gt;
   ewss                     wind stress X&lt;br /&gt;
   nsss                     wind stress Y&lt;br /&gt;
&lt;br /&gt;
The standard ConvertToHDF5 action converts to HDF5 the data for every MOHID Water recognized property available in the ERA40 file, producing an individual HDF5 file for each property. The name of each generated HDF5 file includes the ERA40 identifier of the meteorological property it contains.&lt;br /&gt;
&lt;br /&gt;
Alternatively, ConvertToHDF5 can copy to a single ASCII file the heading information of each meteorological variable in the original ERA40 file.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file with data suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
ERA40 NetCDF file.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
One HDF5 file for each meteorological property contained in the original NetCDF file.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT ERA40 FORMAT|CONVERT ERA40 FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''MM5''=====&lt;br /&gt;
This format relates to the output file format of the Fifth-Generation NCAR / Penn State Mesoscale Model (MM5). Almost every atmospheric property needed by MOHID Water is present in MM5 output files, making it possible to run forecast simulations with MOHID Water whenever MM5 forecast files are available.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts MM5 results files from the original format to HDF5 format, allowing the easy use of these results in the MOHID framework. Conversion is only performed for the MM5 properties and the time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
Besides the conversion, the application can calculate some properties not contained in&lt;br /&gt;
the MM5 files using the available information: wind stress, relative humidity and precipitation.&lt;br /&gt;
&lt;br /&gt;
To complete the conversion, the horizontal grid information of the MM5 results is required; it is available in dedicated TERRAIN files.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Produce HDF5 meteorological data usable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
MM5 results file to convert and MM5 TERRAIN file. The TERRAIN file supplies the MM5 results grid information. &lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
An HDF5 file with MM5 results and a grid data file in MOHID format with the MM5 grid information.&lt;br /&gt;
This last file can be used to interpolate the MM5 data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]), for instance to produce an HDF5 file suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Compilation:'''&lt;br /&gt;
&lt;br /&gt;
Caution! The ConvertToHDF5 executable must be compiled with the [[Big-endian little-endian|Big-Endian]] option set (see compatibility in the project's settings).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT MM5 FORMAT|CONVERT MM5 FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''Aladin''=====&lt;br /&gt;
This format relates to Aladin meteorological model results. Some of the atmospheric properties needed by MOHID Water are present in Aladin output files, making it possible to run forecast simulations with MOHID Water whenever Aladin forecast files are available.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts Aladin results files from the original format to HDF5 format, allowing the easy use of these results in the MOHID framework. Conversion is only performed for the Aladin properties and the time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Produce HDF5 meteorological data usable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Aladin NetCDF results file to convert.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
An HDF5 file with Aladin results and a grid data file in MOHID format with the Aladin grid pseudo-information: a fake orography with a constant 100 m depth is created.&lt;br /&gt;
This last file can be used to interpolate the Aladin data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]), for instance to produce an HDF5 file suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Compilation:'''&lt;br /&gt;
&lt;br /&gt;
Caution! The ConvertToHDF5 executable must be compiled with the [[Big-endian little-endian|Big-Endian]] option set (see compatibility in the project's settings).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT ALADIN FORMAT|CONVERT ALADIN FORMAT]].&lt;br /&gt;
&lt;br /&gt;
====Ocean model data====&lt;br /&gt;
Ocean model data, available in diverse formats, can be used by MOHID Water to specify boundary conditions (open ocean boundary and surface), initial conditions, or for validation. These uses require the model data to be in HDF5 format, so conversion is needed.&lt;br /&gt;
&lt;br /&gt;
Currently, the large-scale ocean model formats convertible into HDF5 by ConvertToHDF5 include MERCATOR.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''MERCATOR''=====&lt;br /&gt;
MERCATOR data files are supplied in NetCDF format, with a user-customized spatial window and periodicity. Water level and water properties (temperature and salinity) data are available in type T files, velocity component u data in type U files, and velocity component v data in type V files. The type of data of a specific MERCATOR file is generally indicated in the file name.&lt;br /&gt;
&lt;br /&gt;
The standard ConvertToHDF5 action is to convert to HDF5 the data referring to temperature, salinity, water level, component u of velocity and component v of velocity.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain HDF5 MERCATOR data usable for forcing or validation of MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
NetCDF MERCATOR results data files and NetCDF MERCATOR grid data files. One grid data file of each type (T, U and V) should be provided. These are generally supplied by the MERCATOR services together with the results files.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
One HDF5 file containing all properties in the recognized set (temperature, salinity, water level, velocity u and velocity v), plus the corresponding grid data and geometry files, containing respectively the horizontal grid and the vertical discretization of the HDF5 file. The grid data and geometry files can be used afterwards to interpolate the MERCATOR data to another grid and geometry (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT MERCATOR FORMAT|CONVERT MERCATOR FORMAT]].&lt;br /&gt;
&lt;br /&gt;
====Climatological data====&lt;br /&gt;
Climatological data can be used in MOHID Water to specify boundary conditions (open ocean boundary and surface), initial conditions, or for validation, when more realistic data (measurements or model results) is unavailable. This data is generally supplied by producers in formats not readily usable by MOHID Water, which justifies the existence of a conversion tool.&lt;br /&gt;
&lt;br /&gt;
Two climatological data format conversions are implemented in ConvertToHDF5: Levitus ocean data and Hellerman Rosenstein meteorological data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''Levitus''=====&lt;br /&gt;
The Levitus climatology provides results for water temperature and salinity.&lt;br /&gt;
The ConvertToHDF5 action converts the climatological data for the properties and spatial window requested by the user. &lt;br /&gt;
Typically, three steps are required to complete the task:&lt;br /&gt;
&lt;br /&gt;
- convert levitus format &lt;br /&gt;
&lt;br /&gt;
- extrapolate the data to the whole Levitus domain (required to avoid mismatched coastlines) &lt;br /&gt;
&lt;br /&gt;
- interpolate to the model grid (bathymetry)&lt;br /&gt;
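The extrapolation step can be illustrated with a minimal one-dimensional sketch using a nearest-neighbour fill: cells flagged as missing take the value of the nearest cell holding data, so the coastlines of the data and of the model grid need not coincide. This is an illustration of the idea only, not the MOHID algorithm; the marker value mirrors the FILL_VALUE keyword described further below.&lt;br /&gt;

```python
FILL = -99.9999  # missing-data marker (stand-in for FILL_VALUE)

def fill_nearest(values):
    """Replace missing cells by the value of the nearest cell with data."""
    known = [i for i, v in enumerate(values) if v != FILL]
    return [values[min(known, key=lambda k: abs(k - i))]
            for i in range(len(values))]

# Two coastal land cells on the right get the nearest ocean value.
print(fill_nearest([FILL, 18.0, 19.0, FILL, FILL]))
```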
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain climatological data in HDF5 format to use as boundary forcing and/or initial condition specification in MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Levitus climatological data files, one per property and per time period (e.g. a month).&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with Levitus climatological data, grid data file with the horizontal&lt;br /&gt;
grid of the data and a geometry file with vertical discretization of the data (MOHID formats).&lt;br /&gt;
The grid data and the geometry files can be used to interpolate the climatological data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT LEVITUS FORMAT|CONVERT LEVITUS FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''Hellerman Rosenstein''=====&lt;br /&gt;
This is a meteorological climatology providing wind stress, with one file per wind stress component. Since the data refer to surface values, it is a 2D field.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts the climatological data for the properties and spatial window provided by the user.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain climatological data in HDF5 format to use as meteorological forcing in MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Hellerman Rosenstein climatological data ASCII files, one per wind stress component.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with Hellerman Rosenstein climatological data and grid data file with the horizontal&lt;br /&gt;
grid of the climatological data. This grid data file can be used to interpolate the climatological data from the original horizontal grid to a new grid (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT HELLERMAN ROSENSTEIN ASCII|CONVERT HELLERMAN ROSENSTEIN ASCII]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''World Ocean Atlas 2005''=====&lt;br /&gt;
The World Ocean Atlas (WOA) 2005 climatology provides results for water temperature, salinity and several water quality and biology properties.&lt;br /&gt;
&lt;br /&gt;
Description, Action and Input Files are described in a separate page: [[ConvertToHDF5 WOA2005]].&lt;br /&gt;
&lt;br /&gt;
==Input file (ConvertToHDF5Action.dat)==&lt;br /&gt;
===General structure===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt; (block containing instructions for running a specific action) &lt;br /&gt;
 ACTION                    : ... (intended action)&lt;br /&gt;
 ... (action specific instructions)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : ...&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
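The general block structure above can be read with a few lines of code. The sketch below (an assumption about the format, not the actual MOHID parser — real files also contain nested list blocks that it ignores) splits the file into begin_file/end_file blocks and collects the "KEY : value" pairs of each:&lt;br /&gt;

```python
def parse_actions(text):
    """Split a ConvertToHDF5Action.dat-style text into one dict per block."""
    blocks, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if line == "<begin_file>":
            current = {}                     # open a new action block
        elif line == "<end_file>":
            blocks.append(current)           # close and store the block
            current = None
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    return blocks

sample = """
<begin_file>
ACTION                    : GLUES HDF5 FILES
OUTPUTFILENAME            : out.hdf5
<end_file>
"""
print(parse_actions(sample)[0]["ACTION"])  # GLUES HDF5 FILES
```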
&lt;br /&gt;
===GLUES HDF5 FILES===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : GLUES HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 3D_FILE                   : 0/1 (0 = 2D file, 1 = 3D file)&lt;br /&gt;
 &lt;br /&gt;
 TIME_GROUP                : ... (Default=&amp;quot;Time&amp;quot;. Other option: &amp;quot;SurfaceTime&amp;quot;.)&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP                : ... (Default=&amp;quot;Results&amp;quot;. Other options: &amp;quot;Residual&amp;quot;, &amp;quot;SurfaceResults&amp;quot;.)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 &lt;br /&gt;
 (block of HDF5 data files)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_list&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of HDF5 file with data to be included in glue, one per line, at least two files)&lt;br /&gt;
 ...                      &lt;br /&gt;
 &amp;lt;&amp;lt;end_list&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
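The gluing idea can be sketched in simplified form: if each input file is represented as a mapping from an instant to its field, gluing concatenates the files into one chronologically sorted series. This is an illustration of the concept under that representation, not the HDF5 handling the tool actually performs:&lt;br /&gt;

```python
def glue(files):
    """Merge per-instant data from several files into one sorted series."""
    merged = {}
    for series in files:
        merged.update(series)   # duplicate instants keep the later file's data
    return dict(sorted(merged.items()))

part1 = {"2004-01-01 00": [1.0], "2004-01-01 06": [1.2]}
part2 = {"2004-01-01 12": [1.1], "2004-01-01 06": [1.2]}
print(list(glue([part1, part2])))
```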
&lt;br /&gt;
===INTERPOLATE GRIDS===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION     : ... (type of horizontal interpolation: 1 = Bilinear, 2 = Spline2D,&lt;br /&gt;
                                  3 = Triangulation)&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION_WINDOW      : ... ... ... ... (2D spatial window to consider for interpolation: &lt;br /&gt;
                                              Xmin Ymin Xmax Ymax; default = all domain)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D           : 0/1 (0 = 2D interpolation, 1 = 3D interpolation)&lt;br /&gt;
 &lt;br /&gt;
 FATHER_FILENAME           : ... (path/name of input HDF5 file with data to be interpolated)&lt;br /&gt;
 FATHER_GRID_FILENAME      : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization of input HDF5 file)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of output HDF5 file to be created)&lt;br /&gt;
 NEW_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for output HDF5 file)&lt;br /&gt;
 &lt;br /&gt;
 EXTRAPOLATE_2D            : 0/1/2/3/4/5 (2D extrapolation: 0=no extrapolation, 1=medium&lt;br /&gt;
                                      triangulation, 2=high triangulation, &lt;br /&gt;
                                      3=nearest neighbour, 4=nearest cell, &lt;br /&gt;
                                      5=constant value)&lt;br /&gt;
 &lt;br /&gt;
 EXTRAPOLATE_VALUE         : ... (constant value to extrapolate to when EXTRAPOLATE_2D is&lt;br /&gt;
                                  set to constant value (5))&lt;br /&gt;
 &lt;br /&gt;
 DO_NOT_BELIEVE_MAP        : 0/1 (0=consider input HDF5 file map, 1=do not consider input HDF5&lt;br /&gt;
                                  file map)&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP                : ... (name of base group of HDF5 variables containing data to be &lt;br /&gt;
                                  interpolated; default is &amp;quot;/Results&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (if INTERPOLATION3D : 1 also required:)&lt;br /&gt;
 FATHER_GEOMETRY           : ... (path/name of file (MOHID format) with vertical discretization&lt;br /&gt;
                                  of input HDF5 file)&lt;br /&gt;
 NEW_GEOMETRY              : ... (path/name of file (MOHID format) with vertical discretization&lt;br /&gt;
                                  intended for output HDF5 file)&lt;br /&gt;
 POLI_DEGREE               : 1/... (degree of vertical interpolation: 1=linear, ...)&lt;br /&gt;
 &lt;br /&gt;
 AUX_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for auxiliary output HDF5 file;&lt;br /&gt;
                                  default is file provided in NEW_GRID_FILENAME)&lt;br /&gt;
 &lt;br /&gt;
 AUX_OUTPUTFILENAME        : ... (path/name of auxiliary output HDF5 file to contain result&lt;br /&gt;
                                  of horizontal grid interpolation)   &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the file indicated in AUX_GRID_FILENAME can differ from the one indicated in&lt;br /&gt;
   NEW_GRID_FILENAME in terms of bathymetry, while the horizontal grid should commonly be the&lt;br /&gt;
   same: the altered bathymetry can be used to extend the water column of the original data so&lt;br /&gt;
   that the vertical interpolation proceeds smoothly;&lt;br /&gt;
 &lt;br /&gt;
 - in case of INTERPOLATION3D : 1, ConvertToHDF5 can generate new versions of the bathymetries&lt;br /&gt;
   that are consistent with the geometry definition (extension is '.new'); up to three&lt;br /&gt;
   bathymetries may be changed, referring to the father grid, the new grid and the aux grid&lt;br /&gt;
   (the same bathymetry is not altered twice); although the new and aux grids are initially&lt;br /&gt;
   the same, they can end up different because of these bathymetry changes;&lt;br /&gt;
 &lt;br /&gt;
 - in case the new geometry is 2D and father geometry is 3D then POLI_DEGREE : 1 &lt;br /&gt;
   (linear interpolation) should be used;&lt;br /&gt;
 &lt;br /&gt;
 - EXTRAPOLATE_2D : 1/2/3/4/5 should be considered if the coastlines of the father and new&lt;br /&gt;
   grids are not expected to coincide, to avoid missing data in the interpolation&lt;br /&gt;
   process; extrapolation is performed for all cells, including the land cells; &lt;br /&gt;
 &lt;br /&gt;
 - in case of DO_NOT_BELIEVE_MAP : 1 the application generates a map based on the cells where&lt;br /&gt;
   interpolation results are available; as a consequence, if EXTRAPOLATE_2D : 1/2/3/4/5 is&lt;br /&gt;
   used, the file in AUX_GRID_FILENAME should not have land cells, so that the new map is&lt;br /&gt;
   consistent with the result of the extrapolation and errors are avoided, especially if&lt;br /&gt;
   INTERPOLATION3D : 1 is considered.&lt;br /&gt;
&lt;br /&gt;
===PATCH HDF5 FILES===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : PATCH HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION     : ... (type of interpolation: 3 = Triangulation, default and only&lt;br /&gt;
                                  one implemented)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 &lt;br /&gt;
 (block for each father HDF5 file, should be at least two)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                     : ... (integer priority level: 1 = highest, increase for lower&lt;br /&gt;
                                  priority)&lt;br /&gt;
 FATHER_FILENAME           : ... (path/name of input HDF5 file with data to be interpolated)&lt;br /&gt;
 FATHER_GRID_FILENAME      : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization of input HDF5 file)&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of output HDF5 file to be created)&lt;br /&gt;
 NEW_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for output HDF5 file)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
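The LEVEL priority logic can be sketched in one dimension: for each cell of the new grid, the value is taken from the father with the highest priority (lowest LEVEL) that covers it. A simplified illustration of the selection rule, with None marking cells a father does not cover (the real tool interpolates fields on grids):&lt;br /&gt;

```python
def patch(fathers):
    """fathers: list of (level, field) pairs; fields are equal-length lists."""
    ordered = sorted(fathers, key=lambda pair: pair[0])  # level 1 first
    size = len(ordered[0][1])
    return [next((field[i] for _, field in ordered if field[i] is not None),
                 None)
            for i in range(size)]

fine   = (1, [10.0, None, None])   # level 1: local, high-resolution model
coarse = (2, [1.0, 2.0, 3.0])      # level 2: large-scale model
print(patch([coarse, fine]))  # [10.0, 2.0, 3.0]
```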
&lt;br /&gt;
===CONVERT ERA40 FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ERA40 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : ... (path/name of ERA40 NetCDF file)&lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
                                 (root of name for all files produced)&lt;br /&gt;
 &lt;br /&gt;
 CONVERT_TO_ASCII          : 0/1 (1 = convert variable heading info to ASCII file; 0 = default)&lt;br /&gt;
 CONVERT_TO_HDF5           : 0/1 (1 = convert to HDF5 file; 0 = default)&lt;br /&gt;
 GRIDTO180                 : 0/1 (1 = convert grid from [0 360] to [-180 180], 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 XX_VARIABLE               : ... (name of longitude variable in the input file: usual name &lt;br /&gt;
                                  is &amp;quot;longitude&amp;quot;)&lt;br /&gt;
 YY_VARIABLE               : ... (name of latitude variable in the input file: usual name &lt;br /&gt;
                                  is &amp;quot;latitude&amp;quot;)&lt;br /&gt;
 TIME_VARIABLE             : ... (name of time variable in the input file: usual name is&lt;br /&gt;
                                  &amp;quot;time&amp;quot;)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - either CONVERT_TO_ASCII : 1 or CONVERT_TO_HDF5 : 1 must be chosen for any action to be&lt;br /&gt;
 performed by ConvertToHDF5;&lt;br /&gt;
 &lt;br /&gt;
 - when CONVERT_TO_HDF5 : 1 an HDF5 file is produced for every variable contained in the&lt;br /&gt;
 original ERA40 file; the name of each file is composed of the name indicated in OUTPUTFILENAME&lt;br /&gt;
 concatenated with the ERA40 variable identifier;&lt;br /&gt;
 &lt;br /&gt;
 - the XX_VARIABLE, YY_VARIABLE and TIME_VARIABLE keywords should generally be set to&lt;br /&gt;
 &amp;quot;longitude&amp;quot;, &amp;quot;latitude&amp;quot; and &amp;quot;time&amp;quot;, respectively; the option to&lt;br /&gt;
 make them keywords was taken only to keep the application robust to future variable name&lt;br /&gt;
 changes.&lt;br /&gt;
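The GRIDTO180 keyword remaps longitudes from the [0, 360] convention to [-180, 180]. The coordinate conversion itself is simply a shift by 360 degrees for longitudes above 180; the sketch below shows only this remapping (how the tool reorders the associated data is not shown and is not assumed):&lt;br /&gt;

```python
def to_180(lons):
    """Convert longitudes from the [0, 360] range to the [-180, 180] range."""
    return [lon - 360.0 if lon > 180.0 else lon for lon in lons]

print(to_180([0.0, 90.0, 180.0, 270.0, 359.0]))
# [0.0, 90.0, 180.0, -90.0, -1.0]
```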
&lt;br /&gt;
===CONVERT MM5 FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MM5 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : ... (path/name of MM5 file)&lt;br /&gt;
 TERRAIN_FILENAME          : ... (path/name of MM5 TERRAIN file)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data file with horizontal grid of MM5 data&lt;br /&gt;
                                  to be created)&lt;br /&gt;
 &lt;br /&gt;
 COMPUTE_WINDSTRESS        : 0/1 (1 = compute and write wind stress field; 0 = default)&lt;br /&gt;
 COMPUTE_RELATIVE_HUMIDITY : 0/1 (1 = compute and write relative humidity field; 0 = default)&lt;br /&gt;
 COMPUTE_PRECIPITATION     : 0/1 (1 = compute and write precipitation field; 0 = default)&lt;br /&gt;
 COMPUTE_WINDMODULUS       : 0/1 (1 = compute wind modulus; 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 WRITE_XYZ                 : 0/1 (1 = write xyz center grid cells; 0 = default)&lt;br /&gt;
 WRITE_TERRAIN             : 0/1 (1 = write MM5 TERRAIN fields; 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
  &lt;br /&gt;
 (block of MM5 properties to convert)&lt;br /&gt;
 &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;&lt;br /&gt;
 ... (name of MM5 property to convert to HDF5 format, one per line)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the name of each MM5 property to convert in the &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;...&amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt; block must&lt;br /&gt;
 conform to the MOHID designation specified in the code of ModuleGlobalData; the correspondence is &lt;br /&gt;
 the following (see [[Module_InterfaceWaterAir]] for a more detailed explanation):&lt;br /&gt;
 &lt;br /&gt;
           ---MM5 NAME---    ---MOHID NAME---&lt;br /&gt;
             T2             air temperature&lt;br /&gt;
             PSTARCRS       atmospheric pressure&lt;br /&gt;
             U10            wind velocity X&lt;br /&gt;
             V10            wind velocity Y&lt;br /&gt;
             UST            wind shear velocity&lt;br /&gt;
             LHFLUX         latent heat&lt;br /&gt;
             SHFLUX         sensible heat&lt;br /&gt;
             SWDOWN         solar radiation&lt;br /&gt;
             LWDOWN         infrared radiation&lt;br /&gt;
             SWOUT          top outgoing shortwave radiation&lt;br /&gt;
             LWOUT          top outgoing longwave radiation&lt;br /&gt;
             SOIL T 1       soil temperature layer 1&lt;br /&gt;
             SOIL T 2       soil temperature layer 2&lt;br /&gt;
             SOIL T 3       soil temperature layer 3&lt;br /&gt;
             SOIL T 4       soil temperature layer 4&lt;br /&gt;
             SOIL T 5       soil temperature layer 5&lt;br /&gt;
             SOIL T 6       soil temperature layer 6&lt;br /&gt;
             Q2             2-meter mixing ratio&lt;br /&gt;
             TSEASFC        sea water temperature&lt;br /&gt;
             PBL HGT        PBL height&lt;br /&gt;
             PBL REGIME     PBL regime&lt;br /&gt;
             RAIN CON       accumulated convective precipitation        (cm)&lt;br /&gt;
             RAIN NON       accumulated non-convective precipitation    (cm)&lt;br /&gt;
             GROUND T       ground temperature&lt;br /&gt;
             RES TEMP       infinite reservoir slab temperature&lt;br /&gt;
             U              wind velocity X_3D&lt;br /&gt;
             V              wind velocity Y_3D&lt;br /&gt;
             W              wind velocity Z_3D&lt;br /&gt;
             T              air temperature_3D&lt;br /&gt;
             PP             atmospheric pressure_3D&lt;br /&gt;
             Q              mixing ratio_3D&lt;br /&gt;
             CLW            cloud water mixing ratio_3D&lt;br /&gt;
             RNW            rain water mixing ratio_3D&lt;br /&gt;
             ICE            cloud ice mixing ratio_3D&lt;br /&gt;
             SNOW           snow mixing ratio_3D&lt;br /&gt;
             RAD TEND       atmospheric radiation tendency_3D&lt;br /&gt;
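Among the derivable properties, the wind modulus (COMPUTE_WINDMODULUS : 1) is the simplest: from the 10 m wind components (U10, V10) it is the magnitude of the wind vector. A minimal sketch of that calculation, for illustration only:&lt;br /&gt;

```python
import math

def wind_modulus(u10, v10):
    """Magnitude of the 10 m wind vector from its two components."""
    return math.hypot(u10, v10)

print(wind_modulus(3.0, 4.0))  # 5.0
```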
&lt;br /&gt;
===CONVERT ALADIN FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ALADIN FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : aladin.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : aladin_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 !Indicate here the name of any Aladin NetCDF file; it is used only to generate the grid data.&lt;br /&gt;
 INPUT_GRID_FILENAME      :   D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 (path to aladin netcdf file)\ALADIN_BULKIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the name of each Aladin property to convert in &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;...&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt; block must conform to the following variables&lt;br /&gt;
 &lt;br /&gt;
           ---ALADIN NAME---    ---MOHID NAME---&lt;br /&gt;
             soclotot            CloudCover_&lt;br /&gt;
             sohumrel            RelativeHumidity_&lt;br /&gt;
             sofluxir            NonSolarFlux_&lt;br /&gt;
             sosspres            AtmosphericPressure_&lt;br /&gt;
             sosolarf            SolarRadiation_&lt;br /&gt;
             sotemair            AirTemperature_&lt;br /&gt;
             sowinmod            WindModulus_&lt;br /&gt;
             sowaprec            Precipitation_&lt;br /&gt;
             sozotaux            WindStressX_&lt;br /&gt;
             sometauy            WindStressY_&lt;br /&gt;
             sowindu10           WindVelocityX_&lt;br /&gt;
             sowindv10           WindVelocityY_&lt;br /&gt;
&lt;br /&gt;
===CONVERT MERCATOR FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 READ_OPTION               : 1/2/3/4 (version of MERCATOR files)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : ... (path/name of geometry file with vertical discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
&lt;br /&gt;
 (if READ_OPTION : 1:)&lt;br /&gt;
 BASE_BULLETIN             : ...&lt;br /&gt;
 DATES_FILE                : ...&lt;br /&gt;
 NUM_DATES                 : ... &lt;br /&gt;
&lt;br /&gt;
 (if READ_OPTION : 2/3:)&lt;br /&gt;
 INPUT_GRID_FILENAME       : ... (path/name of file with horizontal discretization of water&lt;br /&gt;
                                  properties and water level data)&lt;br /&gt;
 (if READ_OPTION : 2:)&lt;br /&gt;
 INPUT_GRID_FILENAME_U     : ... (path/name of file with horizontal discretization of velocity&lt;br /&gt;
                                  component U data)&lt;br /&gt;
 INPUT_GRID_FILENAME_V     : ... (path/name of file with horizontal discretization of velocity&lt;br /&gt;
                                  component V data)&lt;br /&gt;
 &lt;br /&gt;
 (if READ_OPTION : 3:)&lt;br /&gt;
 INPUT_BATHY_FILENAME      : ... (path/name of file with bathymetry)&lt;br /&gt;
     &lt;br /&gt;
 (if READ_OPTION : 3/4:)&lt;br /&gt;
 CALC_BAROTROPIC_VEL       : 0/1 (1 = calculate barotropic velocity, 0 = not calculate; &lt;br /&gt;
                                  default = 0)&lt;br /&gt;
&lt;br /&gt;
 (if CALC_BAROTROPIC_VEL : 1 and READ_OPTION : 3:)&lt;br /&gt;
 INPUT_MESH_ZGRID_FILENAME : ... (path/name of file with information about layer thicknesses)&lt;br /&gt;
 &lt;br /&gt;
 (block of MERCATOR data files)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of MERCATOR NetCDF data file, one per line, can be several)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
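The role of the layer thicknesses in CALC_BAROTROPIC_VEL can be illustrated if the barotropic velocity is taken as the thickness-weighted average of the layer velocities over the water column — an assumption about the definition used, which would explain why INPUT_MESH_ZGRID_FILENAME is needed. A sketch, not the MOHID implementation:&lt;br /&gt;

```python
def barotropic_velocity(layer_vel, layer_thickness):
    """Depth-average a velocity profile, weighting each layer by its
    thickness (m)."""
    total = sum(layer_thickness)
    return sum(v * h for v, h in zip(layer_vel, layer_thickness)) / total

# Three layers: fast surface flow over a thick, nearly still deep layer.
print(barotropic_velocity([0.2, 0.1, 0.0], [10.0, 20.0, 70.0]))  # 0.04
```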
&lt;br /&gt;
===CONVERT LEVITUS FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT LEVITUS FORMAT&lt;br /&gt;
  &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : ... (path/name of geometry file with vertical discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY               : ... (periodicity of Levitus data: &amp;quot;monthly&amp;quot;/&amp;quot;annual&amp;quot;; default is&lt;br /&gt;
                                  &amp;quot;monthly&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 SPATIAL_RESOLUTION        : ... (spatial resolution (degrees) of horizontal Levitus grid)&lt;br /&gt;
 &lt;br /&gt;
 FILL_VALUE                : ... (real value identifying missing data; default is &lt;br /&gt;
                                  &amp;quot;-99.999900&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (definition of spatial window to be present in output HDF5 file)&lt;br /&gt;
 LOWER_LEFT_CORNER         : ... ... (longitude and latitude (degrees) of south west corner)&lt;br /&gt;
 UPPER_RIGHT_CORNER        : ... ... (longitude and latitude (degrees) of north east corner)&lt;br /&gt;
 &lt;br /&gt;
 (block for each water property to be present in output HDF5 file, can be several)&lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : ... (name of property)&lt;br /&gt;
 ANNUAL_FILE               : ... (path/name of Levitus annual file)&lt;br /&gt;
 &lt;br /&gt;
 (block of Levitus data files)&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of Levitus data file (e.g. a monthly data file), one per line, can be several)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===CONVERT HELLERMAN ROSENSTEIN ASCII===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
  &lt;br /&gt;
 PERIODICITY               : ... (periodicity of Hellerman Rosenstein data: &amp;quot;monthly&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 SPATIAL_RESOLUTION        : ... (spatial resolution (degrees) of horizontal Hellerman&lt;br /&gt;
                                  Rosenstein grid: default and only allowed value is &amp;quot;2.&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 FILL_VALUE                : ... (real value identifier for missing data; default is &lt;br /&gt;
                                  &amp;quot;-99.999900&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (definition of spatial window to be present in output HDF5 file)&lt;br /&gt;
 LOWER_LEFT_CORNER         : ... ... (longitude and latitude (degrees) of south west corner)&lt;br /&gt;
 UPPER_RIGHT_CORNER        : ... ... (longitude and latitude (degrees) of north east corner)&lt;br /&gt;
   &lt;br /&gt;
 (block for each Hellerman Rosenstein data file)&lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : ... (name of property: &amp;quot;wind stress X&amp;quot;/&amp;quot;wind stress Y&amp;quot;)&lt;br /&gt;
 FILE                      : ... (path/name of Hellerman Rosenstein file)&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Samples==&lt;br /&gt;
All sample files are named ''ConvertToHDF5Action.dat''.&lt;br /&gt;
&lt;br /&gt;
===Glue several MOHID(.hdf5) files===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : GLUES HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : SurfaceHydro_OP.hdf5&lt;br /&gt;
  &lt;br /&gt;
 &amp;lt;&amp;lt;begin_list&amp;gt;&amp;gt;&lt;br /&gt;
 D:\Projectos\SurfaceHydrodynamic_21.hdf5&lt;br /&gt;
 D:\Projectos\SurfaceHydrodynamic_22.hdf5&lt;br /&gt;
 &amp;lt;&amp;lt;end_list&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interpolate 2D MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 FATHER_FILENAME          : D:\Projectos\MohidRun\test\res\Lagrangian_1.hdf5 &lt;br /&gt;
 OUTPUTFILENAME           : OilSpillThickness_GridRegular.hdf5&lt;br /&gt;
  &lt;br /&gt;
 START                    : 2006 6 21 17 22 30&lt;br /&gt;
 END                      : 2006 6 22 17 22 0&lt;br /&gt;
  &lt;br /&gt;
 FATHER_GRID_FILENAME     : D:\Projectos\MohidRun\GeneralData\batim\Tagus.dat_A&lt;br /&gt;
 NEW_GRID_FILENAME        : TagusConstSpacing.dat&lt;br /&gt;
  &lt;br /&gt;
 BASE_GROUP               : /Results/Oil/Data_2D&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interpolate 3D MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION   : 1&lt;br /&gt;
 FATHER_FILENAME         : D:\Projectos\MohidRun\test\res\Lagrangian_1.hdf5 &lt;br /&gt;
 OUTPUTFILENAME          : OilSpillThickness_GridRegular.hdf5&lt;br /&gt;
 &lt;br /&gt;
 START                   : 2006 6 21 17 22 30&lt;br /&gt;
 END                     : 2006 6 22 17 22 0&lt;br /&gt;
 &lt;br /&gt;
 FATHER_GRID_FILENAME    : D:\Projectos\MohidRun\GeneralData\batim\Tagus.dat_A&lt;br /&gt;
 NEW_GRID_FILENAME       : TagusConstSpacing.dat&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP              : /Results/Oil/Data_2D&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D         : 1&lt;br /&gt;
 FATHER_GEOMETRY         : D:\Projectos\MohidRun\test\data\Geometry_1.dat&lt;br /&gt;
 NEW_GEOMETRY            : TagusGeometry.dat&lt;br /&gt;
 AUX_GRID_FILENAME       : TagusConstSpacing.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME      : Aux_GridRegular.hdf5&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Patch several MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : PATCH HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION   : 3&lt;br /&gt;
 &lt;br /&gt;
 START                   : 2005 2 28 13 0 0&lt;br /&gt;
 END                     : 2005 3 1 13 0 0&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 3&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D1.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid1.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 2&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D2.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid2.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 1&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D3.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid3.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME          : MM5Forcing.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME       : K:\Simula\GeneralData\Batim\CostaPortuguesa.dat&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert an ERA40 file to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                  : CONVERT ERA40 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                : D:\Aplica\ERA40\1971ERA1973.nc&lt;br /&gt;
 OUTPUTFILENAME          : D:\Aplica\ERA40\1971ERA1973T2&lt;br /&gt;
 &lt;br /&gt;
 CONVERT_TO_ASCII        : 0&lt;br /&gt;
 CONVERT_TO_HDF5         : 1&lt;br /&gt;
 &lt;br /&gt;
 XX_VARIABLE             : longitude&lt;br /&gt;
 YY_VARIABLE             : latitude&lt;br /&gt;
 TIME_VARIABLE           : time&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert a MM5 file to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                  : CONVERT MM5 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\MMOUT_D3&lt;br /&gt;
 TERRAIN_FILENAME        : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\TERRAIN_D3&lt;br /&gt;
 OUTPUT_GRID_FILENAME    : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\grid3.dat&lt;br /&gt;
 OUTPUTFILENAME          : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\MM5_D3.hdf5&lt;br /&gt;
 &lt;br /&gt;
 COMPUTE_WINDSTRESS        : 0&lt;br /&gt;
 COMPUTE_RELATIVE_HUMIDITY : 1&lt;br /&gt;
 WRITE_XYZ                 : 0&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;&lt;br /&gt;
 solar radiation&lt;br /&gt;
 air temperature&lt;br /&gt;
 wind velocity X&lt;br /&gt;
 wind velocity Y&lt;br /&gt;
 sensible heat&lt;br /&gt;
 latent heat&lt;br /&gt;
 atmospheric pressure&lt;br /&gt;
 sea water temperature&lt;br /&gt;
 &amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert Mercator-Ocean(.nc) to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : Psy2v2r1v_R20060628/MercatorR20060628.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : Psy2v2r1v_R20060628/MercatorGridR20060628.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME : Psy2v2r1v_R20060628/MercatorGeometryR20060628.dat&lt;br /&gt;
 &lt;br /&gt;
 INPUT_GRID_FILENAME      : GridFiles/ist_meteog-gridT.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_U    : GridFiles/ist_meteog-gridU.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_V    : GridFiles/ist_meteog-gridV.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060621_R20060628.nc&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060622_R20060628.nc&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060623_R20060628.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert Levitus format to MOHID(.hdf5) and interpolate grid===&lt;br /&gt;
==== Convert ====&lt;br /&gt;
First convert the Levitus ASCII format to a raw HDF5 format:&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT LEVITUS FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : Levitus.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME : LevitusGeometry.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY              : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION       : 0.25&lt;br /&gt;
 FILL_VALUE               : -99.9999&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER        : -16.0  31&lt;br /&gt;
 UPPER_RIGHT_CORNER       :   1.   40&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : salinity&lt;br /&gt;
 ANNUAL_FILE              : DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s000hr.obj&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s001&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s002&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s003&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s004&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s005&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s006&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s007&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s008&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s009&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s010&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s011&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s012&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : temperature&lt;br /&gt;
 ANNUAL_FILE              : DataCenter\DadosBase\Ocean\Levitus\Data\Temp\t000hr.obj&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t001&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t002&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t003&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t004&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t005&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t006&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t007&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t008&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t009&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t010&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t011&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t012&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Extrapolate ====&lt;br /&gt;
Then extrapolate the data (still in the raw HDF5 format):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 &lt;br /&gt;
 FATHER_FILENAME          : Levitus.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : LeviTusAllPointsWithData.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME        : LevitusGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 START                    : -9999 1  1 0 0 0&lt;br /&gt;
 END                      : -9999 12 1 0 0 0&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D          : 1&lt;br /&gt;
 FATHER_GEOMETRY          : LevitusGeometry.dat&lt;br /&gt;
 NEW_GEOMETRY             : LevitusGeometry.dat&lt;br /&gt;
 AUX_GRID_FILENAME        : LevitusGrid.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME       : AuxLeviTusAllPointsWithData.hdf5&lt;br /&gt;
 &lt;br /&gt;
 POLI_DEGREE              : 3&lt;br /&gt;
 DO_NOT_BELIEVE_MAP       : 1&lt;br /&gt;
 EXTRAPOLATE_2D           : 2&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interpolate ====&lt;br /&gt;
Finally, interpolate to the final grid and geometry (same as the [[#Interpolate 3D MOHID(.hdf5) files to a new grid| Interpolate 3D sample]]):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 FATHER_FILENAME          : LeviTusAllPointsWithData.hdf5&lt;br /&gt;
 OUTPUTFILENAME           : CadizMonthlyLevitus.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 NEW_GRID_FILENAME        : Algarve0.02SigmaSmooth_V3_CartMoreLayers.dat&lt;br /&gt;
 &lt;br /&gt;
 START                    : -9999 1  1 0 0 0&lt;br /&gt;
 END                      : -9999 12 1 0 0 0&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D          : 1&lt;br /&gt;
 FATHER_GEOMETRY          : LevitusGeometry.dat&lt;br /&gt;
 NEW_GEOMETRY             : Geometry_1.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME       : AuxCadizMonthlyLevitus.hdf5&lt;br /&gt;
 AUX_GRID_FILENAME        : Aux12km.dat&lt;br /&gt;
 &lt;br /&gt;
 POLI_DEGREE              : 3&lt;br /&gt;
 DO_NOT_BELIEVE_MAP       : 1&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the programme may construct a new bathymetry twice. Use this bathymetry only in the AUX_GRID_FILENAME keyword.&lt;br /&gt;
&lt;br /&gt;
===Convert Hellerman Rosenstein ASCII format to MOHID(.hdf5)  ===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : ClimatologicWindStress.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : ClimatologicWindStressGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY              : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION       : 2.&lt;br /&gt;
 FILL_VALUE               : -99.9999&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER        : -180  -90&lt;br /&gt;
 UPPER_RIGHT_CORNER       : 180  90&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : wind stress X&lt;br /&gt;
 FILE                     : D:\Aplica\Dados\Hellerman_Rosenstein\TAUXX.DAT&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : wind stress Y&lt;br /&gt;
 FILE                     : D:\Aplica\Dados\Hellerman_Rosenstein\TAUYY.DAT&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert ALADIN(.nc) format to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ALADIN FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : aladin.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : aladin_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 !Put here the name of any NetCDF file, for grid-data generation purposes.&lt;br /&gt;
 INPUT_GRID_FILENAME      :   D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKPRES_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKSOLAR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKTAIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKWIND_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_FLUXPRE_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_STRESSU_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_STRESSV_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_U10_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_V10_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKHUMI_OPASYMP_19723_20088.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== OceanColor modules compilation ==&lt;br /&gt;
Compiling the [[ConvertToHDF5]] tool with the OceanColor modules is more complicated than one might expect. A solution is proposed here for a release build using Compaq Visual Fortran 6.6c. The difficulties arise because C code is embedded behind a Fortran interface and because extra libraries, such as HDF4, are required.&lt;br /&gt;
&lt;br /&gt;
=== Pre-requisites ===&lt;br /&gt;
&lt;br /&gt;
This is a list of prerequisites to successfully compile the tool:&lt;br /&gt;
*Compaq Visual Fortran 6.5 with patch 6.6c,&lt;br /&gt;
*VS .NET 2003 (Vc7 in particular),&lt;br /&gt;
*Hdf5 libraries ('''hdf5.lib''' '''hdf5_fortran.lib''' '''hdf5_hl.lib'''),&lt;br /&gt;
*Netcdf libraries ('''netcdf.lib''' '''netcdf_.lib'''),&lt;br /&gt;
*Hdf4 libraries ('''hd421.lib''', '''hm421.lib'''),&lt;br /&gt;
*szlib, zlib and jpeg libraries ('''szlib.lib''', '''zlib.lib''' and '''libjpeg.lib'''),&lt;br /&gt;
*the fortran source files ('''ModuleConvertModisL2.F90 ModuleConvertModisL3.F90 ModuleConvertOceanColorL2.F90'''),&lt;br /&gt;
*the C source files and their fortran interface files ('''readL2scan.c readL2Seadas.c''' and '''cdata.f crossp.f fgeonav.f''').&lt;br /&gt;
&lt;br /&gt;
=== CVF IDE configuration ===&lt;br /&gt;
# Configure everything as specified in [[Compiling with CVF]].&lt;br /&gt;
# Add the source files listed in the prerequisites above to the source files listing.&lt;br /&gt;
# Go to '''Tools--&amp;gt;Options...--&amp;gt;Directories'''. There, add '''$DOTNET2K3/Vc7/bin''' to the '''Executable files'''; '''$DOTNET2K3/Vc7/include''' and '''$DOTNET2K3/Vc7/PlatformSDK/include''' to the '''Include files'''; and finally '''$DOTNET2K3/Vc7/lib''', '''$DOTNET2K3/Vc7/PlatformSDK/lib''' and '''$DOTNET2K3/Vc7/PlatformSDK/bin''' to the '''Library files'''.&lt;br /&gt;
# Go to '''Projects--&amp;gt;Settings--&amp;gt;Release--&amp;gt;Link--&amp;gt;Input'''. There, add the following libraries: '''netcdf.lib netcdf_.lib hd421.lib hm421.lib libjpeg.lib'''. (Make sure the hdf5 libraries as well as the szlib and zlib libraries are already mentioned).&lt;br /&gt;
&lt;br /&gt;
=== Troubleshooting ===&lt;br /&gt;
'''Q: I get unresolved external references during linkage, but I have all the libraries mentioned above included. What should I do?'''&lt;br /&gt;
&lt;br /&gt;
A: Unresolved external references can appear for two reasons:&lt;br /&gt;
#you did not specify all the required libraries, or all the paths to the default libraries, or&lt;br /&gt;
#[http://en.wikipedia.org/wiki/Name_decoration name mangling] problems. Run the [[dumpbin]] utility on the libraries to check which language convention they use. If that is the problem, you need to obtain new libraries built with the correct naming convention.&lt;br /&gt;
&lt;br /&gt;
That's it, you should now be able to build the [[ConvertToHdf5]] project successfully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Q: I got a message saying the entry point _NF_PUT_ATT_REAL@28 could not be located in netcdf.dll'''&lt;br /&gt;
&lt;br /&gt;
A: Copy the file netcdf.dll to the folder containing the executable.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://www.hdfgroup.org/ HDF5 Homepage]&lt;br /&gt;
*[http://www.hdfgroup.org/ HDF4 Homepage]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
*[[Module_Atmosphere]]&lt;br /&gt;
*[[Module_InterfaceWaterAir]]&lt;br /&gt;
*[[Coupling_Water-Atmosphere_User_Manual]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Tools]]&lt;br /&gt;
[[Category:Hdf5]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=ConvertToHDF5&amp;diff=1925</id>
		<title>ConvertToHDF5</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=ConvertToHDF5&amp;diff=1925"/>
				<updated>2009-05-12T12:36:18Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* CONVERT MERCATOR FORMAT */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''ConvertToHDF5''' is an application that performs several operations, called '''actions''', on HDF5 files: conversion of data in other formats (e.g. NetCDF) to HDF5, grid interpolation, and concatenation of several files.&lt;br /&gt;
&lt;br /&gt;
Running options for this application are specified by the user in an input file named [[ConvertToHDF5#Input file (ConvertToHDF5Action.dat)|'''ConvertToHDF5Action.dat''']]. Several actions can be specified in the same input file; they are processed sequentially by the ConvertToHDF5 application.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
The operations on HDF5 files performed by ConvertToHDF5, each specified as an individual action, can be organized into [[#file management|file management]], [[#grid interpolation|grid interpolation]] and [[#format conversion|format conversion]]. These types and their respective actions are detailed in the next sections. &lt;br /&gt;
&lt;br /&gt;
The input file specification for each action can be found below in the [[#Input file (ConvertToHDF5Action.dat)|Input file (ConvertToHDF5Action.dat)]] section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===File management===&lt;br /&gt;
&lt;br /&gt;
====Glue files====&lt;br /&gt;
This action joins, or glues, two or more HDF5 files into a single HDF5 file, provided they have the same HDF5 data groups and refer to consecutive time periods. Both sets of 2D and of 3D HDF5 files can be glued.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Glue MOHID Water results files from several runs produced in continuous operation of the model, to save storage space. The action can also be used to join data from other sources (e.g. results of meteorological models) as long as the HDF5 format is the one supported by MOHID Water.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 files to be glued. &amp;quot;Grid&amp;quot; and &amp;quot;Results&amp;quot; data groups should be equal in all these files.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with glued &amp;quot;Results&amp;quot; data. &amp;quot;Residual&amp;quot; and &amp;quot;Statistics&amp;quot; HDF5 data groups are not copied to the output file since they are specific to each time period (different values potentially occur in each file). General statistics can be calculated for the glued HDF5 file data using the [[HDF5Statistics]] tool.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#GLUES HDF5 FILES|GLUES HDF5 FILES]].&lt;br /&gt;
&lt;br /&gt;
===Grid interpolation===&lt;br /&gt;
&lt;br /&gt;
====Interpolate files====&lt;br /&gt;
This action converts the data of an HDF5 file from one 2D or 3D spatial grid to another 2D or 3D spatial grid, creating a new HDF5 file. The interpolation is performed only for the data located in a time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
The HDF5 file containing data to be interpolated is called the '''father file'''.&lt;br /&gt;
&lt;br /&gt;
In the case of 3D interpolation the application first performs the horizontal grid interpolation&lt;br /&gt;
(keeping the father geometry) and only afterwards the vertical interpolation (from the father geometry to the new geometry).&lt;br /&gt;
&lt;br /&gt;
Several types of 2D interpolation are available: bilinear, 2D spline and triangulation.&lt;br /&gt;
For the vertical interpolation (used in 3D interpolation) several polynomial degrees can be supplied.&lt;br /&gt;
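As a rough illustration of polynomial vertical interpolation, a degree-3 polynomial through four vertical levels can be evaluated with Lagrange's formula. This is a sketch only: the depths and temperature values below are hypothetical, and the actual interpolation code inside MOHID may differ.&lt;br /&gt;

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange polynomial through the points (xs, ys) at x.

    With four points this is the degree-3 case, matching a
    "POLI_DEGREE : 3" setting in the sample files below.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical vertical profile: depth (m) vs. temperature (degrees C).
depths = [0.0, 10.0, 20.0, 30.0]
temps = [18.0, 17.0, 15.0, 14.0]
print(lagrange_interpolate(depths, temps, 15.0))
```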
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file with data for forcing or providing initial conditions for a MOHID Water model, e.g. a meteorological forcing file.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
For 2D/3D interpolation:&lt;br /&gt;
&lt;br /&gt;
- father HDF5 file;&lt;br /&gt;
&lt;br /&gt;
- father horizontal data grid, in a grid data file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- new horizontal data grid, in a grid data file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
For 3D interpolation also needed:&lt;br /&gt;
&lt;br /&gt;
- father vertical geometry, in a geometry file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- new vertical geometry, in a geometry file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- auxiliary horizontal data grid, in a grid data file in the format supported by MOHID; this file is used for horizontal grid interpolation in 3D interpolation operations.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with interpolated data. In the case of 3D interpolation an auxiliary HDF5 file is also produced with the result of the horizontal grid interpolation, which can be inspected to check whether that operation was performed correctly.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#INTERPOLATE GRIDS|INTERPOLATE GRIDS]].&lt;br /&gt;
&lt;br /&gt;
====Patch files====&lt;br /&gt;
This action performs an interpolation of HDF5 data between grids, as in the [[#Interpolate files|Interpolate files]] action, but considers more than one HDF5 file containing data to be interpolated to the new grid, together with a priority scale. The interpolation is performed only for the data located in the time window specified by the user. The present version of this action operates only on 2D data.&lt;br /&gt;
&lt;br /&gt;
Each HDF5 file containing data to be interpolated is called a '''father file''' and has a user-attributed '''priority level''' that is respected in the interpolation process: for each new grid cell the ConvertToHDF5 application looks for data first in the Level 1 father file; only if those data do not exist does it look in the Level 2 file, proceeding to higher-level files if no data are found.&lt;br /&gt;
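The priority rule can be sketched as follows. This is illustrative Python only: the real tool operates cell by cell on gridded HDF5 data, not on Python values, and the numbers are hypothetical.&lt;br /&gt;

```python
def patch_cell(values_by_level):
    """Return the cell value from the lowest-numbered (highest-priority)
    father file that has data for this cell.

    values_by_level: dict {priority_level: value or None}; None marks a
    cell where that father file has no data.
    """
    for level in sorted(values_by_level):
        value = values_by_level[level]
        if value is not None:
            return value
    return None  # no father file covers this cell

# Level 1 wins when present; otherwise fall back to level 2, then 3.
print(patch_cell({1: 17.5, 2: 16.0, 3: 15.0}))  # -> 17.5
print(patch_cell({1: None, 2: 16.0, 3: 15.0}))  # -> 16.0
```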
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file combining data from several HDF5 files, each containing data with a different spatial resolution and covering only part of the new grid. This is, for instance, the case when preparing a best-resolution meteorological HDF5 file for forcing MOHID Water from several meteorological model domains with different spatial resolutions and spans, since the best-resolution data is not available for all new grid cells.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
The new horizontal data grid, in a grid data file in the format supported by MOHID, and for each father file:&lt;br /&gt;
&lt;br /&gt;
- level of priority: 1 = maximum priority, priority decreases with increasing level value;&lt;br /&gt;
&lt;br /&gt;
- data grid, in the form of a grid data file in the format supported by MOHID.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with patched data.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#PATCH HDF5 FILES|PATCH HDF5 FILES]].&lt;br /&gt;
&lt;br /&gt;
===Format conversion===&lt;br /&gt;
&lt;br /&gt;
====Meteorological model data====&lt;br /&gt;
MOHID does not explicitly simulate the atmosphere, but it needs information about atmospheric properties in time and space. Atmospheric properties must therefore be supplied to MOHID Water in supported formats, which can be derived from meteorological data in HDF5 format. Because the results of meteorological models are distributed in different formats, conversion is required. &lt;br /&gt;
&lt;br /&gt;
The formats currently convertible to HDF5 by ConvertToHDF5 include MM5 and ERA40. These are succinctly described in the next sections.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''ERA40''=====&lt;br /&gt;
This format refers to the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year re-analysis results, accessed at http://data.ecmwf.int/data/d/era40_daily/. Data is available for several meteorological variables, with a periodicity of at most 6 hours, for the period from 1957-09-01 to 2002-08-31. &lt;br /&gt;
&lt;br /&gt;
ERA40 data files are supplied by ECMWF in NetCDF format, with a user-customized time window, periodicity (time steps ranging from 6 hours to a day) and set of meteorological properties. The ERA40 meteorological properties recognized by MOHID are presented below together with the corresponding MOHID names: &lt;br /&gt;
&lt;br /&gt;
 ---ERA40 NAME---         ---MOHID NAME---&lt;br /&gt;
   sshf                     sensible heat                &lt;br /&gt;
   slhf                     latent heat                  &lt;br /&gt;
   msl                      atmospheric pressure &lt;br /&gt;
   tcc                      cloud cover &lt;br /&gt;
   p10u                     wind velocity X&lt;br /&gt;
   p10v                     wind velocity Y&lt;br /&gt;
   p2t                      air temperature&lt;br /&gt;
   ewss                     wind stress X&lt;br /&gt;
   nsss                     wind stress Y&lt;br /&gt;
&lt;br /&gt;
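The same correspondence, written as a lookup table for quick reference (a sketch; the converter hard-codes this mapping internally, and this snippet is not part of the tool):&lt;br /&gt;

```python
# ERA40 variable name -> MOHID property name, as listed in the table above.
ERA40_TO_MOHID = {
    "sshf": "sensible heat",
    "slhf": "latent heat",
    "msl": "atmospheric pressure",
    "tcc": "cloud cover",
    "p10u": "wind velocity X",
    "p10v": "wind velocity Y",
    "p2t": "air temperature",
    "ewss": "wind stress X",
    "nsss": "wind stress Y",
}

print(ERA40_TO_MOHID["msl"])  # -> atmospheric pressure
```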
The standard ConvertToHDF5 action converts to HDF5 the data of every MOHID Water-recognized property available in the ERA40 file, producing an individual HDF5 file for each property. The name of each generated HDF5 file includes the ERA40 identifier of the meteorological property it contains.&lt;br /&gt;
&lt;br /&gt;
Alternatively, ConvertToHDF5 can copy the header information for each meteorological variable in the original ERA40 file to a single ASCII file.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file with data suitable for being used for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
ERA40 NetCDF file.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
One HDF5 file for each meteorological property contained in the original NetCDF file.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT ERA40 FORMAT|CONVERT ERA40 FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''MM5''=====&lt;br /&gt;
This format relates to the output files of the Fifth-Generation NCAR / Penn State Mesoscale Model (MM5). Almost every atmospheric property needed by MOHID Water is present in MM5 output files, making it possible to run forecast simulations with MOHID Water when access to MM5 forecast files is available.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts MM5 results files from the original format to HDF5 format, allowing the easy use of these results in the MOHID framework. Conversion is only performed for the MM5 properties and the time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
Besides the conversion, the application can calculate some properties not contained in&lt;br /&gt;
the MM5 files using the available information: wind stress, relative humidity and precipitation.&lt;br /&gt;
&lt;br /&gt;
To complete the conversion, the horizontal grid information of the MM5 results is required; it is available in special TERRAIN files.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Produce HDF5 meteorological data usable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
MM5 results file to convert and MM5 TERRAIN file. The TERRAIN file supplies the MM5 results grid information. &lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
An HDF5 file with MM5 results and a grid data file in MOHID format with the MM5 grid information.&lt;br /&gt;
This last file can be used to interpolate the MM5 data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]), for instance to produce an HDF5 file suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Compilation:'''&lt;br /&gt;
&lt;br /&gt;
Caution! The ConvertToHDF5 executable must be compiled with the [[Big-endian little-endian|Big-Endian]] option set (see compatibility in the project's settings).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT MM5 FORMAT|CONVERT MM5 FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''Aladin''=====&lt;br /&gt;
This format relates to Aladin meteorological model results. Some of the atmospheric properties needed by MOHID Water are present in Aladin output files, making it possible to run forecast simulations with MOHID Water whenever Aladin forecast files are available.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts Aladin results files from the original format to HDF5 format, allowing the easy use of these results in the MOHID framework. Conversion is only performed for the Aladin properties specified by the user.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Produce HDF5 meteorological data usable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Aladin NetCDF results file to convert.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
An HDF5 file with Aladin results and a grid data file in MOHID format with the Aladin grid pseudo-information (a placeholder orography with a constant 100 m depth is created).&lt;br /&gt;
This last file can be used to interpolate the Aladin data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]), for instance to produce an HDF5 file suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Compilation:'''&lt;br /&gt;
&lt;br /&gt;
Caution! The ConvertToHDF5 executable must be compiled with the [[Big-endian little-endian|Big-Endian]] option set (see compatibility in the project's settings).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT ALADIN FORMAT|CONVERT ALADIN FORMAT]].&lt;br /&gt;
&lt;br /&gt;
====Ocean model data====&lt;br /&gt;
Ocean model data, available in diverse formats, can be used by MOHID Water to specify boundary conditions (open ocean boundary and surface), initial conditions, or for validation. These uses require the model data to be in HDF5 format, so conversion is needed.&lt;br /&gt;
&lt;br /&gt;
Currently, MERCATOR is the only large-scale ocean model format convertible into HDF5 by ConvertToHDF5.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''MERCATOR''=====&lt;br /&gt;
MERCATOR data files are supplied in NetCDF format, with a user-customized spatial window and periodicity. Water level and water property (temperature and salinity) data are available in type T files, the u velocity component in type U files and the v velocity component in type V files. The data type of a specific MERCATOR file is generally indicated in the file name.&lt;br /&gt;
&lt;br /&gt;
The standard ConvertToHDF5 action is to convert to HDF5 the data referring to temperature, salinity, water level, component u of velocity and component v of velocity.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain HDF5 MERCATOR data usable for forcing or validation of MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
NetCDF MERCATOR results data files and NetCDF MERCATOR grid data files. One grid data file of each type (T, U and V) should be provided. These are generally supplied by the MERCATOR services together with the results files.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
One HDF5 file containing all properties from the recognized set (temperature, salinity, water level, velocity u and velocity v), plus the corresponding grid data and geometry files, containing respectively the horizontal grid and the vertical discretization of the HDF5 file. The grid data and geometry files can be used afterwards to interpolate the MERCATOR data to another grid and geometry (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT MERCATOR FORMAT|CONVERT MERCATOR FORMAT]].&lt;br /&gt;
&lt;br /&gt;
====Climatological data====&lt;br /&gt;
Climatological data can be used in MOHID Water to specify boundary conditions (open ocean boundary and surface), initial conditions, or for validation, when more realistic data (measurements or model results) are unavailable. These data are generally supplied by producers in formats not readily usable by MOHID Water, which justifies the existence of a conversion tool.&lt;br /&gt;
&lt;br /&gt;
Two climatological data format conversions are implemented in ConvertToHDF5: Levitus ocean data and Hellerman Rosenstein meteorological data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''Levitus''=====&lt;br /&gt;
The Levitus climatology provides results for water temperature and salinity.&lt;br /&gt;
The ConvertToHDF5 action converts the climatological data for the properties and spatial window requested by the user. &lt;br /&gt;
Typically, 3 steps are required to complete the task:&lt;br /&gt;
&lt;br /&gt;
- convert the Levitus format;&lt;br /&gt;
&lt;br /&gt;
- extrapolate the data to the whole Levitus domain (required to avoid non-coincident coastlines);&lt;br /&gt;
&lt;br /&gt;
- interpolate onto the model grid (bathymetry).&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain climatological data in HDF5 format to use as boundary forcing and/or initial condition specification in MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Levitus climatological data files, one per property and per time period (e.g. a month).&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with Levitus climatological data, grid data file with the horizontal&lt;br /&gt;
grid of the data and a geometry file with vertical discretization of the data (MOHID formats).&lt;br /&gt;
The grid data and the geometry files can be used to interpolate the climatological data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT LEVITUS FORMAT|CONVERT LEVITUS FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''Hellerman Rosenstein''=====&lt;br /&gt;
This is a meteorological climatology providing wind stress, with one file per wind stress component. Since the data refer to surface values, the result is a 2D field.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts the climatological data for the properties and spatial window provided by the user.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain climatological data in HDF5 format to use as meteorological forcing in MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Hellerman Rosenstein climatological data ASCII files, one per wind stress component.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with Hellerman Rosenstein climatological data and grid data file with the horizontal&lt;br /&gt;
grid of the climatological data. This grid data file can be used to interpolate the climatological data from the original horizontal grid to a new grid (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT HELLERMAN ROSENSTEIN ASCII|CONVERT HELLERMAN ROSENSTEIN ASCII]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''World Ocean Atlas 2005''=====&lt;br /&gt;
The World Ocean Atlas (WOA) 2005 climatology provides results for water temperature, salinity and several water quality and biology properties.&lt;br /&gt;
&lt;br /&gt;
Description, Action and Input Files are described in a separate page: [[ConvertToHDF5 WOA2005]].&lt;br /&gt;
&lt;br /&gt;
==Input file (ConvertToHDF5Action.dat)==&lt;br /&gt;
===General structure===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt; (block containing instructions for running a specific action) &lt;br /&gt;
 ACTION                    : ... (intended action)&lt;br /&gt;
 ... (action specific instructions)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : ...&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===GLUES HDF5 FILES===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : GLUES HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 3D_FILE                   : 0/1 (0 = 2D file, 1 = 3D file)&lt;br /&gt;
 &lt;br /&gt;
 TIME_GROUP                : ... (Default=&amp;quot;Time&amp;quot;. Other option: &amp;quot;SurfaceTime&amp;quot;.)&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP                : ... (Default=&amp;quot;Results&amp;quot;. Other options: &amp;quot;Residual&amp;quot;, &amp;quot;SurfaceResults&amp;quot;.)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 &lt;br /&gt;
 (block of HDF5 data files)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_list&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of HDF5 file with data to be included in glue, one per line, at least two files)&lt;br /&gt;
 ...                      &lt;br /&gt;
 &amp;lt;&amp;lt;end_list&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===INTERPOLATE GRIDS===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION     : ... (type of horizontal interpolation: 1 = Bilinear, 2 = Spline2D,&lt;br /&gt;
                                  3 = Triangulation)&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION_WINDOW      : ... ... ... ... (2D spatial window to consider for interpolation: &lt;br /&gt;
                                              Xmin Ymin Xmax Ymax; default = all domain)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D           : 0/1 (0 = 2D interpolation, 1 = 3D interpolation)&lt;br /&gt;
 &lt;br /&gt;
 FATHER_FILENAME           : ... (path/name of input HDF5 file with data to be interpolated)&lt;br /&gt;
 FATHER_GRID_FILENAME      : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization of input HDF5 file)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of output HDF5 file to be created)&lt;br /&gt;
 NEW_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for output HDF5 file)&lt;br /&gt;
 &lt;br /&gt;
 EXTRAPOLATE_2D            : 0/1/2/3/4/5 (2D extrapolation: 0=no extrapolation, 1=medium&lt;br /&gt;
                                      triangulation, 2=high triangulation, &lt;br /&gt;
                                      3=nearest neighbour, 4=nearest cell, &lt;br /&gt;
                                      5=constant value)&lt;br /&gt;
 &lt;br /&gt;
 EXTRAPOLATE_VALUE         : ... (value to extrapolate to when EXTRAPOLATE_2D is&lt;br /&gt;
                                  set to constant value (5))&lt;br /&gt;
 &lt;br /&gt;
 DO_NOT_BELIEVE_MAP        : 0/1 (0=consider input HDF5 file map, 1=do not consider input HDF5&lt;br /&gt;
                                  file map)&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP                : ... (name of base group of HDF5 variables containing data to be &lt;br /&gt;
                                  interpolated; default is &amp;quot;/Results&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (if INTERPOLATION3D : 1 also required:)&lt;br /&gt;
 FATHER_GEOMETRY           : ... (path/name of file (MOHID format) with vertical discretization&lt;br /&gt;
                                  of input HDF5 file)&lt;br /&gt;
 NEW_GEOMETRY              : ... (path/name of file (MOHID format) with vertical discretization&lt;br /&gt;
                                  intended for output HDF5 file)&lt;br /&gt;
 POLI_DEGREE               : 1/... (degree of vertical interpolation: 1=linear, ...)&lt;br /&gt;
 &lt;br /&gt;
 AUX_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                   discretization intended for auxiliary output HDF5 file;&lt;br /&gt;
                                  default is file provided in NEW_GRID_FILENAME)&lt;br /&gt;
 &lt;br /&gt;
 AUX_OUTPUTFILENAME        : ... (path/name of auxiliary output HDF5 file to contain result&lt;br /&gt;
                                  of horizontal grid interpolation)   &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the file indicated in AUX_GRID_FILENAME can differ from the one indicated in&lt;br /&gt;
   NEW_GRID_FILENAME in terms of bathymetry, while the horizontal grid should, commonly, be the&lt;br /&gt;
   same: the altered bathymetry can be used to extend the water column in the original data so&lt;br /&gt;
   that vertical interpolation can be performed easily;&lt;br /&gt;
 &lt;br /&gt;
 - in case of INTERPOLATION3D : 1, ConvertToHDF5 can generate new versions of the bathymetries&lt;br /&gt;
   that are consistent with the geometry definition (extension is '.new'); up to three&lt;br /&gt;
   bathymetry changes are possible, referring to the father, new and aux grids (the same&lt;br /&gt;
   bathymetry is not altered twice); although the new and aux grids are initially the same,&lt;br /&gt;
   they can end up different because of bathymetry changes;&lt;br /&gt;
 &lt;br /&gt;
 - in case the new geometry is 2D and father geometry is 3D then POLI_DEGREE : 1 &lt;br /&gt;
   (linear interpolation) should be used;&lt;br /&gt;
 &lt;br /&gt;
 - EXTRAPOLATE_2D : 1/2/3/4/5 should be considered if the coastlines in the father and new&lt;br /&gt;
   grids are expected not to coincide, to avoid a lack of data in the interpolation&lt;br /&gt;
   process; extrapolation is performed for all cells, including land cells;&lt;br /&gt;
 &lt;br /&gt;
 - in case of DO_NOT_BELIEVE_MAP : 1 the application generates a map based on the cells where&lt;br /&gt;
   interpolation results are available; consequently, if EXTRAPOLATE_2D : 1/2/3/4/5 is used,&lt;br /&gt;
   the file in AUX_GRID_FILENAME should not have land cells, so that the new map is consistent&lt;br /&gt;
   with the result of the extrapolation and errors are avoided, especially if INTERPOLATION3D :&lt;br /&gt;
   1 is used.&lt;br /&gt;
&lt;br /&gt;
===PATCH HDF5 FILES===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : PATCH HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION     : ... (type of interpolation: 3 = Triangulation, default and only&lt;br /&gt;
                                  one implemented)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 &lt;br /&gt;
 (one block for each father HDF5 file; at least two are required)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                     : ... (integer priority level: 1 = highest, increase for lower&lt;br /&gt;
                                  priority)&lt;br /&gt;
 FATHER_FILENAME           : ... (path/name of input HDF5 file with data to be interpolated)&lt;br /&gt;
 FATHER_GRID_FILENAME      : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization of input HDF5 file)&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of output HDF5 file to be created)&lt;br /&gt;
 NEW_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for output HDF5 file)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===CONVERT ERA40 FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ERA40 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : ... (path/name of ERA40 NetCDF file)&lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
                                 (root of name for all files produced)&lt;br /&gt;
 &lt;br /&gt;
 CONVERT_TO_ASCII          : 0/1 (1 = convert variable heading info to an ASCII file; 0 = default)&lt;br /&gt;
 CONVERT_TO_HDF5           : 0/1 (1 = convert to HDF5 file; 0 = default)&lt;br /&gt;
 GRIDTO180                 : 0/1 (1 = convert grid from [0 360] to [-180 180], 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 XX_VARIABLE               : ... (name of longitude variable in the input file: usual name &lt;br /&gt;
                                  is &amp;quot;longitude&amp;quot;)&lt;br /&gt;
 YY_VARIABLE               : ... (name of latitude variable in the input file: usual name &lt;br /&gt;
                                  is &amp;quot;latitude&amp;quot;)&lt;br /&gt;
 TIME_VARIABLE             : ... (name of time variable in the input file: usual name is&lt;br /&gt;
                                  &amp;quot;time&amp;quot;)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - either CONVERT_TO_ASCII : 1 or CONVERT_TO_HDF5 : 1 must be chosen for any action to be&lt;br /&gt;
 performed by ConvertToHDF5;&lt;br /&gt;
 &lt;br /&gt;
 - when CONVERT_TO_HDF5 : 1 an HDF5 file is produced for every variable contained in the&lt;br /&gt;
 original ERA40 file; the name of each file is composed of the name indicated in OUTPUTFILENAME&lt;br /&gt;
 concatenated with the ERA40 variable identifier;&lt;br /&gt;
 &lt;br /&gt;
 - the XX_VARIABLE, YY_VARIABLE and TIME_VARIABLE keywords should generally be set to&lt;br /&gt;
 &amp;quot;longitude&amp;quot;, &amp;quot;latitude&amp;quot; and &amp;quot;time&amp;quot;, respectively; the option to&lt;br /&gt;
 specify them as keywords was made only to make the application robust to future variable name&lt;br /&gt;
 changes.&lt;br /&gt;
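 &lt;br /&gt;
 ''Example:''&lt;br /&gt;
 &lt;br /&gt;
 A minimal sketch of this action, using only the keywords documented above (the file names are&lt;br /&gt;
 illustrative, not prescribed):&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ERA40 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : era40_data.nc&lt;br /&gt;
 OUTPUTFILENAME            : era40.hdf5&lt;br /&gt;
 &lt;br /&gt;
 CONVERT_TO_HDF5           : 1&lt;br /&gt;
 GRIDTO180                 : 1&lt;br /&gt;
 &lt;br /&gt;
 XX_VARIABLE               : longitude&lt;br /&gt;
 YY_VARIABLE               : latitude&lt;br /&gt;
 TIME_VARIABLE             : time&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;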
&lt;br /&gt;
===CONVERT MM5 FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MM5 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : ... (path/name of MM5 file)&lt;br /&gt;
 TERRAIN_FILENAME          : ... (path/name of MM5 TERRAIN file)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data file with horizontal grid of MM5 data&lt;br /&gt;
                                  to be created)&lt;br /&gt;
 &lt;br /&gt;
 COMPUTE_WINDSTRESS        : 0/1 (1 = compute and write wind stress field; 0 = default)&lt;br /&gt;
 COMPUTE_RELATIVE_HUMIDITY : 0/1 (1 = compute and write relative humidity field; 0 = default)&lt;br /&gt;
 COMPUTE_PRECIPITATION     : 0/1 (1 = compute and write precipitation field; 0 = default)&lt;br /&gt;
 COMPUTE_WINDMODULUS       : 0/1 (1 = compute wind modulus; 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 WRITE_XYZ                 : 0/1 (1 = write xyz center grid cells; 0 = default)&lt;br /&gt;
 WRITE_TERRAIN             : 0/1 (1 = write MM5 TERRAIN fields; 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
  &lt;br /&gt;
 (block of MM5 properties to convert)&lt;br /&gt;
 &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;&lt;br /&gt;
 ... (name of MM5 property to convert to HDF5 format, one per line)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the name of each MM5 property to convert in the &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;...&amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt; block must&lt;br /&gt;
 conform to the MOHID designation specified in the code of ModuleGlobalData; the correspondence&lt;br /&gt;
 is the following (see [[Module_InterfaceWaterAir]] for a more detailed explanation):&lt;br /&gt;
 &lt;br /&gt;
           ---MM5 NAME---    ---MOHID NAME---&lt;br /&gt;
             T2             air temperature&lt;br /&gt;
             PSTARCRS       atmospheric pressure&lt;br /&gt;
             U10            wind velocity X&lt;br /&gt;
             V10            wind velocity Y&lt;br /&gt;
             UST            wind shear velocity&lt;br /&gt;
             LHFLUX         latent heat&lt;br /&gt;
             SHFLUX         sensible heat&lt;br /&gt;
             SWDOWN         solar radiation&lt;br /&gt;
             LWDOWN         infrared radiation&lt;br /&gt;
             SWOUT          top outgoing shortwave radiation&lt;br /&gt;
             LWOUT          top outgoing longwave radiation&lt;br /&gt;
             SOIL T 1       soil temperature layer 1&lt;br /&gt;
             SOIL T 2       soil temperature layer 2&lt;br /&gt;
             SOIL T 3       soil temperature layer 3&lt;br /&gt;
             SOIL T 4       soil temperature layer 4&lt;br /&gt;
             SOIL T 5       soil temperature layer 5&lt;br /&gt;
             SOIL T 6       soil temperature layer 6&lt;br /&gt;
             Q2             2-meter mixing ratio&lt;br /&gt;
             TSEASFC        sea water temperature&lt;br /&gt;
             PBL HGT        PBL height&lt;br /&gt;
             PBL REGIME     PBL regime&lt;br /&gt;
             RAIN CON       accumulated convective precipitation        (cm)&lt;br /&gt;
             RAIN NON       accumulated non-convective precipitation    (cm)&lt;br /&gt;
             GROUND T       ground temperature&lt;br /&gt;
             RES TEMP       infinite reservoir slab temperature&lt;br /&gt;
             U              wind velocity X_3D&lt;br /&gt;
             V              wind velocity Y_3D&lt;br /&gt;
             W              wind velocity Z_3D&lt;br /&gt;
             T              air temperature_3D&lt;br /&gt;
             PP             atmospheric pressure_3D&lt;br /&gt;
             Q              mixing ratio_3D&lt;br /&gt;
             CLW            cloud water mixing ratio_3D&lt;br /&gt;
             RNW            rain water mixing ratio_3D&lt;br /&gt;
             ICE            cloud ice mixing ratio_3D&lt;br /&gt;
             SNOW           snow mixing ratio_3D&lt;br /&gt;
             RAD TEND       atmospheric radiation tendency_3D&lt;br /&gt;
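 &lt;br /&gt;
 ''Example:''&lt;br /&gt;
 &lt;br /&gt;
 A minimal sketch of this action, converting three MM5 surface fields over a one-day window&lt;br /&gt;
 (the file names and dates are illustrative, not prescribed):&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MM5 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : MM5OUT_D1&lt;br /&gt;
 TERRAIN_FILENAME          : TERRAIN_D1&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : mm5_meteo.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : mm5_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 COMPUTE_WINDSTRESS        : 1&lt;br /&gt;
 COMPUTE_RELATIVE_HUMIDITY : 1&lt;br /&gt;
 &lt;br /&gt;
 START                     : 2005 2 28 12 0 0&lt;br /&gt;
 END                       : 2005 3 1 12 0 0&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;&lt;br /&gt;
 T2&lt;br /&gt;
 U10&lt;br /&gt;
 V10&lt;br /&gt;
 &amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;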
&lt;br /&gt;
===CONVERT ALADIN FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ALADIN FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : aladin.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : aladin_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 ! Provide the name of any NetCDF file here; it is used only to generate the grid data.&lt;br /&gt;
 INPUT_GRID_FILENAME      :   D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 (path to aladin netcdf file)\ALADIN_BULKIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the name of each Aladin property to convert in the &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;...&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt; block must conform to one of the following variables:&lt;br /&gt;
 &lt;br /&gt;
           ---ALADIN NAME---    ---MOHID NAME---&lt;br /&gt;
             soclotot            CloudCover_&lt;br /&gt;
             sohumrel            RelativeHumidity_&lt;br /&gt;
             sofluxir            NonSolarFlux_&lt;br /&gt;
             sosspres            AtmosphericPressure_&lt;br /&gt;
             sosolarf            SolarRadiation_&lt;br /&gt;
             sotemair            AirTemperature_&lt;br /&gt;
             sowinmod            WindModulus_&lt;br /&gt;
             sowaprec            Precipitation_&lt;br /&gt;
             sozotaux            WindStressX_&lt;br /&gt;
             sometauy            WindStressY_&lt;br /&gt;
             sowindu10           WindVelocityX_&lt;br /&gt;
             sowindv10           WindVelocityY_&lt;br /&gt;
&lt;br /&gt;
===CONVERT MERCATOR FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 READ_OPTION               : 1/2/3/4 (version of MERCATOR files)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : ... (path/name of geometry file with vertical discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 &lt;br /&gt;
 INPUT_GRID_FILENAME       : ... (path/name of file with horizontal discretization of water&lt;br /&gt;
                                  properties and water level data)&lt;br /&gt;
 INPUT_GRID_FILENAME_U     : ... (path/name of file with horizontal discretization of velocity&lt;br /&gt;
                                  component U data)&lt;br /&gt;
 INPUT_GRID_FILENAME_V     : ... (path/name of file with horizontal discretization of velocity&lt;br /&gt;
                                  component V data)&lt;br /&gt;
  &lt;br /&gt;
 (block of MERCATOR data files)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of MERCATOR NetCDF data file, one per line, can be several)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
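 &lt;br /&gt;
 ''Example:''&lt;br /&gt;
 &lt;br /&gt;
 A minimal sketch of this action, converting one set of T, U and V files (the file names are&lt;br /&gt;
 illustrative, not prescribed):&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 READ_OPTION               : 1&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : mercator.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : mercator_griddata.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : mercator_geometry.dat&lt;br /&gt;
 &lt;br /&gt;
 INPUT_GRID_FILENAME       : mesh_T.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_U     : mesh_U.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_V     : mesh_V.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 data_T.nc&lt;br /&gt;
 data_U.nc&lt;br /&gt;
 data_V.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;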
&lt;br /&gt;
===CONVERT LEVITUS FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT LEVITUS FORMAT&lt;br /&gt;
  &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : ... (path/name of geometry file with vertical discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY               : ... (periodicity of Levitus data: &amp;quot;monthly&amp;quot;/&amp;quot;annual&amp;quot;; default is&lt;br /&gt;
                                  &amp;quot;monthly&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 SPATIAL_RESOLUTION        : ... (spatial resolution (degrees) of horizontal Levitus grid)&lt;br /&gt;
 &lt;br /&gt;
 FILL_VALUE                : ... (real value identifier for missing data; default is &lt;br /&gt;
                                  &amp;quot;-99.999900&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (definition of spatial window to be present in output HDF5 file)&lt;br /&gt;
 LOWER_LEFT_CORNER         : ... ... (longitude and latitude (degrees) of south west corner)&lt;br /&gt;
 UPPER_RIGHT_CORNER        : ... ... (longitude and latitude (degrees) of north east corner)&lt;br /&gt;
 &lt;br /&gt;
 (block for each water property to be present in output HDF5 file, can be several)&lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : ... (name of property)&lt;br /&gt;
 ANNUAL_FILE               : ... (path/name of Levitus annual file)&lt;br /&gt;
 &lt;br /&gt;
 (block of Levitus data files)&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of Levitus data file (e.g. a monthly data file), one per line, can be several)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
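 &lt;br /&gt;
 ''Example:''&lt;br /&gt;
 &lt;br /&gt;
 A minimal sketch of the typical Levitus workflow chained in one input file: a conversion block&lt;br /&gt;
 followed by an INTERPOLATE GRIDS block that extrapolates (EXTRAPOLATE_2D) and interpolates the&lt;br /&gt;
 converted data onto the model grid. File names, the spatial window and the resolution are&lt;br /&gt;
 illustrative, not prescribed:&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT LEVITUS FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : levitus.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : levitus_griddata.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : levitus_geometry.dat&lt;br /&gt;
 &lt;br /&gt;
 SPATIAL_RESOLUTION        : 1.&lt;br /&gt;
 LOWER_LEFT_CORNER         : -12. 34.&lt;br /&gt;
 UPPER_RIGHT_CORNER        : 0. 46.&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : temperature&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 temperature_january.dat&lt;br /&gt;
 temperature_february.dat&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION     : 3&lt;br /&gt;
 INTERPOLATION3D           : 1&lt;br /&gt;
 EXTRAPOLATE_2D            : 3&lt;br /&gt;
 &lt;br /&gt;
 FATHER_FILENAME           : levitus.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME      : levitus_griddata.dat&lt;br /&gt;
 FATHER_GEOMETRY           : levitus_geometry.dat&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : levitus_model_grid.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME         : model_bathymetry.dat&lt;br /&gt;
 NEW_GEOMETRY              : model_geometry.dat&lt;br /&gt;
 POLI_DEGREE               : 1&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;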
&lt;br /&gt;
===CONVERT HELLERMAN ROSENSTEIN ASCII===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
  &lt;br /&gt;
 PERIODICITY               : ... (periodicity of Hellerman Rosenstein data: &amp;quot;monthly&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 SPATIAL_RESOLUTION        : ... (spatial resolution (degrees) of horizontal Hellerman&lt;br /&gt;
                                  Rosenstein grid: default and only allowed value is &amp;quot;2.&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 FILL_VALUE                : ... (real value identifier for missing data; default is &lt;br /&gt;
                                  &amp;quot;-99.999900&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (definition of spatial window to be present in output HDF5 file)&lt;br /&gt;
 LOWER_LEFT_CORNER         : ... ... (longitude and latitude (degrees) of south west corner)&lt;br /&gt;
 UPPER_RIGHT_CORNER        : ... ... (longitude and latitude (degrees) of north east corner)&lt;br /&gt;
   &lt;br /&gt;
 (block for each Hellerman Rosenstein data file)&lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : ... (name of property: &amp;quot;wind stress X&amp;quot;/&amp;quot;wind stress Y&amp;quot;)&lt;br /&gt;
 FILE                      : ... (path/name Hellerman Rosenstein file)&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
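 &lt;br /&gt;
 ''Example:''&lt;br /&gt;
 &lt;br /&gt;
 A minimal sketch of this action, converting both wind stress components (the file names and the&lt;br /&gt;
 spatial window are illustrative, not prescribed):&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : hellerman.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : hellerman_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY               : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION        : 2.&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER         : -12. 34.&lt;br /&gt;
 UPPER_RIGHT_CORNER        : 0. 46.&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : wind stress X&lt;br /&gt;
 FILE                      : taux.ascii&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : wind stress Y&lt;br /&gt;
 FILE                      : tauy.ascii&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;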
&lt;br /&gt;
==Samples==&lt;br /&gt;
All sample files are named ''ConvertToHDF5Action.dat''.&lt;br /&gt;
&lt;br /&gt;
===Glue several MOHID(.hdf5) files===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : GLUES HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : SurfaceHydro_OP.hdf5&lt;br /&gt;
  &lt;br /&gt;
 &amp;lt;&amp;lt;begin_list&amp;gt;&amp;gt;&lt;br /&gt;
 D:\Projectos\SurfaceHydrodynamic_21.hdf5&lt;br /&gt;
 D:\Projectos\SurfaceHydrodynamic_22.hdf5&lt;br /&gt;
 &amp;lt;&amp;lt;end_list&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interpolate 2D MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 FATHER_FILENAME          : D:\Projectos\MohidRun\test\res\Lagrangian_1.hdf5 &lt;br /&gt;
 OUTPUTFILENAME           : OilSpillThickness_GridRegular.hdf5&lt;br /&gt;
  &lt;br /&gt;
 START                    : 2006 6 21 17 22 30&lt;br /&gt;
 END                      : 2006 6 22 17 22 0&lt;br /&gt;
  &lt;br /&gt;
 FATHER_GRID_FILENAME     : D:\Projectos\MohidRun\GeneralData\batim\Tagus.dat_A&lt;br /&gt;
 NEW_GRID_FILENAME        : TagusConstSpacing.dat&lt;br /&gt;
  &lt;br /&gt;
 BASE_GROUP               : /Results/Oil/Data_2D&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interpolate 3D MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION   : 1&lt;br /&gt;
 FATHER_FILENAME         : D:\Projectos\MohidRun\test\res\Lagrangian_1.hdf5 &lt;br /&gt;
 OUTPUTFILENAME          : OilSpillThickness_GridRegular.hdf5&lt;br /&gt;
 &lt;br /&gt;
 START                   : 2006 6 21 17 22 30&lt;br /&gt;
 END                     : 2006 6 22 17 22 0&lt;br /&gt;
 &lt;br /&gt;
 FATHER_GRID_FILENAME    : D:\Projectos\MohidRun\GeneralData\batim\Tagus.dat_A&lt;br /&gt;
 NEW_GRID_FILENAME       : TagusConstSpacing.dat&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP              : /Results/Oil/Data_2D&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D         : 1&lt;br /&gt;
 FATHER_GEOMETRY         : D:\Projectos\MohidRun\test\data\Geometry_1.dat&lt;br /&gt;
 NEW_GEOMETRY            : TagusGeometry.dat&lt;br /&gt;
 AUX_GRID_FILENAME       : TagusConstSpacing.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME      : Aux_GridRegular.hdf5&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Patch several MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : PATCH HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION   : 3&lt;br /&gt;
 &lt;br /&gt;
 START                   : 2005 2 28 13 0 0&lt;br /&gt;
 END                     : 2005 3 1 13 0 0&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 3&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D1.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid1.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 2&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D2.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid2.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 1&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D3.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid3.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME          : MM5Forcing.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME       : K:\Simula\GeneralData\Batim\CostaPortuguesa.dat&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert an ERA40 file to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                  : CONVERT ERA40 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                : D:\Aplica\ERA40\1971ERA1973.nc&lt;br /&gt;
 OUTPUTFILENAME          : D:\Aplica\ERA40\1971ERA1973T2&lt;br /&gt;
 &lt;br /&gt;
 CONVERT_TO_ASCII        : 0&lt;br /&gt;
 CONVERT_TO_HDF5         : 1&lt;br /&gt;
 &lt;br /&gt;
 XX_VARIABLE             : longitude&lt;br /&gt;
 YY_VARIABLE             : latitude&lt;br /&gt;
 TIME_VARIABLE           : time&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert a MM5 file to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                  : CONVERT MM5 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\MMOUT_D3&lt;br /&gt;
 TERRAIN_FILENAME        : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\TERRAIN_D3&lt;br /&gt;
 OUTPUT_GRID_FILENAME    : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\grid3.dat&lt;br /&gt;
 OUTPUTFILENAME          : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\MM5_D3.hdf5&lt;br /&gt;
 &lt;br /&gt;
 COMPUTE_WINDSTRESS        : 0&lt;br /&gt;
 COMPUTE_RELATIVE_HUMIDITY : 1&lt;br /&gt;
 WRITE_XYZ                 : 0&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;&lt;br /&gt;
 solar radiation&lt;br /&gt;
 air temperature&lt;br /&gt;
 wind velocity X&lt;br /&gt;
 wind velocity Y&lt;br /&gt;
 sensible heat&lt;br /&gt;
 latent heat&lt;br /&gt;
 atmospheric pressure&lt;br /&gt;
 sea water temperature&lt;br /&gt;
 &amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert Mercator-Ocean(.nc) to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : Psy2v2r1v_R20060628/MercatorR20060628.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : Psy2v2r1v_R20060628/MercatorGridR20060628.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME : Psy2v2r1v_R20060628/MercatorGeometryR20060628.dat&lt;br /&gt;
 &lt;br /&gt;
 INPUT_GRID_FILENAME      : GridFiles/ist_meteog-gridT.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_U    : GridFiles/ist_meteog-gridU.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_V    : GridFiles/ist_meteog-gridV.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060621_R20060628.nc&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060622_R20060628.nc&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060623_R20060628.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert Levitus format to MOHID(.hdf5) and interpolate grid===&lt;br /&gt;
==== Convert ====&lt;br /&gt;
First convert the Levitus ASCII format to a raw HDF5 format:&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT LEVITUS FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : Levitus.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME : LevitusGeometry.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY              : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION       : 0.25&lt;br /&gt;
 FILL_VALUE               : -99.9999&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER        : -16.0  31&lt;br /&gt;
 UPPER_RIGHT_CORNER       :   1.   40&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : salinity&lt;br /&gt;
 ANNUAL_FILE              : DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s000hr.obj&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s001&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s002&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s003&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s004&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s005&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s006&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s007&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s008&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s009&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s010&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s011&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s012&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : temperature&lt;br /&gt;
 ANNUAL_FILE              : DataCenter\DadosBase\Ocean\Levitus\Data\Temp\t000hr.obj&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t001&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t002&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t003&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t004&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t005&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t006&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t007&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t008&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t009&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t010&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t011&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t012&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Extrapolate ====&lt;br /&gt;
Then extrapolate the data (still in the raw HDF5 format):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 &lt;br /&gt;
 FATHER_FILENAME          : Levitus.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : LeviTusAllPointsWithData.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME        : LevitusGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 START                    : -9999 1  1 0 0 0&lt;br /&gt;
 END                      : -9999 12 1 0 0 0&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D          : 1&lt;br /&gt;
 FATHER_GEOMETRY          : LevitusGeometry.dat&lt;br /&gt;
 NEW_GEOMETRY             : LevitusGeometry.dat&lt;br /&gt;
 AUX_GRID_FILENAME        : LevitusGrid.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME       : AuxLeviTusAllPointsWithData.hdf5&lt;br /&gt;
 &lt;br /&gt;
 POLI_DEGREE              : 3&lt;br /&gt;
 DO_NOT_BELIEVE_MAP       : 1&lt;br /&gt;
 EXTRAPOLATE_2D           : 2&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interpolate ====&lt;br /&gt;
Finally, interpolate to the final grid and geometry (same as the [[#Interpolate 3D MOHID(.hdf5) files to a new grid| Interpolate 3D sample]]):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 FATHER_FILENAME          : LeviTusAllPointsWithData.hdf5&lt;br /&gt;
 OUTPUTFILENAME           : CadizMonthlyLevitus.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 NEW_GRID_FILENAME        : Algarve0.02SigmaSmooth_V3_CartMoreLayers.dat&lt;br /&gt;
 &lt;br /&gt;
 START                    : -9999 1  1 0 0 0&lt;br /&gt;
 END                      : -9999 12 1 0 0 0&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D          : 1&lt;br /&gt;
 FATHER_GEOMETRY          : LevitusGeometry.dat&lt;br /&gt;
 NEW_GEOMETRY             : Geometry_1.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME       : AuxCadizMonthlyLevitus.hdf5&lt;br /&gt;
 AUX_GRID_FILENAME        : Aux12km.dat&lt;br /&gt;
 &lt;br /&gt;
 POLI_DEGREE              : 3&lt;br /&gt;
 DO_NOT_BELIEVE_MAP       : 1&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the program may construct a new bathymetry twice. Use this bathymetry only in the AUX_GRID_FILENAME keyword.&lt;br /&gt;
&lt;br /&gt;
===Convert Hellerman Rosenstein ASCII format to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : ClimatologicWindStress.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : ClimatologicWindStressGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY              : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION       : 2.&lt;br /&gt;
 FILL_VALUE               : -99.9999&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER        : -180  -90&lt;br /&gt;
 UPPER_RIGHT_CORNER       : 180  90&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : wind stress X&lt;br /&gt;
 FILE                     : D:\Aplica\Dados\Hellerman_Rosenstein\TAUXX.DAT&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : wind stress Y&lt;br /&gt;
 FILE                     : D:\Aplica\Dados\Hellerman_Rosenstein\TAUYY.DAT&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert ALADIN(.nc) format to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ALADIN FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : aladin.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : aladin_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 !Put here the name of any NetCDF file, for grid data generation purposes.&lt;br /&gt;
 INPUT_GRID_FILENAME      :   D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKPRES_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKSOLAR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKTAIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKWIND_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_FLUXPRE_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_STRESSU_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_STRESSV_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_U10_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_V10_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKHUMI_OPASYMP_19723_20088.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== OceanColor modules compilation ==&lt;br /&gt;
Compiling the [[ConvertToHDF5]] tool with the OceanColor modules is more complicated than one might expect. A solution is proposed here for a release version using Compaq Visual Fortran 6.6c. The difficulties arise because C code is embedded behind a Fortran interface and, in addition, extra libraries such as HDF4 are required.&lt;br /&gt;
&lt;br /&gt;
=== Pre-requisites ===&lt;br /&gt;
&lt;br /&gt;
This is a list of prerequisites to successfully compile the tool:&lt;br /&gt;
*Compaq Visual Fortran 6.5 with patch 6.6c,&lt;br /&gt;
*VS .NET 2003 (Vc7 in particular),&lt;br /&gt;
*Hdf5 libraries ('''hdf5.lib''' '''hdf5_fortran.lib''' '''hdf5_hl.lib'''),&lt;br /&gt;
*Netcdf libraries ('''netcdf.lib''' '''netcdf_.lib'''),&lt;br /&gt;
*Hdf4 libraries ('''hd421.lib''', '''hm421.lib'''),&lt;br /&gt;
*szlib, zlib and jpeg libraries ('''szlib.lib''', '''zlib.lib''' and '''libjpeg.lib'''),&lt;br /&gt;
*the fortran source files ('''ModuleConvertModisL2.F90 ModuleConvertModisL3.F90 ModuleConvertOceanColorL2.F90'''),&lt;br /&gt;
*the C source files and their fortran interface files ('''readL2scan.c readL2Seadas.c''' and '''cdata.f crossp.f fgeonav.f''').&lt;br /&gt;
&lt;br /&gt;
=== CVF IDE configuration ===&lt;br /&gt;
# Configure everything as specified in [[Compiling with CVF]].&lt;br /&gt;
# Add the source files listed in the prerequisites above to the source files listing.&lt;br /&gt;
# Go to '''Tools--&amp;gt;Options...--&amp;gt;Directories'''. There, add the '''$DOTNET2K3/Vc7/bin''' to the '''Executable files'''; the '''$DOTNET2K3/Vc7/include''' and '''$DOTNET2K3/Vc7/PlatformSDK/include''' to the '''Include files'''; and finally, the '''$DOTNET2K3/Vc7/lib''', '''$DOTNET2K3/Vc7/PlatformSDK/lib''' and '''$DOTNET2K3/Vc7/PlatformSDK/bin''' to the '''Library files'''.&lt;br /&gt;
# Go to '''Projects--&amp;gt;Settings--&amp;gt;Release--&amp;gt;Link--&amp;gt;Input'''. There, add the following libraries: '''netcdf.lib netcdf_.lib hd421.lib hm421.lib libjpeg.lib'''. (Make sure the hdf5 libraries as well as the szlib and zlib libraries are already mentioned).&lt;br /&gt;
&lt;br /&gt;
=== Troubleshooting ===&lt;br /&gt;
'''Q: I get unresolved external references during linkage, but I have all the libraries mentioned above included. What should I do?'''&lt;br /&gt;
&lt;br /&gt;
A: Unresolved external references can occur for two reasons:&lt;br /&gt;
#you didn't specify all the required libraries or all the paths to the default libraries, or&lt;br /&gt;
#[http://en.wikipedia.org/wiki/Name_decoration name mangling] problems. Run the [[dumpbin]] utility on the libraries to check which naming convention they use. If that's the problem, you need to obtain new libraries built with the correct naming convention.&lt;br /&gt;
&lt;br /&gt;
That's it, you should now be able to build the [[ConvertToHdf5]] project successfully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Q: I got a message saying the entry point _NF_PUT_ATT_REAL@28 could not be located in netcdf.dll'''&lt;br /&gt;
&lt;br /&gt;
A: Copy the file netcdf.dll to the folder containing the executable.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://www.hdfgroup.org/ HDF5 Homepage]&lt;br /&gt;
*[http://www.hdfgroup.org/ HDF4 Homepage]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
*[[Module_Atmosphere]]&lt;br /&gt;
*[[Module_InterfaceWaterAir]]&lt;br /&gt;
*[[Coupling_Water-Atmosphere_User_Manual]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Tools]]&lt;br /&gt;
[[Category:Hdf5]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=ConvertToHDF5&amp;diff=1924</id>
		<title>ConvertToHDF5</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=ConvertToHDF5&amp;diff=1924"/>
				<updated>2009-05-12T12:30:45Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* CONVERT MM5 FORMAT */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''ConvertToHDF5''' is an application that performs several operations, called '''actions''', on HDF5 files: conversion of data in other formats (e.g. NetCDF) to HDF5, grid interpolation and concatenation of several files.&lt;br /&gt;
&lt;br /&gt;
Running options for this application are specified by the user in an input file named [[ConvertToHDF5#Input file (ConvertToHDF5Action.dat)|'''ConvertToHDF5Action.dat''']]. Several actions can be specified in the same input file; they are processed sequentially by the ConvertToHDF5 application.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
The operations on HDF5 files performed by ConvertToHDF5, each specified by an action, can be organized into [[#file management|file management]], [[#grid interpolation|grid interpolation]] and [[#format conversion|format conversion]]. These types and their respective actions are detailed in the next sections. &lt;br /&gt;
&lt;br /&gt;
The input file specification for each action can be found below in the [[#Input file (ConvertToHDF5Action.dat)|Input file (ConvertToHDF5Action.dat)]] section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===File management===&lt;br /&gt;
&lt;br /&gt;
====Glue files====&lt;br /&gt;
This action joins, or glues, into a single HDF5 file two or more HDF5 files that have the same HDF5 data groups and refer to consecutive time periods. Both 2D and 3D sets of HDF5 files can be glued.&lt;br /&gt;
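In essence, gluing is a concatenation along time. A minimal Python sketch of the rule, with each HDF5 file reduced to a plain dict of {time: field} entries standing in for the real HDF5 groups (this layout is an illustration only, not the actual MOHID file structure):&lt;br /&gt;

```python
# What "gluing" amounts to: concatenating files whose time periods
# come in sequence. Each HDF5 file is reduced here to a plain dict of
# {time: field} entries; the real MOHID group layout is richer.

def glue(files):
    out = {}
    for f in files:
        for t in sorted(f):
            if t in out:
                raise ValueError("overlapping time periods: %s" % t)
            out[t] = f[t]
    return out

run1 = {"2005-01-01": [[1.0]], "2005-01-02": [[2.0]]}
run2 = {"2005-01-03": [[3.0]], "2005-01-04": [[4.0]]}
glued = glue([run1, run2])
```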
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Glue MOHID Water results files from several runs produced by continuous running of the model, in order to save storage space. It can also be used to join data from other origins (e.g. results of meteorological models) as long as the HDF5 format is the one supported by MOHID Water.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 files to be glued. &amp;quot;Grid&amp;quot; and &amp;quot;Results&amp;quot; data groups should be equal in all these files.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with glued &amp;quot;Results&amp;quot; data. &amp;quot;Residual&amp;quot; and &amp;quot;Statistics&amp;quot; HDF5 data groups are not copied to the output file since they are time period specific (different values potentially occur in each file). General statistics can be calculated for the glued HDF5 file data using the [[HDF5Statistics]] tool.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#GLUES HDF5 FILES|GLUES HDF5 FILES]].&lt;br /&gt;
&lt;br /&gt;
===Grid interpolation===&lt;br /&gt;
&lt;br /&gt;
====Interpolate files====&lt;br /&gt;
This action converts the data of one HDF5 file from one 2D or 3D spatial grid to another 2D or 3D spatial grid, creating a new HDF5 file. The interpolation is performed only for the data located in a time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
The HDF5 file containing data to be interpolated is called the '''father file'''.&lt;br /&gt;
&lt;br /&gt;
In the case of 3D interpolation, the application first conducts the horizontal grid interpolation (keeping the father geometry) and only then conducts the vertical interpolation (from the father geometry to the new geometry).&lt;br /&gt;
&lt;br /&gt;
Several types of 2D interpolation are available: bilinear, 2D spline and triangulation.&lt;br /&gt;
For the vertical interpolation (used in 3D interpolation), several polynomial degrees can be supplied.&lt;br /&gt;
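As an illustration of the simplest 2D option, here is a minimal bilinear interpolation sketch in Python, assuming a regular, unit-spaced father grid (the actual ConvertToHDF5 implementation is in Fortran and more general):&lt;br /&gt;

```python
import math

# Bilinear interpolation: the value at (x, y) is a weighted average of
# the four surrounding father-grid nodes. A regular, unit-spaced grid
# is assumed here for brevity; this is an illustration only.

def bilinear(grid, x, y):
    i, j = int(math.floor(y)), int(math.floor(x))   # lower-left node
    fy, fx = y - i, x - j                           # fractional offsets
    return ((1 - fy) * (1 - fx) * grid[i][j]
            + (1 - fy) * fx * grid[i][j + 1]
            + fy * (1 - fx) * grid[i + 1][j]
            + fy * fx * grid[i + 1][j + 1])

field = [[0.0, 1.0],
         [2.0, 3.0]]
v = bilinear(field, 0.5, 0.5)   # cell centre: average of the 4 nodes
```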
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file with data for forcing or providing initial conditions for a MOHID Water model, e.g. a meteorological forcing file.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
For 2D/3D interpolation:&lt;br /&gt;
&lt;br /&gt;
- father HDF5 file;&lt;br /&gt;
&lt;br /&gt;
- father horizontal data grid, in a grid data file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- new horizontal data grid, in a grid data file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
For 3D interpolation also needed:&lt;br /&gt;
&lt;br /&gt;
- father vertical geometry, in a geometry file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- new vertical geometry, in a geometry file in the format supported by MOHID;&lt;br /&gt;
&lt;br /&gt;
- auxiliary horizontal data grid, in a grid data file in the format supported by MOHID; this file is used for horizontal grid interpolation in 3D interpolation operations.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with interpolated data. In the case of 3D interpolation, an auxiliary HDF5 file with the result of the horizontal grid interpolation is also produced; it can be inspected to check that this operation was performed correctly.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#INTERPOLATE GRIDS|INTERPOLATE GRIDS]].&lt;br /&gt;
&lt;br /&gt;
====Patch files====&lt;br /&gt;
This action performs an interpolation of HDF5 data between grids, as in the [[#Interpolate files|Interpolate files]] action, but considers more than one HDF5 file containing data to be interpolated to the new grid, together with a priority scale. The interpolation is performed only for the data located in the time window specified by the user. The present version of this action operates only on 2D data.&lt;br /&gt;
&lt;br /&gt;
Each HDF5 file containing data to be interpolated is called a '''father file''' and has a user-attributed '''priority level''' that is respected in the interpolation process: for each new grid cell the ConvertToHDF5 application will look for data first in the Level 1 father file and, only if that data is missing, in the Level 2 file, proceeding to higher-level files if no data is found.&lt;br /&gt;
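The priority rule can be sketched in a few lines of Python, with grids as nested lists and None marking cells a father does not cover (purely illustrative; not the MOHID code):&lt;br /&gt;

```python
# Sketch of the patch priority rule: for every cell of the new grid,
# take the value from the lowest-numbered (highest-priority) father
# that has data there. None marks cells a father does not cover.

def patch(fathers, shape):
    """fathers: list of (level, grid) pairs; lower level = higher priority."""
    ordered = [grid for level, grid in sorted(fathers)]
    rows, cols = shape
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            value = None
            for grid in ordered:          # walk priority levels in order
                if grid[i][j] is not None:
                    value = grid[i][j]
                    break                 # lower-priority fathers ignored
            row.append(value)
        out.append(row)
    return out

fine   = [[1.0, None], [None, None]]      # level 1: fine data, small span
coarse = [[9.0, 9.0], [9.0, 9.0]]         # level 2: covers the whole grid

patched = patch([(2, coarse), (1, fine)], (2, 2))
```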
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file with data from several HDF5 files, each with a different spatial resolution and covering only part of the new grid. This is, for instance, the case when preparing a best-resolution meteorological HDF5 file for forcing MOHID Water from several meteorological model domains with different spatial resolutions and spans, since the best-resolution data is not available for all new grid cells.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
The new horizontal data grid, in a grid data file in the format supported by MOHID, and for each father file:&lt;br /&gt;
&lt;br /&gt;
- level of priority: 1 = maximum priority, priority decreases with increasing level value;&lt;br /&gt;
&lt;br /&gt;
- data grid, in the form of a grid data file in the format supported by MOHID.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with patched data.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#PATCH HDF5 FILES|PATCH HDF5 FILES]].&lt;br /&gt;
&lt;br /&gt;
===Format conversion===&lt;br /&gt;
&lt;br /&gt;
====Meteorological model data====&lt;br /&gt;
MOHID does not explicitly simulate the atmosphere, but needs information about atmospheric properties in time and space. This requires that atmospheric properties be supplied to MOHID Water in supported formats, which can be derived from meteorological data in HDF5 format. Because the results of meteorological models are delivered in different formats, conversion is required. &lt;br /&gt;
&lt;br /&gt;
The formats currently convertible to HDF5 by ConvertToHDF5 include MM5 and ERA40. These are succinctly detailed in the next sections.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''ERA40''=====&lt;br /&gt;
This format refers to the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year re-analysis results, accessed at http://data.ecmwf.int/data/d/era40_daily/. These data are available for several meteorological variables with a maximum 6-hour periodicity for days in the period from 1957-09-01 to 2002-08-31. &lt;br /&gt;
&lt;br /&gt;
ERA40 data files are supplied by ECMWF in NetCDF format with a user-customized time window, periodicity (time steps ranging from 6 hours to a day) and set of meteorological properties. The ERA40 meteorological properties recognized by MOHID are presented below, together with the corresponding MOHID names: &lt;br /&gt;
&lt;br /&gt;
 ---ERA40 NAME---         ---MOHID NAME---&lt;br /&gt;
   sshf                     sensible heat                &lt;br /&gt;
   slhf                     latent heat                  &lt;br /&gt;
   msl                      atmospheric pressure &lt;br /&gt;
   tcc                      cloud cover &lt;br /&gt;
   p10u                     wind velocity X&lt;br /&gt;
   p10v                     wind velocity Y&lt;br /&gt;
   p2t                      air temperature&lt;br /&gt;
   ewss                     wind stress X&lt;br /&gt;
   nsss                     wind stress Y&lt;br /&gt;
&lt;br /&gt;
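For reference, the table above expressed as a lookup, e.g. in Python (illustrative only; the real mapping is internal to ConvertToHDF5):&lt;br /&gt;

```python
# The ERA40 -> MOHID name table above as a lookup dict. This is an
# illustration; the actual mapping lives inside the Fortran tool.
ERA40_TO_MOHID = {
    "sshf": "sensible heat",
    "slhf": "latent heat",
    "msl":  "atmospheric pressure",
    "tcc":  "cloud cover",
    "p10u": "wind velocity X",
    "p10v": "wind velocity Y",
    "p2t":  "air temperature",
    "ewss": "wind stress X",
    "nsss": "wind stress Y",
}

def mohid_name(era40_name):
    # None for properties MOHID does not recognize
    return ERA40_TO_MOHID.get(era40_name)
```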
The standard ConvertToHDF5 action converts to HDF5 the data for every MOHID Water recognized property available in the ERA40 file, producing an individual HDF5 file for each property. The name of each generated HDF5 file includes the ERA40 identifier of the meteorological property it contains.&lt;br /&gt;
&lt;br /&gt;
Alternatively, ConvertToHDF5 can copy to a single ASCII file the heading information concerning each meteorological variable considered in the original ERA40 file.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain an HDF5 file with data suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
ERA40 NetCDF file.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
One HDF5 file for each meteorological property contained in the original NetCDF file.&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT ERA40 FORMAT|CONVERT ERA40 FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''MM5''=====&lt;br /&gt;
This format relates to the output file format of the Fifth-Generation NCAR / Penn State Mesoscale Model (MM5). Almost every atmospheric property needed by MOHID Water is present in MM5 output files, making it possible to run prediction simulations with MOHID Water when access to MM5 forecast files is available.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts MM5 results files from the original format to HDF5 format, allowing the easy use of these results in the MOHID framework. Conversion is only performed for the MM5 properties and the time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
Besides the conversion, the application can calculate some properties not contained in the MM5 files using the available information: wind stress, relative humidity and precipitation.&lt;br /&gt;
&lt;br /&gt;
Completing the conversion requires the horizontal grid information of the MM5 results, which is available in dedicated TERRAIN files.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Produce HDF5 meteorological data usable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
MM5 results file to convert and MM5 TERRAIN file. The TERRAIN file supplies the MM5 results grid information. &lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
An HDF5 file with MM5 results and a grid data file in MOHID format with the MM5 grid information.&lt;br /&gt;
This last file can be used to interpolate the MM5 data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]), for instance to produce an HDF5 file suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Compilation:'''&lt;br /&gt;
&lt;br /&gt;
Caution! The ConvertToHDF5 executable must be compiled with the [[Big-endian little-endian|Big-Endian]] option set (see compatibility in the project's settings).&lt;br /&gt;
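The byte-order issue behind this option can be illustrated in a few lines of Python with the standard struct module (not part of the MOHID toolchain): a big-endian value read with the wrong byte order comes out as garbage.&lt;br /&gt;

```python
import struct

# MM5 binary output is big-endian; reading it with the wrong byte
# order yields garbage, hence the Big-Endian compiler option above.
raw = struct.pack(">f", 25.5)       # a 32-bit float as MM5 would store it
ok = struct.unpack(">f", raw)[0]    # matching (big-endian) read
bad = struct.unpack("<f", raw)[0]   # little-endian misread: wrong value
```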
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT MM5 FORMAT|CONVERT MM5 FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''Aladin''=====&lt;br /&gt;
This format relates to Aladin meteorological model results. Some of the atmospheric properties needed by MOHID Water are present in Aladin output files, making it possible to run prediction simulations with MOHID Water when access to Aladin forecast files is available.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts Aladin results files from the original format to HDF5 format, allowing easy use of these results in the MOHID framework. Conversion is performed only for the properties and the time window specified by the user.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Produce HDF5 meteorological data usable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Aladin NetCDF results file to convert.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
An HDF5 file with Aladin results and a grid data file in MOHID format with the Aladin grid pseudo-information: a fake orography of 100 m depth is created.&lt;br /&gt;
This last file can be used to interpolate the Aladin data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]), for instance to produce an HDF5 file suitable for forcing MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Compilation:'''&lt;br /&gt;
&lt;br /&gt;
Caution! The ConvertToHDF5 executable must be compiled with the [[Big-endian little-endian|Big-Endian]] option set (see compatibility in the project's settings).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT ALADIN FORMAT|CONVERT ALADIN FORMAT]].&lt;br /&gt;
&lt;br /&gt;
====Ocean model data====&lt;br /&gt;
Ocean model data, available in diverse formats, can be used by MOHID Water to specify boundary conditions (open ocean boundary and surface), initial conditions, or for validation. These uses require that the model data be in HDF5 format, so a conversion is needed.&lt;br /&gt;
&lt;br /&gt;
Currently, the large-scale ocean model formats convertible into HDF5 by ConvertToHDF5 include MERCATOR.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''MERCATOR''=====&lt;br /&gt;
MERCATOR data files are supplied in NetCDF format, with a user-customized spatial window and periodicity. Water level and water property (temperature and salinity) data are available in type T files, velocity component u data in type U files and velocity component v data in type V files. The type of data in a specific MERCATOR file is generally indicated in the file name.&lt;br /&gt;
&lt;br /&gt;
The standard ConvertToHDF5 action converts to HDF5 the data referring to temperature, salinity, water level and the u and v components of velocity.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain HDF5 MERCATOR data usable for forcing or validation of MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
NetCDF MERCATOR results data files and NetCDF MERCATOR grid data files. One grid data file of each type (T, U and V) should be provided. These files are generally supplied by the MERCATOR services together with the results files.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
One HDF5 file containing all properties in the recognized set (temperature, salinity, water level, velocity u and velocity v) and the corresponding grid data and geometry files, containing respectively the horizontal grid and the vertical discretization of the HDF5 file. The grid data and geometry files can be used afterwards to interpolate the MERCATOR data to another grid and geometry (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT MERCATOR FORMAT|CONVERT MERCATOR FORMAT]].&lt;br /&gt;
&lt;br /&gt;
====Climatological data====&lt;br /&gt;
Climatological data can be used in MOHID Water to specify boundary conditions (open ocean boundary and surface), initial conditions, or for validation, when more realistic data (measurements or model results) are unavailable. These data are generally supplied by producers in formats not readily usable by MOHID Water, which justifies the existence of a conversion tool.&lt;br /&gt;
&lt;br /&gt;
Two climatological data format conversions are implemented in ConvertToHDF5: Levitus ocean data and Hellerman Rosenstein meteorological data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''Levitus''=====&lt;br /&gt;
The Levitus climatology provides results for water temperature and salinity.&lt;br /&gt;
The ConvertToHDF5 action converts the climatological data for the properties and spatial window requested by the user. &lt;br /&gt;
Typically, it requires 3 steps to complete the task:&lt;br /&gt;
&lt;br /&gt;
- convert levitus format &lt;br /&gt;
&lt;br /&gt;
- extrapolate the data to the whole Levitus domain (required to avoid non-coincident coastlines)&lt;br /&gt;
&lt;br /&gt;
- interpolate with the model grid (bathymetry)&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain climatological data in HDF5 format to use as boundary forcing and/or initial condition specification in MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Levitus climatological data files, one per property and per time period (e.g. a month).&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with Levitus climatological data, grid data file with the horizontal&lt;br /&gt;
grid of the data and a geometry file with vertical discretization of the data (MOHID formats).&lt;br /&gt;
The grid data and the geometry files can be used to interpolate the climatological data from the original grid to a new grid (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT LEVITUS FORMAT|CONVERT LEVITUS FORMAT]].&lt;br /&gt;
&lt;br /&gt;
=====''Hellerman Rosenstein''=====&lt;br /&gt;
This is a meteorological climatology providing wind stress, with one file per wind stress component. Since the data refer to surface values, the field is 2D.&lt;br /&gt;
&lt;br /&gt;
The ConvertToHDF5 action converts the climatological data for the properties and spatial window provided by the user.&lt;br /&gt;
&lt;br /&gt;
'''Typical use:'''&lt;br /&gt;
&lt;br /&gt;
Obtain climatological data in HDF5 format to use as meteorological forcing in MOHID Water models.&lt;br /&gt;
&lt;br /&gt;
'''Data input requirements:'''&lt;br /&gt;
&lt;br /&gt;
Hellerman Rosenstein climatological data ASCII files, one per wind stress component.&lt;br /&gt;
&lt;br /&gt;
'''Output:'''&lt;br /&gt;
&lt;br /&gt;
HDF5 file with Hellerman Rosenstein climatological data and grid data file with the horizontal&lt;br /&gt;
grid of the climatological data. This grid data file can be used to interpolate the climatological data from the original horizontal grid to a new grid (see [[#Interpolate files|Interpolate files]]).&lt;br /&gt;
&lt;br /&gt;
'''ConvertToHDF5 action:''' [[#CONVERT HELLERMAN ROSENSTEIN ASCII|CONVERT HELLERMAN ROSENSTEIN ASCII]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=====''World Ocean Atlas 2005''=====&lt;br /&gt;
The World Ocean Atlas (WOA) 2005 climatology provides results for water temperature, salinity and several water quality and biology properties.&lt;br /&gt;
&lt;br /&gt;
Description, Action and Input Files are described in a separate page: [[ConvertToHDF5 WOA2005]].&lt;br /&gt;
&lt;br /&gt;
==Input file (ConvertToHDF5Action.dat)==&lt;br /&gt;
===General structure===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt; (block containing instructions for running a specific action) &lt;br /&gt;
 ACTION                    : ... (intended action)&lt;br /&gt;
 ... (action specific instructions)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : ...&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
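&lt;br /&gt;
For example, a conversion can be chained with an interpolation of its result in the same ''ConvertToHDF5Action.dat'' (a sketch only, with hypothetical file names; see the action sections below for the required keywords):&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MM5 FORMAT&lt;br /&gt;
 OUTPUTFILENAME            : MM5.hdf5&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : INTERPOLATE GRIDS&lt;br /&gt;
 FATHER_FILENAME           : MM5.hdf5&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;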
&lt;br /&gt;
===GLUES HDF5 FILES===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : GLUES HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 3D_FILE                   : 0/1 (0 = 2D file, 1 = 3D file)&lt;br /&gt;
 &lt;br /&gt;
 TIME_GROUP                : ... (Default=&amp;quot;Time&amp;quot;. Other option: &amp;quot;SurfaceTime&amp;quot;.)&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP                : ... (Default=&amp;quot;Results&amp;quot;. Other options: &amp;quot;Residual&amp;quot;, &amp;quot;SurfaceResults&amp;quot;.)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 &lt;br /&gt;
 (block of HDF5 data files)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_list&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of HDF5 file with data to be included in glue, one per line, at least two files)&lt;br /&gt;
 ...                      &lt;br /&gt;
 &amp;lt;&amp;lt;end_list&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===INTERPOLATE GRIDS===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION     : ... (type of horizontal interpolation: 1 = Bilinear, 2 = Spline2D,&lt;br /&gt;
                                  3 = Triangulation)&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION_WINDOW      : ... ... ... ... (2D spatial window to consider for interpolation: &lt;br /&gt;
                                              Xmin Ymin Xmax Ymax; default = all domain)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D           : 0/1 (0 = 2D interpolation, 1 = 3D interpolation)&lt;br /&gt;
 &lt;br /&gt;
 FATHER_FILENAME           : ... (path/name of input HDF5 file with data to be interpolated)&lt;br /&gt;
 FATHER_GRID_FILENAME      : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization of input HDF5 file)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of output HDF5 file to be created)&lt;br /&gt;
 NEW_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for output HDF5 file)&lt;br /&gt;
 &lt;br /&gt;
 EXTRAPOLATE_2D            : 0/1/2/3/4/5 (2D extrapolation: 0=no extrapolation, 1=medium&lt;br /&gt;
                                      triangulation, 2=high triangulation, &lt;br /&gt;
                                      3=nearest neighbour, 4=nearest cell, &lt;br /&gt;
                                      5=constant value)&lt;br /&gt;
 &lt;br /&gt;
 EXTRAPOLATE_VALUE         : ... (name of the value to extrapolate to when EXTRAPOLATE_2D is&lt;br /&gt;
                                  set to constant value (5))&lt;br /&gt;
 &lt;br /&gt;
 DO_NOT_BELIEVE_MAP        : 0/1 (0=consider input HDF5 file map, 1=do not consider input HDF5&lt;br /&gt;
                                  file map)&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP                : ... (name of base group of HDF5 variables containing data to be &lt;br /&gt;
                                  interpolated; default is &amp;quot;/Results&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (if INTERPOLATION3D : 1 also required:)&lt;br /&gt;
 FATHER_GEOMETRY           : ... (path/name of file (MOHID format) with vertical discretization&lt;br /&gt;
                                  of input HDF5 file)&lt;br /&gt;
 NEW_GEOMETRY              : ... (path/name of file (MOHID format) with vertical discretization&lt;br /&gt;
                                  intended for output HDF5 file)&lt;br /&gt;
 POLI_DEGREE               : 1/... (degree of vertical interpolation: 1=linear, ...)&lt;br /&gt;
 &lt;br /&gt;
 AUX_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for auxiliar output HDF5 file;&lt;br /&gt;
                                  default is file provided in NEW_GRID_FILENAME)&lt;br /&gt;
 &lt;br /&gt;
 AUX_OUTPUTFILENAME        : ... (path/name of auxiliar output HDF5 file to contain result&lt;br /&gt;
                                  of horizontal grid interpolation)   &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the file indicated in AUX_GRID_FILENAME can differ from the one indicated in&lt;br /&gt;
   NEW_GRID_FILENAME in terms of bathymetry, while the horizontal grid should commonly be the&lt;br /&gt;
   same: the altered bathymetry can be used to extend the water column in the original data so&lt;br /&gt;
   that the vertical interpolation can be performed more easily;&lt;br /&gt;
 &lt;br /&gt;
 - in case of INTERPOLATION3D : 1, ConvertToHDF5 can generate new versions of the bathymetries&lt;br /&gt;
   that are consistent with the geometry definition (the extension is '.new'); up to three&lt;br /&gt;
   bathymetries may be changed, referring to the father grid, the new grid and the aux grid&lt;br /&gt;
   (the same bathymetry is not altered twice); although the new and aux grids are initially&lt;br /&gt;
   the same, they can end up different because of bathymetry changes;&lt;br /&gt;
 &lt;br /&gt;
 - in case the new geometry is 2D and father geometry is 3D then POLI_DEGREE : 1 &lt;br /&gt;
   (linear interpolation) should be used;&lt;br /&gt;
 &lt;br /&gt;
 - EXTRAPOLATE_2D : 1/2/3/4/5 should be considered if the coast line is not expected to be&lt;br /&gt;
   coincident in the father and new grids, to avoid a lack of data in the interpolation&lt;br /&gt;
   process; extrapolation is performed for all cells, even the land cells;&lt;br /&gt;
 &lt;br /&gt;
 - in case of DO_NOT_BELIEVE_MAP : 1 the application generates a map based on the cells where&lt;br /&gt;
   interpolation results are available; as a consequence, if EXTRAPOLATE_2D : 1/2/3/4/5 is used,&lt;br /&gt;
   the AUX_GRID_FILENAME should not have land cells, so that the new map is consistent with the&lt;br /&gt;
   result of the extrapolation and errors are avoided, especially if INTERPOLATION3D : 1 is&lt;br /&gt;
   considered.&lt;br /&gt;
&lt;br /&gt;
===PATCH HDF5 FILES===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : PATCH HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION     : ... (type of interpolation: 3 = Triangulation, default and only&lt;br /&gt;
                                  one implemented)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 &lt;br /&gt;
 (block for each father HDF5 file, should be at least two)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                     : ... (integer priority level: 1 = highest, increase for lower&lt;br /&gt;
                                  priority)&lt;br /&gt;
 FATHER_FILENAME           : ... (path/name of input HDF5 file with data to be interpolated)&lt;br /&gt;
 FATHER_GRID_FILENAME      : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization of input HDF5 file)&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of output HDF5 file to be created)&lt;br /&gt;
 NEW_GRID_FILENAME         : ... (path/name of input grid data file with horizontal&lt;br /&gt;
                                  discretization intended for output HDF5 file)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===CONVERT ERA40 FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ERA40 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : ... (path/name of ERA40 NetCDF file)&lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
                                 (root of name for all files produced)&lt;br /&gt;
 &lt;br /&gt;
 CONVERT_TO_ASCII          : 0/1 (1 = convert variable heading info for ASCII file; 0 = default)&lt;br /&gt;
 CONVERT_TO_HDF5           : 0/1 (1 = convert to HDF5 file; 0 = default)&lt;br /&gt;
 GRIDTO180                 : 0/1 (1 = convert grid from [0 360] to [-180 180], 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 XX_VARIABLE               : ... (name of longitude variable in the input file: usual name &lt;br /&gt;
                                  is &amp;quot;longitude&amp;quot;)&lt;br /&gt;
 YY_VARIABLE               : ... (name of latitude variable in the input file: usual name &lt;br /&gt;
                                  is &amp;quot;latitude&amp;quot;)&lt;br /&gt;
 TIME_VARIABLE             : ... (name of time variable in the input file: usual name is&lt;br /&gt;
                                  &amp;quot;time&amp;quot;)&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - either CONVERT_TO_ASCII : 1 or CONVERT_TO_HDF5 : 1 must be chosen for any action to be&lt;br /&gt;
 performed by ConvertToHDF5;&lt;br /&gt;
 &lt;br /&gt;
 - when CONVERT_TO_HDF5 : 1 an HDF5 file is produced for every variable contained in the&lt;br /&gt;
 original ERA40 file; the name of each file is composed of the name indicated in&lt;br /&gt;
 OUTPUTFILENAME concatenated with the ERA40 variable identifier;&lt;br /&gt;
 &lt;br /&gt;
 - XX_VARIABLE, YY_VARIABLE and TIME_VARIABLE should generally be set to &amp;quot;longitude&amp;quot;,&lt;br /&gt;
 &amp;quot;latitude&amp;quot; and &amp;quot;time&amp;quot;, respectively; the option to specify them as keywords exists&lt;br /&gt;
 only to make the application robust to future variable name changes.&lt;br /&gt;
&lt;br /&gt;
===CONVERT MM5 FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MM5 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                  : ... (path/name of MM5 file)&lt;br /&gt;
 TERRAIN_FILENAME          : ... (path/name of MM5 TERRAIN file)&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data file with horizontal grid of MM5 data&lt;br /&gt;
                                  to be created)&lt;br /&gt;
 &lt;br /&gt;
 COMPUTE_WINDSTRESS        : 0/1 (1 = compute and write wind stress field; 0 = default)&lt;br /&gt;
 COMPUTE_RELATIVE_HUMIDITY : 0/1 (1 = compute and write relative humidity field; 0 = default)&lt;br /&gt;
 COMPUTE_PRECIPITATION     : 0/1 (1 = compute and write precipitation field; 0 = default)&lt;br /&gt;
 COMPUTE_WINDMODULUS       : 0/1 (1 = compute wind modulus; 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 WRITE_XYZ                 : 0/1 (1 = write xyz center grid cells; 0 = default)&lt;br /&gt;
 WRITE_TERRAIN             : 0/1 (1 = write MM5 TERRAIN fields; 0 = default)&lt;br /&gt;
 &lt;br /&gt;
 START                     : ... (start date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
 END                       : ... (end date for output file: yyyy mm dd hh mm ss)&lt;br /&gt;
  &lt;br /&gt;
 (block of MM5 properties to convert)&lt;br /&gt;
 &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;&lt;br /&gt;
 ... (name of MM5 property to convert to HDF5 format, one per line)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the name of each MM5 property to convert in the &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;...&amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt; block must&lt;br /&gt;
 conform to the MOHID designation specified in the code of ModuleGlobalData; the correspondence&lt;br /&gt;
 is the following (see [[Module_InterfaceWaterAir]] for a more detailed explanation):&lt;br /&gt;
 &lt;br /&gt;
           ---MM5 NAME---    ---MOHID NAME---&lt;br /&gt;
             T2             air temperature&lt;br /&gt;
             PSTARCRS       atmospheric pressure&lt;br /&gt;
             U10            wind velocity X&lt;br /&gt;
             V10            wind velocity Y&lt;br /&gt;
             UST            wind shear velocity&lt;br /&gt;
             LHFLUX         latent heat&lt;br /&gt;
             SHFLUX         sensible heat&lt;br /&gt;
             SWDOWN         solar radiation&lt;br /&gt;
             LWDOWN         infrared radiation&lt;br /&gt;
             SWOUT          top outgoing shortwave radiation&lt;br /&gt;
             LWOUT          top outgoing longwave radiation&lt;br /&gt;
             SOIL T 1       soil temperature layer 1&lt;br /&gt;
             SOIL T 2       soil temperature layer 2&lt;br /&gt;
             SOIL T 3       soil temperature layer 3&lt;br /&gt;
             SOIL T 4       soil temperature layer 4&lt;br /&gt;
             SOIL T 5       soil temperature layer 5&lt;br /&gt;
             SOIL T 6       soil temperature layer 6&lt;br /&gt;
             Q2             2-meter mixing ratio&lt;br /&gt;
             TSEASFC        sea water temperature&lt;br /&gt;
             PBL HGT        PBL height&lt;br /&gt;
             PBL REGIME     PBL regime&lt;br /&gt;
             RAIN CON       accumulated convective precipitation        (cm)&lt;br /&gt;
             RAIN NON       accumulated non-convective precipitation    (cm)&lt;br /&gt;
             GROUND T       ground temperature&lt;br /&gt;
             RES TEMP       infinite reservoir slab temperature&lt;br /&gt;
             U              wind velocity X_3D&lt;br /&gt;
             V              wind velocity Y_3D&lt;br /&gt;
             W              wind velocity Z_3D&lt;br /&gt;
             T              air temperature_3D&lt;br /&gt;
             PP             atmospheric pressure_3D&lt;br /&gt;
             Q              mixing ratio_3D&lt;br /&gt;
             CLW            cloud water mixing ratio_3D&lt;br /&gt;
             RNW            rain water mixing ratio_3D&lt;br /&gt;
             ICE            cloud ice mixing ratio_3D&lt;br /&gt;
             SNOW           snow mixing ratio_3D&lt;br /&gt;
             RAD TEND       atmospheric radiation tendency_3D&lt;br /&gt;
&lt;br /&gt;
===CONVERT ALADIN FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ALADIN FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : aladin.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : aladin_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 !Put here the name of any netcdf file; it is used only to generate the grid data.&lt;br /&gt;
 INPUT_GRID_FILENAME      :   D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 (path to aladin netcdf file)\ALADIN_BULKIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 ...&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 ''Remarks:''&lt;br /&gt;
 &lt;br /&gt;
 - the name of each Aladin property to convert in the &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;...&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt; block must conform to the following variable names:&lt;br /&gt;
 &lt;br /&gt;
           ---ALADIN NAME---    ---MOHID NAME---&lt;br /&gt;
             soclotot            CloudCover_&lt;br /&gt;
             sohumrel            RelativeHumidity_&lt;br /&gt;
             sofluxir            NonSolarFlux_&lt;br /&gt;
             sosspres            AtmosphericPressure_&lt;br /&gt;
             sosolarf            SolarRadiation_&lt;br /&gt;
             sotemair            AirTemperature_&lt;br /&gt;
             sowinmod            WindModulus_&lt;br /&gt;
             sowaprec            Precipitation_&lt;br /&gt;
             sozotaux            WindStressX_&lt;br /&gt;
             sometauy            WindStressY_&lt;br /&gt;
             sowindu10           WindVelocityX_&lt;br /&gt;
             sowindv10           WindVelocityY_&lt;br /&gt;
&lt;br /&gt;
===CONVERT MERCATOR FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : ... (path/name of geometry file with vertical discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 &lt;br /&gt;
 INPUT_GRID_FILENAME       : ... (path/name of file with horizontal discretization of water&lt;br /&gt;
                                  properties and water level data)&lt;br /&gt;
 INPUT_GRID_FILENAME_U     : ... (path/name of file with horizontal discretization of velocity&lt;br /&gt;
                                  component U data)&lt;br /&gt;
 INPUT_GRID_FILENAME_V     : ... (path/name of file with horizontal discretization of velocity&lt;br /&gt;
                                  component V data)&lt;br /&gt;
  &lt;br /&gt;
 (block of MERCATOR data files)&lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of MERCATOR NetCDF data file, one per line, can be several)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
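&lt;br /&gt;
An example of a filled-in block follows (the file names are hypothetical; a real case uses the NetCDF results and grid files supplied by the MERCATOR services):&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : Mercator.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : Mercator_griddata.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : Mercator_geometry.dat&lt;br /&gt;
 &lt;br /&gt;
 INPUT_GRID_FILENAME       : mesh_T.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_U     : mesh_U.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_V     : mesh_V.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 mercator_T.nc&lt;br /&gt;
 mercator_U.nc&lt;br /&gt;
 mercator_V.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;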
&lt;br /&gt;
===CONVERT LEVITUS FORMAT===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT LEVITUS FORMAT&lt;br /&gt;
  &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : ... (path/name of geometry file with vertical discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY               : ... (periodicity of Levitus data: &amp;quot;monthly&amp;quot;/&amp;quot;annual&amp;quot;; default is&lt;br /&gt;
                                  &amp;quot;monthly&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 SPATIAL_RESOLUTION        : ... (spatial resolution (degrees) of horizontal Levitus grid)&lt;br /&gt;
 &lt;br /&gt;
 FILL_VALUE                : ... (real value identifier for missing data; default is &lt;br /&gt;
                                  &amp;quot;-99.999900&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (definition of spatial window to be present in output HDF5 file)&lt;br /&gt;
 LOWER_LEFT_CORNER         : ... ... (longitude and latitude (degrees) of south west corner)&lt;br /&gt;
 UPPER_RIGHT_CORNER        : ... ... (longitude and latitude (degrees) of north east corner)&lt;br /&gt;
 &lt;br /&gt;
 (block for each water property to be present in output HDF5 file, can be several)&lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : ... (name of property)&lt;br /&gt;
 ANNUAL_FILE               : ... (path/name of Levitus annual file)&lt;br /&gt;
 &lt;br /&gt;
 (block of Levitus data files)&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 ... (path/name of Levitus data file (e.g. a monthly data file), one per line, can be several)&lt;br /&gt;
 ... &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
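&lt;br /&gt;
An example of a filled-in block follows (the file names, spatial window and resolution are hypothetical and must be adapted to the actual Levitus files being converted):&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT LEVITUS FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : Levitus.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : Levitus_griddata.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME  : Levitus_geometry.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY               : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION        : 1.&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER         : -15. 30.&lt;br /&gt;
 UPPER_RIGHT_CORNER        : 0. 45.&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : temperature&lt;br /&gt;
 ANNUAL_FILE               : temperature_annual.dat&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 temperature_january.dat&lt;br /&gt;
 temperature_february.dat&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;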
&lt;br /&gt;
===CONVERT HELLERMAN ROSENSTEIN ASCII===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : ... (path/name of HDF5 file to be created)&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : ... (path/name of grid data with horizontal discretization to be&lt;br /&gt;
                                  created)&lt;br /&gt;
  &lt;br /&gt;
 PERIODICITY               : ... (periodicity of Hellerman Rosenstein data: &amp;quot;monthly&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 SPATIAL_RESOLUTION        : ... (spatial resolution (degrees) of horizontal Hellerman&lt;br /&gt;
                                  Rosenstein grid: default and only allowed value is &amp;quot;2.&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 FILL_VALUE                : ... (real value identifier for missing data; default is &lt;br /&gt;
                                  &amp;quot;-99.999900&amp;quot;)&lt;br /&gt;
 &lt;br /&gt;
 (definition of spatial window to be present in output HDF5 file)&lt;br /&gt;
 LOWER_LEFT_CORNER         : ... ... (longitude and latitude (degrees) of south west corner)&lt;br /&gt;
 UPPER_RIGHT_CORNER        : ... ... (longitude and latitude (degrees) of north east corner)&lt;br /&gt;
   &lt;br /&gt;
 (block for each Hellerman Rosenstein data file)&lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : ... (name of property: &amp;quot;wind stress X&amp;quot;/&amp;quot;wind stress Y&amp;quot;)&lt;br /&gt;
 FILE                      : ... (path/name Hellerman Rosenstein file)&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
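&lt;br /&gt;
An example of a filled-in block follows (the file names and spatial window are hypothetical; one &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt; block is used per wind stress component):&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : HellermanRosenstein.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : HellermanRosenstein_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY               : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION        : 2.&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER         : -15. 30.&lt;br /&gt;
 UPPER_RIGHT_CORNER        : 0. 45.&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : wind stress X&lt;br /&gt;
 FILE                      : taux.asc&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                      : wind stress Y&lt;br /&gt;
 FILE                      : tauy.asc&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;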
&lt;br /&gt;
==Samples==&lt;br /&gt;
All sample files are named ''ConvertToHDF5Action.dat''.&lt;br /&gt;
&lt;br /&gt;
===Glue several MOHID(.hdf5) files===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : GLUES HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : SurfaceHydro_OP.hdf5&lt;br /&gt;
  &lt;br /&gt;
 &amp;lt;&amp;lt;begin_list&amp;gt;&amp;gt;&lt;br /&gt;
 D:\Projectos\SurfaceHydrodynamic_21.hdf5&lt;br /&gt;
 D:\Projectos\SurfaceHydrodynamic_22.hdf5&lt;br /&gt;
 &amp;lt;&amp;lt;end_list&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interpolate 2D MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 FATHER_FILENAME          : D:\Projectos\MohidRun\test\res\Lagrangian_1.hdf5 &lt;br /&gt;
 OUTPUTFILENAME           : OilSpillThickness_GridRegular.hdf5&lt;br /&gt;
  &lt;br /&gt;
 START                    : 2006 6 21 17 22 30&lt;br /&gt;
 END                      : 2006 6 22 17 22 0&lt;br /&gt;
  &lt;br /&gt;
 FATHER_GRID_FILENAME     : D:\Projectos\MohidRun\GeneralData\batim\Tagus.dat_A&lt;br /&gt;
 NEW_GRID_FILENAME        : TagusConstSpacing.dat&lt;br /&gt;
  &lt;br /&gt;
 BASE_GROUP               : /Results/Oil/Data_2D&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Interpolate 3D MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION   : 1&lt;br /&gt;
 FATHER_FILENAME         : D:\Projectos\MohidRun\test\res\Lagrangian_1.hdf5 &lt;br /&gt;
 OUTPUTFILENAME          : OilSpillThickness_GridRegular.hdf5&lt;br /&gt;
 &lt;br /&gt;
 START                   : 2006 6 21 17 22 30&lt;br /&gt;
 END                     : 2006 6 22 17 22 0&lt;br /&gt;
 &lt;br /&gt;
 FATHER_GRID_FILENAME    : D:\Projectos\MohidRun\GeneralData\batim\Tagus.dat_A&lt;br /&gt;
 NEW_GRID_FILENAME       : TagusConstSpacing.dat&lt;br /&gt;
 &lt;br /&gt;
 BASE_GROUP              : /Results/Oil/Data_2D&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D         : 1&lt;br /&gt;
 FATHER_GEOMETRY         : D:\Projectos\MohidRun\test\data\Geometry_1.dat&lt;br /&gt;
 NEW_GEOMETRY            : TagusGeometry.dat&lt;br /&gt;
 AUX_GRID_FILENAME       : TagusConstSpacing.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME      : Aux_GridRegular.hdf5&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Patch several MOHID(.hdf5) files to a new grid===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION : PATCH HDF5 FILES&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION   : 3&lt;br /&gt;
 &lt;br /&gt;
 START                   : 2005 2 28 13 0 0&lt;br /&gt;
 END                     : 2005 3 1 13 0 0&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 3&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D1.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid1.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 2&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D2.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid2.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_father&amp;gt;&amp;gt;&lt;br /&gt;
 LEVEL                   : 1&lt;br /&gt;
 FATHER_FILENAME         : K:\MM5output\2005022812_2005030712\MM5OUT_D3.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME    : K:\MM5output\2005022812_2005030712\grid3.dat&lt;br /&gt;
 &amp;lt;&amp;lt;end_father&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME          : MM5Forcing.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME       : K:\Simula\GeneralData\Batim\CostaPortuguesa.dat&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert an ERA40 file to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                  : CONVERT ERA40 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                : D:\Aplica\ERA40\1971ERA1973.nc&lt;br /&gt;
 OUTPUTFILENAME          : D:\Aplica\ERA40\1971ERA1973T2&lt;br /&gt;
 &lt;br /&gt;
 CONVERT_TO_ASCII        : 0&lt;br /&gt;
 CONVERT_TO_HDF5         : 1&lt;br /&gt;
 &lt;br /&gt;
 XX_VARIABLE             : longitude&lt;br /&gt;
 YY_VARIABLE             : latitude&lt;br /&gt;
 TIME_VARIABLE           : time&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert a MM5 file to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                  : CONVERT MM5 FORMAT&lt;br /&gt;
 &lt;br /&gt;
 FILENAME                : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\MMOUT_D3&lt;br /&gt;
 TERRAIN_FILENAME        : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\TERRAIN_D3&lt;br /&gt;
 OUTPUT_GRID_FILENAME    : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\grid3.dat&lt;br /&gt;
 OUTPUTFILENAME          : K:\MM5output\DataCenter\Modelos\Meteo_IST\Fev2005\MM5_D3.hdf5&lt;br /&gt;
 &lt;br /&gt;
 COMPUTE_WINDSTRESS        : 0&lt;br /&gt;
 COMPUTE_RELATIVE_HUMIDITY : 1&lt;br /&gt;
 WRITE_XYZ                 : 0&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;BeginFields&amp;gt;&amp;gt;&lt;br /&gt;
 solar radiation&lt;br /&gt;
 air temperature&lt;br /&gt;
 wind velocity X&lt;br /&gt;
 wind velocity Y&lt;br /&gt;
 sensible heat&lt;br /&gt;
 latent heat&lt;br /&gt;
 atmospheric pressure&lt;br /&gt;
 sea water temperature&lt;br /&gt;
 &amp;lt;&amp;lt;EndFields&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert Mercator-Ocean(.nc) to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT MERCATOR FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : Psy2v2r1v_R20060628/MercatorR20060628.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : Psy2v2r1v_R20060628/MercatorGridR20060628.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME : Psy2v2r1v_R20060628/MercatorGeometryR20060628.dat&lt;br /&gt;
 &lt;br /&gt;
 INPUT_GRID_FILENAME      : GridFiles/ist_meteog-gridT.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_U    : GridFiles/ist_meteog-gridU.nc&lt;br /&gt;
 INPUT_GRID_FILENAME_V    : GridFiles/ist_meteog-gridV.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060621_R20060628.nc&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060622_R20060628.nc&lt;br /&gt;
 Psy2v2r1v_R20060628/ist_meteog-mercatorPsy2v2r1v_T_MEAN_ANA_20060623_R20060628.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert Levitus format to MOHID(.hdf5) and interpolate grid===&lt;br /&gt;
==== Convert ====&lt;br /&gt;
First convert the Levitus ASCII format to a raw HDF5 format:&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT LEVITUS FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : Levitus.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 OUTPUT_GEOMETRY_FILENAME : LevitusGeometry.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY              : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION       : 0.25&lt;br /&gt;
 FILL_VALUE               : -99.9999&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER        : -16.0  31&lt;br /&gt;
 UPPER_RIGHT_CORNER       :   1.   40&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : salinity&lt;br /&gt;
 ANNUAL_FILE              : DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s000hr.obj&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s001&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s002&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s003&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s004&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s005&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s006&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s007&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s008&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s009&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s010&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s011&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Salinity\s012&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : temperature&lt;br /&gt;
 ANNUAL_FILE              : DataCenter\DadosBase\Ocean\Levitus\Data\Temp\t000hr.obj&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t001&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t002&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t003&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t004&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t005&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t006&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t007&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t008&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t009&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t010&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t011&lt;br /&gt;
 DataCenter\DadosBase\Ocean\Levitus\Data\Temperature\t012&lt;br /&gt;
 &amp;lt;&amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Extrapolate ====&lt;br /&gt;
Then extrapolate the data (still in the raw HDF5 format):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 &lt;br /&gt;
 FATHER_FILENAME          : Levitus.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : LeviTusAllPointsWithData.hdf5&lt;br /&gt;
 NEW_GRID_FILENAME        : LevitusGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 START                    : -9999 1  1 0 0 0&lt;br /&gt;
 END                      : -9999 12 1 0 0 0&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D          : 1&lt;br /&gt;
 FATHER_GEOMETRY          : LevitusGeometry.dat&lt;br /&gt;
 NEW_GEOMETRY             : LevitusGeometry.dat&lt;br /&gt;
 AUX_GRID_FILENAME        : LevitusGrid.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME       : AuxLeviTusAllPointsWithData.hdf5&lt;br /&gt;
 &lt;br /&gt;
 POLI_DEGREE              : 3&lt;br /&gt;
 DO_NOT_BELIEVE_MAP       : 1&lt;br /&gt;
 EXTRAPOLATE_2D           : 2&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Interpolate ====&lt;br /&gt;
Finally, interpolate to the final grid and geometry (same as the [[#Interpolate 3D MOHID(.hdf5) files to a new grid| Interpolate 3D sample]]):&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : INTERPOLATE GRIDS&lt;br /&gt;
 &lt;br /&gt;
 TYPE_OF_INTERPOLATION    : 1&lt;br /&gt;
 FATHER_FILENAME          : LeviTusAllPointsWithData.hdf5&lt;br /&gt;
 OUTPUTFILENAME           : CadizMonthlyLevitus.hdf5&lt;br /&gt;
 FATHER_GRID_FILENAME     : LevitusGrid.dat&lt;br /&gt;
 NEW_GRID_FILENAME        : Algarve0.02SigmaSmooth_V3_CartMoreLayers.dat&lt;br /&gt;
 &lt;br /&gt;
 START                    : -9999 1  1 0 0 0&lt;br /&gt;
 END                      : -9999 12 1 0 0 0&lt;br /&gt;
 &lt;br /&gt;
 INTERPOLATION3D          : 1&lt;br /&gt;
 FATHER_GEOMETRY          : LevitusGeometry.dat&lt;br /&gt;
 NEW_GEOMETRY             : Geometry_1.dat&lt;br /&gt;
 AUX_OUTPUTFILENAME       : AuxCadizMonthlyLevitus.hdf5&lt;br /&gt;
 AUX_GRID_FILENAME        : Aux12km.dat&lt;br /&gt;
 &lt;br /&gt;
 POLI_DEGREE              : 3&lt;br /&gt;
 DO_NOT_BELIEVE_MAP       : 1&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that the program may construct a new bathymetry twice. Use this bathymetry only in the AUX_GRID_FILENAME keyword.&lt;br /&gt;
&lt;br /&gt;
===Convert Hellerman Rosenstein ASCII format to MOHID(.hdf5)  ===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                   : CONVERT HELLERMAN ROSENSTEIN ASCII&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME           : ClimatologicWindStress.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME     : ClimatologicWindStressGrid.dat&lt;br /&gt;
 &lt;br /&gt;
 PERIODICITY              : monthly&lt;br /&gt;
 SPATIAL_RESOLUTION       : 2.&lt;br /&gt;
 FILL_VALUE               : -99.9999&lt;br /&gt;
 &lt;br /&gt;
 LOWER_LEFT_CORNER        : -180  -90&lt;br /&gt;
 UPPER_RIGHT_CORNER       : 180  90&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : wind stress X&lt;br /&gt;
 FILE                     : D:\Aplica\Dados\Hellerman_Rosenstein\TAUXX.DAT&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;beginfield&amp;gt;&amp;gt;&lt;br /&gt;
 NAME                     : wind stress Y&lt;br /&gt;
 FILE                     : D:\Aplica\Dados\Hellerman_Rosenstein\TAUYY.DAT&lt;br /&gt;
 &amp;lt;&amp;lt;endfield&amp;gt;&amp;gt;&lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Convert ALADIN(.nc) format to MOHID(.hdf5)===&lt;br /&gt;
 &amp;lt;begin_file&amp;gt;&lt;br /&gt;
 ACTION                    : CONVERT ALADIN FORMAT&lt;br /&gt;
 &lt;br /&gt;
 OUTPUTFILENAME            : aladin.hdf5&lt;br /&gt;
 OUTPUT_GRID_FILENAME      : aladin_griddata.dat&lt;br /&gt;
 &lt;br /&gt;
 !Put here the name of any NetCDF file, for grid-data generation purposes.&lt;br /&gt;
 INPUT_GRID_FILENAME      :   D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;&amp;lt;begin_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKPRES_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKSOLAR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKTAIR_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKWIND_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_FLUXPRE_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_STRESSU_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_STRESSV_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_U10_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_V10_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKCLOUD_OPASYMP_19723_20088.nc&lt;br /&gt;
 D:\Aplica\BiscayAplica\FORCAGES\METEO\ALADIN_BULKHUMI_OPASYMP_19723_20088.nc&lt;br /&gt;
 &amp;lt;&amp;lt;end_input_files&amp;gt;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;end_file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== OceanColor modules compilation ==&lt;br /&gt;
Compiling the [[ConvertToHDF5]] tool with the OceanColor modules is more complicated than one might expect. A solution is proposed here for a release version using Compaq Visual Fortran 6.6c. The difficulties arise because C code is embedded with a Fortran interface and, in addition, extra libraries such as HDF4 are required.&lt;br /&gt;
&lt;br /&gt;
=== Pre-requisites ===&lt;br /&gt;
&lt;br /&gt;
This is a list of prerequisites to successfully compile the tool:&lt;br /&gt;
*Compaq Visual Fortran 6.5 with patch 6.6c,&lt;br /&gt;
*VS .NET 2003 (Vc7 in particular),&lt;br /&gt;
*Hdf5 libraries ('''hdf5.lib''' '''hdf5_fortran.lib''' '''hdf5_hl.lib'''),&lt;br /&gt;
*Netcdf libraries ('''netcdf.lib''' '''netcdf_.lib'''),&lt;br /&gt;
*Hdf4 libraries ('''hd421.lib''', '''hm421.lib'''),&lt;br /&gt;
*szlib, zlib and jpeg libraries ('''szlib.lib''', '''zlib.lib''' and '''libjpeg.lib'''),&lt;br /&gt;
*the fortran source files ('''ModuleConvertModisL2.F90 ModuleConvertModisL3.F90 ModuleConvertOceanColorL2.F90'''),&lt;br /&gt;
*the C source files and their fortran interface files ('''readL2scan.c readL2Seadas.c''' and '''cdata.f crossp.f fgeonav.f''').&lt;br /&gt;
&lt;br /&gt;
=== CVF IDE configuration ===&lt;br /&gt;
# Configure everything as specified in [[Compiling with CVF]].&lt;br /&gt;
# Add the source files listed in the prerequisites above to the source files listing.&lt;br /&gt;
# Go to '''Tools--&amp;gt;Options...--&amp;gt;Directories'''. There, add '''$DOTNET2K3/Vc7/bin''' to the '''Executable files'''; '''$DOTNET2K3/Vc7/include''' and '''$DOTNET2K3/Vc7/PlatformSDK/include''' to the '''Include files'''; and finally, '''$DOTNET2K3/Vc7/lib''', '''$DOTNET2K3/Vc7/PlatformSDK/lib''' and '''$DOTNET2K3/Vc7/PlatformSDK/bin''' to the '''Library files'''.&lt;br /&gt;
# Go to '''Projects--&amp;gt;Settings--&amp;gt;Release--&amp;gt;Link--&amp;gt;Input'''. There, add the following libraries: '''netcdf.lib netcdf_.lib hd421.lib hm421.lib libjpeg.lib'''. (Make sure the hdf5 libraries as well as the szlib and zlib libraries are already mentioned).&lt;br /&gt;
&lt;br /&gt;
=== Troubleshooting ===&lt;br /&gt;
'''Q: I get unresolved external references during linkage, but I have all the libraries mentioned above included. What should I do?'''&lt;br /&gt;
&lt;br /&gt;
A: Unresolved external references can arise for two reasons:&lt;br /&gt;
#you didn't specify all the required libraries, or all the paths for the default libraries, or&lt;br /&gt;
#[http://en.wikipedia.org/wiki/Name_decoration name mangling] problems. Use the [[dumpbin]] utility on the libraries to check which naming convention they use. If that is the problem, you need to obtain new libraries with the correct naming convention.&lt;br /&gt;
&lt;br /&gt;
That's it, you should now be able to build the [[ConvertToHdf5]] project successfully.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Q: I got a message saying the entry point _NF_PUT_ATT_REAL@28 could not be located in netcdf.dll'''&lt;br /&gt;
&lt;br /&gt;
A: Copy the file netcdf.dll to the folder containing the executable.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
*[http://www.hdfgroup.org/ HDF5 Homepage]&lt;br /&gt;
*[http://www.hdfgroup.org/ HDF4 Homepage]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
*[[Module_Atmosphere]]&lt;br /&gt;
*[[Module_InterfaceWaterAir]]&lt;br /&gt;
*[[Coupling_Water-Atmosphere_User_Manual]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Tools]]&lt;br /&gt;
[[Category:Hdf5]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1923</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1923"/>
				<updated>2009-05-11T19:39:58Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even if the processor belongs to a different computer, as long as it is on the same network) and in parallel, instead of all running on the same processor with each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes needed to be made apart from the implementation of the MPI communication calls. Using this feature, computational speed was improved (varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated for each model, has not yet proven big enough to make a 100 Mbps network connection time limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; without them, the instructions are ignored in normal Fortran compilation. See more on [[compiling Mohid with OpenMP]]. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times, or involve large resource allocations, in MOHID simulations; hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping over that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and thus the savings achieved with parallelization, are larger for this loop than for the others.&lt;br /&gt;
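&lt;br /&gt;
As an illustrative sketch, parallelizing the j variable of a 3D loop could look as follows (the loop bounds ILB/IUB, JLB/JUB, KLB/KUB and the variable names are hypothetical, not taken from the actual MOHID code):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k)&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do k = KLB, KUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
     NewField(i, j, k) = OldField(i, j, k) + DT * Source(i, j, k)&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;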
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several threads (one for each available core): the processing inside a parallel region is divided between the Workers and the Master, which then process in parallel (simultaneously), instead of the single thread (the Master) used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each thread has a set of private variables, which are affected only by that thread. The choice of the private variables, which are explicitly declared in the code, is a central part of OpenMP programming: all variables whose values are altered by a thread's processing should be made private. &lt;br /&gt;
However, it should be noted that private variables have undefined values on entry to and exit from the parallel region, and that by default they have no storage association with the variables outside the parallel region.&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific for the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in either fixed or free format. Directive continuation follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
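&lt;br /&gt;
For example, a long directive can be split using the Fortran continuation (the variable names are hypothetical):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k) &amp;amp;&lt;br /&gt;
 !$OMP          SHARED(Temperature, Salinity)&lt;br /&gt;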
&lt;br /&gt;
Clauses are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
- PRIVATE (private variables list);&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran do loop are distributed among the threads. The way the work is distributed over the threads is governed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a number of iterations), each thread processes that fixed number of iterations and, after finishing a chunk, begins another available chunk until all iterations are completed.&lt;br /&gt;
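&lt;br /&gt;
For instance, a DO construct distributing the iterations dynamically in chunks of 10 could be written as (illustrative bounds):&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO SCHEDULE(DYNAMIC, 10)&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
     ... (loop body)&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;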
&lt;br /&gt;
Important characteristics of the Work Sharing constructs are that they do not create new threads, they must encompass either all the existing threads or none at all, and there is no barrier on entry (an available thread is not required to wait for the others) but a barrier on exit exists. The barrier on exit can be removed with the NOWAIT clause mentioned above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region. If they are dynamically linked to a parallel region (e.g. they appear in a subroutine called from within the region) they are said to be orphaned. If, however, they appear outside any such dynamic connection with a parallel region, they are ignored: the enclosed code is performed by the Master only and no parallelization takes place.&lt;br /&gt;
&lt;br /&gt;
A critical region can be defined inside a Work Sharing construct when it is convenient that only one thread at a time processes a specific piece of code. This can be used to avoid problems with input/output operations that could otherwise occur with several threads processing. It is specified as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry and exit, and all threads process the enclosed code, although one at a time. &lt;br /&gt;
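&lt;br /&gt;
A typical use is protecting the update of a shared accumulator (a sketch; the variable names are hypothetical):&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL (AccumulateMass)&lt;br /&gt;
 TotalMass = TotalMass + LocalMass&lt;br /&gt;
 !$OMP END CRITICAL (AccumulateMass)&lt;br /&gt;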
&lt;br /&gt;
Within a Work Sharing construct it can also be specified that a portion of the code is executed by only one thread, e.g. to read information from a file common to all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
As with CRITICAL, there is no barrier on entry to SINGLE; unlike CRITICAL, however, an implicit barrier exists at END SINGLE unless the NOWAIT clause is specified.&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1922</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1922"/>
				<updated>2009-05-11T18:43:44Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even if the processor belongs to a different computer, as long as it is on the same network) and in parallel, instead of all running on the same processor with each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes needed to be made apart from the implementation of the MPI communication calls. Using this feature, computational speed was improved (varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated for each model, has not yet proven big enough to make a 100 Mbps network connection time limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; without them, the instructions are ignored in normal Fortran compilation. See more on [[compiling Mohid with OpenMP]]. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' in the same computer. It cannot be used for parallel processing across processors located in several computers of a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times, or involve large resource allocations, in MOHID simulations; hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping over that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and thus the savings achieved with parallelization, are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several Worker threads (one for each available core): the processing inside a parallel region is divided between the Workers, which then process in parallel (simultaneously), instead of the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each Worker has a set of private variables, which are affected only by that thread. The choice of the private variables, which are explicitly declared in the code, is a central part of OpenMP programming: all variables whose values are altered by a Worker's processing should be made private. &lt;br /&gt;
However, it should be noted that private variables have undefined values on entry to and exit from the parallel region, and that by default they have no storage association with the variables outside the parallel region.&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific for the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in both fixed and free source format. Directive continuation follows the underlying language: &amp;amp; in the case of Fortran.&lt;br /&gt;
&lt;br /&gt;
Clauses specify additional information for the directive. Important clauses include:&lt;br /&gt;
- PRIVATE (list of private variables);&lt;br /&gt;
- NOWAIT: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is placed at the end of the construct.&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. An important Work Sharing construct is the DO loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ... (do loop in Fortran)&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
&lt;br /&gt;
In this construct the iterations of the enclosed Fortran do loop are distributed among the threads. How the work is divided over the threads is governed by the SCHEDULE clause of the DO construct. An important option of SCHEDULE is DYNAMIC: given a chunk size (a fixed number of iterations), each thread processes one chunk at a time; after finishing a chunk, a thread begins another available chunk, until all iterations are completed.&lt;br /&gt;
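SCHEDULE(DYNAMIC) hands out fixed-size chunks of iterations to whichever thread is free. A rough Python analogue of this scheduling idea (hypothetical, not MOHID code) is a shared queue of chunks that worker threads drain on demand:

```python
import threading
import queue

CHUNK = 8      # iterations handed out per request, like the DYNAMIC chunk size
TOTAL = 100
chunks = queue.Queue()
for start in range(0, TOTAL, CHUNK):
    chunks.put((start, min(start + CHUNK, TOTAL)))

partials = []
lock = threading.Lock()

def worker():
    acc = 0
    while True:
        try:
            start, stop = chunks.get_nowait()   # grab the next available chunk
        except queue.Empty:
            break                               # no chunks left: this thread is done
        for i in range(start, stop):
            acc += i                            # stand-in for the loop body
    with lock:
        partials.append(acc)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(partials))   # prints 4950, regardless of which thread got which chunk
```

As in OpenMP's dynamic schedule, faster threads simply pick up more chunks, which balances the load when iterations have uneven cost.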
&lt;br /&gt;
Important characteristics of Work Sharing constructs are that they do not create new threads, that they must be encountered by all the threads or by none, and that there is no barrier on entry (an arriving thread is not required to wait for the others) but there is a barrier on exit. The exit barrier can be removed with the NOWAIT clause mentioned above.&lt;br /&gt;
&lt;br /&gt;
Work Sharing directives can appear outside the lexical extent of a parallel region. If they are dynamically bound to a parallel region (e.g. they are executed inside a subroutine called from the region) they are said to be orphaned. If, however, they appear outside any such dynamic connection with a parallel region, they are ignored and the enclosed code is executed by the Master thread only, with no parallelization.&lt;br /&gt;
&lt;br /&gt;
Inside a Work Sharing construct a critical region can be defined whenever it is convenient that only one thread at a time execute a specific piece of code. This can be used to avoid problems with input/output operations performed concurrently by several threads. It is written as follows:&lt;br /&gt;
&lt;br /&gt;
 !$OMP CRITICAL [Name]&lt;br /&gt;
 ... (code to be processed sequentially)&lt;br /&gt;
 !$OMP END CRITICAL [Name]&lt;br /&gt;
&lt;br /&gt;
In this case, no barriers exist on entry or exit, and all threads execute the enclosed code, although only one at a time. &lt;br /&gt;
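The CRITICAL construct is the familiar mutual-exclusion idea. In Python (again only an analogy, since OpenMP targets Fortran and C) a lock serializes the section while every thread still passes through it:

```python
import threading

lock = threading.Lock()
log = []

def worker(tid):
    # ... each thread's parallel work would happen here ...
    with lock:              # like !$OMP CRITICAL: one thread at a time,
        log.append(tid)     # but every thread executes the section eventually

threads = [threading.Thread(target=worker, args=(t,)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))   # prints [0, 1, 2, 3]: all four threads ran the section
```

The order in which threads enter the critical section is unspecified, which mirrors OpenMP: only exclusivity is guaranteed, not ordering.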
&lt;br /&gt;
Within a Work Sharing construct it can also be specified that a portion of the code is executed by only one thread, e.g. to read information from a file shared by all threads. This is done with the following syntax:&lt;br /&gt;
&lt;br /&gt;
 !$OMP SINGLE [clause ...]&lt;br /&gt;
 ... (code)&lt;br /&gt;
 !$OMP END SINGLE&lt;br /&gt;
&lt;br /&gt;
Unlike CRITICAL, which is executed by every thread, SINGLE is executed by only one thread; there is no barrier on entry, but SINGLE has an implied barrier on exit, which can be removed with the NOWAIT clause.&lt;br /&gt;
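SINGLE hands a piece of work (e.g. reading a shared file) to exactly one thread of the team. A hypothetical Python analogue is a one-shot flag checked under a lock:

```python
import threading

lock = threading.Lock()
state = {"initialized": 0}

def worker():
    # A rough stand-in for !$OMP SINGLE: only the first thread to
    # arrive performs the one-off setup (e.g. reading a shared file).
    with lock:
        if not state["initialized"]:
            state["initialized"] += 1   # the setup runs exactly once
    # every thread then carries on with its share of the parallel work

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state["initialized"])   # prints 1: one thread did the setup
```

In real OpenMP the runtime picks the executing thread and supplies the exit barrier itself; the flag-and-lock pattern above only imitates the "exactly one thread" behaviour.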
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1918</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1918"/>
				<updated>2009-05-11T16:55:29Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching a separate process for each model to run and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not be achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes, except for the implementation of the MPI communication calls, needed to be made. Using this feature, computational speed was improved (varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, depending of course on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection time-limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore need special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing across processors located in several computers in a cluster. &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops may be executed many times or involve large resource allocations in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and thus the savings achieved with parallelization, are larger for this loop than for the others.&lt;br /&gt;
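&lt;br /&gt;
As an illustration of the above (the loop bounds and array name are hypothetical, not taken from the MOHID code), a 3D loop can be parallelized on the j variable by placing the DO directive immediately before the j loop, which must then be the outermost loop:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
 do k = KLB, KUB&lt;br /&gt;
     Concentration(i, j, k) = Concentration(i, j, k) * DecayFactor&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;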
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several Worker threads (one for each available core): the processing inside a parallel region is divided between the Workers, which then run in parallel (simultaneously), instead of in the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each Worker has a set of private variables, which are affected only by that thread. The choice of the private variables, which are explicitly defined in programming, is a central part of OpenMP programming. All variables whose values are altered by a Worker's processing should be made private. &lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region and that, by default, these variables have no storage association with the variables outside the parallel region.&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''directives'''. These cover several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in either fixed or free format. Directive continuation follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
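&lt;br /&gt;
For example, a directive too long for one line can be continued with &amp;amp; in free format (the variable names are illustrative):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1, &amp;amp;&lt;br /&gt;
 !$OMP&amp;amp;         PrivateVariable2)&lt;br /&gt;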
&lt;br /&gt;
Clauses are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
- Private (list);&lt;br /&gt;
- Nowait: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region; it is specified at the end of the construct;&lt;br /&gt;
&lt;br /&gt;
A parallel region is typically specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(PrivateVariable1,...,PrivateVariableN)&lt;br /&gt;
&lt;br /&gt;
Inside a parallel region the work is distributed among the threads through '''Work Sharing constructs'''. An important Work Sharing construct is the Do loop, specified as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 ...&lt;br /&gt;
 !$OMP END DO&lt;br /&gt;
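&lt;br /&gt;
Putting these pieces together, a complete construct might look like the following sketch (the loop bounds, array names and DT are hypothetical, not taken from the MOHID code):&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(i, j, k)&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
 do i = ILB, IUB&lt;br /&gt;
 do k = KLB, KUB&lt;br /&gt;
     Velocity(i, j, k) = Velocity(i, j, k) + DT * Acceleration(i, j, k)&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END DO NOWAIT&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;
&lt;br /&gt;
The loop indexes are made private so that each thread iterates independently; NOWAIT removes the barrier at the end of the DO construct (the barrier at END PARALLEL still applies).&lt;br /&gt;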
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1917</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1917"/>
				<updated>2009-05-11T14:20:07Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching a separate process for each model to run and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not be achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes, except for the implementation of the MPI communication calls, needed to be made. Using this feature, computational speed was improved (varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, depending of course on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection time-limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore need special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing across processors located in several computers in a cluster. &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops may be executed many times or involve large resource allocations in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and thus the savings achieved with parallelization, are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several Worker threads (one for each available core): the processing inside a parallel region is divided between the Workers, which then run in parallel (simultaneously), instead of in the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each Worker has a set of private variables, which are affected only by that thread. The choice of the private variables, which are explicitly defined in programming, is a central part of OpenMP programming. All variables whose values are altered by a Worker's processing should be made private. &lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region and that, by default, these variables have no storage association with the variables outside the parallel region.&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''directives'''. These cover several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in either fixed or free format. Directive continuation follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
Clauses are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
- Private;&lt;br /&gt;
- Nowait: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1916</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1916"/>
				<updated>2009-05-11T14:19:47Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching a separate process for each model to run and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not be achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes, except for the implementation of the MPI communication calls, needed to be made. Using this feature, computational speed was improved (varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, depending of course on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection time-limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore need special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing across processors located in several computers in a cluster. &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops may be executed many times or involve large resource allocations in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and thus the savings achieved with parallelization, are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several Worker threads (one for each available core): the processing inside a parallel region is divided between the Workers, which then run in parallel (simultaneously), instead of in the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each Worker has a set of private variables, which are affected only by that thread. The choice of the private variables, which are explicitly defined in programming, is a central part of OpenMP programming. All variables whose values are altered by a Worker's processing should be made private. &lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region and that, by default, these variables have no storage association with the variables outside the parallel region.&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''directives'''. These cover several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in either fixed or free format. Directive continuation follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
Clauses are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
- Private;&lt;br /&gt;
- Nowait: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1915</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1915"/>
				<updated>2009-05-11T14:19:11Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching a separate process for each model to run and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not be achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes, except for the implementation of the MPI communication calls, needed to be made. Using this feature, computational speed was improved (varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, depending of course on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection time-limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore need special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing across processors located in several computers in a cluster. &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops may be executed many times or involve large resource allocations in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced on the j variable, since the resource costs, and thus the savings achieved with parallelization, are larger for this loop than for the others.&lt;br /&gt;
&lt;br /&gt;
'''''Basic OpenMP syntax:'''''&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several Worker threads (one for each available core): the processing inside a parallel region is divided between the Workers, which then run in parallel (simultaneously), instead of in the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each Worker has a set of private variables, which are affected only by that thread. The choice of the private variables, which are explicitly defined in programming, is a central part of OpenMP programming. All variables whose values are altered by a Worker's processing should be made private. &lt;br /&gt;
However, it should be noted that the values of private variables are undefined on entry to and exit from the parallel region and that, by default, these variables have no storage association with the variables outside the parallel region.&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''directives'''. These cover several actions, such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in either fixed or free format. Directive continuation follows the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
Clauses are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
- Private;&lt;br /&gt;
- Nowait: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1914</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1914"/>
				<updated>2009-05-11T14:18:38Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority for the Mohid development team when an operational hydrodynamic and water quality model for the Tagus Estuary, in Lisbon, Portugal, was implemented using the full capabilities of the [[Mohid Water]] model. Thus, parallel processing was implemented in Mohid Water in 2003, using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, due to the use of the new Intel Fortran compiler, both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching a separate process for each model to run and then, using MPICH, establishing communication between the models. This enables each sub-model to run in a different processor (even if the processor belongs to a different computer, as long as it is in the same network) and in parallel, instead of all models running in the same processor and each model having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not be achieved without the [[object oriented programming]] philosophy: since each model is an instance of [[class Model]], no changes, except for the implementation of the MPI communication calls, needed to be made. Using this feature, computational speed was improved (varying from application to application), as the whole model now takes the time of the slowest model to run plus the time to communicate with the other processes. Here, the network communication speed plays an important role, as it can become limiting. However, the amount of information passed between models, depending of course on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection time-limiting.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, at a very early stage, aimed at decomposing a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore need special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing across processors located in several computers in a cluster. &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced in the j variable, since the costs, and therefore the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
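&lt;br /&gt;
As an illustration only (the array and bound names below are invented, not taken from the MOHID source), such a 3D loop parallelized over the j variable could be written as:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL DO PRIVATE(i, k)&lt;br /&gt;
 do j = JLB, JUB&lt;br /&gt;
     do i = ILB, IUB&lt;br /&gt;
         do k = KLB, KUB&lt;br /&gt;
             ! each thread works on its own range of j&lt;br /&gt;
             Concentration(i, j, k) = Concentration(i, j, k) + DT * Flux(i, j, k)&lt;br /&gt;
         enddo&lt;br /&gt;
     enddo&lt;br /&gt;
 enddo&lt;br /&gt;
 !$OMP END PARALLEL DO&lt;br /&gt;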
&lt;br /&gt;
Basic OpenMP syntax:&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several Worker threads (one for each available core): the processing inside a parallel region is divided among the Workers, which then run in parallel (simultaneously), instead of on the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each Worker has a set of private variables, which are affected only by that thread. Choosing the private variables, which are explicitly declared in the code, is a central part of OpenMP programming: all variables whose values are altered by a Worker's processing should be made private. &lt;br /&gt;
Note, however, that private variables have undefined values on entry to and exit from the parallel region and that, by default, they have no storage association with the variables of the same name outside the parallel region.&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in either fixed or free source format. Directive continuation is written according to the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
Clauses are used to specify additional information for the directive. Important clauses are:&lt;br /&gt;
- Private: declares the variables that are private to each thread;&lt;br /&gt;
- Nowait: specifies that threads will not synchronize (i.e. wait for each other) at the end of a specific construct (e.g. a DO loop) within a parallel region;&lt;br /&gt;
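&lt;br /&gt;
A small, purely illustrative fragment (variable names invented) that combines both clauses, running a loop inside a parallel region without a barrier at the end of the loop:&lt;br /&gt;
&lt;br /&gt;
 !$OMP PARALLEL PRIVATE(AuxValue)&lt;br /&gt;
 !$OMP DO&lt;br /&gt;
 do i = 1, N&lt;br /&gt;
     AuxValue = 2.0 * Field(i)&lt;br /&gt;
     Field(i) = AuxValue&lt;br /&gt;
 enddo&lt;br /&gt;
 ! threads proceed without waiting for each other here&lt;br /&gt;
 !$OMP END DO NOWAIT&lt;br /&gt;
 !$OMP END PARALLEL&lt;br /&gt;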
 &lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1913</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1913"/>
				<updated>2009-05-11T14:10:57Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority to the Mohid development team as an operational hydrodynamic and water quality model to the Tagus Estuary, in Lisbon, Portugal, was implemented using the [[Mohid Water]] model full capabilities. Thus, parallel processing has been implemented in Mohid Water in 2003, by using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, and due to the use of the new Intel Fortran compiler both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even one belonging to a different computer, as long as it is in the same network) and in parallel, instead of all models running on the same processor and each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without its [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from implementing the MPI communication calls. With this feature, computational speed improved (varying from application to application), as the whole simulation now takes the time of the slowest model plus the time to communicate with the other processes. Network communication speed plays an important role here, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection the bottleneck.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decoupling a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing with processors located in several computers in a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced in the j variable, since the costs, and therefore the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
Basic OpenMP syntax:&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several Worker threads (one for each available core): the processing inside a parallel region is divided among the Workers, which then run in parallel (simultaneously), instead of on the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each Worker has a set of private variables, which are affected only by that thread. Choosing the private variables, which are explicitly declared in the code, is a central part of OpenMP programming: all variables whose values are altered by a Worker's processing should be made private. &lt;br /&gt;
Note, however, that private variables have undefined values on entry to and exit from the parallel region and that, by default, they have no storage association with the variables of the same name outside the parallel region.&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
The sentinel is !$OMP in either fixed or free source format. Directive continuation is written according to the underlying language: &amp;amp; in the Fortran case.&lt;br /&gt;
&lt;br /&gt;
Clauses are used to specify additional information for the directive. &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1912</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1912"/>
				<updated>2009-05-11T14:01:46Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority to the Mohid development team as an operational hydrodynamic and water quality model to the Tagus Estuary, in Lisbon, Portugal, was implemented using the [[Mohid Water]] model full capabilities. Thus, parallel processing has been implemented in Mohid Water in 2003, by using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, and due to the use of the new Intel Fortran compiler both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even one belonging to a different computer, as long as it is in the same network) and in parallel, instead of all models running on the same processor and each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without its [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from implementing the MPI communication calls. With this feature, computational speed improved (varying from application to application), as the whole simulation now takes the time of the slowest model plus the time to communicate with the other processes. Network communication speed plays an important role here, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection the bottleneck.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decoupling a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing with processors located in several computers in a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced in the j variable, since the costs, and therefore the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
Basic OpenMP syntax:&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the '''Master''' thread and the '''Worker''' threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining '''Parallel regions''' and by creating several Worker threads (one for each available core): the processing inside a parallel region is divided among the Workers, which then run in parallel (simultaneously), instead of on the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each Worker has a set of private variables, which are affected only by that thread. Choosing the private variables, which are explicitly declared in the code, is a central part of OpenMP programming: all variables whose values are altered by a Worker's processing should be made private.&lt;br /&gt;
&lt;br /&gt;
The instructions in OpenMP are provided by a set of '''directives'''. These refer to several actions such as the definition of Parallel regions, Work sharing and Data attributes. Directives are specific to the underlying programming language being used, either C or Fortran.&lt;br /&gt;
&lt;br /&gt;
In Fortran, directives are case insensitive. The basic syntax is as follows:&lt;br /&gt;
&lt;br /&gt;
 sentinel directive [clause [[,] clause] ...]&lt;br /&gt;
&lt;br /&gt;
Sentinel &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1911</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1911"/>
				<updated>2009-05-11T13:50:45Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority to the Mohid development team as an operational hydrodynamic and water quality model to the Tagus Estuary, in Lisbon, Portugal, was implemented using the [[Mohid Water]] model full capabilities. Thus, parallel processing has been implemented in Mohid Water in 2003, by using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, and due to the use of the new Intel Fortran compiler both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even one belonging to a different computer, as long as it is in the same network) and in parallel, instead of all models running on the same processor and each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without its [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from implementing the MPI communication calls. With this feature, computational speed improved (varying from application to application), as the whole simulation now takes the time of the slowest model plus the time to communicate with the other processes. Network communication speed plays an important role here, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection the bottleneck.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decoupling a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options; see more on [[compiling Mohid with OpenMP]]. Without these options the directives are ignored in normal Fortran compilation. &lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing with processors located in several computers in a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced in the j variable, since the costs, and therefore the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
Basic OpenMP syntax:&lt;br /&gt;
&lt;br /&gt;
This section provides an introduction to the basic concepts of OpenMP programming. The OpenMP language reference manual (http://www.openmp.org) should be consulted for further details.&lt;br /&gt;
&lt;br /&gt;
OpenMP processing is performed by a set of threads composed of the Master thread and the Worker threads.&lt;br /&gt;
&lt;br /&gt;
OpenMP parallelization is achieved by defining parallel regions and by creating several Worker threads (one for each available core): the processing inside a parallel region is divided among the Workers, which then run in parallel (simultaneously), instead of on the single thread used in serial processing.&lt;br /&gt;
&lt;br /&gt;
Each Worker has a set of private variables, which are affected only by that thread. Choosing the private variables, which are explicitly declared in the code, is a central part of OpenMP programming: all variables whose values are altered by a Worker's processing should be made private.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1910</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1910"/>
				<updated>2009-05-11T13:30:37Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority to the Mohid development team as an operational hydrodynamic and water quality model to the Tagus Estuary, in Lisbon, Portugal, was implemented using the [[Mohid Water]] model full capabilities. Thus, parallel processing has been implemented in Mohid Water in 2003, by using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, and due to the use of the new Intel Fortran compiler both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even one belonging to a different computer, as long as it is in the same network) and in parallel, instead of all models running on the same processor and each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without its [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from implementing the MPI communication calls. With this feature, computational speed improved (varying from application to application), as the whole simulation now takes the time of the slowest model plus the time to communicate with the other processes. Network communication speed plays an important role here, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection the bottleneck.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decoupling a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives '''to optimize loops'''. These directives are written as comments in the code and therefore require special compilation options. See more on [[compiling Mohid with OpenMP]].&lt;br /&gt;
&lt;br /&gt;
OpenMP parallel processing can be used with '''multi core processors''' present in the same computer. It cannot be used for parallel processing with processors located in several computers in a cluster.  &lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced in the j variable, since the costs, and therefore the savings achieved with parallelization, are larger in this loop than in the others.&lt;br /&gt;
&lt;br /&gt;
Basic OpenMP syntax:&lt;br /&gt;
&lt;br /&gt;
Parallelization is achieved by defining parallel regions and by creating several '''threads''' (one for each available core): the processing is divided among the threads, which then run in parallel.&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	<entry>
		<id>http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1909</id>
		<title>Parallel processing</title>
		<link rel="alternate" type="text/html" href="http://www.wiki.mohid.com/index.php?title=Parallel_processing&amp;diff=1909"/>
				<updated>2009-05-11T13:08:16Z</updated>
		
		<summary type="html">&lt;p&gt;AngelaCanas: /* Parallel processing via OpenMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The historical need in numerical models to reduce computational time became a priority to the Mohid development team as an operational hydrodynamic and water quality model to the Tagus Estuary, in Lisbon, Portugal, was implemented using the [[Mohid Water]] model full capabilities. Thus, parallel processing has been implemented in Mohid Water in 2003, by using [http://www.mcs.anl.gov/mpi/mpich/ MPICH], a free portable implementation of [http://www.mcs.anl.gov/mpi/ MPI], the standard for message-passing libraries. &lt;br /&gt;
&lt;br /&gt;
Currently, and due to the use of the new Intel Fortran compiler both Mohid Water and Mohid Land have parallelization features using [http://www.openmp.org/ OpenMP].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via MPI ==&lt;br /&gt;
The [[Mohid Water]] ability to run [[nested models]] was accomplished by creating a linked list of all the models and by attributing to each one a father-son identification, through which the models communicate. The first stage in introducing parallel processing in Mohid was to add the possibility of launching one process per model and then, using MPICH, establishing communication between the models. This enables each sub-model to run on a different processor (even one belonging to a different computer, as long as it is in the same network) and in parallel, instead of all models running on the same processor and each having to wait for the others to perform their calculations. &lt;br /&gt;
&lt;br /&gt;
Parallel processing as presently implemented in Mohid could not have been achieved without its [[object oriented programming]] philosophy: each model is an instance of [[class Model]], and no changes were needed apart from implementing the MPI communication calls. With this feature, computational speed improved (varying from application to application), as the whole simulation now takes the time of the slowest model plus the time to communicate with the other processes. Network communication speed plays an important role here, as it can become limiting. However, the amount of information passed between models, which depends on the memory allocated for each model, has not yet proven large enough to make a 100 Mbps network connection the bottleneck.&lt;br /&gt;
&lt;br /&gt;
[[Domain Composition]] is an ongoing project, in a very early stage, aimed at decoupling a domain into several subdomains that communicate with each other (2-way) via MPI.&lt;br /&gt;
&lt;br /&gt;
Find here information on how to [[setup a MOHID simulation using MPI]] and on [[compiling Mohid with MPI]].&lt;br /&gt;
&lt;br /&gt;
== Parallel processing via OpenMP ==&lt;br /&gt;
Parallel processing using OpenMP is currently being implemented in Mohid by defining directives to optimize loops. These directives are written as comments in the code and therefore require special compilation options. See more on [[compiling Mohid with OpenMP]].&lt;br /&gt;
&lt;br /&gt;
Loop optimization is introduced, in a first phase, in loops over grid variables (grid indexes k, j, i) located in the Modifier section of MOHID modules. &lt;br /&gt;
Modifier loops are typically executed many times or involve a large resource allocation in MOHID simulations, hence these are the locations with the largest potential gains from parallelization.&lt;br /&gt;
&lt;br /&gt;
In loops with several looping variables, the parallelized variable is chosen according to the cost involved in looping through that variable. E.g. in a 3D loop (k, j, i loop variables), if the j dimension is much larger than k (the number of layers) or i, then parallel processing is introduced in the j variable.&lt;br /&gt;
&lt;br /&gt;
Basic OpenMP syntax:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>AngelaCanas</name></author>	</entry>

	</feed>