In the parallel algorithm, domain decomposition occurs at two levels:
The interaction between the two levels of domain decomposition is performed by the routines issnddhs, isrcddhs, isrcddnd and issnddnd; see figure 1. These routines either perform a local copy of data from host to node (which is unnecessary when the node resides on the host, because the data is then shared) or a PVM send of the data. The PVM multi-block messages always consist of self-contained submessages for the internal subfaces of blocks; see the messages needed for the transport of domain decomposition information.
Figure 1: Domain decomposition on two levels
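The copy-versus-send decision described above can be sketched as follows. This is an illustrative model, not the actual Fortran API: the routine name, arguments and the `send` callable are assumptions standing in for the real issnddhs/issnddnd logic and a PVM send.

```python
# Hypothetical sketch (names and signature are illustrative, not the actual
# Fortran routines): a send routine checks whether the destination node
# resides on the host. If so, the data is already shared and no transfer is
# needed; otherwise the data is packed into self-contained submessages, one
# per internal subface, and sent as a single multi-block message.

def send_dd_data(block, dest_node, host_node, send):
    """Transfer domain-decomposition data for one block to dest_node.

    block      -- dict mapping internal subface ids to their data
    dest_node  -- node that must receive the data
    host_node  -- node id of the process that runs on the host
    send       -- callable standing in for a PVM send
    """
    if dest_node == host_node:
        # Node resides on the host: the arrays are shared, nothing to do.
        return "shared"
    # One self-contained submessage per internal subface, as in the
    # multi-block messages, bundled into a single PVM message.
    message = [(subface, data) for subface, data in sorted(block.items())]
    send(dest_node, message)
    return "sent"
```

The point of the self-contained submessages is that the receiver can unpack each subface independently, without consulting global layout information.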
For the global domain decomposition the host stores, for each block, an array virt of virtual unknowns. Each node stores such an array for each of its local blocks. If a certain node resides on the host, that node and the host program share the arrays of virtual unknowns for the local blocks. The virt array contains, for each block, the virtual unknowns that are needed for discretizing across internal boundaries. The virt array is just large enough to contain all virtual unknowns, and no larger.
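The sizing rule for virt can be illustrated with a small sketch. The data layout below is an assumption for illustration only; the real virt array is a Fortran work array whose internal layout is not given here.

```python
# Illustrative model (not the actual data layout): a block's virt array
# holds exactly one slot per virtual unknown needed to discretize across
# its internal boundaries, and no more.

def build_virt(internal_subfaces):
    """Return a virt array and an index for one block.

    internal_subfaces -- dict mapping an internal subface id to the number
                         of virtual unknowns needed across that boundary
    """
    index = {}
    offset = 0
    for subface, count in sorted(internal_subfaces.items()):
        for k in range(count):
            index[(subface, k)] = offset
            offset += 1
    virt = [0.0] * offset  # just large enough, not larger
    return virt, index
```

A block with two internal subfaces needing two and one virtual unknowns thus gets a virt array of length three, with nothing allocated for external boundaries.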
The routines ismblk and isddbnd implement the interaction between the subdomain solvers and the virt array:
All the multi-block transport routines (ismblk, issnddhs, isrcddhs, isrcddnd and issnddnd) are based on four basic communication routines: isddput, isddget, isddvput and isddvget. These routines provide the communication between the solut and virt arrays; see figure 2. The intermediate representation on which these routines operate is the same as that used in the multi-block messages.
Figure 2: The four basic communication routines
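The role of the four basic routines can be sketched as simple gather/scatter operations through the intermediate buffer. The routine names mirror the documented ones, but the signatures and the mapping argument are assumptions made for illustration; the actual routines operate on Fortran arrays.

```python
# Hedged sketch: the four routines move data between the solution array
# (solut) and the virtual-unknowns array (virt) through an intermediate
# buffer in the same representation as the multi-block messages.
# 'mapping' (an assumed argument) lists, for each buffer slot, the index
# in solut or virt it corresponds to.

def isddput(solut, mapping, buffer):
    """Gather solution values into the intermediate buffer."""
    for i, j in enumerate(mapping):
        buffer[i] = solut[j]

def isddget(solut, mapping, buffer):
    """Scatter values from the intermediate buffer into solut."""
    for i, j in enumerate(mapping):
        solut[j] = buffer[i]

def isddvput(virt, mapping, buffer):
    """Gather virtual unknowns into the intermediate buffer."""
    for i, j in enumerate(mapping):
        buffer[i] = virt[j]

def isddvget(virt, mapping, buffer):
    """Scatter values from the intermediate buffer into virt."""
    for i, j in enumerate(mapping):
        virt[j] = buffer[i]
```

Because all four routines share one intermediate representation, a buffer filled by isddput on one side can be consumed directly by isddvget on the other, whether the transfer is a local copy or a PVM message.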