[Kratos] SynchronizeDofs

Riccardo Rossi rrossi en cimne.upc.edu
Mon Jun 17 18:45:22 CEST 2013


Dear Bui,
      I wrote this code long ago and I don't remember the details; however,

local_keys[destination] and remote_keys[destination]

for example represent the data to be exchanged with the "destination"
processor. What is not clear about this?
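
As a sketch (not Kratos code, names are illustrative): each rank keeps one send buffer and one receive buffer per neighbour rank, with dof keys modelled here as plain ints, and a single pairwise exchange fills the remote side from the peer's local side:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Illustrative sketch only: per-neighbour exchange buffers.
// local_keys[d] holds what this rank sends to rank d;
// remote_keys[d] is filled with what rank d sends back.
struct DofExchangeBuffers {
    std::map<int, std::vector<int>> local_keys;   // outgoing, indexed by destination rank
    std::map<int, std::vector<int>> remote_keys;  // incoming, indexed by source rank
};

// Simulate what one MPI_Sendrecv pair achieves between ranks a and b:
// each side receives the peer's outgoing buffer addressed to it.
inline void exchange(DofExchangeBuffers& a, int rank_a,
                     DofExchangeBuffers& b, int rank_b) {
    a.remote_keys[rank_b] = b.local_keys[rank_a];
    b.remote_keys[rank_a] = a.local_keys[rank_b];
}
```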



Please observe that this works under the hypothesis that the nodes already
have the same dof list on both processors (which shall always be the case,
since we add the dofs from the python function AddDofs), and that what
happens is simply that some dofs are NOT included by the elements in one
partition while they are in the other partition.
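
To illustrate that hypothesis (a sketch, not Kratos code): if the nodal dof lists ever differed between partitions, the consistent list would be their union, which is exactly what adding every dof up front via AddDofs guarantees. Modelling dof keys as ints:

```cpp
#include <cassert>
#include <set>

// Illustrative only: the consistent dof list of a node shared by two
// partitions is the union of the lists seen by each partition.
// E.g. {DISPLACEMENT, WATER_PRESSURE} on one side and {DISPLACEMENT}
// on the other must merge to {DISPLACEMENT, WATER_PRESSURE}.
inline std::set<int> consistent_dof_list(const std::set<int>& partition_a,
                                         const std::set<int>& partition_b) {
    std::set<int> merged = partition_a;
    merged.insert(partition_b.begin(), partition_b.end());
    return merged;
}
```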

Hope this helps... otherwise try to describe the problem in a bit more detail...

ciao
riccardo
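
The per-colour handshake in the quoted code below can be sketched without MPI as follows (assumed behaviour, for illustration only): each pair of neighbour ranks first swaps buffer sizes, resizes its receive buffers accordingly, and then swaps the payloads (gids, ndofs, keys). Here the two "ranks" are just local vectors and the swap is a direct copy:

```cpp
#include <cassert>
#include <vector>

// Non-MPI sketch of the two-phase exchange between a pair of ranks.
inline void sized_exchange(const std::vector<int>& send_a, std::vector<int>& recv_a,
                           const std::vector<int>& send_b, std::vector<int>& recv_b) {
    // phase 1: exchange sizes (the first MPI_Sendrecv of a single int),
    // so each side can size its receive buffer before the payload arrives
    // (mirrors remote_gids[destination].resize(remote_gids_size))
    recv_a.resize(send_b.size());
    recv_b.resize(send_a.size());
    // phase 2: exchange the payloads (the following MPI_Sendrecv calls)
    recv_a = send_b;
    recv_b = send_a;
}
```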



On Mon, Jun 17, 2013 at 5:46 PM, Hoang Giang Bui <hgbk2008 en gmail.com> wrote:

>  Hi Riccardo
>
> I guess you are referring to
> trilinos_residualbased_elimination_builder_and_solver.h. However, it still
> does not handle all the dofs correctly: in my case some dofs are
> synchronized and some are not. Looking at SetUpDofSet, I was not clear
> about the meaning of the code segment below:
>
>
>         //now send our local ndofs to the other processors using coloring
>         for (unsigned int i_color = 0; i_color < r_model_part.GetCommunicator().NeighbourIndices().size(); i_color++)
>         {
>             int destination = r_model_part.GetCommunicator().NeighbourIndices()[i_color];
>             if (destination >= 0)
> //            if (destination >= 0 && destination != rank)
>             {
>                 MPI_Status status;
>                 int send_tag = i_color;
>                 int receive_tag = i_color;
>
>                 //first of all obtain the number of nodes we will need to get remotely
>                 int remote_gids_size;
>                 int local_gids_size = local_gids[destination].size();
>                 MPI_Sendrecv(&local_gids_size, 1, MPI_INT, destination, send_tag, &remote_gids_size, 1, MPI_INT, destination, receive_tag, MPI_COMM_WORLD, &status);
>                 remote_gids[destination].resize(remote_gids_size);
>                 remote_ndofs[destination].resize(remote_gids_size);
>
>                 //receive the remote GiDs
>                 MPI_Sendrecv(local_gids[destination].data(), local_gids[destination].size(), MPI_INT, destination, send_tag, remote_gids[destination].data(), remote_gids_size, MPI_INT, destination, receive_tag, MPI_COMM_WORLD, &status);
>
>                 //receive the remote ndofs (same size as the gids)
>                 MPI_Sendrecv(local_ndofs[destination].data(), local_ndofs[destination].size(), MPI_INT, destination, send_tag, remote_ndofs[destination].data(), remote_gids_size, MPI_INT, destination, receive_tag, MPI_COMM_WORLD, &status);
>
>                 //find the number of non local dofs to receive
>                 int remote_keys_size;
>                 int local_keys_size = local_keys[destination].size();
>                 MPI_Sendrecv(&local_keys_size, 1, MPI_INT, destination, send_tag, &remote_keys_size, 1, MPI_INT, destination, receive_tag, MPI_COMM_WORLD, &status);
>                 remote_keys[destination].resize(remote_keys_size);
>
>                 //receive the keys
>                 MPI_Sendrecv(local_keys[destination].data(), local_keys[destination].size(), MPI_INT, destination, send_tag, remote_keys[destination].data(), remote_keys_size, MPI_INT, destination, receive_tag, MPI_COMM_WORLD, &status);
>             }
>         }
>
>
> Can you explain why we use the same source and destination rank in
> MPI_Sendrecv, and what the outcome of this part is?
>
>
> Ciao
> Bui
>
>
>
>
>
> On 06/13/13 13:02, Riccardo Rossi wrote:
>
>    Dear Bui,
>               the issue you raise is quite hairy, but I think I solved it
> some time ago in the builder and solver.
>
>  To begin with, all of the nodes in Kratos shall have the same list of
> dofs, which means that some of the dofs may be uninitialized on some of the
> nodes (or, even worse, may be used in one domain but not in the other).
>
>  In any case, when dofs are synchronized the "owner" simply sends its
> values to all of the others, assuming that each node has all of the dofs...
>
>
>  This works because the builder and solver (with the modifications I
> committed some time ago) shall correctly handle one node having two
> different lists of active dofs on two different MPI domains.
>  This implies that after the solution is finished the owner of the node
> shall have the correct values, and hence the sync should work.
>
>  To understand how this works, look around line 540 of the
> trilinos_residualbased_builderandsolver....
>
>  Riccardo
>
>
>
>
> On Thu, Jun 13, 2013 at 12:04 PM, Hoang Giang Bui <hgbk2008 en gmail.com>wrote:
>
>>
>> Hi
>>
>> I want to understand more about the method Kratos uses to synchronize
>> dofs between different processes. I have a case like this:
>>
>> + Process 1 contains elements with multiple dofs (DISPLACEMENT(_XYZ) &
>> WATER_PRESSURE)
>>
>> + Process 2 contains only conditions which involve only DISPLACEMENT(_XYZ)
>>
>> + node k belongs to process 2
>>
>> When dofs are enumerated in process 1, node k on the boundary of processes
>> 1 & 2 is fully enumerated with DISPLACEMENT(_XYZ) and WATER_PRESSURE.
>> When dofs are enumerated in process 2, node k only has DISPLACEMENT(_XYZ).
>>
>> When dofs are synchronized, the dofs on process 2 supersede those on
>> process 1, which means the WATER_PRESSURE dof on node k is assigned EquationId 0.
>>
>> In this case, what should I do to enable correct dof enumeration in
>> parallel?
>>
>> Ciao
>> Bui
>>
>> _______________________________________________
>> Kratos mailing list
>> Kratos en listas.cimne.upc.edu
>> http://listas.cimne.upc.edu/cgi-bin/mailman/listinfo/kratos
>>
>
>
>


-- 

Dr. Riccardo Rossi, Civil Engineer

Member of Kratos Team

International Center for Numerical Methods in Engineering - CIMNE
Campus Norte, Edificio C1

c/ Gran Capitán s/n

08034 Barcelona, España

Tel:        (+34) 93 401 56 96

Fax:       (+34) 93.401.6517
web:       www.cimne.com
------------ next part ------------
An HTML attachment was scrubbed...
URL: http://listas.cimne.upc.edu/pipermail/kratos/attachments/20130617/622326f0/attachment-0001.htm


More information about the Kratos mailing list