MPI + pthreads: program stuck on MPI_Ssend and MPI_Recv

Posted 2019-05-20 19:46

Question:

I have been debugging this program for 2 weeks. It is only 93 lines, but I still cannot find the bug. Please help me.

This program runs fine on my laptop, but it gets stuck when I run it on my lab's cluster, on the Shanghai Supercomputing Center (China), and on the Jinan Supercomputing Center (China).

The logic of this program is very simple. There are 2 MPI processes: one is the master (rank 0), the other is the slave (rank 1). The master waits for requests on tag 0. The slave sends a message to the master on tag 0 every second and then waits for an ACK message. Once the master gets a request, it sends an ACK message back to the slave on tag 100.

The problem is that after a few seconds the program gets stuck. The master is stuck in MPI_Recv, waiting for a request on tag 0. The slave is stuck in MPI_Ssend, trying to send a message to the master on tag 0. These MPI calls should match each other, but I don't know why the program hangs.

Some hints: the program does not get stuck in any of the following situations:

1. Adding a sleep() call after pthread_create(&tid,&attr,master_server_handler,NULL); in the void *master_server(void *null_arg) function.

Or

2. Using a joinable pthread attribute instead of a detached one when creating master_server_handler (replacing pthread_create(&tid,&attr,master_server_handler,NULL); with pthread_create(&tid,NULL,master_server_handler,NULL); followed by pthread_join(tid,NULL);).

Or

3. Calling master_server_handler(NULL) directly instead of pthread_create(&tid,&attr,master_server_handler,NULL);.

Or

4. Replacing the MPI_Ssend in void *master_server_handler(void *arg) with MPI_Send.

The program works under each of these modifications. All of them can be found in the comments of the source code below.

I don't know why it hangs. I tried Open MPI and MPICH2; the program gets stuck with both.

Any hints please...

I can provide VPN access to my lab if needed, so you can log in to the cluster. (e-mail: 674322022@qq.com)

BTW: I enabled thread support when compiling both Open MPI and MPICH2. For Open MPI the configure options were --with-threads=posix --enable-mpi-thread-multiple; for MPICH2 it was --enable-threads.
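For reference, here is a small standalone check (just a sketch, separate from the program below) that prints the thread support level the library actually grants at runtime; MPI_Query_thread is a standard MPI call, the rest is only illustration:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int level;

    MPI_Init(&argc, &argv);

    /* ask the library which thread level the plain MPI_Init granted */
    MPI_Query_thread(&level);
    printf("granted thread level: %d (MPI_THREAD_MULTIPLE is %d)\n",
           level, MPI_THREAD_MULTIPLE);

    MPI_Finalize();
    return 0;
}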


The machines in my lab run CentOS. The output of uname -a is: Linux node5 2.6.18-238.12.1.el5xen #1 SMP Tue May 31 14:02:29 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux.

I run the program using: mpiexec -n 2 ./a.out

Following are the source code, the output of the program, and the back-trace info from when the program got stuck.

Source code

#include "stdio.h"
#include "pthread.h"
#include "stdlib.h"
#include "string.h"
#include "mpi.h"

void send_heart_beat();
void *heart_beat_daemon(void *null_arg);
void *master_server(void *null_arg);
void *master_server_handler(void *arg);

int main(int argc,char *argv[])
{
    int p,id;
    pthread_t tid;

    MPI_Init(&argc,&argv);
    MPI_Comm_size(MPI_COMM_WORLD,&p);
    MPI_Comm_rank(MPI_COMM_WORLD,&id);

    if(id==0)
    {
        //master
        pthread_create(&tid,NULL,master_server,NULL);
        pthread_join(tid,NULL);
    }
    else
    {
        //slave
        pthread_create(&tid,NULL,heart_beat_daemon,NULL);
        pthread_join(tid,NULL);
    }

    MPI_Finalize();

    return 0;
}

void *heart_beat_daemon(void *null_arg)
{
    while(1)
    {
        sleep(1);
        send_heart_beat();
    }
}

void send_heart_beat()
{
    char send_msg[5];
    char ack_msg[5];

    strcpy(send_msg,"AAAA");

    MPI_Ssend(send_msg,5,MPI_CHAR,0,0,MPI_COMM_WORLD);

    MPI_Recv(ack_msg,5,MPI_CHAR,0,100,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
}

void *master_server(void *null_arg)
{
    char msg[5];

    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr,PTHREAD_CREATE_DETACHED);

    while(1)
    {

        MPI_Recv(msg,5,MPI_CHAR,1,0,MPI_COMM_WORLD,MPI_STATUS_IGNORE);

        pthread_create(&tid,&attr,master_server_handler,NULL);
//      sleep(2);
//      master_server_handler(NULL);
//      pthread_create(&tid,NULL,master_server_handler,fun_arg);
//      pthread_join(tid,NULL);
    }
}

void *master_server_handler(void *arg)
{
    static int count;
    char ack[5];

    count ++;
    printf("recved a msg %d\n",count);

    strcpy(ack,"ACK:");
    MPI_Ssend(ack,5,MPI_CHAR,1,100,MPI_COMM_WORLD);
//  MPI_Send(ack,5,MPI_CHAR,1,100,MPI_COMM_WORLD);
    return NULL;
}

Output of the program:

    recved a msg 1
    recved a msg 2
    recved a msg 3
    recved a msg 4
    recved a msg 5
    recved a msg 6
    recved a msg 7
    recved a msg 8
    recved a msg 9
    recved a msg 10
    recved a msg 11
    recved a msg 12
    recved a msg 13
    recved a msg 14
    recved a msg 15

back-trace of master when stuck:

    (gdb) bt
    #0  opal_progress () at runtime/opal_progress.c:175
    #1  0x00002b17ed288f75 in opal_condition_wait (addr=<value optimized out>,
        count=<value optimized out>, datatype=<value optimized out>, src=1, tag=0,
        comm=0x601520, status=0x0) at ../../../../opal/threads/condition.h:99
    #2  ompi_request_wait_completion (addr=<value optimized out>,
        count=<value optimized out>, datatype=<value optimized out>, src=1, tag=0,
        comm=0x601520, status=0x0) at ../../../../ompi/request/request.h:377
    #3  mca_pml_ob1_recv (addr=<value optimized out>, count=<value optimized out>,
        datatype=<value optimized out>, src=1, tag=0, comm=0x601520, status=0x0)
        at pml_ob1_irecv.c:105
    #4  0x00002b17ed1ef049 in PMPI_Recv (buf=0x2b17f2495120, count=5,
        type=0x601320, source=1, tag=0, comm=0x601520, status=0x0) at precv.c:78
    #5  0x0000000000400d75 in master_server (null_arg=0x0) at main.c:73
    #6  0x0000003b5a00683d in start_thread () from /lib64/libpthread.so.0
    #7  0x0000003b594d526d in clone () from /lib64/libc.so.6

back-trace of slave when stuck:

    (gdb) bt
    #0  0x00002adff87ef975 in opal_atomic_cmpset_32 (btl=<value optimized out>, endpoint=<value optimized out>,
        registration=0x0, convertor=0x124e46a8, order=0 '\000', reserve=32, size=0x2adffda74fe8, flags=3)
        at ../../../../opal/include/opal/sys/amd64/atomic.h:85
    #1  opal_atomic_lifo_pop (btl=<value optimized out>, endpoint=<value optimized out>, registration=0x0,
        convertor=0x124e46a8, order=0 '\000', reserve=32, size=0x2adffda74fe8, flags=3)
        at ../../../../opal/class/opal_atomic_lifo.h:100
    #2  mca_btl_sm_prepare_src (btl=<value optimized out>, endpoint=<value optimized out>, registration=0x0,
        convertor=0x124e46a8, order=0 '\000', reserve=32, size=0x2adffda74fe8, flags=3) at btl_sm.c:697
    #3  0x00002adff8877678 in mca_bml_base_prepare_src (sendreq=0x124e4600, bml_btl=0x124ea860, size=5, flags=0)
        at ../../../../ompi/mca/bml/bml.h:339
    #4  mca_pml_ob1_send_request_start_rndv (sendreq=0x124e4600, bml_btl=0x124ea860, size=5, flags=0)
        at pml_ob1_sendreq.c:815
    #5  0x00002adff8869e82 in mca_pml_ob1_send_request_start (buf=0x2adffda75100, count=5,
        datatype=<value optimized out>, dst=0, tag=0, sendmode=MCA_PML_BASE_SEND_SYNCHRONOUS, comm=0x601520)
        at pml_ob1_sendreq.h:363
    #6  mca_pml_ob1_send (buf=0x2adffda75100, count=5, datatype=<value optimized out>, dst=0, tag=0,
        sendmode=MCA_PML_BASE_SEND_SYNCHRONOUS, comm=0x601520) at pml_ob1_isend.c:119
    #7  0x00002adff87d2be6 in PMPI_Ssend (buf=0x2adffda75100, count=5, type=0x601320, dest=0, tag=0,
        comm=0x601520) at pssend.c:76
    #8  0x0000000000400cf4 in send_heart_beat () at main.c:55
    #9  0x0000000000400cb6 in heart_beat_daemon (null_arg=0x0) at main.c:44
    #10 0x0000003b5a00683d in start_thread () from /lib64/libpthread.so.0
    #11 0x0000003b594d526d in clone () from /lib64/libc.so.6

Answer 1:

MPI provides four different levels of thread support: MPI_THREAD_SINGLE, MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED, and MPI_THREAD_MULTIPLE. In order to be able to make MPI calls from different threads concurrently, you have to initialise MPI with the MPI_THREAD_MULTIPLE level of thread support and make sure that the library actually provides that level:

int provided;

MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
if (provided < MPI_THREAD_MULTIPLE)
{
   printf("Error: the MPI library doesn't provide the required thread level\n");
   MPI_Abort(MPI_COMM_WORLD, 0);
}

If you call MPI_Init instead of MPI_Init_thread, the library is free to choose whatever default thread support level its creators deemed best. For Open MPI that is MPI_THREAD_SINGLE, i.e. no support for threads. You can control the default level by setting the environment variable OMPI_MPI_THREAD_LEVEL, but that is not recommended - MPI_Init_thread should be used instead.
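Applied to the program in the question, only main() needs to change; a sketch (everything else stays as posted):

int main(int argc, char *argv[])
{
    int p, id, provided;
    pthread_t tid;

    /* request full thread support instead of plain MPI_Init */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
    {
        printf("Error: the MPI library doesn't provide the required thread level\n");
        MPI_Abort(MPI_COMM_WORLD, 0);
    }

    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    if (id == 0)
    {
        /* master: serve heart-beat requests */
        pthread_create(&tid, NULL, master_server, NULL);
        pthread_join(tid, NULL);
    }
    else
    {
        /* slave: send a heart-beat every second */
        pthread_create(&tid, NULL, heart_beat_daemon, NULL);
        pthread_join(tid, NULL);
    }

    MPI_Finalize();
    return 0;
}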



Answer 2:

IIRC, it's bad juju to call MPI_Init in one thread and then call MPI functions from another thread. Take a look at the documentation for MPI_Init_thread for how to make MPI thread-safe(-er).



Tags: pthreads mpi