Motoko • 5 years ago

Hi, I just started using CodinGame today, so I don't quite understand why I end up with standard error output like this while my code still passes.

------- THIS IS MY STANDARD ERROR---------------------

Unexpected end of /proc/mounts line `overlay / overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/ISKVD7PF2YNC2NCOKEFYAGTX5K:/var/lib/docker/overlay2/l/F7MFFX7YDORR53GXYQ67BINBYJ:/var/lib/docker/overlay2/l/RAPKSZE33KIKOMTFYTUBZR55XI:/var/lib/docker/overlay2/l/HJ4Y66DLXALJI7ZEUNTNBT624J:/var/lib/docker/overlay2/l/MMJV4M7DGEPMZOQEUT4SCQ2BKX:/var/lib/docker/overlay2/l/DBID5NMUWBSO2CUWLNIPCKGX5A:/var/lib/docker/overlay2/l/IMATGCQKVBAS7MSGL44RV5MZFR:/var/lib/docker/overlay2/l/CIZ2DO3FSCJLDHHC3TYICRQRRE:/var/lib/docker/overlay2/l/Q7Y2WMYDMDO2S'
Unexpected end of /proc/mounts line `IM4HQZE4ITOFX:/var/lib/docker/overlay2/l/DQCG4DFUEJOZTRBS3EBJM4U5TR:/var/lib/docker/overlay2/l/IOLC4C6BUUWM4XX7IGLLQAG4WW:/var/lib/docker/overlay2/l/HAJ2SGNGXUTZOHBYW6LU5LMOUX:/var/lib/docker/overlay2/l/ORZNVWUMSFFKASYY5CQZVQCBYN:/var/lib/docker/overlay2/l/DFL2XTC2JZQBTSG42IRCPRY6UY:/var/lib/docker/overlay2/l/CBV4LLYZWQNYOL2ESAZIL63XLT:/var/lib/docker/overlay2/l/NTJ56UWHM6XDG35R4MMMJ2UPQV,upperdir=/var/lib/docker/overlay2/a578b55f4114ed3d72a59f9355be0e0d683cbadcd3f632b7aaead083aa946cab/diff,workdir=/var/li'
mpirun noticed that process rank 0 with PID 15 on node 58637ec980be exited on signal 11 (Segmentation fault).
Traceback (most recent call last):
File "check_blocking.py", line 16, in <module>
v = int(line)
ValueError: invalid literal for int() with base 10: '[58637ec980be:00015] *** Process received signal ***'

Anonymous • 3 years ago

I have the same issue. Does anyone know what is wrong with my code?

Anonymous • 3 years ago

I don't think there is anything wrong with the code... I commented out everything in main() and the error still comes up. The problem seems to be with the test itself. I'll try running it locally.

fujisan43 • 4 years ago

A solution for the splitting exercise would be nice!

Anastasia Shamakina • 4 years ago

Wow, this is super! I have never seen such a high-quality tutorial. Thank you for your work!

Please correct a small mistake in the "MPI Status Retrieval" section, in the following line:
MPI_Recv(&values, 5, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, &status);
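For reference, MPI_Recv expects the communicator as its sixth argument, just before the status pointer, so the corrected line would presumably be (a sketch, with the variable declarations assumed):

```cpp
#include <mpi.h>

// MPI_Recv(buf, count, datatype, source, tag, comm, status) --
// the quoted line was missing the MPI_COMM_WORLD argument:
double values[5];
MPI_Status status;
MPI_Recv(values, 5, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
```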

AkshatShah21 • 3 years ago

In P2P Communications, exercise 1, two command line arguments are passed as input. The code reads argv[1], but it appears that the rank 0 process gets one of the args and rank 1 process gets the other. How does this happen?

So I went to the GitHub repo and found the answer. Apparently the script runs the mpirun command in a way that allows this.
There is more detail on the mpirun man page.
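For anyone else puzzled by this: mpirun supports a multi-program (MPMD) launch in which colon-separated blocks each get their own argument list, which would explain different ranks seeing different argv values. A hypothetical invocation (the executable name and arguments are made up):

```shell
# Each colon-separated block launches with its own argv,
# so rank 0 sees "3" as argv[1] while rank 1 sees "5".
mpirun -np 1 ./p2p_exercise 3 : -np 1 ./p2p_exercise 5
```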

Nonconformizt • 3 years ago

If you keep seeing

"An error occurred in MPI_Scatter on communicator MPI_COMM_WORLD"

or "Segmentation fault", the following may help:

The MPI_Scatter parameters send_count and recv_count are the number of
elements you want to send to each process (I had thought
send_count was the total number of elements in the buffer).

The same goes for MPI_Gather: send_count = recv_count = the element count for a single process.

I found this out because my program produced errors in the console and I
didn't see any fractal plot. (There was a "Success" message though, which is a
bit misleading.)
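A minimal sketch illustrating the point (the process count and array sizes are made up): with 4 processes scattering 8 integers, send_count and recv_count are both 2, the per-process count.

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int per_proc = 2;   // elements PER PROCESS, not the total
    int send_buf[8];          // assumes 4 processes: 4 * 2 = 8
    if (rank == 0)
        for (int i = 0; i < size * per_proc; ++i)
            send_buf[i] = i;

    int recv_buf[2];
    // Both counts are the per-process element count:
    MPI_Scatter(send_buf, per_proc, MPI_INT,
                recv_buf, per_proc, MPI_INT,
                0, MPI_COMM_WORLD);

    printf("Rank %d received %d and %d\n", rank, recv_buf[0], recv_buf[1]);

    MPI_Finalize();
    return 0;
}
```

Run with mpirun -np 4; passing the total (8) as send_count here would make rank 0 read past the end of send_buf and typically produce exactly the segfault described above.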

C111L • 5 years ago

Thank you so much for making this tutorial! It's so much better than the course I am taking.

Mạnh Lê • 5 years ago

Thank you! A great tutorial!

Motoko • 5 years ago

Hi, the 8th tutorial is both hard and useful. I successfully get result = 8s for the non-blocking case, using Intel MPI in Visual Studio.
However, for the blocking case, I get result = 9s, which is the 3s and 6s of process 0 combined. Does anyone know why?

Joao Salvado • 2 years ago

void play_non_blocking_scenario() {
  MPI_Request request;
  MPI_Status status;
  int request_finished = 0;
  MPI_Request request2;
  MPI_Status status2;
  int request_finished2 = 0;

  // Initialising buffer:
  for (int i = 0; i < buffer_count; ++i)
    buffer[i] = (rank == 0 ? i * 2 : 0);

  // Starting the chronometer
  double time = -MPI_Wtime(); // This command helps us measure time. We will see more about it later on!

  ////////// You should not modify anything BEFORE this point //////////

  if (rank == 0) {
    // 1- Initialise the non-blocking send to process 1
    MPI_Isend(buffer, buffer_count, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);

    // Work for 6 seconds
    double time_left = 6000.0;
    while (time_left > 0.0) {
      usleep(1000); // We work for 1ms

      // 2- Test if the request is finished (only if not already finished)
      if (!request_finished)
        MPI_Test(&request, &request_finished, &status);

      // 1ms less to work
      time_left -= 1.0;
    }

    // 3- If the request is not yet complete, wait here (outside the work loop)
    if (!request_finished)
      MPI_Wait(&request, &status);

    // Modifying the buffer for second step
    for (int i = 0; i < buffer_count; ++i)
      buffer[i] = -i;

    // 4- Prepare another request for process 1 with a different tag
    MPI_Isend(buffer, buffer_count, MPI_INT, 1, 1, MPI_COMM_WORLD, &request2);

    // Work for 3 seconds
    time_left = 3000.0;
    while (time_left > 0.0) {
      usleep(1000); // We work for 1ms

      // 5- Test if the request is finished (only if not already finished)
      if (!request_finished2)
        MPI_Test(&request2, &request_finished2, &status2);

      // 1ms less to work
      time_left -= 1.0;
    }

    // 6- Wait for it to finish
    if (!request_finished2)
      MPI_Wait(&request2, &status2);
  }
  else {
    // Work for 5 seconds
    usleep(5000000);

    // 7- Initialise the non-blocking receive from process 0
    MPI_Irecv(buffer, buffer_count, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);

    // 8- Wait here for the request to be completed
    MPI_Wait(&request, &status);

    // Work for 3 seconds
    usleep(3000000);

    // 9- Initialise another non-blocking receive
    MPI_Irecv(buffer, buffer_count, MPI_INT, 0, 1, MPI_COMM_WORLD, &request2);

    // 10- Wait for it to be completed
    MPI_Wait(&request2, &status2);
  }

  ////////// You should not modify anything AFTER this point //////////

  // Stopping the chronometer
  time += MPI_Wtime();

  // This line gives us the maximum time elapsed on each process.
  // We will see about reduction later on!
  double final_time;
  MPI_Reduce(&time, &final_time, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

  if (rank == 0)
    std::cout << "Total time for non-blocking scenario : " << final_time << "s" << std::endl;
}

Sys • 5 years ago

Actually, I'm working on a simple MPI program that does matrix multiplication, and I need to compare its performance and output with a sequential program. Could you please guide me on how to do the comparison?
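One common approach, sketched here under assumed names (multiply_mpi and multiply_seq are hypothetical stand-ins for your own routines): time each version with a wall clock, then compare the result matrices element by element with a small tolerance.

```cpp
#include <mpi.h>
#include <cmath>
#include <cstdio>

// Hypothetical helpers -- substitute your own routines:
void multiply_mpi(double *C, int n);  // parallel version, result on rank 0
void multiply_seq(double *C, int n);  // sequential reference version

void compare_versions(int rank, int n) {
    double *C_mpi = new double[n * n];

    double t0 = MPI_Wtime();
    multiply_mpi(C_mpi, n);
    double t_mpi = MPI_Wtime() - t0;

    if (rank == 0) {
        double *C_seq = new double[n * n];
        double t1 = MPI_Wtime();
        multiply_seq(C_seq, n);
        double t_seq = MPI_Wtime() - t1;

        // Correctness: element-wise comparison with a small tolerance,
        // since floating-point sums may differ slightly between versions.
        bool ok = true;
        for (int i = 0; i < n * n; ++i)
            if (std::fabs(C_mpi[i] - C_seq[i]) > 1e-9) ok = false;

        printf("seq: %fs  mpi: %fs  speedup: %.2fx  results %s\n",
               t_seq, t_mpi, t_seq / t_mpi, ok ? "match" : "differ");
        delete[] C_seq;
    }
    delete[] C_mpi;
}
```

Speedup is just the sequential time divided by the parallel time; run the MPI version with several process counts to see how it scales.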