README.CV -- Condition Variables
--------------------------------
The original implementation of condition variables in
pthreads-win32 was based on a discussion paper:

"Strategies for Implementing POSIX Condition Variables
on Win32": http://www.cs.wustl.edu/~schmidt/win32-cv-1.html

The changes suggested below were made on Feb 6 2001. This
file is included in the package for the benefit of anyone
interested in understanding the pthreads-win32 implementation
of condition variables and the (sometimes subtle) issues that
it attempts to resolve.

Thanks go to the individuals whose names appear throughout
the following text.

--------------------------------------------------------------------

fyi.. (more detailed problem description/demos + possible fix/patch)
To: ace-bugs@cs.wustl.edu
From: Alexander Terekhov/Germany/IBM@IBMDE
Subject: Implementation of POSIX CVs: spur.wakeups/lost
         signals/deadlocks/unfairness

5.1.12 (pthread-win32 snapshot 2000-12-29)

HOST MACHINE and OPERATING SYSTEM:
IBM IntelliStation Z Pro, 2 x XEON 1GHz, Win2K

TARGET MACHINE and OPERATING SYSTEM, if different from HOST:

COMPILER NAME AND VERSION (AND PATCHLEVEL):
Microsoft Visual C++ 6.0

AREA/CLASS/EXAMPLE AFFECTED:
Implementation of POSIX condition variables - OS.cpp/.h

DOES THE PROBLEM AFFECT:
a) spurious wakeups (minor problem)
d) unfairness (minor problem)
Please see the attached copy of the discussion thread
from comp.programming.threads for more details on
some reported problems. (I've also posted a "fyi"
message to ace-users a week or two ago but
unfortunately did not get any response so far.)

It seems that the current implementation suffers from
two essential problems:

1) cond.waiters_count does not accurately reflect the
number of waiters blocked on the semaphore - without
proper synchronisation that could result (in the
time window when the counter is not accurate)
in spurious wakeups caused by subsequent
_signals and _broadcasts.

2) Always having (with no e.g. copy_and_clear/..)
the same queue in use (semaphore+counter),
neither signal nor broadcast provides 'atomic'
behaviour with respect to other threads/subsequent
calls to signal/broadcast/wait.
Each problem, and the combination of both, can produce:

a) spurious wakeups (minor problem)

it is possible that a waiter which was already
unblocked is nevertheless still counted as a blocked
waiter. signal and broadcast will release the
semaphore, which will produce a spurious wakeup
for a 'real' waiter coming later.

the signalling thread ends up consuming its own
signal. please see demo/discussion below.

c) broadcast deadlock

the last_waiter processing code does not correctly
handle the case with multiple threads
waiting for the end of a broadcast.
please see demo/discussion below.

d) unfairness (minor problem)

without SignalObjectAndWait some waiter(s)
may end up consuming broadcast signals
multiple times (spurious wakeups) because waiter
thread(s) can be preempted before they call the
semaphore wait (but after count++ and mtx.unlock).
See below... run the problem demo programs (tennis.cpp and
tennisb.cpp) a number of times concurrently (on a multiprocessor)
and in multiple sessions, or just add a couple of "Sleep"s
as described in the attached copy of the discussion thread
from comp.programming.threads.

SAMPLE FIX/WORKAROUND:

See the attached patch to pthread-win32.. well, I cannot
claim that it is completely bug free, but at least my
test and the tests provided by pthreads-win32 seem to work.
Perhaps that will help.
>> Forum: comp.programming.threads
>> Thread: pthread_cond_* implementation questions

David Schwartz <davids@webmaster.com> wrote:

> terekhov@my-deja.com wrote:
>> BTW, could you please also share your view on other perceived
>> "problems" such as nested broadcast deadlock, spurious wakeups
>> and (the latest one) lost signals??

>I'm not sure what you mean. The standard allows an implementation
>to do almost whatever it likes. In fact, you could implement
>pthread_cond_wait by releasing the mutex, sleeping a random
>amount of time, and then reacquiring the mutex. Of course,
>this would be a pretty poor implementation, but any code that
>didn't work under that implementation wouldn't be strictly
The implementation you suggested is indeed a correct
one (yes, now I see it :). However, it requires from
signal/broadcast nothing more than to "{ return 0; }".
That is not the case for the pthread-win32 and ACE
implementations. I do think that these implementations
(basically the same implementation) have some serious
problems with wait/signal/broadcast calls. I am looking
for help to clarify whether these problems are real
or not. I think that I can demonstrate what I mean
using one or two small sample programs.
#include "ace/Synch.h"
#include "ace/Thread.h"

PLAYER_A, // Player A plays the ball
PLAYER_B, // Player B plays the ball

enum GAME_STATE eGameState;
ACE_Mutex* pmtxGameStateLock;
ACE_Condition< ACE_Mutex >* pcndGameStateChange;
// For access to the game state variable
pmtxGameStateLock->acquire();

while ( eGameState < GAME_OVER ) {

cout << endl << "PLAYER-A" << endl;

// Now it's PLAYER-B's turn
eGameState = PLAYER_B;

// Signal to PLAYER-B that now it is his turn
pcndGameStateChange->signal();

// Wait until PLAYER-B finishes playing the ball
pcndGameStateChange->wait();

if ( PLAYER_B == eGameState )
cout << endl << "----PLAYER-A: SPURIOUS WAKEUP!!!" << endl;

} while ( PLAYER_B == eGameState );

eGameState = (GAME_STATE)(eGameState+1);
cout << endl << "PLAYER-A GONE" << endl;

// No more access to the state variable needed
pmtxGameStateLock->release();

// Signal the PLAYER-A gone event
pcndGameStateChange->broadcast();
// For access to the game state variable
pmtxGameStateLock->acquire();

while ( eGameState < GAME_OVER ) {

cout << endl << "PLAYER-B" << endl;

// Now it's PLAYER-A's turn
eGameState = PLAYER_A;

// Signal to PLAYER-A that now it is his turn
pcndGameStateChange->signal();

// Wait until PLAYER-A finishes playing the ball
pcndGameStateChange->wait();

if ( PLAYER_A == eGameState )
cout << endl << "----PLAYER-B: SPURIOUS WAKEUP!!!" << endl;

} while ( PLAYER_A == eGameState );

eGameState = (GAME_STATE)(eGameState+1);
cout << endl << "PLAYER-B GONE" << endl;

// No more access to the state variable needed
pmtxGameStateLock->release();

// Signal the PLAYER-B gone event
pcndGameStateChange->broadcast();
main (int, ACE_TCHAR *[])

pmtxGameStateLock = new ACE_Mutex();
pcndGameStateChange = new ACE_Condition< ACE_Mutex >( *pmtxGameStateLock );

eGameState = START_GAME;

ACE_Thread::spawn( playerA );
ACE_Thread::spawn( playerB );

// Give them 5 sec. to play
Sleep( 5000 );//sleep( 5 );

// Set game over state
pmtxGameStateLock->acquire();
eGameState = GAME_OVER;

pcndGameStateChange->broadcast();

// Wait for players to stop
pcndGameStateChange->wait();

} while ( eGameState < BOTH_PLAYERS_GONE );

cout << endl << "GAME OVER" << endl;
pmtxGameStateLock->release();

delete pcndGameStateChange;
delete pmtxGameStateLock;
#include "ace/Synch.h"
#include "ace/Thread.h"

PLAYER_A, // Player A plays the ball
PLAYER_B, // Player B plays the ball

enum GAME_STATE eGameState;
ACE_Mutex* pmtxGameStateLock;
ACE_Condition< ACE_Mutex >* pcndGameStateChange;
// For access to the game state variable
pmtxGameStateLock->acquire();

while ( eGameState < GAME_OVER ) {

cout << endl << "PLAYER-A" << endl;

// Now it's PLAYER-B's turn
eGameState = PLAYER_B;

// Signal to PLAYER-B that now it is his turn
pcndGameStateChange->broadcast();

// Wait until PLAYER-B finishes playing the ball
pcndGameStateChange->wait();

if ( PLAYER_B == eGameState )
cout << endl << "----PLAYER-A: SPURIOUS WAKEUP!!!" << endl;

} while ( PLAYER_B == eGameState );

eGameState = (GAME_STATE)(eGameState+1);
cout << endl << "PLAYER-A GONE" << endl;

// No more access to the state variable needed
pmtxGameStateLock->release();

// Signal the PLAYER-A gone event
pcndGameStateChange->broadcast();
// For access to the game state variable
pmtxGameStateLock->acquire();

while ( eGameState < GAME_OVER ) {

cout << endl << "PLAYER-B" << endl;

// Now it's PLAYER-A's turn
eGameState = PLAYER_A;

// Signal to PLAYER-A that now it is his turn
pcndGameStateChange->broadcast();

// Wait until PLAYER-A finishes playing the ball
pcndGameStateChange->wait();

if ( PLAYER_A == eGameState )
cout << endl << "----PLAYER-B: SPURIOUS WAKEUP!!!" << endl;

} while ( PLAYER_A == eGameState );

eGameState = (GAME_STATE)(eGameState+1);
cout << endl << "PLAYER-B GONE" << endl;

// No more access to the state variable needed
pmtxGameStateLock->release();

// Signal the PLAYER-B gone event
pcndGameStateChange->broadcast();
main (int, ACE_TCHAR *[])

pmtxGameStateLock = new ACE_Mutex();
pcndGameStateChange = new ACE_Condition< ACE_Mutex >( *pmtxGameStateLock );

eGameState = START_GAME;

ACE_Thread::spawn( playerA );
ACE_Thread::spawn( playerB );

// Give them 5 sec. to play
Sleep( 5000 );//sleep( 5 );

pmtxGameStateLock->acquire();
cout << endl << "---Noise ON..." << endl;
pmtxGameStateLock->release();

for ( int i = 0; i < 100000; i++ )
pcndGameStateChange->broadcast();

cout << endl << "---Noise OFF" << endl;

// Set game over state
pmtxGameStateLock->acquire();
eGameState = GAME_OVER;
cout << endl << "---Stopping the game..." << endl;

pcndGameStateChange->broadcast();

// Wait for players to stop
pcndGameStateChange->wait();

} while ( eGameState < BOTH_PLAYERS_GONE );

cout << endl << "GAME OVER" << endl;
pmtxGameStateLock->release();

delete pcndGameStateChange;
delete pmtxGameStateLock;
David Schwartz <davids@webmaster.com> wrote:

>> That is really good.

>> Tomorrow (I have to go urgently now) I will try to
>> demonstrate the lost-signal "problem" of current
>> pthread-win32 and ACE-(variant w/o SingleObjectAndWait)
>> implementations: players start suddenly drop their balls :-)
>> (with no change in source code).

>Signals aren't lost, they're going to the main thread,
>which isn't coded correctly to handle them. Try this:

> // Wait for players to stop
> pthread_cond_wait( &cndGameStateChange,&mtxGameStateLock );
> printf("Main thread stole a signal\n");
> } while ( eGameState < BOTH_PLAYERS_GONE );

>I bet every time you think a signal is lost, you'll see that printf.
>The signal isn't lost, it was stolen by another thread.

well, you can probably lose your bet.. it was indeed stolen
by "another" thread, but not the one you seem to think of.
I think that what actually happens is the following:

H:\SA\UXX\pt\PTHREADS\TESTS>tennis3.exe

----PLAYER-B: SPURIOUS WAKEUP!!!

H:\SA\UXX\pt\PTHREADS\TESTS>

here you can see that PLAYER-B, after playing his first
ball (which came via a signal from PLAYER-A), just dropped
it. What happened is that his signal to Player-A
was consumed as a spurious wakeup by himself (Player-B).
The implementation has a problem:

{ /** Critical Section
inc cond.waiters_count

/* Atomic only if using Win32 SignalObjectAndWait
/*** ^^-- A THREAD WHICH DID SIGNAL MAY ACQUIRE THE MUTEX,
/*** GO INTO WAIT ON THE SAME CONDITION AND OVERTAKE
/*** ORIGINAL WAITER(S), CONSUMING ITS OWN SIGNAL!
Player-A, after playing the game's initial ball, went into
wait (called _wait) but was preempted before reaching the
wait semaphore. He was counted as a waiter but was not
actually waiting/blocked yet.

{ /** Critical Section
waiters_count = cond.waiters_count

if ( waiters_count != 0 )
Player-B, after he received the signal/ball from Player-A,
called _signal. The _signal did see that there was
one waiter blocked on the condition (Player-A) and
released the semaphore.. (but it did not unblock
Player-A, because he was not actually blocked).
The Player-B thread continued its execution, called _wait,
was counted as a second waiter BUT was allowed to slip
through the opened semaphore gate (which his own signal
had opened for Player-A) and so received his own signal.
Player-B remained blocked, followed by Player-A. A deadlock
happened, which lasted until the main thread came in and
said game over.
It seems to me that the implementation fails to
correctly implement the following statement from

http://www.opengroup.org/onlinepubs/007908799/xsh/pthread_cond_wait.html

"These functions atomically release mutex and cause
the calling thread to block on the condition variable
cond; atomically here means "atomically with respect
to access by another thread to the mutex and then the
condition variable". That is, if another thread is
able to acquire the mutex after the about-to-block
thread has released it, then a subsequent call to
pthread_cond_signal() or pthread_cond_broadcast()
in that thread behaves as if it were issued after
the about-to-block thread has blocked."
Question: Am I right?

(I produced the program output above by simply

{ /** Critical Section
inc cond.waiters_count

/* Atomic only if using Win32 SignalObjectAndWait
/*** ^^-- A THREAD WHICH DID SIGNAL MAY ACQUIRE THE MUTEX,
/*** GO INTO WAIT ON THE SAME CONDITION AND OVERTAKE
/*** ORIGINAL WAITER(S), CONSUMING ITS OWN SIGNAL!

to the source code of the pthread-win32 implementation:

http://sources.redhat.com/cgi-bin/cvsweb.cgi/pthreads/condvar.c?rev=1.36&content-type=text/x-cvsweb-markup&cvsroot=pthreads-win32
* We keep the lock held just long enough to increment the count of
* waiters by one (above).
* Note that we can't keep it held across the
* call to sem_wait since that will deadlock other calls
* to pthread_cond_signal

cleanup_args.mutexPtr = mutex;
cleanup_args.cv = cv;
cleanup_args.resultPtr = &result;

pthread_cleanup_push (ptw32_cond_wait_cleanup, (void *) &cleanup_args);

if ((result = pthread_mutex_unlock (mutex)) == 0)

* Wait to be awakened by
* pthread_cond_signal, or
* pthread_cond_broadcast, or

* ptw32_sem_timedwait is a cancellation point,
* hence providing the
* mechanism for making pthread_cond_wait a cancellation
* point. We use the cleanup mechanism to ensure we
* re-lock the mutex and decrement the waiters count
* if we are canceled.

if (ptw32_sem_timedwait (&(cv->sema), abstime) == -1) {

pthread_cleanup_pop (1); /* Always cleanup */
BTW, on my system (2 CPUs) I can manage to get
signals lost even without any source code modification
if I run the tennis program many times in different
shell sessions.
David Schwartz <davids@webmaster.com> wrote:

>terekhov@my-deja.com wrote:
>> well, it might be that the program is in fact buggy.
>> but you did not show me any bug.

>You're right. I was close but not dead on. I was correct, however,
>that the code is buggy because it uses 'pthread_cond_signal' even
>though not every thread waiting on the condition variable can do the
>job. I was wrong about which thread could be waiting on the cv but
>unable to do the job.

Okay, let's change 'pthread_cond_signal' to 'pthread_cond_broadcast',
but also add some noise from main() right before declaring the game
to be over (I need it in order to demonstrate another problem of the
pthread-win32/ACE implementations - broadcast deadlock)...
It is my understanding of POSIX conditions
that, on a correct implementation, added noise
in the form of unnecessary broadcasts from main
should not break the tennis program. The
only 'side effect' of added noise on a correct
implementation would be 'spurious wakeups' of
players (in fact they are not spurious;
the players just see them as spurious), unblocked
not by another player but by main, before the
other player had a chance to acquire the
mutex and change the game state variable:

----PLAYER-A: SPURIOUS WAKEUP!!!

---Stopping the game...

H:\SA\UXX\pt\PTHREADS\TESTS>

On the pthread-win32/ACE implementations the

H:\SA\UXX\pt\PTHREADS\TESTS>
The implementation has problems:

{ /** Critical Section
inc cond.waiters_count

/* Atomic only if using Win32 SignalObjectAndWait
/*** ^^-- WAITER CAN BE PREEMPTED AFTER BEING UNBLOCKED...

{ /** Critical Section
dec cond.waiters_count
/*** ^^-- ...AND BEFORE DECREMENTING THE COUNT (1)

last_waiter = ( cond.was_broadcast &&
cond.waiters_count == 0 )

cond.was_broadcast = FALSE

/* Atomic only if using Win32 SignalObjectAndWait
cond.auto_reset_event_or_sem.post /* Event for Win32
/*** ^^-- ...AND BEFORE CALL TO mtx.acquire (2)
/*** ^^-- NESTED BROADCASTS RESULT IN A DEADLOCK

/*** ^^-- ...AND BEFORE CALL TO mtx.acquire (3)

{ /** Critical Section
waiters_count = cond.waiters_count

if ( waiters_count != 0 )
cond.was_broadcast = TRUE

if ( waiters_count != 0 )
cond.sem.post waiters_count
/*** ^^^^^--- SPURIOUS WAKEUPS DUE TO (1)

cond.auto_reset_event_or_sem.wait /* Event for Win32
/*** ^^^^^--- DEADLOCK FOR FURTHER BROADCASTS IF THEY
              HAPPEN TO GO INTO WAIT WHILE PREVIOUS
              BROADCAST IS STILL IN PROGRESS/WAITING
a) cond.waiters_count does not accurately reflect the
number of waiters blocked on the semaphore - that could
result (in the time window when the counter is not accurate)
in spurious wakeups caused by subsequent _signals
and _broadcasts. From a standard-compliance point of view
that is OK, but it could be a real problem from a
performance/efficiency point of view.

b) If a subsequent broadcast happens to go into a wait on
cond.auto_reset_event_or_sem before the previous
broadcast was unblocked from cond.auto_reset_event_or_sem
by its last waiter, one of the two blocked threads will
remain blocked, because the last_waiter processing code
fails to unblock both threads.
In the situation with tennisb.c, Player-B was put
into a deadlock by the noise (broadcast) coming from the main
thread. And since Player-B holds the game state
mutex when it calls broadcast, the whole program
stalled: Player-A was deadlocked on the mutex, and the
main thread, after finishing producing the noise,
was deadlocked on the mutex too (it needed the mutex to declare the
game over).

(I produced the program output above by simply
{ /** Critical Section
waiters_count = cond.waiters_count

if ( waiters_count != 0 )
cond.was_broadcast = TRUE

if ( waiters_count != 0 )
cond.sem.post waiters_count
/*** ^^^^^--- SPURIOUS WAKEUPS DUE TO (1)

cond.auto_reset_event_or_sem.wait /* Event for Win32
/*** ^^^^^--- DEADLOCK FOR FURTHER BROADCASTS IF THEY
              HAPPEN TO GO INTO WAIT WHILE PREVIOUS
              BROADCAST IS STILL IN PROGRESS/WAITING
to the source code of the pthread-win32 implementation:

http://sources.redhat.com/cgi-bin/cvsweb.cgi/pthreads/condvar.c?rev=1.36&content-type=text/x-cvsweb-markup&cvsroot=pthreads-win32

if (wereWaiters)
{
* Wake up all waiters

result = (ptw32_increase_semaphore( &cv->sema, cv->waiters )

result = (ReleaseSemaphore( cv->sema, cv->waiters, NULL )

#endif /* NEED_SEM */

(void) pthread_mutex_unlock(&(cv->waitersLock));

if (wereWaiters && result == 0)

* Wait for all the awakened threads to acquire their part of
* the counting semaphore

if (WaitForSingleObject (cv->waitersDone, INFINITE)
BTW, on my system (2 CPUs) I can manage to get
the program stalled even without any source code
modification if I run the tennisb program many
times in different shell sessions.

struct pthread_cond_t_ {
long nWaitersBlocked; /* Number of threads blocked
long nWaitersUnblocked; /* Number of threads unblocked
long nWaitersToUnblock; /* Number of threads to unblock
sem_t semBlockQueue; /* Queue up threads waiting for the
/* condition to become signalled
sem_t semBlockLock; /* Semaphore that guards access to
/* | waiters blocked count/block queue
/* +-> Mandatory Sync.LEVEL-1
pthread_mutex_t mtxUnblockLock; /* Mutex that guards access to
/* | waiters (to)unblock(ed) counts
/* +-> Optional* Sync.LEVEL-2
}; /* Opt*) for _timedwait and
pthread_cond_init (pthread_cond_t * cond, const pthread_condattr_t * attr)

int result = EAGAIN;
pthread_cond_t cv = NULL;

if ((attr != NULL && *attr != NULL) &&
    ((*attr)->pshared == PTHREAD_PROCESS_SHARED))

* Creating condition variable that can be shared between

cv = (pthread_cond_t) calloc (1, sizeof (*cv));

cv->nWaitersBlocked = 0;
cv->nWaitersUnblocked = 0;
cv->nWaitersToUnblock = 0;

if (sem_init (&(cv->semBlockLock), 0, 1) != 0)

if (sem_init (&(cv->semBlockQueue), 0, 0) != 0)

if (pthread_mutex_init (&(cv->mtxUnblockLock), 0) != 0)
{

(void) sem_destroy (&(cv->semBlockQueue));
(void) sem_destroy (&(cv->semBlockLock));

} /* pthread_cond_init */
pthread_cond_destroy (pthread_cond_t * cond)

* Assuming any race condition here is harmless.

if (*cond != (pthread_cond_t) PTW32_OBJECT_AUTO_INIT)

* Synchronize access to waiters blocked count (LEVEL-1)

if (sem_wait(&(cv->semBlockLock)) != 0)
{

* Synchronize access to waiters (to)unblock(ed) counts (LEVEL-2)

if ((result = pthread_mutex_lock(&(cv->mtxUnblockLock))) != 0)

(void) sem_post(&(cv->semBlockLock));

* Check whether cv is still busy (still has waiters blocked)

if (cv->nWaitersBlocked - cv->nWaitersUnblocked > 0)
{
(void) sem_post(&(cv->semBlockLock));
(void) pthread_mutex_unlock(&(cv->mtxUnblockLock));

* Now it is safe to destroy

(void) sem_destroy (&(cv->semBlockLock));
(void) sem_destroy (&(cv->semBlockQueue));
(void) pthread_mutex_unlock (&(cv->mtxUnblockLock));
(void) pthread_mutex_destroy (&(cv->mtxUnblockLock));

* See notes in ptw32_cond_check_need_init() above also.

EnterCriticalSection(&ptw32_cond_test_init_lock);

if (*cond == (pthread_cond_t) PTW32_OBJECT_AUTO_INIT)

* This is all we need to do to destroy a statically
* initialised cond that has not yet been used (initialised).
* If we get to here, another thread
* waiting to initialise this cond will get an EINVAL.

* The cv has been initialised while we were waiting
* so assume it's in use.

LeaveCriticalSection(&ptw32_cond_test_init_lock);
* Arguments for cond_wait_cleanup, since we can only pass a
* single void * to it.

pthread_mutex_t * mutexPtr;

} ptw32_cond_wait_cleanup_args_t;

ptw32_cond_wait_cleanup(void * args)

ptw32_cond_wait_cleanup_args_t * cleanup_args =
    (ptw32_cond_wait_cleanup_args_t *) args;
pthread_cond_t cv = cleanup_args->cv;
int * resultPtr = cleanup_args->resultPtr;
int eLastSignal; /* enum: 1=yes 0=no -1=cancelled/timedout w/o signal(s)

* Whether we got here as a result of signal/broadcast or because of
* timeout on wait or thread cancellation we indicate that we are no
* longer waiting. The waiter is responsible for adjusting waiters
* (to)unblock(ed) counts (protected by unblock lock).
* Unblock lock/Sync.LEVEL-2 supports _timedwait and cancellation.

if ((result = pthread_mutex_lock(&(cv->mtxUnblockLock))) != 0)

*resultPtr = result;

cv->nWaitersUnblocked++;

eLastSignal = (cv->nWaitersToUnblock == 0) ?
              -1 : (--cv->nWaitersToUnblock == 0);

* No more LEVEL-2 access to waiters (to)unblock(ed) counts needed

if ((result = pthread_mutex_unlock(&(cv->mtxUnblockLock))) != 0)

*resultPtr = result;

if (eLastSignal == 1)

* ...it means that we have the end of an 'atomic' signal/broadcast

if (sem_post(&(cv->semBlockLock)) != 0)
{

* If not last signal and not timed out/cancelled wait w/o signal...

else if (eLastSignal == 0)

* ...it means that the next waiter can go through the semaphore

if (sem_post(&(cv->semBlockQueue)) != 0)
{

* XSH: Upon successful return, the mutex has been locked and is owned
* by the calling thread

if ((result = pthread_mutex_lock(cleanup_args->mutexPtr)) != 0)

*resultPtr = result;

} /* ptw32_cond_wait_cleanup */
ptw32_cond_timedwait (pthread_cond_t * cond,
                      pthread_mutex_t * mutex,
                      const struct timespec *abstime)

ptw32_cond_wait_cleanup_args_t cleanup_args;

if (cond == NULL || *cond == NULL)

* We do a quick check to see if we need to do more work
* to initialise a static condition variable. We check
* again inside the guarded section of ptw32_cond_check_need_init()
* to avoid race conditions.

if (*cond == (pthread_cond_t) PTW32_OBJECT_AUTO_INIT)

result = ptw32_cond_check_need_init(cond);

if (result != 0 && result != EBUSY)

* Synchronize access to waiters blocked count (LEVEL-1)

if (sem_wait(&(cv->semBlockLock)) != 0)
{

cv->nWaitersBlocked++;

* That's it. Counted means waiting, no more access needed

if (sem_post(&(cv->semBlockLock)) != 0)
{

* Set up this waiter's cleanup handler

cleanup_args.mutexPtr = mutex;
cleanup_args.cv = cv;
cleanup_args.resultPtr = &result;

pthread_cleanup_push (ptw32_cond_wait_cleanup, (void *) &cleanup_args);

* Now we can release 'mutex' and...

if ((result = pthread_mutex_unlock (mutex)) == 0)

* ...wait to be awakened by
* pthread_cond_signal, or
* pthread_cond_broadcast, or
* thread cancellation

* ptw32_sem_timedwait is a cancellation point,
* hence providing the mechanism for making
* pthread_cond_wait a cancellation point.
* We use the cleanup mechanism to ensure we
* re-lock the mutex and adjust (to)unblock(ed) waiters
* counts if we are cancelled, timed out or signalled.

if (ptw32_sem_timedwait (&(cv->semBlockQueue), abstime) != 0)
{

pthread_cleanup_pop (1);

* "result" can be modified by the cleanup handler.

} /* ptw32_cond_timedwait */
ptw32_cond_unblock (pthread_cond_t * cond,

if (cond == NULL || *cond == NULL)

* No-op if the CV is static and hasn't been initialised yet.
* Assuming that any race condition is harmless.

if (cv == (pthread_cond_t) PTW32_OBJECT_AUTO_INIT)

* Synchronize access to waiters blocked count (LEVEL-1)

if (sem_wait(&(cv->semBlockLock)) != 0)
{

* Synchronize access to waiters (to)unblock(ed) counts (LEVEL-2)
* This sync.level supports _timedwait and cancellation

if ((result = pthread_mutex_lock(&(cv->mtxUnblockLock))) != 0)

* Adjust waiters blocked and unblocked counts (collect garbage)

if (cv->nWaitersUnblocked != 0)
{
cv->nWaitersBlocked -= cv->nWaitersUnblocked;
cv->nWaitersUnblocked = 0;

* If (after adjustment) there are still some waiters blocked counted...

if ( cv->nWaitersBlocked > 0)

* We will unblock the first waiter and leave semBlockLock/LEVEL-1 locked
* LEVEL-1 access is left disabled until the last signal/unblock

cv->nWaitersToUnblock = (unblockAll) ? cv->nWaitersBlocked : 1;

* No more LEVEL-2 access to waiters (to)unblock(ed) counts needed
* This sync.level supports _timedwait and cancellation

if ((result = pthread_mutex_unlock(&(cv->mtxUnblockLock))) != 0)

* Now, with the LEVEL-2 lock released, let the first waiter go through

if (sem_post(&(cv->semBlockQueue)) != 0)
{

* No waiter blocked - no more LEVEL-1 access to blocked count needed...

else if (sem_post(&(cv->semBlockLock)) != 0)

* ...and no more LEVEL-2 access to waiters (to)unblock(ed) counts needed
* This sync.level supports _timedwait and cancellation

result = pthread_mutex_unlock(&(cv->mtxUnblockLock));

} /* ptw32_cond_unblock */
pthread_cond_wait (pthread_cond_t * cond,
                   pthread_mutex_t * mutex)

/* The NULL abstime arg means INFINITE waiting. */
return(ptw32_cond_timedwait(cond, mutex, NULL));

} /* pthread_cond_wait */

pthread_cond_timedwait (pthread_cond_t * cond,
                        pthread_mutex_t * mutex,
                        const struct timespec *abstime)

if (abstime == NULL)

return(ptw32_cond_timedwait(cond, mutex, abstime));

} /* pthread_cond_timedwait */

pthread_cond_signal (pthread_cond_t * cond)

/* The '0'(FALSE) unblockAll arg means unblock ONE waiter. */
return(ptw32_cond_unblock(cond, 0));

} /* pthread_cond_signal */

pthread_cond_broadcast (pthread_cond_t * cond)

/* The '1'(TRUE) unblockAll arg means unblock ALL waiters. */
return(ptw32_cond_unblock(cond, 1));

} /* pthread_cond_broadcast */
TEREKHOV@de.ibm.com on 17.01.2001 01:00:57

Please respond to TEREKHOV@de.ibm.com

To: pthreads-win32@sourceware.cygnus.com

Subject: win32 conditions: sem+counter+event = broadcast_deadlock +
         spur.wakeup/unfairness/incorrectness ??
Problem 1: broadcast_deadlock

It seems that the current implementation does not provide "atomic"
broadcasts. That may lead to "nested" broadcasts... and it seems
that the nested case is not handled correctly -> producing a broadcast
DEADLOCK as a result.

N (>1) waiting threads W1..N are blocked (in _wait) on the condition's

Thread B1 calls pthread_cond_broadcast, which results in "releasing" the N
W threads via incrementing the semaphore counter by N (stored in
cv->waiters) BUT the cv->waiters counter does not change!! The caller
thread B1 remains blocked on the cv->waitersDone event (auto-reset!!) BUT
the condition is not protected from starting another broadcast (when called
on another thread) while still waiting for the "old" broadcast to
complete on thread B1.

M (>=0, <N) W threads are fast enough to go through their _wait call and
decrement the cv->waiters counter.

L (N-M) "late" waiter W threads are a) still blocked/not returned from
their semaphore wait call, or b) were preempted after sem_wait but before
lock( &cv->waitersLock ), or c) are blocked on cv->waitersLock.

cv->waiters is still > 0 (= L).

Another thread B2 (or some W thread from the M group) calls
pthread_cond_broadcast and gains access to the counter... neither a) nor b)
prevents thread B2 in pthread_cond_broadcast from gaining access to the
counter and starting another broadcast ( for c) it depends on the
cv->waitersLock scheduling rules: FIFO=OK, PRTY=PROBLEM,... )

That call to pthread_cond_broadcast (on thread B2) will result in
incrementing the semaphore by cv->waiters (=L), which is INCORRECT (all of
W1..N were in fact already released by thread B1), and in waiting on the
_auto-reset_ event cv->waitersDone, which is DEADLY WRONG (produces a

All late W1..L threads now have a chance to complete their _wait call.
The last W_L thread sets the auto-reset event cv->waitersDone, which will
release either B1 or B2, leaving the other B thread in a deadlock.
Problem 2: spur.wakeup/unfairness/incorrectness

a) because of the same problem with the counter, which does not reflect the
actual number of NOT RELEASED waiters, the signal call may increment
the semaphore counter without having a waiter blocked on it. That will result
in (best case) spurious wakeups - performance degradation due to
unnecessary context switches and predicate re-checks - and (in the worst case)
an unfairness/incorrectness problem - see b)
1620
b) neither signal nor broadcast prevent other threads - "new waiters"
1621
(and in the case of signal, the caller thread as well) from going into
1622
_wait and overtaking "old" waiters (already released but still not returned
1623
from sem_wait on condition's semaphore). Win semaphore just [API DOC]:
1624
"Maintains a count between zero and some maximum value, limiting the number
1625
of threads that are simultaneously accessing a shared resource." Calling
1626
ReleaseSemaphore does not imply (at least not documented) that on return
1627
from ReleaseSemaphore all waiters will in fact become released (returned
1628
from their Wait... call) and/or that new waiters calling Wait... afterwards
1629
will become less importance. It is NOT documented to be an atomic release
1631
waiters... And even if it would be there is still a problem with a thread
1632
being preempted after Wait on semaphore and before Wait on cv->waitersLock
1633
and scheduling rules for cv->waitersLock itself
1634
(??WaitForMultipleObjects??)
1635
That may result in unfairness/incorrectness problem as described
1636
for SetEvent impl. in "Strategies for Implementing POSIX Condition
1638
on Win32": http://www.cs.wustl.edu/~schmidt/win32-cv-1.html
1640
Unfairness -- The semantics of the POSIX pthread_cond_broadcast function is
to wake up all threads currently blocked in wait calls on the condition
variable. The awakened threads then compete for the external_mutex. To
ensure fairness, all of these threads should be released from their
pthread_cond_wait calls and allowed to recheck their condition expressions
before other threads can successfully complete a wait on the condition
variable.

Unfortunately, the SetEvent implementation above does not guarantee that all
threads sleeping on the condition variable when cond_broadcast is called will
acquire the external_mutex and check their condition expressions. Although
the Pthreads specification does not mandate this degree of fairness, the
lack of fairness can cause starvation.

To illustrate the unfairness problem, imagine there are 2 threads, C1 and
C2, that are blocked in pthread_cond_wait on condition variable not_empty_
that is guarding a thread-safe message queue. Another thread, P1, then
places two messages onto the queue and calls pthread_cond_broadcast. If C1
returns from pthread_cond_wait, dequeues and processes the message, and
immediately waits again, then it and only it may end up acquiring both
messages. Thus, C2 will never get a chance to dequeue a message and run.

The following illustrates the sequence of events:

1. Thread C1 attempts to dequeue and waits on CV not_empty_
2. Thread C2 attempts to dequeue and waits on CV not_empty_
3. Thread P1 enqueues 2 messages and broadcasts to CV not_empty_
4. Thread P1 exits
5. Thread C1 wakes up from CV not_empty_, dequeues a message and runs
6. Thread C1 waits again on CV not_empty_, immediately dequeues the 2nd
   message and runs
7. Thread C1 exits
8. Thread C2 is the only thread left and blocks forever since
   not_empty_ will never be signaled

Depending on the algorithm being implemented, this lack of fairness may yield
concurrent programs that have subtle bugs. Of course, application developers
should not rely on the fairness semantics of pthread_cond_broadcast. However,
there are many cases where fair implementations of condition variables can
simplify application code.

Incorrectness -- A variation on the unfairness problem described above occurs
when a third consumer thread, C3, is allowed to slip through even though it
was not waiting on condition variable not_empty_ when a broadcast occurred.

To illustrate this, we will use the same scenario as above: 2 threads, C1 and
C2, are blocked dequeuing messages from the message queue. Another thread, P1,
then places two messages onto the queue and calls pthread_cond_broadcast. C1
returns from pthread_cond_wait, dequeues and processes the message. At this
time, C3 acquires the external_mutex, calls pthread_cond_wait and waits on
the events in WaitForMultipleObjects. Since C2 has not had a chance to run
yet, the BROADCAST event is still signaled. C3 then returns from
WaitForMultipleObjects, and dequeues and processes the message in the queue.
Thus, C2 will never get a chance to dequeue a message and run.

The following illustrates the sequence of events:

1. Thread C1 attempts to dequeue and waits on CV not_empty_
2. Thread C2 attempts to dequeue and waits on CV not_empty_
3. Thread P1 enqueues 2 messages and broadcasts to CV not_empty_
4. Thread P1 exits
5. Thread C1 wakes up from CV not_empty_, dequeues a message and runs
6. Thread C1 exits
7. Thread C3 waits on CV not_empty_, immediately dequeues the 2nd
   message and runs
8. Thread C3 exits
9. Thread C2 is the only thread left and blocks forever since
   not_empty_ will never be signaled

In the above case, a thread that was not waiting on the condition variable
when a broadcast occurred was allowed to proceed. This leads to incorrect
semantics for a condition variable.
1733
-----------------------------------------------------------------------------

Subject: RE: FYI/comp.programming.threads/Re: pthread_cond_*
         implementation questions
Date: Wed, 21 Feb 2001 11:54:47 +0100
From: TEREKHOV@de.ibm.com
To: lthomas@arbitrade.com
CC: rpj@ise.canberra.edu.au, Thomas Pfaff <tpfaff@gmx.net>,
    Nanbor Wang <nanbor@cs.wustl.edu>

generation number 8..

had some time to revisit timeouts/spurious wakeup problem..
found some bugs (in 7.b/c/d) and something to improve
(7a - using IPC semaphores but it should speedup Win32
1755
---------- Algorithm 8a / IMPL_SEM,UNBLOCK_STRATEGY == UNBLOCK_ALL ------

given:
  semBlockLock      - bin.semaphore
  semBlockQueue     - semaphore
  mtxExternal       - mutex or CS
  mtxUnblockLock    - mutex or CS
  nWaitersGone      - int
  nWaitersBlocked   - int
  nWaitersToUnblock - int

wait( timeout ) {

  [auto: register int result          ]     // error checking omitted
  [auto: register int nSignalsWasLeft ]
  [auto: register int nWaitersWasGone ]

  sem_wait( semBlockLock );
  nWaitersBlocked++;
  sem_post( semBlockLock );

  unlock( mtxExternal );
  bTimedOut = sem_wait( semBlockQueue,timeout );

  lock( mtxUnblockLock );
  if ( 0 != (nSignalsWasLeft = nWaitersToUnblock) ) {
    if ( bTimeout ) {                       // timeout (or canceled)
      if ( 0 != nWaitersBlocked ) {
        nWaitersBlocked--;
      }
      else {
        nWaitersGone++;                     // count spurious wakeups
      }
    }
    if ( 0 == --nWaitersToUnblock ) {
      if ( 0 != nWaitersBlocked ) {
        sem_post( semBlockLock );           // open the gate
        nSignalsWasLeft = 0;                // do not open the gate below
      }
      else if ( 0 != (nWaitersWasGone = nWaitersGone) ) {
        nWaitersGone = 0;
      }
    }
  }
  else if ( INT_MAX/2 == ++nWaitersGone ) { // timeout/canceled or spurious
    sem_wait( semBlockLock );
    nWaitersBlocked -= nWaitersGone;        // something is going on here -
                                            // test of timeouts? :-)
    sem_post( semBlockLock );
    nWaitersGone = 0;
  }
  unlock( mtxUnblockLock );

  if ( 1 == nSignalsWasLeft ) {
    if ( 0 != nWaitersWasGone ) {
      // sem_adjust( -nWaitersWasGone );
      while ( nWaitersWasGone-- ) {
        sem_wait( semBlockLock );           // better now than spurious
      }
    }
    sem_post( semBlockLock );               // open the gate
  }

  lock( mtxExternal );

  return ( bTimedOut ) ? ETIMEDOUT : 0;
}

signal(bAll) {

  [auto: register int result         ]
  [auto: register int nSignalsToIssue]

  lock( mtxUnblockLock );

  if ( 0 != nWaitersToUnblock ) {        // the gate is closed!!!
    if ( 0 == nWaitersBlocked ) {        // NO-OP
      return unlock( mtxUnblockLock );
    }
    if (bAll) {
      nWaitersToUnblock += nSignalsToIssue=nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nSignalsToIssue = 1;
      nWaitersToUnblock++;
      nWaitersBlocked--;
    }
  }
  else if ( nWaitersBlocked > nWaitersGone ) { // HARMLESS RACE CONDITION!
    sem_wait( semBlockLock );                  // close the gate
    if ( 0 != nWaitersGone ) {
      nWaitersBlocked -= nWaitersGone;
      nWaitersGone = 0;
    }
    if (bAll) {
      nSignalsToIssue = nWaitersToUnblock = nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nSignalsToIssue = nWaitersToUnblock = 1;
      nWaitersBlocked--;
    }
  }
  else { // NO-OP
    return unlock( mtxUnblockLock );
  }

  unlock( mtxUnblockLock );
  sem_post( semBlockQueue,nSignalsToIssue );
}
1870
---------- Algorithm 8b / IMPL_SEM,UNBLOCK_STRATEGY == UNBLOCK_ONEBYONE ------

given:
  semBlockLock      - bin.semaphore
  semBlockQueue     - bin.semaphore
  mtxExternal       - mutex or CS
  mtxUnblockLock    - mutex or CS
  nWaitersGone      - int
  nWaitersBlocked   - int
  nWaitersToUnblock - int

wait( timeout ) {

  [auto: register int result          ]     // error checking omitted
  [auto: register int nWaitersWasGone ]
  [auto: register int nSignalsWasLeft ]

  sem_wait( semBlockLock );
  nWaitersBlocked++;
  sem_post( semBlockLock );

  unlock( mtxExternal );
  bTimedOut = sem_wait( semBlockQueue,timeout );

  lock( mtxUnblockLock );
  if ( 0 != (nSignalsWasLeft = nWaitersToUnblock) ) {
    if ( bTimeout ) {                       // timeout (or canceled)
      if ( 0 != nWaitersBlocked ) {
        nWaitersBlocked--;
        nSignalsWasLeft = 0;                // do not unblock next waiter
                                            // below (already unblocked)
      }
      else {
        nWaitersGone = 1;                   // spurious wakeup pending!!
      }
    }
    if ( 0 == --nWaitersToUnblock ) {
      if ( 0 != nWaitersBlocked ) {
        sem_post( semBlockLock );           // open the gate
        nSignalsWasLeft = 0;                // do not open the gate below
      }
      else if ( 0 != (nWaitersWasGone = nWaitersGone) ) {
        nWaitersGone = 0;
      }
    }
  }
  else if ( INT_MAX/2 == ++nWaitersGone ) { // timeout/canceled or spurious
    sem_wait( semBlockLock );
    nWaitersBlocked -= nWaitersGone;        // something is going on here -
                                            // test of timeouts? :-)
    sem_post( semBlockLock );
    nWaitersGone = 0;
  }
  unlock( mtxUnblockLock );

  if ( 1 == nSignalsWasLeft ) {
    if ( 0 != nWaitersWasGone ) {
      // sem_adjust( -1 );
      sem_wait( semBlockQueue );            // better now than spurious
    }
    sem_post( semBlockLock );               // open the gate
  }
  else if ( 0 != nSignalsWasLeft ) {
    sem_post( semBlockQueue );              // unblock next waiter
  }

  lock( mtxExternal );

  return ( bTimedOut ) ? ETIMEDOUT : 0;
}

signal(bAll) {

  [auto: register int result ]

  lock( mtxUnblockLock );

  if ( 0 != nWaitersToUnblock ) {        // the gate is closed!!!
    if ( 0 == nWaitersBlocked ) {        // NO-OP
      return unlock( mtxUnblockLock );
    }
    if (bAll) {
      nWaitersToUnblock += nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nWaitersToUnblock++;
      nWaitersBlocked--;
    }
    unlock( mtxUnblockLock );
  }
  else if ( nWaitersBlocked > nWaitersGone ) { // HARMLESS RACE CONDITION!
    sem_wait( semBlockLock );                  // close the gate
    if ( 0 != nWaitersGone ) {
      nWaitersBlocked -= nWaitersGone;
      nWaitersGone = 0;
    }
    if (bAll) {
      nWaitersToUnblock = nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nWaitersToUnblock = 1;
      nWaitersBlocked--;
    }
    unlock( mtxUnblockLock );
    sem_post( semBlockQueue );
  }
  else { // NO-OP
    unlock( mtxUnblockLock );
  }
}
1988
---------- Algorithm 8c / IMPL_EVENT,UNBLOCK_STRATEGY == UNBLOCK_ONEBYONE ------

given:
  hevBlockLock      - auto-reset event
  hevBlockQueue     - auto-reset event
  mtxExternal       - mutex or CS
  mtxUnblockLock    - mutex or CS
  nWaitersGone      - int
  nWaitersBlocked   - int
  nWaitersToUnblock - int

wait( timeout ) {

  [auto: register int result          ]     // error checking omitted
  [auto: register int nSignalsWasLeft ]
  [auto: register int nWaitersWasGone ]

  wait( hevBlockLock,INFINITE );
  nWaitersBlocked++;
  set_event( hevBlockLock );

  unlock( mtxExternal );
  bTimedOut = wait( hevBlockQueue,timeout );

  lock( mtxUnblockLock );
  if ( 0 != (nSignalsWasLeft = nWaitersToUnblock) ) {
    if ( bTimeout ) {                       // timeout (or canceled)
      if ( 0 != nWaitersBlocked ) {
        nWaitersBlocked--;
        nSignalsWasLeft = 0;                // do not unblock next waiter
                                            // below (already unblocked)
      }
      else {
        nWaitersGone = 1;                   // spurious wakeup pending!!
      }
    }
    if ( 0 == --nWaitersToUnblock ) {
      if ( 0 != nWaitersBlocked ) {
        set_event( hevBlockLock );          // open the gate
        nSignalsWasLeft = 0;                // do not open the gate below
      }
      else if ( 0 != (nWaitersWasGone = nWaitersGone) ) {
        nWaitersGone = 0;
      }
    }
  }
  else if ( INT_MAX/2 == ++nWaitersGone ) { // timeout/canceled or spurious
    wait( hevBlockLock,INFINITE );
    nWaitersBlocked -= nWaitersGone;        // something is going on here -
                                            // test of timeouts? :-)
    set_event( hevBlockLock );
    nWaitersGone = 0;
  }
  unlock( mtxUnblockLock );

  if ( 1 == nSignalsWasLeft ) {
    if ( 0 != nWaitersWasGone ) {
      reset_event( hevBlockQueue );         // better now than spurious
    }
    set_event( hevBlockLock );              // open the gate
  }
  else if ( 0 != nSignalsWasLeft ) {
    set_event( hevBlockQueue );             // unblock next waiter
  }

  lock( mtxExternal );

  return ( bTimedOut ) ? ETIMEDOUT : 0;
}

signal(bAll) {

  [auto: register int result ]

  lock( mtxUnblockLock );

  if ( 0 != nWaitersToUnblock ) {        // the gate is closed!!!
    if ( 0 == nWaitersBlocked ) {        // NO-OP
      return unlock( mtxUnblockLock );
    }
    if (bAll) {
      nWaitersToUnblock += nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nWaitersToUnblock++;
      nWaitersBlocked--;
    }
    unlock( mtxUnblockLock );
  }
  else if ( nWaitersBlocked > nWaitersGone ) { // HARMLESS RACE CONDITION!
    wait( hevBlockLock,INFINITE );             // close the gate
    if ( 0 != nWaitersGone ) {
      nWaitersBlocked -= nWaitersGone;
      nWaitersGone = 0;
    }
    if (bAll) {
      nWaitersToUnblock = nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nWaitersToUnblock = 1;
      nWaitersBlocked--;
    }
    unlock( mtxUnblockLock );
    set_event( hevBlockQueue );
  }
  else { // NO-OP
    unlock( mtxUnblockLock );
  }
}
2105
---------- Algorithm 8d / IMPL_EVENT,UNBLOCK_STRATEGY == UNBLOCK_ALL ------

given:
  hevBlockLock      - auto-reset event
  hevBlockQueueS    - auto-reset event   // for signals
  hevBlockQueueB    - manual-reset event // for broadcasts
  mtxExternal       - mutex or CS
  mtxUnblockLock    - mutex or CS
  eBroadcast        - int // 0: no broadcast, 1: broadcast,
                          // 2: broadcast after signal(s)
  nWaitersGone      - int
  nWaitersBlocked   - int
  nWaitersToUnblock - int

wait( timeout ) {

  [auto: register int result          ]     // error checking omitted
  [auto: register int eWasBroadcast   ]
  [auto: register int nSignalsWasLeft ]
  [auto: register int nWaitersWasGone ]

  wait( hevBlockLock,INFINITE );
  nWaitersBlocked++;
  set_event( hevBlockLock );

  unlock( mtxExternal );
  bTimedOut = waitformultiple( hevBlockQueueS,hevBlockQueueB,timeout,ONE );

  lock( mtxUnblockLock );
  if ( 0 != (nSignalsWasLeft = nWaitersToUnblock) ) {
    if ( bTimeout ) {                       // timeout (or canceled)
      if ( 0 != nWaitersBlocked ) {
        nWaitersBlocked--;
        nSignalsWasLeft = 0;                // do not unblock next waiter
                                            // below (already unblocked)
      }
      else if ( 1 != eBroadcast ) {
        nWaitersGone = 1;                   // spurious wakeup pending!!
      }
    }
    if ( 0 == --nWaitersToUnblock ) {
      if ( 0 != nWaitersBlocked ) {
        set_event( hevBlockLock );          // open the gate
        nSignalsWasLeft = 0;                // do not open the gate below
      }
      else {
        if ( 0 != (eWasBroadcast = eBroadcast) ) {
          eBroadcast = 0;
        }
        if ( 0 != (nWaitersWasGone = nWaitersGone) ) {
          nWaitersGone = 0;
        }
      }
    }
    else if ( 0 != eBroadcast ) {
      nSignalsWasLeft = 0;                  // do not unblock next waiter
                                            // below (already unblocked)
    }
  }
  else if ( INT_MAX/2 == ++nWaitersGone ) { // timeout/canceled or spurious
    wait( hevBlockLock,INFINITE );
    nWaitersBlocked -= nWaitersGone;        // something is going on here -
                                            // test of timeouts? :-)
    set_event( hevBlockLock );
    nWaitersGone = 0;
  }
  unlock( mtxUnblockLock );

  if ( 1 == nSignalsWasLeft ) {
    if ( 0 != eWasBroadcast ) {
      reset_event( hevBlockQueueB );
    }
    if ( 0 != nWaitersWasGone ) {
      reset_event( hevBlockQueueS );        // better now than spurious
    }
    set_event( hevBlockLock );              // open the gate
  }
  else if ( 0 != nSignalsWasLeft ) {
    set_event( hevBlockQueueS );            // unblock next waiter
  }

  lock( mtxExternal );

  return ( bTimedOut ) ? ETIMEDOUT : 0;
}

signal(bAll) {

  [auto: register int result           ]
  [auto: register HANDLE hevBlockQueue ]

  lock( mtxUnblockLock );

  if ( 0 != nWaitersToUnblock ) {        // the gate is closed!!!
    if ( 0 == nWaitersBlocked ) {        // NO-OP
      return unlock( mtxUnblockLock );
    }
    if (bAll) {
      nWaitersToUnblock += nWaitersBlocked;
      nWaitersBlocked = 0;
      eBroadcast = 2;
      hevBlockQueue = hevBlockQueueB;
    }
    else {
      nWaitersToUnblock++;
      nWaitersBlocked--;
      return unlock( mtxUnblockLock );
    }
  }
  else if ( nWaitersBlocked > nWaitersGone ) { // HARMLESS RACE CONDITION!
    wait( hevBlockLock,INFINITE );             // close the gate
    if ( 0 != nWaitersGone ) {
      nWaitersBlocked -= nWaitersGone;
      nWaitersGone = 0;
    }
    if (bAll) {
      nWaitersToUnblock = nWaitersBlocked;
      nWaitersBlocked = 0;
      eBroadcast = 1;
      hevBlockQueue = hevBlockQueueB;
    }
    else {
      nWaitersToUnblock = 1;
      nWaitersBlocked--;
      hevBlockQueue = hevBlockQueueS;
    }
  }
  else { // NO-OP
    return unlock( mtxUnblockLock );
  }

  unlock( mtxUnblockLock );
  set_event( hevBlockQueue );
}
2242
---------------------- Forwarded by Alexander Terekhov/Germany/IBM on
02/21/2001 09:13 AM ---------------------------

To: Louis Thomas <lthomas@arbitrade.com>
From: Alexander Terekhov/Germany/IBM@IBMDE
Subject: RE: FYI/comp.programming.threads/Re: pthread_cond_*
         implementation questions

>Sorry, gotta take a break and work on something else for a while.
>calls, unfortunately. I'll get back to you in two or three days.

ok. no problem. here is some more stuff for pauses you might have
2263
---------- Algorithm 7d / IMPL_EVENT,UNBLOCK_STRATEGY == UNBLOCK_ALL ------

given:
  hevBlockLock      - auto-reset event
  hevBlockQueueS    - auto-reset event   // for signals
  hevBlockQueueB    - manual-reset event // for broadcasts
  mtxExternal       - mutex or CS
  mtxUnblockLock    - mutex or CS
  bBroadcast        - bool
  nWaitersGone      - int
  nWaitersBlocked   - int
  nWaitersToUnblock - int

wait( timeout ) {

  [auto: register int result          ]     // error checking omitted
  [auto: register int bWasBroadcast   ]
  [auto: register int nSignalsWasLeft ]

  wait( hevBlockLock,INFINITE );
  nWaitersBlocked++;
  set_event( hevBlockLock );

  unlock( mtxExternal );
  bTimedOut = waitformultiple( hevBlockQueueS,hevBlockQueueB,timeout,ONE );

  lock( mtxUnblockLock );
  if ( 0 != (nSignalsWasLeft = nWaitersToUnblock) ) {
    if ( bTimeout ) {                       // timeout (or canceled)
      if ( 0 != nWaitersBlocked ) {
        nWaitersBlocked--;
        nSignalsWasLeft = 0;                // do not unblock next waiter
                                            // below (already unblocked)
      }
      else if ( !bBroadcast ) {
        wait( hevBlockQueueS,INFINITE );    // better now than spurious
      }
    }
    if ( 0 == --nWaitersToUnblock ) {
      if ( 0 != nWaitersBlocked ) {
        if ( bBroadcast ) {
          reset_event( hevBlockQueueB );
          bBroadcast = false;
        }
        set_event( hevBlockLock );          // open the gate
        nSignalsWasLeft = 0;                // do not open the gate below
      }
      else if ( false != (bWasBroadcast = bBroadcast) ) {
        bBroadcast = false;
      }
    }
    else {
      bWasBroadcast = bBroadcast;
    }
  }
  else if ( INT_MAX/2 == ++nWaitersGone ) { // timeout/canceled or spurious
    wait( hevBlockLock,INFINITE );
    nWaitersBlocked -= nWaitersGone;        // something is going on here -
                                            // test of timeouts? :-)
    set_event( hevBlockLock );
    nWaitersGone = 0;
  }
  unlock( mtxUnblockLock );

  if ( 1 == nSignalsWasLeft ) {
    if ( bWasBroadcast ) {
      reset_event( hevBlockQueueB );
    }
    set_event( hevBlockLock );              // open the gate
  }
  else if ( 0 != nSignalsWasLeft && !bWasBroadcast ) {
    set_event( hevBlockQueueS );            // unblock next waiter
  }

  lock( mtxExternal );

  return ( bTimedOut ) ? ETIMEDOUT : 0;
}

signal(bAll) {

  [auto: register int result           ]
  [auto: register HANDLE hevBlockQueue ]

  lock( mtxUnblockLock );

  if ( 0 != nWaitersToUnblock ) {        // the gate is closed!!!
    if ( 0 == nWaitersBlocked ) {        // NO-OP
      return unlock( mtxUnblockLock );
    }
    if (bAll) {
      nWaitersToUnblock += nWaitersBlocked;
      nWaitersBlocked = 0;
      bBroadcast = true;
      hevBlockQueue = hevBlockQueueB;
    }
    else {
      nWaitersToUnblock++;
      nWaitersBlocked--;
      return unlock( mtxUnblockLock );
    }
  }
  else if ( nWaitersBlocked > nWaitersGone ) { // HARMLESS RACE CONDITION!
    wait( hevBlockLock,INFINITE );             // close the gate
    if ( 0 != nWaitersGone ) {
      nWaitersBlocked -= nWaitersGone;
      nWaitersGone = 0;
    }
    if (bAll) {
      nWaitersToUnblock = nWaitersBlocked;
      nWaitersBlocked = 0;
      bBroadcast = true;
      hevBlockQueue = hevBlockQueueB;
    }
    else {
      nWaitersToUnblock = 1;
      nWaitersBlocked--;
      hevBlockQueue = hevBlockQueueS;
    }
  }
  else { // NO-OP
    return unlock( mtxUnblockLock );
  }

  unlock( mtxUnblockLock );
  set_event( hevBlockQueue );
}
2395
----------------------------------------------------------------------------

Subject: RE: FYI/comp.programming.threads/Re: pthread_cond_*
         implementation questions
Date: Mon, 26 Feb 2001 22:20:12 -0600
From: Louis Thomas <lthomas@arbitrade.com>
To: "'TEREKHOV@de.ibm.com'" <TEREKHOV@de.ibm.com>
CC: rpj@ise.canberra.edu.au, Thomas Pfaff <tpfaff@gmx.net>,
    Nanbor Wang <nanbor@cs.wustl.edu>

Sorry all. Busy week.

> this insures the fairness
> which POSIX does not (e.g. two subsequent broadcasts - the gate does
> insure that first wave waiters will start the race for the mutex before
> waiters from the second wave - Linux pthreads process/unblock both waves
> concurrently)

I'm not sure how we are any more fair about this than Linux. We certainly
don't guarantee that the threads released by the first broadcast will get
the external mutex before the threads of the second wave. In fact, it is
possible that those threads will never get the external mutex if there is
enough contention for it.
2421
> e.g. i was thinking about implementation with a pool of
> N semaphores/counters [...]

I considered that too. The problem is as you mentioned in a). You really
need to assign threads to semaphores once you know how you want to wake them
up, not when they first begin waiting, which is the only time you can assign
them.
2429
> well, i am not quite sure that i've fully understood your scenario,

Hmm. Well, I think it's an important example, so I'll try again. First, we
have thread A which we KNOW is waiting on a condition. As soon as it becomes
unblocked for any reason, we will know because it will set a flag. Since the
flag is not set, we are 100% confident that thread A is waiting on the
condition. We have another thread, thread B, which has acquired the mutex
and is about to wait on the condition. Thus it is pretty clear that at any
point, either just A is waiting, or A and B are waiting. Now thread C comes
along. C is about to do a broadcast on the condition. A broadcast is
guaranteed to unblock all threads currently waiting on a condition, right?
Again, we said that either just A is waiting, or A and B are both waiting.
So, when C does its broadcast, depending upon whether B has started waiting
or not, thread C will unblock A or unblock A and B. Either way, C must
unblock A.

Now, you said anything that happens is correct so long as a) "a signal is
not lost between unlocking the mutex and waiting on the condition" and b) "a
thread must not steal a signal it sent", correct? Requirement b) is easy to
satisfy: in this scenario, thread C will never wait on the condition, so it
won't steal any signals. Requirement a) is not hard either. The only way we
could fail to meet requirement a) in this scenario is if thread B has
started waiting but didn't wake up because a signal was lost. This will not
happen.

Now, here is what happens. Assume thread C beats thread B. Thread C looks to
see how many threads are waiting on the condition. Thread C sees just one
thread, thread A, waiting. It does a broadcast, waking up just one thread
because just one thread is waiting. Next, before A can become unblocked,
thread B begins waiting. Now there are two threads waiting, but only one
will be unblocked. Suppose B wins. B will become unblocked. A will not
become unblocked, because C only unblocked one thread (sema_post cond, 1).
So at the end, B finishes and A remains blocked.

We have met both of your requirements, so by your rules, this is an
acceptable outcome. However, I think that the spec says this is an
unacceptable outcome! We know for certain that A was waiting and that C did
a broadcast, but A did not become unblocked! Yet, the spec says that a
broadcast wakes up all waiting threads. This did not happen. Do you agree
that this shows your rules are not strict enough?
2470
> and what about N2? :) this one does allow almost everything.

Don't get me started about rule #2. I'll NEVER advocate an algorithm that
uses rule 2 as an excuse to suck!

> but it is done (decrement) under mutex protection - this is not a subject
> of a race condition.

You are correct. My mistake.

> i would remove "_bTimedOut=false".. after all, it was a real timeout..

I disagree. A thread that can't successfully retract its waiter status can't
really have timed out. If a thread can't return without executing extra code
to deal with the fact that someone tried to unblock it, I think it is a poor
didn't realize someone was trying to signal us. After all, a signal is more
important than a time out.

> when nSignaled != 0, it is possible to update nWaiters (--) and do not

I realize this, but I was thinking that writing it the other way saves
another if statement.

> adjust only if nGone != 0 and save one cache memory write - probably much

Hmm. You are probably right.

> well, in a strange (e.g. timeout test) program you may (theoretically)
> have an overflow of nWaiters/nGone counters (with waiters repeatedly
> timing out and no signals at all).

Also true. Not only that, but you also have the possibility that one could
overflow the number of waiters as well! However, considering the limit you
have chosen for nWaitersGone, I suppose it is unlikely that anyone would be
able to get INT_MAX/2 threads waiting on a single condition. :)

It looks correct to me.

What are IPC semaphores?

In the line where you state, "else if ( nWaitersBlocked > nWaitersGone ) {
// HARMLESS RACE CONDITION!" there is no race condition for nWaitersGone
because nWaitersGone is never modified without holding mtxUnblockLock. You
are correct that there is a harmless race on nWaitersBlocked, which can
increase and make the condition become true just after we check it. If this
happens, we interpret it as the wait starting after the signal.
2523
I like your optimization of this. You could improve Alg. 6 as follows:

---------- Algorithm 6b ----------
signal(bAll) {
  _nSig=0
  lock counters
  // this is safe because nWaiting can only be decremented by a thread that
  // owns counters and nGone can only be changed by a thread that owns
  // counters.
  if (nWaiting>nGone) {
    if (0==nSignaled) {
      sema_wait gate // close gate if not already closed
    }
    if (nGone>0) {
      nWaiting-=nGone
      nGone=0
    }
    _nSig=bAll?nWaiting:1
    nSignaled+=_nSig
    nWaiting-=_nSig
  }
  unlock counters
  if (0!=_nSig) {
    sema_post queue, _nSig
  }
}
---------- ---------- ----------

I guess this wouldn't apply to Alg. 8a because nWaitersGone changes meaning
depending upon whether the gate is open or closed.
2552
In the loop "while ( nWaitersWasGone-- ) {" you do a sema_wait on
semBlockLock. Perhaps waiting on semBlockQueue would be a better idea.

What have you gained by making the last thread to be signaled do the waits
for all the timed-out threads, besides added complexity? It took me a long
time to figure out what your objective was with this, to realize you were
using nWaitersGone to mean two different things, and to verify that you
hadn't introduced any bug by doing this. Even now I'm not 100% sure.

What has all this playing about with nWaitersGone really gained us besides a
lot of complexity (it is much harder to verify that this solution is
correct), execution overhead (we now have a lot more if statements to
evaluate), and space overhead (more space for the extra code, and another
integer in our data)? We did manage to save a lock/unlock pair in an
uncommon case (when a time out occurs) at the above-mentioned expenses in
all cases.

As for 8b, c, and d, they look OK, though I haven't studied them thoroughly.
What would you use them for?
2575
-----------------------------------------------------------------------------

Subject: RE: FYI/comp.programming.threads/Re: pthread_cond_*
         implementation questions
Date: Tue, 27 Feb 2001 15:51:28 +0100
From: TEREKHOV@de.ibm.com
To: Louis Thomas <lthomas@arbitrade.com>
CC: rpj@ise.canberra.edu.au, Thomas Pfaff <tpfaff@gmx.net>,
    Nanbor Wang <nanbor@cs.wustl.edu>

>> that first wave waiters will start the race for the mutex before waiters
>> from the second wave - Linux pthreads process/unblock both waves

>I'm not sure how we are any more fair about this than Linux. We certainly
>don't guarantee that the threads released by the first broadcast will get
>the external mutex before the threads of the second wave. In fact, it is
>possible that those threads will never get the external mutex if there is
>enough contention for it.

correct. but the gate is nevertheless more fair than Linux because of the
barrier it establishes between two races (1st and 2nd wave waiters) for
the mutex, which under 'normal' circumstances (e.g. all threads of equal
priorities,..) will 'probably' result in fair behaviour with respect to
the mutex.

>> well, i am not quite sure that i've fully understood your scenario,

>Hmm. Well, I think it's an important example, so I'll try again. ...

ok. now i seem to understand this example. well, now it seems to me
that the only meaningful rule is just:

a) "a signal is not lost between unlocking the mutex and waiting on the
condition"

b) "a thread must not steal a signal it sent"
is not needed at all because a thread which violates b) also violates a).
2619
i'll try to explain..

i think that the most important thing is how POSIX defines waiters'
visibility:

"if another thread is able to acquire the mutex after the about-to-block
thread has released it, then a subsequent call to pthread_cond_signal() or
pthread_cond_broadcast() in that thread behaves as if it were issued after
the about-to-block thread has blocked."

my understanding is the following:

1) there are no guarantees whatsoever with respect to whether signal/broadcast
will actually unblock any 'waiter' if it is done w/o acquiring the mutex
(note that a thread may release it before signal/broadcast - it does not
matter)

2) it is guaranteed that waiters become 'visible' - eligible for unblock - as
soon as the signalling thread acquires the mutex (but not before!!)
2645
>So, when C does its broadcast, depending upon whether B has started waiting
>or not, thread C will unblock A or unblock A and B. Either way, C must
>unblock A.

right. but only if C did acquire the mutex prior to broadcast (it may
release it before broadcast as well).

the implementation will violate the waiters' visibility rule (the signal will
become lost) if C will not unblock A.

>Now, here is what happens. Assume thread C beats thread B. Thread C looks to
>see how many threads are waiting on the condition. Thread C sees just one
>thread, thread A, waiting. It does a broadcast waking up just one thread
>because just one thread is waiting. Next, before A can become unblocked,
>thread B begins waiting. Now there are two threads waiting, but only one
>will be unblocked. Suppose B wins. B will become unblocked. A will not
>become unblocked, because C only unblocked one thread (sema_post cond, 1).
>So at the end, B finishes and A remains blocked.

thread C did acquire the mutex ("Thread C sees just one thread, thread A,
waiting"). beginning from that moment it is guaranteed that a subsequent
broadcast will unblock A. Otherwise we will have a lost signal with respect
to A. I do think that it does not matter whether the signal was physically
(completely) lost or was just stolen by another thread (B) - in both cases
it was simply lost with respect to A.

>..Do you agree that this shows your rules are not strict enough?

probably the opposite.. :-) i think that it shows that the only meaningful
rule is:

a) "a signal is not lost between unlocking the mutex and waiting on the
condition"

with clarification of waiters' visibility as defined by POSIX above.
2684
>> i would remove "_bTimedOut=false".. after all, it was a real timeout..

>I disagree. A thread that can't successfully retract its waiter status can't
>really have timed out. If a thread can't return without executing extra code
>to deal with the fact that someone tried to unblock it, I think it is a poor
>didn't realize someone was trying to signal us. After all, a signal is more
>important than a time out.

a) POSIX does allow a timed-out thread to consume a signal (cancelled is
different).

b) the ETIMEDOUT status just says that: "The time specified by abstime to
pthread_cond_timedwait() has passed."

c) it seems to me that hiding timeouts would violate "The
pthread_cond_timedwait() function is the same as pthread_cond_wait() except
that an error is returned if the absolute time specified by abstime passes
(that is, system time equals or exceeds abstime) before the condition cond
is signaled or broadcasted"

the abs. time did really pass before cond was signaled (the waiter was
released via the semaphore). however, if it really matters, i could imagine
that we can save an abs. time of signal/broadcast and compare it with the
timeout after unblock to find out whether it was a 'real' timeout or not.
absent this check, i do think that hiding timeouts would result in a
technical violation of the specification.. but i think that this check is
not important and we can trust the timeout error code provided by wait since
we are not trying to make a 'hard' realtime implementation.
>What are IPC semaphores?

int semctl(int, int, int, ...);
int semget(key_t, int, int);
int semop(int, struct sembuf *, size_t);

they support adjustment of the semaphore counter (semvalue)
in one single call - imagine Win32 ReleaseSemaphore( hsem, -N )
>In the line where you state, "else if ( nWaitersBlocked > nWaitersGone ) {
>// HARMLESS RACE CONDITION!" there is no race condition for nWaitersGone
>because nWaitersGone is never modified without holding mtxUnblockLock. You
>are correct that there is a harmless race on nWaitersBlocked, which can
>increase and make the condition become true just after we check it. If this
>happens, we interpret it as the wait starting after the signal.
well, the reason why i've asked on comp.programming.threads whether this
race condition is harmless or not is that in order to be harmless it should
not violate the waiters visibility rule (see above). Fortunately, we
increment the counter under protection of the external mutex.. so that any
(signalling) thread which will acquire the mutex next should see the
updated counter (in signal) according to POSIX memory visibility rules for
mutexes (memory barriers). But i am not so sure how it actually works on
Win32, which does not explicitly define any memory visibility rules :(
>I like your optimization of this. You could improve Alg. 6 as follows:
>
>---------- Algorithm 6b ----------
>  // this is safe because nWaiting can only be decremented by a thread that
>  // owns counters and nGone can only be changed by a thread that owns
>  // gate
>  if (nWaiting>nGone) {
>    if (0==nSignaled) {
>      sema_wait gate // close gate if not already closed
>    }
>    _nSig=bAll?nWaiting:1
>    sema_post queue, _nSig
>  }
>---------- ---------- ----------
>I guess this wouldn't apply to Alg 8a because nWaitersGone changes
>meanings depending upon whether the gate is open or closed.
>
>In the loop "while ( nWaitersWasGone-- ) {" you do a sema_wait on
>semBlockLock. Perhaps waiting on semBlockQueue would be a better idea.

you are correct. my mistake.
>What have you gained by making the last thread to be signaled do the waits
>for all the timed out threads, besides added complexity? It took me a long
>time to figure out what your objective was with this, to realize you were
>using nWaitersGone to mean two different things, and to verify that you
>hadn't introduced any bug by doing this. Even now I'm not 100% sure.
>
>What has all this playing about with nWaitersGone really gained us besides a
>lot of complexity (it is much harder to verify that this solution is
>correct), execution overhead (we now have a lot more if statements to
>evaluate), and space overhead (more space for the extra code, and another
>integer in our data)? We did manage to save a lock/unlock pair in an
>uncommon case (when a time out occurs) at the above mentioned expenses in
>the common case.
well, please consider the following:

1) with multiple waiters unblocked (but some timed out) the trick with
the last unblocked waiter doing the semaphore cleanup does
seem to ensure a potentially higher level of concurrency by not delaying
most of the unblocked waiters for semaphore cleanup - only the last one
will be delayed but all others would already contend/acquire/release
the external mutex - the critical section protected by mtxUnblockLock is
made smaller (increment + couple of IFs is faster than a system/kernel
call) which i think is good in general. however, you are right, this is
done at the expense of 'normal' waiters..
2) some semaphore APIs (e.g. POSIX IPC sems) do allow to adjust the
semaphore counter in one call => less system/kernel calls.. imagine:

  if ( 1 == nSignalsWasLeft ) {
    if ( 0 != nWaitersWasGone ) {
      ReleaseSemaphore( semBlockQueue, -nWaitersWasGone ); // better now
    }
    sem_post( semBlockLock ); // open the gate
  }
3) even on win32 a single thread doing multiple cleanup calls (to wait)
will probably result in faster execution (because of processor caching)
than multiple threads each doing a single call to wait.

>As for 8b, c, and d, they look ok though I haven't studied them thoroughly.
>What would you use them for?

8b) for semaphores which do not allow to unblock multiple waiters
in a single call to post/release (e.g. POSIX realtime semaphores -
sem_post)

8c/8d) for WinCE prior to 3.0 (WinCE 3.0 does have semaphores)
ok. so, which one is the 'final' algorithm(s) which we should use in
pthreads-win32?

----------------------------------------------------------------------------

Louis Thomas <lthomas@arbitrade.com> on 02/27/2001 05:20:12 AM

Please respond to Louis Thomas <lthomas@arbitrade.com>

To: Alexander Terekhov/Germany/IBM@IBMDE
cc: rpj@ise.canberra.edu.au, Thomas Pfaff <tpfaff@gmx.net>, Nanbor Wang
    <nanbor@cs.wustl.edu>
Subject: RE: FYI/comp.programming.threads/Re: pthread_cond_* implementation

Sorry all. Busy week.
> this insures the fairness
> which POSIX does not (e.g. two subsequent broadcasts - the gate does
> insure that first wave waiters will start the race for the mutex before
> waiters from the second wave - Linux pthreads process/unblock both waves
> concurrently)

I'm not sure how we are any more fair about this than Linux. We certainly
don't guarantee that the threads released by the first broadcast will get
the external mutex before the threads of the second wave. In fact, it is
possible that those threads will never get the external mutex if there is
enough contention for it.
> e.g. i was thinking about implementation with a pool of
> N semaphores/counters [...]

I considered that too. The problem is as you mentioned in a). You really
need to assign threads to semaphores once you know how you want to wake
them up, not when they first begin waiting, which is the only time you can
assign them.
> well, i am not quite sure that i've fully understood your scenario,

Hmm. Well, I think it's an important example, so I'll try again. First, we
have thread A which we KNOW is waiting on a condition. As soon as it is
unblocked for any reason, we will know because it will set a flag. Since the
flag is not set, we are 100% confident that thread A is waiting on the
condition. We have another thread, thread B, which has acquired the mutex
and is about to wait on the condition. Thus it is pretty clear that at any
point, either just A is waiting, or A and B are waiting. Now thread C comes
along. C is about to do a broadcast on the condition. A broadcast is
guaranteed to unblock all threads currently waiting on a condition, right?
Again, we said that either just A is waiting, or A and B are both waiting.
So, when C does its broadcast, depending upon whether B has started waiting
or not, thread C will unblock A or unblock A and B. Either way, C must
unblock A, right?
Now, you said anything that happens is correct so long as a) "a signal is
not lost between unlocking the mutex and waiting on the condition" and b)
"a thread must not steal a signal it sent", correct? Requirement b) is easy
to satisfy: in this scenario, thread C will never wait on the condition, so
it won't steal any signals. Requirement a) is not hard either. The only way
we could fail to meet requirement a) in this scenario is if thread B had
started waiting but didn't wake up because a signal was lost. This will not
happen here.
Now, here is what happens. Assume thread C beats thread B. Thread C looks
to see how many threads are waiting on the condition. Thread C sees just one
thread, thread A, waiting. It does a broadcast waking up just one thread
because just one thread is waiting. Next, before A can become unblocked,
thread B begins waiting. Now there are two threads waiting, but only one
will be unblocked. Suppose B wins. B will become unblocked. A will not
become unblocked, because C only unblocked one thread (sema_post cond, 1).
So at the end, B finishes and A remains blocked.

We have met both of your requirements, so by your rules, this is an
acceptable outcome. However, I think that the spec says this is an
unacceptable outcome! We know for certain that A was waiting and that C did
a broadcast, but A did not become unblocked! Yet, the spec says that a
broadcast wakes up all waiting threads. This did not happen. Do you agree
that this shows your rules are not strict enough?
> and what about N2? :) this one does allow almost everything.

Don't get me started about rule #2. I'll NEVER advocate an algorithm that
uses rule 2 as an excuse to suck!

> but it is done (decrement) under mutex protection - this is not a subject
> of a race condition.

You are correct. My mistake.
> i would remove "_bTimedOut=false".. after all, it was a real timeout..

I disagree. A thread that can't successfully retract its waiter status
can't really have timed out. If a thread can't return without executing
extra code to deal with the fact that someone tried to unblock it, I think
it is a mistake to pretend that we didn't realize someone was trying to
signal us. After all, a signal is more important than a time out.
> when nSignaled != 0, it is possible to update nWaiters (--) and do not
> touch nGone.

I realize this, but I was thinking that writing it the other way saves
another if statement.

> adjust only if nGone != 0 and save one cache memory write - probably much
> faster.

Hmm. You are probably right.
> well, in a strange (e.g. timeout test) program you may (theoretically)
> have an overflow of nWaiters/nGone counters (with waiters repeatedly
> timing out and no signals at all).

Also true. Not only that, but you also have the possibility that one could
overflow the number of waiters as well! However, considering the limit you
have chosen for nWaitersGone, I suppose it is unlikely that anyone would be
able to get INT_MAX/2 threads waiting on a single condition. :)
It looks correct to me.

What are IPC semaphores?

In the line where you state, "else if ( nWaitersBlocked > nWaitersGone ) {
// HARMLESS RACE CONDITION!" there is no race condition for nWaitersGone
because nWaitersGone is never modified without holding mtxUnblockLock. You
are correct that there is a harmless race on nWaitersBlocked, which can
increase and make the condition become true just after we check it. If this
happens, we interpret it as the wait starting after the signal.
I like your optimization of this. You could improve Alg. 6 as follows:

---------- Algorithm 6b ----------
  // this is safe because nWaiting can only be decremented by a thread that
  // owns counters and nGone can only be changed by a thread that owns
  // gate
  if (nWaiting>nGone) {
    if (0==nSignaled) {
      sema_wait gate // close gate if not already closed
    }
    _nSig=bAll?nWaiting:1
    sema_post queue, _nSig
  }
---------- ---------- ----------
I guess this wouldn't apply to Alg 8a because nWaitersGone changes meanings
depending upon whether the gate is open or closed.

In the loop "while ( nWaitersWasGone-- ) {" you do a sema_wait on
semBlockLock. Perhaps waiting on semBlockQueue would be a better idea.
What have you gained by making the last thread to be signaled do the waits
for all the timed out threads, besides added complexity? It took me a long
time to figure out what your objective was with this, to realize you were
using nWaitersGone to mean two different things, and to verify that you
hadn't introduced any bug by doing this. Even now I'm not 100% sure.

What has all this playing about with nWaitersGone really gained us besides a
lot of complexity (it is much harder to verify that this solution is
correct), execution overhead (we now have a lot more if statements to
evaluate), and space overhead (more space for the extra code, and another
integer in our data)? We did manage to save a lock/unlock pair in an
uncommon case (when a time out occurs) at the above mentioned expenses in
the common case.

As for 8b, c, and d, they look ok though I haven't studied them thoroughly.
What would you use them for?