Opened 8 years ago
Last modified 8 years ago
#374 new defect
File streams are flushed too late at termination
Reported by: | dmik | Owned by: | |
---|---|---|---|
Priority: | normal | Milestone: | new |
Component: | libc | Version: | 0.6.6 |
Severity: | normal | Keywords: | |
Cc: | | | |
Description
LIBC flushes all file streams from its _DLL_InitTerm(), in a callback invoked via _CRT_term(). However, if these streams are bound to TCP sockets (e.g. via dup2() and further parent-child inheritance), flushing fails. This happens because TCP sockets are closed from __exit() via a __libc_spmTerm() callback. __exit() in turn is eventually called from LIBC exit(), which is called after main() returns (or is called directly from main()), and this apparently happens much earlier than the moment OS/2 calls _DLL_InitTerm(). There is also a _CRT_term() call before __exit() (and hence before the TCP sockets are closed), but this _CRT_term() call only decreases the _CRT_init() reference counter, and since the counter is not zero at that point (due to pending _CRT_term() calls from _DLL_InitTerm() of the LIBC DLL itself and of other kLIBC-based DLLs), the callbacks are not processed.
_CRT_init()/_CRT_term() calls can be nested, and the reference counter makes sure that only the first init call and the last term call do the actual job. In the case of a simple hello application the sequence is something like this:
_CRT_init in _DLL_InitTerm of LIBC DLL -> actual init (1)
_CRT_init in _DLL_InitTerm of GCC DLL
_CRT_init in EXE
main
exit -> close open TCP sockets (2)
_CRT_term in EXE
_CRT_term in _DLL_InitTerm of GCC DLL
_CRT_term in _DLL_InitTerm of LIBC DLL -> actual term (3)
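To make the reference counting concrete, here is a minimal sketch (this is NOT kLIBC source; my_CRT_init/my_CRT_term and do_actual_init/do_actual_term are made-up names) of how nested init/term calls could be guarded so that only the outermost pair does the real work:

```c
static int crt_refcount = 0;

static void do_actual_init(void) { /* e.g. set up stdio buffers */ }
static void do_actual_term(void) { /* e.g. flush all buffered streams */ }

int my_CRT_init(void)
{
    if (crt_refcount++ == 0)
        do_actual_init();   /* (1): only the first, outermost call initializes */
    return 0;
}

void my_CRT_term(void)
{
    if (--crt_refcount > 0)
        return;             /* nested call (like the one made before __exit()):
                               just drop a reference, skip the callbacks */
    do_actual_term();       /* (3): only the last call runs the termination
                               callbacks, including the stream flush */
}
```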
Buffers of buffered streams are flushed in (3), but given that the sockets are already closed in (2), flushing them simply fails, and this leads to data loss in the receiving application. An example of such an application is attached.
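For illustration only (this is not the attached fork_flush.c), a minimal sketch of the pattern that triggers the problem, assuming socketpair() is available: on a conforming libc the parent receives the data because streams are flushed before the descriptors are closed, while with the behaviour described above the socket is closed in (2) before the flush in (3) and the data is lost.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return 1;

    if (fork() == 0) {
        /* child: bind stdout to a socket, as a parent process would do
           with dup2() before handing the descriptor to a child */
        close(sv[0]);
        dup2(sv[1], STDOUT_FILENO);

        /* stdout is not a terminal now, so stdio buffers this fully;
           nothing reaches the socket until the stream is flushed */
        printf("hello over the socket\n");

        /* return from main -> exit(): if the socket is closed before the
           buffered streams are flushed, the line above is lost */
        return 0;
    }

    close(sv[1]);
    char buf[64];
    ssize_t n = read(sv[0], buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("parent received: %s", buf);
    } else {
        printf("parent received nothing (buffered data was lost)\n");
    }
    wait(NULL);
    return 0;
}
```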
Attachments (1)
Change History (3)
by , 8 years ago
Attachment: fork_flush.c added
comment:1 by , 8 years ago
comment:2 by , 8 years ago
Note that the atexit() workaround (described below) relies on the fact that atexit handlers are processed from LIBC exit() right BEFORE __libc_spmTerm() is called, and hence before the TCP sockets are closed. I guess a possible fix for this within kLIBC is to move the flushall() call from the _CRT_term() callback to the exit() function (of course, after all atexit handlers and other user-accessible callbacks, which may still do some I/O on behalf of the application).
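A rough sketch of that proposal (again not kLIBC source; run_atexit_handlers() and close_tcp_sockets() are illustrative stand-ins, and flushall() is assumed to be declared in <stdio.h> as on kLIBC, with fflush(NULL) being the portable equivalent):

```c
#include <stdio.h>
#include <unistd.h>

static void run_atexit_handlers(void) { /* atexit() callbacks, may still do stream I/O */ }
static void close_tcp_sockets(void)   { /* what the __libc_spmTerm() callback does today */ }

/* Proposed termination order: flush while the sockets are still open. */
static void proposed_exit(int status)
{
    run_atexit_handlers();  /* user-level callbacks run first */
    flushall();             /* moved here from the _CRT_term() callback;
                               fflush(NULL) on platforms without flushall() */
    close_tcp_sockets();    /* step (2) from the description */
    _exit(status);          /* low-level exit; nothing left to flush */
}
```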
A workaround is either to completely disable flushing in the application with
setvbuf(stdout, NULL, _IONBF, 0);
(see e.g. http://trac.netlabs.org/ports/changeset/1873 for a real-life example) or to add an atexit handler and call flushall() from there (this is what the attached example does when you uncomment #define WORKAROUND
). It is, however, very tedious to add such a workaround to every application (and it needs to be done in EVERY application, because any app's output may turn out to be a TCP socket). So until this ticket is resolved, we will fix it in LIBCx, where we already intercept main. See https://github.com/bitwiseworks/libcx/issues/31 for more info.
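A minimal sketch of both workarounds in application code (assuming flushall() is declared in <stdio.h> as on kLIBC; fflush(NULL) is the portable equivalent; in practice one of the two is enough):

```c
#include <stdio.h>
#include <stdlib.h>

/* Workaround 2: flush all buffered streams from an atexit handler; kLIBC
   runs atexit handlers from exit() before __libc_spmTerm() closes sockets. */
static void flush_at_exit(void)
{
    flushall();   /* fflush(NULL) on platforms without flushall() */
}

int main(void)
{
    /* Workaround 1: make stdout unbuffered so there is nothing left to
       flush at termination. */
    setvbuf(stdout, NULL, _IONBF, 0);

    /* Workaround 2 (alternative): register the flush handler. */
    atexit(flush_at_exit);

    printf("this output reaches a dup2'd TCP socket even without a flush\n");
    return 0;
}
```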