id	summary	reporter	owner	description	type	status	priority	milestone	component	version	severity	resolution	keywords	cc
374	File streams are flushed too late at termination	dmik		"LIBC flushes all file streams from its _DLL_InitTerm(), in a callback invoked via _CRT_term(). However, if these streams are bound to TCP sockets (e.g. via dup2() and subsequent parent-child inheritance), flushing fails. This happens because TCP sockets are closed from !__exit() via a !__libc_spmTerm() callback. !__exit(), in turn, is eventually reached from LIBC exit(), which runs after main() returns (or when it is called directly from main()) and thus much earlier than the point at which OS/2 invokes _DLL_InitTerm(). There is also a _CRT_term() call before !__exit() (and hence before the TCP sockets are closed), but that call only decrements the _CRT_init() reference counter; since the counter is not yet zero at that point (due to pending _CRT_term() calls from _DLL_InitTerm() of the LIBC DLL itself and of other kLIBC-based DLLs), the termination callbacks are not processed.

_CRT_init()/_CRT_term() calls can be nested, and the reference counter ensures that only the first init call and the last term call do the actual work. For a simple hello-world application the sequence looks like this:
{{{
_CRT_init in _DLL_InitTerm of LIBC DLL -> actual init (1)
_CRT_init in _DLL_InitTerm of GCC DLL
_CRT_init in EXE
main
exit -> close open TCP sockets (2)
_CRT_term in EXE
_CRT_term in _DLL_InitTerm of GCC DLL
_CRT_term in _DLL_InitTerm of LIBC DLL -> actual term (3)
}}}

Buffers of buffered streams are flushed in (3), but since the sockets are already closed in (2), flushing them simply fails, which leads to data loss in the receiving application. An example of such an application is attached."	defect	new	normal	new	libc	0.6.6	normal			
