Please keep in mind when reading the following that this is not a rant on
Indy or ICS...
The only thing I am interested in is solving the problems we have with some
of our apps!
As I read this thread I am getting a clearer picture of what the problem
with our internal apps is:
We have multithreaded apps - every thread has an FTP and a TELNET client
component in SYNC mode AND one DB connection.
They run on a 1 GBit/s network (previously 100 MBit/s).
They move around big files of 1-2 GB via FTP.
They sometimes do lengthy queries in the DB.
They execute some analytical code on those big files (via Win32 memory
mapped files).
They sometimes use ActiveX objects with lengthy operations...
These apps hang sometimes - the hangs became more frequent when we
switched from the 100 MBit to the 1 GBit network and when our files grew
from 500-1000 MB to 1-2 GB.
I was never able to reliably reproduce the problems - as far as I could
ever trace it, they hung in the message loop of ICS...
This happened with multithreaded and with single-threaded apps.
This whole discussion leads me to the following conclusions about why our
apps hang:
1. SYNC mode in ICS is implemented on top of ASYNC mode with a loop that
pumps messages.
2. ASYNC mode in ICS depends on the thread's / app's message pump and is
therefore vulnerable if something goes wrong with that message pump / loop.
3. The message loop sometimes gets into trouble when one of our lengthy
operations blocks for too long.
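Conclusion 1 can be sketched roughly like this. This is a hypothetical illustration in Python, not ICS source code - all names (AsyncSocket, pump_one_message, receive_sync) are invented for the sketch. The point is that a "sync" call is really a loop pumping messages until the async completion arrives, so anything that keeps the pump from running stalls the "sync" call:

```python
import queue
import threading
import time

# Hypothetical sketch (NOT ICS source): "sync" mode built on an async
# core by pumping messages in a loop until the operation completes.
class AsyncSocket:
    def __init__(self):
        self.events = queue.Queue()   # stands in for the window message queue
        self.done = False

    def start_receive(self):
        # Simulate the async completion arriving later from elsewhere.
        def worker():
            time.sleep(0.05)
            self.events.put("data-received")
        threading.Thread(target=worker, daemon=True).start()

    def pump_one_message(self, timeout=1.0):
        # Equivalent of processing one window message.
        event = self.events.get(timeout=timeout)
        if event == "data-received":
            self.done = True

    def receive_sync(self):
        # The "sync" call: loop the message pump until the event fires.
        # If this thread is busy with a lengthy operation instead,
        # the pump never runs and the call appears to hang.
        self.start_receive()
        while not self.done:
            self.pump_one_message()

sock = AsyncSocket()
sock.receive_sync()
print(sock.done)  # True
```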
Solutions for our scenario:
a) With ICS: we would have to restructure our apps so that all socket
operations run in a thread separate from all lengthy operations...
b) With Indy 9 / 10: we would just have to reimplement the FTP / TELNET
abstraction layer in our apps with one based on Indy instead of ICS.
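Solution (a) amounts to a dedicated socket thread fed by a command queue, so lengthy work on other threads cannot starve the socket thread's message pump. A minimal Python sketch of that pattern, with invented names (SocketWorker, transfer_file - this is not the ICS API):

```python
import queue
import threading

# Sketch of solution (a): all socket work lives on ONE dedicated thread,
# so lengthy operations elsewhere cannot stall its message pump.
class SocketWorker:
    def __init__(self):
        self.commands = queue.Queue()
        self.results = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            job = self.commands.get()
            if job is None:          # shutdown sentinel
                break
            name, payload = job
            # Real code would run the ICS FTP/TELNET operation here;
            # its message pump runs entirely on this thread.
            self.results.put((name, f"done:{payload}"))

    def transfer_file(self, path):
        # The caller blocks on the result queue, not on the pump.
        self.commands.put(("ftp_get", path))
        return self.results.get()

    def stop(self):
        self.commands.put(None)
        self.thread.join()

worker = SocketWorker()
result = worker.transfer_file("bigfile.bin")
print(result)  # ('ftp_get', 'done:bigfile.bin')
worker.stop()
```

The design choice here is that lengthy analysis or DB work blocks only the calling thread, never the thread that owns the sockets.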
Please give me some feedback on the above thoughts...
Yahia
Post by Francois Piette
If you have some operation which has to wait (a blocking operation, not
an async operation), then you can always use a thread to execute it
without blocking. Without mention that you can always have a thread per
This requires you to determine EVERY operation that might block and break
it out. And it's not only big operations like DB access; even smaller
operations have the potential to block, though they often don't. When they
do, they become a bottleneck.
Threading individual pieces then just increases the complexity of the
system. Non-blocking servers are ideal for file-serving type applications,
but anytime logic or unpredictable "locks" are introduced, the design
either falls apart or becomes quite complex.
Post by Francois Piette
But you _MUST_ always use a thread, while with ICS you use a thread
only when required. Much less threads in the system. With Indy, you
_MUST_ have a thread per connection which is really overkill. With lot's
In most cases the threads are sleeping and there is no overkill. Only after
around 1,000 threads does any problem even begin to show.
Post by Francois Piette
of connections, your system will spend more time switching between
threads than doing actual processing. With ICS, you can tailor you
This is a fallacy, Francois, and you know it. Threads are only switched to
when they are active, and socket threads spend most of their time in sleep
states. So in a given pass maybe 50 out of a thousand will be scheduled.
And even 1000 is nothing to a modern CPU. Divide 1 second by 1000, then
factor in 1 GHz or more. Each thread still gets MANY MANY CPU cycles, in
fact many more than its quantum. Because of this, even if EVERY thread were
active and every thread got scheduled, the quantum means each thread would
still get scheduled a very large number of times in that second.
--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"
Need extra help with an Indy problem?
http://www.atozedsoftware.com/indy/experts/support.html
ELKNews - Get your free copy at http://www.atozedsoftware.com