On 15 Nov '06, at 2:02 AM, Satish Mittal wrote:
> So the time saved here is essentially just sending the subsequent
> request packets - the actual data transfer for each such request
> (which is huge compared to each request packet) is still sequential,
> unless we can also read all these responses in parallel and then
> assemble the complete response at our end?
You're neglecting latency: the time saved is that of sending the next
request, waiting for the packet to reach the server, waiting for the
server to process it, and waiting for the response packets to start
arriving. Since the ping time to the server can be a good fraction of
a second, twice that is a big overhead per file.
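To put rough numbers on it (purely illustrative figures: a 100 ms
round trip and the common 32 KB read size):

  10 MB file / 32 KB per READ request  ~ 320 requests
  320 requests x 0.1 s round trip      ~ 32 s spent just waiting,
  on top of the actual transfer time, if each request is only sent
  after the previous response has fully arrived.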
It doesn't necessarily mean handling multiple requests in parallel,
just starting the next request before receiving the last packets of
the previous one. At any time you'd only be receiving the data for a
single request.
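As a rough sketch of that pipelining idea (not libssh2's actual API;
send_read_request() and recv_read_response() are hypothetical
stand-ins for writing an SSH_FXP_READ packet and reading back the
matching SSH_FXP_DATA reply):

#include <stddef.h>

/* Hypothetical helpers, not part of libssh2. */
extern void send_read_request(void *handle, size_t offset, size_t len);
extern void recv_read_response(void *handle);

#define CHUNK     32768   /* typical SFTP read size */
#define PIPELINE  4       /* requests kept in flight */

void pipelined_read(void *handle, size_t file_size)
{
    size_t next_offset = 0;   /* next offset to request */
    size_t outstanding = 0;   /* requests sent but not yet answered */

    /* Prime the pipeline: issue a few requests up front. */
    while (outstanding < PIPELINE && next_offset < file_size) {
        send_read_request(handle, next_offset, CHUNK);
        next_offset += CHUNK;
        outstanding++;
    }

    /* Responses still arrive one at a time, in order; the win is
     * that the server already has the next request queued while
     * the current response is on the wire. */
    while (outstanding > 0) {
        recv_read_response(handle);
        outstanding--;

        if (next_offset < file_size) {
            send_read_request(handle, next_offset, CHUNK);
            next_offset += CHUNK;
            outstanding++;
        }
    }
}

The pipeline only needs to be deep enough to cover one round trip;
at any moment you're still reading the data for a single request.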
—Jens