On Wed, 7 Sep 2011, liuzl wrote:
> Agree. When transferring a big file, I split it into several blocks and
> transfer them over several SFTP connections at the same time.
> Each connection downloads only its own block, but the read-ahead
> behavior crosses block boundaries and wastes network traffic.
No, that's not true. The read-ahead concept is the only way we can achieve
high-speed SFTP transfers. Sure, your application can use multiple connections,
but that's not a sustainable network solution, and without the read-ahead
concept you would be forced to use a large number of connections (>10) to
reach decent speeds. This assumes high-latency, high-bandwidth networks.
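To see why a single outstanding request is so slow on such links, here is a back-of-the-envelope calculation. The RTT, chunk size, and window size below are illustrative assumptions, not libssh2 measurements:

```python
# Why one-request-at-a-time SFTP stalls on high-latency links:
# each read costs a full round trip, so throughput is capped at
# chunk_size / RTT regardless of available bandwidth.

rtt = 0.1            # round-trip time in seconds (100 ms, assumed)
chunk = 32 * 1024    # bytes per SFTP read request (32 KB, assumed)

# One request in flight: every chunk pays the whole RTT.
serial_throughput = chunk / rtt            # bytes/second

# With N requests pipelined (read-ahead), the RTT is amortized.
n_outstanding = 64
pipelined_throughput = n_outstanding * chunk / rtt

print(serial_throughput)     # 327680.0   -> about 0.3 MB/s
print(pipelined_throughput)  # 20971520.0 -> about 20 MB/s
```

The same math explains why, without read-ahead, you need many parallel connections just to keep the pipe full.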
Strictly speaking we need "pipelining" and not necessarily read-ahead, but
with our existing API it's hard to imagine a way to accomplish pipelining
without reading ahead.
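The relationship between the two can be sketched with a toy model: to offer a blocking, sequential read() on top of a request/response protocol, the library has to keep a window of requests in flight, which means speculatively requesting data the caller has not asked for yet. The class and method names below are illustrative, not the libssh2 API:

```python
from collections import deque

class PipelinedReader:
    """Toy model of pipelining behind a sequential read() API.
    Issuing an offset stands in for sending an SFTP FXP_READ request;
    completing one stands in for receiving its reply."""

    def __init__(self, data: bytes, chunk: int = 4, window: int = 3):
        self.data = data
        self.chunk = chunk
        self.window = window     # max outstanding requests
        self.next_offset = 0     # next offset to request speculatively
        self.pending = deque()   # offsets requested but not yet answered
        self.ready = {}          # offset -> received bytes

    def _issue_ahead(self):
        # Keep the request window full: this *is* the read-ahead.
        while (len(self.pending) < self.window
               and self.next_offset < len(self.data)):
            self.pending.append(self.next_offset)
            self.next_offset += self.chunk

    def _complete_one(self):
        # Simulate one reply arriving from the server.
        off = self.pending.popleft()
        self.ready[off] = self.data[off:off + self.chunk]

    def read(self, offset: int) -> bytes:
        self._issue_ahead()
        while offset not in self.ready:
            self._complete_one()
        return self.ready.pop(offset)

r = PipelinedReader(b"abcdefghijkl")
chunks = [r.read(off) for off in range(0, 12, 4)]
print(b"".join(chunks))   # b'abcdefghijkl'
```

Note that even though the caller only ever asks for the next sequential chunk, the window forces requests for later offsets to be outstanding, which is exactly the coupling described above.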
I don't think this is much of a "waste" of network traffic, especially not if
we improve it by, for example, taking the file size into account and letting
the application adjust the read-ahead amount.
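One way the file-size improvement could look, as a hypothetical helper (the function name and defaults are mine, not libssh2's): clamp the speculative read-ahead to the bytes that actually remain, so the tail of a transfer never requests data past EOF.

```python
def readahead_amount(offset: int, filesize: int,
                     chunk: int = 32 * 1024,
                     max_outstanding: int = 64) -> int:
    """Hypothetical helper: limit read-ahead to what remains in the
    file, instead of always requesting a full speculative window."""
    remaining = max(filesize - offset, 0)
    return min(max_outstanding * chunk, remaining)

# Mid-file: a full window of 64 * 32 KB is justified.
print(readahead_amount(0, 10 * 1024 * 1024))                        # 2097152
# Near EOF: only request the 1000 bytes that are actually left.
print(readahead_amount(10 * 1024 * 1024 - 1000, 10 * 1024 * 1024))  # 1000
```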
It does come to a somewhat extreme situation with a 6MB buffer, yes. But
perhaps we shouldn't target the library design for such edge cases? (But of
course make sure we deal with them properly.)
I think our main focus should first be to fix the bugs, and then work on
improving the behavior.
--
/ daniel.haxx.se