On Tue, Dec 14, 2010 at 01:49:01PM +0100, Daniel Stenberg wrote:
> My first test with my new code, using the 4MB/30000 sizes:
>
> Got 102400000 bytes in 20585 ms = 4974496.0 bytes/sec
>
> Correct. Check the number of zeroes. Ten times the data in a third of the
> time: 31 times faster in total...
That's an amazing improvement!
[...]
> When we use this approach we have a significant over-read for small files.
> If, for example, we were to write an application that moves over a directory
> with 100 files, each being 20 bytes, we would perform terribly slowly and
> waste a lot of bandwidth.
One approach could be to use a slow-start that only queues a few
over-reads at the start, then increases the window exponentially to a
maximum of 4 MB as data is read. This shouldn't penalize the small file
case much while (hopefully) allowing large file transfers to happen
reasonably quickly.
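
To make that concrete, here's a minimal C sketch of the slow-start idea. It
only models the windowing arithmetic; INITIAL_WINDOW, CHUNK and queue_read()
are assumptions made up for illustration, not libssh2 API:

#include <stdio.h>
#include <stddef.h>

#define INITIAL_WINDOW (32 * 1024)       /* assumed starting read-ahead */
#define MAX_WINDOW     (4 * 1024 * 1024) /* the 4 MB cap discussed above */
#define CHUNK          30000             /* per-request size from the tests */

/* Hypothetical stand-in for issuing one SSH_FXP_READ request. */
static void queue_read(size_t offset, size_t len)
{
    (void)offset;
    (void)len;
}

int main(void)
{
    size_t window = INITIAL_WINDOW; /* current read-ahead allowance */
    size_t offset = 0;              /* next file offset to request */
    int round;

    /* Each round models "all previously queued reads were answered":
     * fill the window with requests, then double it. */
    for (round = 0; round < 6; round++) {
        size_t end = offset + window;
        int requests = 0;
        while (offset < end) {
            queue_read(offset, CHUNK);
            offset += CHUNK;
            requests++;
        }
        printf("round %d: window=%lu, %d requests in flight\n",
               round, (unsigned long)window, requests);
        if (window < MAX_WINDOW) {
            window *= 2;            /* exponential growth, TCP-style */
            if (window > MAX_WINDOW)
                window = MAX_WINDOW;
        }
    }
    return 0;
}

With a 32 KB start, the first round costs at most two 30000-byte requests,
so a 20-byte file wastes very little, while a large transfer reaches the
full 4 MB window within a handful of round trips.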
> IMPROVEMENTS
>
> I think that we should consider having the SFTP code do an SSH_FXP_STAT
> query first to figure out the size of the remote file so that _no_
> "over-read" will be done and thus there will be no punishment for small
> files. Of course this will then not work exactly like today in cases where,
> for example, the file is being written to while the download begins.
The latter case could be handled by queuing a single over-read past the EOF.
In the normal case it won't return any data, but if the file size has
increased and it does return data, then treat the file as infinitely long and
continue queuing over-reads in 4 MB chunks until the true end of the file is
reached.
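
A rough C sketch of how those two ideas could combine: SSH_FXP_STAT first so
reads are queued only up to the known size, then one probe read past EOF to
catch a growing file. remote_stat(), queue_read() and probe_returned_data()
are made-up stand-ins here, not the libssh2 API:

#include <stdio.h>
#include <stddef.h>

#define CHUNK 30000  /* per-request size from the tests above */

/* Hypothetical stand-in for an SSH_FXP_STAT round-trip. */
static size_t remote_stat(const char *path)
{
    (void)path;
    return 70000; /* pretend the server reported a 70000-byte file */
}

/* Hypothetical stand-in for issuing one SSH_FXP_READ request. */
static void queue_read(size_t offset, size_t len)
{
    printf("SSH_FXP_READ offset=%lu len=%lu\n",
           (unsigned long)offset, (unsigned long)len);
}

/* Hypothetical stand-in: did the probe read past EOF return data? */
static int probe_returned_data(void)
{
    return 0; /* the normal case: nothing beyond the stat'ed size */
}

int main(void)
{
    size_t size = remote_stat("file.bin");
    size_t offset = 0;

    /* Queue exactly what the stat'ed size calls for: no over-read,
     * so small files pay no penalty. */
    while (offset < size) {
        size_t len = (size - offset < CHUNK) ? size - offset : CHUNK;
        queue_read(offset, len);
        offset += len;
    }

    /* A single probe past the reported EOF catches a file that is
     * still being appended to. */
    queue_read(offset, CHUNK);

    if (probe_returned_data()) {
        /* The file grew while we were reading: treat it as
         * infinitely long and keep a 4 MB pipeline of over-reads
         * going until a read finally returns EOF. */
        printf("file grew, switching to open-ended over-reads\n");
    }
    return 0;
}

The probe costs one extra request per file in the normal case, which seems a
fair price for keeping downloads of still-growing files working.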
>>> Dan