On Thu, 19 Apr 2012, Adam Craig wrote:
> I found that, with the remote server, while larger packet sizes yielded some
> improvement in speed, it was only barely noticeable compared to the
> fluctuations in speed due to outside factors. On the local network, the
> improvement in speed is pronounced from 2K to 20K but levels off at higher
> sizes.
Thanks for doing this!
Unfortunately, I still believe there are too many unknowns in this equation,
and the tests covered too limited a set for us to draw many conclusions about
libssh2 generically, as opposed to libssh2 in your particular case.
For example:
A) You only used OpenSSH servers on Linux, possibly not even very different
   versions. We must expect different server ends to react differently.
B) What were the RTTs against these servers? SFTP performs significantly
   differently at different RTTs. One of the major challenges with SFTP is
   making it run fast across the whole range of possible RTTs.
C) Which crypto backend and version was used in libssh2? I've seen people
   report very different results when using gcrypt vs. OpenSSL.
D) How large were the buffers you passed to libssh2_sftp_read() when you ran
   these tests, and how did different sizes affect the results? (See the
   sketch below.)
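To make D concrete, something along these lines is what I mean (a rough
sketch only: it assumes 'sftp_handle' is an already-open LIBSSH2_SFTP_HANDLE
on a blocking session, and all error handling is omitted):

#include <libssh2.h>
#include <libssh2_sftp.h>
#include <stdlib.h>
#include <time.h>

/* time a full download of an already-opened file, letting the
   application buffer size vary between runs */
static double time_download(LIBSSH2_SFTP_HANDLE *sftp_handle,
                            size_t bufsize)
{
    struct timespec t0, t1;
    char *buf = malloc(bufsize);
    ssize_t rc;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    while((rc = libssh2_sftp_read(sftp_handle, buf, bufsize)) > 0)
        ; /* keep reading until EOF (0) or error (<0) */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

Running that with buffer sizes like 2000, 32000 and 200000 against the same
servers would tell us how the application buffer size interacts with the
packet sizes you measured.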
> I noticed the following comment on the line defining the limit on
> upload chunk size:
> /*
> * MAX_SFTP_OUTGOING_SIZE MUST not be larger than 32500 or so. This is the
> * amount of data sent in each FXP_WRITE packet
> */
> #define MAX_SFTP_OUTGOING_SIZE 32500
>
> My guess is the "MUST" is due to the following, found in libssh2_priv.h:
It is due to phrasing in the SSH and SFTP protocol specs, yes.
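A single application write larger than that simply becomes several FXP_WRITE
packets on the wire. Illustratively (this is just the arithmetic, not the
actual libssh2 code):

#include <stdio.h>

#define MAX_SFTP_OUTGOING_SIZE 32500

/* a write of 'len' bytes is split into ceil(len / 32500) FXP_WRITE
   packets, each carrying at most MAX_SFTP_OUTGOING_SIZE bytes */
int main(void)
{
    size_t len = 200000; /* example application write size */
    size_t packets = (len + MAX_SFTP_OUTGOING_SIZE - 1)
        / MAX_SFTP_OUTGOING_SIZE;
    printf("%zu bytes => %zu FXP_WRITE packets\n", len, packets);
    return 0;
}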
> I also noticed the following in sftp.c:
> /* This is the maximum packet length to accept, as larger than this indicate
> some kind of server problem. */
> #define LIBSSH2_SFTP_PACKET_MAXLEN 80000
>
> I do not see why 80K is the cutoff here.
It is not a magic "cutoff" really; it is just a way for libssh2 to detect
problems. If it reads a very large packet size, that is a sign that something
is wrong, and it should bail out rather than try to allocate memory for it.
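The principle is roughly this (a sketch of the idea only, not the exact code
in sftp.c):

#define LIBSSH2_SFTP_PACKET_MAXLEN 80000

/* the 32-bit length field arrives before the packet body, so it gets
   sanity-checked first; anything above the limit is treated as a
   protocol error instead of an allocation request */
static int plausible_packet_len(unsigned long packet_len)
{
    if(packet_len > LIBSSH2_SFTP_PACKET_MAXLEN)
        return 0; /* refuse: almost certainly a broken peer */
    return 1;     /* plausible, safe to allocate this much */
}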
> If the spec says the max packet size is 35K, then it would make more sense
> to make all the hard-coded limits on packet size 35K
Then read the spec. Does it say 35K? And further, does 35K work against all
the relevant server implementations? We're being slightly conservative in the
name of interoperability.
> and let users limit packet size further through the buffer sizes they pass
> to the read and write statements.
Then you have grossly missed the finer implementation details of the SFTP
read and write functions libssh2 provides. They do a lot of work to let
applications get high throughput by splitting the buffer into many slices
(which is what I alluded to above in point D). Leaving that to the
application would certainly be possible, but it would require a new API to
still offer high-speed transfers.
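To give a feel for why that slicing matters, here is a back-of-the-envelope
model (assumed numbers, not measurements): with one outstanding request per
round trip you get at most chunk/RTT bytes per second, and keeping N requests
in flight scales that roughly N times until the link bandwidth caps it.

#include <stdio.h>

/* assumed numbers: 100 ms RTT, ~30000 bytes of payload per request */
int main(void)
{
    double rtt = 0.1;
    double chunk = 30000.0;
    int n;
    for(n = 1; n <= 16; n *= 2)
        printf("%2d request(s) in flight: ~%.0f KB/s\n",
               n, n * chunk / rtt / 1000.0);
    return 0;
}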
-- 
 / daniel.haxx.se