>
>> So will it not care about any ACKs? What if you send a 10 GB file and the
>> first packet is never acked? Maybe a limit on the amount of outstanding
>> un-acked data?
>
> No, not like that; available acks are processed after every successful
> send. We just don't wait for acks that are not available yet, at least
> in nonblocking mode. I don't know how the code would behave in
> blocking mode; I should probably check that.
I’ve written custom pipelining code for upload/download using some of the libssh2 primitive functions. I also recommend having a maximum outstanding packet count; I’ve saturated lesser servers by flooding them with concurrent writes. Our current limit is 64 outstanding writes, which I believe mirrors OpenSSH.
The approach I took is to increase the number of concurrent requests slowly. I send an initial request and wait to see whether it fails due to a write error or other transient error, and only start sending concurrent requests once that first synchronous write has succeeded.
I then ramp up gradually: first send 2 writes, drain the ready replies, then increase the outstanding count by one, drain again, and so on until I reach the maximum of 64 outstanding requests. This gradual increase lets small files upload without immediately hitting the server with a burst of concurrent writes; it seems to work well in practice.
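The ramp-up strategy described above can be sketched roughly as follows. This is a minimal simulation, not real libssh2 code: `FakeTransport`, `send_write`, `drain_ready_acks`, and the chunking logic are all illustrative names I've invented to show the windowing behavior, with a stub transport that acks every outstanding write on drain.

```python
MAX_OUTSTANDING = 64   # cap on in-flight writes, mirroring OpenSSH per the post
CHUNK = 32 * 1024      # assumed bytes per write request (illustrative)


class FakeTransport:
    """Stand-in for an SFTP channel; every pending write is acked on drain."""

    def __init__(self):
        self.pending = []  # writes sent but not yet acked
        self.acked = 0

    def send_write(self, data):
        self.pending.append(len(data))

    def drain_ready_acks(self):
        # A real client would consume only the acks that have arrived;
        # this simulation simply acks everything outstanding.
        self.acked += len(self.pending)
        self.pending.clear()


def upload(transport, data, chunk_size=CHUNK):
    """Upload `data`, slowly ramping the number of outstanding writes.

    Returns the peak number of concurrent in-flight writes reached,
    so the windowing behavior can be observed.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    peak = 0

    # Step 1: one synchronous write first, so transient errors
    # surface before any concurrency starts.
    if chunks:
        transport.send_write(chunks[0])
        transport.drain_ready_acks()

    # Step 2: ramp -- send 2 writes, drain, send 3, drain, ...
    # until the window reaches MAX_OUTSTANDING.
    window, i = 2, 1
    while i < len(chunks):
        batch = chunks[i:i + window]
        for c in batch:
            transport.send_write(c)
        peak = max(peak, len(transport.pending))
        transport.drain_ready_acks()
        i += len(batch)
        window = min(window + 1, MAX_OUTSTANDING)

    return peak
```

With this shape, a file of only a few chunks never sees more than two or three writes in flight, while a large transfer climbs to and then holds the 64-write ceiling.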
Hope this helps,
Will
_______________________________________________
libssh2-devel https://cool.haxx.se/cgi-bin/mailman/listinfo/libssh2-devel
Received on 2018-08-27