Subject: RE: Libssh2 usage from cURL with various buffer sizes.

From: Patrik Thunström <>
Date: Fri, 18 Nov 2011 15:00:02 +0100

-----Original Message-----
From: Alexander Lamaison []
Sent: den 18 november 2011 12:58
To: libssh2 development
Subject: Re: Libssh2 usage from cURL with various buffer sizes.

> This behaviour sounds like a problem with 1.3.0 that was
> (partially) fixed recently. Have you tried the latest from git?

> Alex

Now I have, using the latest snapshot (November 18 daily).
As you said, it seems to be partially fixed.

The big performance hit related to the buffer size is definitely gone.

Running through the test suite showed an issue when using a 16k buffer
against the OpenSSH server, however. When doing the 1000 x 20kB file set
twice in a row, the process just stalls in sftp_readdir, which is called
before the transfer starts to determine which files to download. More
specifically, the time was spent in sftp_packet_requirev, and all the
exclusive time was, just as in the previous profiling, spent in
sftp_packet_ask. This was shown by a short profiling round over a few
minutes while the process was stalled.
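For what it's worth, my rough understanding of that code path (an
assumption from reading the libssh2 source, not verified against this
exact snapshot) is that sftp_packet_requirev waits for one of several
acceptable response types by repeatedly calling sftp_packet_ask, which
linearly scans the list of already-received SFTP packets; if the expected
response never shows up, the loop keeps reading and rescanning, which
would match the profile. A minimal, self-contained C sketch of that scan
pattern (simplified hypothetical structures, not libssh2's actual code):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for libssh2's internal
 * received-packet list -- for illustration only. */
typedef struct packet {
    uint8_t type;         /* e.g. SSH_FXP_STATUS = 101, SSH_FXP_NAME = 104 */
    uint32_t request_id;  /* id tying a response to its request */
    struct packet *next;
} packet;

/* Analogue of sftp_packet_ask: linear scan of the received-packet
 * list for a packet of the wanted type and request id. */
static packet *packet_ask(packet *head, uint8_t type, uint32_t request_id)
{
    for (packet *p = head; p; p = p->next)
        if (p->type == type && p->request_id == request_id)
            return p;
    return NULL;
}

/* Analogue of sftp_packet_requirev: accept any of several response
 * types for one request. When nothing matches, the real code reads
 * more data from the channel and scans again -- which is where the
 * stalled process would spend its time. */
static packet *packet_requirev(packet *head, const uint8_t *types,
                               size_t ntypes, uint32_t request_id)
{
    for (size_t i = 0; i < ntypes; i++) {
        packet *p = packet_ask(head, types[i], request_id);
        if (p)
            return p;
    }
    return NULL; /* caller would block reading the channel, then retry */
}
```

If the READDIR response for a given request id never arrives (or is
mismatched), that retry loop never terminates, which would look exactly
like the stall above.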

This happens with a 100% reproduction rate, and I've tried somewhere
around 10 times.
The other file sets did not trigger the same issue, and CoreSFTP did not
show it either, with any of the sets or 16k/16M buffer combinations. So I
could only reproduce it with the combination of the 16k buffer, the
OpenSSH server and the 1000 x 20kB file set.

So it's a clear improvement, but other issues have surfaced instead. I'm
not sure how related they are to the changes made.

Also, the slowdown over multiple files is still noticeable when running
with a larger buffer. This is, however, much less of an issue, as it can
always be worked around by reinitializing the connection to the server.

Received on 2011-11-18