Opened 22 months ago

Last modified 16 months ago

#2280 new enhancement

Snark pipeline size

Reported by: Zlatin Balevsky Owned by: zzz
Priority: trivial Milestone: undecided
Component: apps/i2psnark Version: 0.9.35
Keywords: Cc:
Parent Tickets: Sensitive: no


Starting a discussion on the limits of the snark pipeline (5 pieces, 128 KB). I've seen uTorrent reach 30+ requests in its pipeline, and I would think that for a high-latency environment a longer pipeline makes even more sense.


Change History (3)

comment:1 Changed 22 months ago by zzz

I think the effective pipeline is one more, or 6 * 16KB = 96 KB, equivalent to a streaming window size of 57. By I2P standards, that's a lot of data in flight.
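The arithmetic above can be sanity-checked with a short sketch. The 1730-byte figure below is an assumption about the I2P streaming payload size (it is not stated in this ticket); with it, 6 outstanding 16 KB requests work out to roughly 57 streaming packets in flight:

```python
# Sanity check of the figures above. Assumes (not stated in the ticket)
# an I2P streaming payload size of 1730 bytes per packet.
PIPELINE_REQUESTS = 6
CHUNK_SIZE = 16 * 1024          # BitTorrent request (chunk) size in bytes
STREAMING_PAYLOAD = 1730        # assumed bytes carried per streaming packet

data_in_flight = PIPELINE_REQUESTS * CHUNK_SIZE       # 98304 bytes = 96 KB
window_equivalent = data_in_flight / STREAMING_PAYLOAD

print(data_in_flight // 1024, round(window_equivalent))  # 96 57
```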

If we do want to raise it, we may first need to fix the outbound bandwidth limiting, so we don't make the flood of outbound data on unchoke even worse.

comment:2 Changed 22 months ago by Zlatin Balevsky

Are you referring to throttling in snark or in the router? What is wrong with it?

Streaming window size isn't really relevant here; the benefit of a longer pipeline is that fewer round-trip communications need to happen for the data to flow. It doesn't mean the data is "in flight", and it is definitely not in flight in the streaming layer.

The round-trips between requests and responses are impacting snark performance a lot. If you remember, your first observation when the ack-everything bug got fixed was that snark downloads suddenly became much slower.

comment:3 Changed 16 months ago by zzz

Component: apps/other → apps/i2psnark
Owner: set to zzz

Taking another look due to your post i2pforum.i2p/viewtopic.php?f=12&t=570

Your statements in comment 2 above don't match my understanding of how things work.

A larger pipeline does not reduce RTTs. It could reduce stalls.

In BT, you make requests, and the chunks start coming back. After you get each piece, you make another request, so the pipeline stays full. The only way to stall is if all pieces are in-flight, which requires a current streaming window of 57 or more.
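The refill behavior described above can be sketched in a few lines. This is not i2psnark code, just an illustration: with a pipeline depth of 5 (per this ticket), each arriving chunk immediately triggers the next request, so the number of outstanding requests stays at 5 until the tail of the transfer. A stall can only happen if the transport cannot actually deliver that many chunks concurrently:

```python
from collections import deque

PIPELINE = 5   # i2psnark's request pipeline depth, per this ticket

def simulate(total_chunks):
    """Track how many requests are outstanding over a transfer
    in which each received chunk triggers the next request."""
    outstanding = deque()
    next_chunk = 0
    # Initial burst: fill the pipeline.
    while next_chunk < total_chunks and len(outstanding) < PIPELINE:
        outstanding.append(next_chunk)
        next_chunk += 1
    depths = []
    received = 0
    while received < total_chunks:
        depths.append(len(outstanding))
        outstanding.popleft()            # one chunk arrives
        received += 1
        if next_chunk < total_chunks:    # refill immediately
            outstanding.append(next_chunk)
            next_chunk += 1
    return depths

print(simulate(8))  # [5, 5, 5, 5, 4, 3, 2, 1]
```

The pipeline only drains for the final chunks of the transfer; everywhere else the depth is pinned at 5, which is why a deeper pipeline helps only when the transport could carry more than 5 chunks in flight.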
