Opened 13 months ago
Last modified 11 months ago
#2705 new enhancement
Clean up logic for requesting immediate acks
Reported by: | Zlatin Balevsky | Owned by: |
---|---|---|---
Priority: | minor | Milestone: | undecided
Component: | streaming | Version: | 0.9.45
Keywords: | testnet | Cc: |
Parent Tickets: | | Sensitive: | no
Description
The logic for when to request an immediate ack relies on what I call "alchemy": magic numbers and transmit-window ratios.
I decided to test whether it makes any difference and whether the code can be simplified. The attached spreadsheet lists the final speed readings reported by eepget for a 1 MB file. The testnet is set up with a 70 ms delay on each link, for a 140 ms RTT between any two nodes. The tunnels are ECIES, 0.9.45-0.
tl;dr: the current logic improves on the case where immediate acks are never requested, but always requesting improves things further. Note that even though these acks are called "immediate", they are still delayed by 150 ms.
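For reference, the three policies the spreadsheet compares can be reduced to a single decision point. This is a minimal Java sketch, not the actual streaming code; the names and the threshold in CURRENT_HEURISTIC are purely illustrative stand-ins for the "magic numbers and window ratios" mentioned above.

```java
// Hedged sketch of the three ack-request policies compared in the spreadsheet.
public class AckRequestPolicy {
    public enum Strategy { NEVER, CURRENT_HEURISTIC, ALWAYS }

    public static boolean requestImmediateAck(Strategy strategy,
                                              int unackedPackets, int windowSize) {
        switch (strategy) {
            case NEVER:
                return false;                     // never ask for an immediate ack
            case ALWAYS:
                return true;                      // ask on every outbound packet
            case CURRENT_HEURISTIC:
            default:
                // stand-in for the current "alchemy": magic numbers and window
                // ratios, e.g. only ask when the send window is nearly exhausted
                return unackedPackets >= windowSize - 1;  // illustrative threshold
        }
    }
}
```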
Subtickets
Attachments (1)
Change History (5)
Changed 13 months ago by
Attachment: Requesting_Immediate_Acks.ods added
comment:1 Changed 12 months ago by
comment:2 Changed 12 months ago by
I think the current logic is unnecessarily complicated. In bulk connections like HTTP GET the sender is always maxing out its send window, so on the receiver side the earlier packets end up scheduled for ack later than the later ones. That is a no-op: whenever the ack does get sent, it acks both packets anyway.
In non-bulk connections like IRC the send window is not maxed out and the ack is properly delayed, but what is the use of that delay if the connection is idle most of the time?
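To illustrate why the per-packet delay scheduling is moot for bulk traffic, here is a minimal receiver-side sketch, assuming cumulative acks and a single ack timer per connection (as later noted for #2715). The class and names are illustrative and not taken from ConnectionPacketHandler.

```java
// Hedged sketch: with cumulative acks, whichever deadline fires first
// acks everything received so far, so per-packet delays collapse anyway.
public class DelayedAcker {
    private long highestReceived = -1;          // highest sequence seen so far (gaps ignored)
    private long ackDeadline = Long.MAX_VALUE;  // earliest requested ack time

    // Called for every received packet, each carrying its own requested delay.
    public void packetReceived(long seqNum, long nowMs, long requestedDelayMs) {
        highestReceived = Math.max(highestReceived, seqNum);
        // Keep the earliest deadline; a later packet requesting a shorter delay
        // pulls the ack forward for all earlier packets too.
        ackDeadline = Math.min(ackDeadline, nowMs + requestedDelayMs);
    }

    // Called by the single ack timer; returns the sequence to ack, or null if not due.
    public Long maybeAck(long nowMs) {
        if (nowMs < ackDeadline || highestReceived < 0)
            return null;
        ackDeadline = Long.MAX_VALUE;
        return highestReceived;                 // one ack covers every packet up to here
    }
}
```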
comment:3 Changed 12 months ago by
Version: 0.9.46 → 0.9.45
comment:4 Changed 11 months ago by
#2706 is resolved. Discussion of the heuristics and the reasons for them is in comments 1, 2, and 8 of that ticket.
The code in question is Connection.java line 418, where we send an optional delay of 0 to request an immediate ack when our send window is almost exhausted. This works around a flaw in the streaming protocol: the receiver does not know the sender's window, so the sender has to ask for the ack itself. The discussion in comment 2 above is no longer accurate since we went to a single timer in #2715.
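A hedged sketch of the sender-side workaround described above, assuming the heuristic boils down to "request an immediate ack when the window is nearly full". The names, threshold, and the -1 sentinel are illustrative, not copied from Connection.java.

```java
// Hedged sketch of the sender-side immediate-ack request heuristic.
public class ImmediateAckRequest {
    /**
     * Decide the optional-delay value to put on an outbound packet.
     * Returning 0 asks the receiver to ack immediately; returning -1 stands in
     * for "leave the option unset" so the receiver's normal delayed-ack timer decides.
     */
    public static int optionalDelayFor(int unackedPackets, int windowSize) {
        boolean windowAlmostExhausted = unackedPackets + 1 >= windowSize;  // illustrative threshold
        // The receiver cannot see the sender's window, so the sender must ask
        // for an immediate ack itself before it stalls waiting for window space.
        return windowAlmostExhausted ? 0 : -1;
    }
}
```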
The remaining issue is whether the heuristics need to be tuned to prevent stalls while not causing excessive acks.
Probably low priority; I don't think the current implementation is terrible, but it's definitely worth a look at some point.
related: #2706, see comments there
probably can't do both at once, possibly the same code
ConnectionPacketHandler?
I think I understand CPH, so I'd have to see your patch to understand what you think needs simplifying.