Opened 3 months ago

Last modified 2 days ago

#2506 new enhancement

Time UDP bandwidth retrans precisely

Reported by: jogger Owned by: zzz
Priority: minor Milestone: 0.9.43
Component: router/transport Version: 0.9.40
Keywords: Cc:
Parent Tickets: Sensitive: no

Description

While investigating #2505 I found that locked_shouldSend() sets a random retransmission time when no bandwidth is available.

I have already developed and tested the sliding bandwidth window in allocateSendingBytes() for #2412. Building on top of that, it would be easy to determine the exact point in time when enough bandwidth is available for the currently blocked transmission. That would make things a bit faster. Operating on random values is another reason to abandon getNextDelay().

If you agree I would code that.
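A minimal sketch of the idea, assuming the window refills linearly at getSendWindowBytes() bytes per second (the helper below and its signature are hypothetical illustrations, not code from any posted patch):

    // Hypothetical helper illustrating the idea: given a fragment that does
    // not fit into the sliding bandwidth window, compute the earliest time
    // (ms since epoch) at which the window will have refilled enough to send it.
    static long earliestSendTime(long now, int size, int windowBytesRemaining,
                                 int windowBytesPerSecond) {
        if (size <= windowBytesRemaining)
            return now;                       // fits already, no need to wait
        int deficit = size - windowBytesRemaining;
        // the window refills at windowBytesPerSecond, so the deficit takes
        // deficit * 1000 / windowBytesPerSecond ms to refill; +1 rounds up
        return now + 1 + (deficit * 1000L) / windowBytesPerSecond;
    }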

Subtickets

#2427: Maintaining bandwidth correctly: PeerState.java (new, owned by zzz)

Change History (11)

comment:1 Changed 3 months ago by zzz

I think we have to decide what to do about #2412 first. It's still open and it didn't get into .40 as we've been busy and haven't addressed your latest comments there yet, unfortunately.

comment:2 Changed 3 months ago by jogger

OK. I also do not like shooting at moving targets. I already started detailed measurements of your and my proposals for #2412 (big difference) and must now redo those after fixing #2505. One result of fixing #2505 is that the case addressed in the OP now strikes multiple times per second, which makes solving it really worthwhile.

comment:3 Changed 12 days ago by jogger

Sensitive: unset

As I have just posted a patch for #2427, I won't post a patch for PeerState again. After all, it is a one-liner for correct retransmission timing. Just replace the random calculation in locked_shouldSend() with

                // wait exactly until the sliding window has refilled the missing
                // (size - _sendWindowBytesRemaining) bytes, at getSendWindowBytes() bytes/sec
                state.setNextSendTime(now + 1 + (size - _sendWindowBytesRemaining) * 1000 / getSendWindowBytes());

comment:4 Changed 12 days ago by zzz

This is at PeerState.java line 1891.

I also saw this line when reviewing #2574 and mentioned it in comment 5 there. However, there I suggested that it should be based on the FIFOBandwidthRefiller frequency; that's probably not right, since this bandwidth throttle is based on the SSU window, not the global bandwidth limiter.

The current calculation depends on ack frequency and adds a random time. Agreed that the random() isn't helpful and just serves to put messages out of order for no reason.

In the OP you say it would be "easy to determine the exact point in time when enough bandwidth is available for the currently blocked transmission", but I don't see how that's possible, and I don't see how the change in comment 3 does that. You only get more window if something gets acked, and you can't predict when that will happen.

In general, the longer the RTT the longer you'll have to wait for acks, so I propose making it a function of RTT, something like setNextSendTime(now + (_rtt/2));

comment:5 Changed 11 days ago by zzz

or if we do want it related to the size, something like:

setNextSendTime(now + Math.max(ACKSender.ACK_FREQUENCY, _rtt * (size - _sendWindowBytesRemaining) / getSendWindowBytes()))

comment:6 Changed 11 days ago by jogger

I agree regarding ACKs, and your solutions work, but they introduce unnecessary delay. Once we set nextSendTime too late, that is final for this message; ACKs do not help then.

As said before, my suggestion builds on top of #2427 (a decade-old stats bug now finally worked out). If we get log lines like these

INFO [acket pusher] router.transport.udp.PeerState: Allocation of 1033 rejected w/ wsize=9225 available=933 for message 4031855372: OB Message 4031855372 type 18 with 1 fragments of size 1033 volleys: 0 lifetime: 1 pending fragments: 0
INFO [acket pusher] router.transport.udp.PeerState: Allocation of 1033 rejected w/ wsize=15423 available=267 for message 1824686231: OB Message 1824686231 type 18 with 1 fragments of size 1033 volleys: 0 lifetime: 1 pending fragments: 0

it is then clear that comment 3 provides the point in time when the sliding window will let this message through, without wasting CPU on a further reject in the meantime. ACKs might make sending possible earlier, but from the logs I see only a small average delay: 11 ms and 50 ms in the typical examples above. Not worth trying earlier, no reason to try later.
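Plugging the two rejected allocations above into the formula from comment 3, rounded to whole milliseconds:

    (1033 - 933)  * 1000 / 9225  ≈ 11 ms
    (1033 - 267)  * 1000 / 15423 ≈ 50 ms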

comment:7 Changed 10 days ago by jogger

This should make both of us happy:

if (size > _sendWindowBytesRemaining)
   // not enough window: wait exactly until the sliding window (#2427)
   // has refilled the missing bytes
   state.setNextSendTime(now + 1 + (size - _sendWindowBytesRemaining) * 1000 / getSendWindowBytes());
else
   // window already has room but we were still blocked: fall back to the
   // RTT-based wait from comment 4
   state.setNextSendTime(now + _rtt / 2);
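With the numbers from the first log line in comment 6 (size 1033, available 933, wsize 9225), the first branch yields now + 11; the second branch applies when the window already has room and simply waits half an RTT for acks instead.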

comment:8 Changed 3 days ago by zzz

I'm still confused.

You are saying that you only know the "exact time" (as discussed in OP and comments 3, 4, 6) if we first apply the patch in #2427? That is, this ticket depends on #2427? If so, I need to go look at #2427 first.

comment:9 Changed 3 days ago by jogger

Exactly. Comment 7 complements #2427. The send time set there is the point in time at which the new code in #2427 is guaranteed to have refilled the window enough to let the packet go through.

comment:10 Changed 2 days ago by zzz

Milestone: undecided → 0.9.43

OK thanks, that's the root of my confusion with the "exact point in time" statement in the OP. Let's tag this for 43.

comment:11 Changed 2 days ago by zzz

Add a subticket #2427 (Maintaining bandwidth correctly: PeerState.java).
