Opened 7 months ago · Closed 7 days ago

#2714 closed defect (fixed)

SSU: OMF nextSendDelay not calculated properly

Reported by: Zlatin Balevsky
Owned by: zzz
Priority: minor
Milestone: 0.9.48
Component: router/transport
Version: 0.9.45
Keywords: ssu
Cc:
Parent Tickets:
Sensitive: no

Description

The current logic ignores the value of PeerState.getNextDelay() if it happens to be the last or only peer with something to send. The patch below fixes that.

diff --git a/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java b/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java
index 2159b7707..03ec420dc 100644
--- a/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java
+++ b/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java
@@ -270,8 +270,8 @@ class OutboundMessageFragments {
         List<OutboundMessageState> states = null;
         // Keep track of how many we've looked at, since we don't start the iterator at the beginning.
         int peersProcessed = 0;
+        int nextSendDelay = Integer.MAX_VALUE;
         while (_alive && (states == null) ) {
-            int nextSendDelay = Integer.MAX_VALUE;
             // no, not every time - O(n**2) - do just before waiting below
             //finishMessages();
 
@@ -288,8 +288,8 @@ class OutboundMessageFragments {
                     // Otherwise, wait()
                     long now = _context.clock().now();
                     while (_iterator.hasNext()) {
-                        peer = _iterator.next();
-                        int remaining = peer.finishMessages(now);
+                        PeerState p = _iterator.next();
+                        int remaining = p.finishMessages(now);
                         if (remaining <= 0) {
                             // race with add()
                             _iterator.remove();
@@ -298,20 +298,19 @@ class OutboundMessageFragments {
                             continue;
                         }
                         peersProcessed++;
-                        states = peer.allocateSend();
+                        states = p.allocateSend();
                         if (states != null) {
+                            peer = p;
                             // we have something to send and we will be returning it
                             break;
-                        } else if (peersProcessed >= _activePeers.size()) {
+                        } 
+                        int delay = p.getNextDelay();
+                        if (delay < nextSendDelay)
+                            nextSendDelay = delay;
+                        
+                        if (peersProcessed >= _activePeers.size()) {
                             // we've gone all the way around, time to sleep
                             break;
-                        } else {
-                            // Update the minimum delay for all peers
-                            // which will be used if we found nothing to send across all peers
-                            int delay = peer.getNextDelay();
-                            if (delay < nextSendDelay)
-                                nextSendDelay = delay;
-                            peer = null;
                         }
                     }
 
@@ -329,8 +328,10 @@ class OutboundMessageFragments {
                         // use max of 1 second so finishMessages() and/or PeerState.finishMessages()
                         // gets called regularly
                         int toWait = Math.min(Math.max(nextSendDelay, 10), MAX_WAIT);
-                        //if (_log.shouldLog(Log.DEBUG))
-                        //    _log.debug("wait for " + toWait);
+                        if (_log.shouldLog(Log.DEBUG))
+                            _log.debug("wait for " + toWait);
+
+                        nextSendDelay = Integer.MAX_VALUE;
                         // wait.. or somethin'
                         synchronized (_activePeers) {
                             try {
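
For reference, here is a minimal standalone sketch of the delay tracking before and after the patch. All names in it (NextSendDelaySketch, FakePeer, nextDelayMs) are hypothetical; it only mirrors the control flow of the selection loop and is not the real OutboundMessageFragments code.

import java.util.List;

public class NextSendDelaySketch {

    // Stand-in for PeerState: each peer reports how long until it is next ready to send.
    record FakePeer(String name, int nextDelayMs) {}

    // Old control flow: getNextDelay() is only consulted in the trailing "else",
    // which is never reached for the last peer of the round (or the only peer).
    // With a single active peer, nextSendDelay stays at Integer.MAX_VALUE and
    // the caller ends up waiting the full MAX_WAIT.
    static int oldMinDelay(List<FakePeer> peers) {
        int nextSendDelay = Integer.MAX_VALUE;
        int peersProcessed = 0;
        for (FakePeer p : peers) {
            peersProcessed++;
            // assume allocateSend() returned null for every peer (nothing ready to go yet)
            if (peersProcessed >= peers.size()) {
                break;                      // last/only peer: its delay is never recorded
            } else {
                nextSendDelay = Math.min(nextSendDelay, p.nextDelayMs());
            }
        }
        return nextSendDelay;
    }

    // Patched control flow: every peer's delay feeds the minimum before the
    // "have we gone all the way around?" check.
    static int newMinDelay(List<FakePeer> peers) {
        int nextSendDelay = Integer.MAX_VALUE;
        int peersProcessed = 0;
        for (FakePeer p : peers) {
            peersProcessed++;
            nextSendDelay = Math.min(nextSendDelay, p.nextDelayMs());
            if (peersProcessed >= peers.size())
                break;
        }
        return nextSendDelay;
    }

    public static void main(String[] args) {
        List<FakePeer> single = List.of(new FakePeer("only-peer", 40));
        // old = 2147483647 (delay ignored, sender sleeps the full MAX_WAIT)
        // new = 40 (sender wakes roughly when the peer is actually ready)
        System.out.println("old = " + oldMinDelay(single) + ", new = " + newMinDelay(single));
    }
}

With a single active peer the old flow never records the peer's getNextDelay() value, which is exactly the case the description above refers to.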


Change History (5)

comment:1 Changed 7 months ago by zzz

This patch conflicts with the pending rolled-up jogger patch (4330-line preliminary version received January 23 2020), which purports to fix the following tickets:

#657 #774 #1613 #2259 #2386 #2412 #2427 #2506 #2512 #2576 #2609 #2613 #2640 #2641 #2642 #2646 #2647 #2648 #2649 #2650 #2653 #2654 #2655 #2656 #2657 #2658 #2660 #2664 #2668 #2675

To date we have not come to an agreement on that patch: we have not agreed to take it as-is, and he has not agreed to split it up into smaller patches.

It is worth reviewing his patch for an alternative way of fixing this.
Accepting the patch here would probably seal a decision not to take the jogger patch.

It's also worth finding which of the above tickets may overlap with this one.

comment:2 Changed 8 days ago by zzz

Milestone: undecided → 0.9.48

See comments in #2713.
My preference is to apply this separately.

comment:3 Changed 8 days ago by Zlatin Balevsky

I need this patch to run experiments in the testnet, where very often there is only a single peer ready to send to. I agree it's cleaner to apply it separately from #2710, but until that happens I will continue to test the cumulative patch for that ticket.

comment:4 Changed 8 days ago by Zlatin Balevsky

I meant #2713

comment:5 Changed 7 days ago by zzz

Resolution: fixed
Status: new → closed