Opened 7 weeks ago

Last modified 7 weeks ago

#2714 new defect

SSU: OMF nextSendDelay not calculated properly

Reported by: Zlatin Balevsky
Owned by: zzz
Priority: minor
Milestone: undecided
Component: router/transport
Version: 0.9.45
Keywords: ssu
Cc:
Parent Tickets:
Sensitive: no

Description

The current logic ignores the value of PeerState.getNextDelay() if it happens to be the last or only peer with something to send. The patch below fixes that; a standalone sketch of the corrected flow follows the diff.

diff --git a/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java b/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java
index 2159b7707..03ec420dc 100644
--- a/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java
+++ b/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java
@@ -270,8 +270,8 @@ class OutboundMessageFragments {
         List<OutboundMessageState> states = null;
         // Keep track of how many we've looked at, since we don't start the iterator at the beginning.
         int peersProcessed = 0;
+        int nextSendDelay = Integer.MAX_VALUE;
         while (_alive && (states == null) ) {
-            int nextSendDelay = Integer.MAX_VALUE;
             // no, not every time - O(n**2) - do just before waiting below
             //finishMessages();
 
@@ -288,8 +288,8 @@ class OutboundMessageFragments {
                     // Otherwise, wait()
                     long now = _context.clock().now();
                     while (_iterator.hasNext()) {
-                        peer = _iterator.next();
-                        int remaining = peer.finishMessages(now);
+                        PeerState p = _iterator.next();
+                        int remaining = p.finishMessages(now);
                         if (remaining <= 0) {
                             // race with add()
                             _iterator.remove();
@@ -298,20 +298,19 @@ class OutboundMessageFragments {
                             continue;
                         }
                         peersProcessed++;
-                        states = peer.allocateSend();
+                        states = p.allocateSend();
                         if (states != null) {
+                            peer = p;
                             // we have something to send and we will be returning it
                             break;
-                        } else if (peersProcessed >= _activePeers.size()) {
+                        } 
+                        int delay = p.getNextDelay();
+                        if (delay < nextSendDelay)
+                            nextSendDelay = delay;
+                        
+                        if (peersProcessed >= _activePeers.size()) {
                             // we've gone all the way around, time to sleep
                             break;
-                        } else {
-                            // Update the minimum delay for all peers
-                            // which will be used if we found nothing to send across all peers
-                            int delay = peer.getNextDelay();
-                            if (delay < nextSendDelay)
-                                nextSendDelay = delay;
-                            peer = null;
                         }
                     }
 
@@ -329,8 +328,10 @@ class OutboundMessageFragments {
                         // use max of 1 second so finishMessages() and/or PeerState.finishMessages()
                         // gets called regularly
                         int toWait = Math.min(Math.max(nextSendDelay, 10), MAX_WAIT);
-                        //if (_log.shouldLog(Log.DEBUG))
-                        //    _log.debug("wait for " + toWait);
+                        if (_log.shouldLog(Log.DEBUG))
+                            _log.debug("wait for " + toWait);
+
+                        nextSendDelay = Integer.MAX_VALUE;
                         // wait.. or somethin'
                         synchronized (_activePeers) {
                             try {

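For readers following the diff, here is a minimal standalone sketch of the behavior the patch establishes. The class, interface, and method names below (NextDelaySketch, Peer, computeWait, hasSomethingToSendNow) are hypothetical stand-ins for illustration, not the router's actual API:

// Minimal sketch of the control-flow change above; all names are
// hypothetical, not the actual OutboundMessageFragments/PeerState API.
import java.util.List;

class NextDelaySketch {
    static final int MAX_WAIT = 1000; // mirrors the 1-second cap noted in the diff

    // Stand-in for PeerState: a peer either has fragments ready now,
    // or can report how long until it next will.
    interface Peer {
        boolean hasSomethingToSendNow();
        int getNextDelay(); // milliseconds until this peer is ready
    }

    // Returns how long the sender loop should wait before rescanning.
    static int computeWait(List<Peer> peers) {
        // Hoisted accumulator: it must survive the entire scan so the delay
        // of the last (or only) peer examined still counts. The pre-patch
        // code broke out of the loop on the last peer before updating this
        // minimum, so that peer's delay was silently dropped.
        int nextSendDelay = Integer.MAX_VALUE;
        for (Peer p : peers) {
            if (p.hasSomethingToSendNow())
                return 0; // something to send immediately; no wait needed
            int delay = p.getNextDelay();
            if (delay < nextSendDelay)
                nextSendDelay = delay;
        }
        // Clamp as in OutboundMessageFragments: at least 10 ms so we don't
        // spin, at most MAX_WAIT so finishMessages() still runs regularly.
        return Math.min(Math.max(nextSendDelay, 10), MAX_WAIT);
    }
}

The other half of the patch resets nextSendDelay to Integer.MAX_VALUE only after each wait(), rather than at the top of every pass, so a minimum computed during a scan is not discarded before it is used.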
Subtickets

Change History (1)

comment:1 Changed 7 weeks ago by zzz

This patch conflicts with the pending rolled-up jogger patch (a 4330-line preliminary version was received January 23, 2020), which purports to fix the following tickets:

#657 #774 #1613 #2259 #2386 #2412 #2427 #2506 #2512 #2576 #2609 #2613 #2640 #2641 #2642 #2646 #2647 #2648 #2649 #2650 #2653 #2654 #2655 #2656 #2657 #2658 #2660 #2664 #2668 #2675

To date we have not come to an agreement on that patch: we have not agreed to take it as-is, and he has not agreed to split it up into smaller patches.

It is worth reviewing his patch for an alternative way of fixing this.
Accepting the patch here would probably seal a decision not to take the jogger patch.

It's also worth determining which of the above tickets may overlap with this one.
