Opened 7 weeks ago

Last modified 7 weeks ago

#2713 new enhancement

SSU: single streaming timer, reset after ack

Reported by: Zlatin Balevsky
Owned by: zzz
Priority: minor
Milestone: undecided
Component: router/transport
Version: 0.9.45
Keywords: ssu
Cc:
Parent Tickets:
Sensitive: no

Description (last modified by Zlatin Balevsky)

This is the SSU equivalent of #2710. The problem is the same: each message effectively has its own retransmission timer, whereas there should be a single timer per connection.
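
For illustration, a minimal sketch of the intended model (the class and method names here are hypothetical, not the actual I2P code): one timer per connection, armed when the first unacked message is sent, restarted whenever new data is acked, and cancelled once nothing is outstanding.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // "Message" is a placeholder type for this sketch
    class Message {}

    class Connection {
        private long retransmitTimer;            // 0 == timer not armed
        private long rto = 1000;                 // current RTO in ms
        private final Deque<Message> unacked = new ArrayDeque<Message>();

        synchronized void onSend(Message m, long now) {
            unacked.add(m);
            if (retransmitTimer == 0)
                retransmitTimer = now + rto;     // arm only for the first unacked message
        }

        synchronized void onAck(Message m, long now) {
            unacked.remove(m);
            // restart the timer whenever new data is acked;
            // cancel it once nothing is outstanding
            retransmitTimer = unacked.isEmpty() ? 0 : now + rto;
        }
    }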

The patch below replaces the per-message nextSendTime variable in OutboundMessageState with a single variable in PeerState. There are a few other changes to bring the implementation more in line with RFCs 6298 and 5681: during congestion, up to half of the outstanding window can be retransmitted, as opposed to just a single message as is the case now.
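
For orientation, a rough sketch of the timeout path those RFCs describe, continuing the hypothetical Connection sketch above (resend() and MAX_RTO are placeholders; the real patch applies the backoff and the half-window limit inside PeerState):

    private static final long MAX_RTO = 60 * 1000;   // illustrative cap, ms

    // On retransmission timeout: back off the RTO (RFC 6298 section 5.5)
    // and retransmit up to half of the outstanding window rather than a
    // single message, in the spirit of RFC 5681 loss recovery.
    synchronized void onRetransmitTimeout(long now) {
        rto = Math.min(MAX_RTO, rto << 1);           // exponential backoff
        retransmitTimer = now + rto;                 // restart the single timer
        int limit = Math.max(1, unacked.size() / 2);
        for (Message m : unacked) {
            if (limit-- <= 0)
                break;
            resend(m);                               // resend() is a placeholder
        }
    }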

The patch also includes the patch for #2714.

Testnet results in both a synthetic setting (no loss or delay) and a realistic one (70 ms delay, 0.5% loss) will be attached soon.

diff --git a/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java b/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java
index 2159b7707..ad8e7f2a7 100644
--- a/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java
+++ b/router/java/src/net/i2p/router/transport/udp/OutboundMessageFragments.java
@@ -270,8 +270,8 @@ class OutboundMessageFragments {
         List<OutboundMessageState> states = null;
         // Keep track of how many we've looked at, since we don't start the iterator at the beginning.
         int peersProcessed = 0;
+        int nextSendDelay = Integer.MAX_VALUE;
         while (_alive && (states == null) ) {
-            int nextSendDelay = Integer.MAX_VALUE;
             // no, not every time - O(n**2) - do just before waiting below
             //finishMessages();
 
@@ -288,30 +288,29 @@ class OutboundMessageFragments {
                     // Otherwise, wait()
                     long now = _context.clock().now();
                     while (_iterator.hasNext()) {
-                        peer = _iterator.next();
-                        int remaining = peer.finishMessages(now);
+                        PeerState p = _iterator.next();
+                        int remaining = p.finishMessages(now);
                         if (remaining <= 0) {
                             // race with add()
                             _iterator.remove();
                             if (_log.shouldLog(Log.DEBUG))
-                                _log.debug("No more pending messages for " + peer.getRemotePeer());
+                                _log.debug("No more pending messages for " + p.getRemotePeer());
                             continue;
                         }
                         peersProcessed++;
-                        states = peer.allocateSend();
+                        states = p.allocateSend();
                         if (states != null) {
+                            peer = p;
                             // we have something to send and we will be returning it
                             break;
-                        } else if (peersProcessed >= _activePeers.size()) {
+                        } 
+                        int delay = p.getNextDelay();
+                        if (delay < nextSendDelay)
+                            nextSendDelay = delay;
+                        
+                        if (peersProcessed >= _activePeers.size()) {
                             // we've gone all the way around, time to sleep
                             break;
-                        } else {
-                            // Update the minimum delay for all peers
-                            // which will be used if we found nothing to send across all peers
-                            int delay = peer.getNextDelay();
-                            if (delay < nextSendDelay)
-                                nextSendDelay = delay;
-                            peer = null;
                         }
                     }
 
@@ -329,8 +328,10 @@ class OutboundMessageFragments {
                         // use max of 1 second so finishMessages() and/or PeerState.finishMessages()
                         // gets called regularly
                         int toWait = Math.min(Math.max(nextSendDelay, 10), MAX_WAIT);
-                        //if (_log.shouldLog(Log.DEBUG))
-                        //    _log.debug("wait for " + toWait);
+                        if (_log.shouldLog(Log.DEBUG))
+                            _log.debug("wait for " + toWait);
+
+                        nextSendDelay = Integer.MAX_VALUE;
                         // wait.. or somethin'
                         synchronized (_activePeers) {
                             try {
@@ -370,6 +371,12 @@ class OutboundMessageFragments {
         return packets;
     }
 
+    void nudge() {
+        synchronized(_activePeers) {
+            _activePeers.notifyAll();
+        }
+    }
+
     /**
      *  @return null if state or peer is null
      */
diff --git a/router/java/src/net/i2p/router/transport/udp/OutboundMessageState.java b/router/java/src/net/i2p/router/transport/udp/OutboundMessageState.java
index f5a42ee27..f010e4dd3 100644
--- a/router/java/src/net/i2p/router/transport/udp/OutboundMessageState.java
+++ b/router/java/src/net/i2p/router/transport/udp/OutboundMessageState.java
@@ -31,7 +31,6 @@ class OutboundMessageState implements CDPQEntry {
     private long _fragmentAcks;
     private final int _numFragments;
     private final long _startedOn;
-    private long _nextSendTime;
     private int _pushCount;
     private int _maxSends;
     // we can't use the ones in _message since it is null for injections
@@ -77,7 +76,6 @@ class OutboundMessageState implements CDPQEntry {
         _i2npMessage = msg;
         _peer = peer;
         _startedOn = _context.clock().now();
-        _nextSendTime = _startedOn;
         _expiration = _startedOn + EXPIRATION;
         //_expiration = msg.getExpiration();
 
@@ -166,9 +164,6 @@ class OutboundMessageState implements CDPQEntry {
         return isComplete();
     }
     
-    public long getNextSendTime() { return _nextSendTime; }
-    public void setNextSendTime(long when) { _nextSendTime = when; }
-
     /**
      *  The max number of sends for any fragment, which is the
      *  same as the push count, at least as it's coded now.
diff --git a/router/java/src/net/i2p/router/transport/udp/PeerState.java b/router/java/src/net/i2p/router/transport/udp/PeerState.java
index d67390e5b..44a27395c 100644
--- a/router/java/src/net/i2p/router/transport/udp/PeerState.java
+++ b/router/java/src/net/i2p/router/transport/udp/PeerState.java
@@ -219,8 +219,8 @@ public class PeerState {
     //private final CoDelPriorityBlockingQueue<OutboundMessageState> _outboundQueue;
     private final PriBlockingQueue<OutboundMessageState> _outboundQueue;
 
-    /** which outbound message is currently being retransmitted */
-    private OutboundMessageState _retransmitter;
+    /** when the retransmit timer is about to trigger */
+    private volatile long _retransmitTimer;
     
     private final UDPTransport _transport;
     
@@ -246,9 +246,6 @@ public class PeerState {
     private static final int MINIMUM_WINDOW_BYTES = DEFAULT_SEND_WINDOW_BYTES;
     private static final int MAX_SEND_WINDOW_BYTES = 1024*1024;
 
-    /** max number of msgs returned from allocateSend() */
-    private static final int MAX_ALLOCATE_SEND = 2;
-
     /**
      *  Was 32 before 0.9.2, but since the streaming lib goes up to 128,
      *  we would just drop our own msgs right away during slow start.
@@ -928,6 +925,14 @@ public class PeerState {
             _sendWindowBytes = MINIMUM_WINDOW_BYTES;
         //if (congestionAt/2 < _slowStartThreshold)
             _slowStartThreshold = congestionAt/2;
+
+        int oldRto = _rto;
+        long oldTimer = _retransmitTimer - now;
+        _rto = Math.min(MAX_RTO, Math.max(minRTO(), _rto << 1 ));
+        _retransmitTimer = now + _rto;
+        if (_log.shouldLog(Log.DEBUG))
+            _log.debug(_remotePeer + " Congestion, RTO: " + oldRto + " -> " + _rto + " timer: " + oldTimer + " -> " + (_retransmitTimer - now));
+
         return true;
     }
     
@@ -1186,7 +1191,7 @@ public class PeerState {
      *  We sent a message which was ACKed containing the given # of bytes.
      *  Caller should synch on this
      */
-    private void locked_messageACKed(int bytesACKed, long lifetime, int numSends) {
+    private void locked_messageACKed(int bytesACKed, long lifetime, int numSends, boolean anyPending) {
         _consecutiveFailedSends = 0;
         // _lastFailedSendPeriod = -1;
         if (numSends < 2) {
@@ -1231,17 +1236,31 @@ public class PeerState {
                 adjustMTU();
             //}
         }
+        
+        if (!anyPending) {
+            if (_log.shouldLog(Log.DEBUG))
+                _log.debug(_remotePeer + " nothing pending, cancelling timer");
+            _retransmitTimer = 0;
+        } else {
+            // any time new data gets acked, push out the timer
+            long now = _context.clock().now();
+            long oldTimer = _retransmitTimer - now;
+            _retransmitTimer = now + getRTO();
+            if (_log.shouldLog(Log.DEBUG))
+               _log.debug(_remotePeer + " ACK, timer: " + oldTimer + " -> " + (_retransmitTimer - now));
+        }
+        _transport.getOMF().nudge();
     }
 
     /**
      *  We sent a message which was ACKed containing the given # of bytes.
      */
-    private void messageACKed(int bytesACKed, long lifetime, int numSends) {
+    private void messageACKed(int bytesACKed, long lifetime, int numSends, boolean anyPending) {
         synchronized(this) {
-            locked_messageACKed(bytesACKed, lifetime, numSends);
+            locked_messageACKed(bytesACKed, lifetime, numSends, anyPending);
         }
         if (numSends >= 2 && _log.shouldLog(Log.INFO))
-            _log.info("acked after numSends=" + numSends + " w/ lifetime=" + lifetime + " and size=" + bytesACKed);
+            _log.info(_remotePeer + " acked after numSends=" + numSends + " w/ lifetime=" + lifetime + " and size=" + bytesACKed);
         
         _context.statManager().addRateData("udp.sendBps", _sendBps);
     }
@@ -1548,7 +1567,6 @@ public class PeerState {
 
             List<OutboundMessageState> tempList;
             synchronized (_outboundMessages) {
-                    _retransmitter = null;
                     tempList = new ArrayList<OutboundMessageState>(_outboundMessages);
                     _outboundMessages.clear();
             }
@@ -1610,21 +1628,15 @@ public class PeerState {
                 OutboundMessageState state = iter.next();
                 if (state.isComplete()) {
                     iter.remove();
-                    if (_retransmitter == state)
-                        _retransmitter = null;
                     if (succeeded == null) succeeded = new ArrayList<OutboundMessageState>(4);
                     succeeded.add(state);
                 } else if (state.isExpired(now)) {
                     iter.remove();
-                    if (_retransmitter == state)
-                        _retransmitter = null;
                     _context.statManager().addRateData("udp.sendFailed", state.getPushCount());
                     if (failed == null) failed = new ArrayList<OutboundMessageState>(4);
                     failed.add(state);
                 } else if (state.getPushCount() > OutboundMessageFragments.MAX_VOLLEYS) {
                     iter.remove();
-                    if (state == _retransmitter)
-                        _retransmitter = null;
                     _context.statManager().addRateData("udp.sendAggressiveFailed", state.getPushCount());
                     if (failed == null) failed = new ArrayList<OutboundMessageState>(4);
                     failed.add(state);
@@ -1667,9 +1679,28 @@ public class PeerState {
      * @return allocated messages to send (never empty), or null if no messages or no resources
      */
     List<OutboundMessageState> allocateSend() {
+        long now = _context.clock().now();
+        List<OutboundMessageState> rv = allocateSend2(now >= _retransmitTimer);
+        if (rv != null && !rv.isEmpty()) {
+            synchronized(this) {
+                long old = _retransmitTimer;
+                if (_retransmitTimer == 0)
+                    _retransmitTimer = now + getRTO();
+                if (_log.shouldLog(Log.DEBUG))
+                    _log.debug(_remotePeer + " allocated " + rv.size() + " pushing retransmitter from " + old + " to " + _retransmitTimer);
+            }
+        }
+        return rv;
+    }
+
+    /**
+     * @param canSendOld if any already sent messages can be sent.  If false, only new messages will be considered
+     */
+    private List<OutboundMessageState> allocateSend2(boolean canSendOld) {
         if (_dead) return null;
         List<OutboundMessageState> rv = null;
         synchronized (_outboundMessages) {
+            if (canSendOld) {
             for (OutboundMessageState state : _outboundMessages) {
                 // We have 3 return values, because if allocateSendingBytes() returns false,
                 // then we can stop iterating.
@@ -1686,18 +1717,22 @@ public class PeerState {
                     }
                      */
                     if (rv == null)
-                        rv = new ArrayList<OutboundMessageState>(MAX_ALLOCATE_SEND);
+                        rv = new ArrayList<OutboundMessageState>(_outboundMessages.size());
                     rv.add(state);
-                    if (rv.size() >= MAX_ALLOCATE_SEND)
+                    if (rv.size() >= _outboundMessages.size() / 2)
                         return rv;
                 } else if (should == ShouldSend.NO_BW) {
                     // no more bandwidth available
                     // we don't bother looking for a smaller msg that would fit.
                     // By not looking further, we keep strict sending order, and that allows
                     // some efficiency in acked() below.
-                    if (rv == null && _log.shouldLog(Log.DEBUG))
-                        _log.debug("Nothing to send (BW) to " + _remotePeer + ", with " + _outboundMessages.size() +
-                                   " / " + _outboundQueue.size() + " remaining");
+                    if (_log.shouldLog(Log.DEBUG)) {
+                        if (rv == null)
+                            _log.debug("Nothing to send (BW) to " + _remotePeer + ", with " + _outboundMessages.size() +
+                                       " / " + _outboundQueue.size() + " remaining");
+                        else 
+                           _log.debug(_remotePeer + " ran out of BW, but managed to send " + rv.size());
+                    }
                     return rv;
                 } /* else {
                     OutNetMessage msg = state.getMessage();
@@ -1705,7 +1740,7 @@ public class PeerState {
                         msg.timestamp("passed over for allocation with " + msgs.size() + " peers");
                 } */
             }
-
+            }
             // Peek at head of _outboundQueue and see if we can send it.
             // If so, pull it off, put it in _outbundMessages, test
             // again for bandwidth if necessary, and return it.
@@ -1729,9 +1764,9 @@ public class PeerState {
                         if (_log.shouldLog(Log.DEBUG))
                             _log.debug("Allocate sending (NEW) to " + _remotePeer + ": " + dequeuedState.getMessageId());
                         if (rv == null)
-                            rv = new ArrayList<OutboundMessageState>(MAX_ALLOCATE_SEND);
+                            rv = new ArrayList<OutboundMessageState>(_concurrentMessagesAllowed);
                         rv.add(dequeuedState);
-                        if (rv.size() >= MAX_ALLOCATE_SEND)
+                        if (rv.size() >= _concurrentMessagesAllowed)
                             return rv;
                     }
                 }
@@ -1739,7 +1774,8 @@ public class PeerState {
         }
         if ( rv == null && _log.shouldLog(Log.DEBUG))
             _log.debug("Nothing to send to " + _remotePeer + ", with " + _outboundMessages.size() +
-                       " / " + _outboundQueue.size() + " remaining");
+                       " / " + _outboundQueue.size() + " remaining, rtx timer in " + (_retransmitTimer - _context.clock().now()));
+
         return rv;
     }
     
@@ -1755,23 +1791,10 @@ public class PeerState {
         int rv = Integer.MAX_VALUE;
         if (_dead) return rv;
         long now = _context.clock().now();
-        synchronized (_outboundMessages) {
-            if (_retransmitter != null) {
-                rv = (int)(_retransmitter.getNextSendTime() - now);
-                return rv;
-            }
-            for (OutboundMessageState state : _outboundMessages) {
-                int delay = (int)(state.getNextSendTime() - now);
-                // short circuit once we hit something ready to go
-                if (delay <= 0)
-                    return delay;
-                if (delay < rv)
-                    rv = delay;
-            }
+        synchronized(this) {
+            if (_retransmitTimer >= now) 
+                return (int) (_retransmitTimer - now);
         }
-        // failsafe... is this OK?
-        if (rv > 100 && !_outboundQueue.isEmpty())
-            rv = 100;
         return rv;
     }
 
@@ -1827,52 +1850,19 @@ public class PeerState {
      */
     private ShouldSend locked_shouldSend(OutboundMessageState state) {
         long now = _context.clock().now();
-        if (state.getNextSendTime() <= now) {
-            OutboundMessageState retrans = _retransmitter;
-            if ( (retrans != null) && ( (retrans.isExpired(now) || retrans.isComplete()) ) ) {
-                _retransmitter = null;
-                retrans = null;
-	    }
-            
-            if ( (retrans != null) && (retrans != state) ) {
-                // choke it, since there's already another message retransmitting to this
-                // peer.
-                _context.statManager().addRateData("udp.blockedRetransmissions", _packetsRetransmitted);
-                int max = state.getMaxSends();
-                if ( (max <= 0) && (!THROTTLE_INITIAL_SEND) ) {
-                    //if (state.getMessage() != null)
-                    //    state.getMessage().timestamp("another message is retransmitting, but we want to send our first volley...");
-                } else if ( (max <= 0) || (THROTTLE_RESENDS) ) {
-                    //if (state.getMessage() != null)
-                    //    state.getMessage().timestamp("choked, with another message retransmitting");
-                    return ShouldSend.NO;
-                } else {
-                    //if (state.getMessage() != null)
-                    //    state.getMessage().timestamp("another message is retransmitting, but since we've already begun sending...");                    
-                }
-            }
-
             int size = state.getUnackedSize();
             if (allocateSendingBytes(size, state.getPushCount())) {
                 if (_log.shouldLog(Log.DEBUG))
-                    _log.debug("Allocation of " + size + " allowed with " 
+                    _log.debug(_remotePeer + " Allocation of " + size + " allowed with " 
                               + getSendWindowBytesRemaining() 
                               + "/" + getSendWindowBytes() 
                               + " remaining"
                               + " for message " + state.getMessageId() + ": " + state);
 
-                int rto = getRTO();
-                if (state.getPushCount() > 0) {
-                    _retransmitter = state;
-                    rto = Math.min(MAX_RTO, rto << state.getPushCount()); // Section 5.5 RFC 6298
-                }
 
                 if (state.push())
                     _messagesSent++;
             
-                // messages with multiple fragments need more time
-                state.setNextSendTime(now + rto + ((state.getFragmentCount() - 1) * ACKSender.ACK_FREQUENCY));
-
                 //if (peer.getSendWindowBytesRemaining() > 0)
                 //    _throttle.unchoke(peer.getRemotePeer());
                 return ShouldSend.YES;
@@ -1881,13 +1871,9 @@ public class PeerState {
                 //if (state.getMessage() != null)
                 //    state.getMessage().timestamp("send rejected, available=" + getSendWindowBytesRemaining());
                 if (_log.shouldLog(Log.INFO))
-                    _log.info("Allocation of " + size + " rejected w/ wsize=" + getSendWindowBytes()
+                    _log.info(_remotePeer + " Allocation of " + size + " rejected w/ wsize=" + getSendWindowBytes()
                               + " available=" + getSendWindowBytesRemaining()
                               + " for message " + state.getMessageId() + ": " + state);
-                state.setNextSendTime(now + (ACKSender.ACK_FREQUENCY / 2) +
-                                      _context.random().nextInt(ACKSender.ACK_FREQUENCY)); //(now + 1024) & ~SECOND_MASK);
-                if (_log.shouldLog(Log.INFO))
-                    _log.info("Retransmit after choke for next send time in " + (state.getNextSendTime()-now) + "ms");
                 //_throttle.choke(peer.getRemotePeer());
 
                 //if (state.getMessage() != null)
@@ -1896,9 +1882,7 @@ public class PeerState {
                 //                                 + getSendWindowBytesRemaining());
                 return ShouldSend.NO_BW;
             }
-        } // nextTime <= now 
 
-        return ShouldSend.NO;
     }
     
     /**
@@ -1910,6 +1894,7 @@ public class PeerState {
     boolean acked(long messageId) {
         if (_dead) return false;
         OutboundMessageState state = null;
+        boolean anyPending;
         synchronized (_outboundMessages) {
             for (Iterator<OutboundMessageState> iter = _outboundMessages.iterator(); iter.hasNext(); ) {
                 state = iter.next();
@@ -1925,8 +1910,7 @@ public class PeerState {
                     state = null;
                 }
             }
-            if ( (state != null) && (state == _retransmitter) )
-                _retransmitter = null;
+            anyPending = !_outboundMessages.isEmpty();
         }
         
         if (state != null) {
@@ -1948,7 +1932,7 @@ public class PeerState {
             _context.statManager().addRateData("udp.sendConfirmVolley", numSends);
             _transport.succeeded(state);
             // this adjusts the rtt/rto/window/etc
-            messageACKed(state.getMessageSize(), state.getLifetime(), numSends);
+            messageACKed(state.getMessageSize(), state.getLifetime(), numSends, anyPending);
             //if (getSendWindowBytesRemaining() > 0)
             //    _throttle.unchoke(peer.getRemotePeer());
             
@@ -1976,6 +1960,7 @@ public class PeerState {
     
         OutboundMessageState state = null;
         boolean isComplete = false;
+        boolean anyPending;
         synchronized (_outboundMessages) {
             for (Iterator<OutboundMessageState> iter = _outboundMessages.iterator(); iter.hasNext(); ) {
                 state = iter.next();
@@ -1984,8 +1969,6 @@ public class PeerState {
                     if (complete) {
                         isComplete = true;
                         iter.remove();
-                        if (state == _retransmitter)
-                            _retransmitter = null;
                     }
                     break;
                 } else if (state.getPushCount() <= 0) {
@@ -1997,6 +1980,7 @@ public class PeerState {
                     state = null;
                 }
             }
+            anyPending = !_outboundMessages.isEmpty();
         }
         
         if (state != null) {
@@ -2020,7 +2004,7 @@ public class PeerState {
                 _transport.succeeded(state);
                 
                 // this adjusts the rtt/rto/window/etc
-                messageACKed(state.getMessageSize(), state.getLifetime(), numSends);
+                messageACKed(state.getMessageSize(), state.getLifetime(), numSends, anyPending);
                 //if (state.getPeer().getSendWindowBytesRemaining() > 0)
                 //    _throttle.unchoke(state.getPeer().getRemotePeer());
 
@@ -2085,13 +2069,10 @@ public class PeerState {
         synchronized (oldPeer._outboundMessages) {
             tmp2.addAll(oldPeer._outboundMessages);
             oldPeer._outboundMessages.clear();
-            retransmitter = oldPeer._retransmitter;
-            oldPeer._retransmitter = null;
         }
         if (!_dead) {
             synchronized (_outboundMessages) {
                 _outboundMessages.addAll(tmp2);
-                _retransmitter = retransmitter;
             }
         }
     }
diff --git a/router/java/src/net/i2p/router/transport/udp/UDPTransport.java b/router/java/src/net/i2p/router/transport/udp/UDPTransport.java
index ad841f374..22a263843 100644
--- a/router/java/src/net/i2p/router/transport/udp/UDPTransport.java
+++ b/router/java/src/net/i2p/router/transport/udp/UDPTransport.java
@@ -341,6 +341,10 @@ public class UDPTransport extends TransportImpl implements TimedWeightedPriority
 
         _context.simpleTimer2().addPeriodicEvent(new PingIntroducers(), MIN_EXPIRE_TIMEOUT * 3 / 4);
     }
+
+    OutboundMessageFragments getOMF() {
+        return _fragments;
+    }
     
     /**
      *  Pick a port if not previously configured, so that TransportManager may

Attachments (1)

ssu_streaming_in_loss.ods (22.1 KB) - added by Zlatin Balevsky 7 weeks ago.
Results from testing the ssu and streaming patches in the testnet with various packet loss probabilities.


Change History (7)

comment:1 Changed 7 weeks ago by Zlatin Balevsky

Description: modified (diff)

comment:2 Changed 7 weeks ago by Zlatin Balevsky

Description: modified (diff)

comment:3 Changed 7 weeks ago by Zlatin Balevsky

Description: modified (diff)

comment:4 Changed 7 weeks ago by Zlatin Balevsky

Description: modified (diff)

comment:5 Changed 7 weeks ago by Zlatin Balevsky

Description: modified (diff)

Changed 7 weeks ago by Zlatin Balevsky

Attachment: ssu_streaming_in_loss.ods added

Results from testing the ssu and streaming patches in the testnet with various packet loss probabilities.

comment:6 Changed 7 weeks ago by zzz

This patch conflicts with the pending rolled-up jogger patch (a 4330-line preliminary version received January 23, 2020), which purports to fix the following tickets:

#657 #774 #1613 #2259 #2386 #2412 #2427 #2506 #2512 #2576 #2609 #2613 #2640 #2641 #2642 #2646 #2647 #2648 #2649 #2650 #2653 #2654 #2655 #2656 #2657 #2658 #2660 #2664 #2668 #2675

To date we have not come to an agreement on that patch: we have not agreed to take it as-is, and he has not agreed to split it up into smaller patches.

It is worth reviewing his patch for an alternative way of fixing this.
Accepting the patch here would probably seal a decision not to take the jogger patch.

It's also worth identifying which of the above tickets may overlap with this one.
