Opened 6 years ago

Closed 6 years ago

#1042 closed defect (fixed)

Streaming transition from TIME-WAIT to removal

Reported by: zzz
Owned by: zzz
Priority: minor
Milestone: 0.9.9
Component: streaming
Version: 0.9.7.1
Keywords:
Cc: zab@…
Parent Tickets:

Description

In TCP terminology, the Connection goes into TIME-WAIT state when it's closed, in case things need to be retransmitted or re-acked. But we never get out of that state, even when we know everything is acked. So the connection sits there eating memory until the DisconnectTimer goes off.

Need logic to determine when we are really done so the resources can be freed. Note that there are two cases, one where we got the close first and one where we sent it first. Knowing all our packets were acked is easy. Knowing that he got all our acks is harder. Note also that any half-close fixes may affect this ticket.

http://pastethis.i2p/show/wIHnFl1IEkhSdhxKxWdH/

related: #1039 #1040 #1041

ref: RFC 793 http://tools.ietf.org/html/rfc793
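
For the easy case (they closed first), a minimal sketch of a "really done" test; the accessor names on Connection are guesses, and the pending patch in comment:5 implements essentially this check:

    // Sketch only -- accessor names are guesses at the Connection API.
    // Case: they sent the first CLOSE. Once the CLOSE we sent in response
    // is acked, nothing more can arrive that needs re-acking, so the
    // connection can be freed immediately instead of waiting for the
    // DisconnectTimer.
    boolean reallyDone(Connection c) {
        long cso = c.getCloseSentOn();
        long cro = c.getCloseReceivedOn();
        return cro > 0 && cso > cro             // got their CLOSE, then sent ours
            && c.getUnackedPacketsSent() <= 0;  // and everything we sent is acked
    }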

Subtickets

Change History (13)

comment:1 Changed 6 years ago by zzz

<zzz> streaming notes http://pastethis.i2p/show/BkyfYfreJXQgRCeeHyCG/
<RSS> Title: Paste #BkyfYfreJXQgRCeeHyCG | LodgeIt! (at pastethis.i2p)
<zzz> if you send the first CLOSE, not sure you can ever know when to stop.
<zzz> if you send your close after receiving his, you should be able to stop as soon as your close is acked.
<zab> if we're the http server, why do we need to know that the client has received our ack of their close?
<zab> (we're the http server and sender of first close on connection)
<zzz> because if they don't get our ack, they will retransmit the close, and we (should?) retransmit the ack
<zzz> the server side is the important side to fix, but it's the "hard" side. However if we implement half-close, the client becomes the "hard" side
<zzz> you think we don't need to stick around to resend the ack? maybe not...
<zab> ok, so we hang around for 1 or 2 x RTO to see if the client retx's their close
<zab> if we don't get a retx of the close we assume the client got our ack
<zzz> we only know our RTO, not his
<zab> the two "should" be very close if not the same (guessing)
<zzz> sure they are correlated but correlated != "very close"
<zab> ok we know the max RTO is 45000, we can use that to be extra sure
<zab> my point here is we do not retx our ack of their close unless they retx their close
<zzz> http://tools.ietf.org/html/rfc793 page 22
<RSS> Title: RFC 793 - Transmission Control Protocol (at tools.ietf.org)
<zzz> seems to confirm
<zzz> if you send the first close, you go thru TIME-WAIT. If you rcv the first close, you can go straight to CLOSED
<zab> do you send anything while in TIME-WAIT?

  • zab still loading the ietf page

<zzz> checking...
<zzz> TIME-WAIT STATE
<zzz> The only thing that can arrive in this state is a
<zzz> retransmission of the remote FIN. Acknowledge it, and restart
<zzz> the 2 MSL timeout.
<zzz> (p. 72)
<zzz> MSL
<zzz> Maximum Segment Lifetime, the time a TCP segment can exist in
<zzz> the internetwork system. Arbitrarily defined to be 2 minutes.
<zzz> so TIME-WAIT is a fixed 4 minutes
<zab> If nothing arrives it should end in 2 minutes, no?
<zab> the timer only gets reset if the FIN is retx'd
<zzz> "2 MSL" where MSL == 2 minutes
<zab> oh
<zab> I wonder why
<zab> what exactly is the diff between LAST-ACK and TIME-WAIT?
<zzz> LAST-ACK just waits for the ack of the fin, then you are done.
<zzz> TIME-WAIT is a 4 minute purgatory, you cannot escape early
<zzz> that's THE important difference between the left side (sent close first) and the right side (rcvd close first)
<zab> so LAST-ACK is the receiving close side, ok
<zzz> correct
<zab> that purgatory duration seems totally made up. I see no reason it should not be based on RTT or RTO
<zzz> maybe. but that's just a detail. whether it's 2 minutes or 4 or 5, it's still a hog.
<zzz> ofc we have to check if we do re-ack the close...
<zab> This must be a big problem for many websites, I don't believe they're all keeping state 4 minutes after pageview
<zzz> I think that's what half-close does for you. The burden goes to the client.
<zzz> so it's a two-stage fix. First, we have to avoid TIME-WAIT if we sent the first close. That fixes it for clients.
<zzz> Second, we implement half-close. That shifts the burden from the servers to the clients.
<zab> the second change undoes the first one
<zzz> no. it moves the fix from the client to the server
<zzz> gonna paste all this into the ticket
<zab> but the client will enter TIME-WAIT after it receives the close from the server which is what the first stage avoids
<zzz> see? if the server receives the first close, it can do a clean shutdown, following the right side of the state diagram
<zzz> only one side can avoid TIME-WAIT
<zzz> right now, neither does
<zab> that's why I'm saying stage 2 undoes what stage 1 will do
<zzz> stage 1 allows the client to avoid TIME-WAIT . stage 2 allows the server to avoid TIME-WAIT but undoes it for the client.
<zzz> stage 2 is better because you want to minimize resources on the server
<zab> yeah
<zab> so why bother with stage 1 at all?
<zzz> they are relatively independent tasks
<zzz> and could be done in either order
<zzz> I suspect stage 1 is easier is why I listed it first
<zab> another question is how to deploy stage 2
<zab> what happens if a client sends SYN-GET-CLOSE now?
<zzz> good q's. don't know.
<zab> it should be possible for a stage-2 capable server to work with current code clients
<zab> if not it becomes a very painful upgrade process
<zzz> there may be problems in both streaming and i2ptunnel, on both the client and server sides. I really don't know.
<zzz> stage 1 at least provides network-wide goodness, even if it optimizes the wrong side
<zzz> there's very few eepsites with enough traffic for this to matter
<zzz> the conn limit issue can be fixed easily, just by replacing isConnected() with closeSendTime() > 0 && closeRecvTime() > 0 or similar
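
A sketch of that conn-limit idea, assuming (hypothetically) that closeSendTime() and closeRecvTime() return 0 until the corresponding CLOSE has happened:

    // Sketch only: count a connection against the conn limit while it is
    // still open in either direction; a fully-closed conn that is merely
    // waiting out its teardown timer would no longer count.
    boolean countsAgainstConnLimit(Connection c) {
        return !(c.closeSendTime() > 0 && c.closeRecvTime() > 0);
    }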

comment:2 Changed 6 years ago by zzz

<zzz> I don't know if we ever did SYN+CLOSE, or if it was changed on purpose or accidentally
<zzz> SYN-GET-CLOSE may work now as long as the data is totally received before the 5 minute teardown. or maybe not.
<zab> are you sure you won't need to reschedule(0) the DisconnectEvents?
<zzz> not sure about anything...
<zab> I'd favor a fix where the connection state is an explicit enum (that possibly matches the state in the rfc)
<zab> I'm wary of the jrandom style of multiple variables that do sort of similar things
<zab> (states in the rfc)
<zzz> agreed in a fantasy world. In the real world that might be too hard or too disruptive. maybe.
<zzz> or maybe that's the only real world fix, to refactor it into states before doing anything else.
<zab> SchedulerDead?
<zab> the DISCONNECT_TIMEOUT constant is used in several schedulers
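
A sketch of the explicit-enum idea, using the RFC 793 state names for the two close paths discussed in comment:1 (hypothetical; today's code tracks this with several separate variables):

    // Sketch only -- the streaming code does not have this enum today.
    enum CloseState {
        ESTABLISHED,
        // Left side of the 793 diagram: we sent the first CLOSE.
        FIN_WAIT_1,   // CLOSE sent, waiting for its ack
        FIN_WAIT_2,   // our CLOSE acked, waiting for their CLOSE
        TIME_WAIT,    // both CLOSEs done; linger 2 x MSL (4 minutes)
        // Right side: we received the first CLOSE.
        CLOSE_WAIT,   // got their CLOSE, ours not sent yet
        LAST_ACK,     // sent our CLOSE after theirs, waiting for its ack
        CLOSED        // done; all resources can be freed
    }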

comment:3 Changed 6 years ago by zab

  • Cc zab@… added

comment:4 Changed 6 years ago by zzz

  • Owner set to zzz
  • Status changed from new to accepted

Half-close is not mentioned in RFC 793; see RFC 1122 sec. 4.2.2.13 and the 2nd answer at http://superuser.com/questions/298919/what-is-tcp-half-open-connection-and-tcp-half-closed-connection

For either branch of the state diagram in RFC 793 Figure 6, we must notify the Connection when our close is acked. Stubbed out in i2p.i2p.zzz.test2 39fea4d5b98669dcc2a837d1f4f9e31a45a55e19, although it doesn't yet handle outstanding NACKs.

More to do.

comment:5 Changed 6 years ago by zzz

We can go directly to CLOSED, skipping TIME-WAIT, as follows (portion of pending patch):

            long cso = _closeSentOn.get();
            long cro = _closeReceivedOn.get();
            if (cro > 0 && cro < cso && getUnackedPacketsSent() <= 0) {
                if (_log.shouldLog(Log.INFO))
                    _log.info("Rcv close -> send close -> last acked, skip TIME-WAIT for " + toString(), new Exception());
                // They sent the first CLOSE.
                // We do not need to enter TIME-WAIT, we are done.
                // clean disconnect, don't schedule TIME-WAIT
                // remove conn
                disconnectComplete();
            } else {
                scheduleDisconnectEvent();
            }

comment:6 follow-up: Changed 6 years ago by zab

According to the RFC, if (cro > 0 && cro < cso) we are in the LAST_ACK state. Doesn't that have different timeout requirements than TIME_WAIT? Are you sure we can reuse scheduleDisconnectEvent() in that state?

comment:7 in reply to: ↑ 6 Changed 6 years ago by zzz

Replying to zab:

According to the RFC

I'm not strictly mapping the 793 states to the code, for several reasons:

  • Simpler
  • 793 doesn't cover half-close
  • I'm not implementing a single state variable so it's hard to do a bunch of states. As we discussed above we may do it in the future.
  • Especially on the left side of the 793 pic, there are about 6 states I've mushed together into TIME_WAIT, more or less

But, back to the right side of the 793 pic, roughly speaking:

1) The (cro > 0 && cro < cso) gets you to LAST_ACK state;

2) the getUnackedPacketsSent() <= 0 gets you from LAST_ACK to CLOSED state.

If we haven't got the ack of their close yet, I'm trying not to call the above code yet. I think. Paste to follow.
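
In code form, the mapping above (hypothetical helpers, using the same variables as the comment:5 patch):

    // Sketch: the two patch conditions expressed as the 793 transitions
    // they approximate on the right side of the diagram.
    boolean inLastAck(long cso, long cro) {
        return cro > 0 && cro < cso;   // they closed first and we've sent our CLOSE
    }
    boolean lastAckDone(long cso, long cro, int unackedSent) {
        return inLastAck(cso, cro) && unackedSent <= 0;  // our CLOSE acked -> CLOSED
    }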

comment:8 Changed 6 years ago by zzz

Pending patch:

  • Streaming: Major rework of connection disconnect process. Tickets 1040-1042.
    • Prevent multiple calls or reentrancy in disconnect() (ticket #1041)
    • Implement processing of close to skip TIME-WAIT, and wait for all packets to be acked (not just the CLOSE) before doing so, if possible (ticket #1042)
    • Don't call disconnectComplete() from I2PSocketFull.destroy() so retransmissions and acks can still happen (removes some close loops)
    • Don't call disconnect() until we have both sent and received a CLOSE (ticket #1040)
    • Don't reject incoming packets in CPH just because we sent a CLOSE and it was acked (ticket #1040)
    • Retransmit CLOSE if not acked (ticket #1040)
    • Ack packets even if we sent a CLOSE (ticket #1040)
    • Send received packets to the MessageInputStream even if we haven't received a SYN
    • Don't send a RESET after timeout of an outbound connection
    • Make _isInbound final
    • Work around bugs on other end by limiting retransmission of CLOSE packets (see the sketch below)
    • Don't call MessageInputStream.messageReceived() for ack-only packets
    • More cleanups, javadocs, log tweaks

http://pastethis.i2p/show/3VzSz0NYz3joTMqlsBNr/
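
For the CLOSE-retransmission-limit bullet, a sketch of the idea (the cap value and names here are assumptions, not taken from the patch):

    // Sketch only: stop retransmitting CLOSE after a fixed number of
    // attempts, in case a buggy far end never acks it; give up and free
    // the connection rather than retransmitting forever.
    private static final int MAX_CLOSE_RESENDS = 3;  // assumed cap
    private int _closeResends;

    void maybeResendClose() {
        if (_closeResends++ < MAX_CLOSE_RESENDS)
            sendClose();           // retransmit and keep waiting for the ack
        else
            disconnectComplete();  // work around the buggy peer; tear down
    }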

comment:9 Changed 6 years ago by zzz

Fixed up after zab comments, checked in to i2p.i2p.zzz.test2 33d2af1d4e98089e8f260804254f5b5bca1dd0d5

Needs more testing

comment:14 Changed 6 years ago by zzz

Major changes propped as 0.9.8.1-2. Leaving open for more testing. To flip the fix from the client to the server we have to fix #1040.

comment:15 Changed 6 years ago by str4d

  • Keywords testing-needed added

comment:16 Changed 6 years ago by str4d

  • Keywords testing-needed removed
  • Status changed from accepted to testing

comment:17 Changed 6 years ago by zzz

  • Resolution set to fixed
  • Status changed from testing to closed

As I said above, this isn't a huge benefit on the client side; we need to fix #1040 to move the fix from the client to the server. Since we have that ticket, I'm closing this one.
