Opened 6 years ago
Closed 4 years ago

#1071 closed defect (fixed)

Too much time in HTTP Server blockingHandle()

Reported by: zzz
Owned by:
Priority: maintenance
Milestone: 0.9.19
Component: apps/i2ptunnel
Version: 0.9.8.1
Keywords:
Cc:
Parent Tickets:
Sensitive: no

Description

Previous changes reduced the effects of slowloris/darkloris (search tickets for more info), but we are still seeing a lot of time stuck in blockingHandle(). Should we reduce the timeouts further? Why are we seeing this so often? Are we not correctly flushing / batching in the HTTP client? Or are the headers regularly exceeding 1730 bytes so that we lose one? Headers are not gzipped in this direction until I2CP. It is really bad to clog up the acceptor for this long. Maybe we need multiple acceptors for busy servers?
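
To illustrate the concern about clogging the acceptor, here is a minimal sketch in plain Java (hypothetical class and method names, not the actual I2PTunnelHTTPServer code): a single accept loop that only hands each connection off to a worker, so a slow header read cannot stall accept().

{{{
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: a single acceptor thread hands each connection off to
// a worker pool. If the header read ran inside the accept loop instead, one
// slow (slowloris-style) client would hold up every other connection.
public class AcceptorSketch {
    private final ExecutorService handlers = Executors.newCachedThreadPool();

    public void acceptLoop(ServerSocket server) throws IOException {
        while (true) {
            final Socket s = server.accept();          // fast: no parsing here
            handlers.execute(() -> blockingHandle(s)); // slow work off-thread
        }
    }

    // Stand-in for the real blockingHandle(): reads and forwards the request headers.
    private void blockingHandle(Socket s) {
        try (Socket sock = s) {
            sock.setSoTimeout(30_000); // bound how long a slow client can hold a worker
            // ... read request headers, start the runner threads, etc. ...
        } catch (IOException ignored) {
        }
    }
}
}}}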

Following is a sample with logging tweaks in branch i2p.i2p.zzz.test2:

10/07 18:06:42.432 WARN  [7.0.0.1:xxxx] .i2ptunnel.I2PTunnelHTTPServer: Took a while to handle the request for /127.0.0.1:xxxx [2871, read headers: 2870, socket create: 0, start runners: 1]
10/07 18:29:52.989 WARN  [7.0.0.1:xxxx] .i2ptunnel.I2PTunnelHTTPServer: Took a while to handle the request for /127.0.0.1:xxxx [17300, read headers: 17299, socket create: 0, start runners: 1]
10/07 18:30:37.081 WARN  [7.0.0.1:xxxx] .i2ptunnel.I2PTunnelHTTPServer: Took a while to handle the request for /127.0.0.1:xxxx [16759, read headers: 16758, socket create: 1, start runners: 0]
10/07 18:37:32.175 WARN  [7.0.0.1:xxxx] .i2ptunnel.I2PTunnelHTTPServer: Took a while to handle the request for /127.0.0.1:xxxx [4296, read headers: 4294, socket create: 1, start runners: 1]

644 such warnings were logged in 5 days (multiple servers); the highest total time was 45166 ms.

Subtickets

Change History (3)

comment:1 Changed 6 years ago by zzz

Milestone: 0.9.10
Priority: minor → maintenance

We have a pool of up to 65 threads per destination to run blockingHandle(), so this is really not a problem. We still need to investigate why it happens, but the speculation above that we need multiple acceptors is wrong; we are fine. Reducing priority.
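
For illustration only, a hedged sketch of what such a bounded per-destination handler pool could look like (assumed construction and rejection policy; the real I2P pool may be configured differently):

{{{
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class HandlerPoolSketch {
    // Hypothetical sketch of the bounded handler pool described above:
    // at most 65 concurrent blockingHandle() calls per destination,
    // idle threads reaped after 60 seconds. The real I2P pool may differ.
    static ThreadPoolExecutor newHandlerPool() {
        return new ThreadPoolExecutor(
                0, 65,                      // core 0, max 65 handler threads
                60, TimeUnit.SECONDS,       // idle handler threads exit after a minute
                new SynchronousQueue<>());  // direct hand-off: new work is rejected when all 65 are busy
    }
}
}}}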

comment:2 Changed 6 years ago by zzz

One problem was that we didn't start the timer until after we read the first line. Fixed in 30f8db8d667e4b48250fa95976a2532973d14802 (0.9.8.1-6).
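
As a small illustration of that fix (hypothetical helper, not the committed change itself), the timestamp is taken before the first read so the wait for the first request line is included in the measurement:

{{{
import java.io.BufferedReader;
import java.io.IOException;

public class FirstLineTimingSketch {
    // Hypothetical illustration: the timestamp is captured before any I/O,
    // so time spent waiting for the first header line is counted, instead of
    // starting the clock only after that line has already arrived.
    static long timeHeaderStart(BufferedReader in) throws IOException {
        long start = System.currentTimeMillis();   // before the first read
        String requestLine = in.readLine();        // may block while a slow client trickles bytes
        return System.currentTimeMillis() - start; // includes the initial wait
    }
}
}}}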

comment:3 Changed 4 years ago by zzz

Milestone: 0.9.19
Resolution: fixed
Status: new → closed

There were several remaining problems with intra-line timeouts and enforcement of the total timeout, caused by the use of DataHelper.readLine(). The same issue existed in IRCServer, but only in non-webirc mode, which nobody uses anymore. All were fixed shortly before the 0.9.19 release. First-line and total header timeouts should now be strictly enforced. #335 also contributed to the problem and was fixed.

The first-line timeout of 15 seconds and the total timeout of 30 seconds should be sufficient for most uses, but they are not configurable. The 65-thread limit is configurable. If anybody runs into trouble we could add configuration options for the timeouts.
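
For reference, a hedged sketch (assumed names, plain java.net sockets, not the actual DataHelper/I2PTunnel code) of how a 15-second first-line timeout and a 30-second total header timeout can both be enforced by shrinking the socket timeout to the remaining budget before each line:

{{{
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of strict first-line and total header timeouts. SO_TIMEOUT
// is reduced to the remaining budget before each line, and the deadline is
// checked after every line so a slow sender cannot exceed the total limit
// by trickling one header at a time.
public class HeaderTimeoutSketch {
    static final int FIRST_LINE_TIMEOUT = 15_000; // ms, as described above
    static final int TOTAL_TIMEOUT      = 30_000; // ms

    static List<String> readHeaders(Socket sock) throws IOException {
        List<String> headers = new ArrayList<>();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(sock.getInputStream(), StandardCharsets.ISO_8859_1));
        long deadline = System.currentTimeMillis() + TOTAL_TIMEOUT;

        sock.setSoTimeout(FIRST_LINE_TIMEOUT);      // bound the wait for the request line
        String line = in.readLine();
        while (line != null && !line.isEmpty()) {
            headers.add(line);
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0)
                throw new IOException("headers not complete within " + TOTAL_TIMEOUT + " ms");
            sock.setSoTimeout((int) remaining);     // never wait past the total deadline
            line = in.readLine();
        }
        return headers;
    }
}
}}}

Note that SO_TIMEOUT only bounds each individual blocking read, so a client trickling single bytes within one line can still stretch it; covering that intra-line case is what the readLine() changes mentioned above address.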
