Opened 2 years ago

Closed 2 years ago

#2438 closed defect (fixed)

Correct # of threads within NTCP Send Finisher

Reported by: jogger Owned by: zzz
Priority: major Milestone: 0.9.39
Component: router/transport Version: 0.9.38
Keywords: Cc:
Parent Tickets: Sensitive: no


Only MIN_THREADS (= 1) threads are created. The code should read:

             super(num, num, 10*1000, TimeUnit.MILLISECONDS,
                   new LinkedBlockingQueue<Runnable>(), new CustomThreadFactory());

This oversight is currently a performance bottleneck: the executor is put into wait() after each task, causing a context switch. Depending on hardware and OS, this limits the maximum TCP send rate to somewhat over 1000 packets per second (observed on Linux).
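The cap at corePoolSize can be shown with a small standalone sketch. The class name and the constants (core 1, max 4, 8 tasks, 10 s keep-alive) are illustrative, not the router's actual values:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizeDemo {
    // Submits 'tasks' blocking jobs, then reports how many threads the pool created.
    static int largestPoolSize(int core, int max, int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                core, max, 10 * 1000, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        int largest = pool.getLargestPoolSize();
        release.countDown();
        pool.shutdown();
        return largest;
    }

    public static void main(String[] args) throws InterruptedException {
        // With an unbounded queue, the pool never grows past corePoolSize:
        System.out.println(largestPoolSize(1, 4, 8)); // prints 1, not 4
    }
}
```

With corePoolSize = 1 all eight tasks funnel through a single thread and the rest sit in the queue, regardless of maximumPoolSize.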

The Javadocs also recommend newCachedThreadPool, but I don't think that would help here.

NTCP Writer does not have this problem because it loops over its pending work, so anything added during a context switch gets shot out once it gets hold of the CPU again.


Change History (3)

comment:1 Changed 2 years ago by jogger

I discovered this doing code analysis, but now that my router is up to speed I can confirm TCP traffic levels higher than ever before. The traffic spikes zab has seen must have been caused by #2412 alone.

comment:2 Changed 2 years ago by Zlatin Balevsky

From the Javadocs:

If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.

So the current code is creating a single core thread and all subsequent requests get queued on it.

Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn't have any effect.)
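A sketch of the proposed fix, passing the same value for corePoolSize and maximumPoolSize as in the ticket. The class name and the count of 4 are assumptions for illustration; note also a TPE subtlety: with core == max, the 10-second keep-alive only retires idle threads if allowCoreThreadTimeOut(true) is set.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedPoolDemo {
    static int largestWithEqualCoreMax(int num, int tasks) throws InterruptedException {
        // Proposed fix: corePoolSize == maximumPoolSize == num
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                num, num, 10 * 1000, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        // Without this call, core threads ignore the keep-alive and never time out:
        pool.allowCoreThreadTimeOut(true);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        int largest = pool.getLargestPoolSize();
        release.countDown();
        pool.shutdown();
        return largest;
    }

    public static void main(String[] args) throws InterruptedException {
        // All num core threads are created on demand, excess tasks queue up:
        System.out.println(largestWithEqualCoreMax(4, 8)); // prints 4
    }
}
```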

So you're right, it was probably intended to be a core pool size of 4. I don't, however, see where

the executor is put into wait()

In fact, if you look at the source of ThreadPoolExecutor::getTask, it calls LinkedBlockingQueue::take, which in turn calls "await" only if the queue is empty.
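That take() behavior is easy to check in isolation. A minimal sketch (class name is hypothetical): take() on a non-empty queue dequeues immediately; only an empty queue makes the caller park on the internal condition.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class TakeDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> q = new LinkedBlockingQueue<>();
        q.put("task");
        // Non-empty queue: take() returns at once, no await/park involved.
        System.out.println(q.take()); // prints "task"
        // Empty-queue case, using a bounded poll() so the demo doesn't hang:
        // the caller waits on the not-empty condition until timeout.
        System.out.println(q.poll(50, TimeUnit.MILLISECONDS)); // prints "null"
    }
}
```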

comment:3 Changed 2 years ago by zzz

Milestone: undecided → 0.9.39
Resolution: fixed
Status: new → closed

I hate the ThreadPoolExecutor API. It's caused us countless problems over the years. And the ScheduledThreadPoolExecutor is worse.
The line was changed from "num, num" to "MIN_THREADS, num" by me in April 2015.

In d505721e58c3b7bbcdde170cecf9ed3f2e1843d3 to be 0.9.38-11

Note: See TracTickets for help on using tickets.