Opened 7 years ago

Closed 7 years ago

#742 closed enhancement (wontfix)

"Idle" status based on OS user idle time

Reported by: Zlatin Balevsky Owned by:
Priority: minor Milestone:
Component: apps/i2ptunnel Version: 0.9.2
Keywords: usability jni idle Cc:
Parent Tickets: Sensitive: no

Description

Right now idle status is determined by whether the I2P router itself has been used. This is annoying because it takes a while to re-open the client tunnels, and eepsites sometimes time out.

A little JNI code and a little extra code abstraction can detect the last time the user interacted with the system in any way (moved the mouse, typed something) and solve this problem. This would increase load on the network a bit, but provide a better user experience. In the studies we did at LimeWire, the increase in network load was negligible.

I think I even have the code for Windows, OSX & Gnome somewhere. Headless & server deployments of course would continue using the in-router idle time.
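The abstraction described above could be sketched roughly as follows. This is a minimal illustration, not code from the ticket: the `IdleSource` interface and class names are hypothetical, and the JNI-backed OS detector is represented only as a pluggable source that the monitor prefers, falling back to the in-router traffic-based idle time on headless or server deployments (or whenever the native detector is unavailable).

```java
// Hypothetical sketch of the proposed abstraction: prefer an OS-level
// idle source (which would be JNI-backed per platform), and fall back
// to the router's own traffic-based idle measure when it is absent or
// reports failure. Names and structure are illustrative only.
public class IdleMonitor {

    /** Supplies milliseconds since the last user input, or -1 if unknown. */
    public interface IdleSource {
        long idleMillis();
    }

    private final IdleSource osSource;     // e.g. a JNI-backed detector, may be null
    private final IdleSource routerSource; // in-router traffic idle time

    public IdleMonitor(IdleSource osSource, IdleSource routerSource) {
        this.osSource = osSource;
        this.routerSource = routerSource;
    }

    /** Prefer the OS idle time; fall back to the router's measure. */
    public long idleMillis() {
        if (osSource != null) {
            long ms = osSource.idleMillis();
            if (ms >= 0) {
                return ms;
            }
        }
        return routerSource.idleMillis();
    }
}
```

On a headless deployment the monitor would simply be constructed with a `null` OS source, so behavior there is unchanged from today.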

Subtickets

Change History (3)

comment:1 Changed 7 years ago by zzz

Component: router/general → apps/i2ptunnel

Huh? idle status for what? i2ptunnel reduce-on-idle and close-on-idle for eepproxy?

Those are based not on 'whether i2prouter itself has been used' but on whether traffic has gone through that particular tunnel.

For eepproxy, reduce-on-idle is enabled by default and isn't really noticeable afaik. delay-open-until-required and close-on-idle are not enabled by default. If they are bugging you, disable them again, or make the idle timeout number big.

So your JNI is a complex solution for a problem only you have? Or are your settings the defaults and the 1-tunnel idle state really does cause delays?

And how does limewire's load have anything to do with the effect on i2p of lots more tunnels?

comment:2 in reply to:  1 Changed 7 years ago by DISABLED

Replying to zzz:

> Huh? idle status for what? i2ptunnel reduce-on-idle and close-on-idle for eepproxy?

Eepproxy.

> For eepproxy, reduce-on-idle is enabled by default and isn't really noticeable afaik.

I guess it became more noticeable when the tunnel creation success rates went down.

> So your JNI is a complex solution for a problem only you have? Or are your settings the defaults and the 1-tunnel idle state really does cause delays?

Definitely the tunnel creation in my case.

> And how does limewire's load have anything to do with the effect of i2p of lots more tunnels?

I mentioned that because it is reflective of user behavior: if a consumer desktop is idle, it will likely stay idle for a long time (e.g. when the user is not there). Hence, taking idle time from the OS does not increase the load on the network by too much, or, in i2p's case, the number of open tunnels.

— zab

comment:3 Changed 7 years ago by zzz

Resolution: wontfix
Status: new → closed

Interesting idea. But we're not going to do this. A complex and hackish solution for a minor issue. Things should work fine even with only one tunnel. If not, that's what we have to fix.

This ticket was also filed while the network was in bad shape due to a conn limit bug. Things are much better now.
