
Peer Profiling and Selection in I2P

PET-CON 2009.1
DRAFT 2009-01-19
zzz@i2pmail.org
Note - this looks like a one-hour talk, but the PET-CON limit is 10-15 minutes, so some editing is needed... Maybe this is the outline for a paper, not a 10-minute talk.

Overview of I2P

    - echelon does this part
  • Unidirectional tunnels, garlic (~= onion) routing, hidden services...
  • Started by jrandom in 2003; he vanished in late 2007
  • ~800 routers == peers == users at any given time
  • ~50-100 new routers/day
  • 7 releases in 2008 (pic from stats.i2p)
  • Total usage currently: ~10MBps, ~40K tunnel hops
  • Network growth: doubled in last 6 months (pic from stats.i2p)
  • Hidden services oriented - P2P friendly - only one HTTP outproxy (~= "exit node")

Tunnel Overview

  • Client tunnels - for all user-generated traffic
  • Exploratory tunnels - low bw, used by the router for tunnel builds, tunnel tests, netdb queries... and for "exploration", as we will see below...
  • Tunnel lifetime 10m
  • Peers for a tunnel are selected by the tunnel's creator
  • Strict ordering within a tunnel (predecessor attack)
  • Tunnel build request can be rejected or dropped for a number of reasons
  • Periodic testing of each tunnel (~1KB per minute; see the sketch below)
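
As a rough sketch of that last point (the names, types, and scheduler here are illustrative assumptions, not the router's actual API), each live tunnel gets a ~1KB test message about once a minute, and the result feeds the per-peer profiles described below:

    import java.util.List;
    import java.util.Timer;
    import java.util.TimerTask;

    // Illustrative types; the real router's tunnel and profile classes differ.
    interface Tunnel {
        boolean sendTest(int sizeBytes);  // true if the test message came back
        List<String> hops();              // the peers in this tunnel, in order
    }

    class TunnelTestTask extends TimerTask {
        private final List<Tunnel> liveTunnels;

        TunnelTestTask(List<Tunnel> liveTunnels) { this.liveTunnels = liveTunnels; }

        @Override public void run() {
            for (Tunnel t : liveTunnels) {
                boolean ok = t.sendTest(1024);      // ~1KB per tunnel per minute
                for (String peer : t.hops())
                    recordTestResult(peer, ok);     // feeds the peer profiles
            }
        }

        void recordTestResult(String peer, boolean ok) { /* update profile stats */ }
    }

    // Scheduling: new Timer(true).scheduleAtFixedRate(task, 0, 60_000);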

NetDB Overview

    - echelon will write this
  • RouterInfo for each router, and LeaseSet (~= "hidden service descriptor"), not discussed here
  • K/L/M/N/O bw classes ( <12 / 12+ / 32+ / 64+ / 128+ KBps of configured shared capacity) for each router
  • Information only, not used for any routing decisions...
  • ...Except that K routers are not used
  • Lots of stats in there for network debugging, but NOTHING is trusted or used except IP/port (and "K")
  • Serious anonymity and DOS implications if we trust claimed capacity
  • L is the default, so about 96.5% of routers are L-O, and route for others (pic from stats.i2p)
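
The class boundaries above can be summarized in a few lines; this is only an illustration of the thresholds, not the router's actual code:

    // Maps configured shared capacity (KBps) to the advertised bandwidth class.
    class BandwidthClass {
        static char of(int sharedKBps) {
            if (sharedKBps < 12)  return 'K';  // excluded from routing
            if (sharedKBps < 32)  return 'L';  // the default
            if (sharedKBps < 64)  return 'M';
            if (sharedKBps < 128) return 'N';
            return 'O';
        }
    }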

Peer Profiling and Tiers

  • Continuous gathering of stats, per-peer and per-tunnel
  • Keep totals and average of each stat for several time periods (1m, 10m, 1h, 1d)
  • Re-sort into tiers every 30s
  • Stats are persistent across restarts
  • 3 Tiers:
    • Not Failing Tier: pretty much everybody that is responsive; typically 300-500 peers
    • High Capacity Tier: peers that are Not Failing and have above-average tunnels-built-per-hour; typically 10-30 peers
    • Fast Tier: peers that are High Capacity and have above-average bandwidth-per-tunnel; typically 8-15 peers
  • Only our own traffic is used for these measurements, not routed traffic (we don't know whether the peer before or after us in a tunnel is a true participant or the originator or destination, so that data wouldn't be valid)
  • The two metrics above are relative, not absolute, since they depend on traffic
  • Each tier contains a minimum of 8 peers; peers are 'promoted' from the tier below if a tier is too small (sample table from profiles.jsp below; a sketch of the re-tiering pass follows it)
    (385 peers, hiding 51)

    Peer     Groups (Caps)                       Speed  Capacity  Integration  Failing?
    16BDe7   Fast, High Capacity (OR 0.6.5)  16,582.88     37.47         0.00
    2Mnb~b   Fast, High Capacity (OR 0.6.5)   2,944.37     40.74         0.00
    2hHaG5   Fast, High Capacity (OR 0.6.5)   7,103.61     34.30         0.00
    CIIICF   Fast, High Capacity (OR 0.6.5)  18,248.68     14.64       397.00
    KQ~Tdx   Fast, High Capacity (OR 0.6.5)   8,025.64      8.97         0.00
    M92fGW   Fast, High Capacity (OR 0.6.5)   2,623.86     44.22         0.00
    g247zU   Fast, High Capacity (OR 0.6.5)  18,113.82     82.02         0.00
    nF6ArZ   Fast, High Capacity (OR 0.6.5)   1,776.98     46.54         0.00

    1oDXZQ   High Capacity (OR 0.6.5)         3,722.18     10.45         0.00  4/20 Test Fails
    BE-r13   High Capacity (OR 0.6.5)         4,504.26     37.51         0.00  8/56 Test Fails
    LHxb70   High Capacity (LR 0.6.5)         2,459.38     19.63         0.00
    OPZ3i5   High Capacity (LR 0.6.5)           321.74      7.39         0.00
    UM5Wvv   High Capacity (NR 0.6.5)         2,137.30     11.03         0.00
    er9GCL   High Capacity (LR 0.6.5)           207.26      6.81         0.00
    vXE9dG   High Capacity (LR 0.6.5)           241.74     16.22         0.00
    xHKlu4   High Capacity (MR 0.6.5)             0.00      7.40         0.00

    -L9FoJ   Not Failing (OR 0.6.5)             626.84      5.21         0.00
    -Nvi-8   Not Failing (LR 0.6.5)               0.00      1.00         0.00
    -dlmDx   Not Failing (OR 0.6.5)           2,290.46      5.09         0.00
    -qO3ah   Not Failing (LU 0.6.5)               0.00      5.00         0.00
    -yUfot   Not Failing (KR 0.6.5)               0.00      5.00         0.00
    02YO6l   Not Failing (LR 0.6.5)               0.00      5.00         0.00  Unreachable
    064wSq   Not Failing (LU 0.6.5)               0.00      5.00         0.00
    08vuC5   Not Failing                          0.00      1.00         0.00  Unreachable
    0rmD6z   Not Failing (NR 0.6.5)           5,057.75      0.00         0.00  3/6 Test Fails
    0uOOg6   Not Failing (LR 0.6.5)              23.98      5.00         0.00
    0xl1nh   Not Failing (NR 0.6.5)               0.00      5.00         0.00
    12oGTF   Not Failing (LR 0.6.5)               0.00      5.00         0.00
    135LOS   Not Failing (LR 0.6.5)             154.58      5.60         0.00
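
The 30-second re-tiering pass might look roughly like the following; the Profile fields and the promote-best-first policy are simplifying assumptions, with only the tier rules and the minimum of 8 taken from the description above:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Simplified per-peer profile; the real profiles carry many more stats.
    class Profile {
        String peer;
        double speed;     // avg data per tunnel in the last minute (see Rating Details)
        double capacity;  // weighted successful builds per hour (see Rating Details)
    }

    class Tiers {
        static final int MIN_TIER = 8;
        List<Profile> notFailing, highCapacity, fast;

        // Called every 30 seconds with the profiles of all responsive peers.
        void reTier(List<Profile> responsive) {
            notFailing = new ArrayList<>(responsive);

            double avgCap = average(notFailing, true);
            highCapacity = new ArrayList<>();
            for (Profile p : notFailing)
                if (p.capacity > avgCap) highCapacity.add(p);
            promote(highCapacity, notFailing,
                    Comparator.comparingDouble((Profile p) -> p.capacity).reversed());

            double avgSpeed = average(highCapacity, false);
            fast = new ArrayList<>();
            for (Profile p : highCapacity)
                if (p.speed > avgSpeed) fast.add(p);
            promote(fast, highCapacity,
                    Comparator.comparingDouble((Profile p) -> p.speed).reversed());
        }

        // Top a tier up to MIN_TIER from the pool below, best-rated first.
        private static void promote(List<Profile> tier, List<Profile> pool,
                                    Comparator<Profile> bestFirst) {
            List<Profile> candidates = new ArrayList<>(pool);
            candidates.removeAll(tier);
            candidates.sort(bestFirst);
            for (Profile p : candidates) {
                if (tier.size() >= MIN_TIER) break;
                tier.add(p);
            }
        }

        private static double average(List<Profile> list, boolean byCapacity) {
            if (list.isEmpty()) return 0;
            double sum = 0;
            for (Profile p : list) sum += byCapacity ? p.capacity : p.speed;
            return sum / list.size();
        }
    }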

Rating Details

  • Speed - simple - the average amount of data per tunnel we have sent through the peer in the last minute
  • Capacity - not so simple - the weighted number of successful tunnel builds through the peer. Let r(t) be the successful builds per hour over time period t; then R = 4*r(10m) + 3*r(30m) + 2*r(1h) + r(1d) (a worked sketch follows this list)
  • Growth factor adds a little bit each time, so that new peers are tried
  • Not required for measurement: any special-purpose 'load'
  • Not used for rating: latency, ....
  • "Bonuses" can be used to manually adjust preferences for individual peers

Capacity: Crime, Blame, and Punishment

  • Raw build capacity isn't sufficient - we have to decrement for bad behavior, because tunnel build attempts are expensive
    • A peer that accepts 10/10 tunnels is better than one that accepts 10/100.
    • A peer that explicitly rejects a request is better than one that drops it.
    • A peer that accepts a request but later drops data through the tunnel is the worst
  • Punishment: Capacity is decremented for build request rejections, build request timeouts, and tunnel test failures.
  • Problem: We don't know who to blame when a request or tunnel test message is dropped
    • Naive solution: Don't bother blaming anybody. This doesn't work well at all!
    • Much better solution: Blame everybody, with a weighted value. The bad peers will be discovered over time.
  • Example 1: A tunnel build request expired (3 outbound hops A-B-C), and the reply was due back through our inbound exploratory tunnel (2 hops D-E). Blame A, B, and C with weight 1/3 each, and D and E with weight 1/2 each. Alternately: it was probably D's fault, because that is the usual failure point in I2P (the gateway of the inbound tunnel can't be reached), so blame D with weight 3/4 and E with weight 1/4. (pic needed here)
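
A sketch of the weighted-blame rule from Example 1 (hypothetical names; the accumulation and decay in the real profiles are more involved). One expired build is spread across every peer that might be at fault:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class Blame {
        // Accumulated failure weight per peer; bad peers rise over time.
        final Map<String, Double> failureWeight = new HashMap<>();

        // One expired build request: blame each outbound hop 1/N and each
        // inbound reply hop 1/M (e.g. A, B, C get 1/3; D, E get 1/2).
        void buildRequestExpired(List<String> outboundHops, List<String> inboundHops) {
            for (String peer : outboundHops)
                add(peer, 1.0 / outboundHops.size());
            for (String peer : inboundHops)
                add(peer, 1.0 / inboundHops.size());
        }

        // Alternate weighting for a 2-hop inbound tunnel: the inbound gateway
        // is the usual failure point, so give it 3/4 and the other hop 1/4.
        void buildRequestExpiredGatewayHeavy(List<String> outboundHops,
                                             String gateway, String otherHop) {
            for (String peer : outboundHops)
                add(peer, 1.0 / outboundHops.size());
            add(gateway, 0.75);
            add(otherHop, 0.25);
        }

        private void add(String peer, double weight) {
            failureWeight.merge(peer, weight, Double::sum);
        }
    }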

Peer Selection

  • Client tunnels from Fast tier
  • Exploratory tunnels from the Not Failing tier, but a varying percentage comes from the High Capacity tier, more when the build failure rate is high (maintains a minimum level of service while continuing to explore)
  • Equal weight within each tier (a selection sketch follows)
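
A selection sketch under the rules above; only the tier sources and the uniform pick within a tier come from the text, while the high-capacity percentage curve is an assumed placeholder:

    import java.util.List;
    import java.util.Random;

    class PeerSelector {
        final Random rnd = new Random();
        List<String> fast, highCapacity, notFailing;  // from the tier sort

        // Client tunnels: uniform pick from the Fast tier.
        String pickClientHop() {
            return pickFrom(fast);
        }

        // Exploratory tunnels: mostly Not Failing, but lean on High Capacity
        // when recent builds are failing, to keep a minimum level of service.
        String pickExploratoryHop(double recentBuildFailureRate) {
            double pHighCap = Math.min(0.75, recentBuildFailureRate);  // assumed curve
            return rnd.nextDouble() < pHighCap ? pickFrom(highCapacity)
                                               : pickFrom(notFailing);
        }

        // Equal weight within each tier.
        private String pickFrom(List<String> tier) {
            return tier.get(rnd.nextInt(tier.size()));
        }
    }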

How It Works, Lessons for Other Implementations

  • Great! The core system has been in place since the beginning of I2P, and it includes pretty much everything proposed in "A Tune-up for Tor"
  • Stability - tier membership doesn't change that often, since using a peer for tunnels tends to increase that peer's metrics, which keeps it in the pool. This is a desirable quality for anonymity.
  • Most client tunnels go through the highest-performance peers when the network is uncrowded; traffic "spreads out" to lower-performing peers when the network gets busy
  • The parameters, weighting, and calculations are VERY powerful "knobs" to turn; difficult to test and debug in a distributed network; impossible to fully optimize; can take months or years to get right
  • The basic measurements are much simpler than they used to be
  • The punishment is what keeps the network running well. How heavily you punish determines how fast the load spreads out across the network as it gets busy, and how quickly an overloaded peer gets avoided.
  • Start simple and get more complex only if necessary; you will probably have to simplify again later (for example, our speed calculation used to be pages and pages of code; now it is a one-liner)
  • You don't need to generate large amounts of special-purpose data for testing; normal traffic will do. Our exploratory tunnels average only 100KB in 10m (~150Bps), and our unused client tunnels average only 10KB in 10m from tunnel tests (~15Bps), but this is sufficient. When you are low-bandwidth, you don't care how accurately you sort into tiers; when you push a lot of data, your tiering will be better. Even very-low-bandwidth routers tend to accurately find fast peers, and are thus well-prepared when higher bandwidth is demanded.

References

  • Robin Snader and Nikita Borisov, "A Tune-up for Tor: Improving Security and Performance in the Tor Network", NDSS 2008, UIUC
  • http://www.i2p2.de/