Opened 3 years ago

Last modified 8 months ago

#1689 assigned enhancement

Overhaul proxy error pages

Reported by: str4d Owned by: sadie
Priority: minor Milestone: eventually
Component: apps/i2ptunnel Version: 0.9.22
Keywords: usability error Cc:
Parent Tickets:

Description

<notxmz> i'd like to see something done about the destination unavailable error which sometimes shows even when the site is online
<notxmz> doesn't make for a good user experience
<str4d> notxmz: the error pages need significant work
<str4d> (also to improve their mobile theming)
<notxmz> do you think an auto retry on that specific error page would help temporarily?
<str4d> Maybe
<str4d> Part of the issue historically was that there wasn't much information provided to the client by the router across I2CP
<str4d> ie. there were many reasons why a page would be unavailable
<str4d> zzz did good work to improve error reporting across I2CP
<str4d> But the associated error pages he added need to be made clearer

Suggested improvements:

  • Better explanation of errors
    • Clearly show what the user should do next
  • Possible auto-retry (via meta refresh) where relevant (see the sketch after this list)
  • Better CSS for mobile
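
As a rough illustration of the meta-refresh idea above (a sketch only; the helper and its placement logic are hypothetical, not existing i2ptunnel code), the proxy could append a refresh tag to the error page for errors where a retry can plausibly succeed:

{{{
/**
 * Minimal sketch (hypothetical helper, not the actual i2ptunnel code):
 * conditionally append a meta-refresh tag to an error page so the browser
 * retries the requested URL after a delay.
 */
public class AutoRetrySketch {

    /** A delaySeconds of 0 or less disables auto-retry for this error type. */
    public static String appendAutoRetry(String errorPage, String requestedURL, int delaySeconds) {
        if (delaySeconds <= 0)
            return errorPage;
        String tag = "<meta http-equiv=\"refresh\" content=\"" + delaySeconds
                   + ";url=" + requestedURL + "\">\n";
        // Place the tag inside <head> when possible, otherwise prepend it
        int idx = errorPage.indexOf("</head>");
        return (idx >= 0)
                ? errorPage.substring(0, idx) + tag + errorPage.substring(idx)
                : tag + errorPage;
    }
}
}}}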


Attachments (1)

reportage-1689.patch (40.5 KB) - added by slumlord 9 months ago.
Patch by Reportage


Change History (12)

comment:1 Changed 3 years ago by str4d

Relevant source files:

  • apps/i2ptunnel/java/src/net/i2p/i2ptunnel/I2PTunnelHTTPClientBase.java
  • apps/i2ptunnel/java/src/net/i2p/i2ptunnel/localServer/LocalHTTPServer.java
  • installer/resources/proxy/*

comment:2 Changed 3 years ago by xmz

I referred to the wrong error when talking to str4d, but I think he understood what I meant. The issue is that a client's router is sometimes unable to contact a destination when trying to access a webpage, even though the destination is online and well integrated into the I2P network. I have run into this while actively browsing a site: it unexpectedly starts failing for a period of time before it resumes working, and although the site is up during this period, the client is unable to connect to it. This isn't desirable behaviour, in my opinion, and it harms the user experience.

I suggested an auto-refresh, but I don't know the backend state that causes this error, so an auto-refresh may or may not help. I think I2P should aim to avoid this problem as much as possible: if the destination is online, the client should be able to reach it in the large majority of connection attempts. I hate to compare I2P to Tor, but Tor handles this quite well; any hidden service is easily reachable within minutes of being placed on the Tor network.

comment:3 Changed 3 years ago by zzz

The proxy error pages were my very first patch for I2P, back in 2005, and I've resurrected the page documenting the patch at http://zzz.i2p/flock/index.html

My goal was to make the error pages 'friendly' like they were in Freenet, not 'scary'. Clearly we can still do better, and I'm supportive of more changes toward that goal.

A couple of responses to comments in IRC:

  • <PrivacyHawk> Privoxy's 503 error page is kinda cool, and informative.

Maybe informative, but I disagree about 'cool'. Definitely still scary. A neighbor came over to use my computer once, got the Privoxy page, and just about fell off the couch. He was really freaked out.

  • <str4d> One useful way forward would be to look at how different browsers present different kinds of network errors

Agreed, and I've done that before. There are some good models and some bad ones. In the last redo I tried to take terminology from browser pages; it's important to use terms and phrases that are familiar and analogous to clearnet.

  • re: auto retry (OP)

I'm skeptical about this. I think it would be confusing and could burn network resources if left unattended. I would like to see a big fat 'retry' button instead of the link, and to not have it be javascript:window.location.reload. That's a hack, used because the URL is not available from the .ht pages. A possible fix would be to have TranslateReader do variable substitution on the fly, or to "pipe" the output through some sort of replacement filter.

Any substitution method could also tackle the problem of hardcoded links to 127.0.0.1:7657 instead of the actual console host/port (a rough sketch of this idea follows below).

But auto-retry would be different from literally every browser and proxy out there today.
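
A rough sketch of the replacement-filter idea (hypothetical helper and placeholder names, not the actual TranslateReader code): read the .ht template and substitute variables such as the requested URL and the console host/port before sending the page to the browser.

{{{
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.Map;

/**
 * Sketch only: load an error-page template and replace %NAME% placeholders
 * with runtime values. The class, method, and placeholder names are
 * illustrative, not part of the existing i2ptunnel code.
 */
public class ErrorPageSubstituter {

    public static String substitute(InputStream template, Map<String, String> vars)
            throws IOException {
        // Read the whole .ht template into memory
        StringBuilder buf = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(template, StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null)
                buf.append(line).append('\n');
        }
        String page = buf.toString();
        for (Map.Entry<String, String> e : vars.entrySet()) {
            // e.g. "URL" -> the originally requested URL, so the retry button can be a
            // plain link instead of javascript:window.location.reload();
            // "CONSOLE" -> the real console host:port instead of hardcoded 127.0.0.1:7657
            page = page.replace("%" + e.getKey() + "%", e.getValue());
        }
        return page;
    }
}
}}}

Reading the whole template up front keeps the substitution simple; a streaming FilterReader would avoid buffering the page but is more involved.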

  • re: reliable errors (OP)

The goal in the OP is to eliminate transient errors: a site is either up or down, and if it's up we should always be able to connect and never show an error. That's not possible in any network, and by its nature I2P is going to have a lot more transient errors than clearnet. The best way to reduce transient errors is to improve the network in general, and we're working on that every release. Beyond that, all we have is the usual tradeoff between false positives and timeout settings.

The most important settings are the LS lookup timeout (and max search depth), and the overall load timeout once we get an LS. As an example of the tradeoff, if we're willing to wait 5 minutes before timing out and displaying an error page, we get a lot fewer false negatives (or would we call them false positives?).

One big decision point is whether to propagate an LS lookup failure back to the client as a hard failure, or to let streaming retransmit, which forces a retry. We tried the former, but it caused too many false positives. The network is in much better shape now, so maybe it's time to look at it again; see PacketQueue line 256 in streaming. From the OP's perspective, I think, changing this would make things worse.

The odd thing is that the situation is the opposite for b32 addresses: since the client must look up the destination before calling streaming, and there's no retry support, we fail fast for b32 if there's no LS. I believe others have noted this behavior, but I can't find a ticket on it.

comment:4 Changed 3 years ago by str4d

  • Status changed from new to open

comment:5 Changed 21 months ago by slumlord

Thanks for the detailed response, zzz

I agree, auto-retry would cause more problems than it solves.

From a usability perspective, a new I2P user may try to access sites, be met with the error page for an unreachable hidden service, and get annoyed or frustrated that "I2P isn't working". To be honest, my experience with I2P has been annoying and frustrating at times, especially when I'm deep into a task and a hidden service suddenly becomes unreachable, taking my work with it. I've learned to cope by always keeping a backup of any work, and even running multiple routers I can switch to if one of them can't contact the hidden service. The average user isn't going to do such things, however.

I've persevered through the flaky connections because I2P has a great community (hello IRC2P friends) dedicated to making this network better :)

We could perhaps add further information to the 'unreachable HS' error page, such as a suggestion to check whether the router is firewalled, and a note on how long it generally takes for a new, unfirewalled router to become well integrated into the network; that would let users know they should allow the router some time to "learn" about the network. Some simple graphics describing the situation could also help users who are unfamiliar with anonymous networks.

I'm still learning about I2P's internals, so I can't comment much on the technical aspects, but thank you for taking the time to write out the excellent explanation, zzz.

comment:6 Changed 14 months ago by zzz

  • Status changed from open to infoneeded

Need info from str4d: do the UI changes in 0.9.31 address this, or is there more to do?

comment:7 Changed 9 months ago by Reportage

  • Status changed from infoneeded to open

Patch to provide a theme-specific logo on the proxy error pages, and automatic retry for relevant errors on failure.


Thank you for your effort, Reportage. I am attaching this patch as a file. - slumlord

Last edited 9 months ago by slumlord

comment:8 Changed 9 months ago by Reportage

In use, the patch above automatically retries connecting to an eepsite where appropriate. Given the volatile nature of eepsite connectivity, where even sites with solid uptime can appear unavailable, automatic retries improve the user experience: no intervention is required beyond the first connection attempt when a site is having connectivity issues.

Requiring manual intervention to retry a failed connection is onerous for the user and, after a couple of failed attempts, often leads to the belief that the site is unavailable. In practice, automatic retries either reach the site when the problem is a transient network issue, or confirm that the site is indeed unavailable.

Changed 9 months ago by slumlord

Patch by Reportage

comment:9 Changed 8 months ago by zzz

  • Owner set to sadie
  • Status changed from open to assigned

For overall guidance, assigning to sadie to get input from usability studies.

Re: auto-retry (comments 7 and 8), we aren't going to do that, for the reasons given in comments 3 and 5.

comment:10 Changed 8 months ago by slumlord

Now that we have rate limits built into hidden services, I could see a situation where an auto-retry triggers a rate limit and keeps the user in that rate-limited state if the auto-retry isn't disabled.

comment:11 Changed 8 months ago by Reportage

The proxy response sets the polling period expressly to avoid being caught by rate limiting, though the actual period might be tweaked to minimize the chance of getting caught in a loop.

For 'Website unreachable' errors, the default polling period is 15 seconds after a failed attempt, which works out to something like one attempt every 25-30 seconds once you factor in the timeout.

For sites that reject the connection attempt, possibly because rate limiting is in effect, the patch currently uses a polling period of 90 seconds.
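
For illustration only (the error keys and helper below are hypothetical, not the attached patch), the interval selection described above could be expressed as:

{{{
/**
 * Illustrative sketch: choose a meta-refresh interval per error type,
 * using the values discussed in this comment.
 */
public class RetryIntervals {

    public static int refreshSeconds(String errorType) {
        if ("website-unreachable".equals(errorType))
            return 15;  // plus the connection timeout, roughly 25-30s between attempts
        if ("connection-rejected".equals(errorType))
            return 90;  // longer, to avoid tripping a hidden service's rate limit
        return 0;       // 0 = no automatic retry for other error types
    }
}
}}}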

As an alternative, I looked at implementing a JavaScript refresh mechanism, which would permit finer-grained control over both the interval and the number of polls, but given the reliance on JavaScript and the security risk that entails, I decided against it.

I'm not sure why automatic polling is considered so onerous on the network; it's only performing an HTTP GET request on an interval. Perhaps zzz can shed some light?
