Repeating the Web's Mistakes (was gemini+submit:// (was Re: Uploading Gemini content))

solderpunk solderpunk at SDF.ORG
Wed Jun 17 14:28:05 BST 2020


On Sun, Jun 14, 2020 at 08:57:57PM +0000, defdefred wrote:
> On Sunday, June 14, 2020 3:22 AM, Sean Conner <sean at conman.org> wrote:
> > [3] http://boston.conman.org/2019/07/09-12
> > http://boston.conman.org/2019/08/06.2
> 
> Should we deduce that a significant part of internet traffic is fake requests?
> That's a shame, considering the environmental impact of the digital world.
> Maybe blocking all these non-human requests is the solution?

It's true that this is a shame.  As Sean says, however, it's extremely
difficult to actually block all non-human requests.

I am sensitive to this issue, and I hope that, as part of the general
emphasis on being small and simple, the Gemini community can also help
foster a culture of not treating the internet as an ephemeral magic
thing with no physical impact.  Non-human traffic is not evil and
can serve a good purpose, but we should be careful with it.

In some ways, Gemini is disadvantaged here by its lack of facilities
for things like conditional fetching.  If we make a norm of using small
self-signed certificates with elliptic curve ciphers, and of supporting
TLS session resumption, we might be able to get request overhead down
to the point where clients could query a well-known endpoint for the
time of last change of a resource without it actually being a losing
proposition most of the time.
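To make that concrete, here is a rough Python sketch of the idea.  The
example.org host and the ".well-known/last-modified" path are purely
hypothetical -- no such endpoint is specified anywhere -- but the
session resumption part uses Python's real ssl module, and it's the
cheap resumed second connection that would make this kind of polling
plausible:

    import socket
    import ssl

    HOST = "example.org"   # hypothetical host
    PORT = 1965            # standard Gemini port

    # Gemini servers typically use self-signed certs, so ordinary CA
    # verification is disabled in this sketch; a real client should pin
    # certificates (trust on first use) instead.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    def fetch(url, session=None):
        # One Gemini request: send the URL, read to EOF, and return
        # the response plus the TLS session object for later reuse.
        with socket.create_connection((HOST, PORT)) as sock:
            with context.wrap_socket(sock, server_hostname=HOST,
                                     session=session) as tls:
                tls.sendall((url + "\r\n").encode("utf-8"))
                data = b""
                while chunk := tls.recv(4096):
                    data += chunk
                return data, tls.session

    # The first request pays for a full handshake and saves the session.
    _, session = fetch("gemini://example.org/")

    # The follow-up poll resumes the session, skipping most of the
    # handshake.  Again, the path below is purely illustrative.
    reply, _ = fetch("gemini://example.org/.well-known/last-modified",
                     session=session)
    print(reply.decode("utf-8", errors="replace"))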

But even in the absence of this, we can be smarter.  For example,
software which consumes RSS/Atom feeds gets, for free, the distribution
of intervals between the last ten or so updates.  Instead of polling
everything several times a day, aggregators *could* poll each feed at a
frequency matched to its typical publication schedule.
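A rough sketch of what that adaptive polling could look like, in
Python.  The timestamps are stand-ins for whatever an aggregator parses
out of a feed, and the floor and ceiling clamps are my own assumption,
not anything specified:

    from datetime import datetime, timedelta
    from statistics import median

    def polling_interval(entry_times,
                         floor=timedelta(hours=1),
                         ceiling=timedelta(days=7)):
        # Median gap between consecutive entries, clamped to sane bounds.
        times = sorted(entry_times)
        gaps = [b - a for a, b in zip(times, times[1:])]
        if not gaps:
            return ceiling          # zero or one entry: poll rarely
        return min(max(median(gaps), floor), ceiling)

    # A feed that has been updating roughly every three days...
    updates = [datetime(2020, 6, d) for d in (1, 4, 6, 10, 13)]
    print(polling_interval(updates))   # ...gets polled about every 3 days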

Cheers,
Solderpunk


