gemini+stream://

Kevin Sangeelee kevin at susa.net
Sat Aug 15 17:00:18 BST 2020


Gemini currently allows a fetch-then-process model, whereas a URL that
refers to a streaming resource forces me to either:

a) intercept the response and make a decision on how to proceed, or
b) wait for a timeout

There's plenty of tech for which implementing the above is trivial,
but it's currently not mandatory. If my client pipes the output to
another process, there's no reason for either process *not* to wait
till the server closes the connection - I currently have every reason
to expect that the server is sending me data for any Gemini request.
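To make that concrete, here is a rough sketch of the fetch-then-process
model (Python; the hostname handling, TLS setup and error handling are
all simplified for illustration). The client just reads until the
server closes the connection, so a streaming resource would leave it
blocked indefinitely:

    import socket, ssl

    def gemini_fetch(host, url, port=1965):
        # Gemini clients typically use TOFU rather than CA validation,
        # so certificate verification is skipped in this sketch.
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall((url + "\r\n").encode("utf-8"))
                chunks = []
                while True:
                    data = tls.recv(4096)
                    if not data:   # server closed: response complete
                        break
                    chunks.append(data)
        return b"".join(chunks)    # status line + body, parsed afterwards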

Knowing in advance that a server will not close a connection means
that streaming works, existing clients don't hang or break, new clients
aren't forced to add extra complexity, and unnecessary requests can be
avoided entirely.
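
And a sketch of what knowing the scheme up front could buy a client
(Python again; dispatch() and its arguments are just illustrative
names, and the socket is assumed to already have the request sent):

    import sys
    from urllib.parse import urlsplit

    def dispatch(url, tls_sock):
        if urlsplit(url).scheme == "gemini+stream":
            # Known-endless resource: pass bytes through as they arrive
            # instead of waiting for the server to close the connection.
            while True:
                data = tls_sock.recv(4096)
                if not data:
                    break
                sys.stdout.buffer.write(data)
                sys.stdout.buffer.flush()
        else:
            # Plain gemini://: the existing fetch-then-process path is fine.
            chunks = []
            while True:
                data = tls_sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
            sys.stdout.buffer.write(b"".join(chunks))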

This is just my take, anyway!

Kevin

On Sat, 15 Aug 2020 at 11:47, cage <cage-dev at twistfold.it> wrote:
>
> On Fri, Aug 14, 2020 at 11:39:20PM +0000, James Tomasino wrote:
>
> Hi!
>
> Honestly I fail to understand why a new scheme is needed here. The
> protocol already supports streaming, as discussed in previous
> messages, and I do not see a lot of advantages to using a different
> scheme except (as you wrote) signalling to the user that the content
> will not end.
>
> Probably I am missing something; please help me to understand.
>
> > 6. It is still a single client-initiated request happening in the
> > foreground. We aren't creating background threads of who-knows-what
> > running services. We're getting an ongoing document in real time,
> > that's all.
>
> I do not think this is entirely true if you want to update or keep
> alive the UI of the client while the content is flowing from the
> server. Some kind of concurrent work enters the equation, I think.
>
> Bye!
> C.
