Assuming disallow-all, and some research on robots.txt in Geminispace (Was: Re: robots.txt for Gemini formalised)
James Tomasino
tomasino at lavabit.com
Thu Nov 26 13:04:58 GMT 2020
On 11/26/20 10:27 AM, Krixano wrote:
>> *But* by putting things on the web, the creator has granted the
>> world some implied license.
>
> This is not true. The only implied license is to view
> the thing put online. Redistributing it is not implied by putting
> something online, and neither is modifying, unless it's
> under Fair Use (a transformative work).
>
> Christian Seibold
Wow, this thread blew up overnight. Anyway, I was the one who first posted about Field v. Google as an example of litigation related to search engines and copyright. In an effort to avoid more "someone is wrong on the internet" arguments, here's the crux:
- If you, as a copyright holder, want to prevent your content from being cached and served by a third party (for instance, a search engine), you have a well-known mechanism for doing so: robots.txt.
- If your content is archived or cached against your wishes, your means of remediation are legal ones. Taking the issue to court will result in a court deciding whether you are within your rights to protect your content or whether the searcher/archiver/indexer is covered by fair use.
- The rules around copyright and media protection are established per country, but they are applied nearly universally worldwide via the Berne Convention and/or agreements like the EU's Electronic Commerce Directive.
- Existing legal precedent suggests that, if you do not have a robots.txt, you can expect a ruling in favor of implied consent.
All of this is to suggest we save ourselves the trouble down the road and just use robots.txt as-is.
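To make the "use robots.txt as-is" suggestion concrete, here is a minimal sketch of how a well-behaved crawler could honor such rules using Python's standard urllib.robotparser. The capsule URL, the paths, and the "archiver" and "gus" user-agent names are all hypothetical examples, not part of any spec discussed in this thread:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a Gemini capsule might serve:
# deny archivers everything, deny everyone else only /private/.
robots_txt = """\
User-agent: archiver
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The parser only inspects the URL's path component, so it works
# for gemini:// URLs just as it does for http:// ones.
print(rp.can_fetch("archiver", "gemini://example.org/page.gmi"))      # False
print(rp.can_fetch("gus", "gemini://example.org/private/notes.gmi"))  # False
print(rp.can_fetch("gus", "gemini://example.org/index.gmi"))          # True
```

A crawler would fetch robots.txt from the capsule root over Gemini itself and feed the lines to parse(); the matching logic is otherwise identical to the web's.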
Finally, and completely unrelated to everything: it was Oracle who tried to claim ownership of their APIs (asserting both copyright and patent claims), not the other way around. See:
https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_Inc.#First_phase:_API_copyrightability_and_patents
More information about the Gemini mailing list