Is Web 2.0 stealing content?
I got an interesting comment on one of my posts earlier today.
Berka wrote: “So when did stealing content by screen scraping become a part of web 2.0?” Well, I don’t know, since I am over 35! Instead of just responding to that comment, I thought I should write a new post, as this is a very important issue that I have pretty strong views on.
In today’s environment, information does not have to stick to something. Information can move around independently of its carrier. This is in contrast to how things have worked historically: information was tied to some carrier, and moving the information really meant moving the carrier, such as the plastic disc a CD is made of, or the sheet of paper a newspaper is printed on. Today, information can move all by itself.
The other important issue has to do with digitized information. Digitization implies there is no such thing as an “original” anymore. Anything that is digitized can be replicated and multiplied, so that one gets more than one original. The replication is also lossless, and that is what makes both instances of the original originals.
What does this have to do with Web 2.0? Well, in the Very Old Web, one could only have text. Text with links. The existence of links made it possible to create a web of information: one could quickly go from one piece of information to another. This is in reality a very old invention, but around 1990 Tim Berners-Lee connected it with Internet technologies, and the World Wide Web was born.
Some years later, web clients started being able to show linked images directly inline, so that web pages could include graphics. More and more layout elements were added to the HTML standard, and although we can have a separate discussion about how good HTML is for layout design, that is definitely the direction “The Web” has moved in.
With Web 2.0, two things have been added:
- The ability to update only part of a web page using JavaScript executed in the client (AJAX)
- The ability to create a web page based on information from other sources
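The first bullet can be sketched in a few lines. This is a hypothetical example: the `/api/weather` endpoint and the `weather` element id are my own inventions for illustration, not any real service.

```javascript
// Hypothetical sketch of an AJAX partial update. The endpoint
// "/api/weather" and the element id "weather" are invented here;
// no real service is implied.
function renderWeather(el, data) {
  // Replace only this element's text; the rest of the page stays as-is,
  // instead of reloading the whole document from the server.
  el.textContent = data.city + ": " + data.tempC + " C";
  return el.textContent;
}

// In a browser, fetch() would drive it without a full page reload:
// fetch("/api/weather?city=Stockholm")
//   .then((r) => r.json())
//   .then((d) => renderWeather(document.getElementById("weather"), d));
```

The point is simply that the client, not the server, decides which fragment of the page to refresh.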
Now, of course, many of you will not agree with me: the second bullet is not Web 2.0 at all, since people could include data from other sources even earlier. Sure, but the conceptual model of where the information that makes up a service comes from has changed. That, for me, is Web 2.0.
I have, for example, created some overlay maps on top of Google Maps. Many newspapers in Sweden include information from SMHI on their weather pages. This inclusion of data that other people have created, to build even more complex services, is what is sometimes called a mashup. Of course, nothing stops an organisation from refusing to grant anyone the right to reuse its information, so if SF does not allow people to “reuse” information from their site, that is their decision. But it is so wrong. It is so 20th century.
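Technically, such a mashup often amounts to nothing more than joining two feeds on a shared key. Here is a minimal sketch; all the data, field names, and cinema names are made up for illustration, and neither SF nor SMHI publishes these structures as far as I know.

```javascript
// Minimal mashup sketch: merge two invented data sources into one
// richer view. All data below is fabricated for illustration.
const showings = [
  { cinema: "Downtown Cinema", movie: "Casablanca", time: "19:00" },
  { cinema: "Harbour Cinema", movie: "Metropolis", time: "21:15" },
];
const locations = [
  { cinema: "Downtown Cinema", city: "Stockholm" },
  { cinema: "Harbour Cinema", city: "Stockholm" },
];

// Join the two sources on the cinema name, the way a mashup page
// would merge a listings feed with a location feed.
function mashup(showings, locations) {
  const byCinema = new Map(locations.map((l) => [l.cinema, l.city]));
  return showings.map((s) => ({ ...s, city: byCinema.get(s.cinema) }));
}
```

The combined result is more useful than either source alone, which is exactly the argument for letting people reuse the data in the first place.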
Today, letting other people reuse the information you make available, especially information people use to (in this case) decide which movie to go to, will increase interest and sales in other corners of the business model. Today TV4 said that illegal copying of a TV series before it is broadcast is not only a bad thing. It can even be a good thing, as it increases interest in the series, and the illegal copying has no impact on the number of viewers. On the contrary.
Because of this, I draw the conclusion (although I too am more than 35 years old) that SF should be happy if people wanted to display the cinema programme on sites other than their own. Why not (for example) allow me to include, on my website, which movies are running in the cinema close to where I live? Normally, of course, SF has to pay to get ads on other sites, but here people want to display the information for free, and SF says no. Weird.
Information wants to be free, will be free, and people will continue to share information with each other. People have always done so. Always. And now that information is digitized, that will not stop. Because of this, fighting what I would call “commercial copyright infringement” has to use mechanisms and methods that are as modern as the mechanisms used to share the information. Trying to police it using mechanisms from the 20th century, as they are trying to do in Denmark, will not work. Ever. People will share information with their friends. Information they would even pay for, if they could.