(Editor’s note: Venky Harinarayan, co-founder of the search engine Kosmix.com, says events of the past week underscore the need to draw up a new sort of privacy contract for Web 2.0 — but avoiding an Orwellian world may be the challenge.)

Privacy — the soap opera. A different episode every week, but with the same story.

Let’s recount last week’s highlights:

The Governator of California, Arnold Schwarzenegger, described certain ethnicities as “hot blooded” in an audio file, which got him in some hot water. But now the focus is on how the audio file was obtained by his opponent in the November elections — did someone hack into Arnold’s computers, or was the audio file on a public Web site? Was Arnold’s privacy violated?

A software designer placed a sexually explicit ad on Craigslist, and then posted all the responses to the ad on his Web site. Unfortunately for the respondents, their replies included very personal and sensitive information, including their e-mail addresses and pictures that did not belong in an exhibition.

Then there was the Facebook episode, which brought college campuses across America to a standstill. And I’m not even bringing up HP’s board of directors, who improved our vocabulary with the word “pretexting” — most of the privacy breaches at HP happened online.

So what’s happening here?

Blame Web 2.0. Web 2.0 is a squishy and much-abused term, but a fundamental and disruptive idea underlies the most successful Web 2.0 companies — MySpace, Facebook, Flickr, YouTube. The idea is simple: Consumers are not the anonymous browsers they were in Web 1.0; they are now publishers as well. They are first-class citizens of the Web.

This desire of consumers to participate fully on the Web runs headlong into their purported desire for privacy. Privacy 1.0 operated in a binary world: information about a consumer was either totally private or totally public.

Unfortunately, Privacy 1.0 is incompatible with Web 2.0. The Facebook issue illustrates this point. A Facebook user, Alice, writes something on her page, and her friend John reads it there. But if Facebook automatically decides to publish the same information to John on his page, Alice feels that her privacy has been violated. From a Privacy 1.0 point of view this makes no sense — Alice’s information was available to John in any case, right?

The reality is that consumers’ expectations of privacy change as they participate more fully on the Web. Rather than expecting absolute anonymity or absolute visibility, consumers as publishers of media now have expectations similar to those of media companies. It’s similar to what you hear in an NFL broadcast: Any publication, rebroadcast, or other use of my work without my express written consent is prohibited!

This is Privacy 2.0 — the consumer as a media publisher, who expects all the rights and protections afforded to traditional media publishers.

In the Facebook case, the issue was the rebroadcasting of Alice’s content to John without seeking Alice’s permission. To Facebook’s credit, they quickly fixed the privacy issue the right way, by giving Alice control over the rebroadcast.
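Facebook’s actual implementation isn’t public, but the idea of giving Alice control over the rebroadcast can be sketched in a few lines of Python. All the names here (User, Post, rebroadcast) are invented for illustration — this is a sketch of the principle, not Facebook’s code:

```python
# Sketch: no republication of a consumer's content without explicit opt-in.
# User, Post, and rebroadcast are hypothetical names, not a real API.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    allow_rebroadcast: bool = False  # Alice's control: off by default

@dataclass
class Post:
    author: User
    text: str

def rebroadcast(post: Post, to_feed: list) -> bool:
    """Push the post to a friend's feed only if the author opted in."""
    if not post.author.allow_rebroadcast:
        return False  # Privacy 2.0: no rebroadcast without consent
    to_feed.append(post)
    return True

alice = User("Alice")                     # rebroadcast disabled by default
johns_feed: list = []
post = Post(alice, "News about my weekend")
rebroadcast(post, johns_feed)             # blocked: John's feed stays empty
alice.allow_rebroadcast = True            # Alice opts in
rebroadcast(post, johns_feed)             # now the post reaches John's feed
```

The key design choice is the default: Alice is not rebroadcast until she says otherwise, which matches the expectation that consent is granted, never assumed.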

While the central issue appears to be the legal question of whether the company or the consumer owns the content, this is something of a red herring. If the consumer thinks they own the content, they do! The real issue for all companies, whether they are in search or in social networking, is to understand the consumer’s implicit license to their data and to stick to that license.

Consumers are not just interacting with companies, but also with other consumers, and this is another big source of privacy breaches. The Craigslist example was a consumer-to-consumer interaction. Consider another example: you send me an e-mail, and I publish your contact information by putting it in my blog, or by selling it to a spammer. Intuitively, I am violating your privacy. Yet there’s a company called Jigsaw.com whose business is based on people trading their contact information.

Everything we do online is either to consume — like reading a Web page or searching — or to publish — like sending an e-mail, sending a search engine our search query, or writing a blog entry. Because we play both roles, we must be guaranteed Privacy 2.0 rights for the content we publish, but in turn must accept the responsibility of not infringing others’ Privacy 2.0 rights.

Privacy 2.0 is riddled with unanswered questions, but we need to start moving in this direction, because people will only participate in content generation and social networking if they’re guaranteed a minimum set of rights.

So what do we do? In the real world, the rule of law protects media companies from having their rights infringed, but this is an area where I really would prefer not to get the lawyers involved.

What is clear is that consumers who publish their content to companies must have a way to control the usage of that content. More challenging and less clear is the problem of what other consumers are allowed to do with your data. I’m not going to sue my mother for forwarding a private e-mail to other family members, but there certainly is a line where “fair use” becomes unfair.

Standards for identity, and explicit content controls based on those standards, can help. Web 1.0 was enabled by the emergence of technological standards like HTTP, HTML, and global URLs that helped us identify and access servers. Similar standards for the consumer would help, but will be very controversial — think of a URL for every consumer, and services like Verisign to authenticate identity. Twenty-two years after Orwell’s 1984, this is still a scary thought.

I’m hopeful that we can solve this issue the old fashioned Silicon Valley way — with technology and start-ups. Where there’s a minefield, there’s an opportunity!