thoughts and updates on the regulation of new technologies and human rights

These are the slides from my presentation at the University of Leicester in December 2013 at a conference on The Legal Challenges of Social Media to Freedom of Expression.

What is a Joke?

I have been asked by several people for copies of my slides from my presentation at BILETA 2014 on Unraveling Intermediary Liability. This is the beginning of a much larger project on this subject matter. I hope they are of some use.

Unraveling Intermediary Liability

This week the European Commission’s Directorate-General for Enterprise and Industry announced that the ICT sector would be one of three business sectors that would be the focus of a year-long project to develop sector-specific guidance on corporate responsibility and human rights. In case anyone is curious, the other two are employment and recruitment, and oil and gas.

The selection of the ICT sector comes as no surprise given the events of the last few years. Technology’s capacity to be a tool of both democratisation and repression, and the role and responsibility of businesses in taking a stand (or not), have dominated the news and policy debates. To me as the CSR/HR/ICT person, this is all fascinating and, more than anything, welcome attention to an issue I was yammering on about five years ago to a sea of largely uninterested faces. It is also a natural extension of the European Commission’s recent commitment to develop sector-specific guidance in its communication on corporate social responsibility.

The project seeks to take the former UN Special Representative John Ruggie’s Protect, Respect and Remedy Framework and his Guiding Principles on its implementation and figure out how they should work in the ICT sector. For more on these documents see here. What, for example, should Vodafone have done when the Egyptian Government ordered it to turn off mobile phone networks? Resist the order? Immediately cease work in Egypt? Comply? In the end Vodafone did comply. For more on this see Salil Tripathi’s excellent commentary. But the next question is whether a corporate governance framework would have helped Vodafone navigate these issues. Vodafone had been one of the key drafters of the Global Network Initiative, one of the leading CSR frameworks for technology companies concerning human rights, but it pulled out at the last minute. Would being a member of the GNI have made a difference? As we go forward with this project to provide guidance to the ICT sector, we should remember this scenario and ask: what guidance would help a company in Vodafone’s position? If the project does not provide any, then this will all have been a theoretical exercise. More fundamentally, we need to ask: when is guidance not enough? When do we need the rule of law?

Over the next year I suspect I will talk quite a bit about this project, but let me offer some preliminary thoughts. We need to be clear about what we are asking of this guidance. One of the most difficult issues here is teasing out the difference between what I call pure CSR, or voluntary responsibilities undertaken by businesses, and legal obligation, and when the two overlap. The waters are far from clear. For example, when a business voluntarily commits to human rights responsibility (in whatever form), this can eventually take the form of legal obligation through actions for breach of contract (e.g. in an employment situation) or tort (e.g. falling below a standard of care undertaken by the company). Further, voluntary principles can sometimes inform legislation (in Canada a voluntary code on hockey helmets later became the basis of legislation on the same). Putting all of that aside, much work is needed to determine what we are asking of the ICT sector: voluntary commitment, regulation, or legal obligation.

The second issue is: whose responsibility is this? This is a critical question in human rights law, which is only binding on states. Where the line between voluntariness and the law is so hazy, we must be even more mindful of whom we are imposing obligations on. Are we asking the ICT sector to commit to human rights codes (not necessarily as a matter of law)? If not, and we want something more, which I suspect is the case, then this is a government responsibility. It might be that the rules are developed and debated in a multi-stakeholder forum, but in the end it is the duty of the state to ensure that the human rights of its citizens are protected. Businesses here are the institutions through which the state’s human rights duties are realised, and since businesses are private, profit-making institutions it is no small matter to go this route. Businesses are not supposed to be moral arbiters of the world’s problems, but in the ICT sector the fact is they are forced to make moral decisions every day. This does not necessitate top-down laws (though it might). It can also take a quasi-regulatory form akin to OFCOM (but let’s not go down that road of discussion for now, shall we…). The important thing to take away, for the moment, is that where the responsibility lies matters, because that determines the enforceability of the rules.

One of the questions coming out of the phone hacking scandal, and the announcement that News of the World will be shutting its offices following its final publication this Sunday, is the sufficiency of the Press Complaints Commission as the industry’s self-regulator. The PCC did very little when the first hacking allegations came to light. When Clive Goodman was imprisoned for hacking in 2006 the PCC announced it was launching an investigation, later concluding that it was an isolated incident. The PCC, it can be said, has had nothing to do with bringing to light what has happened here. Rather, the outrage of the public, the dogged pursuit by certain MPs and competing papers, and the plans for public inquiries caused NoW’s fall. This highlights that more than a formal regulator regulates the behaviour of the press. Bringing to mind regulatory theorists of the Internet environment, such as Lawrence Lessig, Colin Scott and Andrew Murray, this incident reminds us that the public has a role to play in regulating behaviour, and that naming and shaming can indeed at times be quite effective. But that is cold comfort to the family of Milly Dowler and the other victims of NoW hacking, and it is only effective after the fact in penalising behaviour. What the hacking scandal also highlights is the weakness of the PCC as a regulatory body, in terms of its accountability, its independence from industry, and its fundamental role in maintaining a standard of conduct for the press.

Is the PCC enough? Well, certainly not in its current form. But before we go gallivanting off arguing for stricter regulation of the press, we must be mindful of the critical role the press plays in pursuing the public interest, and the consequential need to give due attention to independence of the press and freedom of expression in this environment. More regulation might hamper its ability to carry out this role. Sure, the PCC has utterly failed as a self-regulatory body and needs to be reformed, but the answer is not necessarily to reconstitute the body as something akin to OFCOM with greater government oversight. What has happened here is as much a cultural problem as a legal one: the culture in the offices of NoW, but also the public culture of consuming the paper every week. NoW had a weekly circulation of 2.7 million. Our appetite fed the practices, and while the blame no doubt falls on Rupert Murdoch’s empire for engaging in such criminal practices, we should take a moment to consider whether we need to change our own practices as consumers.

Last week, I had a thoroughly enjoyable experience presenting on a panel with Andrew Murray of the LSE, Dr. Daithi Mac Sithigh of UEA, and solicitor Stratos Camatsos, under the chairmanship of Ben Allgrove from Baker & McKenzie.  It was put together by the UCL Student Human Rights Program to discuss “Internet & E-Rights: Challenges and Perspectives”.  The questions from the audience were thoughtful and engaging. If anyone is interested in seeing the slides from my presentation here they are:

I presented last week in the Cyberlaw section at the Society of Legal Scholars Annual Conference on the human rights compliance of the Internet Watch Foundation’s regulatory structure.  It was an enjoyable time and nicely low-key and informal.  If you are interested in seeing the slides of my presentation, here they are:

Kaschke v Gray and Hilton raises questions about how involved a blog owner can be in checking or moderating his or her blog before incurring liability under the E-Commerce Directive.  You can see discussions on the case here and here.

This case revolves around the Labourhome political blog and a post by John Gray claiming that Johanna Kaschke, a local political activist, had been arrested on suspicion of being a member of the Baader-Meinhof terrorist group. Kaschke sued for libel, stating that although she had been arrested, she had never been a member of the gang nor accused of being one, and additionally, that the government of West Germany had paid her compensation for false arrest and imprisonment.

Alex Hilton, the blog’s owner, sought summary judgment dismissing the claim, which was refused by Master Rose. Justice Stadlen upheld the appeal and in his reasoning provided some guidance to bloggers on the nature and extent of their liability for user-generated content.

The Court clarified that (1) operation of a chat room qualifies as an information society service (ISS) under the Directive, thus allowing it to take advantage of the exemptions from liability under the Directive, and (2) moderation of one part of a website does not prevent other areas of the same site from being exempt from liability. The latter confirms that if your website includes a mix of user-generated content in some parts (exempt) and your own content in other parts (non-exempt), you can still avail yourself of the exemptions under the E-Commerce Directive for the user-generated parts.

However, the decision muddies the waters when it comes to editorial control. Under Regulation 19 of the E-Commerce Regulations 2002, an ISS whose service consists of the storage of information provided by the recipient of the service is not liable provided it does not have actual knowledge of the unlawful content or, upon becoming aware of it, acts expeditiously to remove it. The interesting thing about the Labourhome case is that it raises the question: what is meant by ‘storage’? What activities move a service beyond mere storage? In effect, what is editorial control?

Hilton exercised no control over certain automated areas of his homepage, which listed ‘recent blogs’ and ‘recommended blogs’ (automated by users’ votes). However, sometimes Hilton looked at these posts and considered whether they should be given a more prominent position. If a post was promoted, more detail was provided, such as the date and time of the post and a preview of its content. This, according to Justice Stadlen, means that Hilton might not be able to rely on the exemption under Regulation 19 for hosts, because his service might be more than mere storage. Even the fixing of spelling mistakes risks losing the protection of Regulation 19. Justice Stadlen reasoned that the act of fixing a spelling mistake goes beyond mere storage of information, adding:

“The fact that Mr Hilton on a few occasions removed blog posts on grounds of bad language, political provocation or offensiveness falling short of defamation again in my view makes it at least arguable that the service provided in respect of those individual blog posts and also in respect of the general service consisting of making available webpages on his website for such blogs to be posted consisted of more than mere storage.”

Although Hilton is required under the Regulations to remove illegal content once notified, and faces similar indirect liability under the Defamation Act, the decision raises a few issues of concern.

Hilton moderated comments for offensiveness after they were posted. The risk of finding him liable, Hilton argued, would be to discourage any monitoring of blogs by their owners for offensive material. Hilton’s lawyers argued that pre-moderation of content should fall foul of Regulation 19’s exemption, but not post-moderation of content. The Judge side-stepped the argument, saying, “The question whether the removal by a service provider of a blog on grounds of offensiveness or political content is in itself enough to prevent his storage of that blog post from consisting only of storage and thus sufficient to withhold Regulation 19 immunity is not one which it is necessary to decide”.

This is unfortunate, because this seems to be exactly what the case was about. Granted this was a summary judgment proceeding, and Justice Stadlen emphasised that the extent of Hilton’s involvement was a question for trial. That said, the case has significant regulatory implications that the judge seemed to overlook.

1. The exemptions in the Regulations (via the E-Commerce Directive) were crafted in order to encourage ISPs and the like to self-regulate. This case has the opposite effect: it encourages blog owners to purposefully avoid ANY moderating, because otherwise they risk liability.

2. It creates uncertainty about what qualifies as editorial involvement:

  • If the correction of spelling mistakes is editorial involvement, what about the removal of comment spam, which I myself remove from my comments?
  • Many comment sections are set up so that only a person’s first comment needs to be approved, and any future comments are unmoderated. Does the approval of one comment make an ISS liable for any future, unmoderated comments by this person?
  • This case indicates that any moderation for offensiveness opens the ISS up to liability – what about a situation, as in this case, where some comments are moderated and others are not?

The other side is the risk of an ISS going ‘Steve Jobs’ and, armed with an exemption from liability and encouragement to self-regulate, removing content (or, in Jobs’s case, applications) as it sees fit without a proper accountability regime in place. Jobs’s most recent move was to refuse a Pulitzer Prize-winning satirist’s request for an iPhone app on the basis that, in making fun of public figures, the satirist violated Apple’s Terms of Service. Apple has since backed down on this decision. See the article here. While this deserves a separate blog post, it is mentioned here because the answer to a case like Labourhome is not simply to take the American approach and exempt an ISS from any liability, because that results in an accountability deficiency, as seen with Apple. Moderation for offensiveness can be a good thing, but without structures of fairness, transparency and accountability, there is the risk of moderators ‘going Apple’. However, the effect of the Labourhome case creates a problem at the other end of the scale: owners of blogs and chatrooms effectively cannot and should not moderate their spaces for any offensiveness. The losers in this scenario are the public – their iPhones are tethered and their moderators fearful. Surely we deserve something more nuanced.

I wanted to take a moment to recognise a legal giant and a great man, Alan Hunter, who passed away earlier this week.  I worked for him at Code Hunter in Calgary, Alberta way back when before I got it into my head that I should do a PhD.  It was an honour to work with him, and I can thank him for having a significant influence on the lawyer that I am today.  My thoughts are with his family and friends.

I learned about a website recently, which allows the public to search for personal information on people. It is an American website, and so I did what any loving niece would do: I looked up my Uncle John. I learned there are 976 people in the USA with his name (at least 976 people in the Spokeo database with his name), and 18 in the state in which he lives. I was able to easily find him by scrolling down through the list and finding his city and address. I clicked on his name and a profile appeared, which was alarming in its detail. It had his address, phone number and confirmation of his marriage to my aunt, all information that is available in most phone books. However, the information does not stop there. It correctly lists his age, ethnicity, education level, profession, whether he has children, how many people live in his house, how long he has lived there, whether he owns it, and what style of house it is. His neighbourhood is profiled, and strangely enough, so are his interests. This is where the profile takes liberties. It values his house at over $1 million. If this is true, congratulations Uncle John! However, the person who alerted me to this website said her house was also listed at this value, incorrectly. It is the lifestyle and interests section that takes the most liberties. Apparently my uncle enjoys sports and reading and the outdoors, which I’d say is true. He also apparently loves to read about politics, but is not interested in politics. And he enters sweepstakes and loves home decorating. And there is a series of photos of strange men who I can say are NOT my uncle.

This is where the profile shifts subtly to a work of fiction. There are privacy concerns with pooling together seemingly inconsequential data and creating a profile of a person, but it is even more invasive when part of the profile is untrue. There are just enough points of truth in the profile to create an air of reality, so that a person looking at the profile might accept everything on it to be true. And make judgements of him. Potential employers, friends and partners might all turn to such a profile to make assessments about him, which might affect him in the real world. Yet the untruths do not satisfy the requirements for a cause of action in libel (even in the UK!), nor would the nature of the information likely provide grounds for an action in breach of privacy, although there might be a cause of action under the Data Protection Act. In tiny writing at the bottom of his profile Spokeo writes, “Profile data is derived from marketing surveys, consumer records, and public data sources and is not guaranteed to be 100% accurate. The data provided to you by Spokeo may not be used as a factor in establishing a consumer’s eligibility for credit, insurance, employment purposes or for any other purpose authorized under the FCRA.” Notice the use of the word “may” for the use of the information by potential employers etc. It does not expressly forbid the use of the information, but conveniently exempts Spokeo from liability for the truth of the content. Even if it did expressly forbid it, how would Spokeo know that a potential employer looked at its site and relied on the information in a profile in making a decision between two candidates for a job?

I viewed the ‘basic’ profile. If I pay $2.95 per month I get a one-year membership to view a more detailed profile of him. Instead, I deleted his profile. You can thank me later, Uncle John.

I will be presenting at the 2010 BILETA Conference in Vienna next week on Who is the Keeper? A Framework for Identifying Online Gatekeepers. Here are the slides for my presentation: