Public libraries, police and the normalisation of surveillance

Police presence in libraries, no matter how abstract, normalises state surveillance. (Image c/o Thomas Hawk.)

In an era of unjustified, economically incoherent cuts in investment in public services, there has been an increasing drive to make various parts of the public sector work together to cut costs (“cut costs” in a very superficial sense, of course). One such collaboration that keeps popping up is a partnership between the police and public libraries, an idea that should never even be entertained, let alone discussed as a serious and reasonable proposition.

The latest such proposal is one that would see one particular police force close down its inquiry desks and effectively move them to the local public library service, requiring library staff to assist in the reporting of crimes online for those without internet access at home. According to a statement on the Norfolk Constabulary’s website:

The six month trial will run from the end of September in Thetford and Gorleston and will involve library staff signposting customers to police services, while also helping them complete online self-reporting forms, a function which will soon be available as part of the Constabulary’s new website.

Such a move changes the library space from a safe one to one that is subject to a subtle form of surveillance, whereby people’s behaviours are modified by the knowledge that the space is one where the police have a presence, even if only in the abstract. Effectively, it normalises surveillance. The knowledge that it is a space in which to report crime impedes the library as a space to freely engage with ideas, particularly in the current political climate.

Take Prevent, for example. A racist strategy that demonises non-whites, it has led to a series of actions that have been an affront to the rights of the individual, particularly in terms of intellectual freedom, both directly and via the culture it has encouraged. The recent detainment of Faizah Shaheen is a good example of the consequences of not only the normalisation of surveillance but the encouragement to “snitch”.

The experiences of Faizah Shaheen and Mohammed Umar Farooq should serve as a warning to library workers and those providing library services. Where there is a police presence, no matter how abstract it may be, there is a risk to people of colour. Facilitating police reports in libraries has a very obvious and malign consequence: it makes the library a space of authority and control. In an environment in which people are detained because of their reading habits, using a public library as an extension of the police inquiry desk poses a threat not only in terms of people reporting individuals (online crime reporting will happen in the library whether the library encourages it or not; the key issue is the normalisation of the space as a place to interact with the police), but also in the inhibiting effect it has upon those using the space.

Would a person of colour feel comfortable accessing information or borrowing books if they do so in an environment that encourages and enables the reporting of crime, particularly when reading can lead to detainment under anti-terrorism legislation? Individuals will feel that they cannot access information freely in an environment that has become an extension of the police station (which is partly how surveillance works – controlling and directing individuals, preventing activity from taking place).

This relationship with the police continues to be proposed in authorities across the country. Earlier this week it was revealed that police desks in Angus would be moved into the council’s libraries. And there have also been “community police hubs” (how innocuous sounding) relocating to public libraries. And what’s coming around the corner should very much set alarm bells ringing about the suitability of public libraries and the police sharing space, whether it be abstract or physical.

Earlier this year, it emerged that under Theresa May’s proposed Investigatory Powers Bill, public libraries will be required to store internet users’ records for up to 12 months, again seriously undermining the library as a safe space for intellectual freedom. Not only does such a move normalise surveillance, making it part and parcel of every aspect of every citizen’s life, but it turns public libraries into a space less about intellectual freedom and more about monitoring citizens on behalf of an authoritarian state. It goes without saying that this poses a threat to the very notion of intellectual freedom, a notion that public libraries should be actively defending and advancing.

As public libraries increasingly become a place where the state seeks to control and observe the intellectual behaviour of others on the basis of supposed threats posed by organised terror, so public libraries lose their purpose. They cease to be places of exploration and interrogation and become nothing more than repositories of state-sanctioned ideas and values. This process of normalisation needs to stop, for the benefit of all the communities we serve.

How do we support the development of privacy literacy?


What role can/should librarians and libraries play in ensuring privacy literacy? (Image c/o Karol Franks on Flickr.)

In “The digital divide in the post-Snowden era” I explored the extent to which internet privacy should be considered an element of the digital divide, as an extension of the skills divide. The focus of the piece was very much in terms of state and corporate surveillance, but this is not the be all and end all (and is arguably a more provocative angle than was necessary). My particular area of interest has always been in terms of the gap between the information the state accesses about us, as compared to the amount of information we access about the state. But good privacy practices shouldn’t solely be seen in terms of theoretical concerns about individual freedom (although I’d argue this is a very important aspect).

For the past couple of days, I’ve been following the Surveillance and Society Conference in Barcelona (#ssn2016), which has obviously been of great interest in terms of the aforementioned article. Reading through the tweets yesterday, one in particular stood out for me:

I’d not really considered the term “privacy literacy” before, but it seems to me this is exactly the sort of thing we (librarians) should be considering in our roles. Rather than seeing online privacy technologies solely as a key component of protecting citizens from state and corporate surveillance, we should see them in terms of privacy literacy and, by extension, information literacy. Privacy literacy should be considered at least as vital as information literacy, because arguably you are not free to exploit information unless you also have privacy [citation needed].

It’s also important, in my view, to consider awareness of and the ability to use online security tools as “good practice”. When teaching people how to use the internet, we guide them on basic security practices, e.g. look for the padlock when conducting any financial transactions. But perhaps we should be going beyond this in ensuring individuals protect themselves as much as possible online. Web activity isn’t, after all, only subject to observation by the state; it’s also at risk of being accessed and used by criminals. Insecure email, web usage and communications put individuals at risk of criminal activity, including data theft. One of the concerns in the “debate” (such as it is) over encryption is that weakened encryption, backdoors etc not only make it easier for the state to access data, they also make it easier for hackers with malicious intent to access and steal data. Encryption technologies offer a protection against that, as well as offering some protection for intellectual privacy.
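To make the “look for the padlock” advice a little more concrete, here is a minimal sketch (using only Python’s standard library; the hostname is just a placeholder) of roughly what the browser is checking before it shows that padlock: that the site completes a TLS handshake with a certificate that validates against trusted authorities.

```python
# Minimal sketch: the programmatic equivalent of "look for the padlock".
# Standard library only; "example.com" is a placeholder hostname.
import socket
import ssl


def has_valid_certificate(hostname: str, port: int = 443) -> bool:
    """Return True if the host completes a TLS handshake with a certificate
    that validates against the system's trusted certificate authorities."""
    context = ssl.create_default_context()  # verifies certificate and hostname
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except (ssl.SSLError, OSError):
        return False


if __name__ == "__main__":
    print(has_valid_certificate("example.com"))  # True if the "padlock" would show
```

None of which, of course, is something we should expect library users to do by hand; the point is simply that the padlock stands for a verifiable check, not a vague reassurance.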

But, as I argue in my article, such technologies are not necessarily easy to use. For example, I recently went through the process of setting up PGP (Pretty Good Privacy) encrypted email following the publication of the article. Even as someone with a whole host of privileges, it was not an easy process by any stretch of the imagination. Of course there were folks I could call on to help me out, but I wanted to experience the process of doing it independently, with as little guidance as possible. It wasn’t easy. It took some degree of effort, even after discovering an online guide to help me through it. I managed it in the end, but one wonders how many people would bother to make the effort when it takes very little effort to create an account with one of the large commercial providers (although even then there are those who will experience difficulties following that process). Indeed, PGP has a reputation for being a bit of a nightmare in terms of user-friendliness. It’s important to note, of course, that PGP is not perfect as a secure method of communication (neither are even the most secure of mobile messenger apps). However, it does offer greater security than many of the alternatives.
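For those curious about what is actually happening underneath all that setup, the sketch below shows the core encrypt/decrypt step. It assumes the third-party python-gnupg library and a local GnuPG installation, and the name, email address and passphrase are purely illustrative; it is not a substitute for the full email setup described above, just the kernel of it.

```python
# Minimal sketch of the encrypt/decrypt step behind PGP email.
# Assumes the third-party python-gnupg package and a local GnuPG install;
# the name, email and passphrase below are purely illustrative.
import os

import gnupg

os.makedirs("/tmp/pgp-demo", exist_ok=True)
gpg = gnupg.GPG(gnupghome="/tmp/pgp-demo")  # throwaway keyring for the demo

# Generate a key pair (the step the guides walk you through; the private key
# never leaves your machine).
key_input = gpg.gen_key_input(
    name_real="Example Reader",
    name_email="reader@example.org",
    passphrase="correct horse battery staple",
)
key = gpg.gen_key(key_input)

# Encrypt a message to the recipient's public key...
encrypted = gpg.encrypt("Meet me in the library at noon.",
                        recipients=[key.fingerprint])
print(str(encrypted))  # ASCII-armoured ciphertext, safe to send over email

# ...and decrypt it with the private key and passphrase.
decrypted = gpg.decrypt(str(encrypted),
                        passphrase="correct horse battery staple")
print(str(decrypted))
```

The few lines above hide a lot of the friction (key exchange, key verification, getting your mail client to cooperate), which is precisely where most people give up.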

All of this raises the question: how do we get people to develop better online privacy behaviours? Some of it is down to the support people are given when they go online. Public libraries are very good at providing that first-level “here’s how you search online, here’s how you set up an email account” support, as well as some basic security guidance (“look for the https/padlock icon”). What happens far less is the provision of more extensive online security support. And given the difficulties around some of the software available to ensure greater online security, there is clearly a need for more. But it’s not just about teaching/showing people how to adopt a more secure approach to their activity online.

Clearly some technologies are difficult to use. Some might also argue that many people are not overly bothered about ensuring their security. But the growing use of ad blocking software suggests that the usability of technology can make a difference. According to a report earlier this week, it is predicted that around 30% of British internet users will use ad blocking software by the end of next year. Ultimately, if the software to protect privacy is usable, people will use it. As Sara Sinclair Brody argues:

Open-source developers, in turn, need to prioritize user-experience research and design, as well as to optimize their tools for large organizations. The focus of too many projects has long been on users who resemble the developers themselves. It is time to professionalize the practice of open-source development, recruit designers and usability researchers to the cause, and take a human-centered approach to software design.

Given our role in offering guidance and support to those learning how to use the internet effectively, perhaps there is a role here for librarians in working with open source developers more extensively to ensure that the user experience is greatly improved, making it easier for people to use the technology. As with ad blocking software, maybe then we will see its rapid expansion (maybe something for UX folk to engage with).

Of course, I see privacy as being about protecting individuals from state and corporate surveillance – this ultimately stems from my political outlook. But the kind of practices that ensure protection from such surveillance are also just good practice in ensuring individuals’ data isn’t susceptible to any malign activity. The question is, as we encourage private sector bodies – bodies that benefit from internet users making their data accessible – to provide internet training, how do we re-assert the primacy of privacy and security?

The permanence of corporate surveillance

Image c/o Barbara Friedman on Flickr.

I’ve been thinking a lot recently about the nature of surveillance now as compared to how it operated in the pre-internet era (if we can even imagine such an era existed). Surveillance is, of course, an age-old technique employed by the state to protect, to control and to manage. In many respects, the Snowden revelations shouldn’t have surprised us in the least. Did anyone really believe that a mass communication tool could be introduced without the state wishing to have a poke around in what was being communicated? Perhaps the only real surprise was the scale. Nonetheless, history provided us with the clues.

However, we can draw a very clear line between the kind of surveillance that was popularly recognised before 2013 and that which has come to light post-2013. The first, and most obvious, point to make is that surveillance has historically been targeted, not indiscriminate. Targets were identified and surveillance approved and conducted. It may have been against particular groups, or specific individuals, but it was always targeted. Now, however, everyone’s communications are subject to collection and scrutiny. We are all, to a certain extent, suspects.

The other clear difference is the fluidity of the nature of our surveillance regimes. It is not merely the state that collects vast amounts of data about our activities; the corporate sector also gathers huge amounts of information about what we do, where we go, who we talk to and so on. This data does not reside securely in the hands of corporations, however. We know, following Snowden, that much of the data private corporations collect about our activities is also accessed by the state, either with or without the consent of said corporations. Thus we find ourselves in an environment of what has been described as “liquid surveillance” – a fluid state of surveillance where data flows, particularly between the state and corporations.

But there is a further difference between that which occurred pre-Snowden and that which we know post-2013: the permanence of it. Before the emergence of the internet, surveillance did not always proceed unimpeded. There were concerns about it, and efforts to limit its scope or even to roll it back. The use of wiretaps in the United States is a good example of a surveillance strategy being strongly criticised and, ultimately, rolled back.

Back in the early part of the 20th century, there was outrage about the federal use of wiretapping. This outrage wasn’t restricted merely to the strands of libertarianism on the left and the right (in so far as the right can be described as “libertarian” when it argues for the replacement of state authority with corporate authority); it cut across the entire mainstream of political opinion. Conservative newspapers were as outraged as the liberal press. The outrage was such that, in 1934, the Communications Act federally outlawed the use of wiretaps (reinforced by a Supreme Court ruling in 1939).

Although these safeguards were whittled away by successive administrations (Democrat and Republican), there was still a sense at the heart of the establishment that surveillance must be limited, at least publicly if not privately. In 1967, for example, the President’s Commission on Law Enforcement and Administration of Justice stated that “privacy of communications is essential if citizens are to think and act creatively and constructively” (the mere fact that our current government thinks privacy of communication is unnecessary suggests they rather don’t want citizens to think and act constructively…). Privacy of communications is crucial in a democratic society, and the fact that this was endorsed by the President’s Commission underlines the extent to which this was hardly a view taken by a few radicals outside the mainstream. It was, to all intents and purposes, a conservative viewpoint on the impact of such intrusions. The big difference now, I think, is that I couldn’t envisage such an acknowledgement of, or restriction upon, contemporary forms of surveillance.

The emergence of the notion that information is a commodity has changed all this. In a capitalist society, where information/data has value, where the harvesting of such data can produce profit, corporations are obliged to seek out that commodity, secure it and draw profit from it. Any effort to inhibit this will surely be resisted, both by the corporations themselves and by their allies in the political elite (particularly on the right, of course). It is simply not possible to imagine a situation where the current environment is overturned. Pandora’s box has been opened; there is no way we are going to be able to put everything back inside. Corporate surveillance is, therefore, a permanent state of affairs. It will never face the legislative restrictions that wiretapping faced in the last century. No, it is a permanent fixture, because a commodity that drives profit will not ever be restricted so long as capitalist orthodoxy is dominant. Therefore, in a state in which data flows between the state and corporate bodies, it is hard to imagine that surveillance in a capitalist society can ever truly be curtailed.

We may well be able to limit the extent to which the state directly collects data on individuals, but will we ever really halt access to data that we have voluntarily surrendered to profit-making entities on the internet? Is it possible to prevent this in a capitalist society? It seems to me that it probably isn’t. Whilst a large-state society results in intrusive state surveillance, surely a free-market, “libertarian” society would result in wide-scale corporate surveillance (under the guise of being voluntary… “voluntary” being a notion that right-libertarians interpret rather liberally)? And as we edge towards an extreme free-market state, won’t such surveillance become permanent and inescapable? Perhaps, under capitalism, corporate surveillance is here to stay?

Surveillance, libraries and digital inclusion


Librarians have a key role to play in terms of digital inclusion and protecting intellectual privacy. [Image c/o Duca di Spinaci on Flickr – CC-BY-NC license]

Towards the end of last year, I was privileged to be invited to talk at CILIP’s Multimedia Information and Technology (MmIT) Group AGM about digital inclusion as a representative of the Radical Librarians Collective (see the presentation below – which includes a list of recommended reading!). The invitation was well timed in terms of coming up with a focus for my talk as I have spent the best part of 5 months working on a journal article for the Journal of Radical Librarianship on the digital divide (which, pending peer review, will hopefully be published in the early part of this year). Specifically, I’ve been interested in looking at digital inclusion from a slightly different angle, that of the divide in terms of state and corporate surveillance.

As followers of this blog will know, I’ve been talking about surveillance and the Snowden revelations for some time now. Concerned about the gathering of information about us, whilst the state seeks to limit the amount of information we obtain about it, I’ve mainly been focused on the impact this has in terms of our democratic processes. However, since the emergence of the Library Freedom Project (founded by the awesome Alison Macrina), I’ve been increasingly interested in the role that libraries and librarianship have to play in this area. It seems to me that the disclosures must expand the terms by which we define the digital divide. Whilst there has always been a focus on access, and on skills, there must be greater attention on what people actually do online and, furthermore, the extent to which individuals are able to act freely in terms of seeking information.

Being able to seek out information that offers alternatives to the status quo (indeed, not just “offers” but challenges) is vital in a democratic society. Without the ability to seek out and understand alternatives, it is hard to accept that our society can possibly be described as “democratic”. What is clear from Snowden’s disclosures is that the ability to seek out information and communicate with others whilst ensuring your intellectual privacy is increasingly difficult. Difficult unless you have the skills and knowledge with which to defend your intellectual privacy.

I tend to think that I am fairly skilled in terms of using the internet. I can seek out information quickly and efficiently, I can provide assistance for others, and I am fairly innovative in the ways in which I use certain online services. What I lack, however, are the skills necessary to really ensure my intellectual privacy, to defend myself against state or corporate surveillance. I have some skills, I have some basic knowledge, but I don’t know how to protect myself fully. And yet I consider myself reasonably skilled. What about those that have difficulties in using the internet in a basic way? What about those that struggle to do the things that I take for granted? Aren’t they even more exposed to state and corporate surveillance? Isn’t their intellectual privacy even more under threat? Surveillance tends to affect the most disadvantaged to the greatest extent; is intellectual privacy something only for the privileged?

I don’t want to get into this even further here (wait for the longer version!), but I do think there are issues here about the nature of the digital divide and how we should view digital inclusion post-Snowden. There was a time when it was considered fanciful that librarians could even consider providing the sort of skills that the state may see as a threat to the status quo. However, the efforts of the Library Freedom Project in the United States underline that this is no longer the case. If librarians in the United States, the home of the NSA, can help people defend their intellectual privacy, why can’t we do the same in the United Kingdom? I’m not suggesting that we can collectively as a profession start setting up Tor nodes in libraries or teaching people how to use encryption technologies, but we need to have the debate about how we ensure the intellectual privacy of everyone in our society, not just the privileged few.

CILIP’s Ethical principles for library and information professionals states that we must have a:

“Commitment to the defence, and the advancement, of access to information, ideas and works of the imagination.”

If we are to defend and advance that access to information then we must, in my mind, do whatever we can to defend the intellectual privacy of everyone.

You can also download a PDF version of this presentation here [PDF – 6.29MB].

Recommended Reading

Coustick-Deal, R. (2015). Responding to “Nothing to hide, Nothing to fear”. Open Rights Group.
Gallagher, R. (2015). From Radio to Porn, British Spies Track Web Users’ Online Identities. The Intercept.
Murray, A. (2015). Finding Proportionality in Surveillance Laws. Paul Bernal’s Blog.
Richards, N. M. (2008). Intellectual Privacy. Texas Law Review, Vol. 87.
Shubber, K. (2013). A simple guide to GCHQ’s internet surveillance programme Tempora. Wired.
@thegrugq. Short guide to better information security.
@thegrugq (2015). Operational Telegram.
Whitten, A. & Tygar, J.D. (1999). Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0.

Library Freedom Project. Privacy toolkit for librarians.
Let’s Encrypt.
Electronic Frontier Foundation
Digital Citizenship and Surveillance Society
Surveillance & Society (OA journal).
The Digital Divide in the post-Snowden era (a micro-blog curating interesting links and resources – by me!)

Surveillance, freedom, Tor and libraries


The internet has brought new threats to intellectual freedoms…what can librarians do? (Image c/o Amélien Bayle.)

For some time now I’ve followed (and admired greatly) the work of Alison Macrina and the Library Freedom Project (LFP) in the United States. Teaching citizens how to protect themselves from surveillance (both state and corporate) seems to me to be a fundamental role for librarians in a digital information society. Indeed, the International Federation of Library Associations and Institutions’ (IFLA) internet manifesto clearly states:

“Library and information services…have a responsibility to…strive to ensure the privacy of their users, and that the resources and services that they use remain confidential.”

In a post-Snowden world where state and corporate surveillance has merged as the internet has expanded, the principles of protecting privacy and ensuring intellectual freedom are more vital than ever.

Alison’s work has been particularly inspiring from afar, due to the inherent difficulties of delivering anything equivalent in UK public libraries. Whilst conducting the kind of work she does in the US is not without its hurdles, I tend to feel that the prospect of even offering the kind of support she provides would be impossible within our library and professional structures. I find it hard to conceive of a local authority permitting any kind of service that teaches citizens how to protect themselves online. Whilst libraries themselves are presented as “neutral” (despite the reality), they are delivered and sustained by political entities. Not only are they sustained by political entities, they are sustained by political entities that are not only broadly supportive of the need for surveillance in the traditional sense (ie state) but also, due to the infection of neoliberal dogma, accepting of corporate data collection (corporate surveillance). In fact, considering recent developments, it would appear they are rather keen on using libraries as a mechanism to increase susceptibility to corporate data collection.

The recent announcement of a partnership between BT and Barclays in public libraries demonstrates how far we are from being able to provide the kind of training that Alison can provide in the United States. Presented as a crucial weapon in the bid to close the digital divide, the government-announced pilot project sees BT provide wifi in public libraries while Barclays, through its Digital Eagles scheme, provides “free technology advice”. Putting aside the very obvious concerns about private influence in a public service, it’s pretty clear that a scheme funded by Barclays will work in the interests of Barclays (and, by extension, corporate interests in general). It goes without saying that the kind of training the Digital Eagles provide does nothing to protect the privacy of internet users. A flick through their various guides finds advocacy of Google and Yahoo! as “very reliable and easy to use”, while the guide to online safety provides only the most basic of advice. If you want to learn about protecting yourself from corporate surveillance, surprise, surprise, a large bank is probably not going to offer a solution.

That’s how far away we are, in one of the most surveilled countries in the world, from being able to provide citizens with protection from state-corporate surveillance infrastructures. Rather than protecting people from such surveillance, we are partnering up with private companies who seek to benefit from the data collection opportunities the internet provides. We’re not so much protecting citizens from data collection, but encouraging greater data collection.

Of course, efforts by the LFP have not been without their own difficulties. Yesterday it emerged that the Department of Homeland Security had contacted the police department in Lebanon, New Hampshire, regarding Kilton Public Library becoming the first library in the country to join the anonymous web-surfing network Tor by running a relay. Using the standard trope that surveillance avoidance puts people in danger, the police applied pressure to ensure that the library pulled the plug on the project.
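For those unfamiliar with what “joining Tor” actually involves, the sketch below shows roughly what a library pilot like Kilton’s amounts to in technical terms: running a relay that donates bandwidth to the network. It is only a sketch, assuming the third-party stem library and a locally installed tor binary; the nickname and contact address are purely illustrative.

```python
# Minimal sketch of running a (non-exit) Tor relay, roughly what a library
# pilot involves. Assumes the third-party `stem` package and a local `tor`
# binary; the nickname and contact address are purely illustrative.
import stem.process

tor_process = stem.process.launch_tor_with_config(
    config={
        "ORPort": "9001",                      # accept relay traffic on this port
        "Nickname": "ExampleLibrary",          # hypothetical relay nickname
        "ExitPolicy": "reject *:*",            # middle relay only: refuse exit traffic
        "ContactInfo": "systems@example.org",  # hypothetical operator contact
    },
)

try:
    input("Relay running; press Enter to stop...")
finally:
    tor_process.kill()
```

A relay configured this way simply passes encrypted traffic between other Tor nodes; it does not log, inspect or exit that traffic, which is precisely why the “it puts people in danger” framing is so hard to sustain.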

The ability to source and access information without restriction should be a core function of libraries. In a world of mass surveillance, a “chilling effect” inhibits our right to obtain information without fear. Tools such as Tor provide us with that freedom to seek out information without fear of state or corporate surveillance. This is a core concern of the librarianship profession, and it’s one that I think we have been slow (generally speaking) to address, whether for fear of reprisals or for lack of the requisite knowledge to provide this kind of support. The move by the Department of Homeland Security must be a concern for all of us, whether we reside in the US or not. If attempts to deliver projects that protect citizens from mass surveillance are shut down before they even get off the ground in the US, we can be assured that an equivalent in the UK would never get off the ground.

Ultimately, we are being pushed into a position that compromises the ethical underpinning of our profession. We know that seeking and obtaining information freely online is compromised due to a combination of state and corporate surveillance, and yet any attempt to protect our users to enable free and uninhibited access is shut down. So where do we go from here? Private tuition outside the confines of local government influence? Who knows. In the meantime, it’s vital to put pressure on the U.S. Department of Homeland Security and the local police department and assert that an attack on intellectual freedom in libraries should not be tolerated under any circumstances, not least on spurious grounds of security.

You can add your support here. Regardless of whether you are a US citizen or not, I’d urge you to sign. Intellectual freedom gets to the heart of our profession. When it is attacked, we are attacked.

When it comes to the internet, it’s not just government snooping we should be worried about…

Corporations want your data as much as governments want to snoop.
(Image: El Alma Del Ebro in Zaragoza by Saucepolis on Flickr.)

Remember the early days of the internet? When start-up companies seemed to be, somehow, a different breed from the companies that we had grown accustomed to? “Don’t be evil” appeared not only to be Google’s mantra, but the mantra of a whole host of companies that emerged in tandem with the growth of the internet. Whereas we had grown accustomed to companies that were focused on shareholder profit rather than the interests of ‘consumers’ or society in general, these companies seemed to be benign, friendly, sensitive to their social responsibilities.

In contrast to the growth of these ‘benign forces’ of the internet, governments and politicians have become increasingly suspicious of the technology, predominantly because it is an area over which they do not feel they exercise sufficient control. In the UK, this has manifested itself most obviously and most recently in the Data Communications Bill (or Snoopers’ Charter). A particularly invasive piece of legislation that was seriously considered by the coalition, it proposed to grant powers to the Home Secretary (or another cabinet minister) to order ‘telecommunications operators’ to gather and retain any ‘communications data’, effectively collecting ostensibly private data on citizens for whatever purpose they deemed worthy. It appears, on the face of it, that these proposals have now been abandoned, although that is not to say they won’t come back in a slightly modified form. If one were a cynic, one might suggest the Liberal Democrats applied pressure to drop the legislation in advance of the local elections to ensure they were cast in a positive light. Unlikely perhaps, but my cynical mind can’t help but believe there is more to this than simply a matter of principle; after all, Nick Clegg wasn’t always so opposed…

This suspicion, however, doesn’t begin and end at the Snoopers’ Charter. There was also, for example, the introduction of the Digital Economy Act, which enables the blocking of website access for anyone who is deemed to have infringed copyright laws but, consequently, also risks penalising those entirely innocent of any such activity. Then there is the Regulation of Investigatory Powers Act 2000 (Ripa), used to investigate Osita Mba, a whistleblower who uncovered a “sweetheart” deal with Goldman Sachs. Using Ripa:

…HMRC can see websites viewed by taxpayers, where a mobile phone call was made or received, and the date and time of emails, texts and phone calls. According to the revenue website, these powers “can only be used when investigating serious crime”.

And it doesn’t end with proposed or existing legislation; individual politicians have also made calls for illiberal and unhelpful restrictions on the internet. Back in 2011, following the riots, one politician called for Twitter and Facebook to be blacked out during any further disturbances.  Needless to say this was a particularly stupid and disturbing suggestion, not least because the very same social media helped people in the area affected by the riots to communicate with others and ensure their own safety.  There’s no doubt that the freedom provided by the internet frightens those who believe it threatens existing power structures, underlining that, from their point of view, freedom only goes so far…

The desire to highlight some of these illiberal measures isn’t solely restricted to organisations such as the Open Rights Group; many of the giants of the internet are quick to point the finger at the role of government as a threat to the freedom of the individual. Take, for example, the largest of all the companies to emerge in the internet era – Google.

Last week, in an article for The Guardian, Eric Schmidt (executive chairman) and Jared Cohen (Director, Google Ideas) warned that global governments are monitoring and censoring access to the web, which could lead to the internet coming increasingly under state control. The usual examples of authoritarian regimes seeking to restrict what their citizens can access online are rolled out. Curiously, however, there is no mention of the United States or Europe (Russia appears eight times, China seven); it appears that we are not affected by governments monitoring or censoring access to the web – oh, apart from the Data Communications Bill, the Digital Economy Act, Ripa and so on. This omission seems curious considering an admission by Schmidt in a separate interview with Alan Rusbridger, also in The Guardian.

During the interview, Rusbridger notes:

But [Schmidt’s] company collects and stores an extraordinary amount of data about all of us, albeit in an anonymised form. Which is all well and good, until government agencies come knocking on Schmidt’s door – as they did more than 20,000 times in the second half of last year. The company usually obliges with US officials. (It’s more complicated with others.) This will only get worse.

Clearly, as the legislative examples above demonstrate, attempts to monitor the web are not restricted to authoritarian regimes; they are also a problem in Western, (supposedly) liberal democracies. When the US is making 20,000 requests in six months (around 100 requests a day on average), it is clear that the problem is not restricted to just China, Russia and other authoritarian regimes. But there’s another side to this equation. A side that Schmidt and others in the business community seem reluctant to talk about, for very obvious reasons.

The extract from Rusbridger’s interview with Schmidt reveals two facts that everyone concerned with the internet and the free flow of information needs to be worried about. The first is the actual requests from US officials for data from Google. The second is the data that Google collects and makes available to US officials. There are, I would argue, two concerns about the future of the internet: government control and corporate control. The former Schmidt is keen to talk about, the latter not so much.

Google’s business is data. They collect data from users to ‘enhance the user experience’ (a brilliant phrase used to suggest that the collection of your personal data is actually doing you a favour). The volume of data collected is vast and is collected for a specific purpose: to make money (to “enhance the user experience”). These services do not make money by charging you; they take a commodity you are giving away for free and sell it on to advertisers. The transaction is different from the traditional service model (consumer purchases goods from service provider), but it is effective and relies on your data to ensure profitability for the service provider. For example, Google was making $14.70 per 1,000 searches in 2010 – around 1.5 cents per search. Some services do not even require you to visit the service itself to obtain your personal data. Facebook, for example, has been known to track light users of the service across 87% of the internet.

Google’s executive chairman, Eric Schmidt (image c/o Jolie O’Dell on Flickr).

The sheer volume of data handled by many of the largest internet companies should be a cause for concern. Indeed, not only is the data collection itself a concern, but also the willingness with which they give it up to government agencies (note that in the aforementioned interview, Schmidt suggests that Google usually says yes to government requests for data). Of course, many would argue that there is nothing to fear about the collection of personal data: if you have done nothing wrong etc. But you are not in control of the personal data and the rules that govern its use; corporations and governments are. Imagine for a moment a different type of government, a different set of rules, a different environment altogether – would you be so keen on US officials demanding your data and it being handed over as easily as Google does now? And what if Google engineered this change in government? Sounds far-fetched, doesn’t it? Maybe it’s not as far-fetched as it might sound…

A recent study by United States-based psychologists, led by Dr. Robert Epstein of the American Institute for Behavioral Research and Technology, revealed the disturbing amount of power in the hands of companies like Google. Epstein’s study found that Google has the capability to influence the outcome of democratic elections by manipulating search rankings. The study (available here – PDF) presented three groups of eligible American voters with actual web pages and search engine results from the 2010 Australian general election. Participants were randomly assigned to one of three groups: two groups were provided with search engine rankings favouring one of the candidates, while the remaining group was provided with rankings that favoured neither:

Beforehand, individuals reported having little or no familiarity with the candidates at all. Based on short biographies, they were asked to rate each candidate and say how they would vote.

They then spent time gathering information using a mock search engine, after which they again rated the candidates in various ways and again said how they would vote.

Before their Internet search, there were no significant differences in how they rated the candidates. Afterwards, however, two thirds of the people in the first two groups said they would vote for the candidate that was favored in the search rankings – a dramatic shift that could easily “flip” the results of many elections, especially close ones, concludes the report.

Now, there is nothing to suggest that Google have actually weighted search results in the way suggested in the study nor that they ever have the intention of doing so, but they can. Not only can they do it, but they can do it without our awareness of such manipulation.

Governments may attempt to monitor us through the introduction of ever more illiberal regulatory measures applied to the internet, but it’s important to remember that the corporations profiting from the internet also benefit from our manipulation.  It strikes me that there are two crucial considerations that we need to remember when we reflect upon the role of the corporation (as opposed to that of the state) in the development of the internet:

1) The relationship between the user and the service. Unlike traditional relationships, we are not simply consumers purchasing goods from a service provider. They are taking data from us and selling it to advertisers to make money. Our data is the product and we are the vendor. The problem is we are not remunerated for this transaction, only permitted to use a service under the terms stipulated by the service provider. They are not acting out of kindness in offering such services for free; they want more data from users to increase profits. Users need to be more aware that they are the vendors in this relationship, not the customers. Of course, we believe and trust them because we are not ‘buying’ from them; we still see them as providing us with something for free when actually they make their money using our data.

2) Considering the volume of data given away, there is a need to remind ourselves of the nature of governments and corporations. Like governments, corporations are not fixed. Corporations change. They change either because of a need to increase profits, or because they have been bought out by a rival. You may well be happy giving Google all your data, but what happens when it is no longer Google? What if your personal data fell into the hands of a company you were not comfortable with having access to it? What then? And whilst a takeover of Google may seem far-fetched at this point, remember that the very idea that Time Warner would merge with a company called AOL was a fanciful notion towards the end of the last century. Nothing remains static in either the world of business or that of technology.

Above all else, however, we need to remember that companies like Google and Facebook are just that: companies. Whilst they appear warm, fuzzy and less stuffy than traditional corporations, they are still corporations. Corporations that act the same as every other corporation before them: lobbying government to lighten regulation, maximising profit and, where possible, shifting the focus onto government shortcomings in the hope that their own activities won’t be subject to scrutiny. They are, after all, just corporations like any other and we should treat them with the same scepticism as we treat older, more established corporations. For when it comes to the internet, we need to keep a close eye on both the governments who regulate it and the corporations who profit from it.