How do we support the development of privacy literacy?


What role can/should librarians and libraries play in ensuring privacy literacy? (Image c/o Karol Franks on Flickr.)

In “The digital divide in the post-Snowden era” I explored the extent to which internet privacy should be considered an element of the digital divide, as an extension of the skills divide. The focus of the piece was very much on state and corporate surveillance, but this is not the be-all and end-all (and is arguably a more provocative angle than was necessary). My particular area of interest has always been the gap between the information the state accesses about us and the information we are able to access about the state. But good privacy practices shouldn’t solely be seen in terms of theoretical concerns about individual freedom (although I’d argue this is a very important aspect).

For the past couple of days, I’ve been following the Surveillance and Society Conference in Barcelona (#ssn2016), which has obviously been of great interest in terms of the aforementioned article. Reading through the tweets yesterday, I found one in particular stood out for me:

I’d not really considered the term “privacy literacy” before, but it seems to me this is exactly the sort of thing we (librarians) should be considering in our roles. Rather than necessarily seeing online privacy technologies as a key component of protecting citizens from state and corporate surveillance, we should see them in terms of privacy literacy and, by extension, information literacy. Privacy literacy should be considered at least as vital as information literacy because arguably you are not free to exploit information unless you also have privacy [citation needed].

It’s also important, in my view, to consider awareness and the ability to use online security tools as “good practice”. When teaching people how to use the internet, we guide them on basic security practices, e.g. look for the padlock when conducting any financial transactions. But perhaps we should be going beyond this in ensuring individuals protect themselves as much as possible online. Web activity isn’t, after all, only subject to observation by the state; it’s also at risk of being accessed and used by criminals. Insecure email, web usage and communications put individuals at risk of criminal activity, including data theft. One of the concerns in the “debate” (such as it is) over encryption is that weakened encryption, backdoors etc not only make it easier for the state to access data, they also make it easier for hackers with malicious intent to access and steal data. Encryption technologies offer a protection against that, as well as offering some protection for intellectual privacy.

But, as I argue in my article, such technologies are not necessarily easy to use. For example, I recently went through the process of setting up PGP (Pretty Good Privacy) encrypted email following the publication of the article. Even as someone with a whole host of privileges, it was not an easy process by any stretch of the imagination. Of course there were folks I could call on to help me out, but I wanted to experience the process of doing it independently, with as little guidance as possible. It wasn’t easy. It took some degree of effort, even after discovering an online guide to help me through it. I managed it in the end, but one wonders how many people would bother to make the effort when it takes very little effort to create an account with a large commercial provider (although even then there are those who will experience difficulties following that process). Indeed, PGP has a reputation for being a bit of a nightmare in terms of user-friendliness. It’s important to note, of course, that PGP is not perfect as a secure method of communication (neither are even the most secure of mobile messenger apps). However, it does offer greater security than many of the alternatives.
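
To give a flavour of what’s involved (and why those guides are needed at all), here’s a minimal sketch of the kind of workflow they walk you through, written in Python using the third-party python-gnupg wrapper. This is purely illustrative – it assumes GnuPG and python-gnupg are installed, and every name, address and passphrase below is a placeholder rather than anything from my own setup.

```python
# Illustrative sketch only: the basic PGP workflow (generate a key pair,
# exchange public keys, encrypt to a recipient) via the third-party
# python-gnupg wrapper. Assumes GnuPG is installed; all names, addresses
# and passphrases are placeholders.
import gnupg

gpg = gnupg.GPG(gnupghome="/tmp/pgp-demo")  # throwaway keyring for the example

# 1. Generate your own key pair (in reality: choose a strong passphrase and keep it safe).
key_settings = gpg.gen_key_input(
    name_real="Alice Example",
    name_email="alice@example.org",
    passphrase="a long passphrase you will not forget",
    key_type="RSA",
    key_length=2048,
)
my_key = gpg.gen_key(key_settings)

# 2. Export your public key to share, and import your correspondent's key.
my_public_key = gpg.export_keys(my_key.fingerprint)
# gpg.import_keys(their_public_key_text)  # obtained from them out of band

# 3. Encrypt a message to someone whose public key is on your keyring.
result = gpg.encrypt("Meet me in the library at noon.", recipients=["bob@example.org"])
if result.ok:
    print(str(result))  # ASCII-armoured ciphertext, ready to paste into an email
else:
    print("Encryption failed:", result.status)  # e.g. no public key for the recipient
```

Even boiled down to a handful of calls, there are several points – passphrases, fingerprints, exchanging public keys out of band – at which a newcomer can easily come unstuck, which rather reinforces the point about usability.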

All of this raises the question: how do we get people to develop better online privacy behaviours? Some of it is down to the support people are given when they go online. Public libraries are very good at providing that first level of “here’s how you search online, here’s how you set up an email account” support, as well as some basic security guidance (“look for https/padlock icon”). What happens far less often is the provision of more extensive online security support. And given the difficulties around some of the software available to ensure greater online security, there is clearly a need for more. But it’s not just about teaching/showing people how to adopt a more secure approach to their activity online.

Clearly some technologies are difficult to use. Some might also argue that many are not overly bothered about ensuring their security. But the growing use of ad blocking software suggests that usability of technology can make a difference. According to a report earlier this week, it is predicted that around 30% of British internet users will use ad blocking software by the end of next year. Ultimately, if the software to protect privacy is usable, people will use it. As Sara Sinclair Brody argues:

Open-source developers, in turn, need to prioritize user-experience research and design, as well as to optimize their tools for large organizations. The focus of too many projects has long been on users who resemble the developers themselves. It is time to professionalize the practice of open-source development, recruit designers and usability researchers to the cause, and take a human-centered approach to software design.

Given our role in offering guidance and support to those learning how to use the internet effectively, perhaps there is a role here for librarians in working more extensively with open source developers to ensure that the user experience is greatly improved, making it easier for people to use the technology. As with ad blocking software, maybe then we will see its rapid expansion (maybe something for UX folk to engage with).

Of course, I see privacy as about protecting individuals from state and corporate surveillance – this ultimately stems from my political outlook. But the kind of practices that ensure protection from such surveillance are also just good practice in ensuring individuals’ data isn’t susceptible to any malign activity. The question is: as we encourage private sector bodies, which benefit from internet users making their data accessible, to provide internet training, how do we re-assert the primacy of privacy and security?

Why librarians need to act on mass surveillance

We need to speak out as a profession against mass surveillance. Image c/o floeschie on Flickr.

Today the Investigatory Powers Bill has its second reading in parliament. The introduction of the Bill is not only a threat to society in general, but also a serious threat to our profession and, in particular, to our commitment to defend the intellectual privacy of our users. We must speak up as a profession to defend the rights of our users and, wherever possible, seek to protect their intellectual privacy.

Ever since the disclosures by Edward Snowden in 2013, I’ve been concerned about the impact of mass surveillance both on our society, and on us as professionals. Disappointingly, there seemed to be little in the way of action by the profession (particularly in the UK – hampered by a professional body that cannot be overtly political), until the Library Freedom Project came along and started making waves in the United States. Inspired by Alison Macrina’s work, I started to consider more deeply the impact of mass surveillance on our communities and the various issues it raised. For me, alongside concerns about intellectual privacy, it highlighted a further aspect of the digital divide: autonomy of internet use. Given the limited amount of literature on the relationship between the digital divide and surveillance, I decided this was an important area to explore more extensively. So, I started reading around and pulling together an extended piece for the Journal of Radical Librarianship on the topic.

The main inspiration for the piece was the article Intellectual Privacy by Neil Richards (which is available OA here and is highly recommended). For me this really crystallised some of the key issues around surveillance and the protection of intellectual privacy (the ability to read, communicate and seek out information without being observed doing so). Aside from the very crucial focus on intellectual privacy and its importance, Richards also highlighted the role of librarians in supposedly developing some of the “norms” of the concept itself. This role seems particularly strong in the United States (where Richards drew most of his examples), with even the ALA taking a role in advocating for the intellectual privacy of individuals through a variety of initiatives.

As well as Richards’ work, David Lyon also played a key role in forming my views. Lyon is a leading figure in surveillance studies and has written a number of invaluable pieces on the topic that, as with Richards, helped to clarify my thinking (see, for example, his paper on understanding surveillance today). Lyon’s definition of surveillance was particularly useful in understanding how surveillance operates upon individuals. For Lyon, surveillance is about the “focused, systematic and routine attention to personal details for purposes of influence, management, protection or direction”. It’s interesting (yet unsurprising in some respects) the extent to which surveillance within the UK is seen as primarily about protection, with little consideration of how mass surveillance controls or “manages” individuals (or maybe we just don’t care that it controls us). What I also found particularly useful here is that Lyon’s definition doesn’t solely apply to the mass data collection by the state; it also relates to that growing phenomenon: corporate surveillance.

Surveillance and ethics

Clearly, there is a conflict between intellectual privacy and mass surveillance. If you exist in the conditions of the latter then you clearly cannot have the former. For society it presents a serious issue – for librarians it presents a critical issue that gets to the core of our professional ethics. If we cannot (or do not) protect the intellectual privacy of our users, then we are failing as professionals. Indeed, given we exist in a state of mass de-professionalisation, where volunteers are seen as adequate replacements for “expensive professionals”, we are rather making the case for our own extinction. If we do not have a set of ethics and professional values that we not only espouse but actively promote, what makes us any better than a volunteer?

In terms of the profession in general, there are clear guidelines from organisations representing our profession regarding the conflict between mass surveillance and our ethics. In 2005, for example, the Chartered Institute for Library and Information Professionals (CILIP) produced a “Statement on intellectual freedom, access to information and censorship” and endorsed the Council of Europe’s ‘Public access to and freedom of expression in networked information: Guidelines for a European cultural policy’. The Council of Europe’s guidelines clearly stated that individuals are to “decide for themselves what they should, or should not, access” and that those providing the service should “respect the privacy of users and treat knowledge of what they have accessed or wish to access as confidential”. Furthermore, the International Federation of Library Associations and Institutions (IFLA) advised in their ‘Statement on Privacy in the Library Environment’ that information professionals have a responsibility to “reject electronic surveillance”, provide training on “tools to use to protect their privacy” and “respect and advance privacy at the level of practices and as a principle”. The message is clear: we have an obligation to ensure the privacy of our users and to provide them with the tools necessary to enable them to defend their intellectual privacy.

Tackling the digital divide

This task is made even more urgent given the nature of the digital divide. We know well enough that access alone isn’t enough; individuals also require the skills with which to exploit the internet to their own advantage. In a report published in 2014, the BBC found that 1 in 5 adults lacked the four basic online skills (sending and receiving emails, using a search engine, browsing the internet, and filling out an online application form). Given that the most disadvantaged are most likely to be affected by mass surveillance, it’s clear there is a need to provide the necessary support to ensure that everyone is able to protect their intellectual privacy, not merely those with the means to do so. What is clear, post-Snowden, is that the digital divide is as much about the gap between those who can protect their intellectual privacy and those who cannot, as it is about having the skills to be able to use the internet to benefit individuals economically, educationally and in terms of healthcare.

We, as a profession, have a clear commitment to tackle the digital divide. We play a crucial role in levelling the playing field, ensuring both access to the internet and support as individuals seek to exploit it to their own advantage. This crucial role is, of course, being undermined by the delivery of such support by the private sector, in particular banks (see Barclays Digital Eagles). Of course, corporations have no interest in ensuring the privacy of the individual online, because greater privacy means the exposure of less of the personal data that large corporations can exploit to drive profit. We, as a profession, are not beholden to shareholders. We have no reason to expose our users’ personal data for exploitation. We have ethical obligations not to expose the reading habits of our users. It is this that distinguishes us from banks and from volunteer-run libraries.

It is, therefore, incumbent on us as library and information professionals to develop our skills with regards to online intellectual privacy, to seek to defend the intellectual privacy of our users and, more broadly, to speak out against government legislation that attacks our professional values as well as posing a threat to society in general. We have an obligation as professionals to defend intellectual privacy and to ensure that it is not only a value afforded to those endowed with social, cultural and economic capital, but also to the most disadvantaged and vulnerable in our communities. If we are serious about tackling the digital divide in all its manifestations, then we need to be serious about ensuring autonomy of use for all. So long as our communities are vulnerable to mass surveillance we will not achieve true equality of access to the internet and the wealth of information it provides. That is why we must act.

 

Clark, I. (2016). The digital divide in the post-Snowden era. Journal of Radical Librarianship, 2, 1-32. Retrieved from https://journal.radicallibrarianship.org/index.php/journal/article/view/12/24

The Imbalance In Transparency


Image c/o Jonathan McIntosh on Flickr (CC BY-SA).

Yesterday was a big day in terms of transparency, democracy and information rights. After months of criticism for the way in which it had been loaded in favour of curbing Freedom of Information legislation, the Independent Commission on Freedom of Information published its findings, followed by publication of the government’s response. On top of all this, the government published its revised Investigatory Powers Bill (or “Snooper’s Charter”). In terms of the information flow between the state and the individual, these two developments couldn’t be more important. The question is, to what extent is the information flow weighted in favour of citizens rather than the state? A question to which the answer is, I think, obvious to anyone with even the vaguest grasp of the history of the British state.

Given the sheer size of the debate and discussion in these two areas, I thought I’d bang all this together in one post, but split it up into two main themes: Information From Them and Information For Them. Seems to me that both these areas say a lot about where we are as a country, and I think such a distinction further emphasises the current state of play.

Information From Them

The FoI commission may have found that there is no case for new legislation with respect to the Act (meaning no substantial changes to how it operates), but this does not mean that it won’t continue to have serious limitations. The Act itself is imperfect as it stands now (and the increased outsourcing of public services to the private sector further limits its scope), and it’s not clear to what extent the government will use the findings of the Commission to come up with new and innovative ways to further restrict its impact. As Maurice Frankel, director of the Campaign for FoI, notes, rather than changes to the legislation it “could be that they are now possibly talking about various forms of guidance”.

For the government, the FoI Act has a very narrow appeal. It’s less about creating a culture of full transparency across government, both nationally and regionally, and more about beating the drum for value and efficiency. The Freedom of Information Act is about more than just providing citizens with access to information on how taxpayers’ money is spent; it’s about holding politicians to account, ensuring that all of their decisions are subject to scrutiny, not merely those about how money is spent. This narrow perspective is still very much central to the government’s thinking, as evidenced by Matt Hancock’s statement in response to the findings:

“We will not make any legal changes to FoI. We will spread transparency throughout public services, making sure all public bodies routinely publish details of senior pay and perks. After all, taxpayers should know if their money is funding a company car or a big pay off.”

For the Conservatives, it makes sense that this is the extent of their endorsement of transparency. Spending taxpayers’ money plays directly into their narrative of difficult economic conditions that warrant the rolling back of public spending. Ensuring a focus on FoI as purely a mechanism to monitor local government spending shifts the emphasis and, ultimately, sends a message about how they view FoI. It’s not about transparency, or holding politicians to account. It is purely and simply about being a stick with which members of the public can beat local government profligacy.

One recommendation that is worth noting is the position regarding the “Cabinet veto”. The Commission recommended that:

“…the government legislates to clarify beyond doubt that it does have this power. We recommend that the veto should be exercisable where the executive takes a different view of the public interest in release, and that the power is exercisable to overturn a decision of the IC. We recommend that in cases where the IC upholds a decision of the public authority, the executive has the power to issue a ‘confirmatory’ veto with the effect that appeal routes would fall away, and any challenge would instead be by way of judicial review of that veto in the High Court.”

Although the government have decided that the veto will only be deployed “after an Information Commissioner decision”, the Minister’s statement adds that so long as this approach proves “effective”, legislation will not be brought forward “at this stage”. This is, to say the least, disappointing. As has been noted before, the veto simply acts as a way for ministers to avoid embarrassment (see the Prince Charles letters for example). Of course concerns about this particular aspect need to be considered in the context of the fact that the worst case scenario regarding Freedom of Information has not come to pass, but the phrase “at this stage” should put us all on alert regarding the government’s intentions.

That said, contrast the government’s position on freedom of information (where openness comes with caveats) with their position on surveillance (where caveats barely seem to exist)…

Information For Them

Following a number of critical reports about its Investigatory Powers Bill, the Home Office yesterday put forward revised draft legislation seeking to, in their words, “reflect the majority of the recommendations” from these reports. The reality is quite different, and very troubling on a number of levels, not least because of the intention to rush this bill through parliament at a time when other stories with substantial ramifications are dominating the news cycle (the intention seems to be to get it through before DRIPA expires at the end of the year).

What of the proposals themselves? Well, they don’t make for comforting reading if you care about individual liberty and intellectual privacy. Despite criticism that the initial draft lacked any sense that privacy was to form the backbone of the legislation, the only change in this respect has been to add the word “privacy” to the heading for Part 1 (“General Protections” becomes “General Privacy Protections”). This tells you all you need to know about how the government views privacy. It’s a minor concern when compared to the apparent desire to engage in mass surveillance.

The Bill proposes that police forces will be able to access all web browsing records and hack into phones, servers and computers. Although the Home Office later claimed that hacking powers date from the 1997 Police Act and would only be used in “exceptional circumstances”, when giving evidence to the scrutiny committee, Det Supt Paul Hudson noted that these powers were used “in the majority of serious crime cases”. Needless to say, he refused to provide any further detail on the record. But there does appear to be a shift here from the police being able to view any illegal sites you have visited, to enabling them to view any website you visit.

In terms of encryption technologies (the bête noire of Western democracies hostile to privacy), there has been some clarity and yet there also seems to be somewhat of a loophole that could prove advantageous to those who know what tools to use to ensure their intellectual privacy. In the government’s response to pre-legislative scrutiny it advises:

“The revised Bill makes clear that obligations to remove encryption from communications only relate to electronic protections that have been applied by, or on behalf of, the company on whom the obligation has been placed and / or where the company is removing encryption for their own business purposes.”

The implication here seems pretty clear: to ensure you provide sufficiently strong encryption, move towards encryption that you do not control rather than encryption that you do. If you don’t control it, you cannot remove it. I suspect the net consequence of this will be a muddying of the waters for those who wish to protect their intellectual privacy. It is already difficult to differentiate between the encryption tools that truly protect you from mass surveillance and those that arguably do not (the consequence being a new manifestation of the digital divide). Being able to differentiate between the tools where the provider controls the encryption placed on communications and those where it does not will undeniably require a degree of social capital that not everyone has the privilege to possess.
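
To illustrate the distinction, here’s a rough sketch of my own (using the third-party PyNaCl library – not anything referenced in the Bill or offered by any particular provider) of end-to-end encryption in which the keys are generated and held on the users’ devices. A service relaying the ciphertext in this model never holds the keys, so there is no protection it has “applied” that it could be compelled to remove.

```python
# Illustrative sketch only: end-to-end encryption where keys never leave the
# users' devices, using the third-party PyNaCl library. The provider relaying
# `ciphertext` holds no keys, so it has nothing it could be made to "remove".
from nacl.public import PrivateKey, Box

# Each user generates a key pair locally and publishes only the public half.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"Reading list attached.")

# The ciphertext can pass through any intermediary; only Bob can decrypt it.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext).decode())
```

Contrast that with a service that applies encryption on its own servers and holds the keys itself: under the Bill’s wording, that is precisely the protection it could be obliged to strip away.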

There are many significant concerns regarding this draft bill, many of which would take a huge blog post to cover…and I’ve not even read the full bill and accompanying documents yet. Rather than hit the 2,000 word mark, I’ve put together a list of key resources below. As librarians and information professionals we need to be on top of this. Defending the intellectual privacy of our users (whether that be in schools, public libraries, further or higher education) is a fundamental ethical concern. We need to take whatever steps we can to ensure we advance privacy, ensure the protection of digital rights and reject the monitoring and/or collection of users’ personal data that would compromise such privacy.

One thing I will add is that the combination of these two developments speaks volumes about the nature and transparency of government in the United Kingdom. It is far less about ensuring a democratic system by which elected officials can be held to account, and far more about treating citizens with suspicion and thus undermining the democratic process. Given these circumstances, it is difficult to conclude that we live in a fully functioning democracy. When the state is entitled to more information about us than we are about them, there is no democracy.

Further resources

IFLA Statement on Privacy in the Library Environment

Investigatory Powers Bill – all government documents

Privacy International statement on IPBill

Investigatory Powers Bill – How To Make It Fit-For-Purpose

Don’t Spy On Us (authors of the above report on making it fit for purpose)

Access Now statement on IP Bill

 

Independent Commission on Freedom of Information report

Statement by Matt Hancock on Commission’s report

Campaign for Freedom of Information statement

Barclays and the library marketing opportunity

Image c/o MattJP on Flickr (cc-by)

Just before Christmas I wrote a post questioning why Barclays are in our libraries. Somewhat alarmed by the invasion of a public space by a corporate entity, I was particularly concerned about the kinds of tools that they recommend as part of their digital skills drive. Unsurprisingly, they were things like Google, Yahoo! and Outlook (see the aforementioned post for reasons why I find this problematic). The Google thing particularly troubled me, and it rather suggested that (surprise, surprise) there may well be an ulterior motive as to why Barclays are offering up their help in public libraries.

In Nick Stopforth’s post on the Libraries Taskforce blog, he argues that:

“…there is no hard sell (or even soft sell) from the Digital Inclusion Stakeholder partners in libraries…”

The claim, then, is that Barclays are not promoting their banking services in doing this; they are solely concerned with helping people develop their digital skills and get online. I don’t buy this. In fact, I have never bought this. As my grandfather (an Arkwright-style shopkeeper who would be appalled his grandson has turned out to be a socialist) used to say, “nothing is free”. Barclays aren’t offering this for free with no immediate return. They are doing it because there is a business advantage in them doing so. I think Nick’s statement may well be wrong and that there is a soft sell element to this. I’m a suspicious sort, so I thought I’d dig around a bit and see what I could find out.

As part of something else I am working on at the moment (which never quite seems to achieve closure), I had been digging around finding out more about how Google Ads works. Here’s what it says on their Gmail help page:

We are always looking for more ways to deliver to you the most useful and relevant ads – for example, we may use your Google search queries and clicks, Google Profile, and other Google Account information to show you more relevant ads in Gmail.

In light of the fact that Barclays recommends Google as a search engine and email provider, this seemed to me to be quite intriguing. If Barclays are setting people up with Google accounts in libraries and then, at any point during the session, taking them to the Barclays site (say, to point them to the Internet Help pages as a reference after the session), there is a very high chance that Barclays adverts will be delivered to that user’s inbox. So I thought I’d ask them directly if this is what they do. And lo:

[Image: digital eagles]

Now, of course, this is fairly circumstantial. Maybe the Digital Eagles don’t always sign people up for Google Accounts and maybe they don’t always direct people to their website. I’ve never been to one of their sessions, I’m not aware of anyone who has, and there seems to be very little information available to the general public on exactly what they do in these sessions. BUT signing them up for a Google account and visiting the Barclays Internet Help pages in the same session will significantly increase the chances of the individual in question receiving targeted ads in their inbox promoting various services Barclays delivers. In short then, Digital Eagles in libraries is a great opportunity for the bank to deliver direct advertising to individuals who are not currently online, who lack digital skills and, potentially, are not existing customers of Barclays (their Internet Help page also promotes their online banking services). I’m sure this is not their sole reason for providing digital skills support, and it might be that this is entirely coincidental. But it is worrying (indeed, I was telling a more politically centrist IT friend of mine about the project and his instant reaction was “that’s completely inappropriate”).

The best alternative (aside from not letting Barclays in the building at all) would be for the tools that they recommend to people to be privacy-related, rather than the kind of tools that gather data to serve adverts. So, for example, rather than Google’s search engine, they would have to show individuals how to use DuckDuckGo. This would ensure that the user’s search history is not then used to deliver adverts and that there was no potential whatsoever for Barclays to either hard sell or soft sell their products. At present this relationship provides far too much opportunity for the latter, even if the former is prohibited.

I think we’ve generally done ourselves (the profession as a whole) a huge disservice when it comes to digital skills support. We KNOW this stuff. We know this stuff BETTER than Barclays do. Right across the profession we’ve got people who help people with digital skills, who teach people essential skills with regards to digital literacy, and yet we’ve outsourced these services to banks. When we read that back, doesn’t it sound odd? The skills and knowledge we have around using the internet effectively are not being passed on to the general public; instead we are asking providers of financial services to do it for us. How did we get into this mess? Is it a question of leadership? Is it the hollowing out of public services by central government? Is it the decline in professional ethics? For me it’s all these things and more. One thing is for certain, the future is bleak if we continue to believe that others can do it better than us.

The permanence of corporate surveillance

Image c/o Barbara Friedman on Flickr.

I’ve been thinking a lot recently about the nature of surveillance now as compared to how it operated in the pre-internet era (if we can even imagine such an era existed). Surveillance is, of course, an age-old technique employed by the state to protect, to control and to manage. In many respects, the Snowden revelations shouldn’t have surprised us in the least. Did anyone really believe that a mass communication tool could be introduced without the state wishing to have a poke around in what was being communicated? Perhaps the only real surprise was the scale. Nonetheless, history provided us with the clues.

However, we can draw a very clear line between the kind of surveillance that was popularly recognised before 2013 and that which has come to light post-2013. The first, and most obvious, point to make is that surveillance has historically been targeted, not indiscriminate. Targets were identified and surveillance approved and conducted. It may have been directed against particular groups, or specific individuals, but it was always targeted. Now, however, everyone’s communications are subject to collection and scrutiny. We are all, to a certain extent, suspects.

The other clear difference is the fluidity of the nature of our surveillance regimes. It is not merely the state that collects vast amounts of data about our activities; the corporate sector also gathers huge amounts of information about what we do, where we go, who we talk to etc etc. This data does not reside securely in the hands of corporations, however. We know, following Snowden, that much of the data private corporations collect about our activities is also accessed by the state, either with or without the consent of said corporations. Thus we find ourselves in an environment of what has been described as “liquid surveillance” – a fluid state of surveillance where data flows, particularly between the state and corporations.

But there is a further difference between that which occurred pre-Snowden and that which we know post-2013: the permanence of it. Before the emergence of the internet, surveillance did not always proceed unimpeded. There were concerns, and efforts to limit its scope or even to roll it back. The use of wiretaps in the United States is a good example of a surveillance strategy being strongly criticised and, ultimately, rolled back.

Back in the early part of the 20th century, there was outrage about the federal use of wiretapping. This outrage wasn’t restricted merely to the strands of libertarianism on the left and the right (insofar as the right can be described as “libertarian” when it argues for the replacement of state authority with corporate authority); it cut across the entire mainstream of political opinion. Conservative newspapers were as outraged as the liberal press. The outrage was such that, in 1934, the Communications Act federally outlawed the use of wiretaps (reinforced by a Supreme Court ruling in 1939).

Although these safeguards were whittled away by successive administrations (Democrat and Republican), there was still a sense at the heart of the establishment that surveillance must be limited, at least publicly if not privately. In 1967, for example, the President’s Commission on Law Enforcement and Administration of Justice stated that “privacy of communications is essential if citizens are to think and act creatively and constructively” (the mere fact that our current government thinks privacy of communication is unnecessary suggests they rather don’t want citizens to think and act constructively…). Privacy of communications is crucial in a democratic society, and the fact that this was endorsed by the President’s Commission underlines the extent to which this was hardly a view taken by a few radicals outside the mainstream. It was, to all intents and purposes, a conservative viewpoint on the impact of such intrusions. The big difference now, I think, is that I couldn’t envisage such an acknowledgement of, or restriction upon, contemporary forms of surveillance.

The emergence of the notion that information is a commodity has changed all this. In a capitalist society, where information/data has value, where the harvesting of such data can produce profit, corporations are obliged to seek out that commodity, secure it and draw profit from it. Any effort to inhibit this will surely be resisted, both by the corporations themselves, and their allies in the political elite (particularly on the right of course). It is simply not possible to imagine a situation where the current environment is over-turned. Pandora’s box has been opened, there is no way we are going to be able to put everything back inside. Corporate surveillance is, therefore, a permanent state of affairs. It will never face the legislative restrictions that wiretapping faced in the last century. No, it is a permanent fixture because a commodity that drives profit will not ever be restricted so long as capitalist orthodoxy is dominant. Therefore, in a state in which data flows between the state and corporate bodies, it is hard to imagine that surveillance in a capitalist society can ever truly be curtailed.

We may well be able to limit the extent to which the state directly collects data on individuals, but will we ever really halt access to data that we have voluntarily surrendered to profit-making entities on the internet? Is it possible to prevent this in a capitalist society? It seems to me that it probably isn’t. Whilst a large state society results in intrusive state surveillance, surely a free market, “libertarian” society would result in wide-scale corporate surveillance (under the guise of being voluntary… “voluntary” being a notion that right-libertarians interpret rather liberally)? And as we edge towards an extreme free-market state, won’t such surveillance become permanent and inescapable? Perhaps, under capitalism, corporate surveillance is here to stay?

Surveillance, libraries and digital inclusion


Librarians have a key role to play in terms of digital inclusion and protecting intellectual privacy. [Image c/o Duca di Spinaci on Flickr – CC-BY-NC license]

Towards the end of last year, I was privileged to be invited to talk at CILIP’s Multimedia Information and Technology (MmIT) Group AGM about digital inclusion as a representative of the Radical Librarians Collective (see the presentation below – which includes a list of recommended reading!). The invitation was well timed in terms of coming up with a focus for my talk as I have spent the best part of 5 months working on a journal article for the Journal of Radical Librarianship on the digital divide (which, pending peer review, will hopefully be published in the early part of this year). Specifically, I’ve been interested in looking at digital inclusion from a slightly different angle, that of the divide in terms of state and corporate surveillance.

As followers of this blog will know, I’ve been talking about surveillance and the Snowden revelations for some time now. Concerned about the gathering of information about us, whilst the state seeks to limit the amount of information we obtain about them, I’ve mainly been focused on the impact this has in terms of our democratic processes. However, since the emergence of the Library Freedom Project (founded by the awesome Alison Macrina), I’ve been increasingly interested in the role that libraries and librarianship have to play in this area. It seems to me that the disclosures mean we have to expand the terms by which we define the digital divide. Whilst there has always been a focus on access, and on skills, there must be greater attention on what people actually do online and, furthermore, the extent to which individuals are able to act freely in terms of seeking information.

Being able to seek out information that offers alternatives to the status quo (indeed, not just “offers” but challenges) is vital in a democratic society. Without the ability to seek out and understand alternatives, it is hard to accept that our society can possibly be described as “democratic”. What is clear from Snowden’s disclosures is that the ability to seek out information and communicate with others whilst ensuring your intellectual privacy is increasingly difficult. Difficult unless you have the skills and knowledge with which to defend your intellectual privacy.

I tend to think that I am fairly skilled in terms of using the internet. I can seek out information quickly and efficiently, I can provide assistance for others, I am fairly innovative in the ways in which I use certain online services. What I lack, however, is the skills necessary to really ensure my intellectual privacy, to defend myself against state or corporate surveillance. I have some skills, I have some basic knowledge, but I don’t know how to protect myself fully. And yet I consider myself reasonably skilled. What about those that have difficulties in using the internet in a basic way? What about those that struggle to do the things that I take for granted? Aren’t they even more exposed to state and corporate surveillance? Isn’t their intellectual privacy even more under threat? Surveillance tends to affect the most disadvantaged to the greatest extent, is intellectual privacy something only for the privileged?

I don’t want to get into this even further here (wait for the longer version!), but I do think there are issues here about the nature of the digital divide and how we should view digital inclusion post-Snowden. There was a time when it was considered fanciful that librarians could even consider providing the sort of skills that the state may see as a threat to the status quo. However, the efforts of the Library Freedom Project in the United States underline that this is no longer the case. If librarians in the United States, the home of the NSA, can help people defend their intellectual privacy, why can’t we do the same in the United Kingdom? I’m not suggesting that we can collectively as a profession start setting up Tor nodes in libraries or teaching people how to use encryption technologies, but we need to have the debate about how we ensure the intellectual privacy of everyone in our society, not just the privileged few.

CILIP’s Ethical principles for library and information professionals states that we must have a:

“Commitment to the defence, and the advancement, of access to information, ideas and works of the imagination.”

If we are to defend and advance that access to information then we must, in my mind, do whatever we can to defend the intellectual privacy of everyone.

You can also download a PDF version of this presentation here [PDF – 6.29MB].

Recommended Reading

Coustick-Deal, R. (2015). Responding to “Nothing to hide, Nothing to fear”. Open Rights Group.
Gallagher, R. (2015). From Radio to Porn, British Spies Track Web Users’ Online Identities. The Intercept.
Murray, A. (2015). Finding Proportionality in Surveillance Laws. Paul Bernal’s Blog.
Richards, N. M., (2008). Intellectual Privacy. Texas Law Review, Vol. 87.
Shubber, K. (2013). A simple guide to GCHQ’s internet surveillance programme Tempora. Wired.
@thegrugq. Short guide to better information security.
@thegrugq (2015). Operational Telegram.
Whitten, A. & Tygar, J.D. (1999). Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0.

Library Freedom Project. Privacy toolkit for librarians.
Let’s Encrypt.
Electronic Frontier Foundation
Digital Citizenship and Surveillance Society
Surveillance & Society (OA journal).
The Digital Divide in the post-Snowden era (a micro-blog curating interesting links and resources – by me!)

Why are Barclays in our libraries?

In many respects, having a pop at the banks is a bit of a case of “low hanging fruit”…but in the case of Barclays and their supposed altruistic effort to boost the digital skills of the nation, sometimes that low hanging fruit is too tempting to ignore. And when that fruit is also a fruit that compromises the library service and the profession to which I belong, then that fruit needs picking and crushing. I think I may have hit a metaphorical dead end, so let’s move on – what exactly is my beef?

Concerns have been raised about the relationship between public libraries (which don’t have a profit motive because they provide a social good) and Barclays (which does have a profit motive and, well, social good…hmm) for some time now. The main cause for concern? The invasion of a public space by a corporate entity providing a service traditionally delivered by library staff (in one form or another). Of course, once a corporate entity (driven by profit) enters a public space, that public space has been corrupted. It’s no longer a public space, but an “opportunity” for corporate enterprises to exploit (because they are driven by profit and are answerable to shareholders). The decision, therefore, to allow Barclays to use a public space to “help” the community seemed a little bit out of kilter with what we would ordinarily expect in the delivery of public library services.

What do Barclays actually do?

Well, I’ll hold my hands up and say I’ve not experienced it first hand, so all I have to go on is whatever information is in the public domain. A quick glance at their website gives a fair indication of the kind of support they provide. For example, they help people set up email accounts. Great. Email is a great way to connect people at great distance, particularly useful for those who have relatives far afield and are unable to visit. So what email services do they advise? Well, this is hardly going to come as a surprise: Google, Yahoo! and Microsoft. Brilliant. All of which rely on, you guessed it, advertising (and have generally not been too great when it comes to privacy – see here, here and here – the last one is really interesting, check it out…then never ever use Outlook for personal email). And the way the advertising works is particularly interesting…

On their website, Google explain how ads are delivered to your inbox:

We are always looking for more ways to deliver you the most useful and relevant ads – for example, we may use your Google search queries on the Web, the sites you visit, Google Profile, +1’s and other Google Account information to show you more relevant ads in Gmail.

Handily, Barclays also have a load of useful resources on their website, including how to create an email account. Which handily seems to favour Google. So, get email guidance from Barclays, create a Google account, log in, head to the Barclays website for more hints and tips and VOILA!, Barclays advertising direct to your inbox. Nice one Barclays. You’ve found a way to drive up online advertising direct to customers and potential customers without having to worry about a large advertising spend, all the while appearing as if you are simply trying to help people for no other reason than to provide a social good.

Of course, much of this is speculation given I’ve not actually experienced the delivery of their support. Maybe they never introduce library users to the materials they have on their website. But it seems hard to believe that people would receive help from a Barclays Digital Eagle to create an email account and then never visit the Barclays website again, or indeed manage to have help from a Barclays Digital Eagle without ever being aware that they also offer advice online. Can we seriously believe that they do not mention Barclays at all to library users? Or mention the fact that they are Digital Eagles? Do they really just sit in the library as a member of staff, never revealing anything at all about the company that employs them? Well, it seems that some library leaders believe that this is exactly the case…

Capitalism is neutral

Having a pootle around the Libraries Taskforce website (fascinating stuff, watch how many times they mention “business” in their various materials), I was interested to see an article by Nick Stopforth on the Barclays/public library initiative which was…er…interesting. Here’s his take on the partnership:

“These initiatives will not achieve their aims – to increase digital participation, skills and confidence – to best effect in isolation. We will see more people supported more effectively and with greater reach by working out new connections, new opportunities, and being entrepreneurial and opportunistic. Library services will have to be as customer focussed and facilitative as always, but also more corporate, and with appropriate risk management in place.”

Oh dear…

“To reassure stakeholders and customers who will understandably have a view that all off this sounds to be contrary to the ethos of library services to provide free and neutral public spaces, there is no hard sell (or even soft sell) from the Digital Inclusion Stakeholder partners in libraries.”

So they never once mention the materials on the Barclays website, never direct them there, never inform them of the support materials they provide, never mention that they are Digital Eagles (which may prompt an online search on one of their recommended search engines)? Never? At all? Not once? Ok…

“So I think that we have a choice – our corporate partners could provide those free, neutral digital skills support hours in other venues, or they could provide the support in libraries.”

“Neutral digital skills”? NEUTRAL. Let’s have a look at the services they recommend:

Email: Gmail, Yahoo!, Outlook.

Search engines: Google, Yahoo!

Setting up a community group: Facebook, Google, Yahoo!, social media.

Well, that all seems neutral. Recommending a series of services that monetise your data and help ensure targeted advertising. Surely if it was truly “neutral” you would also have things like DuckDuckGo for search, riseup for email, Tor for browsing, Crystal for ad blocking, Ghostery for blocking trackers etc etc. Surely the recommendation of these services would be “neutral” (if we are to accept the premise that that is even a thing), not the promotion of services that, ultimately, lead to the delivery of advertising direct to the user? Encouraging the surrendering of personal data to a large corporation for profit is not by any stretch of the imagination “neutral”. Nor is it in the best interests of users. Encouraging them to give up their data to drive the profits of large corporations is not what we should be about. We should be about protecting their personal data, ensuring that they aren’t a cash cow but a citizen seeking information and communicating with others securely, ensuring the protection of their intellectual privacy.

The choice should not be “either they deliver those services in competition with us or we incorporate them”. The choice should be whether we seek to deliver a service that ensures people connect online and use the internet freely without surrendering their personal data, or whether we just act as a conduit for the profit motive of private enterprise (or “neutrality” as it now appears to be dubbed). The latter, for me, should never be central to the mission of the public library service. It’s saddening that we have allowed the supposed threats to our future to force us to become a service geared to the benefit of large corporations, rather than asserting our confidence as a public service providing a common good.

Lack of blogging and 2016

What are you looking at by Andreas Levers (used under a CC-BY-NC license).

I’m very conscious that I have not been blogging too frequently over here for the past few months – there have been reasons for this. For the past couple of months I’ve been working away at a journal article that has pulled together a lot of my main interests (surveillance, digital divide etc) and, in all honesty, it has taken up a huge amount of any “spare” time that I’ve had. What with the reading and the writing and the re-drafting (and the impending re-drafting that will no doubt be imminent), I’ve had little time to actually blog about some of the many developments that have interested me over the past couple of months (the continual developments re surveillance, freedom of information etc etc). I’m hoping that’s going to change in 2016 (as much as it can when you have a family requiring attention too of course!).

I’m hopeful the article itself will emerge in early 2016, all being well. It’s currently going through the peer-review process which is a new experience for me. I guess it’ll be a little while yet before it’s published (if it is accepted of course), but I’ll certainly post details here if and when it does jump the necessary hurdles.

Whilst I’ve not had huge amounts of time to devote to blogging, I have created a new micro-blog related to the article that I have submitted. The intention really is partly to pull together resources that are interesting and relevant as a way of helping to keep the article itself “live”. One of the difficulties I found with writing about a current topic was the volume of new developments I was coming across every single day. It’s bad enough trying to put something down and stop yourself from constantly tweaking and editing it; it’s even worse when every day there is a new angle to consider, a new snippet of information that affects what you’ve written. It’s with this in mind that I decided I wanted to keep developing my thinking in this area, and the micro-blog seemed a good way of doing so in a way that enables me to share information with others interested in the same themes (as well as helping me to track developments for further articles and talks on the topic).

If you are interested in issues around surveillance, you can find my micro-blog here.

Plans for 2016

All being well with the article, I am planning on pushing on and attempting to secure talks at conferences about some of the themes within it. Of course, if the article doesn’t make it, then this is all a bit redundant. But in a rare stab at optimism I’m going to go with the notion that maybe it will be interesting enough to warrant publication. I’ve rarely submitted abstracts for talks before, but I think 2016 might be the year I give it a go. I’m fortunate in that I already have a couple of talks lined up in the new year, both of which I am very much looking forward to.

Soon after we return to work after the Christmas/New Year holiday, I will be talking at CILIP’s Multimedia Information and Technology Group AGM, “What is the library’s role in digital citizenship?”, on January 7th. My talk at this event will be based primarily around the article I mentioned above, focusing on Snowden, surveillance and the impact upon democracy and the individual – specifically in terms of a privacy divide. I’m very nervous about the talk (I believe it’s “sold out”, which adds to the nerves!) but I’m very much looking forward to it. It’s this area that I particularly wish to explore at other conferences during 2016 and I plan on trying to submit abstracts to as many as I can. So, yeah, no pressure on making sure that the talk proves interesting and valuable. I kinda see this as an area that is going to continually develop, with new challenges emerging that will require continual exploration. Certainly my experience over the past few months has taught me that it’s going to be a challenge to keep up with developments.

If you are coming to the AGM on the 7th January and you are interested in the themes covered, please do come and have a chat with me (or email me afterwards if you prefer). It’s a topic that I really want to engage with people on and it’s one that I feel I have a lot to learn about. For me, online surveillance forces us to reconsider the digital divide and how it manifests itself. The difficulty, I think, is identifying what we, as a profession, can do to tackle this particular aspect of the divide. Particularly in a country that is well known for being regressive and invasive when it comes to individual privacy and liberty.

Anyway, more on this in 2016 I guess. I hope to be much more active on this blog in the coming months. I guess, for now, it’s a case of have a great Christmas and New Year and…well…watch this space!

The Snowden revelations had nothing to do with Paris


Mass surveillance is simply about control; we should resist the calls to permit mass surveillance by our intelligence agencies. (Image c/o Frederico Cintra on Flickr used under CC-BY)

Encryption. It’s the weapon of choice for terrorist communications. At least, that’s what they say. Within days of the attack, the director of the CIA, John Brennan, complained about the hand-wringing over mass surveillance and claimed that the Snowden revelations about intelligence gathering had made it harder to identify figures involved in Islamic State. This was followed by FBI Director James Comey calling for “access to encrypted data” to detect terrorist threats. With the government’s attempts to legalise mass surveillance via the investigatory powers bill, the use of encryption technologies is once again on the agenda.

And yet…

In the wake of Paris it does not appear that encryption technologies were used by the terrorists in planning and organising the events that took place last week. Reports on Wednesday suggested that rather than using complex encryption technologies, the terrorists were simply communicating using SMS. Alongside the fact that at least one of the individuals was known to the intelligence agencies, it’s not clear what difference either mass surveillance or the beloved (and nonsensical) backdoor to encryption would have made in this particular case.

This notion that encryption technologies provide a safe space for terrorists to plan their activities doesn’t hold up to much scrutiny. Of course Snowden gets the blame; he’s a “traitor” to the US specifically and the West in general (how dare a whistle-blower reveal that states are monitoring the internet activities of all their citizens), but there’s scant evidence that his revelations have made any difference at all. Much less that they have endangered anyone in any Western state.

A report recently published by Flashpoint underlines the extent to which any suggestion by politicians, or intelligence agencies, that Snowden’s revelations have forced terrorists to adapt their communications strategies is complete garbage. Dedicated to gathering intelligence about online communities in the “deep and dark web”, they recently produced a report that suggests the Snowden revelations have had a limited impact. The primary findings from the report include:

  • The underlying public encryption methods employed by online jihadists do not appear to have significantly changed since the emergence of Edward Snowden.

 

  • Well prior to Edward Snowden, online jihadists were already aware that law enforcement and intelligence agencies were attempting to monitor them. As a result, the Snowden revelations likely merely confirmed the suspicions of many of these actors, the more advanced of which were already making use of – and developing – secure communications software.

The second of these is so obvious, it seems bizarre that it needs to be stated. Of course terrorists would have been aware that intelligence agencies would be attempting to monitor them and of course they would have been taking precautions. The Snowden revelations merely confirmed what they already suspected and, ultimately, reinforced that they were correct to make use of secure communications software.

This understanding of the use of encryption software by terrorists is not new. Before the Snowden revelations, in 2008, it was noted that encryption technologies were no more frequently used by terrorists than by the general population. Furthermore, that encryption technologies were more frequently discussed by intelligence agencies rather than by terrorists, primarily because of it is more “technically challenging” and therefore less appealing to use. Those that were technically able were, of course, would clearly have been using the technology back in 2008 – long before the Snowden revelations. If researchers were writing papers on the use of encryption technologies back in 2008, then of course terrorists who were seeking to hide their activities from the state would also be aware of the existence of such technologies. It would be breath-takingly naïve to believe that they weren’t aware of such technologies pre-Snowden. And no-one could reasonable accuse intelligence agencies of being naïve. They know that this is the case, but the political urge for mass surveillance is so strong, the will to talk up the threat of encryption technologies is so tempting and the desire to prevent future whistle-blowers revealing the undemocratic activities of the state, that of course they will link any terrorist attack to the information revealed by Snowden.

What we need to remember is that this is part and parcel of an effort to make Western democratic societies accept the need for mass surveillance. The facts don’t support it, but the desire to create a state in which everyone is monitored ultimately leads to a disciplined populace more easily controlled by the state (see Foucault). Encryption isn’t the problem. Mass surveillance isn’t the answer. As Paris showed, the information was there, the clues were present…mass surveillance or back doors to encryption wouldn’t have made one iota of difference in terms of the tragedy in Paris. As politicians and ignorant political commentators talk up the need for mass surveillance, we must not forget that one simple fact.