This would work perfectly if it weren't for all the humans: Two factor authentication in late modern societies

by Paul Watters, Patrick Scolyer-Gray, A.S.M. Kayes, and Mohammad Jabed Morshed Chowdhury



Abstract
Late modern societies are now dependent on innumerable digitally networked technologies, yet there are intractable incongruities between the technologies that we develop and the corresponding technological literacies of users. This disjuncture has greatly increased the scope and scale of the risks to which globalized publics are exposed. With public cybersecurity literacies necessarily in decline as a result of the techno-social dynamism of “liquid modernity”, we now face an immense and exponentially growing matrix of cyberthreats and vulnerabilities, many of which carry potentially catastrophic consequences. Our interrogation of two-factor authentication systems, popularly implemented through the short message service (SMS), is demonstrative of vulnerabilities that continue to emerge as a result of widespread and entrenched disjunctures between the design of contemporary ICT systems and the various flawed assumptions that undergird their implementation. We examined 400 authentication messages that were automatically posted to a public forum by Web sites commonly used to receive SMS authentication tokens on behalf of users. We found that 76.5 percent of those messages included the name of the application for which the message was intended: in so doing, over three quarters of our sample risked compromising their accounts. Occasionally, we even observed usernames and passwords posted together. The socio-technical implications of our findings for ICT system design in today’s globalized late modern societies are discussed.

Contents

Introduction
Methods
Results
Discussion
Conclusion

 


 

Introduction

As a series of practices, protocols, techniques and strategies, cybersecurity is tasked with, among other things, reducing the risks for businesses and individuals in society (Biener, et al., 2015). In pursuit of this objective, cybersecurity researchers and practitioners have developed a myriad of architectural principles, design paradigms, methods, and strategies over the past few decades; these represent a tremendous investment in risk reduction that has already had a considerable impact on society (Choo, 2011; Gibson, 2015). Few would challenge the proposition that risk is something that most people will at least try to minimize, and we therefore tend to treat risks in cybersecurity contexts as problems in need of immediate attention and a solution.

However, such cybersecurity approaches are designed with immediacy in mind: they treat acute symptoms rather than the chronic problems of late modernity. Here, we aim to identify and provide some insight into how a globalized socio-cultural paradigm can have significant implications for the efficacy of cybersecurity design and, in particular, its implementation and practice. To this end, we revisit some classic sociological theories of modernity; the broad contextual insights that they offer, as abstract as they might seem, are tools that can help us to recognize what, where, how, and why the most carefully designed two-factor authentication systems implemented worldwide are actually vulnerable to a whole array of cyberthreats.

But first, some conceptual groundwork: When we speak of cybersecurity principles, we are referring to logics and associated practices such as those of “least privilege” (Schneider, 2003), where system access controls are designed to ensure that as few authorized personnel as possible have access to as little information as practically possible without compromising operational information processing needs (i.e., people and data are strategically compartmentalized on a need to know basis; see Slay and Miller, 2008; Kayes, et al., 2018). The compartmentalization paradigm is manifest in other cybersecurity control strategies, including those designed around authentication: the notion that users should be able to prove that they are authorized personnel before they are granted access to information (Tran, et al., 2005).

‘Authentication’, as a security protocol, is the process of proving one’s identity and corresponding authority to access information systems through the provision of an identifying factor, such as a username, that is unique to that individual (Burrows, et al., 1989). In most cases, authentication processes will also require that identifying factors are paired with a verification factor — a secret — a unit of information known only to the user, such as a password (Bellovin and Merritt, 1992).

Authentication systems are used to protect information systems around the world, even though they are relatively vulnerable to a range of attacks. For example, a password typed into an HTML field in a browser can be intercepted by malware; alternatively, a brute force attack against an encrypted password could recover the plaintext, facilitating unauthorized access to user accounts (Herley and Florêncio, 2008). Brute force attacks are those in which every possible character combination is attempted sequentially, using a systematic process of elimination to identify a target’s password. Password-based authentication systems are also vulnerable to dictionary attacks: programs that seek to compromise a user’s password by matching it against a list of potential candidates selected for their lexical characteristics, such as word frequency (Wang, et al., 2016).
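To make the dictionary attack described above concrete, the following minimal sketch (in Python) hashes each word in a small candidate list and compares it against a stored digest; the wordlist, the use of an unsalted SHA-256 hash, and the target value are illustrative assumptions rather than details of any real system.

import hashlib

# Illustrative wordlist and target; an unsalted SHA-256 digest stands in for a stored password hash.
WORDLIST = ["123456", "password", "qwerty", "letmein"]
TARGET_HASH = hashlib.sha256(b"letmein").hexdigest()

def dictionary_attack(target_hash, candidates):
    # Hash each candidate and compare; return the matching plaintext, or None if no candidate matches.
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(dictionary_attack(TARGET_HASH, WORDLIST))  # prints 'letmein' after four guesses

The point of the sketch is the asymmetry it exposes: a password drawn from a predictable vocabulary is recovered in a handful of guesses, whereas a genuinely random password forces the attacker back onto exhaustive brute force over the full keyspace.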

To overcome the limitations of this type of authentication, new approaches have been proposed over the years (Aloul, et al., 2009). A common example is the use of a second factor in authentication: this means that in addition to presenting a factor such as “something you know”, you are also required to present an authentication factor such as “something you have”, or “something you are” (Ometov, et al., 2018). The “something you have” should be something which generates a code that is unique to you, and is tied to a particular device to which only you have access: an example would be an RSA token that generates a one-time password every 30 seconds (e.g., the VPN security key fobs that have been popular with law firms since the 1990s and, more recently, Multi-Factor Security smartphone apps such as ‘Okta Verify’) (Dasgupta, et al., 2017). The “something you are” refers to biometric characteristics, such as fingerprints, retinas, DNA, and so on (Bachurina and Watters, 2006). Note that the technical designs of these authentication systems are themselves imperfect (Young, 2012), but each layer of authentication that we add further reduces risk.
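The “something you have” factor described above typically works by deriving a short-lived code from a secret shared between the device and the server. The sketch below is a minimal rendering of the time-based one-time password (TOTP) scheme standardized in RFC 6238, which underlies many authenticator apps; it is offered as an illustration only (hardware tokens such as RSA SecurID use their own algorithms), and the shared secret and parameters shown are assumptions.

import hashlib
import hmac
import struct
import time

def totp(secret, step=30, digits=6):
    # Derive the current 30-second time window and encode it as an 8-byte big-endian counter.
    counter = int(time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()   # HMAC-SHA1, as in RFC 6238's reference mode
    offset = digest[-1] & 0x0F                              # dynamic truncation per the RFC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# The device and the server both hold the secret, so both derive the same code for the
# current window; possession of the enrolled device stands in for "something you have".
print(totp(b"secret-provisioned-at-enrolment"))

Because the code changes every 30 seconds and never travels over a channel an attacker can passively log, this design avoids several of the weaknesses of SMS delivery discussed below.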

However practical or otherwise encouraging these so-called second factor authentication processes might seem, the integrity of these security systems fundamentally relies on the assumption that the expectations placed on end users will be met by users who are frequently undereducated, uninterested, stressed, and/or underpaid (Prabhakar, et al., 2003). For example, the vulnerabilities presented by biometric security technologies are so numerous and extensive that they have become tropes of popular culture (e.g., the severed hand of a protagonist’s victim that is used to bypass locks is a common plot device). In addition to film and television, there are a number of humorous videos in circulation that show how the assumptions behind biometric technology can often reveal weaknesses in the design of such systems (Yager and Dunstone, 2010).

The severed hand trick is far-fetched, but the vulnerabilities are very real. A good example is the use of fingerprint technology as a biometric factor: it is possible to use gels and other malleable materials to collect biometric imprints by applying them to any device on which a fingerprint has been left. This creates a tool that can fool biometric scanners that have no means of discriminating between the organic and the artificial. The diversity and global adoption of biometric authentication technologies, along with their respective vulnerabilities, provides us with a demonstrative example of how market forces (i.e., supply and demand) (Camp and Wolfram, 2004), as well as international network, system, and individual dynamics inform the choices that are made when attacks on authentication systems are planned and implemented (for a model, see Watters, et al., 2012).

In this respect, biometrics are also demonstrative of how the design of authentication technologies is easily undermined by the manner and contexts in which they are implemented, and it is at this juncture we begin to see how the scholarly and vocational predilections negotiated by cybersecurity specialists run head-on into the conditions of late modern societies. When we speak of “late modernity”, we are referring to a theoretical framework developed by Giddens (1991) in the 1990s as an alternative to ‘postmodernity’ (a term which suggests that society has reached a new epoch that follows ‘modernity’); ‘late modernity’ instead refers to the “high-point” of modernity. There are unmistakable symmetries between the variables identified by Watters, et al. (2012) and the key characteristics of late modern societies described by Giddens (1991): ‘market forces’ drive, but are also informed by, exponential increases in the rationalization of political-economic structures throughout society, while the prominence of international, network, and system dynamics reflects late modernity’s pluralism, globalization, and global interdependencies. Turning to matters of individual agency, even more relationships are found in Giddens’s commentary on the primacy of individualism and endemic alienation in late modern societies.

Late modern societies are further characterized by a constellation of variables such as social, cultural, and political instabilities, as well as various forms of insecurity and risk (perceived and actual) (Beck and Levy, 2013). Indeed, at least since Ulrich Beck’s (1992) seminal work, Risk society, modernity scholars have included risk as one of the key and overarching properties of “late modern” (Giddens, 1991) societies. According to Beck (1992), risks are an unavoidable side-effect of living in post-industrial societies in which the rapid development and structural transformations associated with the rise and implementation of new technologies are a primary contributor to widespread societal confusion, fragmentation, and fear (Beck, 2009).

In this paper, we consider how aspects of late modernity intersect with, and inform, the assumptions behind the design and use of two factor authentication systems. We focus on the “something you have” family of authentication factors: specifically, using SMS messages to prove who you are, based on the assumption that a user will have a unique phone number, associated with a physical phone in their possession (Mulliner, et al., 2013). SMS messages are commonly used for authentication because they are cheap to send: An RSA SecurID Authenticator might cost US$40–50, but such tools are often overlooked because most users have access to mobile phones that can be used to send text messages at negligible costs (Anderson and Moore, 2006). Yet in accepting a lower standard of proof vis-à-vis “something you have”, organizations may have introduced unnecessary risks into the authentication process (Federal Financial Institutions Examination Council, 2005).
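To make explicit the assumption this paper interrogates, the minimal sketch below models a generic server-side SMS one-time-code flow; the in-memory store, code length, expiry window, and the placeholder send_sms() gateway call are all illustrative assumptions rather than any provider's implementation.

import secrets
import time

PENDING = {}  # phone number -> (code, expiry timestamp); in-memory store for illustration only

def send_sms(phone_number, text):
    # Placeholder for a real SMS gateway call.
    print(f"[SMS to {phone_number}] {text}")

def issue_code(phone_number, ttl=300):
    code = f"{secrets.randbelow(10**6):06d}"            # random six-digit code
    PENDING[phone_number] = (code, time.time() + ttl)   # valid for five minutes
    send_sms(phone_number, f"Your verification code is {code}")

def verify_code(phone_number, submitted):
    code, expiry = PENDING.get(phone_number, (None, 0.0))
    return submitted == code and time.time() < expiry

# The check proves only that the submitter can read SMS sent to the registered number;
# if that number terminates at a public third-party gateway, anyone can pass it.
issue_code("+61400000000")

The design choice is visible in the final comment: the protocol verifies control of a phone number, not the identity of whoever reads the message, which is precisely the gap exploited when users route their messages through public gateways.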

This paper aims to provide some preliminary explanations for why users may be so willing to subvert critical controls in security processes, intentionally or otherwise. Indeed, the vast majority of ICT users have no logical reason to undermine the integrity of the security systems upon which they rely, and they often do not have the technical skills to do so. Instead, users are often assisted by a combination of mercenary and well-intentioned entrepreneurs who have found novel ways to monetize the demand generated by a vulnerable and technologically uninformed public.

In this study, we examine how third parties have been able to set up systems to receive SMS messages on behalf of users who do not wish to use their personal mobile phone numbers to receive authentication messages for the digital services they utilize. Obviously, self-disclosures of any combination of personal and/or identifying information to unknown parties will leave users vulnerable to any number of possible cyberthreats, from coercion to identity theft. Of course, from a user’s point of view, it might make sense to absorb additional risks to achieve certain objectives. For example, the risk of attracting social sanctions and/or embarrassment (Davenport, 2002) when subscribing to adult entertainment services may be too great to justify using a personal phone number to authenticate access to said services.

However, the logic that underpins and justifies patterns of behavior that involve, for example, transferring Internet banking account access-authorities to anonymous third parties, is much harder to understand, and raises questions about why users would not be deterred by the implications of disclosing things that should be kept secret. The confluence of common desires for convenience, highly motivated and entrepreneurial cyber-scammers, and failures to properly consider the risks inherent to our reliance on brittle digital security systems that have the potential to undermine critical systems and processes throughout our globalized society is demonstrative of a perfect storm borne out of the conditions of late modernity.

 

++++++++++

Methods

Using the approach of algorithmic ethnography (Watters, 2015), the most popular site for third-party SMS gateways was identified using Google Metrics. Algorithmic ethnography involves modelling user activity when engaging with algorithms; in this case, we modelled the information-seeking behavior involved in using third-party SMS sites to receive free SMS messages via an SMS gateway available in eight different countries: Canada, Denmark, Finland, Great Britain, Israel, the Netherlands, Poland, and Sweden. The users were fully aware of what they were doing, and the activities we observed were not structured by any obvious intermediary or confounding variable such as coercion, or any other explanatory mechanism such as technological necessity.

Using a cross-sectional approach, 50 messages per gateway were downloaded in their original format during the same time intervals on multiple occasions, yielding a sample of 400 messages. Each SMS message received via each gateway was then analyzed to determine whether the sender could be identified from the text of the message. Each downloaded record consisted of an HTML <div> block in which the short message was contained. Messages typically consisted of an authentication code, as well as descriptive text that could, and in some cases did, identify the service and/or the organization with which the message was associated. The sending telephone number was also included with every message posted to the Web site.
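The classification step described above can be illustrated with a short, hypothetical sketch: the <div> class name, the regular expressions, and the keyword list of service names are assumptions for illustration (the keywords are drawn from Table 1), not the scripts used in the study.

import re

# Hypothetical keyword list of service names; a fuller analysis would use a larger, curated list.
KNOWN_SERVICES = ["Tinder", "Netzme", "Proton", "Sweatcoin", "Paypal",
                  "DPD", "Lime", "Telegram", "DENT", "Line"]

def extract_messages(html):
    # Pull the text out of each message <div>; the class name here is an assumed example.
    blocks = re.findall(r'<div class="message">(.*?)</div>', html, re.S)
    return [re.sub(r"<[^>]+>", "", block).strip() for block in blocks]

def identifies_sender(message):
    # A message is counted as identifiable if it names a known service provider.
    return any(service.lower() in message.lower() for service in KNOWN_SERVICES)

page = ('<div class="message">Your Tinder code is 482913</div>'
        '<div class="message">Use code 771204 to sign in</div>')
messages = extract_messages(page)
flagged = [m for m in messages if identifies_sender(m)]
print(f"{len(flagged)} of {len(messages)} messages identify the sender")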

 

++++++++++

Results

Of the sampled cases, 76.5 percent featured an identifiable sender [see Figure 1]. There was a considerable range in the identifiability of service providers between countries, from 92 percent in the U.K. to only 66 percent in Israel. The ease with which an attacker could potentially reconstruct a user’s identity increases proportionately with the availability and salience of identifying information (e.g., a registered mobile phone number) that need only be combined with one additional authentication factor (e.g., a password). The risks raised by precarious user-identity security were compounded further by our discovery that duplicate phone numbers were involved in 5.43 percent of sampled cases. By our estimates, the third-party authentication Web sites we observed permit potentially thousands of users to register and gain access to authentication credentials by using a shared identifying factor. In so doing, users routinely and completely circumvent the most fundamental principles of user access control.

Also of note were variations in the identifiability of service providers, which were patterned by country. This may indicate that behavioral tendencies (and corresponding vulnerabilities) are attributable to the socio-cultural particulars of the social groups and cultural structures that comprise and inform practice in those countries.

Table 1 shows the top 10 most frequently encountered platforms (i.e., Web service providers, dating sites, banks, etc.) that were identified and compiled sample-wide based on categories of corresponding industries and services.

 

Table 1: Top 10 sites where the sender could be identified across all countries.
Site          Total     Category of service
Tinder        28        Dating site
Netzme        20        Financial services
Proton        13        Anonymisation service
Sweatcoin     12        Cryptocurrency
Paypal        10        Financial services
DPD           10        Transport
Lime          9         Transport
Telegram      8         Anonymisation service
DENT          8         Cryptocurrency
Line          8         Telecommunications

 

In one way or another, the top 10 service providers for which users sought authentication involved particularly sensitive information, and were used for anonymization services, financial services, transport (including the delivery of packages), telecommunications, and cryptocurrencies. Despite the obvious value of the activities and the goods with which these services were linked, in many cases the account identifier did not even require a separate e-mail address for authentication purposes. This puts many user accounts at risk of compromise and underscores the possibilities for exploitation afforded by the identifying factors and passwords disclosed in the SMS messages.

 

Figure 1: Frequency distribution of the sampled service providers identified within each country (Great Britain, Sweden, Netherlands, Finland, Denmark, Poland, Canada, Israel).

 

In some cases, all that was required to retrieve a username and password to access a system was a mobile phone number and an ‘I-forgot-my-password’ link. A good example of this is shown in Figure 2 as it applied to a major social media company: the password recovery request was serviced by providing both the username and the password in plain text to the user (as well as anyone and everyone who might have been on the Web site at the time of the request). In this case, the social media company seems to have assumed that the account holder is always the sole owner and in physical possession of the mobile phone associated with the account’s phone number, which is not always true.

 

Figure 2: Example of the well-known social media company, WeChat, sending a paired username and password through SMS in response to an account recovery request. If intercepted by a third party, this information is the digital equivalent of leaving your car unlocked with the keys behind the sun visor.

 

In fact, it is not only the site operators who now have knowledge of the service providers, usernames, and passwords that we observed: this Web site is a goldmine for cybercriminals, who need only monitor and extract credentials at their leisure with practically zero effort. Here, anyone, curious academics included, will find disclosures of sensitive personal information that include customer names, insurance policy numbers, vehicle registration numbers, and individualized password reset links (including user IDs and their accompanying randomly generated access keys). In short, we have revealed here a pattern of behavior that is globalized, fundamentally subversive of two-factor authentication, and (ostensibly) inexplicably reckless: why are so many people willing to expose themselves to such risks?

 

++++++++++

Discussion

The data we have examined in this study are obviously not going to be able to answer this question, but the overarching patterns of behavior that we have observed here are still worthy of analysis. First, we must consider what possible reasons a user might have for using a third-party Web site to receive authentication factors. For example, certain organizations have found novel ways to incentivize subscribing to their digital mobile services, such as through enmeshing typical mobile phone services with enticing ‘options’ that promise profit shares in exchange for rights to subscribers’ devices, which are subsequently used as authentication token generators. Messaging rates are also typically set relatively low, especially for incoming messages, which are free of charge.

This helps to explain why at least some users would choose to absorb the vulnerabilities of using a third-party authentication provider. Incoming SMS messages are free of charge as standard in Australia, but this is not the case in the United States, so for those who routinely use authentication services, seeking out cheaper alternatives might be a frugal decision. However, although our data cannot speak directly to individual motivations, we can reasonably suggest that, in general, the motivation for most users could relate to convenience, promises of future financial gain, as well as various other motivations that range from laziness to suspicion.

Assuming that the sampled users were the owners of the devices with which the phone numbers we observed were associated, then we must assume that these users chose to use the authentication service we studied. Moreover, we must assume that at least some of these users were fully aware of their subversive behavior, which leaves us with two broad groups of people: 1) users who chose to use the service but did not recognize or understand the consequences and/or risks that doing so entailed; and 2) users who knew exactly what they were doing as well as the consequences, but still elected to engage in the same behavior. We have no means of knowing how the users are distributed into these categories, but we can explain the behavior of both groups using insight from theories of modernity.

People in the former category are likely the victims of the macro-social processes of rationalization intrinsic to late modernity which have necessitated the implementation of these authentication technologies in the first place. Globalized political-economic mechanisms have resulted in constantly evolving work environments, over-specialization and redundancy, the casualization of labor and high turnover rates, as well as associated declines in skilled employees, not to mention the various implications of enormous populations of migrant workers. Zygmunt Bauman (2013) uses an analogy to the properties of fluids in his conceptualization of “liquid modernity” to explain this commonly encountered problem of technological developments that emerge faster than people are able to adapt: people living in late modern societies do not have the time or the opportunity to cultivate a level of computer literacy appropriate to the environments in which they operate.

Ironically, it is also possible that people from this same group believe that they are mitigating risks by using third-party Web sites for authentication. Instead of having to worry about the SMS message logs that they cannot control, these users have instead taken what they may perceive as a less conspicuous approach to authentication. But why would users feel the need to be inconspicuous? This invites reconsideration of the most frequent categories of services identified in our sample. Excluding the ostensibly mundane services such as Paypal and ‘Lime’, the majority of the services that sampled users accessed had something to do with sex, discreet financial transactions (i.e., cryptocurrencies), and anonymity.

This apparently pervasive, indeed global, demand for anonymity, sex, and cryptocurrencies is a thematic combination worthy of its own investigation, but for our purposes we will now turn to the second group — those who knowingly subverted two-factor authentication systems. Intuitively, we might be inclined to think that the motivations for these behavioral predilections are intrinsic to the key themes of the services themselves, but consider what exactly is at stake here. There are only so many reasons that can justify using a third-party authentication service to access a cryptocurrency wallet, even when users might prefer to keep any trace of their activities compartmentalized from their mobile device. Similarly, casual sex (i.e., Tinder) is sufficiently normalised in enough societies around the world that the explanatory power of discretion begins to look doubtful. Indeed, if discretion is so desirable, then why not purchase a disposable ‘burner’ phone and avoid the risks of a third party completely?

The logic grows weaker still if we stop focusing on sex, anonymity, and cash, and instead view the sample as a whole, which we must remember included users who accessed everything from their e-mail and social media to their vehicle-rental service accounts. Why would someone voluntarily expose themselves to risks as severe as identity theft just to access a Lime account? Setting aside the financial incentive of free SMSs, these users must either think that they are somehow not at risk in the first place, or the risks to which they know they are exposed are not a deterrent. The former explanation seems unlikely, but both are entirely possible, and both sit uneasily with the group’s awareness of the extent and severity of the risks to which they are exposed. Why would the group most likely to recognize risks be so willing to embrace vulnerability?

To explain this ostensibly contradictory pattern of behavior, we can look to Simmel’s (1971) writings on modernity, including what Simmel called the “metropolis”. After all, what is the contemporary online environment of the ‘digital age’ if not a symbolic representation (and indeed an extension of) precisely the kind of sprawling cityscape that Simmel’s seminal work critiqued? In short, Simmel (1971) argued that the immense complexity of the metropolis (which for the sake of simplicity we will treat as synonymous with late modern societies) forced residents to protect themselves from a kind of emotional exhaustion by “deadening their senses” to the chaotic flows of people, information, communication, and exchanges of material goods to which they were constantly exposed. Simmel’s work suggests that those who live in overwhelmingly complex societies will adapt to their circumstances by becoming more or less emotionless — “colourless” — or even cynical.

Users who have been eroded by late-modern societies to the point of becoming “blasé” will eventually see no value, let alone meaning, in the conduct of everyday life (Simmel, 1971). Perhaps this sounds extreme, but consider the everyday reality for the hundreds of millions of people who live their lives pay-cheque to pay-cheque, endure the endless banalities and cold calculating nature of rationalization processes (e.g., increases in working hours and productivity quotas as well as the cost of living), who live in overcrowded and ethnically diverse apartment complexes while consuming news media that promotes xenophobia and perceptions of global and intractable ideological, political, and/or ethnic conflict. In light of the populist politics of late, there may be more “blasé” individuals living in late modern societies than we realized.

With this in mind and returning to the questions raised by the behavior of the users that we observed, Simmel’s work is instructive in that it invites consideration of the socio-cultural contexts in which the sampled users live. In essence, a core design flaw in two-factor authentication systems is that even the users who are aware of the risks intrinsic to subverting said systems do not care if they do so. In terms of priorities, against alienation, fear, and a society that has become so complicated that simply treading water in today’s conditions of ‘liquid modernity’ is physically, emotionally, and intellectually so exhausting for so many people that an entirely new underclass has emerged (see Bauman, 2013), at what point does cybersecurity become a matter of import?

Clearly, this is a problem: without a clear imperative and/or a socially enforced directive to avoid risk, failures of authentication systems, and of any number of other security protocols that rely on the cooperation of users, are likely to increase in frequency and severity. The scope of this paper is limited, but it is worth raising one last insight from theories of modernity. Alienation does not necessarily have a direct relationship with the concept of anomie, but it provides a useful explanatory point of origin. In its original Durkheimian conceptualization, anomie referred to a lack of social bonds, particularly the absence of social rules providing a shared set of norms and expectations that bound society together.

Essentially every modernity theorist emphasizes that late modern societies are characterized by extensive individualization and alienation. This will therefore necessarily result in the cultivation of anomie in late modern societies, at least to some extent. With this in mind, perhaps the most valuable piece of insight provided by modernity theorists stems from the observation that late modern societies fractured by anomie are unlikely to be able to forge the shared assumptions prerequisite to the successful implementation of all security systems, authentication included. From here, the question becomes a matter of choice: do we design our security systems to accommodate anomie, or should our security systems serve as a source of techno-social materials designed to combat anomie?

To be clear, this analysis is intended to highlight a series of increasingly visible problems in contemporary society, and our aim is to spark a conversation that will take cybersecurity and ICT design paradigms in a new direction. This is not to be confused with a claim to know with any degree of certainty what the users of third-party authentication Web sites individually think. Instead, we have made arguments couched in examples based on empirical evidence of behavioral patterns, and proposed some explanatory arguments.

 

++++++++++

Conclusion

Late modern societies cast new light on age-old problems. Technological evolution now proceeds at such speed that, although the societies into which these developments are introduced might exist within the rubrics of the ‘digital age’, this does not automatically impart the skills or ‘literacies’ required to use these technologies. Recognizing the importance of managing the presentation of self when online, in the wake of the collision of human identity constructs with the affordances of social media, for example, is still a work in progress for millions of people world-wide, yet social media accounts are not treated as if they were risks. On the contrary, said platforms are embraced and routinely placed at the core of our “digital footprints” (Golder and Macy, 2014), which we then populate with geotagged photographs, a ready-made map of our social networks, and just enough commentary for strangers to reconstruct our daily routines.

This study has illustrated some of the inherent problems with widely accepted approaches to identifying and authenticating users: approaches that rely on faulty assumptions about the behavioral predilections of users, and on underlying protocols and systems that are designed to protect customers without taking the developments of late modernity into account. This study highlights the need for fundamental changes to digital network system design: authentication is just one example of a process which is strong in the abstract, but can be weak in the execution. Wherever humans are able to subvert critical controls, they typically will, and we have already seen this principle demonstrated repeatedly throughout the rise and subsequent adoption of cloud-based ICTs.

The seemingly constant emergence of new information rationalization technologies is just one of many contemporary human compulsions symptomatic of the structural transformations currently underway throughout the overwhelmingly complex architecture of globalized late modern societies (Giddens, 1991). The “liquidity” (Bauman, 2013) of globalized labor markets and the rise of advanced service economies has necessitated the development of technologies that can facilitate seamless access to ubiquitous ICT services and vast quantities of data for applications that are mostly deployed in the cloud (Kayes, et al., 2019). However, the question remains, are late modern societies suitably prepared to handle these developments?

This is a very limited study, and it may be the case that other Web sites that operate these kinds of SMS services are rarely utilized and/or might have better security controls and/or content filtering systems. That said, even though we examined only a relatively small sample, our results indicate a strong possibility that cybercriminals have already identified a lucrative niche market that can easily be exploited by harvesting credentials from these ostensibly free and helpful services. From our analysis, we concluded that although some users are clearly fully aware that they are breaking an authentication security model, many more are not, and neither of these groups of users is safe from cyberattacks.

Given the relative infrequency of screening for duplicate phone numbers, system designers could build in further checks and balances — for example, if a phone number appears to be shared between two or more customers, then a query could be raised. If the same phone number is shared by hundreds of different users, then there is clearly a problem that any decent anomaly detection tool should be able to identify, if such assurance measures were put in place. Furthermore, financial institutions in particular, which already have large intelligence-gathering capabilities, could build databases of known malicious SMS numbers and prevent new customers from registering with those phone numbers.
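A minimal sketch of the duplicate-number check suggested above might look as follows; the registration records, field layout, and flagging threshold are illustrative assumptions rather than any institution's actual controls.

from collections import Counter

def flag_shared_numbers(registrations, threshold=2):
    # registrations: iterable of (customer_id, phone_number) pairs.
    # Count how many accounts share each number and return those over the threshold.
    counts = Counter(phone for _, phone in registrations)
    return {phone: n for phone, n in counts.items() if n >= threshold}

# Illustrative records; the second registration reuses a number and should raise a query.
registrations = [
    ("cust-001", "+44700900001"),
    ("cust-002", "+44700900001"),
    ("cust-003", "+44700900002"),
]
print(flag_shared_numbers(registrations))  # {'+44700900001': 2}

In practice the same counting logic could feed an anomaly detection pipeline or a blocklist of known gateway numbers, so that a number shared by hundreds of accounts is flagged long before it is used to recover credentials.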

In broader terms, the study highlights our dependence (in late modern societies) on systems that are built around a set of security assumptions that may never be met in practice. As the availability and complexity of information communication systems increase and proliferate throughout late modern societies, we can expect a proportionate increase in the sophistication and damage incurred by cyber-vulnerabilities. Information systems in late modern societies are ubiquitous: today’s “network society” (Castells, 1997) is buttressed by countless digitally networked technologies that have become interdependent pieces of critical techno-social infrastructure world-wide. The techno-cultural trajectory towards the convergence and consolidation of socio-cultural imperatives and technologies (see Jenkins, 2006) further compounds the issue by linking together vulnerable systems, thus creating a near limitless supply of access pathways for those who would exploit said vulnerabilities.

Of course, all those who participate in this network society of ours have implicitly consented in perpetuity to personally absorb the risks that accompany the vulnerabilities we create (intentionally or otherwise) in exchange for the privileges of living in late modern society (Beck, 1992). In other words, we have accepted the terms and conditions — the EULA — of Ulrich Beck’s “risk society”.

 

About the authors

Paul A. Watters is Professor of Cybersecurity at La Trobe University, and leader of the Cybersecurity and Networking Research Group. Professor Watters currently holds an Australian Research Council Discovery grant and three grants from the Oceania Cybersecurity Centre (OCSC). His research interests lie in the human factors which underpin (or undermine) cybersecurity.
E-mail: p [dot] watters [at] latrobe [dot] edu [dot] au

Dr. Patrick Scolyer-Gray is a sociologist and Associate Lecturer in Cybersecurity at La Trobe University, Australia. His research interests include cyberwarfare, information warfare, participatory and deliberative media, epidemiological studies of information flows, media manipulation and ‘post-truth’ politics.
E-mail: p [dot] scolyer-gray [at] latrobe [dot] edu [dot] au

Dr. A.S.M. Kayes is a Lecturer in Cybersecurity at La Trobe University, Australia. His research interests include information modelling, authorization, context-aware access control, fuzzy computation, security and privacy protection.
E-mail: a [dot] kayes [at] latrobe [dot] edu [dot] au

Dr. Mohammad Jabed Morshed Chowdhury is Associate Lecturer in Cybersecurity at La Trobe University, Australia. His research interest includes user-centric data sharing, access control modeling, privacy preservation techniques, and blockchain technologies and applications.
E-mail: m [dot] chowdhury [at] latrobe [dot] edu [dot] au

 

References

F. Aloul, S. Zahidi, and W. El-Hajj, 2009. “Two factor authentication using mobile phones,” 2009 IEEE/ACS International Conference on Computer Systems and Applications, pp. 641–644.
doi: https://doi.org/10.1109/AICCSA.2009.5069395, accessed 21 June 2019.

R. Anderson and T. Moore, 2006. “The economics of information security,” Science, volume 314, number 5799 (27 October), pp. 610–613.
doi: https://doi.org/10.1126/science.1130992, accessed 21 June 2019.

O. Bachurina and P.A. Watters, 2006. “Multi fingerprint biometric verification using XML Web Services: A funds transfer case study,” Proceedings of Sixth International Conference on Recent Advances in Soft Computing, pp. 441–448.

Z. Bauman, 2013. Liquid modernity. Cambridge: Polity Press.

U. Beck, 2009. World at risk. Translated by C. Cronin. Cambridge: Polity Press.

U. Beck, 1992. Risk society: Towards a new modernity. Translated by M. Ritter. London: Sage.

U. Beck and D. Levy, 2013. “Cosmopolitanized nations: Re-imagining collectivity in world risk society,” Theory, Culture & Society, volume 30, number 2, pp. 3–31.
doi: https://doi.org/10.1177/0263276412457223, accessed 21 June 2019.

S.M. Bellovin and M. Merritt, 1992. “Encrypted key exchange: Password-based protocols secure against dictionary attacks,” Proceedings 1992 IEEE Computer Society Symposium on Research in Security and Privacy, pp. 72–84.
doi: https://doi.org/10.1109/RISP.1992.213269, accessed 21 June 2019.

C. Biener, M. Eling, and J.H. Wirfs, 2015. “Insurability of cyber risk: An empirical analysis,” Geneva Papers on Risk and Insurance — Issues and Practice, volume 40, number 1, pp. 131–158.
doi: https://doi.org/10.1057/gpp.2014.19, accessed 21 June 2019.

M. Burrows, M. Abadi, and R.M. Needham, 1989. “A logic of authentication,” Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences, volume 426, number 1871 (8 December), pp. 233–271.
doi: https://doi.org/10.1098/rspa.1989.0125, accessed 21 June 2019.

L.J. Camp and C. Wolfram, 2004. “Pricing security,” In: L.J. Camp and C. Wolfram (editors). Economics of information security. Boston, Mass.: Springer, pp. 17–34.
doi: https://doi.org/10.1007/1-4020-8090-5_2, accessed 21 June 2019.

M. Castells, 1997. The information age: Economy, society and culture. Volume 3: End of millennium. Oxford: Blackwell.

K.-K.R. Choo, 2011. “The cyber threat landscape: Challenges and future research direction,” Computers & Security, volume 30, number 8, pp. 719–731.
doi: https://doi.org/10.1016/j.cose.2011.08.004, accessed 21 June 2019.

D. Dasgupta, R. Arunava, and A. Nag, 2017. “Multi-factor authentication,” In: D. Dasgupta, R. Arunava, and A. Nag. Advances in user authentication. Cham, Switzerland: Springer, pp. 185–233.
doi: https://doi.org/10.1007/978-3-319-58808-7, accessed 21 June 2019.

D. Davenport, 2002. “Anonymity on the Internet: Why the price may be too high,” Communications of the ACM, volume 45, number 4, pp. 33–35.
doi: https://doi.org/10.1145/505248.505267, accessed 21 June 2019.

Federal Financial Institutions Examination Council, 2005. “Authentication in an Internet banking environment,” at https://www.ffiec.gov/pdf/authentication_guidance.pdf, accessed 28 June 2006.

D. Gibson, 2015. Managing risk in information systems. Sudbury: Jones & Bartlett Learning.

A. Giddens, 1991. Modernity and self-identity: Self and society in the late modern age. Stanford, Calif.: Stanford University Press.

S.A. Golder and M.W. Macy, 2014. “Digital footprints: Opportunities and challenges for online social research,” Annual Review of Sociology, volume 40, pp. 129–152.
doi: https://doi.org/10.1146/annurev-soc-071913-043145, accessed 21 June 2019.

C. Herley and D. Florêncio, 2008. “Protecting financial institutions from brute-force attacks,” In: S. Jajodia, P. Samarati, and S. Cimato (editors). Proceedings of the IFIP TC 11 23rd International Information Security Conference. Boston, Mass.: Springer, pp. 681–685.

H. Jenkins, 2006. Convergence culture: Where old and new media collide. New York: New York University Press.

A.S.M. Kayes, W. Rahayu, T. Dillon, E. Chang, and J. Han, 2019. “Context-aware access control with imprecise context characterization for cloud-based data resources,” Future Generation Computer Systems, volume 93, pp. 237–255.
doi: https://doi.org/10.1016/j.future.2018.10.036, accessed 21 June 2019.

A.S.M. Kayes, J. Han, W. Rahayu, T. Dillon, M.S. Islam, and A. Colman, 2018. “A policy model and framework for context-aware access control to information resources,” Computer Journal, volume 62, number 5, pp. 670–705.
doi: https://doi.org/10.1093/comjnl/bxy065, accessed 21 June 2019.

C. Mulliner, R. Borgaonkar, P. Stewin, and J.-P. Seifert, 2013. “SMS-based one-time passwords: Attacks and defense,” In: K. Rieck, P. Stewin, and J.-P. Seifert (editors). Detection of intrusions and malware, and vulnerability assessment. Lecture Notes in Computer Science, volume 7967. Berlin: Springer, pp. 150–159.
doi: https://doi.org/10.1007/978-3-642-39235-1_9, accessed 21 June 2019.

A. Ometov, S. Bezzateev, N. Mäkitalo, S. Andreev, T. Mikkonen, and Y. Koucheryavy, 2018. “Multi-factor authentication: A survey,” Cryptography, volume 2, number 1.
doi: https://doi.org/10.3390/cryptography2010001, accessed 21 June 2019.

S. Prabhakar, S. Pankanti, and A.K. Jain, 2003. “Biometric recognition: Security and privacy concerns,” IEEE Security & Privacy, volume 1, number 2, pp. 33–42.
doi: https://doi.org/10.1109/MSECP.2003.1193209, accessed 21 June 2019.

F.B. Schneider, 2003. “Least privilege and more [computer security]” IEEE Security & Privacy, volume 1, number 5, pp. 55–59.
doi: https://doi.org/10.1109/MSECP.2003.1236236, accessed 21 June 2019.

G. Simmel, 1971. “The metropolis and mental life,” In: D.N. Levine (editor). On individuality and social forms; Selected writings. Chicago: University of Chicago Press, pp. 324–336.

J. Slay and M. Miller, 2008. “Lessons learned from the Maroochy Water Breach,” In: E. Goetz and S. Shenoi (editors). Critical infrastructure protection. Boston, Mass.: Springer, pp. 73–82.
doi: https://doi.org/10.1007/978-0-387-75462-8_6, accessed 21 June 2019.

H. Tran, M. Hitchens, V. Varadharajan, and P. Watters, 2005. “A trust based access control framework for P2P file-sharing systems,” Proceedings of the 38th Annual Hawaii International Conference on System Sciences.
doi: https://doi.org/10.1109/HICSS.2005.58, accessed 21 June 2019.

D. Wang, Z. Zhang, P. Wang, J. Yan, and X. Huang, 2016. “Targeted online password guessing: An underestimated threat,” CCS ’16: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1,242–1,254.
doi: https://doi.org/10.1145/2976749.2978339, accessed 21 June 2019.

P. Watters, 2015. “Censorship is possible but difficult: A study in algorithmic ethnography,” First Monday, volume 20, number 1, at https://firstmonday.org/article/view/5612/4202, accessed 21 June 2019.
doi: http://dx.doi.org/10.5210/fm.v20i1.5612, accessed 21 June 2019.

P.A. Watters, S. McCombie, R. Layton, and J. Pieprzyk, 2012. “Characterising and predicting cyber attacks using the Cyber Attacker Model Profile (CAMP),” Journal of Money Laundering Control, volume 15, number 4, pp. 430–441.
doi: https://doi.org/10.1108/13685201211266015, accessed 21 June 2019.

N. Yager and T. Dunstone, 2010. “The biometric menagerie,” IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 32, number 2, pp. 220–230.
doi: https://doi.org/10.1109/TPAMI.2008.291, accessed 21 June 2019.

S. Young, 2012. “Contemplating corporate disclosure obligations arising from cybersecurity breaches,” Journal of Corporation Law, volume 38, number 3, pp. 659–679.

 


Editorial history

Received 6 May 2019; revised 18 June 2019; accepted 18 June 2019.


Copyright © 2019, Paul Watters, Patrick Scolyer-Gray, A.S.M. Kayes, and Mohammad Jabed Morshed Chowdhury. All Rights Reserved.

This would work perfectly if it weren’t for all the humans: Two factor authentication in late modern societies
by Paul Watters, Patrick Scolyer-Gray, A.S.M. Kayes, and Mohammad Jabed Morshed Chowdhury.
First Monday, Volume 24, Number 7 - 1 July 2019
https://www.firstmonday.dk/ojs/index.php/fm/article/view/10095/8050
doi: http://dx.doi.org/10.5210/fm.v24i7.10095




