Open Access

Peer-reviewed

Research Article

Mapping the global geography of cybercrime with the World Cybercrime Index

  • Miranda Bruce (corresponding author; E-mail: [email protected]). Affiliations: Department of Sociology, University of Oxford, Oxford, United Kingdom; Canberra School of Professional Studies, University of New South Wales, Canberra, Australia. Roles: Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing – original draft.
  • Jonathan Lusthaus. Affiliations: Department of Sociology, University of Oxford, Oxford, United Kingdom; Oxford School of Global and Area Studies, University of Oxford, Oxford, United Kingdom. Roles: Conceptualization, Investigation, Methodology, Writing – original draft.
  • Ridhi Kashyap. Affiliations: Department of Sociology, University of Oxford, Oxford, United Kingdom; Leverhulme Centre for Demographic Science, University of Oxford, Oxford, United Kingdom. Roles: Formal analysis, Methodology, Writing – review & editing.
  • Nigel Phair. Affiliation: Department of Software Systems and Cybersecurity, Faculty of IT, Monash University, Victoria, Australia. Roles: Funding acquisition, Methodology, Writing – review & editing.
  • Federico Varese. Affiliation: Centre d’études européennes et de politique comparée, Sciences Po, Paris, France. Roles: Conceptualization, Funding acquisition, Methodology, Writing – review & editing.

  • Published: April 10, 2024
  • https://doi.org/10.1371/journal.pone.0297312

Abstract

Cybercrime is a major challenge facing the world, with estimated costs ranging from the hundreds of millions to the trillions. Despite the threat it poses, cybercrime is something of an invisible phenomenon. In carrying out their virtual attacks, offenders often mask their physical locations by hiding behind online nicknames and technical protections. This means technical data are not well suited to establishing the true location of offenders, and scholarly knowledge of cybercrime geography is limited. This paper proposes a solution: an expert survey. From March to October 2021 we invited leading experts in cybercrime intelligence/investigations from across the world to participate in an anonymized online survey on the geographical location of cybercrime offenders. The survey asked participants to consider five major categories of cybercrime, nominate the countries that they consider to be the most significant sources of each of these types of cybercrime, and then rank each nominated country according to the impact, professionalism, and technical skill of its offenders. The outcome of the survey is the World Cybercrime Index, a global metric of cybercriminality organised around five types of cybercrime. The results indicate that a relatively small number of countries house the greatest cybercriminal threats. These findings partially remove the veil of anonymity around cybercriminal offenders, may aid law enforcement and policymakers in fighting this threat, and contribute to the study of cybercrime as a local phenomenon.

Citation: Bruce M, Lusthaus J, Kashyap R, Phair N, Varese F (2024) Mapping the global geography of cybercrime with the World Cybercrime Index. PLoS ONE 19(4): e0297312. https://doi.org/10.1371/journal.pone.0297312

Editor: Naeem Jan, Korea National University of Transportation, REPUBLIC OF KOREA

Received: October 11, 2023; Accepted: January 3, 2024; Published: April 10, 2024

Copyright: © 2024 Bruce et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The dataset and relevant documents have been uploaded to the Open Science Framework. Data can be accessed via the following URL: https://osf.io/5s72x/?view_only=ea7ee238f3084054a6433fbab43dc9fb .

Funding: This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant agreement No. 101020598 – CRIMGOV, Federico Varese PI). FV received the award and is the Principal Investigator. The ERC did not play any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. Funder website: https://erc.europa.eu/faq-programme/h2020.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Although the geography of cybercrime attacks has been documented, the geography of cybercrime offenders–and the corresponding level of “cybercriminality” present within each country–is largely unknown. A number of scholars have noted that valid and reliable data on offender geography are sparse [ 1 – 4 ], and there are several significant obstacles to establishing a robust metric of cybercriminality by country. First, there are the general challenges associated with the study of any hidden population, for whom no sampling frame exists [ 5 , 6 ]. If cybercriminals themselves cannot be easily accessed or reliably surveyed, then cybercriminality must be measured through a proxy. This is the second major obstacle: deciding what kind of proxy data would produce the most valid measure of cybercriminality. While there is much technical data on cybercrime attacks, this data captures artefacts of the digital infrastructure or proxy (obfuscation) services used by cybercriminals, rather than their true physical location. Non-technical data, such as legal cases, can provide geographical attribution for a small number of cases, but the data are not representative of global cybercrime. In short, the question of how best to measure the geography of cybercriminal offenders is complex and unresolved.

There is tremendous value in developing a metric for cybercrime. Cybercrime is a major challenge facing the world, with the most sober cost estimates in the hundreds of millions [ 7 , 8 ], but with high-end estimates in the trillions [ 9 ]. By accurately identifying which countries are cybercrime hotspots, the public and private sectors could concentrate their resources on these hotspots and spend less time and funds on cybercrime countermeasures in countries where the problem is limited. Whichever strategies are deployed in the fight against cybercrime (see for example [ 10 – 12 ]), they should be targeted at countries that produce the largest cybercriminal threat [ 3 ]. A measure of cybercriminality would also enable other lines of scholarly inquiry. For instance, an index of cybercriminality by country would allow for a genuine dependent variable to be deployed in studies attempting to assess which national characteristics–such as educational attainment, Internet penetration, or GDP–are associated with cybercrime [ 4 , 13 ]. These associations could also be used to identify future cybercrime hubs so that early interventions could be made in at-risk countries before a serious cybercrime problem develops. Finally, this metric would speak directly to theoretical debates on the locality of cybercrime, and organized crime more generally [ 11 – 14 ]. The challenge we have accepted is to develop a metric that is both global and robust. The following sections respectively outline the background elements of this study, the methods, the results, and then discussion and limitations.

Profit-driven cybercrime, which is the focus of this paper, has been studied by both social scientists and computer scientists. Research in this area has been characterised by empirical contributions that seek to illuminate the nature and organisation of cybercrime both online and offline [ 15 – 20 ]. But, as noted above, the geography of cybercrime has only been addressed by a handful of scholars, and they have identified a number of challenges connected to existing data. In a review of existing work in this area, Lusthaus et al. [ 2 ] identify two flaws in existing cybercrime metrics: 1) their ability to correctly attribute the location of cybercrime offenders; 2) beyond a handful of examples, their ability to compare the severity and scale of cybercrime between countries.

Building attribution into a cybercrime index is challenging. Often using technical data, cybersecurity firms, law enforcement agencies and international organisations regularly publish reports that identify the major sources of cyber attacks (see for example [ 21 – 24 ]). Some of these sources have been aggregated by scholars (see [ 20 , 25 – 29 ]). But the kind of technical data contained in these reports cannot accurately measure offender location. Kigerl [ 1 ] provides some illustrative remarks:

Where the cybercriminals live is not necessarily where the cyberattacks are coming from. An offender from Romania can control zombies in a botnet, mostly located in the United States, from which to send spam to countries all over the world, with links contained in them to phishing sites located in China. The cybercriminal’s reach is not limited by national borders (p. 473).

As cybercriminals often employ proxy services to hide their IP addresses, carry out attacks across national boundaries, collaborate with partners around the world, and can draw on infrastructure based in different countries, superficial measures do not capture the true geographical distribution of these offenders. Lusthaus et al. [ 2 ] conclude that attempts to produce an index of cybercrime by country using technical data suffer from a problem of validity. “If they are a measure of anything”, they argue, “they are a measure of cyber-attack geography”, not of the geography of offenders themselves (p. 452).

Non-technical data are far better suited to incorporating attribution. Court records, indictments and other investigatory materials speak more directly to the identification of offenders and provide more granular detail on their location. But while this type of data is well matched to micro-level analysis and case studies, there are fundamental questions about the representativeness of these small samples, even if collated. First, any sample would capture only cases where cybercriminals had been prosecuted, and would not include offenders who remain at large. Second, if the aim was to count the number of cybercrime prosecutions by country, this may reflect the seriousness with which various countries take cybercrime law enforcement, or the resources they have to pursue it, rather than the actual level of cybercrime within each country (for a discussion see [ 30 , 31 ]). Given such concerns, legal data are also not an appropriate basis for such a research program.

Furthermore, to carry out serious study on this topic, a cybercrime metric should aim to include as many countries as possible, and the sample must allow for variation so that high and low cybercrime countries can be compared. If only a handful of widely known cybercrime hubs are studied, this will result in selection on the dependent variable. The obvious challenge in providing such a comparative scale is the lack of good quality data to devise it. As an illustration, in their literature review Hall et al. [ 10 ] identify the “dearth of robust data” on the geographical location of cybercriminals, which means they are only able to include six countries in their final analysis (p. 285. See also [ 4 , 32 , 33 ]).

Considering the weaknesses within both existing technical and legal data discussed above, Lusthaus et al. [ 2 ] argue for the use of an expert survey to establish a global metric of cybercriminality. Expert survey data “can be extrapolated and operationalised”, and “attribution can remain a key part of the survey, as long as the participants in the sample have an extensive knowledge of cybercriminals and their operations” (p. 453). Up to this point, no such study has been produced. Such a survey would need to be very carefully designed for the resulting data to be both reliable and valid. One criticism of past cybercrime research is that surveys were used whenever other data was not immediately available, and that they were not always designed with care (for a discussion see [ 34 ]).

In response to the preceding considerations, we designed an expert survey in 2020, refined it through focus groups, and deployed it throughout 2021. The survey asked participants to consider five major types of cybercrime– Technical products/services ; Attacks and extortion ; Data/identity theft ; Scams ; and Cashing out/money laundering –and nominate the countries that they consider to be the most significant sources of each of these cybercrime types. Participants then rated each nominated country according to the impact of the offenses produced there, and the professionalism and technical skill of the offenders based there. Using the expert responses, we generated scores for each type of cybercrime, which we then combined into an overall metric of cybercriminality by country: the World Cybercrime Index (WCI). The WCI achieves our initial goal to devise a valid measure of cybercrime hub location and significance, and is the first step in our broader aim to understand the local dimensions of cybercrime production across the world.

Participants

Identifying and recruiting cybercrime experts is challenging. Much like the hidden population of cybercriminals we were trying to study, cybercrime experts themselves are also something of a hidden population. Due to the nature of their work, professionals working in the field of cybercrime tend to be particularly wary of unsolicited communication. There is also the problem of determining who is a true cybercrime expert, and who is simply presenting themselves as one. We designed a multi-layered sampling method to address such challenges.

The heart of our strategy involved purposive sampling. For an index based entirely on expert opinion, ensuring the quality of these experts (and thereby the quality of our survey results) was of the utmost importance. We defined "experts" as adult professionals who had been engaged in cybercrime intelligence, investigation, and/or attribution for a minimum of five years and who had a reputation for excellence amongst their peers. Only currently- or recently-practicing intelligence officers and investigators were included in the participant pool. While participants could be from either the public or private sectors, we explicitly excluded professionals working in the field of cybercrime research who are not actively involved in tracking offenders, such as writers and academics. In short, only experts with first-hand knowledge of cybercriminals were included in our sample. To ensure we recruited leading experts from a wide range of backgrounds and geographical areas, we adopted two approaches. First, we searched extensively through a range of online sources, including social media (e.g. LinkedIn), corporate sites, news articles and cybercrime conference programs, to identify individuals who met our inclusion criteria; we then faced the further challenge of finding contact information for these individuals.

Complementing this strategy, the authors also used their existing relationships with recognised cybercrime experts to recruit participants using the “snowball” method [ 35 ]. This both enhanced access and provided a mechanism for those we knew were bona fide experts to recommend other bona fide experts. The majority of our participants were recruited in this manner, either directly through our initial contacts or through a series of referrals that followed. But it is important to note that this snowball sampling fell under our broader purposive sampling strategy. That is, all the original “seeds” had to meet our inclusion criteria of being a top expert in the first instance. Any connections we were offered also had to meet our criteria or we would not invite them to participate. Another important aspect of this sampling strategy is that we did not rely on only one gatekeeper, but numerous, often unrelated, individuals who helped us with introductions. This approach reduced bias in the sample. It was particularly important to deploy a number of different “snowballs” to ensure that we included experts from each region of the world (Africa, Asia Pacific, Europe, North America and South America) and from a range of relevant professional backgrounds. We limited our sampling strategy to English speakers. The survey itself was likewise written in English. The use of English was partly driven by the resources available for this study, but the population of cybercrime experts is itself very global, with many attending international conferences and cooperating with colleagues from across the world. English is widely spoken within this community. While we expect the gains to be limited, future surveys will be translated into some additional languages (e.g. Spanish and Chinese) to accommodate any non-English speaking experts that we may not otherwise be able to reach.

Our survey design, detailed below, received ethics approval from the Human Research Advisory Panel (HREAP A) at the University of New South Wales in Australia, approval number HC200488, and the Research Ethics Committee of the Department of Sociology (DREC) at the University of Oxford in the United Kingdom, approval number SOC_R2_001_C1A_20_23. Participants were recruited in waves between 1 August 2020 and 30 September 2021. All participants provided consent to participate in the focus groups, pilot survey, and final survey.

Survey design

The survey comprised three stages. First, we conducted three focus groups with seven experts in cybercrime intelligence/investigations to evaluate our initial assumptions, concepts, and framework. These experts were recruited because they had reputations as some of the very top experts in the field; they represented a range of backgrounds in terms of their own geographical locations and expertise across different types of cybercrime; and they spanned both the public and private sectors. In short, they offered a cross-section of the survey sample we aimed to recruit. These focus groups informed several refinements to the survey design and its terminology, making both more comprehensible to participants. Some of the key terms, such as "professionalism" and "impact", were a direct result of this process. Second, some participants from the focus groups completed a pilot version of the survey, alongside others who had not taken part in these focus groups and could offer a fresh perspective. This allowed us to test technical components, survey questions, and user experience. The pilot participants provided useful feedback and prompted a further refinement of our approach. Third, the final survey was released online in March 2021 and closed in October 2021. We implemented several elements to ensure data quality, including a series of preceding statements about time expectations, attention checks, and visual cues throughout the survey. These elements significantly increased the likelihood that our participants were suitable and would provide full and thoughtful responses.

The introduction to the survey outlined the survey’s two main purposes: to identify which countries are the most significant sources of profit-driven cybercrime, and to determine how impactful the cybercrime is in these locations. Participants were reminded that state-based actors and offenders driven primarily by personal interests (for instance, cyberbullying or harassment) should be excluded from their consideration. We defined the “source” of cybercrime as the country where offenders are primarily based, rather than their nationality. To maintain a level of consistency, we made the decision to only include countries formally recognised by the United Nations. We initially developed seven categories of cybercrime to be included in the survey, based on existing research. But during the focus groups and pilot survey, our experts converged on five categories as the most significant cybercrime threats on a global scale:

  • Technical products/services (e.g. malware coding, botnet access, access to compromised systems, tool production).
  • Attacks and extortion (e.g. DDoS attacks, ransomware).
  • Data/identity theft (e.g. hacking, phishing, account compromises, credit card compromises).
  • Scams (e.g. advance fee fraud, business email compromise, online auction fraud).
  • Cashing out/money laundering (e.g. credit card fraud, money mules, illicit virtual currency platforms).

After being prompted with these descriptions and a series of images of world maps to ensure participants considered a wide range of regions/countries, participants were asked to nominate up to five countries that they believed were the most significant sources of each of these types of cybercrime. Countries could be listed in any order; participants were not instructed to rank them. Nominating countries was optional and participants were free to skip entire categories if they wished. Participants were then asked to rate each of the countries they nominated against three measures: how impactful the cybercrime is, how professional the cybercrime offenders are, and how technically skilled the cybercrime offenders are. Across each of these three measures, participants were asked to assign scores on a Likert-type scale from 1 (e.g. least professional) to 10 (e.g. most professional). Nominating and then rating countries was repeated for all five cybercrime categories.

This process, of nominating and then rating countries across each category, introduces a potential limitation in the survey design: the possibility of survey response fatigue. If a participant nominated the maximum number of countries across each cybercrime category–25 countries in total–by the end of the survey they would have completed 75 Likert-type scales. The repetition of this task, paired with the consideration it requires, has the potential to introduce respondent fatigue as the survey progresses, in the form of response attrition, an increase in careless responses, and/or an increased likelihood of significantly higher or lower scores being given. This is a common phenomenon in long-form surveys [ 36 ], and especially online surveys [ 37 , 38 ]. Jeong et al [ 39 ], for instance, found that questions asked near the end of a 2.5-hour survey were 10–64% more likely to be skipped than those at the beginning. We designed the survey carefully, refining it with the aid of focus groups and a pilot, to ensure that only the most essential questions were asked. As such, the survey was not overly long (estimated to take 30 minutes). To accommodate the cognitive load involved, participants were allowed to complete the survey at any time within a two-week window. Their progress was saved after each session, which enabled them to take breaks between sections (a suggestion made by Jeong et al [ 39 ]). Crucially, throughout recruitment, participants were informed that the survey was time-intensive and required significant attention. At the beginning of the survey, participants were instructed not to undertake it unless they could allocate 30 minutes to it. This approach pre-empted survey fatigue by discouraging those likely to lose interest from participating, and it reinforced the fact that only experts with a strong interest in the subject matter of the survey had been invited to participate. Survey fatigue is addressed further in the Discussion section, where we provide an analysis suggesting little evidence of participant fatigue.

In sum, we designed the survey to protect against various sources of bias and error, and there are encouraging signs that the effects of these issues in the data are limited (see Discussion ). Yet expert surveys are inherently prone to some types of bias and response issues; in the WCI, the issue of selection and self-selection within our pool of experts, as well as geo-political biases that may lead to systematic over- or under-scoring of certain countries, is something we considered closely. We discuss these issues in detail in the subsection on Limitations below.

For each type of cybercrime, every nominated country receives a "type" score: the mean of the impact, professionalism, and technical skill ratings assigned by the experts who nominated it (Eq (1)).

This “type” score is then multiplied by the proportion of experts who nominated that country. Within each cybercrime type, a country could be nominated a possible total of 92 times–once per participant. We then multiply this weighted score by ten to produce a continuous scale out of 100 (see Eq (2) ). This process prevents countries that received high scores, but a low number of nominations, from receiving artificially high rankings.

Eq (2): WCI type score = type score × (number of nominations ÷ 92) × 10.

The analyses for this paper were performed in R. All data and code have been made publicly available so that our analysis can be reproduced and extended.
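
The following R sketch illustrates, on toy data, how the type scores and the Eq (2) weighting described above fit together. The data frame, column names, and the use of dplyr are assumptions made for illustration; the released dataset and code at https://osf.io/5s72x/ are authoritative.

```r
# Minimal sketch of the WCI type-score calculation on toy data.
# Column names are illustrative, not those of the released dataset.
library(dplyr)
library(tibble)

n_experts <- 92  # total number of survey participants

ratings <- tribble(
  ~expert_id, ~type,   ~country, ~impact, ~professionalism, ~technical_skill,
  1L,         "Scams", "A",      8,       7,                6,
  2L,         "Scams", "A",      9,       8,                7,
  3L,         "Scams", "B",      5,       4,                4
)

wci_type <- ratings %>%
  group_by(type, country) %>%
  summarise(
    # Mean of the three ratings, averaged over nominating experts (Eq 1).
    type_score  = mean((impact + professionalism + technical_skill) / 3),
    nominations = n_distinct(expert_id),
    .groups     = "drop"
  ) %>%
  # Eq (2): weight by the share of experts nominating the country,
  # then rescale to a 0-100 range.
  mutate(wci = type_score * (nominations / n_experts) * 10)

print(wci_type)
```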

We contacted 245 individuals to participate in the survey, of whom 147 agreed and were sent invitation links. Out of these 147, a total of 92 people completed the survey, giving us an overall response rate of 37.5%. Given the expert nature of the sample, this is a high response rate (for a detailed discussion see [ 40 ]), and one just below the 44% that Wu, Zhao, and Fils-Aime estimate as the average response rate for general online surveys in the social sciences [ 41 ]. The survey collected information on the participants’ primary nationality and their current country of residence. Four participants chose not to identify their nationality. Overall, participants represented all five major geopolitical regions (Africa, the Asia-Pacific, Europe, North America and South America), both in nationality and residence, though the distribution was uneven and concentrated in particular regions/countries. There were 8 participants from Africa, 11 from the Asia-Pacific, 27 from North America, and 39 from Europe. South America was the least represented region, with only 3 participants. A full breakdown of participants’ nationality, residence, and areas of expertise is included in the Supporting Information document (see S1 Appendix ).

Table 1 shows the scores for the top fifteen countries of the WCI overall index. Each entry shows the country, along with the mean score (out of 10) averaged across the participants who nominated this country, for three categories: impact, professionalism, and technical skill. This is followed by each country’s WCI overall and WCI type scores. Countries are ordered by their WCI overall score. Each country’s highest WCI type scores are highlighted. Full indices that include all 197 UN-recognised countries can be found in S1 Indices .

Table 1. WCI scores for the top fifteen countries.

https://doi.org/10.1371/journal.pone.0297312.t001

Some initial patterns can be observed from this table, as well as the full indices in the supplementary document (see S1 Indices ). First, a small number of countries hold consistently high ranks for cybercrime. Six countries–China, Russia, Ukraine, the US, Romania, and Nigeria–appear in the top 10 of every WCI type index, including the WCI overall index. Aside from Romania, all appear in the top three at least once. While appearing in a different order, the first ten countries in the Technical products/services and Attacks and extortion indices are the same. Second, despite this small list of countries regularly appearing as cybercrime hubs, the survey results capture a broad geographical diversity. All five geopolitical regions are represented across each type. Overall, 97 distinct countries were nominated by at least one expert. This can be broken down into the cybercrime categories. Technical products/services includes 41 different countries; Attacks and extortion 43; Data/identity theft 51; Scams 49; and Cashing out/money laundering 63.

Some key findings emerge from these results, which are further illustrated by the following Figs 1 and 2 . First, cybercrime is not universally distributed. Certain countries are cybercrime hubs, while many others are not associated with cybercriminality in a serious way. Second, countries that are cybercrime hubs specialise in particular types of cybercrime. That is, despite a small number of countries being leading producers of cybercrime, there is meaningful variation between them both across categories, and in relation to scores for impact, professionalism and technical skill. Third, the results show a longer list of cybercrime-producing countries than are usually included in publications on the geography of cybercrime. As the survey captures leading producers of cybercrime, rather than just any country where cybercrime is present, this suggests that, even if a small number of countries are of serious concern, and close to 100 are of little concern at all, the remaining half are of at least moderate concern.

Fig 1.

Base map and data from OpenStreetMap and OpenStreetMap Foundation.

https://doi.org/10.1371/journal.pone.0297312.g001

Fig 2.

https://doi.org/10.1371/journal.pone.0297312.g002

To examine further the second finding concerning hub specialisation, we calculated an overall “Technicality score”–or “T-score”–for the top 15 countries of the WCI overall index. We assigned a value from 2 to -2 to each type of cybercrime to designate the level of technical complexity involved. Technical products/services is the most technically complex type (2), followed by Attacks and extortion (1), Data/identity theft (0), Scams (-1), and finally Cashing out and money laundering (-2), which has very low technical complexity. We then multiplied each country’s WCI score for each cybercrime type by its assigned value–for instance, a Scams WCI score of 5 would be multiplied by -1, with a final modified score of -5. As a final step, for each country, we added all of their modified WCI scores across all five categories together to generate the T-score. Fig 3 plots the top 15 WCI overall countries’ T-scores, ordering them by score. Countries with negative T-scores are highlighted in red, and countries with positive scores are in black.
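
A compact way to express this calculation is shown below. The sketch builds on the `wci_type` table from the earlier scoring example (one WCI score per country and cybercrime type); the weights are those given in the text, and the column names are illustrative.

```r
# Hypothetical T-score calculation: weight each WCI type score by its
# technicality value, then sum across the five categories per country.
library(dplyr)

tech_weights <- c(
  "Technical products/services"  =  2,
  "Attacks and extortion"        =  1,
  "Data/identity theft"          =  0,
  "Scams"                        = -1,
  "Cashing out/money laundering" = -2
)

t_scores <- wci_type %>%
  mutate(weight = tech_weights[type]) %>%
  group_by(country) %>%
  summarise(t_score = sum(wci * weight), .groups = "drop") %>%
  arrange(desc(t_score))
```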

Fig 3.

Negative values correspond to lower technicality, positive values to higher technicality.

https://doi.org/10.1371/journal.pone.0297312.g003

The T-score is best suited to characterising a given hub’s specialisation. For instance, as the line graph makes clear, Russia and Ukraine are highly technical cybercrime hubs, whereas Nigerian cybercriminals are engaged in less technical forms of cybercrime. But for countries that lie close to the centre (0), the story is more complex. Some may specialise in cybercrime types with middling technical complexity (e.g. Data/identity theft ). Others may specialise in both high- and low-tech crimes. In this sample of countries, India (-6.02) somewhat specialises in Scams but is otherwise a balanced hub, whereas Romania (10.41) and the USA (-2.62) specialise in both technical and non-technical crimes, balancing their scores towards zero. In short, each country has a distinct profile, indicating a unique local dimension.

Discussion

This paper introduces a global and robust metric of cybercriminality–the World Cybercrime Index. The WCI moves past previous technical measures of cyber attack geography to establish a more focused measure of the geography of cybercrime offenders. Elicited through an expert survey, the WCI shows that cybercrime is not universally distributed. The key theoretical contribution of this index is to illustrate that cybercrime, often seen as a fluid and global type of organized crime, actually has a strong local dimension (in keeping with broader arguments by some scholars, such as [ 14 , 42 ]).

While we took a number of steps to ensure our sample of experts was geographically representative, the sample is skewed towards some regions (such as Europe) and some countries (such as the US). This may simply reflect the high concentration of leading cybercrime experts in these locations. But it is also possible this distribution reflects other factors, including the authors’ own social networks; the concentration of cybercrime taskforces and organisations in particular countries; the visibility of different nations on networking platforms like LinkedIn; and also perhaps norms of enthusiasm or suspicion towards foreign research projects, both inside particular organisations and between nations.

To better understand what biases might have influenced the survey data, we analysed participant rating behaviours with a series of linear regressions. Numerical ratings were the response and different participant characteristics–country of nationality; country of residence; crime type expertise; and regional expertise–were the predictors. Our analysis found evidence (p < 0.05) that participants assigned higher ratings to the countr(ies) they either reside in or are citizens of, though this was not a strong or consistent result. For instance, regional experts did not consistently rate their region of expertise more highly than other regions. European and North American experts, for example, rated countries from these regions lower than countries from other regions. Our analysis of cybercrime type expertise showed even less systematic rating behaviour, with no regression yielding a statistically significant (p < 0.05) result. Small sample sizes across other known participant characteristics meant that further analyses of rating behaviour could not be performed. This applied to, for instance, whether residents and citizens of the top ten countries in the WCI nominated their own countries more or less often than other experts. On this point: 46% of participants nominated their own country at some point in the survey, but the majority (83%) of nominations were for a country different to the participant’s own country of residence or nationality. This suggested limited bias towards nominating one’s own country. Overall, these analyses point to an encouraging observation: while there is a slight home-country bias, this does not systematically result in higher rating behaviour. Longitudinal data from future surveys, as well as a larger participant pool, will better clarify what other biases may affect rating behaviour.
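
For readers interested in the general form of these checks, the sketch below fits one such regression in R on simulated data; the variable names and the simulated values are illustrative only and do not reproduce the study’s results.

```r
# Illustrative home-country regression: is a rating higher when the rated
# country is the participant's own? (Simulated data, assumed variable names.)
set.seed(1)
ratings_long <- data.frame(
  rating          = sample(1:10, 200, replace = TRUE),
  is_home_country = rbinom(200, 1, 0.1)  # 1 if rated country is rater's own
)

home_model <- lm(rating ~ is_home_country, data = ratings_long)
summary(home_model)  # a positive, significant coefficient would indicate home bias

# Analogous models can be fitted with residence, regional expertise, or
# crime-type expertise as the predictor.
```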

There is little evidence to suggest that survey fatigue affected our data. As the survey progressed, the heterogeneity of nominated countries across all experts increased, from 41 different countries nominated in the first category to 63 different countries nominated in the final category. If fatigue played a significant role in the results then we would expect this number to decrease, as participants were not required to nominate countries within a category and would have been motivated to nominate fewer countries to avoid extending their survey time. We further investigated the data for evidence of survey fatigue in two additional ways: by performing a Mann-Kendall/Sen’s slope trend test (MK/S) to determine whether scores skewed significantly upwards or downwards towards the end of the survey; and by compiling an intra-individual response variability (IRV) index to search for long strings of repeated scores at the end of the survey [ 43 ]. The MK/S test was marginally statistically significant (p<0.048), but the results indicated that scores trended downwards only minimally (-0.002 slope coefficient). Likewise, while the IRV index uncovered a small group of participants (n = 5) who repeatedly inserted the same score, this behaviour was not more likely to happen at the end of the survey (see S7 and S8 Tables in S1 Appendix ).
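
Both checks can be run with standard R tooling; the sketch below uses the trend package for the Mann-Kendall test and Sen’s slope, and a per-participant standard deviation as a simple IRV-style measure. The data are simulated placeholders, and the layout (one score per row, in answer order) is an assumption.

```r
# Simulated stand-in for the real responses: scores in the order answered.
set.seed(42)
scores_in_order <- sample(1:10, 300, replace = TRUE)

library(trend)
mk.test(scores_in_order)     # Mann-Kendall: monotonic trend across the survey?
sens.slope(scores_in_order)  # Sen's slope: magnitude of any such trend

# IRV-style check: very low variability in a participant's final block of
# answers (long strings of identical scores) suggests straight-lining.
last_block <- data.frame(
  participant = rep(1:30, each = 10),
  score       = sample(1:10, 300, replace = TRUE)
)
irv <- tapply(last_block$score, last_block$participant, sd)
head(sort(irv))              # participants with IRV near 0 warrant inspection
```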

It is encouraging that there is at least some external validation for the WCI’s highest ranked countries. Steenbergen and Marks [ 44 ] recommend that data produced from expert judgements should “demonstrate convergent validity with other measures of [the topic]–that is, the experts should provide evaluations of the same […] phenomenon that other measurement instruments pick up.” (p. 359) Most studies of global cybercrime geography are, as noted in the introduction, based on technical measures that cannot accurately establish the true physical location of offenders (for example [ 1 , 4 , 28 , 33 , 45 ]). Comparing our results to these studies would therefore be of little value, as the phenomena being measured differ: they measure attack infrastructure, whereas the WCI measures offender location. Instead, in-depth qualitative cybercrime case studies provide a better comparison, at least for the small number of higher ranked countries. Though few such studies of profit-driven cybercrime exist, and the number of countries included is limited, the top ranked countries in the WCI match the key cybercrime-producing countries discussed in the qualitative literature (see for example [ 3 , 10 , 32 , 46 – 50 ]). Beyond this qualitative support, our sampling strategy–discussed in the Methods section above–is our most robust control for ensuring the validity of our data.

Along with contributing to theoretical debates on the (local) nature of organized crime [ 1 , 14 ], this index can also contribute to policy discussions. For instance, there is an ongoing debate as to the best approaches to take in cybercrime reduction, whether this involves improving cyber-law enforcement capacity [ 3 , 51 ], increasing legitimate job opportunities and access to youth programs for potential offenders [ 52 , 53 ], strengthening international agreements and law harmonization [ 54 – 56 ], developing more sophisticated and culturally-specific social engineering countermeasures [ 57 ], or reducing corruption [ 3 , 58 ]. As demonstrated by the geographical, economic, and political diversity of the top 15 countries (see Table 1 ), the likelihood that a single strategy will work in all cases is low. If cybercrime is driven by local factors, then mitigating it may require a localised approach that considers the different features of cybercrime in these contexts. But no matter what strategies are applied in the fight against cybercrime, they should be targeted at the countries that produce the most cybercrime, or at least produce the most impactful forms of it [ 3 ]. An index is a valuable resource for determining these countries and directing resources appropriately. Future research that explains what is driving cybercrime in these locations might also suggest more appropriate means for tackling the problem. Such an analysis could examine relevant correlates, such as corruption, law enforcement capacity, internet penetration, education levels and so on to inform/test a theoretically-driven model of what drives cybercrime production in some locations, but not others. It also might be possible to make a kind of prediction: to identify those nations that have not yet emerged as cybercrime hubs but may in the future. This would allow an early warning system of sorts for policymakers seeking to prevent cybercrime around the world.

Limitations

In addition to the points discussed above, the findings of the WCI should be considered in light of some remaining limitations. Firstly, as noted in the methods, our pool of experts was not as large or as globally representative as we had hoped. Achieving a significant response rate is a common issue across all surveys, and is especially difficult in those that employ the snowball technique [ 59 ] and also attempt to recruit experts [ 60 ]. However, ensuring that our survey data captures the most accurate picture of cybercrime activity is an essential aspect of the project, and the under-representation of experts from Africa and South America is noteworthy. More generally, our sample size (n = 92) is relatively small. Future iterations of the WCI survey should focus on recruiting a larger pool of experts, especially those from under-represented regions. However, this is a small and hard-to-reach population, which likely means the sample size will not grow significantly. While this limits statistical power, it is also a strength of the survey: by ensuring that we only recruit the top cybercrime experts in the world, the weight and validity of our data increases.

Secondly, though we developed our cybercrime types and measures with expert focus groups, the definitions used in the WCI will always be contestable. For instance, a small number of comments left at the end of the survey indicated that the Cashing out/money laundering category was unclear to some participants, who were unsure whether they should nominate the country in which these schemes are organised or the countries in which the actual cash out occurs. A small number of participants also commented that they were not sure whether the ‘impact’ of a country’s cybercrime output should be measured in terms of cost, social change, or some other metric. We limited any such uncertainties by running a series of focus groups to check that our categories were accurate to the cybercrime reality and comprehensible to practitioners in this area. We also ran a pilot version of the survey. The beginning of the survey described the WCI’s purpose and terms of reference, and participants were able to download a document that described the project’s methodology in further detail. Each time a participant was prompted to nominate countries as a significant source of a type of cybercrime, the type was re-defined and examples of offences under that type were provided. However, the examples were not exhaustive and the definitions were brief. This was done partly to avoid significantly lengthening the survey with detailed definitions and clarifications. We also wanted to avoid over-defining the cybercrime types so that any new techniques or attack types that emerged while the survey ran would be included in the data. Nonetheless, there will always remain some elasticity around participant interpretations of the survey.

Finally, although we restricted the WCI to profit-driven activity, the distinction between cybercrime that is financially-motivated, and cybercrime that is motivated by other interests, is sometimes blurred. Offenders who typically commit profit-driven offences may also engage in state-sponsored activities. Some of the countries with high rankings within the WCI may shelter profit-driven cybercriminals who are protected by corrupt state actors of various kinds, or who have other kinds of relationships with the state. Actors in these countries may operate under the (implicit or explicit) sanctioning of local police or government officials to engage in cybercrime. Thus while the WCI excludes state-based attacks, it may include profit-driven cybercriminals who are protected by states. Investigating the intersection between profit-driven cybercrime and the state is a strong focus in our ongoing and future research. If we continue to see evidence that these activities can overlap (see for example [ 32 , 61 – 63 ]), then any models explaining the drivers of cybercrime will need to address this increasingly important aspect of local cybercrime hubs.

This study makes use of an expert survey to better measure the geography of profit-driven cybercrime and presents the output of this effort: the World Cybercrime Index. This index, organised around five major categories of cybercrime, sheds light on the geographical concentrations of financially-motivated cybercrime offenders. The findings reveal that a select few countries pose the most significant cybercriminal threat. By illustrating that hubs often specialise in particular forms of cybercrime, the WCI also offers valuable insights into the local dimension of cybercrime. This study provides a foundation for devising a theoretically-driven model to explain why some countries produce more cybercrime than others. By contributing to a deeper understanding of cybercrime as a localised phenomenon, the WCI may help lift the veil of anonymity that protects cybercriminals and thereby enhance global efforts to combat this evolving threat.

Supporting information

S1 Indices. WCI indices.

Full indices for the WCI Overall and each WCI Type.

https://doi.org/10.1371/journal.pone.0297312.s001

S1 Appendix. Supporting information.

Details of respondent characteristics and analysis of rating behaviour.

https://doi.org/10.1371/journal.pone.0297312.s002

Acknowledgments

The data collection for this project was carried out as part of a partnership between the Department of Sociology, University of Oxford and UNSW Canberra Cyber. The analysis and writing phases received support from CRIMGOV. Fig 1 was generated using information from OpenStreetMap and OpenStreetMap Foundation, which is made available under the Open Database License.

References
  • 2. Lusthaus J, Bruce M, Phair N. Mapping the geography of cybercrime: A review of indices of digital offending by country. 2020.
  • 4. McCombie S, Pieprzyk J, Watters P. Cybercrime Attribution: An Eastern European Case Study. Proceedings of the 7th Australian Digital Forensics Conference. Perth, Australia: secAU—Security Research Centre, Edith Cowan University; 2009. pp. 41–51. https://researchers.mq.edu.au/en/publications/cybercrime-attribution-an-eastern-european-case-study
  • 7. Anderson R, Barton C, Bohme R, Clayton R, van Eeten M, Levi M, et al. Measuring the cost of cybercrime. The Economics of Information Security and Privacy. Springer; 2013. pp. 265–300. https://link.springer.com/chapter/10.1007/978-3-642-39498-0_12
  • 8. Anderson R, Barton C, Bohme R, Clayton R, Ganan C, Grasso T, et al. Measuring the Changing Cost of Cybercrime. California, USA; 2017.
  • 9. Morgan S. 2022 Official Cybercrime Report. Cybersecurity Ventures; 2022. https://s3.ca-central-1.amazonaws.com/esentire-dot-com-assets/assets/resourcefiles/2022-Official-Cybercrime-Report.pdf
  • 12. Wall D. Cybercrime: The Transformation of Crime in the Information Age. Polity Press; 2007.
  • 14. Varese F. Mafias on the move: how organized crime conquers new territories. Princeton University Press; 2011.
  • 15. Dupont B. Skills and Trust: A Tour Inside the Hard Drives of Computer Hackers. Crime and networks. Routledge; 2013.
  • 16. Franklin J, Paxson V, Savage S. An Inquiry into the Nature and Causes of the Wealth of Internet Miscreants. Proceedings of the 2007 ACM Conference on Computer and Communications Security. Alexandria, Virginia, USA; 2007.
  • 17. Hutchings A, Clayton R. Configuring Zeus: A case study of online crime target selection and knowledge transmission. Scottsdale, AZ, USA: IEEE; 2017.
  • 20. Levesque F, Fernandez J, Somayaji A, Batchelder. National-level risk assessment: A multi-country study of malware infections. 2016. https://homeostasis.scs.carleton.ca/~soma/pubs/levesque-weis2016.pdf
  • 21. Crowdstrike. 2022 Global Threat Report. Crowdstrike; 2022. https://go.crowdstrike.com/crowdstrike/gtr
  • 22. EC3. Internet Organised Crime Threat Assessment (IOCTA) 2021. EC3; 2021. https://www.europol.europa.eu/publications-events/main-reports/internet-organised-crime-threat-assessment-iocta-2021
  • 23. ENISA. ENISA threat Landscape 2021. ENISA; 2021. https://www.enisa.europa.eu/publications/enisa-threat-landscape-2021
  • 24. Sophos. Sophos 2022 Threat Report. Sophos; 2022. https://www.sophos.com/en-us/labs/security-threat-report
  • 25. van Eeten M, Bauer J, Asghari H, Tabatabaie S, Rand D. The Role of Internet Service Providers in Botnet Mitigation: An Empirical Analysis Based on Spam Data. TPRC 2010. SSRN: https://ssrn.com/abstract=1989198
  • 26. He S, Lee GM, Quarterman JS, Whinston A. Cybersecurity Policies Design and Evaluation: Evidence from a Large-Scale Randomized Field Experiment. 2015. https://econinfosec.org/archive/weis2015/papers/WEIS_2015_he.pdf
  • 27. Snyder P, Kanich C. No Please, After You: Detecting Fraud in Affiliate Marketing Networks. 2015. https://econinfosec.org/archive/weis2015/papers/WEIS_2015_snyder.pdf
  • 29. Wang Q-H, Kim S-H. Cyber Attacks: Cross-Country Interdependence and Enforcement. 2009. http://weis09.infosecon.net/files/153/paper153.pdf
  • 32. Lusthaus J. Industry of Anonymity: Inside the Business of Cybercrime. Harvard University Press; 2018.
  • 33. Kshetri N. The Global Cybercrime Industry: Economic, Institutional and Strategic Perspectives. Berlin: Springer; 2010.
  • 36. Backor K, Golde S, Nie N. Estimating Survey Fatigue in Time Use Study. Washington, DC.; 2007. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=401f97f2d7c684b295486636d8a84c627eb33446
  • 42. Reuter P. Disorganized Crime: Illegal Markets and the Mafia. MIT Press; 1985.
  • 47. Sotande E. Transnational Organised Crime and Illicit Financial Flows: Nigeria, West Africa and the Global North. University of Leeds, School of Law. 2016. https://etheses.whiterose.ac.uk/15473/1/Emmanuel%20Sotande%20Thessis%20at%20the%20University%20of%20Leeds.%20viva%20corrected%20version%20%281%29.pdf
  • 48. Lusthaus J. Modelling cybercrime development: the case of Vietnam. The Human Factor of Cybercrime. Routledge; 2020. pp. 240–257.
  • 51. Lusthaus J. Electronic Ghosts. In: Democracy: A Journal of Ideas [Internet]. 2014. https://democracyjournal.org/author/jlusthaus/
  • 52. Brewer R, de Vel-Palumbo M, Hutchings A, Maimon D. Positive Diversions. Cybercrime Prevention. 2019. https://www.researchgate.net/publication/337297392_Positive_Diversions
  • 53. National Cyber Crime Unit / Prevent Team. Pathways Into Cyber Crime. National Crime Agency; 2017. https://www.nationalcrimeagency.gov.uk/who-we-are/publications/6-pathways-into-cyber-crime-1/file
  • 60. Christopoulos D. Peer Esteem Snowballing: A methodology for expert surveys. 2009. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=63ac9f6db0a2fa2e0ca08cd28961385f98ec21ec

  • Open access
  • Published: 23 February 2023

Exploring the global geography of cybercrime and its driving forces

  • Shuai Chen (ORCID: 0000-0003-3623-1532) 1,2
  • Mengmeng Hao (ORCID: 0000-0001-5086-6441) 1,2
  • Fangyu Ding (ORCID: 0000-0003-1821-531X) 1,2
  • Dong Jiang 1,2
  • Jiping Dong 1,2
  • Shize Zhang 3
  • Qiquan Guo 1
  • Chundong Gao 4

Humanities and Social Sciences Communications, volume 10, Article number: 71 (2023)


  • Criminology
  • Science, technology and society

Cybercrime is wreaking havoc on the global economy, national security, social stability, and individual interests. The current efforts to mitigate cybercrime threats are primarily focused on technical measures. This study considers cybercrime as a social phenomenon and constructs a theoretical framework that integrates the social, economic, political, technological, and cybersecurity factors that influence cybercrime. The FireHOL IP blocklist, a novel cybersecurity data set, is used to map worldwide subnational cybercrimes. Generalised linear models (GLMs) are used to identify the primary factors influencing cybercrime, whereas structural equation modelling (SEM) is used to estimate the direct and indirect effects of various factors on cybercrime. The GLM results suggest that the inclusion of a broad set of socioeconomic factors can significantly improve the model’s explanatory power, and that cybercrime is closely associated with socioeconomic development, although the effects of these factors differ by income level. Additionally, results from SEM further reveal the causal relationships between cybercrime and numerous contextual factors, demonstrating that technological factors serve as a mediator between socioeconomic conditions and cybercrime.


Introduction

Cybercrime is a broad term used by government, businesses, and the general public to account for a variety of criminal activities and harmful behaviours involving the adoption of computers, the internet, or other forms of information communications technologies (ICTs) (Wall, 2007 ). As an emerging social phenomenon in the information age, cybercrime has aroused growing concern around the world due to its high destructiveness and widespread influence. In 2017, the WannaCry ransomware attack affected more than 230,000 computers across 150 countries, resulting in economic losses of more than 4 billion dollars and posing a serious danger to the global education, government, finance, and healthcare sectors (Ghafur et al., 2019 ; Castillo and Falzon, 2018 ; Mohurle and Patil, 2017 ). Although there is currently no precise and universally accepted definition of cybercrime (Phillips et al., 2022 ; Holt and Bossler, 2014 ), it is generally acknowledged that the term covers both traditional crimes that are facilitated or amplified by utilising ICTs as well as new types of crimes that emerged with the advent of ICTs (Ho and Luong, 2022 ). Based on the role of technology in the commission of the crime, the most widely utilised typology divides cybercrime into cyber-dependent crime (such as hacking, distributed denial of service, and malware) and cyber-enabled crime (online fraud, digital piracy, cyberbullying) (Brenner, 2013 ; Sarre et al., 2018 ; McGuire and Dowling, 2013 ). Along with the rapid development of ICTs and the increasing prevalence of the internet, these criminal activities are significantly disrupting the global economy, national security, social stability, and individual interests. Although it is difficult to estimate the precise financial cost of cybercrime (Anderson et al., 2013 ; Anderson et al., 2019 ), statistical evidence from governments and industries indicates that the economic losses caused by cybercrime are extremely enormous and are still rising rapidly (McAfee, 2021 ).

Cybercrime is complicated in nature and involves many disciplines, including criminology, computer science, psychology, sociology, economics, geography, political science, and law, among others (Holt, 2017 ; Dupont and Holt, 2022 ; Payne, 2020 ). Computer science and cybersecurity efforts are primarily focused on applying technical approaches such as Intrusion Detection Systems (IDSs), Intrusion Prevention Systems (IPSs), firewalls, and anti-virus software to mitigate cyberattack threats (Kumar and Carley, 2016 ; Walters, 2015 ). These methods may help to some extent lessen the adverse impacts of cybercrime on both organisations and individuals. However, these technical solutions are largely unaware of the human and contextual factors that contribute to the issues, providing only reactive solutions, and are unable to keep up with the rapidly evolving modus operandi and emerging technologies (Clough, 2015 ; Neal, 2014 ). It is suggested that cybercrime is a complex social phenomenon driven by the compound interactions of underlying socioeconomic factors. Human and social factors play a substantial role in the formation of cybercrime agglomerations (Waldrop, 2016 ; Watters et al., 2012 ; Leukfeldt and Holt, 2019 ). They are also important aspects of cybercrime prevention and control (Dupont and Holt, 2022 ). The human factors influencing cybercrime have been the subject of an expanding body of sociological and psychological study in recent years. These studies, which covered cyberbullying, online harassment, identity theft, online fraud, malware infection, phishing, and other types of cybercrime, generally applied traditional criminological and psychological theories, such as routine activities theory, lifestyle-routine activities theory, self-control theory, and general strain theory, to explain the victimisation and offending of various cybercrimes (Bergmann et al., 2018 ; Mikkola et al., 2020 ; Ngo and Paternoster, 2011 ; Pratt et al., 2010 ; Williams, 2016 ). Results from these studies suggested that by altering criminal motivations and opportunity structures, individual factors (i.e., age, gender, ethnicity, education, socioeconomic status, and self-control) and situational factors (online activities, time spent online, risk exposure, deviant behaviours) may have an impact on cybercrime offence and victimisation. These findings advanced our knowledge in understanding the impact of technology on criminal behaviours, factors affecting the risk of cyber victimisation, and the applicability of traditional criminological theories to cybercrime (Holt and Bossler, 2014 ).

Cybercrime is a highly geographical phenomenon at the macro level, with some countries accounting for a disproportionate share of cybercrime (Kigerl, 2012; Kigerl, 2016). This spatial heterogeneity is closely related to specific socioeconomic contexts (Kshetri, 2010). Academic efforts have been made to identify clusters of high-cybercrime countries and to explain the socioeconomic factors that lead to the formation of these clusters. For example, Mezzour, Carley, and Carley (2014) found that Eastern European countries hosted a greater number of attacking computers due to their superior computing infrastructure and high levels of corruption. Similarly, Kumar and Carley (2016) found that higher levels of corruption and large internet bandwidth favoured attack origination; they also noted that countries with greater gross domestic product (GDP) per capita and better ICT infrastructure were targeted more frequently. Meanwhile, Srivastava et al. (2020) pointed out that countries with better technology and economic capital were more likely to be origins of cybercrime, but that better cybersecurity preparedness may reduce the frequency of cybercrime originating within them. Moreover, Holt, Burruss, and Bossler (2018) suggested that nations with better technological infrastructure, greater political freedom, and less organised crime were more likely to report malware infections, while Overvest and Straathof (2015) suggested that the number of internet users, bandwidth, and economic ties were significantly related to cyberattack origin. Kigerl (2012) found that a higher unemployment rate and more internet users were linked to an increase in spam activity. However, these studies have tended to use a restricted range of predictor variables and cover only certain aspects of cybercrime. Moreover, most have been conducted at the national level, which can hide substantial disparities within countries.

In this work, we construct a conceptual model to better represent the context from which cybercrime emerges and apply it as a framework to analyse the underlying socioeconomic driving forces. A novel cybersecurity data set, the FireHOL IP blocklist, is adopted as a proxy for the level of cybercriminal activity in different areas. A set of social, economic, political, technological, and cybersecurity indicators is used as explanatory variables. Generalised linear models (GLMs) are used to quantify the effect of each factor on cybercrime, while structural equation modelling (SEM) is used to estimate the complex interactions among the factors and their direct and indirect effects on cybercrime.

Conceptual framework

We propose a conceptual framework for examining the driving forces of cybercrime by reviewing existing empirical literature and integrating different criminological theories. The conceptual framework includes five interrelated components: the social, economic, political, technological, and cybersecurity factors. The potential pathways by which each component may directly or indirectly influence cybercrime are illustrated in Fig. 1 .

Fig. 1: Conceptual framework. Solid lines indicate direct effects and dashed lines indicate indirect effects; H1–H5 refer to the five hypotheses, "+" indicates a positive effect, and "−" indicates a negative effect.

The social and economic factors depict the level of regional development and serve as the fundamental context in which cybercrime emerges. Given the intrinsically technological nature of cybercrime, global urbanisation and the information technology revolution have promoted global connectivity and created unprecedented conditions and opportunities for cybercrime (UNODC, 2013). From the perspective of general strain theory, poverty, unemployment, income inequality, and other social disorders that accompany social transformation can foster cultures of materialism and motivate cybercrime for illegal gain (Meke, 2012; Onuora et al., 2017). At the same time, economically developed regions generally have superior ICT infrastructure, which provides convenient, low-cost conditions for committing cybercrime. High educational attainment is also likely to be associated with cybercrime, given that cybercrime usually requires some level of computer skills and IT knowledge (Holt and Schell, 2011; Asal et al., 2016). In general, better socioeconomic conditions are associated with more cybercriminal activity, which leads us to the first two hypotheses:

H1: The social factor is positively associated with cybercrime.
H2: The economic factor is positively associated with cybercrime.

The influence of political factors on cybercrime is mainly reflected in the regulatory and intervention measures that governments take to prevent and control cybercrime, such as building the legal system, government effectiveness, control of corruption, and political stability. An offender's decision to engage in illegal activity is a function of the expected probability of being arrested and convicted and the expected penalty if convicted (Ehrlich, 1996). As with traditional crimes, the lack of efficient social control and punishment mechanisms breeds criminal behaviour. The deterrent effect of legislation forces cybercriminals to weigh the consequences they may face. While the virtual and transnational nature of cyberspace makes it easier for perpetrators to avoid punishment, cybercrime can be deterred to some extent by increasing the severity of punishment and strengthening international law enforcement cooperation (Hall et al., 2020). On the other hand, cybercriminals may seek protection through corrupt connections with the local institutional environment, which weakens law enforcement operations and encourages cybercriminal activity (Hall et al., 2020; Lusthaus and Varese, 2021; Sutanrikulu et al., 2020). For instance, corruption in law enforcement authorities makes it difficult to punish cybercriminals, while corruption among network operators or internet service providers (ISPs) makes it easier for cybercriminals to obtain malicious domain names or register fake websites. Some studies have shown that areas with high levels of corruption usually have more cybercriminal activity (Mezzour et al., 2014; Watters et al., 2012), and cybercrime across West Africa and Eastern Europe is typically attributed to political corruption, ineffective governance, institutional weakness, and weak rule of law (Asal et al., 2016). Therefore, we propose that:

H3: The political factor is negatively associated with cybercrime.

The technological environment, composed of communication conditions and the underlying physical ICT infrastructure, serves as the essential medium through which cybercrime is committed. According to rational choice theory, crime is the result of an individual's rational weighing of the expected costs and benefits of their criminal activity (Mandelcorn et al., 2013; Brewer et al., 2019). Better internet infrastructure, greater internet penetration, and faster connections can facilitate cybercrime by reducing its costs, expanding opportunities, and increasing potential benefits. For example, in the majority of spam and DDoS attacks, cybercriminals carry out large-scale coordinated attacks by sending remote commands to a set of compromised computers (a botnet). High-performance computers and high-bandwidth connections, such as university, corporate, and government servers, allow for more efficient attacks and can expand the scope of cybercrime, making them preferred platforms for cybercriminals (Hoque et al., 2015; Van Eeten et al., 2010; Eslahi et al., 2012). We thus hypothesise that:

H4: The technological factor is positively related to cybercrime.

Cybersecurity preparedness reflects a country's capabilities and commitment to prevent and combat cybercrime. According to the International Telecommunication Union (ITU), cybersecurity preparedness involves legal, technical, organisational, capacity-building, and cooperation aspects (Bruggemann et al., 2022). Legal measures, such as laws and regulations, define what constitutes cybercrime and specify the procedures for investigating, prosecuting, and sanctioning it, providing a basis for the other measures. Technical measures refer to the technical capabilities to cope with cybersecurity risks and build resilience through national institutions and frameworks such as Computer Incident Response Teams (CIRTs) or Computer Emergency Response Teams (CERTs). Organisational measures refer to the comprehensive strategies, policies, organisations, and coordination mechanisms for cybersecurity development. Capacity development covers research and development, awareness campaigns, training and education, and the certified professionals and public agencies needed for cybersecurity capacity building. Cooperation measures refer to collaboration and information sharing at the national, regional, and international levels, which is essential given the transnational nature of cybercrime. According to general deterrence theory and routine activity theory (Leukfeldt and Holt, 2019; Hutchings and Hayes, 2009; Lianos and McGrath, 2018), cybersecurity preparedness acts as a deterrent against, and a form of guardianship over, cybercrime. It is crucial both in defending a country from external cybercrime and in reducing cybercrime originating from within. Therefore, we hypothesise that:

H5: Cybersecurity preparedness is negatively associated with cybercrime.

The five hypotheses proposed in the conceptual model (Fig. 1) outline the direct effects of various contextual drivers on cybercrime. The social, economic, political, technological, and cybersecurity factors may also interact with one another and thereby have indirect effects on cybercrime. We then test the hypothesised pathways using a combination of two statistical methods and a set of explanatory covariates.

Cybercrime data

It is commonly acknowledged among cybercrime scholars that the lack of standardised legal definitions of cybercrime and of valid, reliable official statistics makes it difficult to estimate the prevalence or incidence of cybercrime around the world (Holt and Bossler, 2015). Although law enforcement agencies in some countries do collect data on cybercrime (e.g., police records and court judgements), such official data suffer from inevitable under-reporting and under-recording (Holt and Bossler, 2015; Howell and Burruss, 2020). This has prompted some researchers to use alternative data sources to measure cybercrime, including social media, online forums, emails, and cybersecurity companies (Holt and Bossler, 2015). Among these sources, technical data such as spam emails, honeypots, IDS/IPS or firewall logs, malicious domains/URLs, and IP addresses are often used as proxies for different aspects of cybercrime (Amin et al., 2021; Garg et al., 2013; Kigerl, 2012; Kigerl, 2016; Kigerl, 2021; Mezzour et al., 2014; Srivastava et al., 2020; Kshetri, 2010) and account for a large proportion of the macro-level cybercrime literature. However, owing to the anonymity and virtuality of cyberspace, cybercriminals are not constrained by national boundaries and can use compromised computers distributed around the world as platforms for committing cybercrime. Meanwhile, IP addresses can be faked or spoofed using technologies such as proxy servers, anonymity networks, and virtual private networks (VPNs) to hide the true identity and location of cybercriminals (Holt and Bossler, 2015; Leukfeldt and Holt, 2019). As a result, the attribution of cybercrime becomes extremely challenging and requires a high level of expertise and coordination from law enforcement agencies and cybersecurity teams (Lusthaus et al., 2020). Therefore, rather than capturing where cybercriminals reside in physical space, most studies using these technical data measure the possible locations from which cyberattacks or cybercrimes originate, even though some of these locations may simply be where cybercriminals choose to host their botnets or spam servers. Although there is partial support for the idea that certain types of cyberattacks originate from physically proximate IP addresses (Maimon et al., 2015), more elaborate and comprehensive research is lacking.

In this study, we used a novel cybersecurity data set, the IP addresses from the FireHOL blocklist (FireHOL, 2021), as a proxy for cybercrime. The FireHOL IP blocklist is a composition of multiple sources of illegitimate or malicious IP addresses, which can be used on computer systems (e.g., servers, routers, and firewalls) to block access from and to these IPs. The IPs are related to certain types of cybercriminal activity, including abuse, attacks, botnets, malware, command and control, and spam. We adopt the FireHOL level 1 blocklist, which consists of ~2,900 subnets and over 600 million unique IPs, with a minimum of false positives. Anonymising IPs used by other parties to hide their true identities (e.g., open proxies and VPN providers) were excluded from the analysis. Next, we applied an open-source IP geolocation database, IP2Location™ Lite, to map these unique IP addresses to specific geographic locations in the form of country/region/city and longitude/latitude pairs. The location accuracy of the IP geolocation is high at the national and regional levels, with ~98% accuracy at the country level and ~60% at the city level. To reduce uncertainty, we focused the analysis at the state/region level. Finally, we calculated the count of unique IPs located within each subnational area to measure the global distribution of cybercrime.
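To make this processing step concrete, the sketch below shows one possible way to aggregate the blocklist by subnational region in R (the language used elsewhere in this study). It is a minimal illustration rather than the authors' actual pipeline: the file names, the column layout assumed for the IP2Location LITE CSV, and the choice to weight each subnet by the number of addresses it covers are all assumptions.

```r
## Minimal sketch (not the authors' pipeline): map FireHOL level-1 subnets to
## subnational regions and count the addresses they contain. File names and
## the geolocation CSV's column layout are illustrative assumptions.
library(data.table)

ip_to_int <- function(ip) {
  # dotted-quad strings -> 32-bit integers (stored as doubles)
  o <- do.call(rbind, strsplit(ip, ".", fixed = TRUE))
  as.numeric(o[, 1]) * 256^3 + as.numeric(o[, 2]) * 256^2 +
    as.numeric(o[, 3]) * 256 + as.numeric(o[, 4])
}

# 1. Parse the blocklist: drop comments, split "a.b.c.d/prefix" entries
raw   <- readLines("firehol_level1.netset")
cidrs <- raw[!startsWith(raw, "#") & nzchar(raw)]
parts <- tstrsplit(cidrs, "/", fixed = TRUE)
nets  <- data.table(base   = parts[[1]],
                    prefix = as.integer(fifelse(is.na(parts[[2]]), "32", parts[[2]])))
nets[, start := ip_to_int(base)]
nets[, n_ips := 2^(32 - prefix)]   # unique addresses covered by each subnet

# 2. Locate each subnet's network address within the geolocation ranges
geo <- fread("IP2LOCATION-LITE-DB3.CSV",
             col.names = c("ip_from", "ip_to", "country_code",
                           "country_name", "region_name", "city_name"))
geo[, ip_from := as.numeric(ip_from)]
setorder(geo, ip_from)
idx <- findInterval(nets$start, geo$ip_from)   # index of the containing range
idx[idx == 0] <- NA_integer_
nets[, `:=`(country = geo$country_name[idx], region = geo$region_name[idx])]

# 3. Aggregate: estimated number of blocklisted IPs per subnational region
region_counts <- nets[, .(cybercrime_ips = sum(n_ips)), by = .(country, region)]
```

Working at the level of subnet ranges, as above, avoids expanding the full ~600 million individual addresses while still yielding a per-region count.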

Although the FireHOL IP blocklist has the same limitations as other technical data, it was used in this study for several reasons. The basic function of IP addresses in the modern internet makes them an indispensable element in different phases of cybercrime, as well as a key ingredient of cybercrime attribution and digital evidence collection; as a result, an IP-based firewall is one of the most effective and commonly used preventive measures in cybersecurity defence. The FireHOL IP blocklist has the advantage of global coverage and includes different cybercrime types, dynamically collecting malicious IPs from multiple sources around the world. Although it is difficult to determine whether the IPs in the blocklist are the real sources of cybercrime or come from infected machines, the list does reflect the geographical distribution of malicious IPs related to certain cybercriminal activities. In addition, it provides a more fine-grained estimate of subnational cybercrime geography than country-level statistics.

Explanatory variables

We adopted a broad set of explanatory variables to characterise the social, economic, political, technological, and cybersecurity conditions based on the conceptual model presented above (Fig. 1). The social environment is represented by population, the population aged 15–64, the education index, the nighttime light index, and the human development index (HDI). The economic condition is measured by the income index, GDP growth, the Gini index, unemployment (% of the total labour force), and the poverty rate. The political environment is measured by five dimensions of the Worldwide Governance Indicators (WGI): control of corruption, government effectiveness, rule of law, political stability and absence of violence/terrorism, and voice and accountability. The technological environment is reflected by internet infrastructure (the number of internet data centres and internet exchange centres), internet users (% of the population), international bandwidth (per internet user), secure internet servers (per 1 million people), and fixed broadband subscriptions (per 100 people). Finally, we applied the five dimensions of the Global Cybersecurity Index (GCI), namely legal, technical, organisational, capacity development, and cooperation measures, plus an overall cybersecurity index (the sum of the five measures), to assess each nation's commitment to cybersecurity. Population, income index, education index, HDI, nighttime light, and infrastructure data were collected at the subnational administrative level, while the other variables were derived at the country level. Log transformations (base 10) were used to improve normality for variables with skewed distributions, including population, nighttime light, infrastructure, fixed broadband, secure internet servers, and bandwidth. All variables were normalised for further analysis.
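As a small illustration of this preprocessing step, the snippet below applies the base-10 log transformation to the skewed variables named above and then rescales all numeric columns. The data frame `dat` and its column names are assumptions, and since the text does not specify the normalisation method, z-scores are shown as one plausible choice.

```r
## Illustrative preprocessing sketch; `dat` and its column names are assumed.
skewed <- c("population", "nighttime_light", "infrastructure",
            "fixed_broadband", "secure_server", "bandwidth")

# Base-10 log transform for the skewed variables (+1 guards against log10(0))
dat[skewed] <- lapply(dat[skewed], function(x) log10(x + 1))

# Normalise all numeric variables (z-scores shown here as one option)
num_cols <- sapply(dat, is.numeric)
dat[num_cols] <- lapply(dat[num_cols], function(x) as.numeric(scale(x)))
```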

Generalised linear models (GLMs)

In this study, GLMs were used to assess the potential influence of the explanatory variables on cybercrime and to identify the most important factors. A GLM is an extension of ordinary regression that accommodates non-normal response distributions and link functions (Faraway, 2016). GLM analyses were conducted at two scales: the global scale and the income-group scale. All GLMs were built in R version 4.1.2 using the "glm" function of the "stats" package (R Core Team, 2013), with a Gaussian error distribution and the identity link. The Akaike information criterion (AIC), the coefficient of determination (R²), and the significance levels of the predictors (p-values) were used to evaluate the GLMs; the model with the lowest AIC and highest R² was chosen as the optimal model. Variance inflation factors (VIFs) were calculated using the "car" package (Fox et al., 2012) to test for collinearity between quantitative explanatory variables prior to the GLM analysis, and variables with a VIF greater than 10 were regarded as collinearity generators and excluded from further analysis. The relative contributions and coefficients of each GLM were plotted using the "GGally" package.
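A compact sketch of this workflow is shown below. It follows the steps described in the text (VIF screening with the car package, a Gaussian GLM, AIC and pseudo-R² evaluation), but the model formula and the data frame `dat` with a `cybercrime_ips` response are assumptions rather than the authors' actual specification.

```r
## GLM workflow sketch; the formula, data frame, and variable names are assumed.
library(car)   # vif()

full_formula <- cybercrime_ips ~ population + education_index + nighttime_light +
  income_index + gini + control_of_corruption + infrastructure +
  internet_users + fixed_broadband + secure_server + gci_overall

# 1. Collinearity screening: drop predictors with VIF > 10, then refit
m0   <- glm(full_formula, data = dat, family = gaussian())
keep <- names(which(vif(m0) <= 10))
m1   <- glm(reformulate(keep, response = "cybercrime_ips"),
            data = dat, family = gaussian())

# 2. Evaluation: AIC plus a pseudo-R^2 (1 - residual deviance / null deviance)
AIC(m1)
1 - m1$deviance / m1$null.deviance
summary(m1)    # coefficient signs and p-values of the retained predictors
```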

Structural equation modelling (SEM)

SEM was used to examine the causal relationships within the network of interacting factors, thereby distinguishing the direct from the indirect drivers of cybercrime. SEM is a powerful multivariate technique increasingly used in scientific investigations to test and evaluate multivariate causal relationships (Fan et al., 2016); it differs from other modelling approaches in that it tests both direct and indirect effects within pre-assumed causal relationships. The following fit indices were considered to evaluate model adequacy: (a) the root mean square error of approximation (RMSEA), a "badness of fit" index for which 0 indicates a perfect fit and higher values indicate lack of fit; (b) the standardised root mean square residual (SRMR), which is similar to the RMSEA and should be less than 0.09 for good model fit; (c) the comparative fit index (CFI), which represents the amount of variance accounted for in the covariance matrix, ranging from 0.0 to 1.0, with higher values indicating better fit; and (d) the Tucker–Lewis index (TLI), a non-normed fit index (NNFI) that is relatively independent of sample size. In this study, the SEM analysis was conducted using AMOS (Arbuckle, 2011).
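The SEM itself was estimated in AMOS, so no R code exists for this step in the original workflow; for readers who prefer an open-source route, a roughly equivalent specification in the R package lavaan might look like the sketch below. The indicator sets follow the variable lists given above, but the measurement model and the mediation paths are illustrative assumptions only.

```r
## Hedged sketch of the structural model in lavaan (the authors used AMOS);
## indicator sets and mediation paths are assumptions based on the text.
library(lavaan)

model <- '
  # measurement model: five latent contextual factors, five indicators each
  social        =~ population + pop_15_64 + education_index + nighttime_light + hdi
  economic      =~ income_index + gdp_growth + gini + unemployment + poverty
  political     =~ corruption + gov_effectiveness + rule_of_law + stability + voice
  technological =~ infrastructure + internet_users + bandwidth + secure_server + fixed_broadband
  cybersecurity =~ gci_legal + gci_technical + gci_organisational + gci_capacity + gci_cooperation

  # structural model: direct effects on cybercrime (H1-H5)
  cybercrime_ips ~ social + economic + political + technological + cybersecurity

  # example mediation paths allowing indirect effects via technology/politics
  technological ~ social + economic
  political     ~ economic
'

fit <- sem(model, data = dat, std.lv = TRUE)
summary(fit, standardized = TRUE, fit.measures = TRUE)   # CFI, TLI, RMSEA, SRMR
```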

Spatial distribution of cybercrime IPs

We mapped the subnational distribution of cybercrime IPs globally, which reveals significant spatial variability (Fig. 2). On a global scale, most cybercrime IPs were located in North America, Central and Eastern Europe, East Asia, India, and eastern Australia. Areas with low numbers of cybercrime IPs were primarily found in large parts of Africa (except South Africa), the western and northern parts of South America, Central America, some regions of the Middle East, the southern parts of Central Asia, and some regions of Southeast Asia. On a continental scale, the number of cybercrime IPs increased gradually from Africa to Europe; the two continents with the most cybercrime IPs were North America and Europe, with North America showing greater variation. This trend appears to be closely associated with regional socioeconomic development. To investigate this relationship further, we grouped the subnational regions by income level according to the World Bank classification and found an even clearer pattern, with high-income regions hosting the majority of cybercrime IPs and lower-middle-income regions hosting the fewest.

Fig. 2: Global subnational distribution of cybercrime IPs. (a) Number of cybercrime IPs at the subnational level. (b) Log-transformed cybercrime IP count by continent: Africa (AF), Asia/Oceania (AS/OC), South America (SA), North America (NA), and Europe (EU). (c) Log-transformed cybercrime IP count by income group: low-income (LI), lower-middle-income (LMI), upper-middle-income (UMI), and high-income (HI) groups. The centre line, boxes, and whiskers show the means, 1 standard error (SE), and 95% confidence intervals (CI), respectively.
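A sketch of how the group comparisons in Fig. 2b–c could be tabulated is given below, assuming the per-region counts from the earlier sketch have been joined with the World Bank income classification in an `income_group` column (an assumed field, not part of the blocklist itself).

```r
## Illustrative group summary for Fig. 2c; `income_group` is an assumed column
## obtained by joining the World Bank classification onto region_counts.
library(dplyr)

region_counts %>%
  mutate(log_ips = log10(cybercrime_ips + 1)) %>%
  group_by(income_group) %>%
  summarise(mean_log_ips = mean(log_ips, na.rm = TRUE),
            se           = sd(log_ips, na.rm = TRUE) / sqrt(sum(!is.na(log_ips))),
            n_regions    = n()) %>%
  arrange(desc(mean_log_ips))
```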

Major factors influencing cybercrime

GLMs were built on the 26 representative explanatory variables across the five categories identified in the conceptual framework. After excluding 8 collinear variables (government effectiveness, rule of law, HDI, and the 5 cybersecurity measures) and 7 nonsignificant variables (GDP growth, unemployment, poverty, political stability, voice and accountability, bandwidth, and internet users), the global-scale GLM includes 11 variables and has an R² of 0.82. Figure 3 shows the relative contribution of each predictor variable to the model. Globally, the social and technological factors contribute most, with relative contribution rates of 53.4% and 30.1%, respectively. Infrastructure alone explains up to 18.1% of the model variance in cybercrime (bringing R² to 0.504), while the further inclusion of population and the education index improves the explained variance by 18.3% and 28.5%, respectively (bringing R² to 0.596 and 0.766). The same holds for the GLMs of the different income groups, indicating that despite the main effects of technological factors, including a broad set of socioeconomic factors significantly improves the accuracy of models that attempt to quantify the driving forces of cybercrime.

Fig. 3: Relative contribution of predictor variables to cybercrime.
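The incremental comparison described above (infrastructure alone, then adding population and the education index) can be reproduced with nested Gaussian GLMs, as sketched below. The paper does not state the exact relative-contribution method it used, so this nested pseudo-R² comparison is shown only as one simple way to obtain figures of that kind; the variable names are the assumed ones from the earlier sketches.

```r
## Nested-model sketch of incremental variance explained; not necessarily the
## authors' decomposition method. Variable names are assumptions.
pseudo_r2 <- function(m) 1 - m$deviance / m$null.deviance

m_infra <- glm(cybercrime_ips ~ infrastructure, data = dat, family = gaussian())
m_pop   <- update(m_infra, . ~ . + population)
m_edu   <- update(m_pop,   . ~ . + education_index)

sapply(list(infrastructure  = m_infra,
            plus_population = m_pop,
            plus_education  = m_edu), pseudo_r2)
```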

When assessed by income group, the social and technological factors remained the most important in explaining cybercrime, but the contribution of each variable varied by group. For example, the contribution of the income index decreases gradually from low-income regions to wealthier regions, while the Gini index matters more in upper-middle-income and high-income regions than in low-income and lower-middle-income regions. Fixed broadband subscriptions contributed the most in low-income regions and the least in high-income regions. Additionally, cybersecurity preparedness had a greater influence in low-income and lower-middle-income regions.

Estimated effect of factors on cybercrime

The coefficient values in Fig. 4 represent effect sizes from the GLMs for the relationship between cybercrime and the five categories of contextual factors. At the global scale, cybercrime is positively correlated with social, economic, and technological factors, suggesting that most cybercrime is launched from regions with larger populations, higher urbanisation, better educational and economic conditions, and, most importantly, better internet infrastructure and communication conditions. By contrast, cybercrime is negatively related to political and cybersecurity factors, indicating that the control of corruption and the commitment to cybersecurity exert certain inhibitory effects on cybercrime.

Fig. 4: Estimated effects of contextual factors on cybercrime. Coefficient values are shown as dots (filled for significant variables, hollow for nonsignificant variables), and bars represent 95% CIs.

Across income groups, the ways in which contextual factors affect cybercrime remain broadly consistent with the global results, but subtle differences emerge. In low-income countries, the influence of the income index on cybercrime is the strongest, and cybercrime is significantly associated with a higher income index, a higher education index, better infrastructure, and more fixed broadband subscriptions. This pattern may indicate that, in low-income countries, wealthier areas tend to have more cybercrime because they offer better communication conditions. In high-income countries, where the internet is universally available, the roles of the income index and fixed broadband subscriptions weaken. In contrast, the effects of the Gini index and education are stronger in wealthier countries, indicating that economic inequality and education can be important drivers of cybercrime there. Moreover, the control of corruption is negatively related to cybercrime in lower-middle-income, upper-middle-income, and high-income regions.

Pathways of factors for cybercrime

To understand the intricate interactions among the predictors, we performed SEM based on the conceptual model. The SEM is composed of five latent variables representing the social, economic, political, technological, and cybersecurity contexts, each measured by five of the explanatory variables. The overall model fit is good (CFI = 0.917, TLI = 0.899, SRMR = 0.058). The SEM confirms the hypotheses of the conceptual model, and all relationships are statistically significant. Fig. 5 shows the results.

Fig. 5: SEM results. Black arrows indicate positive effects, red arrows indicate negative effects, and the values on the arrows between variables are standardised path coefficients.

According to the SEM, all five hypotheses are supported. Specifically, the social, economic, and technological factors have direct positive effects on cybercrime (standardised direct path coefficients of 0.03, 0.10, and 0.61, respectively), meaning that when these factors rise by 1 standard deviation, cybercrime rises by 0.03, 0.10, and 0.61 standard deviations, respectively. By contrast, the political and cybersecurity factors have direct negative effects on cybercrime (standardised direct path coefficients of −0.22 and −0.07), meaning that a 1 standard deviation rise in these factors is associated with a decrease in cybercrime of 0.22 and 0.07 standard deviations, respectively. It is worth noting that although the direct effects of the social and economic factors on cybercrime are relatively small, their indirect effects through the mediation of the technological and political factors are non-negligible.
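For readers unfamiliar with how such indirect effects are obtained, the short sketch below shows the standard arithmetic: an indirect effect is the product of the standardised coefficients along a mediated path, and the total effect is the direct effect plus all indirect effects. The direct effects (0.10 and 0.61) are taken from the text, but the mediation coefficient linking the economic and technological factors is a hypothetical placeholder, not a value reported in the paper.

```r
## Illustrative arithmetic only; the economic -> technological coefficient is a
## hypothetical placeholder, not a reported result.
direct_econ        <- 0.10   # reported direct effect of the economic factor
path_econ_to_tech  <- 0.50   # hypothetical standardised path: economic -> technological
path_tech_to_crime <- 0.61   # reported direct effect of the technological factor

indirect_econ <- path_econ_to_tech * path_tech_to_crime   # 0.305
total_econ    <- direct_econ + indirect_econ              # 0.405
```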

In sum, the SEM quantifies the direct and indirect effects of social, economic, political, technological, and cybersecurity factors on cybercrime, consistent with the hypotheses outlined in the conceptual model. More importantly, the results suggest that even though cybercrime is primarily determined by technological factors, the direct and indirect effects of the underlying social, economic, political, and cybersecurity factors also play significant roles. This suggests that the technological factor is a necessary but not sufficient condition for the occurrence of cybercrime.

In the current study, we mapped the global subnational distribution of cybercrimes based on a novel cybersecurity data set, the FireHOL IP blocklist. Given the widespread difficulty in obtaining cybercrime data, the data sources used in this study could provide an alternative measure of the subnational cybercrime level on a global scale. Compared to country-level studies (Amin et al., 2021 ; Garg et al., 2013 ; Goel and Nelson, 2009 ; Solano and Peinado, 2017 ; Sutanrikulu et al., 2020 ), the results present a more fine-grained view of the spatial distribution of cybercrime. The map reveals high spatial variability of cybercrime between and within countries, which appears to be closely related to local socioeconomic development status.

To identify the driving forces behind cybercrime, we proposed a theoretical framework encompassing the social, economic, political, technological, and cybersecurity factors that influence cybercrime, drawing on existing theoretical and empirical research. On this basis, we used GLMs to identify the major factors and their contributions to cybercrime, and SEM to quantify the direct and indirect effects of these driving forces. The GLM results show that technological factors alone are insufficient to account for cybercrime, and that including a broad suite of social, economic, political, technological, and cybersecurity factors markedly improves model performance. Global-scale modelling indicates that cybercrime is closely associated with socioeconomic and internet development, as developed regions have more available computers and better communication conditions that facilitate the commission of cybercrime. Some studies have argued that wealthier areas might have fewer incentives for cybercrime, while poorer areas could benefit more from cybercriminal activity (Ki et al., 2006; Kigerl, 2012; Kshetri, 2010). Our study, however, shows that the technological conditions constituted by internet infrastructure and communications are necessary for the production of cybercrime, rendering wealthier areas more convenient places from which to commit it.

Meanwhile, the GLMs of the four income groups demonstrate important differences in the impacts of the explanatory variables on cybercrime. For example, in low-income countries, where overall internet penetration is low, cybercrime originates mainly in more developed areas with better internet infrastructure, higher internet penetration, and higher education levels. A typical example is the "Yahoo Boys" in Nigeria: young Nigerians engaged in cyber fraud through Yahoo mail, mostly well-educated undergraduates with digital skills (Lazarus and Okolorie, 2019). A range of factors, such as high unemployment, a lack of legitimate economic opportunities, a prevalent cybercrime subculture, weak cybercrime laws, and a high level of corruption, have motivated them to seek illegal wealth through cybercrime. In contrast, cybercrime in high-income regions originates in areas with a high Gini index and a high education level. One possible explanation is that well-educated individuals living in countries with a high Gini index are paid less for their skills than their counterparts elsewhere, which motivates them to engage in cybercrime to improve their circumstances.

Encouragingly, both the GLM and SEM results suggest that political factors and cybersecurity preparedness can mitigate the incidence of cybercrime to some extent, in agreement with the hypotheses. Although previous country-level studies suggest that countries facing more cybersecurity threats tend to have a high level of cybersecurity preparedness (Makridis and Smeets, 2019; Calderaro and Craig, 2020), our results indicate that cybersecurity preparedness can in turn reduce the cybercrime originating from a country. This underscores the importance of government intervention and cybersecurity capacity building. Necessary interventions may include the enactment and enforcement of laws, regulation of telecommunication operators and internet service providers (ISPs), stronger enforcement by security and judicial departments, and improvement of cybersecurity capacity. Given the interconnectedness of cyberspace and the borderless nature of cybercrime, cybersecurity is not a problem that can be solved by any single country. Enhancing international cooperation in the legal, technical, organisational, and capacity aspects of cybersecurity is therefore essential for tackling these challenges.

As the SEM shows, technological factors are closely associated with socioeconomic development and serve as a mediator between socioeconomic conditions and cybercrime. In the past decades, ICTs have developed unevenly across different parts of the world owing to a range of geographic, socioeconomic, and demographic factors, producing the global digital divide (Pick and Azari, 2008). Disparities in internet access have largely determined the spatial patterns of cybercrime. Currently, developing countries (especially those in Asia, Africa, and Latin America) are the fastest-growing regions in terms of ICT infrastructure and internet penetration (Pandita, 2017). Yet even in developed countries, technological innovation has outpaced the establishment of the legal regulations, national institutions and frameworks, policies and strategies, and other mechanisms that could help manage the new challenges (Bastion and Mukku, 2020). Many developing countries face difficulties in combating cybercrime because they lack adequate financial and human resources, legal and regulatory frameworks, and technical and institutional capacities, providing fertile ground for cybercriminal activity. It is therefore urgent and necessary to enhance the cybersecurity capacities of developing countries and to engage them in international cybersecurity cooperation, ensuring that they can maximise the socioeconomic benefits of technological development rather than be harmed by it.

Cybercrime is a sophisticated social phenomenon rooted in deep geographical and socioeconomic causes. This study offers an alternative perspective on cybersecurity problems beyond purely technical measures. We believe that improvements in cybersecurity require not only technological, legal, regulatory, and policing measures but also broader approaches that address the underlying social, economic, and political issues that influence cybercrime. While the results presented here are preliminary, we hope this work provides an extensible framework for future studies investigating the driving forces of cybercrime.

However, our study has several limitations arising from the data. First and foremost, the geolocation of cybercrimes or cybercriminals remains a major challenge for cybercrime research. Although the FireHOL IP blocklist has the potential to measure global cybercrime at high spatial resolution, IP-based measures may not accurately capture the true locations of cybercriminals, who may simply exploit places with better ICT infrastructure. Caution should therefore be exercised in interpreting the associations between cybercrime and socioeconomic factors. Future studies combining survey data, police and court data, and cybercrime attribution techniques are needed to validate the accuracy of IP-based technical data in measuring the geography of cybercrime and to deepen understanding of its driving forces. In addition, COVID-19 has greatly changed the way we live and work, and many studies suggest that the pandemic increased the frequency of cybercrime amid economic recession, high unemployment, accelerated digital transformation, and unprecedented uncertainty (Lallie et al., 2021; Eian et al., 2020; Pranggono and Arabo, 2021); unfortunately, the blocklist data cannot capture this dynamic well because they lack temporal attributes. Furthermore, different types of cybercrime can be influenced by different mechanisms. We use the total count of cybercrime IPs across all types rather than examining a specific type, given that such segmentation may result in data sparsity for some groups; future studies are needed to determine how different categories of cybercrime are affected by socioeconomic factors. Finally, micro-level individual and behavioural characteristics and more fine-grained explanatory variables should be included to better understand cybercrime.

Data availability

The FireHOL IP lists are publicly available at the FireHOL website ( https://iplists.firehol.org/ and https://github.com/firehol/blocklist-ipsets ); population, education index, income index, HDI, and subnational region data are available from the Global Data Lab ( https://globaldatalab.org ); nighttime light data are available from the Earth Observation Group ( https://eogdata.mines.edu/download_dnb_composites.html ); population aged 15–64, Gini index, GDP growth, unemployment, poverty rate, control of corruption, government effectiveness, rule of law, political stability and absence of violence/terrorism, and voice and accountability are obtained from the World Bank ( https://databank.worldbank.org/home.aspx ); internet users, international bandwidth, secure internet servers, and fixed broadband subscriptions are available from the International Telecommunication Union (ITU) ( https://www.itu.int/itu-d/sites/statistics ); internet infrastructure data are collected from TeleGeography ( https://www.internetexchangemap.com ) and the World Data Centers Database ( https://datacente.rs ); and the legal, technical, organisational, capacity development, and cooperation measures and the overall cybersecurity index were obtained from the ITU's Global Cybersecurity Index (GCI) ( https://www.itu.int/en/ITU-D/Cybersecurity/Pages/global-cybersecurity-index.aspx ).

Amin RW, Sevil HE, Kocak S, Francia G, Hoover P (2021) The spatial analysis of the malicious uniform resource locators (URLs): 2016 dataset case study. Information 12(1):2


Anderson R, Barton C, Böhme R, Clayton R, Van Eeten MJ, Levi M, Moore T, Savage S (2013) Measuring the cost of cybercrime. In: The economics of information security and privacy. Springer, pp. 265–300

Anderson R, Barton C, Böhme R, Clayton R, Ganán C, Grasso T, Levi M, Moore T, Vasek M (2019) Measuring the changing cost of cybercrime. The 18th Annual Workshop on the Economics of Information Security. https://doi.org/10.17863/CAM.41598

Arbuckle JL (2011) IBM SPSS Amos 20 user’s guide. Amos Development Corporation, SPSS Inc. pp. 226–229

Asal V, Mauslein J, Murdie A, Young J, Cousins K, Bronk C (2016) Repression, education, and politically motivated cyberattacks. J Glob Secur Stud 1(3):235–247

Bastion G, Mukku S (2020) Data and the global south: key issues for inclusive digital development. https://doi.org/10.13140/RG.2.2.35091.50724

Bergmann MC, Dreißigacker A, von Skarczinski B, Wollinger GR (2018) Cyber-dependent crime victimization: the same risk for everyone? Cyberpsychol Behav Soc Network 21(2):84–90

Brenner SW (2013) Cybercrime: re-thinking crime control strategies. Crime online: Willan. pp. 12–28

Brewer R, de Vel-Palumbo M, Hutchings A, Holt T, Goldsmith A, Maimon D (2019) Cybercrime prevention: theory and applications. Springer

Bruggemann R, Koppatz P, Scholl M, Schuktomow R (2022) Global cybersecurity index (GCI) and the role of its 5 pillars. Soc Indic Res 159(1):125–143

Calderaro A, Craig AJ (2020) Transnational governance of cybersecurity: policy challenges and global inequalities in cyber capacity building. Third World Q 41(6):917–938

Castillo D, Falzon J (2018) An analysis of the impact of Wannacry cyberattack on cybersecurity stock returns. Rev Econ Financ 13:93–100


Clough J (2015) Principles of cybercrime. Cambridge University Press

Dupont B, Holt T (2022) The human factor of cybercrime. Soc Sci Comput Rev 40(4):860–864

Ehrlich I (1996) Crime, punishment, and the market for offenses. J Econ Perspect 10(1):43–67

Eian IC, Yong LK, Li MYX, Qi YH, Fatima Z (2020) Cyber attacks in the era of covid-19 and possible solution domains. Preprints 2020, 2020090630

Eslahi M, Salleh R, Anuar NB (2012) ‘Bots and botnets: an overview of characteristics, detection and challenges’. 2012 IEEE International Conference on Control System, Computing and Engineering. IEEE, pp. 349–354

Fan Y, Chen J, Shirkey G, John R, Wu SR, Park H, Shao C (2016) Applications of structural equation modeling (SEM) in ecological studies: an updated review. Ecol Process 5(1):1–12

Faraway JJ (2016) Extending the linear model with R: generalized linear, mixed effects and nonparametric regression models. Chapman and Hall/CRC

FireHOL (2021) FireHOL. FireHOL IP lists. https://iplists.firehol.org [Accessed on Aug 21, 2021]

Fox J, Weisberg S, Adler D, Bates D, Baud-Bovy G, Ellison S, Firth D, Friendly M, Gorjanc G, Graves S (2012) Package 'car'. R Foundation for Statistical Computing, Vienna

Garg V, Koster T, Camp LJ (2013) Cross-country analysis of spambots. EURASIP J Inform Secur 2013(1):1–13

Ghafur S, Kristensen S, Honeyford K, Martin G, Darzi A, Aylin P (2019) A retrospective impact analysis of the WannaCry cyberattack on the NHS. NPJ Digit Med 2(1):1–7

Goel RK, Nelson MA (2009) Determinants of software piracy: economics, institutions, and technology. J Technol Transfer 34(6):637–658

Hall T, Sanders B, Bah M, King O, Wigley E (2020) Economic geographies of the illegal: the multiscalar production of cybercrime. Trend OrganCrime 24:282–307

Ho HTN, Luong HT (2022) Research trends in cybercrime victimization during 2010–2020: a bibliometric analysis. SN Soc Sci 2(1):1–32

Holt T, Bossler A (2015) Cybercrime in progress: Theory and prevention of technology-enabled offenses. Routledge

Holt TJ (2017) Cybercrime through an interdisciplinary lens. Routledge

Holt TJ, Bossler AM (2014) An assessment of the current state of cybercrime scholarship. Deviant Behav 35(1):20–40

Holt TJ, Burruss GW, Bossler AM (2018) Assessing the macro-level correlates of malware infections using a routine activities framework. Int J Offender Ther Comp Criminol 62(6):1720–1741


Holt TJ, Schell BH (2011) Corporate hacking and technology-driven crime. Igi Global

Hoque N, Bhattacharyya DK, Kalita JK (2015) Botnet in DDoS attacks: trends and challenges. IEEE Commun Surv Tutor 17(4):2242–2270

Howell CJ, Burruss GW (2020) Datasets for analysis of cybercrime. In: The Palgrave handbook of international cybercrime and cyberdeviance. Palgrave Macmillan. pp. 207–219

Hutchings A, Hayes H (2009) Routine activity theory and phishing victimisation: who gets caught in the ‘net’? Curr Issues Crim Justice 20(3):433–452

Ki E-J, Chang B-H, Khang H (2006) Exploring influential factors on music piracy across countries. J Commun 56(2):406–426

Kigerl A (2012) Routine activity theory and the determinants of high cybercrime countries. Soc Sci Comput Rev 30(4):470–486

Kigerl A (2016) Cyber crime nation typologies: K-means clustering of countries based on cyber crime rates. Int J Cyber Criminol 10(2):147–169

Kigerl A (2021) Routine activity theory and malware, fraud, and spam at the national level. Crime Law Soc Change 76:109–130

Kshetri N (2010) Diffusion and effects of cyber-crime in developing economies. Third World Q 31(7):1057–1079

Kumar S, Carley KM (2016) ‘Approaches to understanding the motivations behind cyber attacks’. 2016 IEEE Conference on Intelligence and Security Informatics (ISI). IEEE, pp. 307–309

Lallie HS, Shepherd LA, Nurse JR, Erola A, Epiphaniou G, Maple C, Bellekens X (2021) Cyber security in the age of covid-19: a timeline and analysis of cyber-crime and cyber-attacks during the pandemic. Comput Secur 105:102248


Lazarus S, Okolorie GU (2019) The bifurcation of the Nigerian cybercriminals: Narratives of the Economic and Financial Crimes Commission (EFCC) agents. Telemat Informat 40:14–26

Leukfeldt R, Holt TJ (2019) The human factor of cybercrime. Routledge

Lianos H, McGrath A (2018) Can the general theory of crime and general strain theory explain cyberbullying perpetration? Crime Delinq 64(5):674–700

Lusthaus J, Bruce M, Phair N (2020) ‘Mapping the geography of cybercrime: a review of indices of digital offending by country’. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW): IEEE, pp. 448–453

Lusthaus J, Varese F (2021) Offline and local: the hidden face of cybercrime. Policing J Policy Pract 15(1):4–14

Maimon D, Wilson T, Ren W, Berenblum T (2015) On the relevance of spatial and temporal dimensions in assessing computer susceptibility to system trespassing incidents. Br J Criminol 55(3):615–634

Makridis CA, Smeets M (2019) Determinants of cyber readiness. J Cyber Policy 4(1):72–89

Mandelcorn S, Modarres M, Mosleh A (2013) An explanatory model of cyberattacks drawn from rational choice theory. Trans Am Nuclear Soc 109(1):1869–1871

McAfee (2021) McAfee and the Center for Strategic and International Studies (CSIS). The Hidden Costs of Cybercrime. https://www.csis.org/analysis/hidden-costs-cybercrime [Accessed on Aug 21, 2021]

McGuire M, Dowling S (2013) Cyber-crime: a review of the evidence. Summary of key findings and implications. Home Office Research Report 75, Home Office, United Kingdom

Meke E (2012) Urbanization and cyber Crime in Nigeria: causes and consequences. Eur J Comput Sci Inform Technol 3(9):1–11

Mezzour G, Carley L, Carley KM (2014) Global mapping of cyber attacks. Available at SSRN 2729302

Mikkola M, Oksanen A, Kaakinen M, Miller BL, Savolainen I, Sirola A, Zych I, Paek H-J (2020) Situational and individual risk factors for cybercrime victimization in a cross-national context. Int J Offender Ther Comparat Criminol https://doi.org/10.1177/0306624X20981041

Mohurle S, Patil M (2017) A brief study of wannacry threat: ransomware attack 2017. Int J Adv Res Comput Sci 8(5):1938–1940

Neal S (2014) Cybercrime, transgression and virtual environments. Crime: Willan, pp. 71–104

Ngo FT, Paternoster R (2011) Cybercrime victimization: an examination of individual and situational level factors. Int J Cyber Criminol 5(1):773

Onuora A, Uche D, Ogbunude F, Uwazuruike F (2017) The challenges of cybercrime in Nigeria: an overview. AIPFU J School Sci 1(2):6–11

Overvest B, Straathof B (2015) What drives cybercrime? Empirical evidence from DDoS attacks. CPB Netherlands Bureau for Economic Policy Analysis

Pandita R (2017) Internet: a change agent an overview of internet penetration & growth across the world. Int J Inform Dissemination Technol 7(2):83

Payne BK (2020) Defining cybercrime. The Palgrave handbook of international cybercrime and cyberdeviance. Palgrave Macmillan. pp. 3–25

Phillips K, Davidson JC, Farr RR, Burkhardt C, Caneppele S, Aiken MP (2022) Conceptualizing cybercrime: definitions, typologies and taxonomies. Forensic Sci 2(2):379–398

Pick JB, Azari R (2008) Global digital divide: Influence of socioeconomic, governmental, and accessibility factors on information technology. Inform Technol Dev 14(2):91–115

Pranggono B, Arabo A (2021) COVID‐19 pandemic cybersecurity issues. Internet Technol Lett 4(2):e247

Pratt TC, Holtfreter K, Reisig MD (2010) Routine online activity and internet fraud targeting: extending the generality of routine activity theory. J Res Crime Delinquency 47(3):267–296

R Core Team (2013) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna

Sarre R, Lau LY-C, Chang LY (2018) Responding to cybercrime: current trends. Taylor & Francis

Solano PC, Peinado AJR (2017) ‘Socio-economic factors in cybercrime: Statistical study of the relation between socio-economic factors and cybercrime’. 2017 International Conference On Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA): IEEE, pp. 1–4

Srivastava SK, Das S, Udo GJ, Bagchi K (2020) Determinants of cybercrime originating within a nation: a cross-country study. J Glob Inf Technol Manag 23(2):112–137

Sutanrikulu A, Czajkowska S, Grossklags J (2020) ‘Analysis of darknet market activity as a country-specific, socio-economic and technological phenomenon’. 2020 APWG Symposium on Electronic Crime Research (eCrime): IEEE, pp. 1–10

UNODC (2013) Comprehensive study on cybercrime. United Nations, New York

Van Eeten M, Bauer JM, Asghari H, Tabatabaie S (2010) The role of internet service providers in botnet mitigation an empirical analysis based on spam data. TPRC

Waldrop MM (2016) How to hack the hackers: The human side of cybercrime. Nature 533: 164–167

Wall D (2007) Cybercrime: the transformation of crime in the information age. Polity

Walters GD (2015) Proactive criminal thinking and the transmission of differential association: a cross-lagged multi-wave path analysis. Crim Just Behav 42(11):1128–1144

Watters PA, McCombie S, Layton R, Pieprzyk J (2012) Characterising and predicting cyber attacks using the Cyber Attacker Model Profile (CAMP). J Money Laund Control. ISSN 1368-5201

Williams ML (2016) Guardians upon high: an application of routine activities theory to online identity theft in Europe at the country and individual level. Br J Criminol 56(1):21–48


Acknowledgements

This research was funded by the National Key Research and Development Project of China, grant number 2020YFB1806500 and the Key Research Program of the Chinese Academy of Sciences, grant number ZDRW-XH-2021-3. We thank Yushu Qian, Ying Liu, Qinghua Tan for providing valuable suggestions.

Author information

Authors and affiliations

Institute of Geographic Sciences and Nature Resources Research, Chinese Academy of Sciences, Beijing, China

Shuai Chen, Mengmeng Hao, Fangyu Ding, Dong Jiang, Jiping Dong & Qiquan Guo

College of Resources and Environment, University of Chinese Academy of Sciences, Beijing, China

Shuai Chen, Mengmeng Hao, Fangyu Ding, Dong Jiang & Jiping Dong

Big Data Center of State Grid Corporation of China, Beijing, China

Shize Zhang

The Administrative Bureau of Chinese Academy of Sciences, Beijing, China

Chundong Gao


Contributions

DJ, QQG and CDG designed the research; SC, FYD, DJ, SZZ and MMH performed the research; SC, FYD and JPD analysed the data; SC, FYD, DJ and MMH wrote the first draft of the paper; JPD, SZZ, QQG, CDG and DJ gave useful edits, comments and suggestions to this work.

Corresponding author

Correspondence to Dong Jiang .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Additional information.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Chen, S., Hao, M., Ding, F. et al. Exploring the global geography of cybercrime and its driving forces. Humanit Soc Sci Commun 10 , 71 (2023). https://doi.org/10.1057/s41599-023-01560-x


Received : 19 May 2022

Accepted : 14 February 2023

Published : 23 February 2023

DOI : https://doi.org/10.1057/s41599-023-01560-x



Review article: Phishing Attacks: A Recent Comprehensive Study and a New Anatomy


  • Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff, United Kingdom

With the significant growth of internet usage, people increasingly share their personal information online. As a result, an enormous amount of personal information and financial transactions becomes vulnerable to cybercriminals. Phishing is an example of a highly effective form of cybercrime that enables criminals to deceive users and steal important data. Since the first reported phishing attack in 1990, it has evolved into a more sophisticated attack vector. At present, phishing is considered one of the most frequent forms of fraud on the Internet. Phishing attacks can lead to severe losses for their victims, including the loss of sensitive information, identity theft, and the compromise of corporate and government secrets. This article aims to evaluate these attacks by identifying the current state of phishing and reviewing existing phishing techniques. Previous studies have classified phishing attacks according to fundamental phishing mechanisms and countermeasures, while discounting the importance of the end-to-end lifecycle of phishing. This article proposes a new, detailed anatomy of phishing that covers attack phases, attacker types, vulnerabilities, threats, targets, attack media, and attacking techniques. The proposed anatomy will help readers understand the lifecycle of a phishing attack, which in turn will increase awareness of these attacks and of the techniques being used, and will help in developing a holistic anti-phishing system. Furthermore, some precautionary countermeasures are investigated, and new strategies are suggested.

Introduction

The digital world is rapidly expanding and evolving, and so are the cybercriminals who rely on the illegal use of digital assets, especially personal information, to inflict damage on individuals. One of the crimes most threatening to internet users is 'identity theft' ( Ramanathan and Wechsler, 2012 ), defined as an attacker impersonating a person in order to steal and use their personal information (e.g., bank details, social security numbers, or credit card numbers) for the attacker's own gain, not just to steal money but also to commit other crimes ( Arachchilage and Love, 2014 ). Cybercriminals have developed many methods of stealing such information, but social-engineering-based attacks remain their favorite approach. One of the social engineering crimes that allows an attacker to perform identity theft is the phishing attack. Phishing has been one of the biggest concerns, as many internet users fall victim to it. It is a social engineering attack in which a phisher attempts to lure users into revealing sensitive information by impersonating a public or trustworthy organization in an automated fashion, so that the internet user trusts the message and reveals sensitive information to the attacker ( Jakobsson and Myers, 2006 ). In phishing attacks, phishers use social engineering techniques to redirect users to malicious websites after they receive an email and follow an embedded link ( Gupta et al., 2015 ). Alternatively, attackers can exploit other media to execute their attacks, such as Voice over IP (VoIP), the Short Message Service (SMS), and Instant Messaging (IM) ( Gupta et al., 2015 ). Phishers have also shifted from sending mass email messages, which target unspecified victims, to more selective phishing, sending their emails to specific victims, a technique called "spear-phishing."

Cybercriminals usually exploit users who lack digital/cyber ethics or who are poorly trained, in addition to technical vulnerabilities, to reach their goals. Susceptibility to phishing varies between individuals according to their attributes and awareness level; therefore, in most attacks, phishers exploit human nature rather than sophisticated technologies. Even though the weakness in the information security chain is attributed to humans more than to technology, there is a lack of understanding about which link in this chain is penetrated first. Studies have found that certain personal characteristics make some people more receptive to various lures ( Iuga et al., 2016 ; Ovelgönne et al., 2017 ; Crane, 2019 ). For example, individuals who tend to obey authority are more likely to fall victim to a Business Email Compromise (BEC) email that pretends to be from a financial institution and requests immediate action, seeing it as legitimate ( Barracuda, 2020 ). Greed is another human weakness that attackers can exploit, for example through emails offering large discounts, free gift cards, and the like ( Workman, 2008 ).

Attackers use various channels to lure the victim, either through a scam or in an indirect manner, to deliver a payload and obtain sensitive and personal information from the victim ( Ollmann, 2004 ). Phishing attacks have already led to damaging losses and can affect the victim not only financially but also through other serious consequences such as loss of reputation or compromise of national security ( Ollmann, 2004 ; Herley and Florêncio, 2008 ). Cybercrime damages were expected to cost the world $6 trillion annually by 2021, up from $3 trillion in 2015, according to Cybersecurity Ventures ( Morgan, 2019 ). Phishing attacks are the most common type of cybersecurity breach, as stated by the official statistics from the cybersecurity breaches survey 2020 in the United Kingdom ( GOV.UK, 2020 ). Although these attacks affect organizations and individuals alike, the loss for organizations is significant, including the cost of recovery, loss of reputation, fines under information laws/regulations, and reduced productivity ( Medvet et al., 2008 ).

Phishing is a field of study that merges social psychology, technical systems, security subjects, and politics. Phishing attacks are increasingly prevalent: a recent study ( Proofpoint, 2020 ) found that nearly 90% of organizations faced targeted phishing attacks in 2019. Of these, 88% experienced spear-phishing attacks, 83% faced voice phishing (Vishing), 86% dealt with social media attacks, 84% reported SMS/text phishing (SMishing), and 81% reported malicious USB drops. The 2018 Proofpoint annual report ( Proofpoint, 2019a ) stated that phishing attacks jumped from 76% in 2017 to 83% in 2018, with all phishing types occurring more frequently than in 2017. The number of phishing attacks identified in the second quarter of 2019 was notably higher than the number recorded in the previous three quarters, and the number in the first quarter of 2020 was higher than in the previous quarter, according to reports from the Anti-Phishing Working Group (APWG) ( APWG, 2018 ), which confirms that phishing attacks are on the rise. These findings show that phishing attacks have increased continuously in recent years, have become more sophisticated, and have gained more attention from cyber researchers and developers seeking to detect and mitigate their impact. This article aims to determine the severity of the phishing problem by providing detailed insights into the phishing phenomenon in terms of phishing definitions, current statistics, anatomy, and potential countermeasures.

The rest of the article is organized as follows. Phishing Definitions provides a number of phishing definitions as well as some real-world examples of phishing. The evolution and development of phishing attacks are discussed in Developing a Phishing Campaign . What Attributes Make Some People More Susceptible to Phishing Attacks Than Others explores the susceptibility to these attacks. The proposed phishing anatomy and types of phishing attacks are elaborated in Proposed Phishing Anatomy . In Countermeasures , various anti-phishing countermeasures are discussed. The conclusions of this study are drawn in Conclusion .

Phishing Definitions

Various definitions of the term "phishing" have been proposed and discussed by experts, researchers, and cybersecurity institutions. Although there is no established definition of the term, due to its continuous evolution, it has been defined in numerous ways based on its use and context. The process of tricking the recipient into taking the attacker's desired action is considered the de facto definition of phishing attacks in general. Some definitions name websites as the only possible medium for conducting attacks. For instance, ( Merwe et al., 2005 , p. 1) defines phishing as "a fraudulent activity that involves the creation of a replica of an existing web page to fool a user into submitting personal, financial, or password data." This definition describes phishing as an attempt to scam the user into revealing sensitive information such as bank details and credit card numbers by sending malicious links that lead to a fake website. Others name emails as the only attack vector. For instance, PhishTank (2006) defines phishing as "a fraudulent attempt, usually made through email, to steal your personal information." A description stated by ( Kirda and Kruegel, 2005 , p.1) defines phishing as "a form of online identity theft that aims to steal sensitive information such as online banking passwords and credit card information from users." Some definitions highlight the usage of combined social and technical skills. For instance, APWG defines phishing as "a criminal mechanism employing both social engineering and technical subterfuge to steal consumers' personal identity data and financial account credentials" ( APWG, 2018 , p. 1). Moreover, the definition from the United States Computer Emergency Readiness Team (US-CERT) describes phishing as "a form of social engineering that uses email or malicious websites (among other channels) to solicit personal information from an individual or company by posing as a trustworthy organization or entity" ( CISA, 2018 ). A detailed definition is presented in ( Jakobsson and Myers, 2006 , p. 1), which describes phishing as "a form of social engineering in which an attacker, also known as a phisher, attempts to fraudulently retrieve legitimate users' confidential or sensitive credentials by mimicking electronic communications from a trustworthy or public organization in an automated fashion. Such communications are most frequently done through emails that direct users to fraudulent websites that in turn collect the credentials in question."

In order to understand the anatomy of the phishing attack, there is a need for a clear and detailed definition that underpins previous existing definitions. Since a phishing attack constitutes a mix of technical and social engineering tactics, a new definition (i.e., anatomy) is proposed in this article, which describes the complete process of a phishing attack. This provides a better understanding for readers, as it covers phishing attacks in depth from a range of perspectives and angles, which may help beginner readers and researchers in this field. To this end, we define phishing as a socio-technical attack in which the attacker targets specific valuables by exploiting an existing vulnerability to pass a specific threat via a selected medium into the victim's system, utilizing social engineering tricks or other techniques to convince the victim into taking a specific action that causes various types of damage.

Figure 1 depicts the general process flow for a phishing attack, which contains four phases; these phases are elaborated in Proposed Phishing Anatomy . As shown in Figure 1 , in most attacks the phishing process is initiated by gathering information about the target; the phisher then decides which attack method is to be used, as the initial steps within the planning phase. The second phase is the preparation phase, in which the phisher starts to search for vulnerabilities through which the victim could be trapped. The phisher conducts the attack in the third phase and waits for a response from the victim. In turn, the attacker collects the spoils in the valuables acquisition phase, which is the last step in the phishing process. To illustrate the above process with an example, an attacker may send a fraudulent email to an internet user pretending to be from the victim's bank, requesting the user to confirm their bank account details or else the account may be suspended. The user may think this email is legitimate since it uses the same graphic elements, trademarks, and colors as their legitimate bank. Submitted information is then transmitted directly to the phisher, who will use it for different malicious purposes such as money withdrawal, blackmailing, or committing further fraud.


FIGURE 1 . General phishing attack process.

Real-World Phishing Examples

Some real-world examples of phishing attacks are discussed in this section to illustrate the complexity of some recent phishing attacks. Figure 2 shows the screenshot of a suspicious phishing email that passed a university's spam filters and reached the recipient's mailbox. As shown in Figure 2 , the phisher uses a sense of importance or urgency in the subject line through the word 'important,' so that the email can trigger a psychological reaction in the user and prompt them into clicking the "View message" button. The email contains a suspicious embedded button; indeed, when hovering over it, the link does not match the Uniform Resource Locator (URL) shown in the status bar. Another clue in this example is that the sender's address is questionable and not known to the receiver. Clicking on the fake button will result in either the installation of a virus or worm onto the computer or the handing over of the user's credentials by redirecting the victim to a fake login page.
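
The mismatch between a link's visible text and its real destination, described above, can be checked automatically. The following is a minimal sketch of such a heuristic for HTML email bodies; it is illustrative only and not taken from any study cited here, and the domain names are hypothetical.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchChecker(HTMLParser):
    """Flags anchors whose visible text names one domain but whose href points to another."""

    def __init__(self):
        super().__init__()
        self._current_href = None
        self._text_parts = []
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href", "")
            self._text_parts = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._text_parts.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            visible = "".join(self._text_parts).strip()
            href_host = urlparse(self._current_href).netloc.lower()
            # Treat the visible text as a URL even if the scheme is missing.
            text_host = urlparse(visible if "://" in visible else "http://" + visible).netloc.lower()
            if "." in text_host and text_host != href_host:
                self.suspicious.append((visible, self._current_href))
            self._current_href = None

checker = LinkMismatchChecker()
checker.feed('<a href="http://evil.example/login">www.mybank.example/secure</a>')
print(checker.suspicious)  # [('www.mybank.example/secure', 'http://evil.example/login')]
```

A real mail filter would combine such a check with other signals, such as sender reputation and authentication results, rather than rely on it alone.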


FIGURE 2 . Screenshot of a real suspicious phishing email received by the authors’ institution in February 2019.

More recently, phishers have taken advantage of the Coronavirus pandemic (COVID-19) to fool their prey. Many Coronavirus-themed scam messages sent by attackers exploited people's fear of contracting COVID-19 and their urgency to look for information related to the virus (for example, some of these attacks were related to Personal Protective Equipment (PPE) such as facemasks). The WHO stated that COVID-19 created an "infodemic", which is favorable for phishers ( Hewage, 2020 ). Cybercriminals also lured people into opening attachments claiming to contain information about people with Coronavirus within the local area.

Figure 3 shows an example of a phishing email in which the attacker claimed to be the recipient's neighbor, pretended to be dying from the virus, and threatened to infect the victim unless a ransom was paid ( Kaspersky, 2020 ).


FIGURE 3 . Screenshot of a coronavirus-related phishing email ( Kaspersky, 2020 ).

Another example is the phishing attack spotted by a security researcher at Akamai in January 2019. The attack attempted to use Google Translate to mask suspicious URLs, prefacing them with the legitimate-looking " www.translate.google.com " address to dupe users into logging in ( Rhett, 2019 ). Similar phishing scams have asked for Netflix payment details, for example, or have been embedded in promoted tweets that redirect users to genuine-looking PayPal login pages. Although the bogus page was very well designed in the latter case, the lack of a Hypertext Transfer Protocol Secure (HTTPS) lock and misspellings in the URL were key red flags (or giveaways) that this was actually a phishing attempt ( Keck, 2018 ). Figure 4A shows a screenshot of a phishing email received by the Federal Trade Commission (FTC). The email prompts the user to update their payment method by clicking on a link, pretending that Netflix is having a problem with the user's billing information ( FTC, 2018 ).
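
URL-masking tricks of the kind used in the Google Translate attack can sometimes be unwrapped programmatically. The short sketch below is illustrative only: it assumes, hypothetically, that the real destination sits in a `u` query parameter, whereas actual Translate URLs may encode the target differently.

```python
from urllib.parse import urlparse, parse_qs

def unwrap_translate_url(url):
    """If `url` is routed through Google Translate, return the wrapped destination.

    Assumption (hypothetical): the destination is carried in a `u` query parameter;
    real Translate URLs may encode it differently, so this is a sketch only.
    """
    parsed = urlparse(url)
    if parsed.netloc.lower().endswith("translate.google.com"):
        return parse_qs(parsed.query).get("u", [None])[0]
    return None  # not a Translate-wrapped link

wrapped = "https://translate.google.com/translate?u=http://phish.example/login"
print(unwrap_translate_url(wrapped))  # http://phish.example/login
```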


FIGURE 4 . Screenshot of the (A) Netflix scam email and (B) fraudulent text message (Apple) ( Keck, 2018 ; Rhett, 2019 )

Figure 4B shows a text message as another example of phishing that is difficult to spot as fake ( Pompon et al., 2018 ). The text message appears to come from Apple, asking the customer to update their account. A sense of urgency is used in the message as a lure to motivate the user to respond.

Developing a Phishing Campaign

Today, phishing is considered one of the most pressing cybersecurity threats for all internet users, regardless of their technical understanding and how cautious they are. These attacks are getting more sophisticated by the day and can cause severe losses to their victims. Although the attacker's primary motivation is stealing money, stolen sensitive data can be used for other malicious purposes such as infiltrating sensitive infrastructure for espionage. Therefore, phishers keep developing their techniques over time alongside the development of electronic media. The following sub-sections discuss phishing evolution and the latest statistics.

Historical Overview

Cybersecurity has been a major concern since the beginning of ARPANET, which is considered to be the first wide-area packet-switching network with distributed control and one of the first networks to implement the TCP/IP protocol suite. The term "phishing", also called carding or brand spoofing, was coined for the first time in 1996, when hackers created randomized credit card numbers using an algorithm to steal users' passwords from America Online (AOL) ( Whitman and Mattord, 2012 ; Cui et al., 2017 ). Phishers then used instant messages or emails to reach users by posing as AOL employees and convince them to reveal their passwords. Attackers believed that requesting customers to update their accounts would be an effective way to make them disclose their sensitive information; thereafter, phishers started to target larger financial companies. The author in ( Ollmann, 2004 ) believes that the "ph" in phishing comes from the term "phreaks", which was coined by John Draper, also known as Captain Crunch, and was used by early Internet criminals when they "phreaked" telephone systems. The "f" in "fishing" was replaced with "ph", since both terms convey the same idea of fishing passwords and sensitive information from the sea of internet users. Over time, phishers developed various and more advanced types of scams for launching their attacks. Sometimes the purpose of the attack is not limited to stealing sensitive information; it can also involve injecting viruses or downloading malicious programs onto a victim's computer. Phishers make use of a trusted source (for instance a bank helpdesk) to deceive victims so that they disclose their sensitive information ( Ollmann, 2004 ).

Phishing attacks are rapidly evolving, and spoofing methods are continuously changing as a response to new corresponding countermeasures. Hackers take advantage of new tool-kits and technologies to exploit systems’ vulnerabilities and also use social engineering techniques to fool unsuspecting users. Therefore, phishing attacks continue to be one of the most successful cybercrime attacks.

The Latest Statistics of Phishing Attacks

Phishing attacks are becoming more common and are significantly increasing in both sophistication and frequency. Lately, phishing attacks have appeared in various forms, with attackers exploiting different channels and threats to trap more victims. These channels could be social networks or VoIP, which can carry various types of threats such as malicious attachments, embedded links within an email, instant messages, scam calls, or other types. Criminals know that social engineering-based methods are effective and profitable; therefore, they keep focusing on social engineering attacks, their favorite weapon, instead of concentrating on sophisticated techniques and toolkits. Phishing attacks have reached unprecedented levels, especially with emerging technologies such as mobile and social media ( Marforio et al., 2015 ). For instance, from 2017 to 2020, phishing attacks among businesses in the United Kingdom increased from 72% to 86%, with a large proportion of the attacks originating from social media ( GOV.UK, 2020 ).

The APWG Phishing Activity Trends Report analyzes and measures the evolution, proliferation, and propagation of phishing attacks reported to the APWG. Figure 5 shows the growth in phishing attacks from 2015 to 2020 by quarters based on APWG annual reports ( APWG, 2020 ). As demonstrated in Figure 5 , in the third quarter of 2019, the number of phishing attacks rose to 266,387, which is the highest level in three years since late 2016. This was up 46% from the 182,465 for the second quarter, and almost double the 138,328 seen in the fourth quarter of 2018. The number of unique phishing e-mails reported to APWG in the same quarter was 118,260. Furthermore, it was found that the number of brands targeted by phishing campaigns was 1,283.
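
As a quick sanity check on the quarter-over-quarter figures quoted above, the reported growth rates can be reproduced directly from the APWG counts. The following few lines are a simple illustration and not part of the original report:

```python
q4_2018, q2_2019, q3_2019 = 138_328, 182_465, 266_387

growth_qoq = (q3_2019 - q2_2019) / q2_2019      # quarter-over-quarter growth
ratio_vs_q4_2018 = q3_2019 / q4_2018            # comparison with Q4 2018

print(f"Q3 2019 vs Q2 2019: +{growth_qoq:.0%}")        # ~ +46%
print(f"Q3 2019 vs Q4 2018: {ratio_vs_q4_2018:.2f}x")  # ~ 1.93x, i.e., almost double
```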


FIGURE 5 . The growth in phishing attacks 2015–2020 by quarters based on data collected from APWG annual reports.

Cybercriminals always take advantage of disasters and hot events for their own gain. With the beginning of the COVID-19 crisis, a variety of themed phishing and malware attacks were launched by phishers against workers, healthcare facilities, and even the general public. A report from Microsoft ( Microsoft, 2020 ) showed that cyber-attacks related to COVID-19 spiked to an unprecedented level in March; most of these scams were fake COVID-19 websites, according to security company RiskIQ ( RISKIQ, 2020 ). The total number of phishing attacks observed by APWG in the first quarter of 2020 was 165,772, up from the 162,155 observed in the fourth quarter of 2019. The number of unique phishing reports submitted to APWG during the first quarter of 2020 was 139,685, up from 132,553 in the fourth quarter of 2019, 122,359 in the third quarter of 2019, and 112,163 in the second quarter of 2019 ( APWG, 2020 ).

A study ( KeepnetLABS, 2018 ) confirmed that more than 91% of system breaches are caused by attacks initiated by email. Although cybercriminals use email as the main medium for leveraging their attacks, many organizations also faced a high volume of other social engineering attacks in 2019, such as social media attacks, SMishing attacks, Vishing attacks, and USB-based attacks (for example, by hiding and delivering malware to smartphones via USB phone chargers and distributing malware-laden free USBs) ( Proofpoint, 2020 ). Information security professionals reported a higher frequency of all types of social engineering attacks year-on-year, according to a report presented by Proofpoint: spear phishing increased to 64% in 2018 from 53% in 2017, Vishing and/or SMishing increased to 49% from 45%, and USB attacks increased to 4% from 3%. The positive finding in this study is that 59% of suspicious emails reported by end-users were classified as potential phishing, indicating that employees are becoming more security-aware, diligent, and thoughtful about the emails they receive ( Proofpoint, 2019a ). In all its forms, phishing can be one of the easiest cyber attacks to fall for. Given the increasing prevalence of different phishing types, Proofpoint conducted a survey to identify the strengths and weaknesses of particular regions in terms of specific fundamental cybersecurity concepts. In this study, 7,000 end-users were asked about their recognition of terms such as phishing, ransomware, SMishing, and Vishing across seven countries: the US, United Kingdom, France, Germany, Italy, Australia, and Japan. The responses differed from country to country; respondents from the United Kingdom recorded the highest recognition of the term phishing at 70% and of the term ransomware at 60%. In contrast, the results showed that the United Kingdom recorded only 18% each for Vishing and SMishing ( Proofpoint, 2019a ), as shown in Table 1 .


TABLE 1 . Percentage of respondents understanding multiple cybersecurity terms from different countries.

On the other hand, a report by Wombat Security reflects responses from more than 6,000 working adults about receiving fraudulent solicitations across six countries: the US, United Kingdom, Germany, France, Italy, and Australia ( Kaspersky, 2020 ). Respondents from the United Kingdom stated that they had received fraudulent solicitations through the following sources: email 62%, phone call 27%, text message 16%, mailed letter 8%, and social media 10%, while 17% confirmed that they had been victims of identity theft ( Kaspersky, 2020 ). The consequences of responding to phishing are serious and costly. For instance, United Kingdom losses from financial fraud across payment cards, remote banking, and cheques totaled £768.8 million in 2016 ( Financial Fraud Action UK, 2017 ). Indeed, the losses resulting from phishing attacks are not limited to financial losses that might exceed millions of pounds, but also include the loss of customers and reputation. According to the 2020 state of the phish report ( Proofpoint, 2020 ), damages from successful phishing attacks can range from lost productivity to cash outlay. The costs can include lost hours from employees, remediation time for information security teams responding to incidents, damage to reputation, lost intellectual property, direct monetary losses, compliance fines, lost customers, legal fees, and more.

There are many targets for phishing, including end-users, businesses, financial services (e.g., banks, credit card companies, and PayPal), retail (e.g., eBay, Amazon), and Internet Service Providers ( wombatsecurity.com, 2018 ). The distribution of organizations affected by phishing, as detected by Kaspersky Lab globally in the first quarter of 2020, is shown in Figure 6 . As shown in the figure, online stores were at the top of the targeted list (18.12%), followed by global Internet portals (16.44%) and social networks in third place (13.07%) ( Kaspersky, 2020 ). The most impersonated brands overall for the first quarter of 2020 were Apple, Netflix, Yahoo, WhatsApp, PayPal, Chase, Facebook, Microsoft, eBay, and Amazon ( Checkpoint, 2020 ).


FIGURE 6 . Distribution of organizations affected by phishing attacks detected by Kaspersky in quarter one of 2020.

Phishing attacks can take a variety of forms to target people and steal sensitive information from them. Current data shows that phishing attacks are still effective, which indicates that the available existing countermeasures are not enough to detect and prevent these attacks especially on smart devices. The social engineering element of the phishing attack has been effective in bypassing the existing defenses to date. Therefore, it is essential to understand what makes people fall victim to phishing attacks. What Attributes Make Some People More Susceptible to Phishing Attacks Than Others discusses the human attributes that are exploited by the phishers.

What Attributes Make Some People More Susceptible to Phishing Attacks Than Others

Why do most existing defenses against phishing not work? What personal and contextual attributes make some users more susceptible to phishing attacks than others? Different studies have discussed these two questions and examined the factors affecting susceptibility to a phishing attack and the reasons why people get phished. Human nature is considered one of the most influential factors in the process of phishing. Everyone is susceptible to phishing attacks because phishers play on an individual's specific psychological/emotional triggers as well as technical vulnerabilities ( KeepnetLABS, 2018 ; Crane, 2019 ). For instance, individuals are likely to click on a link within an email when they see authority cues ( Furnell, 2007 ). In 2017, a report by PhishMe (2017) found that curiosity and urgency were the most common triggers that encourage people to respond to an attack; later, these triggers were replaced by entertainment, social media, and reward/recognition as the top emotional motivators. In the context of a phishing attack, however, these psychological triggers often override people's conscious decisions. For instance, when people are working under stress, they tend to make decisions without thinking of the possible consequences and options ( Lininger and Vines, 2005 ). Moreover, everyday stress can damage areas of the brain that control emotions ( Keinan, 1987 ). Several studies have addressed the association between susceptibility to phishing and demographic variables (e.g., age and gender) in an attempt to identify the reasons behind phishing success in different population groups. Although everyone is susceptible to phishing, studies show that different age groups are more susceptible to certain lures than others. For example, participants aged between 18 and 25 are more susceptible to phishing than other age groups ( Williams et al., 2018 ). The reason younger adults are more likely to fall for phishing is that they are more trusting when it comes to online communication and are also more likely to click on unsolicited e-mails ( Getsafeonline, 2017 ). Moreover, older participants are less susceptible because they tend to be less impulsive ( Arnsten et al., 2012 ). Some studies have found that women are more susceptible than men to phishing, as they click on links in phishing emails and enter information into phishing websites more often than men do. The study published by Getsafeonline (2017) identifies a relative lack of technical know-how and experience among women as the main reason for this. In contrast, a survey conducted by the antivirus company Avast found that men are more susceptible to smartphone malware attacks than women ( Ong, 2014 ). These findings are consistent with the study ( Hadlington, 2017 ) that found men to be more susceptible to mobile phishing attacks than women; the main reason, according to Hadlington (2017), is that men are more comfortable and trusting when using mobile online services. The relationship between the demographic characteristics of individuals and their ability to correctly detect a phishing attack was studied in ( Iuga et al., 2016 ). The study showed that participants with high Personal Computer (PC) usage tend to identify phishing attempts more accurately and faster than other participants.
Another study ( Hadlington, 2017 ) showed that internet addiction, attentional impulsivity, and motor impulsivity were significant positive predictors of risky cybersecurity behaviors, while a positive attitude toward cybersecurity in business was negatively related to risky cybersecurity behaviors. On the other hand, people's trust in some websites/platforms is one of the weaknesses that scammers or crackers exploit, especially when that trust is based on visual appearance, which can fool the user ( Hadlington, 2017 ). For example, fraudsters take advantage of people's trust in a website by replacing a letter from the legitimate site with a number, such as goog1e.com instead of google.com . Another study ( Yeboah-Boateng and Amanor, 2014 ) demonstrates that although college students are unlikely to disclose personal information in response to an email, they can nonetheless easily be tricked by other tactics, making them alarmingly susceptible to email phishing attacks. The reason is that most college students lack a grounding in ICT, especially in terms of security. Although security terms like viruses, online scams, and worms are known to some end-users, these users may have no knowledge of phishing, SMishing, Vishing, and other attacks ( Lin et al., 2012 ). However, the same study ( Yeboah-Boateng and Amanor, 2014 ) shows that younger students are more susceptible than older students, and that students who worked full-time were less likely to fall for phishing.
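
Lookalike domains of the goog1e.com kind mentioned above can often be caught by a very simple normalisation check. The sketch below is illustrative only: it covers a handful of common digit-for-letter swaps and is not a substitute for a proper homoglyph or confusable-character library.

```python
def is_digit_swap_lookalike(candidate, trusted):
    """Rough check for digit-for-letter substitutions (e.g., 'goog1e.com' vs 'google.com').

    A minimal sketch only: it handles a few common swaps and nothing else.
    """
    swaps = {"0": "o", "1": "l", "3": "e", "5": "s"}
    normalised = candidate.lower()
    for digit, letter in swaps.items():
        normalised = normalised.replace(digit, letter)
    return normalised == trusted.lower() and candidate.lower() != trusted.lower()

print(is_digit_swap_lookalike("goog1e.com", "google.com"))   # True: likely a lookalike
print(is_digit_swap_lookalike("google.com", "google.com"))   # False: identical, not spoofed
```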

The study reported in ( Diaz et al., 2020 ) examines user click rates and demographics among undergraduates by sending phishing attacks to 1,350 randomly selected students. Students from various disciplines were involved in the test, from engineering and mathematics to arts and social sciences. The study observed that student susceptibility was affected by a range of factors such as phishing awareness, time spent on the computer, cyber training, age, academic year, and college affiliation. The most surprising finding is that those who have greater phishing knowledge are more susceptible to phishing scams. The authors offer two speculations for this unexpected finding: first, users' awareness of phishing might have increased as a result of repeatedly falling for phishing scams; second, users who fell for the phish might have less knowledge about phishing than they claim. Other findings from this study agree with findings from other studies, namely that older students were more able to detect a phishing email, and that engineering and IT majors had some of the lowest click rates, as shown in Figure 7 , which indicates that some academic disciplines are more susceptible to phishing than others ( Bailey et al., 2008 ).


FIGURE 7 . The number of clicks on phishing emails by students in the College of Arts, Humanities, and Social Sciences (AHSS), the College of Engineering and Information Technology (EIT), and the College of Natural and Mathematical Sciences (NMS) at the University of Maryland, Baltimore County (UMBC) ( Diaz et al., 2020 ).

Psychological studies have also illustrated that the user's ability to avoid phishing attacks is affected by different factors such as browser security indicators and the user's awareness of phishing. The authors in ( Dhamija et al., 2006 ) conducted an experimental study using 22 participants to test users' ability to recognize phishing websites. The study shows that 90% of these participants became victims of phishing websites and 23% of them ignored security indicators such as the status and address bars. In 2015, another study was conducted for the same purpose, in which a number of fake web pages were shown to the participants ( Alsharnouby et al., 2015 ). The results of this study showed that participants successfully detected only 53% of phishing websites. The authors also observed that the time spent looking at browser elements affected the ability to detect phishing. Lack of knowledge or awareness and carelessness are common causes of falling into a phishing trap. Most people have unknowingly opened a suspicious attachment or clicked a fake link that could lead to different levels of compromise. Therefore, focusing on training and preparing users to deal with such attacks is an essential element in minimizing the impact of phishing attacks.

Given the above discussion, susceptibility to phishing varies according to different factors such as age, gender, education level, and internet and PC addiction. Although for each person there is a trigger that can be exploited by phishers, even highly experienced people may fall prey to phishing due to attack sophistication that makes the attack difficult to recognize. Therefore, it is inequitable that the user is always blamed for falling for these attacks; developers must improve anti-phishing systems in ways that prevent the attack from reaching the user in the first place. Understanding the susceptibility of individuals to phishing attacks will help in developing better prevention and detection techniques and solutions.

Proposed Phishing Anatomy

Phishing Process Overview

Generally, most phishing attacks start with an email ( Jagatic et al., 2007 ). The phishing mail could be sent randomly to potential users or targeted at a specific group or individuals. Many other vectors can also be used to initiate the attack, such as phone calls, instant messaging, or physical letters. The steps of the phishing process have been discussed by many researchers, because understanding these steps is important for developing anti-phishing solutions. The author in ( Rouse, 2013 ) divides the phishing attack process into five phases: planning, setup, attack, collection, and cash. A study ( Jakobsson and Myers, 2006 ) discusses the phishing process in detail and explains it as step-by-step phases: preparation for the attack, sending a malicious program using the selected vector, obtaining the user's reaction to the attack, tricking the user into disclosing their confidential information, which is transmitted to the phisher, and finally obtaining the targeted money. The study ( Abad, 2005 ) describes a phishing attack in three phases: the early phase, which includes initializing the attack, creating the phishing email, and sending it to the victim; the second phase, in which the victim receives the email and discloses their information (in the case of a respondent); and the final phase, in which the defrauding is successful. In essence, all phishing scams include three primary phases: the phisher requests sensitive valuables from the target, the target gives away these valuables, and the phisher misuses them for malicious purposes. These phases can be further divided into sub-processes according to phishing trends. Thus, a new anatomy for phishing attacks is proposed in this article, which expands and integrates previous definitions to cover the full life cycle of a phishing attack. The proposed new anatomy, which consists of four phases, is shown in Figure 8 . This new anatomy provides a reference structure for examining phishing attacks in more detail and for understanding potential countermeasures to prevent them. The explanations for each phase and its components are presented as follows:


FIGURE 8 . The proposed anatomy of phishing, built upon the phishing definition proposed in this article and derived from our understanding of a phishing attack.

Figure 8 depicts the proposed anatomy of the phishing attack process, its phases, and its components, drawn from the definition proposed in this article. The proposed anatomy explains each phase of the phishing lifecycle in detail, including attacker and target types, examples of the information that could be collected by the attacker about the victim, and examples of attack methods. The anatomy, as shown in the figure, illustrates a set of vulnerabilities that the attacker can exploit and the mediums used to conduct the attack. Possible threats are also listed, as well as the data collection methods, together with examples of target response types, the types of spoils that the attacker could gain, and how the stolen valuables can be used. This anatomy elaborates on phishing attacks in depth, which helps people to better understand the complete phishing process (i.e., the end-to-end phishing life cycle) and boosts awareness among readers. It also provides insights into the potential solutions for phishing attacks on which we should focus. Instead of always placing the user in the accusation ring as the only reason behind phishing success, developers should focus on solutions that mitigate the initiation of the attack by preventing the bait from reaching the user. For instance, to reach the target's system, the threat has to pass through many layers of technology or defenses by exploiting one or more vulnerabilities, such as web and software vulnerabilities.

Planning Phase

This is the first stage of the attack, in which the phisher makes a decision about the targets and starts gathering information about them (individuals or a company). Phishers gather information about their victims to lure them based on psychological vulnerability. This information can include names and e-mail addresses of individuals, or the customers of a targeted company. Victims can also be selected randomly, through mass mailings, or targeted by harvesting their information from social media or any other source. Targets for phishing could be any user with a bank account and a computer connected to the Internet. Phishers target businesses such as financial services, retail companies such as eBay and Amazon, and internet service providers such as MSN/Hotmail and Yahoo ( Ollmann, 2004 ; Ramzan and Wuest, 2007 ). This phase also includes devising attack methods such as building fake websites (sometimes phishers reuse a scam page that has already been designed or used), designing malware, and constructing phishing emails. The attacker can be categorized based on the attack motivation. There are four types of attackers, as mentioned in studies ( Vishwanath, 2005 ; Okin, 2009 ; EDUCBA, 2017 ; APWG, 2020 ):

▪ Script kiddies: the term script kiddies refers to attackers with no technical background or knowledge of writing sophisticated programs or developing phishing tools; instead, they use scripts developed by others in their phishing attacks. Although the term comes from children who use readily available phishing kits and virus toolkits to spread malware, it does not relate precisely to the actual age of the phisher. Script kiddies can gain access to website administration privileges and commit a "web cracking" attack. Moreover, they can use hacking tools to compromise remote computers into a so-called "botnet"; a single compromised computer is called a "zombie computer." These attackers do not just sit back and enjoy phishing; they can cause serious damage, such as stealing information or uploading Trojans or viruses. In February 2000, an attack launched by Canadian teen Mike Calce resulted in $1.7 million US Dollars (USD) in damages from Distributed Denial of Service (DDoS) attacks on CNN, eBay, Dell, Yahoo, and Amazon ( Leyden, 2001 ).

▪ Serious crackers: also known as Black Hats. These attackers can execute sophisticated attacks and develop worms and Trojans for their attacks. They maliciously hijack people's accounts and steal credit card information, destroy important files, or sell compromised credentials for personal gain.

▪ Organized crime: this is the most organized and effective type of attacker, capable of inflicting significant damage on victims. These groups hire serious crackers to conduct phishing attacks. Moreover, they can thoroughly trash a victim's identity and commit devastating fraud, as they have the skills, tools, and manpower. An organized cybercrime group is a team of expert hackers who share their skills to build complex attacks and launch phishing campaigns against individuals and organizations. These groups offer their work as "crime as a service" and can be hired by terrorist groups, organizations, or individuals.

▪ Terrorists: due to our dependency on the internet for most activities, terrorist groups can easily conduct acts of terror remotely, which could have an adverse impact. These types of attacks are dangerous since the perpetrators do not fear the consequences, for instance going to jail. Terrorists can use the internet to maximum effect to create fear and violence, as it requires limited funds, resources, and effort compared to, for example, buying bombs and weapons for a traditional attack. Often, terrorists use spear phishing to launch their attacks for different purposes such as inflicting damage, cyber espionage, gathering information, locating individuals, and other vandalism purposes. Cyber espionage has been used extensively by cyber terrorists to steal sensitive information on national security, commercial information, and trade secrets, which can be used for terrorist activities. These types of crimes may target governments, organizations, or individuals.

Attack Preparation

After making a decision about the targets and gathering information about them, phishers start to set up the attack by scanning for vulnerabilities to exploit. The following are some examples. The attacker might exploit a buffer overflow vulnerability to take control of target applications, create a DoS attack, or compromise computers. Moreover, "zero-day" software vulnerabilities, which refer to newly discovered vulnerabilities in software programs or operating systems, can be exploited before they are fixed ( Kayne, 2019 ). Another example is browser vulnerabilities: adding new features and updates to a browser might introduce new vulnerabilities to the browser software ( Ollmann, 2004 ). In 2005, attackers exploited a cross-domain vulnerability in Internet Explorer (IE) ( Symantic, 2019 ). The cross-domain policy is used to separate content from different sources in Microsoft IE; attackers exploited a flaw in it that enabled them to execute programs on a user's computer after running IE. According to US-CERT, hackers were actively exploiting this vulnerability. To carry out a phishing attack, attackers need a medium through which they can reach their target. Therefore, apart from planning the attack to exploit potential vulnerabilities, attackers choose the medium that will be used to deliver the threat to the victim and carry out the attack. These mediums could be the internet (social networks, websites, emails, cloud computing, e-banking, mobile systems), VoIP (phone calls), or text messages. For example, one of the actively used mediums is Cloud Computing (CC). CC has become one of the more promising technologies and has largely replaced conventional computing technologies. Despite the considerable advantages produced by CC, its adoption faces several controversial obstacles, including privacy and security issues ( CVEdetails, 2005 ). Because different customers can share the same resources in the cloud, virtualization vulnerabilities may be exploited by a malicious customer to perform security attacks on other customers' applications and data ( Zissis and Lekkas, 2012 ). For example, in September 2014, private photos of some celebrities suddenly circulated on the internet in one of the more notorious data breaches; the investigation revealed that the iCloud accounts of the celebrities had been breached ( Lehman and Vajpayee, 2011 ). According to Proofpoint, in 2017 attackers used Microsoft SharePoint to infect hundreds of campaigns with malware through messages.

Attack Conducting Phase

This phase involves using attack techniques to deliver the threat to the victim, as well as the victim's interaction with the attack in terms of responding or not. After the victim's response, the system may be compromised by the attacker to collect the user's information using techniques such as injecting client-side scripts into webpages ( Johnson, 2016 ). Phishers can also compromise hosts without any technical knowledge by purchasing access from hackers ( Abad, 2005 ). A threat is a possible danger that might exploit a vulnerability to compromise people's security and privacy or cause possible harm to a computer system for malicious purposes. Threats could be malware, botnets, eavesdropping, unsolicited emails, or viral links. Several phishing techniques are discussed in the subsection Types and Techniques of Phishing Attacks .

Valuables Acquisition Phase

In this stage, the phisher collects information or valuables from victims and uses them illegally for purchases, transferring money without the user's knowledge, or selling the credentials on the black market. Attackers target a wide range of valuables from their victims, ranging from money to people's lives. For example, attacks on online medical systems may lead to loss of life. Victims' data can be collected by phishers manually or through automated techniques ( Jakobsson et al., 2007 ).

Data collection can be conducted either during or after the victim's interaction with the attacker. To collect data manually, simple techniques are used in which victims interact directly with the phisher, relying on relationships within social networks or other human deception techniques ( Ollmann, 2004 ). In automated data collection, several techniques can be used, such as the fake web forms used in web spoofing ( Dhamija et al., 2006 ). Additionally, the victim's public data, such as the user's profile on social networks, can be used to collect the background information required to initiate social engineering attacks ( Wenyin et al., 2005 ). In VoIP or phone attacks, techniques such as recorded messages are used to harvest users' data ( Huber et al., 2009 ).

Types and Techniques of Phishing Attacks

Phishers conduct their attack either by psychologically manipulating individuals into disclosing personal information (i.e., a deceptive attack, as a form of social engineering) or by using technical methods. Phishers, however, usually prefer deceptive attacks that exploit human psychology rather than technical methods. Figure 9 illustrates the types of phishing and the techniques used by phishers to conduct a phishing attack. Each type and technique is explained in subsequent sections and subsections.


FIGURE 9 . Phishing attack types and techniques drawing upon existing phishing attacks.

Deceptive Phishing

Deceptive phishing is the most common type of phishing attack, in which the attacker uses social engineering techniques to deceive victims. In this type of phishing, a phisher uses either social engineering tricks, by making up scenarios (i.e., a false account update or security upgrade), or technical methods (i.e., using legitimate trademarks, images, and logos) to lure the victim and convince them of the legitimacy of the forged email ( Jakobsson and Myers, 2006 ). By believing these scenarios, the user falls prey and follows the given link, which leads them to disclose their personal information to the phisher.

Deceptive phishing is performed through phishing emails, fake websites, phone phishing (scam calls and IM), social media, and many other mediums. The most common social phishing types are discussed below.

Phishing e-Mail

The most common threat vector is deceiving people via email communications, and this remains the most popular phishing type to date. A phishing email or spoofed email is a forged email sent from an untrusted source to thousands of victims at random. These fake emails claim to be from a person or financial institution that the recipient trusts, in order to convince recipients to take actions that lead them to disclose their sensitive information. A more targeted phishing email aimed at a particular group or at individuals within the same organization is called spear phishing. In this type, the attacker may gather information related to the victim, such as their name and address, so that the email appears to be a credible message from a trusted source ( Wang et al., 2008 ); this is linked to the planning phase of the phishing anatomy proposed in this article. A more sophisticated form of spear phishing is called whaling, which targets high-ranking people such as CEOs and CFOs. An example of a spear-phishing victim from early 2016 is the phishing email that compromised the Gmail account of Clinton campaign chairman John Podesta ( Parmar, 2012 ). Clone phishing is another type of email phishing, in which the attacker clones a legitimate and previously delivered email by spoofing the email address and reusing information related to the recipient, such as addresses from the legitimate email, while replacing links or adding malicious attachments ( Krawchenko, 2016 ). The basic scenario for this attack is illustrated previously in Figure 4 and can be described in the following steps.

1. The phisher sets up a fraudulent email containing a link or an attachment (planning phase).

2. The phisher executes the attack by sending a phishing email to the potential victim using an appropriate medium (attack conducting phase).

3. The link (if clicked) directs the user to a fraudulent website, or to download malware in case of clicking the attachment (interaction phase).

4. The malicious website prompts users to provide confidential information or credentials, which are then collected by the attacker and used for fraudulent activities (valuables acquisition phase).

Often, the phisher does not use the credentials directly; instead, they resell the obtained credentials or information on a secondary market ( Jakobsson and Myers, 2006 ), for instance, script kiddies might sell the credentials on the dark web.
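
The spoofed sender identity described in the steps above can sometimes be surfaced with a simple header heuristic. The following minimal sketch uses hypothetical addresses and is illustrative only; a real filter would also consult SPF/DKIM/DMARC results rather than rely on headers alone.

```python
from email import message_from_string
from email.utils import parseaddr

def reply_domain_mismatch(raw_email):
    """Flag emails whose Reply-To domain differs from the From domain.

    A minimal heuristic sketch (hypothetical addresses), not a complete spoofing check.
    """
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

sample = (
    "From: Account Support <support@mybank.example>\n"
    "Reply-To: helpdesk@collector.example\n"
    "Subject: Important: confirm your account\n"
    "\n"
    "Please confirm your account details.\n"
)
print(reply_domain_mismatch(sample))  # True: replies would go to a different domain
```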

Spoofed Website

These are also called phishing websites: the phisher forges a website that appears genuine and looks similar to the legitimate website. An unsuspecting user is redirected to this website after clicking a link embedded within an email, through an advertisement (clickjacking), or by any other means. If the user continues to interact with the spoofed website, sensitive information will be disclosed and harvested by the phisher ( CSIOnsite, 2012 ).

Phone Phishing (Vishing and SMishing)

This type of phishing is conducted through phone calls or text messages, in which the attacker pretends to be someone the victim knows or some other trusted source the victim deals with. A user may receive a convincing security alert message from a bank urging the victim to contact a given phone number, with the aim of getting the victim to share passwords, PIN numbers, or other Personally Identifiable Information (PII). The victim may also be duped into clicking on an embedded link in the text message. The phisher can then take the credentials entered by the victim and use them to log in to the victim's instant messaging service to phish other people from the victim's contact list. A phisher can also make use of Caller IDentification (CID) spoofing to convince the victim that the call is from a trusted source, or leverage internet protocol private branch exchange (IP PBX) tools, which are open-source, software-based systems that support VoIP ( Aburrous et al., 2008 ). A report from Fraud Watch International about phishing attack trends for 2019 anticipated an increase in SMishing, where the text message content is only viewable on a mobile device ( FraudWatchInternational, 2019 ).

Social Media Attack (Soshing, Social Media Phishing)

Social media is the new favorite medium for cybercriminals to conduct their phishing attacks. Threats on social media include account hijacking, impersonation attacks, scams, and malware distribution. However, detecting and mitigating these threats takes longer than detecting traditional methods, as social media exists outside of the network perimeter. For example, nation-state threat actors conducted an extensive series of social media attacks on Microsoft in 2014: multiple Twitter accounts were affected, and passwords and emails for dozens of Microsoft employees were revealed ( Ramzan, 2010 ). According to Kaspersky Lab, the number of phishing attempts to visit fraudulent social network pages in the first quarter of 2018 was more than 3.7 million, of which 60% were fake Facebook pages ( Raggo, 2016 ).

A report from the predictive email defense company Vade Secure on phishers' favorite targets for the first and second quarters of 2019 stated that Soshing, primarily on Facebook and Instagram, saw a 74.7% increase, the highest quarter-over-quarter growth of any industry ( VadeSecure, 2021 ).

Technical Subterfuge

Technical subterfuge is the act of tricking individuals into disclosing their sensitive information, or of downloading malicious code into the victim's system, through technical means. Technical subterfuge can be classified into the following types:

Malware-Based Phishing

As the name suggests, this is a type of phishing attack which is conducted by running malicious software on a user’s machine. The malware is downloaded to the victim’s machine, either by one of the social engineering tricks or technically by exploiting vulnerabilities in the security system (e.g., browser vulnerabilities) ( Jakobsson and Myers, 2006 ). Panda malware is one of the successful malware programs discovered by Fox-IT Company in 2016. This malware targets Windows Operating Systems (OS). It spreads through phishing campaigns and its main attack vectors include web injects, screenshots of user activity (up to 100 per mouse click), logging of keyboard input, Clipboard pastes (to grab passwords and paste them into form fields), and exploits to the Virtual Network Computing (VNC) desktop sharing system. In 2018, Panda malware expanded its targets to include cryptocurrency exchanges and social media sites ( F5Networks, 2018 ). There are many forms of Malware-based phishing attacks; some of them are discussed below:

Key Loggers and Screen Loggers

Loggers are a type of malware used by phishers and installed either through Trojan horse email attachments or through direct download to the user's personal computer. This software monitors data and records user keystrokes and then sends them to the phisher. The phisher uses key loggers to capture sensitive information related to victims, such as names, addresses, passwords, and other confidential data. Key loggers can also be used for non-phishing purposes, such as monitoring a child's use of the internet. Key loggers can be implemented in many other ways, such as a Browser Helper Object (BHO) that detects URL changes and logs information, enabling the attacker to take control of Internet Explorer's features; a device driver that monitors keyboard and mouse input; or a screen logger that monitors the user's input and display ( Jakobsson and Myers, 2006 ).

Viruses and Worms

A virus is a type of malware: a piece of code that spreads within another application or program by making copies of itself in an automated manner ( Jakobsson and Myers, 2006 ; F5Networks, 2018 ). Worms are similar to viruses but differ in the manner of execution, as worms execute by exploiting operating system vulnerabilities without the need to modify another program. Viruses transfer from one computer to another with the document to which they are attached, while worms transfer through the infected host file. Both viruses and worms can cause data and software damage or Denial-of-Service (DoS) conditions ( F5Networks, 2018 ).

Spyware is malicious code designed to track the websites visited by users in order to steal sensitive information and conduct a phishing attack. Spyware can be delivered through an email and, once installed on the computer, takes control of the device and either changes its settings or gathers information such as passwords, credit card numbers, or banking records, which can be used for identity theft ( Jakobsson and Myers, 2006 ).

Adware, also known as advertising-supported software ( Jakobsson and Myers, 2006 ), is a type of malware that shows the user endless pop-up windows with ads that can harm the performance of the device. Adware can be annoying, but most of it is safe. Some adware, however, can be used for malicious purposes, such as tracking the internet sites the user visits or even recording the user's keystrokes ( cisco, 2018 ).

Ransomware is a type of malware that encrypts the user's data after they run an executable program on the device. In this type of attack, the decryption key is withheld until the user pays a ransom ( cisco, 2018 ). Ransomware is responsible for tens of millions of dollars in extortion annually. Worse still, it is hard to detect as new variants are continually developed, facilitating evasion of many antivirus and intrusion detection systems ( Latto, 2020 ). Ransomware is usually delivered to the victim's device through phishing emails. According to a report ( PhishMe, 2016 ), 93% of all phishing emails contained encryption ransomware. Phishing, as a social engineering attack, convinces victims into executing actions without knowing about the malicious program.

A rootkit is a collection of programs, typically malicious, that enables access to a computer or computer network. These toolsets are used by intruders to hide their actions from system administrators by modifying the code of system calls and changing their functionality ( Belcic, 2020 ). The term "rootkit" has negative connotations through its association with malware, and rootkits are used by attackers to alter existing system tools to escape detection. Such kits enable individuals with little or no knowledge to launch phishing exploits; a kit contains coding tools, mass-emailing software (possibly with thousands of email addresses included), web development software, and graphic design tools. An example is the kernel-level rootkit: Kernel-Level Rootkits are created by replacing portions of the core operating system or adding new code via Loadable Kernel Modules (in Linux) or device drivers (in Windows) ( Jakobsson and Myers, 2006 ).

Session Hijackers

In this type, the attacker monitors the user's activities by embedding malicious software within a browser component or via network sniffing. The monitoring aims to hijack the session, so that the attacker can perform an unauthorized action with the hijacked session, such as a financial transfer, without the user's permission ( Jakobsson and Myers, 2006 ).

Web Trojans

Web Trojans are malicious programs that collect users’ credentials by popping up invisibly over the login screen ( Jakobsson and Myers, 2006 ). When the user enters their credentials, these programs capture and transmit them directly to the attacker ( Jakobsson et al., 2007 ).

Hosts File Poisoning

This is a way of tricking a user into visiting the phisher’s site by poisoning (changing) the local hosts file. When the user types a website address into the URL bar, the name is first translated into a numeric (IP) address before the site is visited, and entries in the hosts file take precedence over this lookup. The attacker therefore modifies the hosts file (or, similarly, the DNS cache) so that the name resolves to a fake website under the attacker's control. This type of phishing is hard to detect even for smart and perceptive users ( Ollmann, 2004 ).
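As a concrete illustration of what a defender might look for, the following is a minimal sketch (assuming Python 3 and the usual hosts file locations) that scans the local hosts file for entries that pin well-known domains to fixed IP addresses. The domain list and the idea of treating any such pinning as suspicious are illustrative assumptions, not part of the cited works.

```python
# Hedged sketch: flag hosts-file entries that remap sensitive domains,
# one symptom of hosts-file poisoning. Paths and domains are illustrative.
import platform
from pathlib import Path

HOSTS_PATH = (Path(r"C:\Windows\System32\drivers\etc\hosts")
              if platform.system() == "Windows" else Path("/etc/hosts"))

SENSITIVE_DOMAINS = {"www.paypal.com", "www.bank.example", "login.microsoft.com"}

def suspicious_entries(hosts_text: str):
    findings = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()      # drop comments and whitespace
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            # Any hard-coded mapping for a sensitive domain is worth reviewing,
            # because legitimate resolution should normally go through DNS.
            if name.lower() in SENSITIVE_DOMAINS:
                findings.append((name, ip))
    return findings

if HOSTS_PATH.exists():
    for domain, ip in suspicious_entries(HOSTS_PATH.read_text(errors="ignore")):
        print(f"hosts file pins {domain} to {ip} -- verify this is intentional")
```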

System Reconfiguration Attack

In this form of phishing attack, the phisher manipulates the settings of a user’s computer for malicious purposes so that the information on that machine is compromised. System reconfiguration can be achieved through different methods, such as reconfiguring the operating system or modifying the user’s Domain Name System (DNS) server address. The wireless “evil twin” is an example of a system reconfiguration attack, in which all of the user’s traffic is monitored via a malicious wireless Access Point (AP) ( Jakobsson and Myers, 2006 ).

Data theft is the unauthorized accessing and stealing of confidential information belonging to a business or individual. Data theft can be carried out through a phishing email that leads to the download of malicious code onto the user's computer, which in turn steals the confidential information stored on that computer ( Jakobsson and Myers, 2006 ). Stolen information such as passwords, social security numbers, credit card details, sensitive emails, and other personal data can be used directly by the phisher or sold on for other purposes.

Domain Name System Based Phishing (Pharming)

Any form of phishing that interferes with the domain name system, so that the user is redirected to a malicious website by polluting the user's DNS cache with wrong information, is called DNS-based phishing. Although the hosts file is not part of the DNS, hosts file poisoning is usually grouped with DNS-based phishing. Alternatively, by compromising the DNS server itself, the genuine IP addresses are modified, taking users unwillingly to a fake location. Users can fall prey to pharming even when clicking on a legitimate link, because the website’s DNS records may have been hijacked by cybercriminals ( Jakobsson and Myers, 2006 ).

Content Injection Phishing

Content-Injection Phishing refers to inserting false content into a legitimate site. This malicious content can misdirect the user to fake websites, leading them to disclose sensitive information to the hacker, or it can lead to malware being downloaded onto the user's device ( Jakobsson and Myers, 2006 ). The malicious content can be injected into a legitimate site in three primary ways:

1. The hacker exploits a security vulnerability and compromises a web server.

2. The hacker exploits a Cross-Site Scripting (XSS) vulnerability: a programming flaw that enables attackers to insert client-side scripts into web pages, which are then viewed by visitors to the targeted site.

3. The hacker exploits a Structured Query Language (SQL) injection vulnerability, which allows attackers to steal information from the website’s database by executing database commands on a remote server (a minimal sketch of this flaw follows this list).
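To make the third injection route concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3 module and a throwaway in-memory table. It shows how unsanitized string concatenation lets a classic payload dump every row, while a parameterized query treats the same payload as harmless data; the table, rows, and payload are purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a classic injection payload supplied by an attacker

# Vulnerable: the payload is concatenated straight into the SQL text,
# so the WHERE clause always evaluates to true and every row is leaked.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("vulnerable query returned:", vulnerable)   # -> [('alice', 's3cret')]

# Safe: a parameterized query treats the payload as a literal value,
# so no row matches and nothing is leaked.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)      # -> []
```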

Man-In-The-Middle Phishing

The Man-In-The-Middle (MITM) attack is a form of phishing in which the phisher inserts themselves into the communication between two parties (i.e., the user and the legitimate website) and tries to obtain information from both by intercepting the victim’s communications ( Ollmann, 2004 ), so that messages pass through the attacker rather than going directly to the legitimate recipient. The attacker records the information and misuses it later. A MITM attack is conducted by redirecting the user to a malicious server through techniques such as Address Resolution Protocol (ARP) poisoning, DNS spoofing, Trojan key loggers, and URL obfuscation ( Jakobsson and Myers, 2006 ).

Search Engine Phishing

In this phishing technique, the phisher creates malicious websites with attractive offers and uses Search Engine Optimization (SEO) tactics to have them indexed legitimately, so that they appear when users search for products or services. This is also known as black hat SEO ( Jakobsson and Myers, 2006 ).

URL and HTML Obfuscation Attacks

In most phishing attacks, phishers aim to convince the user to click on a link that connects the victim to a malicious phishing server instead of the intended destination server; this is the most popular technique used by today's phishers. The attack is carried out by obfuscating the real link (URL) that the user intends to visit, i.e., making the attacker's web address look like the legitimate one. Bad domain names and host name obfuscation are common methods attackers use to fake an address ( Ollmann, 2004 ).
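The short sketch below (Python standard library only) illustrates one common obfuscation trick, the '@' symbol that hides the real host, together with a naive heuristic for flagging it. The example hostnames are hypothetical and the heuristic is deliberately simplistic, not a production detector.

```python
from urllib.parse import urlsplit

# Everything before the '@' is treated as user info, so the browser actually
# connects to evil.example (a hypothetical attacker host), not paypal.com.
link = "http://[email protected]/login"

parts = urlsplit(link)
print(parts.hostname)   # -> evil.example
print(parts.username)   # -> www.paypal.com

# A simple heuristic that flags such links before they are followed.
def looks_obfuscated(url: str) -> bool:
    p = urlsplit(url)
    has_userinfo = p.username is not None                      # '@' trick
    ip_as_host = (p.hostname or "").replace(".", "").isdigit()  # raw IPv4 host
    return has_userinfo or ip_as_host

print(looks_obfuscated(link))                        # True
print(looks_obfuscated("http://192.0.2.10/login"))   # True
print(looks_obfuscated("https://www.paypal.com/"))   # False
```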

Countermeasures

A range of solutions has been discussed and proposed by researchers to counter phishing, but there is still no single solution that can be fully trusted or that is capable of mitigating these attacks ( Hong, 2012 ; Boddy, 2018 ; Chanti and Chithralekha, 2020 ). The phishing countermeasures proposed in the literature can be grouped into three major defense strategies. The first line of defense is human-based: educating end-users to recognize phishing and avoid taking the bait. The second line of defense is technical: preventing the attack at an early stage, for example at the vulnerability level, so the threat never materializes on the user's device (thereby decreasing human exposure), and detecting the attack once it is launched, either at the network level or on the end-user device; this also includes techniques for tracking down the source of the attack (for example, identifying newly registered domains that closely match well-known domain names). The third line of defense is the use of law enforcement as a deterrent control. These approaches can be combined to create much stronger anti-phishing solutions. Each is discussed in detail below.

Human Education (Improving User Awareness About Phishing)

Human education is by far one of the most effective countermeasures for avoiding and preventing phishing attacks. Awareness and training form the first defense approach in the proposed methodology for fighting phishing, even though they do not guarantee complete protection ( Hong, 2012 ). End-user education reduces users' susceptibility to phishing attacks and complements other technical solutions. According to the analysis carried out in ( Bailey et al., 2008 ), 95% of phishing attacks are caused by human error; nonetheless, existing phishing-detection training is not enough to combat current sophisticated attacks. In the study presented by Khonji et al. (2013) , security experts disagreed about the effectiveness and usability of user education. Some claim that user education is not effective because security is not users' primary goal and they lack the motivation to educate themselves about phishing ( Scaife et al., 2016 ), while others maintain that user education can be effective if designed properly ( Evers, 2006 ; Whitman and Mattord, 2012 ). Moreover, user training has been cited by many researchers as an effective way to protect users of online services ( Dodge et al., 2007 ; Salem et al., 2010 ; Chanti and Chithralekha, 2020 ).

To detect and avoid phishing emails, a combined training approach was proposed in ( Salem et al., 2010 ). The proposed solution uses a combination of tools and human learning: a security awareness program is introduced to the user as a first step; the second step uses an intelligent system that detects attacks at the email level; after that, emails are classified by a fuzzy-logic-based expert system. The main criticism of this method is that the study used only a limited set of email characteristics as distinguishing features ( Kumaraguru et al., 2010 ; CybintCyberSolutions, 2018 ). Moreover, the majority of phishing training programs focus on how to recognize and avoid phishing emails and websites, while other threatening phishing types, such as voice phishing and malware or adware phishing, receive less attention. The authors in ( Salem et al., 2010 ) found that even the most widely used educational solutions are of little use if people ignore the notifications and warnings about fake websites.

Training users should involve three major directions. The first is awareness training through seminars or online courses for employees within organizations and for individuals. The second is the use of mock phishing attacks, which test users’ vulnerability and allow them to assess their own knowledge about phishing; however, only 38% of global organizations claim they are prepared to handle a sophisticated cyber-attack ( Kumaraguru et al., 2010 ). Wombat Security’s State of the Phish™ Report 2018 showed that approximately two-fifths of American companies use computer-based online awareness training and simulated phishing attacks as educational tools on a monthly basis, while just 15% of United Kingdom firms do so ( CybintCyberSolutions, 2018 ). The third direction is educating people through games designed to teach about phishing. The game developer should take different aspects into consideration, such as the audience's age and gender, because people's susceptibility to phishing varies.
The authors of ( Sheng et al., 2007 ) developed a game called Anti-Phishing Phil to train users to identify phishing web pages, and then tested the efficiency and effectiveness of the game. The results showed that participants improved their ability to identify phishing by 61%, indicating that interactive games can be an enjoyable way of educating people. Although user education and training can be very effective in mitigating security threats, phishing is becoming more complex, and cybercriminals can fool even security experts by crafting convincing spear-phishing emails via social media. Therefore, individual users and employees should have at least a basic knowledge of how to deal with suspicious emails and should report them to IT staff and the relevant authorities. In addition, phishers change their strategies continually, which makes it harder for organizations, especially small and medium enterprises, to afford the cost of employee education. With millions of people logging on to their social media accounts every day, social media phishing is phishers' favorite medium for deceiving victims. For example, phishers take advantage of the pervasiveness of Facebook to set up creative phishing attacks using the Facebook Login feature, which enables the phisher to compromise all the accounts for which the victim uses the same credentials ( VadeSecure ). Social networks have taken some countermeasures to reduce suspicious activity, such as the two-factor authentication for logging in required by Facebook and the machine-learning techniques used by Snapchat to detect and block suspicious links sent within the app ( Corrata, 2018 ). However, countermeasures to control Soshing and phone phishing attacks might include:

• Install anti-virus and anti-spam software as a first measure and keep it up to date to detect and prevent unauthorized access.

• Educate yourself about recent information on phishing, the latest trends, and countermeasures.

• Never click on hyperlinks attached to a suspicious email, post, tweet, or direct message.

• Never blindly trust social media: do not give out sensitive information over the phone or to untrusted accounts, and do not accept friend requests from people you do not know.

• Use a unique password for each account.

Training and educating users is an effective anti-phishing countermeasure and has already shown promising initial results. The main downside of this solution is its high cost ( Dodge et al., 2007 ). Moreover, it requires trained users to have a basic knowledge of computer security.

Technical Solutions

The proposed technical solutions for detecting and blocking phishing attacks can be divided into two major approaches: non-content-based solutions and content-based solutions ( Le et al., 2006 ; Bin et al., 2010 ; Boddy, 2018 ). Both approaches are briefly described in this section. Non-content-based methods include blacklists and whitelists, which classify fake emails or webpages based on information that is not part of the email or the webpage itself, such as URL and domain name features ( Dodge et al., 2007 ; Ma et al., 2009 ; Bin et al., 2010 ; Salem et al., 2010 ). In blacklist and whitelist approaches, a list of known URLs and sites is maintained, and the website under scrutiny is checked against this list in order to be classified as phishing or legitimate. The downside of this approach is that it will not identify every phishing website, because once a phishing site is taken down the phisher can easily register a new domain ( Miyamoto et al., 2009 ). Content-based methods classify the page or the email using information within its content, such as text, images, and also HTML, JavaScript, and Cascading Style Sheets (CSS) code ( Zhang et al., 2007 ; Maurer and Herzner, 2012 ). Content-based solutions involve Machine Learning (ML), heuristics, visual similarity, and image-processing methods ( Miyamoto et al., 2009 ; Chanti and Chithralekha, 2020 ). Finally, multifaceted methods apply a combination of the previous approaches to detect and prevent phishing attacks ( Afroz and Greenstadt, 2009 ). For email filtering, ML techniques are commonly used; for example, the first email phishing filter was developed in 2007 by the authors of ( Fette et al., 2007 ), using a set of features such as URLs that use different domain names. Spam filtering techniques ( Cormack et al., 2011 ) and statistical classifiers ( Bergholz et al., 2010 ) are also used to identify phishing emails. Authentication and verification technologies are likewise used in spam email filtering as an alternative to heuristic methods; for example, the Sender Policy Framework (SPF) verifies whether a sender is valid when accepting mail from a remote mail server or email client ( Deshmukh and Popat, 2017 ).
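As a toy illustration of the content-based, ML-driven filtering described above, the sketch below trains a TF-IDF plus logistic regression classifier on a handful of made-up messages. It assumes scikit-learn is installed; the corpus, labels, and model choice are illustrative only and bear no relation to the specific systems cited.

```python
# Minimal illustrative content-based filter: TF-IDF features + logistic regression
# trained on a tiny hand-made corpus. Real systems use far larger labelled datasets
# and richer features (URLs, headers, HTML structure).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is suspended, verify your password at http://secure-login.example",
    "Urgent: confirm your bank details to avoid account closure",
    "Meeting moved to 3pm, agenda attached",
    "Here are the quarterly figures we discussed yesterday",
]
labels = [1, 1, 0, 0]   # 1 = phishing, 0 = legitimate (illustrative labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Classify new messages; on such a tiny toy corpus the predictions are only indicative.
test = ["Please verify your password immediately", "Lunch on Friday?"]
print(model.predict(test))
```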

Technical anti-phishing solutions are available at different points in the delivery chain, such as mail servers and clients, Internet Service Providers (ISPs), and web browser tools. Drawing on the anatomy proposed in Proposed Phishing Anatomy , the authors categorize technical solutions into the following approaches:

1. Techniques that detect the attack after it has been launched, for example by scanning the web to find fake websites. Content-based phishing detection approaches of this kind are heavily deployed on the Internet: features of website elements such as images, URLs, and text content are analyzed using rule-based approaches and machine learning that examine, for example, the presence of special characters (@), IP addresses used instead of a domain name, prefixes/suffixes, and an HTTPS token in the domain part, among other features ( Jeeva and Rajsingh, 2016 ); a minimal feature-extraction sketch follows this list. Fuzzy Logic (FL) has also been used as an anti-phishing model to help classify websites as legitimate or ‘phishy’, since this model deals with intervals rather than specific numeric values ( Aburrous et al., 2008 ).

2. Techniques that prevent the attack from reaching the user's system. Phishing prevention is an important line of defense because it blocks the user from ever seeing or dealing with the attack. In email phishing, anti-spam software tools can block suspicious emails. Phishers usually send a genuine-looking email that dupes the user into opening an attachment or clicking on a link, and some of these emails pass the spam filter because phishers use misspelled words. Therefore, techniques that detect fake emails by checking spelling and grammar are increasingly used to prevent such emails from reaching the user's mailbox. The authors of ( Fette et al., 2007 ) developed a classification algorithm based on the Random Forest algorithm, after first exploring email phishing using the C4.5 decision tree generator. The resulting method, called "Phishing Identification by Learning on Features of Email Received" (PILFER), classifies phishing emails using features such as IP-based URLs, the number of links in the HTML part(s) of an email, the number of domains, the number of dots, non-matching URLs, and the presence of JavaScript. The method showed high accuracy in detecting phishing emails ( Afroz and Greenstadt, 2009 ).

3. Corrective techniques that take down the compromised website, by requesting that the website's Internet Service Provider (ISP) shut down the fake site in order to prevent more users from falling victim to phishing ( Moore and Clayton, 2007 ; Chanti and Chithralekha, 2020 ). ISPs bear responsibility for taking down fake websites. Removing compromised and illegal websites is a complex process involving many entities: private companies, self-regulatory bodies, government agencies, volunteer organizations, law enforcement, and service providers. Usually, illegal websites are taken down through takedown orders issued by courts or, in some jurisdictions, by law enforcement; alternatively, providers may take them down voluntarily in response to takedown notices ( Moore and Clayton, 2007 ; Hutchings et al., 2016 ). According to a PhishLabs report ( PhishLabs, 2019 ), taking down phishing sites is helpful but not completely effective, as these sites can remain alive for days, stealing customers' credentials, before the attack is detected.

4. Warning tools or security indicators embedded into the web browser to inform the user once an attack is detected. For example, eBay Toolbar and Account Guard ( eBay Toolbar and Account Guard, 2009 ) protect customers’ eBay and PayPal passwords, respectively, by alerting users to the authenticity of the sites into which they are about to type their password. Numerous anti-phishing solutions rely mainly on warnings displayed on a security toolbar; some toolbars, such as those from McAfee and Netscape, also block suspicious sites. A study presented in ( Robichaux and Ganger, 2006 ) tested the performance of eight anti-phishing solutions, including Microsoft Internet Explorer 7, EarthLink, eBay, McAfee, GeoTrust, Google (using Firefox), Netscape, and Netcraft. These tools warn about and block known phishing sites while allowing legitimate sites. The study found that Internet Explorer and the Netcraft Toolbar were the most effective of the tools tested. However, security toolbars still fail to stop people falling victim to phishing, even though they improve internet security in general ( Abu-Nimeh and Nair, 2008 ).

5. Authentication ( Moore and Clayton, 2007 ) and authorization ( Hutchings et al., 2016 ) techniques that protect against phishing by verifying the identity of the legitimate user, preventing phishers from accessing a protected resource and conducting their attack. There are three types of authentication: single-factor authentication, which requires only a username and password; two-factor authentication, which requires additional information such as a One-Time Password (OTP) sent to the user's email address or phone; and multi-factor authentication, which uses more than one form of identity (i.e., a combination of something you know, something you are, and something you have). Widely used methods in the authorization process include API keys and OAuth 2.0, which allow a previously issued credential or token to access the system. A short sketch of how a one-time password can be derived also appears after this list.
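Referring back to point 1, the following is a minimal sketch (Python standard library only) of rule-based feature extraction over a URL. The cues, an '@' symbol, an IP address used as the host, hyphens, an 'https' token inside the host, and long subdomain chains, follow the list above, but the thresholds and example URLs are illustrative assumptions rather than the cited papers' exact definitions.

```python
import re
from urllib.parse import urlsplit

def extract_features(url: str) -> dict:
    """Extract simple rule-based phishing cues from a URL (illustrative only)."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    return {
        "has_at_symbol": "@" in url,                        # '@' hides the real host
        "ip_as_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "hyphen_in_host": "-" in host,                      # prefix/suffix trick
        "https_token_in_host": "https" in host,             # e.g. https-paypal.example
        "many_dots": host.count(".") > 3,                   # long, deceptive subdomains
    }

print(extract_features("http://https-secure-paypal.example.com/login"))
print(extract_features("https://www.paypal.com/"))
```

These boolean features could feed the kind of rule-based or ML classifiers discussed under point 1.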
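With reference to point 5, the sketch below derives a one-time password using the standard HMAC-based OTP construction (HOTP, RFC 4226) with only the Python standard library. It is a generic illustration of how a shared secret and counter yield a short verification code, not a description of any particular provider's implementation.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter value.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte selects a 4-byte window.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and device share the secret and counter, so both can derive the same
# short code and verify it without ever transmitting the secret itself.
print(hotp(b"12345678901234567890", counter=0))   # -> 755224 (RFC 4226 test vector)
```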

However, the progressive increase in phishing attacks shows that the methods above do not, on their own, provide the required protection against most existing phishing attacks, because no single solution or technology can prevent all of them. An effective anti-phishing solution should be based on a combination of technical measures and increased user awareness ( Boddy, 2018 ).

Solutions Provided by Legislations as a Deterrent Control

A cyber-attack is considered a crime when an individual intentionally accesses personal information on a computer without permission, even if the individual does not steal information or damage the system ( Mince-Didier, 2020 ). Since the sole objective of almost all phishing attacks is to obtain sensitive information with the intent of committing identity theft, and since there are currently no federal laws in the United States aimed specifically at phishing, phishing crimes are usually prosecuted under identity theft laws. Phishing is considered a crime even if the victim does not actually fall for the scam; punishments depend on the circumstances and usually include jail, fines, restitution, and probation ( Nathan, 2020 ). Phishing attacks cause different levels of damage to victims, including financial and reputational losses, so law enforcement authorities should track down these attacks in order to punish the criminals, as with real-world crimes. As a complement to technical solutions and human education, the support provided by applicable laws and regulations can play a vital role as a deterrent control. Authorities around the world have increasingly created regulations to mitigate the growth of phishing attacks and their impact. The first anti-phishing laws were enacted in the United States, where the FTC added phishing attacks to the computer crime list in January 2004; a year later, the ‘‘Anti-Phishing Act’’ was introduced in the US Congress in March 2005 ( Mohammad et al., 2014 ). Meanwhile, legislation in the United Kingdom has gradually been adapted to address phishing and other forms of cyber-crime. In 2006, the United Kingdom government amended the Computer Misuse Act 1990 to bring it up to date with developments in computer crime and to increase the penalties for breaches to up to 10 years' imprisonment ( eBay Toolbar and Account Guard, 2009 ; PhishLabs, 2019 ). In this regard, a student in the United Kingdom who made hundreds of thousands of pounds blackmailing pornography website users was jailed in April 2019 for six years and five months; according to the National Crime Agency (NCA), this attacker was the most prolific cybercriminal to be sentenced in the United Kingdom ( Casciani, 2019 ). Moreover, organizations bear part of the responsibility for protecting personal information, as stated in the Data Protection Act 2018 and the EU General Data Protection Regulation (GDPR). Phishing websites can also be taken down by law enforcement agencies. In the United Kingdom, websites can be taken down by the National Crime Agency (NCA), which includes the National Cyber Crime Unit, and by the City of London Police, which includes the Police Intellectual Property Crime Unit (PIPCU) and the National Fraud Intelligence Bureau (NFIB) ( Hutchings et al., 2016 ).

However, anti-phishing law enforcement still faces numerous challenges and limitations. Firstly, after perpetrating a phishing attack, the phisher can vanish into cyberspace, making it difficult to prove the offender's guilt and to recover the damages caused by the attack, which limits the effectiveness of the law enforcement role. Secondly, even if the attacker's identity is disclosed, in the case of international attackers it is difficult to bring the offender to justice because of differences between countries' legislation (e.g., the absence of extradition treaties). Furthermore, an attack can be conducted within a very short time span; for instance, the average lifetime of a phishing website is about 54 h, as stated by the APWG, so there must be a quick response from governments and authorities to detect, control, and identify the perpetrators of the attack ( Ollmann, 2004 ).

Phishing attacks remain one of the major threats to individuals and organizations to date. As highlighted in this article, this is mainly driven by human involvement in the phishing cycle: phishers often exploit human vulnerabilities in addition to favourable technological conditions (i.e., technical vulnerabilities). It has been shown that age, gender, internet addiction, user stress, and many other attributes affect people's susceptibility to phishing. In addition to traditional phishing channels (e.g., email and web), new phishing mediums such as voice and SMS phishing are on the increase. Furthermore, social media-based phishing has grown in parallel with the growth of social media itself. Concomitantly, phishing has developed beyond obtaining sensitive information and financial crime to include cyber terrorism, hacktivism, reputational damage, espionage, and nation-state attacks. Research has been conducted to identify the motivations, techniques, and countermeasures relating to these new crimes; however, there is no single solution to the phishing problem due to the heterogeneous nature of the attack vector. This article has investigated the problems presented by phishing and proposed a new anatomy, which describes the complete life cycle of phishing attacks. This anatomy provides a wider outlook on phishing attacks and an accurate definition covering the attack end to end, from initiation through to execution.

Although human education is the most effective defense against phishing, it is difficult to remove the threat completely because of the sophistication of the attacks and their social engineering elements. Although continual security awareness training is key to avoiding phishing attacks and reducing their impact, developing efficient anti-phishing techniques that prevent users from being exposed to the attack in the first place is an essential step in mitigation. To this end, this article discussed the importance of developing anti-phishing techniques that detect and block the attack, and noted that techniques for determining the source of an attack could provide an even stronger anti-phishing solution.

Furthermore, this article identified the importance of law enforcement as a deterrent mechanism. Further investigation and research are necessary, as discussed below.

1. Further research is necessary to study and investigate susceptibility to phishing among users, which would assist in designing stronger and self-learning anti-phishing security systems.

2. Research on social media-based phishing, voice phishing, and SMS phishing is sparse, and these emerging threats are predicted to increase significantly over the next few years.

3. Laws and legislation that apply to phishing are still in their infancy; in fact, many countries have no specific phishing laws, and most phishing attacks are covered under traditional criminal laws such as those on identity theft and computer crime. Drafting specific laws against phishing is therefore an important step in mitigating these attacks at a time when such crimes are becoming more common.

4. Determining the source of the attack before the end of the phishing lifecycle and enforcing the law against the offender could help restrict phishing attacks drastically, and would benefit from further research.

It can be observed that the mediums used for phishing attacks have shifted from traditional email to social media-based phishing, and there is a clear lag between sophisticated phishing attacks and existing countermeasures. Emerging countermeasures should be multidimensional, tackling both the human and technical elements of the attack. This article provides valuable information about current phishing attacks and countermeasures, while the proposed anatomy provides a clear taxonomy for understanding the complete life cycle of phishing.

Author Contributions

This work was carried out by our PhD student ZA, supported by her supervisory team.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

AOL America Online

APWG Anti-Phishing Working Group

ARPANET Advanced Research Projects Agency Network

ARP address resolution protocol.

BHO Browser Helper Object

BEC business email compromise

COVID-19 Coronavirus disease 2019

CSS cascading style sheets

DDoS distributed denial of service

DNS Domain Name System

DoS Denial of Service

FTC Federal Trade Commission

FL Fuzzy Logic

HTTPS Hypertext Transfer Protocol Secure

IE Internet Explorer

ICT Information and Communications Technology

IM Instant Message

IT Information Technology

IP Internet Protocol

MITM Man-in-the-Middle

NCA National Crime Agency

NFIB National Fraud Intelligence Bureau

PIPCU Police Intellectual Property Crime Unit

OS Operating Systems

PBX Private Branch Exchange

SMishing Text Message Phishing

SPF Sender Policy Framework

SMTP Simple Mail Transfer Protocol

SMS Short Message Service

Soshing Social Media Phishing

SQL structured query language

URL Uniform Resource Locator

UK United Kingdom

US United States

USB Universal Serial Bus

US-CERT United States Computer Emergency Readiness Team.

Vishing Voice Phishing

VNC Virtual Network Computing

VoIP Voice over Internet Protocol

XSS Cross-Site Scripting

1 Proofpoint is “a leading cybersecurity company that protects organizations’ greatest assets and biggest risks: their people. With an integrated suite of cloud-based solutions”( Proofpoint, 2019b ).

2 APWG Is “the international coalition unifying the global response to cybercrime across industry, government and law-enforcement sectors and NGO communities” ( APWG, 2020 ).

3 Caller ID is “a telephone facility that displays a caller’s phone number on the recipient's phone device before the call is answered” ( Techpedia, 2021 ).

4 An IPPBX is “a telephone switching system within an enterprise that switches calls between VoIP users on local lines while allowing all users to share a certain number of external phone lines” ( Margaret, 2008 ).

Abad, C. (2005). The economy of phishing: a survey of the operations of the phishing market. First Monday 10, 1–11. doi:10.5210/fm.v10i9.1272

Abu-Nimeh, S., and Nair, S. (2008). “Bypassing security toolbars and phishing filters via dns poisoning,” in IEEE GLOBECOM 2008–2008 IEEE global telecommunications conference , New Orleans, LA , November 30–December 2, 2008 ( IEEE) , 1–6. doi:10.1109/GLOCOM.2008.ECP.386

Aburrous, M., Hossain, M. A., Thabatah, F., and Dahal, K. (2008). “Intelligent phishing website detection system using fuzzy techniques,” in 2008 3rd international conference on information and communication technologies: from theory to applications (New York, NY: IEEE , 1–6. doi:10.1109/ICTTA.2008.4530019

Afroz, S., and Greenstadt, R. (2009). “Phishzoo: an automated web phishing detection approach based on profiling and fuzzy matching,” in Proceeding 5th IEEE international conference semantic computing (ICSC) , 1–11.

Alsharnouby, M., Alaca, F., and Chiasson, S. (2015). Why phishing still works: user strategies for combating phishing attacks. Int. J. Human-Computer Stud. 82, 69–82. doi:10.1016/j.ijhcs.2015.05.005

APWG (2018). Phishing activity trends report 3rd quarter 2018 . US. 1–11.

APWG (2020). APWG phishing attack trends reports. 2020 anti-phishing work. Group, Inc Available at: https://apwg.org/trendsreports/ (Accessed September 20, 2020).

Arachchilage, N. A. G., and Love, S. (2014). Security awareness of computer users: a phishing threat avoidance perspective. Comput. Hum. Behav. 38, 304–312. doi:10.1016/j.chb.2014.05.046

Arnsten, B. A., Mazure, C. M., and April, R. S. (2012). Everyday stress can shut down the brain’s chief command center. Sci. Am. 306, 1–6. Available at: https://www.scientificamerican.com/article/this-is-your-brain-in-meltdown/ (Accessed October 15, 2019).

Bailey, J. L., Mitchell, R. B., and Jensen, B. k. (2008). “Analysis of student vulnerabilities to phishing,” in 14th americas conference on information systems, AMCIS 2008 , 75–84. Available at: https://aisel.aisnet.org/amcis2008/271 .

Barracuda (2020). Business email compromise (BEC). Available at: https://www.barracuda.com/glossary/business-email-compromise (Accessed November 15, 2020).

Belcic, I. (2020). Rootkits defined: what they do, how they work, and how to remove them. Available at: https://www.avast.com/c-rootkit (Accessed November 7, 2020).

Bergholz, A., De Beer, J., Glahn, S., Moens, M.-F., Paaß, G., and Strobel, S. (2010). New filtering approaches for phishing email. JCS 18, 7–35. doi:10.3233/JCS-2010-0371

Bin, S., Qiaoyan, W., and Xiaoying, L. (2010). “A DNS based anti-phishing approach.” in 2010 second international conference on networks security, wireless communications and trusted computing , Wuhan, China , April 24–25, 2010 . ( IEEE ), 262–265. doi:10.1109/NSWCTC.2010.196

Boddy, M. (2018). Phishing 2.0: the new evolution in cybercrime. Comput. Fraud Secur. 2018, 8–10. doi:10.1016/S1361-3723(18)30108-8

Casciani, D. (2019). Zain Qaiser: student jailed for blackmailing porn users worldwide. Available at: https://www.bbc.co.uk/news/uk-47800378 (Accessed April 9, 2019).

Chanti, S., and Chithralekha, T. (2020). Classification of anti-phishing solutions. SN Comput. Sci. 1, 11. doi:10.1007/s42979-019-0011-2

Checkpoint (2020). Check point research’s Q1 2020 brand phishing report. Available at: https://www.checkpoint.com/press/2020/apple-is-most-imitated-brand-for-phishing-attempts-check-point-researchs-q1-2020-brand-phishing-report/ (Accessed August 6, 2020).

cisco (2018). What is the difference: viruses, worms, Trojans, and bots? Available at: https://www.cisco.com/c/en/us/about/security-center/virus-differences.html (Accessed January 20, 2020).

CISA (2018). What is phishing. Available at: https://www.us-cert.gov/report-phishing (Accessed June 10, 2019).

Cormack, G. V., Smucker, M. D., and Clarke, C. L. A. (2011). Efficient and effective spam filtering and re-ranking for large web datasets. Inf. Retrieval 14, 441–465. doi:10.1007/s10791-011-9162-z

Corrata (2018). The rising threat of social media phishing attacks. Available at: https://corrata.com/the-rising-threat-of-social-media-phishing-attacks/%0D (Accessed October 29, 2019).

Crane, C. (2019). The dirty dozen: the 12 most costly phishing attack examples. Available at: https://www.thesslstore.com/blog/the-dirty-dozen-the-12-most-costly-phishing-attack-examples/#:∼:text=At some level%2C everyone is susceptible to phishing,outright trick you into performing a particular task (Accessed August 2, 2020).

CSI Onsite (2012). Phishing. Available at: http://csionsite.com/2012/phishing/ (Accessed May 8, 2019).

Cui, Q., Jourdan, G.-V., Bochmann, G. V., Couturier, R., and Onut, I.-V. (2017). Tracking phishing attacks over time. Proc. 26th Int. Conf. World Wide Web - WWW ’17 , Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee . 667–676. doi:10.1145/3038912.3052654

CVEdetails (2005). Vulnerability in microsoft internet explorer. Available at: https://www.cvedetails.com/cve/CVE-2005-4089/ (Accessed August 20, 2019).

Cybint Cyber Solutions (2018). 13 alarming cyber security facts and stats. Available at: https://www.cybintsolutions.com/cyber-security-facts-stats/ (Accessed July 20, 2019).

Deshmukh, M., and Popat, S. (2017). Different techniques for detection of phishing attack. Int. J. Eng. Sci. Comput. 7, 10201–10204. Available at: http://ijesc.org/ .

Dhamija, R., Tygar, J. D., and Hearst, M. (2006). “Why phishing works,” in Proceedings of the SIGCHI conference on human factors in computing systems - CHI ’06 , Montréal Québec, Canada , (New York, NY: ACM Press ), 581. doi:10.1145/1124772.1124861

Diaz, A., Sherman, A. T., and Joshi, A. (2020). Phishing in an academic community: a study of user susceptibility and behavior. Cryptologia 44, 53–67. doi:10.1080/01611194.2019.1623343

Dodge, R. C., Carver, C., and Ferguson, A. J. (2007). Phishing for user security awareness. Comput. Security 26, 73–80. doi:10.1016/j.cose.2006.10.009

eBay Toolbar and Account Guard (2009). Available at: https://download.cnet.com/eBay-Toolbar/3000-12512_4-10153544.html (Accessed August 7, 2020).

EDUCBA (2017). Hackers vs crackers: easy to understand exclusive difference. Available at: https://www.educba.com/hackers-vs-crackers/ (Accessed July 17, 2019).

Evers, J. (2006). Security expert: user education is pointless. Available at: https://www.cnet.com/news/security-expert-user-education-is-pointless/ (Accessed June 25, 2019).

F5Networks (2018). Panda malware broadens targets to cryptocurrency exchanges and social media. Available at: https://www.f5.com/labs/articles/threat-intelligence/panda-malware-broadens-targets-to-cryptocurrency-exchanges-and-social-media (Accessed April 23, 2019).

Fette, I., Sadeh, N., and Tomasic, A. (2007). “Learning to detect phishing emails,” in Proceedings of the 16th international conference on world wide web - WWW ’07 , Banff Alberta, Canada , (New York, NY: ACM Press) , 649–656. doi:10.1145/1242572.1242660

Financial Fraud Action UK (2017). Fraud the facts 2017: the definitive overview of payment industry fraud. London. Available at: https://www.financialfraudaction.org.uk/fraudfacts17/assets/fraud_the_facts.pdf .

Fraud Watch International (2019). Phishing attack trends for 2019. Available at: https://fraudwatchinternational.com/phishing/phishing-attack-trends-for-2019/ (Accessed October 29, 2019).

FTC (2018). Netflix scam email. Available at: https://www.ftc.gov/tips-advice/business-center/small-businesses/cybersecurity/phishing (Accessed May 8, 2019).

Furnell, S. (2007). An assessment of website password practices). Comput. Secur. 26, 445–451. doi:10.1016/j.cose.2007.09.001

Getsafeonline (2017). Caught on the net. Available at: https://www.getsafeonline.org/news/caught-on-the-net/%0D (Accessed August 1, 2020).

GOV.UK (2020). Cyber security breaches survey 2020. Available at: https://www.gov.uk/government/publications/cyber-security-breaches-survey-2020/cyber-security-breaches-survey-2020 (Accessed August 6, 2020).

Gupta, P., Srinivasan, B., Balasubramaniyan, V., and Ahamad, M. (2015). “Phoneypot: data-driven understanding of telephony threats,” in Proceedings 2015 network and distributed system security symposium , (Reston, VA: Internet Society ), 8–11. doi:10.14722/ndss.2015.23176

Hadlington, L. (2017). Human factors in cybersecurity; examining the link between internet addiction, impulsivity, attitudes towards cybersecurity, and risky cybersecurity behaviours. Heliyon 3, e00346-18. doi:10.1016/j.heliyon.2017.e00346

Herley, C., and Florêncio, D. (2008). “A profitless endeavor,” in New security paradigms workshop (NSPW ’08) , New Hampshire, United States , October 25–28, 2021 , 1–12. doi:10.1145/1595676.1595686

Hewage, C. (2020). Coronavirus pandemic has unleashed a wave of cyber attacks – here’s how to protect yourself. Conversat . Available at: https://theconversation.com/coronavirus-pandemic-has-unleashed-a-wave-of-cyber-attacks-heres-how-to-protect-yourself-135057 (Accessed November 16, 2020).

Hong, J. (2012). The state of phishing attacks. Commun. ACM 55, 74–81. doi:10.1145/2063176.2063197

Huber, M., Kowalski, S., Nohlberg, M., and Tjoa, S. (2009). “Towards automating social engineering using social networking sites,” in 2009 international conference on computational science and engineering , Vancouver, BC , August 29–31, 2009 ( IEEE , 117–124. doi:10.1109/CSE.2009.205

Hutchings, A., Clayton, R., and Anderson, R. (2016). “Taking down websites to prevent crime,” in 2016 APWG symposium on electronic crime research (eCrime) ( IEEE ), 1–10. doi:10.1109/ECRIME.2016.7487947

Iuga, C., Nurse, J. R. C., and Erola, A. (2016). Baiting the hook: factors impacting susceptibility to phishing attacks. Hum. Cent. Comput. Inf. Sci. 6, 8. doi:10.1186/s13673-016-0065-2

Jagatic, T. N., Johnson, N. A., Jakobsson, M., and Menczer, F. (2007). Social phishing. Commun. ACM 50, 94–100. doi:10.1145/1290958.1290968

Jakobsson, M., and Myers, S. (2006). Phishing and countermeasures: understanding the increasing problems of electronic identity theft . New Jersey: John Wiley and Sons .

Jakobsson, M., Tsow, A., Shah, A., Blevis, E., and Lim, Y. K. (2007). “What instills trust? A qualitative study of phishing,” in Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics) , (Berlin, Heidelberg: Springer ), 356–361. doi:10.1007/978-3-540-77366-5_32

Jeeva, S. C., and Rajsingh, E. B. (2016). Intelligent phishing url detection using association rule mining. Hum. Cent. Comput. Inf. Sci. 6, 10. doi:10.1186/s13673-016-0064-3

Johnson, A. (2016). Almost 600 accounts breached in “celebgate” nude photo hack, FBI says. Available at: http://www.cnbc.com/id/102747765 (Accessed: February 17, 2020).

Kayne, R. (2019). What are script kiddies? Wisegeek. Available at: https://www.wisegeek.com/what-are-script-kiddies.htm (Accessed February 19, 2020).

Keck, C. (2018). FTC warns of sketchy Netflix phishing scam asking for payment details. Available at: https://gizmodo.com/ftc-warns-of-sketchy-netflix-phishing-scam-asking-for-p-1831372416 (Accessed April 23, 2019).

Keepnet LABS (2018). Statistical analysis of 126,000 phishing simulations carried out in 128 companies around the world. USA, France. Available at: www.keepnetlabs.com .

Keinan, G. (1987). Decision making under stress: scanning of alternatives under controllable and uncontrollable threats. J. Personal. Soc. Psychol. 52, 639–644. doi:10.1037/0022-3514.52.3.639

Khonji, M., Iraqi, Y., and Jones, A. (2013). Phishing detection: a literature survey. IEEE Commun. Surv. Tutorials 15, 2091–2121. doi:10.1109/SURV.2013.032213.00009

Kirda, E., and Kruegel, C. (2005). Protecting users against phishing attacks with AntiPhish. Proc. - Int. Comput. Softw. Appl. Conf. 1, 517–524. doi:10.1109/COMPSAC.2005.126

Krawchenko, K. (2016). The phishing email that hacked the account of John Podesta. CBSNEWS Available at: https://www.cbsnews.com/news/the-phishing-email-that-hacked-the-account-of-john-podesta/ (Accessed April 13, 2019).

Kaspersky (2020). Spam and phishing in Q1 2020. Available at: https://securelist.com/spam-and-phishing-in-q1-2020/97091/ (Accessed July 27, 2020).

Kumaraguru, P., Sheng, S., Acquisti, A., Cranor, L. F., and Hong, J. (2010). Teaching Johnny not to fall for phish. ACM Trans. Internet Technol. 10, 1–31. doi:10.1145/1754393.1754396

Latto, N. (2020). What is adware and how can you prevent it? Avast. Available at: https://www.avast.com/c-adware (Accessed May 8, 2020).

Le, D., Fu, X., and Hogrefe, D. (2006). A review of mobility support paradigms for the internet. IEEE Commun. Surv. Tutorials 8, 38–51. doi:10.1109/COMST.2006.323441

Lehman, T. J., and Vajpayee, S. (2011). “We’ve looked at clouds from both sides now,” in 2011 annual SRII global conference , San Jose, CA , March 20–April 2, 2011 , ( IEEE , 342–348. doi:10.1109/SRII.2011.46

Leyden, J. (2001). Virus toolkits are s’kiddie menace. Regist . Available at: https://www.theregister.co.uk/2001/02/21/virus_toolkits_are_skiddie_menace/%0D (Accessed June 15, 2019).

Lin, J., Sadeh, N., Amini, S., Lindqvist, J., Hong, J. I., and Zhang, J. (2012). “Expectation and purpose,” in Proceedings of the 2012 ACM conference on ubiquitous computing - UbiComp ’12 (New York, New York, USA: ACM Press ), 1625. doi:10.1145/2370216.2370290

Lininger, R., and Vines, D. R. (2005). Phishing: cutting the identity theft line. Print book . Indiana: Wiley Publishing, Inc .

Ma, J., Saul, L. K., Savage, S., and Voelker, G. M. (2009). “Identifying suspicious URLs.” in Proceedings of the 26th annual international conference on machine learning - ICML ’09 (New York, NY: ACM Press ), 1–8. doi:10.1145/1553374.1553462

Marforio, C., Masti, R. J., Soriente, C., Kostiainen, K., and Capkun, S. (2015). Personalized security indicators to detect application phishing attacks in mobile platforms. Available at: http://arxiv.org/abs/1502.06824 .

Margaret, R. I. P. (2008). PBX (private branch exchange). Available at: https://searchunifiedcommunications.techtarget.com/definition/IP-PBX (Accessed June 19, 2019).

Maurer, M.-E., and Herzner, D. (2012). Using visual website similarity for phishing detection and reporting. 1625–1630. doi:10.1145/2212776.2223683

Medvet, E., Kirda, E., and Kruegel, C. (2008). “Visual-similarity-based phishing detection,” in Proceedings of the 4th international conference on Security and privacy in communication netowrks - SecureComm ’08 (New York, NY: ACM Press ), 1. doi:10.1145/1460877.1460905

Merwe, A. v. d., Marianne, L., and Marek, D. (2005). “Characteristics and responsibilities involved in a phishing attack,” in WISICT ’05: proceedings of the 4th international symposium on information and communication technologies . Trinity College Dublin , 249–254.

Microsoft (2020). Exploiting a crisis: how cybercriminals behaved during the outbreak. Available at: https://www.microsoft.com/security/blog/2020/06/16/exploiting-a-crisis-how-cybercriminals-behaved-during-the-outbreak/ (Accessed August 1, 2020).

Mince-Didier, A. (2020). Hacking a computer or computer network. Available at: https://www.criminaldefenselawyer.com/resources/hacking-computer.html (Accessed August 7, 2020).

Miyamoto, D., Hazeyama, H., and Kadobayashi, Y. (2009). “An evaluation of machine learning-based methods for detection of phishing sites,” in international conference on neural information processing ICONIP 2008: advances in neuro-information processing lecture notes in computer science . Editors M. Köppen, N. Kasabov, and G. Coghill (Berlin, Heidelberg: Springer Berlin Heidelberg ), 539–546. doi:10.1007/978-3-642-02490-0_66

Mohammad, R. M., Thabtah, F., and McCluskey, L. (2014). Predicting phishing websites based on self-structuring neural network. Neural Comput. Applic 25, 443–458. doi:10.1007/s00521-013-1490-z

Moore, T., and Clayton, R. (2007). “Examining the impact of website take-down on phishing,” in Proceedings of the anti-phishing working groups 2nd annual eCrime researchers summit on - eCrime ’07 (New York, NY: ACM Press ), 1–13. doi:10.1145/1299015.1299016

Morgan, S. (2019). 2019 official annual cybercrime report. USA, UK, Canada. Available at: https://www.herjavecgroup.com/wp-content/uploads/2018/12/CV-HG-2019-Official-Annual-Cybercrime-Report.pdf .

Nathan, G. (2020). What is phishing? + laws, charges & statute of limitations. Available at: https://www.federalcharges.com/phishing-laws-charges/ (Accessed August 7, 2020).

Okin, S. (2009). From script kiddies to organised cybercrime. Available at: https://comsecglobal.com/from-script-kiddies-to-organised-cybercrime-things-are-getting-nasty-out-there/ (Accessed August 12, 2019).

Ollmann, G. (2004). The phishing guide understanding & preventing phishing attacks abstract. USA. Available at: http://www.ngsconsulting.com .

Ong, S. (2014). Avast survey shows men more susceptible to mobile malware. Available at: https://www.mirekusoft.com/avast-survey-shows-men-more-susceptible-to-mobile-malware/ (Accessed November 5, 2020).

Ovelgönne, M., Dumitraş, T., Prakash, B. A., Subrahmanian, V. S., and Wang, B. (2017). Understanding the relationship between human behavior and susceptibility to cyber attacks. ACM Trans. Intell. Syst. Technol. 8, 1–25. doi:10.1080/00207284.1985.11491413

Parmar, B. (2012). Protecting against spear-phishing. Computer Fraud Security , 2012, 8–11. doi:10.1016/S1361-3723(12)70007-6

Phish Labs (2019). 2019 phishing trends and intelligence report the growing social engineering threat. Available at: https://info.phishlabs.com/hubfs/2019 PTI Report/2019 Phishing Trends and Intelligence Report.pdf .

PhishMe (2016). Q1 2016 malware review. Available at: WWW.PHISHME.COM .

PhishMe (2017). Human phishing defense enterprise phishing resiliency and defense report 2017 analysis of susceptibility, resiliency and defense against simulated and real phishing attacks. Available at: https://cofense.com/wp-content/uploads/2017/11/Enterprise-Phishing-Resiliency-and-Defense-Report-2017.pdf .

PhishTank (2006). What is phishing. Available at: http://www.phishtank.com/what_is_phishing.php?view=website&annotated=true (Accessed June 19, 2019).

Pompon, A. R., Walkowski, D., and Boddy, S. (2018). Phishing and Fraud Report attacks peak during the holidays. US .

Proofpoint (2019a). State of the phish 2019 report. Sport Mark. Q. 14, 4. doi:10.1038/sj.jp.7211019

Proofpoint (2019b). What is Proofpoint. Available at: https://www.proofpoint.com/us/company/about (Accessed September 25, 2019).

Proofpoint (2020). 2020 state of the phish. Available at: https://www.proofpoint.com/sites/default/files/gtd-pfpt-us-tr-state-of-the-phish-2020.pdf .

Raggo, M. (2016). Anatomy of a social media attack. Available at: https://www.darkreading.com/analytics/anatomy-of-a-social-media-attack/a/d-id/1326680 (Accessed March 14, 2019).

Ramanathan, V., and Wechsler, H. (2012). PhishGILLNET-phishing detection methodology using probabilistic latent semantic analysis, AdaBoost, and co-training. EURASIP J. Info. Secur. 2012, 1–22. doi:10.1186/1687-417X-2012-1

Ramzan, Z. (2010). “Phishing attacks and countermeasures,” in Handbook of Information and communication security (Berlin, Heidelberg: Springer Berlin Heidelberg ), 433–448. doi:10.1007/978-3-642-04117-4_23

Ramzan, Z., and Wuest, C. (2007). “Phishing attacks: analyzing trends in 2006,” in Fourth conference on email and anti-spam (Mountain View, California, United States).

Rhett, J. (2019). Don’t fall for this new Google translate phishing attack. Available at: https://www.gizmodo.co.uk/2019/02/dont-fall-for-this-new-google-translate-phishing-attack/ (Accessed April 23, 2019). doi:10.5040/9781350073272

RISKIQ (2020). Investigate | COVID-19 cybercrime weekly update. Available at: https://www.riskiq.com/blog/analyst/covid19-cybercrime-update/%0D (Accessed August 1, 2020).

Robichaux, P., and Ganger, D. L. (2006). Gone phishing: evaluating anti-phishing tools for windows. Available at: http://www.3sharp.com/projects/antiphishing/gonephishing.pdf .

Rouse, M. (2013). Phishing defintion. Available at: https://searchsecurity.techtarget.com/definition/phishing (Accessed April 10, 2019).

Salem, O., Hossain, A., and Kamala, M. (2010). “Awareness program and AI based tool to reduce risk of phishing attacks,” in 2010 10th IEEE international conference on computer and information technology (IEEE) , Bradford, United Kingdom , June 29–July 1, 2010, 2001 ( IEEE ), 1418–1423. doi:10.1109/CIT.2010.254

Scaife, N., Carter, H., Traynor, P., and Butler, K. R. B. (2016). “Crypto lock (and drop it): stopping ransomware attacks on user data,” in 2016 IEEE 36th international conference on distributed computing systems (ICDCS) ( IEEE , 303–312. doi:10.1109/ICDCS.2016.46

Sheng, S., Magnien, B., Kumaraguru, P., Acquisti, A., Cranor, L. F., Hong, J., et al. (2007). “Anti-Phishing Phil: the design and evaluation of a game that teaches people not to fall for phish,” in Proceedings of the 3rd symposium on usable privacy and security - SOUPS ’07 (New York, NY: ACM Press ), 88–99. doi:10.1145/1280680.1280692

Symantec (2019). Internet security threat report volume 24 | February 2019 . USA.

Techpedia (2021). Caller ID. Available at: https://www.techopedia.com/definition/24222/caller-id (Accessed June 19, 2019).

VadeSecure (2021). Phishers favorites 2019. Available at: https://www.vadesecure.com/en/ (Accessed October 29, 2019).

Vishwanath, A. (2005). “Spear phishing: the tip of the spear used by cyber terrorists,” in deconstruction machines (United States: University of Minnesota Press ), 469–484. doi:10.4018/978-1-5225-0156-5.ch023

Wang, X., Zhang, R., Yang, X., Jiang, X., and Wijesekera, D. (2008). “Voice pharming attack and the trust of VoIP,” in Proceedings of the 4th international conference on security and privacy in communication networks, SecureComm’08 , 1–11. doi:10.1145/1460877.1460908

Wenyin, L., Huang, G., Xiaoyue, L., Min, Z., and Deng, X. (2005). “Detection of phishing webpages based on visual similarity,” in 14th international world wide web conference, WWW2005 , Chiba, Japan , May 10–14, 2005 , 1060–1061. doi:10.1145/1062745.1062868

Whitman, M. E., and Mattord, H. J. (2012). Principles of information security. Course Technol. 1–617. doi:10.1016/B978-0-12-381972-7.00002-6

Williams, E. J., Hinds, J., and Joinson, A. N. (2018). Exploring susceptibility to phishing in the workplace. Int. J. Human-Computer Stud. 120, 1–13. doi:10.1016/j.ijhcs.2018.06.004

wombatsecurity.com (2018). Wombat security user risk report. USA. Available at: https://info.wombatsecurity.com/hubfs/WombatProofpoint-UserRiskSurveyReport2018_US.pdf .

Workman, M. (2008). Wisecrackers: a theory-grounded investigation of phishing and pretext social engineering threats to information security. J. Am. Soc. Inf. Sci. 59 (4), 662–674. doi:10.1002/asi.20779

Yeboah-Boateng, E. O., and Amanor, P. M. (2014). Phishing , SMiShing & vishing: an assessment of threats against mobile devices. J. Emerg. Trends Comput. Inf. Sci. 5 (4), 297–307.

Zhang, Y., Hong, J. I., and Cranor, L. F. (2007). “Cantina,” in Proceedings of the 16th international conference on World Wide Web - WWW ’07 (New York, NY: ACM Press ), 639. doi:10.1145/1242572.1242659

Zissis, D., and Lekkas, D. (2012). Addressing cloud computing security issues. Future Generat. Comput. Syst. 28, 583–592. doi:10.1016/j.future.2010.12.006

Keywords: phishing anatomy, precautionary countermeasures, phishing targets, phishing attack mediums, phishing attacks, attack phases, phishing techniques

Citation: Alkhalil Z, Hewage C, Nawaf L and Khan I (2021) Phishing Attacks: A Recent Comprehensive Study and a New Anatomy. Front. Comput. Sci. 3:563060. doi: 10.3389/fcomp.2021.563060

Received: 17 May 2020; Accepted: 18 January 2021; Published: 09 March 2021.

Copyright © 2021 Alkhalil, Hewage, Nawaf and Khan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Chaminda Hewage, [email protected]

This article is part of the Research Topic

2021 Editor's Pick: Computer Science

UK makes top 10 in world index of cyber crime threats

Index could help shine light on difficult-to-trace activity, study author says.

The rankings were based on data gathered by researchers who surveyed almost 100 cyber crime experts from around the world. Getty Images

Soraya Ebrahimi

The UK has been placed eighth among global cyber crime hotspots in a new study ranking the most significant sources of cyber threats.

The World Cybercrime Index was published in the journal PLOS ONE after three years of research by academics from the University of Oxford and the University of New South Wales (UNSW) Canberra.

The index said Russia housed the greatest cyber crime threat, followed by Ukraine, China, the US and Nigeria.

The rankings were based on data gathered by researchers who surveyed almost 100 cyber crime experts from around the world, asking them to identify the most significant sources of five major types of cyber crime and to rank countries according to the impact, professionalism and technical skill of their criminals.

The study’s co-author, Miranda Bruce, said the research would enable cyber security agencies to focus on key hubs of cyber crime, directing funds and focus more effectively.

“The research that underpins the index will help remove the veil of anonymity around cyber criminal offenders, and we hope that it will aid the fight against the growing threat of profit-driven cyber crime,” she said.

“We now have a deeper understanding of the geography of cyber crime, and how different countries specialise in different types of cyber crime.

“By continuing to collect this data, we’ll be able to monitor the emergence of any new hotspots and it is possible early interventions could be made in at-risk countries before a serious cyber crime problem even develops.”

Co-author and associate professor Jonathan Lusthaus said the index could help shine a light on what is often difficult-to-trace activity.

“Due to the illicit and anonymous nature of their activities, cyber criminals cannot be easily accessed or reliably surveyed. They are actively hiding,” he said.

“If you try to use technical data to map their location, you will also fail, as cyber criminals bounce their attacks around internet infrastructure across the world.

“The best means we have to draw a picture of where these offenders are actually located is to survey those whose job it is to track these people.”

The researchers said they hope to expand the study to examine whether different national characteristics such as education rates, gross domestic product or levels of corruption affect the amount of cyber crime emerging from a country.

Cyber criminals gather together, researchers find

Index seeks to localise activity.

Cyber crime is not as fluid or mobile as believed, with international research identifying six hotspots that host the most malicious activity.


An Australia-UK-French research team nominated Russia, Ukraine, China, the USA, Nigeria and Romania as key cyber crime host countries. Australia ranked 34th on the list.

The World Cybercrime Index is designed to let law enforcement and the private sector concentrate their efforts on “key cyber crime hubs”, co-author Dr Miranda Bruce of UNSW Canberra said.

“The research that underpins the Index will help remove the veil of anonymity around cyber criminal offenders, and we hope that it will aid the fight against the growing threat of profit-driven cyber crime,” she added.

Like other forms of organised crime, cyber crime exists in particular contexts, researcher Professor Federico Varese of France’s Sciences Po said in a statement.

The researchers surveyed 92 experts who specialise in cyber crime intelligence gathering and investigations, asking them to nominate countries that were significant sources of different kinds of cyber crime and to rank each country “according to the impact, professionalism, and technical skill of its cyber criminals.”

World Cybercrime Index

Oxford University associate professor Jonathan Lusthaus said since attackers are actively hiding, the researchers decided that "the best means we have to draw a picture of where these offenders are actually located is to survey those whose job it is to track these people.”

By seeking to attribute different cyber crimes to their geographies, the researchers believe the public and private sectors will be better able to target their response spending.

“By continuing to collect this data, we’ll be able to monitor the emergence of any new hotspots and it’s possible early interventions could be made in at-risk countries before a serious cyber crime problem even develops," Dr Bruce said.

“We are hoping to expand the study so that we can determine whether national characteristics like educational attainment, Internet penetration, GDP or levels of corruption are associated with cyber crime," Varese said.

The team’s research is published in the journal PLOS ONE.

The Index was funded by EU-backed CRIMGOV, which is hosted at Oxford and Sciences Po. The other co-authors of the research are Oxford professor Ridhi Kashyap and Monash University’s Nigel Phair.

Three of the team (Dr Bruce, Lusthaus and Phair) previously researched the available geographical cyber crime data in a 2020 paper.



Cybercrime Victimization and Problematic Social Media Use: Findings from a Nationally Representative Panel Study

Eetu Marttila

Economic Sociology, Department of Social Research, University of Turku, Assistentinkatu 7, 20014 Turku, Finland

Aki Koivula

Pekka Räsänen

Associated Data

The survey data used in this study will be made available via the Finnish Social Science Data Archive (FSD, http://www.fsd.uta.fi/en/ ) after manuscript acceptance. The data are also available from the authors on scholarly request.

Analyses were run with Stata 16.1. The code is also available from the authors on request for replication purposes.

According to criminological research, online environments create new possibilities for criminal activity and deviant behavior. Problematic social media use (PSMU) is a habitual pattern of excessive use of social media platforms. Past research has suggested that PSMU predicts risky online behavior and negative life outcomes, but the relationship between PSMU and cybercrime victimization is not properly understood. In this study, we use the framework of routine activity theory (RAT) and lifestyle-exposure theory (LET) to examine the relationship between PSMU and cybercrime victimization. We analyze how PSMU is linked to cybercrime victimization experiences. We explore how PSMU predicts cybercrime victimization, especially under those risky circumstances that generally increase the probability of victimization. Our data come from nationally representative surveys, collected in Finland in 2017 and 2019. The results of the between-subjects tests show that PSMU correlates relatively strongly with cybercrime victimization. Within-subjects analysis shows that increased PSMU increases the risk of victimization. Overall, the findings indicate that, along with various confounding factors, PSMU has a notable cumulative effect on victimization. The article concludes with a short summary and discussion of the possible avenues for future research on PSMU and cybercrime victimization.

Introduction

In criminology, digital environments are generally understood as social spaces which open new possibilities for criminal activity and crime victimization (Yar, 2005 ). Over the past decade, social media platforms have established themselves as the basic digital infrastructure that governs daily interactions. The rapid and vast adaptation of social media technologies has produced concern about the possible negative effects, but the association between social media use and decreased wellbeing measures appears to be rather weak (Appel et al., 2020 ; Kross et al., 2020 ). Accordingly, researchers have proposed that the outcomes of social media use depend on the way platforms are used, and that the negative outcomes are concentrated among those who experience excessive social media use (Kross et al., 2020 ; Wheatley & Buglass, 2019 ). Whereas an extensive body of research has focused either on cybercrime victimization or on problematic social media use, few studies have focused explicitly on the link between problematic use and victimization experiences (e.g., Craig et al., 2020 ; Longobardi et al., 2020 ).

As per earlier research, the notion of problematic use is linked to excessive and uncontrollable social media usage, which is characterized by compulsive and routinized thoughts and behavior (e.g., Kuss & Griffiths, 2017 ). The most frequently used social scientific and criminological accounts of risk factors of victimization are based on routine activity theory (RAT) (Cohen & Felson, 1979 ) and lifestyle-exposure theory (LET) (Hindelang et al., 1978 ). Although RAT and LET were originally developed to understand how routines and lifestyle patterns may lead to victimization in physical spaces, they have been applied in online environments (e.g., Milani et al., 2020 ; Räsänen et al., 2016 ).

As theoretical frameworks, RAT and LET presume that lifestyles and routine activities are embedded in social contexts, which makes it possible to understand behaviors and processes that lead to victimization. The excessive use of social media platforms increases the time spent in digital environments, which, according to lifestyle and routine activities theories, tends to increase the likelihood of ending up in dangerous situations. Therefore, we presume that problematic use is a particularly dangerous pattern of use, which may increase the risk of cybercrime victimization.

In this study, we employ the key elements of RAT and LET to focus on the relationship between problematic social media use and cybercrime victimization. Our data come from high quality, two-wave longitudinal population surveys, which were collected in Finland in 2017 and 2019. First, we examine the cross-sectional relationship between problematic use and victimization experiences at Wave 1, considering the indirect effect of confounding factors. Second, we test for longitudinal effects by investigating whether increased problematic use predicts an increase in victimization experiences at Wave 2.

Literature Review

Problematic Social Media Use

Over the last few years, the literature on the psychological, cultural, and social effects of social media has proliferated. Prior research on the topic presents a nuanced view of social media and its consequences (Kross et al., 2020 ). For instance, several studies have demonstrated that social media use may produce positive outcomes, such as increased life satisfaction, social trust, and political participation (Kim & Kim, 2017 ; Valenzuela et al., 2009 ). The positive effects are typically explained to follow from use that satisfy individuals’ socioemotional needs, such as sharing emotions and receiving social support on social media platforms (Pang, 2018 ; Verduyn et al., 2017 ).

However, another line of research associates social media use with several negative effects, including higher stress levels, increased anxiety and lower self-esteem (Kross et al., 2020 ). Negative outcomes, such as depression (Shensa et al., 2017 ), decreased subjective well-being (Wheatley & Buglass, 2019 ) and increased loneliness (Meshi et al., 2020 ), are also commonly described in the research literature. The most common mechanisms that are used to explain negative outcomes of social media use are social comparison and fear of missing out (Kross et al., 2020 ). In general, it appears that the type of use that does not facilitate interpersonal connection is more detrimental to users’ health and well-being (Clark et al., 2018 ).

Even though the earlier research on the subject has produced somewhat contradictory results, the researchers generally agree that certain groups of users are at more risk of experiencing negative outcomes of social media use. More specifically, the researchers have pointed out that there is a group of individuals who have difficulty controlling the quantity and intensity of their use of social media platforms (Kuss & Griffiths, 2017 ). Consequently, new concepts, such as problematic social media use (Bányai et al., 2017 ) and social networking addiction (Griffiths et al., 2014 ) have been developed to assess excessive use. In this research, we utilize the concept of problematic social media use (PSMU), which is applied broadly in the literature. In contrast to evidence of social media use in general, PSMU consistently predicts negative outcomes in several domains of life, including decreased subjective well-being (Kross et al., 2013 ; Wheatley & Buglass, 2019 ), depression (Hussain & Griffiths, 2018 ), and loneliness (Marttila et al., 2021 ).

To our knowledge, few studies have focused explicitly on the relationship between PSMU and cybercrime victimization. One cross-national study of young people found that PSMU is consistently and strongly associated with cyberbullying victimization across countries (Craig et al., 2020 ) and another one of Spanish adolescents returned similar results (Martínez-Ferrer et al., 2018 ). Another study of Italian adolescents found that an individual’s number of followers on Instagram was positively associated with experiences of cybervictimization (Longobardi et al., 2020 ). A clear limitation of the earlier studies is that they focused on adolescents and often dealt with cyberbullying or harassment. Therefore, the results are not straightforwardly generalizable to adult populations or to other forms of cybercrime victimization. Despite this, there are certain basic assumptions about cybercrime victimization that must be considered.

Cybercrime Victimization, Routine Activity, and Lifestyle-Exposure Theories

In criminology, the notion of cybercrime is used to refer to a variety of illegal activities that are performed in online networks and platforms through computers and other devices (Yar & Steinmetz, 2019 ). As a concept, cybercrime is employed in different levels of analysis and used to describe a plethora of criminal phenomena, ranging from individual-level victimization to large-scale, society-wide operations (Donalds & Osei-Bryson, 2019 ). In this study, we define cybercrime as illegal activity and harm to others conducted online, and we focus on self-reported experiences of cybercrime victimization. Therefore, we do not address whether respondents reported an actual crime victimization to the authorities.

In Finland and other European countries, the most common types of cybercrime include slander, hacking, malware, online fraud, and cyberbullying (see Europol, 2019 ; Meško, 2018 ). Providing exact estimates of cybercrime victims has been a challenge for previous criminological research, but 1 to 15 percent of the European population is estimated to have experienced some sort of cybercrime victimization (Reep-van den Bergh & Junger, 2018 ). Similarly, it is difficult to give a precise estimate of the prevalence of social media-related criminal activity. However, as a growing proportion of digital interactions are mediated by social media platforms, we can expect that cybercrime victimization on social media is also increasing. According to previous research, identity theft (Reyns et al., 2011 ), cyberbullying (Lowry et al., 2016 ), hate speech (Räsänen et al., 2016 ), and stalking (Marcum et al., 2017 ) are all regularly implemented on social media. Most of the preceding studies have focused on cybervictimization of teenagers and young adults, which are considered the most vulnerable population segments (e.g., Hawdon et al., 2017 ; Keipi et al.,  2016 ).

One of the most frequently used conceptual frameworks to explain victimization is routine activity theory (RAT) (Cohen & Felson, 1979 ). RAT claims that the everyday routines of social actors place individuals at risk for victimization by exposing them to dangerous people, places, and situations. The theory posits that a crime is more likely to occur when a motivated offender, a suitable target, and a lack of capable guardians converge in space and time (Cohen & Felson, 1979 ). RAT is similar to lifestyle-exposure theory (LET), which aims to understand the ways in which lifestyle patterns in the social context allow different forms of victimization (Hindelang et al., 1978 ).

In this study, we build our approach on combining RAT and LET in order to examine risk-enhancing behaviors and characteristics fostered by online environment. Together, these theories take the existence of motivated offenders for granted and therefore do not attempt to explain their involvement in crime. Instead, we concentrate on how routine activities and lifestyle patterns, together with the absence of a capable guardian, affect the probability of victimization.

Numerous studies have investigated the applicability of LET and RAT for cybercrime victimization (e.g., Holt & Bossler, 2008, 2014; Leukfeldt & Yar, 2016; Näsi et al., 2017; Vakhitova et al., 2016, 2019; Yar, 2005). The results indicate that different theoretical concepts can be operationalized in online environments to varying degrees, and that some operationalizations are more helpful than others (Näsi et al., 2017). For example, the concept of risk exposure is considered to be compatible with online victimization, even though earlier studies have shown a high level of variation in how risk exposure is measured (Vakhitova et al., 2016). By contrast, target attractiveness and lack of guardianship are generally considered to be more difficult to operationalize in the context of technology-mediated victimization (Leukfeldt & Yar, 2016).

In the next section, we will take a closer look at how the key theoretical concepts LET and RAT have been operationalized in earlier studies on cybervictimization. Here, we focus solely on factors that we can address empirically with our data. Each of these have successfully been applied to online environments in prior studies (e.g., Hawdon et al., 2017 ; Keipi et al., 2016 ).

Confounding Elements of Lifestyle and Routine Activities Theories and Cybercrime Victimization

Exposure to Risk

The first contextual component of RAT/LET addresses the general likelihood of experiencing risk situations. Risk exposure has typically been measured by the amount of time spent online or the quantity of different online activities – the hours spent online, the number of online accounts, the use of social media services (Hawdon et al., 2017 ; Vakhitova et al., 2019 ). The studies that have tested the association have returned mixed results, and it seems that simply the time spent online does not predict increased victimization (e.g., Ngo & Paternoster, 2011 ; Reyns et al., 2011 ). On the other hand, the use of social media platforms (Bossler et al., 2012 ; Räsänen et al., 2016 ) and the number of accounts in social networks are associated with increased victimization (Reyns et al., 2011 ).

Regarding the association between the risk of exposure and victimization experiences, previous research has suggested that specific online activities may increase the likelihood of cybervictimization. For example, interaction with other users is associated with increased victimization experiences, whereas passive use may protect from cybervictimization (Holt & Bossler, 2008 ; Ngo & Paternoster, 2011 ; Vakhitova et al., 2019 ). In addition, we assume that especially active social media use, such as connecting with new people, is a risk factor and should be taken into account by measuring the proximity to offenders in social media.

Proximity to Offenders

The second contextual component of RAT/LET is closeness to the possible perpetrators. Previously, proximity to offenders was typically measured by the amount of self-disclosure in online environments, such as the number of followers on social media platforms (Vakhitova et al., 2019). Again, earlier studies have returned inconsistent results, and proximity to offenders has mixed effects on the risk of victimization. For example, the number of online friends does not predict increased risk of cybercrime victimization (Näsi et al., 2017; Räsänen et al., 2016; Reyns et al., 2011). By contrast, a high number of social media followers (Longobardi et al., 2020) and online self-disclosures are associated with higher risk of victimization (Vakhitova et al., 2019).

As in the case of risk exposure, different operationalizations of proximity to offenders may predict victimization more strongly than others. For instance, compared to interacting with friends and family, contacting strangers online may be much riskier (Vakhitova et al., 2016 ). Earlier studies support this notion, and allowing strangers to acquire sensitive information about oneself, as well as frequent contact with strangers on social media, predict increased risk for cybervictimization (Craig et al., 2020 ; Reyns et al., 2011 ). Also, compulsive online behavior is associated with a higher probability of meeting strangers online (Gámez-Guadix et al., 2016 ), and we assume that PSMU use may be associated with victimization indirectly through contacting strangers.

Target Attractiveness

The third contextual element of RAT/LET considers the fact that victimization is more likely among those who share certain individual and behavioral traits. Such traits can be seen to increase attractiveness to offenders and thereby increase the likelihood of experiencing risk situations. Earlier studies on cybercrime victimization have utilized a wide selection of measures to operationalize target attractiveness, including gender and ethnic background (Näsi et al., 2017 ), browsing risky content (Räsänen et al., 2016 ), financial status (Leukfeldt & Yar, 2016 ) or relationship status, and sexual orientation (Reyns et al., 2011 ).

In general, these operationalizations do not seem to predict victimization reliably or effectively. Despite this, we suggest that certain operationalizations of target attractiveness may be valuable. Past research on the different uses of social media has suggested that provocative language or expressions of ideological points of view can increase victimization. More specifically, political activity is a typical behavioral trait that tends to provoke reactions in online discussions (e.g. , Lutz & Hoffmann, 2017 ). In studies of cybervictimization, online political activity is associated with increased victimization (Vakhitova et al., 2019 ). Recent studies have also emphasized how social media have brought up and even increased political polarization (van Dijk & Hacker, 2018 ).

In Finland, the main division has been drawn between the supporters of the populist right-wing party, the Finns, and the supporters of the Green League and the Left Alliance (Koiranen et al., 2020 ). However, it is noteworthy that Finland has a multi-party system based on socioeconomic cleavages represented by traditional parties, such as the Social Democratic Party of Finland, the National Coalition Party, and the Center Party (Koivula et al., 2020 ). Indeed, previous research has shown that there is relatively little affective polarization in Finland (Wagner, 2021 ). Therefore, in the Finnish context it is unlikely that individuals would experience large-scale victimization based on their party preference.

Lack of Guardianship

The fourth element of RAT/LET assesses the role of social and physical guardianship against harmful activity. The lack of guardianship is assumed to increase victimization, and conversely, the presence of capable guardianship to decrease the likelihood of victimization (Yar, 2005). In studies of online activities and routines, different measures of guardianship have rarely acted as predictors of victimization experiences (Leukfeldt & Yar, 2016; Vakhitova et al., 2016).

Regarding social guardianship, measures such as respondents’ digital skills and online risk awareness have been used, but with non-significant results (Leukfeldt & Yar, 2016 ). On the other hand, past research has indicated that victims of cyber abuse in general are less social than non-victims, which indicates that social networks may protect users from abuse online (Vakhitova et al., 2019 ). Also, younger users, females, and users with low educational qualifications are assumed to have weaker social guardianship against victimization and therefore are in more vulnerable positions (e.g., Keipi et al., 2016 ; Pratt & Turanovic, 2016 ).

In terms of physical guardianship, several technical measures, such as the use of firewalls and virus scanners, have been utilized in past research (Leukfeldt & Yar, 2016 ). In a general sense, technical security tools function as external settings in online interactions, similar to light, which may increase the identifiability of the aggressor in darkness. Preceding studies, however, have found no significant connection between technical guardianship and victimization (Vakhitova et al., 2016 ). Consequently, we decided not to address technical guardianship in this study.

Based on the preceding research findings discussed above, we stated the following two hypotheses:

  • H1: Increased PSMU associates with increased cybercrime victimization.
  • H2: The association between PSMU and cybercrime victimization is confounded by factors assessing exposure to risk, proximity to offenders, target attractiveness, and lack of guardianship.

Research Design

Our aim was to analyze how problematic use of social media is linked to cybercrime victimization experiences. According to RAT and LET, cybercrime victimization relates to how individuals’ lifestyles expose them to circumstances that increase the probability of victimization (Hindelang et al., 1978 ) and how individuals behave in different risky environments (Engström, 2020 ). Our main premise is that PSMU exposes users more frequently to environments that increase the likelihood of victimization experiences.

We constructed our research in two separate stages on the basis of the two-wave panel setting. In the first stage, we approached the relationship between PSMU and cybercrime victimization cross-sectionally by using a large and representative sample of the Finnish population aged 18–74. We also analyzed the extent to which the relationship between PSMU and cybercrime victimization was related to the confounders. In the second stage of analysis, we paid more attention to longitudinal effects and tested for the panel effects, examining changes in cybercrime victimization in relation to changes in PSMU.

Participants

We utilized two-wave panel data that were derived from the first and second rounds of the Digital Age in Finland survey. The cross-sectional study was based on the first round of the survey, organized in December 2017, for a total of 3,724 Finns. In this sample, two-thirds of the respondents were randomly sampled from the Finnish population register, and one-third were supplemented from a demographically balanced online respondent pool organized by Taloustutkimus Inc. We analyzed social media users ( N  = 2,991), who accounted for 77% of the original data. The data over-represented older citizens, which is why post-stratifying weights were applied to correspond with the official population distribution of Finns aged 18–74 (Sivonen et al., 2019 ).
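In Stata, the weighting step can be reproduced in outline with standard survey commands. This is a minimal sketch only: the weight variable name (wcal) and the check on age are assumptions, not the authors' code.

```
* Declare the post-stratification weight supplied with the data ("wcal" is a
* hypothetical name) so that later estimation commands can use it.
svyset [pweight=wcal]

* Quick weighted check that a key margin (here, age) matches the population.
svy: mean age
```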

To form a longitudinal setting, respondents were asked whether they were willing to participate in the survey a second time about a year after the first data collection. A total of 1,708 participants expressed willingness to participate in the follow-up survey that was conducted 15 months after the first round, in March 2019. A total of 1,134 people participated in the follow-up survey, comprising a response rate of 67% in the second round.

The question form was essentially the same for both rounds of data collection.

The final two-wave data used in the second stage of analysis mirrored the population in terms of gender (males 50.8%) and age (M = 49.9, SD = 16.2) structure. However, the data were unrepresentative in terms of education and employment status when compared to the Finnish population: tertiary level education was achieved by 44.5% of participants and only 50.5% of respondents were employed. The data report published online shows a more detailed description of the data collection and its representativeness (Sivonen et al., 2019).

Our dependent variable measured whether the participants had been a target of cybercrime. Cybercrime was measured with five dichotomous questions inquiring whether the respondent had personally: 1) been targeted by threat or attack on social media, 2) been falsely accused online, 3) been targeted with hateful or degrading material on the Internet, 4) experienced sexual harassment on social media, and 5) been subjected to account stealing. 1 In the first round, 159 respondents (14.0%) responded that they had been the victim of cybercrime. In the second round, the number of victimization experiences increased by about 6 percentage points, as 71 respondents had experienced victimization during the observation period.
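A minimal sketch of how such a victimization indicator could be built in Stata, assuming the five items are stored as 0/1 dummies named cv1-cv5; these names, and the weight wcal, are hypothetical and not taken from the authors' code.

```
* 1 if the respondent reported any of the five forms of victimization, 0 otherwise.
egen cv_any = rowmax(cv1 cv2 cv3 cv4 cv5)
label define yesno 0 "No" 1 "Yes"
label values cv_any yesno

* Weighted prevalence check against the figures reported in the text.
tab cv_any [aweight=wcal]
```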

Our main independent variable was problematic social media use (PSMU). Initially, participants’ problematic and excessive social media usage was measured through an adaptation of the Compulsive Internet Use Scale (CIUS), which consists of 14 items ratable on a 5-point Likert scale (Meerkerk et al., 2009). Our measure included five items on a 4-point scale scored from 1 (never) to 4 (daily) based on how often respondents: 1) “Have difficulties with stopping social media use,” 2) “Have been told by others you should use social media less,” 3) “Have left important work, school or family related things undone due to social media use,” 4) “Use social media to alleviate feeling bad or stress,” and 5) “Plan social media use beforehand.”

For our analysis, all five items were used to create a new three-level variable to assess respondents’ PSMU at different intensity levels. If the respondent experienced at least one of the signs of problematic use daily or weekly, PSMU was coded as at least weekly. Second, if the respondent experienced at least one of the signs of problematic use less than weekly, PSMU was coded as occasionally. Finally, if the respondent was not experiencing any signs of problematic use, PSMU was coded as none.
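The three-level PSMU variable could be derived roughly as follows, assuming the five CIUS-style items are named cius1-cius5 and scored 1 "never", 2 "less than weekly", 3 "weekly", 4 "daily"; the item names and the exact response coding are assumptions.

```
* Highest reported frequency across the five problematic-use items.
egen psmu_hi = rowmax(cius1 cius2 cius3 cius4 cius5)

gen     psmu3 = .
replace psmu3 = 0 if psmu_hi == 1              // no signs of problematic use
replace psmu3 = 1 if psmu_hi == 2              // occasional problematic use
replace psmu3 = 2 if inlist(psmu_hi, 3, 4)     // problematic use at least weekly
label define psmulbl 0 "None" 1 "Occasionally" 2 "At least weekly"
label values psmu3 psmulbl
```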

To find reliable estimates for the effects of PSMU, we controlled for general social media use , including respondents’ activity on social networking sites and instant messenger applications. We combined two items to create a new four-level variable to measure respondents’ social media use (SMU). If a respondent reported using either social media platforms (e.g., Facebook, Twitter), instant messengers (e.g., WhatsApp, Facebook Messenger) or both many hours per day, we coded their activity as high . We coded activity as medium , if respondents reported using social media daily . Third, we coded activity as low for those respondents who reported using social media only on a weekly basis. Finally, we considered activity as very low if respondents reported using platforms or instant messengers less than weekly.

Confounding variables were related to participants’ target attractiveness, proximity to offenders, and potential guardianship factors.

Target attractiveness was measured by online political activity . Following previous studies (Koiranen et al., 2020 ; Koivula et al., 2019 ), we formed the variable based on four single items: following political discussions, participating in political discussions, sharing political content, and creating political content. Participants’ activity was initially determined by means of a 5-point scale (1 = Never, 2 = Sometimes, 3 = Weekly, 4 = Daily, and 5 = Many times per day). For analysis purposes, we first separated “politically inactive” users, who reported never using social media for political activities. Second, we coded as “followers” participants who only followed but never participated in the political discussions in social media. Third, we classified as “occasional participants” those who at least sometimes participated in political activities on social media. Finally, those participants who at least weekly used social media to participate in political activities were classified as “active participants.”

Proximity to offenders was considered by analyzing contact with strangers on social media. Initially, the question asked the extent to which respondents were in contact with strangers on social media, evaluated with a 5-point interval scale, from 1 (Not at all) to 5 (Very much). For the analysis, we merged response options 1 and 2 to form value 1, and options 4 and 5 to form value 3. Consequently, we used a three-level variable to measure respondents’ tendency to contact strangers on social media, in which 1 = Low, 2 = Medium, and 3 = High intensity.

Lack of guardianship was measured by gender, age, education, and main activity. Respondent’s gender (1 = Male, 2 = Female), age (in years), level of education, and main activity were measured. While these variables could also be placed under target attractiveness, we placed them here. This is because the background characteristics these variables measure are often invisible in online environments and exist only in terms of expressed behavior (e.g., Keipi et al., 2016). For statistical analysis, we classified education and main activity into binary variables. Education was measured with a binary variable indicating whether the respondent had achieved at least tertiary level education or not. The dichotomization can be justified by relatively high educational levels in Finland, where tertiary education is often considered as a cut-off point between educated and non-educated citizens (Leinsalu et al., 2020). Main activity was measured with a binary variable that differentiated unemployed respondents from others (working, retirees, and full-time students). Regarding the lack of guardianship, unemployed people are less likely to relate to informal peer-networks occurring at workplaces or educational establishments, a phenomenon that also takes place in many senior citizens’ activities. Descriptive statistics for all measurements are provided in Table 1.
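The confounder recodes described above could look roughly like this in Stata. All source variable names (sns_freq, im_freq, strangers5, educ_level, mainact, gender) and the numeric codes assumed for them are hypothetical; the four-category political activity measure follows the same pattern and is omitted here.

```
* General social media use: highest frequency across the platform and
* instant-messenger items (assumed coding 1 "less than weekly" ... 4 "many hours per day").
egen smu_hi = rowmax(sns_freq im_freq)
recode smu_hi (1 = 1 "Very low") (2 = 2 "Low") (3 = 3 "Medium") (4 = 4 "High"), gen(smu4)

* Contact with strangers on social media: collapse the 5-point item to three levels.
recode strangers5 (1 2 = 1 "Low") (3 = 2 "Medium") (4 5 = 3 "High"), gen(strangers3)

* Guardianship-related binaries (cut-off codes are assumptions).
gen female     = (gender == 2)     if !missing(gender)
gen tertiary   = (educ_level >= 6) if !missing(educ_level)   // tertiary education vs other
gen unemployed = (mainact == 3)    if !missing(mainact)      // unemployed vs other
```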

Table 1. Descriptive statistics for the applied variables

Analytic techniques

The analyses were performed in two different stages with Stata 16. In the cross-sectional approach we analyzed the direct and indirect associations between PSMU and cybercrime victimization. We reported average marginal effects and their standard errors with statistical significances (Table 2). The main effect of PSMU was illustrated in Fig. 1 by utilizing the user-written coefplot package (Jann, 2014).
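The first-stage models could be sketched in Stata as below. Variable names follow the hypothetical recodes above, and the exact specification (including the use of weights) is an assumption rather than the authors' code; models M2-M5 would add the confounders (e.g., i.smu4, i.strangers3, the political activity measure, age and gender) to the same call.

```
* Model 1: main effect of PSMU on victimization, with post-stratification weights.
logit cv_any i.psmu3 [pweight=wcal]
margins, dydx(psmu3)                     // average marginal effects, as in Table 2

* Predicted probabilities by PSMU level, plotted in the style of Fig. 1
* (coefplot is user-written: ssc install coefplot).
margins psmu3, post
coefplot, vertical recast(scatter) ciopts(recast(rcap)) ///
    ytitle("Pr(cybercrime victimization)")
```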

Table 2. The likelihood of cybercrime victimization according to confounding and control variables: average marginal effects (AME) with standard errors estimated from the logit models

Standard errors in parentheses

*** p  < 0.001, ** p  < 0.01, * p  < 0.05

Fig. 1. Likelihood of cybercrime victimization according to the level of problematic social media use. Predicted probabilities with 95% confidence intervals

When establishing the indirect effects, we used the KHB method developed by Karlson et al. (2012) and employed the khb command in Stata (Kohler et al., 2011). The KHB method decomposes the total effect of an independent variable into direct and indirect components via a confounding/mediating variable (Karlson et al., 2012). Based on the decomposition analysis, we reported logit coefficients for the total effect, direct effects, and indirect effects with statistical significances and confounding percentages (Table 3).
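A hedged sketch of the decomposition call is given below (khb is a user-written command, installable with ssc install khb). Treating the ordinal predictors as continuous is a simplification, and the variable names are the hypothetical ones used above; polact4 stands in for the political activity measure whose recode was omitted earlier.

```
* Decompose the PSMU-victimization association into direct and indirect parts
* via the confounders; "summary" and "disentangle" report the confounding
* percentages overall and per confounder.
khb logit cv_any psmu3 || age female smu4 strangers3 polact4, summary disentangle
```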

Table 3. The decomposition of the effect of PSMU on online victimization with respect to confounding factors: logit coefficients estimated using the KHB method

In the second stage, we analyzed the panel effects. We used hybrid mixed models to separate the between-person and within-person effects of time-varying factors, and predicted changes in cybercrime victimization with respect to changes in problematic social media use. We also tested how the relationship between cybercrime victimization and other time-varying variables changed over the observation period. The hybrid models were estimated using the xthybrid command (Schunck & Perales, 2017).
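The hybrid models could be sketched as follows, assuming the two waves are stacked in long format with a person identifier pid and the same hypothetical variable names as above (xthybrid is user-written: ssc install xthybrid). Each call reports a within-cluster and a between-cluster coefficient for the predictor, mirroring the within- and between-effects discussed below.

```
* Hybrid (within/between) logit models, one predictor at a time as in Table 4.
xthybrid cv_any psmu3,      clusterid(pid) family(binomial) link(logit)
xthybrid cv_any smu4,       clusterid(pid) family(binomial) link(logit)
xthybrid cv_any strangers3, clusterid(pid) family(binomial) link(logit)
```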

Results

The results for our first hypothesis are presented in Fig. 1. The likelihood of becoming a victim of cybercrime increased significantly as PSMU increased. Respondents who reported problematic use on a daily basis experienced cybercrime with a probability of more than 40%. The probability of becoming a victim was also high, 30%, if problematic use occurred weekly.

The models predicting cybercrime victimization are shown in Table 2. In the first model (M1), PSMU significantly predicted the risk of victimization if a participant reported even occasional problematic use (AME 0.06; p < 0.001). If the respondent reported problematic use weekly (AME 0.17; p < 0.001) or daily (AME 0.33; p < 0.001), his or her probability of becoming a victim was significantly higher.

The next three models (M2-M4) were constructed on the basis of variables measuring risk exposure, proximity to offenders, and target attractiveness. The second model (M2) indicates that highly intensive social media use (AME 0.19, p < 0.001) was related to cybercrime victimization. The third model (M3) shows that those who reported low intensity of meeting strangers online had a lower probability of being victims (AME -0.11, p < 0.001) and those who reported high intensity had a higher probability (AME 0.12, p < 0.05). Finally, the fourth model (M4) suggests that political activity was related to victimization: those who reported participating occasionally (AME 0.07, p < 0.01) and actively (AME 0.14, p < 0.001) had a higher probability of being a victim.

Next, we evaluated how different guardianship factors were related to victimization. The fifth model (M5) indicates that age, gender, and economic activity were identified as significant protective factors. According to the results, older (AME -0.01, p < 0.001) and male (AME -0.04, p < 0.001) participants were less likely to be targets of cybercrime. Interestingly, neither higher education nor unemployment was related to victimization. Finally, the fifth model also suggests that the effect of PSMU remained significant even after controlling for the confounding and control variables.

We decomposed the fifth model to determine how different confounding and control variables affected the relationship between PSMU and victimization. The results of the decomposition analysis are shown in Table 3. First, the factors significantly influenced the association between PSMU and victimization (B = 0.38, p < 0.001), which means that the confounding percentage of background factors was 58.7%. However, the direct effect of PSMU remained significant (B = 0.27, p < 0.001). Age was the most significant factor in the association between PSMU and victimization (B = 0.14; p < 0.001), explaining 36% of the total confounding percentage. Political activity was also a major contributing factor (B = 0.12, p < 0.001) that explained 31.2% of the total confounding percentage. The analysis also revealed that meeting strangers online significantly confounded the relationship between PSMU and victimization (B = 0.7, p < 0.001).

In the second stage, we examined the longitudinal effects of PSMU on cybercrime victimization using panel data from Finnish social media users. We focused on factors that vary in the short term, which is why we also analyzed the temporal effects of SMU, contacting strangers online, and online political activity on victimization. The demographic factors that did not change over time or for which temporal variability did not vary across clusters (such as age) were not considered in the second stage.

Table 4 shows the hybrid models predicting each variable separately. The within-person effects revealed that increased PSMU increased individuals’ probability of being victimized during the observation period (B = 0.77, p = 0.02). Moreover, the between-person effect of PSMU was also significant (B = 2.00, p < 0.001), indicating that higher levels of PSMU were related to a higher propensity to be victimized over the observation period.

Table 4. Unadjusted logit coefficients of cybercrime victimization according to PSMU and confounding variables from hybrid generalized mixed models

Each variable modelled separately

We could not find significant within-subject effects in terms of other factors. However, the between-effects indicated that SMU ( B  = 2.00, p  < 0.001), low intensity of meeting strangers online ( B  = -3.27, p  < 0.001), and online political participation ( B  = 2.08, p  < 0.001) distinguished the likelihood of individuals being victimized.

Discussion

Over the last decade, social media has revolutionized the way people communicate and share information. As the everyday lives of individuals are increasingly mediated by social media technologies, some users may experience problems with excessive use. In prior studies, problematic use has been associated with many negative life outcomes, ranging from psychological disorders to economic consequences.

The main objective of this study was to determine whether PSMU is also linked to increased cybercrime victimization. First, we examined how PSMU associates with cybercrime victimization and hypothesized that increased PSMU associates with increased cybercrime victimization (H1). Our findings from the cross-sectional study indicated that PSMU is a notable predictor of victimization. In fact, daily reported problematic use increased the likelihood of cybercrime victimization by more than 30 percentage points. More specifically, the analysis showed that more than 40% of users who reported experiencing problematic use daily reported being victims of cybercrime, while those who never experienced problematic use had a probability of victimization of slightly over 10%.

We also examined how PSMU captures other risk factors contributing to cybercrime victimization. Here, we hypothesized that the association between PSMU and cybercrime victimization is mediated by exposure to risk, proximity to offenders, target attractiveness, and lack of guardianship (H2). The decomposition analysis indicated that confounding factors explained over 50 percent of the total effect of PSMU. A more detailed analysis showed that the association between PSMU and cybercrime victimization was related to respondents’ young age, online political activity, activity to meet strangers online, and intensity of general social media use. This means that PSMU and victimization are linked to similar factors related to routine activities and lifestyle that increase the target's attractiveness, proximity to offenders and lack of guardianship. Notably, the effect of PSMU remained significant even after controlling for the confounding factors.

In the longitudinal analysis, we confirmed the first hypothesis and found that increased PSMU was associated with increased cybercrime victimization in both within- and between-subject analyses. The result indicated a clear link between problematic use and cybercrime experiences during the observation period: as problematic use increases, so does the individual’s likelihood of becoming a victim of cybercrime. At the same time, according to the between-subject analysis, it also appears that cybercrime experiences are generally more likely to increase for those who experience more problematic use. Interestingly, we could not find within-subject effects in terms of other factors. This means, for example, that individuals' increased encounters with strangers or increased online political activity were not directly reflected in the likelihood of becoming a victim during the observation period. The between-subject analyses, however, indicated that an individual’s increased propensity to be victimized is related to higher level of social media activity, intensity of meeting strangers online, and online political activity over time.

Our findings are consistent with those of preceding research pointing to the fact that cybervictimization is indeed a notable threat, especially to those already in vulnerable circumstances (Keipi et al., 2016 ). The probabilities of cybercrime risk vary in online interactional spaces, depending on the absence and presence of certain key components suggested in our theoretical framework. Despite the seriousness of our findings, recent statistics indicate that cybercrime victimization is still relatively rare in Finland. In 2020, seven percent of Finnish Internet users had experienced online harassment, and 13 percent reported experiencing unwelcome advances during the previous three months (OSF, 2020 ). However, both forms of cybercrime victimization are clearly more prevalent among younger people and those who use social media frequently.

Cybercrime is becoming an increasingly critical threat as social media use continues to spread throughout segments of the population. Certain online activities and routinized behaviors can be considered to be particularly risky and to increase the probability of cybercrime victimization. In our study, we have identified problematic social media use as a specific behavioral pattern or lifestyle that predicts increased risk of becoming a victim of cybercrime.

Although the overall approach of our study was straightforward, the original theoretical concepts are ambiguously defined and alternative meanings have been given to them. It follows that the empirical operationalization of the concepts was not in line with some studies looking at the premises of RAT and LET framework. Indeed, different empirical measures have been employed to address the basic elements associating with risks of victimization (e.g., Hawdon et al., 2017 ; Pratt & Turanovic, 2016 ). In our investigation, we focused on selected online activities and key socio-demographic background factors.

Similarly, we need to be cautious when discussing the implications of our findings. First, our study deals with one country alone, which means that the findings cannot be generalized beyond Finland or beyond the timeline 2017 to 2019. This means that our findings may not be applicable to the highly specific time of the COVID-19 pandemic when online activities have become more versatile than ever before. In addition, although our sample was originally drawn from the national census database, some response bias probably exists in the final samples. Future research should use longitudinal data that better represent, for example, different socio-economic groups. We also acknowledge that we did not control for the effect of offline social relations on the probability of cybercrime risk. Despite these limitations, we believe our study has significance for contemporary cybercrime research.

Our study shows that PSMU heightens the risk of cybercrime victimization. Needless to say, future research should continue to identify specific activities that comprise “dangerous” lifestyles online, which may vary from one population group to another. In online settings, there are a variety of situations and circumstances that are applicable to different forms of cybercrime. For instance, a lack of basic cybersecurity skills may increase risk in much the same way as PSMU.

In general, our findings contribute to the assumption that online and offline victimization should not necessarily be considered distinct phenomena. Therefore, our theoretical framework, based on RAT and LET, seems highly justified. Our observations contribute to an increasing body of research that demonstrates how routine activities and lifestyle patterns of individuals can be applied to crimes committed in the physical world, as well as to crimes occurring in cyberspace.

Biographies

Eetu Marttila is a PhD student at the Unit of Economic Sociology, University of Turku, Finland. Marttila is interested in the use of digital technologies, risks, and well-being.

Aki Koivula is a University Lecturer at the Unit of Economic Sociology, University of Turku, Finland. Koivula’s research deals with political preferences, consumer behavior and use of online platforms.

Pekka Räsänen is Professor of Economic Sociology at the University of Turku, Finland. His current research interests are in digital inequalities and online hate speech in the platform economy.

Open Access funding provided by University of Turku (UTU) including Turku University Central Hospital. This study was funded by the Strategic Research Council of the Academy of Finland (decision number 314171).

Declarations

The authors declare no conflicts of interest.

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

1) Have you been targeted by a threat or attack on social media?

2) Have you been falsely accused online?

3) Have you been targeted with hateful or degrading material on the Internet?

4) Have you experienced sexual harassment on social media?

5) Has your online account been stolen or a new account made with your name without your permission?

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Appel M, Marker C, Gnambs T. Are social media ruining our lives? A review of meta-analytic evidence. Review of General Psychology. 2020;24(1):60–74. doi:10.1177/1089268019880891
  • Bányai, F., Zsila, Á., Király, O., Maraz, A., Elekes, Z., Griffiths, M. D., et al. (2017). Problematic social media use: Results from a large-scale nationally representative adolescent sample. PLoS ONE, 12(1). doi:10.1371/journal.pone.0169839
  • Bossler AM, Holt TJ, May DC. Predicting online harassment victimization among a juvenile population. Youth & Society. 2012;44(4):500–523. doi:10.1177/0044118X11407525
  • Clark JL, Algoe SB, Green MC. Social network sites and well-being: The role of social connection. Current Directions in Psychological Science. 2018;9:44–49. doi:10.1016/j.copsyc.2015.10.006
  • Cohen LE, Felson M. Social change and crime rate trends: A routine activity approach. American Sociological Review. 1979;44(4):588–608. doi:10.2307/2094589
  • Craig W, Boniel-Nissim M, King N, Walsh SD, Boer M, Donnelly PD, et al. Social media use and cyber-bullying: A cross-national analysis of young people in 42 countries. Journal of Adolescent Health. 2020;66(6):S100–S108. doi:10.1016/j.jadohealth.2020.03.006
  • Donalds C, Osei-Bryson KM. Toward a cybercrime classification ontology: A knowledge-based approach. Computers in Human Behavior. 2019;92:403–418. doi:10.1016/j.chb.2018.11.039
  • Engström A. Conceptualizing lifestyle and routine activities in the early 21st century: A systematic review of self-report measures in studies on direct-contact offenses in young populations. Crime & Delinquency. 2020;67(5):737–782. doi:10.1177/0011128720937640
  • Europol (2019). European Union serious and organised crime threat assessment. Online document, available at: https://ec.europa.eu/home-affairs/what-we-do/policies/cybercrime_en
  • Gámez-Guadix M, Borrajo E, Almendros C. Risky online behaviors among adolescents: Longitudinal relations among problematic Internet use, cyberbullying perpetration, and meeting strangers online. Journal of Behavioral Addictions. 2016;5(1):100–107. doi:10.1556/2006.5.2016.013
  • Griffiths, M. D., Kuss, D. J., & Demetrovics, Z. (2014). Social networking addiction: An overview of preliminary findings. In K. P. Rosenberg & L. C. Feder (Eds.), Behavioral addictions: Criteria, evidence, and treatment (pp. 119–141). San Diego: Academic Press. doi:10.1016/B978-0-12-407724-9.00006-9
  • Hawdon J, Oksanen A, Räsänen P. Exposure to online hate in four nations: A cross-national consideration. Deviant Behavior. 2017;38(3):254–266. doi:10.1080/01639625.2016.1196985
  • Hindelang MJ, Gottfredson MR, Garofalo J. Victims of personal crime: An empirical foundation for a theory of personal victimization. Ballinger Publishing Co; 1978.
  • Holt TJ, Bossler AM. Examining the applicability of lifestyle-routine activities theory for cybercrime victimization. Deviant Behavior. 2008;30(1):1–25. doi:10.1080/01639620701876577
  • Holt TJ, Bossler AM. An assessment of the current state of cybercrime scholarship. Deviant Behavior. 2014;35(1):20–40. doi:10.1080/01639625.2013.822209
  • Hussain, Z., & Griffiths, M. D. (2018). Problematic social networking site use and comorbid psychiatric disorders: A systematic review of recent large-scale studies. Frontiers in Psychiatry, 9(686). doi:10.3389/fpsyt.2018.00686
  • Jann, B. (2014). Plotting regression coefficients and other estimates. The Stata Journal, 14(4), 708–737. doi:10.1177/1536867X1401400402
  • Karlson, K. B., Holm, A., & Breen, R. (2012). Comparing regression coefficients between same-sample nested models using logit and probit: A new method. Sociological Methodology, 42(1), 286–313. doi:10.1177/0081175012444861
  • Keipi, T., Näsi, M., Oksanen, A., & Räsänen, P. (2016). Online hate and harmful content: Cross-national perspectives. Taylor & Francis. http://library.oapen.org/handle/20.500.12657/22350
  • Kim B, Kim Y. College students’ social media use and communication network heterogeneity: Implications for social capital and subjective well-being. Computers in Human Behavior. 2017;73:620–628. doi:10.1016/j.chb.2017.03.033
  • Kohler, U., Karlson, K. B., & Holm, A. (2011). Comparing coefficients of nested nonlinear probability models. The Stata Journal, 11(3), 420–438. doi:10.1177/1536867X1101100306
  • Koivula A, Kaakinen M, Oksanen A, Räsänen P. The role of political activity in the formation of online identity bubbles. Policy & Internet. 2019;11(4):396–417. doi:10.1002/poi3.211
  • Koivula A, Koiranen I, Saarinen A, Keipi T. Social and ideological representativeness: A comparison of political party members and supporters in Finland after the realignment of major parties. Party Politics. 2020;26(6):807–821. doi:10.1177/1354068818819243
  • Koiranen I, Koivula A, Saarinen A, Keipi T. Ideological motives, digital divides, and political polarization: How do political party preference and values correspond with the political use of social media? Telematics and Informatics. 2020;46:101322. doi:10.1016/j.tele.2019.101322
  • Kross E, Verduyn P, Demiralp E, Park J, Lee DS, Lin N, et al. Facebook use predicts declines in subjective well-being in young adults. PLoS ONE. 2013;8(8):e69841. doi:10.1371/journal.pone.0069841
  • Kross E, Verduyn P, Sheppes G, Costello CK, Jonides J, Ybarra O. Social media and well-being: Pitfalls, progress, and next steps. Trends in Cognitive Sciences. 2020;25(1):55–66. doi:10.1016/j.tics.2020.10.005
  • Kuss D, Griffiths M. Social networking sites and addiction: Ten lessons learned. International Journal of Environmental Research and Public Health. 2017;14(3):311. doi:10.3390/ijerph14030311
  • Leinsalu M, Baburin A, Jasilionis D, Krumins J, Martikainen P, Stickley A. Economic fluctuations and urban-rural differences in educational inequalities in mortality in the Baltic countries and Finland in 2000–2015: A register-based study. International Journal for Equity in Health. 2020;19(1):1–6. doi:10.1186/s12939-020-01347-5
  • Leukfeldt ER, Yar M. Applying routine activity theory to cybercrime: A theoretical and empirical analysis. Deviant Behavior. 2016;37(3):263–280. doi:10.1080/01639625.2015.1012409
  • Longobardi C, Settanni M, Fabris MA, Marengo D. Follow or be followed: Exploring the links between Instagram popularity, social media addiction, cyber victimization, and subjective happiness in Italian adolescents. Children and Youth Services Review. 2020;113:104955. doi:10.1016/j.childyouth.2020.104955
  • Lowry PB, Zhang J, Wang C, Siponen M. Why do adults engage in cyberbullying on social media? An integration of online disinhibition and deindividuation effects with the social structure and social learning model. Information Systems Research. 2016;27(4):962–986. doi:10.1287/isre.2016.0671
  • Lutz C, Hoffmann CP. The dark side of online participation: Exploring non-, passive and negative participation. Information, Communication & Society. 2017;20(6):876–897. doi:10.1080/1369118X.2017.1293129
  • Marcum CD, Higgins GE, Nicholson J. I’m watching you: Cyberstalking behaviors of university students in romantic relationships. American Journal of Criminal Justice. 2017; 42 (2):373–388. doi: 10.1007/s12103-016-9358-2. [ CrossRef ] [ Google Scholar ]
  • Martínez-Ferrer B, Moreno D, Musitu G. Are adolescents engaged in the problematic use of social networking sites more involved in peer aggression and victimization? Frontiers in Psychology. 2018; 9 :801. doi: 10.3389/fpsyg.2018.00801. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Marttila E, Koivula A, Räsänen P. Does excessive social media use decrease subjective well-being? A longitudinal analysis of the relationship between problematic use, loneliness and life satisfaction. Telematics and Informatics. 2021; 59 :101556. doi: 10.1016/j.tele.2020.101556. [ CrossRef ] [ Google Scholar ]
  • Meerkerk GJ, Van Den Eijnden RJJM, Vermulst AA, Garretsen HFL. The Compulsive Internet Use Scale (CIUS): Some psychometric properties. Cyberpsychology and Behavior. 2009; 12 (1):1–6. doi: 10.1089/cpb.2008.0181. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Meshi D, Cotten SR, Bender AR. Problematic social media use and perceived social isolation in older adults: A cross-sectional study. Gerontology. 2020; 66 (2):160–168. doi: 10.1159/000502577. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Meško G. On some aspects of cybercrime and cybervictimization. European Journal of Crime, Criminal Law and Criminal Justice. 2018; 26 (3):189–199. doi: 10.1163/15718174-02603006. [ CrossRef ] [ Google Scholar ]
  • Milani R, Caneppele S, Burkhardt C. Exposure to cyber victimization: Results from a Swiss survey. Deviant Behavior. 2020 doi: 10.1080/01639625.2020.1806453. [ CrossRef ] [ Google Scholar ]
  • Näsi M, Räsänen P, Kaakinen M, Keipi T, Oksanen A. Do routine activities help predict young adults’ online harassment: A multi-nation study. Criminology and Criminal Justice. 2017; 17 (4):418–432. doi: 10.1177/1748895816679866. [ CrossRef ] [ Google Scholar ]
  • Ngo FT, Paternoster R. Cybercrime victimization: An examination of individual and situational level factors. International Journal of Cyber Criminology. 2011; 5 (1):773–793. [ Google Scholar ]
  • Official Statistics of Finland (OSF) (2020). Väestön tieto- ja viestintätekniikan käyttö [online document]. ISSN=2341–8699. 2020, Liitetaulukko 29. Vihamielisten viestien näkeminen, häirinnän kokeminen ja epäasiallisen lähestymisen kohteeksi joutuminen sosiaalisessa mediassa 2020, %-osuus väestöstä. Helsinki: Tilastokeskus. Available at: http://www.stat.fi/til/sutivi/2020/sutivi_2020_2020-11-10_tau_029_fi.html
  • Pang H. How does time spent on WeChat bolster subjective well-being through social integration and social capital? Telematics and Informatics. 2018; 35 (8):2147–2156. doi: 10.1016/j.tele.2018.07.015. [ CrossRef ] [ Google Scholar ]
  • Pratt TC, Turanovic JJ. Lifestyle and routine activity theories revisited: The importance of “risk” to the study of victimization. Victims & Offenders. 2016; 11 (3):335–354. doi: 10.1080/15564886.2015.1057351. [ CrossRef ] [ Google Scholar ]
  • Reep-van den Bergh CMM, Junger M. Victims of cybercrime in Europe: A review of victim surveys. Crime Science. 2018; 7 (1):1–15. doi: 10.1186/s40163-018-0079-3. [ CrossRef ] [ Google Scholar ]
  • Reyns BW, Henson B, Fisher BS. Being pursued online. Criminal Justice and Behavior. 2011; 38 (11):1149–1169. doi: 10.1177/0093854811421448. [ CrossRef ] [ Google Scholar ]
  • Räsänen P, Hawdon J, Holkeri E, Keipi T, Näsi M, Oksanen A. Targets of online hate: Examining determinants of victimization among young Finnish Facebook users. Violence and Victims. 2016; 31 (4):708–725. doi: 10.1891/0886-6708.vv-d-14-00079. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schunck, R., & Perales, F. (2017). Within- and between-cluster effects in generalized linear mixed models: A discussion of approaches and the xthybrid command. The Stata Journal , 17(1), 89–115. 10.1177%2F1536867X1701700106
  • Shensa A, Escobar-Viera CG, Sidani JE, Bowman ND, Marshal MP, Primack BA. Problematic social media use and depressive symptoms among U.S. young adults: A nationally-representative study. Social Science and Medicine. 2017; 182 :150–157. doi: 10.1016/j.socscimed.2017.03.061. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sivonen, J., Kuusela, A., Koivula, A., Saarinen, A., & Keipi, T. (2019). Working papers in economic sociology: Research Report on Finland in the Digital Age Round 2 Panel-survey . Turku.
  • Wagner M. Affective polarization in multiparty systems. Electoral Studies. 2021; 69 :102199. doi: 10.1016/j.electstud.2020.102199. [ CrossRef ] [ Google Scholar ]
  • Vakhitova ZI, Alston-Knox CL, Reynald DM, Townsley MK, Webster JL. Lifestyles and routine activities: Do they enable different types of cyber abuse? Computers in Human Behavior. 2019; 101 :225–237. doi: 10.1016/j.chb.2019.07.012. [ CrossRef ] [ Google Scholar ]
  • Vakhitova ZI, Reynald DM, Townsley M. Toward the adaptation of routine activity and lifestyle exposure theories to account for cyber abuse victimization. Journal of Contemporary Criminal Justice. 2016; 32 (2):169–188. doi: 10.1177/1043986215621379. [ CrossRef ] [ Google Scholar ]
  • Valenzuela S, Park N, Kee KF. Is there social capital in a social network site?: Facebook use and college student’s life satisfaction, trust, and participation. Journal of Computer-Mediated Communication. 2009; 14 (4):875–901. doi: 10.1111/j.1083-6101.2009.01474.x. [ CrossRef ] [ Google Scholar ]
  • Van Dijk JA, Hacker KL. Internet and democracy in the network society. Routledge. 2018 doi: 10.4324/9781351110716. [ CrossRef ] [ Google Scholar ]
  • Verduyn P, Ybarra O, Résibois M, Jonides J, Kross E. Do social network sites enhance or undermine subjective well-being? A critical review. Social Issues and Policy Review. 2017; 11 (1):274–302. doi: 10.1111/sipr.12033. [ CrossRef ] [ Google Scholar ]
  • Wheatley D, Buglass SL. Social network engagement and subjective well-being: A life-course perspective. The British Journal of Sociology. 2019; 70 (5):1971–1995. doi: 10.1111/1468-4446.12644. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Yar M. The novelty of ‘Cybercrime’ European Journal of Criminology. 2005; 2 (4):407–427. doi: 10.1177/147737080556056. [ CrossRef ] [ Google Scholar ]
  • Yar, M., & Steinmetz, K. F. (2019). Cybercrime and society . SAGE Publications Limited.


Millions of American trucks face cybersecurity exposure

New worker safety regulations meant to log how many hours truckers are on the road may have inadvertently exposed millions of U.S. 18-wheelers to hackers who could take control of entire fleets of vehicles, according to a new Colorado State University paper.

Jake Jepson, co-author and graduate research assistant at Colorado State University, said it's important to create guard rails as the nation's transportation networks, power grids, water systems and other critical infrastructure move online.

"Each year those systems that never used to be connected to the internet or have any wireless connections are becoming more and more connected," he said. "And that can introduce vulnerabilities."

C.S.U. researchers found the cybersecurity gaps in electronic logging devices, which track a host of data required for inspections. The devices are connected to the vehicle's control systems, and are not currently required to carry cybersecurity precautions. In one example, the paper shows how hackers can manipulate trucks wirelessly and force them to pull over.

Jeremy Daily, a C.S.U. associate professor, said students located the gaps by reverse-engineering one of the devices, which are produced by third-party vendors. He noted that adding electronics that do not go through a typical manufacturer's design process can introduce new vulnerabilities.

"When regulators are introducing new requirements, they have to be aware of the cyber security implications," he explained.

Daily estimates that more than 14 million medium- and heavy-duty trucks, which form the core of the U.S. shipping sector, may have been exposed. He says the paper's findings can help device vendors fix the problem.

"The happy ending of this story is that we have worked with the vendor, and they have come up with a patch to the problem," he continued. "And so, it's important for the truckers and the people that have these devices to pay attention to those software update recommendations when they come out."

Colorado News Connection



Cybercrime Victimization and Problematic Social Media Use: Findings from a Nationally Representative Panel Study

  • Open access
  • Published: 25 November 2021
  • Volume 46, pages 862–881 (2021)

  • Eetu Marttila,
  • Aki Koivula &
  • Pekka Räsänen


According to criminological research, online environments create new possibilities for criminal activity and deviant behavior. Problematic social media use (PSMU) is a habitual pattern of excessive use of social media platforms. Past research has suggested that PSMU predicts risky online behavior and negative life outcomes, but the relationship between PSMU and cybercrime victimization is not properly understood. In this study, we use the framework of routine activity theory (RAT) and lifestyle-exposure theory (LET) to examine the relationship between PSMU and cybercrime victimization. We analyze how PSMU is linked to cybercrime victimization experiences. We explore how PSMU predicts cybercrime victimization, especially under those risky circumstances that generally increase the probability of victimization. Our data come from nationally representative surveys, collected in Finland in 2017 and 2019. The results of the between-subjects tests show that PSMU correlates relatively strongly with cybercrime victimization. Within-subjects analysis shows that increased PSMU increases the risk of victimization. Overall, the findings indicate that, along with various confounding factors, PSMU has a notable cumulative effect on victimization. The article concludes with a short summary and discussion of the possible avenues for future research on PSMU and cybercrime victimization.


Introduction

In criminology, digital environments are generally understood as social spaces that open new possibilities for criminal activity and crime victimization (Yar, 2005 ). Over the past decade, social media platforms have established themselves as the basic digital infrastructure that governs daily interactions. The rapid and widespread adoption of social media technologies has raised concerns about possible negative effects, but the association between social media use and decreased wellbeing measures appears to be rather weak (Appel et al., 2020 ; Kross et al., 2020 ). Accordingly, researchers have proposed that the outcomes of social media use depend on the way platforms are used, and that the negative outcomes are concentrated among those who experience excessive social media use (Kross et al., 2020 ; Wheatley & Buglass, 2019 ). Whereas an extensive body of research has focused either on cybercrime victimization or on problematic social media use, few studies have focused explicitly on the link between problematic use and victimization experiences (e.g., Craig et al., 2020 ; Longobardi et al., 2020 ).

According to earlier research, the notion of problematic use is linked to excessive and uncontrollable social media usage, which is characterized by compulsive and routinized thoughts and behavior (e.g., Kuss & Griffiths, 2017 ). The most frequently used social scientific and criminological accounts of risk factors of victimization are based on routine activity theory (RAT) (Cohen & Felson, 1979 ) and lifestyle-exposure theory (LET) (Hindelang et al., 1978 ). Although RAT and LET were originally developed to understand how routines and lifestyle patterns may lead to victimization in physical spaces, they have also been applied to online environments (e.g., Milani et al., 2020 ; Räsänen et al., 2016 ).

As theoretical frameworks, RAT and LET presume that lifestyles and routine activities are embedded in social contexts, which makes it possible to understand behaviors and processes that lead to victimization. The excessive use of social media platforms increases the time spent in digital environments, which, according to lifestyle and routine activities theories, tends to increase the likelihood of ending up in dangerous situations. Therefore, we presume that problematic use is a particularly dangerous pattern of use, which may increase the risk of cybercrime victimization.

In this study, we employ the key elements of RAT and LET to focus on the relationship between problematic social media use and cybercrime victimization. Our data come from high quality, two-wave longitudinal population surveys, which were collected in Finland in 2017 and 2019. First, we examine the cross-sectional relationship between problematic use and victimization experiences at Wave 1, considering the indirect effect of confounding factors. Second, we test for longitudinal effects by investigating whether increased problematic use predicts an increase in victimization experiences at Wave 2.

Literature Review

Problematic Social Media Use

Over the last few years, the literature on the psychological, cultural, and social effects of social media has proliferated. Prior research on the topic presents a nuanced view of social media and its consequences (Kross et al., 2020 ). For instance, several studies have demonstrated that social media use may produce positive outcomes, such as increased life satisfaction, social trust, and political participation (Kim & Kim, 2017 ; Valenzuela et al., 2009 ). The positive effects are typically explained to follow from use that satisfy individuals’ socioemotional needs, such as sharing emotions and receiving social support on social media platforms (Pang, 2018 ; Verduyn et al., 2017 ).

However, another line of research associates social media use with several negative effects, including higher stress levels, increased anxiety and lower self-esteem (Kross et al., 2020 ). Negative outcomes, such as depression (Shensa et al., 2017 ), decreased subjective well-being (Wheatley & Buglass, 2019 ) and increased loneliness (Meshi et al., 2020 ), are also commonly described in the research literature. The most common mechanisms that are used to explain negative outcomes of social media use are social comparison and fear of missing out (Kross et al., 2020 ). In general, it appears that the type of use that does not facilitate interpersonal connection is more detrimental to users’ health and well-being (Clark et al., 2018 ).

Even though the earlier research on the subject has produced somewhat contradictory results, the researchers generally agree that certain groups of users are at more risk of experiencing negative outcomes of social media use. More specifically, the researchers have pointed out that there is a group of individuals who have difficulty controlling the quantity and intensity of their use of social media platforms (Kuss & Griffiths, 2017 ). Consequently, new concepts, such as problematic social media use (Bányai et al., 2017 ) and social networking addiction (Griffiths et al., 2014 ) have been developed to assess excessive use. In this research, we utilize the concept of problematic social media use (PSMU), which is applied broadly in the literature. In contrast to evidence of social media use in general, PSMU consistently predicts negative outcomes in several domains of life, including decreased subjective well-being (Kross et al., 2013 ; Wheatley & Buglass, 2019 ), depression (Hussain & Griffiths, 2018 ), and loneliness (Marttila et al., 2021 ).

To our knowledge, few studies have focused explicitly on the relationship between PSMU and cybercrime victimization. One cross-national study of young people found that PSMU is consistently and strongly associated with cyberbullying victimization across countries (Craig et al., 2020 ) and another one of Spanish adolescents returned similar results (Martínez-Ferrer et al., 2018 ). Another study of Italian adolescents found that an individual’s number of followers on Instagram was positively associated with experiences of cybervictimization (Longobardi et al., 2020 ). A clear limitation of the earlier studies is that they focused on adolescents and often dealt with cyberbullying or harassment. Therefore, the results are not straightforwardly generalizable to adult populations or to other forms of cybercrime victimization. Despite this, there are certain basic assumptions about cybercrime victimization that must be considered.

Cybercrime Victimization, Routine Activity, and Lifestyle-Exposure Theories

In criminology, the notion of cybercrime is used to refer to a variety of illegal activities that are performed in online networks and platforms through computers and other devices (Yar & Steinmetz, 2019 ). As a concept, cybercrime is employed in different levels of analysis and used to describe a plethora of criminal phenomena, ranging from individual-level victimization to large-scale, society-wide operations (Donalds & Osei-Bryson, 2019 ). In this study, we define cybercrime as illegal activity and harm to others conducted online, and we focus on self-reported experiences of cybercrime victimization. Therefore, we do not address whether respondents reported an actual crime victimization to the authorities.

In Finland and other European countries, the most common types of cybercrime include slander, hacking, malware, online fraud, and cyberbullying (see Europol, 2019 ; Meško, 2018 ). Providing exact estimates of cybercrime victims has been a challenge for previous criminological research, but 1 to 15 percent of the European population is estimated to have experienced some sort of cybercrime victimization (Reep-van den Bergh & Junger, 2018 ). Similarly, it is difficult to give a precise estimate of the prevalence of social media-related criminal activity. However, as a growing proportion of digital interactions are mediated by social media platforms, we can expect that cybercrime victimization on social media is also increasing. According to previous research, identity theft (Reyns et al., 2011 ), cyberbullying (Lowry et al., 2016 ), hate speech (Räsänen et al., 2016 ), and stalking (Marcum et al., 2017 ) are all regularly implemented on social media. Most of the preceding studies have focused on cybervictimization of teenagers and young adults, which are considered the most vulnerable population segments (e.g., Hawdon et al., 2017 ; Keipi et al.,  2016 ).

One of the most frequently used conceptual frameworks to explain victimization is routine activity theory (RAT) (Cohen & Felson, 1979 ). RAT claims that the everyday routines of social actors place individuals at risk for victimization by exposing them to dangerous people, places, and situations. The theory posits that a crime is more likely to occur when a motivated offender, a suitable target, and a lack of capable guardians converge in space and time (Cohen & Felson, 1979 ). RAT is similar to lifestyle-exposure theory (LET), which aims to understand the ways in which lifestyle patterns in the social context allow different forms of victimization (Hindelang et al., 1978 ).

In this study, we build our approach on a combination of RAT and LET in order to examine risk-enhancing behaviors and characteristics fostered by the online environment. Together, these theories take the existence of motivated offenders for granted and therefore do not attempt to explain their involvement in crime. Instead, we concentrate on how routine activities and lifestyle patterns, together with the absence of a capable guardian, affect the probability of victimization.

Numerous studies have investigated the applicability of LET and RAT for cybercrime victimization (e.g., Holt & Bossler, 2008, 2014; Leukfeldt & Yar, 2016; Näsi et al., 2017; Vakhitova et al., 2016, 2019; Yar, 2005). The results indicate that different theoretical concepts are operationalizable in online environments to varying degrees, and that some operationalizations are more helpful than others (Näsi et al., 2017). For example, the concept of risk exposure is considered to be compatible with online victimization, even though earlier studies have shown a high level of variation in how risk exposure is measured (Vakhitova et al., 2016). By contrast, target attractiveness and lack of guardianship are generally considered to be more difficult to operationalize in the context of technology-mediated victimization (Leukfeldt & Yar, 2016).

In the next section, we will take a closer look at how the key theoretical concepts of LET and RAT have been operationalized in earlier studies on cybervictimization. Here, we focus solely on factors that we can address empirically with our data. Each of these has been successfully applied to online environments in prior studies (e.g., Hawdon et al., 2017; Keipi et al., 2016).

Confounding Elements of Lifestyle and Routine Activities Theories and Cybercrime Victimization

Exposure to Risk

The first contextual component of RAT/LET addresses the general likelihood of experiencing risk situations. Risk exposure has typically been measured by the amount of time spent online or the quantity of different online activities – the hours spent online, the number of online accounts, the use of social media services (Hawdon et al., 2017 ; Vakhitova et al., 2019 ). The studies that have tested the association have returned mixed results, and it seems that simply the time spent online does not predict increased victimization (e.g., Ngo & Paternoster, 2011 ; Reyns et al., 2011 ). On the other hand, the use of social media platforms (Bossler et al., 2012 ; Räsänen et al., 2016 ) and the number of accounts in social networks are associated with increased victimization (Reyns et al., 2011 ).

Regarding the association between the risk of exposure and victimization experiences, previous research has suggested that specific online activities may increase the likelihood of cybervictimization. For example, interaction with other users is associated with increased victimization experiences, whereas passive use may protect from cybervictimization (Holt & Bossler, 2008 ; Ngo & Paternoster, 2011 ; Vakhitova et al., 2019 ). In addition, we assume that especially active social media use, such as connecting with new people, is a risk factor and should be taken into account by measuring the proximity to offenders in social media.

Proximity to Offenders

The second contextual component of RAT/LET is closeness to the possible perpetrators. Previously, proximity to offenders was typically measured by the amount of self-disclosure in online environments, such as the number of followers on social media platforms (Vakhitova et al., 2019). Again, earlier studies have returned inconsistent results, and proximity to offenders has mixed effects on the risk of victimization. For example, the number of online friends does not predict increased risk of cybercrime victimization (Näsi et al., 2017; Räsänen et al., 2016; Reyns et al., 2011). By contrast, a high number of social media followers (Longobardi et al., 2020) and online self-disclosures are associated with higher risk of victimization (Vakhitova et al., 2019).

As in the case of risk exposure, different operationalizations of proximity to offenders may predict victimization more strongly than others. For instance, compared to interacting with friends and family, contacting strangers online may be much riskier (Vakhitova et al., 2016). Earlier studies support this notion, and allowing strangers to acquire sensitive information about oneself, as well as frequent contact with strangers on social media, predict increased risk for cybervictimization (Craig et al., 2020; Reyns et al., 2011). Also, compulsive online behavior is associated with a higher probability of meeting strangers online (Gámez-Guadix et al., 2016), and we assume that PSMU may be associated with victimization indirectly through contacting strangers.

Target Attractiveness

The third contextual element of RAT/LET considers the fact that victimization is more likely among those who share certain individual and behavioral traits. Such traits can be seen to increase attractiveness to offenders and thereby increase the likelihood of experiencing risk situations. Earlier studies on cybercrime victimization have utilized a wide selection of measures to operationalize target attractiveness, including gender and ethnic background (Näsi et al., 2017 ), browsing risky content (Räsänen et al., 2016 ), financial status (Leukfeldt & Yar, 2016 ) or relationship status, and sexual orientation (Reyns et al., 2011 ).

In general, these operationalizations do not seem to predict victimization reliably or effectively. Despite this, we suggest that certain operationalizations of target attractiveness may be valuable. Past research on the different uses of social media has suggested that provocative language or expressions of ideological points of view can increase victimization. More specifically, political activity is a typical behavioral trait that tends to provoke reactions in online discussions (e.g., Lutz & Hoffmann, 2017). In studies of cybervictimization, online political activity is associated with increased victimization (Vakhitova et al., 2019). Recent studies have also emphasized how social media have given rise to and even intensified political polarization (van Dijk & Hacker, 2018).

In Finland, the main division has been drawn between the supporters of the populist right-wing party, the Finns, and the supporters of the Green League and the Left Alliance (Koiranen et al., 2020 ). However, it is noteworthy that Finland has a multi-party system based on socioeconomic cleavages represented by traditional parties, such as the Social Democratic Party of Finland, the National Coalition Party, and the Center Party (Koivula et al., 2020 ). Indeed, previous research has shown that there is relatively little affective polarization in Finland (Wagner, 2021 ). Therefore, in the Finnish context it is unlikely that individuals would experience large-scale victimization based on their party preference.

Lack of Guardianship

The fourth element of RAT/LET assesses the role of social and physical guardianship against harmful activity. The lack of guardianship is assumed to increase victimization and, conversely, the presence of capable guardianship to decrease the likelihood of victimization (Yar, 2005). In studies of online activities and routines, different measures of guardianship have rarely acted as predictors of victimization experiences (Leukfeldt & Yar, 2016; Vakhitova et al., 2016).

Regarding social guardianship, measures such as respondents’ digital skills and online risk awareness have been used, but with non-significant results (Leukfeldt & Yar, 2016 ). On the other hand, past research has indicated that victims of cyber abuse in general are less social than non-victims, which indicates that social networks may protect users from abuse online (Vakhitova et al., 2019 ). Also, younger users, females, and users with low educational qualifications are assumed to have weaker social guardianship against victimization and therefore are in more vulnerable positions (e.g., Keipi et al., 2016 ; Pratt & Turanovic, 2016 ).

In terms of physical guardianship, several technical measures, such as the use of firewalls and virus scanners, have been utilized in past research (Leukfeldt & Yar, 2016). In a general sense, technical security tools function as external settings for online interactions, much as lighting increases the identifiability of an aggressor in the dark. Preceding studies, however, have found no significant connection between technical guardianship and victimization (Vakhitova et al., 2016). Consequently, we decided not to address technical guardianship in this study.

Based on the preceding research findings discussed above, we stated the following two hypotheses:

H1: Increased PSMU is associated with increased cybercrime victimization.

H2: The association between PSMU and cybercrime victimization is confounded by factors assessing exposure to risk, proximity to offenders, target attractiveness, and lack of guardianship.

Research Design

Our aim was to analyze how problematic use of social media is linked to cybercrime victimization experiences. According to RAT and LET, cybercrime victimization relates to how individuals’ lifestyles expose them to circumstances that increase the probability of victimization (Hindelang et al., 1978 ) and how individuals behave in different risky environments (Engström, 2020 ). Our main premise is that PSMU exposes users more frequently to environments that increase the likelihood of victimization experiences.

We constructed our research in two separate stages on the basis of the two-wave panel setting. In the first stage, we approached the relationship between PSMU and cybercrime victimization cross-sectionally by using a large and representative sample of the Finnish population aged 18–74. We also analyzed the extent to which the relationship between PSMU and cybercrime victimization was related to the confounders. In the second stage of analysis, we paid more attention to longitudinal effects and tested for the panel effects, examining changes in cybercrime victimization in relation to changes in PSMU.

Participants

We utilized two-wave panel data that were derived from the first and second rounds of the Digital Age in Finland survey. The cross-sectional study was based on the first round of the survey, organized in December 2017, with a total of 3,724 Finnish respondents. In this sample, two-thirds of the respondents were randomly sampled from the Finnish population register, and one-third were supplemented from a demographically balanced online respondent pool organized by Taloustutkimus Inc. We analyzed social media users (N = 2,991), who accounted for 77% of the original data. The data over-represented older citizens, which is why post-stratifying weights were applied to correspond with the official population distribution of Finns aged 18–74 (Sivonen et al., 2019).

To form a longitudinal setting, respondents were asked whether they were willing to participate in the survey a second time about a year after the first data collection. A total of 1,708 participants expressed willingness to participate in the follow-up survey that was conducted 15 months after the first round, in March 2019. A total of 1,134 people participated in the follow-up survey, comprising a response rate of 67% in the second round.

The questionnaire was essentially the same in both rounds of data collection.

The final two-wave data used in the second stage of analysis mirrored the population in terms of gender (males 50.8%) and age (M = 49.9, SD = 16.2) structures. However, the data were unrepresentative in terms of education and employment status when compared to the Finnish population: tertiary-level education had been attained by 44.5% of participants and only 50.5% of respondents were employed. The data report published online provides a more detailed description of the data collection and its representativeness (Sivonen et al., 2019).

Our dependent variable measured whether the participants had been a target of cybercrime. Cybercrime was measured with five dichotomous questions inquiring whether the respondent had personally: 1) been targeted by a threat or attack on social media, 2) been falsely accused online, 3) been targeted with hateful or degrading material on the Internet, 4) experienced sexual harassment on social media, and 5) had an online account stolen (see Footnote 1 for the exact wording). In the first round, 159 respondents (14.0%) reported that they had been a victim of cybercrime. In the second round, the number of victimization experiences increased by about 6 percentage points, as a further 71 respondents had experienced victimization during the observation period.
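Constructing the dependent variable amounts to collapsing the five dichotomous items into a single indicator. The short Python sketch below illustrates this step; the item and file names are hypothetical, and the authors' actual analysis was carried out in Stata.

    import pandas as pd

    # Hypothetical column names for the five dichotomous victimization items (1 = yes, 0 = no).
    items = ["threat_attack", "false_accusation", "hateful_material",
             "sexual_harassment", "account_theft"]

    df = pd.read_csv("digital_age_wave1.csv")  # hypothetical file name

    # A respondent counts as a cybercrime victim if any of the five items is endorsed.
    df["victim"] = (df[items] == 1).any(axis=1).astype(int)

    print(df["victim"].value_counts(normalize=True))  # share of victims in the sample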

Our main independent variable was problematic social media use (PSMU). Initially, participants’ problematic and excessive social media usage was measured through an adaptation of the Compulsive Internet Use Scale (CIUS), which consists of 14 items ratable on a 5-point Likert scale (Meerkerk et al., 2009). Our measure included five items on a 4-point scale scored from 1 (never) to 4 (daily) based on how often respondents: 1) “Have difficulties with stopping social media use,” 2) “Have been told by others you should use social media less,” 3) “Have left important work, school or family related things undone due to social media use,” 4) “Use social media to alleviate feeling bad or stress,” and 5) “Plan social media use beforehand.”

For our analysis, all five items were used to create a new three-level variable assessing respondents’ PSMU at different intensity levels. First, if the respondent experienced at least one of the signs of problematic use daily or weekly, PSMU was coded as at least weekly. Second, if the respondent experienced at least one of the signs less than weekly, PSMU was coded as occasionally. Finally, if the respondent did not experience any signs of problematic use, PSMU was coded as none.
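As a rough illustration of this three-level recode (not the authors' code), the rules above can be expressed as follows in Python, with hypothetical item names and the items scored 1 (never) to 4 (daily):

    import pandas as pd

    # Hypothetical names for the five CIUS-based items, each scored 1 = never ... 4 = daily.
    cius = ["difficulty_stopping", "told_to_use_less", "neglected_duties",
            "use_to_relieve_stress", "plan_use_beforehand"]

    def psmu_level(row):
        if (row[cius] >= 3).any():    # at least one sign reported daily (4) or weekly (3)
            return "at least weekly"
        if (row[cius] == 2).any():    # at least one sign reported, but less than weekly
            return "occasionally"
        return "none"                 # no signs of problematic use

    df = pd.read_csv("digital_age_wave1.csv")  # hypothetical file name
    df["psmu"] = df.apply(psmu_level, axis=1)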

To find reliable estimates for the effects of PSMU, we controlled for general social media use, including respondents’ activity on social networking sites and instant messenger applications. We combined two items to create a new four-level variable measuring respondents’ social media use (SMU). If a respondent reported using either social media platforms (e.g., Facebook, Twitter), instant messengers (e.g., WhatsApp, Facebook Messenger), or both for many hours per day, we coded their activity as high. We coded activity as medium if respondents reported using social media daily. Third, we coded activity as low for those respondents who reported using social media only on a weekly basis. Finally, we considered activity as very low if respondents reported using platforms or instant messengers less than weekly.

Confounding variables were related to participants’ target attractiveness, proximity to offenders, and potential guardianship factors.

Target attractiveness was measured by online political activity. Following previous studies (Koiranen et al., 2020; Koivula et al., 2019), we formed the variable based on four single items: following political discussions, participating in political discussions, sharing political content, and creating political content. Participants’ activity was initially determined by means of a 5-point scale (1 = Never, 2 = Sometimes, 3 = Weekly, 4 = Daily, and 5 = Many times per day). For analysis purposes, we first separated “politically inactive” users, who reported never using social media for political activities. Second, we coded as “followers” participants who only followed but never participated in political discussions on social media. Third, we classified as “occasional participants” those who at least sometimes participated in political activities on social media. Finally, those participants who used social media at least weekly to participate in political activities were classified as “active participants.”

Proximity to offenders was considered by analyzing contact with strangers on social media. Initially, the question asked the extent to which respondents were in contact with strangers on social media, evaluated on a 5-point interval scale from 1 (Not at all) to 5 (Very much). For the analysis, we merged response options 1 and 2 to form value 1, and options 4 and 5 to form value 3. Consequently, we used a three-level variable to measure respondents’ tendency to contact strangers on social media, in which 1 = Low, 2 = Medium, and 3 = High intensity.

Lack of guardianship was measured by gender (1 = Male, 2 = Female), age (in years), level of education, and main activity. While these variables could also be placed under target attractiveness, we placed them here because the background characteristics they measure are often invisible in online environments and exist only in terms of expressed behavior (e.g., Keipi et al., 2016). For statistical analysis, we classified education and main activity into binary variables. Education was measured with a binary variable indicating whether the respondent had achieved at least tertiary-level education. The dichotomization can be justified by the relatively high educational levels in Finland, where tertiary education is often considered the cut-off point between educated and non-educated citizens (Leinsalu et al., 2020). Main activity was measured with a binary variable that differentiated unemployed respondents from others (working, retired, and full-time students). Regarding the lack of guardianship, unemployed people are less likely to be connected to the informal peer networks found at workplaces or educational establishments, something that also applies to many senior citizens. Descriptive statistics for all measurements are provided in Table 1.
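The confounder and control recodes described above are similarly mechanical. The sketch below shows one plausible way to implement them; all variable names, category labels, scale codings, and cut-offs are illustrative assumptions rather than the authors' actual coding.

    import pandas as pd

    df = pd.read_csv("digital_age_wave1.csv")  # hypothetical file name

    # General social media use: take the higher of the two frequency items
    # (assumed coding: 1 = less than weekly, 2 = weekly, 3 = daily, 4 = many hours per day).
    freq = df[["sns_use", "messenger_use"]].max(axis=1)
    df["smu"] = freq.map({1: "very low", 2: "low", 3: "medium", 4: "high"})

    # Online political activity, simplified here to two of the four items
    # (both scored 1 = never, 2 = sometimes, 3 = weekly, 4 = daily, 5 = many times per day).
    def political_group(row):
        if row["participate_politics"] >= 3:   # participates at least weekly
            return "active participant"
        if row["participate_politics"] == 2:   # participates sometimes
            return "occasional participant"
        if row["follow_politics"] >= 2:        # only follows political discussions
            return "follower"
        return "inactive"

    df["political_activity"] = df.apply(political_group, axis=1)

    # Contact with strangers: merge scale points 1-2 into "low" and 4-5 into "high".
    df["strangers"] = df["contact_strangers"].map(
        {1: "low", 2: "low", 3: "medium", 4: "high", 5: "high"})

    # Binary guardianship indicators (category labels are assumptions).
    df["tertiary_edu"] = df["education"].isin(["polytechnic", "university"]).astype(int)
    df["unemployed"] = (df["main_activity"] == "unemployed").astype(int)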

Analytic Techniques

The analyses were performed in two stages with Stata 16. In the cross-sectional approach, we analyzed the direct and indirect associations between PSMU and cybercrime victimization. We reported average marginal effects and their standard errors with statistical significances (Table 2). The main effect of PSMU was illustrated in Fig. 1 using the user-written coefplot package (Jann, 2014).

Figure 1. Likelihood of cybercrime victimization according to the level of problematic social media use. Predicted probabilities with 95% confidence intervals.
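Average marginal effects of this kind can be reproduced in most statistical environments. The authors worked with Stata's logit, margins, and coefplot; as a rough, non-authoritative equivalent, the Python/statsmodels sketch below fits a comparable logit model and reports AMEs, using the hypothetical variable names from the earlier sketches.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("digital_age_wave1.csv")  # hypothetical file name

    # Logit model loosely mirroring the full specification (M5); variable names are assumptions.
    model = smf.logit(
        "victim ~ C(psmu) + C(smu) + C(strangers) + C(political_activity)"
        " + age + C(gender) + tertiary_edu + unemployed",
        data=df,
    ).fit(disp=0)

    # Average marginal effects: sample averages of the per-observation effects on P(victim = 1).
    print(model.get_margeff(at="overall", method="dydx").summary())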

When establishing the indirect effects, we used the KHB method developed by Karlson et al. (2012) and employed the khb command in Stata (Kohler et al., 2011). The KHB method decomposes the total effect of an independent variable into direct and indirect components via confounding/mediating variables (Karlson et al., 2012). Based on the decomposition analysis, we reported logit coefficients for the total, direct, and indirect effects with statistical significances and confounding percentages (Table 3).
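The intuition behind the KHB decomposition can be illustrated with a single confounder. The confounder is first residualized on the key variable; a logit including that residual yields the total effect of the key variable on the same scale as the full model, and the gap between the total and direct effects is the indirect (confounded) part. The Python sketch below is only a schematic rendering of the method the authors ran with Stata's khb command, with hypothetical variable names and PSMU treated as a numeric 0–2 score.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("digital_age_wave1.csv")  # hypothetical file; psmu assumed numeric (0-2)

    # Step 1: residualize one confounder (here, age) on the key variable.
    df["age_resid"] = smf.ols("age ~ psmu", data=df).fit().resid

    # Step 2: "reduced" model with the residualized confounder -> total effect of PSMU.
    total = smf.logit("victim ~ psmu + age_resid", data=df).fit(disp=0).params["psmu"]

    # Step 3: full model with the raw confounder -> direct effect of PSMU.
    direct = smf.logit("victim ~ psmu + age", data=df).fit(disp=0).params["psmu"]

    indirect = total - direct
    print(f"total={total:.3f}  direct={direct:.3f}  indirect={indirect:.3f}  "
          f"confounding %={100 * indirect / total:.1f}")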

In the second stage, we analyzed the panel effects. We used hybrid mixed models to distinguish between-person effects from within-person effects of the time-varying factors, and predicted changes in cybercrime victimization with respect to changes in problematic social media use. We also tested how the relationship between cybercrime victimization and the other time-varying variables changed over the observation period. The hybrid models were estimated using the xthybrid command (Schunck & Perales, 2017).
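The within-between ("hybrid") logic can likewise be sketched outside Stata: each time-varying predictor is split into a person mean (between-person component) and a wave-specific deviation from that mean (within-person component). The sketch below uses a linear random-intercept model purely for illustration; the authors estimated a logit hybrid model with xthybrid, and the file and column names here are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Long-format panel: one row per person-wave; file and column names are hypothetical.
    panel = pd.read_csv("digital_age_panel.csv")

    # Decompose PSMU into a between-person mean and a within-person deviation.
    panel["psmu_between"] = panel.groupby("person_id")["psmu"].transform("mean")
    panel["psmu_within"] = panel["psmu"] - panel["psmu_between"]

    # Random-intercept (linear probability) model with separate within and between effects.
    model = smf.mixedlm("victim ~ psmu_within + psmu_between",
                        data=panel, groups=panel["person_id"]).fit()
    print(model.summary())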

The results for our first hypothesis are presented in Fig.  1 . The likelihood of becoming a victim of cybercrime increased significantly as PSMU increased. Respondents who reported problematic use on a daily basis experienced cybercrime with a probability of more than 40%. The probability of becoming a victim was also high, 30%, if problematic use occurred weekly.

The models predicting cybercrime victimization are shown in Table 2 . In the first model (M1), PSMU significantly predicted the risk of victimization if a participant reported even occasional problematic use (AME 0.06; p  < 0.001). If the respondent reported problematic use weekly (AME 0.17; p  < 0.001) or daily (AME 0.33; p  < 0.001), his or her probability of becoming a victim was significantly higher.

The next three models (M2–M4) were constructed on the basis of variables measuring risk exposure, proximity to offenders, and target attractiveness. The second model (M2) indicates that highly intensive social media use (AME 0.19, p < 0.001) was related to cybercrime victimization. The third model (M3) shows that those who reported low intensity of meeting strangers online had a lower probability of being victims (AME -0.11, p < 0.001) and those who reported high intensity had a higher probability (AME 0.12, p < 0.05). Finally, the fourth model (M4) suggests that political activity was related to victimization: those who reported participating occasionally (AME 0.07, p < 0.01) and actively (AME 0.14, p < 0.001) had a higher probability of being a victim.

Next, we evaluated how different guardianship factors were related to victimization. The fifth model (M5) indicates that age, gender, and economic activity were identified as significant protective factors. According to the results, older (AME -0.01, p < 0.001) and male (AME -0.04, p < 0.001) participants were less likely to be targets of cybercrime. Interestingly, neither higher education nor unemployment was related to victimization. Finally, the fifth model also suggests that the effect of PSMU remained significant even after controlling for confounding and control variables.

We decomposed the fifth model to determine how different confounding and control variables affected the relationship between PSMU and victimization. The results of the decomposition analysis are shown in Table 3. First, the confounding factors jointly and significantly influenced the association between PSMU and victimization (B = 0.38, p < 0.001), which means that the confounding percentage of the background factors was 58.7%. However, the direct effect of PSMU remained significant (B = 0.27, p < 0.001). Age was the most significant factor in the association between PSMU and victimization (B = 0.14, p < 0.001), explaining 36% of the total confounding percentage. Political activity was also a major contributing factor (B = 0.12, p < 0.001) that explained 31.2% of the total confounding percentage. The analysis also revealed that meeting strangers online significantly confounded the relationship between PSMU and victimization (B = 0.7, p < 0.001).
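As a quick arithmetic check on these quantities, the KHB identity that the total effect is the sum of the direct and indirect parts gives:

    total effect   = direct + indirect = 0.27 + 0.38 = 0.65
    confounding %  = indirect / total  = 0.38 / 0.65 ≈ 58.5%   (reported: 58.7%; the gap reflects rounding)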

In the second stage, we examined the longitudinal effects of PSMU on cybercrime victimization using panel data from Finnish social media users. We focused on factors that vary in the short term, which is why we also analyzed the temporal effects of SMU, contacting strangers online, and online political activity on victimization. Demographic factors that did not change over time, or whose temporal variability did not vary across clusters (such as age), were not considered in the second stage.

Table 4 shows the hybrid models predicting each variable separately. The within-person effects revealed that increased PSMU increased an individual's probability of being victimized during the observation period (B = 0.77, p = 0.02). Moreover, the between-person effect of PSMU was significant (B = 2.00, p < 0.001), indicating that higher levels of PSMU were related to a higher propensity to be victimized over the observation period.

We could not find significant within-subject effects for the other factors. However, the between-person effects indicated that SMU (B = 2.00, p < 0.001), low intensity of meeting strangers online (B = -3.27, p < 0.001), and online political participation (B = 2.08, p < 0.001) differentiated individuals' likelihood of being victimized.

Over the last decade, social media has revolutionized the way people communicate and share information. As the everyday lives of individuals are increasingly mediated by social media technologies, some users may experience problems with excessive use. In prior studies, problematic use has been associated with many negative life outcomes, ranging from psychological disorders to economic consequences.

The main objective of this study was to determine whether PSMU is also linked to increased cybercrime victimization. First, we examined how PSMU relates to cybercrime victimization and hypothesized that increased PSMU is associated with increased cybercrime victimization (H1). Our findings from the cross-sectional study indicated that PSMU is a notable predictor of victimization. In fact, daily reported problematic use increased the likelihood of cybercrime victimization by more than 30 percentage points. More specifically, the analysis showed that more than 40% of users who reported experiencing problematic use daily reported being victims of cybercrime, while those who never experienced problematic use had a probability of victimization of slightly over 10%.

We also examined how PSMU captures other risk factors contributing to cybercrime victimization. Here, we hypothesized that the association between PSMU and cybercrime victimization is mediated by exposure to risk, proximity to offenders, target attractiveness, and lack of guardianship (H2). The decomposition analysis indicated that confounding factors explained over 50 percent of the total effect of PSMU. A more detailed analysis showed that the association between PSMU and cybercrime victimization was related to respondents’ young age, online political activity, activity to meet strangers online, and intensity of general social media use. This means that PSMU and victimization are linked to similar factors related to routine activities and lifestyle that increase the target's attractiveness, proximity to offenders and lack of guardianship. Notably, the effect of PSMU remained significant even after controlling for the confounding factors.

In the longitudinal analysis, we confirmed the first hypothesis and found that increased PSMU was associated with increased cybercrime victimization in both within- and between-subject analyses. The result indicated a clear link between problematic use and cybercrime experiences during the observation period: as problematic use increases, so does the individual’s likelihood of becoming a victim of cybercrime. At the same time, according to the between-subject analysis, it also appears that cybercrime experiences are generally more likely to increase for those who experience more problematic use. Interestingly, we could not find within-subject effects for the other factors. This means, for example, that individuals' increased encounters with strangers or increased online political activity were not directly reflected in the likelihood of becoming a victim during the observation period. The between-subject analyses, however, indicated that an individual’s increased propensity to be victimized is related to a higher level of social media activity, intensity of meeting strangers online, and online political activity over time.

Our findings are consistent with those of preceding research pointing to the fact that cybervictimization is indeed a notable threat, especially to those already in vulnerable circumstances (Keipi et al., 2016 ). The probabilities of cybercrime risk vary in online interactional spaces, depending on the absence and presence of certain key components suggested in our theoretical framework. Despite the seriousness of our findings, recent statistics indicate that cybercrime victimization is still relatively rare in Finland. In 2020, seven percent of Finnish Internet users had experienced online harassment, and 13 percent reported experiencing unwelcome advances during the previous three months (OSF, 2020 ). However, both forms of cybercrime victimization are clearly more prevalent among younger people and those who use social media frequently.

Cybercrime is becoming an increasingly critical threat as social media use continues to spread throughout segments of the population. Certain online activities and routinized behaviors can be considered to be particularly risky and to increase the probability of cybercrime victimization. In our study, we have identified problematic social media use as a specific behavioral pattern or lifestyle that predicts increased risk of becoming a victim of cybercrime.

Although the overall approach of our study was straightforward, the original theoretical concepts are ambiguously defined and alternative meanings have been given to them. It follows that the empirical operationalization of the concepts was not in line with some studies examining the premises of the RAT and LET framework. Indeed, different empirical measures have been employed to address the basic elements associated with risks of victimization (e.g., Hawdon et al., 2017; Pratt & Turanovic, 2016). In our investigation, we focused on selected online activities and key socio-demographic background factors.

Similarly, we need to be cautious when discussing the implications of our findings. First, our study deals with one country alone, which means that the findings cannot be generalized beyond Finland or beyond the timeline 2017 to 2019. This means that our findings may not be applicable to the highly specific time of the COVID-19 pandemic when online activities have become more versatile than ever before. In addition, although our sample was originally drawn from the national census database, some response bias probably exists in the final samples. Future research should use longitudinal data that better represent, for example, different socio-economic groups. We also acknowledge that we did not control for the effect of offline social relations on the probability of cybercrime risk. Despite these limitations, we believe our study has significance for contemporary cybercrime research.

Our study shows that PSMU heightens the risk of cybercrime victimization. Needless to say, future research should continue to identify specific activities that comprise “dangerous” lifestyles online, which may vary from one population group to another. In online settings, there are a variety of situations and circumstances that are applicable to different forms of cybercrime. For instance, a lack of basic cybersecurity skills may operate in much the same way as PSMU, increasing exposure to risky situations online.

In general, our findings contribute to the assumption that online and offline victimization should not necessarily be considered distinct phenomena. Therefore, our theoretical framework, based on RAT and LET, seems highly justified. Our observations contribute to an increasing body of research that demonstrates how routine activities and lifestyle patterns of individuals can be applied to crimes committed in the physical world, as well as to crimes occurring in cyberspace.

Data Availability

The survey data used in this study will be made available via the Finnish Social Science Data Archive (FSD, http://www.fsd.uta.fi/en/ ) after the manuscript has been accepted. The data are also available from the authors on scholarly request.

Code Availability

Analyses were run with Stata 16.1. The code is also available from the authors on request for replication purposes.

Footnote 1. The five victimization items were:

1) Have you been targeted by a threat or attack on social media?

2) Have you been falsely accused online?

3) Have you been targeted with hateful or degrading material on the Internet?

4) Have you experienced sexual harassment on social media?

5) Has your online account been stolen or a new account made with your name without your permission?

Appel, M., Marker, C., & Gnambs, T. (2020). Are social media ruining our lives? A review of meta-analytic evidence. Review of General Psychology, 24 (1), 60–74. https://doi.org/10.1177/1089268019880891


Bányai, F., Zsila, Á., Király, O., Maraz, A., Elekes, Z., Griffiths, M. D., et al. (2017). Problematic social media use: Results from a large-scale nationally representative adolescent sample. PLoS ONE , 12 (1). https://doi.org/10.1371/journal.pone.0169839

Bossler, A. M., Holt, T. J., & May, D. C. (2012). Predicting online harassment victimization among a juvenile population. Youth & Society, 44 (4), 500–523. https://doi.org/10.1177/0044118X11407525

Clark, J. L., Algoe, S. B., & Green, M. C. (2018). Social network sites and well-being: The role of social connection. Current Directions in Psychological Science, 9 , 44–49. https://doi.org/10.1016/j.copsyc.2015.10.006

Cohen, L. E., & Felson, M. (1979). Social change and crime rate trends: A routine activity approach. American Sociological Review, 44 (4), 588–608. https://doi.org/10.2307/2094589

Craig, W., Boniel-Nissim, M., King, N., Walsh, S. D., Boer, M., Donnelly, P. D., et al. (2020). Social media use and cyber-bullying: A cross-national analysis of young people in 42 countries. Journal of Adolescent Health, 66 (6), S100–S108. https://doi.org/10.1016/j.jadohealth.2020.03.006

Donalds, C., & Osei-Bryson, K. M. (2019). Toward a cybercrime classification ontology: A knowledge-based approach. Computers in Human Behavior, 92 , 403–418. https://doi.org/10.1016/j.chb.2018.11.039

Engström, A. (2020). Conceptualizing lifestyle and routine activities in the early 21st century: A systematic review of self-report measures in studies on direct-contact offenses in young populations. Crime & Delinquency, 67 (5), 737–782. https://doi.org/10.1177/0011128720937640

Europol (2019). European Union serious and organised crime threat assessment. Online document, available at: https://ec.europa.eu/home-affairs/what-we-do/policies/cybercrime_en

Gámez-Guadix, M., Borrajo, E., & Almendros, C. (2016). Risky online behaviors among adolescents: Longitudinal relations among problematic Internet use, cyberbullying perpetration, and meeting strangers online. Journal of Behavioral Addictions, 5 (1), 100–107. https://doi.org/10.1556/2006.5.2016.013

Griffiths, M. D., Kuss, D. J., & Demetrovics, Z. (2014). Social networking addiction: An overview of preliminary findings. In K. P. Rosenberg & L. C. B. T.-B. A. Feder (Eds.), Behavioral addictions: Criteria, evidence, and treatment (pp. 119–141). San Diego: Academic Press. https://doi.org/10.1016/B978-0-12-407724-9.00006-9

Hawdon, J., Oksanen, A., & Räsänen, P. (2017). Exposure to online hate in four nations: A cross-national consideration. Deviant Behavior, 38 (3), 254–266. https://doi.org/10.1080/01639625.2016.1196985

Hindelang, M. J., Gottfredson, M. R., & Garofalo, J. (1978). Victims of personal crime: An empirical foundation for a theory of personal victimization . Ballinger Publishing Co.


Holt, T. J., & Bossler, A. M. (2008). Examining the applicability of lifestyle-routine activities theory for cybercrime victimization. Deviant Behavior, 30 (1), 1–25. https://doi.org/10.1080/01639620701876577

Holt, T. J., & Bossler, A. M. (2014). An assessment of the current state of cybercrime scholarship. Deviant Behavior, 35 (1), 20–40. https://doi.org/10.1080/01639625.2013.822209

Hussain, Z., & Griffiths, M. D. (2018). Problematic social networking site use and comorbid psychiatric disorders: A systematic review of recent large-scale studies. Frontiers in Psychiatry , 9 (686). https://doi.org/10.3389/fpsyt.2018.00686

Jann, B. (2014). Plotting regression coefficients and other estimates. The Stata Journal, 14 (4), 708–737. https://doi.org/10.1177/1536867X1401400402

Karlson, K. B., Holm, A., & Breen, R. (2012). Comparing regression coefficients between same-sample nested models using logit and probit: A new method. Sociological Methodology, 42 (1), 286–313. https://doi.org/10.1177/0081175012444861

Keipi, T., Näsi, M., Oksanen, A., & Räsänen, P. (2016). Online hate and harmful content: Cross-national perspectives. Taylor & Francis. http://library.oapen.org/handle/20.500.12657/22350

Kim, B., & Kim, Y. (2017). College students’ social media use and communication network heterogeneity: Implications for social capital and subjective well-being. Computers in Human Behavior, 73 , 620–628. https://doi.org/10.1016/j.chb.2017.03.033

Kohler, U., Karlson, K. B., & Holm, A. (2011). Comparing coefficients of nested nonlinear probability models. The Stata Journal, 11 (3), 420–438. https://doi.org/10.1177/1536867X1101100306

Koivula, A., Kaakinen, M., Oksanen, A., & Räsänen, P. (2019). The role of political activity in the formation of online identity bubbles. Policy & Internet, 11 (4), 396–417. https://doi.org/10.1002/poi3.211

Koivula, A., Koiranen, I., Saarinen, A., & Keipi, T. (2020). Social and ideological representativeness: A comparison of political party members and supporters in Finland after the realignment of major parties. Party Politics, 26 (6), 807–821. https://doi.org/10.1177/1354068818819243

Koiranen, I., Koivula, A., Saarinen, A., & Keipi, T. (2020). Ideological motives, digital divides, and political polarization: How do political party preference and values correspond with the political use of social media? Telematics and Informatics, 46 , 101322. https://doi.org/10.1016/j.tele.2019.101322

Kross, E., Verduyn, P., Demiralp, E., Park, J., Lee, D. S., Lin, N., et al. (2013). Facebook use predicts declines in subjective well-being in young adults. PLoS ONE, 8 (8), e69841. https://doi.org/10.1371/journal.pone.0069841

Kross, E., Verduyn, P., Sheppes, G., Costello, C. K., Jonides, J., & Ybarra, O. (2020). Social media and well-being: Pitfalls, progress, and next steps. Trends in Cognitive Sciences, 25 (1), 55–66. https://doi.org/10.1016/j.tics.2020.10.005

Kuss, D., & Griffiths, M. (2017). Social networking sites and addiction: Ten lessons learned. International Journal of Environmental Research and Public Health, 14 (3), 311. https://doi.org/10.3390/ijerph14030311

Leinsalu, M., Baburin, A., Jasilionis, D., Krumins, J., Martikainen, P., & Stickley, A. (2020). Economic fluctuations and urban-rural differences in educational inequalities in mortality in the Baltic countries and Finland in 2000–2015: A register-based study. International Journal for Equity in Health, 19 (1), 1–6. https://doi.org/10.1186/s12939-020-01347-5

Leukfeldt, E. R., & Yar, M. (2016). Applying routine activity theory to cybercrime: A theoretical and empirical analysis. Deviant Behavior, 37 (3), 263–280. https://doi.org/10.1080/01639625.2015.1012409

Longobardi, C., Settanni, M., Fabris, M. A., & Marengo, D. (2020). Follow or be followed: Exploring the links between Instagram popularity, social media addiction, cyber victimization, and subjective happiness in Italian adolescents. Children and Youth Services Review, 113 , 104955. https://doi.org/10.1016/j.childyouth.2020.104955

Lowry, P. B., Zhang, J., Wang, C., & Siponen, M. (2016). Why do adults engage in cyberbullying on social media? An integration of online disinhibition and deindividuation effects with the social structure and social learning model. Information Systems Research, 27 (4), 962–986. https://doi.org/10.1287/isre.2016.0671

Lutz, C., & Hoffmann, C. P. (2017). The dark side of online participation: Exploring non-, passive and negative participation. Information, Communication & Society, 20 (6), 876–897. https://doi.org/10.1080/1369118X.2017.1293129

Marcum, C. D., Higgins, G. E., & Nicholson, J. (2017). I’m watching you: Cyberstalking behaviors of university students in romantic relationships. American Journal of Criminal Justice, 42 (2), 373–388. https://doi.org/10.1007/s12103-016-9358-2

Martínez-Ferrer, B., Moreno, D., & Musitu, G. (2018). Are adolescents engaged in the problematic use of social networking sites more involved in peer aggression and victimization? Frontiers in Psychology, 9 , 801. https://doi.org/10.3389/fpsyg.2018.00801

Marttila, E., Koivula, A., & Räsänen, P. (2021). Does excessive social media use decrease subjective well-being? A longitudinal analysis of the relationship between problematic use, loneliness and life satisfaction. Telematics and Informatics, 59 , 101556. https://doi.org/10.1016/j.tele.2020.101556

Meerkerk, G. J., Van Den Eijnden, R. J. J. M., Vermulst, A. A., & Garretsen, H. F. L. (2009). The Compulsive Internet Use Scale (CIUS): Some psychometric properties. Cyberpsychology and Behavior, 12 (1), 1–6. https://doi.org/10.1089/cpb.2008.0181

Meshi, D., Cotten, S. R., & Bender, A. R. (2020). Problematic social media use and perceived social isolation in older adults: A cross-sectional study. Gerontology, 66 (2), 160–168. https://doi.org/10.1159/000502577

Meško, G. (2018). On some aspects of cybercrime and cybervictimization. European Journal of Crime, Criminal Law and Criminal Justice, 26 (3), 189–199. https://doi.org/10.1163/15718174-02603006

Milani, R., Caneppele, S., & Burkhardt, C. (2020). Exposure to cyber victimization: Results from a Swiss survey. Deviant Behavior . https://doi.org/10.1080/01639625.2020.1806453

Näsi, M., Räsänen, P., Kaakinen, M., Keipi, T., & Oksanen, A. (2017). Do routine activities help predict young adults’ online harassment: A multi-nation study. Criminology and Criminal Justice, 17 (4), 418–432. https://doi.org/10.1177/1748895816679866

Ngo, F. T., & Paternoster, R. (2011). Cybercrime victimization: An examination of individual and situational level factors. International Journal of Cyber Criminology, 5 (1), 773–793.

Official Statistics of Finland (OSF) (2020). Väestön tieto- ja viestintätekniikan käyttö [Use of information and communications technology by individuals] [online document]. ISSN=2341–8699. 2020, Appendix table 29: Seeing hostile messages, experiencing harassment and being subjected to inappropriate approaches on social media in 2020, % of the population. Helsinki: Tilastokeskus [Statistics Finland]. Available at: http://www.stat.fi/til/sutivi/2020/sutivi_2020_2020-11-10_tau_029_fi.html

Pang, H. (2018). How does time spent on WeChat bolster subjective well-being through social integration and social capital? Telematics and Informatics, 35 (8), 2147–2156. https://doi.org/10.1016/j.tele.2018.07.015

Pratt, T. C., & Turanovic, J. J. (2016). Lifestyle and routine activity theories revisited: The importance of “risk” to the study of victimization. Victims & Offenders, 11 (3), 335–354. https://doi.org/10.1080/15564886.2015.1057351

Reep-van den Bergh, C. M. M., & Junger, M. (2018). Victims of cybercrime in Europe: A review of victim surveys. Crime Science, 7 (1), 1–15. https://doi.org/10.1186/s40163-018-0079-3

Reyns, B. W., Henson, B., & Fisher, B. S. (2011). Being pursued online. Criminal Justice and Behavior, 38 (11), 1149–1169. https://doi.org/10.1177/0093854811421448

Räsänen, P., Hawdon, J., Holkeri, E., Keipi, T., Näsi, M., & Oksanen, A. (2016). Targets of online hate: Examining determinants of victimization among young Finnish Facebook users. Violence and Victims, 31 (4), 708–725. https://doi.org/10.1891/0886-6708.vv-d-14-00079

Schunck, R., & Perales, F. (2017). Within- and between-cluster effects in generalized linear mixed models: A discussion of approaches and the xthybrid command. The Stata Journal, 17 (1), 89–115. https://doi.org/10.1177/1536867X1701700106

Shensa, A., Escobar-Viera, C. G., Sidani, J. E., Bowman, N. D., Marshal, M. P., & Primack, B. A. (2017). Problematic social media use and depressive symptoms among U.S. young adults: A nationally-representative study. Social Science and Medicine, 182 , 150–157. https://doi.org/10.1016/j.socscimed.2017.03.061

Sivonen, J., Kuusela, A., Koivula, A., Saarinen, A., & Keipi, T. (2019). Working papers in economic sociology: Research Report on Finland in the Digital Age Round 2 Panel-survey . Turku.

Wagner, M. (2021). Affective polarization in multiparty systems. Electoral Studies, 69 , 102199. https://doi.org/10.1016/j.electstud.2020.102199

Vakhitova, Z. I., Alston-Knox, C. L., Reynald, D. M., Townsley, M. K., & Webster, J. L. (2019). Lifestyles and routine activities: Do they enable different types of cyber abuse? Computers in Human Behavior, 101 , 225–237. https://doi.org/10.1016/j.chb.2019.07.012

Vakhitova, Z. I., Reynald, D. M., & Townsley, M. (2016). Toward the adaptation of routine activity and lifestyle exposure theories to account for cyber abuse victimization. Journal of Contemporary Criminal Justice, 32 (2), 169–188. https://doi.org/10.1177/1043986215621379

Valenzuela, S., Park, N., & Kee, K. F. (2009). Is there social capital in a social network site?: Facebook use and college student’s life satisfaction, trust, and participation. Journal of Computer-Mediated Communication, 14 (4), 875–901. https://doi.org/10.1111/j.1083-6101.2009.01474.x

Van Dijk, J. A., & Hacker, K. L. (2018). Internet and democracy in the network society. Routledge . https://doi.org/10.4324/9781351110716

Verduyn, P., Ybarra, O., Résibois, M., Jonides, J., & Kross, E. (2017). Do social network sites enhance or undermine subjective well-being? A critical review. Social Issues and Policy Review, 11 (1), 274–302. https://doi.org/10.1111/sipr.12033

Wheatley, D., & Buglass, S. L. (2019). Social network engagement and subjective well-being: A life-course perspective. The British Journal of Sociology, 70 (5), 1971–1995. https://doi.org/10.1111/1468-4446.12644

Yar, M. (2005). The novelty of ‘Cybercrime.’ European Journal of Criminology, 2 (4), 407–427. https://doi.org/10.1177/147737080556056

Yar, M., & Steinmetz, K. F. (2019). Cybercrime and society . SAGE Publications Limited.


Funding

Open Access funding provided by University of Turku (UTU) including Turku University Central Hospital. This study was funded by the Strategic Research Council of the Academy of Finland (decision number 314171).

Author information

Authors and affiliations

Economic Sociology, Department of Social Research, University of Turku, Assistentinkatu 7, 20014, Turku, Finland

Eetu Marttila, Aki Koivula & Pekka Räsänen


Corresponding author

Correspondence to Eetu Marttila.

Ethics declarations

Conflict of interest

The authors declare no conflicts of interest.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Marttila, E., Koivula, A. & Räsänen, P. Cybercrime Victimization and Problematic Social Media Use: Findings from a Nationally Representative Panel Study. Am J Crim Just 46 , 862–881 (2021). https://doi.org/10.1007/s12103-021-09665-2

Download citation

Received : 02 December 2020

Accepted : 22 June 2021

Published : 25 November 2021

Issue Date : December 2021

DOI : https://doi.org/10.1007/s12103-021-09665-2


Keywords

  • Social media
  • Problematic social media use
  • Longitudinal analysis