Social Engineering Attacks Prevention: A Systematic Literature Review


Social Engineering Attacks: Recent Advances and Challenges

  • Conference paper
  • First Online: 03 July 2021

Nikol Mashtalyar, Uwera Nina Ntaganzwa, Thales Santos, Saqib Hakak & Suprio Ray

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12788)

Included in the following conference series:

  • International Conference on Human-Computer Interaction

The world's technological landscape is continuously evolving with new possibilities, and new threats emerge in parallel. Social engineering is of predominant concern for industries, governments and institutions because it exploits their most valuable resource: their people. Social engineers prey on the psychological weaknesses of humans with sophisticated attacks, which pose serious cybersecurity threats to digital infrastructure. They use deception and manipulation, delivered through human-computer interaction, to compromise privacy and cybersecurity. Numerous forms of attack have been observed, which can target a range of assets such as intellectual property, confidential data and financial resources. Therefore, institutions must be prepared for any kind of attack that may be deployed and demonstrate willingness to implement new defense strategies. In this article, we present state-of-the-art social engineering attacks, their classification and various mitigation strategies.


References

Wang, Z., Sun, L., Zhu, H.: Defining social engineering in cybersecurity. IEEE Access 8, 85094–85115 (2020)

Salahdine, F., Kaabouch, N.: Social engineering attacks: a survey. Future Internet 11(4), 89 (2019)

Albladi, S.M., Weir, G.R.S.: User characteristics that influence judgment of social engineering attacks in social networks. Hum.-Cent. Comput. Inf. Sci. 8(1), 1–24 (2018). https://doi.org/10.1186/s13673-018-0128-7

Williams, E.J., Hinds, J., Joinson, A.N.: Exploring susceptibility to phishing in the workplace. Int. J. Hum. Comput. Stud. 120, 1–13 (2018)

Breda, F., Barbosa, H., Morais, T.: Social engineering and cyber security. In: Proceedings of International Technology, Education and Development Conference (2017)

Kumar, A., Chaudhary, M., Kumar, N.: Social engineering threats and awareness: a survey. Eur. J. Adv. Eng. Tech. 2(11), 15–19 (2015)

Hakak, S., Khan, W.Z., Imran, M., Choo, K.-K.R., Shoaib, M.: Have you been a victim of COVID-19-related cyber incidents? Survey, taxonomy, and mitigation strategies. IEEE Access 8, 124134–124144 (2020)

FBI: Federal agencies warn of emerging fraud schemes related to COVID-19 vaccines. https://www.fbi.gov/news/pressrel/press-releases/federal-agencies-warn-of-emerging-fraud-schemes-related-to-covid-19-vaccines

Alzahrani, A.: Coronavirus social engineering attacks: issues and recommendations. Int. J. Adv. Comput. Sci. Appl. 11(5), 9 (2020). https://doi.org/10.14569/IJACSA.2020.0110523

Google: Protecting businesses against cyber threats during COVID-19 and beyond. https://cloud.google.com/blog/products/identity-security/protecting-against-cyber-threats-during-covid-19-and-beyond

Szurdi, J., Starov, O., McCabe, A., Chen, Z., Duan, R.: Studying how cybercriminals prey on the COVID-19 pandemic. https://unit42.paloaltonetworks.com/how-cybercriminals-prey-on-the-covid-19-pandemic/

Albladi, S.M., Weir, G.R.S.: Predicting individuals' vulnerability to social engineering in social networks. Cybersecur. 3(1), 1–19 (2020)

Lansley, M., Kapetanakis, S., Polatidis, N.: SEADer++ v2: detecting social engineering attacks using natural language processing and machine learning. In: 2020 International Conference on Innovations in Intelligent Systems and Applications (INISTA), pp. 1–6. IEEE (2020)

Basit, A., Zafar, M., Liu, X., Javed, A.R., Jalil, Z., Kifayat, K.: A comprehensive survey of AI-enabled phishing attacks detection techniques. Telecommun. Syst. 76(1), 139–154 (2020). https://doi.org/10.1007/s11235-020-00733-2

Abreu, J.V.F., Fernandes, J.H.C., Gondim, J.J.C., Ralha, C.G.: Bot development for social engineering attacks on Twitter. arXiv preprint arXiv:2007.11778 (2020)

Smith, A., Papadaki, M., Furnell, S.M.: Improving awareness of social engineering attacks. In: Dodge, R.C., Futcher, L. (eds.) WISE 2009/2011/2013. IAICT, vol. 406, pp. 249–256. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39377-8_29

Saleem, J., Hammoudeh, M.: Defense methods against social engineering attacks. In: Daimi, K. (ed.) Computer and Network Security Essentials, pp. 603–618. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-58424-9_35

Zulkurnain, A.U., Hamidy, A., Husain, A.B., Chizari, H.: Social engineering attack mitigation. Int. J. Math. Comput. Sci. 1(4), 188–198 (2015)

Bullée, J.-W., Montoya, L., Pieters, W., Junger, M., Hartel, P.H.: The persuasion and security awareness experiment: reducing the success of social engineering attacks. J. Exp. Criminol. 11, 97–115 (2015)

Parthy, P.P., Rajendran, G.: Identification and prevention of social engineering attacks on an enterprise. In: 2019 International Carnahan Conference on Security Technology (ICCST), pp. 1–5. IEEE (2019)

Aldawood, H.A., Skinner, G.: A critical appraisal of contemporary cyber security social engineering solutions: measures, policies, tools and applications. In: 2018 26th International Conference on Systems Engineering (ICSEng), pp. 1–6. IEEE (2018)

Aldawood, H., Skinner, G.: An academic review of current industrial and commercial cyber security social engineering solutions. In: Proceedings of the 3rd International Conference on Cryptography, Security and Privacy, pp. 110–115 (2019)

Campbell, C.C.: Solutions for counteracting human deception in social engineering attacks. Inf. Technol. People 32(5), 1130–1152 (2019)

Heartfield, R., Loukas, G., Gan, D.: You are probably not the weakest link: towards practical prediction of susceptibility to semantic social engineering attacks. IEEE Access 4, 6910–6928 (2016)

Google: Improving malicious document detection in Gmail with deep learning (2020). https://security.googleblog.com/2020/02/improving-malicious-document-detection.html. Accessed 16 January 2021

World Health Organisation: How to report misinformation online (2020). https://www.who.int/campaigns/connecting-the-world-to-combat-coronavirus/how-to-report-misinformation-online. Accessed 16 January 2021

WHO: Coronavirus disease (COVID-19) advice for the public: mythbusters (2020). https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters. Accessed 16 January 2021

U.Gov: Go viral! A 5 minute game that helps protect you against COVID-19 misinformation (2020). https://www.goviralgame.com/en?utm_source=EO&utm_medium=SocialMedia&utm_campaign=goviral&utm_content=Eng. Accessed 16 January 2021

WHO: Countering misinformation with the government of the United Kingdom (2020). https://www.who.int/news-room/feature-stories/detail/countering-misinformation-about-covid-19. Accessed 16 January 2021

Shafi, M., et al.: 5G: a tutorial overview of standards, trials, challenges, deployment, and practice. IEEE J. Sel. Areas Commun. 35(6), 1201–1221 (2017)

Cresci, S.: A decade of social bot detection. Commun. ACM 63(10), 72–83 (2020)

Heidari, M., Jones, J.H.: Using BERT to extract topic-independent sentiment features for social media bot detection. In: 2020 11th IEEE Annual Ubiquitous Computing, Electronics and Mobile Communication Conference (UEMCON), pp. 0542–0547. IEEE (2020)

Kudugunta, S., Ferrara, E.: Deep neural networks for bot detection. Inf. Sci. 467, 312–322 (2018)

Wu, W., Alvarez, J., Liu, C., Sun, H.-M.: Bot detection using unsupervised machine learning. Microsyst. Technol. 24(1), 209–217 (2018)

Abou Daya, A., Salahuddin, M.A., Limam, N., Boutaba, R.: A graph-based machine learning approach for bot detection. In: 2019 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), pp. 144–152. IEEE (2019)

Huh, J.-H., Seo, Y.-S.: Understanding edge computing: engineering evolution with artificial intelligence. IEEE Access 7, 164229–164245 (2019)

Xia, P., et al.: Don't fish in troubled waters! Characterizing coronavirus-themed cryptocurrency scams (2020)

Weber, K., Schütz, A., Fertig, T., Müller, N.: Exploiting the human factor: social engineering attacks on cryptocurrency users. 07, 650–668 (2020)

Khan, W.Z., Ahmed, E., Hakak, S., Yaqoob, I., Ahmed, A.: Edge computing: a survey. Future Gener. Comput. Syst. 97, 219–235 (2019)

Hakak, S., Ray, S., Khan, W.Z., Scheme, E.: A framework for edge-assisted healthcare data analytics using federated learning. In: IEEE International Workshop on Data Analytics for Smart Health (DASH), IEEE BigData (2020)

Hakak, S., Khan, W.Z., Gilkar, G.A., Haider, N., Imran, M., Alkatheiri, M.S.: Industrial wastewater management using blockchain technology: architecture, requirements, and future directions. IEEE Internet of Things Mag. 3(2), 38–43 (2020)


Author information

Authors and Affiliations

Faculty of Computer Science, University of New Brunswick, Fredericton, Canada

Nikol Mashtalyar, Uwera Nina Ntaganzwa, Thales Santos, Saqib Hakak & Suprio Ray

Corresponding authors

Correspondence to Nikol Mashtalyar, Uwera Nina Ntaganzwa, Thales Santos, Saqib Hakak or Suprio Ray.

Editor information

Editors and Affiliations

San Jose State University, San Jose, CA, USA

Abbas Moallem


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Mashtalyar, N., Ntaganzwa, U.N., Santos, T., Hakak, S., Ray, S. (2021). Social Engineering Attacks: Recent Advances and Challenges. In: Moallem, A. (ed.) HCI for Cybersecurity, Privacy and Trust. HCII 2021. Lecture Notes in Computer Science, vol 12788. Springer, Cham. https://doi.org/10.1007/978-3-030-77392-2_27

DOI: https://doi.org/10.1007/978-3-030-77392-2_27

Published: 03 July 2021

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-77391-5

Online ISBN: 978-3-030-77392-2

eBook Packages: Computer Science, Computer Science (R0)

Predicting individuals' vulnerability to social engineering in social networks

  • Open access
  • Published: 05 March 2020

  • Samar Muslah Albladi (ORCID: orcid.org/0000-0001-9246-9540) &
  • George R. S. Weir

Cybersecurity, volume 3, Article number: 7 (2020)


The popularity of social networking sites has attracted billions of users to engage and share their information on these networks. The vast amount of circulating data and information exposes these networks to several security risks. Social engineering is one of the most common threats that social network users may face. Training and increasing users' awareness of such threats is essential for maintaining continuous and safe use of social networking services. Identifying the most vulnerable users in order to target them with training programs is desirable for increasing the effectiveness of such programs. Few studies have investigated the effect of individuals' characteristics on predicting their vulnerability to social engineering in the context of social networks. To address this gap, the present study developed a novel model to predict user vulnerability based on several perspectives of user characteristics. The proposed model includes interactions between different social network-oriented factors such as level of involvement in the network, motivation to use the network, and competence in dealing with threats on the network. The results of this research indicate that most of the considered user characteristics influence user vulnerability either directly or indirectly. Furthermore, the present study provides evidence that individuals' characteristics can identify vulnerable users so that these risks can be considered when designing training and awareness programs.

Introduction

Individuals and organisations are becoming increasingly dependent on working with computers, accessing the Internet, and, more importantly, sharing data through virtual communications. This makes cybersecurity one of today's most significant issues. Protecting people and organisations from being targeted by cybercriminals is becoming a priority for industry and academia (Gupta et al. 2018). This is due to the substantial damage that may result from losing valuable data and documents in such attacks. Rather than exploiting technical means to reach their victims, cybercriminals may instead use deceptive social engineering (SE) strategies to convince their targets to accept the lure. Social engineers exploit individuals' motives, habits, and behaviour to manipulate their victims (Mitnick and Simon 2003).

Security practitioners often still rely on technical measures to protect against online threats while overlooking the fact that cybercriminals target human weak points to spread and conduct their attacks (Krombholz et al. 2015). According to the human-factor report (Proofpoint 2018), the number of social engineering attacks that exploit human vulnerabilities increased dramatically over the year examined. This raises the necessity of finding a solution that helps users adopt acceptable defensive behaviour in the social network (SN) setting. Identifying the characteristics that make users more or less vulnerable to social engineering threats is a major step toward protecting against such threats (Albladi and Weir 2018). Knowing where weakness resides can help focus awareness-raising and target training sessions for those individuals, with the aim of reducing their likely victimisation.

With such objectives in mind, the present research developed a conceptual model that integrates user-related factors and dimensions as a means of predicting users' vulnerability to social engineering-based attacks. This study used a scenario-based experiment to examine the relationships between the behavioural constructs in the conceptual model and the model's ability to predict user vulnerability to SE victimisation.

The organisation of this paper is as follows: the Theoretical background section briefly reviews the related literature considered in developing the proposed model. The methods used to evaluate this model are described in the Methods section. Following this, the results of the analysis are summarised in the Results section. The Discussion section discusses the findings, while the Theoretical and practical implications section presents the theoretical and practical implications. An outline approach to a semi-automated advisory system is proposed in the A semi-automated security advisory system section. Finally, the Conclusion section draws conclusions from this work.

Theoretical background

People's vulnerability to cyber-attacks, and particularly to social engineering-based attacks, is not a newly emerging problem. Social engineering issues have been studied in email environments (Alseadoon et al. 2015; Halevi et al. 2013; Vishwanath et al. 2016), organisational environments (Flores et al. 2014, 2015), and recently in social network environments (Algarni et al. 2017; Saridakis et al. 2016; Vishwanath 2015). Yet, the present research argues that the context of these exploits affects people's ability to detect them, and that each context introduces new characteristics and elements which warrant further investigation.

The present study investigated user characteristics in social networks, particularly Facebook, from different angles such as people's behaviour, perceptions, and socio-emotions, in an attempt to identify the factors that could predict individuals' vulnerability to SE threats. People's vulnerability level will be identified based on their response to a variety of social engineering scenarios. The following sub-sections will address in detail the relationship between each factor of the three perspectives and user susceptibility to SE victimisation.

Habitual perspective

Due to the importance of understanding the impact of people's habitual factors on their susceptibility to SE in SNs, this study aims to measure the effect of level of involvement, number of SN connections, percentage of known friends among the network's connections, and SN experience on predicting user susceptibility to SE in the conceptual model.

Level of involvement

This construct is intended to measure the extent to which a user engages in Facebook activities. When people are highly involved with a communication service, they tend to be relaxed and to ignore cues associated with that service that warn of deception risk (Vishwanath et al. 2016). User involvement in a social network can be measured by the number of minutes spent on the network every day and the frequency of commenting on other people's status updates or pictures (Vishwanath 2015). Time spent on Facebook is positively associated with disclosing highly sensitive information (Chang and Heo 2014). Furthermore, people who are more involved in the network are believed to be more exposed to social engineering victimisation (Saridakis et al. 2016; Vishwanath 2015).

Conversely, highly involved users might be expected to have more experience with the different types of threat that can occur online. Yet, it has been observed that active Facebook users are less concerned about sharing their private information, as they usually have less restrictive privacy settings (Halevi et al. 2013). Users' tendency to share private information could relate to the fact that individuals who spend a lot of time using the network usually exhibit high trust in the network (Sherchan et al. 2013). Therefore, the following hypotheses have been proposed.

Ha1. Users with a higher level of involvement will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

◦ Hb1. The user’s level of involvement positively influences the user’s experience with cybercrime.

◦ Hb2. The user’s level of involvement positively influences the user’s trust.

Number of connections

Despite the fact that having a large number of SN connections could increase people's life satisfaction if they are motivated to engage in the network to maintain friendships (Rae and Lonborg 2015), a high number of contacts in the network is claimed to increase vulnerability to online risks (Buglass et al. 2016; Vishwanath 2015). Risky behaviour such as disclosing personal information on Facebook is closely associated with users' desire to maintain and increase the number of existing friends (Chang and Heo 2014; Cheung et al. 2015). Users with a high number of social network connections are motivated to be more involved in the network by spending more time sharing information and maintaining their profiles (Madden et al. 2013).

Furthermore, a high number of connections might suggest that users are connected not only with their friends but also with strangers. Vishwanath (2015) has claimed that connecting with strangers on Facebook can be considered the first level of cyber-attack victimisation, as those individuals are usually less suspicious of the possible threats that can result from connecting with strangers in the network. Furthermore, Alqarni et al. (2016) adopted this view to test the relationship between connecting with strangers (assumed to provide the basis for phishing attacks) and perceived severity of, and vulnerability to, phishing attacks. Their study indicated a negative relationship between the number of strangers the user is already connected to and the user's perception of the severity of, and their vulnerability to, phishing attacks on Facebook. Therefore, if users are connected mostly with known friends on Facebook, this could be seen as a mark of less vulnerable individuals. With all of these points in mind, the following hypotheses are generated.

Ha2: Users with a higher number of connections will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

◦ Hb3: The user’s number of connections positively influences the user’s level of involvement.

Ha3: Users with higher connections with known friends will be less susceptible to social engineering attacks (i.e., there will be a negative relationship).

Social network experience

People's experience in using information communication technologies makes them more competent to detect online deception in SNs (Tsikerdekis and Zeadally 2014). For instance, it has been found that the more time that has elapsed since joining Facebook, the more capable the user is of detecting SE attacks (Algarni et al. 2017). Furthermore, despite the fact that some researchers argue that computer experience has no significant impact on phishing susceptibility (Halevi et al. 2013; Saridakis et al. 2016), other research on email phishing found that the number of years of using the Internet and the number of years of using email have a positive impact on people's ability to detect email phishing (Alseadoon 2014; Sheng et al. 2010). Therefore, the present study suggests that the more experienced users are with SNs, the less vulnerable they are to SE victimisation.

Additionally, in the context of the social network, Internet experience has been found to predict precautionary behaviour, and further causes greater sensitivity to associated risks in using Facebook (Van Schaik et al. 2018 ). Thus, years of experience in using the network could increase the individual’s awareness of the risk associated with connecting with strangers. Accordingly, the present study postulates that more experienced users would have a high percentage of connections with known friends in the network.

Ha4: Users with a higher level of experience with social network will be less susceptible to social engineering attacks (i.e., there will be a negative relationship).

◦ Hb4: The user’s social network experience positively influences the user’s connections with known friends.

Perceptual perspective

People’s risk perception, competence, and cybercrime experience are the three perceptual factors that are believed to influence their susceptibility to social engineering attacks. The strength and direction of these factors’ impact will be discussed as follows.

Risk perception

Facebook users have different levels of risk perception that might affect their decisions in times of risk. Vishwanath et al. (2016) describe risk perception as the bridge between a user's previous knowledge about the expected risk and their competence to deal with that risk. Many studies have found that perceiving the risk associated with engaging in online activities directly influences avoidance of online services (Riek et al. 2016) and, more importantly, decreases vulnerability to online threats (Vishwanath et al. 2016). Facebook users' perceived risk of privacy and security threats significantly predicts their strict privacy and security settings (Van Schaik et al. 2018). Thus, if online users are aware of the potential risks that might be encountered on Facebook and their consequences, they will probably avoid clicking on malicious links and communicating with strangers on the network. This indicates that risk perception contributes to the user's competence in dealing with online threats and should lead to a decrease in susceptibility to SE. Therefore, the following relationships have been proposed.

Ha5: Users with a higher level of risk perception will be less susceptible to social engineering attacks (i.e., there will be a negative relationship).

◦ Hb5: The user’s perceived risk positively influences the user’s competence.

User competence

User competence has been considered an essential determinant of end-user capability to accomplish tasks in many different fields. In the realm of information systems, user competence can be defined as the individual's knowledge of the intended technology and the ability to use it effectively (Munro et al. 1997). To gain insight into user competence in detecting security threats in the context of online social networks, investigating the multidimensional space that determines this competence level is fundamental (Albladi and Weir 2017). The role of user competence and its dimensions in facilitating the detection of online threats is still a controversial topic in the information security field. The dimensions used in the present study to measure the concept are security awareness, privacy awareness, and self-efficacy. The scales used to measure these factors can determine the level of user competence in evaluating risks associated with social network usage.

User competence in dealing with risky situations in a social network setting is a major predictor of the user's response to online threats. When individuals feel competent to control their information in social networks, they are found to be less vulnerable to victimisation (Saridakis et al. 2016). Furthermore, self-efficacy, one of the user competence dimensions, has been found to play a critical role in users' safe and protective behaviour online (Milne et al. 2009). People who have confidence in their ability to protect themselves online, as well as high security awareness, can be perceived as highly competent users when facing cyber-attacks (Wright and Marett 2010). This study hypothesised that highly competent users are less susceptible to SE victimisation.

Ha6: Users with a higher level of competence will be less susceptible to social engineering attacks (i.e., there will be a negative relationship).

Cybercrime experience

Past victimisation is observed to profoundly affect a person's view of happiness and safety in general (Mahuteau and Zhu 2016). Such an unpleasant experience also tends to change behaviour, for example, reducing the likelihood of engagement in online shopping (Bohme and Moore 2012) or even increasing antisocial behaviour (Cao and Lin 2015). Furthermore, previous email phishing victimisation is claimed to raise user awareness and vigilance and thus prevent users from being victimised again (Workman 2007), yet recent studies found this effect not to be significant (Iuga et al. 2016; Wang et al. 2017). Thus, experience with cybercrime could also be used as an indicator of people's weakness in protecting themselves from such threats.

Experience with cybercrime has been found to increase people's perceived risk of social network services (Riek et al. 2016). Those who are knowledgeable and have previous experience with online threats could be assumed to have high risk perception (Vishwanath et al. 2016). However, unlike the context of email phishing, little is known about the role of prior knowledge and experience with cybercrime in preventing people from being vulnerable to social engineering attacks in the context of social networks. Thus, this study proposes that past experience could raise the user's risk perception but could also be used as a predictor of the user's risk of being victimised again. To this extent, the following hypotheses have been assumed.

Ha7: Users with a previous experience with cybercrime will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

◦ Hb6: The user’s experience with cybercrime positively influences the user’s perceived risk.

Socio-emotional perspective

Little is known regarding the impact that this perspective has on SE victimisation in a SN context. However, previous research has highlighted the positive effect of people's general trust on their victimisation in the email phishing context (Alseadoon et al. 2015), which encourages the present study to investigate further socio-emotional factors, such as the dimensions of user trust and motivation, and their possible impact on users' risky behaviour.

Trust

Some studies in email phishing (e.g., Alseadoon et al. 2015; Workman 2008) stress that the disposition to trust is a predictor of the user's probability of being deceived by cyber-attacks. In the context of social networks, trust can be derived from members' trust in each other as well as trust in the network provider. These two dimensions of trust have been indicated to negatively influence people's perceived risk in disclosing personal information (Cheung et al. 2015). Trust has also been found to strongly increase the disclosure of personal information among social network users (Beldad and Hegner 2017; Chang and Heo 2014). With all of this in mind, the present study hypothesised that trusting the social network provider as well as other members may cause higher susceptibility to cyber-attacks.

Ha8: Users with a higher level of trust will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

Motivation

According to the uses and gratifications theory, people use the communication technologies that fulfil their needs (Joinson 2008). Users' motivation to use communication technologies must therefore be taken into consideration in order to understand online user behaviour. This construct has been acknowledged by researchers in many fields, such as marketing (Chiu et al. 2014) and mobile technology (Kim et al. 2013), in order to understand their target users. However, information security research has made only limited use of this view in understanding online users' risky behaviour. Users can be motivated by different stimuli to engage in social networks, such as entertainment or information seeking (Basak and Calisir 2015). Additionally, people use Facebook for social reasons such as maintaining existing relationships and making new friends (Rae and Lonborg 2015). With respect to SE victimisation, these motivations can shed light on the user's behaviour at times of risk. For example, hedonically motivated users who usually seek enjoyment may be persuaded to click on links that offer new games or apps, while socially motivated users are generally looking to meet new people online, which makes them more likely to connect with strangers; such connection with strangers is considered risky behaviour (Alqarni et al. 2016). Therefore, this study predicts that users' vulnerability to social engineering-based attacks will differ based on their motives for accessing the social network.

Users' differing motivations to use social networking sites can explain their online attitudes, such as the tendency to disclose personal information in social networks (Chang and Heo 2014). Additionally, people's perceived benefit of network engagement has a positive impact on their willingness to share their photos online (Beldad and Hegner 2017). Thus, the present study assumes that motivated users are more vulnerable to SE victimisation than others. Additionally, motivated users could be inclined to be more trusting when using technology (Baabdullah 2018). This motivation could lead the individual to spend more time and show higher involvement in the network (Ross et al. 2009). This involvement could ultimately lead motivated individuals to experience, or at least become familiar with, different types of cybercrime that could happen in the network. Hence, the following hypotheses have been postulated.

Ha9: Users with a higher level of motivation will be more susceptible to social engineering attacks (i.e., there will be a positive relationship).

◦ Hb7: The user’s motivation positively influences the user’s trust.

◦ Hb8: The user’s motivation positively influences the user’s level of involvement.

◦ Hb9: The user’s motivation positively influences the user’s experience with cybercrime.

The previous sub-sections explain the nature and the directions of the relationships among the constructs in the present study. Based on these 18 proposed hypotheses, a novel conceptual model has been developed and is presented in Fig. 1. This conceptual model relies on three different perspectives which are believed to predict user behaviour toward SE victimisation on Facebook. Developing and validating such a holistic model gives a clear indication of the contribution of the present study.

Fig. 1. Research Model

Methods

To evaluate the hypotheses of the conceptual model, an online questionnaire was designed using the Qualtrics online survey tool. The questionnaire incorporated three main parts, starting with questions about participants' demographics, followed by questions measuring the constructs of the proposed model, and finally a scenario-based experiment. An invitation email was sent to a number of faculty staff in two universities, asking them to distribute the online questionnaire among their students and staff.

Hair et al. (2017) suggested a guideline that relies on Cohen's (1988) recommendations to calculate the required sample size using power estimates. In this case, for 9 predictors (the number of independent variables in the conceptual model) with an estimated medium effect size of 0.15, the target sample size should be at least 113 to achieve a power level of 0.80 at a significance level of 0.05 (Soper 2012). In this study, 316 participants completed the questionnaire (after primary data screening). The descriptive analysis of participants' demographics in Table 1 revealed a variety of profiles in terms of gender (39% male, 61% female), education level, and education major. The majority of participants in the study were younger adults (aged 18–24), representing 76% of the total participants. However, this was expected, as the survey was undertaken in two universities, where students are considered vital members of the higher education environment.
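To make the power analysis above concrete, the following Python sketch (our illustration; the original study used Soper's online calculator) solves for the smallest sample size at which a multiple-regression F-test with nine predictors and Cohen's f² = 0.15 reaches 80% power at α = 0.05. Conventions for the noncentrality parameter differ slightly between tools, so the result may differ by a case or two from the 113 quoted above.

```python
from scipy import stats

def required_sample_size(n_predictors=9, f2=0.15, alpha=0.05, target_power=0.80):
    """Smallest n for a multiple-regression F-test with Cohen's effect size f^2."""
    n = n_predictors + 2                 # smallest n giving positive error df
    while True:
        df1 = n_predictors               # numerator df = number of predictors
        df2 = n - n_predictors - 1       # denominator (error) df
        f_crit = stats.f.ppf(1 - alpha, df1, df2)
        # Power under the noncentral F distribution; lambda = f^2 * n is one
        # common convention (others use f^2 * (df1 + df2 + 1)).
        power = 1 - stats.ncf.cdf(f_crit, df1, df2, f2 * n)
        if power >= target_power:
            return n, power
        n += 1

if __name__ == "__main__":
    n, power = required_sample_size()
    print(f"required sample size: {n} (power = {power:.3f})")
```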

Measurement scales

The proposed conceptual model includes five reflective factors and four second-order formative constructs, which are risk, competence, trust, and motivation. The repeated indicator approach was used to compute the values of the formative constructs. This method recommends using the same number of items on all the first-order factors in order to guarantee that all first-order factors have the same weight on the second-order factors and to ensure that no weight bias exists (Ringle et al. 2012).

The scales used to measure user habits in SNs were adopted from Fogel and Nehmad (2009). To measure the risk perception dimensions, scales were adapted from Milne et al. (2009), with some modifications to fit the present study context. The scales used to measure the three dimensions of user competence were adopted from Albladi and Weir (2017). Motivation dimension items were adopted from previous literature (Al Omoush et al. 2012; Basak and Calisir 2015; Orchard et al. 2014; Yang and Lin 2014). The scale used to measure users' trust was adopted, with some modification, from the Fogel and Nehmad (2009) and Chiu et al. (2006) studies. Appendix 1 presents a summary of the measurement items.

A scenario-based experiment was chosen as the empirical approach to examining users' susceptibility to SE victimisation. In such scenario-based experiments, a participant is recruited to review a set of scripted information, which can be in the form of text or images, and is then asked to react or respond to this predetermined information (Rungtusanatham et al. 2011). This method is considered suitable and realistic for many social engineering studies (e.g., Algarni et al. 2017; Iuga et al. 2016) due to the ethical concerns associated with conducting real attacks. Our scenario-based experiment includes six images of Facebook posts (four high-risk scenarios and two low-risk scenarios). Each post contains a type of cyber-attack chosen from the most prominent cyber-attacks that occur in social networks (Gao et al. 2011).

In the study model, only the high-risk scenarios (which include phishing, clickjacking with an executable file, malware, and a phishing scam) were used to measure user susceptibility to SE attacks. However, comparing individuals' responses to the high-risk attacks with their responses to the low-risk attacks aims to examine whether users rely on their characteristics when judging the different scenarios rather than on other influencing factors such as visual message triggers (Wang et al. 2012). Participants were asked to indicate their response to these Facebook posts, as if they had encountered them in their real accounts, by rating a number of statements such as "I would click on this button to read the file" using a 5-point Likert scale from 1 "strongly disagree" to 5 "strongly agree". Appendix 2 includes a summary of the scenarios used in this study.
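The paper does not spell out the exact aggregation rule, but a minimal sketch of how such Likert ratings could be turned into a susceptibility score, assuming one item per scenario and a simple mean over the four high-risk scenarios (the column names here are hypothetical), might look like this:

```python
import pandas as pd

# Hypothetical responses: one column per scenario, Likert-coded 1-5
# (1 = "strongly disagree", 5 = "strongly agree" with the risky action).
responses = pd.DataFrame({
    "phishing":      [1, 4, 2],
    "clickjacking":  [2, 5, 1],
    "malware":       [1, 3, 2],
    "phishing_scam": [2, 4, 1],
    "low_risk_1":    [3, 4, 2],
    "low_risk_2":    [2, 5, 3],
})

HIGH_RISK = ["phishing", "clickjacking", "malware", "phishing_scam"]
LOW_RISK = ["low_risk_1", "low_risk_2"]

# Mean agreement with the high-risk posts as the susceptibility proxy;
# the low-risk mean is kept separate for the comparison described above.
responses["susceptibility"] = responses[HIGH_RISK].mean(axis=1)
responses["low_risk_response"] = responses[LOW_RISK].mean(axis=1)
print(responses[["susceptibility", "low_risk_response"]])
```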

Analysis approach

To evaluate the proposed model, partial least squares structural equation modelling (PLS-SEM) was used due to its suitability for complex predictive models that consist of a combination of formative and reflective constructs (Götz et al. 2010), even with some limitations regarding data normality and sample size (Hair et al. 2012). The SmartPLS v3 software package (Ringle et al. 2015) was used to analyse the model and its associated hypotheses.

To evaluate the study model, three different procedures were conducted. First, the PLS algorithm was used to provide standard model estimations such as path coefficients, the coefficient of determination (R² values), effect sizes, and collinearity statistics. Second, a bootstrapping approach was used to test the significance of the structural model relationships. In this approach, the collected data sample is treated as the population, and the algorithm uses resampling with replacement to generate a large number of random bootstrap samples (recommended to be predefined as 5000), each with the same number of cases as the original sample (Henseler et al. 2009). The present study conducted the bootstrapping procedure with 5000 bootstrap samples, two-tailed testing, and a 5% significance level.
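To make the resampling idea concrete, the following deliberately simplified Python sketch bootstraps the standard error of a path coefficient. It stands in a standardised OLS slope for the PLS path coefficient and uses toy data; SmartPLS instead re-estimates the full PLS model on every bootstrap sample, but the t-statistic is formed the same way (original estimate divided by the bootstrap standard error).

```python
import numpy as np

rng = np.random.default_rng(0)

def path_coefficient(X, y):
    """Standardised OLS slopes as a stand-in for PLS path coefficients."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

def bootstrap_t_values(X, y, n_boot=5000):
    original = path_coefficient(X, y)
    n = len(y)
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample cases with replacement
        boots[b] = path_coefficient(X[idx], y[idx])
    return original / boots.std(axis=0, ddof=1)    # t = estimate / bootstrap SE

# Toy data standing in for construct scores (hypothetical, 316 cases).
X = rng.normal(size=(316, 3))
y = 0.4 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(size=316)
print(bootstrap_t_values(X, y, n_boot=1000))
```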

Finally, a blindfolding procedure was used to evaluate the predictive relevance (Q²) of the structural model. In this approach, part of the data points are omitted and treated as missing from the constructs' indicators, and the parameters are estimated using the remaining data points (Hair et al. 2017). These estimates are then used to predict the omitted data points, which are later compared with the real omitted data to compute the Q² value. Blindfolding is considered a sample reuse approach which is applied only to endogenous constructs (Henseler et al. 2009). Endogenous constructs are the variables that are affected by other variables in the study model (Götz et al. 2010), such as user susceptibility, involvement, and trust.

Results

The part of the conceptual model that contains the relations between the measurement items and their associated factors is called the measurement model, while the hypothesised relationships among the different factors are called the structural model (Tabachnick and Fidel 2013). The present study's measurement model, which includes all the constructs along with their indicators' outer loadings, can be found in Appendix 3. The results of the measurement model analysis in Table 2 reveal that Cronbach's alpha and the composite reliability were acceptable for all constructs, as they were above the threshold of 0.70. Additionally, since the average variance extracted (AVE) for all constructs was above the threshold of 0.5 (Hair et al. 2017), the convergent validity of the model's reflective constructs was confirmed.
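For reference, the reliability and validity statistics reported in Table 2 can be computed directly from standardised outer loadings and raw item scores. The sketch below shows the standard formulas; the loadings are hypothetical, purely for illustration:

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability from standardised outer loadings (errors = 1 - loading^2)."""
    loadings = np.asarray(loadings, dtype=float)
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: mean squared standardised loading of a construct's indicators."""
    loadings = np.asarray(loadings, dtype=float)
    return (loadings ** 2).mean()

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical loadings for one reflective construct.
loadings = [0.82, 0.78, 0.74, 0.69]
print(composite_reliability(loadings))        # should exceed 0.70
print(average_variance_extracted(loadings))   # should exceed 0.50
```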

However, in order to assess the model’s predictive ability and to examine the significance of relationships between the model’s constructs, the structural model should be tested. The assessment of the structural model involves the following testing steps.

Assessing collinearity

This step is vital to determine if there are any collinearity issues among the predictors of each endogenous construct. Failing to do so could lead to a biased path coefficient estimation if a critical collinearity issue exists among the construct predictors (Hair et al. 2017 ). Table  3 presents all the endogenous constructs (represented by the columns) which indicate that the variance inflation factor (VIF) values for all predictors of each endogenous construct (represented by the rows) are below the threshold of 5. Thus, no collinearity issues exist in the structural model.
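The VIF values in Table 3 can be reproduced for any set of predictor scores with statsmodels; the construct scores below are randomly generated placeholders, not the study's data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Placeholder latent variable scores for the predictors of one endogenous construct.
rng = np.random.default_rng(1)
scores = pd.DataFrame(rng.normal(size=(316, 3)),
                      columns=["involvement", "motivation", "trust"])

X = sm.add_constant(scores)   # include an intercept, as VIF is based on OLS fits
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=scores.columns,
)
print(vif)   # values below 5 indicate no critical collinearity
```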

Assessing path coefficients (hypotheses testing)

The path coefficients were calculated using the bootstrap re-sampling procedure (Hair et al. 2017). This procedure provides estimates of the direct impact that each construct has on user susceptibility to cyber-attack. The result of the direct effect test in Table 4 shows that trust (t = 5.202, p < 0.01) is the strongest predictor of the user's susceptibility to SE victimisation, followed by the user's involvement (t = 5.002, p < 0.01), cybercrime experience (t = 3.736, p < 0.01), social network experience (t = −3.015, p < 0.01), and percentage of known friends among Facebook connections (t = −2.735, p < 0.01). The direct effects of user competence to deal with threats (t = −2.474, p < 0.05) and the number of connections (t = −2.428, p < 0.05) were relatively small, yet still statistically significant in explaining the target variable. However, the impact of the number of connections on users' susceptibility was negative, which contradicts hypothesis Ha2, which posits a positive relationship.

Most importantly, the results indicated that perceived risk and motivation have no direct effect on the user's vulnerability (p > 0.05). This could be because both factors are second-order formative variables whose first-order factors affect the user's susceptibility in different directions. As can be seen from the result of the regression analysis in Table 5, perceived risk is formed from perceived severity of threat, which has a significant negative effect on the user's susceptibility, and perceived likelihood of threat, which has a positive impact on the user's susceptibility. Therefore, their joint effect will logically not be significant, because the opposite effects of the two dimensions of perceived risk cancel each other out. Thus, Ha5 can be considered partially supported.

The situation with motivation is similar, as it is also a second-order formative factor whose first-order factors (hedonic and social) have opposite effects on users' susceptibility. Table 5 presents the result of the regression analysis of the first-order factors of the motivation construct. The result provides evidence that hedonic motivation is negatively related to the user's susceptibility while social motivation is positively associated with it. However, when the two dimensions of motivation were aggregated to create one index measuring the total effect of user motivation (both direct and indirect), as illustrated in Table 6, the model revealed a significant predictor of users' susceptibility (t = 3.854, p < 0.01). Thus, the direct effect of motivation on user susceptibility is statistically rejected, while the total effect of motivation on users' susceptibility is statistically significant and is one of the strongest predictors in the study model.

Evaluating the total effect of a particular construct on user susceptibility is considered useful, especially if the goal of the study is to explore the impact of the relationships between different drivers in predicting one latent construct (Hair et al. 2017). The total impact includes both the construct's direct effect and its indirect effects through mediating constructs in the model. The total effect analysis in Table 6 revealed that most of the constructs have a significant overall impact on user susceptibility (p < 0.05). Although the number of connections has been shown to have a significant negative direct effect on user susceptibility, its total effect, when considering all the direct and indirect relationships, is very low and not significant (t = −0.837, p > 0.05). Furthermore, both the direct and total effects of perceived risk were found to be not substantial (t = −1.559, p > 0.05).

The rest of the hypotheses (group b) examine the relationships between the independent constructs of the study model, which were tested according to estimates of the path coefficients between the related constructs. Table 7 shows that all nine hypotheses are statistically significant (p < 0.05). It also shows that the most substantial relationship was between social network experience and the percentage of known friends among Facebook connections (t = 6.091, p < 0.01), followed by the favourable impact that motivation and level of involvement have on increasing users' trust (t = 4.821 and t = 3.914, respectively).

Furthermore, motivation (t = 3.640, p < 0.01) and the number of connections (t = 3.106, p < 0.01) are two factors found to increase users' level of involvement in the network. Level of involvement also plays a notable role in raising people's previous experience with cybercrime (t = 2.532, p < 0.05), while past cybercrime experience significantly increases people's perceived risk associated with using Facebook (t = 2.968, p < 0.01). Nevertheless, the contribution of perceived risk to raising the user's competence to deal with online threats was not very strong, although statistically significant (t = 2.241, p < 0.05).

Finally, there was no significant difference with regard to the user characteristics that affect people’s susceptibility or resistance to the high-risk scenarios and low-risk scenarios. This means that participants rely on their perceptions and experience to judge those scenarios.

The coefficient of determination (R²)

The coefficient of determination is a traditional criterion used to evaluate the structural model's predictive power. In this study, it represents the joint effect of all the model variables in explaining the variance in people's susceptibility to SE attacks. According to Hair et al. (2017), an acceptable R² value is hard to determine, as it may vary depending on the study discipline and the model complexity. Cohen (1988) suggested a rule of thumb for assessing R² values in models with several independent variables: 0.26, 0.13, and 0.02 are considered substantial, moderate, and weak respectively. Table 8 illustrates the coefficient of determination for the endogenous variables in the study model. The R² values indicate that the nine prediction variables together have substantial predictive power and explain 33.5% of the variation in users' susceptibility to SE attacks. Furthermore, the combined effect of users' involvement and motivation on users' trust is considered moderate, as it explains 13.2% of the variation in users' trust.

Predictive relevance (Q²)

To measure the model's predictive capabilities, a blindfolding procedure was used to obtain the model's predictive relevance (Q² value). Stone-Geisser's Q² value, which assesses how well a model predicts the data of omitted cases, should be higher than zero in order to indicate that the path model has cross-validated predictive relevance (Hair et al. 2017). Table 8 presents the results of the predictive relevance test and shows that all of the endogenous constructs in the research model have predictive relevance greater than zero, which means that the model has appropriate predictive ability.
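As a rough illustration of the Q² idea, the sketch below computes Q² = 1 − SSE/SSO over systematically omitted data points, comparing model predictions against the trivial mean-replacement benchmark. Real blindfolding in SmartPLS re-estimates the PLS model with the indicator values actually removed, so this is only a simplified analogue on toy data:

```python
import numpy as np

def q_squared(actual, predicted, omission_distance=7):
    """Simplified blindfolding-style Q^2 = 1 - SSE / SSO over omitted points."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    sse, sso = 0.0, 0.0
    for start in range(omission_distance):
        omitted = np.arange(start, len(actual), omission_distance)
        kept = np.setdiff1d(np.arange(len(actual)), omitted)
        trivial = actual[kept].mean()        # benchmark: mean of the remaining points
        sse += ((actual[omitted] - predicted[omitted]) ** 2).sum()
        sso += ((actual[omitted] - trivial) ** 2).sum()
    return 1 - sse / sso

# Toy data: predictions that track the outcome give Q^2 > 0.
rng = np.random.default_rng(2)
y = rng.normal(size=316)
y_hat = 0.6 * y + rng.normal(scale=0.5, size=316)
print(q_squared(y, y_hat))
```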

Hair et al. (2017) and Henseler et al. (2014) recommend using SRMR and RMS theta as indices to test a model's goodness of fit. SRMR represents the discrepancy between the observed correlations and the model-implied correlations, and its value should be less than 0.08 (Hu and Bentler 1998), while an RMS theta value of less than 0.12 represents an appropriate model fit (Hair et al. 2017; Henseler et al. 2014). The Normed Fit Index (NFI) is an incremental model fit criterion which compares the structural model with a null model of entirely uncorrelated variables, whereby an NFI value of more than 0.90 represents good model fit (Bentler and Bonett 1980). Additionally, Dijkstra and Henseler (2015) recommend using the squared Euclidean distance (d_LS) and the geodesic distance (d_G) as measures to assess model fit by comparing the distance between the sample covariance matrix and the model-implied covariance matrix. Comparing the original values of d_LS and d_G with their confidence intervals indicates a good model fit if the values are less than the upper bound of the 95% confidence interval.
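For reference, the SRMR mentioned above is conventionally defined (following Hu and Bentler 1998) as the root mean square of the standardised residuals between the observed and model-implied covariances, where s_jk is the observed covariance, the hatted sigma is the model-implied covariance, s_j and s_k are the observed standard deviations, and p is the number of indicators:

```latex
\mathrm{SRMR}
  = \sqrt{\frac{\sum_{j=1}^{p}\sum_{k=1}^{j}
      \left( \frac{s_{jk}-\hat{\sigma}_{jk}}{s_{j}\, s_{k}} \right)^{2}}
      {p(p+1)/2}}
```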

Table 9 illustrates the results of the model fit indices obtained from the SmartPLS report. The empirical test of the structural model revealed a good model fit, as the SRMR value was 0.05, the RMS theta value was 0.099, the NFI was 0.858 (which rounds to 0.9), and the values of d_LS and d_G were less than the upper bounds of their confidence intervals. Thus, the results of all the considered model fit indices reflect a satisfactory model fit, considering the complexity of the present study model.

Demographic variables effect

One of the present study's goals is to examine whether specific user demographics (age, gender, education, and major) are associated with users' susceptibility to social engineering attacks. To explore this relationship, regression analysis, as well as variance tests such as the t-test and ANOVA, were conducted. Table 10 summarises the results of these tests.

Gender was found to affect the user's susceptibility to SE victimisation (Std. beta = 0.133, p < 0.05), and the t-test indicates that women are more vulnerable to victimisation (t(271.95) = 2.415, p < 0.05). The user's major also has a significant effect on the user's vulnerability (Std. beta = 0.112, p < 0.05). When comparing the groups' behaviour via an ANOVA test, users specialising in technical majors such as computing and engineering were indicated as less susceptible to social engineering attacks than those specialised in humanities and business (F(6) = 5.164, p < 0.001). Furthermore, the results show that age has no significant impact on user vulnerability (Std. beta = 0.096, p > 0.05). However, when comparing the means of the age groups, younger adults (M = 1.97, SD = 0.99) appear less susceptible than older adults (M = 2.56, SD = 0.92). Moreover, educational level has no significant impact on users' vulnerability, as revealed by the result of the regression analysis (Std. beta = 0.068, p > 0.05).
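The demographic comparisons above follow standard procedures; the sketch below shows how the same tests could be run in Python on a participant table (the columns and values are hypothetical placeholders, not the study's data). Welch's t-test is used, consistent with the fractional degrees of freedom (271.95) reported above.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-participant data: susceptibility score plus demographics.
df = pd.DataFrame({
    "susceptibility": [2.1, 3.4, 1.8, 2.9, 2.5, 3.1],
    "gender":         ["F", "M", "F", "F", "M", "M"],
    "major":          ["business", "computing", "humanities",
                       "business", "computing", "engineering"],
})

# Welch's independent-samples t-test for gender differences.
women = df.loc[df.gender == "F", "susceptibility"]
men = df.loc[df.gender == "M", "susceptibility"]
t, p = stats.ttest_ind(women, men, equal_var=False)

# One-way ANOVA across education majors.
groups = [g["susceptibility"].values for _, g in df.groupby("major")]
f, p_anova = stats.f_oneway(*groups)

print(f"gender t-test: t = {t:.2f}, p = {p:.3f}")
print(f"major ANOVA:   F = {f:.2f}, p = {p_anova:.3f}")
```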

Discussion

In the present study, Facebook users' involvement level is revealed to have a strong, significant effect on their susceptibility to SE victimisation. This finding confirms the results of previous research (Saridakis et al. 2016; Vishwanath 2015). Since most social network users are highly involved in online networks, it is hard to generalise that all involved people are vulnerable. However, high involvement affects other critical factors in the present model, i.e., experience with cybercrime and trust, which in turn have powerful impacts on users' susceptibility to victimisation.

The number of friends was found to have a direct negative impact on people's vulnerability, contrary to what the present study hypothesised: the relationship had been assumed to be positive, in line with previous claims that a large network size makes individuals more vulnerable to SN risks (Buglass et al. 2016; Vishwanath 2015). Facebook users seem to accept friend requests from strangers to expand their friendship network. Around 48% of the participants in this study stated that they personally know less than 10% of their Facebook network. Connecting with strangers on the network has previously been seen as the first step in falling prey to social engineering attacks (Vishwanath 2015), while also being regarded as a measure of risky behaviour on social networks (Alqarni et al. 2016). A high percentage of strangers with whom the user is connected can be seen as a determinant of the user's low level of suspicion.

Furthermore, social network experience has been found to significantly predict people’s susceptibility to social engineering in the present study. People’s ability to detect social network deception has been said to depend on information communication technology literacy (Tsikerdekis and Zeadally 2014 ). Thus, experienced users are more familiar with cyber-attacks such as phishing and clickjacking, and easily detect them. This is further supported by Algarni et al. ( 2017 ), who pointed out that the less time that has elapsed since the user joined Facebook, the more susceptible he or she is to social engineering. Yet, their research treated user experience with social networks as a demographic variable and did not examine whether this factor might affect other aspects of user behaviour. For instance, results from the present study reveal that users who are considered more experienced in social networks have fewer connections with strangers (t = 6.091, p  < 0.01), which further explains why they are less susceptible than novice users.

Perception of risk has no direct influence on people’s vulnerability, but the present study found perceived risk to significantly increase people’s level of competence to deal with social engineering attacks. This also accords with the Van Schaik et al. ( 2018 ) study, which found that Facebook users with high risk perception adopt precautionary behaviours such as restrictive privacy and security-related settings. Most importantly, perceived cybercrime risk has also been indicated as influencing people to take precautions and avoid using online social networks (Riek et al. 2016 ).

Measuring user competence levels would contribute to our understanding of the reasons behind user weakness in detecting online security or privacy threats. In the present study, the measure of an individual’s competence level in dealing with cybercrime was based upon three dimensions: security awareness, privacy awareness, and self-efficacy. The empirical results show that this competence measure can significantly predict the individual’s ability to detect SE attacks on Facebook. Individuals’ perception of their self-ability to control the content shared on social network websites has been previously considered a predictor of their ability to detect social network threats (Saridakis et al. 2016 ), as individuals who have this confidence in their self-ability as well as in their security knowledge seem to be competent in dealing with cyber threats (Flores et al. 2015 ; Wright and Marett 2010 ).

Furthermore, our results accord with the finding of Riek et al. (2016) that previous cybercrime experience has a positive and substantial impact on users' perceived risk. Yet, this higher risk perception did not decrease users' vulnerability in the present study. This could be because experience and knowledge of the existence of threats are not necessarily reflected in people's behaviour. For example, individuals who had previously undertaken security awareness training still underestimated the importance of some security practices, such as frequently changing passwords (Kim 2013).

The present research found that people's trust in the social network's provider and its members was the strongest determinant of their vulnerability to social engineering attacks (t = 5.202, p < 0.01). Previous email phishing research (e.g., Alseadoon et al. 2015; Workman 2008) has also stressed that people's disposition to trust has a significant impact on their weakness in detecting phishing emails. Yet, little was known about the impact of trust in providers and other members of social networks on people's vulnerability to cyber-attacks. These two types of trust have been found to decrease users' perception of the risks associated with disclosing private information on SNs (Cheung et al. 2015). Similarly, trusting social network providers to protect members' private information has made Facebook users (especially females) more willing to share their photos on the network (Beldad and Hegner 2017). These findings draw attention to the huge responsibility that social network providers have to protect their users. In parallel, users should be encouraged to be cautious about their privacy and security.

People's motivation to use social networks has no direct influence on their vulnerability to SE victimisation, as evidenced by the results of this study. Yet, this motivation significantly affects essential aspects of user behaviour and perception such as user involvement, trust, and previous experience with cybercrime, which in turn substantially predict user vulnerability. This result accords with the claim that people's motivation for using SNs increases their disclosure of private information (Beldad and Hegner 2017; Chang and Heo 2014).

Theoretical and practical implications

Most of the measures proposed in the literature to mitigate SE threats (e.g., Fu et al. 2018; Gupta et al. 2018) focus on technical solutions. Despite the importance and effectiveness of these technical solutions, social engineers exploit human vulnerabilities; hence we require solutions that understand and guard against human weaknesses. Given the limited number of studies that investigate the impact of human characteristics on predicting vulnerability to social network security threats, the present study can be considered useful, with critical practical implications that are acknowledged in this section.

The developed conceptual model shows an acceptable ability to predict people's vulnerability to social engineering in social networks, as revealed by the results of this study. The proposed model could be used by information security researchers (or researchers from other fields) to predict responses to different security-oriented risks. For instance, decision-making research could benefit from the proposed framework and model, as they indicate new perspectives on user-related characteristics that could affect decision-making abilities in times of risk.

Protecting users' personal information is an essential element in promoting sustainable use of social networks (Kayes and Iamnitchi 2017). SN providers should offer better privacy rules and policies and develop more effective security and privacy settings. A live chat facility for reporting threats should be an essential feature of SN channels, in order to reduce the number of potential victims of specific threatening posts or accounts. Providing security and privacy-related tools could also help increase users' satisfaction with social networks.

Despite the importance of online awareness campaigns and the rich training programs that organisations adopt, problems persist because humans are still the weakest link (Aldawood and Skinner 2018). Changing beliefs and behaviour is a complex process that needs more research. However, the present study offers clear insight into the specific individual characteristics that make people more vulnerable to cybercrime. Using these characteristics to design training programs is a sensible approach to tuning security awareness messages. Similarly, our results will be helpful in conducting more successful training programs that incorporate the identified essential attributes from the proposed perspectives as educational elements to increase people's awareness. While these identified factors might reflect a user's weak points, they could also be targeted by enforcing behavioural security strategies in order to mitigate social engineering threats.

The developed conceptual model could be used in the assessment process for an organisation’s employees, especially those working in sensitive positions. Also, the model and associated scales could be of help in employment evaluation tests, particularly in security-critical institutions, since the proposed model may predict those weak aspects of an individual that could increase his/her vulnerability to social engineering.

A semi-automated security advisory system

One practical use of the proposed prediction model can be demonstrated by integrating it into a semi-automated advisory system (Fig. 2). Based on the idea of user profiling, this research has established a practical solution that can semi-automatically predict users' vulnerability to various types of social engineering attacks.

Figure 2: A semi-automated advisory system.

The designed semi-automated advisory system classifies social network users according to their vulnerability type and level after they complete an assessment survey. The local administrator can determine the threshold and the priority for each type of attack based on their knowledge. The network provider can then send awareness posts to each segment that target that particular group's needs. Assessing social network users and segmenting them based on their behaviour and vulnerabilities is essential in order to design relevant advice that meets users' needs. Since social engineering techniques are rapidly changing and improving, the attack scenarios used in the assessment step should be updated from time to time. Registered users of the semi-automated advisory system also need to be reassessed regularly in order to detect any changes in their vulnerability.
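A minimal sketch of the segmentation step described above is given below. It assumes the assessment survey yields a numeric vulnerability score per attack type for each user and that the local administrator supplies a threshold and priority per attack type; the attack-type names, threshold values, and scoring scale are illustrative assumptions rather than parts of the published system.

from dataclasses import dataclass

@dataclass
class AttackPolicy:
    threshold: float   # score above which a user is considered vulnerable
    priority: int      # administrator-defined priority (1 = highest)

# Administrator-defined policies per attack type (illustrative values only).
policies = {
    "phishing": AttackPolicy(threshold=0.6, priority=1),
    "clickjacking": AttackPolicy(threshold=0.7, priority=2),
    "scam_offer": AttackPolicy(threshold=0.5, priority=3),
}

def segment_users(assessments):
    """Group users into per-attack-type segments for targeted awareness posts.

    `assessments` maps user id -> {attack type: vulnerability score in [0, 1]}.
    """
    segments = {attack: [] for attack in policies}
    for user, scores in assessments.items():
        for attack, policy in policies.items():
            if scores.get(attack, 0.0) >= policy.threshold:
                segments[attack].append(user)
    # Serve the highest-priority segments first when scheduling advice.
    return dict(sorted(segments.items(), key=lambda kv: policies[kv[0]].priority))

# Example: two survey respondents with hypothetical scores.
users = {
    "u1": {"phishing": 0.8, "clickjacking": 0.3, "scam_offer": 0.4},
    "u2": {"phishing": 0.5, "clickjacking": 0.9, "scam_offer": 0.7},
}
print(segment_users(users))  # {'phishing': ['u1'], 'clickjacking': ['u2'], 'scam_offer': ['u2']}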

Significant outcomes were noted, with practical implications for how social network users could be assessed and segmented based on their characteristics, behaviour, and vulnerabilities, in turn facilitating their protection from such threats by targeting them with relevant advice and education that meets their needs. The system is considered cost- and time-effective, as combining individuals' needs with the administrator's knowledge of existing threats avoids the overhead and inconvenience of sending blanket advice to all users.

The study develops a conceptual model to test the factors that influence social network users' judgement of social engineering-based attacks, in order to identify the weakest points of users' detection behaviour and thereby help predict vulnerable individuals. Proposing such a novel conceptual model helps bridge the gap between theory and practice by providing a better understanding of how to predict vulnerable users. The findings of this research indicate that most of the considered user characteristics influence users' vulnerability either directly or indirectly. This research also contributes to the existing knowledge of social engineering in social networks, particularly the research area of predicting user behaviour toward security threats, by proposing a new influencing perspective, the socio-emotional, as a dimension affecting user vulnerability that has not been satisfactorily reported in the literature before. This new perspective could also be incorporated into investigations of user behaviour in several other contexts.

Using a scenario-based experiment instead of conducting a real attack study is one of the main limitations of the present study but was considered unavoidable due to ethical considerations. However, the selected attack scenarios were designed carefully to match recent and real social engineering-based attacks on Facebook. Additionally, the present study was undertaken in full consciousness of the fact that when measuring people’s previous experience with cybercrime, some participants might be unaware of their previous victimisation and so might respond inaccurately. In order to mitigate this limitation, different types of SE attacks have been considered in the scale that measures previous experience with cybercrime, such as phishing, identity theft, harassment, and fraud.

Furthermore, this research has focused only on academic communities, as all the participants in this study were students or academic and administrative staff of two universities. This could be seen as a limitation, as the results may not reflect the behaviour of the general public. The university context is important, however, and cyber-criminals have recently targeted universities due to their importance in providing online resources to their students and community (Öğütçü et al. 2016). Additionally, while several steps have been taken to ensure the inclusion of all influential factors in the model, it is not feasible to guarantee that every possibly influential attribute is included in this study. Further efforts are needed in this sphere, as predicting human behaviour is a complex task.

The conceptual study model could be used to test user vulnerability to different types of privacy or security hazards associated with the use of social networks: for instance, by measuring users' response to the risks of loose privacy restrictions or of sharing private information on the network. Furthermore, investigating whether social network users have different levels of vulnerability to privacy-related and security-related risks is another area of potential future research. The proposed model's prediction efficiency could be compared across different types of security and privacy threats; this comparison would offer a reasonable future direction for researchers to consider. Future research could focus on improving the proposed model by giving perceived trust greater attention, as this factor was the strongest behaviour predictor in the present model. The novel conceptualisation of users' competence in the conceptual model has proved to have a profound influence on their behaviour toward social engineering victimisation, a finding which can offer additional new insight for future investigations.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Al Omoush KS, Yaseen SG, Atwah Alma’Aitah M (2012) The impact of Arab cultural values on online social networking: the case of Facebook. Comput Hum Behav 28(6):2387–2399. https://doi.org/10.1016/j.chb.2012.07.010

Albladi SM, Weir GRS (2017) Competence measure in social networks. In: 2017 International Carnahan Conference on Security Technology (ICCST). IEEE, pp 1–6. https://doi.org/10.1109/CCST.2017.8167845

Albladi SM, Weir GRS (2018) User characteristics that influence judgment of social engineering attacks in social networks. Hum-Cent Comput Info Sci 8(1):5. https://doi.org/10.1186/s13673-018-0128-7

Aldawood H, Skinner G (2018) Educating and raising awareness on cyber security social engineering: a literature review. In: 2018 IEEE International Conference on Teaching, Assessment, and Learning for Engineering. IEEE, pp 62–68. https://doi.org/10.1109/TALE.2018.8615162

Algarni A, Xu Y, Chan T (2017) An empirical study on the susceptibility to social engineering in social networking sites: the case of Facebook. Eur J Inf Syst 26(6):661–687. https://doi.org/10.1057/s41303-017-0057-y

Alqarni Z, Algarni A, Xu Y (2016) Toward predicting susceptibility to phishing victimization on Facebook. In: 2016 IEEE International Conference on Services Computing (SCC). IEEE, pp 419–426. https://doi.org/10.1109/SCC.2016.61

Alseadoon IMA (2014) The impact of users’ characteristics on their ability to detect phishing emails. Doctoral Thesis. Queensland University of Technology. https://eprints.qut.edu.au/72873/ .

Alseadoon I, Othman MFI, Chan T (2015) What is the influence of users’ characteristics on their ability to detect phishing emails? In: Sulaiman HA, Othman MA, Othman MFI, Rahim YA, Pee NC (eds) Advanced computer and communication engineering technology, vol 315. Springer International Publishing, Cham, pp 949–962. https://doi.org/10.1007/978-3-319-07674-4_89

Baabdullah AM (2018) Consumer adoption of Mobile Social Network Games (M-SNGs) in Saudi Arabia: the role of social influence, hedonic motivation and trust. Technol Soc 53:91–102. https://doi.org/10.1016/j.techsoc.2018.01.004

Basak E, Calisir F (2015) An empirical study on factors affecting continuance intention of using Facebook. Comput Hum Behav 48:181–189. https://doi.org/10.1016/j.chb.2015.01.055

Beldad AD, Hegner SM (2017) More photos from me to thee: factors influencing the intention to continue sharing personal photos on an Online Social Networking (OSN) site among young adults in the Netherlands. Int J Hum–Comput Interact 33(5):410–422. https://doi.org/10.1080/10447318.2016.1254890

Bentler PM, Bonett DG (1980) Significance tests and goodness of fit in the analysis of covariance structures. Psychol Bull 88(3):588–606. https://doi.org/10.1037//0033-2909.88.3.588

Bohme R, Moore T (2012) How do consumers react to cybercrime? In: 2012 eCrime Researchers Summit. IEEE, pp 1–12. https://doi.org/10.1109/eCrime.2012.6489519

Buglass SL, Binder JF, Betts LR, Underwood JDM (2016) When ‘friends’ collide: social heterogeneity and user vulnerability on social network sites. Comput Hum Behav 54:62–72. https://doi.org/10.1016/j.chb.2015.07.039

Cao B, Lin W-Y (2015) How do victims react to cyberbullying on social networking sites? The influence of previous cyberbullying victimization experiences. Comput Hum Behav 52:458–465. https://doi.org/10.1016/j.chb.2015.06.009

Chang C-W, Heo J (2014) Visiting theories that predict college students’ self-disclosure on Facebook. Comput Hum Behav 30:79–86. https://doi.org/10.1016/j.chb.2013.07.059

Cheung C, Lee ZWY, Chan TKH (2015) Self-disclosure in social networking sites: the role of perceived cost, perceived benefits and social influence. Internet Res 25(2):279–299. https://doi.org/10.1108/IntR-09-2013-0192

Chiu C-M, Hsu M-H, Wang ETG (2006) Understanding knowledge sharing in virtual communities: an integration of social capital and social cognitive theories. Decis Support Syst 42(3):1872–1888. https://doi.org/10.1016/j.dss.2006.04.001

Chiu C-M, Wang ETG, Fang Y-H, Huang H-Y (2014) Understanding customers’ repeat purchase intentions in B2C e-commerce: the roles of utilitarian value, hedonic value and perceived risk. Inf Syst J 24(1):85–114. https://doi.org/10.1111/j.1365-2575.2012.00407.x

Cohen J (1988) Statistical power analysis for the behavioral sciences, 2nd edn

Dijkstra TK, Henseler J (2015) Consistent and asymptotically normal PLS estimators for linear structural equations. Comput Stat Data Anal 81:10–23. https://doi.org/10.1016/j.csda.2014.07.008

Flores WR, Holm H, Nohlberg M, Ekstedt M (2015) Investigating personal determinants of phishing and the effect of national culture. Inf Comput Secur 23(2):178–199. https://doi.org/10.1108/ICS-05-2014-0029

Flores WR, Holm H, Svensson G, Ericsson G (2014) Using phishing experiments and scenario-based surveys to understand security behaviours in practice. Inf Manag Comput Secur 22(4):393–406. https://doi.org/10.1108/IMCS-11-2013-0083

Fogel J, Nehmad E (2009) Internet social network communities: risk taking, trust, and privacy concerns. Comput Hum Behav 25(1):153–160. https://doi.org/10.1016/j.chb.2008.08.006

Fu Q, Feng B, Guo D, Li Q (2018) Combating the evolving spammers in online social networks. Comput Secur 72:60–73. https://doi.org/10.1016/j.cose.2017.08.014

Gao H, Hu J, Huang T, Wang J, Chen Y (2011) Security issues in online social networks. IEEE Internet Comput 15(4):56–63. https://doi.org/10.1109/MIC.2011.50

Götz O, Liehr-Gobbers K, Krafft M (2010) Evaluation of structural equation models using the partial least squares (PLS) approach. In: Esposito Vinzi V, Chin W, Henseler J, Wang H (eds) Handbook of partial least squares. Springer Berlin Heidelberg, pp 691–711. https://doi.org/10.1007/978-3-540-32827-8_30

Gupta BB, Arachchilage NAG, Psannis KE (2018) Defending against phishing attacks: taxonomy of methods, current issues and future directions. Telecommun Syst 67(2):247–267. https://doi.org/10.1007/s11235-017-0334-z

Hair JF, Hult GTM, Ringle CM, Sarstedt M (2017) A primer on partial least squares structural equation modeling (PLS-SEM), 2nd edn. SAGE Publications. https://search.lib.byu.edu/byu/record/lee.6690785 .

Hair JF, Sarstedt M, Ringle CM, Mena JA (2012) An assessment of the use of partial least squares structural equation modeling in marketing research. J Acad Mark Sci 40(3):414–433. https://doi.org/10.1007/s11747-011-0261-6

Halevi, T., Lewis, J., & Memon, N. (2013). Phishing, personality traits and Facebook. ArXiv Preprint. Retrieved from http://arxiv.org/abs/1301.7643

Henseler J, Dijkstra TK, Sarstedt M, Ringle CM, Diamantopoulos A, Straub DW et al (2014) Common beliefs and reality about PLS. Organ Res Methods 17(2):182–209. https://doi.org/10.1177/1094428114526928

Henseler J, Ringle CM, Sinkovics RR (2009) The use of partial least squares path modeling in international marketing. Adv Int Mark 20(1):277–319. https://doi.org/10.1108/S1474-7979(2009)0000020014

Hu L, Bentler PM (1998) Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Methods 3(4):424–453. https://doi.org/10.1037/1082-989X.3.4.424

Iuga C, Nurse JRC, Erola A (2016) Baiting the hook: factors impacting susceptibility to phishing attacks. Hum-Cent Comput Info Sci 6(1):8. https://doi.org/10.1186/s13673-016-0065-2

Joinson AN (2008) Looking at, looking up or keeping up with people? Motives and uses of Facebook. In: Proceeding of the twenty-sixth annual CHI conference on human factors in computing systems. ACM Press, New York, pp 1027–1036. https://doi.org/10.1145/1357054.1357213

Kayes I, Iamnitchi A (2017) Privacy and security in online social networks: a survey. Online Soc Netw Media 3–4:1–21. https://doi.org/10.1016/j.osnem.2017.09.001

Kim EB (2013) Information security awareness status of business college: undergraduate students. Inf Secur J 22(4):171–179. https://doi.org/10.1080/19393555.2013.828803

Kim YH, Kim DJ, Wachter K (2013) A study of mobile user engagement (MoEN): engagement motivations, perceived value, satisfaction, and continued engagement intention. Decis Support Syst 56(1):361–370. https://doi.org/10.1016/j.dss.2013.07.002

Krombholz K, Hobel H, Huber M, Weippl E (2015) Advanced social engineering attacks. J Inf Secur Appl 22:113–122. https://doi.org/10.1016/j.jisa.2014.09.005

Madden M, Lenhart A, Cortesi S, Gasser U, Duggan M, Smith A, Beaton M (2013) Teens, social media, and privacy. Pew Research Center Retrieved from http://www.pewinternet.org/2013/05/21/teens-social-media-and-privacy/

Mahuteau S, Zhu R (2016) Crime victimisation and subjective well-being: panel evidence from Australia. Health Econ 25(11):1448–1463. https://doi.org/10.1002/hec.3230

Milne GR, Labrecque LI, Cromer C (2009) Toward an understanding of the online consumer’s risky behavior and protection practices. J Consum Aff 43(3):449–473. https://doi.org/10.1111/j.1745-6606.2009.01148.x

Mitnick KD, Simon WL (2003) The art of deception: controlling the human element in security. Wiley. https://books.google.com.sa/books?hl=ar&lr=&id=rmvDDwAAQBAJ&oi=fnd&pg=PR7&dq=Mitnick+KD,+Simon+WL+(2003)+The+art+of+deception:+controlling+the+human+1217+element+in+security.+Wiley&ots=_eyXWB11Wd&sig=9QEMsNUp8X2oiGmAnh7S800L160&redir_esc=y#v=onepage&q&f=false .

Munro MC, Huff SL, Marcolin BL, Compeau DR (1997) Understanding and measuring user competence. Inf Manag 33(1):45–57. https://doi.org/10.1016/S0378-7206(97)00035-9

Öğütçü G, Testik ÖM, Chouseinoglou O (2016) Analysis of personal information security behavior and awareness. Comput Secur 56:83–93. https://doi.org/10.1016/j.cose.2015.10.002

Orchard LJ, Fullwood C, Galbraith N, Morris N (2014) Individual differences as predictors of social networking. J Comput-Mediat Commun 19(3):388–402. https://doi.org/10.1111/jcc4.12068

Proofpoint. (2018). The human factor 2018 report. Retrieved from https://www.proofpoint.com/sites/default/files/pfpt-us-wp-human-factor-report-2018-180425.pdf

Rae JR, Lonborg SD (2015) Do motivations for using Facebook moderate the association between Facebook use and psychological well-being? Front Psychol 6:771. https://doi.org/10.3389/fpsyg.2015.00771

Riek M, Bohme R, Moore T (2016) Measuring the influence of perceived cybercrime risk on online service avoidance. IEEE Trans Dependable Secure Comput 13(2):261–273. https://doi.org/10.1109/TDSC.2015.2410795

Ringle CM, Sarstedt M, Straub D (2012) A critical look at the use of PLS-SEM in MIS quarterly. MIS Q 36(1) Retrieved from https://ssrn.com/abstract=2176426

Ringle CM, Wende S, Becker J-M (2015) SmartPLS 3. SmartPLS, Bönningstedt Retrieved from http://www.smartpls.com

Ross C, Orr ES, Sisic M, Arseneault JM, Simmering MG, Orr RR (2009) Personality and motivations associated with Facebook use. Comput Hum Behav 25(2):578–586. https://doi.org/10.1016/j.chb.2008.12.024

Rungtusanatham M, Wallin C, Eckerd S (2011) The vignette in a scenario-based role-playing experiment. J Supply Chain Manag 47(3):9–16. https://doi.org/10.1111/j.1745-493X.2011.03232.x

Saridakis G, Benson V, Ezingeard J-N, Tennakoon H (2016) Individual information security, user behaviour and cyber victimisation: an empirical study of social networking users. Technol Forecast Soc Chang 102:320–330. https://doi.org/10.1016/j.techfore.2015.08.012

Sheng S, Holbrook M, Kumaraguru P, Cranor LF, Downs J (2010) Who falls for phish? In: Proceedings of the 28th international conference on human factors in computing systems - CHI ‘10. ACM Press, New York, pp 373–382. https://doi.org/10.1145/1753326.1753383

Sherchan W, Nepal S, Paris C (2013) A survey of trust in social networks. ACM Comput Surv 45(4):1–33. https://doi.org/10.1145/2501654.2501661

Soper, D. (2012). A-priori sample size calculator. Retrieved from https://www.danielsoper.com/statcalc/calculator.aspx?id=1

Tabachnick BG, Fidel LS (2013) Using multivariate statistics, 6th edn. Pearson, Boston

Tsikerdekis M, Zeadally S (2014) Online deception in social media. Commun ACM 57(9):72–80. https://doi.org/10.1145/2629612

Van Schaik P, Jansen J, Onibokun J, Camp J, Kusev P (2018) Security and privacy in online social networking: risk perceptions and precautionary behaviour. Comput Hum Behav 78:283–297. https://doi.org/10.1016/j.chb.2017.10.007

Vishwanath A (2015) Habitual Facebook use and its impact on getting deceived on social media. J Comput-Mediat Commun 20(1):83–98. https://doi.org/10.1111/jcc4.12100

Vishwanath A, Harrison B, Ng YJ (2016) Suspicion, cognition, and automaticity model of phishing susceptibility. Commun Res. https://doi.org/10.1177/0093650215627483

Wang J, Herath T, Chen R, Vishwanath A, Rao HR (2012) Research article phishing susceptibility: an investigation into the processing of a targeted spear phishing email. IEEE Trans Prof Commun 55(4):345–362. https://doi.org/10.1109/TPC.2012.2208392

Wang J, Li Y, Rao HR (2017) Coping responses in phishing detection: an investigation of antecedents and consequences. Inf Syst Res 28(2):378–396. https://doi.org/10.1287/isre.2016.0680

Workman M (2007) Gaining access with social engineering: an empirical study of the threat. Inf Syst Secur 16(6):315–331. https://doi.org/10.1080/10658980701788165

Workman M (2008) A test of interventions for security threats from social engineering. Inf Manag Comput Secur 16(5):463–483. https://doi.org/10.1108/09685220810920549

Wright RT, Marett K (2010) The influence of experiential and dispositional factors in phishing: an empirical investigation of the deceived. J Manag Inf Syst 27(1):273–303. https://doi.org/10.2753/MIS0742-1222270111

Yang H-L, Lin C-L (2014) Why do people stick to Facebook web site? A value theory-based view. Inf Technol People 27(1):21–37. https://doi.org/10.1108/ITP-11-2012-0130

Acknowledgements

We are sincerely grateful to the many individuals who voluntarily participated in this research.

This work is supported by the University of Jeddah, Kingdom of Saudi Arabia as part of the first author’s research conducted at the University of Strathclyde in Glasgow, UK.

Author information

Authors and Affiliations

College of Computer Science and Engineering, University of Jeddah, Jeddah, Kingdom of Saudi Arabia

Samar Muslah Albladi

Department of Computer and Information Sciences, University of Strathclyde, Glasgow, UK

George R. S. Weir

Contributions

SMA conducted the study, analysed the collected data, and drafted the manuscript. GRSW participated in drafting the manuscript. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Samar Muslah Albladi .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Figure 3: Measurement model.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Albladi, S.M., Weir, G.R.S. Predicting individuals’ vulnerability to social engineering in social networks. Cybersecur 3 , 7 (2020). https://doi.org/10.1186/s42400-020-00047-5

Received : 08 October 2019

Accepted : 20 February 2020

Published : 05 March 2020

DOI : https://doi.org/10.1186/s42400-020-00047-5

Keywords

  • Information security
  • Social engineering
  • Social network
  • Vulnerability


A Multivocal Literature Review on Growing Social Engineering Based Cyber-Attacks/Threats During the COVID-19 Pandemic: Challenges and Prospective Solutions

Mohammad Hijji

1 Computer Science Department, University of Tabuk, Tabuk 47512, Saudi Arabia

Gulzar Alam

2 Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

The novel coronavirus (COVID-19) pandemic has caused a considerable and long-lasting social and economic impact on the world. Along with other challenges across different domains, it has brought numerous cybersecurity challenges that must be tackled in a timely manner to protect victims and critical infrastructure. Social engineering–based cyber-attacks/threats are one of the major methods for creating turmoil, especially by targeting critical infrastructure such as hospitals and healthcare services. Social engineering–based cyber-attacks rely on psychological and systematic techniques to manipulate the target. The objective of this research study is to explore the state-of-the-art and state-of-the-practice social engineering–based techniques, attack methods, and platforms used for conducting such cybersecurity attacks and threats. We undertake a systematically directed Multivocal Literature Review (MLR) of the recent upsurge in social engineering–based cyber-attacks/threats since the emergence of the COVID-19 pandemic. A total of 52 primary studies were selected from both formal and grey literature based on the established quality assessment criteria. As an outcome of this research study, we found that the major social engineering–based techniques used during the COVID-19 pandemic are phishing, scamming, spamming, smishing, and vishing, combined with the most used socio-technical methods: fake emails, websites, and mobile apps used as weapon platforms for conducting successful cyber-attacks. The three types of malicious software most frequently used for system and resource exploitation were ransomware, trojans, and bots. We also highlight the economic impact of cyber-attacks performed on different organizations and critical infrastructure, with hospitals and healthcare services among the most targeted infrastructures during the COVID-19 pandemic. Lastly, we identify the open challenges, general recommendations, and prospective solutions for future work from the researcher and practitioner communities, drawing on the latest technologies such as artificial intelligence, blockchain, and big data analytics.

I. Introduction

Social engineering (SE) is a method frequently used by hackers and cybercriminals to build strategies that trick people into granting them access to a system, by circumventing security best practices and standards illegally or even without breaking the law. SE tactics are used for a wide variety of malicious activities enabled through human interactions. More explicitly, humans are the weakest links in cybersecurity [1]–[4]. SE attempts typically achieve success through one or more steps, depending on the attackers' ability to exploit the victim using psychological manipulation to trick users into making security mistakes and granting access to sensitive information. The social engineer acts as a fraudster, attempting to gain access to computer networks, sensitive data, and information [2]. Major social engineering cyber-attacks are accomplished through social media platforms such as Facebook, Twitter, Instagram, Snapchat, and YouTube [2], [4], [5].

In the current situation of the novel coronavirus (COVID-19) pandemic, social engineering is one of the most significant security threats faced by organizations in both the public and private sectors, as well as by end-users [2], [4], [6]. According to a CyberEdge report [7], "the number of organizations hit with at least one successful social engineering attack per year is around 79%." Similarly, 99% of cyberthreats were observed to be executed through human interactions with the assistance of a social engineering approach [8]. COVID-19, also known as the coronavirus pandemic, is a viral disease first identified in December 2019 in Wuhan, China, caused by the "severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2; formerly called 2019-nCoV)" [9]. COVID-19 spread rapidly around the world, infecting millions of people in over 188 countries with a high death rate compared to other diseases, reaching over a million fatalities so far [10]. Perc et al. [105] proposed a method to determine daily growth rates in order to reduce the risk of global spread of the COVID-19 pandemic. Similarly, Hâncean et al. [106] modelled human-to-human transmission networks and the dispersion mechanism of the novel coronavirus, showing its spread and inspecting the number of cases and deaths among the populations of Brazilian cities during COVID-19.

National and international collaboration is essential for combating COVID-19 and other probable epidemics, and for becoming better organized for pandemics as early as possible [107]. Science and technology play a significant role in combating COVID-19. Technology assists research and development by producing drugs, researching vaccines, and providing testing toolkits to overcome this severe pandemic, using emerging technologies such as artificial intelligence, 5G networking, cybersecurity, blockchain, and big data [11].

The motivation behind our research work is that there is no Multivocal Literature Review (MLR) on the rise of social engineering–based cyber-attacks/threats during the COVID-19 pandemic, nor one that identifies the main challenges and proposes prospective solutions for such attacks/threats. We systematically conducted this review by following well-known published standard guidelines [22] and carefully reviewed both formal and grey literature studies. This review will help organizations and employees working online to carry on their work in a secure manner. The main objective of our research study is to identify the state-of-the-art and state-of-the-practice social engineering–based techniques, attack methods, and platforms used for conducting cyber-attacks/threats, along with their economic and societal impacts on various organizations. Similarly, we aim to identify the most targeted critical infrastructures and organizations exploited by cybercriminals during the COVID-19 pandemic. This research work provides an MLR covering the rise of social engineering–based cyber-attacks/threats from the start of the COVID-19 pandemic until October 2020.

The proposed MLR study is structured as follows. Section 2 comprehensively explains social engineering definitions, types, approach, and goals. The detailed research methodology is discussed in Section 3. Sections 4 and 5 explain the results and discussion of the conducted MLR study. Section 6 explores the motivation behind social engineering cyber-attacks and threats. Finally, Section 7 provides the limitations of the study and Section 8 presents the conclusion and potential future work.

II. Social Engineering: Definition, Approach and Goals

Social engineering “is the ultimate con—the bag of tricks employed by fraudsters who lie, cheat and steal their way past your organization’s security controls. Their goals: theft, fraud or espionage [12] .” Social engineering circumvents all technologies, as well as firewalls. It appeals to hackers because people’s lack of awareness often makes their efforts easier. The comprehensive structure of social engineering is shown in Figure 1 , including its primary types, approaches, life cycle, and goals [1] , [13] .

Figure 1: Abstract view of the social engineering goals, types, approach, and life cycle.

A. Types

According to Krombholz et al. [2] and Koyun and Janabi [14], social engineering is mainly divided into four types, as discussed below.

1) Physical

In this type of social engineering approach, the attackers perform some actions like searching for personal data, manuals, memos, and sensitive information in trash and dumpsters. The primary purpose of the attacker is to accumulate information about the victim from physical materials.

2) Social

This is the most widely used type of social engineering, in which social engineers use psychological techniques to convince the target user, with tactics like building a relationship, spear phishing, baiting, and reverse social engineering. The most commonly used social techniques for cyber-attacks are phishing, smishing, and vishing, conducted via emails, texts, and phone calls.

3) Technical

The technical type is usually carried out over the internet, where social networking sites are valuable sources of information. Social engineers frequently use search engines to collect relevant information about the victims. The hackers guess or attempt to crack passwords to collect critical information about the target user. Correspondingly, hackers and cybercriminals also use automated tools, such as Maltego and the Social-Engineer Toolkit (SET), for successful cyber-attacks.

4) Socio-Technical

Socio-technical techniques are the most powerful of social engineering, combining both the social and technical types. The social engineer considers certain factors like social culture of the victim, human behavior, technologies used, and building infrastructure, as well as goals and values [15] . The combination of both social and technical methods heightens the chances of successful social engineering cyber-attacks.

B. Approach

1) Information Gathering

Information gathering is the most significant phase for social engineers, where they collect and combine every piece of relevant information about the victim. It is the most exhausting and time-consuming part of the attack approach in social engineering. Most social engineers use automated online tools for information gathering by accessing the location, mobile number, and address of the target victim. Attackers apply different methods for obtaining organizational and individual information, drawing on soft skills and technical skills depending on the target. Dumpster diving is one general way of gathering information, including medical records, emails, personal photos, bank statements, resumes, account details, tech support logs, software details, websites visited, and social media handles [16].

2) Threat Modeling

Threat modeling is a procedural process for discovering the weak points in a system’s security. Social engineers try to find bugs or weaknesses in the system to take advantage of while attempting cyber-attacks. A threat model must include the current status of the system and its security, the possibility of new threats, and finally, a mitigation strategy for when the attackers deploy cyber threats. Most importantly, the threat modeling necessitates a rich understanding of objectives and the assets to be protected, along with other environmental factors [17] .

3) Vulnerability Analysis

Social engineers use a collection of strategies to exploit the vulnerabilities of organizations and individuals in order to take advantage of the system and gain access to sensitive information. Vulnerability analysis consists of four main steps: an initial assessment of the victim's personality and behaviours, a diagnosis of vulnerabilities in the system, a selection of relevant strategies for successful exploitation of the resources, and vulnerability detection. The attackers then develop personalized tactics for cyber-attacks [18]. Attackers often use vulnerability scanners to detect security issues in the target system.

4) Exploitation

When an attacker achieves access due to security weaknesses in a system, they start to exploit and misuse the resources by collecting sensitive information, or by disrupting system availability and demanding money through the use of ransomware.

5) Post Exploitation

This phase of the social engineering methodology begins once the attacker has compromised the victim's system. At this point, the attacker collects the crucial relevant information and data. Furthermore, once the attacker knows the security measures of the communication channels, configuration settings, and system networks, the collected data about the target system can be used for continued, future access as the attacker desires. Finally, the attacker cleans the pathways they used and stays invisible by setting up backdoors and rootkits [19].

6) Reporting

Reporting is the final phase, in which the social engineers stop the social engineering cyber-attacks and aggregate the results and documentation.

C. Life Cycle

1) Investigation

The hackers and cyber criminals first investigate the initial background information like entry points and other weaknesses in the security protocols. In this phase of the SE life cycle, the attacker identifies the victim, gathers background information about the target user, and makes strategies for selecting the attack method.

The attacker attempts to build a relationship of trust with the victim and tries to convince them of what the attacker needs them to believe. The attacker attempts to take control of the interaction as they engage the victim.

After building trust with the victim, the attacker exploits the resources available to them and executes the attack on the targeted system to access the information in a timely manner. In this phase, the attacker may disrupt the business or system by siphoning data as well.

The exit is the final stage of the SE life cycle, in which the attacker concludes the interaction without generating distrust. The attacker eliminates all traces of malicious software code and covers their tracks. Errors made by the authentic users who are targeted are much less obvious, making them tougher to recognize than a malware-based intrusion.

D. Goals

The specific goals of social engineers are money, ego, revenge, knowledge, and entertainment [20]. They manipulate people into acting differently than they typically would. They want to fool people into providing valuable data and bits of information. Usually, social engineers never approach the victim directly at first: they come to them after gathering information about them or their system, and then access the target system by fraudulent means. Furthermore, they often establish an immediate connection with the target victim and utilize it as a foundation for building a relationship and an understanding. The attacker uses various approaches to obtain relevant information from the victim. Other well-known goals of a social engineer are service disruption, unauthorized access, and financial gain for themselves or for another party that hired them [1].

III. Methodology

Systematic reviews are frequently used in the software engineering domain to summarize the existing literature. Garousi et al. [22] classified systematic review studies into six types: "Systematic Literature Mappings (SLM), Systematic Literature Review (SLR), Grey Literature Mapping (GLM), Grey Literature Review (GLR), Multivocal Literature Mapping (MLM), and Multivocal Literature Review (MLR)". SLM and SLR are based on formal literature and do not include the practitioners' opinions found in grey literature (white papers, websites, reports, and blogs). Similarly, GLM and GLR are based only on grey literature and do not contain the opinions of researchers. However, MLM and MLR cover both formal literature (peer-reviewed journals, conferences, and workshops) and grey literature. A detailed comparison of these systematic review types is shown in Table 1, which explicitly illustrates the strengths and weaknesses of each type with respect to formal and grey literature.

MLR is preferable to the other systematic review types because it is a form of SLR that provides robust evidence from both the researchers' perspective (formally published literature: journal, conference, and workshop papers) and the practitioners' perspective (grey literature: white papers, websites, reports, and blogs). MLRs are growing in popularity because they bridge the gap between the voice of industry practitioners and that of academic researchers. For our current research study, MLR is the most suitable option because we need information from both formal and grey literature to concisely address our proposed research questions and to identify the current challenges, recommendations, and prospective solutions.

This research study conducted an MLR based on published guidelines and methods [21]–[24]. An MLR consists of three main phases: planning, conducting, and reporting, as shown in Figure 2.

Figure 2: The MLR guidelines from the planning and design phases through conducting and reporting.

A. Research Questions

Our proposed MLR research questions, with brief descriptions, are shown in Table 1.

B. Data Collection

In this research, we surveyed MLR guidelines and chose those proposed by Garousi et al. [22], as shown in Figure 2. An MLR protocol was documented to delineate the complete strategy for the research study. A team of researchers conducted the MLR, considering research studies from the start of the COVID-19 pandemic until October 2020. All team members contributed to all phases of the MLR.

C. Search Strategy

The search string was built by finding keywords and their corresponding alternative words from social engineering studies. Then the designated keywords and their alternative words were chained together with the Boolean operators “AND” and “OR” to express the search string as follows:

“{ Social engineering OR cyber-threats OR cyber-attack OR online attack OR social attack} AND {Method OR technique OR approach OR platform} AND {COVID-19 OR novel coronavirus OR corona OR coronavirus OR SARS-CoV-2 OR 2019-nCoV} AND {Organization OR sector OR place OR location}”

The search strategy of MLR necessitates searching both formal and grey literature. In the first stage, the search string was applied to well-known, source-rich digital libraries, such as Scopus and Google Scholar, to find primary studies from the formal literature. In the second stage, the search string was applied to the Google Search engine to find primary studies from the grey literature.
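To make the structure of the search string concrete, the sketch below encodes it as groups of alternative keywords (OR within a group, AND across groups) and checks whether a candidate title or abstract matches. The function name and the simple substring matching are illustrative assumptions; the actual searches were executed directly in Scopus, Google Scholar, and the Google Search engine.

# Each inner list is an OR-group; all groups must match (AND) for a hit.
SEARCH_GROUPS = [
    ["social engineering", "cyber-threat", "cyber-attack", "online attack", "social attack"],
    ["method", "technique", "approach", "platform"],
    ["covid-19", "novel coronavirus", "corona", "coronavirus", "sars-cov-2", "2019-ncov"],
    ["organization", "sector", "place", "location"],
]

def matches_search_string(text: str) -> bool:
    """Return True if the text satisfies every AND-group via at least one OR-term."""
    text = text.lower()
    return all(any(term in text for term in group) for group in SEARCH_GROUPS)

title = ("Phishing techniques targeting the healthcare sector "
         "during the COVID-19 pandemic: a social engineering perspective")
print(matches_search_string(title))  # True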

D. Inclusion and Exclusion Criteria

The following inclusion criteria were used to find related primary studies:

  • Studies with a focus on social engineering techniques and methods.
  • Studies with a focus on cyber threats/attacks during the COVID-19 pandemic.
  • Studies based on empirical evaluation.

The following exclusion criteria were used to screen out irrelevant primary studies:

  • Studies not relevant to the aims of the study.
  • Studies written in a language other than English.
  • Duplicate and repeated studies.

E. Quality Assessment Criteria for Formal and Grey Literature

The quality assessment criteria for the formal literature were comprised of six questions, as shown in Table 2 . Each question’s score was calculated based on the Kitchenham et al. guidelines [24] . The final score was calculated by assigning a 1 for “Yes” and a 0 for “No” for every individual question, with a summation at the end.

For grey literature, we followed the guidelines for grey literature from Garousi et al. [22] . We presented six questions for the grey literature quality assessment criteria, as shown in Table 3 . The first tier of grey literature consists of white papers, magazines, government reports, books, and theses. The score for the first tier is equal to 1, a high rank. The second tier of grey literature is comprised of news articles, videos, annual reports, presentations, and websites. The score for the second tier is equal to 0.5, a moderate rank. Finally, the third tier of grey literature contains tweets, blogs, and emails. The score for the third tier is equal to 0, a low rank.
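The scoring rules above can be expressed compactly, as in the following sketch. The Yes/No encoding and the three tier scores come from the text; the function names and example inputs are assumptions added for illustration.

def formal_qa_score(answers):
    """Sum of six Yes/No answers: 1 for 'Yes', 0 for 'No' (Kitchenham-style scoring)."""
    return sum(1 if a == "Yes" else 0 for a in answers)

# Tier scores for grey literature sources, as described in the text.
GREY_TIER_SCORE = {
    1: 1.0,   # white papers, magazines, government reports, books, theses
    2: 0.5,   # news articles, videos, annual reports, presentations, websites
    3: 0.0,   # tweets, blogs, emails
}

def grey_qa_score(tier: int) -> float:
    return GREY_TIER_SCORE[tier]

print(formal_qa_score(["Yes", "Yes", "No", "Yes", "No", "Yes"]))  # 4
print(grey_qa_score(2))                                           # 0.5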

F. Study Selection

The study selection comprised both formal and grey literature from Scopus, Google Scholar, and the Google Search engine. Figure 3 shows the distribution of formal and grey literature studies across the various sources. The complete MLR study selection procedure is shown in Figure 4.

Figure 3: Distribution of the selected articles over various sources from both formal and grey literature.

Figure 4: MLR study selection process from Google Scholar, Scopus, and the Google Search engine, applying the inclusion/exclusion and quality assessment criteria.

1) Formal Literature Selection

In the initial phase, we identified 532 results from Scopus and 1,890 from Google Scholar that were relevant to our proposed research topic. By analyzing the titles, abstracts, and keywords of the papers according to our inclusion and exclusion criteria and removing duplicates, the number of papers was reduced to 16 for Scopus and 29 for Google Scholar. After studying the full text of those papers, we finally selected a total of 13 papers from Scopus and Google Scholar combined.

2) Grey Literature Selection

We used the Google Search engine to locate grey literature. The initial search returned 4,590,000 results, as listed on the top results page. We limited the search to the first 15 pages [25], since the Google Search engine returns far more results than can be screened. After applying the inclusion and exclusion criteria to the titles and keywords, the number of sources was reduced to 60. After further screening of the full texts for relevance to our topic, we finally selected 39 articles in the form of websites, blogs, reports, news reports, and white papers.

G. Data Extraction

To answer our research questions, we identified and extracted the relevant information and data by employing the predefined data extraction procedure of the MLR guidelines. The collected data were stored in Microsoft Excel spreadsheets for evaluation, including the title, author name, SE technique, SE type, SE method, SE platform used, type of malicious software used, targeted organizations and sectors, and the year of the published article. Appendix A and Appendix B show the collected data, including the quality assessment scores for each research question, along with study title, author name, and year.
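A minimal sketch of the extraction step is shown below, using pandas to collect the listed fields into a spreadsheet. The column names mirror the fields named above; the example row and output file name are placeholders rather than entries from the actual extraction sheet.

import pandas as pd

COLUMNS = [
    "title", "author", "year", "se_technique", "se_type", "se_method",
    "platform", "malicious_software", "targeted_sector", "qa_score",
]

# One illustrative record; real rows would be filled in while reading each study.
records = [{
    "title": "Example primary study",
    "author": "Doe et al.",
    "year": 2020,
    "se_technique": "phishing",
    "se_type": "socio-technical",
    "se_method": "fake email",
    "platform": "email",
    "malicious_software": "ransomware",
    "targeted_sector": "healthcare",
    "qa_score": 5,
}]

df = pd.DataFrame(records, columns=COLUMNS)
df.to_excel("mlr_extraction.xlsx", index=False)  # requires the openpyxl package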

H. Data Synthesis and Analysis

In the data synthesis phase of the study, the primary studies were carefully evaluated in order to describe the final results. The information and data were collected in the extraction phase, and they were further analyzed to address our research questions and help us draw the conclusions of the proposed study.

IV. Results

A. Social Engineering Techniques Used During the COVID-19 Pandemic (RQ1)

Numerous social engineering techniques were used by scammers, hackers, and cybercriminals for cyber-attacks, with the objective of exploiting victims' systems.

According to our research regarding social engineering techniques, phishing is the most common technique used by threat actors, at 35%. Email platforms were used as a weapon for leading phishing attacks through various misleading email links and fake news. Spam is the second most used social engineering technique, at 16%. Scams were the third most common technique, at 14%; for example, scams included content such as loan emails, COVID-19 test news, bogus insurance invoices, and employment news. Moreover, the attackers also used smishing and vishing techniques during the COVID-19 pandemic, sending text messages to and calling mobile numbers, WhatsApp users, and other social media accounts to trick victims; both techniques combined account for nearly 22% of the overall total. Finally, other techniques such as spear-phishing, extortion, cyberbullying, cyber-stalking, pre-texting, and fear attacks were executed much less frequently.

B. Types of Social Engineering Methods Used for Conducting Cyber-Attacks During the COVID-19 Pandemic (RQ2)

There are four types of methods used by threat actors for conducting cyber-attacks. In the COVID-19 pandemic cyber-attack scenarios, the socio-technical method was used 44% of the time. The hackers also used the technical method to forcibly attack victims' systems and obtain the desired information in 29% of cases. The social method, such as texting or calling victims and using fake identities to obtain relevant information about them, was used 23% of the time during the COVID-19 pandemic. Finally, the physical method was used in a very small share of cases, only 4%. The overall percentages for the four methods are shown in Figure 6.
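One plausible way such percentages can be tallied from the extraction sheet is sketched below: count how often each coded method appears across the selected studies and normalise by the total. The per-study codes are hypothetical and chosen only so that the proportions come out close to those reported above; the study's actual counting procedure is not reproduced here.

from collections import Counter

# Hypothetical per-study codes for the SE method observed (illustration only).
coded_methods = (
    ["socio-technical"] * 23 + ["technical"] * 15 + ["social"] * 12 + ["physical"] * 2
)

counts = Counter(coded_methods)
total = sum(counts.values())
for method, n in counts.most_common():
    print(f"{method:15s} {100 * n / total:5.1f}%")
# socio-technical  44.2%
# technical        28.8%
# social           23.1%
# physical          3.8%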

Figure 5: Different social engineering techniques used for cyber-attacks/threats during the COVID-19 pandemic, shown as percentages of attacks/threats.

Figure 6: Social engineering types used for cyber-attacks/threats during the COVID-19 pandemic, by percentage of attacks/threats.

C. The Platforms Used as Weapons for Social Engineering–Based Cyber-Attacks During the COVID-19 Pandemic (RQ3)

Figure 7 shows the platforms used by the attackers for performing social engineering cyber-attacks/threats. Email is the most used platform by a wide margin and is discussed by 52 studies. This correlates exactly with RQ1, in which the top technique was phishing, carried out mainly via email. The attackers and cybercriminals also developed fake coronavirus-related websites with news and data intended to trick users, which constitute the second most used platform. Similarly, the attackers developed various mobile applications for coronavirus updates in order to target users and gain access and information.

Figure 7: Mapping of study sources by the platform used as a weapon for social engineering–based cyber-attacks/threats.

The majority of mobile devices that were hacked during the COVID-19 pandemic were targeted through fake applications from the threat actors. Due to the coronavirus, many organizations moved their activities online, mostly using platforms like Zoom and Microsoft Teams for online meetings and video conferencing, which have also been hacked many times during the COVID-19 pandemic. Furthermore, WhatsApp, as a primary source of communication, was hacked several times. Other well-known social media platforms were also hacked and used as weapons for conducting social engineering cyber-attacks, as shown in Figure 7.

D. Kinds of Malicious Software and Attack Methods Used for Social Engineering Cyber-Attacks/Threats (RQ4)

Figure 8 shows the growth trends of social engineering cyber-attacks/threats using different malicious software. Ransomware is the most cited malicious software used for cyber-attacks on various public and private sector organizations during the COVID-19 pandemic. The generic "Other Malware" category is the second most cited, as shown in Figure 8; it consists of various malicious software and cyber-attack methods, such as e-skimming, cryptominer software, BEC, DoS, brute-force attempts, DDoS, cyber-sabotage, and malicious URL attacks, which have also been conducted during the COVID-19 pandemic. Trojan malware was also used in significant amounts. Spyware, spoofing, impersonation, and bots were used at a moderate level compared to the other top-cited categories.

Figure 8: Growth trends in malicious software used in social engineering–based cyber-attacks/threats, by number of study sources citing them.

Three types of malicious software were the most commonly used: ransomware, trojans, and bots, shown in Figure 9 with their specific deployments and families. By count of unique families, ransomware was used the most with 30 families, trojans second with 19 families, and bots third with 7 families. The generic "Other Malware" category includes 13 families, as presented in Figure 9.

Figure 9. Detailed overview of the malicious software and their sub-families used in social engineering based cyber-attacks/threats during the COVID-19 pandemic.

The trojan families used the most during the COVID-19 pandemic are RAT, AZORult, Emotet, KPOT, Nanocore, and Sphinx.

Emotet was used mostly for banking and financial cyber-attacks. Similarly, Netwalker, MAZE, Stealer, Maillot, Covid-lock, Dopper-paymer, and Agent Tesla are ransomware families that were widely used to demand money and extract financial benefits. Furthermore, among the bot families, Loki-Bots was heavily used, and Spider, Remcos, and Info-Stealer from the other malware families were regularly used during the COVID-19 pandemic for cyber-attacks/threats.

E. Most Targeted Organizations and Sectors During COVID-19 Cyber-Attacks (RQ5)

Cybercriminals exploited various organizations and industries during the COVID-19 pandemic, including healthcare, hospitals, the private and public sectors, government institutions, banking, and finance. The most targeted organizations are healthcare companies and hospitals, owing to their weak security setups. The targeting of healthcare organizations was carried out by advanced cyber hackers and attackers.

F. The Economic Impact of Social Engineering Cyber-Attacks During the COVID-19 Pandemic (RQ6)

The economic impact of social engineering cyber-attacks is rising exponentially with the advancement and widespread use of new technologies. According to Accenture's annual security report, security breaches increased by 67% over the past five years, and in the last year companies spent $110 billion worldwide on protection against cyber-attacks [26] .

During the COVID-19 pandemic, the University of California San Francisco School of Medicine was targeted by hackers with ransomware and paid $1.14 million to remove it [27] . Infosecurity Magazine reported that the UK's National Fraud and Cybercrime Reporting Center recorded 16,352 victims of online scams, through auction schemes and online shopping during the COVID-19 pandemic, with losses of approximately £17 million [28] . Two reports from Australia and the US stated that the Australian Competition and Consumer Commission's Scam Watch recorded over 2,700 scams causing losses of $16,390,650 AUD during the pandemic and that the US Federal Trade Commission estimated that $12 million USD was lost to fraudulent activities [40] . Wiggen [30] , writing during the COVID-19 pandemic, reported that Russian malware had targeted Ukraine, encrypting crucial data on computer systems and rendering it useless; the cost of the damage was estimated at more than $10 billion.

According to at least one prediction [31] , the Global Cybersecurity Market will total $152 billion USD by 2025 because of the growing concern over cyber-attacks/threats and data breaches that are confronting organizations.

V. Discussion

This review has described social engineering cyber-attacks/threats on organizations and critical infrastructure during the COVID-19 pandemic. Throughout this review, we identified the social engineering techniques; the methods applied; the platforms, malicious software, and attack methods used; and finally, the organizations targeted. Information on our proposed research questions is more available in grey literature sources than in the formal literature, demonstrating that practitioners are more active in providing social engineering–based cyber solutions and addressing security issues, as shown in Figure 11 . The research question on the types of malicious software used was the most cited because it covers a wide variety of malicious software, the relevant software families, and attack methods. Therefore, more research and cyber solutions are needed to address cyber-attacks and threats that arrive in the form of malicious software. Social media and other communication platforms must also be secured, given their use as weapons in different cyber-attacks. The platforms used for cyber-attacks/threats are the second most cited research question in both the formal and the grey literature, as shown in Figure 11 .

Figure 11. Prevalence of research questions among formal and grey literature sources.

Similarly, we also explored the economic impacts of social engineering cyber-attacks, with recent estimates and a projection through 2025. Cyber solutions need to be robust and consistent because growing numbers of cybercriminals, hacktivists, scammers, and extortion groups are using different social engineering techniques to exploit critical assets and systems. Phishing attacks were used in different forms, such as spear-phishing, smishing, and vishing, via emails, calls, and text messages. These attacks can be reduced through awareness campaigns and by applying email spam filters.
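As a minimal illustration of the kind of rule-based email filtering mentioned above, the Python sketch below flags messages whose subject or body contains common COVID-era phishing lures. The keyword patterns and the scoring threshold are assumptions chosen for the example, not values taken from the reviewed studies.

```python
import re

# Illustrative lure patterns resembling COVID-era phishing themes (assumed list).
SUSPICIOUS_PATTERNS = [
    r"covid[- ]?19 (vaccine|cure|treatment)",
    r"urgent.*(payment|invoice|account)",
    r"verify your (account|password|credentials)",
    r"stimulus (check|payment|relief)",
    r"click (here|the link) (immediately|now)",
]

def phishing_score(subject: str, body: str) -> int:
    """Count how many suspicious patterns appear in an email."""
    text = f"{subject}\n{body}".lower()
    return sum(bool(re.search(pattern, text)) for pattern in SUSPICIOUS_PATTERNS)

def is_suspicious(subject: str, body: str, threshold: int = 2) -> bool:
    """Flag the email if it matches at least `threshold` lure patterns."""
    return phishing_score(subject, body) >= threshold

if __name__ == "__main__":
    subject = "URGENT: verify your account to receive your stimulus check"
    body = "Click here immediately to claim your COVID-19 vaccine slot."
    print(is_suspicious(subject, body))  # flags this invented example as suspicious
```

A real spam filter would combine such rules with sender reputation, URL analysis, and statistical models, but even this simple scoring illustrates how obvious lures can be caught before they reach users.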

Consequently, solutions to social engineering–based cyber-attacks/threats require a high level of innovation, teamwork, collaboration, and performance. Cyber solutions need to be adaptive and generalizable to various organizations, especially the healthcare industry and hospitals, which are currently ripe targets for threat actors. Social engineering–based cyber solutions require significant research and development to produce outcomes capable of instant incident response to unexpected and surprising cyber events. Our review is based on the perspectives of both researchers and practitioners and will benefit both academia and industry in carrying out initial assessments for their own research and development.

VI. The Motivation Behind Social Engineering Cyber-Attacks/Threats

A. Challenges

The swift spread of COVID-19 created cybersecurity challenges that must be addressed to protect victims and critical infrastructure. Our MLR explored several cybersecurity challenges during the COVID-19 pandemic, and after careful observation and research the authors divided these challenges into seven main categories, as shown in Figure 12 .

Figure 12. Cybersecurity challenges during the COVID-19 pandemic.

1) Remote Work and Data Breaches

Remote working allows geographically dispersed employees to fulfill their assigned tasks from various locations. Much office work was transferred to remote working spaces due to the COVID-19 pandemic, and the majority of large organizations proceeded with their work remotely from home via online platforms. However, remote working presents challenges and exposes organizations to a broad spectrum of social engineering cyber-attacks and cybersecurity issues through emails, file sharing, and access to networks via user devices [32] . In a survey of 3,000 employees in more than 12 countries, 94% had suffered data breaches via cyber-attacks, with an average of 2.17 breaches each [33] . Home networks remain less secure than organizational internal networks, posing greater dangers for employees who are already at higher risk of cyber-attacks. Also, a large number of people are not trained to work remotely in a secure way. A report from the International Association of IT Asset Managers (IATAM) cautions that working from home during the COVID-19 pandemic is enabling numerous data breaches [34] , [35] .

2) Social Scams and Phishing

Phishing attacks and scams related to the COVID-19 pandemic started in January 2020 and spread very quickly, with thousands of fake sites and scams produced every day. UK regulatory authorities noticed a surge in the registration of new webpages related to the COVID-19 pandemic that appeared suspicious and could serve as threat vectors for exploitation and cyber-attacks [36] .

Scams became more prevalent and costly because of most people's financial situation during the COVID-19 pandemic, as those suffering from income loss and joblessness came under threat from scams. Similarly, scammers further targeted vulnerable people by posting fake advertisements and news about coronavirus treatments and vaccines [37] . In these efforts, fraudsters use software tools for scamming and phishing, along with subcategories of these techniques such as spear-phishing, smishing, and vishing, and they use different platforms, such as emails, texts, social media posts, and robocalls, for impersonation schemes [38] .

3) Fake Websites, Domains, Themes, and Mobile Apps

Attackers and cybercriminals continue to build fake websites and mobile apps to steal credentials related to financial assistance and personal identification. Threat actors develop themes and website templates that mimic governments and trusted non-governmental organizations, such as the World Health Organization, the Internal Revenue Service, and the Centers for Disease Control [39] .

A statistical report from Palo Alto researchers [40] covering the period through the end of March 2020 showed that a total of 116,357 new domain names related to COVID-19 were registered during that time. They elaborated, "Out of these, 2,022 are malicious and 40,261 are with high-risk."

4) Privacy and Security

Numerous organizations and governments have worked to develop track-and-trace mobile and web applications to help society return to normal and to limit the spread of COVID-19. However, the rise of digital services comes at the cost of privacy, so the right balance is needed between institutional response, user access, and information privacy. The use of drones during the COVID-19 pandemic may also violate privacy if the collected images and videos are stored or transmitted. Similarly, cybersecurity issues such as brute-force attacks, injections, eavesdropping, replay attacks on communication channels, and storage-drain attacks need to be addressed to protect end users from various cyber-attacks. These privacy and security challenges are causing researchers and practitioners to re-think the application of digital transformation initiatives [62] .

5) Information Security Governance

Organizations need to understand whether their approaches to information security are merely symbolic. It is essential for organizations to adopt "Digital Security Governance" alongside their existing security approaches. Digital Security Governance supports "practitioners and decision makers by providing a deeper understanding of how organizations and their security approaches are actually affected by digitalization" [42] . Information shared by organizations must comply with legal and regulatory authorities as well as digital laws, because data can be critical when it relates to business, industry, and personal lives. Software tools should be developed for information mapping according to standard policy and supporting security measures. Research should examine where an organization's information is accessed and by whom, which platforms generate, process, and store that information, and whether these comply with reasonable security standards.

6) Secure Communication Channels

Effective and secure digital communication channels are needed, and they are even more critical during pandemic crisis management and beyond. A dispersed workforce needs secure communication channels to carry out its tasks in a consistent, accurate, and safe manner. Security standards are necessary for organizations to communicate effectively with their employees and to monitor these means of communication for potential security vulnerabilities. Cybersecurity efforts are essential for improving and securing digital devices and networks and for promoting business continuity. It is crucial to establish device security because most wearable and "internet of things" (IoT) devices are also vulnerable to cyber-attacks [43] .
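To make the point about channel security concrete, the following Python sketch uses the standard ssl module to open a certificate-verified TLS connection before any data is exchanged. The host name is a placeholder, and the snippet is only a minimal illustration of enforcing encrypted, authenticated transport, not a complete secure-communications solution.

```python
import socket
import ssl

def open_verified_tls(host: str, port: int = 443) -> dict:
    """Connect over TLS with certificate and hostname verification enabled,
    returning basic details of the negotiated session."""
    context = ssl.create_default_context()            # verifies certificates against system CAs
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return {
                "protocol": tls.version(),        # e.g. 'TLSv1.3'
                "cipher": tls.cipher()[0],        # negotiated cipher suite
                "peer_subject": tls.getpeercert().get("subject"),
            }

if __name__ == "__main__":
    # "example.org" is a placeholder host used purely for illustration.
    print(open_verified_tls("example.org"))
```

Refusing unverified certificates and legacy protocol versions is a small but concrete step toward the security standards for employee communication discussed above.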

7) Malware and Ransomware Cyber-Attacks

Attackers use different malware and ransomware for resource and system exploitation, as shown in Figure 10 , and they target critical infrastructure, such as healthcare organizations, hospitals, and banks, for financial gain. Threat actors generally use phishing techniques to deliver a ransomware attack, injecting malware code into the victim's computer and network system in order to encrypt it and make the data inaccessible to the victim. The threat actor then tries to extort a monetary payment from the victim in exchange for the key required to decrypt the compromised files and data. For example, a British research company preparing to conduct COVID-19 vaccine trials was attacked with MAZE ransomware [65] . Cybersecurity experts and researchers need to develop robust software tools for penetration testing, guidelines, and security standards to detect and understand the threat landscape and potential cybersecurity vulnerabilities.

Figure 10. Breakdown of the study sources by type of organization targeted with social engineering based cyber-attacks/threats.
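Because ransomware overwrites files with ciphertext, which looks statistically random, one common defensive heuristic is to watch for sudden jumps in file entropy. The following Python sketch computes Shannon entropy for files in a directory and flags near-random content; the 7.5-bit threshold and the directory path are illustrative assumptions, not parameters drawn from the reviewed studies, and legitimately compressed or encrypted files will also trigger the check.

```python
import math
from collections import Counter
from pathlib import Path
from typing import List

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (near 0.0 = highly repetitive, near 8.0 = random/encrypted)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_possible_encryption(directory: str, threshold: float = 7.5) -> List[Path]:
    """Return files whose content looks close to random, a possible sign that
    ransomware has encrypted them (heuristic only, prone to false positives)."""
    suspicious = []
    for path in Path(directory).rglob("*"):
        if path.is_file() and shannon_entropy(path.read_bytes()) >= threshold:
            suspicious.append(path)
    return suspicious

if __name__ == "__main__":
    # "./shared_documents" is a placeholder path for illustration only.
    for f in flag_possible_encryption("./shared_documents"):
        print(f"High-entropy file (possible encryption): {f}")
```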

B. Recommendations

Social engineering–based cyber-attacks have targeted a diversity of victims, from secure and intricate organizations to single individuals. The main objective of our proposed recommendations is to protect victims from different kinds of cyber-attacks at the initial level and to show how to mitigate them. These recommendations can be regarded as a minimum level of defense for organizations as well as for end users. The following are the proposed recommendations:

  • Individuals must use strong password practices and apply multi-factor authentication for accessing their social media accounts as well as remote devices, limiting attackers' ability to exploit data breaches and steal information.
  • Organizations must implement user access restrictions and control mechanisms for remote workers to prevent them from accessing sensitive data and information and to grant access only on the basis of job responsibilities. This will significantly reduce the impact of social engineering cyber-attacks.
  • Back up all critical information and data consistently and keep it safe on an external system, an external hard drive, or with a secure cloud storage provider.
  • Be conscious of suspicious messages with spelling errors, suspect emails, pop-up advertisements with fake offers, news regarding coronavirus vaccines and treatment, and private and public financial offers. Official authorities never use personal email addresses to send such information. Always trust and rely on well-known governmental organizations and NGOs, such as the World Health Organization, the Centers for Disease Control, and the National Institutes of Health, for information updates during a pandemic.
  • Regarding fake websites, themes, domains, and mobile apps, double-check lookalike domains, spelling errors in website headings, and top content information (see the sketch after this list). Authenticate the company's legitimate website before entering login credentials and other sensitive information.
  • Be aware of common social engineering cyber-attacks and threats such as robocalls, phishing, smishing, and vishing, and of how hackers and cybercriminals target victims by triggering fears of losing access to private data and money.
  • Always review the privacy and security policies of any software used in a remote work environment for conference meetings and telehealth.
  • Avoid clicking on suspicious links received from unknown sources; they may redirect you to malicious software and suspicious files, disguised as coronavirus apps, antivirus tools, and the like, that download onto your devices and computer systems. Keep your computer and device security, firewalls, and software up to date.
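As a small illustration of the lookalike-domain check recommended above, the sketch below compares a domain against a short allow-list of legitimate domains using Python's standard difflib. The allow-list and the 0.75 similarity cut-off are assumptions chosen for the example, not values taken from the reviewed literature.

```python
from difflib import SequenceMatcher

# Illustrative allow-list of legitimate domains (assumed for the example).
TRUSTED_DOMAINS = ["who.int", "cdc.gov", "nih.gov", "irs.gov"]

def similarity(a: str, b: str) -> float:
    """String similarity ratio between 0.0 and 1.0; 1.0 means identical."""
    return SequenceMatcher(None, a, b).ratio()

def check_domain(domain: str, cutoff: float = 0.75) -> str:
    """Classify a domain as trusted, a probable lookalike, or unknown."""
    domain = domain.lower().strip()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    closest = max(TRUSTED_DOMAINS, key=lambda d: similarity(domain, d))
    if similarity(domain, closest) >= cutoff:
        return f"possible lookalike of {closest}"
    return "unknown - verify before entering credentials"

if __name__ == "__main__":
    for d in ["who.int", "wh0.int", "cdcc.gov", "example.org"]:
        print(d, "->", check_domain(d))
```

Production-grade checks would also inspect certificates, registration dates, and homoglyph substitutions, but even this simple comparison catches many typosquatted domains of the kind described in the recommendations.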

C. Prospective Solutions

1) Training and Awareness

Organizations' cybersecurity teams, whether in-house or third-party, must stay focused on detection technologies for the stream of traffic originating from remote employees. Similarly, cybersecurity teams must provide an initial level of security awareness training for all employees, covering topics such as the use of strong passwords, secure sharing of data and information, software updates, cookies and session hijacking, detection of malicious URLs, home network and router security, protection of IoT and wearable devices, and other relevant education. More specifically, remote employees should be educated on incident awareness and management so they can support their cybersecurity teams and improve response times during cyber-attacks/threats. This can be done via simulations of social engineering–based cyber-attacks with remote employees, teaching them how to detect, respond, and recover in time.

2) Artificial Intelligence

Artificial intelligence uses machine-learning algorithms on various datasets to perform statistical analysis, allowing assumptions to be made about behavioral patterns. Algorithms adjust and perform functions according to their programmed purpose and learn from the data applied to them. According to future predictions, up to 70% of organizations will adopt artificial intelligence in the domain of cybersecurity [67] . Artificial intelligence–based tools play a significant role in understanding and predicting cyber-attacks/threats. A recent survey report from Webroot [44] covered 800 information technology professionals with cybersecurity decision-making powers from Australia, New Zealand, Japan, the US, and the UK; it revealed that 96% of the respondents use artificial intelligence and machine learning tools for cybersecurity. Artificial intelligence systems are currently used for traffic-pattern and behavioral detection of zero-day cyber-attacks and continue to improve through self-learning, generating results more quickly and more precisely than analysts [45] , [46] . Artificial intelligence can improve security performance and the prediction of cyber-attacks/threats, malware, trojans, and botnets [47] .
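As a toy illustration of the machine-learning approach described above, the sketch below trains a character n-gram classifier to separate benign from phishing-style URLs using scikit-learn. The handful of hard-coded URLs are invented placeholders; a real deployment would train on a large labelled corpus from threat-intelligence feeds.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set for illustration only.
urls = [
    "https://www.who.int/emergencies/diseases/novel-coronavirus-2019",
    "https://www.cdc.gov/coronavirus/2019-ncov/index.html",
    "https://intranet.example.org/hr/payroll",
    "http://covid19-vaccine-free-gift.example.xyz/login",
    "http://secure-update-account.covid-relief.example.top/verify",
    "http://who-int.covid-cure.example.ru/claim-now",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = benign, 1 = phishing-style

# Character n-grams capture substrings like "covid", "verify", "-free-"
# without any hand-written rules.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(urls, labels)

test = ["http://covid-relief-verify.example.top/account"]
print(model.predict(test))        # predicted class for the new URL
print(model.predict_proba(test))  # class probabilities
```

The value of such models is that they generalize from labelled examples to previously unseen lures, which is what the self-learning behavior described above refers to.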

3) Big Data Analytics and Cyber Resilience

Cybersecurity attacks are increasing with the emergence of new technology, and cybercriminals are using various social engineering and other sophisticated techniques to exploit victims. Various organizations and individuals have suffered cybersecurity attacks and security breaches on a massive scale, especially during the COVID-19 pandemic. Data analytics plays a prominent role in strengthening cyber resilience and assists in mitigating and reducing cyber threats and crimes [48] . Big data analytics reviews enormous amounts of data from historical cyber-attacks and can help analysts assess and detect anomalies within computer systems and networks in order to protect them from possible future cyber-attacks/threats [49] , [50] . Using big data analytics with different correlation algorithms for anomaly detection, in combination with strong cybersecurity principles, can help organizations enhance their cyber resilience [51] . Big data analytics can be significant for accumulating all historical data on cyber-attacks and threats related to the COVID-19 pandemic in order to forecast future cyber-threats.
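A minimal sketch of the anomaly-detection idea discussed above: an Isolation Forest from scikit-learn is fitted on historical network-flow features and then used to flag unusual flows. The synthetic feature values and the contamination rate are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic historical flows: [bytes transferred, duration (s), distinct ports touched].
normal_flows = np.column_stack([
    rng.normal(50_000, 10_000, 500),  # typical transfer sizes
    rng.normal(30, 10, 500),          # typical session durations
    rng.integers(1, 5, 500),          # few ports touched per session
])

# Fit on historical data; "contamination" is the assumed share of outliers.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# New observations: one ordinary flow and one resembling exfiltration or scanning.
new_flows = np.array([
    [52_000, 28, 2],      # looks like normal traffic
    [5_000_000, 2, 300],  # huge transfer, very short session, many ports
])
print(detector.predict(new_flows))  # 1 = normal, -1 = anomaly
```

In practice the same pattern scales to large historical datasets: the model learns what normal traffic looks like and surfaces deviations for analysts, which is the resilience benefit the cited studies describe.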

4) Blockchain and the Internet of Things

Wearable and IoT devices are proliferating very quickly due to current advancements in technology. However, they are vulnerable to cyber-attacks as well, and these devices need to be secured by protecting sensitive information and users' personal data. One way of protecting these devices is to implement blockchain technology. A blockchain is a distributed ledger used by millions around the globe: the devices' data can be added but not copied or changed, and the ledger is managed by a cluster of computers that is not owned by any single entity [52] . By applying blockchain technology to healthcare IoT and other critical infrastructure, the recent growth in cyber-attacks/threats and information theft in the age of the COVID-19 pandemic can be reduced. Blockchain technology is stepping up to overcome security concerns in the face of current cybersecurity breaches [53] .
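To illustrate the tamper-evidence property that makes blockchain attractive for IoT data, the sketch below chains sensor readings with SHA-256 hashes so that altering any earlier record invalidates every later one. It is a single-node toy with invented sensor names, not the distributed, consensus-driven systems described in the paragraph above.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, reading: dict) -> None:
    """Append an IoT sensor reading, linking it to the previous block's hash."""
    previous = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "timestamp": time.time(),
             "reading": reading, "previous_hash": previous}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify_chain(chain: list) -> bool:
    """Recompute hashes; any edited or re-ordered block breaks verification."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

if __name__ == "__main__":
    chain: list = []
    add_block(chain, {"sensor": "ward-3-thermometer", "temp_c": 36.9})
    add_block(chain, {"sensor": "ward-3-thermometer", "temp_c": 37.1})
    print(verify_chain(chain))           # True: untouched chain verifies
    chain[0]["reading"]["temp_c"] = 40   # tamper with stored data
    print(verify_chain(chain))           # False: tampering is detected
```

Distributed deployments replicate such a chain across many nodes and add consensus, so no single compromised device or server can silently rewrite the recorded IoT data.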

VII. Limitations of the Study

One limitation of this MLR may be the subjective decisions and search terms used in the data extraction process from the grey and formal literature, particularly the use of three core search engines (Google Search, Scopus, and Google Scholar), which may have caused some studies to be missed. This effect was reduced by limiting the search terms and by using alternative keywords and repeated searches. Similarly, the subjectivity of our decisions was further decreased by the authors' detailed, repeated reviews. Grey literature sources represent the voice of practitioners in real industrial environments. Another possible limitation, specific to the grey literature, is that only a few of the practitioners' recurring opinions overlap with those of other practitioners. To mitigate this effect, we drew our data from reputable reports, blogs, websites, whitepapers, and magazines based on our defined quality assessment criteria.

VIII. Conclusion and Future Work

To the best of our knowledge, no systematic MLR has focused on the emerging social engineering–based cyber-attacks/threats from both the researchers' and the practitioners' perspectives. The COVID-19 pandemic has caused a considerable and long-lasting social and economic impact on the world, and social engineering–based cyber-attacks/threats are one of the primary drivers of the present insecurity. Social engineering–based cyber-attacks rely on psychological and systematic techniques to manipulate users and cannot be controlled solely through the use of technology.

The objective of this research study was to identify the state-of-the-art and state-of-the-practice social engineering–based techniques, attack methods, and platforms used for conducting successful cyber-attacks/threats with economic and social impacts on various organizations. The review highlighted the most targeted organizations and critical infrastructure exploited by cybercriminals during the COVID-19 pandemic. This work provides an MLR on the rise of social engineering–based cyber-attacks/threats since the emergence of the COVID-19 pandemic. In total, 52 primary studies were selected from both the formal and the grey literature based on published guidelines for conducting an MLR. The review revealed that the major social engineering–based techniques used during the COVID-19 pandemic include phishing, scamming, spamming, smishing, and vishing, most often combined with socio-technical methods in which fake emails, websites, and mobile apps served as weaponized platforms for conducting cyber-attacks. Finally, the potential economic impacts of successfully conducted cyber-attacks on various organizations and critical infrastructure were also discussed. Most significantly, we explored open challenges, general recommendations, and prospective solutions that make use of the latest technology.

From the conducted MLR, several directions for future work were identified that will support security practitioners and researchers in addressing the proposed cybersecurity challenges by applying their research and development skills to propose new tools, security standards, policies, and frameworks in combination with emerging technologies such as artificial intelligence, blockchain, and big data analytics. In the future, we intend to propose a framework for training and awareness aimed at an initial level of cybersecurity awareness for organizations and end users.

Biographies

Mohammad Hijji is currently an Assistant Professor with the Faculty of Computers and Information Technology, University of Tabuk, Saudi Arabia. He is also the Chairman of the Computer Science Department. He works with a range of research centers and government sectors related to artificial intelligence and disaster and emergency management. He is also responsible for developing and teaching postgraduate programs with the Faculty of Computers and Information Technology, University of Tabuk. His research interests include artificial intelligence, cyber security, the Internet of Things (IoT), and disaster and emergency management.

Gulzar Alam received the B.S. degree in software engineering from the University of Malakand, Pakistan, and the M.S. degree in software engineering from the King Fahd University of Petroleum and Minerals, Saudi Arabia. He has worked as a Software Engineer in various national and international organizations. He also worked as a Research Assistant in many research projects funded by the Deanship of Scientific Research, King Fahd University of Petroleum and Minerals. His research interests include artificial intelligence, software engineering, secure software development life cycles, cyber security, and the Internet of Things (IoT).

See Table 5 .

See Table 6 .

Funding Statement

This work was supported by the Industrial Innovation and Robotics Center, University of Tabuk.

IMAGES

  1. (PDF) A Research Paper on Social Engineering and Growing Challenges in

    research paper on social engineering

  2. (Download) "Social Engineering" by Christopher Hadnagy # eBook PDF

    research paper on social engineering

  3. What is social engineering? A definition + techniques to watch for

    research paper on social engineering

  4. Social engineering

    research paper on social engineering

  5. Critical Essay: Social work research paper example

    research paper on social engineering

  6. (PDF) Social Engineering: An Introducction

    research paper on social engineering

VIDEO

  1. Social Engineering Artificial Intelligence #chatgpt

  2. CLASS 5 Social Science Annual Exam question paper 2022-23

  3. Social Engineering Explored Demo

  4. Research paper- social media

  5. class 9 annual exam social science question paper 2024 Hojai district with solutions SEBA

  6. What is Social Engineering?

COMMENTS

  1. Hacking Humans? Social Engineering and the Construction of the

    Today, social engineering techniques are the most common way of committing cybercrimes through the intrusion and infection of computer systems and information technology (IT) infrastructures (Abraham and Chengalur-Smith 2010, 183). Cybersecurity experts use the term "social engineering" to highlight the "human factor" in digitized systems.

  2. An interdisciplinary view of social engineering: A call to action for

    Social engineering research lacks a framework within which to view the topic and to apply findings in real-world organizational settings. As a result of the prior literature reviews and the proposed interdisciplinary approach, the following diagram illustrates a suggested framework for future research on the topic of social engineering that is flexible enough to allow for a variety of theories ...

  3. Social Engineering in Cybersecurity: Effect Mechanisms, Human

    Social engineering attacks have posed a serious security threat to cyberspace. However, there is much we have yet to know regarding what and how lead to the success of social engineering attacks. This paper proposes a conceptual model which provides an integrative and structural perspective to describe how social engineering attacks work. Three core entities (effect mechanism, human ...

  4. Social Engineering Attacks Prevention: A Systematic Literature Review

    Social engineering is an attack on information security for accessing systems or networks. Social engineering attacks occur when victims do not recognize methods, models, and frameworks to prevent them. The current research explains user studies, constructs, evaluation, concepts, frameworks, models, and methods to prevent social engineering attacks. Unfortunately, there is no specific previous ...

  5. (PDF) Social Engineering: An Introducction

    known as social engineering or social attacks [1]. Social engineering consists of techniques used to manipulate people into performing actions or divulging. confidential information. It is the ...

  6. Social engineering in cybersecurity: a domain ontology and knowledge

    Social engineering has posed a serious threat to cyberspace security. To protect against social engineering attacks, a fundamental work is to know what constitutes social engineering. This paper first develops a domain ontology of social engineering in cybersecurity and conducts ontology evaluation by its knowledge graph application. The domain ontology defines 11 concepts of core entities ...

  7. Social engineering in cybersecurity: The evolution of a concept

    This paper offers a history of the concept of social engineering in cybersecurity and argues that while the term began its life in the study of politics, and only later gained usage within the domain of cybersecurity, these are applications of the same fundamental ideas: epistemic asymmetry, technocratic dominance, and teleological replacement.The paper further argues that the term's usages in ...

  8. 3775 PDFs

    Explore the latest full-text research PDFs, articles, conference papers, preprints and more on SOCIAL ENGINEERING. Find methods information, sources, references or conduct a literature review on ...

  9. Defending against social engineering attacks: A security pattern‐based

    1 INTRODUCTION. Social engineering attacks are posting a severe threat to large-scale socio-technical systems. Compared to software vulnerabilities investigated for decades, more and more attackers are using social engineering techniques to exploit people's vulnerabilities to achieve their malicious goals [].According to the Ponemon report [], insider threats have increased in frequency and ...

  10. Falling for Social Engineering: A Qualitative Analysis of Social

    For these reasons, social engineering has become a significant problem for organizations. According to Verizon's (2021, p.15) 2021 Data Breach Investigations Report, phishing emails were the most prevalent attack involved in organizational information security breaches in 2020. Similarly, phishing emails continue to be a major blight for the healthcare industry (Alder, 2021; Kamoun & Nicho ...

  11. Social Engineering Attacks: Recent Advances and Challenges

    Social engineering attacks are an urgent security threat, with the number of detected attacks rising each year. In 2011, a global survey of 853 information technology professionals revealed that 48% of large companies have experienced 25 or more social engineering attacks in the past two years [].In 2018, the annual average cost of organizations that were targets of social engineering attacks ...

  12. Full article: Gaining Access with Social Engineering: An Empirical

    Previous Research and Background. Our research investigates factors that may account for successful social engineering attacks. The perception of threat is defined as the anticipation of a psychological (e.g., assault), physical (e.g., battery), or sociological (e.g., theft) violation or harm to oneself or others, which may be induced vicariously (Lazarus, 1991).

  13. (PDF) Social Engineering

    In this chapter, four. modalities of social engineering (i.e., voice call, email, face-to-face, and text. message) are discussed. We explain the psychological concepts that are involved. in social ...

  14. A Study on the Psychology of Social Engineering-Based ...

    As cybersecurity strategies become more robust and challenging, cybercriminals are mutating cyberattacks to be more evasive. Recent studies have highlighted the use of social engineering by criminals to exploit the human factor in an organization's security architecture. Social engineering attacks exploit specific human attributes and psychology to bypass technical security measures for ...

  15. Predicting individuals' vulnerability to social engineering in social

    The popularity of social networking sites has attracted billions of users to engage and share their information on these networks. The vast amount of circulating data and information expose these networks to several security risks. Social engineering is one of the most common types of threat that may face social network users. Training and increasing users' awareness of such threats is ...

  16. A Multivocal Literature Review on Growing Social Engineering Based

    II. Social Engineering: Definition, Approach and Goals. Social engineering "is the ultimate con—the bag of tricks employed by fraudsters who lie, cheat and steal their way past your organization's security controls. Their goals: theft, fraud or espionage ." Social engineering circumvents all technologies, as well as firewalls.

  17. Overview of Social Engineering Attacks on Social Networks

    Social Engineering has become an emerging threat in virtual communities. Information security is key to any business's growth. ... Ana Ferreira and Al in their research paper- An Analysis of Social Engineering Principles in Effective Phishing talked a lot about the principles of Social Engineering applied in several phishing emails [4]. 3 ...

  18. A Study of Social Engineering Concepts Within a Deceptive Defense

    behavioral deception is the foundation of social engineering, where people that want something from another person could use a form of social engineering. Forms of human manipulation to gain a resource can be considered a form of. social engineering (CompTIA, 2022). It is an attack that has evolved with the.

  19. (PDF) Analysing Social Engineering Attacks and its Impact

    To summarise, this study aims to improve knowledge, defence, and avoidance of social engineering assaults by providing a comprehensive viewpoint on these attacks. Discover the world's research 25 ...

  20. (PDF) SOCIAL ENGINEERING AND CYBER SECURITY

    essence, social engineering refers to the design and application of deceitful techniques to deliberatel y. manipulate human targets. In a cyber securit y context, it is primarily used to induce ...

  21. [PDF] Social Engineering Attacks: A Survey

    This paper provides an in-depth survey about the social engineering attacks, their classifications, detection strategies, and prevention procedures. The advancements in digital communication technology have made communication between humans more accessible and instant. However, personal and sensitive information may be available online through social networks and online services that lack the ...

  22. Social Engineering: Hacking into Humans by Shivam Lohani :: SSRN

    Social engineering is a really common practice to gather information and sensitive data through the use of mobile numbers, emails, SMS or direct approach. Social engineering can be really useful for the attacker if done in a proper manner.'Kevin Mitnik' is the most renowned social engineers of all time. In this paper, we are going to discuss ...

  23. Social Engineering: A Technique for Managing Human Behavior

    Social engineer ing is a human behavior based tec hnique for. hacking & luring people f or s neaking into someone's security system. Since social. engineering relies heavily on human behavior ...