The Impact of Mobile Devices on Survey Responses: Why Question Types Matter More Than Ever

With the proliferation of smartphones and tablets, it’s no surprise that more and more people are completing surveys on their mobile devices. But what does this mean for marketers, product managers, and market researchers? 

In this article, we’ll explore how mobile devices have changed the survey landscape and why it’s crucial to design mobile-friendly surveys. We’ll dive into the various question types, discuss their effectiveness on mobile devices, and provide best practices for designing surveys that work well on screens of all sizes.

But first, let’s take a step back and consider how mobile devices have changed our interaction with technology. These devices have revolutionized how we communicate, consume content, and engage with brands in just a few short years. People spend more time on their phones than ever before, and this trend will continue.

As marketers and researchers, we must keep up with these changes and adapt our strategies accordingly. By understanding the impact of mobile devices on survey responses, we can design surveys that are more engaging, more effective, and ultimately more valuable for our businesses. So let’s dive in and explore the exciting world of mobile surveys!

The Mobile Survey Landscape

The mobile survey landscape is constantly evolving, and staying up to date with the latest trends and statistics is essential. According to Statista, there were 6.92 billion smartphone users worldwide in 2023, meaning 86.29% of the world’s population owns a smartphone. As a result, a large percentage of survey respondents are completing surveys on their mobile devices.

While mobile surveys offer many benefits, such as increased convenience and accessibility, they also present some unique challenges. One of the biggest challenges is the limited screen size of mobile devices. It’s crucial to design surveys that are optimized for smaller screens, with clear and concise questions and answer options.

In a survey by Google, 94% of respondents reported using their smartphones to take surveys.

Another challenge is user attention span. Mobile users often multitask and are easily distracted, so surveys must be engaging and easy to complete. If a survey takes too long or requires too much effort, respondents will likely abandon it before completing it.

Despite these challenges, mobile surveys can be highly effective when designed correctly. In fact, a study found that mobile surveys have a completion rate that is 10% higher than desktop surveys. Additionally, mobile surveys tend to have higher response rates and lower costs, making them an attractive option for brands.

Understanding Question Types

Understanding the different types of survey questions is crucial to designing effective mobile surveys. Let’s closely examine some of the most common question types and how they work on mobile devices.

Open-ended questions allow respondents to provide their own answers and can be useful for collecting qualitative data. However, they can be more challenging to answer on a mobile device, as they often require more typing and can be harder to read on a smaller screen. In contrast, closed-ended questions provide a set of predefined answer options, such as yes or no, and are often easier to answer on a mobile device.

Multiple-choice questions are a popular closed-ended question type, where respondents are given a set of answer options to choose from. These can be effective on mobile devices if the options are clear and easy to read. However, if the options are too lengthy or complex, they may be difficult to read on a small screen.

Rating scales are another common question type, where respondents are asked to rate their level of agreement or satisfaction on a scale of 1 to 5 or 1 to 10. Rating scales can be effective on mobile devices if they are designed to fit the smaller screen size, and the rating options are clearly labeled and easy to select.

Research by Quirk’s Media found that surveys optimized for mobile devices are completed 30-40% faster than those optimized for desktops.

It’s worth noting that some question types, such as matrix questions or grid questions, can be challenging to answer on a mobile device. These types of questions require respondents to evaluate multiple items, which can be difficult to do on a smaller screen.

Best Practices for Mobile-Friendly Surveys

Designing surveys that are mobile-friendly is crucial to maximizing completion rates and gathering accurate data. Here are some best practices for designing mobile-friendly surveys:

  • Keep it concise: Mobile users have limited attention spans, so it’s essential to keep survey questions and answer options short and to the point. Avoid using long or complicated sentences, and consider breaking up longer questions into smaller, more manageable chunks.
  • Use clear formatting: Use a clear and easy-to-read font, with a font size of at least 14 points, to ensure the text is readable on smaller screens. Use plenty of white space between questions and answer options to help respondents navigate the survey more easily.
  • Optimize for different devices: Make sure your survey is optimized for different screen sizes and device types. Test your survey on different devices to ensure it looks and functions correctly on each one.
  • Keep answer options consistent: Make sure that answer options are consistent throughout the survey. This will make it easier for respondents to understand the question and select the appropriate answer.
  • Provide clear instructions: Provide clear and concise instructions at the beginning of the survey to help respondents understand how to complete the survey. Include instructions on how to navigate through the survey and how long it is expected to take.
  • Use skip logic: Skip logic allows respondents to skip questions that are not relevant to them, which can help to reduce survey fatigue and improve completion rates. However, ensure that skip logic is used sparingly, as it can add complexity to the survey.
  • Test and iterate: Testing and iterating are essential parts of survey design. Test your survey on a small sample of respondents before launching it to a larger audience, and use their feedback to make improvements.
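The skip-logic idea from the list above can be expressed as a simple lookup of branching rules. Below is a minimal, hypothetical sketch: the question IDs and the single rule are invented for illustration and do not correspond to any particular survey tool.

```python
# Minimal sketch of skip logic: each rule maps (question, answer) to the
# next question to show. Question IDs and the rule are hypothetical.

def next_question(current_id, answer, rules, default_order):
    """Return the ID of the next question to show, honoring skip rules."""
    target = rules.get((current_id, answer))
    if target is not None:
        return target  # a rule applies: jump to that question
    # no rule: fall through to the next question in the default order
    idx = default_order.index(current_id)
    return default_order[idx + 1] if idx + 1 < len(default_order) else None

default_order = ["q1_owns_smartphone", "q2_phone_brand", "q3_feedback"]
rules = {
    # respondents without a smartphone skip the brand question entirely
    ("q1_owns_smartphone", "no"): "q3_feedback",
}

print(next_question("q1_owns_smartphone", "no", rules, default_order))
# q3_feedback
```

Keeping the rule table small, as the text advises, keeps the respondent's path easy to reason about and test.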

Key Takeaways

Mobile devices have revolutionized how people interact with technology, including completing surveys. To maximize response rates and gather accurate data, it’s essential to design mobile-friendly surveys.

This means selecting the right question types and optimizing surveys for different screen sizes and devices.

Key takeaways from this blog post include:

  • Mobile devices are an important platform for survey completion and should be taken into consideration when designing surveys.
  • Closed-ended questions, such as multiple-choice questions and rating scales, tend to work better on mobile devices than open-ended questions.
  • Mobile surveys should be concise, well-formatted, and optimized for different devices.
  • Best practices for mobile surveys include keeping answer options consistent, providing clear instructions, and testing and iterating.

Brands and researchers can create engaging, effective surveys that provide valuable insights into consumer behavior and preferences by using a mobile-first approach and following these best practices.

The benefits of using mobile surveys for market research

Mobile surveys combine the principles of traditional research with the scale, reach, and affordability of the smartphone-enabled economy.

Why Conduct Surveys?

The first question to ask when choosing your survey collection method is: why are you conducting the survey? Traditional market research offers many methods to distribute surveys and collect responses. You can find respondents among your existing customers or through your email network. You can solicit people to sign up to take surveys and, over time, you may collect a large enough sample to be representative of your target audience.

Most people, however, are looking to grow beyond just their local audience or existing customer base. That’s where mobile surveys can help.

Why Conduct Surveys On Mobile Devices?

There are numerous benefits to reaching consumers via their mobile device to gather data.

The primary benefits to a researcher using mobile surveys over other methods are that they are able to:

  • Reach a broader audience
  • Get faster results
  • Enjoy a lower cost
  • Gain the potential for higher quality responses

Mobile is where your audience spends most of their time.

There are over 5 billion smartphone users globally, and they spend the majority of their time in apps. Mobile internet usage has eclipsed desktop, and the average consumer checks their phone so often that it’s hard to miss them by more than a few minutes.

With a mobile-optimized survey, mobile survey participants provide higher quality responses. They:

  • Can respond at their convenience
  • Are more engaged, since surveys are shorter
  • Find the interface easier to use
  • Enter responses directly, avoiding interviewer bias
  • Are not subject to interviewer misinterpretation
  • Provide more honest answers

Simply stated, all surveys and market analyses try to arrive at the same conclusion; smartphones reach the target audience via a faster, simpler, cheaper, and higher-quality methodology.

And since the Pollfish database has many already-known (measured) characteristics, or variables, you can create a sample with the same characteristics as those of the real population. By stratifying your sample according to a variable that is highly correlated with the variable you need to explain, you get statistically significant results.
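The stratification idea above can be sketched in a few lines. This is a minimal illustration of proportionate stratified sampling under invented assumptions: the panel, the age strata, and the population shares are hypothetical, not Pollfish's actual method.

```python
import random

# Minimal sketch of proportionate stratified sampling: draw from each
# stratum in proportion to its share of the target population.
# The panel, strata, and shares below are hypothetical.

def stratified_sample(panel, population_shares, n, seed=0):
    """panel: dict stratum -> list of respondent IDs.
    population_shares: dict stratum -> population share (sums to 1.0)."""
    rng = random.Random(seed)
    sample = []
    for stratum, share in population_shares.items():
        k = round(n * share)  # this stratum's quota within the sample
        sample.extend(rng.sample(panel[stratum], k))
    return sample

panel = {
    "18-34": [f"r{i}" for i in range(0, 1000)],
    "35-54": [f"r{i}" for i in range(1000, 2000)],
    "55+":   [f"r{i}" for i in range(2000, 3000)],
}
shares = {"18-34": 0.3, "35-54": 0.4, "55+": 0.3}
sample = stratified_sample(panel, shares, n=100)
print(len(sample))  # 100
```

Because each stratum's quota matches its population share, the resulting sample mirrors the population on the stratifying variable by construction.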

So why are mobile surveys preferred over desktop?

Compared with desktop online surveys, mobile surveys:

  • Provide greater reach
  • Reach consumers who are hard to access, which is important for younger cohorts like Gen Z and for countries where the internet is accessed primarily on mobile
  • Increase response rates
  • Decrease survey completion time
  • Enable faster data capture and analysis

In summary, mobile surveys:

  • Provide excellent value, as they are inexpensive and offer greater accuracy
  • Provide a vast array of question types
  • Are easy to use, both for researchers and for survey participants
  • Are the solution when you need to gather data as fast as possible
  • Provide a better participant environment, allowing respondents to preserve their anonymity, respond at their convenience, and answer questions as sincerely as possible

Do you want to distribute your survey? Pollfish offers you access to millions of targeted consumers to get survey responses from $0.95 per complete. Launch your survey today.

Tips for Creating Web Surveys for Completion on a Mobile Device

By Kyley McGeeney

Updated July 1, 2015

“If you’re doing a Web survey, you’re doing a mobile survey,” according to Michael Link, chief methodologist for Nielsen, recent American Association for Public Opinion Research president and a leading authority on mobile surveys. 1  Indeed, in Pew Research Center’s American Trends Panel , a nationally representative, probability-based panel designed to be primarily Web-based, fully 27% of respondents completed their most recent survey on a smartphone (another 8% used a tablet to do so).

With so many respondents taking Web surveys on smartphones, creating surveys with smartphone respondents in mind is critical. This includes both writing the questions with mobile respondents in mind and ensuring that your software properly renders the questions regardless of the type of device respondents are using. 2 If surveys aren’t designed for completion on a smartphone, there can be data quality issues such as inaccurately recorded responses, lazy answers and skipped questions. 3 Furthermore, if they become frustrated, smartphone respondents are more likely than others to abandon a survey altogether. 4

Luckily there are things researchers can do to make surveys more smartphone-compatible. The following are eight tips for creating better surveys for completion on a smartphone.

1. Software should be mobile optimized

Mobile optimization means the software automatically detects the device used, specifically the screen size, and adjusts the layout of the survey accordingly. The font and spacing are larger, as are any buttons that need to be pressed, so that respondents don’t need to pinch to zoom. Additionally, there is no horizontal scrolling, but there may be vertical scrolling.

2. Shorter is better

This refers to both the number of questions and the questions themselves. The longer the survey is, the more likely it is to lose respondents – true for any survey, but especially true for surveys taken on a mobile device. And because smartphone screens are small, shorter questions and response options make it easier for smartphone respondents to read and answer questions, which should improve data quality.

3. Avoid fancy features

It’s tempting to include features such as sliders and spin wheels in surveys. However, research has shown that these kinds of features are difficult for mobile respondents to use correctly because they require a high degree of dexterity. 5  They also may require more time for respondents to use than simpler formats. It’s best to stick to radio buttons, checkboxes or text boxes, if possible.

4. No grids

Surveys presented in grid format are meant to use space efficiently by placing questions and response options into a matrix design. But grids also have drawbacks on all devices. For example, researchers have found that grids make it easy for respondents to simply choose the same response for each item in the grid, a phenomenon known as “straightlining.” These data quality issues are even more pronounced on smartphones, as grids often require both vertical and horizontal scrolling, meaning that not all questions and response options may be visible at once. 6  These issues can also cause smartphone respondents to leave the survey. 7
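Because straightlining leaves an obvious footprint in the data, it is straightforward to screen for after fieldwork. Below is a minimal sketch; the respondent IDs and grid responses are hypothetical.

```python
# Minimal sketch of a "straightlining" check for grid questions:
# flag respondents who chose the same response for every grid item.
# The response data below is hypothetical.

def is_straightliner(grid_answers):
    """grid_answers: one respondent's answers to the items of a grid."""
    return len(grid_answers) > 1 and len(set(grid_answers)) == 1

responses = {
    "resp_a": [4, 4, 4, 4, 4],  # identical answer to every grid item
    "resp_b": [2, 5, 3, 4, 1],
}
flagged = [r for r, answers in responses.items() if is_straightliner(answers)]
print(flagged)  # ['resp_a']
```

In practice a flag like this is one signal among several (e.g. completion speed), not proof of a bad respondent on its own.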

5. Ask multiple questions on the same screen

If grids aren’t ideal, then what is a better option? The answer: it’s okay to put multiple questions on the same screen. 8 Previously it was in vogue in Web surveys to put one question on each screen, but slower mobile load times can unnecessarily burden smartphone respondents. One way to get around this is to group questions about the same topic on the same screen. 9

6. Maximize use of the smartphone screen

There is only so much real estate on a smartphone screen, and it is precious. Avoid cluttering up the screen with logos or graphics (they take longer to load anyway). Additionally, position navigation buttons at the bottom of the screen so that respondents are forced to scroll past all questions and response options; that way nothing is missed.

7. Use a unique URL in the survey invitation

Don’t require respondents to enter an access code, username and/or password to access a survey. Doing so creates one more barrier to completing the survey. This is especially true for smartphone respondents, as these requirements mean respondents will have to switch back and forth between the survey’s invitation email and the mobile Web browser. For an easier user experience, create unique URLs for each respondent so that when they click the link in their invitation, they are automatically taken directly to their survey and can begin immediately.
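The unique-URL approach above amounts to generating an unguessable token per respondent and embedding it directly in the invitation link, so no code or login is needed. A minimal sketch follows; the domain, survey ID, and token length are assumptions for illustration, not any vendor's actual scheme.

```python
import secrets

# Minimal sketch of per-respondent survey URLs: each respondent gets an
# unguessable token embedded in their invitation link, so clicking it
# opens their survey directly. Domain and survey ID are hypothetical.

def invitation_urls(respondent_ids, survey_id="sv_123",
                    base="https://surveys.example.com"):
    urls = {}
    for rid in respondent_ids:
        token = secrets.token_urlsafe(16)  # ~128 bits of randomness
        urls[rid] = f"{base}/{survey_id}?t={token}"
    return urls

urls = invitation_urls(["alice@example.com", "bob@example.com"])
for rid, url in urls.items():
    print(rid, "->", url)
```

Using a cryptographic token generator (rather than, say, a sequential ID) matters here: the token is the only credential, so it must not be guessable.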

8. Invite respondents through a text message

As long as you have explicit consent to send text messages to respondents, consider sending survey invitations via text message in addition to email. Include the survey URL in the text message so respondents can click directly to the survey instead of having to wait for them to check their email.

For further reading on this topic please see the American Association for Public Opinion Research’s (AAPOR) report of the Task Force on Emerging Technologies in Public Opinion Research and the AAPOR webinar “ Smarter Smartphone Surveys 201: Data Collection Methods and Survey Design Considerations .”

  1. Link, Michael, Jennie Lai and Kelly Bristol. 2013. “Accessibility or Simplicity? How Respondents Engage With a Multiportal (Mobile, Tablet, Online) Methodology for Data Collection.” Presented at the Annual Conference of the American Association for Public Opinion Research, Boston.
  2. Mitchell, Nicole. 2015. “The Changing Landscape of Technology and Its Effects on Online Survey Data Collection.” Presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL; Barlas, Frances, Randall Thomas and Patricia Graham. 2015. “Purposefully Mobile: Experimentally Assessing Device Effects in an Online Survey.” Presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL.
  3. Antoun, Christopher. 2015. “Effects of Mobile versus PC Web on Survey Response Quality: a Crossover Experiment in a Probability Web Panel.” Presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL; Barlas, Frances and Randall Thomas. 2015. “The Mobile Influence: How Mobile Participants Affect Survey Results.” Presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL; Mitchell, Nicole. 2015. “The Changing Landscape of Technology and Its Effects on Online Survey Data Collection.” Presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL.
  4. McGeeney, Kyley and Jenny Marlar. 2013. “Mobile Browser Web Surveys: Testing Response Rates, Data Quality, and Best Practices.” Presented at the Annual Conference of the American Association for Public Opinion Research, Boston.
  5. Antoun, Christopher. 2015. “Effects of Mobile versus PC Web on Survey Response Quality: a Crossover Experiment in a Probability Web Panel.” Presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL.
  6. Sterrett, David, Michael Stern, Gwendolyn Rugg, Ethan Raker, Jiwon Baek and Ipek Bilgen. 2015. “The Effects of Grids on Web Surveys Completed on a Mobile Device.” Presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL.
  7. Wang, Mengyang, Allan McCutcheon and Laura Allen. 2015. “Grids and Online Panels: A Comparison of Device Type from a Survey Quality Perspective.” Presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL.
  8. Richards, Ashley, Rebecca Powell, Joe Murphy, Shengchao Yu and Mai Nguyen. 2015. “Gridlocked: The Impact of Adapting Surveys Grids for Smartphones.” Presented at the Annual Conference of the American Association for Public Opinion Research, Hollywood, FL.
  9. Mavletova, Aigul and Mick P. Couper. 2014. “Mobile Web Survey Design: Scrolling Versus Paging, SMS Versus E-mail Invitations.” Journal of Survey Statistics and Methodology, 2(4), 498-518; McGeeney, Kyley and Jenny Marlar. 2013. “Mobile Browser Web Surveys: Testing Response Rates, Data Quality, and Best Practices.” Presented at the Annual Conference of the American Association for Public Opinion Research, Boston.

The definitive guide to designing mobile surveys

According to Pew Research (1), a majority of U.S. consumers own multiple digital devices (desktop/laptop computer, tablet, and smartphone), and they move seamlessly from one platform to another as part of their everyday lives. It is common for consumers to start an online shopping experience on a mobile phone, for instance, then finish the transaction on a PC.

Today’s cross-platform culture means consumers use a variety of devices to access your online survey. Depending on the sample source and target, estimates suggest anywhere from 30% to 60% of all survey takers use a smartphone or tablet to access a survey (2). While researchers are familiar with designing a survey for the PC, how do we also optimize the survey experience for mobile users? The screen sizes and input methods of each device are unique, and these differences must be accounted for in order to render the best possible survey-taking experience for the participant, no matter what device they use. With the dramatic real-estate limitations of a mobile phone and the overall reduction in attention spans, mobile survey takers in particular are susceptible to primacy effects, satisficing (doing just enough to quickly complete a survey rather than giving careful consideration to each question), and survey dropout (3, 4).

Much research over the years has addressed issues of data quality, data consistency, and respondent engagement, especially with regard to best practices for mobile survey design (5, 6). What follows is an attempt to synthesize our current understanding and provide a list of guidelines to make online surveys accessible and optimized for today’s multi-device consumer.

Mobile-first design: “similarity”

Content that is easy to read and use on a desktop screen does not necessarily work on a mobile screen. But content designed and sized to fit on a smartphone can, in most cases, be accommodated on the desktop screen. This is due to the larger desktop screen, which offers more room to reasonably display more content within the viewable area, compared to the smaller mobile screen. Therefore, as a rule of thumb, when thinking about online survey design, start with the lowest common denominator. For most studies, that is the smartphone. Then from there, build out your survey for the remaining devices (e.g. tablet and desktop).

Aim for design similarity among devices. This will help preserve data consistency across devices. It means keeping the fundamental structure of the question and its input requirement generally the same, whether the survey is taken on a desktop, tablet, or smartphone. However, it doesn’t need to be exact, and the priority should be to follow best design practice for the given device (7). Cosmetic differences between devices, such as color and spacing, won’t have an impact, but altering the scale order will. Using a radio button form on a desktop will yield the same results as a button select form on a mobile device, since a point-and-click task is generally the same as a touch event; it is not the same as a drag-and-drop event.

Five general principles

1. Make it big

Touch input

Touch input and a smaller screen are among the most defining aspects that contrast PCs and mobile devices. Therefore, ensure the selection areas of a survey are large enough to accommodate touch input.

Standard desktop-sized checkboxes are too small for a fingertip; instead, use large buttons or adjust the spacing and layout of the checkboxes. These modifications will help reduce survey dropouts among mobile users and maintain data consistency with a traditional checkbox design.

The mobile device may rescale a web page so that the content on that page fits within the viewable area of the screen. This may result in small-sized text. Therefore, ensure all text displayed is comfortable to read from a reasonable viewing distance.

Increasing the font size, for example from 12 to 24 points, makes a question much more readable on a mobile device. Keep font size in mind when designing mobile surveys.

2. Cut the clutter

Only a limited amount of content can comfortably be accommodated within the smaller screen of a mobile device. Content must be simplified to make room for increased font sizes and selection areas. This is quite common outside of the research realm: mobile readers of The New York Times, for instance, don’t see the full website but a simplified and enlarged version.

When simplifying content for mobile surveys, keep text, row, and column lengths to a minimum. Strip out any unnecessary graphics so you can enlarge pertinent content for better reading and increased survey engagement. A large rating grid is an extreme case of too much content on a mobile screen; instead, shrink the number of rating scale points to reduce input requirements. An interactive question can be a suitable alternative, such as a card sort question type (more on this later).

When considering how much content to cut, strive for these goals:

  • Minimize the amount of scrolling required. Vertical scrolling is acceptable, horizontal scrolling is not.
  • Input areas and font sizes should be large enough so that the user doesn’t have to zoom in to properly view and take the survey.
  • Minimize the amount of wrap-around text.

3. Swipe & touch tap

Point and click, dragging objects, and typing are all input methods native to PC use. Conversely, touch tapping and swiping are the primary input methods for smartphones. As such, try to limit input requirements to these two methods when designing mobile surveys for smartphone use. Requiring other types of input is burdensome and encourages participants to abandon surveys. For example, minimize the use of open-ends in mobile surveys, and avoid dragging or sliding events as well.

4. Survey length

The shorter the survey, the better; it should not be longer than 15 minutes. Participant fatigue begins to degrade data quality within the first few minutes of a survey, but signs of ‘satisficing’ become especially pronounced after the 15-minute mark (8). ‘Satisficing’ means doing just enough to get through the task at hand: rather than providing thoughtful answers, participants do the minimum required to complete the survey.

This includes:

  • Not answering optional questions
  • Fewer characters typed in for open-ended questions
  • Spending less time reading and answering questions
  • More evidence of ‘cheating’ (answering in a way that allows them to skip question sections)

Studies specifically looking at survey length for mobile users have also suggested a similar 10–15 minutes maximum range (9).
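The signs listed above can be turned into simple post-fieldwork quality flags. A minimal sketch follows; the thresholds, field names, and response record are hypothetical and would need tuning for a real study.

```python
# Minimal sketch of satisficing flags drawn from the signs listed above:
# skipped optional questions, very short open-ends, and speeding.
# Thresholds and the response record below are hypothetical.

def satisficing_flags(resp, median_seconds):
    """Return a list of quality flags for one survey response."""
    flags = []
    if resp["optional_skipped"] > 2:
        flags.append("skipped_optionals")
    if resp["open_end_chars"] < 10:
        flags.append("short_open_ends")
    # "speeders" finish in well under half the median completion time
    if resp["duration_seconds"] < 0.5 * median_seconds:
        flags.append("speeder")
    return flags

resp = {"optional_skipped": 4, "open_end_chars": 3, "duration_seconds": 110}
print(satisficing_flags(resp, median_seconds=300))
# ['skipped_optionals', 'short_open_ends', 'speeder']
```

As with the straightlining check, a flag is a signal to review, not an automatic reason to discard a response.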

5. Test and re-test

The only way to ensure that participants will have an effective survey experience is through direct and repeated testing. Every researcher should get into the habit of taking their own surveys. This helps pinpoint what may encourage participants to complete or drop out of a survey. Experimental evidence is not needed to identify where text is too small or where input areas are difficult to manage. If a survey experience is not good for the researcher, it won’t be good for participants either. So, take a little time and make the survey more successful by personally testing it.

Test a variety of operating systems and browsers on each of the three main devices (desktop, tablet, and smartphone) to ensure the survey renders properly on each, starting with the browsers and operating systems that have the largest usage share.

Multi-platform software testing services, such as Perfecto Mobile and Keynote are available to check survey compatibility on different devices, operating systems, screen resolutions and browsers. Be sure to check for:

  • Readability: Are all questions and answer options easy to read and understand? Verify text size is readable, and ensure no labels are cut off. If there is too much wrap-around text on the mobile version of the survey, simplify the question wording. If using button select questions, ensure proper color contrast (between background and the scale label).
  • Usability: Do input forms work properly? Are they properly spaced and sized to be “thumb input” friendly? Don’t forget to check that input forms are properly spaced and sized for desktop as well. Enlarging the input areas for mobile may correspondingly overinflate the input areas on the desktop screen. Ensure feedback is provided to the participant after every input event (e.g. change in color state after pressing a button), otherwise this can create confusion. The ‘hit area’ for making an answer selection should also include the entire applicable space not just within the radio button or checkbox form.
  • Performance: Do pages and content load without delay? Graphics, media elements, or style formats that add extra data bandwidth may cause the survey to slow down or feel sluggish on a mobile device connected to a cellular network.
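The color-contrast point can be made concrete: the WCAG 2.x contrast ratio between a scale label and its background is easy to compute as a quick sanity check. A minimal sketch (the 4.5:1 threshold is the WCAG AA guideline for normal-size text, an assumption layered on top of the advice above, not a figure from this article):

```python
# Minimal WCAG 2.x contrast-ratio check for scale labels.

def _linearize(channel):
    """Convert an 8-bit sRGB channel (0-255) to its linear-light value."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """WCAG relative luminance of an (r, g, b) color, each channel 0-255."""
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, ranging from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg, bg):
    """True if the pair meets the WCAG AA 4.5:1 guideline for normal text."""
    return contrast_ratio(fg, bg) >= 4.5
```

For example, black text on a white background yields a 21:1 ratio, while light grey text on white falls well short of 4.5:1 and should be flagged.
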

Single select and multi select question types

  • Radio buttons or button select forms are ideal and render consistently across device types
  • Avoid lengthy wording to minimize instances of wrap-around text
  • Ensure ‘hit areas’ are large enough for touch input
  • Use proper color contrast so that scale labels stand out clearly
  • For button select forms, include the radio button or checkbox symbol to indicate single or multi select
  • Maintain design “similarity” across devices

Categorical scales

Example renderings (figures): single select button select; single select radio button; multi select button select; multi select check box.

Common pitfalls (figures):

  • Too small: not readable, not touch-input friendly
  • Poor color contrast: answer labels difficult to read
  • Overly wordy, with too many instances of wrapping text

5 point horizontal scales

A 5 point horizontal scale may need to be vertically oriented in order to fit within the viewable width of the smartphone screen. While this creates a display inconsistency between device types, research has demonstrated minimal impact on data quality in this particular case (10, 11).

Button select with end point labels: if labels are not necessary for each of the scale points (only the end points are labeled), then button select is a suitable option.

11 point horizontal scales

Button select with end point labels

An 11 point horizontal scale will need to be vertically oriented in order to fit within the viewable width of the smartphone screen. In contrast to a 5 point scale, this display inconsistency between device types does impact data quality (11, 12).
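The decision of when to flip a horizontal scale to a vertical one can be made responsive to the viewport rather than hard-coded per device. A minimal sketch (the 48 px touch-target minimum and 8 px gap are assumed values for illustration, not figures from the research cited above):

```python
# Sketch: choose scale orientation from the viewport width rather than
# hard-coding it per device. The 48 px touch-target minimum and 8 px gap
# are assumptions, not values taken from the cited studies.

def scale_orientation(num_points, viewport_width_px, min_target_px=48, gap_px=8):
    """Return 'horizontal' if all scale points fit side by side, else 'vertical'."""
    required_width = num_points * min_target_px + (num_points - 1) * gap_px
    return "horizontal" if required_width <= viewport_width_px else "vertical"
```

Under these assumptions, a typical 375 px smartphone viewport keeps a 5 point scale horizontal but flips an 11 point scale to vertical, while a desktop viewport keeps both horizontal.
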

A traditional grid will have part of its scale cut off or shrunk down when rendered on a mobile screen.

  • A traditional grid isn’t appropriate for mobile surveys
  • Grids lead to increased survey dropout and inferior data quality compared to alternative options
  • Use an interactive question type instead

Card Sort is a suitable alternative to the standard grid. It is an interactive question type that uses a static scale: when one attribute is rated, the next one slides across the screen. Card Sort outperforms the standard grid on measures of participant engagement and data quality. Despite the scale orientation discrepancy between desktop and mobile, research has shown that data consistency is maintained across devices on 5 and 11 point scales when using Card Sort (10).

  • Minimize the use of open ends, as these increase dropout
  • On average, mobile users will submit three unaided items into a list of open text boxes, e.g. an unaided brand list (10)

Text box list

References

  1. Pew Research. “Smartphone, computer or tablet? 36% of Americans own all three.” https://www.pewresearch.org/fact-tank/2015/11/25/device-ownership/
  2. Internal statistics from Dynata, MaritzCX and Forsta.
  3. Courtright, M., Saunders, T., Tice, J. “Innovation in Web Data Collection: How ‘Smart’ Can I Make My Web Survey?” Paper presented at the 2014 CASRO Technology and Innovation Event.
  4. Brosnan, K., Grün, B., Dolnicar, S. 2017. “PC, Phone or Tablet? Use, Preference and Completion Rates for Web Surveys.” International Journal of Market Research.
  5. Lugtig, P., Toepoel, V. 2016. “The Use of PCs, Smartphones, and Tablets in a Probability-Based Panel Survey: Effects on Survey Measurement Error.” Social Science Computer Review, 34, 78–94.
  6. Mavletova, A. 2013. “Data Quality in PC and Mobile Web Surveys.” Social Science Computer Review, 31, 725–743.
  7. Antoun, C., Couper, M., Conrad, F. 2017. “Effects of Mobile versus PC Web on Survey Response Quality: A Crossover Experiment in a Probability Web Panel.” Public Opinion Quarterly.
  8. Cape, P. “Questionnaire Length, Fatigue Effects and Response Quality: Revisited.” ARF 2015 Think Conference.
  9. On Device Research. 2015. “30 Minute Mobile Surveys Don’t Work.” https://ondeviceresearch.com/blog/30-minutemobile-surveys-don%e2%80%99t-work
  10. Saunders, T., Chavez, L., Chrzan, K., Brazil, J. “Scale Orientation, Grids and Modality Effects in Mobile Web Surveys.” Presented at the AAPOR 2012 Annual Conference.
  11. Saunders, T., Knowles, R., Tice, J. “Tipping the Scales: Best Practices for Using Scale Questions in the Mobile World.” Paper presented at IA Next 2018.
  12. Jue, A., Dowling, Z., Saunders, T., Knowles, R. “Mobilize Me!! – Mobile Survey Design Enhancements.” Paper presented at IA Next 2019. https://www.focusvision.com/resources/mobilize-medesign-techniques-to-improve-survey-participationand-data-quality/


A mobile survey is still a survey: Why market researchers need to go beyond “mobile-first”

According to the 2020 Insights Practice edition of the GreenBook Research Industry Trends (GRIT) report, mobile-first surveys are finally the most popular emerging research method. The annual study found that 56% of researchers are now using mobile surveys, easily beating text analytics (50%) and social media analytics (50%) as the most widely used emerging market research method today.

The popularity of mobile surveys is pretty broad. For instance, a relatively equal percentage of research buyers (52%) and suppliers (58%) are using this technology. Globally, mobile surveys also enjoy wide adoption, with a majority of researchers in North America (55%), Europe (63%) and APAC (57%) using them.     

While it’s encouraging to see that mobile surveys are finally being used by a majority of researchers, GreenBook’s study raises an important question: Is the market research industry adopting mobile-first practices fast enough? 🤔

GreenBook’s survey found that 20% of researchers are considering bringing mobile survey software into their research toolkit, but 15% are still unsure—and 6% are not interested at all. Other mobile-first methodologies, such as mobile ethnography and micro-surveys, are growing but aren’t yet being used by the majority.

“It is worrying that 20% of people only list mobile-first surveys as being under consideration,” writes Ray Poynter, Chief Research Officer at Potentiate, in his analysis of GreenBook’s report. “For some time, it has been necessary to accommodate mobile devices for most projects, and it is widely recognized that mobile first is the best way of doing that.”

According to recent stats, 5 billion people globally have mobile devices. Many of these people—for instance, younger ones like Gen Zs—are mobile-first consumers or digital natives. If companies want to reach and hear from a more diverse set of customers, their market research strategy should be mobile-first rather than simply mobile-friendly.

A mobile survey is still a survey

While adopting survey software that supports mobile-first initiatives is important, it’s not enough if companies want to increase response rates, get richer insights and improve business outcomes. Using mobile surveys doesn’t address the hard reality that surveys in general are falling out of favor with consumers.

The volume of feedback requests people get is resulting in a survey backlash. As MarketingMag editor Ben Ice recently pointed out, customers are “buried under the immense weight of requests for feedback and survey participation.” From the ubiquitous NPS and CSAT questionnaires to those annoying pop-up website surveys, people are constantly being asked to rate this and that.

Unfortunately, customers don’t see or understand how their feedback is resulting in meaningful improvements.

“Many customers submit themselves to [the survey] process, hoping for better experiences,” points out Gene Leganza, VP at Forrester, in an article about his 2020 predictions. “But firms have rewarded these customers poorly: Customer experiences haven’t gotten better for three years.”

The terrible survey experience is resulting in lower response rates and declining ROI, according to Forrester. The research firm predicts that more companies will adopt customer intelligence tools and build in-house research platforms this year as a result of the survey backlash.

Unfortunately, re-sizing buttons and open-text boxes so they fit on mobile screens won’t increase consumer appetite for surveys. What’s needed is a complete re-imagining of the survey experience and an examination of the value we’re providing research participants.

Think back to the last time you took an online survey. Was the experience fun and rewarding, or did it seem like a super long interrogation? I’m willing to bet it’s the latter.

“If you’ve ever taken an online survey, you know that most look like you’re taking a test,” Reach3 Insights CEO and founder Matt Kleinschmit recently explained to MediaPost. “They’re oftentimes very long, there are endless multi-choice grids, radio buttons, and very clinical, formalized language.”

This approach is problematic, according to Kleinschmit, as it results in feedback that’s not as rich, candid or authentic.

“People get into what I refer to as test-taking mode. They start to think about their answers, they’re rationalizing their responses, they feel like they’re trying to provide the answers that the person who is giving the test wants to hear.”

At Rival, we think the survey fatigue issue is so big that we created a new insight platform that—in many ways—is similar to traditional survey tools, but is also significantly different.

We don’t call our surveys “surveys.” We call them chats—conversational surveys hosted through messaging apps like Facebook Messenger, WeChat, WhatsApp or on mobile web browsers. Chat surveys are mobile-first, but they’re not just mobile surveys.

Yes, you can capture both quantitative and qualitative data through chats, but they’re different from mobile surveys in several ways:  

  • Everything about chats—from the language and tone used to the user experience—is conversational. If it sounds like a survey, it’s not really a chat!
  • Chats are friendlier, more informal and more visual than mobile surveys
  • They’re shorter and more iterative
  • They use the smartphone’s capabilities to make it easy and seamless for research participants to submit photos, audio and selfie videos

Early research-on-research on chats shows that people are responding to chats as a new way of engaging with brands.

Conversational surveys and the respondent experience

Just as important, chats provide the continuity researchers need to be confident in the data they get. In a parallel study we did last year, we saw that chats would generally yield the same business decisions as traditional surveys. Chat survey research also does not introduce any demographic skews if the sample source is the same. That said, given that chats work seamlessly with popular mobile messaging apps and social media, we see a huge potential for this technology to improve in-the-moment research and to reach Gen Zs and more ethnically diverse respondents.

Conclusion  

It’s great to see mobile surveys moving from a market research trend into a mainstream, everyday tool for capturing customer insights. As we explored here, however, there’s a bigger opportunity to completely re-imagine surveys and deliver a respondent experience that’s more fun, more conversational and more human.

IT'S TIME TO GO BEYOND MOBILE SURVEYS

Watch our webinar, "Mobile Research Best Practices," to learn how to harness the full power of mobile tech to engage your customers and fans for insights.

Kelvin Claveria (@kcclaveria) is Director of Demand Generation at Rival Technologies.

Being Mobile: How the Usage of Mobile Devices Affects Surveys with Highly Educated Respondents

  • First Online: 13 September 2022

  • Florian Bains
  • Bruno Albert

Part of the book series: Higher Education Research and Science Studies ((HERSS))

In this article, we examine the effect on data quality of using mobile devices in online surveys with highly educated respondents. Utilising panel data from the German National Educational Panel Study (NEPS) student cohort, we employ regression analyses and propensity score matching to estimate the effects of devices while controlling for confounding factors. We find higher item non-response, shorter answers to open-ended questions and longer completion times on smartphones and tablets compared to laptops and PCs. Using the difference-in-difference estimator for respondents who switched devices between two waves of the panel, we further assess whether the effects we find can be attributed to the devices or to the individual characteristics or situation of the respondent. We find that switching from PCs to mobile devices is associated with a decline in data quality, while switching from mobile devices to PCs is associated with an improvement in data quality. We find that differences in data quality between mobile devices and PCs are, at least partly, associated with features of the device used. We thus conclude that data quality is influenced by a combination of the characteristics of devices and of the participants who chose those devices.
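For readers unfamiliar with the difference-in-differences estimator the abstract mentions for device switchers, the two-group, two-period logic can be sketched in a few lines. All numbers below are invented for illustration; this is the textbook estimator form, not the authors' actual analysis code:

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Two-group, two-period difference-in-differences estimate.

    treated = respondents who switched from PC to mobile between waves;
    control = respondents who stayed on PC. The values could be, e.g.,
    counts of item non-responses per wave (invented for illustration).
    """
    mean = lambda xs: sum(xs) / len(xs)
    change_treated = mean(treated_post) - mean(treated_pre)
    change_control = mean(control_post) - mean(control_pre)
    return change_treated - change_control

# Switchers' item non-response rises from 2 to 5; stayers' rises from 2 to 3.
effect = did_estimate([2, 2, 2], [5, 5, 5], [2, 2, 2], [3, 3, 3])
print(effect)  # 2.0 extra item non-responses attributable to the device switch
```

Subtracting the control group's change nets out wave-to-wave trends shared by everyone, isolating the part of the decline associated with switching devices.
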

This paper uses data from the National Educational Panel Study (NEPS): Starting Cohort First-Year Students,  https://doi.org/10.5157/NEPS:SC5:14.1.0 . From 2008 to 2013, NEPS data were collected as part of the Framework Program for the Promotion of Empirical Educational Research funded by the German Federal Ministry of Education and Research (BMBF). Since 2014, the NEPS has been conducted by the Leibniz Institute for Educational Trajectories (LIfBi) at the University of Bamberg in cooperation with a nationwide network.

Some of the conceptual work for this article was made possible by funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the priority programme 2267 ‘Digitalisation of Working Worlds: Conceptualising and capturing a Systemic Transformation’ (project number 442171541).

For more information, visit https://www.neps-studie.de/Studien/Hochschulstudium-und-%C3%9Cbergang-in-den-Beruf .

Documentation of the Scientific-Use-File can be found at https://www.neps-data.de/Datenzentrum/Daten-und-Dokumentation/Startkohorte-Studierende/Dokumentation .

Author information

Authors and affiliations.

Leibniz-Institute for Educational Trajectories (LIfBi), Bamberg, Germany

Florian Bains

Friedrich-Alexander-Universität Erlangen-Nürnberg, Nürnberg, Germany

Bruno Albert

Corresponding author

Correspondence to Florian Bains .

Editor information

Editors and affiliations.

Department of Educational Trajectories and Employment, German Centre for Higher Education Research and Science Studies (DZHW), Hannover, Germany

Gesche Brandt

Susanne de Vogel

Copyright information

© 2022 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter

Bains, F., Albert, B. (2022). Being Mobile: How the Usage of Mobile Devices Affects Surveys with Highly Educated Respondents. In: Brandt, G., de Vogel, S. (eds) Survey-Methoden in der Hochschulforschung . Higher Education Research and Science Studies. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-36921-7_5

DOI: https://doi.org/10.1007/978-3-658-36921-7_5

Published: 13 September 2022

Publisher Name: Springer VS, Wiesbaden

Print ISBN: 978-3-658-36920-0

Online ISBN: 978-3-658-36921-7

eBook Packages: Education and Social Work (German Language)

5 Major Benefits of Mobile Surveys for Market Research

Today, around 7.26 billion people own mobile devices. The mobile phone is a multi-purpose device from which we shop, consult our bank accounts, make restaurant reservations, interact with other people and consume all kinds of multimedia content. This figure is expected to grow steadily, which makes sense considering that we’ve collectively come to depend on our mobile phones to meet a significant number of needs in our lives.

The constant and widespread use of mobile phones creates countless benefits for conducting mobile surveys. Mobile surveys guarantee, among other things, speed, reliability, and convenience. This makes them stand out from other platforms, such as computers, which are not readily available to a large part of the population.

At Zinklar, we have  created a  mobile survey platform  that has the unique benefit of providing high-quality data in real-time. Let’s take a look at some of the additional benefits that come from mobile surveys and why we believe the  future of market research is mobile.

Read below the top 5 benefits of using a mobile survey:

1. Greater Representation Within the Sample

One of the key advantages of conducting surveys via mobile devices is the ability to reach very diverse audiences. The phone allows access to representative samples of all ages, conditions, and socioeconomic levels, which enriches the quality of the data compared to other platforms. Computer access presents a legitimate barrier to reaching all users: not everyone has access to a computer and, if they do, they have to make time to sit in front of it to answer the questionnaire. With a mobile device, it is always easier to find a moment between activities in which to do so.

2. Faster Result Time

These days, who spends even five minutes away from their phone? Beyond the day-to-day capabilities of smartphones that we rely on, mobile phones are the primary means of internet access for many people. So if brands want to reach the consumer faster and generate faster insights, the key is reaching people in their pockets. With mobile surveys, users can answer in total comfort at any time, without having to wait to sit in front of a computer. Data can be collected earlier, which speeds up decision-making.

3. Up-to-the-Moment Precision 

Creating mobile-exclusive studies allows for fast results timed to precision with real-time activities. Interaction via mobile lets companies select the specific moments when they want to send out surveys, making it likely that consumers are most engaged. For example, a survey on pizza consumption could be sent out late on a Friday afternoon, a study on cleaning product consumption could go out on a Sunday mid-morning, or a study on family multimedia consumption could be launched on a Saturday evening.

4. Brevity Yields Better Results

Surveys designed for cell phones are shorter and more concise. Brevity increases the sample’s willingness to complete the study and consequently allows for better-quality data collection. In fact, studies have shown that surveys that clock in at around seven minutes yield the best-quality data. To this end, insights teams need to design questionnaires that consistently and accurately collect the most relevant information in the least amount of time.

5. Access to Additional Data

Mobile phones make it possible to combine stated findings with additional user data that technology makes easily accessible. For example, users can scan a purchase receipt with the phone camera to report the details of their last purchase, or tag their location when leaving a review. Users can opt into sharing this information at the moment that is most relevant and convenient for them.

These mobile survey benefits make it possible to achieve two vital goals for market research: sample quality and data quality. With these variables at the forefront of market research efforts, mobile technology is the best solution for brands looking to optimize their research processes. Putting convenience at the center of the user experience and combining it with process agility is the way forward, and mobile surveys are the key to getting there.


GeoPoll

Frequently Asked Questions around Mobile Phone Surveys

Roxana Elliott | May 15, 2020 | 5 min. read


Despite mobile-based methodologies being the safest and most effective way to gather data during a crisis such as COVID-19, there are still unknown factors when using mobile to collect data. Who can be reached, what modes are best suited to each project, and how questionnaires should be designed are just a few of the questions that come up when organizations are looking to transition projects from face-to-face methodologies to mobile surveys.

While formal research on the usage of mobile surveys is sparse, below is an overview of the research available and anecdotal evidence from GeoPoll’s 8+ years conducting remote survey work in Africa, Asia, and Latin America.

What can mobile phone surveys be used for?

Mobile surveys have been used as a tool by reputable organizations including the World Bank, the United Nations World Food Programme, Unilever, GIZ, and Insight2Impact for several years. There is some debate over whether mobile surveys can yet fully replace face-to-face studies; however, there is agreement that mobile surveys excel at collecting rapid data during crises. USAID, FHI360, Keystone Accountability, and others have utilized remote methodologies to identify vulnerable populations and shifting trends during crises including the 2014-2015 West Africa Ebola outbreak. Organizations such as the World Food Programme have spearheaded the usage of mobile surveys in multiple regions to gather food security data, finding that mobile was able to correctly identify trends and seasonal shifts in food security.

Additionally, there is evidence that mobile is better at gathering data on sensitive questions than in-person modes. Research GeoPoll conducted with Kantar TNS found that SMS respondents were more likely to indicate that they felt unsafe in their homes and that they have gone without food than respondents from the compared face-to-face survey.

Who can be reached with remote mobile surveys?


Studies have found that the mobile population overrepresents those who are more educated, male, and younger age groups in many countries, but it’s still possible to target those who are older or less educated – it simply may require a larger sample base to draw from to find these respondents. GeoPoll has often targeted very specific groups, from farmers of certain crops to mothers of young children, and is able to do so through careful sampling methods and screening questions. We can also create samples which are nationally representative by key demographics through a stratified random sampling approach and use of quotas to reach the desired sample size within each demographic group.
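The stratified-quota approach described above can be sketched in a few lines: shuffle the pool of potential respondents, then accept people from each demographic stratum only until that stratum’s quota is filled. This is a minimal illustration with hypothetical field names and quotas, not GeoPoll’s actual sampling implementation.

```python
import random
from collections import defaultdict

def quota_sample(population, strata_key, quotas, seed=0):
    """Draw a quota sample: accept respondents per stratum until its quota is met.

    population: list of respondent dicts
    strata_key: demographic field to stratify on (e.g. "age_group")
    quotas: desired count per stratum, e.g. {"18-24": 2, "25-34": 2}
    """
    rng = random.Random(seed)
    pool = list(population)
    rng.shuffle(pool)          # randomize who gets considered first
    counts = defaultdict(int)  # how many accepted per stratum so far
    sample = []
    for person in pool:
        group = person[strata_key]
        if counts[group] < quotas.get(group, 0):
            sample.append(person)
            counts[group] += 1
    return sample

# Hypothetical pool: 6 younger and 4 older respondents
people = [{"id": i, "age_group": g}
          for i, g in enumerate(["18-24"] * 6 + ["25-34"] * 4)]
picked = quota_sample(people, "age_group", {"18-24": 2, "25-34": 2})
print([p["age_group"] for p in picked])
```

In practice, quotas would be set from census or subscriber-base proportions so the final sample is nationally representative on the key demographics.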

Finally, GeoPoll uses mobile-friendly Living Standard Measures questions to calculate the LSM or SEC group of respondents. GeoPoll’s recent studies on coronavirus and other studies have included respondents from the lowest SEC groups, who typically do not have running water or electricity in their homes. Certain modes may be better for targeting certain groups; CATI has been found to be better at reaching older age groups, while IVR may reach those less educated.

Who you will target also depends on the sample source; GeoPoll has access to mobile subscriber databases in over 60 countries which we can draw from, or we can gather sample through an enhanced Random Digit Dialing (RDD) process, or recruitment through online, radio, or TV advertisements. GeoPoll can also send surveys to provided lists of contacts for those looking to reach a pre-recruited group.

What mobile survey mode should I use?

Current available modes from GeoPoll include:

  • SMS: Surveys conducted via a two-way SMS chat
  • CATI: Voice calls placed by trained interviewers working remotely during COVID-19
  • IVR: Automated voice calls with a recording
  • Mobile web: A mobile web link, sent via SMS or another mode, that opens in a basic web browser
  • Mobile application: Applications that administer surveys to those with access to smartphones

The ideal mode for conducting research will depend on a variety of factors, including questionnaire length, budget, and target respondent groups. It has been found that CATI and IVR are generally more expensive than SMS; however, cost varies widely by mode and also depends on the countries studied, screening criteria, number of questions, and other factors. To get a true estimate of cost, you should contact a research firm such as GeoPoll, which can provide price quotes by mode for your specific project.


How should I design questionnaires for mobile?

As many remote mobile methodologies are self-administered (with the exception of CATI) and have other limitations such as character limits, questionnaires for mobile must keep the mode of survey research in mind. For example, while one study found that individual question length didn’t affect response rates, studies GeoPoll and others have done have found that longer SMS studies have lower completion rates. Additional studies have demonstrated that certain question types may work better in certain modes; for example, a GeoPoll study found that SMS select-all-that-apply questions yield fewer answers than forced-choice questions.

What other factors should I consider when conducting mobile research?

Other factors to consider when embarking on a mobile research project include:

  • Incentives: Findings are mixed on the use of incentives; a study in Ghana and Tanzania found that small incentives increased completion rates but that higher incentives had effects similar to lower amounts, and other studies have found lesser effects. Some surveys may also cost respondents airtime; GeoPoll and other services often use zero-rated shortcodes to send messages so that they can be received and replied to even when respondents do not have airtime.
  • Language: Many countries speak multiple languages, and surveys should be offered in more than one language – for SMS, services such as GeoPoll offer multi-language surveys, and for CATI, research providers should have multiple interviewers with different language skills. GeoPoll’s call centers are staffed with interviewers who often speak up to 6 languages, depending on the country.
  • Local Context: It has also been found that the local context is important. For example, dialects and wording intent can vary throughout regions, which is why GeoPoll always uses local interviewers to conduct voice calls and makes multiple checks on other survey types before sending them out. GeoPoll has also found that in countries such as Nigeria, female interviewers have higher response rates.
  • Speed: The speed at which you are looking to collect data will have an impact on the mode you choose. SMS and mobile web surveys can be sent to tens of thousands of respondents at once, while the speed at which CATI interviews are conducted depends on the number of interviewers hired. GeoPoll has also found that IVR response rates can be very low, which can slow down data collection.
  • Cost: Your budget for conducting research will be a factor in determining mode, sample size, and other aspects of data collection. While there is a tendency to think lower cost equals lower quality, this is not always the case; for many projects, SMS and mobile web surveys are a good option for gathering quality data at lower costs than CATI.
  • Data Output: Data can often be delivered in multiple formats – some research organizations will provide raw data in Excel or SPSS, and others may do data analysis for you. GeoPoll has a full-service research team who do data cleanup and analysis and can provide raw data, create reports, and build custom dashboards.

Conducting Mobile Surveys during COVID-19

Although there is still much research to be done on mobile surveys, coronavirus provides researchers an opportunity to test new methodologies that will be referred back to for many years to come. While mobile may not yet reach every person on the globe, it will within the coming decades, and so we must continue to test different methodologies in order to better understand the nuances of mobile data collection.

When conducting mobile research, we recommend using a firm like GeoPoll that has years of experience in the nuances of remote mobile methodologies. To request a quote from GeoPoll for mobile-based research, please contact us here .



  • Apr 12, 2022

Mobile Survey: Everything You Need To Know

Updated: Mar 26

A person providing feedback via a mobile survey on their smartphone

Mobile phones have become a key part of everyone's daily lives and are used for far more than calls and texts: uploading videos to social media, running online searches, and making contactless payments. So it is crucial to tap into this channel for feedback and research by conducting a mobile survey to reach and engage your key audience.

Table of contents:

What is a mobile survey?

What are the advantages of mobile surveys?

How are mobile surveys used?

9 tips for using a mobile survey

[Disclosure: This post contains affiliate links, meaning we get a commission if you decide to make a purchase through these links at no additional cost to you.]

A mobile survey is one in which a targeted group of individuals answer survey questions using a mobile device, whether a mobile phone or a tablet, provided the survey is optimised to display on mobile devices (an approach known as "mobile first").

Mobile surveys are accessible to a wide audience who are likely to respond, as the number of mobile users continues to grow worldwide and people are putting these devices to multiple uses, such as social media or taking pictures.


Accessibility in reaching a large audience is one advantage of mobile surveys, but there are many more benefits to using them, highlighted below.

Enables you to access hard-to-reach groups

As there is a growing number of people using mobile devices, you are able to reach younger audiences who are more receptive to technology; mobile surveys make it easier to contact them, whereas reaching this audience via desktop surveys is more difficult.

Adapts to all screen sizes

Following on from the point above, mobile surveys can be used on all screen sizes, provided they are optimised to display on mobile devices. Standard online surveys have tended to be more suitable for laptops or desktop computers and have struggled to appear correctly on mobile devices, where certain questions, especially those with large visuals, may not work properly.

Easy to use and distribute

Mobile surveys are great for capturing feedback from respondents while they are on the move. As this type of survey is compatible with any mobile device, participants can complete it at their own convenience and don’t need to be at a computer. Distribution is easy too, via SMS, email, website pop-ups, social media, QR codes, and push notifications.

Better response rates

As people carry their mobile phones around all the time, surveys can conveniently be completed at any time and all audiences can be reached, resulting in a higher response rate than more standard online surveys. This is particularly true if you make your surveys more engaging with the tools available and even include your own branding, so respondents have a better user experience when answering your questions.

Faster responses

As mobile surveys are much shorter than traditional surveys, you tend to get a better response, as mentioned above, with the additional benefit that you can collect and analyse results in real time rather than waiting, so you can take action sooner.

There are a variety of ways mobile surveys are utilised, whether that is for market research or feedback, below are just some examples of how mobile surveys are used by businesses and organisations.

Post product feedback

Shortly after purchase, customers are asked, via a downloaded app, email, or SMS (mobile survey invite), to give their thoughts on and review of the new product.

Restaurant and food delivery order feedback

Customers are prompted to give feedback after mobile payment, through a QR code or a message, regarding the meal, service, and delivery time.

Research trackers

For continuous research trackers of consumer product or service brands, mobile surveys are an ideal way to improve response rates and receive results quicker over time, particularly if you are trying to reach younger audiences or people in remote areas.

Ad hoc surveys

For one-off surveys such as concept tests, pricing research, new product development, and ad evaluations, mobile surveys are great for the same reasons mentioned above: reaching all consumers and getting quicker results.

In-store experience

With the use of QR codes or downloaded apps, you can ask customers for feedback while they are in the store: their overall experience, their reasons for visiting, their location, and other aspects of their shopping experience.

Video diaries

Customers record videos of their experience with a brand, such as product reviews, letting you visually see the products, the shop, and copies of receipts, which gives context to what they are saying.

Website feedback

Pop-ups appear when visitors browse a website on their mobile phone, asking for feedback about the site: whether it is easy to navigate, its visual appearance, their reasons for visiting, and how they heard about it.

Customer service feedback

This could be based on a customer's experience in-store or over the phone with a sales representative, and tends to be post-purchase feedback on the overall service received.


The following are 9 tips to keep in mind when using a mobile survey:

1. Keep the overall length of the survey short

People have a limited attention span, especially when using their mobile for a survey, so the ideal length for this type of survey tends to be 3-4 minutes and should be no more than 10 minutes; otherwise, many people will drop out.

2. Questions should be short and simple

Questions should be easy to understand and concise especially as there is not much room available on a mobile screen.

3. Ensure the surveys are touch screen enabled

You need to ensure the surveys are touch screen enabled in order to be suitable for different screen sizes.

4. Minimise the amount of scrolling

Limit the amount of scrolling the respondents will need to do in order to answer each question.

5. Avoid large images

It is best to avoid large images: otherwise, the image will not display clearly in portrait view without scrolling in every direction, and the file size may prolong download time. If you are having difficulty showing an image clearly, add an instruction at the start of the question asking the respondent to rotate their screen to landscape to see the picture more clearly.

6. One question should appear at a time

Keep it limited to one question at a time as you cannot get away with multiple questions on one mobile screen as you may do with a survey on desktop.

7. Limit the number of open-ended questions

As these questions invite respondents to give their opinion in their own words, answering can take some time, so it’s best to limit yourself to one or two open-ended questions.

8. Keep away from matrix style questions if you can

It's best to stay away from matrix-style questions if possible: they may work well on desktop computers, but large grids are not compatible with mobile screen sizes, so break them down into smaller chunks for the respondent to digest and answer.

9. Make sure to test the mobile survey

It’s best practice to test a mobile survey a number of times before launch. You can spread the task of testing amongst family, friends, and colleagues to ensure your survey works properly and to identify and resolve any issues early on.

We hope you enjoyed this article. If you are interested in running your own mobile survey, check out the 5 best survey maker platforms, which enable you to do mobile surveys easily and much more.



Mobile App Surveys: Questions, Templates, and Tips for 5-Star Ratings


Here’s some bad news: friction can kill your mobile app.

And that’s not all. No matter how many hours you pour into development, planning, and promotion, you can’t avoid all bugs and UX mistakes. Especially in the early days, updates and fixes are all developers’ bread and butter.

But users expect a seamless experience. And if you don’t react early enough, customer satisfaction will drop—and so will app store ratings and retention rate.

According to research by Google , only 9% of users will stay on a mobile app that doesn’t fully meet their needs. And 29% of dissatisfied users will ditch your app immediately after a problem occurs.

So, is there anything you can do?

Sure, there is. You just need to be alert to user feedback to spot bugs early on, act on customers’ complaints, and redesign your mobile app to meet their needs. In short: your app needs to be user-centered. The best way to achieve this is to set up mobile app surveys.


With the right mobile survey questions popping up at the right moment in your app, you will catch your users when they're most engaged. You’ll collect mobile app feedback that will let you remove friction and improve the customer experience. And if you play your cards right, you will get more 5-star reviews and avoid negative ones.

In this article, you’ll find:

  • A comprehensive list of mobile app survey benefits and best practices;
  • The most relevant mobile app survey questions;
  • Ready-to-go in-app survey templates to implement in your mobile app in minutes.

Try out our wildly popular NPS survey for free:

Benefits of mobile app surveys

Mobile app surveys are a must if you want to keep up with your users and competition. Here are a few benefits of a well-designed mobile survey.

1. Improve and speed up mobile app development

An in-app survey will help you spot bugs and UX (user experience) shortcomings in the early development stage.

Ask your users if they’ve encountered any issues while using the app. You can request general app feedback or inquire about specific features.

The best survey tools , such as Survicate, will let you run targeted surveys. They can appear in particular parts of the app or only to a specific group of users. This way, you’ll quickly discover which parts of your app need fixing. The survey template below allows you to gauge your users' impressions of your app:

2. Design your product roadmap based on your users’ needs

User input will let you create a product roadmap and prioritize tasks based on data, not guesswork.

Don’t be afraid to ask your users what feature they’d like to see in your application. You can do that by adding open-ended questions to your surveys to collect qualitative feedback .

Add the most frequently repeating answers to your roadmap. If you receive a lot of customer feedback , you can run an additional survey asking your users to choose from the most popular options.

This product feature prioritization survey will help you discover the why behind your users' needs:

3. Keep your mobile app bug-free

No matter how many scenarios you run, you can’t predict every possible friction in user and customer experience . This is why you should use surveys even in stable apps. Give your users the possibility to voice their negative feedback —and act on it. You’ll spot emerging issues early and keep your customers satisfied.

For even deeper customer journey insights, combine qualitative data from your survey with quantitative data from your product analytics tools. For example, if you notice your users often abandon your app after using a specific feature, trigger a survey at that touchpoint and ask your customers what exactly made them leave and how you could improve for them.

Many survey tools, Survicate included, offer native integrations that let you streamline feedback directly to your analytics software, like Mixpanel, Amplitude, and Productboard.

Survicate integration with Amplitude

The product satisfaction survey template below is a great example of how to make sure your product caters to your users’ needs.

4. Catch negative feedback before it turns into bad reviews

If you have a lot of negative survey responses , your app store rating will reflect that. Don’t blindly ask every user to leave you an online review. Instead, use skip logic —create different paths for the respondents depending on their previous answers.

Ask unsatisfied users what they dislike about your app, and try to fix the shortcomings. With survey apps such as Survicate, you will receive notifications of every response, so you’ll have a chance to react in real time. If your users notice you care, you might avoid a few bad reviews or even turn a hater into a fan.
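The skip logic described above boils down to a tiny routing function: look at the previous answer and pick the next question accordingly. This is an illustrative sketch only; the question wording and the 1-5 rating threshold are hypothetical, not Survicate's actual API.

```python
def next_question(rating):
    """Route a respondent based on a 1-5 satisfaction rating.

    Skip logic: satisfied users (4-5) are steered toward a review
    prompt; unsatisfied users (1-3) are asked for specifics instead.
    """
    if rating >= 4:
        # Happy path: capitalize on the goodwill and ask for a review
        return "Would you like to rate us on the app store?"
    # Unhappy path: collect actionable criticism, avoid a public bad review
    return "What do you dislike about the app?"

print(next_question(5))
print(next_question(2))
```

Real survey tools express the same branching through a visual editor rather than code, but the underlying logic is exactly this conditional.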

5. Reap the benefits of good word of mouth

Do you get a lot of positive survey responses? Why not ask your loyal users to rate you in an app store, recommend you to their friends, or become beta testers? 

Add a call to action to your in-app survey to redirect your users wherever you want. You can also use an incentive to motivate them.

Use this quick G2 reviews survey to ask happy customers to endorse you:

6. Understand how your users feel about your brand

If the mobile app is your flagship product, it’s a good place for a general customer experience survey . Catch your customers in their preferred channel when they’re the most engaged—and gauge their satisfaction and loyalty with an NPS (Net Promoter Score) or CSAT (Customer Satisfaction Score) survey such as the one below:

7. Use surveys as an extra communication channel

Mobile app surveys are an excellent point of contact with your customers. Don’t shy away from open-ended survey questions, as long as they’re well-designed—they’ll let you hear your customers’ voices.

You can also turn your surveys into a mini contact form . For example, when you run a new customer survey, ask them if they’d like to be contacted by your customer support at the end. 

Give your customers all the attention they need to prevent churn with the new customer survey :

13 best mobile app survey questions

Keeping your app user-friendly will ensure all the time and resources poured into app development won’t go to waste. To achieve this, you should run mobile app surveys at every stage of your app development.

But to really make your surveys count, you have to ask the right questions.

There are two main question types you can choose from:

  • Closed-ended questions, where the user has to select an answer from a limited list.
  • Open-ended questions, to which the user responds using their own words.

Combining the two types will get you quantitative results (that you can benchmark and turn into KPIs) and actionable qualitative feedback.

Here’s a list of 13 questions that will help you design your perfect survey , divided into four categories that reflect your mobile app’s development stage:

  • early development technical questions
  • early development features scouting
  • product roadmap survey questions
  • general customer experience questions

And if you don’t feel like designing the survey yourself and want something tried and tested, why not use one of our 125+ ready-to-go survey templates?

Questions for an early development technical survey

#1. The app made it easy for me to [perform an action].

With this straightforward statement, you will measure if your mobile app’s features are easy to use.

This question is a part of the Customer Effort Score (CES) survey. You can pair it up with an additional open-ended question (such as “What would you change about this feature?”) to discover your users’ frustrations and any in-app friction points.

The action in this question might be about any new feature you’ve just rolled out—the payment process, the search function, the tag system, you name it.

Here's the complete Customer Effort Score survey template:

#2. How does the app run after the update?

No amount of testing will let you predict all potential UX issues . Ask your users if they’ve encountered any challenges—especially if your mobile app is brand new or you’ve just developed a new feature.

This question works best as a two-part survey. Always ask the users who aren’t completely satisfied to describe the issues in detail.

One more tip: filter responses based on the user’s device. This will help you identify the source of the problem. You can do that by integrating your survey tool with your CRM to get more data points—or simply by adding a question about the respondent’s device and operating system.

#3. How do you like the app design?

There’s no outstanding customer experience without a well-designed user experience . Your customers shouldn’t have any trouble navigating through your mobile app—and they should be esthetically pleased while doing so. Otherwise, they’ll quickly get frustrated and leave.

Try to gauge how your users feel about your app design as early as possible and tweak it if necessary with the app redesign survey template :

Questions for an early development features scouting survey

#4. Is our app helping you achieve your goals?

This question will let you discover the ideal use cases for your mobile app. You’ll find out why your clients install your app and when it’s the most useful—and discover if the reality is consistent with your presumptions.

If your app gets the job done for your users , then you’re one step closer to building a lasting relationship with them. If it doesn’t, ask the survey respondents to clarify their answers with an additional question, such as “What should we improve to help you achieve your goals?”

Either way, you’ll easily decide which parts of your app you should focus on. And don’t forget to turn your most popular use case into a value proposition !

Here's an app evaluation survey template you can use:

#5. Are there any functions you would like us to add?

Let your customers voice their ideas for new functions or features. Recurring answers—or just one brilliant suggestion—will help you enrich and sort out your feature development process.

This is a good follow-up question to any survey type, so don’t shy away from occasionally adding it to your in-app surveys.

#6. How would you rate this new feature?

Every time you implement a new feature, make sure it works as intended and meets your users’ expectations. 

It’s a good idea to add a follow-up question that asks unsatisfied users for clarification—for example, “What do you dislike about this new feature?”

Questions for a product roadmap survey

#7. How often do you use the following features?

This closed-ended question will help you identify the most popular features, which are most likely the key to your success.

Always make sure your popular features work well and are easily accessible. And once you identify the least liked parts of the app, think about cutting them or at least spending fewer resources on them—you don’t want to throw money at stuff nobody needs.

Simply run the feature evaluation survey template to identify the most popular features:

#8. Was this feature/button easy to find?

This variation of the CES (Customer Effort Score) question will let you gauge if your most popular features are accessible, helping you to improve user experience and boost customer satisfaction .

General customer experience questions

Once your mobile app is up and running, you have to ensure your users are fully satisfied. When asking general customer experience questions, you should always keep two additional goals in mind:

  • Enable your users to share their opinions with you (with the help of open-ended questions);
  • Encourage your satisfied users to leave a review or rate you in the app store while avoiding getting negative reviews.

To achieve that, you need to use the skip logic feature and a carefully designed question flow in your surveys. Let’s see how to do it in the examples below.

#9. How would you rate your experience with our app?

This is the core question of the CSAT (Customer Satisfaction Score) survey . Its goal is to assess your users’ general satisfaction level.

A CSAT survey comes in handy when your mobile app is new and you want to see how it’s resonating with its users. But it’s a good practice to re-run it regularly (once or twice a year).

You’ll be able to create CSAT benchmarks for yourself and monitor the state of your customer experience program.
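The benchmark number itself follows a common convention: the share of respondents who pick one of the top two ratings on a 5-point scale. A minimal sketch (the function name is illustrative):

```python
def csat(ratings):
    """Customer Satisfaction Score: the percentage of respondents who
    picked one of the top two ratings (4 or 5) on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return round(100 * satisfied / len(ratings))
```

Running this on each survey wave gives you a single number you can track against your own earlier benchmarks.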

Always follow up with open-ended questions to get app feedback and ask for reviews. Don’t worry if your users abandon the survey before finishing—good survey tools, including Survicate, register all the responses, even from partially submitted surveys!

You can use this CSAT template to create your survey in a matter of minutes:

#10. Would you recommend this app to a friend or colleague?

This classic NPS question comes from another battle-tested survey that gauges customer satisfaction.

It will help you discover how many advocates your mobile app has, which can be a future growth indicator. After all, word-of-mouth is the best marketing method out there, and happy customers are much less likely to churn.

The respondents to an NPS survey are divided into three categories:

  • Promoters—the satisfied users who chose 9 or 10
  • Passives—the neutral group of respondents who chose 7 or 8
  • Detractors—the dissatisfied users who chose a score between 0 and 6.

How to calculate net promoter score
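In code, the standard calculation subtracts the percentage of detractors from the percentage of promoters. A minimal sketch using the conventional 0–10 bands:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
    on the standard 0-10 scale. Passives (7-8) count only in the total."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))
```

The result ranges from −100 (all detractors) to +100 (all promoters).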

You can show different follow-up questions to each respondent group. Let’s review a few options.

Try our Net Promoter Score survey template for free:

#11. What is the reason for your score?

This open-ended question is suitable for both high- and low-rating users. It allows them to share what they like or dislike about your mobile app. In return, you’ll get valuable customer feedback you can act on. 

Keep in mind that this question is very general—so it’s best to use it if you already have an opinionated, ready-to-share audience.

#12. Would you like to leave us a review/rate us on the app store?

Whenever you get a positive score in your CSAT survey (anything over three on a 5-point scale) or NPS survey (9 or 10), you can ask your users to promote you.

It’s especially effective as a follow-up question to NPS—after all, the respondent has just said they are willing to recommend you, so why not take them up on their offer?

Ask them to write a review online or rate you in an app store. It’s best if you include a link that will quickly redirect your users to the right place.

#13. What can we do to improve the app?

Sending your detractors and dissatisfied users straight to an app store might not be the best idea—let them voice their concerns in the in-app survey instead. Negative feedback is direct instruction on what you should improve in your mobile app to boost customer experience.

Always act on the app feedback you get. Improve the problematic parts of your app, or reach out to your users and make up for your mistakes. You will show them that you care, and you might avoid negative online reviews—or even turn an unhappy customer into a happy one!
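The routing described in questions #11–#13 can be sketched as a simple skip-logic function. The question texts mirror the examples in this article; the routing itself is an illustrative sketch, not a specific tool’s API:

```python
def follow_up(score):
    """Pick a follow-up question based on an NPS rating (0-10)."""
    if score >= 9:   # promoters: ask for a public review
        return "Would you like to leave us a review/rate us on the app store?"
    if score <= 6:   # detractors: keep the feedback in-app
        return "What can we do to improve the app?"
    return "What is the reason for your score?"  # passives: open question
```

The key design choice is that only happy respondents are sent to the app store, while unhappy ones vent inside the app.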

P.S. After you figure out your Net Promoter Score, make sure to see how it compares to other companies in your industry. That's the only way to know if your score is actually good! Our 2021 NPS benchmarks report will help you with that.

Mobile app survey questions—best practices

The key to successful mobile app surveys is not just asking the right questions. You need to create an easy, seamless experience that will maximize your response rates.

Here are a few things you should consider when designing a mobile app survey.

Timing is everything

Your surveys need to appear in the right place at the right time.

Just think about it: you wouldn’t ask a first-time user to recommend your app to a friend—just like you wouldn’t ask a person who’s just browsing through your store if they’re satisfied with your customer service.

Every time you set up an in-app survey, make sure that:

  • It doesn’t appear too often. Wait at least a month before you rerun any survey;
  • It doesn’t disrupt any user flows;
  • You trigger event-specific surveys without delay. For example, when your client completes a purchase in the app, show them a survey right away (instead of waiting until they log in the next time).

With in-app feedback tools such as Survicate, you can trigger the survey at specific events or user actions, so you’ll easily avoid any awkwardness.
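As a rough illustration of event-based triggering—the mapping and function names below are hypothetical, not the actual Survicate SDK API:

```python
# Map app events to the survey that should appear when they fire.
# Both names here are illustrative placeholders.
SURVEY_TRIGGERS = {
    "purchase_completed": "post_purchase_csat",  # show right after checkout
}

def on_event(event_name, show_survey):
    """Show the mapped survey immediately when the event fires,
    instead of waiting for the user's next session."""
    survey_id = SURVEY_TRIGGERS.get(event_name)
    if survey_id is not None:
        show_survey(survey_id)
```

Triggering at the moment of the event keeps the feedback fresh and avoids interrupting unrelated user flows.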


Don’t send your users away from the mobile app

When running app-related surveys, make sure the respondents can complete the whole thing in the app. It’s as simple as that.

Don’t send surveys by email or redirect your users to a form on your website. This forces them to take an extra step, which might discourage them from submitting their answers. Plus, you’re pulling them away from the app—and you have no guarantee they’ll come back.

There’s also another reason to keep everything in-app. The patience of mobile users wears thin easily, and they have even less tolerance for experience-disrupting elements. The more annoying your survey is, the lower your response rates will be.


Keep your surveys short

Mobile apps are all about ease of use. Long surveys are the opposite of that.

Try to keep your mobile surveys as short as possible. Prioritize closed-ended questions that only demand one tap on the screen and make the open-ended questions highly contextualized. Don’t force your users to think long and hard about their answers!

Avoiding survey fatigue should always be your goal. But if you need to run more complex surveys, leave them for your website, web app, or email—depending on the nature of your business.

Leverage the power of open-ended questions

Open-ended questions are the best way to hear from your customers and get actionable feedback. This is why you shouldn’t avoid them, even if they require more work from the users than their closed-ended counterparts.

Here are a few ground rules:

  • Use open-ended questions as a follow-up to closed-ended ones when your users are already engaged.
  • Keep the flow of the survey logical—stay on topic and stick to two open questions maximum.
  • Make sure to always act on any app feedback you get—if your users notice you care, they’re more likely to answer your questions again in the future.


Boost user experience with Survicate mobile app surveys

Surveys can help your mobile app reach its goals by elevating the customers’ voices. With customer feedback, you will:

  • prioritize feature development,
  • spot flaws and friction points,
  • set up benchmarks and KPIs,
  • come up with new features.

Remember to ask the right questions and stick to survey best practices to collect actionable feedback and make the most out of your users’ precious time.

Survicate will help you integrate surveys right into your mobile app for the most effortless experience, with an easy-to-install SDK for Android and iOS.

Book a call with one of our colleagues and discover the best Survicate plan to start collecting mobile app feedback!



Mobile Research

Mobile research is a rapidly growing discipline of researchers who focus primarily on mobile-based research studies.


Most of the 2000s were spent on static devices like desktops. The world has transformed since then, and so has the use of digital platforms: these days, people are online “on the go.” Almost 95% of US citizens own a mobile phone, and 90% of them have mobile internet access. Moreover, smartphone usage has doubled in the past three years, with October 2016 marking the first time in history that mobile access through smartphones surpassed desktops.

With the rapid growth of mobile internet access, there is an unprecedented opportunity to tap into this newly assembled base of users to conduct focused and more precise mobile research.

So, what is mobile research?

Mobile research is a rapidly growing discipline in which researchers focus primarily on mobile-based research studies, tapping into the flexibility, customizability, accuracy, and localization of mobile devices to get faster and more precise insights.

It’s easy, convenient, and straightforward to capture data anywhere, anytime, as it uses the benefits of “mobile” technology to conduct effective “research.”

This research type can be used in three major ways:

  • Recruiting a panel that will take a survey using their mobile devices.
  • Appointing interviewers to collect responses using tablets or smartphones.
  • Collecting data without internet access (offline mobile surveys).

In the first method, you arrange a panel that responds to your surveys and gives you precise insights. As a panel consists of selected, filtered, and handpicked individuals who already qualify for the research, asking them questions and getting insights is not just easier but far more accurate and detailed.

The second method is applied on-site for B2B or B2C purposes, where you appoint interviewers, or in most cases employees, to collect data on mobile devices. This method is very effective during concerts or live events, where face-to-face data collection is possible for understanding user experience and making improvements.

Another way of conducting this research is by collecting data in locations where the internet isn’t available. In such cases, the data collected offline is automatically synced once the internet becomes accessible.
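The offline collection-and-sync flow can be sketched as a local queue: responses are persisted on the device and flushed once connectivity returns. This is a minimal illustration, not any vendor’s implementation; `upload` stands in for a real network call:

```python
import json

class OfflineResponseQueue:
    """Sketch of offline survey collection: responses are saved to
    local storage and synced to the server when back online."""

    def __init__(self, path="pending_responses.json"):
        self.path = path
        self.pending = []

    def record(self, response):
        """Collect a response while offline and persist it locally."""
        self.pending.append(response)
        with open(self.path, "w") as f:
            json.dump(self.pending, f)

    def sync(self, upload):
        """Try `upload` for each stored response; keep any that fail.
        Returns the number of responses still pending."""
        self.pending = [r for r in self.pending if not upload(r)]
        with open(self.path, "w") as f:
            json.dump(self.pending, f)
        return len(self.pending)
```

Because nothing is discarded until the server confirms receipt, responses collected at a remote event survive app restarts and flaky networks.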

In mobile research, all respondents take part from a mobile device. This presents researchers with both opportunities and restrictions. It is therefore important to keep in mind the following factors that affect the mobile research process:

Scrolling can get on people’s nerves. Respondents don’t mind switching pages to answer any number of questions, but they do mind long, scrolling surveys. This can affect completion rates and be quite a decisive factor for mobile research. If your survey has too many questions, increase the number of pages rather than the length of a single page, which would increase the scrolling time.

You can also trim the survey by evaluating the questions you intend to include. Removing redundant questions will not only shorten the survey but also increase its effectiveness, as a shorter survey is easier and less time-consuming to fill out.
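The page-splitting advice above amounts to chunking the question list. A trivial sketch (the page size of 5 is an illustrative choice, not a recommendation from any tool):

```python
def paginate(questions, per_page=5):
    """Split a long question list into short pages so respondents
    tap "next" instead of scrolling a single long page."""
    return [questions[i:i + per_page]
            for i in range(0, len(questions), per_page)]
```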

Question types matter for any kind of survey, and for mobile surveys they matter even more. Create questions that respondents can easily answer from their mobile devices. Multiple-choice questions are among the most popular for mobile-friendly surveys. You may also want to avoid open-ended or descriptive questions because of the limitations of the mobile screen size. Instead of asking longer questions, you can split a question into several multiple-choice questions, which will get you better results.

Apart from the question types, take care with the answer options too. Offering positive options first and negative options second will affect the kind of answers you get. On mobile devices, horizontal scrolling should be strictly avoided, as it can be very laborious for respondents.

Videos and images load differently on laptops and mobile devices. Most mobile devices run on mobile-network data rather than ethernet or Wi-Fi, so videos take longer to load on a phone than on a laptop.

Keep the number of videos to a minimum so they don’t reduce the number of people taking the mobile research survey.

A few other factors that affect mobile research:

  • Mobile-compatible logos.
  • Mobile-friendly fonts and text.
  • An option for full-screen coverage that eliminates interruptions from other applications.

Advantages of Mobile Research:

As everyone is on mobile devices these days, capturing respondents’ attention via mobile research is quicker than other means. With various modes like surveys, mobile applications, or GPS, getting in touch with your respondents becomes very easy. If the survey has direct, relevant questions, survey makers can get faster and more accurate answers.

If a survey requires respondents to fill in specific details, uploading images, recording voice notes, or collecting information in a diary format is easier with mobile surveys. This is the main reason these surveys are more adaptable than traditional ones.

Quicker survey completion times, higher data-collection rates, and tracking of respondents’ geolocation are other reasons mobile research surveys are better.

These research surveys can be made interactive by asking the respondents to submit videos or images.

All QuestionPro mobile research survey tools offer 100% mobile-compatible surveys. All surveys created using our online platform are mobile compatible by default, with no display restrictions regardless of the question type (standard or advanced).

QuestionPro also offers an offline mobile application to conduct these surveys in locations where an internet connection isn’t available. In case the responses are collected offline, they are conveniently synced when the network is accessible again. Thanks to this integral feature, you can reach respondents you usually wouldn’t be able to with other survey tools on the market.

Along with these offerings, QuestionPro also provides 250+ mobile friendly survey templates.

  • Research article
  • Open access
  • Published: 12 September 2013

A survey study of the association between mobile phone use and daytime sleepiness in California high school students

Nila Nathan & Jamie Zeitzer

BMC Public Health volume 13, Article number: 840 (2013)


Background

Mobile phone use is near ubiquitous in teenagers. Paralleling the rise in mobile phone use is an equally rapid decline in the amount of time teenagers are spending asleep at night. Prior research indicates that there might be a relationship between daytime sleepiness and nocturnal mobile phone use in teenagers in a variety of countries. As such, the aim of this study was to see if there was an association between mobile phone use, especially at night, and sleepiness in a group of U.S. teenagers.

Methods

A questionnaire containing an Epworth Sleepiness Scale (ESS) modified for use in teens and questions about qualitative and quantitative use of the mobile phone was completed by students attending Mountain View High School in Mountain View, California (n = 211).

Results

Multivariate regression analysis indicated that ESS score was significantly associated with being female, feeling a need to be accessible by mobile phone all of the time, and a past attempt to reduce mobile phone use. The number of daily texts or phone calls was not directly associated with ESS. Those individuals who felt they needed to be accessible and those who had attempted to reduce mobile phone use were also ones who stayed up later to use the mobile phone and were awakened more often at night by the mobile phone.

Conclusions

The relationship between daytime sleepiness and mobile phone use was not directly related to the volume of texting but may be related to the temporal pattern of mobile phone use.

Mobile phone use has drastically increased in recent years, fueled by new technology such as ‘smart phones’. In 2012, it was estimated that 78% of all Americans aged 12–17 years had a mobile phone and 37% had a smart phone [ 1 ]. Despite the growing number of adolescent mobile phone users, there has been limited examination of the behavioral effects of mobile phone usage on adolescents and their sleep and subsequent daytime sleepiness.

Mobile phone use in teens likely compounds the biological causes of sleep loss. With the onset of puberty, there are changes in innate circadian rhythms that lead to a delay in the habitual timing of sleep onset [ 2 ]. As school start times are not correspondingly later, this leads to a reduction in the time available for sleep and is consequently thought to contribute to the endemic sleepiness of teenagers. The use of mobile phones may compound this sleepiness by extending the waking hours further into the night. Munezawa and colleagues [ 3 ] analyzed 94,777 responses to questionnaires sent out to junior and senior high school students in Japan and found that the use of mobile phones for calling or sending text messages after going to bed was associated with sleep disturbances such as short sleep duration, subjective poor sleep quality, excessive daytime sleepiness and insomnia symptoms. Soderqvist et al., in their study of Swedish adolescents aged 15–19 years, found that regular users of mobile phones reported health symptoms such as tiredness, stress, headache, anxiety, concentration difficulties and sleep disturbances more often than less frequent users [ 4 ]. Van den Bulck studied 1,656 school children in Belgium and found that prevalent mobile phone use in adolescents was related to increased levels of daytime tiredness [ 5 ]. Punamaki et al. studied Finnish teens and found that intensive mobile phone use led to more health complaints and musculoskeletal symptoms in girls, both directly and through deteriorated sleep, as well as increased daytime tiredness [ 6 ]. In one prospective study of young Swedish adults, aged 20–24, those who were high-volume mobile phone users and male, but not female, were at greater risk for developing sleep disturbances a year later [ 7 ]. The association of mobile phone utilization and either sleep or sleepiness in teens in the United States has only been described by a telephone poll.
In the 2011 National Sleep Foundation poll, 20% of those under the age of 30 reported that they were awakened by a phone call, text or e-mail message at least a few nights a week [ 8 ]. This type of nocturnal awakening was self-reported more frequently by those who also reported that they drove while drowsy.

As there has been limited examination of how mobile phone usage affects the behavior of young children and adolescents, none of which have addressed the effects of such usage on daytime sleepiness in U.S. teens, it seemed worthwhile to attempt a cross-sectional study of sleep and mobile phone utilization in a U.S. high school. As such, it was the purpose of this study to examine the association of mobile phone utilization and sleepiness patterns in a sample of U.S. teens. We hypothesized that an increased number of calls would be associated with increased sleepiness.

We designed a survey that contained questions concerning sleepiness and mobile phone use (see Additional file 1 ). Sleepiness was assessed using a version of the Epworth Sleepiness Scale (ESS) [ 9 ] modified for use in adolescents [ 10 ]. The modified ESS consists of eight questions that assessed the likelihood of dozing in the following circumstances: sitting and reading, watching TV, sitting inactive in a public place, as a passenger in a car for an hour without a break, lying down to rest in the afternoon when circumstances permit, sitting and talking to someone, sitting quietly after a lunch, in a car while stopped for a few minutes in traffic. Responses were limited to a Likert-like scale using the following: no chance of dozing (0), slight chance of dozing (1), moderate chance of dozing (2), or high chance of dozing (3). This yielded total ESS scores ranging from 0 to 24, with scores over 10 being associated with clinically-significant sleepiness [ 9 ]. We also included a set of modified questions, originally designed by Thomée et al., that assess the subjective impact of mobile phone use [ 7 ]. These included the number of mobile calls made or received each day, the number of texts made or received each day, being awakened by the mobile phone at night (never/occasionally/monthly/weekly/daily), staying up late to use the mobile phone (never/occasionally/monthly/weekly/daily), expectations of accessibility by mobile phone (never/occasionally/daily/all day/around-the-clock), stressfulness of accessibility (not at all/a little bit/rather/very), use mobile phone too much (yes/no), and tried and failed to reduce mobile phone use (yes/no).
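For reference, scoring the modified ESS described above is simple summation over the eight items; a sketch of that arithmetic (the function name is illustrative):

```python
def ess_score(responses):
    """Total a modified Epworth Sleepiness Scale: eight items, each
    rated 0 (no chance of dozing) to 3 (high chance), giving a total
    of 0-24; scores over 10 indicate clinically significant sleepiness."""
    assert len(responses) == 8 and all(0 <= r <= 3 for r in responses)
    return sum(responses)
```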

An email invitation to complete an electronic form of the survey ( http://www.surveymonkey.com ) was sent to the entire student body of Mountain View High School, located in Mountain View, California, USA, on April 5, 2012. Out of the approximately 2,000 students attending the school, a total of 211 responded by the collection date of April 23, 2012. Data analyses are described below (OriginPro 8, OriginLab, Northampton, MA). Summary data are provided as mean ± SD for age and ESS and as median (range) for the number of texts and/or phone calls made or received per day, as these were non-normally distributed (p’s < 0.001, Kolmogorov–Smirnov test). To examine the relationship between sleepiness and predictor variables, stepwise multivariate regression analyses were performed. Collinearity in the data was examined by calculating the Variance Inflation Factor (VIF). Post hoc t-tests, ANOVA, Mann–Whitney U tests, and Spearman correlations were used, as appropriate, to examine specific components of the model and their relationship to sleepiness. χ² tests were used to examine categorical variables. The study was done within the regulations codified by the Declaration of Helsinki and approved by the administration of Mountain View High School.

Sixty-eight males and 143 females responded to the survey. Most (96.7%) respondents owned a mobile phone. The remainder of the analyses presented herein is on the 202 respondents (64 male, 138 female) who indicated that they owned a mobile phone (Tables  1 and 2 ). The youngest participant in the survey was 14 years old and the oldest was 19 years old (16 ± 1.2 years), representative of the age range of this school. The median number of mobile phone calls made or received per day was 2 and ranged from 0 to 60. The median number of text messages sent or received per day was 22.5 and ranged from 0 to 700. While about half of the respondents (53%) had never been awakened by the mobile phone at night, 35% were occasionally awakened, 5.9% were awakened a few times a month, 5.0% were awakened a few times a week, and 1.0% were awakened almost every night. About one-quarter (27%) of respondents had never stayed awake later than a target bedtime in order to use the mobile phone, however 36% occasionally stayed awake, 19% stayed awake a few times a month, 8.5% stayed awake a few times a week, and 10% stayed awake almost every night in order to use the mobile phone. In regards to feeling an expectation of accessibility, 7.5% reported that they needed to be accessible around the clock, 26% reported that they needed to be accessible all day, 52% reported they needed to be accessible daily, 13% reported that they only needed to be accessible now and then, and 1.0% reported they never needed to be accessible. Nearly half (49%) of the survey participants viewed accessibility via mobile phones to be not at all stressful, 45% found it to be a little bit stressful, 4.5% found it rather stressful, and 1.0% found it very stressful. More than one-third (36%) reported that they or someone close to them thought that they used the mobile phone too much. Few (17%) had tried but were unable to reduce their mobile phone use.

Subjective sleepiness on the ESS ranged from 0 to 18 (6.8 ± 3.5, with higher numbers indicating greater sleepiness), with 25% of participants having ESS scores in the excessively sleepy range (ESS ≥ 10). We examined predictors of subjective sleepiness (ESS score) using stepwise multivariate regression analysis with the following independent variables: age, sex, frequency of nocturnal awakening by the phone, frequency of staying up too late to use the phone, self-perceived accessibility by phone, stressfulness of this accessibility, attempted and failed to reduce phone use, excessive phone use determined by others, number of texts per day, and number of phone calls per day. Only subjects with complete data sets were used in our modeling (n = 191 of 202). Our final model (Table 3) indicated that sex, frequency of accessibility, and a failed attempt to reduce mobile phone use were all predictive of daytime sleepiness (F(6,194) = 4.35, p < 0.001, r² = 0.12). These model variables lacked collinearity (VIFs < 3.9), indicating that they were not likely to represent the same source of variance. Despite the lack of significance in the multivariate model, given previously published data [ 4 – 6 ], we independently tested if there was a relationship between the number of estimated texts and sleepiness, but found no such correlation (r = 0.13, p = 0.07; Spearman correlation). In examining the final model, it appears that those who felt that they needed to be accessible “around the clock” (ESS = 9.2 ± 2.9) were sleepier than all others (ESS = 6.7 ± 3.4) (p < 0.01, post hoc t-test). The relationship between sleepiness and reporting having tried, but failed, to reduce mobile phone use was such that those who had tried to reduce phone use were sleepier (ESS = 8.3 ± 3.6) than those who had not (ESS = 6.5 ± 3.4) (p < 0.01, post hoc t-test). While more females had tried to reduce their mobile phone use, sex did not modify the relationship between the attempt to reduce mobile phone use and sleepiness (p = 0.32, two-way ANOVA), thus retaining attempt and failure to reduce mobile phone use as an independent modifier of ESS scores.

In an attempt to better understand the relationship between ESS and accessibility, we parsed the population into those who felt that they needed to be accessible around the clock (7.4%) and those who did not (92.6%). The most accessible group, as compared to the less accessible group, had a numerically though not statistically significantly higher texting rate (50 vs. 20 per day; p = 0.07, Mann–Whitney U test), but were awakened more at night by the phone (27% vs. 4%, weekly or daily; p < 0.05, χ² test), and stayed awake later than desired more often (40% vs. 17%, weekly or daily; p < 0.05, χ² test). We did a similar analysis, parsing the population into those who had attempted but failed to reduce their use of their mobile phone (17%) and those who had not (83%). Those who had attempted to reduce their mobile phone use had a higher texting rate (60 vs. 20 per day; p < 0.01, Mann–Whitney U test) and stayed awake later than desired more often (53% vs. 11%, weekly or daily; p < 0.01, χ² test), but were not awakened more at night by the phone (12% vs. 5%, weekly or daily; p = 0.26, χ² test).

Given previous research on the topic, our a priori hypothesis was that teenagers who use their phone more often at night are likely to be more prone to daytime sleepiness. We did not, however, observe this simple relationship in this sample of U.S. teens. We did find that being female, a perceived need to be accessible by mobile phone, and having tried but failed to reduce mobile phone usage were all predictive of daytime sleepiness, with the latter two likely being moderated by increased use of the phone at night. Previous work has shown that being female was associated with higher ESS scores [ 11 ]. It may be that adolescent females score higher on the ESS without being objectively sleepier, though this remains to be tested. Our analyses revealed that staying up late to use the mobile phone and being awakened by the mobile phone may be involved in the relationship between increased ESS scores and a perceived need to be accessible by mobile phone and a past attempt to decrease mobile phone use. These analyses reveal some of the complexity of assessing daytime sleepiness, which is undoubtedly multifactorial. If the sheer number of text messages being sent per day is directly associated with daytime sleepiness, it is likely with a small effect size. Our work, of course, is not without its limitations. Data were collected from a sample of convenience at a single, public high school in California. Only 10% of students responded to the survey and this may have introduced some response bias to the data. The data collected were cross-sectional; a longitudinal collection would have enabled a more precise analysis of moderators and mediators as well as a more accurate interpretation of causal relationships. Also, we did not objectively record the number of texts, so there may be a certain degree of bias or uncertainty associated with self-report of the number of texts and calls.
Several variables that might influence sleepiness both directly and indirectly through mobile phone use (e.g., socioeconomic status, comorbid sleep disorders, medication use) were not assessed. Future studies on the impact of mobile phone use on sleep and sleepiness should take into account the multifactorial and temporal nature of these behaviors.

The endemic sleepiness found in adolescents is multifactorial, with both intrinsic and extrinsic factors. Mobile phone use has been assumed to be one source of increased daytime sleepiness in adolescents. Our analyses revealed that use, or a perceived need to use, the mobile phone during normal sleeping hours may contribute to daytime sleepiness. As the overall number of text messages did not significantly contribute to daytime sleepiness, it is possible that a temporal rearrangement of phone use (e.g., limiting phone use during prescribed sleeping hours) might help alleviate some degree of daytime sleepiness.
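The ESS referred to throughout is scored by summing eight situational items, each self-rated 0–3, giving a 0–24 total; totals above 10 are conventionally read as excessive daytime sleepiness. A minimal scoring sketch:

```python
def ess_score(item_ratings):
    """Sum the eight Epworth Sleepiness Scale items, each rated 0-3."""
    if len(item_ratings) != 8:
        raise ValueError("ESS has exactly 8 items")
    if any(not 0 <= r <= 3 for r in item_ratings):
        raise ValueError("each item is rated 0-3")
    return sum(item_ratings)

score = ess_score([2, 1, 0, 1, 2, 1, 1, 2])  # -> 10
print(score, "excessive" if score > 10 else "within normal range")
```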

Abbreviations

ESS: Epworth sleepiness scale

SD: Standard deviation

ANOVA: Analysis of variance

References

1. Madden M, Lenhart A, Duggan M, Cortesi S, Gasser U: Teens and Technology. 2013, http://www.pewinternet.org/Reports/2013/Teens-and-Tech/Summary-of-Findings.aspx

2. Crowley SJ, Acebo C, Carskadon MA: Sleep, circadian rhythms, and delayed phase in adolescence. Sleep Med. 2007, 8: 602-612. 10.1016/j.sleep.2006.12.002.

3. Munezawa T, Kaneita Y, Osaki Y, Kanda H, Minowa M, Suzuki K, Higuchi S, Mori J, Yamamoto R, Ohida T: The association between use of mobile phones after lights out and sleep disturbances among Japanese adolescents: a nationwide cross-sectional survey. Sleep. 2011, 34: 1013-1020.

4. Soderqvist F, Carlberg M, Hardell L: Use of wireless telephones and self-reported health symptoms: a population-based study among Swedish adolescents aged 15–19 years. Environ Health. 2008, 7: 18. 10.1186/1476-069X-7-18.

5. Van den Bulck J: Adolescent use of mobile phones for calling and for sending text messages after lights out: results from a prospective cohort study with a one-year follow-up. Sleep. 2007, 30: 1220-1223.

6. Punamaki RL, Wallenius M, Nygård CH, Saarni L, Rimpelä A: Use of information and communication technology (ICT) and perceived health in adolescence: the role of sleeping habits and waking-time tiredness. J Adolescence. 2007, 30: 95-103.

7. Thomée S, Harenstam A, Hagberg M: Mobile phone use and stress, sleep disturbances and symptoms of depression among young adults – a prospective cohort study. BMC Publ Health. 2011, 11: 66. 10.1186/1471-2458-11-66.

8. The National Sleep Foundation: Sleep in America poll. 2011, http://www.sleepfoundation.org/article/sleep-america-polls/2011-communications-technology-use-and-sleep

9. Johns MW: A new method of measuring daytime sleepiness: the Epworth sleepiness scale. Sleep. 1991, 14: 540-545.

10. Melendres MC, Lutz JM, Rubin ED, Marcus CL: Daytime sleepiness and hyperactivity in children with suspected sleep-disordered breathing. Pediatrics. 2004, 114: 768-775. 10.1542/peds.2004-0730.

11. Gibson ES, Powles ACP, Thabane L, O'Brien S, Molnar DS, Trajanovic N, Ogilvie R, Shapiro C, Yan M, Chilcott-Tanser L: "Sleepiness" is serious in adolescence: two surveys of 3235 Canadian students. BMC Publ Health. 2006, 6: 116. 10.1186/1471-2458-6-116.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/13/840/prepub


Acknowledgements

The authors wish to thank the students of Mountain View High School (Mountain View, California) for participating in this study.

Author information

Authors and affiliations

Mountain View High School, 3535 Truman Avenue, Mountain View, CA, 94040, USA

Nila Nathan

Department of Psychiatry and Behavioral Sciences, Stanford University, 3801 Miranda Avenue (151Y), Palo Alto, CA, 94304, USA

Jamie Zeitzer

Mental Illness Research, Education, and Clinical Center, VA Palo Alto Health Care System, 3801 Miranda Avenue (151Y), Palo Alto, CA, 94304, USA


Corresponding author

Correspondence to Jamie Zeitzer .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JMZ and NN designed the study, analyzed the data, and drafted the manuscript. Both authors have read and approved the final manuscript.

Electronic supplementary material

Additional file 1: Questionnaire. (DOC 34 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Nathan, N., Zeitzer, J. A survey study of the association between mobile phone use and daytime sleepiness in California high school students. BMC Public Health 13 , 840 (2013). https://doi.org/10.1186/1471-2458-13-840

Download citation

Received : 10 November 2012

Accepted : 10 September 2013

Published : 12 September 2013

DOI : https://doi.org/10.1186/1471-2458-13-840


Keywords

  • Sleep deprivation
  • Mobile phone

BMC Public Health

ISSN: 1471-2458


Micro-Urban Heatmapping: A Multi-Modal and Multi-Temporal Data Collection Framework

Article outline:

1. Background
2. Types of UHIs and Related Technologies
3. Proposed Framework
4. CLUHI Data Collection
  4.1. Traverse Data: First Round (Data Collection; Data Processing and Visualization; Lessons Learned)
  4.2. Traverse Data: Second Round (Data Collection; Data Processing and Analysis; Lessons Learned)
  4.3. Fixed Sensor: Rooftop
  4.4. KMeans Classifier and Image Processing
5. Surface Urban Heat Islands (SUHIs)
  5.1. Data Collection Using a Thermal Drone Camera
  5.2. Roof Temperature Analysis
6. Discussion
  6.1. Contribution of the Proposed Framework
  6.2. Comparison to Other Studies
  6.3. Limitations
7. Conclusions

Back matter: Author Contributions; Data Availability Statement; Acknowledgments; Conflicts of Interest



Traverse data collection, Round One vs. Round Two:

  • Data logger interval: 1 min vs. 20 s
  • Device accuracy: ±0.9 °F (±0.5 °C) vs. ±0.2 °F (±0.1 °C)
  • Groups: 2 groups (2 persons per group) vs. 5 individuals
  • Data collection time slots: Round One used 7:00–10:30 a.m., 12:30–5:30 p.m., and 5:45–8:00 p.m.; Round Two used 9:00–10:00 a.m., 12:00–1:00 p.m., and 5:00–6:00 p.m.
  • Route planned: No vs. Yes
Building              Roofing type    Mean   Std    Min    25%    50%    75%    Max
O'Neill               White/green     0.38   0.20   0.01   0.23   0.38   0.50   1.00
Morris Inn            Green roof      0.37   0.23   0.02   0.19   0.30   0.52   0.97
Hessert               White roof      0.41   0.24   0.02   0.18   0.42   0.60   0.97
Fitzpatrick           KEE roof        0.44   0.27   0.01   0.20   0.42   0.66   1.00
Facilities Building   KEE roof        0.51   0.25   0.02   0.26   0.55   0.71   0.96
Engineering North     EPDM            0.39   0.29   0.02   0.13   0.29   0.64   0.97
Eck Law               Slate roof      0.40   0.25   0.02   0.18   0.38   0.56   1.00
DeBartolo             Stone ballast   0.40   0.20   0.01   0.23   0.43   0.52   1.00
Alumni                Slate roof      0.38   0.26   0.01   0.16   0.28   0.55   0.97
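Column summaries of the kind shown above (mean, standard deviation, quartiles, extremes) can be reproduced from raw per-roof pixel values; a sketch using hypothetical normalized intensities:

```python
import numpy as np

# Hypothetical normalized pixel intensities for one roof (values in [0, 1])
pixels = np.array([0.01, 0.15, 0.23, 0.30, 0.38, 0.45, 0.50, 0.62, 0.88, 1.00])

summary = {
    "mean": pixels.mean(),
    "std": pixels.std(ddof=1),        # sample standard deviation
    "min": pixels.min(),
    "25%": np.percentile(pixels, 25),
    "50%": np.percentile(pixels, 50),
    "75%": np.percentile(pixels, 75),
    "max": pixels.max(),
}
for stat, value in summary.items():
    print(f"{stat}: {value:.2f}")
```

Applied per building, rows like those in the table fall out directly; pandas' `DataFrame.describe()` produces the same seven columns in one call.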

Cite this article

Hu, M.; Ghorbany, S.; Yao, S.; Wang, C. Micro-Urban Heatmapping: A Multi-Modal and Multi-Temporal Data Collection Framework. Buildings 2024, 14, 2751. https://doi.org/10.3390/buildings14092751


A comprehensive survey on exploring and analyzing COVID-19 mobile apps: Meta and exploratory analysis

Affiliations

  • 1 Department of Accounting and Information Systems, College of Business and Economics, Qatar University, Doha, Qatar.
  • 2 Shahzeb Shaheed Government Degree College Razzar, Swabi, Higher Education, KP, Pakistan.
  • 3 Software Engineering Department, Lappeenranta-Lahti University of Technology, 15210, Lappeenranta, Finland.
  • 4 Department of Computer Engineering, Gachon University, Seongnam-si, 13120, South Korea.
  • PMID: 39170132
  • PMCID: PMC11336479
  • DOI: 10.1016/j.heliyon.2024.e35137

During the COVID-19 pandemic, many digital solutions were proposed around the world to cope with the virus, but mobile applications played a dominant role. In Pakistan, an array of mobile health applications (apps) and platforms was launched to grapple with the impacts of the COVID-19 situation. In this survey, our major focus is to explore and analyze the role of mobile apps, based on their features and functionality, in tackling COVID-19, particularly in Pakistan. Over fifty (50) mobile apps were scraped from three well-known sources: the Google Play Store, the iOS App Store, and the web. We developed our own dataset after searching through the different stores and designed two sets of criteria: eligibility criteria and assessment criteria. The features and functions of each mobile app are pinpointed and discussed against the parameters of the assessment criteria: (i) home monitoring; (ii) COVID-19 awareness; (iii) contact tracing; (iv) telemedicine; (v) health education; (vi) COVID-19 surveillance; (vii) self-assessment; (viii) security; and (ix) accessibility. The study conducted an exploratory analysis and a quantitative meta-data analysis following PRISMA guidelines. The article discusses the functions and features of each COVID-19-centered app in Pakistan and also sheds light on each app's limitations. The results may help mobile developers review current app products and enhance existing platforms targeting the COVID-19 pandemic. This is the first attempt of its kind to present a state-of-the-art survey of COVID-19-centered mobile health apps in Pakistan.
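The nine assessment parameters lend themselves to a simple feature checklist. A toy sketch (the app's feature set is hypothetical; the criteria names follow the list above):

```python
# The nine assessment parameters named in the survey
CRITERIA = [
    "home monitoring", "covid-19 awareness", "contact tracing",
    "telemedicine", "health education", "covid-19 surveillance",
    "self-assessment", "security", "accessibility",
]

def coverage(app_features):
    """Return the fraction of assessment criteria an app satisfies, and which ones."""
    met = [c for c in CRITERIA if c in app_features]
    return len(met) / len(CRITERIA), met

# Hypothetical app offering three of the nine features
score, met = coverage({"contact tracing", "self-assessment", "security"})
print(f"{score:.0%} of criteria met: {met}")
```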

Keywords: COVID-19; Coronavirus; Mobile applications; SARS-CoV-2.

© 2024 The Authors.


Conflict of interest statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figure captions:

  • Procedure of extracting mobile apps from play stores.
  • Mobile apps categories.
  • Timeline of mobile apps released and updated during the COVID-19 pandemic in Pakistan.
  • Structure of proposed evaluation criteria.
  • User feedback on selected mobile apps.
  • Access level of the selected mobile apps.
  • Comparing the number of COVID-19 apps of Pakistan with South Asian countries.
  • Major mobile app developers.
  • Different categories of selected mobile apps.
  • Permission level of mobile apps.
  • Platform-wise apps detail.
  • Gantt chart of release and update dates of mobile apps during COVID-19 in…



  • Open access
  • Published: 02 September 2024

Sustaining the mobile medical units to bring equity in healthcare: a PLS-SEM approach

  • Jignesh Patel 1 ,
  • Sangita More 1 ,
  • Pravin Sohani 1 ,
  • Shrinath Bedarkar 1 ,
  • Kamala Kannan Dinesh 2 ,
  • Deepika Sharma 3 ,
  • Sanjay Dhir 3 ,
  • Sushil Sushil 3 ,
  • Gunjan Taneja 4 &
  • Raj Shankar Ghosh 5  

International Journal for Equity in Health volume  23 , Article number:  175 ( 2024 ) Cite this article


Equitable access to healthcare for rural, tribal, and underprivileged people has been an emerging area of interest for researchers, academicians, and policymakers worldwide. Improving equitable access to healthcare requires innovative interventions. This calls for clarifying which operational model of a service innovation needs to be strengthened to achieve transformative change and bring sustainability to public health interventions. The current study aimed to identify the components of an operational model of mobile medical units (MMUs) as an innovative intervention to provide equitable access to healthcare.

The study empirically examined the impact of scalability, affordability, replicability (SAR), and immunization performance on the sustainability of MMUs to develop a framework for primary healthcare in the future. Data were collected via a survey answered by 207 healthcare professionals from six states in India. Partial least squares structural equation modeling (PLS-SEM) was conducted to empirically determine the interrelationships among various constructs.

The standardized path coefficients revealed that three factors (SAR) significantly influenced immunization performance as independent variables. Comparing the three hypothesized relationships demonstrates that replicability has the most substantial impact, followed by scalability and affordability. Immunization performance was found to have a significant direct effect on sustainability. For evaluating sustainability, MMUs constitute an essential component and an enabler of a sustainable healthcare system and universal health coverage.

This study equips policymakers and public health professionals with the critical components of the MMU operational model leading toward sustainability. The research framework provides reliable grounds for examining the impact of scalability, affordability, and replicability on immunization coverage as the primary public healthcare outcome.
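Full PLS-SEM estimation requires dedicated tooling, but the standardized path coefficients it reports are, for a single structural equation, regression weights on z-scored variables. A simplified numpy illustration on synthetic construct scores (the data and effect sizes are invented; only the ordering mirrors the reported result, with replicability strongest):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 207  # matches the survey's sample size

# Synthetic construct scores (stand-ins for measurement-model outputs)
scal = rng.normal(size=n)    # scalability
afford = rng.normal(size=n)  # affordability
repl = rng.normal(size=n)    # replicability
perf = 0.2 * scal + 0.1 * afford + 0.4 * repl + rng.normal(scale=0.5, size=n)

def zscore(v):
    return (v - v.mean()) / v.std()

# Standardized path coefficients: regress the z-scored outcome on z-scored predictors
Z = np.column_stack([zscore(scal), zscore(afford), zscore(repl)])
beta, *_ = np.linalg.lstsq(Z, zscore(perf), rcond=None)
print(dict(zip(["scalability", "affordability", "replicability"], beta.round(2))))
```

In an actual PLS-SEM analysis the construct scores would themselves be estimated from questionnaire items via the measurement model, and coefficient significance would be assessed by bootstrapping.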

As India advances towards universal healthcare with substantial improvements in coverage, addressing marginalized communities remains a persistent concern for the current healthcare system [ 1 , 2 ]. Many improvements have been made to India's healthcare system as a result of the country's successful efforts to address a wide range of challenges, such as unequal access to treatment, a dearth of high-quality medical services, and inaccurate information [ 3 , 4 , 5 , 6 ]. Hesitancy, social stigma, ignorance, and a shortage of medical professionals have all contributed to these difficulties and served as roadblocks to enhancing access to healthcare for India's rural, tribal, and underprivileged people [ 7 , 8 ]. Therefore, it is imperative to implement broad innovative interventions in India's current primary healthcare system to address these issues and advance universal health coverage.

In this context, mobile medical units (MMUs) have a tremendous potential to provide equal and effective access to various healthcare facilities, including immunization clinics for the disadvantaged and immunocompromised population [ 9 , 10 ]. For communities cut off from mainstream services due to climatic conditions, geography, and social stigma, MMUs can be essential for providing service to immunocompromised, vulnerable, and marginalized people living in remote and challenging places [ 11 , 12 ].

Research indicates that MMUs have been crucial in delivering specialized healthcare services in addition to primary healthcare in rural regions [ 13 , 14 ]. Also, research has indicated that MMUs are particularly effective in delivering health care to India's underprivileged and neglected communities [ 15 ]. Hence, MMUs appear promising in remote areas where local health services lack the necessary resources. MMUs can provide primary healthcare services in locations lacking or with insufficient established facilities and specialized service delivery [ 16 ].

Mobile units, especially in certain areas of emergency and preventive medicine, have shown considerable potential in rural regions, but MMUs should not be adopted without careful assessment. The effective execution and long-term sustainability of this intervention depend on evaluating the critical elements of the operational model for MMUs. While numerous studies have highlighted the role of mobile medical units (MMUs) in increasing healthcare accessibility, especially in remote and underserved areas, there is a limited understanding of the specific operational models adopted by these units and their impact on healthcare outcomes [ 9 , 10 , 11 , 12 ]. The diversity in operational strategies, ranging from the types of services offered and staffing models to technological integration and partnership networks, remains largely unexplored. Furthermore, while the immediate benefits of MMUs, such as increased healthcare access, are well documented, there is a paucity of research examining the long-term impact of these units on healthcare outcomes and on the sustainability of these models for delivering other primary healthcare services. To improve the operational model for the future and provide a framework for policy analysis, it is crucial to understand the impacts of operational model components on primary healthcare outcomes and the model's sustainability. Therefore, this study aims to achieve the following research objectives:

RO1 . To empirically determine the potential impact of scalability, affordability, replicability, and immunization performance on the sustainability of the MMUs operational model.

RO2 . To develop a framework of MMUs for future innovations in primary healthcare.

This study will empirically analyze the operational model of MMUs to confirm its impact on performance and sustainability. Jharkhand, Maharashtra, Meghalaya, Karnataka, Telangana, and Tamil Nadu were selected as the sample states where the case organization has been operationally implemented and subjected to a thorough evaluation of these factors. Practitioners and policymakers can use this work to revisit the components of the operational model for MMUs. The theoretical underpinnings of this study are presented in the following sub-sections, along with the hypotheses developed for empirical validation.

Scalability

Various studies have identified scalability as a prominent driver for improving healthcare outcomes [ 17 , 18 ]. Scalability is often associated with sustainability and higher performance in terms of efficiency and increased immunization coverage. Scalability in regard to a healthcare intervention refers to its potential suitability for scaling up. It is essential to have a clear understanding of the term ‘scalability’ in the context of public health to develop an effective health promotion intervention. Various studies have been conducted to explore the indicators for accurately measuring the scalability of public health interventions and examining their impact on health outcomes, such as immunization coverage and performance [ 19 , 20 ]. In this study we measured scalability by examining the delivery system, the availability of technical assistance, the organizational capacity, management, financial support, and partnerships.

Aspects like a delivery system that would ensure the reach and expansion of mobile clinics are essential components of an efficient strategy. Similarly, technology plays a significant role in the seamless dispersal of the immunization program, and technical assistance helps bridge the digital and physical worlds. Such integration keeps the delivery of vaccines, the scheduling of vaccination drives, and other logistical concerns in check and ensures accountability with regard to the number of individuals immunized [ 21 ]. Another component that influences scalability is the organizational capacity of the stakeholders, for example in the mobile clinics employed in the COVID-19 vaccination drive. Mapping various areas of coordination and utilizing organizational capacity for various operational purposes has helped mobile clinics achieve their immunization targets. Advance planning, timely delivery of vaccines, transportation, awareness creation, mobilization of beneficiaries, proper registration, safe vaccination, and dispersal of certificates were crucial for coordination. All the elements discussed above require decision support as well as financial support. Decision support should focus on the inclusion of tribal areas in immunization programs [ 22 ]. Similarly, financial support was also directed toward achieving immunization targets for the marginalized population, including tribal people, daily wage workers, street vendors, sex workers, etc. [ 23 ]. The partnership between mobile clinics and government agencies led to the creation of robust and scalable processes that integrated infrastructural and digital spaces for the successful deployment of a vaccine program [ 7 ].

Affordability

Affordability is essential for the government to provide healthcare services and ensure vaccines for all, especially in a developing country like India, with its large population and the need to outline health budgets judiciously [ 24 ]. Specific mechanisms are needed to ensure sustainable financing of vaccines available to individuals from marginalized populations [ 25 ]. The ability of mobile clinics to cover the hard-to-reach parts of the state was made possible only because of well-planned transportation by a network of ambulances. Close management of transportation costs was an immediate need, as the goal of the program was to bring a mobile clinic within reach of everyone to vaccinate the marginalized population [ 3 ]. Vaccine procurement and allocation were handled appropriately by the government agencies to execute the plan smoothly [ 8 ].

Consideration was not limited to transportation costs, however, and the mobile clinic’s team was also concerned about limiting waste as the vaccine solutions have a shelf-life of around four hours after opening. Keeping the vaccine cold to limit waste helped to cut down the cost of the vaccine and increase affordability [ 26 ]. Strong coordination was needed between the mobile clinics’ team and government agencies to monitor and regulate the deployment of vaccines once they were removed from cold storage [ 8 ]. The other aspect that was required to be regulated at this scale was the direct and indirect costs of providing a robust infrastructure, including arrangements for transporting elderly and disabled people, and creating awareness in the population of the critical importance of getting the second dose [ 27 ].

Replicability

Replicability of the mobile clinic model is another way to guarantee faster and higher immunization coverage of the target population. This would include clear and transparent communication on the part of government agencies regarding dedicated timelines, prioritization of the groups to be vaccinated first, the types of vaccine, and the vaccination schedules [ 28 ]. Training accredited social health activists, doctors, data operators, and other directly involved workers is also part of the replicability strategy. Common aspects of such strategies include developing awareness and vaccination knowledge, engaging the community through open-sourcing and strategic partnerships with influential local leaders to build confidence and trust in the medical community regarding the safety of the vaccine, preventing the spread of misinformation and rumors, and making vaccines available in hard-to-reach places [ 29 ]. Programs like mobile clinics can be replicated if government agencies are familiar with social franchising, subcontracting, and branching out to develop the necessary infrastructure [ 30 ]. Such healthcare-oriented interventions can help achieve a higher percentage of vaccination in the general population. Another strategy that can be employed if necessary in resource-constrained areas is to obtain support through public-private partnerships (PPP), not just for the COVID-19 vaccination program but also for other public healthcare initiatives [ 31 ].

Sustainability

The main objective of mobile clinics is to create sustainability for long-term impact. Sustainable funding for vaccines and vaccination programs comprises distribution costs, administrative expenses, surveillance, record keeping, and other needs [ 1 ]. Government agencies, the World Bank, and other multilateral banks have allocated extensive resources to achieve vaccination targets worldwide [ 32 ]. However, such financial support for the mobile clinic model should also have an element of internal rate of return and a sustainable cash flow [ 33 ]. Participation of government, private companies, and non-profit organizations has instilled trust among the general population regarding the vaccine's efficacy, safety, and affordability. This aspect of social sustainability is achieved through tailor-made strategies to reassure the local population regarding vaccine safety [ 34 ]. Technological assistance is one way to generate engineering sustainability, facilitating mechanisms to control pollution, recycle, and reduce waste. A robust data system to check vaccine storage infrastructure, immunization schedules, and other logistics-related matters has enhanced the accountability of mobile clinics and established them as effective vaccination instruments [ 35 ]. Maximum transparency and communication between stakeholders are indispensable for a successful immunization program. Seamless coordination between these parties can aid project management sustainability, not just at the local level but also at the national level; vaccine roll-out can be tracked, monitored, and evaluated, which helps in formulating efficient campaigns for creating awareness of the importance of vaccination [ 36 ]. Additionally, a robust data infrastructure cannot exist without dedicated resources and environmental sustainability. Such data is necessary to identify the individuals eligible for priority vaccinations, create awareness, arrange transportation, and ensure that beneficiaries get the second dose [ 37 ]. The innovative methods adopted helped to coordinate the central pool of vaccine distribution with the local vaccination locations. Such a network ensured the efficient distribution of vaccines with limited wastage after they were removed from cold storage [ 38 ].

Theoretical framework and development of hypotheses

The following hypotheses were developed based on the theoretical background described in the previous sub-sections to provide a theoretical foundation for this study. The research hypotheses were designed to demonstrate the relationships between the various constructs used in this study. Figure  1 illustrates the structural model for validating the six research hypotheses, which evaluate the direct relationships of scalability, affordability, and replicability with immunization performance and sustainability.

H 1 . Scalability positively influences immunization performance.

H 2 . Scalability positively influences sustainability.

H 3 . Affordability positively influences immunization performance.

H 4 . Affordability positively influences sustainability.

H 5 . Replicability positively influences immunization performance.

H 6 . Replicability positively influences sustainability.

Fig. 1  Conceptual model for empirical testing
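The paper estimated this model in SmartPLS; purely as an illustration, the hypothesized structural relations can be written as a lavaan-style specification string (the format used by Python SEM packages such as semopy). The construct abbreviations SC, AF, RP, IP, and SU are introduced here for scalability, affordability, replicability, immunization performance, and sustainability; they are not the paper's own notation.

```python
# Lavaan-style sketch of the structural model (illustrative only; the
# study used SmartPLS, not this syntax). SC/AF/RP/IP/SU abbreviations
# are introduced here for convenience.
MODEL_SPEC = """
IP ~ SC + AF + RP      # H1, H3, H5
SU ~ SC + AF + RP + IP # H2, H4, H6, plus the performance-sustainability link
"""

# Count the structural paths declared above.
paths = sum(
    line.split("#")[0].split("~")[1].count("+") + 1
    for line in MODEL_SPEC.strip().splitlines()
)
print(paths)  # 7
```

Seven directed paths are declared: the six listed hypotheses plus the immunization-performance-to-sustainability link that the paper tests alongside them.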

Case organization

In the context of mobile clinics in India, the study considered Jivika Healthcare's VaccineOnWheels (VOW) as the primary case organization. To develop an operational framework for MMUs by defining the components and analyzing their impacts on immunization performance as the primary healthcare outcome [ 39 ] and on sustainability, the Bill & Melinda Gates Foundation, Jivika Healthcare Ltd, and the Indian Institute of Technology Delhi partnered to conduct this research. Jivika Healthcare's VaccineOnWheels is one of several organizations providing immunization services through mobile clinics. In 2019, Jivika Healthcare Private Limited, in partnership with the Indian Institute of Technology Hyderabad and the Bill & Melinda Gates Foundation, launched Vaccine on Wheels, India's doctor-based mobile vaccination clinic, with the objective of “ensuring access to quality vaccination for all” through the following three goals:

to reduce "inequitable access" to vaccines and increase immunization reach

to reduce the vaccine cost by cutting down out-of-pocket expenses, including travel and missed wages

to create awareness of the critical importance of immunization

Jivika's mobile vaccination units provide ‘doorstep’ service in underserved communities and hard-to-reach areas, with access to COVID-19 vaccines and up-to-date information regarding vaccine safety and efficacy. To “meet communities where they are,” mobile vaccine units and staff conduct vaccination awareness camps in the community to mitigate the impact of misinformation regarding vaccination effects. In 2019, mobile clinics began mobile vaccination service in Maharashtra's Pune city; the service has since spread across six states of India. Under public-private partnership (PPP), with the support of governments, corporate social responsibility (CSR) programs, and non-governmental organizations (NGOs), the core idea of a mobile clinic “reaching the unreached” grew rapidly. VOW provides sufficient grounds to understand the operational model of a mobile clinic as a transformative force to increase immunization, based upon collaboration with various stakeholders, the processes adopted, and the strategies implemented during the immunization drive.

This program helped to understand the gaps in the vaccine delivery model from close quarters and identify various issues faced by diverse stakeholders, primarily infants/caregivers/parents, in getting vaccinated. VOW has made vaccines accessible to the elderly, individuals with disabilities, female sex workers, tribal communities, rural communities, street vendors, maids, slum residents, frontline workers, the bedridden, and school children, among other vulnerable segments of society. They have also provided at-home service for those who could not get to the vaccination center, especially persons with disabilities. They have served the people residing in remote locations of the six states through more than 200 mobile vaccination units. Under the unique framework of the PPP, vaccination was administered at a reduced cost for beneficiaries with vaccines provided by the government. The PPP model enables stakeholder collaboration across industries under CSR, government, and NGOs to share a commitment to making vaccination services available even at the grass-roots level. This initiative should help India achieve higher immunization penetration by getting faster acceptance of vaccination, providing convenience, and reducing the cost of service with zero travel cost, travel time, and lost wages.

Research instrument

The questionnaire was developed from the literature review on the identified factors, with significant input from a diverse team of healthcare professionals, including public health experts and academic researchers with extensive experience in healthcare delivery and MMUs. Their practical insights and hands-on experience were invaluable in formulating relevant and context-specific questions. Given the unique operational environment of MMUs and the specific healthcare needs of rural, tribal, and underprivileged populations in India, we deemed it crucial to tailor the questionnaire to these specific contexts. The expertise of the involved professionals ensured that the questions were both relevant and comprehensive, covering critical aspects of scalability, affordability, replicability, and sustainability.

Questionnaire development began with the identification of the factors to be measured, followed by the selection of items to assess those factors, and then the testing and refinement of the items. The questionnaire items include five significant features derived from the literature linked with mobile clinics: scalability, affordability, replicability, sustainability, and immunization performance. Before distribution to respondents, a team of health professionals and academic researchers evaluated the questionnaire. The questionnaire contained six sub-factors for scalability, three to describe affordability, four to define replicability, and two to define sustainability. To factorize the broad characteristics of mobile clinics, eighteen, nine, twelve, six, and four items were proposed to structure scalability, affordability, replicability, sustainability, and immunization performance, respectively. An example of a statement from the questionnaire describing the scalability of the mobile clinics’ delivery system is “Mobile clinics reply quickly to vaccination-related questions from beneficiaries.”

Respondents rated each of the forty-nine questionnaire items on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Various studies have adopted similar empirical techniques for health and policy-related research [ 40 , 41 ].
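As a small sketch of how such responses are typically prepared for analysis, verbal Likert answers can be coded as integers 1 to 5 before computing item statistics. The item names (SC1, SC2) and the toy answers below are hypothetical, not drawn from the study's data:

```python
# Toy sketch: coding verbal Likert responses as integers 1-5.
# Item names SC1/SC2 and the answers are illustrative only.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = [                       # one dict per respondent
    {"SC1": "agree", "SC2": "strongly agree"},
    {"SC1": "neutral", "SC2": "agree"},
]

coded = [{item: LIKERT[ans] for item, ans in r.items()} for r in responses]
means = {item: sum(r[item] for r in coded) / len(coded) for item in coded[0]}
print(means)  # {'SC1': 3.5, 'SC2': 4.5}
```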

Statistical analysis

Numerous research studies in the health and policy sectors utilize empirical methods including PLS-SEM and covariance-based SEM (CB-SEM) [ 40 , 41 ]. While each method has distinct goals and applications, they can be seen as complementary [ 42 ]. In the realm of public health, PLS-SEM is more apt than CB-SEM for identifying relationships between key influencing factors [ 43 ]. The PLS-SEM technique has become increasingly popular across various disciplines due to its ability to calculate path coefficients, handle latent variables in non-normal distributions, and process data with modest sample sizes [ 44 , 45 ]. The research model in this study was examined using the partial least squares structural equation modeling (PLS-SEM) method [ 46 ], implemented in SmartPLS 4.0, a renowned tool for PLS-SEM evaluations, to explore the causal connections among constructs. Given the study's explorative nature, the PLS approach was adopted. Following the recommendations of Henseler et al. (2009), a two-phase data analysis method was adopted [ 47 ]: the measurement model was evaluated first, followed by an exploration of the latent constructs' interrelationships. This two-phase approach ensures the reliability and validity of measurements before delving into the model's structural dynamics [ 48 ].

Sampling technique

According to the standard method for determining sample size in PLS-SEM studies, the sample should be at least 10 times the number of structural paths in the model [ 49 , 50 ]. There is a notable relationship between sample size and statistical power: for a model with five exogenous variables, a minimum of 169 respondents is recommended to achieve 80% statistical power at a 5% significance level [ 51 , 52 ]. The study's sample of 207 respondents met both criteria.
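The two sample-size criteria above can be checked with simple arithmetic. This is a sketch of the reasoning, assuming sustainability is the construct receiving the most structural paths (four: from scalability, affordability, replicability, and immunization performance):

```python
# Arithmetic behind the two sample-size criteria (a sketch; the "4"
# assumes sustainability receives paths from SAR plus immunization
# performance, per the structural model described in the text).
max_paths_into_construct = 4
ten_times_minimum = 10 * max_paths_into_construct
print(ten_times_minimum)  # 40

recommended_n = 169  # 80% power, alpha = 0.05, five exogenous variables
collected_n = 207
print(collected_n >= max(ten_times_minimum, recommended_n))  # True
```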

Participants and procedures

The questionnaire assessed mobile clinics' scalability, affordability, replicability, immunization performance, and sustainability. The target respondents for this study were healthcare stakeholders, including health officers, grassroots workers, mobile clinic operators, NGOs, policymakers, and other support staff.

Initially, the questionnaire was tested in a pilot study with a small group of healthcare stakeholders before it was finalized. Feedback from the pilot study was used to make any necessary revisions to the questionnaire. Then, the data was acquired from 207 respondents from the states of Jharkhand, Maharashtra, Meghalaya, Karnataka, Telangana, and Tamil Nadu, directly involved in a mobile clinic vaccination campaign. The respondents occupied a variety of roles within the healthcare system. The states were chosen to collect data since VOW only operated in these states.

The survey was self-administered and paper-based, and data collection lasted around two months. Participants received the questionnaire in English, along with a detailed explanation of the survey's purpose and instructions on how to complete it. The participants were selected based on their professional roles and expertise in the healthcare sector, specifically those with experience in MMUs or similar healthcare delivery models. This selection criterion ensured that respondents had the necessary knowledge and expertise to answer the questions accurately.

An exploratory factor analysis (EFA) followed by a confirmatory factor analysis (CFA) was used to structure the critical factors of mobile clinics for optimal immunization performance and sustainability. This study employed a survey-based methodology to conduct proper statistical analyses to determine and validate the success factors of mobile clinics.

Respondents’ profiles

The respondents included mobile clinic healthcare workers, support employees, and consulting partners associated with mobile clinics (VOW). To better understand the nature of the respondents, we classified them into demographic profiles by gender, state, and geography (Refer Table  1 ). The number of female respondents (51.21%) was only marginally higher than that of male respondents (48.79%), indicating a high degree of gender equity in India's fight against the COVID-19 pandemic. Telangana accounted for the highest share of responses (47.34%), followed by Maharashtra (29.5%), with the remaining states contributing far fewer; this non-uniform state-wise distribution largely reflected the severity of the pandemic in each state.

EFA Results

EFA was used as a first stage in the factorization process to extract a factor structure that ensures conceptual significance for the overall study. An initial sample of 100 responses was used for the EFA. During factor extraction, the factor accounting for the greatest common variance was extracted first. The Kaiser-Meyer-Olkin (KMO) test was used to ensure data sufficiency for the EFA; the KMO value from the analysis was 0.837, which various studies consider meritorious. EFA was carried out using principal component analysis as the extraction method and Varimax rotation as the rotation method. During the EFA procedure, cross-loading items were deleted iteratively to increase the reliability parameters and obtain a clean factor structure. Twelve items were deleted throughout this iterative procedure, yielding five factors with eigenvalues greater than one. Table  2 shows the extracted factors and associated items from the EFA.
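The eigenvalue-greater-than-one (Kaiser) retention rule mentioned above can be illustrated on synthetic data with NumPy. This toy sketch is not the study's analysis; the simulated two-factor, six-item structure is purely illustrative:

```python
import numpy as np

# Sketch of the Kaiser criterion (retain components with eigenvalue > 1)
# on synthetic toy data; not the study's SPSS-based EFA.
rng = np.random.default_rng(0)
n_obs, n_items = 100, 6
latent = rng.standard_normal((n_obs, 2))          # two underlying factors
loadings = rng.standard_normal((2, n_items))
X = latent @ loadings + 0.5 * rng.standard_normal((n_obs, n_items))

R = np.corrcoef(X, rowvar=False)                  # item correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]    # descending eigenvalues
n_factors = int((eigvals > 1).sum())              # Kaiser criterion
print(n_factors)
```

Because the eigenvalues of a correlation matrix sum to the number of items, retaining only those above 1 keeps components that explain more variance than a single standardized item.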

CFA Results

The analysis was carried out to evaluate the derived measurement model using IBM SPSS AMOS 26 [ 53 ]. To begin, Cronbach’s alpha (α) and composite reliability (CR) were used to evaluate internal consistency reliability. The values of α and CR for all obtained factors were greater than 0.8, exceeding the acceptable threshold of 0.7 [ 52 ], which means that the internal consistency reliability is satisfactory. The outer loadings and average variance extracted (AVE) were examined to assess convergent validity. The outer loading values were ≥ 0.7, whereas the AVE values were greater than 0.5 [ 54 ]. Thus, the convergent validity of the factors was ensured by these findings. Table  3 displays the extracted values for the outer loadings, Cronbach’s alpha, composite reliability, and average variance extracted.
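For reference, composite reliability and AVE are simple functions of the standardized outer loadings. The sketch below uses hypothetical loadings, not the paper's Table 3 values, to show how the 0.7 and 0.5 thresholds are checked (Cronbach's α additionally requires the raw item scores, so it is omitted here):

```python
import numpy as np

# Reliability/convergent-validity formulas on hypothetical standardized
# outer loadings for one construct (illustrative; not the study's data).
loadings = np.array([0.78, 0.82, 0.75, 0.80])

# Composite reliability: CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2))
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings**2).sum())
# Average variance extracted: AVE = mean of squared loadings
ave = (loadings**2).mean()
print(round(cr, 3), round(ave, 3))  # 0.867 0.621
```

Both values clear the conventional cutoffs (CR > 0.7, AVE > 0.5) for these illustrative loadings.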

It was observed that the CFA measurement model fitted the data effectively. Comparative fit index (CFI), standardized root mean square residual (SRMR), and root mean square error of approximation (RMSEA) were calculated to be 0.96, 0.0503, and 0.035, respectively. Detailed information is provided in Table  4 .

In addition, discriminant validity was evaluated using the Heterotrait-monotrait ratio (HTMT), as shown in Table  5 . Based on these findings, it is apparent that the discriminant validity of the components in the proposed model was significantly validated by the HTMT standards [ 54 ].
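The HTMT criterion used above is the mean heterotrait (between-construct) item correlation divided by the geometric mean of the mean monotrait (within-construct) correlations. A minimal NumPy sketch on synthetic item blocks, not the study's data:

```python
import numpy as np

def htmt(block_a, block_b):
    """Heterotrait-monotrait ratio for two item blocks
    (rows = respondents, columns = items). A sketch of the HTMT formula."""
    def mean_offdiag(R):
        iu = np.triu_indices_from(R, k=1)
        return np.abs(R[iu]).mean()

    p = block_a.shape[1]
    R = np.corrcoef(np.hstack([block_a, block_b]), rowvar=False)
    hetero = np.abs(R[:p, p:]).mean()      # between-construct correlations
    mono_a = mean_offdiag(np.corrcoef(block_a, rowvar=False))
    mono_b = mean_offdiag(np.corrcoef(block_b, rowvar=False))
    return hetero / np.sqrt(mono_a * mono_b)

# Toy data: two distinct latent factors, three items each. Distinct
# constructs should yield HTMT well below the common 0.85 cutoff.
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((200, 1)), rng.standard_normal((200, 1))
A = f1 + 0.4 * rng.standard_normal((200, 3))
B = f2 + 0.4 * rng.standard_normal((200, 3))
print(htmt(A, B) < 0.85)  # True
```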

Path model assessment

The model developed in the paper illustrates six critical hypotheses regarding the influence of scalability, affordability, and replicability on immunization performance and sustainability. Moreover, the model demonstrates the hypothesized link between immunization performance and sustainability. For the empirical validation of the model, the gathered data were utilized to examine the hypothesized correlations. Using SmartPLS 4.0 software, the model was empirically validated (Refer Fig.  2 ). Table  6 displays the results obtained by analyzing the structural model. All factors (SAR) were discovered to impact immunization performance substantially. While examining the direct effect of independent factors (SAR) on sustainability, it was observed that only affordability positively affected sustainability. It was also noticeable from the tested model that there was a significant path from immunization performance to sustainability. It is evident from Table  6 that the model has been validated and that several significant hypotheses were supported. The model also revealed an SRMR value of 0.053, indicating a satisfactory model fit. Analysis reveals that the R-squared values for immunization performance and sustainability were 0.494 and 0.290, respectively.

Fig. 2  Empirical validation of the model

In general, all significant relationships have values ranging from 0.17 to 0.33. Regarding the effect of independent factors on immunization performance, all three factors (SAR) have a substantial direct effect. Scalability and replicability were found to have a more substantial influence on immunization performance than affordability. The effect (β) values presented for hypotheses H 1 and H 5 are 0.28 and 0.33, respectively.

The β value of affordability on immunization performance (H 3 ) was 0.274. Comparing all three hypothesized links revealed that replicability had the most significant impact, followed by scalability and affordability. Similarly, when investigating the direct relationship between independent factors (SAR) and the sustainability of mobile clinics, the reported β value for the only significant influence of affordability was 0.173. In addition, the hypothesized pathway between immunization performance and sustainability was observed, providing evidence for the association. The reported β value for the relationship between immunization performance and mobile clinics’ sustainability was 0.232. Table  6 provides information regarding the validation of hypothesized linkages.
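For intuition about how standardized path coefficients (β) and R² relate, the following is a rough stand-in using ordinary least squares on standardized composite scores, with synthetic data shaped like the reported effects. This is not PLS-SEM and not the paper's estimation, just a sketch:

```python
import numpy as np

# OLS on standardized composites as a rough analogue of standardized
# path coefficients (NOT PLS-SEM). Effect sizes mimic the reported
# betas (0.28, 0.27, 0.33); the data are synthetic.
rng = np.random.default_rng(7)
n = 207  # matches the study's sample size
SC, AF, RP = (rng.standard_normal(n) for _ in range(3))
IP = 0.28 * SC + 0.27 * AF + 0.33 * RP + 0.7 * rng.standard_normal(n)

def z(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([z(SC), z(AF), z(RP)])
y = z(IP)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # standardized path estimates
r2 = 1 - ((y - X @ beta) ** 2).sum() / (y ** 2).sum()
print(np.round(beta, 2), round(r2, 2))
```

With standardized variables, each β is the expected standard-deviation change in the outcome per standard-deviation change in a predictor, and R² is the share of outcome variance the predictors jointly explain, mirroring how the reported 0.494 for immunization performance is read.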

Equitable access to healthcare and immunizations is crucial for promoting public health, particularly in underserved rural and tribal areas. Mobile Medical Units (MMUs) represent a valuable community-based service delivery approach to address healthcare disparities in both urban and rural settings. While MMUs have been recognized as essential providers of medical care, their full potential and effectiveness have not been comprehensively explored in previous studies. This empirical study was conducted to shed light on the critical factors influencing the effectiveness and sustainability of mobile clinics for immunization programs.

The primary objective of this study was to discern the critical components of the operational model for MMUs and assess their impact on immunization performance and the sustainability of the model within the context of primary healthcare. For this purpose, a quantitative analysis assessed the five key factors: scalability, affordability, replicability, immunization performance, and sustainability. By employing structural equation modeling, the direct effects of these factors were examined. We aimed to construct a framework of guidelines that could enhance healthcare coverage in developing countries, with a specific focus on India. The findings directly address the research objectives by elucidating the relationships among scalability, affordability, replicability, immunization performance, and sustainability of the MMUs operational model. In support of RO1, the results showed that scalability, affordability, and replicability (SAR) significantly influence immunization performance, with replicability having the most substantial impact. Furthermore, immunization performance has a direct effect on the sustainability of MMUs, underscoring its crucial role. These findings collectively inform the development of a comprehensive framework for MMUs, as outlined in RO2. This framework emphasizes that to achieve sustainable primary healthcare innovations, MMUs must prioritize enhancing immunization performance through scalable, affordable, and replicable models. By directly linking these empirical findings to our research objectives, we provide actionable insights for policymakers and healthcare professionals.

Our empirical findings have yielded valuable insights into the factors contributing to mobile clinics' successful operation in India. Scalability primarily hinges on a well-defined delivery system, technical support, organizational capability, partnerships, integration, and community engagement. Affordability is closely linked to factors such as the procurement and distribution of vaccines, cold storage infrastructure, waste management, and managing both direct and indirect costs. Replicability, however, depends on open-sourcing training, strategic collaboration, and capacity building.

In the sustainability domain, significant emphasis was placed on developing an ecosystem that supports the enduring presence of mobile clinics. This involves effective management strategies that ensure both social and project sustainability, including securing funding for vaccines, equipment, maintenance, and staff salaries; establishing strategic partnerships with local stakeholders; raising public awareness about immunization programs; and ensuring equitable access to quality immunization services for all, regardless of socioeconomic status or geographical location.

During the factorization process, it became evident that the delivery system is pivotal in determining scalability. Successful mobile clinics must implement standard operating procedures, maintain effective communication and data management systems, and possess a well-trained workforce capable of allocating resources efficiently to meet the diverse needs of their communities.

Similarly, in terms of affordability, cold storage and waste management emerged as the key factors. Ensuring that vaccines are stored at the appropriate temperature during transportation, particularly in remote areas, is essential for cost-effectiveness and minimizing dose wastage in immunization programs.

In the context of replicability, capacity building was identified as the strongest indicator. Building the competence of mobile clinic teams, which include nurse practitioners, physicians, public health workers, and other healthcare professionals, is critical for agile and effective vaccine delivery. Proper training and adherence to best practices are essential for success.

Lastly, when examining the 'sustainability' factor, it became evident that creating an ecosystem conducive to the long-term operation of mobile clinics is vital. This involves continuous monitoring, sound financial planning, strategic partnerships, public awareness campaigns, and a commitment to equitable service provision.

Considering the overall model, the validated framework highlights the significance of scalability, affordability, and replicability in improving immunization performance. However, while affordability significantly impacts sustainability, the other factors appear to have no direct influence on it. Nevertheless, a strong link exists between immunization performance and the sustainability of mobile clinics. Affordable mobile clinics are more likely to be utilized, resulting in improved immunization rates and greater sustainability due to increased demand. Furthermore, the scalability and replicability of mobile clinics enable them to adapt to various contexts, which encourages broader adoption and, consequently, enhances their long-term sustainability. This study underscores the vital role of these factors in optimizing the impact of mobile clinics in advancing public health goals.
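The structural logic described here (SAR constructs feeding immunization performance, which in turn drives sustainability) can be illustrated with a toy path-model sketch. The example below fits simple least-squares path coefficients on synthetic composite scores; it is an intuition-building approximation, not the PLS-SEM procedure used in the study, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic composite scores for the three SAR constructs (illustration only).
scal = rng.normal(size=n)
afford = rng.normal(size=n)
repl = rng.normal(size=n)

# Immunization performance driven by SAR, with replicability weighted
# highest to mirror the pattern reported in the text, plus noise.
perf = 0.20 * scal + 0.25 * afford + 0.40 * repl + rng.normal(scale=0.5, size=n)

# Sustainability driven mainly by immunization performance.
sust = 0.60 * perf + rng.normal(scale=0.5, size=n)

def path_coefs(predictors, outcome):
    """Ordinary least-squares path coefficients of outcome on predictors."""
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1:]  # drop the intercept

b_sar = path_coefs([scal, afford, repl], perf)   # SAR -> performance
b_perf = path_coefs([perf], sust)                # performance -> sustainability

print("SAR -> performance paths:", b_sar.round(2))
print("performance -> sustainability path:", b_perf.round(2))
```

With these synthetic weights, the recovered coefficients show replicability as the strongest SAR path and a strong positive performance-to-sustainability path, mirroring the qualitative pattern of the reported results.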

Previous studies have highlighted the challenges and benefits of scaling up healthcare interventions and replicating successful models in different settings. Our study builds on this body of work by empirically demonstrating that while scalability and replicability do not have a direct effect on sustainability, they significantly influence immunization performance, which in turn impacts sustainability. This finding adds a nuanced understanding of the indirect pathways through which these factors contribute to sustainable healthcare systems.

Affordability has consistently been recognized as a critical factor in healthcare delivery, particularly in resource-constrained settings. Our results corroborate this by showing that affordability significantly impacts immunization performance, thus reinforcing the need for cost-effective healthcare solutions. This aligns with existing literature that emphasizes the importance of economic feasibility in healthcare interventions.

This study confirms the pivotal role of immunization performance in achieving sustainability, consistent with prior research that underscores the importance of effective immunization programs for long-term health outcomes. By linking immunization performance directly to sustainability, our findings provide empirical support for strategies aimed at enhancing immunization coverage as a pathway to sustainable healthcare.

The healthcare challenges addressed in this study, such as equitable access to healthcare for rural, tribal, and underprivileged populations, are not unique to India. Many developing countries face similar issues, making the findings of this study potentially applicable to other contexts. The operational model of MMUs evaluated in this study can serve as a reference for other developing countries looking to implement or enhance similar healthcare interventions.

The concept of scalability, as evaluated in this study, involves expanding healthcare interventions while maintaining effectiveness and efficiency. This aspect is crucial for developing countries with large rural populations and limited healthcare infrastructure. Affordability is a significant consideration in many developing countries where economic constraints limit access to healthcare services. The insights from this study on making MMUs cost-effective can guide policymakers in similar settings. The ability to replicate successful healthcare interventions in different settings is vital for broader implementation. The findings of this study on the replicability of MMUs can inform strategies in other developing countries to achieve consistent healthcare outcomes. Ensuring the long-term sustainability of healthcare interventions is a common challenge across developing nations. The study's findings on the sustainability of MMUs provide a framework for integrating such models into existing health systems for lasting impact.

Research on improving the sustainability of MMUs has received little attention in developing countries and has been recognized as a prominent gap in the past literature [10]. This study found that MMUs can engage underprivileged people and win their trust by driving directly into communities and opening their doors on the doorsteps of their target beneficiaries. Services offered by MMUs have been proven to enhance immunization coverage and individual health outcomes, advance community health, and lower healthcare costs compared with typical clinical settings, because MMUs can overcome numerous healthcare barriers. MMUs can operate as significant players in a developing healthcare system, since they can address social, behavioral, and medical health challenges and act as a bridge between physical clinics and the community. Continuous research must be conducted to resolve remaining problems, enhance the capacity of MMUs, strengthen the cost-effectiveness of MMU services, and generate both qualitative and quantitative evidence to advocate for the wider integration of MMUs into the public health ecosystem to tackle some of the most significant challenges facing primary healthcare services today.

The study examined the impact of scalability, affordability, and replicability on immunization performance and, further, the effect of immunization performance on the future sustainability of the MMU operational model. While the initiative applies to all regions, it particularly benefited Tier-II, rural, and tribal communities. The mobile vaccination program aims to reach and vaccinate people in hard-to-reach locations, and in this way it has firmly established itself in urban catchment areas as well as rural and tribal communities. Finally, a similar model could be replicated in other distant and underprivileged regions of India where healthcare services are lacking, and a similar strategy can be implemented for future routine immunization programs and other primary healthcare services in rural and tribal areas.

Additionally, this study has several limitations that should be acknowledged. The operational model of MMUs might vary significantly across different regions and settings. The study’s findings on scalability, affordability, and replicability may therefore need contextual adaptation when applied to different healthcare environments. Also, the data were collected from healthcare professionals in six states of India, which may not fully represent the diversity of healthcare settings across the entire country. Consequently, the findings may not be entirely generalizable to other regions within India or to other countries with different healthcare contexts.

Implications and future research avenues

MMUs have been recognized as a transformative intervention towards equitable access to health care and the achievement of universal health coverage in developing countries like India. MMUs can be efficient alternatives for delivering quality healthcare to the most vulnerable populations and improving the early diagnosis of various diseases. Practically, this study equips policymakers and public health professionals with the critical components of the MMUs operational model leading toward sustainability. The research framework provides reliable grounds for examining the impact of scalability, affordability, and replicability on immunization coverage as the primary public healthcare outcome. The model can be employed in planning and developing an ecosystem of MMUs for underserved populations and integrating MMUs into the public health structure of a developing country. The model can also be utilized as a management tool for monitoring and assessment of various interventions to be introduced along with MMUs in the future. Practitioners can assess the scalability and affordability of their interventions and improve their decision-making by examining the impact on sustainability.

Our study has primarily identified the impact of scalability, affordability, and replicability on immunization performance, but the model could be extended by examining how the technological readiness of MMUs influences their sustainability. Also, future researchers could explore other public health outcomes and measure the overall impact of scalability, affordability, and replicability on public health in general. In addition, it has been suggested that future researchers utilize a multiple case study approach to examine the impact of the critical components of MMU operation by generating evidence from more than one case organization and covering a wider range of geographies in India.

Availability of data and materials

Not applicable

Abbreviations

  • MMUs: Mobile medical units
  • SAR: Scalability, affordability, replicability
  • PLS-SEM: Partial least squares structural equation modeling
  • VaccineOnWheels
  • COVID-19: Coronavirus Disease 2019
  • PPP: Public-private partnership
  • NGOs: Non-profit government organizations
  • CSR: Corporate social responsibility
  • CB-SEM: Covariance-based structural equation modeling
  • EFA: Exploratory factor analysis
  • KMO: Kaiser-Meyer-Olkin
  • CFA: Confirmatory factor analysis
  • CR: Composite reliability
  • AVE: Average variance extracted
  • CFI: Comparative fit index
  • SRMR: Standardized root mean square residual
  • HTMT: Heterotrait-monotrait ratio
  • RMSEA: Root mean square error of approximation
  • f²: Effect value

Amimo F, Lambert B, Magit A, Hashizume M. A review of prospective pathways and impacts of COVID-19 on the accessibility, safety, quality, and affordability of essential medicines and vaccines for universal health coverage in Africa. Globalization health. 2021;17(1):1–5.

Ho CJ, Khalid H, Skead K, Wong J. The politics of universal health coverage. Lancet. 2022.

Dar M, Moyer W, Dunbar-Hester A, Gunn R, Afokpa V, Federoff N, Brown M, Dave J. Supporting the most vulnerable: Covid-19 vaccination targeting and logistical challenges for the homebound population. NEJM Catalyst Innovations Care Delivery. 2021;2(5). https://catalyst.nejm.org/doi/full/10.1056/CAT.21.0117 .

Kumar A, Nayar KR, Koya SF. COVID-19: Challenges and its consequences for rural health care in India. Public Health Pract. 2020;1:100009.

Das P, Shukla S, Bhagwat A, Purohit S, Dhir S, Jandu HS, Kukreja M, Kothari N, Sharma S, Das S, Taneja G. Modeling a COVID-19 vaccination campaign in the State of Madhya Pradesh, India. Global J Flex Syst Manage. 2023;24(1):143–61.

Dhawan V, Aggarwal MK, Dhalaria P, Kharb P, Sharma D, Dinesh KK, Dhir S, Taneja G, Ghosh RS. Examining the Impact of Key Factors on COVID-19 Vaccination Coverage in India: A PLS-SEM Approach. Vaccines. 2023;11(4):868.

Stamm A, Strupat C, Hornidge AK. Global access to COVID-19 vaccines: Challenges in production, affordability, distribution and utilisation. Discussion Paper; 2021.

Wouters OJ, Shadlen KC, Salcher-Konrad M, Pollard AJ, Larson HJ, Teerawattananon Y, Jit M. Challenges in ensuring global access to COVID-19 vaccines: production, affordability, allocation, and deployment. Lancet. 2021;397(10278):1023–34.

Mishra V, Seyedzenouzi G, Almohtadi A, Chowdhury T, Khashkhusha A, Axiaq A, Wong WY, Harky A. Health inequalities during COVID-19 and their effects on morbidity and mortality. J Healthc Leadersh. 2021:19–26.

Khanna AB, Narula SA. Mobile health units: mobilizing healthcare to reach unreachable. Int J Healthc Manag. 2016;9(1):58–66.

Chanda S, Randhawa S, Bambrah HS, Fernandes T, Dogra V, Hegde S. Bridging the gaps in health service delivery for truck drivers of India through mobile medical units. Indian J Occup Environ Med. 2020;24(2):84.

Dwivedi YK, Shareef MA, Simintiras AC, Lal B, Weerakkody V. A generalised adoption model for services: A cross-country comparison of mobile health (m-health). Government Inform Q. 2016;33(1):174–87.

Akhtar MH, Ramkumar J. Primary Health Center: Can it be made mobile for efficient healthcare services for hard to reach population? A state-of-the-art review. Discover Health Syst. 2023;2(1):3.

Patel V, Parikh R, Nandraj S, Balasubramaniam P, Narayan K, Paul VK, Kumar AS, Chatterjee M, Reddy KS. Assuring health coverage for all in India. Lancet. 2015;386(10011):2422–35.

Attipoe-Dorcoo S, Delgado R, Gupta A, Bennet J, Oriol NE, Jain SH. Mobile health clinic model in the COVID-19 pandemic: lessons learned and opportunities for policy changes and innovation. Int J Equity Health. 2020;19(1):1–5.

Chillimuntha AK, Thakor KR. Disadvantaged rural health–issues and challenges: a review. Natl J Med Res. 2013;3(01):80–2.

Landis-Lewis Z, Flynn A, Janda A, Shah N. A scalable service to improve health care quality through precision audit and feedback: proposal for a randomized controlled trial. JMIR Res Protocols. 2022;11(5):e34990.

Liu A, Sullivan S, Khan M, Sachs S, Singh P. Community health workers in global health: scale and scalability. Mt Sinai J Medicine: J Translational Personalized Med. 2011;78(3):419–35.

Azizatunnisa L, Cintyamena U, Mahendradhata Y, Ahmad RA. Ensuring sustainability of polio immunization in health system transition: lessons from the polio eradication initiative in Indonesia. BMC Public Health. 2021;21(1):1–6.

Cintyamena U, Azizatunnisa L, Ahmad RA, Mahendradhata Y. Scaling up public health interventions: case study of the polio immunization program in Indonesia. BMC Public Health. 2021;21(1):1–2.

English SW, Barrett KM, Freeman WD, Demaerschalk BM. Telemedicine-enabled ambulances and mobile stroke units for prehospital stroke management. J Telemed Telecare. 2022;28(6):458–63.

Ganapathy K, Reddy S. Technology enabled remote healthcare in public private partnership mode: A story from India. Telemedicine, Telehealth and Telepresence: Principles, Strategies, Applications, and New Directions. 2021:197–233.

Lin SC, Zhen A, Zamora-Gonzalez A, Hernández J, Fiala S, Duldulao A. Research Brief Report: Variation in State COVID-19 Disease Reporting Forms on Social Identity, Social Needs, and Vaccination Status. J Public Health Manage Pract. 2022;28(5):486.

Pegurri E, Fox-Rushby JA, Damian W. The effects and costs of expanding the coverage of immunisation services in developing countries: a systematic literature review. Vaccine. 2005;23(13):1624–35.

Levine DM, Chalasani R, Linder JA, Landon BE. Association of the Patient Protection and Affordable Care Act with ambulatory quality, patient experience, utilization, and cost, 2014–2016. JAMA Netw open. 2022;5(6):e2218167.

Sharma P, Pardeshi G. Rollout of COVID-19 vaccination in India: a SWOT analysis. Disaster Med Pub Health Prep. 2022;16(6):2310–3.

Epstein S, Ayers K, Swenor BK. COVID-19 vaccine prioritisation for people with disabilities. Lancet Public Health. 2021;6(6):e361.

Merritt L, Fennie K, Booher J, Fulcher O, Carretta J. Community-led effort to Reduce Disparities in COVID Vaccination: A Replicable Model. J Natl Med Assoc. 2021;114(3):S38. https://doi.org/10.1016/j.jnma.2021.08.014 .

Chadambuka A, Chimusoro A, Apollo T, Tshimanga M, Namusisi O, Luman ET. The need for innovative strategies to improve immunisation services in rural Zimbabwe. Disasters. 2012;36(1):161–73.

Kumar VM, Pandi-Perumal SR, Trakht I, Thyagarajan SP. Strategy for COVID-19 vaccination in India: the country with the second highest population and number of cases. npj Vaccines. 2021;6(1):60.

Baig MB, Panda B, Das JK, Chauhan AS. Is public private partnership an effective alternative to government in the provision of primary health care? A case study in Odisha. J Health Manage. 2014;16(1):41–52.

Skolnik A, Bhatti A, Larson A, Mitrovich R. Silent consequences of COVID-19: why it’s critical to recover routine vaccination rates through equitable vaccine policies and practices. Annals Family Med. 2021;19(6):527–31.

Oriol NE, Cote PJ, Vavasis AP, Bennet J, DeLorenzo D, Blanc P, Kohane I. Calculating the return on investment of mobile healthcare. BMC Med. 2009;7(1):1–6.

Mukherjee S, Baral MM, Chittipaka V, Pal SK, Nagariya R. Investigating sustainable development for the COVID-19 vaccine supply chain: a structural equation modelling approach. J Humanitarian Logistics Supply Chain Manage. 2022.

Christie A, Brooks JT, Hicks LA, Sauber-Schatz EK, Yoder JS, Honein MA, Team COVIDC. Guidance for implementing COVID-19 prevention strategies in the context of varying community transmission levels and vaccination coverage. Morb Mortal Wkly Rep. 2021;70(30):1044.

Pilati F, Tronconi R, Nollo G, Heragu SS, Zerzer F. Digital twin of COVID-19 mass vaccination centers. Sustainability. 2021;13(13):7396.

Russo AG, Decarli A, Valsecchi MG. Strategy to identify priority groups for COVID-19 vaccination: A population based cohort study. Vaccine. 2021;39(18):2517–25.

Kimble C, Coustasse A, Maxik K. Considerations on the distribution and administration of the new COVID-19 vaccines. Int J Healthc Manag. 2021;14(1):306–10.

Kress DH, Su Y, Wang H. Assessment of primary health care system performance in Nigeria: using the primary health care performance indicator conceptual framework. Health Syst Reform. 2016;2(4):302–18.

Park K, Park J, Kwon YD, Kang Y, Noh JW. Public satisfaction with the healthcare system performance in South Korea: Universal healthcare system. Health Policy. 2016;120(6):621–9.

Hald AN, Bech M, Burau V. Conditions for successful interprofessional collaboration in integrated care–lessons from a primary care setting in Denmark. Health Policy. 2021;125(4):474–81.

Hair JF, Ringle CM, Sarstedt M. Partial least squares structural equation modeling: Rigorous applications, better results and higher acceptance. Long Range Plann. 2013;46(1–2):1–2.

Ali F, Omar R. Determinants of customer experience and resulting satisfaction and revisit intentions: PLS-SEM approach towards Malaysian resort hotels. Asia-Pacific J Innov Hospitality Tourism (APJIHT). 2014;3:1–9.

Henseler J, Dijkstra TK, Sarstedt M, Ringle CM, Diamantopoulos A, Straub DW, Ketchen DJ Jr, Hair JF, Hult GT, Calantone RJ. Common beliefs and reality about PLS: Comments on Rönkkö and Evermann (2013). Organizational Research Methods. 2014;17(2):182–209.

Rigdon EE. Rethinking partial least squares path modeling: breaking chains and forging ahead. Long Range Plann. 2014;47(3):161–7.

Hair JF, Ringle CM, Sarstedt M. PLS-SEM: Indeed a silver bullet. J Mark theory Pract. 2011;19(2):139–52.

Henseler J, Ringle CM, Sinkovics RR. The use of partial least squares path modeling in international marketing. In: New challenges to international marketing. Vol. 20. Emerald Group Publishing Limited; 2009. pp. 277–319.

Huyut M, Soygüder S. The multi-relationship structure between some symptoms and features seen during the new coronavirus 19 infection and the levels of anxiety and depression post-Covid. Eastern J Med. 2022;27(1):1–10.

Tabachnick BG, Fidell LS. Experimental designs using ANOVA. Belmont, CA: Thomson/Brooks/Cole; 2007. Dec 6.

Iantovics LB, Rotar C, Morar F. Survey on establishing the optimal number of factors in exploratory factor analysis applied to data mining. Wiley Interdisciplinary Reviews: Data Min Knowl Discovery. 2019;9(2):e1294.

Jung S. Exploratory factor analysis with small sample sizes: A comparison of three approaches. Behav Process. 2013;97:90–5.

Hair J Jr, Hair JF Jr, Hult GT, Ringle CM, Sarstedt M. A primer on partial least squares structural equation modeling (PLS-SEM). Sage; 2021.

Hulland J. Use of partial least squares (PLS) in strategic management research: A review of four recent studies. Strateg Manag J. 1999;20(2):195–204.

Henseler J, Ringle CM, Sarstedt M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J Acad Mark Sci. 2015;43:115–35.

Acknowledgments

Not applicable.

This study was funded by the Bill and Melinda Gates Foundation (027027; RP04054F).

Author information

Authors and affiliations

Jivika Healthcare Private Limited, Pune, Maharashtra, India

Jignesh Patel, Sangita More, Pravin Sohani & Shrinath Bedarkar

Jindal Global Business School, O. P. Jindal Global University, Haryana, India

Kamala Kannan Dinesh

Department of Management Studies, Indian Institute of Technology Delhi, Delhi, India

Deepika Sharma, Sanjay Dhir & Sushil Sushil

Bill & Melinda Gates Foundation, New Delhi, India

Gunjan Taneja

Public Health Consultant, New Delhi, India

Raj Shankar Ghosh

Contributions

JP, SM, PS, SB, SD, SS, GT, and RG conceptualized the paper. KD, DS, SD, and SS collected and analyzed the data. KD and DS prepared the first draft of the manuscript, which was revised and edited by all other authors. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Deepika Sharma .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Annexure 1. Profile of respondents based on their expertise.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article.

Patel, J., More, S., Sohani, P. et al. Sustaining the mobile medical units to bring equity in healthcare: a PLS-SEM approach. Int J Equity Health 23 , 175 (2024). https://doi.org/10.1186/s12939-024-02260-x

Received : 29 April 2023

Accepted : 26 August 2024

Published : 02 September 2024

DOI : https://doi.org/10.1186/s12939-024-02260-x

  • Health care
  • Immunization

International Journal for Equity in Health

ISSN: 1475-9276

  • Open access
  • Published: 03 September 2024

Study of dog population dynamics and rabies awareness in Thailand using a school-based participatory research approach

  • Weerakorn Thichumpa 1 ,
  • Anuwat Wiratsudakul 2 ,
  • Sarin Suwanpakdee 2 ,
  • Chayanin Sararat 3 ,
  • Charin Modchang 3 , 4 ,
  • Setha Pan-ngum 5 ,
  • Nakornthip Prompoon 5 ,
  • Onpawee Sagarasaeranee 6 ,
  • Sith Premashthira 6 ,
  • Weerapong Thanapongtharm 6 ,
  • Arun Chumkaeo 7 &
  • Wirichada Pan-ngum 1 , 8  

Scientific Reports, volume 14, Article number: 20477 (2024)

  • Epidemiology
  • Health care
  • Risk factors

Rabies is a neglected disease primarily related to dog-mediated transmission to humans. Accurate dog demographic and dynamic data are essential for effectively planning and evaluating population management strategies when designing interventions to prevent rabies. However, in Thailand, longitudinal survey data regarding dog population size are scarce. A school-based participatory research (SBPR) approach was used to survey owned dogs for one year in four high-risk provinces (Chiang Rai, Surin, Chonburi, and Songkhla) of Thailand, aiming to understand dog population dynamics and raise awareness about rabies. The 'Pupify' mobile application was developed to collect data on the dog population and to observe its long-term dynamics in this study. At the end of the data collection period, telephone interviews were conducted to gain insight into contextual perceptions and awareness regarding both animal and human rabies, as well as the social responsibility of dog owners in disease prevention and control. Among 303 high school students who registered in our study, 218 reported at least one update of their dog information throughout the one-year period. Of 322 owned dogs in our survey, status updates over one year showed approximately 7.5 newborns per 100 dog-years, while deaths and missing dogs were 6.2 and 2.7 per 100 dog-years, respectively. The male-to-female ratio was approximately 1.8:1. Twenty-three students (10%) voluntarily participated in interviews for the qualitative study. The levels of rabies awareness and precautions among high school students were relatively low, and the survey's high dropout rate was due to discontinuity in communication between the researchers and the students over the year. In conclusion, this study used the SBPR approach via a mobile application to collect data on dog population dynamics and raise awareness regarding rabies in Thailand. Incorporating other engaging platforms (e.g., Facebook, Instagram, Twitter, and other popular applications) is necessary to enhance communication and engagement, thereby sustaining and maintaining data collection. Further health education on rabies vaccination and animal-care practices via social media platforms would be highly beneficial. For sustainable disease control, engaging communities to raise awareness of rabies and increasing dog owners' understanding of their responsibilities should be encouraged.
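The per-100-dog-year figures quoted above follow from a standard incidence-rate calculation. The sketch below shows the arithmetic with a hypothetical event count; only the 322-dog cohort size and the one-year observation window come from the text.

```python
def rate_per_100_dog_years(events: int, dog_years: float) -> float:
    """Incidence-style rate: events per 100 dog-years of follow-up."""
    return 100 * events / dog_years

# 322 owned dogs each followed for roughly one year give ~322 dog-years
# of observation. The birth count here is a hypothetical stand-in chosen
# to illustrate how a rate near the reported 7.5 arises.
dog_years = 322 * 1.0
births = 24
print(round(rate_per_100_dog_years(births, dog_years), 1))  # 7.5
```

The same formula applies to the death and missing-dog rates, swapping in the corresponding event counts.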

Introduction

Population demographics are important baseline data for the study of infectious diseases. Human population data are available in most settings. For animal populations, however, demographic information is very limited in several countries and often only available for specific cohorts or studies. In Thailand, nationwide dog surveys are conducted by local government organizations once or twice a year and reported to a web-based reporting system, "ThaiRabies.net", which has been updated to "Rabies One Data" since 2021 1 . These surveys require considerable human resources, while the quality of data can vary from province to province depending on the management and training of local staff teams to process and manage data 2 . Here, we propose an innovative way to conduct dog surveys using school-based participatory research (SBPR), a form of community-based participatory research (CBPR), an approach that involves collective, reflective, and systematic inquiry in which researchers and community stakeholders engage as equal partners in all steps of the research process, with the goal of educating, improving practice, or bringing about social change 3 , 4 . We implemented the SBPR approach to perform a dog population survey among high school students in Thailand, using a mobile-phone application. This alternative approach relies on a self-reporting system for dog owners, implemented through a mobile application developed for data collection. This approach was hoped to provide a lower-cost solution for long-term data collection by government sectors, as well as to promote community participation, raising awareness and responsibility among owners to register, monitor, and care for their dogs.
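A self-reporting system of this kind amounts to an append-only log of owner-submitted status updates, from which each dog's current status can be resolved. The sketch below illustrates that idea; the field names and statuses are illustrative assumptions, not the actual Pupify data model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DogUpdate:
    """One self-reported status update for an owned dog.
    Field names are illustrative, not the actual Pupify schema."""
    dog_id: str
    status: str        # e.g. "alive", "newborn", "dead", "missing"
    reported_on: date

def latest_status(updates: list[DogUpdate]) -> dict[str, str]:
    """Resolve each dog's most recent reported status from the update log."""
    latest: dict[str, DogUpdate] = {}
    for u in sorted(updates, key=lambda u: u.reported_on):
        latest[u.dog_id] = u  # later reports overwrite earlier ones
    return {dog_id: u.status for dog_id, u in latest.items()}

updates = [
    DogUpdate("d1", "alive", date(2022, 1, 10)),
    DogUpdate("d1", "missing", date(2022, 6, 2)),
    DogUpdate("d2", "alive", date(2022, 3, 5)),
]
print(latest_status(updates))  # {'d1': 'missing', 'd2': 'alive'}
```

Keeping the full log rather than only the latest state is what makes longitudinal quantities such as births, deaths, and losses per dog-year recoverable later.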

Dog ownership issues are critical for the design of rabies vaccination campaigns, especially in developing countries, including Thailand 5 . In many high-income settings, owners are responsible for properly confining their dogs and facilitating their vaccination against rabies. In Thailand, dog-keeping practices and the duties of responsible ownership vary depending on the cultural setting 6 . There is increasing evidence that most free-roaming dogs are owned and accessible for rabies prophylaxis 7 , 8 , 9 ; moreover, unvaccinated owned dogs have been affected by rabies 2 . Nevertheless, many owners cannot afford to pay for vaccination and other veterinary care for their own dogs 10 , 11 . Thus, many people rely on free mass vaccination campaigns against rabies provided by the government or non-governmental organizations (NGOs). In addition, limited access to dog vaccination can potentially reduce effective vaccination coverage, particularly if the proportion of unowned dogs is large. Dog movement patterns can also play a role in rabies epidemiology 12 . Dog confinement has been studied and implemented in some countries as a control measure for rabies 13 , 14 , 15 .

In Thailand, rabies is a notifiable condition; however, it is not compulsory to report suspected rabies exposure in humans 16 . Both the dog and human vaccination guidelines from the World Health Organization (WHO) and the World Organization for Animal Health (WOAH) recommend a comprehensive strategy to eradicate dog-mediated rabies 17 , 18 , 19 . The strategy highlights the importance of mass dog vaccination campaigns (aiming for at least 70% coverage) and the implementation of effective dog population control measures (e.g., sterilization), which have been optimized for rabies prevention and control 16 , 17 , 18 , 20 . Human rabies in Thailand has been prevented and controlled by policy promulgated since 1992, and cases have decreased because of schemes including mass dog vaccination and sterilization. Although human rabies in Thailand has gradually declined, animal rabies has been generally increasing over the past ten years 2 . In 2020, 209 rabid dogs and three human deaths due to rabies were reported in Thailand. Rabies is most prevalent in the provinces of Chonburi, Songkhla, and Surin, while Chiang Rai recorded a high rate of positive rabid-animal detections in 2018 21 , 22 . The control of rabies in animals is challenging, as the disease can be transmitted throughout the year; surveillance and control of animal carriers are therefore urgently required 20 . Under the Thai government's policy and guidelines (based on WHO and WOAH recommendations) for high-risk areas, ring vaccination is currently implemented for controlling and preventing rabies outbreaks, while sterilization is a long-term solution to control the dog population, reducing contact among dogs and between humans and dogs. Both vaccination and sterilization are also expected to improve the management of dog bites 22 .
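The 70% coverage target cited above translates directly into a dose-planning calculation: how many additional dogs must be vaccinated given the current count. The sketch below is a hypothetical illustration of that arithmetic, not part of any official planning tool.

```python
def doses_to_target(population: int, vaccinated: int, target_percent: int = 70) -> int:
    """Additional dogs to vaccinate to reach the target coverage
    (70% per the WHO/WOAH guidance cited above). Integer ceiling
    division avoids floating-point edge cases."""
    required = -(-target_percent * population // 100)  # ceil of target share
    return max(0, required - vaccinated)

# Hypothetical district: 10,000 owned dogs, 5,200 already vaccinated.
print(doses_to_target(10_000, 5_200))  # 1800 more dogs needed to reach 70%
```

Note that the calculation is conservative: any fractional dog requirement is rounded up, and a district already above target simply needs zero additional doses.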

Although the dog database has been significantly improved following the introduction of dog survey reporting to ThaiRabies.net by local government organizations, the system still relies solely on the public health sector. Moreover, data consistency remains an issue due to technical problems within the system and incomplete data entry. Here, we introduce a novel method for owned-dog data collection using the SBPR approach. Information about dog population dynamics is essential for population analysis and disease prediction and can serve as baseline data for dog population management plans. The exploration and identification of dog population ecosystems and dynamics are required as a framework to effectively plan and evaluate population management and interventions to prevent rabies 8 . In addition, introducing our dog survey approach among school-age children could be beneficial in terms of generating awareness of animal-care practices and disease, and developing a research mindset.

Countries in Southeast Asia are among the top users of mobile phones globally. In 2020, the total population of Thailand was approximately 65.42 million 23 . The number of smartphone users in Thailand reached 53.57 million, with around 60 million predicted by 2026, due to increases in both the Thai population and internet penetration 24 . Self-reported data collection via mobile phones can be useful for large-scale surveys, with the affordability and availability of mobile phones and wireless networks making them a viable alternative to traditional methods 25 . However, it is important to consider various aspects of the development and implementation of mobile phone data collection. For example, ensuring the usability and user acceptance of the data collection system will help motivate survey participants to stay with the project and continue to provide high-quality data. Server authentication through properly configured certificates helps counter the threat of data submission to a malicious server, which can increase users’ confidence in data security 26 .

Our study represents an initial effort to conduct a long-term survey based on dog owners’ awareness and participation. The dog population dynamics data were analyzed and visualized. In addition, a qualitative study was performed with the 10% of survey participants who volunteered to be interviewed about rabies knowledge, social responsibility, community engagement, and research orientation. The data collection tools and methods were assessed, and further improvements to this approach were proposed.

Dog population survey

School and participant demographics.

For the survey, 303 high-school students registered for our study through the ‘Pupify’ mobile application. Of these, 29.8%, 28.9%, 27.1%, and 14.2% were from schools in Chonburi, Surin, Chiang Rai, and Songkhla provinces, respectively; most were female (72.9%) (Table 1 ). Of the 303 registrants, 218 actually submitted at least one update on their dogs over the one-year study. However, the number of participants still submitting monthly dog updates dropped to 46, 63, and 43 after 6 months, after 9 months, and by the end of one year, respectively. The number of students providing complete one-year updates was 43, or 20% of the total participants from the start (Fig.  1 ).

Figure 1

Number of participants’ responses in each 3-month period during the study year.
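The attrition figures above can be summarized with a short script. This is an illustrative sketch using the counts reported in the text; the retention percentages are our own derived values, not figures stated by the study:

```python
# Participant attrition over the one-year study (counts from the text).
registered = 303       # students who registered via the 'Pupify' app
submitted_any = 218    # submitted at least one dog update
completed_year = 43    # provided complete one-year updates

def pct(part: int, whole: int) -> float:
    """Percentage of `whole` represented by `part`, to one decimal place."""
    return round(100 * part / whole, 1)

print(pct(submitted_any, registered))      # share of registrants who ever submitted (~71.9%)
print(pct(completed_year, submitted_any))  # share of submitters with full-year data (~19.7%, i.e. ~20%)
```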

Dog demographics and dog population dynamics

Overall, 322 owned dogs were reported during the study period. More than half were male (65.0%). Owned dogs were divided into three age groups based on the owners’ identification: birth to 1 year (28.3%), > 1 to 8 years (57.1%), and > 8 years (14.6%). These age classes represented three groups of dogs: puppies, adults, and elderly dogs. Most owned dogs were reported in Surin province (35.4%), followed by Chiang Rai (28.5%), Chonburi (22.7%), Songkhla (12.1%), and other locations where owners dwelled in adjacent areas (1.2%). In addition, 24 newborn puppies were reported, along with 20 deaths (e.g. caused by illness, bites, fights, accidents, and culling) and 9 missing dogs. These numbers correspond to estimated birth, death, and missing rates of 7.5, 6.2, and 2.7 per 100 dog-years, respectively. Based on the self-reporting system, 40.1% of the dogs had been vaccinated against rabies and 12.4% had been sterilized (Table 2 ).
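The birth, death, and missing rates above are expressed per 100 dog-years of follow-up. A minimal sketch of that calculation follows; the denominator here (roughly one year of follow-up per reported dog, i.e. ~322 dog-years) is our assumption, since the study does not state the exact person-time total, so results are approximate:

```python
def rate_per_100_dog_years(events: int, dog_years: float) -> float:
    """Event rate per 100 dog-years of observation."""
    return 100 * events / dog_years

# Event counts reported in the study; the denominator is an assumption.
dog_years = 322.0  # ~1 year of follow-up per reported dog (approximation)
births, deaths, missing = 24, 20, 9

print(round(rate_per_100_dog_years(births, dog_years), 1))   # ~7.5
print(round(rate_per_100_dog_years(deaths, dog_years), 1))   # ~6.2
print(round(rate_per_100_dog_years(missing, dog_years), 1))  # ~2.8 (the paper reports 2.7, so its exact denominator differs slightly)
```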

Qualitative study

Dog owner characteristics.

A total of 23 high-school students, all aged 17 years, voluntarily participated in our interview (see Supplementary Table 3 ). There were students from all three levels of participation, including registration only (17.4%, n = 4), partially updated data (39.1%, n = 9), and fully updated data (43.5%, n = 10). Although all schools from four provinces were represented, more than half of the participants were from Chonburi province (52.2%).

Extensive knowledge and dog rabies awareness

Most participants (91.3%, n = 21) strongly agreed that rabies is fatal, resulting in death in both humans and dogs. One participant noted, “I learned from the news on TV that human infections result in a hundred percent mortality” . However, 52.1% of the participants (n = 12) reported that they were either unaware of or did not follow local rabies situations. This indicates that while most participants were aware of rabies, they did not necessarily stay informed about the local situation. One participant said, “ I have very little experience of rabies disease. I have not seen the real case before and have not followed the disease situation. At school, there is minimal information for us to research more about rabies. Sometimes, external health staff came to educate us about health at school but didn’t focus on rabies” . While a majority (65.2%, n = 15) of participants considered that only cats and dogs were reservoirs for rabies, a larger proportion (78.2%, n = 18) were unsure whether there were other animal reservoirs. This indicates that most participants were unaware that other mammals can also become infected with rabies. From the interviews, some participants made statements such as “I think it mainly comes from dogs and cats, unlikely to be other species” and “Most cases are infected from stray dogs, perhaps also from rabbits and monkeys” . In addition, 65.2% (n = 15) of participants mainly received information about rabies from social media and other online sources, while the remaining participants obtained information from other sources, including schools (such as our project visit), television and news, community announcements, medical providers, parents, and relatives.

Rabies precautions and caring for owned dogs

Most participants (87.0%, n = 20) stated that avoiding contact with stray dogs can help to prevent rabies infection. Also, 52.1% (n = 12) suggested that owned dogs should be vaccinated annually against rabies. Dog confinement was reported by most owners (87.0%, n = 20) as a way to control and limit their dogs’ contact with humans or other animals. One participant said, “I keep my dog only in my house to avoid contacting with people and other dogs” and “My dog is always leashed all the time and I don’t allow other dogs nearby my dog when it is outside” . Accordingly, half of them (52.1%, n = 12) trusted their dogs with 80–100% confidence, owing to annual vaccination and not allowing the dogs outside. One participant said, “Some of my dogs are not yet vaccinated, we put the dogs to guard our properties in the factory area and sometimes outside dogs do come to visit” .

In terms of caring for owned dogs, participants reported how they managed their dog’s health (including regular health check-ups and visits to veterinarians when health issues were identified). The majority used the services of animal clinics (87.0%, n = 20), followed by animal hospitals (21.7%, n = 5), treatment by owners (21.7%, n = 5), and government veterinary services (13.0%, n = 3). However, one said, “I saw my aunt giving paracetamol to the dog when it was sick. I didn't agree with that and would have looked for more information or taken the dog to the vet instead” . This indicated that animal health education on the care of owned dogs should be enhanced, with information provided by specialists at animal service stations.

Regarding what happens to newborn puppies, participants identified two common practices: giving them away to others (65.2%, n = 15) and keeping the puppies themselves (39.1%, n = 9). During the mating season, most participants said they confined their dogs and did not allow them to breed with other dogs. One participant said, “I usually keep the dog in the house and sometimes use a leash to prevent dogs fighting”. Conversely, some participants still allowed their dogs, both neutered and non-neutered, to breed. Finally, the owners said they commonly observed their dog’s health status at feeding time (47.8%, n = 11); when the dogs were sleeping (30.4%, n = 7) or playing (17.4%, n = 4); or when they noticed any abnormality (17.4%, n = 4).

Obstacles, limitations, and motivations for joining in with school-based participatory research

Obstacles and limitations relating to the SBPR study mentioned by participants included forgetting to update their dog’s data (65.2%, n = 15), having school assignments and portfolios (30.4%, n = 7), having a part-time job (17.4%, n = 4), having personal commitments (17.4%, n = 4), having a poor internet connection (13.0%, n = 3), changing their smartphone (8.7%, n = 2), being unable to install the mobile application (4.3%, n = 1), and not being interested in participating (4.3%, n = 1).

Conversely, participants reported some interesting advantages of, and motivations for, participating in this study. Motivations included the attainment of project certificates (60.9%, n = 14), followed by project rewards/gifts (34.8%, n = 8), research experience (13.0%, n = 3), dog care and follow-up (13.0%, n = 3), and rabies information (4.3%, n = 1). Other influences for joining the project included their own initiative (65.2%, n = 15), project notifications (13.0%, n = 3), project rewards (8.7%, n = 2), and support for school activities (4.3%, n = 1). After participating in this study, the main advantages reported mostly concerned caring for owned dogs, with regard to dog attention and care (69.6%, n = 16), observation of dog behavior (34.8%, n = 8), dog vaccine notifications (17.4%, n = 4), and education (17.4%, n = 4). One participant mentioned, “In my opinion, the best thing I learned is to pay more attention to my dog. I observe my dogs more regularly and take care of them much better than earlier” .

Other suggestions from participants

Some participants suggested that they needed more information about rabies, its prevention and control, dog management, and dog vaccination. This information could be added to the Pupify application, where it would be easily accessible. Also, alternative sources of information should be considered, e.g., infographics and dog fan-pages on Facebook, Instagram, Twitter, or other popular social media platforms. One participant suggested, “I think having different channels for communication would help stimulate more interest in the work, for example, forming a ‘dog lovers’ group on social media” .

Here, we explored a new method of collecting dog data via a mobile application, a self-reporting system for dog owners, focusing initially on high-school students who owned smartphones. This contrasts with the conventional dog population census performed once or twice per year in Thailand by the government departments responsible for animal health. The key challenge to our design was the number of losses to follow-up. Our qualitative study revealed that the main barriers to updating dog dynamics data were personal issues and technical problems. A participant from the partial-update group noted, “I gave regular updates until I changed my smart phone, I stopped updating the information completely” . One from the no-update group said, “I had difficulties installing the app and I think I am not disciplined enough to join this project anyway”. In addition, there was some feedback on the suitability of a mobile instant messaging app for data tracing. One participant suggested, “I prefer other channels of communication such as Instagram and Facebook because they are more convenient to me” .

Nevertheless, we estimated birth, death, and missing rates of 7.5, 6.2, and 2.7 per 100 dog-years, respectively. The male to female ratio was approximately 1.8:1. The variations in these rates and ratios among the studied provinces are noticeable (see Supplementary Tables 1 and 2 ). This could be due to the differing nature of owned-dog populations in different parts of Thailand. However, given the relatively small sample size in our study, it would not be appropriate to perform any sub-analysis of these data. It is important to note that the majority of the data pertained to confined dogs (70.2%), which may not accurately reflect the conditions of free-roaming dogs. Future dog censuses should cover confined, free-roaming, and stray dogs to provide a more comprehensive representation of the overall dog population. Observations in South Africa revealed birth and death rates of 31.3–45.1 and 40.6–56.8 per 100 dog-years, respectively, with a male to female ratio of approximately 1.4–1.7:1 27 . A study in India estimated annual per capita birth and death rates of 1.0 and 0.7, respectively, with a male to female ratio of approximately 1.4:1 28 . A sight–resight survey in Australia reported birth and death rates of approximately 2.4 and 1.7 dogs/dog-owning house/year, respectively, with a male to female ratio of approximately 1:1 7 . Compared with these studies (which used different data collection approaches, including observational, sight–resight, and/or mark–recapture surveys), births and deaths in our study were relatively low. However, the male to female ratio was in line with previous studies. Similar to a previous study 6 , we found that the proportions of dog-keeping approaches (i.e. confined or free-roaming) varied among the sites, with dogs usually confined in well-developed areas, whereas free-roaming dogs were reported more frequently in remote areas.
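The approximately 1.8:1 male to female ratio follows directly from the 65.0% male share among the 322 reported dogs. A quick check (our own arithmetic, using the counts and percentages from the results):

```python
# Derive the male:female ratio from the reported male share.
total_dogs = 322
male_share = 0.65  # 65.0% of owned dogs were male

males = round(total_dogs * male_share)  # ~209 dogs
females = total_dogs - males            # ~113 dogs
print(round(males / females, 1))        # ~1.8, i.e. a ratio of about 1.8:1
```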

Our study had some limitations. First, the survey was restricted to owned dogs. It would be helpful to collect similar data for stray dogs; however, to conduct such a study in the Thai setting, the individuals who take care of stray dogs, so-called “local feeders”, must be identified 29 . Second, the participants comprised only high-school students of a specific age group; a broader target population should perhaps be considered for future surveys. Furthermore, we used only three broad age classes to represent puppies, adults, and elderly dogs; exact dog ages and more detailed classifications (i.e. puppy, juvenile, young adult, mature adult, senior, and geriatric) were not available in this study. Third, the 'Pupify' application was developed for Android phones only and required updates to remain compatible with the latest operating system versions. Fourth, the rate of one-year data completion among the participating students was low, because participation was voluntary and unrelated to school or teacher requests. The study sites were distant from the central project location, and notifications and encouragement were delivered solely via the Line messaging application and telephone calls, which led to discontinuities in communication between the researchers and the students throughout the year. The barriers to our SBPR engagement were thus limitations of the mobile application platform, technical issues, personal reasons, and the lack of project engagement through teachers and/or schools. Further studies should consider site visits to enhance communication, encourage participation, and investigate any arising issues.

In accordance with “One Health” concepts, human health is closely connected to the health of animals and our shared environment, and research in this area should take a collaborative, multisectoral, and trans-disciplinary approach to achieve optimal health outcomes. We made considerable effort to apply the SBPR approach in conducting this study. The initial motivation for study participation was primarily driven by participants’ desire to achieve long-term goals and enhance their profiles for university enrollment. After participating, they also recognized considerable benefits in caring for their dogs and demonstrated a commitment to sustained effort for better dog care. Although the response rate among participants was low, we note that, based on participants’ perceptions, the main advantage of improved care for owned dogs was initially achieved. Most interviewees agreed that this study would encourage them to pay more regular attention to their dogs regarding health, vaccination, and rabies prevention. Our study demonstrated the importance of encouraging, among school-age children, early learning about disease prevention and awareness, together with community engagement and social responsibility, for their future. Finally, it is important to note that the success of much research depends on effective data collection. This study has provided valuable lessons, demonstrating that engaging the general public, beyond researchers and experts, presents considerable challenges. Practical issues such as invitations, communication, cooperation, maintaining engagement, and overall participation should be carefully considered. We hope that the insights gained from our SBPR study may be beneficial for further studies in similar contexts.

Conclusions

Using the SBPR approach to collect dog population dynamics data from high-school students can be challenging. This study was an initial effort to explore the potential of SBPR for data collection, with the aim of extending the approach beyond student awareness to include general dog owners in further research. Implementing a suitable SBPR approach involves designing educational activities, training participants, conducting surveys, and engaging the community. This could lead to effective and sustained data collection while fostering community involvement and awareness in the future. Perceptions of the usefulness of the application, and of different social media channels for communication, should be considered in the future development of data collection tools and mobile applications, to provide a greater incentive to participate and to update dog information in the long term. A low level of disease awareness among the high-school students was identified in the interviews, possibly due to insufficient information, both at school and in the media; it is critical to promote disease awareness through health education. Further studies using in-depth interviews should focus on enhancing rabies awareness, increasing owner responsibility, and supporting rabies prevention projects, as these factors are crucial for policymaking and effective public participation. Nevertheless, conducting data collection using this new alternative approach clearly increased students’ awareness of the importance of animal welfare and gave some students new experience of being part of research aimed at reducing rabies among humans and animals.

Study sites and participants

This study was conducted between June 2018 and October 2019 in areas where rabies is endemic and where the incidence of animal and human cases is high 30 . It formed part of a larger study conducted in Thailand between 2015 and 2018, which aimed to investigate the cultural and socioeconomic factors contributing to rabies outbreaks in Thailand 31 . Four provinces were included: Chiang Rai province in the north, Surin province in the northeast, Chonburi province in the east, and Songkhla province in the south (Fig.  2 ). Based on the past five years of rabies reports in Thailand 22 , 30 , we purposively surveyed high-school students dwelling in highly endemic areas of the four provinces. The inclusion criteria were: (1) students aged between 16 and 17 years who owned at least one dog and possessed a smartphone running the Android operating system, and (2) volunteer students whose parents consented to their participation in the study. In this study, dog ownership was defined as owning or caring for at least one dog at the residence. Students were eligible to participate voluntarily by registering dog data in the ‘Pupify’ application.

Figure 2

Maps showing: laboratory-confirmed rabies cases in animals in 2018 (Source: ThaiRabies.net: http://www.thairabies.net 1 ); and the four provinces included in the study: Chiang Rai, Surin, Chonburi, and Songkhla.

Data collection using the “Pupify” application

The ‘Pupify’ mobile application was developed to collect long-term data on dog population numbers and dynamics from dog owners, feeders, and the general public. It was developed by a group of university students from the Department of Computer Engineering, Chulalongkorn University 32 . The software architecture was three-tiered, comprising a client, an application server, and a database server. The client was initially built for Android OS using the Java language. The application server was developed using JavaScript; it responded to user requests and determined the types of data to be recorded in the database server. All processes underwent both software testing and acceptance testing by the developers and the research team to ensure that the application could function in real settings.

In this study, the application initially targeted high-school students who had a smartphone and were presumed to have good knowledge of rabies. The application was developed in collaboration with the Department of Livestock Development (DLD), Ministry of Agriculture and Cooperatives, Thailand, which is responsible for rabies control in Thailand. The application comprised three main sections: (i) demographic information about a dog’s owner, (ii) demographic information about the dogs, and (iii) routine information updates and report management. The first and second sections were recorded once for each dog and owner upon registration. Monthly updates were required to follow up on the status of registered dogs, e.g. still alive, moved away, dead, vaccination status, and sterilization status. Participants were reminded to provide at least monthly updates through the application and other communication channels, including the Line messaging application and telephone calls with the researchers.
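The register-once, update-monthly structure described above could be modeled roughly as follows. This is a hypothetical sketch: the field and class names are our own illustrations, not the actual ‘Pupify’ schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class DogStatus(Enum):
    """Dog status options mentioned in the text (illustrative naming)."""
    ALIVE = "alive"
    MOVED_OUT = "moved out"
    DEAD = "dead"

@dataclass
class MonthlyUpdate:
    month: str          # e.g. "2019-03"
    status: DogStatus
    vaccinated: bool    # rabies vaccination status
    sterilized: bool

@dataclass
class DogRecord:
    """Demographics are recorded once; updates accumulate monthly."""
    dog_id: str
    owner_id: str       # links to owner demographics, recorded at registration
    updates: list = field(default_factory=list)

    def add_update(self, update: MonthlyUpdate) -> None:
        self.updates.append(update)

# Example: one dog registered once, then given a monthly update.
dog = DogRecord(dog_id="D001", owner_id="S042")
dog.add_update(MonthlyUpdate("2019-03", DogStatus.ALIVE, vaccinated=True, sterilized=False))
print(len(dog.updates))  # 1
```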

Qualitative study for the evaluation of participatory research

The second part of the study was conducted once the dog survey had been completed. This qualitative study aimed to explore in detail the knowledge, perceptions, and awareness of dog owners with regard to rabies in dogs and humans. Semi-structured interviews were used to collect the information. First, participants from the survey were asked to volunteer for the qualitative study by registering their interest online. To ensure data diversity, the research team purposively selected participants whose duration of participation in the dog survey varied and who attended different schools. Second, they were invited to a one-to-one online interview with Thichumpa W. Each interview lasted 15–30 min and was recorded. Informed consent was obtained from all participants’ parents. The interviews were conducted between March and May 2021.

The study protocol was approved by the ethical committees of Mahidol University Central Institutional Review Board (MU-CIRB 2019/157.0606; August 2019). Written informed consent was obtained from all high school students who participated in the research. All the methods were performed in accordance with relevant guidelines and regulations.

Data analyses

Descriptive statistics were generated using SPSS version 23.0 33 . For the qualitative study, transcript data were evaluated by determining the frequency of answers given by interviewees and then coding keywords into pre-set themes 34 , namely rabies knowledge, rabies awareness, caring for owned dogs, perceptions of the project, and other suggestions. Content analysis and a thematic narrative approach were performed using QDA Miner Lite 35 .
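The coding step described above, counting keyword occurrences against pre-set themes, can be illustrated with a simple counter. The keywords and the codebook mapping here are invented examples for illustration, not the study's actual codebook:

```python
from collections import Counter

# Hypothetical codebook mapping keywords to the pre-set themes named in the text.
CODEBOOK = {
    "fatal": "rabies knowledge",
    "vaccine": "rabies awareness",
    "vaccinated": "rabies awareness",
    "leash": "caring for owned dogs",
    "certificate": "perceptions of the project",
}

def code_transcript(text: str) -> Counter:
    """Count theme occurrences by matching codebook keywords in a transcript."""
    counts = Counter()
    for word in text.lower().split():
        theme = CODEBOOK.get(word.strip(".,\"'"))
        if theme:
            counts[theme] += 1
    return counts

themes = code_transcript("Rabies is fatal. My dog is vaccinated and kept on a leash.")
print(themes["rabies knowledge"])       # 1
print(themes["caring for owned dogs"])  # 1
```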

Data availability

The data that support the findings of this study are available from the corresponding author (WP) upon reasonable request.

Thai Rabies Net. Thai rabies report . http://www.thairabies.net/trn/ (2012).

Thanapongtharm, W. et al. Spatial distribution and population estimation of dogs in Thailand: Implications for rabies prevention and control. Front. Vet. Sci. 8 , 790701. https://doi.org/10.3389/fvets.2021.790701 (2021).

Baum, F., MacDougall, C. & Smith, D. Participatory action research. J. Epidemiol. Community Health 60 , 854–857. https://doi.org/10.1136/jech.2004.028662 (2006).

Israel, B. A., Schulz, A. J., Parker, E. A. & Becker, A. B. Review of community-based research: Assessing partnership approaches to improve public health. Annu. Rev. Public Health 19 , 173–202. https://doi.org/10.1146/annurev.publhealth.19.1.173 (1998).

Morters, M. K. et al. Participatory methods for the assessment of the ownership status of free-roaming dogs in Bali, Indonesia, for disease control and animal welfare. Prev. Vet. Med. 116 , 203–208. https://doi.org/10.1016/j.prevetmed.2014.04.012 (2014).

Kasempimolporn, S., Sichanasai, B., Saengseesom, W., Puempumpanich, S. & Sitprija, V. Stray dogs in Bangkok, Thailand: Rabies virus infection and rabies antibody prevalence. Dev. Biol. (Basel) 131 , 137–143 (2008).

Hudson, E. G., Brookes, V. J. & Ward, M. P. Demographic studies of owned dogs in the Northern Peninsula Area, Australia, to inform population and disease management strategies. Aust. Vet. J. 96 , 487–494. https://doi.org/10.1111/avj.12766 (2018).

Tiwari, H. K., Robertson, I. D., O’Dea, M. & Vanak, A. T. Author correction: Demographic characteristics of free-roaming dogs (FRD) in rural and urban India following a photographic sight-resight survey. Sci. Rep. 10 , 3757. https://doi.org/10.1038/s41598-020-58147-8 (2020).

Wilson, P. J., Oertli, E. H., Hunt, P. R. & Sidwa, T. J. Evaluation of a postexposure rabies prophylaxis protocol for domestic animals in Texas: 2000–2009. J. Am. Vet. Med. Assoc. 237 , 1395–1401. https://doi.org/10.2460/javma.237.12.1395 (2010).

Knobel, D. L. et al. Rabies Scientific Basis of the Disease and Its Management Vol. 17, 591–615 (Elsevier Inc, 2013).

Arechiga Ceballos, N., Karunaratna, D. & Aguilar Setien, A. Control of canine rabies in developing countries: Key features and animal welfare implications. Rev. Sci. Tech. 33 , 311–321. https://doi.org/10.20506/rst.33.1.2278 (2014).

Raynor, B. et al. Movement patterns of free-roaming dogs on heterogeneous urban landscapes: Implications for rabies control. Prev. Vet. Med. 178 , 104978. https://doi.org/10.1016/j.prevetmed.2020.104978 (2020).

Smith, L. M. et al. The effectiveness of dog population management: A systematic review. Animals (Basel) https://doi.org/10.3390/ani9121020 (2019).

Ballantyne, K. C. Separation, confinement, or noises: What is scaring that dog?. Vet. Clin. N. Am. Small Anim. Pract. 48 , 367–386. https://doi.org/10.1016/j.cvsm.2017.12.005 (2018).

Astorga, F., Poo-Munoz, D. A., Organ, J. & Medina-Vogel, G. Why let the dogs out? Exploring variables associated with dog confinement and general characteristics of the free-ranging owned-dog population in a peri-urban area. J. Appl. Anim. Welf. Sci. 25 , 311–325. https://doi.org/10.1080/10888705.2020.1820334 (2022).

Yurachai, O., Hinjoy, S. & Wallace, R. M. An epidemiological study of suspected rabies exposures and adherence to rabies post-exposure prophylaxis in Eastern Thailand, 2015. PLoS Negl. Trop. Dis. 14 , e0007248. https://doi.org/10.1371/journal.pntd.0007248 (2020).

The World Organization for Animal Health (WOAH). Rabies . https://www.woah.org/en/disease/rabies/#ui-id-2 (2024).

World Health Organization (WHO). Rabies . https://www.who.int/news-room/fact-sheets/detail/rabies#:~:text=Rabies%20infects%20mammals%2C%20including%20dogs,rabies%20is%20virtually%20100%25%20fatal (2022).

World Health Organization (WHO). One Health . https://www.who.int/news-room/fact-sheets/detail/one-health (2024).

Department of Disease Control-Ministry of Public Health. Rabies . https://ddc.moph.go.th/disease_detail.php?d=25 (2022).

Department of Disease Control & Ministry of Public Health. Rabies exposure report system (R36 database in Thai language) . http://r36.ddc.moph.go.th/r36/home or http://odpc9.ddc.moph.go.th/EOC/eoc.html (2022).

Department of Livestock Development-Ministry of Agriculture and Cooperatives. Rabies situation report in animals . https://dld.go.th/th/index.php/th/service-people/infographic-menu/64-hot-issue/rabies (2021).

National Statistical Office Thailand. Size and structure of the population report . https://www.nso.go.th/nsoweb/nso/statistics_and_indicators?order=&search=&impt_side=&impt_branch=300&impt_group=0&impt_subgroup=&year=2563&announcement_date= (2020).

Statista Research Department. Number of smartphone users in Thailand from 2017 to 2020 with a forecast through 2026. https://www.statista.com/statistics/467191/forecast-of-smartphone-users-in-thailand/ (2021).

Tomlinson, M. et al. The use of mobile phones as a data collection tool: A report from a household survey in South Africa. BMC Med. Inform. Decis. Mak. 9, 51. https://doi.org/10.1186/1472-6947-9-51 (2009).

Samaila, M. G., Neto, M., Fernandes, D. A. B., Freire, M. M. & Inácio, P. R. M. Challenges of securing Internet of Things devices: A survey. Secur. Privacy. https://doi.org/10.1002/spy2.20 (2018).

Conan, A. et al. Population dynamics of owned, free-roaming dogs: Implications for rabies control. PLoS Negl. Trop. Dis. 9, e0004177. https://doi.org/10.1371/journal.pntd.0004177 (2015).

Totton, S. C. et al. Stray dog population demographics in Jodhpur, India following a population control/rabies vaccination program. Prev. Vet. Med. 97, 51–57. https://doi.org/10.1016/j.prevetmed.2010.07.009 (2010).

Komol, P., Sommanosak, S., Jaroensrisuwat, P., Wiratsudakul, A. & Leelahapongsathon, K. The spread of rabies among dogs in Pranburi District, Thailand: A metapopulation modeling approach. Front. Vet. Sci. 7, 570504. https://doi.org/10.3389/fvets.2020.570504 (2020).

Bureau of Epidemiology, Department of Disease Control & Ministry of Public Health. Rabies annual reports (in Thai language). https://ddc.moph.go.th/disease_detail.php?d=25 (2022).

Premashthira, S. et al. The impact of socioeconomic factors on knowledge, attitudes, and practices of dog owners on dog rabies control in Thailand. Front. Vet. Sci. 8, 699352. https://doi.org/10.3389/fvets.2021.699352 (2021).

Luangcharoenpong, S. Application for Dog Census. Chulalongkorn University (2018).

IBM Corp. IBM SPSS Statistics for Windows [Computer software]. https://www.ibm.com/spss (2022).

Hsieh, H. F. & Shannon, S. E. Three approaches to qualitative content analysis. Qual. Health Res. 15, 1277–1288. https://doi.org/10.1177/1049732305276687 (2005).

Provalis Research. QDA Miner Lite. https://provalisresearch.com/products/qualitative-data-analysis-software/freeware/ (2020).

Acknowledgements

We cordially thank all the high school students who participated in our surveys. We also thank Siwakorn Luengcharoenpong and the teams from the Department of Computer Engineering, Chulalongkorn University, Bangkok, Thailand for software development and consultation.

This study was funded by the National Science and Technology Development Agency (NSTDA), Thailand (Grant ID. P-18-51758) and the Disease Control Department, Ministry of Public Health, Thailand. In addition, this research was funded in part by the Wellcome Trust [220211]. For the purpose of Open Access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission.

Author information

Authors and Affiliations

Department of Tropical Hygiene, Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

Weerakorn Thichumpa & Wirichada Pan-ngum

Department of Clinical Sciences and Public Health, and the Monitoring and Surveillance, Center for Zoonotic Diseases in Wildlife and Exotic Animals, Faculty of Veterinary Science, Mahidol University, Nakhon Pathom, Thailand

Anuwat Wiratsudakul & Sarin Suwanpakdee

Biophysics Group, Department of Physics, Faculty of Science, Mahidol University, Bangkok, Thailand

Chayanin Sararat & Charin Modchang

Centre of Excellence in Mathematics, MHESI, Bangkok, Thailand

Charin Modchang

Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand

Setha Pan-ngum & Nakornthip Prompoon

Department of Livestock Development, Ministry of Agriculture and Cooperatives, Bangkok, Thailand

Onpawee Sagarasaeranee, Sith Premashthira & Weerapong Thanapongtharm

Songkhla Provincial Livestock Office, Muang, Songkhla, Thailand

Arun Chumkaeo

Mahidol Oxford Tropical Medicine Research Unit (MORU), Faculty of Tropical Medicine, Mahidol University, Bangkok, Thailand

Wirichada Pan-ngum

Contributions

Conceptualization and Methodology: WK.T., W.P., C.M., A.W. and WP.T. Mobile application: S.P. and N.P. Survey and data collection: WK.T., S.S., C.S., O.S., S.PR., WP.T. and A.C. Formal analysis: WK.T. and W.P. Project administration and data management: WK.T. Writing–original draft: WK.T. and W.P. Writing–review & editing: All authors. The authors declare consent for publication.

Corresponding author

Correspondence to Wirichada Pan-ngum .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Tables

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Thichumpa, W., Wiratsudakul, A., Suwanpakdee, S. et al. Study of dog population dynamics and rabies awareness in Thailand using a school-based participatory research approach. Sci Rep 14 , 20477 (2024). https://doi.org/10.1038/s41598-024-71207-7

Received : 10 May 2024

Accepted : 26 August 2024

Published : 03 September 2024

DOI : https://doi.org/10.1038/s41598-024-71207-7

Keywords

  • Dog population
  • Rabies awareness
  • School-based participatory research (SBPR)
  • Public health
