Last week I was watching with great interest how people in the U.S. were reacting to the potential shutdown of TikTok. What fascinates me most is how the debate over a social media platform has transformed into a larger conversation about personal freedom, data ownership, and a growing distrust in government. One of the most striking aspects of this phenomenon is the symbolic defiance shown by some users migrating to Rednote, another Chinese-owned platform. For many, this wasn’t simply about finding a TikTok alternative – it was a protest against what they perceive as government overreach in determining who controls their data.
As a researcher, part of my role is to step back from individual events and examine the bigger picture, identifying broader patterns and connections. Reflecting on this trend of app migration, I’ve been considering its implications beyond social media – particularly in the realms of healthcare, health data, and the use of AI in managing that data. I believe the behaviours we’re observing provide deeper insights into how people perceive their personal information, the digital and technological systems that manage it, and their expectations for control and autonomy.
In a digital age, this is about more than just social media. I believe it reflects larger trends in how people value agency over their data and how their trust – or lack of trust – in institutions like governments and corporations shapes their decision-making. Importantly, it also underscores a critical insight: people are more willing to share their data when they perceive it provides value to them personally, not just value to a larger system or institution.
This distinction is vital in contexts like healthcare. When individuals feel their data is being used in ways that align with their personal priorities – whether that’s improving their care, tailoring treatments, or offering meaningful insights – they are far more likely to engage and share. On the other hand, when data collection feels abstract or primarily designed to serve institutional or governmental priorities, resistance grows.
These trends hold critical lessons for how policymakers, health tech companies and other healthcare organizations need to adapt. Transparency, empowerment, and ethical practices must align with what individuals perceive as valuable in their own lives. By focusing on this connection, we can create systems that not only build trust but also unlock the potential of data to drive meaningful change.
The Intersection of Data and Distrust
We’re living in a time of rapid change, where the intersection of technology, data, and trust has never been more complex. Environics’ long history of studying social values across Canada and the U.S. provides us with valuable insights into these shifts. Trends in social values aren’t just snapshots of the present – they help us understand how we got here and, more importantly, where we might be headed.
Three key Social Values measured by Environics Research provide insight into how people think about data privacy, control, and trust in institutions: Control of Privacy, Rejection of Authority, and Personal Control.
I’m not a regular TikTok user (I worry I’d get too addicted), but this topic piqued my interest. As a result, I found myself scrolling through TikTok, watching countless videos where users shared their thoughts on migrating to Rednote. The language in these videos reveals deep frustration with the government. Many users express the belief that their leaders prioritize maintaining power and wealth over serving the interests of ordinary citizens. This perception fosters a narrative that government actions are more about self-preservation and catering to elite agendas than demonstrating genuine concern for the public good.
These users are also more likely to view a TikTok ban as a tool for suppressing speech that challenges or diverges from the narratives endorsed by the government, rather than a decision driven solely by concerns for national security. Many see such actions as favoring certain agendas while stifling alternative voices, fueling perceptions of institutional self-interest over public interest.
This sentiment reflects a broader and growing skepticism: trust in institutions across the U.S. and Canada has been declining over the last few years.
[Chart: Canadian Trust in Parliament, three different measures. Survey question: “To what extent do you trust Parliament?” on a scale from 1 (“not at all”) to 7 (“a lot”). Source: The Environics Institute, Trust in Institutions, Victoria Forum 2024.]
This distrust in institutions is fueling defiance. For many users, moving to platforms like Rednote represents a way to reclaim control over their digital lives. I believe it’s less about the specifics of Rednote itself (especially since most people migrating cannot read or understand Mandarin or Cantonese) and more about asserting autonomy in a world where they feel their agency is being consistently undermined. The social contract between government and its citizens is seen as broken, with leadership failing to act in the public’s best interest.
What This Means for Healthcare and AI
The same dynamics of trust and control play out in healthcare, particularly in how AI tools handle data privacy and sharing. Just as social media users resist government intervention, patients hesitate to engage with AI-driven healthcare solutions when they fear their data could be misused, sold, or accessed without consent. This distrust – whether directed at AI technologies, healthcare systems, or private companies – creates significant barriers to leveraging the transformative potential of AI in healthcare.
AI tools have the capacity to revolutionize healthcare by predicting outcomes, personalizing treatments, and streamlining processes. However, these innovations rely on access to large-scale health data, and without trust, this data will remain out of reach. Patients need to believe that AI is not just efficient but also ethical and designed to prioritize their well-being and protect their sensitive information.
The Challenge of Sensitive Data in Healthcare
This issue is particularly pronounced when we consider the types of data that patients might hesitate to share, even in a healthcare context. While hospitals routinely collect essential data for treatment, there are instances where patients might withhold or limit sharing additional information due to concerns about how it could be used.
For example, a patient recovering from surgery might choose not to share wearable data tracking their activity levels if they fear it could be misinterpreted to judge their recovery progress unfairly or lead to increased insurance premiums. Similarly, individuals with diabetes might hesitate to connect their continuous glucose monitor data to their medical record, concerned that it could be used to question their treatment adherence, raise their premiums, or restrict access to government programs.
This hesitancy isn’t limited to wearable data either. Patients may withhold mental health histories due to fears of stigma or biased treatment. Genetic testing results might not be disclosed if patients worry about how the information could be used by insurers or even employers. Substance use, dietary habits, and reproductive health information are other sensitive areas where patients may feel particularly vulnerable. For instance, a patient might avoid revealing past substance use or contraceptive choices if they suspect it could influence their eligibility for certain treatments or government assistance.
These examples highlight a core challenge for AI in healthcare: while it offers the promise of improved outcomes, personalized treatments, and operational efficiencies, it also amplifies patient concerns about data privacy, control, and misuse.
The potential for these tools to transform healthcare is enormous, but it relies on patients being willing to trust that their information will be handled ethically and transparently.
Overcoming Barriers
To overcome these barriers, healthcare organizations must prioritize trust-building initiatives. This starts with transparency: patients need clear explanations about why specific data is being collected, how it will be used, and what safeguards are in place to protect it. Ethical practices are equally crucial – patients must feel confident that their data will only be used to benefit their care, not to penalize them in any way. Finally, empowering individuals to have control over what data they share and how it’s shared is essential. These steps are not just good practices; I believe they are necessary to unlock the full potential of AI in healthcare and ensure its benefits are accessible to everyone.
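To make “control over what data they share” concrete, here is a minimal sketch of what an opt-in consent model could look like in code. The data categories, purposes, and class names are illustrative assumptions of mine, not a reference to any real system; the point is the design choice that the default is to share nothing, so every use of a patient’s data requires an explicit, revocable grant.

```python
from dataclasses import dataclass, field
from enum import Enum


class DataCategory(Enum):
    # Hypothetical categories for illustration only.
    WEARABLE = "wearable"
    MENTAL_HEALTH = "mental_health"
    GENETIC = "genetic"
    MEDICATION = "medication"


class Purpose(Enum):
    DIRECT_CARE = "direct_care"
    RESEARCH = "research"
    INSURANCE = "insurance"


@dataclass
class ConsentPreferences:
    """Per-patient record of which data categories may be used for which purposes."""

    # Default is an empty grant table: patients opt in, rather than opt out.
    grants: dict[DataCategory, set[Purpose]] = field(default_factory=dict)

    def allow(self, category: DataCategory, purpose: Purpose) -> None:
        """Record an explicit grant for one category/purpose pair."""
        self.grants.setdefault(category, set()).add(purpose)

    def revoke(self, category: DataCategory, purpose: Purpose) -> None:
        """Withdraw a previously given grant; revocation is always possible."""
        self.grants.get(category, set()).discard(purpose)

    def permits(self, category: DataCategory, purpose: Purpose) -> bool:
        """Anything not explicitly granted is denied."""
        return purpose in self.grants.get(category, set())


# Example: a patient shares wearable data for their own care, but not with insurers.
prefs = ConsentPreferences()
prefs.allow(DataCategory.WEARABLE, Purpose.DIRECT_CARE)
assert prefs.permits(DataCategory.WEARABLE, Purpose.DIRECT_CARE)
assert not prefs.permits(DataCategory.WEARABLE, Purpose.INSURANCE)
```

A real implementation would layer audit logging and legal bases on top of this, but even a toy model makes the opt-in default visible and testable.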
Balancing Control and Value
Sensitive data, including wearable information, highlights the critical balance that healthcare organizations and companies must achieve to build trust and encourage patient participation. These principles apply broadly to all types of sensitive data – whether it’s wearable data, mental health histories, genetic information, or medication records:
Transparency in Data Use: Providers and AI developers must clearly explain how data will be used, stored, and protected. Patients are more likely to share their data when they understand its purpose, how it will benefit them, and the safeguards in place to prevent misuse.
Empowering Patients: Offering patients control over what data they share and how it’s used fosters trust and encourages participation.
Demonstrating Tangible Benefits: Patients are more willing to share data if they see clear and immediate benefits. Ensuring patients understand how their data improves their care is essential to building trust.
Ethical Safeguards: Establishing and enforcing strict ethical guidelines is critical for all types of sensitive data. Patients need assurance that their information – whether it’s related to substance use, reproductive health, or chronic conditions – will not be used to penalize them, such as impacting insurance premiums or access to care.
Data Minimization and Anonymization: Collecting only the data necessary for the intended purpose and using techniques like anonymization can help address patient concerns about misuse or breaches. This approach reassures patients that their data is being handled responsibly and securely (a minimal sketch follows this list).
Equitable Access: Ensuring that the benefits of data-driven healthcare extend to all patients, regardless of socioeconomic status, can demonstrate that systems are designed to reduce disparities, not reinforce them.
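As a rough illustration of the data-minimization and anonymization principle above, the sketch below keeps only the fields a given analysis actually needs and replaces the direct identifier with a salted one-way hash. The field names and the hashing approach are assumptions made for the example; a real deployment would rely on vetted de-identification methods (such as HIPAA’s Safe Harbor or expert-determination standards) rather than this toy code.

```python
import hashlib

# Illustrative record; field names are assumptions, not a real schema.
patient_record = {
    "name": "Jane Doe",
    "health_card_number": "1234-567-890",
    "date_of_birth": "1984-03-17",
    "glucose_readings": [5.8, 6.1, 7.0],
    "activity_minutes": 42,
    "postal_code": "M5V 2T6",
}

# Only the fields this particular analysis actually needs.
FIELDS_NEEDED = {"glucose_readings", "activity_minutes"}


def pseudonymize_id(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash.

    Note: hashing alone is not anonymization; the salt must stay secret,
    and re-identification risk still needs formal assessment.
    """
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:16]


def minimize(record: dict, fields_needed: set[str], salt: str) -> dict:
    """Keep only needed fields and swap the identifier for a pseudonym."""
    out = {k: v for k, v in record.items() if k in fields_needed}
    out["patient_pseudonym"] = pseudonymize_id(record["health_card_number"], salt)
    return out


shared = minimize(patient_record, FIELDS_NEEDED, salt="per-project-secret")
# -> {'glucose_readings': [...], 'activity_minutes': 42, 'patient_pseudonym': '...'}
```

The key point is that minimization happens before the data leaves the clinical system: downstream consumers never see fields they did not need.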
By applying these principles consistently across all types of sensitive data, healthcare organizations can build the trust needed to unlock the transformative potential of AI and other innovations. This trust isn’t just about privacy – it’s about creating a sense of partnership with patients where their data is used to improve their care in ways that are empowering.
The Path Forward
The TikTok-Rednote migration is a microcosm of a larger cultural reckoning about data, trust, and autonomy. Whether in social media or healthcare, people want systems that respect their agency and align with their values. For healthcare organizations, the message is clear: rebuilding trust starts with giving people control over their data, being transparent about its use, and demonstrating how it drives meaningful outcomes.
I believe that in a world where trust is fragile, putting people first isn’t just the ethical choice – it’s really the only choice available to us.