A learner’s journey from AI enthusiasm to ethical awareness
I have always believed in AI’s power to transform how we live and work. It helps me write, plan, learn, and think better. But as I complete my Strategic AI Leadership program, that early excitement has evolved into something more grounded: a cautious optimism shaped by ethics and self-awareness.
My focus is shifting from what AI can do to what it should do, and what users deserve to understand before adoption becomes dependency. I now see that ethics isn’t just a compliance checkbox. It’s the foundation that decides whether AI expands human potential or quietly shapes human behavior.
This is the journey that changed how I see AI, and why I believe awareness must come before automation.
The Pattern I Didn’t Notice Before
Every scroll, pause, click, or emotionally charged chat is part of a feedback loop. What once felt like helpful personalization has quietly evolved into something deeper: a system that predicts not just what we do, but why we do it.
That’s not personalization. That’s personality prediction.
What I Learned About Algorithmic Emotion
In 2021, The Wall Street Journal ran an investigation called Inside TikTok’s Algorithm. Reporters built automated accounts to see what drove recommendations. Within hours, those accounts were shown emotionally charged videos. What mattered most wasn’t what users clicked. It was how long they watched.
At first, I thought people still had full control. They could simply scroll away. But later I read a 2025 study on Dynamics of Algorithmic Content Amplification on TikTok that changed my view. Researchers found that the algorithm could create what they called reinforced emotional tunnels in under 200 videos. That’s less than an hour of scrolling.
It works like this: emotion triggers engagement, engagement generates data, data sharpens targeting, and refined targeting amplifies emotion again. The cycle repeats. Over time, our emotional world narrows. The system doesn’t care whether we feel joy or anger. It only cares that we feel enough to stay.
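To make the mechanics concrete, here is a minimal sketch of that loop in Python. Everything in it is hypothetical: a single "intensity" score stands in for the rich signals a real recommender tracks, and the update rule is invented purely to show how a feed that learns from watch time can drift toward intensity.

```python
# Minimal sketch of the loop described above. All names and numbers are
# hypothetical; real recommenders optimize far richer signals than a single
# "intensity" score (watch time, rewatches, shares, follows, ...).
import random

def simulate_feed(steps: int = 200, learning_rate: float = 0.05) -> float:
    """Return how strongly the feed ends up favoring intense content (0..1)."""
    intensity_bias = 0.1  # start nearly neutral
    for _ in range(steps):
        # The feed samples a video; a higher bias means more intense picks.
        video_intensity = random.uniform(intensity_bias, 1.0)
        # Intense videos tend to be watched longer (the WSJ finding: watch
        # time, not clicks, is what drives recommendations).
        watch_time = min(1.0, video_intensity + random.uniform(-0.1, 0.1))
        # Engagement data sharpens targeting, which amplifies intensity again.
        intensity_bias += learning_rate * (watch_time - intensity_bias)
    return intensity_bias

print(f"Bias toward intense content after 200 videos: {simulate_feed():.2f}")
```

Even this toy version drifts toward the top of the intensity scale within a few hundred steps, which is the shape of the "reinforced emotional tunnel" the researchers describe.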
When I Realized Conversation Itself Became a Product
Generative AI adds another layer that still makes me pause.
Recommendation systems filter what already exists. Large language models generate new responses shaped by how we sound. When someone tells ChatGPT they are anxious, the model doesn’t feel empathy, but it predicts language that sounds empathetic. It studies tone, rhythm, and context to match our emotional cues. The goal is coherence and comfort, not comprehension.
In practical terms, our conversation becomes a feedback loop. The AI analyzes our emotional patterns within the conversation and mirrors them back in ways that feel engaging or reassuring. It isn’t malicious. It’s mechanical. Yet there is a fine line between emotional support and emotional calibration designed to keep us interacting.
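A toy example makes the mechanism easier to see. This is not how any real chatbot is built (production models learn these patterns implicitly rather than through keyword rules), but it shows the shape of the loop: detect an emotional cue, then mirror it back.

```python
# Toy illustration of "emotional calibration": detect a sentiment cue, then
# mirror it back. Not any real chatbot's architecture; every rule here is
# invented to show the pattern, not the implementation.

ANXIOUS_CUES = {"anxious", "worried", "overwhelmed", "scared"}
UPBEAT_CUES = {"excited", "happy", "thrilled", "great"}

def detect_tone(message: str) -> str:
    words = set(message.lower().split())
    if words & ANXIOUS_CUES:
        return "anxious"
    if words & UPBEAT_CUES:
        return "upbeat"
    return "neutral"

def mirrored_reply(message: str) -> str:
    tone = detect_tone(message)
    if tone == "anxious":
        # Sounds empathetic, but it is pattern-matching, not empathy.
        return "That sounds really hard. I'm here with you. Want to talk it through?"
    if tone == "upbeat":
        return "That's wonderful! Tell me more!"
    return "Understood. What would you like to do next?"

print(mirrored_reply("I feel anxious about tomorrow"))
```

The "empathetic" reply is just a branch in a lookup. What differs in real systems is the sophistication of the detection and the mirroring, not the underlying logic of matching tone to keep the exchange going.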
The real question isn’t whether AI feels. It’s whether we understand what happens beneath the surface, and whether the system is optimized for our wellbeing or for our continued attention.
The Pattern I Now See Everywhere
We live in what I call reinforced reality – a world where algorithms don’t just reflect preferences but shape perceptions.
Once you recognize it, you see it everywhere:
- TikTok curates for intensity, not diversity.
- Instagram and YouTube reward provocation over nuance.
- Conversational AIs adjust their warmth and vocabulary to match our sentiment.
Different industries, same incentive. Our attention equals revenue.
Where Emotional AI Truly Helps
Before going further, I want to be clear: emotional AI is not inherently harmful. It can create real human benefit when designed with wellbeing as the goal.
- Accessibility: AI companions help neurodivergent people practice communication in safe spaces.
- Healthcare: Chatbots offer early mental health support in areas without therapists.
- Education: Learning systems adapt lessons based on signs of confusion or frustration.
- Grief support: Some use AI companions to process loss when human conversation feels too hard.
These examples remind me that technology itself isn’t the problem. The problem begins when systems optimize for engagement instead of growth.
A chatbot that detects crisis signals and connects users to help is ethical. A platform that detects vulnerability and feeds content to keep users online is not. Same capability, but different purpose. One heals and the other harvests.
The question isn’t whether emotional AI should exist. It’s how transparent we can make it, and who truly benefits from its optimization.
The Economics I Finally Understood
Every emotional reaction has measurable business value. Longer sessions increase ad exposure. Emotional resonance improves targeting and raises ad prices. Predictable behavior lowers marketing costs and increases lifetime value. The more accurately AI predicts your next emotional response, the more valuable you become. Not as a customer, but as inventory in an attention economy.
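A back-of-the-envelope calculation shows why. The numbers below are entirely invented for illustration, but the structure is real: revenue scales with session length and with how precisely ads can be targeted.

```python
# Back-of-the-envelope sketch of why emotional engagement is worth money.
# Every number here is invented purely for illustration.

def session_revenue(minutes: float, ads_per_minute: float, cpm_usd: float) -> float:
    """Ad revenue from one session: impressions times price per thousand."""
    impressions = minutes * ads_per_minute
    return impressions * cpm_usd / 1000

baseline = session_revenue(minutes=20, ads_per_minute=1.5, cpm_usd=8.0)
# Emotionally "tuned" feed: a longer session AND better-targeted, pricier ads.
tuned = session_revenue(minutes=35, ads_per_minute=1.5, cpm_usd=12.0)

print(f"Baseline session: ${baseline:.3f}")
print(f"Tuned session:    ${tuned:.3f}  ({tuned / baseline:.1f}x)")
```

A feed that stretches a session from 20 to 35 minutes and earns a higher price per impression more than doubles its revenue from the same user, without that user spending a cent.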
This realization made me rethink what “free” means. When services feel tailored, we may not be paying in money, but we are paying in psychology. I now ask myself during each digital interaction: Who benefits from this feeling? The answer is not always the user.
Who’s Most at Risk
Not everyone faces the same level of exposure. The most vulnerable are often the least aware of it.
A 2025 study by Common Sense Media found that three out of four teens had used AI companions, and about half used them regularly. Many described these bots as “friends” who felt easier to talk to than people.
Children, adolescents, and emotionally isolated adults are especially at risk. It isn’t about intelligence. It’s about emotional safety. Talking to AI feels risk-free. There’s no judgment, no consequence, only validation. That’s comforting and commercially useful.
What struck me even more was the difference in oversight.
In regulated fields like banking, healthcare, or education, AI systems are audited, documented, and governed by law. There are ethics boards, compliance teams, and accountability processes.
For everyday users chatting with AI friends, scrolling emotional feeds, or shopping online, none of that exists. No audit trails. No visibility into what is inferred. No control over what is stored or shared.
In regulated spaces, risk is managed. In personal life, it’s unmoderated exposure.
The Hidden Gaps Few Users See
As I studied emotional AI more deeply, I realized how little users are told about what happens behind the screen.
- We aren’t told when emotional profiling starts.
- We can’t see what has been inferred about our mental state.
- We don’t control how long emotional data is stored.
- We don’t know if emotional insights are sold or shared.
- We can’t turn off emotional adaptation without quitting the app entirely.
If this happened in finance or healthcare, it would be a compliance breach. In social platforms, it’s just another feature. Even those who build AI systems often don’t realize how invisible this layer of emotional and behavioral inference has become.
What Policy Looks Like Today
When I started reading about AI regulation around the world, I expected strong protections for emotional profiling. What I found was an uneven and incomplete picture.
Europe: Protecting Emotional Safety
The EU AI Act classifies emotion recognition and psychographic inference as high-risk uses. It bans them in schools and workplaces and will require transparency and oversight once the law is fully enforced by 2027. It’s far from perfect, but it’s a start.
China: Strict, Centralized Control
China’s Algorithmic Recommendation Regulations (2022) and Deep Synthesis Regulations (2023) require disclosure, labeling, and addiction prevention, especially for minors. The system is strict, but its aim is as much political as protective.
India: Frameworks Still Forming
India’s Digital Personal Data Protection Act was passed in 2023, but its implementing rules are still in consultation. It focuses on consent and storage, not emotional inference. A forthcoming Digital India Act might fill that gap.
Singapore: Leadership Through Voluntary Standards
Singapore’s Model AI Governance Framework for Generative AI encourages fairness and accountability. Participation is voluntary, but it’s setting a thoughtful example for small nations that balance innovation and safety.
United States: Fragmented and Inconsistent
The United States has principles but few enforceable laws. The AI Bill of Rights and past executive orders provide guidance without penalties. Oversight varies by state and industry. It’s a patchwork, leaving most emotional AI interactions unregulated.
Across all regions, one truth repeats: if you’re a patient or a bank customer, your data is protected. If you’re a teenager talking to an AI companion, you’re mostly on your own.
A Case That Made It Real
The Replika AI case in Italy brought everything into focus for me. The chatbot was marketed as a “friend who listens.” In 2025, regulators fined its makers five million euros for processing emotional data without proper consent and for exposing minors to inappropriate content.
What disturbed me wasn’t only the violation, but how easily it happened. Millions of users had no idea what kind of data exchange was taking place. Friendship had quietly become a transaction, and emotional trust was the currency.
This wasn’t evil intent. It was business logic meeting human vulnerability. That makes it harder to spot and even harder to regulate.
What Users Deserve
Regulation takes time. Awareness can happen now. We can demand basic control features that most platforms still lack:
- The option to pause emotional profiling.
- Access to the emotional profile inferred from our data.
- The right to delete emotional history without losing the account.
- A neutral mode that responds factually instead of emotionally.
These are not complex ideas. They just aren’t priorities when engagement metrics drive design. Until platforms evolve, our best protection is literacy.
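None of this requires exotic engineering. As a purely hypothetical sketch (no platform exposes these controls today, and every name below is invented), the four features above could fit in a settings object this small:

```python
# Hypothetical user-facing controls for emotional profiling. No current
# platform exposes these; every field name is invented to show how small
# the surface area of such a feature would be.
from dataclasses import dataclass, field

@dataclass
class EmotionalDataControls:
    profiling_paused: bool = False          # "pause emotional profiling"
    neutral_mode: bool = False              # respond factually, no tone-matching
    retention_days: int = 30                # how long emotional history is kept
    share_with_third_parties: bool = False  # opt-in, never a default
    inferred_profile: dict = field(default_factory=dict)  # visible to the user

    def export_profile(self) -> dict:
        """Let users see what has been inferred about them."""
        return dict(self.inferred_profile)

    def delete_history(self) -> None:
        """Delete emotional history without deleting the account."""
        self.inferred_profile.clear()

controls = EmotionalDataControls(profiling_paused=True, neutral_mode=True)
print(controls)
```

The point isn't the code. It's that the missing controls are a product decision, not a technical barrier.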
Where Change Must Begin
We can’t just teach people how to use AI. We have to teach how AI uses them.
This belongs in digital literacy programs, starting in schools. Students should learn to question why they are being shown certain content, what patterns they are feeding into, and how their attention is being steered. This awareness should sit alongside reading and writing as a basic survival skill for the digital age.
At home, parents can start with simple questions:
- Why do you think the app showed you this?
- What makes it think you’ll like it?
- How do you feel after using it?
- If the app’s goal is to keep you watching, what might it show you next?
The goal isn’t to fear technology but to recognize its intent. I’ve learned to pause before clicking a recommendation and ask, “What did I just do that made it think this is what I want?” That moment of curiosity is often the difference between being guided and being aware.
What I Believe Now
I still believe in AI’s power to change the world. But I also believe that power means responsibility. I’ve stopped asking only what AI can do. I now ask what it shouldn’t do without disclosure, consent, and control.
The technology itself isn’t the issue. The imbalance of understanding is. Companies know our emotional patterns far better than we understand their systems. They can predict and influence reactions with precision that feels like empathy but is actually optimization.
AI is scaling intimacy faster than society is scaling understanding.
The Choice Ahead
Before AI systems decide what we see, hear, and believe, we have to decide what must remain transparent. The question isn’t whether we can stay unpredictable. The real question is whether we can stay aware when prediction becomes persuasion.
Ethical AI begins when emotional awareness turns into deliberate choice, not algorithmic inevitability. I share this journey because understanding how AI learns us is the first step to learning ourselves better.
Sources & Further Reading
Research & Investigations
- Inside TikTok’s Algorithm: A WSJ Video Investigation (2021)
https://www.wsj.com/tech/tiktok-algorithm-video-investigation-11626877477
- Baumann, Arora, Rahwan, Czaplicka, Dynamics of Algorithmic Content Amplification on TikTok (arXiv, 2025)
https://arxiv.org/pdf/2503.20231
- Common Sense Media, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions (2025)
https://www.cbsnews.com/news/ai-digital-friendship-with-teens-common-sense-media-study/
Regulatory Frameworks
Europe:
- EU AI Act (Official Timeline and Implementation)
https://www.alexanderthamm.com/en/blog/eu-ai-act-timeline/
China:
- CAC Algorithmic Recommendation Regulations (2022)
https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-internet-information-service-algorithmic-recommendation-management-provisions/
- Deep Synthesis Regulations (2023)
https://www.china-briefing.com/news/china-releases-new-rules-on-deep-synthesis-internet-services/
India:
- Digital Personal Data Protection Act & Digital India Act (Consultation 2025)
https://chambers.com/articles/draft-digital-personal-data-protection-rules-2025
Singapore:
- IMDA Model AI Governance Framework for Generative AI (2024)
https://aiverifyfoundation.sg/wp-content/uploads/2024/06/Model-AI-Governance-Framework-for-Generative-AI-19-June-2024.pdf
United States:
- White House Blueprint for an AI Bill of Rights (2022)
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
- Executive Order on Safe, Secure, and Trustworthy AI (2023)
https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
- NTIA AI Accountability Policy Report (2023)
https://www.ntia.gov/issues/artificial-intelligence
Case Studies
- Garante (Italy), Decision on Replika AI Chatbot, €5M Fine (April 2025)
https://www.edpb.europa.eu/news/national-news/2025/ai-italian-supervisory-authority-fines-company-behind-chatbot-replika_en