Inner Harbor
--
An empathetic AI companion for Highly Sensitive People
AI Fine-Tuning | IoT Product | Future Design
This project explores the emotional regulation challenges faced by highly sensitive people (HSPs) — individuals who are more responsive to external stimuli and more prone to anxiety, fatigue, and stress. Grounded in Affective Computing and Emotional Design, it examines how emerging technologies can be integrated with emotional design to provide timely emotional support for HSPs.
I developed an AI-based emotional companion system tailored for highly sensitive individuals. It recognizes emotions in real time through physiological data and initiates natural, human-like conversations. By offering friend-like emotional companionship, the project aims to help HSPs maintain a more stable emotional state and ultimately enhance overall social well-being.
Tools: Python, Google Colab
GitHub Link:
https://github.com/Roqi-zhang/Hypersensitivity-AI-Project
CLOUD
A warm response at every moment you need it.
01 Background Research
1.1 What is an HSP?
1.2 Feature Analysis
1.3 Summary
02 First-hand Research
2.1 Interview
- Q1: What features would you like this product to have?
- Q2: What kind of design would make the product feel highly usable and appealing to you?
- Q3: In your view, how does this concept differ from similar products already on the market?
- Q4: Could you share an interesting idea or suggestion you would like to see in this product?
2.2 Summary
03 Market Research
04 Conclusion
05 Training Tools and Workflow
06 Physiological Data Monitoring
Arduino Programming
Initialize the serial port and LED output, then sample the GSR 500 times and average the readings to establish a resting threshold for later comparison.
Send the start command 0x24 over software serial every 3 seconds until a correctly formatted data packet is received (setting the flag hrvStarted = true).
Continuously read GSR values. If a reading deviates from the resting threshold by more than 60, take a second confirmation reading; if the deviation persists, treat it as a significant emotional fluctuation and turn on the LED as an alert.
When the software serial buffer contains 24 bytes, read the packet byte by byte. Check that the first byte is 0xFF and the last byte is 0xF1; if the packet is valid, extract the physiological parameters.
Every second, output a JSON-formatted record including the GSR value, the emotion detection status, and the physiological indicators from the integrated module (a host-side Python sketch for reading this stream follows these steps).
Add a short delay (20 ms) at the end of loop() to keep data processing from running too fast, which could overload the CPU or cause system instability.
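
The Arduino sketch only emits data; on the host, a small Python script can consume the 1 Hz JSON stream and decide when to trigger the companion model. Below is a minimal sketch using pyserial; the port name, baud rate, and field names (gsr, emotion_alert) are assumptions, not the project's actual settings.

```python
import json
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # hypothetical port; depends on the host machine
BAUD = 9600            # assumed rate; must match the sketch's Serial.begin()

with serial.Serial(PORT, BAUD, timeout=2) as ser:
    while True:
        line = ser.readline().decode("utf-8", errors="ignore").strip()
        if not line:
            continue
        try:
            record = json.loads(line)  # one JSON record per second
        except json.JSONDecodeError:
            continue  # skip partial or corrupted lines
        if record.get("emotion_alert"):
            print("Emotional fluctuation detected, GSR =", record.get("gsr"))
```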
07 AI Fine-Tuning Process
7.1 Base model selection
7.2 Data collection
Corpora were collected and compiled. A single type of corpus tends to make the model "very gentle but very empty"; by mixing three types of corpora and adding cute emoticons, the model can simultaneously possess knowledge (know-why), emotion (feel-with), and action (what-next). A sketch of one way to weight such a mix appears below.
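
One plausible way to weight the three corpus types during mixing, sketched with the Hugging Face datasets library; the file names and sampling probabilities are illustrative assumptions.

```python
from datasets import interleave_datasets, load_dataset

# Hypothetical local files, one per corpus type.
knowledge = load_dataset("json", data_files="knowledge.json", split="train")
emotion = load_dataset("json", data_files="emotion.json", split="train")
action = load_dataset("json", data_files="action.json", split="train")

# Sampling probabilities are placeholders; shifting them changes the
# balance between know-why, feel-with, and what-next in the model's voice.
mixed = interleave_datasets(
    [knowledge, emotion, action],
    probabilities=[0.3, 0.4, 0.3],
    seed=42,
)
```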
7.3 Construction of a mixed dataset
Corpora were converted into JSON format following a standardized dialogue structure (instruction — input — output), uploaded to Hugging Face, and used to construct a mixed dataset.
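
A minimal sketch of the record schema and the upload step, assuming the Hugging Face datasets library; the example texts and the repository name are placeholders.

```python
from datasets import Dataset

# One record per dialogue turn, following the standardized
# instruction - input - output structure described above.
records = [
    {
        "instruction": "Respond as a warm, empathetic companion for an HSP user.",
        "input": "I feel overwhelmed after a noisy day at work.",
        "output": "That sounds exhausting. Let's take a slow breath together (´･ω･`)",
    },
]

dataset = Dataset.from_list(records)
# Placeholder repo name; requires `huggingface-cli login` beforehand.
dataset.push_to_hub("your-username/hsp-companion-mixed")
```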
7.4 Fine-tuning attempt
On Google Colab, the mixed dataset was retrieved from Hugging Face and used for LoRA-based fine-tuning of the model.
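
A condensed sketch of that Colab step using the peft and transformers libraries; the base model, dataset repository, and hyperparameters here are assumptions rather than the project's actual settings.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "Qwen/Qwen2-1.5B-Instruct"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Wrap the base model with low-rank adapters; only these weights train.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM"))

data = load_dataset("your-username/hsp-companion-mixed", split="train")

def to_text(ex):
    # Flatten instruction - input - output into one training string.
    return {"text": f"{ex['instruction']}\n{ex['input']}\n{ex['output']}"}

def tokenize(ex):
    return tokenizer(ex["text"], truncation=True, max_length=512)

data = data.map(to_text)
data = data.map(tokenize, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments("lora-out", per_device_train_batch_size=2,
                           num_train_epochs=3, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```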
7.5 Deployment on Hugging Face Space
The trained model is deployed to a Hugging Face Space, and its API endpoint is integrated into the Python host program. When the sensors detect an emotional anomaly, the system calls the API in real time to trigger an immediate response.
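
A minimal sketch of that real-time call, assuming the Space exposes a Gradio endpoint reachable through gradio_client; the Space name and api_name are placeholders.

```python
from gradio_client import Client

client = Client("your-username/hsp-companion")  # placeholder Space

def on_emotion_alert(gsr_value: int) -> str:
    # Invoked by the serial-monitoring loop once an anomaly is confirmed.
    prompt = f"My GSR just spiked to {gsr_value}. I'm feeling tense."
    # api_name depends on how the Space defines its endpoints.
    return client.predict(prompt, api_name="/chat")

print(on_emotion_alert(412))
```

This split keeps the heavy model in the cloud while the sensing loop stays local, in line with the hybrid edge-cloud direction described in the future outlook.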
7.6 Final effect
08 Usage Process
09 User Test
A total of 30 participants took part in user testing. Each participant first interacted with the AI model before fine-tuning, then with the fine-tuned version, allowing for a direct and intuitive comparison of the results. At the end of the test, every participant was asked to complete a questionnaire to provide feedback on their user experience.
10 Future Outlook -- Virtual Product
11 Reflection and Future Expectations
I hope to bring this project into real use, helping people cultivate emotional well-being and a sense of calm in a fast-changing world.
To achieve this, I’m developing three core abilities:
- AI Product Design – defining product value, understanding users, and building a foundation in AI systems and models.
- Product Management – bridging technology and users, translating needs into development goals, and fostering cross-team collaboration.
- Market Strategy – finding resources, writing business plans, and communicating the social value and unique story of AI products.
The future vision for Instant Friend AI is a hybrid AI system combining edge and cloud:
- Real-time emotion sensing, local interaction, and offline speech recognition.
- Empathetic dialogue, emotional pattern analysis, and adaptive language generation.
- Constructing richer data and refining continuously, to build an AI that not only understands people but feels with them.
Ultimately, Instant Friend AI is more than a technological experiment—it embodies my belief that design should care for the unseen: our emotions, fragility, and longing for connection. This project taught me that technology’s future lies not in replacing humanity, but in amplifying it. At the intersection of empathy and intelligence, art and algorithm, I hope to keep creating systems that understand, accompany, and remind us what it means to be alive.