HIPAA Privacy Notice
Effective date: March 25, 2026 · Last updated: March 25, 2026
Important: HIPAA Applicability Statement
Family Guardian AI is not a Covered Entity or Business Associate under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). We are a consumer technology company, not a healthcare provider, health plan, or healthcare clearinghouse. HIPAA’s specific technical requirements do not legally apply to us.
However, because our Service involves sensitive health and wellbeing data about elderly individuals, we voluntarily adopt privacy and security practices that are consistent with HIPAA principles and, in several respects, exceed them. This notice explains those practices and what you should know about your health data.
1. Health-Related Data We Handle
Through your use of the Service, we handle the following categories of health-related information:
Profile-level health context
Provided by you: Medical conditions (e.g., Alzheimer’s disease, dementia, Parkinson’s, diabetes, heart disease), current medications, mobility limitations, cognitive status, dietary restrictions, and special health notes entered voluntarily when you set up a senior’s profile.
Call transcript content
Generated during calls: During AI check-in calls, seniors may voluntarily disclose health information, such as mentioning pain, recent doctor’s visits, medication side effects, or falls. These disclosures are captured in the call transcript.
AI-generated health signals
AI-generated: Our AI analyzes transcripts and generates mood scores (1–10), cognitive engagement assessments, anomaly flags (confusion, distress, pain, fall risk), and wellbeing scores. These are inferences, not clinical measurements.
Behavioral patterns
System-derived: Longitudinal patterns derived across multiple calls, such as declining mood scores, increasing confusion flags, or reduced engagement, used to compute trend-based alerts.
2. How We Protect Your Health Data
We implement comprehensive technical and organizational safeguards to protect health-related information:
2.1 Encryption
- In transit: All data transmitted between your browser and our servers, and between our servers and third-party APIs, is encrypted using TLS 1.2 or higher. This includes call transcripts sent to our AI analysis service and health profile data written to our database.
- At rest: All data stored in our database (Supabase on AWS) is encrypted at rest using AES-256 encryption. Health notes, transcripts, AI analyses, and mood scores are all protected at the storage layer.
- No audio storage: We do not record or store audio of calls. Transcripts are text-only, eliminating risks associated with audio file storage.
2.2 Access Controls
- Row-Level Security (RLS): Our database enforces RLS policies at the query level. A Family Member can only read data for seniors they are explicitly connected to. No user can access another family’s data.
- Role separation: Our application uses separate database roles: a limited “anon” key for public operations, a “service role” key restricted to server-side operations only, and an authenticated user session token for user-scoped queries.
- Authentication: All accounts are protected by password (hashed with bcrypt via Supabase Auth). Passwords are never stored in plaintext.
- Internal access: No employee or contractor has unrestricted access to health data. Production access is limited to systems and engineers with a legitimate operational need.
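As an illustrative sketch only (not our production code or schema), the access model described above can be pictured as a query-scoping rule: a Family Member’s reads are filtered to the seniors they are explicitly connected to, and everything else is denied. All table and function names below are hypothetical.

```python
# Hypothetical sketch of the row-level access model described above.
# Data structures and names are illustrative, not our production schema.

CONNECTIONS = {
    # family_member_id -> set of senior_ids they are explicitly linked to
    "family-alice": {"senior-01"},
    "family-bob": {"senior-02"},
}

SENIOR_HEALTH_PROFILES = {
    "senior-01": {"conditions": ["diabetes"], "medications": ["metformin"]},
    "senior-02": {"conditions": ["Parkinson's"], "medications": []},
}

def read_health_profile(requester_id: str, senior_id: str) -> dict:
    """Return a senior's profile only if the requester is connected
    to that senior, mirroring a row-level security policy enforced
    at the query layer rather than in application code."""
    allowed = CONNECTIONS.get(requester_id, set())
    if senior_id not in allowed:
        raise PermissionError("requester is not connected to this senior")
    return SENIOR_HEALTH_PROFILES[senior_id]
```

Under this model, a request by "family-alice" for "senior-02" fails with a permission error before any health data is read, which is the behavior the RLS policies guarantee at the database level.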
2.3 Data Minimization
- We collect health data only to the extent necessary to personalize calls and detect wellbeing concerns.
- Call transcripts sent to Anthropic’s Claude API for analysis contain only the transcript text — no explicit identifiers like full names, addresses, or Social Security numbers are included in the API request.
- SMS alerts sent via Twilio contain only the alert message and recipient phone number — not full health profiles or transcript content.
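The minimization rule for analysis requests can be sketched as a payload builder that forwards only the transcript text and deliberately leaves identifying fields behind. This is a simplified illustration; the function and field names are hypothetical.

```python
# Hypothetical sketch of the data-minimization rule described above:
# only transcript text is forwarded for analysis; identifiers stay behind.

def build_analysis_request(call_record: dict) -> dict:
    """Build the payload sent to the AI analysis service.

    Deliberately excludes identifying fields (name, phone number,
    any government ID) that exist on the internal call record."""
    return {"transcript": call_record["transcript"]}

call_record = {
    "senior_name": "Jane Doe",   # never sent to the analysis API
    "phone": "+1-555-0100",      # never sent
    "transcript": "I saw the doctor on Tuesday; my knee has been hurting.",
}

payload = build_analysis_request(call_record)
```

The resulting payload contains the transcript and nothing else, which is the property stated above for API requests.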
2.4 Incident Response
We maintain an incident response plan for potential data breaches. In the event of a breach involving health data, we will notify affected users promptly (within 72 hours of discovery where feasible), describe the nature of the breach, and take immediate steps to contain and remediate it.
3. Vendor Agreements & Business Associate Status
Because we are not a HIPAA Covered Entity, we are not required to enter into formal Business Associate Agreements (BAAs) with our vendors. However, we work only with vendors that maintain strong data protection standards:
| Vendor | Health Data Involved | BAA Available? | Security Certifications |
|---|---|---|---|
| Supabase (database) | All data including health profiles, transcripts, analyses | Yes (Enterprise) | SOC 2 Type II, ISO 27001 |
| Anthropic (AI analysis) | Call transcript text for analysis | Yes (available) | SOC 2 Type II |
| ElevenLabs (voice AI) | Senior phone number, system prompt with context | Contact for enterprise | SOC 2 in progress |
| Twilio (telephony & SMS) | Phone numbers, alert message content | Yes (HIPAA eligible) | SOC 2 Type II, ISO 27001 |
| Vercel (hosting) | Request logs, IP addresses | Yes (Enterprise) | SOC 2 Type II |
As our platform scales and as regulatory requirements evolve, we are committed to upgrading vendor relationships to include formal BAAs where appropriate.
4. Permitted Uses of Health Data
We use health-related data only for the following purposes:
- Personalizing check-in calls. A senior’s medical conditions, medications, and cognitive status inform how the AI companion conducts the conversation — for example, avoiding complex topics for seniors with advanced dementia, or gently checking on medication adherence when a family member has flagged it as a concern.
- Analyzing call transcripts. Transcripts are analyzed by Claude to produce mood scores and anomaly flags. Health disclosures made during a call (e.g., “my knee has been hurting”) inform the analysis.
- Generating family alerts. When anomalies are detected — confusion, distress, reported pain, fall risk — we alert family members so they can take appropriate action.
- Compiling wellbeing trends. Longitudinal analysis of mood scores and anomaly flags gives families a picture of their loved one’s wellbeing over time.
- Powering the AI chat assistant. When a Family Member asks the AI chat about their senior, relevant call analyses and summaries are used to generate a contextually accurate response.
We do NOT use health data to:
- Sell or license health data to insurance companies, pharmaceutical companies, or data brokers;
- Train AI models for third parties;
- Target advertising or make eligibility decisions;
- Share with employers or any third party not listed in our Privacy Policy;
- Serve any purpose beyond delivering and improving the Service.
5. Your Rights Over Health Data
You have the following rights regarding health data in our Service:
Access
You can view all health profile data in the app at any time. You can request a full export of your data by emailing privacy@familyguardian.ai.
Correction
You can update health information in the senior’s profile at any time directly in the app.
Deletion
You can delete individual call analyses, memories, or the entire senior profile from the app. You can also request full account deletion, which will delete all associated health data per our retention policy.
Portability
You can request a machine-readable export (JSON) of all data associated with your account, including transcripts and analyses.
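For readers who want a concrete picture, a machine-readable export of the kind described under Portability might be shaped like the JSON sketch below. The field names are illustrative only, not a schema guarantee.

```python
import json

# Illustrative shape of a portability export; field names are hypothetical.
export = {
    "account": {"email": "family@example.com"},
    "seniors": [
        {
            "profile": {
                "conditions": ["diabetes"],
                "medications": ["metformin"],
            },
            "calls": [
                {
                    "date": "2026-03-20",
                    "transcript": "(full transcript text)",
                    "analysis": {"mood_score": 7, "anomaly_flags": []},
                }
            ],
        }
    ],
}

print(json.dumps(export, indent=2))
```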
Restriction
You can stop new calls from being placed at any time by adjusting the call schedule. You can disable alerts in account settings.
6. Consent for Sensitive Health Information
When you enroll a senior, you voluntarily provide health context to enable personalized care calls. By doing so, you represent that you have the legal right to share this information — either because the senior has consented directly, or because you have appropriate legal authority (e.g., power of attorney, healthcare proxy, or guardian status).
During calls, seniors may voluntarily disclose additional health information. Our AI is designed to respond empathetically and to note medically relevant disclosures in the transcript for family awareness. The AI does not solicit detailed medical histories, ask for Social Security numbers or insurance information, or conduct formal cognitive testing.
Family members are encouraged to inform seniors that they are speaking with an AI companion and that conversations are analyzed to help keep their family informed of their wellbeing.
7. Not Medical Advice or a Medical Device
This Service does not provide medical advice, diagnosis, or treatment. All AI-generated outputs — mood scores, wellbeing assessments, anomaly flags, cognitive engagement scores, and weekly reports — are informational tools for family awareness. They are not clinical assessments, diagnostic results, or professional medical opinions.
This Service is not a medical device and has not been cleared or approved by the FDA or any other health regulatory authority.
This Service is not an emergency response system. If you have any reason to believe a senior is in immediate danger, call 911 without delay. Do not rely on our alert system as a substitute for emergency services.
Always consult a qualified, licensed healthcare professional for medical advice, diagnosis, or treatment decisions.
8. Questions About Health Data Privacy
If you have questions or concerns about how we handle health-related data, or wish to exercise any of your rights, please contact us at privacy@familyguardian.ai.
We aim to respond to all privacy inquiries within 5 business days. For urgent data concerns, please include “URGENT” in the subject line.
Our Commitment
We built Family Guardian AI because we care deeply about the dignity, privacy, and safety of elderly individuals. We treat every piece of health data as a sacred trust — something shared with us not out of convenience, but out of love for a family member who deserves the best possible care. We will never compromise on that responsibility.