Every conversation starts with trust

Organizations entrust Kindred with their most important relationships — students, alumni, and communities. Protecting participant data isn't a feature. It's the foundation of everything we build.

No Audio Storage

Kindred never records, stores, or has access to call audio. Voice streams directly between the browser and OpenAI via encrypted WebRTC.

Minimal Data Retention

Only AI-generated insights are retained. Raw transcripts are processed to produce those insights and are not stored long-term.

Passwordless Authentication

Sign in via SSO with major identity providers or secure magic-link email — no passwords to leak or phish, no self-registration. Accounts are provisioned by administrators only.

Consent with Audit Trail

Every call requires explicit opt-in. Consent events are recorded with full forensic detail: text snapshot, hash, IP address, user agent, and timestamp.
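The audit-trail fields above (text snapshot, hash, timestamp, user agent) can be sketched as follows; the field names and record shape are illustrative assumptions, not Kindred's actual schema.

```typescript
import { createHash } from "node:crypto";

// Illustrative consent record: the exact text the participant saw is
// snapshotted, and its hash lets anyone later prove it was not altered.
interface ConsentEvent {
  consentText: string; // snapshot of the consent language shown
  textHash: string;    // SHA-256 of that snapshot
  timestamp: string;   // ISO 8601
  userAgent: string;
}

function recordConsent(consentText: string, userAgent: string): ConsentEvent {
  const textHash = createHash("sha256").update(consentText, "utf8").digest("hex");
  return { consentText, textHash, timestamp: new Date().toISOString(), userAgent };
}

// Recompute the hash to verify the stored snapshot is untampered.
function verifySnapshot(event: ConsentEvent): boolean {
  const recomputed = createHash("sha256").update(event.consentText, "utf8").digest("hex");
  return recomputed === event.textHash;
}
```

Hashing the snapshot (rather than only storing the text) is what makes the trail forensic: any later edit to the stored text is detectable.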

AI Governance

Your data, your conversations, your control. Here's exactly how we handle AI processing.

No Training on Your Data

✓ Implemented
  • Voice and transcript data is processed via OpenAI's commercial API
  • Per OpenAI's API terms, inputs and outputs are not used for model training
  • Ephemeral voice sessions with short-lived tokens
  • Your conversations remain exclusively yours

Data Minimization

✓ Implemented
  • Only the data necessary for each task is sent to the AI
  • Transcripts are scoped to individual conversations — not your entire database
  • Only structured insights are retained long-term

Prompt Security

✓ Implemented
  • Input sanitization to mitigate prompt injection
  • Campaign instructions set by authorized admins only
  • AI behavior bounded by system-level instructions

Built for participants, not just admins

Organizations don't just adopt Kindred for themselves — they adopt it on behalf of their networks. We take that responsibility seriously.

AI Disclosure

✓ Implemented

Participants are clearly informed that their conversation is AI-powered before every call begins. No hidden automation.

Explicit Consent

✓ Implemented
  • Active opt-in required (no pre-checked boxes)
  • Clear description of data collected and how it's used
  • Full audit trail: text snapshot, hash, IP, user agent, timestamp

Opt-Out & Deletion

✓ Implemented
  • Decline consent and skip the call entirely
  • End any call at any time
  • Request complete data deletion
  • No account required — single-use secure links only

How Kindred Works

A quick overview so the security controls make sense.

1. Campaign setup

An org admin creates a campaign, uploads a member list (name and email only), and generates unique call links.

2. Consent

Before any call begins, the participant sees a consent screen and must explicitly opt in. Consent is recorded with a full audit trail.

3. Voice call

The participant has a conversation with an AI voice agent directly in their browser. Audio streams directly between the browser and OpenAI via WebRTC — Kindred's servers never see or store audio.
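Browser-to-provider voice flows like this typically rely on short-lived session credentials minted server-side, so the browser never holds a long-lived API key. A minimal sketch of the lifetime bookkeeping, with illustrative names (not Kindred's actual implementation):

```typescript
// 1. The backend, which holds the long-lived provider API key, mints a
//    short-lived client secret and hands only that to the browser.
// 2. The browser completes the WebRTC SDP exchange with the voice provider
//    using that secret, so audio never transits the app's servers.
interface EphemeralSession {
  clientSecret: string; // safe to expose to the browser
  expiresAt: number;    // Unix seconds; typically a minute or two out
}

function mintSession(nowSeconds: number, ttlSeconds: number): EphemeralSession {
  // In production this would be a server-side call to the voice provider;
  // here we only model the expiry bookkeeping.
  return { clientSecret: "ephemeral-" + nowSeconds, expiresAt: nowSeconds + ttlSeconds };
}

// Refuse to start a call on a stale secret.
function isExpired(session: EphemeralSession, nowSeconds: number): boolean {
  return nowSeconds >= session.expiresAt;
}
```

Because the secret expires quickly, even a leaked call link cannot be reused to open new voice sessions later.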

4. Transcript & insights

A text transcript is generated from the conversation. AI-generated insights are extracted and retained for the organization.

5. Org admin review

Admins view aggregated insights and analytics. They never hear the original audio.

Compliance Framework

FERPA Alignment

In Progress
  • Participant data is organization-scoped with strict access controls
  • Only authorized org admins can view insights from their own organization
  • Minimal PII collected (name and email only)
  • Only AI-generated insights retained, not raw transcripts
  • Explicit consent required before every interaction

Coming soon: Formal FERPA compliance documentation and institutional agreements

GDPR Ready

In Progress
  • Lawful basis: explicit consent with audit trail
  • Data minimization: only name and email collected
  • Right to access: org admins can export data
  • Retention limits: only insights retained, not raw transcripts

Coming soon: GDPR-compliant data residency options and consent withdrawal workflow

CCPA Ready

In Progress
  • Transparent data collection practices
  • Member opt-out supported
  • Data deletion on request
  • No sale of personal information

SOC 2 Type II

Coming Soon

We are actively preparing for SOC 2 Type II certification. Our current practices align with SOC 2 trust service criteria, and we are formalizing policies and controls for audit readiness.

Security

Encryption

✓ Implemented
  • TLS (HTTPS/WSS) for all data in transit
  • Voice audio via encrypted WebRTC directly to OpenAI
  • Infrastructure-level encryption at rest via Render

Coming soon: Application-level AES-256 encryption for sensitive fields

Access Control

✓ Implemented
  • SSO with major identity providers
  • Complete tenant isolation between organizations
  • Role-based access: admin and member roles
  • Cryptographically random, single-use participant tokens

Coming soon: MFA enforcement for all org admins
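The cryptographically random, single-use participant tokens listed above can be sketched like this; the in-memory set stands in for a persistent "redeemed" flag, and all names are illustrative:

```typescript
import { randomBytes } from "node:crypto";

// Stand-in for a database flag marking a token as already used.
const redeemed = new Set<string>();

function mintParticipantToken(): string {
  // 128 bits from the CSPRNG, URL-safe for embedding in a call link.
  return randomBytes(16).toString("base64url");
}

function redeemToken(token: string): boolean {
  if (redeemed.has(token)) return false; // second use is rejected
  redeemed.add(token);
  return true;
}
```

Random 128-bit tokens are infeasible to guess, and single-use redemption means a forwarded or leaked link stops working after the first click.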

Application Security

✓ Implemented
  • JWT verification and per-request authorization
  • Schema enforcement on all API inputs
  • Automated access control test suite
  • Code review process for all changes

Coming soon: Rate limiting on all public-facing endpoints
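Schema enforcement of the kind listed above can be sketched as a parse-at-the-boundary check: malformed input is rejected and unknown fields are dropped before any handler logic runs. The input type and its fields here are hypothetical examples, not Kindred's actual API:

```typescript
// Hypothetical API input shape for illustration.
interface CreateCampaignInput {
  name: string;
  memberEmails: string[];
}

function parseCreateCampaign(body: unknown): CreateCampaignInput {
  const b = body as Record<string, unknown>;
  if (typeof b?.name !== "string" || b.name.trim() === "") {
    throw new Error("name: non-empty string required");
  }
  const emails = b.memberEmails;
  if (
    !Array.isArray(emails) ||
    !emails.every((e) => typeof e === "string" && e.includes("@"))
  ) {
    throw new Error("memberEmails: array of email strings required");
  }
  // Whitelist: only declared fields survive; anything else is discarded.
  return { name: b.name, memberEmails: emails as string[] };
}
```

Rebuilding the object from validated fields, rather than passing the raw body through, is what keeps unexpected fields from reaching storage or the AI layer.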

Sub-processors

We're transparent about every third-party service that touches your data.

Vendor | Purpose | Data Shared | Location | Their Compliance
OpenAI | Voice AI & insight generation | Voice audio (ephemeral), transcript text | US | SOC 2 Type II
Clerk | Authentication & SSO | Email, name, auth tokens | US | SOC 2 Type II, GDPR
Vercel | Frontend hosting | Static assets only (no PII) | US | SOC 2 Type II, ISO 27001
Render | Backend & database | All application data (encrypted at rest) | US | SOC 2 Type II
ElevenLabs | Voice AI | Voice audio (ephemeral) | US | SOC 2 Type II
SendGrid | Transactional email | Email addresses, notification content | US | SOC 2 Type II, ISO 27001

What's Coming Next

We are continuously improving our security posture.

2026: SOC 2 Type II
Formal audit and certification for enterprise readiness

Q2 2026: FERPA documentation
Institutional agreements and formal compliance documentation

H2 2026: Penetration testing
Third-party penetration testing and vulnerability assessment

H2 2026: Data residency options
Configurable data residency for institutions with geographic requirements

Frequently Asked Questions

Does Kindred record or store audio?

No. Audio streams directly between the participant's browser and OpenAI via encrypted WebRTC. Kindred's servers never see, process, or store audio data.

What conversation data is retained?

Only AI-generated insights are retained for the organization. These insights do not contain raw transcript text or verbatim participant responses.

Can one organization see another's data?

No. Kindred uses strict organization-level data isolation. Every API request validates that the requesting user is an authorized member of the relevant organization.
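A minimal sketch of such a per-request org check, with illustrative types rather than Kindred's actual code:

```typescript
interface AuthedUser {
  userId: string;
  orgId: string;
  role: "admin" | "member";
}

function assertOrgAccess(user: AuthedUser, resourceOrgId: string): void {
  // Run before any data is fetched; in a defense-in-depth setup, queries
  // are additionally filtered by orgId so a missed guard still cannot
  // return another organization's rows.
  if (user.orgId !== resourceOrgId) {
    throw new Error("403: cross-organization access denied");
  }
}
```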

Does OpenAI train on our data?

No. Per OpenAI's API data usage policy, data submitted through the API is not used to train their models.

What data do you collect about participants?

The minimum necessary: name and email (provided by the organization), consent records, and AI-generated insights. We do not collect phone numbers, SSNs, grades, or financial information.

Can we get a DPA or BAA?

Yes. We are happy to work with institutions on Data Processing Agreements, BAAs, and other institutional agreements. Contact us at trust@projectkindred.co.

Questions about security?

We can provide additional security documentation, complete security questionnaires, or schedule a call with our team.

trust@projectkindred.co