AI Security

How Kaana protects your data when using AI features.

Overview

Kaana's AI features (Kai Advisor, Health Assessments, Contract to Subscription AI) are designed with privacy and security as top priorities.

How AI Features Work

1. When you use an AI feature, only the data relevant to that request is sent to our AI provider.

2. The AI processes the request.

3. Results are returned to you.

4. Your data is not stored by the AI provider beyond a short abuse-monitoring window (see AI Provider Security below).
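The request lifecycle above can be sketched as follows. This is a minimal illustration only; the function and field names are hypothetical and do not reflect Kaana's actual API.

```python
# Illustrative sketch of the AI request lifecycle described above.
# All names here are hypothetical, not Kaana's real API.

def process_with_provider(request: dict) -> dict:
    """Stand-in for the AI provider call; returns a summary for demonstration."""
    return {"feature": request["feature"], "status": "complete"}

def run_ai_feature(feature: str, relevant_data: dict) -> dict:
    """Steps 1-3: send a minimal payload, let the AI process it, return the result."""
    request = {"feature": feature, "data": relevant_data}  # step 1: only relevant data
    result = process_with_provider(request)                # step 2: AI processes it
    # step 4: the provider keeps nothing long-term (see AI Provider Security below)
    return result                                          # step 3: results returned
```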

Data Sent to AI

What IS Sent

Only data necessary for the specific request:

  • Kai Advisor: project summary, task counts, milestone status

  • Health Assessment: aggregated metrics, not individual records

  • Contract to Subscription AI: document content you choose to analyze
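The per-feature minimization above can be pictured as an allow-list filter: each feature sees only its permitted fields and nothing else. The field and feature keys below are illustrative assumptions, not Kaana's internal schema.

```python
# Hypothetical sketch of per-feature payload minimization: each AI feature
# receives only the fields listed above; everything else is stripped.

ALLOWED_FIELDS = {
    "kai_advisor": {"project_summary", "task_counts", "milestone_status"},
    "health_assessment": {"aggregated_metrics"},
    "contract_ai": {"document_content"},
}

def build_payload(feature: str, project_data: dict) -> dict:
    """Keep only the fields this feature is allowed to see."""
    allowed = ALLOWED_FIELDS[feature]
    return {k: v for k, v in project_data.items() if k in allowed}
```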

What is NOT Sent

  • Passwords or authentication tokens

  • Payment information

  • Personal identification numbers

  • Data from other users or tenants

AI Provider Security

We use OpenAI as our AI provider. Their security commitments:

  • No Training on Your Data — Your data is not used to train AI models

  • Data Retention — API data retained for 30 days max for abuse monitoring, then deleted

  • Encryption — All data encrypted in transit and at rest

  • SOC 2 Compliant — Enterprise-grade security standards

Your Control Over AI

Opt-In Features

AI features are optional. You choose when to:

  • Run AI analysis on a project

  • Use the AI advisor

  • Analyze documents with AI

No Automatic Processing

AI doesn't automatically scan your data. It only processes information when you explicitly request it.

Audit Trail

All AI interactions are logged so you can see:

  • When AI was used

  • What was analyzed

  • Results generated
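An audit-log entry covering those three points might look like the sketch below. The structure is a hypothetical illustration, not Kaana's actual log format.

```python
# Hypothetical shape of an AI audit-log entry: when AI was used,
# what was analyzed, and what was generated.
from datetime import datetime, timezone

def log_ai_interaction(feature: str, analyzed: str, result_summary: str) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when AI was used
        "feature": feature,
        "analyzed": analyzed,                                 # what was analyzed
        "result_summary": result_summary,                     # results generated
    }
```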

AI Output Accuracy

Important Disclaimers

  • AI insights are suggestions, not guarantees

  • Always verify AI recommendations

  • AI may occasionally produce inaccurate results

  • Use AI as a tool, not a replacement for judgment

Human Review

For critical decisions:

  • Review AI suggestions carefully

  • Verify against actual data

  • Consult with team members

  • Document decisions made

Best Practices

1. Review before submitting: check what data will be analyzed.

2. Sanitize sensitive content: remove confidential information if needed.

3. Validate outputs: don't blindly trust AI results.

4. Report issues: let us know if something seems wrong.

Protecting Sensitive Information

  • Avoid analyzing documents with highly sensitive data

  • Redact confidential information before AI analysis

  • Use AI for general insights, not sensitive details
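A simple redaction pass before submitting text for analysis might look like the sketch below. The patterns are examples only, assuming common formats (email addresses, US SSNs, card-like digit runs); adapt them to the data you actually handle.

```python
# Illustrative pre-submission redaction pass. Patterns are examples only.
import re

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),    # card-like digit runs
]

def redact(text: str) -> str:
    """Replace sensitive-looking substrings with placeholders before AI analysis."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```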

Frequently Asked Questions

Does AI have access to all my data?

No. AI only sees data you explicitly choose to analyze.

Is my data used to train AI models?

No. Your data is never used for AI training.

Can other users see my AI interactions?

No. AI interactions are private to your account and tenant.

What happens if AI gives wrong advice?

AI provides suggestions only. You maintain control over all decisions. Always verify important recommendations.

Can I disable AI features?

Yes. AI features are opt-in; if you prefer not to use them, simply don't invoke them.

Security Updates

We continuously monitor and update our AI security practices:

  • Regular security reviews

  • Provider compliance verification

  • Prompt response to any concerns
