Is your AI keeping secrets?

Simple habits that keep your data under control

Welcome to The Logical Box!

Your guide to making AI work for you.

Hey,

Andrew here from The Logical Box, where I break down AI so it’s easy to understand and even easier to use.

When you start using AI tools, it’s natural to worry about security - not because they’re bound to leak your secrets, but because the defaults aren’t always set up for maximum safety. The good news: you can keep control, lock things down, and still get all the usefulness.

Today I’ll walk you through setting up AI tools safely, especially OpenAI’s, and share three security building blocks that apply broadly. By the end, you’ll know specific steps you can take today to make your data safer.

What “Security Without the Scare” Looks Like

Instead of assuming the worst, imagine this: your AI only sees what you let it see; it can’t wander into all your folders; it doesn’t automatically record and reuse your private stuff; and you control who else sees what. That’s what security feels like when it’s done well.

Here are the main principles:

  1. Account & domain boundaries - Always use tools inside your own account or your organization’s. Avoid free tools that aren’t managed; they may not offer strong controls or respect your security requirements.

  2. Strict permissions - Think of each AI assistant or tool like a new hire: give it only the permissions it needs to do its job. Not everything, and not extras “just to be safe”.

  3. Limited scope & role separation - Don’t let one assistant or tool handle everything sensitive. Separate them by role (e.g. one for marketing drafts, another for internal docs) so a slip-up in one area doesn’t expose everything. The sketch just after this list shows the idea.
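To make that “least privilege” idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the assistant names, the folder paths, the may_access helper); it just illustrates the default you want: an assistant can touch only what it was explicitly granted, and everything else is denied.

```python
# A minimal sketch of "least privilege" for AI assistants.
# All names here are hypothetical - the point is that each assistant
# gets an explicit allow-list, and anything not on it is denied.

ASSISTANT_PERMISSIONS = {
    "marketing-drafts": {"folders": {"shared/marketing"}},
    "internal-docs": {"folders": {"shared/policies"}},
}

def may_access(assistant: str, folder: str) -> bool:
    """Deny by default: an assistant may only read folders it was explicitly granted."""
    perms = ASSISTANT_PERMISSIONS.get(assistant)
    if perms is None:
        return False  # unknown assistants get nothing
    return any(folder == allowed or folder.startswith(allowed + "/")
               for allowed in perms["folders"])

print(may_access("marketing-drafts", "shared/marketing/q3-campaign"))  # True
print(may_access("marketing-drafts", "shared/finance/payroll"))        # False
```

The code itself isn’t the point; the default is. Unless something is on the allow-list, the answer is no.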

OpenAI: Settings You Should Know About

Here are some of the concrete knobs and levers you can use, especially in OpenAI / ChatGPT, to increase security and privacy.

  • “Improve the model for everyone” toggle (Data Controls)
    What it does: Controls whether your new conversations are used to train OpenAI’s models, i.e. to help improve the AI overall. (OpenAI Help Center)
    What you should do: Turn this off if you don’t want your inputs used for training. This doesn’t erase past data, but it stops new content from being fed into model training. (OpenAI Help Center)

  • Chat history / Temporary Chats
    What it does: Some chats can be marked temporary; others are stored and persist. Temporary chats are often treated differently (e.g. deleted after a time) and not used for training or memory. (GenAI)
    What you should do: If you’re entering something sensitive, use a temporary chat if available, or disable history for that conversation. Don’t assume history is always off.

  • Export / Delete account data
    What it does: You may be able to delete old conversations, export your data, or request removal of certain data. (OpenAI Help Center)
    What you should do: Regularly audit what you have stored. Clean up old chats or data you no longer need. If you want to sever ties entirely, know how to request full deletion.

  • Administrative & Role-Based Controls (for teams / organizations)
    What it does: Business, enterprise, and edu plans often include features to restrict who can do what: which connectors are allowed, which documents or folders are accessible, custom roles, and so on. (OpenAI Help Center)
    What you should do: If you’re part of a team or run one, get familiar with these controls. Don’t let “everyone by default” have broad access. Use custom roles, restrict connectors, and limit file access.

Examples of Other Tools and What They Allow

For comparison, here is how other AI tools and related services approach similar settings:

  • Azure OpenAI: it offers role-based access control (RBAC), so you can specify which users or roles can access which resources; prompts, uploads, and model usage are separated by role. (Microsoft Learn)

  • If you’re using tools or assistants that connect to your internal drives (Google Drive, OneDrive, etc.), many let you limit access to specific shared folders or restrict which file types you sync - see the sketch just after this list. (OpenAI Help Center)
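To make that last bullet concrete, here is a rough sketch in Python of what “limit the connector to specific folders and file types” boils down to. The folder names, file types, and the should_sync helper are hypothetical; real connector settings do a version of this filtering for you before anything is synced.

```python
# A minimal sketch of narrowing what a Drive/OneDrive-style connector can see.
# Folder names and file types here are hypothetical examples.

from pathlib import PurePosixPath

ALLOWED_FOLDERS = {"Shared/ClientFacing", "Shared/Marketing"}
ALLOWED_SUFFIXES = {".docx", ".pdf", ".md"}

def should_sync(path: str) -> bool:
    """Only sync files that live under an approved folder and have an approved type."""
    in_allowed_folder = any(path.startswith(folder + "/") for folder in ALLOWED_FOLDERS)
    return in_allowed_folder and PurePosixPath(path).suffix.lower() in ALLOWED_SUFFIXES

files = ["Shared/Marketing/launch-plan.docx", "Private/HR/salaries.xlsx"]
print([f for f in files if should_sync(f)])  # only the marketing doc makes it through
```

Your job in the settings screen is simply deciding what belongs on that allow-list.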

What to Do Today to Lock Things Down

Here’s a checklist for you. Pick some items from this list, and you’ll be more secure by the end of the day:

  • In ChatGPT or any similar tool, go to Settings → Data Controls → Turn off “Improve the model for everyone”.

  • For sensitive chats, start them as “temporary chats” or use any mode that reduces retention.

  • Review connectors / third-party integrations. Are there ones you don’t need? Remove them. Are permissions too broad? Narrow them.

  • If you’re on a team or organization plan, review roles & permissions. Does everyone need access to drive or docs? Probably not.

  • Periodically export or delete old data you no longer need. Think of it like cleaning old files off your hard drive: less clutter, less risk.

  • Be careful what you feed into the AI: sensitive personal data, financials, trade secrets, and so on. Treat prompts as if they were public, because in many cases they could become so (a simple scrubbing habit, sketched just after this list, helps).
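On that last point, one lightweight habit is to scrub obviously sensitive strings before they go into a prompt. Here is a rough sketch in Python; the patterns are illustrative only and nowhere near real data-loss prevention, but the habit of pausing and scrubbing before you paste is the real win.

```python
# A minimal sketch of scrubbing obvious sensitive strings before they go into a prompt.
# These patterns are illustrative, not exhaustive.

import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Replace anything that looks like an email, SSN, or card number with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

print(scrub("Invoice for jane.doe@example.com, card 4111 1111 1111 1111."))
# -> Invoice for [EMAIL REMOVED], card [CARD REMOVED].
```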

What Research / Policy Shows

When I looked into what OpenAI and others allow, here’s the landscape:

  • OpenAI’s Data Controls let users opt out of letting their new conversations train models via the “Improve the model for everyone” toggle. (OpenAI Help Center)

  • Turning that toggle off does not automatically delete past training data or remove everything you’ve already shared; it generally applies moving forward. (Carlos Pérez)

  • For business, enterprise, and edu plans, there are stronger guarantees: OpenAI says it does not train on business data accessed via connectors, for example, and administrators have more control. (OpenAI)

  • Many tools and settings are device-agnostic (if you turn off model training on web, it applies to mobile too). (OpenAI Help Center)

Why These Steps Actually Matter

Because even though AI tools are powerful, they are still just systems that follow rules. If defaults leave doors open, someone might accidentally expose something, or you might lose control of your own intellectual property or private info. These steps give you the muscle to enforce your own boundaries.

Also, as AI tools get adopted more widely in business, being able to show you “did your due diligence” in security becomes a trust factor. Clients or partners will care.

Ready to take the next step?

I work alongside businesses to develop AI skills and systems that stay with you. Rather than just building prompts, I help you become a confident AI user who can solve real problems - no more starting from zero each time.

If you are ready for some guidance to get you or your team truly comfortable with AI tools, reach out to Andrew on LinkedIn and let's talk about what is possible.

Thanks for reading,

Andrew Keener
Founder of Keen Alliance & Your Guide at The Logical Box

Please share The Logical Box link with anyone else you know who would enjoy it!

Think Inside the Box: Where AI Meets Everyday Logic