Swipe Right for AI Consent
Why your relationship with AI needs better boundaries than your last Tinder date
🌸 ikigai 生き甲斐 is a reason for being, your purpose in life - from the Japanese iki 生き meaning life and gai 甲斐 meaning worth 🌸
Your smart thermostat learned the household's heating preferences beautifully. Trouble is, it learned them from your ex, who was perpetually heat-averse! Now every evening, just when you're settling in for a cosy night, the heating drops to a brisk 16 degrees because the algorithm thinks that's what "home preferences" look like.
You like it properly toasty. Your ex treated central heating like a personal enemy.
The device meant well. Your ex... well, that's another story... but here you are, wrapped in a blanket, wondering when exactly you agreed to let your house make assumptions about your comfort based on someone else's frugal habits.
Welcome to algorithmic intimacy without informed consent. The kind where technology knows your patterns better than your best friend, makes decisions based on data you didn't realise you were sharing, and occasionally gets things spectacularly wrong because it's missing context.
I'm not breaking up with AI, hells no! I LOVE it... I use it in heaps of ways daily and I'm not about to stop, but the best relationships happen when both parties are clear about boundaries from the start.
The anatomy of algorithmic intimacy
Small signals add up to a startlingly complete picture.
Your phone knows you checked an ex's Instagram at 2am. Your streaming service has clocked a run of Taylor Swift break-up songs. Your fitness tracker noticed your heart rate spike during that awkward work meeting. Your shopping app has learned that you stress-buy notebooks and LEGO sets.
Individually, these are crumbs. Together, they're a story more revealing than a diary: when you're restless, who you message when anxious, what makes you pause, when you're content. Most of us stumbled into this level of intimacy without noticing the moment consent slipped from active to assumed.
It's all pattern recognition gold for algorithms designed to know you better than you know yourself.
The "first date rules" for AI
Match sharing to trust; let trust be earned.
Remember when you had rules about what you'd share on a first date? Definitely no ex-partner comparisons and absolutely no discussing your latest therapy session.
AI deserves the same graduated intimacy approach.
Green light data (share freely) - coffee order, what genre of books you enjoy, favourite workout playlist, generic work templates. The stuff you'd happily mention to a stranger at a bus stop.
Amber light data (share with care) - daily routines, shopping habits, who you message most often, drafts containing other people's words. Information that's useful for personalisation but could feel intrusive if used incorrectly.
Red light data (absolutely not on the first date) - real-time location when vulnerable, finances, children's data, health concerns, anything promised confidential. The intimate stuff, some of which may never be appropriate to share.
Just like dating, you can always share more as trust is earned.
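If you fancy turning the traffic lights into something you can actually run, here's a minimal Python sketch. It's illustrative only, assuming you keep your own category lists; the example items and the sharing_advice helper are my inventions, not any app's real permission model.

```python
# A minimal traffic-light sketch - the categories and example items
# are illustrative assumptions, so swap in your own.
TRAFFIC_LIGHT = {
    "green": {"coffee order", "book genres", "workout playlist", "work templates"},
    "amber": {"daily routines", "shopping habits", "frequent contacts"},
    "red": {"real-time location", "finances", "children's data", "health concerns"},
}

ADVICE = {
    "green": "Share freely - bus-stop small talk.",
    "amber": "Share with care - useful, but could feel intrusive.",
    "red": "Not on the first date - maybe never.",
}

def sharing_advice(data_item: str) -> str:
    """Return a first-date rule for a given piece of data."""
    for zone, items in TRAFFIC_LIGHT.items():
        if data_item in items:
            return ADVICE[zone]
    return "Unlisted - treat as amber until you've thought it through."

print(sharing_advice("coffee order"))        # Share freely - bus-stop small talk.
print(sharing_advice("real-time location"))  # Not on the first date - maybe never.
```

Just like the dating version, items can graduate from amber to green as trust builds.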
Creating your personal AI prenup
Before things get too serious, it's time for the relationship-defining conversation. What are you willing to share, what stays private and what happens when things go wrong? Here are some categories to consider, with your own tolerances listed against each:
What's mine stays mine - Your location during vulnerable moments, your health data, your financial information, your family conflicts.
What we can share - Your professional preferences, entertainment tastes, general lifestyle patterns.
What needs permission every time - Direct access to your contacts, your photos, your conversations with others.
The dealbreakers - Tools that infer mental health, creditworthiness, or job suitability without explicit consent and human review.
Pop this in your bullet journal as a spread you revisit each quarter. Lots of tools and apps will try to get maximum permission upfront; your prenup is your guide for where you may want to say no!
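And for the digitally inclined, the same prenup can live as a little data structure you audit each quarter. A sketch only, assuming you sort your own items into the four categories above; the category names and the check_request helper are illustrative, not any real tool's settings.

```python
# An illustrative personal AI prenup - the items are examples, not advice.
PRENUP = {
    "mine": {"location when vulnerable", "health data", "finances", "family conflicts"},
    "shared": {"professional preferences", "entertainment tastes", "lifestyle patterns"},
    "ask_every_time": {"contacts", "photos", "conversations with others"},
    "dealbreakers": {"mental health inference", "creditworthiness scoring"},
}

def check_request(data_item: str) -> str:
    """Decide how to respond when a tool asks for a piece of data."""
    if data_item in PRENUP["dealbreakers"] or data_item in PRENUP["mine"]:
        return "Decline - this stays with me."
    if data_item in PRENUP["ask_every_time"]:
        return "Pause - grant one-off permission only, never blanket access."
    if data_item in PRENUP["shared"]:
        return "Allow - this is in the shared zone."
    return "Unlisted - add it to the next quarterly review before deciding."

print(check_request("contacts"))  # Pause - grant one-off permission only...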
Red flag spotting guide
When AI starts acting like a pushy date.
You know when someone assumes they can choose for you at a restaurant because "they know your type"? AI features can do this too, and it's just as annoying. Watch for these algorithmic red flags:
The "I know you better than you know yourself" move - When your fitness app insists you should run today despite you clearly choosing rest, or your shopping algorithm decides you "obviously" want the premium version of everything.
The "Let me handle this for you" overreach - Calendar apps that auto-decline meetings because they've decided you're too busy, or email filters that hide messages they think aren't important.
The "Trust me, everyone loves this" manipulation - Recommendation engines that push popular content over diverse options, gradually narrowing your world to what's trending, not what's you.
The "I was just trying to help" defence - When an app makes decisions that feel invasive, then gets defensive about your reaction. (Looking at you, every app that turns on notifications after an update.)
Your gut feeling matters here. If something feels too presumptuous, it probably is. Trust that instinct and adjust your settings accordingly. Or delete!
Digital peer review AKA meeting the friends
In healthy relationships, you introduce new partners to your trusted circle. They provide perspective, spot red flags you might miss and help protect you.
AI decisions benefit from the same peer review approach.
Before implementing significant AI solutions or suggestions:
• Run important recommendations past a trusted human
• Check AI-generated content with someone who knows your voice
• Ask friends about their experience with similar tools
• Test AI decisions and outputs in low-stakes situations first
Create your own algorithmic advisory board. Maybe it's your most techie colleague for work tools, your friend group for lifestyle apps, a friendly teenager for social media dos and don'ts. The point is having humans in the loop who can say "hang on, that doesn't sound like you" or "are you sure that's a good idea?"
Even as we build up trust in different AI tools, it's worth remembering that the best decisions usually involve multiple perspectives. Even the smartest algorithms have blind spots.
"The Talk" - Scripts for family digital democracy
House rules work best when everyone helps write them.
Just like any household decision that affects everyone, your family's AI boundaries deserve a proper conversation. Not a lecture, not a unilateral decision from the most tech-savvy person, but an actual democratic chat.
Starter script for partners - "I've been thinking about how much our devices know about us... shall we have a chat about what we're comfortable sharing and what feels too personal? I'd love to hear your thoughts."
For families with teenagers - "Right, let's talk about our digital housemates: Alexa, Siri, and all the apps that are quietly learning about us. What do we want them to know about our family, and what stays private?"
The key questions to discuss:
• Which smart features make our lives better?
• What would we never want an algorithm to decide for us?
• How do we protect each other's privacy in shared spaces?
• What happens when AI gets it wrong?
It doesn't hurt to document your family's consensus. Get the artiest member of the family to draw it out and pin it to the fridge alongside the emergency numbers and the shopping list. Boundaries work when everyone knows what they are.
This protects ikigai (and boosts hatarakigai)
I don't want anyone to live in fear of digital tools. I just think it's sensible to approach them with the same consensual thinking we'd apply to any important relationship. AI can be a brilliant partner when both parties respect each other's limits.
Boundaries don't make life smaller; they make meaning safer. Your ikigai lives in the quiet, untracked parts as much as the productive ones. And your hatarakigai, work worth doing, gets easier when workflows are cleaner and kinder.
Your algorithmic relationships should enhance who you are, not reshape you into something easier for a model to predict. The humans who set boundaries today stay recognisably themselves tomorrow.
There's something rather satisfying about being the human who taught their smart apps to respect their contradictions.
I'd love to hear your thoughts in the comments, beautiful souls... be honest... do any apps know you better than your best friend? Was that a conscious decision? Do you have any privacy-protecting tips to share, or questions to pose?
Sarah, seeking ikigai xxx
PS - Here are some bullet journal spread ideas to map out your prenup:
• Green/Amber/Red data zones with specific examples
• Current AI relationship audit (list tools; what's working, what feels icky)
• Family digital democracy values from your household chat
• Red flag tracker for when AI gets presumptuous
• Peer review contacts for different types of AI decisions
• Algorithmic hygiene, a one-minute checklist:
  - Source noted?
  - Facts spot-checked (names, dates, numbers)?
  - Sensitive data removed or masked?
  - Bias scan done (who's missing, who's misrepresented)?
  - Human owner reviewed and named?
  - Version/date noted?
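And if you're more terminal than washi tape, the same one-minute checklist works as a tiny script. A playful sketch, assuming you answer each question by hand; the pass/fail wording is my own invention, not an established tool's.

```python
# An illustrative one-minute hygiene check - run it before sharing
# anything AI-assisted, answering each question honestly.
CHECKLIST = [
    "Source noted?",
    "Facts spot-checked (names, dates, numbers)?",
    "Sensitive data removed or masked?",
    "Bias scan done (who's missing, who's misrepresented)?",
    "Human owner reviewed and named?",
    "Version/date noted?",
]

def run_hygiene_check() -> bool:
    """Ask every question; anything other than 'y' fails the check."""
    all_clear = True
    for question in CHECKLIST:
        if input(f"{question} (y/n) ").strip().lower() != "y":
            print(f"  Park it until you can answer yes to: {question}")
            all_clear = False
    return all_clear

if __name__ == "__main__":
    print("All clear!" if run_hygiene_check() else "Not ready to share yet.")
```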
PPS - Want to try this with AI? Here's a prompt to explore your own boundaries:
"I want to audit my relationship with AI tools. Help me identify: What data am I currently sharing that I might want to reconsider? Which AI features feel helpful versus invasive? How can I create better boundaries without losing the benefits? Be specific about steps I can take this week."
PPPS - This week's soundtrack has to be Lesley Gore's "You Don't Own Me" because sometimes you need to remind your algorithms exactly who's in charge around here...
"You don't own me, I'm not just one of your many toys..."
Perfect for anyone who's ever wanted to tell their smart apps to back off and respect their contradictions! 🎵