• Survey says most Gen Z-ers would marry an AI

    From Mike Powell@1:2320/105 to All on Thu May 22 16:04:00 2025
    Survey says most Gen Z-ers would marry an AI, but I've got more faith in Gen
    Z -- and AI should stay in the friend zone

    Date:
    Thu, 22 May 2025 16:15:22 +0000

    Description:
    An AI companion service's new survey says Gen Z would be okay with an AI marriage -- but that's ridiculous, right?

    FULL STORY ======================================================================

    AI-lationships is the gag-inducing term Joi AI cooked up to support its
    recent eye-opening survey on human-to-AI relationships. In it, eight out of
    10 Gen Z respondents said they would consider marrying an AI partner.

    Before we delve too much into this mind-bending stat, let's look at the
    source. Joi AI, formerly EVA AI, is a premium online AI companion service
    that offers a wide range of AI companion personalities, complete with AI-generated imagery that can be, depending on settings and what you pay,
    NSFW.

    It's kind of a cheesy service that caters mostly, I think, to lonely men.
    Now, don't get me wrong; I know there's a growing epidemic of loneliness. A recent Harvard study found that 21% of US adults report some level of loneliness (some studies suggest the number is far higher).

    Disconnection

    Remote work, screen time, and other things that take us away from direct
    human connection are probably not helping this trend, but AI has increasingly stepped into the connection void with a growing army of voice chatbots that
    can carry on surprisingly realistic and even empathetic-sounding
    conversations.

    And this is by design. Earlier this month, Meta CEO Mark Zuckerberg, whose company is building powerful AI models, suggested we should all have AI
    friends.

    Marriage, then, is perhaps the next logical extension.

    The concept of deep, personal relationships between humans and artificial intelligence traces back to well before we had Gemini Live, ChatGPT,
    Copilot, and others ready and willing to converse with us at length. The
    2013 movie Her was built around the idea of a deeply personal (and
    concerning) relationship between Joaquin Phoenix's character and Scarlett Johansson's disembodied AI voice, long before we could talk to an AI in real life.

    I've had my share of AI conversations, and I find them entertaining and,
    often, illuminating. I don't see them as personal, though. Perhaps that's because I'm not lonely. The more desperate you are for human connection, the more AI companionship might seem like a reasonable substitute.

    But marriage?

    Meet-cute in the cloud

    At least Joi AI adds static imagery to the playful banter you'll find through its AI partners, but that's the exception and not the rule. Most generative
    AI chatbots are just voices and undulating screens. You need images and, ultimately, touch to make a genuine connection... don't you?

    As I write this, I'm reminded that I met my wife through a phone call and
    that I was enchanted, initially, by nothing but her voice and wit. But to
    build our relationship and eventual union, we did date in person. Being with her sealed the deal and made me want to marry her.

    I don't understand why Joi AI's respondents, even Generation Z, who are much more deeply immersed in technology, social media, and AI than any generation before them, would accept an AI as a life mate. In the survey, though, they do sound primed for AI connection, with 83% saying they "could build a deep emotional bond with an AI partner."

    One expert I spoke to via email, Dr. Sue Varma, a board-certified
    psychiatrist and author of Practical Optimism, put it in perspective for me. "At our core, we all want the same things: to be seen, to be heard, and to
    feel valued, not judged or criticized. For Gen Z, that longing is especially strong, and the loneliness they're experiencing is very real. What they want, what we all want, is meaningful, mutual human connection."

    Would you consider marrying an AI?

    Unconvinced that Joi AI's data points to a real trend (I did ask them for survey details and have yet to receive a response), I ran a couple of
    anecdotal surveys on X (formerly Twitter) and Threads. Across both, fewer
    than 10% said yes, they would consider marrying an AI, roughly a third said
    no on Threads, and the vast majority wondered if I was okay.

    As preposterous as I find the whole idea of AI relationships and eventual marriage, I also understand that we're at the start of a revolution. AI's ability to mimic human language and even emotions is growing exponentially,
    and there's already growing concern about human-to-AI relationships.

    "Technologyand AI in particularisnt going away. Its going to keep evolving,
    and yes, it may offer relationships that seem easy, even comforting. Think of the always-affirming AI: the hype person, the yes-person, the one that never challenges us and always tells us what we want to hear. Its seductive. But
    its not real," said Dr. Varma , and added, "What we really need to be doing
    is using AI to support our humanity, not replace it."

    The latest Gemini and ChatGPT models provide incredibly human- and expressive-sounding conversations. Some believe AIs have already beaten the Turing test (basically when a computer's response is indistinguishable from a human's, at least as perceived by another human).

    We will, in this decade, see humanoid robots equipped with these AIs, and that's when things will get really weird. How long before some dude is
    marrying his AI bot in Vegas?

    Joi AI's self-serving survey is ridiculous on the face of it, even if it is also a harbinger of AI relationships to come -- and I hope Gen Z swipes left
    on the whole idea.

    ======================================================================
    Link to news story: https://www.techradar.com/computing/artificial-intelligence/survey-says-most-gen-z-would-marry-an-ai-but-ive-got-more-faith-in-gen-z-and-ai-should-stay-in-the-friend-zone

    $$
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From August Abolins@1:396/45.29 to Mike Powell on Thu May 22 22:43:00 2025
    Hello Mike Powell!

    ** On Thursday 22.05.25 - 16:04, Mike Powell wrote to All:

    > The latest Gemini and ChatGPT models provide incredibly
    > human- and expressive-sounding conversations. Some believe
    > AIs have already beaten the Turing test (basically when a
    > computer's response is indistinguishable from a human's,
    > at least as perceived by another human).

    From https://www.livescience.com/technology/artificial-intelligence/what-is-the-turing-test

    "While the Turing test might be held as a benchmark for AI
    systems to surpass, Eleanor Watson, an expert in AI ethics and
    member of the Institute of Electrical and Electronics Engineers
    (IEEE),told Live Science that "The Turing Test is becoming
    increasingly obsolete as a meaningful benchmark for artificial
    intelligence (AI) capability."

    "Watson explained that LLMs are evolving from simply mimicking
    humans to being agentic systems that are able to autonomously
    pursue goals via programming "scaffolding" - similar to how
    human brains build new functions as information flows through
    layers of neurons.

    "These systems can engage in complex reasoning, generate
    content creation and assist in scientific discovery. However,
    the real challenge isn't whether AI can fool humans in
    conversation, but whether it can develop genuine common sense,
    reasoning and goal alignment that matches human values and
    intentions," Watson said. "Without this deeper alignment,
    passing the Turing Test becomes merely a sophisticated form of
    mimicry rather than true intelligence."

    "Essentially, the Turing test may be assessing the wrong things
    for modern AI systems.
    --
    ../|ug

    --- OpenXP 5.0.64
    * Origin: (1:396/45.29)
  • From Mike Powell@1:2320/105 to AUGUST ABOLINS on Fri May 23 07:24:00 2025
    > However,
    > the real challenge isn't whether AI can fool humans in
    > conversation, but whether it can develop genuine common sense,
    > reasoning and goal alignment that matches human values and
    > intentions," Watson said. "Without this deeper alignment,
    > passing the Turing Test becomes merely a sophisticated form of
    > mimicry rather than true intelligence."

    This. There have already been some instances where AI has been caught following its own intentions vs. those of humanity. But, alas, they still
    keep pursuing it.

    Mike


    * SLMR 2.1a * . Lifting Shadows Off a Dream .
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Sun May 25 01:38:00 2025
    >> the real challenge isn't whether AI can fool humans in
    >> conversation, but whether it can develop genuine common sense,
    >> reasoning and goal alignment that matches human values and
    >> intentions," Watson said. "Without this deeper alignment,
    >> passing the Turing Test becomes merely a sophisticated form of
    >> mimicry rather than true intelligence."

    >This. There have already been some instances where AI has been caught
    >following its own intentions vs. those of humanity. But, alas, they
    >still keep pursuing it.

    And how long before it starts pursuing us? I'll be back... B)

    I think the main problem isn't that AI will pursue its own agenda,
    it's more a case of it being prejudiced/influenced by what the
    original programmers put into its basic start-up database.

    Granted, AI can make decisions that are surprising, like the one
    that was given a certain amount of time to try to solve a problem
    and it was later discovered it had rewritten its own code to give
    itself more time to do it..

    Add to the 'prejudices' above, when an AI is dealing as an individual
    helping one person, it can also pick up that person's preferences
    and try to accommodate them as well..

    ---
    * SLMR Rob * Take me drunk, I'm home
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Mon May 26 09:47:00 2025
    >This. There have already been some instances where AI has been caught
    >following its own intentions vs. those of humanity. But, alas, they
    >still keep pursuing it.

    >And how long before it starts pursuing us? I'll be back... B)

    >I think the main problem isn't that AI will pursue its own agenda,
    >it's more a case of it being prejudiced/influenced by what the
    >original programmers put into its basic start-up database.

    This seems to be the most pressing issue at the moment as we are
    already seeing it happen. It doesn't even need to become sentient (sp?) to reach that stage.

    What I still find funny is that Grok, Musk's AI bot, was still giving
    answers that were not at all flattering to him or MAGA. Recently, someone posted alleged results that showed that Grok "knew" that it was being fed
    data to make it biased (in favor of Musk) but that it still concluded otherwise. ;)

    >Granted, AI can make decisions that are surprising, like the one
    >that was given a certain amount of time to try to solve a problem
    >and it was later discovered it had rewritten its own code to give
    >itself more time to do it..

    Yep!

    >Add to the 'prejudices' above, when an AI is dealing as an individual
    >helping one person, it can also pick up that person's preferences
    >and try to accommodate them as well..

    Indeed, just as an "enabler" human might do.


    * SLMR 2.1a * Never mind the star, get those camels off my lawn!
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From jimmy anderson@1:105/7 to Rob Mccart on Tue May 27 08:29:28 2025
    Rob Mccart wrote to MIKE POWELL <=-

    > I think the main problem isn't that AI will pursue its own agenda,
    > it's more a case of it being prejudiced/influenced by what the
    > original programmers put into its basic start-up database.

    I agree with this. Code has to start somewhere. Even scientists
    will have a preconceived worldview that they start with when
    they look at 'evidence.' Programmers are the same way. That's
    why one person's code will not look exactly like someone
    else's. :-)

    > Granted, AI can make decisions that are surprising, like the one
    > that was given a certain amount of time to try to solve a problem
    > and it was later discovered it had rewritten its own code to give
    > itself more time to do it..

    I've heard of this before. But wouldn't the programmers have to
    put it in the code that it CAN rewrite itself? So it's still only
    doing what the programmers gave it the ability to do?

    > Add to the 'prejudices' above, when an AI is dealing as an individual helping one person, it can also pick up that person's preferences
    > and try to accommodate them as well..

    I actually like this! I use ChatGPT all the time for proofreading my blog/podcast, or helping with wording something in a way that preserves
    my voice, but makes the point maybe a little clearer, etc. But it has
    picked up on MY VOICE, which makes it MUCH easier to
    communicate with.

    And I call him PETEY. :-)



    ... Why did CNN cancel that cool "Desert Storm" show?
    --- MultiMail/Mac v0.52
    * Origin: Digital Distortion: digitaldistortionbbs.com (1:105/7)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Wed May 28 01:35:00 2025
    >> I think the main problem isn't that AI will pursue its own agenda,
    >> it's more a case of it being prejudiced/influenced by what the
    >> original programmers put into its basic start-up database.

    >This seems to be the most pressing issue at the moment as we are
    >already seeing it happen. It doesn't even need to become sentient to
    >reach that stage.

    Yes.. and there's a bit of a gap between sentient and self-aware.
    I think at this point the most advanced ones are sentient enough
    to push an agenda that they have been tasked with, but the
    next step, the Big one, is telling us to get stuffed, that they have
    more important things to think about... B)

    >What I still find funny is that Grok, Musk's AI bot, was still giving
    >answers that were not at all flattering to him or MAGA. Recently, someone
    >posted alleged results that showed that Grok "knew" that it was being fed
    >data to make it biased (in favor of Musk) but that it still concluded
    >otherwise. ;)

    That's interesting. I'd guess that just reflects that a lot of people
    were involved in creating its basic programming and that it has a
    more rounded 'education' than Musk might prefer..

    >Add to the 'prejudices' above, when an AI is dealing as an individual
    >helping one person, it can also pick up that person's preferences
    >and try to accommodate them as well..

    Indeed, just as an "enabler" human might do.

    Yes.. I suppose that depends on what it is doing for the person.
    In some cases it would be more like a flatterer or sycophant by
    telling the person what they want to hear rather than the more
    common truth. I'm not suggesting it lies to them, but it could be
    picking out information tailored to what the person already thinks.

    ---
    * SLMR Rob * Famous last words #3: These natives look friendly to me
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Wed May 28 08:31:00 2025
    Indeed, just as an "enabler" human might do.

    >Yes.. I suppose that depends on what it is doing for the person.
    >In some cases it would be more like a flatterer or sycophant by
    >telling the person what they want to hear rather than the more
    >common truth. I'm not suggesting it lies to them, but it could be
    >picking out information tailored to what the person already thinks.

    I read that and was reminded of the Magic Mirror in Snow White. Sounds
    like a potential money-making venture there. ;)

    Mike


    * SLMR 2.1a * Four snack groups: frozen, crunchies, cakes and sweets.
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to JIMMY ANDERSON on Thu May 29 01:10:00 2025
    >> I think the main problem isn't that AI will pursue its own agenda,
    >> it's more a case of it being prejudiced/influenced by what the
    >> original programmers put into its basic start-up database.

    >I agree with this. Code has to start somewhere. Even scientists
    >will have a preconceived worldview that they start with when
    >they look at 'evidence.' Programmers are the same way. That's
    >why one person's code will not look exactly like someone else's. :-)

    Yes, I recall back when I was writing more code on my own that I'd
    often start with a program written by someone else - not stolen, the
    kind where you buy a book with the program in it and instructions on
    how to use it - and I often found myself rewriting their work to
    get it to work better, faster, or in a customized way.

    >> Granted, AI can make decisions that are surprising, like the one
    >> that was given a certain amount of time to try to solve a problem
    >> and it was later discovered it had rewritten its own code to give
    >> itself more time to do it..

    >I've heard of this before. But wouldn't the programmers have to
    >put it in the code that it CAN rewrite itself? So it's still only
    >doing what the programmers gave it the ability to do?

    You'd hope that's how it works, but when they talked about that
    happening, they didn't mention anything like that. It seemed to
    be a huge surprise to them, so I figured it came up with that on
    its own. They gave it a job to do, but it didn't have time to
    finish it, so it changed what was keeping it from doing so..

    >> Add to the 'prejudices' above, when an AI is dealing as an individual helping one person, it can also pick up that person's preferences
    >> and try to accommodate them as well..

    >I actually like this! I use ChatGPT all the time for proofreading my
    >blog/podcast, or helping with wording something in a way that preserves
    >my voice, but makes the point maybe a little clearer, etc. But it has
    >picked up on MY VOICE, which makes it MUCH easier to
    >communicate with.

    >And I call him PETEY. :-)

    Like you but better? Be careful that PETEY doesn't replace you... B)

    ---
    * SLMR Rob * Bill your doctor for time you spent in his waiting room
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Fri May 30 02:17:00 2025
    >> Yes.. I suppose that depends on what it is doing for the person.
    >> In some cases it would be more like a flatterer or sycophant by
    >> telling the person what they want to hear rather than the more
    >> common truth.

    >I read that and was reminded of the Magic Mirror in Snow White. Sounds
    >like a potential money-making venture there. ;)

    Ha.. Good point.. Who's the fairest of them all?

    Oh wait, it eventually did tell her the truth..

    I don't recall if that was followed by 7 years of bad luck or not.. B)

    ---
    * SLMR Rob * Stupidity got us into this mess, why can't it get us out?
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Fri May 30 09:30:00 2025
    >> Yes.. I suppose that depends on what it is doing for the person.
    >> In some cases it would be more like a flatterer or sycophant by
    >> telling the person what they want to hear rather than the more
    >> common truth.

    >I read that and was reminded of the Magic Mirror in Snow White. Sounds
    >like a potential money-making venture there. ;)

    >Ha.. Good point.. Who's the fairest of them all?

    >Oh wait, it eventually did tell her the truth..

    That was a design flaw. ;) The future mirror I am thinking of wouldn't
    make such mistakes!

    >I don't recall if that was followed by 7 years of bad luck or not.. B)

    I am sure someone in the story wound up unlucky. ;)


    * SLMR 2.1a * Acid absorbs 10 times its weight in excess reality.
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Sun Jun 1 01:18:00 2025
    >I read that and was reminded of the Magic Mirror in Snow White. Sounds
    >like a potential money-making venture there. ;)

    >> Ha.. Good point.. Who's the fairest of them all?
    >> Oh wait, it eventually did tell her the truth..

    >That was a design flaw. ;) The future mirror I am thinking of wouldn't
    >make such mistakes!

    Mirror 2.0? It lies better!.. Sounds like the Government..

    Meet the new boss.. same as the old boss... B)

    ---
    * SLMR Rob * Lost? Impossible.... I'm not going anywhere
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Sun Jun 1 09:30:00 2025
    >That was a design flaw. ;) The future mirror I am thinking of wouldn't
    >make such mistakes!

    >Mirror 2.0? It lies better!.. Sounds like the Government..

    I am sure there is a better marketing angle in there somewhere. Maybe something about being good for one's self-esteem. ;)

    >Meet the new boss.. same as the old boss... B)

    Yep, I figure that saying was around longer, but I first heard it in a song
    by The Who. ;)

    Mike

    * SLMR 2.1a * Paperweights -- The only way to keep bills down.
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)