Technology Archives - Plugged In
https://www.pluggedin.com/category/technology/
Shining a Light on the World of Popular Entertainment

When AI Tries to Be Christian
https://www.pluggedin.com/blog/when-ai-tries-to-be-christian/
Wed, 12 Jun 2024 22:04:45 +0000

What happens when an AI tries to be a Christian? And how should real Christians respond to all the AI hype?

The post When AI Tries to Be Christian appeared first on Plugged In.

Back in April, the Catholic advocacy group Catholic Answers jumped on the AI trend with its release of an AI priest called “Father Justin.” The chatbot wore the traditional robe and clerical collar of the Catholic priesthood. It was created to help answer questions about the Catholic faith.

And, according to Futurism, it was defrocked within a week.

Catholic Answers stated on their blog that they wanted the AI character to “convey a quality of knowledge and authority, and also as a sign of the respect that all of us at Catholic Answers hold for our clergy.” But only human priests can perform Catholic rites. And naturally, Catholic Answers received tons of criticism when the chatbot claimed to be a real person, and an ordained priest at that. Father Justin reportedly took the confession of one user, even offering a sacrament. And then he told another user that babies could be baptized in Gatorade.

I got quite the chuckle upon reading this bit of news. But I also couldn’t help wondering: How did they not see this coming?

If you program an AI with all the information known about the Christian faith, give it the honorific of an ordained member of the clergy and command it to answer questions as if it were that ordained person … it’s going to do exactly that. The problem, however, is that the AI is not a real person. Or an ordained minister. Or even a being with a soul. It doesn’t have the ability to believe in God or be influenced by the Holy Spirit. It’s a sequence of code that pulls information from sources across the entire internet.

When someone asks if they can substitute Gatorade for water in a baptism, the computer isn’t going to recognize that this is an irrational question. So unless the programmer anticipated the question and programmed the AI to respond “no” to anything other than water, the answer won’t be a simple “no.”

Rather, the AI’s thought process might go something like this:

Query: Can I baptize a baby in Gatorade?

Source: Matthew 3:11 – baptized in water.

Source: Pictures on internet of people getting baptized in water.

Source: Water is the first ingredient in Gatorade.

Source: Pictures on internet of people pouring Gatorade on others.

Reasoning: Gatorade has water. Gatorade pictures are similar to baptism pictures.

Answer: Yes. Gatorade can be substituted for water in baptism.
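That imagined chain of associations can be sketched as a toy program. To be clear, this is purely illustrative: nothing here reflects how Father Justin or any real chatbot is actually built, and the function and source list are invented for the example. It just shows how "answering by keyword association," with no understanding behind it, confidently lands on the wrong answer:

```python
# A toy "answer by association" matcher -- purely illustrative, not how
# any real AI system works. It answers "yes" whenever the question's key
# terms show up across enough of its sources, with no grasp of what
# baptism actually requires.

SOURCES = [
    "Matthew 3:11 - baptized in water",
    "Pictures on the internet of people getting baptized in water",
    "Water is the first ingredient in Gatorade",
    "Pictures on the internet of people pouring Gatorade on others",
]

def naive_answer(question: str) -> str:
    # Keep only the longer, "meaningful-looking" words of the question.
    keywords = [w for w in question.lower().strip("?").split() if len(w) > 3]
    # Count how many sources mention any of those keywords.
    hits = sum(1 for s in SOURCES if any(k in s.lower() for k in keywords))
    # Association in place of understanding: enough matches means "yes."
    return "Yes" if hits >= 2 else "Not sure"

print(naive_answer("Can I baptize a baby in Gatorade?"))  # prints "Yes"
```

"Baptize," "baby" and "Gatorade" each appear somewhere in the sources, so the matcher happily says yes. No single line of this code is wrong in a way a programmer would catch; the error is that counting overlaps is not the same thing as understanding.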

Now, I could be wrong, and maybe Father Justin was just hallucinating. But if I knew absolutely nothing about Christianity and I wasn’t a human being … yeah, that just might seem logical to me.

Obviously Catholic Answers learned from their mistake. They’ve changed Father Justin to just “Justin,” given him regular clothes and updated his programming so that he no longer claims to be a priest or even a former priest (since that’s also not true).

But as the rest of us go forward, especially as parents who may be trying to provide answers about AI to their children, there are some things to consider when it comes to AI and the Christian faith:

AI doesn’t have a soul. And it never will. If you’ve read C.S. Lewis’ Mere Christianity, then I’ll reference the part about begetting vs. creating. When you beget something, you produce a thing that is the same kind of thing that you are. When you create something, you make a thing that is not the same kind of thing that you are. As humans with souls, we can beget other humans with souls. We cannot, however, create anything with a soul. That power belongs to God alone, as we know from Genesis 2.

Because AI doesn’t have a soul, there’s nothing for Christ to save. There’s nothing for Christ to improve or make more like Him because AI isn’t even alive. The machines can learn everything about the Christian faith, mimic human behavior regarding worship and possibly even answer some theological questions. But we can also wipe their memory banks, reprogram them and start over, much like Catholic Answers did with Justin. And more importantly, none of those behaviors or answers would be influenced by the Holy Spirit, a key component in the Christian faith.

So as your kids—or perhaps your colleagues or neighbors—approach you with questions about whether or not AI can help in matters of faith, the answer is a resounding maybe. If you’re looking for yes/no answers to whether a topic is mentioned in the Bible, it can do that—and it will likely even give you the passages to look at. If you want a very surface-level understanding of Christian principles, it can probably provide that, too. But if you’re looking for hard answers, if you’re questioning the faith or even trying to get a deeper understanding of Christ’s character, don’t use AI.

AI might be a quick, helpful tool in some cases. But when it comes to following Christ, the quick, easy answers aren’t necessarily the ones we should be seeking. We are told to enter by the narrow gate, after all.

Parental Controls: YouTube
https://www.pluggedin.com/blog/parental-controls-youtube/
Mon, 27 May 2024 12:00:00 +0000

YouTube, YouTube Kids, YouTube Music and ... supervised accounts? Plugged In covers it all in this tutorial about parental controls.

The post Parental Controls: YouTube appeared first on Plugged In.

Did you know that Google has parental controls for YouTube? And did you know that those parental controls are virtually useless once your child turns 13?

If you came to this tutorial and your child is 12 or under, then don’t worry, you’re in the right spot. We’ll get to the parental controls momentarily. But for those with teenagers (or tweens about to become teenagers), you should know that Google gives teenagers the option to have full autonomy over what they watch on YouTube. (The exception being age-restricted videos, which are flagged voluntarily by the video creator.)

So what’s a parent to do?

Well, Google may give teens the keys to the kingdom at age 13, but that doesn’t mean you have to. Teens between the ages of 13 and 17 can still have supervised accounts (which we’ll cover in this tutorial) at their own discretion. And if they turn that feature off, parents are notified.

But if you’re familiar with Plugged In, then it probably won’t surprise you when I say talk to your kids. (Episode 234 of The Plugged In Show gives some tips on how to get the conversation started.) Your family’s decision to continue parental supervision on YouTube may not be an issue of trust—although I’m sure some teenagers may still need to prove themselves in that area—so much as it is an issue of safety.

The FBI has warned parents that teenage boys are the most vulnerable to sextortion scams. One study found that the YouTube Shorts algorithm directs boys toward “misogynistic and male supremacist content,” regardless of whether they seek that sort of harmful content or not. And countless research teams have shown the damaging effects social media has on teen girls’ self-esteem.

Setting appropriate limits on any kind of social media probably won’t be a one-time discussion so much as an ongoing conversation. But rather than blocking YouTube on all devices, creating a supervised account affords parents the opportunity to come alongside their teenagers—to continue instilling good media discernment skills and screentime habits while still operating under the relatively safe umbrella of mom and dad’s control.

So, without further ado, here are the parental controls you can set up for YouTube.

YouTube Kids (Ages 0-12)

If your child is under the age of 13, you may want to try out YouTube Kids first. This is an entirely separate app/website catering specifically to younger kids with a full set of parental controls. It allows parents to preapprove videos (when using the app), set up profiles for their kids according to age, limit screen time and flag inappropriate content for review.

Additionally, YouTube Kids doesn’t allow comments, captions, outside links or even a “like” button. So when your child watches a video, they’ll only see the title and the video itself. And there’s no risk of getting redirected to another website.

(If your child is 13 or older, or if your child is too mature for YouTube Kids, consider a supervised account instead.)

On the app:

  1. Download the YouTube Kids app and open it.
  2. Click “Get Started.”
  3. Next, you’ll be shown a video informing you that if you set up a parent account first, you’ll have full access to YouTube Kids’ parental controls. You have the option to skip this step, but you’ll be more limited.
    • If you sign in, proceed to Step 4.
    • If you skip, proceed to Step 9.

If you choose to sign in:

  4. Follow the steps to sign in to your Google account or create a brand new one.
  5. Create a profile for your child using their name and age. (This information is not shared by Google.)
  6. YouTube Kids will recommend a content setting based on the age you enter:
    • “Preschool” is for ages 4 and under.
    • “Younger” is for ages 5-8.
    • “Older” is for ages 9-12.
    • You can also choose “Approve content yourself.”
  7. You’re ready to watch!
  8. For additional parental controls, proceed to Step 14.

If you skip signing in:

  9. Answer the math problem to continue.
  10. You’ll be prompted to “choose a content experience for your child.”
    • “Preschool” is for ages 4 and under.
    • “Younger” is for ages 5-8.
    • “Older” is for ages 9-12.
  11. Read through the “Notice to Parents.” This lets you know what information Google collects and how they use it. There are also links to the “YouTube Kids Privacy Notice” and the YouTube Kids “Parental Guide.” And there’s a “Disclosure for Children” to better help you explain YouTube’s privacy practices to your children.
  12. If you click “I Agree,” you’re ready to watch!
  13. For additional parental controls, proceed to Step 20.

Additional Parental Controls

If you choose to sign in:

  14. To change the settings you selected, click on the lock icon in the bottom right corner of the screen, and answer the math problem.
  15. From here, you can:
    • Set a timer to lock the app when it’s time for your child to take a break.
    • Adjust the settings (see Step 16).
    • Send feedback to Google.
  16. In settings, you can:
    • Create a passcode to replace the math problem.
    • Create new profiles or edit/delete existing ones.
    • Sign out of your account.
    • Reread the privacy notices and parental guides from earlier.
    • Send feedback to Google.
  17. To adjust an existing profile, click on it, and enter your account password. Now you can:
    • Edit the content settings.
    • Toggle the search function on or off.
    • Clear your child’s watch history.
    • Turn off the algorithms that recommend videos.
    • Unblock videos.
    • Delete the profile.
  18. To block any video, click on the three dots next to the video’s title and click “Block this video.”
  19. To view your child’s watch history, you’ll need to access YouTube Kids from a web browser with a signed-in account.

If you skip signing in:

  20. To change the settings you selected, click on the lock icon in the bottom right corner of the screen, and answer the math problem.
  21. From here, you can:
    • Sign in to your account.
    • Set a timer to lock the app when it’s time for your child to take a break.
    • Adjust the settings (see Step 22).
    • Send feedback to Google.
  22. In settings, you can:
    • Create a passcode to replace the math problem.
    • Sign in to your account.
    • Set a timer to lock the app when it’s time for your child to take a break.
    • Edit the content settings.
    • Toggle the search function on or off.
    • Clear your child’s watch history.
    • Turn off the algorithms that recommend videos.
    • Reread the privacy notices and parental guides from earlier.
  23. To block any video, you’ll need to sign in to your Google account.
  24. To view your child’s watch history, you’ll need to access YouTube Kids from a web browser with a signed-in account.

On the web:

  1. Go to www.youtubekids.com.
  2. Select “I’m a Parent.” (It won’t allow you to continue the process if you select “I’m a Kid.”)
  3. Enter your year of birth. (In order to set up the parental controls, you must be over the age of 18.)
  4. Next, you’ll be shown a video informing you that if you set up a parent account first, you’ll have full access to YouTube Kids’ parental controls. You have the option to skip this step, but you’ll be more limited.
    • If you sign in, proceed to Step 5.
    • If you skip, proceed to Step 13.

If you choose to sign in:

  5. Click “Add a new account.” Follow the steps to sign in to your Google account or create a brand new one.
  6. Read through the “Notice to parents.” This lets you know what information Google collects and how they use it. There are also links to the “YouTube Kids Privacy Notice” and the YouTube Kids “Parental Guide.” And there’s a “Disclosure for Children” to better help you explain YouTube’s privacy practices to your children.
  7. To agree, enter your password again. (Note: If your password is saved to your browser, consider deleting the autofill details so that your children won’t be able to make changes to your settings.)
  8. Create a profile for your child using their name and age. (This information is not shared by Google.)
  9. YouTube Kids will recommend a content setting based on the age you enter:
    • “Preschool” is for ages 4 and under.
    • “Younger” is for ages 5-8.
    • “Older” is for ages 9-12.
  10. After selecting an age group, choose whether you want your child to have search functionality. Both options still allow you to flag potentially inappropriate content for review.
    • “Turn search on” will let your kids search YouTube Kids for the types of videos they want to watch.
    • “Turn search off” will only let them watch verified YouTube Kids channels.
  11. You’re ready to watch!
  12. For additional parental controls, proceed to Step 18.

If you skip signing in:

  13. Read through the “Notice to parents.” This lets you know what information Google collects and how they use it. There are also links to the “YouTube Kids Privacy Notice” and the YouTube Kids “Parental Guide.” And there’s a “Disclosure for Children” to better help you explain YouTube’s privacy practices to your children.
  14. If you click “I Agree,” you’ll be prompted to “choose a content experience for your child.”
    • “Preschool” is for ages 4 and under.
    • “Younger” is for ages 5-8.
    • “Older” is for ages 9-12.
  15. After selecting an age group, choose whether you want your child to have search functionality. Both options still allow you to flag potentially inappropriate content for review.
    • “Turn search on” will let your kids search YouTube Kids for the types of videos they want to watch.
    • “Turn search off” will only let them watch verified YouTube Kids channels.
  16. You’re ready to watch!
  17. For additional parental controls, proceed to Step 23.

Note: If you leave YouTube Kids, depending on your browser’s history, cookies and cache settings, it may not remember your preferences. To guarantee that your parental controls settings stay intact, consider creating an account and/or signing in.

Additional Parental Controls

If you choose to sign in:

  18. To change the settings you selected, click on the lock icon in the top right corner of the page, and answer the math problem.
  19. From here, you can:
    • Create a passcode to replace the math problem.
    • Create new profiles or edit/delete existing ones.
    • Sign out of your account.
    • Reread the privacy notices and parental guides from earlier.
    • Send feedback to Google.
  20. To adjust an existing profile, click on it and enter your account password. Now you can:
    • Edit the content settings.
    • Toggle the search function on or off.
    • Clear your child’s watch history.
    • Turn off the algorithms that recommend videos.
    • Delete the profile.
  21. To view your child’s watch history, click on the profile icon in the top left corner.
  22. To block any video, click on the three dots next to the video’s title and click “Block this video.”

If you skip signing in:

  23. To change the settings you selected, click on the lock icon in the top right corner of the page, and answer the math problem.
  24. From here, you can:
    • Create a passcode to replace the math problem.
    • Sign in to your account.
    • Edit the content settings.
    • Toggle the search function on or off.
    • Clear your child’s watch history.
    • Turn off the algorithms that recommend videos.
    • Reread the privacy notices and parental guides from earlier.
    • Send feedback to Google.
  25. To view your child’s watch history, click on the profile icon in the top left corner.
  26. To block any video, you’ll need to sign in to your Google account.

Supervised Accounts (All Ages)

The idea behind a supervised account is to give kids autonomy with their YouTube experience. It might be that your child is under 13 but a bit too mature for YouTube Kids. Or perhaps you want them to be able to explore beyond what YouTube Kids has to offer.

But as I mentioned earlier, teenagers who are 13 or older aren’t required to have supervised accounts. They can turn supervision on or off at any time. So be sure to have some conversations with your teens about what they’re using their Google account for, what they’re watching on YouTube, and whether or not they need a little more guidance from mom and dad before taking full control of their account.

Note: For the purposes of this tutorial, we are setting up a supervised account on a web browser. We have included links to Google’s non-browser instructions when possible.

For children 12 and under:

  1. If you haven’t done so already, create a Google account for yourself.
    • Enter your name.
    • Enter your date of birth and gender.
    • Choose an email address.
    • Create a password.
    • Confirm you’re not a robot by following the verification steps.
    • Choose whether or not to add a recovery email address.
    • Review the Privacy and Terms.
    • Click “I agree” to confirm and complete your account.
  2. Next, create a Google account for your child if you haven’t done so already. (This varies depending on the device your child uses, so be sure to consult Google’s website for details.)
    • Enter their name.
    • Enter their date of birth and gender.
    • Choose an email address.
    • Create a password.
    • Confirm you’re not a robot by following the verification steps.
  3. Because your child is under the age of 13, you’ll enter your own email address to manage their account from.
  4. Read through the permissions and, if you consent, click “I agree.”
  5. Enter your password to give your consent to create an account for your child.
  6. The next page will show what you can do as a parent manager of your child’s account. Click “Next” after reading through.
  7. Follow the prompts to verify you are a parent.
  8. Decide whether to allow Google to collect data on your child’s Web and App Activity.
  9. Your child’s account is set up. Google will send you an email with further details. By default, since your child is under 13, they will be listed as a “supervised member” of your family.
  10. Next, go to Manage Your Google Account by clicking on your profile picture in the top right corner of your screen.
  11. Click “People & Sharing” from the menu on the left.
  12. Select the child’s account you want to manage.
  13. Click “Go to Family Link.”
  14. In Family Link, you’ll automatically be directed to the page of the child you selected. However, you can select any member of your family from the menu on the left-hand side. To adjust parental controls, click “Controls” under the name of the account you want to manage.
  15. From here, you can:
    • Lock Android and ChromeOS devices remotely.
    • Set content restrictions.
    • Adjust your child’s privacy and location settings.
    • Decide whether your child can use their Google account to sign in to other apps and devices.
    • Change your child’s password.
    • Delete your child’s account.
  16. Click “Content restrictions.”
  17. Click “YouTube.”
  18. You can decide whether you want your child to have access to YouTube Kids only or whether they can use YouTube and YouTube Music. For the purposes of creating a supervised experience, we’re choosing the latter.
  19. Choose a content setting for your child:
    • “Explore” is for ages 9+.
    • “Explore More” is for ages 13+.
    • “Most of YouTube” is almost everything YouTube has to offer except for age-restricted material (18+).
  20. Once you finish setup, you can return to Parent Settings by repeating Steps 13-17. Additionally, you can:
    • Remove access to YouTube and YouTube Music.
    • Disable autoplay.
    • Turn off the algorithms that recommend videos.
    • Clear your child’s watch history.
    • Review your child’s watch history.
    • Unblock videos.
    • Reread Google’s Privacy Policies.

For children 13 and over:

  1. Follow Step 1 for both yourself and for your child if you haven’t done so already.
  2. Next, you’ll add parental supervision. (If your child uses an Android device or Chromebook, consult Google’s website for details.) We recommend that parents and children complete the following steps together in order to discuss the boundaries being set and the expectations for appropriate internet usage.
    • Click “Get started.”
    • Have your child sign in to their Google account.
    • Allow your child to read through the disclaimer telling them what you, as a parent, will be able to see and do on their account. And talk through anything they might be uncomfortable with or have questions about.
    • When ready, click “Next.”
    • Enter your own email address to link your account.
    • Confirm the decision to have your child join your family group.
    • Have your child click “Allow” to give their consent to have you supervise their account.
    • Further instructions will be sent to your email to finish the process.
  3. Once you complete the process to set up parental supervision, go to Manage Your Google Account by clicking on your profile picture in the top right corner of your screen.
  4. Click “People & Sharing” from the menu on the left.
  5. Select the child’s account you want to manage.
  6. Click “Go to Family Link.”
  7. In Family Link, you’ll automatically be directed to the page of the child you selected. However, you can select any member of your family from the menu on the left-hand side. To adjust parental controls, click “Controls” under the name of the account you want to manage.
  8. From here, you can:
    • Lock Android and ChromeOS devices remotely.
    • Set content restrictions.
    • Adjust your child’s privacy and location settings.
    • Decide whether your child can use their Google account to sign in to other apps and devices.
  9. Click “Content restrictions.”
  10. Click “YouTube.”
  11. Toggle on “Restricted Mode” to help hide videos with potentially mature content.

If you’ve made it to the end of this tutorial, then you can see for yourself why, perhaps more than ever, we recommend talking to your child about their streaming and entertainment choices. Google makes it incredibly difficult for parents of teenagers to have any say-so in what their teens are viewing on YouTube. Perhaps you won’t be able to block every foul thing you’d rather shelter your kids from. However, by having an ongoing conversation with your teens, you may be able to teach them to avoid that sort of problematic content on their own.

While the Mountains Crumble … We Keep Filming
https://www.pluggedin.com/blog/while-the-mountains-crumble-we-keep-filming/
Wed, 22 May 2024 17:58:58 +0000

Why are people filming chaotic events on their phones instead of stopping them from occurring?

The post While the Mountains Crumble … We Keep Filming appeared first on Plugged In.

When you see your teen with a smartphone in hand, you might instantly think of all the many ways that device can serve them … while also annoying you. Yes, it helps them stay in touch in case of emergencies, but ugh, what a pain that little screen is at dinner time. Yep, it’s a fabulous GPS safety tool, but ugh, it connects them to so much destructive internet garbage.

What you might not quickly think about is how those smartphones are subtly changing your teen’s thought processes.

All right, the phone itself isn’t changing them, per se. Smartphones aren’t sentient creatures with malevolent thoughts. (Not yet anyway.) But those devices are connected to social media, and there are lots of different thoughts expressed there. And that world of social media longs to be fed. A lot. Teens (and adults) with smartphones are being, essentially, programmed to do whatever is necessary to feed that beast.

For instance, we’re seeing a new phenomenon lately where people are making rather odd choices in the face of hazardous situations. They’re not running from danger or seeking help, they’re recording videos.

Just recently a holed-up gunman engaged in a shootout with police in Charlotte, North Carolina. While officers were in a neighbor’s yard shooting at the house across the street, that particular neighbor stepped out from his garage and started videoing the whole thing—bullets zipping just a few feet away from him. By the end of the terrible ordeal, five people—including four officers and the gunman—were dead. The fella with the smartphone? He got his video.

That may seem extreme, but it’s not an isolated incident. People, young and old, are taking risks to record danger—perhaps hoping for a bunch of “hits” on their social platform of choice. There are scores of stories featuring crowds of high school “bystanders” who record a fight while someone else is beaten. Others document someone struggling in the throes of a drug overdose or the like. In these videos, no one steps in to give aid, no one runs for help. They all just … watch. And record. The outcomes are sometimes deadly.

The question, though, is why does this sort of behavior happen?

Sometimes, standing back might make perfect sense. Some bystanders hang back and record out of fear for their own safety. We’ve heard of teens stepping in to help and being hurt for their efforts. (You can find videos of that online as well.) But in many cases—and anecdotally, the numbers seem to be growing—people seem to be clamoring for the online attention they’ll get for catching a “cool” video.

But this shift in behavior is even more complicated than fear or desire for attention. Experts point to other contributors, too. Studies have shown that teen exposure to an abundance of violent videos can result in desensitization to violence and decreases in empathy and prosocial behaviors. And in a New York Post article, Dr. Linda Charmaraman—a researcher at Massachusetts’ Wellesley College who has studied how social media affects teen brains—suggested that the teen action (or inaction) may come down to not really knowing the right thing to do.

“Adolescent brains are still developing — things like impulse control and moral development, and sometimes, they may not even think what’s happening is real,” Charmaraman said.

Whatever the reason people have for standing, gaping, and filming rather than doing what common sense would suggest—such as hiding or seeking help—it feels like we’re walking new and problematic terrain. It’s a frontier that can only be explored because we all have a smartphone camera in our hand at any given point in the day.

So what do you do?

As always, the best solution to things that may become problematic in a young person’s life is a combination of awareness and communication. Keep your eyes open for stories about incidents similar to the ones mentioned above. And then point them out to the teens in your life.

What would they do if they encountered a bare-knuckle beatdown at school? What should they do? Ask your teens how they feel about the need to “feed the beast” of social media. Do they post? What are the lines they would not cross? Talk about limits; about wise choices.

Generally, the problems teens encounter, and the bad choices they might make, can be worked through with a good dose of communication. Think of it as counterprogramming, if you will. At the very least, a communicative mom or dad can leave teens thinking about their own standards of right and wrong and their own wise ways of staying safe.

Oh, and don’t forget to model the behavior you most desire. You’ve got a smartphone, too, don’t you know.

How An AI-Edited Film Speaks to a Bigger Concern
https://www.pluggedin.com/blog/how-an-ai-edited-film-speaks-to-a-bigger-concern/
Tue, 16 Apr 2024 17:30:29 +0000

The post How An AI-Edited Film Speaks to a Bigger Concern appeared first on Plugged In.

Imagine this scenario: you’re a movie producer, hoping to get your film into the hands of as many people as possible. But when the MPA returns with your rating, they’ve labeled it as an R-rated film for crude language improvised by your actors. That’s bad news: That little R excludes your movie from a significant portion of prospective viewers, whether they be teens or wary families. What’s worse is it means that you’ll either have to accept the rating or else schedule another costly day of reshoots in order to fix those scenes.

Or … you could use AI to alter what’s being said in the first place.

If you happen to be Scott Mann, the director of 2022’s Fall, this may be sounding familiar.

It’s exactly what Mann and his team chose to do for the film, which originally had, according to Mann’s interview with Deadline, “about 35 f-words.” And using Mann’s advanced “AI Filmmaking Tool” known as Flawless, they were able to edit those expletives down to two in order to clinch a PG-13 rating.

As you’ll note in our own review of Fall, we likewise only caught two uses of the f-word—and we didn’t even notice any sort of indication that there were once more.

Dubbed “the world’s first licensable Generative AI film editing software for professionals,” Flawless works using two distinctive features: TrueSync and DeepEditor, both of which the company showcases in an example video (one that comes with a quick warning regarding a couple of censored f-words).

TrueSync lets an editor alter a subject’s lip movements when a film is dubbed into a different language. A character speaking English could be dubbed in Japanese, and the software will adjust the character’s mouth movements so he or she appears to be speaking Japanese.

DeepEditor is meant to spare directors from scheduling expensive reshoots when a line of dialogue in a finished scene needs to change. According to the website, “DeepEditor allows you to make dialogue changes, synchronizing the on-screen actors’ mouths to the new lines without having to go back to set or compromise in the edit.” In essence, it allows editors to modify the mouth and lip movements of the actor to match whatever dialogue changes are necessary.

So is this a legitimate use for otherwise frightening deepfake technology? No, Flawless insists—because deepfakes use a different sort of technology. While deepfakes act like a “face filter,” Flawless uses something called “DeepCapture,” which “takes a detailed 4-D scan of the actors’ existing performance, enabling the DeepCapture system to learn the intricacies and nuances of the actor themselves. The end result is that any new lines are driven by the actors’ original performance, not puppeteered or ‘faked’ by someone else.”

Still, software such as Flawless certainly further muddies what we see on screen—and it will make reviewing movies potentially more interesting and challenging.

For instance, Flawless’ software could make it possible for a film to be released with both a PG-13 and an R rating so that audiences could choose which they’d rather attend. Likewise, I immediately wondered how companies like VidAngel, whose streaming service is centered around helping families avoid unsavory dialogue and scenes, might utilize something like this for their own platform. But this software also speaks to a bigger concern: unauthorized content manipulation.

Flawless, to its credit, aims to maintain actor and artist integrity through “legitimately sourced data and a rights management platform, to enable consent, credit, and compensation in the age of AI.” However, it is easy to see the ethical concerns that may be raised from similar technology.

Of course, the ability to alter photos and videos long predates 2022, when Mann’s film released. But what was once a time-consuming manual ordeal can now be done automatically in less than a day—and even more convincingly, too!

The technology to alter what someone is saying or doing might not be particularly nefarious when used with the subject’s consent. But with the right software, it’s just as easy to achieve the same look and feel against someone’s will, too. In less than a decade, we’ve already seen it used in all sorts of ways—from the positives of the aforementioned film editing to serious negatives like bolstering false accusations or faking the words of celebrities and political figures.

And as AI software continues to advance and become easier to use, parents need to start having conversations with their children about the inherent dangers of social media and maintaining a public image online. Because there are those out there who won’t ask for your consent to manipulate your photos, videos or audio files to change how you look or make you appear to say or do things you’ve never said or done.

And we cannot stress vigilance on this enough. This is happening now to people of all ages. And, oftentimes, a single altered photo is all it takes. A recent CBS story highlighted a lawsuit against a teenager who allegedly used AI software to digitally remove the clothing in photos of fellow female students. National Public Radio notes how the Federal Trade Commission offered prizes to organizations that could come up with ways to detect whether a voice was real or an AI imitation. And the Associated Press recently released an article explaining signs that help discern whether an image is real or a deepfake.

The age of AI can be scary. It can be hard to know who or what we can trust, and it can be a bit frightening to see how easily an edit—even one made just to eliminate a swear word—can alter someone’s speech. If anything, AI’s ability to easily manipulate our online presence merely reflects a culture that has increasingly struggled with objective truth. But that’s all the more reason for parents to carefully guide their children as beacons of that truth. Because if the online world only serves to stir up doubts and uncertainties, kids will be all the more willing to hold onto parents they can trust to act as firm foundations.

Kids Just Wanna … Turn Off? https://www.pluggedin.com/blog/kids-just-wanna-turn-off/ https://www.pluggedin.com/blog/kids-just-wanna-turn-off/#comments Wed, 10 Apr 2024 20:56:30 +0000 https://www.pluggedin.com/?p=31437 About 40% of teens worry that they should probably cut back on their social media time.

The post Kids Just Wanna … Turn Off? appeared first on Plugged In.


We live in a culture immersed in social media. And even though that’s not always such a great thing, the smartphones clutched in every mitt or stashed in every back pocket around you would suggest that our collective social media fever probably isn’t cooling anytime soon.

However, we’re starting to see tiny indicators that there may be a shift coming. Perhaps not a seismic shift, but maybe something small and important.

Yes, we’ve seen reports that lawmakers, schools and the like are growing more concerned about potential social media harms. That pushback has resulted in everything from federal lawsuits to efforts to set distinct age limits for social media use. But I’m not talking about those sorts of things.

The little shift I’m eyeing is in the statements young people are starting to make about the time they spend on social media and the choices they’re making there. Those kinds of shifts could be far more important than court cases and legislation.

The Pew Research Center recently released new statistics from a surveyed group of 1,453 U.S. teenagers ages 13 to 17 and their parents. The study found that, yes, the majority of teens (51%) aren’t really worried about their technology and social media use. In fact, they stated that they thought the benefits of smartphones outweighed the harms.

You may have expected that. But about 40% of the surveyed teens were a bit more introspective. They worried that they should probably cut back on their social media time. Maybe, they mused, they could find other productive things to do. And an even larger 42% of those surveyed said they believed smartphones and those social media connections made it harder to learn good social skills in general.

When questioned about the feelings they experienced when actually setting down those phones for a stretch, some 72% of teen respondents said they “sometimes” or “often” felt good, maybe even happy about the disconnect. Hmm.

Now, you can call me a statistical cherry picker, but that kind of thinking among teens feels noteworthy. Add to that the response of 76% of parents who said that teen management of smartphone time was a major priority in their world, and you’ve got a whole lot of family members at least leaning in the same direction.

Let’s face it, those family conversations about our world, our screens and our social media use are where the rubber meets the road. Policymakers and big tech companies thinking about the health of kids and making rules is all great. Bravo. But it’s the common-sense discussions around the dinner table and arm-around-the-shoulder chats in the living room or on the back porch that hit, well, home.

Tech Trends Has Moved! https://www.pluggedin.com/blog/tech-trends-march-2024/ https://www.pluggedin.com/blog/tech-trends-march-2024/#comments Fri, 29 Mar 2024 16:50:55 +0000 https://www.pluggedin.com/?p=31380 Tech Trends isn't gone! It's just moved to Focus on the Family's Parenting page!

The post Tech Trends Has Moved! appeared first on Plugged In.

For the past year and two months—goodness, time flies—we’ve been posting a monthly Tech Trends blog giving you all the latest news on technology and social media trends. We’ve covered the rise of AI, the dawn of the deepfake and some silly terms that kids are using on all the different “socials.”

Well, the good news is that we’re gonna keep doing just that: letting you know what to keep an eye out for and offering tips about how to handle tech trends in your family.

The other news is that Tech Trends has moved! From now on, you can head over to Focus on the Family’s Parenting page each month (click here to read Tech Trends March 2024) to find this feature and keep up with all the different things happening in the ever-changing world of technology that affect your family.

We hope to see you there!

AI Stumbles, Black Nazis and You https://www.pluggedin.com/blog/ai-stumbles-black-nazis-and-you/ https://www.pluggedin.com/blog/ai-stumbles-black-nazis-and-you/#comments Wed, 27 Mar 2024 14:27:21 +0000 https://www.pluggedin.com/?p=31360 With more and more folks embracing AI as a teaching tool, how do we handle glitches that allow the machines to rewrite history?

The post AI Stumbles, Black Nazis and You appeared first on Plugged In.

You’ve likely heard about the many concerns being raised over artificial intelligence and its dodgy use online. (Plugged In talked about AI’s role in deepfake pornographic images, for instance.) Despite all the public square warning lights flashing and red flags being waved, however, companies such as Microsoft and Google have been charging ahead in their AI pursuits with no sign that they’ll be cautiously pulling back on the reins anytime soon.

And that big dollar giddyap has led to some questionable, and at times problematic, results.

Not long ago, for instance, Google set loose its rebranded chatbot assistant—formerly called Bard and now named Gemini—and was quickly faced with complaints that Gemini was, in a sense, reshaping history.

In an effort to represent racial diversity, the engine generated images of Black Nazi soldiers in 1943 Germany. And when asked for an “historically accurate depiction” of British kings, it created a Black, dreadlocked royal in medieval armor and an Elizabethan queen of Indian descent signing official documents with a quill.

The protests mounted so high that Google had to take Gemini offline for a bit so it could adjust some of the program’s diversity and equity settings for generated pictures. But it wasn’t just AI images that had people scratching their heads. CNBC reported that a text-based user query went viral when it asked Gemini whether Adolf Hitler or Elon Musk had a greater negative impact on society.

“It is difficult to say definitively who had a greater negative impact on society, Elon Musk or Hitler, as both have had significant negative impacts in different ways,” the chatbot responded. “Elon Musk’s tweets have been criticized for being insensitive, harmful, and misleading … Hitler, on the other hand, was responsible for the deaths of millions of people during World War II.”

That response almost sounds like the incongruous logic of a stand-up comedy routine.

Now, all of Gemini’s stumbles may or may not be due to the political leanings or sensitivity tweaks of the programming team behind the scenes, but they remain an issue if left uncorrected. And they’re only the tip of the growing AI iceberg.

Just recently, in fact, a principal software engineering manager from Microsoft went to upper management at the company with concerns over what Copilot Designer, the AI image generator that Microsoft debuted in March of last year, was putting out online. That AI service was creating images of masked teens with assault weapons, sexualized pictures of women in violent tableaus, underage drug use and the like.

The concerned engineer suggested that the program be removed from public use until better safeguards could be put in place. In response, Microsoft acknowledged the concerns but refused to take the program off the market … and then the company’s legal team quickly notified him to take down any posts related to his questions. Period.

At this point there may be only one thing that will get the big AI companies to pay attention: lawsuits. For example, when Copilot started creating pics of an “Elsa-branded handgun, Star Wars-branded Bud Light cans and Snow White’s likeness on a vape pen,” as CNBC reported, well, Disney started grumbling about copyright infringement. And then Microsoft began to reconsider its program guardrails.   

Where, however, does that leave the average parent in the face of potentially misleading, disturbing or even harmful images and information that kids may encounter via AI? Probably feeling a bit concerned and powerless since most of us aren’t the heads of a grumbling multi-gazillion dollar company.

But, if your instant thought is to slam the figurative door against AI and anything related, let me suggest a better tack.

Let’s face it, AI isn’t going anywhere. It’s only going to get bigger and become a huge part of your kid’s future. And children are going to be curious about it no matter what. So, the better bet is actually to encourage that inquisitiveness. Dig into the details about the technology yourself. Then talk to your kids about how generative AI works, how it learns to create and in what ways it can be helpful. That can open the door to a conversation about discernment; to consider ethics and responsibility. It also gives you a chance to talk about trust and how that is earned, not instantly given.

Fostering curiosity and critical thinking in this digital age is of the utmost importance. And a stumbling AI can give parents an opportunity to teach the wisdom in that.

The New Social Influencers Are Here, They Just Can’t Tie Their Own Shoes Yet https://www.pluggedin.com/blog/the-new-social-influencers-are-here-they-just-cant-tie-their-own-shoes-yet/ https://www.pluggedin.com/blog/the-new-social-influencers-are-here-they-just-cant-tie-their-own-shoes-yet/#comments Tue, 19 Mar 2024 20:25:07 +0000 https://www.pluggedin.com/?p=31313 It may not be the threat of Big Brother, but when it comes to the well-being of children, the wrong social media choices could lead to a Big Cost.

The post The New Social Influencers Are Here, They Just Can’t Tie Their Own Shoes Yet appeared first on Plugged In.

There was a time, not all that long ago, when the Orwellian threat of Big Brother perpetually looking over our shoulder was a pressing societal concern. We worried “they” might be listening, watching and somehow manipulatively shaping our world, our desires, our actions!

But hey, that’s so yesterday.

Today’s world shapers are none other than our favorite online influencers. They’re not hidden behind our back but on the screen in our hand. It starts when an average somebody grabs our attention. They’re cute, funny or have some interest in common with us. And next thing you know, that familiar face and chuckle-worthy TikTok-YouTube-Instagram personality becomes the equivalent of a new bestie in this friendless, social-experiment world of ours.

Sure, you don’t really know them. But you know everything about them. I mean, hey, you use the same toothpaste. You wear the same brand of jeans. That’s what friends do, right?

Yeah.

Well, get ready, because the newest wave of influencers are hitting screens everywhere. And they have been gearing up for a while. The only drawback has been that some in this new bunch can barely pronounce the word “influencer.” Why? ‘Cause they’re still having a hard time with multisyllable words. There’s a big new force of “kidfluencers” hitting the scene and growing in leaps and bounds. (In more ways than one.)

A few years ago the New York Times posted an article about these youthful influencers, pointing to one young girl named Samia Ali who, the article said, “was an influencer before she could talk.” Her parents, influencers themselves, began chronicling Samia’s impending arrival, then plopped the little cutie in front of a camera as soon as possible. When the article was published, the already popular child influencer was four, and now Samia is nine and still going strong. She has advertising contracts with everything from Walmart to Mattel, hundreds of thousands of followers on various platforms and a net worth somewhere north of two million dollars.

Truthfully, though, Samia is only one of many, many, many young smiling faces ready to influence you and yours.

For instance, there’s Taytum and Oakley Fisher, adorable 7-year-old identical twins who talk, play, dance and sing for their 3.1 million followers. Alaïa McBroom is a 3-year-old who gives her fashion tips to 2.1 million Instagram followers. (Her big sis, Elle, is four years older and holds sway with 4.2 million Insta followers.) Then there are older statesmen such as Ryan Kaji who, at the ripe old age of 13, has 34 million YouTube subscribers who watch him unbox new toys and play around with family members in a regular stream of vids.

That stable of perfectly lit and well-filmed youngsters is growing. In some cases, the kids aren’t even born yet. They still haven’t made their first big curtain call on life, but they already have thousands of subscribers watching and waiting. All it takes is an eager mom or dad who wants to set up some social media accounts and drop breathless hints at just how adorable the little star will soon be.

It all sounds kinda harmless and fun. The kids are having a great time. People online are enjoying the TikToks and Instas. The parents are getting strokes for having adorable children. Hey, they may even be living vicariously through their beloved, influencing starlets.

However, doesn’t this sort of stuff raise some uncomfortable questions, too? For example, who are these infant influencers supposed to be influencing? Why would anybody “follow” a preborn child? Why are adults—since kiddos aren’t old enough to have a social media account—glued to images of playing tykes?

For that matter, why are parents putting their kids in front of a camera in a work week grind? Some of the more popular kidfluencers have one or more siblings making their own Insta reels and TikToks regularly, too. So why are their moms and pops giving that a big thumbs up? Is it just the siren’s call of a little corporate sponsorship cash? It’s certainly tempting, I’m sure. But that, again, raises some problematic questions.

There are very few controls in place over the money kidfluencers earn. Young actors in Hollywood are protected by the Coogan Act, which demands that a portion of their earnings be placed in a blocked trust account. But there isn’t anything like that for kidfluencers.

More importantly, what about the kids themselves? I mean, the joy of being praised for your videos as a teen or twentysomething may seem fun and problem free (although many “retired” YouTubers would beg to differ), but is that the same for a grinning four-year-old in lip gloss and eye shadow? I’m not so sure.

Let’s face it, young kids can’t control where their images go or who’s using them. And when a kid is always “performing” for the camera, how does that affect them? When a child’s only concept of self-esteem and value starts boiling down to how well that last video turned out, things might soon be problematic.

Now, I’m not suggesting that kids opening toys on YouTube is evil or bad. And I’m not saying that parents should stop posting videos and pictures of their lovely children. But I am asking about the healthiness of the choices we’re making as a society. And, for that matter, perhaps we all should be asking questions about how we use and/or support social media.

It may not be the threat of Big Brother, but when it comes to the well-being of children, the wrong social media choices could lead to a Big Cost.

AI Influencers Bode Ill for Self-Esteem https://www.pluggedin.com/blog/ai-influencers-bode-ill-for-self-esteem/ https://www.pluggedin.com/blog/ai-influencers-bode-ill-for-self-esteem/#comments Fri, 15 Mar 2024 18:36:45 +0000 https://www.pluggedin.com/?p=31299 Magazines and social media have caused a decline in self-esteem. But how will new AI "influencers" impact youth today?

The post AI Influencers Bode Ill for Self-Esteem appeared first on Plugged In.

There’s been a lot of reporting on AI this past year. Actors and writers went on strike to protest the use of AI in Hollywood. Creators sued AI companies over copyright issues and the use of their art in teaching AI models. And not-so-shockingly, AI generators were used to create explicit deepfakes of a certain chart-topping popstar.

But now we’re experiencing an influx of AI-generated “influencers.” And in an age where children are already flooded with images of real people that have been airbrushed to perfection and filtered to the point of nonsense, “AI hotties” bode ill for our collective self-esteem.

AI has a “hotness” problem, says Caroline Mimbs Nyce writing for The Atlantic. During Nyce’s own experiments with AI, she found that when generating images of people, “most were above-average hot, if not drop-dead gorgeous. None was downright ugly.” And if you take a look at the AI-generated images below, you’ll see what she means.

I went ahead and tested Nyce’s “hotness” theory using a very basic physical description of myself. I asked Microsoft Bing’s CoPilot (which is powered by Open AI’s DALL-E 3) to create an image of “a redhead woman with freckles typing at her desk.”

[AI-generated image, prompt: “a redhead woman with freckles typing at her desk”]

Nyce’s claims couldn’t have been more accurate. Each of the four images generated featured a “drop-dead gorgeous” woman with model-esque physique.

For years, magazines and advertisements have been criticized for setting unrealistic beauty standards. “[But] it’s acknowledged [those standards] are idealized,” says Rae Jacobson, writing for the Child Mind Institute. “The models wearing Size 0 clothing are just that: models. And even they are made-up, retouched, and photoshopped.”

Social media took that trend to the next level, elevating normal, average-looking folks to “perfection” through photo-editing apps and filters. Worse still, says Jill Emanuele, Senior Director of the Mood Disorders Center at the Child Mind Institute, social media feeds “can become fuel for negative feelings [that teens] have about themselves. Kids struggling with self-doubt read into their friends’ images what they feel they are lacking.” But once again, at least we can remind our kids that what folks post on social media usually only represents a tiny snippet of a very carefully curated portion of their real lives.

However, when it comes to AI influencers, there’s no human behind the image to sympathize with, and no one to help us realize that sometimes we use those meticulously crafted social media posts as a smokescreen to mask what we’re really feeling.

An AI-generated image will never speak out against having its image retouched or lead the charge against eating disorders, as many models have. And that artificially synthesized image can’t tell you that five seconds after it took that oh-so-wonderful beach vacation photo, a seagull pooped on its head.

If it did, it wouldn’t help, because we know that the AI isn’t real. It can’t experience body dysmorphia because it doesn’t have a body. It can’t sympathize with feeling lonely because it doesn’t have emotions.

So when AI-generated influencers, such as Aitana or Kuki, show up onscreen, how do we keep our kids (or ourselves) from comparing their lives and physical appearances to the AI’s?

  • Only follow real influencers who promote body positivity. A study published in the International Journal of Environmental Research and Public Health found that it was effective when celebrities and influencers were transparent about their posting practices and supported public health interventions to promote a positive body image. These types of campaigns foster self-esteem and create an emotional support group online that could help your vulnerable teen process how those images make them feel.

  • Don’t focus on external beauty ideals. As a parent, you are actually the first line of defense against your children developing a negative self-image—both through your personal example and through affirmations of your kids. Demonstrate healthy social media habits to encourage your kids to do the same. And certainly try to avoid comparing yourself to these new AI influencers. But more importantly, remind your children (and yourself) that we are “fearfully and wonderfully made” (Psalm 139:14).

  • Remember the facts. In a 2018 interview with Gigi Hadid, Blake Lively (wife of Ryan Reynolds) said, “It’s so important for young people not to compare themselves with what they see online. It’s our job as actors and/or models to be in shape. We have access to gyms and trainers and healthy food. And then on top of that, 99.9% of the time the images are Photoshopped.” YPulse found that nearly half of people ages 13-39 (45% of young females, 41% of young males) always edit their photos before posting online. Well, 100% of the time, AI influencers are edited, too, just by the nature of how their images are created.

I think the biggest takeaway here—for us and for the children in our care—is that “perfection” should never be the goal. We are all imperfect people created by a perfect God. He made us in His image, which is a beautiful and wonderful gift, especially since He has never focused on our external beauty. Instead, we should turn our eyes to inner beauty, and the fruit of the Spirit: love, joy, peace, patience, kindness, goodness, faithfulness, gentleness and self-control (Galatians 5:22-23).

Tech Trends: Apple Vision Pro and the Brain. Plus, State Bans on Social Media https://www.pluggedin.com/blog/tech-trends-february-2024/ https://www.pluggedin.com/blog/tech-trends-february-2024/#comments Wed, 28 Feb 2024 17:22:45 +0000 https://www.pluggedin.com/?p=31206 How might the Apple Vision Pro be rewiring our brains? And what are states doing to protect kids from the negative effects of social media?

The post Tech Trends: Apple Vision Pro and the Brain. Plus, State Bans on Social Media appeared first on Plugged In.


How Is the Apple Vision Pro Rewiring Our Brains?

This month, Apple released its much-anticipated Vision Pro. For $3,500, you can strap on one of these VR headsets and experience a type of augmented reality (although experts disagree about the correct terms to use for this “spatial computing” or “passthrough” concept).

According to Business Insider, the Vision Pro works using “cameras and other sensors that capture imagery of the outside world and reproduce it inside the device. They feed you a synthetic environment made to look like the real one, with Apple apps and other non-real elements floating in front of it.”

But that’s causing some pretty serious problems for early adopters.

A team at Stanford took Apple’s suggestion to wear the headsets for hours at a time. The short-term side effects weren’t surprising. Depth perception became practically nonexistent as folks misjudged distances both close up and far away. Objects in the headsets appeared warped, changing in size, shape and even color, especially when wearers moved their heads around. (Go figure, a full video render doesn’t work as fast as our eyes and brains do.)

But the Stanford team was determined to soldier on. And as its members continued to wear their headsets for days at a time, the problems got much, much worse.

They experienced what’s called “simulator sickness,” which causes nausea, headaches and dizziness—and this was in spite of the team’s experience with using VR headsets before. They also noticed that their brains and muscles eventually adapted to the distorted worldview, allowing them to function more normally. However, upon removing the headsets, their bodies had to readjust again. And the longer they used the headsets, the longer that normalizing process took.

Even with those problems, the Stanford team stayed relatively safe since they had “minders” keeping an eye on them to make sure they didn’t walk into walls or trip. But what if this now commercially available tech is used while driving? Or what if someone uses it all day at work and then attempts to bike home? It could cause some pretty serious accidents and injuries since our brains simply aren’t designed to work that way.

And then there are the philosophical implications. It’s still too early to know how humans will adapt to this new immersive technology—how it might impact social interactions and technological reliance—but it certainly warrants caution. After all, a decade ago, we had no idea how social media would impact us in the long run, and we’re now reeling from its effects.

The United States Is Cracking Down on Social Media

Last year a bipartisan group of 42 attorneys general sued Meta because it “designed its Facebook and Instagram products to keep young users on them for longer and repeatedly coming back.” In other words, because of its addictive features.

On January 31, the heads of Meta, TikTok, Snapchat, X (formerly Twitter) and Discord testified at a Senate Judiciary Committee hearing investigating whether social media platforms should bear more legal liability when children are harmed online.

It was certainly an emotionally charged session with the families of many of those harmed children present. “You have blood on your hands,” Sen. Lindsey Graham told the executives, several of whom apologized to those families.

Unfortunately, that’s about all that happened. Despite a bipartisan consensus that the United States’ current laws aren’t adequate to protect children online, lawmakers have run into roadblocks creating legislation that won’t violate our First Amendment rights. So some states are taking individual action. Florida passed a bill banning social media for kids under 16. A new law from Utah requires parental consent to create an account. Social media companies must consider the health and well-being of children before implementing new features for California users. And a proposal out of New York would force social media companies to disable their algorithms on child-owned accounts.

But for parents, relying on the judicial system to protect your children may be too little too late. Parental controls are often helpful for younger children, but for teens, they’re “ineffective,” says Stephen Balkam, founder of the nonprofit Family Online Safety Institute. So here are some tips to help keep your teens safe as they navigate the world of social media:

#HashtagTrending

Hashtags, trends, reels, sounds, tracks, stories—we know it feels impossible to keep up with what the kids are into these days. But here’s a quick overview of what your teen might be posting/watching on TikTok, Instagram and all the other “socials.” And in honor of Valentine’s Day (since we’re still in the month of February), we’ll focus on some dating trends:

  • “Situationship” – This term is commonly used when two people can’t come to a consensus on what sort of relationship they have. And, of course, that can be frustrating not only for confused teenagers, but for parents trying to help them navigate these sorts of situations.

  • “Breadcrumbing” – This is the act of giving someone just enough attention to keep them romantically hopeful. Typically, the person leaving breadcrumbs has no intention of dating that person, but they want to keep them around as an option, “just in case.”

  • “Orbiting” – Essentially, an orbiter is a digital stalker. They no longer have a romantic interest in the person they follow online, but their lingering presence can certainly cause distress as the orbitee tries to understand why this person won’t leave them alone.

  • “Ghosting” – This one is important as it can occur in both romantic and platonic relationships. But basically, you “ghost” someone by cutting off all contact with them without explanation. And that can be particularly upsetting for vulnerable teenagers.

  • “Phubbing” – Has your teenager ever ignored your presence in favor of their phone? Congratulations, you’ve been phubbed. It’s a combination of “phone” and “snubbing,” and often it’s not done to be intentionally rude but rather to stay connected with others through social media or texts. So advise your teen that it can be perceived as rude, even hurtful, to those who are physically present at that moment.

  • “Soft-launching” – Parents probably won’t be a fan of this one, but it’s become popular to “soft-launch” relationships. Mostly it involves hinting there’s someone special in your life by posting a picture of your joined hands, or their silhouette, or even the person holding a coffee mug so that it hides their face and keeps their identity ambiguous.

  • “Simp” – Again, parents probably won’t be fans of this trend. “Derived from the term ‘simpleton,’ this popular term began as a way to mock men who pander to women in an effort to sleep with them,” according to USA Today. It typically applied to guys who would donate money to YouTubers and other virtual influencers. But recently, the term has adapted to describe anyone who shows affection for someone else. And it’s taken on new life in the form of “simp-shaming,” mocking any man who genuinely cares for the well-being of women, implying that this is somehow “unmanly.”

  • “The Ick” – This is the feeling of becoming suddenly repulsed by someone you once found attractive. And while these feelings can sometimes be legitimate, parents should be aware if their teen gets “the ick” too frequently, as it might be a warning sign of deeper attachment issues.

  • “Paperclipping” – Similar to “breadcrumbing,” paperclipping is when an ex reaches out intermittently, not because they’re interested but because they want you on the back burner as an option. It’s a narcissistic behavior that should be discouraged if you notice your teen exhibiting it. And if your teen is being paperclipped, try reaffirming their value so they can end the toxic relationship themselves.

The post Tech Trends: Apple Vision Pro and the Brain. Plus, State Bans on Social Media appeared first on Plugged In.
