Elon Musk now owns 9.2% of Twitter... update: will soon be the sole owner as Board of Directors accepts his purchase offer


Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
I'm all for not holding the platform liable when it's actually neutral or just providing simple moderation. Facebook, YouTube, etc. are not doing that; they are taking that content and actively promoting it. They've designed their websites to promote harmful material for profit.

Just like a company dumping chemicals into the river and then saying "we didn't make the chemicals, not our fault."

How do you not understand the difference between simple moderation and actively taking known harmful content and pushing it to known susceptible individuals? Like actively promoting suicide posts to depressed teens?

They have algorithms that seek to feed you the content that is popular among a group that shares your tastes.

So if someone watches Fox News clips and Trump speeches, they are more likely to get whatever nonsense DeSantis is peddling this week, or the latest right-wing conspiracy theory.

What they feed you is what you typically seek. I don't get all kinds of harmful videos on YT. If I click home, this is what I see right now:

YTRecomend.png
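
Roughly what that "popular among people who share your tastes" mechanic looks like, as a toy sketch in Python. Everything here (the users, the videos, the scoring) is made up for illustration; real recommenders are vastly more complicated:

```python
from collections import Counter

# Hypothetical watch histories: user -> set of video IDs (illustrative only)
histories = {
    "me":    {"fallout_guide", "drone_review", "standup_set"},
    "user2": {"fallout_guide", "drone_review", "travel_vlog"},
    "user3": {"fallout_guide", "standup_set", "cooking_show"},
    "user4": {"news_clip", "speech_clip", "conspiracy_vid"},
}

def recommend(user: str, histories: dict, top_n: int = 3) -> list:
    """Suggest what is popular among users whose tastes overlap with yours."""
    mine = histories[user]
    scores = Counter()
    for other, seen in histories.items():
        if other == user:
            continue
        overlap = len(mine & seen)      # how similar their tastes are to mine
        if overlap == 0:
            continue                    # no shared tastes, no influence
        for video in seen - mine:       # things they watched that I haven't
            scores[video] += overlap    # weighted by taste similarity
    return [video for video, _ in scores.most_common(top_n)]

print(recommend("me", histories))
# ['travel_vlog', 'cooking_show'] - no conspiracy_vid, because nobody
# who shares my tastes watches it
```

The flip side is the rabbit hole: if the people who overlap with your tastes are bingeing conspiracy videos, those are exactly what gets boosted into your feed.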
 

MrSquished

Lifer
Jan 14, 2013
23,155
21,281
136
Because Fox is speaking there, not someone else.

I hear your point, I just don’t know how you would implement it without disastrous effects.

If an internet company is knowingly pushing harmful videos because it is specifically looking to promote an agenda, then that is one thing, but providing an algorithm that recommends videos based on what other viewers of said content are viewing is a tough one to fight. Unless we can legislate a certain bit of randomness into these algorithms.
 

MrSquished

Lifer
Jan 14, 2013
23,155
21,281
136
They have algorithms that seek to feed you the content that is popular among a group that shares your tastes.

So if someone watches Fox News clips and Trump speeches, they are more likely to get whatever nonsense DeSantis is peddling this week, or the latest right-wing conspiracy theory.

What they feed you is what you typically seek. I don't get all kinds of harmful videos on YT. If I click home, this is what I see right now:

View attachment 78810

Same with me.

However, on my FB feed, even though I have unfriended or unfollowed any Trumpies on my timeline and do not follow any conservative media, I constantly see right-wing nonsense in my feed. I am not sure if those are all paid for, or why the fuck I only see the worst of the worst right-wing shit there, but if that is purposeful then that is different.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
Same with me.

However, on my FB feed, even though I have unfriended or unfollowed any Trumpies on my timeline and do not follow any conservative media, I constantly see right-wing nonsense in my feed. I am not sure if those are all paid for, or why the fuck I only see the worst of the worst right-wing shit there, but if that is purposeful then that is different.

I don't have FB/Instagram/TikTok/Twitter or anything beyond YT, so I don't see what they are feeding people.

Some of that might just be feeding content with high engagement, without taking preferences into account.

Bad algorithms aren't a reason to revoke Section 230, though.
 

Zorba

Lifer
Oct 22, 1999
15,282
10,879
136
Because Fox is speaking there, not someone else.

I hear your point, I just don’t know how you would implement it without disastrous effects.
I think once you choose to promote specific speech, especially when you choose to promote harmful speech to get more engagement, you are now speaking as well.
How would you define neutrality in a way that wouldn’t be abused?
I don't have the specific answer, but I think it could be figured out. We already have an extremely high bar to prove liability over speech in this country, so if nothing else that would still apply.

I'm not saying I'm for banning all algorithmic curation. But honestly I'm not sure that it'd be the worst thing for society. You could still follow things and see "People that follow this also follow that." I think it'd likely cut down on the addictiveness of these platforms and massively decrease their ability to amplify harmful content. And honestly it'd be nice if YouTube or Facebook actually showed me posts from people I follow instead of whatever random crap their algorithm wanted to push.

I agree it's complicated, but we've managed to regulate much more complicated things than this in the past.
 
  • Like
Reactions: Fanatical Meat

Zorba

Lifer
Oct 22, 1999
15,282
10,879
136
They have algorithms that seek to feed you the content that is popular among a group that shares your tastes.

So if someone watches Fox News clips and Trump speeches, they are more likely to get whatever nonsense DeSantis is peddling this week, or the latest right-wing conspiracy theory.

What they feed you is what you typically seek. I don't get all kinds of harmful videos on YT. If I click home, this is what I see right now:

View attachment 78810
YouTube has cleaned up a lot; I used to always have conspiracy bullshit recommended to me. But there has been research on how the YouTube algorithm has radicalized people through its recommendation engine.
 

Zorba

Lifer
Oct 22, 1999
15,282
10,879
136
Same with me.

However, on my FB feed, even though I have unfriended or unfollowed any Trumpies on my timeline and do not follow any conservative media, I constantly see right-wing nonsense in my feed. I am not sure if those are all paid for, or why the fuck I only see the worst of the worst right-wing shit there, but if that is purposeful then that is different.
It's purposeful. They are selling outrage and anger because it gets more engagement. These algorithms are not benign little engines that say "oh, you really like Katy Perry, would you like to listen to Taylor Swift?" They are tracking monsters that say "oh, he went to a page about depression, let's serve him these posts that will encourage more depression; oh, that worked, let's start pushing posts about suicide," and so on. Except far more sophisticated.

They track people all over the internet and then serve them the most "engaging" thing they can. They don't care if it makes you an extremist, a mass murderer, or suicidal.
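
A crude sketch of what that objective looks like. The numbers and names here are invented and real systems are far more sophisticated, but the key point survives: the score being maximized is predicted engagement built from tracked signals, with no term anywhere for harm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    base_engagement: float  # average clicks/comments/shares this post earns

# Signals tracked about one user across sites and apps (hypothetical)
user_profile = {"depression": 0.9, "gaming": 0.4, "cooking": 0.1}

candidates = [
    Post("cooking", 0.30),
    Post("gaming", 0.35),
    Post("depression", 0.50),  # upsetting content tends to pull more engagement
]

def predicted_engagement(post: Post, profile: dict) -> float:
    """Score = how engaging the post is * how strongly it matches the user's
    tracked interests. Nothing here asks whether the content is harmful."""
    affinity = profile.get(post.topic, 0.05)
    return post.base_engagement * affinity

feed = sorted(candidates,
              key=lambda p: predicted_engagement(p, user_profile),
              reverse=True)
print([p.topic for p in feed])  # ['depression', 'gaming', 'cooking']
```

Real systems replace that one multiplication with large learned models, but the optimization target is the same: engagement.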
 
Last edited:
  • Like
Reactions: ivwshane

Zorba

Lifer
Oct 22, 1999
15,282
10,879
136
I don't have FB/Instagram/TikTok/Twitter or anything beyond YT, so I don't see what they are feeding people.

Some of that might just be feeding content with high engagement, without taking preferences into account.

Bad algorithms aren't a reason to revoke Section 230, though.
No one is talking about revoking 230, just modifying it to make bad actors liable for their business decisions.
 
  • Like
Reactions: Fanatical Meat

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
No one is talking about revoking 230, just modifying it to make bad actors liable for their business decisions.

230 has nothing to do with their business decisions. It's about not being liable for user content. I doubt we will get a law about what algorithms can and can't do when recommending content. There doesn't seem to be any way to reasonably construct such laws.

This seems to just be a case of: "I don't like some things going on, there should be a law"...
 

MrSquished

Lifer
Jan 14, 2013
23,155
21,281
136
It's purposeful. They are selling outrage and anger because it gets more engagement. These algorithms are not benign little engines that say "oh, you really like Katy Perry, would you like to listen to Taylor Swift?" They are tracking monsters that say "oh, he went to a page about depression, let's serve him these posts that will encourage more depression; oh, that worked, let's start pushing posts about suicide," and so on. Except far more sophisticated.

They track people all over the internet and then serve them the most "engaging" thing they can. They don't care if it makes you an extremist, a mass murderer, or suicidal.

Like I said, if it is proven to be malicious, by all means they should be held liable.

But my YT feed is based on my preferences. I don't get any political or shitty recs there. It's food videos, travel videos, drone videos, standup videos, gear reviews, video game videos, etc., and that's it.

There can't just be a blanket rule where any internet company is responsible for any content on their site; that is the worst outcome. You know Republicans would use that law to go after whatever they didn't like, and that's it. That's a can of worms you don't want to open, because once you open it, you can't close it.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,109
136
They have algorithms that seek to feed you the content that is popular among a group that shares your tastes.

So if someone watches Fox News clips and Trump speeches, they are more likely to get whatever nonsense DeSantis is peddling this week, or the latest right-wing conspiracy theory.

What they feed you is what you typically seek. I don't get all kinds of harmful videos on YT. If I click home, this is what I see right now:

View attachment 78810
Well, it doesn't look like you follow any political stuff at all, which is the best way to go.
 

Zorba

Lifer
Oct 22, 1999
15,282
10,879
136
230 has nothing to do with their business decisions. It's about not being liable for user content. I doubt we will get a law about what algorithms can and can't do when recommending content. There doesn't seem to be any way to reasonably construct such laws.

This seems to just be a case of: "I don't like some things going on, there should be a law"...
Okay, I don't understand how you can possibly say Facebook's decision to track everything about you so they can micro-target you with "user content" is not a business decision. Until you accept basic facts, there is no point in continuing to discuss this with you.

Were the algorithms handed down by God, or did someone at Facebook actively decide how they worked?
 
  • Like
Reactions: VirtualLarry

Zorba

Lifer
Oct 22, 1999
15,282
10,879
136
Like I said, if it is proven to be malicious, by all means they should be held liable.
So you agree Section 230 needs to be revised? Because today YouTube could promote nothing but pro-Nazi videos and face zero liability.
There can't just be a blanket rule where any internet company is responsible for any content on their site; that is the worst outcome. You know Republicans would use that law to go after whatever they didn't like, and that's it. That's a can of worms you don't want to open, because once you open it, you can't close it.
No one has said they should be. They should be liable for the business actions they take with that content. Just like any non-internet company.

ETA: This is really a slippery-slope argument, and a fairly poor one, because every non-internet company is liable for their business decisions, including media companies like the NYT, and Republicans have not gone after them for all of their content. There is a high bar to prove liability for harmful speech in this country that has nothing to do with Section 230.

If NYTimes.com posted a headline at the top of their page that incited violence against a specific person, they could be sued. If they took a user comment and posted it in the same spot, saying exactly the same thing, they would face zero liability.
 
Last edited:

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
Okay, I don't understand how you can possibly say Facebook's decisions to track everything about you so they can micro target you with "user content" is not a business decision. Until you accept basic facts, there is no point in continuing to discuss this with you.

Were the algorithms handed down by God, or did someone at Facebook actively decide how they worked?

The point you seem to miss is that the algorithm is NOT specifically choosing to serve you QAnon BS, or any other miserable, damaging content.

They are just serving you what others are sharing.

If pink unicorns were being shared like wildfire, they would send you the pink unicorn stories/pics/videos.

The algorithm doesn't assess content; it reads engagement.
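
Something like this toy ranker, which looks only at share counts and never at what the item actually is (purely illustrative, not how any real platform is implemented):

```python
# Toy content-agnostic ranker: whatever spreads fastest gets pushed, whether
# it's pink unicorns or QAnon. The numbers are made up for illustration.
items = {
    "pink_unicorn_video": {"shares_last_hour": 12000},
    "cat_compilation":    {"shares_last_hour": 800},
    "conspiracy_thread":  {"shares_last_hour": 12000},
}

def rank_by_virality(items: dict) -> list:
    """Rank purely by engagement velocity; the content itself is never read."""
    return sorted(items, key=lambda name: items[name]["shares_last_hour"],
                  reverse=True)

print(rank_by_virality(items))
# ['pink_unicorn_video', 'conspiracy_thread', 'cat_compilation'] - the ranker
# can't tell the unicorns from the conspiracy thread.
```

Swap the share counts around and the "recommendation" flips with them; nothing in the ranking ever looks inside the post.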
 

Zorba

Lifer
Oct 22, 1999
15,282
10,879
136
The point you seem to miss is that the algorithm is NOT specifically choosing to serve you QAnon BS, or any other miserable, damaging content.

They are just serving you what others are sharing.

If pink unicorns were being shared like wildfire, they would send you the pink unicorn stories/pics/videos.

The algorithm doesn't assess content; it reads engagement.
It doesn't matter; right now Twitter could take a post inciting violence against a specific person, push it to everyone on Twitter, and face zero liability.

Further, you are being very naive about the sophistication of their algorithms.
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
It doesn't matter; right now Twitter could take a post inciting violence against a specific person, push it to everyone on Twitter, and face zero liability.

Further, you are being very naive about the sophistication of their algorithms.

So you think their algorithms identify hate speech and incitement to violence, and then specifically spread that?
 

Heartbreaker

Diamond Member
Apr 3, 2006
4,340
5,464
136
Well, it Doesn’t look like you follow any political stuff at all - which is the best way to go.

I follow news. I have old-school RSS feeds of CBC, NPR, the BBC, and a couple of Canadian newspapers. But yeah, I don't follow it somewhere like YT, where stories would be driven by engagement.

I just recently started replaying Fallout 3, using a mod that puts it in the Fallout NV engine (Tale of Two Wastelands), and I used a YT guide for that install and to check out a few things, so my YT is very Fallout-heavy right now.
 

ivwshane

Lifer
May 15, 2000
32,565
15,449
136
So you think their algorithms identify hate speech and incitement to violence, and then specifically spread that?

The algorithms are designed to push content that gets engagement. Whether that engagement is positive or negative doesn't matter, except for the fact that negative content (i.e., things that upset people) gets more engagement than positive things. What that does is radicalize people, and right-wingers seem to be extremely susceptible to this compared to left-leaning people.
 
  • Like
Reactions: Zorba

Zorba

Lifer
Oct 22, 1999
15,282
10,879
136
So you think their algorithms identify hate speech and incitement to violence, and then specifically spread that?
I don't know exactly how they work, but I do know research has shown they are quite effective at radicalizing people and promoting suicide. And Facebook in particular has done nothing to fix this.
 

cytg111

Lifer
Mar 17, 2008
24,027
13,536
136
So you think their algorithms identify hate speech and incitement to violence, and then specifically spread that?
Yep. If that's a silo that sells ads, sure they will.
Anyway, soon we will have chatbots running influence operations; that's going to be its own nightmare to deal with.
 
  • Like
Reactions: Zorba

MrSquished

Lifer
Jan 14, 2013
23,155
21,281
136
Yes, that happens because of the rabbit hole. They see the content that you're watching and then feed you tons of other content similar to it that other people like you watched, engaged with, and kept watching. You start watching a couple of QAnon videos and now you're inundated with them.

If there's anything beyond that, then yes, I think they should be liable for it. But not for algorithms that just do that.

As far as Facebook and other companies spying on you, I completely agree, 100%. The privacy laws in this country are a fucking joke, and we could at least take a lesson from Europe, which is at least a little more evolved on this stuff. And they should be even stricter too. But here our rights as digital consumers are virtually zero.

But that's separate from recommending content that someone started watching.
 
Feb 4, 2009
35,286
16,766
136
No one is talking about revoking 230, just modifying it to make bad actors liable for their business decisions.
Exactly. As I said, they need to have skin in the game to act appropriately.
Again, I am not sure of the solution, but we have tons of smart people who can figure that out, and we can evaluate whether the change is working and, if not, modify it or try something else.
 
  • Like
Reactions: Zorba

brycejones

Lifer
Oct 18, 2005
27,735
26,885
136
Exactly. As I said, they need to have skin in the game to act appropriately.
Again, I am not sure of the solution, but we have tons of smart people who can figure that out, and we can evaluate whether the change is working and, if not, modify it or try something else.
Yeah, but those people aren't passing laws. Do you really trust this Congress to come up with something that isn't 10x worse?
 

fskimospy

Elite Member
Mar 10, 2006
85,710
51,001
136
Exactly. As I said, they need to have skin in the game to act appropriately.
Again, I am not sure of the solution, but we have tons of smart people who can figure that out, and we can evaluate whether the change is working and, if not, modify it or try something else.
Or maybe Section 230 is the right solution, and any change would be worse than what we have.