Starbuck1975
Lifer
- Jan 6, 2005
C-suite executives get paid to mitigate risk and face the consequences when they fail to do so. You can’t release products into the wild with inherent flaws and hope for the best later. Some companies are able to self-regulate; the government steps in when they arrogantly refuse to do so to the detriment of society. Also, there is enough evidence to suggest that some of these platforms had some idea of what was happening and chose not to act.

This thread has degenerated into a name-calling contest, so I'm going to attempt to get some useful conversation going again before I just ignore the whole thread. Here is something actually worth talking about.
This is a problem, and honestly I have no idea what to do about it. Social media has become an important part of American (and really world) culture. It depends on the ability of people to post about whatever they are thinking about, which means we can't heavily regulate content without killing what makes social media work. There just aren't many ways to filter real content from propaganda, because propaganda works by getting normal people to repeat it. Simply put, well-done propaganda is real content as far as social media is concerned, and I for one don't want what would amount to a 'thought police' telling us what we can and can't talk about, because that would get corrupted almost instantly.
Holding social media responsible after the fact seems wrong as well when we can't even identify a way they could have prevented it. So, what do we do? Any ideas?
While investigations may be reactive, they set precedent and tone. Furthermore, until you take the time to evaluate where the chain broke, how can you regulate the construction of a new one?