Quote: "I would agree the Molly Russell case could be attributed to online words in a direct sense. But those directly to blame are the evil people who have posted the nonsense. Not Twitter itself."
Twitter published it, and Twitter's algorithms pushed the dangerous content onto her feed. They have responsibility for it.
Quote: "The radicals will just continue regardless of any laws anyway, because they are fanatics who already don't care about law and order or society at large."
So you haven't grasped that people can become radicals by exposure to material that they wouldn't otherwise have been exposed to.
Quote: "It's clear from your comment that you're not a supporter of free speech."
In case you hadn't noticed, your post's still here ;-)
Quote: "Twitter published it, and Twitter's algorithms pushed the dangerous content onto her feed. They have responsibility for it."
The algorithm just gives you what it thinks you want based on your previous interactions. If you click on ten videos about cats, it will keep sending you cat videos.
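To make the mechanism described in that reply concrete, here is a minimal sketch of the claimed click-feedback loop, assuming a toy feed that simply re-serves the user's most-clicked category; the function, the catalogue and the numbers are invented for illustration, not any platform's real code:

```python
# Toy sketch of the click-feedback loop (hypothetical, not any platform's
# real code): the feed re-serves whichever category the user has clicked
# most, so ten cat clicks lock the feed onto cats.
from collections import Counter

def next_item(click_history: Counter, catalogue: list[str]) -> str:
    """Serve the most-clicked category; fall back to the first catalogue entry."""
    if not click_history:
        return catalogue[0]
    return click_history.most_common(1)[0][0]

clicks = Counter()
for _ in range(10):
    clicks["cats"] += 1                      # the user clicks ten cat videos...

for _ in range(3):
    item = next_item(clicks, ["news", "cats", "music"])
    print(item)                              # ...and now every serving is cats
    clicks[item] += 1                        # each served click reinforces the loop
```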
Better moderation would have prevented her death.
Similarly, other people have been driven to commit crimes of violence that they wouldn't have considered if they hadn't been deliberately exposed to lies and malicious content.
So you haven't grasped that people can become radicals by exposure to material that they wouldn't otherwise have been exposed to.
Quote: "The algorithm just gives you what it thinks you want based on your previous interactions."
No it doesn't, and that's the problem.
Read up on the Molly Russell case and you'll find that the algorithms pushed far more, and far more dangerous, content at her than she ever started out looking at.
If you've a Facebook account you'll find that you get pushed content that you've never searched for AT ALL. Maybe your 'friends' might have looked at it, maybe just people in groups you've joined have engaged with it. But you still get video content pushed at you that is irrelevant and offensive.

Quote: "No it doesn't, and that's the problem."
I've had a Facebook account for years and don't get any irrelevant or offensive content sent to me. It has got to be to do with where you have gone on Facebook, and the algorithms pick up on your likes.
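The escalation claim in the "No it doesn't" post above can be illustrated with a deliberately simplified ranking sketch: if predicted engagement is the only objective, nothing stops the feed leading with the most extreme variant of a topic. Every item and number here is invented, and this is not any platform's actual objective function:

```python
# Invented catalogue: three variants of one topic with made-up engagement scores.
catalogue = [
    ("variant the user first clicked", 0.10),   # (item, predicted engagement)
    ("stronger variant of same topic", 0.25),
    ("most extreme variant",           0.40),
]

def next_push(items):
    """Pick the item with the highest predicted engagement; nothing in this
    objective asks whether the item is safe or was ever searched for."""
    return max(items, key=lambda pair: pair[1])

print(next_push(catalogue))   # the feed leads with the most extreme variant
```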
Quote: "I've had a Facebook account for years and don't get any irrelevant or offensive content sent to me. It has got to be to do with where you have gone on Facebook, and the algorithms pick up on your likes."
It's more insidious than that. It's based on what the people or groups you have connected to have looked at or been interested in. So if you read a post about stained glass, others who have read that post might also have looked at church windows; other people who looked at church windows might have looked at religious iconography; and others who looked at iconography might have looked at crucifixes. The algorithm might toss you a recommendation of crucifixions on the off chance you'll click, so they can monetise an ad. It's all about the probability of getting you to click. Of course, once you've clicked the clickbait you'll get a lot more of the same, plus even more of the seemingly random clickbait offerings.
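The chain described there is essentially "people who engaged with X also engaged with Y" association. A minimal sketch, assuming a hypothetical co-engagement graph in which every item and click probability is invented to mirror the stained-glass example (this is not Facebook's actual pipeline):

```python
import random

# Hypothetical co-engagement graph: item -> [(related item, click probability)].
also_engaged = {
    "stained glass":         [("church windows", 0.6), ("glass blowing", 0.4)],
    "church windows":        [("religious iconography", 0.7), ("architecture", 0.3)],
    "religious iconography": [("crucifixes", 0.8), ("icons", 0.2)],
    "crucifixes":            [("crucifixions", 0.5), ("jewellery", 0.5)],
}

def recommend(seed: str, hops: int) -> str:
    """Walk the co-engagement graph a few hops from what the user actually read,
    choosing neighbours in proportion to their historical click probability."""
    item = seed
    for _ in range(hops):
        neighbours = also_engaged.get(item)
        if not neighbours:
            break
        items, weights = zip(*neighbours)
        item = random.choices(items, weights=weights)[0]
    return item

random.seed(1)
# One post about stained glass can surface a recommendation several hops away.
print(recommend("stained glass", hops=3))
```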
Quote: "I've had a Facebook account for years and don't get any irrelevant or offensive content sent to me. It has got to be to do with where you have gone on Facebook, and the algorithms pick up on your likes."
Lucky you. It's pretty obvious from my feed that content is being pushed at me that I've NEVER looked for or engaged with.
I know that people can become radicalised by what they are exposed to. Plenty of people were radicalised before the internet as well; certain people are drawn to fanaticism.
You have to credit people with some intelligence and agency to think for themselves.
I think, given the way the police have lately been acting on some postings on social media sites, and the way the government is using legislation as a thought-policing approach, then yes, this site and most other forums fall foul by allowing what we refer to here as the off topic 2 threads.
Anyone can claim to be offended by anything. It no longer relates just to defined terms that are offensive; it now seems to be a broader, all-encompassing narrative to silence even discussion of agendas that don't match the ruling government's point of view. There is much in the off topic 2 threads that could fall foul of "this", but the problem is: what is "this"?
I'm a supporter of free speech (or rather, freedom of expression, which is the actual human right you mean in a UK context), BUT that support does not extend to permitting paedophiles to share CSA images.
Quote: "Policing is separate and distinct from government. The judiciary is separate and distinct from government. This has nothing to do with any 'ruling government's point of view'."
Government sets the judicial landscape: it sets, amends and enacts laws through legislation. The police then enforce those laws, and the judiciary then prosecutes under them.
Quote: "Lucky you. It's pretty obvious from my feed that content is being pushed at me that I've NEVER looked for or engaged with."
I don't have that problem. Just call me No-Mates. I reject all invitations etc.
Spotting content that I might be interested in or have searched for is easy enough. It's also easy to see the algorithm thinking I might be interested in my friends' interests too, although I don't have the same interests.
The rest must be coming from other people's interests in groups I'm a member of.
You can also watch (slow) changes in the pushed content as you drop out of some groups or join others. Joining some 3D printing groups last year saw totally new, and unwanted, content being pushed to my feed.
This is where social media becomes a menace, as unwanted content is pushed at people who might not have the ability to spot hurtful content or misinformation.
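A minimal sketch of that attribution, assuming a hypothetical feed that pools candidates from your own history, your friends' interests and every group you belong to; all names and interests are invented:

```python
# Hypothetical candidate pooling: joining one new group immediately injects
# that group's topics into the feed, exactly the "slow changes" effect above.
def feed_candidates(own: set[str], friends: set[str], groups: dict[str, set[str]]) -> set[str]:
    pooled = set(own) | set(friends)
    for interests in groups.values():
        pooled |= interests
    return pooled

own = {"hiking", "jazz"}
friends = {"angling"}
groups = {"Local history": {"old maps"}}

print(feed_candidates(own, friends, groups))
groups["3D printing"] = {"resin printers", "filament deals"}   # join one group...
print(feed_candidates(own, friends, groups))                   # ...new topics appear
```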
Quote: "No it doesn't, and that's the problem."
I agree that online content could cause problems for some people with pre-existing mental health issues, but surely even regular TV and film content exposes viewers to all kinds of potential radicalisation triggers if they are predisposed to being seriously influenced by what they see or read?
An average person would not be goaded or influenced in such a way that they would self-harm. Anyone who does, under those circumstances, arguably already has underlying mental health or personality disorder issues prior to viewing or receiving such content, which in turn makes them predisposed to being influenced and encourages them to act upon such harmful content!
The vast majority of internet content I receive is directly or indirectly linked to searches I've made or content I've read, no matter how obscure the topic or how casually I may have browsed it at some point.
I have two cats, and I get quite a daily bombardment of cat-related content simply because I've clicked on cat content in the past. I don't even need to search; it simply finds me, and that's how algorithms work.
I won't do it myself, but I am in no doubt that if I searched for content on topics such as self-harm or even how to commit suicide, then the algorithms, if they detected it, would direct that type of content to my screen in some form or other... that's how they work.
I've been on the internet for many years and never once have I received any harmful content, such as how to self-harm etc., simply because I haven't searched for such content.
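A toy version of the profile this poster is describing, assuming clicks are the only signal; the topic names and weights are invented for illustration:

```python
# Each click nudges a topic's weight up; the feed ranks topics by weight, so a
# topic never clicked or searched for stays at zero and never "finds you"
# through this particular route.
from collections import defaultdict

profile: defaultdict[str, float] = defaultdict(float)

def register_click(topic: str) -> None:
    profile[topic] += 1.0                # every click raises the topic's weight

def ranked_topics() -> list[str]:
    return sorted(profile, key=profile.__getitem__, reverse=True)

for _ in range(5):
    register_click("cats")               # daily cat clicks...
register_click("gardening")

print(ranked_topics())                   # ...so cats dominate the ranking
print(profile["self harm"])              # an unsearched topic stays at 0.0
```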
The problem is that most parents will warn their kids about the bogey man out to harm them, and will intervene if they think they are exposing themselves to physical danger, but many parents fail miserably when it comes to monitoring the online content and material their children are accessing or receiving.
That's where the danger lies for most impressionable young people.
The internet is full of content, both good and bad, so if parents have a child with known or suspected mental health issues then it is essential that the content the child views is monitored carefully, rather than just laying the blame at someone else's door when things go wrong.
That's a very naïve thing to say, because words lead to actions:
Jo Cox, Molly Rose Russell, David Amess: all recent tragic deaths caused by online words.
Just consider how much harm has been done by online radicalisation.