Online Safety Act

We aren't going anywhere. I'll dig into it further, but we're already on it.

I do appreciate the concern!
 
I would agree the Molly Russell case could be attributed to online words in a direct sense. But those directly to blame are the evil people who have posted the nonsense. Not Twitter itself.
Twitter published it and Twitter's algorithms pushed the dangerous content onto her feed. They have responsibility for it.
Better moderation would have prevented her death.

Similarly, other people have been driven to commit crimes of violence that they wouldn't have considered if they hadn't been deliberately exposed to lies and malicious content.
The radicals will just continue regardless of any laws anyway, because they are fanatics who already don't care about law and order or society at large.
So you haven't grasped that people can become radicals by exposure to material that they wouldn't otherwise have been exposed to.
 
So you haven't grasped that people can become radicals by exposure to material that they wouldn't otherwise have been exposed to.
The algorithm just gives you what it thinks you want from previous interactions. If you click on 10 videos about cats, it will keep sending you cat videos.
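
To make that concrete, here's a minimal sketch of that "more of what you clicked" loop (Python, with made-up data and function names, nothing like any platform's actual code):

```python
from collections import Counter

def recommend(click_history, candidates, top_n=5):
    """Rank candidate videos by how often the user already clicked
    that topic. Purely illustrative; real feeds are far more complex."""
    topic_counts = Counter(video["topic"] for video in click_history)
    return sorted(candidates,
                  key=lambda v: topic_counts[v["topic"]],
                  reverse=True)[:top_n]

# Click on 10 cat videos and the feed keeps serving cats.
history = [{"topic": "cats"}] * 10
candidates = [{"id": 1, "topic": "woodwork"}, {"id": 2, "topic": "cats"}]
print(recommend(history, candidates))  # the cat video ranks first
```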

I know that people can become radicalised by what they are exposed to. Plenty of people were radicalised before the internet as well; certain people are drawn to fanaticism.
You have to credit people with some intelligence and agency to think for themselves.
 
I think, given the way that police have lately been acting on some postings on social media sites and the government has been using legislation as a thought-policing approach, then yes, this site and most other forums fall foul by allowing what we refer to here as off topic 2 threads. Anyone can claim to be offended by anything; it no longer relates just to defined terms that are offensive, but now seems to be a broader, encompassing narrative to silence even discussion of some agendas that don't meet the ruling government's point of view. There is much in the off topic 2 threads that could fall foul of "this", but the problem is: what is "this"?
 
We see that, as time goes on, we are accepting more... of everything: goods, services, TV, films, politics and so on, and as we progress, so we push the boundaries. Freedom of speech is enshrined in the US Constitution. Here, we have freedom of expression. We are entitled to express ourselves through music, art, political views, the press and so on. But that's not what this is all about, is it? This is about the decay of courtesy, of manners and behaviour, because we seem to think it's modern.

As a consequence, the laws will inevitably have to evolve to keep pace with 'modern' behaviour. We won't end up in a dictatorship; the public wouldn't allow it. But we do need to maintain a watchful eye on the more detrimental extremists, for the generations yet to come.
 
The algorithm just gives you what it thinks you want from previous interactions.
No it doesn't, and that's the problem.
Read up on the Molly Russell case and you'll find that the algorithms pushed far more, and far more dangerous, content to her than she ever went looking for.

If you've a Facebook account you'll find that you get pushed content that you've never searched for AT ALL. Maybe your 'friends' have looked at it; maybe just people in groups you've joined have engaged with it. But you still get video content pushed at you that is irrelevant and offensive.
 
If you've a Facebook account you'll find that you get pushed content that you've never searched for AT ALL. Maybe your 'friends' have looked at it; maybe just people in groups you've joined have engaged with it. But you still get video content pushed at you that is irrelevant and offensive.
I've had a Facebook account for years and don't get any irrelevant or offensive content sent to me; it has got to be to do with where you have gone on Facebook, and the algorithms pick up on your likes.
 
I've had a Facebook account for years and don't get any irrelevant or offensive content sent to me; it has got to be to do with where you have gone on Facebook, and the algorithms pick up on your likes.
It's more insidious than that. It's about what the people or groups you have connected to have looked at or been interested in. So if you read a post about stained glass, others who have read that post might also have looked at church windows, and other people who looked at church windows might have looked at religious iconography, and others who looked at iconography might have looked at crucifixes. The algorithm might toss you a recommendation of crucifixes on the off chance you'll click, so they can monetise an ad. It's all about the probability of getting you to click. Of course, once you've clicked the clickbait you'll get a lot more of the same, plus even more of the seemingly random clickbait offerings.
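
To illustrate how that chain can happen, here's a minimal sketch of "people who engaged with X also engaged with Y" scoring, with invented users and topics; it isn't Facebook's real system, just the general idea:

```python
from collections import defaultdict
from itertools import combinations

# Invented engagement logs: which topics each user interacted with.
user_history = {
    "alice": {"stained_glass", "church_windows"},
    "bob":   {"church_windows", "iconography"},
    "carol": {"iconography", "crucifixes"},
    "you":   {"stained_glass"},
}

# Count how often two topics are engaged with by the same user.
co_engaged = defaultdict(int)
for topics in user_history.values():
    for a, b in combinations(sorted(topics), 2):
        co_engaged[(a, b)] += 1
        co_engaged[(b, a)] += 1

def recommend(user, hops=3):
    """Follow 'also engaged with' links a few hops out from what the
    user actually touched; each hop drifts further from their interests."""
    seen = set(user_history[user])
    frontier = set(seen)
    for _ in range(hops):
        frontier = {b for (a, b) in co_engaged if a in frontier} - seen
        seen |= frontier
    return frontier  # the long-shot suggestions served for the ad click

print(recommend("you"))  # {'crucifixes'}: stained glass led here in 3 hops
```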
 
I've had a Facebook account for years and don't get any irrelevant or offensive content sent to me; it has got to be to do with where you have gone on Facebook, and the algorithms pick up on your likes.
Lucky you. It's pretty obvious from my feed that content is being pushed at me that I've NEVER looked for or engaged with.
Spotting content that I might be interested in or have searched for is easy enough. It's also easy to see the algorithm thinking I might be interested in my friends' interests too, although I don't have the same interests.
The rest must be coming from other people's interests in groups I'm a member of.
You can also watch (slow) changes in the pushed content as you drop out of some groups or join others. Joining some 3D printing groups last year saw totally new, and unwanted, content being pushed to my feed.
This is where social media becomes a menace as unwanted content is pushed at people who might not have the ability to spot hurtful content or misinformation.
 
Read up on the Molly Russell case and you'll find that the algorithms pushed far more, and far more dangerous, content to her than she ever went looking for.

If you've a Facebook account you'll find that you get pushed content that you've never searched for AT ALL. Maybe your 'friends' have looked at it; maybe just people in groups you've joined have engaged with it. But you still get video content pushed at you that is irrelevant and offensive.
I agree that online content could cause problems for some people with pre-existing mental health issues, but surely even regular TV and film content exposes viewers to all kinds of potential radicalisation triggers if they are predisposed to being seriously influenced by what they see or read?

An average person would not be goaded or influenced in such a way that they would self-harm, and anyone who does under those circumstances arguably already has underlying mental health or other personality disorder issues prior to viewing or receiving such content, which in turn makes them predisposed to being influenced and encourages them to act upon such harmful content.

The vast majority of internet content I receive is directly or indirectly linked to searches I've made or content I've read, no matter how obscure the topic or how casually I may have browsed it at some point.
I have two cats and I get quite a daily bombardment of cat-related content simply because I've clicked on cat content in the past. I don't even need to search; it simply finds me, and that's how algorithms work.
I won't do it myself, but I am in no doubt that if I searched for content appertaining to 'self-harm' or even 'how to commit suicide', then the algorithms, if they detected it, would direct that type of content to my screen in some form or other... that's how they work.
I've been on the internet for many years and never once have I received any harmful content, such as how to self-harm, simply because I haven't searched for such content.

The problem is that most parents will warn their kids about the bogeyman out to harm them and will intervene if they think they are exposing themselves to physical danger, but many parents fail miserably when it comes to monitoring the online content and material their children are accessing or receiving.
That's where the danger lies for most impressionable young people.
The internet is full of content both good and bad, so if parents have a child with known or suspected mental health issues then it is essential that the content they view is monitored carefully, rather than just laying the blame at someone else's door when things go wrong.
 
I know that people can become radicalised by what they are exposed to. Plenty of people were radicalised before the internet as well; certain people are drawn to fanaticism.
You have to credit people with some intelligence and agency to think for themselves.

Even though you didn't? You called those people "crazy" - so how do we reconcile crediting all of these people with the intelligence and agency to think for themselves, while at the same time acknowledging that some people are crazies?
It'd be interesting to have that discussion...
 
I think, given the way that police have lately been acting on some postings on social media sites and the government has been using legislation as a thought-policing approach, then yes, this site and most other forums fall foul by allowing what we refer to here as off topic 2 threads.

Completely disagree - nobody has been taken to task for thinking bad thoughts - why is there still such disinformation abounding on this?

Anyone can claim to be offended by anything; it no longer relates just to defined terms that are offensive, but now seems to be a broader, encompassing narrative to silence even discussion of some agendas that don't meet the ruling government's point of view. There is much in the off topic 2 threads that could fall foul of "this", but the problem is: what is "this"?

Again - totally disagree, because it is factually incorrect. Policing is separate and distinct from government. The judiciary is separate and distinct from government. This has nothing to do with any "ruling government's point of view".
 
I'm a supporter of free speech (or rather, freedom of expression, which is the actual human right you mean in a UK context), BUT I do not extend that to permitting paedophiles to share CSA images.

I have no idea what that has to do with free speech and no, I’m not talking about freedom of expression; you are.
You seem deeply confused about two different concepts.
 
The wife of a friend of mine died a couple of years ago. One of his daughters (early twenties) posted a tribute to her mum on her social media. People commented, and she responded to one of the comments with words along the lines of she'd give anything to be with her mum again.

Over the next few hours her feed filled with "bereavement" material, which included links discussing the best ways to commit suicide.

This went on for weeks, and of course separating the emotions of bereavement from deep depression in someone, while grieving yourself, is probably quite difficult. She was engaging with "people" that she thought were understanding but who were actually leading her down a dark path.

She took an overdose. Luckily her sister found her in time and this story unravelled.

At the same time, I was involved through my work in discussions about how social media platforms that operate marketplaces (no need to mention any names) were shirking any responsibility for allowing openly fraudulent adverts. The social media firms were just interested in their bottom line ... not the human impact of what they were allowing.

That's two examples of the stuff that this legislation is looking to protect us against. We do need to be careful about not heading down an Orwellian path ... arguably that's where the algorithms will take us if we don't have some sensible legislation to require those who make money from the clicks on their platforms to show some responsibility for how they are used.
 
... Policing is separate and distinct from government. The judiciary is separate and distinct from government. This has nothing to do with any "ruling government's point of view".
Government sets the judicial landscape: it sets, amends and enacts laws as legislation. The police then enforce said laws, and the judiciary then prosecutes those laws.

When governments change laws or want to focus on one area, then the police and judiciary have to follow.
 
Lucky you. It's pretty obvious from my feed that content is being pushed at me that I've NEVER looked for or engaged with.
I don't have that problem. Just call me No-Mates. I reject all invitations etc.
 
I agree that online content could cause problems for some people with pre-existing mental health issues, but surely even regular TV and film content exposes viewers to all kinds of potential radicalisation triggers if they are predisposed to being seriously influenced by what they see or read? ... The internet is full of content both good and bad, so if parents have a child with known or suspected mental health issues then it is essential that the content they view is monitored carefully, rather than just laying the blame at someone else's door when things go wrong.

I reckon you haven't actually thought that through very well.

On the one hand you say "personal responsibility", and your anecdote (one anecdote doesn't equal data) relates that "I don't get unwanted content, so others shouldn't either". Your opinion also appears to be that people "won't get self-harm content unless they search for self-harm content", and that a "predisposition to mental health issues" is just bad luck for them.

On the other hand, it is the fault of the parents if they haven't "monitored the content of their children"...


Scrutinising these all together, I would offer the following:

- First and foremost - there is the issue of parents monitoring content - surely that explicitly acknowledges that the content has already been served to their children - at which point it "cannot be unseen".

- Second is the issue of "parents" - and the contradiction this implies with "personal responsibility".
- Perhaps the idea of "parental monitoring" is just not feasible or even realistic in today's online society? - in which case, who becomes "the parent"?

- Proposing that "parental monitoring" is an acceptable avenue - what is the real-world difference between "parental" and "societal" monitoring?

- Bearing in mind that "monitoring" implies an after-the-fact approach - perhaps a "before-the-fact" approach might be more appropriate - particularly when the "cannot be unseen" content is considered.

- Bearing in mind that you acknowledge that there are some people who have "predisposed conditions" - how then do we consider, as a society, as a community, acting in an appropriate "parental" fashion? Is it just a case of "anybody who is predisposed to mental health conditions: sure sucks to be you" kind of thing that you want to advocate for? Or perhaps there is some common societal or community behaviour that we could all agree upon?

- Combining the above, perhaps it is the duty of a civilised society (a society which acknowledges predisposition to mental health conditions) to freely and proactively adopt some kind of societal rules of conduct and expression?



An observation, if I may: how cute it is to espouse "freedom of expression" on an online forum, when that online forum has strict rules on what kind of things can be freely expressed within it. <- I freely admit that this makes me smirk ruefully.
 
That's a very naïve thing to say, because words lead to actions: Jo Cox, Molly Rose Russell and David Amess are all recent tragic deaths caused by online words.

Just consider how much harm has been done by online radicalisation.

If you are claiming that ideals of freedom of speech have effectively killed two people (the last was killed by a Muslim), which is bizarre given that freedom of speech has nothing to do with the murder of Jo Cox, who was killed by someone radicalised by mass immigration, which again has nothing to do with freedom of speech and everything to do with government policy (hence why she, as a representative of the government, was attacked), then I don't think you are arguing in good faith or rationally.

You lot (you know who you are; it's the same group every time) live in a world where every problem can be solved with more regulation, more rules and less freedom, and the targets of your authoritarianism are always your political opponents.
You seem to have no idea of the value of freedom. You blame it for everything, and you look down on anyone trying to promote it as somehow being the next genocidal leader, all the while driving us into the very type of systems that lead to such outcomes. Freedom costs; that is a reality you do not accept.

You guys don't seem to think at all, but rather react.
Your intellectualised talking points are just backwards and illogical.
If we keep chipping away at the fundamentals of our civilisation, we won’t have one.

I'll leave you with this thought, as potentially un-PC as it might 'seem'.
If freedom of speech has been responsible for the deaths of three people in the last, say, 70 years, don't you think that's an incredible return on investment, given the benefits of living in a free society which exists due to freedom of speech?

How many people have died in the last 10 years because of the Koran? I’ve never heard one of you show any concern over that.
If anything, you’re more likely to defend it.
You're more likely to blame rising Islamophobia on freedom of speech.

If you say 'I believe in free speech, but...' then you're an authoritarian. You have authoritarian tendencies and you want to control the world based on your principles.

We have a sensible guideline regarding free speech: no calls to violence. Hypocritically, I think we should also have a law saying 'no calls to remove freedoms' - a far more pernicious and dangerous use of language, imo.
 
If freedom of speech has been responsible for the deaths of three people in the last, say, 70 years,
No one has said that. You've just missed the point completely and aren't following the discussion.

Maybe you're happy with 'freedom of speech' if it allows people to lie, scam and manipulate vulnerable people.
 