Online Safety Act

niemeyjt
The end of UKW?

"Every single service will have to go through all 17 categories of priority illegal content and consider ‘non priority’ illegal content too. This is the case even if you're a small cycling based forum that only allows text based comments. That's just the way that Parliament designed the act. Unfortunately, that's not Ofcom's choice to do it that way,"

There's no situation a politician cannot make worse.

The Act applies to search services and services that allow users to post content online or to interact with each other, such as websites, apps and other services, including social media services, consumer file cloud storage and sharing sites, video-sharing platforms, online forums, dating services, and online instant messaging services.

CHECK

The Act also applies to services that have links to the UK, even if they are based outside it. If a service, platform or forum has a significant number of UK users, if the UK is a target market, or it is capable of being accessed by UK users and there is a material risk of significant harm to such users, then the law also applies.

CHECK

So - is someone going to do the necessary? I am not sure even Jacob's views on push sticks and sharpening would fall foul, but...

Mods - any thoughts / comments? (I mean about the OSA, not Jacob.)

source: https://www.theregister.com/2025/01/14/online_safety_act/
 
Update: I tried the link - https://ofcomlive.my.salesforce-sites.com/formentry/RegulationChecker

I answered:

1. Does your online service have links with the UK?
   Your answer: Yes

2. Do you provide a “user-to-user” service?
   Your answer: Yes

3. Do you provide a search service?
   Your answer: No

4. Does your online service publish or display pornographic content?
   Your answer: No, we don’t publish/display any pornographic content

5. Do any exemptions apply to the content on your online service?
   Your answer: No, my service is not limited to these types of content

6. Do any exemptions apply to your online service?
   Your answer: No, none of the above applies
The reply was:

[osa.png: screenshot of the checker's result, indicating that the regulations apply to this service]
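Out of interest, the checker's questions reduce to a handful of yes/no tests. Here is a minimal sketch of that logic in Python; the field names and the scoping rule are my reading of the questions above, not Ofcom's actual criteria:

```python
# Rough sketch of the decision logic implied by the checker's questions.
# All names are illustrative; this is not Ofcom's code or wording.
from dataclasses import dataclass

@dataclass
class Service:
    has_uk_links: bool      # Q1: significant UK users / UK target market
    user_to_user: bool      # Q2: users can post content others can see
    is_search: bool         # Q3
    shows_porn: bool        # Q4
    content_exempt: bool    # Q5: e.g. limited to exempt content types
    service_exempt: bool    # Q6: e.g. an exempt kind of service

def in_scope(s: Service) -> bool:
    """True if the Act's duties are likely to apply to the service."""
    if not s.has_uk_links:
        return False
    if s.content_exempt or s.service_exempt:
        return False
    return s.user_to_user or s.is_search or s.shows_porn

# The answers given above for this forum:
ukw = Service(has_uk_links=True, user_to_user=True, is_search=False,
              shows_porn=False, content_exempt=False, service_exempt=False)
print(in_scope(ukw))  # True - consistent with the checker's reply
```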
 
I think people need to read the detail of this before making silly comments.
Anyone that knows the harms of online content will appreciate why this legislation is happening and ought to be supportive of it.

Yes, site owners will have to do a risk assessment, possibly increase moderation, and may need to change some forum features. If that's too onerous, maybe they shouldn't be running sites at all.
 
Anyone that knows the harms of online content will appreciate why this legislation is happening and ought to be supportive of it.
It's been said before: to suppress the people and achieve total authoritarian control, you curb freedom of speech and public expression, then take away all sources of information that might not support your own views, and then the people can have your thoughts enforced upon them. This goes one of two ways: either you fall under a dictatorship like North Korea, or there is civil unrest and the dictatorship fails and is brought to justice.
 
I think people need to read the detail of this before making silly comments.
Anyone that knows the harms of online content will appreciate why this legislation is happening and ought to be supportive of it.

Yes, site owners will have to do a risk assessment, possibly increase moderation, and may need to change some forum features. If that's too onerous, maybe they shouldn't be running sites at all.
Would you like to volunteer to go through Phase 1 with a fine-tooth comb? That's the Risk Assessment Guidance and Risk Profiles, which runs to 84 pages of gobbledegook. I am sure that the Mods would be very grateful.
 
I'm a supporter of free speech, but not without responsibility and consequences.
There is too much toxic content on the internet and those responsible are not being held to account for either creating it or publishing it.
Some providers give an unmoderated platform to individuals who traditionally would never have had one. Everyone's voice SHOULD NOT be equal, and those too young and too undiscerning are harmed by it.
I haven't read the new legislation but I expect that I will broadly support it because I feel the pendulum is currently too far the wrong way.
At some point in the future, I may feel the opposite. That's normal. The happy medium isn't easy to find or to maintain for long.
 
I think there is generally a bit of an over-reaction to this...
A forum like this will count as a small, low-risk scenario, which means the list below, summarised as:

- Someone responsible (admin / moderators / owner) - already have
- Ability to moderate content - built in
- Reporting and complaints - already have the ability to report
- Terms of Service - might need tweaking
- Ability to block users - built in

Not seeing anything terribly exciting, novel, or new in that...

Governance and Accountability
- Individual accountable for illegal content safety duties and for reporting and complaints duties

Content Moderation
- Having a content moderation function to review and assess suspected illegal content
- Having a content moderation function that allows for the swift takedown of illegal content

Reporting and Complaints
- Enabling complaints
- Having an easy-to-find, easy-to-access and easy-to-use complaints system and processes
- Appropriate action for relevant complaints about suspected illegal content
- Appropriate action for relevant complaints which are appeals: determination (services that are neither large nor multi-risk)
- Appropriate action for relevant complaints which are appeals: action following determination
- Appropriate action for relevant complaints about proactive technology, which are not appeals
- Appropriate action for all other relevant complaints
- Exception: manifestly unfounded complaints

Terms of Service
- Terms of service: substance (all services)
- Terms of service: clarity and accessibility

User Access
- Removing accounts of proscribed organisations


The aim is to block:
- child sexual abuse
- controlling or coercive behaviour
- extreme sexual violence
- extreme pornography
- fraud
- racially or religiously aggravated public order offences
- inciting violence
- illegal immigration and people smuggling
- promoting or facilitating suicide
- intimate image abuse
- selling illegal drugs or weapons
- sexual exploitation
- terrorism

Do you really think that there is any such issue here?
All of that is already moderated out - and you are not required never to have it, simply to act fast to remove it.

The most challenging bit might be age verification, but arguably if there is no dodgy content then that might not be needed - and the solution might simply be to ask for a 4,000 word essay on the benefits and alternatives in sharpening ;)
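For what it's worth, that "act fast to remove it" duty maps onto the report-review-remove loop most forum software already has. A rough sketch follows; the function names and the category labels are illustrative shorthand for the list above, not the Act's wording:

```python
# Illustrative report -> takedown -> human review loop for a small forum.
# Category labels are shorthand, not legal definitions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

PRIORITY_CATEGORIES = {
    "csae", "coercive_control", "extreme_violence", "extreme_porn",
    "fraud", "public_order", "incitement", "people_smuggling",
    "suicide_promotion", "intimate_image_abuse", "illegal_sales",
    "sexual_exploitation", "terrorism",
}

@dataclass
class Report:
    post_id: int
    category: str
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def handle_report(report: Report,
                  hide_post: Callable[[int], None],
                  notify_mods: Callable[[Report], None]) -> None:
    """Hide suspected priority illegal content first, then review it."""
    if report.category in PRIORITY_CATEGORIES:
        hide_post(report.post_id)   # swift takedown, pending review
    notify_mods(report)             # a moderator still makes the final call
```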
 
I think there is generally a bit of an over-reaction to this...
A forum like this will count as a small, low-risk scenario...
Are you saying that this does not apply? Good news if it doesn't.

[Screenshot 2025-01-14 at 16.51.08.png: the Ofcom checker result]
 
Are you saying that this does not apply? Good news if it doesn't.

I'd imagine the owners of the site will either have or develop a standard risk assessment across the forums they own, and make minor adaptations to each according to the needs of the forum. You can probably pretty much download an assessment and adapt that.
 
I agree with Akirk that a forum like this is going to be largely left alone. If everyone on here agrees to the terms that should be the end of it.

I do think the Online Safety Bill is possibly more about controlling dissent and opinion under the guise of protecting people. It seems to me that really bad people are going to circumvent any protection and just use other methods anyway.
The rest is just freedom erosion in disguise; the vague use of the word "harm" is problematic given that nowadays people can be "harmed" by others not agreeing with them.
 
Are you saying that this does not apply? Good news if it doesn't.

Of course not - but you do a risk assessment to work out what you need to do. That takes about 10 seconds when you have a forum that doesn't meet any of the criteria for concern, and then, as posted above, those are the active things you need to do...

It really isn't complicated: don't host illegal or dodgy stuff and be respectful to each other - i.e. standard politeness.

A woodworking forum isn't the place for any of the 17 illegal activities and fortunately they haven't included dodgy videos on using table saws without riving knives / safety guards - nor discussions on how to sharpen a chisel :D
 
I do think the Online Safety Bill is possibly more about controlling dissent and opinion under the guise of protecting people.
You can be paranoid, but a lot of this has been driven by the suicide of Ian Russell's daughter, caused by Instagram algorithms pushing harmful content to her. That's got a LOT of publicity.
 
I'd imagine the owners of the site will either have or develop a standard risk assessment across the forums they own, and make minor adaptations to each according to the needs of the forum. You can probably pretty much download an assessment and adapt that.
That is one viewpoint. But don't forget that when the EU GDPR rules were implemented, some US-based websites simply blocked all EU IP addresses. And what about global sites like FOG?
 
You can be paranoid, but a lot of this has been driven by the suicide of Ian Russell's daughter, caused by Instagram algorithms pushing harmful content to her. That's got a LOT of publicity.
Things like this are heartbreaking, and clearly "social media" like Instagram and TikTok are attempting to turn us all into zombies with tiny repeated dopamine hits.
Studies have shown that the mental health of children and teens has fallen dramatically since smartphones became ubiquitous among high-school-age children, and judging by the number of toddlers I have seen in prams and pushchairs with iPads, this is likely to get worse. We as a society are going to have to get used to the almost symbiotic nature of the smartphone/human hybrid and, no doubt soon enough, the implanted communication chip.

However, the blanket control of everything "online" is probably not going to be super helpful, as people up to no good don't care about rules and regulations anyway.
I would hope that some of the advanced AI being developed could be used here to go through Instagram very quickly and find and remove the really harmful content almost as fast as it appears.
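If it helps to picture it, such a system is essentially a classifier feeding a human review queue. A toy sketch only; real platforms use trained models rather than keyword lists, and everything named here is hypothetical:

```python
# Toy automated screening pass: flag posts for human review.
# Real systems use trained classifiers; this keyword list is a stand-in.
HARM_SIGNALS = ("self-harm", "pro-suicide")  # placeholder terms

def screen_posts(posts: list[str]) -> list[int]:
    """Return indices of posts that should go to a human reviewer."""
    flagged = []
    for i, text in enumerate(posts):
        if any(term in text.lower() for term in HARM_SIGNALS):
            flagged.append(i)   # flag, don't auto-delete: humans decide
    return flagged
```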
 