Thought on AI

My point was that the results from an AI are arguably more likely to be a ‘realistic’ or ‘logical’ summation of the input than human output, which is often coloured by prejudices and opinions. It was not a definition of how AI works.
A discussion of transformers, FFNNs, tokenisation and training is probably a bit deep for a workshop forum.
What threw me was 'does', which you edited to 'doesn't' while I was doing the post! I think I understand what you are getting at better now :)
 
What threw me was 'does', which you edited to 'doesn't' while I was doing the post! I think I understand what you are getting at better now :)
I’m very dyslexic and very often don’t type the right word. I have to leave it a minute or two and come back to it as I read the wrong word as the right word if I check right away.
 
My point was that the results from an AI are arguably more likely to be a ‘realistic’ or ‘logical’ summation of the input than human output, which is often coloured by prejudices and opinions. It was not a definition of how AI works.
A discussion of transformers, FFNNs, tokenisation and training is probably a bit deep for a workshop forum.
It's not applying logic but probability though.

The example of asking it for advice on how to treat a gunshot wound to the neck is a good one (of many). I can't recall which model it was, but the language surrounding the context of the question let it predict the sensible answer of pressing hard on the wound; yet it couldn't distinguish statistically well enough to stop itself suggesting a tourniquet if one is available (you can't tourniquet a neck). No logic, no understanding, just a 'stochastic parrot'.
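
To make the 'stochastic parrot' point concrete, here is a toy Python sketch of what a language model does at each step: it picks the next word by weighted chance from a probability table. The words and numbers are invented for illustration; real models work over tens of thousands of tokens, but the principle is the same: there is no safety check, just sampling.

import random

# Toy next-word probabilities a model might assign after the prompt
# "For severe bleeding, apply..." (all numbers invented for illustration).
next_word_probs = {
    "pressure": 0.55,       # statistically the most common continuation
    "a tourniquet": 0.25,   # also common near 'bleeding', even when unsafe (a neck!)
    "ice": 0.15,
    "butter": 0.05,
}

# The model simply samples from the distribution; nothing here checks
# whether the continuation is medically sensible.
choice = random.choices(
    list(next_word_probs), weights=list(next_word_probs.values()), k=1
)[0]
print(choice)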
 
I have worked in technology nearly all my life and, whilst generally it has been advantageous, I'm starting to see pitfalls, mainly as we go down a more automated, generative-AI route.

I could start at the point where people are looking for employment: CVs are now auto-scanned, not read by a person, and the CV has potentially been written for the candidate, so there is a removal of human interaction. Often the best hires don't have the best CVs, but those people aren't even getting into the process any more. So what are we losing?

Leaving that aside and going to Gen (Generative) AI (Artificial Intelligence): this is the process of using LLMs (Large Language Models) to effectively decide an outcome... much like we as humans would do, taking what we know, have heard or believe, plus a set of contributory factors, to make an ultimate decision. Sounds good, right...

BUT... let me paint this scenario. In layman's terms (or plain English as best I can, so it won't be perfect; geeks take note), the LLM's data sources, such as the internet, can be flawed. How so? Well, I'm sure everyone has heard of fake news. If fake news is included in the source or reference data, it effectively gains validation: the more often it is used or quoted, the more validated it gets, until it appears to be the primary response and is treated as the most obvious answer, simply because it has been quoted and referenced most often... see the paradox forming?
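
To illustrate that feedback loop, here is a toy Python sketch (all figures invented): if a fake story gets re-quoted faster than the accurate one, any system that ranks answers by how often they appear in its sources will soon report the fake as the 'obvious' answer.

# Invented starting counts: how many sources carry each version of a story.
counts = {"accurate report": 10, "fake story": 4}

for round_no in range(1, 6):
    # Invented growth rates: the sensational fake gets re-quoted roughly
    # twice as often per round as the accurate report.
    counts["accurate report"] = int(counts["accurate report"] * 1.2)
    counts["fake story"] = int(counts["fake story"] * 2.0)
    top = max(counts, key=counts.get)
    print(f"round {round_no}: {counts} -> most-quoted answer: {top}")

# Within a couple of rounds the fake story is the most-quoted version,
# so a purely frequency-based ranking now treats it as the primary response.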

Now people will be talking about controls etc. but I think we have to be very careful how much control we start to hand over to machines and logic, artificial intelligence, etc....

I'm not doom mongering, but I am an advocate of ensuring we retain or develop proper control mechanisms, and that the entire population understands the implications of the road upon which we are travelling. And that doesn't even scrape the surface....

Never thought I'd be spouting this on a woodworking forum!

I'd almost prefer starting a thread on sharpening :ROFLMAO:
I would also advocate the 'Three Laws of Robotics' that Isaac Asimov created in his sci-fi stories of the 1940s and '50s.
 
I work in the NHS and with technology, and I am becoming more and more cynical about the holistic value of these things with age. This is especially true of AI. I have done some research into the carbon impact of AI, and it is not small.

Firstly, the data centre expansion to support it drives an opportunity cost where there is competition with, e.g., housing: you can't get new houses hooked up because of waiting lists for water and power. This is a really big problem in parts of London and Dublin, for instance. There is also an opportunity cost with renewables, where data centres are a key funder and user, outcompeting the decarbonisation of our existing economy. AI is also causing the providing companies immediate problems with their own decarbonisation plans, because it represents growth, a rapid increase in demand, and this cannot be delivered by renewables.

This comes back to Jevons Paradox and the problem that efficiencies and the de-leveraging of manual labour do not reduce demand, because cost and accessibility more than compensate, producing growth and increased demand overall. This is obviously closely related to how screwed we are with climate breakdown, even without an unnecessary, massively increasing demand burden on resources driven by AI.

The problem is that capitalism is like going downhill: it is the natural direction of travel, and developing human systems that constrain growth and overall demand is the opposite of what people think is good for them. So we won't just remain screwed, we will accelerate how screwed we are.
Apologies, maybe I've not woken up properly, but I cannot make sense of this message. Was it written by an AI program?
 
Apologies, maybe I've not woken up properly, but I cannot make sense of this message. Was it written by an AI program?
Apologies. Written too fast! A not much slower attempt...

AI is massively increasing demand for resources, not just power but also water. This is:
- creating hot spots from exhaust heat, making some areas even more unpleasant to live in
- using up power and water capacity, which prevents other things, like new housing, from being permitted
- creating opportunity costs for renewables, where new renewable generation is connected directly to AI data centres. The new AI economy is being powered in preference to the older, long-established economy, which we have only just started to decarbonise.
- causing companies like Google to miss their own decarbonisation targets.

For Jevons Paradox, see: Jevons Paradox

Basically, when something makes a task easier to do, it doesn't reduce demand for resources (through efficiency); it increases demand for resources. AI is just the next thing proving the paradox true. The point is that "efficiency" is often cited as the main way to make an economy greener. The reality is the very opposite: the more you improve efficiency, the more you increase growth. Jevons Paradox is now well evidenced at the level of the global economy, which is bad news for trying to address global climate breakdown.
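
A back-of-the-envelope example of the paradox in Python (all figures invented): halve the energy each task needs, and if the cheaper, easier task then triples demand, total energy use goes up by half rather than down.

# All figures invented, purely to show the arithmetic of Jevons Paradox.
energy_per_task = 1.0      # energy units per task before the efficiency gain
tasks = 1_000_000          # demand before

total_before = energy_per_task * tasks

energy_per_task /= 2       # efficiency doubles: half the energy per task...
tasks *= 3                 # ...but the cheaper, easier task triples demand

total_after = energy_per_task * tasks
print(f"before: {total_before:,.0f}  after: {total_after:,.0f}")  # 1,000,000 vs 1,500,000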
 
Power consumption is the BIG hidden issue with AI. For example, if Google were to switch from its current search algorithms to an AI (LLM) search engine, it would consume 10x the energy.
The question is: would it give 10x better results, resulting in 10x fewer search requests?
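
A rough break-even sum in Python (the 10x figure is from the post above; everything else is invented): if every AI search costs ten times the energy, search volume has to fall by 90% just to hold total consumption level.

# Illustrative break-even arithmetic; only the 10x ratio comes from above.
legacy_energy_per_search = 1.0   # arbitrary energy unit
ai_energy_per_search = 10.0      # the 10x figure

searches_before = 100.0          # indexed search volume
total_before = searches_before * legacy_energy_per_search

# Volume at which total energy with AI search merely matches today's total:
breakeven_volume = total_before / ai_energy_per_search
print(breakeven_volume)  # 10.0, i.e. a 90% drop in searches just to break even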
 
Yes, you would not want your doctor to be some AI computer, because it would weigh things up in such a way as to deem many people beyond further use, or not cost-effective to repair, as it has no human emotion and places no value on life itself.
 
Yes, you would not want your doctor to be some AI computer, because it would weigh things up in such a way as to deem many people beyond further use, or not cost-effective to repair, as it has no human emotion and places no value on life itself.
This is possibly true. OTOH, our human bias is always to want to do more and more to help more and more people, which rightly reflects our moral instincts. This is really difficult when we are spending monumental amounts of money on new molecular drugs for cancer, which are nuking the wider NHS budget. Meanwhile the NHS budget grows and grows while we do little to improve social care and the lives of those who are living to ever older ages with poor quality of life, often in institutional care. We need to divert funding to social care to improve the lives of those who are living longer, and we need to prioritise, and ration according to those priorities and equality, within the realistic constraints of what we can afford within our economy, which genuinely does have limits.
 
Power consumption is the BIG hidden issue with AI. For example, if Google were to switch from its current search algorithms to an AI (LLM) search engine, it would consume 10x the energy.
The question is: would it give 10x better results, resulting in 10x fewer search requests?
The benefit may not be in reducing search requests as quality improves: it is entirely possible that more search requests would result from the increased quality of the output.

The real question is whether the additional cost and environmental impact of AI searches would be justified by better actions resulting from those searches.

For society as a whole, more resources to identify better solutions may make sense. Corporate and social structures tend to operate in self-interested "silos", so optimum solutions, even if identified, may not be adopted.

For Google, the additional costs of AI search capability would need to be recovered:
  • charge for premium searches, advertising, click-throughs, etc.
  • if search volumes declined it is possible their existing revenues would suffer
  • it seems unlikely that government or industry subsidy would be forthcoming
 
Yes, you would not want your doctor to be some AI computer, because it would weigh things up in such a way as to deem many people beyond further use, or not cost-effective to repair, as it has no human emotion and places no value on life itself.
You’ve jumped into the realm of science fiction, which could be a risk one day but is a long way from where we are.

AI is actually making a massive difference in healthcare already. For instance, Machine Learning has made automated diagnosis of cancer from screenings much more reliable than relying on human analysis alone.
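
As an illustrative sketch only (a public demo dataset and scikit-learn, not whatever systems the NHS actually deploys): a classifier is trained on labelled screening measurements and then applies exactly the same learned rule to every new case.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Public demo dataset of tumour measurements (not real NHS screening data).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)  # a simple, standard classifier
model.fit(X_train, y_train)

# The same learned rule is applied to every held-out case, every time.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")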

I can see quite a few benefits to having a “robot doctor” if used in the right way.
 
For instance, Machine Learning has made automated diagnosis of cancer from screenings much more reliable than relying on human analysis alone.
I can see that type of use having huge benefits, as it is well within a computer's ability and always will be: thorough, no overlooking anything because it is a Monday or Friday, just logical analysis.
 