Thought on AI

I've nothing against sharpening threads, but it would be a shame, in my opinion, if this should evolve into one.
At the moment AI cannot sharpen anything, because it has no need to and, as yet, no limbs. When it becomes more android and could sharpen a chisel, it still won't, because it will decide that it is illogical to cut down trees for wood when you can make everything on a 3D printer.
 
At the moment AI cannot sharpen anything, because it has no need to and, as yet, no limbs. When it becomes more android and could sharpen a chisel, it still won't, because it will decide that it is illogical to cut down trees for wood when you can make everything on a 3D printer.
If AI gets to the point where it wants to build things, it will need to mine or otherwise acquire raw materials for its 3D printer. Cutting down trees (which are self-replicating) is more logical than using a finite resource such as oil.
 
If AI gets to the point where it wants to build things, it will need to mine or otherwise acquire raw materials for its 3D printer. Cutting down trees (which are self-replicating) is more logical than using a finite resource such as oil.
I think you're correct about the trees being the better choice, but I thought I would look up what PLA, the general go-to filament for 3D printing, is made of:

From Brave AI search:

What is PLA made of:

  • Lactic acid molecules: Chemically, PLA is made from lactic acid molecules. The lactic acid is fermented from plant matter under precisely controlled conditions.
  • Fermentable sugar: PLA is produced from any fermentable sugar. Most PLA is made from corn because corn is one of the cheapest and most available sugars globally. However, sugarcane, tapioca root, cassava, and sugar beet pulp are other options.
  • Lactide dimer: The lactic acid is fermented to produce lactide dimer, which is then thermally degraded to produce lactide.
  • Ring-opening polymerization: The lactide is then polymerized using ring-opening polymerization in the presence of a catalyst to produce PLA.
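To put the list above in one line (a simplified scheme; real processes differ in conditions, catalysts and intermediates):

\[
n\,\mathrm{C_3H_6O_3}
\xrightarrow{\ \text{condensation},\ -\mathrm{H_2O}\ }
\text{oligomers}
\xrightarrow{\ \text{depolymerisation}\ }
\tfrac{n}{2}\,\mathrm{C_6H_8O_4}
\xrightarrow{\ \text{ring-opening, catalyst}\ }
(\mathrm{C_3H_4O_2})_n
\]

Each pair of lactic acid molecules sheds two waters on the way to the cyclic lactide, and ring-opening then links the C3H4O2 repeat units into the polymer chain - so the feedstock really is plant sugar rather than oil.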
 
Interesting take about AI here. One point: it is artificial more than it is intelligent; it has no independent motivation, creativity or purpose. More of a spell checker than a spell weaver.
 
Interesting take about AI here. One point: it is artificial more than it is intelligent; it has no independent motivation, creativity or purpose. More of a spell checker than a spell weaver.
In some cases, the lack of motivation or purpose might lead to more honesty.
Except the "training" fodder comes mostly from the internet at the moment, which can be anything but honest.
 
Star Trek launched in 1966. Technology was science fiction - the product of a fertile imagination detached from reality. We looked in wonder at the "communicator", never thinking that one day we would all have a smartphone whose only weakness was an inability to "beam me up".

We now rely upon technology to do much of what, a few decades ago, would have required human intervention - machines which analyse and fix themselves, monitor real-time data and adjust settings accordingly (medical, industrial processes), autonomous vehicles, etc.

The real issue is not whether technology is "intelligent" - we evidently have difficulty even defining "intelligent" - but what humanity asks (or allows) it to do. Devices that "think" - anticipate, schedule, plan, react, implement, etc. - would have been remarkable 60 years ago.

We may be some decades away from the point at which some central AI can initiate all the activities required to complete complex tasks without human input - e.g. mining raw materials, transporting, processing, combining with other materials to make a complex device.

But individual elements already exist - automated manufacturing, warehousing and transportation can make limited demands on human input once set up. The capacity to handle very large, complex data sets allows machines to better humans in some tasks.

It is only a matter of time before these discrete processes can be integrated. At some point in the future we could ask, for instance, "please build 100 cars" (or whatever), only to be told by the AI device "I have analysed demand and will only make you the 57 you actually need".
 
I have worked in technology nearly all my life, and whilst it has generally been advantageous, I'm starting to see pitfalls, mainly as we go down a more automated, generative-AI route.

I could start at the point where people are looking for employment: CVs are now auto-scanned, not read by a person, and the CV has potentially been written for the candidate, so there is a removal of human interaction. Often the best hires don't have the best CV, but those people aren't even getting into the process any more, so what are we losing?

Leaving that aside and turning to generative AI (Artificial Intelligence): this is the process of using LLMs (Large Language Models) to effectively decide an outcome. It works much as we humans do - we take what we know, or have heard, or believe, plus a set of contributory factors, to make an ultimate decision. Sounds good, right...

BUT... let me paint this scenario in layman's terms (or plain English as best I can, so it won't be perfect - geeks take note). Call the LLM's training material its data sources. Data sources such as the internet can be flawed. How so? Well, I'm sure everyone has heard of fake news; if fake news is included in the source or reference data, it effectively gains validation. The more often it is used or quoted, the more validated it gets, until it appears to be the primary response and is treated as the most obvious answer, simply because it has been quoted and referenced most often... see the paradox forming?
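To make that validation loop concrete, here is a toy Python sketch (invented numbers; no real ranking or training system works this simply): score every claim by how often it has been cited, let each new article cite claims in proportion to those scores, and watch an early head start compound.

import random

# Toy model of "repetition equals validation": each claim's score is just
# its citation count, and every new article picks which claim to repeat
# in proportion to how often it has already been cited.
random.seed(1)

citations = {"accurate_story": 3, "fake_story": 5}  # the fake got an early head start

for _ in range(1000):  # a thousand new articles, one citation each
    names = list(citations)
    weights = list(citations.values())
    cited = random.choices(names, weights=weights)[0]
    citations[cited] += 1

print(citations)
# Typical outcome: the fake story's small early lead compounds until it is
# the "primary response" - validated only by being quoted, never checked.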

Now people will talk about controls etc., but I think we have to be very careful how much control we start to hand over to machines and logic, artificial intelligence, etc...

I'm not doom-mongering, but I am an advocate of ensuring we retain or develop proper control mechanisms, and that the entire population understands the implications of the road upon which we are travelling - and that doesn't even scratch the surface...

Never thought I'd be spouting this on a woodworking forum!

I'd almost prefer starting a thread on sharpening :ROFLMAO:
 
I have worked in technology nearly all my life, and whilst it has generally been advantageous, I'm starting to see pitfalls, mainly as we go down a more automated, generative-AI route.

I could start at the point where people are looking for employment: CVs are now auto-scanned, not read by a person, and the CV has potentially been written for the candidate, so there is a removal of human interaction. Often the best hires don't have the best CV, but those people aren't even getting into the process any more, so what are we losing?

Leaving that aside and turning to generative AI (Artificial Intelligence): this is the process of using LLMs (Large Language Models) to effectively decide an outcome. It works much as we humans do - we take what we know, or have heard, or believe, plus a set of contributory factors, to make an ultimate decision. Sounds good, right...

BUT... let me paint this scenario in layman's terms (or plain English as best I can, so it won't be perfect - geeks take note). Call the LLM's training material its data sources. Data sources such as the internet can be flawed. How so? Well, I'm sure everyone has heard of fake news; if fake news is included in the source or reference data, it effectively gains validation. The more often it is used or quoted, the more validated it gets, until it appears to be the primary response and is treated as the most obvious answer, simply because it has been quoted and referenced most often... see the paradox forming?

Now people will talk about controls etc., but I think we have to be very careful how much control we start to hand over to machines and logic, artificial intelligence, etc...

I'm not doom-mongering, but I am an advocate of ensuring we retain or develop proper control mechanisms, and that the entire population understands the implications of the road upon which we are travelling - and that doesn't even scratch the surface...

Never thought I'd be spouting this on a woodworking forum!

I'd almost prefer starting a thread on sharpening :ROFLMAO:
Your point on source material is valid, but keep in mind that the false material is mainly created and regurgitated by humans, who are even more adept at drawing poor conclusions than LLMs, to say nothing of hallucinations…
Frankly, you have to take the output of an LLM with the same pinch of salt you apply to many of today's human experts. Generally, though, the LLM outputs hold together logically, which is more than can be said with a human in the loop.
 
I work in the NHS and with technology. I am becoming more and more cynical about the holistic value of these things with age, and this is especially true of AI. I have done some research into the carbon impact of AI, and it is not small.

Firstly, the data centre expansion to support it drives an opportunity cost where there is competition with, for example, housing: you can't get new houses hooked up because of waiting lists for water and power. This is a really big problem in parts of London and Dublin, for instance. There is also an opportunity cost with renewables, where data centres are a key funder and user, outcompeting the decarbonisation of our existing economy. AI is also causing the companies providing it immediate problems in delivering their own decarbonisation, because it represents growth and a rapid increase in demand, and this cannot be delivered by renewables.

This comes back to Jevons paradox and the problem that human efficiencies / deleveraging manual labour do not reduce demand, because cost and accessibility more than compensate with growth and increased demand overall. This question is obviously closely related to how screwed we are with climate breakdown, without an unnecessary, massively increasing demand burden on resources driven by AI. The problem is that capitalism is like going downhill - it is the natural direction of travel - and developing human systems that constrain growth and overall demand is the opposite of what people think is good for them. So we won't just remain screwed, we will accelerate how screwed we are.
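To put numbers on the Jevons point, here is an illustrative Python sketch; every figure is invented, not a forecast.

# Illustrative Jevons-paradox arithmetic; all figures are made up.
energy_per_task = 10.0            # kWh per task before the efficiency gain
tasks_demanded = 1000             # tasks wanted at the old price

before = energy_per_task * tasks_demanded        # 10,000 kWh in total

energy_per_task = energy_per_task / 2            # efficiency doubles...
tasks_demanded = tasks_demanded * 3              # ...but cheaper tasks spur demand

after = energy_per_task * tasks_demanded         # 5 kWh x 3,000 = 15,000 kWh

print(f"before: {before:.0f} kWh  after: {after:.0f} kWh")
# Total consumption rises from 10,000 to 15,000 kWh: the efficiency gain
# is swamped by the demand it induces, which is Jevons in miniature.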
 
Your point on source material is valid but keep in mind the false material is mainly created and regurgitated by humans. Who are even more adept at drawing poor conclusions than LLM, to say nothing of hallucinations…
Frankly you have to take the output of an LLM with the same punch of salt you apply to many of today’s human experts. Generally though the LLM outputs hold together logically, which is more than can be said with human in the loop.
It doesn't have the first clue about how logic works. It's remarkably good at extracting and parroting strands of logic within source material.
 
I work in the NHS and with technology
In some ways data is key to the running of the modern NHS, yet when you visit any hospital it is more than evident that patient data is often in paper form, being wheeled around the building by some porter. You watch a nurse take patient readings on some machine, write them on the back of her hand and then transfer them into a computer terminal - why is the machine not interfaced to the system?

When you are in hospital, your data should be easily available for every department to read and update, rather than being written down. Even better, the data should be available to all hospitals, so that wherever you are, if some medic needs to know about you they can find the information.
 
It doesn't have the first clue about how logic works. It's remarkably good at extracting and parroting strands of logic within source material.
The entire premise of AI is logically based; every result you get is logically constructed. Of course AI doesn't 'know' logic - that would imply it can reason, which is not yet the case. The same is not true of humans, where entrenched positions cannot be budged by any amount of logic.
 
In some ways data is key to the running of the modern NHS, yet when you visit any hospital it is more than evident that patient data is often in paper form, being wheeled around the building by some porter. You watch a nurse take patient readings on some machine, write them on the back of her hand and then transfer them into a computer terminal - why is the machine not interfaced to the system?

When you are in hospital, your data should be easily available for every department to read and update, rather than being written down. Even better, the data should be available to all hospitals, so that wherever you are, if some medic needs to know about you they can find the information.
That's one of the perceived benefits of digitisation, sure, though you have to factor in that entering the data into the system is way slower than using paper at the point of initial data entry, and it is also way more expensive. This wasn't the point I was making, though. The point I was making is that the AI, which is a further additionality (on top of that digitisation), has some very serious negatives, which bear very strongly on our economy and the limits thereof.
 
The entire premise of AI is logically based; every result you get is logically constructed. Of course AI doesn't 'know' logic - that would imply it can reason, which is not yet the case. The same is not true of humans, where entrenched positions cannot be budged by any amount of logic.
The entire premise of LLMs is statistically based, not logically based, and the results are constructed from statistical probabilities, not from logical deductions. LLMs will never be able to reason, because of how they are constructed - unlike humans, some of whom can, at least some of the time.
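For anyone wondering what "constructed from statistical probabilities" looks like in practice, here is a minimal Python sketch; the three-word vocabulary and the scores are invented for illustration, nothing like a real model's scale.

import math
import random

# Minimal sketch of next-token selection: a model emits a score (logit) per
# vocabulary item, softmax turns the scores into probabilities, and the next
# word is *sampled* from them - nowhere is anything logically deduced.
random.seed(0)

vocab = ["sharp", "dull", "banana"]   # hypothetical three-word vocabulary
logits = [2.1, 1.3, -3.0]             # invented scores for "the chisel is ..."

exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]  # softmax: non-negative, sums to 1

next_word = random.choices(vocab, weights=probs)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_word)
# "banana" is merely very unlikely, not ruled out: statistics, not logic.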
 
I'd not heard of ChatGPT until I read this thread just now. Is it a programme that 'averages out' or synthesises information on whatever subject is found in searches on the net?
 
As for logic, it's clearly very useful, but logic can mislead in terms of human behaviour if, for example, ethics are not part of the process. Think of Swift's satire of logical thought in A Modest Proposal - that the impoverished Irish could resolve their problems by selling their babies as food. Perfectly logical, as food is provided and the population is reduced, thus reducing demand; ethically abhorrent, of course, but perfectly logical. I suppose that's the kind of nonsense AI might come up with? Nothing to do with ChatGPT though.
 
The entire premise of LLMs is statistically based, not logically based, and the results are constructed from statistical probabilities, not from logical deductions. LLMs will never be able to reason, because of how they are constructed - unlike humans, some of whom can, at least some of the time.
I was using 'logical' in terms of human usage, i.e. is the output defensible, rather than in terms of technical rules or algorithmic logic. AI doesn't base its output on the voices it hears in its head, for example.
 
I was using 'logical' in terms of human usage, i.e. is the output defensible, rather than in terms of technical rules or algorithmic logic. AI doesn't base its output on the voices it hears in its head, for example.
I don't know if this is my ignorance of how people speak when speaking of AI, but that post means little to me, Paul.
 
I don't know if this is my ignorance of how people speak when speaking of AI, but that post means little to me, Paul.
My point was that the results from an AI are arguably more likely to be a 'realistic' or 'logical' summation of the input than human output, which is often coloured by prejudices and opinions. It was not a definition of how AI works.
A discussion of transformers, FFNNs, tokenisation and training is probably a bit deep for a workshop forum.
 
