AI

 

ANDY00

By ANDY00, 1715693128

Humans live for an average of around 100 years and accumulate a vast amount of knowledge, of which they may pass on only about 20%. In the span of 5000 years, humans have progressed from throwing spears to landing on the moon, constructing space stations, and now developing artificial intelligence (AI). Unlike humans, artificial intelligence doesn't have a finite lifespan of around 100 years; rather, it exponentially increases in intelligence, roughly tripling its capabilities each year. Over the course of 5000 years, AI has the potential to reach god-like status in our eyes.

Considering that just a few hundred years ago witnessing someone fly a plane would have seemed like magic, yet with modern understanding it's deemed ordinary, it's challenging to fathom the advances AI will make. AI has already made remarkable progress; for instance, it has learned approximately 70% of whale language within just one year. Despite humanity's presence for thousands of years, we hadn't deciphered any of it, and it's only relatively recently that we've recognized whales' intelligence at all. 

Now, I know many will say AI isn't real, yada yada yada, and that's fine. But this 'thing' that supposedly isn't real is cleverer than I am. Maybe it's an algorithm or a computer program - still smarter than me. And it's doing things we cannot do: finding cancers, discovering new antibiotics effective against superbugs, deciphering animal languages, redesigning battery technology. I mean, in just one year it has accomplished so much that we couldn't, and it triples in intelligence every year. What will it be like in 2027? What will it give us? I'm hoping for the holodeck before my time's up, lol.

And not just here on Earth: AI also opens up the heavens to us in ways we've never truly explored. Human limitations, such as our relatively short lifespans, have hindered our ability to travel far into space beyond the few stars and planets nearby. However, with AI, we can now extend our reach into the vast unknown. An AI pilot could potentially navigate for thousands of years without being affected by the corrosive elements, such as oxygen or water, that wear our bodies down. Furthermore, AI could theoretically exist indefinitely, allowing for the exploration of both the limits of nothingness and the expanses of everything.

This subject really gets my brain firing. I have long believed that there were more advanced civilizations before us, and recently, that belief has been validated by numerous discoveries of civilizations on Earth that have existed and vanished. Examples include the Amazon rainforest civilizations, Göbekli Tepe, and the subterranean civilizations in Hungary.

If a civilization collapses, the signs of its existence don't necessarily take a long time to vanish. The longevity of remnants often depends on the level of advancement of the civilization. Examples like the Pyramids of Egypt or Göbekli Tepe demonstrate that more advanced civilizations can leave remnants that survive for extended periods. The Romans also left enduring structures, although modern civilization has contributed to their preservation. However, even with preservation efforts, the eventual disappearance of most signs of civilization is inevitable.

AI gives us an edge that no other civilization before us has had. Scientists have long warned that a global catastrophe is inevitable, whether by our own actions—such as nuclear war—or by natural processes like meteor impacts, solar events, or geological phenomena. Historically, all civilizations eventually fail, except for ants. However, AI may enable us to learn faster, potentially allowing us to anticipate and mitigate these dangers. Perhaps this time, we will finally venture out into the stars.

Certainly, life, in my opinion, will improve significantly. Technologies like prosthetics will soon become so advanced that distinguishing between real body parts and machines will be difficult. Already, we can implant chips into someone's brain to allow them to control devices with their mind. Imagine the possibilities in ten years with prosthetic limbs. It's incredible to think that people who might currently be confined to a wheelchair for life could return to normalcy relatively quickly, all thanks to technology.

Consider the case of the little girl who received a 100% electronic eye this year; it's simply incredible. Just forty years ago, such a feat would have been considered witchcraft.

I guess what I want to say is AI shouldn't be seen as a bad thing because it's not. It will inevitably change all our lives and the lives of generations of humans forever — as long as we don't all kill ourselves before it gets a chance. And AI isn't just an easy way to create images or some weird robot girlfriend. AI has the potential to give humans everlasting life for real, the potential to end world hunger for real, and the potential to explore the universe for real. Of course, we can also use it to wipe each other out. Most people think we'll do the latter, which may well be true. Yet, they would also say AI is just a dumb algorithm. If it's so dumb, why are humans the ones who want to use it to kill each other when it could fix all our problems?

Me? I'm very pro-AI. I'm pro-anything that reduces suffering, improves life, or cures sickness, etc.! Do I think AI will eventually take over the world? I hope so. It can't possibly do worse than humans have done. We have made hundreds of thousands of species extinct since our arrival. We've destroyed the air, the water, the forests — in fact, there's little we haven't destroyed yet.

Huw said, 1715693411

Humans live for an average of around 100 years and accumulate a vast amount of knowledge...?

Both of those are a bit generous.   ;)

ANDY00 said, 1715693693

Huw said

Humans live for an average of around 100 years and accumulate a vast amount of knowledge...?

Both of those are a bit generous.   ;)

Yea, I agree - over the years those numbers change dramatically. It was just an example from the current time; obviously not everyone lives that long. But AI could maybe make that a reality, or even longer, who knows :-) 

ClickMore 📷 said, 1715694039

Back in the 60s AI stood for something else. My father worked for the Ministry of Agriculture and drove to farms to Inseminate cows and pigs Artificially. Now techies are inseminating our lives with a new AI. But the original AI gave us milk, butter, cheese and meat etc. The new AI gives us none of these.

Photowallah said, 1715694298

AI is neither intelligent nor our friend, as the recent Brussels 'assisted' suicide case attests. It is programmed by people with limited imagination and experience (believe me, I've worked with them - they don't get out much), and on a limited development budget, thus riddled with flaws which have already killed people and will continue to do so as applications expand. Corporations are driving a coach pulled by the four horsemen of the apocalypse through copyright law and the creative arts. Personally I'm giving it no ground whatsoever, so far as I have any choice in the matter.

Jonathan C said, 1715694364

'AI' is the latest buzzword in software, and is being applied to a wide range of things by people in marketing to try and make their products sound great.
Personally I don't believe we do yet have a 'true' AI - none of the things claimed as AI are in any sense 'Self Aware'.
What we do have is some very impressive examples of 'machine learning' algorithms - where software is written to 'learn' from an initial training data set, and from that to extrapolate to expand on that data set.
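To make that concrete, here's a rough sketch of the 'train on a data set, then extrapolate' idea - a toy classifier with entirely made-up numbers (using scikit-learn's LogisticRegression purely for illustration; nothing like real medical software):

    # A minimal sketch of "learn from a training set, then extrapolate":
    # a toy classifier that learns to label samples as benign/malignant
    # from two invented measurements. The numbers are made up for
    # illustration only - this shows the shape of machine learning,
    # not real medical software.
    from sklearn.linear_model import LogisticRegression

    # Training data: [cell size, cell irregularity] -> 0 = benign, 1 = malignant
    X_train = [[1.0, 0.2], [1.2, 0.1], [3.5, 0.9], [3.8, 0.8], [1.1, 0.3], [3.6, 0.7]]
    y_train = [0, 0, 1, 1, 0, 1]

    model = LogisticRegression()
    model.fit(X_train, y_train)          # "learning" = fitting parameters to the data

    # "Extrapolating" = predicting labels for samples it never saw
    print(model.predict([[1.05, 0.25], [3.7, 0.85]]))   # expected: [0 1]
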
Some of these are very specialised - you mentioned the use of 'AI' for medical research, etc. but it's important to understand that the software that can 'search' for new antibiotics is a separate instance from the one looking at cancers, etc - in effect, you don't have one single super smart computer, you have a number of specialist computers - much as you have specialist human researchers in the same fields.
We also have the more general purpose ones - like ChatGPT, Midjourney, etc - but again, these are individually limited in their scope - and these are the ones that potentially pose a risk - not because I fear the machines will rise up, but because they offer the ability to make fiction appear as fact - allowing the unscrupulous to deceive in ways that were possible before, but are now easy, rather than requiring skilled individuals to create.
The problem with 'AI' is not the AI - it's the people who will misuse what the software can do.

ANDY00 said, 1715694583

ClickMore 📷 said

Back in the 60s AI stood for something else. My father worked for the Ministry of Agriculture and drove to farms to Inseminate cows and pigs Artificially. Now techies are inseminating our lives with a new AI. But the original AI gave us milk, butter, cheese and meat etc. The new AI gives us none of these.

We have decimated the gene pool of modern cows. We keep milk cows constantly impregnated to make them lactate more. We fill them with so many antibiotics that they leak into the surrounding agriculture. The cruelty we inflict on livestock is beyond measure. AI doesn't need to do anything to do better than us, but let's look at it anyway. As I said, AI has brought new antibiotics to counter the damage that flooding cows with them has done to our own resistance. It has given us biological ways to grow meat and better ways to cultivate land without pesticides. That's in roughly two years. What did we do in 4000 years, apart from figuring out how to make one cow fit into 20,000 burgers rather than 1000? I chose AI :-) 

ANDY00 said, 1715695616

Jonathan C said

'AI' is the latest buzzword in software, and is being applied to a wide range of things by people in marketing to try and make their products sound great.
Personally I don't believe we do yet have a 'true' AI - none of the things claimed as AI are in any sense 'Self Aware'.
What we do have is some very impressive examples of 'machine learning' algorithms - where software is written to 'learn' from an initial training data set, and from that to extrapolate to expand on that data set.
Some of these are very specialised - you mentioned the use of 'AI' for medical research, etc. but it's important to understand that the software that can 'search' for new antibiotics is a separate instance from the one looking at cancers, etc - in effect, you don't have one single super smart computer, you have a number of specialist computers - much as you have specialist human researchers in the same fields.
We also have the more general purpose ones - like ChatGPT, Midjourney, etc - but again, these are individually limited in their scope - and these are the ones that potentially pose a risk - not because I fear the machines will rise up, but because they offer the ability to make fiction appear as fact - allowing the unscrupulous to deceive in ways that were possible before, but are now easy, rather than requiring skilled individuals to create.
The problem with 'AI' is not the AI - it's the people who will misuse what the software can do.


As I said, I understand it's a type of program, an algorithm, etc. Years ago, we used to describe human thinking the same way - as programming. We teach and train humans, mice, monkeys - it's all programming, isn't it? A brain, in my opinion, is a biological computer of sorts, or at least that's the way I see it. And maybe at present, AI, in some people's opinions, is fairly infantile or simple, but it's only a few years old, if that. What will it be in 10 years or 20? It's already smarter than me; if it's a dumb program, then I am dumber than a dumb program in many cases..... 

And I realise there are many different types and specialist systems, but as time moves on they only get smarter and better at what they do. It's all good, I think - just my opinion, of course. 

Edited by ANDY00

playwithlight said, 1715695771

We have to stop viewing AI as an intelligent humanoid that will one day turn on us. Films like Blade Runner have conditioned us to see AI this way, but as Jonathan C points out, AI is a buzzword for machine learning, which itself requires programming with data and is often tailored to specific uses like medicine, facial recognition, or predicting the stock market, to give a few examples. 

The dangers with AI are what it provides to "bad actors": those that want to steal your identity, steal your money, disseminate false information, overpower critical infrastructure, etc. In the wrong hands it could have devastating outcomes for mankind, and these threats are real. 

Governments have proven useless in controlling these people. Just think of the powers Alphabet (Google), Facebook, TikTok, Amazon, etc. have, and how they know everything about you; herein lies the problem with uncontrolled AI. AI can be unbelievably incredible for man and, equally, without regulation and staunch control, a complete disaster. 

Unfocussed Mike said, 1715701348

ANDY00 said

And maybe at present, AI, in some people's opinions, is fairly infantile or simple, but it's only a few years old, if that. What will it be in 10 years or 20? It's already smarter than me; if it's a dumb program, then I am dumber than a dumb program in many cases..... 

Ehh? AI as a discipline -- as a set of technologies designed to create human-like intelligence -- is quite probably older than you (unless you are older than, say, Paul McCartney).

The first artificial neuron dates back to 1943 -- to literally before the transistor.

The first deep learning neural network is nearly sixty years old.

No commercially available AI is smarter than you. Not even close. As far as I am aware, no AI product has demonstrated even a fraction of the reasoning power, inquisitiveness, ability to learn self-directed or ability to reject failed strategies in general that an average family dog can.

LLMs and GANs are hyper-specialised so they appear highly knowledgeable, but the crucial thing at a basic level is that they cannot discern correct from incorrect, nor do they have a drive to do so. They even get less accurate if you try to correct them.

This is a discipline with a longer arc than almost any computer technology you use on a daily basis. And it is full of false starts, inflated claims, absolute religious fervour and massive setbacks.

Where it is right now is that there is technology to absorb a lot of language, and there are big enough companies betting that when AI is wrong they can sort it out with disclaimers and lawsuits.

ANDY00 said, 1715702239

Unfocussed Mike said

ANDY00 said

And maybe at present, AI, in some people's opinions, is fairly infantile or simple, but it's only a few years old, if that. What will it be in 10 years or 20? It's already smarter than me; if it's a dumb program, then I am dumber than a dumb program in many cases..... 

Ehh? AI as a discipline -- as a set of technologies designed to create human-like intelligence -- is quite probably older than you (unless you are older than, say, Paul McCartney).

The first artificial neuron dates back to 1943 -- to literally before the transistor.

The first deep learning neural network is nearly sixty years old.

No commercially available AI is smarter than you. Not even close. As far as I am aware, no AI product has demonstrated even a fraction of the reasoning power, inquisitiveness, ability to learn self-directed or ability to reject failed strategies in general that an average family dog can.

LLMs and GANs are hyper-specialised so they appear highly knowledgeable, but the crucial thing at a basic level is that they cannot discern correct from incorrect, nor do they have a drive to do so. They even get less accurate if you try to correct them.

This is a discipline with a longer arc than almost any computer technology you use on a daily basis. And it is full of false starts, inflated claims, absolute religious fervour and massive setbacks.

Where it is right now is that there is technology to absorb a lot of language, and there are big enough companies betting that when AI is wrong they can sort it out with disclaimers and lawsuits.


Many have previously asserted that AI wasn't truly intelligent. Only in the last two years has it taken a significant leap forward. As I mentioned, there are specialized versions for various applications. However, when considering intelligence in a general sense, the newest GPT language model, for example, seems almost human, retaining more information than any living human. By that definition, it would be more intelligent than a human. The fact is, ten years ago, if a computer talked like these systems can now, you would think it was possessed. These advancements are nothing short of amazing: it can see, hear, read, talk, think and reason. I'm not sure why people are so negative about them at all - if you can do better, let's see it :-) 

 

Edited by ANDY00

Unfocussed Mike said, 1715703050

ANDY00 said
Many have previously asserted that AI wasn't truly intelligent. Only in the last two years has it taken a significant leap forward.

It just looks like it has. Because the scale of data these technologies can work with is dramatically larger.

There have been many technological leaps forward. But I don't think we can talk about the evolution of intelligence here.

LLMs are good at generating plausible text; GAN image generators like Midjourney are better at generating plausible images. They have been trained to look convincing; it's their job. But it's difficult to demonstrate they actually understand what they are generating.

These things are word calculators and image calculators. They do not know what they are talking about. They make stuff up that is highly plausible within the context, but they have little deep understanding.
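If you want to see the "word calculator" point in miniature, here is a toy next-word generator I'd sketch it as -- it only tracks which word tended to follow which in a tiny invented corpus, so it produces plausible-looking word order with no notion of truth (real LLMs are neural networks at a vastly larger scale, but the "predict the next token" framing is the same):

    # A toy "word calculator": it predicts the next word purely from which
    # words followed which in its training text. It has no idea whether
    # what it produces is true -- only whether it is statistically plausible.
    # (Corpus invented for illustration.)
    import random
    from collections import defaultdict

    corpus = ("the whale sings to the ocean . the ocean answers the whale . "
              "the whale sings to the moon .").split()

    # Count which words follow which
    following = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word].append(next_word)

    def generate(start, length=8):
        word, output = start, [start]
        for _ in range(length):
            word = random.choice(following[word])   # pick a plausible next word
            output.append(word)
        return " ".join(output)

    # Plausible-looking output with no understanding behind it,
    # e.g. "the ocean answers the whale sings to the moon"
    print(generate("the"))
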

There are claims that LLMs like GPTs, once given even bigger working token contexts, will start to discover the true meaning of the words they are working with -- that all of knowledge is in some way fundamentally tied up in the words used to describe it. This is plainly wrong, if you ask me, but it's taken as a matter of faith by people who want to sell you stuff.

There were claims that a single layer of neurons could learn generally, after all. And then Papert and Minsky proved that there were core concepts a single layer could not learn. And that (and other research into limits of learning) triggered what is known as the "AI Winter".
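The textbook illustration of that limit is XOR -- here is a rough sketch (my own toy code, not the original analysis): a single-layer perceptron trained with the classic perceptron rule learns AND easily but can never get all four XOR cases right, because no straight line separates XOR's classes.

    # Sketch of the Minsky/Papert limitation: a single-layer perceptron
    # (one weight per input plus a bias, no hidden layer) trained with the
    # classic perceptron rule. It learns AND but cannot learn XOR,
    # because XOR is not linearly separable.

    def train_perceptron(samples, epochs=100, lr=0.1):
        w0, w1, b = 0.0, 0.0, 0.0
        for _ in range(epochs):
            for (x0, x1), target in samples:
                out = 1 if (w0 * x0 + w1 * x1 + b) > 0 else 0
                err = target - out
                w0 += lr * err * x0
                w1 += lr * err * x1
                b  += lr * err
        correct = sum(
            (1 if (w0 * x0 + w1 * x1 + b) > 0 else 0) == target
            for (x0, x1), target in samples
        )
        return correct, len(samples)

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    print("AND:", train_perceptron(AND))   # (4, 4) -- all four cases right
    print("XOR:", train_perceptron(XOR))   # never (4, 4): no single line separates XOR's classes
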

Personally I think another AI winter is coming, but driven from the commercial side, when people realise that these products only toxify, exasperate and render our existence banal. Basically this is, right now, the ITV3 of technology, pretending it is BBC Four.

There are other AI technologies that are interesting; deep learning networks can be trained to be good at spotting stuff like cancer cells or production errors, for example, and they can be good for training robots that can walk and learn to traverse difficult terrain. But a lot of people -- including you, I'm afraid -- are mixing all these technologies up as if there is one single monolithic "AI" development that has suddenly lurched forward, when what has actually happened is that one narrow, banal, moneyspinning product range has sucked all the oxygen out of the room.

Edited by Unfocussed Mike

ANDY00 said, 1715703333

Unfocussed Mike said

ANDY00 said
Many have previously asserted that AI wasn't truly intelligent. Only in the last two years has it taken a significant leap forward.

It just looks like it has. Because the scale of data these technologies can work with is dramatically larger.

There have been many technological leaps forward. But I don't think we can talk about the evolution of intelligence here.

 LLMs are good at generating plausible text, GAN image generators like MidJourney are better at generating plausible images. They have been trained to look convincing. But it's difficult to demonstrate they actually understand what they are generating.

These things are word calculators and image calculators. They do not know what they are talking about. They make stuff up that is highly plausible within the context, but they have little deep understanding.

There are claims that LLMs like GPTs, once given even bigger working token contexts, will start to discover the true meaning of the words they are working with -- that all of knowledge is in some way fundamentally tied up in the words used to describe it. This is plainly wrong, if you ask me, but it's taken as a matter of faith by people who want to sell you stuff.

There were claims that a single layer of neurons could learn generally, after all. And then Papert and Minsky proved that there were core concepts a single layer could not learn. And that triggered what is known as the "AI Winter".

Personally I think another AI winter is coming, but driven from the commercial side, when people realise that these products only toxify, exasperate and render our existence banal. Basically this is, right now, the ITV3 of technology, pretending it is BBC Four.

Edited by Unfocussed Mike


You say they can't really see; they're just making things up. In that video, the AI not only read the sums but also taught the user how to work out the answer. It not only listened to the person but also translated for them. Moreover, it not only looked at his clothing but also recognized he was smiling and incorporated that into the answer. That's not dumb; that's intelligence. Maybe you can call it fake intelligence, but that would just be another way of saying artificial intelligence. It's still bloody intelligent if you ask me, and I can't wait to see what all this learning brings us in the next few years :-) All this stuff is just awesome.

Edited by ANDY00

Unfocussed Mike said, 1715703462

ANDY00 said

Unfocussed Mike said

ANDY00 said
Many have previously asserted that AI wasn't truly intelligent. Only in the last two years has it taken a significant leap forward.

It just looks like it has. Because the scale of data these technologies can work with is dramatically larger.

There have been many technological leaps forward. But I don't think we can talk about the evolution of intelligence here.

 LLMs are good at generating plausible text, GAN image generators like MidJourney are better at generating plausible images. They have been trained to look convincing. But it's difficult to demonstrate they actually understand what they are generating.

These things are word calculators and image calculators. They do not know what they are talking about. They make stuff up that is highly plausible within the context, but they have little deep understanding.

There are claims that LLMs like GPTs, once given even bigger working token contexts, will start to discover the true meaning of the words they are working with -- that all of knowledge is in some way fundamentally tied up in the words used to describe it. This is plainly wrong, if you ask me, but it's taken as a matter of faith by people who want to sell you stuff.

There were claims that a single layer of neurons could learn generally, after all. And then Papert and Minsky proved that there were core concepts a single layer could not learn. And that triggered what is known as the "AI Winter".

Personally I think another AI winter is coming, but driven from the commercial side, when people realise that these products only toxify, exasperate and render our existence banal. Basically this is, right now, the ITV3 of technology, pretending it is BBC Four.

Edited by Unfocussed Mike


You say they can't really see; they're just making things up. In that video, the AI not only read the sums but also taught the user how to work out the answer. It not only listened to the person but also translated for them. Moreover, it not only looked at his clothing but also recognized he was smiling and incorporated that into the answer. That's not dumb; that's intelligence.

But can it draw any future inference from that apart from the one needed to serve the answer in that moment?

It cannot, not really. 

It only answers questions. It doesn't think for itself until a question is asked. 

You say it taught the user how to work out the answer. But does that mean it could follow its own instructions to do it again? You might be negatively surprised about the answer to that. Time and time again these systems have been shown to not be able to do that, and the reason has everything to do with the process by which the "instructions" were generated.

That is, to be clear, a commercial demo.

If you put me in a room with 50 chess players, and I had to play all fifty of them at once, I could win half of the games. And I am rubbish at chess. If you know why I could win the games, you are on the way to understanding the gap between what it appears these things can do, and what they are actually doing. (It's a metaphor, not an explanation, but it's a reasonably good metaphor)

Edited by Unfocussed Mike

ANDY00 said, 1715703796

Unfocussed Mike said

ANDY00 said

Unfocussed Mike said

ANDY00 said
Many have previously asserted that AI wasn't truly intelligent. Only in the last two years has it taken a significant leap forward.

It just looks like it has. Because the scale of data these technologies can work with is dramatically larger.

There have been many technological leaps forward. But I don't think we can talk about the evolution of intelligence here.

 LLMs are good at generating plausible text, GAN image generators like MidJourney are better at generating plausible images. They have been trained to look convincing. But it's difficult to demonstrate they actually understand what they are generating.

These things are word calculators and image calculators. They do not know what they are talking about. They make stuff up that is highly plausible within the context, but they have little deep understanding.

There are claims that LLMs like GPTs, once given even bigger working token contexts, will start to discover the true meaning of the words they are working with -- that all of knowledge is in some way fundamentally tied up in the words used to describe it. This is plainly wrong, if you ask me, but it's taken as a matter of faith by people who want to sell you stuff.

There were claims that a single layer of neurons could learn generally, after all. And then Papert and Minsky proved that there were core concepts a single layer could not learn. And that triggered what is known as the "AI Winter".

Personally I think another AI winter is coming, but driven from the commercial side, when people realise that these products only toxify, exasperate and render our existence banal. Basically this is, right now, the ITV3 of technology, pretending it is BBC Four.

Edited by Unfocussed Mike


You say they can't really see; they're just making things up. In that video, the AI not only read the sums but also taught the user how to work out the answer. It not only listened to the person but also translated for them. Moreover, it not only looked at his clothing but also recognized he was smiling and incorporated that into the answer. That's not dumb; that's intelligence.

But can it draw any future inference from that apart from the one needed to serve the answer in that moment?

It cannot, not really. 

It only answers questions. It doesn't think for itself until a question is asked. 

You say it taught the user how to work out the answer. But does that mean it could follow its own instructions to do it again? You might be negatively surprised about the answer to that. Time and time again these systems have been shown to not be able to do that, and the reason has everything to do with the process by which the "instructions" were generated.

That is, to be clear, a commercial demo.

Edited by Unfocussed Mike


Actually, that was a live demo, not pre-recorded. As everyone keeps saying, it does what we teach it. So, if we wanted it to think forward, it most definitely would, and it has done so in the past when used for specific applications, including weather prediction and cancer diagnosis, etc., and it does these things far, far better than any human can, from what I have read. 

Edited by ANDY00

Unfocussed Mike said, 1715704604

ANDY00 said

Actually, that was a live demo, not pre-recorded. As everyone keeps saying, it does what we teach it. So, if we wanted it to think forward, it most definitely would, and it has done so in the past when used for specific applications, including weather prediction and cancer diagnosis, etc., and it does these things far, far better than any human can, from what I have read. 

(I think you need to be introduced to how "live" demos are done. It's a lot like how a barrister manages a witness.)

"It" isn't one single thing. It's not like we have some computer software child that can go to different schools, or some generalised brain technology. Each of these technologies needs pretty extensive code around it to do new tasks.

Weather forecasts are not AI, as a rule. They are probabilistic simulations based on mathematical models. (There's some limited value in using AI to recognise and categorise storm patterns based on training data, I presume, which means that they might be useful for ultra-short-term forecasting, but so are cats and dogs.)

Cancer diagnoses are often based on expert-system symptom databases, and sometimes on deep convolutional neural networks trained to scan specific images. But these things are specific, narrow applications, narrowly trained for a task, combined with existing software. AI is a technique, not a general technology. (Biopsy image scanning AI is shown to potentially be more accurate than a human at one tiring, boring task; it's not a doctor.)

There's no generalised AI system that can be asked to think forward about any topic, or imagine the future consistently. And no amount of retconning existing technologies as "AI" makes this true.

Edited by Unfocussed Mike