AI

 

Unfocussed Mike said, 1715714203

ANDY00 said

waist.it said

Unfocussed Mike Just wanted to say that your posts in this thread have been quite excellent, especially that last one.

Lazy or disinterested people often pass exams by regurgitating what they can remember from the textbook and lectures, without actually having any real understanding or appreciation of the subject. I'd never have passed my O-levels otherwise! ;-)

So yes, of course ChatGPT can pass a bar exam. It essentially has searchable copies of the textbooks in its virtual head. I'd wager it would also have digested all the past exam papers too. So if it were asked, it could probably make a pretty good stab at what will be on next year's exam too. :-)


True, so it learned something, retained it, and called on it when it needed it. Pretty intelligent (not sentient). I can't get my son to remember to phone :-) Look, I get it, not everyone likes AI, fair enough. The point of the post is that I do, so I'm sorry lol :-D

I'm not saying AI is as clever as humans; I'm saying it will be vastly more intelligent than humans in the future. That's undisputed and has been said by Elon Musk and everyone who's been involved in OpenAI. We learn slowly, and it learns fast. It will now begin overtaking us to the stage where we will be learning from it; that is a fact.

It's already teaching us in languages and different fields in some big ways, from what I read online.

Edited by ANDY00

I guess what I am trying to get across is this: 

It isn't doing what you think it is doing the way you think it is doing it, and we are nowhere near the place on the timeline that you think we are.

General advice to anyone who is excited right now would be this: please don't invest your pension in any plucky AI startups with a good demo! We are not yet at the point of the Gartner Hype Cycle where you get to make steady, small gains. We're at the point where you might have to downsize to a one-bedroom house in the cheapest part of town and sell the cat.

Edited by Unfocussed Mike

Unfocussed Mike said, 1715715404

ANDY00 said

Unfocussed Mike said

waist.it said

Unfocussed Mike Just wanted to say that your posts in this thread have been quite excellent, especially that last one.

Lazy or disinterested people often pass exams by regurgitating what they can remember from the textbook and lectures, without actually having any real understanding or appreciation of the subject. I'd never have passed my O-levels otherwise! ;-)

So yes, of course ChatGPT can pass a bar exam. It essentially has searchable copies of the textbooks in its virtual head. I'd wager it would also have digested all the past exam papers too. So if it were asked, it could probably make a pretty good stab at what will be on next year's exam too. :-)

It is an incredibly difficult topic to discuss and negotiate in part precisely because of how we assess and reward intelligence with qualifications based on written exams.

I'm not great at conversation. I have to think hard about everything I write, to be honest. And as I've said, I'm not an expert. But then, I don't really want to be. I don't need to know how the car works to know it's good, and I know it's (especially in the early stages) just a tool. But it's getting smarter; there's no doubt about that. And it's going to change the world; there's no doubt about that.

But I have appreciated your point of view and I have learned a few things, although it's not taken away my enthusiasm for AI applications and their potential :-)

Thank you -- I hope I've not banged on toooo much. It's a bit of a rabbit-hole for me.

I definitely understand that the line between "actually understands what it's talking about" and "only reads like it knows what it's talking about" is really blurry in a lot of topics, to the point where it might make no difference.

My main concern is that society will jump on replacing workers with LLMs on a massive scale before it has an understanding of this difference; I'd like us to get out of the hype part and into the realism part.

Buddygb said, 1715716669

Intelligence and memory are not synonyms.

I could spend (waste) my life learning pi to 1,000,000+ decimal places. Being able to recite them on demand would demonstrate an exceptional, nay astonishing, memory. It would not necessarily demonstrate intelligence. Without application it is just a very impressive feat of memory.

Memorising pi to 10 decimal places and then applying that knowledge in solving an engineering or mathematical problem is perhaps less impressive, but it is more indicative of critical thinking and intelligence.

B.

Unfocussed Mike said, 1715717331

Buddygb said

Intelligence and memory are not synonyms.

A (hopefully) fascinating aside: one of the early problems with neural networks was figuring out how many neurons to assign to a task. If you give a system too many neurons for its task, it simply memorises the training set and never learns how to infer from it -- or learns far too slowly, needing more training data. It's called "over-fitting".
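For anyone curious, the memorisation-versus-generalisation trade-off can be seen without any neural network at all. Here is a minimal sketch in Python (the data and function names are invented for illustration): a polynomial with as many parameters as training points "memorises" the data exactly, while a two-parameter straight-line fit is forced to generalise.

```python
def lagrange_fit(xs, ys):
    """Polynomial passing exactly through every training point --
    the analogue of a network with enough capacity to memorise its data."""
    def poly(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return poly

def line_fit(xs, ys):
    """Least-squares straight line -- a lower-capacity model that must
    capture the trend rather than the individual points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return lambda x: my + slope * (x - mx)

# Noisy samples of an underlying straight line y = x
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.1]

memoriser = lagrange_fit(xs, ys)
generaliser = line_fit(xs, ys)

# The memoriser recalls each training point perfectly (including the noise)...
print(memoriser(2.0))   # exactly 2.2, noise and all
# ...but away from the training points it swings wildly, while the
# simple line stays close to the true trend.
print(memoriser(4.5), generaliser(4.5))
```

The high-capacity model scores perfectly on the "training set" precisely because it has stored the noise along with the signal, which is why it fails on anything it has not seen, and that is over-fitting in miniature.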

Edited by Unfocussed Mike

Russ Freeman (staff) said, 1715771099

ANDY00 said

Russ Freeman said

I wonder how much fuss there will be when we get AI 50 or 100 years from now.

Our great-great-great-grandchildren will laugh at how we thought these fancy tools were AI.


I would genuinely love to see that and what it could do. These things will change the world, I think.

Most things we have invented have changed the world in some way, for good or bad, whether it is a fancy algorithm, an automatic loom, Ford's production line, or glyphosate.



ANDY00 said, 1715773707

Russ Freeman A 386 processor is considered ancient by today's standards. Despite its limited processing power compared to modern CPUs, it remained in use on the space station and NASA spacecraft until last year. The 386 had a clock speed of around 16 to 40 MHz and could address up to 4 GB of memory, whereas modern processors boast clock speeds in the GHz range and can handle terabytes of memory. However, just because something will be a million times better in 100 years does not diminish the significance of the versions we have today. Every step is necessary to reach the top of the curve, and every step is as important as the first and the last :-)


 

Edited by ANDY00

Stanmore said, 1715774650

A properly educated, researched, detailed and considered view can be found in 'Homo Deus' by Yuval Noah Harari. This is best read after his previous book, 'Sapiens'.

Most 'pressing' is the imminent loss of billions of jobs to a (far better) robot workforce, imminent this decade. Longer-term, Homo sapiens (us) will cease to exist as a species, having evolved into a technologically enhanced immortal species, with a connected mind and capabilities we cannot comprehend today.

I highly recommend both of those books; they're fascinating and proving prescient, given the most recent was written almost 10 years ago.

If you don't read, you can learn about the near cyborg future here

...it gets to proper business after the first couple of mildly hyperbolic introduction minutes, and it's not really "terrifying".

Edited by Stanmore

ANDY00 said, 1715775431

Stanmore said

A properly educated, researched, detailed and considered view can be found in 'Homo Deus' by Yuval Noah Harari. This is best read after his previous book, 'Sapiens'.

Most 'pressing' is the imminent loss of billions of jobs to a (far better) robot workforce, imminent this decade. Longer-term, Homo sapiens (us) will cease to exist as a species, having evolved into a technologically enhanced immortal species, with a connected mind and capabilities we cannot comprehend today.

I highly recommend both of those books; they're fascinating and proving prescient, given the most recent was written almost 10 years ago.

If you don't read, you can learn about the near cyborg future here...

...it gets to proper business after the first couple of mildly hyperbolic introduction minutes, and it's not really "terrifying".

Edited by Stanmore


The Figure 01 robot shown in this thread has already been mass-purchased and sponsored by BMW and Amazon for their production and packaging sites, among many other companies. In fact, there have been so many sponsorships that the management for Figure 01 publicly announced they would not take on any more sponsorships for the foreseeable future. Then, of course, there's the Tesla bot, Optimus, and the Chinese version as well. I would not be surprised if we see these robots in their millions occupying factories as early as next year, and probably heavily in the agriculture sector as well, as that's a sector that struggles to get workers.

 

Edited by ANDY00

ADWsPhotos said, 1715775011

The most interesting thing about this thread is the reply rate by the OP. Kudos!

ANDY00 said, 1715775079

ADWsPhotos said

The most interesting thing about this thread is the reply rate by the OP. Kudos!


Lol, I'm interested in the subject :-D

Unfocussed Mike said, 1715777913

ANDY00 said

Stanmore said

A properly educated, researched, detailed and considered view can be found in 'Homo Deus' by Yuval Noah Harari. This is best read after his previous book, 'Sapiens'.

Most 'pressing' is the imminent loss of billions of jobs to a (far better) robot workforce, imminent this decade. Longer-term, Homo sapiens (us) will cease to exist as a species, having evolved into a technologically enhanced immortal species, with a connected mind and capabilities we cannot comprehend today.

I highly recommend both of those books; they're fascinating and proving prescient, given the most recent was written almost 10 years ago.

If you don't read, you can learn about the near cyborg future here...

...it gets to proper business after the first couple of mildly hyperbolic introduction minutes, and it's not really "terrifying".

Edited by Stanmore


The Figure 01 robot shown in this thread has already been mass-purchased and sponsored by BMW and Amazon for their production and packaging sites, among many other companies. In fact, there have been so many sponsorships that the management for Figure 01 publicly announced they would not take on any more sponsorships for the foreseeable future. Then, of course, there's the Tesla bot, Optimus, and the Chinese version as well. I would not be surprised if we see these robots in their millions occupying factories as early as next year, and probably heavily in the agriculture sector as well, as that's a sector that struggles to get workers.

 

Edited by ANDY00

Robots are IMO a bit off-topic here; they are still mostly a cybernetics/control problem. But I really liked this story:

https://www.euronews.com/next/2023/08/15/meet-pibot-the-humanoid-robot-that-can-safely-pilot-an-airplane-better-than-a-human

Though I think there is an element of news management here; this video and the others at the time seemed to be the result of a pretty major PR blitz. So it might be a bit less near-future than it sounds.

Edited by Unfocussed Mike

ANDY00 said, 1715778191

Unfocussed Mike said

ANDY00 said

Stanmore said

A properly educated, researched, detailed and considered view can be found in 'Homo Deus' by Yuval Noah Harari. This is best read after his previous book, 'Sapiens'.

Most 'pressing' is the imminent loss of billions of jobs to a (far better) robot workforce, imminent this decade. Longer-term, Homo sapiens (us) will cease to exist as a species, having evolved into a technologically enhanced immortal species, with a connected mind and capabilities we cannot comprehend today.

I highly recommend both of those books; they're fascinating and proving prescient, given the most recent was written almost 10 years ago.

If you don't read, you can learn about the near cyborg future here...

...it gets to proper business after the first couple of mildly hyperbolic introduction minutes, and it's not really "terrifying".

Edited by Stanmore


The Figure 01 robot shown in this thread has already been mass-purchased and sponsored by BMW and Amazon for their production and packaging sites, among many other companies. In fact, there have been so many sponsorships that the management for Figure 01 publicly announced they would not take on any more sponsorships for the foreseeable future. Then, of course, there's the Tesla bot, Optimus, and the Chinese version as well. I would not be surprised if we see these robots in their millions occupying factories as early as next year, and probably heavily in the agriculture sector as well, as that's a sector that struggles to get workers.

 

Edited by ANDY00

Robots are IMO a bit off-topic here; they are still mostly a cybernetics/control problem. But I really liked this story:

https://www.euronews.com/next/2023/08/15/meet-pibot-the-humanoid-robot-that-can-safely-pilot-an-airplane-better-than-a-human

Though I think there is an element of news management here; this video and the others at the time seemed to be the result of a pretty major PR blitz. So it might be a bit less near-future than it sounds.

Edited by Unfocussed Mike


Not in the least. The Figure 01 robot's primary intelligence control is ChatGPT (AI), and as I already mentioned, the US military has F-16s and F-18s piloted by autonomous AI systems already, and a drone submarine that can travel submerged under water and switch to air for short periods. But it's a shame everything comes back to war and killing humans. People are so scared robots will rise up and kill humans, but it's us that would teach them that.

YorVikIng said, 1715784281

Unfocussed Mike, ANDY00 I think to simplify Mike's explanation, we can compare an LLM with the old adage about "give a thousand monkeys a typewriter each, and let them type for a thousand years, and sooner or later they will randomly produce all Shakespeare's works".

That doesn't mean the monkeys understand poetry. It just means with enough random guesses, you end up being right sometimes.

Similarly, an LLM spits out words in a sequence that matches what it has seen before, and the adversarial network checks "is this identical to a Shakespeare play?". If the answer is no, the generative part has another go. And so on, and so on, until it produces something that (randomly) matches what Shakespeare wrote.

ANDY00 said, 1715785822

YorVikIng said

Unfocussed Mike, ANDY00 I think to simplify Mike's explanation, we can compare an LLM with the old adage about "give a thousand monkeys a typewriter each, and let them type for a thousand years, and sooner or later they will randomly produce all Shakespeare's works".

That doesn't mean the monkeys understand poetry. It just means with enough random guesses, you end up being right sometimes.

Similarly, an LLM spits out words in a sequence that matches what it has seen before, and the adversarial network checks "is this identical to a Shakespeare play?". If the answer is no, the generative part has another go. And so on, and so on, until it produces something that (randomly) matches what Shakespeare wrote.

How do humans or animals learn? As babies, humans mimic what they see. Sometimes they make mistakes, and from those mistakes, they learn to avoid similar situations in the future. Other times, they succeed, and those successes are stored in their memory for later use. AI, on the other hand, learns by analysing vast memory banks of information created by humans and computers alike. It learns at a much faster rate and with greater efficiency than we do. Unlike humans, it does not forget, and it can retain vastly more information than we could acquire in 1,000 lifetimes.

The human brain is fully mature in your 20s, although this varies slightly from person to person, and it then has to last anywhere up to around 70-100 years. AI, on the other hand, can reach full maturity in minutes or hours and is potentially immortal.

Elon Musk, the CEO of Tesla, predicted in a live interview streamed on the social media platform X that AI will be smarter than any human around the end of next year. He stated, "My guess is we'll have AI smarter than any one human probably around the end of next year". He also added, "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year or within two years". However, he mentioned that this prediction comes with the caveat that increasing demands for power and shortages of the most powerful AI training chips could limit AI's capability in the near term.

Edited by ANDY00

Unfocussed Mike said, 1715785813

YorVikIng said

Unfocussed Mike, ANDY00 I think to simplify Mike's explanation, we can compare an LLM with the old adage about "give a thousand monkeys a typewriter each, and let them type for a thousand years, and sooner or later they will randomly produce all Shakespeare's works".

That doesn't mean the monkeys understand poetry. It just means with enough random guesses, you end up being right sometimes.

Similarly, an LLM spits out words in a sequence that matches what it has seen before, and the adversarial network checks "is this identical to a Shakespeare play?". If the answer is no, the generative part has another go. And so on, and so on, until it produces something that (randomly) matches what Shakespeare wrote.

There is an interesting, very powerful (and quite simple) mathematical/computer science thing called a "Markov chain".

https://en.wikipedia.org/wiki/Markov_chain

LLMs are definitionally Markov chains -- they are higher order systems, for sure, but they engage in probabilistic prediction of the next best token based on prior token context.

So it is better than the monkeys, because the monkeys hammer on the keys without regard to prior context. 
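A toy version of that "predict the next token from prior context" idea can be sketched in a few lines of Python. This is not how a real LLM works internally (the corpus and function names here are mine, purely for illustration); it is a first-order Markov chain over words, which chooses each next word in proportion to how often it followed the previous word in the training text:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Count, for each word, how often each other word follows it."""
    words = text.split()
    model = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def next_word(model, prev, rng):
    """Sample the next word in proportion to how often it followed `prev`."""
    candidates = model[prev]
    choices = list(candidates)
    weights = [candidates[w] for w in choices]
    return rng.choices(choices, weights=weights, k=1)[0]

def generate(model, start, length, seed=0):
    """Walk the chain: each step depends only on the word just emitted."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        if out[-1] not in model:
            break  # no continuation ever observed for this word
        out.append(next_word(model, out[-1], rng))
    return " ".join(out)

corpus = "to be or not to be that is the question"
model = build_bigram_model(corpus)
# In this corpus "to" is only ever followed by "be", so the chain always
# continues "to be ..."; after "be" it picks "or" or "that" by frequency.
print(generate(model, "to", 5))
```

Unlike the monkeys, the output respects context, but only one word of it; an LLM does something analogous over thousands of tokens of context with a learned, far richer probability model, which is why Mike describes it as a higher-order system of the same general kind.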

People often use the term "stochastic parrot", somewhat derisively. Which is to say, if you have a parrot that has heard all human speech and writing and can remember it, it might engage in what sounds like very serious conversation and you'd have to work harder to prove to yourself that in fact it's just mouthing incredibly plausible responses to prompts it does not actually understand.

But the question -- about emergence -- then becomes: at what point does the parrot's grasp of what to say for a given prompt based on previous prompts and responses, actually begin to constitute knowledge about what things mean? Can the parrot develop an intuition simply from the words?

I am getting a bit too far from what I understand here but there are AI researchers who believe that there is encoded understanding in the LLM models and that it'll naturally get better with larger, better trained models and with larger working contexts (that is, the number of tokens of "working memory" in the dialogue.)

There are really bolshy ones who say that humans aren't any different to LLMs; that we are simply incredibly advanced stochastic parrots. I think they are wrong (but admittedly only from intuition) and I think partly they tell themselves that to self-soothe over the impacts of their actions.

Edited by Unfocussed Mike