AI

 

ANDY00 said, 1715709241

Thru the looking glass Photography said

I wonder though.....

If someone who has 'reasons' gets hold of a few of these 'benign' robots, gives them weapons, programs them to kill humans and then to move elsewhere and do the same, and wherever possible to 'heal' each other when they are 'injured', surely that is threatening the populace of this planet?

Do you not think this is possible?

Heck, the Chinese were in GCHQ last week. Someone somewhere probably thought that improbable.


We've been teaching animals to kill humans for years; why should machines be different? Point being, humans are the killers. It's not the gun's fault who it kills.

Unfocussed Mike said, 1715710143

ANDY00 said

Unfocussed Mike said

ANDY00 said

Unfocussed Mike said

ANDY00 said

FlashArt said

Self-populating data, predictive text, learning models, etc., are all complicated ways of describing intelligent software. When you ask me a question, I think through it, draw on my experiences and historical information, and come up with an answer. AI does the same thing.

No, it doesn't. There may be nobody behind the curtain but you are profoundly (I mean that precisely, not rudely) mischaracterising how GPT works.

It's difficult to get across how it can do what it does without the process you're describing, but it does. There's much less going on than that.

Edited by Unfocussed Mike


Yes, it does. A Large Language Model (LLM) is trained on huge amounts of data and, as a newer development, has access to search the web as well. So it's not hallucinating an answer; it's providing an informed response based on the data it has learned or looked up. That's intelligence: artificial, but still intelligence :-)

 

Edited by ANDY00

You are still mischaracterising how it comes up with answers. I can only recommend reading more on the technology of what a language model actually is. And what it is not.

Edited by Unfocussed Mike


So, you're suggesting it guesses the answer? If that's the case, it gets a remarkable number of answers right.

Yes and no.

Now, I am no expert, but I am learning to understand what is and is not going on here. So (and someone who has really got into this can correct me):

An LLM uses a process of probabilistic selection of a "next token" based on a set of previous tokens (the prompts, both visible and hidden), and the model weights (which are statistical relationships between tokenised words in the training set). 
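Something like this toy sketch, in Python (everything here -- the tokens, the scores, the tiny vocabulary -- is invented for illustration; a real model has tens of thousands of tokens and billions of weights):

```python
import math
import random

def sample_next_token(scores, temperature=0.8):
    """Turn raw scores for candidate next tokens into probabilities
    (a softmax), then draw one token at random, weighted by them."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
    return random.choices(list(weights), weights=list(weights.values()))[0]

# Invented scores the trained weights might assign after "The cat sat on the":
scores = {"mat": 2.4, "sofa": 1.7, "roof": 1.1, "theorem": -3.0}
print(sample_next_token(scores))  # usually "mat", sometimes "sofa" or "roof"
```

Lower the temperature and it almost always picks the top-scoring token; raise it and the output gets more random. That's the whole trick, repeated once per token.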

It has no hard-coded grammar parsing or generation rules; the apparently grammatical behaviour comes out of those same clever pattern-matched token weights.

It is fundamentally a system that models human-language word relationships and generates grammatical human language.

It's a word calculator. 

The thing is, the model weights form an absolutely huge database. So it can produce sentences that are eerily complete. And with all these systems there is the chance of "emergent behaviour": a system that has seen a lot of text where people play a word game might end up functionally knowing the rules of the game, without really "knowing" it. The idea is that the weighted connections between the words in the training set actually contain representations of the inherent meanings. And in some cases, they might.
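You can see how far bare word relationships get you with a deliberately tiny sketch -- a bigram model, nothing like an LLM's scale, but the same spirit. It stores nothing except which word followed which in its "training set", yet its output still reads as language:

```python
import random
from collections import defaultdict

corpus = ("the model predicts the next word . "
          "the next word follows the previous word . "
          "the model learns word relationships .").split()

# "Training": just count which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generation": repeatedly pick a plausible next word. No meanings
# anywhere, only counted relationships -- yet it looks grammatical.
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])  # repetition acts as frequency weighting
    out.append(word)
print(" ".join(out))
```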

The loudest and most invested LLM proponents believe, I think, that the majority of human knowledge is encoded in word relationships describing that knowledge, and that given a sufficiently large training set, appropriate training and a sufficiently large token context -- the working pool of tokens for the current conversation -- LLMs will ultimately demonstrate a true understanding as well as the ability to generate plausible answers.

(I think this is motivated reasoning, personally.)

The more reductive proponents try to tear down human intelligence to reduce it to the same system, which is a great way to dehumanise and justify not hiring someone whose job an LLM can superficially simulate.

GANs -- generative adversarial networks, one family of image generators (strictly, Midjourney and Stable Diffusion use diffusion models these days, but the adversarial idea is still a good way in) -- work a different way. Roughly, based on my understanding: two neural networks (imagine one is a forger, and the other is an art-fraud expert cop who has seen a lot of real art) are locked in an adversarial loop, where (starting from a random noise image and a set of weights that link images to their text descriptions) the forger produces an image and the cop tries to catch it out. The process is self-directed, so over time, as the model is trained, the forger gets better and better at tricking the cop into thinking it's producing real art indistinguishable from the cop's training set.
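A bare-bones sketch of that adversarial loop, assuming PyTorch, with a one-dimensional toy distribution standing in for "real art" (so this shows the training dynamic, not an actual image model):

```python
import torch
import torch.nn as nn

def real_batch(n):
    return torch.randn(n, 1) * 1.25 + 4.0  # "real art": samples from N(4, 1.25)

forger = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
cop    = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
opt_forger = torch.optim.Adam(forger.parameters(), lr=1e-3)
opt_cop = torch.optim.Adam(cop.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the cop: real samples are labelled 1, forgeries 0.
    fakes = forger(torch.randn(64, 8)).detach()  # don't update the forger here
    loss_cop = (loss_fn(cop(real_batch(64)), torch.ones(64, 1)) +
                loss_fn(cop(fakes), torch.zeros(64, 1)))
    opt_cop.zero_grad(); loss_cop.backward(); opt_cop.step()

    # 2) Train the forger: make the cop call its forgeries real.
    loss_forger = loss_fn(cop(forger(torch.randn(64, 8))), torch.ones(64, 1))
    opt_forger.zero_grad(); loss_forger.backward(); opt_forger.step()

print(forger(torch.randn(5, 8)).squeeze())  # should hover somewhere around 4
```

Note the forger never sees the "real art" directly -- it only ever learns from the cop's verdicts.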

Neither of these systems has to really understand the subject matter to produce convincing output. But they can be combined in really interesting ways -- such as (I am getting out of my depth here a bit, so could be very wrong) feeding the output of an LLM into an image generator, or pitting two LLMs against each other adversarially.

And you have things like Sora, generating video by starting from a Midjourney-style generated frame and then modifying it, frame by frame, to probabilistically meet the description of a script.

All of these AI developers believe that with enough power, and enough context, true intelligence will emerge (in the way that a whirlpool in a river is not really an object but a persistent phenomenon).

But these systems do not "research". And they do not "formulate ideas". They toss out plausible outputs.

Edited by Unfocussed Mike

Unfocussed Mike said, 1715710013

Thru the looking glass Photography said

Heck, the Chinese were in GCHQ last week. Someone somewhere probably thought that improbable.

The Chinese, and others, are probably getting into GCHQ quite a bit. One of the useful properties of spies you know are watching is that you can see what they are looking at, and lead them to conclusions that aren't what they were after. Or you can tell them things you want them to know, to de-escalate things, but cannot tell them officially. I'm sure the same is true in reverse.

It's just that right now the politics of the situation mean we are saying out loud: look, we found your guys.

That doesn't mean these are the only guys that we do know about :-)

ANDY00 said, 1715711137

You literally say it has an information database it learns from. Not only that, but as of this year's models, LLMs have access to the web to gather more information on a subject as required, although most can only see up to 2022, I believe. So it's not just guessing or grabbing from thin air. I'm not buying it, so keep your token, lol.

To be absolutely clear, I don't really care how it does it; I care that it can do it. I don't need to know the thought processes my neighbour has when he decides to take my bin out for me, but I'm very happy he does and can. I don't need to be an internet expert to say I like AI and what it can potentially bring us :-)

I don't know the thought processes of anyone or anything I interact with. I don't need to; I just do my best to concentrate on my side of the conversation. Sometimes, that's hard enough for me :-)


Edited by ANDY00

Unfocussed Mike said, 1715711356

ANDY00 said

You literally say it has an information database it learns from. Not only that, but as of this year's models, LLMs have access to the web to gather more information on a subject as required, although most can only see up to 2022, I believe. So it's not just guessing or grabbing from thin air. I'm not buying it, so keep your token, lol.

To be absolutely clear, I don't really care how it does it; I care that it can do it. I don't need to know the thought processes my neighbour has when he decides to take my bin out for me, but I'm very happy he does and can. I don't need to be an internet expert to say I like AI and what it can potentially bring us :-)

It doesn't really learn the meanings of words, though -- it cannot "do research and come up with an answer". It learns the probabilistic weights between words in the training set, and can use that to produce a plausible response.

The crucial point here is that a lot of what people think AI is doing or can do, it is not doing, and cannot do. It just reads as if it does. 

Now, in some applications -- writing bullshit form letters or generating blog posts, or in the case of image generating GANs, an empty but plausible image -- that appearance is enough.

But it is not the product of understanding the meaning of words, or the result of "research" or thinking about the subject.

As a result, people often think GPT can for example do legal research. It can't. Or that it can produce a bibliography for an article, or find references. It can't.

Or rather it can, but it will appear to make stuff up (because actually it's making everything up; it just happens to be right quite a bit), and it neither knows nor cares when it is wrong.

The results are empty but plausible.

The question here is: do you need the system to do a damn good job of appearing intelligent, or to actually be intelligent? Because we're still on the first one, for the most part.

Edited by Unfocussed Mike

ANDY00 said, 1715711356

Unfocussed Mike said


It doesn't learn the meanings of words, though -- it cannot "do research and come up with an answer". It learns the probabilistic weights between words in the training set, and can use that to produce a plausible response.

The crucial point here is that a lot of what people think AI is doing, it is not doing. It just reads as if it does. 

Now, in some applications -- writing bullshit form letters or generating blog posts, or in the case of image generating GANs, an empty but plausible image -- that is enough.

But it is not the product of understanding the meaning of words, or the result of "research" or thinking about the subject.

As a result, people often think GPT can for example do legal research. It can't. Or that it can produce a bibliography for an article, or find references. It can't.

Or rather it can, but it neither knows nor cares when it is wrong.


Didn't ChatGPT pass the bar? Yes -- yes it did. I definitely could not do that, even with training, but I'm only human :-)

Latest version of ChatGPT aces bar exam with score nearing 90th percentile (abajournal.com)

Edited by ANDY00

Unfocussed Mike said, 1715711592

ANDY00 said



Didn't ChatGPT pass the bar? Yes -- yes it did. I definitely could not do that, even with training, but I'm only human :-)

Latest version of ChatGPT aces bar exam with score nearing 90th percentile (abajournal.com)

Edited by ANDY00

Yes. But -- and here's the crucial bit -- it doesn't know how it did it. It doesn't understand the law. It's just that there's a hell of a lot of law out there in its training set and it can sound like it does. 

It'll pass a lot of written tests. It's just studying to the test, though. And when it makes mistakes, it will not care, or understand the substance of the mistake. When you correct it, it'll likely make a new mistake.

Edited by Unfocussed Mike

ANDY00 said, 1715712200

Unfocussed Mike said


Yes. But -- and here's the crucial bit -- it doesn't know how it did it. It doesn't understand the law. It's just that there's a hell of a lot of law out there in its training set and it can sound like it does.


Right, so with the information it's been taught on, it can do this? :-D Which is what I said, lol. I get what you're saying, but all I see is that it thinks differently. Different is good, and if different is better than us, then it's very good. Still intelligent to me :-D

Look, the point is, when the airplane came along, people didn't reject it, shouting, 'It's not real, look, its wings don't flap!' 😄 Because that's not the point of them. In the same way, just as planes weren't trying to prove they were birds, AI isn't trying to be human; it's trying to be better at whichever job each specific one is built for. ChatGPT for conversation and grammar, etc. The medical ones, the predictive ones -- I reckon eventually they will all become one intelligent system that can turn its focus to many things.

Edited by ANDY00

Unfocussed Mike said, 1715712411

ANDY00 said


Right, so with the information it's been taught on, it can do this? :-D Which is what I said, lol. I get what you're saying, but all I see is that it thinks differently. Different is good, and if different is better than us, then it's very good. Still intelligent to me :-D

It hasn't learned the law!

It has learned how to very plausibly say what a lawyer would probably say given a certain prompt.

Which sounds good, right?

But the crucial point is that it gets there by making everything up.

It's mostly right, because it's seen a lot of words. But it doesn't know why it is right. When it is wrong, it doesn't know it is wrong, or care that it is wrong, or know how to correct itself without input from the user, and it may correct itself wrongly even then.

Because the meanings of the words or terms aren't what it knows.  

And it will be wrong in ways that are unique and get it into trouble -- like citing cases that don't exist (this has already happened), quoting testimony that didn't happen, naming experts that don't exist, etc.

Now I don't know about you, but that doesn't sound like a lawyer to me: someone who doesn't know when they are wrong, makes broken arguments, and needs the client to set them straight on the law.

This is why I think there has not been a great leap forward in intelligence.

There has been a great leap forward in believable automated bullshitting.

Edited by Unfocussed Mike

Wondrous said, 1715712511

Unfocussed Mike, what's interesting about what you've said, Mike, is that some people are like AI (and this is no reference to you): they bullshit their way through a conversation and through life, and some are actually successful.

Edited by Wondrous

waist.it said, 1715712795

Unfocussed Mike, just wanted to say that your posts in this thread have been quite excellent, especially that last one. "Believable automated bullshitting" -- lol, I will definitely have to remember that one!

Lazy or uninterested people often pass exams by regurgitating what they can remember from the textbook and lectures, without actually having any real understanding or appreciation of the subject. I'd never have passed my O-levels otherwise! ;-)

So yes, of course ChatGPT can pass a bar exam. It essentially has searchable copies of the textbooks in its virtual head. I'd wager it has digested all the past exam papers too. So if it were asked, it could probably make a pretty good stab at what will be on next year's exam too :-)

Edited by waist.it

ANDY00 said, 1715713370

waist.it said

Unfocussed Mike, just wanted to say that your posts in this thread have been quite excellent, especially that last one.

Lazy or uninterested people often pass exams by regurgitating what they can remember from the textbook and lectures, without actually having any real understanding or appreciation of the subject. I'd never have passed my O-levels otherwise! ;-)

So yes, of course ChatGPT can pass a bar exam. It essentially has searchable copies of the textbooks in its virtual head. I'd wager it has digested all the past exam papers too. So if it were asked, it could probably make a pretty good stab at what will be on next year's exam too :-)


True, so it learned something, retained it, and called on it when it needed it. Pretty intelligent (not sentient); I can't get my son to remember to phone :-) Look, I get it: not everyone likes AI, fair enough. But the point of the post, and mine, is that I do, so I'm sorry lol :-D

I'm not saying AI is as clever as humans; I'm saying it will be vastly more intelligent than humans in the future. That's undisputed and has been said by Elon Musk and everyone who's been involved in OpenAI. We learn slowly, and it learns fast. It will now begin overtaking us, to the stage where we will be learning from it; that is a fact.

It's already teaching us, in languages and other fields, in some big ways, from what I read online.

Edited by ANDY00

Unfocussed Mike said, 1715713076

waist.it said

Unfocussed Mike, just wanted to say that your posts in this thread have been quite excellent, especially that last one.

Lazy or uninterested people often pass exams by regurgitating what they can remember from the textbook and lectures, without actually having any real understanding or appreciation of the subject. I'd never have passed my O-levels otherwise! ;-)

So yes, of course ChatGPT can pass a bar exam. It essentially has searchable copies of the textbooks in its virtual head. I'd wager it has digested all the past exam papers too. So if it were asked, it could probably make a pretty good stab at what will be on next year's exam too :-)

It is an incredibly difficult topic to discuss and negotiate in part precisely because of how we assess and reward intelligence with qualifications based on written exams.


Wondrous said, 1715713514

The last Terminator film, Terminator Genisys, did get a bit more into AI. Not to say that the previous films were not AI-influenced, but Genisys was a bit more modern in its approach.

ANDY00 said, 1715713849

Unfocussed Mike said


It is an incredibly difficult topic to discuss and negotiate in part precisely because of how we assess and reward intelligence with qualifications based on written exams.

I'm not great at conversation. I have to think hard about everything I write, to be honest. And as I've said, I'm not an expert. But then, I don't really want to be. I don't need to know how the car works to know it's good, and I know it's (especially in the early stages) just a tool. But it's getting smarter; there's no doubt about that. And it's going to change the world; there's no doubt about that.

But I have appreciated your point of view, and I have learned a few things, although it's not taken away my enthusiasm for AI applications and their potential :-)