AI

 

YorVikIng said, 1715785645

ANDY00 said

YorVikIng said

Unfocussed Mike, ANDY00 I think to simplify Mike's explanation, we can compare an LLM with the old adage: "give a thousand monkeys a typewriter each, let them type for a thousand years, and sooner or later they will randomly produce all of Shakespeare's works".

That doesn't mean the monkeys understand poetry. It just means with enough random guesses, you end up being right sometimes.
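For a sense of scale (my own back-of-the-envelope arithmetic, not part of the adage itself), even one short phrase takes an absurd number of random attempts:

```python
# Rough arithmetic for the monkeys-and-typewriters adage: the chance of
# randomly typing even a short phrase is astronomically small.
# Assume a 27-key typewriter (26 letters + space).
keys = 27
phrase = "to be or not to be"          # 18 characters
p_hit = (1 / keys) ** len(phrase)      # chance a single attempt matches exactly
attempts_needed = keys ** len(phrase)  # expected attempts is roughly 27^18
print(f"{attempts_needed:.3e}")        # on the order of 10^25 attempts
```

So "sooner or later" is doing a lot of work in that adage.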

Similarly, an LLM spits out words in a sequence that matches what it has seen before, and the adversarial network checks "is this identical to a Shakespeare play?". If the answer is no, the generative part has another go. And so on, and so on, until it produces something that (randomly) matches what Shakespeare wrote.
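To make "spits out words in a sequence that matches what it has seen before" concrete, here's a deliberately tiny sketch of my own: a bigram model that can only ever emit word pairs it saw in training. Purely illustrative; real LLMs are transformer networks predicting tokens over huge vocabularies, not a lookup table like this.

```python
import random

# Toy "language model": learn which word follows which in a training
# text, then generate by repeatedly picking an observed continuation.
corpus = "to be or not to be that is the question".split()

# Count which word follows which in the training text.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("to", 6))
```

Every adjacent pair in the output is a pair that occurred in the training text, which is the sense in which the model "matches what it has seen before" without understanding any of it.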

How do humans or animals learn? As babies, humans mimic what they see. Sometimes they make mistakes, and from those mistakes they learn to avoid similar situations in the future. Other times they succeed, and those successes are stored in memory for later use. AI, on the other hand, learns by analysing vast banks of information created by humans and computers alike. It learns at a much faster rate and with greater efficiency than we do. Unlike humans, it does not forget, and it can retain vastly more information than we could acquire in 1,000 lifetimes.

Elon Musk, the CEO of Tesla, predicted in a live interview streamed on the social media platform X that AI will be smarter than any human by around the end of next year. He stated, "My guess is we’ll have AI smarter than any one human probably around the end of next year." He also added, "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year or within two years." However, he mentioned that this prediction comes with the caveat that increasing demand for power and shortages of the most powerful AI training chips could limit capability in the near term.


Have at it Andy. He also said Twitter was worth $44Bn. Whilst that might have appeared to be true at the moment he said it, it certainly isn't anymore.

Unfocussed Mike said, 1715786158

ANDY00 said

YorVikIng said

Elon Musk, the CEO of Tesla, predicted in a live interview streamed on the social media platform X that AI will be smarter than any human by around the end of next year. He stated, "My guess is we’ll have AI smarter than any one human probably around the end of next year." He also added, "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year or within two years." However, he mentioned that this prediction comes with the caveat that increasing demand for power and shortages of the most powerful AI training chips could limit capability in the near term.

I think I'd like to point out here that Elon Musk is actually a bullshitter and a moral fabulist. I think he's _wholly_ wrong here. We have no technologies even close to that timescale, and it's an easy bookmark to return to.

He, like other people in the milieu of OpenAI, is arguing for a short term doom scenario that creates a regulatory framework that leads OpenAI and existing businesses to a form of regulatory capture. It amounts to "it's too dangerous for other companies and the general public to have this technology that we are only going to use responsibly". He has his own gains in mind.

But again you are oversimplifying "AI" in a science-fiction way that makes this very difficult to rebut. There is no AI out there that is free-learning the way you suggest.

Edited by Unfocussed Mike

Theta Aeterna said, 1715786175

AI is not smart yet and can't solve problems with a new approach. It can only approach problems with already-existing solutions in its database. Creations made by AI are just mixed-and-matched clones of what's in the database.

I am all for it as long as it serves us. But at some point, we might also step up and stop being useless, selfish and stupid.

BTW, Elon Musk says a lot of things, and his estimates are mostly off. The Roadster never came, etc.

ANDY00 said, 1715786792

YorVikIng said

Have at it Andy. He also said Twitter was worth $44Bn. Whilst that might have appeared to be true at the moment he said it, it certainly isn't anymore.

He knows far more about AI than any of us; he had a large hand in its creation, but he's not the only one who's said it—that was just the first quote that came to mind.

Gary Marcus, a scientist, bestselling author, and entrepreneur, said, 'It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.'

Sam Altman, CEO of OpenAI, believes AGI could be reached sometime in the next four or five years.

Dario Amodei, co-founder, and CEO of Anthropic, said he expects a 'human-level' AI could be developed in two to three years.

Most experts predict that AGI and AI will surpass human intelligence within the next 2-5 years. A few prominent figures in the technology field even suggest it could happen as soon as the end of next year. However, it's essential to trust the expertise of those in the field, rather than speculating, as even a vast list of essentially superbrains could be wrong. Perhaps some photographers on a forum know better, but I'll trust the experts, as I'm not one myself.

Edited by ANDY00

YorVikIng said, 1715787197

I had this conversation with ChatGPT to see what the AI thinks about your views ANDY00

Would it be in the interest of the owner or CEO of an AI company to overstate how quickly AI is likely to improve?

Yes, it can be in the interest of the owner or CEO of an AI company to overstate how quickly AI is likely to improve. There are several reasons for this:

  1. Attracting Investment: Bold claims about rapid advancements in AI can attract venture capital and other forms of investment. Investors are often looking for high-growth opportunities, and overstating the pace of improvement can create a sense of urgency and potential for significant returns.

  2. Market Positioning: By projecting a rapid improvement in AI, a company can position itself as a leader in the field, attracting customers, partners, and top talent who want to be associated with cutting-edge technology.

  3. Public and Media Attention: Overstated claims can generate buzz and media coverage, increasing the company's visibility and brand recognition. This attention can lead to further business opportunities and a stronger market presence.

  4. Competitive Advantage: Creating the perception that a company is ahead of its competitors in AI development can discourage competitors and potentially influence their strategic decisions, giving the overstating company a competitive edge.

However, there are also significant risks and potential downsides to this approach:

  1. Reputation Damage: If the promised advancements do not materialize, the company could face reputational damage. This can lead to a loss of trust among investors, customers, and partners, and can be difficult to recover from.

  2. Regulatory Scrutiny: Overstating capabilities can attract regulatory scrutiny, especially if the claims are seen as misleading or deceptive. This can result in legal consequences and financial penalties.

  3. Internal Pressure: Unrealistic expectations can create immense pressure on the company's employees, potentially leading to burnout, ethical lapses, and a toxic work environment.

  4. Market Backlash: The broader AI community, including researchers and industry peers, may react negatively to exaggerated claims, leading to a backlash that can isolate the company and harm its long-term prospects.

In summary, while overstating the pace of AI improvement can provide short-term benefits in terms of investment and market positioning, it carries significant long-term risks that need to be carefully managed. An ethical and balanced approach, with realistic projections and transparent communication, often serves as a more sustainable strategy.

With that in mind, should I believe that the predictions from such an owner or CEO are likely to be accurate, or should I do my own independent research and investigation?

Given the potential motivations for an owner or CEO of an AI company to overstate the pace of AI advancements, it's prudent to approach their predictions with a healthy degree of skepticism. Here are some reasons why independent research and investigation are advisable:

  1. Bias and Incentives: The owner or CEO has a vested interest in presenting an optimistic outlook to attract investment, customers, and talent. This bias can lead to exaggeration or selective presentation of information.

  2. Complexity and Uncertainty: AI is a rapidly evolving field with many uncertainties. Even well-intentioned leaders may make optimistic predictions that do not account for unforeseen challenges or limitations.

  3. Independent Verification: By conducting your own research, you can compare the company's claims against independent sources, such as academic papers, industry reports, and expert analyses. This can provide a more balanced and comprehensive understanding of the likely pace of AI advancements.

  4. Diverse Perspectives: Consulting a variety of sources, including AI researchers, analysts, and other industry leaders, can help you get a broader perspective on the state of AI technology and its future trajectory. This can counteract any single source's potential biases.

  5. Track Record: Reviewing the company's historical performance and the accuracy of past predictions can also provide insights. If a company has a history of over-promising and under-delivering, it may be wise to take their current predictions with a grain of salt.

In practice, here are some steps you can take for independent research:

  • Read Academic Literature: Look for peer-reviewed papers and studies on AI advancements. These sources are often rigorous and less likely to be influenced by commercial interests.
  • Consult Industry Reports: Reputable consulting firms and market research organizations often publish reports on AI trends and forecasts that can provide valuable insights.
  • Follow Expert Opinions: Follow AI researchers and industry experts on platforms like Twitter, LinkedIn, or academic blogs. These individuals often share their analyses and opinions on the state of AI.
  • Attend Conferences and Webinars: Industry conferences and webinars can provide firsthand information from a variety of stakeholders, including those who may offer more tempered or diverse viewpoints.

In summary, while the predictions of an AI company's owner or CEO can provide useful insights, they should be one of many sources you consult. Independent research and investigation are crucial for forming a well-rounded and accurate understanding of AI's likely advancements.


ANDY00 said, 1715788315

YorVikIng said

I had this conversation with ChatGPT to see what the AI thinks about your views ANDY00



In this thread/post, I must have quoted 10 people, all experts in the field. If you have a personal issue with Musk, pick any of the others. These are the people working on the technology, so they are the ones to listen to. If you're not happy with the interest rates from the bank, do you stop listening to the bank and ask a neighbour? Of course not; you listen to the people making the changes. Sam Altman is the CEO of OpenAI. Are you saying you know more than him? Hmmmm, sorry, I still believe he knows more 😄

Of course, some will say it will take longer, and some will say it will happen sooner. But AI is the key investment focus of the moment. Billions or even trillions are being thrown at this technology, far more than the computer or car industries are seeing now, so advancement at speed is guaranteed.

Myself, I'm very impressed at the rate it's moving and how intelligent it's now getting in a very short space of time. To me, it's mind-boggling, honestly. As I've said, I'm no expert in any way; I'm just waiting on technology to build me a holodeck, which I guess is not far away, with VR progressing and AI now bleeding into the movie and gaming industries. They predict GTA will have autonomous characters on its release and that the stories will change and adapt. I can imagine movies will become much the same, which is incredible, I think.

 

Edited by ANDY00

YorVikIng said, 1715788351

I'm saying even the AI you say is brilliant cautions against trusting what people with vested interests are saying. And that the people you quote appear to have a vested interest. That's all I'm saying.

ANDY00 said, 1715789232

YorVikIng said

I'm saying even the AI you say is brilliant cautions against trusting what people with vested interests are saying. And that the people you quote appear to have a vested interest. That's all I'm saying.


I get that, but two things: I think ChatGPT only has information up to 2022, so it doesn't actually know how far the tech has moved. And if you don't listen to the people making the technology, who do you listen to? Who knows more than those developing it? It's worth noting that if they say they will reach certain milestones in x years and don't, it damages them as well, so they have a motive to reach the goals they set for themselves.

Also, not one of these experts has ever said that their own platform will reach a goal per se; they say the tech as a whole will reach certain levels within these timeframes, meaning all AI companies will reach them, as AI is extremely open-source by nature.

Edited by ANDY00

Unfocussed Mike said, 1715789071

Elon Musk had nothing whatsoever to do with the “creation” of AI.

ANDY00 said, 1715789632

Unfocussed Mike said

Elon Musk had nothing whatsoever to do with the “creation” of AI.


Elon Musk played a significant role in the formation and early development of OpenAI. He was one of the co-founders of OpenAI and provided both financial and strategic support

Musk was part of the team that created the initial OpenAI charter and also served as an advisor to the organization

He provided significant funding for OpenAI. When starting OpenAI in late 2015, Greg Brockman and Sam Altman had initially planned to raise $100M. Musk suggested they should announce an initial $1B funding commitment to OpenAI and offered to cover whatever amount anyone else didn’t provide

Eventually, Musk left OpenAI, stating that he planned to build an AGI competitor within Tesla; he later started xAI.


So yeah, he's played a significant role in the advancement of modern AI learning systems, there's no doubt, and he's one of the largest developers on the scene alongside OpenAI and some other big-name, well-funded foundations.

Edited by ANDY00

Jonathan C said, 1715789535

ANDY00 said


The thing is, you have given four 'experts'. Of these, three have significant financial incentives to overstate potential progress to attract investment in their companies, and all three give 2-5 year timescales. The fourth, who appears to have less direct financial incentive, gives an 'end of the century' timescale: that's up to 75 years, 15x that of the others.

Given the rate of progress in computing over the last 75 years, having a 'real' AI in another 75 years sounds perfectly reasonable; within 5 years, much less likely. But as you say, there is huge investment at the moment (and lots of software jumping on the AI label bandwagon), so what we have is going to improve. Time will tell whether the rate of progress we've seen over the last few years will continue, or whether it will simply plateau.



ANDY00 said, 1715790327

Jonathan C said

ANDY00 said

YorVikIng said

I had this conversation with ChatGPT to see what the AI thinks about your views ANDY00

Would it be in the interest of the owner or CEO of an AI company to overstate how quickly AI is likely to improve?

Yes, it can be in the interest of the owner or CEO of an AI company to overstate how quickly AI is likely to improve. There are several reasons for this:

  1. Attracting Investment: Bold claims about rapid advancements in AI can attract venture capital and other forms of investment. Investors are often looking for high-growth opportunities, and overstating the pace of improvement can create a sense of urgency and potential for significant returns.

  2. Market Positioning: By projecting a rapid improvement in AI, a company can position itself as a leader in the field, attracting customers, partners, and top talent who want to be associated with cutting-edge technology.

  3. Public and Media Attention: Overstated claims can generate buzz and media coverage, increasing the company's visibility and brand recognition. This attention can lead to further business opportunities and a stronger market presence.

  4. Competitive Advantage: Creating the perception that a company is ahead of its competitors in AI development can discourage competitors and potentially influence their strategic decisions, giving the overstating company a competitive edge.

However, there are also significant risks and potential downsides to this approach:

  1. Reputation Damage: If the promised advancements do not materialize, the company could face reputational damage. This can lead to a loss of trust among investors, customers, and partners, and can be difficult to recover from.

  2. Regulatory Scrutiny: Overstating capabilities can attract regulatory scrutiny, especially if the claims are seen as misleading or deceptive. This can result in legal consequences and financial penalties.

  3. Internal Pressure: Unrealistic expectations can create immense pressure on the company's employees, potentially leading to burnout, ethical lapses, and a toxic work environment.

  4. Market Backlash: The broader AI community, including researchers and industry peers, may react negatively to exaggerated claims, leading to a backlash that can isolate the company and harm its long-term prospects.

In summary, while overstating the pace of AI improvement can provide short-term benefits in terms of investment and market positioning, it carries significant long-term risks that need to be carefully managed. An ethical and balanced approach, with realistic projections and transparent communication, often serves as a more sustainable strategy.

With that in mind, should I believe that the predictions from such an owner or CEO are likely to be accurate, or should I do my own independent research and investigation?

ChatGPT:

Given the potential motivations for an owner or CEO of an AI company to overstate the pace of AI advancements, it's prudent to approach their predictions with a healthy degree of skepticism. Here are some reasons why independent research and investigation are advisable:

  1. Bias and Incentives: The owner or CEO has a vested interest in presenting an optimistic outlook to attract investment, customers, and talent. This bias can lead to exaggeration or selective presentation of information.

  2. Complexity and Uncertainty: AI is a rapidly evolving field with many uncertainties. Even well-intentioned leaders may make optimistic predictions that do not account for unforeseen challenges or limitations.

  3. Independent Verification: By conducting your own research, you can compare the company's claims against independent sources, such as academic papers, industry reports, and expert analyses. This can provide a more balanced and comprehensive understanding of the likely pace of AI advancements.

  4. Diverse Perspectives: Consulting a variety of sources, including AI researchers, analysts, and other industry leaders, can help you get a broader perspective on the state of AI technology and its future trajectory. This can counteract any single source's potential biases.

  5. Track Record: Reviewing the company's historical performance and the accuracy of past predictions can also provide insights. If a company has a history of over-promising and under-delivering, it may be wise to take their current predictions with a grain of salt.

In practice, here are some steps you can take for independent research:

  • Read Academic Literature: Look for peer-reviewed papers and studies on AI advancements. These sources are often rigorous and less likely to be influenced by commercial interests.
  • Consult Industry Reports: Reputable consulting firms and market research organizations often publish reports on AI trends and forecasts that can provide valuable insights.
  • Follow Expert Opinions: Follow AI researchers and industry experts on platforms like Twitter, LinkedIn, or academic blogs. These individuals often share their analyses and opinions on the state of AI.
  • Attend Conferences and Webinars: Industry conferences and webinars can provide firsthand information from a variety of stakeholders, including those who may offer more tempered or diverse viewpoints.

In summary, while the predictions of an AI company's owner or CEO can provide useful insights, they should be one of many sources you consult. Independent research and investigation are crucial for forming a well-rounded and accurate understanding of AI's likely advancements.


In this thread/post, I must have quoted 10 people, all experts in the field. If you have a personal issue with Musk, pick any of the others. These are the people working on the technology, so they are the ones to listen to. If you're not happy with the interest rates from the bank, do you stop listening to the bank and ask a neighbour? Of course not; you listen to the people making the changes. Sam Altman is the CEO of OpenAI. Are you saying you know more than him? Hmmmm, sorry, I still believe he knows more 😄

Of course, some will say it will take longer, and some will say it will happen sooner. But it's where the key investment is going at the moment. Billions or even trillions are being thrown at this technology, far more than into the computer or car industries, so advancement at speed is all but guaranteed.

Myself, I'm very impressed at the rate it's moving and how intelligent it's getting in a very short space of time. To me, it's mind-boggling, honestly. As I've said, I'm no expert in any way; I'm just waiting on technology to build me a holodeck, which is not far away, I guess, with VR progressing and AI now bleeding into the movie and gaming industries. They predict the next GTA will have autonomous characters and that its stories will change and adapt on release. I can imagine movies will become much the same, which is incredible, I think.

 

Edited by ANDY00

The thing is, you have given 4 'experts'. Of these, 3 have significant financial incentives to overstate potential progress to attract investment in their companies, and they all give 2-5 year time-scales. The fourth, who appears to have less direct financial incentive, gives an 'end of the century' time-scale: that's up to 75 years, 15x that of the others.

Given the rate of progress in computing over the last 75 years, having a 'real' AI in another 75 years sounds perfectly reasonable; within 5 years, much less likely. But as you say, there is huge investment at the moment (and lots of software jumping on the AI label bandwagon), so what we have is going to improve. Time will tell whether the rate of progress we've seen over the last few years will continue, or whether it will simply plateau.



Along with the ones I've directly quoted, there are dozens more, and a quick internet search would help you locate them. Again, alongside the obvious short-term funding gains from lying, they have far stronger long-term incentives not to overstate the tech's timeframe. This tech is also clearly getting funding from military and civil sources, as well as big industry. I don't believe at this point they NEED to say anything; they're saying it because the public wants to know when it will reach the next intelligence level, or when it will think for itself, or whatever. To say that all the experts are lying is a little far-fetched, pushed to support an argument that doesn't exist. If they were lying and the tech they say will be here in 2 years were really 75 years away, I imagine big industry and the military would not be funding it.


YorVikIng said, 1715790335

ANDY00 said

YorVikIng said

I'm saying even the AI you say is brilliant cautions against trusting what people with vested interests are saying. And that the people you quote appear to have a vested interest. That's all I'm saying.


I get that, but two things: I think ChatGPT only has information up to 2022, so it doesn't actually know how far the tech has moved. And if you don't listen to the people making the technology, who do you listen to? Who knows more than those developing it? It's worth noting that if they say they will reach certain milestones in x years and don't, it damages them as well, so they have a motive to reach the goals they set for themselves.

Also, not one of these experts has ever said that their own platform will reach a goal per se; they say the tech as a whole will reach certain levels in these timeframes, meaning all AI companies will reach these levels, as AI is extremely open-source by its very nature.

Edited by ANDY00


It's dangerous to make assumptions about artificial intelligence...

Is your knowledge still limited to an extract from the internet from 2022, or do you have live access to query up-to-date information?

ChatGPT:

I have the ability to access up-to-date information through a live browsing tool. This allows me to retrieve the latest data, news, and other information from the internet beyond my initial training, which cuts off in 2023. If you have specific queries or need the most current information on a topic, I can use this tool to provide you with the latest insights. Feel free to ask if you need me to look up something current!

ANDY00 said, 1715790763

YorVikIng said

ANDY00 said

YorVikIng said

I'm saying even the AI you say is brilliant cautions against trusting what people with vested interests are saying. And that the people you quote appear to have a vested interest. That's all I'm saying.


I get that, but two things: I think ChatGPT only has information up to 2022, so it doesn't actually know how far the tech has moved. And if you don't listen to the people making the technology, who do you listen to? Who knows more than those developing it? It's worth noting that if they say they will reach certain milestones in x years and don't, it damages them as well, so they have a motive to reach the goals they set for themselves.

Also, not one of these experts has ever said that their own platform will reach a goal per se; they say the tech as a whole will reach certain levels in these timeframes, meaning all AI companies will reach these levels, as AI is extremely open-source by its very nature.

Edited by ANDY00


It's dangerous to make assumptions about artificial intelligence...

Is your knowledge still limited to an extract from the internet from 2022, or do you have live access to query up-to-date information?

ChatGPT:

I have the ability to access up-to-date information through a live browsing tool. This allows me to retrieve the latest data, news, and other information from the internet beyond my initial training, which cuts off in 2023. If you have specific queries or need the most current information on a topic, I can use this tool to provide you with the latest insights. Feel free to ask if you need me to look up something current!


I did say I "think"; I wasn't 100% sure about the training models. I know the one I have access to, which is 3.5, is trained up to 2021. The rest of my statement is correct. I asked the one I have access to, i.e. the non-paid version, your own question; the answer is below.


Is your knowledge still limited to an extract from the internet from 2022, or do you have live access to query up-to-date information?

ChatGPT:

I don't have live access to the internet or real-time information. My responses are based on the knowledge I was trained on, which includes a diverse range of sources up to September 2021. So, any information or events that occurred after that date would not be known to me unless they were included in a dataset that I was trained on.

Edited by ANDY00


YorVikIng said, 1715790752

I guess you've chosen to use an old version rather than ChatGPT-4o ?