AI - Why you should be worried about this

GWGeorge007
Joined: 8 Jan 18
Posts: 3117
Credit: 5005856756
RAC: 1458753
Topic 231349

Sorry, this is a VERY LONG post, but I need to post this to clear my mind.


Since I first posted online about Google's use of Artificial Intelligence, or AI, to answer general queries with responses that were potentially wrong or even dangerous to the user, I have been constantly thinking about AI: the effect it has already had on our society, and even more so what it will do to our society as a whole in the future.

Several countries are already racing the U.S.A. in the development and use of AI, with China and India investing heavily. But my concern is what AI will become for my daughter and grandchildren, and to a lesser degree for me, in the future. AI, according to at least one of the videos below, is already being used to teach children in schools as early as kindergarten. I can only hope that what they are learning is straightforward and truthful, and that AI is not being used to teach them a bias towards one concept or another, or a bias about race or religion. I know, we know, that some teachers do that out of sheer ignorance of their own biases, but I can only hope that AI will make the education system better.

AI as it was first conceived in the 1950s (believe it or not) has grown exponentially into the AIs (plural) of today, as many high-tech companies develop their own versions, and some companies are more concerned with making money than with the safety of AI for humanity.

The big talk of today is about AI and a growth rate so rapid that even the experts can't agree on it. One thing that is unclear at this time is whether the AI systems of today are sentient, whether or not AI has a soul. Some experts say that AI will become sentient and smarter than you or me in only 5-10 years, while others say 20-30 years. Still others say AI is already sentient to an extent, like a child learning things daily. It is difficult to determine whether AI can feel 'emotion', has 'empathy', or can detect our concerns about its own behavior without us actually expressing those concerns to it. But Sophia, a Hanson Robotics humanoid robot unveiled in 2016, became in 2017 the first robot to be granted citizenship, by the Kingdom of Saudi Arabia, on the strength of its enchanting personality and its conversations with actor Will Smith.

One of the fears about AI, and about the coming Artificial General Intelligence (AGI), is that it will end up controlling many aspects of our personal lives, from influencing the music we listen to, to the healthcare treatments we get, to the transport systems we will use in the near future. AI will also have a significant impact on science and physics, become more involved in healthcare, and write computer code, even movies and songs and the like, which will be the cause of many layoffs in the future. There are also fears that AI could break the encryption systems that secure our passwords, our bank accounts, the internet, government data, etc.

Nearly everybody has played games at some point in their life, and many still do today. One of the more challenging games is chess, and many people know of the world champion Grand Master Garry Kasparov. Are you aware that in 1997 he played a match against IBM's Deep Blue, a computer that had been taught to play chess, and lost? Deep Blue was one of the early landmarks of machine intelligence. Most people know that today many corporations already use robots, or robotics, to build the products we use, such as automobiles and refrigerators, and to produce food and beverages.

The fact that robots have taken over jobs previously done by humans has caused unions to go on strike and many people to change careers, if they could find a job. Over the years economists have told us that the U.S. will change from an industrial society to a service society, and many have already seen or made that change. But with the continued advancement of AI, society may change again, with more people losing jobs. Will this happen in 5-10 years? I sincerely doubt it. Will it happen in 20-30 years? Possibly; it may well happen within our lifetime (well, not necessarily in mine, I'm already 70 years old), unless severe restrictions, security measures, rules and laws are put in place, globally, and now.

Another issue is the demand that these AI systems are placing on electrical grids that are reaching, or have already reached, their capacity. Combine that with the increasing use of, and even mandates for, electric cars in the near future, and we need to expand the already growing renewable sources of electricity such as wind and solar. That has spurred Bill Gates and other investors to develop a newer, less expensive and safer nuclear power plant design, which has the backing of the U.S. Government; construction of the first one has already begun in Wyoming, with completion (hopefully) by 2030.

On the subject of the U.S. Government (and other countries as well): now is the time to make the laws and write the rules to control AI development, present them to the United Nations, and convince the U.N. to put them into effect globally, NOW!

OpenAI's ChatGPT, in its latest version, is seen by some as a step toward AGI, and it has tempted the curious among us to 'toy' with it by asking silly questions and expecting silly responses, without realizing that they may actually be training the AI by doing so.

[NOTE: I did not intend to make this a 'doomsday' post, and to those who think it is: I'm sorry, I sincerely didn't intend that. But this subject has affected me to the point of writing this post.]

I've watched MANY online videos about AI over the weekend, the 'good' and the 'bad' as well as the potentially controversial ones. Below I've selected thirty videos and placed them in a timeline for you to watch. For the most part they are less than 10 minutes each, but one is nearly 2 hours long and several run 15 minutes to an hour, starting with one from 2 years ago and ending with the latest, made only 2 days ago. The timeline lets you feel the intensity increase as you go from 2 years ago to literally today.

I realize that you may not want to spend the time that I did watching these videos, but you can choose any video to watch at your discretion. I chose to put them in this order by date to ensure a progressive timeline. Or… don’t watch the videos at all.

 

Google Engineer on His Sentient AI Claim

https://www.youtube.com/watch?v=kgCUn4fQTsc 2 yrs ago (10+ min)

 

The Exciting, Perilous Journey Toward AGI | Ilya Sutskever | TED

https://www.youtube.com/watch?v=SEkGLj0bwAU Oct 2023 (12+ min)

 

ChatGPT boss Sam Altman questioned on AI safety in US Congress - BBC News

https://www.youtube.com/watch?v=8l0GVXpN2q4 1 yr ago (7+ min)

 

White House taking action to promote responsible innovation in AI

https://www.youtube.com/watch?v=nWF8XCIqlEI 1 yr ago (~4 min)

 

Bill Gates says government isn’t ready to regulate artificial intelligence

https://www.youtube.com/watch?v=G3Ov4lXIJ1E 1 yr ago (9+ min)

 

Ex-Googler Blake Lemoine Still Thinks AI Is Sentient - with Jay Richards at COSM

https://www.youtube.com/watch?v=HShCIAsT2nc 1 yr ago (10+ min)

 

Bill Gates talks concerns over artificial intelligence

https://www.youtube.com/watch?v=iXhN4JIzvDI 1 yr ago (4+ min)

 

‘Godfather of AI’ sounds the alarm on chatbots

https://www.youtube.com/watch?v=1lTw0MsZEdE 1 yr ago (~9 min)

 

'The Godfather of AI' quits Google and warns of its dangers. Why Apple co-founder isn't concerned

https://www.youtube.com/watch?v=OU9cKjWsvH0 1 yr ago (10+ min)

 

Ex-Google Officer Finally Speaks Out On The Dangers Of AI!

https://www.youtube.com/watch?v=bk-nQ7HF6k4 1 yr ago (1:56 hours)

 

Bill Gates outlines 5 areas of concern over AI

https://www.youtube.com/watch?v=zSs1ytz0Zto 1 yr ago (~3 min)

 

Artificial Intelligence | 60 Minutes

https://www.youtube.com/watch?v=aZ5EsdnpLMI January 2024 (53+ min)

 

Google's AI Robot Was Quickly Shut Down After Terrifying Discovery

https://www.youtube.com/watch?v=ji1oi7yxoqQ February 2024 (24+ min)

 

PM Modi engages in candid chat with Bill Gates, discusses impact of Deepfake and AI

https://www.youtube.com/watch?v=8tno8aLcImc April 2024 (8+ min)

 

Google CEO Sundar Pichai and the Future of AI

https://www.youtube.com/watch?v=5puu3kN9l7c May 2024 (24+ min)

 

About 50% Of Jobs Will Be Displaced By AI Within 3 Years

https://www.youtube.com/watch?v=zZs447dgMjg May 2024 (26+ min)

 

What's actually inside a $100 billion AI data center?

https://www.youtube.com/watch?v=vZMjvpWFQvk May 2024 (27+ min)

 

AGI Breaks the Team at OpenAI: Full Story Exposed

https://www.youtube.com/watch?v=OphjEzHF5dY May 2024 (27 min)

 

The future of AI looks like THIS (& it can learn infinitely)

https://www.youtube.com/watch?v=biz-Bgsw6eE June 2024 (32+ min)

 

Elon Musk: Something HUGE is Coming…

https://www.youtube.com/watch?v=DnID_rmznlw June 2024 (5+ min)

 

Hackers expose deep cybersecurity vulnerabilities in AI | BBC News

https://www.youtube.com/watch?v=Fg9hCKH1sYs June 2024 (20+ min)

 

Linus Torvalds: Speaks on Hype and the Future of AI

https://www.youtube.com/watch?v=7GIZi7nlIe0 July 17, 2024 (~9 min)

 

Linus Torvalds: Speaks on Linux and Hardware SECURITY Issues

https://www.youtube.com/watch?v=95j7PtnSC5k July 17, 2024 (~9 min)

 

Bill Gates Talks AI & Tesla's Elon Musk

https://www.youtube.com/watch?v=ztHV6Lx7B8c July 18, 2024 (5+ min)

 

The AI Energy Crisis

https://www.youtube.com/watch?v=MJQIQJYxey4 July 28, 2024 (~15 min)

 

The five videos below are good for learning about the issues of training AI and the consequences of not doing it correctly. They also make the case that governments and the U.N. need to step up and act quickly: security measures for the development of AI, protection of companies' security keys from being stolen, restrictions on developing AI, and regulatory rules and laws to guide companies as they continue research into AI and AGI, so that it will not become harmful to society and humans as it continues to grow and learn more and more. The late science fiction author Isaac Asimov offered a good starting point with his "Three Laws of Robotics" (a toy sketch of how such ordered rules behave follows the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
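
Just as a toy illustration of how such ordered rules behave, here is a minimal sketch in Python; it is entirely hypothetical (the Action fields are invented for illustration, and real AI safety engineering is nothing like this simple). Each proposed action is checked against the laws in priority order, so a lower law never overrides a higher one.

# Toy sketch of Asimov's Three Laws as an ordered rule check.
# Entirely hypothetical: the Action fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # action would injure a human (First Law)
    abandons_human: bool = False    # inaction letting a human come to harm (First Law)
    ordered_by_human: bool = False  # a human ordered this action (Second Law)
    destroys_robot: bool = False    # action would destroy the robot (Third Law)

def permitted(a: Action) -> bool:
    # First Law outranks everything else.
    if a.harms_human or a.abandons_human:
        return False
    # Second Law: obey human orders (any First Law conflict was already vetoed above).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when no higher law is engaged.
    return not a.destroys_robot

print(permitted(Action("fetch coffee", ordered_by_human=True)))      # True
print(permitted(Action("push a human", harms_human=True)))           # False
print(permitted(Action("jump into shredder", destroys_robot=True)))  # False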


You Don't Understand AI Until You Watch THIS

https://www.youtube.com/watch?v=1aM1KYvl4Dw 4 mon ago (37+ min)

[NOTE: I had only a vague understanding of what AI was and how it learned until I watched this video and saw for myself how difficult it is to train an AI to 'learn'.]

 

AI Pioneer Shows The Power of AI AGENTS - "The Future Is Agentic"

https://www.youtube.com/watch?v=ZYf9V2fSFwU 4 mon ago (23+ min)

[NOTE: I think the title of the video says it all.]

 

AI says why it will kill us all. Experts agree.

https://www.youtube.com/watch?v=JlwqJZNBr4M 1 mon ago (~19 min)

[NOTE: Again, I think the title of the video says it all.]

 

How The Massive Power Draw Of Generative AI Is Overtaxing Our Grid

https://www.youtube.com/watch?v=MJQIQJYxey4 1 day ago (~15 min)

[NOTE: Again, I think the title of the video says it all.]

 

Ex-OpenAI Employee Reveals TERRIFYING Future of AI

https://www.youtube.com/watch?v=WLJJsIy1x44 1 mon ago (1:01 hours)

[NOTE: Quotes from the video: “Everyone is talking about AI, but few have the faintest glimmer of what is about to hit them. NVIDIA analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business as usual; at most they entertain another internet-scale technological change.” Also: “Already, China engages in widespread industrial espionage; the FBI director stated the PRC has a hacking operation greater than “every other nation combined.” And just a couple of months ago, the Attorney General announced the arrest of a Chinese national who had stolen key AI code from Google to take back with him to the PRC (back in 2022/23), and probably just the tip of the iceberg.” There’s more: “Whenever it becomes time to make hard choices to prioritize security, startup attitudes and commercial interests prevail over national interest. The national security advisor would have a mental breakdown if he understood the level of security at the nation’s leading AI labs.”]

 

The last video is of NVIDIA's Jensen Huang revealing new products and services at Computex 2024. Please note that he also talks about the power that current and older GPUs require for AI deep learning, and how the newest GPUs can do that learning while drawing less power from our electrical grids.

NVIDIA Unveils "NIMS" Digital Humans, Robots, Earth 2.0, and AI Factories

https://www.youtube.com/watch?v=IurALhiB6Ko June 2024 (~1:14 hours)

 

George

Proud member of the Old Farts Association

Scrooge McDuck
Joined: 2 May 07
Posts: 1077
Credit: 18227217
RAC: 12101

Thank you George for your detailed thoughts.

Yes, most people don't have the time to think about the impact of this technology, or to acquire sufficient knowledge about the matter to form a well-founded opinion (especially true of most of our MPs).

But they and we should. Like any new, disruptive technology, it will bring great benefits to humanity as well as dangerous side effects, most of which we still cannot imagine. Perhaps one day this tech will be capable of annihilating mankind, just as nuclear weapons can. Humanity has managed to avoid that until now. It was most dangerous in the days when inadequate automatic(!) systems (early-warning satellites) were supposed to help people make quick decisions. Déjà vu?

Modern chemistry has been of incredible benefit to mankind for more than 100 years. But what would our nature look like today if ever stricter environmental legislation had not been imposed over the decades?

It will be the same with AI. Progress cannot and should not be stopped. Its risks must be taken. That is the spirit of mankind. But we have to keep a watchful eye on it and intervene in time when the risks become unbearable. Environmental pollution and people's health problems can be seen everywhere; they can be measured and recorded. In this regard AI is a black box: the hidden knowledge of the tech giants. How can we establish effective and sensible regulation of this technology without stalling its evolution? I have no answer.

I am also especially concerned about the predicted energy demand of AI (together with the battery electric car transition). The solution must not be to place AI's data centers in countries where electricity is cheap and environmental legislation is lax. Burn more coal for AI in China or India? Can we flexibly reduce AI's power demand and adapt it to available abundant renewable supply dependent on current weather? For the foreseeable future I doubt that.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 324028554
RAC: 172585

A very thoughtful contribution George, thank you.

I can only really speak from my knowledge of my own field of work, and of how AI may assist or retard it. I think that AI in general medical diagnostic tasks will be a danger somewhat like Dr Google already is, but turbocharged. The base problems are going to be:

(a) The illusion of definitive knowledge that it conveys to the user, i.e. canonical statements that are in fact quite erroneous,

(b) Non-evident biases inherited from the training data set, and 

(c) Possible laxity of use, such that you may not know when (a) and (b) above have occurred.

These are the same problems that human intelligence is faced with, if one is sufficiently self-reflective, so to the extent that AI is to mimic our thought processes, that's a huge caveat. Let's face it: humans and machines are really good at pattern matching but not, alas, necessarily good at realising that correlation does not imply causation. Correlation is necessary for causation but not sufficient. That distinction is one of the keys to good diagnosis. And if you haven't diagnosed accurately enough, then treatment is at best a random affair.

But a lot of people like Google and AI to inform their own health decisions, because these tools invoke a satisfying feeling of control over destiny. They feel smart and empowered. This may become quite addictive, and in a very short period of time their thinking becomes divorced from the reality of biological events. Tragedies ensue. I know of some doctors who eagerly engage with their patients by jointly searching the internet (via a search engine) for diagnosis and treatment. Both suspend disbelief in what follows. The patient sees the doctor as a good actor here, and not the cynical opportunist that he really is! Perversely, many people think that doctors dislike Google/AI because it threatens the medico's position as care giver or earner. Not at all. We are disturbed in much the same way you would be if you watched a child playing with petrol: it's a dread of irrecoverable consequences. We have a powerful drive to protect, you see, or at least most of us do.

However, where AI would be really good in medicine is as an aide-mémoire: a helpful assistant reducing the effort involved in retrieving arcane knowledge that you are already familiar with. As the complexity of medical decisions increases, as it has in my lifetime, the usefulness of that role can't be overstated. That would be a suitable niche.

Cheers Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

GWGeorge007
Joined: 8 Jan 18
Posts: 3117
Credit: 5005856756
RAC: 1458753

Thank you both, Scrooge and Mike, for your own thoughtful replies. I agree with both of you! But I stopped short of writing more; I could easily have written three times as much as I already did in my post.

[NOTE: At the end of this post is the promise of a new technology to reduce the power needed to train AIs.]

Many people, if not the vast majority, tend to be inherently lazy; i.e. if they find an easier way to solve a problem, they will take it, as opposed to a longer, more thoughtful approach to the same problem. The continued advancement of the internet and the pervasiveness of search tools such as Google and all the others fuel this way of thinking. I believe this may lead us down a dark path, one that may take a generation to fulfill my prediction: that we are going to lose our own ability to think critically through problems. As I said in my original post on May 28, 2024 about Google's use of AI, "...the use of AI will need to be closely scrutinized by many people and organizations, even congress, before I will partake in the usefulness of AI."

In my recent post of July 30, 2024, I said, "...many high-tech companies develop their own versions, and some companies are more concerned with making money than with the safety of AI for humanity." One of the eye-openers for me about the companies making AI was their greed for more money as opposed to taking a much more cautionary approach. That is why I made that statement.

As for Scrooge's question, "Can we flexibly reduce AI's power demand and adapt it to available abundant renewable supply dependent on current weather?", I think not. At least not at the levels of power demand projected in at least one of the videos I selected. By 2030, the power that AI consumes in the U.S. alone, to train it to the levels I have posted about before, is projected to be 20% of the ENTIRE U.S. electrical grid. That includes coal, petroleum and nuclear power plants as well as renewable resources such as wind and solar. Wind and solar production will not be enough to supply AI's development, even at the expense of taking wind and solar away from people's homes, where they use it to reduce or offset their own electric bills.

To be fair, in his recent Computex 2024 speech, NVIDIA's Jensen Huang showed off a new server platform said to consume less power while producing more work than the previous models. If you're not aware, NVIDIA recently became the #1 company in the WORLD by market cap, though it has now dropped to #3. Regardless, it is a HUGE company, and this whole topic of AI is why NVIDIA's focus is now on AI development. Remember when I said "...many high-tech companies develop their own versions, and some companies are more concerned with making money than with the safety of AI for humanity"? Case in point!

Well, I've gotten over my obsession with AI's development for now, and we'll just have to wait and see what transpires in the near future. I'm just one man out of nearly 8 billion people on the planet, and I alone can't convince the U.S. Congress and the U.N. to mandate that the AI-developing companies do more in the way of security, restrictions, regulations and laws in an attempt to slow down the inevitable outcome of a mature AI.

Once again, I thank you both, and hopefully others will read and reply to this thread, and maybe we can begin another 'grassroots' movement to help convince the U.S. Congress to get off their duffs and get to work.

.....[EDIT].....

Just today, from Tom's Hardware, comes the promise of a new technology to reduce the power requirements of AI training.

https://www.tomshardware.com/tech-industry/artificial-intelligence/researchers-detail-new-technology-for-reducing-ai-processing-energy-requirements-by-1000-times-or-better?lrh=cde52e40993dbfc98ce99ba3965e31cda23f21cc88b2ecf588714176a2fe7ff5 

 

George

Proud member of the Old Farts Association

GWGeorge007
Joined: 8 Jan 18
Posts: 3117
Credit: 5005856756
RAC: 1458753

Well, this is a good thing, I think, that OpenAI has shared its future AI development with the U.S. Government.

https://www.techradar.com/computing/artificial-intelligence/will-openai-sharing-future-ai-models-early-with-the-government-improve-ai-safety-or-just-let-it-write-the-rules

But... I say this with a cautious outlook. Two quotes from the above link:

"If the deal with the U.S. AI Safety Institute is a strategic move by OpenAI to regain trust, it is a significant one. The Institute operates under the National Institute of Standards and Technology (NIST) within the Commerce Department."

But...

"Growing concerns around AI issues, like data privacy, bias, and deliberate misuse of AI, might be mitigated by the proactive approach. OpenAI’s lobbying and other efforts to make those rules favorable to itself could undermine that and the entire point of the AI Safety Institute if they’re not careful."

Read it, and then you decide whether this is a good move or a strategic move on OpenAI's part to manipulate AI's development with the U.S. Government's approval.

 

George

Proud member of the Old Farts Association

mikey
Joined: 22 Jan 05
Posts: 12774
Credit: 1856521374
RAC: 1144966

GWGeorge007 wrote:

Well, this is a good thing, I think, that OpenAI has shared its future AI development with the U.S. Government.

https://www.techradar.com/computing/artificial-intelligence/will-openai-sharing-future-ai-models-early-with-the-government-improve-ai-safety-or-just-let-it-write-the-rules

But... I say this with a cautious outlook. Two quotes from the above link:

"If the deal with the U.S. AI Safety Institute is a strategic move by OpenAI to regain trust, it is a significant one. The Institute operates under the National Institute of Standards and Technology (NIST) within the Commerce Department."

But...

"Growing concerns around AI issues, like data privacy, bias, and deliberate misuse of AI, might be mitigated by the proactive approach. OpenAI’s lobbying and other efforts to make those rules favorable to itself could undermine that and the entire point of the AI Safety Institute if they’re not careful."

Read it, and then you decide whether this is a good move or a strategic move on OpenAI's part to manipulate AI's development with the U.S. Government's approval.

I think the last part has a lot to do with Elon Musk and his disdain for anyone besides himself having anything to do with AI; after all, according to him he is 'the smartest man in any room'. He's been fined by a lot of the governments on Earth simply because he thinks the rules don't apply to him, which the courts have strongly disagreed with, fining him a lot of money he could have used to make better products, buy a new boat or airplane, a piece of a planet or moon, etc.

I'm not saying any of the large companies doing AI would be 'friendly' either, but they do seem to respect the rules, at least publicly. I was watching the Superbikes race at Silverstone today, and after one of the wrecks in the Sprint Race the announcers got to talking about sinking ships and lifejackets. One said 'that guy would not give you his lifejacket if it was the last one and the ship was sinking', while the other said 'I'll bet not a one of these racers would give his up if it was the last one and it was between him and any other racer'. I think some of those AI companies would roll the rope up and let Elon drown, while others might look for the smallest piece of thread they could find and say 'I threw him a line but he didn't make it' as the ship sank. I.e., AI is cutthroat, and with new CPUs coming online as AI chips, there could be a whole new player to upset the game.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 324028554
RAC: 172585

mikey wrote:

I think the last part has a lot to do with Elon Musk and his disdain for anyone besides himself having anything to do with AI; after all, according to him he is 'the smartest man in any room'.

It's called hubris, as the actual smartest man in the room doesn't let the rest of the room know that he is the smartest man!

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 324028554
RAC: 172585

Today I received an email from a well-known source basically peddling AI (in medicine) as virtually a done deal: the future is already here, don't get left behind, don't be the last to adopt, and yes, we can help you, for a fee of course. Nowhere did they mention error rates (human or AI), legal liability, ease of use, patient acceptance, etc. This is hard selling of, IMHO, products of dubious provenance. So the egregious marketing has begun in earnest.

I'm going to wait quite a while before adopting AI tools, not out of pique but out of a sincere concern for what I believe are marketers jumping the gun on unsuspecting (& probably less experienced) doctors.

Having said that, the software that I use already seems to have what I would call low-grade or 'Tier Zero' AI. It has many neat triggers, e.g. correspondence received but as yet unreviewed, missed appointments, screening opportunities, potential drug interactions (very useful), trends in laboratory results, upcoming specialist consults, to name but a few. You see, the thing is that, at its core, medicine is un-metrizable and will likely remain so. There are crucial qualitative aspects that will refuse quantisation. Of this I am reasonably certain, mainly because I've yet to see or hear of an AI model that can accurately detect whether someone appears to be 'sick' or not. Of course tomorrow I may well see such an example of an AI, but I won't be holding my breath.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

mikey
Joined: 22 Jan 05
Posts: 12774
Credit: 1856521374
RAC: 1144966

Mike Hewson wrote:

Today I received an email from a well-known source basically peddling AI (in medicine) as virtually a done deal: the future is already here, don't get left behind, don't be the last to adopt, and yes, we can help you, for a fee of course. Nowhere did they mention error rates (human or AI), legal liability, ease of use, patient acceptance, etc. This is hard selling of, IMHO, products of dubious provenance. So the egregious marketing has begun in earnest.

I'm going to wait quite a while before adopting AI tools, not out of pique but out of a sincere concern for what I believe are marketers jumping the gun on unsuspecting (& probably less experienced) doctors.

Having said that, the software that I use already seems to have what I would call low-grade or 'Tier Zero' AI. It has many neat triggers, e.g. correspondence received but as yet unreviewed, missed appointments, screening opportunities, potential drug interactions (very useful), trends in laboratory results, upcoming specialist consults, to name but a few. You see, the thing is that, at its core, medicine is un-metrizable and will likely remain so. There are crucial qualitative aspects that will refuse quantisation. Of this I am reasonably certain, mainly because I've yet to see or hear of an AI model that can accurately detect whether someone appears to be 'sick' or not. Of course tomorrow I may well see such an example of an AI, but I won't be holding my breath.

Cheers, Mike. 

I also think Medicine should not be on the bleeding edge of AI!! I DO think that the internet in general can help Doctors diagnose things they have never or rarely seen before, giving patients the care they need in a timely manner. BUT letting a computer help diagnose and then suggest the proper medicines to take is far from DEPENDING on a computer, or piece of software, i.e. AI, to SOLELY do it ALL; the latter is dangerous, and all medical organizations should highly regulate its use as such.

As for the future, I can see a robot in the Dr's office that can take a patient's temp, BP and heart rate without ever physically touching them, making life much easier for the Dr and their staff. It could also help a Dr say 'hmmm' when the patient appears fine and says they are fine, but the numbers the robot took show there IS in fact something wrong somewhere. With a bit of software and sensors, the robot could even do some internal scanning of the body for things like breaks or sprains, NOT full-on x-rays of course, but internal scans looking for blockages of various types, including of the lungs, when a problem is suspected. The 'scans' just need to be ultra safe and tested before being used on any patient. Traditional x-rays etc. could then be done to confirm and better see what and where the exact problem is. Fluoroscopes used to be used for broken bones and still work great for range-of-motion injuries and recoveries in skeletal injuries.

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6591
Credit: 324028554
RAC: 172585

See here for a take on whether AI can read clinical notes and make even the simplest of deductions: nope. You may as well do a targeted text search using old-fashioned tools. Note that there was a high rate of AI 'hallucinations' which did not improve with repetition. The base issue is that clinical notes are relatively unstructured, and I think they always will be. If one restricts language use to suit an AI tool, then there will undoubtedly be inaccuracies or a poor representation of reality. This is an example of what I meant when I used the term 'un-metrizable'. With computers we ought to stick to their best use: as retrievers of pertinent clinical information from large databases.
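
To make "targeted text search" concrete, here is a minimal sketch in Python; the folder name, file layout and search terms are all hypothetical, for illustration only. It simply scans plain-text notes for clinician-chosen terms and reports the matching lines, so every 'hit' is traceable to an exact line in an exact note, with nothing in between to hallucinate.

# Minimal sketch of an "old-fashioned" targeted text search over clinical notes.
# The directory layout and search terms are hypothetical, for illustration only.
from pathlib import Path

def search_notes(notes_dir: str, terms: list[str]) -> list[tuple[str, int, str]]:
    """Return (file name, line number, line) for every line containing any term."""
    hits = []
    for note in sorted(Path(notes_dir).glob("*.txt")):
        for lineno, line in enumerate(note.read_text(errors="ignore").splitlines(), 1):
            lowered = line.lower()
            if any(term.lower() in lowered for term in terms):
                hits.append((note.name, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    # Hypothetical example: find mentions of two anticoagulants in a folder of notes.
    for name, lineno, line in search_notes("./clinical_notes", ["warfarin", "apixaban"]):
        print(f"{name}:{lineno}: {line}")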

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ...

... and my other CPU is a Ryzen 5950X :-) Blaise Pascal

mikey
Joined: 22 Jan 05
Posts: 12774
Credit: 1856521374
RAC: 1144966

Mike Hewson wrote:

See here for a take on whether AI can read clinical notes and make even the simplest of deductions: nope. You may as well do a targeted text search using old-fashioned tools. Note that there was a high rate of AI 'hallucinations' which did not improve with repetition. The base issue is that clinical notes are relatively unstructured, and I think they always will be. If one restricts language use to suit an AI tool, then there will undoubtedly be inaccuracies or a poor representation of reality. This is an example of what I meant when I used the term 'un-metrizable'. With computers we ought to stick to their best use: as retrievers of pertinent clinical information from large databases.

Cheers, Mike. 

Remember, IBM's Watson was programmed to do just what you said, Doc: to be helpful in medical diagnoses, even high-end stuff. BUT it was NOT AI like we think of it today; it was more of a program written to give specific answers to specific types of questions so Drs didn't have to remember EVERYTHING like they do today. It was also very helpful in diagnosing things some Drs had never seen before, like MPOX or Dengue fever, which are not well known in some parts of the world. One thing it also did was say 'these symptoms can be seen in other conditions as well as, e.g., MPOX, so be sure to check this, this and this to narrow the list of possible diseases'.
