Health

‘Garbage In Is Garbage Out’: Why Healthcare AI Models Can Only Be As Good As The Data They’re Trained On

docNIA
Last updated: April 1, 2024 at 8:16 PM · 3 min read

The accuracy and reliability of AI models hinge on the quality of the data they are trained on. This can’t be forgotten, especially when these tools are applied in healthcare settings, where the stakes are high.

When developing or deploying new technologies, hospitals and healthcare AI developers must pay meticulous attention to the quality of training datasets, as well as take active steps to mitigate biases, said Divya Pathak, chief data officer at NYC Health + Hospitals, during a virtual panel held by Reuters Events last week.

“Garbage in is garbage out,” she declared.

There are various forms of bias that can be present in data, Pathak noted.

For example, bias can emerge when certain demographics are over- or underrepresented in a dataset, as this skews the model’s understanding of the broader population. Bias can also arise from historical inequalities or systemic discrimination reflected in the data. Additionally, there can be algorithmic bias: bias inherent in the algorithms themselves, which may disproportionately favor certain groups or outcomes due to the model’s design or training process.

One of the most important actions hospitals and AI developers can take to mitigate these biases is to examine the population represented in the training data and make sure it matches the population on which the algorithm will be used, Pathak said.

For instance, her health system would not use an algorithm trained on patient data from people living in rural Nebraska. The demographics of a rural area and of New York City are too different for the model to perform reliably, she explained.
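
A minimal sketch of that kind of population check, assuming a hypothetical training cohort and a hypothetical target demographic mix (the group labels, proportions, and the representation_check helper below are illustrative, not anything NYC Health + Hospitals described):

```python
from collections import Counter

from scipy.stats import chisquare

def representation_check(train_groups, target_proportions, alpha=0.05):
    """Compare a training cohort's demographic mix against the population
    the model will serve, via a chi-square goodness-of-fit test.

    train_groups: one group label per training record
    target_proportions: group label -> expected share (sums to 1)
    """
    counts = Counter(train_groups)
    groups = sorted(target_proportions)
    n = sum(counts.values())

    observed = [counts.get(g, 0) for g in groups]
    expected = [target_proportions[g] * n for g in groups]

    stat, p_value = chisquare(f_obs=observed, f_exp=expected)
    for g, obs, exp in zip(groups, observed, expected):
        print(f"{g:>8}: observed {obs:>6}, expected {exp:>8.1f}")
    print(f"chi-square = {stat:.2f}, p = {p_value:.4g}")

    # A small p-value flags a mismatch worth investigating before training.
    return p_value >= alpha

# Hypothetical cohort that skews heavily toward one group.
train = ["urban"] * 900 + ["rural"] * 100
target = {"urban": 0.6, "rural": 0.4}
if not representation_check(train, target):
    print("Training cohort does not match the deployment population.")
```

In practice a team would run this kind of comparison across many attributes at once (age, sex, race, geography) and treat a flagged mismatch as a prompt for review rather than an automatic rejection.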

Pathak encouraged organizations developing healthcare AI models to create data validation teams that can identify bias before a dataset is used to train algorithms.

She also pointed out that bias isn’t just a problem that goes away after a quality training dataset has been established.

“Bias actually exists in the entirety of the AI lifecycle — all the way from ideation to deployment and evaluating outcomes. Having the right guardrails, frameworks and checklists at each stage of AI development is key to ensuring that we are able to remove as much bias as possible that propagates through that lifecycle,” Pathak remarked. 
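
One concrete guardrail at the outcome-evaluation stage is tracking a performance metric per demographic group rather than only in aggregate. The sketch below is a hedged illustration using binary labels and made-up records (the per_group_tpr helper is hypothetical, not a method Pathak named):

```python
from collections import defaultdict

def per_group_tpr(records):
    """True-positive rate per demographic group.

    records: iterable of (group, y_true, y_pred) tuples with 0/1 labels.
    A large gap between groups is one signal that bias has propagated
    into the model's outcomes.
    """
    tp = defaultdict(int)   # correctly flagged positives per group
    pos = defaultdict(int)  # actual positives per group

    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1

    return {g: tp[g] / pos[g] for g in pos}

# Hypothetical model outputs for two groups.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 1),
]
print(per_group_tpr(records))  # roughly {'A': 0.67, 'B': 0.33}: a gap worth reviewing
```

Running the same check at regular intervals after deployment is one way to turn the lifecycle-wide framework Pathak describes into a routine, auditable process.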

She added that she doesn’t believe bias can be removed altogether. 

Humans are biased, and they are the ones who design algorithms as well as decide how to best put these models to use. Hospitals should be prepared to mitigate bias as much as possible — but shouldn’t have the expectation of a completely bias-free algorithm, Pathak explained.

Photo: Filograph, Getty Images
