H2O.ai Blog

Testing Large Language Model (LLM) Vulnerabilities Using Adversarial Attacks

Adversarial analysis seeks to explain a machine learning model by understanding, locally, what changes to an input would alter the model’s outcome. Depending on the context, adversarial results could be used as attacks, in which a change is made to trick a model into reaching a different outcome. Or they could be used as an exp...

Read more
A Brief Overview of AI Governance for Responsible Machine Learning Systems
by Navdeep Gill, Abhishek Mathur, Marcos V. | November 30, 2022 | AI Governance, Machine Learning, Responsible AI

Our paper “A Brief Overview of AI Governance for Responsible Machine Learning Systems” was recently accepted to the Trustworthy and Socially Responsible Machine Learning (TSRML) workshop at NeurIPS 2022 (New Orleans). In this paper, we discuss the framework and value of AI Governance for organizations of all sizes, across all industries a...

Read more
Using AI to unearth the unconscious bias in job descriptions
by Parul Pandey, Shivam Bansal | January 19, 2021 | H2O Hydrogen Torch, Responsible AI

“Diversity is the collective strength of any successful organization.” Unconscious Bias in Job Descriptions: Unconscious bias affects us all in one way or another. It is defined as prejudice or unsupported judgment in favor of or against one thing, person, or group as compared to another, in a way that is usually con...

Read more
H2O Driverless AI 1.9.1: Continuing to Push the Boundaries for Responsible AI
by Benjamin Cox | January 18, 2021 | H2O Driverless AI, Responsible AI

At H2O.ai, we have been busy. Not only do we have our most significant new software launch coming up (details here), but we are also thrilled to announce the latest release of our flagship enterprise platform, H2O Driverless AI 1.9.1. With that said, let’s jump into what’s new: faster Python scoring pipelines with embedded MOJOs for r...

Read more
The Importance of Explainable AI

This blog post was written by Nick Patience, Co-Founder & Research Director, AI Applications & Platforms at 451 Research, a part of S&P Global Market Intelligence. From its inception in the mid-twentieth century, AI technology has come a long way. What was once purely the topic of science fiction and academic discussion is now...

Read more
Building an AI Aware Organization

Responsible AI is paramount when we think about models that impact humans, directly or indirectly. Models that make decisions about people, whether about creditworthiness, insurance claims, HR functions, or even self-driving cars, have a huge impact on their lives. We recently hosted James Orton, Parul Pandey, and Sudala...

Read more
The Challenges and Benefits of AutoML
by Eve-Anne Trehin | October 14, 2020 | AutoML, H2O Driverless AI, Machine Learning, Responsible AI

Machine Learning and Artificial Intelligence have revolutionized how organizations use their data. AutoML, or Automated Machine Learning, automates and improves the end-to-end data science process, from cleaning the data and engineering features to tuning the model, explaining it, and deploying it into p...

Read more
3 Ways to Ensure Responsible AI Tools are Effective

Since we began our journey making tools for explainable AI (XAI) in late 2016, we’ve learned many lessons, often the hard way. Through headlines, we’ve seen others grapple with the difficulties of deploying AI systems too, whether it’s a healthcare resource allocation system that likely discriminated against millions of Black peop...

Read more
5 Key Considerations for Machine Learning in Fair Lending

This month, we hosted a virtual panel with industry leaders and explainable AI experts from Discover, BLDS, and H2O.ai to discuss the considerations in using machine learning to expand access to credit fairly and transparently, as well as the challenges of governance and regulatory compliance. The event was moderated by Sri Ambati, Founder and CE...

Read more
From GLM to GBM – Part 2

How an Economics Nobel Prize could revolutionize insurance and lending. Part 2: The Business Value of a Better Model. Introduction: In Part 1, we argued that machine learning (ML) can deliver better revenue while helping manage regulatory requirements. We made the first part of the argument by showing how gradient boosting machines (GBM), a type of ML, can mat...

Read more
From GLM to GBM – Part 1

How an Economics Nobel Prize could revolutionize insurance and lending. Part 1: A New Solution to an Old Problem. Introduction: Insurance and credit lending are highly regulated industries that have relied heavily on mathematical modeling for decades. In order to provide explainable results for their models, data scientists and statisticians i...

Read more
Brief Perspective on Key Terms and Ideas in Responsible AI

Introduction: As fields like explainable AI and ethical AI have continued to develop in academia and industry, we have seen a litany of new methodologies that can be applied to improve our ability to trust and understand machine learning and deep learning models. As a result, several buzzwords have emerged. In this short po...

Read more
Summary of a Responsible Machine Learning Workflow

A paper resulting from a collaboration between H2O.ai and BLDS, LLC was recently published in a special “Machine Learning with Python” issue of the journal Information (https://www.mdpi.com/2078-2489/11/3/137). In “A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing...

Read more
