H2O GenAI World Conference | San Francisco

Fireside Chat | Agus Sudjianto and Sri Ambati

 

Speaker Bio

Sri Ambati | Founder & Chief Executive Officer

Sri Ambati is the founder and CEO of H2O.ai. A product visionary who has assembled world-class teams throughout his career, Sri founded H2O.ai in 2012 with a mission to democratize AI for anyone, anywhere, creating a movement of the world’s top data scientists, physicists, academics and technologists at more than 20,000 organizations worldwide. Sri also regularly partners with global business leaders to fund AI projects designed to solve compelling business, environmental and societal challenges. Most recently, Sri led the initiative o2forindia.org, sourcing O2 concentrators for more than 200 public health organizations in Tier 2 cities and rural communities in India during the Delta wave of the COVID-19 pandemic, helping to save thousands of lives. His strong “AI for Good” ethos for the responsible and fair use of AI to make the world a better place drives H2O.ai’s business model and corporate direction.

A sought-after speaker and thought-leader, Sri has presented at industry events including Ai4, Money2020, Red Hat Summit and more, is a frequent university guest speaker, and has been featured in publications including The Wall Street Journal, CNBC, IDG and the World Economic Forum and has been named a Datanami Person to Watch.

Before founding H2O.ai, Sri co-founded Platfora, a big data analytics company (acquired by Workday) and was director of engineering at DataStax and Azul Systems. His academic background includes sabbaticals focused on Theoretical Neuroscience at Stanford University and U.C. Berkeley, and he holds a master’s degree in Math and Computer Science from the University of Memphis.

 

Dr. Agus Sudjianto, Executive Vice President, Head of Corporate Model Risk, Wells Fargo

Agus Sudjianto is the Executive Vice President and Head of Model Risk at Wells Fargo, a role that includes chairing the Model Risk Committee and overseeing enterprise model risk management. His extensive career in the financial sector features roles at Lloyds Banking Group in the UK as the Modeling and Analytics Director and Chief Model Risk Officer, and at Bank of America as an executive leading Quantitative Risk. Sudjianto also has experience in the automotive industry, having been a Product Design Manager at Ford Motor Company.

An accomplished academic, he holds advanced degrees in engineering and management from Wayne State University and MIT. His expertise spans quantitative risk, credit risk modeling, machine learning, and computational statistics. Sudjianto is a prolific innovator with several U.S. patents in finance and engineering, and has made significant contributions to the field through his publications, including co-authoring "Design and Modeling for Computer Experiments." His work, particularly in interpretable machine learning models, is vital in regulated sectors like banking, and his patents cover a wide range from time series simulation to financial crime detection, highlighting his dedication to technological advancements in risk management.

Read the Full Transcript

 

Sri Ambati 00:06

Agus leads all of models. I used to, only in jest (but only half in jest), refer to him as the Lord of all models at some of the largest banks around. Years before that, he built the last real engine that one of the largest car companies has built, and it is still running after all these years.

 

Sri Ambati 00:34

He tested it at extreme temperatures, probably in the deserts of Arizona, to make sure those models actually work. From safety of cars to safety of models. So today I'm super excited, please welcome Agus Sudjianto to the stage.

 

Agus Sudjianto 00:51

Thank you Sri. 

 

Sri Ambati 00:56

So Agus, without further ado, I want to kick off. What is this? With great power comes great responsibility, that's kind of how I prompted the topic. How does one manage the risk of LLMs? Should LLMs be severely regulated?

 

Sri Ambati 01:18

What should be done? They're taking over the planet. On one side of Silicon Valley you hear LLMs are destroying everything that we know, and the other side is the techno-optimism, which we actually believe in very strongly.

 

Agus Sudjianto 01:32

So, please, play the judge? Well, I probably look at it from the more practical point of view; this is our day-to-day thing, how we use this tool in the real world, right? So for me and for us, all models are wrong, and they will be wrong, and they will be more wrong.

 

Agus Sudjianto 01:57

So we understood that, and what's important is really to understand what the limitations of this model are, how we are going to use it properly, how we are going to manage this. So I'm not in the gloom-and-doom camp, and I'm a bit worried about that piece, because when there is gloom and doom, and this is my personal view, not Wells Fargo's, when governments start playing into it and deciding winners and losers, I don't think that's the wise thing to do.

 

Agus Sudjianto 02:33

That's my opinion. And we probably have many examples in history, some experience of when the government picked the technology winner; the outcome may not be what we would like, and typically we're going to bear the consequences for many, many years, even 100 years.

 

Agus Sudjianto 02:53

I came from the combustion engine; that was part of the things we had to live with, the combustion engine, for 100-something years. So I would say, I hope we'll come up with sensible regulation. It's important, but I hope it's not going to pick a winner.

 

Sri Ambati 03:12

Regulatory capture, as they would say, right? Sort of, but getting to the basics of how you're enforcing model accuracy, model validation, and of course, with Aletheia and some of the projects you've built, interpretability for even deep learning models is something you've strongly advocated.

 

Sri Ambati 03:37

Perhaps worth touching back on how model validation today is helping the planet. Some of you that are not from banks probably define and use model validation differently than many of us who are in banks.

 

Agus Sudjianto 04:01

In large financial institutions, we spend a lot of time on model validation. And let me give a little bit of context on this. For every three model developers at Wells Fargo, we have one independent model validator.

 

Agus Sudjianto 04:17

And they report through a different line all the way to the top of the house. So model risk management, the practice of model risk management with model validation, is independent.

 

Agus Sudjianto 04:31

They report independently to the chief risk officer, who reports to the board directly. Completely independent. So the people responsible for model validation will not be influenced or contaminated by the business need.

 

Agus Sudjianto 04:46

So that is the setting in a large institution like a bank. Because we use a lot of models, and a model can create financial harm and create harm to the customer as well, the independence is very, very important.

 

Agus Sudjianto 05:01

So this is part of my criticism of the field, because when we look at responsible AI, that is very, very important. And then you can look at who the head of responsible AI is. What is the seniority of the head of responsible AI?

 

Agus Sudjianto 05:16

Who does the person report to? And if you have the head of responsible AI reporting to the business that also builds models and uses models, then you can start to smell the rotten piece, because there is a conflict of interest, right?

 

Agus Sudjianto 05:31

And that is part of what's important for us: how do the model validation people who test these models have absolute power to make the decision on whether a model can go to production or not?

 

Agus Sudjianto 05:50

And what kind of monitoring and risk management needs to be done. And that's something that is becoming a lot more challenging with large language models, of course, because typically when we build a model, we build it with a certain intent and purpose.

 

Agus Sudjianto 06:12

We choose what data goes in, we control the data, and we control the architecture of the model. For example, for credit decisions, the architecture is very controlled so that the model is inherently interpretable.

 

Agus Sudjianto 06:26

Not with an LLM: the model is there, the foundation model is there, so now we have to do damage control, figure out how to control it, instead of building from the ground up a model that's very well designed.

 

Agus Sudjianto 06:39

And this is typically not how engineering is done. I used to design engines, so we purposefully have a design mindset, how to design, and we have to deal with a different paradigm in LLMs. The model is out there, not very well designed in terms of all the kinds of considerations we've discussed, and we have to do risk control from that.

 

Sri Ambati 06:59

So for an LLM, what part of the techniques, we were talking about eval-first design for LLM-powered applications, are in most of the methods that the bank has in place, or are they out there? I think the principles are there, let's say explainability.

 

Agus Sudjianto 07:23

This is a difficult subject with LLMs. In tabular data, the issue of interpretability is solved. We can build inherently interpretable machine learning models, so we stick with those models. An LLM is a different thing, right?

 

Agus Sudjianto 07:39

We're dealing with embeddings, and sometimes large embedding dimensions. So at the end of the day, yes, we need explainability, and I think somebody is going to talk this afternoon about interpretability.

 

Agus Sudjianto 07:50

So if you look at an LLM, it all boils down to embeddings. How the information is embedded in the high-dimensional hidden layers; it's no more than that. So now it's about understanding what information is captured and represented in the embedding.

 

Agus Sudjianto 08:12

Is the embedding correct? Does the embedding make sense? So the issue of interpretability or explainability is about understanding the embedding. And that can be done in various ways.

 

Agus Sudjianto 08:26

There are many techniques that we can apply to understand what the embedding is, how the embedding represents information. There was a talk that didn't make it to this event, probably for the next one, which is "Embedding Is All You Need."

 

Sri Ambati 08:44

And I think Professor Stephen Boyd was going to give that talk for us at the next H2O GenAI World. But you have a very similar thought process on embeddings. Probably worth opening the...

 

Agus Sudjianto 08:59

Yeah, embedding is not something new; it's just done in a very, very different way in a neural network.

 

Agus Sudjianto 09:05

Every hidden layer in a neural network is an embedding, a different embedding representation, right? And even in traditional models, in regression models, we have embeddings. For example, a simple embedding: we do a log transformation.

 

Agus Sudjianto 09:20

We do a square transformation. We create an interaction. That's an embedding, right? In tabular data. Now, the embedding in a large language model is a bit more complicated. For those of you who know statistics, a principal component is an embedding, right?

 

Agus Sudjianto 09:38

And here it's just, okay, I'm going to do masking during training. That's how you create the embedding. So there are so many ways to do embeddings. And the key in all of these applications is which embedding is the best.
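To make the point above concrete, here is a minimal sketch (not from the talk, with made-up feature names) of what "embedding" means in tabular data: hand-crafted transformations such as a log, a square, and an interaction, followed by a principal-component projection as a learned, lower-dimensional representation.

```python
# Minimal sketch: "embeddings" in tabular data as feature transformations plus PCA.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 2))  # two raw positive features
income, debt = X[:, 0], X[:, 1]

# Hand-crafted "embeddings": log transformation, square transformation, interaction.
engineered = np.column_stack([
    np.log(income),   # log transformation
    debt ** 2,        # square transformation
    income * debt,    # interaction term
])

# A learned "embedding": project the engineered features onto principal components.
pca = PCA(n_components=2)
embedding = pca.fit_transform(engineered)
print(embedding.shape)  # (1000, 2) -- each row is a low-dimensional representation
```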

 

Agus Sudjianto 09:51

And that depends on the data, depends on how you train it. It's like every one of us speaks differently, a different style, thinking differently. The LLMs out there, and we have so many of them, are like that.

 

Agus Sudjianto 10:07

It's a different style. And a different embedding will be more appropriate for certain applications than others. And you pick which one by evaluating which one will be more suitable for certain applications.

 

Agus Sudjianto 10:21

Can LLMs be used to produce model documentation? We do use a large language model in our model documentation in a few ways. But let me say a little bit, because a lot of the discussion talks about RAG, and fundamentally this is how we're using LLMs in our real world, at least today.

 

Agus Sudjianto 10:41

It may change in the future, but at least today, we don't use the LLM to store knowledge. We let the database store the knowledge. So we have a vector database; that is your knowledge, the documents you store in there.

 

Agus Sudjianto 10:58

We don't use the LLM to store knowledge. What do we use the LLM for? We use the LLM for English. It is trained very well on English. It can rewrite really well. Don't ask the LLM for knowledge. Ask your database for knowledge.

 

Agus Sudjianto 11:15

Ask the LLM to write it in a nicer way or in a certain style. So when we do model documentation, because we do a lot of model documentation and model testing, we have some requirements. So we use the LLM to check whether it meets the requirements that we set for ourselves on how to document a model, and all of those things.

 

Agus Sudjianto 11:35

But those are always combined with the knowledge, and the knowledge is in the vector database, not in the LLM. So we use the LLM to process the prompt, to do the embedding, and to find it in the database. So we do use it, but you just use the LLM to embed.

 

Agus Sudjianto 11:53

And that acts as an index for document search, to search for your knowledge. Once you get the knowledge, you summarize it, you send it to the LLM to rewrite it in nicer English. So we use large language models to help with model documentation, to write documentation, and also to do quality assurance on whether it's documented properly or not.
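A minimal sketch of the pipeline described here, assuming a generic stack rather than any specific one: the vector database holds the knowledge, and the LLM is used only to embed the prompt and to rewrite the retrieved text. The callables `embed`, `vector_db`, and `llm_rewrite` are hypothetical placeholders for whatever embedding model, vector store, and LLM endpoint is actually in use.

```python
# Sketch of "knowledge in the database, English from the LLM" retrieval pipeline.
# embed, vector_db, and llm_rewrite are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class SearchHit:
    text: str
    score: float

def answer(prompt: str, embed, vector_db, llm_rewrite, top_k: int = 5) -> str:
    # 1. Use the embedding model only to turn the prompt into a query vector.
    query_vector = embed(prompt)

    # 2. The knowledge lives in the vector database, not in the LLM:
    #    retrieve the top-k most relevant documents.
    hits: list[SearchHit] = vector_db.search(query_vector, top_k=top_k)
    knowledge = "\n\n".join(hit.text for hit in hits)

    # 3. Ask the LLM only to rewrite the retrieved knowledge in clear English,
    #    not to answer from its own memory.
    instruction = (
        "Using only the reference text below, rewrite it as a clear, "
        "well-structured answer to the question. Do not add facts that "
        "are not in the reference.\n\n"
        f"Question: {prompt}\n\nReference:\n{knowledge}"
    )
    return llm_rewrite(instruction)
```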

 

Agus Sudjianto 12:16

So the LLM is really a good English generator. For me, yes. Yeah. It's better. For me, it's a blessing. I'm not a native English speaker. I hated writing before; now I can write easily. Yeah, more easily at least.

 

Agus Sudjianto 12:32

So we use the LLM, like I said, for good writing, as a good generator of things. But in a limited way. The knowledge is in the database.

 

Sri Ambati 12:42

There is a talk later in the evening called "Never Write an Email Again, Use LLMs."

 

Sri Ambati 12:48

So that's the headline of the talk. A question for folks who are just joining this space. They're all starting to enter through generative AI as first-time business users. Sometimes in contracts or RFPs, organizers and procurement people are getting into AI through GenAI.

 

Sri Ambati 13:14

What are some pitfalls that you see, or what are some ways they can truly understand the nuances?

 

Agus Sudjianto 13:25

Right. Well, I think for somebody like me who's been working in data science for many, many years, sometimes I feel like I was born 25 years too early.

 

Agus Sudjianto 13:39

Because when I got my PhD in '96 in neural networks and machine learning, I could not find a job, right? In machine learning. So I had to design engines. So I said, OK, I'll leave that behind and design car engines.

 

Agus Sudjianto 13:54

So now, 25 years later, all of this came back, and the demand for data science and AI came from the users instead of from the data scientists, which is really incredible. So the ideas they have, what they would like to do, it's a good thing.

 

Agus Sudjianto 14:14

Now, how do we channel it to do it properly? For an institution like us, we want to do it in a more coordinated way. So what we have is a centralized team that takes all of those ideas, we have hundreds of them across the company, and then we prioritize. We look at it: okay, this is the highest ROI, this is what we're going to attack, and this is where the risk side is probably more manageable than a lot of the unknowns.

 

Agus Sudjianto 14:47

So that's the prioritization that we do. And then I work very closely with our CIO to put the system together, and the system is developed and deployed, et cetera, in a much more controlled way. So for us, when we have all of this, we're looking at it from two axes.

 

Agus Sudjianto 15:10

One is from the user point of view, and the other one is from the usage point of view. From the usage point of view, many of you are familiar: we use LLMs a lot for classification, document sorting, routing, complaints, and all of those things.

 

Agus Sudjianto 15:25

Those are classification problems. And then we use LLMs for information retrieval, document retrieval. And then we use LLMs for information retrieval with summarization, which is RAG, with summarization rolled into that.

 

Agus Sudjianto 15:39

And then LLMs for pure content generation, where we kind of limit the use. So we use the one, two, three that I talked about; the content generation is a lot more limited. And then in terms of user, we look at it from: is it an expert user, assisting people to write code?

 

Agus Sudjianto 16:00

That's an expert user. Is it a more novice user, an internal user? Or is it going to be a novice user assisting our external customer? This is the banking center answering questions from the customer through all of this.

 

Agus Sudjianto 16:18

And then really the external customer uses the model, uses it directly. This is typically something like our chatbot. So that is the four-by-four matrix that we look at. And each of those entries has different risks involved in it.

 

Agus Sudjianto 16:36

And based on the risk and the opportunity, how fast we can build, how fast we can deploy, what kind of return on investment, and what kind of risk we are dealing with, we decide which one we're going to attack first.

 

Agus Sudjianto 16:50

You know, how we communicate, how we talk with the users across the company.

 

Sri Ambati 16:55

So not long ago, maybe the same time last year, the pre-ChatGPT era, right? You were trying to push applications of AI to different parts of the business.

 

Sri Ambati 17:08

Now the business is pulling in more ways than normal, especially in NLP use cases. What kind of guardrails do you recommend?

 

Agus Sudjianto 17:22

We were a little bit draconian on this. So we shut down ChatGPT. We shut down Claude.

 

Agus Sudjianto 17:29

We shut down anything generative AI out there, right? So the web is just blocked. We decided to do that for reasons that we think are important. So we blocked many of those, most of those, all of those actually.

 

Agus Sudjianto 17:47

We blocked those. For us, then, the question is: which areas are we going to use and apply this in, and what kind of tools do we make available internally for people to be able to customize and develop appropriately?

 

Agus Sudjianto 18:04

So we are really looking at it, again, based on that matrix that I talked about, and that sets up our strategy. Yes, everybody wants to do it, but we're looking at the return on the investment and what the safe and responsible way is, so that if we cannot do it, for example, we're not going to do content generation for external customers to use directly.

 

Agus Sudjianto 18:33

So we're not going to have something like ChatGPT, right? That's completely out now, basically, at least for the near future. So based on that, Sri, that's how we look at the guardrails, depending on who is impacted by the model.

 

Agus Sudjianto 18:52

If the model impacts our team members, if the model impacts our customers, there will be a lot more stringent requirements compared to, say, using it to do auto-complete of email, right? So it's really a very, very broad gradation in terms of what kind of guardrail, or no guardrail in some areas, depending on what harm or damage this model can do to the company or to our customers.

 

Sri Ambati 19:25

Let me ask you a question which is probably closer to H2O. One of our customers recently chose us over some of these large language model companies, and the quote they used was: "You're not just an LLM company.

 

Sri Ambati 19:46

That's why we like you." Can you, I mean, you have experienced H2O as well from different angles. Can you kind of expand on, probably give voice to, that sentiment: is AutoML, regular ML, regular statistical machine learning, old-fashioned, traditional, obsolete?

 

Agus Sudjianto 20:10

Right. Well, I know the noise today, all the excitement, is in LLMs. The potential is huge for what we can do, because we do process a lot of documents. But at the end of the day, today we have a few thousand models in the company, and about 15% to 20% are machine learning models.

 

Agus Sudjianto 20:35

So our universe is models; a subset of those are machine learning models, a subset of those are deep learning models, a subset of those are LLMs, and a subset of those are GenAI. So we look at the magnitude, the number of models, right?

 

Agus Sudjianto 20:55

So those are what we have to deal with. So when we look at a tool like H2O, we used H2O before... we met. Before we met, right? Because of open source. The open source H2O generalized linear model, the GLM, is a fantastic tool that we use very, very widely, because of the scale that this open source can handle.

 

Agus Sudjianto 21:21

So we experienced it from that. And then our predictive machine learning, which is our bread and butter, the large population of predictive machine learning models that we do. And then of course, nowadays, there is the LLM, which will grow.

 

Sri Ambati 21:40

The growth rate of GenAI is faster. 

 

Agus Sudjianto 21:43

Yes. And if we look, talking about LLMs, we started using LLMs probably in 2019, after BERT came out in 2018, right? So that's the small language model, we call it small now, with BERT and all the variants of BERT.

 

Agus Sudjianto 22:00

We've been using it for quite some time. And so it's a very, very natural way to do it. And then the thing that's important is we also rely a lot on open source. We use a lot of open source.

 

Agus Sudjianto 22:19

And we like models, the smaller the better, for a given purpose. So for a lot of language, for our model validation reports, a 7-billion-parameter LLM to rewrite what our knowledge base has is enough. So why do we want to go to a bigger model?

 

Agus Sudjianto 22:38

Those kinds of considerations that we look at really depend on the use cases, on what model we want to use. I mean, I think you're a huge supporter of the vision for open source LLMs, open source LLM-powered software stacks, of which RAG is a first example, and I think a very strong builder of communities.

 

Sri Ambati 23:02

So thank you for all your support over the years, as well as, when we started in February, March, April, the number of brainstorms we've had on interpretability for LLMs and how to build some, back in the pre-Falcon era, when the models were not as good.

 

Sri Ambati 23:23

I mean, I think I remember Agus calling me and saying we could just use an Apple computer for inference. What would you, what is your advice on the cost of AI?

 

Agus Sudjianto 23:43

I think, we've run out of time, so I have to do this very quickly. I think inference time and costs are very, very important, and it depends on the use, on what you need. We use it in such a way that we don't really need a large model.

 

Agus Sudjianto 24:04

Other areas will need large models, but for most of the things that we do, we don't, and cost consideration is very important. And inference at the cheapest, on commodity hardware, your laptop and such, probably will be important for certain applications.

 

Agus Sudjianto 24:19

So I think the beauty today is that we have a wide range of availability, with a lot of smart people working on this. Building models will become a lot easier. The key is building APIs, building apps, and also testing the model.

 

Agus Sudjianto 24:34

The key is not building the model, but building apps and testing the model so that we know what we have.

 

Sri Ambati 24:42

That's all we have time for today, Agus, but thanks a lot for joining us for the first H2O GenAI World.

 

Sri Ambati 24:49

Hopefully we'll have a lot more experiments and real live applications same time next year. Thank you. Thanks, Agus. Thank you.