Overview

The possibilities are endless with H2O MOJOs. H2O MOJOs are production-ready scoring pipelines produced by H2O Driverless AI and open-source H2O. Their small size and low latency make them well suited to production deployment for both real-time and large-scale batch prediction use cases. MOJOs can be deployed in real-time, batch, or streaming scenarios on a diverse set of platforms and technologies, including AWS Lambda, AWS SageMaker, Azure ML, Google Cloud, Hadoop, Kafka, Kubernetes, Snowflake, and more.

Anatomy of a MOJO

The MOJO (Model Object, Optimized) scoring pipeline is a scoring engine that can be deployed in any Java environment for real-time or batch scoring. MOJOs separate the scoring model from the runtime library, providing maximum performance and the flexibility to deploy virtually anywhere. MOJOs use protobuf-optimized serialization to reduce the size of large models and deliver millisecond response times for high-volume, low-latency applications.
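As a concrete illustration, embedding a MOJO in a Java application might look like the following minimal sketch. It assumes the open-source h2o-genmodel library (the EasyPredictModelWrapper API) is on the classpath, and the model file name, column names, and values are placeholders:

```java
import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;

public class MojoScoringExample {
    public static void main(String[] args) throws Exception {
        // Load the MOJO exported from H2O ("model.zip" is a placeholder path).
        EasyPredictModelWrapper model =
            new EasyPredictModelWrapper(MojoModel.load("model.zip"));

        // Build one input row; column names must match the training frame.
        RowData row = new RowData();
        row.put("age", "42");
        row.put("income", "55000");

        // For a binomial classifier this returns a label plus class probabilities.
        BinomialModelPrediction p = model.predictBinomial(row);
        System.out.println("Predicted label: " + p.label);
        System.out.println("P(class 1): " + p.classProbabilities[1]);
    }
}
```

Because the MOJO runs in-process, there is no network hop: the same few lines work inside a web service, a batch job, or a stream processor.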

Deploying MOJOs to Kubernetes using H2O MLOps

MOJOs can be deployed to Kubernetes as part of a REST server deployment. This is exactly what H2O MLOps does by putting the MOJO into a container and deploying the container to Kubernetes. H2O MLOps provides all the management and monitoring for models deployed in production on Kubernetes.

Learn more about H2O MLOps

Deploying MOJOs with Snowflake External Functions

With the external function integration between Snowflake and H2O Driverless AI, data ops and IT ops teams are more productive because they can work on AI projects directly in Snowflake. Ops can use the SQL commands they already know to retrain models, deploy updated models, score records, and store the scores in Snowflake. The integration speeds IT workflows and reduces the errors and cost of managing AI pipelines in production.

Learn More

Deploying MOJOs with a Standalone REST Server on Kubernetes

H2O MOJOs can be deployed in a standalone REST server mode using Docker and Kubernetes. In this configuration, the dependency libraries are packaged with the MOJO files in the container, which is then replicated on Kubernetes. A load balancer distributes requests across multiple pods hosting the same MOJO files.
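To make the standalone-server idea concrete, here is a toy sketch of a scoring endpoint built on the JDK's built-in HttpServer, wrapping a MOJO via the h2o-genmodel API. The /score path, the name=value request format, the model path, and the JSON shape are all illustrative assumptions, not H2O's actual REST scorer protocol:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;

public class MojoRestServer {
    public static void main(String[] args) throws Exception {
        // Path inside the container; "/models/model.zip" is a placeholder.
        EasyPredictModelWrapper model =
            new EasyPredictModelWrapper(MojoModel.load("/models/model.zip"));

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/score", exchange -> {
            // Simplified protocol: body is "name=value&name=value" pairs.
            String body = new String(exchange.getRequestBody().readAllBytes(),
                                     StandardCharsets.UTF_8);
            RowData row = new RowData();
            for (String pair : body.split("&")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2) row.put(kv[0], kv[1]);
            }
            try {
                BinomialModelPrediction p = model.predictBinomial(row);
                byte[] out = String.format("{\"label\":\"%s\",\"p1\":%f}",
                        p.label, p.classProbabilities[1])
                        .getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, out.length);
                exchange.getResponseBody().write(out);
            } catch (Exception e) {
                exchange.sendResponseHeaders(500, -1);
            } finally {
                exchange.close();
            }
        });
        server.start();
    }
}
```

Because each pod is stateless (the MOJO is read-only after loading), Kubernetes can scale replicas up or down freely behind the load balancer.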

Deploying MOJOs as a Database Scorer

This scorer enables most databases to use MOJOs to score records via any JDBC Type 4 data source. The process reads selected rows from the database, scores them with the MOJO, and then saves the predictions to a file or back to the database.
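A minimal sketch of that read-score-write loop over JDBC might look as follows. The connection URL, credentials, table, and column names are placeholders, and it assumes the h2o-genmodel library plus a JDBC Type 4 driver on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;

public class JdbcMojoScorer {
    public static void main(String[] args) throws Exception {
        EasyPredictModelWrapper model =
            new EasyPredictModelWrapper(MojoModel.load("model.zip"));

        // URL, credentials, and table/column names are placeholders.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://db-host:5432/mydb", "user", "password");
             Statement select = conn.createStatement();
             ResultSet rs = select.executeQuery(
                 "SELECT id, age, income FROM customers WHERE score IS NULL");
             PreparedStatement update = conn.prepareStatement(
                 "UPDATE customers SET score = ? WHERE id = ?")) {
            while (rs.next()) {
                // Map each database row to the MOJO's input columns.
                RowData row = new RowData();
                row.put("age", rs.getString("age"));
                row.put("income", rs.getString("income"));
                BinomialModelPrediction p = model.predictBinomial(row);
                // Write the positive-class probability back to the table.
                update.setDouble(1, p.classProbabilities[1]);
                update.setLong(2, rs.getLong("id"));
                update.executeUpdate();
            }
        }
    }
}
```

For large tables, batching the updates (addBatch/executeBatch) and paging the SELECT would be the natural refinements.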

Deploying MOJOs with AWS Lambda

H2O Driverless AI supports easy deployment of MOJOs to AWS Lambda. With just a few clicks, users can deploy a MOJO and run the model on AWS Lambda.

Deploying MOJOs with Hive UDF

User-defined functions (UDFs) in Hive are a powerful way to leverage MOJOs in a Hive environment. An HQL call passes rows to the model for scoring.
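To illustrate the mechanism, a hand-rolled Hive UDF wrapping a MOJO could be sketched as below. This is a hypothetical UDF, not H2O's packaged one; it assumes the Hive exec and h2o-genmodel jars, and the class name, model path, and columns are placeholders:

```java
import org.apache.hadoop.hive.ql.exec.UDF;
import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;

// Hypothetical UDF: scores two columns with an embedded MOJO.
public class MojoScoreUDF extends UDF {
    private EasyPredictModelWrapper model;

    public double evaluate(String age, String income) throws Exception {
        if (model == null) {
            // Lazily load the MOJO bundled on the UDF classpath
            // ("model.zip" is a placeholder).
            model = new EasyPredictModelWrapper(MojoModel.load("model.zip"));
        }
        RowData row = new RowData();
        row.put("age", age);
        row.put("income", income);
        BinomialModelPrediction p = model.predictBinomial(row);
        return p.classProbabilities[1];
    }
}
```

Once registered (ADD JAR ...; CREATE TEMPORARY FUNCTION mojo_score AS 'MojoScoreUDF';), scoring becomes an ordinary HQL projection such as SELECT mojo_score(age, income) FROM customers, and Hive parallelizes it across the cluster like any other function.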

Deploying MOJOs with Kafka or Active MQ

Queues provide a scalable and resilient way to loosely couple complex systems. Using MOJOs in these environments enables predictions to be added to existing data streams. H2O MOJOs can be used with systems such as ActiveMQ, Kafka, and more.
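A Kafka consume-score-produce loop might be sketched as follows. It assumes the kafka-clients and h2o-genmodel libraries; the broker address, topic names, and the "name=value,name=value" message format are illustrative assumptions:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;

public class KafkaMojoScorer {
    public static void main(String[] args) throws Exception {
        EasyPredictModelWrapper model =
            new EasyPredictModelWrapper(MojoModel.load("model.zip"));

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "mojo-scorer");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            consumer.subscribe(Collections.singletonList("raw-events"));
            while (true) {  // run until the process is stopped
                for (ConsumerRecord<String, String> rec :
                         consumer.poll(Duration.ofMillis(500))) {
                    // Assume messages are "age=42,income=55000" style pairs.
                    RowData row = new RowData();
                    for (String pair : rec.value().split(",")) {
                        String[] kv = pair.split("=", 2);
                        if (kv.length == 2) row.put(kv[0], kv[1]);
                    }
                    BinomialModelPrediction p = model.predictBinomial(row);
                    // Republish the original record enriched with its score.
                    producer.send(new ProducerRecord<>("scored-events", rec.key(),
                        rec.value() + ",score=" + p.classProbabilities[1]));
                }
            }
        }
    }
}
```

Because scoring happens in the consumer process, throughput scales by adding consumers to the group, and the scored topic decouples downstream systems from the model entirely.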
