Bringing all the GPU-optimized frameworks into the H2O platform
Deep Water offers the following benefits:
Integrated with state-of-the-art GPU-optimized Deep Learning frameworks (TensorFlow, MXNet and Caffe), H2O Deep Water accelerates Deep Learning on GPUs, which can train models 10-75x faster than CPUs. This acceleration gives enterprises the opportunity to build better models and enable new use cases.
Ease of Use
Deep Water users can readily access the TensorFlow, MXNet and Caffe backends through familiar interfaces: H2O Flow, R, Python, Spark/Scala, Java or the REST API, as well as model deployment via MOJO and H2O Steam. In H2O Flow, users can switch between backends from a dropdown menu.
The H2O platform is known to make model training and deployment easy with its interactive interfaces, distributed algorithms and scalable architecture. Enterprise features such as hyper-parameter optimization, cross-validation and automatic model tuning are fully supported by Deep Water.
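As a rough illustration of how hyper-parameter optimization and cross-validation fit together in the Python interface, here is a hedged sketch of a Deep Water grid search. The class and parameter names (H2OGridSearch, H2ODeepWaterEstimator, backend, nfolds) follow the H2O 3.x Python API, but the dataset, column names and hyper-parameter values are hypothetical; the part that needs a live cluster is shown in comments.

```python
# Minimal sketch of a Deep Water hyper-parameter grid search, assuming
# the h2o Python package (with Deep Water support) and a running cluster.
import itertools

# A Cartesian hyper-parameter grid, as H2OGridSearch would expand it.
hyper_params = {
    "hidden": [[200, 200], [512, 512]],   # hidden-layer sizes to try
    "learning_rate": [1e-3, 1e-4],        # learning rates to try
}
combos = list(itertools.product(*hyper_params.values()))
print(len(combos))  # prints 4: one model per hyper-parameter combination

# With a reachable cluster, the search itself would look roughly like:
#   import h2o
#   from h2o.estimators.deepwater import H2ODeepWaterEstimator
#   from h2o.grid.grid_search import H2OGridSearch
#   h2o.init()
#   train = h2o.import_file("train.csv")   # hypothetical dataset
#   grid = H2OGridSearch(
#       H2ODeepWaterEstimator(backend="mxnet", nfolds=5),  # 5-fold CV
#       hyper_params=hyper_params)
#   grid.train(y="label", training_frame=train)  # "label" is hypothetical
```

Switching `backend` to `"tensorflow"` or `"caffe"` is how the same script would target a different framework, mirroring the dropdown in H2O Flow.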
Try Out H2O Deep Water
You can either download and compile the source code and configure your own environment, or use the pre-built AMI image.
Start the H2O cluster (H2O Flow will be available at http://localhost:54321):
java -jar h2o.jar

Install the R package:
R CMD INSTALL h2o_3.13.0.tar.gz
Docker image: installation instructions are available at https://github.com/h2oai/deepwater#pre-release-docker-image
GPU system dependencies:
- Latest NVIDIA Display driver
- Ubuntu 16.04
- CUDA 8
- cuDNN 5.1
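To sanity-check these dependencies before launching H2O, the standard NVIDIA tooling can be queried as sketched below. The commands (`nvidia-smi`, `nvcc`, `lsb_release`) are common NVIDIA/Ubuntu utilities, but the cuDNN header path is an assumption that may differ on your system; each check falls back to a message rather than failing.

```shell
# Hedged sketch: verify the GPU prerequisites listed above.
command -v nvidia-smi >/dev/null && nvidia-smi || echo "NVIDIA driver not found"
command -v nvcc >/dev/null && nvcc --version || echo "CUDA toolkit not found"
# cuDNN version is recorded in its header (path is an assumption):
grep -m1 CUDNN_MAJOR /usr/include/cudnn.h 2>/dev/null || echo "cuDNN header not found"
lsb_release -rs 2>/dev/null || true  # expect 16.04
```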
Pre-built AMI image:
Refer to http://docs.h2o.ai; the Deep Water AMI guide is listed under Deep Water in the Getting Started section.