diff --git a/README.md b/README.md
index 9ba7a7e..7576e2a 100644
--- a/README.md
+++ b/README.md
@@ -12,8 +12,9 @@ scale with [Spark](http://spark.apache.org). Elephas currently supports a number
 applications, including:
 
 - [Data-parallel training of deep learning models](#basic-spark-integration)
-- [Distributed hyper-parameter optimization](#distributed-hyper-parameter-optimization)
 - [Distributed training of ensemble models](#distributed-training-of-ensemble-models)
+- [~~Distributed hyper-parameter optimization~~](#distributed-hyper-parameter-optimization) (removed as of 3.0.0)
+
 
 Schematically, elephas works as follows.
 
@@ -194,6 +195,8 @@ estimator.set_custom_objects({'custom_activation': custom_activation, 'CustomLay
 
 ## Distributed hyper-parameter optimization
 
+**UPDATE**: As of 3.0.0, the hyper-parameter optimization features have been removed, since Hyperas is no longer maintained and was causing version compatibility issues. To use these features, install version 2.1 or below.
+
 Hyper-parameter optimization with elephas is based on [hyperas](https://github.com/maxpumperla/hyperas), a convenience
 wrapper for hyperopt and keras. Each Spark worker executes a number of trials, the results get collected and the best
 model is returned. As the distributed mode in hyperopt (using MongoDB), is somewhat difficult to configure and error
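
For reference, the sketch below shows roughly what the removed hyperas-based workflow looked like in elephas 2.1 and below: each Spark worker runs a share of the trials and the driver keeps the best model, as the README paragraph above describes. This is a hedged sketch rather than an excerpt from this diff: `HyperParamModel`, its `minimize(model=..., data=..., max_evals=...)` call, and the use of `tensorflow.keras` are recalled from the 2.x README and should be verified against that release.

```python
# Hedged sketch of the removed (pre-3.0.0) hyperas-based workflow.
# HyperParamModel and minimize(...) are recalled from the elephas 2.x README;
# treat the exact names and signatures as assumptions, not a reference.
from pyspark import SparkConf, SparkContext

from hyperas.distributions import choice, uniform  # search-space helpers used in templates below
from elephas.hyperparam import HyperParamModel      # removed in elephas 3.0.0


def data():
    # hyperas re-runs this function for every trial to produce train/test data.
    from tensorflow.keras.datasets import mnist
    from tensorflow.keras.utils import to_categorical
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
    x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
    return x_train, to_categorical(y_train, 10), x_test, to_categorical(y_test, 10)


def model(x_train, y_train, x_test, y_test):
    # Builds and scores one candidate. The {{...}} markers are hyperas
    # search-space templates; hyperas rewrites the source and substitutes
    # sampled values before this function is ever executed.
    from tensorflow.keras.layers import Dense, Dropout
    from tensorflow.keras.models import Sequential
    net = Sequential([
        Dense(512, activation="relu", input_shape=(784,)),
        Dropout({{uniform(0, 0.5)}}),
        Dense(10, activation="softmax"),
    ])
    net.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
    net.fit(x_train, y_train, batch_size={{choice([64, 128])}}, epochs=1, verbose=0)
    _, acc = net.evaluate(x_test, y_test, verbose=0)
    return {"loss": -acc, "status": "ok", "model": net}


if __name__ == "__main__":
    sc = SparkContext(conf=SparkConf().setAppName("elephas-hyperparam-sketch"))
    # Workers each execute a number of trials; results are collected on the
    # driver and the best model is returned.
    hyperparam_model = HyperParamModel(sc)
    hyperparam_model.minimize(model=model, data=data, max_evals=5)
    sc.stop()
```

With 3.0.0 and later, per the update note in the diff, hyper-parameter search has to be handled outside elephas (for example with hyperopt directly), with elephas used only for the distributed training itself.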