As I discussed in my review of PyTorch, the foundational deep neural network (DNN) frameworks such as TensorFlow (Google) and CNTK (Microsoft) tend to be hard to use for model building. However, TensorFlow now contains three high-level APIs for creating models, one of which, tf.keras, is a bespoke version of Keras.
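To make the distinction concrete, here is a minimal sketch of building the same kind of model with tf.keras, the Keras variant bundled with TensorFlow; the layer sizes and dataset shape are illustrative only, not from any particular benchmark:

```python
# Minimal tf.keras sketch: a small classifier built with the Keras API
# that ships inside TensorFlow. Layer sizes and input shape are examples.
from tensorflow import keras  # tf.keras, bundled with TensorFlow

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Standalone Keras (`import keras`) exposes essentially the same model-building API, with the back end chosen by configuration rather than fixed to TensorFlow.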
Keras proper, a high-level front end for building neural network models, ships with support for three back-end deep learning frameworks: TensorFlow, CNTK, and Theano. Amazon is currently developing an MXNet back end for Keras. It’s also possible to use PlaidML (an independent project) as a back end for Keras to take advantage of PlaidML’s OpenCL support for all GPUs.
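Switching back ends in standalone Keras is a configuration matter: you can edit the "backend" field in ~/.keras/keras.json or set the KERAS_BACKEND environment variable before importing Keras. A minimal sketch, with "theano" used purely as an example value:

```python
# A minimal sketch of selecting the Keras back end via an environment
# variable; the same choice can be made persistently in ~/.keras/keras.json.
# "theano" is just an example; "tensorflow" and "cntk" are the other built-ins.
import os
os.environ["KERAS_BACKEND"] = "theano"  # must be set before importing keras

import keras
from keras import backend as K

print(K.backend())  # reports the active back end, e.g. "theano"
```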
As an aside, the name Keras is from the Greek for horn, κέρας, and refers to a passage from the Odyssey. The dream spirits that come through the gate made of horn are the ones that announce a true future; the ones that come through the gate made of ivory, ἐλέφας, deceive men with false visions.
TensorFlow is the default back end for Keras, and the one recommended for many use cases involving GPU acceleration on Nvidia hardware via CUDA and cuDNN, as well as for TPU acceleration in the Google Cloud. I used the TensorFlow back end configured for CPU-only to do my basic Keras testing on a MacBook Pro.
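To reproduce a CPU-only setup like the one I used for testing, it is enough to install the plain CPU tensorflow package rather than tensorflow-gpu, or to hide any GPUs from TensorFlow. A rough sketch of the latter approach, with the device listing shown only to confirm the configuration:

```python
# A rough sketch of forcing Keras' TensorFlow back end onto the CPU and
# confirming which devices TensorFlow can see. Hiding GPUs through
# CUDA_VISIBLE_DEVICES is one common approach; installing the CPU-only
# tensorflow package is another.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # hide any GPUs from TensorFlow

import tensorflow as tf
from tensorflow.python.client import device_lib

# With GPUs hidden, only CPU devices should appear in this list.
print([d.name for d in device_lib.list_local_devices()])
```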