- Description:
Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:
* Tight integration with NumPy – use numpy.ndarray in Theano-compiled functions.
* Transparent use of a GPU – perform data-intensive computations much faster than on a CPU.
* Symbolic differentiation – let Theano do your derivatives (illustrated in the sketch below).
* Speed and stability optimizations – get the right answer for log(1+x) even when x is really tiny.
* Dynamic C code generation – evaluate expressions faster.
* Extensive unit-testing and self-verification – detect and diagnose many types of mistakes.
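As a minimal sketch of the define/optimize/evaluate workflow and of symbolic differentiation (the variable names are illustrative, not taken from the Theano documentation; the log(1+x) case ties into the stability optimizations listed above):

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dvector('x')            # symbolic double-precision vector
    y = T.sum(T.log(1 + x))       # the stability optimizer can rewrite log(1+x) as log1p(x)
    f = theano.function([x], y)   # compiled with dynamic C code generation

    dy_dx = T.grad(y, x)          # symbolic differentiation: d/dx sum(log(1+x)) = 1/(1+x)
    df = theano.function([x], dy_dx)

    v = np.array([1e-20, 0.5, 2.0])
    print(f(v))                   # remains accurate for the tiny first entry
    print(df(v))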
Theano has been powering large-scale computationally intensive scientific investigations since 2007. But it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).
Theano has been used primarily to implement large-scale deep learning algorithms; to see how, consult the Deep Learning Tutorials (http://www.deeplearning.net/tutorial/).
- Changes to previous version:
Theano 0.9.0 (20th of March, 2017)
Highlights (since 0.8.0):
* Better Python 3.5 support
* Better numpy 1.12 support
* Conda packages for Mac, Linux and Windows
* Support for newer Mac and Windows versions
* More Windows integration:
  * Theano scripts (``theano-cache`` and ``theano-nose``) now work on Windows
  * Better support for Windows end-of-line characters in C code
  * Support for spaces in paths on Windows
* Scan improvements:
  * More scan optimizations, with faster compilation and gradient computation
  * Support for checkpoints in scan (a trade-off between speed and memory usage, useful for long sequences; see the sketch after this list)
  * Fixed broadcast checking in scan
* Graph improvements:
  * More numerical stability by default for some graphs
  * Better handling of corner cases for Theano functions and graph optimizations
  * More graph optimizations, with faster compilation and execution
  * Smaller and more readable graphs
* New GPU back-end:
  * Removed warp-synchronous programming to get good results with newer CUDA drivers
  * More pooling support on the GPU when cuDNN isn't available
  * Full support of the ignore_border option for pooling
  * In-place storage for shared variables
  * float16 storage
  * Graphics cards are mapped by PCI bus ID, for a better match between Theano device numbers and nvidia-smi numbers
  * Fixed an offset error in ``GpuIncSubtensor``
* Less C code compilation
* Added support for the bool dtype
* Updated and more complete documentation
* Bug fixes related to the merge optimizer and shape inference
* Many other bug fixes, crash fixes and warning improvements
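For reference on the scan checkpoint item, the sketch below uses a plain ``theano.scan`` loop with a gradient through it; according to the release notes, the checkpointed variant trades extra recomputation for lower memory use on long sequences (its exact entry point and parameters are not shown here, and all names in the sketch are illustrative):

    import numpy as np
    import theano
    import theano.tensor as T

    coefs = T.dvector('coefs')    # a (possibly very long) input sequence
    acc0 = T.dscalar('acc0')      # initial accumulator value

    # Running product over the sequence. Plain scan stores every intermediate
    # state for the backward pass; checkpointing lets you keep only every
    # N-th state and recompute the rest during the gradient computation.
    results, updates = theano.scan(fn=lambda c, acc: acc * c,
                                   sequences=coefs,
                                   outputs_info=acc0)

    final = results[-1]
    grad = T.grad(final, coefs)   # gradient flows back through the whole loop
    f = theano.function([coefs, acc0], [final, grad], updates=updates)

    value, dvalue = f(np.array([1.0, 2.0, 3.0, 4.0]), 1.0)
    print(value, dvalue)          # 24.0 and the per-coefficient gradients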