Project details for Theano

Theano 0.3.1

by jaberg - March 7, 2011, 06:55:14 CET


Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:

* tight integration with numpy – Use numpy.ndarray in Theano-compiled functions.
* transparent use of a GPU – Perform data-intensive calculations up to 140x faster than on a CPU.
* symbolic differentiation – Let Theano do your derivatives.
* speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny.
* dynamic C code generation – Evaluate expressions faster.
* extensive unit-testing and self-verification – Detect and diagnose many types of mistake.
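The stability point above can be illustrated in plain NumPy. This is not Theano code; it only reproduces, by hand, the rewrite that Theano's stability optimization applies automatically, turning log(1 + x) into the numerically stable log1p(x) form:

```python
import numpy as np

# For tiny x, 1.0 + x rounds to 1.0 in float64, so the naive form loses
# all precision; log1p computes log(1 + x) accurately for small x.
x = 1e-20
naive = np.log(1.0 + x)   # underflows to 0.0
stable = np.log1p(x)      # close to x for tiny x

print(naive, stable)
```

Theano performs this substitution on the compiled expression graph, so users can write the naive form and still get the accurate answer.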

Theano has been powering large-scale computationally intensive scientific investigations since 2007. But it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).

Theano has been used primarily to implement large-scale deep learning algorithms; to see how, see the Deep Learning Tutorials.

Changes to previous version:


 * The Theano shared variable attribute `value` is deprecated; use `get_value()` or `set_value()` instead.
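A hypothetical sketch of the deprecation pattern involved (this is illustrative plain Python, not Theano's actual implementation, and the `SharedVariable` class here is invented for the example): the old attribute keeps working but emits a `DeprecationWarning` and delegates to the new accessor methods.

```python
import warnings

class SharedVariable:
    """Illustrative stand-in for a shared variable with a deprecated
    `value` attribute delegating to get_value()/set_value()."""

    def __init__(self, initial):
        self._data = initial

    def get_value(self):
        return self._data

    def set_value(self, new_value):
        self._data = new_value

    @property
    def value(self):
        warnings.warn("`value` is deprecated; use get_value()",
                      DeprecationWarning)
        return self.get_value()

    @value.setter
    def value(self, new_value):
        warnings.warn("`value` is deprecated; use set_value()",
                      DeprecationWarning)
        self.set_value(new_value)

v = SharedVariable(3.0)
v.set_value(4.0)
print(v.get_value())
```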

Bugs fixed:

* The random number generator in theano/sandbox/ did not always return the same sequence of numbers on the CPU and GPU.
* In Python mode (not the default mode), an elemwise operation on an empty ndarray did not return an empty ndarray.
* Scan incorrectly cached the number of steps.
* In GpuConv, fixed errors in conv_patch_stack_reduce when the entire kernel does not fit into shared memory.
* Implemented some cases that previously triggered exceptions.

Optimizations:

* Minor optimizations.
* cuda_shared.value = X now works inplace to save memory.
* Allow to create a CudaNdarraySharedVariable from a CudaNdarray.
* New init_gpu_device theano flags.
* Fuse GpuElemwise more often.
* A join of only one element is no longer moved to the GPU.

New features:

* tensor.reshape now makes dimensions of length 1 broadcastable.
* now implements the gradient.
* Sparse.structured_dot now works when both matrices are sparse.
* The sparse type is now supported by the shape op, and the ShapeFeature optimizer works correctly with it.
* New 3D convolution ops, with CPU and GPU implementations.
* New colors in pydotprint.
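The reshape/broadcasting change above can be illustrated with the analogous NumPy behaviour (plain NumPy here, not Theano's symbolic API): a length-1 dimension introduced by reshape broadcasts against other shapes.

```python
import numpy as np

# Reshaping to include a length-1 axis lets that axis broadcast:
row = np.arange(3)        # shape (3,)
col = row.reshape(3, 1)   # shape (3, 1); the length-1 axis broadcasts
table = col + row         # broadcasts to shape (3, 3)

print(table.shape)
```

In Theano, dimensions must be explicitly marked broadcastable in a variable's type; this release makes `tensor.reshape` mark length-1 output dimensions that way automatically.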
Supported Operating Systems: Linux, Mac OS X, Windows
Data Formats: Agnostic
Tags: CUDA, GPU, Symbolic Differentiation

