Project details for Theano

Theano 1.0.0

by jaberg - November 16, 2017, 17:42:27 CET

Description:

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:

* tight integration with NumPy – Use numpy.ndarray in Theano-compiled functions.
* transparent use of a GPU – Perform data-intensive computations much faster than on a CPU.
* symbolic differentiation – Let Theano do your derivatives.
* speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny.
* dynamic C code generation – Evaluate expressions faster.
* extensive unit-testing and self-verification – Detect and diagnose many types of mistakes.
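
As a quick illustration of how these pieces fit together, here is a minimal sketch (assuming Theano and NumPy are installed) that builds a symbolic expression, lets Theano derive its gradient, and compiles both into callable functions:

    import numpy as np
    import theano
    import theano.tensor as T

    # Declare a symbolic matrix and build an expression on it.
    x = T.dmatrix('x')
    y = T.log(1 + T.exp(x))          # softplus; Theano applies stability rewrites to expressions like this

    # Symbolic differentiation: gradient of sum(y) with respect to x.
    gy = T.grad(y.sum(), x)

    # Compile the graphs into callable functions (C code generation happens here).
    f = theano.function([x], y)
    g = theano.function([x], gy)

    data = np.array([[0.0, 1.0], [-2.0, 3.0]])
    print(f(data))
    print(g(data))                   # elementwise sigmoid(x), the derivative of softplus

Nothing is computed until the compiled functions are called; theano.function is where graph optimization and code generation take place.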

Theano has been powering large-scale computationally intensive scientific investigations since 2007. But it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).

Theano has been used primarily to implement large-scale deep learning algorithms. To see how, see the Deep Learning Tutorials (http://www.deeplearning.net/tutorial/).
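
The core pattern those tutorials build on is shared variables for parameters plus compiled update rules. Below is a minimal logistic-regression-style sketch of that pattern; the toy data, learning rate, and variable names are illustrative and not taken from the tutorials:

    import numpy as np
    import theano
    import theano.tensor as T

    rng = np.random.RandomState(0)

    # Toy data: 100 examples, 3 features, binary labels (illustrative only).
    X_data = rng.randn(100, 3).astype(theano.config.floatX)
    y_data = (X_data[:, 0] > 0).astype(theano.config.floatX)

    X = T.matrix('X')
    y = T.vector('y')

    # Shared variables hold the model parameters and live on the CPU or GPU.
    w = theano.shared(np.zeros(3, dtype=theano.config.floatX), name='w')
    b = theano.shared(np.asarray(0.0, dtype=theano.config.floatX), name='b')

    p = T.nnet.sigmoid(T.dot(X, w) + b)            # predicted probabilities
    loss = T.nnet.binary_crossentropy(p, y).mean()

    # One compiled function performs a full gradient-descent step via updates.
    gw, gb = T.grad(loss, [w, b])
    train = theano.function(
        inputs=[X, y],
        outputs=loss,
        updates=[(w, w - 0.1 * gw), (b, b - 0.1 * gb)])

    for epoch in range(100):
        train(X_data, y_data)
    print(w.get_value(), b.get_value())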

Changes since the previous version:

Theano 1.0.0 (15th of November, 2017)

Highlights (since 0.9.0):

  • Announcement that MILA will stop developing Theano: https://groups.google.com/d/msg/theano-users/7Poq8BZutbY/rNCIfvAEAwAJ

  • Conda packages are now available and updated in our own conda channel mila-udem. To install: conda install -c mila-udem theano pygpu

  • Support NumPy 1.13

  • Support pygpu 0.7

  • Raised the minimum supported Python 3 version from 3.3 to 3.4

  • Added conda recipe

  • Replaced the deprecated nose-parameterized package with the up-to-date parameterized package in Theano's requirements

  • Theano now internally uses sha256 instead of md5, so it works on systems that forbid md5 for security reasons

  • Removed the old GPU backend theano.sandbox.cuda; the new backend theano.gpuarray is now the official GPU backend (see the configuration sketch after this list)

  • Make sure MKL uses GNU OpenMP

  • NB: Matrix dot product (gemm) with MKL from conda could return wrong results in some cases. We have reported the problem upstream, and we have a workaround that raises an error with information on how to fix it.

  • Improved elemwise operations
      - Sped up elemwise ops based on SciPy
      - Fixed memory leaks related to elemwise ops on GPU

  • Scan improvements
      - Sped up Theano scan compilation and gradient computation
      - Added a meaningful error message when inputs to scan are missing

  • Sped up the graph toposort algorithm

  • Faster C compilation through extensive use of a new interface for op params

  • Faster optimization step, with new optional destroy handler

  • Documentation updated and made more complete
      - Added documentation for RNNBlock
      - Updated conv documentation

  • Support more debuggers for PdbBreakpoint

  • Many bug fixes, crash fixes and warning improvements
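
For reference, device selection with the new theano.gpuarray backend mentioned above is done through Theano flags rather than through theano.sandbox.cuda. A minimal sketch, assuming pygpu/libgpuarray and a CUDA device are installed (the device name cuda0 is just an example):

    # Choose the device before importing theano, e.g. from the shell:
    #   THEANO_FLAGS='device=cuda0,floatX=float32' python script.py
    # or in the [global] section of ~/.theanorc:
    #   device = cuda0
    #   floatX = float32
    import theano
    import theano.tensor as T

    x = T.fmatrix('x')
    f = theano.function([x], T.exp(x))   # compiled for the GPU when device=cuda* is active

    print(theano.config.device)          # confirms which device Theano picked up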

Supported Operating Systems: Linux, Mac OS X, Windows
Data Formats: Agnostic
Tags: Python, CUDA, GPU, Symbolic Differentiation, NumPy
