Project details for DiffSharp

DiffSharp 0.7.0

by gbaydin - September 29, 2015, 14:09:01 CET


Description:

DiffSharp is an automatic differentiation (AD) library.

AD allows exact and efficient calculation of derivatives by systematically invoking the chain rule of calculus at the elementary operator level during program execution. AD is different from numerical differentiation, which is prone to truncation and round-off errors, and from symbolic differentiation, which is affected by expression swell and cannot fully handle algorithmic control flow.

Using the DiffSharp library, derivative calculations (gradients, Hessians, Jacobians, directional derivatives, and matrix-free Hessian- and Jacobian-vector products) can be incorporated with minimal change into existing algorithms. DiffSharp supports nested forward and reverse AD up to any level, meaning that you can compute exact higher-order derivatives or differentiate functions that internally make use of differentiation. Please see the API Overview page for a list of available operations.
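For illustration, here is a minimal F# sketch of how these operations look with the 64-bit precision variant. The function definitions below are hypothetical examples; the operation names diff, grad, and hessian and the toDV conversion follow the API Overview.

    open DiffSharp.AD.Float64

    // Scalar-to-scalar: derivative of f at x = 1.2
    let f (x:D) = sin (x * x)
    let df = diff f (D 1.2)

    // Vector-to-scalar: gradient and Hessian of g at (1, 2)
    let g (x:DV) = exp (x.[0] * x.[1]) + x.[0]
    let gg = grad g (toDV [1.; 2.])
    let hg = hessian g (toDV [1.; 2.])

    // Nested AD: differentiate a function that itself calls diff
    let h (x:D) = diff (fun y -> x * sin y) (D 0.5)
    let dh = diff h (D 1.0)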

The library is under active development by Atılım Güneş Baydin and Barak A. Pearlmutter mainly for research applications in machine learning, as part of their work at the Brain and Computation Lab, Hamilton Institute, National University of Ireland Maynooth.

DiffSharp is implemented in the F# language and can be used from C# and other languages running on Mono or the .NET Framework, targeting the 64-bit platform. It is tested on Linux and Windows. We are working on interfaces/ports to other languages.

Changes to previous version:

Version 0.7.0 is a reimplementation of the library with support for linear algebra primitives, BLAS/LAPACK, 32- and 64-bit precision, and different CPU/GPU backends.

Changed: Namespaces have been reorganized and simplified. This is a breaking change. There is now just one AD implementation, under DiffSharp.AD (with DiffSharp.AD.Float32 and DiffSharp.AD.Float64 variants; see below), which internally makes use of forward or reverse AD as needed.

Added: Support for 32-bit (single-precision) and 64-bit (double-precision) floating-point operations. All modules have Float32 and Float64 versions providing the same functionality at the specified precision. 32-bit floating-point operations are significantly faster (as much as twice as fast) on many current systems.
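As a sketch, switching precision amounts to opening the corresponding module; the same operations are available in both, assuming the D constructor takes a float32 value under the Float32 variant:

    // Double precision
    module UseFloat64 =
        open DiffSharp.AD.Float64
        let df = diff (fun x -> sin (x * x)) (D 1.2)

    // Single precision: same API, scalars are 32-bit floats
    module UseFloat32 =
        open DiffSharp.AD.Float32
        let df = diff (fun x -> sin (x * x)) (D 1.2f)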

Added: DiffSharp now uses the OpenBLAS library by default for linear algebra operations. The AD operations with the types D for scalars, DV for vectors, and DM for matrices use the underlying linear algebra backend for highly optimized native BLAS and LAPACK operations. For non-BLAS operations (such as Hadamard products and matrix transposition), parallel implementations in managed code are used. All operations with the D, DV, and DM types support forward and reverse nested AD up to any level. This also paves the way for GPU backends (CUDA/cuBLAS), which will be introduced in upcoming releases. Please see the documentation and API reference for information about how to use the D, DV, and DM types. (Deprecated: The FsAlg generic linear algebra library and the Vector<'T> and Matrix<'T> types are no longer used.)
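A small, hedged example of the D/DV/DM types in use; it assumes the toDV/toDM conversion functions from the documentation, and that * between two DV values denotes the inner product:

    open DiffSharp.AD.Float64

    // Vector and matrix values, converted from ordinary F# lists
    let v = toDV [1.; 2.; 3.]
    let m = toDM [[1.; 2.; 3.]; [4.; 5.; 6.]]   // 2 x 3 matrix

    // Matrix-vector product, dispatched to the BLAS backend
    let mv = m * v

    // The same values participate in AD: gradient of x |-> x . x is 2x
    let gf = grad (fun (x:DV) -> x * x) v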

Fixed: Reverse mode AD has been reimplemented in a tail-recursive way, improving performance and preventing the StackOverflowException errors encountered in previous versions.

Changed: The library now uses F# 4.0 (FSharp.Core 4.4.0.0).

Changed: The library is now 64-bit only, meaning that users should set "x64" as the platform target for all build configurations.
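In an MSBuild-based project file (.fsproj or .csproj), this corresponds to a platform target property along the following lines (a sketch; the exact project layout will vary):

    <PropertyGroup>
      <PlatformTarget>x64</PlatformTarget>
    </PropertyGroup>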

Fixed: Various other bug fixes.

Supported Operating Systems: Linux, Windows
Data Formats: Agnostic
Tags: Optimization, Automatic Differentiation, Symbolic Differentiation, Backpropagation, Numerical Differentiation
