Project details for DiffSharp

DiffSharp 0.7.7

by gbaydin - January 4, 2016, 00:57:35 CET

Description:

DiffSharp: Differentiable Functional Programming

DiffSharp is a functional automatic differentiation (AD) library.

AD allows exact and efficient calculation of derivatives by systematically invoking the chain rule of calculus at the elementary operator level during program execution. AD differs from numerical differentiation, which is prone to truncation and round-off errors, and from symbolic differentiation, which is affected by expression swell and cannot fully handle algorithmic control flow.
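
For illustration, here is a minimal F# sketch (not taken from the library's documentation) contrasting the two approaches: a central-difference quotient depends on a step size that trades truncation error against round-off error, whereas forward AD returns the derivative exactly, with no step size. The diff function and the D constructor are assumed to follow the DiffSharp.AD.Float64 module of version 0.7.

    open DiffSharp.AD.Float64

    // Numerical differentiation: a central-difference quotient on plain floats.
    // The step size h trades truncation error (too large) against round-off error (too small).
    let numDiff h (f: float -> float) x = (f (x + h) - f (x - h)) / (2. * h)
    let approx = numDiff 1e-6 (fun x -> sin (exp x)) 1.2

    // Forward AD: the same derivative, exact to machine precision, no step size needed.
    let exact = diff (fun (x: D) -> sin (exp x)) (D 1.2)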

Using the DiffSharp library, differentiation (gradients, Hessians, Jacobians, directional derivatives, and matrix-free Hessian- and Jacobian-vector products) is applied using higher-order functions, that is, functions which take other functions as arguments. Your functions can use the full expressive capability of the language, including control flow. DiffSharp allows composition of differentiation using nested forward and reverse AD up to any level, meaning that you can compute exact higher-order derivatives or differentiate functions that internally make use of differentiation. Please see the API Overview page for a list of available operations.
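
As a brief sketch of this higher-order-function style (based on the version 0.7 API as documented; the API Overview page is authoritative for the exact operation names):

    open DiffSharp.AD.Float64

    // Derivative of a scalar-to-scalar function.
    let f (x: D) = sin (sqrt x)
    let df = diff f (D 2.)

    // Gradient of a vector-to-scalar function.
    let g (x: DV) = sin (x.[0] * x.[1]) + exp x.[2]
    let gg = grad g (toDV [1.5; 2.5; 0.5])

    // Nested AD: differentiate a function that itself uses differentiation.
    let d2 = diff (fun x -> x * diff (fun y -> x * sin y) (D 0.5)) (D 1.2)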

The library is developed by Atılım Güneş Baydin and Barak A. Pearlmutter mainly for research applications in machine learning, as part of their work at the Brain and Computation Lab, Hamilton Institute, National University of Ireland Maynooth.

DiffSharp is implemented in the F# language and can be used from C# and other languages running on Mono, .NET Core, or the .NET Framework, targeting 64-bit platforms. It is tested on Linux and Windows. We are working on interfaces/ports to other languages.

Changes to previous version:

Fixed: Bug fix in forward AD implementation of Sigmoid and ReLU for D, DV, and DM (fixes #16, thank you @mrakgr)

Improved: Performance improvement by removing several more Parallel.For and Array.Parallel.map operations, so that the code works better with OpenBLAS multithreading

Added: Operations involving incompatible dimensions of DV and DM now throw exceptions to warn the user
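
For example, a hypothetical snippet (assuming the Float64 module; the concrete exception type is not stated here, so a catch-all handler is used):

    open DiffSharp.AD.Float64

    let m = toDM [[1.; 2.]; [3.; 4.]]   // 2x2 matrix
    let v = toDV [1.; 2.; 3.]           // length-3 vector, incompatible with m
    try
        m * v |> ignore                 // dimension mismatch now raises an exception
    with ex ->
        printfn "Incompatible dimensions: %s" ex.Message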

Supported Operating Systems: Linux, Windows
Data Formats: Agnostic
Tags: Optimization, Automatic Differentiation, Symbolic Differentiation, Backpropagation, Numerical Differentiation

Other available revisions

Version Changelog Date
0.7.7

Fixed: Bug fix in forward AD implementation of Sigmoid and ReLU for D, DV, and DM (fixes #16, thank you @mrakgr)

Improved: Performance improvement by removing several more Parallel.For and Array.Parallel.map operations, so that the code works better with OpenBLAS multithreading

Added: Operations involving incompatible dimensions of DV and DM now throw exceptions to warn the user

January 4, 2016, 00:57:35
0.7.6

Fixed: Bug fix in the LAPACK wrappers ssysv and dsysv in the OpenBLAS backend that caused incorrect solutions for linear systems described by a symmetric matrix (fixes #11, thank you @grek142)

Added: Unit tests covering the whole backend interface

January 4, 2016, 00:56:28
0.7.5

Improved: Performance improvement thanks to faster Array2D.copy operations (thank you Don Syme @dsyme)

Improved: Significantly faster matrix transposition using extended BLAS operations cblas_?omatcopy provided by OpenBLAS

Improved: Performance improvement by disabling the parts of the OpenBLAS backend that used System.Threading.Tasks, which were interfering with OpenBLAS multithreading. Pending further tests.

Update: Updated the Win64 binaries of OpenBLAS to version 0.2.15 (27-10-2015), which has bug fixes and optimizations.

Fixed: Bug fixes in reverse AD operations Sub_D_DV and Sub_D_DM (fixes #8, thank you @mrakgr)

Fixed: A bug in the benchmarking module that caused incorrect reporting of the overhead factor of the AD grad operation

Improved: Documentation updates

January 4, 2016, 00:54:33
0.7.4

Improved: Overall performance improvements with parallelization and memory reshaping in OpenBLAS backend

Fixed: Bug fixes in reverse AD Make_DM_ofDV and DV.Append

Fixed: Bug fixes in DM operations map2Cols, map2Rows, mapi2Cols, mapi2Rows

Added: New operation primalDeep for the deepest primal value in nested AD values

October 27, 2015, 16:27:35
0.7.3

Fixed: Bug fix in DM.Min

Added: Mean, Variance, StandardDev, Normalize, and Standardize functions

Added: Support for visualizations with configurable Unicode/ASCII palette and contrast

October 27, 2015, 16:26:09
0.7.2

Added: Fast reshape operations ReshapeCopy_DV_DM and ReshapeCopy_DM_DV

October 27, 2015, 16:24:30
0.7.1

Fixed: Bug fixes for reverse AD Abs, Sign, Floor, Ceil, Round, DV.AddSubVector, Make_DM_ofDs, Mul_Out_V_V, Mul_DVCons_D

Added: New methods DV.isEmpty and DM.isEmpty

October 27, 2015, 16:22:37
0.7.0

Version 0.7.0 is a reimplementation of the library with support for linear algebra primitives, BLAS/LAPACK, 32- and 64-bit precision, and different CPU/GPU backends.

Changed: Namespaces have been reorganized and simplified. This is a breaking change. There is now just one AD implementation, under DiffSharp.AD (with DiffSharp.AD.Float32 and DiffSharp.AD.Float64 variants, see below). This internally makes use of forward or reverse AD as needed.

Added: Support for 32-bit (single precision) and 64-bit (double precision) floating point operations. All modules have Float32 and Float64 versions providing the same functionality with the specified precision. 32-bit floating point operations are significantly faster (as much as twice as fast) on many current systems.

Added: DiffSharp now uses the OpenBLAS library by default for linear algebra operations. The AD operations with the types D for scalars, DV for vectors, and DM for matrices use the underlying linear algebra backend for highly optimized native BLAS and LAPACK operations. For non-BLAS operations (such as Hadamard products and matrix transpose), parallel implementations in managed code are used. All operations with the D, DV, and DM types support forward and reverse nested AD up to any level. This also paves the way for GPU backends (CUDA/CuBLAS) which will be introduced in following releases. Please see the documentation and API reference for information about how to use the D, DV, and DM types. (Deprecated: The FsAlg generic linear algebra library and the Vector<'T> and Matrix<'T> types are no longer used.)
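
A short sketch of how these types are used, following the version 0.7 documentation (in particular, it is assumed, per the 0.7 API reference, that * between two DV values is the inner product and * between a DM and a DV is the matrix-vector product):

    open DiffSharp.AD.Float64

    let v  = toDV [1.; 2.; 3.]                  // DV: vector
    let m  = toDM [[1.; 0.; 2.]; [0.; 3.; 1.]]  // DM: 2x3 matrix
    let mv = m * v                              // matrix-vector product (BLAS-backed), a DV

    // The same types carry derivative information, so linear algebra code is differentiable.
    let q (x: DV) = x * (toDM [[2.; 1.]; [1.; 3.]] * x)   // quadratic form, a D
    let gq = grad q (toDV [0.5; -1.])                     // its gradient, a DV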

Fixed: Reverse mode AD has been reimplemented in a tail-recursive way, for better performance and to prevent the StackOverflow exceptions encountered in previous versions.

Changed: The library now uses F# 4.0 (FSharp.Core 4.4.0.0).

Changed: The library is now 64-bit only, meaning that users should set "x64" as the platform target for all build configurations.

Fixed: Overall bug fixes.

September 29, 2015, 14:09:01
0.6.3

Fixed: Bug fix in DiffSharp.AD subtraction operation between D and DF

July 18, 2015, 22:04:00
0.6.2

Changed: Update FsAlg to 0.5.8

June 6, 2015, 21:00:02
0.6.1

Added: Support for C#, through the new DiffSharp.Interop namespace

Added: Support for casting AD types to int

Changed: Update FsAlg to 0.5.6

Improved: Documentation updates

June 3, 2015, 03:07:03
0.6.0

Initial Announcement on mloss.org.

April 27, 2015, 01:47:06
