Project author: sdl1

Project description:
Automatic differentiation in Python using dual numbers.
Language: Python
Repository: git://github.com/sdl1/nabla.git
Created: 2018-11-25T15:29:33Z
Project community: https://github.com/sdl1/nabla

License: MIT License

Nabla

Note: this is an unmaintained personal project; use at your own risk.

Automatic, machine-precision forward differentiation in Python using dual numbers.

    from nabla import grad

    def f(x):
        return 3*x*x

    print(grad(f)(5))

    Dual(75, [30.])
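
The idea behind dual numbers: extend each input a to a + b·ε with ε² = 0, so arithmetic carries the derivative along exactly, since f(a + b·ε) = f(a) + f′(a)·b·ε. The following is a minimal, library-independent sketch of that mechanism (the MiniDual class below is illustrative only, not nabla's actual Dual implementation):

    class MiniDual:
        """Toy dual number with real part `re` and derivative part `du` (illustrative, not nabla's Dual)."""
        def __init__(self, re, du=0.0):
            self.re, self.du = re, du

        def __add__(self, other):
            other = other if isinstance(other, MiniDual) else MiniDual(other)
            return MiniDual(self.re + other.re, self.du + other.du)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, MiniDual) else MiniDual(other)
            # Product rule: (a + a'ε)(b + b'ε) = ab + (ab' + a'b)ε, because ε² = 0
            return MiniDual(self.re * other.re, self.re * other.du + self.du * other.re)
        __rmul__ = __mul__

    x = MiniDual(5.0, 1.0)   # seed the derivative dx/dx = 1
    y = 3 * x * x            # same f(x) = 3*x*x as above
    print(y.re, y.du)        # 75.0 30.0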

An example of using nabla for logistic regression is linked from the project repository (https://github.com/sdl1/nabla).

Support for multiple variables

    def f(x, y, param, z):
        return 2*x*y + z**4

    x, y, z, param = 1, 2, 3, "this is a non-numeric parameter"
    print(f(x, y, param, z))

    # Get the gradient w.r.t. x, y, z
    # The non-numeric parameter is automatically ignored
    print(grad(f)(x, y, param, z))

    85
    Dual(85, [  4.   2. 108.])
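
As a check, the analytical gradient is (∂f/∂x, ∂f/∂y, ∂f/∂z) = (2y, 2x, 4z³) = (4, 2, 108) at (x, y, z) = (1, 2, 3), matching the dual part of the result.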

Specify variables explicitly by position

    # Find gradient w.r.t. y, x
    print(grad([1,0])(f)(x, y, param, z))

    Dual(85, [2. 4.])
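
The order of the dual part follows the order of the indices given to grad: grad([1,0]) asks for (∂f/∂y, ∂f/∂x), which at (x, y, z) = (1, 2, 3) is (2x, 2y) = (2, 4), matching the output.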

Use decorators; interop with numpy

    import numpy as np
    from numpy import sin, cos

    @grad
    def f(x, y):
        return sin(x)*cos(y)

    print(f(1, 2))

    # nabla can automatically differentiate w.r.t. a combination of
    # numpy array entries and other function arguments:
    print(f(np.array([1, 2, 3]), 2))

    Dual(-0.35017548837401463, [-0.2248451 -0.7651474])
    [Dual(-0.35017548837401463, [-0.2248451  -0.         -0.         -0.7651474 ])
     Dual(-0.37840124765396416, [ 0.          0.17317819  0.         -0.82682181])
     Dual(-0.05872664492762098, [ 0.          0.          0.41198225 -0.12832006])]
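
In the second result, each Dual corresponds to one entry of the input array; its dual part has one slot per array entry plus one for the scalar argument y, so only the matching array entry and y have non-zero derivatives.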

Gradient descent without any extra code

    from nabla import minimise

    def f(x, y, z):
        return sin(x+1) + 2*cos(y-1) + (z-1)**2

    x0, fval, gradient = minimise(f, [0, 0, 0])
    print("Minimum found at x0 = {}".format(x0))
    print("Function, gradient at minimum = {}, {}\n".format(fval, gradient))

    # Can also minimise w.r.t. a subset of variables
    # Here we minimise w.r.t. x and y while holding z=0 fixed
    x0, fval, gradient = minimise(f, [0, 0, 0], variables=[0,1])
    print("Minimum found at x0 = {}".format(x0))
    print("Function, gradient at minimum = {}, {}\n".format(fval, gradient))

    Minimum found at x0 = [-2.57079546 -2.14159265  1.        ]
    Function, gradient at minimum = -2.9999999999996243, [ 8.66727231e-07  1.62321409e-14 -3.77475828e-15]

    Minimum found at x0 = [-2.57079546 -2.14159265  0.        ]
    Function, gradient at minimum = -1.9999999999996243, [8.66727231e-07 1.62321409e-14]
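
As a sanity check: sin(x+1) is minimised at x = -π/2 - 1 ≈ -2.5708, 2cos(y-1) at y = 1 - π ≈ -2.1416, and (z-1)² at z = 1, giving a minimum of -1 - 2 + 0 = -3; with z held at 0 the quadratic term contributes 1, so the constrained minimum is -2. Both match the printed results.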

Comparison with finite-difference

    import numpy as np
    import matplotlib.pyplot as plt
    %matplotlib inline

    def f(x):
        return np.sqrt(3*x + np.sin(x)**2)/(4*x**3 + 2*x + 1)

    def analytical_derivative(x):
        A = (2*np.sin(x)*np.cos(x) + 3)/(2*(4*x**3 + 2*x + 1)*np.sqrt(3*x + np.sin(x)**2))
        B = (12*x**2 + 2)*np.sqrt(3*x + np.sin(x)**2)/(4*x**3 + 2*x + 1)**2
        return A - B

    x = 1
    dfdx_nabla = grad(f)(x).dual
    dfdx_analytic = analytical_derivative(x)

    eps = np.logspace(-15, -3)
    dfdx_fd = np.zeros(eps.shape)
    for i, e in enumerate(eps):
        dfdx_fd[i] = (f(x+e) - f(x))/e

    err_nabla = np.abs(dfdx_nabla - dfdx_analytic) * np.ones(eps.shape)
    err_fd = np.abs(dfdx_fd - dfdx_analytic)

    # Plot error
    plt.loglog(eps, err_fd, label='Finite-difference')
    plt.loglog(eps, err_nabla, label='Nabla (dual numbers)')
    plt.xlabel('Finite-difference step')
    plt.ylabel('Error in gradient')
    plt.legend()

    <matplotlib.legend.Legend at 0x7fce4983bfd0>

[Figure: log-log plot of error in gradient vs. finite-difference step, comparing finite differences with Nabla (dual numbers)]
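
The comparison illustrates the machine-precision claim above: the finite-difference error shrinks as the step decreases only until floating-point cancellation takes over (around a step of 1e-8 for a forward difference), whereas the dual-number derivative involves no step size at all and its error stays at machine-precision level.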

Compare time taken:

    %timeit -n10000 grad(f)(x)
    %timeit -n10000 (f(x+1e-8) - f(x))/1e-8

    124 µs ± 6.93 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
    8.27 µs ± 366 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
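
For this small scalar function the dual-number call is roughly 15x slower than a single forward-difference evaluation, so the trade-off is per-call overhead in exchange for machine-precision derivatives with no step size to tune.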