A Python program I'm currently working on (Gaussian process classification) is bottlenecked on the evaluation of a symbolic Sympy matrix, and I can't figure out what, if anything, I can do to speed it up. I've made sure the other parts of the program are properly typed (in terms of numpy arrays), so the calculations between them are appropriately vectorized, etc.

I looked in particular at Sympy's codegen functions (autowrap, binary_function), but because my ImmutableMatrix objects are themselves partial derivatives over the elements of a symbolic matrix, there is a long list of "unusable" things that prevents me from using the codegen functionality.

Another possibility I looked into was using Theano. After some preliminary benchmarking, however, I found that although it builds the initial symbolic matrix of partial derivatives faster, it seems to be orders of magnitude slower at evaluation time, the opposite of what I'm after.

Below is a working, extracted snippet of the code I'm dealing with.

import theano
import sympy
from sympy.utilities.autowrap import autowrap
from sympy.utilities.autowrap import binary_function
import numpy as np
import math
from datetime import datetime

# 'Vectorized' cdist that can handle symbols/arbitrary types - preliminary benchmarking
# put it at ~15 times faster than a Python list comprehension, but still notably slower
# than cdist (I forget the exact factor at the moment), of course
def sqeucl_dist(x, xs):
    m = np.sum(np.power(
        np.repeat(x[:,None,:], len(xs), axis=1) -
        np.resize(xs, (len(x), xs.shape[0], xs.shape[1])),
        2), axis=2)
    return m


def build_symbolic_derivatives(X):
    # Pre-calculate derivatives of inverted matrix to substitute values in the Squared Exponential NLL gradient
    f_err_sym, n_err_sym = sympy.symbols("f_err, n_err")

    # (1,n) shape 'matrix' (vector) of length scales for each dimension
    l_scale_sym = sympy.MatrixSymbol('l', 1, X.shape[1])

    # K matrix
    print("Building sympy matrix...")
    eucl_dist_m = sqeucl_dist(X/l_scale_sym, X/l_scale_sym)
    m = sympy.Matrix(f_err_sym**2 * math.e**(-0.5 * eucl_dist_m) 
                     + n_err_sym**2 * np.identity(len(X)))


    # Element-wise derivative of K matrix over each of the hyperparameters
    print("Getting partial derivatives over all hyperparameters...")
    pd_t1 = datetime.now()
    dK_df   = m.diff(f_err_sym)
    dK_dls  = [m.diff(l) for l in l_scale_sym]  # don't shadow l_scale_sym; it's needed for lambdify below
    dK_dn   = m.diff(n_err_sym)
    print("Took: {}".format(datetime.now() - pd_t1))

    # Lambdify each of the dK/dts to speed up substitutions per optimization iteration
    print("Lambdifying ")
    l_t1 = datetime.now()
    dK_dthetas = [dK_df] + dK_dls + [dK_dn]
    dK_dthetas = sympy.lambdify((f_err_sym, l_scale_sym, n_err_sym), dK_dthetas, 'numpy')
    print("Took: {}".format(datetime.now() - l_t1))
    return dK_dthetas


# Evaluates each dK_dtheta pre-calculated symbolic lambda with current iteration's hyperparameters
def eval_dK_dthetas(dK_dthetas_raw, f_err, l_scales, n_err):
    l_scales = sympy.Matrix(l_scales.reshape(1, len(l_scales)))
    return np.array(dK_dthetas_raw(f_err, l_scales, n_err), dtype=np.float64)


dimensions = 3 
X = np.random.rand(50, dimensions)
dK_dthetas_raw = build_symbolic_derivatives(X)

f_err = np.random.rand()
l_scales = np.random.rand(dimensions)
n_err = np.random.rand()

t1 = datetime.now()
dK_dthetas = eval_dK_dthetas(dK_dthetas_raw, f_err, l_scales, n_err) # ~99.7%
print(datetime.now() - t1)
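As a sanity check on the distance helper above, the following (with made-up test data) confirms that `sqeucl_dist` reproduces a naive pairwise squared-Euclidean computation, i.e. what scipy's `cdist(..., 'sqeuclidean')` would return:

```python
import numpy as np

def sqeucl_dist(x, xs):
    # Pairwise squared Euclidean distances via repeat/resize broadcasting,
    # identical to the helper in the snippet above
    return np.sum(np.power(
        np.repeat(x[:, None, :], len(xs), axis=1) -
        np.resize(xs, (len(x), xs.shape[0], xs.shape[1])),
        2), axis=2)

rng = np.random.RandomState(0)
x = rng.rand(4, 3)
xs = rng.rand(5, 3)

# Naive reference: explicit double loop over point pairs
naive = np.array([[np.sum((a - b) ** 2) for b in xs] for a in x])
assert sqeucl_dist(x, xs).shape == (4, 5)
assert np.allclose(sqeucl_dist(x, xs), naive)
```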

In this example, evaluating the 5 symbolic 50x50 matrices, i.e. only 12,500 elements, takes 7 seconds. I've spent a long time looking for resources on speeding up operations like this, and on trying to translate it to Theano (at least until I found its evaluation was slower in my case), with no luck there either.
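For context, the pattern above boils down to something like this toy example (a hypothetical 2x2 matrix with two hyperparameters). It works, but the lambdified function ends up evaluating every element as a separate Python-level expression, which is where I suspect the time goes at 50x50:

```python
import numpy as np
import sympy

# Toy version of the lambdify-then-evaluate pattern from the snippet above
f_err, n_err = sympy.symbols("f_err n_err")
K = sympy.Matrix([[f_err**2 + n_err**2, f_err**2],
                  [f_err**2, f_err**2 + n_err**2]])

# One derivative matrix per hyperparameter, lambdified together as a list
dK_dthetas = [K.diff(f_err), K.diff(n_err)]
fn = sympy.lambdify((f_err, n_err), dK_dthetas, 'numpy')

# Evaluate at f_err=0.5, n_err=0.1; result stacks to shape (2, 2, 2)
out = np.array(fn(0.5, 0.1), dtype=np.float64)
assert out.shape == (2, 2, 2)
assert np.allclose(out[0], 1.0)                        # dK/df_err = 2*f_err everywhere
assert np.allclose(out[1], [[0.2, 0.0], [0.0, 0.2]])   # dK/dn_err = 2*n_err on the diagonal
```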

Any help greatly appreciated!