Implement a minimizer for INLA #513
Conversation
Currently there are a few outstanding TODOs. These are just issues getting quality-of-life features to work with pytensor; the actual algorithm itself works fine. Please find the TODOs listed as comments in the code, and use the code in
pymc_extras/inference/laplace.py
Outdated
model: pm.Model | None = None,
method: minimize_method = "BFGS",
use_jac: bool = True,
use_hess: bool = False,  # TODO Tbh we can probably just remove this arg and pass True to the minimizer all the time, but if this is the case, it will throw a warning when the hessian doesn't need to be computed for a particular optimisation routine.
I'm not really sure why these are options here. Presumably, the minimization method itself knows what it needs and it's redundant to specify `use_jac` or `use_hess` here at all.
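For context, SciPy's `minimize` already warns when it is handed derivative information a method cannot use, which is the behaviour the TODO above alludes to. A minimal sketch of that behaviour with plain SciPy (a hypothetical quadratic objective, not the pytensor wrapper):

```python
import warnings

import numpy as np
from scipy.optimize import minimize


def f(x):
    # Simple quadratic objective with its minimum at the origin
    return (x**2).sum()


def jac(x):
    return 2 * x


def hess(x):
    return 2 * np.eye(x.size)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # BFGS uses the gradient but not the Hessian, so passing `hess`
    # triggers a RuntimeWarning rather than an error.
    res = minimize(f, np.ones(2), method="BFGS", jac=jac, hess=hess)

print(res.success)  # True
print(any("Hessian" in str(w.message) for w in caught))  # True
```

This is why always passing the Hessian is noisy: SciPy warns rather than silently ignoring the unused argument.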
tests/test_laplace.py
Outdated
sigma_mu = rng.random()

coords = {"city": ["A", "B", "C"], "obs_idx": np.arange(n)}
with pm.Model(coords=coords) as model:
I would try to make this test in pytensor directly if possible.
I believe my refactor to the unit tests addresses this. It works with tensors directly and compares the result to an analytic solution.
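As an illustration of the kind of check being described (a hypothetical sketch, not the actual test code): for a conjugate normal-normal model the posterior is Gaussian, so the mode found by a numerical minimizer should match the analytic posterior mean. Sketched with SciPy rather than pytensor:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 50
sigma, mu0, tau0 = 1.0, 0.0, 2.0  # likelihood sd, prior mean, prior sd
y = rng.normal(1.5, sigma, size=n)


def neg_log_post(mu):
    # Negative log posterior of mu (up to an additive constant)
    mu = np.asarray(mu).item()
    return 0.5 * ((y - mu) ** 2).sum() / sigma**2 + 0.5 * (mu - mu0) ** 2 / tau0**2


res = minimize(neg_log_post, x0=np.array([0.0]), method="BFGS")
mode = res.x.item()

# Analytic posterior mean of the conjugate normal-normal model
post_prec = n / sigma**2 + 1 / tau0**2
post_mean = (y.sum() / sigma**2 + mu0 / tau0**2) / post_prec

print(np.isclose(mode, post_mean, atol=1e-4))  # True
```

Because the target is exactly Gaussian here, the Laplace approximation (mode plus curvature) recovers the analytic posterior exactly, which makes it a clean ground truth for a unit test.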
And note that this isn't about validating `minimize`, but about the function which uses it as a means to an end.
@ricardoV94 @jessegrabowski The unit tests currently seem to be failing because the current release of pytensor doesn't have `optimize` in it yet. Would it be possible to make a point release so we can merge this?
@Michal-Novomestsky whenever you are ready for review, remove the Draft status. It looks like there is a small bug in the unit test, but good otherwise!
pymc_extras/inference/laplace.py
Outdated
jac = pytensor.gradient.grad(f_x, x)
hess = pytensor.gradient.jacobian(jac.flatten(), x)

# Component of log(p(x | y, params)) which depends on x
This isn't just the component that depends on x but includes an additional term. You should explain what that term is.
Looks great! Good job!
Addresses #342.

This PR should add:

- `get_conditional_gaussian_approximation`, to get the mode and the Laplace approximation at that point.

Contingent on pymc-devs/pytensor#1182, as it uses `pytensor.tensor.optimize.minimize` to find the mode (and Hessian at that point).
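For readers unfamiliar with the approach: a Laplace approximation takes the posterior mode as the Gaussian mean and the inverse Hessian of the negative log density at that mode as the covariance. A minimal sketch of the idea with SciPy and finite differences (an illustration only, not the API this PR adds; the target is chosen Gaussian so the approximation is exact and checkable):

```python
import numpy as np
from scipy.optimize import minimize

# Target: negative log density of a 2D Gaussian with known mean/cov,
# so the Laplace approximation is exact and easy to verify.
mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
prec = np.linalg.inv(cov)


def neg_log_p(x):
    d = x - mean
    return 0.5 * d @ prec @ d


def numerical_hessian(f, x, eps=1e-4):
    # Forward finite-difference Hessian; exact for quadratics up to rounding
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n)
            ei[i] = eps
            ej = np.zeros(n)
            ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x)) / eps**2
    return H


# Step 1: find the mode by minimizing the negative log density
res = minimize(neg_log_p, x0=np.zeros(2), method="BFGS")
mode = res.x

# Step 2: invert the Hessian at the mode to get the Gaussian covariance
laplace_cov = np.linalg.inv(numerical_hessian(neg_log_p, mode))

print(np.allclose(mode, mean, atol=1e-4))        # True
print(np.allclose(laplace_cov, cov, atol=1e-3))  # True
```

The PR does the same two steps symbolically: `pytensor.tensor.optimize.minimize` finds the mode inside the graph, and the Hessian is built with pytensor's gradient machinery rather than finite differences.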