causalexplain.estimators.notears package#
Submodules#
- class LBFGSBScipy(params)[source]#
Bases:
Optimizer
Wrap L-BFGS-B algorithm, using scipy routines.
Courtesy: Arthur Mensch’s gist https://gist.github.com/arthurmensch/c55ac413868550f89225a0b9212aa4cd
Methods
add_param_group(param_group) – Add a param group to the Optimizer's param_groups.
load_state_dict(state_dict) – Load the optimizer state.
register_load_state_dict_post_hook(hook[, ...]) – Register a load_state_dict post-hook which will be called after load_state_dict() is called.
register_load_state_dict_pre_hook(hook[, ...]) – Register a load_state_dict pre-hook which will be called before load_state_dict() is called.
register_state_dict_post_hook(hook[, prepend]) – Register a state dict post-hook which will be called after state_dict() is called.
register_state_dict_pre_hook(hook[, prepend]) – Register a state dict pre-hook which will be called before state_dict() is called.
register_step_post_hook(hook) – Register an optimizer step post hook which will be called after optimizer step.
register_step_pre_hook(hook) – Register an optimizer step pre hook which will be called before optimizer step.
state_dict() – Return the state of the optimizer as a dict.
step(closure) – Perform a single optimization step.
zero_grad([set_to_none]) – Reset the gradients of all optimized torch.Tensors.
OptimizerPostHook
OptimizerPreHook
profile_hook_step
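Note that step(closure) delegates to scipy's L-BFGS-B minimizer, which may evaluate the closure many times before returning. A minimal usage sketch, assuming LBFGSBScipy has been imported from this package:

    import torch

    model = torch.nn.Linear(3, 1)
    X, y = torch.randn(100, 3), torch.randn(100, 1)
    optimizer = LBFGSBScipy(model.parameters())

    def closure():
        # scipy re-evaluates the objective repeatedly per step(),
        # so loss and gradients are computed inside a closure
        optimizer.zero_grad()
        loss = ((model(X) - y) ** 2).mean()
        loss.backward()
        return loss

    optimizer.step(closure)  # one call drives the full L-BFGS-B minimization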
- notears_linear(X, lambda1, loss_type, max_iter=100, h_tol=1e-08, rho_max=1e+16, w_threshold=0.3)[source]#
Solve min_W L(W; X) + lambda1 ‖W‖_1 s.t. h(W) = 0 using augmented Lagrangian.
- Parameters:
X (np.ndarray) – [n, d] sample matrix
lambda1 (float) – l1 penalty parameter
loss_type (str) – l2, logistic, or poisson
max_iter (int) – max number of dual ascent steps
h_tol (float) – exit if |h(w_est)| <= h_tol
rho_max (float) – exit if rho >= rho_max
w_threshold (float) – drop edge if |weight| < threshold
- Returns:
[d, d] estimated DAG
- Return type:
W_est (np.ndarray)
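Example (a hedged sketch; the exact import path within this package is assumed):

    import numpy as np
    from causalexplain.estimators.notears import notears_linear  # assumed path

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))           # [n, d] stand-in sample matrix
    W_est = notears_linear(X, lambda1=0.1, loss_type='l2')
    print(W_est.shape)                      # (5, 5) weighted adjacency of the estimated DAG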
- class LocallyConnected(num_linear, input_features, output_features, bias=True)[source]#
Bases:
Module
Local linear layer, i.e. Conv1dLocal() with filter size 1.
- Parameters:
num_linear – number of local linear layers, i.e. d
input_features – m1
output_features – m2
bias – whether to include a bias term
- Shape:
Input: [n, d, m1]
Output: [n, d, m2]
- weight#
[d, m1, m2]
- bias#
[d, m2]
Methods
add_module(name, module) – Add a child module to the current module.
apply(fn) – Apply fn recursively to every submodule (as returned by .children()) as well as self.
bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
buffers([recurse]) – Return an iterator over module buffers.
children() – Return an iterator over immediate children modules.
compile(*args, **kwargs) – Compile this Module's forward using torch.compile().
cpu() – Move all model parameters and buffers to the CPU.
cuda([device]) – Move all model parameters and buffers to the GPU.
double() – Casts all floating point parameters and buffers to double datatype.
eval() – Set the module in evaluation mode.
extra_repr() – Return the extra representation of the module.
float() – Casts all floating point parameters and buffers to float datatype.
forward(input) – Define the computation performed at every call.
get_buffer(target) – Return the buffer given by target if it exists, otherwise throw an error.
get_extra_state() – Return any extra state to include in the module's state_dict.
get_parameter(target) – Return the parameter given by target if it exists, otherwise throw an error.
get_submodule(target) – Return the submodule given by target if it exists, otherwise throw an error.
half() – Casts all floating point parameters and buffers to half datatype.
ipu([device]) – Move all model parameters and buffers to the IPU.
load_state_dict(state_dict[, strict, assign]) – Copy parameters and buffers from state_dict into this module and its descendants.
modules() – Return an iterator over all modules in the network.
mtia([device]) – Move all model parameters and buffers to the MTIA.
named_buffers([prefix, recurse, ...]) – Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children() – Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix, remove_duplicate]) – Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse, ...]) – Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse]) – Return an iterator over module parameters.
register_backward_hook(hook) – Register a backward hook on the module.
register_buffer(name, tensor[, persistent]) – Add a buffer to the module.
register_forward_hook(hook, *[, prepend, ...]) – Register a forward hook on the module.
register_forward_pre_hook(hook, *[, ...]) – Register a forward pre-hook on the module.
register_full_backward_hook(hook[, prepend]) – Register a backward hook on the module.
register_full_backward_pre_hook(hook[, prepend]) – Register a backward pre-hook on the module.
register_load_state_dict_post_hook(hook) – Register a post-hook to be run after module's load_state_dict() is called.
register_load_state_dict_pre_hook(hook) – Register a pre-hook to be run before module's load_state_dict() is called.
register_module(name, module) – Alias for add_module().
register_parameter(name, param) – Add a parameter to the module.
register_state_dict_post_hook(hook) – Register a post-hook for the state_dict() method.
register_state_dict_pre_hook(hook) – Register a pre-hook for the state_dict() method.
requires_grad_([requires_grad]) – Change if autograd should record operations on parameters in this module.
set_extra_state(state) – Set extra state contained in the loaded state_dict.
set_submodule(target, module[, strict]) – Set the submodule given by target if it exists, otherwise throw an error.
share_memory() – See torch.Tensor.share_memory_().
state_dict(*args[, destination, prefix, ...]) – Return a dictionary containing references to the whole state of the module.
to(*args, **kwargs) – Move and/or cast the parameters and buffers.
to_empty(*, device[, recurse]) – Move the parameters and buffers to the specified device without copying storage.
train([mode]) – Set the module in training mode.
type(dst_type) – Casts all parameters and buffers to dst_type.
xpu([device]) – Move all model parameters and buffers to the XPU.
zero_grad([set_to_none]) – Reset gradients of all model parameters.
__call__
reset_parameters
- __init__(num_linear, input_features, output_features, bias=True)[source]#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input)[source]#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
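Example (shape-level sketch; import path assumed): each of the d local linear maps transforms its own m1-dimensional slice to m2 dimensions independently.

    import torch
    from causalexplain.estimators.notears import LocallyConnected  # assumed path

    n, d, m1, m2 = 32, 5, 10, 1
    layer = LocallyConnected(num_linear=d, input_features=m1, output_features=m2)
    x = torch.randn(n, d, m1)   # input shape [n, d, m1]
    out = layer(x)              # output shape [n, d, m2]
    print(out.shape)            # torch.Size([32, 5, 1])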
- notears_standard(data, loss, loss_grad, c=0.25, r=10.0, e=1e-08, rnd_W_init=False, output_all_progress=False, verbose=False)[source]#
Runs NOTEARS algorithm.
- Parameters:
data (np.array) – n x d data matrix with n samples, d variables
c (float) – minimum rate of progress, c in (0,1)
r (float) – penalty growth rate, r > 1
e (float) – optimization accuracy, e > 0 (acyclicity stopping criterion)
loss (function) – loss function
loss_grad (function) – gradient of the loss function
rnd_W_init (bool) – initialize W to std. normal random matrix, rather than zero matrix
output_all_progress (bool) – return all intermediate values of W, rather than just the final value
verbose (bool) – print optimization information
- Returns:
{'h': acyclicity of output, 'loss': loss of output, 'W': resulting optimized adjacency matrix}
- Return type:
dict
Original code from xunzheng/notears.
- class NOTEARS(name, loss=<function least_squares_loss>, loss_grad=<function least_squares_loss_grad>, c=0.25, r=10.0, e=1e-08, rnd_W_init=False, verbose=False)[source]#
Bases:
object
Methods
notears_standard(data[, return_all_progress]) – Runs NOTEARS algorithm.
fit
fit_predict
predict
- __init__(name, loss=<function least_squares_loss>, loss_grad=<function least_squares_loss_grad>, c=0.25, r=10.0, e=1e-08, rnd_W_init=False, verbose=False)[source]#
- notears_standard(data, return_all_progress=False)[source]#
Runs NOTEARS algorithm.
- Parameters:
data (np.array) – n x d data matrix with n samples, d variables
c (float) – minimum rate of progress, c in (0,1)
r (float) – penalty growth rate, r > 1
e (float) – optimization accuracy, e > 0 (acyclicity stopping criterion)
loss (function) – loss function
loss_grad (function) – gradient of the loss function
rnd_W_init (bool) – initialize W to std. normal random matrix, rather than zero matrix
return_all_progress (bool) – return all intermediate values of W, rather than just the final value
- Returns:
{'h': acyclicity of output, 'loss': loss of output, 'W': resulting optimized adjacency matrix}
- Return type:
dict
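Example (hedged sketch; import path assumed, and the constructor defaults to the least-squares loss and its gradient):

    import numpy as np
    from causalexplain.estimators.notears import NOTEARS  # assumed path

    data = np.random.default_rng(0).normal(size=(200, 5))  # n x d stand-in data
    model = NOTEARS(name='notears')
    result = model.notears_standard(data)
    # per the docs above: result['h'] (acyclicity), result['loss'], result['W'] (adjacency)
    print(result['W'].shape)   # (5, 5)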
- main(dataset_name, input_path='/Users/renero/phd/data/', output_path='/Users/renero/phd/output/', save=False)[source]#
- class TraceExpm(*args, **kwargs)[source]#
Bases:
Function
- Attributes:
- dirty_tensors
- materialize_grads
- metadata
- needs_input_grad
- next_functions
- non_differentiable
- requires_grad
- saved_for_forward
- saved_tensors
- saved_variables
- to_save
Methods
__call__(*args, **kwargs) – Call self as a function.
backward(ctx, grad_output) – Define a formula for differentiating the operation with backward mode automatic differentiation.
forward(ctx, input) – Define the forward of the custom autograd Function.
jvp(ctx, *grad_inputs) – Define a formula for differentiating the operation with forward mode automatic differentiation.
mark_dirty(*args) – Mark given tensors as modified in an in-place operation.
mark_non_differentiable(*args) – Mark outputs as non-differentiable.
save_for_backward(*tensors) – Save given tensors for a future call to backward().
save_for_forward(*tensors) – Save given tensors for a future call to jvp().
set_materialize_grads(value) – Set whether to materialize grad tensors.
setup_context(ctx, inputs, output) – There are two ways to define the forward pass of an autograd.Function.
vjp(ctx, *grad_outputs) – Define a formula for differentiating the operation with backward mode automatic differentiation.
vmap(info, in_dims, *args) – Define the behavior for this autograd.Function underneath torch.vmap().
apply
mark_shared_storage
maybe_clear_saved_tensors
name
register_hook
register_prehook
- static forward(ctx, input)[source]#
Define the forward of the custom autograd Function.
This function is to be overridden by all subclasses. There are two ways to define forward:
Usage 1 (Combined forward and ctx):

    @staticmethod
    def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
        pass

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types). See combining-forward-context for more details.
Usage 2 (Separate forward and ctx):

    @staticmethod
    def forward(*args: Any, **kwargs: Any) -> Any:
        pass

    @staticmethod
    def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
        pass

The forward no longer accepts a ctx argument. Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward. See extending-autograd for more details.
The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
- static backward(ctx, grad_output)[source]#
Define a formula for differentiating the operation with backward mode automatic differentiation.
This function is to be overridden by all subclasses. (Defining this function is equivalent to defining the vjp function.)
It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
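TraceExpm implements the building block of the NOTEARS acyclicity constraint h(W) = tr(e^(W∘W)) − d, whose gradient uses d tr(e^A)/dA = (e^A)^T. A hedged sketch of applying the Function directly (custom autograd Functions are invoked through .apply; import path assumed):

    import torch
    from causalexplain.estimators.notears import TraceExpm  # assumed path

    trace_expm = TraceExpm.apply
    d = 5
    W = torch.randn(d, d, requires_grad=True)
    h = trace_expm(W * W) - d   # h(W) = tr(e^(W∘W)) - d; zero iff W encodes a DAG
    h.backward()                # gradient flows through the custom backward()
    print(h.item(), W.grad.shape)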
- simulate_parameter(B, w_ranges=((-2.0, -0.5), (0.5, 2.0)))[source]#
Simulate SEM parameters for a DAG.
- Parameters:
B (np.ndarray) – [d, d] binary adj matrix of DAG
w_ranges (tuple) – disjoint weight ranges
- Returns:
[d, d] weighted adj matrix of DAG
- Return type:
W (np.ndarray)
- simulate_linear_sem(W, n, sem_type, noise_scale=None)[source]#
Simulate samples from linear SEM with specified type of noise.
For uniform, noise z ~ uniform(-a, a), where a = noise_scale.
- Parameters:
W (np.ndarray) – [d, d] weighted adj matrix of DAG
n (int) – num of samples; n=inf mimics population risk
sem_type (str) – gauss, exp, gumbel, uniform, logistic, or poisson
noise_scale (np.ndarray) – scale parameter of additive noise, default all ones
- Returns:
[n, d] sample matrix, [d, d] if n=inf
- Return type:
X (np.ndarray)
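The two simulation helpers compose into a small data-generation pipeline (import paths assumed):

    import numpy as np
    from causalexplain.estimators.notears import simulate_parameter, simulate_linear_sem  # assumed paths

    # a tiny hand-written DAG over 3 variables: 0 -> 1 -> 2
    B = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]])
    W = simulate_parameter(B)   # weights drawn from (-2.0, -0.5) or (0.5, 2.0)
    X = simulate_linear_sem(W, n=100, sem_type='gauss')
    print(X.shape)              # (100, 3)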
- simulate_nonlinear_sem(B, n, sem_type, noise_scale=None)[source]#
Simulate samples from nonlinear SEM.
- count_accuracy(B_true, B_est)[source]#
Compute various accuracy metrics for B_est.
true positive = predicted association exists in condition in correct direction
reverse = predicted association exists in condition in opposite direction
false positive = predicted association does not exist in condition
- Parameters:
B_true (np.ndarray) – [d, d] ground truth graph, {0, 1}
B_est (np.ndarray) – [d, d] estimate, {0, 1, -1}, -1 is undirected edge in CPDAG
- Returns:
fdr – (reverse + false positive) / prediction positive
tpr – (true positive) / condition positive
fpr – (reverse + false positive) / condition negative
shd – undirected extra + undirected missing + reverse
nnz – prediction positive
- Return type:
dict
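Worked example (import path assumed): with two true edges, one recovered correctly and one reversed, the definitions above give fdr = 1/2 and shd = 1.

    import numpy as np
    from causalexplain.estimators.notears import count_accuracy  # assumed path

    B_true = np.array([[0, 1, 0],
                       [0, 0, 1],
                       [0, 0, 0]])
    B_est = np.array([[0, 1, 0],
                      [0, 0, 0],
                      [0, 1, 0]])   # edge 1->2 estimated in the reverse direction
    print(count_accuracy(B_true, B_est))
    # expected, per the definitions above: fdr 0.5, tpr 0.5, shd 1, nnz 2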