causalexplain.metrics package#

Submodules#

debug_(*args, **kwargs)[source]#
pm_(m)[source]#
allDagsIntern(gm, a, row_names, tmp=None)[source]#
allDagsJonas(adj, row_names)[source]#
computePathMatrix(G, spars=False)[source]#
computePathMatrix2(G, condSet, PathMatrix1, spars=False)[source]#
compute_caus_order(G)[source]#
dag2cpdagAdj(Adj)[source]#
dSepAdji(AdjMat, i, condSet, PathMatrix=None, PathMatrix2=None, spars=None, p=None)[source]#
unique_rows(m)[source]#
SID(trueGraph, estGraph, output=False, spars=False)[source]#
get_indInAllOthers(p, mmm, uniqueRows, allParentsOfI, count, allOthers)[source]#
hammingDist(G1, G2, allMistakesOne=True)[source]#
main()[source]#

This module computes metrics between a pair of graphs: the ground truth and the predicted graph. To compare them, simply pass the reference graph and the predicted graph (the one you want to make as similar as possible to the reference), and all metrics will be computed.

Use:

>>> import networkx as nx
>>> from random import random
>>> target = nx.DiGraph()
>>> target.add_nodes_from(['A', 'B', 'C', 'D', 'E'])
>>> target.add_weighted_edges_from([
...     ('A', 'B', random()), ('B', 'D', random()), ('C', 'B', random()),
...     ('D', 'E', random()), ('C', 'E', random())])

>>> predicted = nx.DiGraph()
>>> predicted.add_nodes_from(['A', 'B', 'C', 'D', 'E'])
>>> predicted.add_weighted_edges_from([
...     ('A', 'B', random()), ('A', 'C', random()), ('E', 'A', random()),
...     ('E', 'B', random()), ('C', 'D', random())])
>>> result = evaluate_graph(target, predicted)
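
The returned Metrics object can then be inspected with the helper methods documented below, for example:

>>> metrics_dict = result.to_dict()       # plain dict of metric names to values
>>> summary = result.matplotlib_repr()    # formatted string for a matplotlib annotation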
class Metrics(Tp, Tn, Fn, Fp, precision, recall, aupr, f1, shd, sid)[source]#

Bases: object

This class contains all the metrics computed by the evaluate_graph method.

Methods

matplotlib_repr()

Generates a formatted string representation of the metrics for display in a matplotlib plot.

to_dict()

Convert the metrics to a dictionary.

__init__(Tp, Tn, Fn, Fp, precision, recall, aupr, f1, shd, sid)[source]#
Tp: int = 0#
Tn: int = 0#
Fn: int = 0#
Fp: int = 0#
precision: float = 0.0#
recall: float = 0.0#
aupr: float = 0.0#
f1: float = 0.0#
shd: float = 0.0#
sid: float = 0.0#
to_dict()[source]#

Convert the metrics to a dictionary.

matplotlib_repr()[source]#

Generates a formatted string representation of the metrics for display in a matplotlib plot.

Returns:

The formatted string representation of the metrics.

Return type:

str
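
A typical use of matplotlib_repr() is to place the metrics summary inside a figure. A minimal sketch, assuming result is the Metrics object returned in the example above:

>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> _ = ax.text(0.05, 0.95, result.matplotlib_repr(),
...             transform=ax.transAxes, va='top', family='monospace')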

evaluate_graph(ground_truth, predicted_graph, feature_names=None, threshold=0.0, absolute=False, double_for_anticausal=True)[source]#

This method computes metrics between a pair of graphs: the ground truth and the predicted graph. To call this method, simply pass the reference graph and the predicted graph (the one you want to make as similar as possible to the reference), and all metrics will be computed.

Parameters:
  • ground_truth (AnyGraph) – The ground truth graph.

  • predicted_graph (AnyGraph) – The predicted graph.

  • feature_names (Optional[List[str]], optional) – The list of feature names, by default None

  • threshold (float, optional) – The threshold to use for the precision-recall curve, by default 0.0

  • absolute (bool, optional) – Whether to use the absolute value of the weights, by default False

  • double_for_anticausal (bool, optional) – Whether to double the weights of anticausal edges, by default True

Returns:

The metrics object containing all the metrics: Tp, Tn, Fn, Fp, precision, recall, AuPR, F1, SHD and SID.

Return type:

Metrics
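
A hedged sketch of a call that uses the optional parameters, reusing the target and predicted graphs built in the doctest above (the parameter values chosen here are illustrative only):

>>> m = evaluate_graph(target, predicted,
...                    feature_names=['A', 'B', 'C', 'D', 'E'],
...                    threshold=0.5, absolute=True, double_for_anticausal=False)
>>> shd, sid = m.shd, m.sid   # individual metrics are plain attributes of the result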

Module contents#

Metrics module for evaluating causal graphs.

This module provides various metrics and comparison tools for causal graphs, including:

  • SID (Structural Intervention Distance)

  • Graph comparison utilities
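
A hedged sketch of the lower-level SID and hammingDist utilities listed under Submodules. It assumes both functions accept square NumPy adjacency matrices (a nonzero entry at [i, j] meaning an edge i -> j) and that they have been imported from the corresponding causalexplain.metrics submodule, whose exact import path is not shown on this page:

>>> # assumed inputs: square NumPy adjacency matrices; check the source for the exact convention
>>> import numpy as np
>>> true_adj = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
>>> est_adj = np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]])
>>> shd_value = hammingDist(true_adj, est_adj, allMistakesOne=True)
>>> sid_value = SID(true_adj, est_adj)  # inspect the result; its structure is not documented here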