bnlearn
is a Python package for learning the graphical structure of Bayesian networks, parameter learning, inference and sampling methods. This work is inspired by the R package (bnlearn.com) that has been very useful to me for many years. Although there are very good Python packages for probabilistic graphical models, it can still be difficult (and sometimes unnecessarily so) to (re)build certain pipelines. bnlearn for Python (this package) is built on the pgmpy package and contains the most-wanted pipelines. Navigate to the API documentation for more detailed information.
⭐️ Star this repo if you like it ⭐️
Read the blogs to get a structured overview of Bayesian methods and detailed usage of bnlearn.
On the documentation pages you can find detailed information about the workings of bnlearn, with many examples.
conda create -n env_bnlearn python=3.8
conda activate env_bnlearn
pip install bnlearn
# Import library
import bnlearn as bn
# Structure learning
bn.structure_learning.fit()
# Compute edge strength with the test statistic
bn.independence_test(model, df, test='chi_square', prune=True)
# Parameter learning
bn.parameter_learning.fit()
# Inference
bn.inference.fit()
# Make predictions
bn.predict()
# Based on a DAG, you can generate the number of samples you want.
bn.sampling()
# Load well-known examples to play around with, or load your own .bif file.
bn.import_DAG()
# Load a simple dataframe of the sprinkler dataset.
bn.import_example()
# Compare 2 graphs
bn.compare_networks()
# Plot graph
bn.plot()
# To make the directed graph undirected
bn.to_undirected()
# Convert to one-hot datamatrix
bn.df2onehot()
# Derive the topological ordering of the (entire) graph
bn.topological_sort()
# See below for the exact workings of the functions:
- inference
- sampling
- comparing two networks
- loading bif files
- conversion of directed to undirected graphs
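The following is a minimal sketch of a few of these functions that are not covered by the detailed examples further below: loading a well-known .bif model, sampling from it, and converting the directed graph to an undirected one. The 'asia' model name and the sample size are only illustrative, and the assumption that bn.to_undirected accepts the adjacency matrix stored in the model dict may need checking against your installed version.
import bnlearn as bn
# Load one of the well-known example DAGs (shipped as a .bif model)
model = bn.import_DAG('asia')
# Plot the DAG
G = bn.plot(model)
# Sample observations from the DAG
df = bn.sampling(model, n=10000)
# Convert the directed graph to an undirected one
# (assumption: to_undirected operates on the adjacency matrix in model['adjmat'])
adjmat_undirected = bn.to_undirected(model['adjmat'])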
Learning a Bayesian network can be split into the following problems, which are all implemented in this package:
- Structure learning: Given the data, estimate a DAG that captures the dependencies between the variables. There are multiple methods to perform structure learning:
- Exhaustivesearch
- Hillclimbsearch
- NaiveBayes
- TreeSearch
- Chow-Liu
- Tree-augmented Naive Bayes (TAN)
- Parameter learning: Given the data and the DAG, estimate the (conditional) probability distributions of the individual variables.
- Inference: Given the learned model, determine the exact probability values for your queries (see the end-to-end sketch below).
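A minimal end-to-end sketch that chains these three steps on the sprinkler example (the function calls follow the detailed examples below; exact results depend on the learned structure):
import bnlearn as bn
# Load the sprinkler dataframe
df = bn.import_example()
# 1. Structure learning: estimate the DAG from the data
model = bn.structure_learning.fit(df)
# 2. Parameter learning: estimate the CPDs given the data and the DAG
model = bn.parameter_learning.fit(model, df)
# 3. Inference: query the learned model
query = bn.inference.fit(model, variables=['Wet_Grass'], evidence={'Rain': 1})
print(query.df)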
A structured overview of all examples is now available on the documentation pages.
- Example: Learn the structure on the Sprinkler dataset based on a simple dataframe
- Example: Comparison of methods and scoring types for structure learning
import bnlearn as bn
# Example dataframe sprinkler_data.csv can be loaded with:
df = bn.import_example()
# df = pd.read_csv('sprinkler_data.csv')
Cloudy Sprinkler Rain Wet_Grass
0 0 1 0 1
1 1 1 1 1
2 1 0 1 1
3 0 0 1 1
4 1 0 1 1
.. ... ... ... ...
995 0 0 0 0
996 1 0 0 0
997 0 0 1 0
998 1 1 0 1
999 1 0 1 1
model = bn.structure_learning.fit(df)
# Compute edge strength with the chi_square test statistic
model = bn.independence_test(model, df)
G = bn.plot(model)
- Choosing various methodtypes and scoretypes:
model_hc_bic = bn.structure_learning.fit(df, methodtype='hc', scoretype='bic')
model_hc_k2 = bn.structure_learning.fit(df, methodtype='hc', scoretype='k2')
model_hc_bdeu = bn.structure_learning.fit(df, methodtype='hc', scoretype='bdeu')
model_ex_bic = bn.structure_learning.fit(df, methodtype='ex', scoretype='bic')
model_ex_k2 = bn.structure_learning.fit(df, methodtype='ex', scoretype='k2')
model_ex_bdeu = bn.structure_learning.fit(df, methodtype='ex', scoretype='bdeu')
model_cl = bn.structure_learning.fit(df, methodtype='cl', root_node='Wet_Grass')
model_tan = bn.structure_learning.fit(df, methodtype='tan', root_node='Wet_Grass', class_node='Rain')
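The resulting structures can then be compared pairwise with bn.compare_networks (listed in the overview above); a short sketch, where the choice of the two models is arbitrary:
# Compare two of the learned structures
bn.compare_networks(model_hc_bic, model_ex_bic)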
import bnlearn as bn
# Import dataframe
df = bn.import_example()
# As an example, we set CPD=False, which returns an "empty" DAG without parameters
model = bn.import_DAG('sprinkler', CPD=False)
# Now we learn the parameters of the DAG using the df
model_update = bn.parameter_learning.fit(model, df)
# Make plot
G = bn.plot(model_update)
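To inspect the learned parameters, the CPDs of the updated model can be printed; a short sketch, assuming bn.print_CPD is available in your installed version:
# Print the conditional probability distributions that were learned
bn.print_CPD(model_update)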
import bnlearn as bn
model = bn.import_DAG('sprinkler')
query = bn.inference.fit(model, variables=['Rain'], evidence={'Cloudy':1,'Sprinkler':0, 'Wet_Grass':1})
print(query)
print(query.df)
# Let's try another inference
query = bn.inference.fit(model, variables=['Rain'], evidence={'Cloudy':1})
print(query)
print(query.df)
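The model can also be used with bn.predict (listed in the overview above) to predict a variable for each row of a dataframe; a minimal sketch where the input data is generated with bn.sampling, assuming bn.predict uses the remaining columns as evidence:
# Generate observations from the model
df = bn.sampling(model, n=1000)
# Predict the 'Rain' variable for every row, using the other columns as evidence
Pout = bn.predict(model, df, variables=['Rain'])
print(Pout)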
- https://erdogant.github.io/bnlearn/
- http://pgmpy.org
- https://programtalk.com/python-examples/pgmpy.factors.discrete.TabularCPD/
- http://www.bnlearn.com/bnrepository/
- All kinds of contributions are welcome!
Please cite bnlearn in your publications if it is useful for your research. See the right column for citation information.