aGrUM  0.14.2
gum::credal::InferenceEngine< GUM_SCALAR > Class Template Reference [abstract]

Abstract class template representing a CredalNet inference engine. More...

#include <agrum/CN/inferenceEngine.h>

+ Inheritance diagram for gum::credal::InferenceEngine< GUM_SCALAR >:
+ Collaboration diagram for gum::credal::InferenceEngine< GUM_SCALAR >:

Public Attributes

Signaler3< Size, double, double > onProgress
 Progression, error and time. More...
 
Signaler1< std::string > onStop
 Criteria message (see messageApproximationScheme). More...
 

Public Member Functions

Constructors / Destructors
 InferenceEngine (const CredalNet< GUM_SCALAR > &credalNet)
 Constructor. More...
 
virtual ~InferenceEngine ()
 Destructor. More...
 
Pure virtual methods
virtual void makeInference ()=0
 To be redefined by each credal net algorithm. More...
 
Getters and setters
VarMod2BNsMap< GUM_SCALAR > * getVarMod2BNsMap ()
 Get optimum IBayesNet. More...
 
const CredalNet< GUM_SCALAR > & credalNet ()
 Get this credal network. More...
 
const NodeProperty< std::vector< NodeId > > & getT0Cluster () const
 Get the _t0 cluster. More...
 
const NodeProperty< std::vector< NodeId > > & getT1Cluster () const
 Get the _t1 cluster. More...
 
void setRepetitiveInd (const bool repetitive)
 
void storeVertices (const bool value)
 
void storeBNOpt (const bool value)
 
bool repetitiveInd () const
 Get the current independence status. More...
 
bool storeVertices () const
 Returns true if credal set vertices are stored during inference, false otherwise. More...
 
bool storeBNOpt () const
 
Pre-inference initialization methods
void insertModalsFile (const std::string &path)
 Insert variables modalities from file to compute expectations. More...
 
void insertModals (const std::map< std::string, std::vector< GUM_SCALAR > > &modals)
 Insert variables modalities from map to compute expectations. More...
 
virtual void insertEvidenceFile (const std::string &path)
 Insert evidence from file. More...
 
void insertEvidence (const std::map< std::string, std::vector< GUM_SCALAR > > &eviMap)
 Insert evidence from map. More...
 
void insertEvidence (const NodeProperty< std::vector< GUM_SCALAR > > &evidence)
 Insert evidence from Property. More...
 
void insertQueryFile (const std::string &path)
 Insert query variables states from file. More...
 
void insertQuery (const NodeProperty< std::vector< bool > > &query)
 Insert query variables and states from Property. More...
 
Post-inference methods
virtual void eraseAllEvidence ()
 Erase all inference-related data so that another inference can be performed. More...
 
const std::vector< GUM_SCALAR > & marginalMin (const NodeId id) const
 Get the lower marginals of a given node id. More...
 
const std::vector< GUM_SCALAR > & marginalMax (const NodeId id) const
 Get the upper marginals of a given node id. More...
 
const std::vector< GUM_SCALAR > & marginalMin (const std::string &varName) const
 Get the lower marginals of a given variable name. More...
 
const std::vector< GUM_SCALAR > & marginalMax (const std::string &varName) const
 Get the upper marginals of a given variable name. More...
 
const GUM_SCALAR & expectationMin (const NodeId id) const
 Get the lower expectation of a given node id. More...
 
const GUM_SCALAR & expectationMax (const NodeId id) const
 Get the upper expectation of a given node id. More...
 
const GUM_SCALAR & expectationMin (const std::string &varName) const
 Get the lower expectation of a given variable name. More...
 
const GUM_SCALAR & expectationMax (const std::string &varName) const
 Get the upper expectation of a given variable name. More...
 
const std::vector< GUM_SCALAR > & dynamicExpMin (const std::string &varName) const
 Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. More...
 
const std::vector< GUM_SCALAR > & dynamicExpMax (const std::string &varName) const
 Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. More...
 
const std::vector< std::vector< GUM_SCALAR > > & vertices (const NodeId id) const
 Get the vertices of a given node id. More...
 
void saveMarginals (const std::string &path) const
 Saves marginals to file. More...
 
void saveExpectations (const std::string &path) const
 Saves expectations to file. More...
 
void saveVertices (const std::string &path) const
 Saves vertices to file. More...
 
void dynamicExpectations ()
 Compute dynamic expectations. More...
 
std::string toString () const
 Print all node marginals to standard output. More...
 
const std::string getApproximationSchemeMsg ()
 Get approximation scheme state. More...
 
Getters and setters
void setEpsilon (double eps)
 Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|. More...
 
double epsilon () const
 Returns the value of epsilon. More...
 
void disableEpsilon ()
 Disable stopping criterion on epsilon. More...
 
void enableEpsilon ()
 Enable stopping criterion on epsilon. More...
 
bool isEnabledEpsilon () const
 Returns true if stopping criterion on epsilon is enabled, false otherwise. More...
 
void setMinEpsilonRate (double rate)
 Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|). More...
 
double minEpsilonRate () const
 Returns the value of the minimal epsilon rate. More...
 
void disableMinEpsilonRate ()
 Disable stopping criterion on epsilon rate. More...
 
void enableMinEpsilonRate ()
 Enable stopping criterion on epsilon rate. More...
 
bool isEnabledMinEpsilonRate () const
 Returns true if stopping criterion on epsilon rate is enabled, false otherwise. More...
 
void setMaxIter (Size max)
 Stopping criterion on number of iterations. More...
 
Size maxIter () const
 Returns the criterion on number of iterations. More...
 
void disableMaxIter ()
 Disable stopping criterion on max iterations. More...
 
void enableMaxIter ()
 Enable stopping criterion on max iterations. More...
 
bool isEnabledMaxIter () const
 Returns true if stopping criterion on max iterations is enabled, false otherwise. More...
 
void setMaxTime (double timeout)
 Stopping criterion on timeout. More...
 
double maxTime () const
 Returns the timeout (in seconds). More...
 
double currentTime () const
 Returns the current running time in seconds. More...
 
void disableMaxTime ()
 Disable stopping criterion on timeout. More...
 
void enableMaxTime ()
 Enable stopping criterion on timeout. More...
 
bool isEnabledMaxTime () const
 Returns true if stopping criterion on timeout is enabled, false otherwise. More...
 
void setPeriodSize (Size p)
 Number of samples between two tests of the stopping criteria. More...
 
Size periodSize () const
 Returns the period size. More...
 
void setVerbosity (bool v)
 Set the verbosity on (true) or off (false). More...
 
bool verbosity () const
 Returns true if verbosity is enabled. More...
 
ApproximationSchemeSTATE stateApproximationScheme () const
 Returns the approximation scheme state. More...
 
Size nbrIterations () const
 Returns the number of iterations. More...
 
const std::vector< double > & history () const
 Returns the scheme history. More...
 
void initApproximationScheme ()
 Initialise the scheme. More...
 
bool startOfPeriod ()
 Returns true if we are at the beginning of a period (compute error is mandatory). More...
 
void updateApproximationScheme (unsigned int incr=1)
 Update the scheme w.r.t. the new error and increment steps. More...
 
Size remainingBurnIn ()
 Returns the remaining burn-in. More...
 
void stopApproximationScheme ()
 Stop the approximation scheme. More...
 
bool continueApproximationScheme (double error)
 Update the scheme w.r.t. the new error. More...
 
Getters and setters
std::string messageApproximationScheme () const
 Returns the approximation scheme message. More...
 

Public Types

enum  ApproximationSchemeSTATE : char {
  ApproximationSchemeSTATE::Undefined, ApproximationSchemeSTATE::Continue, ApproximationSchemeSTATE::Epsilon, ApproximationSchemeSTATE::Rate,
  ApproximationSchemeSTATE::Limit, ApproximationSchemeSTATE::TimeLimit, ApproximationSchemeSTATE::Stopped
}
 The different states of an approximation scheme. More...
 

Protected Attributes

const CredalNet< GUM_SCALAR > * _credalNet
 A pointer to the Credal Net used. More...
 
margi _oldMarginalMin
 Old lower marginals used to compute epsilon. More...
 
margi _oldMarginalMax
 Old upper marginals used to compute epsilon. More...
 
margi _marginalMin
 Lower marginals. More...
 
margi _marginalMax
 Upper marginals. More...
 
credalSet _marginalSets
 Credal sets vertices, if enabled. More...
 
expe _expectationMin
 Lower expectations, if some variables modalities were inserted. More...
 
expe _expectationMax
 Upper expectations, if some variables modalities were inserted. More...
 
dynExpe _dynamicExpMin
 Lower dynamic expectations. More...
 
dynExpe _dynamicExpMax
 Upper dynamic expectations. More...
 
dynExpe _modal
 Variables modalities used to compute expectations. More...
 
margi _evidence
 Holds observed variables states. More...
 
query _query
 Holds the query nodes states. More...
 
cluster _t0
 Clusters of nodes used with dynamic networks. More...
 
cluster _t1
 Clusters of nodes used with dynamic networks. More...
 
bool _storeVertices
 True if credal sets vertices are stored, False otherwise. More...
 
bool _repetitiveInd
 True if using repetitive independence ( dynamic network only ), False otherwise. More...
 
bool _storeBNOpt
 True if optimal IBayesNets are stored during inference (used by some algorithms such as CNMonteCarloSampling), False otherwise. More...
 
VarMod2BNsMap< GUM_SCALAR > _dbnOpt
 Object used to efficiently store optimal Bayes nets during inference, for some algorithms. More...
 
int _timeSteps
 The number of time steps of this network (only useful for dynamic networks). More...
 
double _current_epsilon
 Current epsilon. More...
 
double _last_epsilon
 Last epsilon value. More...
 
double _current_rate
 Current rate. More...
 
Size _current_step
 The current step. More...
 
Timer _timer
 The timer. More...
 
ApproximationSchemeSTATE _current_state
 The current state. More...
 
std::vector< double > _history
 The scheme history, used only if verbosity == true. More...
 
double _eps
 Threshold for convergence. More...
 
bool _enabled_eps
 If true, the convergence threshold is enabled. More...
 
double _min_rate_eps
 Threshold for the epsilon rate. More...
 
bool _enabled_min_rate_eps
 If true, the minimal threshold for epsilon rate is enabled. More...
 
double _max_time
 The timeout. More...
 
bool _enabled_max_time
 If true, the timeout is enabled. More...
 
Size _max_iter
 The maximum iterations. More...
 
bool _enabled_max_iter
 If true, the maximum iterations stopping criterion is enabled. More...
 
Size _burn_in
 Number of iterations before checking stopping criteria. More...
 
Size _period_size
 Frequency at which the stopping criteria are checked. More...
 
bool _verbosity
 If true, verbosity is enabled. More...
 

Protected Member Functions

Protected initialization methods
void _repetitiveInit ()
 Initialize _t0 and _t1 clusters. More...
 
void _initExpectations ()
 Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality. More...
 
void _initMarginals ()
 Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0. More...
 
void _initMarginalSets ()
 Initialize credal set vertices with empty sets. More...
 
Protected algorithms methods
const GUM_SCALAR _computeEpsilon ()
 Compute approximation scheme epsilon using the old marginals and the new ones. More...
 
void _updateExpectations (const NodeId &id, const std::vector< GUM_SCALAR > &vertex)
 Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations. More...
 
void _updateCredalSets (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Given a node id and one of its possible vertices, update its credal set. More...
 
Protected post-inference methods
void _dynamicExpectations ()
 Rearrange lower and upper expectations to suit dynamic networks. More...
 

Detailed Description

template<typename GUM_SCALAR>
class gum::credal::InferenceEngine< GUM_SCALAR >

Abstract class template representing a CredalNet inference engine.

Used by credal network inference algorithms such as CNLoopyPropagation (inner multi-threading) or CNMonteCarloSampling (outer multi-threading).

Template Parameters
GUM_SCALAR   A floating-point type (float, double, long double, ...).
Author
Matthieu HOURBRACQ and Pierre-Henri WUILLEMIN

Definition at line 57 of file inferenceEngine.h.
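
A minimal usage sketch of a concrete engine. The CredalNet construction from two BIF files, the intervalToCredal() call, the CNMonteCarloSampling template arguments, the include paths and all file and variable names are assumptions for illustration, not an exact recipe; they may differ between aGrUM versions.

  #include <iostream>
  #include <agrum/CN/credalNet.h>
  #include <agrum/CN/CNMonteCarloSampling.h>
  #include <agrum/BN/inference/lazyPropagation.h>

  int main() {
    // hypothetical BIF files holding the lower and upper Bayes nets of the credal net
    gum::credal::CredalNet< double > cn("bn_min.bif", "bn_max.bif");
    cn.intervalToCredal();   // build the credal sets from the interval specification

    // any concrete InferenceEngine subclass is driven the same way
    gum::credal::CNMonteCarloSampling< double, gum::LazyPropagation< double > > engine(cn);

    engine.setMaxTime(10);                        // stopping criteria come from ApproximationScheme
    engine.insertEvidenceFile("evidence.evi");    // hypothetical evidence file
    engine.makeInference();

    const auto& low  = engine.marginalMin("temp_0");   // lower marginal of variable "temp_0"
    const auto& high = engine.marginalMax("temp_0");   // upper marginal of the same variable
    std::cout << low[0] << " <= P(temp_0 = 0) <= " << high[0] << std::endl;
    return 0;
  }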

Member Typedef Documentation

◆ cluster

template<typename GUM_SCALAR >
using gum::credal::InferenceEngine< GUM_SCALAR >::cluster = NodeProperty< std::vector< NodeId > >
private

Definition at line 68 of file inferenceEngine.h.

◆ credalSet

template<typename GUM_SCALAR >
using gum::credal::InferenceEngine< GUM_SCALAR >::credalSet = NodeProperty< std::vector< std::vector< GUM_SCALAR > > >
private

Definition at line 60 of file inferenceEngine.h.

◆ dynExpe

template<typename GUM_SCALAR >
using gum::credal::InferenceEngine< GUM_SCALAR >::dynExpe = typename gum::HashTable< std::string, std::vector< GUM_SCALAR > >
private

Definition at line 65 of file inferenceEngine.h.

◆ expe

template<typename GUM_SCALAR >
using gum::credal::InferenceEngine< GUM_SCALAR >::expe = NodeProperty< GUM_SCALAR >
private

Definition at line 62 of file inferenceEngine.h.

◆ margi

template<typename GUM_SCALAR >
using gum::credal::InferenceEngine< GUM_SCALAR >::margi = NodeProperty< std::vector< GUM_SCALAR > >
private

Definition at line 61 of file inferenceEngine.h.

◆ query

template<typename GUM_SCALAR >
using gum::credal::InferenceEngine< GUM_SCALAR >::query = NodeProperty< std::vector< bool > >
private

Definition at line 67 of file inferenceEngine.h.

Member Enumeration Documentation

◆ ApproximationSchemeSTATE

The different states of an approximation scheme.

Enumerator
Undefined 
Continue 
Epsilon 
Rate 
Limit 
TimeLimit 
Stopped 

Definition at line 63 of file IApproximationSchemeConfiguration.h.

63  : char {
64  Undefined,
65  Continue,
66  Epsilon,
67  Rate,
68  Limit,
69  TimeLimit,
70  Stopped
71  };

Constructor & Destructor Documentation

◆ InferenceEngine()

template<typename GUM_SCALAR >
gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine ( const CredalNet< GUM_SCALAR > &  credalNet)
explicit

Constructor.

Parameters
credalNet   The credal net to be used with this inference engine.

Definition at line 38 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_dbnOpt, gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginals(), and gum::credal::InferenceEngine< GUM_SCALAR >::credalNet().

39  :
42 
43  _dbnOpt.setCNet(credalNet);
44 
46 
47  GUM_CONSTRUCTOR(InferenceEngine);
48  }
+ Here is the call graph for this function:

◆ ~InferenceEngine()

template<typename GUM_SCALAR >
gum::credal::InferenceEngine< GUM_SCALAR >::~InferenceEngine ( )
virtual

Destructor.

Definition at line 51 of file inferenceEngine_tpl.h.

51  {
52  GUM_DESTRUCTOR(InferenceEngine);
53  }

Member Function Documentation

◆ _computeEpsilon()

template<typename GUM_SCALAR >
const GUM_SCALAR gum::credal::InferenceEngine< GUM_SCALAR >::_computeEpsilon ( )
inlineprotected

Compute approximation scheme epsilon using the old marginals and the new ones.

Highest delta on either lower or upper marginal is epsilon.

Also updates oldMarginals to current marginals.

Returns
Epsilon.

Definition at line 1013 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMin, and gum::HashTable< Key, Val, Alloc >::size().

1013  {
1014  GUM_SCALAR eps = 0;
1015 #pragma omp parallel
1016  {
1017  GUM_SCALAR tEps = 0;
1018  GUM_SCALAR delta;
1019 
1021  int nsize = int(_marginalMin.size());
1022 
1023 #pragma omp for
1024 
1025  for (int i = 0; i < nsize; i++) {
1026  auto dSize = _marginalMin[i].size();
1027 
1028  for (Size j = 0; j < dSize; j++) {
1029  // on min
1030  delta = _marginalMin[i][j] - _oldMarginalMin[i][j];
1031  delta = (delta < 0) ? (-delta) : delta;
1032  tEps = (tEps < delta) ? delta : tEps;
1033 
1034  // on max
1035  delta = _marginalMax[i][j] - _oldMarginalMax[i][j];
1036  delta = (delta < 0) ? (-delta) : delta;
1037  tEps = (tEps < delta) ? delta : tEps;
1038 
1039  _oldMarginalMin[i][j] = _marginalMin[i][j];
1040  _oldMarginalMax[i][j] = _marginalMax[i][j];
1041  }
1042  } // end of : all variables
1043 
1044 #pragma omp critical(epsilon_max)
1045  {
1046 #pragma omp flush(eps)
1047  eps = (eps < tEps) ? tEps : eps;
1048  }
1049  }
1050 
1051  return eps;
1052  }
+ Here is the call graph for this function:
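
For instance (made-up numbers), for a single binary node the value computed here would be:

  // old bounds :  min = {0.20, 0.55}   max = {0.45, 0.80}
  // new bounds :  min = {0.22, 0.55}   max = {0.45, 0.78}
  // deltas     :  0.02, 0.00, 0.00, 0.02
  // epsilon    :  0.02  (the largest delta over all nodes and modalities is returned)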

◆ _dynamicExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations ( )
protected

Rearrange lower and upper expectations to suit dynamic networks.

Definition at line 718 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, gum::credal::InferenceEngine< GUM_SCALAR >::_modal, and gum::HashTable< Key, Val, Alloc >::empty().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

718  {
719  // no modals, no expectations computed during inference
720  if (_expectationMin.empty() || _modal.empty()) return;
721 
722  // already called by the algorithm or the user
723  if (_dynamicExpMax.size() > 0 && _dynamicExpMin.size() > 0) return;
724 
725  // typedef typename std::map< int, GUM_SCALAR > innerMap;
726  using innerMap = typename gum::HashTable< int, GUM_SCALAR >;
727 
728  // typedef typename std::map< std::string, innerMap > outerMap;
729  using outerMap = typename gum::HashTable< std::string, innerMap >;
730 
731  // typedef typename std::map< std::string, std::vector< GUM_SCALAR > >
732  // mod;
733 
734  // si non dynamique, sauver directement _expectationMin et Max (revient au
735  // meme
736  // mais plus rapide)
737  outerMap expectationsMin, expectationsMax;
738 
739  for (const auto& elt : _expectationMin) {
740  std::string var_name, time_step;
741 
742  var_name = _credalNet->current_bn().variable(elt.first).name();
743  auto delim = var_name.find_first_of("_");
744  time_step = var_name.substr(delim + 1, var_name.size());
745  var_name = var_name.substr(0, delim);
746 
747  // to be sure (don't store not monitored variables' expectations)
748  // although it
749  // should be taken care of before this point
750  if (!_modal.exists(var_name)) continue;
751 
752  expectationsMin.getWithDefault(var_name, innerMap())
753  .getWithDefault(atoi(time_step.c_str()), 0) =
754  elt.second; // we iterate with min iterators
755  expectationsMax.getWithDefault(var_name, innerMap())
756  .getWithDefault(atoi(time_step.c_str()), 0) =
757  _expectationMax[elt.first];
758  }
759 
760  for (const auto& elt : expectationsMin) {
761  typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
762 
763  for (const auto& elt2 : elt.second)
764  dynExp[elt2.first] = elt2.second;
765 
766  _dynamicExpMin.insert(elt.first, dynExp);
767  }
768 
769  for (const auto& elt : expectationsMax) {
770  typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
771 
772  for (const auto& elt2 : elt.second) {
773  dynExp[elt2.first] = elt2.second;
774  }
775 
776  _dynamicExpMax.insert(elt.first, dynExp);
777  }
778  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _initExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations ( )
protected

Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality.

Definition at line 692 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, gum::credal::InferenceEngine< GUM_SCALAR >::_modal, gum::HashTable< Key, Val, Alloc >::clear(), and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), gum::credal::InferenceEngine< GUM_SCALAR >::insertModals(), and gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile().

692  {
695 
696  if (_modal.empty()) return;
697 
698  for (auto node : _credalNet->current_bn().nodes()) {
699  std::string var_name, time_step;
700 
701  var_name = _credalNet->current_bn().variable(node).name();
702  auto delim = var_name.find_first_of("_");
703  var_name = var_name.substr(0, delim);
704 
705  if (!_modal.exists(var_name)) continue;
706 
707  _expectationMin.insert(node, _modal[var_name].back());
708  _expectationMax.insert(node, _modal[var_name].front());
709  }
710  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _initMarginals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginals ( )
protected

Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0.

Definition at line 660 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMin, gum::HashTable< Key, Val, Alloc >::clear(), and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), and gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine().

660  {
665 
666  for (auto node : _credalNet->current_bn().nodes()) {
667  auto dSize = _credalNet->current_bn().variable(node).domainSize();
668  _marginalMin.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
669  _oldMarginalMin.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
670 
671  _marginalMax.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
672  _oldMarginalMax.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
673  }
674  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _initMarginalSets()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets ( )
protected

Initialize credal set vertices with empty sets.

Definition at line 677 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::HashTable< Key, Val, Alloc >::clear(), and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), and gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices().

677  {
679 
680  if (!_storeVertices) return;
681 
682  for (auto node : _credalNet->current_bn().nodes())
683  _marginalSets.insert(node, std::vector< std::vector< GUM_SCALAR > >());
684  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _repetitiveInit()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit ( )
protected

Initialize _t0 and _t1 clusters.

Definition at line 781 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_t0, gum::credal::InferenceEngine< GUM_SCALAR >::_t1, gum::credal::InferenceEngine< GUM_SCALAR >::_timeSteps, gum::HashTable< Key, Val, Alloc >::clear(), GUM_ERROR, and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference(), and gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd().

781  {
782  _timeSteps = 0;
783  _t0.clear();
784  _t1.clear();
785 
786  // t = 0 vars belongs to _t0 as keys
787  for (auto node : _credalNet->current_bn().dag().nodes()) {
788  std::string var_name = _credalNet->current_bn().variable(node).name();
789  auto delim = var_name.find_first_of("_");
790 
791  if (delim > var_name.size()) {
792  GUM_ERROR(InvalidArgument,
793  "void InferenceEngine< GUM_SCALAR "
794  ">::_repetitiveInit() : the network does not "
795  "appear to be dynamic");
796  }
797 
798  std::string time_step = var_name.substr(delim + 1, 1);
799 
800  if (time_step.compare("0") == 0) _t0.insert(node, std::vector< NodeId >());
801  }
802 
803  // t = 1 vars belongs to either _t0 as member value or _t1 as keys
804  for (const auto& node : _credalNet->current_bn().dag().nodes()) {
805  std::string var_name = _credalNet->current_bn().variable(node).name();
806  auto delim = var_name.find_first_of("_");
807  std::string time_step = var_name.substr(delim + 1, var_name.size());
808  var_name = var_name.substr(0, delim);
809  delim = time_step.find_first_of("_");
810  time_step = time_step.substr(0, delim);
811 
812  if (time_step.compare("1") == 0) {
813  bool found = false;
814 
815  for (const auto& elt : _t0) {
816  std::string var_0_name =
817  _credalNet->current_bn().variable(elt.first).name();
818  delim = var_0_name.find_first_of("_");
819  var_0_name = var_0_name.substr(0, delim);
820 
821  if (var_name.compare(var_0_name) == 0) {
822  const Potential< GUM_SCALAR >* potential(
823  &_credalNet->current_bn().cpt(node));
824  const Potential< GUM_SCALAR >* potential2(
825  &_credalNet->current_bn().cpt(elt.first));
826 
827  if (potential->domainSize() == potential2->domainSize())
828  _t0[elt.first].push_back(node);
829  else
830  _t1.insert(node, std::vector< NodeId >());
831 
832  found = true;
833  break;
834  }
835  }
836 
837  if (!found) { _t1.insert(node, std::vector< NodeId >()); }
838  }
839  }
840 
841  // t > 1 vars belongs to either _t0 or _t1 as member value
842  // remember _timeSteps
843  for (auto node : _credalNet->current_bn().dag().nodes()) {
844  std::string var_name = _credalNet->current_bn().variable(node).name();
845  auto delim = var_name.find_first_of("_");
846  std::string time_step = var_name.substr(delim + 1, var_name.size());
847  var_name = var_name.substr(0, delim);
848  delim = time_step.find_first_of("_");
849  time_step = time_step.substr(0, delim);
850 
851  if (time_step.compare("0") != 0 && time_step.compare("1") != 0) {
852  // keep max time_step
853  if (atoi(time_step.c_str()) > _timeSteps)
854  _timeSteps = atoi(time_step.c_str());
855 
856  std::string var_0_name;
857  bool found = false;
858 
859  for (const auto& elt : _t0) {
860  std::string var_0_name =
861  _credalNet->current_bn().variable(elt.first).name();
862  delim = var_0_name.find_first_of("_");
863  var_0_name = var_0_name.substr(0, delim);
864 
865  if (var_name.compare(var_0_name) == 0) {
866  const Potential< GUM_SCALAR >* potential(
867  &_credalNet->current_bn().cpt(node));
868  const Potential< GUM_SCALAR >* potential2(
869  &_credalNet->current_bn().cpt(elt.first));
870 
871  if (potential->domainSize() == potential2->domainSize()) {
872  _t0[elt.first].push_back(node);
873  found = true;
874  break;
875  }
876  }
877  }
878 
879  if (!found) {
880  for (const auto& elt : _t1) {
881  std::string var_0_name =
882  _credalNet->current_bn().variable(elt.first).name();
883  auto delim = var_0_name.find_first_of("_");
884  var_0_name = var_0_name.substr(0, delim);
885 
886  if (var_name.compare(var_0_name) == 0) {
887  const Potential< GUM_SCALAR >* potential(
888  &_credalNet->current_bn().cpt(node));
889  const Potential< GUM_SCALAR >* potential2(
890  &_credalNet->current_bn().cpt(elt.first));
891 
892  if (potential->domainSize() == potential2->domainSize()) {
893  _t1[elt.first].push_back(node);
894  break;
895  }
896  }
897  }
898  }
899  }
900  }
901  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:
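
As the exception message above suggests, this initialization relies on the dynamic-network naming convention prefix_timestep (e.g. "temp_0", "temp_1", ..., "temp_T"). A hedged sketch of the user-side calls that end up triggering it, reusing the engine object from the sketch in the Detailed Description:

  engine.setRepetitiveInd(true);            // triggers _repetitiveInit() on a dynamic network

  const auto& t0 = engine.getT0Cluster();   // t = 0 nodes and the later nodes they cover
  const auto& t1 = engine.getT1Cluster();   // t = 1 nodes that do not match a t = 0 cluster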

◆ _updateCredalSets()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_updateCredalSets ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex,
const bool elimRedund = false 
)
inlineprotected

Given a node id and one of its possible vertices, update its credal set.

To maximise efficiency, do not pass a vertex known to be inside the polytope (i.e. not at an extreme value for any modality).

Parameters
id   The id of the node to be updated.
vertex   A (potential) vertex of the node's credal set.
elimRedund   Remove redundant vertices (i.e. vertices lying inside a facet).

Definition at line 925 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, gum::HashTable< Key, Val, Alloc >::cbegin(), gum::HashTable< Key, Val, Alloc >::cend(), gum::credal::LRSWrapper< GUM_SCALAR >::elimRedundVrep(), gum::credal::LRSWrapper< GUM_SCALAR >::fillV(), gum::credal::LRSWrapper< GUM_SCALAR >::getOutput(), and gum::credal::LRSWrapper< GUM_SCALAR >::setUpV().

Referenced by gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_verticesFusion().

928  {
929  auto& nodeCredalSet = _marginalSets[id];
930  auto dsize = vertex.size();
931 
932  bool eq = true;
933 
934  for (auto it = nodeCredalSet.cbegin(), itEnd = nodeCredalSet.cend();
935  it != itEnd;
936  ++it) {
937  eq = true;
938 
939  for (Size i = 0; i < dsize; i++) {
940  if (std::fabs(vertex[i] - (*it)[i]) > 1e-6) {
941  eq = false;
942  break;
943  }
944  }
945 
946  if (eq) break;
947  }
948 
949  if (!eq || nodeCredalSet.size() == 0) {
950  nodeCredalSet.push_back(vertex);
951  return;
952  } else
953  return;
954 
955  // because of next lambda return condition
956  if (nodeCredalSet.size() == 1) return;
957 
958  // check that the point and all previously added ones are not inside the
959  // actual
960  // polytope
961  auto itEnd = std::remove_if(
962  nodeCredalSet.begin(),
963  nodeCredalSet.end(),
964  [&](const std::vector< GUM_SCALAR >& v) -> bool {
965  for (auto jt = v.cbegin(),
966  jtEnd = v.cend(),
967  minIt = _marginalMin[id].cbegin(),
968  minItEnd = _marginalMin[id].cend(),
969  maxIt = _marginalMax[id].cbegin(),
970  maxItEnd = _marginalMax[id].cend();
971  jt != jtEnd && minIt != minItEnd && maxIt != maxItEnd;
972  ++jt, ++minIt, ++maxIt) {
973  if ((std::fabs(*jt - *minIt) < 1e-6 || std::fabs(*jt - *maxIt) < 1e-6)
974  && std::fabs(*minIt - *maxIt) > 1e-6)
975  return false;
976  }
977  return true;
978  });
979 
980  nodeCredalSet.erase(itEnd, nodeCredalSet.end());
981 
982  // we need at least 2 points to make a convex combination
983  if (!elimRedund || nodeCredalSet.size() <= 2) return;
984 
985  // there may be points not inside the polytope but on one of it's facet,
986  // meaning it's still a convex combination of vertices of this facet. Here
987  // we
988  // need lrs.
989  LRSWrapper< GUM_SCALAR > lrsWrapper;
990  lrsWrapper.setUpV((unsigned int)dsize, (unsigned int)(nodeCredalSet.size()));
991 
992  for (const auto& vtx : nodeCredalSet)
993  lrsWrapper.fillV(vtx);
994 
995  lrsWrapper.elimRedundVrep();
996 
997  _marginalSets[id] = lrsWrapper.getOutput();
998  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:
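
On the user side, the vertices accumulated by this method are only available when vertex storage has been requested. A hedged sketch, reusing the cn and engine objects from the earlier sketch (the variable name is hypothetical, and <iostream> is assumed to be included):

  engine.storeVertices(true);               // vertices will be collected through _updateCredalSets
  engine.makeInference();

  gum::NodeId node = cn.current_bn().idFromName("temp_0");   // hypothetical variable
  for (const auto& vtx : engine.vertices(node)) {
    // each stored vertex is one probability distribution over the modalities of the node
    for (double p : vtx) std::cout << p << " ";
    std::cout << std::endl;
  }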

◆ _updateExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_updateExpectations ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex 
)
inlineprotected

Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations.

Parameters
id   The id of the node to be updated.
vertex   A (potential) vertex of the node's credal set.

Definition at line 904 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, and gum::credal::InferenceEngine< GUM_SCALAR >::_modal.

905  {
906  std::string var_name = _credalNet->current_bn().variable(id).name();
907  auto delim = var_name.find_first_of("_");
908 
909  var_name = var_name.substr(0, delim);
910 
911  if (_modal.exists(var_name) /*_modal.find(var_name) != _modal.end()*/) {
912  GUM_SCALAR exp = 0;
913  auto vsize = vertex.size();
914 
915  for (Size mod = 0; mod < vsize; mod++)
916  exp += vertex[mod] * _modal[var_name][mod];
917 
918  if (exp > _expectationMax[id]) _expectationMax[id] = exp;
919 
920  if (exp < _expectationMin[id]) _expectationMin[id] = exp;
921  }
922  }
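
The update performed here is the dot product between the vertex and the user-supplied modalities, kept only if it improves the current bounds. A small self-contained illustration in plain C++ (numbers are made up):

  #include <cstddef>
  #include <vector>

  double expectationOfVertex(const std::vector< double >& vertex,
                             const std::vector< double >& modal) {
    // E = sum_i vertex[i] * modal[i], exactly the loop above
    double e = 0.0;
    for (std::size_t i = 0; i < vertex.size(); ++i) e += vertex[i] * modal[i];
    return e;
  }

  // with modal = {0, 1, 2} and vertex = {0.2, 0.5, 0.3}, E = 1.1 ;
  // the engine then lowers _expectationMin[id] or raises _expectationMax[id] accordingly.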

◆ continueApproximationScheme()

INLINE bool gum::ApproximationScheme::continueApproximationScheme ( double  error)
inherited

Update the scheme w.r.t. the new error.

Tests the stopping criteria that are enabled.

Parameters
error   The new error value.
Returns
false if the state becomes different from ApproximationSchemeSTATE::Continue.
Exceptions
OperationNotAllowed   Raised if state != ApproximationSchemeSTATE::Continue.

Definition at line 225 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_epsilon, gum::ApproximationScheme::_current_rate, gum::ApproximationScheme::_current_state, gum::ApproximationScheme::_current_step, gum::ApproximationScheme::_enabled_eps, gum::ApproximationScheme::_enabled_max_iter, gum::ApproximationScheme::_enabled_max_time, gum::ApproximationScheme::_enabled_min_rate_eps, gum::ApproximationScheme::_eps, gum::ApproximationScheme::_history, gum::ApproximationScheme::_last_epsilon, gum::ApproximationScheme::_max_iter, gum::ApproximationScheme::_max_time, gum::ApproximationScheme::_min_rate_eps, gum::ApproximationScheme::_stopScheme(), gum::ApproximationScheme::_timer, gum::IApproximationSchemeConfiguration::Continue, gum::IApproximationSchemeConfiguration::Epsilon, GUM_EMIT3, GUM_ERROR, gum::IApproximationSchemeConfiguration::Limit, gum::IApproximationSchemeConfiguration::messageApproximationScheme(), gum::IApproximationSchemeConfiguration::onProgress, gum::IApproximationSchemeConfiguration::Rate, gum::ApproximationScheme::startOfPeriod(), gum::ApproximationScheme::stateApproximationScheme(), gum::Timer::step(), gum::IApproximationSchemeConfiguration::TimeLimit, and gum::ApproximationScheme::verbosity().

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::SamplingInference< GUM_SCALAR >::_loopApproxInference(), gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

225  {
226  // For coherence, we fix the time used in the method
227 
228  double timer_step = _timer.step();
229 
230  if (_enabled_max_time) {
231  if (timer_step > _max_time) {
233  return false;
234  }
235  }
236 
237  if (!startOfPeriod()) { return true; }
238 
240  GUM_ERROR(OperationNotAllowed,
241  "state of the approximation scheme is not correct : "
243  }
244 
245  if (verbosity()) { _history.push_back(error); }
246 
247  if (_enabled_max_iter) {
248  if (_current_step > _max_iter) {
250  return false;
251  }
252  }
253 
255  _current_epsilon = error; // eps rate isEnabled needs it so affectation was
256  // moved from eps isEnabled below
257 
258  if (_enabled_eps) {
259  if (_current_epsilon <= _eps) {
261  return false;
262  }
263  }
264 
265  if (_last_epsilon >= 0.) {
266  if (_current_epsilon > .0) {
267  // ! _current_epsilon can be 0. AND epsilon
268  // isEnabled can be disabled !
269  _current_rate =
271  }
272  // limit with current eps ---> 0 is | 1 - ( last_eps / 0 ) | --->
273  // infinity the else means a return false if we isEnabled the rate below,
274  // as we would have returned false if epsilon isEnabled was enabled
275  else {
277  }
278 
279  if (_enabled_min_rate_eps) {
280  if (_current_rate <= _min_rate_eps) {
282  return false;
283  }
284  }
285  }
286 
288  if (onProgress.hasListener()) {
290  }
291 
292  return true;
293  } else {
294  return false;
295  }
296  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:
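
A simplified sketch of the loop a makeInference() implementation can build on top of this method. The real algorithms also manage threads, evidence and marginal updates; this only shows the approximation-scheme plumbing and is not the library's actual code.

  // inside a hypothetical InferenceEngine subclass
  void makeInference() {
    initApproximationScheme();

    do {
      // ... sample / propagate, then update _marginalMin and _marginalMax ...
      updateApproximationScheme();                       // one more iteration
    } while (continueApproximationScheme(_computeEpsilon()));

    // stateApproximationScheme() / messageApproximationScheme() now explain why it stopped
  }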

◆ credalNet()

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::credalNet ( )

Get this credal network.

Returns
A constant reference to this CredalNet.

Definition at line 56 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet.

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine().

56  {
57  return *_credalNet;
58  }
+ Here is the caller graph for this function:

◆ currentTime()

INLINE double gum::ApproximationScheme::currentTime ( ) const
virtualinherited

Returns the current running time in seconds.

Returns
Returns the current running time in seconds.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 126 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_timer, and gum::Timer::step().

Referenced by gum::learning::genericBNLearner::currentTime().

126 { return _timer.step(); }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ disableEpsilon()

INLINE void gum::ApproximationScheme::disableEpsilon ( )
virtualinherited

Disable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 52 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps.

Referenced by gum::learning::genericBNLearner::disableEpsilon().

52 { _enabled_eps = false; }
+ Here is the caller graph for this function:

◆ disableMaxIter()

INLINE void gum::ApproximationScheme::disableMaxIter ( )
virtualinherited

Disable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 103 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::learning::genericBNLearner::disableMaxIter(), and gum::learning::GreedyHillClimbing::GreedyHillClimbing().

103 { _enabled_max_iter = false; }
+ Here is the caller graph for this function:

◆ disableMaxTime()

INLINE void gum::ApproximationScheme::disableMaxTime ( )
virtualinherited

Disable stopping criterion on timeout.

Returns
Disable stopping criterion on timeout.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 129 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time.

Referenced by gum::learning::genericBNLearner::disableMaxTime(), and gum::learning::GreedyHillClimbing::GreedyHillClimbing().

129 { _enabled_max_time = false; }
+ Here is the caller graph for this function:

◆ disableMinEpsilonRate()

INLINE void gum::ApproximationScheme::disableMinEpsilonRate ( )
virtualinherited

Disable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 77 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::learning::genericBNLearner::disableMinEpsilonRate(), and gum::learning::GreedyHillClimbing::GreedyHillClimbing().

77  {
78  _enabled_min_rate_eps = false;
79  }
+ Here is the caller graph for this function:

◆ dynamicExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations ( )

Compute dynamic expectations.

See also
_dynamicExpectations. Only call this if an algorithm does not call it by itself.

Definition at line 713 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations().

713  {
715  }
+ Here is the call graph for this function:

◆ dynamicExpMax()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax ( const std::string &  varName) const

Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e.

call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName   The variable name prefix whose upper expectation we want.
Returns
A constant reference to the variable upper expectation over all time steps.

Definition at line 501 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, and GUM_ERROR.

502  {
503  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
504  "GUM_SCALAR >::dynamicExpMax ( const std::string & "
505  "varName ) const : ";
506 
507  if (_dynamicExpMax.empty())
508  GUM_ERROR(OperationNotAllowed,
509  errTxt + "_dynamicExpectations() needs to be called before");
510 
511  if (!_dynamicExpMax.exists(
512  varName) /*_dynamicExpMin.find(varName) == _dynamicExpMin.end()*/)
513  GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName);
514 
515  return _dynamicExpMax[varName];
516  }

◆ dynamicExpMin()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin ( const std::string &  varName) const

Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e.

call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName   The variable name prefix whose lower expectation we want.
Returns
A constant reference to the variable lower expectation over all time steps.

Definition at line 483 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, and GUM_ERROR.

484  {
485  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
486  "GUM_SCALAR >::dynamicExpMin ( const std::string & "
487  "varName ) const : ";
488 
489  if (_dynamicExpMin.empty())
490  GUM_ERROR(OperationNotAllowed,
491  errTxt + "_dynamicExpectations() needs to be called before");
492 
493  if (!_dynamicExpMin.exists(
494  varName) /*_dynamicExpMin.find(varName) == _dynamicExpMin.end()*/)
495  GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName);
496 
497  return _dynamicExpMin[varName];
498  }
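
A hedged sketch combining dynamicExpMin() / dynamicExpMax(), reusing the engine object from the earlier sketch (the modality file name and the "temp" prefix are hypothetical):

  engine.insertModalsFile("modalities.modal");   // hypothetical modalities file
  engine.makeInference();
  engine.dynamicExpectations();                  // only needed if the algorithm did not call it itself

  // expectations of "temp_0", "temp_1", ..., "temp_T", indexed by time step
  const std::vector< double >& expMin = engine.dynamicExpMin("temp");
  const std::vector< double >& expMax = engine.dynamicExpMax("temp");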

◆ enableEpsilon()

INLINE void gum::ApproximationScheme::enableEpsilon ( )
virtualinherited

Enable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 55 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), and gum::learning::genericBNLearner::enableEpsilon().

55 { _enabled_eps = true; }
+ Here is the caller graph for this function:

◆ enableMaxIter()

INLINE void gum::ApproximationScheme::enableMaxIter ( )
virtualinherited

Enable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 106 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter.

Referenced by gum::learning::genericBNLearner::enableMaxIter().

106 { _enabled_max_iter = true; }
+ Here is the caller graph for this function:

◆ enableMaxTime()

INLINE void gum::ApproximationScheme::enableMaxTime ( )
virtualinherited

Enable stopping criterion on timeout.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 132 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(), and gum::learning::genericBNLearner::enableMaxTime().

132 { _enabled_max_time = true; }
+ Here is the caller graph for this function:

◆ enableMinEpsilonRate()

INLINE void gum::ApproximationScheme::enableMinEpsilonRate ( )
virtualinherited

Enable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 82 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), and gum::learning::genericBNLearner::enableMinEpsilonRate().

82  {
83  _enabled_min_rate_eps = true;
84  }
+ Here is the caller graph for this function:

◆ epsilon()

INLINE double gum::ApproximationScheme::epsilon ( ) const
virtualinherited

Returns the value of epsilon.

Returns
Returns the value of epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 49 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_eps.

Referenced by gum::ImportanceSampling< GUM_SCALAR >::_onContextualize(), and gum::learning::genericBNLearner::epsilon().

49 { return _eps; }
+ Here is the caller graph for this function:

◆ eraseAllEvidence()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence ( )
virtual

Erase all inference-related data so that another inference can be performed.

Evidence needs to be inserted again if required, but modalities are kept; new ones can be inserted with the appropriate method, which deletes the old ones.

Reimplemented in gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >, and gum::credal::CNLoopyPropagation< GUM_SCALAR >.

Definition at line 61 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginals(), gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets(), gum::credal::InferenceEngine< GUM_SCALAR >::_query, and gum::HashTable< Key, Val, Alloc >::clear().

Referenced by gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::eraseAllEvidence().

61  {
62  _evidence.clear();
63  _query.clear();
64  /*
65  _marginalMin.clear();
66  _marginalMax.clear();
67  _oldMarginalMin.clear();
68  _oldMarginalMax.clear();
69  */
71  /*
72  _expectationMin.clear();
73  _expectationMax.clear();
74  */
76 
77  // _marginalSets.clear();
79 
80  _dynamicExpMin.clear();
81  _dynamicExpMax.clear();
82 
83  //_modal.clear();
84 
85  //_t0.clear();
86  //_t1.clear();
87  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:
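
A typical reuse pattern, continuing the earlier sketch (the evidence file names are hypothetical; modalities inserted before the first run are kept):

  engine.insertEvidenceFile("evidence_day1.evi");
  engine.makeInference();
  // ... read marginals / expectations of the first run ...

  engine.eraseAllEvidence();                        // resets marginals, expectations, evidence, queries
  engine.insertEvidenceFile("evidence_day2.evi");   // new evidence for a second run
  engine.makeInference();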

◆ expectationMax() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const NodeId  id) const

Get the upper expectation of a given node id.

Parameters
id   The node id whose upper expectation we want.
Returns
A constant reference to this node upper expectation.

Definition at line 476 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax.

476  {
477  try {
478  return _expectationMax[id];
479  } catch (NotFound& err) { throw(err); }
480  }

◆ expectationMax() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const std::string &  varName) const

Get the upper expectation of a given variable name.

Parameters
varName   The variable name whose upper expectation we want.
Returns
A constant reference to this variable upper expectation.

Definition at line 459 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax.

460  {
461  try {
462  return _expectationMax[_credalNet->current_bn().idFromName(varName)];
463  } catch (NotFound& err) { throw(err); }
464  }

◆ expectationMin() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const NodeId  id) const

Get the lower expectation of a given node id.

Parameters
id   The node id whose lower expectation we want.
Returns
A constant reference to this node lower expectation.

Definition at line 468 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin.

468  {
469  try {
470  return _expectationMin[id];
471  } catch (NotFound& err) { throw(err); }
472  }
expe _expectationMin
Lower expectations, if some variables modalities were inserted.

◆ expectationMin() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const std::string &  varName) const

Get the lower expectation of a given variable name.

Parameters
varName: The variable name whose lower expectation we want.
Returns
A constant reference to this variable's lower expectation.

Definition at line 451 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin.

452  {
453  try {
454  return _expectationMin[_credalNet->current_bn().idFromName(varName)];
455  } catch (NotFound& err) { throw(err); }
456  }
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
expe _expectationMin
Lower expectations, if some variables modalities were inserted.
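
A minimal usage sketch (not part of the library sources): it assumes a concrete engine derived from this abstract class (e.g. CNMonteCarloSampling), that modalities were inserted beforehand with insertModals() or insertModalsFile(), and that makeInference() has already run; the variable name "temperature" is hypothetical.

    #include <agrum/CN/inferenceEngine.h>
    #include <iostream>

    void printExpectationBounds(const gum::credal::InferenceEngine< double >& engine) {
      // Lower and upper expectation bounds of a hypothetical variable;
      // both getters throw gum::NotFound if the name is unknown.
      std::cout << "E[temperature] in [" << engine.expectationMin("temperature")
                << ", " << engine.expectationMax("temperature") << "]" << std::endl;
    }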

◆ getApproximationSchemeMsg()

template<typename GUM_SCALAR >
const std::string gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg ( )
inline

Get approximation scheme state.

Returns
A constant string about approximation scheme state.

Definition at line 513 of file inferenceEngine.h.

References gum::IApproximationSchemeConfiguration::messageApproximationScheme().

513  {
514  return this->messageApproximationScheme();
515  }
std::string messageApproximationScheme() const
Returns the approximation scheme message.

◆ getT0Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT0Cluster ( ) const

Get the _t0 cluster.

Returns
A constant reference to the _t0 cluster.

Definition at line 1002 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_t0.

1002  {
1003  return _t0;
1004  }
cluster _t0
Clusters of nodes used with dynamic networks.

◆ getT1Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT1Cluster ( ) const

Get the _t1 cluster.

Returns
A constant reference to the _t1 cluster.

Definition at line 1008 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_t1.

1008  {
1009  return _t1;
1010  }
cluster _t1
Clusters of nodes used with dynamic networks.

◆ getVarMod2BNsMap()

template<typename GUM_SCALAR >
VarMod2BNsMap< GUM_SCALAR > * gum::credal::InferenceEngine< GUM_SCALAR >::getVarMod2BNsMap ( )

Get optimum IBayesNet.

Returns
A pointer to the optimal net object.

Definition at line 138 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dbnOpt.

138  {
139  return &_dbnOpt;
140  }
VarMod2BNsMap< GUM_SCALAR > _dbnOpt
Object used to efficiently store optimal bayes net during inference, for some algorithms.
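
A hedged usage sketch (assumptions: a concrete engine whose algorithm actually fills the map, such as CNMonteCarloSampling): optimal-network storage has to be requested before inference, and the returned pointer refers to a member of the engine, so it must not be deleted.

    #include <agrum/CN/inferenceEngine.h>

    void keepOptimalNets(gum::credal::InferenceEngine< double >& engine) {
      engine.storeBNOpt(true);                // keep optimal IBayesNet during inference
      engine.makeInference();
      auto* opt = engine.getVarMod2BNsMap();  // points to the engine's own map, do not delete
      // inspect *opt here ...
    }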

◆ history()

INLINE const std::vector< double > & gum::ApproximationScheme::history ( ) const
virtualinherited

Returns the scheme history.

Returns
Returns the scheme history.
Exceptions
OperationNotAllowed: Raised if the scheme has not been performed, or if verbosity is set to false.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 171 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_history, GUM_ERROR, gum::ApproximationScheme::stateApproximationScheme(), gum::IApproximationSchemeConfiguration::Undefined, and gum::ApproximationScheme::verbosity().

Referenced by gum::learning::genericBNLearner::history().

171  {
172  if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
173  GUM_ERROR(OperationNotAllowed,
174  "state of the approximation scheme is undefined");
175  }
176 
177  if (verbosity() == false) {
178  GUM_ERROR(OperationNotAllowed, "No history when verbosity=false");
179  }
180 
181  return _history;
182  }
std::vector< double > _history
The scheme history, used only if verbosity == true.
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
bool verbosity() const
Returns true if verbosity is enabled.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52

◆ initApproximationScheme()

INLINE void gum::ApproximationScheme::initApproximationScheme ( )
inherited

Initialise the scheme.

Definition at line 185 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_epsilon, gum::ApproximationScheme::_current_rate, gum::ApproximationScheme::_current_state, gum::ApproximationScheme::_current_step, gum::ApproximationScheme::_history, gum::ApproximationScheme::_timer, gum::IApproximationSchemeConfiguration::Continue, and gum::Timer::reset().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::SamplingInference< GUM_SCALAR >::_loopApproxInference(), gum::SamplingInference< GUM_SCALAR >::_onStateChanged(), gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), and gum::learning::LocalSearchWithTabuList::learnStructure().

185  {
186  _current_state = ApproximationSchemeSTATE::Continue;
187  _current_step = 0;
188  _current_epsilon = _current_rate = -1.0;
189  _history.clear();
190  _timer.reset();
191  }
double _current_epsilon
Current epsilon.
void reset()
Reset the timer.
Definition: timer_inl.h:29
double _current_rate
Current rate.
Size _current_step
The current step.
std::vector< double > _history
The scheme history, used only if verbosity == true.
ApproximationSchemeSTATE _current_state
The current state.

◆ insertEvidence() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const std::map< std::string, std::vector< GUM_SCALAR > > &  eviMap)

Insert evidence from map.

Parameters
eviMap: The map from variable name to likelihood values.

Definition at line 226 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

227  {
228  if (!_evidence.empty()) _evidence.clear();
229 
230  for (auto it = eviMap.cbegin(), theEnd = eviMap.cend(); it != theEnd; ++it) {
231  NodeId id;
232 
233  try {
234  id = _credalNet->current_bn().idFromName(it->first);
235  } catch (NotFound& err) {
236  GUM_SHOWERROR(err);
237  continue;
238  }
239 
240  _evidence.insert(id, it->second);
241  }
242  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:58
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
margi _evidence
Holds observed variables states.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
Size NodeId
Type for node ids.
Definition: graphElements.h:97
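
For illustration, a minimal sketch of the map-based variant (the variable names "A" and "B" and their likelihoods are hypothetical; a concrete engine is assumed):

    #include <agrum/CN/inferenceEngine.h>
    #include <map>
    #include <string>
    #include <vector>

    void addEvidence(gum::credal::InferenceEngine< double >& engine) {
      // One likelihood value per state; unknown variable names are
      // reported with GUM_SHOWERROR and skipped (see the code above).
      std::map< std::string, std::vector< double > > evi;
      evi["A"] = {1.0, 0.0};   // A observed in its first state
      evi["B"] = {0.0, 1.0};   // B observed in its second state
      engine.insertEvidence(evi);
    }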

◆ insertEvidence() [2/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const NodeProperty< std::vector< GUM_SCALAR > > &  evidence)

Insert evidence from Property.

Parameters
evidence: The node Property containing the likelihoods.

Definition at line 248 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

249  {
250  if (!_evidence.empty()) _evidence.clear();
251 
252  // use cbegin() to get const_iterator when available in aGrUM hashtables
253  for (const auto& elt : evidence) {
254  try {
255  _credalNet->current_bn().variable(elt.first);
256  } catch (NotFound& err) {
257  GUM_SHOWERROR(err);
258  continue;
259  }
260 
261  _evidence.insert(elt.first, elt.second);
262  }
263  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:58
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
margi _evidence
Holds observed variables states.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.

◆ insertEvidenceFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidenceFile ( const std::string &  path)
virtual

Insert evidence from file.

Parameters
path: The path to the evidence file.

Reimplemented in gum::credal::CNLoopyPropagation< GUM_SCALAR >, and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >.

Definition at line 267 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_ERROR, GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::insertEvidenceFile(), and gum::credal::CNLoopyPropagation< GUM_SCALAR >::insertEvidenceFile().

267  {
268  std::ifstream evi_stream(path.c_str(), std::ios::in);
269 
270  if (!evi_stream.good()) {
271  GUM_ERROR(IOError,
272  "void InferenceEngine< GUM_SCALAR "
273  ">::insertEvidence(const std::string & path) : could not "
274  "open input file : "
275  << path);
276  }
277 
278  if (!_evidence.empty()) _evidence.clear();
279 
280  std::string line, tmp;
281  char * cstr, *p;
282 
283  while (evi_stream.good() && std::strcmp(line.c_str(), "[EVIDENCE]") != 0) {
284  getline(evi_stream, line);
285  }
286 
287  while (evi_stream.good()) {
288  getline(evi_stream, line);
289 
290  if (std::strcmp(line.c_str(), "[QUERY]") == 0) break;
291 
292  if (line.size() == 0) continue;
293 
294  cstr = new char[line.size() + 1];
295  strcpy(cstr, line.c_str());
296 
297  p = strtok(cstr, " ");
298  tmp = p;
299 
300  // if user input is wrong
301  NodeId node = -1;
302 
303  try {
304  node = _credalNet->current_bn().idFromName(tmp);
305  } catch (NotFound& err) {
306  GUM_SHOWERROR(err);
307  continue;
308  }
309 
310  std::vector< GUM_SCALAR > values;
311  p = strtok(nullptr, " ");
312 
313  while (p != nullptr) {
314  values.push_back(GUM_SCALAR(atof(p)));
315  p = strtok(nullptr, " ");
316  } // end of : line
317 
318  _evidence.insert(node, values);
319 
320  delete[] p;
321  delete[] cstr;
322  } // end of : file
323 
324  evi_stream.close();
325  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:58
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
margi _evidence
Holds observed variables states.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
Size NodeId
Type for node ids.
Definition: graphElements.h:97
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52
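
Judging from the parser above, an evidence file is expected to look roughly like the sketch below (names and values are hypothetical): everything before the [EVIDENCE] marker is skipped, each following non-empty line holds a variable name followed by one likelihood value per state, and reading stops at an optional [QUERY] marker, so the same file can also carry the section read by insertQueryFile().

    [EVIDENCE]
    A 1 0
    B 0 1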

◆ insertModals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModals ( const std::map< std::string, std::vector< GUM_SCALAR > > &  modals)

Insert variables modalities from map to compute expectations.

Parameters
modals: The map from variable name to modality values (one numeric value per state).

Definition at line 190 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_modal, and GUM_SHOWERROR.

191  {
192  if (!_modal.empty()) _modal.clear();
193 
194  for (auto it = modals.cbegin(), theEnd = modals.cend(); it != theEnd; ++it) {
195  NodeId id;
196 
197  try {
198  id = _credalNet->current_bn().idFromName(it->first);
199  } catch (NotFound& err) {
200  GUM_SHOWERROR(err);
201  continue;
202  }
203 
204  // check that modals are net compatible
205  auto dSize = _credalNet->current_bn().variable(id).domainSize();
206 
207  if (dSize != it->second.size()) continue;
208 
209  // GUM_ERROR(OperationNotAllowed, "void InferenceEngine< GUM_SCALAR
210  // >::insertModals( const std::map< std::string, std::vector< GUM_SCALAR
211  // > >
212  // &modals) : modalities does not respect variable cardinality : " <<
213  // _credalNet->current_bn().variable( id ).name() << " : " << dSize << "
214  // != "
215  // << it->second.size());
216 
217  _modal.insert(it->first, it->second); //[ it->first ] = it->second;
218  }
219 
220  //_modal = modals;
221 
223  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:58
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
dynExpe _modal
Variables modalities used to compute expectations.
void _initExpectations()
Initialize lower and upper expectations before inference, with the lower expectation being initialize...
Size NodeId
Type for node ids.
Definition: graphElements.h:97
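
A minimal sketch of the map-based variant (the variable name and its numeric modalities are hypothetical; a concrete engine is assumed):

    #include <agrum/CN/inferenceEngine.h>
    #include <map>
    #include <string>
    #include <vector>

    void addModalities(gum::credal::InferenceEngine< double >& engine) {
      // One numeric value per state; an entry whose size does not match the
      // variable's domain size is silently skipped (see the check above).
      std::map< std::string, std::vector< double > > modals;
      modals["temperature"] = {10.0, 20.0, 30.0};   // hypothetical 3-state variable
      engine.insertModals(modals);
    }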

◆ insertModalsFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile ( const std::string &  path)

Insert variables modalities from file to compute expectations.

Parameters
path: The path to the modalities file.

Definition at line 143 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_modal, and GUM_ERROR.

143  {
144  std::ifstream mod_stream(path.c_str(), std::ios::in);
145 
146  if (!mod_stream.good()) {
147  GUM_ERROR(OperationNotAllowed,
148  "void InferenceEngine< GUM_SCALAR "
149  ">::insertModals(const std::string & path) : "
150  "could not open input file : "
151  << path);
152  }
153 
154  if (!_modal.empty()) _modal.clear();
155 
156  std::string line, tmp;
157  char * cstr, *p;
158 
159  while (mod_stream.good()) {
160  getline(mod_stream, line);
161 
162  if (line.size() == 0) continue;
163 
164  cstr = new char[line.size() + 1];
165  strcpy(cstr, line.c_str());
166 
167  p = strtok(cstr, " ");
168  tmp = p;
169 
170  std::vector< GUM_SCALAR > values;
171  p = strtok(nullptr, " ");
172 
173  while (p != nullptr) {
174  values.push_back(GUM_SCALAR(atof(p)));
175  p = strtok(nullptr, " ");
176  } // end of : line
177 
178  _modal.insert(tmp, values); //[tmp] = values;
179 
180  delete[] p;
181  delete[] cstr;
182  } // end of : file
183 
184  mod_stream.close();
185 
187  }
dynExpe _modal
Variables modalities used to compute expectations.
void _initExpectations()
Initialize lower and upper expectations before inference, with the lower expectation being initialize...
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52
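
Judging from the parser above, the modalities file is a plain list with one variable per non-empty line, the variable name followed by one numeric value per state (names and values below are hypothetical):

    temperature 10 20 30
    pressure 0 1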

◆ insertQuery()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery ( const NodeProperty< std::vector< bool > > &  query)

Insert query variables and states from Property.

Parameters
query: The node Property containing the queried variables' states.

Definition at line 328 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_query, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

329  {
330  if (!_query.empty()) _query.clear();
331 
332  for (const auto& elt : query) {
333  try {
334  _credalNet->current_bn().variable(elt.first);
335  } catch (NotFound& err) {
336  GUM_SHOWERROR(err);
337  continue;
338  }
339 
340  _query.insert(elt.first, elt.second);
341  }
342  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:58
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
query _query
Holds the query nodes states.
void clear()
Removes all the elements in the hash table.
NodeProperty< std::vector< bool > > query
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
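
A minimal sketch of the Property-based variant (the node ids and state choices are hypothetical; a concrete engine is assumed):

    #include <agrum/CN/inferenceEngine.h>
    #include <vector>

    void addQuery(gum::credal::InferenceEngine< double >& engine) {
      // Query every state of node 0 and only the second state of node 3;
      // unknown node ids are reported and skipped (see the code above).
      gum::NodeProperty< std::vector< bool > > q;
      q.insert(gum::NodeId(0), std::vector< bool >{true, true});
      q.insert(gum::NodeId(3), std::vector< bool >{false, true});
      engine.insertQuery(q);
    }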

◆ insertQueryFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile ( const std::string &  path)

Insert query variables states from file.

Parameters
path: The path to the query file.

Definition at line 345 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_query, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_ERROR, GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

345  {
346  std::ifstream evi_stream(path.c_str(), std::ios::in);
347 
348  if (!evi_stream.good()) {
349  GUM_ERROR(IOError,
350  "void InferenceEngine< GUM_SCALAR >::insertQuery(const "
351  "std::string & path) : could not open input file : "
352  << path);
353  }
354 
355  if (!_query.empty()) _query.clear();
356 
357  std::string line, tmp;
358  char * cstr, *p;
359 
360  while (evi_stream.good() && std::strcmp(line.c_str(), "[QUERY]") != 0) {
361  getline(evi_stream, line);
362  }
363 
364  while (evi_stream.good()) {
365  getline(evi_stream, line);
366 
367  if (std::strcmp(line.c_str(), "[EVIDENCE]") == 0) break;
368 
369  if (line.size() == 0) continue;
370 
371  cstr = new char[line.size() + 1];
372  strcpy(cstr, line.c_str());
373 
374  p = strtok(cstr, " ");
375  tmp = p;
376 
377  // if user input is wrong
378  NodeId node = -1;
379 
380  try {
381  node = _credalNet->current_bn().idFromName(tmp);
382  } catch (NotFound& err) {
383  GUM_SHOWERROR(err);
384  continue;
385  }
386 
387  auto dSize = _credalNet->current_bn().variable(node).domainSize();
388 
389  p = strtok(nullptr, " ");
390 
391  if (p == nullptr) {
392  _query.insert(node, std::vector< bool >(dSize, true));
393  } else {
394  std::vector< bool > values(dSize, false);
395 
396  while (p != nullptr) {
397  if ((Size)atoi(p) >= dSize)
398  GUM_ERROR(OutOfBounds,
399  "void InferenceEngine< GUM_SCALAR "
400  ">::insertQuery(const std::string & path) : "
401  "query modality is higher or equal to "
402  "cardinality");
403 
404  values[atoi(p)] = true;
405  p = strtok(nullptr, " ");
406  } // end of : line
407 
408  _query.insert(node, values);
409  }
410 
411  delete[] p;
412  delete[] cstr;
413  } // end of : file
414 
415  evi_stream.close();
416  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:58
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
query _query
Holds the query nodes states.
void clear()
Removes all the elements in the hash table.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:45
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
Size NodeId
Type for node ids.
Definition: graphElements.h:97
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52
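
Judging from the parser above, the [QUERY] section lists one variable per non-empty line: a name alone queries every state, while a name followed by state indices (each strictly below the variable's cardinality) queries only those states; reading stops at an optional [EVIDENCE] marker. Names and indices below are hypothetical.

    [QUERY]
    A
    B 0 2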

◆ isEnabledEpsilon()

INLINE bool gum::ApproximationScheme::isEnabledEpsilon ( ) const
virtualinherited

Returns true if stopping criterion on epsilon is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 59 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps.

Referenced by gum::learning::genericBNLearner::isEnabledEpsilon().

59  {
60  return _enabled_eps;
61  }
bool _enabled_eps
If true, the threshold convergence is enabled.

◆ isEnabledMaxIter()

INLINE bool gum::ApproximationScheme::isEnabledMaxIter ( ) const
virtualinherited

Returns true if stopping criterion on max iterations is enabled, false otherwise.

Returns
Returns true if stopping criterion on max iterations is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 110 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter.

Referenced by gum::learning::genericBNLearner::isEnabledMaxIter().

110  {
111  return _enabled_max_iter;
112  }
bool _enabled_max_iter
If true, the maximum iterations stopping criterion is enabled.

◆ isEnabledMaxTime()

INLINE bool gum::ApproximationScheme::isEnabledMaxTime ( ) const
virtualinherited

Returns true if stopping criterion on timeout is enabled, false otherwise.

Returns
Returns true if stopping criterion on timeout is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 136 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time.

Referenced by gum::learning::genericBNLearner::isEnabledMaxTime().

136  {
137  return _enabled_max_time;
138  }
bool _enabled_max_time
If true, the timeout is enabled.

◆ isEnabledMinEpsilonRate()

INLINE bool gum::ApproximationScheme::isEnabledMinEpsilonRate ( ) const
virtualinherited

Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 88 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), and gum::learning::genericBNLearner::isEnabledMinEpsilonRate().

88  {
89  return _enabled_min_rate_eps;
90  }
bool _enabled_min_rate_eps
If true, the minimal threshold for epsilon rate is enabled.

◆ makeInference()

template<typename GUM_SCALAR >
virtual void gum::credal::InferenceEngine< GUM_SCALAR >::makeInference ( )
pure virtual

◆ marginalMax() [1/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const NodeId  id) const

Get the upper marginals of a given node id.

Parameters
id: The node id whose upper marginals we want.
Returns
A constant reference to this node's upper marginals.

Definition at line 444 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax.

444  {
445  try {
446  return _marginalMax[id];
447  } catch (NotFound& err) { throw(err); }
448  }
margi _marginalMax
Upper marginals.

◆ marginalMax() [2/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const std::string &  varName) const

Get the upper marginals of a given variable name.

Parameters
varName: The variable name whose upper marginals we want.
Returns
A constant reference to this variable's upper marginals.

Definition at line 427 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax.

428  {
429  try {
430  return _marginalMax[_credalNet->current_bn().idFromName(varName)];
431  } catch (NotFound& err) { throw(err); }
432  }
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
margi _marginalMax
Upper marginals.

◆ marginalMin() [1/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const NodeId  id) const

Get the lower marginals of a given node id.

Parameters
id: The node id whose lower marginals we want.
Returns
A constant reference to this node's lower marginals.

Definition at line 436 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin.

436  {
437  try {
438  return _marginalMin[id];
439  } catch (NotFound& err) { throw(err); }
440  }
margi _marginalMin
Lower marginals.

◆ marginalMin() [2/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const std::string &  varName) const

Get the lower marginals of a given variable name.

Parameters
varName: The variable name whose lower marginals we want.
Returns
A constant reference to this variable's lower marginals.

Definition at line 419 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin.

420  {
421  try {
422  return _marginalMin[_credalNet->current_bn().idFromName(varName)];
423  } catch (NotFound& err) { throw(err); }
424  }
margi _marginalMin
Lower marginals.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
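
A minimal reading sketch (a concrete engine, a completed makeInference() call and the variable name "A" are assumptions):

    #include <agrum/CN/inferenceEngine.h>
    #include <cstddef>
    #include <iostream>

    void printMarginalBounds(const gum::credal::InferenceEngine< double >& engine) {
      const auto& lo = engine.marginalMin("A");   // lower bounds, one per state
      const auto& hi = engine.marginalMax("A");   // upper bounds, one per state
      for (std::size_t mod = 0; mod < lo.size(); ++mod)
        std::cout << "P(A=" << mod << "|e) in [" << lo[mod] << ", " << hi[mod]
                  << "]" << std::endl;
    }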

◆ maxIter()

INLINE Size gum::ApproximationScheme::maxIter ( ) const
virtualinherited

Returns the criterion on number of iterations.

Returns
Returns the criterion on number of iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 100 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_max_iter.

Referenced by gum::learning::genericBNLearner::maxIter().

100 { return _max_iter; }
Size _max_iter
The maximum iterations.

◆ maxTime()

INLINE double gum::ApproximationScheme::maxTime ( ) const
virtualinherited

Returns the timeout (in seconds).

Returns
Returns the timeout (in seconds).

Implements gum::IApproximationSchemeConfiguration.

Definition at line 123 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_max_time.

Referenced by gum::learning::genericBNLearner::maxTime().

123 { return _max_time; }
double _max_time
The timeout.

◆ messageApproximationScheme()

INLINE std::string gum::IApproximationSchemeConfiguration::messageApproximationScheme ( ) const
inherited

Returns the approximation scheme message.

Returns
Returns the approximation scheme message.

Definition at line 38 of file IApproximationSchemeConfiguration_inl.h.

References gum::IApproximationSchemeConfiguration::Continue, gum::IApproximationSchemeConfiguration::Epsilon, gum::IApproximationSchemeConfiguration::epsilon(), gum::IApproximationSchemeConfiguration::Limit, gum::IApproximationSchemeConfiguration::maxIter(), gum::IApproximationSchemeConfiguration::maxTime(), gum::IApproximationSchemeConfiguration::minEpsilonRate(), gum::IApproximationSchemeConfiguration::Rate, gum::IApproximationSchemeConfiguration::stateApproximationScheme(), gum::IApproximationSchemeConfiguration::Stopped, gum::IApproximationSchemeConfiguration::TimeLimit, and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::ApproximationScheme::_stopScheme(), gum::ApproximationScheme::continueApproximationScheme(), and gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg().

38  {
39  std::stringstream s;
40 
41  switch (stateApproximationScheme()) {
42  case ApproximationSchemeSTATE::Continue: s << "in progress"; break;
43 
44  case ApproximationSchemeSTATE::Epsilon:
45  s << "stopped with epsilon=" << epsilon();
46  break;
47 
48  case ApproximationSchemeSTATE::Rate:
49  s << "stopped with rate=" << minEpsilonRate();
50  break;
51 
52  case ApproximationSchemeSTATE::Limit:
53  s << "stopped with max iteration=" << maxIter();
54  break;
55 
56  case ApproximationSchemeSTATE::TimeLimit:
57  s << "stopped with timeout=" << maxTime();
58  break;
59 
60  case ApproximationSchemeSTATE::Stopped: s << "stopped on request"; break;
61 
62  case ApproximationSchemeSTATE::Undefined: s << "undefined state"; break;
63  };
64 
65  return s.str();
66  }
virtual double epsilon() const =0
Returns the value of epsilon.
virtual ApproximationSchemeSTATE stateApproximationScheme() const =0
Returns the approximation scheme state.
virtual double maxTime() const =0
Returns the timeout (in seconds).
virtual Size maxIter() const =0
Returns the criterion on number of iterations.
virtual double minEpsilonRate() const =0
Returns the value of the minimal epsilon rate.

◆ minEpsilonRate()

INLINE double gum::ApproximationScheme::minEpsilonRate ( ) const
virtualinherited

Returns the value of the minimal epsilon rate.

Returns
Returns the value of the minimal epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 72 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_min_rate_eps.

Referenced by gum::learning::genericBNLearner::minEpsilonRate().

72  {
73  return _min_rate_eps;
74  }
double _min_rate_eps
Threshold for the epsilon rate.

◆ nbrIterations()

INLINE Size gum::ApproximationScheme::nbrIterations ( ) const
virtualinherited

Returns the number of iterations.

Returns
Returns the number of iterations.
Exceptions
OperationNotAllowed: Raised if the scheme has not been performed.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 161 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_step, GUM_ERROR, gum::ApproximationScheme::stateApproximationScheme(), and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), and gum::learning::genericBNLearner::nbrIterations().

161  {
162  if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
163  GUM_ERROR(OperationNotAllowed,
164  "state of the approximation scheme is undefined");
165  }
166 
167  return _current_step;
168  }
Size _current_step
The current step.
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52

◆ periodSize()

INLINE Size gum::ApproximationScheme::periodSize ( ) const
virtualinherited

Returns the period size.

Returns
Returns the period size.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 147 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_period_size.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference(), and gum::learning::genericBNLearner::periodSize().

147 { return _period_size; }
Size _period_size
Checking criteria frequency.

◆ remainingBurnIn()

INLINE Size gum::ApproximationScheme::remainingBurnIn ( )
inherited

Returns the remaining burn in.

Returns
Returns the remaining burn in.

Definition at line 208 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_burn_in, and gum::ApproximationScheme::_current_step.

208  {
209  if (_burn_in > _current_step) {
210  return _burn_in - _current_step;
211  } else {
212  return 0;
213  }
214  }
Size _burn_in
Number of iterations before checking stopping criteria.
Size _current_step
The current step.

◆ repetitiveInd()

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInd ( ) const

Get the current independence status.

Returns
True if repetitive, False otherwise.

Definition at line 117 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInd.

117  {
118  return _repetitiveInd;
119  }
bool _repetitiveInd
True if using repetitive independence ( dynamic network only ), False otherwise.

◆ saveExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveExpectations ( const std::string &  path) const

Saves expectations to file.

Parameters
path: The path to the file to be used.

Definition at line 551 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, and GUM_ERROR.

552  {
553  if (_dynamicExpMin.empty()) //_modal.empty())
554  return;
555 
556  // else not here, to keep the const (natural with a saving process)
557  // else if(_dynamicExpMin.empty() || _dynamicExpMax.empty())
558  //_dynamicExpectations(); // works with or without a dynamic network
559 
560  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
561 
562  if (!m_stream.good()) {
563  GUM_ERROR(IOError,
564  "void InferenceEngine< GUM_SCALAR "
565  ">::saveExpectations(const std::string & path) : could "
566  "not open output file : "
567  << path);
568  }
569 
570  for (const auto& elt : _dynamicExpMin) {
571  m_stream << elt.first; // it->first;
572 
573  // iterates over a vector
574  for (const auto& elt2 : elt.second) {
575  m_stream << " " << elt2;
576  }
577 
578  m_stream << std::endl;
579  }
580 
581  for (const auto& elt : _dynamicExpMax) {
582  m_stream << elt.first;
583 
584  // iterates over a vector
585  for (const auto& elt2 : elt.second) {
586  m_stream << " " << elt2;
587  }
588 
589  m_stream << std::endl;
590  }
591 
592  m_stream.close();
593  }
dynExpe _dynamicExpMin
Lower dynamic expectations.
dynExpe _dynamicExpMax
Upper dynamic expectations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52

◆ saveMarginals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals ( const std::string &  path) const

Saves marginals to file.

Parameters
path: The path to the file to be used.

Definition at line 525 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, and GUM_ERROR.

526  {
527  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
528 
529  if (!m_stream.good()) {
530  GUM_ERROR(IOError,
531  "void InferenceEngine< GUM_SCALAR >::saveMarginals(const "
532  "std::string & path) const : could not open output file "
533  ": "
534  << path);
535  }
536 
537  for (const auto& elt : _marginalMin) {
538  Size esize = Size(elt.second.size());
539 
540  for (Size mod = 0; mod < esize; mod++) {
541  m_stream << _credalNet->current_bn().variable(elt.first).name() << " "
542  << mod << " " << (elt.second)[mod] << " "
543  << _marginalMax[elt.first][mod] << std::endl;
544  }
545  }
546 
547  m_stream.close();
548  }
margi _marginalMin
Lower marginals.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:45
margi _marginalMax
Upper marginals.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52

◆ saveVertices()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices ( const std::string &  path) const

Saves vertices to file.

Parameters
path: The path to the file to be used.

Definition at line 625 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, and GUM_ERROR.

625  {
626  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
627 
628  if (!m_stream.good()) {
629  GUM_ERROR(IOError,
630  "void InferenceEngine< GUM_SCALAR >::saveVertices(const "
631  "std::string & path) : could not open output file : "
632  << path);
633  }
634 
635  for (const auto& elt : _marginalSets) {
636  m_stream << _credalNet->current_bn().variable(elt.first).name()
637  << std::endl;
638 
639  for (const auto& elt2 : elt.second) {
640  m_stream << "[";
641  bool first = true;
642 
643  for (const auto& elt3 : elt2) {
644  if (!first) {
645  m_stream << ",";
646  first = false;
647  }
648 
649  m_stream << elt3;
650  }
651 
652  m_stream << "]\n";
653  }
654  }
655 
656  m_stream.close();
657  }
credalSet _marginalSets
Credal sets vertices, if enabled.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52

◆ setEpsilon()

INLINE void gum::ApproximationScheme::setEpsilon ( double  eps)
virtualinherited

Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|.

If the criterion was disabled it will be enabled.

Parameters
eps: The new epsilon value.
Exceptions
OutOfLowerBound: Raised if eps < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 41 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps, gum::ApproximationScheme::_eps, and GUM_ERROR.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::learning::GreedyHillClimbing::GreedyHillClimbing(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setEpsilon().

41  {
42  if (eps < 0.) { GUM_ERROR(OutOfLowerBound, "eps should be >=0"); }
43 
44  _eps = eps;
45  _enabled_eps = true;
46  }
bool _enabled_eps
If true, the threshold convergence is enabled.
double _eps
Threshold for convergence.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52
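
A hedged configuration sketch combining the stopping-criteria setters documented on this page (all numeric values are arbitrary examples, and a concrete engine is assumed):

    #include <agrum/CN/inferenceEngine.h>

    void configureStoppingCriteria(gum::credal::InferenceEngine< double >& engine) {
      engine.setEpsilon(1e-3);          // threshold on |f(t+1)-f(t)|
      engine.setMinEpsilonRate(1e-4);   // threshold on its rate of change
      engine.setMaxIter(10000);         // iteration cap
      engine.setMaxTime(60.0);          // timeout, in seconds
      engine.setPeriodSize(100);        // test the criteria every 100 iterations
      engine.setVerbosity(true);        // keep the error history for history()
      engine.makeInference();
    }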

◆ setMaxIter()

INLINE void gum::ApproximationScheme::setMaxIter ( Size  max)
virtualinherited

Stopping criterion on number of iterations.

If the criterion was disabled it will be enabled.

Parameters
max: The maximum number of iterations.
Exceptions
OutOfLowerBound: Raised if max < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 93 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter, gum::ApproximationScheme::_max_iter, and GUM_ERROR.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setMaxIter().

93  {
94  if (max < 1) { GUM_ERROR(OutOfLowerBound, "max should be >=1"); }
95  _max_iter = max;
96  _enabled_max_iter = true;
97  }
bool _enabled_max_iter
If true, the maximum iterations stopping criterion is enabled.
Size _max_iter
The maximum iterations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52

◆ setMaxTime()

INLINE void gum::ApproximationScheme::setMaxTime ( double  timeout)
virtualinherited

Stopping criterion on timeout.

If the criterion was disabled it will be enabled.

Parameters
timeout: The timeout value in seconds.
Exceptions
OutOfLowerBound: Raised if timeout <= 0.0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 116 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time, gum::ApproximationScheme::_max_time, and GUM_ERROR.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setMaxTime().

116  {
117  if (timeout <= 0.) { GUM_ERROR(OutOfLowerBound, "timeout should be >0."); }
118  _max_time = timeout;
119  _enabled_max_time = true;
120  }
bool _enabled_max_time
If true, the timeout is enabled.
double _max_time
The timeout.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52

◆ setMinEpsilonRate()

INLINE void gum::ApproximationScheme::setMinEpsilonRate ( double  rate)
virtualinherited

Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|).

If the criterion was disabled it will be enabled

Parameters
rate: The minimal epsilon rate.
Exceptions
OutOfLowerBound: Raised if rate < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 64 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps, gum::ApproximationScheme::_min_rate_eps, and GUM_ERROR.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setMinEpsilonRate().

64  {
65  if (rate < 0) { GUM_ERROR(OutOfLowerBound, "rate should be >=0"); }
66 
67  _min_rate_eps = rate;
68  _enabled_min_rate_eps = true;
69  }
bool _enabled_min_rate_eps
If true, the minimal threshold for epsilon rate is enabled.
double _min_rate_eps
Threshold for the epsilon rate.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52

◆ setPeriodSize()

INLINE void gum::ApproximationScheme::setPeriodSize ( Size  p)
virtualinherited

Number of samples between two tests of the stopping criteria.

Parameters
p: The new period value.
Exceptions
OutOfLowerBound: Raised if p < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 141 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_period_size, and GUM_ERROR.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setPeriodSize().

141  {
142  if (p < 1) { GUM_ERROR(OutOfLowerBound, "p should be >=1"); }
143 
144  _period_size = p;
145  }
Size _period_size
Checking criteria frequency.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:52

◆ setRepetitiveInd()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd ( const bool  repetitive)
Parameters
repetitive: True if repetitive independence is to be used, false otherwise. Only useful with dynamic networks.

Definition at line 108 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInd, and gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit().

108  {
109  bool oldValue = _repetitiveInd;
110  _repetitiveInd = repetitive;
111 
112  // do not compute clusters more than once
113  if (_repetitiveInd && !oldValue) _repetitiveInit();
114  }
void _repetitiveInit()
Initialize _t0 and _t1 clusters.
bool _repetitiveInd
True if using repetitive independence ( dynamic network only ), False otherwise.
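
A short sketch for dynamic credal networks (the fact that the underlying network is dynamic is an assumption): switching the flag on computes the _t0 and _t1 clusters once, and they can then be inspected through the getters documented above.

    #include <agrum/CN/inferenceEngine.h>

    void useRepetitiveIndependence(gum::credal::InferenceEngine< double >& engine) {
      engine.setRepetitiveInd(true);            // clusters are built on this first switch
      const auto& t0 = engine.getT0Cluster();   // nodes present at t=0 and their copies
      const auto& t1 = engine.getT1Cluster();   // nodes present at t=1 and their copies
      // ... then run engine.makeInference()
    }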

◆ setVerbosity()

INLINE void gum::ApproximationScheme::setVerbosity ( bool  v)
virtualinherited

Set the verbosity on (true) or off (false).

Parameters
v: If true, then verbosity is turned on.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 150 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_verbosity.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setVerbosity().

150 { _verbosity = v; }
bool _verbosity
If true, verbosity is enabled.

◆ startOfPeriod()

INLINE bool gum::ApproximationScheme::startOfPeriod ( )
inherited

Returns true if we are at the beginning of a period (in which case computing the error is mandatory).

Returns
Returns true if we are at the beginning of a period (in which case computing the error is mandatory).

Definition at line 195 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_burn_in, gum::ApproximationScheme::_current_step, and gum::ApproximationScheme::_period_size.

Referenced by gum::ApproximationScheme::continueApproximationScheme().

195  {
196  if (_current_step < _burn_in) { return false; }
197 
198  if (_period_size == 1) { return true; }
199 
200  return ((_current_step - _burn_in) % _period_size == 0);
201  }
Size _burn_in
Number of iterations before checking stopping criteria.
Size _current_step
The current step.
Size _period_size
Checking criteria frequency.

◆ stateApproximationScheme()

INLINE IApproximationSchemeConfiguration::ApproximationSchemeSTATE gum::ApproximationScheme::stateApproximationScheme ( ) const
virtualinherited

Returns the approximation scheme state.

Returns
Returns the approximation scheme state.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 156 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_state.

Referenced by gum::ApproximationScheme::continueApproximationScheme(), gum::ApproximationScheme::history(), gum::ApproximationScheme::nbrIterations(), and gum::learning::genericBNLearner::stateApproximationScheme().

156  {
157  return _current_state;
158  }
ApproximationSchemeSTATE _current_state
The current state.

◆ stopApproximationScheme()

INLINE void gum::ApproximationScheme::stopApproximationScheme ( )
inherited

Stop the approximation scheme.

Definition at line 217 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_state, gum::ApproximationScheme::_stopScheme(), gum::IApproximationSchemeConfiguration::Continue, and gum::IApproximationSchemeConfiguration::Stopped.

Referenced by gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), and gum::learning::LocalSearchWithTabuList::learnStructure().


◆ storeBNOpt() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( const bool  value)
Parameters
value: True if optimal Bayesian networks are to be stored for each variable and each modality.

Definition at line 96 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt.

96  {
97  _storeBNOpt = value;
98  }
bool _storeBNOpt
True if optimal Bayesian networks are stored for each variable and each modality, False otherwise.

◆ storeBNOpt() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( ) const
Returns
True if optimal Bayesian networks are stored for each variable and each modality, False otherwise.

Definition at line 132 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt.

132  {
133  return _storeBNOpt;
134  }
bool _storeBNOpt
True if optimal Bayesian networks are stored for each variable and each modality, False otherwise.

◆ storeVertices() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( const bool  value)
Parameters
value: True if vertices are to be stored, false otherwise.

Definition at line 101 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets(), and gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices.

101  {
102  _storeVertices = value;
103 
104  if (value) _initMarginalSets();
105  }
void _initMarginalSets()
Initialize credal set vertices with empty sets.
bool _storeVertices
True if credal sets vertices are stored, False otherwise.

◆ storeVertices() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( ) const

Get the credal sets vertices storage status.

Returns
True if vertices are stored, False otherwise.

Definition at line 127 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices.

127  {
128  return _storeVertices;
129  }
bool _storeVertices
True if credal sets vertices are stored, False otherwise.

◆ toString()

template<typename GUM_SCALAR >
std::string gum::credal::InferenceEngine< GUM_SCALAR >::toString ( ) const

Print all the nodes' marginals to standard output.

Definition at line 596 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_query, gum::HashTable< Key, Val, Alloc >::empty(), and gum::HashTable< Key, Val, Alloc >::exists().

596  {
597  std::stringstream output;
598  output << std::endl;
599 
600  // use cbegin() when available
601  for (const auto& elt : _marginalMin) {
602  Size esize = Size(elt.second.size());
603 
604  for (Size mod = 0; mod < esize; mod++) {
605  output << "P(" << _credalNet->current_bn().variable(elt.first).name()
606  << "=" << mod << "|e) = [ ";
607  output << _marginalMin[elt.first][mod] << ", "
608  << _marginalMax[elt.first][mod] << " ]";
609 
610  if (!_query.empty())
611  if (_query.exists(elt.first) && _query[elt.first][mod])
612  output << " QUERY";
613 
614  output << std::endl;
615  }
616 
617  output << std::endl;
618  }
619 
620  return output.str();
621  }
margi _marginalMin
Lower marginals.
bool exists(const Key &key) const
Checks whether there exists an element with a given key in the hashtable.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
query _query
Holds the query nodes states.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:45
bool empty() const noexcept
Indicates whether the hash table is empty.
margi _marginalMax
Upper marginals.

◆ updateApproximationScheme()

INLINE void gum::ApproximationScheme::updateApproximationScheme ( unsigned int  incr = 1)
inherited

Update the scheme w.r.t. the new error and increment the step count.

Parameters
incr: The number of steps to add to the current step count.

Definition at line 204 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_step.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::SamplingInference< GUM_SCALAR >::_loopApproxInference(), gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

204  {
205  _current_step += incr;
206  }
Size _current_step
The current step.

◆ verbosity()

INLINE bool gum::ApproximationScheme::verbosity ( ) const
virtualinherited

Returns true if verbosity is enabled.

Returns
Returns true if verbosity is enabled.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 152 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_verbosity.

Referenced by gum::ApproximationScheme::continueApproximationScheme(), gum::ApproximationScheme::history(), and gum::learning::genericBNLearner::verbosity().

152 { return _verbosity; }
bool _verbosity
If true, verbosity is enabled.

◆ vertices()

template<typename GUM_SCALAR >
const std::vector< std::vector< GUM_SCALAR > > & gum::credal::InferenceEngine< GUM_SCALAR >::vertices ( const NodeId  id) const

Get the vertices of a given node id.

Parameters
id: The node id whose vertices we want.
Returns
A constant reference to this node's credal set vertices.

Definition at line 520 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets.

520  {
521  return _marginalSets[id];
522  }
credalSet _marginalSets
Credal sets vertices, if enabled.
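
A minimal sketch tying vertices() to storeVertices() (the node id is hypothetical and a concrete engine is assumed): vertex storage must be enabled before inference, otherwise the engine does not fill the credal sets.

    #include <agrum/CN/inferenceEngine.h>

    void collectCredalSets(gum::credal::InferenceEngine< double >& engine) {
      engine.storeVertices(true);   // initializes the (empty) credal sets
      engine.makeInference();
      // Each inner vector is one vertex (a probability distribution over the
      // states) of the credal set of the hypothetical node 0.
      const auto& verts = engine.vertices(gum::NodeId(0));
      for (const auto& vertex : verts) {
        // use vertex ...
      }
    }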

Member Data Documentation

◆ _burn_in

◆ _credalNet

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR >* gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet
protected

A pointer to the Credal Net used.

Definition at line 72 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy(), gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__verticesSampling(), gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginals(), gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets(), gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit(), gum::credal::InferenceEngine< GUM_SCALAR >::_updateExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::credalNet(), gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax(), gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin(), gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine(), gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence(), gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidenceFile(), gum::credal::InferenceEngine< GUM_SCALAR >::insertModals(), gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery(), gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile(), gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax(), gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin(), gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals(), gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices(), and gum::credal::InferenceEngine< GUM_SCALAR >::toString().

◆ _current_epsilon

double gum::ApproximationScheme::_current_epsilon
protectedinherited

◆ _current_rate

double gum::ApproximationScheme::_current_rate
protectedinherited

◆ _current_state

◆ _current_step

◆ _dbnOpt

template<typename GUM_SCALAR >
VarMod2BNsMap< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::_dbnOpt
protected

◆ _dynamicExpMax

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax
protected

◆ _dynamicExpMin

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin
protected

◆ _enabled_eps

◆ _enabled_max_iter

bool gum::ApproximationScheme::_enabled_max_iter
protectedinherited

◆ _enabled_max_time

◆ _enabled_min_rate_eps

bool gum::ApproximationScheme::_enabled_min_rate_eps
protectedinherited

◆ _eps

double gum::ApproximationScheme::_eps
protectedinherited

◆ _evidence

◆ _expectationMax

◆ _expectationMin

◆ _history

std::vector< double > gum::ApproximationScheme::_history
protectedinherited

◆ _last_epsilon

double gum::ApproximationScheme::_last_epsilon
protectedinherited

Last epsilon value.

Definition at line 370 of file approximationScheme.h.

Referenced by gum::ApproximationScheme::continueApproximationScheme().

◆ _marginalMax

◆ _marginalMin

◆ _marginalSets

◆ _max_iter

Size gum::ApproximationScheme::_max_iter
protectedinherited

◆ _max_time

double gum::ApproximationScheme::_max_time
protectedinherited

◆ _min_rate_eps

double gum::ApproximationScheme::_min_rate_eps
protectedinherited

◆ _modal

◆ _oldMarginalMax

◆ _oldMarginalMin

◆ _period_size

Size gum::ApproximationScheme::_period_size
protectedinherited

◆ _query

◆ _repetitiveInd

◆ _storeBNOpt

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt
protected

◆ _storeVertices

◆ _t0

template<typename GUM_SCALAR >
cluster gum::credal::InferenceEngine< GUM_SCALAR >::_t0
protected

Clusters of nodes used with dynamic networks.

Any node key in _t0 is present at \( t = 0 \), and any node belonging to this key's node set shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 115 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy(), gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit(), and gum::credal::InferenceEngine< GUM_SCALAR >::getT0Cluster().

◆ _t1

template<typename GUM_SCALAR >
cluster gum::credal::InferenceEngine< GUM_SCALAR >::_t1
protected

Clusters of nodes used with dynamic networks.

Any node key in _t1 is present at \( t = 1 \), and any node belonging to this key's node set shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 122 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy(), gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit(), and gum::credal::InferenceEngine< GUM_SCALAR >::getT1Cluster().

◆ _timer

◆ _timeSteps

template<typename GUM_SCALAR >
int gum::credal::InferenceEngine< GUM_SCALAR >::_timeSteps
protected

The number of time steps of this network (only useful for dynamic networks).

Deprecated:

Definition at line 152 of file inferenceEngine.h.

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit().

◆ _verbosity

bool gum::ApproximationScheme::_verbosity
protectedinherited

If true, verbosity is enabled.

Definition at line 418 of file approximationScheme.h.

Referenced by gum::ApproximationScheme::setVerbosity(), and gum::ApproximationScheme::verbosity().

◆ onProgress

◆ onStop

Signaler1< std::string > gum::IApproximationSchemeConfiguration::onStop
inherited

Criteria messageApproximationScheme.

Definition at line 60 of file IApproximationSchemeConfiguration.h.

Referenced by gum::ApproximationScheme::_stopScheme(), and gum::learning::genericBNLearner::distributeStop().


The documentation for this class was generated from the following files: