aGrUM  0.15.1
gum::credal::CNLoopyPropagation< GUM_SCALAR > Class Template Reference

<agrum/CN/CNLoopyPropagation.h> More...

#include <CNLoopyPropagation.h>


Public Attributes

Signaler3< Size, double, double > onProgress
 Progression, error and time. More...
 
Signaler1< std::string > onStop
 Criteria messageApproximationScheme. More...
 

Public Member Functions

virtual void insertEvidenceFile (const std::string &path)
 Insert evidence from file. More...
 
Public algorithm methods
void makeInference ()
 Starts the inference. More...
 
Getters and setters
void inferenceType (InferenceType inft)
 Set the inference type. More...
 
InferenceType inferenceType ()
 Get the inference type. More...
 
Post-inference methods
void eraseAllEvidence ()
 Erase all inference related data to perform another one. More...
 
void saveInference (const std::string &path)
 
Constructors / Destructors
 CNLoopyPropagation (const CredalNet< GUM_SCALAR > &cnet)
 Constructor. More...
 
virtual ~CNLoopyPropagation ()
 Destructor. More...
 
Getters and setters
VarMod2BNsMap< GUM_SCALAR > * getVarMod2BNsMap ()
 Get optimum IBayesNet. More...
 
const CredalNet< GUM_SCALAR > & credalNet ()
 Get this credal network. More...
 
const NodeProperty< std::vector< NodeId > > & getT0Cluster () const
 Get the _t0 cluster. More...
 
const NodeProperty< std::vector< NodeId > > & getT1Cluster () const
 Get the _t1 cluster. More...
 
void setRepetitiveInd (const bool repetitive)
 
void storeVertices (const bool value)
 
bool storeVertices () const
 Returns true if credal set vertices are stored, false otherwise. More...
 
void storeBNOpt (const bool value)
 
bool storeBNOpt () const
 
bool repetitiveInd () const
 Get the current independence status. More...
 
Pre-inference initialization methods
void insertModalsFile (const std::string &path)
 Insert variables modalities from file to compute expectations. More...
 
void insertModals (const std::map< std::string, std::vector< GUM_SCALAR > > &modals)
 Insert variables modalities from map to compute expectations. More...
 
void insertEvidence (const std::map< std::string, std::vector< GUM_SCALAR > > &eviMap)
 Insert evidence from map. More...
 
void insertEvidence (const NodeProperty< std::vector< GUM_SCALAR > > &evidence)
 Insert evidence from Property. More...
 
void insertQueryFile (const std::string &path)
 Insert query variables states from file. More...
 
void insertQuery (const NodeProperty< std::vector< bool > > &query)
 Insert query variables and states from Property. More...
 
Post-inference methods
const std::vector< GUM_SCALAR > & marginalMin (const NodeId id) const
 Get the lower marginals of a given node id. More...
 
const std::vector< GUM_SCALAR > & marginalMin (const std::string &varName) const
 Get the lower marginals of a given variable name. More...
 
const std::vector< GUM_SCALAR > & marginalMax (const NodeId id) const
 Get the upper marginals of a given node id. More...
 
const std::vector< GUM_SCALAR > & marginalMax (const std::string &varName) const
 Get the upper marginals of a given variable name. More...
 
const GUM_SCALAR & expectationMin (const NodeId id) const
 Get the lower expectation of a given node id. More...
 
const GUM_SCALAR & expectationMin (const std::string &varName) const
 Get the lower expectation of a given variable name. More...
 
const GUM_SCALAR & expectationMax (const NodeId id) const
 Get the upper expectation of a given node id. More...
 
const GUM_SCALAR & expectationMax (const std::string &varName) const
 Get the upper expectation of a given variable name. More...
 
const std::vector< GUM_SCALAR > & dynamicExpMin (const std::string &varName) const
 Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. More...
 
const std::vector< GUM_SCALAR > & dynamicExpMax (const std::string &varName) const
 Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. More...
 
const std::vector< std::vector< GUM_SCALAR > > & vertices (const NodeId id) const
 Get the vertices of a given node id. More...
 
void saveMarginals (const std::string &path) const
 Saves marginals to file. More...
 
void saveExpectations (const std::string &path) const
 Saves expectations to file. More...
 
void saveVertices (const std::string &path) const
 Saves vertices to file. More...
 
void dynamicExpectations ()
 Compute dynamic expectations. More...
 
std::string toString () const
 Print all nodes marginals to standard output. More...
 
const std::string getApproximationSchemeMsg ()
 Get approximation scheme state. More...
 
Getters and setters
void setEpsilon (double eps)
 Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|. More...
 
double epsilon () const
 Returns the value of epsilon. More...
 
void disableEpsilon ()
 Disable stopping criterion on epsilon. More...
 
void enableEpsilon ()
 Enable stopping criterion on epsilon. More...
 
bool isEnabledEpsilon () const
 Returns true if stopping criterion on epsilon is enabled, false otherwise. More...
 
void setMinEpsilonRate (double rate)
 Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|). More...
 
double minEpsilonRate () const
 Returns the value of the minimal epsilon rate. More...
 
void disableMinEpsilonRate ()
 Disable stopping criterion on epsilon rate. More...
 
void enableMinEpsilonRate ()
 Enable stopping criterion on epsilon rate. More...
 
bool isEnabledMinEpsilonRate () const
 Returns true if stopping criterion on epsilon rate is enabled, false otherwise. More...
 
void setMaxIter (Size max)
 Stopping criterion on number of iterations. More...
 
Size maxIter () const
 Returns the criterion on number of iterations. More...
 
void disableMaxIter ()
 Disable stopping criterion on max iterations. More...
 
void enableMaxIter ()
 Enable stopping criterion on max iterations. More...
 
bool isEnabledMaxIter () const
 Returns true if stopping criterion on max iterations is enabled, false otherwise. More...
 
void setMaxTime (double timeout)
 Stopping criterion on timeout. More...
 
double maxTime () const
 Returns the timeout (in seconds). More...
 
double currentTime () const
 Returns the current running time in seconds. More...
 
void disableMaxTime ()
 Disable stopping criterion on timeout. More...
 
void enableMaxTime ()
 Enable stopping criterion on timeout. More...
 
bool isEnabledMaxTime () const
 Returns true if stopping criterion on timeout is enabled, false otherwise. More...
 
void setPeriodSize (Size p)
 Number of iterations between two tests of the stopping criteria. More...
 
Size periodSize () const
 Returns the period size. More...
 
void setVerbosity (bool v)
 Set the verbosity on (true) or off (false). More...
 
bool verbosity () const
 Returns true if verbosity is enabled. More...
 
ApproximationSchemeSTATE stateApproximationScheme () const
 Returns the approximation scheme state. More...
 
Size nbrIterations () const
 Returns the number of iterations. More...
 
const std::vector< double > & history () const
 Returns the scheme history. More...
 
void initApproximationScheme ()
 Initialise the scheme. More...
 
bool startOfPeriod ()
 Returns true if we are at the beginning of a period (compute error is mandatory). More...
 
void updateApproximationScheme (unsigned int incr=1)
 Update the scheme w.r.t the new error and increment steps. More...
 
Size remainingBurnIn ()
 Returns the remaining burn in. More...
 
void stopApproximationScheme ()
 Stop the approximation scheme. More...
 
bool continueApproximationScheme (double error)
 Update the scheme w.r.t the new error. More...
 
Getters and setters
std::string messageApproximationScheme () const
 Returns the approximation scheme message. More...
 

Public Types

enum  InferenceType : char { InferenceType::nodeToNeighbours, InferenceType::ordered, InferenceType::randomOrder }
 Inference type to be used by the algorithm. More...
 
using msg = std::vector< Potential< GUM_SCALAR > *>
 
using cArcP = const Arc *
 
enum  ApproximationSchemeSTATE : char {
  ApproximationSchemeSTATE::Undefined, ApproximationSchemeSTATE::Continue, ApproximationSchemeSTATE::Epsilon, ApproximationSchemeSTATE::Rate,
  ApproximationSchemeSTATE::Limit, ApproximationSchemeSTATE::TimeLimit, ApproximationSchemeSTATE::Stopped
}
 The different state of an approximation scheme. More...
 

Protected Attributes

NodeProperty< bool > _update_p
 Used to keep track of which node needs to update its information coming from its parents. More...
 
NodeProperty< bool > _update_l
 Used to keep track of which node needs to update its information coming from its children. More...
 
NodeSet active_nodes_set
 The current node-set to iterate through at this current step. More...
 
NodeSet next_active_nodes_set
 The next node-set, i.e. More...
 
NodeProperty< NodeSet *> _msg_l_sent
 Used to keep track of the messages a node has sent to its parents. More...
 
ArcProperty< GUM_SCALAR > _ArcsL_min
 "Lower" information \( \Lambda \) coming from one's children. More...
 
ArcProperty< GUM_SCALAR > _ArcsP_min
 "Lower" information \( \pi \) coming from one's parent. More...
 
NodeProperty< GUM_SCALAR > _NodesL_min
 "Lower" node information \( \Lambda \) obtained by combinaison of children messages. More...
 
NodeProperty< GUM_SCALAR > _NodesP_min
 "Lower" node information \( \pi \) obtained by combinaison of parent's messages. More...
 
ArcProperty< GUM_SCALAR > _ArcsL_max
 "Upper" information \( \Lambda \) coming from one's children. More...
 
ArcProperty< GUM_SCALAR > _ArcsP_max
 "Upper" information \( \pi \) coming from one's parent. More...
 
NodeProperty< GUM_SCALAR > _NodesL_max
 "Upper" node information \( \Lambda \) obtained by combinaison of children messages. More...
 
NodeProperty< GUM_SCALAR > _NodesP_max
 "Upper" node information \( \pi \) obtained by combinaison of parent's messages. More...
 
bool _InferenceUpToDate
 TRUE if inference has already been performed, FALSE otherwise. More...
 
const CredalNet< GUM_SCALAR > * _credalNet
 A pointer to the Credal Net used. More...
 
margi _oldMarginalMin
 Old lower marginals used to compute epsilon. More...
 
margi _oldMarginalMax
 Old upper marginals used to compute epsilon. More...
 
margi _marginalMin
 Lower marginals. More...
 
margi _marginalMax
 Upper marginals. More...
 
credalSet _marginalSets
 Credal sets vertices, if enabled. More...
 
expe _expectationMin
 Lower expectations, if some variables modalities were inserted. More...
 
expe _expectationMax
 Upper expectations, if some variables modalities were inserted. More...
 
dynExpe _dynamicExpMin
 Lower dynamic expectations. More...
 
dynExpe _dynamicExpMax
 Upper dynamic expectations. More...
 
dynExpe _modal
 Variables modalities used to compute expectations. More...
 
margi _evidence
 Holds observed variables states. More...
 
query _query
 Holds the query nodes states. More...
 
cluster _t0
 Clusters of nodes used with dynamic networks. More...
 
cluster _t1
 Clusters of nodes used with dynamic networks. More...
 
bool _storeVertices
 True if credal sets vertices are stored, False otherwise. More...
 
bool _repetitiveInd
 True if using repetitive independence ( dynamic network only ), False otherwise. More...
 
bool _storeBNOpt
 True if optimal IBayesNets are stored during inference, False otherwise. More...
 
VarMod2BNsMap< GUM_SCALAR > _dbnOpt
 Object used to efficiently store optimal IBayesNets during inference, for some algorithms. More...
 
int _timeSteps
 The number of time steps of this network (only useful for dynamic networks). More...
 
double _current_epsilon
 Current epsilon. More...
 
double _last_epsilon
 Last epsilon value. More...
 
double _current_rate
 Current rate. More...
 
Size _current_step
 The current step. More...
 
Timer _timer
 The timer. More...
 
ApproximationSchemeSTATE _current_state
 The current state. More...
 
std::vector< double > _history
 The scheme history, used only if verbosity == true. More...
 
double _eps
 Threshold for convergence. More...
 
bool _enabled_eps
 If true, the threshold convergence is enabled. More...
 
double _min_rate_eps
 Threshold for the epsilon rate. More...
 
bool _enabled_min_rate_eps
 If true, the minimal threshold for epsilon rate is enabled. More...
 
double _max_time
 The timeout. More...
 
bool _enabled_max_time
 If true, the timeout is enabled. More...
 
Size _max_iter
 The maximum iterations. More...
 
bool _enabled_max_iter
 If true, the maximum iterations stopping criterion is enabled. More...
 
Size _burn_in
 Number of iterations before checking stopping criteria. More...
 
Size _period_size
 Checking criteria frequency. More...
 
bool _verbosity
 If true, verbosity is enabled. More...
 

Protected Member Functions

Protected initialization methods
void _initialize ()
 Topological forward propagation to initialize old marginals & messages. More...
 
Protected algorithm methods
void _makeInferenceNodeToNeighbours ()
 Starts the inference with this inference type. More...
 
void _makeInferenceByOrderedArcs ()
 Starts the inference with this inference type. More...
 
void _makeInferenceByRandomOrder ()
 Starts the inference with this inference type. More...
 
void _updateMarginals ()
 Compute marginals from up-to-date messages. More...
 
void _msgL (const NodeId X, const NodeId demanding_parent)
 Sends a message to one's parent, i.e. More...
 
void _compute_ext (GUM_SCALAR &msg_l_min, GUM_SCALAR &msg_l_max, std::vector< GUM_SCALAR > &lx, GUM_SCALAR &num_min, GUM_SCALAR &num_max, GUM_SCALAR &den_min, GUM_SCALAR &den_max)
 Used by _msgL. More...
 
void _compute_ext (std::vector< std::vector< GUM_SCALAR > > &combi_msg_p, const NodeId &id, GUM_SCALAR &msg_l_min, GUM_SCALAR &msg_l_max, std::vector< GUM_SCALAR > &lx, const Idx &pos)
 Used by _msgL. More...
 
void _enum_combi (std::vector< std::vector< std::vector< GUM_SCALAR > > > &msgs_p, const NodeId &id, GUM_SCALAR &msg_l_min, GUM_SCALAR &msg_l_max, std::vector< GUM_SCALAR > &lx, const Idx &pos)
 Used by _msgL. More...
 
void _msgP (const NodeId X, const NodeId demanding_child)
 Sends a message to one's child, i.e. More...
 
void _enum_combi (std::vector< std::vector< std::vector< GUM_SCALAR > > > &msgs_p, const NodeId &id, GUM_SCALAR &msg_p_min, GUM_SCALAR &msg_p_max)
 Used by _msgP. More...
 
void _compute_ext (std::vector< std::vector< GUM_SCALAR > > &combi_msg_p, const NodeId &id, GUM_SCALAR &msg_p_min, GUM_SCALAR &msg_p_max)
 Used by _msgP. More...
 
void _refreshLMsPIs (bool refreshIndic=false)
 Get the last messages from one's parents and children. More...
 
GUM_SCALAR _calculateEpsilon ()
 Compute epsilon. More...
 
Post-inference protected methods
void _computeExpectations ()
 Since the network is binary, expectations can be computed from the final marginals which give us the credal set vertices. More...
 
void _updateIndicatrices ()
 Only update indicatrices variables at the end of computations ( calls _msgP ). More...
 
Protected initialization methods
void _repetitiveInit ()
 Initialize _t0 and _t1 clusters. More...
 
void _initExpectations ()
 Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality. More...
 
void _initMarginals ()
 Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0. More...
 
void _initMarginalSets ()
 Initialize credal set vertices with empty sets. More...
 
Protected algorithms methods
const GUM_SCALAR _computeEpsilon ()
 Compute approximation scheme epsilon using the old marginals and the new ones. More...
 
void _updateExpectations (const NodeId &id, const std::vector< GUM_SCALAR > &vertex)
 Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations. More...
 
void _updateCredalSets (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Given a node id and one of its possible vertices, update its credal set. More...
 
Protected post-inference methods
void _dynamicExpectations ()
 Rearrange lower and upper expectations to suit dynamic networks. More...
 

Detailed Description

template<typename GUM_SCALAR>
class gum::credal::CNLoopyPropagation< GUM_SCALAR >

<agrum/CN/CNLoopyPropagation.h>

Class implementing loopy propagation for binary credal networks - the L2U algorithm.

Template Parameters
GUM_SCALAR  A floating-point type ( float, double, long double ... ).
Author
Matthieu HOURBRACQ and Pierre-Henri WUILLEMIN

Definition at line 58 of file CNLoopyPropagation.h.

Member Typedef Documentation

◆ __infE

template<typename GUM_SCALAR >
using gum::credal::CNLoopyPropagation< GUM_SCALAR >::__infE = InferenceEngine< GUM_SCALAR >
private

To easily access InferenceEngine< GUM_SCALAR > methods.

Definition at line 368 of file CNLoopyPropagation.h.

◆ cArcP

template<typename GUM_SCALAR >
using gum::credal::CNLoopyPropagation< GUM_SCALAR >::cArcP = const Arc*

Definition at line 61 of file CNLoopyPropagation.h.

◆ msg

template<typename GUM_SCALAR >
using gum::credal::CNLoopyPropagation< GUM_SCALAR >::msg = std::vector< Potential< GUM_SCALAR >* >

Definition at line 60 of file CNLoopyPropagation.h.

Member Enumeration Documentation

◆ ApproximationSchemeSTATE

The different state of an approximation scheme.

Enumerator
Undefined 
Continue 
Epsilon 
Rate 
Limit 
TimeLimit 
Stopped 

Definition at line 65 of file IApproximationSchemeConfiguration.h.

: char {
  Undefined,
  Continue,
  Epsilon,
  Rate,
  Limit,
  TimeLimit,
  Stopped
};

◆ InferenceType

template<typename GUM_SCALAR >
enum gum::credal::CNLoopyPropagation::InferenceType : char
strong

Inference type to be used by the algorithm.

Enumerator
nodeToNeighbours 

Uses a node-set so we don't iterate on nodes that can't send a new message.

Should be the fastest inference type. A step is going through the node-set.

ordered 

Chooses an arc ordering and sends messages accordingly at all steps.

Avoid it since it can give slightly worse results than other inference types. A step is going through all arcs.

randomOrder 

Chooses a random arc ordering and sends messages accordingly.

A new order is set at each step. A step is going through all arcs.

Definition at line 66 of file CNLoopyPropagation.h.

: char {
  nodeToNeighbours,
  ordered,
  randomOrder
};

Constructor & Destructor Documentation

◆ CNLoopyPropagation()

template<typename GUM_SCALAR >
gum::credal::CNLoopyPropagation< GUM_SCALAR >::CNLoopyPropagation ( const CredalNet< GUM_SCALAR > &  cnet)
explicit

Constructor.

Parameters
cnet  The CredalNet to be used with this algorithm.

Definition at line 1508 of file CNLoopyPropagation_tpl.h.

References gum::credal::CNLoopyPropagation< GUM_SCALAR >::__bnet, gum::credal::CNLoopyPropagation< GUM_SCALAR >::__cn, gum::credal::CNLoopyPropagation< GUM_SCALAR >::__inferenceType, gum::credal::CNLoopyPropagation< GUM_SCALAR >::_InferenceUpToDate, gum::credal::CredalNet< GUM_SCALAR >::current_bn(), GUM_ERROR, gum::credal::CredalNet< GUM_SCALAR >::hasComputedCPTMinMax(), gum::credal::CredalNet< GUM_SCALAR >::isSeparatelySpecified(), and gum::credal::CNLoopyPropagation< GUM_SCALAR >::nodeToNeighbours.

: InferenceEngine< GUM_SCALAR >(cnet) {
  if (!cnet.isSeparatelySpecified()) {
    GUM_ERROR(OperationNotAllowed,
              "CNLoopyPropagation is only available "
              "with separately specified nets");
  }

  // test for binary cn
  for (auto node : cnet.current_bn().nodes())
    if (cnet.current_bn().variable(node).domainSize() != 2) {
      GUM_ERROR(OperationNotAllowed,
                "CNLoopyPropagation is only available "
                "with binary credal networks");
    }

  // test if computeCPTMinMax() has been called
  if (!cnet.hasComputedCPTMinMax()) {
    GUM_ERROR(OperationNotAllowed,
              "CNLoopyPropagation only works when "
              "\"computeCPTMinMax()\" has been called for "
              "this credal net");
  }

  __cn = &cnet;
  __bnet = &cnet.current_bn();

  __inferenceType = InferenceType::nodeToNeighbours;
  _InferenceUpToDate = false;

  GUM_CONSTRUCTOR(CNLoopyPropagation);
}

◆ ~CNLoopyPropagation()

template<typename GUM_SCALAR >
gum::credal::CNLoopyPropagation< GUM_SCALAR >::~CNLoopyPropagation ( )
virtual

Destructor.

Definition at line 1543 of file CNLoopyPropagation_tpl.h.

References gum::credal::CNLoopyPropagation< GUM_SCALAR >::__bnet, gum::credal::CNLoopyPropagation< GUM_SCALAR >::_InferenceUpToDate, and gum::credal::CNLoopyPropagation< GUM_SCALAR >::_msg_l_sent.

{
  _InferenceUpToDate = false;

  if (_msg_l_sent.size() > 0) {
    for (auto node : __bnet->nodes()) {
      delete _msg_l_sent[node];
    }
  }

  //_msg_l_sent.clear();
  //_update_l.clear();
  //_update_p.clear();

  GUM_DESTRUCTOR(CNLoopyPropagation);
}

Member Function Documentation

◆ _calculateEpsilon()

template<typename GUM_SCALAR >
GUM_SCALAR gum::credal::CNLoopyPropagation< GUM_SCALAR >::_calculateEpsilon ( )
protected

Compute epsilon.

Returns
Epsilon.

Definition at line 1457 of file CNLoopyPropagation_tpl.h.

{
  _refreshLMsPIs();
  _updateMarginals();

  return __infE::_computeEpsilon();
}

◆ _compute_ext() [1/3]

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_compute_ext ( GUM_SCALAR &  msg_l_min,
GUM_SCALAR &  msg_l_max,
std::vector< GUM_SCALAR > &  lx,
GUM_SCALAR &  num_min,
GUM_SCALAR &  num_max,
GUM_SCALAR &  den_min,
GUM_SCALAR &  den_max 
)
protected

Used by _msgL.

For the following functions, the GUM_SCALAR min/max must be initialized (min to 1 and max to 0) so they can be compared with intermediate results.

Compute the final message for the given parent's message and likelihood (children's messages), numerators & denominators.

Parameters
msg_l_min  The reference to the current lower value of the message to be sent.
msg_l_max  The reference to the current upper value of the message to be sent.
lx  The lower and upper likelihood.
num_min  The reference to the previously computed lower numerator.
num_max  The reference to the previously computed upper numerator.
den_min  The reference to the previously computed lower denominator.
den_max  The reference to the previously computed upper denominator.

Once the CPTs have been marginalized over X and Ui, the min/max are computed.

Definition at line 181 of file CNLoopyPropagation_tpl.h.

References _INF.

{
  GUM_SCALAR num_min_tmp = 1.;
  GUM_SCALAR den_min_tmp = 1.;
  GUM_SCALAR num_max_tmp = 1.;
  GUM_SCALAR den_max_tmp = 1.;

  GUM_SCALAR res_min = 1.0, res_max = 0.0;

  auto lsize = lx.size();

  for (decltype(lsize) i = 0; i < lsize; i++) {
    bool non_defini_min = false;
    bool non_defini_max = false;

    if (lx[i] == _INF) {
      num_min_tmp = num_min;
      den_min_tmp = den_max;
      num_max_tmp = num_max;
      den_max_tmp = den_min;
    } else if (lx[i] == (GUM_SCALAR)1.) {
      num_min_tmp = GUM_SCALAR(1.);
      den_min_tmp = GUM_SCALAR(1.);
      num_max_tmp = GUM_SCALAR(1.);
      den_max_tmp = GUM_SCALAR(1.);
    } else if (lx[i] > (GUM_SCALAR)1.) {
      GUM_SCALAR li = GUM_SCALAR(1.) / (lx[i] - GUM_SCALAR(1.));
      num_min_tmp = num_min + li;
      den_min_tmp = den_max + li;
      num_max_tmp = num_max + li;
      den_max_tmp = den_min + li;
    } else if (lx[i] < (GUM_SCALAR)1.) {
      GUM_SCALAR li = GUM_SCALAR(1.) / (lx[i] - GUM_SCALAR(1.));
      num_min_tmp = num_max + li;
      den_min_tmp = den_min + li;
      num_max_tmp = num_min + li;
      den_max_tmp = den_max + li;
    }

    if (den_min_tmp == 0. && num_min_tmp == 0.) {
      non_defini_min = true;
    } else if (den_min_tmp == 0. && num_min_tmp != 0.) {
      res_min = _INF;
    } else if (den_min_tmp != _INF || num_min_tmp != _INF) {
      res_min = num_min_tmp / den_min_tmp;
    }

    if (den_max_tmp == 0. && num_max_tmp == 0.) {
      non_defini_max = true;
    } else if (den_max_tmp == 0. && num_max_tmp != 0.) {
      res_max = _INF;
    } else if (den_max_tmp != _INF || num_max_tmp != _INF) {
      res_max = num_max_tmp / den_max_tmp;
    }

    if (non_defini_max && non_defini_min) {
      std::cout << "undefined msg" << std::endl;
      continue;
    } else if (non_defini_min && !non_defini_max) {
      res_min = res_max;
    } else if (non_defini_max && !non_defini_min) {
      res_max = res_min;
    }

    if (res_min < 0.) { res_min = 0.; }

    if (res_max < 0.) { res_max = 0.; }

    if (msg_l_min == msg_l_max && msg_l_min == -2.) {
      msg_l_min = res_min;
      msg_l_max = res_max;
    }

    if (res_max > msg_l_max) { msg_l_max = res_max; }

    if (res_min < msg_l_min) { msg_l_min = res_min; }

  } // end of : for each lx
}
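The lx > 1 and lx < 1 branches above pair each numerator bound with the opposite denominator bound. As a rough, self-contained illustration (plain C++, not aGrUM code; the function name messageBounds is invented here), the update for a finite likelihood value can be sketched as:

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Hypothetical helper (not part of aGrUM): sketches how _compute_ext derives
// candidate message bounds from a finite likelihood lx != 1. With
// li = 1/(lx - 1), the lower bound pairs the smallest shifted numerator with
// the largest shifted denominator, and conversely for the upper bound.
std::pair< double, double > messageBounds(double lx,
                                          double num_min, double num_max,
                                          double den_min, double den_max) {
  double li = 1. / (lx - 1.);
  if (lx > 1.) {
    return { (num_min + li) / (den_max + li),    // res_min
             (num_max + li) / (den_min + li) };  // res_max
  }
  // lx < 1: li is negative, so the roles of the numerator bounds swap
  return { (num_max + li) / (den_min + li),
           (num_min + li) / (den_max + li) };
}
```

The real member function additionally handles lx == 1, infinite likelihoods, the 0/0 undefined cases and the accumulation into msg_l_min / msg_l_max shown in the listing.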

◆ _compute_ext() [2/3]

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_compute_ext ( std::vector< std::vector< GUM_SCALAR > > &  combi_msg_p,
const NodeId id,
GUM_SCALAR &  msg_l_min,
GUM_SCALAR &  msg_l_max,
std::vector< GUM_SCALAR > &  lx,
const Idx pos 
)
protected

Used by _msgL.

Extrema for one combination of the parents; message sent to a parent.

Compute the numerators & denominators for the given parent's message and likelihood (children's messages). Marginalisation.

Parameters
combi_msg_p  The parents' chosen messages.
id  The constant id of the node sending the message.
msg_l_min  The reference to the current lower value of the message to be sent.
msg_l_max  The reference to the current upper value of the message to be sent.
lx  The lower and upper likelihood.
pos  The position of the parent receiving the message in the CPT of the sending node ( first parent, second ... ).

Definition at line 271 of file CNLoopyPropagation_tpl.h.

{
  GUM_SCALAR num_min = 0.;
  GUM_SCALAR num_max = 0.;
  GUM_SCALAR den_min = 0.;
  GUM_SCALAR den_max = 0.;

  auto taille = combi_msg_p.size();

  std::vector< typename std::vector< GUM_SCALAR >::iterator > it(taille);

  for (decltype(taille) i = 0; i < taille; i++) {
    it[i] = combi_msg_p[i].begin();
  }

  Size pp = pos;

  Size combi_den = 0;
  Size combi_num = pp;

  // marginalisation
  while (it[taille - 1] != combi_msg_p[taille - 1].end()) {
    GUM_SCALAR prod = 1.;

    for (decltype(taille) k = 0; k < taille; k++) {
      prod *= *it[k];
    }

    den_min += (__cn->get_CPT_min()[id][combi_den] * prod);
    den_max += (__cn->get_CPT_max()[id][combi_den] * prod);

    num_min += (__cn->get_CPT_min()[id][combi_num] * prod);
    num_max += (__cn->get_CPT_max()[id][combi_num] * prod);

    combi_den++;
    combi_num++;

    if (combi_den % pp == 0) {
      combi_den += pp;
      combi_num += pp;
    }

    // incrementation
    ++it[0];

    for (decltype(taille) i = 0;
         (i < taille - 1) && (it[i] == combi_msg_p[i].end());
         ++i) {
      it[i] = combi_msg_p[i].begin();
      ++it[i + 1];
    }
  } // end of : marginalisation

  _compute_ext(msg_l_min, msg_l_max, lx, num_min, num_max, den_min, den_max);
}

◆ _compute_ext() [3/3]

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_compute_ext ( std::vector< std::vector< GUM_SCALAR > > &  combi_msg_p,
const NodeId id,
GUM_SCALAR &  msg_p_min,
GUM_SCALAR &  msg_p_max 
)
protected

Used by _msgP.

Extrema for one combination of the parents; message sent to a child. CPT marginalization.

Marginalisation.

Parameters
combi_msg_p  The parents' chosen messages.
id  The constant id of the node sending the message.
msg_p_min  The reference to the current lower value of the message to be sent.
msg_p_max  The reference to the current upper value of the message to be sent.

Definition at line 337 of file CNLoopyPropagation_tpl.h.

{
  GUM_SCALAR min = 0.;
  GUM_SCALAR max = 0.;

  auto taille = combi_msg_p.size();

  std::vector< typename std::vector< GUM_SCALAR >::iterator > it(taille);

  for (decltype(taille) i = 0; i < taille; i++) {
    it[i] = combi_msg_p[i].begin();
  }

  int combi = 0;
  auto theEnd = combi_msg_p[taille - 1].end();

  while (it[taille - 1] != theEnd) {
    GUM_SCALAR prod = 1.;

    for (decltype(taille) k = 0; k < taille; k++) {
      prod *= *it[k];
    }

    min += (__cn->get_CPT_min()[id][combi] * prod);
    max += (__cn->get_CPT_max()[id][combi] * prod);

    combi++;

    // incrementation
    ++it[0];

    for (decltype(taille) i = 0;
         (i < taille - 1) && (it[i] == combi_msg_p[i].end());
         ++i) {
      it[i] = combi_msg_p[i].begin();
      ++it[i + 1];
    }
  }

  if (min < msg_p_min) { msg_p_min = min; }

  if (max > msg_p_max) { msg_p_max = max; }
}
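Both marginalisation loops above enumerate every combination of parent-message entries by incrementing a vector of iterators like an odometer: the first index ticks fastest and carries over into the next one. A minimal self-contained sketch of that pattern (plain C++, not aGrUM code; allCombinationProducts is a name invented for this illustration, and it assumes at least one non-empty message vector):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Invented helper (not part of aGrUM): enumerates every combination of
// parent-message entries and returns the product of the chosen entries for
// each combination - exactly the "prod" that weights one CPT row in
// _compute_ext's marginalisation loops.
std::vector< double > allCombinationProducts(
    const std::vector< std::vector< double > >& combi_msg_p) {
  std::size_t n = combi_msg_p.size();
  std::vector< std::size_t > idx(n, 0);  // one index per parent message
  std::vector< double > products;

  while (idx[n - 1] < combi_msg_p[n - 1].size()) {
    double prod = 1.;
    for (std::size_t k = 0; k < n; k++) prod *= combi_msg_p[k][idx[k]];
    products.push_back(prod);

    ++idx[0];  // odometer-style incrementation, first index ticks fastest
    for (std::size_t i = 0; i + 1 < n && idx[i] == combi_msg_p[i].size(); ++i) {
      idx[i] = 0;
      ++idx[i + 1];
    }
  }
  return products;
}
```

For two binary parents this yields one product per CPT row, in the same row order as the combi counter of the member functions.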

◆ _computeEpsilon()

template<typename GUM_SCALAR >
const GUM_SCALAR gum::credal::InferenceEngine< GUM_SCALAR >::_computeEpsilon ( )
inline protected inherited

Compute approximation scheme epsilon using the old marginals and the new ones.

Highest delta on either lower or upper marginal is epsilon.

Also updates oldMarginals to current marginals.

Returns
Epsilon.

Definition at line 1016 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMin, and gum::HashTable< Key, Val, Alloc >::size().

{
  GUM_SCALAR eps = 0;
#pragma omp parallel
  {
    GUM_SCALAR tEps = 0;
    GUM_SCALAR delta;

    int nsize = int(_marginalMin.size());

#pragma omp for
    for (int i = 0; i < nsize; i++) {
      auto dSize = _marginalMin[i].size();

      for (Size j = 0; j < dSize; j++) {
        // on min
        delta = _marginalMin[i][j] - _oldMarginalMin[i][j];
        delta = (delta < 0) ? (-delta) : delta;
        tEps = (tEps < delta) ? delta : tEps;

        // on max
        delta = _marginalMax[i][j] - _oldMarginalMax[i][j];
        delta = (delta < 0) ? (-delta) : delta;
        tEps = (tEps < delta) ? delta : tEps;

        _oldMarginalMin[i][j] = _marginalMin[i][j];
        _oldMarginalMax[i][j] = _marginalMax[i][j];
      }
    } // end of : all variables

#pragma omp critical(epsilon_max)
    {
#pragma omp flush(eps)
      eps = (eps < tEps) ? tEps : eps;
    }
  }

  return eps;
}
margi _oldMarginalMin
Old lower marginals used to compute epsilon.
Size size() const noexcept
Returns the number of elements stored into the hashtable.
margi _marginalMin
Lower marginals.
margi _oldMarginalMax
Old upper marginals used to compute epsilon.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
margi _marginalMax
Upper marginals.
+ Here is the call graph for this function:

◆ _computeExpectations()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_computeExpectations ( )
protected

Since the network is binary, expectations can be computed from the final marginals which give us the credal set vertices.
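Concretely, for a binary node the two vertices pair the lower bound of one state with the upper bound of the other, as in the listing below. A small sketch of that construction (illustrative helper, not the aGrUM API):

```cpp
#include <array>
#include <cassert>

// For a binary variable with marginal bounds [min0, max0] on state 0 and
// [min1, max1] on state 1, the two credal-set vertices pair the lower
// bound of one state with the upper bound of the other.
using Vertex = std::array<double, 2>;

std::array<Vertex, 2> binary_vertices(double min0, double max0,
                                      double min1, double max1) {
  return {Vertex{min0, max1}, Vertex{max0, min1}};
}
```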

Definition at line 1482 of file CNLoopyPropagation_tpl.h.

1482  {
1483  if (__infE::_modal.empty()) { return; }
1484 
1485  std::vector< std::vector< GUM_SCALAR > > vertices(
1486  2, std::vector< GUM_SCALAR >(2));
1487 
1488  for (auto node : __bnet->nodes()) {
1489  vertices[0][0] = __infE::_marginalMin[node][0];
1490  vertices[0][1] = __infE::_marginalMax[node][1];
1491 
1492  vertices[1][0] = __infE::_marginalMax[node][0];
1493  vertices[1][1] = __infE::_marginalMin[node][1];
1494 
1495  for (auto vertex = 0, vend = 2; vertex != vend; vertex++) {
1496  __infE::_updateExpectations(node, vertices[vertex]);
1497  // test credal sets vertices elim
1498  // remove with L2U since variables are binary
1499  // but does the user know that ?
1500  __infE::_updateCredalSets(
1501  node,
1502  vertices[vertex]); // no redundancy elimination with 2 vertices
1503  }
1504  }
1505  }
margi _marginalMin
Lower marginals.
const std::vector< std::vector< GUM_SCALAR > > & vertices(const NodeId id) const
Get the vertices of a given node id.
const IBayesNet< GUM_SCALAR > * __bnet
A pointer to its IBayesNet used as a DAG.
dynExpe _modal
Variables modalities used to compute expectations.
void _updateCredalSets(const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
Given a node id and one of its possible vertices, update its credal set.
void _updateExpectations(const NodeId &id, const std::vector< GUM_SCALAR > &vertex)
Given a node id and one of its possible vertices obtained during inference, update this node's lower and...
margi _marginalMax
Upper marginals.

◆ _dynamicExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations ( )
protectedinherited

Rearrange lower and upper expectations to suit dynamic networks.
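The rearrangement relies on dynamic-network variable names of the form `base_timestep`, split at the first underscore as in the listing below. A hypothetical standalone helper showing that parsing step (not part of aGrUM):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>
#include <utility>

// Split a dynamic-network variable name such as "temp_3" into its base
// name and time step, mirroring the find_first_of("_") logic used by
// _dynamicExpectations.
std::pair<std::string, int> split_dyn_name(const std::string& full) {
  auto delim = full.find_first_of('_');
  std::string base = full.substr(0, delim);
  // atoi is used to match the listing; a name with no "_" yields step 0
  int step = std::atoi(full.substr(delim + 1).c_str());
  return {base, step};
}
```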

Definition at line 721 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, gum::credal::InferenceEngine< GUM_SCALAR >::_modal, and gum::HashTable< Key, Val, Alloc >::empty().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

721  {
722  // no modals, no expectations computed during inference
723  if (_expectationMin.empty() || _modal.empty()) return;
724 
725  // already called by the algorithm or the user
726  if (_dynamicExpMax.size() > 0 && _dynamicExpMin.size() > 0) return;
727 
728  // typedef typename std::map< int, GUM_SCALAR > innerMap;
729  using innerMap = typename gum::HashTable< int, GUM_SCALAR >;
730 
731  // typedef typename std::map< std::string, innerMap > outerMap;
732  using outerMap = typename gum::HashTable< std::string, innerMap >;
733 
734  // typedef typename std::map< std::string, std::vector< GUM_SCALAR > >
735  // mod;
736 
737  // if the network is not dynamic, save _expectationMin and _expectationMax
738  // directly
739  // (same result, but faster)
740  outerMap expectationsMin, expectationsMax;
741 
742  for (const auto& elt : _expectationMin) {
743  std::string var_name, time_step;
744 
745  var_name = _credalNet->current_bn().variable(elt.first).name();
746  auto delim = var_name.find_first_of("_");
747  time_step = var_name.substr(delim + 1, var_name.size());
748  var_name = var_name.substr(0, delim);
749 
750  // to be safe (don't store expectations of variables that are not
751  // monitored), although it
752  // should have been handled before this point
753  if (!_modal.exists(var_name)) continue;
754 
755  expectationsMin.getWithDefault(var_name, innerMap())
756  .getWithDefault(atoi(time_step.c_str()), 0) =
757  elt.second; // we iterate with min iterators
758  expectationsMax.getWithDefault(var_name, innerMap())
759  .getWithDefault(atoi(time_step.c_str()), 0) =
760  _expectationMax[elt.first];
761  }
762 
763  for (const auto& elt : expectationsMin) {
764  typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
765 
766  for (const auto& elt2 : elt.second)
767  dynExp[elt2.first] = elt2.second;
768 
769  _dynamicExpMin.insert(elt.first, dynExp);
770  }
771 
772  for (const auto& elt : expectationsMax) {
773  typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
774 
775  for (const auto& elt2 : elt.second) {
776  dynExp[elt2.first] = elt2.second;
777  }
778 
779  _dynamicExpMax.insert(elt.first, dynExp);
780  }
781  }
dynExpe _dynamicExpMin
Lower dynamic expectations.
expe _expectationMax
Upper expectations, if some variables modalities were inserted.
The class for generic Hash Tables.
Definition: hashTable.h:679
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
dynExpe _dynamicExpMax
Upper dynamic expectations.
dynExpe _modal
Variables modalities used to compute expectations.
expe _expectationMin
Lower expectations, if some variables modalities were inserted.
bool empty() const noexcept
Indicates whether the hash table is empty.
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _enum_combi() [1/2]

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_enum_combi ( std::vector< std::vector< std::vector< GUM_SCALAR > > > &  msgs_p,
const NodeId id,
GUM_SCALAR &  msg_l_min,
GUM_SCALAR &  msg_l_max,
std::vector< GUM_SCALAR > &  lx,
const Idx pos 
)
protected

Used by _msgL.

As before, but for a parent message; the likelihood is taken into account.

Enumerate the parents' messages.

Parameters
msgs_pAll the messages from the parents which will be enumerated.
idThe constant id of the node sending the message.
msg_l_minThe reference to the current lower value of the message to be sent.
msg_l_maxThe reference to the current upper value of the message to be sent.
lxThe lower and upper likelihood.
posThe position of the parent node to receive the message in the CPT of the one sending the message ( first parent, second ... ).
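The listing below initializes the message bounds to the sentinel -2 ("not computed yet") and, in its final critical section, only accepts strictly positive thread-local candidates. That merge rule can be sketched in isolation (illustrative helper, not part of aGrUM):

```cpp
#include <cassert>

// A thread-local candidate replaces the shared bound only if the shared
// bound is still the sentinel -2 or the candidate improves it, and the
// candidate is strictly positive (cf. the critical section below).
void merge_l_bounds(double& msg_l_min, double& msg_l_max,
                    double cand_min, double cand_max) {
  if ((msg_l_min > cand_min || msg_l_min == -2) && cand_min > 0) {
    msg_l_min = cand_min;
  }
  if ((msg_l_max < cand_max || msg_l_max == -2) && cand_max > 0) {
    msg_l_max = cand_max;
  }
}
```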

Definition at line 464 of file CNLoopyPropagation_tpl.h.

470  {
471  GUM_SCALAR msg_l_min = real_msg_l_min;
472  GUM_SCALAR msg_l_max = real_msg_l_max;
473 
474  auto taille = msgs_p.size();
475 
476  // one parent node, the one receiving the message
477  if (taille == 0) {
478  GUM_SCALAR num_min = __cn->get_CPT_min()[id][1];
479  GUM_SCALAR num_max = __cn->get_CPT_max()[id][1];
480  GUM_SCALAR den_min = __cn->get_CPT_min()[id][0];
481  GUM_SCALAR den_max = __cn->get_CPT_max()[id][0];
482 
483  _compute_ext(msg_l_min, msg_l_max, lx, num_min, num_max, den_min, den_max);
484 
485  real_msg_l_min = msg_l_min;
486  real_msg_l_max = msg_l_max;
487  return;
488  }
489 
490  decltype(taille) msgPerm = 1;
491 #pragma omp parallel
492  {
493  GUM_SCALAR msg_lmin = msg_l_min;
494  GUM_SCALAR msg_lmax = msg_l_max;
495  std::vector< std::vector< GUM_SCALAR > > combi_msg_p(taille);
496 
497  decltype(taille) confs = 1;
498 #pragma omp for
499 
500  for (int i = 0; i < int(taille); i++) {
501  confs *= msgs_p[i].size();
502  }
503 
504 #pragma omp atomic
505  msgPerm *= confs;
506 #pragma omp barrier
507 #pragma omp flush(msgPerm)
508 
509 // direct binary representation of config, no need for iterators
510 #pragma omp for
511 
512  for (long j = 0; j < long(msgPerm); j++) {
513  // get jth msg :
514  auto jvalue = j;
515 
516  for (decltype(taille) i = 0; i < taille; i++) {
517  if (msgs_p[i].size() == 2) {
518  combi_msg_p[i] = (jvalue & 1) ? msgs_p[i][1] : msgs_p[i][0];
519  jvalue /= 2;
520  } else {
521  combi_msg_p[i] = msgs_p[i][0];
522  }
523  }
524 
525  _compute_ext(combi_msg_p, id, msg_lmin, msg_lmax, lx, pos);
526  }
527 
528 // there may be more threads here than in the for loop, therefore the
529 // positivity test is NECESSARY (init is -2)
530 #pragma omp critical(msglminmax)
531  {
532 #pragma omp flush(msg_l_min)
533 #pragma omp flush(msg_l_max)
534 
535  if ((msg_l_min > msg_lmin || msg_l_min == -2) && msg_lmin > 0) {
536  msg_l_min = msg_lmin;
537  }
538 
539  if ((msg_l_max < msg_lmax || msg_l_max == -2) && msg_lmax > 0) {
540  msg_l_max = msg_lmax;
541  }
542  }
543  }
544 
545  real_msg_l_min = msg_l_min;
546  real_msg_l_max = msg_l_max;
547  }
const CredalNet< GUM_SCALAR > * __cn
A pointer to the CredalNet to be used.
void _compute_ext(GUM_SCALAR &msg_l_min, GUM_SCALAR &msg_l_max, std::vector< GUM_SCALAR > &lx, GUM_SCALAR &num_min, GUM_SCALAR &num_max, GUM_SCALAR &den_min, GUM_SCALAR &den_max)
Used by _msgL.

◆ _enum_combi() [2/2]

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_enum_combi ( std::vector< std::vector< std::vector< GUM_SCALAR > > > &  msgs_p,
const NodeId id,
GUM_SCALAR &  msg_p_min,
GUM_SCALAR &  msg_p_max 
)
protected

Used by _msgP.

Enumerates the combinations of parent messages, for the message sent to a child.

Enumerate the parents' messages.

Parameters
msgs_pAll the messages from the parents which will be enumerated.
idThe constant id of the node sending the message.
msg_p_minThe reference to the current lower value of the message to be sent.
msg_p_maxThe reference to the current upper value of the message to be sent.
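Because each parent contributes either one or two candidate messages, a combination can be encoded as an integer whose bits select among the two-candidate parents (the "direct binary representation of config" in the listing below). An isolated sketch of that decoding (illustrative names, not the aGrUM API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Decode the j-th combination of parent messages: a parent with two
// candidate messages consumes one bit of j, a parent with a single
// candidate always contributes message 0 (mirrors the jvalue loop).
std::vector<std::size_t> decode_combination(
    long j, const std::vector<std::size_t>& sizes) {
  std::vector<std::size_t> choice(sizes.size(), 0);
  for (std::size_t i = 0; i < sizes.size(); ++i) {
    if (sizes[i] == 2) {
      choice[i] = static_cast<std::size_t>(j & 1);
      j /= 2;
    }
  }
  return choice;
}
```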

Definition at line 388 of file CNLoopyPropagation_tpl.h.

392  {
393  auto taille = msgs_p.size();
394 
395  // source node
396  if (taille == 0) {
397  msg_p_min = __cn->get_CPT_min()[id][0];
398  msg_p_max = __cn->get_CPT_max()[id][0];
399  return;
400  }
401 
402  decltype(taille) msgPerm = 1;
403 #pragma omp parallel
404  {
405  GUM_SCALAR msg_pmin = msg_p_min;
406  GUM_SCALAR msg_pmax = msg_p_max;
407 
408  std::vector< std::vector< GUM_SCALAR > > combi_msg_p(taille);
409 
410  decltype(taille) confs = 1;
411 
412 #pragma omp for
413 
414  for (long i = 0; i < long(taille); i++) {
415  confs *= msgs_p[i].size();
416  }
417 
418 #pragma omp atomic
419  msgPerm *= confs;
420 #pragma omp barrier
421 #pragma omp \
422  flush // ( msgPerm ) let the compiler choose what to flush (due to MSVC)
423 
424 #pragma omp for
425 
426  for (int j = 0; j < int(msgPerm); j++) {
427  // get jth msg :
428  auto jvalue = j;
429 
430  for (decltype(taille) i = 0; i < taille; i++) {
431  if (msgs_p[i].size() == 2) {
432  combi_msg_p[i] = (jvalue & 1) ? msgs_p[i][1] : msgs_p[i][0];
433  jvalue /= 2;
434  } else {
435  combi_msg_p[i] = msgs_p[i][0];
436  }
437  }
438 
439  _compute_ext(combi_msg_p, id, msg_pmin, msg_pmax);
440  }
441 
442 // since min is _INF and max is 0 at init, there is no issue having more threads
443 // here
444 // than during for loop
445 #pragma omp critical(msgpminmax)
446  {
447 #pragma omp flush //( msg_p_min )
448  //#pragma omp flush ( msg_p_max ) let the compiler choose what to
449  // flush (due to MSVC)
450 
451  if (msg_p_min > msg_pmin) { msg_p_min = msg_pmin; }
452 
453  if (msg_p_max < msg_pmax) { msg_p_max = msg_pmax; }
454  }
455  }
456  return;
457  }
const CredalNet< GUM_SCALAR > * __cn
A pointer to the CredalNet to be used.
void _compute_ext(GUM_SCALAR &msg_l_min, GUM_SCALAR &msg_l_max, std::vector< GUM_SCALAR > &lx, GUM_SCALAR &num_min, GUM_SCALAR &num_max, GUM_SCALAR &den_min, GUM_SCALAR &den_max)
Used by _msgL.

◆ _initExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations ( )
protectedinherited

Initialize the lower and upper expectations before inference: the lower expectation is initialized with the highest modality and the upper expectation with the lowest modality.
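Assuming the modalities of a variable are stored in increasing order (as the `back()`/`front()` calls in the listing below suggest), this inverted initialization can be sketched as follows (hypothetical helper, not the aGrUM API):

```cpp
#include <cassert>
#include <vector>

// The lower expectation starts at the largest modality (back) and the
// upper expectation at the smallest (front), so inference can only move
// the two bounds toward each other. Assumes a non-empty, ascending list.
struct ExpBounds {
  double min, max;
};

ExpBounds init_expectations(const std::vector<double>& modalities) {
  return {modalities.back(), modalities.front()};
}
```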

Definition at line 695 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, gum::credal::InferenceEngine< GUM_SCALAR >::_modal, gum::HashTable< Key, Val, Alloc >::clear(), and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), gum::credal::InferenceEngine< GUM_SCALAR >::insertModals(), and gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile().

695  {
696  _expectationMin.clear();
697  _expectationMax.clear();
698 
699  if (_modal.empty()) return;
700 
701  for (auto node : _credalNet->current_bn().nodes()) {
702  std::string var_name, time_step;
703 
704  var_name = _credalNet->current_bn().variable(node).name();
705  auto delim = var_name.find_first_of("_");
706  var_name = var_name.substr(0, delim);
707 
708  if (!_modal.exists(var_name)) continue;
709 
710  _expectationMin.insert(node, _modal[var_name].back());
711  _expectationMax.insert(node, _modal[var_name].front());
712  }
713  }
expe _expectationMax
Upper expectations, if some variables modalities were inserted.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
dynExpe _modal
Variables modalities used to compute expectations.
void clear()
Removes all the elements in the hash table.
expe _expectationMin
Lower expectations, if some variables modalities were inserted.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _initialize()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_initialize ( )
protected

Topological forward propagation to initialize old marginals & messages.
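During this initialization the algorithm rejects soft evidence: for each observed binary node, the likelihood of state 1 must be exactly 0 or 1 (see the GUM_ERROR in the listing below). A sketch of that check (illustrative helper, not the aGrUM API):

```cpp
#include <cassert>
#include <vector>

// CNLoopyPropagation can only handle HARD evidence: for a binary node
// the likelihood of state 1 must be exactly 0 or 1.
bool is_hard_evidence(const std::vector<double>& likelihood) {
  return likelihood[1] == 0.0 || likelihood[1] == 1.0;
}
```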

Definition at line 605 of file CNLoopyPropagation_tpl.h.

References _INF, gum::ArcGraphPart::children(), GUM_ERROR, gum::Set< Key, Alloc >::insert(), gum::ArcGraphPart::parents(), and gum::Set< Key, Alloc >::size().

605  {
606  const DAG& graphe = __bnet->dag();
607 
608  // use const iterators with cbegin when available
609  for (auto node : __bnet->topologicalOrder()) {
610  _update_p.set(node, false);
611  _update_l.set(node, false);
612  NodeSet* _parents = new NodeSet();
613  _msg_l_sent.set(node, _parents);
614 
615  // speed up initialization for evidence nodes
616  if (__infE::_evidence.exists(node)) {
617  if (__infE::_evidence[node][1] != 0.
618  && __infE::_evidence[node][1] != 1.) {
619  GUM_ERROR(OperationNotAllowed,
620  "CNLoopyPropagation can only handle HARD evidences");
621  }
622 
623  active_nodes_set.insert(node);
624  _update_l.set(node, true);
625  _update_p.set(node, true);
626 
627  if (__infE::_evidence[node][1] == (GUM_SCALAR)1.) {
628  _NodesL_min.set(node, _INF);
629  _NodesP_min.set(node, (GUM_SCALAR)1.);
630  } else if (__infE::_evidence[node][1] == (GUM_SCALAR)0.) {
631  _NodesL_min.set(node, (GUM_SCALAR)0.);
632  _NodesP_min.set(node, (GUM_SCALAR)0.);
633  }
634 
635  std::vector< GUM_SCALAR > marg(2);
636  marg[1] = _NodesP_min[node];
637  marg[0] = 1 - marg[1];
638 
639  __infE::_oldMarginalMin.set(node, marg);
640  __infE::_oldMarginalMax.set(node, marg);
641 
642  continue;
643  }
644 
645  NodeSet _par = graphe.parents(node);
646  NodeSet _enf = graphe.children(node);
647 
648  if (_par.size() == 0) {
649  active_nodes_set.insert(node);
650  _update_p.set(node, true);
651  _update_l.set(node, true);
652  }
653 
654  if (_enf.size() == 0) {
655  active_nodes_set.insert(node);
656  _update_p.set(node, true);
657  _update_l.set(node, true);
658  }
659 
664  const auto parents = &__bnet->cpt(node).variablesSequence();
665 
666  std::vector< std::vector< std::vector< GUM_SCALAR > > > msgs_p;
667  std::vector< std::vector< GUM_SCALAR > > msg_p;
668  std::vector< GUM_SCALAR > distri(2);
669 
670  // +1 from start to avoid counting itself
671  // use const iterators when available with cbegin
672  for (auto jt = ++parents->begin(), theEnd = parents->end(); jt != theEnd;
673  ++jt) {
674  // compute probability distribution to avoid doing it multiple times
675  // (at
676  // each combination of messages)
677  distri[1] = _NodesP_min[__bnet->nodeId(**jt)];
678  distri[0] = (GUM_SCALAR)1. - distri[1];
679  msg_p.push_back(distri);
680 
681  if (_NodesP_max.exists(__bnet->nodeId(**jt))) {
682  distri[1] = _NodesP_max[__bnet->nodeId(**jt)];
683  distri[0] = (GUM_SCALAR)1. - distri[1];
684  msg_p.push_back(distri);
685  }
686 
687  msgs_p.push_back(msg_p);
688  msg_p.clear();
689  }
690 
691  GUM_SCALAR msg_p_min = 1.;
692  GUM_SCALAR msg_p_max = 0.;
693 
694  if (__cn->currentNodeType(node)
695  != CredalNet< GUM_SCALAR >::NodeType::Indic) {
696  _enum_combi(msgs_p, node, msg_p_min, msg_p_max);
697  }
698 
699  if (msg_p_min <= (GUM_SCALAR)0.) { msg_p_min = (GUM_SCALAR)0.; }
700 
701  if (msg_p_max <= (GUM_SCALAR)0.) { msg_p_max = (GUM_SCALAR)0.; }
702 
703  _NodesP_min.set(node, msg_p_min);
704  std::vector< GUM_SCALAR > marg(2);
705  marg[1] = msg_p_min;
706  marg[0] = 1 - msg_p_min;
707 
708  __infE::_oldMarginalMin.set(node, marg);
709 
710  if (msg_p_min != msg_p_max) {
711  marg[1] = msg_p_max;
712  marg[0] = 1 - msg_p_max;
713  _NodesP_max.insert(node, msg_p_max);
714  }
715 
716  __infE::_oldMarginalMax.set(node, marg);
717 
718  _NodesL_min.set(node, (GUM_SCALAR)1.);
719  }
720 
721  for (auto arc : __bnet->arcs()) {
722  _ArcsP_min.set(arc, _NodesP_min[arc.tail()]);
723 
724  if (_NodesP_max.exists(arc.tail())) {
725  _ArcsP_max.set(arc, _NodesP_max[arc.tail()]);
726  }
727 
728  _ArcsL_min.set(arc, _NodesL_min[arc.tail()]);
729  }
730  }
NodeProperty< bool > _update_p
Used to keep track of which node needs to update its information coming from its parents...
NodeProperty< GUM_SCALAR > _NodesP_min
"Lower" node information obtained by combination of the parents' messages.
margi _oldMarginalMin
Old lower marginals used to compute epsilon.
Set< NodeId > NodeSet
Some typedefs and defines for shortcuts ...
#define _INF
ArcProperty< GUM_SCALAR > _ArcsP_min
"Lower" information coming from one's parent.
NodeSet active_nodes_set
The current node-set to iterate through at this step.
const CredalNet< GUM_SCALAR > * __cn
A pointer to the CredalNet to be used.
margi _oldMarginalMax
Old upper marginals used to compute epsilon.
ArcProperty< GUM_SCALAR > _ArcsP_max
"Upper" information coming from one's parent.
const IBayesNet< GUM_SCALAR > * __bnet
A pointer to its IBayesNet used as a DAG.
void set(const Key &key, const Val &default_value)
Add a new property or modify it if it already existed.
ArcProperty< GUM_SCALAR > _ArcsL_min
"Lower" information coming from one's children.
void _enum_combi(std::vector< std::vector< std::vector< GUM_SCALAR > > > &msgs_p, const NodeId &id, GUM_SCALAR &msg_l_min, GUM_SCALAR &msg_l_max, std::vector< GUM_SCALAR > &lx, const Idx &pos)
Used by _msgL.
NodeProperty< NodeSet *> _msg_l_sent
Used to keep track of the messages a node has sent to its parents.
margi _evidence
Holds observed variables states.
NodeProperty< GUM_SCALAR > _NodesL_min
"Lower" node information obtained by combination of the children's messages.
NodeProperty< GUM_SCALAR > _NodesP_max
"Upper" node information obtained by combination of the parents' messages.
void insert(const Key &k)
Inserts a new element into the set.
Definition: set_tpl.h:613
NodeProperty< bool > _update_l
Used to keep track of which node needs to update its information coming from its children...
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the call graph for this function:

◆ _initMarginals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginals ( )
protectedinherited

Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0.
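Starting with lower = 1 and upper = 0 guarantees that the first marginal actually computed tightens both bounds. A sketch of this inverted initialization (illustrative names, not the aGrUM API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Pre-inference marginal bounds: lower starts at 1 and upper at 0, so any
// computed marginal value necessarily updates both bounds on first touch.
struct Bounds {
  std::vector<double> min, max;
};

Bounds init_bounds(std::size_t domain_size) {
  return {std::vector<double>(domain_size, 1.0),
          std::vector<double>(domain_size, 0.0)};
}

// Fold one computed marginal value into the bounds of state j.
void update(Bounds& b, std::size_t j, double value) {
  if (value < b.min[j]) b.min[j] = value;
  if (value > b.max[j]) b.max[j] = value;
}
```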

Definition at line 663 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMin, gum::HashTable< Key, Val, Alloc >::clear(), and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), and gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine().

663  {
664  _marginalMin.clear();
665  _marginalMax.clear();
666  _oldMarginalMin.clear();
667  _oldMarginalMax.clear();
668 
669  for (auto node : _credalNet->current_bn().nodes()) {
670  auto dSize = _credalNet->current_bn().variable(node).domainSize();
671  _marginalMin.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
672  _oldMarginalMin.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
673 
674  _marginalMax.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
675  _oldMarginalMax.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
676  }
677  }
margi _oldMarginalMin
Old lower marginals used to compute epsilon.
margi _marginalMin
Lower marginals.
margi _oldMarginalMax
Old upper marginals used to compute epsilon.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
margi _marginalMax
Upper marginals.
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _initMarginalSets()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets ( )
protectedinherited

Initialize credal set vertices with empty sets.

Definition at line 680 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::HashTable< Key, Val, Alloc >::clear(), and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), and gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices().

680  {
681  _marginalSets.clear();
682 
683  if (!_storeVertices) return;
684 
685  for (auto node : _credalNet->current_bn().nodes())
686  _marginalSets.insert(node, std::vector< std::vector< GUM_SCALAR > >());
687  }
credalSet _marginalSets
Credal sets vertices, if enabled.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
void clear()
Removes all the elements in the hash table.
bool _storeVertices
True if credal sets vertices are stored, False otherwise.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _makeInferenceByOrderedArcs()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_makeInferenceByOrderedArcs ( )
protected

Starts the inference with this inference type.

Definition at line 824 of file CNLoopyPropagation_tpl.h.

References gum::credal::CredalNet< GUM_SCALAR >::currentNodeType().

824  {
825  Size nbrArcs = __bnet->dag().sizeArcs();
826 
827  std::vector< cArcP > seq;
828  seq.reserve(nbrArcs);
829 
830  for (const auto& arc : __bnet->arcs()) {
831  seq.push_back(&arc);
832  }
833 
834  GUM_SCALAR eps;
835  // validate TestSuite
836  __infE::continueApproximationScheme(1.);
837 
838  do {
839  for (const auto it : seq) {
840  if (__cn->currentNodeType(it->tail())
841  == CredalNet< GUM_SCALAR >::NodeType::Indic
842  || __cn->currentNodeType(it->head())
843  == CredalNet< GUM_SCALAR >::NodeType::Indic) {
844  continue;
845  }
846 
847  _msgP(it->tail(), it->head());
848  _msgL(it->head(), it->tail());
849  }
850 
851  eps = _calculateEpsilon();
852 
853  __infE::updateApproximationScheme();
854 
855  } while (__infE::continueApproximationScheme(eps));
856  }
const CredalNet< GUM_SCALAR > * __cn
A pointer to the CredalNet to be used.
const IBayesNet< GUM_SCALAR > * __bnet
A pointer to it&#39;s IBayesNet used as a DAG.
bool continueApproximationScheme(double error)
Update the scheme w.r.t the new error.
GUM_SCALAR _calculateEpsilon()
Compute epsilon.
void _msgL(const NodeId X, const NodeId demanding_parent)
Sends a message to one's parent, i.e.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
void _msgP(const NodeId X, const NodeId demanding_child)
Sends a message to one's child, i.e.
void updateApproximationScheme(unsigned int incr=1)
Update the scheme w.r.t the new error and increment steps.
+ Here is the call graph for this function:

◆ _makeInferenceByRandomOrder()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_makeInferenceByRandomOrder ( )
protected

Starts the inference with this inference type.

Definition at line 778 of file CNLoopyPropagation_tpl.h.

References gum::credal::CredalNet< GUM_SCALAR >::currentNodeType(), and gum::credal::lp::swap().

778  {
779  Size nbrArcs = __bnet->dag().sizeArcs();
780 
781  std::vector< cArcP > seq;
782  seq.reserve(nbrArcs);
783 
784  for (const auto& arc : __bnet->arcs()) {
785  seq.push_back(&arc);
786  }
787 
788  GUM_SCALAR eps;
789  // validate TestSuite
790  __infE::continueApproximationScheme(1.);
791 
792  do {
793  for (Size j = 0, theEnd = nbrArcs / 2; j < theEnd; j++) {
794  auto w1 = rand() % nbrArcs, w2 = rand() % nbrArcs;
795 
796  if (w1 == w2) { continue; }
797 
798  std::swap(seq[w1], seq[w2]);
799  }
800 
801  for (const auto it : seq) {
802  if (__cn->currentNodeType(it->tail())
803  == CredalNet< GUM_SCALAR >::NodeType::Indic
804  || __cn->currentNodeType(it->head())
805  == CredalNet< GUM_SCALAR >::NodeType::Indic) {
806  continue;
807  }
808 
809  _msgP(it->tail(), it->head());
810  _msgL(it->head(), it->tail());
811  }
812 
813  eps = _calculateEpsilon();
814 
815  __infE::updateApproximationScheme();
816 
817  } while (__infE::continueApproximationScheme(eps));
818  }
void swap(HashTable< LpCol, double > *&a, HashTable< LpCol, double > *&b)
Swap the addresses of two pointers to hashTables.
const CredalNet< GUM_SCALAR > * __cn
A pointer to the CredalNet to be used.
const IBayesNet< GUM_SCALAR > * __bnet
A pointer to it&#39;s IBayesNet used as a DAG.
bool continueApproximationScheme(double error)
Update the scheme w.r.t the new error.
GUM_SCALAR _calculateEpsilon()
Compute epsilon.
void _msgL(const NodeId X, const NodeId demanding_parent)
Sends a message to one's parent, i.e.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
void _msgP(const NodeId X, const NodeId demanding_child)
Sends a message to one's child, i.e.
void updateApproximationScheme(unsigned int incr=1)
Update the scheme w.r.t the new error and increment steps.
+ Here is the call graph for this function:

◆ _makeInferenceNodeToNeighbours()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_makeInferenceNodeToNeighbours ( )
protected

Starts the inference with this inference type.

Definition at line 733 of file CNLoopyPropagation_tpl.h.

References gum::ArcGraphPart::children(), and gum::ArcGraphPart::parents().

733  {
734  const DAG& graphe = __bnet->dag();
735 
736  GUM_SCALAR eps;
737  // to validate TestSuite
738  __infE::continueApproximationScheme(1.);
739 
740  do {
741  for (auto node : active_nodes_set) {
742  for (auto chil : graphe.children(node)) {
743  if (__cn->currentNodeType(chil)
744  == CredalNet< GUM_SCALAR >::NodeType::Indic) {
745  continue;
746  }
747 
748  _msgP(node, chil);
749  }
750 
751  for (auto par : graphe.parents(node)) {
752  if (__cn->currentNodeType(node)
753  == CredalNet< GUM_SCALAR >::NodeType::Indic) {
754  continue;
755  }
756 
757  _msgL(node, par);
758  }
759  }
760 
761  eps = _calculateEpsilon();
762 
763  __infE::updateApproximationScheme();
764 
765  active_nodes_set.clear();
766  active_nodes_set = next_active_nodes_set;
767  next_active_nodes_set.clear();
768 
769  } while (__infE::continueApproximationScheme(eps)
770  && active_nodes_set.size() > 0);
771 
772  __infE::stopApproximationScheme(); // just to be sure the
773  // approximation scheme has been notified of
774  // the end of the loop
775  }
NodeSet active_nodes_set
The current node-set to iterate through at this step.
const CredalNet< GUM_SCALAR > * __cn
A pointer to the CredalNet to be used.
const IBayesNet< GUM_SCALAR > * __bnet
A pointer to it&#39;s IBayesNet used as a DAG.
bool continueApproximationScheme(double error)
Update the scheme w.r.t the new error.
void stopApproximationScheme()
Stop the approximation scheme.
GUM_SCALAR _calculateEpsilon()
Compute epsilon.
void _msgL(const NodeId X, const NodeId demanding_parent)
Sends a message to one's parent, i.e.
void clear()
Removes all the elements, if any, from the set.
Definition: set_tpl.h:375
void _msgP(const NodeId X, const NodeId demanding_child)
Sends a message to one's child, i.e.
NodeSet next_active_nodes_set
The next node-set, i.e.
void updateApproximationScheme(unsigned int incr=1)
Update the scheme w.r.t the new error and increment steps.
+ Here is the call graph for this function:

◆ _msgL()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_msgL ( const NodeId  X,
const NodeId  demanding_parent 
)
protected

Sends a message to one's parent, i.e.

X is sending a message to a demanding_parent.

Parameters
XThe constant node id of the node sending the message.
demanding_parentThe constant node id of the node receiving the message.

Definition at line 859 of file CNLoopyPropagation_tpl.h.

References gum::Set< Key, Alloc >::empty(), gum::Set< Key, Alloc >::insert(), and gum::Set< Key, Alloc >::size().

859  {
860  NodeSet const& children = __bnet->children(Y);
861  NodeSet const& _parents = __bnet->parents(Y);
862 
863  const auto parents = &__bnet->cpt(Y).variablesSequence();
864 
865  if (((children.size() + parents->size() - 1) == 1)
866  && (!__infE::_evidence.exists(Y))) {
867  return;
868  }
869 
870  bool update_l = _update_l[Y];
871  bool update_p = _update_p[Y];
872 
873  if (!update_p && !update_l) { return; }
874 
875  _msg_l_sent[Y]->insert(X);
876 
877  // for future refresh LM/PI
878  if (_msg_l_sent[Y]->size() == _parents.size()) {
879  _msg_l_sent[Y]->clear();
880  _update_l[Y] = false;
881  }
882 
883  // refresh LM_part
884  if (update_l) {
885  if (!children.empty() && !__infE::_evidence.exists(Y)) {
886  GUM_SCALAR lmin = 1.;
887  GUM_SCALAR lmax = 1.;
888 
889  for (auto chil : children) {
890  lmin *= _ArcsL_min[Arc(Y, chil)];
891 
892  if (_ArcsL_max.exists(Arc(Y, chil))) {
893  lmax *= _ArcsL_max[Arc(Y, chil)];
894  } else {
895  lmax *= _ArcsL_min[Arc(Y, chil)];
896  }
897  }
898 
899  lmin = lmax;
900 
901  if (lmax != lmax && lmin == lmin) { lmax = lmin; }
902 
903  if (lmax != lmax && lmin != lmin) {
904  std::cout << "no likelihood defined [lmin, lmax] (incompatibles "
905  "evidence ?)"
906  << std::endl;
907  }
908 
909  if (lmin < 0.) { lmin = 0.; }
910 
911  if (lmax < 0.) { lmax = 0.; }
912 
913  // no need to update nodeL if evidence since nodeL will never be used
914 
915  _NodesL_min[Y] = lmin;
916 
917  if (lmin != lmax) {
918  _NodesL_max.set(Y, lmax);
919  } else if (_NodesL_max.exists(Y)) {
920  _NodesL_max.erase(Y);
921  }
922 
923  } // end of : node has children & no evidence
924 
925  } // end of : if update_l
926 
927  GUM_SCALAR lmin = _NodesL_min[Y];
928  GUM_SCALAR lmax;
929 
930  if (_NodesL_max.exists(Y)) {
931  lmax = _NodesL_max[Y];
932  } else {
933  lmax = lmin;
934  }
935 
940  if (lmin == lmax && lmin == 1.) {
941  _ArcsL_min[Arc(X, Y)] = lmin;
942 
943  if (_ArcsL_max.exists(Arc(X, Y))) { _ArcsL_max.erase(Arc(X, Y)); }
944 
945  return;
946  }
947 
948  // for each node, keep a table of updated parents; once all are updated,
949  // stop
950  // until notified by an L or P message
951 
952  if (update_p || update_l) {
953  std::vector< std::vector< std::vector< GUM_SCALAR > > > msgs_p;
954  std::vector< std::vector< GUM_SCALAR > > msg_p;
955  std::vector< GUM_SCALAR > distri(2);
956 
957  Idx pos;
958 
959  // +1 from start to avoid _counting itself
960  // use const iterators with cbegin when available
961  for (auto jt = ++parents->begin(), theEnd = parents->end(); jt != theEnd;
962  ++jt) {
963  if (__bnet->nodeId(**jt) == X) {
964  // remove the current variable from the size
965  pos = parents->pos(*jt) - 1;
966  continue;
967  }
968 
969  // compute probability distribution to avoid doing it multiple times
970  // (at
971  // each combination of messages)
972  distri[1] = _ArcsP_min[Arc(__bnet->nodeId(**jt), Y)];
973  distri[0] = GUM_SCALAR(1.) - distri[1];
974  msg_p.push_back(distri);
975 
976  if (_ArcsP_max.exists(Arc(__bnet->nodeId(**jt), Y))) {
977  distri[1] = _ArcsP_max[Arc(__bnet->nodeId(**jt), Y)];
978  distri[0] = GUM_SCALAR(1.) - distri[1];
979  msg_p.push_back(distri);
980  }
981 
982  msgs_p.push_back(msg_p);
983  msg_p.clear();
984  }
985 
986  GUM_SCALAR min = -2.;
987  GUM_SCALAR max = -2.;
988 
989  std::vector< GUM_SCALAR > lx;
990  lx.push_back(lmin);
991 
992  if (lmin != lmax) { lx.push_back(lmax); }
993 
994  _enum_combi(msgs_p, Y, min, max, lx, pos);
995 
996  if (min == -2. || max == -2.) {
997  if (min != -2.) {
998  max = min;
999  } else if (max != -2.) {
1000  min = max;
1001  } else {
1002  std::cout << std::endl;
1003  std::cout << "!!!! no L message can be computed !!!!" << std::endl;
1004  return;
1005  }
1006  }
1007 
1008  if (min < 0.) { min = 0.; }
1009 
1010  if (max < 0.) { max = 0.; }
1011 
1012  bool update = false;
1013 
1014  if (min != _ArcsL_min[Arc(X, Y)]) {
1015  _ArcsL_min[Arc(X, Y)] = min;
1016  update = true;
1017  }
1018 
1019  if (_ArcsL_max.exists(Arc(X, Y))) {
1020  if (max != _ArcsL_max[Arc(X, Y)]) {
1021  if (max != min) {
1022  _ArcsL_max[Arc(X, Y)] = max;
1023  } else { // if ( max == min )
1024  _ArcsL_max.erase(Arc(X, Y));
1025  }
1026 
1027  update = true;
1028  }
1029  } else {
1030  if (max != min) {
1031  _ArcsL_max.insert(Arc(X, Y), max);
1032  update = true;
1033  }
1034  }
1035 
1036  if (update) {
1037  _update_l.set(X, true);
1038  next_active_nodes_set.insert(X);
1039  }
1040 
1041  } // end of update_p || update_l
1042  }
NodeProperty< bool > _update_p
Used to keep track of which nodes need to update their information coming from their parents.
Set< NodeId > NodeSet
Some typedefs and defines used as shortcuts.
ArcProperty< GUM_SCALAR > _ArcsP_min
"Lower" information coming from one's parent.
bool exists(const Key &key) const
Checks whether there exists an element with a given key in the hashtable.
ArcProperty< GUM_SCALAR > _ArcsP_max
"Upper" information coming from one's parent.
ArcProperty< GUM_SCALAR > _ArcsL_max
"Upper" information coming from one's children.
const IBayesNet< GUM_SCALAR > * __bnet
A pointer to its IBayesNet used as a DAG.
ArcProperty< GUM_SCALAR > _ArcsL_min
"Lower" information coming from one's children.
void _enum_combi(std::vector< std::vector< std::vector< GUM_SCALAR > > > &msgs_p, const NodeId &id, GUM_SCALAR &msg_l_min, GUM_SCALAR &msg_l_max, std::vector< GUM_SCALAR > &lx, const Idx &pos)
Used by _msgL.
NodeProperty< NodeSet *> _msg_l_sent
Used to keep track of the messages a node has sent to its parents.
margi _evidence
Holds observed variables' states.
NodeProperty< GUM_SCALAR > _NodesL_min
"Lower" node information obtained by combination of children messages.
NodeSet next_active_nodes_set
The next node-set, i.e. the nodes to be processed at the next iteration.
void insert(const Key &k)
Inserts a new element into the set.
Definition: set_tpl.h:613
NodeProperty< bool > _update_l
Used to keep track of which nodes need to update their information coming from their children.
NodeProperty< GUM_SCALAR > _NodesL_max
"Upper" node information obtained by combination of children messages.
+ Here is the call graph for this function:

◆ _msgP()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_msgP ( const NodeId  X,
const NodeId  demanding_child 
)
protected

Sends a message to one's child, i.e.

X is sending a message to a demanding_child.

Parameters
X: The constant node id of the node sending the message.
demanding_child: The constant node id of the node receiving the message.
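The lambda part computed in the listing below can be sketched independently of the class. The helper below uses hypothetical names (it is not the aGrUM API): every child except the demanding one contributes a likelihood interval, and the intervals are multiplied component-wise, mirroring the loop over `children` in the listing.

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <utility>

// [lmin, lmax] likelihood interval received from a child.
using Interval = std::pair<double, double>;

// Combine the lambda messages of all children except the one that is
// demanding the message (hypothetical standalone sketch).
Interval lambdaPart(const std::map<int, Interval>& childMsgs,
                    int demandingChild) {
  double lmin = 1., lmax = 1.;
  for (const auto& kv : childMsgs) {
    if (kv.first == demandingChild) continue;  // skip the receiver
    lmin *= kv.second.first;
    lmax *= kv.second.second;
  }
  return {lmin, lmax};
}
```

As in the listing, a child for which no upper value is stored would simply reuse its lower value in both products.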

Definition at line 1045 of file CNLoopyPropagation_tpl.h.

References _INF, gum::Set< Key, Alloc >::erase(), and gum::Set< Key, Alloc >::size().

1046  {
1047  NodeSet const& children = __bnet->children(X);
1048 
1049  const auto parents = &__bnet->cpt(X).variablesSequence();
1050 
1051  if (((children.size() + parents->size() - 1) == 1)
1052  && (!__infE::_evidence.exists(X))) {
1053  return;
1054  }
1055 
1056  // LM_part ---- from all children but one --- the lonely one will get the
1057  // message
1058 
1059  if (__infE::_evidence.exists(X)) {
1060  _ArcsP_min[Arc(X, demanding_child)] = __infE::_evidence[X][1];
1061 
1062  if (_ArcsP_max.exists(Arc(X, demanding_child))) {
1063  _ArcsP_max.erase(Arc(X, demanding_child));
1064  }
1065 
1066  return;
1067  }
1068 
1069  bool update_l = _update_l[X];
1070  bool update_p = _update_p[X];
1071 
1072  if (!update_p && !update_l) { return; }
1073 
1074  GUM_SCALAR lmin = 1.;
1075  GUM_SCALAR lmax = 1.;
1076 
1077  // use cbegin if available
1078  for (auto chil : children) {
1079  if (chil == demanding_child) { continue; }
1080 
1081  lmin *= _ArcsL_min[Arc(X, chil)];
1082 
1083  if (_ArcsL_max.exists(Arc(X, chil))) {
1084  lmax *= _ArcsL_max[Arc(X, chil)];
1085  } else {
1086  lmax *= _ArcsL_min[Arc(X, chil)];
1087  }
1088  }
1089 
1090  if (lmin != lmin && lmax == lmax) { lmin = lmax; }
1091 
1092  if (lmax != lmax && lmin == lmin) { lmax = lmin; }
1093 
1094  if (lmax != lmax && lmin != lmin) {
1095  std::cout << "no likelihood defined [lmin, lmax] (incompatible "
1096  "observations?)"
1097  << std::endl;
1098  return;
1099  }
1100 
1101  if (lmin < 0.) { lmin = 0.; }
1102 
1103  if (lmax < 0.) { lmax = 0.; }
1104 
1105  // refresh PI_part
1106  GUM_SCALAR min = _INF;
1107  GUM_SCALAR max = 0.;
1108 
1109  if (update_p) {
1110  std::vector< std::vector< std::vector< GUM_SCALAR > > > msgs_p;
1111  std::vector< std::vector< GUM_SCALAR > > msg_p;
1112  std::vector< GUM_SCALAR > distri(2);
1113 
1114  // +1 from start to avoid _counting itself
1115  // use const_iterators if available
1116  for (auto jt = ++parents->begin(), theEnd = parents->end(); jt != theEnd;
1117  ++jt) {
1118  // compute probability distribution to avoid doing it multiple times
1119  // (at
1120  // each combination of messages)
1121  distri[1] = _ArcsP_min[Arc(__bnet->nodeId(**jt), X)];
1122  distri[0] = GUM_SCALAR(1.) - distri[1];
1123  msg_p.push_back(distri);
1124 
1125  if (_ArcsP_max.exists(Arc(__bnet->nodeId(**jt), X))) {
1126  distri[1] = _ArcsP_max[Arc(__bnet->nodeId(**jt), X)];
1127  distri[0] = GUM_SCALAR(1.) - distri[1];
1128  msg_p.push_back(distri);
1129  }
1130 
1131  msgs_p.push_back(msg_p);
1132  msg_p.clear();
1133  }
1134 
1135  _enum_combi(msgs_p, X, min, max);
1136 
1137  if (min < 0.) { min = 0.; }
1138 
1139  if (max < 0.) { max = 0.; }
1140 
1141  if (min == _INF || max == _INF) {
1142  std::cout << " ERROR msg P min = max = INF " << std::endl;
1143  std::cout.flush();
1144  return;
1145  }
1146 
1147  _NodesP_min[X] = min;
1148 
1149  if (min != max) {
1150  _NodesP_max.set(X, max);
1151  } else if (_NodesP_max.exists(X)) {
1152  _NodesP_max.erase(X);
1153  }
1154 
1155  _update_p.set(X, false);
1156 
1157  } // end of update_p
1158  else {
1159  min = _NodesP_min[X];
1160 
1161  if (_NodesP_max.exists(X)) {
1162  max = _NodesP_max[X];
1163  } else {
1164  max = min;
1165  }
1166  }
1167 
1168  if (update_p || update_l) {
1169  GUM_SCALAR msg_p_min;
1170  GUM_SCALAR msg_p_max;
1171 
1172  // limit cases on min
1173  if (min == _INF && lmin == 0.) {
1174  std::cout << "MESSAGE P ERR (negative): pi = inf, l = 0" << std::endl;
1175  }
1176 
1177  if (lmin == _INF) { // infinite case
1178  msg_p_min = GUM_SCALAR(1.);
1179  } else if (min == 0. || lmin == 0.) {
1180  msg_p_min = 0;
1181  } else {
1182  msg_p_min = GUM_SCALAR(1. / (1. + ((1. / min - 1.) * 1. / lmin)));
1183  }
1184 
1185  // limit cases on max
1186  if (max == _INF && lmax == 0.) {
1187  std::cout << "MESSAGE P ERR (negative): pi = inf, l = 0" << std::endl;
1188  }
1189 
1190  if (lmax == _INF) { // infinite case
1191  msg_p_max = GUM_SCALAR(1.);
1192  } else if (max == 0. || lmax == 0.) {
1193  msg_p_max = 0;
1194  } else {
1195  msg_p_max = GUM_SCALAR(1. / (1. + ((1. / max - 1.) * 1. / lmax)));
1196  }
1197 
1198  if (msg_p_min != msg_p_min && msg_p_max == msg_p_max) {
1199  msg_p_min = msg_p_max;
1200  std::cout << std::endl;
1201  std::cout << "msg_p_min is NaN" << std::endl;
1202  }
1203 
1204  if (msg_p_max != msg_p_max && msg_p_min == msg_p_min) {
1205  msg_p_max = msg_p_min;
1206  std::cout << std::endl;
1207  std::cout << "msg_p_max is NaN" << std::endl;
1208  }
1209 
1210  if (msg_p_max != msg_p_max && msg_p_min != msg_p_min) {
1211  std::cout << std::endl;
1212  std::cout << "no P message can be computed (check the observations)"
1213  << std::endl;
1214  return;
1215  }
1216 
1217  if (msg_p_min < 0.) { msg_p_min = 0.; }
1218 
1219  if (msg_p_max < 0.) { msg_p_max = 0.; }
1220 
1221  bool update = false;
1222 
1223  if (msg_p_min != _ArcsP_min[Arc(X, demanding_child)]) {
1224  _ArcsP_min[Arc(X, demanding_child)] = msg_p_min;
1225  update = true;
1226  }
1227 
1228  if (_ArcsP_max.exists(Arc(X, demanding_child))) {
1229  if (msg_p_max != _ArcsP_max[Arc(X, demanding_child)]) {
1230  if (msg_p_max != msg_p_min) {
1231  _ArcsP_max[Arc(X, demanding_child)] = msg_p_max;
1232  } else { // if ( msg_p_max == msg_p_min )
1233  _ArcsP_max.erase(Arc(X, demanding_child));
1234  }
1235 
1236  update = true;
1237  }
1238  } else {
1239  if (msg_p_max != msg_p_min) {
1240  _ArcsP_max.insert(Arc(X, demanding_child), msg_p_max);
1241  update = true;
1242  }
1243  }
1244 
1245  if (update) {
1246  _update_p.set(demanding_child, true);
1247  next_active_nodes_set.insert(demanding_child);
1248  }
1249 
1250  } // end of : update_l || update_p
1251  }
+ Here is the call graph for this function:

◆ _refreshLMsPIs()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_refreshLMsPIs ( bool  refreshIndic = false)
protected

Get the last messages from one's parents and children.

Definition at line 1254 of file CNLoopyPropagation_tpl.h.

References _INF, gum::Set< Key, Alloc >::empty(), and gum::Set< Key, Alloc >::erase().

1254  {
1255  for (auto node : __bnet->nodes()) {
1256  if ((!refreshIndic)
1257  && __cn->currentNodeType(node)
1258  == CredalNet< GUM_SCALAR >::NodeType::Indic) {
1259  continue;
1260  }
1261 
1262  NodeSet const& children = __bnet->children(node);
1263 
1264  auto parents = &__bnet->cpt(node).variablesSequence();
1265 
1266  if (_update_l[node]) {
1267  GUM_SCALAR lmin = 1.;
1268  GUM_SCALAR lmax = 1.;
1269 
1270  if (!children.empty() && !__infE::_evidence.exists(node)) {
1271  for (auto chil : children) {
1272  lmin *= _ArcsL_min[Arc(node, chil)];
1273 
1274  if (_ArcsL_max.exists(Arc(node, chil))) {
1275  lmax *= _ArcsL_max[Arc(node, chil)];
1276  } else {
1277  lmax *= _ArcsL_min[Arc(node, chil)];
1278  }
1279  }
1280 
1281  if (lmin != lmin && lmax == lmax) { lmin = lmax; }
1282 
1283  if (lmax != lmax && lmin == lmin) { lmax = lmin; }
1284 
1285  if (lmax != lmax && lmin != lmin) {
1286  std::cout
1287  << "no likelihood defined [lmin, lmax] (incompatible "
1288  "observations?)"
1289  << std::endl;
1290  return;
1291  }
1292 
1293  if (lmin < 0.) { lmin = 0.; }
1294 
1295  if (lmax < 0.) { lmax = 0.; }
1296 
1297  _NodesL_min[node] = lmin;
1298 
1299  if (lmin != lmax) {
1300  _NodesL_max.set(node, lmax);
1301  } else if (_NodesL_max.exists(node)) {
1302  _NodesL_max.erase(node);
1303  }
1304  }
1305 
1306  } // end of : update_l
1307 
1308  if (_update_p[node]) {
1309  if ((parents->size() - 1) > 0 && !__infE::_evidence.exists(node)) {
1310  std::vector< std::vector< std::vector< GUM_SCALAR > > > msgs_p;
1311  std::vector< std::vector< GUM_SCALAR > > msg_p;
1312  std::vector< GUM_SCALAR > distri(2);
1313 
1314  // +1 from start to avoid _counting itself
1315  // cbegin
1316  for (auto jt = ++parents->begin(), theEnd = parents->end();
1317  jt != theEnd;
1318  ++jt) {
1319  // compute probability distribution to avoid doing it multiple
1320  // times
1321  // (at each combination of messages)
1322  distri[1] = _ArcsP_min[Arc(__bnet->nodeId(**jt), node)];
1323  distri[0] = GUM_SCALAR(1.) - distri[1];
1324  msg_p.push_back(distri);
1325 
1326  if (_ArcsP_max.exists(Arc(__bnet->nodeId(**jt), node))) {
1327  distri[1] = _ArcsP_max[Arc(__bnet->nodeId(**jt), node)];
1328  distri[0] = GUM_SCALAR(1.) - distri[1];
1329  msg_p.push_back(distri);
1330  }
1331 
1332  msgs_p.push_back(msg_p);
1333  msg_p.clear();
1334  }
1335 
1336  GUM_SCALAR min = _INF;
1337  GUM_SCALAR max = 0.;
1338 
1339  _enum_combi(msgs_p, node, min, max);
1340 
1341  if (min < 0.) { min = 0.; }
1342 
1343  if (max < 0.) { max = 0.; }
1344 
1345  _NodesP_min[node] = min;
1346 
1347  if (min != max) {
1348  _NodesP_max.set(node, max);
1349  } else if (_NodesP_max.exists(node)) {
1350  _NodesP_max.erase(node);
1351  }
1352 
1353  _update_p[node] = false;
1354  }
1355  } // end of update_p
1356 
1357  } // end of : for each node
1358  }
+ Here is the call graph for this function:

◆ _repetitiveInit()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit ( )
protectedinherited

Initialize _t0 and _t1 clusters.

Definition at line 784 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_t0, gum::credal::InferenceEngine< GUM_SCALAR >::_t1, gum::credal::InferenceEngine< GUM_SCALAR >::_timeSteps, gum::HashTable< Key, Val, Alloc >::clear(), GUM_ERROR, and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference(), and gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd().
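The listing below relies on a naming convention for dynamic networks: variables are named `<base>_<timestep>[...]`, e.g. `X_0`, `X_1`. A minimal sketch of that parsing step (hypothetical helper, assuming the name contains an underscore; the real method raises `InvalidArgument` otherwise):

```cpp
#include <cassert>
#include <string>
#include <utility>

// Split a dynamic-variable name "<base>_<timestep>[_...]" into its base name
// and its time step, mirroring the find_first_of/substr logic in the listing.
// Assumes the name actually contains an underscore (hypothetical helper).
std::pair<std::string, std::string> splitDynName(const std::string& varName) {
  auto delim = varName.find_first_of("_");
  std::string base = varName.substr(0, delim);
  std::string step = varName.substr(delim + 1);
  // a second underscore may follow the time step; keep only the step
  step = step.substr(0, step.find_first_of("_"));
  return {base, step};
}
```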

784  {
785  _timeSteps = 0;
786  _t0.clear();
787  _t1.clear();
788 
789  // t = 0 vars belongs to _t0 as keys
790  for (auto node : _credalNet->current_bn().dag().nodes()) {
791  std::string var_name = _credalNet->current_bn().variable(node).name();
792  auto delim = var_name.find_first_of("_");
793 
794  if (delim > var_name.size()) {
795  GUM_ERROR(InvalidArgument,
796  "void InferenceEngine< GUM_SCALAR "
797  ">::_repetitiveInit() : the network does not "
798  "appear to be dynamic");
799  }
800 
801  std::string time_step = var_name.substr(delim + 1, 1);
802 
803  if (time_step.compare("0") == 0) _t0.insert(node, std::vector< NodeId >());
804  }
805 
806  // t = 1 vars belongs to either _t0 as member value or _t1 as keys
807  for (const auto& node : _credalNet->current_bn().dag().nodes()) {
808  std::string var_name = _credalNet->current_bn().variable(node).name();
809  auto delim = var_name.find_first_of("_");
810  std::string time_step = var_name.substr(delim + 1, var_name.size());
811  var_name = var_name.substr(0, delim);
812  delim = time_step.find_first_of("_");
813  time_step = time_step.substr(0, delim);
814 
815  if (time_step.compare("1") == 0) {
816  bool found = false;
817 
818  for (const auto& elt : _t0) {
819  std::string var_0_name =
820  _credalNet->current_bn().variable(elt.first).name();
821  delim = var_0_name.find_first_of("_");
822  var_0_name = var_0_name.substr(0, delim);
823 
824  if (var_name.compare(var_0_name) == 0) {
825  const Potential< GUM_SCALAR >* potential(
826  &_credalNet->current_bn().cpt(node));
827  const Potential< GUM_SCALAR >* potential2(
828  &_credalNet->current_bn().cpt(elt.first));
829 
830  if (potential->domainSize() == potential2->domainSize())
831  _t0[elt.first].push_back(node);
832  else
833  _t1.insert(node, std::vector< NodeId >());
834 
835  found = true;
836  break;
837  }
838  }
839 
840  if (!found) { _t1.insert(node, std::vector< NodeId >()); }
841  }
842  }
843 
844  // t > 1 vars belongs to either _t0 or _t1 as member value
845  // remember _timeSteps
846  for (auto node : _credalNet->current_bn().dag().nodes()) {
847  std::string var_name = _credalNet->current_bn().variable(node).name();
848  auto delim = var_name.find_first_of("_");
849  std::string time_step = var_name.substr(delim + 1, var_name.size());
850  var_name = var_name.substr(0, delim);
851  delim = time_step.find_first_of("_");
852  time_step = time_step.substr(0, delim);
853 
854  if (time_step.compare("0") != 0 && time_step.compare("1") != 0) {
855  // keep max time_step
856  if (atoi(time_step.c_str()) > _timeSteps)
857  _timeSteps = atoi(time_step.c_str());
858 
859  std::string var_0_name;
860  bool found = false;
861 
862  for (const auto& elt : _t0) {
863  std::string var_0_name =
864  _credalNet->current_bn().variable(elt.first).name();
865  delim = var_0_name.find_first_of("_");
866  var_0_name = var_0_name.substr(0, delim);
867 
868  if (var_name.compare(var_0_name) == 0) {
869  const Potential< GUM_SCALAR >* potential(
870  &_credalNet->current_bn().cpt(node));
871  const Potential< GUM_SCALAR >* potential2(
872  &_credalNet->current_bn().cpt(elt.first));
873 
874  if (potential->domainSize() == potential2->domainSize()) {
875  _t0[elt.first].push_back(node);
876  found = true;
877  break;
878  }
879  }
880  }
881 
882  if (!found) {
883  for (const auto& elt : _t1) {
884  std::string var_0_name =
885  _credalNet->current_bn().variable(elt.first).name();
886  auto delim = var_0_name.find_first_of("_");
887  var_0_name = var_0_name.substr(0, delim);
888 
889  if (var_name.compare(var_0_name) == 0) {
890  const Potential< GUM_SCALAR >* potential(
891  &_credalNet->current_bn().cpt(node));
892  const Potential< GUM_SCALAR >* potential2(
893  &_credalNet->current_bn().cpt(elt.first));
894 
895  if (potential->domainSize() == potential2->domainSize()) {
896  _t1[elt.first].push_back(node);
897  break;
898  }
899  }
900  }
901  }
902  }
903  }
904  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _updateCredalSets()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_updateCredalSets ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex,
const bool elimRedund = false 
)
inlineprotectedinherited

Given a node id and one of its possible vertices, update its credal set.

To maximise efficiency, don't pass a vertex we know is inside the polytope (i.e. not at an extreme value for any modality).

Parameters
id: The id of the node to be updated.
vertex: A (potential) vertex of the node's credal set.
elimRedund: remove redundant vertices (inside a facet).
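The first step of the listing below, deduplicating the candidate vertex against the stored credal set with a component-wise 1e-6 tolerance, can be sketched as follows (hypothetical standalone helper, not the library API):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Append `vertex` to `credalSet` only if no stored vertex already matches it
// component-wise within 1e-6. Returns true when the vertex was added.
bool addIfNew(std::vector<std::vector<double>>& credalSet,
              const std::vector<double>& vertex) {
  for (const auto& v : credalSet) {
    bool eq = true;
    for (std::size_t i = 0; i < vertex.size(); ++i) {
      if (std::fabs(vertex[i] - v[i]) > 1e-6) {
        eq = false;
        break;
      }
    }
    if (eq) return false;  // duplicate: nothing to add
  }
  credalSet.push_back(vertex);
  return true;
}
```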

Definition at line 928 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, gum::HashTable< Key, Val, Alloc >::cbegin(), gum::HashTable< Key, Val, Alloc >::cend(), gum::credal::LRSWrapper< GUM_SCALAR >::elimRedundVrep(), gum::credal::LRSWrapper< GUM_SCALAR >::fillV(), gum::credal::LRSWrapper< GUM_SCALAR >::getOutput(), and gum::credal::LRSWrapper< GUM_SCALAR >::setUpV().

Referenced by gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_verticesFusion().

931  {
932  auto& nodeCredalSet = _marginalSets[id];
933  auto dsize = vertex.size();
934 
935  bool eq = true;
936 
937  for (auto it = nodeCredalSet.cbegin(), itEnd = nodeCredalSet.cend();
938  it != itEnd;
939  ++it) {
940  eq = true;
941 
942  for (Size i = 0; i < dsize; i++) {
943  if (std::fabs(vertex[i] - (*it)[i]) > 1e-6) {
944  eq = false;
945  break;
946  }
947  }
948 
949  if (eq) break;
950  }
951 
952  if (!eq || nodeCredalSet.size() == 0) {
953  nodeCredalSet.push_back(vertex);
954  // fall through to the polytope / redundancy checks below
955  } else
956  return;
957 
958  // because of next lambda return condition
959  if (nodeCredalSet.size() == 1) return;
960 
961  // check that the point and all previously added ones are not inside the
962  // actual
963  // polytope
964  auto itEnd = std::remove_if(
965  nodeCredalSet.begin(),
966  nodeCredalSet.end(),
967  [&](const std::vector< GUM_SCALAR >& v) -> bool {
968  for (auto jt = v.cbegin(),
969  jtEnd = v.cend(),
970  minIt = _marginalMin[id].cbegin(),
971  minItEnd = _marginalMin[id].cend(),
972  maxIt = _marginalMax[id].cbegin(),
973  maxItEnd = _marginalMax[id].cend();
974  jt != jtEnd && minIt != minItEnd && maxIt != maxItEnd;
975  ++jt, ++minIt, ++maxIt) {
976  if ((std::fabs(*jt - *minIt) < 1e-6 || std::fabs(*jt - *maxIt) < 1e-6)
977  && std::fabs(*minIt - *maxIt) > 1e-6)
978  return false;
979  }
980  return true;
981  });
982 
983  nodeCredalSet.erase(itEnd, nodeCredalSet.end());
984 
985  // we need at least 2 points to make a convex combination
986  if (!elimRedund || nodeCredalSet.size() <= 2) return;
987 
988  // there may be points not inside the polytope but on one of its facets,
989  // meaning each is still a convex combination of vertices of this facet.
990  // Here
991  // we need lrs.
992  LRSWrapper< GUM_SCALAR > lrsWrapper;
993  lrsWrapper.setUpV((unsigned int)dsize, (unsigned int)(nodeCredalSet.size()));
994 
995  for (const auto& vtx : nodeCredalSet)
996  lrsWrapper.fillV(vtx);
997 
998  lrsWrapper.elimRedundVrep();
999 
1000  _marginalSets[id] = lrsWrapper.getOutput();
1001  }
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ _updateExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_updateExpectations ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex 
)
inlineprotectedinherited

Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations.

Parameters
id: The id of the node to be updated.
vertex: A (potential) vertex of the node's credal set.
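The computation in the listing below is a plain dot product between the vertex and the variable's modality values; the engine then keeps the running minimum and maximum over all vertices. A minimal sketch (hypothetical names, not the library API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Expectation of a credal-set vertex: dot product of its probabilities with
// the variable's modality values (hypothetical standalone sketch).
double expectation(const std::vector<double>& vertex,
                   const std::vector<double>& modalities) {
  double exp = 0.;
  for (std::size_t mod = 0; mod < vertex.size(); ++mod)
    exp += vertex[mod] * modalities[mod];
  return exp;
}
```

In the listing, the result updates `_expectationMax[id]` and `_expectationMin[id]` when it exceeds the stored bounds.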

Definition at line 907 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, and gum::credal::InferenceEngine< GUM_SCALAR >::_modal.

908  {
909  std::string var_name = _credalNet->current_bn().variable(id).name();
910  auto delim = var_name.find_first_of("_");
911 
912  var_name = var_name.substr(0, delim);
913 
914  if (_modal.exists(var_name) /*_modal.find(var_name) != _modal.end()*/) {
915  GUM_SCALAR exp = 0;
916  auto vsize = vertex.size();
917 
918  for (Size mod = 0; mod < vsize; mod++)
919  exp += vertex[mod] * _modal[var_name][mod];
920 
921  if (exp > _expectationMax[id]) _expectationMax[id] = exp;
922 
923  if (exp < _expectationMin[id]) _expectationMin[id] = exp;
924  }
925  }

◆ _updateIndicatrices()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_updateIndicatrices ( )
protected

Only update indicatrices (indicator variables) at the end of computations (calls _msgP).

Definition at line 1465 of file CNLoopyPropagation_tpl.h.

1465  {
1466  for (auto node : __bnet->nodes()) {
1467  if (__cn->currentNodeType(node)
1468  != CredalNet< GUM_SCALAR >::NodeType::Indic) {
1469  continue;
1470  }
1471 
1472  for (auto pare : __bnet->parents(node)) {
1473  _msgP(pare, node);
1474  }
1475  }
1476 
1477  _refreshLMsPIs(true);
1478  _updateMarginals();
1479  }

◆ _updateMarginals()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::_updateMarginals ( )
protected

Compute marginals from up-to-date messages.
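The core of the listing below is the pi-lambda combination p = 1 / (1 + (1/pi - 1) / lambda), with explicit limit cases for zero and infinite values. A minimal sketch (hypothetical helper, not the library API):

```cpp
#include <cassert>
#include <cmath>

// Combine a pi value (from parents) with a lambda value (from children),
// with the same limit cases as the listing: infinite likelihood -> 1,
// zero pi or zero likelihood -> 0 (hypothetical standalone sketch).
double combine(double pi, double lambda) {
  if (std::isinf(lambda)) return 1.;
  if (pi == 0. || lambda == 0.) return 0.;
  return 1. / (1. + ((1. / pi - 1.) * 1. / lambda));
}
```

For a binary node, the listing then derives the marginal of state 0 from state 1 as [1 - msg_p_max, 1 - msg_p_min].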

Definition at line 1361 of file CNLoopyPropagation_tpl.h.

References _INF.

1361  {
1362  for (auto node : __bnet->nodes()) {
1363  GUM_SCALAR msg_p_min = 1.;
1364  GUM_SCALAR msg_p_max = 0.;
1365 
1366  if (__infE::_evidence.exists(node)) {
1367  if (__infE::_evidence[node][1] == 0.) {
1368  msg_p_min = (GUM_SCALAR)0.;
1369  } else if (__infE::_evidence[node][1] == 1.) {
1370  msg_p_min = 1.;
1371  }
1372 
1373  msg_p_max = msg_p_min;
1374  } else {
1375  GUM_SCALAR min = _NodesP_min[node];
1376  GUM_SCALAR max;
1377 
1378  if (_NodesP_max.exists(node)) {
1379  max = _NodesP_max[node];
1380  } else {
1381  max = min;
1382  }
1383 
1384  GUM_SCALAR lmin = _NodesL_min[node];
1385  GUM_SCALAR lmax;
1386 
1387  if (_NodesL_max.exists(node)) {
1388  lmax = _NodesL_max[node];
1389  } else {
1390  lmax = lmin;
1391  }
1392 
1393  if (min == _INF || max == _INF) {
1394  std::cout << " min or max == _INF !!!!!!!!!!!!!!!!!!!!!!!!!! "
1395  << std::endl;
1396  return;
1397  }
1398 
1399  if (min == _INF && lmin == 0.) {
1400  std::cout << "proba ERR (negative): pi = inf, l = 0" << std::endl;
1401  return;
1402  }
1403 
1404  if (lmin == _INF) {
1405  msg_p_min = GUM_SCALAR(1.);
1406  } else if (min == 0. || lmin == 0.) {
1407  msg_p_min = GUM_SCALAR(0.);
1408  } else {
1409  msg_p_min = GUM_SCALAR(1. / (1. + ((1. / min - 1.) * 1. / lmin)));
1410  }
1411 
1412  if (max == _INF && lmax == 0.) {
1413  std::cout << "proba ERR (negative): pi = inf, l = 0" << std::endl;
1414  return;
1415  }
1416 
1417  if (lmax == _INF) {
1418  msg_p_max = GUM_SCALAR(1.);
1419  } else if (max == 0. || lmax == 0.) {
1420  msg_p_max = GUM_SCALAR(0.);
1421  } else {
1422  msg_p_max = GUM_SCALAR(1. / (1. + ((1. / max - 1.) * 1. / lmax)));
1423  }
1424  }
1425 
1426  if (msg_p_min != msg_p_min && msg_p_max == msg_p_max) {
1427  msg_p_min = msg_p_max;
1428  std::cout << std::endl;
1429  std::cout << "msg_p_min is NaN" << std::endl;
1430  }
1431 
1432  if (msg_p_max != msg_p_max && msg_p_min == msg_p_min) {
1433  msg_p_max = msg_p_min;
1434  std::cout << std::endl;
1435  std::cout << "msg_p_max is NaN" << std::endl;
1436  }
1437 
1438  if (msg_p_max != msg_p_max && msg_p_min != msg_p_min) {
1439  std::cout << std::endl;
1440  std::cout << "Please check the observations (no proba can be computed)"
1441  << std::endl;
1442  return;
1443  }
1444 
1445  if (msg_p_min < 0.) { msg_p_min = 0.; }
1446 
1447  if (msg_p_max < 0.) { msg_p_max = 0.; }
1448 
1449  __infE::_marginalMin[node][0] = 1 - msg_p_max;
1450  __infE::_marginalMax[node][0] = 1 - msg_p_min;
1451  __infE::_marginalMin[node][1] = msg_p_min;
1452  __infE::_marginalMax[node][1] = msg_p_max;
1453  }
1454  }

◆ continueApproximationScheme()

INLINE bool gum::ApproximationScheme::continueApproximationScheme ( double  error)
inherited

Update the scheme w.r.t. the new error.

Test the stopping criteria that are enabled.

Parameters
error The new error value.
Returns
false if the state becomes different from ApproximationSchemeSTATE::Continue.
Exceptions
OperationNotAllowed Raised if state != ApproximationSchemeSTATE::Continue.

Definition at line 227 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_epsilon, gum::ApproximationScheme::_current_rate, gum::ApproximationScheme::_current_state, gum::ApproximationScheme::_current_step, gum::ApproximationScheme::_enabled_eps, gum::ApproximationScheme::_enabled_max_iter, gum::ApproximationScheme::_enabled_max_time, gum::ApproximationScheme::_enabled_min_rate_eps, gum::ApproximationScheme::_eps, gum::ApproximationScheme::_history, gum::ApproximationScheme::_last_epsilon, gum::ApproximationScheme::_max_iter, gum::ApproximationScheme::_max_time, gum::ApproximationScheme::_min_rate_eps, gum::ApproximationScheme::_stopScheme(), gum::ApproximationScheme::_timer, gum::IApproximationSchemeConfiguration::Continue, gum::IApproximationSchemeConfiguration::Epsilon, GUM_EMIT3, GUM_ERROR, gum::IApproximationSchemeConfiguration::Limit, gum::IApproximationSchemeConfiguration::messageApproximationScheme(), gum::IApproximationSchemeConfiguration::onProgress, gum::IApproximationSchemeConfiguration::Rate, gum::ApproximationScheme::startOfPeriod(), gum::ApproximationScheme::stateApproximationScheme(), gum::Timer::step(), gum::IApproximationSchemeConfiguration::TimeLimit, and gum::ApproximationScheme::verbosity().

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::SamplingInference< GUM_SCALAR >::_loopApproxInference(), gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

227  {
228  // For coherence, we fix the time used in the method
229 
230  double timer_step = _timer.step();
231 
232  if (_enabled_max_time) {
233  if (timer_step > _max_time) {
234  _stopScheme(ApproximationSchemeSTATE::TimeLimit);
235  return false;
236  }
237  }
238 
239  if (!startOfPeriod()) { return true; }
240 
241  if (stateApproximationScheme() != ApproximationSchemeSTATE::Continue) {
242  GUM_ERROR(OperationNotAllowed,
243  "state of the approximation scheme is not correct : "
244  + messageApproximationScheme());
245  }
246 
247  if (verbosity()) { _history.push_back(error); }
248 
249  if (_enabled_max_iter) {
250  if (_current_step > _max_iter) {
251  _stopScheme(ApproximationSchemeSTATE::Limit);
252  return false;
253  }
254  }
255 
256  _last_epsilon = _current_epsilon;
257  _current_epsilon = error; // eps rate isEnabled needs it so affectation was
258  // moved from eps isEnabled below
259 
260  if (_enabled_eps) {
261  if (_current_epsilon <= _eps) {
262  _stopScheme(ApproximationSchemeSTATE::Epsilon);
263  return false;
264  }
265  }
266 
267  if (_last_epsilon >= 0.) {
268  if (_current_epsilon > .0) {
269  // ! _current_epsilon can be 0. AND epsilon
270  // isEnabled can be disabled !
271  _current_rate =
272  std::fabs((_current_epsilon - _last_epsilon) / _current_epsilon);
273  }
274  // limit with current eps ---> 0 is | 1 - ( last_eps / 0 ) | --->
275  // infinity the else means a return false if we isEnabled the rate below,
276  // as we would have returned false if epsilon isEnabled was enabled
277  else {
278  _current_rate = _min_rate_eps;
279  }
280 
281  if (_enabled_min_rate_eps) {
282  if (_current_rate <= _min_rate_eps) {
283  _stopScheme(ApproximationSchemeSTATE::Rate);
284  return false;
285  }
286  }
287  }
288 
289  if (stateApproximationScheme() == ApproximationSchemeSTATE::Continue) {
290  if (onProgress.hasListener()) {
291  GUM_EMIT3(onProgress, _current_step, _current_epsilon, timer_step);
292  }
293 
294  return true;
295  } else {
296  return false;
297  }
298  }
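In a typical loop, continueApproximationScheme(error) is called once per period until one of the enabled criteria fires. The sketch below mimics only the epsilon-threshold and max-iterations criteria in a self-contained class; the names are hypothetical and this is not the aGrUM implementation:

```cpp
#include <cassert>
#include <cstddef>

// Minimal standalone mimic of the stopping logic documented above:
// continue while the error stays above the epsilon threshold and the
// iteration budget is not exhausted.
struct MiniScheme {
  double eps = 1e-3;          // threshold for convergence
  std::size_t maxIter = 100;  // maximum iterations
  std::size_t step = 0;       // current step

  bool continueScheme(double error) {
    ++step;
    if (step > maxIter) return false;  // iteration limit reached
    if (error <= eps) return false;    // converged
    return true;
  }
};

// Example driver: feed an error sequence that halves at every step and
// return the step at which the scheme stopped.
inline std::size_t runToConvergence(MiniScheme s, double error0) {
  double err = error0;
  while (s.continueScheme(err)) { err *= 0.5; }
  return s.step;
}
```

Starting from an error of 1.0 with eps = 1e-3, the error drops below the threshold after ten halvings, so the scheme stops on the eleventh call.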

◆ credalNet()

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::credalNet ( )
inherited

Get this credal network.

Returns
A constant reference to this CredalNet.

Definition at line 59 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet.

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine().

59  {
60  return *_credalNet;
61  }

◆ currentTime()

INLINE double gum::ApproximationScheme::currentTime ( ) const
virtualinherited

Returns the current running time in seconds.

Returns
Returns the current running time in seconds.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 128 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_timer, and gum::Timer::step().

Referenced by gum::learning::genericBNLearner::currentTime().

128 { return _timer.step(); }

◆ disableEpsilon()

INLINE void gum::ApproximationScheme::disableEpsilon ( )
virtualinherited

Disable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 54 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps.

Referenced by gum::learning::genericBNLearner::disableEpsilon().

54 { _enabled_eps = false; }

◆ disableMaxIter()

INLINE void gum::ApproximationScheme::disableMaxIter ( )
virtualinherited

Disable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 105 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::learning::genericBNLearner::disableMaxIter(), and gum::learning::GreedyHillClimbing::GreedyHillClimbing().

105 { _enabled_max_iter = false; }

◆ disableMaxTime()

INLINE void gum::ApproximationScheme::disableMaxTime ( )
virtualinherited

Disable stopping criterion on timeout.


Implements gum::IApproximationSchemeConfiguration.

Definition at line 131 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time.

Referenced by gum::learning::genericBNLearner::disableMaxTime(), and gum::learning::GreedyHillClimbing::GreedyHillClimbing().

131 { _enabled_max_time = false; }

◆ disableMinEpsilonRate()

INLINE void gum::ApproximationScheme::disableMinEpsilonRate ( )
virtualinherited

Disable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 79 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::learning::genericBNLearner::disableMinEpsilonRate(), and gum::learning::GreedyHillClimbing::GreedyHillClimbing().

79  {
80  _enabled_min_rate_eps = false;
81  }

◆ dynamicExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations ( )
inherited

Compute dynamic expectations.

See also
_dynamicExpectations. Only call this if an algorithm does not call it by itself.

Definition at line 716 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations().

716  {
717  _dynamicExpectations();
718  }

◆ dynamicExpMax()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax ( const std::string &  varName) const
inherited

Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName The variable name prefix whose upper expectation we want.
Returns
A constant reference to the variable upper expectation over all time steps.

Definition at line 504 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, and GUM_ERROR.

505  {
506  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
507  "GUM_SCALAR >::dynamicExpMax ( const std::string & "
508  "varName ) const : ";
509 
510  if (_dynamicExpMax.empty())
511  GUM_ERROR(OperationNotAllowed,
512  errTxt + "_dynamicExpectations() needs to be called before");
513 
514  if (!_dynamicExpMax.exists(
515  varName) /*_dynamicExpMin.find(varName) == _dynamicExpMin.end()*/)
516  GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName);
517 
518  return _dynamicExpMax[varName];
519  }

◆ dynamicExpMin()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin ( const std::string &  varName) const
inherited

Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName The variable name prefix whose lower expectation we want.
Returns
A constant reference to the variable lower expectation over all time steps.

Definition at line 486 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, and GUM_ERROR.

487  {
488  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
489  "GUM_SCALAR >::dynamicExpMin ( const std::string & "
490  "varName ) const : ";
491 
492  if (_dynamicExpMin.empty())
493  GUM_ERROR(OperationNotAllowed,
494  errTxt + "_dynamicExpectations() needs to be called before");
495 
496  if (!_dynamicExpMin.exists(
497  varName) /*_dynamicExpMin.find(varName) == _dynamicExpMin.end()*/)
498  GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName);
499 
500  return _dynamicExpMin[varName];
501  }
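Dynamic-network variables follow the "prefix_t" naming convention documented above ("temp_0", "temp_1", ...). As a rough standalone sketch of the regrouping that _dynamicExpectations performs, the per-variable expectations can be collected into one vector per prefix, indexed by time step (hypothetical helper, not aGrUM code):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: expectations keyed "temp_0", "temp_1", ... are
// regrouped into one vector per prefix, position t holding time step t.
// Assumes the suffix after the last '_' is a non-negative integer.
inline std::map<std::string, std::vector<double>>
regroupByPrefix(const std::map<std::string, double>& staticExp) {
  std::map<std::string, std::vector<double>> dyn;
  for (const auto& kv : staticExp) {
    const std::string& name = kv.first;
    auto pos = name.rfind('_');
    if (pos == std::string::npos) continue;  // not a dynamic variable
    std::string prefix = name.substr(0, pos);
    std::size_t t = std::stoul(name.substr(pos + 1));
    auto& vec = dyn[prefix];
    if (vec.size() <= t) vec.resize(t + 1);
    vec[t] = kv.second;
  }
  return dyn;
}
```

A call such as dynamicExpMin("temp") then corresponds to looking up the "temp" entry of the regrouped map.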

◆ enableEpsilon()

INLINE void gum::ApproximationScheme::enableEpsilon ( )
virtualinherited

Enable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 57 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), and gum::learning::genericBNLearner::enableEpsilon().

57 { _enabled_eps = true; }

◆ enableMaxIter()

INLINE void gum::ApproximationScheme::enableMaxIter ( )
virtualinherited

Enable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 108 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter.

Referenced by gum::learning::genericBNLearner::enableMaxIter().

108 { _enabled_max_iter = true; }

◆ enableMaxTime()

INLINE void gum::ApproximationScheme::enableMaxTime ( )
virtualinherited

Enable stopping criterion on timeout.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 134 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(), and gum::learning::genericBNLearner::enableMaxTime().

134 { _enabled_max_time = true; }

◆ enableMinEpsilonRate()

INLINE void gum::ApproximationScheme::enableMinEpsilonRate ( )
virtualinherited

Enable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 84 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), and gum::learning::genericBNLearner::enableMinEpsilonRate().

84  {
85  _enabled_min_rate_eps = true;
86  }

◆ epsilon()

INLINE double gum::ApproximationScheme::epsilon ( ) const
virtualinherited

Returns the value of epsilon.

Returns
Returns the value of epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 51 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_eps.

Referenced by gum::ImportanceSampling< GUM_SCALAR >::_onContextualize(), and gum::learning::genericBNLearner::epsilon().

51 { return _eps; }

◆ eraseAllEvidence()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::eraseAllEvidence ( )
virtual

Erase all inference related data to perform another one.

You need to insert evidence again if needed, but modalities are kept. You can insert new ones with the appropriate method, which will delete the old ones.

Reimplemented from gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 576 of file CNLoopyPropagation_tpl.h.

576  {
577  __infE::eraseAllEvidence();
578 
579  _ArcsL_min.clear();
580  _ArcsL_max.clear();
581  _ArcsP_min.clear();
582  _ArcsP_max.clear();
583  _NodesL_min.clear();
584  _NodesL_max.clear();
585  _NodesP_min.clear();
586  _NodesP_max.clear();
587 
588  _InferenceUpToDate = false;
589 
590  if (_msg_l_sent.size() > 0) {
591  for (auto node : __bnet->nodes()) {
592  delete _msg_l_sent[node];
593  }
594  }
595 
596  _msg_l_sent.clear();
597  _update_l.clear();
598  _update_p.clear();
599 
600  active_nodes_set.clear();
601  next_active_nodes_set.clear();
602  }

◆ expectationMax() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const NodeId  id) const
inherited

Get the upper expectation of a given node id.

Parameters
id The node id whose upper expectation we want.
Returns
A constant reference to this node upper expectation.

Definition at line 479 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax.

479  {
480  try {
481  return _expectationMax[id];
482  } catch (NotFound& err) { throw(err); }
483  }

◆ expectationMax() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const std::string &  varName) const
inherited

Get the upper expectation of a given variable name.

Parameters
varName The variable name whose upper expectation we want.
Returns
A constant reference to this variable upper expectation.

Definition at line 462 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax.

463  {
464  try {
465  return _expectationMax[_credalNet->current_bn().idFromName(varName)];
466  } catch (NotFound& err) { throw(err); }
467  }

◆ expectationMin() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const NodeId  id) const
inherited

Get the lower expectation of a given node id.

Parameters
id The node id whose lower expectation we want.
Returns
A constant reference to this node lower expectation.

Definition at line 471 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin.

471  {
472  try {
473  return _expectationMin[id];
474  } catch (NotFound& err) { throw(err); }
475  }

◆ expectationMin() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const std::string &  varName) const
inherited

Get the lower expectation of a given variable name.

Parameters
varName The variable name whose lower expectation we want.
Returns
A constant reference to this variable lower expectation.

Definition at line 454 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin.

455  {
456  try {
457  return _expectationMin[_credalNet->current_bn().idFromName(varName)];
458  } catch (NotFound& err) { throw(err); }
459  }

◆ getApproximationSchemeMsg()

template<typename GUM_SCALAR >
const std::string gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg ( )
inlineinherited

Get approximation scheme state.

Returns
A constant string about approximation scheme state.

Definition at line 515 of file inferenceEngine.h.

References gum::IApproximationSchemeConfiguration::messageApproximationScheme().

515  {
516  return this->messageApproximationScheme();
517  }

◆ getT0Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT0Cluster ( ) const
inherited

Get the _t0 cluster.

Returns
A constant reference to the _t0 cluster.

Definition at line 1005 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_t0.

1005  {
1006  return _t0;
1007  }

◆ getT1Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT1Cluster ( ) const
inherited

Get the _t1 cluster.

Returns
A constant reference to the _t1 cluster.

Definition at line 1011 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_t1.

1011  {
1012  return _t1;
1013  }

◆ getVarMod2BNsMap()

template<typename GUM_SCALAR >
VarMod2BNsMap< GUM_SCALAR > * gum::credal::InferenceEngine< GUM_SCALAR >::getVarMod2BNsMap ( )
inherited

Get optimum IBayesNet.

Returns
A pointer to the optimal net object.

Definition at line 141 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dbnOpt.

141  {
142  return &_dbnOpt;
143  }

◆ history()

INLINE const std::vector< double > & gum::ApproximationScheme::history ( ) const
virtualinherited

Returns the scheme history.

Returns
Returns the scheme history.
Exceptions
OperationNotAllowed Raised if the scheme has not been performed or if verbosity is set to false.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 173 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_history, GUM_ERROR, gum::ApproximationScheme::stateApproximationScheme(), gum::IApproximationSchemeConfiguration::Undefined, and gum::ApproximationScheme::verbosity().

Referenced by gum::learning::genericBNLearner::history().

173  {
174  if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
175  GUM_ERROR(OperationNotAllowed,
176  "state of the approximation scheme is udefined");
177  }
178 
179  if (verbosity() == false) {
180  GUM_ERROR(OperationNotAllowed, "No history when verbosity=false");
181  }
182 
183  return _history;
184  }

◆ inferenceType() [1/2]

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::inferenceType ( InferenceType  inft)

Set the inference type.

Parameters
inft The chosen InferenceType.

Definition at line 1560 of file CNLoopyPropagation_tpl.h.

References gum::credal::CNLoopyPropagation< GUM_SCALAR >::__inferenceType.

1560  {
1561  __inferenceType = inft;
1562  }

◆ inferenceType() [2/2]

template<typename GUM_SCALAR >
CNLoopyPropagation< GUM_SCALAR >::InferenceType gum::credal::CNLoopyPropagation< GUM_SCALAR >::inferenceType ( )

Get the inference type.

Returns
The inference type.

Definition at line 1566 of file CNLoopyPropagation_tpl.h.

References gum::credal::CNLoopyPropagation< GUM_SCALAR >::__inferenceType.

1566  {
1567  return __inferenceType;
1568  }

◆ initApproximationScheme()

INLINE void gum::ApproximationScheme::initApproximationScheme ( )
inherited

Initialise the scheme.

Definition at line 187 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_epsilon, gum::ApproximationScheme::_current_rate, gum::ApproximationScheme::_current_state, gum::ApproximationScheme::_current_step, gum::ApproximationScheme::_history, gum::ApproximationScheme::_timer, gum::IApproximationSchemeConfiguration::Continue, and gum::Timer::reset().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::SamplingInference< GUM_SCALAR >::_loopApproxInference(), gum::SamplingInference< GUM_SCALAR >::_onStateChanged(), gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), and gum::learning::LocalSearchWithTabuList::learnStructure().

187  {
188  _current_state = ApproximationSchemeSTATE::Continue;
189  _current_step = 0;
190  _current_epsilon = _current_rate = -1.;
191  _history.clear();
192  _timer.reset();
193  }

◆ insertEvidence() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const std::map< std::string, std::vector< GUM_SCALAR > > &  eviMap)
inherited

Insert evidence from map.

Parameters
eviMap The map from variable name to likelihood.

Definition at line 229 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

230  {
231  if (!_evidence.empty()) _evidence.clear();
232 
233  for (auto it = eviMap.cbegin(), theEnd = eviMap.cend(); it != theEnd; ++it) {
234  NodeId id;
235 
236  try {
237  id = _credalNet->current_bn().idFromName(it->first);
238  } catch (NotFound& err) {
239  GUM_SHOWERROR(err);
240  continue;
241  }
242 
243  _evidence.insert(id, it->second);
244  }
245  }
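The map passed to insertEvidence associates each variable name with a likelihood vector over that variable's states. Encoding hard evidence as an indicator vector (all mass on the observed state) is a common convention, assumed here; the helper below is illustrative and not part of aGrUM:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative helper: build the name -> likelihood map expected by
// insertEvidence, encoding hard evidence as an indicator vector.
// Input maps each variable name to (observed state index, domain size).
inline std::map<std::string, std::vector<double>>
hardEvidence(
    const std::map<std::string, std::pair<std::size_t, std::size_t>>& observed) {
  std::map<std::string, std::vector<double>> evi;
  for (const auto& kv : observed) {
    std::vector<double> lik(kv.second.second, 0.0);
    lik[kv.second.first] = 1.0;  // all mass on the observed state
    evi[kv.first] = std::move(lik);
  }
  return evi;
}
```

The resulting map can be handed to insertEvidence; names that do not exist in the network are skipped by the listing above (via GUM_SHOWERROR and continue).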

◆ insertEvidence() [2/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const NodeProperty< std::vector< GUM_SCALAR > > &  evidence)
inherited

Insert evidence from Property.

Parameters
evidence The node Property containing likelihoods.

Definition at line 251 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

252  {
253  if (!_evidence.empty()) _evidence.clear();
254 
255  // use cbegin() to get const_iterator when available in aGrUM hashtables
256  for (const auto& elt : evidence) {
257  try {
258  _credalNet->current_bn().variable(elt.first);
259  } catch (NotFound& err) {
260  GUM_SHOWERROR(err);
261  continue;
262  }
263 
264  _evidence.insert(elt.first, elt.second);
265  }
266  }

◆ insertEvidenceFile()

template<typename GUM_SCALAR >
virtual void gum::credal::CNLoopyPropagation< GUM_SCALAR >::insertEvidenceFile ( const std::string &  path)
inlinevirtual

Insert evidence from file.

Parameters
path The path to the evidence file.

Reimplemented from gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 382 of file CNLoopyPropagation.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidenceFile().

382  {
383  InferenceEngine< GUM_SCALAR >::insertEvidenceFile(path);
384  };

◆ insertModals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModals ( const std::map< std::string, std::vector< GUM_SCALAR > > &  modals)
inherited

Insert variables modalities from map to compute expectations.

Parameters
modals The map from variable name to modalities.

Definition at line 193 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_modal, and GUM_SHOWERROR.

194  {
195  if (!_modal.empty()) _modal.clear();
196 
197  for (auto it = modals.cbegin(), theEnd = modals.cend(); it != theEnd; ++it) {
198  NodeId id;
199 
200  try {
201  id = _credalNet->current_bn().idFromName(it->first);
202  } catch (NotFound& err) {
203  GUM_SHOWERROR(err);
204  continue;
205  }
206 
207  // check that modals are net compatible
208  auto dSize = _credalNet->current_bn().variable(id).domainSize();
209 
210  if (dSize != it->second.size()) continue;
211 
212  // GUM_ERROR(OperationNotAllowed, "void InferenceEngine< GUM_SCALAR
213  // >::insertModals( const std::map< std::string, std::vector< GUM_SCALAR
214  // > >
215  // &modals) : modalities does not respect variable cardinality : " <<
216  // _credalNet->current_bn().variable( id ).name() << " : " << dSize << "
217  // != "
218  // << it->second.size());
219 
220  _modal.insert(it->first, it->second); //[ it->first ] = it->second;
221  }
222 
223  //_modal = modals;
224 
225  _initExpectations();
226  }

◆ insertModalsFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile ( const std::string &  path)
inherited

Insert variables modalities from file to compute expectations.

Parameters
path  The path to the modalities file.

Definition at line 146 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_modal, and GUM_ERROR.

146  {
147  std::ifstream mod_stream(path.c_str(), std::ios::in);
148 
149  if (!mod_stream.good()) {
150  GUM_ERROR(OperationNotAllowed,
151  "void InferenceEngine< GUM_SCALAR "
152  ">::insertModals(const std::string & path) : "
153  "could not open input file : "
154  << path);
155  }
156 
157  if (!_modal.empty()) _modal.clear();
158 
159  std::string line, tmp;
160  char * cstr, *p;
161 
162  while (mod_stream.good()) {
163  getline(mod_stream, line);
164 
165  if (line.size() == 0) continue;
166 
167  cstr = new char[line.size() + 1];
168  strcpy(cstr, line.c_str());
169 
170  p = strtok(cstr, " ");
171  tmp = p;
172 
173  std::vector< GUM_SCALAR > values;
174  p = strtok(nullptr, " ");
175 
176  while (p != nullptr) {
177  values.push_back(GUM_SCALAR(atof(p)));
178  p = strtok(nullptr, " ");
179  } // end of : line
180 
181  _modal.insert(tmp, values); //[tmp] = values;
182 
183  delete[] p;
184  delete[] cstr;
185  } // end of : file
186 
187  mod_stream.close();
188 
189  _initExpectations();
190  }
dynExpe _modal
Variables modalities used to compute expectations.
void _initExpectations()
Initialize lower and upper expectations before inference, with the lower expectation being initialize...
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
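The modalities file read above holds one variable per line: a name followed by whitespace-separated modality values. A minimal sketch of the same parsing with std::istringstream (hypothetical parseModalLine helper; it avoids the raw strtok/new[]/delete[] of the listing but accepts the same line format):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical helper: parses one modalities line, e.g. "temp 0 15 30",
// into (variable name, modality values). Mirrors the strtok loop above
// without manual memory management.
std::pair<std::string, std::vector<double>>
parseModalLine(const std::string& line) {
  std::istringstream iss(line);
  std::string name;
  iss >> name;  // first token: variable name
  std::vector<double> values;
  double v;
  while (iss >> v) values.push_back(v);  // remaining tokens: modalities
  return {name, values};
}
```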

◆ insertQuery()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery ( const NodeProperty< std::vector< bool > > &  query)
inherited

Insert query variables and states from Property.

Parameters
query  The node Property containing the queried variables' states.

Definition at line 331 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_query, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

332  {
333  if (!_query.empty()) _query.clear();
334 
335  for (const auto& elt : query) {
336  try {
337  _credalNet->current_bn().variable(elt.first);
338  } catch (NotFound& err) {
339  GUM_SHOWERROR(err);
340  continue;
341  }
342 
343  _query.insert(elt.first, elt.second);
344  }
345  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:61
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
query _query
Holds the query nodes states.
void clear()
Removes all the elements in the hash table.
NodeProperty< std::vector< bool > > query
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.

◆ insertQueryFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile ( const std::string &  path)
inherited

Insert query variables states from file.

Parameters
path  The path to the query file.

Definition at line 348 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_query, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_ERROR, GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

348  {
349  std::ifstream evi_stream(path.c_str(), std::ios::in);
350 
351  if (!evi_stream.good()) {
352  GUM_ERROR(IOError,
353  "void InferenceEngine< GUM_SCALAR >::insertQuery(const "
354  "std::string & path) : could not open input file : "
355  << path);
356  }
357 
358  if (!_query.empty()) _query.clear();
359 
360  std::string line, tmp;
361  char * cstr, *p;
362 
363  while (evi_stream.good() && std::strcmp(line.c_str(), "[QUERY]") != 0) {
364  getline(evi_stream, line);
365  }
366 
367  while (evi_stream.good()) {
368  getline(evi_stream, line);
369 
370  if (std::strcmp(line.c_str(), "[EVIDENCE]") == 0) break;
371 
372  if (line.size() == 0) continue;
373 
374  cstr = new char[line.size() + 1];
375  strcpy(cstr, line.c_str());
376 
377  p = strtok(cstr, " ");
378  tmp = p;
379 
380  // if user input is wrong
381  NodeId node = -1;
382 
383  try {
384  node = _credalNet->current_bn().idFromName(tmp);
385  } catch (NotFound& err) {
386  GUM_SHOWERROR(err);
387  continue;
388  }
389 
390  auto dSize = _credalNet->current_bn().variable(node).domainSize();
391 
392  p = strtok(nullptr, " ");
393 
394  if (p == nullptr) {
395  _query.insert(node, std::vector< bool >(dSize, true));
396  } else {
397  std::vector< bool > values(dSize, false);
398 
399  while (p != nullptr) {
400  if ((Size)atoi(p) >= dSize)
401  GUM_ERROR(OutOfBounds,
402  "void InferenceEngine< GUM_SCALAR "
403  ">::insertQuery(const std::string & path) : "
404  "query modality is higher or equal to "
405  "cardinality");
406 
407  values[atoi(p)] = true;
408  p = strtok(nullptr, " ");
409  } // end of : line
410 
411  _query.insert(node, values);
412  }
413 
414  delete[] p;
415  delete[] cstr;
416  } // end of : file
417 
418  evi_stream.close();
419  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:61
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
query _query
Holds the query nodes states.
void clear()
Removes all the elements in the hash table.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
Size NodeId
Type for node ids.
Definition: graphElements.h:98
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
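The query file layout implied by the listing: lines are skipped until a `[QUERY]` marker, then each line names a variable optionally followed by queried state indices (no indices means every state is queried), until an optional `[EVIDENCE]` section. A sketch of turning one such line into the per-variable `std::vector<bool>` mask (hypothetical queryMask helper; the domain size is supplied by the caller instead of being looked up in the net):

```cpp
#include <cassert>
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: builds the query mask for one [QUERY] line.
//   "A"      -> all dSize states queried.
//   "A 0 2"  -> only states 0 and 2 queried.
// Returns an empty vector on an out-of-bounds index, where the listing
// raises OutOfBounds instead.
std::vector<bool> queryMask(const std::string& line, std::size_t dSize) {
  std::istringstream iss(line);
  std::string name;
  iss >> name;  // variable name, resolved via idFromName in the real code
  std::vector<bool> mask(dSize, false);
  bool any = false;
  std::size_t idx;
  while (iss >> idx) {
    if (idx >= dSize) return {};  // query modality >= cardinality
    mask[idx] = true;
    any = true;
  }
  if (!any) mask.assign(dSize, true);  // no indices: query every state
  return mask;
}
```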

◆ isEnabledEpsilon()

INLINE bool gum::ApproximationScheme::isEnabledEpsilon ( ) const
virtualinherited

Returns true if stopping criterion on epsilon is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 61 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps.

Referenced by gum::learning::genericBNLearner::isEnabledEpsilon().

61  {
62  return _enabled_eps;
63  }
bool _enabled_eps
If true, the threshold convergence is enabled.

◆ isEnabledMaxIter()

INLINE bool gum::ApproximationScheme::isEnabledMaxIter ( ) const
virtualinherited

Returns true if stopping criterion on max iterations is enabled, false otherwise.

Returns
Returns true if stopping criterion on max iterations is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 112 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter.

Referenced by gum::learning::genericBNLearner::isEnabledMaxIter().

112  {
113  return _enabled_max_iter;
114  }
bool _enabled_max_iter
If true, the maximum iterations stopping criterion is enabled.

◆ isEnabledMaxTime()

INLINE bool gum::ApproximationScheme::isEnabledMaxTime ( ) const
virtualinherited

Returns true if stopping criterion on timeout is enabled, false otherwise.

Returns
Returns true if stopping criterion on timeout is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 138 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time.

Referenced by gum::learning::genericBNLearner::isEnabledMaxTime().

138  {
139  return _enabled_max_time;
140  }
bool _enabled_max_time
If true, the timeout is enabled.

◆ isEnabledMinEpsilonRate()

INLINE bool gum::ApproximationScheme::isEnabledMinEpsilonRate ( ) const
virtualinherited

Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 90 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), and gum::learning::genericBNLearner::isEnabledMinEpsilonRate().

90  {
91  return _enabled_min_rate_eps;
92  }
bool _enabled_min_rate_eps
If true, the minimal threshold for epsilon rate is enabled.

◆ makeInference()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::makeInference ( )
virtual

Starts the inference.

Implements gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 550 of file CNLoopyPropagation_tpl.h.

550  {
551  if (_InferenceUpToDate) { return; }
552 
553  _initialize();
554 
555  __infE::initApproximationScheme();
556 
557  switch (__inferenceType) {
558  case InferenceType::nodeToNeighbours:
559  _makeInferenceNodeToNeighbours();
560  break;
561 
562  case InferenceType::ordered: _makeInferenceByOrderedArcs(); break;
563 
564  case InferenceType::randomOrder: _makeInferenceByRandomOrder(); break;
565  }
566 
567  //_updateMarginals();
568  _updateIndicatrices(); // will call _updateMarginals()
569 
570  _computeExpectations();
571 
572  _InferenceUpToDate = true;
573  }
void _makeInferenceByOrderedArcs()
Starts the inference with this inference type.
InferenceType __inferenceType
The chosen inference type.
void _initialize()
Topological forward propagation to initialize old marginals & messages.
void initApproximationScheme()
Initialise the scheme.
Chooses an arc ordering and sends messages accordingly at all steps.
void _computeExpectations()
Since the network is binary, expectations can be computed from the final marginals which give us the ...
void _makeInferenceByRandomOrder()
Starts the inference with this inference type.
bool _InferenceUpToDate
TRUE if inference has already been performed, FALSE otherwise.
Uses a node-set so we don't iterate on nodes that can't send a new message.
void _updateIndicatrices()
Only update indicatrices variables at the end of computations ( calls _msgP ).
Chooses a random arc ordering and sends messages accordingly.
void _makeInferenceNodeToNeighbours()
Starts the inference with this inference type.

◆ marginalMax() [1/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const NodeId  id) const
inherited

Get the upper marginals of a given node id.

Parameters
id  The node id whose upper marginals we want.
Returns
A constant reference to this node upper marginals.

Definition at line 447 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax.

447  {
448  try {
449  return _marginalMax[id];
450  } catch (NotFound& err) { throw(err); }
451  }
margi _marginalMax
Upper marginals.

◆ marginalMax() [2/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const std::string &  varName) const
inherited

Get the upper marginals of a given variable name.

Parameters
varName  The variable name whose upper marginals we want.
Returns
A constant reference to this variable upper marginals.

Definition at line 430 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax.

431  {
432  try {
433  return _marginalMax[_credalNet->current_bn().idFromName(varName)];
434  } catch (NotFound& err) { throw(err); }
435  }
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
margi _marginalMax
Upper marginals.

◆ marginalMin() [1/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const NodeId  id) const
inherited

Get the lower marginals of a given node id.

Parameters
id  The node id whose lower marginals we want.
Returns
A constant reference to this node lower marginals.

Definition at line 439 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin.

439  {
440  try {
441  return _marginalMin[id];
442  } catch (NotFound& err) { throw(err); }
443  }
margi _marginalMin
Lower marginals.

◆ marginalMin() [2/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const std::string &  varName) const
inherited

Get the lower marginals of a given variable name.

Parameters
varName  The variable name whose lower marginals we want.
Returns
A constant reference to this variable lower marginals.

Definition at line 422 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin.

423  {
424  try {
425  return _marginalMin[_credalNet->current_bn().idFromName(varName)];
426  } catch (NotFound& err) { throw(err); }
427  }
margi _marginalMin
Lower marginals.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.

◆ maxIter()

INLINE Size gum::ApproximationScheme::maxIter ( ) const
virtualinherited

Returns the criterion on number of iterations.

Returns
Returns the criterion on number of iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 102 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_max_iter.

Referenced by gum::learning::genericBNLearner::maxIter().

102 { return _max_iter; }
Size _max_iter
The maximum iterations.

◆ maxTime()

INLINE double gum::ApproximationScheme::maxTime ( ) const
virtualinherited

Returns the timeout (in seconds).

Returns
Returns the timeout (in seconds).

Implements gum::IApproximationSchemeConfiguration.

Definition at line 125 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_max_time.

Referenced by gum::learning::genericBNLearner::maxTime().

125 { return _max_time; }
double _max_time
The timeout.

◆ messageApproximationScheme()

INLINE std::string gum::IApproximationSchemeConfiguration::messageApproximationScheme ( ) const
inherited

Returns the approximation scheme message.

Returns
Returns the approximation scheme message.

Definition at line 40 of file IApproximationSchemeConfiguration_inl.h.

References gum::IApproximationSchemeConfiguration::Continue, gum::IApproximationSchemeConfiguration::Epsilon, gum::IApproximationSchemeConfiguration::epsilon(), gum::IApproximationSchemeConfiguration::Limit, gum::IApproximationSchemeConfiguration::maxIter(), gum::IApproximationSchemeConfiguration::maxTime(), gum::IApproximationSchemeConfiguration::minEpsilonRate(), gum::IApproximationSchemeConfiguration::Rate, gum::IApproximationSchemeConfiguration::stateApproximationScheme(), gum::IApproximationSchemeConfiguration::Stopped, gum::IApproximationSchemeConfiguration::TimeLimit, and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::ApproximationScheme::_stopScheme(), gum::ApproximationScheme::continueApproximationScheme(), and gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg().

40  {
41  std::stringstream s;
42 
43  switch (stateApproximationScheme()) {
44  case ApproximationSchemeSTATE::Continue: s << "in progress"; break;
45 
46  case ApproximationSchemeSTATE::Epsilon:
47  s << "stopped with epsilon=" << epsilon();
48  break;
49 
50  case ApproximationSchemeSTATE::Rate:
51  s << "stopped with rate=" << minEpsilonRate();
52  break;
53 
54  case ApproximationSchemeSTATE::Limit:
55  s << "stopped with max iteration=" << maxIter();
56  break;
57 
58  case ApproximationSchemeSTATE::TimeLimit:
59  s << "stopped with timeout=" << maxTime();
60  break;
61 
62  case ApproximationSchemeSTATE::Stopped: s << "stopped on request"; break;
63 
64  case ApproximationSchemeSTATE::Undefined: s << "undefined state"; break;
65  };
66 
67  return s.str();
68  }
virtual double epsilon() const =0
Returns the value of epsilon.
virtual ApproximationSchemeSTATE stateApproximationScheme() const =0
Returns the approximation scheme state.
virtual double maxTime() const =0
Returns the timeout (in seconds).
virtual Size maxIter() const =0
Returns the criterion on number of iterations.
virtual double minEpsilonRate() const =0
Returns the value of the minimal epsilon rate.

◆ minEpsilonRate()

INLINE double gum::ApproximationScheme::minEpsilonRate ( ) const
virtualinherited

Returns the value of the minimal epsilon rate.

Returns
Returns the value of the minimal epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 74 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_min_rate_eps.

Referenced by gum::learning::genericBNLearner::minEpsilonRate().

74  {
75  return _min_rate_eps;
76  }
double _min_rate_eps
Threshold for the epsilon rate.

◆ nbrIterations()

INLINE Size gum::ApproximationScheme::nbrIterations ( ) const
virtualinherited

Returns the number of iterations.

Returns
Returns the number of iterations.
Exceptions
OperationNotAllowed  Raised if the scheme did not perform.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 163 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_step, GUM_ERROR, gum::ApproximationScheme::stateApproximationScheme(), and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), and gum::learning::genericBNLearner::nbrIterations().

163  {
164  if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
165  GUM_ERROR(OperationNotAllowed,
166  "state of the approximation scheme is undefined");
167  }
168 
169  return _current_step;
170  }
Size _current_step
The current step.
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55

◆ periodSize()

INLINE Size gum::ApproximationScheme::periodSize ( ) const
virtualinherited

Returns the period size.

Returns
Returns the period size.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 149 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_period_size.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference(), and gum::learning::genericBNLearner::periodSize().

149 { return _period_size; }
Size _period_size
Checking criteria frequency.

◆ remainingBurnIn()

INLINE Size gum::ApproximationScheme::remainingBurnIn ( )
inherited

Returns the remaining burn in.

Returns
Returns the remaining burn in.

Definition at line 210 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_burn_in, and gum::ApproximationScheme::_current_step.

210  {
211  if (_burn_in > _current_step) {
212  return _burn_in - _current_step;
213  } else {
214  return 0;
215  }
216  }
Size _burn_in
Number of iterations before checking stopping criteria.
Size _current_step
The current step.

◆ repetitiveInd()

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInd ( ) const
inherited

Get the current independence status.

Returns
True if repetitive, False otherwise.

Definition at line 120 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInd.

120  {
121  return _repetitiveInd;
122  }
bool _repetitiveInd
True if using repetitive independence ( dynamic network only ), False otherwise.

◆ saveExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveExpectations ( const std::string &  path) const
inherited

Saves expectations to file.

Parameters
path  The path to the file to be used.

Definition at line 554 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, and GUM_ERROR.

555  {
556  if (_dynamicExpMin.empty()) //_modal.empty())
557  return;
558 
559  // else not here, to keep the const (natural with a saving process)
560  // else if(_dynamicExpMin.empty() || _dynamicExpMax.empty())
561  //_dynamicExpectations(); // works with or without a dynamic network
562 
563  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
564 
565  if (!m_stream.good()) {
566  GUM_ERROR(IOError,
567  "void InferenceEngine< GUM_SCALAR "
568  ">::saveExpectations(const std::string & path) : could "
569  "not open output file : "
570  << path);
571  }
572 
573  for (const auto& elt : _dynamicExpMin) {
574  m_stream << elt.first; // it->first;
575 
576  // iterates over a vector
577  for (const auto& elt2 : elt.second) {
578  m_stream << " " << elt2;
579  }
580 
581  m_stream << std::endl;
582  }
583 
584  for (const auto& elt : _dynamicExpMax) {
585  m_stream << elt.first;
586 
587  // iterates over a vector
588  for (const auto& elt2 : elt.second) {
589  m_stream << " " << elt2;
590  }
591 
592  m_stream << std::endl;
593  }
594 
595  m_stream.close();
596  }
dynExpe _dynamicExpMin
Lower dynamic expectations.
dynExpe _dynamicExpMax
Upper dynamic expectations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55

◆ saveInference()

template<typename GUM_SCALAR >
void gum::credal::CNLoopyPropagation< GUM_SCALAR >::saveInference ( const std::string &  path)
Deprecated:
Use saveMarginals() from InferenceEngine instead.

This one is easier to read but harder for scripts to parse.

Parameters
path  The path to the file to save marginals.

Definition at line 29 of file CNLoopyPropagation_tpl.h.

References _INF, and GUM_ERROR.

29  {
30  std::string path_name = path.substr(0, path.size() - 4);
31  path_name = path_name + ".res";
32 
33  std::ofstream res(path_name.c_str(), std::ios::out | std::ios::trunc);
34 
35  if (!res.good()) {
36  GUM_ERROR(NotFound,
37  "CNLoopyPropagation<GUM_SCALAR>::saveInference(std::"
38  "string & path) : could not open file : "
39  + path_name);
40  }
41 
42  std::string ext = path.substr(path.size() - 3, path.size());
43 
44  if (std::strcmp(ext.c_str(), "evi") == 0) {
45  std::ifstream evi(path.c_str(), std::ios::in);
46  std::string ligne;
47 
48  if (!evi.good()) {
49  GUM_ERROR(NotFound,
50  "CNLoopyPropagation<GUM_SCALAR>::saveInference(std::"
51  "string & path) : could not open file : "
52  + ext);
53  }
54 
55  while (evi.good()) {
56  getline(evi, ligne);
57  res << ligne << "\n";
58  }
59 
60  evi.close();
61  }
62 
63  res << "[RESULTATS]"
64  << "\n";
65 
66  for (auto node : __bnet->nodes()) {
67  // compute the posterior distribution
68  GUM_SCALAR msg_p_min = 1.0;
69  GUM_SCALAR msg_p_max = 0.0;
70 
71  // evidence case, immediate computation
72  if (__infE::_evidence.exists(node)) {
73  if (__infE::_evidence[node][1] == 0.) {
74  msg_p_min = 0.;
75  } else if (__infE::_evidence[node][1] == 1.) {
76  msg_p_min = 1.;
77  }
78 
79  msg_p_max = msg_p_min;
80  }
81  // otherwise, from node P and node L
82  else {
83  GUM_SCALAR min = _NodesP_min[node];
84  GUM_SCALAR max;
85 
86  if (_NodesP_max.exists(node)) {
87  max = _NodesP_max[node];
88  } else {
89  max = min;
90  }
91 
92  GUM_SCALAR lmin = _NodesL_min[node];
93  GUM_SCALAR lmax;
94 
95  if (_NodesL_max.exists(node)) {
96  lmax = _NodesL_max[node];
97  } else {
98  lmax = lmin;
99  }
100 
101  // limit cases on min
102  if (min == _INF && lmin == 0.) {
103  std::cout << "proba ERR (negatif) : pi = inf, l = 0" << std::endl;
104  }
105 
106  if (lmin == _INF) { // infinite case
107  msg_p_min = GUM_SCALAR(1.);
108  } else if (min == 0. || lmin == 0.) {
109  msg_p_min = GUM_SCALAR(0.);
110  } else {
111  msg_p_min = GUM_SCALAR(1. / (1. + ((1. / min - 1.) * 1. / lmin)));
112  }
113 
114  // limit cases on max
115  if (max == _INF && lmax == 0.) {
116  std::cout << "proba ERR (negatif) : pi = inf, l = 0" << std::endl;
117  }
118 
119  if (lmax == _INF) { // infinite case
120  msg_p_max = GUM_SCALAR(1.);
121  } else if (max == 0. || lmax == 0.) {
122  msg_p_max = GUM_SCALAR(0.);
123  } else {
124  msg_p_max = GUM_SCALAR(1. / (1. + ((1. / max - 1.) * 1. / lmax)));
125  }
126  }
127 
128  if (msg_p_min != msg_p_min && msg_p_max == msg_p_max) {
129  msg_p_min = msg_p_max;
130  }
131 
132  if (msg_p_max != msg_p_max && msg_p_min == msg_p_min) {
133  msg_p_max = msg_p_min;
134  }
135 
136  if (msg_p_max != msg_p_max && msg_p_min != msg_p_min) {
137  std::cout << std::endl;
138  std::cout << "pas de proba calculable (verifier observations)"
139  << std::endl;
140  }
141 
142  res << "P(" << __bnet->variable(node).name() << " | e) = ";
143 
144  if (__infE::_evidence.exists(node)) {
145  res << "(observe)" << std::endl;
146  } else {
147  res << std::endl;
148  }
149 
150  res << "\t\t" << __bnet->variable(node).label(0) << " [ "
151  << (GUM_SCALAR)1. - msg_p_max;
152 
153  if (msg_p_min != msg_p_max) {
154  res << ", " << (GUM_SCALAR)1. - msg_p_min << " ] | ";
155  } else {
156  res << " ] | ";
157  }
158 
159  res << __bnet->variable(node).label(1) << " [ " << msg_p_min;
160 
161  if (msg_p_min != msg_p_max) {
162  res << ", " << msg_p_max << " ]" << std::endl;
163  } else {
164  res << " ]" << std::endl;
165  }
166  } // end of : for each node
167 
168  res.close();
169  }
NodeProperty< GUM_SCALAR > _NodesP_min
"Lower" node information obtained by combination of parents' messages.
#define _INF
const IBayesNet< GUM_SCALAR > * __bnet
A pointer to its IBayesNet used as a DAG.
margi _evidence
Holds observed variables states.
NodeProperty< GUM_SCALAR > _NodesL_min
"Lower" node information obtained by combination of children's messages.
NodeProperty< GUM_SCALAR > _NodesP_max
"Upper" node information obtained by combination of parents' messages.
NodeProperty< GUM_SCALAR > _NodesL_max
"Upper" node information obtained by combination of children's messages.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
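For each (binary) node, the posterior bound above is combined from a π value (parents' messages) and a λ value (children's messages) as p = 1 / (1 + (1/π − 1) · 1/λ), with the infinite and zero cases short-circuited first. A standalone sketch of that combination (hypothetical combinePiLambda helper; HUGE_VAL stands in for the _INF macro):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the pi/lambda combination in the listing above:
// p = 1 / (1 + (1/pi - 1) * 1/lambda), limit cases handled first.
double combinePiLambda(double pi, double lambda) {
  if (lambda == HUGE_VAL) return 1.0;          // infinite case
  if (pi == 0.0 || lambda == 0.0) return 0.0;  // degenerate cases
  return 1.0 / (1.0 + ((1.0 / pi - 1.0) * 1.0 / lambda));
}
```

The same expression is evaluated twice in the listing, once with the min pair (msg_p_min) and once with the max pair (msg_p_max), which yields the lower and upper posterior bounds.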

◆ saveMarginals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals ( const std::string &  path) const
inherited

Saves marginals to file.

Parameters
path  The path to the file to be used.

Definition at line 528 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, and GUM_ERROR.

529  {
530  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
531 
532  if (!m_stream.good()) {
533  GUM_ERROR(IOError,
534  "void InferenceEngine< GUM_SCALAR >::saveMarginals(const "
535  "std::string & path) const : could not open output file "
536  ": "
537  << path);
538  }
539 
540  for (const auto& elt : _marginalMin) {
541  Size esize = Size(elt.second.size());
542 
543  for (Size mod = 0; mod < esize; mod++) {
544  m_stream << _credalNet->current_bn().variable(elt.first).name() << " "
545  << mod << " " << (elt.second)[mod] << " "
546  << _marginalMax[elt.first][mod] << std::endl;
547  }
548  }
549 
550  m_stream.close();
551  }
margi _marginalMin
Lower marginals.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
margi _marginalMax
Upper marginals.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
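The file written above holds one space-separated line per (variable, modality) pair: `name modality lowerBound upperBound`. A sketch of emitting that format to any std::ostream (hypothetical writeMarginalLines helper; the real code reads the bounds from _marginalMin/_marginalMax):

```cpp
#include <cassert>
#include <cstddef>
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical writer for the saveMarginals format shown above:
// one "name mod lower upper" line per modality.
void writeMarginalLines(std::ostream& os, const std::string& name,
                        const std::vector<double>& lower,
                        const std::vector<double>& upper) {
  for (std::size_t mod = 0; mod < lower.size(); ++mod)
    os << name << " " << mod << " " << lower[mod] << " " << upper[mod] << "\n";
}
```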

◆ saveVertices()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices ( const std::string &  path) const
inherited

Saves vertices to file.

Parameters
path  The path to the file to be used.

Definition at line 628 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, and GUM_ERROR.

628  {
629  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
630 
631  if (!m_stream.good()) {
632  GUM_ERROR(IOError,
633  "void InferenceEngine< GUM_SCALAR >::saveVertices(const "
634  "std::string & path) : could not open output file : "
635  << path);
636  }
637 
638  for (const auto& elt : _marginalSets) {
639  m_stream << _credalNet->current_bn().variable(elt.first).name()
640  << std::endl;
641 
642  for (const auto& elt2 : elt.second) {
643  m_stream << "[";
644  bool first = true;
645 
646  for (const auto& elt3 : elt2) {
647  if (!first) {
648  m_stream << ",";
649  }
650  first = false;
651 
652  m_stream << elt3;
653  }
654 
655  m_stream << "]\n";
656  }
657  }
658 
659  m_stream.close();
660  }
credalSet _marginalSets
Credal sets vertices, if enabled.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
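Each credal-set vertex above is written as a bracketed, comma-separated list on its own line under the variable name. A sketch of that formatting in isolation (hypothetical vertexLine helper, using default ostream formatting for the coordinates):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper reproducing the per-vertex formatting of saveVertices:
// the vertex {0.25, 0.75} becomes the line "[0.25,0.75]".
std::string vertexLine(const std::vector<double>& vertex) {
  std::ostringstream os;
  os << "[";
  bool first = true;
  for (double v : vertex) {
    if (!first) os << ",";  // comma before every coordinate but the first
    first = false;
    os << v;
  }
  os << "]";
  return os.str();
}
```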

◆ setEpsilon()

INLINE void gum::ApproximationScheme::setEpsilon ( double  eps)
virtualinherited

Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|.

If the criterion was disabled it will be enabled.

Parameters
eps  The new epsilon value.
Exceptions
OutOfLowerBound  Raised if eps < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 43 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps, gum::ApproximationScheme::_eps, and GUM_ERROR.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::learning::GreedyHillClimbing::GreedyHillClimbing(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setEpsilon().

43  {
44  if (eps < 0.) { GUM_ERROR(OutOfLowerBound, "eps should be >=0"); }
45 
46  _eps = eps;
47  _enabled_eps = true;
48  }
bool _enabled_eps
If true, the threshold convergence is enabled.
double _eps
Threshold for convergence.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55

◆ setMaxIter()

INLINE void gum::ApproximationScheme::setMaxIter ( Size  max)
virtualinherited

Stopping criterion on number of iterations.

If the criterion was disabled it will be enabled.

Parameters
max  The maximum number of iterations.
Exceptions
OutOfLowerBound  Raised if max < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 95 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter, gum::ApproximationScheme::_max_iter, and GUM_ERROR.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setMaxIter().

95  {
96  if (max < 1) { GUM_ERROR(OutOfLowerBound, "max should be >=1"); }
97  _max_iter = max;
98  _enabled_max_iter = true;
99  }
bool _enabled_max_iter
If true, the maximum iterations stopping criterion is enabled.
Size _max_iter
The maximum iterations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55

◆ setMaxTime()

INLINE void gum::ApproximationScheme::setMaxTime ( double  timeout)
virtualinherited

Stopping criterion on timeout.

If the criterion was disabled it will be enabled.

Parameters
timeout  The timeout value in seconds.
Exceptions
OutOfLowerBound  Raised if timeout <= 0.0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 118 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time, gum::ApproximationScheme::_max_time, and GUM_ERROR.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setMaxTime().

118  {
119  if (timeout <= 0.) { GUM_ERROR(OutOfLowerBound, "timeout should be >0."); }
120  _max_time = timeout;
121  _enabled_max_time = true;
122  }
bool _enabled_max_time
If true, the timeout is enabled.
double _max_time
The timeout.

◆ setMinEpsilonRate()

INLINE void gum::ApproximationScheme::setMinEpsilonRate ( double  rate)
virtualinherited

Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|).

If the criterion was disabled it will be enabled.

Parameters
rate The minimal epsilon rate.
Exceptions
OutOfLowerBound Raised if rate < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 66 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps, gum::ApproximationScheme::_min_rate_eps, and GUM_ERROR.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setMinEpsilonRate().

66  {
67  if (rate < 0) { GUM_ERROR(OutOfLowerBound, "rate should be >=0"); }
68 
69  _min_rate_eps = rate;
70  _enabled_min_rate_eps = true;
71  }
bool _enabled_min_rate_eps
If true, the minimal threshold for epsilon rate is enabled.
double _min_rate_eps
Threshold for the epsilon rate.

◆ setPeriodSize()

INLINE void gum::ApproximationScheme::setPeriodSize ( Size  p)
virtualinherited

Set the number of samples between two checks of the stopping criteria.

Parameters
p The new period value.
Exceptions
OutOfLowerBound Raised if p < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 143 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_period_size, and GUM_ERROR.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setPeriodSize().

143  {
144  if (p < 1) { GUM_ERROR(OutOfLowerBound, "p should be >=1"); }
145 
146  _period_size = p;
147  }
Size _period_size
Checking criteria frequency.
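The four setters above share one validation pattern: a value below the lower bound raises OutOfLowerBound, otherwise the value is stored and the criterion is enabled. A minimal standalone sketch of that pattern (a mock class, not the real gum::ApproximationScheme, so it compiles without aGrUM; OutOfLowerBound here is a stand-in for gum::OutOfLowerBound):

```cpp
#include <cstddef>
#include <stdexcept>

// Stand-in for gum::OutOfLowerBound.
struct OutOfLowerBound : std::invalid_argument {
  using std::invalid_argument::invalid_argument;
};

// Mock mirroring the validation pattern of the setters documented above.
class SchemeConfig {
  public:
  void setMaxIter(std::size_t max) {
    if (max < 1) { throw OutOfLowerBound("max should be >=1"); }
    _max_iter = max;
    _enabled_max_iter = true;
  }
  void setMaxTime(double timeout) {
    if (timeout <= 0.) { throw OutOfLowerBound("timeout should be >0."); }
    _max_time = timeout;
    _enabled_max_time = true;
  }
  void setMinEpsilonRate(double rate) {
    if (rate < 0) { throw OutOfLowerBound("rate should be >=0"); }
    _min_rate_eps = rate;
    _enabled_min_rate_eps = true;
  }
  void setPeriodSize(std::size_t p) {
    if (p < 1) { throw OutOfLowerBound("p should be >=1"); }
    _period_size = p;
  }

  std::size_t maxIter() const { return _max_iter; }
  double maxTime() const { return _max_time; }
  double minEpsilonRate() const { return _min_rate_eps; }
  std::size_t periodSize() const { return _period_size; }

  private:
  std::size_t _max_iter{10000};
  bool _enabled_max_iter{true};
  double _max_time{1e6};
  bool _enabled_max_time{false};
  double _min_rate_eps{0.01};
  bool _enabled_min_rate_eps{true};
  std::size_t _period_size{1};
};
```

Note that, as in the real classes, a valid call both stores the value and (re)enables the corresponding criterion.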

◆ setRepetitiveInd()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd ( const bool  repetitive)
inherited
Parameters
repetitive True if repetitive independence is to be used, false otherwise. Only useful with dynamic networks.

Definition at line 111 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInd, and gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit().

111  {
112  bool oldValue = _repetitiveInd;
113  _repetitiveInd = repetitive;
114 
115  // do not compute clusters more than once
116  if (_repetitiveInd && !oldValue) _repetitiveInit();
117  }
void _repetitiveInit()
Initialize _t0 and _t1 clusters.
bool _repetitiveInd
True if using repetitive independence ( dynamic network only ), False otherwise.
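The setter above only rebuilds the _t0/_t1 clusters when the flag actually transitions from false to true; redundant set-to-true calls are no-ops. A standalone sketch of that transition guard (the counter stands in for the _repetitiveInit() call; RepetitiveFlag is an illustrative mock, not an aGrUM class):

```cpp
// Mock of the false -> true transition guard in setRepetitiveInd().
class RepetitiveFlag {
  public:
  void setRepetitiveInd(bool repetitive) {
    bool oldValue = _repetitiveInd;
    _repetitiveInd = repetitive;
    // rebuild clusters only on a false -> true transition
    if (_repetitiveInd && !oldValue) { ++_initCount; }
  }
  bool repetitiveInd() const { return _repetitiveInd; }
  int initCount() const { return _initCount; }  // times _repetitiveInit() would run
  private:
  bool _repetitiveInd{false};
  int _initCount{0};
};
```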

◆ setVerbosity()

INLINE void gum::ApproximationScheme::setVerbosity ( bool  v)
virtualinherited

Set the verbosity on (true) or off (false).

Parameters
v If true, then verbosity is turned on.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 152 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_verbosity.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setVerbosity().

152 { _verbosity = v; }
bool _verbosity
If true, verbosity is enabled.

◆ startOfPeriod()

INLINE bool gum::ApproximationScheme::startOfPeriod ( )
inherited

Returns true if we are at the beginning of a period (compute error is mandatory).

Returns
Returns true if we are at the beginning of a period (compute error is mandatory).

Definition at line 197 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_burn_in, gum::ApproximationScheme::_current_step, and gum::ApproximationScheme::_period_size.

Referenced by gum::ApproximationScheme::continueApproximationScheme().

197  {
198  if (_current_step < _burn_in) { return false; }
199 
200  if (_period_size == 1) { return true; }
201 
202  return ((_current_step - _burn_in) % _period_size == 0);
203  }
Size _burn_in
Number of iterations before checking stopping criteria.
Size _current_step
The current step.
Size _period_size
Checking criteria frequency.
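The predicate above can be restated as a free function: the error must be computed at step s exactly when the burn-in has passed and (s - burn_in) is a multiple of the period size. A self-contained restatement:

```cpp
#include <cstddef>

// Standalone restatement of the burn-in / period logic of startOfPeriod():
// returns true iff current_step >= burn_in and (current_step - burn_in)
// is a multiple of period_size.
bool startOfPeriod(std::size_t current_step, std::size_t burn_in,
                   std::size_t period_size) {
  if (current_step < burn_in) { return false; }  // still burning in
  if (period_size == 1) { return true; }         // check every step
  return (current_step - burn_in) % period_size == 0;
}
```

For example, with burn_in = 5 and period_size = 4, the error is computed at steps 5, 9, 13, …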

◆ stateApproximationScheme()

INLINE IApproximationSchemeConfiguration::ApproximationSchemeSTATE gum::ApproximationScheme::stateApproximationScheme ( ) const
virtualinherited

Returns the approximation scheme state.

Returns
Returns the approximation scheme state.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 158 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_state.

Referenced by gum::ApproximationScheme::continueApproximationScheme(), gum::ApproximationScheme::history(), gum::ApproximationScheme::nbrIterations(), and gum::learning::genericBNLearner::stateApproximationScheme().

158  {
159  return _current_state;
160  }
ApproximationSchemeSTATE _current_state
The current state.

◆ stopApproximationScheme()

INLINE void gum::ApproximationScheme::stopApproximationScheme ( )
inherited

Stop the approximation scheme.

Definition at line 219 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_state, gum::ApproximationScheme::_stopScheme(), gum::IApproximationSchemeConfiguration::Continue, and gum::IApproximationSchemeConfiguration::Stopped.

Referenced by gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), and gum::learning::LocalSearchWithTabuList::learnStructure().


◆ storeBNOpt() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( const bool  value)
inherited
Parameters
value True if optimal Bayesian networks are to be stored for each variable and each modality.

Definition at line 99 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt.

99  {
100  _storeBNOpt = value;
101  }
bool _storeBNOpt
True if optimal Bayesian networks are stored for each variable and each modality, False otherwise.

◆ storeBNOpt() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( ) const
inherited
Returns
True if optimal Bayesian networks are stored for each variable and each modality, False otherwise.

Definition at line 135 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt.

135  {
136  return _storeBNOpt;
137  }
bool _storeBNOpt
True if optimal Bayesian networks are stored for each variable and each modality, False otherwise.

◆ storeVertices() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( const bool  value)
inherited
Parameters
value True if vertices are to be stored, false otherwise.

Definition at line 104 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets(), and gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices.

104  {
105  _storeVertices = value;
106 
107  if (value) _initMarginalSets();
108  }
void _initMarginalSets()
Initialize credal set vertices with empty sets.
bool _storeVertices
True if credal sets vertices are stored, False otherwise.

◆ storeVertices() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( ) const
inherited

Get the number of iterations without changes used to stop some algorithms.

Returns
the number of iterations.int iterStop () const;
True if vertice are stored, False otherwise.

Definition at line 130 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices.

130  {
131  return _storeVertices;
132  }
bool _storeVertices
True if credal sets vertices are stored, False otherwise.

◆ toString()

template<typename GUM_SCALAR >
std::string gum::credal::InferenceEngine< GUM_SCALAR >::toString ( ) const
inherited

Returns a string representation of all node marginals.

Definition at line 599 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_query, gum::HashTable< Key, Val, Alloc >::empty(), and gum::HashTable< Key, Val, Alloc >::exists().

599  {
600  std::stringstream output;
601  output << std::endl;
602 
603  // use cbegin() when available
604  for (const auto& elt : _marginalMin) {
605  Size esize = Size(elt.second.size());
606 
607  for (Size mod = 0; mod < esize; mod++) {
608  output << "P(" << _credalNet->current_bn().variable(elt.first).name()
609  << "=" << mod << "|e) = [ ";
610  output << _marginalMin[elt.first][mod] << ", "
611  << _marginalMax[elt.first][mod] << " ]";
612 
613  if (!_query.empty())
614  if (_query.exists(elt.first) && _query[elt.first][mod])
615  output << " QUERY";
616 
617  output << std::endl;
618  }
619 
620  output << std::endl;
621  }
622 
623  return output.str();
624  }
margi _marginalMin
Lower marginals.
bool exists(const Key &key) const
Checks whether there exists an element with a given key in the hashtable.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
query _query
Holds the query nodes states.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
bool empty() const noexcept
Indicates whether the hash table is empty.
margi _marginalMax
Upper marginals.
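The loop in toString() pairs the lower and upper marginal of each modality into one credal interval. A hedged sketch of that formatting, with plain std::map in place of aGrUM's hash tables (formatMarginals is an illustrative name, not part of the API; the QUERY markers and blank-line spacing of the real method are omitted):

```cpp
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Sketch of the marginal formatting in toString(): for each node,
// print one credal interval "P(X=mod|e) = [ min, max ]" per modality.
std::string formatMarginals(
    const std::map<std::string, std::vector<double>>& marginalMin,
    const std::map<std::string, std::vector<double>>& marginalMax) {
  std::stringstream output;
  for (const auto& elt : marginalMin) {
    const auto& maxs = marginalMax.at(elt.first);
    for (std::size_t mod = 0; mod < elt.second.size(); ++mod) {
      output << "P(" << elt.first << "=" << mod << "|e) = [ "
             << elt.second[mod] << ", " << maxs[mod] << " ]\n";
    }
  }
  return output.str();
}
```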

◆ updateApproximationScheme()

INLINE void gum::ApproximationScheme::updateApproximationScheme ( unsigned int  incr = 1)
inherited

Update the scheme w.r.t. the new error and increment the step count.

Parameters
incr The number of steps to add to the current step count.

Definition at line 206 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_step.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::SamplingInference< GUM_SCALAR >::_loopApproxInference(), gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

206  {
207  _current_step += incr;
208  }
Size _current_step
The current step.

◆ verbosity()

INLINE bool gum::ApproximationScheme::verbosity ( ) const
virtualinherited

Returns true if verbosity is enabled.

Returns
Returns true if verbosity is enabled.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 154 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_verbosity.

Referenced by gum::ApproximationScheme::continueApproximationScheme(), gum::ApproximationScheme::history(), and gum::learning::genericBNLearner::verbosity().

154 { return _verbosity; }
bool _verbosity
If true, verbosity is enabled.

◆ vertices()

template<typename GUM_SCALAR >
const std::vector< std::vector< GUM_SCALAR > > & gum::credal::InferenceEngine< GUM_SCALAR >::vertices ( const NodeId  id) const
inherited

Get the vertices of a given node id.

Parameters
id The node id whose vertices we want.
Returns
A constant reference to this node's vertices.

Definition at line 523 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets.

523  {
524  return _marginalSets[id];
525  }
credalSet _marginalSets
Credal sets vertices, if enabled.

Member Data Documentation

◆ __bnet

template<typename GUM_SCALAR >
const IBayesNet< GUM_SCALAR >* gum::credal::CNLoopyPropagation< GUM_SCALAR >::__bnet
private

◆ __cn

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR >* gum::credal::CNLoopyPropagation< GUM_SCALAR >::__cn
private

A pointer to the CredalNet to be used.

Definition at line 374 of file CNLoopyPropagation.h.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::CNLoopyPropagation().

◆ __inferenceType

template<typename GUM_SCALAR >
InferenceType gum::credal::CNLoopyPropagation< GUM_SCALAR >::__inferenceType
private

The chosen inference type.

nodeToNeighbours by default.

Definition at line 371 of file CNLoopyPropagation.h.

Referenced by gum::credal::CNLoopyPropagation< GUM_SCALAR >::CNLoopyPropagation(), and gum::credal::CNLoopyPropagation< GUM_SCALAR >::inferenceType().

◆ _ArcsL_max

template<typename GUM_SCALAR >
ArcProperty< GUM_SCALAR > gum::credal::CNLoopyPropagation< GUM_SCALAR >::_ArcsL_max
protected

"Upper" information \( \Lambda \) coming from one's children.

Definition at line 351 of file CNLoopyPropagation.h.

◆ _ArcsL_min

template<typename GUM_SCALAR >
ArcProperty< GUM_SCALAR > gum::credal::CNLoopyPropagation< GUM_SCALAR >::_ArcsL_min
protected

"Lower" information \( \Lambda \) coming from one's children.

Definition at line 339 of file CNLoopyPropagation.h.

◆ _ArcsP_max

template<typename GUM_SCALAR >
ArcProperty< GUM_SCALAR > gum::credal::CNLoopyPropagation< GUM_SCALAR >::_ArcsP_max
protected

"Upper" information \( \pi \) coming from one's parent.

Definition at line 353 of file CNLoopyPropagation.h.

◆ _ArcsP_min

template<typename GUM_SCALAR >
ArcProperty< GUM_SCALAR > gum::credal::CNLoopyPropagation< GUM_SCALAR >::_ArcsP_min
protected

"Lower" information \( \pi \) coming from one's parent.

Definition at line 341 of file CNLoopyPropagation.h.

◆ _burn_in

◆ _credalNet

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR >* gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet
protectedinherited

A pointer to the Credal Net used.

Definition at line 74 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy(), gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__verticesSampling(), gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginals(), gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets(), gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit(), gum::credal::InferenceEngine< GUM_SCALAR >::_updateExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::credalNet(), gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax(), gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin(), gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine(), gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence(), gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidenceFile(), gum::credal::InferenceEngine< GUM_SCALAR >::insertModals(), gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery(), gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile(), gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax(), gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin(), gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals(), gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices(), and gum::credal::InferenceEngine< GUM_SCALAR >::toString().

◆ _current_epsilon

double gum::ApproximationScheme::_current_epsilon
protectedinherited

◆ _current_rate

double gum::ApproximationScheme::_current_rate
protectedinherited

◆ _current_state

◆ _current_step

Size gum::ApproximationScheme::_current_step
protectedinherited

The current step.

Definition at line 378 of file approximationScheme.h.

Referenced by gum::l