aGrUM  0.15.1
gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine > Class Template Reference

<agrum/CN/CNMonteCarloSampling.h> More...

#include <CNMonteCarloSampling.h>


Public Attributes

Signaler3< Size, double, double > onProgress
 Progression, error and time. More...
 
Signaler1< std::string > onStop
 Criteria message. More...
 

Public Member Functions

virtual void insertEvidenceFile (const std::string &path)
 Insert evidence from file. More...
 
Constructors / Destructors
 CNMonteCarloSampling (const CredalNet< GUM_SCALAR > &credalNet)
 Constructor. More...
 
virtual ~CNMonteCarloSampling ()
 Destructor. More...
 
Public algorithm methods
void makeInference ()
 Starts the inference. More...
 
Post-inference methods
virtual void eraseAllEvidence ()
 Erase all inference related data to perform another one. More...
 
Getters and setters
VarMod2BNsMap< GUM_SCALAR > * getVarMod2BNsMap ()
 Get optimum IBayesNet. More...
 
const CredalNet< GUM_SCALAR > & credalNet ()
 Get this credal network. More...
 
const NodeProperty< std::vector< NodeId > > & getT0Cluster () const
 Get the _t0 cluster. More...
 
const NodeProperty< std::vector< NodeId > > & getT1Cluster () const
 Get the _t1 cluster. More...
 
void setRepetitiveInd (const bool repetitive)
 
void storeVertices (const bool value)
 
bool storeVertices () const
 Returns True if credal set vertices are stored, False otherwise. More...
 
void storeBNOpt (const bool value)
 
bool storeBNOpt () const
 
bool repetitiveInd () const
 Get the current independence status. More...
 
Pre-inference initialization methods
void insertModalsFile (const std::string &path)
 Insert variables modalities from file to compute expectations. More...
 
void insertModals (const std::map< std::string, std::vector< GUM_SCALAR > > &modals)
 Insert variables modalities from map to compute expectations. More...
 
void insertEvidence (const std::map< std::string, std::vector< GUM_SCALAR > > &eviMap)
 Insert evidence from map. More...
 
void insertEvidence (const NodeProperty< std::vector< GUM_SCALAR > > &evidence)
 Insert evidence from Property. More...
 
void insertQueryFile (const std::string &path)
 Insert query variables states from file. More...
 
void insertQuery (const NodeProperty< std::vector< bool > > &query)
 Insert query variables and states from Property. More...
 
Post-inference methods
const std::vector< GUM_SCALAR > & marginalMin (const NodeId id) const
 Get the lower marginals of a given node id. More...
 
const std::vector< GUM_SCALAR > & marginalMin (const std::string &varName) const
 Get the lower marginals of a given variable name. More...
 
const std::vector< GUM_SCALAR > & marginalMax (const NodeId id) const
 Get the upper marginals of a given node id. More...
 
const std::vector< GUM_SCALAR > & marginalMax (const std::string &varName) const
 Get the upper marginals of a given variable name. More...
 
const GUM_SCALAR & expectationMin (const NodeId id) const
 Get the lower expectation of a given node id. More...
 
const GUM_SCALAR & expectationMin (const std::string &varName) const
 Get the lower expectation of a given variable name. More...
 
const GUM_SCALAR & expectationMax (const NodeId id) const
 Get the upper expectation of a given node id. More...
 
const GUM_SCALAR & expectationMax (const std::string &varName) const
 Get the upper expectation of a given variable name. More...
 
const std::vector< GUM_SCALAR > & dynamicExpMin (const std::string &varName) const
 Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. More...
 
const std::vector< GUM_SCALAR > & dynamicExpMax (const std::string &varName) const
 Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. More...
 
const std::vector< std::vector< GUM_SCALAR > > & vertices (const NodeId id) const
 Get the vertices of a given node id. More...
 
void saveMarginals (const std::string &path) const
 Saves marginals to file. More...
 
void saveExpectations (const std::string &path) const
 Saves expectations to file. More...
 
void saveVertices (const std::string &path) const
 Saves vertices to file. More...
 
void dynamicExpectations ()
 Compute dynamic expectations. More...
 
std::string toString () const
 Print all node marginals to standard output. More...
 
const std::string getApproximationSchemeMsg ()
 Get approximation scheme state. More...
 
Getters and setters
void setEpsilon (double eps)
 Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|. More...
 
double epsilon () const
 Returns the value of epsilon. More...
 
void disableEpsilon ()
 Disable stopping criterion on epsilon. More...
 
void enableEpsilon ()
 Enable stopping criterion on epsilon. More...
 
bool isEnabledEpsilon () const
 Returns true if stopping criterion on epsilon is enabled, false otherwise. More...
 
void setMinEpsilonRate (double rate)
 Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|). More...
 
double minEpsilonRate () const
 Returns the value of the minimal epsilon rate. More...
 
void disableMinEpsilonRate ()
 Disable stopping criterion on epsilon rate. More...
 
void enableMinEpsilonRate ()
 Enable stopping criterion on epsilon rate. More...
 
bool isEnabledMinEpsilonRate () const
 Returns true if stopping criterion on epsilon rate is enabled, false otherwise. More...
 
void setMaxIter (Size max)
 Stopping criterion on number of iterations. More...
 
Size maxIter () const
 Returns the criterion on number of iterations. More...
 
void disableMaxIter ()
 Disable stopping criterion on max iterations. More...
 
void enableMaxIter ()
 Enable stopping criterion on max iterations. More...
 
bool isEnabledMaxIter () const
 Returns true if stopping criterion on max iterations is enabled, false otherwise. More...
 
void setMaxTime (double timeout)
 Stopping criterion on timeout. More...
 
double maxTime () const
 Returns the timeout (in seconds). More...
 
double currentTime () const
 Returns the current running time in seconds. More...
 
void disableMaxTime ()
 Disable stopping criterion on timeout. More...
 
void enableMaxTime ()
 Enable stopping criterion on timeout. More...
 
bool isEnabledMaxTime () const
 Returns true if stopping criterion on timeout is enabled, false otherwise. More...
 
void setPeriodSize (Size p)
 Number of samples between two tests of the stopping criteria. More...
 
Size periodSize () const
 Returns the period size. More...
 
void setVerbosity (bool v)
 Set the verbosity on (true) or off (false). More...
 
bool verbosity () const
 Returns true if verbosity is enabled. More...
 
ApproximationSchemeSTATE stateApproximationScheme () const
 Returns the approximation scheme state. More...
 
Size nbrIterations () const
 Returns the number of iterations. More...
 
const std::vector< double > & history () const
 Returns the scheme history. More...
 
void initApproximationScheme ()
 Initialise the scheme. More...
 
bool startOfPeriod ()
 Returns true if we are at the beginning of a period (compute error is mandatory). More...
 
void updateApproximationScheme (unsigned int incr=1)
 Update the scheme w.r.t the new error and increment steps. More...
 
Size remainingBurnIn ()
 Returns the remaining burn in. More...
 
void stopApproximationScheme ()
 Stop the approximation scheme. More...
 
bool continueApproximationScheme (double error)
 Update the scheme w.r.t the new error. More...
 
Getters and setters
std::string messageApproximationScheme () const
 Returns the approximation scheme message. More...
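
Taken together, the criteria above mean the scheme stops when the error |f(t+1)-f(t)| drops below epsilon (tested once per period), or when the rate, iteration, or time limits trigger. A minimal self-contained sketch of how an epsilon criterion, a period size, and an iteration limit interact (plain C++, not aGrUM's implementation; all names are illustrative):

```cpp
#include <cmath>
#include <cstddef>

// Illustrative stand-alone sketch of the stopping logic documented above:
// every `period_size` iterations the error |f(t+1) - f(t)| is tested against
// `eps`, and the loop also stops at `max_iter` iterations. All names are
// hypothetical, not aGrUM's.
struct MiniScheme {
  double      eps         = 1e-6;
  std::size_t max_iter    = 1000;
  std::size_t period_size = 10;
  std::size_t steps       = 0;

  // Runs on the toy sequence f(t) = 1/2^t and returns the iteration
  // count at which the scheme stopped.
  std::size_t run() {
    double previous = 1.0;
    for (steps = 1; steps <= max_iter; ++steps) {
      double current = previous / 2.0;               // f(t+1)
      bool start_of_period = (steps % period_size == 0);
      if (start_of_period && std::fabs(current - previous) < eps)
        break;                                       // epsilon criterion met
      previous = current;
    }
    return steps;
  }
};
```

With these toy numbers the error only falls below 1e-6 at iteration 20, and since 20 is a period boundary the scheme stops there rather than running to `max_iter`.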
 

Public Types

enum  ApproximationSchemeSTATE : char {
  ApproximationSchemeSTATE::Undefined, ApproximationSchemeSTATE::Continue, ApproximationSchemeSTATE::Epsilon, ApproximationSchemeSTATE::Rate,
  ApproximationSchemeSTATE::Limit, ApproximationSchemeSTATE::TimeLimit, ApproximationSchemeSTATE::Stopped
}
 The different states of an approximation scheme. More...
 

Protected Attributes

bool _repetitiveInd
 True if using repetitive independence ( dynamic network only ), False otherwise. More...
 
__margis _l_marginalMin
 Threads lower marginals, one per thread. More...
 
__margis _l_marginalMax
 Threads upper marginals, one per thread. More...
 
__expes _l_expectationMin
 Threads lower expectations, one per thread. More...
 
__expes _l_expectationMax
 Threads upper expectations, one per thread. More...
 
__modals _l_modal
 Threads modalities. More...
 
__credalSets _l_marginalSets
 Threads vertices. More...
 
__margis _l_evidence
 Threads evidence. More...
 
__clusters _l_clusters
 Threads clusters. More...
 
std::vector< __bnet *> _workingSet
 Threads IBayesNet. More...
 
std::vector< List< const Potential< GUM_SCALAR > *> *> _workingSetE
 Threads evidence. More...
 
std::vector< BNInferenceEngine *> _l_inferenceEngine
 Threads BNInferenceEngine. More...
 
std::vector< VarMod2BNsMap< GUM_SCALAR > *> _l_optimalNet
 Threads optimal IBayesNet. More...
 
const CredalNet< GUM_SCALAR > * _credalNet
 A pointer to the Credal Net used. More...
 
margi _oldMarginalMin
 Old lower marginals used to compute epsilon. More...
 
margi _oldMarginalMax
 Old upper marginals used to compute epsilon. More...
 
margi _marginalMin
 Lower marginals. More...
 
margi _marginalMax
 Upper marginals. More...
 
credalSet _marginalSets
 Credal sets vertices, if enabled. More...
 
expe _expectationMin
 Lower expectations, if some variables modalities were inserted. More...
 
expe _expectationMax
 Upper expectations, if some variables modalities were inserted. More...
 
dynExpe _dynamicExpMin
 Lower dynamic expectations. More...
 
dynExpe _dynamicExpMax
 Upper dynamic expectations. More...
 
dynExpe _modal
 Variables modalities used to compute expectations. More...
 
margi _evidence
 Holds observed variables states. More...
 
query _query
 Holds the query nodes states. More...
 
cluster _t0
 Clusters of nodes used with dynamic networks. More...
 
cluster _t1
 Clusters of nodes used with dynamic networks. More...
 
bool _storeVertices
 True if credal sets vertices are stored, False otherwise. More...
 
bool _storeBNOpt
 True if optimal IBayesNets are stored during inference, False otherwise. More...
 
VarMod2BNsMap< GUM_SCALAR > _dbnOpt
 Object used to efficiently store optimal bayes net during inference, for some algorithms. More...
 
int _timeSteps
 The number of time steps of this network (only useful for dynamic networks). More...
 
double _current_epsilon
 Current epsilon. More...
 
double _last_epsilon
 Last epsilon value. More...
 
double _current_rate
 Current rate. More...
 
Size _current_step
 The current step. More...
 
Timer _timer
 The timer. More...
 
ApproximationSchemeSTATE _current_state
 The current state. More...
 
std::vector< double > _history
 The scheme history, used only if verbosity == true. More...
 
double _eps
 Threshold for convergence. More...
 
bool _enabled_eps
 If true, the threshold convergence is enabled. More...
 
double _min_rate_eps
 Threshold for the epsilon rate. More...
 
bool _enabled_min_rate_eps
 If true, the minimal threshold for epsilon rate is enabled. More...
 
double _max_time
 The timeout. More...
 
bool _enabled_max_time
 If true, the timeout is enabled. More...
 
Size _max_iter
 The maximum iterations. More...
 
bool _enabled_max_iter
 If true, the maximum iterations stopping criterion is enabled. More...
 
Size _burn_in
 Number of iterations before checking stopping criteria. More...
 
Size _period_size
 Checking criteria frequency. More...
 
bool _verbosity
 If true, verbosity is enabled. More...
 

Protected Member Functions

Protected initialization methods


void _initThreadsData (const Size &num_threads, const bool __storeVertices, const bool __storeBNOpt)
 Initialize threads data. More...
 
Protected algorithms methods
bool _updateThread (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Update thread information (marginals, expectations, IBayesNet, vertices) for a given node id. More...
 
void _updateMarginals ()
 Fusion of threads marginals. More...
 
const GUM_SCALAR _computeEpsilon ()
 Compute epsilon and update old marginals. More...
 
void _updateOldMarginals ()
 Update old marginals (from current marginals). More...
 
Protected post-inference methods
void _optFusion ()
 Fusion of threads optimal IBayesNet. More...
 
void _expFusion ()
 Fusion of threads expectations. More...
 
void _verticesFusion ()
 
Protected initialization methods
void _repetitiveInit ()
 Initialize _t0 and _t1 clusters. More...
 
void _initExpectations ()
 Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality. More...
 
void _initMarginals ()
 Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0. More...
 
void _initMarginalSets ()
 Initialize credal set vertices with empty sets. More...
 
Protected algorithms methods
void _updateExpectations (const NodeId &id, const std::vector< GUM_SCALAR > &vertex)
 Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations. More...
 
void _updateCredalSets (const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
 Given a node id and one of its possible vertices, update its credal set. More...
 
Protected post-inference methods
void _dynamicExpectations ()
 Rearrange lower and upper expectations to suit dynamic networks. More...
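
For dynamic networks, this rearrangement groups per-node results named with a time-step suffix under their variable prefix (e.g. a hypothetical "temperature_0", "temperature_1" both belong to "temperature"). An illustrative, self-contained sketch of that regrouping (plain C++, not the aGrUM implementation; all names are invented):

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative sketch (not aGrUM code) of the rearrangement performed by
// _dynamicExpectations: per-node expectations named "prefix_timestep" are
// regrouped into one vector per variable prefix, ordered by time step.
std::map< std::string, std::vector< double > >
groupByPrefix(const std::map< std::string, double >& expectations) {
  std::map< std::string, std::map< int, double > > tmp;
  for (const auto& kv : expectations) {
    auto pos = kv.first.rfind('_');
    if (pos == std::string::npos) continue;          // not a dynamic variable
    std::string prefix = kv.first.substr(0, pos);
    int         step   = std::stoi(kv.first.substr(pos + 1));
    tmp[prefix][step]  = kv.second;
  }
  std::map< std::string, std::vector< double > > result;
  for (const auto& kv : tmp)
    for (const auto& sv : kv.second)                 // std::map keeps steps sorted
      result[kv.first].push_back(sv.second);
  return result;
}
```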
 

Detailed Description

template<typename GUM_SCALAR, class BNInferenceEngine = LazyPropagation< GUM_SCALAR >>
class gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >

<agrum/CN/CNMonteCarloSampling.h>

Inference by a basic (pure random) sampling algorithm over the IBayesNets of a credal network.

Template Parameters
GUM_SCALAR  A floating point type ( float, double, long double ... ).
BNInferenceEngine  An IBayesNet inference engine such as LazyPropagation ( recommended ).
Author
Matthieu HOURBRACQ and Pierre-Henri WUILLEMIN
Warning
p(e) must be available ( by a call to my_BNInferenceEngine.evidenceMarginal() )! The vertices are correct only if p(e) > 0 for a sample; the test is made once.

Definition at line 62 of file CNMonteCarloSampling.h.

Member Typedef Documentation

◆ __infEs

template<typename GUM_SCALAR , class BNInferenceEngine = LazyPropagation< GUM_SCALAR >>
using gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__infEs = MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >
private

To easily access MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine > methods.

Definition at line 68 of file CNMonteCarloSampling.h.

Member Enumeration Documentation

◆ ApproximationSchemeSTATE

The different states of an approximation scheme.

Enumerator
Undefined 
Continue 
Epsilon 
Rate 
Limit 
TimeLimit 
Stopped 

Definition at line 65 of file IApproximationSchemeConfiguration.h.

65  : char {
66  Undefined,
67  Continue,
68  Epsilon,
69  Rate,
70  Limit,
71  TimeLimit,
72  Stopped
73  };

Constructor & Destructor Documentation

◆ CNMonteCarloSampling()

template<typename GUM_SCALAR , class BNInferenceEngine >
gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling ( const CredalNet< GUM_SCALAR > &  credalNet)
explicit

Constructor.

Parameters
credalNet  The CredalNet to be used by the algorithm.

Definition at line 30 of file CNMonteCarloSampling_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInd, gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::ApproximationScheme::enableMaxTime(), gum::ApproximationScheme::setMaxTime(), and gum::ApproximationScheme::setPeriodSize().

31  :
32  MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >(credalNet) {
33  __infEs::_repetitiveInd = false;
34 
35  //__infEs::_iterStop = 1000;
36  __infEs::_storeVertices = false;
37  __infEs::_storeBNOpt = false;
38 
39  this->setMaxTime(60);
40  this->enableMaxTime();
41 
43  this->setPeriodSize(1000);
44 
45  GUM_CONSTRUCTOR(CNMonteCarloSampling);
46  }

◆ ~CNMonteCarloSampling()

template<typename GUM_SCALAR , class BNInferenceEngine >
gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::~CNMonteCarloSampling ( )
virtual

Destructor.

Definition at line 50 of file CNMonteCarloSampling_tpl.h.

50  {
51  GUM_DESTRUCTOR(CNMonteCarloSampling);
52  }

Member Function Documentation

◆ __binaryRep()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__binaryRep ( std::vector< bool > &  toFill,
const Idx  value 
) const
inlineprivate

Get the binary representation of a given value.

Parameters
toFill  A reference to the bits to fill. Size must be correct before passing the argument (i.e. big enough to represent value).
value  The constant integer we want to binarize.

Definition at line 285 of file CNMonteCarloSampling_tpl.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__verticesSampling().

286  {
287  Idx n = value;
288  auto tfsize = toFill.size();
289 
290  // get bits of choosen_vertex
291  for (decltype(tfsize) i = 0; i < tfsize; i++) {
292  toFill[i] = n & 1;
293  n /= 2;
294  }
295  }
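
The loop above extracts bits least-significant first. The same logic as a stand-alone function (plain C++, mirroring but independent of the method shown):

```cpp
#include <cstddef>
#include <vector>

// Stand-alone version of the bit extraction shown above: fills `toFill`
// with the little-endian binary representation of `value`. As in the
// documented method, `toFill` must already be large enough for `value`.
void binaryRep(std::vector< bool >& toFill, unsigned long value) {
  unsigned long n = value;
  for (std::size_t i = 0; i < toFill.size(); ++i) {
    toFill[i] = n & 1;   // least significant bit first
    n /= 2;
  }
}
```

For example, binarizing 6 into four bits yields 0,1,1,0 (bit 0 first).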

◆ __insertEvidence()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__insertEvidence ( )
inlineprivate

Insert CredalNet evidence into a thread BNInferenceEngine.

Definition at line 426 of file CNMonteCarloSampling_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_inferenceEngine, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSetE, gum::Potential< GUM_SCALAR >::fillWith(), gum::getThreadNumber(), GUM_SHOWERROR, gum::List< Val, Alloc >::insert(), gum::List< Val, Alloc >::size(), gum::HashTable< Key, Val, Alloc >::size(), and gum::IBayesNet< GUM_SCALAR >::variable().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__threadInference().

426  {
427  if (this->_evidence.size() == 0) { return; }
428 
429  int this_thread = getThreadNumber();
430 
431  BNInferenceEngine* inference_engine = this->_l_inferenceEngine[this_thread];
432 
433  IBayesNet< GUM_SCALAR >* working_bn = this->_workingSet[this_thread];
434 
435  List< const Potential< GUM_SCALAR >* >* evi_list =
436  this->_workingSetE[this_thread];
437 
438  if (evi_list->size() > 0) {
439  for (const auto pot : *evi_list)
440  inference_engine->addEvidence(*pot);
441  return;
442  }
443 
444  for (const auto& elt : this->_evidence) {
445  Potential< GUM_SCALAR >* p = new Potential< GUM_SCALAR >;
446  (*p) << working_bn->variable(elt.first);
447 
448  try {
449  p->fillWith(elt.second);
450  } catch (Exception& err) {
451  GUM_SHOWERROR(err);
452  throw(err);
453  }
454 
455  evi_list->insert(p);
456  }
457 
458  if (evi_list->size() > 0) {
459  for (const auto pot : *evi_list)
460  inference_engine->addEvidence(*pot);
461  }
462  }

◆ __mcInitApproximationScheme()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme ( )
private

Initialize the approximation scheme.

Definition at line 196 of file CNMonteCarloSampling_tpl.h.

References gum::ApproximationScheme::disableMaxIter(), gum::ApproximationScheme::disableMinEpsilonRate(), gum::ApproximationScheme::enableEpsilon(), gum::ApproximationScheme::initApproximationScheme(), and gum::ApproximationScheme::setEpsilon().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

196  {
197  // this->setEpsilon ( std::numeric_limits< GUM_SCALAR >::min() );
201  this->setEpsilon(0.);
202  this->enableEpsilon(); // to be sure
203 
204  this->disableMinEpsilonRate();
205  this->disableMaxIter();
206 
207  this->initApproximationScheme();
208  }

◆ __mcThreadDataCopy()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy ( )
private

Initialize threads data.

Definition at line 212 of file CNMonteCarloSampling_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_initThreadsData(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_clusters, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_inferenceEngine, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalSets, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_modal, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_optimalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, gum::credal::InferenceEngine< GUM_SCALAR >::_modal, gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::credal::InferenceEngine< GUM_SCALAR >::_t0, gum::credal::InferenceEngine< GUM_SCALAR >::_t1, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSetE, gum::FIND_ALL, gum::getNumberOfRunningThreads(), and gum::getThreadNumber().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

212  {
213  int num_threads;
214 #pragma omp parallel
215  {
216  int this_thread = getThreadNumber();
217 
218 // implicit wait clause (don't put nowait)
219 #pragma omp single
220  {
221  // should we ask for max threads instead ( no differences here in
222  // practice
223  // )
224  num_threads = getNumberOfRunningThreads();
225 
226  this->_initThreadsData(
227  num_threads, __infEs::_storeVertices, __infEs::_storeBNOpt);
228  this->_l_inferenceEngine.resize(num_threads, nullptr);
229 
230  // if ( __infEs::_storeBNOpt )
231  // this->_l_sampledNet.resize ( num_threads );
232  } // end of : single region
233 
234  // we could put those below in a function in InferenceEngine, but let's
235  // keep
236  // this parallel region instead of breaking it and making another one to
237  // do
238  // the same stuff in 2 places since :
239  // !!! BNInferenceEngine still needs to be initialized here anyway !!!
240 
241  BayesNet< GUM_SCALAR >* thread_bn = new BayesNet< GUM_SCALAR >();
242 #pragma omp critical(Init)
243  {
244  // IBayesNet< GUM_SCALAR > * thread_bn = new IBayesNet< GUM_SCALAR
245  // >();//(this->_credalNet->current_bn());
246  *thread_bn = this->_credalNet->current_bn();
247  }
248  this->_workingSet[this_thread] = thread_bn;
249 
250  this->_l_marginalMin[this_thread] = this->_marginalMin;
251  this->_l_marginalMax[this_thread] = this->_marginalMax;
252  this->_l_expectationMin[this_thread] = this->_expectationMin;
253  this->_l_expectationMax[this_thread] = this->_expectationMax;
254  this->_l_modal[this_thread] = this->_modal;
255 
256  __infEs::_l_clusters[this_thread].resize(2);
257  __infEs::_l_clusters[this_thread][0] = __infEs::_t0;
258  __infEs::_l_clusters[this_thread][1] = __infEs::_t1;
259 
260  if (__infEs::_storeVertices) {
261  this->_l_marginalSets[this_thread] = this->_marginalSets;
262  }
263 
264  List< const Potential< GUM_SCALAR >* >* evi_list =
265  new List< const Potential< GUM_SCALAR >* >();
266  this->_workingSetE[this_thread] = evi_list;
267 
268  // #TODO: the next instruction works only for lazy propagation.
269  // => find a way to remove the second argument
270  BNInferenceEngine* inference_engine =
271  new BNInferenceEngine((this->_workingSet[this_thread]),
272  RelevantPotentialsFinderType::FIND_ALL);
273 
274  this->_l_inferenceEngine[this_thread] = inference_engine;
275 
276  if (__infEs::_storeBNOpt) {
277  VarMod2BNsMap< GUM_SCALAR >* threadOpt =
278  new VarMod2BNsMap< GUM_SCALAR >(*this->_credalNet);
279  this->_l_optimalNet[this_thread] = threadOpt;
280  }
281  }
282  }

◆ __threadInference()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__threadInference ( )
inlineprivate

Thread performs an inference using BNInferenceEngine.

Calls __verticesSampling and __insertEvidence.

Definition at line 185 of file CNMonteCarloSampling_tpl.h.

References gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__insertEvidence(), gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__verticesSampling(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_inferenceEngine, and gum::getThreadNumber().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

185  {
186  int tId = getThreadNumber();
187  __verticesSampling();
188 
189  this->_l_inferenceEngine[tId]->eraseAllEvidence();
190  __insertEvidence();
191  this->_l_inferenceEngine[tId]->makeInference();
192  }

◆ __threadUpdate()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__threadUpdate ( )
inlineprivate

Update thread data after an IBayesNet inference.

Definition at line 156 of file CNMonteCarloSampling_tpl.h.

References gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_inferenceEngine, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_updateThread(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, gum::Instantiation::end(), gum::getThreadNumber(), gum::NodeGraphPart::nodes(), and gum::Instantiation::setFirst().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

156  {
157  int tId = getThreadNumber();
158  // bool keepSample = false;
159 
160  if (this->_l_inferenceEngine[tId]->evidenceProbability() > 0) {
161  const DAG& tDag = this->_workingSet[tId]->dag();
162 
163  for (auto node : tDag.nodes()) {
164  const Potential< GUM_SCALAR >& potential(
165  this->_l_inferenceEngine[tId]->posterior(node));
166  Instantiation ins(potential);
167  std::vector< GUM_SCALAR > vertex;
168 
169  for (ins.setFirst(); !ins.end(); ++ins) {
170  vertex.push_back(potential[ins]);
171  }
172 
173  // true for redundancy elimination of node it credal set
174  // but since global marginals are only updated at the end of each
175  // period of
176  // approximationScheme, it is "useless" ( and expensive ) to check now
177  this->_updateThread(node, vertex, false);
178 
179  } // end of : for all nodes
180  } // end of : if ( p(e) > 0 )
181  }
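
In the update above, each accepted sample's posterior (one vertex per node) widens the thread-local bounds through _updateThread. An illustrative, self-contained sketch of that elementwise min/max update (plain C++, not aGrUM code; note that, as _initMarginals documents, lower bounds start at 1 and upper bounds at 0 so the first vertex always tightens both):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative sketch (not aGrUM code) of the per-node update performed by
// _updateThread: a sampled vertex (one posterior) widens the thread-local
// lower/upper marginal bounds elementwise.
void updateBounds(std::vector< double >&       lower,
                  std::vector< double >&       upper,
                  const std::vector< double >& vertex) {
  for (std::size_t i = 0; i < vertex.size(); ++i) {
    lower[i] = std::min(lower[i], vertex[i]);
    upper[i] = std::max(upper[i], vertex[i]);
  }
}
```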

◆ __verticesSampling()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__verticesSampling ( )
inlineprivate

Each thread samples an IBayesNet from the CredalNet.

Definition at line 299 of file CNMonteCarloSampling_tpl.h.

References gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__binaryRep(), gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_clusters, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_optimalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInd, gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, gum::IBayesNet< GUM_SCALAR >::cpt(), gum::MultiDimDecorator< GUM_SCALAR >::domainSize(), gum::Potential< GUM_SCALAR >::fillWith(), gum::getThreadNumber(), gum::DAGmodel::nodes(), and gum::IBayesNet< GUM_SCALAR >::variable().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__threadInference().

299  {
300  int this_thread = getThreadNumber();
301  IBayesNet< GUM_SCALAR >* working_bn = this->_workingSet[this_thread];
302 
303  const auto cpt = &this->_credalNet->credalNet_currentCpt();
304 
305  using dBN = std::vector< std::vector< std::vector< bool > > >;
306 
307  dBN sample;
308 
309  if (__infEs::_storeBNOpt) {
310  sample = dBN(this->_l_optimalNet[this_thread]->getSampleDef());
311  }
312 
313  if (__infEs::_repetitiveInd) {
314  const auto& t0 = __infEs::_l_clusters[this_thread][0];
315  const auto& t1 = __infEs::_l_clusters[this_thread][1];
316 
317  for (const auto& elt : t0) {
318  auto dSize = working_bn->variable(elt.first).domainSize();
319  Potential< GUM_SCALAR >* potential(
320  const_cast< Potential< GUM_SCALAR >* >(&working_bn->cpt(elt.first)));
321  std::vector< GUM_SCALAR > var_cpt(potential->domainSize());
322 
323  Size pconfs = Size((*cpt)[elt.first].size());
324 
325  for (Size pconf = 0; pconf < pconfs; pconf++) {
326  Size choosen_vertex = rand() % (*cpt)[elt.first][pconf].size();
327 
328  if (__infEs::_storeBNOpt) {
329  __binaryRep(sample[elt.first][pconf], choosen_vertex);
330  }
331 
332  for (Size mod = 0; mod < dSize; mod++) {
333  var_cpt[pconf * dSize + mod] =
334  (*cpt)[elt.first][pconf][choosen_vertex][mod];
335  }
336  } // end of : pconf
337 
338  potential->fillWith(var_cpt);
339 
340  Size t0esize = Size(elt.second.size());
341 
342  for (Size pos = 0; pos < t0esize; pos++) {
343  if (__infEs::_storeBNOpt) {
344  sample[elt.second[pos]] = sample[elt.first];
345  }
346 
347  Potential< GUM_SCALAR >* potential2(
348  const_cast< Potential< GUM_SCALAR >* >(
349  &working_bn->cpt(elt.second[pos])));
350  potential2->fillWith(var_cpt);
351  }
352  }
353 
354  for (const auto& elt : t1) {
355  auto dSize = working_bn->variable(elt.first).domainSize();
356  Potential< GUM_SCALAR >* potential(
357  const_cast< Potential< GUM_SCALAR >* >(&working_bn->cpt(elt.first)));
358  std::vector< GUM_SCALAR > var_cpt(potential->domainSize());
359 
360  for (Size pconf = 0; pconf < (*cpt)[elt.first].size(); pconf++) {
361  Idx choosen_vertex = Idx(rand() % (*cpt)[elt.first][pconf].size());
362 
363  if (__infEs::_storeBNOpt) {
364  __binaryRep(sample[elt.first][pconf], choosen_vertex);
365  }
366 
367  for (decltype(dSize) mod = 0; mod < dSize; mod++) {
368  var_cpt[pconf * dSize + mod] =
369  (*cpt)[elt.first][pconf][choosen_vertex][mod];
370  }
371  } // end of : pconf
372 
373  potential->fillWith(var_cpt);
374 
375  auto t1esize = elt.second.size();
376 
377  for (decltype(t1esize) pos = 0; pos < t1esize; pos++) {
378  if (__infEs::_storeBNOpt) {
379  sample[elt.second[pos]] = sample[elt.first];
380  }
381 
382  Potential< GUM_SCALAR >* potential2(
383  const_cast< Potential< GUM_SCALAR >* >(
384  &working_bn->cpt(elt.second[pos])));
385  potential2->fillWith(var_cpt);
386  }
387  }
388 
389  if (__infEs::_storeBNOpt) {
390  this->_l_optimalNet[this_thread]->setCurrentSample(sample);
391  }
392  } else {
393  for (auto node : working_bn->nodes()) {
394  auto dSize = working_bn->variable(node).domainSize();
395  Potential< GUM_SCALAR >* potential(
396  const_cast< Potential< GUM_SCALAR >* >(&working_bn->cpt(node)));
397  std::vector< GUM_SCALAR > var_cpt(potential->domainSize());
398 
399  auto pConfs = (*cpt)[node].size();
400 
401  for (decltype(pConfs) pconf = 0; pconf < pConfs; pconf++) {
402  Size nVertices = Size((*cpt)[node][pconf].size());
403  Idx choosen_vertex = Idx(rand() % nVertices);
404 
405  if (__infEs::_storeBNOpt) {
406  __binaryRep(sample[node][pconf], choosen_vertex);
407  }
408 
409  for (decltype(dSize) mod = 0; mod < dSize; mod++) {
410  var_cpt[pconf * dSize + mod] =
411  (*cpt)[node][pconf][choosen_vertex][mod];
412  }
413  } // end of : pconf
414 
415  potential->fillWith(var_cpt);
416  }
417 
418  if (__infEs::_storeBNOpt) {
419  this->_l_optimalNet[this_thread]->setCurrentSample(sample);
420  }
421  }
422  }

◆ _computeEpsilon()

template<typename GUM_SCALAR , class BNInferenceEngine >
const GUM_SCALAR gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_computeEpsilon ( )
inlineprotectedinherited

Compute epsilon and update old marginals.

Returns
Epsilon.

Definition at line 304 of file multipleInferenceEngine_tpl.h.

References gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, and gum::getThreadNumber().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

304  {
305  GUM_SCALAR eps = 0;
306 #pragma omp parallel
307  {
308  GUM_SCALAR tEps = 0;
309  GUM_SCALAR delta;
310 
311  int tId = getThreadNumber();
312  long nsize = long(_workingSet[tId]->size());
313 
314 #pragma omp for
315 
316  for (long i = 0; i < nsize; i++) {
317  Size dSize = Size(_l_marginalMin[tId][i].size());
318 
319  for (Size j = 0; j < dSize; j++) {
320  // on min
321  delta = this->_marginalMin[i][j] - this->_oldMarginalMin[i][j];
322  delta = (delta < 0) ? (-delta) : delta;
323  tEps = (tEps < delta) ? delta : tEps;
324 
325  // on max
326  delta = this->_marginalMax[i][j] - this->_oldMarginalMax[i][j];
327  delta = (delta < 0) ? (-delta) : delta;
328  tEps = (tEps < delta) ? delta : tEps;
329 
330  this->_oldMarginalMin[i][j] = this->_marginalMin[i][j];
331  this->_oldMarginalMax[i][j] = this->_marginalMax[i][j];
332  }
333  } // end of : all variables
334 
335 #pragma omp critical(epsilon_max)
336  {
337 #pragma omp flush(eps)
338  eps = (eps < tEps) ? tEps : eps;
339  }
340 
341  } // end of : parallel region
342  return eps;
343  }

◆ _dynamicExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations ( )
protectedinherited

Rearrange lower and upper expectations to suit dynamic networks.

Definition at line 721 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, gum::credal::InferenceEngine< GUM_SCALAR >::_modal, and gum::HashTable< Key, Val, Alloc >::empty().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

721  {
722  // no modals, no expectations computed during inference
723  if (_expectationMin.empty() || _modal.empty()) return;
724 
725  // already called by the algorithm or the user
726  if (_dynamicExpMax.size() > 0 && _dynamicExpMin.size() > 0) return;
727 
728  // typedef typename std::map< int, GUM_SCALAR > innerMap;
729  using innerMap = typename gum::HashTable< int, GUM_SCALAR >;
730 
731  // typedef typename std::map< std::string, innerMap > outerMap;
732  using outerMap = typename gum::HashTable< std::string, innerMap >;
733 
734  // typedef typename std::map< std::string, std::vector< GUM_SCALAR > >
735  // mod;
736 
737  // if the network is not dynamic, save _expectationMin and Max directly
738  // (same result,
739  // but faster)
740  outerMap expectationsMin, expectationsMax;
741 
742  for (const auto& elt : _expectationMin) {
743  std::string var_name, time_step;
744 
745  var_name = _credalNet->current_bn().variable(elt.first).name();
746  auto delim = var_name.find_first_of("_");
747  time_step = var_name.substr(delim + 1, var_name.size());
748  var_name = var_name.substr(0, delim);
749 
750  // to be sure (don't store expectations of variables that are not
751  // monitored), although this
752  // should be taken care of before this point
753  if (!_modal.exists(var_name)) continue;
754 
755  expectationsMin.getWithDefault(var_name, innerMap())
756  .getWithDefault(atoi(time_step.c_str()), 0) =
757  elt.second; // we iterate with min iterators
758  expectationsMax.getWithDefault(var_name, innerMap())
759  .getWithDefault(atoi(time_step.c_str()), 0) =
760  _expectationMax[elt.first];
761  }
762 
763  for (const auto& elt : expectationsMin) {
764  typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
765 
766  for (const auto& elt2 : elt.second)
767  dynExp[elt2.first] = elt2.second;
768 
769  _dynamicExpMin.insert(elt.first, dynExp);
770  }
771 
772  for (const auto& elt : expectationsMax) {
773  typename std::vector< GUM_SCALAR > dynExp(elt.second.size());
774 
775  for (const auto& elt2 : elt.second) {
776  dynExp[elt2.first] = elt2.second;
777  }
778 
779  _dynamicExpMax.insert(elt.first, dynExp);
780  }
781  }

◆ _expFusion()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_expFusion ( )
protectedinherited

Fusion of threads expectations.

Definition at line 410 of file multipleInferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_modal, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, gum::credal::InferenceEngine< GUM_SCALAR >::_modal, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, and gum::getThreadNumber().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

410  {
411  // don't create threads if there are no modalities to compute expectations
412  if (this->_modal.empty()) return;
413 
414  // we can compute expectations from vertices of the final credal set
415  if (__infE::_storeVertices) {
416 #pragma omp parallel
417  {
418  int threadId = getThreadNumber();
419 
420  if (!this->_l_modal[threadId].empty()) {
421  Size nsize = Size(_workingSet[threadId]->size());
422 
423 #pragma omp for
424 
425  for (long i = 0; i < long(nsize);
426  i++) { // i needs to be signed (due to omp with visual c++
427  // 15)
428  std::string var_name = _workingSet[threadId]->variable(i).name();
429  auto delim = var_name.find_first_of("_");
430  var_name = var_name.substr(0, delim);
431 
432  if (!_l_modal[threadId].exists(var_name)) continue;
433 
434  for (const auto& vertex : __infE::_marginalSets[i]) {
435  GUM_SCALAR exp = 0;
436  Size vsize = Size(vertex.size());
437 
438  for (Size mod = 0; mod < vsize; mod++)
439  exp += vertex[mod] * _l_modal[threadId][var_name][mod];
440 
441  if (exp > __infE::_expectationMax[i])
442  __infE::_expectationMax[i] = exp;
443 
444  if (exp < __infE::_expectationMin[i])
445  __infE::_expectationMin[i] = exp;
446  }
447  } // end of : each variable parallel for
448  } // end of : if this thread has modals
449  } // end of parallel region
450  return;
451  }
452 
453 #pragma omp parallel
454  {
455  int threadId = getThreadNumber();
456 
457  if (!this->_l_modal[threadId].empty()) {
458  Size nsize = Size(_workingSet[threadId]->size());
459 #pragma omp for
460  for (long i = 0; i < long(nsize);
461  i++) { // long instead of Idx due to omp for visual C++15
462  std::string var_name = _workingSet[threadId]->variable(i).name();
463  auto delim = var_name.find_first_of("_");
464  var_name = var_name.substr(0, delim);
465 
466  if (!_l_modal[threadId].exists(var_name)) continue;
467 
468  Size tsize = Size(_l_expectationMax.size());
469 
470  for (Idx tId = 0; tId < tsize; tId++) {
471  if (_l_expectationMax[tId][i] > this->_expectationMax[i])
472  this->_expectationMax[i] = _l_expectationMax[tId][i];
473 
474  if (_l_expectationMin[tId][i] < this->_expectationMin[i])
475  this->_expectationMin[i] = _l_expectationMin[tId][i];
476  } // end of : each thread
477  } // end of : each variable
478  } // end of : if modals not empty
479  } // end of : parallel region
480  }

◆ _initExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations ( )
protectedinherited

Initialize lower and upper expectations before inference, with the lower expectation being initialized on the highest modality and the upper expectation being initialized on the lowest modality.

Definition at line 695 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, gum::credal::InferenceEngine< GUM_SCALAR >::_modal, gum::HashTable< Key, Val, Alloc >::clear(), and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), gum::credal::InferenceEngine< GUM_SCALAR >::insertModals(), and gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile().

695  {
698 
699  if (_modal.empty()) return;
700 
701  for (auto node : _credalNet->current_bn().nodes()) {
702  std::string var_name, time_step;
703 
704  var_name = _credalNet->current_bn().variable(node).name();
705  auto delim = var_name.find_first_of("_");
706  var_name = var_name.substr(0, delim);
707 
708  if (!_modal.exists(var_name)) continue;
709 
710  _expectationMin.insert(node, _modal[var_name].back());
711  _expectationMax.insert(node, _modal[var_name].front());
712  }
713  }

◆ _initMarginals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginals ( )
protectedinherited

Initialize lower and upper old marginals and marginals before inference, with the lower marginal being 1 and the upper 0.

Definition at line 663 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMin, gum::HashTable< Key, Val, Alloc >::clear(), and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), and gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine().

663  {
668 
669  for (auto node : _credalNet->current_bn().nodes()) {
670  auto dSize = _credalNet->current_bn().variable(node).domainSize();
671  _marginalMin.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
672  _oldMarginalMin.insert(node, std::vector< GUM_SCALAR >(dSize, 1));
673 
674  _marginalMax.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
675  _oldMarginalMax.insert(node, std::vector< GUM_SCALAR >(dSize, 0));
676  }
677  }

◆ _initMarginalSets()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets ( )
protectedinherited

Initialize credal set vertices with empty sets.

Definition at line 680 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::HashTable< Key, Val, Alloc >::clear(), and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence(), and gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices().

680  {
682 
683  if (!_storeVertices) return;
684 
685  for (auto node : _credalNet->current_bn().nodes())
686  _marginalSets.insert(node, std::vector< std::vector< GUM_SCALAR > >());
687  }

◆ _initThreadsData()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_initThreadsData ( const Size num_threads,
const bool  __storeVertices,
const bool  __storeBNOpt 
)
inlineprotectedinherited

Initialize threads data.

Parameters
num_threads  The number of threads.
__storeVertices  True if vertices should be stored, False otherwise.
__storeBNOpt  True if the optimal IBayesNet should be stored, False otherwise.

Definition at line 44 of file multipleInferenceEngine_tpl.h.

References gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_clusters, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalSets, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_modal, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_optimalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSetE, and gum::HashTable< Key, Val, Alloc >::clear().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy().

47  {
48  _workingSet.clear();
49  _workingSet.resize(num_threads, nullptr);
50  _workingSetE.clear();
51  _workingSetE.resize(num_threads, nullptr);
52 
53  _l_marginalMin.clear();
54  _l_marginalMin.resize(num_threads);
55  _l_marginalMax.clear();
56  _l_marginalMax.resize(num_threads);
57  _l_expectationMin.clear();
58  _l_expectationMin.resize(num_threads);
59  _l_expectationMax.clear();
60  _l_expectationMax.resize(num_threads);
61 
62  _l_clusters.clear();
63  _l_clusters.resize(num_threads);
64 
65  if (__storeVertices) {
66  _l_marginalSets.clear();
67  _l_marginalSets.resize(num_threads);
68  }
69 
70  if (__storeBNOpt) {
71  for (Size ptr = 0; ptr < this->_l_optimalNet.size(); ptr++)
72  if (this->_l_optimalNet[ptr] != nullptr) delete _l_optimalNet[ptr];
73 
74  _l_optimalNet.clear();
75  _l_optimalNet.resize(num_threads);
76  }
77 
78  _l_modal.clear();
79  _l_modal.resize(num_threads);
80 
 81  this->_oldMarginalMin.clear();
82  this->_oldMarginalMin = this->_marginalMin;
83  this->_oldMarginalMax.clear();
84  this->_oldMarginalMax = this->_marginalMax;
85  }

◆ _optFusion()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_optFusion ( )
protectedinherited

Fusion of threads optimal IBayesNet.

Definition at line 483 of file multipleInferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dbnOpt, gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_optimalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, and gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

483  {
484  typedef std::vector< bool > dBN;
485 
486  Size nsize = Size(_workingSet[0]->size());
487 
488  // no parallel insert in hash-tables (OptBN)
489  for (Idx i = 0; i < nsize; i++) {
490  // we don't store anything for observed variables
491  if (__infE::_evidence.exists(i)) continue;
492 
493  Size dSize = Size(_l_marginalMin[0][i].size());
494 
495  for (Size j = 0; j < dSize; j++) {
496  // go through all threads
497  std::vector< Size > keymin(3);
498  keymin[0] = i;
499  keymin[1] = j;
500  keymin[2] = 0;
501  std::vector< Size > keymax(keymin);
502  keymax[2] = 1;
503 
504  Size tsize = Size(_l_marginalMin.size());
505 
506  for (Size tId = 0; tId < tsize; tId++) {
507  if (_l_marginalMin[tId][i][j] == this->_marginalMin[i][j]) {
508  const std::vector< dBN* >& tOpts =
509  _l_optimalNet[tId]->getBNOptsFromKey(keymin);
510  Size osize = Size(tOpts.size());
511 
512  for (Size bn = 0; bn < osize; bn++) {
513  __infE::_dbnOpt.insert(*tOpts[bn], keymin);
514  }
515  }
516 
517  if (_l_marginalMax[tId][i][j] == this->_marginalMax[i][j]) {
518  const std::vector< dBN* >& tOpts =
519  _l_optimalNet[tId]->getBNOptsFromKey(keymax);
520  Size osize = Size(tOpts.size());
521 
522  for (Size bn = 0; bn < osize; bn++) {
523  __infE::_dbnOpt.insert(*tOpts[bn], keymax);
524  }
525  }
526  } // end of : all threads
527  } // end of : all modalities
528  } // end of : all variables
529  }

◆ _repetitiveInit()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit ( )
protectedinherited

Initialize _t0 and _t1 clusters.

Definition at line 784 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_t0, gum::credal::InferenceEngine< GUM_SCALAR >::_t1, gum::credal::InferenceEngine< GUM_SCALAR >::_timeSteps, gum::HashTable< Key, Val, Alloc >::clear(), GUM_ERROR, and gum::HashTable< Key, Val, Alloc >::insert().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference(), and gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd().

784  {
785  _timeSteps = 0;
786  _t0.clear();
787  _t1.clear();
788 
789  // t = 0 vars belong to _t0 as keys
790  for (auto node : _credalNet->current_bn().dag().nodes()) {
791  std::string var_name = _credalNet->current_bn().variable(node).name();
792  auto delim = var_name.find_first_of("_");
793 
794  if (delim > var_name.size()) {
795  GUM_ERROR(InvalidArgument,
796  "void InferenceEngine< GUM_SCALAR "
797  ">::_repetitiveInit() : the network does not "
798  "appear to be dynamic");
799  }
800 
801  std::string time_step = var_name.substr(delim + 1, 1);
802 
803  if (time_step.compare("0") == 0) _t0.insert(node, std::vector< NodeId >());
804  }
805 
806  // t = 1 vars belong to either _t0 as member value or _t1 as keys
807  for (const auto& node : _credalNet->current_bn().dag().nodes()) {
808  std::string var_name = _credalNet->current_bn().variable(node).name();
809  auto delim = var_name.find_first_of("_");
810  std::string time_step = var_name.substr(delim + 1, var_name.size());
811  var_name = var_name.substr(0, delim);
812  delim = time_step.find_first_of("_");
813  time_step = time_step.substr(0, delim);
814 
815  if (time_step.compare("1") == 0) {
816  bool found = false;
817 
818  for (const auto& elt : _t0) {
819  std::string var_0_name =
820  _credalNet->current_bn().variable(elt.first).name();
821  delim = var_0_name.find_first_of("_");
822  var_0_name = var_0_name.substr(0, delim);
823 
824  if (var_name.compare(var_0_name) == 0) {
825  const Potential< GUM_SCALAR >* potential(
826  &_credalNet->current_bn().cpt(node));
827  const Potential< GUM_SCALAR >* potential2(
828  &_credalNet->current_bn().cpt(elt.first));
829 
830  if (potential->domainSize() == potential2->domainSize())
831  _t0[elt.first].push_back(node);
832  else
833  _t1.insert(node, std::vector< NodeId >());
834 
835  found = true;
836  break;
837  }
838  }
839 
840  if (!found) { _t1.insert(node, std::vector< NodeId >()); }
841  }
842  }
843 
844  // t > 1 vars belong to either _t0 or _t1 as member value
845  // remember _timeSteps
846  for (auto node : _credalNet->current_bn().dag().nodes()) {
847  std::string var_name = _credalNet->current_bn().variable(node).name();
848  auto delim = var_name.find_first_of("_");
849  std::string time_step = var_name.substr(delim + 1, var_name.size());
850  var_name = var_name.substr(0, delim);
851  delim = time_step.find_first_of("_");
852  time_step = time_step.substr(0, delim);
853 
854  if (time_step.compare("0") != 0 && time_step.compare("1") != 0) {
855  // keep max time_step
856  if (atoi(time_step.c_str()) > _timeSteps)
857  _timeSteps = atoi(time_step.c_str());
858 
859  std::string var_0_name;
860  bool found = false;
861 
862  for (const auto& elt : _t0) {
863  std::string var_0_name =
864  _credalNet->current_bn().variable(elt.first).name();
865  delim = var_0_name.find_first_of("_");
866  var_0_name = var_0_name.substr(0, delim);
867 
868  if (var_name.compare(var_0_name) == 0) {
869  const Potential< GUM_SCALAR >* potential(
870  &_credalNet->current_bn().cpt(node));
871  const Potential< GUM_SCALAR >* potential2(
872  &_credalNet->current_bn().cpt(elt.first));
873 
874  if (potential->domainSize() == potential2->domainSize()) {
875  _t0[elt.first].push_back(node);
876  found = true;
877  break;
878  }
879  }
880  }
881 
882  if (!found) {
883  for (const auto& elt : _t1) {
884  std::string var_0_name =
885  _credalNet->current_bn().variable(elt.first).name();
886  auto delim = var_0_name.find_first_of("_");
887  var_0_name = var_0_name.substr(0, delim);
888 
889  if (var_name.compare(var_0_name) == 0) {
890  const Potential< GUM_SCALAR >* potential(
891  &_credalNet->current_bn().cpt(node));
892  const Potential< GUM_SCALAR >* potential2(
893  &_credalNet->current_bn().cpt(elt.first));
894 
895  if (potential->domainSize() == potential2->domainSize()) {
896  _t1[elt.first].push_back(node);
897  break;
898  }
899  }
900  }
901  }
902  }
903  }
904  }
int _timeSteps
The number of time steps of this network (only useful for dynamic networks).
cluster _t0
Clusters of nodes used with dynamic networks.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
cluster _t1
Clusters of nodes used with dynamic networks.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55

◆ _updateCredalSets()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_updateCredalSets ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex,
const bool elimRedund = false 
)
inline protected inherited

Given a node id and one of its possible vertices, update its credal set.

To maximise efficiency, do not pass a vertex that is known to be inside the polytope (i.e. one that is not at an extreme value for any modality).

Parameters
id  The id of the node to be updated.
vertex  A (potential) vertex of the node's credal set.
elimRedund  If true, remove redundant vertices (inside a facet).
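The 1e-6 equality test used in the listing below can be isolated into a self-contained sketch (hypothetical helper, not the aGrUM API): a candidate vertex is appended to a credal set only when no stored vertex already matches it component-wise within that tolerance.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vertex = std::vector< double >;

// Append `vertex` to `credalSet` unless an equal vertex (component-wise,
// within a 1e-6 tolerance) is already stored. Returns true if added.
bool addVertexIfNew(std::vector< Vertex >& credalSet, const Vertex& vertex) {
  for (const auto& v : credalSet) {
    bool equal = true;
    for (std::size_t i = 0; i < vertex.size(); ++i) {
      if (std::fabs(vertex[i] - v[i]) > 1e-6) {
        equal = false;
        break;
      }
    }
    if (equal) return false;  // duplicate: nothing to add
  }
  credalSet.push_back(vertex);
  return true;
}
```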

Definition at line 928 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, gum::HashTable< Key, Val, Alloc >::cbegin(), gum::HashTable< Key, Val, Alloc >::cend(), gum::credal::LRSWrapper< GUM_SCALAR >::elimRedundVrep(), gum::credal::LRSWrapper< GUM_SCALAR >::fillV(), gum::credal::LRSWrapper< GUM_SCALAR >::getOutput(), and gum::credal::LRSWrapper< GUM_SCALAR >::setUpV().

Referenced by gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_verticesFusion().

931  {
932  auto& nodeCredalSet = _marginalSets[id];
933  auto dsize = vertex.size();
934 
935  bool eq = true;
936 
937  for (auto it = nodeCredalSet.cbegin(), itEnd = nodeCredalSet.cend();
938  it != itEnd;
939  ++it) {
940  eq = true;
941 
942  for (Size i = 0; i < dsize; i++) {
943  if (std::fabs(vertex[i] - (*it)[i]) > 1e-6) {
944  eq = false;
945  break;
946  }
947  }
948 
949  if (eq) break;
950  }
951 
952  if (!eq || nodeCredalSet.size() == 0) {
953  nodeCredalSet.push_back(vertex);
954  return;
955  } else
956  return;
957 
958  // because of next lambda return condition
959  if (nodeCredalSet.size() == 1) return;
960 
961  // check that the point and all previously added ones are not inside the
962  // actual
963  // polytope
964  auto itEnd = std::remove_if(
965  nodeCredalSet.begin(),
966  nodeCredalSet.end(),
967  [&](const std::vector< GUM_SCALAR >& v) -> bool {
968  for (auto jt = v.cbegin(),
969  jtEnd = v.cend(),
970  minIt = _marginalMin[id].cbegin(),
971  minItEnd = _marginalMin[id].cend(),
972  maxIt = _marginalMax[id].cbegin(),
973  maxItEnd = _marginalMax[id].cend();
974  jt != jtEnd && minIt != minItEnd && maxIt != maxItEnd;
975  ++jt, ++minIt, ++maxIt) {
976  if ((std::fabs(*jt - *minIt) < 1e-6 || std::fabs(*jt - *maxIt) < 1e-6)
977  && std::fabs(*minIt - *maxIt) > 1e-6)
978  return false;
979  }
980  return true;
981  });
982 
983  nodeCredalSet.erase(itEnd, nodeCredalSet.end());
984 
985  // we need at least 2 points to make a convex combination
986  if (!elimRedund || nodeCredalSet.size() <= 2) return;
987 
988  // there may be points not inside the polytope but on one of it's facet,
989  // meaning it's still a convex combination of vertices of this facet. Here
990  // we
991  // need lrs.
992  LRSWrapper< GUM_SCALAR > lrsWrapper;
993  lrsWrapper.setUpV((unsigned int)dsize, (unsigned int)(nodeCredalSet.size()));
994 
995  for (const auto& vtx : nodeCredalSet)
996  lrsWrapper.fillV(vtx);
997 
998  lrsWrapper.elimRedundVrep();
999 
1000  _marginalSets[id] = lrsWrapper.getOutput();
1001  }
credalSet _marginalSets
Credal sets vertices, if enabled.
margi _marginalMin
Lower marginals.
const const_iterator & cend() const noexcept
Returns the unsafe const_iterator pointing to the end of the hashtable.
const_iterator cbegin() const
Returns an unsafe const_iterator pointing to the beginning of the hashtable.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
margi _marginalMax
Upper marginals.

◆ _updateExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::_updateExpectations ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex 
)
inline protected inherited

Given a node id and one of its possible vertices obtained during inference, update this node's lower and upper expectations.

Parameters
id  The id of the node to be updated.
vertex  A (potential) vertex of the node's credal set.
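As a standalone sketch (hypothetical names, not the aGrUM API): the expectation of a vertex is the dot product of its probabilities with the variable's modality values, and the lower/upper expectations keep the running minimum/maximum over all vertices seen.

```cpp
#include <cassert>
#include <vector>

// Running lower/upper expectations of one node (illustrative only).
struct Expectations {
  double min = 1e30;   // lower expectation, initialized above any value
  double max = -1e30;  // upper expectation, initialized below any value
};

// The expectation of a vertex is the dot product of its probabilities with
// the variable's modality values; the bounds keep the running min/max.
void updateExpectations(Expectations& e, const std::vector< double >& vertex,
                        const std::vector< double >& modalities) {
  double expectation = 0;
  for (std::size_t mod = 0; mod < vertex.size(); ++mod)
    expectation += vertex[mod] * modalities[mod];
  if (expectation > e.max) e.max = expectation;
  if (expectation < e.min) e.min = expectation;
}
```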

Definition at line 907 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax, gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin, and gum::credal::InferenceEngine< GUM_SCALAR >::_modal.

908  {
909  std::string var_name = _credalNet->current_bn().variable(id).name();
910  auto delim = var_name.find_first_of("_");
911 
912  var_name = var_name.substr(0, delim);
913 
914  if (_modal.exists(var_name) /*_modal.find(var_name) != _modal.end()*/) {
915  GUM_SCALAR exp = 0;
916  auto vsize = vertex.size();
917 
918  for (Size mod = 0; mod < vsize; mod++)
919  exp += vertex[mod] * _modal[var_name][mod];
920 
921  if (exp > _expectationMax[id]) _expectationMax[id] = exp;
922 
923  if (exp < _expectationMin[id]) _expectationMin[id] = exp;
924  }
925  }
expe _expectationMax
Upper expectations, if some variables modalities were inserted.
dynExpe _modal
Variables modalities used to compute expectations.
expe _expectationMin
Lower expectations, if some variables modalities were inserted.

◆ _updateMarginals()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_updateMarginals ( )
inline protected inherited

Fusion of the threads' marginals.
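Stripped of OpenMP and the aGrUM types, the fusion rule can be sketched as follows (hypothetical helper, for illustration only): the global lower marginal of each modality is the minimum over the per-thread lower marginals, and symmetrically for the upper one.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Fuse per-thread marginal bounds of one node into the global bounds:
// lower bound = min over threads, upper bound = max over threads.
void fuseMarginals(std::vector< double >& globalMin,
                   std::vector< double >& globalMax,
                   const std::vector< std::vector< double > >& threadMin,
                   const std::vector< std::vector< double > >& threadMax) {
  for (std::size_t mod = 0; mod < globalMin.size(); ++mod) {
    for (std::size_t t = 0; t < threadMin.size(); ++t) {
      globalMin[mod] = std::min(globalMin[mod], threadMin[t][mod]);
      globalMax[mod] = std::max(globalMax[mod], threadMax[t][mod]);
    }
  }
}
```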

Definition at line 274 of file multipleInferenceEngine_tpl.h.

References gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, and gum::getThreadNumber().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

274  {
275 #pragma omp parallel
276  {
277  int threadId = getThreadNumber();
278  long nsize = long(_workingSet[threadId]->size());
279 
280 #pragma omp for
281 
282  for (long i = 0; i < nsize; i++) {
283  Size dSize = Size(_l_marginalMin[threadId][i].size());
284 
285  for (Size j = 0; j < dSize; j++) {
286  Size tsize = Size(_l_marginalMin.size());
287 
288  // go through all threads
289  for (Size tId = 0; tId < tsize; tId++) {
290  if (_l_marginalMin[tId][i][j] < this->_marginalMin[i][j])
291  this->_marginalMin[i][j] = _l_marginalMin[tId][i][j];
292 
293  if (_l_marginalMax[tId][i][j] > this->_marginalMax[i][j])
294  this->_marginalMax[i][j] = _l_marginalMax[tId][i][j];
295  } // end of : all threads
296  } // end of : all modalities
297  } // end of : all variables
298  } // end of : parallel region
299  }
unsigned int getThreadNumber()
Get the calling thread id.
__margis _l_marginalMin
Threads lower marginals, one per thread.
std::vector< __bnet *> _workingSet
Threads IBayesNet.
__margis _l_marginalMax
Threads upper marginals, one per thread.

◆ _updateOldMarginals()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_updateOldMarginals ( )
protected inherited

Update old marginals (from current marginals).

Call this once to initialize the old marginals (after burn-in, for example) and then use _computeEpsilon, which does the same job but computes epsilon too.
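_computeEpsilon itself is not shown here; assuming it measures the largest change between the old and the current bounds, the pattern can be sketched with a hypothetical helper: epsilon is the maximum absolute difference, and the old values are refreshed as a side effect.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Epsilon is the largest absolute change between the old and the current
// bounds; the old bounds are refreshed as a side effect (assumed behaviour).
double computeEpsilonAndUpdate(std::vector< double >& oldMarg,
                               const std::vector< double >& newMarg) {
  double eps = 0.;
  for (std::size_t i = 0; i < oldMarg.size(); ++i) {
    eps = std::max(eps, std::fabs(newMarg[i] - oldMarg[i]));
    oldMarg[i] = newMarg[i];
  }
  return eps;
}
```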

Definition at line 347 of file multipleInferenceEngine_tpl.h.

References gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_oldMarginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, and gum::getThreadNumber().

347  {
348 #pragma omp parallel
349  {
350  int threadId = getThreadNumber();
351  long nsize = long(_workingSet[threadId]->size());
352 
353 #pragma omp for
354 
355  for (long i = 0; i < nsize; i++) {
356  Size dSize = Size(_l_marginalMin[threadId][i].size());
357 
358  for (Size j = 0; j < dSize; j++) {
359  Size tsize = Size(_l_marginalMin.size());
360 
361  // go through all threads
362  for (Size tId = 0; tId < tsize; tId++) {
363  if (_l_marginalMin[tId][i][j] < this->_oldMarginalMin[i][j])
364  this->_oldMarginalMin[i][j] = _l_marginalMin[tId][i][j];
365 
366  if (_l_marginalMax[tId][i][j] > this->_oldMarginalMax[i][j])
367  this->_oldMarginalMax[i][j] = _l_marginalMax[tId][i][j];
368  } // end of : all threads
369  } // end of : all modalities
370  } // end of : all variables
371  } // end of : parallel region
372  }
margi _oldMarginalMin
Old lower marginals used to compute epsilon.
margi _oldMarginalMax
Old upper marginals used to compute epsilon.

◆ _updateThread()

template<typename GUM_SCALAR , class BNInferenceEngine >
bool gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_updateThread ( const NodeId id,
const std::vector< GUM_SCALAR > &  vertex,
const bool elimRedund = false 
)
inline protected inherited

Update thread information (marginals, expectations, IBayesNet, vertices) for a given node id.

Parameters
id  The id of the node to be updated.
vertex  The vertex.
elimRedund  true if redundancy elimination is to be performed, false otherwise (the default).
Returns
True if the IBayesNet is kept (for now), False otherwise.

Definition at line 89 of file multipleInferenceEngine_tpl.h.

References gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::__updateThreadCredalSets(), gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_modal, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_optimalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, and gum::getThreadNumber().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__threadUpdate().

92  {
93  int tId = getThreadNumber();
94 
95  // save E(X) if we don't save vertices
96  if (!__infE::_storeVertices && !_l_modal[tId].empty()) {
97  std::string var_name = _workingSet[tId]->variable(id).name();
98  auto delim = var_name.find_first_of("_");
99  var_name = var_name.substr(0, delim);
100 
101  if (_l_modal[tId].exists(var_name)) {
102  GUM_SCALAR exp = 0;
103  Size vsize = Size(vertex.size());
104 
105  for (Size mod = 0; mod < vsize; mod++)
106  exp += vertex[mod] * _l_modal[tId][var_name][mod];
107 
108  if (exp > _l_expectationMax[tId][id]) _l_expectationMax[tId][id] = exp;
109 
110  if (exp < _l_expectationMin[tId][id]) _l_expectationMin[tId][id] = exp;
111  }
112  } // end of : if modal (map) not empty
113 
114  bool newOne = false;
115  bool added = false;
116  bool result = false;
117  // for burn in, we need to keep checking on local marginals and not global
118  // ones
119  // (faster inference)
120  // we also don't want to store dbn for observed variables since there will
121  // be a
122  // huge number of them (probably all of them).
123  Size vsize = Size(vertex.size());
124 
125  for (Size mod = 0; mod < vsize; mod++) {
126  if (vertex[mod] < _l_marginalMin[tId][id][mod]) {
127  _l_marginalMin[tId][id][mod] = vertex[mod];
128  newOne = true;
129 
130  if (__infE::_storeBNOpt && !__infE::_evidence.exists(id)) {
131  std::vector< Size > key(3);
132  key[0] = id;
133  key[1] = mod;
134  key[2] = 0;
135 
136  if (_l_optimalNet[tId]->insert(key, true)) result = true;
137  }
138  }
139 
140  if (vertex[mod] > _l_marginalMax[tId][id][mod]) {
141  _l_marginalMax[tId][id][mod] = vertex[mod];
142  newOne = true;
143 
144  if (__infE::_storeBNOpt && !__infE::_evidence.exists(id)) {
145  std::vector< Size > key(3);
146  key[0] = id;
147  key[1] = mod;
148  key[2] = 1;
149 
150  if (_l_optimalNet[tId]->insert(key, true)) result = true;
151  }
152  } else if (vertex[mod] == _l_marginalMin[tId][id][mod]
153  || vertex[mod] == _l_marginalMax[tId][id][mod]) {
154  newOne = true;
155 
156  if (__infE::_storeBNOpt && vertex[mod] == _l_marginalMin[tId][id][mod]
157  && !__infE::_evidence.exists(id)) {
158  std::vector< Size > key(3);
159  key[0] = id;
160  key[1] = mod;
161  key[2] = 0;
162 
163  if (_l_optimalNet[tId]->insert(key, false)) result = true;
164  }
165 
166  if (__infE::_storeBNOpt && vertex[mod] == _l_marginalMax[tId][id][mod]
167  && !__infE::_evidence.exists(id)) {
168  std::vector< Size > key(3);
169  key[0] = id;
170  key[1] = mod;
171  key[2] = 1;
172 
173  if (_l_optimalNet[tId]->insert(key, false)) result = true;
174  }
175  }
176 
177  // store point to compute credal set vertices.
178  // check for redundancy at each step or at the end ?
179  if (__infE::_storeVertices && !added && newOne) {
180  __updateThreadCredalSets(id, vertex, elimRedund);
181  added = true;
182  }
183  }
184 
185  // if all variables didn't get better marginals, we will delete
186  if (__infE::_storeBNOpt && result) return true;
187 
188  return false;
189  }
__expes _l_expectationMin
Threads lower expectations, one per thread.
bool _storeBNOpt
True if optimal IBayesNets are stored (for each variable and each modality), False otherwise.
std::vector< VarMod2BNsMap< GUM_SCALAR > *> _l_optimalNet
Threads optimal IBayesNet.
__expes _l_expectationMax
Threads upper expectations, one per thread.
margi _evidence
Holds observed variables states.
bool _storeVertices
True if credal sets vertices are stored, False otherwise.
void __updateThreadCredalSets(const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund)
Ask for redundancy elimination of a node credal set of a calling thread.

◆ _verticesFusion()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_verticesFusion ( )
protected inherited
Deprecated:
Fusion of the threads' vertices.

Definition at line 376 of file multipleInferenceEngine_tpl.h.

References gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalSets, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::credal::InferenceEngine< GUM_SCALAR >::_updateCredalSets(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, and gum::getThreadNumber().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

376  {
377  // don't create threads if there are no vertices saved
378  if (!__infE::_storeVertices) return;
379 
380 #pragma omp parallel
381  {
382  int threadId = getThreadNumber();
383  Size nsize = Size(_workingSet[threadId]->size());
384 
385 #pragma omp for
386 
387  for (long i = 0; i < long(nsize); i++) {
388  Size tsize = Size(_l_marginalMin.size());
389 
390  // go through all threads
391  for (long tId = 0; tId < long(tsize); tId++) {
392  auto& nodeThreadCredalSet = _l_marginalSets[tId][i];
393 
394  // for each vertex, if we are at any opt marginal, add it to the set
395  for (const auto& vtx : nodeThreadCredalSet) {
396  // we run redundancy elimination at each step
397  // because there could be 100000 threads and the set will be so
398  // huge
399  // ...
400  // BUT not if vertices are of dimension 2 ! opt check and equality
401  // should be enough
402  __infE::_updateCredalSets(i, vtx, (vtx.size() > 2) ? true : false);
403  } // end of : nodeThreadCredalSet
404  } // end of : all threads
405  } // end of : all variables
406  } // end of : parallel region
407  }
void _updateCredalSets(const NodeId &id, const std::vector< GUM_SCALAR > &vertex, const bool &elimRedund=false)
Given a node id and one of it&#39;s possible vertex, update it&#39;s credal set.
__credalSets _l_marginalSets
Threads vertices.

◆ continueApproximationScheme()

INLINE bool gum::ApproximationScheme::continueApproximationScheme ( double  error)
inherited

Update the scheme w.r.t. the new error.

Tests the stopping criteria that are enabled.

Parameters
error  The new error value.
Returns
false if the state becomes != ApproximationSchemeSTATE::Continue.
Exceptions
OperationNotAllowed  Raised if state != ApproximationSchemeSTATE::Continue.
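The epsilon-rate criterion used by this scheme can be isolated into a small sketch (hypothetical helper, not the aGrUM API): stop when the relative change between two successive epsilon values falls at or below the minimal rate, treating a vanishing epsilon as an immediate stop.

```cpp
#include <cassert>
#include <cmath>

// Rate criterion: stop when |(eps - lastEps) / eps| <= minRate.
// A negative lastEps means there is no previous period yet; eps <= 0 makes
// the rate diverge, which is treated as an immediate stop.
bool rateSaysStop(double lastEps, double currentEps, double minRate) {
  if (lastEps < 0.) return false;  // no previous epsilon yet
  if (currentEps <= 0.) return true;
  double rate = std::fabs((currentEps - lastEps) / currentEps);
  return rate <= minRate;
}
```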

Definition at line 227 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_epsilon, gum::ApproximationScheme::_current_rate, gum::ApproximationScheme::_current_state, gum::ApproximationScheme::_current_step, gum::ApproximationScheme::_enabled_eps, gum::ApproximationScheme::_enabled_max_iter, gum::ApproximationScheme::_enabled_max_time, gum::ApproximationScheme::_enabled_min_rate_eps, gum::ApproximationScheme::_eps, gum::ApproximationScheme::_history, gum::ApproximationScheme::_last_epsilon, gum::ApproximationScheme::_max_iter, gum::ApproximationScheme::_max_time, gum::ApproximationScheme::_min_rate_eps, gum::ApproximationScheme::_stopScheme(), gum::ApproximationScheme::_timer, gum::IApproximationSchemeConfiguration::Continue, gum::IApproximationSchemeConfiguration::Epsilon, GUM_EMIT3, GUM_ERROR, gum::IApproximationSchemeConfiguration::Limit, gum::IApproximationSchemeConfiguration::messageApproximationScheme(), gum::IApproximationSchemeConfiguration::onProgress, gum::IApproximationSchemeConfiguration::Rate, gum::ApproximationScheme::startOfPeriod(), gum::ApproximationScheme::stateApproximationScheme(), gum::Timer::step(), gum::IApproximationSchemeConfiguration::TimeLimit, and gum::ApproximationScheme::verbosity().

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::SamplingInference< GUM_SCALAR >::_loopApproxInference(), gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

227  {
228  // For coherence, we fix the time used in the method
229 
230  double timer_step = _timer.step();
231 
232  if (_enabled_max_time) {
233  if (timer_step > _max_time) {
234  _stopScheme(ApproximationSchemeSTATE::TimeLimit);
235  return false;
236  }
237  }
238 
239  if (!startOfPeriod()) { return true; }
240 
241  if (stateApproximationScheme() != ApproximationSchemeSTATE::Continue) {
242  GUM_ERROR(OperationNotAllowed,
243  "state of the approximation scheme is not correct : "
244  << messageApproximationScheme());
245  }
246 
247  if (verbosity()) { _history.push_back(error); }
248 
249  if (_enabled_max_iter) {
250  if (_current_step > _max_iter) {
251  _stopScheme(ApproximationSchemeSTATE::Limit);
252  return false;
253  }
254  }
255 
256  _last_epsilon = _current_epsilon;
257  _current_epsilon = error; // eps rate isEnabled needs it so affectation was
258  // moved from eps isEnabled below
259 
260  if (_enabled_eps) {
261  if (_current_epsilon <= _eps) {
262  _stopScheme(ApproximationSchemeSTATE::Epsilon);
263  return false;
264  }
265  }
266 
267  if (_last_epsilon >= 0.) {
268  if (_current_epsilon > .0) {
269  // ! _current_epsilon can be 0. AND epsilon
270  // isEnabled can be disabled !
271  _current_rate =
272  std::fabs((_current_epsilon - _last_epsilon) / _current_epsilon);
273  }
274  // limit with current eps ---> 0 is | 1 - ( last_eps / 0 ) | --->
275  // infinity the else means a return false if we isEnabled the rate below,
276  // as we would have returned false if epsilon isEnabled was enabled
277  else {
278  _current_rate = _min_rate_eps;
279  }
280 
281  if (_enabled_min_rate_eps) {
282  if (_current_rate <= _min_rate_eps) {
283  _stopScheme(ApproximationSchemeSTATE::Rate);
284  return false;
285  }
286  }
287  }
288 
289  if (stateApproximationScheme() == ApproximationSchemeSTATE::Continue) {
290  if (onProgress.hasListener()) {
291  GUM_EMIT3(onProgress, _current_step, _current_epsilon, timer_step);
292  }
293 
294  return true;
295  } else {
296  return false;
297  }
298  }
double step() const
Returns the delta time between now and the last reset() call (or the constructor).
Definition: timer_inl.h:42
Signaler3< Size, double, double > onProgress
Progression, error and time.
bool _enabled_max_iter
If true, the maximum iterations stopping criterion is enabled.
bool _enabled_eps
If true, the threshold convergence is enabled.
void _stopScheme(ApproximationSchemeSTATE new_state)
Stop the scheme given a new state.
double _current_epsilon
Current epsilon.
bool _enabled_min_rate_eps
If true, the minimal threshold for epsilon rate is enabled.
bool startOfPeriod()
Returns true if we are at the beginning of a period (compute error is mandatory). ...
double _eps
Threshold for convergence.
double _current_rate
Current rate.
bool _enabled_max_time
If true, the timeout is enabled.
Size _current_step
The current step.
std::vector< double > _history
The scheme history, used only if verbosity == true.
double _min_rate_eps
Threshold for the epsilon rate.
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
bool verbosity() const
Returns true if verbosity is enabled.
std::string messageApproximationScheme() const
Returns the approximation scheme message.
double _last_epsilon
Last epsilon value.
Size _max_iter
The maximum iterations.
#define GUM_EMIT3(signal, arg1, arg2, arg3)
Definition: signaler3.h:42
ApproximationSchemeSTATE _current_state
The current state.
double _max_time
The timeout.

◆ credalNet()

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::credalNet ( )
inherited

Get this credal network.

Returns
A constant reference to this CredalNet.

Definition at line 59 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet.

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine().

59  {
60  return *_credalNet;
61  }

◆ currentTime()

INLINE double gum::ApproximationScheme::currentTime ( ) const
virtual inherited

Returns the current running time in seconds.

Returns
The current running time in seconds.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 128 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_timer, and gum::Timer::step().

Referenced by gum::learning::genericBNLearner::currentTime().

128 { return _timer.step(); }

◆ disableEpsilon()

INLINE void gum::ApproximationScheme::disableEpsilon ( )
virtual inherited

Disable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 54 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps.

Referenced by gum::learning::genericBNLearner::disableEpsilon().

54 { _enabled_eps = false; }

◆ disableMaxIter()

INLINE void gum::ApproximationScheme::disableMaxIter ( )
virtual inherited

Disable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 105 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::learning::genericBNLearner::disableMaxIter(), and gum::learning::GreedyHillClimbing::GreedyHillClimbing().

105 { _enabled_max_iter = false; }

◆ disableMaxTime()

INLINE void gum::ApproximationScheme::disableMaxTime ( )
virtual inherited

Disable stopping criterion on timeout.


Implements gum::IApproximationSchemeConfiguration.

Definition at line 131 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time.

Referenced by gum::learning::genericBNLearner::disableMaxTime(), and gum::learning::GreedyHillClimbing::GreedyHillClimbing().

131 { _enabled_max_time = false; }

◆ disableMinEpsilonRate()

INLINE void gum::ApproximationScheme::disableMinEpsilonRate ( )
virtual inherited

Disable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 79 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::learning::genericBNLearner::disableMinEpsilonRate(), and gum::learning::GreedyHillClimbing::GreedyHillClimbing().

79  {
80  _enabled_min_rate_eps = false;
81  }

◆ dynamicExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpectations ( )
inherited

Compute dynamic expectations.

See also
_dynamicExpectations. Only call this if an algorithm does not call it by itself.

Definition at line 716 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations().

716  {
718  }
void _dynamicExpectations()
Rearrange lower and upper expectations to suit dynamic networks.

◆ dynamicExpMax()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMax ( const std::string &  varName) const
inherited

Get the upper dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName  The variable name prefix whose upper expectation we want.
Returns
A constant reference to the variable's upper expectation over all time steps.
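The prefix convention can be sketched with a small helper (hypothetical, not part of aGrUM): dynamic-network variables are named "&lt;prefix&gt;_&lt;t&gt;", so the prefix is everything before the first underscore, exactly as parsed in the listings above.

```cpp
#include <cassert>
#include <string>

// Dynamic-network variables are named "<prefix>_<t>"; the prefix is
// everything before the first underscore (the whole name if none).
std::string timePrefix(const std::string& varName) {
  return varName.substr(0, varName.find_first_of('_'));
}
```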

Definition at line 504 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, and GUM_ERROR.

505  {
506  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
507  "GUM_SCALAR >::dynamicExpMax ( const std::string & "
508  "varName ) const : ";
509 
510  if (_dynamicExpMax.empty())
511  GUM_ERROR(OperationNotAllowed,
512  errTxt + "_dynamicExpectations() needs to be called before");
513 
514  if (!_dynamicExpMax.exists(
515  varName) /*_dynamicExpMin.find(varName) == _dynamicExpMin.end()*/)
516  GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName);
517 
518  return _dynamicExpMax[varName];
519  }
dynExpe _dynamicExpMax
Upper dynamic expectations.

◆ dynamicExpMin()

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::dynamicExpMin ( const std::string &  varName) const
inherited

Get the lower dynamic expectation of a given variable prefix (without the time step included, i.e. call with "temp" to get "temp_0", ..., "temp_T").

Parameters
varName  The variable name prefix whose lower expectation we want.
Returns
A constant reference to the variable's lower expectation over all time steps.

Definition at line 486 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, and GUM_ERROR.

487  {
488  std::string errTxt = "const std::vector< GUM_SCALAR > & InferenceEngine< "
489  "GUM_SCALAR >::dynamicExpMin ( const std::string & "
490  "varName ) const : ";
491 
492  if (_dynamicExpMin.empty())
493  GUM_ERROR(OperationNotAllowed,
494  errTxt + "_dynamicExpectations() needs to be called before");
495 
496  if (!_dynamicExpMin.exists(
497  varName) /*_dynamicExpMin.find(varName) == _dynamicExpMin.end()*/)
498  GUM_ERROR(NotFound, errTxt + "variable name not found : " << varName);
499 
500  return _dynamicExpMin[varName];
501  }
dynExpe _dynamicExpMin
Lower dynamic expectations.

◆ enableEpsilon()

INLINE void gum::ApproximationScheme::enableEpsilon ( )
virtual inherited

Enable stopping criterion on epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 57 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), and gum::learning::genericBNLearner::enableEpsilon().

57 { _enabled_eps = true; }
bool _enabled_eps
If true, the threshold convergence is enabled.
+ Here is the caller graph for this function:

◆ enableMaxIter()

INLINE void gum::ApproximationScheme::enableMaxIter ( )
virtualinherited

Enable stopping criterion on max iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 108 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter.

Referenced by gum::learning::genericBNLearner::enableMaxIter().

108 { _enabled_max_iter = true; }
bool _enabled_max_iter
If true, the maximum iterations stopping criterion is enabled.
+ Here is the caller graph for this function:

◆ enableMaxTime()

INLINE void gum::ApproximationScheme::enableMaxTime ( )
virtualinherited

Enable stopping criterion on timeout.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 134 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(), and gum::learning::genericBNLearner::enableMaxTime().

134 { _enabled_max_time = true; }
bool _enabled_max_time
If true, the timeout is enabled.
+ Here is the caller graph for this function:

◆ enableMinEpsilonRate()

INLINE void gum::ApproximationScheme::enableMinEpsilonRate ( )
virtualinherited

Enable stopping criterion on epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 84 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), and gum::learning::genericBNLearner::enableMinEpsilonRate().

84  {
85  _enabled_min_rate_eps = true;
86  }
bool _enabled_min_rate_eps
If true, the minimal threshold for epsilon rate is enabled.
+ Here is the caller graph for this function:

◆ epsilon()

INLINE double gum::ApproximationScheme::epsilon ( ) const
virtualinherited

Returns the value of epsilon.

Returns
Returns the value of epsilon.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 51 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_eps.

Referenced by gum::ImportanceSampling< GUM_SCALAR >::_onContextualize(), and gum::learning::genericBNLearner::epsilon().

51 { return _eps; }
double _eps
Threshold for convergence.
+ Here is the caller graph for this function:

◆ eraseAllEvidence()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::eraseAllEvidence ( )
virtualinherited

Erase all inference related data to perform another one.

Evidence must be inserted again if needed, but modalities are kept. New modalities can be inserted with the appropriate method, which deletes the old ones.

Reimplemented from gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 533 of file multipleInferenceEngine_tpl.h.

References gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_clusters, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_evidence, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_expectationMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_inferenceEngine, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMax, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalMin, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_marginalSets, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_modal, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_optimalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSet, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSetE, and gum::credal::InferenceEngine< GUM_SCALAR >::eraseAllEvidence().

533  {
535  Size tsize = Size(_workingSet.size());
536 
537  // delete pointers
538  for (Size bn = 0; bn < tsize; bn++) {
539  if (__infE::_storeVertices) _l_marginalSets[bn].clear();
540 
541  if (_workingSet[bn] != nullptr) delete _workingSet[bn];
542 
543  if (__infE::_storeBNOpt)
544  if (_l_inferenceEngine[bn] != nullptr) delete _l_optimalNet[bn];
545 
546  if (this->_workingSetE[bn] != nullptr) {
547  for (const auto ev : *_workingSetE[bn])
548  delete ev;
549 
550  delete _workingSetE[bn];
551  }
552 
553  if (_l_inferenceEngine[bn] != nullptr) delete _l_inferenceEngine[bn];
554  }
555 
556  // this is important, those will be resized with the correct number of
557  // threads.
558 
559  _workingSet.clear();
560  _workingSetE.clear();
561  _l_inferenceEngine.clear();
562  _l_optimalNet.clear();
563 
564  _l_marginalMin.clear();
565  _l_marginalMax.clear();
566  _l_expectationMin.clear();
567  _l_expectationMax.clear();
568  _l_modal.clear();
569  _l_marginalSets.clear();
570  _l_evidence.clear();
571  _l_clusters.clear();
572  }
std::vector< BNInferenceEngine *> _l_inferenceEngine
Threads BNInferenceEngine.
__expes _l_expectationMin
Threads lower expectations, one per thread.
bool _storeBNOpt
True if optimal IBayesNets are stored during inference, False otherwise.
__margis _l_marginalMin
Threads lower marginals, one per thread.
std::vector< List< const Potential< GUM_SCALAR > *> *> _workingSetE
Threads evidence.
std::vector< VarMod2BNsMap< GUM_SCALAR > *> _l_optimalNet
Threads optimal IBayesNet.
std::vector< __bnet *> _workingSet
Threads IBayesNet.
__expes _l_expectationMax
Threads upper expectations, one per thread.
__clusters _l_clusters
Threads clusters.
__margis _l_marginalMax
Threads upper marginals, one per thread.
bool _storeVertices
True if credal sets vertices are stored, False otherwise.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
__credalSets _l_marginalSets
Threads vertices.
virtual void eraseAllEvidence()
Erase all inference related data to perform another one.
+ Here is the call graph for this function:

◆ expectationMax() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const NodeId  id) const
inherited

Get the upper expectation of a given node id.

Parameters
id The node id whose upper expectation we want.
Returns
A constant reference to this node upper expectation.

Definition at line 479 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax.

479  {
480  try {
481  return _expectationMax[id];
482  } catch (NotFound& err) { throw(err); }
483  }
expe _expectationMax
Upper expectations, if some variables modalities were inserted.

◆ expectationMax() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax ( const std::string &  varName) const
inherited

Get the upper expectation of a given variable name.

Parameters
varName The variable name whose upper expectation we want.
Returns
A constant reference to this variable upper expectation.

Definition at line 462 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMax.

463  {
464  try {
465  return _expectationMax[_credalNet->current_bn().idFromName(varName)];
466  } catch (NotFound& err) { throw(err); }
467  }
expe _expectationMax
Upper expectations, if some variables modalities were inserted.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.

◆ expectationMin() [1/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const NodeId  id) const
inherited

Get the lower expectation of a given node id.

Parameters
id The node id whose lower expectation we want.
Returns
A constant reference to this node lower expectation.

Definition at line 471 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin.

471  {
472  try {
473  return _expectationMin[id];
474  } catch (NotFound& err) { throw(err); }
475  }
expe _expectationMin
Lower expectations, if some variables modalities were inserted.

◆ expectationMin() [2/2]

template<typename GUM_SCALAR >
const GUM_SCALAR & gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin ( const std::string &  varName) const
inherited

Get the lower expectation of a given variable name.

Parameters
varName The variable name whose lower expectation we want.
Returns
A constant reference to this variable lower expectation.

Definition at line 454 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_expectationMin.

455  {
456  try {
457  return _expectationMin[_credalNet->current_bn().idFromName(varName)];
458  } catch (NotFound& err) { throw(err); }
459  }
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
expe _expectationMin
Lower expectations, if some variables modalities were inserted.

◆ getApproximationSchemeMsg()

template<typename GUM_SCALAR >
const std::string gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg ( )
inlineinherited

Get approximation scheme state.

Returns
A constant string about approximation scheme state.

Definition at line 515 of file inferenceEngine.h.

References gum::IApproximationSchemeConfiguration::messageApproximationScheme().

515  {
516  return this->messageApproximationScheme();
517  }
std::string messageApproximationScheme() const
Returns the approximation scheme message.
+ Here is the call graph for this function:

◆ getT0Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT0Cluster ( ) const
inherited

Get the _t0 cluster.

Returns
A constant reference to the _t0 cluster.

Definition at line 1005 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_t0.

1005  {
1006  return _t0;
1007  }
cluster _t0
Clusters of nodes used with dynamic networks.

◆ getT1Cluster()

template<typename GUM_SCALAR >
const NodeProperty< std::vector< NodeId > > & gum::credal::InferenceEngine< GUM_SCALAR >::getT1Cluster ( ) const
inherited

Get the _t1 cluster.

Returns
A constant reference to the _t1 cluster.

Definition at line 1011 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_t1.

1011  {
1012  return _t1;
1013  }
cluster _t1
Clusters of nodes used with dynamic networks.

◆ getVarMod2BNsMap()

template<typename GUM_SCALAR >
VarMod2BNsMap< GUM_SCALAR > * gum::credal::InferenceEngine< GUM_SCALAR >::getVarMod2BNsMap ( )
inherited

Get optimum IBayesNet.

Returns
A pointer to the optimal net object.

Definition at line 141 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dbnOpt.

141  {
142  return &_dbnOpt;
143  }
VarMod2BNsMap< GUM_SCALAR > _dbnOpt
Object used to efficiently store optimal bayes net during inference, for some algorithms.

◆ history()

INLINE const std::vector< double > & gum::ApproximationScheme::history ( ) const
virtualinherited

Returns the scheme history.

Returns
Returns the scheme history.
Exceptions
OperationNotAllowed Raised if the scheme has not been performed or if verbosity is set to false.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 173 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_history, GUM_ERROR, gum::ApproximationScheme::stateApproximationScheme(), gum::IApproximationSchemeConfiguration::Undefined, and gum::ApproximationScheme::verbosity().

Referenced by gum::learning::genericBNLearner::history().

173  {
174  if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
175  GUM_ERROR(OperationNotAllowed,
176  "state of the approximation scheme is undefined");
177  }
178 
179  if (verbosity() == false) {
180  GUM_ERROR(OperationNotAllowed, "No history when verbosity=false");
181  }
182 
183  return _history;
184  }
std::vector< double > _history
The scheme history, used only if verbosity == true.
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
bool verbosity() const
Returns true if verbosity is enabled.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ initApproximationScheme()

INLINE void gum::ApproximationScheme::initApproximationScheme ( )
inherited

Initialise the scheme.

Definition at line 187 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_epsilon, gum::ApproximationScheme::_current_rate, gum::ApproximationScheme::_current_state, gum::ApproximationScheme::_current_step, gum::ApproximationScheme::_history, gum::ApproximationScheme::_timer, gum::IApproximationSchemeConfiguration::Continue, and gum::Timer::reset().

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::SamplingInference< GUM_SCALAR >::_loopApproxInference(), gum::SamplingInference< GUM_SCALAR >::_onStateChanged(), gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), and gum::learning::LocalSearchWithTabuList::learnStructure().

187  {
188  _current_state = ApproximationSchemeSTATE::Continue;
189  _current_step = 0;
190  _current_epsilon = _current_rate = -1.0;
191  _history.clear();
192  _timer.reset();
193  }
double _current_epsilon
Current epsilon.
void reset()
Reset the timer.
Definition: timer_inl.h:32
double _current_rate
Current rate.
Size _current_step
The current step.
std::vector< double > _history
The scheme history, used only if verbosity == true.
ApproximationSchemeSTATE _current_state
The current state.
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ insertEvidence() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const std::map< std::string, std::vector< GUM_SCALAR > > &  eviMap)
inherited

Insert evidence from map.

Parameters
eviMap The map from variable name to likelihood.

Definition at line 229 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

230  {
231  if (!_evidence.empty()) _evidence.clear();
232 
233  for (auto it = eviMap.cbegin(), theEnd = eviMap.cend(); it != theEnd; ++it) {
234  NodeId id;
235 
236  try {
237  id = _credalNet->current_bn().idFromName(it->first);
238  } catch (NotFound& err) {
239  GUM_SHOWERROR(err);
240  continue;
241  }
242 
243  _evidence.insert(id, it->second);
244  }
245  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:61
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
margi _evidence
Holds observed variables states.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
Size NodeId
Type for node ids.
Definition: graphElements.h:98
+ Here is the call graph for this function:

◆ insertEvidence() [2/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence ( const NodeProperty< std::vector< GUM_SCALAR > > &  evidence)
inherited

Insert evidence from Property.

Parameters
evidence The Property over nodes containing likelihoods.

Definition at line 251 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_evidence, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

252  {
253  if (!_evidence.empty()) _evidence.clear();
254 
255  // use cbegin() to get const_iterator when available in aGrUM hashtables
256  for (const auto& elt : evidence) {
257  try {
258  _credalNet->current_bn().variable(elt.first);
259  } catch (NotFound& err) {
260  GUM_SHOWERROR(err);
261  continue;
262  }
263 
264  _evidence.insert(elt.first, elt.second);
265  }
266  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:61
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
margi _evidence
Holds observed variables states.
void clear()
Removes all the elements in the hash table.
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
+ Here is the call graph for this function:

◆ insertEvidenceFile()

template<typename GUM_SCALAR , class BNInferenceEngine = LazyPropagation< GUM_SCALAR >>
virtual void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::insertEvidenceFile ( const std::string &  path)
inlinevirtual

Insert evidence from file.

Reimplemented from gum::credal::InferenceEngine< GUM_SCALAR >.

Definition at line 127 of file CNMonteCarloSampling.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidenceFile().

127  {
129  };
virtual void insertEvidenceFile(const std::string &path)
Insert evidence from file.
+ Here is the call graph for this function:

◆ insertModals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModals ( const std::map< std::string, std::vector< GUM_SCALAR > > &  modals)
inherited

Insert variables modalities from map to compute expectations.

Parameters
modals The map from variable name to modalities.

Definition at line 193 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_modal, and GUM_SHOWERROR.

194  {
195  if (!_modal.empty()) _modal.clear();
196 
197  for (auto it = modals.cbegin(), theEnd = modals.cend(); it != theEnd; ++it) {
198  NodeId id;
199 
200  try {
201  id = _credalNet->current_bn().idFromName(it->first);
202  } catch (NotFound& err) {
203  GUM_SHOWERROR(err);
204  continue;
205  }
206 
207  // check that modals are net compatible
208  auto dSize = _credalNet->current_bn().variable(id).domainSize();
209 
210  if (dSize != it->second.size()) continue;
211 
212  // GUM_ERROR(OperationNotAllowed, "void InferenceEngine< GUM_SCALAR
213  // >::insertModals( const std::map< std::string, std::vector< GUM_SCALAR
214  // > >
215  // &modals) : modalities does not respect variable cardinality : " <<
216  // _credalNet->current_bn().variable( id ).name() << " : " << dSize << "
217  // != "
218  // << it->second.size());
219 
220  _modal.insert(it->first, it->second); //[ it->first ] = it->second;
221  }
222 
223  //_modal = modals;
224 
226  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:61
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
dynExpe _modal
Variables modalities used to compute expectations.
void _initExpectations()
Initialize lower and upper expectations before inference, with the lower expectation being initialize...
Size NodeId
Type for node ids.
Definition: graphElements.h:98
+ Here is the call graph for this function:

◆ insertModalsFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertModalsFile ( const std::string &  path)
inherited

Insert variables modalities from file to compute expectations.

Parameters
path The path to the modalities file.

Definition at line 146 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_modal, and GUM_ERROR.

146  {
147  std::ifstream mod_stream(path.c_str(), std::ios::in);
148 
149  if (!mod_stream.good()) {
150  GUM_ERROR(OperationNotAllowed,
151  "void InferenceEngine< GUM_SCALAR "
152  ">::insertModals(const std::string & path) : "
153  "could not open input file : "
154  << path);
155  }
156 
157  if (!_modal.empty()) _modal.clear();
158 
159  std::string line, tmp;
160  char * cstr, *p;
161 
162  while (mod_stream.good()) {
163  getline(mod_stream, line);
164 
165  if (line.size() == 0) continue;
166 
167  cstr = new char[line.size() + 1];
168  strcpy(cstr, line.c_str());
169 
170  p = strtok(cstr, " ");
171  tmp = p;
172 
173  std::vector< GUM_SCALAR > values;
174  p = strtok(nullptr, " ");
175 
176  while (p != nullptr) {
177  values.push_back(GUM_SCALAR(atof(p)));
178  p = strtok(nullptr, " ");
179  } // end of : line
180 
181  _modal.insert(tmp, values); //[tmp] = values;
182 
183  delete[] p;
184  delete[] cstr;
185  } // end of : file
186 
187  mod_stream.close();
188 
190  }
dynExpe _modal
Variables modalities used to compute expectations.
void _initExpectations()
Initialize lower and upper expectations before inference, with the lower expectation being initialize...
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the call graph for this function:

◆ insertQuery()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery ( const NodeProperty< std::vector< bool > > &  query)
inherited

Insert query variables and states from Property.

Parameters
query The Property over nodes containing the queried variables' states.

Definition at line 331 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_query, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

332  {
333  if (!_query.empty()) _query.clear();
334 
335  for (const auto& elt : query) {
336  try {
337  _credalNet->current_bn().variable(elt.first);
338  } catch (NotFound& err) {
339  GUM_SHOWERROR(err);
340  continue;
341  }
342 
343  _query.insert(elt.first, elt.second);
344  }
345  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:61
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
query _query
Holds the query nodes states.
void clear()
Removes all the elements in the hash table.
NodeProperty< std::vector< bool > > query
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
+ Here is the call graph for this function:

◆ insertQueryFile()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile ( const std::string &  path)
inherited

Insert query variables states from file.

Parameters
path The path to the query file.

Definition at line 348 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_query, gum::HashTable< Key, Val, Alloc >::clear(), gum::HashTable< Key, Val, Alloc >::empty(), GUM_ERROR, GUM_SHOWERROR, and gum::HashTable< Key, Val, Alloc >::insert().

348  {
349  std::ifstream evi_stream(path.c_str(), std::ios::in);
350 
351  if (!evi_stream.good()) {
352  GUM_ERROR(IOError,
353  "void InferenceEngine< GUM_SCALAR >::insertQuery(const "
354  "std::string & path) : could not open input file : "
355  << path);
356  }
357 
358  if (!_query.empty()) _query.clear();
359 
360  std::string line, tmp;
361  char * cstr, *p;
362 
363  while (evi_stream.good() && std::strcmp(line.c_str(), "[QUERY]") != 0) {
364  getline(evi_stream, line);
365  }
366 
367  while (evi_stream.good()) {
368  getline(evi_stream, line);
369 
370  if (std::strcmp(line.c_str(), "[EVIDENCE]") == 0) break;
371 
372  if (line.size() == 0) continue;
373 
374  cstr = new char[line.size() + 1];
375  strcpy(cstr, line.c_str());
376 
377  p = strtok(cstr, " ");
378  tmp = p;
379 
380  // if user input is wrong
381  NodeId node = -1;
382 
383  try {
384  node = _credalNet->current_bn().idFromName(tmp);
385  } catch (NotFound& err) {
386  GUM_SHOWERROR(err);
387  continue;
388  }
389 
390  auto dSize = _credalNet->current_bn().variable(node).domainSize();
391 
392  p = strtok(nullptr, " ");
393 
394  if (p == nullptr) {
395  _query.insert(node, std::vector< bool >(dSize, true));
396  } else {
397  std::vector< bool > values(dSize, false);
398 
399  while (p != nullptr) {
400  if ((Size)atoi(p) >= dSize)
401  GUM_ERROR(OutOfBounds,
402  "void InferenceEngine< GUM_SCALAR "
403  ">::insertQuery(const std::string & path) : "
404  "query modality is higher or equal to "
405  "cardinality");
406 
407  values[atoi(p)] = true;
408  p = strtok(nullptr, " ");
409  } // end of : line
410 
411  _query.insert(node, values);
412  }
413 
414  delete[] p;
415  delete[] cstr;
416  } // end of : file
417 
418  evi_stream.close();
419  }
#define GUM_SHOWERROR(e)
Definition: exceptions.h:61
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
query _query
Holds the query nodes states.
void clear()
Removes all the elements in the hash table.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
value_type & insert(const Key &key, const Val &val)
Adds a new element (actually a copy of this element) into the hash table.
bool empty() const noexcept
Indicates whether the hash table is empty.
Size NodeId
Type for node ids.
Definition: graphElements.h:98
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the call graph for this function:

◆ isEnabledEpsilon()

INLINE bool gum::ApproximationScheme::isEnabledEpsilon ( ) const
virtualinherited

Returns true if stopping criterion on epsilon is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 61 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps.

Referenced by gum::learning::genericBNLearner::isEnabledEpsilon().

61  {
62  return _enabled_eps;
63  }
bool _enabled_eps
If true, the threshold convergence is enabled.
+ Here is the caller graph for this function:

◆ isEnabledMaxIter()

INLINE bool gum::ApproximationScheme::isEnabledMaxIter ( ) const
virtualinherited

Returns true if stopping criterion on max iterations is enabled, false otherwise.

Returns
Returns true if stopping criterion on max iterations is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 112 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter.

Referenced by gum::learning::genericBNLearner::isEnabledMaxIter().

112  {
113  return _enabled_max_iter;
114  }
bool _enabled_max_iter
If true, the maximum iterations stopping criterion is enabled.
+ Here is the caller graph for this function:

◆ isEnabledMaxTime()

INLINE bool gum::ApproximationScheme::isEnabledMaxTime ( ) const
virtualinherited

Returns true if stopping criterion on timeout is enabled, false otherwise.

Returns
Returns true if stopping criterion on timeout is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 138 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time.

Referenced by gum::learning::genericBNLearner::isEnabledMaxTime().

138  {
139  return _enabled_max_time;
140  }
bool _enabled_max_time
If true, the timeout is enabled.
+ Here is the caller graph for this function:

◆ isEnabledMinEpsilonRate()

INLINE bool gum::ApproximationScheme::isEnabledMinEpsilonRate ( ) const
virtualinherited

Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Returns
Returns true if stopping criterion on epsilon rate is enabled, false otherwise.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 90 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), and gum::learning::genericBNLearner::isEnabledMinEpsilonRate().

90  {
91  return _enabled_min_rate_eps;
92  }
bool _enabled_min_rate_eps
If true, the minimal threshold for epsilon rate is enabled.
+ Here is the caller graph for this function:

◆ makeInference()

template<typename GUM_SCALAR , class BNInferenceEngine >
void gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference ( )
virtual

Starts the inference.

Implements gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >.

Definition at line 55 of file CNMonteCarloSampling_tpl.h.

References gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy(), gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__threadInference(), gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__threadUpdate(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_computeEpsilon(), gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_expFusion(), gum::credal::InferenceEngine< GUM_SCALAR >::_modal, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_optFusion(), gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInd, gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit(), gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt, gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices, gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_updateMarginals(), gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_verticesFusion(), gum::ApproximationScheme::continueApproximationScheme(), GUM_SHOWERROR, gum::ApproximationScheme::periodSize(), and gum::ApproximationScheme::updateApproximationScheme().

55  {
56  if (__infEs::_repetitiveInd) {
57  try {
58  this->_repetitiveInit();
59  } catch (InvalidArgument& err) {
60  GUM_SHOWERROR(err);
61  __infEs::_repetitiveInd = false;
62  }
63  }
64 
65  // debug
66 
67  __mcInitApproximationScheme();
68 
69  __mcThreadDataCopy();
70 
71 
72  // don't put it after burnIn, it could stop with timeout : we want at
73  // least one
74  // burnIn and one periodSize
75  GUM_SCALAR eps = 1.; // to validate testSuite ?
76 
78  auto psize = this->periodSize();
79  /*
80 
81  auto remaining = this->remainingBurnIn();
82 
86  if ( remaining != 0 ) {
95  do {
96  eps = 0;
97 
98  int iters = int( ( remaining < psize ) ? remaining : psize );
99 
100  #pragma omp parallel for
101 
102  for ( int iter = 0; iter < iters; iter++ ) {
103  __threadInference();
104  __threadUpdate();
105  } // end of : parallel periodSize
106 
107  this->updateApproximationScheme( iters );
108 
113  remaining = this->remainingBurnIn();
114 
115  } while ( ( remaining > 0 ) && this->continueApproximationScheme( eps
116  ) );
117  }
118  */
119 
120  if (this->continueApproximationScheme(eps)) {
121  do {
122  eps = 0;
123 
124 // less overheads with high periodSize
125 #pragma omp parallel for
126 
127  for (int iter = 0; iter < int(psize); iter++) {
128  __threadInference();
129  __threadUpdate();
130  } // end of : parallel periodSize
131 
132  this->updateApproximationScheme(int(psize));
133 
134  this->_updateMarginals(); // fusion threads + update margi
135 
136  eps = this->_computeEpsilon(); // also updates oldMargi
137 
138  } while (this->continueApproximationScheme(eps));
139  }
140 
141  if (!this->_modal.empty()) { this->_expFusion(); }
142 
143  if (__infEs::_storeBNOpt) { this->_optFusion(); }
144 
145  if (__infEs::_storeVertices) { this->_verticesFusion(); }
146 
147  if (!this->_modal.empty()) {
148  this->_dynamicExpectations(); // work with any network
149  }
150 
152  }
void __mcThreadDataCopy()
Initialize threads data.
bool _storeBNOpt
Iterations limit stopping rule used by some algorithms such as CNMonteCarloSampling.
#define GUM_SHOWERROR(e)
Definition: exceptions.h:61
void _dynamicExpectations()
Rearrange lower and upper expectations to suit dynamic networks.
void _optFusion()
Fusion of threads optimal IBayesNet.
void _expFusion()
Fusion of threads expectations.
void _repetitiveInit()
Initialize _t0 and _t1 clusters.
Size periodSize() const
Returns the period size.
void __threadInference()
Thread performs an inference using BNInferenceEngine.
void __threadUpdate()
Update thread data after a IBayesNet inference.
bool continueApproximationScheme(double error)
Update the scheme w.r.t the new error.
void _updateMarginals()
Fusion of threads marginals.
void __mcInitApproximationScheme()
Initialize approximation Scheme.
bool _repetitiveInd
True if using repetitive independence ( dynamic network only ), False otherwise.
dynExpe _modal
Variables modalities used to compute expectations.
bool _storeVertices
True if credal sets vertices are stored, False otherwise.
const GUM_SCALAR _computeEpsilon()
Compute epsilon and update old marginals.
void updateApproximationScheme(unsigned int incr=1)
Update the scheme w.r.t the new error and increment steps.
+ Here is the call graph for this function:

◆ marginalMax() [1/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const NodeId  id) const
inherited

Get the upper marginals of a given node id.

Parameters
id: The node id whose upper marginals we want.
Returns
A constant reference to this node upper marginals.

Definition at line 447 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax.

447  {
448  try {
449  return _marginalMax[id];
450  } catch (NotFound& err) { throw(err); }
451  }
margi _marginalMax
Upper marginals.

◆ marginalMax() [2/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax ( const std::string &  varName) const
inherited

Get the upper marginals of a given variable name.

Parameters
varName: The variable name whose upper marginals we want.
Returns
A constant reference to this variable upper marginals.

Definition at line 430 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax.

431  {
432  try {
433  return _marginalMax[_credalNet->current_bn().idFromName(varName)];
434  } catch (NotFound& err) { throw(err); }
435  }
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
margi _marginalMax
Upper marginals.

◆ marginalMin() [1/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const NodeId  id) const
inherited

Get the lower marginals of a given node id.

Parameters
id: The node id whose lower marginals we want.
Returns
A constant reference to this node lower marginals.

Definition at line 439 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin.

439  {
440  try {
441  return _marginalMin[id];
442  } catch (NotFound& err) { throw(err); }
443  }
margi _marginalMin
Lower marginals.

◆ marginalMin() [2/2]

template<typename GUM_SCALAR >
const std::vector< GUM_SCALAR > & gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin ( const std::string &  varName) const
inherited

Get the lower marginals of a given variable name.

Parameters
varName: The variable name whose lower marginals we want.
Returns
A constant reference to this variable lower marginals.

Definition at line 422 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, and gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin.

423  {
424  try {
425  return _marginalMin[_credalNet->current_bn().idFromName(varName)];
426  } catch (NotFound& err) { throw(err); }
427  }
margi _marginalMin
Lower marginals.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.

◆ maxIter()

INLINE Size gum::ApproximationScheme::maxIter ( ) const
virtualinherited

Returns the criterion on number of iterations.

Returns
Returns the criterion on number of iterations.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 102 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_max_iter.

Referenced by gum::learning::genericBNLearner::maxIter().

102 { return _max_iter; }
Size _max_iter
The maximum iterations.
+ Here is the caller graph for this function:

◆ maxTime()

INLINE double gum::ApproximationScheme::maxTime ( ) const
virtualinherited

Returns the timeout (in seconds).

Returns
Returns the timeout (in seconds).

Implements gum::IApproximationSchemeConfiguration.

Definition at line 125 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_max_time.

Referenced by gum::learning::genericBNLearner::maxTime().

125 { return _max_time; }
double _max_time
The timeout.
+ Here is the caller graph for this function:

◆ messageApproximationScheme()

INLINE std::string gum::IApproximationSchemeConfiguration::messageApproximationScheme ( ) const
inherited

Returns the approximation scheme message.

Returns
Returns the approximation scheme message.

Definition at line 40 of file IApproximationSchemeConfiguration_inl.h.

References gum::IApproximationSchemeConfiguration::Continue, gum::IApproximationSchemeConfiguration::Epsilon, gum::IApproximationSchemeConfiguration::epsilon(), gum::IApproximationSchemeConfiguration::Limit, gum::IApproximationSchemeConfiguration::maxIter(), gum::IApproximationSchemeConfiguration::maxTime(), gum::IApproximationSchemeConfiguration::minEpsilonRate(), gum::IApproximationSchemeConfiguration::Rate, gum::IApproximationSchemeConfiguration::stateApproximationScheme(), gum::IApproximationSchemeConfiguration::Stopped, gum::IApproximationSchemeConfiguration::TimeLimit, and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::ApproximationScheme::_stopScheme(), gum::ApproximationScheme::continueApproximationScheme(), and gum::credal::InferenceEngine< GUM_SCALAR >::getApproximationSchemeMsg().

40  {
41  std::stringstream s;
42 
43  switch (stateApproximationScheme()) {
44  case ApproximationSchemeSTATE::Continue: s << "in progress"; break;
 45 
 46  case ApproximationSchemeSTATE::Epsilon:
 47  s << "stopped with epsilon=" << epsilon();
 48  break;
 49 
 50  case ApproximationSchemeSTATE::Rate:
 51  s << "stopped with rate=" << minEpsilonRate();
 52  break;
 53 
 54  case ApproximationSchemeSTATE::Limit:
 55  s << "stopped with max iteration=" << maxIter();
 56  break;
 57 
 58  case ApproximationSchemeSTATE::TimeLimit:
 59  s << "stopped with timeout=" << maxTime();
 60  break;
61 
62  case ApproximationSchemeSTATE::Stopped: s << "stopped on request"; break;
63 
64  case ApproximationSchemeSTATE::Undefined: s << "undefined state"; break;
65  };
66 
67  return s.str();
68  }
virtual double epsilon() const =0
Returns the value of epsilon.
virtual ApproximationSchemeSTATE stateApproximationScheme() const =0
Returns the approximation scheme state.
virtual double maxTime() const =0
Returns the timeout (in seconds).
virtual Size maxIter() const =0
Returns the criterion on number of iterations.
virtual double minEpsilonRate() const =0
Returns the value of the minimal epsilon rate.
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ minEpsilonRate()

INLINE double gum::ApproximationScheme::minEpsilonRate ( ) const
virtualinherited

Returns the value of the minimal epsilon rate.

Returns
Returns the value of the minimal epsilon rate.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 74 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_min_rate_eps.

Referenced by gum::learning::genericBNLearner::minEpsilonRate().

74  {
75  return _min_rate_eps;
76  }
double _min_rate_eps
Threshold for the epsilon rate.
+ Here is the caller graph for this function:

◆ nbrIterations()

INLINE Size gum::ApproximationScheme::nbrIterations ( ) const
virtualinherited

Returns the number of iterations.

Returns
Returns the number of iterations.
Exceptions
OperationNotAllowed: Raised if the scheme did not perform.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 163 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_step, GUM_ERROR, gum::ApproximationScheme::stateApproximationScheme(), and gum::IApproximationSchemeConfiguration::Undefined.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), and gum::learning::genericBNLearner::nbrIterations().

 163  {
 164  if (stateApproximationScheme() == ApproximationSchemeSTATE::Undefined) {
165  GUM_ERROR(OperationNotAllowed,
166  "state of the approximation scheme is undefined");
167  }
168 
169  return _current_step;
170  }
Size _current_step
The current step.
ApproximationSchemeSTATE stateApproximationScheme() const
Returns the approximation scheme state.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ periodSize()

INLINE Size gum::ApproximationScheme::periodSize ( ) const
virtualinherited

Returns the period size.

Returns
Returns the period size.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 149 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_period_size.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference(), and gum::learning::genericBNLearner::periodSize().

149 { return _period_size; }
Size _period_size
Checking criteria frequency.
+ Here is the caller graph for this function:

◆ remainingBurnIn()

INLINE Size gum::ApproximationScheme::remainingBurnIn ( )
inherited

Returns the remaining burn in.

Returns
Returns the remaining burn in.

Definition at line 210 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_burn_in, and gum::ApproximationScheme::_current_step.

210  {
211  if (_burn_in > _current_step) {
212  return _burn_in - _current_step;
213  } else {
214  return 0;
215  }
216  }
Size _burn_in
Number of iterations before checking stopping criteria.
Size _current_step
The current step.

◆ repetitiveInd()

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::repetitiveInd ( ) const
inherited

Get the current independence status.

Returns
True if repetitive, False otherwise.

Definition at line 120 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInd.

120  {
121  return _repetitiveInd;
122  }
bool _repetitiveInd
True if using repetitive independence ( dynamic network only ), False otherwise.

◆ saveExpectations()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveExpectations ( const std::string &  path) const
inherited

Saves expectations to file.

Parameters
path: The path to the file to be used.

Definition at line 554 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax, gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin, and GUM_ERROR.

555  {
556  if (_dynamicExpMin.empty()) //_modal.empty())
557  return;
558 
559  // else not here, to keep the const (natural with a saving process)
560  // else if(_dynamicExpMin.empty() || _dynamicExpMax.empty())
561  //_dynamicExpectations(); // works with or without a dynamic network
562 
563  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
564 
565  if (!m_stream.good()) {
566  GUM_ERROR(IOError,
567  "void InferenceEngine< GUM_SCALAR "
568  ">::saveExpectations(const std::string & path) : could "
569  "not open output file : "
570  << path);
571  }
572 
573  for (const auto& elt : _dynamicExpMin) {
574  m_stream << elt.first; // it->first;
575 
576  // iterates over a vector
577  for (const auto& elt2 : elt.second) {
578  m_stream << " " << elt2;
579  }
580 
581  m_stream << std::endl;
582  }
583 
584  for (const auto& elt : _dynamicExpMax) {
585  m_stream << elt.first;
586 
587  // iterates over a vector
588  for (const auto& elt2 : elt.second) {
589  m_stream << " " << elt2;
590  }
591 
592  m_stream << std::endl;
593  }
594 
595  m_stream.close();
596  }
dynExpe _dynamicExpMin
Lower dynamic expectations.
dynExpe _dynamicExpMax
Upper dynamic expectations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55

◆ saveMarginals()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals ( const std::string &  path) const
inherited

Saves marginals to file.

Parameters
path: The path to the file to be used.

Definition at line 528 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, and GUM_ERROR.

529  {
530  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
531 
532  if (!m_stream.good()) {
533  GUM_ERROR(IOError,
534  "void InferenceEngine< GUM_SCALAR >::saveMarginals(const "
535  "std::string & path) const : could not open output file "
536  ": "
537  << path);
538  }
539 
540  for (const auto& elt : _marginalMin) {
541  Size esize = Size(elt.second.size());
542 
543  for (Size mod = 0; mod < esize; mod++) {
544  m_stream << _credalNet->current_bn().variable(elt.first).name() << " "
545  << mod << " " << (elt.second)[mod] << " "
546  << _marginalMax[elt.first][mod] << std::endl;
547  }
548  }
549 
550  m_stream.close();
551  }
margi _marginalMin
Lower marginals.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
margi _marginalMax
Upper marginals.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55

◆ saveVertices()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices ( const std::string &  path) const
inherited

Saves vertices to file.

Parameters
path: The path to the file to be used.

Definition at line 628 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets, and GUM_ERROR.

628  {
629  std::ofstream m_stream(path.c_str(), std::ios::out | std::ios::trunc);
630 
631  if (!m_stream.good()) {
632  GUM_ERROR(IOError,
633  "void InferenceEngine< GUM_SCALAR >::saveVertices(const "
634  "std::string & path) : could not open outpul file : "
635  << path);
636  }
637 
638  for (const auto& elt : _marginalSets) {
639  m_stream << _credalNet->current_bn().variable(elt.first).name()
640  << std::endl;
641 
642  for (const auto& elt2 : elt.second) {
643  m_stream << "[";
644  bool first = true;
645 
646  for (const auto& elt3 : elt2) {
 647  if (!first) { m_stream << ","; }
 648 
 649  // cleared after the first element so later vertices get a separator
 650  first = false;
651 
652  m_stream << elt3;
653  }
654 
655  m_stream << "]\n";
656  }
657  }
658 
659  m_stream.close();
660  }
credalSet _marginalSets
Credal sets vertices, if enabled.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55

◆ setEpsilon()

INLINE void gum::ApproximationScheme::setEpsilon ( double  eps)
virtualinherited

Given that we approximate f(t), stopping criterion on |f(t+1)-f(t)|.

If the criterion was disabled it will be enabled.

Parameters
eps: The new epsilon value.
Exceptions
OutOfLowerBound: Raised if eps < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 43 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_eps, gum::ApproximationScheme::_eps, and GUM_ERROR.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcInitApproximationScheme(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::learning::GreedyHillClimbing::GreedyHillClimbing(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setEpsilon().

43  {
44  if (eps < 0.) { GUM_ERROR(OutOfLowerBound, "eps should be >=0"); }
45 
46  _eps = eps;
47  _enabled_eps = true;
48  }
bool _enabled_eps
If true, the threshold convergence is enabled.
double _eps
Threshold for convergence.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the caller graph for this function:

◆ setMaxIter()

INLINE void gum::ApproximationScheme::setMaxIter ( Size  max)
virtualinherited

Stopping criterion on number of iterations.

If the criterion was disabled it will be enabled.

Parameters
max: The maximum number of iterations.
Exceptions
OutOfLowerBound: Raised if max < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 95 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_iter, gum::ApproximationScheme::_max_iter, and GUM_ERROR.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setMaxIter().

95  {
96  if (max < 1) { GUM_ERROR(OutOfLowerBound, "max should be >=1"); }
97  _max_iter = max;
98  _enabled_max_iter = true;
99  }
bool _enabled_max_iter
If true, the maximum iterations stopping criterion is enabled.
Size _max_iter
The maximum iterations.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the caller graph for this function:

◆ setMaxTime()

INLINE void gum::ApproximationScheme::setMaxTime ( double  timeout)
virtualinherited

Stopping criterion on timeout.

If the criterion was disabled it will be enabled.

Parameters
timeout: The timeout value in seconds.
Exceptions
OutOfLowerBound: Raised if timeout <= 0.0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 118 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_max_time, gum::ApproximationScheme::_max_time, and GUM_ERROR.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setMaxTime().

118  {
119  if (timeout <= 0.) { GUM_ERROR(OutOfLowerBound, "timeout should be >0."); }
120  _max_time = timeout;
121  _enabled_max_time = true;
122  }
bool _enabled_max_time
If true, the timeout is enabled.
double _max_time
The timeout.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the caller graph for this function:

◆ setMinEpsilonRate()

INLINE void gum::ApproximationScheme::setMinEpsilonRate ( double  rate)
virtualinherited

Given that we approximate f(t), stopping criterion on d/dt(|f(t+1)-f(t)|).

If the criterion was disabled it will be enabled.

Parameters
rate: The minimal epsilon rate.
Exceptions
OutOfLowerBound: Raised if rate < 0.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 66 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_enabled_min_rate_eps, gum::ApproximationScheme::_min_rate_eps, and GUM_ERROR.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::GibbsSampling< GUM_SCALAR >::GibbsSampling(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setMinEpsilonRate().

66  {
67  if (rate < 0) { GUM_ERROR(OutOfLowerBound, "rate should be >=0"); }
68 
69  _min_rate_eps = rate;
70  _enabled_min_rate_eps = true;
71  }
bool _enabled_min_rate_eps
If true, the minimal threshold for epsilon rate is enabled.
double _min_rate_eps
Threshold for the epsilon rate.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the caller graph for this function:

◆ setPeriodSize()

INLINE void gum::ApproximationScheme::setPeriodSize ( Size  p)
virtualinherited

Sets the number of samples between two checks of the stopping criteria.

Parameters
p: The new period value.
Exceptions
OutOfLowerBound: Raised if p < 1.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 143 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_period_size, and GUM_ERROR.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::CNMonteCarloSampling(), gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setPeriodSize().

143  {
144  if (p < 1) { GUM_ERROR(OutOfLowerBound, "p should be >=1"); }
145 
146  _period_size = p;
147  }
Size _period_size
Checking criteria frequency.
#define GUM_ERROR(type, msg)
Definition: exceptions.h:55
+ Here is the caller graph for this function:

◆ setRepetitiveInd()

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::setRepetitiveInd ( const bool  repetitive)
inherited
Parameters
repetitive: True if repetitive independence is to be used, false otherwise. Only useful with dynamic networks.

Definition at line 111 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInd, and gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit().

111  {
112  bool oldValue = _repetitiveInd;
113  _repetitiveInd = repetitive;
114 
115  // do not compute clusters more than once
116  if (_repetitiveInd && !oldValue) _repetitiveInit();
117  }
void _repetitiveInit()
Initialize _t0 and _t1 clusters.
bool _repetitiveInd
True if using repetitive independence ( dynamic network only ), False otherwise.
+ Here is the call graph for this function:

◆ setVerbosity()

INLINE void gum::ApproximationScheme::setVerbosity ( bool  v)
virtualinherited

Set the verbosity on (true) or off (false).

Parameters
v: If true, then verbosity is turned on.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 152 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_verbosity.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::GibbsBNdistance(), gum::SamplingInference< GUM_SCALAR >::SamplingInference(), and gum::learning::genericBNLearner::setVerbosity().

152 { _verbosity = v; }
bool _verbosity
If true, verbosity is enabled.
+ Here is the caller graph for this function:

◆ startOfPeriod()

INLINE bool gum::ApproximationScheme::startOfPeriod ( )
inherited

Returns true if we are at the beginning of a period (compute error is mandatory).

Returns
Returns true if we are at the beginning of a period (compute error is mandatory).

Definition at line 197 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_burn_in, gum::ApproximationScheme::_current_step, and gum::ApproximationScheme::_period_size.

Referenced by gum::ApproximationScheme::continueApproximationScheme().

197  {
198  if (_current_step < _burn_in) { return false; }
199 
200  if (_period_size == 1) { return true; }
201 
202  return ((_current_step - _burn_in) % _period_size == 0);
203  }
Size _burn_in
Number of iterations before checking stopping criteria.
Size _current_step
The current step.
Size _period_size
Checking criteria frequency.
+ Here is the caller graph for this function:

◆ stateApproximationScheme()

INLINE IApproximationSchemeConfiguration::ApproximationSchemeSTATE gum::ApproximationScheme::stateApproximationScheme ( ) const
virtualinherited

Returns the approximation scheme state.

Returns
Returns the approximation scheme state.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 158 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_state.

Referenced by gum::ApproximationScheme::continueApproximationScheme(), gum::ApproximationScheme::history(), gum::ApproximationScheme::nbrIterations(), and gum::learning::genericBNLearner::stateApproximationScheme().

158  {
159  return _current_state;
160  }
ApproximationSchemeSTATE _current_state
The current state.
+ Here is the caller graph for this function:

◆ stopApproximationScheme()

INLINE void gum::ApproximationScheme::stopApproximationScheme ( )
inherited

Stop the approximation scheme.

Definition at line 219 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_state, gum::ApproximationScheme::_stopScheme(), gum::IApproximationSchemeConfiguration::Continue, and gum::IApproximationSchemeConfiguration::Stopped.

Referenced by gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), and gum::learning::LocalSearchWithTabuList::learnStructure().

+ Here is the call graph for this function:
+ Here is the caller graph for this function:

◆ storeBNOpt() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( const bool  value)
inherited
Parameters
value: True if optimal Bayesian networks are to be stored for each variable and each modality.

Definition at line 99 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt.

99  {
100  _storeBNOpt = value;
101  }
bool _storeBNOpt
Iterations limit stopping rule used by some algorithms such as CNMonteCarloSampling.

◆ storeBNOpt() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeBNOpt ( ) const
inherited
Returns
True if optimal Bayes nets are stored for each variable and each modality, False otherwise.

Definition at line 135 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt.

135  {
136  return _storeBNOpt;
137  }
bool _storeBNOpt
Iterations limit stopping rule used by some algorithms such as CNMonteCarloSampling.

◆ storeVertices() [1/2]

template<typename GUM_SCALAR >
void gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( const bool  value)
inherited
Parameters
value: True if vertices are to be stored, false otherwise.

Definition at line 104 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets(), and gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices.

104  {
105  _storeVertices = value;
106 
107  if (value) _initMarginalSets();
108  }
void _initMarginalSets()
Initialize credal set vertices with empty sets.
bool _storeVertices
True if credal sets vertices are stored, False otherwise.
+ Here is the call graph for this function:

◆ storeVertices() [2/2]

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::storeVertices ( ) const
inherited

Get the vertices storage status.

Returns
True if vertices are stored, False otherwise.

Definition at line 130 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_storeVertices.

130  {
131  return _storeVertices;
132  }
bool _storeVertices
True if credal sets vertices are stored, False otherwise.

◆ toString()

template<typename GUM_SCALAR >
std::string gum::credal::InferenceEngine< GUM_SCALAR >::toString ( ) const
inherited

Prints all node marginals to standard output.

Definition at line 599 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMax, gum::credal::InferenceEngine< GUM_SCALAR >::_marginalMin, gum::credal::InferenceEngine< GUM_SCALAR >::_query, gum::HashTable< Key, Val, Alloc >::empty(), and gum::HashTable< Key, Val, Alloc >::exists().

599  {
600  std::stringstream output;
601  output << std::endl;
602 
603  // use cbegin() when available
604  for (const auto& elt : _marginalMin) {
605  Size esize = Size(elt.second.size());
606 
607  for (Size mod = 0; mod < esize; mod++) {
608  output << "P(" << _credalNet->current_bn().variable(elt.first).name()
609  << "=" << mod << "|e) = [ ";
610  output << _marginalMin[elt.first][mod] << ", "
611  << _marginalMax[elt.first][mod] << " ]";
612 
613  if (!_query.empty())
614  if (_query.exists(elt.first) && _query[elt.first][mod])
615  output << " QUERY";
616 
617  output << std::endl;
618  }
619 
620  output << std::endl;
621  }
622 
623  return output.str();
624  }
margi _marginalMin
Lower marginals.
bool exists(const Key &key) const
Checks whether there exists an element with a given key in the hashtable.
const CredalNet< GUM_SCALAR > * _credalNet
A pointer to the Credal Net used.
query _query
Holds the query nodes states.
std::size_t Size
In aGrUM, hashed values are unsigned long int.
Definition: types.h:48
bool empty() const noexcept
Indicates whether the hash table is empty.
margi _marginalMax
Upper marginals.
+ Here is the call graph for this function:

◆ updateApproximationScheme()

INLINE void gum::ApproximationScheme::updateApproximationScheme ( unsigned int  incr = 1)
inherited

Update the scheme w.r.t the new error and increment steps.

Parameters
incr: The increment added to the current step count.

Definition at line 206 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_current_step.

Referenced by gum::GibbsBNdistance< GUM_SCALAR >::_computeKL(), gum::SamplingInference< GUM_SCALAR >::_loopApproxInference(), gum::learning::DAG2BNLearner< ALLOC >::createBN(), gum::learning::GreedyHillClimbing::learnStructure(), gum::learning::LocalSearchWithTabuList::learnStructure(), and gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::makeInference().

206  {
207  _current_step += incr;
208  }
Size _current_step
The current step.
+ Here is the caller graph for this function:

◆ verbosity()

INLINE bool gum::ApproximationScheme::verbosity ( ) const
virtualinherited

Returns true if verbosity is enabled.

Returns
Returns true if verbosity is enabled.

Implements gum::IApproximationSchemeConfiguration.

Definition at line 154 of file approximationScheme_inl.h.

References gum::ApproximationScheme::_verbosity.

Referenced by gum::ApproximationScheme::continueApproximationScheme(), gum::ApproximationScheme::history(), and gum::learning::genericBNLearner::verbosity().

154 { return _verbosity; }
bool _verbosity
If true, verbosity is enabled.
+ Here is the caller graph for this function:

◆ vertices()

template<typename GUM_SCALAR >
const std::vector< std::vector< GUM_SCALAR > > & gum::credal::InferenceEngine< GUM_SCALAR >::vertices ( const NodeId  id) const
inherited

Get the vertices of a given node id.

Parameters
id: The node id whose vertices we want.
Returns
A constant reference to this node's vertices.

Definition at line 523 of file inferenceEngine_tpl.h.

References gum::credal::InferenceEngine< GUM_SCALAR >::_marginalSets.

523  {
524  return _marginalSets[id];
525  }
credalSet _marginalSets
Credal sets vertices, if enabled.

Member Data Documentation

◆ _burn_in

◆ _credalNet

template<typename GUM_SCALAR >
const CredalNet< GUM_SCALAR >* gum::credal::InferenceEngine< GUM_SCALAR >::_credalNet
protectedinherited

A pointer to the Credal Net used.

Definition at line 74 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy(), gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__verticesSampling(), gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_initExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginals(), gum::credal::InferenceEngine< GUM_SCALAR >::_initMarginalSets(), gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit(), gum::credal::InferenceEngine< GUM_SCALAR >::_updateExpectations(), gum::credal::InferenceEngine< GUM_SCALAR >::credalNet(), gum::credal::InferenceEngine< GUM_SCALAR >::expectationMax(), gum::credal::InferenceEngine< GUM_SCALAR >::expectationMin(), gum::credal::InferenceEngine< GUM_SCALAR >::InferenceEngine(), gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidence(), gum::credal::InferenceEngine< GUM_SCALAR >::insertEvidenceFile(), gum::credal::InferenceEngine< GUM_SCALAR >::insertModals(), gum::credal::InferenceEngine< GUM_SCALAR >::insertQuery(), gum::credal::InferenceEngine< GUM_SCALAR >::insertQueryFile(), gum::credal::InferenceEngine< GUM_SCALAR >::marginalMax(), gum::credal::InferenceEngine< GUM_SCALAR >::marginalMin(), gum::credal::InferenceEngine< GUM_SCALAR >::saveMarginals(), gum::credal::InferenceEngine< GUM_SCALAR >::saveVertices(), and gum::credal::InferenceEngine< GUM_SCALAR >::toString().

◆ _current_epsilon

double gum::ApproximationScheme::_current_epsilon
protected inherited

◆ _current_rate

double gum::ApproximationScheme::_current_rate
protected inherited

◆ _current_state

◆ _current_step

◆ _dbnOpt

template<typename GUM_SCALAR >
VarMod2BNsMap< GUM_SCALAR > gum::credal::InferenceEngine< GUM_SCALAR >::_dbnOpt
protected inherited

◆ _dynamicExpMax

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMax
protected inherited

◆ _dynamicExpMin

template<typename GUM_SCALAR >
dynExpe gum::credal::InferenceEngine< GUM_SCALAR >::_dynamicExpMin
protected inherited

◆ _enabled_eps

◆ _enabled_max_iter

bool gum::ApproximationScheme::_enabled_max_iter
protected inherited

◆ _enabled_max_time

◆ _enabled_min_rate_eps

bool gum::ApproximationScheme::_enabled_min_rate_eps
protected inherited

◆ _eps

double gum::ApproximationScheme::_eps
protected inherited

◆ _evidence

◆ _expectationMax

◆ _expectationMin

◆ _history

std::vector< double > gum::ApproximationScheme::_history
protected inherited

◆ _l_clusters

◆ _l_evidence

template<typename GUM_SCALAR , class BNInferenceEngine >
__margis gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_l_evidence
protected inherited

◆ _l_expectationMax

◆ _l_expectationMin

◆ _l_inferenceEngine

◆ _l_marginalMax

◆ _l_marginalMin

◆ _l_marginalSets

◆ _l_modal

◆ _l_optimalNet

◆ _last_epsilon

double gum::ApproximationScheme::_last_epsilon
protected inherited

Last epsilon value.

Definition at line 372 of file approximationScheme.h.

Referenced by gum::ApproximationScheme::continueApproximationScheme().

◆ _marginalMax

◆ _marginalMin

◆ _marginalSets

◆ _max_iter

Size gum::ApproximationScheme::_max_iter
protected inherited

◆ _max_time

double gum::ApproximationScheme::_max_time
protected inherited

◆ _min_rate_eps

double gum::ApproximationScheme::_min_rate_eps
protected inherited

◆ _modal

◆ _oldMarginalMax

◆ _oldMarginalMin

◆ _period_size

Size gum::ApproximationScheme::_period_size
protected inherited

◆ _query

◆ _repetitiveInd

template<typename GUM_SCALAR , class BNInferenceEngine = LazyPropagation< GUM_SCALAR >>
bool gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::_repetitiveInd
protected

Definition at line 129 of file CNMonteCarloSampling.h.

◆ _storeBNOpt

template<typename GUM_SCALAR >
bool gum::credal::InferenceEngine< GUM_SCALAR >::_storeBNOpt
protected inherited

◆ _storeVertices

◆ _t0

template<typename GUM_SCALAR >
cluster gum::credal::InferenceEngine< GUM_SCALAR >::_t0
protected inherited

Clusters of nodes used with dynamic networks.

Any node used as a key in _t0 is present at \( t=0 \), and every node in that key's node set shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 117 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy(), gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit(), and gum::credal::InferenceEngine< GUM_SCALAR >::getT0Cluster().

◆ _t1

template<typename GUM_SCALAR >
cluster gum::credal::InferenceEngine< GUM_SCALAR >::_t1
protected inherited

Clusters of nodes used with dynamic networks.

Any node used as a key in _t1 is present at \( t=1 \), and every node in that key's node set shares the same CPT as the key. Used for sampling with repetitive independence.

Definition at line 124 of file inferenceEngine.h.

Referenced by gum::credal::CNMonteCarloSampling< GUM_SCALAR, BNInferenceEngine >::__mcThreadDataCopy(), gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit(), and gum::credal::InferenceEngine< GUM_SCALAR >::getT1Cluster().

◆ _timer

◆ _timeSteps

template<typename GUM_SCALAR >
int gum::credal::InferenceEngine< GUM_SCALAR >::_timeSteps
protected inherited

The number of time steps of this network (only useful for dynamic networks).

Deprecated:

Definition at line 154 of file inferenceEngine.h.

Referenced by gum::credal::InferenceEngine< GUM_SCALAR >::_repetitiveInit().

◆ _verbosity

bool gum::ApproximationScheme::_verbosity
protected inherited

If true, verbosity is enabled.

Definition at line 420 of file approximationScheme.h.

Referenced by gum::ApproximationScheme::setVerbosity(), and gum::ApproximationScheme::verbosity().

◆ _workingSet

◆ _workingSetE

template<typename GUM_SCALAR , class BNInferenceEngine >
std::vector< List< const Potential< GUM_SCALAR >* >* > gum::credal::MultipleInferenceEngine< GUM_SCALAR, BNInferenceEngine >::_workingSetE
protected inherited